ASR abstracts away all syntax and all details of the target machine, with no leaks. Contrast this with the schoolbook approach of decorating ASTs with semantic information, which often reflects details of a target machine.
The implication is that ASR is a full programming language in its own right, though with no quality-of-life features: everything is explicit. It's also currently restricted to the operations featured by LFortran and LPython, so it's heavily array-oriented for now; ASR grows as LFortran and LPython grow. I've prototyped, in Clojure, an independent type-checker for ASR (https://github.com/rebcabin/masr), and an interpreter (for "abstract execution") should not be difficult.
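To give a feel for what "everything is explicit" means (this is my own illustrative sketch, not the actual ASR schema from libasr; the node names and type strings are made up): every node in the tree carries its type, so a checker can verify the tree with no syntax and no inference in sight.

```python
# Hypothetical ASR-like nodes -- shapes and names are illustrative only.
from dataclasses import dataclass

@dataclass
class IntegerConstant:
    value: int
    type: str  # e.g. "i32" -- the type is stored explicitly on the node

@dataclass
class BinOp:
    op: str      # "+", "*", ...
    left: object
    right: object
    type: str    # result type, also explicit, never inferred later

def check(node) -> str:
    """Type-check a node tree independently of any surface syntax."""
    if isinstance(node, IntegerConstant):
        return node.type
    if isinstance(node, BinOp):
        lt, rt = check(node.left), check(node.right)
        assert lt == rt == node.type, f"type mismatch: {lt} {node.op} {rt}"
        return node.type
    raise TypeError(f"unknown node {node!r}")

expr = BinOp("+", IntegerConstant(1, "i32"), IntegerConstant(2, "i32"), "i32")
print(check(expr))  # i32
```

An "abstract execution" interpreter would walk the same tree the same way, returning values instead of types.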
Sometimes, standard semantic approaches assume a target stack machine, or an SSA architecture, or something else with insufficient generality to cover all the cases we want (e.g., non-von-Neumann architectures like the APU from GSIT). ASR assumes only the lambda/pi/rho calculus, depending on whether you want concurrency semantics (that's TBD, frankly; today it's just lambda).
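To show why assuming only the lambda calculus is so weak an assumption, here's a minimal evaluator (my own sketch, nothing to do with ASR internals): the semantics is given entirely in terms of functions and application, with no stack machine, registers, or SSA anywhere.

```python
# Minimal environment-based lambda-calculus evaluator.
# Terms are tuples: ("const", v), ("var", name),
# ("lam", param, body), ("app", fn, arg).

def evaluate(term, env):
    kind = term[0]
    if kind == "const":
        return term[1]
    if kind == "var":
        return env[term[1]]
    if kind == "lam":
        # A lambda denotes a host-language function -- no machine model.
        return lambda arg: evaluate(term[2], {**env, term[1]: arg})
    if kind == "app":
        return evaluate(term[1], env)(evaluate(term[2], env))
    raise ValueError(f"unknown term {term!r}")

# (\x. x) applied to 42
term = ("app", ("lam", "x", ("var", "x")), ("const", 42))
print(evaluate(term, {}))  # 42
```

Any target, von Neumann or not, only has to realize function abstraction and application to implement such a semantics.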
Right. Decompilation with ASR should be relatively easy and more faithful than average decompilation (though, as another commenter mentioned, the very-long-term value of decompilation in general is debatable in the face of rising AI like Copilot).
By being a superset of Fortran and a subset of Python. It turns out the features map onto each other almost 1:1, and the differences can be handled by the respective frontends, so the abstracted ASR maps perfectly.
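A rough sketch of the kind of 1:1 mapping I mean (the annotation style below is plain Python and only illustrative; LPython's actual type annotations may differ): a fully annotated Python function sits in the Fortran-expressible subset, with the Fortran counterpart shown in comments.

```python
# A typed-Python function and its (hypothetical) Fortran twin --
# both lower to the same abstract "add two integers" operation.

def add(a: int, b: int) -> int:
    # integer function add(a, b)
    #   integer, intent(in) :: a, b
    #   add = a + b
    # end function add
    return a + b

print(add(2, 3))  # 5
```

The frontends absorb the surface differences (annotations vs. declarations, indentation vs. end-statements); what survives into the shared representation is the same in both cases.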
What's novel about it?