I think Python and R generally offer a superior developer experience when you have to do end-to-end ML work: acquiring and munging data, plotting results, and so on. But even then, the core algorithms are typically implemented in libraries built from native code (C/C++/Fortran), just wrapped in friendly bindings.
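A minimal illustration of this, using NumPy as a representative example (any BLAS-backed library would do):

```python
import numpy as np

# NumPy's linear algebra routines are thin Python wrappers over
# native BLAS/LAPACK libraries (e.g. OpenBLAS or MKL, compiled from
# C/Fortran); the Python layer mostly just marshals arguments.
a = np.random.rand(500, 500)
b = np.random.rand(500, 500)

# This dispatches to a native BLAS matrix-multiply routine,
# not an interpreted Python loop.
c = a @ b

# Inspect which native libraries this particular NumPy build
# links against.
np.show_config()
```

The friendly binding is what you interact with; the heavy lifting happens in compiled code underneath.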
For LLMs, unless you're doing extensive work refactoring the inputs, there are fewer productivity gains to be had around the edges; the main gains come from speeding up training, evaluation, and inference, i.e. pure performance.