I'm a very experienced developer with a broad range of both technical and domain knowledge. I've only tried a handful of AI coding agents/models... I found most of them ranging from somewhat annoying to really annoying. Claude+Opus (4.5 when I started) is the first one I've used where I found it more useful than annoying.
I think GitHub Copilot is the most annoying of the ones I've tried... it's great for finishing off a half-done task where the structure is already laid out, as long as you keep blinders on it so it stays focused. OpenAI's and Google's options seem to get things mostly right, but in my experience they do some really goofy wrong things.
They all seem to have trouble using current, state-of-the-art libraries by default, even when you explicitly request them.
I've only used the default model selection, whatever that is in VS Code. I even paid for a year at one point when I first used it for some SQL schema generation, and it was pretty useful, kind of a super auto-complete.
If the default option isn't at least arguably the best option, I can't really speak to the others. I would suggest gathering metrics on how each model performs against given sets of technologies, and then having the tool dynamically choose the best model by default based on the project's stack, e.g. C#+MS-SQL vs. Node+Postgres vs. Python+Matlab+DuckDB.