Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

> This seems hyperbolic to me. Sometimes companies just want to make money.

It's not hyperbolic at all. The entire moat is brand lock-in. OpenAI owns the public impression of what AI is- for now- with a strong second place going to Claude for coders in specific. But that doesn't change that ChatGPT can generate code too, and Claude can also write poems. If you can't lock users into good experiences with your LLM product, you have no future in the market, so data retention and flattery are the names of the game.

All the transformer-based LLMs out there can all do what all the other ones can do. Some are gated off about it, but it's simulated at best. Sometimes even circumvent-able with raw input. Twitter bots regularly get tricked into answering silly prompts by people simply requesting they forget current instructions.

And, between DeepSeek's incredibly resource-light implementations of solid if limited models, which do largely the same sort of work without massive datacenters full of GPUs, plus Apple Intelligence rolling out experiences that largely run on ML-specific hardware in their local devices which immediately, full stop, wins the privacy argument, OpenAI and co are either getting nervous, or they're in denial. The capex for this stuff, the valuations, and the actual user experiences are simply not cohering.

If this was indeed the revolution the valley said it was, and the people were lining up to pay prices that reflected the cost of running this tech, then there wouldn't be a debate at all. But that's simply not true: most LLM products are heavily subsidized, a lot of the big players in the space are downsizing what they had planned to build out to power this "future," and a whole lot of people cite their experiences as "fine." That's not a revolution.



Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: