
At the risk of being the idiot: Being very smart doesn't prevent you from saying and doing very stupid things.

My problem is that Altman is a very smart idiot. He has already admitted that OpenAI has absolutely no idea how to make money. Apparently they've now given up on the idea of asking ChatGPT how to make money. Their "AI" isn't going to develop fast enough, if ever. So now they're just buying up stuff left and right? It might be part of some coherent plan, but if it is, no one else is seeing it.

Altman is smart enough to see that things are not working out and that he's going to run out of money and investor patience. He might also be smart enough to see that if OpenAI fails, so will 80-90% of his competitors, though I'm not sure he cares. He needs OpenAI to survive, but he's not that kind of smart, and honestly I'm not sure anyone is.



I feel like we must live in very different worlds! The major AI companies have in excess of 100mm customers each. There’s so much demand for compute that wise investors are literally buying up nuclear plant building companies.

LLMs have blown through every major test people have put in front of them, invariably beating estimates of how long it would take. Pull up Dwarkesh's podcast about ARC, wherein the creator of ARC proposed it could likely never be beaten by current architectures, about three months before o3 provably became superhuman on ARC, spurring the creation of a new "better" (and it is better!) test.

To my outside eyes the OpenAI plan is simple: get too big to fail and be ready to navigate changing investor appetite. Plus maintain technical leadership if possible. And build an enduring consumer brand. Simple but hard. You will note that (as far as I know) they have invested in zero direct physical infrastructure, preferring compute deals with companies like Microsoft and Coreweave.

To my eyes their risk point would be: massive loss in quality/cost to a competitor (Gemini 2.5 pro underscores that Google is a real contender here, and has like six generations of custom chips that make their economics different), or somehow investors remain bullish on AI but bearish on OpenAI to the extent they can finance a legitimate competitor.

If investors lose interest generally, we will enter a new era of higher-cost inference and comparatively less demand. This is the intent behind doing compute contracts rather than owning data centers: a contract likely shifts most of this risk right out onto the data center providers, and OpenAI can just pay for less compute time. I don't think this is a 'death' scenario for them, because a general loss of interest means all AI companies will stop being able to give away free inference. OAI might contract in this world (probably would). They might slow down on new model training (probably would). But so would everyone else.

Another way to say it - they’re spending single digit billions of dollars on training and research right now. Think of that as creating a strategic asset, and ALSO customer acquisition cost (e.g. image creation this year — new, better models = more paying customers).

Against a 200mm customer base, would you spend $20-50 to acquire a customer that pays $20/month? Their CAC is low right now. Really low!
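To make the payback math concrete, here's a trivial back-of-envelope sketch using the figures from this comment. All numbers are the commenter's hypotheticals ($20-50 CAC against a $20/month subscription), not OpenAI disclosures.

```python
# Back-of-envelope CAC payback using the comment's hypothetical numbers.
# These are illustrative figures, not actual OpenAI financials.

def payback_months(cac_dollars: float, monthly_revenue: float) -> float:
    """Months of subscription revenue needed to recoup acquisition cost."""
    return cac_dollars / monthly_revenue

subscription = 20.0  # $/month, the quoted consumer price point

for cac in (20.0, 50.0):
    months = payback_months(cac, subscription)
    print(f"CAC ${cac:.0f}: payback in {months:.1f} months")
```

Even at the high end of the quoted range, a paying subscriber covers their own acquisition cost in a couple of months, which is why the comment calls the CAC "really low."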

This is why I’d propose the major risk is that they get singled out of the herd as ‘non-investable’ vis-a-vis other AI companies. To my eyes they don’t look to be at risk of this right now; if they somehow got there, this would be a real problem - it would lead to the scenario I think you’re imagining — they’d have no money to give away inference / train models, but competitors would.

So, you have to ask, are they sufficiently large, popular, technology leaders, embedded as a strategic US asset in the military industrial complex to avoid that fate? My outside assessment is: definitely.




