OpenAI Is a Bad Business (wheresyoured.at)
92 points by KraftyOne on Oct 15, 2024 | 147 comments


They are working on grabbing all the market share they can get, at all costs - it's not that different from Amazon and Google from 20 years ago. We'll see how it works out for them: 20 years ago there were a lot of losers and very few big winners. Most likely history will repeat itself, with or without OpenAI.

But now even the regulators are working for them - the more regulations there are, the harder it will be for latecomers to join.


It really is not the same. Amazon was not profitable because it was building out logistics, and then AWS data centers. There were defensible moats around their business that their growth facilitated. Google built out data centers and fiber, again tangible assets they had that competitors did not.

OpenAI's spending is mostly buying compute from other people. In other words, OpenAI's growth is paying for Microsoft's data centers. The only real asset OpenAI is building is their models. While they may be the best models available today, it is unclear whether that gives them any durable advantage, since everyone in the industry is advancing very quickly, and it is unclear whether they can effectively monetize the models without the infrastructure to run them.


OAI's moat is the brand and getting people habituated to using it. The aim is to become synonymous with AI, the same way Google is synonymous with online search (though in recent times, I am horrified to see that my non-techy friends now search on Facebook first, and only go to Google if they're not satisfied).

Also, there is immense covert lobbying everywhere. If you haven't seen it, look at the prominent news media: AI is now akin to "productivity gains" and "innovation", with an "adopt AI or we fall behind" attitude.

Look at all the hammers falling on different industries over energy waste and climate change, while AI's massive energy usage is presented as "innovation and the next generation of work and productivity gains to be unlocked".

The target for OAI(and AI wave in general) is normal people, not tech people. And when people say AI, you conveniently see OAI emerge from news, social media, your colleagues, your boss, your friends.


> The only real asset OpenAI is building is their models

Which Zuckerberg is doing his best to turn into commodities.


> Which Zuckerberg is doing his best to turn into commodities

By giving them out for free??


Yes, exactly. In the same way that IBM "gave away" investment in Linux to grow their IT Consulting Business revenues.


"Commoditize your complement."


Amazon was also able to grow because they weren't paying sales tax for, what, 10+ years of their existence?


Yes.

These sorts of incentives are what make unicorns possible.


> it's not that different from Amazon and Google from 20 years ago

I mean it is a little.

Amazon didn't extend its losses while growing its business. Yes, it made losses (quite large ones for the time), but the underlying principle was pretty solid. They had a market that was clearly going to be profitable, because the overheads were much lower.

But OpenAI doesn't have that market. They offer something for free that is comparable with the competition, and a paid option that isn't much better than the free version, and isn't much different in capability from what competitors offer.

Moreover, in order to remain competitive they need to spend _billions_.

Even if we ignore Meta kneecapping them, Anthropic et al. are nipping at their heels.

The market isn't actually that solid, and there isn't anywhere near enough cash to maintain the research spend.


Love the bizarre techno-libertarian view at the end there: that any form of regulation is the result of regulatory capture to kill competition, and not the result of some 20 years of bad actors running rampant in the industry.


Why can't it be both?

Market ideologues have both fought regulation and allowed the cozy situations encompassed by regulatory capture (permitting the "revolving door" between industry and government on the basis of "who else would know how to regulate? lol").


sama is basically trying to say that OpenAI is the only one who can be trusted to build LLMs. How is that not regulatory capture?


It's only regulatory capture if the legislative branch actually passes the laws Sam wants. As far as I know, none have. Public officials are also becoming increasingly wary of tech, with laws being discussed in places like California that OpenAI has publicly opposed.


So making a statement is regulatory capture now?


It's a statement that expresses the desire to achieve regulatory capture.


Which isn't up to OpenAI, it's up to the regulators and, to a lesser extent, the voters.


Yeah but the crucial piece of context is who that statement is made to and how it is received. We typically call this process "lobbying."

And what it usually looks like in practice is that regulators will extend an open invite to "industry leaders" to chime in on the proposed regulation. During that process, the interest groups advocating on behalf of the "industry leaders" will utter statements that aim to shape the regulation in a way that is beneficial to the corporation being lobbied for.

The end goal is typically to land on a new bill that will

a) give the regulators new fodder for their legislative resumes, so they can show their voting constituency that they accomplished something, and

b) is not too expensive for the "industry leaders" to comply with. During the regulatory process this gets shrouded in language along the lines of making sure the regulation won't cost jobs, etc. But, if all goes according to plan, the regulation will

c) be insanely expensive for new startups to comply with, thus ensuring that any negative consequences caused by the new bill will be "invisible", because they will largely apply to hypotheticals (new startups wanting to enter the market) and small players that aren't on anyone's radar.


I think I generally agree with this article, but I have some nitpicks:

"OpenAI (...) has yet to create a truly meaningful product"

Almost everybody I know uses ChatGPT at least semi-regularly for all kinds of things. I'm not sure if it's possible to look at this objectively and claim that ChatGPT isn't "a truly meaningful product."

"OpenAI loses money every single time that somebody uses their product"

I'm not sure if this is true. My understanding is that they (on average, including their free users) make money on usage, but lose money on training. But I could be wrong.

I also think OpenAI isn't yet serious about monetizing its products. You can just go to ChatGPT and use it for free. They have a ton of data about their users that their users give them freely. The free ChatGPT tier is a prime candidate for advertising.


> My understanding is that they (on average, including their free users) make money on usage, but lose money on training.

Doesn't OpenAI need to continue training just to remain relevant? Not even just because there is a race between multiple AI companies, but because new information is regularly published. And if that's the case, it's just part of the cost, right?


There are a few successful game companies that have never turned a profit. Each game brings in more than it cost to make but the development of the following game dwarfs that.

I have never been quite sure if it is good practice. It does make you vulnerable to a flop.

OpenAI seems to be in a similar position.


As I understand it, OpenAI loses billions of dollars every year, and it receives preferential treatment for cloud credits. Does any gaming company make similar total losses year after year?


>As I understand it, OpenAI loses billions of dollars every year, and it receives preferential treatment for cloud credits.

The absolute values and the particular deals are irrelevant to the principle. The principle is that revenues and expenditures go up in a way that never results in a profit, but if you broke each generation of development out into a separate company, it would be a series of profitable investments, each requiring a larger investment for the next company. If serving ChatGPT stays profitable enough to cover the training of the model they are currently serving, then they are financially secure. It only becomes a problem if they cannot recover the cost of training each model by serving it. We don't yet know if that will happen.

There is insufficient data in this area. Without knowing the capital expenditure on hardware, the costs of running that hardware, and the breakdown between how much of it is running inference and how much is training, we just don't have a reliable way to calculate potential profitability. It's been a long time since GPT-4, and all releases since then have been improvements on that generation; some of those improvements have significantly reduced the cost of inference. There is evidence that they are already spending on the next two generations. We don't yet know how capable the next-generation model will be, and in the absence of that data point, almost all of the speculation is relatively meaningless.
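
To make that break-even condition concrete, here's a toy sketch; every number in it is a made-up placeholder for illustration, not anything from OpenAI's actual financials:

    # Toy break-even check for one model generation. Every figure here is a
    # made-up placeholder, not a real number from OpenAI's financials.
    training_cost = 3.0e9              # hypothetical: dollars to train the model
    monthly_inference_margin = 2.0e8   # hypothetical: revenue minus serving cost, per month
    serving_lifetime_months = 18       # hypothetical: months before the model is superseded

    lifetime_margin = monthly_inference_margin * serving_lifetime_months
    recoups = lifetime_margin >= training_cost
    print(f"lifetime inference margin: ${lifetime_margin / 1e9:.1f}B")
    print("covers training cost" if recoups else "never recoups training cost")

The whole argument reduces to whether that inequality holds per generation; without the real inputs, both bulls and bears are just guessing at the variables.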

>but I don't see any indicator that OpenAI will not go the route of enshittification if it gets the opportunity

There is a very significant indicator. We live in a world where it is commonplace to offer free services paid for by advertising. That is the model where the user is not the customer but the product, and it is the most common path to enshittification, because pleasing your actual customer (the advertiser) takes priority. OpenAI has not yet taken that path (and they have had the opportunity). Both the API and ChatGPT Plus work on the old-fashioned model of "give us some money and we'll give you a service". That model disincentivises screwing over your customers.

It may not stay this way, but they have certainly had opportunities for enshittification that they have chosen not to embrace.


> Almost everybody I know uses ChatGPT at least semi-regularly for all kinds of things.

I simply do not believe you. Most people I speak to attempt to use it periodically, find that for any promising use case it fails completely and then give up for another few weeks.


"I simply do not believe you."

Well, I can't exactly prove it to you. The thing I would offer as evidence, though, is that most of my friends have white-collar jobs. They spend most days sitting in offices writing stuff.

Let me give you an example. One of my friends works in a legal department at a bank. She often gets to review things like ads or promotions the bank runs. These often contain things that are illegal, because the marketing team is not made up of people who have a legal background, and many things that seem like reasonable ideas are illegal. So a lot of her work consists of explaining to people why they can't do the thing they want to do.

She's not particularly good at doing that in an empathetic way, so she's started using ChatGPT to outline these emails for her. "The marketing team at my bank wants to run a competition where people who open a new account get a chance of winning a prize. This is illegal, since it violates sweepstakes laws (see below, where I've pasted the applicable sections). They either have to provide a way to enter the contest freely, without opening an account, or they have to abandon the idea. Can you write an email that explains this in an empathetic, but clear way?"

Obviously, she then has to read the email ChatGPT writes and edit it, but it still saves her time and creates a better end result.

"for any promising use case it fails completely"

I simply do not believe you :-)


> "The marketing team at my bank wants to run a competition where people who open a new account get a chance of winning a prize. This is illegal, since it violates sweepstakes laws (see below, where I've pasted the applicable sections). They either have to provide a way to enter the contest freely, without opening an account, or they have to abandon the idea. Can you write an email that explains this in an empathetic, but clear way?"

Just this paragraph but without the instructions to ChatGPT is sufficient for this case. It's literally more work to get a useful output from ChatGPT than just to do the work yourself. And that's not even accounting for the careful review you need to give every morsel of output from it.


> It's literally more work to get a useful output from ChatGPT than just to do the work yourself.

This feels like opinion stated as fact.

I don't write business language well, but I am able to communicate much more effectively with business people if I run my writing through an LLM to "translate" for me. I do some editing on the output, and I've saved 10 minutes I would otherwise have spent agonizing over whether I'd used the right tone while emailing another department.

"I've fixed the scrolling bug in the calendar component, and it should be published after lunch. Can you let Alex know I'll be around until 5 if he wants to go over it?"

becomes

"I have resolved the scrolling issue in the calendar component, and it is scheduled for publication this afternoon. Please let Alex know that I am available until 5 p.m. today if he would like to review it together."


"Just this paragraph but without the instructions to ChatGPT is sufficient for this case"

It depends on who you're talking to.

But either way, the point is that people do use LLMs, whether you agree with how they use them or not.


ChatGPT has ~200M weekly active users. There isn't really anything to believe here.


ChatGPT's numbers are unaudited and presented without context. OpenAI is not a publicly listed company. They can tell you anything they want to tell you about their business metrics.


It's weekly active users. There is literally no other context that needs to be presented. The worst you can assume, short of outright fraud, is that it counts users who send at least one query every week.

OpenAI doesn't need to report any numbers for us to see it's incredibly widely used. The site has had over 1.5B visits every month since March 2023.


> Almost everybody I know uses ChatGPT at least semi-regularly for all kinds of things.

Count me in the camp that doesn't and wishes fewer people did. I just asked ChatGPT what it knows about me, and apparently I am known for my work in the field of AI. I've apparently contributed to multiple AI projects. I have a globally-unique name and have done absolutely nothing of the sort.

I don't dispute that there are good uses for it right now, but people taking this stuff at face value for anything important is terrifying.


> I just asked ChatGPT what it knows about me, and apparently I am known for my work in the field of AI. [..] people taking this stuff at face value for anything important is terrifying.

Yeah, I'd be more comfortable if it could say "I don't know" instead of generating bullshit.

Simon Willison's article Think of language models like ChatGPT as a “calculator for words”[1] helped me think about this topic better:

> If you ask an LLM a question, it will answer it—no matter what the question! Using them as an alternative to a search engine such as Google is one of the most obvious applications—and for a lot of queries this works just fine. It’s also going to quickly get you into trouble.

All of this is why I only ever use LLMs for things that I can verify.

[1]: https://simonwillison.net/2023/Apr/2/calculator-for-words/


"You're holding it wrong."

You're likely to get more value out of it if you try to do something useful with it rather than asking it to answer a question you know ex ante it cannot know the answer for.


I expect it to provide information to me on topics I don’t already know. If it can’t do so accurately on topics I do know, what on earth would make me believe it’s any better on everything else?

I had absolutely zero reason to believe it would mischaracterize me in advance of asking that question. In fact I presumed that would be an easy gimme: simply Googling me is enough to provide a pretty accurate assessment even before following any of the links.


> I expect it to provide information to me on topics I don’t already know.

Presumably you already know about yourself, yet that was the example you chose to pick. It comes across as you basically leading the witness just to be able to say "gotcha! I knew you'd suck".

It's not a search engine, although you can ask some models/chatbots to search the web. Again, you're holding it wrong.

Instead of trying to confirm your bias that LLMs are bad, try to find disconfirming evidence.

“The human understanding when it has once adopted an opinion, draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects and despises, or else by some distinction sets aside and rejects.”

— Francis Bacon


> Presumably you already know about yourself, yet that was the example you chose to pick. It comes across as you basically leading the witness just to be able to say "gotcha! I knew you'd suck".

With all due respect, how the hell else am I supposed to evaluate its ability to answer accurately on topics without testing it first on topics I know about?

> Instead of trying to confirm your bias that LLMs are bad, try to find disconfirming evidence.

I wasn't trying to confirm my bias. I literally thought that with all the information OpenAI has collected throughout the web they can probably give me a pretty good summary of the publicly available information about me. They probably even have stuff that's a bit more obscure or difficult to find. This seemed like an extremely easy way for me to be impressed.

I was trying to find disconfirming evidence. Yet again I was thoroughly disappointed.

If spot-checking AI with softball questions is considered a "gotcha", that is an extremely damning comment on the current state of things.


But that's not a softball question. You're asking your microwave oven to toast some bread and complaining it comes out hot but not toasted. It's not a toaster, just like an LLM is not meant to be used as a search tool although some integration exists.

Ask it some programming question, to draft an email, to create a marketing campaign, to help you dig deeper into a research topic, I don't know.. things an LLM was built for.


Summarizing the publicly-available information on a person or topic is what LLMs are supposedly built for.


No, summarizing information directly provided to them in their prompt is what LLMs are built for.

Tool-use frameworks with data-retrieval tools can be wrapped around an LLM to make the combined system attempt what you describe, but even the best of those is a lot less reliable than the underlying LLM is at its core functions, like summarizing directly provided information.
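
A minimal sketch of the distinction, assuming the openai Python package (the model name and example text are placeholders): the reliable pattern is pasting the source material into the prompt yourself, rather than asking the model to recall it from its training data.

    # Summarization the reliable way: the source text is supplied in the
    # prompt, not recalled from training. Assumes the openai package;
    # the document and model name are illustrative stand-ins.
    from openai import OpenAI

    client = OpenAI()
    document = ("Jane Doe is a nurse in Ohio who volunteers at a local "
                "animal shelter and writes a gardening blog.")  # text you gathered yourself

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "user",
             "content": "Summarize the following in two sentences, using "
                        "only facts stated in the text:\n\n" + document},
        ],
    )
    print(response.choices[0].message.content)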


The summarizing feature of LLMs requires that you provide the input you want to be summarized.


Did you literally read the first sentence and then reply without reading the next one?


> I expect it to provide information to me

Yeah, that's something language models are exceptionally bad at. And yes, ChatGPT is more than an LLM: it is a UI around an LLM core, with some other facilities for data retrieval. But that is still way outside its area of strength (unfortunately, it seems to be a lot of what people want LLM-centered products like ChatGPT to be).


> make money on usage, but lose money on training. But I could be wrong.

You could be 100% right, and it could still be an unsustainable business, depending on how much training and R&D cost.


The longer you can endure losing money, the more of your opponents go bankrupt - or outsmart you.


This typically works but the incumbent players - Meta, Google, Microsoft (ish) - are very rich, and are experts. They're no IBM.


“Advertising” is not a sure means of profitability. Yahoo was one of the most trafficked websites for years and still wasn't profitable.


> Almost everybody I know uses ChatGPT at least semi-regularly for all kinds of things.

Ed's perspective is from the non-engineering side, not that of the typical Hacker News user or LLM API consumer.

> My understanding is that they (on average, including their free users) make money on usage, but lose money on training.

The billion-dollar YoY losses are only possible with negative marginal profits. Training is expensive, but not that expensive.

> I also think OpenAI isn't yet serious about monetizing its products. You can just go to ChatGPT and use it for free

That's just the modern software business: have the user try it for free (loss-leading) with an incentive to upsell them to a paid product, notably ChatGPT Plus. And if OpenAI didn't do it, others would, as is already the case. But in a post-ZIRP world that doesn't work as well.


One would think that the difficulty of making a company profitable while it trains larger and larger LLMs, combined with diminishing returns and the model-collapse phenomenon, would make companies want to stop training larger models. I assume that they continue because whichever company stops training larger models would fall behind in the race to win new rounds of funding. But if that is the case, what is the ultimate valuation these companies are trying to achieve, being valued in the billions already?

Diminishing returns mean that the user gets less marginal benefit from each larger model, and the model-collapse phenomenon means that models trained on new training data may be worse than older models. Have straightforward mitigations been put in place, such as filtering out of the training data those forums where users like to share AI-generated content?
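
For what such a "straightforward mitigation" might even look like, here is a purely hypothetical sketch; the domain blocklist and classifier are made-up stand-ins, not anything a lab has disclosed:

    # Purely hypothetical sketch of one such mitigation: dropping pages from
    # domains known for sharing AI output, plus a classifier score cutoff.
    SUSPECT_DOMAINS = {"ai-share-forum.example.com"}  # hypothetical blocklist

    def ai_generated_score(text: str) -> float:
        """Stand-in for a real AI-text detector returning a 0..1 score."""
        return 0.5  # placeholder value

    def keep_for_training(url: str, text: str, threshold: float = 0.9) -> bool:
        domain = url.split("/")[2]          # "https://host/..." -> "host"
        if domain in SUSPECT_DOMAINS:
            return False                    # drop the whole domain
        return ai_generated_score(text) < threshold

    print(keep_for_training("https://ai-share-forum.example.com/post/1", "..."))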


> But if that is the case, what is the ultimate valuation these companies are trying to achieve, being valued in the billions already?

That's why OpenAI/Sam Altman has been memeing AGI. None of this will work unless they make God.


>> Almost everybody I know uses ChatGPT at least semi-regularly for all kinds of things.

> Ed's perspective is from the non-engineering side, not that of the typical Hacker News user or LLM API consumer.

ChatGPT has 200M weekly active users.


I'm not saying I agree with him that ChatGPT has no practical use. https://x.com/minimaxir/status/1841549631085543850


I’m responding to the suggestion that being on the non-engineering side explains his opinion.


I've tried the alternatives and even running my own models locally. There is just no other solution I could find that comes close to the general utility of ChatGPT for a broad range of use-cases. I hope someone disrupts them, but until then, ChatGPT is to LLMs what Gmail is to email for me. I love to hate them.


That's an interesting perspective to me, since I've gone the opposite way. Claude Sonnet 3.5 beat ChatGPT in every test that mattered to me. Perplexity is a fantastic service when you want to gather information or research a topic, much better than any off-the-shelf LLM imo. I'm not overly surprised that some people have your perspective, since OpenAI has done a great job dominating mind share, but I think if you look at some of the other options you'll find there are a lot of great tools, and some of them are even better than ChatGPT. The one caveat is that I don't do image generation and it's not very important to me, so OpenAI may have the best option there, I don't know.


I have tried other options and you're right, many of them are better at specific things. But I don't have the time or desire to manage lots of tools when ChatGPT is generally good enough for most use-cases.


Claude Sonnet 3.5 beat which version of ChatGPT?


Every single one of them. I’m not being facetious. Sonnet 3.5 outranks every single OpenAI model on most benchmarks.


Claude beats ChatGPT but it lacks many features like audio. Kind of a deal breaker.


What things do you do locally? For me, I've found that about 8 out of 10 times, a local model just suffices.


Code and biology. For me, the constraints locally are typically quality and speed of generating responses.


Can you expand?


Loopt was never profitable.

I have yet to hear of any YC company that succeeded and pointed to Altman as a key factor.

OpenAI only got a first-mover advantage because it was seen as a separate entity that had a higher purpose. Now that the truth is out and others have realized that Altman was playing a con the whole time, the competition is catching up and turning their models into commodities.

Why do people still put so much stock into him?


"ChatGPT Plus has no real moat" other than the fact that they still have the best models. Competitors can be useful in some situations, but in the general case, GPT-4 (and now o1-preview) are more useful than anything else out there.

And that's before factoring in what other models they are cooking that they're in no hurry to release.

If ChatGPT is so easy to outcompete, where's the competition?


> they still have the best models

Only a few use cases need the best models. Most of them work OK on mini models and LLaMAs. The problem is that it costs a great deal to develop for this shrinking high-end segment. With each new release of open or smaller models, OpenAI's island of superiority shrinks. But people's tasks don't get any more difficult on average. So eventually your phone's model will suffice for 99% of use cases, and be free and private. What will OpenAI do with 1% of the market?


The "I need some help on some vague problem and want to ask a machine to act like a junior member of my team I could delegate to and get mostly-right results that I would still need to verify" is a very compelling use case. Arguably the best.


I think it's, by far, the least compelling use case. AI first and foremost will be focused on singular tasks that are fairly complex, in order to automate.

These "catch all" use cases sound cool, but in practice are nearly worthless. Because you'll spend more time verifying than if you just did it yourself. The compelling use cases are the human-less use cases. Imagine ridding your entire accounting department because it can be handled by highly specific LLMs and calculators.


I have never, in 2+ years of using AI, spent more time verifying the output, lol.

It takes like 10 seconds.

You’re just using it to point you in a direction, not solve the entire problem.


Then I'm questioning if the output is actually correct.

That's just not a compelling use case. It makes far more sense to just hire junior engineers. Because then they're actual engineers, and their knowledge base will grow and transfer over time.

If you have proprietary systems, then it may make sense to train an LLM on those. But I would consider that a more "focused" use case. I just don't see the main value of AI being a glorified, generalized google search.

There's potential here to automate tasks with MUCH less friction. Now business folk and people who understand processes (but not algorithms) can potentially automate business processes.


It makes sense to hire junior engineers for big businesses, sure. I think you underestimate how much work a solo developer can get done with GPT.

Yesterday, I implemented a recursive PostgreSQL function to crunch some pretty complex tables to produce a new output. As a senior engineer, I knew the output I wanted and understood the trade offs, but I didn’t want to spend 3 whole days trying to figure out how to build the recursive function myself.

Claude did it correctly in about 5 minutes, and then I spent another hour or two just tweaking and exploring alternate options.

I think it also goes without saying that I could not hand this to a junior engineer. We’re quite clearly past the “GPT is only as good as a junior engineer” phase.

It’s so different from anything else that’s ever existed. You could not do what I did yesterday without spending a LOT more time. Google would not have solved that problem, nor would a junior engineer.


> Because you'll spend more time verifying than if you just did it yourself.

This hasn't been the case in my experience.


In my experience as a software engineer, reading code is much more difficult than writing it. The trick is you have to understand it, understand the implications, and understand the business impact. If you skip all three of those then it might be faster.


Claude is better than GPT-4.


o1 is better than Claude at more difficult tasks, I have used and continue to use both extensively


Do you find that true for tech / programming tasks, general purpose things, or both?


Not better than o1-preview based on my subjective experience.


Their moat is the data and feedback they get from their users. Every chat is something they can potentially learn from: A/B tests, upvotes and downvotes.

All that feedback has made their product better, and they're operating at a very different scale than everyone else.


Agreed. There is both explicit and implicit feedback in user responses. Did they continue or stop? Ask for clarification? Push back? Correct the model? So many signals. Even more, you can judge a response by looking at how the conversation went after that moment: hindsight analysis to predict usefulness.

The more interesting thing is when the user tests the AI's ideas, like running some code or applying advice to their hobbies. The user returns again and again, communicating the outcomes in order to get new advice. The model can collect all chats across many days by topic and judge them in retrospect. Which answers were good or bad?

The user base is so large: 200M users means lots of perspectives and lots of ideas, and ChatGPT puts trillions of tokens into human brains per month. They influence on a grand scale. After a while some article will describe the outcomes, and that data percolates back into the next training set. A feedback loop, and an indirect agent working through people. An experience flywheel, collecting from hundreds of millions and serving back.


On the other hand: ChatGPT is the fastest-growing consumer product in history.


A company giving away money for free could probably beat them (and might lose less money per customer than OpenAI).


Loss-leaders tend to grow fast. More importantly, you need a moat to benefit from being a loss-leader and get back to profitability, which OpenAI does not have.


ChatGPT is the best foundational model. The barrier to entry is high because of high compute costs.


Having the best model, which is debatable even in GPT-4o's case, is not a moat.


But if training the model is building the moat, and doing so is highly unprofitable, then the only way for OpenAI to maintain its moat is to continue training better models than the competition, each of which will cost even more money to train and push them further into the red.


There are more people alive today than there ever have been, and those people generally have better means of keeping up to date with new tech and consumer products.

Those two points alone emphasise that comparing the accelerated growth of a tech company in 2024 to, say, the Sega Mega Drive in the 90s isn't a like-for-like comparison.


But as the article suggests, it's not making them profitable.


I was around in 1999 when we had this conversation the first time. People openly mocked Google for throwing money down the well. It was a search engine! People clicked away from Google. They were fools, it was unsustainable, etc. Then AdWords launched, and we know the rest of that story.


> Then AdWords launched, and we know the rest of that story.

I guess the moral of that story is that OpenAI may be profitable once they start injecting Temu ads into ChatGPT responses. Profitability, but at what cost?

They appear to already be working on something along those lines: https://news.ycombinator.com/item?id=41658837


Then you also remember all the other companies that threw money down the well in '99 and didn't pull a rabbit out of their hats in time.

Throwing money away isn't the clever bit in the Google story.


You're talking about the dotcom bubble! A time in history where enormous amounts of money were thrown down enormous amounts of overhyped wells, and almost all of them turned out to be stupidly bad business.

People mocking that waste of money were 99% right, and Google is one of the very few exceptions.


Google did become more and more of a portal, though, offering weather etc. to keep you from clicking away. Carousels of their own YouTube videos at the top of each result. AMP links, similar to AOL keywords, running their own ads inside most of the time, etc.


Or you could cite the thousands of companies that died with a whisper.

It's called survivorship bias.


You say this, but Google has far better retention than OpenAI does.


Early stage companies care more about growth than profitability.

Looking at this arbitrary chart from a quick search[1], OpenAI is likely still at the "young growth" stage

[1] https://vlp.teju-finance.com/courses/pluginfile.php/1844/mod...


As long as they remain in the lead of what could be a humanity-changing field, I'm not convinced losing money is a certain indicator of a bad business. It's definitely not a good sign either, but the bet is clearly a long-term one. (Or maybe it's just the classic Silicon Valley bigger-idiot swindle.)


Still haven't seen any evidence of LLMs displaying any kind of intelligence at all, never have. This goalpost has not moved. They are not building AGI, Google is not building a universal search engine, Meta is not building a metaverse you can live in, Crypto isn't replacing fiat currencies, ...


Forget about terminology, if you've been following developments for a while, you can't help but notice consistent improvements, and the new models are very powerful. I don't care if anybody calls it AGI, whatever they are, they will be massively transformative.


They're certainly useful, and the goalposts for what is "intelligent" keep shifting, but I think we're seeing only progress towards more polished bigger LLMs. I don't think that progress implies there's a path from LLMs to something substantially better.

LLMs are still prone to hallucinations, and we're only adding more data and workarounds to make it happen less often. Prompt injections limit their usefulness on untrusted inputs. They can't do precise logic and reasoning, and are too likely to follow memorized patterns instead.

We've got something amazing, way better than what we've had before, but the current architecture is still based on a fuzzy translator. It's hard to say whether this is it, and it's going to plateau at this level for a while, or whether there are more breakthroughs around the corner.


OK. With the best of intentions and real curiosity: How?


How are they better? The o1 models succeed where all other models fail.

I don't know how to go more in depth right now, but I use Claude, 4o, and o1 (both mini and preview) in parallel, and the o1 models, both mini and preview, succeed in many tasks where the others fail.


Fuller question: you say they are massively transformative. In what way? How is life or business going to be changed by this tool?


Like making data entry easier by extracting to JSON. A partner for foreign-language practice. Brainstorming. Coding and fixing computer problems. Explaining legalese. Reading electronic chips by their labels (ICs, transistors) and telling you what they do. Fixing my poor English. Turning a list of ideas into a full-fledged article. Explaining papers. Writing personalized stories about the adventures of your pet.
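
The first of those is easy to sketch. Assuming the openai Python package, with the note, schema, and model name as illustrative placeholders:

    # Sketch of the "extract to JSON" use case. Assumes the openai
    # package; the note, schema, and model name are illustrative only.
    import json
    from openai import OpenAI

    client = OpenAI()
    note = "Mtg w/ Dana Fri 3pm re: Q3 budget, deck due Thu noon"

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},  # forces valid JSON output
        messages=[
            {"role": "user",
             "content": "Extract fields person, day, time, topic, deadline "
                        "as a JSON object from this note: " + note},
        ],
    )
    print(json.loads(response.choices[0].message.content))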


Thanks... none of those sound like they would pay the massive bills of running ChatGPT (or anything similar).


We're only two years in. To be frank, I'm at a loss on how to respond to the skepticism. If something so useful can't make a profit in the future you envision, I don't know what can.


They don't seem that useful to me; perhaps that is why our perspectives differ. They seem, at most, like slight conveniences. But the other half is the cost of the service: regardless of whether something is useful (wide spectrum, there), it has to cover the costs of providing it. It costs a lot to provide a service like chatgpt. So far, I don't see people rushing to pay for it. I wouldn't pay for it, in part because sorting out all the misinformation eliminates a lot of the benefit, from my perspective.

Most things that succeed have a very clear value proposition that justifies their cost.

But I watch, and if your envisioned future arrives, that will be interesting.


I don't remember who said it, but it wasn't some AI hype man: "LLMs eliminate 40% of the drudgery in code". That has certainly been my experience, so on that specific point, this is far from a slight convenience.


Then you shouldn't pay for internet access either, because the web is brimming with misinformation and manipulative content. It's not a GPT-specific problem.


I said they will be, from looking at the trajectory. Programmers are just early adopters.


> ChatGPT Plus has no real moat, little product differentiation (outside of its advanced voice mode, which Meta is already working on a competitor to), and increasing commoditization from other models, open source platforms, and even on-device models (like on Copilot+-powered PCs).

This is the only thing that matters imo and the biggest open question to me. All the math surrounding this is irrelevant if being the premier genAI API/service is something you can charge good money for (to businesses who likely won't scrimp). Winning that market share battle is worth burning billions for most likely and they are ahead of everyone else.


Businesses are interested in AI so they can fire employees and do the same amount of work with much cheaper AI instead. The whole point is to scrimp.


There's potentially a wide range of economically attractive cost-saving options that involve replacing employees/contractors with cheaper or fewer GenAI-augmented employees. Perhaps another way to make my point is that in the B2B space you win not just on price but also, if not primarily, on reputation (as the people making the decisions are not spending their own money and decide primarily to avoid being fired). The competition needs to be more than just cheaper to take OpenAI's business.




The purpose of losing money in a venture backed business is to destroy the economics of it for other businesses so it is unprofitable to compete in the same space. Classic dumping tactics but regulators look the other way.


The author, Ed Zitron, has been an OpenAI bear for a long time now.

I hold a neutral position on this, I will sit and watch how this plays out.

I had predicted that Uber would not become profitable. But it did eventually.


I remain skeptical, and I agree with Ed's observations, but I'm also starting to think that the AI bubble is the only thing holding back a widespread tech depression right now, so I try to remain mostly passive on the topic.


I don't see how the spend can continue a lot longer without some evidence of profits, or at least more potential for profits. I know people aren't going to pay $40 a month for chatgpt as-is.


The past decade and ventures like WeWork and Theranos have shown me that profitability doesn't necessarily have bearing over a certain timescale.

You can keep the bus rolling for a surprisingly long time before the wheels fall off.


I'd agree with many of the comments that the amount of moat is nebulous, but certainly one exists. Pointing this out, though: from the outside, when AWS and Google's infrastructure was being built, was it obvious they were building significant moats? Also, looking at the counterfactual: would it be better for OpenAI to just give up their momentum? Really, they're choosing the most rational pathway they have, considering all of the partnership deals they've made. The real question is whether they're picking up pennies in front of the Microsoft steamroller behind them, or whether they're going to keep revolutionizing. They've got the track record, but it does seem like OpenAI is entering the late summer-fall season of its growth.


The problem for them is that you have some of the largest companies on earth creating a very similar product and giving it away for free -- possibly just to prevent OpenAI from developing a profitable business model.


Erm... no. Kubernetes is OSS, yet many serious people still pay good money for hosted, managed K8s anyway, because the overhead is too high. The same goes for local models: no one (except hardcore folks) is interested in investing in a beefy GPU, or running strange shell commands and suffering laggy, subpar responses from CPU-only models, when they can throw some coins at OAI and get all they need.

We need to realize that the target for the current AI wave is NOT tech folks (Claude/Mistral/Cursor et al. serve that group), but business people who want the “gain in productivity”, to “fire the expensive employees and get that pat on the back when margins go up and the same work gets done by AI plus a cheap junior mini-team”, to “quickly prepare that keynote/email/report on a random topic”, to “learn language X by building a wrapper over the API”, etc.


A billion in Azure "cloud credits" for a company with this level of cloud compute is probably less than a year of runway.


I don’t understand why the article leads with SoftBank. A lot of the other points in the article are pretty good, though I suspect that Microsoft is going to fund OpenAI for a long time, since it lets their Copilot shenanigans go semi-unnoticed. Which may be a little tinfoil, but with stuff like the Teams bundling getting them into regulatory trouble, I’d assume they want as little focus on their AI “bundle” as possible. That, and a lot of what they invest in OpenAI goes into Azure anyway.

That part, and the points about subscriptions and losses on the dollar comes in later though. Maybe there is something I don’t understand, but I almost stopped reading the article when it touched on the SoftBank investment. How is that even remotely relevant?


He explains very clearly in the article why it's relevant.


It’s exactly that explanation which makes absolutely no sense. Having someone with a history of poor investments bet on you signifies absolutely nothing. Especially not when they’ve done well on some of their investments.

Nowhere does the author explain why on earth OpenAI would be desperate to accept what they call “dumb money”. Maybe if it was enough money to buy leadership, but it obviously isn’t. So what is the issue here exactly?


When people make claims like OpenAI has no moat, why would Microsoft need them and not just spin up their own LLM?


Why would you go through all the trouble of doing it yourself when someone else can do it at a fraction of the cost, while you keep almost full access to it whenever you want, however you want?

> "If OpenAl disappeared tomorrow, we have all the IP rights and all the capability. We have the people, we have the compute, we have the data, we have everything. We are below them, above them, around them."

~ Satya Nadella

https://x.com/AiBreakfast/status/1770832950722015660


Microsoft has been publishing many LLMs (notably the Phi series), their work is orthogonal to what OpenAI is doing.

There's no need to compete directly. Yet.


Of course it is, but when has that stopped anyone in tech from attempting a monopoly anyway? The only thing they can sell is their access to resources and scale. Their LLMs are based on the same research papers as everyone else's. They have no innovations there at all.


Repeat it with me ... "they have no moat"


Neither did Google by that logic


Google has usage data that no-one else could get access to. It’s the usage data that improved the algorithms that led to more usage data - effectively setting up a natural monopoly.

By allowing their search algorithm to atrophy they have given up this moat to the point that others can breach it with LLMs that give similar results without the usage data.


Back in the 2000's, Google became the winner in search engines because they had a good product and gave better results. That was not a moat.

Then they expanded into an ecosystem, e.g. Gmail. That was a moat.


And neither did the thousands of other companies that disappeared into obscurity.


Isn't the moat the many billions of dollars one needs to invest in compute to get these models?


It would be if they were the only ones with billions of dollars. Perhaps the nuance is that they have a moat, but a rather small one, with competitors that have credibly signaled they intend to cross it.


A barrier to entry is not a moat from the consumer <-> business standpoint.




This article is a joke. The sheer arrogance of publishing content about venture-backed AI businesses when you clearly have no clue about VC or AI...

> OpenAI has not had anything truly important since the launch of GPT-3.5

> [OpenAI] has yet to create a truly meaningful product outside of Sam Altman's marketing expertise

So 4, 4o, o1, DALL-E, ChatGPT, and their API platform aren't meaningful products?

> To be abundantly clear, as it stands, OpenAI currently spends $2.35 to make $1.

Uh, that's how venture capital works? Uber burnt cash like nobody's business, took SoftBank money, and is now a $175B public profitable company.

> It’s extremely worrying that the biggest player in the game only makes $1 billion (less than 30% of its revenue) from providing access to their models.

Name a single API that hit $1b in revenue in 2-4 years...


> Uh, that's how venture capital works? Uber burnt cash like nobody's business, took SoftBank money, and is now a $175B public profitable company.

The macroeconomics of venture capital are a bit different now than in the 2010's. OpenAI may be the one case too financially extreme for conventional VC logic.


> The macroeconomics of venture capital are a bit different now than in the 2010's.

If you are growing revenue to $B in <5 years, and creating the fastest growing consumer app in world history, then 2010's VC rules still very much apply today.


For me, as someone who subscribed, then cancelled and permanently deleted my account, the biggest issue with OpenAI is that they compete with everyone in an underhanded way, where they say you can’t use the output to compete with them. Yet ChatGPT does everything, so who’s really safe from OpenAI? Same deal with Anthropic, Google, and xAI. Just a bunch of shady monopolistic companies; who wants to work with or for someone like that? Answer: people who don’t think about how bad it is.

That said, I reckon the vast majority of their subscribers neither read nor care about the true philosophical implications of the terms of use, which gives them millions of people paying to get “brain raped.” Imitating users, stealing users’ jobs, and banning users who attempt to compete with them can keep OpenAI around for a long time, because that’s the big-tech equivalent of taking souls. Not something anyone with morals wants to take part in, but it could surely evolve into a profitable capitalist business model long term.


> Just a bunch of shady monopolistic companies

"A bunch of monopolistic companies" is an oxymoron. The entire race-to-the-bottom with pricing is because of the lack of moat and open-source pressure.


It's because the companies want to be monopolies: they're monopolistic. Groups who aren't monopolistic in this space behave differently.


Most model providers disallow distilling their models to make copies. It’s fine; we do need diversity. And not the kind of diversity you get when a small vocal minority usurps the megaphone.


> they say you can’t use the output to compete with them

Their terms of use words it as:

> Use Output to develop models that compete with OpenAI.

I think this is acceptable for 99.99% of users.

> in an underhanded way

Could you elaborate on why you see that as underhanded? It's not secret, as it's a bullet point in their TOS, and I don't see how it's dishonest. Billions were put into this model. Not letting the competitors save a good chunk of that seems fairly reasonable, from a "we need to make the billions back eventually" business perspective.

I would claim there's a much better chance that Llama is the more underhanded model, existing partly to wipe out or impede competition, including OpenAI, before going behind a paywall.


I just feel like it’s underhanded to have a provision along the lines of “you’re not allowed to use your chat logs to compete with us while we use your chat logs to compete with you” … how is that reasonable?

At least if Meta makes a great Llama model (as they have since 3.1), both sides can benefit, it doesn’t come with a ridiculous customer noncompete, and I can use the model on my own computer as much as I like.

The difference is, one side has a noncompete, and the other is competitive.


"We spent billions on this. You're not allowed to use our billions to save yourself billions, and then cause us loss by competing with us". This seems reasonable to me. Data is the currency of AI companies. If OpenAI wants to keep the lights on, which they may not be able to do as is, they need to make money or they perish. It's reasonable for them to have a logical strategy of self preservation.

I think Llama is closer to, "Good luck trying to make a profit with us giving this stuff away for free. Oh look, there aren't any real competitors left! Introducing Llama 5.0, with 10x the fee and vendor lock in! Btw, thanks for all the research that made it possible!"

This is also expected behavior for a large tech company trying to fight for market dominance. To see similar examples that match this template, see the history of Intel and Microsoft.



