
> Do you see any reason progress will stop abruptly here?

Yeah: money, energy, and the fundamental limitations of LLMs. I'm obviously guessing as well, since I'm not an expert, but it's a view shared by some of the biggest experts in the field ¯\_(ツ)_/¯

I just don't really buy the idea that we're going to have near-infinite linear or exponential progress until we reach AGI. Reality rarely works like that.




So far the people who bet against scaling laws have all lost money. That does not mean that their luck won’t change, but we should at least admit the winning streak.

You mean Moore's law? Which is now dead?

No, I don't mean that. I mean the LLM parameter scaling laws. More importantly, it doesn't matter whether I mean that or Moore's law or anything else, because I'm not making a forward-looking prediction.

Read what I wrote.

What I'm saying is: if you bet AGAINST [LLM] scaling laws--meaning you bet that the gains would naturally peter out somehow--you've lost 100% of the time so far.
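To make "scaling laws" concrete: the usual reference is a power-law fit of loss against parameters and training tokens, like the Chinchilla fit (Hoffmann et al., 2022). A rough Python sketch, with constants approximately the paper's reported values (illustrative, not authoritative):

```python
# Sketch of a Chinchilla-style LLM scaling law: predicted loss L as a
# function of parameter count N and training tokens D.
# Constants are roughly the fitted values reported by Hoffmann et al.
# (2022); treat them as illustrative assumptions, not gospel.

E, A, B = 1.69, 406.4, 410.7   # irreducible loss and fitted coefficients
alpha, beta = 0.34, 0.28       # fitted exponents

def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Predicted cross-entropy loss for a model of n_params parameters
    trained on n_tokens tokens."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Scaling up parameters and data lowers predicted loss with diminishing
# returns: the curve flattens toward E, it doesn't hit a wall.
small = predicted_loss(1e9, 20e9)      # ~1B params, 20B tokens
large = predicted_loss(70e9, 1.4e12)   # ~70B params, 1.4T tokens
assert large < small
```

Note the fitted curve predicts diminishing returns, not a hard stop; whether it keeps holding outside the fitted range is exactly what the bet is about.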

100%

Tomorrow could be your lucky day.

Or not.


This weekend I had a 100% success rate at the blackjack table, until I didn't and lost.

I guess we'll see :)


You gonna go read up on some 0% success rate strategies on the way?

What I’m saying is that we act as though claims about these scaling laws have never been tested. People feel free to just assert that any minute now the train will stop. They have been saying that since the Stochastic Parrots paper.

It has not come true yet.

Tomorrow could be it. Maybe the day after. But it would then be the first victory.


It's not dead; just compare the GB200/GB300 specs to Vera Rubin's.

At the very least, computers are still getting faster. Models will get faster and cheaper to run over time, allowing them more time to "think", and we know that helps. Might be slow progress, but it seems inevitable.
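To illustrate the cheaper-compute point with a toy model (the prices here are made-up assumptions, not real rates): as per-token cost falls with each hardware generation, a fixed budget buys proportionally more "thinking" tokens.

```python
# Toy model (hypothetical prices): cheaper inference means a fixed budget
# buys a longer chain of reasoning tokens at the same cost.

def thinking_tokens(budget_usd: float, cost_per_million_tokens: float) -> float:
    """Tokens of test-time 'thinking' a budget buys at a given price."""
    return budget_usd / cost_per_million_tokens * 1_000_000

# If per-token cost halves with a hardware generation, the same dollar
# buys twice the thinking time.
gen0 = thinking_tokens(1.0, 10.0)  # $1 at $10/M tokens -> 100,000 tokens
gen1 = thinking_tokens(1.0, 5.0)   # $1 at  $5/M tokens -> 200,000 tokens
assert gen1 == 2 * gen0
```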

I do agree that exponential progress to AGI is speculation.


You think that all the AI companies will never release a better model, just days after they all released better models?

That is a position to take.


I know some proponents have AGI as their target, but to me it seems to be unrelated to the steadily increasing effectiveness of using LLMs to write computer code.

I think of it as just another leap in human-computer interface for programming, and a welcome one at that.


If you imagine it just keeps improving, the end point would be some sort of AGI though. Logically, once you have something better at making software than humans, you can ask it to make a better AI than we were able to make.

I don’t think that follows, nor do I think it will keep improving indefinitely. It will certainly continue to improve for a while.

We don’t need anything close to AGI to render the job “software engineer” as we know it today completely obsolete. Ever hear of a lorimer?


If it doesn't follow, why not?

The other possibility is, as you say, that progress slows down before it's better than humans. But then how is it replacing them? How does a worse horse replace horses?


I said I don’t think it follows, and you certainly gave no support for the idea that it must follow. Logically speaking, it’s possible for improvements to continue indefinitely in specific domains, and never come close to AGI.

Progress in LLMs will not slow down before they are better at programming than humans. Not “better than humans.” Better at programming. Just like computers are better than humans at a whole bunch of other things.

Computers have gotten steadily better at adding and multiplying and yet there is no AGI or expectation thereof as a result.


Either the AI can do better than humans at programming, or it can't. If I ask it to make an improved AI, or better tools for making an improved AI, and it can't do it, then at best it's matching human output.

All the current AI success is due to computers getting better at adding and multiplying. That's genuinely the core of how they work. The people who believe AGI is imminent believe the opposite of that last claim.


No one is talking about AGI in this thread except you, though. The post said nothing about it. It's an absolute non sequitur that you brought up yourself.



