
From his fourth question,

> If the problem, in your view, is that GPT-4 is too stupid, then shouldn’t GPT-5 be smarter and therefore safer?

I'm not a signatory (still on the fence here), but this is a gobsmackingly huge assumption, a claimed correlation between very poorly defined concepts, written as though everyone would know it is true.

What is "smarter": more powerful? Faster? More introspectively capable? More connected to external resources? A bigger token limit? None of that necessarily implies the system would intrinsically be more safe. (A researcher on the theoretical underpinnings of AI safety working at OpenAI really thinks "smarter => safer"? That's... a little scary!)

He finishes by suggesting that the training of GPT-4.5 or GPT-5 leading directly to doomsday is unlikely, and thus that a moratorium seems, well, silly. This is an unnecessary and bizarrely high bar.

The argument of the letter doesn't require that "the next model" directly initiate a fast takeoff. It instead works from the idea that this technology is about to become nearly ubiquitous and basically indispensable.

From that point on, no moratorium would even be remotely possible. A fast takeoff might never occur, but at some point -- it might be GPT-8, it might be Bard 5, it might be LLaMA 2000B v40 -- some really bad things could start happening that a little foresight and judicious planning now could prevent, if only we could find enough time to realize as a society that this is all happening and needs attention and thought.

As a final point, the examples of other technologies Aaronson gives here are absurd: the printing press and the radio have no automation (or astoundingly less) and no ability to run away with a captured intent. There are many instances of coordinated moratoria on technologies that seemed potentially harmful; the Asilomar conference on recombinant DNA research is but one example, and its namesake literally appears in the open letter. Chemical weapons, biological weapons, human cloning, nuclear research -- several well-known families of technologies have met a threshold of risk at which we as a society decided to stop, or at least to strictly regulate.

But very few of them have had so much immediately realizable venture capital potential in a surprise land-grab situation like this one.


