> Job loss is likely to have statistics more comparable to the Black Plague.
Maybe this is overly optimistic, but if AI starts to have negative impacts on average people comparable to the plague, it seems like there's a lot more that people can do. In medieval Europe, nobody knew what was causing the plague and nobody knew how to stop it.
On the other hand, if AI quickly replaces half of all jobs, it will be very obvious what and who caused the job loss and associated decrease in living standards. Everybody will have someone they care about affected. AI job loss would quickly eclipse all other political concerns. And at the end of the day, AI can be unplugged (barring robot armies or Elon's space-based data centers I suppose).
We can't stop OpenClaw, because humans are curious. It just takes one unleashed model with a crypto account and some way to make money for the first independent AIs to start bleeding into cyberspace.
We can't opt out of AI competition, because other individuals, organizations and nation states are not going to stop, and are not going to hesitate to leverage their AI if they get ahead of us.
> AI job loss would quickly eclipse all other political concerns.
True. I think this is one of only a few certainties.
> because other individuals, organizations and nation states are not going to stop, and are not going to hesitate to leverage their AI if they get ahead of us.
I don't think it is likely AT ALL, but it would probably only be necessary for China and the US to agree to stop, not all organizations and nation states. It is at least possible, given leadership in both countries that sees AI as an existential threat.
The hardware needed to run and train SOTA AI can only be made by a very small handful of companies, in a small handful of countries that either the US or China have significant influence over. Making AI R&D illegal would stop 99% of it overnight: most of the researchers are in it for the money rather than some ideological commitment, and there are plenty of other well-paid jobs they could take. Doing local inference in secret with existing models and GPUs would be possible, but training new SOTA models probably wouldn't be.