Monday 3 April 2023
I work for Google, but everything here is my personal opinion, not Google's position.
I signed the "Pause" letter because I agree with what it says and I think the actions it advocates are very likely to be beneficial.
To be clear, none of the following are reasons why I signed:
- Because I'm a fan of the Future Of Life Institute, Elon Musk, or any of the other signatories
- Because I think the recommended pause would solve all our problems
- Because I'm confident the pause will actually happen
- To give academics time to publish papers (Scott Aaronson, this one's beneath you)
- To build hype around large language models
- To give some company an advantage over another company
Most of the arguments against a pause are very shallow, e.g.:
- LLMs are a hoax
- The proposed pause in giant-model training could lead to government regulation, which is inevitably worse than any alternative
- People want a pause because of risk X when they should be worried about worse risk Y
- AI is so obviously super-beneficial that it's morally wrong to delay those benefits
The two strongest arguments against a pause, IMHO, are a) China and b) potential overhang.
Some people argue that a pause would give China a chance to overtake the rest of the world in AI, and that would be worse than not having a pause. I think that is very unlikely. Western-aligned countries together seem to have three big advantages: a lead in training large models, control over the manufacture and distribution of the most powerful GPUs and TPUs, and most of the world's best AI researchers. Also, from what I've read, the modern CCP is highly averse to social disruption, so I expect them to proceed carefully.
Another argument against a pause is that if we stop training giant models we'll continue to improve hardware and algorithms, so the next time we train a giant model there will be a discontinuous jump in capability, which would be worse than not pausing at all. It seems unlikely to me that this effect, even if it happens, would outweigh the benefits of a pause.
Supporters of the proposed pause are motivated by different AI risks. In particular:
- Some people, exemplified by Eliezer Yudkowsky, are convinced AI will inevitably destroy humanity due to the "alignment problem". I think that possibility has to be taken seriously — which alone is a good reason to adopt the "Pause" letter's measures (and more). Still, I doubt doom is guaranteed.
- Some people are more worried about socioeconomic issues like potential mass unemployment. I think these issues, taken together, are also a sufficient reason to slow down AI development and mass deployment. When AI capabilities grow faster than people can learn new skills, we're pushing the limit of the rate of change humans can handle.
- In between "social/economic adjustment" and "alignment extinction risk" are the risks of powerful AIs in the hands of malicious people. We have to learn to defend against the new generations of scams, hacks, warfare, and other havoc that are coming, and again, this takes time.
I think all of these risks are valid concerns that warrant great caution and slower rates of change. I'd be delighted if we got the proposed pause. I don't think we will, but I hope that these issues get a lot more attention and we start taking government and self-regulation of AI seriously. It's absurd that we regulate food and drug safety but not AI.