Eyes Above The Waves

Robert O'Callahan. Christian. Repatriate Kiwi. Hacker.

Monday 3 April 2023

Why I Signed The "Pause" Letter

I work for Google, but everything here is my personal opinion, not Google's position.

I signed the "Pause" letter because I agree with what it says and I think the actions it advocates are very likely to be beneficial.

To be clear, none of the following are reasons why I signed:

  • Because I'm a fan of the Future Of Life Institute, Elon Musk, or any of the other signatories
  • Because I think the recommended pause would solve all our problems
  • Because I'm confident the pause will actually happen
  • To give academics time to publish papers (Scott Aaronson, this one's beneath you)
  • To build hype around large language models
  • To give some company an advantage over another company

Most of the arguments against a pause are very shallow: for example, that LLMs are a hoax; that the proposed pause in giant-model training could lead to government regulation, which is inevitably worse than any alternative; that people want a pause because of risk X when they should be worried about the worse risk Y; or that AI is so obviously super-beneficial that it's morally wrong to delay those benefits. The two strongest arguments against a pause, IMHO, are a) China and b) potential overhang.

Some people argue that a pause would give China a chance to overtake the rest of the world in AI, and that would be worse than not having a pause. I think that is very unlikely. Western-aligned countries together seem to have three big advantages: a lead in training large models, control over the manufacture and distribution of the most powerful GPUs and TPUs, and most of the world's best AI researchers. Also, from what I've read, the modern CCP is highly averse to social disruption, so I expect it to proceed carefully.

Another argument against a pause, the "overhang" risk, is that if we stop training giant models while hardware and algorithms continue to improve, then the next time we train a giant model there will be a discontinuous jump in capability, which would be worse than not pausing at all. It seems unlikely to me that this effect, even if it happens, would outweigh the benefits of a pause.

Supporters of the proposed pause are motivated by different AI risks. In particular:

  • Some people, exemplified by Eliezer Yudkowsky, are convinced AI will inevitably destroy humanity due to the "alignment problem". I think that possibility has to be taken seriously — which alone is a good reason to adopt the "Pause" letter's measures (and more). Still, I doubt doom is guaranteed.
  • Some people are more worried about socioeconomic issues like potential mass unemployment. I think these issues, taken together, are also a sufficient reason to slow down AI development and mass deployment. When AI capabilities grow faster than people can learn new skills, we're pushing the limit of the rate of change humans can handle.
  • In between "social/economic adjustment" and "alignment extinction risk" are the risks of powerful AIs in the hands of malicious people. We have to learn to defend against the new generations of scams, hacks, warfare and other havoc that are coming, and again, this takes time.

I think all of these risks are valid concerns that warrant great caution and slower rates of change. I'd be delighted if we got the proposed pause. I don't think we will, but I hope that these issues get a lot more attention and we start taking government and self-regulation of AI seriously. It's absurd that we regulate food and drug safety but not AI.

Comments

Data Science
>>π‘Šβ„Žπ‘’π‘› 𝐴𝐼 π‘π‘Žπ‘π‘Žπ‘π‘–π‘™π‘–π‘‘π‘–π‘’π‘  π‘”π‘Ÿπ‘œπ‘€ π‘“π‘Žπ‘ π‘‘π‘’π‘Ÿ π‘‘β„Žπ‘Žπ‘› π‘π‘’π‘œπ‘π‘™π‘’ π‘π‘Žπ‘› π‘™π‘’π‘Žπ‘Ÿπ‘› 𝑛𝑒𝑀 π‘ π‘˜π‘–π‘™π‘™π‘ , 𝑀𝑒'π‘Ÿπ‘’ π‘π‘’π‘ β„Žπ‘–π‘›π‘” π‘‘β„Žπ‘’ π‘™π‘–π‘šπ‘–π‘‘ π‘œπ‘“ π‘‘β„Žπ‘’ π‘Ÿπ‘Žπ‘‘π‘’ π‘œπ‘“ π‘β„Žπ‘Žπ‘›π‘”π‘’ β„Žπ‘’π‘šπ‘Žπ‘›π‘  π‘π‘Žπ‘› β„Žπ‘Žπ‘›π‘‘π‘™π‘’ Rob, What's your take on GPT 3.5, regarding any impact on NZ's Software Engineering / Data Science job markets over the coming year or so?
Josh Cogliati
I also signed the pause letter. I think that there is no hope that we will be able to control superintelligent AIs, but I do think there is a chance that we might end up with an ethical AI. (ChatGPT definitely has a much better understanding of human ethics than any program I could write by hand, but just because a being knows what good is doesn't mean that being will act good.)