Vitalik Buterin Proposes "Soft Pause" on Global Computing to Mitigate Risks of Superintelligent AI

Ethereum co-founder Vitalik Buterin has proposed a radical "soft pause" on global computing power as a potential strategy to slow the development of superintelligent AI and give humanity more time to prepare for the risks its emergence would bring. In a January 5 blog post, Buterin revisited the concept he first endorsed in November 2023, "defensive accelerationism" (d/acc), which calls for measured action to mitigate the dangers of AI while still encouraging cautious technological progress.


A "Soft Pause" to Buy Time

Buterin’s proposed "soft pause" would restrict global computing power, cutting available compute by as much as 99% for one to two years. This measure, he argues, would buy humanity time to prepare for the potential emergence of superintelligent AI, which he believes could arrive in as little as five years.


Superintelligent AI would, in theory, surpass human intelligence in every domain of expertise, and its advent could pose profound risks. Buterin acknowledged the uncertainty surrounding the consequences of creating such powerful intelligence: the outcome could be positive, but there is no guarantee of that. Temporarily restricting computing resources would therefore serve as a last-resort measure, deployed only if signs of an imminent and dangerous superintelligence emerged.


Addressing AI Risks Through "Defensive Accelerationism"

In his post, Buterin elaborated on "defensive accelerationism" (d/acc), the framework he introduced in 2023. Unlike the more aggressive philosophy of "effective accelerationism" (e/acc), which advocates rapid, unrestrained technological advancement, d/acc emphasizes cautiously navigating the development of powerful technologies, particularly AI. Buterin argued that if AI poses a high risk, the global community must take concrete action to manage it.


While Buterin’s original proposal on d/acc made vague appeals to avoid developing dangerous superintelligent AI, his new post outlines a more concrete action plan. He suggested that one potential intervention could involve a global "soft pause" on the use of industrial-scale computer hardware. This pause would essentially halt the development of superintelligent AI by restricting the compute resources necessary for its advancement.


Conditions for Implementing a "Soft Pause"

Buterin emphasized that he would only advocate this drastic measure if there were a compelling case that existing tools, such as liability rules, were insufficient to mitigate the risks posed by superintelligent AI. Liability rules would hold AI developers, deployers, and users accountable for damages caused by the technology, but Buterin believes that in certain circumstances a stronger intervention may be needed.


To implement a global compute pause, Buterin proposed that industrial-scale AI hardware be equipped with chips that function only after receiving approval from international governing bodies, in the form of signatures that could be issued weekly. He suggested the process could be made secure and verifiable through blockchain technology, ensuring that only authorized entities could approve the continued operation of AI hardware.


“The signatures would be device-independent (if desired, we could even require a zero-knowledge proof that they were published on a blockchain), so it would be all-or-nothing,” Buterin wrote. “There would be no practical way to authorize one device to keep running without authorizing all other devices.”
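To make the mechanism concrete, here is a minimal sketch of how such a chip-level check might work, assuming an illustrative 2-of-3 signature threshold, Ed25519 keys, and a simple week-stamped message; none of these specifics come from Buterin's post. The key property is that the signed message contains no device identifier, so one valid signature set authorizes every chip at once.

```python
# Hypothetical sketch of the weekly "all-or-nothing" approval check.
# The threshold, key scheme, and message format are illustrative
# assumptions, not details from Buterin's proposal.
from datetime import datetime, timezone

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

THRESHOLD = 2  # assumed: at least 2 of the trusted bodies must sign each week


def weekly_message() -> bytes:
    # The signed message names only the current ISO week, never a device
    # ID, so one valid signature set authorizes every chip at once.
    year, week, _ = datetime.now(timezone.utc).isocalendar()
    return f"compute-approval:{year}-W{week:02d}".encode()


def hardware_may_run(trusted_keys: list[Ed25519PublicKey],
                     signatures: list[bytes]) -> bool:
    # Count how many trusted bodies produced a valid signature over this
    # week's message; signatures are assumed to be in the same order as
    # the keys. The chip functions only if THRESHOLD is met.
    msg = weekly_message()
    approvals = 0
    for key, sig in zip(trusted_keys, signatures):
        try:
            key.verify(sig, msg)
            approvals += 1
        except InvalidSignature:
            pass
    return approvals >= THRESHOLD


# Demo: three hypothetical governing bodies, two of which sign this week.
bodies = [Ed25519PrivateKey.generate() for _ in range(3)]
public_keys = [b.public_key() for b in bodies]
sigs = [bodies[0].sign(weekly_message()),
        bodies[1].sign(weekly_message()),
        b"\x00" * 64]  # third body withheld its signature
print(hardware_may_run(public_keys, sigs))  # True: threshold of 2 is met
```

Because every chip verifies the same device-independent message, withholding the weekly signatures would halt all devices equally, matching the all-or-nothing property Buterin describes; publishing the signatures on a blockchain would simply make their existence publicly verifiable.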


Global Concerns About AI Risks

Buterin’s concerns about AI are shared by many in the tech community. In March 2023, more than 2,600 AI researchers, executives, and other experts signed an open letter calling for a halt to certain forms of AI development due to the profound risks they pose to society. Buterin’s proposal is a response to these growing concerns, offering a potential strategy to mitigate the dangers posed by superintelligent AI while giving humanity more time to address the ethical, societal, and technological challenges it presents.


Conclusion: A Balanced Approach to AI Development

Vitalik Buterin’s proposal to introduce a "soft pause" on global computing resources is a provocative yet measured approach to managing the risks of superintelligent AI. His philosophy of defensive accelerationism seeks to balance advancing technology against ensuring that its development does not outpace humanity’s ability to manage its consequences.


As AI technology continues to evolve rapidly, Buterin’s ideas contribute to an ongoing debate about how best to address the potential dangers of artificial superintelligence. Whether or not his suggestions gain traction, they represent a thoughtful attempt to ensure that the benefits of AI are realized without compromising safety and control.
