Ethereum News: Vitalik Buterin Advocates ‘Soft Pause’ on Compute to Mitigate AI Dangers

  • Vitalik Buterin suggests limiting global computing power to slow risky AI.
  • He proposes weekly authorization for AI hardware by global bodies.

Vitalik Buterin, co-founder of Ethereum, has suggested a potential strategy to address the risks associated with superintelligent artificial intelligence (AI). In a January 5 blog post, Buterin proposed limiting global compute power for up to two years as a last-resort measure to slow the development of potentially dangerous AI systems.

A “Soft Pause” on Compute Power

Buterin’s proposal involves temporarily restricting industrial-scale computing hardware to reduce available computational resources by as much as 99%. This measure aims to give humanity additional time to prepare for the possible emergence of superintelligent AI systems that are significantly more advanced than the most intelligent humans in every domain.

According to Buterin, such a pause could act as a safeguard if it becomes evident that AI risks are too significant to manage through traditional regulatory frameworks. However, he emphasized that this approach would only be considered if other measures, such as liability rules holding AI developers accountable for damages, prove insufficient.

Addressing AI Risks with Defensive Accelerationism

Buterin’s recommendations align with his broader concept of “defensive accelerationism” (d/acc), which advocates for cautious and strategic technological advancement. This approach contrasts with “effective accelerationism” (e/acc), which supports rapid and unrestricted technological progress.

In his latest post, Buterin expanded on his November 2023 introduction of d/acc by offering concrete ideas for scenarios where AI poses significant risks. Among his proposals is implementing a monitoring system for industrial-scale AI hardware.

One suggested method involves requiring AI chips to register with international bodies. Buterin also outlined a potential mechanism for ensuring compliance, such as equipping devices with chips that necessitate weekly authorization from global organizations. This process could leverage blockchain technology for added transparency and security.
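The authorization mechanism described above could, in its simplest form, work like a multi-party signing scheme: the chip runs only while it holds fresh, valid approvals from enough of the designated bodies. The sketch below is purely illustrative; the body names, the 2-of-3 threshold, and the use of symmetric HMAC keys in place of real public-key signatures are all assumptions made to keep the example self-contained, not details from Buterin's proposal.

```python
import hmac
import hashlib

# Hypothetical authorizing bodies and their keys. A real scheme would use
# public-key signatures (the chip stores only public keys); shared-key HMAC
# is a stand-in here so the sketch runs with the standard library alone.
AUTHORITY_KEYS = {
    "body_a": b"secret-key-a",
    "body_b": b"secret-key-b",
    "body_c": b"secret-key-c",
}
REQUIRED_SIGNATURES = 2  # assumed threshold: 2 of 3 bodies must approve


def sign(body: str, week: int) -> bytes:
    """An authorizing body produces its approval for a given week."""
    msg = f"authorize week {week}".encode()
    return hmac.new(AUTHORITY_KEYS[body], msg, hashlib.sha256).digest()


def chip_is_authorized(week: int, signatures: dict[str, bytes]) -> bool:
    """The chip verifies each approval and counts how many bodies signed."""
    valid = 0
    for body, sig in signatures.items():
        if body in AUTHORITY_KEYS and hmac.compare_digest(sig, sign(body, week)):
            valid += 1
    return valid >= REQUIRED_SIGNATURES


# Week 12: two bodies issue approvals, so the chip may keep running.
approvals = {"body_a": sign("body_a", 12), "body_b": sign("body_b", 12)}
print(chip_is_authorized(12, approvals))  # True

# Week 13: no fresh approvals arrive, so the chip must halt.
print(chip_is_authorized(13, {}))  # False
```

Because approvals are bound to a specific week, last week's signatures cannot be replayed; withholding new signatures is what makes the pause enforceable. Publishing the weekly approvals on a blockchain, as the post suggests, would let anyone audit which bodies signed and when.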

The debate over AI risks has intensified recently, with numerous researchers and industry leaders expressing concerns about the technology’s potential to harm humanity. In March 2023, over 2,600 experts signed an open letter urging a pause in AI development to address what they described as “profound risks to society.”

Buterin acknowledged these concerns, noting the uncertainty surrounding the outcomes of superintelligent AI. He argued that proactive measures, including a temporary reduction in available computing power, could be critical to ensuring AI develops in a manner beneficial to humanity.

A Precautionary Approach by Buterin

While Buterin’s proposal has sparked discussion, he remains cautious about endorsing drastic measures. He underscored that he would advocate a “soft pause” only if the risks of inaction outweighed the downsides of imposing such restrictions.

As AI development accelerates, Buterin’s ideas contribute to the broader conversation on balancing innovation with safety. His suggestions underscore the importance of preparing for scenarios where advanced AI systems might pose existential threats.

Buterin aims to address AI development challenges by promoting a measured and collaborative approach without entirely stifling technological progress. Whether his proposals gain traction remains to be seen, but they highlight the growing urgency of tackling the ethical and safety concerns surrounding AI.
