Despite the hurdles that major labs like OpenAI, Anthropic, and Google face in refining advanced AI systems, the technology is still marching forward at an impressive pace. Sam Altman, CEO of OpenAI, recently suggested that the dawn of artificial general intelligence (AGI) could be just around the corner, and that superintelligence might be only a few thousand days away.
While AI’s potential is undeniably vast, it comes with serious challenges, particularly around privacy and security. Many observers fear the technology could bring about catastrophic consequences. Roman Yampolskiy, an AI safety specialist who heads the Cyber Security Laboratory at the University of Louisville, has put the odds that AI leads to the end of humanity at a staggering 99.999999%. His stark advice? The surest way to prevent such an outcome is to refrain from developing AI altogether.
Calls to establish strong regulations and safeguards that keep AI from going haywire are growing louder. Ethereum co-founder Vitalik Buterin offers an intriguing proposal: a “global soft pause button” that could cut worldwide computational capacity by 90-99% for one to two years if necessary, buying precious time to prepare for the potential perils of AI dominance.
Buterin explains, “Imagine having the ability to hit pause and effectively decelerate the progress of AI at a crucial time. Even a year of intense focus, akin to a wartime effort, could equate to a century’s worth of complacency-driven work. Some tangible methods to trigger such a pause include requiring global registration and location verification for hardware.”
Buterin elaborates on his vision with a cryptographic twist. He suggests equipping industrial-scale AI hardware with trusted chips that keep running only if they obtain three signatures each week from major international bodies, at least one of them non-military.
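To make the mechanics concrete, here is a minimal Python sketch of what such a weekly check might look like. Buterin has not published an implementation, so everything here is an illustrative assumption: the signer registry, the permit message format, the ISO-week expiry, and the non-military flag are all hypothetical, and Ed25519 (via the `cryptography` package) simply stands in for whatever signature scheme a real design would use.

```python
# Hypothetical sketch of a trusted chip's weekly permit check. The signer
# registry, message format, and "military" flag are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timezone

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


@dataclass(frozen=True)
class Signer:
    name: str
    public_key: Ed25519PublicKey
    military: bool  # Buterin's rule: at least one approver must be non-military


def current_week() -> int:
    """Identify the current ISO week, so a permit expires after seven days."""
    year, week, _ = datetime.now(timezone.utc).isocalendar()
    return year * 100 + week


def permit_message(week: int) -> bytes:
    """The single message every signer signs for a given week (assumed format)."""
    return f"AI-HARDWARE-PERMIT:{week}".encode()


def chip_may_run(signatures: dict[str, bytes], registry: dict[str, Signer]) -> bool:
    """Allow operation only with three valid signatures from registered
    international bodies over this week's message, at least one non-military."""
    message = permit_message(current_week())
    approvers = []
    for name, sig in signatures.items():
        signer = registry.get(name)
        if signer is None:
            continue  # unknown signer: ignore
        try:
            signer.public_key.verify(sig, message)
        except InvalidSignature:
            continue  # invalid signature: ignore
        approvers.append(signer)
    return len(approvers) >= 3 and any(not s.military for s in approvers)
```

Because the permit message embeds the week number, a chip that misses a fresh round of signatures simply fails the check and halts, which is the “pause” in the pause button.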
“The signatures wouldn’t hinge on specific devices, and in fact, we could leverage zero-knowledge proofs, potentially even publishing them on a blockchain. This ensures that either all devices continue to operate or none at all,” Buterin asserts. Because the chips would need fresh online authorization every week, there would be little appetite to extend the scheme to personal devices, keeping it practical and narrowly focused on industrial hardware.
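The all-or-nothing property in that quote follows from the fact that nothing in the check above is device-specific: every chip verifies the same published permit. Here is a sketch of that publication step, continuing the code above with more hypothetical names (`PERMIT_BOARD` stands in for a blockchain or any public bulletin board, and a real design might post zero-knowledge proofs of the signatures rather than the signatures themselves):

```python
# Continuation of the sketch above: one global permit per week, published to a
# shared board. PERMIT_BOARD is a hypothetical stand-in for a blockchain or
# other public channel; no signature ever names an individual device.
PERMIT_BOARD: dict[int, dict[str, bytes]] = {}  # week -> {signer name: signature}


def publish_permit(week: int, signatures: dict[str, bytes]) -> None:
    """Signers (or anyone relaying their signatures) post this week's permit."""
    PERMIT_BOARD[week] = signatures


def device_tick(registry: dict[str, Signer]) -> bool:
    """Every device evaluates the identical global permit, so either all
    devices pass the check this week or none of them do."""
    return chip_may_run(PERMIT_BOARD.get(current_week(), {}), registry)
```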
Meanwhile, Sam Altman remains optimistic, arguing that AI will ultimately help solve the very problems it creates, including the existential threats some fear. He expects the arrival of AGI to bring “surprisingly little” disruption to society. Still, Altman insists that AI should be regulated the way we regulate air travel: through an international agency dedicated to enforcing rigorous safety standards in AI development.
Buterin’s perspective offers pause for thought: his approach could freeze or slow AI progress if early signs point to disaster, without placing a heavy ongoing burden on developers. As these conversations unfold, one thing is certain: the next steps in AI regulation will play a pivotal role in shaping the future of the technology and its place in our world.