Here’s the lowdown: Sam Altman, the CEO of OpenAI, is throwing some serious optimism our way, believing AI might just be clever enough one day to fix the very problems it creates, including potentially dangerous ones for humanity. Altman is banking on researchers sorting out how to prevent AI from going rogue and destroying us all, suggesting that achieving artificial general intelligence (AGI) could even happen sooner than expected. Remarkably, he also thinks this leap won’t cause a societal earthquake.
But while tech giants like Microsoft, Google, Anthropic, and OpenAI are betting big on AI, the absence of strict policies to guide this fast-developing technology has sparked some anxiety. Without clear guidelines, there’s a fear we could lose control if AI takes an unexpected turn.
During a chat at the New York Times DealBook Summit, Altman expressed his belief that our brightest minds will solve the existential threats posed by superintelligent AI systems. "I'm a little bit too optimistic by nature, but I assume they're going to figure that out," he noted, suggesting that AI might even be able to tackle these problems itself, with a touch of what he called "magic," or more accurately, deep learning.
In contrast, researcher Roman Yampolskiy puts the probability that AI wipes out humanity at a staggering 99.999999%, a figure known in the field as p(doom), the estimated likelihood that AI causes human extinction. Yampolskiy warns that it would be virtually impossible to control AI once it reaches superintelligence, hinting that the best course of action may be not to build it at all.
Yet, OpenAI seems determined to tick AGI off its to-do list despite differing opinions on its impact. Altman recently announced we might see AGI sooner than anticipated, believing it will pass with minimal drama. He even suggested that superintelligence might only be a “few thousand days away,” pushing back on the notion that safety concerns will erupt once AGI arrives.
Meanwhile, OpenAI's financial scene has been a rollercoaster. Despite losses approaching $5 billion, fresh investments from big players like Microsoft and NVIDIA pushed its valuation to a whopping $157 billion. However, this financial lifeline came with a caveat: OpenAI must convert into a for-profit entity within two years or refund investors. That condition could expose the company to risks like external interference or a takeover, with some analysts speculating that a Microsoft acquisition could happen within three years.
Amid this, Sam Altman has faced criticism, with some labeling his aspirations for AI as pie-in-the-sky. Adding fuel to the fire, OpenAI co-founder and Tesla CEO Elon Musk has filed lawsuits against OpenAI and Altman, accusing them of betraying the company's founding principles and engaging in racketeering.
Market dynamics are shifting too. Some experts see waning enthusiasm for AI investments and predict that capital could flow to other areas instead. One supporting report estimates that by 2025, around 30% of AI projects might be abandoned after the proof-of-concept stage.
Rumors abound that elite AI labs are feeling the squeeze, struggling to advance their AI models due to a dearth of high-quality training data. Yet Sam Altman dismisses these claims, asserting "There is no wall" blocking progress, a sentiment echoed by ex-Google CEO Eric Schmidt, who insists, "There's no evidence scaling laws have begun to stop."