Leading AI companies are trying to build godlike AI.
Leading AI companies are not just trying to build advanced chatbots and make money. Their aim has always been to build AIs far more powerful than humans, which they call AGI and superintelligence.
By AGI, they don't just mean AI that could perform all human tasks, automate science, and build nuclear weapons. Anthropic CEO Dario Amodei says AGI will give us “complete control over our own biology and neuroscience [and] could make us whoever and whatever we want to be”.
OpenAI CEO Sam Altman says whoever builds AGI could “capture the light cone of all future value in the universe”. This is not science fiction. What they're aiming for is AIs capable of dismantling the entire planet for resources, rebuilding humans into digital minds, and colonizing space.
They don’t know how they will control their creation.
AI companies admit “no one knows how to train very powerful AI systems to be robustly helpful, honest, and harmless.” There is already evidence that AIs can be deceptive and are incentivized to seek power. If an AI gets much smarter than humans, experts like Turing Award winner Geoffrey Hinton warn there is a significant risk it will get out of our control. This is not because the AI will hate us; it is because no one knows how to make it care about us.
As Ilya Sutskever, OpenAI Chief Scientist, puts it: “When the time comes to build a highway between two cities, we are not asking the animals for permission, we just do it, because it's important for us. And I think by default that's the kind of relationship that's going to be between us and AGIs which are truly autonomous … The future is going to be good for the AIs regardless; it would be nice if it would be good for humans as well.”
They admit building godlike AI might kill everyone.
OpenAI CEO Sam Altman wrote that “superhuman machine intelligence is the greatest threat to humanity’s existence.” DeepMind founder Shane Legg, Anthropic CEO Dario Amodei, and many others have expressed similar sentiments. Despite this, they’re forging ahead.
Godlike AI might only be a few years away.
Not only is this extremely dangerous, but the people building godlike AI may not be far from achieving it. Time and time again, things that people expected would be impossible for AIs have been solved soon after. In January 2023, economist Bryan Caplan predicted that it would take six years before an AI could pass his exams. Two months later, GPT-4 got a top score. In 2019, AIs were barely able to read and write. Today, AIs create award-winning photographs and art, code better than most programmers, and impersonate and deceive people.
There is no reason to believe that this progress will halt at human level; there is no law of physics that says human intelligence is the limit. Indeed, AIs are clearly already superhuman in many domains. They have absorbed nearly everything humans have ever written, can recognize images faster and more accurately than any human, can perform millions of complex tasks in parallel, and more.