Companies are just trying to build advanced chatbots, right?
OpenAI CEO Sam Altman
“Unless we destroy ourselves first, superhuman AI is going to happen, genetic enhancement is going to happen, and brain-machine interfaces are going to happen” (Sam Altman's blog Dec 2017)
“Now is a good time to start thinking about the governance of superintelligence … Given the picture as we see it now, it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations. In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past.” (OpenAI blog, May 2023)
OpenAI Chief Scientist Ilya Sutskever
“One thing that I think some people will choose to do is to become part AI” (The Lunar Society March 2023)
“The picture which I would imagine is you have some kind of different entities, different countries or cities, and the people that live there vote for what the AGI that represents them should do, and then the AGI that represents them goes and does it.” (Lex Fridman Podcast May 2020)
DeepMind CEO Demis Hassabis
“I think one day there will come a point where an AI system may solve or come up with something like general relativity off its own bat, not just by averaging everything off the internet” (DeepMind: The Podcast March 2022)
DeepMind Cofounder and Chief Scientist Shane Legg
“Machine intelligence could bring unprecedented wealth and opportunity if used constructively and safely. Alternatively, it could bring about some kind of a nightmare scenario.” (Shane Legg's blog Vetta June 2008)
From talk slides: “If we can build human level, we can almost certainly scale up to well above human level. A machine well above human level will understand its design and be able to design even more powerful machines. We have almost no idea how to deal with this.” (Machine Super Intelligence, 2012)
Anthropic CEO Dario Amodei
“If we’re able to build something that was able to match or exceed our intelligence, then that would … would give us a much more complete control over our own biology and neuroscience could make us whoever and whatever we want to be, could end conflict, war or diseases, that stuff. That sounds a little utopian but I think if we push this technology far enough and all goes well, then that will lead to a result either immediately when we build it or over a somewhat longer period of time. I don’t see any reason why those things can’t happen.” (80,000 Hours July 2017)
Open Philanthropy CEO and former OpenAI Board Member Holden Karnofsky
“During the century we're in right now, we will develop technologies that cause us to transition to a state in which humans as we know them are no longer the main force in world events. This is our last chance to shape how that transition happens. Whatever the main force in world events is (perhaps digital people, misaligned AI, or something else) will create highly stable civilizations that populate our entire galaxy for billions of years to come.” (LessWrong Sep 2021)
“AI advances this century could quickly lead to digital people”, e.g., “extremely detailed, realistic computer simulations of specific people.” (LessWrong Sep 2021)