Expert Quotes on AI Safety
Insights and warnings from leading experts on the risks and challenges of advanced AI.
The following quotes from industry leaders, researchers, and prominent figures highlight the importance of AI safety and the potential risks of advanced artificial intelligence.
I think at the extreme end is the Nick Bostrom style of fear that an AGI could destroy humanity. I can't see any reason in principle why that couldn't happen.
Mark my words — AI is far more dangerous than nukes.
We need to use the engineering to bootstrap ourselves into a science of AIs before we build the super intelligent AI so that it doesn't kill us all.
I think it's important that people understand it's not just science fiction; it's not just fear-mongering – it is a real risk that we need to think about, and we need to figure out in advance how to deal with it.
OpenAI is training ever-more-powerful AI systems with the goal of eventually surpassing human intelligence across the board. This could be the best thing that has ever happened to humanity, but it could also be the worst if we don't proceed with care.
I joined with substantial hope that OpenAI would rise to the occasion and behave more responsibly as they got closer to achieving AGI. It slowly became clear to many of us that this would not happen. I gradually lost trust in OpenAI leadership and their ability to responsibly handle AGI, so I quit.
Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity.
Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.
We can still regulate the new AI tools, but we must act quickly. Whereas nukes cannot invent more powerful nukes, AI can make exponentially more powerful AI.
My chance that something goes really quite catastrophically wrong on the scale of human civilization might be somewhere between 10 per cent and 25 per cent.
The development of full artificial intelligence could spell the end of the human race... It would take off on its own, and re-design itself at an ever increasing rate.
We are seeing the most destructive force in history here. We will have something that is smarter than the smartest human.
The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.
It's coming. Either way, it's coming very soon, and I'm not sure society's quite ready for that yet.
One of the biggest risks to the future of civilization is AI.
The bad case — and I think this is important to say — is like lights out for all of us.
I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. A few decades after that though the intelligence is strong enough to be a concern.
We are finding new jailbreaks. Every day people jailbreak Claude, they jailbreak the other models. [...] I'm actually deeply concerned that in two or three years, we'll get to the point where the models can, I don't know, do very dangerous things with science, engineering, biology, and then a jailbreak could be life or death.
It's hard to see how you can prevent the bad actors from using it for bad things. I think we need to worry a lot about that.
We must take the risks of AI as seriously as other major global challenges, like climate change [...] It took the international community too long to coordinate an effective global response to this, and we're living with the consequences of that now. We can't afford the same delay with AI.
Powerful AI systems have a good chance of deliberately and irreversibly disempowering humanity. This is a much more likely failure mode than humanity killing ourselves with destructive physical technologies.
I take the existential risk scenario seriously enough that I would pause it.
If I see international coordination doesn't happen, or much of it, it'll be more likely than not that we go extinct.
If you build something that is a lot smarter than us, not somewhat smarter but as much smarter than we are as we are than dogs, a big jump. That thing is intrinsically pretty dangerous.
An AI wouldn't necessarily have to hate us or want to kill us; we might just, you know, be in the way or irrelevant to whatever alien goal it has.
While we are racing towards AGI or even ASI, nobody currently knows how such an AGI or ASI could be made to behave morally, or at least behave as intended by its developers and not turn against humans.
Since we don't really know how fast technological advances in AI or elsewhere (e.g., biotechnology) will come, it's best to get on with the task of better regulating these kinds of powerful tools right away.
We are, as a matter of fact, right now, building creepy, super-capable, amoral psychopaths that never sleep, think much faster than us, can make copies of themselves and have nothing human about them whatsoever. What could possibly go wrong?
Even if we "win" the global race to develop these uncontrollable AI systems, we risk losing our social stability, security, and possibly even our species in the process.
Join Us in Addressing AI Safety Challenges
Want to learn more about AI safety or get involved with our work? Reach out to us today.