AI Safety Norway - Organization for artificial intelligence safety

The following quotes from industry leaders, researchers, and prominent figures highlight the importance of AI safety and the potential risks of advanced artificial intelligence.

I think at the extreme end is the Nick Bostrom style of fear that an AGI could destroy humanity. I can't see any reason in principle why that couldn't happen.

Jul 2017 - Source
Dario Amodei
Co-founder & CEO, Anthropic

Mark my words — AI is far more dangerous than nukes.

Mar 2018 - Source
Elon Musk
CEO, Tesla, SpaceX & xAI

We need to use the engineering to bootstrap ourselves into a science of AIs before we build the super intelligent AI so that it doesn't kill us all.

Jun 2023 - Source
Emmett Shear
Co-founder & former CEO, Twitch; former Interim CEO of OpenAI

I think it's important that people understand it's not just science fiction; it's not just fear-mongering – it is a real risk that we need to think about, and we need to figure out in advance how to deal with it.

Jun 2023 - Source
Geoffrey Hinton
Godfather of AI, Nobel Prize Winner

OpenAI is training ever-more-powerful AI systems with the goal of eventually surpassing human intelligence across the board. This could be the best thing that has ever happened to humanity, but it could also be the worst if we don't proceed with care.

May 2024 - Source
Daniel Kokotajlo
Former OpenAI Researcher & AI Safety Advocate

I joined with substantial hope that OpenAI would rise to the occasion and behave more responsibly as they got closer to achieving AGI. It slowly became clear to many of us that this would not happen. I gradually lost trust in OpenAI leadership and their ability to responsibly handle AGI, so I quit.

May 2024 - Source
Daniel Kokotajlo
Former OpenAI Researcher & AI Safety Advocate

Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity.

Feb 2015 - Source
Sam Altman
CEO, OpenAI

Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.

May 2014 - Source
Stephen Hawking
Physicist & Cosmologist

We can still regulate the new AI tools, but we must act quickly. Whereas nukes cannot invent more powerful nukes, AI can make exponentially more powerful AI.

Apr 2023 - Source
Yuval Noah Harari
Historian & Author

My chance that something goes really quite catastrophically wrong on the scale of human civilization might be somewhere between 10 per cent and 25 per cent.

Oct 2023 - Source
Dario Amodei
Co-founder & CEO, Anthropic

The development of full artificial intelligence could spell the end of the human race... It would take off on its own, and re-design itself at an ever increasing rate.

Dec 2014 - Source
Stephen Hawking
Physicist & Cosmologist

We are seeing the most destructive force in history here. We will have something that is smarter than the smartest human.

Nov 2023 - Source
Elon Musk
CEO, Tesla, SpaceX & xAI

The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.

Oct 2016 - Source
Sam Harris
Philosopher & Author

It's coming either way; it's coming very soon, and I'm not sure society's quite ready for that yet.

Apr 2024 - Source
Demis Hassabis
CEO, Google DeepMind; Nobel Prize Winner

One of the biggest risks to the future of civilization is AI.

Feb 2023 - Source
Elon Musk
CEO, Tesla, SpaceX & xAI

The bad case — and I think this is important to say — is like lights out for all of us.

Jan 2023 - Source
Sam Altman
CEO, OpenAI

I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. A few decades after that, though, the intelligence is strong enough to be a concern.

Jan 2015 - Source
Bill Gates
Co-founder, Microsoft

We are finding new jailbreaks. Every day people jailbreak Claude, they jailbreak the other models. [...] I'm actually deeply concerned that in two or three years, we'll get to the point where the models can, I don't know, do very dangerous things with science, engineering, biology, and then a jailbreak could be life or death.

Jul 2023 - Source
Dario Amodei
Co-founder & CEO, Anthropic

It's hard to see how you can prevent the bad actors from using it for bad things. I think we need to worry a lot about that.

May 2023 - Source
Geoffrey Hinton
AI Pioneer, Turing Award Winner

We must take the risks of AI as seriously as other major global challenges, like climate change [...] It took the international community too long to coordinate an effective global response to this, and we're living with the consequences of that now. We can't afford the same delay with AI.

Oct 2023 - Source
Demis Hassabis
CEO, Google DeepMind; Nobel Prize Winner

Powerful AI systems have a good chance of deliberately and irreversibly disempowering humanity. This is a much more likely failure mode than humanity killing ourselves with destructive physical technologies.

Jun 2022 - Source
Paul Christiano
AI Alignment Researcher

I take the existential risk scenario seriously enough that I would pause it.

Aug 2023 - Source
Sam Harris
Philosopher & Author

If I see international coordination doesn't happen, or much of it, it'll be more likely than not that we go extinct.

Jul 2023 - Source
Dan Hendrycks
Director, Center for AI Safety

If you build something that is a lot smarter than us... not somewhat smarter, but as much smarter than we are as we are than dogs, a big jump... that thing is intrinsically pretty dangerous.

Jun 2023 - Source
Emmett Shear
Co-founder & former CEO, Twitch; former Interim CEO of OpenAI

An AI wouldn't necessarily have to hate us or want to kill us; we might just be in the way of, or irrelevant to, whatever alien goal it has.

Mar 2024 - Source
Scott Aaronson
Professor of Computer Science, UT Austin

While we are racing towards AGI or even ASI, nobody currently knows how such an AGI or ASI could be made to behave morally, or at least behave as intended by its developers and not turn against humans.

Jul 2024 - Source
Yoshua Bengio
Deep Learning Pioneer, Turing Award Winner

Since we don't really know how fast technological advances in AI or elsewhere (e.g., biotechnology) will come, it's best to get on with the task of better regulating these kinds of powerful tools right away.

Jan 2022 - Source
Yoshua Bengio
Deep Learning Pioneer, Turing Award Winner

We are, as a matter of fact, right now building creepy, super-capable, amoral psychopaths that never sleep, think much faster than us, can make copies of themselves, and have nothing human about them whatsoever. What could possibly go wrong?

Nov 2023 - Source
Max Tegmark
Professor, MIT; AI Safety Researcher

Even if we "win" the global race to develop these uncontrollable AI systems, we risk losing our social stability, security, and possibly even our species in the process.

Oct 2023 - Source
Max Tegmark
Professor, MIT; AI Safety Researcher

Join Us in Addressing AI Safety Challenges

Want to learn more about AI safety or get involved with our work? Reach out to us today.

Contact Us