We need stronger regulatory frameworks, comparable to those governing other high-risk industries, to ensure AI development proceeds safely and beneficially. This would eventually require cooperation among all major nation states to strictly regulate the development and deployment of generally capable AI systems.
Shaping the Future of AI Safety in Norway
Leading AI researchers warn that we are racing toward potential catastrophe. We bring together researchers, policymakers, and industry experts to ensure artificial intelligence is developed safely and benefits all of humanity. Join us in building a future where AI serves as a force for good.
The statement referenced above, the 2023 Statement on AI Risk ("Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war"), was endorsed by over 350 experts, including AI leaders like Sam Altman (OpenAI), Demis Hassabis (Google DeepMind), Dario Amodei (Anthropic), and Turing Award winners Geoffrey Hinton and Yoshua Bengio.
It aims to highlight the potential existential risks of advanced AI and to encourage global prioritization of AI safety.
The Challenge We Face
The AI Arms Race
Competition between leading AI labs and nation states is driving a race toward increasingly advanced AI systems at the expense of everyone's safety.
Existential Threat
Top AI scientists and industry leaders have repeatedly warned about the catastrophic and existential dangers of increasingly advanced AI.
The Current Path
The path we're currently on is not safe and involves unacceptable risks to humanity.
AI companies are racing to build Artificial Superintelligence (ASI): systems more intelligent than all of humanity combined. No method currently exists to contain or control smarter-than-human AI systems. If these companies succeed, the consequences would be catastrophic. Top AI scientists, world leaders, and even the CEOs of AI companies themselves warn that this could lead to human extinction.
What Leaders Say About AI Safety
Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity.
Feb 2015 - Source
Mark my words — AI is far more dangerous than nukes.
Mar 2018 - Source
My chance that something goes really quite catastrophically wrong on the scale of human civilization might be somewhere between 10% and 25%.
Oct 2023 - Source
We are as a matter of fact, right now, building creepy, super-capable, amoral, psychopaths that never sleep, think much faster than us, can make copies of themselves and have nothing human about them whatsoever, what could possibly go wrong?
Nov 2023 - Source
We must take the risks of AI as seriously as other major global challenges, like climate change [...] It took the international community too long to coordinate an effective global response to this, and we're living with the consequences of that now. We can't afford the same delay with AI.
Oct 2023 - Source
while we are racing towards AGI or even ASI, nobody currently knows how such an AGI or ASI could be made to behave morally, or at least behave as intended by its developers and not turn against humans
Jul 2024 - Source