A group of artificial intelligence industry leaders and researchers, and even a few celebrities, have signed a statement warning of the potential for AI to pose an existential risk to humanity.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” read the statement published by the Center for AI Safety.
More than 100 people, including OpenAI CEO Sam Altman; the “godfather” of AI, Geoffrey Hinton; top executives and researchers from Google DeepMind and Anthropic; Kevin Scott, Microsoft’s chief technology officer; and musician Grimes have already signed the letter. The statement calls for a global effort to ensure that AI is developed and used safely and responsibly.
The statement argues that AI, if its development continues unchecked, could pose an existential risk in a number of ways. While we’re still a long way off from developing the sort of AI we see in science fiction, experts believe we need to get ahead of AI development before it reaches a point of no return.
Dan Hendrycks, director of the Center for AI Safety, tweeted that the statement acknowledges that experts should continue to address all types of AI risk, such as algorithmic bias or misinformation.
“Societies can manage multiple risks at once; it’s not ‘either/or’ but ‘yes/and,’” Hendrycks tweeted. “From a risk management perspective, just as it would be reckless to exclusively prioritize present harms, it would also be reckless to ignore them as well.”