Scientists and tech industry leaders, including high-level executives at Microsoft and Google, issued a new warning Tuesday about the perils that artificial intelligence poses to humankind.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement said.

Sam Altman, CEO of ChatGPT maker OpenAI, and Geoffrey Hinton, a computer scientist known as the godfather of artificial intelligence, were among the hundreds of leading figures who signed the statement, which was posted on the Center for AI Safety's website.

Worries about artificial intelligence systems outsmarting humans and running wild have intensified with the rise of a new generation of highly capable AI chatbots such as ChatGPT. That has sent countries around the world scrambling to come up with regulations for the developing technology, with the European Union blazing the trail with its AI Act, expected to be approved later this year.

The latest warning was intentionally succinct — just a single sentence — to encompass a broad coalition of scientists who might not agree on the most likely risks or the best solutions to prevent them, said Dan Hendrycks, executive director of the San Francisco-based nonprofit Center for AI Safety, which organized the move.

“There’s a variety of people from all top universities in various different fields who are concerned by this and think that this is a global priority,” Hendrycks said. “So we had to get people to sort of come out of the closet, so to speak, on this issue because many were sort of silently speaking among each other.”

More than 1,000 researchers and technologists, including Elon Musk, had signed a much longer letter earlier this year calling for a six-month pause on AI development, saying it poses “profound risks to society and humanity.”

That letter was a response to OpenAI's release of a new AI model, GPT-4, but leaders at OpenAI, its partner Microsoft and rival Google didn't sign on and rejected the call for a voluntary industry pause.

By contrast, the latest statement was endorsed by Microsoft's chief technology and science officers, as well as Demis Hassabis, CEO of Google's AI research lab DeepMind, and two Google executives who lead its AI policy efforts. The statement doesn't propose specific remedies but some, including Altman, have proposed an international regulator along the lines of the U.N. nuclear agency.

Some critics have complained that dire warnings about existential risks voiced by makers of AI have contributed to hyping up the capabilities of their products and distracting from calls for more immediate regulations to rein in their real-world problems.

Hendrycks said there's no reason why society can't manage the “urgent, ongoing harms” of products that generate new text or images, while also starting to address the “potential catastrophes around the corner.”

He compared it to nuclear scientists in the 1930s warning people to be careful even though “we haven’t quite developed the bomb yet.”

“Nobody is saying that GPT-4 or ChatGPT today is causing these sorts of concerns,” Hendrycks said. “We’re trying to address these risks before they happen rather than try and address catastrophes after the fact.”

The latest statement also was signed by experts in nuclear science, pandemics and climate change. Among the signatories is the writer Bill McKibben, who sounded the alarm on global warming in his 1989 book “The End of Nature” and warned about AI and companion technologies two decades ago in another book.

“Given our failure to heed the early warnings about climate change 35 years ago, it feels to me as if it would be smart to actually think this one through before it’s all a done deal,” he said by email Tuesday.

An academic who helped push for the statement said he used to be mocked for his concerns about AI existential risk, even as rapid advancements in machine-learning research over the past decade have exceeded many people’s expectations.

David Krueger, an assistant computer science professor at the University of Cambridge, said some of the hesitation in speaking out is that scientists don’t want to be seen as suggesting AI “consciousness or AI doing something magic,” but he said AI systems don’t need to be self-aware or set their own goals to pose a threat to humanity.

“I’m not wedded to some particular kind of risk. I think there’s a lot of different ways for things to go badly,” Krueger said. “But I think the one that is historically the most controversial is risk of extinction, specifically by AI systems that get out of control.”

___

O'Brien reported from Providence, Rhode Island. AP Business Writers Frank Bajak in Boston and Kelvin Chan in London contributed.
