Some of the biggest names in tech are calling on artificial intelligence labs to pause training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.”

Elon Musk was among the dozens of tech leaders, professors and researchers who signed the letter, which was published by the Future of Life Institute, a nonprofit backed by Musk.

The letter comes just two weeks after OpenAI announced GPT-4, an even more powerful version of the technology underpinning its viral chatbot, ChatGPT. In early tests and a company demo, the technology was shown drafting lawsuits, passing standardized exams and building a working website from a hand-drawn sketch.

The letter said the pause should apply to AI systems “more powerful than GPT-4.” It also said independent experts should use the proposed pause to jointly develop and implement a set of shared protocols for AI tools that are safe “beyond a reasonable doubt.”

“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources,” the letter said. “Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.”

The letter said that if a pause cannot be put in place soon, governments should step in and institute a moratorium.

The wave of attention around ChatGPT late last year helped renew an arms race among tech companies to develop and deploy similar AI tools in their products. OpenAI, Microsoft and Google are at the forefront of this trend, but IBM, Amazon, Baidu and Tencent are working on similar technologies. A long list of startups is also developing AI writing assistants and image generators.

Artificial intelligence experts have become increasingly concerned about AI tools’ potential for biased responses, their ability to spread misinformation and their impact on consumer privacy. These tools have also sparked questions about how AI can upend professions, enable students to cheat, and shift our relationship with technology.

The letter hints at the broader discomfort inside and outside the industry with the rapid pace of advancement in AI. Some governing agencies in China, the EU and Singapore have previously introduced early versions of AI governance frameworks.

Correction: An earlier version of this story said Microsoft co-founder Bill Gates and OpenAI CEO Sam Altman had signed the letter. While the executives were initially listed as signatories, the nonprofit behind the letter later removed their names.