Editor’s note: WRAL TechWire’s newest contributor is Dr. Sarah Glova, a globally recognized speaker, successful entrepreneur, university instructor, and business consultant. A seasoned educator and entrepreneur, Sarah is CEO of the award-winning digital media firm Reify Media. With a Ph.D. in Instructional Technology and a Master of Science in Technical Communication, she is dedicated to cultivating forward-thinking work environments.

+++

RALEIGH – An open letter signed by prominent tech leaders and artificial intelligence (AI) researchers exploded into headlines last week. The letter warns that large-scale AI projects “can pose profound risks to society and humanity” if not properly managed.

Among the signatories were Tesla CEO Elon Musk and Apple co-founder Steve Wozniak.

Was this letter a surprise to the AI community? I reached out to James Kotecki, a marketing and communications expert who has worked with local AI and machine learning companies like Automated Insights, Infinia ML, and now Agerpoint, to ask him whether he was surprised by the letter—or whether he expected it.


“I’m not surprised,” Kotecki told me in an email interview. “There’s long been a concern among some in tech that, as the letter says, ‘nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us.’ If you were worried before, the recent and rapid advances from OpenAI haven’t exactly eased your mind.”

OpenAI is the US-based AI research lab responsible for GPT, the series of language models behind AI tools like ChatGPT.

The open letter, published by the nonprofit Future of Life Institute last Wednesday, calls for a pause on “the training of AI systems more powerful than GPT-4.”


What is GPT-4?

GPT-4 is the fourth generation in the GPT series, a line of deep learning-based language models developed by OpenAI. These models are trained on vast amounts of text data and use a neural network architecture known as a transformer to process sequences of words and understand their context.
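
To make that concrete, here is a minimal sketch of transformer-based text generation in Python, using the open-source Hugging Face transformers library. GPT-4 itself is not publicly downloadable, so the much smaller open-source GPT-2 model stands in, and the prompt text is just an example:

    # A minimal sketch of transformer-based text generation.
    # GPT-4 is not publicly downloadable, so the small open-source
    # GPT-2 model stands in via the Hugging Face transformers library.
    from transformers import pipeline

    # Load a pretrained transformer language model for text generation.
    generator = pipeline("text-generation", model="gpt2")

    # The model predicts likely next words, one token at a time,
    # conditioned on the context of everything that came before.
    prompt = "Artificial intelligence researchers are calling for"
    result = generator(prompt, max_length=40, num_return_sequences=1)
    print(result[0]["generated_text"])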

GPT-4 is “multimodal,” meaning it can handle more than just words in a sentence: it can accept images as input alongside written prompts and reason about text and visuals together.

Beyond generating human-like conversation, GPT-4 can support tasks like language translation, text summarization, and creative writing.

OpenAI released GPT-4 on March 14 and has described it as its most advanced system to date, but access is currently limited. Subscribers to ChatGPT Plus, the paid version of the popular chatbot ChatGPT, can use GPT-4, while access to its commercial API is being granted through a waitlist.
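
For developers who do clear the waitlist, a GPT-4 request looks roughly like the sketch below, written against OpenAI’s Python library as it existed at the time of writing; the API key and prompt are placeholders, and actual access depends on what OpenAI grants:

    # A rough sketch of a GPT-4 API call, assuming waitlist access
    # has been granted. The API key below is a placeholder.
    import openai

    openai.api_key = "YOUR_API_KEY"

    # Send one question through the chat completions endpoint.
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user",
                   "content": "Summarize the AI pause letter in one sentence."}],
    )
    print(response["choices"][0]["message"]["content"])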


Two prominent NC tech leaders signed

According to the public list of signatories, at least two prominent North Carolina tech leaders signed the letter: Berndt Mueller, J.B. Duke Professor of Physics at Duke University, and Janna Anderson, Executive Director of the Imagining the Internet Center at Elon University.

More than 3,000 vetted signatures had been posted to the open letter when this article was published, and the true number of signers is likely higher. According to the Future of Life Institute website, signatures are still being collected, but the site is “pausing their appearance on the letter” due to high demand.

The way Kotecki described it to me, the AI threat that tech leaders are concerned about is less like an invasion and more like a race.

“If AI is an existential threat, it’s not like one of those alien movies where all the world’s countries come together to fight the invasion,” said Kotecki. “It’s more like the first country—or company—to successfully contact the aliens gets to use their awesome technology to rule the world.”


Not the only movement

While the open letter has gained big headlines, the pushback against the rapid advancement of AI has been ongoing. In late 2021, AI scholar Timnit Gebru, who was famously fired by Google in December 2020 after drawing attention to big tech’s control and manipulation of AI, started the “Slow AI” movement.

The movement, aimed at a more cautious approach to AI development, gained momentum with the founding of the Distributed AI Research Institute (DAIR) under Gebru’s leadership. IEEE Spectrum’s Eliza Strickland detailed the “Slow AI” movement and its principles in an article last year, emphasizing the need for a more measured and considerate approach to AI’s integration into society.

“This is a reminder that women, Black scholars are the pioneers of the ‘Slow AI’ movement not the powerful wealthy men who signed on to this letter,” wrote New York-based Mia Shah-Dand, CEO of Lighthouse3 and founder of the Women in AI Ethics global community, in a LinkedIn post on Wednesday.

Her post received hundreds of likes in just a few hours.

Why six months?

In short, the open letter published on Wednesday requests a pause in training AI systems more powerful than GPT-4 for at least six months. During that time, it asks that AI experts and governments work together to develop:

  • safety protocols that are audited and overseen by independent experts
  • governance systems for AI that will protect society from the risks posed by these advanced systems

I asked Kotecki to reflect on that window. The letter asks for a pause in ‘giant AI’ research lasting at least six months so these goals can be accomplished. What does he think of that timeframe?

“It’s both too short and too long,” he told me.

Too short, he says, because the world’s governments won’t be able to figure out AI governance in six months.

“Citizens of many countries might wonder if their political system is capable of passing a budget in six months, let alone averting ‘loss of control of our civilization’ to AI,” he said.

And too long because “bad actors” who ignore the call to pause could use the time to leap forward.

“A lot can happen in six months,” said Kotecki. “ChatGPT has only been out for four. Do we really want to impede the progress of the organizations who are willing to abide by a ban? Sure, a super intelligence trained by a US corporation could be bad. But one trained under an authoritarian regime could be a lot worse.”

Will it work?

Even if the open letter does not result in an AI research cease-fire, it is sparking discussion.

“The letter makes more sense to me as a symbolic call to action—a way to jolt people and governments into paying more attention,” said Kotecki. “We’re unlikely to pause the advance of AI technology, but we can at least pause a moment for our own reflection.”

What does ChatGPT think?

Since I’m traveling abroad, I asked Kotecki to share these thoughts with me over email.

He added a final note to his written reply:

“I didn’t use ChatGPT to craft my answers,” he told me. “But the temptation was strong.”

Curious, I went to ChatGPT and showed the chatbot the open letter. Then I asked, “Can you please read this letter and tell me how you would respond if you were an AI researcher?” (It doesn’t hurt to say please to your potential future overlord.)

ChatGPT responded with this:

“As an AI researcher, I would carefully consider the concerns raised about the risks posed by advanced AI systems and the need for responsible development and governance, while also continuing to work towards making AI systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.”

A pretty answer, but noncommittal. So I asked ChatGPT a more direct question: “As an AI researcher, would you sign the letter?”

The bot responded:

“As an AI language model, I cannot have personal beliefs or sign letters. However, I can tell you that the letter raises important questions and concerns about the development and use of advanced AI systems, and it is up to individual researchers to consider these issues in their work.”

Indeed.