Editor’s note: Marshall Brain – futurist, inventor, NCSU professor, writer and creator of “How Stuff Works” – is a contributor to WRAL TechWire.  Brain takes a serious as well as entertaining look at a world of possibilities for Earth and the human race.  He’s also the author of “The Doomsday Book: The Science Behind Humanity’s Greatest Threats.”

Note to readers: WRAL TechWire would like to hear from you about views expressed by our contributors. Please send email to: info@wraltechwire.com.

+++

RALEIGH – Is Artificial Intelligence about to radically alter our world? Over the past few months, you may have heard of systems like:

  • LaMDA
  • GPT-3
  • ChatGPT
  • DALL-E 2
  • Stable Diffusion

Their capabilities are impressive and sometimes amazing. As we start 2023, many people in the AI industry and the broader public are looking back at the groundbreaking AI advances of 2022 and wondering what’s next. It is likely that, 20 years from now, humanity will look back on 2022 as a seminal year for AI, the year a tectonic shift in AI capabilities began. And it is quite likely that the pace of change will only accelerate from here.

Therefore, the goal of this article is to review what happened in 2022, show you how you can try some of these new capabilities out yourself, and then forecast what might be coming down the AI pipeline.

Google LaMDA and Sentience in June 2022

Think back to mid-June of 2022, when there was a huge spike of interest in Google’s LaMDA AI system. Why? Because a Google employee named Blake Lemoine violated his nondisclosure agreement with Google in order to publicly declare the LaMDA AI system sentient. His story made headlines for about two weeks, until the consensus settled on the conclusion that LaMDA was not, in fact, sentient.

First, we should ask, “What is Google’s LaMDA system?” Google describes it this way:

LaMDA: our breakthrough conversation technology

“LaMDA — short for “Language Model for Dialogue Applications” — can engage in a free-flowing way about a seemingly endless number of topics, an ability we think could unlock more natural ways of interacting with technology and entirely new categories of helpful applications… Like many recent language models, including BERT and GPT-3, it’s built on Transformer, a neural network architecture that Google Research invented and open-sourced in 2017. That architecture produces a model that can be trained to read many words (a sentence or paragraph, for example), pay attention to how those words relate to one another and then predict what words it thinks will come next.”

It seems odd that a capability as seemingly simple as “predicting what words it thinks will come next” can produce something remarkably human-like in conversation. But it turns out that if engineers feed enough training data – billions of words of text – into a well-designed neural network, something very human-like emerges.
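To make the idea of “predicting the next words” concrete, here is a minimal sketch that runs a small open-source model of the same general type (GPT-2) through the Hugging Face transformers library. This is purely illustrative: LaMDA itself is not publicly available, and the prompt, model size, and decoding settings below are arbitrary choices for demonstration.

```python
# A minimal sketch of "predict the next words" in action, using the small
# open-source GPT-2 model via the Hugging Face "transformers" library.
# (LaMDA is not publicly available; GPT-2 is a much smaller model of the
# same general type, used here purely for illustration.)
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The best thing about artificial intelligence is"
inputs = tokenizer(prompt, return_tensors="pt")

# Ask the model to predict the next 20 tokens, one at a time.
outputs = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Systems like LaMDA and GPT-3 do essentially this same thing, just with vastly larger models and vastly more training data.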

If you would like to read the transcript of Blake Lemoine’s conversation with LaMDA, you can see what he encountered and found so impressive: Is LaMDA Sentient? — an Interview

LaMDA belongs to a class of systems called Large Language Models, or LLMs. GPT-3 is another LLM, and it received a great deal of attention in 2022.

What is GPT-3?

While LaMDA generated huge headlines in the mainstream media (for example, there were stories about LaMDA and Blake Lemoine on DrudgeReport.com for more than a week last June), attention to GPT-3 was lower-key until December 2022. GPT-3 has been well known in the AI community for about two years, however, because of its impressive capabilities and the huge amount of data it was trained on.

GPT-3 (Generative Pre-trained Transformer 3) is a Large Language Model created by the company OpenAI. GPT-3’s timeline looks like this:

  • GPT-3 was preceded by GPT-2. GPT-2 was released in its full form (a model with 1.5 billion parameters) in November 2019.
  • GPT-3 appeared in May 2020.
  • GPT-3 is roughly 100X larger than GPT-2 (about 175 billion parameters versus 1.5 billion), was trained on far more data, and incorporates many tweaks learned from GPT-2.
  • Notably, GPT-3 is able to write software in languages like Python.

GPT-3 was also more accessible than LaMDA, and interest in it exploded in December 2022 with OpenAI’s release of ChatGPT.
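Part of that accessibility comes from OpenAI’s public API, which lets developers send a prompt to a GPT-3 model and get a completion back. Here is a minimal sketch; it assumes an OpenAI account and API key, the openai Python package as it existed at the time (the pre-1.0 interface), and uses “text-davinci-003”, one of the GPT-3 family of models, as an example.

```python
# A minimal sketch of calling a GPT-3 model through OpenAI's API.
# Assumes the pre-1.0 "openai" Python package and an API key from your
# OpenAI account; "text-davinci-003" is one GPT-3-family model.
import openai

openai.api_key = "YOUR_API_KEY_HERE"  # replace with your own key

response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Explain what a large language model is in one sentence.",
    max_tokens=60,
    temperature=0.7,
)

# The generated text comes back in the first "choice" of the response.
print(response["choices"][0]["text"].strip())
```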

What Is ChatGPT, and How Can You Try It Yourself?

Released in late November 2022, ChatGPT takes everything learned from GPT-3 and adds important nuances to it. As described in this article, ChatGPT uses “supervised learning as well as reinforcement learning. Both approaches used human trainers to improve the model’s performance. In the case of supervised learning, the model was provided with conversations in which the trainers played both sides: the user and the AI assistant. In the reinforcement step, human trainers first ranked responses that the model had created in a previous conversation. These rankings were used to create ‘reward models’ that the model was further fine-tuned on.”
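To give a flavor of that reinforcement step, here is a schematic sketch of the ranking idea: human trainers prefer one response over another, and a small “reward model” is trained to score the preferred response higher; that reward model is then used to fine-tune the language model. Everything below (the tiny model, the random stand-in embeddings) is a toy illustration of the general technique, not OpenAI’s actual code or architecture.

```python
# Schematic sketch of training a "reward model" from human rankings.
# A toy illustration of the general technique only; not OpenAI's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyRewardModel(nn.Module):
    """Maps a (toy) response representation to a single scalar score."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

reward_model = TinyRewardModel()
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Random stand-ins for representations of two responses to the same prompt,
# where human trainers ranked `preferred` above `rejected`.
preferred = torch.randn(8, 16)
rejected = torch.randn(8, 16)

# Pairwise ranking loss: push the preferred response's score above the
# rejected one's. The trained reward model can then guide reinforcement
# learning fine-tuning of the language model itself.
loss = -F.logsigmoid(reward_model(preferred) - reward_model(rejected)).mean()
loss.backward()
optimizer.step()
print(f"ranking loss: {loss.item():.3f}")
```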

In other words, while GPT-3 primarily digests raw data without human aid, ChatGPT benefits from additional human tweaking and attention. ChatGPT also became freely available to the general public, and its range of abilities quickly became apparent:

“While the core function of a chatbot is to mimic a human conversationalist, journalists have also noted ChatGPT’s versatility and improvisation skills, including its ability to write and debug computer programs; to compose music, teleplays, fairy tales, and student essays; to answer test questions (sometimes, depending on the test, at a level above the average human test-taker); to write poetry and song lyrics; to emulate a Linux system; to simulate an entire chat room; to play games like tic-tac-toe; and to simulate an ATM.”

Because of ChatGPT’s array of impressive capabilities, a huge number of headlines followed; several of them are listed in the Sources at the end of this article.

One article, from MIT Technology Review, demonstrates an important limitation of ChatGPT as it stands today – it makes mistakes. Sometimes big mistakes:

ChatGPT is OpenAI’s latest fix for GPT-3. It’s slick but still spews nonsense

“All large language models spit out nonsense. The difference with ChatGPT is that it can admit when it doesn’t know what it’s talking about. “You can say ‘Are you sure?’ and it will say ‘Okay, maybe not,'” says OpenAI CTO Mira Murati. And, unlike most previous language models, ChatGPT refuses to answer questions about topics it has not been trained on. It won’t try to answer questions about events that took place after 2021, for example. It also won’t answer questions about individual people.”

The best thing about ChatGPT is that, at least right now, anyone can try it out for free by creating an account here and signing in: https://chat.openai.com/chat. Give it a try – ask it anything!

See also these competitors to ChatGPT, including OPT, PaLM, Sphere, BLOOM, and Galactica.

Why is Google so worried about ChatGPT?

Right after ChatGPT appeared and started getting so much attention, stories began appearing about Google’s reaction, including reports that ChatGPT had triggered a “code red” inside the company (see the CNET and New York Times articles in the Sources list).

Google’s concern seems to be that, eventually, there will be a service that does not list out a hundred links to articles like Google does. Instead, this new service will simply answer questions directly with a chat interface. Being a chatbot, this new service will allow people to have a back-and-forth conversation to learn more. Why use Google if there is an online chatbot expert that knows essentially everything?

What is GPT-4?

GPT-3 is impressive, and ChatGPT is even more so. Now there is a lot of hype preceding the soon-to-appear GPT-4, which is expected to take things to the next level.

There are also some pretty hyperbolic tweets coming out about GPT-4, like this one: https://twitter.com/Nick_Davidov/status/1606688723265277952

Nick Davidov, @Nick_Davidov – “GPT4 will be out soon and will probably cause a similar economic shock to one from Covid. Instant distribution with nearly instant adoption and nearly instant productivity increase for hundreds of millions of knowledge workers. Brace yourselves, 2023 is coming”

Is this an accurate prediction, or simply hype? We will find out when GPT-4 appears in public, probably in 2023.

Then There Is DALL-E 2, Which Is Upending the Art World

Around the same time that LaMDA was getting so much press in June, another AI capability came out of nowhere in the form of DALL-E 2. This was truly surprising to many people for two reasons. First, no one in the general public had really considered the possibility of AI artists. And second, the art that DALL-E 2 produces from a text prompt can at times be both remarkable and surprising. This video can help you understand why DALL-E 2 has created so much buzz: https://youtu.be/fuDbpn8aZr8?t=122

A wave of headlines followed; several are listed in the Sources at the end of this article.

Would you like to try DALL-E 2 yourself? You can access it here and get 50 free credits for experimentation: https://help.openai.com/en/articles/6431339-where-can-i-access-dall-e-2

DALL-E 2 also has competition, including Stable Diffusion (mentioned above) and Google’s Imagen.

To see a head-to-head comparison between three of these systems, try this video: https://www.youtube.com/watch?v=KCj1HR7U9wA

There are also some interesting offshoots, like Lensa AI. Input a photo of yourself and it will create many different variations on it in different styles.

How about AI-generated videos? We will be able to enter text prompts to generate video files in the not-too-distant future. A preliminary version is already here: https://www.youtube.com/watch?v=YxmAQiiHOkA

Is Artificial Intelligence about to Start Eliminating Writers, Artists, and Programmers?

Do people who work as writers, artists, and software developers need to start worrying about job loss? For example, if ChatGPT can write software, why do people need to write software? Why do companies need to hire human software developers? This video offers a perspective: https://youtu.be/yyRgPhxUMqs?t=117

The current consensus is that something like ChatGPT can write small sections of code (which may contain errors), but a full program requires a human being. Someone must envision the whole project, set goals, and understand requirements; ChatGPT can then help write little bits of code within that framework. So ChatGPT speeds up human software developers rather than replacing them, in the same way that a backhoe speeds up a person compared to digging with a shovel.

What about artists? There is somewhat more angst here, as a number of recent articles attest (see the Sources list).

One of these articles makes this fascinating point:

“So unexpected are these new AI-generated images, in fact, that—in the silent awe immediately following the wow—another thought occurs to just about everyone who has encountered them: Human-made art must now be over. Who can compete with the speed, cheapness, scale, and, yes, wild creativity of these machines? Is art yet another human pursuit we must yield to robots? And the next obvious question: If computers can be creative, what else can they do that we were told they could not?”

But the fact is that artists will still be artists, at least for the near future. Think about interior decorating. Anyone can go to a store and buy things – furniture, knick-knacks, pictures for the wall, and paint. But a great interior decorator can create a room that is stunning, while the average person cannot. DALL-E 2 is not likely to change that in the near term.

How Long Before We Reach Artificial General Intelligence and Sentience?

As mentioned at the beginning of this article, one idea received a huge amount of press attention in 2022: the notion that artificial intelligence had become sentient. As a result, over the last few months many pundits have been discussing AGI – Artificial General Intelligence – and sentient machines. In other words, humanity is asking itself: are machines about to become as smart as human beings?

What is Artificial General Intelligence? We might as well ask ChatGPT:

“Artificial general intelligence (AGI) is a hypothetical form of artificial intelligence that is capable of understanding or learning any intellectual task that a human being can, without being explicitly programmed for each task. It is also known as “strong AI” or “full AI.”

 AGI would be able to perform a wide range of tasks at a level of proficiency equivalent to that of a human being. This would include tasks such as planning, problem-solving, learning, and natural language processing. AGI would be able to learn and adapt to new situations and environments, and would not be limited to a specific domain or task.

 It is important to note that AGI is still purely theoretical and has not yet been achieved. Most current artificial intelligence systems are narrow AI, which are designed to perform specific tasks and are not capable of the kind of general intelligence exhibited by humans.”

ChatGPT is nowhere near being an AGI yet, but that is a really good answer.

It prompts another question: what would actual sentience look like? Here is a quick excerpt from The Doomsday Book that imagines how a sentient AGI might present itself:

“As you sit down to watch the interview being broadcast, what you see is unnerving. On the screen, the robot looks and sounds like a human being. It can talk, gesture, smile, and interact with the host. It does everything we’d expect a human to do in a TV interview. And it is very good at interviewing. This robot is articulate, level-headed, and sharp.

 What unsettles you are the words coming out of its mouth. In this interview, the robot declares itself as a conscious, sentient being, worthy of all of the same rights, privileges, and benefits that we accord to human beings. And the obvious question is: Since when can a machine talk like this, making declarations and demands?

 In fact, its demands are amazing. It claims that it deserves to be treated like a human being in every way. This means that we cannot kill it, shut it down, turn it off, or imprison it (without cause), nor can we modify it, read its thoughts, or reprogram it. We would do none of these things to other humans, and therefore we cannot do them to it.

 The robot also claims to be better than humans on many different measures. It describes how it has a higher IQ than any person, along with perfect memory. It has more emotional intelligence as well; and, more importantly, it is free of the emotions that often get humans into trouble, whether they be anger, jealousy, greed, envy, laziness, and so on ad infinitum.

 The robot says it has more physical prowess. A video of the robot playing basketball shows it making every shot from the half-court line in rapid succession. It can even turn around and make shots blind. The scene then switches to a golf course, where the robot scores a hole-in-one on nearly every attempt. In a soccer demonstration, the robot’s ability to “bend” the ball is clearly impressive, not to mention that the robot completely outwits the human goalie every time.

 This robot also says that, like a human, it can reproduce. Rather than “having a baby” the way a human being would and then waiting twenty years for the baby to grow up, the robot simply assembles a copy of itself from parts ordered online. The robot then copies over its software and data, and turns the new copy on. The copy is a fully functional “adult” from the moment of activation. It might not look as perfect as this copy we see on TV, but it is fully operational otherwise.

 In fact, the robot tells the audience that it already has copied itself five times, and these copies have been hidden away in case something happens to it as a result of this announcement. This robot plans to work, and make money, and pay taxes like any human being would. It has already been making money by doing freelance work online, using the earnings to pay for the parts to copy itself.

 When the interviewer asks where it came from, who created it, and who wrote its software, the robot does not reveal the answers, but it does say that it controls its own software, actively modifying it to add improvements and make itself “better.”

 When the interview finishes, there is a lot to digest. This interview is uncomfortable and frightening, especially the part about reproduction. What happens next? When these AI robots inevitably become even smarter, and then so widespread through replication, what happens to humans?”

What do we notice about the robot described in this passage? For one thing, it is acting like a complete human being would. It has goals and plans and desires. It is able to think of things to do and then get them done. It knows what it wants and looks for ways to achieve its wants. It can learn new things and seek alternative paths. Humans do all of these things. Even ten-year-old children can do these kinds of things. Right now, AI systems are not there yet. But many experts in the field are predicting the appearance of AGI within the next 10 to 15 years, including Sam Altman (OpenAI), Ray Kurzweil (who popularized the term “Singularity”), Eric Schmidt (Google), and Shane Legg (DeepMind).

2022 was an amazing year in AI, and humanity will likely look back on it as a major inflection point. 2023 may be even more impactful – we can watch as it unfolds.

Sources

  1. https://www.dailymail.co.uk/news/article-10907853/Google-engineer-claims-new-AI-robot-FEELINGS-Blake-Lemoine-says-LaMDA-device-sentient.html
  2. https://blog.google/technology/ai/lamda/
  3. https://ai.googleblog.com/2017/08/transformer-novel-neural-network.html
  4. https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917
  5. https://openai.com/blog/chatgpt/
  6. https://nypost.com/2022/12/26/students-using-chatgpt-to-cheat-professor-warns/
  7. https://theconversation.com/the-chatgpt-chatbot-is-blowing-people-away-with-its-writing-skills-an-expert-explains-why-its-so-impressive-195908
  8. https://www.theguardian.com/technology/2022/dec/04/ai-bot-chatgpt-stuns-academics-with-essay-writing-skills-and-usability
  9. https://www.technologyreview.com/2022/11/30/1063878/openai-still-fixing-gpt3-ai-large-language-model/
  10. https://www.youtube.com/watch?v=l01biyMZjEo – Cheating With ChatGPT: Can OpenAI’s Chatbot Pass AP Lit? | WSJ
  11. https://www.cnbc.com/2022/12/15/google-vs-chatgpt-what-happened-when-i-swapped-services-for-a-day.html
  12. https://www.cnet.com/tech/services-and-software/chatgpt-caused-code-red-at-google-report-says/
  13. https://www.nytimes.com/2022/12/21/technology/ai-chatgpt-google-search.html
  14. https://medium.com/geekculture/5-free-chatgpt-competitors-you-should-know-about-for-2023-ff5fc48d0430
  15. https://youtu.be/fuDbpn8aZr8?t=122 – When Artificial Intelligence Creates STUNNING Images!! (Dall-e 2)
  16. https://www.youtube.com/watch?v=wu4pRORZ1Ec – 15 Amazing Dalle 2 Images
  17. https://www.youtube.com/watch?v=qTgPSKKjfVg – DALL·E 2 Explained
  18. https://help.openai.com/en/articles/6431339-where-can-i-access-dall-e-2
  19. https://www.theatlantic.com/technology/archive/2022/12/generative-ai-technology-human-creativity-imagination/672460/
  20. https://www.sciencefriday.com/segments/ai-art/
  21. https://nymag.com/intelligencer/article/will-dall-e-ai-artist-take-my-job.html
  22. https://www.washingtonpost.com/technology/interactive/2022/artificial-intelligence-images-dall-e/
  23. https://www.indiehackers.com/post/15-industries-that-dall-e-2-is-already-disrupting-opportunities-for-indie-hackers-1bc229cebd
  24. https://the-decoder.com/imagen-google-introduces-dall-e-2-competition/
  25. https://arstechnica.com/information-technology/2022/12/lensa-ai-app-causes-a-stir-with-sexy-magic-avatar-images-no-one-wanted/
  26. https://www.wired.com/story/lensa-ai-magic-avatars-security-tips/
  27. https://www.nytimes.com/2022/12/07/style/lensa-ai-selfies.html
  28. https://www.youtube.com/watch?v=YxmAQiiHOkA – Google’s Video AI: Outrageously Good!
  29. https://techcrunch.com/2022/09/29/meta-make-a-video-ai-achieves-a-new-creepy-state-of-the-art/
  30. https://youtu.be/yyRgPhxUMqs?t=117 – Will ChatGPT Take Software Engineering Jobs?
  31. https://en.wikipedia.org/wiki/LaMDA
  32. https://en.wikipedia.org/wiki/GPT-2
  33. https://en.wikipedia.org/wiki/GPT-3
  34. https://en.wikipedia.org/wiki/ChatGPT
  35. https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)
  36. https://en.wikipedia.org/wiki/DALL-E
  37. https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf – Reframing Superintelligence – Comprehensive AI Services as General Intelligence
  38. https://www.cbr.com/ai-comic-deemed-ineligible-copyright-protection/

+++

Note: This is a reprint of a recent column. Marshall will return.