Editor’s note: WRAL TechWire contributing writer Jen McFarland has more than 20 years of experience in IT across a range of tools and technologies. She wants to help small businesses and teams design, improve, and maintain the technology that helps them succeed. In 2022, she incorporated Marit Digital.

RALEIGH — The emergence of artificial intelligence was indisputably one of the top stories of 2023, as the long-simmering technology burst into public consciousness. As we begin to settle into 2024, many wonder what new functionalities will come next.

But perhaps we should be more worried about existing functionality.

You’ve probably heard of “deepfakes” by now – the manipulation of digital content to convincingly fake or edit media, whether altered audio, video, images or some combination. Deepfakes have traditionally been difficult to do well, meaning they were usually easy to identify and discredit. The recent leap in generative AI, however, has put this capability in the hands of a much broader population. According to one report, incidents of deepfake fraud jumped by 3,000% last year – 31 times the number reported in 2022.

Fakes feature popes, presidents

This increased volume has produced examples of deepfakes that range from the curious to the criminal.


Last fall, the likeness of Tom Hanks was used to promote a dental plan without his knowledge or consent. In March, many were fooled by a deepfake image of Pope Francis taking a walk in a puffy designer coat and silver crucifix. Celebrity videos from the site Cameo, including messages from the likes of actor Elijah Wood and boxer Mike Tyson, were edited to push anti-Ukraine propaganda. And in June, a series of photos showed former President Donald Trump embracing Anthony Fauci, the government’s top infectious disease expert and a frequent Trump foil during the COVID pandemic.

This last example was spotted quickly; many AI-generated images show hallmarks of their creation, such as poorly rendered ears and hands or illegible text. And Trump hugging Fauci was an unlikely scenario that naturally generated skepticism. That didn’t stop the images from being used by the presidential campaign of Florida Governor Ron DeSantis to misrepresent Trump, a rival in the upcoming Republican presidential primary.

This isn’t the first time AI has been used in political ads, and with this year’s election, you can be sure there’s plenty more on the horizon.

Disinformation is politics as usual

Politics has a well-documented history of disinformation, and deepfakes slot nicely into the toolbox for many campaigns. Already we’ve seen ads in which AI was used to revise or slow down a candidate’s speech, misrepresenting statements or making the candidate sound drunk or impaired. Another DeSantis ad, from a supporting super PAC, added fighter jets behind the candidate in a video.


Even less sophisticated disinformation attacks are becoming more brazen. In November, hundreds of Facebook ads that featured celebrities including Taylor Swift, Beyoncé, Selena Gomez and Cristiano Ronaldo included anti-Ukraine quotes positioned to look like they came from the celebrities. According to WIRED, the ads reached at least 7.6 million users and were generated by the Russian “influence operation” called Doppelganger. The campaign has links to the Kremlin and Russia’s GRU military spy agency.

The impact of disinformation on the 2016 election — much of it generated by Russian bots — has been widely studied in articles from Nature, JSTOR, and the National Institutes of Health. Congress has talked of solutions for addressing disinformation; however, any real movement has been stalled by partisan bickering.

Government, public, tools can’t keep up

The internet-viewing public is often left to guess at the veracity of these deepfakes.

Technology companies behind the social media platforms that spread deepfakes and disinformation are reluctant to step into the role of moderator. There’s also little incentive for them to bother: inflammatory content typically generates more shares and site usage.

For celebrities whose likenesses are used, there are often too many deepfakes circulating to address them all. Unless a fake is especially egregious, these famous faces are unlikely to even acknowledge it, let alone try to discredit every instance. Meanwhile, generative AI is advancing faster than the tools that might reliably identify its output.

The success of those disinformation campaigns, and the inability of technology and governments to rein them in, mean that a wave of new disinformation is headed our way in 2024 – what some are calling the first “deepfake election.”


What to do?

Some in the media and government have discussed creating a kind of verification service that would document a piece of media’s origin and context. However, it’s hard to find someone to build such a service, since it’s unlikely to be a money-making endeavor and would almost certainly come with a multitude of headaches to manage. Other approaches, like cryptographically signing content via the C2PA standard or “fingerprinting” media, have potential but still require collaboration and oversight to implement.
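For the technically curious, here is a minimal Python sketch of the idea behind fingerprinting and signing media. It is a simplification for illustration only, not the actual C2PA specification (which embeds a signed provenance manifest inside the file itself); it assumes the third-party cryptography package, and the file name photo.jpg is hypothetical.

```python
# Minimal sketch: hash a media file, sign the hash, verify it later.
# This illustrates the concept behind provenance signing; the real
# C2PA standard embeds a richer, signed manifest in the media itself.
import hashlib

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def fingerprint(path: str) -> bytes:
    """Compute a SHA-256 'fingerprint' of the file's raw bytes."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.digest()


# A publisher signs the fingerprint with its private key...
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

media_hash = fingerprint("photo.jpg")  # hypothetical file
signature = private_key.sign(media_hash)

# ...and anyone holding the public key can confirm the file is
# unmodified. verify() raises InvalidSignature if even one byte
# of the media has changed since it was signed.
public_key.verify(signature, fingerprint("photo.jpg"))
print("Signature valid: file matches the published fingerprint.")
```

The weak link isn’t the math – it’s getting publishers, platforms, and toolmakers to agree on whose signatures to trust, which is exactly the collaboration problem noted above.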

In the meantime, some states are stepping in where the federal government has failed to tread. Washington state, Minnesota, and Michigan have each enacted legislation to ban or require disclosure of “materially deceptive” or “synthetic” media.

For now, AI-generated content may still be distinguishable by its details; the technology is notorious for flubbing the finer points of generated media. However, it will continue to get better – probably quickly – meaning fakes will get harder and harder to identify.

The last line of defense against disinformation is likely our own critical thinking skills. When we see a shocking video or hear an appalling statement, our gut instinct in the misinformation age should be suspicion. Indeed, the most potentially dangerous deepfakes are those that are the most authentic to the subject. If we believe – or want to believe – that a person would behave the way we see them, we’re more likely to accept something at face value, to our own detriment.

So here we find ourselves. The tools that facilitate deepfakes are becoming more sophisticated and easier to access. Generating fake news is easier than ever, and identifying it harder. And it couldn’t be coming at a worse time for the American people.

But try to look on the bright side. Maybe by next New Year’s Eve, we’ll have an AI-generated Dick Clark.