RALEIGH – N.C. Attorney General Josh Stein and 53 other AGs are calling on Congress to crack down on artificial intelligence, citing possible harm to children through “deepfakes.”

“Artificial intelligence is rapidly becoming part of our world, and we have to act now to make sure we protect our kids,” Stein said in a blog post. “We cannot afford to let our laws lag behind technology. I’m pleased to lead this bipartisan coalition with my fellow attorneys general to keep our children safe online.”

The bipartisan coalition of AGs spelled out their concerns in a letter to Congress.

“Specifically, AI can and is being used to exploit children through child sexual abuse material. The Attorneys General are asking Congress to propose and pass legislation to protect children from these abuses,” Stein’s office explained.

Stein and AGs from South Carolina, Mississippi, and Oregon are spearheading the effort.

“We are engaged in a race against time to protect the children of our country from the dangers of AI,” the prosecutors wrote in the letter, shared ahead of time with The Associated Press. “Indeed, the proverbial walls of the city have already been breached. Now is the time to act.”

The “deepfake” threats

The AGs cited three specific ways "deepfake" technology can be used to create or alter imagery:

  • To digitally alter the likeness of a real child who has not been physically abused to make it appear as if the child is being abused.
  • To digitally recreate a child who has been physically abused and depict that child being abused in other ways.
  • To create a child who does not exist and depict them being abused to feed the market for child sexual abuse material.

South Carolina Attorney General Alan Wilson helped lead the effort to add signatories from all 50 states and four U.S. territories to the letter. The Republican, elected last year to his fourth term, told AP last week that he hoped federal lawmakers would translate the group’s bipartisan support for legislation on the issue into action.

“Everyone’s focused on everything that divides us,” said Wilson, who marshaled the coalition with his counterparts in Mississippi, North Carolina and Oregon. “My hope would be that, no matter how extreme or polar opposites the parties and the people on the spectrum can be, you would think protecting kids from new, innovative and exploitative technologies would be something that even the most diametrically opposite individuals can agree on — and it appears that they have.”

Josh Stein’s statement. Image provided by AG’s office.

The Senate this year has held hearings on the possible threats posed by AI-related technologies. In May, OpenAI CEO Sam Altman, whose company makes free chatbot tool ChatGPT, said that government intervention will be critical to mitigating the risks of increasingly powerful AI systems. Altman proposed the formation of a U.S. or global agency that would license the most powerful AI systems and have the authority to “take that license away and ensure compliance with safety standards.”

While there’s no immediate sign Congress will craft sweeping new AI rules, as European lawmakers are doing, the societal concerns have led U.S. agencies to promise to crack down on harmful AI products that break existing civil rights and consumer protection laws.

State statute review

In addition to federal action, Wilson said he's encouraging his fellow attorneys general to scour their own state statutes for possible areas of concern.

“We started thinking, do the child exploitation laws on the books — have the laws kept up with the novelty of this new technology?”

In detail: The “deepfake” threats

According to Wilson, the dangers AI poses include the creation of "deepfake" scenarios — videos and images digitally created or altered with artificial intelligence or machine learning — depicting a child who has already been abused, or the alteration of a real child's likeness, taken from something like a social media photograph, so that it depicts abuse.

“Your child was never assaulted, your child was never exploited, but their likeness is being used as if they were,” he said. “We have a concern that our laws may not address the virtual nature of that, though, because your child wasn’t actually exploited — although they’re being defamed and certainly their image is being exploited.”

A third possibility, he pointed out, is the altogether digital creation of a fictitious child’s image for the purpose of creating pornography.

“The argument would be, ‘well I’m not harming anyone — in fact, it’s not even a real person,’ but you’re creating demand for the industry that exploits children,” Wilson said.

There have been some moves within the tech industry to combat the issue. In February, Meta, as well as adult sites like OnlyFans and Pornhub, began participating in Take It Down, an online tool that allows teens to report and request removal of explicit images and videos of themselves from the internet. The tool works for both regular images and AI-generated content.

“AI is a great technology, but it’s an industry disrupter,” Wilson said. “You have new industries, new technologies that are disrupting everything, and the same is true for the law enforcement community and for protecting kids. The bad guys are always evolving on how they can slip off the hook of justice, and we have to evolve with that.”