She sees both vast potential and overwhelming risk in the current state of the AI industry, a wild west of unchecked experimentation, investment and expansion. The recent rise of ChatGPT, an AI-based tool that lets users converse with a computer algorithm and order up written products from it, has shone new light on the technology, and Rudin says lawmakers need to get a handle on it all – and fast.

Rudin is the Earl D. McLean, Jr. Professor of Computer Science, Electrical and Computer Engineering, Statistical Science, Mathematics, and Biostatistics & Bioinformatics at Duke University, where she runs the Interpretable Machine Learning Lab. She spoke with Duke Today recently about her many concerns related to the growth and power of artificial intelligence and the industries developing tools with it.

Here are excerpts:

  • You feel artificial intelligence technology is out of control right now. Why?

AI technology right now is like a runaway train and we are trying to chase it on foot. I feel that way because the technology is advancing at a very fast rate. It’s amazing what it can do now compared to even a year or two ago.

Misinformation can be generated very, very quickly. Recommender systems (which push content to people) are also moving in directions we don’t want them to. And I feel the people haven’t yet had a chance to speak up about this. It’s really technology companies imposing it on us rather than people getting a chance to decide for themselves what they want.

  • Are there any incentives for tech companies to act ethically with regard to AI?

They’re incentivized to make profits, and if they’re monopolies they’re not really incentivized to compete with other companies in terms of ethics or other things that people want. The problem is, when they say things like ‘we want to democratize AI,’ it’s really hard to believe that when they’re making billions and billions of dollars. So it would be better if these companies weren’t monopolies and people had a choice of how they wanted this technology to be used.

  • Why is it so important, in your view, for the federal government to regulate tech companies?

Government should definitely step in and regulate AI. It’s not like they didn’t have enough warning. The technology has been building for years. The same technology that built ChatGPT has been used to build chatbots in the past that are actually pretty good. Not as good as ChatGPT, but pretty good. So we’ve had plenty of warning. Recommender systems for content have been used for many years now and we have yet to place any kind of regulations on them. Part of the reason is the government doesn’t yet have any kind of mechanism to regulate AI. There’s no (federal) commission on AI. There are commissions on many other things, but not AI.

  • How might this AI revolution affect people the most in their daily lives? What should they look out for?

AI affects people, ordinary people, every day of their lives. When you go on the internet to any website, the advertisements on that website are served up just for you. Every time you are on YouTube looking at content, the recommender systems recommending the next thing you watch are based on your data. When you’re reading Twitter, the content that’s given to you, and in what order it’s given to you, is designed by an algorithm. All of these things are AI algorithms that are essentially unregulated. So ordinary people interact with AI all the time.

  • Do people get any real say in how this technology is imposed on them?

Generally, no. You don’t really get a way to tweak the algorithm to feed you content you want. If you know you’re happier when your algorithm is tuned a certain way, there’s not really a way for you to change it. It would be nice if you had a variety of companies to choose from for a lot of these different kinds of recommender systems. Unfortunately, there aren’t too many companies out there, so you don’t really have much of a choice.

  • What is the worst-case scenario you can envision if there is no regulation?

Misinformation is not innocent. It does real damage to people on a personal level. It’s been the cause of wars in the past. Think of World War II, think of Vietnam. What I’m really concerned about is that misinformation is going to lead to a war in the future, and AI is going to be at least partly to blame.

  • Many of these companies simply claim they’re ‘democratizing’ artificial intelligence with these new tools.

One thing I’m concerned about is you’ve got these companies that are creating these tools, and they’re very excited about releasing these tools to people. And certainly the tools can be useful. But, you know, I think if they were the victims of AI-based bullying, or had fake images of themselves generated online that they didn’t want, or if they were about to be the victim of an AI-propelled misinformation massacre, they might feel differently.

  • Where does content moderation fit into all of this?

There’s a lot of very dangerous content and a lot of dangerous misinformation out there that has cost many people their lives. I’m specifically talking about misinformation around the Rohingya massacres, around the January 6, 2021 insurrection, and vaccine misinformation. While it’s important that we have free speech, it’s also important that content is moderated and that misinformation is not circulated. So even if people say things we don’t agree with, we don’t need to circulate those things using algorithms. If trolls from different countries try to impact politics or have some kind of social impact, they can take over our algorithms and plant misinformation.

We really don’t want that to happen. Child abuse content, for example – we need to be able to filter that off of the Internet.

(C) Duke Today