RALEIGH – Artificial intelligence (AI) systems are now used to help recruiters identify viable candidates, to assist loan underwriters in deciding whether to lend money to customers, and even to inform judges weighing whether a convicted criminal will re-offend.
But are those computer systems unbiased? Far from it.
“Programmers are creating bias within the algorithm, and then using biased data to create a decision,” says Danya Perry, Equitable Economic Development Manager with Wake County Economic Development.
“We assume that the decision has been filtered out and it’s clean data, but that’s not the case.”
The result: people of color and women are negatively impacted.
To highlight the issue, the Raleigh Chamber hosted a “Courageous Conversation” at NC State University’s McKimmon Center this week, tackling challenges surrounding bias in our everyday AI usage.
“We want to help change mindsets, and it starts with conversations like this,” Perry says.
Among the speakers was Phaedra Boinodiris, fellow of the Royal Society of the Arts and Sciences and a member of IBM’s Academy of Technology.
“For whatever reason, people think that AI is like a magic box, that somehow you don’t have to worry about any decision made by an AI being immoral or unethical, which could not be further from the truth,” she says.
“It’s people who decide which data sets to use to train the artificial intelligence. If they’re using historically racist or sexist data sets, we’ve got a problem.”
Companies must get educated and adopt certain standards of governance when it comes to AI. They must also use technology to mine their data sets and flag when bias appears in them, she says.
“With these three different kinds of [approaches], you can mitigate the risk of having calcified bias in systems. The thing that people don’t understand is, these systems are making decisions that impact lives. And we just need to be more aware.”
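Boinodiris’ suggestion of using technology to flag bias in data sets can be illustrated with a simple sketch. The example below applies the “four-fifths rule,” a common screening heuristic in which a group selected at less than 80 percent of the rate of the most-favored group is flagged for possible disparate impact. The data and group labels are hypothetical, and this is not any specific tool referenced at the event.

```python
# A minimal, illustrative bias check on a hypothetical hiring data set.
# Flags any group whose selection rate falls below 80% of the highest
# group's rate (the "four-fifths rule" heuristic).

from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, selected) pairs -> selection rate per group."""
    totals = defaultdict(int)
    hits = defaultdict(int)
    for group, selected in records:
        totals[group] += 1
        if selected:
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

def flag_disparate_impact(records, threshold=0.8):
    """Return {group: True/False} where True flags possible disparate impact."""
    rates = selection_rates(records)
    best = max(rates.values())
    return {g: (rate / best) < threshold for g, rate in rates.items()}

# Hypothetical data: group A selected 8 of 10 times, group B only 3 of 10.
data = [("A", 1)] * 8 + [("A", 0)] * 2 + [("B", 1)] * 3 + [("B", 0)] * 7
print(flag_disparate_impact(data))  # group B's ratio is 0.3/0.8 = 0.375, so it is flagged
```

A check like this is only a starting point: it surfaces statistical disparities in outcomes but says nothing about why they occur, which is where the governance and education Boinodiris describes come in.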