Editor’s note: Veteran entrepreneur and investor Donald Thompson writes a weekly column about management and leadership as well as diversity and other important issues for WRAL TechWire. His columns are published on Wednesdays.

Note to readers: WRAL TechWire would like to hear from you about views expressed by our contributors. Please send email to: info@wraltechwire.com.

+++

RESEARCH TRIANGLE PARK – Depending on your imagination, there are many ways to view artificial intelligence (AI). For those of you with a sci-fi bent, maybe it’s a scary version of humans fighting machines, like The Terminator and The Matrix. Alternatively, AI could be seen as a tool to make life better by tapping into global collective knowledge to make advances in everything from medicine to transportation systems. 

Either way, AI is not new, either as a guiding principle in science fiction and fantasy or as a presence in the real world. ChatGPT might be scary to some people, but companies have used AI for a long time. For example, Julie Basello and Shannon Feeley explain that manufacturing businesses have used AI to “optimize operations and increase efficiency and productivity,” which “assists in the management of supply chains, risk, sales volume and quality of products.”

The popularity of ChatGPT, however, has sparked new debate about AI and its role in society. Boiling the different viewpoints down to fundamentals, we have learned that AI is built on vast, nearly inconceivable amounts of data created (mainly) by humans.

Because humans produced that collective knowledge base, it carries many of the same characteristics that people (generally) possess: both positive and negative. As Porter Braswell explains: “AI is only as good as the data we feed it.” That idea is why many people who work in diversity, equity and inclusion (DEI) are wary of AI and the possible future it foreshadows.

CHALLENGES  

The underlying challenge with AI is that it is created from data sets that reflect the same biases as the people who created them. Observers and analysts are concerned that a tool like ChatGPT may perpetuate or even amplify existing biases and inequalities. In other words, the information used to train the machine brain is filled with inherent “ideas” that are biased. The speed at which that thinking is replicated then creates a kind of vicious circle that may lead to even greater discrimination.

From my perspective, the idea that we should naturally trust AI-created output as better or more scientific is compromised by the fact that people created the underlying data. Who created it? Did those individuals consider DEI? What are the consequences of biased or discriminatory language existing within the core “thinking” and “learning” of AI systems?

“AI is never infallible, because it is designed by highly fallible humans and it can only learn from existing data sets,” Braswell says. “It will help us create a better future – one which yields more desirable and equitable data sets – but only if humans are there to analyze its outputs and help shape the direction in which they guide us.”

While smart executives should raise questions about how AI is created and by whom, there is also a question of access and resource allocation. As these tools become more ubiquitous, we must ensure that they are inclusive and accessible to all people, regardless of race, gender or socioeconomic status. As leaders in DEI and across organizations more broadly, we have to make these tools available across society so that all people benefit, not just a few.

THREE STEPS FOR LEADERS THINKING ABOUT AI AND DEI

Here are three specific, actionable steps that senior executives and managers can take now to ensure that AI systems are diversity-forward:

  • Invest in diversity, equity and inclusion training for all employees involved in AI and ChatGPT work – particularly those developing the platforms – including data scientists, software engineers and project managers. Don’t let the idea that these are some of your “smartest” teammates get in the way of addressing unconscious biases and how those might filter into their work. 
  • Create (and force it if you have to) diverse representation on AI teams from design to implementation. Diversity of thought and lived experience will make your teams better by bringing new voices to the process. Since diversity is already a challenge for many industries – especially tech – ensure that these teammates have the power to provide feedback on emerging systems. This work requires a concerted, industry-wide push to promote diversity in tech, including recruiting and retaining people from underrepresented groups.
  • Conduct regular audits of AI and ChatGPT systems to identify and address any biases or blind spots that may exist, then implement measures to reduce their impact. (A simple illustration of what such an audit might look like follows this list.)
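To make that last step concrete, here is a minimal, hypothetical sketch in Python of one way an audit could begin: comparing a model’s rate of favorable decisions across demographic groups. The records, group names and output shown below are illustrative assumptions, not drawn from any particular system.

    # Minimal bias-audit sketch: compare positive-decision rates across groups.
    # The records are synthetic placeholders; in practice they would come from
    # logged model outputs tagged with demographic information.
    from collections import defaultdict

    records = [
        ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
        ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
    ]  # (group, decision) pairs, where 1 means a favorable decision

    def positive_rate_by_group(records):
        """Return the share of favorable decisions for each group."""
        totals, positives = defaultdict(int), defaultdict(int)
        for group, decision in records:
            totals[group] += 1
            positives[group] += decision
        return {group: positives[group] / totals[group] for group in totals}

    rates = positive_rate_by_group(records)
    gap = max(rates.values()) - min(rates.values())
    print(rates)                     # {'group_a': 0.75, 'group_b': 0.25}
    print(f"Parity gap: {gap:.2f}")  # a large gap is a flag to investigate, not proof of bias

A gap like this does not settle the question on its own, but it tells a team where to look, which is exactly the point of conducting audits on a regular schedule.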

If you look beyond the tech and its potential for good and bad, it becomes clear that AI actually has a marketing problem. We know we need AI, and we are not going to slow down its development now that it has become the hottest business topic on the planet. At the same time, there are basic challenges because people create the infrastructure and people have biases.

The ironic aspect is that those of us working in DEI face a similar situation. Many executives wonder why their teams don’t perform better and ask what is holding them back. The honest answer is usually that whatever is missing can be traced back to culture.

Smart leaders are asking tough questions because their people and organizations want culture change that leads to workplace excellence. We certainly don’t want a situation where AI is undercutting critical DEI programming. It is up to executive teams to ensure that these new technologies are developed and implemented in a way that promotes DEI and benefits everyone, rather than stumbling into a solution that perpetuates the stereotypes and inequalities so many people have dedicated themselves to eradicating.

There are so many potential benefits to AI, and with those benefits comes a great deal of responsibility. It is going to take true leadership to ensure that DEI is prioritized, particularly since many people in those roles are unsure what they should be doing to make diversity a priority, personally or organizationally.

Every new technology has unintended consequences. Let’s all work together now to ensure that bias and discrimination are not part of the AI brain – we can’t wait for Neo to save us! 

About the Author 

Donald Thompson founded The Diversity Movement to literally change the world. As CEO, he has guided its work with hundreds of clients and through hundreds of thousands of data touch points. TDM’s global recognition centers on tying DEI initiatives to business objectives. Recognized by Inc., Fast Company and Forbes, he is the author of Underestimated: A CEO’s Unlikely Path to Success, hosts the podcast “High Octane Leadership in an Empathetic World” and has published widely on leadership and the executive mindset. As a leadership and executive coach, Thompson has created a culture-centric ethos for winning in the marketplace by balancing empathy and economics. Follow him on LinkedIn for updates on news, events, and his podcast, or contact him at info@donaldthompson.com for executive coaching, speaking engagements or DEI-related content.