CARY – SAS opens its SAS Innovate conference on Tuesday in Florida, and a hot topic for discussion is very likely to be the rapid developments in Artificial Intelligence. Reggie Townsend, vice president of the SAS Data Ethics Practice, certainly has a lot on his plate, with AI in the headlines every day due in large part to the rise of ChatGPT and warnings from IBM that AI will replace nearly 8,000 of its workers.

In a two-part Q&A, Townsend, who is an advisor to the Biden Administration on AI and is on the board of EqualAI to fight bias in artificial intelligence, talks about the significant issues and possibilities the world faces.

  • Some have called 2023 the “year of AI” – what’s the latest in the field, and why might that designation be accurate? 

AI, analytics and machine learning can turn disruption into opportunity even amid geopolitical risk, climate change, supply chain breakdowns and economic inflation. We see organizations around the world using analytics and AI to make more intelligent business decisions. In many cases, the success of those efforts is determined by the speed and quality of how their analytics and AI solutions are deployed.

A recent development in analytics and AI deployment is the use of techniques referred to as ModelOps, short for “Model Operations.” ModelOps is all about getting analytical models into production more quickly so the organization realizes results faster. It’s the must-have technology for rapidly and confidently implementing scalable, predictive analytics and AI.

When an organization uses a ModelOps approach, it’s important to ask process questions that help identify bottlenecks, clarify where to put initial focus and determine where to make updates.

The ModelOps approach also gives teams an opportunity at the beginning of a project to rally the organization around the intended results, and it can improve clarity about the processes involved.
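To make that concrete, here is a minimal sketch, in Python, of the kind of lifecycle a ModelOps pipeline automates: register a candidate model, gate its promotion to production on a validation metric, then monitor it for drift. The ModelRegistry class, the metric names and the thresholds are hypothetical stand-ins for illustration, not SAS Viya APIs.

```python
# Minimal, illustrative ModelOps-style lifecycle. The registry, metric names
# and thresholds are hypothetical stand-ins, not SAS Viya APIs.
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class ModelRegistry:
    """Tracks candidate model versions and which one is serving production."""
    versions: dict = field(default_factory=dict)
    production: str | None = None

    def register(self, name: str, version: str, metrics: dict) -> None:
        self.versions[(name, version)] = {
            "metrics": metrics,
            "registered_at": datetime.utcnow(),
        }

    def promote(self, name: str, version: str, min_accuracy: float = 0.8) -> bool:
        """Promote to production only if the validation gate passes."""
        entry = self.versions[(name, version)]
        if entry["metrics"]["accuracy"] >= min_accuracy:
            self.production = version
            return True
        return False


def monitor(live_accuracy: float, baseline: float, tolerance: float = 0.05) -> str:
    """Flag drift when live performance falls too far below the baseline."""
    return "retrain" if baseline - live_accuracy > tolerance else "ok"


registry = ModelRegistry()
registry.register("demand_forecast", "v2", {"accuracy": 0.87})
if registry.promote("demand_forecast", "v2"):
    print("demand_forecast v2 promoted to production")
print(monitor(live_accuracy=0.79, baseline=0.87))  # prints "retrain"
```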

Between new advances like ModelOps, the rapid ascendancy of generative AI like ChatGPT and the AI regulations in development around the world, the year 2023 does feel significant. But rather than a single standout year, we may look back on it as the dawn of a new age of AI.

  • Where are potential points of failure for AI deployments, particularly those use cases where AI technology is integrated into supply chain management? What redundancies are built into these systems to prevent failure? 

With any model that’s trained on historic data, when something unprecedented occurs, the risk of model failures increases. That’s true for supply chains or any system depending on AI.

But AI can also be the saving grace. For example, before the pandemic, Georgia-Pacific, one of the world’s leading pulp and paper manufacturers, implemented a comprehensive data and analytics strategy.

They faced a lot of challenges when it came to speed to production. By using SAS Viya and our IoT solutions, they reduced the time needed to build and deploy models by up to 70 percent.

Then COVID-19 hit, and people started stockpiling paper products and consumer goods. Georgia-Pacific saw a 120 percent increase in demand for toilet paper, tissues and other products. At the same time, there was a breakdown in the global supply chain.

Since Georgia-Pacific already had a strong analytics strategy, they were able to scale their current efforts to overcome this disruption.

They reduced unplanned downtime by 30 percent and ultimately improved equipment efficiency by 10 percent to get more products into stores – faster. That led to lower labor costs for maintenance, less scrap and waste, and increased production. Furthermore, they have 15,000 models running on SAS and are fully prepared for the next disruption.

To be better prepared for future disruptions, many companies are tapping into the power of AI through simulation and digital twins. These are digital reproductions of real-world systems like a connected supply chain. These replicas duplicate existing processes with algorithms and IoT connectivity so computers can understand the physical systems involved.

Companies get several benefits – including lower costs – by deploying digital twins because they aren’t restricted by the physical world’s limitations. A company can perfect product formulation, conduct prototyping and product testing at scale, and optimize supply chain resiliency more quickly and less expensively than it could otherwise.
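As a rough illustration of the simulation side of a digital twin, the Python sketch below models a single production line feeding a warehouse through a sudden demand spike and compares two capacity levels. All of the rates, the spike multiplier and the capacities are invented numbers for demonstration only, not Georgia-Pacific figures.

```python
# Toy digital-twin style simulation of one production line feeding a warehouse.
# All rates and the demand spike are invented for illustration, not real data.
def simulate(days: int, production_per_day: float, base_demand: float,
             spike_day: int, spike_multiplier: float, start_inventory: float):
    inventory, stockout_days = start_inventory, 0
    for day in range(days):
        demand = base_demand * (spike_multiplier if day >= spike_day else 1.0)
        inventory += production_per_day - demand
        if inventory < 0:           # demand not met this day
            stockout_days += 1
            inventory = 0
    return inventory, stockout_days


# Compare the current line with a hypothetical 20 percent capacity increase.
for capacity in (100.0, 120.0):
    remaining, stockouts = simulate(days=60, production_per_day=capacity,
                                    base_demand=90.0, spike_day=20,
                                    spike_multiplier=2.2,
                                    start_inventory=500.0)
    print(f"capacity={capacity}: {stockouts} stockout days, "
          f"{remaining:.0f} units left")
```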

  • How might more widespread adoption of AI and AI tools change the U.S. workforce?  How might workforce pipelines need to change to prepare future workers for relevant digital and AI skills?  

For the widespread adoption of AI to succeed, we must increase understanding of AI, because there’s so much fear and confusion around it. We need to build foundational AI knowledge among the public so that people understand the realistic ways it can help us and the ways it is far less likely to harm us.

We’re already seeing examples of AI, like chatbots, handle easily automated tasks. I think we’ll see AI become a complementary tool, empowering people to work more effectively, accomplish more tasks and focus on work that only humans can do. As impressive as AI can be, it still lacks the complex thinking abilities of human beings. And for any workflow using AI, humans will need to be in the loop to check for bias and fairness and ensure people aren’t being harmed. As you can imagine, those considerations are top priorities for the SAS Data Ethics Practice.

Beyond the need to build fundamental understanding, a lack of AI skills in the workforce also inhibits the widespread and effective use of AI. In research published by SAS last fall, 43 percent of respondents from the US, UK and Ireland indicated that AI and machine learning are top investment priorities over the next one to two years – well ahead of data technology stalwarts such as data visualization, data analytics and big data. The problem is that 63 percent also say their largest skills shortages are in AI and machine learning.

The survey also indicated that employers are de-emphasizing four-year degrees and placing higher value on practical case studies, project work and other relevant training. Industry-recognized certifications, including from tech vendors, were deemed as relevant as degrees, as was participation in hackathons and data challenges, which demonstrate technical, problem-solving and team-working skills.

Helping more people take advantage of the proliferation of AI and closing the skills gap will take a combination of expanding practical AI work in universities, upskilling and reskilling people in both technology and non-technology roles, enabling employees to take online training or participate in hackathons, and growing the data science community. It will also be important to use more modern, open, multi-language tools, which increase data science productivity and empower end users to do basic analytics tasks, allowing data scientists to focus on core work. By democratizing analytics, more people can join the field.

  • What are the ethical issues of most concern for the development of and deployment of AI and machine learning tools? 

Fundamentally, AI is about automated decision making. Done with a commitment to fairness, transparency and accountability, and with humans at the center, that decision making can be beneficial everywhere. The dilemma is whether a given decision should be automated at all, and how. There are obviously areas where AI needs to be heavily scrutinized and carefully regulated. Anywhere decisions are being made that affect health, well-being, finances and freedoms, we must beware of AI leading to harm at scale.

If Amazon recommends a shirt I don’t like, that doesn’t really matter. If someone is denied a home loan because of historical, racially biased data, that’s a serious problem. And if certain populations are underserved by the health system based on biased data, that’s unacceptable. Law enforcement, national security, health and banking are areas where AI risks perpetuating historical injustices, but they also contain AI opportunities that could really help people.
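One simple way a human-in-the-loop review can catch that kind of harm before deployment is to compare outcomes across groups. The sketch below computes a demographic-parity gap on a handful of made-up loan decisions; the data, group labels and the 0.1 tolerance are purely illustrative, not a SAS product feature.

```python
# Illustrative demographic-parity check on hypothetical loan decisions.
# The data and the 0.1 tolerance are invented for demonstration purposes.
from collections import defaultdict

decisions = [  # (group, approved)
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

approved = defaultdict(int)
total = defaultdict(int)
for group, ok in decisions:
    total[group] += 1
    approved[group] += ok

rates = {g: approved[g] / total[g] for g in total}
gap = max(rates.values()) - min(rates.values())
print(rates)            # e.g. {'A': 0.75, 'B': 0.25}
print(f"parity gap = {gap:.2f}")
if gap > 0.1:           # tolerance is a policy choice, shown here as an example
    print("Flag for human review before deployment")
```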

I just wrote about the balancing act of innovating while considering the problems of historic data in a recent blog, A call to action: Empowering minority populations in the AI revolution.

  • What will more competition for AI development mean in terms of potential risk as private sector and perhaps government competitors race to leapfrog each other?

The investments being made in AI are incredible. It’s an exciting time to be in this business. I’m fortunate to tackle these issues alongside SAS employees, customers and partners, as well as with my fellow members of the National AI Advisory Committee and the board of EqualAI. These conversations have made it clear that we’re not going to address the risks associated with AI through regulation alone. It requires a comprehensive approach involving people, process and technology.

Limits, in the form of regulation, should be placed on the deployment of AI technology rather than on its development. If we limit development in general, other countries will leap ahead and gain potentially insurmountable advantages in AI technology.

Regulation will provide the framework and guardrails to instill the responsible use of AI. But building commitment and consistency will require widespread cooperation among developers and users of AI. There will still be ample room for innovation but, with clearer guidelines, less risk of unintended harm to society and customers, as well as to a company’s reputation, brand and bottom line.

Reducing risk is also a matter of educating organizations and people about responsible AI practices. So many negative outcomes arise simply from a lack of awareness of the risks involved. If we can increase general AI knowledge, we can see unintended harm decrease dramatically.

PART TWO: What’s happening with AI at SAS