RALEIGH — On Wednesday Senate Majority Leader Chuck Schumer (D-NY) convened the last of his AI Insight Forum sessions in Washington, D.C. The forums, which began in September, have brought together leaders from a variety of industries with technology executives and academics to discuss the implications of AI.
The seventh and final planned session focused on the topic of “Transparency & Explainability and Intellectual Property & Copyright” in AI. Among those invited to speak on the topic were two representatives from the Triangle: Mike Capps, CEO and co-founder of Howso, and Cynthia Rudin, professor of computer science at Duke and director of Duke’s Interpretable Machine Learning Lab.
While the session took place behind closed doors, Senator Schumer shared position papers from all participants. The statements from Capps and Rudin brought very similar messages to the Senate.
Dangers of “Black-box” AI
Capps participated to advocate for alternatives to the neural networks that currently dominate the AI space. Neural networks learn by fitting vast numbers of small statistical functions to large data sets. The results are impressive and attract the majority of investment in and development of AI frameworks. However, these systems are essentially a “black box” for the user: the AI cannot explain how a response was reached, or which data was used to derive it.
Instance-based learning, or IBL, instead reasons directly from stored data: it can return the specific records that informed a response, and even an assessment of the response’s reliability given the data available. This approach provides transparency, explainability, and attribution, according to Capps, and it is the type of AI at the core of Capps’ company, Howso. He participated in the forum to share his position and advocate for regulatory frameworks that can build more open AI solutions.
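The distinction Capps draws can be illustrated with a toy instance-based learner. The sketch below is a hypothetical example, not Howso’s implementation: it uses a simple k-nearest-neighbor classifier in which every prediction comes back with the exact training records that produced it, plus a rough reliability score based on how near and how unanimous those records are.

```python
# Toy instance-based learning (IBL) sketch: a k-nearest-neighbor classifier
# that returns, alongside each prediction, the training instances that
# informed it and a rough reliability estimate. Illustrative only; this is
# not Howso's product.
import math
from collections import Counter

def knn_explain(train, query, k=3):
    """Classify `query` and report which training rows drove the answer.

    train: list of (features, label) pairs
    query: tuple of feature values
    Returns (predicted_label, supporting_instances, reliability).
    """
    # Rank training instances by Euclidean distance to the query.
    ranked = sorted(train, key=lambda row: math.dist(row[0], query))
    neighbors = ranked[:k]

    # Majority vote among the k nearest instances.
    votes = Counter(label for _, label in neighbors)
    label, agreeing = votes.most_common(1)[0]

    # Reliability: fraction of neighbors that agree, discounted when the
    # neighbors are far from the query (i.e., thin supporting evidence).
    avg_dist = sum(math.dist(features, query) for features, _ in neighbors) / k
    reliability = (agreeing / k) / (1.0 + avg_dist)

    return label, neighbors, reliability
```

Because the “model” is just the training data itself, the supporting instances returned by `knn_explain` are a complete answer to “what data was used to derive this response,” which is exactly the attribution property a neural network cannot provide.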
“Transparent, trustworthy AI is not an aspirational pipe dream—it is available today,” said Capps in his position paper for the forum. “Those tasked with regulating and overseeing AI must demand full transparency for algorithmic decision-making in critical life-affecting decisions.”
While Duke University’s Cynthia Rudin agreed with Capps on the harms of black-box models, her position paper went further, addressing specific real-world scenarios and concrete actions. For example, her recommendations included a government-sponsored certification for all companies with access to biometric data. She also advocated for academic access to the recommendation algorithms used by tech companies, arguing that these are already known to incite harm in teens and spread disinformation.
“Similar to food safety, and transportation safety, AI safety is imperative,” Rudin wrote in her position paper. “Citizens should expect transparency for when their biometrics are used for AI.”
The session topics of “transparency” and “intellectual property” seemed like a heavy lift for one day’s discussion, but Rudin did bring the two issues together. She noted that generative AI output is increasingly difficult to detect, raising questions about what is “real,” where it came from, and whether it was permissibly obtained. Rudin argued that the law should require “provenance” for all information, obligating those who provide information to also supply its source.
Without that information, Rudin stated, “it will be extremely easy to circulate disinformation, bullying, and other harmful content on a massive scale.”
Both Capps and Rudin advocated for much more concrete action on the part of the government than what’s been seen thus far. While the president did release an Executive Order on AI in late October, the U.S. Congress lags far behind its European Union counterparts, who have been negotiating an AI Act since 2021. Currently the most aggressive protections for U.S. citizens come from consumer protection laws, mainly at the state level.
Though unable to speak about the details of the day, Capps was optimistic that at least some of the Senate heard their concerns and might take recommendations forward.
“I hope we’ll have real impact on the 80 or so staffers who were in attendance, who will be the ones writing any legislation going forward,” said Capps over email.