The session brought attention to the dangers of "black-box" AI, which lacks transparency and explainability. Participants emphasized the significance of regulatory frameworks that promote open AI solutions and protect individuals from harmful content and disinformation. The discussion also highlighted the need for government-sponsored certification and academic access to algorithms.
Howso's CEO advocated for alternative AI approaches such as instance-based learning (IBL), which makes predictions directly from stored training examples and can therefore trace each output back to the specific instances that produced it, offering transparency and explainability.
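To make the transparency claim concrete, here is a minimal sketch of instance-based learning using a k-nearest-neighbors classifier. The function and data names are illustrative assumptions for this sketch, not Howso's actual product or API; the point is only that an IBL prediction comes with the specific training instances that justify it.

```python
# Minimal instance-based learning sketch (k-nearest neighbors).
# Each prediction is explainable because it can be traced back to
# the concrete training instances that produced it.
import math
from collections import Counter

def predict(train, query, k=3):
    """Classify `query` and return the supporting training instances.

    train: list of (features, label) pairs
    query: feature vector
    Returns (predicted_label, nearest), where `nearest` lists the
    (distance, features, label) tuples that justify the prediction.
    """
    nearest = sorted(
        (math.dist(x, query), x, y) for x, y in train
    )[:k]
    # Majority vote over the k nearest labels.
    label = Counter(y for _, _, y in nearest).most_common(1)[0][0]
    return label, nearest

# Illustrative toy data, not a real dataset.
train = [((1.0, 1.0), "a"), ((1.2, 0.9), "a"),
         ((5.0, 5.0), "b"), ((5.1, 4.8), "b")]
label, support = predict(train, (1.1, 1.0))
print(label)  # prints "a"
for d, x, y in support:
    print(f"  because {x} (label {y!r}) is {d:.2f} away")
```

Unlike a neural network, whose decision cannot easily be attributed to individual inputs, this model's "explanation" is simply the list of nearest stored examples, which is the sense in which IBL is transparent by construction.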
Duke University's Cynthia Rudin recommended government-sponsored certification for companies with access to biometric data and academic access to tech companies' algorithms.
Rudin emphasized the importance of requiring the "provenance" or source of information to prevent the circulation of harmful content and disinformation.
Both participants called for more concrete action from the government in regulating AI.
The U.S. Congress lags behind the European Union in negotiating AI legislation.
Existing consumer protection laws at the state level provide some safeguards for U.S. citizens.
The Senate AI Insight Forum concluded with discussions on transparency and intellectual property in AI. Participants called for alternative AI solutions, concrete legislative action, and safeguards against harmful content and disinformation. Whether the forum will shape future legislation remains to be seen.