The growing popularity of large language models (LLMs) has raised concerns about their accuracy. These chatbots can be used to provide information, but that information may be tainted by errors or by fabricated content (hallucinations) stemming from problematic data sets or incorrect assumptions made by the model. The questionable results produced by chatbots have led to growing disquiet among users, developers, and policy makers. The author argues that policy makers need to develop a systemic approach to address these concerns. The current piecemeal approach does not reflect the complexity of LLMs or the magnitude of the data on which they are based; the author therefore recommends incentivizing greater transparency and accountability around data-set development.