The growing popularity of large language models (LLMs) has raised concerns about their accuracy. Chatbots built on these models can be used to provide information, but that information may be tainted by errors or by fabricated content (hallucinations) stemming from problematic data sets or incorrect assumptions made by the model. The questionable results produced by chatbots have led to growing disquiet among users, developers and policy makers. The author argues that policy makers need a systemic approach to address these concerns: the current piecemeal approach does not reflect the complexity of LLMs or the magnitude of the data sets on which they are built. The author therefore recommends incentivizing greater transparency and accountability around data-set development.