The growing popularity of large language models (LLMs) has raised concerns about their accuracy. These chatbots can be used to provide information, but that information may be tainted by errors or by fabricated content (hallucinations) arising from problematic data sets or incorrect inferences made by the model. The questionable results produced by chatbots have led to growing disquiet among users, developers and policy makers. The author argues that policy makers need to develop a systemic approach to address these concerns: the current piecemeal approach reflects neither the complexity of LLMs nor the magnitude of the data sets on which they are built. The author therefore recommends incentivizing greater transparency and accountability around data-set development.