The growing popularity of large language models (LLMs) has raised concerns about their accuracy. These chatbots can be used to provide information, but that information may be tainted by errors or by fabricated or false information (hallucinations) caused by problematic data sets or incorrect assumptions made by the model. The questionable results produced by chatbots have led to growing disquiet among users, developers and policy makers. The author argues that policy makers need to develop a systemic approach to address these concerns. The current piecemeal approach does not reflect the complexity of LLMs or the magnitude of the data on which they are built; the author therefore recommends incentivizing greater transparency and accountability around data-set development.