The Biden administration’s new “Blueprint for an AI Bill of Rights” is simultaneously a big step forward and a disappointment. Released last week, the blueprint articulates a set of principles that could address some of the major concerns about artificial intelligence design and deployment. But policymakers will need to do more to achieve an elusive objective: trust in AI.
AI’s trust problems have been apparent for some time. In 2021, the National Institute of Standards and Technology (NIST) published a paper explaining the relationship between artificial intelligence systems and the consumers and firms that rely on them to make decisions. Because of an AI system’s complexity, unpredictability, and lack of moral or ethical capacity, the user must place trust in it, which turns the interaction between user and system into a relationship. So if AI designers and deployers want AI to be trusted, they must encourage both trustworthy behavior by the system and trust in the system.