The Biden administration’s new “Blueprint for an AI Bill of Rights” is simultaneously a big step forward and a disappointment. Released last week, the blueprint articulates a set of principles that could address some of the major concerns about artificial intelligence design and deployment. But policymakers will need to do more to achieve an elusive objective: trust in AI.
AI’s trust problems have been apparent for some time. In 2021, the National Institute of Standards and Technology (NIST) published a paper examining the relationship between artificial intelligence systems and the consumers and firms that rely on them to make decisions. Because AI systems are complex, unpredictable, and lack moral or ethical capacity, the user must extend trust to the system, turning the interaction between user and system into a relationship. So if AI designers and deployers want AI to be trusted, they must foster both trustworthy behavior by the system and trust in the system.