The Biden administration’s new “Blueprint for an AI Bill of Rights” is simultaneously a big step forward and a disappointment. Released last week, the blueprint articulates a set of principles that could address some of the major concerns about artificial intelligence design and deployment. But policymakers will need to do more to achieve an elusive objective: trust in AI.
AI’s trust problems have been apparent for some time. In 2021, the National Institute of Standards and Technology (NIST) published a paper explaining the relationship between artificial intelligence systems and the consumers and firms who rely on those systems to make decisions. Because of an AI system’s complexity, unpredictability, and lack of moral or ethical capacity, the user must place trust in it, which turns the dynamic between user and system into a relationship. So if AI designers and deployers want AI to be trusted, they must encourage both trustworthy behavior by the system and trust in the system.