The Biden administration’s new “Blueprint for an AI Bill of Rights” is simultaneously a big step forward and a disappointment. Released last week, the blueprint articulates a set of principles that could address some of the major concerns about artificial intelligence design and deployment. But policymakers will need to do more to achieve an elusive objective: trust in AI.
AI’s trust problems have been apparent for some time. In 2021, the National Institute of Standards and Technology (NIST) published a paper explaining the relationship between artificial intelligence systems and the consumers and firms that use them to make decisions. Because of an AI system’s complexity, unpredictability, and lack of moral or ethical capacity, the user must place trust in it, which changes the dynamic between user and system into a relationship. So if AI designers and deployers want AI to be trusted, they must encourage trustworthy behavior by the system as well as trust in the system.