Policy makers in many countries are determined to develop artificial intelligence (AI) within their borders because they view AI as essential to both national security and economic growth. Some countries have proposed adopting AI sovereignty, under which the nation develops AI for its people, by its people and within its borders. In this paper, the author distinguishes between policies designed to advance domestic AI and policies that, with or without direct intent, hamper the production or trade of foreign-produced AI (herein "AI nationalism"). AI nationalist policies in one country can make it harder for firms in another country to develop AI. If officials in one country can limit access to key components of the AI supply chain, such as data, capital, expertise or computing power, they may be able to limit the AI prowess of competitors in other countries. Moreover, if policy makers can shape regulations in ways that benefit local AI firms, they may also impede the competitiveness of other nations' AI developers. AI nationalism may seem appropriate given the importance of AI, but this paper aims to illuminate how AI nationalist policies may backfire, dividing the world into AI haves and have-nots.