Digital Trade and Data Governance Hub 2025 Research Priorities
Since 2019, the Digital Trade and Data Governance Hub at GWU has conducted cutting-edge research on how governments develop data governance frameworks, regulate AI, and shape digital trade policies. Our work includes comparative policy analysis, multi-country datasets, and in-depth studies that help stakeholders navigate the evolving digital landscape.
Beyond research, we foster dialogue through monthly webinars, open to all, where experts discuss key issues in AI, data governance, and digital trade. We lead the Digital Trade Study Group, a forum for trade policymakers in the US, UK, and Canada, where officials can learn more about current and future digital trade and data governance challenges. Finally, we develop training programs for policymakers on data-driven change, AI, digital trade, and data governance.
Here are our current research projects:
The AI Catch-22: How Efforts to Mitigate the AI Monopoly May Empower the AI Monopolists
Details: Policymakers in many countries are determined to nurture domestic suppliers of artificial intelligence (AI) because they believe AI is both a “general-purpose technology” (essential to their nations’ economic growth) and a “dual-use technology” (essential to national security). Herein we hypothesize that as these officials work to develop national capacity in AI, their countries could, in turn, become ever more dependent on the giant companies that develop and provide AI services and infrastructure, such as Google, NVIDIA, and Tencent. Moreover, as policymakers work to foster AI at the national level, these nations may collectively be creating a global overcapacity (overproduction) in AI. Overcapacity is common in dual-use technologies because many officials believe they must maintain domestic capacity for national security purposes, even when domestic producers are uncompetitive. Consequently, these same policymakers may unintentionally be creating an AI Catch-22: empowering the giant AI service and infrastructure providers, which are better positioned to endure global overcapacity in AI.
The research team will answer 6 questions:
1. Which nations have determined that they must develop AI?
2. What policies have these nations adopted to develop domestic AI?
3. What is the evidence that these national efforts could result in global overcapacity?
4. What is the evidence that the giant AI firms are better able to endure overcapacity?
5. What are the risks of this quandary?
6. How can policymakers address these risks?
With this project, we hope to encourage researchers and policymakers around the world to assess whether their nations are better off investing in domestic AI capacity and to consider ways to internationalize AI development.
Updating the Global Data Governance Mapping Project
Details: As noted above, since 2019, the Hub has published a metric and dataset that allow researchers to compare data governance in 68 countries and the EU. We developed 26 indicators (yes/no questions) about data governance at the national and international levels and have used them in 4 years of reports. However, data governance is changing for several reasons: rapid change in AI, a growing recognition that data governance enforcement is inadequate, and a new understanding that some types of data are useful for AI yet are not governed under existing rules (for example, creative content on Reddit or Instagram). The Hub staff is updating the metric to accommodate these and other potential changes, such as data-sharing rules or policies to promote data sovereignty.
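For readers unfamiliar with indicator-based metrics, the minimal Python sketch below illustrates how yes/no indicators of this kind can be aggregated into a comparable score per jurisdiction. The indicator names and answers here are hypothetical illustrations, not the Hub's actual 26 indicators or its data:

    from typing import Dict

    # Hypothetical subset of yes/no indicators (the Hub's real metric has 26;
    # these names are invented for illustration).
    INDICATORS = [
        "has_national_data_strategy",
        "has_personal_data_protection_law",
        "participates_in_international_data_agreements",
        "regulates_ai_training_data",
    ]

    def coverage_score(answers: Dict[str, bool]) -> float:
        """Share of indicators answered 'yes', expressed as a percentage."""
        yes_count = sum(1 for ind in INDICATORS if answers.get(ind, False))
        return 100.0 * yes_count / len(INDICATORS)

    # Hypothetical answers for two jurisdictions.
    answers = {
        "Country A": {
            "has_national_data_strategy": True,
            "has_personal_data_protection_law": True,
            "participates_in_international_data_agreements": False,
            "regulates_ai_training_data": False,
        },
        "Country B": {
            "has_national_data_strategy": True,
            "has_personal_data_protection_law": True,
            "participates_in_international_data_agreements": True,
            "regulates_ai_training_data": True,
        },
    }

    for country, a in answers.items():
        print(f"{country}: {coverage_score(a):.0f}% of indicators present")

A simple percentage like this makes jurisdictions comparable at a glance; in practice, updating the metric (for example, adding indicators on AI training data or data sovereignty) changes the denominator, which is why revisions to the indicator set require re-scoring earlier years before comparisons across reports remain valid.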
Are Public Concerns about AI Lost in Translation? A Comparative Study of Public Input in AI Governance and Its Impact on Trust
Details: Public concern about the proper governance of artificial intelligence (AI) is rising. AI systems hold enormous potential to enhance human capacity, increase productivity, catalyze innovation, and help mitigate complex problems. Policymakers in many countries believe that public involvement in the design, deployment, and governance of AI (often called participatory or human-centered AI) can give citizens a voice and a measure of control over AI systems. Although many users already rely on a wide range of AI systems, most do not understand how these systems work. Without greater understanding, these users may learn to distrust both AI systems and AI governance. Broad consultation is therefore particularly useful as policymakers attempt to govern rapidly changing systems such as AI. In this project, we will examine if, when, and how officials informed and consulted their citizens on issues related to AI safety and risk.
We will use a landscape analysis to compare 6 case studies of jurisdictions that sought public comment on some aspect of AI safety or risk: the US, Canada, the EU, the UK, Australia, and Colombia. Each provided information on the comments and commenters, unless the commenters chose to remain anonymous.
Our findings will provide new information about whether and how governments consult, whether these processes are collaborative, and whether governments listen and respond to such comments. We will also discuss whether these strategies appear to build and sustain the trust governments will need to both nurture and effectively govern AI.