Research

Digital Trade and Data Governance Hub 2025 Research Priorities

Since 2019, the Digital Trade and Data Governance Hub at GWU has conducted cutting-edge research on how governments develop data governance frameworks, regulate AI, and shape digital trade policies. Our work includes comparative policy analysis, multi-country datasets, and in-depth studies that help stakeholders navigate the evolving digital landscape.

Beyond research, we foster dialogue through monthly webinars, open to all, where experts discuss key issues in AI, data governance, and digital trade. We lead the Digital Trade Study Group, a forum for trade policymakers in the US, UK, and Canada, where officials can learn about current and future digital trade and data governance challenges. Finally, we develop training programs for policymakers on data-driven change, AI, digital trade, and data governance.

Here are our current research projects:

The AI Catch-22: How Efforts to Mitigate the AI Monopoly May Empower the AI Monopolists

Details: Policymakers in many countries are determined to nurture domestic suppliers of artificial intelligence (AI) because they believe AI is both a “general-purpose technology” (essential to their nations’ economic growth) and a “dual-use technology” (essential to national security). We hypothesize that as these officials work to develop national capacity in AI, their countries could, in turn, become ever more dependent on the giant companies that develop and provide AI services and infrastructure, such as Google, NVIDIA, and Tencent. Moreover, as policymakers foster AI at the national level, these nations may collectively be creating a global overcapacity (overproduction) in AI. Overcapacity is common in dual-use technologies because many officials believe they must maintain domestic capacity for national security purposes, even when domestic producers are uncompetitive. Consequently, these same policymakers may unintentionally be creating an AI Catch-22: empowering the giant AI service and infrastructure providers, which are better positioned to endure global overcapacity in AI.
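A toy numeric sketch may make the overcapacity arithmetic concrete; all figures below are hypothetical assumptions chosen only for illustration, not estimates:

```python
# Toy illustration of the overcapacity dynamic described above.
# All figures are hypothetical, chosen only to show the arithmetic.

n_countries = 10             # countries subsidizing domestic AI capacity
capacity_per_country = 0.30  # each targets 30% of projected global demand
global_demand = 1.00         # normalized to 100%

total_capacity = n_countries * capacity_per_country
excess = total_capacity - global_demand

print(f"Aggregate capacity: {total_capacity:.0%} of projected demand")  # 300%
print(f"Excess capacity:    {excess:.0%} of projected demand")          # 200%
```

If many governments subsidize capacity simultaneously, aggregate supply can far exceed demand, and the firms best able to absorb losses, the incumbent giants, are the likeliest survivors of the resulting shakeout.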

The research team will answer 6 questions:

1. Which nations have determined that they must develop AI?
2. What policies have these nations adopted to develop domestic AI?
3. What is the evidence that these national efforts could result in global overcapacity?
4. What is the evidence that the giant AI firms are better able to endure overcapacity?
5. What are the risks of this quandary?
6. How can policymakers address these risks?

With this project, we hope to help researchers and policymakers around the world ascertain whether they are better off investing in AI, and to identify ways to internationalize AI development.

Comparing Corporate Approaches to Participatory AI

Details: This project, funded by NSF-NIST TRAILS, will compare how US companies involve their stakeholders in AI design, development, deployment, and governance. A current overview follows; each entry lists the company’s participatory/constitutional approach, its governance and ethical commitments, and its public-input programs or forums:

 

OpenAI
Participatory/Constitutional Approach: Participated as a “committed audience” in CIP’s “Alignment Assemblies” on AI risk; funds Democratic Inputs to AI grants. (Also uses RLHF and has explored “Constitutional AI” techniques.)
Governance/Ethical Commitments: Its Charter commits to broad public benefit and cooperation; publishes research and safety work; co-founder of the Partnership on AI.
Public-Input Programs or Forums: Democratic Inputs to AI grant program (2023); AI “risk prioritization” exercise with CIP in June 2023, which surveyed 1,000 Americans.

Google/DeepMind
Participatory/Constitutional Approach: Guided by Google’s AI Principles; uses internal review councils (RSC, AGI Safety Council) and external experts/red teams.
Governance/Ethical Commitments: Committed to responsible development under the Google AI Principles; co-founder of the Partnership on AI; publishes safety research.
Public-Input Programs or Forums: Open-science initiatives (AlphaFold server, educational outreach). (Unable to verify any public governance initiatives.)

Meta
Participatory/Constitutional Approach: Deliberative “Community Forums” (October 2023) using polls with randomly selected users (Stanford/BIT partnership).
Governance/Ethical Commitments: Emphasizes new governance models (e.g., the Oversight Board); co-founder of the Partnership on AI; has open-sourced roughly 1,000 AI models over the last decade.
Public-Input Programs or Forums: GenAI Community Forum (October 2023): 1,545 participants (US, Brazil, Germany, Spain) deliberating principles for AI chatbots.

Perplexity AI
Participatory/Constitutional Approach: Emphasizes source citation and accuracy in answers (built-in transparency); no known participatory governance initiatives.
Governance/Ethical Commitments: Claims a focus on reliability and user trust through citations.
Public-Input Programs or Forums: (No known public governance initiatives.)

Anthropic
Participatory/Constitutional Approach: Uses Constitutional AI (an internal model constitution plus fine-tuning); ran a public constitution-drafting process with CIP and Stanford’s Polis (October 2023).
Governance/Ethical Commitments: Publicly shares alignment research; published Claude’s “constitution” of safety values (2023); co-founder of the Partnership on AI.
Public-Input Programs or Forums: Collective Constitutional AI (October 2023): ~1,000 Americans helped draft an AI constitution via Polis.

Mistral AI (French startup)
Participatory/Constitutional Approach: No publicly announced participatory programs; focuses on open-source models with privacy/GDPR features.
Governance/Ethical Commitments: States a commitment to transparency and privacy by design (per European regulations).
Public-Input Programs or Forums: (No known public governance initiatives.)

Aleph Alpha (German/EU startup)
Participatory/Constitutional Approach: Emphasizes “sovereign” AI for enterprise/government (data control, explainability); no public consultation initiatives noted.
Governance/Ethical Commitments: Advocates transparency and security; developed an explainability/content-correction model.
Public-Input Programs or Forums: (No known public governance initiatives.)

Stability AI
Participatory/Constitutional Approach: Open-source focus: freely releases code/weights for models (Stable Diffusion, etc.) for public scrutiny; advocates policy transparency.
Governance/Ethical Commitments: Signatory to the White House AI commitments and UK AI safety statements (2023); emphasizes open, collaborative development.
Public-Input Programs or Forums: (No known public governance initiatives.)

Updating the Global Data Governance Mapping Project to Address Changes in Data Supply, Demand and Governance (in process; partially funded)

Details: As noted above, since 2019 the Hub has published a metric and dataset that allow researchers to compare data governance in 68 countries and the EU. We developed 26 indicators (yes/no questions) covering data governance at the national and international levels and used them in four years of reports. However, data governance is changing for several reasons: rapid change in AI; a growing recognition that data governance enforcement is inadequate; and a new understanding that some types of data are useful for AI yet not governed under existing rules (for example, creative content on Reddit or Instagram). The Hub staff is updating the metric to accommodate these and other potential changes, such as data-sharing rules and policies to promote data sovereignty.
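As a minimal sketch of how a binary-indicator metric like this can be scored and compared across countries (the indicator names and answers below are hypothetical illustrations, not the Hub’s actual 26 indicators or published data):

```python
# Minimal sketch of scoring a binary-indicator data-governance metric.
# Indicator names and answers are hypothetical illustrations, not the
# Hub's actual 26 indicators or its published data.

INDICATORS = [
    "comprehensive_data_protection_law",
    "independent_enforcement_authority",
    "party_to_international_data_agreements",
]

answers = {
    "Country A": {"comprehensive_data_protection_law": True,
                  "independent_enforcement_authority": True,
                  "party_to_international_data_agreements": False},
    "Country B": {"comprehensive_data_protection_law": True,
                  "independent_enforcement_authority": False,
                  "party_to_international_data_agreements": False},
}

def score(country_answers: dict) -> float:
    """Share of 'yes' answers across all indicators (0.0 to 1.0)."""
    return sum(country_answers.get(i, False) for i in INDICATORS) / len(INDICATORS)

for country in answers:
    print(f"{country}: {score(answers[country]):.2f}")
```

A simple share-of-yes-answers score keeps countries comparable even as indicators are added or revised, which is one reason yes/no questions suit a multi-year, multi-country dataset.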

Are Public Concerns about AI Lost in Translation? A Comparative Study of Public Input in AI Governance and Its Impact on Trust

Details: Public concern about the proper governance of artificial intelligence (AI) is rising. AI systems hold enormous potential to enhance human capacity, increase productivity, catalyze innovation, and help mitigate complex problems. Policymakers in many countries believe that public involvement in the design, deployment, and governance of AI (often called participatory or human-centered AI) can give citizens a voice and a measure of control over AI systems. Although many users already rely on a wide range of AI systems, they do not understand how these systems work, and without greater understanding they may learn to distrust both AI systems and AI governance. Broad consultation is therefore particularly useful as policymakers attempt to govern rapidly changing systems such as AI. In this project, we will examine if, when, and how officials informed and consulted their citizens on issues related to AI safety and risk. We will use a landscape analysis to compare 6 case studies of jurisdictions that sought public comment on some aspect of AI safety or risk: the US, Canada, the EU, the UK, Australia, and Colombia. Each sought public comment on some aspect of AI risk and published information on the comments and commentary, unless commenters chose to remain anonymous.

Our findings will provide new information about whether and how governments consult, whether these processes are collaborative, and whether governments listen and respond to such comments. We will also discuss whether these strategies appear to build and sustain the trust governments will need to both nurture and effectively govern AI.
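One way such a cross-jurisdiction comparison might be structured, as a minimal sketch; the attribute names and values below are hypothetical placeholders, not the study’s actual coding scheme or findings:

```python
# Illustrative sketch of one way to structure the cross-jurisdiction
# comparison. Attribute names and values are hypothetical placeholders,
# not the study's coding scheme or findings.

from dataclasses import dataclass

@dataclass
class CaseStudy:
    jurisdiction: str
    consulted_public: bool       # officials sought public comment on AI risk
    published_comments: bool     # comments/commentary were made public
    responded_to_comments: bool  # government visibly engaged with comments

cases = [
    CaseStudy("Jurisdiction A", consulted_public=True,
              published_comments=True, responded_to_comments=False),
    CaseStudy("Jurisdiction B", consulted_public=True,
              published_comments=True, responded_to_comments=True),
]

# Tally how many case studies exhibit each practice.
for attr in ("consulted_public", "published_comments", "responded_to_comments"):
    count = sum(getattr(c, attr) for c in cases)
    print(f"{attr}: {count}/{len(cases)}")
```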