How AI Is Reshaping Federalism: A New Era of Power, Policy, and Trust
Artificial intelligence is rapidly becoming a central force in public life. But beyond improving government efficiency or transforming how we interact with services, AI is also changing the deeper structure of our democracy—starting with the way power is distributed in federal systems like the United States.
As part of our policy center’s ongoing work, I recently reviewed a comprehensive report from the Carnegie Endowment that dives deep into the relationship between AI and federalism. What follows is a summary of the most important takeaways—for policymakers, public servants, and civic technologists alike.
The New Power Dynamics: State-Led Innovation
AI is triggering a dramatic shift in the balance of power between the federal government and the states.
States like California, Virginia, and Colorado have passed comprehensive data privacy laws that regulate how algorithms make decisions about people’s lives. By 2024, 18 states had enacted laws targeting deceptive AI-generated “deepfakes” during election cycles.
Many states are also using AI in daily operations: a 2023 survey found that 51% of public sector employees use AI tools regularly. Vermont created a Division of Artificial Intelligence to track all government AI systems, while Washington State identified 129 automated decision tools in use across its agencies.
The Private Sector Challenge: Tech Giants as Quasi-Governments
Governance isn’t just shifting between federal and state governments—it’s shifting away from government altogether.
Private firms like OpenAI, which projected $5 billion in compute costs in 2024, now wield more control over AI development than most public institutions. These firms possess proprietary models, massive datasets, and elite talent pipelines.
This raises serious concerns about democratic oversight. As Bruce Schneier has argued, governments need to create a “public AI option”—publicly supported AI infrastructure to counterbalance corporate control.
Global Models of AI Governance: Lessons from Four Federations
United States — Fragmented, bottom-up approach. States lead on regulation while the federal government struggles to keep pace.
European Union — Strong centralization through the EU AI Act, creating uniform rules across member states and exporting standards via the “Brussels Effect.”
Canada — Balanced model via the proposed AI and Data Act (AIDA), which complements provincial efforts like Quebec’s Law 25.
India — Centralized strategy with state implementation under the $1.2 billion India AI Mission.
Trust, Democracy, and the Civic Fabric
AI is both an opportunity and a threat to public trust. According to a Brookings poll, 82% of U.S. voters don’t trust tech executives to regulate AI responsibly—and confidence in the government’s ability to do so isn’t much better.
While AI can increase accessibility and efficiency, the rise of AI-generated misinformation poses grave risks to democratic discourse. Deepfake videos and voice cloning have already disrupted the 2024 election cycle.
What Comes Next: Principles for AI Federalism
To ensure AI strengthens rather than erodes democratic governance, governments must:
Balance innovation with protective guardrails
Counterbalance corporate control with public infrastructure
Build transparent, explainable AI systems
Coordinate across levels of government—state, federal, and global
Let’s Continue the Conversation
AI and federalism are no longer niche or abstract topics. They are at the heart of our democracy’s evolution in the digital age.
If your organization is working on these issues—or wants to explore them further—I’d love to connect. Together, we can build AI governance models that are not only smart but just, inclusive, and resilient.
Recommended Reads
Carnegie Endowment: Technology Federalism
NCSL: AI in State Government
Brookings: Trust in AI
New America: Regulating AI