AI and the capacity to govern: Can democracy keep up?
AI is now writing policy, advising generals, and routing decisions at machine speed. Democracies may be playing catch-up, and the key isn’t ideology; it’s capacity.
Artificial intelligence is no longer novel in our lives. We’ve been using it willingly or, when it’s crammed into every app we touch, unwillingly. What has changed is that it has escaped the laboratory and is becoming part of our governance systems.
AI systems are taking an ever larger part in decision-making: writing policy briefs, screening job applicants, predicting court rulings, even advising generals on military tactics. Public institutions built for debate and deliberation are being challenged by machines built for speed and optimization. So the question is changing: it isn’t just whether democratic governments can use AI responsibly; it’s whether they can govern it at all.
In the competition between algorithms designed to optimize and governments designed to deliberate, time itself has become political.
When institutions move slowly and machines don’t
Democratic governance is designed around process: scrutiny, debate, revision, eventual transparency. That rhythm is intentional, because public legitimacy comes from deliberation, from public reasoning, ideally from some form of consensus, from the space between decision and effect. This ‘slowness’ shows us the wrinkles and helps to iron them out. But when the pace of change accelerates, institutions designed for years of consultative evolution begin to lag. Consider the European Union’s Artificial Intelligence Act: it entered into force in August 2024, yet by the time many of its obligations (such as the transparency and governance rules for general-purpose AI models) applied in August 2025, the underlying models had already shifted (European Commission).
Then take the United States, where the National Institute of Standards and Technology’s transformation of the former “AI Safety Institute” into the “Center for AI Standards and Innovation” (CAISI) reflects a partisan shift in how AI oversight is conceived. The program’s move from public-interest safety to national-security standards mirrors the administration’s broader push for executive control and industry partnership over transparent, multi-stakeholder review. The result is that questions of bias, privacy, and rights all risk being subordinated to questions of competitiveness and defense.
These cases show us a fundamental mismatch: architectures that take months or years to build are called upon to regulate systems whose capabilities are evolving in mere weeks. The real risk is that democratic governments may end up legislating for yesterday’s tools while tomorrow’s systems are already operating.
Code as politics: The hidden decision-making inside automation
Oversight itself is increasingly being outsourced. “Regtech” solutions, meaning the machine-learning systems that monitor markets, flag fraud, or assess compliance, are now the supervisors of supervisors. The issue is that in turning oversight into code, we embed choices: what data count, whose history matters, and what counts as a “flag.” Those choices are political, as the sketch below suggests.
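To make that concrete, here is a deliberately minimal, hypothetical sketch of a compliance “flag” in Python. Every name, weight, and threshold below is invented for illustration; no real regtech system is being described. The point is that each constant reads as engineering but acts as policy.

```python
# A minimal, hypothetical sketch of a "regtech" transaction-flagging rule.
# Every constant below is a policy decision dressed up as an engineering one:
# the threshold, the features chosen, and the histories they encode.

from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float          # transaction size in euros
    country_risk: float    # 0.0-1.0 score; who scored the countries, and how?
    prior_flags: int       # count of past flags; carries historical bias forward

# Weights and threshold are illustrative assumptions, not any real system's values.
WEIGHTS = {"amount": 0.00001, "country_risk": 0.6, "prior_flags": 0.25}
FLAG_THRESHOLD = 0.5       # moving this one line reclassifies thousands of people

def risk_score(tx: Transaction) -> float:
    """Linear score: simple to audit, but the inputs carry the politics."""
    return (WEIGHTS["amount"] * tx.amount
            + WEIGHTS["country_risk"] * tx.country_risk
            + WEIGHTS["prior_flags"] * tx.prior_flags)

def should_flag(tx: Transaction) -> bool:
    return risk_score(tx) >= FLAG_THRESHOLD

if __name__ == "__main__":
    tx = Transaction(amount=9_500.0, country_risk=0.7, prior_flags=1)
    print(should_flag(tx), round(risk_score(tx), 3))
```

Shift FLAG_THRESHOLD by a tenth, or reweight country_risk, and a different population gets flagged. That is regulation by parameter, decided far from any parliament.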
Meanwhile, global standards bodies such as the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), and the Organisation for Economic Co-operation and Development (OECD) are shifting from questions of physical safety to questions of digital ethics: how to ensure transparency, limit bias, and make systems auditable. The language sounds technical, but the choices behind it are political: how much to prioritize innovation over precaution, or privacy over productivity.
At the same time, the companies building the biggest AI systems, such as OpenAI, Google, and Microsoft, are effectively writing their own constitutions. They decide how models are trained, what users can generate, and which laws apply. Their accountability to shareholders is often clearer than their accountability to the citizens who use their technologies.
In short, oversight is beginning to look like the thing it is meant to regulate: fast, opaque, and brittle.
Data democracy: whose histories matter?
Let’s take a step back and remember that these systems aren’t innovating for themselves, yet. Any AI system is only as good as the data it rests on, what it has been fed to learn from. Therein lies the rub: data aren’t neutral. They reflect histories, languages, cultures, and power relations.
Democracies rely on a sense of ‘representativeness’, the notion that decision-making reflects the broad swathe of public opinion. Yet global datasets continue to reflect the cultural and linguistic biases of the high-income economies where the companies that build them reside. Many large language models therefore markedly underrepresent the Global South, women, and minority languages.
When states deploy proprietary systems trained on narrow populations, the depth of the problem becomes visible: facial recognition algorithms misidentify darker-skinned faces, predictive-policing software echoes old race- and ethnicity-based biases, and language models flatten dialects into generic forms, removing the color from our species’ diversity of expression. To compound the situation, citizens may have no real transparency about how they are being classified, no right to explanation, no ability to contest the categories. No human on the other end of the line to speak to.
The EU’s AI Act obligates providers of general-purpose AI models to draw up technical documentation and publish a summary of the training data. Fine. But disclosure is only a first step: without the public capacity to understand, contest, or influence how data-driven systems work, the democratic deficit in data becomes a democratic risk, a weakening of the system itself.
Capacity: the new frontline of democracy
Governance of AI is less about ideological preference and more about institutional capability. AI is, after all, a tool that must be learned before its promise can be used effectively. Again, the human systems need to play catch-up, since AI demands technical literacy inside institutions that were built for paperwork. To address this, some democracies are quietly experimenting: Estonia, long a leader in digital governance, is embedding machine-learning assistants in its ministries; South Korea is framing a national AI ethics charter through a process of civic consultation; and the EU is establishing the European Artificial Intelligence Office to audit compliance (though as of mid-2025 its staffing remains modest).
At the same time, international groups like the Global Partnership on AI and the OECD’s policy networks are early tests of shared governance: they can set norms but not enforce them. Their legitimacy is real; their power is limited. What matters most, then, is developing capacity that enables participation, transparency, and learning, not just more rule-making.
The true test of democratic strength will be whether societies can embed a sense of reflection into design: to insist that automated systems are built, from the ground up, with human override, open audit trails, and public-sector AI literacy, rather than to chase regulation after the fact. Indeed, the question isn’t whether democracy can keep up; it’s whether it can keep meaning.
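What would “human override and an open audit trail” look like at the level of a single automated decision? A minimal sketch follows; every class, field, and system name in it is an assumption made for illustration, not a reference to any real government deployment.

```python
# A minimal sketch of what "human override + open audit trail" could mean
# in code. All names and fields are illustrative assumptions, not a standard.

import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DecisionRecord:
    """One automated decision, written in a form a citizen (or court) can read."""
    subject_id: str                       # whom the decision affects
    model_version: str                    # which system decided
    inputs: dict                          # the data the model actually saw
    outcome: str                          # what the system decided
    explanation: str                      # plain-language reason, not just a score
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())
    overridden_by: Optional[str] = None   # a named human, when override occurs
    override_reason: Optional[str] = None

    def override(self, official: str, new_outcome: str, reason: str) -> None:
        """Human override is a first-class operation, not an afterthought."""
        self.overridden_by = official
        self.override_reason = reason
        self.outcome = new_outcome

    def to_audit_log(self) -> str:
        """Append-ready JSON line for an open audit trail."""
        return json.dumps(asdict(self))

if __name__ == "__main__":
    rec = DecisionRecord(
        subject_id="applicant-042",
        model_version="benefits-screener-1.3",  # hypothetical system name
        inputs={"income": 18_000, "dependents": 2},
        outcome="denied",
        explanation="Income above the automated eligibility threshold.",
    )
    rec.override("case.officer@example.gov", "approved",
                 "Threshold misapplied to household size.")
    print(rec.to_audit_log())
```

The design choice here is the point: when the record itself carries the inputs, the explanation, and a named human who can reverse the outcome, “right to explanation” stops being a slogan and becomes a field a regulator can query.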
From reaction to stewardship
It has been a heady, swirling few years since the dawn of public AI adoption: a swing from suspicion to reliance, and an internet ever more filled with automatically generated ‘content’ instead of useful, considered information. We are struggling to keep up, to keep a sense of control. The next phase of AI governance, likewise, is not about dominating the technology but about stewarding its practical use.
Governments cannot hope to control every line of code when the models scale faster than laws, but they can reshape incentives: funding open-source frameworks, requiring auditability, and protecting the right to explanation for algorithmic decisions. In the U.S., CAISI signifies a shift toward industry collaboration and standards. In the EU, obligations for providers of general-purpose AI models took effect on 2 August 2025. These moves suggest democracies are beginning to shift from reaction to capability-building, at scale.
That capacity, however, is not neutral, and not a given. It requires investment, institutional change, and public literacy. So in our modern world, where data, code, and infrastructure blur into power, capacity becomes the new frontline of democracy.
If governments fail to build it, they will cede control of their tools, sure, but also of their democratic agency, their legitimacy. Algorithms will decide, companies will rule, and we citizens will be spectators. That is not technical failure; it is the wholesale abdication of democracy.
Read this. Notice that. Do something.
Read this.
European Commission announcement that the AI Act entered into force in August 2024, with staged applicability following; NIST / CAISI hub for U.S. Center for AI Standards and Innovation updates and evaluations (establishes the standards-and-testing pivot and ongoing remit); and the OECD AI Principles, the intergovernmental baseline for trustworthy AI (human rights, democratic values, accountability) used by many democracies as a reference frame.
Notice that.
Democratic timelines move by law, but AI evolves on its own clock. The EU’s rules started applying to major models in August 2025, while the U.S., through CAISI, has focused on standards, testing, and security: two different ways of facing the same speed problem.
Do something.
Build public sector AI literacy (policy + product + data engineering); require open audit trails / right to explanation for automated decisions in government procurement; and align national guidance with an OECD-compatible accountability stack so that cross-border cooperation doesn’t stall at the border.
Previously on GYST: The paradox of energy independence
Next up: Tariff truce or tactical reset? What the fragile US-China thaw signals for global governance