The battle to govern AI: How Europe regulates, America resists, and China advances
AI is no longer just a technology story – it’s a governance contest. Europe is trying to regulate, America is resisting, and China is advancing with a state-steered model. Three rulebooks are emerging, and the outcome will decide whose values shape the digital future.
Artificial intelligence is no longer just a product category or a research field. Day by day, it is becoming an organizing layer for everything from credit scoring and hiring to border control, policing, and media, embedding itself in our daily processes faster than we can reasonably keep up with. With such instant ubiquity, the real contest is not only over who has the most powerful models or the fastest chips, but over who gets to decide the rules that govern those systems, the systems we increasingly adopt to, well, run daily life.
In an earlier GYST piece, we asked whether democracy could keep up with AI’s pace. Now the question is shifting: who writes the rulebook, and whose values get embedded in the next layer of global infrastructure?
In this respect, three very different approaches are emerging, one for each of the major players:
- The European Union is trying to turn rights-based regulation into a strategic asset.
- The United States is leaning on market power, innovation nationalism, and trade policy to defend its firms from constraint.
- China is building a state-steered model that combines rapid deployment with tight political and informational control.
The battle to govern AI is really a battle over whose values and interests define the architecture of digital society.
Europe: regulatory sovereignty as strategy
Europe’s claim to govern AI rests on a decade of precedent. The General Data Protection Regulation (GDPR), in force since 2018, did more than tidy up privacy notices. It asserted that any company handling the data of people in the EU, wherever that company is based, must play by European rules. And because the law applies extraterritorially, many firms simply elevated their global standards to match GDPR rather than run different compliance regimes in different markets. Yes, you must pay to play in this backyard, but in doing so, we will help you up your game. Notably, though, many firms decided not to comply, leading to awkward homepages with self-exculpatory messaging that basically states: “we’re not willing to participate in the legal protection of your data, so you can’t presently access this site.” Many of these sites are U.S.-based, underlining the tendency there towards ‘freedom’ of operation with regard to privacy. (European Parliament).
In any case, GDPR is a solid example of the “Brussels effect” in practice: one market’s legal framework can shape, and become, a default global benchmark, as we previously covered in our GYST piece on AI governance capacity.
From there, the EU moved decisively into platform power. The Digital Markets Act (DMA) defines a small group of “gatekeeper” platforms and bans practices like “self-preferencing” and locking business users into one ecosystem. The Digital Services Act (DSA) tackles systemic risk and accountability for very large platforms, obliging them to assess how their services can amplify harms and, in the name of transparency, to share more data with regulators and researchers. These laws treat platforms not just as private companies operating in a profit-driven marketplace, but as infrastructure.
The AI Act, agreed in 2024, extends that approach into automated decision-making, classifying AI systems by risk: some uses, such as social scoring of citizens or certain forms of biometric surveillance, are prohibited outright; high-risk uses (hiring, credit, education, law enforcement, critical infrastructure, etc.) face strict obligations around data quality, documentation and, critically, a layer of human oversight. (Tech Policy Press).
It’s important to observe that this is not just legal engineering; it’s also an institutional expectation. A 2025 Pew Research Center survey conducted across 25 countries found that a median of 53 percent of people trust the EU to regulate AI effectively, versus 37 percent for the United States and 27 percent for China. That trust is part of the EU’s core brand, a form of soft political power. It is, however, also fragile. (Pew Research Center).
Over the past year, lobbying by U.S. tech giants (and, it must be noted, some European firms), combined with warnings about “pushing innovation out of Europe,” has intensified. Reports and interviews with EU officials describe pressure to delay or dilute parts of the AI Act’s implementation, and to take a lighter, more forgiving approach to enforcing the DMA and DSA in the name of competitiveness. If Europe blinks too hard, for instance by indefinitely postponing enforcement or quietly allowing broad exemptions, it risks eroding its own brand power, turning itself from a largely trusted rule-setter into a cautionary tale that fell under the wheels of the ‘free market’ monster truck.
So, the lesson is stark: regulatory sovereignty only carries weight if the rules survive first contact with concentrated corporate and geopolitical pushback.
United States: innovate… then regulate… maybe not?
The U.S. hosts most of the world’s dominant AI and platform companies, but its regulatory framework is still patchy, which, if you step back, is why these companies thrive there in the first place. There is no federal equivalent to GDPR and no comprehensive AI statute comparable to the EU’s. Instead, policymakers have relied on a mix of sector rules, agency enforcement, and voluntary commitments, which are, well, voluntary. In October 2023, the Biden administration issued an Executive Order on “safe, secure, and trustworthy AI,” then brokered voluntary pledges from major AI firms on testing. None of that, however, created enforceable rights or clear legal boundaries of liability. (White House Archives).
While the earlier U.S. approach emphasized standards and safety via agencies like the National Institute of Standards and Technology (NIST) and its Center for AI Standards and Innovation (CAISI), the current administration has shifted the tone to one of outright resistance. It has openly framed foreign regulation, meaning the EU rules, as an attack on U.S. strategic assets, recently retaliating with the unilateral threat of tariffs against countries that adopt digital services taxes or stricter platform rules that “target American technology companies.”
Alongside Washington’s retaliatory stance, Big Tech and parts of the U.S. trade apparatus have pushed to embed deregulatory norms into digital trade rules. Provisions on data localization, source-code access, and non-discrimination can effectively block those pesky stricter regulations from overseas. By casting greater oversight as a fundamental barrier to trade, the goal is to entrench an anti-regulatory agenda.
So where, exactly, does this leave the U.S. right now? Arguably, in an odd place: quick to invoke national security to curb Chinese tools and interference, but slow to create reliable, enforceable AI safeguards at home. It urges allies to align on “trusted AI,” then pushes back in the name of free trade when they write actual laws to mandate transparency. The current administration is now going further still, considering an executive order to punish states that implement AI regulations. (AP).
The net effect of all this is a strategic laissez-faire: protect the innovation advantage, forcefully resist constraint and, with fingers crossed, hope that voluntary norms and the best intentions of Big Tech will suffice.
China: control, compliance, and exporting the system
China has, in its own way, moved faster than many democracies in converting AI principles into binding rules. Its approach differs, however, because the goal is alignment, not rights. The Personal Information Protection Law (PIPL), alongside algorithmic recommendation and generative AI rules, forms the legal basis. Together, these rules require platforms to register their algorithms, offer extensive user controls, and prevent outputs that threaten “social stability.”
Rather than re-explaining the details of these laws, what matters now is that China is making a huge strategic pivot, an informed economic gamble of sorts, on AI as a core substrate of its governance model, no longer just as a set of domestic tools. And while China wrestles with the thorny issue of a 17-18% youth unemployment rate nationwide (SCMP), it is seeking to embed AI across its governance systems to the point where, given an acceptable degree of success, those systems become templates for export. Chinese firms are expanding into global markets with AI platforms embedded within a logic of control, while Beijing promotes its regulatory approach in international standards bodies.
So, what’s the sales pitch here? Pretty simple, actually: powerful systems, industrial support, and governance models that don’t require Western-style checks and balances. This is about entering the “fifth industrial age,” as GYST explored recently, to “redistribute choice” from a consumer perspective. From Beijing’s perspective, it means taking the lead on standards, capturing value, and putting itself in a political position to rewrite the rulebook.
And for governments that want digital power without digital rights, this is indeed an attractive offer.
Three models, real people, and the next decade
Put these three approaches side by side and they are not just regulatory styles; they are competing visions of the emergent global digital order.
- Europe is betting that binding, rights-based rules can build public trust, starting at the very personal level of digital rights and privacy protection, and tame the power of Big Tech. The risk is that its political will collapses under lobbying pressure.
- The U.S. is betting that innovation, market dominance, and voluntary norms will suffice. The risk is that by ceding the role of global rule-setter to others, which it arguably already has, it leaves its citizens underprotected.
- China is betting that scale, speed, and ideological alignment can deliver AI dominance, regionally and beyond, while fortifying regime stability at home through the practical embedding of AI-led governance systems. The risk it faces is global mistrust and market rejection by open democracies.
While this argument sits at the governmental and supranational levels, it inherently applies to ordinary people. This is in no way an abstract debate over the vagaries of trade or the minutiae of legal interpretation. The outcome will fundamentally shape how AI is governed and implemented in our societies: how it determines access to credit or jobs, how it is used in surveillance or education, and whether its systems can be challenged or audited at all.
The fight to govern AI won’t hinge on who builds the most dazzling models. It will hinge on who writes and enforces rules that protect people rather than profit or power. Right now, Europe is the only major actor attempting that at global scale. The question is whether it can hold the line, or whether it will be outpaced by those with very different priorities.
Read this. Notice that. Do something.
Read this: EU AI Act explainer, the first comprehensive AI law and how its risk-based system works. Looking back from the future, it could prove a historic document, or be completely forgotten.
Notice that: Pew Research global survey shows 53% trust the EU, 37% the US, 27% China to regulate AI effectively.
Do something: Read Tech Policy Press on how digital trade rules can entrench or resist Big Tech’s anti-regulatory push.
Previously on GYST: Climate migration’s first breaking point won’t be borders
Next up: Europe’s global crossroads (Part I): from outsourced sovereignty to strategic agency