Who writes the code rules the world

Software is becoming the world’s quiet constitution. This essay shows how control over code, data, and infrastructure is reshaping power — and what that means for everyday life.

As software replaces law as the world’s operating system, a new kind of power struggle is unfolding, not over land or armies, but over who gets to write the rules everyone else must follow.

Code used to be a tool. Now it behaves more like… a government. Algorithms decide what we see, which loans we get, how traffic flows, and who gets a visa. The results can be instant and global, yet remarkably hard to question.

Governments still project legitimacy by writing laws, but they often arrive late to the party. By the time a regulation appears, the software it was written to control has already redrawn the map. And when online platforms decide, with little regulation to govern them, what speech is allowed, or when an AI system approves or rejects an application, we have to consider that the real authority sits inside the code itself.

The rise of coded bureaucracy

The old paper bureaucracy has quietly turned digital. “Fax me a copy”, said nobody recently. In China, the state’s data networks are part of its political design and a measure of just how quickly Beijing moved to embrace digitally enabled governance: the digital yuan lets authorities trace transactions in real time, and the Social Credit experiments convert the state’s moral expectations of citizen behavior into algorithms.

In the West, a similar logic runs through private firms. Apple’s privacy settings, Meta’s content rules, OpenAI’s filters: these all work like regulations issued by a company instead of a parliament. They’re enforced automatically, buried somewhere in the ‘terms & conditions’ we never read but must all accept to use new products, and they apply to billions of people.

However, what’s new isn’t the surveillance itself. Empires have always watched us. What’s new is the speed and scale of judgment. We now inhabit a system in which computers make decisions that used to be human, and the engineers and data scientists who build them effectively write the rules of daily life, whether they mean to or not.

China’s approach is deliberate: set digital standards at home, test and refine, then export them through its infrastructure abroad. Western tech companies do something similar from the opposite direction, spreading their own defaults through apps and platforms we take for granted. Either way, someone’s software logic becomes everyone’s norm.

Code in daily life

You don’t need to work in tech to feel this shift; it’s embedded in daily life. Recommendation systems decide what music or news you encounter; automated résumé filters choose which candidates a human recruiter ever sees; traffic algorithms steer us onto new routes, shaping the noise and pollution we experience. Even ‘smart city’ energy systems are gaining the reach to model demand across an entire grid and ration power during heat waves or cold snaps by adjusting supply remotely.

All of this seems convenient, the tangible benefit of computing power spreading into our lives… until it’s not. When a system acts unfairly or blocks a choice, there’s rarely a clear appeals process and nobody to discuss the issue with. There’s just a line of code somewhere deciding how the world should work. These invisible decisions accumulate into something larger: a layer of governance built not from citizen participation via debates or elections, but from design choices made at a keyboard.

From code to truth

Every society runs on a shared idea of what constitutes the truth, built up through shared experience, the distillation of common goals, and how these bear out within our moral conscience. Once written in law books or ledgers, that shared truth is now stored as data. So, it’s fair to presume that whoever controls that data controls the boundaries of our shared societal reality. In China’s case, Beijing’s push for “data sovereignty” and strict control over domestic datasets isn’t only about censorship; it’s about shaping what the machines know and what they don’t. A model trained inside those walls, therefore, can only see the world the state permits.

The United States and Europe, on the other hand, talk about openness and transparency, but rely on private infrastructure that thrives on “lock-in”, the way a technology or digital service makes it hard for users to leave once they’ve joined. In practice, this means that while Western governments promote ideals like open data and interoperability, the digital systems they actually depend on, think cloud hosting, app stores, identity frameworks, payment rails, are owned by private companies whose business models depend on keeping users effectively trapped in their ecosystems.

The open-source movement still matters, with its idea that our online lives can be a place of transparent collaboration, yet many of the platforms we need to realize that idea are corporate property. GitHub belongs to Microsoft. The Linux Foundation, that original bastion of digital liberty, depends on corporate donors. Even ‘community’ models often run on venture money.

As researchers at the Carnegie Endowment note, digital governance has become part of technology itself; both the EU’s AI Act and the White House’s voluntary AI rules are attempts to write laws around systems whose real rules already live inside the code (Carnegie).

The geopolitical operating system

If the twentieth century’s power came from steel and oil, this century’s power comes from chips and data centers and, of course, the supply chains that make those technologies a reality. Control the semiconductors, the cloud, and the large-scale AI models, and you control the pace of everyone else’s progress and how the story unfolds.

In September, the U.S. expanded its export blacklist so that any subsidiary more than 50 percent owned by a restricted Chinese company is automatically banned from buying U.S. technology. Analysts described the move as “architecture, not sanctions”: a rule that shapes what kind of intelligence China can even build (Reuters). At the same time, Beijing is racing to define its own technical norms, planning to publish more than fifty national and industrial AI standards by 2026. These standards may sound dry, but they will decide what counts as “trustworthy” software and how other nations integrate Chinese tech (SCIO).

Together, these policies form a global operating system for politics itself. Where colonial powers once exported law and language, digital powers now export platforms, cloud contracts, and code libraries. Every connection carries a bit of someone’s governance model inside it.

Rules that run themselves

The problem with rules written in code is that they don’t wait for anyone. Updates roll out overnight, and a new algorithm can make yesterday’s behavior non-compliant by the next morning.

This makes debates about “open” versus “closed” AI models more than technical. In simple terms, open models are those whose code or training data can be inspected, reused, or modified by others, the digital equivalent of traditional open research. Closed models, on the other hand, are proprietary: their data, architecture, and decision processes remain hidden inside a company’s servers. Supporters of open systems argue they promote transparency and innovation, while advocates of closed ones say they protect safety, privacy, and intellectual property.

Each approach carries trade-offs, of course, and every decision about what data goes in or what outputs are allowed is ultimately both a moral and a political choice. It decides who gets to speak, who gets visibility, and who gets excluded, all at the speed of an update. Technical standards sound harmless, but they decide values by stealth. Whether a country requires AI watermarks, defines “high-risk” uses, or limits biometric training data, it’s setting a cultural boundary in the language of engineering. And once those definitions are built into products, they spread quietly around the world each time we breeze past those terms & conditions and eagerly accept the update.

Open-source communities can help, but even they concentrate power in maintainers who control what gets merged and released. Public funding tied to transparency, or trust-based stewardship models, could be the lever that turns open projects into something closer to genuine public goods.

A more human internet

The point here isn’t to stop innovation; it’s to build consent back into systems that move faster than oversight can follow. Technology’s speed will always outpace regulation so long as market incentives set the pace, so societies need ways to acknowledge that and embed accountability directly into design. In this regard, three steps matter most.

The first is transparency that actually works: real audit logs and independent access for researchers can turn accountability from a slogan into something measurable. The second is public digital infrastructure: open standards for payments, identity, and data sharing, owned by public institutions or citizen groups, can reduce dependence on corporate networks while keeping the spirit of innovation alive. The third is digital due process: if algorithms can deny credit or employment, people should have a clear way to push back and contest the result, and not via an email form that disappears into the void.

Pieces of this are emerging already. Europe’s transparency laws, Japan’s “human-centric” AI principles, and local experiments with open digital IDs all point toward the same idea: that democracy must learn to code for itself. For now, we live in a mixed system, an experimental moment: half public, half private; half law, half software.

The future won’t be ruled by whoever owns the most land, but by whoever writes the assumptions everyone else must run on. The challenge, then, is to make sure those assumptions remain open to free, democratic debate, not locked inside a black box.


Read this. Notice that. Do something.

Read this: Code has become the quiet constitution of the digital era: less visible than law, but more binding in its real effect. For further context, see Reuters on how the United States expanded its export blacklist to include Chinese subsidiaries; the State Council Information Office of the PRC announcement outlining plans to develop more than fifty new standards for the country’s AI sector by 2026; and the Carnegie Endowment analysis “Rethinking EU Digital Policies” examining Europe’s evolving balance between tech sovereignty and civic accountability.

Notice that: From U.S. chip controls to China’s AI standards, governance is moving inside the machines themselves. Protocols are becoming policy.

Do something: Ask who wrote the code behind the systems you depend on. Support transparency projects. Demand that technology serve people, not the other way around.


Previously on GYST: Chokepoint diplomacy: how China turned minerals into leverage

Next up: Wired planet: can a global power grid survive national politics?