2025: when technology stopped being a free-for-all

For decades, technology expanded faster than the rules meant to govern it. By 2025, that balance flipped. As systems became too embedded and costly to reverse, governments stopped trusting markets to fix problems later and began setting limits up front.

For roughly three decades, advanced technology expanded faster than the rules meant to govern it. Governments tolerated this imbalance because it was, in effect, an easy ride: growth was strong, innovation was politically popular, and failures still looked manageable, helped along by a headline-grabbing “Let’s head to work on Monday to embrace our mistakes!” tech-sector mentality. Regulation was expected to follow later, once the damage was clearer and systems more settled. That permissive arrangement rested on a belief that reversibility was cheap: that mistakes could be corrected after deployment without destabilizing the system.

We’re now in the mid-2020s and it feels like that belief collapsed. What changed in 2025 was not innovation itself, but the willingness of states to keep absorbing risk in the hope that markets would simply self-correct.

The cause isn’t easy to pin down, because the shift did not arrive through a single crisis or treaty. Instead, it emerged through accumulated pressure across technology, energy, finance, and geopolitics. Systems that once looked flexible began to look brittle at scale, and the costs of undoing deployed technologies, legally, economically, and politically, rose sharply, while the relative costs of constraining them earlier fell. By last year, most major actors had stopped asking whether advanced technologies required structural governance and moved on to negotiating how restrictive that governance should be. That change in posture marks the end of the permissive phase.

The permissive bargain finally broke

Since the mid-1990s, globalization and digital growth rested on a permissive bargain: efficiency was treated as an unquestioned good, governance as something that could safely lag behind the glitz of innovation, and interdependence as a force that would soften coercive power rather than redistribute it. For a time, those assumptions delivered real gains. In the heady globalizing days of the early century, trade expanded rapidly, capital moved with minimal friction, supply chains massively expanded and optimized across borders, and digital platforms boomed, scaling with limited oversight. The gaps in governance were visible, sure, but they were politically tolerable because failures still appeared contained and, crucially, reversible. Nobody really questions the system when the going is good.

By the early 2020s, however, that tolerance had largely evaporated. Expansion no longer added resilience; it increased exposure. Financial systems transmitted shocks faster than institutions could respond, a pattern documented repeatedly since the global financial crisis (IMF). At the same time, advanced digital systems had grown so large and so embedded that the line between development, deployment, and impact disappeared, pushing responsibility onto those who design and deploy systems rather than leaving consequences to be dealt with later. The realities of our insatiable need for energy became apparent, and that demand tied digital growth directly to grid stability, emissions targets, and local politics, shrinking the margin for error (IEA). What had once been manageable as “isolated risk” began to compound across a vastly more interconnected and interdependent system.

The core problem was not that systems failed, but that failure became intolerably expensive to unwind: once platforms and infrastructure were embedded in public administration, labor markets, and security functions, any realistic reversal carried legal, economic, and political costs far exceeding those of pre-emptive constraint. “Fix it later” stopped being a governing principle and became a liability.

Scale changed things

The first hard constraint was scale. By the early 2020s, performance gains in AI came primarily from unchecked growth, from larger models, more data, and vastly greater computing power, rather than from cleaner, more considered architectural design (Stanford University). As systems grew more complex, responsibility diffused: when failures occurred, it became increasingly difficult to assign liability among model developers, deployers, downstream users, infrastructure providers, and everyone else in the massive web of participants. Governance frameworks designed for slower, discrete technologies could not keep pace with systems deployed continuously and globally.

Scale also concentrated power. Computing power, or compute, became the binding input, and compute was a hungry beast, requiring capital, energy access, and physical infrastructure at a scale available to only a handful of actors. Hence the daily drama around data centers, which became political objects rather than neutral facilities, competing with housing, water use, and climate commitments for shares of energy generation (IEA). Decisions about deploying new models were no longer abstract technical choices; they carried distinct local political costs. Governments that had once treated digital infrastructure as a market concern, something optional, were forced to treat it as a matter of public legitimacy.

The shift to precautionary governance

By 2025, these pressures had converged into a clear shift in the underlying logic of governance. Voluntary commitments and self-regulation stopped functioning as credible guardrails: once advanced systems were embedded in economic management, public services, and security, any delay between rollout and correction became politically unacceptable. Limits are no longer hypothetical; they are being applied.

The European Union’s AI Act makes the shift clear. What started as an attempt to manage risk has turned into a set of hard rules: some uses of AI are simply banned, while others must meet strict requirements before they can be deployed, regardless of how popular or profitable they are (EU). The point is that some governments are no longer waiting for problems to show up before acting, but are choosing to set limits at the design stage. Other countries have reached the same conclusion by different routes. Investment screening, export controls, and procurement rules increasingly treat advanced technologies not as ordinary commercial products but as strategic assets, and measures once described as temporary or exceptional have become standard practice. The focus has shifted away from speed and toward risk.

Fragmentation as a constraint

Once it became clear that regulatory differences were not going away, companies began to design around them rather than treating them as a nuisance, a temporary friction. Products were no longer conceived for a single global market and then adjusted at the edges; they were shaped from the point of design by the jurisdictions in which they would be allowed to operate. Decisions about model design, data management, and deployment increasingly reflected what regulators would permit in specific markets, often narrowing choices that had previously been driven by open demand or the universal quest for efficiency. The assumption that systems could be built once and distributed everywhere without consequence gradually fell out of use.

Investment patterns followed. Firms able to demonstrate regulatory compliance, supply-chain control, and secure access to energy found themselves better positioned than those optimized solely for speed or scale. Governments reinforced this through new procurement rules, investment screening, and industrial policy that favored predictability and resilience over peak performance (OECD). Over time, fragmentation ceased to be an academic abstraction debated in policy forums and became a real-world condition that companies had to plan for as part of their basic operational design.

Not de-globalization

So, what has happened? Well, trade certainly did not stop. Capital still moved freely. Talent continued to circulate. What changed was the organizing principle: “openness” gave way to “selectivity”. To be clear: countries did not abandon interdependence, but they did become far more selective about it. Cross-border ties were accepted only when they could be tracked, controlled, or used to strategic advantage. At the same time, efficiency stopped being the overriding goal. Duplication, once dismissed as system inefficiency, the extra suppliers, spare capacity, or parallel systems, was no longer treated as waste but as protection against disruption. These changes barely registered in headline economic data, but they reshaped how institutions planned, invested, and made decisions.

Earlier in the decade, many of these measures were presented as temporary fixes, but by 2025, reversing them required deliberate political action rather than simply letting them fade. Businesses adapted, regulators built around them, and partners adjusted their expectations. This is now how the system works.

Why might the 2030s feel more settled?

By the end of the decade, today’s arrangements will no longer feel like choices, and different standards, liability rules, and expectations around responsibility will be treated as basic conditions of operating across borders. Moving between systems will carry real costs in duplicated infrastructure, slower rollout, limited collaboration, and higher prices, but those costs will be absorbed as part of maintaining stability in an environment where trust has become harder to win and less an implicit given.

What makes this all easy to misread is that, as we noted back at the start, nothing notably dramatic or calamitous marked the shift. There was no single crisis, no major treaty, no clear break to signify a paradigm shift. Instead, a series of decisions accumulated, which then narrowed the room for alternatives, and by 2025 the argument was no longer about whether advanced technologies needed limits, but about how strict those limits should be. From that point on, the direction has been set.

Looking ahead, debates about which governance model “won” will miss what actually happened. The systems shaping the 2030s will reflect practical constraints more than ideology: what energy systems can support, what legal frameworks can enforce, and what institutions can manage. These systems are no longer neutral.

So what marked 2025? Not a breakthrough, but a closing. Once rules, infrastructure, and investment were built around the element of constraint, other paths became harder to take. And by the time that reality became obvious, most of the work had already been done.


Read this, notice that, do something

Read this: IMF’s Global Financial Stability Report on how interconnected systems transmit shocks faster than institutions can respond; International Energy Agency’s Data centres and data transmission networks on energy demand and infrastructure becoming political constraints; and the EU AI Act (final adopted text) as an example of design-stage limits replacing after-the-fact fixes.

Notice that: None of this is about innovation slowing down. It is about risk becoming too expensive to unwind once systems are embedded.

Do something: Pay attention to where limits are being set before deployment, not just to what shiny new technologies promise. This is where the real shape of the next decade is being decided.


Previously on GYST: Data sovereignty winter: why governments are freezing the open internet

Next up: What phase is the global system in now?