A Coming Wave Meets a Fragile World
- Oliver Nowak
- Apr 11
- 5 min read
A new era of transformative technology is dawning, and recent breakthroughs in Artificial Intelligence (AI) have made people stand up and take notice. Myself included. Advances in AI, robotics, and synthetic biology over the coming decade promise to remake economies and daily life. But as we’ve seen with AI, alongside the awe runs a jarring undercurrent of scepticism and fear – a sense that we might be summoning forces we can’t fully control. I always find it interesting that AI is often depicted as a glowing digital brain in the palm of a human hand. The question is – are we holding it, or is it holding us? Today’s technology leaders themselves warn that these technologies, if misused or unleashed without safeguards, could bring disruption, instability, and even catastrophe. And the scariest thing of all is that we don’t know what we don’t know, so we can’t even comprehend the scale. We face a double-edged revolution: one that could advance civilisation immensely or undermine it irreversibly.

So here’s the challenge of our time - how do we keep inventing amazing new technologies while containing them? In past technological waves, society worried about adoption and access; now, in a swift reversal of fortunes, our greater fear is uncontrolled access. When AI and bioengineering can empower a lone bad actor or amplify chaos globally, the usual ethos of “move fast” collides with grave safety risks. This blog explores why containment – the responsible restraint and governance of next-gen technology – will determine whether this coming wave lifts all boats or capsizes our ship. And it asks a sobering question: are our fractured societies and dysfunctional institutions anywhere near prepared for that task?
Unlike prior innovations, today’s technologies can spiral out of anyone’s control once they are released. AI and synthetic biology are general-purpose, dual-use tools - capable of tremendous good, but easily weaponised for harm. Preventing worst-case outcomes (from autonomous weapons to engineered pandemics) requires deliberate limits and safeguards at a global scale. The imperative has shifted: in the 21st century, our survival may hinge not on how fast we innovate, but on how wisely we contain what we create.
These unprecedented risks demand a new mindset. Historically, society rarely spoke of reining in emerging technologies. But in this wave, blind optimism is a liability. Failing to consider containment - often dismissed due to “pessimism aversion” among leaders who prefer to ignore dark possibilities - could be catastrophic. We need to normalise cautious restraint as much as we celebrate creativity. Instead of framing containment as halting innovation, we need to frame it as guiding innovation safely. Think of it like engineering: powerful engines require brakes and steering. Advocates are calling for kill-switches in AI systems, strict biohazard protocols, and international moratoria on the most dangerous experiments. The purpose of these restraints is to manage the wave of innovation hitting us, maximising the benefits while minimising the risks. This is a big test for us as a species: can we wield our powers of creation with the wisdom and humility to control our impulses?
Unfortunately, this wave is crashing on a world in turmoil. And that has been highlighted to the extreme over the last few weeks. Extreme inequality is slowly dividing societies, fuelling anger, eroding social cohesion, and ultimately undermining faith in the system. At a time when the social fabric is thin, conditions are ripe for unrest, and even a minor disruption could tip us over the edge. Public trust in institutions and leaders is plummeting worldwide. Democracies, which depend on trust and shared truth, are questioning themselves. Since 2010, more countries have regressed on democracy than improved, a trend that isn’t slowing down. Instead, new sources of ‘truth’ are emerging: social-media-driven conspiracy theories and cynicism fill the void as citizens lose confidence that governments act in their interest. This trust deficit means collective action is harder just when it’s most needed.
Across the globe, populist and nationalist movements have surged, capitalising on frustration with the status quo - from Brexit and America-first isolationism to authoritarian strongmen on multiple continents. This trend shatters international solidarity, directly threatening global governance and cooperation. At the very moment when a global response to AI is crucial, many countries are turning inward, suspicious of outsiders and experts. In fact, nationalist rivalries on these exact topics are reinforcing division, directly impeding collective containment strategies. Long-term planning is sacrificed for immediate political gain; expertise is often sidelined by ideology. The COVID-19 pandemic and other recent tests exposed how poorly coordinated and politically constrained global responses can be. This fragility means that as destabilising shocks from new technology arrive (e.g. AI-generated misinformation campaigns), our ability to respond swiftly and wisely is very much in doubt.
What happens when destabilising technology hits a destabilised world?
Imagine fragile democracies overwhelmed by misinformation and economic upheaval - trust collapses and demagogues rise, turbocharged by AI propaganda. On the other hand, imagine governments using crises to justify draconian surveillance and bio-control, achieving an almost Orwellian grip on society with the help of advanced technology. It sounds like science fiction, but neither future is far away. Both trajectories are plausible.

The coming wave doesn’t just introduce new threats; it prises open the cracks already present in society. Cyberattacks and AI-driven disruptions throw fuel on the fire. In an unstable world, the democratisation of powerful technology means any disgruntled individual, rogue state, or extremist group can access tools previously limited to superpowers. And once that starts, it will be difficult to put the genie back in the bottle, because a dangerous feedback loop emerges: technological crises (like a major AI failure or bioterror attack) further undermine trust, breeding more conflict, which in turn makes coordinated technology governance even harder. It’s a self-reinforcing downward spiral. The conclusion is clear: we can no longer treat technology governance and social stability as separate issues - they are entwined in a single, urgent challenge.
So what’s our call to action? What needs to change?
We need a coordinated, multi-layered response to match the scale of the challenge. That starts with technical safety - building controls, constraints, and shutdown mechanisms directly into our most powerful systems. It means embedding audits and transparency measures to shine a light on what’s being developed, by whom, and with what risks. We need to identify choke points - natural places to apply friction in development - to buy time for society to catch up. It’s critical that the makers - the developers, engineers, and researchers - internalise a sense of responsibility and design for containment from the outset. We must demand that businesses align their incentives with long-term safety, not just short-term profit. We need to empower governments, ensuring they have the knowledge, tools, and resolve to both regulate and respond. And because no country can do this alone, we must invest in global alliances - harmonising laws, norms, and rapid response mechanisms. A new culture must emerge, one that shares learnings and failures openly so others can act faster and avoid repeating mistakes. Finally, none of this happens without movements - public voices holding every part of the system accountable.
Containment isn’t a passive act. It’s an ongoing, global commitment. And it starts with each of us deciding we won’t look away. If you want to read more on this, I highly recommend The Coming Wave by Mustafa Suleyman.