
The AI Supercluster Revolution: Are you Keeping Up?

  • Writer: Oliver Nowak
  • Jan 28
  • 4 min read

Going into this year, everyone is talking about the speed and scale of change—not just in the technology industry but fundamentally across society. How we interact with technology in our work, and how we use it in our personal lives as consumers, is evolving everywhere.


To illustrate this scale and speed of change, I thought I’d focus on a topic that’s on everyone’s mind—AI—and a product we’re all familiar with: ChatGPT.


Let’s rewind six years to February 2019, when GPT-2 was released. At the time, this model consisted of 1.5 billion parameters and was trained on 8 million pages of web text. The size of the boxes below represents this scale.



Fast forward just over a year to June 2020, when GPT-3 was released, the first mainstream product from OpenAI. Compared with GPT-2, the model had scaled by more than 100x, reaching 175 billion parameters and training on 14 times the volume of data, which now included books and articles as well as web text.



Move forward almost another three years to March 2023, and GPT-4 is released. While OpenAI has yet to publish official figures, external estimates suggest the model grew by another 5x and was trained on 43 times more data. Comparing the sizes of the boxes, it’s clear how far we’ve come in just four years: the models grew by roughly 500x, and the volume of training data by roughly 600x.
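
For readers who like to check the arithmetic, here is a quick back-of-the-envelope sketch in Python using the figures quoted above (remember that the GPT-4 numbers are external estimates rather than official figures):

```python
# Sanity check of the scaling figures cited above.
# Parameter counts are in billions; GPT-4 values are external estimates.

gpt2_params, gpt3_params = 1.5, 175
gpt4_params = gpt3_params * 5          # "another 5x" on top of GPT-3

print(f"GPT-2 -> GPT-3 parameters:  {gpt3_params / gpt2_params:.0f}x")  # ~117x, i.e. "over 100x"
print(f"GPT-2 -> GPT-4 parameters:  {gpt4_params / gpt2_params:.0f}x")  # ~583x, i.e. roughly 500x
print(f"GPT-2 -> GPT-4 data volume: {14 * 43}x")                        # 602x, i.e. roughly 600x
```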



Now, let’s consider the computing power required to run these modern models at such scale. Computation is typically measured in floating-point operations, or FLOPs. Without diving too deeply into the technical details, one FLOP is roughly a single arithmetic step, like the multiplication a pocket calculator performs when you hit the equals button. Today’s models run at around 10 petaFLOPS, that is, 10 quadrillion operations per second: the equivalent of 10 billion people each holding a million calculators and pressing equals at the same moment. Or, to use a more visual analogy: if every floating-point operation were a single drop of water, the total number of operations used to train one of these models would amount to something like half the volume of the Pacific Ocean.
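
The calculator analogy is easy to verify yourself; a minimal sketch in Python:

```python
# Back-of-the-envelope check of the calculator analogy above.

PETA = 1e15

# 10 petaFLOPS: ten quadrillion floating-point operations per second.
flops_per_second = 10 * PETA

# Analogy: 10 billion people, each holding a million calculators,
# all pressing "=" (one operation apiece) at the same moment.
people = 10e9
calculators_per_person = 1e6
presses = people * calculators_per_person

print(f"10 petaFLOPS         = {flops_per_second:.0e} operations per second")
print(f"people x calculators = {presses:.0e} simultaneous presses")
```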


Despite this immense power, AI has so far been relatively limited. It excels at classification and prediction but requires clearly defined boundaries and explicit instructions. However, remember how far we advanced between 2019 and 2023: that trajectory isn’t slowing; it’s accelerating. The next stage, Agentics, will see AI systems that not only recognise and generate content but also take action: equipped with reliable memory, they will stitch together sequences of actions to achieve complex goals. Soon, AI will master not just language but a wide array of tasks. We’ll shift quickly from “there’s an app for that” to “there’s an agent for that.” Instead of just being tools or platforms, AI will become the maker of tools and systems of all kinds.
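
To make “stitching together sequences of actions” a little more concrete, here is a deliberately toy sketch of an agent loop in Python. The planner is a stand-in function rather than a real model API, and every name here is an illustrative assumption:

```python
# Toy sketch of an agent loop: keep a memory of past steps and chain actions
# toward a goal. The "planner" is a stand-in for a model, not a real API.

def toy_planner(goal, memory, tools):
    """Stand-in planner: look the goal up once, then finish with what was found."""
    if not memory:
        return {"name": "lookup", "arguments": {"query": goal}}
    return {"name": "finish", "result": memory[-1]["observation"]}

def run_agent(goal, tools, planner=toy_planner, max_steps=10):
    memory = []  # running record of actions taken and what was observed
    for _ in range(max_steps):
        action = planner(goal, memory, tools)
        if action["name"] == "finish":
            return action["result"]
        observation = tools[action["name"]](**action["arguments"])
        memory.append({"action": action, "observation": observation})
    return None  # give up if the goal isn't reached within max_steps

# Example: a single "lookup" tool backed by a tiny dictionary.
facts = {"capital of France": "Paris"}
tools = {"lookup": lambda query: facts.get(query, "unknown")}
print(run_agent("capital of France", tools))  # -> Paris
```

Real agent frameworks add planning, error handling and tool permissions on top of this, but the core shape is the same: plan, act, observe, remember, repeat.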


How is all this possible at such an astonishing speed and scale?


Innovation can be seen as a cumulative, compounding process that feeds on itself. The more technologies exist, the more they become components of other new technologies. This is why technology often forms clusters of innovation. Think about how many technologies today wouldn’t exist without the internet: email and video conferencing in how we work and communicate, streaming in how we watch TV, social media, and even ChatGPT itself, which can only be accessed over the internet. Today’s technology cluster, accelerated by the likes of the internet, AI, biotech, quantum computing, and robotics, forms a supercluster. For example, AI was once limited by computing power, but now it’s helping design better chips and production processes, which in turn enable even more sophisticated AI models. This acceleration makes it almost impossible to predict where we’ll be in six months, a year, or two years. Quantum computing, still largely theoretical today, could make significant strides in the very near future with AI technologies supporting its development.


But how are organisations doing at keeping pace?


The speed of change is so high that starting late means you’ll never catch up. Unsurprisingly, research shows that over 90% of organisations are undergoing some form of digital transformation. Yet many are failing to address the full complexity of this AI supercluster. According to KPMG, organisations are only capturing 30% of expected revenue and 25% of expected cost savings from these initiatives. That’s a catastrophic level of underperformance.



Why is this happening?


There are two key modes of failure. The first is that the technology simply doesn’t deliver the intended outcomes. This is rarely due to the technology itself and more often the result of neglected fundamentals: having an effective strategy, aligning transformation outcomes with business objectives, and ensuring the right skills and operating models are in place. These are the basics that organisations have long struggled with but are generally very aware of.


The second, more nuanced failure is unique to the AI-driven supercluster. The speed and scale of transformation leave organisations fearing exposure to critical vulnerabilities with catastrophic consequences. Technology, once released into the complex, dynamic system of society, creates unpredictable second-, third-, and fourth-order effects. For example, how can businesses control what sensitive data employees input into ChatGPT? With AI solutions arriving at breakneck speed, organisations struggle to anticipate the consequences of these technologies.


How can they address this?


In the absence of perfect foresight, organisations must establish guardrails to mitigate risks. This includes new governance models, comprehensive employee training, anticipating government regulation like the EU AI Act, and implementing technical safety protocols (such as off-switches for compromised solutions). While these measures may still grossly oversimplify what’s truly required, they represent an essential starting point.
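
As one concrete illustration of a technical guardrail, here is a deliberately simplistic sketch in Python of a pre-submission filter that redacts obviously sensitive patterns before a prompt leaves the organisation. The patterns shown are assumptions for the example, not a complete data-loss-prevention policy:

```python
# Illustrative guardrail: redact obviously sensitive patterns before a prompt
# is sent to an external AI service. The patterns below are assumptions for
# this sketch and would need tailoring to a real organisation's data.

import re

REDACTION_PATTERNS = {
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key":     re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace matches of each sensitive pattern with a labelled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

print(redact("Email jane.doe@example.com and quote card 4111 1111 1111 1111"))
# -> Email [REDACTED EMAIL] and quote card [REDACTED CARD_NUMBER]
```

In practice a filter like this sits alongside governance and training rather than replacing them; a regex catches the obvious cases, not the subtle ones.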


The good news is that organisations are increasingly aware of these challenges. Customers are actively engaging with vendors and suppliers, recognising both the opportunities and the work required to achieve them. For instance, only 17% of organisations report being where they want to be with their data, and risk and security have been identified as the top concerns for 2025.


2025 is likely to be a year of transformative change, but the most important point is that 2026 will likely bring even more. Address these critical modes of failure now, or risk becoming the statistic we reference this time next year when we again reflect on the relentless pace of change.
