LinkedIn reminded me this week that it’s been twelve years since I started at The Next Cloud. Those anniversary notifications usually get a scroll-past. This one stopped me.
Because that name wasn’t just a company name. It was a question. What comes after public cloud? What’s the next cloud?
Twelve years later, I think I finally have the answer.
Where it started — hybrid, sovereign, and already hard
When we founded The Next Cloud in 2014, public cloud was becoming the default assumption. AWS was pulling enterprise workloads off servers at scale. The efficiency logic was overwhelming. And yet a small group of us — Michiel Steltman, Victor Schmedding, Michiel de van der Schueren and me, all with deep roots in cloud infrastructure, compliance, and the Dutch tech ecosystem — were asking a different question.
Those three went on to leave their mark in different ways. Michiel Steltman built a career at the heart of the Dutch digital infrastructure industry, running key hosting and cloud industry bodies and advising government on digital policy. Victor Schmedding moved into senior roles, including advisory work for major cloud players such as AWS. And Michiel de van der Schueren — after years of building hosting companies, running cloud advisory practices, and advising both enterprises and government on digital transformation — has just published Digitale Soevereiniteit, a book on exactly why sovereign digital infrastructure is too important to leave to the IT department. Published in March 2026. Twelve years after we started The Next Cloud asking what comes after public cloud.
You could not write a better closing of the loop. Great co-founders!
Back then, we were doing projects for the likes of HP, Dell, and KPN. The work was channel enablement and sales coaching — running workshops and consulting engagements to help their channel partners understand what cloud even was, and how to sell it to enterprise customers. It sounds straightforward now. In 2012 and 2013, it wasn’t. Cloud was still genuinely new, the channel was confused, and someone had to stand in a room and make it make sense commercially. That was part of what we did.
The KPN managed hybrid cloud project is a good example of what large enterprises were actually asking for back then — not a clean migration to public cloud, but a managed combination: hyperscaler for flexible, scalable workloads; local data centres and hosted infrastructure for sensitive data and more static workloads that couldn't or shouldn't move. The demand was real. The execution was hard. The tooling wasn't mature, the commercial models were still being invented, and a telco trying to run something that hybrid faced significant organisational complexity.
Interestingly, we spent a lot of energy at the time explaining to enterprises why their fear of moving to the public cloud was overblown. Convincing them to trust the hyperscaler, to move workloads off-premise, to let go of the data centre. Some listened. Some didn’t. The ones who didn’t were called laggards.
Fourteen years later, the laggards look prescient…
Sovereignty is now a strategic imperative. Regulatory constraints, geopolitical pressure, and the financial and reputational risk of depending entirely on global hyperscaler infrastructure have changed the calculus for large enterprises and their boards. And then there is AI — which adds a dimension that didn’t exist before. Processing video streams for real-time security analysis, running inference on sensitive operational data, deploying AI agents inside regulated environments — these workloads cannot simply be routed to a data centre on the other side of the world. Latency, data residency, and compliance make that impossible. The hybrid logic of 2012 is back. This time it is not a compromise. It is the architecture.
There is no one-size-fits-all answer. What runs where depends on the workload, the regulatory environment, the latency requirement, the cost model, and the sensitivity of the data. Orchestration — deciding what runs where, on which model, under what governance — is becoming the central capability. And it is changing fast. Models evolve weekly. Agentic AI is just getting started. The fabric is being built in real time.
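To make the orchestration idea concrete, here is a minimal sketch of the kind of placement decision described above, reduced to a policy function. Everything in it — the class, the field names, the thresholds, the target labels — is an illustrative assumption for this post, not any real product's API:

```python
# Hypothetical sketch of a workload-placement policy: decide what runs
# where based on data sensitivity, latency budget, and residency.
# All names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Workload:
    data_sensitivity: str   # "public", "confidential", "regulated"
    max_latency_ms: int     # end-to-end latency budget
    residency: str          # required jurisdiction, e.g. "EU", or "any"

def place(w: Workload) -> str:
    """Return a placement target: sovereign edge, regional cloud, or hyperscaler."""
    # Regulated data and tight latency budgets stay inside the perimeter.
    if w.data_sensitivity == "regulated" or w.max_latency_ms < 50:
        return "sovereign-edge"
    # A residency constraint routes to a cloud under matching governance.
    if w.residency != "any":
        return f"regional-cloud:{w.residency}"
    # Everything else runs wherever it is cheapest.
    return "hyperscaler"

print(place(Workload("regulated", 200, "EU")))  # sovereign-edge
print(place(Workload("public", 500, "EU")))     # regional-cloud:EU
print(place(Workload("public", 500, "any")))    # hyperscaler
```

In practice the real capability is keeping a policy like this current — as models, regulations, and cost curves shift weekly — rather than writing it once.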
The comms decade — building the network
Somewhere around the mid-2010s, The Next Cloud’s focus sharpened. Not away from telcos — toward the layer where telcos could most plausibly compete and win: communications.
CPaaS was emerging as a real category. Programmable communications — APIs for messaging, voice, video, verification — were pulling enterprise spend away from legacy telco services and toward a new class of platform player. Twilio, Vonage, Infobip, Sinch. The stack was being rebuilt. And telcos were at risk of becoming the dumb pipe underneath it.
That’s where most of the last decade went. Advisory work for telcos trying to build or buy CPaaS capability or API stores. Work for scaleups trying to navigate telco relationships and go to market. And co-founding CPaaSAA — started in 2019 as the CPaaS Acceleration Alliance and spun off as an independent entity in 2024, now operating as the Intelligent Engagement Alliance — to build the connective tissue the industry was missing: a working platform where telcos, CPaaS enablers, AI innovators, and enterprises could think together, build together, and move together.
The network that grew around this work — relationships across operators, platforms, startups, and investors in markets from Europe to Southeast Asia to the Americas — was built over years of advisory work, ecosystem building, and genuine industry engagement by The Next Cloud. CPaaSAA was founded on that foundation: a natural next step to give structure and industry-wide reach to what already existed. And then the relationship became genuinely symbiotic. CPaaSAA expanded the network further — new operators, new markets, new conversations — and The Next Cloud deepened its advisory work through the relationships CPaaSAA created. The two have grown together.
What makes this powerful in practice is how naturally the conversations flow. A relationship that starts with a CPaaSAA membership or an Inner Circle discussion becomes a strategic advisory engagement with The Next Cloud. An advisory client becomes a CPaaSAA member, sponsor or speaker. There is no hard line between the two — and that is the point. Well-nurtured relationships, developed over years, create the kind of trust that opens doors that formal business development cannot. That is the real asset. Not a database of contacts. A genuine network of people who know what we stand for and come back because the thinking is useful.
Now — the full fabric
Here’s where the pendulum lands.
The centralisation logic of the last decade was correct for its context. Move workloads to the hyperscaler. Achieve scale. Reduce cost. It worked, within the conditions that made it work.
Those conditions have changed. Geopolitics made the question of where data sits a strategic and regulatory concern, not just a technical preference. The EU AI Act, GDPR, and sector-specific compliance requirements made centralised, US-hosted AI infrastructure legally problematic for large portions of the regulated enterprise market. And AI itself — specifically inference — brought a new set of physics to the problem.
The race to train ever-larger frontier models is largely over as a competitive differentiator. The game now is inference: running intelligence close to where decisions are made, where data is generated, where latency matters, where compliance is non-negotiable. Inference-optimised compute at the edge of networks, inside enterprise perimeters, under regional governance. That's not a niche use case. That's the architecture of enterprise AI at scale.
And communications is no longer a separate thread. AI voice is emerging as one of the most consequential applications — the convergence of large language models with real-time voice infrastructure, turning the communications stream into an intelligent layer. The players building at that intersection — Radisys, Mavenir, Nokia and others — are working on exactly this: AI-native voice infrastructure that runs inside telco networks, not on top of hyperscaler APIs. This isn’t AI as an add-on to communications. It’s communications as the primary channel through which AI operates. Intelligent Engagement — real-time, context-aware, capable of acting — is what it looks like when you put these pieces together.
The data flowing through communications channels — voice, messaging, interaction — is among the most valuable enterprise data that exists. The telco sits in the middle of it. The question is whether they build the intelligence layer that governs it, or hand that layer to someone else.
What makes this moment different from previous telco AI conversations is what enterprise customers are actually saying. The feedback coming through our network is consistent: large enterprises want an alternative to full dependency on global hyperscalers. They are asking their telcos — operators they have trusted relationships with, who operate under local law, who understand their regulatory environment — to provide that alternative. The demand is not coming from telco strategy decks. It is coming from enterprise procurement conversations. That is a meaningful shift.
Telcos are responding more seriously than at any point in the last decade. The combination of geopolitical pressure, regulatory momentum, and genuine enterprise demand is creating conditions that did not exist before. Full circle, in a way — back to the hybrid logic we were working on fourteen years ago, but with a much stronger commercial case and a much more urgent market.
This is where The Next Cloud’s most recent work fits — and why it feels like a natural continuation rather than a pivot. For the last several months, we have been working with Intel and others on sovereign edge AI: bringing inference-optimised compute into telco environments, close to where data is generated, under regional governance, without data leaving the defined perimeter. The conversations are gaining traction. Projects are starting to emerge. Details will follow as things develop.
The pattern is familiar. Early stage. Leading edge. Helping serious organisations understand what’s coming before it’s obvious — and building the commercial logic around it. Fourteen years ago, that was explaining public cloud to channel partners in a workshop. Today, it’s working with telcos on sovereign edge AI infrastructure. The theme hasn’t changed. The technology has.
What makes The Next Cloud a useful partner in scaling this play is precisely what the last twelve years built. Intel has the hardware and the inference architecture. What is harder to replicate is what sits between the technology and the telco: the relationships, the commercial translation capability, the understanding of how telcos make decisions, where they stall, and what it takes to move them. We are independent — not a vendor, not a hyperscaler, not a system integrator with a conflicting book of business. We operate across the telco ecosystem globally, with operators at different stages of maturity and in different regulatory environments. And CPaaSAA gives direct access to the senior decision-makers at the telcos and communications platforms who are closest to the enterprise demand that sovereign edge AI is designed to serve.
The Intel work does not sit in isolation. We are engaged with Radisys and Mavenir — both active at the intersection of AI-native communications infrastructure and telco networks — and with BT International, which has been building out its AI-ready global network platform and recently announced a significant partnership with Google Cloud to optimise AI workload delivery for multinational enterprises. BT International has been explicit about wanting to build an ecosystem of partners around that play. These are not casual conversations. They are serious engagements with serious organisations who are trying to figure out how sovereign edge AI lands commercially across multiple telco environments.
Rolling out an edge AI play across multiple telcos globally is not a technology problem. It is a commercial and ecosystem problem. That is exactly the problem The Next Cloud was built to solve.
What The Next Cloud is building toward — the thing the name was always pointing at — is this: a distributed AI inference fabric where compute, communications, compliance, and connectivity are integrated. Not separate products bolted together. Not another platform layer. A genuine convergence of everything we’ve been working on for twelve years — the telco relationships, the CPaaS network, the sovereign compute thesis, the AI voice opportunity — into something that is actually the next cloud.
Not a marketing phrase. An architecture.
The uncomfortable part
The telco opportunity here is real. The structural advantages — proximity, compliance, trust, network — are real. The demand from enterprises who cannot send regulated data to a hyperscaler is real.
What’s not yet real, in most cases, is the commercial model.
Too many conversations I’m having are still about platform positioning, API frameworks, ecosystem narratives. Not enough are about a specific customer, a specific problem, a specific price point, and a specific outcome. The gap between infrastructure capability and commercial clarity is where most telco AI plays are stalling.
That’s the work. Not the vision — the execution.
Twelve years in, the question The Next Cloud was founded on is finally answerable. The next cloud is distributed. It’s sovereign. It’s intelligence-native. And it integrates communications as a first-class layer, not an afterthought.
The pendulum has swung. Now the real work starts.