By Rob Kurver — The Next Cloud | November 2025

Two weeks, two headlines — and a clear signal that the AI market is shifting gears.

First, SoftBank sold $5.8 billion of its Nvidia stake, locking in profits from the AI training boom. Then, Nvidia invested $1 billion in Nokia, targeting AI-native radio networks that push intelligence closer to the edge.

They may look unrelated — one investor cashing out, one vendor doubling down — but together they mark a structural turn. The center of gravity in AI is moving from training in the cloud to inference at the edge. And telcos, long seen as the utilities of connectivity, suddenly find themselves holding the right assets for what comes next.

1. From Training to Inference: The Great AI Pivot

The first phase of the AI revolution was about training gigantic models in even larger data centers. It was the domain of hyperscalers, GPU suppliers, and model labs — a race for scale measured in petaflops and power draw.

But once you’ve trained the models, the challenge shifts from size to deployment. Running AI efficiently, securely, and locally — that’s inference. And it’s inference that turns promise into productivity.

Every voice agent, contract summarizer, and industrial vision system runs inference constantly. To run at speed and at reasonable cost, these workloads must move closer to the data — into cities, campuses, and national networks. That's where the edge becomes critical.

And it’s also where telcos have an advantage the cloud can’t easily copy: distributed infrastructure, low latency, and regulatory trust.
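The latency argument above comes down to simple physics. As a back-of-the-envelope sketch (the distances and the ~200,000 km/s figure for light in fibre are illustrative assumptions, not measurements):

```python
# Why distance matters for real-time inference: light in optical fibre
# travels at roughly 200,000 km/s (~2/3 of c in vacuum), so every
# 1,000 km of path adds ~10 ms of round trip before any compute happens.
FIBRE_KM_PER_MS = 200  # approximate one-way propagation speed in fibre

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay over fibre, ignoring routing overhead."""
    return 2 * distance_km / FIBRE_KM_PER_MS

# Illustrative tiers: metro edge site, national data center, distant cloud region
for km in (50, 500, 2500):
    print(f"{km:>5} km -> {round_trip_ms(km):5.1f} ms round trip")
```

A metro edge site 50 km away costs half a millisecond of propagation; a distant cloud region can burn a quarter of a 100 ms voice-agent budget on the wire alone.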

2. Nvidia’s $1B Bet: Compute Meets Connectivity

Nvidia’s $1 billion investment in Nokia makes perfect sense in that light. Their partnership to develop AI-RAN (AI-native radio access networks) signals the next frontier: merging compute with connectivity.

Instead of only scaling training clusters in the cloud, Nvidia now wants its hardware and software embedded in the very networks that deliver data. AI will live inside the fabric of telecom infrastructure — continuously optimizing, predicting, and automating.

That’s a fundamentally different play from the hyperscaler model. It’s about distributed intelligence, not central dominance.

And it’s an open invitation for telcos. AI-RAN may start at the base station, but the logic applies across the stack: if you can host and orchestrate real-time AI workloads, you’re part of the new compute economy.

3. The Economics Prove the Shift

According to Analysys Mason’s GPU-as-a-Service forecast (2024), the global GPUaaS market will soar from roughly $3.4 billion in 2023 to $86 billion by 2030 — a compound annual growth rate of about 58%.

More importantly, the balance of that revenue flips within a few years:

- Training workloads plateau around 2026.
- Inference takes over, driving more than 60% of total GPUaaS revenue by 2030.

The report highlights telco edge sites as under-utilised assets ideally placed to host these distributed inference workloads — combining local proximity with regulated, high-availability connectivity.

For investors like SoftBank, that means the easy money in training may already be made. For infrastructure players, it’s the start of a new S-curve.

4. Intel and e&: Early Movers on the Edge

Some players aren’t waiting.

Intel has quietly built one of the most comprehensive stacks for production-grade inference — from Gaudi and Xeon processors to orchestration layers like the Prompt Reasoning Engine. Rather than chase the next frontier model, Intel is focused on making AI usable, efficient, and deployable everywhere.

Meanwhile, e& Group (Etisalat) is expanding its edge-AI services across the Middle East and Africa. By combining connectivity, compute, and security into unified enterprise offers, e& is showing how telcos can embed AI directly into their value chain — not bolt it on later.

These two approaches converge on the same vision: AI as a telco-native service, delivered over the network and optimized for enterprise sovereignty and latency.

5. The Missing Link: Network APIs

To unlock that potential, telcos also need new ways to expose and manage their intelligence. That’s where Network APIs enter.

Through initiatives like the GSMA Open Gateway, operators are standardising how developers access network functions securely. AI could be the catalyst that finally brings those APIs to life. Running inference at the edge demands real-time control over identity, consent, quality of service, and routing — exactly what programmable networks provide.
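As a rough illustration of what that control layer could look like, here is a minimal sketch in the spirit of a CAMARA-style Quality-on-Demand call. The endpoint shape, field names, and profile identifier below are assumptions for illustration only, not a verbatim copy of any operator's production API:

```python
import json

def build_qod_session_request(device_ip: str, app_server_ip: str,
                              profile: str = "QOS_LOW_LATENCY",
                              duration_s: int = 3600) -> dict:
    """Build the payload an edge-AI workload might POST to a
    Quality-on-Demand session endpoint to reserve a latency-bound
    path between a device and an inference server.

    Field names follow the general shape of CAMARA-style APIs but
    are illustrative assumptions, not a specific operator's schema.
    """
    return {
        "device": {"ipv4Address": {"publicAddress": device_ip}},
        "applicationServer": {"ipv4Address": app_server_ip},
        "qosProfile": profile,   # hypothetical latency-bound QoS profile id
        "duration": duration_s,  # seconds the guarantee should hold
    }

# A voice agent on a handset reserving low latency to an edge GPU node
payload = build_qod_session_request("198.51.100.7", "203.0.113.10")
print(json.dumps(payload, indent=2))
```

The point is not the exact schema but the pattern: the application declares its identity, its counterpart, and its quality needs, and the programmable network enforces them for the life of the inference session.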

In short: the same APIs designed for connectivity could become the control layer for the AI era.

6. The Telco Opportunity

For telcos, three imperatives stand out:

- Host inference workloads at the edge — giving enterprises local, sovereign compute without hyperscaler lock-in.
- Bundle network + compute + AI orchestration as a single enterprise product.
- Monetise AI traffic through differentiated latency, compliance, and model-management services.

Doing this requires a mindset shift — from selling capacity to selling capability, from competing on coverage to competing on control and trust.

Those who move early will shape how AI actually operates in the real world — not in a lab, but in production.

7. What Comes Next

Over the next 18 months, the global AI landscape will rebalance:

- Hyperscalers will keep scaling training clusters but face margin pressure.
- Enterprises will demand cheaper, compliant inference capacity closer to home.
- Telcos will be courted as infrastructure partners in sovereign AI ecosystems.

Intel, e&, and others are already showing the playbook. The next step is collaboration — chipmakers, telcos, and orchestration vendors forming regional AI fabrics that make intelligence local again.

8. The Network-for-AI Era

SoftBank’s divestment shows capital moving out of training.

Nvidia’s partnership with Nokia shows compute moving into networks.

Analysys Mason’s data shows revenue moving toward inference.

It all points to the same conclusion:

The cloud was built for humans. The next networks will be built for AI.

For telcos, that’s more than an opportunity — it’s a second chance to lead.