
Looking back at NVIDIA GTC26: Recap

April 14, 2026
5 min read

5C Team Photo at GTC26

Our team at 5C spent the week at the booth, hosting deep-dive conversations and taking moments to step back and look at the bigger picture. This was also our first presence as an exhibitor, a big milestone for a company that's not even a year old.

Before we get into the broader industry shifts, here’s a look at what we were up to on the ground.

Kickoff connections

We cohosted an AI Builders Welcome Party with Together AI, sponsored by Hypertec Group and Pegatron. It was the perfect way to start the week: industry people, great energy, and conversations that continued well into the evening.

Strategic conversations

As the week accelerated, we hosted an Executive Lunch with Together AI, sponsored by NVIDIA. We brought together a room of strategic decision makers for a focused, practical conversation on what it actually takes to build and power gigawatt AI campuses at true commercial scale.

Defining the 2GW blueprint

Following those deep strategic discussions, our own David Bitton, VP of AI Product and Strategy, took the stage at the Together AI booth to tackle the engineering reality of this massive transition. In his lightning talk, "Planning for a 2GW AI Factory: Scaling Performance from Silicon to Campus," he laid out why the traditional data center, as we know it, is no longer sufficient at AI-factory scale.

As workloads scale on architectures like Blackwell and move toward next-generation platforms such as Rubin, David broke down the complex blueprint required to scale AI infrastructure from 10,000 to over 100,000 GPUs. His core message was clear: every decision around power density, cooling, and electrical design directly impacts training throughput and tokens per second. Today, physical infrastructure is a true performance layer, and this blueprint is how we help partners deploy these large-scale facilities.
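To make the jump in scale concrete, here is a rough back-of-envelope sketch of how facility load grows with cluster size. The per-GPU power draw and PUE figures below are illustrative assumptions, not vendor specifications or 5C design numbers:

```python
# Back-of-envelope: why the vocabulary jumped from megawatts to gigawatts.
# All figures are illustrative assumptions, not vendor specifications.

KW_PER_GPU = 1.5   # assumed all-in IT draw per GPU (accelerator + CPU/network/storage share)
PUE = 1.25         # assumed power usage effectiveness (cooling + electrical overhead)

def campus_mw(gpus: int) -> float:
    """Total facility load in megawatts for a cluster of `gpus` GPUs."""
    it_load_mw = gpus * KW_PER_GPU / 1000.0
    return it_load_mw * PUE

for n in (10_000, 100_000):
    print(f"{n:>7} GPUs -> ~{campus_mw(n):.0f} MW facility load")
```

Under these assumptions, even a single 100,000-GPU cluster lands in the low hundreds of megawatts, which is why a campus hosting several such clusters, plus headroom for growth, is planned in gigawatts.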

Behind the scenes: Filming “Home of AI”

While the convention floor hummed with announcements, our creative team was locked in an offsite studio producing something special. We spent two days filming the foundation of our new documentary-style video series, "Home of AI."

The series is entirely editorial and insight-driven. We are moving past promotional buzz to explore how next-generation infrastructure is actually designed, built, and operated. Over those two days, we sat down for interview-style conversations with twelve subject matter experts across our ecosystem, including Schneider, Together AI, NVIDIA, Pegatron, Hypertec, VAST Data, and 5C.

We dug into real systems, real decisions, and real-world experience. The raw insights from those sessions were impressive, and we are thrilled that the project has now officially moved into the next stage of production.

Key takeaways and trends

A few major shifts stood out from the keynote announcements and the broader conversations happening around the convention center.

Inference is becoming central

For years, the industry was obsessed with training models. But the keynote from Jensen Huang, CEO of NVIDIA, made it clear that the economics and the focus are shifting from training to inference. Inference is continuous, it is user facing, and it scales with adoption. We are rapidly moving from AI that just thinks to AI that actually does.

This brings us to the next computing platform. We are extending SaaS into Agent-as-a-Service, introducing AI agents that can reason, act, and execute tasks across workflows. For these agents to act on behalf of users, trust is the ultimate gating factor. With the heavy focus on secure frameworks this year, it is obvious that agents must have their own identity, memory, and rock-solid privacy controls to be deployed safely across the enterprise.

A hybrid world of open and closed systems

Going forward, AI systems will rely on a nuanced mix of open and closed platforms. Open models are gaining traction because they can lower cost per token, offer better control over deployment, and be optimized for energy efficiency. Companies can lean on open models for flexibility and cost control while reserving massive closed models for heavy reasoning and frontier capabilities.

Networking can be the new bottleneck

As Jensen Huang emphasized with the rollout of Spectrum-X and NVLink, network topology is critical. Optimized topologies and multi-plane networks that seamlessly distribute compute and storage traffic are now essential to the efficiency and scalability these large clusters require.

The evolution of scale

This brings us back to the physical reality of power density. We are witnessing a monumental leap from megawatt scale to gigawatt scale clusters. The terminology is evolving rapidly because the physical footprint is evolving. We are no longer building data centers. We are building AI factories. Those factories are expanding into gigawatt AI campuses. Ultimately, we are moving toward an entire grid dedicated exclusively to AI.

What comes next

Those leading this era are the ones who understood early that power, cooling, networking, and physical design are performance layers, not background considerations.

The engineering complexity of scaling toward gigawatt campuses is real, it is layered, and it rewards the people who've done the thinking in advance.

That's exactly where 5C is. The conversations from GTC will carry forward into the months ahead, turning blueprints into buildings.

If you're building AI at scale and want to talk through what comes next, contact us here.
