Datacloud Congress 2025 – Serverfarm’s Future-Ready AI Data Centers


Serverfarm at Datacloud Global Congress 2025

At Datacloud Congress 2025, Serverfarm explored how next-gen AI workloads are reshaping data center design. From 600kW racks to evolving chip architectures, our team is engineering flexible, future-proof infrastructure to support the AI era.

At Datacloud Congress 2025 in Cannes, France, Serverfarm had many fantastic conversations with customers and prospects about where the industry is headed.

In the era of AI factories, many of our discussions focused on liquid-cooled, high-density data centers supporting 100kW, 300kW, and even 600kW per-rack deployments.

We spoke about how data centers must accommodate power and cooling for multiple new classes of processor types and families, often in the same hall.

Racks are no longer home to just CPUs. The server market has rapidly expanded to include GPUs, TPUs, and CPUs from multiple players, including NVIDIA, AMD, Arm, and Google.

A quick look at the architecture roadmaps of the dedicated chip makers tells us much about what data centers will look like over the coming years.

Start with NVIDIA’s Blackwell Ultra (launching 2025), Rubin (2026), Rubin Ultra (2027), and Feynman (2028). Already the NVIDIA RTX PRO 6000 Blackwell GPU runs at 600W, and Rubin Ultra racks are expected to draw up to 600kW. On the CPU side, AMD’s EPYC “Turin” generation, shipping through 2025, ranges from 225W to 360W depending on model.
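To see how per-device wattage compounds into rack-scale demand, here is a rough back-of-envelope sketch. The accelerator count and overhead factor below are illustrative assumptions, not vendor specifications:

```python
def rack_power_kw(devices_per_rack, watts_per_device, overhead_factor=1.3):
    """Estimate total rack draw in kW.

    overhead_factor is an assumed ~30% allowance for host CPUs,
    networking, and power/cooling distribution losses.
    """
    return devices_per_rack * watts_per_device * overhead_factor / 1000.0

# Hypothetical example: 72 accelerators at 600W each, plus overhead.
print(round(rack_power_kw(72, 600), 1))  # ~56.2 kW per rack
```

Even this modest configuration lands well beyond the 5–15kW racks of the traditional enterprise hall, which is why power and cooling topology now leads the design conversation.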


So, what does this mean for customers of commercial colocation data center operations?

At Serverfarm, we design and operate data centers in direct response to customer needs. 

Questions we have been discussing at Datacloud include: 

“What percentage of AI in a data hall will dictate changes to power, cooling, and rack architecture?”
“Can we avoid swapping out entire racks to cope with CPU/GPU changes?”
“How do we respond to AI workloads’ changing power profiles – the ‘AI waveforms’?”

Serverfarm works continuously to ensure that these new demands do not degrade customer operations.

Serverfarm design and operations teams have been busy evaluating modular infrastructure, AI pods, adaptive power management, dynamic power distribution, liquid cooling technologies, and advanced thermal management systems.

“Whether it’s new builds or upgrades, our mission is to provide roadmap protection for the AI future.”
— Serverfarm

Conclusion

AI is prompting those of us focused on the design and operation of data hall power and cooling infrastructure to think differently. 

Data centers are changing. And that is a good thing. 

Of course, technology is not slowing down, and it is the market that will decide the dominant processor architectures going forward. 

Whether our customers are being driven by AI, location or cloud hyperscale needs, our job as the data center infrastructure provider is to be ready to offer our customers roadmap protection.

Meeting that challenge means that whether we are building new facilities, repurposing existing ones, or upgrading live data centers, the topologies we design and the power and cooling technologies we deploy must remain useful 5–10 years out and beyond.

And that is one thing that has not changed.
