


Liquid Cooling: The New Standard for AI-Era Data Centre Infrastructure


The AI Revolution and the Need for Liquid Cooling


The recent boom in AI technology has revolutionised industries, driven by advances in machine learning, large language models and powerful GPUs. From automating tasks to enabling groundbreaking research, AI is transforming how we work and live. Its rapid growth is reshaping economies, creating opportunities and challenging traditional business models.

Unlike previous waves of innovation, this transformation has been fuelled by an exponential leap in computational power that exceeds the predictions of Moore’s Law. The shift is fundamentally altering the requirements for data centre infrastructure, pushing traditional air-cooling systems beyond their limits and signalling the shift to a new age of liquid cooling.

Central to this revolution is NVIDIA’s GPU technology, which has become the foundation for modern AI workloads. GPUs such as the NVIDIA H100 Tensor Core are designed for maximum throughput, handling massively parallel processing tasks that traditional CPUs cannot. However, these performance gains come at the cost of significantly higher power densities: NVIDIA's latest GPUs can consume over 700 watts per chip, leading to rack densities exceeding 100 kW and forecasts of future densities of over 600 kW per rack.
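To put those figures into context, a rough back-of-the-envelope estimate shows how per-chip power adds up at rack level. The sketch below is purely illustrative: the GPU count per server, servers per rack and overhead allowance are assumptions rather than vendor specifications.

```python
# Illustrative estimate of rack power density for a GPU-dense AI rack.
# All counts and factors below are assumptions for demonstration only.

GPU_POWER_W = 700        # approximate peak draw of a high-end AI GPU (per the article)
GPUS_PER_SERVER = 8      # assumed GPUs per server chassis
SERVERS_PER_RACK = 16    # assumed servers packed into one rack
OVERHEAD_FACTOR = 1.15   # assumed allowance for CPUs, memory, NICs, fans, PSU losses

rack_power_kw = (GPU_POWER_W * GPUS_PER_SERVER * SERVERS_PER_RACK
                 * OVERHEAD_FACTOR) / 1000

print(f"Estimated rack density: {rack_power_kw:.0f} kW")  # roughly 103 kW
```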

These new environments are often described as "AI factories": large data centre campuses dedicated to training, inferencing and deploying AI models at scale. These facilities are far denser and more power-hungry than traditional enterprise or hyperscale data centres. Consequently, conventional air-cooling methods are no longer viable at such scale and density, and these facilities will require a shift to liquid cooling technologies.

Liquid cooling offers a significantly higher thermal transfer efficiency compared to air, enabling data centres to sustain higher compute densities while maintaining optimal hardware performance and energy efficiency. As AI models grow in complexity and scope, liquid cooling is becoming not just an option but a necessity.
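The underlying physics is simple: per unit volume, water can absorb thousands of times more heat than air for the same temperature rise. The short sketch below compares volumetric heat capacities using standard textbook property values.

```python
# Compare how much heat air and water absorb per cubic metre per kelvin of
# temperature rise. Property values are standard textbook approximations.

AIR_DENSITY = 1.2        # kg/m^3 at around 20 C
AIR_CP = 1005            # J/(kg*K)
WATER_DENSITY = 1000     # kg/m^3
WATER_CP = 4186          # J/(kg*K)

air_vol_heat_capacity = AIR_DENSITY * AIR_CP          # ~1.2 kJ/(m^3*K)
water_vol_heat_capacity = WATER_DENSITY * WATER_CP    # ~4.2 MJ/(m^3*K)

ratio = water_vol_heat_capacity / air_vol_heat_capacity
print(f"Water absorbs roughly {ratio:.0f}x more heat per unit volume than air")
```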


Competing Liquid Cooling Technologies

Two key technologies are competing to define the next generation of data centre thermal management.


Direct-to-Chip (D2C) Cooling

Direct-to-chip cooling circulates a cooling liquid directly to cold plates attached to the hottest components such as GPUs and CPUs. This method allows precise targeting of heat sources, enabling efficient thermal management without requiring extensive changes to existing server designs. D2C is particularly attractive for retrofitting high-density racks in existing facilities.


Immersion Cooling

In immersion cooling, entire servers are submerged in a dielectric liquid that dissipates heat efficiently without conducting electricity. Single-phase immersion uses a liquid that remains in its liquid state, while two-phase immersion utilises fluids that vaporise at low temperatures to enhance heat removal. Immersion cooling offers extraordinary thermal efficiency and supports extremely high rack densities, but often demands new server designs and significant operational changes.

Each technology presents trade-offs in terms of cost, complexity, maintenance, scalability and retrofit feasibility. The choice often depends on whether a facility is an existing data centre undergoing upgrades, or a new build designed with liquid cooling in mind.




The Advantage of New Build Data Centres

While retrofitting existing data centres for liquid cooling is possible, it introduces a number of challenges including spatial constraints, existing airflow designs and limited plumbing infrastructure. In contrast, new build data centres offer a clean slate, providing a strategic advantage when embracing liquid cooling.


Optimised Infrastructure Design

New facilities can be purpose-built with liquid cooling in mind. This includes designing for higher rack densities, integrating robust plumbing systems and optimising floor layouts for maintenance and scalability. Cooling distribution units (CDUs), leak detection and fluid monitoring systems can be seamlessly integrated, ensuring operational efficiency and minimising downtime risks. In addition, compared to traditional air-cooled solutions, liquid cooling considerably reduces the white space and cooling gallery footprint and eliminates the need for raised floors and return air plenums.
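As an illustration of how such monitoring might hang together, the sketch below models a simplified CDU telemetry check. It is not a real product API; the sensor fields and thresholds are assumptions chosen for demonstration.

```python
# Illustrative (not a real product API) check of CDU telemetry: flag a leak,
# low loop flow or an excessive temperature rise across the cold plates.

from dataclasses import dataclass

@dataclass
class CduTelemetry:
    supply_temp_c: float      # facility water supply temperature
    return_temp_c: float      # water temperature returning from the cold plates
    flow_lpm: float           # loop flow rate, litres per minute
    leak_sensor_wet: bool     # rope/spot leak sensor state

def check_cdu(t: CduTelemetry) -> list[str]:
    """Return alarm messages for out-of-band conditions (thresholds assumed)."""
    alarms = []
    if t.leak_sensor_wet:
        alarms.append("LEAK DETECTED: isolate loop and alert facilities team")
    if t.flow_lpm < 100:
        alarms.append("LOW FLOW: check pumps and filters")
    if t.return_temp_c - t.supply_temp_c > 15:
        alarms.append("HIGH DELTA-T: IT load may exceed loop capacity")
    return alarms

print(check_cdu(CduTelemetry(supply_temp_c=32, return_temp_c=45,
                             flow_lpm=250, leak_sensor_wet=False)))  # -> []
```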


Higher Energy Efficiency and Sustainability

Starting from scratch allows operators to achieve industry-leading power usage effectiveness (PUE) metrics. Liquid cooling can drastically reduce the energy consumed by chillers and fans, enabling new builds to achieve PUEs of 1.2 or lower. Additionally, many designs incorporate waste heat recovery systems that repurpose excess heat for district heating or industrial applications, which can help organisations achieve their sustainability goals.
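As a reminder of the metric itself, PUE is total facility energy divided by the energy delivered to the IT equipment. The minimal sketch below, using illustrative (not measured) load figures, shows how cutting chiller and fan energy moves the ratio towards 1.2 and below.

```python
# Minimal PUE sketch. The load figures are illustrative assumptions only.

def pue(it_energy_kwh: float, cooling_kwh: float, other_overhead_kwh: float) -> float:
    """Power Usage Effectiveness = total facility energy / IT energy."""
    total = it_energy_kwh + cooling_kwh + other_overhead_kwh
    return total / it_energy_kwh

# Example: a 10 MW IT load over one hour
air_cooled = pue(10_000, cooling_kwh=4_000, other_overhead_kwh=1_000)    # 1.50
liquid_cooled = pue(10_000, cooling_kwh=1_000, other_overhead_kwh=500)   # 1.15

print(f"Air-cooled PUE:    {air_cooled:.2f}")
print(f"Liquid-cooled PUE: {liquid_cooled:.2f}")
```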


Future Developments

New build data centres can accommodate not only today's AI and HPC demands but also future requirements as AI models become larger and more computationally intense. Modular liquid cooling designs allow for scalability as the hardware landscape evolves. There are further advances in efficiency still to be realised, for example separate mechanical hydraulic circuits operating at temperatures specific to the technology deployed. Ultimately this means that distribution systems can run hotter, allowing 100% free cooling throughout the year and enabling better direct export to local district heat networks.
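To illustrate the free-cooling point, the sketch below estimates the proportion of the year a site could reject heat without mechanical chilling, given an assumed ambient temperature profile, facility water supply setpoint and heat-exchanger approach temperature. The climate data and thresholds are synthetic assumptions, not measurements from any Polar site.

```python
# Estimate the fraction of hours where outside air alone can cool the facility
# water loop. Ambient profile and setpoints below are synthetic assumptions.

import random

def free_cooling_fraction(hourly_ambient_c, supply_setpoint_c, approach_c=5.0):
    """Fraction of hours where ambient plus heat-exchanger approach stays at or
    below the required facility water supply temperature."""
    ok = sum(1 for t in hourly_ambient_c if t + approach_c <= supply_setpoint_c)
    return ok / len(hourly_ambient_c)

random.seed(0)
ambient = [random.gauss(9, 7) for _ in range(8760)]  # assumed cool-climate site

for setpoint in (18, 27, 35):  # legacy chilled water vs warm-water liquid cooling
    share = free_cooling_fraction(ambient, setpoint)
    print(f"Supply water at {setpoint} C: {share:.0%} of hours on free cooling")
```

The trend is the point: a warm-water loop running in the mid-30s °C sits above ambient for virtually the whole year in a cool climate, whereas a legacy chilled-water setpoint does not.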


Improved Total Cost of Ownership (TCO)

While the capital expenditure for liquid cooling infrastructure is higher, the operational savings over the facility's lifecycle are substantial. Enhanced energy efficiency, reduced hardware failure rates and increased compute density contribute to a superior TCO profile for purpose-built liquid-cooled data centres.
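A simple, undiscounted comparison illustrates the shape of that argument. The capex and opex figures below are invented for demonstration and are not Polar data; a real assessment would also discount future cash flows and account for the additional compute yield per square metre.

```python
# Illustrative TCO comparison: higher capex for liquid cooling offset by lower
# annual energy spend. All figures are made-up assumptions for demonstration.

def total_cost_of_ownership(capex: float, annual_opex: float, years: int) -> float:
    """Simple undiscounted TCO over the facility's lifecycle."""
    return capex + annual_opex * years

LIFECYCLE_YEARS = 10
air = total_cost_of_ownership(capex=50e6, annual_opex=12e6, years=LIFECYCLE_YEARS)
liquid = total_cost_of_ownership(capex=60e6, annual_opex=9e6, years=LIFECYCLE_YEARS)

print(f"Air-cooled TCO:    £{air / 1e6:.0f}m")     # £170m
print(f"Liquid-cooled TCO: £{liquid / 1e6:.0f}m")  # £150m
```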



The Polar approach

Polar’s vision is to deliver at scale, at speed and with the flexibility to meet future technological advancements. The company has ambitions across multiple regions, with a focus on deploying high-capacity, AI-ready infrastructure in a sustainable and timely manner.

To achieve this, Polar uses a modular, prefabricated and packaged methodology, providing the highest quality and offering greater programme timeline certainty. In addition, Polar works closely with OEMs to ensure close alignment on cooling technology, so that facilities are fit for purpose and ready for rapid IT deployment.

Sites are selected in regions which offer 100% renewable power and with local ambient conditions that maximise the opportunity for free cooling.

Deploying these advanced liquid cooling solutions, coupled with careful site selection, enables Polar to target PUEs of less than 1.2. These highly efficient, sustainable solutions offer maximum IT yield in a reduced building footprint with low operational cost.



Conclusion

The AI revolution, underpinned by breakthroughs in GPU technology and the rise of AI factories, is fundamentally reshaping the data centre landscape. Traditional air-cooling systems are no longer sufficient to manage the power densities and thermal loads demanded by modern AI and HPC workloads. Liquid cooling has emerged as the next frontier, offering unparalleled thermal efficiency, energy savings and sustainability benefits.

While retrofitting existing facilities presents challenges, new build data centres designed with liquid cooling at their core represent the gold standard for the future. They offer the opportunity to optimise density, efficiency, scalability and sustainability from the outset.

As the adoption and use of AI increases, liquid cooling will no longer be a specialised solution but a foundational element of modern data centre design.

 
 