Article
Cooling the Future: How AI Is Driving a New Era in Data Centre Architecture

Artificial intelligence (AI) workloads are putting new pressures on data centre designs and systems. These workloads, especially those involving large language model training and inference, rely on accelerated computing powered by dense graphics processing unit (GPU) clusters that consume more electricity and produce more heat than traditional server setups. 
 

According to the International Energy Agency, global data centre electricity use could double to around 945 terawatt hours by 2030, largely due to accelerated computing systems built with specialised hardware for data-intensive workloads. Modern GPU systems already run above 100 kilowatts (kW) per rack. In fact, NVIDIA’s GB200/GB300 NVL72 systems have a thermal design power of 130–140 kW per rack.
 

As a result, air cooling is reaching its limits, with rising heat densities making throttling and hotspots difficult to manage and restricting improvements in power efficiency. 
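To put these densities in perspective, here is a minimal back-of-envelope sketch (in Python, using textbook fluid properties and illustrative temperature rises, not figures from any specific facility) of how much air versus water it takes to carry away roughly 140 kW of rack heat:

# Back-of-envelope comparison: air vs. water as the heat-transport medium
# for a ~140 kW rack. Illustrative values only.

rack_heat_kw = 140.0                 # thermal load per rack (NVL72-class figure above)

# Air: density ~1.2 kg/m^3, specific heat ~1.005 kJ/(kg*K), ~15 K temperature rise
air_density_kg_m3 = 1.2
air_cp_kj_kg_k = 1.005
air_delta_t_k = 15.0
air_flow_m3_s = rack_heat_kw / (air_density_kg_m3 * air_cp_kj_kg_k * air_delta_t_k)

# Water: ~1 kg per litre, specific heat ~4.18 kJ/(kg*K), ~10 K temperature rise
water_cp_kj_kg_k = 4.18
water_delta_t_k = 10.0
water_flow_l_s = rack_heat_kw / (water_cp_kj_kg_k * water_delta_t_k)   # kg/s ~ L/s

print(f"Air needed:   {air_flow_m3_s:.1f} m^3/s (~{air_flow_m3_s * 3600:,.0f} m^3/h)")
print(f"Water needed: {water_flow_l_s:.1f} L/s (~{water_flow_l_s * 60:.0f} L/min)")

Moving tens of thousands of cubic metres of air per hour through a single rack is impractical, while a few litres of water per second is a manageable pipework problem – a large part of why operators turn to liquid cooling at these densities.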
 

Our earlier pieces, From Air to Liquid and The Rise of Immersion Liquid Cooling, explain why many data centre operators are moving from air cooling to liquid cooling. But what does this change mean for the physical design of a data centre facility?
 

The architectural impact of liquid cooling

Liquid cooling is becoming widely used and its effects are reaching every part of data centre facilities. While much of the conversation centres on cooling technologies, the overall building design of a data centre plays a critical role in shaping cooling efficiency. Data halls that were originally designed around airflow now need to make room for coolant distribution, heat exchange systems and new spatial considerations.
 

  • Pipework and routing
    Pipework is at the centre of this redesign. Direct-to-chip, immersion and hybrid systems all use supply and return lines that can run overhead or under raised floors, and the chosen routes affect ceiling height, clearance, cable pathways and maintenance access. For instance, overhead pipework may need stronger ceiling supports and greater ceiling heights, while underfloor pipework requires operators to reconsider raised floors, which are generally being phased out of hyperscale data centre designs.
     

  • Raised-floor decisions
    In recent years, raised floors have commonly been removed from data centre designs to improve cost, schedule and flexibility, making non-raised-floor designs the most common choice for new builds as operators transition to liquid cooling for similar efficiency and construction benefits. However, the resulting challenge of running liquid above IT racks – something the industry has spent decades trying to avoid – must now be addressed. Reintroducing a raised floor could counter this risk, but it would add to the overall required slab-to-slab height and can make maintenance more challenging if raised-floor heights are minimised due to other constraints.
     

  • CDUs and floor planning 
    Coolant Distribution Units (CDUs) create further spatial considerations. They require dedicated floor space, plumbing and power supply, and their proximity to IT racks and redundancy configurations directly influence layout. Despite taking up space, CDUs support denser compute clusters and reduce the number of racks needed for a workload, resulting, in theory, in net space savings (a rough illustration of this trade-off follows this list). However, many customers prefer CDUs to be located outside the white space in the cooling corridor, which may not have sufficient room to accommodate them, particularly where customers also want high availability of air cooling to maintain flexibility in the white space.
     

  • Layout and clearance 
    Rack layout also changes. Rear-door heat exchangers and direct-to-chip loops increase the overall rack depth and clearance needs, such as for hot and cold aisles, while immersion tanks typically have a very different footprint from traditional racks and require complete replanning of the white space. The potential increase in rack weight, along with all the associated services and cooling fluids, needs to be taken into account in structural load assessments.
     

  • Building for growth and flexibility
    One of the key impacts of the introduction of new AI computing systems is that end-user requirements for their data halls (such as temperatures, airflow, the percentage of air versus liquid cooling and rack densities) are diverging. Some customers want the flexibility to use data halls for both traditional and AI workloads, which can result in significant inefficiencies in the use of space and mechanical, electrical and plumbing equipment.


    One solution is to design for load density growth, so that a data centre might have a baseline density assuming traditional loads but be configured to add capacity in the same space as densities change and loads move from air to liquid cooling.


    For many existing facilities, this is either challenging or not possible. However, new purpose-built facilities go further by enabling straightforward progression from air to hybrid to full liquid cooling.
     

  • Simplifying liquid cooling adoption
    A key strategy here is the form of heat rejection used in the data hall cooling systems. For many years, data hall cooling systems commonly utilised direct or indirect air-cooling systems, which use air as the primary method of heat rejection. When these sites transition to liquid cooling, their only option without expensive retrofits is to use liquid-to-air sidecar cooling units, which reject the liquid loop's heat into the room air handled by the existing air-cooling plant. In contrast, facilities that use chilled water as the primary cooling system can transition much more easily to provide any percentage of liquid or air cooling required by the customer.

    At ST Telemedia Global Data Centres (STT GDC), all of our hyperscale data centres use chilled water as the primary cooling medium, meaning these sites can easily be used to support liquid-cooled racks. 

    The distinction between airside free-cooling sites and chilled-water sites is becoming more important for AI deployments. Airside free-cooling systems depend on favourable outdoor conditions and are generally optimised for lower density and air-cooled IT. However, chilled water systems can supply the stable temperatures and higher capacities needed for direct-to-chip and other liquid-cooled solutions, making transitions to environments with heavy AI workloads more straightforward.

    Temperature thresholds also influence cooling system design, as the ability to run equipment safely at higher temperatures is becoming a focus for certain customers who want to reduce power usage effectiveness (PUE). For instance, the new Singapore Standard 715:2025 encourages the use of equipment that can operate safely at up to 35°C, enabling operators to run warmer data halls and potentially reducing power used by mechanical cooling systems. However, achieving PUE savings often depends on cooling systems being designed to operate efficiently at these higher temperatures.
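As a rough, purely hypothetical illustration of that last point (the load split below is invented, not data from any STT GDC site), lowering the power drawn by mechanical cooling translates directly into a lower PUE:

# Hypothetical illustration of how cooling power affects PUE.
# PUE = total facility power / IT power. All numbers are invented.

it_load_mw = 10.0

def pue(it_mw, cooling_mw, other_overhead_mw):
    # Power Usage Effectiveness for a given load split
    return (it_mw + cooling_mw + other_overhead_mw) / it_mw

# Cooler setpoints: the mechanical cooling plant works harder
baseline = pue(it_load_mw, cooling_mw=3.0, other_overhead_mw=1.0)

# Warmer data hall (e.g. equipment rated to 35 deg C under SS 715:2025),
# assuming the cooling plant is designed to run efficiently at that point
warmer = pue(it_load_mw, cooling_mw=2.0, other_overhead_mw=1.0)

print(f"Baseline PUE:    {baseline:.2f}")   # 1.40
print(f"Warmer-hall PUE: {warmer:.2f}")     # 1.30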
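Similarly, the "in theory" net space saving noted in the CDU bullet above can be sketched with equally hypothetical numbers:

import math

# Hypothetical floor-space comparison for a fixed 1 MW compute requirement.
# Rack densities, footprints and the 1-CDU-per-4-racks ratio are all illustrative.

workload_kw = 1000.0

# Air-cooled: ~20 kW racks, ~3 m^2 of white space per rack including aisles
air_racks = workload_kw / 20.0
air_area_m2 = air_racks * 3.0

# Liquid-cooled: ~100 kW racks, slightly deeper, plus in-row CDUs
liquid_racks = workload_kw / 100.0
cdus = math.ceil(liquid_racks / 4)
liquid_area_m2 = liquid_racks * 3.5 + cdus * 2.0

print(f"Air-cooled:    {air_racks:.0f} racks, ~{air_area_m2:.0f} m^2")
print(f"Liquid-cooled: {liquid_racks:.0f} racks + {cdus} CDUs, ~{liquid_area_m2:.0f} m^2")

Even with the CDU footprint included, the denser racks need far less white space for the same workload; the practical constraint, as noted above, is usually where the CDUs can physically be placed.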
     

Moving to intelligent thermal management

AI workloads rise and fall throughout the day, and fixed cooling settings can waste energy and increase operating costs. Cooling systems now need to respond in real time.
 

At STT GDC, we are taking a pragmatic, forward-looking approach to integrating AI into data centre operations, with a clear focus on insight-led optimisation of cooling performance. As the first data centre operator in Asia to pilot Phaidra’s AI-powered cooling system energy optimisation, we deployed AI agents that analysed thousands of real‑time sensor signals and dynamically recommended set‑point adjustments across chiller, pumping and airflow systems in our Singapore facilities. This monitoring‑led approach is increasingly important as operators manage the thermal demands of high‑performance computing and accelerated AI workloads, enabling improved energy efficiency, supporting higher‑density deployments and reducing operational burden. 
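As a simplified, generic sketch of what recommending set-point adjustments from sensor signals can look like in code (this is a textbook control-loop pattern, not Phaidra's or STT GDC's actual implementation; the sensor names, target and step sizes are invented):

# Generic, illustrative control loop: nudge a chilled-water setpoint based on
# live rack-inlet temperatures. Names and thresholds are hypothetical.

TARGET_RACK_INLET_C = 27.0                   # desired worst-case server inlet temperature
STEP_C = 0.25                                # setpoint adjustment per cycle
MIN_SETPOINT_C, MAX_SETPOINT_C = 16.0, 22.0

def control_cycle(setpoint_c, sensors):
    # Raise the setpoint when there is comfortable thermal margin (saving chiller
    # energy); lower it when the warmest rack inlet exceeds the target.
    hottest_inlet = max(sensors["rack_inlet_temps_c"])
    if hottest_inlet > TARGET_RACK_INLET_C:
        setpoint_c -= STEP_C
    elif hottest_inlet < TARGET_RACK_INLET_C - 1.0:
        setpoint_c += STEP_C
    return min(max(setpoint_c, MIN_SETPOINT_C), MAX_SETPOINT_C)

# Quick demonstration with fabricated readings (a real system would read from
# the building management system and write the new setpoint back to it):
setpoint = 20.0
for readings in [{"rack_inlet_temps_c": [25.1, 26.3, 27.8]},
                 {"rack_inlet_temps_c": [24.9, 25.4, 25.0]}]:
    setpoint = control_cycle(setpoint, readings)
    print(f"hottest inlet {max(readings['rack_inlet_temps_c']):.1f} C -> setpoint {setpoint:.2f} C")

In production systems of this kind, the simple rule above is replaced by models trained on thousands of signals across chillers, pumps and airflow, but the underlying idea of continuously trading thermal headroom for energy is the same.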
 

Partnerships like these allow STT GDC to test and apply AI-driven optimisation approaches across different facility layouts and cooling configurations, helping to ensure sustained viability across our global portfolio. 
 

Digital twin models also add a predictive layer by simulating how building management systems behave under different conditions. Combined with real-time sensor data, operators gain a clear view of operating environments, enabling them to pinpoint opportunities for optimisation.
 

As liquid cooling becomes more common, these intelligent controls will be key to keeping energy use steady while maintaining resilience.
 

Preparing for denser workloads

STT GDC is adapting our campuses to meet the demands of high-density AI workloads. All our new facilities are designed for liquid-cooling capabilities, including direct-to-chip and immersion cooling, together with intelligent monitoring systems that connect thermal performance with power availability and sustainability targets. This is demonstrated by our status as an NVIDIA Colocation Partner, with STT Singapore 6 and STT Bangkok 1 certified under the NVIDIA DGX-Ready Data Centre program – the first in our portfolio to achieve this.
 

Across new developments, we’re building in flexibility to maximise site densities for the next generation of AI workloads while maintaining our world-class services for traditional cloud and enterprise workloads. This ensures our platforms are prepared not just for today’s AI demands, but for the next wave of innovation that will evolve alongside AI.
