INFORMATION AND INSIGHTS FOR A BETTER DECISION

Resources

Browse our blogs, brochures, customer case studies, events, factsheets, infographics, media articles, reports, videos and white papers.

Article
Cooling the Future: How AI Is Driving a New Era in Data Centre Architecture

Artificial intelligence (AI) workloads are putting new pressures on data centre designs and systems. These workloads, especially those involving large language model training and inference, rely on accelerated computing powered by dense graphics processing unit (GPU) clusters that consume more electricity and produce more heat than traditional server setups.
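To put these rack heat loads in context, here is a back-of-the-envelope sketch of the airflow an air-cooled rack would need. The 15 K supply-to-return temperature rise and the rack power figures are illustrative assumptions for the sketch, not design values.

```python
# Rough sketch: airflow needed to air-cool a rack at a given power draw.
# Assumes all rack power becomes heat carried away by air: P = rho * V * cp * dT.
# The 15 K temperature rise and rack sizes are illustrative assumptions.

RHO_AIR = 1.2      # kg/m^3, air density at roughly 20 C
CP_AIR = 1005.0    # J/(kg*K), specific heat capacity of air
DELTA_T = 15.0     # K, assumed supply-to-return air temperature rise

def required_airflow_m3s(rack_kw: float) -> float:
    """Volumetric airflow (m^3/s) needed to remove rack_kw of heat."""
    return rack_kw * 1000.0 / (RHO_AIR * CP_AIR * DELTA_T)

for rack_kw in (10, 40, 130):   # legacy, dense air-cooled, GB200-class
    flow = required_airflow_m3s(rack_kw)
    print(f"{rack_kw:>4} kW rack -> {flow:6.2f} m^3/s ({flow * 3600:,.0f} m^3/h)")
```

A 10 kW rack needs roughly half a cubic metre of air per second, while a 130 kW rack would need over 7 m³/s under the same assumptions, which is one reason air cooling stops scaling at these densities.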
According to the International Energy Agency, global data centre electricity use could double to around 945 terawatt hours by 2030, largely due to accelerated computing systems built with specialised hardware for data-intensive workloads. Modern GPU systems already run above 100 kilowatts (kW) per rack; NVIDIA's GB200/GB300 NVL72 systems, for instance, have a thermal design power of 130–140 kW per rack. As a result, air cooling is reaching its limits, with rising heat densities making throttling and hotspots difficult to manage and restricting improvements in power efficiency. Our earlier pieces, From Air to Liquid and The Rise of Immersion Liquid Cooling, explain why many data centre operators are moving from air cooling to liquid cooling. But what does this change mean for the physical design of a data centre facility?

The architectural impact of liquid cooling

Liquid cooling is becoming widely used, and its effects are reaching every part of data centre facilities. While much of the conversation centres on cooling technologies, the overall building design of a data centre plays a critical role in shaping cooling efficiency. Data halls that were originally designed around airflow now need to make room for coolant distribution, heat exchange systems and new spatial considerations.

Pipework and routing

Pipework is at the centre of this redesign. Direct-to-chip, immersion and hybrid systems all use supply and return lines that can run overhead or under raised floors. Routing choices affect ceiling height, clearance, cable pathways and maintenance access. For instance, overhead pipework may need stronger ceiling supports and greater ceiling heights, while underfloor pipework requires operators to reconsider raised floors, which are generally being phased out of hyperscale data centre designs.

Raised-floor decisions

In recent years, raised floors have commonly been removed from data centre designs to improve cost, schedule and flexibility.
This has made non-raised-floor designs the most common choice for new builds, even as operators transition to liquid cooling for similar efficiency and construction benefits. However, the resulting challenge of running liquid over the top of IT racks – which the industry has spent decades trying to avoid – must now be addressed. While reintroducing a raised floor could counter this risk, it would also add to the overall required slab-to-slab height and can make maintenance more challenging if raised-floor heights are minimised due to other constraints.

CDUs and floor planning

Coolant Distribution Units (CDUs) create further spatial considerations. They require dedicated floor space, plumbing and power supply, and their proximity to IT racks and redundancy configurations directly influences layout. Despite taking up space, CDUs support denser compute clusters and reduce the number of racks needed for a workload, resulting, in theory, in net space savings. However, many customers prefer CDUs to be located outside the white space, in the cooling corridor, which may not have sufficient room to accommodate them, particularly where customers also want high availability of air cooling to maintain flexibility in the white space.

Layout and clearance

Rack layout also changes. Rear-door heat exchangers and direct-to-chip loops increase overall rack depth and clearance needs, such as for hot and cold aisles, while immersion tanks typically have a very different footprint from traditional racks and require complete replanning of the white space. The potential increase in rack weight, along with all the associated services and cooling fluids, needs to be taken into account in structural load assessments.
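The structural side of that assessment can be sketched in a few lines. Every number below (rack masses, footprint, floor rating) is a hypothetical assumption chosen for illustration, not a figure for any STT GDC site.

```python
# Illustrative structural check: floor loading imposed by a heavier
# liquid-cooled rack. All figures are hypothetical assumptions.

FLOOR_RATING_KG_M2 = 1500.0   # assumed floor load rating, kg/m^2

def floor_loading_kg_m2(rack_mass_kg: float, footprint_m2: float) -> float:
    """Uniform load the rack imposes on its footprint, in kg/m^2."""
    return rack_mass_kg / footprint_m2

racks = {
    "air-cooled rack (800 kg, 0.6 x 1.2 m)": (800.0, 0.72),
    "liquid-cooled rack + coolant (1600 kg, 0.6 x 1.2 m)": (1600.0, 0.72),
}

for name, (mass, area) in racks.items():
    load = floor_loading_kg_m2(mass, area)
    verdict = "within" if load <= FLOOR_RATING_KG_M2 else "EXCEEDS"
    print(f"{name}: {load:,.0f} kg/m^2 ({verdict} assumed rating)")
```

Under these assumed numbers, doubling the rack mass on the same footprint pushes the point load past the floor rating, which is why denser deployments may need load spreading or slab reinforcement.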
Building for growth and flexibility

One of the key impacts of the introduction of new AI computing systems is that end-user requirements for their data halls (such as temperatures, airflow, the percentage of air versus liquid cooling and rack densities) are diverging. Some customers want the flexibility to use data halls for both traditional loads and AI workloads, which can result in significant inefficiencies in the use of space and mechanical, electrical and plumbing equipment. One solution is to design for load density growth, so a data centre might have a baseline density assuming traditional loads but be configured to add capacity into the same space as densities change and loads move from air to liquid cooling. For many existing facilities, this is either challenging or not possible. However, new purpose-built facilities go further by enabling straightforward progression from air to hybrid to full liquid cooling.

Simplifying liquid cooling adoption

A key strategy related to this is the form of heat rejection used in the data hall cooling systems. For many years, data hall cooling systems commonly used direct or indirect air-cooling systems, which rely on air as the primary method of heat rejection. When these sites transition to liquid cooling, their only option without expensive retrofits is to use sidecar cooling units that convert air cooling to liquid cooling. In contrast, facilities that use chilled water as the primary cooling system can transition much more easily to provide any percentage of liquid or air cooling as required by the customer. At ST Telemedia Global Data Centres (STT GDC), all of our hyperscale data centres use chilled water as the primary cooling medium, meaning these sites can readily support liquid-cooled racks. The distinction between airside free-cooling sites and chilled-water sites is becoming more important for AI deployments.
Airside free-cooling systems depend on favourable outdoor conditions and are generally optimised for lower-density, air-cooled IT. Chilled-water systems, by contrast, can supply the stable temperatures and higher capacities needed for direct-to-chip and other liquid-cooled solutions, making transitions to environments with heavy AI workloads more straightforward. Temperature thresholds also influence cooling system design, as the ability to run equipment safely at higher temperatures is becoming a focus for certain customers who want to reduce power usage effectiveness (PUE). For instance, the new Singapore Standard 715:2025 encourages the use of equipment that can operate safely at up to 35°C, enabling operators to run warmer data halls and potentially reducing the power used by mechanical cooling systems. However, achieving PUE savings often depends on cooling systems being designed to operate efficiently at these higher temperatures.

Moving to intelligent thermal management

AI workloads rise and fall throughout the day, and fixed cooling settings can waste energy and increase operating costs. Cooling systems now need to respond in real time. At STT GDC, we are taking a pragmatic, forward-looking approach to integrating AI into data centre operations, with a clear focus on insight-led optimisation of cooling performance. As the first data centre operator in Asia to pilot Phaidra's AI-powered cooling system energy optimisation, we deployed AI agents that analysed thousands of real-time sensor signals and dynamically recommended set-point adjustments across chiller, pumping and airflow systems in our Singapore facilities. This monitoring-led approach is increasingly important as operators manage the thermal demands of high-performance computing and accelerated AI workloads, enabling improved energy efficiency, supporting higher-density deployments and reducing operational burden.
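Optimisations like these are typically measured in PUE terms: total facility power divided by IT power. A minimal sketch, using hypothetical load figures, shows how trimming mechanical cooling power moves PUE toward the ideal value of 1.0:

```python
# Minimal sketch of power usage effectiveness (PUE): total facility power
# divided by IT power. All load figures are hypothetical, chosen only to
# show how reduced mechanical cooling power lowers PUE.

def pue(it_kw: float, cooling_kw: float, other_kw: float) -> float:
    """PUE = total facility power / IT power."""
    return (it_kw + cooling_kw + other_kw) / it_kw

IT_KW = 10_000.0    # assumed IT load
OTHER_KW = 600.0    # assumed UPS, lighting and distribution losses

baseline = pue(IT_KW, cooling_kw=2_400.0, other_kw=OTHER_KW)  # cooler set points
warmer = pue(IT_KW, cooling_kw=1_600.0, other_kw=OTHER_KW)    # warmer data hall

print(f"baseline PUE: {baseline:.2f}")     # 1.30
print(f"warmer-hall PUE: {warmer:.2f}")    # 1.22
```

Because IT load is the denominator, every kilowatt of cooling power saved at a given IT load shows up directly as a lower PUE, which is why set-point optimisation and warmer operating temperatures are attractive levers.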
Partnerships like these allow STT GDC to test and apply AI-driven optimisation approaches across different facility layouts and cooling configurations, helping to ensure sustained viability across our global portfolio. Digital twin models also add a predictive layer by simulating how building management systems behave under different conditions. Combined with real-time sensor data, operators gain a clear view of operating environments, enabling them to pinpoint opportunities for optimisation. As liquid cooling becomes more common, these intelligent controls will be key to keeping energy use steady while maintaining resilience.

Preparing for denser workloads

STT GDC is adapting our campuses to meet the demands of high-density AI workloads. All our new facilities are designed for liquid-cooling capabilities, including direct-to-chip and immersion cooling, together with intelligent monitoring systems that connect thermal performance with power availability and sustainability targets. This is demonstrated by our status as an NVIDIA Colocation Partner, with STT Singapore 6 and STT Bangkok 1 certified under the NVIDIA DGX-Ready Data Centre program – the first in our portfolio to achieve this. Across new developments, we're building in flexibility to maximise site densities for the next generation of AI workloads while maintaining our world-class services for traditional cloud and enterprise workloads. This ensures our platforms are prepared not just for today's AI demands, but for the next wave of innovation that will evolve alongside AI.

Article
Powering AI at Scale: What Your Data Centre Should be Delivering

Artificial intelligence (AI) is quickly becoming a vital business tool that is transforming organisations across industries, from healthcare and finance to manufacturing and mobility.
This transformation has been especially pronounced in Asia Pacific, with research indicating that countries such as Singapore, Australia, New Zealand and South Korea are outpacing most North American and European markets in enterprise AI adoption.

Article
Bringing AI-Ready, Sustainable Digital Infrastructure Closer to You

In 2025, we expanded access to AI-ready capacity, high-performance cooling and more sustainable solutions across our global platform – strengthening the foundation for organisations building and scaling digital services. Here's how these developments translate into value for your business.

Factsheet
STT Bangkok 1 Factsheet

STT Bangkok 1 is a carrier-neutral data centre, built to the highest industry standards and strategically located in Hua Mak.

Factsheet
STT Bangkok 2 Factsheet

STT Bangkok 2 is part of the STT Bangkok data centre campus, with a development potential of 24 megawatts of IT power. Purpose-built to support Thailand's growing demand for critical digital infrastructure, the facility is engineered to meet the evolving needs of customers, especially in high-density and AI-driven environments.

Factsheet
STT Bangkok 3 Factsheet

Situated on Wireless Road, STT Bangkok 3 functions as an interconnection hub with carrier density and low-latency capabilities, making it an ideal choice for businesses needing fast and dependable access to data, applications and cloud services.

Article
5 Steps Thailand Should Take Now to Leapfrog Its ASEAN Rivals in the DC Space

As a latecomer to the industry, Thailand is in a unique position to leapfrog its ASEAN neighbours by putting into action lessons from other mature data centre hubs, whether Singapore, Tokyo, Europe or the US. "Thailand can adopt best practices without repeating early-stage mistakes," notes Budsarin Pradityont, Country Head of ST Telemedia Global Data Centres (Thailand), in an email interview with w.media.
Article
Powering ASEAN's Digital Future: Data Centres as Strategic Infrastructure for Growth and Sovereignty

Southeast Asia stands at a pivotal crossroads: to be the architect of its digital future or to outsource the development of its digital backbone. The answer lies not in the mere pace or scale of data centre growth, but in how deeply these investments are integrated into national competitiveness strategies.

Article
Data centres: Thailand's gateway to the global AI stage

Thailand holds strong potential to become ASEAN's AI hub. Yet AI processing demands enormous energy and generates extreme heat, posing a major challenge for digital infrastructure.

Article
Anchoring the Future: Aligning Digital Transformation and Sustainability in APAC

The Asia-Pacific region is at the epicentre of global digital transformation. The rapid adoption of cloud technologies, the exponential growth of artificial intelligence (AI) and the emergence of sovereign digital infrastructure are reshaping societies and economies. At the core of this transformation are data centres – strategic assets enabling innovation, economic growth and resilience.

Media Article
How Thailand is Building Tomorrow's AI Economy with Critical Digital Infrastructure

Artificial intelligence is reshaping economies across ASEAN, redefining how nations compete and grow. For Thailand, the defining factor in whether we emerge as a leader, or remain a follower, in this new era is not low-cost labour, but the strength of our digital infrastructure foundations. We stand at a pivotal moment. The AI economy is expected to transform every sector, from financial services and smart cities to education, healthcare and manufacturing. According to Statista Market Insights, Thailand's AI market is forecast to reach US$1.16 billion in 2025, with an annual growth rate of 26.24% through 2031.
Yet beyond these figures lies a deeper truth: the shift from computing as basic support infrastructure to computing as a strategic competitive advantage.

Podcast
PodChats for FutureCOO: Ensuring DC sustainability and regulatory alignment in 2026

In this PodChats for FutureCOO episode, Daniel Pointon, Group CTO at STT GDC, discusses how enterprises can prepare for 2026 by balancing AI-driven growth with sustainability and regulatory alignment.