Exploring Modern Data Centre Design


Why DC-powered lighting matters for modern data centres
In this exclusive article for DCNN, Ton van de Wiel, Global Segment Manager, Intelligent Buildings at Signify, outlines why DC-powered LED lighting is emerging as a key consideration in making data centre infrastructure more efficient and resilient.

Building resilience from the ground up

The digital services that underpin modern economies – from media streaming to cloud computing – depend on a rapidly expanding global network of data centres. These facilities are not only critical to digital connectivity; they represent significant sources of employment, infrastructure investment, and tax revenue through construction and long-term operation.

Today, data centre operators face a convergence of challenges. Capacity requirements are accelerating due to AI-driven workloads, energy prices are rising, and expectations around sustainability and carbon reduction are becoming more stringent. In response, the industry is re-examining its electrical infrastructure. Direct current (DC) power architectures, once limited to niche applications, are gaining traction as a foundation for higher efficiency and greater operational resilience.

Within this shift, lighting – often treated as a peripheral system – can play a strategic role. DC-powered LED lighting combines high energy efficiency with relatively low implementation risk, making it an effective starting point for broader DC adoption. Beyond energy savings, lighting can also function as an intelligent layer within next-generation data centre infrastructure.

How power architectures are changing

Operating a data centre requires tight coordination between IT equipment, networking, cooling, security, and electrical distribution. Historically, alternating current (AC) has been the default for power distribution. However, as facility scale and power densities increase, electrical efficiency has become a primary design concern.

Early facilities relied on 48V DC for backup systems – safe but capacity-constrained.
This gave way to 230/277V AC distribution, followed by 380V DC for internal systems. Today, the extreme power demands of AI servers are driving another transition towards 650V DC and even 800V DC architectures. According to the Open Direct Current Alliance (ODCA), 650V DC represents the optimal level for building-wide distribution, balancing efficiency with safety, while organisations such as NVIDIA and the Open Compute Project are investigating 800V DC. Although these higher voltages show promise for high-power IT loads, they do not yet deliver the same system-wide efficiency benefits as a facility-level 650V DC approach.

Outside the data centre sector, industrial sites are already deploying 650V DC systems to improve energy efficiency and resilience. One key advantage is the ability to capture regenerative energy from motor drives and robotics – energy that would otherwise be dissipated as heat. Because lighting is a continuous base load, it can readily absorb this recovered energy, reducing grid dependency and operating costs.

Integrating lighting, motors, renewables, and storage on a shared DC grid reduces conversion losses, cuts copper usage through fewer conductors, and lowers transmission losses compared with 400V AC systems. When paired with solar PV and batteries, DC grids also improve self-consumption, backup capability, and flexible energy management.

What’s driving the move?

The momentum behind DC power in data centres is rooted in both engineering logic and economics:

• Lower conversion losses — Conventional AC systems require multiple conversion steps, resulting in energy losses of up to 18%.
• Alignment with IT equipment — Servers and GPUs operate natively on DC power.
• Simpler renewable integration — Solar panels and battery systems produce DC, enabling more efficient connections.
• Reduced system complexity — Fewer transformers and rectifiers mean simpler installation and improved reliability.
• Preparedness for AI growth — Rising AI workloads are accelerating the shift towards DC-based power systems.

DC power is therefore not just an alternative distribution method, but a pathway to smarter, more resilient infrastructure.

Lighting as the first step

Among all building systems, lighting is often the most practical candidate for early DC adoption. Connected LED lighting allows operators to pilot DC distribution with limited risk before extending it to mission-critical IT loads. The benefits are tangible:

• Capital expenditure savings — DC lighting cables reduce copper use by 40%. Three-conductor DC cables (L+, L-, PE) can transmit the same power as five-conductor 400V three-phase AC cables.
• Operational cost reductions — With only two current-carrying conductors, DC lighting avoids approximately 33% of cable losses compared with three-phase AC at the same current.
• Improved resilience — DC lighting can operate directly from on-site solar generation or battery storage, strengthening microgrid performance during outages.

DC-compatible luminaires and components are already commercially available. For example, Signify offers a 100W Xitanium LED driver designed for 620–750V DC operation, integrated into the Pacific LED Gen5 and Maxos Fusion luminaire families. These solutions achieve up to 165lm/W efficacy and can be paired with systems such as Signify Interact and Philips Dynalite. Driver-level efficiency can exceed 95%, with future potential to reach 200lm/W through ultra-high-efficiency LED modules.
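The "up to 18%" conversion-loss and "approximately 33%" cable-loss figures quoted above follow from straightforward arithmetic. A minimal sketch under illustrative assumptions of our own (the per-stage efficiencies, current, and resistance values below are not Signify figures):

```python
# Illustrative back-of-envelope for two figures quoted above.
# All numeric assumptions here (95% per stage, 10 A, 50 mOhm) are
# ours for illustration, not vendor data.

# 1) Conversion losses: a chain of stages compounds multiplicatively.
stage_efficiencies = [0.95, 0.95, 0.95, 0.95]   # assume four ~95% stages
end_to_end = 1.0
for eff in stage_efficiencies:
    end_to_end *= eff
conversion_loss = 1 - end_to_end                 # ~18.5%, in line with "up to 18%"

# 2) Cable losses: I^2*R summed over the current-carrying conductors.
def cable_loss_w(conductors: int, amps: float, ohms: float) -> float:
    """Resistive loss in watts across the loaded conductors of one run."""
    return conductors * amps**2 * ohms

amps, ohms = 10.0, 0.05
ac_three_phase = cable_loss_w(3, amps, ohms)     # three loaded conductors
dc_two_wire = cable_loss_w(2, amps, ohms)        # L+ and L- only
cable_saving = 1 - dc_two_wire / ac_three_phase  # exactly 1/3, i.e. ~33%

print(f"conversion loss ~{conversion_loss:.1%}, cable-loss saving ~{cable_saving:.0%}")
```

The 1/3 saving falls directly out of the conductor count (two loaded conductors instead of three at the same current), which is why it holds regardless of the specific cable chosen.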
Sustainability and ESG outcomes

DC-powered lighting supports measurable sustainability objectives:

• Lower carbon emissions through reduced conversion losses and material usage
• Support for certifications such as LEED Zero and BREEAM
• Energy optimisation with connected lighting systems, cutting lighting energy use by up to 75%

For hyperscalers like Amazon Web Services and Microsoft Azure, as well as colocation providers, these outcomes translate directly into stronger ESG reporting and progress towards carbon neutrality.

DC lighting can also be implemented incrementally. Some facilities deploy rack-level DC lighting while retaining an AC backbone. Others adopt facility-wide DC grids that integrate lighting, renewables, storage, and IT infrastructure. In larger deployments, centralised emergency lighting connected to the DC backbone ensures continuous illumination during outages, reinforcing safety in mission-critical spaces.

A strategic role for lighting

As operators prepare for the next phase of digital expansion, DC-powered lighting offers a practical, high-impact entry point into efficient, renewable-ready DC infrastructure. Modern connected lighting systems extend far beyond illumination. With embedded sensors measuring occupancy, daylight, temperature, humidity, and air quality, luminaires form a dense, facility-wide sensing network without the need for additional hardware.

Using open protocols such as DALI, BACnet, and MQTT, DC lighting networks integrate with building management systems and DCIM platforms, enabling predictive maintenance, enhanced operational intelligence, and optimised cooling and space utilisation. By simplifying cabling, reducing losses, and enabling intelligent energy management, DC lighting transforms illumination from a passive load into an active contributor to resilient, sustainable data centre operations.

Secure I.T. completes Qatar financial data centre design
Secure I.T. Environments (SITE), a UK design and build company for modular, containerised, and micro data centres, has completed a full server room design programme for a financial institution in the State of Qatar. The company delivered the engineering and layout documentation, enabling local procurement and installation.

The project involved a new server room within an existing building footprint - covering approximately 110m² - and included a separate staging area to improve security and operational flow. The design includes eight IT racks and three communications racks, based on a target density of 6kW per IT rack. Power infrastructure features dual 50kW UPS systems operating in parallel, alongside additional UPS provision for communications equipment.

Capacity, cooling, and resilience

Cooling is based on an N+1 direct expansion configuration using three air conditioning units, providing around 80kW of sensible cooling capacity. The total estimated site load is approximately 145kVA within a 150kVA allowance. Environmental monitoring and fire protection systems were also incorporated, with humidity control and condensate management designed for high ambient temperatures.

The design follows the ISO/IEC TS 22237 data centre facility standards and related international guidance covering power, environmental control, security, and management.

Chris Wellfair, Projects Director at Secure I.T. Environments, comments, “For overseas data centre and server room projects, getting the design decisions right up front is what de-risks delivery.

"This programme focused on producing a complete, buildable design for a controlled, resilient environment, with clear capacity assumptions, practical access planning, and standards-led engineering across power, cooling, fire, and security.

"Having our work in demand internationally is a testament to the work of our design team.”

For more from Secure I.T. Environments, click here.
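As a rough illustration of how the quoted figures hang together, here is a back-of-envelope check. It assumes the ~80kW sensible capacity is the three-unit total, which the article does not state explicitly, so treat the split as our assumption:

```python
# Back-of-envelope check of the design figures above.
# Assumption (ours): the ~80 kW sensible capacity is the total across
# all three units, so N+1 usable capacity is two units' worth.

it_racks, kw_per_rack = 8, 6.0
it_load_kw = it_racks * kw_per_rack      # 48 kW target IT load

units = 3
unit_kw = 80.0 / units                   # ~26.7 kW per unit (assumed even split)
n_plus_1_kw = (units - 1) * unit_kw      # capacity with one unit offline

print(f"IT load {it_load_kw:.0f} kW vs N+1 cooling {n_plus_1_kw:.1f} kW")
```

Under that assumption, the design still covers the full IT heat load with one cooling unit out of service, which is the point of the N+1 configuration.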

TES Power to deliver modular power for Spanish DC
TES Power, a provider of power distribution equipment and modular electrical rooms for data centres, has been selected to deliver 48MW of modular power infrastructure for a new greenfield data centre development in Northern Spain, designed to support artificial intelligence workloads.

The facility is intended for high-density compute environments, where power resilience, scalability, and deployment speed are key considerations. Growing demand from AI training and inference continues to place pressure on operators to deliver robust electrical infrastructure without compromising availability or reliability.

Modular power skids for high-density AI environments

As part of the project, TES Power will design and manufacture 25 fully integrated 2.5MW IT power skids. Each skid is a self-contained module incorporating cast resin transformers, LV switchgear, parallel UPS systems, end-of-life battery autonomy, CRAH-based cooling, and high-capacity busbar interconnections. The skids are designed to provide continuous power to critical IT loads, with automatic transfer from mains supply to battery and generator systems in the event of a supply disruption - a requirement increasingly associated with AI-driven data centre operations.

Michael Beagan, Managing Director at TES Power, says, “AI is fundamentally changing the scale, speed, and resilience requirements of data centre power infrastructure. This project reflects exactly where the market is heading: larger, higher-density facilities that cannot tolerate risk or delay.

"By delivering fully integrated, factory-tested power skids, we’re helping our client accelerate deployment while maintaining the absolute reliability that AI workloads demand.”

The project uses off-site manufacture to reduce programme risk and enable parallel delivery, allowing electrical systems to be progressed while civil and building works continue on-site.
Each skid will undergo full Factory Acceptance Testing prior to shipment, reducing commissioning risk and limiting on-site installation time. Building Information Modelling is being used to digitally coordinate each skid with wider site services, supporting installation sequencing, clash detection, and longer-term operational planning.

TES Power’s scope also includes project management, site services, and final commissioning.

Johnson Controls launches cooling reference design guides
Johnson Controls, a global provider of smart building technologies, has announced the launch of its Reference Design Guide Series for one-gigawatt AI data centres. Each guide in the series maps the full thermal chain, offering cooling architectures tailored to diverse compute densities, geographies, and elevations. The series begins with a blueprint for water-cooled chiller plants, with future guides to address air-cooled and absorption chiller solutions.

As AI transforms industries, the scale and complexity of data centre infrastructure is rapidly evolving. The ability to efficiently manage thermal loads at gigawatt scale is now a critical enabler for AI innovation, and the industry faces mounting pressure to deliver facilities that are not only high-performing, but also sustainable and future-ready. Johnson Controls says its Reference Design Guide Series responds to this challenge by outlining how to achieve "industry-leading" energy and water efficiency (PUE and WUE) while maintaining flexibility to scale across diverse climates and operational requirements.

The guide outlines a complete thermal architecture supporting both liquid- and air-cooled IT loads through integrated computer room air handlers (CRAHs), fan coil walls, coolant distribution units (CDUs), and high-efficiency YORK centrifugal chillers. It provides sizing guidance for 220MW compute quadrants and defines temperature and operating conditions across all major facility loops, including Technology Cooling System (TCS) loops supporting next-generation GPUs.

Stated key outcomes

• Zero water consumption — A "fully water-free" heat rejection process using dry coolers, "reducing operational costs and advancing sustainability objectives."
• Future-ready thermal flexibility — High-temperature TCS loop readiness aims to ensure compatibility with forthcoming GPU architectures.
• Optimised high-density AI performance — Alignment with the NVIDIA DSX reference architecture enables scalable deployment of 1-GW-class AI Factories.
• Energy-efficient operation — Elevated condenser water temperatures, bifurcated loops, and YORK high-lift chillers aim to deliver strong PUE and improved annualised efficiency.

Austin Domenici, Vice President & General Manager at Johnson Controls Global Data Center Solutions, says, "AI Factories are production facilities - the places where intelligence is manufactured at an industrial scale.

"By supporting the NVIDIA DSX reference architecture and improving water and energy efficiency in the cooling process while maintaining high-temperature loop compatibility, our Reference Design Guide equips customers to deploy gigawatt-scale AI infrastructure that is scalable, repeatable, resilient, and sustainable."

For more from Johnson Controls, click here.
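PUE and WUE, the two efficiency metrics named above, are simple ratios. A minimal sketch with illustrative numbers of our own, not figures from the Reference Design Guide:

```python
# PUE and WUE as referenced above -- illustrative numbers only,
# not figures from the Johnson Controls Reference Design Guide.

def pue(it_load_mw: float, overhead_mw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power."""
    return (it_load_mw + overhead_mw) / it_load_mw

def wue(water_litres: float, it_energy_kwh: float) -> float:
    """Water Usage Effectiveness: litres of water per kWh of IT energy."""
    return water_litres / it_energy_kwh

print(round(pue(800.0, 120.0), 2))  # 1.15 -- lower is better, 1.0 is the floor
print(wue(0.0, 1_000_000.0))        # 0.0 -- a water-free design drives WUE to zero
```

This is why the "fully water-free" dry-cooler claim maps directly onto a WUE of zero, while the elevated condenser water temperatures target the overhead term in the PUE numerator.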

Thorn, Zumtobel to exhibit at Data Centre World
Thorn and Zumtobel, both lighting brands of the Zumtobel Group, are to present a "unified approach" to data centre lighting at Data Centre World 2026. The companies say the focus will be on three operational priorities for data centre operators and delivery teams: reduced energy consumption, reliable operation, and consistent control across white space, plant, circulation, and perimeter areas.

The stand will outline how a coordinated lighting and controls strategy can support specification, installation, and ongoing operation across different data centre environments. The Zumtobel Group says its approach is intended to support consistency across projects, while also simplifying long-term maintenance and operational management.

Lighting controls for data centres

A central element of the stand will be the use of the LITECOM control platform, which is presented as a way to connect a defined portfolio of luminaires across different zones of a data centre. The companies say this is intended to support scheduling, presence detection, daylight strategies, scene setting, and portfolio standardisation. The stand will also feature TECTON II, shown as part of a continuous-row lighting infrastructure approach, which is designed to support rapid, tool-free assembly and future adaptation.

Lighting applications on show will cover white space, technical areas, offices, and exterior zones. Products listed for demonstration include:

• Thorn: Aquaforce Pro, ForceLED, Piazza, Omega Pro 2, IQ Beam
• Zumtobel: IZURA, TECTON II, MELLOW LIGHT, AMPHIBIA, LANOS

All are shown as being controlled via LITECOM.

The stand design itself is intended to reflect the Zumtobel Group's stated sustainability principles, using reused and modular components from previous events, with minimal new-build elements. In addition, graphics have been consolidated to reduce printing and waste.
Neil Raithatha, Head of Marketing, Thorn and Zumtobel Lighting UK & Ireland, notes, “Data centre customers need lighting that is consistent, efficient, and straightforward to manage.

“Our presentation this year brings together proven luminaires with a control platform that helps project teams deliver quickly and run reliably, from the white space to the perimeter.”

Thorn and Zumtobel will be exhibiting at Stand F140 at ExCeL London on 4–5 March 2026.

For more from Thorn and Zumtobel, click here.

Data centre waste heat could warm millions of UK homes
New analysis from EnergiRaven, a UK provider of energy management software, and Viegand Maagøe, a Danish sustainability and ESG consultancy, suggests that waste heat from the next generation of UK data centres could be used to heat more than 3.5 million homes by 2035, provided the necessary heat network infrastructure is developed.

The research estimates that projected growth in data centres could generate enough recoverable heat to supply between 3.5 million and 6.3 million homes, depending on data centre design efficiency and other technical factors. The report argues that without investment in large-scale heat network infrastructure, much of this heat will be lost. The study highlights a risk that the UK will expand data centre and AI infrastructure without making use of the waste heat produced, missing an opportunity to reduce household energy costs and improve energy resilience.

“Our national grid will be powering these data centres - it’s madness to invest in the additional power these facilities will need and waste so much of it as unused heat, driving up costs for taxpayers and bill payers,” argues Simon Kerr, Head of Heat Networks at EnergiRaven. “Microsoft has said it wants its data centres to be ‘good neighbours’ - giving heat back to their communities should be an obvious first step.”

Regional opportunities and proximity to housing

The research points to examples where data centres are located close to both new housing developments and areas affected by fuel poverty. Around Greater Manchester, for example, 15,000 homes are planned in the Victoria North development, with a further 14,000 to 20,000 planned in Adlington. The area also includes more than a dozen existing data centres, with additional facilities planned. According to the analysis, these sites could potentially supply heat to nearby new housing, reducing the need for individual gas boilers and supporting lower-carbon heating.
Moreover, the study maps how similar patterns could be replicated across the UK, linking waste heat sources with residential demand through heat networks.

Using waste heat for space heating is common in parts of northern Europe, particularly in the Nordic countries. There, waste heat from sources such as data centres, power plants, incinerators, and sewage treatment facilities is often connected to district heat networks, supplying homes via heat interface units instead of individual boilers.

In the UK, a number of cities have been designated as Heat Network Zones, where heat networks have been identified as a lower-cost, low-carbon heating option. From 2026, Ofgem will take over regulation of heat networks, and new technical standards will be introduced through the Heat Network Technical Assurance Scheme, aimed at improving consumer and investor confidence.

Heat networks, regulation, and policy context

The Warm Homes Plan includes a target to double the proportion of heat demand met by heat networks in England to 7% by 2035, with longer-term ambitions for heat networks to supply around 20% of heat by 2050. The plan also includes funding support for heat network development.

However, Simon argues that current policy does not fully reflect the scale of opportunity from large waste heat sources, continuing, “Current policy in the UK is nudging us towards a patchwork of small networks that might connect heat from a single source to a single housing development. If we continue down this road, we will end up with cherry-picking and small, private monopolies, rather than national infrastructure that can take advantage of the full scale of waste heat sources around the country.

“We know that investment in heat networks and thermal infrastructure consistently drives bills down over time and delivers reliable carbon savings, but these projects require long-term finance.
"Government-backed low-interest loans, pension fund investment, and institutions such as GB Energy all have a role to play in bridging this gap, as does proactivity from local governments, who can take vital first steps by joining forces to map out potential networks and start laying the groundwork with feasibility studies.”

Peter Maagøe Petersen, Director and Partner at Viegand Maagøe, adds, “We should see waste heat as a national opportunity. In addition to heating homes, heat highways can also reduce strain on the electricity grid and act as a large thermal battery, allowing renewables to keep operating even when usage is low and reducing reliance on imported fossil fuels.

"As this data shows, the UK has all the pieces it needs to start taking advantage of waste heat - it just needs to join them together. With denser cities than its Nordic neighbours and a wealth of waste heat on the horizon, the UK is a fantastic place for heat networks. It needs to start focusing on heat as much as it does electricity - not just for lower bills, but for future jobs and energy security.”

Datacloud Middle East comes to Dubai
Taking place in Dubai, UAE on 10–12 February 2026, Datacloud Middle East will highlight the region’s rapid emergence as a global data centre hub, driven by hyperscaler investment, sovereign AI strategies, and large-scale digital transformation. Over three days, the event will examine how the Middle East will build future-ready infrastructure to support AI at scale while advancing sustainability and innovation.

More than 50 industry experts will share insights on preparing for AI-driven workloads, with focused discussions on energy strategy, high-density design, and major developments such as Stargate UAE.

Driving data centre acceleration in the Middle East

The agenda will also address financing and delivery challenges, including capital deployment, modular construction, and international expansion. Sessions will explore operational excellence and sustainability, showcasing advanced cooling technologies, sovereign AI initiatives, and interconnection strategies that will enable resilient, high-performance connectivity across the region.

With over 500 attendees, Datacloud Middle East will offer a comprehensive view of how gigawatt-scale campuses, cutting-edge cooling, and strategic partnerships will shape the Middle East’s AI infrastructure leadership.

Click here to secure your place now.

DCNN to host webinar with CRH
Resilient data centre infrastructure isn’t built at commissioning; it’s built at conception.

DCNN and CRH, a US data centre construction specialist, are coming together for a powerful panel discussion exploring how early collaboration with building material providers and site engineers can shape smarter, stronger, and more sustainable data centres. The webinar, 'From the ground up: How future‑proofing data centres starts at the beginning of the project', is a must‑attend session for anyone involved in planning, designing, or delivering next‑generation facilities.

Date: 19 February 2026
Time: 3pm BST (10am EST)
Location: Online (Zoom)

Why join this webinar?

• Understand how early‑stage decisions influence long‑term resilience
• Hear directly from CRH’s global leaders in sustainability, innovation, and infrastructure delivery
• Gain insights across the full project lifecycle - from planning to execution
• Connect with experts shaping the future of data centre construction

Meet the panel

Moderator: Joe Peck, Assistant Editor, DCNN
Frans Vreeswijk, VP Customer Solutions Strategy, CRH Americas
Jenessa Miglietta, VP Sustainability & Innovation, CRH Americas
Thomas Donoghue, VP Industry Innovation, CRH Group

Attendees will gain insights into how local providers mitigate challenges and address critical issues, along with practical ideas for accelerating construction timelines. They will also learn strategies for expanding partnerships with essential suppliers.

Click here to register now and be part of the conversation that starts at the foundation.

Vertiv predicts data centre innovation trends
Data centre innovation is continuing to be shaped by macro forces and technology trends related to AI, according to a report from Vertiv, a global provider of critical digital infrastructure. The Vertiv Frontiers report, which draws on expertise from across the organisation, details the technology trends driving current and future innovation, from powering up for AI to digital twins and adaptive liquid cooling.

Scott Armul, Chief Product and Technology Officer at Vertiv, says, “The data centre industry is continuing to rapidly evolve how it designs, builds, operates, and services data centres in response to the density and speed of deployment demands of AI factories.

“We see cross-technology forces, including extreme densification, driving transformative trends such as higher voltage DC power architectures and advanced liquid cooling that are important to deliver the gigawatt scaling that is critical for AI innovation.

"On-site energy generation and digital twin technology are also expected to help advance the scale and speed of AI adoption.”

The Vertiv Frontiers report builds on and expands Vertiv’s previous annual Data Centre Trends predictions. The report identifies macro forces driving data centre innovation. These include:

• Extreme densification — accelerated by AI and HPC workloads
• Gigawatt scaling at speed — with data centres now being deployed rapidly and at unprecedented scale
• Data centre as a unit of compute — as the AI era requires facilities to be built and operated as a single system
• Silicon diversification — noting data centre infrastructure must adapt to an increasing range of chips and compute

The report details how these macro forces have in turn shaped five key trends impacting specific areas of the data centre landscape:

1. Powering up for AI

Most current data centres still rely on hybrid AC/DC power distribution from the grid to the IT racks, which includes three to four conversion stages and some inefficiencies.
This existing approach is under strain as power densities increase, largely driven by AI workloads. The shift to higher voltage DC architectures enables significant reductions in current, conductor size, and the number of conversion stages, while centralising power conversion at the room level. Hybrid AC and DC systems are pervasive, but as full DC standards and equipment mature, higher voltage DC is likely to become more prevalent as rack densities increase. On-site generation - and microgrids - will also drive adoption of higher voltage DC.

2. Distributed AI

The billions of dollars invested into AI data centres to support large language models (LLMs) to date have been aimed at supporting widespread adoption of AI tools by consumers and businesses. Vertiv believes AI is becoming increasingly critical to businesses, but how - and from where - those inference services are delivered will depend on the specific requirements and conditions of the organisation. While this will impact businesses of all types, highly regulated industries (such as finance, defence, and healthcare) may need to maintain private or hybrid AI environments via on-premise data centres, due to data residency, security, or latency requirements. Flexible, scalable high-density power and liquid cooling systems could enable capacity through new builds or retrofitting of existing facilities.

3. Energy autonomy accelerates

Short-term, on-site energy generation capacity has been essential for most standalone data centres for decades to support resiliency. However, widespread power availability challenges are creating conditions to adopt extended energy autonomy, especially for AI data centres. Investment in on-site power generation, via natural gas turbines and other technologies, does have several intrinsic benefits but is primarily driven by power availability challenges. Technology strategies such as 'Bring Your Own Power (and Cooling)' are likely to be part of ongoing energy autonomy plans.

4. Digital twin-driven design and operations

With increasingly dense AI workloads and more powerful GPUs also comes a demand to deploy these complex AI factories with speed. Using AI-based tools, data centres can be mapped and specified virtually - via digital twins - and the IT and critical digital infrastructure can be integrated, often as prefabricated modular designs, and deployed as units of compute, reducing time-to-token by up to 50%. This approach will be important to efficiently achieving the gigawatt-scale buildouts required for future AI advancements.

5. Adaptive, resilient liquid cooling

AI workloads and infrastructure have accelerated the adoption of liquid cooling, but, conversely, AI can also be used to further refine and optimise liquid cooling solutions. Liquid cooling has become mission-critical for a growing number of operators, but AI could provide ways to further enhance its capabilities. AI, in conjunction with additional monitoring and control systems, has the potential to make liquid cooling systems smarter and even more robust by predicting potential failures and effectively managing fluid and components. This trend should lead to increasing reliability and uptime for high-value hardware and associated data/workloads.

For more from Vertiv, click here.
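The "Powering up for AI" trend rests on basic electrical arithmetic: at a fixed power, raising the distribution voltage lowers current proportionally, and conductor (I²R) losses fall with the square of current. A minimal sketch with illustrative numbers of our own choosing, not figures from the Vertiv report:

```python
# Why higher distribution voltage helps at fixed power -- illustrative
# numbers only, not figures from the Vertiv Frontiers report.

def current_amps(power_w: float, voltage_v: float) -> float:
    """I = P / V for a simple DC feed."""
    return power_w / voltage_v

rack_power_w = 120_000.0  # assume a 120 kW AI rack
for volts in (48, 400, 800):
    amps = current_amps(rack_power_w, volts)
    # Conductor heating scales with amps**2, so halving the current
    # quarters the I^2*R loss in the same cable.
    print(f"{volts:>3} V -> {amps:,.0f} A")
```

The same arithmetic explains the smaller conductors: cable cross-section is sized for current, so a feed at 800V needs far less copper than the same power at 48V.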

Rethinking cooling, power, and design for AI
In this article for DCNN, Gordon Johnson, Senior CFD Manager at Subzero Engineering, shares his predictions for the data centre industry in 2026. He explains that surging rack densities and GPU power demands are pushing traditional air cooling beyond its limits, driving the industry towards hybrid cooling environments where airflow containment, liquid cooling, and intelligent controls operate as a single system. These trends point to the rise of a fundamentally different kind of data centre - one that understands its own demands and actively responds to them.

Predictions for data centres in 2026

By 2026, the data centre will no longer function as a static host for digital infrastructure; instead, it will behave as a dynamic, adaptive system - one that evolves in real time alongside the workloads it supports. The driving force behind this shift is AI, which is pushing power, cooling, and physical design beyond previously accepted limits. Rack densities that once seemed impossible - 80 to 120kW - are now commonplace. As GPUs push past 700W, the thermal cost of compute is redefining core engineering assumptions across the industry.

Traditional air-cooling strategies alone can no longer keep pace. However, the answer isn’t simply replacing air with liquid; what’s emerging instead is a hybrid environment, where airflow containment, liquid cooling, and predictive controls operate together as a single, coordinated system. As a result, the long-standing divide between “air-cooled” and “liquid-cooled” facilities is fading.

Even in high-performing direct-to-chip (DTC) environments, significant residual heat must still be managed and removed by air. Preventing hot and cold air from mixing becomes critical - not just for stability, but for efficiency. In high-density and HPC environments, controlled airflow is now essential to reducing energy consumption and maintaining predictable performance.
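To make the residual-heat point concrete, here is a rough liquid/air split for a DTC rack. The 75% capture fraction is an assumption of ours for illustration, not a figure from the article:

```python
# Rough liquid/air heat split for a direct-to-chip (DTC) rack.
# The capture fraction below is an assumption for illustration,
# not a figure from the article.

rack_kw = 100.0
dtc_capture = 0.75                        # assume DTC carries ~75% of heat to liquid
air_side_kw = rack_kw * (1 - dtc_capture) # remainder must be rejected to air
print(f"{air_side_kw:.0f} kW per rack still rejected to air")
```

Even a modest uncaptured fraction leaves tens of kilowatts per rack for the air path, which is why containment and controlled airflow remain essential alongside liquid cooling.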
By 2026, AI will also play a more active role in managing the thermodynamics of the data centre itself. Coolant distribution units (CDUs) are evolving beyond basic infrastructure into intelligent control points. By analysing workload fluctuations in real time, CDUs can adapt cooling delivery, protect sensitive IT equipment, and mitigate thermal events before they impact performance, making liquid cooling not only more reliable but also more secure and scalable.

This evolution is accelerating the divide between legacy data centres and a new generation of AI-focused facilities. Traditional data centres were built for consistent loads and flexible whitespace. AI infrastructure demands something different: modular design, fault-predictive monitoring, and engineering frameworks proven at hyperscale. To fully unlock AI’s potential, data centre design must evolve alongside it.

Immersion cooling sits at the far end of this transition. While DTC remains the preferred solution today and for the foreseeable future, immersion is increasingly viewed as the long-term endpoint for ultra-high-density computing. It addresses thermal challenges that DTC can only partially relieve, enabling facilities to remove much of their airflow infrastructure altogether. Adoption remains gradual due to cost, maintenance requirements, and operational disruption - to name a few - but the real question is no longer if immersion will arrive, but how prepared operators will be when it eventually does.

At the same time, the pace of AI growth is exposing the limitations of global supply chains. Slow manufacturing cycles and delayed engineering can no longer support the speed of deployment required. For example, Subzero Engineering’s new manufacturing and R&D facility in Vietnam (serving the APAC region) reflects a broader shift towards localised production and highly skilled regional workforces.
By investing in R&D, application engineering, and precision manufacturing, Subzero Engineering is building the capacity needed to support global demand while developing local expertise that strengthens the industry as a whole.

Taken together, these trends point to the rise of a fundamentally different kind of data centre - one that understands its own demands and actively responds to them. Cooling, airflow, energy, and structure are no longer separate considerations, but parts of a synchronised ecosystem. By 2026, data centres will become active contributors to the computing lifecycle itself. Operators that plan for adaptability today will be best positioned to lead in the next phase of the digital economy.

For more from Subzero Engineering, click here.


