Data Centre Infrastructure News & Trends


ZutaCore unveils waterless end-of-row CDUs
ZutaCore, a developer of liquid cooling technology, has introduced a new family of waterless end-of-row (EOR) coolant distribution units (CDUs) designed for high-density artificial intelligence (AI) and high-performance computing (HPC) environments. The units are available in 1.2 MW and 2 MW configurations and form part of the company’s direct-to-chip, two-phase liquid cooling portfolio.

According to ZutaCore, the EOR CDU range is intended to support multiple server racks from a single unit while maintaining rack-level monitoring and control. The company states that this centralised design reduces duplicated infrastructure and enables waterless operation inside the white space, addressing energy-efficiency and sustainability requirements in modern data centres.

The cooling approach uses ZutaCore’s two-phase, direct-to-chip technology and a low-global-warming-potential dielectric fluid. Heat is rejected into the facility without water inside the server hall, aiming to reduce condensation and leak risk while improving thermal efficiency.

My Truong, Chief Technology Officer at ZutaCore, says, “AI data centres demand reliable, scalable thermal management that provides rapid insights to operate at full potential. Our new end-of-row CDU family gives operators the control, intelligence, and reliability required to scale sustainably.

"By integrating advanced cooling physics with modern RESTful APIs for remote monitoring and management, we’re enabling data centres to unlock new performance levels without compromising uptime or efficiency.”

Centralised cooling and deployment models

ZutaCore states that the systems are designed to support varying availability requirements, with hot-swappable components for continuous operation. Deployment options include a single-unit configuration for cost-effective scaling or an active-standby arrangement for enterprise environments that require higher redundancy levels. The company adds that the units offer encrypted connectivity and real-time monitoring through RESTful APIs, aimed at supporting operational visibility across multiple cooling units.

The EOR CDU platform is set to be used in EGIL Wings’ 15 MW AI Vault facility, as part of a combined approach to sustainable, high-density compute infrastructure.

Leland Sparks, President of EGIL Wings, claims, “ZutaCore’s end-of-row CDUs are exactly the kind of innovation needed to meet the energy and thermal challenges of AI-scale compute.

"By pairing ZutaCore’s waterless cooling with our sustainable power systems, we can deliver data centres that are faster to deploy, more energy-efficient, and ready for the global scale of AI.”

ZutaCore notes that its cooling technology has been deployed across more than forty global sites over the past four years, with users including Equinix, SoftBank, and the University of Münster. The company says it continues to expand through partnerships with organisations such as Mitsubishi Heavy Industries, Carrier, and ASRock Rack, including work on systems designed for next-generation AI servers.
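The RESTful monitoring interface mentioned above is not documented in this article, so the following Python sketch is purely illustrative: the endpoint path, field names, thresholds, and authentication scheme are assumptions, not ZutaCore's actual API. It simply shows the general pattern of polling a CDU telemetry endpoint over an encrypted connection and flagging out-of-range readings.

```python
# Illustrative only: endpoint paths, field names, and thresholds are assumptions,
# not ZutaCore's documented API.
import requests

CDU_BASE_URL = "https://cdu-eor-01.example.internal/api/v1"   # hypothetical host
API_TOKEN = "REDACTED"                                        # hypothetical credential

def poll_cdu_telemetry() -> dict:
    """Fetch one telemetry snapshot from a (hypothetical) CDU REST endpoint over HTTPS."""
    resp = requests.get(
        f"{CDU_BASE_URL}/telemetry",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()

def check_thresholds(sample: dict) -> list[str]:
    """Return human-readable warnings for readings outside assumed operating ranges."""
    warnings = []
    if sample.get("secondary_supply_temp_c", 0) > 45:   # assumed limit
        warnings.append("Secondary supply temperature high")
    if sample.get("pump_duty_pct", 0) > 90:             # assumed limit
        warnings.append("Pump running near full duty")
    return warnings

if __name__ == "__main__":
    snapshot = poll_cdu_telemetry()
    for warning in check_thresholds(snapshot):
        print(warning)
```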

National Grid upgrading Oxfordshire substation to connect DCs
National Grid, the UK’s largest electricity network operator, has started work to upgrade its Didcot substation in Oxfordshire, a key infrastructure development that will connect data centres and battery energy storage systems (BESS) to the electricity transmission network.

Situated next to the former Didcot A coal power station and just two miles from the UK’s first AI Growth Zone at Culham, the upgraded substation is aimed at supporting Britain’s digital ambitions while boosting grid capacity for future projects to plug in. Alongside new data centres, 650 MW of battery schemes will connect through the extended facility, completing a transition from ‘coal to clean’ at the site and helping to meet growing demand for flexible, zero-carbon power in the region.

Details of the upgrades

The upgrade will see the existing 400 kV outdoor air-insulated substation extended with three bays and three supergrid transformers, while a new 132 kV indoor gas-insulated switchgear (GIS) facility will be built next door, minimising the footprint of the development and its impact on the environment.

The new GIS facility will feature Hitachi Energy’s EconiQ switchgear technology, a sustainable alternative to sulphur hexafluoride (SF6) - a greenhouse gas commonly used as an electrical insulator - marking another step in National Grid’s commitment to reduce SF6 emissions from its network by 50% by 2030.

Linxon has been appointed as principal contractor to deliver the substation upgrades, building on its collaboration with National Grid on projects such as London Power Tunnels, which will see the UK’s first SF6-free GIS substation at Bengeworth Road. Work at Didcot comes just months after construction commenced on National Grid’s new Uxbridge Moor substation in neighbouring Buckinghamshire, which is due to connect over a dozen new data centres and which will also use SF6-free switchgear.

Peter Hancock, Project Director at National Grid Electricity Transmission, says, “Our Didcot substation extension marks another step forward in powering the UK’s digital future.

"By enabling new data centres and battery storage systems to connect to the grid, we’re supporting both the energy transition and the growth of the digital economy regionally and nationally.

“With SF6-free technology at its heart, this project reflects our commitment to building a cleaner, greener electricity network for generations to come.”

Angel Guijarro, Managing Director of Linxon Europe, adds, “Linxon’s appointment to this project is a testament to our strong partnership with National Grid and our shared vision for a sustainable energy future.

"We are committed to delivering a turnkey solution that will enhance the reliability and efficiency of Didcot substation, benefitting both local and national communities.”

Electricity demand in Britain is expected to double by 2050, with demand from data centres alone set to triple from 3% of the country’s total in 2025 to 9% by 2035.

For more from National Grid, click here.

Capacity Europe 2025 notes record attendance
The 24th edition of Capacity Europe, an event for global digital infrastructure and connectivity, wrapped up last week after three packed days and record-breaking attendance, cementing its status as a major event for the global connectivity ecosystem.

Hosted by techoraco, a provider of digital infrastructure events, at the InterContinental London – The O2, the event brought together over 3,600 senior leaders from more than 100 countries, marking the largest turnout in its history.

Discussions around a changing landscape

This year’s event showcased the industry’s rapid transformation, fuelled by advances in AI and the expansion of data infrastructure, which are reshaping the telecommunications and digital infrastructure landscape. With a focus on innovation, investment, and next-generation network strategy, Capacity Europe 2025 placed a spotlight firmly on the evolving digital ecosystem.

Opening the show was the keynote panel 'Disrupt to Lead: The New Telco Mindset'. The session explored how next-generation infrastructure is reshaping the telecom industry and driving operators toward new business models. Panellists examined the evolution from traditional carriers to "techcos", blending infrastructure with value-added services and platform-based offerings that deliver on-demand, flexible experiences for enterprise customers.

Moderated by Silvia Peneva, Managing Director of GLF & ITW at techoraco, the panel featured industry figures including Annette Murphy (CCO, Colt Technology Services), Enrico Bagnasco (CEO, Sparkle), Dimitrios Rizoulis (SVP Global Connectivity, T Wholesale), Fánan Henriques (Director of Product & International Business, Vodafone Business), Valerie Cussac (CEO, Orange Wholesale International), and Mohammed Al-Abbadi (Group Chief Carrier & Wholesale Officer, STC).

“Capacity Europe 2025 has been our most impactful year yet,” notes Liss Boot-Handford, Product Director at Capacity Media, techoraco. “The energy, collaboration, and level of deal-making we’ve seen this year demonstrates how vital this event is to the industry’s future.”

Key milestones

• More than 3,600 senior leaders from over 100 countries
• Upwards of 80 keynote sessions and panels
• Over 250 exhibitors and sponsors
• Record number of partnership deals signed on site

Across its keynote sessions and panel discussions, the event delivered insights from leaders across telecoms, cloud, edge, investment, and AI infrastructure. Highlights included explorations of the global dynamics redefining connectivity and the race to expand digital capacity to meet AI-driven demand.

As Capacity Europe looks ahead to its milestone 25th anniversary in 2026, this year’s success sets the stage for another chapter in the evolution of global digital connectivity. The 2026 edition will return to the InterContinental London – The O2, from 13-16 October 2026.

For more on Capacity Europe, click here.

Vertiv to supply Digital Realty's new Italy campus
Vertiv, a global provider of critical digital infrastructure, has announced it will supply infrastructure for ROM1, Digital Realty's first data centre in Italy, which has a planned capacity exceeding 3 MW. The agreement extends the suite of Vertiv systems and existing technology implementations with Digital Realty across European locations, including Paris, Madrid, and Amsterdam.

The ROM1 facility will feature advanced cooling and power infrastructure designed specifically for high-performance computing (HPC) environments. The technology implementation includes free-cooling systems that leverage Rome's climate conditions and energy-efficient power management systems designed to support high-density workloads.

The ROM1 project

The project will be implemented in phases, with the facility planned to begin operations in 2027. ROM1 will serve as a carrier-neutral facility optimised for AI and machine learning workloads. Its strategic location aims to establish Rome as a key digital hub, connecting to major Mediterranean cities through fibre networks and submarine cables.

Expansion plans also include connectivity in Barcelona, launching in early 2026. The new facility will support Digital Realty's growth in the Mediterranean, complementing its existing data centres in Marseille, Athens, and Crete.

Alessandro Talotta, Managing Director, Italy at Digital Realty, says, "Rome is emerging as a crucial gateway for digital infrastructure between Europe and the Mediterranean.

"The cutting-edge technologies selected for ROM1 will help establish it as a strategic AI hub, setting new benchmarks for energy efficiency and performance in high-performance computing."

Karsten Winther, President for EMEA at Vertiv, adds, "The growing adoption of AI applications is driving the need for more sophisticated data centre infrastructure.

"Our cooling and power solutions are built on decades of experience in supporting the most demanding applications and are designed for projected scalability while helping customers meet their efficiency objectives."

Technical details of ROM1 include AI-ready cooling systems that, Vertiv says, adapt to varying workload demands, as well as high-efficiency power distribution designed for intensive computing. The facility incorporates smart environmental controls that respond to real-time conditions and are integrated with alternative energy sources. The two companies say these technological choices reflect their joint focus on supporting advanced computing needs while minimising energy consumption and environmental impact.

For more from Vertiv, click here.

ABB, VoltaGrid to strengthen power stability for AI expansion
ABB, a multinational corporation specialising in industrial automation and electrification products, has secured three new orders from VoltaGrid, a Texas-based microgrid power generation company, to provide grid stabilisation technology supporting data centres across the United States. The projects will supply stable and reliable electricity to facilities currently under construction for AI infrastructure. The contracts were booked during the first three quarters of 2025; financial details were not disclosed.

Strengthening grid resilience for AI-driven demand

To meet the growing power needs of data centres, ABB will deliver a package of 27 synchronous condensers with flywheels and prefabricated eHouse units. These include power control, automation, and excitation systems integrated into the synchronous condenser panels. The units deliver instantaneous inertia, provide short-circuit strength during faults, and regulate network voltage by supplying or absorbing reactive power, helping maintain grid stability as electricity demand increases.

VoltaGrid will provide its natural-gas-fuelled power systems, designed for rapid deployment and to meet the specific power requirements of hyperscale data centres. Project delivery will begin in December 2025, with the first systems expected to be operational by April 2026.

Nathan Ough, CEO of VoltaGrid, claims, “ABB’s synchronous condensers are vital for meeting the energy demands of next-generation technologies like AI data centres, thanks to their proven ability to ensure grid stability and enhance overall power system resilience.

"Partnering with ABB allows us to accelerate project execution and meet the growing performance demands of AI operations.”

Supporting the evolving data centre power ecosystem

According to recent estimates, data centres accounted for around 1.5% of global electricity consumption in 2024, with the United States responsible for 45% of that total. By 2030, US data centre power use is projected to represent almost half of the country’s total growth in electricity demand. Analysts predict that, by the same year, the US will consume more electricity for data processing than for manufacturing energy-intensive materials such as aluminium, steel, cement, and chemicals.

As global demand for AI and cloud computing accelerates, ABB says it continues to provide electrification, automation, and digital technologies to "ensure secure and efficient energy systems for data centre operators."

Per Erik Holsten, President of ABB’s Energy Industries division, says, “ABB is proud to partner with VoltaGrid and support the evolving energy ecosystem in the US.

"Data centres have become critical national infrastructure and maintaining grid stability has moved from being optional to essential. Reliable, efficient power generation is key to enabling their continued growth.”

Kristina Carlquist, Head of the Synchronous Condenser Product Line at ABB’s Motion High Power division, adds, “Although synchronous condensers resemble large motors or generators, their real strength lies in grid support.

"As data centres expand, these machines are becoming increasingly important for providing inertia and short-circuit strength. For VoltaGrid, they will help ensure stable and resilient microgrid operation.”

For more from ABB, click here.
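The article does not give ratings for the VoltaGrid units, so the figures in the sketch below are illustrative assumptions only, not ABB or VoltaGrid data. It shows the standard relationship used to quantify the "instantaneous inertia" a synchronous condenser with a flywheel contributes: stored kinetic energy E = ½Jω² and inertia constant H = E / S, where J is the combined rotor-plus-flywheel moment of inertia, ω the mechanical angular speed, and S the machine's apparent power rating.

```python
import math

# Illustrative parameters only; not ABB/VoltaGrid ratings.
ROTOR_PLUS_FLYWHEEL_INERTIA_KG_M2 = 25_000   # assumed combined moment of inertia J
SPEED_RPM = 1_800                            # assumed synchronous speed (4-pole, 60 Hz)
RATING_MVA = 50                              # assumed apparent power rating S

def kinetic_energy_mj(inertia_kg_m2: float, speed_rpm: float) -> float:
    """Stored kinetic energy E = 0.5 * J * w^2, returned in megajoules."""
    omega = 2 * math.pi * speed_rpm / 60      # mechanical angular speed, rad/s
    return 0.5 * inertia_kg_m2 * omega**2 / 1e6

def inertia_constant_s(energy_mj: float, rating_mva: float) -> float:
    """Inertia constant H = E / S, in seconds (MJ per MVA)."""
    return energy_mj / rating_mva

if __name__ == "__main__":
    e = kinetic_energy_mj(ROTOR_PLUS_FLYWHEEL_INERTIA_KG_M2, SPEED_RPM)
    h = inertia_constant_s(e, RATING_MVA)
    print(f"Stored energy: {e:.0f} MJ, inertia constant H: {h:.1f} s")
```

With these assumed numbers the machine stores roughly 440 MJ, giving an inertia constant of around 9 seconds; the flywheel's contribution to J is what lifts H well above that of the bare machine, which is why flywheel-equipped condensers are attractive for grids serving volatile AI loads.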

CEL Critical Power opens $40m US manufacturing facility
CEL Critical Power, an Ireland-based manufacturer of switchgear and power equipment for the global data centre industry, has opened its first large-scale manufacturing facility in Williamsburg, Virginia, USA.

The new 400,000-square-foot (37,161-square-metre) plant, operational since June, represents a $40 million (£30.3 million) investment and a major step in CEL Critical Power’s international expansion. The facility increases the company’s manufacturing footprint in the United States - the world’s largest and fastest-growing market for AI and cloud infrastructure - while strengthening its ability to serve key data centre clients.

Strengthening US presence and creating skilled jobs

The Virginia expansion is intended to generate more than 250 skilled roles within the next year, rising to 500 by 2030 across engineering, manufacturing, quality assurance, logistics, and site services. The facility forms part of CEL Critical Power’s strategy to reach $500 million (£379.5 million) in annual revenue by 2030, supported by its existing operations in Ireland. Together, its global production capacity now exceeds 500,000 square feet (46,451 square metres).

A key component of the project is CEL Critical Power’s collaboration with the Virginia Economic Development Partnership (VEDP) and its registration with the US Department of Defense 'SkillBridge' programme. Through partnerships with Naval Station Norfolk and regional alliances, the initiative offers active-duty service members and military veterans opportunities to transition into civilian technical careers.

Manufacturing data centre power

CEL Critical Power designs and manufactures power distribution units (PDUs), remote power panels (RPPs), and switchgear systems for data centre environments. The company says its engineering approach emphasises reliability, efficiency, and short production cycles, developed through close collaboration with customers from concept through to deployment.

Niall McFadden, Group CEO of CEL Critical Power, comments, “The opening of our first US manufacturing facility marks an important step in CEL Critical Power’s growth strategy.

"We have listened closely to our customers and recognise their need for trusted partners who can scale alongside them in the United States. This $40 million investment reflects our long-term commitment to supporting those customers in a rapidly expanding market.

“Thanks to the support of the Virginia Economic Development Partnership, and our collaboration with the Department of Defense SkillBridge programme and Naval Station Norfolk, we plan to create up to 500 skilled jobs in Virginia by 2030."

Alan McCartney, Chief Sales Officer at CEL Critical Power, adds, “As a manufacturer of custom power systems for the global data centre industry, we are expanding our capacity to meet growing demand from customers investing in AI and cloud infrastructure.

“Our design-for-manufacture approach allows us to address specific technical and scheduling requirements and to deliver custom-built systems at scale. Our products are designed to support the next generation of AI workloads and the emerging Neo-Cloud sector.”

Graham Carr, Vice President of Sales, North America, CEL Critical Power, says, “CEL Critical Power is proud to invest in Virginia, working with VEDP, the DoD SkillBridge programme, and Naval Station Norfolk to create meaningful career pathways for veterans while supporting the state’s growing technology sector.
"Virginia offers a strong supply chain, excellent infrastructure, and a deep pool of technical talent.”

Oxford technology supplied to quantum-AI data centre
Oxford Instruments NanoScience, a UK provider of cryogenic systems for quantum computing and materials research, has supplied one of its advanced Cryofree dilution refrigerators, the ProteoxLX, to Oxford Quantum Circuits’ (OQC) newly launched Quantum-AI data centre in New York.

As the first facility designed to co-locate quantum computing and classical AI infrastructure at scale, the centre will use the ProteoxLX’s cryogenic capabilities to support OQC’s next-generation quantum processors, helping to advance the development of quantum-enabled AI applications.

Supporting quantum and AI integration

The announcement follows the opening of OQC’s New York-based Quantum-AI data centre, powered by NVIDIA CPU and GPU Superchips. The facility represents a major step towards practical, scalable quantum computing. Within OQC’s logical-era quantum computer, OQC GENESIS, the ProteoxLX provides the ultra-low-temperature environment needed to operate its 16 logical qubits, enabling over 1,000 quantum operations. This capability aims to drive innovation across finance, security, and data-intensive sectors, ranging from faster financial modelling and optimisation to quantum-assisted machine learning.

Oxford Instruments NanoScience says the collaboration highlights its expanding role in the global quantum computing landscape. OQC’s data centre installations across Europe, North America, and Asia contribute to a distributed quantum infrastructure, accelerating the application of superconducting qubit technologies for industries such as pharmaceuticals.

Matthew Martin, Managing Director at Oxford Instruments NanoScience, comments, “We’re proud to support OQC in building the infrastructure that will define the next generation of computing, and it is a privilege to collaborate with our longstanding partner on this project.

“Our ProteoxLX is designed to allow users to scale, enabling them to maximise qubit counts with a large sample space and capacity for coaxial lines, so we’re excited to see how OQC will harness this platform to accelerate breakthroughs in real-world application performance.”

Simon Phillips, CTO at OQC, adds, “Oxford Instruments NanoScience’s contribution supports the centre’s goal of creating a hybrid quantum-classical computing capability, without modifying the data centre environment or generating the need for additional cooling.”

About ProteoxLX

Designed for quantum computing applications, the ProteoxLX forms part of Oxford Instruments NanoScience’s latest dilution refrigerator range, all built on a modular framework for cross-compatibility and adaptable cryogenic setups. It offers a large sample space, extensive coaxial wiring capacity, low-vibration operation, and integrated signal conditioning for longer qubit coherence times.

The system delivers over 25 µW of cooling power at 20 mK, a base temperature below 7 mK, and several watts of cooling capacity at 4 K via twin pulse tubes. Two fully customisable secondary inserts enable optimised cold-electronics layouts and high-capacity I/O lines, interchangeable across the Proteox family.

Mission Critical Group acquires Leman Engineering
Mission Critical Group (MCG), a critical power infrastructure company, has announced the acquisition of Leman Engineering and Consulting (LEC), a US manufacturer of switchgear, control systems, and power distribution equipment.

The acquisition aims to strengthen MCG’s US Midwest manufacturing presence, expand its engineering capabilities, and establish LEC as MCG’s R&D hub for power generation engineering. The company says this hub focuses on switchgear innovation for onsite generation, prime power, generator paralleling, behind-the-meter systems, and microgrids.

An electric collaboration

With experience in prime power distribution design, precision manufacturing, and engineering, MCG hopes LEC will strengthen its unified power and energy infrastructure platform, delivering high-performance electrical systems across data centre, healthcare, industrial, oil and gas, and other critical markets. Its capabilities in UL 891 switchboards, UL 1558 switchgear, and medium-voltage equipment should expand MCG’s technical depth and manufacturing reach.

The establishment of MCG’s R&D hub is also intended to align with strategic university partnerships specialising in power engineering and to advance innovation and workforce development across the electrical manufacturing sector.

“We’re proud to join MCG and contribute to its continued growth and innovation,” comments Randy Leman, President of Leman Engineering and Consulting. Randy will now serve as Vice President of Engineering and Product Management for Electrical Equipment at MCG. The Indiana team will continue operating under existing leadership, maintaining its local presence and customer focus.

The addition of LEC marks MCG’s second acquisition in 2025 and its sixth in the past 24 months. With more than one million square feet (93,000 square metres) of US manufacturing space, MCG says it continues to expand its capacity to design, build, deliver, and service resilient power infrastructure US-wide.

For more from Mission Critical Group, click here.

Vertiv expands immersion liquid cooling portfolio
Vertiv, a global provider of critical digital infrastructure, has introduced the Vertiv CoolCenter Immersion cooling system, expanding its liquid cooling portfolio to support AI and high-performance computing (HPC) environments. The system is available now in Europe, the Middle East, and Africa (EMEA).

Immersion cooling submerges entire servers in a dielectric liquid, providing efficient and uniform heat removal across all components. This is particularly effective for systems where power densities and thermal loads exceed the limits of traditional air-cooling methods. Vertiv has designed its CoolCenter Immersion product as a "complete liquid-cooling architecture", aiming to enable reliable heat removal for dense compute ranging from 25 kW to 240 kW per system.

Sam Bainborough, EMEA Vice President of Thermal Business at Vertiv, explains, “Immersion cooling is playing an increasingly important role as AI and HPC deployments push thermal limits far beyond what conventional systems can handle.

“With the Vertiv CoolCenter Immersion, we’re applying decades of liquid-cooling expertise to deliver fully engineered systems that handle extreme heat densities safely and efficiently, giving operators a practical path to scale AI infrastructure without compromising reliability or serviceability.”

Product features

The Vertiv CoolCenter Immersion is available in multiple configurations, including self-contained and multi-tank options, with cooling capacities from 25 kW to 240 kW. Each system includes an internal or external liquid tank, coolant distribution unit (CDU), temperature sensors, variable-speed pumps, and fluid piping, all intended to deliver precise temperature control and consistent thermal performance.

Vertiv says that dual power supplies and redundant pumps provide high cooling availability, while integrated monitoring sensors, a nine-inch touchscreen, and building management system (BMS) connectivity simplify operation and system visibility. The system’s design also enables heat-reuse opportunities, supporting more efficient thermal management strategies across facilities and aligning with broader energy-efficiency objectives.

For more from Vertiv, click here.

CDUs: The brains of direct liquid cooling
As air cooling reaches its limits, with AI and HPC workloads exceeding 100 kW per rack, hybrid liquid cooling is becoming essential. To this end, coolant distribution units (CDUs) could be the key enabler for next-generation, high-density data centre facilities. In this article for DCNN, Gordon Johnson, Senior CFD Manager at Subzero Engineering, discusses the importance of CDUs in direct liquid cooling:

Cooling and the future of data centres

Traditional air cooling has hit its limits, with rack power densities surpassing 100 kW due to the relentless growth of AI and high-performance computing (HPC) workloads. Already, CPUs and GPUs exceed 700–1,000 W per socket, and projections suggest this will rise to over 1,500 W. Fans and heat sinks are simply unable to handle these thermal loads at scale. Hybrid cooling strategies are becoming the only scalable, sustainable path forward.

Single-phase direct-to-chip (DTC) liquid cooling has emerged as the most practical and serviceable solution, delivering coolant directly to cold plates attached to processors and accelerators. However, direct liquid cooling (DLC) cannot be scaled safely or efficiently with plumbing alone. The key enabler is the coolant distribution unit (CDU), a system that integrates pumps, heat exchangers, sensors, and control logic into a coordinated package.

CDUs are often mistaken for passive infrastructure. But far from being a passive subsystem, they act as the brains of DLC, orchestrating isolation, stability, adaptability, and efficiency to make DTC viable at data centre scale. They serve as the intelligent control layer for the entire thermal management system.

Intelligent orchestration

CDUs do far more than transport fluid around the cooling system; they think, adapt, and protect the liquid cooling portion of the hybrid cooling system. They maintain redundancy to ensure continuous operation, control flow and pressure using automated valves and variable-speed pumps, filter particulates to protect cold plates, and maintain coolant temperature above the dew point to prevent condensation. They contribute to the precise, intelligent, and flexible coordination of the complete thermal management system. A simplified sketch of this control logic appears after the list below.

Because of their greater cooling capacity, CDUs are ideal for large HPC data centres. However, because they must be connected to the facility's chilled water supply or another heat rejection source to continuously provide liquid to the cold plates for cooling, they can be complicated. CDUs typically fall into two categories:

• Liquid to Liquid (L2L): Large HPC facilities are well suited to these high-capacity CDUs. Through heat exchangers, they move chip heat into an isolated chilled water loop, such as the facility water system (FWS).

• Liquid to Air (L2A): For smaller deployments, L2A CDUs are simpler but have a lower cooling capacity. Rather than using a chilled water supply or FWS, they transfer heat from the coolant returning from the cold plates to the surrounding data centre air via liquid-to-air heat exchangers, relying on conventional HVAC systems for final heat rejection.
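To make the orchestration described above concrete, here is a minimal Python sketch of a CDU-style control iteration. It is illustrative only: the sensor names, setpoints, and gains are assumptions, not Subzero Engineering's implementation. It nudges a variable-speed pump toward a flow setpoint and keeps the secondary supply temperature setpoint above the room dew point, mirroring the condensation-prevention behaviour the article describes.

```python
from dataclasses import dataclass

# Illustrative only: names, setpoints, and gains are assumptions, not a vendor implementation.

@dataclass
class Readings:
    flow_lpm: float            # secondary-loop flow, litres per minute
    supply_temp_c: float       # coolant supply temperature to the cold plates
    room_dew_point_c: float    # measured dew point in the white space

def control_step(r: Readings, pump_speed_pct: float,
                 flow_setpoint_lpm: float = 300.0,
                 dew_point_margin_c: float = 2.0,
                 gain: float = 0.05) -> tuple[float, float]:
    """One iteration of a simplified CDU control loop.

    Returns (new_pump_speed_pct, supply_temp_setpoint_c).
    """
    # Proportional flow control: speed the pump up when flow is below setpoint.
    flow_error = flow_setpoint_lpm - r.flow_lpm
    new_speed = min(100.0, max(20.0, pump_speed_pct + gain * flow_error))

    # Condensation guard: keep the supply setpoint above dew point plus a margin.
    supply_setpoint = max(r.supply_temp_c, r.room_dew_point_c + dew_point_margin_c)

    return new_speed, supply_setpoint

# Example: flow is below setpoint, so pump speed rises; a 21 °C dew point forces a >= 23 °C setpoint.
speed, setpoint = control_step(
    Readings(flow_lpm=260.0, supply_temp_c=22.0, room_dew_point_c=21.0),
    pump_speed_pct=60.0,
)
print(f"Pump speed: {speed:.1f}%  Supply setpoint: {setpoint:.1f} °C")
```

A production CDU would layer redundancy management, filtration monitoring, and valve control on top of this kind of loop, but the core idea of continuously matching flow and temperature to conditions is the same.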
Isolation: Safeguarding IT from facility water

Acting as the bridge between the FWS and the dedicated technology cooling system (TCS), which provides filtered liquid coolant directly to the chips via cold plates, CDUs isolate sensitive server cold plates from external variability, ensuring a safe and stable environment while constantly adjusting to shifting workloads.

One of an L2L CDU's primary functions is to create a dual-loop architecture:

• Primary loop (facility side): Connects to building chilled water, district cooling, or dry coolers
• Secondary loop (IT side): Delivers conditioned coolant directly to IT racks

CDUs isolate the primary loop - which may carry contaminants, particulates, scaling agents, or chemical treatments such as biocides and corrosion inhibitors, chemistry that is incompatible with IT gear - from the secondary loop. As well as preventing corrosion and fouling, this isolation gives operators the safety margin they need for board-level confidence in liquid. The integrity of the server cold plates is safeguarded by the CDU, which uses a heat exchanger to separate the two environments and maintain a clean, controlled fluid in the IT loop.

Because CDUs are fitted with variable-speed pumps, automated valves, and sensors, they can dynamically adjust the flow rate and pressure of the TCS to ensure optimal cooling even when HPC workloads change.

Stability: Balancing thermal predictability with unpredictable loads

HPC and AI workloads are not only high power; they are also volatile. GPU-intensive training jobs or variable CPU workloads can cause high-frequency power swings, which - without regulation - would translate into thermal instability. The CDU mitigates this risk by stabilising temperature, pressure, and flow across all racks and nodes, absorbing dynamic changes and delivering predictable thermal conditions regardless of how erratic the workload is. Sensor arrays ensure the cooling loop remains within specification, while variable-speed pumps modify flow to match demand and heat exchangers are calibrated to maintain an established approach temperature.

Adaptability: Bridging facility constraints with IT requirements

The thermal architecture of data centres varies widely, with some using warm-water loops that operate at temperatures between 20 and 40°C. By adjusting secondary-loop conditions to align IT requirements with the facility, the CDU adapts to these variations. The CDU uses mixing or bypass control to temper supply water. It can alternate between tower-assisted cooling, free cooling, or dry cooler rejection depending on the environmental conditions, and it can adjust flow distribution amongst racks to align with real-time demand.

This adaptability makes DTC deployable in a variety of infrastructures without requiring extensive facility renovations. It also makes it possible for liquid cooling to be phased in gradually - ideal for operators who need to make incremental upgrades.

Efficiency: Enabling sustainable scale

Beyond risk and reliability, CDUs unlock possibilities that make liquid cooling a sustainable option. By managing flow and temperature, CDUs eliminate the inefficiencies of over-pumping and over-cooling. They also maximise scope for free cooling and heat recovery integration, such as connecting to district heating networks and reclaiming waste heat as a revenue stream or sustainability benefit. This allows operators to lower PUE (Power Usage Effectiveness) to values below 1.1 while simultaneously reducing WUE (Water Usage Effectiveness) by minimising evaporative cooling - all while meeting the extreme thermal demands of AI and HPC workloads.
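As a back-of-envelope illustration of the flow-matching described above, the sketch below uses the standard sensible-heat relation Q = ṁ · c_p · ΔT to estimate the secondary-loop flow a CDU must deliver for a given rack load. The rack powers, coolant properties (water values standing in for a typical coolant), and temperature rise are assumed figures, not vendor data.

```python
# Illustrative sizing sketch; rack power, coolant properties, and delta-T are assumptions.

WATER_SPECIFIC_HEAT_J_PER_KG_K = 4186.0   # c_p of water (glycol mixes differ slightly)
WATER_DENSITY_KG_PER_L = 0.997

def required_flow_lpm(rack_power_kw: float, delta_t_k: float) -> float:
    """Secondary-loop flow (litres/minute) needed to absorb rack_power_kw with a
    delta_t_k temperature rise, from Q = m_dot * c_p * delta_T."""
    m_dot_kg_s = (rack_power_kw * 1000.0) / (WATER_SPECIFIC_HEAT_J_PER_KG_K * delta_t_k)
    return m_dot_kg_s / WATER_DENSITY_KG_PER_L * 60.0

# Example: a 100 kW rack with a 10 K coolant temperature rise needs roughly 144 L/min;
# halving the load roughly halves the required flow, which is where variable-speed
# pumps avoid the over-pumping the article mentions.
for load_kw in (100.0, 50.0):
    print(f"{load_kw:.0f} kW rack -> {required_flow_lpm(load_kw, 10.0):.0f} L/min")
```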
CDUs as the thermal control plane

Viewed holistically, CDUs are far more than pumps and pipes; they are the thermal control plane of the cooling system, orchestrating safe isolation, dynamic stability, infrastructure adaptability, and operational efficiency. They translate unpredictable IT loads into manageable facility-side conditions, ensuring that single-phase DTC can be deployed at scale and enabling HPC and AI data centres to evolve into multi-hundred-kilowatt racks without thermal failure.

Without CDUs, direct-to-chip cooling would be risky, uncoordinated, and inefficient. With CDUs, it becomes an intelligent and resilient architecture capable of supporting 100 kW (and higher) racks as well as the escalating thermal demands of AI and HPC clusters.

As workloads continue to climb and rack power densities surge, the industry’s ability to scale hinges on this intelligence. CDUs are not a supporting component; they are the enabler of single-phase DTC at scale and a cornerstone of the future data centre.

For more from Subzero Engineering, click here.


