Data Centre Infrastructure News & Trends


CDUs: The brains of direct liquid cooling
As air cooling reaches its limits, with AI and HPC workloads exceeding 100 kW per rack, hybrid liquid cooling is becoming essential - and coolant distribution units (CDUs) could be the key enabler for next-generation, high-density data centre facilities. In this article for DCNN, Gordon Johnson, Senior CFD Manager at Subzero Engineering, discusses the importance of CDUs in direct liquid cooling:

Cooling and the future of data centres

Traditional air cooling has hit its limits, with rack power densities surpassing 100 kW due to the relentless growth of AI and high-performance computing (HPC) workloads. CPUs and GPUs already exceed 700–1,000 W per socket, and projections put that figure above 1,500 W in the coming years. Fans and heat sinks simply cannot handle these thermal loads at scale, so hybrid cooling strategies are becoming the only scalable, sustainable path forward.

Single-phase direct-to-chip (DTC) liquid cooling has emerged as the most practical and serviceable solution, delivering coolant directly to cold plates attached to processors and accelerators. However, direct liquid cooling (DLC) cannot be scaled safely or efficiently with plumbing alone. The key enabler is the coolant distribution unit (CDU), a system that integrates pumps, heat exchangers, sensors, and control logic into a coordinated package. CDUs are often mistaken for passive infrastructure, but they act as the brains of DLC, orchestrating isolation, stability, adaptability, and efficiency to make DTC viable at data centre scale. They serve as the intelligent control layer for the entire thermal management system.

Intelligent orchestration

CDUs do far more than transport fluid around the cooling system; they think, adapt, and protect the liquid-cooled portion of the hybrid cooling system. They maintain redundancy to ensure continuous operation, control flow and pressure using automated valves and variable speed pumps, filter particulates to protect cold plates, and keep coolant temperature above the dew point to prevent condensation. In doing so, they provide the precise, intelligent, and flexible coordination the complete thermal management system requires.

Because of their greater cooling capacity, CDUs are ideal for large HPC data centres. However, they can be complex, since they must be connected to the facility's chilled water supply or another heat rejection source to continuously provide liquid to the cold plates. CDUs typically fall into two categories:

• Liquid to Liquid (L2L): High-capacity CDUs well suited to large HPC facilities. Through heat exchangers, they move chip heat into an isolated chilled water loop such as the facility water system (FWS).

• Liquid to Air (L2A): Simpler units with lower cooling capacity, aimed at smaller deployments. Rather than relying on a chilled water supply or FWS, they use liquid-to-air heat exchangers and conventional HVAC systems to reject heat from the coolant returning from the cold plates into the surrounding data centre air.
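To make the control behaviour described under 'Intelligent orchestration' more concrete, below is a minimal, hypothetical sketch of the kind of logic a CDU controller runs on the secondary (IT-side) loop: sizing the target coolant flow from the rack heat load (Q = ṁ·cp·ΔT), trimming a variable-speed pump towards that flow, and holding the supply temperature above the room dew point. It is an illustration only, not Subzero Engineering's implementation; all names, setpoints, and numbers are invented.

```python
# Minimal, illustrative sketch of CDU secondary-loop control logic.
# Not a vendor implementation; names, setpoints, and numbers are hypothetical.

CP_WATER = 4186.0  # specific heat of water, J/(kg*K)

def target_flow_kg_s(rack_heat_w: float, delta_t_k: float) -> float:
    """Coolant flow needed to carry the rack heat load at a given loop delta-T (Q = m_dot * cp * dT)."""
    return rack_heat_w / (CP_WATER * delta_t_k)

def supply_setpoint_c(desired_c: float, dew_point_c: float, margin_k: float = 2.0) -> float:
    """Hold the supply temperature above the room dew point (plus a margin) to prevent condensation."""
    return max(desired_c, dew_point_c + margin_k)

def trim_pump_speed(current_pct: float, measured_flow: float, target_flow: float, gain: float = 5.0) -> float:
    """Proportional trim of a variable-speed pump towards the target flow, clamped to 20-100% speed."""
    error = target_flow - measured_flow
    return min(100.0, max(20.0, current_pct + gain * error))

# Example: a 100 kW rack cooled with a 10 K temperature rise across the cold plates
flow = target_flow_kg_s(100_000, 10.0)                 # ~2.4 kg/s (~143 L/min of water)
setpoint = supply_setpoint_c(30.0, dew_point_c=29.0)   # raised to 31.0 C to stay above the dew point
speed = trim_pump_speed(60.0, measured_flow=2.0, target_flow=flow)
print(f"flow target {flow:.2f} kg/s, supply setpoint {setpoint:.1f} C, pump speed {speed:.1f}%")
```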
Isolation: Safeguarding IT from facility water

Acting as the bridge between the FWS and the dedicated technology cooling system (TCS) - which supplies filtered liquid coolant directly to the chips via cold plates - CDUs isolate sensitive server cold plates from external variability, ensuring a safe and stable environment while constantly adjusting to shifting workloads.

One of the primary functions of L2L CDUs is to create a dual-loop architecture:

• Primary loop (facility side): Connects to building chilled water, district cooling, or dry coolers

• Secondary loop (IT side): Delivers conditioned coolant directly to IT racks

CDUs isolate the primary loop - which may carry contaminants, particulates, scaling agents, or chemical treatments such as biocides and corrosion inhibitors, chemistry that is incompatible with IT gear - from the secondary loop. As well as preventing corrosion and fouling, this isolation gives operators the safety margin they need for board-level confidence in liquid cooling. The CDU safeguards the integrity of the server cold plates by using a heat exchanger to separate the two environments and maintain a clean, controlled fluid in the IT loop. Because CDUs are fitted with variable speed pumps, automated valves, and sensors, they can dynamically adjust the flow rate and pressure of the TCS to ensure optimal cooling even as HPC workloads change.

Stability: Balancing thermal predictability with unpredictable loads

HPC and AI workloads are not only high power; they are also volatile. GPU-intensive training jobs or fluctuating CPU workloads can cause high-frequency power swings, which - without regulation - would translate into thermal instability. The CDU mitigates this risk by stabilising temperature, pressure, and flow across all racks and nodes, absorbing dynamic changes and delivering predictable thermal conditions regardless of how erratic the workload is. Sensor arrays keep the cooling loop within specification, variable speed pumps adjust flow to match demand, and heat exchangers are calibrated to maintain an established approach temperature.

Adaptability: Bridging facility constraints with IT requirements

The thermal architecture of data centres varies widely, with some using warm-water loops that operate at temperatures between 20 and 40°C. The CDU adapts to these variations by adjusting secondary loop conditions to align IT requirements with what the facility can provide. It uses mixing or bypass control to temper supply water, can alternate between tower-assisted cooling, free cooling, or dry cooler rejection depending on environmental conditions, and can adjust flow distribution amongst racks to match real-time demand. This adaptability makes DTC deployable in a variety of infrastructures without extensive facility renovations. It also allows liquid cooling to be phased in gradually - ideal for operators who need to make incremental upgrades.

Efficiency: Enabling sustainable scale

Beyond risk and reliability, CDUs unlock possibilities that make liquid cooling a sustainable option. By managing flow and temperature, they eliminate the inefficiencies of over-pumping and over-cooling. They also maximise scope for free cooling and for heat recovery integration, such as connecting to district heating networks and reclaiming waste heat as a revenue stream or sustainability benefit. This allows operators to lower PUE (Power Usage Effectiveness) to values below 1.1 while simultaneously reducing WUE (Water Usage Effectiveness) by minimising evaporative cooling - all while meeting the extreme thermal demands of AI and HPC workloads.
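As a rough, hypothetical illustration of the efficiency metrics mentioned above (not Subzero Engineering's data): PUE is total facility energy divided by IT energy, and WUE is litres of water consumed per kWh of IT energy, so reducing cooling overhead and evaporative water use pushes both figures down.

```python
# Hypothetical worked example of the PUE / WUE metrics discussed above (not Subzero's data).
it_power_mw = 50.0          # power delivered to IT equipment
total_power_mw = 54.0       # total facility draw, incl. cooling, pumps, and electrical losses

pue = total_power_mw / it_power_mw        # Power Usage Effectiveness
print(f"PUE = {pue:.2f}")                 # 1.08, i.e. below the 1.1 figure cited above

annual_it_kwh = it_power_mw * 1_000 * 8_760   # IT energy over a year
annual_water_litres = 5_000_000               # assumed residual water use after shifting to dry/free cooling
wue = annual_water_litres / annual_it_kwh     # Water Usage Effectiveness, L/kWh
print(f"WUE = {wue:.3f} L/kWh")
```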
CDUs as the thermal control plane

Viewed holistically, CDUs are far more than pumps and pipes; they are the control plane for thermal management, orchestrating safe isolation, dynamic stability, infrastructure adaptability, and operational efficiency. They translate unpredictable IT loads into manageable facility-side conditions, ensuring that single-phase DTC can be deployed at scale and enabling HPC and AI data centres to evolve towards multi-hundred-kilowatt racks without thermal failure.

Without CDUs, direct-to-chip cooling would be risky, uncoordinated, and inefficient. With CDUs, it becomes an intelligent and resilient architecture capable of supporting racks of 100 kW and above, as well as the escalating thermal demands of AI and HPC clusters. As workloads continue to climb and rack power densities surge, the industry’s ability to scale hinges on this intelligence. CDUs are not a supporting component; they are the enabler of single-phase DTC at scale and a cornerstone of the future data centre.

For more from Subzero Engineering, click here.

ZincFive introduces battery system designed for AI DCs
ZincFive, a producer of nickel-zinc (NiZn) battery-based solutions for immediate power applications, has announced a new nickel-zinc battery cabinet designed for data centres deploying artificial intelligence workloads. The system, named BC 2 AI, is positioned as an uninterruptible power supply (UPS) battery platform that can support both high-intensity AI power surges and conventional backup requirements.

The company says the new system builds on its existing nickel-zinc battery range and is engineered for environments where GPU clusters and rapid power fluctuations are driving changes in electrical infrastructure requirements. The battery technology is intended to respond to fast transient loads associated with AI training and inference, while also providing backup during power interruptions.

The system includes a battery management platform and nickel-zinc chemistry designed for frequent high-power discharge cycles. The company says this approach reduces reliance on upstream electrical capacity by managing dynamic loads at the UPS level.

Nickel-zinc battery design for transient load handling

As well as incorporating a new nickel-zinc battery cell designed for high-intensity usage and long service life, ZincFive highlights the product's compact footprint and field-upgradeable design. Nickel-zinc chemistry offers power density characteristics that allow the system to accommodate rapid load spikes without significant footprint expansion. ZincFive says competing approaches may require substantially more physical space to manage similar peak loads, particularly where AI applications can generate power demands above nominal UPS levels.

The system is targeted at hyperscale operators, colocation facilities, and UPS manufacturers integrating AI-ready backup capacity. The company also points to potential benefits related to infrastructure design, including reduced UPS sizing requirements and support for power-management strategies aimed at improving grid interaction.

Tod Higinbotham, Chief Executive Officer of ZincFive, says, “AI is transforming the very foundation of data centres, creating new challenges that legacy technologies cannot solve.

"With BC 2 AI, we are delivering a safe, sustainable, and future-ready power solution designed to handle the most demanding AI workloads while continuing to support traditional IT backup.

"This is a defining moment not just for ZincFive, but for the entire data centre industry as it adapts to the AI era.”

For more from ZincFive, click here.
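To illustrate the general idea of managing dynamic loads at the UPS level, here is a generic sketch of peak clipping - not ZincFive's control scheme, and the ratings and load trace are invented: if the battery supplies whatever portion of an AI load spike exceeds the nominal UPS rating, the upstream infrastructure only ever sees the nominal draw.

```python
# Generic illustration of clipping AI load transients at the UPS, so upstream
# infrastructure only sees the nominal rating. Not ZincFive's control scheme;
# the load profile and ratings below are invented for the example.
nominal_ups_kw = 1000.0

# A hypothetical load trace (kW) with GPU-driven spikes above nominal
load_trace_kw = [900, 950, 1300, 1250, 980, 1400, 1000, 920]

for load in load_trace_kw:
    from_battery = max(0.0, load - nominal_ups_kw)   # battery covers the excess above nominal
    from_upstream = load - from_battery              # upstream draw stays capped at nominal
    print(f"load {load:6.0f} kW -> upstream {from_upstream:6.0f} kW, battery {from_battery:5.0f} kW")
```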

Danfoss to showcase DC technologies at SuperComputing 2025
Danfoss, a Danish manufacturer of mobile hydraulic systems and components, plans to present its data centre cooling and power management technologies at SuperComputing 2025, taking place 18–20 November in St Louis, Missouri, USA. The company says it will demonstrate equipment designed to support reliability, energy performance, and liquid-cooling adoption in high-density computing environments.

Exhibits will include cooling components, liquid-cooling hardware, and motor-control equipment intended for use across data hall and plant-room applications. Danfoss notes that increasing data centre efficiency while maintaining uptime remains a central challenge for operators and developers, particularly as AI and high-performance computing drive increases in heat output and power usage.

Peter Bleday, Vice President, Specialty Business Unit and Data Center at Danfoss Power Solutions, says, “Danfoss technologies are trusted by the world’s leading cloud service providers and chip manufacturers with products installed in facilities around the world.

"We look forward to welcoming visitors to our booth to discuss how we can help them achieve smarter, more reliable, and more sustainable data centre operations.”

Cooling and power management focus

Danfoss will present liquid-cooling components including couplings, hoses, and valve assemblies designed to support leak-tested coolant distribution for rack-level and direct-to-chip cooling. A smart valve train system providing plug-and-play connection between piping and server racks will also be shown, designed to help optimise coolant flow and simplify installation.

The company's HVAC portfolio will also feature, including centrifugal compressor technology engineered for high efficiency and low noise in compact installations. Danfoss states that this equipment is designed to support data centre cooling requirements with long-term performance stability.

In addition, the manufacturer will highlight its power-conversion and motor-control portfolio, including variable-frequency drives and harmonic-mitigation equipment intended to support low-PUE facilities. The business says its liquid-cooled power-conversion modules are designed to support applications such as energy storage and fuel-cell systems within data centre environments.

Danfoss representatives will also discuss the company’s involvement in wider sustainability initiatives, including the Net Zero Innovation Hub for Data Centers, where industry stakeholders such as Google and Microsoft collaborate on energy-efficiency and decarbonisation strategies.

For more from Danfoss, click here.

Arista unveils 800G R4 series networking portfolio
Arista Networks, a provider of cloud and AI networking systems, has introduced a new generation of R4 Series networking platforms designed for artificial intelligence, large-scale data centre environments, and routed backbone deployment. The new systems are intended to support high-performance compute clusters, low-latency operation, and large routing backbones.

According to Arista, the portfolio is designed to provide high port density and support for 800-gigabit ethernet networks, with integrated security features for encrypted traffic.

Seamus Crehan, President of Crehan Research, says, “The 800GbE market is growing explosively with port shipments more than tripling sequentially in Q2 '25, and Arista led in branded market share for both 800GbE as well as overall data centre ethernet switching.

"These new 800GbE products from Arista are well-timed to capitalise on this segment’s projected 90% five-year average annual growth rate driven by AI, storage, and general compute workloads.”

Arista states that the new systems are designed to reduce operating costs and energy consumption in AI and data centre environments, while supporting routing technologies such as EVPN, VXLAN, MPLS, and SR/SRv6. The company also highlights engineering for predictable latency and packet-handling performance.

Focus on high-density 800G networking for AI

Arista says the platforms are aimed at workloads including AI training, inference, data centre interconnect, and large-scale routing. The new generation supports a range of 800-gigabit configurations, with capacity options designed for large-enterprise, cloud, and service provider networks.

Tim Smith, Senior Vice President of Technical Infrastructure Engineering and Operations at Magnite, comments, “When Magnite needed to build our next-generation data centre solution for AI and other advanced computing needs, Arista was the clear choice given their high quality offering.

"We've deployed a dense 800G spine using the modular Arista platform with both AI-optimised and high-scale multiservice routing linecards, providing an ideal foundation for the future.”

Arista notes that the systems include options for secure traffic handling with wirespeed encryption across all ports, including MACsec, IPsec, and VXLANsec. The highest-capacity chassis in the range supports hundreds of 800-gigabit ethernet ports in one system. Arista also introduces its HyperPort interface, which the company says can simplify scale-across network designs and reduce AI workload completion times compared with traditional multi-link configurations.

Supporting spine and leaf deployments

Arista has also expanded its fixed-form systems designed for use as either data centre spines or leaf switches. According to the company, the systems offer flexible port combinations for 800-gigabit and 100-gigabit ethernet environments. Leaf systems in the portfolio are positioned for direct server connectivity and mixed-workload data centres. These switches include copper and fibre options, uplink ports, and hardware-based encryption support.

Arista says its larger modular systems and several associated linecards are available now, alongside new fixed-format switches. Additional platforms and configurations are scheduled for release in early 2026.

For more from Arista, click here.

Sparkle's BlueMed submarine cable lands in Cyprus
Sparkle, the first international service provider in Italy, and Cyta, a provider of integrated electronic communications in Cyprus, have announced the arrival of the BlueMed submarine cable at Cyta’s Yeroskipos landing station in Cyprus.

BlueMed is Sparkle's new cable connecting Italy with several countries bordering the Mediterranean and up to Jordan. It is part of the Blue & Raman Submarine Cable Systems - built in partnership with Google and other operators - which stretch further into the Middle East, reaching Mumbai, India. With four fibre pairs and an initial design capacity of more than 25 Tbps per pair, the system delivers high-speed, low-latency, and scalable connectivity across Europe, the Middle East, and Africa.

A new PoP in Cyprus

With the branch to Yeroskipos station, Sparkle secures a key point of presence (PoP) in Cyprus, while Cyta gains access to the BlueMed submarine cable system, enhancing connectivity between Cyprus, Greece, and other Mediterranean countries. This initiative aims to enable Cyta to better meet the growing demand for advanced internet services and digital content in the country, while strengthening Cyprus’ role as a strategic digital hub - providing direct connectivity to Greece, to Western and Central Europe via Genoa and Marseille, and to the Levant through other neighbouring eastern Mediterranean countries.

Enrico Bagnasco, CEO of Sparkle, comments, “This is a new, important stage for BlueMed, a project that embodies our commitment to innovation and collaboration, linking Europe with Africa and the Middle East through state-of-the-art infrastructure.

“Today, we are also particularly glad to celebrate this milestone with our long-standing partner Cyta, confirming our shared commitment to strengthening connectivity in the Mediterranean basin.”

George Malikides, Chief Technology Officer at Cyta, adds, “The connection of Cyta to BlueMed will further enhance the Cyprus digital ecosystem and reinforce the island’s position as a key digital hub in the Eastern Mediterranean."

George Metzakis, Chief Commercial Officer at Cyta, concurs, stating, “The arrival of BlueMed in Cyprus marks a pivotal step forward in our ongoing mission to strengthen the island’s international connectivity.”

BlueMed has received funding from the European Commission under the Connecting Europe Facility (CEF) programme, highlighting its strategic relevance for improving digital resilience and connectivity across Europe and beyond.

Wolong introduces efficient motors for DC cooling applications
Wolong Electric America, a developer of industrial motor and drive technology, has introduced its Permanent Magnet Direct Drive (PMDD) motors, highlighting their ability to improve energy efficiency and reduce heat generation in high-demand environments such as data centres.

Designed to operate without belts or sheaves, PMDD motors use a direct drive system that reduces mechanical complexity and common failure points, improving reliability and minimising maintenance requirements. The approach should also reduce mechanical stress and radial load on bearings, contributing to a longer service life.

Lower heat output and energy use in data centres

At the core of each motor is a rare earth magnet design that delivers stronger magnetic fields in a compact form factor. This aims to enable higher efficiency and cooler operation than traditional induction motors - a key advantage in temperature-sensitive environments such as data centres, where controlling internal heat and power consumption is a constant priority.

The motors are operated via a variable frequency drive (VFD), enabling precise speed control, smooth acceleration and deceleration, and reduced electrical strain on supporting systems. A 4:1 turndown ratio allows the motors to maintain torque and efficiency - including at low speeds - supporting variable airflow demands within cooling systems.

Wolong Electric says its PMDD motors can be integrated directly into fan assemblies, reducing overall system losses and eliminating inrush current at startup. With reported efficiency improvements of around 20% over conventional induction motors, the design should contribute to measurable reductions in both energy use and waste heat.

Flexible configurations for critical environments

Wolong Electric says the PMDD motors can be tailored to specific applications, including data centre cooling systems, refineries, and OEM equipment such as heat exchangers. The motors are designed to operate at lower temperatures and with reduced maintenance demands, supporting long-term reliability and stable thermal management across facility operations. The company's design approach is intended to enable easy integration with OEM and packaged system configurations, helping operators meet efficiency goals while aligning with evolving energy standards.
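To put the variable-speed argument in context, here is a generic illustration using the standard fan affinity laws - not Wolong's own figures: fan shaft power scales roughly with the cube of speed, so running fans slower during low-demand periods cuts energy use far more than proportionally.

```python
# Generic fan affinity-law illustration (not Wolong's data): for a VFD-driven fan,
# airflow scales ~linearly with speed while shaft power scales ~with the cube of speed.
def relative_fan_power(speed_fraction: float) -> float:
    """Approximate shaft power as a fraction of full-speed power."""
    return speed_fraction ** 3

for pct in (100, 80, 50, 25):   # a 4:1 turndown ratio reaches 25% speed
    frac = pct / 100
    print(f"{pct:3d}% speed -> ~{relative_fan_power(frac) * 100:4.1f}% of full-speed fan power")
# 100% -> 100%, 80% -> ~51%, 50% -> ~12.5%, 25% -> ~1.6%
```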

AI is reshaping data centres - are you ready?
As AI workloads surge, data centres face unprecedented challenges in power quality, grid stability, and sustainability. Hitachi Energy is leading the charge with advanced HVDC and grid integration solutions designed to meet the evolving demands of hyperscale and colocation facilities.

The company's solutions empower operators to maintain uptime, reduce environmental impact, and comply with global grid codes, while transforming data centres into versatile energy contributors. From battery energy storage systems (BESS) to hydrogen generators and microgrids, Hitachi Energy helps you balance erratic AI loads and ensure seamless grid reconnection. With standardised, scalable designs and deep consulting expertise, the company tailors infrastructure to your unique needs - delivering performance, reliability, and sustainability at scale.

Data centres must evolve to stay resilient. Discover how to adapt to the future of data centre infrastructure by reading the full article here.

For more from Hitachi, click here.

Salute introduces DTC liquid cooling operations service
Salute, a US provider of data centre lifecycle services, has announced what it describes as the data centre industry’s first dedicated service for direct-to-chip (DTC) liquid cooling operations, launched at NVIDIA GTC in Washington DC, USA. The service is aimed at supporting the growing number of data centres built for artificial intelligence (AI) and high-performance computing (HPC) workloads.

Several data centre operators, including Applied Digital, Compass Datacenters, and SDC, have adopted Salute’s operational model for DTC liquid cooling across new and existing sites.

Managing operational risks in high-density environments

AI and HPC facilities operate at power densities considerably higher than those of traditional enterprise or cloud environments. In these facilities, heat must be managed directly at the chip level using liquid cooling technologies. Interruptions to coolant flow or system leaks can result in temperature fluctuations, equipment damage, or safety risks due to the proximity of electrical systems and liquids.

Erich Sanchack, Chief Executive Officer at Salute, says, “Salute has achieved a long list of industry firsts that have made us an indispensable partner for 80% of companies in the data centre industry.

"This first-of-its-kind DTC liquid cooling service is a major new milestone for our industry that solves complex operational challenges for every company making major investments in AI/HPC.”

Salute’s service aims to help operators establish and manage DTC liquid cooling systems safely and efficiently. It includes:

• Design and operational assessments to create tailored operational models for each facility

• Commissioning support to ensure systems are optimised for AI and HPC operations

• Access to a continuously updated library of best practices developed through collaborations with NVIDIA, CDU manufacturers, chemical suppliers, and other industry participants

• Operational documentation, including procedures for chemistry management, leak prevention, safety, and CDU oversight

• Training programmes for data centre staff through classroom, online, and lab-based sessions

• Optional operational support to help operators scale teams in line with AI and HPC demand

Industry comments

John Shultz, Chief Product Officer AI and Learning Officer for Salute, argues, “This service has already proven to be a game changer for the many data centre service providers who partnered with us as early adopters. By successfully mitigating the risks of DTC liquid cooling, Salute is enabling these companies to rapidly expand their AI/HPC operations to meet customer demand.

"These companies will rely on this service from Salute to support an estimated 260 MW of data centre capacity in the coming months and will expand that to an estimated 3,300 MW of additional data centre capacity by the end of 2027. This is an enormous validation of the impact of our service on their ability to scale. Now, other companies can benefit from this service to protect their investments in AI.”

Laura Laltrello, Chief Operating Officer at Applied Digital, notes, “High-density environments that utilise liquid cooling require an entirely new operational model, which is why we partnered with Salute to implement operational methodologies customised for our facilities and our customers’ needs.”

Walter Wang, Founder at SDC, adds, "Salute is making it possible for SDC’s customers to accelerate AI deployments with zero downtime, thanks to the proven operational model, real-world training, and other best practices."

GNM-IX launches new PoP in Bucharest
GNM-IX, a Dutch internet exchange and backbone operator, has announced the launch of a new point of presence (PoP) in Bucharest, Romania, located in NXDATA-1, one of the country’s major carrier-neutral data centres and a digital gateway to Southeast Europe. This marks GNM’s first presence in Romania, expanding the company’s distributed interconnection platform into another region of Europe.

The new Bucharest site provides access to GNM’s core connectivity services - internet exchange (GNM-IX), VLAN-based interconnections, and global IP Transit - enabling Romanian operators and content networks to exchange traffic locally and optimise international routes through GNM’s multi-terabit platform. For existing GNM members, the new PoP should strengthen connectivity across Southeast Europe, creating additional redundancy and more efficient routing options towards the Balkans, Central Europe, and beyond.

Alex Surkoff, Business Development Director at GNM, comments, “Our goal is to make high-performance connectivity available wherever networks grow.

“Expanding to Bucharest enhances our distributed architecture and gives both local and international operators new ways to interconnect - staying local in traffic exchange while remaining part of the global internet fabric.”

GNM-IX now has more than 10 Tbps of aggregated traffic and 650+ connected networks.

For more from GNM, click here.

Aligned, Calibrant to deploy on-site battery storage
Aligned Data Centers (Aligned), a technology infrastructure company, and Calibrant Energy (Calibrant), a US provider of on-site energy systems, have announced a new energy solution to address an urgent constraint facing the data centre industry: access to grid power. The announcement comes as the rapid growth of AI and advanced computing fuels unprecedented power demand, accelerating the need to increase load service and ensure reliable access to grid power.

Under the agreement, Calibrant will deliver a 31 MW / 62 MWh battery energy storage system (BESS) at Aligned’s data centre campus in the Pacific Northwest. The on-site system, planned to be operational in 2026, will enable the facility to come online and scale operations years earlier than would be possible with traditional utility upgrades. Calibrant and Aligned have been partnering with a regional utility in the Pacific Northwest since the start of negotiations to explore flexibility as a means to increase and accelerate interconnection.

Phil Martin, CEO at Calibrant, says, “This project flips the script on how data centres access power.

“Rather than the false choice between waiting years for system upgrades or having to go off grid entirely, we're working with leading data centre providers like Aligned to use distributed energy solutions to facilitate and accelerate grid interconnection.

“This innovative model allows large power users to take control of their energy future while being stewards of their community - ensuring growth objectives are met in a manner that supports grid reliability, has minimal environmental impact, and doesn't burden others with the costs."

A US first

This will be the first time in the US that a battery system is purpose-built to accelerate interconnection and bring a large-scale data centre online. Developed using Calibrant’s 'Path to Power' solution - a replicable, scalable approach that leverages on-site energy to overcome siting and capacity bottlenecks - the system functions as a grid-responsive asset, designed to discharge during peak demand, bolster grid reliability, and ensure uninterrupted service.

Calibrant and Aligned say they prioritised safety and the use of domestically manufactured components for this project, sourcing from suppliers that maintain US-based manufacturing and supply chains. The battery system meets international safety standards by incorporating multiple layers of protection, including safer battery chemistry, built-in fire mitigation measures, and remote 24/7 monitoring. Key equipment, including transformers, switchgear, and batteries, was manufactured and/or assembled in the United States.

Andrew Schaap, CEO at Aligned, comments, “This strategic project redefines how we grow in power-constrained markets.

"With this BESS, we’re converting our load from a potential grid liability into a dynamic grid asset, providing the regional utility with the tools needed to accelerate our ramp, and we’re doing it responsibly, without impacting ratepayers.

“We're proud to partner with Calibrant on a new market-defining initiative, directly addressing the industry's critical constraint of access to grid power. Their experience in serving large power users and critical facilities was instrumental in our ability to move quickly and efficiently.”

Calibrant and Aligned confirmed they are considering similar projects in other markets, signalling a repeatable approach for data centre operators facing interconnection challenges.

For more from Aligned, click here.
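As a brief back-of-the-envelope note on the headline rating above: the 31 MW / 62 MWh figures come from the announcement, while the dispatch scenario below is a hypothetical illustration, not Calibrant's control scheme. The ratio of energy to power means the system can sustain its full rated output for roughly two hours, which frames how much peak demand it can shave or how long it can carry the campus through a constrained interconnection window.

```python
# Back-of-the-envelope view of the announced 31 MW / 62 MWh BESS rating.
# The dispatch scenario below is a hypothetical illustration, not Calibrant's control scheme.
power_mw = 31.0
energy_mwh = 62.0

full_power_hours = energy_mwh / power_mw
print(f"Full-rated discharge duration: {full_power_hours:.1f} h")   # 2.0 h

# If the campus peak exceeded its grid allocation by, say, 20 MW for 2.5 hours,
# the BESS could cover it only if the required energy fits within its capacity.
shortfall_mw, duration_h = 20.0, 2.5
needed_mwh = shortfall_mw * duration_h
print(f"Energy needed: {needed_mwh} MWh -> {'covered' if needed_mwh <= energy_mwh else 'not covered'} by the BESS")
```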


