Innovations in Data Center Power and Cooling Solutions


TES Power to deliver modular power for Spanish DC
TES Power, a provider of power distribution equipment and modular electrical rooms for data centres, has been selected to deliver 48 MW of modular power infrastructure for a new greenfield data centre development in Northern Spain, designed to support artificial intelligence workloads.

The facility is intended for high-density compute environments, where power resilience, scalability, and deployment speed are key considerations. Growing demand from AI training and inference continues to place pressure on operators to deliver robust electrical infrastructure without compromising availability or reliability.

Modular power skids for high-density AI environments

As part of the project, TES Power will design and manufacture 25 fully integrated 2.5 MW IT power skids. Each skid is a self-contained module incorporating cast resin transformers, LV switchgear, parallel UPS systems, end-of-life battery autonomy, CRAH-based cooling, and high-capacity busbar interconnections.

The skids are designed to provide continuous power to critical IT loads, with automatic transfer from mains supply to battery and generator systems in the event of a supply disruption, a requirement increasingly associated with AI-driven data centre operations.

Michael Beagan, Managing Director at TES Power, says, “AI is fundamentally changing the scale, speed, and resilience requirements of data centre power infrastructure. This project reflects exactly where the market is heading: larger, higher-density facilities that cannot tolerate risk or delay.

"By delivering fully integrated, factory-tested power skids, we’re helping our client accelerate deployment while maintaining the absolute reliability that AI workloads demand.”

The project uses off-site manufacture to reduce programme risk and enable parallel delivery, allowing electrical systems to be progressed while civil and building works continue on-site. Each skid will undergo full Factory Acceptance Testing prior to shipment, reducing commissioning risk and limiting on-site installation time.

Building Information Modelling is being used to digitally coordinate each skid with wider site services, supporting installation sequencing, clash detection, and longer-term operational planning. TES Power’s scope also includes project management, site services, and final commissioning.
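As a quick cross-check of the figures above: the article does not state the redundancy scheme, so the sketch below simply compares the installed skid capacity with the stated 48 MW requirement under the assumption that 48 MW refers to delivered IT load. The headroom figure is purely illustrative.

```python
# Arithmetic sketch: installed skid capacity vs. the stated 48 MW requirement.
# The redundancy scheme is not stated in the article, so the headroom figure
# below is illustrative only.

skid_rating_mw = 2.5      # rating of each IT power skid
skid_count = 25           # number of skids ordered
delivered_it_mw = 48.0    # stated project capacity (assumed to be IT load)

installed_mw = skid_rating_mw * skid_count          # 62.5 MW installed
headroom_mw = installed_mw - delivered_it_mw        # 14.5 MW margin
redundant_skids = headroom_mw / skid_rating_mw      # ~5.8 skids' worth

print(f"Installed: {installed_mw} MW, headroom: {headroom_mw} MW "
      f"(~{redundant_skids:.1f} spare skids)")
```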

Direct-to-chip liquid cooling market to reach $7.9bn by 2033
Rising computational intensity has placed unprecedented pressure on traditional air-based cooling systems. High-performance computing (HPC), artificial intelligence (AI), cloud data centres, and advanced semiconductor architectures generate dense heat loads that are increasingly difficult to manage using conventional thermal management approaches.

According to Research Intelo, a global market research and consulting firm, the global direct-to-chip liquid cooling market was valued at $1.3 billion (£951 million) in 2024 and is projected to reach $7.9 billion (£5.7 billion) by 2033, expanding at a CAGR of 22.3%. This strong growth trajectory underscores the growing reliance on liquid-based cooling technologies to support next-generation digital infrastructure.

Direct-to-chip liquid cooling has emerged as a practical and scalable response to these challenges, offering targeted heat removal directly from processors and other high-power components. By reducing thermal resistance and improving heat transfer efficiency, this approach supports higher rack densities while aligning with broader energy efficiency and sustainability objectives.

What exactly is direct-to-chip liquid cooling?

Direct-to-chip liquid cooling is a thermal management method in which a liquid coolant flows through cold plates mounted directly onto heat-generating components such as CPUs, GPUs, and accelerators. Heat is absorbed at the source and transported away through a closed-loop liquid system, minimising reliance on air circulation.

Compared to immersion cooling, which involves submerging entire systems in dielectric fluids, direct-to-chip solutions integrate more easily with existing data centre architectures. This balance between high cooling efficiency and operational compatibility has positioned the technology as a preferred option for gradual infrastructure upgrades and hybrid cooling deployments.

Which factors are driving market growth?

1. Technological innovation and automation

As processing power and server densities continue to rise, traditional air-cooling solutions are approaching their practical limits, increasing the risk of thermal throttling and hardware degradation. Direct-to-chip liquid cooling technologies provide a highly efficient alternative by enabling precise and consistent heat removal from critical components.

Ongoing innovation in cold plate design, advanced coolants, and system integration is further enhancing performance and reliability. The incorporation of smart sensors, real-time monitoring tools, and automated flow controls enables predictive maintenance and dynamic thermal optimisation. These advancements are making direct-to-chip liquid cooling more scalable and accessible across a wide range of computing environments, from hyperscale data centres to edge deployments.

2. Shifts in end-user demand accelerating market expansion

The rapid expansion of data-intensive applications, including AI, machine learning, blockchain, and the Internet of Things (IoT), has led to unprecedented heat generation within servers and computing clusters. Enterprises and data centre operators face increasing pressure to maintain high performance and uptime while controlling operational costs and energy consumption. Direct-to-chip liquid cooling addresses these demands by delivering superior thermal efficiency and reducing dependence on energy-intensive air conditioning systems.
The ability to support higher rack densities is particularly valuable in urban data centres and edge locations where space and power constraints are significant. As organisations prioritise sustainability and long-term infrastructure resilience, adoption of liquid cooling technologies is expected to expand across multiple industry verticals.

3. Regulatory support and government incentives

Regulatory frameworks aimed at reducing energy consumption and greenhouse gas emissions in data centres are creating favourable conditions for advanced cooling technologies. In regions such as Europe and North America, government incentives - including tax benefits, grants, and energy efficiency programmes - are encouraging the adoption of low-impact thermal management solutions.

In parallel, international standards for green data centre operations are pushing organisations to modernise their infrastructure and improve environmental performance. These regulatory and policy-driven factors are fostering innovation, reducing adoption barriers, and supporting sustained market growth.

What challenges are limiting wider adoption?

Despite strong growth prospects, the market faces several challenges that could impact adoption rates. Regulatory uncertainty related to safety standards, environmental compliance, and fluid handling requirements can complicate deployment decisions. Volatility in raw material prices, particularly for copper and specialised cooling fluids, may also influence production costs and pricing strategies.

Additionally, standardisation gaps and interoperability issues can pose challenges in complex or legacy IT environments. Addressing these constraints will require continued collaboration among technology providers, regulators, and end-users to establish clear guidelines, improve compatibility, and build confidence in long-term system reliability.

Which technologies are shaping product innovation?

Manufacturers are continually refining cold plate designs to improve heat transfer efficiency and compatibility with next-generation processors. Innovations such as microchannel architectures, optimised flow paths, and advanced alloys enable higher thermal performance while minimising pressure drop and energy consumption.

Customisation tailored to specific processor architectures and workload requirements has become increasingly common. This flexibility supports diverse applications across AI, HPC, cloud computing, and enterprise data centres, further strengthening the market’s value proposition.

What regional trends are emerging?

• North America dominates the global market, accounting for over 38% of total market share in 2024. This leadership is driven by a mature data centre ecosystem, advanced IT infrastructure, and early adoption of innovative cooling technologies. The strong presence of hyperscale data centre operators and cloud service providers, particularly in the US, has accelerated deployment across the region.

• Asia Pacific is projected to register the fastest growth, with a CAGR of 27.1% from 2025 to 2033. Rapid digital transformation, expanding cloud infrastructure, and increasing investments in hyperscale and edge data centres are fuelling demand. Countries such as China, India, Japan, and Singapore are witnessing rising adoption of AI and HPC across sectors including fintech, healthcare, and smart cities, further driving the need for advanced cooling solutions.

• Latin America, the Middle East, and Africa are experiencing gradual adoption of direct-to-chip liquid cooling technologies. While infrastructural limitations, budget constraints, and skills gaps have slowed deployment, growing awareness of long-term cost savings and sustainability benefits is steadily improving the market outlook in these regions.
What does the competitive landscape look like?

The market features a combination of established thermal management companies and specialised liquid cooling solution providers. Competition is primarily based on cooling efficiency, system reliability, ease of integration, and total cost of ownership.

Strategic partnerships between hardware manufacturers, data centre operators, and cooling technology providers are becoming increasingly common. Continuous investment in research and development remains critical, as cooling requirements evolve alongside processor design and workload intensity.

What is the future outlook for the direct-to-chip liquid cooling market?

The transition towards high-density computing shows no signs of slowing. Market forecasts indicate strong expansion, with the direct-to-chip liquid cooling market expected to grow from $1.3 billion (£951 million) in 2024 to $7.9 billion (£5.7 billion) by 2033, reflecting sustained demand across data centre, enterprise, and research environments.

As processors become more powerful and energy efficiency expectations rise, direct-to-chip liquid cooling is expected to shift from selective adoption to broader implementation. Continued standardisation, declining component costs, and increased operational familiarity are likely to accelerate this transition.

Conclusion: Is direct-to-chip liquid cooling becoming a standard rather than an option?

Direct-to-chip liquid cooling addresses some of the most critical challenges facing modern computing infrastructure. By enabling efficient heat management, supporting high-performance workloads, and aligning with sustainability and energy efficiency goals, the technology is redefining thermal management strategies.

As digital workloads intensify and infrastructure demands evolve, the market’s trajectory raises a defining question: Will direct-to-chip liquid cooling soon be regarded as a baseline requirement for advanced computing environments rather than a specialised enhancement?
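The headline figures quoted above can be cross-checked with the standard compound annual growth rate formula. The short sketch below is illustrative only and simply reproduces the implied growth rate from the quoted start and end values.

```python
# Sanity check of the quoted market figures using the standard CAGR formula:
# CAGR = (end_value / start_value) ** (1 / years) - 1

start_value = 1.3    # market size in 2024, $bn (as quoted)
end_value = 7.9      # projected market size in 2033, $bn (as quoted)
years = 2033 - 2024  # 9-year forecast window

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~22.2%, consistent with the quoted 22.3%
```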

Carrier launches CRAH for data centres
Carrier, a manufacturer of HVAC, refrigeration, and fire and security equipment, has introduced the AiroVision 39CV Computer Room Air Handler (CRAH), expanding its QuantumLeap portfolio with a precision cooling system designed for medium- to large-scale data centre environments.

Developed and manufactured in Europe, the AiroVision 39CV is intended to support energy efficiency, reliability, and shorter lead times, while meeting EU regulatory requirements. The unit offers a cooling capacity from 20kW to 250kW and is designed to operate with elevated chilled water temperatures. Carrier states that this approach can improve energy performance and contribute to lower power usage effectiveness (PUE) by enabling more efficient chiller operation and supporting free cooling strategies.

Factory-integrated design for simplified deployment

The AiroVision 39CV features a built-in controller for real-time monitoring, adaptive operation, and integration with building management systems. The control platform can be configured to suit specific operational requirements.

All components are factory-integrated to reduce on-site installation and commissioning work. Additional features, including an auto transfer switch and ultra-capacitors, are intended to support service continuity in critical environments.

Michel Grabon, EMEA Marketing and Market Verticals Director at Carrier, says, “The 39CV is a strategic addition to our QuantumLeap Solutions portfolio, designed to help data centre operators address today’s most pressing challenges: increasing thermal loads from higher computing densities, the need to reduce energy consumption to meet sustainability targets, and the pressure to deploy solutions quickly and efficiently.

"With its high-efficiency design, intelligent control system, and factory-integrated components, the 39CV helps operators to improve energy performance, optimise installation time, and build scalable infrastructures with confidence.”

For more from Carrier, click here.
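For readers less familiar with the PUE metric Carrier references: PUE is total facility energy divided by IT equipment energy, so any reduction in cooling energy shows up directly in the ratio. The sketch below uses hypothetical load figures, not Carrier data, purely to illustrate the relationship.

```python
# Illustration of how reduced cooling energy lowers PUE.
# PUE = total facility energy / IT equipment energy.
# Load figures below are hypothetical examples, not Carrier data.

it_load_kw = 1000.0        # IT equipment load (assumed)
other_overheads_kw = 80.0  # lighting, distribution losses, etc. (assumed)

for cooling_kw in (300.0, 240.0):  # e.g. before/after a 20% cooling energy saving
    pue = (it_load_kw + cooling_kw + other_overheads_kw) / it_load_kw
    print(f"Cooling {cooling_kw:.0f} kW -> PUE {pue:.2f}")
# Output: PUE 1.38 vs 1.32 for the same IT load.
```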

Aggreko: Power supply will decide AI winners and losers
Following the publication of a report that states up to a third of US data centres are expected to be fully off-grid by 2030, Aggreko, a British multinational temporary power generation company, is warning that the European market could follow the same trend, noting that the provision of power will be the deciding factor in which companies and markets draw the biggest benefit from the ongoing AI boom.

Bloom Energy’s 2026 power report, which looks specifically at developments in the US data centre market, also indicates that data centres are already beginning to move from areas where the grid is strained to those that can offer more ample supply. For instance, Texas’s data centre load is set to double by 2028, while traditionally leading areas like California and Oregon are set to lose 50% of their relative market share.

Billy Durie, Global Sector Head of Data Centres at Aggreko, believes that these findings are a sign of what is to come in Europe, stating, “I am not surprised by the findings of Bloom’s latest report. Securing a reliable power supply has long been the bugbear of data centre operators across the world, though increasing power demand driven by the development of AI is now taking this challenge to new extremes in the US and Europe.

“Depending on which source you look at, AI is set to increase power demand by as much as 150% by 2035, which is why operators are either relocating or taking power provision into their own hands in an attempt to find a permanent solution.

“In my experience, these trends tend to emerge earlier in the US than in Europe, though we can certainly expect this market to follow suit. While this is a global challenge, with older grid infrastructure and more severe strain in Europe, you could even argue that we will see an even more acute response on this side of the pond very soon.”

Moving off-grid in Europe

The effects of grid strain on European data centre development have been well documented. In Scotland, a recent study has indicated that AI data centres could consume up to 75% of the nation’s electricity, while in Switzerland, there are fears that Zurich’s grid no longer has capacity to deal with additional demand.

With comparatively limited options for relocation in Europe, many data centre operators have already turned to decentralised energy, though fully off-grid facilities are yet to be realised on a wider scale. Among the most popular solutions available on the market today are Stage V generators for short-term projects, while gas generators, microgrids, and renewables coupled with battery energy storage systems (BESS) are all in-demand options for energy provision and bridging power during upgrades. Small modular reactors (SMRs) also hold a place in the plans of many stakeholders, though they are not expected to be commercially available until the end of the next decade, requiring effective bridging solutions in the interim.

Billy concludes, “For European data centre operators who don’t have the same power of relocation that their US counterparts do, the ability to reduce dependence on the grid will be critical. Fortunately, there are already many solutions available to help them do this, and even more exciting technologies in development.

"Whatever option they choose to deploy, one thing is certain: the ability to source a stable power supply will dictate the winners and losers of the AI boom.”

For more from Aggreko, click here.

PFX highlights its SOLUTHERM cooling fluids
PFX Group, a Canadian manufacturer of automotive and industrial fluids, has showcased its SOLUTHERM heat transfer fluid range at the 2026 AHR Expo in Las Vegas, USA. The company presented its thermal management fluids at the Recochem booth during the event, which ran from 2 to 4 February.

The SOLUTHERM range is designed to support HVAC system performance, including traditional heating and cooling loops and liquid cooling applications in data centres. The company states that increasing power densities, changing regulatory requirements, and evolving system materials are driving greater demand for effective thermal management.

This is particularly relevant in data centres, where continuous operation and high-performance computing environments require reliable temperature control to support equipment performance and operational continuity.

The SOLUTHERM range includes glycol-based heat transfer fluids designed to support system efficiency, temperature stability, and corrosion protection. Some formulations are developed to support environmental targets, including biodegradable options and fluids aligned with LEED building requirements.

Jerome Dujoux, Vice President of Branding and Innovation at PFX Group, says, “HVAC and data centre cooling are no longer separate conversations.

"As computing power increases and buildings become more energy intensive, thermal management is becoming a connective tissue between digital infrastructure and the built environment. That’s the shift SOLUTHERM is designed for.”

Thermal fluids for HVAC and data centre cooling

Among the products highlighted at the exhibition were SOLUTHERM PG HD and EG HD heat transfer fluids, designed for HVAC applications in facilities including hospitals, universities, and other critical infrastructure environments.

The company also presented SOLUTHERM direct liquid cooling fluids, developed for servers and high-performance computing environments. These fluids are designed to operate across a wide temperature range, supporting data centre cooling requirements associated with increasing power density.

Additional products included SOLUTHERM PG HD LEED heat transfer fluids, which use bio-based propylene glycol and meet ASTM D8039 corrosion testing standards, and SOLUTHERM PG AL Safe heat transfer fluids, developed for systems containing aluminium components such as boilers, water heaters, and heat exchangers.

Tom Corrigan, Director of Research and Development at PFX Group, notes, “Heat transfer fluids are often treated as a commodity when, in reality, they influence energy efficiency, equipment lifespan, and system reliability more than most people realise.

"We see thermal management as a strategic decision and that’s why SOLUTHERM is engineered for specific applications and backed with ongoing support.”
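Glycol-based fluids trade some thermal capacity for freeze and corrosion protection, and the flow rate needed to move a given heat load follows from Q = ṁ · cp · ΔT. The sketch below compares plain water with an indicative 30% propylene glycol mix using approximate textbook property values; these are assumptions for illustration, not PFX or SOLUTHERM specifications.

```python
# Rough comparison of the flow rate needed to move the same heat load with
# plain water versus an indicative 30% propylene glycol mix, using
# Q = m_dot * cp * dT. Property values are approximate textbook figures,
# not PFX/SOLUTHERM specifications.

heat_load_kw = 100.0  # heat to be removed (assumed example)
delta_t_k = 6.0       # loop temperature rise (assumed example)

fluids = {
    "water":           {"cp_kj_per_kg_k": 4.18, "density_kg_per_m3": 998.0},
    "30% PG (approx)": {"cp_kj_per_kg_k": 3.85, "density_kg_per_m3": 1025.0},
}

for name, props in fluids.items():
    mass_flow = heat_load_kw / (props["cp_kj_per_kg_k"] * delta_t_k)  # kg/s
    vol_flow_l_s = mass_flow / props["density_kg_per_m3"] * 1000.0    # L/s
    print(f"{name}: {vol_flow_l_s:.1f} L/s for {heat_load_kw:.0f} kW at dT={delta_t_k:.0f} K")
```

The glycol mix needs a few percent more flow for the same duty, which is part of why formulation choice influences pump energy and system efficiency.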

LFB data centre division rebrands as Apx
The data centre division of LFB Group, a European HVAC and refrigeration company, has rebranded as Apx, reflecting a shift in focus towards increasing performance, project complexity, and delivery requirements across the sector.

The rebrand follows more than 20 years supporting server room and data centre operations across Europe. The company states the new identity reflects the growing role of cooling systems in high-density and AI-driven data centre environments.

Apx has been formed from LFB Group’s dedicated data centre team, previously operating under the Lennox name. The business intends to focus on closer collaboration across design, development, and project delivery, alongside increased emphasis on engineering validation and pre-commissioning processes.

Expansion of production and validation capability

The company has expanded its facilities in Lyon, France, to support increased engineering, manufacturing, and testing capabilities. Additional sites in Genas, Mions, Longvic, and Burgos form part of a multi-site production and validation network, supporting precision manufacturing, automated testing, and climatic performance validation.

Matt Evans, CEO at Apx Data Centre Solutions, says, “The industry’s dams have well and truly burst, with billion dollar projects and developments being announced almost every week. Keeping on top of this demand, though, has never been more important.

“Today, collaboration is everything. Operators are searching for partners who can offer them both flexibility and agility, enabling them to build for the future while reacting quickly to what's happening right now.

"That's where co-engineering becomes critical: by working with designers, contractors, and operators from day one, we can shape decisions together, anticipate challenges, and engineer solutions before they become problems.

“While no-one can predict what's around the corner, one thing is clear: performance has to be proven earlier. It's been one of our grounding principles since the start - the idea that pre-commissioning must be core to every product's DNA.

"By front-loading engineering, validating performance up-front and removing uncertainty before components reach sites, we give operators the head space - and time - to meet the demand.

“The direction of travel is clear: scale, capacity, and density. And I couldn't be more excited about where we've taken this business. The new Apx name marks our next chapter and it's one we're genuinely proud to be part of.”

Broader expansion

The company has recently introduced three products aimed at data centre cooling applications, including a computer room air handler, fan wall unit, and coolant distribution unit (CDU).

Apx operates within the wider LFB Group, which also includes HVAC manufacturer Redge and refrigeration specialist Friga-Bohn. The group has more than 60 years of experience in refrigeration and mechanical engineering.

The company is also expanding its workforce, with recruitment planned across project management, operations, controls, commissioning, and sales support roles in France, Germany, and the Netherlands. Apx expects its dedicated data centre team to grow to approximately 50 employees by 2027.

For more from LFB Group, click here.

Johnson Controls launches cooling reference design guides
Johnson Controls, a global provider of smart building technologies, has announced the launch of its Reference Design Guide Series for one-gigawatt AI data centres. Each guide in the series maps the full thermal chain, offering cooling architectures tailored to diverse compute densities, geographies, and elevations. The series begins with a blueprint for water-cooled chiller plants, with future guides to address air-cooled and absorption chiller solutions.

As AI transforms industries, the scale and complexity of data centre infrastructure is rapidly evolving. The ability to efficiently manage thermal loads at gigawatt scale is now a critical enabler for AI innovation, and the industry faces mounting pressure to deliver facilities that are not only high-performing, but also sustainable and future-ready.

Johnson Controls says its Reference Design Guide Series responds to this challenge by outlining how to achieve "industry-leading" energy and water efficiency (PUE and WUE) while maintaining flexibility to scale across diverse climates and operational requirements.

The guide outlines a complete thermal architecture supporting both liquid- and air-cooled IT loads through integrated computer room air handlers (CRAHs), fan coil walls, coolant distribution units (CDUs), and high-efficiency YORK centrifugal chillers. It provides sizing guidance for 220MW compute quadrants and defines temperature and operating conditions across all major facility loops, including Technology Cooling System (TCS) loops supporting next-generation GPUs.

Stated key outcomes

• Zero water consumption — A "fully water-free" heat rejection process using dry coolers, "reducing operational costs and advancing sustainability objectives."
• Future-ready thermal flexibility — High-temperature TCS loop readiness aims to ensure compatibility with forthcoming GPU architectures.
• Optimised high-density AI performance — Alignment with NVIDIA DSX reference architecture enables scalable deployment of 1-GW-class AI Factories.
• Energy-efficient operation — Elevated condenser water temperatures, bifurcated loops, and YORK high-lift chillers aim to deliver good PUE and improved annualised efficiency.

Austin Domenici, Vice President & General Manager at Johnson Controls Global Data Center Solutions, says, "AI Factories are production facilities - the places where intelligence is manufactured at an industrial scale.

"By supporting the NVIDIA DSX reference architecture and improving water and energy efficiency in the cooling process while maintaining high temperature loop compatibility, our Reference Design Guide equips customers to deploy gigawatt-scale AI infrastructure that is scalable, repeatable, resilient, and sustainable."

For more from Johnson Controls, click here.
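On the WUE metric the guide targets: water usage effectiveness is normally expressed as litres of site water consumed per kWh of IT energy, so a dry-cooler design that rejects heat without evaporative water use pushes the figure towards zero. The sketch below uses hypothetical numbers, including an assumed evaporative baseline, and is not Johnson Controls data.

```python
# Illustration of the WUE metric referenced in the guide:
# WUE = annual site water use (litres) / annual IT energy (kWh).
# Figures below are hypothetical and only show how water-free heat rejection
# affects the metric; they are not Johnson Controls data.

it_energy_kwh = 8760 * 200_000                # e.g. a 200 MW IT load over one year
evaporative_water_l = 1.8 * it_energy_kwh     # assumed evaporative baseline of 1.8 L/kWh
dry_cooler_water_l = 0.0                      # "water-free" heat rejection

for label, water_l in (("evaporative (assumed)", evaporative_water_l),
                       ("dry cooler", dry_cooler_water_l)):
    wue = water_l / it_energy_kwh
    print(f"{label}: WUE = {wue:.2f} L/kWh")
```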

AIP partners with Caterpillar for 2GW AI power
Developer of integrated AI power and compute infrastructure platforms American Intelligence & Power Corporation (AIP), construction equipment manufacturer Caterpillar, and equipment dealer Boyd CAT have formed a strategic partnership to support the development of AIP’s Monarch Compute Campus in West Virginia, USA.

The agreement includes a purchase arrangement for dedicated onsite power infrastructure, intended to support hyperscale and enterprise data centre requirements. The initial phase will provide up to 2GW of generation capacity, with power delivery beginning in 2026 and full capacity online during 2027.

Under the agreement, AIP has ordered 2GW of fast-response natural gas generator sets to support the first phase of Monarch. Deliveries are scheduled between September 2026 and August 2027. The generation systems will be supported by battery energy storage systems (BESS), intended to manage rapid load changes associated with AI workloads. The equipment is expected to be commissioned within months of delivery, supporting phased deployment at the site. Further expansion is planned in later phases.

Power platform for AI data centre workloads

The Monarch site is designed as a behind-the-meter power platform, with onsite generation intended to operate independently of incremental utility transmission or distribution infrastructure. According to the companies, the platform is intended to support rapid load variability, high availability, and predictable long-term operation for AI-driven data centre environments.

Daniel J Shapiro, CEO of AIP, comments, “This strategic alliance reflects a shared commitment to delivering reliable, scalable, and capital-efficient power solutions on an accelerated timeline.

"Our design is purpose-built for AI data centre operations, combining fast-response natural gas generation with battery energy storage to manage rapid load variability and deliver consistent power quality at scale.

"By leveraging our existing microgrid designation from the State of West Virginia, we can bring new capacity online quickly while supporting long-term grid reliability and resilience, without increasing rates or adding costs for existing utility customers.”

Melissa Busen, Senior Vice President of Electric Power at Caterpillar, adds, “This collaboration reflects Caterpillar and our dealers’ continued focus on supporting customers that require primary, continuous-duty power at scale through our broad energy portfolio.

"Projects like Monarch demonstrate how Caterpillar’s natural gas generation platforms are being deployed as core infrastructure for data centres and other power intensive applications where reliability, speed of deployment, and lifecycle performance are critical.”

Generator details

The project will use Caterpillar G3516 fast-response natural gas generator sets, selected for behind-the-meter data centre applications. The generators are designed to support rapid start, load-following operation, and continuous-duty performance. According to the companies, the systems can ramp from zero to full load in approximately seven seconds, supporting workloads with rapid load fluctuations.

The generators will operate on natural gas and incorporate emissions controls, including selective catalytic reduction, to support compliance with relevant air permitting requirements.

The Monarch platform has a stated long-term target of up to 8GW of planned generation capacity. With an existing West Virginia microgrid designation, the site is intended to operate without increasing rates or adding costs for existing utility customers.
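To put the stated seven-second ramp and the role of the BESS in context, the sketch below estimates the energy a battery would need to bridge during a sudden load step while the generators ramp up. The seven-second figure comes from the companies; the step size and the linear ramp profile are assumptions for illustration only.

```python
# Back-of-envelope estimate of the energy a BESS must supply while gensets
# ramp to pick up a sudden load step. The seven-second ramp is stated in the
# article; the load step size and linear ramp profile are assumptions.

load_step_mw = 100.0  # hypothetical sudden increase in campus load
ramp_time_s = 7.0     # stated genset zero-to-full-load ramp time

# With a linear ramp, the generators cover half the step on average during
# the ramp, so the battery bridges the other half.
bridged_energy_mj = 0.5 * load_step_mw * ramp_time_s  # MW * s = MJ
bridged_energy_kwh = bridged_energy_mj / 3.6

print(f"BESS must bridge ~{bridged_energy_kwh:.0f} kWh for a {load_step_mw:.0f} MW step")
# ~97 kWh: a modest amount of energy, but it must be delivered at very high power.
```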
In parallel, AIP and Caterpillar have also entered into a strategic alliance framework covering phased expansion planning, operations and maintenance strategy, lifecycle performance, and service and parts support. The agreement also includes vendor equipment financing through Caterpillar Financial, subject to standard terms and conditions and aligned with delivery phasing.

For more from Caterpillar, click here.

GridAI names new CEO
GridAI Technologies, a US provider of AI-driven software platforms for managing utility load and distributed energy resources, has appointed Marshall Chapin as CEO of its AI and energy infrastructure subsidiary, GridAI, following its acquisition of the company.

GridAI Technologies says the appointment is intended to support its expansion at the intersection of artificial intelligence and energy infrastructure, as demand increases from hyperscale AI data centre developments.

GridAI is developing grid and power-management software for large-scale AI data centre campuses. The platform is designed to coordinate distributed energy resources and manage power across multiple scales, with the aim of supporting more efficient and reliable operation as energy demand from AI workloads grows.

The company says its software supports functions such as market-based dispatch, peak-load reduction, and dynamic pricing in utility and commercial environments. It also monitors real-time inputs, including energy prices and weather, to support operational decision-making.

Platform focus and leadership background

New hyperscale campuses can consume hundreds of megawatts of power, requiring advanced systems to manage and optimise energy resources. GridAI says that its platform incorporates forecasting, bidding, and dynamic load-balancing to support reliability and efficiency across large installations. The company also says the platform can be used in residential and small business environments to manage behind-the-meter assets such as HVAC systems, appliances, and batteries.

Chapin brings experience across grid optimisation, energy transition, and distributed energy. Since March 2025, he has served as interim CEO of Amp X, an AI-driven grid-edge platform that is also a GridAI subsidiary.

Jason Sawyer, CEO of GridAI Technologies, comments, “Marshall’s proven ability to commercialise complex energy-software platforms and scale global go-to-market operations makes him the ideal leader for GridAI at this pivotal moment.

"With hyperscale AI campuses emerging as the defining infrastructure challenge of this decade, our power orchestration capabilities will be critical in helping hyperscalers deploy energy assets rapidly, profitably, and with enhanced reliability and resilience.”

Marshall says, “GridAI is uniquely positioned to help hyperscalers, utilities, and energy-asset owners orchestrate the massive amount of flexible power required for this transformation. I’m excited to build on this vision and lead GridAI through this extraordinary phase of growth.”
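To illustrate the kind of decision such a platform automates when it ingests real-time prices: the sketch below implements a deliberately simple price-threshold dispatch rule for a behind-the-meter battery. It is not GridAI's algorithm; the thresholds, sizes, and prices are hypothetical.

```python
# A deliberately simple price-threshold dispatch rule for a behind-the-meter
# battery, illustrating the kind of decision a dispatch platform automates.
# This is not GridAI's algorithm; thresholds and sizes are hypothetical.

from dataclasses import dataclass

@dataclass
class Battery:
    capacity_kwh: float
    soc_kwh: float
    max_power_kw: float

def dispatch(price_per_kwh: float, battery: Battery,
             charge_below: float = 0.08, discharge_above: float = 0.20) -> str:
    """Return a charge/discharge/hold decision for one interval."""
    if price_per_kwh <= charge_below and battery.soc_kwh < battery.capacity_kwh:
        return "charge"
    if price_per_kwh >= discharge_above and battery.soc_kwh > 0:
        return "discharge"
    return "hold"

bess = Battery(capacity_kwh=500.0, soc_kwh=250.0, max_power_kw=250.0)
for price in (0.05, 0.12, 0.31):  # example spot prices in $/kWh
    print(f"price ${price:.2f}/kWh -> {dispatch(price, bess)}")
```

A production system would layer forecasting, bidding, and constraint handling on top of a rule like this, which is the gap platforms such as GridAI aim to fill.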

Carrier launches CDU with 2°C ATD
Carrier, a manufacturer of HVAC, refrigeration, and fire and security equipment, has introduced a new coolant distribution unit (CDU), designed to support the growing use of liquid cooling in UK data centres while improving energy performance, resilience, and space utilisation.

The Carrier CDU is intended to help operators manage higher rack densities and increasing cooling demands. It is designed to support liquid-cooled IT environments and provide greater control over energy use and system uptime. As liquid cooling becomes more widely adopted to meet efficiency targets, the CDU enables deployment at scale through management of secondary coolant loops. Carrier says this can help reduce pumping energy and optimise heat removal across varying load conditions.

Thermal performance and system efficiency

The CDU uses modular heat exchangers that can deliver approach temperatures as low as 2°C, compared with more typical 4°C systems. According to Carrier, this can enable up to 15% chiller energy savings, allowing more electrical capacity to be allocated to IT loads rather than cooling.

Oliver Sanders, Data Centre Commercial Director UK&I, Carrier HVAC, notes, “Data centre leaders across the UK are focused on increasing capacity without increasing risk.

“This new Carrier CDU supports that goal by giving operators greater thermal stability, more flexibility in system design, and better visibility of cooling performance. The result is improved energy efficiency and smoother scalability as liquid cooling demand grows.”

The CDU is designed for use in mission-critical environments and includes redundant pumps and power supplies to support continued operation during maintenance or unexpected events. Intelligent controls manage fluid temperatures and flow rates in real time, with the aim of maintaining stable conditions for high-density servers while reducing energy consumption.

Integration, scalability, and monitoring

Carrier states that the CDU is designed for simplified integration into existing facilities, allowing liquid cooling to be introduced with minimal disruption. The product range includes multiple unit sizes from 1.3 to 5 MW, enabling operators to align cooling capacity with current and future high-density requirements.

The system is intended to support direct-to-chip cooling as well as mixed cooling environments. Carrier says it is designed to maintain stable performance under fluctuating workloads and higher ambient temperatures.

“Liquid cooling adoption is accelerating, and operators want systems that deliver both efficiency and certainty,” Oliver continues. “With this Carrier CDU, customers can integrate high-density workloads confidently, knowing their cooling system is designed to maximise uptime, efficiency, and long-term value.”

The CDU integrates with Carrier’s control platforms to support centralised monitoring, performance optimisation, and energy management. This is intended to help data centre teams track cooling trends, respond to load changes, and plan capacity more effectively.

The Carrier CDU forms part of Carrier’s QuantumLeap portfolio of data centre technologies.

For more from Carrier, click here.
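On the approach temperature figure Carrier quotes: the approach is the difference between the secondary (TCS) supply temperature leaving the CDU and the facility water entering it, so a tighter approach lets the facility loop run warmer while still meeting the same TCS setpoint. The sketch below uses hypothetical setpoints to show that relationship; the efficiency interpretation is the general chiller-lift and free-cooling argument, not a Carrier-published calculation.

```python
# Sketch of what a lower CDU approach temperature buys on the facility side.
# "Approach" here means the difference between the secondary (TCS) supply
# temperature leaving the CDU and the facility water supply entering it.
# Setpoints below are hypothetical examples.

tcs_supply_required_c = 32.0  # secondary loop supply temperature the racks need (assumed)

for approach_c in (4.0, 2.0):  # typical system vs. the quoted Carrier CDU approach
    facility_supply_c = tcs_supply_required_c - approach_c
    print(f"Approach {approach_c:.0f} K -> facility water can be supplied at "
          f"{facility_supply_c:.0f} C")
# A 2 K approach lets the facility loop run ~2 C warmer for the same TCS
# setpoint, extending free-cooling hours and reducing chiller lift - the
# general mechanism behind the chiller energy savings Carrier cites.
```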


