Liquid Cooling Technologies Driving Data Centre Efficiency


ZutaCore unveils waterless end-of-row CDUs
ZutaCore, a developer of liquid cooling technology, has introduced a new family of waterless end-of-row (EOR) coolant distribution units (CDUs) designed for high-density artificial intelligence (AI) and high-performance computing (HPC) environments. The units are available in 1.2 MW and 2 MW configurations and form part of the company’s direct-to-chip, two-phase liquid cooling portfolio.

According to ZutaCore, the EOR CDU range is intended to support multiple server racks from a single unit while maintaining rack-level monitoring and control. The company states that this centralised design reduces duplicated infrastructure and enables waterless operation inside the white space, addressing energy-efficiency and sustainability requirements in modern data centres.

The cooling approach uses ZutaCore’s two-phase, direct-to-chip technology and a low global warming potential dielectric fluid. Heat is rejected into the facility without water inside the server hall, aiming to reduce condensation and leak risk while improving thermal efficiency.

My Truong, Chief Technology Officer at ZutaCore, says, “AI data centres demand reliable, scalable thermal management that provides rapid insights to operate at full potential. Our new end-of-row CDU family gives operators the control, intelligence, and reliability required to scale sustainably.

"By integrating advanced cooling physics with modern RESTful APIs for remote monitoring and management, we’re enabling data centres to unlock new performance levels without compromising uptime or efficiency.”

Centralised cooling and deployment models

ZutaCore states that the systems are designed to support varying availability requirements, with hot-swappable components for continuous operation. Deployment options include a single-unit configuration for cost-effective scaling or an active-standby arrangement for enterprise environments that require higher redundancy levels.

The company adds that the units offer encrypted connectivity and real-time monitoring through RESTful APIs, aimed at supporting operational visibility across multiple cooling units.

The EOR CDU platform is set to be used in EGIL Wings’ 15 MW AI Vault facility, as part of a combined approach to sustainable, high-density compute infrastructure.

Leland Sparks, President of EGIL Wings, claims, “ZutaCore’s end-of-row CDUs are exactly the kind of innovation needed to meet the energy and thermal challenges of AI-scale compute.

"By pairing ZutaCore’s waterless cooling with our sustainable power systems, we can deliver data centres that are faster to deploy, more energy-efficient, and ready for the global scale of AI.”

ZutaCore notes that its cooling technology has been deployed across more than forty global sites over the past four years, with users including Equinix, SoftBank, and the University of Münster. The company says it continues to expand through partnerships with organisations such as Mitsubishi Heavy Industries, Carrier, and ASRock Rack, including work on systems designed for next-generation AI servers.
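For illustration, here is a minimal sketch of what polling such a RESTful telemetry interface over an encrypted connection might look like. The endpoint path, field names, and token handling are hypothetical assumptions for the sketch, not ZutaCore's published API.

```python
# Hypothetical sketch of polling a CDU's RESTful telemetry endpoint over HTTPS.
# The URL, field names, and auth scheme are illustrative assumptions only.
import requests

CDU_API = "https://cdu-eor-01.example.internal/api/v1/telemetry"  # hypothetical endpoint
TOKEN = "REPLACE_WITH_API_TOKEN"

def read_cdu_telemetry():
    """Fetch one telemetry snapshot from the CDU."""
    resp = requests.get(
        CDU_API,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=5,
    )
    resp.raise_for_status()
    data = resp.json()
    # Example fields an operator might watch; the names are assumptions.
    return {
        "supply_temp_c": data.get("supply_temp_c"),
        "return_temp_c": data.get("return_temp_c"),
        "pump_duty_pct": data.get("pump_duty_pct"),
        "alarms": data.get("alarms", []),
    }

if __name__ == "__main__":
    print(read_cdu_telemetry())
```

In practice, an operator would poll readings like these from each CDU on a schedule and feed them into existing DCIM or BMS tooling.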

Vertiv expands immersion liquid cooling portfolio
Vertiv, a global provider of critical digital infrastructure, has introduced the Vertiv CoolCenter Immersion cooling system, expanding its liquid cooling portfolio to support AI and high-performance computing (HPC) environments. The system is available now in Europe, the Middle East, and Africa (EMEA).

Immersion cooling submerges entire servers in a dielectric liquid, providing efficient and uniform heat removal across all components. This is particularly effective for systems where power densities and thermal loads exceed the limits of traditional air-cooling methods.

Vertiv has designed its CoolCenter Immersion product as a "complete liquid-cooling architecture", aiming to enable reliable heat removal for dense compute ranging from 25 kW to 240 kW per system.

Sam Bainborough, EMEA Vice President of Thermal Business at Vertiv, explains, “Immersion cooling is playing an increasingly important role as AI and HPC deployments push thermal limits far beyond what conventional systems can handle.

“With the Vertiv CoolCenter Immersion, we’re applying decades of liquid-cooling expertise to deliver fully engineered systems that handle extreme heat densities safely and efficiently, giving operators a practical path to scale AI infrastructure without compromising reliability or serviceability.”

Product features

The Vertiv CoolCenter Immersion is available in multiple configurations, including self-contained and multi-tank options, with cooling capacities from 25 kW to 240 kW. Each system includes an internal or external liquid tank, coolant distribution unit (CDU), temperature sensors, variable-speed pumps, and fluid piping, all intended to deliver precise temperature control and consistent thermal performance.

Vertiv says that dual power supplies and redundant pumps provide high cooling availability, while integrated monitoring sensors, a nine-inch touchscreen, and building management system (BMS) connectivity simplify operation and system visibility.

The system’s design also enables heat reuse opportunities, supporting more efficient thermal management strategies across facilities and aligning with broader energy-efficiency objectives.

For more from Vertiv, click here.
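As a rough sense of scale for the capacities quoted above, the sketch below estimates the dielectric-fluid circulation needed to absorb a given load using a simple sensible-heat balance. The fluid properties and the 10 K temperature rise are illustrative assumptions, not Vertiv specifications.

```python
# Back-of-envelope sizing of dielectric-fluid circulation for an immersion tank.
# Fluid properties and temperature rise are illustrative assumptions only.

def required_flow_lpm(load_kw, delta_t_k=10.0, cp_kj_per_kg_k=2.1, density_kg_per_l=0.85):
    """Volumetric flow (litres per minute) needed to absorb load_kw at a given temperature rise."""
    mass_flow_kg_s = load_kw / (cp_kj_per_kg_k * delta_t_k)   # kW = kJ/s
    volume_flow_l_s = mass_flow_kg_s / density_kg_per_l
    return volume_flow_l_s * 60.0

if __name__ == "__main__":
    for load in (25, 240):  # the capacity range quoted for the system
        print(f"{load} kW -> ~{required_flow_lpm(load):.0f} L/min of fluid circulation")
```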

CDUs: The brains of direct liquid cooling
As air cooling reaches its limits, with AI and HPC workloads exceeding 100 kW per rack, hybrid liquid cooling is becoming essential. To this end, coolant distribution units (CDUs) could be the key enabler for next-generation, high-density data centre facilities. In this article for DCNN, Gordon Johnson, Senior CFD Manager at Subzero Engineering, discusses the importance of CDUs in direct liquid cooling:

Cooling and the future of data centres

Traditional air cooling has hit its limits, with rack power densities surpassing 100 kW due to the relentless growth of AI and high-performance computing (HPC) workloads. CPUs and GPUs already exceed 700–1,000 W per socket, and projections suggest this will rise beyond 1,500 W. Fans and heat sinks simply cannot handle these thermal loads at scale. Hybrid cooling strategies are becoming the only scalable, sustainable path forward.

Single-phase direct-to-chip (DTC) liquid cooling has emerged as the most practical and serviceable solution, delivering coolant directly to cold plates attached to processors and accelerators. However, direct liquid cooling (DLC) cannot be scaled safely or efficiently with plumbing alone. The key enabler is the coolant distribution unit (CDU), a system that integrates pumps, heat exchangers, sensors, and control logic into a coordinated package.

CDUs are often mistaken for passive infrastructure. Far from being a passive subsystem, they act as the brains of DLC, orchestrating isolation, stability, adaptability, and efficiency to make DTC viable at data centre scale. They serve as the intelligent control layer for the entire thermal management system.

Intelligent orchestration

CDUs do far more than transport fluid around the cooling system; they think, adapt, and protect the liquid-cooled portion of the hybrid cooling system. They maintain redundancy to ensure continuous operation, control flow and pressure using automated valves and variable-speed pumps, filter particulates to protect cold plates, and keep coolant temperature above the dew point to prevent condensation. They contribute to the precise, intelligent, and flexible coordination of the complete thermal management system.

Because of their greater cooling capacity, CDUs are ideal for large HPC data centres. However, they can add complexity, as they must be connected to the facility's chilled water supply or another heat-rejection source to continuously provide liquid to the cold plates. CDUs typically fall into two categories:

• Liquid to Liquid (L2L): High-capacity L2L CDUs are well suited to large HPC facilities. Through heat exchangers, they transfer chip heat into a separate chilled water loop, such as the facility water system (FWS).
• Liquid to Air (L2A): L2A CDUs are simpler but offer lower cooling capacity, suiting smaller deployments. Rather than using a chilled water supply or FWS, they reject heat from the coolant returning from the cold plates into the surrounding data centre air through liquid-to-air heat exchangers, relying on the facility's conventional HVAC systems to remove it.
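To make the orchestration described above concrete, here is a deliberately simplified sketch of two of those behaviours: trimming a variable-speed pump towards a flow setpoint, and keeping the coolant supply setpoint above the room dew point. The gains, limits, and names are illustrative assumptions, not any vendor's control firmware.

```python
# Simplified illustration of two CDU control behaviours:
# (1) trim pump speed towards a flow setpoint,
# (2) keep the coolant supply setpoint a margin above the room dew point
#     to avoid condensation.
# Gains, limits, and names are illustrative assumptions only.

def adjust_pump_speed(current_speed_pct, flow_lpm, flow_setpoint_lpm, gain=0.05):
    """Proportional trim of variable-speed pump duty towards the flow setpoint."""
    error = flow_setpoint_lpm - flow_lpm
    new_speed = current_speed_pct + gain * error
    return max(20.0, min(100.0, new_speed))  # keep within a safe operating band

def safe_supply_setpoint(dew_point_c, requested_setpoint_c, margin_c=2.0):
    """Never let the coolant supply setpoint fall below dew point plus a safety margin."""
    return max(requested_setpoint_c, dew_point_c + margin_c)

if __name__ == "__main__":
    print(adjust_pump_speed(current_speed_pct=60, flow_lpm=180, flow_setpoint_lpm=200))
    print(safe_supply_setpoint(dew_point_c=21.0, requested_setpoint_c=20.0))
```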
Isolation: Safeguarding IT from facility water

Acting as the bridge between the FWS and the dedicated technology cooling system (TCS), which delivers filtered liquid coolant directly to the chips via cold plates, CDUs isolate sensitive server cold plates from external variability, ensuring a safe and stable environment while constantly adjusting to shifting workloads.

One of the primary functions of an L2L CDU is to create a dual-loop architecture:

• Primary loop (facility side): Connects to building chilled water, district cooling, or dry coolers
• Secondary loop (IT side): Delivers conditioned coolant directly to IT racks

CDUs isolate the primary loop (which may carry contaminants, particulates, scaling agents, or chemical treatments such as biocides and corrosion inhibitors - chemistry that is incompatible with IT gear) from the secondary loop. As well as preventing corrosion and fouling, this isolation gives operators the safety margin they need for board-level confidence in liquid. The integrity of the server cold plates is safeguarded by the CDU, which uses a heat exchanger to separate the two environments and maintain a clean, controlled fluid in the IT loop.

Because CDUs are fitted with variable-speed pumps, automated valves, and sensors, they can dynamically adjust the flow rate and pressure of the TCS to ensure optimal cooling even when HPC workloads change.

Stability: Balancing thermal predictability with unpredictable loads

HPC and AI workloads are not only high power; they are also volatile. GPU-intensive training jobs or variable CPU workloads can cause high-frequency power swings, which - without regulation - would translate into thermal instability. The CDU mitigates this risk by stabilising temperature, pressure, and flow across all racks and nodes, absorbing dynamic changes and delivering predictable thermal conditions regardless of how erratic the workload is. Sensor arrays ensure the cooling loop remains within specification, while variable-speed pumps adjust flow to match demand and heat exchangers are calibrated to maintain an established approach temperature.

Adaptability: Bridging facility constraints with IT requirements

The thermal architecture of data centres varies widely, with some using warm-water loops that operate at temperatures between 20 and 40°C. By adjusting secondary-loop conditions to align IT requirements with the facility, the CDU adapts to these variations. It uses mixing or bypass control to temper supply water, can alternate between tower-assisted cooling, free cooling, or dry-cooler rejection depending on environmental conditions, and can adjust flow distribution amongst racks to match real-time demand.

This adaptability makes DTC deployable in a variety of infrastructures without requiring extensive facility renovations. It also makes it possible for liquid cooling to be phased in gradually - ideal for operators who need to make incremental upgrades.

Efficiency: Enabling sustainable scale

Beyond risk and reliability, CDUs unlock possibilities that make liquid cooling a sustainable option. By managing flow and temperature, they eliminate the inefficiencies of over-pumping and over-cooling. They also maximise scope for free cooling and heat-recovery integration, such as connecting to district heating networks and reclaiming waste heat as a revenue stream or sustainability benefit. This allows operators to lower PUE (Power Usage Effectiveness) to values below 1.1 while simultaneously reducing WUE (Water Usage Effectiveness) by minimising evaporative cooling - all while meeting the extreme thermal demands of AI and HPC workloads.
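A quick arithmetic illustration of the efficiency point above, using the standard PUE definition (total facility power divided by IT power). The power figures are invented purely for illustration.

```python
# Simple PUE arithmetic. The power figures below are invented for illustration only.

def pue(it_power_kw, cooling_power_kw, other_overhead_kw):
    """Power Usage Effectiveness = total facility power / IT power."""
    total = it_power_kw + cooling_power_kw + other_overhead_kw
    return total / it_power_kw

if __name__ == "__main__":
    # Air-cooled example: heavy fan and chiller overhead.
    print(f"Air-cooled example:    PUE = {pue(1000, 350, 80):.2f}")
    # Liquid-cooled example: CDU pumps and dry coolers consume far less.
    print(f"Liquid-cooled example: PUE = {pue(1000, 60, 30):.2f}")
```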
CDUs as the thermal control plane

Viewed holistically, CDUs are far more than pumps and pipes; they are the control plane for thermal management, orchestrating safe isolation, dynamic stability, infrastructure adaptability, and operational efficiency. They translate unpredictable IT loads into manageable facility-side conditions, ensuring that single-phase DTC can be deployed at scale and enabling HPC and AI data centres to evolve towards multi-hundred-kilowatt racks without thermal failure.

Without CDUs, direct-to-chip cooling would be risky, uncoordinated, and inefficient. With CDUs, it becomes an intelligent and resilient architecture capable of supporting 100 kW (and higher) racks as well as the escalating thermal demands of AI and HPC clusters.

As workloads continue to climb and rack power densities surge, the industry’s ability to scale hinges on this intelligence. CDUs are not a supporting component; they are the enabler of single-phase DTC at scale and a cornerstone of the future data centre.

For more from Subzero Engineering, click here.

Salute introduces DTC liquid cooling operations service
Salute, a US provider of data centre lifecycle services, has announced what it describes as the data centre industry’s first dedicated service for direct-to-chip (DTC) liquid cooling operations, launched at NVIDIA GTC in Washington DC, USA. The service is aimed at supporting the growing number of data centres built for artificial intelligence (AI) and high-performance computing (HPC) workloads.

Several data centre operators, including Applied Digital, Compass Datacenters, and SDC, have adopted Salute’s operational model for DTC liquid cooling across new and existing sites.

Managing operational risks in high-density environments

AI and HPC facilities operate at power densities considerably higher than those of traditional enterprise or cloud environments. In these facilities, heat must be managed directly at the chip level using liquid cooling technologies. Interruptions to coolant flow or system leaks can result in temperature fluctuations, equipment damage, or safety risks due to the proximity of electrical systems and liquids.

Erich Sanchack, Chief Executive Officer at Salute, says, “Salute has achieved a long list of industry firsts that have made us an indispensable partner for 80% of companies in the data centre industry.

"This first-of-its-kind DTC liquid cooling service is a major new milestone for our industry that solves complex operational challenges for every company making major investments in AI/HPC.”

Salute’s service aims to help operators establish and manage DTC liquid cooling systems safely and efficiently. It includes:

• Design and operational assessments to create tailored operational models for each facility
• Commissioning support to ensure systems are optimised for AI and HPC operations
• Access to a continuously updated library of best practices developed through collaborations with NVIDIA, CDU manufacturers, chemical suppliers, and other industry participants
• Operational documentation, including procedures for chemistry management, leak prevention, safety, and CDU oversight
• Training programmes for data centre staff through classroom, online, and lab-based sessions
• Optional operational support to help operators scale teams in line with AI and HPC demand

Industry comments

John Shultz, Chief Product, AI, and Learning Officer for Salute, argues, “This service has already proven to be a game changer for the many data centre service providers who partnered with us as early adopters. By successfully mitigating the risks of DTC liquid cooling, Salute is enabling these companies to rapidly expand their AI/HPC operations to meet customer demand.

"These companies will rely on this service from Salute to support an estimated 260 MW of data centre capacity in the coming months and will expand that to an estimated 3,300 MW of additional data centre capacity by the end of 2027. This is an enormous validation of the impact of our service on their ability to scale. Now, other companies can benefit from this service to protect their investments in AI.”

Laura Laltrello, Chief Operating Officer at Applied Digital, notes, “High-density environments that utilise liquid cooling require an entirely new operational model, which is why we partnered with Salute to implement operational methodologies customised for our facilities and our customers’ needs.”

Walter Wang, Founder at SDC, adds, "Salute is making it possible for SDC’s customers to accelerate AI deployments with zero downtime, thanks to the proven operational model, real-world training, and other best practices."
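As a purely illustrative companion to the risks described above, the sketch below shows the kind of flow-interruption and leak alerting an operational runbook might codify. Thresholds, sensor names, and alert wording are hypothetical and are not drawn from Salute's methodology.

```python
# Minimal sketch of flow-interruption and leak alerting for a direct-to-chip
# cooling loop. Thresholds, sensor names, and alert text are hypothetical.

def check_dtc_loop(flow_lpm, min_flow_lpm, leak_sensors_wet):
    """Return a list of alert strings for one DTC cooling loop."""
    alerts = []
    if flow_lpm < min_flow_lpm:
        alerts.append(f"LOW FLOW: {flow_lpm:.0f} L/min is below the {min_flow_lpm:.0f} L/min minimum")
    for sensor_id in leak_sensors_wet:
        alerts.append(f"LEAK DETECTED at sensor {sensor_id} - isolate the affected rack")
    return alerts

if __name__ == "__main__":
    for alert in check_dtc_loop(flow_lpm=95, min_flow_lpm=120, leak_sensors_wet=["rack-07-tray"]):
        print(alert)
```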

Arteco introduces ECO coolants for data centres
Arteco, a Belgian manufacturer of heat transfer fluids and direct-to-chip coolants, has expanded its coolant portfolio with the launch of ECO versions of its ZITREC EC product line, designed for direct-to-chip liquid cooling in data centres. Each product is manufactured using renewable or recycled feedstocks with the aim of delivering a significantly reduced product carbon footprint compared with fossil-based equivalents, while maintaining the same thermal performance and reliability.

Addressing growing thermal challenges

As demand for high-performance computing rises, driven by artificial intelligence (AI) and other workloads, operators face increasing challenges in managing heat loads efficiently. Arteco’s ZITREC EC line was developed to support liquid cooling systems in data centres, enabling high thermal performance and energy efficiency.

The new ECO version incorporates base fluids, Propylene Glycol (PG) or Ethylene Glycol (EG), sourced from certified renewable or recycled materials. By moving away from virgin fossil-based resources, ECO products aim to help customers reduce scope 3 emissions without compromising quality.

Serge Lievens, Technology Manager at Arteco, says, “Our comprehensive life cycle assessment studies show that the biggest environmental impact of our coolants comes from fossil-based raw materials at the start of the value chain.

"By rethinking those building blocks and incorporating renewable and/or recycled raw materials, we are able to offer products with significantly lower climate impact, without compromising on high quality and performance standards.”

Certification and traceability

Arteco’s ECO coolants use a mass balance approach, ensuring that renewable and recycled feedstocks are integrated into production while maintaining full traceability. The process is certified under the International Sustainability and Carbon Certification (ISCC) PLUS standard.

Alexandre Moireau, General Manager at Arteco, says, “At Arteco, we firmly believe the future of cooling must be sustainable. Our sustainability strategy focuses on climate action, smart use of resources, and care for people and communities.

"This new family of ECO coolants is a natural extension of that commitment. Sustainability for us is a continuous journey, one where we keep researching, innovating, and collaborating to create better, cleaner cooling solutions.”

For more from Arteco, click here.
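As a toy illustration of how mass-balance bookkeeping works in general under schemes such as ISCC PLUS, the sketch below allocates renewable-feedstock credits across product volumes. The figures and batch names are invented and do not represent Arteco's accounting.

```python
# Toy illustration of mass-balance bookkeeping: renewable or recycled feedstock
# entering a shared production process is tracked as credits and allocated to
# specific product volumes. All figures are invented for illustration.

def allocate_mass_balance(renewable_feedstock_t, products):
    """Allocate renewable-feedstock credits (tonnes) to products until credits run out."""
    remaining = renewable_feedstock_t
    allocation = {}
    for name, volume_t in products.items():
        allocated = min(volume_t, remaining)
        allocation[name] = allocated
        remaining -= allocated
    return allocation, remaining

if __name__ == "__main__":
    credits, unused = allocate_mass_balance(
        renewable_feedstock_t=100,
        products={"ECO batch A": 60, "ECO batch B": 60},
    )
    print(credits, "| unused credits:", unused)
```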

GF partners with NTT Facilities on sustainable cooling
GF, a provider of piping systems for data centre cooling systems, has announced a collaboration with NTT Facilities in Japan to support the development of sustainable cooling technologies for data centres.

The partnership involves GF supplying pre-insulated piping for the 'Products Engineering Hub for Data Center Cooling', a testbed and demonstration site operated by NTT Facilities. The hub opened in April 2025 and is designed to accelerate the move from traditional chiller-based systems to alternatives such as direct liquid cooling.

Focus on energy-efficient cooling

GF is providing its pre-insulated piping for the facility’s water loop. The system is designed to support efficient thermal management, reduce energy losses, and protect against corrosion. GF’s offering covers cooling infrastructure from the facility level through to rack-level systems.

Wolfgang Dornfeld, President Business Unit APAC at GF, says, “Our partnership with NTT Facilities reflects our commitment to working side by side with customers to build smarter, more sustainable data centre infrastructure.

"Cooling is a critical factor in AI-ready data centres, and our polymer-based systems ensure performance, reliability, and energy efficiency exactly where it matters most.”

While the current project focuses on water transport within the facility, GF says it also offers a wider range of polymer-based systems for cooling networks. The company notes that these systems are designed to help improve uptime, increase reliability, and support sustainability targets.

For more from GF, click here.

Castrol and Airsys partner on liquid cooling
Castrol, a British multinational lubricants company owned by BP, and Airsys, a provider of data centre cooling systems, have formed a partnership to advance liquid cooling technologies for data centres, aiming to meet the growing demands of next-generation computing and AI applications.

The collaboration will see the companies integrate their technologies, co-develop new products, and promote greater industry awareness of liquid cooling. A recent milestone includes Castrol’s Immersion Cooling Fluid DC 20 being certified as fully compatible with Airsys’ LiquidRack systems.

Addressing rising cooling demands in the AI era

The partnership comes as traditional air-cooling methods struggle to keep pace with increasing power densities. Research from McKinsey indicates that average rack power density has more than doubled in two years to 17 kW. Large Language Models (LLMs) such as ChatGPT can consume over 80 kW per rack, while Nvidia’s latest chips may require up to 120 kW per rack.

Castrol’s own research found that 74% of data centre professionals believe liquid cooling is now the only viable option to handle these requirements. Without effective cooling, systems face risks of overheating, failure, and equipment damage.

Industry expertise and collaboration

By combining Castrol’s 125 years of expertise in fluid formulation with Airsys’ 30 years of cooling system development, the companies aim to accelerate the adoption of liquid cooling. Airsys has also developed spray cooling technology designed to address the thermal bottleneck of AI whilst reducing reliance on mechanical cooling.

"Liquid cooling is no longer just an emerging trend; it’s a strategic priority for the future of thermal management," says Matthew Thompson, Managing Director at Airsys United Kingdom. "At Airsys, we’ve built a legacy in air cooling over decades, supporting critical infrastructure with reliable, high-performance systems. This foundation has enabled us to evolve and lead in liquid cooling innovation.

"Our collaboration with Castrol combines our engineering depth with their expertise in advanced thermal fluids, enabling us to deliver next-generation solutions that meet the demands of high-density, high-efficiency environments."

Peter Huang, Global President, Data Centre and Thermal Management at Castrol, adds, "Castrol has been working closely with Airsys for two years, and we’re excited to continue working together as we look to accelerate the adoption of liquid cooling technology and to help the industry support the AI boom.

"We have been co-engineering solutions with OEMs for decades, and the partnership with Airsys is another example of how Castrol leans into technical problems and supports its customers and partners in delivering optimal outcomes."

For more from Castrol, click here.

Danfoss expands UQDB coupling range
Danfoss Power Solutions, a Danish manufacturer of mobile hydraulic systems and components, has completed its Universal Quick Disconnect Blind-Mate (UQDB) coupling portfolio with the launch of the -08 size Hansen connector. The couplings are designed for direct connection between servers and manifolds in data centre liquid cooling systems and are fully compliant with Open Compute Project (OCP) standards.

Higher flow capacity

The new -08 size joins the existing -02, -04, and -06 sizes, covering body sizes from 1/8-inch to 1/2-inch. The company says it delivers a 29% higher flow rate than OCP requirements, supporting greater cooling efficiency for high-density racks.

Danfoss UQDB couplings feature a flat-face dry break design to prevent spillage and a push-to-connect system with self-alignment to simplify installation in tight spaces. The plug half can move radially to align with the socket half, allowing compensation of up to +/-1 millimetre for easier in-rack connections.

Developed in collaboration with the OCP community, the couplings meet existing standards and are designed to comply with the forthcoming OCP V2 specification for liquid cooling, expected in October. All UQDB units undergo helium-leak testing for reliability and include QR codes on both plug and socket halves for easier identification and tracking.

https://www.youtube.com/watch?v=yjt9_O0Wb1o

Chinmay Kulkarni, Data Centre Product Manager at Danfoss Power Solutions, says, “Our now-complete UQDB range expands our robust portfolio of thermal management products for data centres, enabling us to provide comprehensive systems and delivering on our 'one partner, every solution' promise.

"When paired with our flexible, kink-free hoses, we deliver a complete direct-to-chip cooling solution that sets the standard for efficiency and reliability.”

The couplings are manufactured from 303 stainless steel for corrosion resistance, with EPDM seals for fluid compatibility. They feature ORB terminal ends for secure, leak-free connections, an operating temperature range of 10°C to 65°C, and a minimum working pressure of 10 bar.

For more from Danfoss, click here.

Aligned collaborates with Divcon for its Advanced Cooling Lab
Divcon Controls, a US provider of building management systems and electrical power monitoring systems for data centres and mission-critical facilities, has announced its role in the development of Aligned Data Centers’ new Advanced Cooling Lab in Phoenix, Arizona, where it served as the controls vendor for the facility. The project marks a step forward in the design and management of liquid-cooled infrastructure to support artificial intelligence (AI) and high-performance computing (HPC) workloads.

The lab, which opened recently, is dedicated to testing advanced cooling methods for GPUs and AI accelerators. It reflects a growing need for more efficient thermal management as data centre density increases and energy requirements rise.

“As the data centre landscape rapidly evolves to accommodate the immense power and cooling requirements of AI and HPC workloads, the complexities of managing mechanical systems in these environments are escalating,” says Kevin Timmons, Chief Executive Officer of Divcon Controls. “Our involvement with Aligned Data Centers' Advanced Cooling Lab has provided us with invaluable experience at the forefront of liquid cooling technology.

"We are actively developing and deploying advanced control platforms that not only optimise the performance of these systems, but also contribute to long-term sustainability goals.”

Divcon Controls has focused its work on managing the added complexity that liquid cooling introduces, including:

• Precise thermal control: Managing coolant flow, temperature, and pressure to improve heat transfer efficiency and reduce energy consumption.
• Integration with mechanical infrastructure: Coordinating the performance of pumps, heat exchangers, cooling distribution units (CDUs), and leak detection systems within a unified control framework.
• Load-responsive adjustment: Adapting cooling output in real time to match fluctuating IT loads, helping maintain optimal operating conditions while limiting energy waste.
• Visibility and predictive maintenance: Providing operators with detailed analytics on system performance to support proactive maintenance and longer equipment life.
• Support for hybrid environments: Enabling the transition between air and liquid cooling within the same facility, as demonstrated at Aligned’s lab.

As more facilities transition to hybrid and liquid-cooled architectures, Divcon Controls says it is focusing on delivering control systems that enhance energy efficiency, reduce operational risk, and ensure long-term asset reliability.

“Our collaboration with industry leaders like Aligned Data Centers underscores our commitment to innovation and to solving the most pressing challenges in data centre infrastructure,” continues Kevin. “Divcon Controls is proud to be at the forefront of developing intelligent control platforms for the next generation of high-density, AI-powered data centres, with environmental performance front of mind.”

For more from Aligned, click here.
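To illustrate the 'load-responsive adjustment' idea described above in the simplest possible terms, the sketch below maps reported IT load to a clamped coolant flow setpoint. The mapping, limits, and names are illustrative assumptions, not Divcon Controls' platform.

```python
# Simplified sketch of load-responsive cooling adjustment: scale a coolant flow
# setpoint with reported IT load, clamped to pump limits. The mapping, limits,
# and names are illustrative assumptions only.

def flow_setpoint_lpm(it_load_kw, design_load_kw=100.0,
                      min_flow_lpm=40.0, max_flow_lpm=200.0):
    """Scale the flow setpoint linearly with IT load, clamped to pump limits."""
    fraction = max(0.0, min(1.0, it_load_kw / design_load_kw))
    return min_flow_lpm + fraction * (max_flow_lpm - min_flow_lpm)

if __name__ == "__main__":
    for load in (20, 60, 100, 130):
        print(f"{load} kW IT load -> flow setpoint {flow_setpoint_lpm(load):.0f} L/min")
```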

GF introduces first-ever full-polymer Quick Connect Valve
The Quick Connect Valve 700 is a patented dual-ball valve engineered with the aim of enhancing safety, efficiency, and sustainability in Direct Liquid Cooling (DLC) systems. The company claims that, "as the first all-polymer quick connect valve for data centre applications, it is 50% lighter and facilitates 25% better flow compared to conventional metal alternatives while offering easy, ergonomic handling."

As demand for high-density, high-performance computing grows, DLC is reportedly becoming a preferred method for thermal management in next-generation data centres. By transporting coolant directly to the chip, DLC can improve thermal efficiency compared to air-based methods. A key component in this setup is the Technology Cooling System (TCS), which distributes coolant from the Cooling Distribution Unit (CDU) to individual server racks.

To support this shift, GF, a manufacturer of plastic piping systems, valves, and fittings, has developed the Quick Connect Valve 700, a fully plastic, dual-ball valve engineered for direct-to-chip liquid cooling environments. Positioned at the interface between the main distribution system and server racks, the valve is intended to enable fast, safe, and durable coolant connections in mission-critical settings.

Built on GF’s Ball Valve 546 Pro platform, the Quick Connect Valve 700 features two identical PVDF valve halves and a patented dual-interlock lever. This mechanism ensures the valve can only be decoupled when both sides are securely closed, aiming to minimise fluid loss and maximise operator safety during maintenance. Its two-handed operation further reduces the risk of accidental disconnection.

The valve is made of corrosion-free polymer, which is over 50% lighter than metal alternatives, and carries a UL 94 V-0 flammability rating. Combined with the ergonomic design of its interlocking mechanism, the valve is, according to the company, easy to handle during installation and operation. At the same time, its full-bore design seeks to ensure an optimal flow profile and a pressure drop reduced by up to 25% compared to similar metal products. The product has a minimum expected service life of 25 years.

“With the Quick Connect Valve 700, we’ve created a critical link in the DLC cooling loop that’s not only lighter and safer, but more efficient,” claims Charles Freda, Global Head of Data Centers at GF. “This innovation builds on our long-standing thermoplastic expertise to help operators achieve the performance and uptime their mission-critical environments demand.”

The Quick Connect Valve 700 has been assessed with an Environmental Product Declaration (EPD) according to ISO 14025 and EN 15804. An EPD is a standardised, third-party verified document that uses quantified data from Life Cycle Assessments to estimate environmental impacts and enable comparisons between similar products.

For more from GF, click here.
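As a back-of-envelope illustration of why a lower pressure drop matters, the sketch below applies the generic hydraulic relation (pumping power = flow rate x pressure drop / pump efficiency) to a 25% reduction at fixed flow. The flow rate, baseline pressure drop, and efficiency are illustrative assumptions, not GF test data.

```python
# How a lower pressure drop translates into pumping power at fixed flow,
# using pumping power = flow (m^3/s) * pressure drop (Pa) / pump efficiency.
# The flow rate, pressure drop, and efficiency below are illustrative only.

def pumping_power_w(flow_lpm, pressure_drop_kpa, pump_efficiency=0.6):
    """Electrical power (W) needed to push flow_lpm through a pressure_drop_kpa restriction."""
    flow_m3_s = flow_lpm / 1000.0 / 60.0
    return flow_m3_s * pressure_drop_kpa * 1000.0 / pump_efficiency

if __name__ == "__main__":
    baseline = pumping_power_w(flow_lpm=150, pressure_drop_kpa=40)
    reduced = pumping_power_w(flow_lpm=150, pressure_drop_kpa=40 * 0.75)  # 25% lower drop
    print(f"Baseline ~{baseline:.0f} W, with 25% lower pressure drop ~{reduced:.0f} W")
```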


