Products


Carrier launches CRAH for data centres
Carrier, a manufacturer of HVAC, refrigeration, and fire and security equipment, has introduced the AiroVision 39CV Computer Room Air Handler (CRAH), expanding its QuantumLeap portfolio with a precision cooling system designed for medium- to large-scale data centre environments. Developed and manufactured in Europe, the AiroVision 39CV is intended to support energy efficiency, reliability, and shorter lead times, while meeting EU regulatory requirements.

The unit offers a cooling capacity from 20kW to 250kW and is designed to operate with elevated chilled water temperatures. Carrier states that this approach can improve energy performance and contribute to lower power usage effectiveness (PUE) by enabling more efficient chiller operation and supporting free cooling strategies.

Factory-integrated design for simplified deployment

The AiroVision 39CV features a built-in controller for real-time monitoring, adaptive operation, and integration with building management systems. The control platform can be configured to suit specific operational requirements. All components are factory-integrated to reduce on-site installation and commissioning work. Additional features, including an auto transfer switch and ultra-capacitors, are intended to support service continuity in critical environments.

Michel Grabon, EMEA Marketing and Market Verticals Director at Carrier, says, “The 39CV is a strategic addition to our QuantumLeap Solutions portfolio, designed to help data centre operators address today’s most pressing challenges: increasing thermal loads from higher computing densities, the need to reduce energy consumption to meet sustainability targets, and the pressure to deploy solutions quickly and efficiently.

“With its high-efficiency design, intelligent control system, and factory-integrated components, the 39CV helps operators to improve energy performance, optimise installation time, and build scalable infrastructures with confidence.”

For more from Carrier, click here.

PFX highlights its SOLUTHERM cooling fluids
PFX Group, a Canadian manufacturer of automotive and industrial fluids, has showcased its SOLUTHERM heat transfer fluid range at the 2026 AHR Expo in Las Vegas, USA. The company presented its thermal management fluids at the Recochem booth during the event, which ran from 2 to 4 February.

The SOLUTHERM range is designed to support HVAC system performance, including traditional heating and cooling loops and liquid cooling applications in data centres. The company states that increasing power densities, changing regulatory requirements, and evolving system materials are driving greater demand for effective thermal management. This is particularly relevant in data centres, where continuous operation and high-performance computing environments require reliable temperature control to support equipment performance and operational continuity.

The SOLUTHERM range includes glycol-based heat transfer fluids designed to support system efficiency, temperature stability, and corrosion protection. Some formulations are developed to support environmental targets, including biodegradable options and fluids aligned with LEED building requirements.

Jerome Dujoux, Vice President of Branding and Innovation at PFX Group, says, “HVAC and data centre cooling are no longer separate conversations.

“As computing power increases and buildings become more energy intensive, thermal management is becoming a connective tissue between digital infrastructure and the built environment. That’s the shift SOLUTHERM is designed for.”

Thermal fluids for HVAC and data centre cooling

Among the products highlighted at the exhibition were SOLUTHERM PG HD and EG HD heat transfer fluids, designed for HVAC applications in facilities including hospitals, universities, and other critical infrastructure environments. The company also presented SOLUTHERM direct liquid cooling fluids, developed for servers and high-performance computing environments. These fluids are designed to operate across a wide temperature range, supporting data centre cooling requirements associated with increasing power density.

Additional products included SOLUTHERM PG HD LEED heat transfer fluids, which use bio-based propylene glycol and meet ASTM D8039 corrosion testing standards, and SOLUTHERM PG AL Safe heat transfer fluids, developed for systems containing aluminium components such as boilers, water heaters, and heat exchangers.

Tom Corrigan, Director of Research and Development at PFX Group, notes, “Heat transfer fluids are often treated as a commodity when, in reality, they influence energy efficiency, equipment lifespan, and system reliability more than most people realise.

“We see thermal management as a strategic decision and that’s why SOLUTHERM is engineered for specific applications and backed with ongoing support.”

Case study: The data centre's shield against errors
In an industry where a single unplugged cable can stall a production line, "good-enough" labelling isn't an option. A leading automotive manufacturer faced a challenge: its cabling was becoming a maze of human error, threatening the uptime of mission-critical services.

The solution wasn't just better labels; it was a standardised identification ecosystem. By deploying industrial-grade materials and the high-volume BradyPrinter i7100, as well as the handheld M610, the manufacturer ensured that every rack and server remained clearly identifiable under any conditions. This move towards precision eliminated the guesswork that leads to accidental disconnections.

The result? A solid infrastructure where technicians move with confidence. Operational resilience starts at the surface, with a reliable label that stays readable.

Click here to read more and to learn more about reliable identification solutions for data centres.

Meet Brady experts at Data Centre World (DCW) in London, UK, 4–5 March 2026, Booth F175.

For more from Brady, click here.

Carrier launches CDU with 2°C ATD
Carrier, a manufacturer of HVAC, refrigeration, and fire and security equipment, has introduced a new coolant distribution unit (CDU), designed to support the growing use of liquid cooling in UK data centres while improving energy performance, resilience, and space utilisation.

The Carrier CDU is intended to help operators manage higher rack densities and increasing cooling demands. It is designed to support liquid-cooled IT environments and provide greater control over energy use and system uptime. As liquid cooling becomes more widely adopted to meet efficiency targets, the CDU enables deployment at scale through management of secondary coolant loops. Carrier says this can help reduce pumping energy and optimise heat removal across varying load conditions.

Thermal performance and system efficiency

The CDU uses modular heat exchangers that can deliver approach temperatures as low as 2°C, compared with more typical 4°C systems. According to Carrier, this can enable up to 15% chiller energy savings, allowing more electrical capacity to be allocated to IT loads rather than cooling.

Oliver Sanders, Data Centre Commercial Director UK&I, Carrier HVAC, notes, “Data centre leaders across the UK are focused on increasing capacity without increasing risk.

“This new Carrier CDU supports that goal by giving operators greater thermal stability, more flexibility in system design, and better visibility of cooling performance. The result is improved energy efficiency and smoother scalability as liquid cooling demand grows.”

The CDU is designed for use in mission-critical environments and includes redundant pumps and power supplies to support continued operation during maintenance or unexpected events. Intelligent controls manage fluid temperatures and flow rates in real time, with the aim of maintaining stable conditions for high-density servers while reducing energy consumption.

Integration, scalability, and monitoring

Carrier states that the CDU is designed for simplified integration into existing facilities, allowing liquid cooling to be introduced with minimal disruption. The product range includes multiple unit sizes from 1.3 to 5 MW, enabling operators to align cooling capacity with current and future high-density requirements.

The system is intended to support direct-to-chip cooling as well as mixed cooling environments. Carrier says it is designed to maintain stable performance under fluctuating workloads and higher ambient temperatures.

“Liquid cooling adoption is accelerating, and operators want systems that deliver both efficiency and certainty,” Oliver continues. “With this Carrier CDU, customers can integrate high-density workloads confidently, knowing their cooling system is designed to maximise uptime, efficiency, and long-term value.”

The CDU integrates with Carrier’s control platforms to support centralised monitoring, performance optimisation, and energy management. This is intended to help data centre teams track cooling trends, respond to load changes, and plan capacity more effectively.

The Carrier CDU forms part of Carrier’s QuantumLeap portfolio of data centre technologies.

For more from Carrier, click here.
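The approach-temperature reasoning above can be illustrated with a short sketch. This is not Carrier's method; the helper function and the 32°C coolant target are hypothetical illustration values, showing only why a smaller approach allows a warmer facility chilled-water setpoint.

```python
# Illustrative sketch (not Carrier's method): a CDU heat exchanger can only
# deliver secondary coolant a few degrees warmer than the facility chilled
# water feeding it; that gap is the "approach temperature". A smaller
# approach lets the chilled-water setpoint rise for the same coolant target.
# All figures below are hypothetical, not product specifications.

def required_chw_setpoint(coolant_supply_c: float, approach_c: float) -> float:
    """Facility chilled-water temperature needed so the heat exchanger
    can deliver the target secondary-loop coolant temperature."""
    return coolant_supply_c - approach_c

target_coolant = 32.0  # °C secondary-loop supply target (hypothetical)

for approach in (4.0, 2.0):
    setpoint = required_chw_setpoint(target_coolant, approach)
    print(f"{approach:.0f}°C approach -> chilled water at {setpoint:.0f}°C")
```

With these example numbers, a 2°C approach permits a 30°C chilled-water setpoint instead of 28°C; warmer chilled water generally improves chiller efficiency and extends free-cooling hours, which is the mechanism behind the savings claim.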

STULZ updates CyberRack Active Rear Door cooling
STULZ, a manufacturer of mission-critical air conditioning technology, has launched an updated version of its CyberRack Active Rear Door, aimed at high-density data centre cooling applications where space is limited and heat loads are increasing.

The rear-mounted heat exchanger is designed to capture heat directly at rack level, using electronically commutated fans to remove heat at the point of generation. The updated unit is intended for use in both air-cooled and liquid-cooled data centre environments.

Integrated sensors monitor return and supply air temperatures within the rack. Cooling output is then adjusted automatically in line with server heat load, aiming to maintain consistent thermal performance as workloads fluctuate.

Designed for high-density and retrofit environments

Valeria Mercante, Product Manager at STULZ, explains, “The tremendous growth of high-performance computing and artificial intelligence has driven server power densities higher than ever, creating significant heat challenges.

“With data centre space often at a premium, the CyberRack Active Rear Door is precision engineered to deliver maximum cooling capacity in a footprint depth of just 274mm.

“Delivering up to 49kW chilled water cooling with large heat exchanger surfaces and EC fans, it also supports higher water temperatures and can extend free cooling hours. This helps reduce overall energy consumption and operating costs.”

The compact footprint means the unit can be installed without rack repositioning, making it suitable for retrofit projects and sites with limited floorspace. Custom adaptor frames are available to support a range of rack sizes and deployment models, including standalone use, supplemental precision air conditioning, and hybrid configurations alongside direct-to-chip liquid cooling.

For maintenance, the system includes a two-step door opening of more than 90°, providing access to fans and coils. Hot-swappable axial fans with plug connectors are also designed to simplify servicing and reduce downtime. Differential pressure control adjusts fan speed in line with server airflow requirements, while low noise operation is also specified.

The CyberRack Active Rear Door includes the STULZ E² intelligent control system, featuring a 4.3-inch touchscreen interface. The controller supports functions such as redundancy management, cross-unit parallel operation, standby mode with emergency operation, and integration with building management systems.

Valeria continues, “The updated CyberRack Active Rear Door embodies our commitment to providing air conditioning solutions that combine cutting edge technology with intelligent design, user friendliness, energy efficiency, flexibility, and reliability.

“In environments where space is tight, heat loads are high, or there’s no raised floor, these advanced units can deliver highly efficient cooling, regardless of the server load.”

For more from STULZ, click here.
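The differential pressure control mentioned above can be sketched as a simple feedback loop. This is an illustrative proportional controller, not the STULZ E² algorithm; the gain, pressure values, and function are all invented for the example.

```python
# Minimal sketch of differential-pressure fan control as described in the
# article: rear-door fans speed up or slow down so the pressure difference
# across the rack stays near zero, meaning the door moves exactly the air
# the servers push through it. Hypothetical proportional control, not the
# STULZ E² implementation.

def adjust_fan_speed(current_speed: float, dp_pa: float,
                     gain: float = 0.02) -> float:
    """Return a new fan speed fraction (0..1). A positive differential
    pressure (air backing up inside the rack) raises speed; a negative
    one lowers it. Output is clamped to the valid range."""
    new_speed = current_speed + gain * dp_pa
    return max(0.0, min(1.0, new_speed))

speed = 0.5
for dp in (8.0, 4.0, 0.0, -3.0):   # simulated pressure readings in Pa
    speed = adjust_fan_speed(speed, dp)
    print(f"dp={dp:+.1f} Pa -> fan speed {speed:.2f}")
```

Matching fan speed to server airflow in this way avoids both recirculation (fans too slow) and wasted fan energy (fans too fast), which is the stated purpose of the feature.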

Fluke Networks launches CertiFiber Max fibre tester
Fluke Networks, a manufacturer of network certification and troubleshooting tools, has launched CertiFiber Max, a third-generation optical loss test set designed for high-density data centre fibre testing. The tester is built on the Versiv platform and integrates with LinkWare software.

Fluke Networks states that CertiFiber Max can certify up to 24 fibres in under one second, addressing growing testing demands as fibre density increases in AI- and cloud-driven environments. As data centre architectures evolve, contractors are under pressure to certify more fibres within tighter performance margins. Fluke Networks notes that many existing tools either limit fibre counts or rely on fan-out cables and adapters, increasing testing time and complexity.

Designed for high-density fibre certification

CertiFiber Max supports 12-, 16-, and 24-fibre MPO connectors, as well as 16- and 24-fibre MMC connectors, using field-replaceable UniPort adapters. These adapters are designed to connect directly to multiple connector types and can be replaced or upgraded on site, extending the working life of the tester. The company says this approach allows technicians to adapt to changing connector standards without replacing test equipment, while also protecting tester ports during use in demanding environments.

Vineet Thuvara, Chief Product Officer at Fluke Corporation, comments, “CertiFiber Max reflects our belief that trust in data centre operations starts at the physical layer. Built on the proven Versiv platform, it delivers native 24-fibre support for high-density networks.”

As fibre counts continue to rise, the company positions its CertiFiber Max as a tool designed to support both current installations and future requirements, including emerging connector formats such as MMC.

Charlie Stroup, Applications Engineering Manager at US Conec, notes, “As MMC deployments continue to expand rapidly, Fluke’s CertiFiber Max plays a critical role in supporting reliable testing for next-generation AI networks.”

The system measures optical loss, length, and polarity across multiple fibres in under a second and uses the one-jumper reference method recommended by industry standards and manufacturers.

For more from Fluke Networks, click here.

Motivair introduces scalable CDU for AI data centres
Motivair, a provider of liquid cooling systems for data centres, owned by Schneider Electric, has announced a new coolant distribution unit designed to support high-density data centre cooling requirements, including large-scale AI and high-performance computing deployments.

The new CDU, MCDU-70, has a nominal capacity of 2.5 MW and is intended for use in liquid-cooled environments where compute density continues to increase. Motivair says the system can be deployed as part of a centralised cooling architecture and scaled beyond 10 MW through multiple units operating together.

According to the company, the CDU is designed to support current and future GPU-based workloads, where heat output is significantly higher than traditional CPU-based infrastructure. It notes that rack power densities in AI environments are expected to approach one megawatt and above, increasing the need for liquid cooling approaches.

Designed for scalable, high-density cooling

Motivair states that the new CDU integrates with Schneider Electric’s EcoStruxure platform, allowing multiple units to operate as part of a coordinated system. The design is intended to support phased expansion as cooling demand grows, without requiring major redesign of the wider plant.

Rich Whitmore, CEO of Motivair by Schneider Electric, comments, “Our solutions are designed to keep pace with chip and silicon evolution. Data centre success now depends on delivering scalable, reliable infrastructure that aligns with next-generation AI factory deployments.”

The CDU forms part of Schneider Electric’s wider liquid cooling portfolio, which includes systems ranging from lower-capacity deployments through to multi-megawatt installations. Motivair says the units are designed as modular building blocks, enabling operators to select and combine systems based on specific performance and redundancy requirements.

The system is manufactured through Schneider Electric’s facilities in North America, Europe, and Asia, and is intended to provide high flow rates and pressure within a compact footprint. The company adds that the design supports parallel filtration, real-time monitoring, and integration with other cooling components to support efficient operation across the data centre.

The MCDU-70 is now available to order globally.

For more from Schneider Electric, click here.

Vertiv expands perimeter cooling range in EMEA
Vertiv, a global provider of critical digital infrastructure, has expanded its CoolPhase Perimeter PAM air-cooled perimeter cooling range with additional capacity options and the introduction of the CoolPhase Condenser, now available across Europe, the Middle East, and Africa (EMEA).

The update is aimed at small, medium, and edge data centre environments, with Vertiv stating that the expanded range is intended to improve energy efficiency and operational resilience while reducing overall operating costs and extending equipment life.

The CoolPhase Perimeter PAM has been developed for modern data centre requirements and now incorporates the EconoPhase Pumped Refrigerant Economizer, integrated within the CoolPhase Condenser system. Vertiv says the approach is designed to increase free-cooling operation by using a pumped refrigerant circuit that consumes less power than conventional compressor-based systems and reduces space requirements.

The range uses R-513A refrigerant, which has a lower global warming potential than R-410A and is non-flammable with low toxicity. The company notes that this aligns the system with EU F-Gas Regulation 2024/573 and supports operators seeking to reduce emissions while maintaining cooling capacity.

Designed for efficiency and regulatory compliance

Sam Bainborough, VP Thermal Management, EMEA at Vertiv, explains, “With this latest addition to the Vertiv CoolPhase Perimeter PAM range, we're making our direct expansion offering more flexible while addressing two critical challenges faced by data centre operators today: environmental compliance and operational efficiency.

“The new air-cooled models boost free-cooling capabilities to lower PUE, demonstrating our commitment to providing energy-efficient and environmentally responsible options.”

The CoolPhase Perimeter PAM includes variable-speed compressors, staged coils, and patented filtration technology, and integrates with CoolPhase Condenser units using the Liebert iCOM control platform. The range forms part of Vertiv’s wider thermal portfolio and is supported by the company’s service organisation, covering design, commissioning, and ongoing operational support.

For more from Vertiv, click here.

Vertiv launches new MegaMod HDX configurations
Vertiv, a global provider of critical digital infrastructure, has introduced new configurations of its MegaMod HDX prefabricated power and liquid cooling system for high-density computing deployments in North America and EMEA.

The units are designed for environments using artificial intelligence and high-performance computing and allow operators to increase power and cooling capacity as requirements rise. Vertiv states the configurations give organisations a way to manage greater thermal loads while maintaining deployment speed and reducing space requirements.

The MegaMod HDX integrates direct-to-chip liquid cooling with air-cooled systems to meet the demands of pod-based AI and GPU clusters. The compact configuration supports up to 13 racks with a maximum capacity of 1.25 MW, while the larger combo design supports up to 144 racks and power capacities up to 10 MW. Both are intended for rack densities from 50 kW to above 100 kW.

Prefabricated scaling for high-density sites

The hybrid architecture combines direct-to-chip cooling with air cooling as part of a prefabricated pod. According to Vertiv, a distributed redundant power design allows the system to continue operating if a module goes offline, and a buffer-tank thermal backup feature helps stabilise GPU clusters during maintenance or changes in load. The company positions the factory-assembled approach as a method of standardising deployment and planning and supporting incremental build-outs as data centre requirements evolve.

The MegaMod HDX configurations draw on Vertiv’s existing power, cooling, and management portfolio, including the Liebert APM2 UPS (uninterruptible power supply), CoolChip CDU (cooling distribution unit), PowerBar busway system, and Unify infrastructure monitoring. Vertiv also offers compatible racks and OCP-compliant racks, CoolLoop RDHx rear door heat exchangers, CoolChip in-rack CDUs, rack power distribution units, PowerDirect in-rack DC power systems, and CoolChip Fluid Network Rack Manifolds.

Viktor Petik, Senior Vice President, Infrastructure Solutions at Vertiv, says, “Today’s AI workloads demand cooling solutions that go beyond traditional approaches.

“With the Vertiv MegaMod HDX available in both compact and combo solution configurations, organisations can match their facility requirements while supporting high-density, liquid-cooled environments at scale.”

For more from Vertiv, click here.
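The capacity and rack-count figures quoted above can be sanity-checked with a quick calculation. The helper below is hypothetical, written only to show how the two configurations relate to the stated 50 kW to 100+ kW density range.

```python
# Quick sizing check using the MegaMod HDX figures quoted in the article
# (compact: up to 13 racks / 1.25 MW; combo: up to 144 racks / 10 MW).
# The helper function is an illustration, not a Vertiv tool.

def max_rack_density_kw(capacity_mw: float, racks: int) -> float:
    """Average per-rack power if the full capacity is spread evenly
    across every rack position."""
    return capacity_mw * 1000 / racks

compact = max_rack_density_kw(1.25, 13)   # fully populated compact pod
combo = max_rack_density_kw(10, 144)      # fully populated combo design
print(f"compact: {compact:.1f} kW/rack, combo: {combo:.1f} kW/rack")
```

Fully populated, the compact configuration averages roughly 96 kW per rack and the combo roughly 69 kW per rack, both inside the quoted 50 kW to 100+ kW band; populating fewer rack positions leaves headroom for densities above 100 kW.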

Janitza launches UMG 801 power analyser
Modern data centres often face a choice between designing electrical monitoring systems far beyond immediate needs or replacing equipment as sites expand. Janitza, a German manufacturer of energy measurement and power quality monitoring equipment, says its UMG 801 power analyser is designed to avoid this issue by allowing users to increase capacity from eight to 92 current measuring channels without taking systems offline.

The analyser is suited to compact switchboards, with a fully expanded installation occupying less DIN rail space than traditional designs that rely on transformer disconnect terminals. Each add-on module introduces eight additional measuring channels within a single sub-unit, reducing physical footprint within crowded cabinets.

Expandable monitoring with fewer installation constraints

The core UMG 801 unit supports ten virtual module slots that can be populated in any mix. These include conventional transformer modules, low-power modules, and digital input modules. Bridge modules allow measurement points to be located up to 100 metres away without consuming module capacity, reducing wiring impact and installation complexity.

Sampling voltage at 51.2 kHz, the analyser provides Class 0.2 accuracy across voltage, current, and energy readings. This level of precision is used in applications such as calculating power usage effectiveness (PUE) to two decimal places, as well as assessing harmonic distortion that may affect uninterruptible power supplies (UPS). Voltage harmonic analysis extends to the 127th order, and transient events down to 18 microseconds can be recorded. Onboard memory of 4 GB also ensures data continuity during network disruptions.

The system is compatible with ISO 50001 energy management frameworks and includes two ethernet interfaces that can operate simultaneously to provide redundant communication paths. Native OPC UA and Modbus TCP/IP support enable direct communication with energy management platforms and legacy supervisory control systems, while whitelisting functions restrict access to approved devices. RS-485 additionally provides further support for older infrastructure.

Configuration is carried out through an integrated web server rather than proprietary software, and an optional remote display allows monitoring without opening energised cabinets. Installations typically start with a single base unit at the primary distribution level, with additional modules added gradually as demand grows, reducing the need for upfront expenditure and avoiding replacement activity that risks downtime.

Janitza’s remote display connects via USB and mirrors the analyser’s interface, providing visibility of all measurement channels from the switchboard front panel. Physical push controls enable parameter navigation, helping users access configuration and measurement information without opening the enclosure. The company notes that carrying out upgrades without interrupting operations may support facilities that cannot accommodate downtime windows.

For more from Janitza, click here.
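The PUE figure mentioned above is a simple ratio that can be computed from metered readings. The sketch below uses invented sample values, not Janitza data, to show the calculation to two decimal places.

```python
# Illustrative PUE calculation of the kind the article mentions: total
# facility power divided by IT power, reported to two decimal places.
# The readings below are invented sample values, not Janitza measurements.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power usage effectiveness, rounded to two decimal places.
    A value of 1.00 would mean every watt goes to IT equipment."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return round(total_facility_kw / it_load_kw, 2)

# Hypothetical metered values: main incomer vs IT distribution board.
total_kw = 1380.0   # facility total (IT + cooling + distribution losses)
it_kw = 1000.0      # IT equipment load
print(f"PUE = {pue(total_kw, it_kw):.2f}")
```

Resolving the second decimal place is why measurement accuracy matters: at Class 0.2, the combined error on the two power readings stays small enough that a reported change from, say, 1.38 to 1.37 reflects the facility rather than the instrument.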


