Products


STULZ updates CyberRack Active Rear Door cooling
STULZ, a manufacturer of mission-critical air conditioning technology, has launched an updated version of its CyberRack Active Rear Door, aimed at high-density data centre cooling applications where space is limited and heat loads are increasing. The rear-mounted heat exchanger is designed to capture heat directly at rack level, using electronically commutated (EC) fans to remove heat at the point of generation. The updated unit is intended for use in both air-cooled and liquid-cooled data centre environments.

Integrated sensors monitor return and supply air temperatures within the rack. Cooling output is then adjusted automatically in line with server heat load, aiming to maintain consistent thermal performance as workloads fluctuate.

Designed for high-density and retrofit environments

Valeria Mercante, Product Manager at STULZ, explains, “The tremendous growth of high-performance computing and artificial intelligence has driven server power densities higher than ever, creating significant heat challenges.

“With data centre space often at a premium, the CyberRack Active Rear Door is precision engineered to deliver maximum cooling capacity in a footprint depth of just 274mm.

“Delivering up to 49kW chilled water cooling with large heat exchanger surfaces and EC fans, it also supports higher water temperatures and can extend free cooling hours. This helps reduce overall energy consumption and operating costs.”

The compact footprint means the unit can be installed without rack repositioning, making it suitable for retrofit projects and sites with limited floorspace. Custom adaptor frames are available to support a range of rack sizes and deployment models, including standalone use, supplemental precision air conditioning, and hybrid configurations alongside direct-to-chip liquid cooling.

For maintenance, the system includes a two-step door opening of more than 90°, providing access to fans and coils.
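As a rough illustration of the temperature-based regulation described above, the sketch below raises fan speed in proportion to how far the measured supply air drifts over a setpoint. The function name, gain, and limits are invented for the example; this is not STULZ's E² control logic.

```python
# Illustrative sketch only: a proportional fan-speed controller that holds
# rack supply-air temperature near a setpoint. All constants are invented.

def fan_speed_percent(supply_air_c: float, setpoint_c: float = 24.0,
                      gain: float = 12.0, min_speed: float = 20.0,
                      max_speed: float = 100.0) -> float:
    """Raise fan speed (%) as supply air drifts above the setpoint."""
    error = supply_air_c - setpoint_c           # positive when the rack runs hot
    speed = min_speed + gain * max(error, 0.0)  # proportional response
    return min(max(speed, min_speed), max_speed)

# Lightly loaded rack: supply air on setpoint, fans idle at minimum speed.
print(fan_speed_percent(24.0))  # 20.0
# Heavily loaded rack: supply air 5 °C over setpoint drives fans to 80%.
print(fan_speed_percent(29.0))  # 80.0
```

A real unit would combine this with the differential pressure control the article mentions, matching fan airflow to server airflow rather than temperature alone.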
Hot-swappable axial fans with plug connectors are also designed to simplify servicing and reduce downtime. Differential pressure control adjusts fan speed in line with server airflow requirements, and low-noise operation is also specified.

The CyberRack Active Rear Door includes the STULZ E² intelligent control system, featuring a 4.3-inch touchscreen interface. The controller supports functions such as redundancy management, cross-unit parallel operation, standby mode with emergency operation, and integration with building management systems.

Valeria continues, “The updated CyberRack Active Rear Door embodies our commitment to providing air conditioning solutions that combine cutting-edge technology with intelligent design, user-friendliness, energy efficiency, flexibility, and reliability.

“In environments where space is tight, heat loads are high, or there’s no raised floor, these advanced units can deliver highly efficient cooling, regardless of the server load.”

For more from STULZ, click here.

Fluke Networks launches CertiFiber Max fibre tester
Fluke Networks, a manufacturer of network certification and troubleshooting tools, has launched CertiFiber Max, a third-generation optical loss test set designed for high-density data centre fibre testing. The tester is built on the Versiv platform and integrates with LinkWare software.

Fluke Networks states that CertiFiber Max can certify up to 24 fibres in under one second, addressing growing testing demands as fibre density increases in AI- and cloud-driven environments. As data centre architectures evolve, contractors are under pressure to certify more fibres within tighter performance margins. Fluke Networks notes that many existing tools either limit fibre counts or rely on fan-out cables and adapters, increasing testing time and complexity.

Designed for high-density fibre certification

CertiFiber Max supports 12-, 16-, and 24-fibre MPO connectors, as well as 16- and 24-fibre MMC connectors, using field-replaceable UniPort adapters. These adapters are designed to connect directly to multiple connector types and can be replaced or upgraded on site, extending the working life of the tester. The company says this approach allows technicians to adapt to changing connector standards without replacing test equipment, while also protecting tester ports during use in demanding environments.

Vineet Thuvara, Chief Product Officer at Fluke Corporation, comments, “CertiFiber Max reflects our belief that trust in data centre operations starts at the physical layer. Built on the proven Versiv platform, it delivers native 24-fibre support for high-density networks.”

As fibre counts continue to rise, the company positions CertiFiber Max as a tool designed to support both current installations and future requirements, including emerging connector formats such as MMC.
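Optical loss certification of this kind reduces to simple arithmetic once a reference is set: insertion loss is the referenced source power minus the received power, checked against a loss budget per fibre. The sketch below illustrates that bookkeeping; the values, budget, and function names are invented example assumptions, not Fluke's implementation.

```python
# Illustrative sketch only: deriving insertion loss from a one-jumper
# reference and passing/failing each fibre against a loss budget.

def insertion_loss_db(reference_dbm: float, measured_dbm: float) -> float:
    """Loss is the drop from referenced source power to received power."""
    return reference_dbm - measured_dbm

def certify(losses_db: list, budget_db: float) -> list:
    """Mark each fibre PASS (True) when its loss is within the budget."""
    return [loss <= budget_db for loss in losses_db]

# 24 received powers (dBm) measured against a -20.0 dBm one-jumper reference.
reference = -20.0
measured = [-21.2, -20.9, -23.8] + [-21.0] * 21
losses = [insertion_loss_db(reference, m) for m in measured]
results = certify(losses, budget_db=2.6)
print(results.count(True))  # 23 fibres pass; one exceeds the budget
```

The one-jumper reference the article mentions zeroes out the test cords and tester ports, so the reported loss reflects the link under test rather than the setup.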
Charlie Stroup, Applications Engineering Manager at US Conec, notes, “As MMC deployments continue to expand rapidly, Fluke’s CertiFiber Max plays a critical role in supporting reliable testing for next-generation AI networks.”

The system measures optical loss, length, and polarity across multiple fibres in under a second and uses the one-jumper reference method recommended by industry standards and manufacturers.

For more from Fluke Networks, click here.

Motivair introduces scalable CDU for AI data centres
Motivair, a provider of liquid cooling systems for data centres, owned by Schneider Electric, has announced a new coolant distribution unit (CDU) designed to support high-density data centre cooling requirements, including large-scale AI and high-performance computing deployments.

The new CDU, the MCDU-70, has a nominal capacity of 2.5 MW and is intended for use in liquid-cooled environments where compute density continues to increase. Motivair says the system can be deployed as part of a centralised cooling architecture and scaled beyond 10 MW through multiple units operating together.

According to the company, the CDU is designed to support current and future GPU-based workloads, where heat output is significantly higher than in traditional CPU-based infrastructure. It notes that rack power densities in AI environments are expected to approach one megawatt and above, increasing the need for liquid cooling approaches.

Designed for scalable, high-density cooling

Motivair states that the new CDU integrates with Schneider Electric’s EcoStruxure platform, allowing multiple units to operate as part of a coordinated system. The design is intended to support phased expansion as cooling demand grows, without requiring major redesign of the wider plant.

Rich Whitmore, CEO of Motivair by Schneider Electric, comments, “Our solutions are designed to keep pace with chip and silicon evolution. Data centre success now depends on delivering scalable, reliable infrastructure that aligns with next-generation AI factory deployments.”

The CDU forms part of Schneider Electric’s wider liquid cooling portfolio, which includes systems ranging from lower-capacity deployments through to multi-megawatt installations. Motivair says the units are designed as modular building blocks, enabling operators to select and combine systems based on specific performance and redundancy requirements.
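The modular building-block approach described above is easy to quantify: a cooling load is divided across 2.5 MW units, with spares added for redundancy. The sketch below shows the arithmetic; the N+1 redundancy policy is an example assumption, not a Motivair specification.

```python
import math

# Illustrative sketch only: sizing a bank of 2.5 MW CDUs (the MCDU-70's
# nominal capacity) for a given cooling load, with N+1 spares assumed.

CDU_CAPACITY_MW = 2.5

def cdus_required(load_mw: float, redundancy: int = 1) -> int:
    """Units needed to carry the load, plus spare units for redundancy."""
    return math.ceil(load_mw / CDU_CAPACITY_MW) + redundancy

print(cdus_required(10.0))  # 5 -> four units carry 10 MW, plus one spare
print(cdus_required(12.0))  # 6 -> five units carry 12 MW, plus one spare
```

This is the sense in which the article says deployments "scale beyond 10 MW through multiple units operating together".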
The system is manufactured at Schneider Electric’s facilities in North America, Europe, and Asia, and is intended to provide high flow rates and pressure within a compact footprint. The company adds that the design supports parallel filtration, real-time monitoring, and integration with other cooling components to support efficient operation across the data centre.

The MCDU-70 is now available to order globally.

For more from Schneider Electric, click here.

Vertiv expands perimeter cooling range in EMEA
Vertiv, a global provider of critical digital infrastructure, has expanded its CoolPhase Perimeter PAM air-cooled perimeter cooling range with additional capacity options and the introduction of the CoolPhase Condenser, now available across Europe, the Middle East, and Africa (EMEA). The update is aimed at small, medium, and edge data centre environments, with Vertiv stating that the expanded range is intended to improve energy efficiency and operational resilience while reducing overall operating costs and extending equipment life.

The CoolPhase Perimeter PAM has been developed for modern data centre requirements and now incorporates the EconoPhase Pumped Refrigerant Economizer, integrated within the CoolPhase Condenser system. Vertiv says the approach is designed to increase free-cooling operation by using a pumped refrigerant circuit that consumes less power than conventional compressor-based systems and reduces space requirements.

The range uses R-513A refrigerant, which has a lower global warming potential than R-410A and is non-flammable with low toxicity. The company notes that this aligns the system with EU F-Gas Regulation 2024/573 and supports operators seeking to reduce emissions while maintaining cooling capacity.

Designed for efficiency and regulatory compliance

Sam Bainborough, VP Thermal Management, EMEA at Vertiv, explains, “With this latest addition to the Vertiv CoolPhase Perimeter PAM range, we’re making our direct expansion offering more flexible while addressing two critical challenges faced by data centre operators today: environmental compliance and operational efficiency.

“The new air-cooled models boost free-cooling capabilities to lower PUE, demonstrating our commitment to providing energy-efficient and environmentally responsible options.”

The CoolPhase Perimeter PAM includes variable-speed compressors, staged coils, and patented filtration technology, and integrates with CoolPhase Condenser units using the Liebert iCOM control platform.
The range forms part of Vertiv’s wider thermal portfolio and is supported by the company’s service organisation, covering design, commissioning, and ongoing operational support. For more from Vertiv, click here.

Vertiv launches new MegaMod HDX configurations
Vertiv, a global provider of critical digital infrastructure, has introduced new configurations of its MegaMod HDX prefabricated power and liquid cooling system for high-density computing deployments in North America and EMEA. The units are designed for environments using artificial intelligence and high-performance computing and allow operators to increase power and cooling capacity as requirements rise. Vertiv states the configurations give organisations a way to manage greater thermal loads while maintaining deployment speed and reducing space requirements.

The MegaMod HDX integrates direct-to-chip liquid cooling with air-cooled systems to meet the demands of pod-based AI and GPU clusters. The compact configuration supports up to 13 racks with a maximum capacity of 1.25 MW, while the larger combo design supports up to 144 racks and power capacities up to 10 MW. Both are intended for rack densities from 50 kW to above 100 kW.

Prefabricated scaling for high-density sites

The hybrid architecture combines direct-to-chip cooling with air cooling as part of a prefabricated pod. According to Vertiv, a distributed redundant power design allows the system to continue operating if a module goes offline, and a buffer-tank thermal backup feature helps stabilise GPU clusters during maintenance or changes in load. The company positions the factory-assembled approach as a method of standardising deployment and planning and supporting incremental build-outs as data centre requirements evolve.

The MegaMod HDX configurations draw on Vertiv’s existing power, cooling, and management portfolio, including the Liebert APM2 UPS (uninterruptible power supply), CoolChip CDU (cooling distribution unit), PowerBar busway system, and Unify infrastructure monitoring.
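The rack counts and capacities quoted for the two configurations imply straightforward average per-rack power budgets. The arithmetic below is purely illustrative: real deployments reserve capacity for cooling, distribution losses, and redundancy, so actual rack budgets differ from these simple averages.

```python
# Illustrative arithmetic only: average per-rack power implied by the
# quoted MegaMod HDX configurations (not a Vertiv sizing method).

def avg_rack_kw(capacity_mw: float, racks: int) -> float:
    """Capacity spread evenly across racks, in kW per rack."""
    return capacity_mw * 1000 / racks

print(round(avg_rack_kw(1.25, 13), 1))   # 96.2 kW per rack (compact)
print(round(avg_rack_kw(10.0, 144), 1))  # 69.4 kW per rack (combo)
```

Both averages sit within the 50 kW to 100+ kW rack-density range the configurations are stated to target.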
Vertiv also offers compatible racks and OCP-compliant racks, CoolLoop RDHx rear door heat exchangers, CoolChip in-rack CDUs, rack power distribution units, PowerDirect in-rack DC power systems, and CoolChip Fluid Network Rack Manifolds.

Viktor Petik, Senior Vice President, Infrastructure Solutions at Vertiv, says, “Today’s AI workloads demand cooling solutions that go beyond traditional approaches.

“With the Vertiv MegaMod HDX available in both compact and combo solution configurations, organisations can match their facility requirements while supporting high-density, liquid-cooled environments at scale.”

For more from Vertiv, click here.

Janitza launches UMG 801 power analyser
Modern data centres often face a choice between designing electrical monitoring systems far beyond immediate needs or replacing equipment as sites expand. Janitza, a German manufacturer of energy measurement and power quality monitoring equipment, says its UMG 801 power analyser is designed to avoid this issue by allowing users to increase capacity from eight to 92 current measuring channels without taking systems offline.

The analyser is suited to compact switchboards, with a fully expanded installation occupying less DIN rail space than traditional designs that rely on transformer disconnect terminals. Each add-on module introduces eight additional measuring channels within a single sub-unit, reducing the physical footprint within crowded cabinets.

Expandable monitoring with fewer installation constraints

The core UMG 801 unit supports ten virtual module slots that can be populated in any mix. These include conventional transformer modules, low-power modules, and digital input modules. Bridge modules allow measurement points to be located up to 100 metres away without consuming module capacity, reducing wiring impact and installation complexity.

Sampling voltage at 51.2 kHz, the analyser provides Class 0.2 accuracy across voltage, current, and energy readings. This level of precision is used in applications such as calculating power usage effectiveness (PUE) to two decimal places, as well as assessing harmonic distortion that may affect uninterruptible power supplies (UPS). Voltage harmonic analysis extends to the 127th order, and transient events down to 18 microseconds can be recorded. Onboard memory of 4 GB also ensures data continuity during network disruptions.

The system is compatible with ISO 50001 energy management frameworks and includes two Ethernet interfaces that can operate simultaneously to provide redundant communication paths.
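The PUE figure mentioned above is the ratio the metering exists to feed: total facility energy divided by IT equipment energy over the same interval. The sketch below shows the calculation; the meter readings are invented example values.

```python
# Illustrative sketch only: the PUE ratio that precise sub-metering
# supports. Readings are invented; real PUE uses interval energy data
# from the incoming supply and the IT distribution level.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy over IT energy."""
    return total_facility_kwh / it_equipment_kwh

total_kwh = 1380.0  # assumed facility consumption over the interval
it_kwh = 1000.0     # assumed IT equipment consumption over the interval
print(round(pue(total_kwh, it_kwh), 2))  # 1.38
```

Reporting PUE to two decimal places is only meaningful when both meters carry tight accuracy classes, which is why the article highlights Class 0.2 metering.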
Native OPC UA and Modbus TCP/IP support enable direct communication with energy management platforms and legacy supervisory control systems, while whitelisting functions restrict access to approved devices. An RS-485 interface additionally provides support for older infrastructure.

Configuration is carried out through an integrated web server rather than proprietary software, and an optional remote display allows monitoring without opening energised cabinets. Installations typically start with a single base unit at the primary distribution level, with additional modules added gradually as demand grows, reducing the need for upfront expenditure and avoiding replacement activity that risks downtime.

Janitza’s remote display connects via USB and mirrors the analyser’s interface, providing visibility of all measurement channels from the switchboard front panel. Physical push controls enable parameter navigation, helping users access configuration and measurement information without opening the enclosure.

The company notes that carrying out upgrades without interrupting operations may support facilities that cannot accommodate downtime windows.

For more from Janitza, click here.

Southco develops blind-mate mechanism for liquid cooling
Southco, a US manufacturer of engineered access hardware including latches, hinges, and fasteners, has developed a high-tolerance blind-mate floating mechanism designed for next-generation liquid-cooled data centres. The company says the design is intended to address mechanical tolerance challenges that affect cooling system efficiency and operational stability.

It notes that demand for liquid cooling is increasing as traditional air-cooling methods struggle to manage the higher power densities associated with AI workloads and high-performance computing. Adoption is accelerating further as operators pursue sustainability and targeted PUE reductions. Liquid cooling, however, requires reliable physical connections, with Southco highlighting that even small alignment deviations at manifold and cold-plate interfaces can disrupt coolant flow, increase pump energy consumption, and heighten the risk of leaks.

Managing mechanical deviation in liquid cooling systems

Citing guidance in the Open Compute Project’s rack-mounted manifold requirements, Southco notes that a 1mm deviation can raise flow resistance by 15%, leading to around a 7% increase in pump energy. In large facilities, these effects scale across thousands of connection points.

The company identifies several contributors to misalignment in operational environments:

• Accumulated tolerances between rack formats, including EIA-310-D and ORV3, which may reach ±3.2mm
• Displacement caused by vibration during transport and operation
• Thermal expansion of materials, including copper manifolds expanding more than 1mm over typical temperature ranges

Rigid, low-tolerance couplings can leave systems vulnerable to leaks, rising operational costs, and downtime risk, and the newly introduced blind-mate floating mechanism is designed to absorb movement and compensate for these deviations.
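Taking the quoted figures at face value, the effect on pump energy can be sketched with simple bookkeeping. The baseline pump load and the fraction of misaligned joints below are invented example values, and the ~7% penalty is applied directly as stated rather than derived from pump affinity laws.

```python
# Illustrative arithmetic only, using the OCP-derived figure quoted above:
# a 1 mm misalignment costs roughly 7% extra pump energy at an affected
# connection. Baseline load and misaligned fraction are invented.

BASELINE_PUMP_KW = 200.0          # assumed coolant pumping load for a hall
ENERGY_PENALTY_PER_JOINT = 0.07   # ~7% extra pump energy per 1 mm deviation

def extra_pump_kw(misaligned_fraction: float) -> float:
    """Extra pump power if a fraction of pumping work crosses misaligned joints."""
    return BASELINE_PUMP_KW * misaligned_fraction * ENERGY_PENALTY_PER_JOINT

# If 10% of the pumping work passes through 1 mm-misaligned connections:
print(round(extra_pump_kw(0.10), 2))  # 1.4 kW of continuous extra pump load
```

Small per-joint penalties like this are why the article stresses that the effect scales across thousands of connection points in a large facility.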
The product offers floating tolerance of ±4mm radially, axial displacement absorption up to 6mm, and automatic self-centring when disconnected. The design is intended to support long-term leak prevention and meet standards applicable to OCP and ORV3 liquid cooling deployments. Southco adds that the mechanism includes sealing rated to withstand high-pressure testing in line with ASME B31.3 requirements and is intended to support more than ten years of continuous operation. It uses universal quick-disconnect interfaces to enable “blind” maintenance without precise alignment or tooling.

The company positions the technology as a step towards enabling rapid maintenance, reducing equipment handling time, and lowering the risk of service interruption. It also points to reduced pump energy consumption through lower flow resistance.

Southco sees future development in integrating sensing for temperature, flow, and pressure; exploring lighter materials; and working towards greater standardisation across suppliers and data centre ecosystems.

SPAL targets data centre cooling needs
SPAL Automotive, an Italian manufacturer of electric cooling fans and blowers, traditionally for automotive and industrial applications, is preparing to showcase its cooling technology at Data Centre World in London in March 2026, with a particular focus on brushless drive water pumps used in data centre thermal management.

The pumps are designed for stationary applications where cooling demand is continuous and high. They feature software control compatibility - including CAN, PWM, and LIN - supporting precise regulation of coolant flow and temperature. The company says the pumps consume less power than mechanically driven units and use IP6K9K-rated brushless systems intended to mitigate issues such as overload, reverse polarity, and overvoltage.

The role of cooling components in data centres

Alongside its pumps, SPAL will display its wider cooling portfolio, which includes fans and blowers designed for controlled airflow and heat dissipation. The company plans to highlight the use of matched replacement components, particularly for systems that rely on coordinated assemblies of fans, pumps, and related controls.

James Bowett, General Manager at SPAL UK, says, “In a world where costs are constantly under pressure, it’s false economy to opt for cheaper parts as this will not only affect the performance of the component itself, but the entire suite of parts within a system.

“The only way to ensure effective, reliable, long-life operation is to replicate the setup installed at the point of manufacture. That means choosing the best calibre parts throughout.”

SPAL states that its products are supplied with a four-year manufacturer’s warranty and are used to help maintain stable conditions for sensitive electronics. The company highlights that the growth of data centres linked to AI and cloud services is increasing demand for equipment designed specifically for energy efficiency, water use, and controlled cooling.
SPAL will exhibit at Data Centre World on Stand F15, held at ExCeL London on 4–5 March 2026.

Enecom upgrades data storage with Infinidat's InfiniBox
Infinidat, a provider of enterprise data storage systems, has announced that Enecom, a Japanese ICT services provider operating primarily in the Chugoku region, has upgraded its enterprise data infrastructure using multiple InfiniBox storage systems.

Enecom has deployed five InfiniBox systems across its environment. Two systems support the company’s EneWings enterprise cloud service, two are used for internal virtual infrastructure, and one is dedicated to backup and verification. The deployment is intended to support service availability, scalability, and resilience as data volumes increase.

According to Enecom, the investment was driven by customer requirements for high system reliability, concerns around cyber security, and the rising cost and operational impact of legacy storage platforms.

Masayuki Chikaraishi, Solution Service Department, Solution Business Division at Enecom, says, “When we were choosing how to upgrade our storage infrastructure, our customers told us that system reliability was particularly important and that the threat of damage caused by cyberattacks was a major concern.

“We also had to address the rising costs of the legacy systems and the fallout when hardware failures occurred. For the longer term, we needed to be proactive to be able to handle the expected future growth in cloud demand and to strengthen the appeal of our EneWings brand.”

Availability and cyber resilience focus

Enecom says it is using an active-active configuration across two InfiniBox systems to maintain service continuity during maintenance and software upgrades. Takashi Ueki, Solution Service Department, Solution Business Division at Enecom, notes, “Many of our customers are concerned that even the slightest outage will affect their business.
“By using two InfiniBox systems in an active-active cluster configuration, we can continue to provide services with higher reliability and peace of mind without interruption, even when performing maintenance or software version upgrades.”

Cyber resilience was also a key consideration. Enecom is using InfiniSafe features within the InfiniBox platform, including immutable snapshots and recovery capabilities, to support rapid restoration following cyber incidents.

Masayuki continues, “InfiniBox provides high-speed, tamper-proof, immutable snapshot creation as a standard feature to enable rapid recovery from a future cyberattack. Keeping data within Japan for data security reasons will become more important in the future.”

For more from Infinidat, click here.

Motivair by Schneider Electric introduces new CDUs
Motivair, a US provider of liquid cooling solutions for data centres and AI computing, owned by Schneider Electric, has introduced a new range of coolant distribution units (CDUs) designed to address the increasing thermal requirements of high-performance computing and AI workloads.

The new units are designed for installation in utility corridors rather than within the white space, reflecting changes in how liquid cooling infrastructure is being deployed in modern data centres. According to the company, this approach is intended to provide operators with greater flexibility when integrating cooling systems into different facility layouts. The CDUs will be available globally, with manufacturing scheduled to increase from early 2026.

Motivair states that the range supports a broader set of operating conditions, allowing data centre operators to use a wider range of chilled water temperatures when planning and operating liquid-cooled environments. The additions expand the company’s existing liquid cooling portfolio, which includes floor-mounted and in-rack units for use across hyperscale, colocation, edge, and retrofit sites.

Cooling design flexibility for AI infrastructure

Motivair says the new CDUs reflect changes in infrastructure design as compute densities increase and AI workloads become more prevalent. The company notes that operators are increasingly placing CDUs outside traditional IT spaces to improve layout flexibility and maintenance access, as having multiple CDU deployment options allows cooling approaches to be aligned more closely with specific data centre designs and workload requirements.

The company highlights space efficiency, broader operating ranges, easier access for maintenance, and closer integration with chiller plant infrastructure as key considerations for operators planning liquid cooling systems.
Andrew Bradner, Senior Vice President, Cooling Business at Schneider Electric, says, “When it comes to data centre liquid cooling, flexibility is the key, with customers demanding a more diverse and larger portfolio of end-to-end solutions.

“Our new CDUs allow customers to match deployment strategies to a wider range of accelerated computing applications while leveraging decades of specialised cooling experience to ensure optimal performance, reliability, and future-readiness.”

The launch marks the first new product range from Motivair since Schneider Electric acquired the company in February 2025. Rich Whitmore, CEO of Motivair, comments, “Motivair is a trusted partner for advanced liquid cooling solutions and our new range of technologies enables data centre operators to navigate the AI era with confidence.

“Together with Schneider Electric, our goal is to deliver next-generation cooling solutions that adapt to any HPC, AI, or advanced data centre deployment to deliver seamless scalability, performance, and reliability when it matters most.”

For more from Schneider Electric, click here.


