Innovations in Data Center Power and Cooling Solutions


Why resilient cooling systems are critical to reliability
In this exclusive article for DCNN, Dean Oliver, Area Sales Manager Commercial (South and London areas) at Spirotech, explores why uninterrupted operation of data centres - 24 hours a day, 365 days a year - is no longer optional, but essential. To achieve this, he believes robust backup systems, advanced infrastructure, and precision cooling are fundamental:

The importance of data
In today's digitally driven economy, data is the backbone of intelligent business decisions. From individuals and startups to multinational corporations and financial institutions, the protection of personal and commercial information is more vital than ever. The internet sparked a technological revolution that has continued to accelerate - ushering in innovations like cryptocurrencies and, more recently, the powerful rise of artificial intelligence (AI). While these developments are groundbreaking, they also highlight the need for caution and infrastructure readiness.

For most users, the importance of data centres only becomes clear when systems fail. A 30-minute outage can bring parts of the economy to a halt. If banks can’t process transactions, the consequences are immediate and widespread. Data breaches can have a significant impact on businesses, both operationally and financially. This year alone, several high-profile companies have been targeted. Marks & Spencer, for example, reportedly suffered losses of around £300 million over a six-week period following a cyberattack. These and other companies affected by such problems underline just how dependent our society is on digital infrastructure.

Cyberattacks, like denial-of-service (DoS) assaults, are a real and growing threat. But even without malicious intent, data centres must operate flawlessly, with zero downtime. Central to this is thermal management, including cooling systems that maintain optimal conditions to prevent system failure.
Why cooling is key
Data centres generate significant heat due to dense arrays of servers and network hardware. If temperatures are not precisely controlled, systems risk shutdown, data corruption, or permanent loss - an unacceptable risk for any organisation. Cooling solutions are mission-critical.

Given the security and performance demands on data centres, there’s no room for error. Cutting corners to save on cost can have catastrophic consequences. That’s why careful planning at the design stage is essential. This should factor in redundancy for all key components: chillers, pumps, pressurisation units, and more. Communication links between these systems must also be integrated to ensure coordinated operation.

The equation is simple: the more computing power you deploy, the greater the cooling demand. Cloud infrastructure consumes enormous amounts of energy and space, requiring tens of megawatts of power and covering thousands of square metres. If the cooling system fails - whether from chiller malfunction or control breakdown - data loss on a massive scale becomes a very real possibility. That’s why backup systems must be immediately responsive, guaranteeing continued operation under any condition.

Keeping systems operating
Today, innovative control systems, such as those offered by Spirotech, provide detailed insights into system performance and capture operational data from pumps, valves, pressurisation units, and vacuum degassers. This enables early detection of potential issues and provides trend analysis to support proactive maintenance. For example, vacuum degassers can show how much air has been removed over time, while pressurisation units monitor pressure levels, leak events, and top-up activity. These systems work in tandem, ensuring balance and continuity. If a fault occurs, alerts are instantly dispatched to relevant personnel.
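The trend analysis described above can be sketched in a few lines. The readings and thresholds below are illustrative assumptions, not Spirotech's actual telemetry format; the idea is simply that a rising top-up volume flags a probable leak before a hard pressure alarm ever trips.

```python
from statistics import mean

def leak_trend_alert(daily_topups, window=7, threshold_ratio=1.5):
    """Flag a probable leak when the recent average daily top-up volume
    exceeds the long-term baseline by threshold_ratio."""
    if len(daily_topups) < 2 * window:
        return False  # not enough history to establish a baseline
    baseline = mean(daily_topups[:-window])
    recent = mean(daily_topups[-window:])
    return baseline > 0 and recent / baseline >= threshold_ratio

# Hypothetical daily top-up volumes (litres) from a pressurisation unit:
# steady for ten days, then creeping upwards.
readings = [5, 6, 5, 4, 6, 5, 5, 6, 5, 5, 9, 11, 12, 10, 13, 12, 14]
print(leak_trend_alert(readings))  # → True: dispatch an alert
```

The same pattern applies to degasser data: a sudden increase in air removed per day points at fresh air ingress somewhere in the circuit.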
A poorly designed or maintained pressurisation system can result in negative pressure, leading to air ingress via vents and seals - or, conversely, excessive pressure that causes water discharge and frequent refills. Air and dirt separators are also crucial to system health, preventing build-up and ensuring smooth operation across all pipework and components.

Conclusion
Effective cooling is essential for data centre systems due to the high demands on security and performance; there's no tolerance for failure. Inadequate or poorly designed cooling can lead to disastrous outcomes, including potential large-scale data loss. To prevent this, detailed planning during the design phase is crucial. This includes building in redundancy for all major components like chillers, pumps, and pressurisation units, and ensuring these systems can communicate and function together reliably.

As computing capacity increases, so does the need for robust cooling. Modern cloud infrastructure uses vast amounts of power and physical space, placing even greater stress on cooling requirements. Therefore, backup systems must be fast-acting and fully capable of maintaining continuous operation to avoid downtime and protect data integrity, regardless of any component failures.

For more from Spirotech, click here.

Schneider Electric advances 800 VDC power systems
Schneider Electric, a French multinational specialising in energy management and industrial automation, has outlined its latest developments in 800 VDC power architectures, designed to meet the growing demands of high-density rack systems across next-generation data centres. The company says the move reflects a broader industry transition towards higher power densities and greater efficiency, as artificial intelligence (AI) workloads drive increased compute intensity. Schneider Electric is developing its approach around system-level design, integrating power conversion, protection, and metering to ensure performance, scalability, and safety.

Jim Simonelli, Chief Technology Officer for Data Centres at Schneider Electric, explains, “The move to 800 VDC is a natural evolution as compute density increases and Schneider Electric is committed to helping customers make that transition safely and reliably.

"Our expertise lies in understanding the full power ecosystem, from grid to server, and designing systems that integrate seamlessly and operate predictably.”

Collaboration with NVIDIA on 800 VDC sidecar
Schneider Electric is working with NVIDIA to develop an 800 VDC sidecar capable of powering racks of up to 1.2 MW. The sidecar is designed to support future generations of NVIDIA GPUs and accelerated computing infrastructure by converting AC power from the data centre into 800 VDC. According to Schneider Electric, the design enables safe and efficient megawatt-scale rack power delivery while helping to minimise infrastructure and material costs. The sidecar includes:

• Modular power conversion shelves
• Integrated short-term energy storage for load smoothing and backup
• Live Swap capability for safer maintenance
• High energy efficiency

The company says the development reflects its wider 'system-level' approach, which focuses on the complete power infrastructure rather than isolated components.
This includes optimising conversion technology, intelligent metering, and integrated protection systems to improve operational efficiency and support scalable, high-density deployments.

Safety, reliability, and validation
Schneider Electric reports that its 800 VDC power systems are backed by extensive modelling, simulation, and testing. This includes fault current and arc flash analysis, as well as certified laboratory environments designed to replicate real-world conditions. The company’s safety and validation processes are aimed at ensuring predictable performance, simplified maintenance, and operational resilience, which are all key factors for facilities deploying high-density AI and high-performance computing racks.

Dion Harris, Senior Director of HPC, Cloud, and AI Infrastructure Solutions GTM at NVIDIA, says, “Scalable power architectures are the foundation for next-generation AI infrastructure that maximises both performance and efficiency.

"NVIDIA and Schneider Electric are building on our longstanding partnership to design and deliver advanced 800 VDC power systems that will support AI applications driving the industrial AI revolution.”

For more from Schneider Electric, click here.
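The case for a higher bus voltage is straightforward Ohm's-law arithmetic: for a fixed rack power, current falls in proportion to voltage, and resistive conductor losses fall with the square of current. A quick sketch using the 1.2 MW rack figure cited above (the comparison voltages are illustrative, not Schneider Electric's published comparison):

```python
def rack_current_amps(power_w, voltage_v):
    """Current draw (I = P / V) for a given rack power and DC bus voltage."""
    return power_w / voltage_v

P = 1_200_000  # 1.2 MW rack, as cited for the NVIDIA sidecar
for v in (48, 415, 800):
    amps = rack_current_amps(P, v)
    print(f"{v:>4} V bus -> {amps:>8,.0f} A")
```

At 800 VDC, a 1.2 MW rack draws 1,500 A; at a legacy 48 VDC bus, the same power would require 25,000 A, with roughly 278 times the I²R conductor loss for the same copper cross-section. That difference is what drives busbar sizing and material cost at megawatt scale.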

Uncover the hidden risks in data centre resilience
In July 2024, a lightning arrester failure in Northern Virginia, USA, triggered a massive 1,500 MW load transfer across 70 data centres - handling over 70% of global internet traffic. The result? No customer outages, but a cascade of grid instability and unexpected generator behaviour that exposed critical vulnerabilities in power resilience.

Powerside’s latest whitepaper - entitled 'Data Centre Load Transfer Event – Critical Insights from Power Quality Monitoring' - delivers a technical case study from this unprecedented event, revealing:

• Why identical voltage disturbances led to vastly different data centre responses
• How power quality monitoring helped decode complex grid interactions
• What this means for future-proofing infrastructure in 'Data Centre Alley' and beyond

Whether you are managing mission-critical infrastructure or advising on grid stability, this is essential reading. You can download the full whitepaper by registering below.

Honeywell, LS Electric deal to boost data centre power
Honeywell, a US multinational specialising in building automation, has announced a global partnership with LS Electric, a South Korean manufacturer of electrical equipment and automation systems, to develop and market integrated hardware and software systems for power management and distribution in data centres and commercial buildings. The collaboration aims to simplify the integration of electrical and automation systems, improving operational efficiency, resilience, and energy management for operators of data-intensive and energy-critical facilities.

Integrated approach to power and automation
The partnership combines LS Electric’s experience in power infrastructure with Honeywell’s expertise in building automation and control systems. Together, the companies plan to offer systems that unify power distribution and building management, ensuring load and capacity are balanced to maintain resilience and uptime. The first joint products will include integrated switchgear and power management systems that help data centre operators control and distribute power more effectively. Future development will focus on new electrical monitoring systems using the Honeywell Forge platform, enhanced with AI and analytics, and LS Electric’s software capabilities. These systems will be designed to regulate energy distribution, improve efficiency, and provide predictive maintenance to reduce downtime and power quality issues.

Energy storage and grid resilience
Honeywell and LS Electric also plan to develop a grid- and building-aware battery energy storage system (BESS) for commercial and industrial buildings. The modular BESS will integrate LS Electric’s energy storage technology with Honeywell’s dynamic energy control software, allowing users to forecast and optimise energy sourcing and costs based on grid data, weather conditions, and other variables.
The companies said the technology will help manage growing energy demand from sectors such as data centres, which currently account for between 1% and 2% of global electricity consumption.

Billal Hammoud, President and CEO of Honeywell Building Automation, comments, “Our collaboration with LS Electric supports our continued focus on delivering smarter, scalable solutions for the world’s most critical industries.

"As the global demand for data and energy accelerates, this partnership combines our complementary strengths to deliver intelligent infrastructure that’s both resilient and efficient.”

JongWoo Kim, President of LS Electric Power Electric, adds, “Building on our expertise in power infrastructure and energy storage systems, we are expanding globally into the data centre and industrial building markets.

"Through our collaboration with Honeywell, we will provide solutions that help large-scale operators achieve both energy efficiency and reliability.”

For more from Honeywell, click here.
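Cost optimisation of the kind described, charging when grid power is cheap and discharging when it is dear, can be illustrated with a deliberately naive scheduler. This is a sketch of the general idea only, not Honeywell's or LS Electric's actual control logic; the prices and battery limits are invented.

```python
def schedule_bess(prices, capacity_kwh, power_kw):
    """Naive price arbitrage: charge at full power during the cheapest
    hours and discharge during the dearest, within the energy limit."""
    cycle_hours = int(capacity_kwh // power_kw)   # hours of full-power charging
    n = min(len(prices) // 2, cycle_hours)
    by_price = sorted(range(len(prices)), key=lambda h: prices[h])
    plan = ["idle"] * len(prices)
    for h in by_price[:n]:       # n cheapest hours
        plan[h] = "charge"
    for h in by_price[-n:]:      # n dearest hours
        plan[h] = "discharge"
    return plan

# Invented day-ahead prices (p/kWh) over an 8-hour window,
# with a hypothetical 200 kWh / 50 kW battery.
print(schedule_bess([10, 12, 30, 35, 8, 40, 9, 33], 200, 50))
```

A production system layers forecasting (grid data, weather), round-trip efficiency, and degradation costs on top of this skeleton; the "grid- and building-aware" part is precisely what replaces the fixed price list with live inputs.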

The shift from standby to strategic energy management
In this article, Laura Maciosek, Director Key Accounts at Cat Electric Power Division, talks about why shifting backup assets into primary power is becoming essential as grid constraints intensify:

It’s safe to say the energy landscape is changing, with significant shifts having taken place in the last 24 months. The data-driven society we live in, from streaming devices and smart appliances to AI processing, continues to move demand for data centres in just one direction: up.

As data centres experience this growth, utility power is no longer a given. Today, there’s no guarantee the local electrical grid can meet these increased power needs. In fact, many utilities I’ve talked with say it’ll be three to five years (or longer) before they can bring the required amount of power online. That puts data centre customers in a tricky position. How can they continue to expand and grow if there isn’t enough power and moving sites isn’t an option?

The answer includes rethinking power options, and that means considering the transition from using power assets largely for backup purposes to employing them as a primary power source. That’s a big change from the status quo. If you’re in a similar position, you can read our advice on how to navigate the transition on our blog.

Whether you’re ready to make the switch from standby to prime power at your data centre today – or simply weighing options for your next development or expansion – we’re here to help. We’ll work with you to find the right combination of assets and asset management software that fulfils your power requirements reliably and cost-effectively. Connect with one of our experts to get the process started.

For more from Caterpillar, click here.

A-Gas completes large-scale DC refrigerant recovery project
A-Gas, a company specialising in Lifecycle Refrigerant Management (LRM), has completed a major refrigerant recovery project for a global technology provider, marking a significant environmental milestone for the data centre sector. More than 73,000 lbs (33,000 kg) of R410A were safely recovered across five buildings containing 222 cooling units scheduled for decommissioning. The work, carried out under challenging summer conditions, prevented the release of greenhouse gases equivalent to 70,226 tonnes of carbon dioxide (CO₂e). The project was managed by A-Gas Rapid Recovery, the company’s on-site refrigerant recovery division, which specialises in high-speed, compliant recovery operations for commercial and industrial facilities.

Environmental and operational impact
A-Gas said the recovery operation demonstrated its commitment to safe and environmentally responsible refrigerant lifecycle management. The project not only reduced environmental impact, but also delivered financial benefits to the client through the A-Gas buyback programme. Rapid Recovery’s process is designed to complete complex projects quickly, with recovery speeds up to 10 times faster than conventional methods, helping reduce downtime during critical infrastructure transitions. The operation included full Environmental Protection Agency (EPA) documentation, refrigerant analysis, and regulatory compliance checks throughout. A-Gas said its approach combines global expertise with safety-first practices to help technology and data centre clients meet both operational and sustainability goals.

For more from A-Gas, click here.
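The CO₂e figure follows from the refrigerant's global warming potential (GWP): avoided emissions are simply mass recovered multiplied by GWP. The GWP value below is an assumption (a commonly cited 100-year figure for R410A is around 2,088), which is why the result lands close to, but not exactly on, the article's 70,226 tonnes; the exact factor A-Gas used is not stated.

```python
# Assumed 100-year GWP for R410A (values vary by IPCC assessment report).
GWP_R410A = 2088

def co2e_tonnes(refrigerant_kg, gwp):
    """Tonnes of CO2-equivalent avoided by recovering a refrigerant:
    mass (kg) x GWP, converted from kg to tonnes."""
    return refrigerant_kg * gwp / 1000

print(co2e_tonnes(33_000, GWP_R410A))  # → 68904.0, the same order as the cited figure
```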

Rethinking fuel control
In this exclusive article for DCNN, Jeff Hamilton, Fuel Oil Team Manager at Preferred Utilities Manufacturing Corporation, explores how distributed control systems can enhance reliability, security, and scalability in critical backup fuel infrastructure:

Distributed architecture for resilient infrastructure
Uninterrupted power is non-negotiable for data centres to provide continuity through every possible scenario, from extreme weather events to grid instability in an ageing infrastructure. Generators, of course, are central to this resilience, but we must also consider the fuel storage infrastructure that powers them. The way the fuel is monitored, delivered, and secured by a control system ultimately determines whether a backup system succeeds or fails when it is needed most.

The risks of centralised control
A traditional fuel control system typically uses a centralised controller such as a programmable logic controller (PLC) to manage all components. The PLC coordinates data from sensors, controls pumps, logs events, and communicates with building automation systems. Often, this controller connects through hardwired, point-to-point circuits that span large distances throughout the facility. This setup creates a couple of potential vulnerabilities:

1. If the central controller fails, the entire fuel system can be compromised. A wiring fault or software error may take down the full network of equipment it supports.
2. Cybersecurity is also a concern when using a centralised controller, especially if it’s connected to broader network infrastructure. A single breach can expose your entire system.

Whilst these vulnerabilities may be acceptable in some industrial situations, modern data centres demand more robust and secure solutions. Decentralisation in control architecture addresses these concerns.
Distributed logic and redundant communications
Next-generation fuel control systems are adopting architectures with distributed logic, meaning that control is no longer centralised in one location. Instead, each field controller, or “node”, has its own processor and local interface. These nodes operate autonomously, running dedicated programs for their assigned devices (such as tank level sensors or transfer pumps). These nodes then communicate with one another over redundant communication networks.

This peer-to-peer model eliminates the need for a master controller. If one node fails or if communication is interrupted, others continue operating without disruption. This means that pump operations, alarms, and safety protocols all remain active because each node has its own logic and control. This model increases both uptime and safety; it also simplifies installation. Since each node handles its own logic and display, it needs far less wiring than centralised systems. Adding new equipment involves simply installing a new node and connecting it to the network, rather than overhauling the entire system.

Built-in cybersecurity through architecture
A system’s underlying architecture plays a key role in determining its vulnerability to cyberattack. Centralised systems can provide a single entry point to an entire system. Distributed control architectures offer a fundamentally different security profile. Without a single controller, there is no single target. Each node operates independently and the communication network does not require internet-facing protocols. In some applications, distributed systems have even been configured to work in physical isolation, particularly where EMP protection is required. Attackers seeking to disrupt operations would need to compromise multiple nodes simultaneously, a task substantially more difficult than targeting a central controller.
Even if one segment is compromised or disabled, the rest of the system continues to function as designed. This creates a hardened, resilient infrastructure that aligns with zero-trust security principles.

Safety and redundancy by default
Of course, any fuel control system must not just be secure; it must also be safe. Distributed systems offer advantages here as well. Each node can be programmed with local safety interlocks. For example, if a tank level sensor detects overfill, the node managing that tank can shut off the pump without needing permission from a central controller. Other safety features often include dual-pump rotation to prevent uneven wear, leak detection, and temperature or pressure monitoring with response actions. These processes run locally and independently. Even if communication between nodes is lost, the safety routines continue. Additionally, touchscreens or displays on individual nodes allow on-site personnel to access diagnostics and system data from any node on the network. This visibility simplifies troubleshooting and provides more oversight of real-time conditions.

Scaling with confidence
Data centres require flexibility to grow and adapt. However, traditional control systems make changes like upgrading infrastructure, increasing power, and installing additional backup systems costly and complex, often requiring complete rewiring or reprogramming. Distributed control systems make scaling more manageable. Adding a new generator or day tank, for example, involves connecting a new controller node and loading its program. Since each node contains its own logic and communicates over a shared network, the rest of the system continues operating during the upgrade. This minimises downtime and reduces installation costs. Some systems even allow live diagnostics during commissioning, which can be particularly valuable when downtime is not an option.
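The local-interlock idea can be sketched in a few lines. The class, attribute names, and threshold below are illustrative assumptions, not Preferred's FSC API; the point is that the overfill routine lives on the node itself and needs no permission from, or connection to, a central controller.

```python
class TankNode:
    """Sketch of a field controller with local safety logic: the overfill
    interlock runs entirely on the node, with no master controller involved."""

    def __init__(self, tank_id, high_level_pct=90.0):
        self.tank_id = tank_id
        self.high_level_pct = high_level_pct  # illustrative overfill threshold
        self.pump_running = True
        self.alarms = []

    def on_level_reading(self, level_pct):
        # Local interlock: stop the transfer pump on overfill, even if
        # peer nodes or the communication network are unreachable.
        if level_pct >= self.high_level_pct and self.pump_running:
            self.pump_running = False
            self.alarms.append(
                f"{self.tank_id}: overfill at {level_pct:.1f}% - pump stopped"
            )

node = TankNode("day-tank-1")
node.on_level_reading(92.5)
print(node.pump_running, node.alarms)
```

In a peer-to-peer deployment, each tank, pump, and sensor group gets its own node like this one; the redundant network only carries coordination traffic and alarms, never the safety decision itself.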
A better approach for critical infrastructure
Data centres face incredible pressure to deliver continuous performance, efficiency, and resilience. Backup fuel systems are a vital part of this reliability strategy, but the way these systems are controlled and monitored is changing. Distributed control architectures offer a smarter, safer path forwards.

Preferred Utilities Manufacturing Corporation is committed to helping data centres better manage their critical operations. This commitment is reflected in products and solutions like its Preferred Fuel System Controller (FSC), a distributed control architecture that offers all the features described throughout this article, including redundant, masterless, node-based communication, providing secure, safe, and flexible fuel system control. With Preferred’s expertise, a distributed control architecture can be applied to system sizes ranging from 60 to 120 day tanks.

Arteco introduces ECO coolants for data centres
Arteco, a Belgian manufacturer of heat transfer fluids and direct-to-chip coolants, has expanded its coolant portfolio with the launch of ECO versions of its ZITREC EC product line, designed for direct-to-chip liquid cooling in data centres. Each product is manufactured using renewable or recycled feedstocks with the aim of delivering a significantly reduced product carbon footprint compared with fossil-based equivalents, while maintaining the same thermal performance and reliability.

Addressing growing thermal challenges
As demand for high-performance computing rises, driven by artificial intelligence (AI) and other workloads, operators face increasing challenges in managing heat loads efficiently. Arteco’s ZITREC EC line was developed to support liquid cooling systems in data centres, enabling high thermal performance and energy efficiency. The new ECO versions incorporate base fluids, propylene glycol (PG) or ethylene glycol (EG), sourced from certified renewable or recycled materials. By moving away from virgin fossil-based resources, ECO products aim to help customers reduce scope 3 emissions without compromising quality.

Serge Lievens, Technology Manager at Arteco, says, “Our comprehensive life cycle assessment studies show that the biggest environmental impact of our coolants comes from fossil-based raw materials at the start of the value chain.

"By rethinking those building blocks and incorporating renewable and/or recycled raw materials, we are able to offer products with significantly lower climate impact, without compromising on high quality and performance standards.”

Certification and traceability
Arteco’s ECO coolants use a mass balance approach, ensuring that renewable and recycled feedstocks are integrated into production while maintaining full traceability. The process is certified under the International Sustainability and Carbon Certification (ISCC) PLUS standard.
Alexandre Moireau, General Manager at Arteco, says, “At Arteco, we firmly believe the future of cooling must be sustainable. Our sustainability strategy focuses on climate action, smart use of resources, and care for people and communities. "This new family of ECO coolants is a natural extension of that commitment. Sustainability for us is a continuous journey, one where we keep researching, innovating, and collaborating to create better, cleaner cooling solutions.” For more from Arteco, click here.

Panduit launches EL2P intelligent PDU
Panduit, a manufacturer of electrical and network infrastructure solutions, has introduced the EL2P Intelligent Power Distribution Unit (iPDU), designed to improve power management in mission-critical data centre environments. With rising rack power densities driven by artificial intelligence (AI) workloads and broader digital transformation, the EL2P series provides data centre operators with tools to maintain uptime, optimise capacity, and support sustainability goals. Key aspects of the product include metering accuracy of ±0.5%, advanced cybersecurity, flexible outlet configurations, and integrated environmental sensing.

Features and capabilities
The EL2P iPDU includes an integrated colour touchscreen with automatic interface rotation for different installation orientations, intended to improve usability for technicians. Its hot-swappable controller and display module allow servicing or upgrades without interrupting power, reducing downtime risks. The outlets are designed to provide flexibility by supporting multiple configurations (C13, C15, C19, or C21) within a single unit. The iPDU also supports extended operating temperatures up to 60°C, making it suitable for high-density racks and constrained edge environments. Cybersecurity is addressed with compliance to UL 2900-1 and IEC 62443-4-2 standards, secure code signing, 802.1x authentication, and a USGv6-certified IPv6 stack.
Additional functions include:

• Dual 1Gb Ethernet with daisy-chain capability – enabling up to 64 iPDUs to share one IP address and switch port
• Native Cisco Nexus Dashboard integration – providing energy and sustainability insights without external hardware
• Secure Zero Touch Provisioning (sZTP) – for faster configuration and scalable deployment
• Redfish and RESTful API integration – ensuring compatibility with DCIM and cloud platforms

Available in single- and three-phase models, the EL2P series offers input capacities from 5kVA to 43.5kVA and comes with dual-rated approvals for both North America and EMEA.

Martin Kandziora, Senior Marketing Manager EMEA at Panduit, says, “The EL2P is a direct response to our customers’ demand for intelligent power management that simplifies installation, enhances security, and provides the granular visibility needed to future-proof operations.

"It combines cutting-edge features like hot-swappable controllers, dual 1Gb Ethernet, and best-in-class metering accuracy in a single platform.”

Panduit says the EL2P series is designed for colocation providers requiring tenant-level billing, hyperscale and cloud operators demanding high-density outlet configurations, and enterprises seeking scalable and secure power distribution.

For more from Panduit, click here.
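Redfish integration means per-outlet metering is exposed as JSON resources that DCIM or cloud tooling can poll over HTTPS. The payload below is a hypothetical illustration in the style of the DMTF Redfish power schemas; the EL2P's actual resource layout and property names may differ.

```python
import json

# Hypothetical Redfish-style outlet resource, as a DCIM poller might
# receive it; field names are illustrative, not Panduit's actual schema.
payload = json.loads("""
{
  "Id": "A1",
  "PowerWatts": {"Reading": 312.4},
  "EnergykWh": {"Reading": 1842.7},
  "Voltage": {"Reading": 229.8}
}
""")

def outlet_summary(outlet):
    """Extract the per-outlet metering readings a billing or capacity
    system would log (tenant-level billing needs exactly these fields)."""
    return {
        "id": outlet["Id"],
        "power_w": outlet["PowerWatts"]["Reading"],
        "energy_kwh": outlet["EnergykWh"]["Reading"],
    }

print(outlet_summary(payload))
```

With the daisy-chain feature, one poller address can front up to 64 units, so a loop over outlet resources like this covers an entire row of racks through a single IP.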

GF partners with NTT Facilities on sustainable cooling
GF, a provider of piping systems for data centre cooling systems, has announced a collaboration with NTT Facilities in Japan to support the development of sustainable cooling technologies for data centres. The partnership involves GF supplying pre-insulated piping for the 'Products Engineering Hub for Data Center Cooling', a testbed and demonstration site operated by NTT Facilities. The hub opened in April 2025 and is designed to accelerate the move from traditional chiller-based systems to alternatives such as direct liquid cooling.

Focus on energy-efficient cooling
GF is providing its pre-insulated piping for the facility’s water loop. The system is designed to support efficient thermal management, reduce energy losses, and protect against corrosion. GF’s offering covers cooling infrastructure from the facility level through to rack-level systems.

Wolfgang Dornfeld, President Business Unit APAC at GF, says, “Our partnership with NTT Facilities reflects our commitment to working side by side with customers to build smarter, more sustainable data centre infrastructure.

"Cooling is a critical factor in AI-ready data centres, and our polymer-based systems ensure performance, reliability, and energy efficiency exactly where it matters most.”

While the current project focuses on water transport within the facility, GF says it also offers a wider range of polymer-based systems for cooling networks. The company notes that these systems are designed to help improve uptime, increase reliability, and support sustainability targets.

For more from GF, click here.


