Data Centre Infrastructure News & Trends


Multi-million pound Heathrow data centre upgrade completed
Managed IT provider Redcentric has completed a multi-million pound electrical infrastructure upgrade at its Heathrow Corporate Park data centre in London. The project was partly funded through the Industrial Energy Transformation Fund, which supports high-energy organisations adopting lower-carbon technologies.

The programme included the replacement of legacy uninterruptible power supplies (UPS). As part of the upgrade, Centiel supplied StratusPower modular UPS equipment to protect an existing 7 MW critical load. Redcentric states the system design allows the facility to increase capacity to 10.5 MW without additional infrastructure work. The site reports a rise in UPS operating efficiency from below 90% to more than 97%, which could reduce future emissions over the expected lifecycle of the equipment.

Modular UPS deployment and installation

Paul Hone, Data Centre Facilities Director at Redcentric, comments, “Our London West colocation data centre is a strategically located facility that offers cost-effective ISO-certified racks, cages, private suites, and complete data halls, as well as significant on-site office space. The data centre is powered by 100% renewable energy, sourced solely from solar, wind, and hydro.

“In 2023, we embarked on a full upgrade across the facility, which included the electrical infrastructure and live replacement of legacy UPS before they reached end of life. This part of the project has now been completed with zero downtime or disruption.

“In addition, for 2026, we are also planning a further deployment of 12 MW of power protection from two refurbished data halls being configured to support AI workloads of the future.”

Aaron Oddy, Sales Manager at Centiel, adds, “A critical component of the project was the strategic removal of 22 MW of inefficient, legacy UPS systems. By replacing outdated technology with the latest innovation, we have dramatically improved efficiency, delivering immediate and substantial cost savings.

“StratusPower offers an exceptional 97.6% efficiency, dramatically increasing power utilisation and reducing the data centre's overall carbon footprint - a key driver for Redcentric.

“The legacy equipment was replaced by Centiel’s StratusPower UPS system, featuring 14 x 500 kW modular UPS systems. This delivered a significant reduction in physical size, while delivering greater resilience as a direct result of StratusPower’s award-winning, unique architecture.”

Durata carried out the installation work.

Paul Hone concludes, “Environmental considerations were a key driver for us. StratusPower is a truly modular solution, ensuring efficient running and maintenance of systems. Reducing the requirement for major midlife service component replacements further adds to its green credentials.

“With no commissioning issues [and] zero reliability challenges or problems with the product, we are already talking to the Centiel team about how they can potentially support us with power protection at our other sites.”

For more from Centiel, click here.
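As a back-of-envelope illustration of the efficiency figures quoted above (sub-90% legacy versus StratusPower's stated 97.6%), the sketch below estimates annual UPS losses at a steady 7 MW critical load. The flat load profile is a simplifying assumption, so treat the result as indicative only; real savings depend on redundancy configuration and how the load varies.

```python
# Back-of-envelope estimate of annual UPS losses at a steady 7 MW load.
# Efficiency figures come from the article; the flat load profile is a
# simplifying assumption, so the result is indicative only.

HOURS_PER_YEAR = 8760
LOAD_MW = 7.0

def annual_loss_mwh(load_mw: float, efficiency: float) -> float:
    """Energy drawn from the supply beyond the delivered load, per year."""
    input_mw = load_mw / efficiency  # power drawn upstream of the UPS
    return (input_mw - load_mw) * HOURS_PER_YEAR

legacy = annual_loss_mwh(LOAD_MW, 0.90)    # sub-90% legacy UPS
modern = annual_loss_mwh(LOAD_MW, 0.976)   # StratusPower's stated figure

print(f"Legacy losses: {legacy:,.0f} MWh/year")
print(f"Modern losses: {modern:,.0f} MWh/year")
print(f"Reduction:     {legacy - modern:,.0f} MWh/year")
```

Under these assumptions, the reduction is in the region of 5,000 MWh a year, which is consistent with the "immediate and substantial cost savings" claimed above.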

APR increases power generation capacity to 1.1GW
APR Energy, a US provider of fast-track mobile gas turbine power generation for data centres and utilities, has expanded its mobile power generation fleet after acquiring eight gas turbines, increasing its owned capacity from 850 MW to more than 1.1 GW.

The company says the investment reflects rising demand from data centre developers and utilities that require short-term power to support growth while permanent grid connections are delayed. APR currently provides generation for several global customers, including a major artificial intelligence data centre operator.

Across multiple regions, new transmission and grid reinforcement projects are taking years to deliver, creating a gap between available power and the needs of electricity-intensive facilities. APR reports growing enquiries from data centre operators that require capacity within months rather than years.

Rapid deployment for interim power

The company says its turbines can typically be delivered, installed, and brought online within 30 to 90 days, enabling organisations to progress construction schedules and maintain service reliability while longer-term infrastructure is built.

Chuck Ferry, Executive Chairman and Chief Executive Officer of APR Energy, comments, “The demand we are seeing is immediate and substantial.

“Data centres and utilities need dependable power now. Expanding our capacity allows us to meet that demand with speed, certainty, and proven execution.”

APR states that the expanded fleet positions it to support data centre growth at a time when grid access remains constrained, combining rapid deployment with operational experience across international markets.

Vertiv launches new MegaMod HDX configurations
Vertiv, a global provider of critical digital infrastructure, has introduced new configurations of its MegaMod HDX prefabricated power and liquid cooling system for high-density computing deployments in North America and EMEA.

The units are designed for environments using artificial intelligence and high-performance computing and allow operators to increase power and cooling capacity as requirements rise. Vertiv states the configurations give organisations a way to manage greater thermal loads while maintaining deployment speed and reducing space requirements.

The MegaMod HDX integrates direct-to-chip liquid cooling with air-cooled systems to meet the demands of pod-based AI and GPU clusters. The compact configuration supports up to 13 racks with a maximum capacity of 1.25 MW, while the larger combo design supports up to 144 racks and power capacities up to 10 MW. Both are intended for rack densities from 50 kW to above 100 kW.

Prefabricated scaling for high-density sites

The hybrid architecture combines direct-to-chip cooling with air cooling as part of a prefabricated pod. According to Vertiv, a distributed redundant power design allows the system to continue operating if a module goes offline, and a buffer-tank thermal backup feature helps stabilise GPU clusters during maintenance or changes in load. The company positions the factory-assembled approach as a method of standardising deployment and planning and supporting incremental build-outs as data centre requirements evolve.

The MegaMod HDX configurations draw on Vertiv’s existing power, cooling, and management portfolio, including the Liebert APM2 UPS (uninterruptible power supply), CoolChip CDU (cooling distribution unit), PowerBar busway system, and Unify infrastructure monitoring. Vertiv also offers compatible racks and OCP-compliant racks, CoolLoop RDHx rear-door heat exchangers, CoolChip in-rack CDUs, rack power distribution units, PowerDirect in-rack DC power systems, and CoolChip Fluid Network Rack Manifolds.

Viktor Petik, Senior Vice President, Infrastructure Solutions at Vertiv, says, “Today’s AI workloads demand cooling solutions that go beyond traditional approaches. With the Vertiv MegaMod HDX available in both compact and combo solution configurations, organisations can match their facility requirements while supporting high-density, liquid-cooled environments at scale.”

For more from Vertiv, click here.
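For a sense of how the quoted rack and power limits interact, the sketch below checks which constraint binds at a given rack density. The capacity figures are those stated above; the helper itself is illustrative arithmetic, not a Vertiv sizing tool.

```python
# Back-of-envelope check on the MegaMod HDX figures quoted above: how many
# racks of a given density fit within each configuration's stated limits.
# Purely illustrative; not an official sizing method.

CONFIGS = {
    "compact": {"max_racks": 13, "max_mw": 1.25},
    "combo":   {"max_racks": 144, "max_mw": 10.0},
}

def supported_racks(config: str, rack_kw: float) -> int:
    """Racks of `rack_kw` density that fit within both power and rack limits."""
    c = CONFIGS[config]
    by_power = int(c["max_mw"] * 1000 // rack_kw)  # power-limited rack count
    return min(by_power, c["max_racks"])

for density in (50, 100, 130):
    print(f"{density} kW racks -> compact: {supported_racks('compact', density)}, "
          f"combo: {supported_racks('combo', density)}")
```

Note that at 100 kW per rack the compact unit is power-limited below its 13-rack ceiling, which fits the 50 kW to 100+ kW density range Vertiv quotes.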

Janitza launches UMG 801 power analyser
Modern data centres often face a choice between designing electrical monitoring systems far beyond immediate needs or replacing equipment as sites expand. Janitza, a German manufacturer of energy measurement and power quality monitoring equipment, says its UMG 801 power analyser is designed to avoid this issue by allowing users to increase capacity from eight to 92 current measuring channels without taking systems offline.

The analyser is suited to compact switchboards, with a fully expanded installation occupying less DIN rail space than traditional designs that rely on transformer disconnect terminals. Each add-on module introduces eight additional measuring channels within a single sub-unit, reducing the physical footprint within crowded cabinets.

Expandable monitoring with fewer installation constraints

The core UMG 801 unit supports ten virtual module slots that can be populated in any mix, including conventional transformer modules, low-power modules, and digital input modules. Bridge modules allow measurement points to be located up to 100 metres away without consuming module capacity, reducing wiring impact and installation complexity.

Sampling voltage at 51.2 kHz, the analyser provides Class 0.2 accuracy across voltage, current, and energy readings. This level of precision is used in applications such as calculating power usage effectiveness (PUE) to two decimal places, as well as assessing harmonic distortion that may affect uninterruptible power supplies (UPS). Voltage harmonic analysis extends to the 127th order, and transient events down to 18 microseconds can be recorded. Onboard memory of 4 GB also ensures data continuity during network disruptions.

The system is compatible with ISO 50001 energy management frameworks and includes two Ethernet interfaces that can operate simultaneously to provide redundant communication paths. Native OPC UA and Modbus TCP/IP support enables direct communication with energy management platforms and legacy supervisory control systems, while whitelisting functions restrict access to approved devices. An RS-485 interface provides further support for older infrastructure.

Configuration is carried out through an integrated web server rather than proprietary software, and an optional remote display allows monitoring without opening energised cabinets. Installations typically start with a single base unit at the primary distribution level, with additional modules added gradually as demand grows, reducing the need for upfront expenditure and avoiding replacement activity that risks downtime.

Janitza’s remote display connects via USB and mirrors the analyser’s interface, providing visibility of all measurement channels from the switchboard front panel. Physical push controls enable parameter navigation, helping users access configuration and measurement information without opening the enclosure.

The company notes that carrying out upgrades without interrupting operations may support facilities that cannot accommodate downtime windows.

For more from Janitza, click here.
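The PUE figure Janitza mentions is a simple ratio, but reporting it to two decimal places is only meaningful if the underlying meters are accurate. A minimal sketch of the calculation follows; the readings are hypothetical placeholders standing in for metered feeds such as the analyser's measurement channels.

```python
# Minimal sketch of the PUE calculation referenced above:
# PUE = total facility energy / IT equipment energy. The readings below
# are hypothetical; in practice they would come from metered feeds.

def pue(total_facility_kwh: float, it_load_kwh: float) -> float:
    """Power Usage Effectiveness, rounded to two decimal places."""
    if it_load_kwh <= 0:
        raise ValueError("IT load must be positive")
    return round(total_facility_kwh / it_load_kwh, 2)

print(pue(total_facility_kwh=14_520.0, it_load_kwh=10_310.0))  # -> 1.41
```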

Data centre cooling options
Modern data centres require advanced cooling methods to maintain performance as power densities rise and workloads intensify. In light of this, BAC (Baltimore Aircoil Company), a provider of data centre cooling equipment, has shared some tips and tricks from its experts.

This comes as the sector continues to expand rapidly, with some analysts estimating an 8.5% annual growth rate over the next five years, pushing the market beyond $600 billion (£445 billion) by 2029. AI and machine learning are accelerating this trajectory. Goldman Sachs Research forecasts a near 200 TWh increase in annual power demand from 2024 to 2030, with AI projected to represent almost a fifth of global data centre load by 2028.

This growth places exceptional pressure on cooling infrastructure. Higher rack densities and more compact layouts generate significant heat, making reliable heat rejection essential to prevent equipment damage, downtime, and performance degradation. The choice of cooling system directly influences efficiency and Total-power Usage Effectiveness (TUE).

Cooling technologies inside the facility

Two primary approaches dominate internal cooling: air-based systems and liquid-based systems. Air-cooled racks have long been the standard, especially in traditional enterprise environments or facilities with lower compute loads. However, rising heat output, hotspots, and increased energy consumption are testing the limits of air-only designs, contributing to higher TUE and emissions.

Liquid cooling offers substantially greater heat-removal capacity. Different approaches include:

• Immersion cooling, which submerges IT hardware in non-conductive dielectric fluid, enabling efficient heat rejection without reliance on ambient airflow. Immersion tanks are commonly paired with evaporative or dry coolers outdoors, maximising output while reducing energy use. The method also enables denser layouts by limiting thermal constraints.

• Direct-to-chip cooling, which channels coolant through cold plates on high-load components such as CPUs and GPUs. While effective, it is less efficient than immersion and can introduce additional complexity.

Rear-door heat exchangers offer a hybrid path for legacy sites, removing heat at rack level without overhauling the entire cooling architecture.

Heat rejection outside the white space

Once captured inside the building, heat must be expelled efficiently. A spectrum of outdoor systems supports differing site priorities, including energy, water, and climate considerations. Approaches include the following (a rough water-use comparison follows this list):

• Dry coolers — These are increasingly used in water-sensitive regions. By using ambient air, they eliminate evaporative loss and offer strong Water Usage Effectiveness (WUE), though typically with higher power draw than evaporative systems. In cooler climates, they benefit from free cooling, reducing operational energy.

• Hybrid and adiabatic systems — These offer variable modes, balancing energy and water use. They switch between dry operation and wet operation as conditions change, helping operators reduce water consumption while still tapping evaporative efficiencies during peaks.

• Evaporative cooling — Through cooling towers or closed-circuit fluid coolers, this remains one of the most energy-efficient options where water is available. Towers evaporate water to remove heat, while fluid coolers maintain cleaner internal circuits, protecting equipment from contaminants.
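To make the water trade-off above concrete, the sketch below compares indicative Water Usage Effectiveness figures, where WUE is annual site water use in litres divided by annual IT energy in kWh. The consumption numbers are hypothetical placeholders chosen to show the ordering of the three approaches, not BAC data.

```python
# Illustrative WUE comparison across the heat-rejection options above.
# WUE = annual site water use (litres) / annual IT energy (kWh).
# All consumption figures are hypothetical placeholders.

def wue(annual_water_litres: float, annual_it_kwh: float) -> float:
    return annual_water_litres / annual_it_kwh

ANNUAL_IT_KWH = 10_000_000  # 10 GWh of IT load, for illustration

scenarios = {
    "dry cooler":       0,           # no evaporative loss
    "hybrid/adiabatic": 4_000_000,   # wet operation only at peaks
    "cooling tower":    18_000_000,  # evaporative operation year-round
}

for name, litres in scenarios.items():
    print(f"{name:18s} WUE = {wue(litres, ANNUAL_IT_KWH):.2f} L/kWh")
```

The ordering, not the exact values, is the point: dry coolers minimise water at the cost of fan power, while evaporative systems trade water for energy efficiency.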
With data centre deployment expanding across diverse climates, operators increasingly weigh water scarcity, power constraints, and sustainability targets. Selecting the appropriate external cooling approach requires evaluating both consumption profiles and regulatory pressures.

For more from BAC, click here.

Southco develops blind-mate mechanism for liquid cooling
Southco, a US manufacturer of engineered access hardware including latches, hinges, and fasteners, has developed a high-tolerance blind-mate floating mechanism designed for next-generation liquid-cooled data centres. The company says the design is intended to address mechanical tolerance challenges that affect cooling system efficiency and operational stability.

It notes that demand for liquid cooling is increasing as traditional air-cooling methods struggle to manage the higher power densities associated with AI workloads and high-performance computing. Adoption is accelerating further as operators pursue sustainability and targeted PUE reductions. Liquid cooling, however, requires reliable physical connections, with Southco highlighting that even small alignment deviations at manifold and cold-plate interfaces can disrupt coolant flow, increase pump energy consumption, and heighten the risk of leaks.

Managing mechanical deviation in liquid cooling systems

Citing guidance in the Open Compute Project’s rack-mounted manifold requirements, Southco notes that a 1mm deviation can raise flow resistance by 15%, leading to around a 7% increase in pump energy. In large facilities, these effects scale alongside thousands of connection points.

The company identifies several contributors to misalignment in operational environments:

• Accumulated tolerances between rack formats, including EIA-310-D and ORV3, which may reach ±3.2mm
• Displacement caused by vibration during transport and operation
• Thermal expansion of materials, including copper manifolds expanding more than 1mm over typical temperature ranges

Rigid, low-tolerance couplings can leave systems vulnerable to leaks, rising operational costs, and downtime risk. The newly introduced blind-mate floating mechanism is designed to absorb movement and compensate for these deviations. The product offers a floating tolerance of ±4mm radially, axial displacement absorption up to 6mm, and automatic self-centring when disconnected. The design is intended to support long-term leak prevention and meet standards applicable to OCP and ORV3 liquid cooling deployments.

Southco adds that the mechanism includes sealing rated to withstand high-pressure testing in line with ASME B31.3 requirements and is intended to support more than ten years of continuous operation. It uses universal quick-disconnect interfaces to enable “blind” maintenance without precise alignment or tooling.

The company positions the technology as a step towards enabling rapid maintenance, reducing equipment handling time, and lowering the risk of service interruption. It also points to reduced pump energy through lower flow resistance. Southco sees future development in integrating sensing for temperature, flow, and pressure; exploring lighter materials; and working towards greater standardisation across suppliers and data centre ecosystems.
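As a rough feel for the pump-energy point above: hydraulic power is pressure drop times volumetric flow, divided by pump efficiency. The sketch below propagates a 15% rise in pressure drop at constant flow; the article's ~7% energy figure comes from the cited OCP guidance and reflects the real pump and system curves, which this constant-flow idealisation does not capture. All input values here are hypothetical.

```python
# Rough sketch of why flow resistance drives pump energy. Hydraulic
# power: kPa * L/s = W, then scaled by pump efficiency. All figures
# below are hypothetical illustrations, not vendor or OCP data.

def pump_power_kw(delta_p_kpa: float, flow_lps: float,
                  efficiency: float = 0.7) -> float:
    """Pump input power in kW for a given pressure drop and flow."""
    return delta_p_kpa * flow_lps / efficiency / 1000

baseline   = pump_power_kw(delta_p_kpa=150, flow_lps=20)         # ~4.3 kW
misaligned = pump_power_kw(delta_p_kpa=150 * 1.15, flow_lps=20)  # +15% dP

print(f"baseline:   {baseline:.2f} kW")
print(f"misaligned: {misaligned:.2f} kW (+{(misaligned / baseline - 1) * 100:.0f}%)")
```

Multiplied across the thousands of connection points the article mentions, even single-digit percentage penalties per interface become a material facility-level load.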

LiquidStack secures 300MW CDU order from major US operator
LiquidStack, a global company specialising in liquid cooling for data centres, has announced a 300-megawatt order of coolant distribution unit (CDU) capacity from a major US-based data centre operator. The multi-site order will support AI-ready data centre deployments and highlights accelerating demand for scalable liquid cooling for high-density AI workloads.

The order comprises LiquidStack’s high-capacity CDU-1MW, designed to support rapid deployment, high performance, operational efficiency, and future scalability for next-generation data centre environments.

Liquid cooling for AI infrastructure

The customer, a long-established operator with a growing portfolio of AI-ready facilities across the United States, selected LiquidStack as its liquid cooling partner to support the expansion of AI-ready, high-density infrastructure. LiquidStack says its manufacturing and delivery capabilities enable accelerated fulfilment of the 300-megawatt order, supporting aggressive build-out timelines across the multiple sites.

“Orders of this size signal a clear inflection point for liquid cooling,” says Joe Capes, CEO of LiquidStack. “Operators are committing to liquid cooling as core infrastructure for AI, and LiquidStack is uniquely positioned to support that transition at scale.”

This announcement follows continued momentum for LiquidStack, including its inclusion on NVIDIA’s Recommended Vendor List for CDUs, the expansion of its manufacturing capacity in Carrollton, Texas, and increasing adoption of its CDU platforms to support AI and accelerated computing workloads.

For more from LiquidStack, click here.

Rethinking cooling, power, and design for AI
In this article for DCNN, Gordon Johnson, Senior CFD Manager at Subzero Engineering, shares his predictions for the data centre industry in 2026. He explains that surging rack densities and GPU power demands are pushing traditional air cooling beyond its limits, driving the industry towards hybrid cooling environments where airflow containment, liquid cooling, and intelligent controls operate as a single system.

Predictions for data centres in 2026

By 2026, the data centre will no longer function as a static host for digital infrastructure; instead, it will behave as a dynamic, adaptive system - one that evolves in real time alongside the workloads it supports. The driving force behind this shift is AI, which is pushing power, cooling, and physical design beyond previously accepted limits. Rack densities that once seemed impossible - 80 to 120 kW - are now commonplace. As GPUs push past 700 W, the thermal cost of compute is redefining core engineering assumptions across the industry.

Traditional air-cooling strategies alone can no longer keep pace. However, the answer isn’t simply replacing air with liquid; what’s emerging instead is a hybrid environment, where airflow containment, liquid cooling, and predictive controls operate together as a single, coordinated system. As a result, the long-standing divide between “air-cooled” and “liquid-cooled” facilities is fading. Even in high-performing direct-to-chip (DTC) environments, significant residual heat must still be managed and removed by air (see the sketch at the end of this article). Preventing hot and cold air from mixing becomes critical - not just for stability, but for efficiency. In high-density and HPC environments, controlled airflow is now essential to reducing energy consumption and maintaining predictable performance.

By 2026, AI will also play a more active role in managing the thermodynamics of the data centre itself. Coolant distribution units (CDUs) are evolving beyond basic infrastructure into intelligent control points. By analysing workload fluctuations in real time, CDUs can adapt cooling delivery, protect sensitive IT equipment, and mitigate thermal events before they impact performance, making liquid cooling not only more reliable but secure and scalable.

This evolution is accelerating the divide between legacy data centres and a new generation of AI-focused facilities. Traditional data centres were built for consistent loads and flexible whitespace. AI infrastructure demands something different: modular design, fault-predictive monitoring, and engineering frameworks proven at hyperscale. To fully unlock AI’s potential, data centre design must evolve alongside it.

Immersion cooling sits at the far end of this transition. While DTC remains the preferred solution today and for the foreseeable future, immersion is increasingly viewed as the long-term endpoint for ultra-high-density computing. It addresses thermal challenges that DTC can only partially relieve, enabling facilities to remove much of their airflow infrastructure altogether. Adoption remains gradual due to cost, maintenance requirements, and operational disruption - to name a few - but the real question is no longer if immersion will arrive, but how prepared operators will be when it eventually does.

At the same time, the pace of AI growth is exposing the limitations of global supply chains. Slow manufacturing cycles and delayed engineering can no longer support the speed of deployment required. For example, Subzero Engineering’s new manufacturing and R&D facility in Vietnam (serving the APAC region) reflects a broader shift towards localised production and highly skilled regional workforces. By investing in R&D, application engineering, and precision manufacturing, Subzero Engineering is building the capacity needed to support global demand while developing local expertise that strengthens the industry as a whole.

Taken together, these trends point to the rise of a fundamentally different kind of data centre - one that understands its own demands and actively responds to them. Cooling, airflow, energy, and structure are no longer separate considerations, but parts of a synchronised ecosystem. By 2026, data centres will become active contributors to the computing lifecycle itself. Operators that plan for adaptability today will be best positioned to lead in the next phase of the digital economy.

For more from Subzero Engineering, click here.
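A minimal sketch of the residual-heat point made earlier in the article: even with direct-to-chip cooling, some fraction of each rack's heat escapes to the room air and must be handled by containment and air systems. The 75% capture fraction below is an assumption for illustration, not a figure from the article.

```python
# Hedged sketch of residual air-side heat in a direct-to-chip (DTC)
# environment. The capture fraction is an illustrative assumption;
# real values depend on cold-plate coverage and component mix.

def air_side_load_kw(rack_kw: float, dtc_capture_fraction: float) -> float:
    """Heat (kW) left for the air-cooling path after DTC capture."""
    return rack_kw * (1.0 - dtc_capture_fraction)

for rack_kw in (80, 120):  # the density range cited in the article
    residual = air_side_load_kw(rack_kw, dtc_capture_fraction=0.75)
    print(f"{rack_kw} kW rack -> ~{residual:.0f} kW still rejected to air")
```

Even at this assumed capture rate, a 120 kW rack leaves roughly 30 kW for the air path, more than the entire design load of many legacy racks, which is why airflow containment remains critical in hybrid environments.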

Schneider Electric names new VP
Global energy technology company Schneider Electric has appointed Matthew Baynes as Vice President of its Secure Power and Data Centre division for the UK and Ireland. Matthew takes up the role as both countries see rapid growth in digital infrastructure investment, driven by rising demand from artificial intelligence workloads, accelerated data centre construction, and government-backed initiatives.

Experience across data centre leadership

Matthew has worked in Schneider Electric’s data centre business for nearly 20 years. His most recent position was Global Vice President for Strategic Partners and Cloud and Service Providers, where he led a global team supporting colocation, cloud, and hyperscale customers. Earlier roles included Global Colocation Segment Director, where he launched the company’s first multi-country account programme, now established as a core element of its global approach.

Matthew has also held senior leadership positions in the UK and Ireland since Schneider Electric acquired APC in 2007 and worked for several years in the Netherlands supporting European operations. Alongside his corporate responsibilities, Matthew has contributed to industry bodies including techUK and the European Data Centre Association, supporting policy engagement and sustainability initiatives.

Commenting on his appointment, Matthew says, “The UK is one of Europe’s most important and vibrant digital infrastructure hubs and, with AI accelerating demand, the next few years present a major opportunity to strengthen its global leadership position. At the same time, Ireland continues to play a critical role in the region’s digital ecosystem, with its data centre market serving key customers globally.

“Data centres are engines for jobs and competitiveness, supporting growth that benefits the digital economy and local communities and empowering innovation. This is a pivotal moment to shape their role in the UK and Ireland’s digital future, and I’m delighted to accept this new role at such a crucial time.”

Pablo Ruiz-Escribano, Senior Vice President for the Secure Power and Data Centre division in Europe, adds, “Matthew’s deep experience in global strategy and both local and regional execution makes him uniquely positioned to lead our Secure Power business in the UK and Ireland during this critical period of growth.”

Matthew assumes the role with immediate effect.

For more from Schneider Electric, click here.

SPAL targets data centre cooling needs
SPAL Automotive, an Italian manufacturer of electric cooling fans and blowers, traditionally for automotive and industrial applications, is preparing to showcase its cooling technology at Data Centre World in London in March 2026, with a particular focus on brushless drive water pumps used in data centre thermal management.

The pumps are designed for stationary applications where cooling demand is continuous and high. They feature software control compatibility - including CAN, PWM, and LIN - supporting precise regulation of coolant flow and temperature. The company says the pumps consume less power than mechanically driven units and use IP6K9K-rated brushless systems intended to mitigate issues such as overload, reverse polarity, and overvoltage.

The role of cooling components in data centres

Alongside its pumps, SPAL will display its wider cooling portfolio, which includes fans and blowers designed for controlled airflow and heat dissipation. The company plans to highlight the use of matched replacement components, particularly for systems that rely on coordinated assemblies of fans, pumps, and related controls.

James Bowett, General Manager at SPAL UK, says, “In a world where costs are constantly under pressure, it’s false economy to opt for cheaper parts, as this will not only affect the performance of the component itself, but the entire suite of parts within a system. The only way to ensure effective, reliable, long-life operation is to replicate the setup installed at the point of manufacture. That means choosing the best-calibre parts throughout.”

SPAL states that its products are supplied with a four-year manufacturer’s warranty and are used to help maintain stable conditions for sensitive electronics. The company highlights that the growth of data centres linked to AI and cloud services is increasing demand for equipment designed specifically for energy efficiency, water use, and controlled cooling.

SPAL will exhibit at Data Centre World, held at ExCeL London on 4-5 March 2026, on Stand F15.


