Data Centre Infrastructure News & Trends


Rethinking cooling, power, and design for AI
In this article for DCNN, Gordon Johnson, Senior CFD Manager at Subzero Engineering, shares his predictions for the data centre industry in 2026. He explains that surging rack densities and GPU power demands are pushing traditional air cooling beyond its limits, driving the industry towards hybrid cooling environments where airflow containment, liquid cooling, and intelligent controls operate as a single system. These trends point to the rise of a fundamentally different kind of data centre - one that understands its own demands and actively responds to them.

Predictions for data centres in 2026

By 2026, the data centre will no longer function as a static host for digital infrastructure; instead, it will behave as a dynamic, adaptive system - one that evolves in real time alongside the workloads it supports. The driving force behind this shift is AI, which is pushing power, cooling, and physical design beyond previously accepted limits. Rack densities that once seemed impossible - 80 to 120 kW - are now commonplace, and as GPUs push past 700 W, the thermal cost of compute is redefining core engineering assumptions across the industry.

Traditional air-cooling strategies alone can no longer keep pace. However, the answer isn’t simply replacing air with liquid; what’s emerging instead is a hybrid environment where airflow containment, liquid cooling, and predictive controls operate together as a single, coordinated system.

As a result, the long-standing divide between “air-cooled” and “liquid-cooled” facilities is fading. Even in high-performing direct-to-chip (DTC) environments, significant residual heat must still be managed and removed by air. Preventing hot and cold air from mixing becomes critical - not just for stability, but for efficiency. In high-density and HPC environments, controlled airflow is now essential to reducing energy consumption and maintaining predictable performance.

By 2026, AI will also play a more active role in managing the thermodynamics of the data centre itself. Coolant distribution units (CDUs) are evolving beyond basic infrastructure into intelligent control points. By analysing workload fluctuations in real time, CDUs can adapt cooling delivery, protect sensitive IT equipment, and mitigate thermal events before they impact performance, making liquid cooling not only more reliable but also more secure and scalable.
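To make the idea of CDUs as intelligent control points a little more concrete, here is a minimal sketch of a workload-aware cooling loop. It is purely illustrative: the setpoint, gain, and speed limits are assumptions invented for the example, not details of any vendor's controller, and a production CDU would layer in redundancy, alarms, and far more sophisticated control logic.

```python
# Illustrative sketch only: a proportional control loop of the kind an
# "intelligent" CDU might run, nudging pump speed as coolant return
# temperature drifts from a setpoint. All values are assumed for the
# example and are not taken from the article.

SETPOINT_C = 45.0                    # assumed target return temperature (deg C)
KP = 4.0                             # assumed gain (% pump speed per deg C of error)
MIN_SPEED, MAX_SPEED = 20.0, 100.0   # assumed pump speed limits (%)

def next_pump_speed(current_speed: float, return_temp_c: float) -> float:
    """Raise pump speed when the coolant runs hot, lower it when cool."""
    error = return_temp_c - SETPOINT_C
    return max(MIN_SPEED, min(MAX_SPEED, current_speed + KP * error))

# Example: an AI training burst pushes return temperature up, then eases.
speed = 50.0
for temp in (45.0, 46.5, 48.0, 47.0, 45.5):
    speed = next_pump_speed(speed, temp)
    print(f"return {temp:.1f} C -> pump speed {speed:.1f}%")
```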
This evolution is accelerating the divide between legacy data centres and a new generation of AI-focused facilities. Traditional data centres were built for consistent loads and flexible whitespace. AI infrastructure demands something different: modular design, fault-predictive monitoring, and engineering frameworks proven at hyperscale. To fully unlock AI’s potential, data centre design must evolve alongside it.

Immersion cooling sits at the far end of this transition. While DTC remains the preferred solution today and for the foreseeable future, immersion is increasingly viewed as the long-term endpoint for ultra-high-density computing. It addresses thermal challenges that DTC can only partially relieve, enabling facilities to remove much of their airflow infrastructure altogether. Adoption remains gradual due to cost, maintenance requirements, and operational disruption, among other factors, but the real question is no longer if immersion will arrive, but how prepared operators will be when it eventually does.

At the same time, the pace of AI growth is exposing the limitations of global supply chains. Slow manufacturing cycles and delayed engineering can no longer support the speed of deployment required. For example, Subzero Engineering’s new manufacturing and R&D facility in Vietnam (serving the APAC region) reflects a broader shift towards localised production and highly skilled regional workforces. By investing in R&D, application engineering, and precision manufacturing, Subzero Engineering is building the capacity needed to support global demand while developing local expertise that strengthens the industry as a whole.

Taken together, these trends point to the rise of a fundamentally different kind of data centre - one that understands its own demands and actively responds to them. Cooling, airflow, energy, and structure are no longer separate considerations, but parts of a synchronised ecosystem. By 2026, data centres will become active contributors to the computing lifecycle itself. Operators that plan for adaptability today will be best positioned to lead in the next phase of the digital economy.

For more from Subzero Engineering, click here.

Schneider Electric names new VP
Global energy technology company Schneider Electric has appointed Matthew Baynes as Vice President of its Secure Power and Data Centre division for the UK and Ireland. Matthew takes up the role as both countries see rapid growth in digital infrastructure investment, driven by rising demand from artificial intelligence workloads, accelerated data centre construction, and government-backed initiatives.

Experience across data centre leadership

Matthew has worked in Schneider Electric’s data centre business for nearly 20 years. His most recent position was Global Vice President for Strategic Partners and Cloud and Service Providers, where he led a global team supporting colocation, cloud, and hyperscale customers. Earlier roles included Global Colocation Segment Director, where he launched the company’s first multi-country account programme, now established as a core element of its global approach. Matthew has also held senior leadership positions in the UK and Ireland since Schneider Electric acquired APC in 2007 and worked for several years in the Netherlands supporting European operations.

Alongside his corporate responsibilities, Matthew has contributed to industry bodies including techUK and the European Data Centre Association, supporting policy engagement and sustainability initiatives.

Commenting on his appointment, Matthew says, “The UK is one of Europe’s most important and vibrant digital infrastructure hubs and, with AI accelerating demand, the next few years present a major opportunity to strengthen its global leadership position.

"At the same time, Ireland continues to play a critical role in the region’s digital ecosystem, with its data centre market serving key customers globally.

“Data centres are engines for jobs and competitiveness, supporting growth that benefits the digital economy and local communities and empowers innovation. This is a pivotal moment to shape their role in the UK and Ireland’s digital future, and I’m delighted to accept this new role at such a crucial time.”

Pablo Ruiz-Escribano, Senior Vice President for the Secure Power and Data Centre division in Europe, adds, “Matthew’s deep experience in global strategy and both local and regional execution makes him uniquely positioned to lead our Secure Power business in the UK and Ireland during this critical period of growth.”

Matthew assumes the role with immediate effect.

For more from Schneider Electric, click here.

SPAL targets data centre cooling needs
SPAL Automotive, an Italian manufacturer of electric cooling fans and blowers, traditionally for automotive and industrial applications, is preparing to showcase its cooling technology at Data Centre World in London in March 2026, with a particular focus on brushless drive water pumps used in data centre thermal management.

The pumps are designed for stationary applications where cooling demand is continuous and high. They feature software control compatibility - including CAN, PWM, and LIN - supporting precise regulation of coolant flow and temperature. The company says the pumps consume less power than mechanically driven units and use IP6K9K-rated brushless systems intended to mitigate issues such as overload, reverse polarity, and overvoltage.

The role of cooling components in data centres

Alongside its pumps, SPAL will display its wider cooling portfolio, which includes fans and blowers designed for controlled airflow and heat dissipation. The company plans to highlight the use of matched replacement components, particularly for systems that rely on coordinated assemblies of fans, pumps, and related controls.

James Bowett, General Manager at SPAL UK, says, “In a world where costs are constantly under pressure, it’s false economy to opt for cheaper parts, as this will affect not only the performance of the component itself, but also the entire suite of parts within a system.

"The only way to ensure effective, reliable, long-life operation is to replicate the set-up installed at the point of manufacture. That means choosing the best calibre parts throughout.”

SPAL states that its products are supplied with a four-year manufacturer’s warranty and are used to help maintain stable conditions for sensitive electronics. The company highlights that the growth of data centres linked to AI and cloud services is increasing demand for equipment designed specifically for energy efficiency, water use, and controlled cooling.

SPAL will exhibit on Stand F15 at Data Centre World, held at ExCeL London on 4–5 March 2026.
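As a loose sketch of the kind of software control the article mentions (CAN, PWM, and LIN interfaces), the snippet below maps a requested coolant flow to a PWM duty cycle. The flow range, duty-cycle window, and linear response are illustrative assumptions for the example, not SPAL specifications.

```python
# Hypothetical sketch of PWM-style pump speed control. The maximum flow,
# duty-cycle limits, and linear flow response below are assumptions made
# for illustration; a real pump's flow curve would come from its datasheet.

MAX_FLOW_LPM = 240.0                # assumed maximum pump flow (litres/min)
MIN_DUTY, MAX_DUTY = 10.0, 100.0    # assumed usable duty-cycle window (%)

def duty_for_flow(target_lpm: float) -> float:
    """Map a requested coolant flow to a PWM duty cycle, assuming a
    roughly linear flow response across the usable window."""
    fraction = max(0.0, min(1.0, target_lpm / MAX_FLOW_LPM))
    return MIN_DUTY + fraction * (MAX_DUTY - MIN_DUTY)

print(f"{duty_for_flow(120.0):.0f}% duty for 120 L/min")  # -> 55% duty
```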

Jabil acquires Hanley Energy Group
Jabil, a US provider of electronics manufacturing and supply chain services, has completed the acquisition of Hanley Energy Group, a provider of energy management and critical power systems for the data centre infrastructure market.

The transaction was completed on 2 January 2026 and was valued at approximately $725 million (£536 million), with contingent consideration of up to $58 million (£42.8 million) linked to future revenue targets. The acquisition was completed as an all-cash transaction. TM Capital acted as exclusive financial adviser to Hanley Energy Group, while UBS Investment Bank advised Jabil.

A focus on data centre power management

Jabil says the acquisition is intended to strengthen its capabilities in data centre power management, particularly as demand increases from artificial intelligence workloads. Hanley Energy Group operates across 13 locations globally - with headquarters in Stamullen, Ireland, and in Ashburn, Virginia, USA - employing around 850 staff.

Founded in 2009, Hanley Energy Group works across the design, supply, installation, and commissioning of power and energy management systems, supporting infrastructure from the grid through to the data centre rack. The company also provides lifecycle services, including maintenance and operational support.

Matt Crowley, Executive Vice President of Global Business Units, Intelligent Infrastructure at Jabil, comments, “We're excited to welcome Hanley Energy Group and their extensive expertise in power systems and energy optimisation to the Jabil team.

"Their know-how and capabilities complement Jabil’s existing power management solutions for data centres and will help us deploy and service them down to the rack level.”

Ed Bailey, Senior Vice President and Chief Technology Officer, Intelligent Infrastructure at Jabil, adds, “Data centre power management will only become more critical as hyperscalers ramp the availability of their AI technologies.

"This acquisition of Hanley Energy Group, coupled with our growing thermal management capabilities, aligns well with Jabil’s strategy to deliver custom solutions for the world’s AI leaders across the data centre lifecycle.”

Clive Gilmore, CEO of Hanley Energy Group, notes, “Joining forces with Jabil will supercharge our ability to deliver end-to-end, scalable, and energy-efficient solutions for the world’s most demanding data centre environments.

"Our customers will benefit from the expanded reach of Jabil’s global manufacturing footprint and supply chain, access to broader capabilities across the data centre lifecycle, and opportunities for sustainable growth to meet the evolving needs of AI hyperscalers.”

Dennis Nordon, Managing Director at Hanley Energy Group, concludes, “This is more than an acquisition; it’s a catalyst for the future of data centre power management. By joining with Jabil, we are positioned to lead the charge in delivering intelligent, sustainable solutions that empower hyperscalers to unlock the full potential of AI.”

1547 announces the McAllen Internet Exchange (MCT-IX)
fifteenfortyseven Critical Systems Realty (1547), a developer and operator of interconnected data centres and carrier hotels across North America, has announced the launch of the McAllen Internet Exchange, known as MCT-IX, located within the Chase Tower in McAllen, Texas.

Chase Tower has long operated as a carrier hotel and a key aggregation point for cross-border network traffic between the United States and Mexico. The introduction of an internet exchange within the building provides a local platform for traffic exchange in a facility already used by multiple network operators. MCT-IX has been formally registered with ARIN and is now accepting initial participants, with several networks already committing ports.

Interest in the exchange follows continued growth in network activity within Chase Tower. During 2025, the site has seen additional carrier deployments, capacity expansions by existing network operators, and increased demand for cross-connects. The building’s owner has invested more than $6 million (£4.4 million) in infrastructure upgrades, covering backup power, lifts, fire and life safety systems, and HVAC improvements.

Capacity expansion and interconnection investment

1547 has also expanded interconnection infrastructure within the building, including the development of a new meet-me room and a dedicated carrier room. The additional space is designed to support growing cross-connect demand and to provide direct access between networks, the new internet exchange, and other tenants within the facility.

Further capacity expansion is underway to support both existing data centre tenants and future MCT-IX participants; this includes an additional 500 kW of colocation capacity within Chase Tower, alongside a separate 3MW, 13,000ft² data centre annex. Both projects are scheduled for completion in Q4 2026.

J. Todd Raymond, CEO and Managing Director of 1547, says, “Announcing MCT-IX is an important milestone for both 1547 and the McAllen market.

"With formal ARIN recognition and early port commitments already underway, it is clear there is strong demand for an internet exchange that builds on the long-established interconnection ecosystem inside Chase Tower.

"As owners of the carrier hotel, we are committed to supporting this next phase of growth.”

The exchange is expected to reduce reliance on upstream routing that currently sends cross-border traffic outside the region before reaching its destination, giving networks a more local option for traffic exchange.

John Bonczek, Chief Revenue Officer of 1547, adds, “Across Chase Tower, we are seeing measurable increases in interconnection activity, from new deployments to expanded capacity and growing interest in route diversity.

"MCT-IX aligns with the needs of the ecosystem inside the building and complements our planned expansion.”

1547 says it will provide further updates as the exchange progresses through its launch phases and participation increases.

For more from 1547, click here.

Ireland’s first liquid-cooled AI supercomputer
CloudCIX, an Irish provider of open-source cloud computing platforms and data centre services, and AlloComp, an Irish provider of AI infrastructure and sustainable computing solutions, have announced the deployment of Ireland’s first liquid-cooled NVIDIA HGX-based supercomputer at CloudCIX’s facility in Cork, marking a milestone in the country’s AI and high performance computing (HPC) infrastructure.

Delivered recently and scheduled to go live in the coming weeks, the system is based on NVIDIA’s Blackwell architecture and supplied through Dell Technologies. It represents an upgrade to the Boole Supercomputer and is among the first liquid-cooled installations of this class in Europe. The upgraded system is intended to support industry users, startups, applied research teams, and academic spin-outs that require high performance, sovereign compute capacity within Ireland.

Installation and infrastructure requirements

The system stands more than 2.5 metres tall and weighs close to one tonne, requiring structural modifications during installation. Costellos Engineering carried out the building works and precision placement, including the creation of a new access point to accommodate the liquid-cooled rack.

Jerry Sweeney, Managing Director of CloudCIX, says, “More and more Irish companies are working with AI models that demand extreme performance and tight control over data. This upgrade gives industry, startups, and applied researchers a world-class compute platform here in Ireland, close to their teams, their systems, and their customers.”

The project was led by AlloComp, CloudCIX’s AI infrastructure partner, which supported system selection, supply coordination, and technical deployment.

AlloComp co-founder Niall Smith comments, “[The] Boole Supercomputer upgrade represents a major step forward in what’s possible with AI, but it also marks a fundamental shift in the infrastructure required to power it.

"Traditional data centres average roughly 8 kW per rack; today’s advanced AI systems are already at [an] unprecedented 120 kW per rack, and the next generation is forecast to reach 600 kW. Liquid cooling is no longer optional; it is the only way to deliver the density, efficiency, and performance for demanding AI workloads.”

Kasia Zabinska, co-founder of AlloComp, adds, “Supporting CloudCIX in delivering Ireland’s first liquid-cooled system of this type is an important milestone. The result is a high-density platform designed to give Irish teams the performance, control, and sustainability they need to develop and deploy AI.”

The system is expected to support larger model training, advanced simulation, and other compute-intensive workloads across sectors including medtech, pharmaceuticals, manufacturing, robotics, and computer vision. CloudCIX says it will begin onboarding customers as part of its sovereign AI infrastructure offering in the near term.

To mark the deployment, CloudCIX and AlloComp plan to host a national event in January 2026, focused on supercomputing and next-generation AI infrastructure. The event will bring together industry, research, and policy stakeholders to discuss the role of sovereign and energy-efficient compute in Ireland’s AI development.
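The per-rack figures in Niall Smith's quote can be sanity-checked with basic thermodynamics: the mass flow needed to carry away heat Q at a temperature rise ΔT is Q / (c_p × ΔT). The sketch below assumes a 10 K rise, a typical but assumed design value not taken from the article, and shows why moving hundreds of kilowatts with air alone becomes impractical.

```python
# Back-of-envelope coolant flow for the rack densities quoted above.
# The 10 K temperature rise is an assumed design value; specific heats
# and air density are standard physical constants at room conditions.

CP_WATER = 4186.0   # J/(kg*K)
CP_AIR = 1005.0     # J/(kg*K)
RHO_AIR = 1.2       # kg/m^3
DT = 10.0           # assumed coolant/air temperature rise (K)

def water_lpm(q_watts: float) -> float:
    """Water flow in litres/min (water is ~1 kg per litre)."""
    return q_watts / (CP_WATER * DT) * 60.0

def air_m3_per_s(q_watts: float) -> float:
    """Volumetric airflow in cubic metres per second."""
    return q_watts / (CP_AIR * DT) / RHO_AIR

for kw in (8, 120, 600):
    q = kw * 1000.0
    print(f"{kw:>3} kW rack: ~{water_lpm(q):,.0f} L/min of water "
          f"or ~{air_m3_per_s(q):,.1f} m^3/s of air")
```

At 120 kW per rack this works out to roughly 170 L/min of water versus around 10 m³/s of air, which is the practical gap behind the "liquid cooling is no longer optional" claim.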

Motivair by Schneider Electric introduces new CDUs
Motivair, a US provider of liquid cooling solutions for data centres and AI computing, owned by Schneider Electric, has introduced a new range of coolant distribution units (CDUs) designed to address the increasing thermal requirements of high performance computing and AI workloads.

The new units are designed for installation in utility corridors rather than within the white space, reflecting changes in how liquid cooling infrastructure is being deployed in modern data centres. According to the company, this approach is intended to provide operators with greater flexibility when integrating cooling systems into different facility layouts. The CDUs will be available globally, with manufacturing scheduled to increase from early 2026.

Motivair states that the range supports a broader set of operating conditions, allowing data centre operators to use a wider range of chilled water temperatures when planning and operating liquid-cooled environments. The additions expand the company’s existing liquid cooling portfolio, which includes floor-mounted and in-rack units for use across hyperscale, colocation, edge, and retrofit sites.

Cooling design flexibility for AI infrastructure

Motivair says the new CDUs reflect changes in infrastructure design as compute densities increase and AI workloads become more prevalent. The company notes that operators are increasingly placing CDUs outside traditional IT spaces to improve layout flexibility and maintenance access, as having multiple CDU deployment options allows cooling approaches to be aligned more closely with specific data centre designs and workload requirements.

The company highlights space efficiency, broader operating ranges, easier access for maintenance, and closer integration with chiller plant infrastructure as key considerations for operators planning liquid cooling systems.

Andrew Bradner, Senior Vice President, Cooling Business at Schneider Electric, says, “When it comes to data centre liquid cooling, flexibility is the key, with customers demanding a more diverse and larger portfolio of end-to-end solutions.

"Our new CDUs allow customers to match deployment strategies to a wider range of accelerated computing applications while leveraging decades of specialised cooling experience to ensure optimal performance, reliability, and future-readiness.”

The launch marks the first new product range from Motivair since Schneider Electric acquired the company in February 2025.

Rich Whitmore, CEO of Motivair, comments, “Motivair is a trusted partner for advanced liquid cooling solutions and our new range of technologies enables data centre operators to navigate the AI era with confidence.

"Together with Schneider Electric, our goal is to deliver next-generation cooling solutions that adapt to any HPC, AI, or advanced data centre deployment to deliver seamless scalability, performance, and reliability when it matters most.”

For more from Schneider Electric, click here.

ABB, Ark deploy medium voltage UPS in UK
ABB, a multinational corporation specialising in industrial automation and electrification products, has completed what it describes as the UK’s first deployment of a medium voltage uninterruptible power supply (UPS) system at Ark Data Centres’ Surrey campus. The installation, with a capacity of 25MVA, is intended to support rising demand for high-density AI computing and large-scale digital workloads.

Ark Data Centres is among the early adopters of ABB’s medium voltage power architecture, which combines grid connection and UPS at the same voltage level to accommodate the growing electrical requirements of AI hardware. The project was delivered in partnership with ABB and JCA.

The installation forms part of Ark’s ongoing expansion, including electrical capacity for next-generation GPUs used in AI training and inference. These systems support high-throughput computing across sectors such as research, healthcare, finance, media, and entertainment, and require stable, scalable power infrastructure.

Medium voltage architecture for AI workloads

Andy Garvin, Chief Operating Officer at Ark Data Centres, comments, “AI is accelerating data centre growth and intensifying the pressure to deliver capacity that is efficient, resilient, and sustainable. With ABB, we’ve delivered a first-of-its-kind solution that positions Ark to meet these challenges while supporting the UK’s digital future.”

Stephen Gibbs, UK Distribution Solutions Marketing and Sales Director at ABB Electrification, adds, “We’re helping data centres design from day one for emerging AI workloads. Our medium voltage UPS technology is AI-ready and a critical step in meeting the power demands of future high-density racks. Delivered as a single solution, we are supporting today’s latest technology and futureproofing for tomorrow’s megawatt-powered servers.

"ABB’s new medium voltage data centre architecture integrates HiPerGuard, the industry’s first solid-state medium voltage UPS, with its UniGear MV switchgear and Zenon ZEE600 control system into a single, end-to-end system. This approach eliminates interface risks and streamlines coordination across design, installation, and commissioning.”

Steve Hill, Divisional Contracts Director at JCA, says, “Delivering a project of this scale brings challenges. Having one partner responsible for the switchgear, UPS, and controls reduced complexity and helped keep the programme on track.

"Working alongside ABB, we were able to coordinate the installation and commissioning effectively so that Ark could benefit from the new system without delays or risks.”

The system reportedly provides up to 25MVA of conditioned power, achieving 98% efficiency under heavy load and freeing floor space for AI computing equipment. Stabilising power at medium voltage should also reduce generator intervention and energy losses.

For more from ABB, click here.
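A rough reading of the quoted figures: at 98% efficiency, about 2% of throughput is lost in conversion. Turning the 25 MVA rating into real power requires a power factor, which the article does not state, so the 0.9 used below is an assumption purely for illustration.

```python
# Rough arithmetic on the efficiency figure quoted above. The 25 MVA
# rating and 98% efficiency come from the article; the power factor and
# continuous full-load operation are illustrative assumptions.

RATING_MVA = 25.0
POWER_FACTOR = 0.9        # assumed; not stated in the article
EFFICIENCY = 0.98         # per the article, under heavy load
HOURS_PER_YEAR = 8760

real_power_mw = RATING_MVA * POWER_FACTOR
loss_mw = real_power_mw * (1 - EFFICIENCY)
annual_loss_mwh = loss_mw * HOURS_PER_YEAR

print(f"~{loss_mw:.2f} MW of conversion loss at full load, "
      f"~{annual_loss_mwh:,.0f} MWh/year if run continuously")
```

Under these assumptions the losses come to roughly 0.45 MW, which is why even a point or two of UPS efficiency matters at this scale.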

Supermicro launches liquid-cooled NVIDIA HGX B300 systems
Supermicro, a provider of application-optimised IT systems, has announced the expansion of its NVIDIA Blackwell architecture portfolio with new 4U and 2-OU liquid-cooled NVIDIA HGX B300 systems, now available for high-volume shipment. The systems form part of Supermicro's Data Centre Building Block approach, delivering GPU density and power efficiency for hyperscale data centres and AI factory deployments.

Charles Liang, President and CEO of Supermicro, says, "With AI infrastructure demand accelerating globally, our new liquid-cooled NVIDIA HGX B300 systems deliver the performance density and energy efficiency that hyperscalers and AI factories need today.

"We're now offering the industry's most compact NVIDIA HGX B300 options - achieving up to 144 GPUs in a single rack - whilst reducing power consumption and cooling costs through our proven direct liquid-cooling technology."

System specifications and architecture

The 2-OU liquid-cooled NVIDIA HGX B300 system, built to the 21-inch OCP Open Rack V3 specification, enables up to 144 GPUs per rack. The rack-scale design features blind-mate manifold connections, modular GPU and CPU tray architecture, and component liquid cooling. The system supports eight NVIDIA Blackwell Ultra GPUs at up to 1,100 watts thermal design power each. A single ORV3 rack supports up to 18 nodes with 144 GPUs total, scaling with NVIDIA Quantum-X800 InfiniBand switches and Supermicro's 1.8-megawatt in-row coolant distribution units.

The 4U Front I/O HGX B300 Liquid-Cooled System offers the same compute performance in a traditional 19-inch EIA rack form factor for large-scale AI factory deployments. The 4U system uses Supermicro's DLC-2 technology to capture up to 98% of heat generated by the system through liquid cooling.

Supermicro NVIDIA HGX B300 systems feature 2.1 terabytes of HBM3e GPU memory per system. Both the 2-OU and 4U platforms deliver performance gains at cluster level by doubling compute fabric network throughput up to 800 gigabits per second via integrated NVIDIA ConnectX-8 SuperNICs when used with NVIDIA Quantum-X800 InfiniBand or NVIDIA Spectrum-4 Ethernet.

With the DLC-2 technology stack, data centres can reportedly achieve up to 40% power savings, reduce water consumption through 45°C warm water operation, and eliminate chilled water and compressors. Supermicro says it delivers the new systems as fully validated, tested racks before shipment.

The systems expand Supermicro's portfolio of NVIDIA Blackwell platforms, including the NVIDIA GB300 NVL72, NVIDIA HGX B200, and NVIDIA RTX PRO 6000 Blackwell Server Edition. Each system is also NVIDIA-certified.

For more from Supermicro, click here.
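The rack-level numbers in the announcement can be checked with simple arithmetic. The GPU counts and the 1,100 W figure come from the article; treating GPU TDP as the whole rack heat load, and applying the quoted 98% capture ratio to it, are simplifying assumptions (CPUs, NICs, and memory add further heat).

```python
# Sanity-checking the per-rack figures quoted above. GPU counts, TDP,
# and the 98% heat-capture ratio are from the article; everything else
# (GPU heat only, capture applied per GPU) is a simplifying assumption.

GPUS_PER_NODE = 8        # per the article
NODES_PER_RACK = 18      # per the article (ORV3 rack)
GPU_TDP_W = 1100         # per the article
HEAT_CAPTURE = 0.98      # DLC-2 capture ratio per the article

gpus = GPUS_PER_NODE * NODES_PER_RACK
gpu_heat_kw = gpus * GPU_TDP_W / 1000
to_liquid_kw = gpu_heat_kw * HEAT_CAPTURE

print(f"{gpus} GPUs per rack, ~{gpu_heat_kw:.0f} kW of GPU heat, "
      f"~{to_liquid_kw:.0f} kW removed by liquid, "
      f"~{gpu_heat_kw - to_liquid_kw:.1f} kW left to air")
```

That confirms the headline figure of 144 GPUs per rack and puts GPU heat alone at roughly 158 kW, well into the territory where direct liquid cooling is essential.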

Siemens, nVent develop reference design for AI DCs
German multinational technology company Siemens and nVent, a US manufacturer of electrical connection and protection systems, are collaborating on a liquid cooling and power reference architecture intended for hyperscale AI environments. The design aims to support operators facing rising power densities, more demanding compute loads, and the need for modular infrastructure that maintains uptime and operational resilience.

The joint reference architecture is being developed for 100MW-scale AI data centres using liquid-cooled infrastructure such as the NVIDIA DGX SuperPOD with GB200 systems. It combines Siemens’ electrical and automation technology with NVIDIA’s reference design framework and nVent’s liquid cooling capabilities. The companies state that the architecture is structured to be compatible with Tier III design requirements.

Reference model for power and cooling integration

“We have decades of expertise supporting customers’ next-generation computing infrastructure needs,” says Sara Zawoyski, President of Systems Protection at nVent. “This collaboration with Siemens underscores that commitment.

"The joint reference architecture will help data centre managers deploy our cutting-edge cooling infrastructure to support the AI buildout.”

Ciaran Flanagan, Global Head of Data Center Solutions at Siemens, adds, “This reference architecture accelerates time-to-compute and maximises tokens-per-watt, which is the measure of AI output per unit of energy.

“It’s a blueprint for scale: modular, fault-tolerant, and energy-efficient. Together with nVent and our broader ecosystem of partners, we’re connecting the dots across the value chain to drive innovation, interoperability, and sustainability, helping operators build future-ready data centres that unlock AI’s full potential.”

Reference architectures are increasingly used by data centre operators to support rapid deployment and consistent interface standards. They are particularly relevant as facilities adapt to higher rack-level densities and more intensive computing requirements.

Siemens says it contributes its experience in industrial electrical systems and automation, ranging from medium- and low-voltage distribution to energy management software. nVent adds that it brings expertise in liquid cooling, working with chip manufacturers, original equipment makers, and hyperscale operators.

For more from Siemens, click here.
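For readers unfamiliar with the "tokens-per-watt" metric Ciaran Flanagan cites, it is simply inference or training throughput divided by power draw; dimensionally, tokens per second per watt works out to tokens per joule. The numbers in the example below are placeholders chosen only to show the units, not figures from the article.

```python
# The "tokens-per-watt" metric written out. Throughput and power values
# below are placeholder assumptions for illustration; the article quotes
# no figures.

def tokens_per_watt(tokens_per_second: float, power_watts: float) -> float:
    """AI output per unit of energy: (tokens/s) / W = tokens per joule."""
    return tokens_per_second / power_watts

# Example: a cluster producing 1.2M tokens/s while drawing 2 MW.
print(f"{tokens_per_watt(1_200_000, 2_000_000):.2f} tokens per joule")
```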


