Data Centre Infrastructure News & Trends


ABB, Ark deploy medium voltage UPS in UK
ABB, a multinational corporation specialising in industrial automation and electrification products, has completed what it describes as the UK’s first deployment of a medium voltage uninterruptible power supply (UPS) system at Ark Data Centres’ Surrey campus. The installation, with a capacity of 25MVA, is intended to support rising demand for high-density AI computing and large-scale digital workloads.

Ark Data Centres is among the early adopters of ABB’s medium voltage power architecture, which combines grid connection and UPS at the same voltage level to accommodate the growing electrical requirements of AI hardware. The project was delivered in partnership with ABB and JCA.

The installation forms part of Ark’s ongoing expansion, including electrical capacity for next-generation GPUs used in AI training and inference. These systems support high-throughput computing across sectors such as research, healthcare, finance, media, and entertainment, and require stable, scalable power infrastructure.

Medium voltage architecture for AI workloads

Andy Garvin, Chief Operating Officer at Ark Data Centres, comments, “AI is accelerating data centre growth and intensifying the pressure to deliver capacity that is efficient, resilient, and sustainable. With ABB, we’ve delivered a first-of-its-kind solution that positions Ark to meet these challenges while supporting the UK’s digital future.”

Stephen Gibbs, UK Distribution Solutions Marketing and Sales Director at ABB Electrification, adds, “We’re helping data centres design from day one for emerging AI workloads. Our medium voltage UPS technology is AI-ready and a critical step in meeting the power demands of future high-density racks. Delivered as a single solution, we are supporting today’s latest technology and futureproofing for tomorrow’s megawatt-powered servers.

"ABB’s new medium voltage data centre architecture integrates HiPerGuard, the industry’s first solid-state medium voltage UPS, with its UniGear MV switchgear and Zenon ZEE600 control system into a single, end-to-end system. This approach eliminates interface risks and streamlines coordination across design, installation, and commissioning.”

Steve Hill, Divisional Contracts Director at JCA, says, “Delivering a project of this scale brings challenges. Having one partner responsible for the switchgear, UPS, and controls reduced complexity and helped keep the programme on track.

"Working alongside ABB, we were able to coordinate the installation and commissioning effectively so that Ark could benefit from the new system without delays or risks.”

The system reportedly provides up to 25MVA of conditioned power, achieving 98% efficiency under heavy load and freeing floor space for AI computing equipment. Stabilising power at medium voltage should also reduce generator intervention and energy losses.

For more from ABB, click here.
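The efficiency argument for distributing at medium voltage follows from basic three-phase arithmetic: for the same apparent power, a higher voltage means proportionally lower line current and therefore far lower I²R conduction losses. A minimal sketch of that relationship, using assumed illustrative voltages and cable resistance rather than figures from ABB or Ark:

```python
import math

def line_current(apparent_power_va: float, line_voltage_v: float) -> float:
    """Three-phase line current: I = S / (sqrt(3) * V_line)."""
    return apparent_power_va / (math.sqrt(3) * line_voltage_v)

def run_loss_w(current_a: float, resistance_ohm: float) -> float:
    """Total I^2 R conduction loss across all three phases of one cable run."""
    return 3 * current_a ** 2 * resistance_ohm

APPARENT_POWER = 25e6   # 25MVA, as reported for the Ark installation
RUN_RESISTANCE = 0.002  # assumed 2 milliohm per-phase conductor run (illustrative only)

for voltage in (400, 11_000):  # assumed low-voltage vs medium-voltage levels
    amps = line_current(APPARENT_POWER, voltage)
    loss = run_loss_w(amps, RUN_RESISTANCE)
    print(f"{voltage:>6} V: {amps:9,.0f} A per phase, "
          f"~{loss / 1e3:8,.1f} kW lost in the assumed run")
```

For context, the quoted 98% efficiency on a 25MVA system still implies conversion losses on the order of 500kW at full load, so keeping distribution currents low upstream matters at this scale.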

Supermicro launches liquid-cooled NVIDIA HGX B300 systems
Supermicro, a provider of application-optimised IT systems, has announced the expansion of its NVIDIA Blackwell architecture portfolio with new 4U and 2-OU liquid-cooled NVIDIA HGX B300 systems, now available for high-volume shipment. The systems form part of Supermicro's Data Centre Building Block approach, delivering GPU density and power efficiency for hyperscale data centres and AI factory deployments.

Charles Liang, President and CEO of Supermicro, says, "With AI infrastructure demand accelerating globally, our new liquid-cooled NVIDIA HGX B300 systems deliver the performance density and energy efficiency that hyperscalers and AI factories need today.

"We're now offering the industry's most compact NVIDIA HGX B300 options - achieving up to 144 GPUs in a single rack - whilst reducing power consumption and cooling costs through our proven direct liquid-cooling technology."

System specifications and architecture

The 2-OU liquid-cooled NVIDIA HGX B300 system, built to the 21-inch OCP Open Rack V3 specification, enables up to 144 GPUs per rack. The rack-scale design features blind-mate manifold connections, modular GPU and CPU tray architecture, and component-level liquid cooling. Each system supports eight NVIDIA Blackwell Ultra GPUs at up to 1,100 watts thermal design power each. A single ORV3 rack supports up to 18 nodes with 144 GPUs in total, scaling with NVIDIA Quantum-X800 InfiniBand switches and Supermicro's 1.8-megawatt in-row coolant distribution units.

The 4U Front I/O HGX B300 Liquid-Cooled System offers the same compute performance in a traditional 19-inch EIA rack form factor for large-scale AI factory deployments. The 4U system uses Supermicro's DLC-2 technology to capture up to 98% of the heat generated by the system through liquid cooling.

Supermicro NVIDIA HGX B300 systems feature 2.1 terabytes of HBM3e GPU memory per system. Both the 2-OU and 4U platforms deliver performance gains at cluster level by doubling compute fabric network throughput to up to 800 gigabits per second via integrated NVIDIA ConnectX-8 SuperNICs when used with NVIDIA Quantum-X800 InfiniBand or NVIDIA Spectrum-4 Ethernet.

With the DLC-2 technology stack, data centres can reportedly achieve up to 40% power savings, reduce water consumption through 45°C warm water operation, and eliminate chilled water and compressors. Supermicro says it delivers the new systems as fully validated, tested racks before shipment.

The systems expand Supermicro's portfolio of NVIDIA Blackwell platforms, including the NVIDIA GB300 NVL72, NVIDIA HGX B200, and NVIDIA RTX PRO 6000 Blackwell Server Edition. Each system is also NVIDIA-certified.

For more from Supermicro, click here.
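The rack-level figures above can be cross-checked with a few lines of arithmetic: 18 nodes of eight GPUs give 144 GPUs, and the per-GPU thermal design power sets the minimum heat the liquid loop and room air must between them remove. A rough sketch using only numbers from the announcement (GPU heat only; CPU, memory, and network overheads are excluded, and the 98% capture figure is the one quoted for the DLC-2 stack):

```python
GPUS_PER_NODE = 8            # NVIDIA HGX B300: eight Blackwell Ultra GPUs per system
NODES_PER_RACK = 18          # ORV3 rack population quoted in the announcement
GPU_TDP_W = 1_100            # up to 1,100 W thermal design power per GPU
LIQUID_CAPTURE = 0.98        # DLC-2: up to 98% of system heat removed by liquid
CDU_CAPACITY_W = 1_800_000   # 1.8 MW in-row coolant distribution unit

gpus_per_rack = GPUS_PER_NODE * NODES_PER_RACK   # 144 GPUs
gpu_heat_w = gpus_per_rack * GPU_TDP_W           # GPU heat only; other components excluded
to_liquid_w = gpu_heat_w * LIQUID_CAPTURE
to_air_w = gpu_heat_w - to_liquid_w

print(f"GPUs per rack:                {gpus_per_rack}")
print(f"GPU heat per rack:            {gpu_heat_w / 1e3:.1f} kW")
print(f"  removed by the liquid loop: {to_liquid_w / 1e3:.1f} kW")
print(f"  remaining to room air:      {to_air_w / 1e3:.1f} kW")
print(f"Racks per 1.8 MW CDU (GPU heat only): {CDU_CAPACITY_W // gpu_heat_w}")
```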

Siemens, nVent develop reference design for AI DCs
German multinational technology company Siemens and nVent, a US manufacturer of electrical connection and protection systems, are collaborating on a liquid cooling and power reference architecture intended for hyperscale AI environments. The design aims to support operators facing rising power densities, more demanding compute loads, and the need for modular infrastructure that maintains uptime and operational resilience.

The joint reference architecture is being developed for 100MW-scale AI data centres using liquid-cooled infrastructure such as the NVIDIA DGX SuperPOD with GB200 systems. It combines Siemens’ electrical and automation technology with NVIDIA’s reference design framework and nVent’s liquid cooling capabilities. The companies state that the architecture is structured to be compatible with Tier III design requirements.

Reference model for power and cooling integration

“We have decades of expertise supporting customers’ next-generation computing infrastructure needs,” says Sara Zawoyski, President of Systems Protection at nVent. “This collaboration with Siemens underscores that commitment.

"The joint reference architecture will help data centre managers deploy our cutting-edge cooling infrastructure to support the AI buildout.”

Ciaran Flanagan, Global Head of Data Center Solutions at Siemens, adds, “This reference architecture accelerates time-to-compute and maximises tokens-per-watt, which is the measure of AI output per unit of energy.

“It’s a blueprint for scale: modular, fault-tolerant, and energy-efficient. Together with nVent and our broader ecosystem of partners, we’re connecting the dots across the value chain to drive innovation, interoperability, and sustainability, helping operators build future-ready data centres that unlock AI’s full potential.”

Reference architectures are increasingly used by data centre operators to support rapid deployment and consistent interface standards. They are particularly relevant as facilities adapt to higher rack-level densities and more intensive computing requirements.

Siemens says it contributes its experience in industrial electrical systems and automation, ranging from medium- and low-voltage distribution to energy management software. nVent adds that it brings expertise in liquid cooling, working with chip manufacturers, original equipment makers, and hyperscale operators.

For more from Siemens, click here.
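Tokens-per-watt, as referenced in the quote above, is typically worked out as sustained token throughput divided by total facility power draw, which makes it numerically equivalent to tokens per joule; facility power is the IT load scaled by PUE. A minimal sketch of that calculation with entirely assumed example figures (none of these numbers come from Siemens, nVent, or NVIDIA):

```python
def tokens_per_watt(tokens_per_second: float, it_power_w: float, pue: float) -> float:
    """Token throughput per watt of facility power (numerically, tokens per joule)."""
    facility_power_w = it_power_w * pue   # IT load scaled by Power Usage Effectiveness
    return tokens_per_second / facility_power_w

# Entirely assumed figures, chosen only to illustrate the calculation
THROUGHPUT_TOKENS_PER_S = 2_000_000      # assumed sustained cluster-wide token throughput
IT_POWER_W = 50_000_000                  # assumed 50 MW of IT load

for pue in (1.5, 1.2, 1.1):
    tpw = tokens_per_watt(THROUGHPUT_TOKENS_PER_S, IT_POWER_W, pue)
    print(f"PUE {pue}: {tpw:.4f} tokens per watt-second (tokens/J)")
```

The sketch also shows why power and cooling design feed directly into the metric: lowering PUE raises tokens-per-watt for the same compute.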

tde expands tML breakout module for 800GbE ethernet
trans data elektronik (tde), a German manufacturer of fibre optic and copper cabling systems for data centres, has further developed its tML system to meet increased network requirements, with the new breakout modules now supporting transceivers up to 800GbE. QSFP, QSFP-DD, and OSFP transceivers can now be used more efficiently and split into ports with lower data rates (4 x 100GbE or 8 x 100GbE). This allows data centre and network operators to increase the port density of their switch and router chassis and make better use of existing hardware. The company says the new breakout module is particularly suitable for use in high-speed data centres and modern telecommunications infrastructures.

“Breakout applications have become firmly established in the high-speed sector,” explains André Engel, Managing Director of tde.

"With our tML breakout modules, customers can now use transceivers up to 800GbE and still split them into smaller, clearly structured port speeds.

"This allows them to combine maximum port density with very clear, structured cabling."

Efficient use of MPO-based high-speed transceivers

The current high-speed transceivers in the QSFP, QSFP-DD, and OSFP form factors have MPO connectors with 12, 16, or 24 fibres - in multimode (MM) or single-mode (SM). Typical applications such as SR4, DR4, and FR4 use eight fibres of the 12-fibre MPO, while SR8, DR8, and FR8 use sixteen fibres of a 16- or 24-fibre MPO.

This is where tde says it comes in with its tML breakout modules. Depending on the application, the modules split the incoming transmission rate into, for example, four 100GbE or eight 100GbE channels with LC duplex connections. This allows multiple dedicated links with lower data rates to be provided from a single high-speed port - for switches, routers, or storage systems, for example. Alternatively, special versions with other connector faces such as MDC, SN, SC, or E2000 are available.

Front MPO connectors and maximum packing density

tde also relies on front-integrated MPO connectors for the latest generation of tML breakout modules. The MPO connections are plugged in directly from the front via patch cables. Compared to conventional solutions with rear MPO connectors, this aims to simplify structured patching, ensure clarity in the rack, and facilitate moves, adds, and changes during operation. A high port density can be achieved without the need for separate fanout cables. Eight tML breakout modules can be installed in the tML module carrier within one height unit.

Future-proofing and investment protection

tde says it has designed the tML breakout module for maximum ease of use. Patching takes place entirely at the front patch panel level, supporting structured and clear cabling. Because modules in the tML module carrier can be mixed and matched according to the desired application and requirements, the breakout module should offer high packing density alongside flexibility. Fibre-optic and copper modules can also be combined.

André concludes, “With the addition of the tML breakout module, our tML system platform is well equipped for the future and will remain competitive in the long term.”
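The fibre counts quoted above follow from the lane structure of the optics: each breakout channel needs one transmit and one receive fibre, so a four-lane module occupies eight positions on a 12-fibre MPO and an eight-lane module occupies sixteen on a 16- or 24-fibre MPO. A small sketch of how such a mapping might be modelled when planning breakout port assignments (an illustrative model, not a tde tool; the specific optic names are examples):

```python
from dataclasses import dataclass

@dataclass
class BreakoutOption:
    transceiver: str     # form factor and optic type (example entries)
    mpo_fibres: int      # fibre count of the MPO connector
    lanes: int           # optical lanes, i.e. breakout channels
    lane_rate_gbe: int   # data rate per breakout channel

    @property
    def fibres_used(self) -> int:
        # one transmit plus one receive fibre per lane
        return self.lanes * 2

# Illustrative entries based on the applications named in the article
OPTIONS = [
    BreakoutOption("QSFP 400GbE DR4",    12, 4, 100),
    BreakoutOption("QSFP-DD 800GbE DR8", 16, 8, 100),
    BreakoutOption("OSFP 800GbE SR8",    24, 8, 100),
]

for opt in OPTIONS:
    print(f"{opt.transceiver:20s}: uses {opt.fibres_used:2d} of {opt.mpo_fibres} MPO fibres, "
          f"breaks out to {opt.lanes} x {opt.lane_rate_gbe}GbE duplex ports")
```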

Energy Estate unveils Tasmanian subsea cables and hubs
Energy Estate Digital, a digital infrastructure platform backed by Energy Estate, has set out plans for new data centre hubs and subsea connectivity in Tasmania as part of a wider programme to support the growth of artificial intelligence infrastructure across Australia, New Zealand, and key international markets.

The company is developing subsea cable routes between Australia and New Zealand, as well as to major global hubs including California, Japan, and India. These new links are intended to support the expanding AI sector by connecting regions that offer land availability, renewable energy potential, and access to water resources.

The platform, launched in December 2024, aligns with national objectives under the Australian National AI Plan announced recently by the Federal Government. As part of its approach to sovereign capability, the company says it intends to offer “golden shares” to councils and economic development agencies in landing-point regions.

Two proposed subsea cable landings in Tasmania will form part of the network: the CaliNewy route from California will come ashore at Bell Bay, while the IndoMaris route from Oman and India will land near Burnie. These proposed locations are designed to complement existing cable links between Victoria and Tasmania and future upgrades anticipated through the Marinus Link project.

Large-scale energy and infrastructure precincts are expected to develop around these landings, hosting AI facilities, data centre campuses, and other power-intensive industries such as manufacturing, renewable fuels production, and electrified transport. These precincts will be supported by renewable energy and storage projects delivered by Energy Estate and its partners.

Partnership to develop industrial and digital precincts

Energy Estate has signed a memorandum of understanding with H2U Group to co-develop energy and infrastructure precincts in Tasmania, beginning with the Bell Bay port and wider industrial area. In 2025, H2U signed a similar agreement with TasPorts to explore a large-scale green hydrogen and ammonia facility within the port.

Bell Bay has been identified by the Tasmanian Government and the Australian Federal Government as a strategic location for industrial development, particularly for hydrogen and green manufacturing projects. Energy Estate and H2U plan to produce a masterplan that builds on existing infrastructure, access to renewable energy, and the region’s established industrial expertise. The work will also align with ongoing efforts within the Bell Bay Advanced Manufacturing Zone.

The digital infrastructure hub proposed for Bell Bay will be the first of three locations Energy Estate intends to develop in Northern Tasmania. The company states that the scale of interest reflects Tasmania’s emerging position as a potential global centre for AI-related activity.

Beyond Tasmania, Energy Estate is advancing similar developments in other regions, including the Hunter in New South Wales; Bass Coast and Portland in Victoria; Waikato, Manawatu, and South Canterbury in New Zealand; and the Central Valley in California.

BAC immersion cooling tank gains Intel certification
BAC (Baltimore Aircoil Company), a provider of data centre cooling equipment, has received certification for its immersion cooling tank as part of the Intel Data Center Certified Solution for Immersion Cooling, covering fourth- and fifth-generation Xeon processors. The programme aims to validate immersion technologies that meet the efficiency and sustainability requirements of modern data centres.

The Intel certification process involved extensive testing of immersion tanks, cooling fluids, and IT hardware. It was developed collaboratively by BAC, Intel, ExxonMobil, Hypertec, and Micron. The programme also enables Intel to offer a warranty rider for single-phase immersion-cooled Xeon processors, providing assurance on durability and hardware compatibility.

Testing was carried out at Intel’s Advanced Data Center Development Lab in Hillsboro, Oregon. BAC’s immersion cooling tanks, including its CorTex technology, were used to validate performance, reliability, and integration between cooling fluid and IT components.

“Immersion cooling represents a critical advancement in data centre thermal management, and this certification is a powerful validation of that progress,” says Jan Tysebeart, BAC’s General Manager of Data Centers. “Our immersion cooling tanks are engineered for the highest levels of efficiency and reliability.

"By participating in this collaborative certification effort, we’re helping to ensure a trusted, seamless, and superior experience for our customers worldwide.”

Joint testing to support industry adoption

The certification builds on BAC’s work in high-efficiency cooling design. Its Cobalt immersion system, which combines an indoor immersion tank with an outdoor heat-rejection unit, is designed to support low Power Usage Effectiveness values while improving uptime and sustainability.

Jan continues, “Through rigorous joint testing and validation by Intel, we’ve proven that immersion cooling can bridge IT hardware and facility infrastructure more efficiently than ever before.

“Certification programmes like this one are key to accelerating industry adoption by ensuring every element - tank, fluid, processor, and memory - meets the same high standards of reliability and performance.”

For more from BAC, click here.

Aggreko expands liquid-cooled load banks for AI DCs
Aggreko, a British multinational temporary power generation and temperature control company, has expanded its liquid-cooled load bank fleet by 120MW to meet rising global demand for commissioning equipment used in high-density data centres. The company plans to double this capacity in 2026, supporting deployments across North America, Europe, and Asia, as operators transition to liquid cooling to manage the growth of AI and high-performance computing.

Increasing rack densities, now reaching between 300kW and 500kW in some environments, have pushed conventional air-cooling systems to their limits. Liquid cooling is becoming the standard approach, offering far greater heat removal efficiency and significantly lower power consumption. As these systems mature, accurate simulation of thermal and electrical loads has become essential during commissioning to minimise downtime and protect equipment.

The expanded fleet enables Aggreko to provide contractors and commissioning teams with equipment capable of testing both primary and secondary cooling loops, including chiller lines and coolant distribution units. The load banks also simulate electrical demand during integrated systems testing.

Billy Durie, Global Sector Head – Data Centres at Aggreko, says, “The data centre market is growing fast, and with that speed comes the need to adopt energy efficient cooling systems. With this comes challenges that demand innovative testing solutions.

“Our multi-million-pound investment in liquid-cooled load banks enables our partners - including those investing in hyperscale data centre delivery - to commission their facilities faster, reduce risks, and achieve ambitious energy efficiency goals.”

Supporting commissioning and sustainability targets

Liquid-cooled load banks replicate the heat output of IT hardware, enabling operators to validate cooling performance before systems go live. This approach can improve Power Usage Effectiveness and Water Usage Effectiveness while reducing the likelihood of early operational issues.

Manufactured with corrosion-resistant materials and advanced control features, the equipment is designed for use in environments where reliability is critical. Remote operation capabilities and simplified installation procedures are also intended to reduce commissioning timelines.

With global data centre power demand projected to rise significantly by 2030, driven by AI and high-performance computing, the ability to validate cooling systems efficiently is increasingly important. Aggreko says it also provides commissioning support as part of project delivery, working with data centre teams to develop testing programmes suited to each site.

Billy continues, “Our teams work closely with our customers to understand their infrastructure, challenges, and goals, developing tailored testing solutions that scale with each project’s complexity.

"We’re always learning from projects, refining our design and delivery to respond to emerging market needs such as system cleanliness, water quality management, and bespoke, end-to-end project support.”

Aggreko states that the latest investment strengthens its ability to support high-density data centre construction and aligns with wider moves towards more efficient and sustainable operations.

Billy adds, “The volume of data centre delivery required is unprecedented. By expanding our liquid-cooled load bank fleet, we’re scaling to meet immediate market demand and to help our customers deliver their data centres on time.

"This is about providing the right tools to enable innovation and growth in an era defined by AI.”

For more from Aggreko, click here.

Vertiv launches DC power system for networks in EMEA
Vertiv, a global provider of critical digital infrastructure, has introduced the Vertiv PowerDirect 7100 Energy, a hybrid-ready DC power platform designed to support next-generation telecom and edge networks across Europe, the Middle East, and Africa (EMEA). The system is engineered to strengthen power stability in environments ranging from robust grid connections to remote or off-grid locations while supporting operators’ wider energy transition strategies.

The PowerDirect 7100 Energy provides up to 52 kW of scalable 48 V DC power and reportedly achieves efficiencies of up to 98%. Built on Vertiv’s fourth-generation hybrid architecture, it can integrate inputs from grid, generators, and alternative energy sources including solar, wind, or fuel-cell systems.

Intelligent power management for diverse deployments

At the core of the platform are Vertiv solar converters and modular rectifiers, managed by the Vertiv NetSure Control Unit. This combination enables remote monitoring, advanced load control, and energy scheduling to optimise system performance and extend equipment lifespan.

Dave Wilson, Director of Global Hybrid Solutions at Vertiv, comments, “The world expects energy efficiency and flexibility with the growth of communications such as 5G and edge connectivity.

"The Vertiv PowerDirect 7100 Energy gives operators a single, intelligent platform capable of adapting to any grid condition, delivering reliable power while supporting the transition to cleaner, more efficient energy strategies.”

Built for harsh and space-constrained sites

The system is available in 500 A, 750 A, and 1000 A configurations for telecom and edge data racks. A front-access layout simplifies installation and servicing, while an operating range of –40°C to +65°C allows for deployment in challenging or remote locations.

The PowerDirect 7100 Energy joins Vertiv’s wider portfolio of Vertiv NetSure systems and hybrid-energy platforms within the company’s power-train architecture. Paired with Vertiv thermal management, IT management, and lifecycle support services, it seeks to provide operators with a foundation for resilient and efficient digital infrastructure.

For more from Vertiv, click here.
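At 48 V, even modest DC loads translate into large bus currents, which is what the 500 A, 750 A, and 1000 A configurations reflect. A small sketch relating load power, bus current, and the heat dissipated in conversion at a given efficiency (nominal 48 V and the quoted 98% figure; the example load steps are assumptions, not Vertiv sizing guidance):

```python
BUS_VOLTAGE_V = 48.0   # nominal DC bus voltage
EFFICIENCY = 0.98      # quoted peak efficiency of the platform

def bus_current_a(load_w: float) -> float:
    """DC bus current drawn by the load at the nominal bus voltage."""
    return load_w / BUS_VOLTAGE_V

def rectifier_loss_w(load_w: float) -> float:
    """Heat dissipated in conversion: input power minus delivered power."""
    return load_w / EFFICIENCY - load_w

for load_w in (24_000, 36_000, 52_000):   # assumed loads up to the 52 kW maximum
    print(f"{load_w / 1000:>4.0f} kW load: {bus_current_a(load_w):6.0f} A on the 48 V bus, "
          f"~{rectifier_loss_w(load_w):5.0f} W dissipated at {EFFICIENCY:.0%} efficiency")
```

Run as written, the sketch shows roughly 500 A, 750 A, and just over 1,000 A for the three loads, which is consistent with the configuration steps listed above.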

Terra Innovatum, Uvation agree micro-modular nuclear pilot
Terra Innovatum Global, a developer of micro-modular nuclear reactors, and Uvation, an integrated technology provider, have signed a Letter of Intent (LOI) to launch a 1MWe pilot programme of Terra Innovatum’s micro-modular nuclear technology, with an option to scale up to 100MWe. The pilot is intended to support Uvation’s growing requirements for high-density AI and modular data centre infrastructure.

Terra Innovatum develops micro-modular nuclear reactors, while Uvation focuses on technology platforms designed for large-scale, performance-intensive AI workloads. The companies state that behind-the-meter nuclear generation could provide a resilient and scalable alternative to grid-dependent power for data centre development.

Alessandro Petruzzi, co-founder and CEO at Terra Innovatum, comments, “Uvation’s data centre expansion requires infrastructure that is not only scalable, but fundamentally resilient.

"By integrating Terra Innovatum’s SOLO micro-modular reactor, we will offer a behind-the-meter energy source capable of delivering safe, stable, high-density power that traditional grids cannot guarantee.

"SOLO adds built-in safety and provides redundancy - important for data centres, de-risking energy deployment during maintenance or shutdowns, ensuring continuity independent of power shortages, and enhancing cybersecurity protection.

"This enables next-generation, high-performance modular data centres powered by a clean, uninterrupted energy backbone - unlocking new possibilities for AI, HPC, and mission-critical workloads.”

Nuclear as an alternative pathway for energy-constrained AI projects

Giordano Morichi, Founding Partner, Chief Business Development Officer, and Investor Relations at Terra Innovatum, adds, “As AI infrastructure outpaces today’s grid, the constraint is no longer processing power; it’s reliable, cost-effective power. Uvation’s future commitment to behind-the-meter nuclear reflects a broader market reality: energy security now defines the speed at which AI can scale.

"SOLO fast-tracks AI commercialisation by providing near-instant, CO₂-free, revenue-generating power while sidestepping the delays and CapEx overruns inherent to traditional grid-dependent solutions.

"This agreement also strengthens our commercial deployment and positions nuclear as the most viable path to support Uvation’s planned multi-gigawatt growth in the AI and data centre sector.”

Reen Singh, CEO of Uvation, notes, “Global demand for AI, driven by the US, and the need for sovereign cloud infrastructure is accelerating far faster than the available power to support it. Some of our off-takers forecast demand exceeding 1GW, yet current infrastructure and lack of readily available access to energy limit the scale of deployments.

“Power shortages have been major forces in this industry’s project delays. By integrating Terra Innovatum’s SOLO reactor into our future roadmap, we will look to secure immediate power along with a reliable, behind-the-meter energy source that enables scalable AI, inference, and edge deployments.

"Our future 1MWe SOLO pilot program represents a critical first step, with a path to expand to 100 MWe across multiple sites and potentially several megawatt-scale installations throughout the US.”

The companies intend the pilot to act as a foundation for potential multi-site expansion, citing accelerating power demand and increasing constraints on conventional grid-connected data centre projects.

SPP Pumps brings fire and cooling experience to DCs
SPP Pumps, a manufacturer of centrifugal pumps and systems, and its subsidiary, SyncroFlo, have combined their fire protection and cooling capabilities to support the expanding data centre sector. The companies aim to offer an integrated approach to pumping, fire suppression, and liquid cooling as operators and contractors face rising demand for large-scale, high-density facilities.

The combined portfolio draws on SPP’s nearly 150 years of engineering experience and SyncroFlo’s long history in pre-packaged pump system manufacturing. With modern co-location and hyperscale facilities requiring hundreds of pumps on a single site, the companies state that the joint approach is intended to streamline procurement and project coordination for contractors, consultants, developers, and OEMs.

SPP’s offering spans pump equipment for liquid-cooled systems, cooling towers, chillers, CRAC and CRAH systems, water treatment, transformer cooling, heat recovery, and fire suppression. Its fire pump equipment is currently deployed across regulated markets, with SPP and SyncroFlo packages available to meet NFPA 20 requirements.

Integrated pump systems for construction efficiency

The company says its portfolio also includes pre-packaged pump systems that are modular and tailored to each project. These factory-tested units are designed to reduce installation time and simplify on-site coordination, helping to address construction schedules and cost pressures.

Tom Salmon, Group Business Development Manager for Data Centres at SPP and SyncroFlo, comments, “Both organisations have established strong credentials independently, with over 75 data centre projects delivered for the world’s largest operators.

"We’re now combining our group’s extensive fire suppression, HVAC, and cooling capabilities. By bringing together our complementary capabilities from SPP, SyncroFlo, and other companies in our group, we can now offer comprehensive solutions that cover an entire data centre's pumping requirements.”

John Santi, Vice President of Commercial Sales at SyncroFlo, adds, “Design consultants and contractors tell us lead time is critical. They cannot afford schedule delays. Our pre-packaged systems are factory-tested and ready for immediate commissioning.

"With our project delivery experience and expertise across fire suppression, cooling, and heat transfer combined under one roof, we eliminate the coordination headaches of managing multiple suppliers across different disciplines.”

Tom continues, “In many growth markets, data centres are now classified as critical national infrastructure, and rightly so. These facilities cannot afford downtime, and our experience with critical infrastructure positions us to best serve this market.”


