Data Centre Infrastructure News & Trends


Data centre cooling in the AI era
During a busy Data Centre World London 2026, Joe from DCNN caught up with Alistair Barnes (pictured above), Global Head of Mechanical Engineering at Colt DCS, to ask how the mechanical engineering discipline is evolving in response to the rapid rise of AI workloads. The two discussed a variety of topics, from the shift towards liquid cooling solutions to the challenge of keeping pace with ever-increasing rack-level power densities.

Here, you can read the full Q&A, in which Alistair shares his perspective on where liquid cooling stands today, how Colt DCS's Global Reference Design philosophy shapes its approach to data centre infrastructure, and what he believes remains the industry's toughest unsolved engineering challenge.

Liquid cooling, rack densities, and the future of mechanical engineering

Joe: Hi, Alistair! So, how is mechanical engineering keeping pace with the shift to higher-density AI workloads?

Alistair: Mechanical engineers are keeping pace with higher-density AI workloads by moving beyond traditional air-only cooling and rethinking the entire thermal design stack. Instead of simply supplying cold air, they now operate more like system integrators, collaborating closely with IT and facilities teams to cool heat-intensive components such as GPUs. This includes integrating direct-to-chip cold plates, liquid distribution loops, and hybrid cooling systems capable of managing the extreme heat generated by modern AI hardware.

Joe: In your opinion, is liquid cooling now a mainstream solution or still a specialist one?

Alistair: Liquid cooling is becoming increasingly mainstream, but the industry isn't yet at a point where it can rely on liquid alone, as air still plays an important role in most deployments. Operators adopting Global Reference Designs (GRDs) now include liquid-cooling options to support high-density AI workloads that air alone can't efficiently manage. As a result, many still use hybrid setups that combine air cooling with liquid where needed. Closed-loop systems, such as liquid-to-chip, circulate coolant in a sealed loop, ensuring near-zero wastewater and making them practical and sustainable.

Joe: Where does mechanical engineering sit in Colt DCS's broader data centre design philosophy?

Alistair: Mechanical engineering sits at the core of our design philosophy, supporting our commitment to delivering scalable, efficient, and sustainable data centre solutions. We adopt a GRD, a standardised and repeatable blueprint that accelerates deployment, optimises cost, and maintains consistent quality while remaining flexible enough to meet local requirements. Mechanical engineers play a key role in shaping the GRD, ensuring mission-critical cooling infrastructure is in place and integrating new technologies across sites to support future growth and reliable operations.

Joe: What's the hardest engineering problem the industry hasn't solved yet?

Alistair: The hardest engineering problem the industry hasn't solved is keeping pace with the accelerating rise in rack-level power densities. Liquid cooling is advancing quickly and can manage far more heat than ever before, but single-rack densities approaching 2 MW and beyond are increasing faster than these solutions can be deployed at scale. The real challenge is delivering this capacity sustainably - balancing cooling performance, energy efficiency, and power availability - all while accelerating build timelines to keep up with customer demand.

For more from Colt DCS, click here.
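Alistair's point about megawatt-class racks is easier to grasp with rough numbers. Below is a minimal back-of-the-envelope sketch of the coolant flow a direct-to-chip loop would need; the rack load, coolant properties, and allowed temperature rise are illustrative assumptions, not Colt DCS design values.

```python
# Back-of-the-envelope coolant flow for a direct-to-chip liquid loop.
# All figures are illustrative assumptions, not Colt DCS design values.

RACK_LOAD_W = 1_000_000    # assumed 1 MW rack heat load
CP_J_PER_KG_K = 4186       # specific heat of a water-like coolant (J/kg.K)
DELTA_T_K = 10             # assumed coolant temperature rise across the rack
DENSITY_KG_M3 = 1000       # water-like coolant density

# Energy balance: Q = m_dot * cp * dT  =>  m_dot = Q / (cp * dT)
mass_flow_kg_s = RACK_LOAD_W / (CP_J_PER_KG_K * DELTA_T_K)
volume_flow_l_min = mass_flow_kg_s / DENSITY_KG_M3 * 1000 * 60

print(f"Mass flow:   {mass_flow_kg_s:.1f} kg/s")      # ~23.9 kg/s
print(f"Volume flow: {volume_flow_l_min:.0f} L/min")  # ~1,433 L/min
```

Roughly 24 kg/s of coolant for a single megawatt-class rack illustrates why distribution loops, pumps, and heat exchangers, rather than fans alone, now dominate the mechanical design conversation.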

SATLINE completes Tier III infrastructure modernisation
SATLINE, a Lithuanian provider of virtual satellite-to-IP streaming services and colocation for satellite communications, has upgraded its core infrastructure to align with Tier III standards under the Uptime Institute Tier Classification System, strengthening resilience across its power and cooling environments.

The upgrade introduces full redundancy across critical systems, enabling concurrent maintainability and removing single points of failure, all without interrupting live operations. The project included a comprehensive overhaul of SATLINE's infrastructure, namely:

• Power redundancy — upgraded from a single generator to two fully redundant generators
• Expanded UPS capacity — systems doubled to improve runtime and load handling
• Modernised cooling — HVAC systems redesigned for full redundancy and improved efficiency
• Tier III-aligned architecture — enabling maintenance without service disruption

All improvements were reportedly completed with no customer-impacting downtime.

Improved resilience and operational continuity

The transition from Tier II to a Tier III-aligned design delivers a fully resilient environment. This allows any component within the infrastructure to be serviced without affecting operations, while also improving fault tolerance and scalability. For customers, the upgrade should provide greater continuity, even during maintenance or future system expansions.

Simas Mockevicius, Senior Network Engineer at SATLINE, comments, "Our Tier III-aligned upgrade has already delivered measurable gains in operational resilience.

"Building on a 10-year track record of 100% uptime across both network and power, we have further strengthened our infrastructure through fully redundant power generation, increased UPS capacity, and modernised cooling.

"The result is a system that not only sustains uninterrupted service, but is engineered to exceed the reliability benchmarks our customers depend on."

The upgrade, according to SATLINE, forms part of its broader strategy to support the uptime demands of satellite communications and critical connectivity services. The company has also outlined plans to expand into Asia, targeting regions with growing demand for satellite connectivity.
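The jump from one generator to two fully redundant units matters more than it may look, because the availabilities of independent, parallel units compound. A minimal sketch of that arithmetic, using an assumed per-generator availability rather than SATLINE's measured figures:

```python
# Availability of redundant components, assuming independent failures.
# The 99% per-generator availability is an illustrative assumption only.

def parallel_availability(unit_availability: float, units: int) -> float:
    """The system is up if at least one of `units` identical units is up."""
    return 1 - (1 - unit_availability) ** units

single = parallel_availability(0.99, 1)  # one generator
dual = parallel_availability(0.99, 2)    # two fully redundant generators

print(f"Single generator: {single:.2%}")  # 99.00%
print(f"Dual redundant:   {dual:.4%}")    # 99.9900%
```

Concurrent maintainability, the Tier III hallmark, adds a requirement beyond this arithmetic: either unit must be able to carry the full load while the other is deliberately taken offline for service.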

ABB extends VoltaGrid data centre power deal
ABB, a multinational corporation specialising in industrial automation and electrification products, has secured additional orders from VoltaGrid, a Texas-based microgrid power generation company, to support data centre power infrastructure projects linked to growing demand from AI workloads.

The agreement was signed on 25 March 2026 at CERAWeek in Houston, USA, extending the companies' existing collaboration. The orders are expected to be recorded in the second quarter of 2026. Financial terms were not disclosed.

Under the agreement, ABB will supply 35 synchronous condensers with flywheel technology, alongside prefabricated eHouse units. These systems are used to support voltage stability in power networks, particularly for high-density data centre environments. The equipment will form part of VoltaGrid's behind-the-meter power infrastructure, designed to provide stable and rapidly deployable energy for large-scale data centre operations.

Supporting power stability for AI workloads

Synchronous condensers help stabilise electricity networks by providing inertia, supporting short-circuit events, and managing reactive power. ABB's scope also includes medium- and low-voltage distribution systems, as well as excitation systems intended to maintain reliability and uptime.

Nathan Ough, CEO of VoltaGrid, says, "VoltaGrid's power platform is purpose-built to deliver large-scale power with exceptional dynamic performance and reliability for next-generation digital infrastructure.

"By integrating ABB's advanced grid stabilisation technologies with our AI-optimised power systems, we are able to meet increasingly stringent transient performance requirements while accelerating deployment at gigawatt scale."

Per Erik Holsten, President of ABB's Energy Industries division, adds, "Extending our collaboration with VoltaGrid demonstrates the strength of ABB's businesses working together, combining automation, electrification, and motion expertise and technologies with innovative distributed power systems to create greater value for customers.

"Together, we are enabling reliable, resilient, and scalable power infrastructure for data centres serving the rapidly growing AI economy."

Data centres accounted for around 1.5% of global electricity consumption in 2024, with the United States representing approximately 45% of that total.

For more from ABB, click here.
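The "inertia" a flywheel-backed synchronous condenser contributes is kinetic energy stored in its rotating mass, which the grid can draw on during sudden load steps. A minimal sketch of the underlying relation, with entirely illustrative figures; the inertia, speed, and rating below are assumptions, not ABB or VoltaGrid specifications.

```python
import math

# Kinetic energy stored in a flywheel-backed synchronous condenser.
# All figures are illustrative assumptions, not ABB/VoltaGrid specs.

J_KG_M2 = 5000     # assumed combined rotor + flywheel moment of inertia
SPEED_RPM = 3000   # synchronous speed of a 2-pole machine on a 50 Hz grid
RATING_MVA = 30    # assumed machine rating

omega = 2 * math.pi * SPEED_RPM / 60        # angular speed (rad/s)
energy_mj = 0.5 * J_KG_M2 * omega**2 / 1e6  # E = 1/2 * J * w^2, in MJ

# Inertia constant H: stored energy per unit of rating, in seconds.
h_seconds = energy_mj / RATING_MVA

print(f"Stored kinetic energy: {energy_mj:.0f} MJ")  # ~247 MJ
print(f"Inertia constant H:    {h_seconds:.1f} s")   # ~8.2 s
```

The larger H is, the more energy the machine can trade against frequency excursions, which is what "providing inertia" means in practice.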

STULZ, Merford conduct unique acoustic test for data centres
STULZ, a manufacturer of mission-critical air conditioning technology, and Merford, a Dutch specialist in noise control systems and acoustic doors, have completed an acoustic test confirming that a newly developed chiller system can meet strict data centre noise regulations under operational conditions.

The test was carried out on a chiller for a project in Valeggio sul Mincio, Italy. It used a validated measurement methodology designed to reflect real-world performance, as operators increasingly consider noise alongside cooling capacity and energy efficiency.

As data centre power densities increase, larger cooling systems can create greater environmental impact, particularly in urban locations. The project required compliance with a maximum night-time noise level of 80.2 dB(A), prompting acoustic considerations to be integrated early in the design process.

Davide Mazzi, Head of the Application Team at STULZ, explains, "The challenge was not only to guarantee efficient cooling, but to comply with extremely strict noise limits.

"The installation is located on a rooftop in a densely built urban environment. Our task was to deliver the required performance without disturbing the surroundings and without compromising the operational reliability of the data centre."

Acoustic testing under real operating conditions

The companies developed a noise attenuation system tailored to the chiller configuration. Acoustic measurements were conducted in line with EN ISO 9614-2:1997, which determines sound power levels using sound intensity measurements. Before testing, the team carried out an environmental analysis using SoundPLAN software to model sound propagation. The test setup ensured that background noise levels were at least 10 dB below the chiller's output, with surrounding equipment positioned to avoid interference.

Two attenuation configurations were assessed. Both used steel frame structures with integrated acoustic components to reduce airborne and structure-borne noise; the second configuration also included additional optimisation measures, resulting in greater overall noise reduction, although at the cost of increased system weight and complexity. Engineers measured sound power levels with and without the attenuation system to quantify performance and confirm compliance with the required limits.

Davide continues, "We were delighted to find that the chiller equipped with the developed attenuation system successfully met the stringent noise requirements.

"This project demonstrates that data centre cooling and acoustic compliance can be achieved simultaneously when engineering, acoustic design, and validation are approached as an integrated process.

"As data centres continue to expand into urban environments, such integrated approaches are likely to become essential for balancing performance, sustainability, and community impact."

For more from STULZ, click here.
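The requirement that background noise sit at least 10 dB below the source follows from how decibels combine: levels add on a power basis, so a background 10 dB down inflates the measurement by only about 0.4 dB. A minimal sketch of that arithmetic, using hypothetical levels rather than the project's actual measurements:

```python
import math

def db_sum(levels):
    """Combine sound levels; decibels add on a 10**(L/10) power basis."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels))

def db_background_correction(measured_db, background_db):
    """Subtract background power from a measured total to isolate the source."""
    return 10 * math.log10(10 ** (measured_db / 10) - 10 ** (background_db / 10))

# Hypothetical levels, not the Valeggio sul Mincio measurements:
source = 79.0      # chiller alone, dB(A)
background = 69.0  # background exactly 10 dB below the source

total = db_sum([source, background])
corrected = db_background_correction(total, background)

print(f"Measured total:   {total:.2f} dB(A)")      # ~79.41 dB(A)
print(f"After correction: {corrected:.2f} dB(A)")  # recovers 79.00 dB(A)
```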

Vertiv to acquire ThermoKey
Vertiv, a global provider of critical digital infrastructure, has announced an agreement to acquire ThermoKey, as part of its ongoing focus on data centre cooling technologies.

The acquisition is expected to expand Vertiv's thermal management portfolio and manufacturing capabilities, particularly across EMEA. It also aims to strengthen the company's ability to support high-density data centres and AI workloads, where cooling performance and efficiency are increasingly important.

ThermoKey develops heat rejection and heat exchange technologies, with established relationships across original equipment manufacturers and system integrators. Its range includes dry coolers and microchannel-based systems designed for data centre and industrial applications.

Giordano Albertazzi, CEO at Vertiv, notes, "Heat rejection is becoming increasingly critical for data centres and AI factories as the industry seeks new ways to unlock capacity, improve energy efficiency, and scale with confidence.

"Through our work with ThermoKey, we have come to value its differentiated heat-exchange technologies, engineering depth, and relationships across OEMs and system integrators.

"This acquisition is expected to expand the options available to our customers as they adopt more efficient cooling strategies and build infrastructure designed to stay ahead of rapidly evolving compute demands."

Founded in 1991 and based in Italy, ThermoKey has more than three decades of experience in designing and manufacturing heat exchangers for data centre cooling and other applications.

Expanding thermal capabilities for AI data centres

The company's portfolio includes heat exchangers, dry coolers, air cooled condensers, and liquid cooling systems. Its technologies are compatible with low global warming potential (GWP) and natural refrigerants, aligning with wider industry efforts to improve efficiency and reduce environmental impact.

ThermoKey's engineering and production capabilities are expected to complement Vertiv's existing thermal portfolio, while also increasing manufacturing flexibility and available capacity. This is intended to help meet rising demand for cooling infrastructure in high-density computing environments.

For data centre operators, the acquisition is expected to support more integrated thermal system design, allowing coordination between liquid cooling, air cooling, and heat rejection technologies. This approach can help optimise performance based on site conditions, efficiency targets, and future expansion requirements.

The transaction remains subject to regulatory approvals and other customary conditions, with completion anticipated in the second quarter of 2026.

For more from Vertiv, click here.

RETN now live at Manchester's Lunar 1 data centre
RETN, an independent global network service provider, has launched a new point of presence (PoP) in Manchester, UK. As the city's interconnection ecosystem continues to grow, RETN says it is enabling secure, reliable, and future-ready connectivity, powering both local and global digital ambitions.

Christopher Elliott, UK Commercial Director at RETN, comments, "This new PoP strengthens our presence in the North, delivering greater route diversity and resilience for businesses, ISPs, and enterprises across the region.

"It's another step in our commitment to the Northern Powerhouse, supporting Manchester's role as one of the UK's leading connectivity hubs. Lunar's commitment to operational excellence and customer-focused service makes them an ideal partner as we continue to expand our network footprint."

Darren Elliston, Director of Customer Success at Lunar Digital, adds, "RETN's decision to build a PoP inside our facility is a strong endorsement of the quality, resilience, and strategic importance of Lunar's data centres.

"This partnership gives our customers even more choice and flexibility in how they build and scale their infrastructure. It also reinforces Manchester's position as one of the UK's most important digital hubs, supporting the region's continued growth and innovation."

For more from RETN, click here.

ZutaCore brings two-phase cooling to PCIe GPUs
ZutaCore, a developer of liquid cooling technology, has announced that its OmniTherm cold plate now enables waterless, two-phase cooling for manufacturers building servers with NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs in a single-slot PCIe form factor, supporting full-power operation in standard enterprise and AI cloud server environments.

As AI inference expands across enterprise and cloud environments, PCIe GPU servers have become a common platform due to their relative ease of deployment, scalability, and compatibility with existing infrastructure. However, as GPU power consumption rises, air cooling can become a limiting factor, restricting density, driving up fan power, and increasing the risk of thermal throttling during sustained workloads.

The company says OmniTherm addresses this by enabling a transition to two-phase liquid cooling without introducing water inside the server. The single-slot design allows operators to increase accelerator density in standard server architectures while capturing heat into a liquid loop, reducing reliance on high fan speeds that can create excessive noise, waste power, and cause difficult operating conditions in the data centre.

"Enterprise and cloud operators want the flexibility of PCIe GPUs, but they also need density and sustained performance as power levels rise," comments My D. Truong, CTO of ZutaCore. "OmniTherm delivers waterless, two-phase cooling in a single-slot form factor, helping data centres increase accelerator density while maintaining stable thermals for 24/7 AI workloads."

Two-phase cooling for dynamic AI workloads

Production AI workloads - particularly inference - are rarely steady, fluctuating constantly and creating thermal swings that can affect performance and reliability. ZutaCore says its two-phase approach is designed to respond to changing workloads, helping data centres maintain predictable performance under dynamic utilisation.

As racks move into higher power levels, the operational cost of air cooling also rises, with increased fan energy consumption and growing acoustic and facility pressures. OmniTherm uses a sealed, non-conductive dielectric fluid system that captures heat without requiring facility water in the server, reducing cooling overhead and providing a path to scaling PCIe-based AI deployments.

Alongside this announcement, ZutaCore has also introduced HyperCool Cloud, a cloud-native operations platform designed to help data centres manage liquid cooling infrastructure. The platform, the company says, provides "near-real-time" CDU telemetry, fleet-level monitoring, and alarm-to-resolution workflows, helping operators manage service response and uptime as deployments scale across sites and fleets.

For more from ZutaCore, click here.
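The density argument for two-phase cooling comes down to latent heat: a boiling fluid absorbs heat while changing phase rather than by warming up, so far less flow is needed per watt. A minimal comparison sketch; the GPU load and fluid properties are order-of-magnitude assumptions for a generic dielectric coolant, not ZutaCore's specifications.

```python
# Two-phase vs single-phase coolant flow for one accelerator.
# All fluid properties are illustrative assumptions, not ZutaCore specs.

GPU_POWER_W = 600        # assumed single-slot PCIe GPU heat load
H_FG_J_PER_KG = 100_000  # assumed latent heat of a dielectric fluid (~100 kJ/kg)
CP_J_PER_KG_K = 1100     # assumed liquid specific heat of the same fluid
DELTA_T_K = 10           # allowed temperature rise for single-phase comparison

two_phase_flow = GPU_POWER_W / H_FG_J_PER_KG                   # m_dot = Q / h_fg
single_phase_flow = GPU_POWER_W / (CP_J_PER_KG_K * DELTA_T_K)  # m_dot = Q / (cp*dT)

print(f"Two-phase flow:    {two_phase_flow * 1000:.1f} g/s")     # ~6 g/s
print(f"Single-phase flow: {single_phase_flow * 1000:.1f} g/s")  # ~55 g/s
```

A boiling fluid also pins the cold plate near its saturation temperature, which helps explain the stable thermals the company claims under bursty inference loads.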

ZIEHL-ABEGG updates ZAplus fan design
ZIEHL-ABEGG, a German ventilation manufacturer, has introduced the ZAplus Next Generation axial fan, aimed at improving airflow, efficiency, and acoustic performance in data centres and other cooling applications.

The updated design builds on the existing ZAplus platform, incorporating a slimmer housing and revised aerodynamic components to increase air output and pressure within the same footprint. The company says this allows larger fan sizes to be deployed in existing spaces, supporting upgrades without requiring significant changes to system layouts.

The housing, available in sizes from 450 mm to 1,000 mm, has been developed using computational fluid dynamics to optimise airflow. It is manufactured using plastic injection moulding to reduce weight and improve corrosion resistance.

Design changes focus on airflow and efficiency

The system includes FE2owlet and FE3owlet blade designs, alongside guide vanes and a compact diffuser to stabilise airflow and improve pressure performance. Additional nozzles are used to help smooth airflow and reduce turbulence. The company notes that these elements are designed to support efficient operation while maintaining a consistent footprint.

The fan also enables variable speed control, allowing airflow to be adjusted to demand, which can help reduce energy use over time. The ZAplus Next Generation is available with both AC and ECblue motor options, providing flexibility for both retrofit and new-build data centre environments. ZIEHL-ABEGG says its composite construction is intended to support durability and reduce maintenance requirements in long-term operation.
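The energy case for variable speed control rests on the standard fan affinity laws: airflow scales with speed, pressure with its square, and shaft power with its cube. A minimal sketch of those textbook relations; the baseline operating point is hypothetical, not ZIEHL-ABEGG performance data.

```python
# Fan affinity laws for a given fan, relative to a baseline operating point:
#   flow ~ n, pressure ~ n^2, power ~ n^3, where n is the speed ratio.
# The baseline figures below are hypothetical, not ZIEHL-ABEGG data.

def affinity(flow_m3_s, pressure_pa, power_kw, speed_ratio):
    return (flow_m3_s * speed_ratio,
            pressure_pa * speed_ratio ** 2,
            power_kw * speed_ratio ** 3)

# Hypothetical baseline: 10 m^3/s at 400 Pa drawing 6 kW; slow to 70% speed.
flow, pressure, power = affinity(10.0, 400.0, 6.0, 0.7)

print(f"Flow:     {flow:.1f} m^3/s")   # 7.0 m^3/s
print(f"Pressure: {pressure:.0f} Pa")  # 196 Pa
print(f"Power:    {power:.2f} kW")     # ~2.06 kW, roughly a 66% saving
```

This cubic relationship is why trimming fan speed to match demand yields outsized energy savings in part-load operation.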

Corning expands AI data centre connectivity
Corning, a US manufacturer of optical fibre for telecommunications and data centres, has expanded its data centre connectivity portfolio through a licensing agreement with US Conec.

The agreement enables Corning to use PRIZM TMT optical ferrule technology, designed to increase fibre density within data centre environments, particularly for AI infrastructure. The technology supports higher fibre counts in limited space, addressing growing demand for connectivity as AI workloads scale and data centre architectures evolve.

Mike O'Day, Senior Vice President and General Manager of Corning Optical Communications, comments, "AI infrastructure is pushing optical connectivity into new and more demanding environments.

"By licensing PRIZM TMT, Corning is strengthening its ability to deliver scalable, fibre-rich solutions that help customers build larger, faster, and more efficient AI clusters, while aligning closely with the broader industry ecosystem."

Supporting higher-density AI infrastructure

As AI deployments expand, data centres are increasing the number of connected accelerators and shifting towards optical connections in place of traditional copper links. This change is driving higher fibre density within server and switch racks, increasing the need for compact, high-performance connectors.

The PRIZM TMT ferrule uses expanded beam technology with precision-aligned microlenses, rather than direct fibre contact. This approach is intended to improve installation reliability, reduce sensitivity to contamination, and support faster deployment. According to the companies, these characteristics are suited to large-scale AI environments, where high connection density and consistent performance are required.

For more from Corning, click here.

Siemens, Rittal partner on data centre power
German multinational technology company Siemens and Rittal, a German manufacturer of industrial enclosures, IT racks, and climate control systems, have formed a partnership to develop power distribution infrastructure for data centres, targeting increasing demands from AI workloads.

The collaboration focuses on standardised systems designed to support higher rack power densities, improve deployment speed, and streamline data centre construction. Power demands in AI environments are continuing to rise, with rack densities already exceeding 100 kW and expected to increase further over the coming years. The companies aim to address these requirements through updated approaches to power distribution, cooling, and heat management.

Focus on scalable power infrastructure

One of the first developments from the partnership is a sidecar power system, installed within the white space of a data centre. The system uses a dedicated power rack to supply server racks, supporting a modular and scalable approach to power delivery. The design aligns with Open Compute Project standards and is intended to simplify deployment while maintaining operational reliability.

"To enable the rapid growth of AI, we need smart, reliable, and scalable power supply solutions for data centres and we need them quickly," comments Andreas Matthé, CEO Electrical Products at Siemens Smart Infrastructure.

Further joint work includes the development of standardised low-voltage distribution systems for modular and containerised data centres, alongside measures aimed at improving operational and personnel safety. The partnership builds on existing collaboration between Siemens and the Friedhelm Loh Group, Rittal's parent company, and is expected to expand into additional applications beyond data centres.

For more from Siemens, click here.
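The strain that 100 kW racks place on conventional distribution is easiest to see as current. A minimal sketch using the standard three-phase power relation; the voltage and power factor are common illustrative values, not Siemens or Rittal design figures.

```python
import math

# Feeder current for a rack at various power levels, three-phase AC:
#   I = P / (sqrt(3) * V_LL * pf)
# Voltage and power factor are illustrative, not Siemens/Rittal figures.

V_LINE_LINE = 400    # common European three-phase voltage (V)
POWER_FACTOR = 0.95  # assumed load power factor

def rack_current_a(power_kw: float) -> float:
    return power_kw * 1000 / (math.sqrt(3) * V_LINE_LINE * POWER_FACTOR)

for kw in (10, 50, 100, 200):
    print(f"{kw:>4} kW rack -> {rack_current_a(kw):.0f} A")
# 10 kW -> ~15 A; 100 kW -> ~152 A; 200 kW -> ~304 A.
```

At hundreds of amps per cabinet, dedicated sidecar power racks and busbar-style distribution become more attractive than routing ever-heavier cabling to each rack individually.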


