Innovations in Data Center Power and Cooling Solutions


SATLINE completes Tier III infrastructure modernisation
SATLINE, a Lithuanian provider of virtual satellite-to-IP streaming services and colocation for satellite communications, has upgraded its core infrastructure to align with Tier III standards under the Uptime Institute Tier Classification System, strengthening resilience across its power and cooling environments. The upgrade introduces full redundancy across critical systems, enabling concurrent maintainability and removing single points of failure, all without interrupting live operations.

The project included a comprehensive overhaul of SATLINE’s infrastructure, namely:

• Power redundancy — upgraded from a single generator to two fully redundant generators
• Expanded UPS capacity — systems doubled to improve runtime and load handling
• Modernised cooling — HVAC systems redesigned for full redundancy and improved efficiency
• Tier III-aligned architecture — enabling maintenance without service disruption

All improvements were reportedly completed with no customer-impacting downtime.

Improved resilience and operational continuity

The transition from Tier II to a Tier III-aligned design delivers a fully resilient environment. This allows any component within the infrastructure to be serviced without affecting operations, while also improving fault tolerance and scalability. For customers, the upgrade should provide greater continuity, even during maintenance or future system expansions.

Simas Mockevicius, Senior Network Engineer at SATLINE, comments, “Our Tier III–aligned upgrade has already delivered measurable gains in operational resilience.

“Building on a 10-year track record of 100% uptime across both network and power, we have further strengthened our infrastructure through fully redundant power generation, increased UPS capacity, and modernised cooling.
"The result is a system that not only sustains uninterrupted service, but is engineered to exceed the reliability benchmarks our customers depend on.”

The upgrade, according to SATLINE, forms part of its broader strategy to support the uptime demands of satellite communications and critical connectivity services. The company has also outlined plans to expand into Asia, targeting regions with growing demand for satellite connectivity.

ABB extends VoltaGrid data centre power deal
ABB, a multinational corporation specialising in industrial automation and electrification products, has secured additional orders from VoltaGrid, a Texas-based microgrid power generation company, to support data centre power infrastructure projects linked to growing demand from AI workloads.

The agreement was signed on 25 March 2026 at CERAWeek in Houston, USA, extending the companies’ existing collaboration. The orders are expected to be recorded in the second quarter of 2026. Financial terms were not disclosed.

Under the agreement, ABB will supply 35 synchronous condensers with flywheel technology, alongside prefabricated eHouse units. These systems are used to support voltage stability in power networks, particularly for high-density data centre environments. The equipment will form part of VoltaGrid’s behind-the-meter power infrastructure, designed to provide stable and rapidly deployable energy for large-scale data centre operations.

Supporting power stability for AI workloads

Synchronous condensers help stabilise electricity networks by providing inertia, contributing fault current during short-circuit events, and managing reactive power. ABB’s scope also includes medium- and low-voltage distribution systems, as well as excitation systems intended to maintain reliability and uptime.

Nathan Ough, CEO of VoltaGrid, says, “VoltaGrid’s power platform is purpose built to deliver large-scale power with exceptional dynamic performance and reliability for next-generation digital infrastructure.
“By integrating ABB’s advanced grid stabilisation technologies with our AI-optimised power systems, we are able to meet increasingly stringent transient performance requirements while accelerating deployment at gigawatt scale.”

Per Erik Holsten, President of ABB’s Energy Industries division, adds, “Extending our collaboration with VoltaGrid demonstrates the strength of ABB’s businesses working together, combining automation, electrification, and motion expertise and technologies with innovative distributed power systems to create greater value for customers.

“Together, we are enabling reliable, resilient, and scalable power infrastructure for data centres serving the rapidly growing AI economy.”

Data centres accounted for around 1.5% of global electricity consumption in 2024, with the United States representing approximately 45% of that total.

For more from ABB, click here.
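A synchronous condenser’s contribution to grid stability is often summarised by its inertia constant H, the stored rotational energy per unit of rated apparent power; adding a flywheel raises H by increasing rotational mass. A rough illustration of the arithmetic follows (the machine figures are assumed round numbers, not ABB or VoltaGrid specifications):

```python
import math

def inertia_constant(j_kgm2: float, speed_rpm: float, s_rated_va: float) -> float:
    """Inertia constant H (seconds): stored kinetic energy / rated apparent power."""
    omega = speed_rpm * 2 * math.pi / 60         # rotor speed, rad/s
    kinetic_energy = 0.5 * j_kgm2 * omega ** 2   # stored rotational energy, joules
    return kinetic_energy / s_rated_va

# Assumed figures: a 30 MVA condenser whose flywheel brings the combined
# rotor inertia to 20,000 kg*m^2 at 1,500 rpm.
H = inertia_constant(20_000, 1_500, 30e6)
print(f"H = {H:.1f} s")  # roughly 8 s of inertia; inverter-only supplies provide effectively none
```

The point of the flywheel is visible in the formula: energy grows with the square of speed and linearly with inertia, so extra rotating mass directly buys ride-through time during transients.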

STULZ, Merford conduct unique acoustic test for data centres
STULZ, a manufacturer of mission-critical air conditioning technology, and Merford, a Dutch specialist in noise control systems and acoustic doors, have completed an acoustic test confirming that a newly developed chiller system can meet strict data centre noise regulations under operational conditions.

The test was carried out on a chiller for a project in Valeggio sul Mincio, Italy. It used a validated measurement methodology designed to reflect real-world performance, as operators increasingly consider noise alongside cooling capacity and energy efficiency.

As data centre power densities increase, larger cooling systems can create greater environmental impact, particularly in urban locations. The project required compliance with a maximum night-time noise level of 80.2dB(A), prompting acoustic considerations to be integrated early in the design process.

Davide Mazzi, Head of the Application Team at STULZ, explains, “The challenge was not only to guarantee efficient cooling, but to comply with extremely strict noise limits.

“The installation is located on a rooftop in a densely built urban environment. Our task was to deliver the required performance without disturbing the surroundings and without compromising the operational reliability of the data centre.”

Acoustic testing under real operating conditions

The companies developed a noise attenuation system tailored to the chiller configuration. Acoustic measurements were conducted in line with EN ISO 9614-2:1997, which determines sound power levels using sound intensity measurements. Before testing, the team carried out an environmental analysis using SoundPLAN software to model sound propagation. The test setup ensured that background noise levels were at least 10dB below the chiller’s output, with surrounding equipment positioned to avoid interference.
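The 10dB background margin matters because sound levels combine logarithmically: a background source exactly 10dB below the equipment under test inflates the measured level by only about 0.4dB. A short sketch of that arithmetic (the 80dB(A) source level is illustrative, not the Valeggio measurement):

```python
import math

def combine_db(*levels_db: float) -> float:
    """Sum incoherent sound sources: convert dB to power, add, convert back."""
    total_power = sum(10 ** (level / 10) for level in levels_db)
    return 10 * math.log10(total_power)

source = 80.0              # hypothetical chiller sound level, dB(A)
background = source - 10   # background just meeting the 10 dB criterion

print(f"{combine_db(source, background):.2f} dB(A)")  # 80.41 -> about 0.4 dB of error
```

With a smaller margin the background would contaminate the result: a background only 3dB below the source would add a full 1.8dB to the reading.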
Two attenuation configurations were assessed. Both used steel frame structures with integrated acoustic components to reduce airborne and structure-borne noise, while the second configuration also included additional optimisation measures, resulting in greater overall noise reduction (although it increased system weight and complexity).

Engineers measured sound power levels with and without the attenuation system to quantify performance and confirm compliance with the required limits.

Davide continues, “We were delighted to find that the chiller equipped with the developed attenuation system successfully met the stringent noise requirements.

“This project demonstrates that data centre cooling and acoustic compliance can be achieved simultaneously when engineering, acoustic design, and validation are approached as an integrated process.

"As data centres continue to expand into urban environments, such integrated approaches are likely to become essential for balancing performance, sustainability, and community impact.”

For more from STULZ, click here.

Vertiv to acquire ThermoKey
Vertiv, a global provider of critical digital infrastructure, has announced an agreement to acquire ThermoKey, as part of its ongoing focus on data centre cooling technologies.

The acquisition is expected to expand Vertiv’s thermal management portfolio and manufacturing capabilities, particularly across EMEA. It also aims to strengthen the company’s ability to support high-density data centres and AI workloads, where cooling performance and efficiency are increasingly important.

ThermoKey develops heat rejection and heat exchange technologies, with established relationships across original equipment manufacturers and system integrators. Its range includes dry coolers and microchannel-based systems designed for data centre and industrial applications.

Giordano Albertazzi, CEO at Vertiv, notes, “Heat rejection is becoming increasingly critical for data centres and AI factories as the industry seeks new ways to unlock capacity, improve energy efficiency, and scale with confidence.

“Through our work with ThermoKey, we have come to value its differentiated heat-exchange technologies, engineering depth, and relationships across OEMs and system integrators.

"This acquisition is expected to expand the options available to our customers as they adopt more efficient cooling strategies and build infrastructure designed to stay ahead of rapidly evolving compute demands.”

Founded in 1991 and based in Italy, ThermoKey has more than three decades of experience in designing and manufacturing heat exchangers for data centre cooling and other applications.

Expanding thermal capabilities for AI data centres

The company’s portfolio includes heat exchangers, dry coolers, air cooled condensers, and liquid cooling systems. Its technologies are compatible with low global warming potential (GWP) and natural refrigerants, aligning with wider industry efforts to improve efficiency and reduce environmental impact.
ThermoKey’s engineering and production capabilities are expected to complement Vertiv’s existing thermal portfolio, while also increasing manufacturing flexibility and available capacity. This is intended to help meet rising demand for cooling infrastructure in high-density computing environments.

For data centre operators, the acquisition is expected to support more integrated thermal system design, allowing coordination between liquid cooling, air cooling, and heat rejection technologies. This approach can help optimise performance based on site conditions, efficiency targets, and future expansion requirements.

The transaction remains subject to regulatory approvals and other customary conditions, with completion anticipated in the second quarter of 2026.

For more from Vertiv, click here.

ZutaCore brings two-phase cooling to PCIe GPUs
ZutaCore, a developer of liquid cooling technology, has announced that its OmniTherm cold plate now enables waterless, two-phase cooling for manufacturers building servers with the NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs in a single-slot PCIe form factor, supporting full-power operation in standard enterprise and AI cloud server environments.

As AI inference expands across enterprise and cloud environments, PCIe GPU servers have become a common platform due to their relative ease of deployment, scalability, and compatibility with existing infrastructure. However, as GPU power consumption rises, air cooling can become a limiting factor, restricting density, driving up fan power, and increasing the risk of thermal throttling during sustained workloads.

The company says OmniTherm addresses this by enabling a transition to two-phase liquid cooling without introducing water inside the server. The single-slot design allows operators to increase accelerator density in standard server architectures while capturing heat into a liquid loop, reducing reliance on high fan speeds that can create excessive noise, waste power, and cause difficult operating conditions in the data centre.

"Enterprise and cloud operators want the flexibility of PCIe GPUs, but they also need density and sustained performance as power levels rise," comments My D. Truong, CTO of ZutaCore.

"OmniTherm delivers waterless, two-phase cooling in a single-slot form factor, helping data centres increase accelerator density while maintaining stable thermals for 24/7 AI workloads."

Two-phase cooling for dynamic AI workloads

Production AI workloads - particularly inference - are rarely steady, fluctuating constantly and creating thermal swings that can affect performance and reliability. ZutaCore says its two-phase approach is designed to respond to changing workloads, helping data centres maintain predictable performance under dynamic utilisation.
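The thermal-buffering argument rests on basic thermodynamics: a boiling fluid absorbs its latent heat at a nearly constant saturation temperature, whereas a single-phase coolant must warm up to carry heat away. A simplified per-kilogram comparison (the 100 kJ/kg latent heat is an assumed round figure for a generic two-phase dielectric, not a ZutaCore specification):

```python
CP_WATER = 4.186  # kJ/(kg*K), specific heat of liquid water

def sensible_pickup(delta_t_k: float, cp: float = CP_WATER) -> float:
    """Heat absorbed per kg by a single-phase coolant warming by delta_t_k kelvin."""
    return cp * delta_t_k

LATENT_DIELECTRIC = 100.0  # kJ/kg, assumed latent heat of vaporisation

# Single-phase water allowed a 10 K temperature rise vs a boiling dielectric:
print(sensible_pickup(10.0), LATENT_DIELECTRIC)  # 41.86 kJ/kg vs 100.0 kJ/kg
```

The boiling case also pins the cold-plate temperature near the fluid's saturation point, which is why a two-phase loop tolerates rapid load swings better than a coolant whose temperature must track the heat input.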
As racks move into higher power levels, the operational cost of air cooling also rises, with increased fan energy consumption and growing acoustic and facility pressures. OmniTherm uses a sealed, non-conductive dielectric fluid system that captures heat without requiring facility water in the server, reducing cooling overhead and providing a path to scaling PCIe-based AI deployments.

Alongside this announcement, ZutaCore has also introduced HyperCool Cloud, a cloud-native operations platform designed to help data centres manage liquid cooling infrastructure. The platform, the company says, provides "near-real-time" CDU telemetry, fleet-level monitoring, and alarm-to-resolution workflows, helping operators manage service response and uptime as deployments scale across sites and fleets.

For more from ZutaCore, click here.

ZIEHL-ABEGG updates ZAplus fan design
ZIEHL-ABEGG, a German ventilation manufacturer, has introduced the ZAplus Next Generation axial fan, aimed at improving airflow, efficiency, and acoustic performance in data centres and other cooling applications.

The updated design builds on the existing ZAplus platform, incorporating a slimmer housing and revised aerodynamic components to increase air output and pressure within the same footprint. The company says this allows larger fan sizes to be deployed in existing spaces, supporting upgrades without requiring significant changes to system layouts.

The housing, available in sizes from 450 mm to 1,000 mm, has been developed using computational fluid dynamics to optimise airflow. It is manufactured using plastic injection moulding to reduce weight and improve corrosion resistance.

Design changes focus on airflow and efficiency

The system includes FE2owlet and FE3owlet blade designs, alongside guide vanes and a compact diffuser to stabilise airflow and improve pressure performance. Additional nozzles are used to help smooth airflow and reduce turbulence. The company notes that these elements are designed to support efficient operation while maintaining a consistent footprint.

The fan also enables variable speed control, allowing airflow to be adjusted to demand, which can help reduce energy use over time. The ZAplus Next Generation is available with both AC and ECblue motor options, providing flexibility for both retrofit and new-build data centre environments. ZIEHL-ABEGG says its composite construction is intended to support durability and reduce maintenance requirements in long-term operation.
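The energy saving from variable speed control follows from the fan affinity laws: airflow scales linearly with speed, pressure with its square, and shaft power with its cube, so a modest speed reduction yields a large power reduction. A sketch of the general law (illustrative numbers, not ZIEHL-ABEGG performance data):

```python
def affinity_scale(flow: float, pressure: float, power: float,
                   speed_ratio: float) -> tuple:
    """Fan affinity laws: flow ~ n, pressure ~ n^2, power ~ n^3."""
    return (flow * speed_ratio,
            pressure * speed_ratio ** 2,
            power * speed_ratio ** 3)

# Slowing a fan to 80% speed when full airflow is not required:
flow, pressure, power = affinity_scale(10.0, 500.0, 3.0, 0.8)
print(flow, pressure, power)  # 8.0 m3/s, 320 Pa, 1.536 kW -> about half the power
```

This cubic relationship is why demand-based speed control, rather than throttling airflow at constant speed, is the standard route to fan energy savings.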

Siemens, Rittal partner on data centre power
German multinational technology company Siemens and Rittal, a German manufacturer of industrial enclosures, IT racks, and climate control systems, have formed a partnership to develop power distribution infrastructure for data centres, targeting increasing demands from AI workloads.

The collaboration focuses on standardised systems designed to support higher rack power densities, improve deployment speed, and streamline data centre construction. Power demands in AI environments are continuing to rise, with rack densities already exceeding 100 kW and expected to increase further over the coming years. The companies aim to address these requirements through updated approaches to power distribution, cooling, and heat management.

Focus on scalable power infrastructure

One of the first developments from the partnership is a sidecar power system, installed within the white space of a data centre. The system uses a dedicated power rack to supply server racks, supporting a modular and scalable approach to power delivery. The design aligns with Open Compute Project standards and is intended to simplify deployment while maintaining operational reliability.

“To enable the rapid growth of AI, we need smart, reliable, and scalable power supply solutions for data centres, and we need them quickly,” comments Andreas Matthé, CEO Electrical Products at Siemens Smart Infrastructure.

Further joint work includes the development of standardised low-voltage distribution systems for modular and containerised data centres, alongside measures aimed at improving operational and personnel safety. The partnership builds on existing collaboration between Siemens and the Friedhelm Loh Group, Rittal’s parent company, and is expected to expand into additional applications beyond data centres.

For more from Siemens, click here.

Report: AI boom driving US data centres off grid
The rapid expansion of off-grid data centres across the US is emerging as a possible answer to the power constraints reshaping the AI-driven digital economy, according to a new report from law firm Troutman Pepper Locke.

As artificial intelligence accelerates demand for compute capacity, the firm's report - Off-Grid Data Centers: A Potential Power Solution for AI - finds that developers, hyperscalers, and energy companies are increasingly turning to behind-the-meter and ‘island-moded’ generation to secure reliable, scalable electricity while avoiding grid congestion and regulatory delays.

According to projections cited in the analysis, global data centre investment could reach $6.7 trillion (£5 trillion) by 2030, with approximately $2.7 trillion (£2 trillion) of that invested in the US market. Nowhere is the transformation more visible than in Texas, where the Electric Reliability Council of Texas (ERCOT) forecasts that data centre electricity demand could rise by 22 GW between 2025 and 2031, reaching 78 GW (or roughly 36% of total statewide demand).

At the same time, AI-specialised server racks now require 50–100 kW each, up from 5–10 kW in traditional configurations just a few years ago. As microchips become more powerful and energy intensive, the report concludes that power - not silicon - has become the primary constraint on AI expansion.

Natural gas as the bridge to scale

One of the report's central findings is the decisive shift towards natural gas as the preferred near-term solution for off-grid facilities. Developers are prioritising dispatchable generation that can deliver the "five nines" reliability (99.999% uptime) demanded by hyperscale AI operations. While renewables remain a central part of long-term decarbonisation strategies, the analysis suggests that wind and solar alone cannot yet provide consistent, 24/7 baseload power at the scale AI requires without substantial overbuild and storage.
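The "five nines" figure translates directly into an annual downtime budget, which makes clear why developers treat dispatchability as non-negotiable:

```python
def annual_downtime_minutes(availability: float) -> float:
    """Expected downtime per year, in minutes, for a given availability fraction."""
    return (1 - availability) * 365 * 24 * 60

print(f"{annual_downtime_minutes(0.99999):.2f} min/year")  # five nines: ~5.26 minutes
print(f"{annual_downtime_minutes(0.999):.2f} min/year")    # three nines: ~525.6 minutes
```

A facility held to roughly five minutes of outage per year cannot rely on generation whose output varies with weather unless it is heavily overbuilt and backed by storage.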
Battery capacity, though advancing, "remains limited" in duration for utility-scale deployments. Small modular nuclear reactors reportedly hold promise but are not yet commercially deployable at scale. Natural gas generation, by contrast, can be deployed relatively quickly and offers dependable output, which the report argues makes it the dominant choice for early off-grid adopters, particularly in Texas, where fuel supply and land availability align.

However, the report also cautions that turbine supply chains are tightening, and competition for equipment, skilled labour, and transmission infrastructure is intensifying as AI-driven projects accelerate nationwide.

Interconnection bottlenecks fuel off-grid momentum

Grid interconnection queues are increasingly congested, delaying projects in key markets. Developers are therefore reportedly pursuing behind-the-meter solutions as a bridge to eventual grid connection - or, in some cases, as a long-term strategy to maintain operational autonomy.

Texas's deregulated electricity market and advanced behind-the-meter framework make it a focal point for this shift. Yet, regulatory oversight is also evolving. Senate Bill 6, passed with bipartisan support in 2025, introduced new obligations for large-load users, including requirements tied to backup generation and infrastructure cost allocation.

At the federal level, policymakers are responding to the AI "gold rush" with measures designed both to accelerate data centre permitting and protect grid reliability. Proposed initiatives such as the Decentralised Access to Technology Alternatives (DATA) Act and large-load interconnection reforms could further clarify the treatment of private off-grid facilities and reduce compliance burdens. The report suggests that regulatory clarity - rather than deregulation alone - will be essential to sustaining investment momentum while safeguarding broader system stability.
Community scrutiny and the $64 billion delay factor

Beyond infrastructure, the report highlights mounting community resistance. Research referenced in the analysis indicates that as of early 2025, approximately $64 billion (£48.2 billion) in US data centre developments had faced delays due to bipartisan local opposition, often centred on energy costs, water use, and property impacts.

Off-grid systems can mitigate some of these concerns by reducing strain on public grids and shielding residential ratepayers from infrastructure cost allocation. Nevertheless, proactive community engagement and transparent economic value propositions remain critical. The report also explores alternative models, including modular data centres colocated with renewable assets to absorb curtailed power, demonstrating that innovation in design and siting can complement traditional off-grid approaches.

The partner imperative

With gigawatt-scale campuses carrying price tags exceeding $1 billion (£753 million) per facility, counterparty strength and supply chain resilience are paramount, according to the report. Developers and energy providers "must conduct rigorous due diligence" on turbine manufacturers, engineering teams, landholders, and off-takers.

In an off-grid environment, there is no utility fallback. Creditworthiness, long-term commitment, and technical capability become central risk determinants. The report underscores that competition is fierce and that some early entrants may struggle to scale without robust financial backing.

Reliability first and always

Ultimately, the report concludes that reliability eclipses all other considerations. Hyperscalers racing to lead the AI market prioritise guaranteed uptime over short-term cost arbitrage or energy trading opportunities. The business case for AI infrastructure depends on uninterrupted power, and developers are reshaping generation strategies accordingly.
Brandon Lobb, Partner in Troutman Pepper Locke’s Energy Transactional Practice Group, says, "AI has shifted the centre of gravity in the energy market. Power availability - not just price - is now the defining variable in digital infrastructure strategy.

"Off-grid solutions are emerging as a pragmatic response to interconnection delays, reliability demands, and community pressures. Companies that align regulatory strategy, supply chain discipline, and creditworthy partnerships will be best positioned to lead in this next phase of AI growth."

As federal and state frameworks continue to evolve, off-grid data centres appear set to become a structural feature of the US energy and technology landscape, rather than a temporary workaround.

Siemens expands data centre ecosystem for AI infrastructure
German multinational technology company Siemens has expanded its data centre partner ecosystem to support the growth of next-generation artificial intelligence infrastructure, focusing on the integration of compute, power, and operational systems. The expansion includes a strategic investment in Emerald AI, a collaboration with PhysicsX, and the integration of energy storage technologies from Fluence.

As AI adoption accelerates, data centre operators are facing increasing constraints around power availability and grid connection timelines. Siemens says the expanded ecosystem is intended to improve flexibility across infrastructure, helping operators scale capacity while maintaining reliability in power-constrained environments.

Coordinating compute and energy systems

Emerald AI’s technology enables AI workloads to shift in time and location to align with grid conditions, allowing data centre demand to respond dynamically to available power. This approach is designed to reduce peak demand pressures and support faster grid connections.

Fluence’s battery energy storage systems (BESS) are intended to help operators manage large-scale AI workloads by shaping energy demand and supporting more predictable load profiles. The systems can also provide on-site power during grid constraints or outages, supporting operational continuity.

In addition, Siemens is working with PhysicsX to apply physics-based AI modelling to data centre power distribution systems. Using simulation data, the approach enables engineers to model thermal behaviour in real time, reducing design times and supporting optimisation for dynamic AI workloads.

Siemens says the combined ecosystem brings together workload orchestration, energy infrastructure, and AI-driven modelling to address the growing complexity of data centre design and operation as AI demand increases.

For more from Siemens, click here.

'One in four DC operators fails to track energy usage'
A late‑2025 451 Research study, commissioned by Janitza, a German manufacturer of energy measurement and power quality monitoring equipment, reveals that nearly one in four data centre operators does not monitor the power consumption of their primary sites, even as AI workloads drive unprecedented pressure on electrical and cooling infrastructure. Without precise, real‑time energy data, Janitza argues, operators cannot safely scale AI‑ready capacity or protect their investments.

Energy consumption without control

451 Research, the technology market intelligence unit of S&P Global, surveyed 208 data centre professionals to assess how efficiently business‑critical facilities operate today, using power usage effectiveness (PUE) as a key metric. Just over half of respondents reported a PUE between 1.5 and 2.0, while 23% admitted they are not tracking this fundamental performance indicator at all.

The study highlights a structural business risk: power has become the limiting factor in building, scaling, and monetising AI‑capable infrastructure. Highly dynamic AI workloads drive power fluctuations of up to 40–70% within milliseconds, creating new challenges for power quality and increasing the risk of outages and equipment damage. The report notes, “In an environment where milliseconds matter, flexibility and data expertise are the critical differentiators.”

The findings suggest that reliable, high‑resolution energy data now underpins predictive maintenance, capacity planning, and revenue optimisation in modern data centres. Janitza says operators who capture and analyse detailed power and power‑quality data can detect emerging faults earlier, extend the lifetime of critical components, and avoid unplanned downtime. As rack power densities rise towards 40–120 kW and AI models continue to grow, the study finds that comprehensive monitoring across the entire power chain, from grid connection to individual racks, is becoming a decisive competitive factor.
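PUE itself is a simple ratio of two meter readings, total facility power over IT equipment power, which makes the 23% non-tracking figure all the more striking. A minimal sketch with hypothetical readings:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power (>= 1.0)."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# Hypothetical site: 1,200 kW drawn at the utility feed, 750 kW reaching IT racks.
print(f"PUE = {pue(1200, 750):.2f}")  # 1.60, inside the 1.5-2.0 band most respondents reported
```

Everything above 1.0 represents cooling, power conversion, and other overhead, so tracking the ratio over time is the first step towards the fault detection and capacity planning the study describes.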
For more from Janitza, click here.


