Data Centre Infrastructure News & Trends


AFL: Why data centre leaders are heading to Stand C110
AFL, a manufacturer of fibre optic cables and connectivity equipment, will be attending this year's Data Centre World in London, 4–5 March 2026, exhibiting on Stand C110. In this article, the company tells you what to expect:

Your AI clusters are hungry for bandwidth. GPU-to-GPU latency is make or break, and you’re being asked to scale yesterday, all while maintaining uptime, managing density, and staying within budget. AFL understands. It has engineered solutions specifically for these problems.

What you’ll experience at Stand C110:
• Hands-on demos
• Industry-first technology
• Solutions for your biggest bottlenecks
• Modular white space infrastructure you can deploy rapidly
• AI-GPU connectivity optimised for ultra-low latency compute fabrics
• High-density DCI solutions that maximise available space in cable ducts
• Pre-terminated, plug-and-play modules with full traceability to help you deploy faster
• Fujikura’s Multi-Core, Hollow-Core, and Mass-Fusion splicers in action – the precision tools that research labs and hyperscalers trust for next-generation fibre deployment
• Small-form-factor assemblies – reduce diameter, increase density, maximise airflow and cable pathways
• Test with confidence – advanced inspection tools that validate performance before the first packet flows

Why AFL for hyperscale data centres?
• Globally available — consistent supply chain, wherever you build
• Proven reliability — supporting the world’s largest hyperscale networks
• Modular and scalable — grow your infrastructure without forklift upgrades
• Built for AI workloads — engineered for the bandwidth and latency demands of dense GPU clusters

Who should visit the stand?
• Network engineers deploying or upgrading DCI links
• Data centre architects planning next-generation AI infrastructure
• Infrastructure leaders evaluating fibre solutions for hyperscale growth
• Operations teams seeking faster commissioning and maintenance workflows

Ready to enhance hyperscale efficiency?

Bring your toughest connectivity challenges to Stand C110 and see how AFL’s team is already solving the real-world problems you face, with innovative solutions ready for immediate global deployment. Find out how its optical fibre experts can help you scale seamlessly across growing hyperscale deployments for AI and cloud.

For more from AFL, click here.

Vertiv launches UK UPS trade-in programme
Vertiv, a global provider of critical digital infrastructure, has launched a UK-wide 'Power Swap' programme that allows organisations to replace older single-phase uninterruptible power supply (UPS) systems with newer models. The initiative includes collection, refurbishment, and recycling of legacy equipment to support compliance with Waste Electrical and Electronic Equipment (WEEE) regulations and reduce electronic waste.

The programme applies to single-phase UPS units up to 5kVA from any manufacturer. Participants receive a discount code for a replacement unit and can arrange free, on-site collection of the old system.

Recycling and upgrade process

Vertiv manages the process from registration to recycling. Businesses submit details of an existing UPS through a partner, receive approval, purchase a replacement unit, and schedule collection of the retired equipment. Eligible replacements include the Vertiv Edge, Vertiv Liebert GXT5, and Vertiv Liebert GXE UPS ranges.

Stuart McDougall, Channel Marketing Specialist, Northern Europe at Vertiv, says, “While many UPS vendors offer recycling or limited trade-in options, the Vertiv Power Swap programme is designed specifically for the UK channel and single-phase UPS market, uniquely combining discount incentives and an efficient trade-in process.

"The Vertiv Power Swap program helps our partners to reduce their carbon footprint. With the launch of this new initiative, we're supporting UK businesses to upgrade their power protection whilst decreasing their environmental impact."

Martin Ryder, Channel Sales Director, Northern Europe at Vertiv, adds, “This program strengthens our commitment to the channel by providing partners with an opportunity for enhanced margins, and customers with reliable, innovative UPS technology.

"The Power Swap Program makes it easier than ever to transition to high-efficiency solutions like the Vertiv Edge, Vertiv Liebert GXT5, and Vertiv Liebert GXE, enabling greater uptime and cost savings in today's demanding IT environments.”

The programme is available to UK customers and partners until the end of 2026.

For more from Vertiv, click here.

LINX to upgrade Lunar Digital data centre into fully resilient PoP

The London Internet Exchange (LINX), an internet exchange point (IXP) operator, is planning to upgrade its presence at the Lunar Digital data centre in Manchester, UK, transitioning the site from a single-homed transmission site to a dual-homed, fully resilient point of presence (PoP).

LINX initially went live at Lunar Digital to gauge market demand for an additional PoP at the LINX Manchester interconnection hub. The reportedly strong uptake of services since the September 2024 deployment has now indicated to the company the need for a full, diverse, and resilient presence from the IXP at the facility.

Jennifer Holmes, CEO of LINX, comments, “Manchester continues to establish itself as a powerhouse digital hub for the North, and the response and demand for LINX services from networks at Lunar Digital has exceeded our expectations.”

Mike Hellers, Product Development Manager for LINX, adds, “Our Manchester LAN has tripled in size over the last couple of years, now enabling 130 networks to access low-latency services, and [it] regularly carries more than 900Gbps of traffic at busy periods.

“Upgrading Lunar to a resilient PoP ensures existing LINX members and future networks can benefit from enhanced reliability, additional capacity, and greater choice as the regional ecosystem continues to grow.”
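To illustrate why dual-homing matters for reliability, here is a minimal back-of-the-envelope sketch in Python. The availability figures are our assumptions for illustration, not numbers published by LINX, and the calculation assumes the two paths fail independently:

    # Illustrative only: assumed per-path availability, not published LINX figures.
    single_path = 0.999  # assumed 99.9% availability for one transmission path
    # A dual-homed PoP stays up unless both independent paths fail together.
    dual_homed = 1 - (1 - single_path) ** 2

    minutes_per_year = 365 * 24 * 60
    print(f"Single-homed: ~{(1 - single_path) * minutes_per_year:.0f} min downtime/year")
    print(f"Dual-homed:   ~{(1 - dual_homed) * minutes_per_year:.1f} min downtime/year")

Under these assumptions, dual-homing takes expected downtime from roughly 526 minutes a year to well under one minute, which is the engineering case for the upgrade described above.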
Manchester as a growing hub

Lunar Digital operates three data centres in Manchester, with LINX accessible via a single cross connect from Lunar1 and Lunar2. The announced move underscores the rapid expansion of network operators, cloud platforms, content providers, and digital businesses choosing to colocate in Manchester.

“We’re thrilled to deepen our collaboration with LINX,” says Rob Garbutt, CEO of Lunar Digital. “The upgrade to a full PoP reflects not only the growth of Lunar Digital, but the wider demand for robust, high‑performance, low-latency connectivity options across the North of England.”

Networks at Lunar Digital will be able to access services at the LINX Manchester internet exchange via a single cross connect. This includes services like peering, private VLANs, Closed User Groups, and the exclusive Microsoft Azure Peering Service (MAPS).

The transition work is due to be completed in the coming weeks.

For more from LINX, click here.

Infosys, ExxonMobil collaborate on immersion cooling

Infosys, an Indian multinational provider of IT services, has expanded its collaboration with ExxonMobil, a US multinational oil and gas company, to develop and deploy ExxonMobil data centre immersion fluids for AI and high-performance computing environments.

The initiative focuses on improving energy efficiency and supporting higher-density compute infrastructure. It builds on the companies’ existing relationship and targets the growing power and cooling requirements associated with AI workloads.

Infosys will combine ExxonMobil’s immersion cooling fluids with its Topaz AI portfolio and Cobalt cloud services framework. The aim is to support the design and deployment of cooling systems across cloud and data centre environments.

AI-driven optimisation and cloud integration

According to Infosys, Topaz will be used to optimise cooling operations through real-time monitoring and predictive maintenance. Cobalt will provide cloud integration and deployment support for enterprise environments.
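Neither company has published implementation details, but a minimal sketch of the kind of real-time monitoring logic described above might look like the following. Every name, threshold, and reading here is a hypothetical assumption, purely to illustrate the concept:

    # Hypothetical sketch of threshold-based monitoring for an immersion tank.
    # Names, thresholds, and readings are illustrative assumptions only.
    from statistics import mean

    FLUID_TEMP_LIMIT_C = 55.0  # assumed safe operating ceiling for the fluid

    def check_tank(readings_c: list[float]) -> str:
        """Flag a tank whose rolling average fluid temperature drifts too high."""
        avg = mean(readings_c[-10:])  # rolling average of the last 10 samples
        if avg > FLUID_TEMP_LIMIT_C:
            return "alert: schedule maintenance"  # predictive-maintenance trigger
        return "ok"

    print(check_tank([51.2, 52.0, 53.5, 54.1, 56.3, 57.0, 56.8, 57.2, 57.5, 57.9]))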
The collaboration is expected to target hyperscalers, enterprises, and public sector organisations across sectors including financial services, telecoms, manufacturing, and energy.

Ashiss Kumar Dash, EVP & Global Head - Services, Utilities, Resources, Energy and Enterprise Sustainability at Infosys, says, “Our expanded collaboration with ExxonMobil marks a pivotal step in scaling next-generation solutions.

"By leveraging Infosys Topaz for real-time AI-driven optimisation and Infosys Cobalt for secure, scalable cloud deployment with ExxonMobil’s advanced energy expertise, we are addressing the urgent need for more efficient high-performance digital infrastructure.

"This collaboration has the potential to deliver measurable outcomes by reducing data centre energy costs and carbon emissions while empowering enterprises to scale responsibly and meet the demands of an AI-powered future.”

Alistair Westwood, Global Marketing Manager, ExxonMobil Product Solutions Company, adds, “This collaboration reflects our commitment to innovation by allowing us to apply our energy and thermal management expertise to the evolving landscape of digital infrastructure.

"Infosys’ suite of AI and digital services is enabling us to pilot and adopt infrastructure that is smarter, efficient, and more resilient.”

ERIKS to showcase valves expertise at Data Centre World 2026

ERIKS UK & I, which has recently become a Rubix company, is exhibiting on Stand F6 at Data Centre World in London (4–5 March 2026), highlighting its experience in supporting designers and contractors working on increasingly complex cooling infrastructure. The company will showcase its valve expertise in data centre cooling applications, as AI-driven workloads place increasing demands on chilled water systems.

The rapid adoption of AI workloads is reshaping data centre design, with higher rack densities and new cooling architectures placing greater strain on mechanical systems. Chilled water networks are now required to operate at higher flow rates, deliver tighter control, and perform reliably in more demanding operating conditions, increasing the importance of valve selection, consistency, and long-term performance.

ERIKS supports data centre HVAC and chilled water applications with a broad portfolio of valve technologies covering the core functions commonly specified in cooling systems, including isolation, regulation, and protection. The offering spans a wide range of sizes, materials, and actuation options, enabling engineers to standardise valve selection while accommodating differences in system design, environmental exposure, and future expansion plans.
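As background to why valve selection matters in chilled water design (this worked example is ours, not ERIKS's), control valves are commonly sized using the standard flow coefficient Kv, where Q = Kv·√(Δp/SG) with Q in m³/h, Δp in bar, and SG the specific gravity (≈1 for water). A minimal sketch with assumed figures:

    # Illustrative valve-sizing arithmetic using the standard Kv flow coefficient.
    # Q = Kv * sqrt(dp / SG)  ->  Kv = Q / sqrt(dp / SG)
    # Figures below are assumptions for a hypothetical chilled water branch.
    import math

    flow_m3h = 90.0  # required chilled water flow, m3/h (assumed)
    dp_bar = 0.5     # allowable pressure drop across the valve, bar (assumed)
    sg = 1.0         # specific gravity of water

    kv_required = flow_m3h / math.sqrt(dp_bar / sg)
    print(f"Required Kv: {kv_required:.0f}")  # ~127; select the next standard Kv up

In practice an engineer would then pick the nearest standard valve size above the calculated Kv, which is one reason the early engagement ERIKS describes below can prevent undersized or oversized selections.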
Meeting changing data centre design

Jonny Herbert, Business Development Manager for Data Centres at ERIKS UK & I, says, “AI is accelerating the pace of change in data centre design, particularly on the cooling side.

"While valves are often treated as commodity components, their role in controlling and protecting chilled water systems is critical. Our approach is shaped by years of experience in the data centre sector, prioritising robustness, material choice, and practical design.”

ERIKS says it encourages earlier engagement on valve selection during the design and specification stages of data centre projects. Factors such as water quality, environmental exposure, coating requirements, and access for operation and maintenance can all influence long-term system reliability. Addressing these considerations upfront can help reduce the risk of premature failure, rework, or delays during installation.

Jonny continues, “As data centre projects become larger, more complex, and more tightly integrated with digital infrastructure, Data Centre World has become an important meeting point for the engineers, consultants, and contractors shaping the next generation of facilities. Our presence reflects both the maturity of our involvement in the sector and the growing need for practical, experience-led support as cooling requirements continue to evolve.”

Visit ERIKS UK & I on Stand F6 at Data Centre World London (4–5 March 2026) to discuss valve requirements for data centre cooling and chilled water applications. Learn more by visiting the company's website.

Trane to acquire LiquidStack
Trane Technologies, a US manufacturer of heating, ventilation, and air conditioning (HVAC) systems, has entered into a definitive agreement to acquire LiquidStack, a US-based provider of liquid cooling technology for data centres.

LiquidStack, headquartered in Carrollton, Texas, develops direct-to-chip and immersion cooling systems for high-density and hyperscale computing environments. The company’s technology is used to support generative AI and other compute-intensive workloads.

Trane Technologies made a minority investment in LiquidStack in 2023. The proposed acquisition expands its data centre thermal management portfolio, which includes chillers, heat rejection systems, controls, liquid distribution, and on-chip cooling.

Expanding liquid cooling capabilities

The deal includes LiquidStack’s manufacturing, engineering, and research and development operations in Texas and Hong Kong. Following completion, the business will operate within the Commercial HVAC unit of Trane Technologies’ Americas segment.

Holly Paeper, President, Commercial HVAC Americas, Trane Technologies, says, “Rising chip-level power and heat densities, combined with increasingly variable workloads, are redefining thermal management requirements inside modern data centres.

"Customers need integrated cooling solutions that scale from the central plant to the chip and can adapt as performance demands continue to evolve.

"LiquidStack’s direct-to-chip and immersion cooling capabilities and talent, combined with Trane’s systems expertise and global footprint, strengthen our ability to deliver end-to-end, future-ready thermal management across the entire data centre ecosystem.”

LiquidStack co-founder and CEO Joe Capes will join Trane Technologies in a leadership role and will continue to lead the business. Joe says, “LiquidStack has been on a mission to innovate and deliver the most advanced, powerful, and sustainable liquid cooling solutions.

"Joining Trane Technologies enables us to accelerate that mission with the resources, scale, and global reach needed to power next-generation AI workloads in the most demanding compute environments."

The transaction is expected to close in early 2026, subject to customary conditions. Financial terms have not been disclosed. Trane Technologies also recently announced the acquisition of Stellar Energy, which is expected to complete in the first quarter of 2026.

For more from Trane Technologies, click here.

TES Power to deliver modular power for Spanish DC

TES Power, a provider of power distribution equipment and modular electrical rooms for data centres, has been selected to deliver 48MW of modular power infrastructure for a new greenfield data centre development in Northern Spain, designed to support artificial intelligence workloads.

The facility is intended for high-density compute environments, where power resilience, scalability, and deployment speed are key considerations. Growing demand from AI training and inference continues to place pressure on operators to deliver robust electrical infrastructure without compromising availability or reliability.

Modular power skids for high-density AI environments

As part of the project, TES Power will design and manufacture 25 fully integrated 2.5MW IT power skids. Each skid is a self-contained module incorporating cast resin transformers, LV switchgear, parallel UPS systems, end-of-life battery autonomy, CRAH-based cooling, and high-capacity busbar interconnections.

The skids are designed to provide continuous power to critical IT loads, with automatic transfer from mains supply to battery and generator systems in the event of a supply disruption, a requirement increasingly associated with AI-driven data centre operations.
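On the arithmetic, 25 skids at 2.5MW amount to more installed capacity than the 48MW headline figure, which implies built-in headroom or redundancy. The announcement does not state the redundancy scheme; the sketch below simply works through the stated numbers:

    # Working through the stated figures; the redundancy scheme is not disclosed.
    skids = 25
    skid_mw = 2.5
    delivered_mw = 48.0

    installed_mw = skids * skid_mw           # 62.5 MW of skid capacity
    headroom_mw = installed_mw - delivered_mw  # 14.5 MW above the headline figure
    print(f"Installed: {installed_mw} MW, headroom: {headroom_mw} MW "
          f"({headroom_mw / delivered_mw:.0%} over the 48 MW delivered)")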
Michael Beagan, Managing Director at TES Power, says, “AI is fundamentally changing the scale, speed, and resilience requirements of data centre power infrastructure. This project reflects exactly where the market is heading: larger, higher-density facilities that cannot tolerate risk or delay.

"By delivering fully integrated, factory-tested power skids, we’re helping our client accelerate deployment while maintaining the absolute reliability that AI workloads demand.”

The project uses off-site manufacture to reduce programme risk and enable parallel delivery, allowing electrical systems to be progressed while civil and building works continue on-site. Each skid will undergo full Factory Acceptance Testing prior to shipment, reducing commissioning risk and limiting on-site installation time.

Building Information Modelling is being used to digitally coordinate each skid with wider site services, supporting installation sequencing, clash detection, and longer-term operational planning. TES Power’s scope also includes project management, site services, and final commissioning.

Direct-to-chip liquid cooling market to reach $7.9bn by 2033
Rising computational intensity has placed unprecedented pressure on traditional air-based cooling systems. High-performance computing (HPC), artificial intelligence (AI), cloud data centres, and advanced semiconductor architectures generate dense heat loads that are increasingly difficult to manage using conventional thermal management approaches.

According to Research Intelo, a global market research and consulting firm, the global direct-to-chip liquid cooling market was valued at $1.3 billion (£951 million) in 2024 and is projected to reach $7.9 billion (£5.7 billion) by 2033, expanding at a CAGR of 22.3%. This trajectory underscores the growing reliance on liquid-based cooling technologies to support next-generation digital infrastructure.
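As a quick sanity check on those figures, compounding $1.3 billion at 22.3% a year over the nine years from 2024 to 2033 does land at roughly the reported $7.9 billion. A minimal sketch in Python:

    # Verify the headline forecast: $1.3bn compounding at a 22.3% CAGR, 2024 -> 2033.
    value_2024_bn = 1.3
    cagr = 0.223
    years = 2033 - 2024  # nine compounding periods

    value_2033_bn = value_2024_bn * (1 + cagr) ** years
    print(f"Projected 2033 value: ${value_2033_bn:.2f}bn")
    # ~ $7.96bn, consistent with the report's ~$7.9bn after rounding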
Direct-to-chip liquid cooling has emerged as a practical and scalable response to these challenges, offering targeted heat removal directly from processors and other high-power components. By reducing thermal resistance and improving heat transfer efficiency, this approach supports higher rack densities while aligning with broader energy efficiency and sustainability objectives.

What exactly is direct-to-chip liquid cooling?

Direct-to-chip liquid cooling is a thermal management method in which a liquid coolant flows through cold plates mounted directly onto heat-generating components such as CPUs, GPUs, and accelerators. Heat is absorbed at the source and transported away through a closed-loop liquid system, minimising reliance on air circulation.

Compared to immersion cooling, which involves submerging entire systems in dielectric fluids, direct-to-chip solutions integrate more easily with existing data centre architectures. This balance between high cooling efficiency and operational compatibility has positioned the technology as a preferred option for gradual infrastructure upgrades and hybrid cooling deployments.
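The underlying heat-removal arithmetic is straightforward: the heat a coolant loop carries away is Q = ṁ·c_p·ΔT, where ṁ is the mass flow rate, c_p is the coolant's specific heat, and ΔT is the temperature rise across the cold plate. A minimal sketch, using water and an assumed 700W chip-class load:

    # Q = m_dot * c_p * dT  ->  m_dot = Q / (c_p * dT)
    # Assumed figures for illustration: a ~700 W accelerator cooled by water.
    heat_w = 700.0     # heat load at the cold plate, W (assumed)
    cp_water = 4186.0  # specific heat of water, J/(kg*K)
    delta_t_k = 10.0   # coolant temperature rise across the plate, K (assumed)

    m_dot = heat_w / (cp_water * delta_t_k)  # kg/s of water required
    # For water, 1 kg/min is roughly 1 L/min.
    print(f"Required flow: {m_dot:.4f} kg/s (~{m_dot * 60:.1f} L/min)")

Roughly one litre per minute per 700W device, at a modest 10K temperature rise, is the kind of targeted, low-volume heat removal that makes cold plates attractive against moving the equivalent heat with air.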
Which factors are driving market growth?

1. Technological innovation and automation

As processing power and server densities continue to rise, traditional air-cooling solutions are approaching their practical limits, increasing the risk of thermal throttling and hardware degradation. Direct-to-chip liquid cooling technologies provide a highly efficient alternative by enabling precise and consistent heat removal from critical components.

Ongoing innovation in cold plate design, advanced coolants, and system integration is further enhancing performance and reliability. The incorporation of smart sensors, real-time monitoring tools, and automated flow controls enables predictive maintenance and dynamic thermal optimisation. These advancements are making direct-to-chip liquid cooling more scalable and accessible across a wide range of computing environments, from hyperscale data centres to edge deployments.

2. Shifts in end-user demand

The rapid expansion of data-intensive applications, including AI, machine learning, blockchain, and the Internet of Things (IoT), has led to unprecedented heat generation within servers and computing clusters. Enterprises and data centre operators face increasing pressure to maintain high performance and uptime while controlling operational costs and energy consumption. Direct-to-chip liquid cooling addresses these demands by delivering superior thermal efficiency and reducing dependence on energy-intensive air conditioning systems.

The ability to support higher rack densities is particularly valuable in urban data centres and edge locations where space and power constraints are significant. As organisations prioritise sustainability and long-term infrastructure resilience, adoption of liquid cooling technologies is expected to expand across multiple industry verticals.

3. Regulatory support and government incentives

Regulatory frameworks aimed at reducing energy consumption and greenhouse gas emissions in data centres are creating favourable conditions for advanced cooling technologies. In regions such as Europe and North America, government incentives - including tax benefits, grants, and energy efficiency programmes - are encouraging the adoption of low-impact thermal management solutions.

In parallel, international standards for green data centre operations are pushing organisations to modernise their infrastructure and improve environmental performance. These regulatory and policy-driven factors are fostering innovation, reducing adoption barriers, and supporting sustained market growth.

What challenges are limiting wider adoption?

Despite strong growth prospects, the market faces several challenges that could impact adoption rates. Regulatory uncertainty related to safety standards, environmental compliance, and fluid handling requirements can complicate deployment decisions. Volatility in raw material prices, particularly for copper and specialised cooling fluids, may also influence production costs and pricing strategies. Additionally, standardisation gaps and interoperability issues can pose challenges in complex or legacy IT environments.

Addressing these constraints will require continued collaboration among technology providers, regulators, and end-users to establish clear guidelines, improve compatibility, and build confidence in long-term system reliability.

Which technologies are shaping product innovation?

Manufacturers are continually refining cold plate designs to improve heat transfer efficiency and compatibility with next-generation processors. Innovations such as microchannel architectures, optimised flow paths, and advanced alloys enable higher thermal performance while minimising pressure drop and energy consumption.

Customisation tailored to specific processor architectures and workload requirements has become increasingly common. This flexibility supports diverse applications across AI, HPC, cloud computing, and enterprise data centres, further strengthening the market’s value proposition.

What regional trends are emerging?

• North America dominates the global market, accounting for over 38% of total market share in 2024. This leadership is driven by a mature data centre ecosystem, advanced IT infrastructure, and early adoption of innovative cooling technologies. The strong presence of hyperscale data centre operators and cloud service providers, particularly in the US, has accelerated deployment across the region.

• Asia Pacific is projected to register the fastest growth, with a CAGR of 27.1% from 2025 to 2033. Rapid digital transformation, expanding cloud infrastructure, and increasing investments in hyperscale and edge data centres are fuelling demand. Countries such as China, India, Japan, and Singapore are witnessing rising adoption of AI and HPC across sectors including fintech, healthcare, and smart cities, further driving the need for advanced cooling solutions.

• Latin America, the Middle East, and Africa are experiencing gradual adoption of direct-to-chip liquid cooling technologies. While infrastructural limitations, budget constraints, and skills gaps have slowed deployment, growing awareness of long-term cost savings and sustainability benefits is steadily improving the market outlook in these regions.

What does the competitive landscape look like?

The market features a combination of established thermal management companies and specialised liquid cooling solution providers. Competition is primarily based on cooling efficiency, system reliability, ease of integration, and total cost of ownership.

Strategic partnerships between hardware manufacturers, data centre operators, and cooling technology providers are becoming increasingly common. Continuous investment in research and development remains critical, as cooling requirements evolve alongside processor design and workload intensity.

What is the future outlook for the direct-to-chip liquid cooling market?

The transition towards high-density computing shows no signs of slowing. Market forecasts indicate strong expansion, with the direct-to-chip liquid cooling market expected to grow from $1.3 billion (£951 million) in 2024 to $7.9 billion (£5.7 billion) by 2033, reflecting sustained demand across data centre, enterprise, and research environments.

As processors become more powerful and energy efficiency expectations rise, direct-to-chip liquid cooling is expected to shift from selective adoption to broader implementation. Continued standardisation, declining component costs, and increased operational familiarity are likely to accelerate this transition.

Conclusion: Is direct-to-chip liquid cooling becoming a standard rather than an option?

Direct-to-chip liquid cooling addresses some of the most critical challenges facing modern computing infrastructure. By enabling efficient heat management, supporting high-performance workloads, and aligning with sustainability and energy efficiency goals, the technology is redefining thermal management strategies.

As digital workloads intensify and infrastructure demands evolve, the market’s trajectory raises a defining question: Will direct-to-chip liquid cooling soon be regarded as a baseline requirement for advanced computing environments rather than a specialised enhancement?

OpenNebula validated with NVIDIA Spectrum-X

OpenNebula Systems has today announced that its cloud management and virtualisation platform, OpenNebula, has been validated by NVIDIA as an orchestration platform integrated with NVIDIA Spectrum-X Ethernet networking.

OpenNebula is used as a multi-tenant platform for AI factories, providing isolation, governance, and lifecycle management for accelerated infrastructure. The company says the validation supports the deployment of AI-ready cloud infrastructure using Spectrum-X Ethernet.

Spectrum-X Ethernet is designed for AI networking environments, where latency, congestion, and jitter can affect large-scale training and multi-tenant inference workloads. OpenNebula now integrates with the networking platform to provide a software-defined cloud environment for AI applications, with multi-tenancy across compute, GPU, and network layers on a shared Spectrum-X Ethernet fabric.

Automated orchestration for AI workloads

The integration allows OpenNebula to orchestrate tenant provisioning, network configuration, and device attachment through Spectrum-X Ethernet. The OpenNebula control plane also runs on NVIDIA Air, which provides a platform for testing, integration, and validation. Customers can use the environment to evaluate the integration, run simulations, and test automation workflows for AI factory deployments.
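The announcement does not include implementation detail, but the provisioning flow it describes might be sketched along the following lines. Every function and name below is a hypothetical stand-in, for illustration only; none of it is OpenNebula's or NVIDIA's actual API:

    # Hypothetical sketch of the described provisioning sequence. None of these
    # names are real OpenNebula or NVIDIA Spectrum-X APIs; the stubs exist
    # purely to illustrate the orchestration order for a new tenant.
    from dataclasses import dataclass

    @dataclass
    class Tenant:
        name: str
        gpus: int

    def create_isolated_vnet(tenant: str) -> str:
        # Stand-in for the per-tenant network-configuration step on the fabric.
        return f"vnet-{tenant}"

    def allocate_gpu_vms(count: int, vnet: str) -> list[str]:
        # Stand-in for VM provisioning with GPU device attachment.
        return [f"vm-{vnet}-{i}" for i in range(count)]

    def provision(tenant: Tenant) -> None:
        vnet = create_isolated_vnet(tenant.name)   # 1. network isolation
        vms = allocate_gpu_vms(tenant.gpus, vnet)  # 2. compute + GPU attachment
        print(f"{tenant.name}: {len(vms)} GPU VMs on {vnet}")

    provision(Tenant(name="research-lab", gpus=4))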
Ignacio M. Llorente, CEO of OpenNebula Systems, says, “Through our collaboration with NVIDIA, we are extending OpenNebula to support the networking and performance requirements of modern AI infrastructures.

"This integration allows customers to manage multi-tenant AI environments where NVIDIA Grace Blackwell and NVIDIA Grace Blackwell Ultra compute and Spectrum-X Ethernet networking are tightly orchestrated and optimised as a single platform.”

Amit Katz, VP of Networking at NVIDIA, adds, “OpenNebula’s integration with NVIDIA Spectrum-X Ethernet brings cloud-native agility to the AI Factory, enabling customers to orchestrate multi-tenant accelerated infrastructure with maximum performance and predictability.

"NVIDIA Air enables OpenNebula and our ecosystem partners to validate and simulate large-scale AI Factory deployments, giving customers a powerful environment to evaluate and accelerate their AI cloud strategies.”

For more from OpenNebula, click here.

Carrier launches CRAH for data centres

Carrier, a manufacturer of HVAC, refrigeration, and fire and security equipment, has introduced the AiroVision 39CV Computer Room Air Handler (CRAH), expanding its QuantumLeap portfolio with a precision cooling system designed for medium- to large-scale data centre environments.

Developed and manufactured in Europe, the AiroVision 39CV is intended to support energy efficiency, reliability, and shorter lead times, while meeting EU regulatory requirements.

The unit offers a cooling capacity from 20kW to 250kW and is designed to operate with elevated chilled water temperatures. Carrier states that this approach can improve energy performance and contribute to lower power usage effectiveness (PUE) by enabling more efficient chiller operation and supporting free cooling strategies.
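For readers less familiar with the metric, PUE is simply total facility energy divided by IT equipment energy, so anything that trims cooling energy (such as more free cooling hours from elevated water temperatures) moves the ratio towards the ideal of 1.0. A quick illustration with assumed figures:

    # PUE = total facility energy / IT equipment energy. All figures below are
    # assumptions, purely to show how reduced cooling energy lowers PUE.
    it_load_kwh = 1_000_000  # annual IT energy (assumed)
    cooling_kwh = 350_000    # annual cooling energy before improvement (assumed)
    other_kwh = 100_000      # lighting, distribution losses, etc. (assumed)

    pue_before = (it_load_kwh + cooling_kwh + other_kwh) / it_load_kwh
    # Assume a 20% cooling-energy saving from higher water temperatures.
    pue_after = (it_load_kwh + cooling_kwh * 0.8 + other_kwh) / it_load_kwh
    print(f"PUE before: {pue_before:.2f}, after: {pue_after:.2f}")  # 1.45 -> 1.38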
Factory-integrated design for simplified deployment

The AiroVision 39CV features a built-in controller for real-time monitoring, adaptive operation, and integration with building management systems. The control platform can be configured to suit specific operational requirements.

All components are factory-integrated to reduce on-site installation and commissioning work. Additional features, including an auto transfer switch and ultra-capacitors, are intended to support service continuity in critical environments.

Michel Grabon, EMEA Marketing and Market Verticals Director at Carrier, says, “The 39CV is a strategic addition to our QuantumLeap Solutions portfolio, designed to help data centre operators address today’s most pressing challenges: increasing thermal loads from higher computing densities, the need to reduce energy consumption to meet sustainability targets, and the pressure to deploy solutions quickly and efficiently.

"With its high-efficiency design, intelligent control system, and factory-integrated components, the 39CV helps operators to improve energy performance, optimise installation time, and build scalable infrastructures with confidence.”

For more from Carrier, click here.


