
Latest News


Vertiv expands immersion liquid cooling portfolio
Vertiv, a global provider of critical digital infrastructure, has introduced the Vertiv CoolCenter Immersion cooling system, expanding its liquid cooling portfolio to support AI and high-performance computing (HPC) environments. The system is available now in Europe, the Middle East, and Africa (EMEA).

Immersion cooling submerges entire servers in a dielectric liquid, providing efficient and uniform heat removal across all components. This is particularly effective for systems where power densities and thermal loads exceed the limits of traditional air-cooling methods. Vertiv has designed its CoolCenter Immersion product as a "complete liquid-cooling architecture", aiming to enable reliable heat removal for dense compute ranging from 25 kW to 240 kW per system.

Sam Bainborough, EMEA Vice President of Thermal Business at Vertiv, explains, “Immersion cooling is playing an increasingly important role as AI and HPC deployments push thermal limits far beyond what conventional systems can handle.

“With the Vertiv CoolCenter Immersion, we’re applying decades of liquid-cooling expertise to deliver fully engineered systems that handle extreme heat densities safely and efficiently, giving operators a practical path to scale AI infrastructure without compromising reliability or serviceability.”

Product features

The Vertiv CoolCenter Immersion is available in multiple configurations, including self-contained and multi-tank options, with cooling capacities from 25 kW to 240 kW. Each system includes an internal or external liquid tank, coolant distribution unit (CDU), temperature sensors, variable-speed pumps, and fluid piping, all intended to deliver precise temperature control and consistent thermal performance.

Vertiv says that dual power supplies and redundant pumps provide high cooling availability, while integrated monitoring sensors, a nine-inch touchscreen, and building management system (BMS) connectivity simplify operation and system visibility. The system’s design also enables heat reuse opportunities, supporting more efficient thermal management strategies across facilities and aligning with broader energy-efficiency objectives.

For more from Vertiv, click here.
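To give a rough sense of the coolant volumes behind the 25 kW to 240 kW capacities quoted above, the short sketch below applies the generic heat-balance relationship Q = ṁ·cp·ΔT. The fluid properties and the 10°C temperature rise are illustrative assumptions for a typical single-phase dielectric coolant, not figures published by Vertiv.

```python
# Back-of-the-envelope sketch: coolant flow needed to carry a given heat load.
# The fluid properties below are assumed values for a generic single-phase
# dielectric coolant, not manufacturer data.

def required_flow_lpm(heat_kw: float,
                      delta_t_c: float = 10.0,        # assumed coolant temperature rise
                      cp_kj_per_kg_k: float = 2.1,    # assumed specific heat of the fluid
                      density_kg_per_l: float = 0.85  # assumed fluid density
                      ) -> float:
    """Volumetric flow (litres per minute) to remove heat_kw at the given temperature rise."""
    mass_flow_kg_s = heat_kw / (cp_kj_per_kg_k * delta_t_c)  # Q = m_dot * cp * dT
    return mass_flow_kg_s / density_kg_per_l * 60.0

for load_kw in (25, 100, 240):  # span of the capacity range quoted for the system
    print(f"{load_kw:>3} kW -> ~{required_flow_lpm(load_kw):.0f} L/min at a 10 °C rise")
```

The exercise simply illustrates why pump capacity, piping, and tank design scale with heat load; actual system flow rates depend on the fluid and operating envelope the manufacturer specifies.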

Zoom to open new UK data centre
Zoom has announced a major step in its UK growth strategy with plans to open a UK data centre in 2026, designed to align with the country’s data residency requirements and unlock additional opportunities across regulated industries such as the public sector, healthcare, and financial services, ultimately offering increased choice for customers.

Louise Newbury-Smith, Head of UK & Ireland, Zoom, comments, “Launching a UK data centre is a significant milestone in Zoom’s journey to provide secure, compliant, and high performance services for all of our customers. By investing in local infrastructure, we are ensuring that organisations across the UK, from financial services to government, can confidently embrace the future of AI first collaboration.”

Zoom has evolved from a video communications platform into a comprehensive AI-first collaboration suite. Today, UK organisations use Zoom Workplace for meetings, chat, telephony, workspace reservations, and productivity automation. By hosting services locally, the new UK data centre will help local customers to benefit from both productivity gains and enhanced data residency offerings.

The facility is expected to go live in the first half of 2026 and support a wide range of services including Meetings, Webinars, Rooms, Team Chat, Phone, Notes, Docs, and Zoom AI Companion. UK-only meeting zones and telephony gateways with local dial-in numbers will be available at launch, with additional services such as Zoom Contact Centre and PCI compliance integration phased in shortly after.

UK: A Pillar of Growth and Innovation for Zoom

The UK is one of Zoom’s most dynamic and forward-thinking regions, the company says, consistently ranking as a top market in EMEA. Over the past three years, UK-based organisations, from FTSE 100 companies to public sector bodies, have driven the adoption of hybrid work and next-generation collaboration tools.

Steve Rafferty, Head of EMEA & APAC, Zoom, notes, “Our customers and partners have been clear: local infrastructure, compliance, and greater choice over where their data is stored are critical to unlocking digital transformation in regulated industries. This investment demonstrates Zoom’s long term commitment to the UK market, and we are excited to see how it will empower organisations to innovate, collaborate, and thrive.”

In June 2024, Zoom underscored its long-term commitment to the UK by opening the Experience Centre as part of its London office. This state-of-the-art facility was launched as a hub for innovation, executive briefings, and hands-on demonstrations of Zoom’s evolving collaboration platform. Since its opening, the London Experience Centre has welcomed over 5,500 guests across customers, partners, and industry leaders, serving as a showcase for how critical the UK market is to Zoom’s broader EMEA strategy. The centre also exemplifies Zoom’s approach: co-creating solutions with and for customers, prospects, and partners; supporting digital transformation; and nurturing a vibrant local technology ecosystem.

The new data centre is set to play a pivotal role in Zoom's long-term growth strategy across both the UK and the wider EMEA region. This strategic investment complements Zoom's existing infrastructure in Saudi Arabia, Germany, and the Netherlands, which collectively serve European, Middle Eastern, and African markets. Through ongoing infrastructure investments, new partnerships, and innovations tailored to regional needs, Zoom says it is further strengthening its presence and deepening its roots throughout the region.

HUMAIN, AirTrunk to build DCs in Saudi Arabia
AI company HUMAIN and hyperscale data centre operator AirTrunk have agreed a strategic partnership to develop data centres in Saudi Arabia, including an initial project valued at around $3 billion (£2.2 billion) for a new campus in the country.

HUMAIN is headquartered in Saudi Arabia and focuses on artificial intelligence capability development, while AirTrunk operates hyperscale data centre platforms across Asia Pacific. HUMAIN is owned by the Public Investment Fund, and AirTrunk is backed by Blackstone and the Canada Pension Plan Investment Board.

The companies say the partnership is intended to support Saudi Arabia’s ambitions to expand its digital infrastructure and position itself as a regional technology hub. According to the organisations, the joint initiative aims to combine local AI and infrastructure expertise with international hyperscale data centre deployment experience.

Industry comments

Tareq Amin, Chief Executive Officer of HUMAIN, says, “Together with AirTrunk and Blackstone, HUMAIN is strengthening the technological infrastructure that underpins the Kingdom’s digital economy.

"This partnership marks a pivotal moment in creating scalable, secure, and sustainable data centre capacity to support the rapid growth of AI and cloud computing. This initiative not only accelerates Saudi Arabia’s technological advancement, but also establishes a platform for long-term economic diversification and global competitiveness.”

Robin Khuda, founder and Chief Executive Officer of AirTrunk, says, “Our strategic partnership with HUMAIN, a key player in the region, will support Saudi Arabia to realise its vision of being a data- and AI-driven economy.

"We’re bringing together the whole digital ecosystem, combining HUMAIN’s end-to-end AI capabilities, from infrastructure to models, with AirTrunk’s hyperscale data centre capabilities. This announcement strengthens the AirTrunk data centre platform as we deliver world-class digital infrastructure for the cloud and AI across the Asia Pacific and now the Middle East, which is one of the fastest growing regions in the world.”

Stephen A Schwarzman, Chair, Chief Executive Officer, and co-founder of Blackstone, says, “We are thrilled to help power this next era of innovation in the Middle East and enable AirTrunk’s expansion in this important region.

"The AI revolution continues to be one of Blackstone’s highest conviction themes, and we bring scale and expertise across the ecosystem as the largest provider of data centres globally and a significant investor in related services and infrastructure."

Long-term development and investment focus

The partnership is expected to cover the design, construction, financing, and operation of large-scale facilities in Saudi Arabia. HUMAIN says it will lead national efforts to deliver AI-ready infrastructure, while AirTrunk and Blackstone will focus on development and investment.

Areas of collaboration include attracting cloud service providers and enterprise customers to the sites. A talent development focus is also planned, with training and capability-building programmes intended to support local workforce growth in the sector. According to the companies, the partnership aligns with the Kingdom’s plans to build a digital economy, expand local skills, and accelerate AI infrastructure deployment.

For more from AirTrunk, click here.

Colt DCS to expand West London hyperscale campus
Colt Data Centre Services (Colt DCS), a global provider of AI, hyperscale, and large enterprise data centres, has announced that it has received committee approval (Resolution to Grant) from Hillingdon Council to expand its Hayes Digital Park campus in West London with three new hyperscale data centres and an Innovation Hub. The £2.5 billion investment will strengthen the UK’s digital infrastructure, support the government’s modern industrial strategy, and help drive the nation’s growing AI economy.

The three new hyperscale data centres - London 6, 7, and 8 - will be powered using 100% renewable energy through a Power Purchase Agreement (PPA). Power contracts for this development have been secured with National Grid, and a high-voltage supply is due to be delivered by October 2027. The expansion will add an additional 97MW to the available IT power at the Hayes Digital Park, taking the total capacity to 160MW.

Construction is expected to start in mid-2026, with the first data centre (London 6) scheduled to go live in early 2029. Once operational, the new facilities will create over 500 permanent jobs, with more than 50 technical apprentices to be trained over a 10-year build programme.

In addition to the data centres, Colt DCS will develop an Innovation Hub in partnership with Brunel University, designed to serve as a community space and incubator for digital start-ups. The hub will promote economic synergy by co-locating light-industrial and digital innovation businesses, creating opportunities for collaboration, research, and skills development within an affordable workspace. Students from Brunel University will be encouraged to use the hub to develop entrepreneurial projects and technology-led ventures to support the digital economy.

AECOM has been appointed to develop the design proposals for the Innovation Hub. The facility aims to act as a base for innovation and community engagement, with flexible space for future industrial use, in line with planning policy for Strategic Industrial Land. It will also provide social value by hosting local events themed around culture, food, film, music, and literature.

The new development will also deliver a district heating network, using waste heat from the data centres to support local businesses, communities, and residential buildings. Under the planning permission, back-up generators will only be permitted to operate for a maximum of 15 hours per year, with the data centres powered directly from the national grid.

“This announcement marks another important milestone for the UK’s digital economy,” says Xavier Matagne, Chief Real Estate Officer at Colt DCS. “Data centres are a cornerstone of digital transformation. With this expansion, we can help power innovation, support the AI revolution, and contribute to the energy transition.”

Xavier continues, “Our new campus in Hayes, including the Innovation Hub in partnership with Brunel University, will drive community value, from reusing heat for district heating to creating jobs, skills, and long-term investment. As one of the few operators capable of delivering new capacity in this area of London over the next decade, we’re proud to be helping power the UK’s future economy in a sustainable and inclusive way.”

Cllr Steve Tuckwell, Hillingdon Council's Cabinet Member for Planning, Housing and Growth, notes, "Hillingdon is open for business, and we're working closely with our business community, new and existing investors, and partners to drive innovation and development in the right places.

"The innovation hub is an exciting new development that will help to foster economic growth. It will help to equip residents and smaller local businesses with the right skills, affordable workspaces, and opportunities to thrive.

"Hayes is playing a leading role in shaping London's digital economy and infrastructure, and it's vital local people have more opportunities to experience the benefits."

For more from Colt DCS, click here.

America’s AI revolution needs the right infrastructure
In this article, Ivo Ivanov, CEO of DE-CIX, argues his case for why America’s AI revolution won’t happen without the right kind of infrastructure:

Boom or bust

Artificial intelligence might well be the defining technology of our time, but its future rests on something much less tangible hiding beneath the surface: latency. Every AI service, whether training models across distributed GPU-as-a-Service communities or running inference close to end users, depends on how fast, how securely, and how cost-effectively data can move. Network latency is simply the delay in traffic transmission caused by the distance the data needs to travel: the lower the latency (i.e. the faster the transmission), the better the performance of everything from autonomous vehicles to the applications we carry in our pockets.

There has always been a trend of technology applications outpacing network capabilities, but we’re feeling it more acutely now due to the sheer pace of AI growth. Depending on where you were in 2012, the average latency for the top 20 applications could be 200 milliseconds or more. Today, there’s virtually no application in the top 100 that would function effectively with that kind of latency.

That’s why internet exchanges (IXs) have begun to dominate the conversation. An IX is like an airport for data. Just as an airport coordinates the safe landing and departure of dozens of airlines, allowing them to exchange passengers and cargo seamlessly, an IX brings together networks, clouds, and content platforms to seamlessly exchange traffic. The result is faster connections, lower latency, greater efficiency, and a smoother journey for every digital service that depends on it.

Deploying these IXs creates what is known as “data gravity”, a magnetic pull that draws in networks, content, and investment. Once this gravity takes hold, ecosystems begin to grow on their own, localising data and services, reducing latency, and fuelling economic growth.

I recently spoke about this at a first-of-its-kind regional AI connectivity summit, The future of AI connectivity in Kansas & beyond, hosted at Wichita State University (WSU) in Kansas, USA. It was the perfect location - given that WSU is the planned site of a new carrier-neutral IX - and the start of a much bigger plan to roll out IXs across university campuses nationwide.

Discussions at the summit reflected a growing recognition that America’s AI economy cannot depend solely on coastal hubs or isolated mega-data centres. If AI is to deliver value across all parts of the economy, from aerospace and healthcare to finance and education, it needs a distributed, resilient, and secure interconnection layer reaching deep into the heartland. What is beginning in Wichita is part of a much bigger picture: building the kind of digital infrastructure that will allow AI to flourish.

Networking changed the game, but AI is changing the rules

For all its potential, AI’s crowning achievement so far might be the wake-up call it has given us. It has magnified every weakness in today’s networks. Training models requires immense compute power. Finding the data centre space for this can be a challenge, but new data transport protocols mean that AI processing could, in the future, be spread across multiple data centre facilities. Meanwhile, inference - and especially multi-agent, agentic inference - demands ultra-low latency, as AI services interact with systems, people, and businesses in real time.
But for both of these scenarios, the efficiency and speed of the network is key. If the network cannot keep pace (if data needs to travel too far), these applications become too slow to be useful. That’s why the next breakthrough in AI won’t be in bigger or better models, but in the infrastructure that connects them all. By bringing networks, clouds, and enterprises together on a neutral platform, an IX makes it possible to aggregate GPU resources across locations, create agile GPU-as-a-Service communities, and deliver real-time inference with the best performance and highest level of security.

AI changes the geography of networking too. Instead of relying only on mega-hubs in key locations, we need interconnection spokes that reach into every region where people live, work, and innovate. Otherwise, businesses in the middle of the country face the “tromboning effect”, where their data detours hundreds of miles to another city to be exchanged and processed before returning a result - adding latency, raising costs, and weakening performance. We need to make these distances shorter, reduce path complexity, and allow data to move freely and securely between every player in the network chain. That’s how AI is rewriting the rulebook; latency, underpinned by distance and geography, matters more than ever.

Building ecosystems and 'data gravity'

When we establish an IX, we’re doing more than just connecting networks; we’re kindling the embers of a future-proof ecosystem. I’ve seen this happen countless times. The moment a neutral (meaning data centre- and carrier-neutral) exchange is in place, it becomes a magnet that draws in networks, content providers, data centres, and investors. The pull of “data gravity” transforms a market from being dependent on distant hubs into a self-sustaining digital environment. What may look like a small step - a handful of networks exchanging traffic locally - very quickly becomes an accelerant for rapid growth.

Dubai is one of the clearest examples. When we opened our first international platform there in 2012, 90% of the content used in the region was hosted outside of the Middle East, with latency above 200 milliseconds. A decade later, 90% of that content is localised within the region and latency has dropped to just three milliseconds. This was a direct result of the gravity created by the exchange, pulling more and more stakeholders into the ecosystem.

For AI, that localisation isn’t just beneficial; it’s essential. Training and inference both depend on data being closer to where it is needed. Without the gravity of an IX, content and compute remain scattered and far away, and performance suffers. With it, however, entire regions can unlock the kind of digital transformation that AI demands.

The American challenge

There was a time when connectivity infrastructure was dominated by a handful of incumbents, but that time has long since passed. Building AI-ready infrastructure isn’t something that one organisation or sector can do alone. Everywhere that has succeeded in building an AI-ready network environment has done so through partnerships - between data centre, network, and IX operators, alongside policy makers, technology providers, universities, and - of course - the business community itself. When those pieces of the puzzle are assembled, the result is a healthy ecosystem that benefits everyone. This collaborative model, like the one envisaged at the IX at WSU, is exactly what the US needs if it is to realise the full potential of AI.
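To put the distance penalty behind the tromboning effect in rough numbers, the sketch below estimates minimum round-trip fibre latency from route distance, using the commonly cited figure of roughly 5 microseconds per kilometre for light in optical fibre (about two-thirds of the speed of light in a vacuum). The scenarios and distances are illustrative assumptions, not DE-CIX measurements.

```python
# Rough sketch: how route distance alone translates into round-trip latency.
# Assumes ~5 microseconds per km one way for light in optical fibre; real paths
# add switching, queueing, and routing overhead on top of this floor.

ONE_WAY_US_PER_KM = 5.0  # approximate propagation delay in fibre

def round_trip_ms(route_km: float) -> float:
    """Minimum round-trip time in milliseconds for a fibre route of route_km."""
    return 2 * route_km * ONE_WAY_US_PER_KM / 1000.0

# Illustrative comparison: exchanging traffic locally versus 'tromboning' it
# to a distant hub and back (distances are made-up examples).
scenarios = {
    "local IX, ~50 km":              50,
    "regional hub, ~500 km":        500,
    "coastal hub detour, ~2000 km": 2000,
}
for name, km in scenarios.items():
    print(f"{name:<30} -> >= {round_trip_ms(km):.1f} ms round trip (propagation only)")
```

Even before congestion or processing are counted, a 2,000 km detour costs around 20 milliseconds per exchange, which is why shortening the physical path via local interconnection matters so much for real-time AI services.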
Too much of America’s digital economy still depends on coastal hubs, while the centre of the country is underserved. That means businesses in aerospace, healthcare, finance, and education - many of which are based deep in the US heartland - must rely on services delivered from other states and regions, and that isn’t sustainable when it comes to AI.

To solve this, we need a distributed layer of interconnection that extends across the entire nation. Only then can we create a truly digital America where every city has access to the same secure, high-performance infrastructure required to power its AI-driven future.

For more from DE-CIX, click here.

CDUs: The brains of direct liquid cooling
As air cooling reaches its limits, with AI and HPC workloads exceeding 100 kW per rack, hybrid liquid cooling is becoming essential. To this end, coolant distribution units (CDUs) could be the key enabler for next-generation, high-density data centre facilities. In this article for DCNN, Gordon Johnson, Senior CFD Manager at Subzero Engineering, discusses the importance of CDUs in direct liquid cooling:

Cooling and the future of data centres

Traditional air cooling has hit its limits, with rack power densities surpassing 100 kW due to the relentless growth of AI and high-performance computing (HPC) workloads. Already, CPUs and GPUs exceed 700–1000 W per socket, and projections suggest this will rise to over 1500 W. Fans and heat sinks are simply unable to handle these thermal loads at scale. Hybrid cooling strategies are becoming the only scalable, sustainable path forward.

Single-phase direct-to-chip (DTC) liquid cooling has emerged as the most practical and serviceable solution, delivering coolant directly to cold plates attached to processors and accelerators. However, direct liquid cooling (DLC) cannot be scaled safely or efficiently with plumbing alone. The key enabler is the coolant distribution unit (CDU), a system that integrates pumps, heat exchangers, sensors, and control logic into a coordinated package.

CDUs are often mistaken for passive infrastructure. But far from being a passive subsystem, they act as the brains of DLC, orchestrating isolation, stability, adaptability, and efficiency to make DTC viable at data centre scale. They serve as the intelligent control layer for the entire thermal management system.

Intelligent orchestration

CDUs do a lot more than just transport fluid around the cooling system; they think, adapt, and protect the liquid cooling portion of the hybrid cooling system. They maintain redundancy to ensure continuous operation; control flow and pressure using automated valves and variable-speed pumps; filter particulates to protect cold plates; and maintain coolant temperature above the dew point to prevent condensation. They contribute to the precise, intelligent, and flexible coordination of the complete thermal management system.

Because of their greater cooling capacity, CDUs are ideal for large HPC data centres. However, because they must be connected to the facility's chilled water supply or another heat rejection source to continuously provide liquid to the cold plates for cooling, they can be complicated. CDUs typically fall into two categories:

• Liquid to Liquid (L2L): High-capacity L2L CDUs are well suited to large HPC facilities. Through heat exchangers, they transfer chip heat from the isolated IT loop into the facility chilled water loop, such as the facility water system (FWS).

• Liquid to Air (L2A): For smaller deployments, L2A CDUs are simpler but have a lower cooling capacity. Rather than rejecting heat to a chilled water supply or FWS, they use liquid-to-air heat exchangers to transfer heat from the coolant returning from the cold plates to the surrounding data centre air, which is then handled by conventional HVAC systems.

Isolation: Safeguarding IT from facility water

Acting as the bridge between the FWS and the dedicated technology cooling system (TCS), which provides filtered liquid coolant directly to the chips via cold plates, CDUs isolate sensitive server cold plates from external variability, ensuring a safe and stable environment while constantly adjusting to shifting workloads.
One of the L2L CDU's primary functions is to create a dual-loop architecture:

• Primary loop (facility side): Connects to building chilled water, district cooling, or dry coolers

• Secondary loop (IT side): Delivers conditioned coolant directly to IT racks

CDUs isolate the primary loop (which may carry contaminants, particulates, scaling agents, or chemical treatments like biocides and corrosion inhibitors - chemistry that is incompatible with IT gear) from the secondary loop. As well as preventing corrosion and fouling, this isolation gives operators the safety margin they need for board-level confidence in liquid cooling. The integrity of the server cold plates is safeguarded by the CDU, which uses a heat exchanger to separate the two environments and maintain a clean, controlled fluid in the IT loop.

Because CDUs are fitted with variable-speed pumps, automated valves, and sensors, they can dynamically adjust the flow rate and pressure of the TCS to ensure optimal cooling even when HPC workloads change.

Stability: Balancing thermal predictability with unpredictable loads

HPC and AI workloads are not only high power; they are also volatile. GPU-intensive training jobs or changeable CPU workloads can cause high-frequency power swings, which - without regulation - would translate into thermal instability. The CDU mitigates this risk by stabilising temperature, pressure, and flow across all racks and nodes, absorbing dynamic changes and delivering predictable thermal conditions regardless of how erratic the workload is. Sensor arrays ensure the cooling loop remains within specification, while variable-speed pumps modify flow to fit demand and heat exchangers are calibrated to maintain an established approach temperature.

Adaptability: Bridging facility constraints with IT requirements

The thermal architecture of data centres varies widely, with some using warm-water loops that operate at temperatures between 20 and 40°C. The CDU adapts to these variations by adjusting secondary-loop conditions to align IT requirements with what the facility can supply. It uses mixing or bypass control to temper supply water, can alternate between tower-assisted cooling, free cooling, or dry-cooler rejection depending on the environmental conditions, and can adjust flow distribution amongst racks to align with real-time demand.

This adaptability makes DTC deployable in a variety of infrastructures without requiring extensive facility renovations. It also makes it possible for liquid cooling to be phased in gradually - ideal for operators who need to make incremental upgrades.

Efficiency: Enabling sustainable scale

Beyond risk and reliability, CDUs unlock possibilities that make liquid cooling a sustainable option. By managing flow and temperature, CDUs eliminate the inefficiencies of over-pumping and over-cooling. They also maximise scope for free cooling and heat recovery integration, such as connecting to district heating networks and reclaiming waste heat as a revenue stream or sustainability benefit. This allows operators to lower PUE (Power Usage Effectiveness) to values below 1.1 while simultaneously reducing WUE (Water Usage Effectiveness) by minimising evaporative cooling - all while meeting the extreme thermal demands of AI and HPC workloads.
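To make the orchestration described above a little more concrete, here is a minimal, hypothetical sketch of the kind of logic a CDU's control layer performs: sizing secondary-loop flow against the rack heat load (Q = ṁ·cp·ΔT) and nudging pump speed toward a supply-temperature setpoint while respecting the dew-point floor. The numbers, setpoints, and control gain are illustrative assumptions only, not Subzero Engineering's implementation or any vendor's firmware.

```python
# Hypothetical, simplified sketch of CDU-style control logic: flow sizing from
# heat load (Q = m_dot * cp * dT) plus a proportional nudge on pump speed to
# hold a secondary supply-temperature setpoint above the dew point.
# All values are illustrative assumptions, not vendor data.

WATER_CP_KJ_PER_KG_K = 4.18   # specific heat of a water-based secondary coolant
DEW_POINT_C = 18.0            # assumed data hall dew point
SUPPLY_SETPOINT_C = 30.0      # assumed secondary supply temperature target

def required_flow_lps(rack_load_kw: float, delta_t_c: float = 8.0) -> float:
    """Secondary-loop flow (litres/second, ~kg/s for water) to absorb rack_load_kw."""
    return rack_load_kw / (WATER_CP_KJ_PER_KG_K * delta_t_c)

def adjust_pump_speed(current_pct: float, measured_supply_c: float,
                      gain_pct_per_c: float = 4.0) -> float:
    """Proportional correction: speed up if supply runs hot, slow down if cold,
    while keeping the supply target safely above the dew point."""
    target_c = max(SUPPLY_SETPOINT_C, DEW_POINT_C + 2.0)  # margin above dew point
    error_c = measured_supply_c - target_c
    return min(100.0, max(20.0, current_pct + gain_pct_per_c * error_c))

# Example: a 100 kW rack with the supply temperature drifting 1.5 °C warm.
print(f"flow needed : ~{required_flow_lps(100):.1f} L/s at an 8 °C rise")
print(f"pump speed  : {adjust_pump_speed(current_pct=60.0, measured_supply_c=31.5):.1f} %")
```

A production CDU obviously does far more (redundant pumps, valve sequencing, filtration, alarms), but this heat-balance and setpoint logic is the core of what the "intelligent orchestration" described above refers to.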
CDUs as the thermal control plane

Viewed holistically, CDUs are far more than pumps and pipes; they are the control plane for thermal management, orchestrating safe isolation, dynamic stability, infrastructure adaptability, and operational efficiency. They translate unpredictable IT loads into manageable facility-side conditions, ensuring that single-phase DTC can be deployed at scale and enabling HPC and AI data centres to evolve into multi-hundred-kilowatt racks without thermal failure.

Without CDUs, direct-to-chip cooling would be risky, uncoordinated, and inefficient. With CDUs, it becomes an intelligent and resilient architecture capable of supporting 100 kW (and higher) racks, as well as the escalating thermal demands of AI and HPC clusters. As workloads continue to climb and rack power densities surge, the industry's ability to scale hinges on this intelligence. CDUs are not a supporting component; they are the enabler of single-phase DTC at scale and a cornerstone of the future data centre.

For more from Subzero Engineering, click here.

ZincFive introduces battery system designed for AI DCs
ZincFive, a producer of nickel-zinc (NiZn) battery-based solutions for immediate power applications, has announced a new nickel-zinc battery cabinet designed for data centres deploying artificial intelligence workloads. The system, named BC AI, is positioned as an uninterruptible power supply (UPS) battery platform that can support both high-intensity AI power surges and conventional backup requirements.

The company says the new system builds on its existing nickel-zinc battery range and is engineered for environments where GPU clusters and rapid power fluctuations are driving changes in electrical infrastructure requirements. The battery technology is intended to respond to fast transient loads associated with AI training and inference, while also providing backup during power interruptions.

The system includes a battery management platform and nickel-zinc chemistry designed for frequent high-power discharge cycles. The company says this approach reduces reliance on upstream electrical capacity by managing dynamic loads at the UPS level.

Nickel-zinc battery design for transient load handling

As well as incorporating a new nickel-zinc battery cell designed for high-intensity usage and long service life, ZincFive highlights the product's compact footprint and field-upgradeable design. Nickel-zinc chemistry offers power density characteristics that allow the system to accommodate rapid load spikes without significant footprint expansion. ZincFive says competing approaches may require substantially more physical space to manage similar peak loads, particularly where AI applications can generate power demands above nominal UPS levels.

The system is targeted at hyperscale operators, colocation facilities, and UPS manufacturers integrating AI-ready backup capacity. The company also points to potential benefits related to infrastructure design, including reduced UPS sizing requirements and support for power-management strategies aimed at improving grid interaction.

Tod Higinbotham, Chief Executive Officer of ZincFive, says, “AI is transforming the very foundation of data centres, creating new challenges that legacy technologies cannot solve.

"With BC 2 AI, we are delivering a safe, sustainable, and future-ready power solution designed to handle the most demanding AI workloads while continuing to support traditional IT backup.

"This is a defining moment not just for ZincFive, but for the entire data centre industry as it adapts to the AI era.”

For more from ZincFive, click here.

Red Hat adds support for OpenShift on NVIDIA BlueField DPUs
Red Hat, a US provider of open-source software, has announced support for running Red Hat OpenShift on NVIDIA BlueField data processing units (DPUs). The company says the development is intended to help organisations deploy AI workloads with improved security, networking, and storage performance.

According to Red Hat, modern AI applications increasingly compete with core infrastructure services for system resources, which can affect performance and security. The company states that running OpenShift with BlueField aims to separate AI workloads from infrastructure functions, such as networking and security, to improve operational efficiency and reduce system contention. It says the platform will support enhanced networking, more streamlined lifecycle management, and resource offloading to the DPU.

Workload isolation and resource efficiency

Red Hat states that shifting networking services and infrastructure management tasks to the DPU frees CPU resources for AI applications. The company also highlights acceleration features for data-plane and storage-traffic processing, including support for NVMe over Fabrics and optimised Open vSwitch data paths. Additional features include distributed routing for multi-tenant environments and security controls designed to reduce attack surfaces by isolating workloads away from infrastructure services.

Support for BlueField on OpenShift will be offered initially as a technical preview, with broader integration planned. Red Hat notes that ongoing work with NVIDIA aims to add further support for the NVIDIA DOCA software framework and third-party network functions. The companies also expect future capability enhancements with the next generation of BlueField hardware and integration with NVIDIA’s Spectrum-X Ethernet networking for distributed AI environments.

Ryan King, Vice President, AI and Infrastructure, Partner Ecosystem Success at Red Hat, comments, “As the adoption of generative and agentic AI grows, the demand for advanced security and performance in data centres has never been higher, particularly with the proliferation of AI workloads.

"Our collaboration with NVIDIA to enable Red Hat OpenShift support for NVIDIA BlueField DPUs provides customers with a more reliable, secure, and high-performance platform to address this challenge and maximise their hardware investment.”

Justin Boitano, Vice President, Enterprise Products at NVIDIA, adds, “Data-intensive AI reasoning workloads demand a new era of secure and efficient infrastructure.

"The Red Hat OpenShift integration of NVIDIA BlueField builds on our longstanding work to empower organisations to achieve unprecedented scale and performance across their AI infrastructure.”

For more from Red Hat, click here.

Sabey achieves 25% carbon emissions cut
Sabey Data Centers, a data centre developer, owner, and operator, has announced a 25.2% reduction in Scope 1 and Scope 2 carbon emissions from a 2018 baseline, even as the electrical load under its management has continued to grow over the same period. The company’s 2024 Sustainability Report details progress in environmental performance, technology innovation, and clean energy partnerships across the global data centre sector.

2024 report highlights

The 2024 report shares data on Sabey’s emissions reductions, energy efficiency improvements, and external partnerships. The company says it continues to align its emissions reductions with its science-based targets and is working to achieve net-zero carbon emissions across Scope 1 and Scope 2 by 2029. Key developments from the report include:

• Carbon emissions slashed 25.2% from the 2018 baseline

• A pioneering MOU with TerraPower to explore integrating next-generation nuclear energy

• Nine buildings earning ENERGY STAR certification with scores over 90, five of which received a score of 99/100

Clear path to net zero

The report outlines the steps Sabey is taking to meet its net-zero goal. These include continued investment in carbon-free energy, improving building operations to reduce energy use, reducing emissions from HVAC and fuel sources, and helping customers better understand their own energy footprints.

Casey Mason, Senior Energy & Sustainability Manager, says, “Data centres are the backbone of the digital economy and [the] AI revolution, but must become stewards of global decarbonisation.

“We are not just on track for net zero by 2029; we're reimagining how critical digital infrastructure can be both scalable and sustainable for the world’s fastest-growing industries.

"Our work with TerraPower, local utilities, and SBTi showcases the kind of bold collaboration needed for a climate-secure future.”

In alignment with the Greenhouse Gas Protocol, Sabey reports on its emissions and sustainability efforts annually, engaging with external organisations in the process, including CDP, GRESB, EcoVadis, Atrius, and data centre tenants. The company’s emissions reporting includes both location-based and market-based accounting methods.

For more from Sabey, click here.

Danfoss to showcase DC technologies at SuperComputing 2025
Danfoss, a Danish manufacturer of mobile hydraulic systems and components, plans to present its data centre cooling and power management technologies at SuperComputing 2025, taking place 18–20 November in St Louis, Missouri, USA. The company says it will demonstrate equipment designed to support reliability, energy performance, and liquid-cooling adoption in high-density computing environments.

Exhibits will include cooling components, liquid-cooling hardware, and motor-control equipment intended for use across data hall and plant-room applications. Danfoss notes that increasing data centre efficiency while maintaining uptime remains a central challenge for operators and developers, particularly as AI and high-performance computing drive increases in heat output and power usage.

Peter Bleday, Vice President, Specialty Business Unit and Data Center at Danfoss Power Solutions, says, “Danfoss technologies are trusted by the world’s leading cloud service providers and chip manufacturers with products installed in facilities around the world.

"We look forward to welcoming visitors to our booth to discuss how we can help them achieve smarter, more reliable, and more sustainable data centre operations.”

Cooling and power management focus

Danfoss will present liquid-cooling components including couplings, hoses, and valve assemblies designed to support leak-tested coolant distribution for rack-level and direct-to-chip cooling. A smart valve train system providing plug-and-play connection between piping and server racks will also be shown, designed to help optimise coolant flow and simplify installation.

The company's HVAC portfolio will also feature, including centrifugal compressor technology engineered for high efficiency and low noise in compact installations. Danfoss states that this equipment is designed to support data centre cooling requirements with long-term performance stability.

In addition, the manufacturer will highlight its power-conversion and motor-control portfolio, including variable-frequency drives and harmonic-mitigation equipment intended to support low-PUE facilities. The business says its liquid-cooled power-conversion modules are designed to support applications such as energy storage and fuel-cell systems within data centre environments.

Danfoss representatives will also discuss the company’s involvement in wider sustainability initiatives, including the Net Zero Innovation Hub for Data Centers, where industry stakeholders such as Google and Microsoft collaborate on energy-efficiency and decarbonisation strategies.

For more from Danfoss, click here.


