
Latest News


Corning expands AI data centre connectivity
Corning, a US manufacturer of optical fibre for telecommunications and data centres, has expanded its data centre connectivity portfolio through a licensing agreement with US Conec. The agreement enables Corning to use PRIZM TMT optical ferrule technology, designed to increase fibre density within data centre environments, particularly for AI infrastructure. The technology supports higher fibre counts in limited space, addressing growing demand for connectivity as AI workloads scale and data centre architectures evolve.

Mike O’Day, Senior Vice President and General Manager of Corning Optical Communications, comments, “AI infrastructure is pushing optical connectivity into new and more demanding environments.

“By licensing PRIZM TMT, Corning is strengthening its ability to deliver scalable, fibre-rich solutions that help customers build larger, faster, and more efficient AI clusters, while aligning closely with the broader industry ecosystem.”

Supporting higher-density AI infrastructure

As AI deployments expand, data centres are increasing the number of connected accelerators and shifting towards optical connections in place of traditional copper links. This change is driving higher fibre density within server and switch racks, increasing the need for compact, high-performance connectors.

The PRIZM TMT ferrule uses expanded beam technology with precision-aligned microlenses, rather than direct fibre contact. This approach is intended to improve installation reliability, reduce sensitivity to contamination, and support faster deployment. According to the companies, these characteristics are suited to large-scale AI environments, where high connection density and consistent performance are required.

Antin acquires NorthC
Antin Infrastructure Partners, a private equity firm specialising in infrastructure investments, has completed its acquisition of NorthC Datacenters, a data centre operator in Northwest Europe, from DWS and other minority shareholders.

Headquartered in Amsterdam, NorthC operates 25 data centres across The Netherlands, Germany, and Switzerland, with more than 140 MW of secured gross grid capacity across existing and upcoming greenfield sites. The company plans to expand into new regions and begin construction of facilities in Frankfurt, Basel, and Geneva this year.

Continuing regional expansion

Alexandra Schless, CEO of NorthC Datacenters, comments, “The finalisation of this acquisition marks a key milestone in the NorthC journey.

“Now, with Antin Infrastructure Partners officially on board, we have gained a new strategic partner whose deep expertise in digital infrastructure perfectly aligns with our regional leadership and expansion goals.

“We are ready to accelerate our growth across the Benelux and DACH regions, leveraging our 140 MW of secured capacity to meet the surging demand for AI inference workloads and enterprise digital transformation.

“Our focus remains on delivering high-quality, regional colocation solutions with the scale and backing of a global infrastructure leader.”

Stéphane Ifker and Maximilian Lindner, Managing Partner and Partner (respectively) at Antin Infrastructure Partners, add, “We are delighted to be working with NorthC as we jointly embark on the company’s next growth phase.

“This closing signifies the start of an ambitious new chapter. We are fully committed to supporting Alexandra and her management team as they expand their footprint, modernise their facilities, and continue to serve as the backbone for Europe’s most critical digital infrastructure sectors.”

Siemens, Rittal partner on data centre power
German multinational technology company Siemens and Rittal, a German manufacturer of industrial enclosures, IT racks, and climate control systems, have formed a partnership to develop power distribution infrastructure for data centres, targeting increasing demands from AI workloads. The collaboration focuses on standardised systems designed to support higher rack power densities, improve deployment speed, and streamline data centre construction.

Power demands in AI environments are continuing to rise, with rack densities already exceeding 100 kW and expected to increase further over the coming years. The companies aim to address these requirements through updated approaches to power distribution, cooling, and heat management.

Focus on scalable power infrastructure

One of the first developments from the partnership is a sidecar power system, installed within the white space of a data centre. The system uses a dedicated power rack to supply server racks, supporting a modular and scalable approach to power delivery. The design aligns with Open Compute Project standards and is intended to simplify deployment while maintaining operational reliability.

“To enable the rapid growth of AI, we need smart, reliable, and scalable power supply solutions for data centres and we need them quickly,” comments Andreas Matthé, CEO Electrical Products at Siemens Smart Infrastructure.

Further joint work includes the development of standardised low-voltage distribution systems for modular and containerised data centres, alongside measures aimed at improving operational and personnel safety.

The partnership builds on existing collaboration between Siemens and the Friedhelm Loh Group, Rittal’s parent company, and is expected to expand into additional applications beyond data centres.

Report: AI boom driving US data centres off grid
The rapid expansion of off-grid data centres across the US is emerging as a possible answer to the power constraints reshaping the AI-driven digital economy, according to a new report from law firm Troutman Pepper Locke.

As artificial intelligence accelerates demand for compute capacity, the firm's report - Off-Grid Data Centers: A Potential Power Solution for AI - finds that developers, hyperscalers, and energy companies are increasingly turning to behind-the-meter and ‘island-moded’ generation to secure reliable, scalable electricity while avoiding grid congestion and regulatory delays.

According to projections cited in the analysis, global data centre investment could reach $6.7 trillion (£5 trillion) by 2030, with approximately $2.7 trillion (£2 trillion) of that invested in the US market.

Nowhere is the transformation more visible than in Texas, where the Electric Reliability Council of Texas (ERCOT) forecasts that data centre electricity demand could rise by 22 GW between 2025 and 2031, reaching 78 GW (or roughly 36% of total statewide demand). At the same time, AI-specialised server racks now require 50–100 kW each, up from 5–10 kW in traditional configurations just a few years ago. As microchips become more powerful and energy intensive, the report concludes that power - not silicon - has become the primary constraint on AI expansion.

Natural gas as the bridge to scale

One of the report's central findings is the decisive shift towards natural gas as the preferred near-term solution for off-grid facilities. Developers are prioritising dispatchable generation that can deliver the "five nines" reliability (99.999% uptime) demanded by hyperscale AI operations. While renewables remain a central part of long-term decarbonisation strategies, the analysis suggests that wind and solar alone cannot yet provide consistent, 24/7 baseload power at the scale AI requires without substantial overbuild and storage. Battery capacity, though advancing, "remains limited" in duration for utility-scale deployments. Small modular nuclear reactors reportedly hold promise but are not yet commercially deployable at scale.

Natural gas generation, by contrast, can be deployed relatively quickly and offers dependable output, which the report argues makes it the dominant choice for early off-grid adopters, particularly in Texas, where fuel supply and land availability align. However, the report also cautions that turbine supply chains are tightening, and competition for equipment, skilled labour, and transmission infrastructure is intensifying as AI-driven projects accelerate nationwide.

Interconnection bottlenecks fuel off-grid momentum

Grid interconnection queues are increasingly congested, delaying projects in key markets. Developers are therefore reportedly pursuing behind-the-meter solutions as a bridge to eventual grid connection - or, in some cases, as a long-term strategy to maintain operational autonomy. Texas's deregulated electricity market and advanced behind-the-meter framework make it a focal point for this shift.

Yet, regulatory oversight is also evolving. Senate Bill 6, passed with bipartisan support in 2025, introduced new obligations for large-load users, including requirements tied to backup generation and infrastructure cost allocation. At the federal level, policymakers are responding to the AI "gold rush" with measures designed both to accelerate data centre permitting and protect grid reliability. Proposed initiatives such as the Decentralised Access to Technology Alternatives (DATA) Act and large-load interconnection reforms could further clarify the treatment of private off-grid facilities and reduce compliance burdens. The report suggests that regulatory clarity - rather than deregulation alone - will be essential to sustaining investment momentum while safeguarding broader system stability.

Community scrutiny and the $64 billion delay factor

Beyond infrastructure, the report highlights mounting community resistance. Research referenced in the analysis indicates that as of early 2025, approximately $64 billion (£48.2 billion) in US data centre developments had faced delays due to bipartisan local opposition, often centred on energy costs, water use, and property impacts.

Off-grid systems can mitigate some of these concerns by reducing strain on public grids and shielding residential ratepayers from infrastructure cost allocation. Nevertheless, proactive community engagement and transparent economic value propositions remain critical. The report also explores alternative models, including modular data centres colocated with renewable assets to absorb curtailed power, demonstrating that innovation in design and siting can complement traditional off-grid approaches.

The partner imperative

With gigawatt-scale campuses carrying price tags exceeding $1 billion (£753 million) per facility, counterparty strength and supply chain resilience are paramount, according to the report. Developers and energy providers "must conduct rigorous due diligence" on turbine manufacturers, engineering teams, landholders, and off-takers. In an off-grid environment, there is no utility fallback. Creditworthiness, long-term commitment, and technical capability become central risk determinants. The report underscores that competition is fierce and that some early entrants may struggle to scale without robust financial backing.

Reliability first and always

Ultimately, the report concludes that reliability eclipses all other considerations. Hyperscalers racing to lead the AI market prioritise guaranteed uptime over short-term cost arbitrage or energy trading opportunities. The business case for AI infrastructure depends on uninterrupted power, and developers are reshaping generation strategies accordingly.

Brandon Lobb, Partner in Troutman Pepper Locke’s Energy Transactional Practice Group, says, "AI has shifted the centre of gravity in the energy market. Power availability - not just price - is now the defining variable in digital infrastructure strategy.

"Off-grid solutions are emerging as a pragmatic response to interconnection delays, reliability demands, and community pressures. Companies that align regulatory strategy, supply chain discipline, and creditworthy partnerships will be best positioned to lead in this next phase of AI growth."

As federal and state frameworks continue to evolve, off-grid data centres appear set to become a structural feature of the US energy and technology landscape, rather than a temporary workaround.
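The headline figures in the report hang together arithmetically, which a quick back-of-envelope check makes concrete (the calculations below are illustrative only and are not part of the report):

```python
# Back-of-envelope checks on figures quoted in the Troutman Pepper Locke
# report. All input values come from the report text; the derived numbers
# are illustrative arithmetic, not report findings.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

# "Five nines" reliability: 99.999% uptime allows only minutes of
# downtime per year.
uptime = 0.99999
downtime_min = (1 - uptime) * MINUTES_PER_YEAR
print(f"Five-nines downtime budget: {downtime_min:.2f} minutes/year")  # ≈ 5.26

# Per-rack power growth: 50–100 kW for AI racks vs 5–10 kW traditionally,
# i.e. roughly a 5x–20x increase depending on which ends you compare.
low_ratio, high_ratio = 50 / 10, 100 / 5
print(f"Per-rack power growth: {low_ratio:.0f}x to {high_ratio:.0f}x")

# ERCOT projection: data centres reach 78 GW, roughly 36% of statewide
# demand, implying total Texas demand of about 217 GW by 2031.
implied_total_gw = 78 / 0.36
print(f"Implied total statewide demand: {implied_total_gw:.0f} GW")  # ≈ 217
```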

Schneider, NVIDIA to advance AI data centre design
Global energy technology company Schneider Electric has expanded its collaboration with NVIDIA to develop validated designs and digital tools for large-scale AI data centres. Working alongside AVEVA, the companies outlined new developments in designing, simulating, building, and operating AI infrastructure during NVIDIA GTC in San Jose, USA.

These include a reference design for NVIDIA’s latest rack-scale systems, integration of digital twin capabilities, and testing of AI-driven tools for managing data centre alarms. The announcements focus on supporting large-scale AI deployments, sometimes referred to as “AI factories”, with an emphasis on power, cooling, and operational efficiency.

Reference design and digital twin integration

A new reference design has been developed for NVIDIA’s Vera Rubin NVL72 rack architecture, covering power distribution and cooling requirements. The design supports higher supply voltage, improved thermal efficiency, and clustered rack configurations for AI workloads. It has been validated using electrical system and airflow modelling tools to assess performance before deployment.

In parallel, AVEVA has introduced a lifecycle digital twin architecture integrated within the NVIDIA Omniverse environment. This enables simulation of power, cooling, and operational conditions, allowing operators to test and refine designs prior to construction. According to the companies, this approach is intended to reduce design cycles, improve accuracy, and support more efficient deployment of AI infrastructure.

Manish Kumar, Executive Vice President, Secure Power & Data Centers at Schneider Electric, comments, “As AI workloads scale in both size and complexity, the margin for error in data centre design becomes incredibly small.

“Delivering AI at scale requires tightly integrated electrical, cooling, and digital architectures that can support both unprecedented performance demands while maintaining peak energy efficiency.

“By combining advanced software, digital twins, and validated reference designs, operators can simulate and optimise infrastructure before a single rack is deployed. This approach reduces risk, accelerates deployment, and ensures the efficiency and resilience needed to power the next generation of AI factories.”

Vladimir Troy, Vice President of AI Infrastructure at NVIDIA, adds, “Gigawatt-scale AI factories demand a fundamentally new class of energy-efficient and highly predictable infrastructure.

“Together, NVIDIA and Schneider Electric are providing the power, cooling, and digital twin architectures needed to accelerate time-to-token for our customers worldwide.”

AI-based alarm management testing

Schneider Electric also confirmed testing of an AI-based alarm management capability using NVIDIA Nemotron models. The system analyses real-time data from multiple sources to identify root causes of issues and recommend corrective actions. The aim is to support data centre operators in resolving incidents more quickly and consistently, while reducing unnecessary maintenance activity.

The latest developments build on ongoing collaboration between the companies, including work on digital twin environments, power system modelling, and support for higher-voltage data centre architectures.

SUBCO expands Australia network route diversity
SUBCO, an Australian developer of undersea fibre optic cable networks, has expanded its Australian network with additional route diversity between Sydney and Melbourne, alongside new data centre access points across major cities.

The company says its Sydney–Melbourne connection now operates across two geographically independent paths, combining subsea and terrestrial infrastructure to improve resilience on one of the country’s busiest corridors. On the Sydney–Perth route, the Indigo Central and SMAP systems provide two separate cable paths with distinct geographic routes. Both systems operate independently, with separate landing stations, submarine line terminal equipment, and data centre connections to reduce the risk of disruption from a single incident.

Bevan Slattery, founder and Co-CEO of SUBCO, explains, “Diversity has traditionally been something customers needed to engineer themselves, engaging multiple providers and hoping the underlying paths were physically separate. SUBCO’s strategy has been to own and operate diverse assets and deliver them as a single, fully integrated offering.”

Expanded data centre connectivity

SUBCO has also introduced new access points across Sydney, Melbourne, Adelaide, and Perth, extending connectivity to its domestic and international cable network. New connection locations include facilities operated by NextDC, Equinix, AirTrunk, and CDC Data Centres.

The update forms part of a wider infrastructure expansion programme, which also includes the APX East subsea cable project. This planned system is expected to connect Australia directly with the mainland United States, with service targeted for late 2028. According to SUBCO, APX East will provide a direct subsea route without intermediate landing points, and will land north of Sydney’s existing cable protection zone to increase geographic separation.

NetApp launches new EF-Series storage systems
NetApp, a US provider of data storage and cloud infrastructure management, has announced new additions to its EF-Series storage portfolio, designed for high-performance workloads across AI, high-performance computing (HPC), and transactional databases.

The latest models, EF50 and EF80, are intended to support increasing data demands in enterprise environments, including emerging applications such as sovereign AI clouds and AI-driven manufacturing. The systems are designed to work with parallel file systems such as Lustre and BeeGFS, supporting HPC simulations and GPU-intensive workloads through high-performance scratch storage.

Performance and efficiency improvements

According to NetApp, the new systems deliver over 110 GBps of read throughput and 55 GBps of write throughput, representing a 250% increase compared to previous generations. The systems also offer a power efficiency of 63.7 GBps per kW, alongside storage density of up to 1.5 PB within a 2U form factor. This is intended to support high-performance requirements while maintaining efficient rack usage.

The EF-Series is positioned to support a range of use cases, including AI development, media production workflows, and large-scale data processing, with built-in data protection features.

Clayton Vipond, Senior Solution Architect at CDW, says, “As we navigate the AI era, many enterprises are finding that they need to maximise their raw performance to extract the most value from their data.

“The refreshed NetApp EF-Series deliver the throughput and capacity businesses need to scale high-powered workloads that transform data into insights and outcomes.”

Simon Robinson, Principal Analyst at Omdia, adds, “By delivering a high-performance storage system that supports parallel file systems like Lustre and BeeGFS, NetApp is making its mark as emerging industries - such as neocloud - emerge to support the AI-Era.”

NetApp states that the EF-Series platform builds on its existing installed base, with more than one million deployments globally.
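The quoted throughput and efficiency figures imply a few derived numbers worth noting (the arithmetic below is our own illustration, not from NetApp's announcement):

```python
# Derived figures implied by the quoted EF-Series specifications.
# Inputs are from the announcement; the derived values are illustrative.

read_gbps = 110.0            # quoted read throughput
efficiency_gbps_per_kw = 63.7  # quoted power efficiency

# Power draw implied at the quoted read throughput and efficiency.
implied_kw = read_gbps / efficiency_gbps_per_kw
print(f"Implied power draw: {implied_kw:.2f} kW")  # ≈ 1.73 kW

# Density: 1.5 PB in a 2U chassis.
pb_per_u = 1.5 / 2
print(f"Storage density: {pb_per_u:.2f} PB per rack unit")

# A "250% increase" over the previous generation, if read as 3.5x the
# prior figure, implies roughly this earlier read throughput:
prior_read_gbps = read_gbps / 3.5
print(f"Implied prior-generation read throughput: {prior_read_gbps:.1f} GBps")
```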

Panduit expands fibre portfolio with fusion splice connectors
Panduit, a manufacturer of electrical and network infrastructure solutions, has introduced OmniSplice, a new range of fusion-spliced fibre optic connectors designed for data centres, edge environments, and enterprise networks. The addition expands the company’s fibre optic portfolio with connectors aimed at supporting high-performance connectivity and faster installation in modern network infrastructure.

OmniSplice connectors are designed for use with standard fusion splicing equipment, allowing integration into existing installation and maintenance workflows without requiring additional tools or modifications. Panduit says the connectors are intended to support consistent performance while reducing installation time.

Integrated design for simplified deployment

A key feature of the OmniSplice range is the integration of the splice point within the connector housing. This removes the need for additional components such as pigtails, helping to reduce space requirements and simplify installation. The connectors include pre-assembled fibre stubs and a holder design intended to support the fusion splicing process, aiming to improve consistency and reduce the likelihood of installation errors.

According to Panduit, the design is suited to environments where rapid deployment or maintenance is required, including moves, adds, and changes, as well as repair work under time constraints.

The launch reflects continued growth in fibre optic infrastructure across data centres, enterprise LANs, and edge applications, where there is increasing demand for solutions that can be integrated efficiently into existing systems.

Nscale, Microsoft partner on large-scale campus in West Virginia
Nscale, a UK developer of AI data centres and cloud infrastructure, has signed a letter of intent with Microsoft to deliver 1.35 GW of AI compute capacity at the Monarch AI campus in West Virginia, in collaboration with NVIDIA and Caterpillar. The development will deploy NVIDIA’s next-generation Vera Rubin NVL72 GPU systems, based on the NVIDIA DSX AI Factory reference design, with deployment expected to begin in phases from late 2027.

Nscale has also announced the acquisition of American Intelligence & Power Corporation (AIPCorp), which includes the Monarch Compute Campus in Mason County. The site spans up to 2,250 acres (9.1 km²) and is designed as a state-certified AI microgrid, with the potential to scale beyond 8 GW of power capacity.

Hyperscale AI infrastructure and power integration

Under the agreement, Nscale will construct and operate the data centre infrastructure, with Microsoft supporting long-term compute services and lease arrangements. The campus is intended to support large-scale AI training and inference workloads, with high-speed connectivity to major US data centre hubs, including Ashburn and Chicago.

As part of the project, Caterpillar will supply G3500 series natural gas generator sets, with plans to deliver up to 2 GW of on-site power generation by the first half of 2028. The microgrid design enables the facility to operate independently of the local grid, while also allowing for potential future grid integration.

The development reflects increasing demand for AI-driven data centre capacity, with industry forecasts indicating significant growth in global power requirements over the coming years. The Monarch campus is expected to build on Nscale’s existing capacity and support expansion of large-scale AI infrastructure in the US.

Siemens expands data centre ecosystem for AI infrastructure
German multinational technology company Siemens has expanded its data centre partner ecosystem to support the growth of next-generation artificial intelligence infrastructure, focusing on the integration of compute, power, and operational systems. The expansion includes a strategic investment in Emerald AI, a collaboration with PhysicsX, and the integration of energy storage technologies from Fluence.

As AI adoption accelerates, data centre operators are facing increasing constraints around power availability and grid connection timelines. Siemens says the expanded ecosystem is intended to improve flexibility across infrastructure, helping operators scale capacity while maintaining reliability in power-constrained environments.

Coordinating compute and energy systems

Emerald AI’s technology enables AI workloads to shift in time and location to align with grid conditions, allowing data centre demand to respond dynamically to available power. This approach is designed to reduce peak demand pressures and support faster grid connections.

Fluence’s battery energy storage systems (BESS) are intended to help operators manage large-scale AI workloads by shaping energy demand and supporting more predictable load profiles. The systems can also provide on-site power during grid constraints or outages, supporting operational continuity.

In addition, Siemens is working with PhysicsX to apply physics-based AI modelling to data centre power distribution systems. Using simulation data, the approach enables engineers to model thermal behaviour in real time, reducing design times and supporting optimisation for dynamic AI workloads.

Siemens says the combined ecosystem brings together workload orchestration, energy infrastructure, and AI-driven modelling to address the growing complexity of data centre design and operation as AI demand increases.
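The idea of shifting flexible AI workloads towards hours with spare grid capacity can be sketched in a few lines. The snippet below is a simplified, hypothetical illustration of time-shifting deferrable jobs; it is not Emerald AI's actual algorithm, and all names and numbers are invented for the example:

```python
# Hypothetical sketch of time-shifting deferrable AI jobs towards hours
# with the most spare grid capacity (greedy placement, largest job first).
# Not Emerald AI's actual method; purely illustrative.

def schedule_jobs(headroom_mw, jobs_mw):
    """Assign each deferrable job to the hour with the most remaining
    spare capacity. headroom_mw: spare MW per hour; jobs_mw: job sizes in MW.
    Returns a list of (job_mw, hour_index) placements."""
    headroom = list(headroom_mw)  # copy so we can decrement
    placement = []
    for job in sorted(jobs_mw, reverse=True):  # place largest jobs first
        hour = max(range(len(headroom)), key=lambda h: headroom[h])
        if headroom[hour] < job:
            raise ValueError(f"No hour can absorb a {job} MW job")
        headroom[hour] -= job
        placement.append((job, hour))
    return placement

# Four hours of spare grid capacity (MW) and three deferrable jobs (MW):
print(schedule_jobs([5, 20, 12, 8], [10, 6, 4]))  # → [(10, 1), (6, 2), (4, 1)]
```

The greedy choice keeps the peak residual load low by always filling the slackest hour first, which is the basic intuition behind demand-responsive scheduling.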


