Data Centre Operations: Optimising Infrastructure for Performance and Reliability


AI rush deemed "incompatible" with Clean Power 2030 plan
A forecast suggests that the UK’s data centre boom is at odds with the country’s clean power commitments, with the sector already overwhelming the electricity system and forcing an unavoidable reliance on gas. This is the view put forward by Simon Gallagher, Managing Director at UK Networks Services, speaking at Montel's UK Energy Day event earlier today (13 November).

Simon said that only firm capacity could be counted on in the context of powering data centres, as periods of low wind and sunlight would reduce the availability of wind and solar generation. When asked what could realistically power tens of gigawatts of near-constant data centre load, Simon said his “controversial opinion” was that this demand “is going to have to be met by gas.”

“It’s the only technology we have that can do this on a firm basis. We don’t have storage,” Simon added.

This led him to conclude that the sector’s growth was simply not compatible with the UK government’s Clean Power 2030 plan, under which 95% of electricity must come from low-carbon sources by the turn of the decade.

Physical limitations

This comes amid an explosion in grid connection requests, which jumped from around 17 GW to 97 GW over the summer, pushing the total capacity waiting to connect to the UK grid up to 125 GW.

Simon continued, “About 80% of that is hyperscale data centres. It’s all AI. The impact on our grid is very real – and it just happened.”

The UK was “never, ever” going to build the required transmission capacity in time, Simon added, with a new connection taking “at least five years.” He also outlined how the available infrastructure is not where data centres want to be, adding that such facilities seek sizeable connections “at transmission voltage, in urban areas near fibre.” This typically sites them away from major power generation zones, where their demand could otherwise help to alleviate network constraints and reduce balancing costs.

Earlier in the day, Dhara Vyas, CEO of trade group Energy UK, told the same event that the UK’s clean energy expansion was being slowed by planning rules and grid connection queues that were “actively deterring investors”.

For more from Montel, click here.

365, Robot Network unveil AI-enabled private cloud platform
365 Data Centers (365), a provider of network-centric colocation, network, cloud, and other managed services, has announced a partnership with Robot Network, a US provider of edge AI platforms, to deliver a new AI-enabled private cloud platform for enterprise customers. Hosted within 365’s cloud infrastructure, the platform supports small-language models, analytics, business intelligence, and cost optimisation, marking a shift in how colocation facilities can function as active layers in AI optimisation.

Integrating AI capabilities into colocation environments

Building on 365’s experience in colocation, network, and cloud services, the collaboration seeks to enable data processing and intelligent operations closer to the edge. The model allows more than 90% of workloads to be handled within the data centre, using high-density AI only where necessary. This creates a hybrid AI architecture that turns colocation from passive hosting into an active optimisation environment, lowering operational costs while allowing AI to run securely within compact, high-density footprints.

Derek Gillespie, CEO of 365 Data Centers, says, “Our objective is to meet AI where colocation, connectivity, and cloud converge.

"This platform will provide seamless integration and economies of scale for our customers and partners, giving them access to AI that is purpose-built for their business initiatives.”

Initial enterprise use cases will be supported by a proprietary AI platform that integrates both small and large language models.

Supporting AI adoption across enterprise operations

Jacob Guedalia, CEO of Robot Network, comments, “We’re pleased to partner with 365 Data Centers to bring this unique offering to market. 365 is a forward-thinking partner with strong colocation capabilities and operational experience.

"By combining our proprietary stack - optimised for AMD EPYC processors and NVIDIA GPUs - with their infrastructure, we’re providing a trusted platform that makes advanced AI accessible and affordable for enterprises.

"Our system leverages small AI models from organisations such as Meta, OpenAI, and Grok to extend AI capabilities to a broader business audience.”

365 says the new platform underlines its strategy to evolve as an infrastructure-as-a-service provider, helping enterprises adopt AI-driven tools and improve efficiency through secure, flexible, and data-informed operations. The company notes it continues to focus on enabling digital transformation across colocation and cloud environments while maintaining reliability and scalability.

For more from 365 Data Centers, click here.
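
The hybrid routing pattern described above, keeping most inference on a small local model and escalating only complex requests to a larger one, can be sketched in a few lines of Python. This is a minimal illustration, not the 365 / Robot Network implementation; the complexity heuristic, threshold, and model functions are all assumptions for the example.

```python
# Minimal sketch of a hybrid AI router: most requests are served by a small
# model hosted in the colocation facility, and only complex ones escalate to
# a large model. All names, values, and the heuristic are illustrative
# assumptions, not details of the 365 / Robot Network platform.

from dataclasses import dataclass

@dataclass
class Request:
    prompt: str

def score_complexity(req: Request) -> float:
    """Toy heuristic: treat longer prompts as more complex (0.0 to 1.0)."""
    return min(len(req.prompt.split()) / 200.0, 1.0)

def run_small_model(req: Request) -> str:
    return f"[small-model answer to: {req.prompt[:40]}]"

def run_large_model(req: Request) -> str:
    return f"[large-model answer to: {req.prompt[:40]}]"

def route(req: Request, threshold: float = 0.8) -> str:
    """Keep low-complexity work local; escalate the rest to high-density AI."""
    if score_complexity(req) < threshold:
        return run_small_model(req)  # stays inside the data centre
    return run_large_model(req)      # used "only where necessary"

if __name__ == "__main__":
    print(route(Request("Summarise this quarter's rack power telemetry.")))
```

In practice the threshold would be tuned so that, as the article claims, around 90% of requests stay on the local small model.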

Schneider, DataCentre UK deliver £1.4m modular DC
Schneider Electric, a global energy technology company, in partnership with its EcoXpert Partner DataCentre UK, has delivered a new modular data centre for South Warwickshire University NHS Foundation Trust (SWFT). The £1.4 million project seeks to strengthen the Trust’s digital infrastructure, improving energy efficiency, operational resilience, and capacity to support future healthcare demands.

Supporting digital transformation and scalable healthcare infrastructure

Facing growing pressure on legacy systems, Innovate Healthcare Services required a modern, scalable, and secure data centre. The new facility incorporates Schneider Electric’s EcoStruxure Data Centre technology, including APC NetShelter racks, modular cooling units, APC power distribution units (PDUs), and Easy UPS systems. Together, these components should provide greater resiliency and efficiency, while supporting the Trust’s sustainability goals.

Paul Almond, MD at DataCentre UK, says, “As an EcoXpert Partner to Schneider Electric, we have integrated Schneider Electric's EcoStruxure Data Centre solutions into the design.

"These solutions are pre-engineered, configurable, and scalable, encompassing racks, power, cooling, and management systems, aimed at maximising resiliency, sustainability, and efficiency.

“Innovate and SWFT trusted our design and our selection of products and approved us to proceed with the build-out.”

Improving energy performance and sustainability

The upgraded infrastructure has reportedly reduced the Trust’s data centre energy consumption by an estimated 60% compared with its previous setup. Enhanced monitoring and management capabilities allow continuous optimisation of performance and efficiency, in line with SWFT’s Green Plan.

“It was all built around sustainability,” notes Mike Conlon, Associate Director of Technology Services at Innovate Healthcare Services. “Conservatively, we are now using 60% less electricity on the same amount of IT load, based on the previous server room implementation, and the system has been designed with an expected annualised PUE of 1.2.”

Ongoing operation and maintenance will be provided through a year-round support agreement managed by DataCentre UK and Schneider Electric.

Strengthening healthcare resilience through infrastructure partnerships

Karlton Gray, Director of Channels, UK & Ireland, Schneider Electric, comments, “With data centres underpinning critical healthcare services, it’s essential that infrastructure delivers the highest levels of reliability, scalability, and sustainability.

"Our solutions have helped Innovate significantly improve their environmental footprint, while maintaining exceptional operational performance, and delivered a healthcare environment built for the future.”

The project demonstrates how modular and energy-efficient data centre designs can support digital transformation across the healthcare sector, helping organisations meet sustainability targets while maintaining continuity of care.

For more from Schneider Electric, click here.
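
As a quick consistency check on the figures quoted in the piece above, the sketch below applies the standard PUE definition (total facility energy divided by IT energy). Only the 1.2 PUE and the 60% saving come from the article; the 500 kW IT load is an assumed value for illustration.

```python
# Worked illustration of the quoted figures: PUE = total facility energy / IT energy.
# The 500 kW IT load is an assumed example value, not from the article.

it_load_kw = 500.0          # assumed constant IT load
design_pue = 1.2            # annualised PUE quoted for the new facility

new_total_kw = it_load_kw * design_pue       # 600 kW facility draw
old_total_kw = new_total_kw / (1 - 0.60)     # a 60% saving implies 1,500 kW before
implied_old_pue = old_total_kw / it_load_kw  # ~3.0 for the legacy server room

print(f"New facility draw: {new_total_kw:.0f} kW (PUE {design_pue})")
print(f"Implied legacy draw: {old_total_kw:.0f} kW (PUE {implied_old_pue:.1f})")
```

Read this way, a 60% cut in electricity at constant IT load implies the legacy server room ran at a PUE of roughly 3.0, which is plausible for an ageing on-premises installation.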

VAST Data, CoreWeave agree $1.17 billion partnership
VAST Data, an AI operating system company, has announced a $1.17 billion (£889.8 million) commercial agreement with CoreWeave, a US provider of GPU-based cloud computing infrastructure for AI workloads, to extend their existing partnership in AI data infrastructure. The deal formalises CoreWeave’s use of the VAST Data Operating System (AI OS) as a key element of its data management platform.

Expanding collaboration on large-scale data operations

CoreWeave’s infrastructure, which uses the VAST AI OS, is designed to provide rapid access to large datasets and support intensive AI workloads. Its modular architecture allows deployment across multiple data centres, maintaining performance and reliability across distributed environments.

As part of the agreement, VAST and CoreWeave will collaborate on new data services intended to improve efficiency in data pipelines and model development. The partnership aims to enhance operational consistency and reduce complexity for enterprise users developing or training AI models at scale.

“At VAST, we are building the data foundation for the most ambitious AI initiatives in the world,” claims Renen Hallak, founder and CEO of VAST Data. “Our deep integration with CoreWeave is the result of a long-term commitment to working side by side at both the business and technical level.

"By aligning our roadmaps, we are delivering an AI platform that organisations cannot find anywhere else in the market.”

“The VAST AI Operating System underpins key aspects of how we design and deliver our AI cloud,” adds Brian Venturo, co-founder and Chief Strategy Officer of CoreWeave. “This partnership enables us to deliver AI infrastructure that is the most performant, scalable, and cost-efficient in the market, while reinforcing the trust and reliability of a data platform that our customers depend on for their most demanding workloads.”

Supporting next-generation AI and compute systems

Both companies say this agreement reflects their joint focus on developing infrastructure that can manage large-scale data processing and continuous AI training. By integrating VAST’s data management systems with CoreWeave’s GPU-based infrastructure, the partnership aims to support use cases such as real-time inference and industrial-scale model training.

For more from VAST Data, click here.

America’s AI revolution needs the right infrastructure
In this article, Ivo Ivanov, CEO of DE-CIX, argues his case for why America’s AI revolution won’t happen without the right kind of infrastructure:

Boom or bust

Artificial intelligence might well be the defining technology of our time, but its future rests on something much less tangible hiding beneath the surface: latency. Every AI service, whether training models across distributed GPU-as-a-Service communities or running inference close to end users, depends on how fast, how securely, and how cost-effectively data can move. Network latency is simply the delay in traffic transmission caused by the distance the data needs to travel: the lower the latency (i.e. the faster the transmission), the better the performance of everything from autonomous vehicles to the applications we carry in our pockets.

There’s always been a trend of technology applications outpacing network capabilities, but we’re feeling it more acutely now due to the sheer pace of AI growth. Depending on where you were in 2012, the average latency for the top 20 applications could reach 200 milliseconds or more. Today, there’s virtually no application in the top 100 that would function effectively with that kind of latency.

That’s why internet exchanges (IXs) have begun to dominate the conversation. An IX is like an airport for data. Just as an airport coordinates the safe landing and departure of dozens of airlines, allowing them to exchange passengers and cargo seamlessly, an IX brings together networks, clouds, and content platforms to exchange traffic seamlessly. The result is faster connections, lower latency, greater efficiency, and a smoother journey for every digital service that depends on it. Deploying these IXs creates what is known as “data gravity”, a magnetic pull that draws in networks, content, and investment. Once this gravity takes hold, ecosystems begin to grow on their own, localising data and services, reducing latency, and fuelling economic growth.

I recently spoke about this at a first-of-its-kind regional AI connectivity summit, The future of AI connectivity in Kansas & beyond, hosted at Wichita State University (WSU) in Kansas, USA. It was the perfect location - given that WSU is the planned site of a new carrier-neutral IX - and the start of a much bigger plan to roll out IXs across university campuses nationwide. Discussions at the summit reflected a growing recognition that America’s AI economy cannot depend solely on coastal hubs or isolated mega-data centres. If AI is to deliver value across all parts of the economy, from aerospace and healthcare to finance and education, it needs a distributed, resilient, and secure interconnection layer reaching deep into the heartland. What is beginning in Wichita is part of a much bigger picture: building the kind of digital infrastructure that will allow AI to flourish.

Networking changed the game, but AI is changing the rules

For all its potential, AI’s crowning achievement so far might be the wakeup call it has given us. It has magnified every weakness in today’s networks. Training models requires immense compute power. Finding the data centre space for this can be a challenge, but new data transport protocols mean that AI processing could, in the future, be spread across multiple data centre facilities. Meanwhile, inference - and especially multi-agent AI inference - demands ultra-low latency, as AI services interact with systems, people, and businesses in real time.

But for both of these scenarios, the efficiency and speed of the network is key. If the network cannot keep pace (if data needs to travel too far), these applications become too slow to be useful. That’s why the next breakthrough in AI won’t be in bigger or better models, but in the infrastructure that connects them all. By bringing networks, clouds, and enterprises together on a neutral platform, an IX makes it possible to aggregate GPU resources across locations, create agile GPU-as-a-Service communities, and deliver real-time inference with the best performance and highest level of security.

AI changes the geography of networking too. Instead of relying only on mega-hubs in key locations, we need interconnection spokes that reach into every region where people live, work, and innovate. Otherwise, businesses in the middle of the country face the “tromboning effect”, where their data detours hundreds of miles to another city to be exchanged and processed before returning a result - adding latency, raising costs, and weakening performance. We need to make these distances shorter, reduce path complexity, and allow data to move freely and securely between every player in the network chain. That’s how AI is rewriting the rulebook; latency, underpinned by distance and geography, matters more than ever.
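
To put rough numbers on the tromboning effect, the sketch below estimates round-trip propagation delay from fibre path length alone, using the common approximation that light in fibre covers about 200 km per millisecond. The city distances are assumed example values, and real latency would be higher once routing and queuing delays are included.

```python
# Rough illustration of the "tromboning effect": propagation delay scales
# with fibre path length. Distances are assumed example values; real-world
# latency also includes routing, queuing, and equipment delays.

SPEED_IN_FIBRE_KM_PER_MS = 200.0  # light in glass travels ~200,000 km/s

def round_trip_ms(path_km: float) -> float:
    """Round-trip propagation delay for a one-way fibre path of path_km."""
    return 2 * path_km / SPEED_IN_FIBRE_KM_PER_MS

local_ix_km = 25           # traffic exchanged at a nearby IX
tromboned_km = 25 + 1600   # traffic detouring to a distant hub and back

print(f"Local exchange: ~{round_trip_ms(local_ix_km):.2f} ms round trip")
print(f"Tromboned path: ~{round_trip_ms(tromboned_km):.2f} ms round trip")
```

Even before congestion is considered, a 1,600 km detour adds roughly 16 ms per round trip, which multiplies quickly across the many exchanges a real-time AI service makes.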

Building ecosystems and 'data gravity'

When we establish an IX, we’re doing more than just connecting networks; we’re laying the foundations of a future-proof ecosystem. I’ve seen this happen countless times. The moment a neutral (meaning data centre and carrier neutral) exchange is in place, it becomes a magnet that draws in networks, content providers, data centres, and investors. The pull of “data gravity” transforms a market from being dependent on distant hubs into a self-sustaining digital environment. What may look like a small step - a handful of networks exchanging traffic locally - very quickly becomes an accelerant for rapid growth.

Dubai is one of the clearest examples. When we opened our first international platform there in 2012, 90% of the content used in the region was hosted outside of the Middle East, with latency above 200 milliseconds. A decade later, 90% of that content is localised within the region and latency has dropped to just three milliseconds. This was a direct result of the gravity created by the exchange, pulling more and more stakeholders into the ecosystem.

For AI, that localisation isn’t just beneficial; it’s essential. Training and inference both depend on data being closer to where it is needed. Without the gravity of an IX, content and compute remain scattered and far away, and performance suffers. With it, however, entire regions can unlock the kind of digital transformation that AI demands.

The American challenge

There was a time when connectivity infrastructure was dominated by a handful of incumbents, but that time has long since passed. Building AI-ready infrastructure isn’t something that one organisation or sector can do alone. Everywhere that has succeeded in building an AI-ready network environment has done so through partnerships - between data centre, network, and IX operators, alongside policymakers, technology providers, universities, and - of course - the business community itself. When those pieces of the puzzle are assembled, the result is a healthy ecosystem that benefits everyone. This collaborative model, like the one envisaged for the IX at WSU, is exactly what the US needs if it is to realise the full potential of AI.

Too much of America’s digital economy still depends on coastal hubs, while the centre of the country is underserved. That means businesses in aerospace, healthcare, finance, and education - many of which are based deep in the US heartland - must rely on services delivered from other states and regions, and that isn’t sustainable when it comes to AI. To solve this, we need a distributed layer of interconnection that extends across the entire nation. Only then can we create a truly digital America, where every city has access to the same secure, high-performance infrastructure required to power its AI-driven future.

For more from DE-CIX, click here.

Red Hat adds support for OpenShift on NVIDIA BlueField DPUs
Red Hat, a US provider of open-source software, has announced support for running Red Hat OpenShift on NVIDIA BlueField data processing units (DPUs). The company says the development is intended to help organisations deploy AI workloads with improved security, networking, and storage performance.

According to Red Hat, modern AI applications increasingly compete with core infrastructure services for system resources, which can affect performance and security. The company states that running OpenShift with BlueField aims to separate AI workloads from infrastructure functions, such as networking and security, to improve operational efficiency and reduce system contention. It says the platform will support enhanced networking, more streamlined lifecycle management, and resource offloading to the DPU.

Workload isolation and resource efficiency

Red Hat states that by shifting networking services and infrastructure management tasks to the DPU, host CPU resources can be freed for AI applications. It also highlights acceleration features for data-plane and storage-traffic processing, including support for NVMe over Fabrics and optimised Open vSwitch data paths. Additional features include distributed routing for multi-tenant environments and security controls designed to reduce attack surfaces by isolating workloads away from infrastructure services.

Support for BlueField on OpenShift will be offered initially as a technology preview, with broader integration planned. Red Hat notes that ongoing work with NVIDIA aims to add further support for the NVIDIA DOCA software framework and third-party network functions. The companies also expect future capability enhancements with the next generation of BlueField hardware, as well as integration with NVIDIA’s Spectrum-X Ethernet networking for distributed AI environments.

Ryan King, Vice President, AI and Infrastructure, Partner Ecosystem Success at Red Hat, comments, “As the adoption of generative and agentic AI grows, the demand for advanced security and performance in data centres has never been higher, particularly with the proliferation of AI workloads.

"Our collaboration with NVIDIA to enable Red Hat OpenShift support for NVIDIA BlueField DPUs provides customers with a more reliable, secure, and high-performance platform to address this challenge and maximise their hardware investment.”

Justin Boitano, Vice President, Enterprise Products at NVIDIA, adds, “Data-intensive AI reasoning workloads demand a new era of secure and efficient infrastructure.

"The Red Hat OpenShift integration of NVIDIA BlueField builds on our longstanding work to empower organisations to achieve unprecedented scale and performance across their AI infrastructure.”

For more from Red Hat, click here.
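
As a rough illustration of the offload argument in the piece above, the sketch below estimates how much host CPU a cluster might reclaim when infrastructure services move from the host to DPUs. All core counts are assumptions for the example, not figures published by Red Hat or NVIDIA.

```python
# Back-of-the-envelope illustration of DPU offload: cores that previously ran
# infrastructure services (networking, storage, security agents) on the host
# CPU become available for AI workloads. All figures are assumed examples.

nodes = 100
cores_per_node = 64
infra_cores_on_host = 8     # assumed per-node cost of infra services on the CPU
infra_cores_with_dpu = 1    # assumed residual host footprint after offload

reclaimed = nodes * (infra_cores_on_host - infra_cores_with_dpu)
total = nodes * cores_per_node
print(f"Reclaimed for AI workloads: {reclaimed} cores "
      f"({100 * reclaimed / total:.1f}% of the cluster)")
```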

Kao Data unveils blueprint to accelerate UK AI ambitions
Kao Data, a specialist developer and operator of data centres engineered for AI and advanced computing, has released a strategic report charting a clear path to accelerate the UK's AI ambitions in support of the UK Government's AI Opportunities Action Plan. The report, AI Taking, Not Making, delivers practical recommendations to bridge the gap between government and industry - helping organisations to capitalise on the recent £31 billion in commitments from leading US technology firms including Microsoft, Google, NVIDIA, OpenAI, and CoreWeave, and highlighting key pitfalls which could prevent them from materialising.

Drawing on Kao Data's expertise in delivering hyperscale-inspired AI infrastructure, the report identifies three strategic pillars which must be addressed for the UK to secure the much-anticipated economic boost from the Government’s Plan for Change. Key areas include energy pricing and grid modernisation, proposed amendments to the UK’s AI copyright laws, and coordinated investment strategies across the country’s energy and data centre infrastructure systems.

"Matt Clifford's AI Opportunities Action Plan has galvanised the industry around a bold vision for Britain's digital future, and the recent investment pledges from global technology leaders signal tremendous confidence in our potential," says Spencer Lamb, MD & CCO of Kao Data. "What's needed now is focused collaboration between industry and government to transform these commitments into world-class infrastructure. Our new report offers a practical roadmap to make this happen - drawing on our experience developing data centres engineered for AI and advanced computing, and operating those which already power some of the world's most demanding workloads."

The new Kao Data report outlines concrete opportunities for partnership across a series of strategic pillars, including integrating data centres into Energy Intensive Industry (EII) frameworks, implementing zonal power pricing near renewable energy generation, and accelerating grid modernisation to unlock the projected 10 GW AI opportunity by 2030. Additionally, the report proposes new measures to evolve UK AI copyright law with a pragmatic approach that protects creative industries whilst ensuring Britain remains a competitive location for the large-scale AI training deployments essential for attracting frontier AI workloads. Further, the paper shares key considerations to optimise the government's AI Growth Zones (AIGZs), defining benefit structures that create stronger alignment between public infrastructure programmes and the private sector to ensure rapid deployment of sovereign UK AI capacity.

"Britain possesses extraordinary advantages: world-leading research institutions, exceptional engineering talent, and now substantial investment to back the country’s AI ambitions," Spencer continues. "By working in partnership with government, we believe we can transform these strengths into the physical infrastructure that will power the next generation of industrial-scale AI innovations and deliver solutions that position the UK at the forefront of the global AI race."

To download the full report, click here. For more from Kao Data, click here.

Start Campus, Nscale to deploy NVIDIA Blackwell in the EU
Start Campus, a designer, builder, and operator of sustainable data centres, in partnership with hyperscaler Nscale, has announced one of the European Union’s first deployments of the NVIDIA GB300 NVL72 platform at its SIN01 data centre in Sines, Portugal. The project supports Microsoft’s AI infrastructure strategy and marks a milestone in the development of advanced, sovereign AI capacity within the EU.

Nscale, a European-headquartered AI infrastructure company operating globally, selected Start Campus’s site for its strategic location, readiness, and scalability. The first phase of the deployment is scheduled to go live in early 2026 at the SINES Data Campus.

High-density power to support next-generation AI

The NVIDIA GB300 NVL72 platform is designed for high-performance AI inference and training workloads, supporting larger and more complex model development. Start Campus says the installation will accommodate rack densities exceeding 130 kW, with power and cooling systems engineered to meet the requirements of ultra-dense AI computing.

Portugal’s government has welcomed the investment as a key step in strengthening the country’s position in the European digital economy. Castro Almeida, Minister of Economy and Territorial Cohesion, comments, “This investment in Sines confirms international confidence in Portugal as a destination for innovation and technology. It strengthens our position in the global digital economy and supports high-value job creation.”

Miguel Pinto Luz, Minister of Infrastructure and Housing, adds, “Start Campus illustrates how digital and infrastructure strategies can align to deliver long-term sustainability.

"Sines demonstrates the convergence of the digital transition with Portugal’s geographic advantages - particularly its port, which plays a strategic role in connecting new submarine cables and enabling low-carbon investment.”

Recent research by Copenhagen Economics projects that data centre investment could contribute up to €26 billion (£22.6 billion) to Portugal’s GDP by 2030, creating tens of thousands of jobs. Portugal’s location supports strong connectivity through high-capacity subsea cables such as Equiano, 2Africa, and EllaLink, providing low-latency global links. Data from national grid operator Redes Energéticas Nacionais (REN) shows that renewable energy supplied 71% of Portugal’s electricity consumption in 2024, rising to 81% in early 2025. The country’s energy costs also remain below EU and Euro Area averages.

The next steps

Robert Dunn, CEO of Start Campus, says, “With SIN01 now at full capacity and expanding to meet demand, we have demonstrated that the SINES Data Campus is ready for ultra-dense, next-generation AI workloads.

"Partnering with Nscale and NVIDIA on this deployment highlights Portugal’s role as a leader in sustainable AI infrastructure.”

The company is also progressing work on its next facility, the 180 MW SIN02 data centre, which will form part of the same campus.

Josh Payne, CEO and founder of Nscale, notes, “AI requires an environment that combines scale, resilience, and sustainability. This deployment demonstrates our ability to deliver advanced infrastructure in the European Union while meeting the technical demands of modern workloads.

"Partnering with Start Campus allows us to lay the groundwork for the next generation of AI.”

Nscale is expanding its European footprint, including building the UK’s largest AI supercomputer with Microsoft at its Loughton campus and partnering with Aker ASA on Stargate Norway - a joint venture linked to multi-billion-euro agreements with Microsoft.

For more from Start Campus, click here.

Rethinking infrastructure for the AI era
In this exclusive article for DCNN, Jon Abbott, Technologies Director, Global Strategic Clients at Vertiv, explains how the challenge for operators is no longer simply maintaining uptime; it’s adapting infrastructure fast enough to meet the unpredictable, high-intensity demands of AI workloads:

Built for backup, ready for what’s next

Artificial intelligence (AI) is changing how facilities are built, powered, cooled, and secured. The industry is now facing hard questions about whether existing infrastructure, designed for traditional enterprise or cloud loads, can be successfully upgraded to support the pace and intensity of AI-scale deployments. Data centres are being pushed to adapt quickly, and the pressure is mounting from all sides: from soaring power densities to unplanned retrofits, and from tighter build timelines to demands for grid interactivity and physical resilience. What’s clear is that we’ve entered a phase where infrastructure is no longer just about uptime; instead, it’s about responsiveness, integration, and speed.

The new shape of demand

Today’s AI systems don’t scale in neat, predictable increments; they arrive with sharp step-changes in power draw, heat generation, and equipment footprint. Racks that once averaged under 10kW are being replaced by those consuming 30kW, 40kW, or even 80kW - often in concentrated blocks that push traditional cooling systems to their limits. This is a physical problem. Heavier and wider AI-optimised racks require new planning for load distribution, cooling system design, and containment. Many facilities are discovering that the margins they once relied on - in structural tolerance, space planning, or energy headroom - have already evaporated.

Cooling strategies, in particular, are under renewed scrutiny. While air cooling continues to serve much of the IT estate, the rise of liquid-cooled AI workloads is accelerating. Rear-door heat exchangers and direct-to-chip cooling systems are no longer reserved for experimental deployments; they are being actively specified for near-term use. Most of these systems do not replace air entirely, but work alongside it. The result is a hybrid cooling environment that demands more precise planning, closer system integration, and a shift in maintenance thinking.

Deployment cycles are falling behind

One of the most critical tensions AI introduces is the mismatch between innovation cycles and infrastructure timelines. AI models evolve in months, but data centres are typically built over years. This gap creates mounting pressure on procurement, engineering, and operations teams to find faster, lower-risk deployment models. As a result, there is increasing demand for prefabricated and modular systems that can be installed quickly, integrated smoothly, and scaled with less disruption. These approaches are not being adopted to reduce cost; they are being adopted to save time and to de-risk complex commissioning across mechanical and electrical systems. Integrated uninterruptible power supply (UPS) and power distribution units, factory-tested cooling modules, and intelligent control systems are all helping operators compress build timelines while maintaining performance and compliance. Where operators once sought redundancy above all, they are now prioritising responsiveness as well: the ability to flex infrastructure around changing workload patterns.

Security matters more when the stakes rise

AI infrastructure is expensive, energy-intensive, and often tied to commercially sensitive operations. That puts physical security firmly back on the agenda - not only for hyperscale operators, but also for enterprise and colocation facilities managing high-value compute assets. Modern data centres are now adopting a more layered approach to physical security. It begins with perimeter control, but extends through smart rack-level locking systems, biometric or multi-factor authentication, and role-based access segmentation. For some facilities - especially those serving AI training operations - real-time surveillance and environmental alerting are being integrated directly into operational platforms. The aim is to reduce blind spots between security and infrastructure and to help identify risks before they interrupt service.

The invisible fragility of hybrid environments

One of the emerging risks in AI-scale facilities is the unintended fragility created by multiple overlapping systems. Cooling loops, power chains, telemetry platforms, and asset tracking tools all work in parallel, but without careful integration, they can fail to provide a coherent operational picture. Hybrid cooling systems may introduce new points of failure that are not always visible to standard monitoring tools. Secondary fluid networks, for instance, must be managed with the same criticality as power infrastructure. If overlooked, they can become weak points in otherwise well-architected environments. Likewise, inconsistent commissioning between systems can lead to drift, incompatibility, and inefficiency. These challenges are prompting many operators to invest in more integrated control platforms that span thermal, electrical, and digital infrastructure. The goal is to be able to see issues and act quickly - to rebalance loads, adapt cooling, or respond to anomalies in real time.

Power systems are evolving too

As compute densities rise, so too does energy consumption. Operators are looking at how backup systems can do more than sit idle: UPS fleets are being turned into grid-support assets. Demand response and peak shaving programmes are becoming part of energy strategy. Many data centres are now exploring microgrid models that incorporate renewables, fuel cells, or energy storage to offset demand and reduce reliance on volatile grid supply.
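
As a simple illustration of the peak-shaving idea, the sketch below discharges stored energy whenever site demand exceeds a contracted grid-draw threshold. The load profile, threshold, and battery parameters are assumed example values, not Vertiv figures.

```python
# Minimal peak-shaving sketch: discharge stored energy (e.g. a UPS battery
# fleet) whenever site demand exceeds a contracted peak. All values are
# assumed examples for illustration, not figures from Vertiv or any operator.

hourly_load_kw = [800, 820, 900, 1250, 1400, 1300, 950, 850]  # assumed profile
peak_threshold_kw = 1000    # contracted maximum grid draw
battery_kwh = 1200          # usable stored energy
max_discharge_kw = 500      # inverter power limit

stored = battery_kwh
for hour, load in enumerate(hourly_load_kw):
    excess = max(0, load - peak_threshold_kw)
    discharge = min(excess, max_discharge_kw, stored)  # shave what we can
    stored -= discharge                                # 1-hour steps: kW -> kWh
    grid_draw = load - discharge
    print(f"h{hour}: load {load} kW -> grid {grid_draw:.0f} kW "
          f"(discharge {discharge:.0f} kW, stored {stored:.0f} kWh)")
```

In this toy profile, the battery holds grid draw at the 1,000 kW threshold through the midday peak; a real controller would also schedule recharging during off-peak hours and respect battery health limits.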

What all of this reflects is a shift in mindset. Infrastructure is no longer a fixed investment; it is a dynamic capability - one that must scale, flex, and adapt in real time. Operators who understand this are best placed to succeed in a fast-moving environment.

From resilience to responsiveness

The old model of data centre resilience was built around failover and redundancy. Today, resilience also means responsiveness: the ability to handle unexpected load spikes, adjust cooling to new workloads, maintain uptime under tighter energy constraints, and secure physical systems across more fluid operating environments. This shift is already reshaping how data centres are specified, how vendors are selected, and how operators evaluate return on infrastructure investment. What once might have been designed in isolated disciplines - cooling, power, controls, access - is now being engineered as part of a joined-up, system-level operational architecture. Intelligent data centres are not defined by their scale, but by their ability to stay ahead of what’s coming next.

For more from Vertiv, click here.

AI infrastructure as a Trojan horse for climate infrastructure
Data centres are getting bigger, denser, and more power-hungry than ever before. The rapid rise of artificial intelligence (AI) is accelerating this expansion, driving one of the largest capital build-outs of our time. Left unchecked, hyperscale growth could deepen strains on energy, water, and land - while concentrating economic benefits in just a few regions. But this trajectory isn’t inevitable.

In this whitepaper, Shilpika Gautam, CEO and founder of Opna, explores how shifting from training-centric hyperscale facilities to inference-first, modular, and distributed data centres can align AI’s growth with climate resilience and community prosperity.

The paper examines:

• How right-sized, locally integrated data centres can anchor clean energy projects and strengthen grids through flexible demand,
• Opportunities to embed circularity by reusing waste heat and water, and to drive demand for low-carbon materials and carbon removal, and
• The need for transparency, contextual siting, and community accountability to ensure measurable, lasting benefits.

Decentralised compute decentralises power. By embracing modular, inference-first design, AI infrastructure can become a force for both planetary sustainability and shared prosperity.

You can download the whitepaper for yourself by clicking this link.


