In this exclusive article for DCNN, Jon Abbott, Technologies Director, Global Strategic Clients at Vertiv, explains how the challenge for operators is no longer simply maintaining uptime; it’s adapting infrastructure fast enough to meet the unpredictable, high-intensity demands of AI workloads:
Artificial intelligence (AI) is changing how facilities are built, powered, cooled and secured. The industry is now facing hard questions about whether existing infrastructure, designed for traditional enterprise or cloud loads, can be upgraded successfully to support the pace and intensity of AI-scale deployments.
Data centres are being pushed to adapt quickly and the pressure is mounting from all sides: from soaring power densities to unplanned retrofits, and from tighter build timelines to demands for grid interactivity and physical resilience. What’s clear is that we’ve entered a phase where infrastructure is no longer just about uptime; instead, it’s about responsiveness, integration, and speed.
Today’s AI systems don’t scale in neat, predictable increments; they arrive with sharp step-changes in power draw, heat generation, and equipment footprint. Racks that once averaged under 10kW are being replaced by those consuming 30kW, 40kW, or even 80kW – often in concentrated blocks that push traditional cooling systems to their limits.
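To put those step-changes in context, the short sketch below works through the standard sensible-heat relationship (heat removed = mass flow × specific heat × temperature rise) to show how much air each rack density demands. The 12K supply/return split and the air properties are illustrative assumptions, not figures from the article.

```python
# Air-cooling arithmetic for rising rack densities: a minimal sketch.
# Assumes sensible heat only, air density ~1.2 kg/m^3, cp ~1005 J/(kg*K),
# and an assumed 12 K supply/return temperature split.

AIR_DENSITY = 1.2   # kg/m^3
CP_AIR = 1005.0     # J/(kg*K)

def airflow_m3_per_hr(rack_kw: float, delta_t_k: float = 12.0) -> float:
    """Volumetric airflow needed to remove rack_kw of heat at delta_t_k."""
    mass_flow_kg_s = (rack_kw * 1000.0) / (CP_AIR * delta_t_k)
    return mass_flow_kg_s / AIR_DENSITY * 3600.0

for kw in (10, 30, 40, 80):
    print(f"{kw:>2} kW rack -> ~{airflow_m3_per_hr(kw):,.0f} m^3/hr of air")
```

Because the relationship is linear, an 80kW rack needs roughly eight times the airflow of a 10kW rack at the same temperature split – which is why concentrated AI blocks overwhelm air systems sized around the old average.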
This is a physical problem as much as an electrical one. Heavier and wider AI-optimised racks require new planning for load distribution, cooling system design, and containment. Many facilities are discovering that the margins they once relied on – in structural tolerance, space planning, or energy headroom – have already evaporated.
Cooling strategies, in particular, are under renewed scrutiny. While air cooling continues to serve much of the IT estate, the rise of liquid-cooled AI workloads is accelerating. Rear-door heat exchangers and direct-to-chip cooling systems are no longer reserved for experimental deployments; they are being actively specified for near-term use. Most of these systems do not replace air entirely, but work alongside it. The result is a hybrid cooling environment that demands more precise planning, closer system integration, and a shift in maintenance thinking.
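As a rough planning illustration, the sketch below splits a rack's heat between a direct-to-chip liquid loop and the residual load that the room's air system must still absorb. The 75% liquid-capture fraction is an assumed figure for illustration; real capture rates vary by server design.

```python
# Hybrid cooling planning sketch: direct-to-chip loops capture only part
# of the rack's heat, so the remainder still lands on the air system.
# The 0.75 capture fraction is an illustrative assumption.

def hybrid_heat_split(rack_kw: float, liquid_capture: float = 0.75):
    """Split rack heat between the liquid loop and the residual air load."""
    liquid_kw = rack_kw * liquid_capture
    return liquid_kw, rack_kw - liquid_kw

liquid, air = hybrid_heat_split(80.0)
print(f"80 kW rack: ~{liquid:.0f} kW to liquid, ~{air:.0f} kW still on air")
```

Even at high capture rates, a row of such racks each leaving 20kW on the air side adds up quickly – which is why the hybrid environment demands the closer planning and integration described above.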
One of the most critical tensions AI introduces is the mismatch between innovation cycles and infrastructure timelines. AI models evolve in months, but data centres are typically built over years. This gap creates mounting pressure on procurement, engineering, and operations teams to find faster, lower-risk deployment models.
As a result, there is increasing demand for prefabricated and modular systems that can be installed quickly, integrated smoothly, and scaled with less disruption. These approaches are being adopted not to reduce cost, but to save time and to de-risk complex commissioning across mechanical and electrical systems.
Integrated uninterruptible power supply (UPS) and power distribution units, factory-tested cooling modules, and intelligent control systems are all helping operators compress build timelines while maintaining performance and compliance. Where operators once sought redundancy above all, they are now prioritising responsiveness as well as the ability to flex infrastructure around changing workload patterns.
AI infrastructure is expensive, energy-intensive, and often tied to commercially sensitive operations. That puts physical security firmly back on the agenda – not only for hyperscale operators, but also for enterprise and colocation facilities managing high-value compute assets.
Modern data centres are now adopting a more layered approach to physical security. It begins with perimeter control, but extends through smart rack-level locking systems, biometric or multi-factor authentication, and role-based access segmentation. For some facilities – especially those serving AI training operations – real-time surveillance and environmental alerting are being integrated directly into operational platforms. The aim is to reduce blind spots between security and infrastructure and to help identify risks before they interrupt service.
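To make the layering concrete, here is a minimal sketch of role-based access segmentation with an additional multi-factor check on high-value AI pods. The roles, zones, and policy are hypothetical, intended only to illustrate the idea of segmented, layered access.

```python
# Role-based access segmentation sketch. Roles, zones and the MFA rule
# for AI pods are hypothetical illustrations, not any vendor's product.

from dataclasses import dataclass

ROLE_ZONES = {
    "facilities": {"perimeter", "plant"},
    "it_ops": {"perimeter", "data_hall"},
    "ai_team": {"perimeter", "data_hall", "ai_pod"},
}

@dataclass
class AccessRequest:
    role: str
    zone: str
    mfa_passed: bool

def authorise(req: AccessRequest) -> bool:
    """Grant access only if the role covers the zone; AI pods also need MFA."""
    if req.zone not in ROLE_ZONES.get(req.role, set()):
        return False
    if req.zone == "ai_pod" and not req.mfa_passed:
        return False
    return True

print(authorise(AccessRequest("it_ops", "ai_pod", True)))    # False: wrong role
print(authorise(AccessRequest("ai_team", "ai_pod", False)))  # False: no MFA
print(authorise(AccessRequest("ai_team", "ai_pod", True)))   # True
```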
One of the emerging risks in AI-scale facilities is the unintended fragility created by multiple overlapping systems. Cooling loops, power chains, telemetry platforms, and asset tracking tools all work in parallel, but without careful integration, they can fail to provide a coherent operational picture.
Hybrid cooling systems may introduce new points of failure that are not always visible to standard monitoring tools. Secondary fluid networks, for instance, must be managed with the same criticality as power infrastructure. If overlooked, they can become weak points in otherwise well-architected environments. Likewise, inconsistent commissioning between systems can lead to drift, incompatibility, and inefficiency.
These challenges are prompting many operators to invest in more integrated control platforms that span thermal, electrical, and digital infrastructure. The goal is not just to see issues, but to act on them quickly – rebalancing loads, adapting cooling, or responding to anomalies in real time.
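A hedged sketch of the kind of cross-domain check such a platform might run is shown below. The telemetry functions are stubs standing in for real BMS/DCIM integrations, and the thresholds are invented for illustration.

```python
# Cross-domain monitoring sketch: correlate electrical and thermal
# telemetry per rack so neither silo hides a developing problem.
# Thresholds and the random stubs are illustrative assumptions.

import random

POWER_LIMIT_KW = 70.0     # assumed per-rack electrical ceiling
SUPPLY_TEMP_MAX_C = 27.0  # assumed supply-air limit

def read_power_kw(rack_id: str) -> float:
    return random.uniform(40.0, 85.0)   # stub for a real power feed

def read_supply_temp_c(rack_id: str) -> float:
    return random.uniform(20.0, 30.0)   # stub for a real sensor feed

def check_rack(rack_id: str) -> list[str]:
    """Flag any rack whose power or supply-air telemetry breaches limits."""
    alerts = []
    power, temp = read_power_kw(rack_id), read_supply_temp_c(rack_id)
    if power > POWER_LIMIT_KW:
        alerts.append(f"{rack_id}: power {power:.1f} kW over limit")
    if temp > SUPPLY_TEMP_MAX_C:
        alerts.append(f"{rack_id}: supply air {temp:.1f} degC over limit")
    return alerts

for rack in ("A01", "A02", "A03"):
    for alert in check_rack(rack):
        print(alert)
```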
As compute densities rise, so too does energy consumption. Operators are looking at how backup systems can do more than sit idle: UPS fleets are being turned into grid-support assets. Demand response and peak shaving programmes are becoming part of energy strategy. Many data centres are now exploring microgrid models that incorporate renewables, fuel cells, or energy storage to offset demand and reduce reliance on volatile grid supply.
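As a simplified illustration of peak shaving, the sketch below discharges a battery-backed UPS asset whenever site demand exceeds an assumed contracted grid ceiling, and recharges it when there is headroom. All figures are illustrative assumptions, not any product's behaviour.

```python
# Peak-shaving dispatch sketch over 15-minute demand intervals.
# Grid ceiling, storage size and demand samples are illustrative.

GRID_CEILING_KW = 900.0   # assumed contracted peak
BATTERY_KWH = 500.0       # assumed usable storage

def dispatch(demand_kw: float, soc_kwh: float, hours: float = 0.25):
    """Return (grid_kw, new_soc_kwh) for one interval of site demand."""
    if demand_kw > GRID_CEILING_KW and soc_kwh > 0:
        shave_kw = min(demand_kw - GRID_CEILING_KW, soc_kwh / hours)
        return demand_kw - shave_kw, soc_kwh - shave_kw * hours
    headroom_kw = GRID_CEILING_KW - demand_kw
    charge_kw = min(headroom_kw, (BATTERY_KWH - soc_kwh) / hours)
    return demand_kw + charge_kw, soc_kwh + charge_kw * hours

soc = BATTERY_KWH
for demand in (700.0, 950.0, 1050.0, 800.0):
    grid, soc = dispatch(demand, soc)
    print(f"demand {demand:.0f} kW -> grid {grid:.0f} kW, battery {soc:.0f} kWh")
```

The same basic dispatch logic extends to demand-response participation, where the ceiling follows a grid signal rather than a fixed contract.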
What all of this reflects is a shift in mindset. Infrastructure is no longer a fixed investment; it is a dynamic capability – one that must scale, flex, and adapt in real time. Operators who understand this are best placed to succeed in a fast-moving environment.
The old model of data centre resilience was built around failover and redundancy. Today, resilience also means responsiveness: the ability to handle unexpected load spikes, adjust cooling to new workloads, maintain uptime under tighter energy constraints, and secure physical systems across more fluid operating environments.
This shift is already reshaping how data centres are specified, how vendors are selected, and how operators evaluate return on infrastructure investment. What once might have been designed in isolated disciplines – cooling, power, controls, access – is now being engineered as part of a joined-up, system-level operational architecture. Intelligent data centres are not defined by their scale, but by their ability to stay ahead of what’s coming next.