Latest News


Rewiring the data centre
In this exclusive article for DCNN, Will Stewart, Global Senior Industry Segment Manager Lead, Smart Infrastructure and Mobility at HARTING, explores how modularity, power density, and sustainability are converging to redefine how facilities are built, cooled, and scaled:

Building smarter infrastructure for the AI age

Artificial intelligence (AI) has moved from hype to headline, impacting everything from health diagnostics to financial analysis. While the public marvels at AI breakthroughs, the engines powering these advances - the world’s data centres - face growing, behind-the-scenes challenges. As organisations expand their AI capabilities, the energy needed to support modern computing infrastructure is rising at an unprecedented rate. Current research projects that global data centre power demand will increase by 50% by as early as 2027 and by 165% by 2030, with much of this surge attributed to the explosive growth of AI workloads. Data centres already account for approximately 2% of worldwide electricity consumption, and forecasts suggest this share will continue its upward march. The resulting strain extends beyond server rooms; it is reshaping energy supply chains, policy priorities, and environmental strategy across industries.

Rising to the infrastructure challenge

Serving next-generation, AI-driven applications requires a dramatic rethink of traditional data centre design. Historically, a data centre’s infrastructure balanced a mix of physical and virtual resources - servers, storage, networking, power distribution units, cooling systems, security protocols, and supporting elements like racks and fire suppression - all engineered for reliability and uptime. AI’s energy-hungry, compute-intensive tasks have shattered that balance. Data centres today must deliver far more power to denser racks, operate reliably under heavier loads, and deploy new capacity at speeds unimaginable even a decade ago. These requirements are putting immense pressure on every inch of physical infrastructure, from the electrical grid connection to the server cabinet.

Navigating power and cooling demands

One of the most acute challenges is escalating power and cooling demand. Where historical rack architecture required 16 or 32A, current designs push 70, 100, or even 200A, often in the same amount of physical rack space. These step-change increases not only generate more heat, but require thicker, less flexible power cabling, creating new problems for deployment and ongoing maintenance. Efficiently removing heat from ever-denser configurations is a major engineering challenge. Next-generation cooling technologies - ranging from liquid cooling to full-system immersion - are becoming essential rather than optional. At the same time, every connection point and cable run becomes a potential source of inefficiency or risk. Operators can no longer afford energy loss, heat generation, or the downtime that results from outdated power distribution or poorly optimised layouts.
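To put those rack-current figures in perspective, here is a rough back-of-envelope sketch converting supply current into approximate rack power. The 400 V three-phase feed, 0.95 power factor, and the rack_power_kw helper are illustrative assumptions, not figures from the article.

```python
import math

def rack_power_kw(current_a: float, line_voltage_v: float = 400.0,
                  three_phase: bool = True, power_factor: float = 0.95) -> float:
    """Approximate rack power draw in kW for a given supply current.

    Assumes a 400 V three-phase feed and a 0.95 power factor by default;
    real deployments vary, so treat the result as an order-of-magnitude guide.
    """
    volt_amps = (math.sqrt(3) if three_phase else 1.0) * line_voltage_v * current_a
    return volt_amps * power_factor / 1000.0

# Legacy rack feeds (16 A, 32 A) versus current AI-era designs (70-200 A)
for amps in (16, 32, 70, 100, 200):
    print(f"{amps:>3} A  ->  ~{rack_power_kw(amps):.0f} kW per rack")
```

Under these assumptions, a 16 A feed works out at roughly 11 kW per rack while a 200 A feed exceeds 130 kW, which is the order-of-magnitude jump in heat and cabling load described above.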
The space and scalability constraint

AI workloads are increasingly mission-critical. Even short interruptions in data centre uptime can lead to significant financial loss or damaging outages for users and services. With power loads climbing fast and every square foot optimised, the need for trustworthy, quickly serviceable infrastructure grows more urgent. Reliable system operation is now a defining competitive factor for data centre operators.

To complicate matters further, capacity needs are accelerating, but available space remains finite. In many regions, the cost and scarcity of real estate force data centres to pack as much compute and power as possible into smaller footprints. As higher-power architectures proliferate, the infrastructure supporting them - from power to networking - must become more compact and adaptable, maintaining robust operation without sacrificing maintainability or safety. Because new workloads can spike unpredictably, data centre leaders now require infrastructure that can be rapidly scaled up, upgraded, or reconfigured, sometimes within days rather than months. The traditional model of labour-intensive rewiring is proving unsustainable in this emerging reality.

Sustainability in the spotlight

Environmental scrutiny from regulators, investors, and end-users places data centres at the heart of the global decarbonisation agenda. Facilities must now integrate renewable energy, maximise electrical efficiency, and minimise overall carbon footprints while delivering more power each year. Achieving these goals demands holistic change, from energy procurement and grid strategy down to every connector, cable, and cooling loop inside the facility.

The challenges of the AI era are being met with new ideas at every level of the data centre: smarter building management systems now orchestrate lighting, thermal control, and energy use with unprecedented efficiency; cooling technologies are evolving quickly, as operators push beyond the limits of traditional air-based systems; and advanced power distribution and grid connectivity solutions are enabling better load balancing, more reliable energy supply, and easier renewable integration.

Within this broad transformation, the move towards modular, plug-and-play connections - sometimes called connectorisation - is having a dramatic impact. Unlike hardwired installations - which are slow to deploy, often hard to scale and maintain, and require specialised labour that is frequently unavailable - connectorised infrastructure supports pre-assembled, pre-tested units that can be installed in days rather than weeks, using the workforce already available on-site. This not only gets new capacity online faster, but also reduces the opportunity for error, simplifies expansions, and supports higher power throughput within constrained spaces. Connectors designed for current and future demands minimise heat and energy loss, enhance reliability, and simplify upgrades. Maintenance is easier and faster, with less need for specialised expertise and less operational downtime. These modular technologies are also helping data centres optimise their architecture, manage complex workloads, and future-proof their operations.

Cooperation and adaptation in a complex landscape

Modernising data centre infrastructure is not simply a technical challenge, but one that requires broad collaboration between technology vendors, utilities, cloud providers, regulators, and policymakers. Federal incentives, innovative funding, and public-private partnerships are all supporting grid modernisation efforts, while flexibility in design and operation allows data centres to adapt to regional differences in energy supply, regulation, and demand. While AI has redefined what is possible, it has also redefined what is required behind the scenes. Data centre infrastructure must evolve rapidly - becoming not only larger, but smarter, faster, and greener.
Every connection system and square foot now counts in the race to keep up with exponential demand. For more from HARTING, click here.

Rethinking infrastructure for the AI era
In this exclusive article for DCNN, Jon Abbott, Technologies Director, Global Strategic Clients at Vertiv, explains how the challenge for operators is no longer simply maintaining uptime; it’s adapting infrastructure fast enough to meet the unpredictable, high-intensity demands of AI workloads:

Built for backup, ready for what’s next

Artificial intelligence (AI) is changing how facilities are built, powered, cooled, and secured. The industry is now facing hard questions about whether existing infrastructure, designed for traditional enterprise or cloud loads, can be successfully upgraded to support the pace and intensity of AI-scale deployments. Data centres are being pushed to adapt quickly, and the pressure is mounting from all sides: from soaring power densities to unplanned retrofits, and from tighter build timelines to demands for grid interactivity and physical resilience. What’s clear is that we’ve entered a phase where infrastructure is no longer just about uptime; instead, it’s about responsiveness, integration, and speed.

The new shape of demand

Today’s AI systems don’t scale in neat, predictable increments; they arrive with sharp step-changes in power draw, heat generation, and equipment footprint. Racks that once averaged under 10kW are being replaced by those consuming 30kW, 40kW, or even 80kW - often in concentrated blocks that push traditional cooling systems to their limits. This is a physical problem as much as an electrical one. Heavier and wider AI-optimised racks require new planning for load distribution, cooling system design, and containment. Many facilities are discovering that the margins they once relied on - in structural tolerance, space planning, or energy headroom - have already evaporated.

Cooling strategies, in particular, are under renewed scrutiny. While air cooling continues to serve much of the IT estate, the rise of liquid-cooled AI workloads is accelerating. Rear-door heat exchangers and direct-to-chip cooling systems are no longer reserved for experimental deployments; they are being actively specified for near-term use. Most of these systems do not replace air entirely, but work alongside it. The result is a hybrid cooling environment that demands more precise planning, closer system integration, and a shift in maintenance thinking.
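To make those rack-density figures more concrete, the sketch below estimates the coolant flow a direct-to-chip loop would need to carry away a given heat load, using the standard Q = ṁ·c_p·ΔT relationship. The water coolant, the 10 K temperature rise, and the coolant_flow_lpm helper are illustrative assumptions, not Vertiv specifications.

```python
def coolant_flow_lpm(heat_kw: float, delta_t_k: float = 10.0,
                     cp_kj_per_kg_k: float = 4.18,
                     density_kg_per_l: float = 1.0) -> float:
    """Litres per minute of coolant needed to remove `heat_kw` of heat.

    Uses Q = m_dot * c_p * delta_T, assuming water and a 10 K rise across
    the loop; glycol mixes and real secondary networks will differ.
    """
    mass_flow_kg_s = heat_kw / (cp_kj_per_kg_k * delta_t_k)
    return mass_flow_kg_s / density_kg_per_l * 60.0

# Rough flow requirements for the rack densities mentioned above
for rack_kw in (10, 30, 40, 80):
    print(f"{rack_kw:>2} kW rack -> ~{coolant_flow_lpm(rack_kw):.0f} L/min at a 10 K rise")
```

The point is scale: each step up in rack density brings a proportional increase in coolant flow, pumping, and pipework that has to be planned alongside the electrical upgrade - one reason the secondary fluid networks discussed later in this piece warrant the same rigour as the power chain.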
Deployment cycles are falling behind

One of the most critical tensions AI introduces is the mismatch between innovation cycles and infrastructure timelines. AI models evolve in months, but data centres are typically built over years. This gap creates mounting pressure on procurement, engineering, and operations teams to find faster, lower-risk deployment models. As a result, there is increasing demand for prefabricated and modular systems that can be installed quickly, integrated smoothly, and scaled with less disruption. These approaches are not being adopted to reduce cost; they are being adopted to save time and to de-risk complex commissioning across mechanical and electrical systems. Integrated uninterruptible power supply (UPS) and power distribution units, factory-tested cooling modules, and intelligent control systems are all helping operators compress build timelines while maintaining performance and compliance. Where operators once sought redundancy above all, they are now prioritising responsiveness as well as the ability to flex infrastructure around changing workload patterns.

Security matters more when the stakes rise

AI infrastructure is expensive, energy-intensive, and often tied to commercially sensitive operations. That puts physical security firmly back on the agenda - not only for hyperscale operators, but also for enterprise and colocation facilities managing high-value compute assets. Modern data centres are now adopting a more layered approach to physical security. It begins with perimeter control, but extends through smart rack-level locking systems, biometric or multi-factor authentication, and role-based access segmentation. For some facilities - especially those serving AI training operations - real-time surveillance and environmental alerting are being integrated directly into operational platforms. The aim is to reduce blind spots between security and infrastructure and to help identify risks before they interrupt service.

The invisible fragility of hybrid environments

One of the emerging risks in AI-scale facilities is the unintended fragility created by multiple overlapping systems. Cooling loops, power chains, telemetry platforms, and asset tracking tools all work in parallel, but without careful integration, they can fail to provide a coherent operational picture. Hybrid cooling systems may introduce new points of failure that are not always visible to standard monitoring tools. Secondary fluid networks, for instance, must be managed with the same criticality as power infrastructure. If overlooked, they can become weak points in otherwise well-architected environments. Likewise, inconsistent commissioning between systems can lead to drift, incompatibility, and inefficiency. These challenges are prompting many operators to invest in more integrated control platforms that span thermal, electrical, and digital infrastructure. The goal is to be able both to see issues and to act on them quickly - to re-balance loads, adapt cooling, or respond to anomalies in real time.

Power systems are evolving too

As compute densities rise, so too does energy consumption. Operators are looking at how backup systems can do more than sit idle: UPS fleets are being turned into grid-support assets, and demand response and peak shaving programmes are becoming part of energy strategy. Many data centres are now exploring microgrid models that incorporate renewables, fuel cells, or energy storage to offset demand and reduce reliance on volatile grid supply. What all of this reflects is a shift in mindset. Infrastructure is no longer a fixed investment; it is a dynamic capability - one that must scale, flex, and adapt in real time. Operators who understand this are best placed to succeed in a fast-moving environment.

From resilience to responsiveness

The old model of data centre resilience was built around failover and redundancy. Today, resilience also means responsiveness: the ability to handle unexpected load spikes, adjust cooling to new workloads, maintain uptime under tighter energy constraints, and secure physical systems across more fluid operating environments. This shift is already reshaping how data centres are specified, how vendors are selected, and how operators evaluate return on infrastructure investment. What once might have been designed in isolated disciplines - cooling, power, controls, access - is now being engineered as part of a joined-up, system-level operational architecture. Intelligent data centres are not defined by their scale, but by their ability to stay ahead of what’s coming next. For more from Vertiv, click here.

Huber+Suhner launches SYNCRO
Huber+Suhner, a Swiss fibre optic cable manufacturer, has introduced its new SYNCRO family, an integrated, modular timing and Global Navigation Satellite System (GNSS) distribution portfolio, designed to simplify optical timing integration for data centre operators. Precise time synchronisation, accurate to within nanoseconds, underpins critical services such as global trade, telecommunications, navigation, and scientific measurement. The SYNCRO system aims to enable operators to integrate optical timing into existing fibre infrastructure, improving performance and reducing the cost and complexity associated with coaxial cabling.

Modular design for reliable, scalable synchronisation

The SYNCRO portfolio seeks to extend transmission distances, reduce the number of GNSS antennas required, and minimise the limitations of traditional cabling. It builds on Huber+Suhner’s earlier GNSS and Power-over-Fibre (PoF) technologies to deliver precise time synchronisation while maintaining nanosecond accuracy across a network. PoF allows optical fibre to transmit both timing signals and electrical power to remote antenna assemblies, removing the need for separate cabling or rooftop power connections. This enables operators to use existing fibre networks to deliver GNSS signals and centrally managed power to antenna locations.

Dominik Tibolla, Product Manager at Huber+Suhner, says, “The increasing computing requirements driven by digitalisation, particularly in cloud computing and artificial intelligence, mean that data centre operators must expand capacity efficiently and securely.

“SYNCRO has been developed to help operators scale their infrastructure, enhance monitoring, and ensure high levels of reliability and redundancy.”

Details of the new range

The SYNCRO range is available in three configurations to meet different operational needs:

• SYNCRO Max - offering PoF capability, signal expansion, monitoring, and redundancy for demanding environments
• SYNCRO Eco - providing signal expansion and monitoring without PoF
• SYNCRO Mini - for applications that do not require PoF or redundancy, while maintaining monitoring and expansion functions

According to Huber+Suhner, moving timing distribution onto fibre eliminates many installation constraints and simplifies planning. The plug-and-play design, the company asserts, removes the transmission distance limits associated with coaxial cabling, reduces the need for reinforced ducting or extensive grounding, and supports secure, long-distance connections between antennas and receivers.

Dominik continues, “SYNCRO gives operators a reliable, cost-effective timing solution that consolidates GNSS antennas and simplifies deployment. This allows infrastructure budgets to be reallocated to higher-value projects while maintaining precise, resilient synchronisation across data centre operations.”

The SYNCRO family will be presented at booth 29 at the International Timing and Sync Forum in Prague, Czech Republic, from 27-30 October. For more from Huber+Suhner, click here.
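As a footnote on why nanosecond-level timing over fibre still needs careful link engineering, the sketch below computes the one-way propagation delay of a fibre run from its length. The group index of roughly 1.468 is a typical value for standard single-mode fibre and the fibre_delay_ns helper is a generic illustration; neither is a SYNCRO specification.

```python
SPEED_OF_LIGHT_M_PER_S = 299_792_458
GROUP_INDEX_SMF = 1.468  # typical group index for standard single-mode fibre (assumed)

def fibre_delay_ns(length_m: float, group_index: float = GROUP_INDEX_SMF) -> float:
    """One-way propagation delay, in nanoseconds, of a fibre run of length_m metres."""
    return length_m * group_index / SPEED_OF_LIGHT_M_PER_S * 1e9

# Every metre of fibre adds roughly 5 ns, so antenna-to-receiver runs of tens or
# hundreds of metres must be measured and compensated to hold nanosecond accuracy.
for run_m in (1, 100, 500, 2000):
    print(f"{run_m:>4} m of fibre -> ~{fibre_delay_ns(run_m):,.0f} ns one-way delay")
```

This is why timing distribution systems generally calibrate or compensate per-link delay rather than relying on nominal cable length alone.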

Telehouse breaks ground on new London data centre
Telehouse International Corporation of Europe, a global data centre service provider and subsidiary of telecommunications company KDDI Corporation, has today broken ground on the new Telehouse West Two data centre at its existing London Docklands campus - which Telehouse says is the most connected data centre campus in Europe. The £275m investment in the new data centre is set for completion in 2028. Flynn Management & Contracting, an international construction and fit-out company, will work with Jones Engineering Group, specialists in mechanical, electrical, and fire protection, to deliver the project.

Strategically located close to London’s financial district, the new facility will be purpose-built to support the rapid adoption of emerging technologies such as AI. It will integrate both air and liquid cooling technologies to meet growing demand for high-density compute environments, allowing customers to scale without compromise, and will offer two meet-me rooms and four dedicated secure connectivity risers.

Designed with sustainability, resiliency, and security at its core, Telehouse West Two will offer uptime guarantees of 99.999% to ensure uninterrupted operations for customers. The data centre has been designed to BREEAM Excellent standards, indicating a high level of environmental performance against a widely recognised sustainability assessment for buildings, and 100% renewable energy will power its operations. The facility will deliver exceptional efficiency with very low WUE and PUE, while also supporting sustainability objectives with heat recovery potential and HVO-fuelled backup generators.

The new data centre represents a significant step in Telehouse’s long-term strategy to expand in London, where it has maintained a presence for more than 35 years. This latest development strengthens Telehouse’s global growth trajectory, meeting the rising demand for advanced digital infrastructure and empowering enterprises to accelerate their digital transformation.

Kenkichi Honda, Managing Director at Telehouse Europe, says, “The new Telehouse West Two site marks another important step in our ongoing mission to deliver world-class, sustainable digital infrastructure. This expansion will empower digital transformation for enterprise clients across multiple sectors, enabling them to benefit from emerging technologies which are shaping the future world, while supporting the uncompromising need for energy efficiency and carbon neutrality.”

The nine-storey building will cover a total gross area of 32,000m², including 11,292m² of white space across six levels, plus on-floor customer storage and plant areas. With flexibility central to the design, the layout will incorporate associated switchgear, UPS systems, chilled water cooling, and a floor-by-floor ventilation plant to support a wide range of customer requirements. The site will be powered by two new 132kV substations providing 11kV across the wider campus, enabling an overall building capacity of 33MW, and each floor will be capable of delivering up to 4.4 megawatts (MW) of power capacity, ensuring the resilience and scalability required for future growth.

To ensure the highest levels of protection, the facility will be equipped with multi-layered physical security and advanced threat detection, including 24/7 surveillance, on-site security personnel, and real-time incident response protocols.
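For a sense of scale, here is a quick back-of-envelope pass over the published figures; the per-level split and implied power density are a reader’s inferences from the numbers quoted above, not Telehouse metrics.

```python
# Rough reading of the published Telehouse West Two figures.
# All derived values are approximate inferences, not Telehouse statements.

white_space_m2 = 11_292      # total white space across six levels
levels = 6
per_floor_mw = 4.4           # stated per-floor power capacity
building_capacity_mw = 33.0  # overall capacity enabled by the two 132 kV substations

floor_area_m2 = white_space_m2 / levels
floors_total_mw = per_floor_mw * levels
density_kw_per_m2 = per_floor_mw * 1000 / floor_area_m2

print(f"White space per level:     ~{floor_area_m2:,.0f} m²")
print(f"Sum of per-floor capacity: ~{floors_total_mw:.1f} MW of the {building_capacity_mw:.0f} MW building total")
print(f"Implied power density:     ~{density_kw_per_m2:.1f} kW per m² of white space")
```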
Kevin Flynn, CEO of Flynn Management & Contracting, comments, “We’re proud to have once again been appointed by Telehouse for a new building project. Our team is ready to deliver on the company’s vision for a data centre that meets the ever-growing digital needs of companies, and it’s a great opportunity for us to further enhance our presence in the capital’s data centre market.”

Brendan McAtamney, Group Director, Jones Engineering Group, adds, “We’re delighted to work with Flynn Management & Contracting on the engineering and installation of services to Telehouse West Two. This is a fantastic opportunity for us to work on a major data centre project in the heart of London, and we’re looking forward to getting started.”

For more from Telehouse, click here.

Pressing challenges impacting the future of US data centres
In this exclusive article for DCNN, Matt Coffel, Chief Commercial and Innovation Officer at Mission Critical Group (MCG), explores how growth in the US data centre market is being affected by increasing challenges, demanding new levels of collaboration and innovation across the sector:

An increasingly omnipresent industry

Data centres and related facilities are everywhere today. Synergy Research Group states hyperscalers account for 44% of those facilities worldwide, while non-hyperscale colocation and on-premise account for 22% and 34% respectively. They also project that by 2030, hyperscalers will account for 61% of all data centres and related facilities. While there are no definitive estimates on how many of these facilities will be constructed in the US in the coming years, planning and development in the country are happening faster than ever before. Take, for example, the recent developments in Pennsylvania regarding investment in data centres and other technology infrastructure to support AI, including significant investments from Amazon and CoreWeave.

With AI adoption surging and data generation accelerating in sectors like healthcare, financial services, and the federal government, the number of data centres is only set to grow. But as demand rises, so too do the obstacles. Data centre operators and their partners face mounting challenges that threaten timelines, drive up costs, and complicate efforts to scale efficiently - and there are no easy fixes.

Persistent and critical challenges

When it comes to the construction of a data centre, there are many factors to consider. However, four factors have emerged as critical challenges for data centre operators and their partners: permitting, power, skilled talent, and compute.

1. Regulation: Securing permits to build and operate

Even as demand for compute and power accelerates, operators are forced to navigate lengthy and often inconsistent approval processes. In some jurisdictions, permitting can take two to three years before projects can even break ground. These delays put US developers at a disadvantage compared to countries with more streamlined regulatory systems. The challenge is compounded by the sheer scale of today’s facilities. Projects promising multiple gigawatts of capacity require not only land and power, but also regulatory sign-off on issues such as environmental impact, emissions, and noise. These reviews often involve multiple parties - utilities, consultants, environmental specialists, and local governments - which makes coordination slow and uncertain. Moreover, the difficulty varies widely by location. In cities like Austin in Texas, approvals can be tough to secure, while just miles outside the city limits, the process may move much faster.

2. Time to power

According to the International Energy Agency’s (IEA) Energy and AI report, “Power consumption by data centres is on course to account for almost half of the growth in electricity demand between now and 2030 in the US.” This rising demand is evident in Northern Virginia, where a large cluster of data centres has been built over the past twenty years. With this cluster of data centres in a single area, along with the demands from AI and data processing loads, power substations have reached maximum capacity.
This has forced utility providers and data centre operators to either bring in new lines from distant locations - where there is excess power, but transmission infrastructure is lacking - or to build new data centres in rural areas so they can access untapped power. Yet building new transmission lines from other locations or setting up data centres there can take years and doesn’t address the current power demand.

3. Access to skilled talent to support current and future projects

Data centre operators and their partners are working to build new facilities across the US, often in remote parts such as Western Texas, where they can access untapped power sources. Building in these areas introduces several challenges related to skilled labour. Building and maintaining a data centre requires highly skilled electricians, mechanics, and controls specialists who can handle complex electrical and mechanical systems and, often, on-site power generation. However, the US faces a nationwide shortage of these workers. The US Bureau of Labor Statistics forecasts that employment for electrical workers will grow by 11% from 2023 to 2033 - a much faster rate than the average for all jobs. Still, many electricians are nearing retirement and set to leave the field in the coming years. This is likely to create a gap that operators will find difficult to fill as they work to build and keep their facilities running. Additionally, data centre operators and their partners face the reality that many skilled workers are unwilling to live far from population centres.

Recent estimates from Goldman Sachs Research underscore the scale of this challenge. It projects that the US will require 207,000 more transmission and interconnection workers and 300,000 extra jobs in power technologies, manufacturing, construction, and operations to support the additional power consumption needs projected for the US by 2030. This dual challenge of labour scarcity and logistical complexity is making traditional, on-site construction methods increasingly untenable. As a result, the industry is pivoting towards prefabricated, modular power solutions that are engineered and assembled in a controlled factory environment. This approach mitigates the impact of localised labour shortages by capitalising on a centralised, highly skilled workforce and deploying nearly complete, pre-tested power modules to the remote data centre location for rapid and simplified final installation.

4. The accelerating pace of change in compute technology

The speed at which compute technology is evolving has reached an unprecedented level, putting enormous pressure on data centre operators and their partners. Moore’s Law is no longer the standard; today’s compute configurations are far more advanced than ever before, with denser platforms being released every 12 to 18 months. This rapid cycle forces operators to rethink how they design and future-proof facilities - leveraging concepts such as modularisation - as infrastructure built just a few years ago can quickly fall behind.

The need for collaboration

Each of these challenges is significant on its own, but together they mark one of the most complex periods in the history of infrastructure development. To move forward, data centre operators, utilities, manufacturers, technology providers, and government agencies must work closely to identify solutions and provide support for each obstacle. On the skilled labour front, companies outside the manufacturing space are also contributing.
Earlier this year, Google pledged support to train 100,000 electrical workers and 30,000 new apprentices in the US. This funding was awarded to the electrical training ALLIANCE (etA), the largest apprenticeship and training program of its kind, founded by the International Brotherhood of Electrical Workers and the National Electrical Contractors Association (NECA). State leaders are playing a role as well. In Pennsylvania this summer, for example, the governor and other legislators demonstrated strong support for data centre growth.

MCG, a manufacturer and integrator of power and electrical systems, is one example of how industry players are stepping up. MCG designs, manufactures, delivers, and services systems tailored for data centre operators and other mission-critical environments. In collaboration with operators and other equipment and technology providers, MCG produces modular power systems that are built off-site to ease workforce constraints. These systems are then delivered directly to data centres or their power facilities, where the MCG team commissions and maintains them.

With efforts from government officials, companies like MCG and Google, and other stakeholders, the US data centre industry can continue powering the digital future - no matter how much demand for power and compute increases. For more from Mission Critical Group, click here.

AI infrastructure as Trojan horses for climate infrastructure
Data centres are getting bigger, denser, and more power-hungry than ever before. The rapid rise of artificial intelligence (AI) is accelerating this expansion, driving one of the largest capital build-outs of our time. Left unchecked, hyperscale growth could deepen strains on energy, water, and land - while concentrating economic benefits in just a few regions. But this trajectory isn’t inevitable.

In this whitepaper, Shilpika Gautam, CEO and founder of Opna, explores how shifting from training-centric hyperscale facilities to inference-first, modular, and distributed data centres can align AI’s growth with climate resilience and community prosperity. The paper examines:

• How right-sized, locally integrated data centres can anchor clean energy projects and strengthen grids through flexible demand
• Opportunities to embed circularity by reusing waste heat and water, and to drive demand for low-carbon materials and carbon removal
• The need for transparency, contextual siting, and community accountability to ensure measurable, lasting benefits

Decentralised compute decentralises power. By embracing modular, inference-first design, AI infrastructure can become a force for both planetary sustainability and shared prosperity. You can download the whitepaper for yourself by clicking this link.

Why resilient cooling systems are critical to reliability
In this exclusive article for DCNN, Dean Oliver, Area Sales Manager Commercial (South and London areas) at Spirotech, explores why uninterrupted operation of data centres - 24 hours a day, 365 days a year - is no longer optional, but essential. To achieve this, he believes robust backup systems, advanced infrastructure, and precision cooling are fundamental:

The importance of data

In today’s digitally driven economy, data is the backbone of intelligent business decisions. From individuals and startups to multinational corporations and financial institutions, the protection of personal and commercial information is more vital than ever. The internet sparked a technological revolution that has continued to accelerate - ushering in innovations like cryptocurrencies and, more recently, the powerful rise of artificial intelligence (AI). While these developments are groundbreaking, they also highlight the need for caution and infrastructure readiness.

For most users, the importance of data centres only becomes clear when systems fail. A 30-minute outage can bring parts of the economy to a halt. If banks can’t process transactions, the consequences are immediate and widespread. Data breaches can have a significant impact on businesses, both operationally and financially. This year alone, several high-profile companies have been targeted. Marks & Spencer, for example, reportedly suffered losses of around £300 million over a six-week period following a cyberattack. These and other companies affected by such problems underline just how dependent our society is on digital infrastructure. Cyberattacks, like denial-of-service (DoS) assaults, are a real and growing threat. But even without malicious intent, data centres must operate flawlessly, with zero downtime. Central to this is thermal management, including cooling systems that maintain optimal conditions to prevent system failure.

Why cooling is key

Data centres generate significant heat due to dense arrays of servers and network hardware. If temperatures are not precisely controlled, systems risk shutdown, data corruption, or permanent loss - an unacceptable risk for any organisation. Cooling solutions are mission-critical. Given the security and performance demands on data centres, there’s no room for error. Cutting corners to save on cost can have catastrophic consequences. That’s why careful planning at the design stage is essential. This should factor in redundancy for all key components: chillers, pumps, pressurisation units, and more. Communication links between these systems must also be integrated to ensure coordinated operation.

The equation is simple: the more computing power you deploy, the greater the cooling demand. Cloud infrastructure consumes enormous amounts of energy and space, requiring tens of megawatts of power and covering thousands of square metres. If the cooling system fails - whether from chiller malfunction or control breakdown - data loss on a massive scale becomes a very real possibility. That’s why backup systems must be immediately responsive, guaranteeing continued operation under any condition.

Keeping systems operating

Today, there are innovative control systems available, like those offered by Spirotech, that offer detailed insights into system performance and capture operational data from pumps, valves, pressurisation units, and vacuum degassers. This enables early detection of potential issues and provides trend analysis to support proactive maintenance.
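As a generic illustration of this kind of threshold-based early warning - and not a representation of Spirotech’s own controls - the sketch below flags pressurisation readings that drift outside an assumed safe band, the sort of drift that can point to air ingress or over-pressure.

```python
from dataclasses import dataclass

@dataclass
class PressureReading:
    timestamp: str
    bar: float

# Hypothetical safe operating band for a sealed cooling circuit (assumed values)
MIN_BAR = 1.0  # below this, negative pressure risks drawing air in via vents and seals
MAX_BAR = 2.5  # above this, the system may discharge water and need frequent top-ups

def pressure_alerts(readings):
    """Return alert messages for any readings outside the configured band."""
    alerts = []
    for r in readings:
        if r.bar < MIN_BAR:
            alerts.append(f"{r.timestamp}: LOW pressure {r.bar:.2f} bar - possible air ingress")
        elif r.bar > MAX_BAR:
            alerts.append(f"{r.timestamp}: HIGH pressure {r.bar:.2f} bar - risk of discharge and top-ups")
    return alerts

sample = [PressureReading("08:00", 1.6), PressureReading("08:15", 0.8), PressureReading("08:30", 2.7)]
for alert in pressure_alerts(sample):
    print(alert)  # in practice, alerts would be dispatched to on-call personnel
```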
In practice, vacuum degassers can show how much air has been removed over time, while pressurisation units monitor pressure levels, leak events, and top-up activity. These systems work in tandem, ensuring balance and continuity. If a fault occurs, alerts are instantly dispatched to relevant personnel. A poorly designed or maintained pressurisation system can result in negative pressure, leading to air ingress via vents and seals - or, conversely, excessive pressure that causes water discharge and frequent refills. Air and dirt separators are also crucial to system health, preventing build-up and ensuring smooth operation across all pipework and components.

Conclusion

Effective cooling is essential for data centre systems due to the high demands on security and performance; there’s no tolerance for failure. Inadequate or poorly designed cooling can lead to disastrous outcomes, including potential large-scale data loss. To prevent this, detailed planning during the design phase is crucial. This includes building in redundancy for all major components like chillers, pumps, and pressurisation units, and ensuring these systems can communicate and function together reliably. As computing capacity increases, so does the need for robust cooling. Modern cloud infrastructure uses vast amounts of power and physical space, placing even greater stress on cooling requirements. Therefore, backup systems must be fast-acting and fully capable of maintaining continuous operation to avoid downtime and protect data integrity, regardless of any component failures. For more from Spirotech, click here.

Reuters Energy LIVE is fast approaching
Reuters Events: Energy LIVE is heading to the heart of the energy capital in Houston, Texas, USA, this 9-10 December. Momentum is building, with more than 1,500 energy professionals - including bp, Constellation, Entergy, ENGIE, Next Decade, and more - already registered.

Register now to explore solutions from:

• Digital transformation leaders and AI innovators
• Infrastructure developers and data centre operators
• Project developers across renewables, hydrogen, and LNG
• SMR and advanced reactor specialists
• Energy storage providers and grid modernisation experts

Your free expo pass gives you access to:

• Startup pitches judged by top VCs, with live audience voting
• Roundtable discussions on industry-critical topics
• Structured networking, including 1:1 meetings, community meetups, and speed dating sessions
• Themed exhibition tours spotlighting AI, digital, and emerging tech

There will also be live podcasts and expo stage sessions featuring senior leaders from Dominion Energy, Sempra Infrastructure, Petrobras America, Woodside Energy, POET, J.P. Morgan, Siemens, Halliburton, Mitsubishi Power, and Breakthrough Energy.

Join a community of over 3,000 attendees and more than 100 exhibitors, innovators, investors, solution providers, venture capitalists, energy producers, and project developers for two days of high-impact networking, discovery, and insight. Register for your free pass and start your Energy LIVE journey today. For more from Reuters, click here.

London's data centres could heat 500,000 homes
According to a new report from global infrastructure company AECOM, London’s data centres are releasing enough waste heat to warm up to half a million homes each year, yet much of this potential energy is being lost to the atmosphere. Commissioned by the Greater London Authority (GLA) and conducted in partnership with asset management and commercial consultants HermeticaBlack, the study reveals that up to 1.6 terawatt-hours of heat could be recovered each year from the capital’s data centre estate - equivalent to meeting the heating and hot water needs of every home in Ealing.

The report - Optimising Data Centres in London: Heat Reuse - identifies opportunities to adjust planning and infrastructure policy to unlock this potential for London and sets out recommendations including updated planning guidance, targeted infrastructure incentives, and a standardised framework for activating heat offtake from data centre operators. This includes making sure the designs for all future data centres optimise the ability to reuse waste heat.

The potential of heat recovery

The uptake of heat recovery in London is currently limited, but AECOM’s report identified cities around the world, including Geneva, that are utilising as much as 95% of the heat recovered from a data centre. The infrastructure consultant says that UK cities, including London, have an opportunity to heat new homes with clean, affordable energy. Based on the quantity of heat currently being lost, the report estimates there is the potential to heat up to half a million homes. When this model was tested across London’s data centre dataset, it showed the network could provide enough heat to supply around 350,000 homes.

With more than one in eight London households in fuel poverty, and the UK still heavily reliant on gas boilers for home heating, the report highlights the social as well as environmental case for change. Data centres - often located in densely populated parts of East and West London - offer a local, low-carbon source of heat for nearby homes, schools, and public buildings.

The added value of data centres

Data centres are critical to catering for the increasing demand for AI and high-performance computing. The computing power required generates higher server temperatures, creating higher-grade waste heat that is more viable for reuse.

Asad Kwaja, Associate Director, Sustainability & Decarbonisation Advisory at AECOM, says, “The UK needs complex digital infrastructure to enable its ambitions to become a leader in AI.

“Data centres lie at the heart of this conversation, but we must consider their wider use if they are going to play an integral part of the UK’s infrastructure landscape. Data centres should no longer be considered as just an energy consumer; they can become a part of the whole energy ecosystem.

“London is one of the biggest data centre hubs across Europe, the Middle East, and Africa, and hosts 80% of the UK’s capacity. With the right planning, coordination, and investment, London’s data centres could play a pivotal role in decarbonising the heat needed to power the influx of new homes the capital needs to build to address the housing crisis, while also cutting bills for existing residents and improving local energy resilience.”

A scheme to capture the waste heat from data centres is already underway in North West London.
In 2023, the Old Oak and Park Royal Development Corporation (OPDC) secured £36 million in funding from the government to deliver a heat network, developed by AECOM, that will supply 95 gigawatt-hours of heat annually by recovering it from up to three data centres.

DC BLOX to expand Myrtle Beach landing station
DC BLOX, a provider of connected data centre and fibre networks, has announced the planned expansion of its Myrtle Beach cable landing station in South Carolina, USA. The company has acquired approximately 20 acres of adjacent land within the Myrtle Beach International Technology and Aerospace Park (ITAP), with the potential to accommodate up to five additional subsea cables and an additional 20MW of power from the current on-site substation.

The Myrtle Beach cable landing station (MYR1) opened in 2023 and was developed to provide a resilient international communications gateway for subsea cable access into the southeastern US from western Europe, South America, the Caribbean, and Africa. MYR1 is the largest facility of its kind on the Eastern Seaboard. MYR2 will complement existing subsea cables already landing in Myrtle Beach (including Firmina, Anjana, and Nuvem), enhancing the region’s role in connecting the US with the world.

Expanding existing connectivity

Jeff Wabik, Chief Technology Officer at DC BLOX, says, “Demand for landing subsea cables in Myrtle Beach has been extraordinary, and the rapid addition of new carrier partners into MYR1 has significantly enhanced the facility’s connectivity ecosystem.

“By preparing for MYR2, DC BLOX is enabling new digital infrastructure development across the region by global hyperscale companies and ensuring continued growth of the Southeast’s digital economy.”

Sandy Davis, Myrtle Beach Regional EDC President & CEO, comments, “The continued growth of DC BLOX in our community is the vision presented by their leadership in 2021.

“DC BLOX is an extraordinary company committed to providing technology services and community partnerships as promised. We are excited to have DC BLOX expand in Horry County and to house the largest facility of its kind on the Eastern Seaboard in our county.”

Pending additional demand, the new MYR2 facility would be built adjacent to MYR1 within ITAP, a site that offers a solid coastal location for subsea systems. Once completed, the two facilities combined would support up to ten subsea cables, strengthening international connectivity and advancing Myrtle Beach’s position as a global cable landing destination. For more from DC BLOX, click here.


