Data Centre Architecture Insights & Best Practices


Expert cleaning for critical environments
IT Cleaning is the UK’s trusted authority in specialist IT and technical cleaning to ISO 14644-1:2022 Class 8, delivering expert services where precision is critical and failure is not an option. From data centres and data halls to server rooms and comms rooms, the company protects vital infrastructure with meticulous, industry-approved cleaning solutions. Every service is carried out by highly trained technicians using advanced anti-static methods, designed to safeguard sensitive equipment and reduce operational risk. With minimal disruption and maximum attention to detail, IT Cleaning ensures technology environments remain clean, compliant, and performance-ready. Operating nationwide, the company cleans for organisations that demand absolute reliability, strict compliance, and exceptional standards. Its reputation is built on technical expertise, consistent delivery, and a no-compromise approach to quality. For businesses that depend on uninterrupted IT performance, IT Cleaning is the specialist cleaning partner of choice. Click here to visit the company's website and find out more.

Thorn, Zumtobel to exhibit at Data Centre World
Thorn and Zumtobel, both lighting brands of the Zumtobel Group, are to present a "unified approach" to data centre lighting at Data Centre World 2026. The companies say the focus will be on three operational priorities for data centre operators and delivery teams: reduced energy consumption, reliable operation, and consistent control across white space, plant, circulation, and perimeter areas.

The stand will outline how a coordinated lighting and controls strategy can support specification, installation, and ongoing operation across different data centre environments. The Zumtobel Group says its approach is intended to support consistency across projects, while also simplifying long-term maintenance and operational management.

Lighting controls for data centres

A central element of the stand will be the use of the LITECOM control platform, which is presented as a way to connect a defined portfolio of luminaires across different zones of a data centre. The companies say this is intended to support scheduling, presence detection, daylight strategies, scene setting, and portfolio standardisation.

The stand will also feature TECTON II, shown as part of a continuous-row lighting infrastructure approach, which is designed to support rapid, tool-free assembly and future adaptation. Lighting applications on show will cover white space, technical areas, offices, and exterior zones.

Products listed for demonstration include:
• Thorn: Aquaforce Pro, ForceLED, Piazza, Omega Pro 2, IQ Beam
• Zumtobel: IZURA, TECTON II, MELLOW LIGHT, AMPHIBIA, LANOS

All are shown as being controlled via LITECOM.

The stand design itself is intended to reflect the Zumtobel Group's stated sustainability principles, using reused and modular components from previous events, with minimal new-build elements. In addition, graphics have been consolidated to reduce printing and waste.

Neil Raithatha, Head of Marketing, Thorn and Zumtobel Lighting UK & Ireland, notes, “Data centre customers need lighting that is consistent, efficient, and straightforward to manage.

“Our presentation this year brings together proven luminaires with a control platform that helps project teams deliver quickly and run reliably, from the white space to the perimeter.”

Thorn and Zumtobel will be exhibiting at Stand F140 at ExCeL London on 4–5 March 2026.

For more from Thorn and Zumtobel, click here.

Data centre waste heat could warm millions of UK homes
New analysis from EnergiRaven, a UK provider of energy management software, and Viegand Maagøe, a Danish sustainability and ESG consultancy, suggests that waste heat from the next generation of UK data centres could be used to heat more than 3.5 million homes by 2035, provided the necessary heat network infrastructure is developed.

The research estimates that projected growth in data centres could generate enough recoverable heat to supply between 3.5 million and 6.3 million homes, depending on data centre design efficiency and other technical factors. The report argues that without investment in large-scale heat network infrastructure, much of this heat will be lost.

The study highlights a risk that the UK will expand data centre and AI infrastructure without making use of the waste heat produced, missing an opportunity to reduce household energy costs and improve energy resilience.

“Our national grid will be powering these data centres - it’s madness to invest in the additional power these facilities will need and waste so much of it as unused heat, driving up costs for taxpayers and bill payers,” argues Simon Kerr, Head of Heat Networks at EnergiRaven. “Microsoft has said it wants its data centres to be ‘good neighbours’ - giving heat back to their communities should be an obvious first step.”

Regional opportunities and proximity to housing

The research points to examples where data centres are located close to both new housing developments and areas affected by fuel poverty. Around Greater Manchester, for example, 15,000 homes are planned in the Victoria North development, with a further 14,000 to 20,000 planned in Adlington. The area also includes more than a dozen existing data centres, with additional facilities planned.

According to the analysis, these sites could potentially supply heat to nearby new housing, reducing the need for individual gas boilers and supporting lower-carbon heating. Moreover, the study maps how similar patterns could be replicated across the UK, linking waste heat sources with residential demand through heat networks.

Using waste heat for space heating is common in parts of northern Europe, particularly in Nordic countries. There, waste heat from sources such as data centres, power plants, incinerators, and sewage treatment facilities is often connected to district heat networks, supplying homes via heat interface units instead of individual boilers.

In the UK, a number of cities have been designated as Heat Network Zones, where heat networks have been identified as a lower-cost, low-carbon heating option. From 2026, Ofgem will take over regulation of heat networks and new technical standards will be introduced through the Heat Network Technical Assurance Scheme, aimed at improving consumer and investor confidence.

Heat networks, regulation, and policy context

The Warm Homes Plan includes a target to double the proportion of heat demand met by heat networks in England to 7% by 2035, with longer-term ambitions for heat networks to supply around 20% of heat by 2050. The plan also includes funding support for heat network development.

However, Simon argues that current policy does not fully reflect the scale of opportunity from large waste heat sources, continuing, “Current policy in the UK is nudging us towards a patchwork of small networks that might connect heat from a single source to a single housing development. If we continue down this road, we will end up with cherry-picking and small, private monopolies, rather than national infrastructure that can take advantage of the full scale of waste heat sources around the country.

“We know that investment in heat networks and thermal infrastructure consistently drives bills down over time and delivers reliable carbon savings, but these projects require long-term finance. Government-backed low-interest loans, pension fund investment, and institutions such as GB Energy all have a role to play in bridging this gap, as does proactivity from local governments, who can take vital first steps by joining forces to map out potential networks and start laying the groundwork with feasibility studies.”

Peter Maagøe Petersen, Director and Partner at Viegand Maagøe, adds, “We should see waste heat as a national opportunity. In addition to heating homes, heat highways can also reduce strain on the electricity grid and act as a large thermal battery, allowing renewables to keep operating even when usage is low and reducing reliance on imported fossil fuels.

“As this data shows, the UK has all the pieces it needs to start taking advantage of waste heat - it just needs to join them together. With denser cities than its Nordic neighbours and a wealth of waste heat on the horizon, the UK is a fantastic place for heat networks. It needs to start focusing on heat as much as it does electricity - not just for lower bills, but for future jobs and energy security.”
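To make the arithmetic behind a "homes heated" figure concrete, the sketch below converts a running IT load into an annual number of homes served. The recovery fraction, network losses, and per-home heat demand are illustrative assumptions for the sketch, not figures from the EnergiRaven and Viegand Maagøe analysis.

```python
# Back-of-envelope sketch of the "homes heated" arithmetic. All parameter values
# here are illustrative assumptions, not figures from the EnergiRaven / Viegand
# Maagoe analysis.

HOURS_PER_YEAR = 8760

def homes_heatable(it_load_mw: float,
                   heat_recovery_fraction: float = 0.7,    # share of electrical input recoverable as useful heat (assumed)
                   network_loss_fraction: float = 0.1,     # losses in the district heat network (assumed)
                   home_heat_demand_kwh: float = 12_000) -> float:  # annual heat demand per home (assumed)
    """Estimate how many homes a fleet's recoverable waste heat could supply in a year."""
    annual_heat_mwh = (it_load_mw * HOURS_PER_YEAR
                       * heat_recovery_fraction
                       * (1 - network_loss_fraction))
    return annual_heat_mwh * 1_000 / home_heat_demand_kwh

# Example: a continuously running 1 GW fleet under these assumptions
print(f"{homes_heatable(1_000):,.0f} homes")   # roughly 460,000 homes
```

On these assumptions, serving 3.5 million homes would need roughly 7 to 8 GW of continuously running IT load, which is why the report ties its 3.5 to 6.3 million range to data centre design efficiency and to how much of the heat the network infrastructure can actually capture.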

PFlow highlights VRC use in data centres
PFlow Industries, a US manufacturer of vertical reciprocating conveyor (VRC) technology, has highlighted the use of VRCs within data centre environments, focusing on its M Series and F Series systems.

VRCs are typically incorporated during the design phase of a facility to support vertical movement of equipment and materials. PFlow states that, compared with conventional lifting approaches, VRCs can be integrated with automated material handling systems and used for intermittent or continuous operation, with routine maintenance requirements kept relatively low.

M Series and F Series applications

The PFlow M Series 2-Post Mechanical Material Lift is designed for higher-cycle environments where frequent vertical movement of equipment is required. The company says the system can handle loads of up to 10,000lb (4,536kg) and operate across multiple floor levels. Standard travel speed is stated as 25 feet (7.62 metres) per minute, with higher speeds available depending on configuration. The M Series is designed for transporting items such as servers, racks, and other technical equipment, with safety features intended to support controlled movement and equipment protection.

The PFlow F Series 4-Post Mechanical VRC is positioned for heavier loads and larger equipment. The standard lifting capacity is up to 50,000lb (22,680kg), with higher capacities available through customisation. The design allows loading and unloading from all four sides and is intended to accommodate oversized infrastructure, including battery systems and large server assemblies. PFlow says the F Series is engineered for high-cycle operation and flexible traffic patterns within facilities.

The company adds that its VRCs are designed as permanent infrastructure elements within buildings rather than standalone equipment. It states that all systems are engineered to meet ASME B20.1 conveyor requirements and are intended for continuous operation in environments where uptime is critical.

Dan Hext, National Sales Director at PFlow Industries, comments, “Every industry is under cost and compliance pressure. Our VRCs help facility operators achieve maximum throughput and efficiency while maintaining the highest levels of safety.”

The blueprint for tomorrow’s sustainable data centres
In this exclusive article for DCNN, Francesco Fontana, Enterprise Marketing and Alliances Director at Aruba, explores how operators can embed sustainability, flexibility, and high-density engineering into data centre design to meet the accelerating demands of AI:

Sustainable design is now central to AI-scale data centres

The explosive growth of AI is straining data centre capacity, prompting operators to both upgrade existing sites and plan large-scale new-builds. Europe’s AI market, projected to grow at a 36.4% CAGR through 2033, is driving this wave of investment as operators scramble to match demand.

Operators face mounting pressure to address the environmental costs of rapid growth, as expansion alone cannot meet the challenge. The path forward lies in designing facilities that are sustainable by default, while balancing resilience, efficiency, and adaptability to ensure data centres can support the accelerating demands of AI.

The cost of progress

Customer expectations for data centres have shifted dramatically in recent years. The rapid uptake of AI and cloud technologies is fuelling demand for colocation environments that are scalable, flexible, and capable of supporting constantly evolving workloads and managing surging volumes of data.

But this evolution comes at a cost. AI and other compute-intensive applications demand vast amounts of processing power, which in turn place new strains on both energy and water resources. Global data centre electricity usage is projected to reach 1,050 terawatt-hours (TWh) by 2026, placing data centres among the world’s top five national consumers.

This rising consumption has put data centres firmly under the spotlight. Regulators, customers, and the wider public are scrutinising how facilities are designed and operated, making it clear that sustainability can no longer be treated as optional. To meet these new expectations, operators must balance performance with environmental responsibility, rethinking infrastructure from the ground up.

Steps to a next-generation sustainable data centre

1. Embed sustainability from day one

Facilities designed 'green by default' are better placed to meet both operational and environmental goals, which is why sustainability can’t be an afterthought. This requires renewable energy integration from the outset through on-site solar, hydroelectric systems, or long-term clean power purchase agreements. Operators across Europe are also committing to industry frameworks like the Climate Neutral Data Centre Pact and the European Green Digital Coalition, ensuring progress is independently verified. Embedding sustainability into the design and operation of data centres not only reduces carbon intensity but also creates long-term efficiency gains that help manage AI’s heavy energy demands.

2. Build for flexibility and scale

Modern businesses need infrastructures that can grow with them. For operators, this means creating resilient IT environments with space and power capacity to support future demand. Offering adaptable options - such as private cages and cross-connects - gives customers the freedom to scale resources up or down, as well as tailor facilities to their unique needs. This flexibility underpins cloud expansion, digital transformation initiatives, and the integration of new applications - all while helping customers remain agile in a competitive market.
3. Engineering for the AI workload

AI and high-performance computing (HPC) workloads demand far more power and cooling capacity than traditional IT environments, and conventional designs are struggling to keep up. Facilities must be engineered specifically for high-density deployments. Advanced cooling technologies, such as liquid cooling, allow operators to safely and sustainably support power densities far above 20 kW per rack, essential for next-generation GPUs and other AI-driven infrastructure. Rethinking power distribution, airflow management, and rack layout ensures high-density computing can be delivered efficiently without compromising stability or sustainability.

4. Location matters

Where a data centre is built plays a major role in its sustainability profile, as regional providers often offer greater flexibility and more personalised services to meet customer needs. Italy, for example, has become a key destination for new facilities. Its cloud computing market is estimated at €10.8 billion (£9.4 billion) in 2025 and is forecast to more than double to €27.4 billion (£23.9 billion) by 2030, growing at a CAGR of 20.6%. Significant investments from hyperscalers in recent years are accelerating growth, making the region a hotspot for operators looking to expand in Europe.

5. Stay compliant with regulations and certifications

Strong regulatory and environmental compliance is fundamental. Frameworks such as the General Data Protection Regulation (GDPR) safeguard data, while certifications like LEED (Leadership in Energy and Environmental Design) demonstrate energy efficiency and environmental accountability. Adhering to these standards ensures legal compliance, but it also improves operational transparency and strengthens credibility with customers.

Sustainability and performance as partners

The data centres of tomorrow must scale sustainably to meet the demands of AI, cloud, and digital transformation. This requires embedding efficiency and adaptability into every stage of design and operation. Investment in renewable energy, such as hydro and solar, will be crucial to reducing emissions. Equally, innovations like liquid cooling will help manage the thermal loads of compute-heavy AI environments. Emerging technologies - including agentic AI systems that autonomously optimise energy use and breakthroughs in quantum computing - promise to take efficiency even further.

In short, sustainability and performance are no longer competing objectives; together, they form the foundation of a resilient digital future where AI can thrive without compromising the planet.

For more from Aruba, click here.
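As a footnote to the high-density cooling point in step three, the sketch below shows roughly how much water flow a liquid-cooled rack needs to shed its heat at a given temperature rise. The figures are generic textbook values assumed for illustration, not Aruba design parameters.

```python
# Illustrative sketch of why racks far above 20 kW push operators towards liquid
# cooling: the water flow needed to carry away a rack's heat at a given temperature
# rise. Values are generic textbook figures, not Aruba design parameters.

WATER_CP = 4186.0       # specific heat of water, J/(kg*K)
WATER_DENSITY = 997.0   # kg/m^3

def coolant_flow_lpm(rack_kw: float, delta_t_k: float = 10.0) -> float:
    """Litres per minute of water needed to absorb rack_kw with a delta_t_k rise."""
    mass_flow_kg_s = rack_kw * 1_000 / (WATER_CP * delta_t_k)   # from Q = m_dot * cp * dT
    return mass_flow_kg_s / WATER_DENSITY * 1_000 * 60          # kg/s -> L/min

for kw in (10, 20, 50, 100):
    print(f"{kw:>3} kW rack -> {coolant_flow_lpm(kw):5.1f} L/min at a 10 K rise")
```

The relationship is linear: doubling rack power doubles the required flow (or the temperature rise), which is one reason air-based designs struggle at the densities AI hardware now demands.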

InfraPartners, JLL partner to accelerate AI DC delivery
InfraPartners, a designer and builder of prefabricated AI data centres, and JLL, a global commercial real estate and investment management company, have formed a strategic agreement to accelerate the development and operation of AI data centres.

The partnership brings together InfraPartners’ prefabricated AI data centre designs and JLL’s capabilities in site selection, project management, construction oversight, financial structuring, and facilities management. The companies state that the combined model is intended to address persistent challenges in data centre development, particularly the period between site identification and operational readiness.

As investment in AI infrastructure grows, operators increasingly require deployment models that offer predictable schedules, reduced risk, and scalable designs suitable for GPU-heavy environments. Data centre construction continues to face risks associated with labour shortages, schedule delays, and complex financing. InfraPartners and JLL say they aim to manage these issues jointly by integrating design, prefabrication, delivery, and long-term operations into a single framework.

Prefabrication and integrated delivery for AI infrastructure

“Our clients are asking for faster, lower-risk routes to delivering AI infrastructure,” says Michalis Grigoratos, CEO at InfraPartners. “Our prefabricated, upgradeable digital infrastructure integrates seamlessly with JLL’s expertise across the full project lifecycle, so, together, we’re focused on providing a superior product that keeps pace with AI infrastructure changes and market growth.

“Our globally scalable, repeatable approach includes site selection, prefabrication, and long-term operations, reducing time-to-first-token and maximising performance across the lifecycle.”

Matt Landek, JLL Division President, Data Centers and Critical Environments, adds, “AI infrastructure demands a new approach - one that’s as dynamic and high-performing as the workloads it supports.

“With InfraPartners, we are delivering a unique blueprint that brings real estate, engineering, and operational precision into a unified model.”

Kristen Vosmaer, Managing Director at JLL, oversees global programme management, including JLL White Space and facilities management solutions, and supports delivery of the partnership. He comments, “This is one of the first collaborations to fully integrate data centre design, manufacturing, construction, commissioning, computer deployment, and lifecycle management for institutional-grade real estate delivery, marking a significant shift in shortening the time to monetisation for how mission-critical infrastructure assets are developed and maintained.”

The companies plan to offer end-to-end capabilities intended to accelerate the delivery of AI-ready facilities for enterprise, government, and cloud operators. Initial deployment efforts will focus on high-growth AI markets in EMEA and the United States from Q1 2026, with plans to expand into additional regions.

For more from InfraPartners, click here.

America’s AI revolution needs the right infrastructure
In this article, Ivo Ivanov, CEO of DE-CIX, argues his case for why America’s AI revolution won’t happen without the right kind of infrastructure:

Boom or bust

Artificial intelligence might well be the defining technology of our time, but its future rests on something much less tangible hiding beneath the surface: latency. Every AI service, whether training models across distributed GPU-as-a-Service communities or running inference close to end users, depends on how fast, how securely, and how cost-effectively data can move. Network latency is simply the delay in traffic transmission caused by the distance the data needs to travel: the lower the latency (i.e. the faster the transmission), the better the performance of everything from autonomous vehicles to the applications we carry in our pockets.

There’s always been a trend of technology applications outpacing network capabilities, but we’re feeling it more acutely now due to the sheer pace of AI growth. Depending on where you were in 2012, the average latency for the top 20 applications could be up to or more than 200 milliseconds. Today, there’s virtually no application in the top 100 that would function effectively with that kind of latency.

That’s why internet exchanges (IXs) have begun to dominate the conversation. An IX is like an airport for data. Just as an airport coordinates the safe landing and departure of dozens of airlines, allowing them to exchange passengers and cargo seamlessly, an IX brings together networks, clouds, and content platforms to seamlessly exchange traffic. The result is faster connections, lower latency, greater efficiency, and a smoother journey for every digital service that depends on it. Deploying these IXs creates what is known as “data gravity”, a magnetic pull that draws in networks, content, and investment. Once this gravity takes hold, ecosystems begin to grow on their own, localising data and services, reducing latency, and fuelling economic growth.

I recently spoke about this at a first-of-its-kind regional AI connectivity summit, The future of AI connectivity in Kansas & beyond, hosted at Wichita State University (WSU) in Kansas, USA. It was the perfect location - given that WSU is the planned site of a new carrier-neutral IX - and the start of a much bigger plan to roll out IXs across university campuses nationwide.

Discussions at the summit reflected a growing recognition that America’s AI economy cannot depend solely on coastal hubs or isolated mega-data centres. If AI is to deliver value across all parts of the economy, from aerospace and healthcare to finance and education, it needs a distributed, resilient, and secure interconnection layer reaching deep into the heartland. What is beginning in Wichita is part of a much bigger picture: building the kind of digital infrastructure that will allow AI to flourish.

Networking changed the game, but AI is changing the rules

For all its potential, AI’s crowning achievement so far might be the wakeup call it’s given us. It has magnified every weakness in today’s networks. Training models requires immense compute power. Finding the data centre space for this can be a challenge, but new data transport protocols mean that AI processing could, in the future, be spread across multiple data centre facilities. Meanwhile, inference - and especially multi-agent AI inference - demands ultra-low latency, as AI services interact with systems, people, and businesses in real time.
But for both of these scenarios, the efficiency and speed of the network is key. If the network cannot keep pace (if data needs to travel too far), these applications become too slow to be useful. That’s why the next breakthrough in AI won’t be in bigger or better models, but in the infrastructure that connects them all. By bringing networks, clouds, and enterprises together on a neutral platform, an IX makes it possible to aggregate GPU resources across locations, create agile GPU-as-a-Service communities, and deliver real-time inference with the best performance and highest level of security.

AI changes the geography of networking too. Instead of relying only on mega-hubs in key locations, we need interconnection spokes that reach into every region where people live, work, and innovate. Otherwise, businesses in the middle of the country face the “tromboning effect”, where their data detours hundreds of miles to another city to be exchanged and processed before returning a result - adding latency, raising costs, and weakening performance. We need to make these distances shorter, reduce path complexity, and allow data to move freely and securely between every player in the network chain. That’s how AI is rewriting the rulebook; latency, underpinned by distance and geography, matters more than ever.

Building ecosystems and 'data gravity'

When we establish an IX, we’re doing more than just connecting networks; we’re kindling the first embers of a future-proof ecosystem. I’ve seen this happen countless times. The moment a neutral (meaning data centre and carrier neutral) exchange is in place, it becomes a magnet that draws in networks, content providers, data centres, and investors. The pull of “data gravity” transforms a market from being dependent on distant hubs into a self-sustaining digital environment. What may look like a small step - a handful of networks exchanging traffic locally - very quickly becomes an accelerant for rapid growth.

Dubai is one of the clearest examples. When we opened our first international platform there in 2012, 90% of the content used in the region was hosted outside of the Middle East, with latency above 200 milliseconds. A decade later, 90% of that content is localised within the region and latency has dropped to just three milliseconds. This was a direct result of the gravity created by the exchange, pulling more and more stakeholders into the ecosystem.

For AI, that localisation isn’t just beneficial; it’s also essential. Training and inference both depend on data being closer to where it is needed. Without the gravity of an IX, content and compute remain scattered and far away, and performance suffers. With it, however, entire regions can unlock the kind of digital transformation that AI demands.

The American challenge

There was a time when connectivity infrastructure was dominated by a handful of incumbents, but that time has long since passed. Building AI-ready infrastructure isn’t something that one organisation or sector can do alone. Everywhere that has succeeded in building an AI-ready network environment has done so through partnerships - between data centre, network, and IX operators, alongside policy makers, technology providers, universities, and - of course - the business community itself. When those pieces of the puzzle are assembled, the result is a healthy ecosystem that benefits everyone. This collaborative model, like the one envisaged at the IX at WSU, is exactly what the US needs if it is to realise the full potential of AI.
Too much of America’s digital economy still depends on coastal hubs, while the centre of the country is underserved. That means businesses in aerospace, healthcare, finance, and education - many of which are based deep in the US heartland - must rely on services delivered from other states and regions, and that isn’t sustainable when it comes to AI. To solve this, we need a distributed layer of interconnection that extends across the entire nation. Only then can we create a truly digital America where every city has access to the same secure, high-performance infrastructure required to power its AI-driven future.

For more from DE-CIX, click here.
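As a rough sense-check of the tromboning effect Ivo Ivanov describes, propagation delay can be estimated directly from route distance. The route lengths and fibre refractive index below are assumptions for the sketch, not DE-CIX measurements, and the result covers propagation only, ignoring switching and queueing.

```python
# Rough sketch of the "tromboning effect" described above: the extra round-trip
# delay added when traffic detours to a distant hub instead of being exchanged
# locally. Route lengths are illustrative, not DE-CIX measurements, and the figure
# covers propagation only (no switching or queueing delay).

C_KM_PER_MS = 299_792.458 / 1_000   # speed of light in vacuum, km per millisecond
FIBRE_INDEX = 1.47                  # typical refractive index of single-mode fibre

def round_trip_ms(one_way_km: float) -> float:
    """Propagation-only round-trip time over a fibre route of one_way_km."""
    return 2 * one_way_km / (C_KM_PER_MS / FIBRE_INDEX)

local_km, detour_km = 30, 900       # exchanging in-city vs hairpinning via a distant coastal hub
extra_ms = round_trip_ms(detour_km) - round_trip_ms(local_km)
print(f"Added delay from the detour: {extra_ms:.1f} ms per round trip")   # roughly 8-9 ms
```

Eight or nine milliseconds may sound small, but it is pure distance cost that sits under every request and cannot be engineered away, which is the case for local interconnection the article makes.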

AWS outage sparks call for resilient DC strategies
This Monday’s Amazon Web Services (AWS) outage demonstrates the importance of investing in resilient data centre strategies, according to maintenance specialists Arfon Engineering.

The worldwide outage saw millions unable to access popular apps and websites - including Alexa, Snapchat, and Reddit - after a Domain Name System (DNS) error took down the major AWS data centre site in Virginia. With hundreds of platforms down for over eight hours, it was the largest internet disruption since a CrowdStrike update caused a global IT meltdown last year. The financial impact of the crash is expected to reach into the hundreds of billions, while the potential reputational damage could be even more severe in the long run.

A preventable disaster

Although the outage was not caused by a lack of maintenance or a physical malfunction of equipment and building services, its consequences highlight an opportunity for operators to adopt predictive maintenance strategies.

Alice Oakes, Service and Support Manager at Arfon, comments, “The chaos brought by Monday’s outage shows the sheer damage that can be caused by something as simple as servers going down. While it might’ve been unavoidable, this is certainly not the case for downtime caused by equipment failures and reactive maintenance.

“This is where predictive maintenance can make a real difference; it's more resilient, cost-effective, and environmentally responsible than typical reactive or preventative approaches, presenting operators with the chance to stay ahead of potential issues.”

Predictive maintenance strategies incorporate condition-based monitoring (CBM), which uses real-time data to assess equipment health and forecast potential failures well in advance. This enables informed and proactive maintenance decisions before the point of downtime, eliminating unnecessary interventions and extending asset life in the process. CBM also reduces the frequency of unnecessary replacements, contributing to lower carbon emissions and reduced energy consumption in a sector under scrutiny for its environmental impact.

Alice continues, “This incident is a timely reminder that resilience should be built into every layer of data centre infrastructure, especially the physical equipment powering them. With billions set to be invested in UK data centres over the coming years, operators have a golden opportunity to future-proof their facilities.

“Predictive maintenance should be a cornerstone of both new-build and retrofit facilities to ensure continuity in a sector where downtime simply isn’t an option.”
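As an illustration of the condition-based monitoring approach described above, the sketch below flags a maintenance action when a monitored reading trends towards a failure threshold. The metric, thresholds, and readings are invented for the example and are not Arfon Engineering parameters.

```python
# Minimal sketch of the condition-based monitoring (CBM) idea described above:
# track a real-time health metric and raise a maintenance action before it reaches
# its failure threshold. The metric, thresholds, and readings are invented for
# illustration and are not Arfon Engineering parameters.

from statistics import mean

WARN_C, TRIP_C = 70.0, 85.0   # assumed warning / trip temperatures for a cooling-fan bearing

def assess(readings_c: list[float], window: int = 6) -> str:
    """Return a maintenance recommendation from the recent temperature trend."""
    recent = readings_c[-window:]
    avg = mean(recent)
    rising = recent[-1] > recent[0]
    if avg >= TRIP_C:
        return "TRIP: schedule immediate intervention"
    if avg >= WARN_C and rising:
        return "WARN: plan maintenance at the next window"
    return "OK: no action required"

print(assess([64, 66, 69, 72, 75, 78]))   # -> WARN: plan maintenance at the next window
```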

Data centre delivery: How consistency endures
In this exclusive article for DCNN, Steve Clifford, Director of Data Centres at EMCOR UK, describes how end-to-end data centre design, building, and maintenance is essential for achieving data centre uptime and optimisation:

Keeping services live

Resilience and reliability. These aren’t optional features of data centres; they are essential requirements that demand precision and a keen eye for detail from all stakeholders. If a patchwork of subcontractors is delivering data centre services, it can muddy the waters by complicating supply chains. This heightens the risk of miscommunication, which can cause project delays and operational downtime.

Effective design and implementation are essential at a time when the data centre market is undergoing significant expansion, with the take-up of capacity expected to grow by 855MW - or 22% year-on-year - in Europe alone. In-house engineering and project management teams can prioritise open communication to help build, manage, and maintain data centres over time. This end-to-end approach allows for continuity from initial consultation through to long-term operational excellence so data centres can do what they do best: uphold business continuity and serve millions of people.

Designing and building spaces

Before a data centre can be built, logistical challenges need to be addressed. In many regions, grid availability is limited, land is constrained, and planning approvals can take years. Compliance is key too: no matter the space, builds should align to ISO frameworks, local authority regulations, and - in some cases - critical national infrastructure (CNI) standards.

Initially, teams need to uphold a customer’s business case, identify the optimal location, address cooling concerns, identify which risks to mitigate, and understand what space for expansion or improvement should be factored in now. While pre-selection of contractors and consultants is vital at this stage, it makes strategic sense to select a complementary delivery team that can manage mobilisation and long-term performance too. Engineering providers should collaborate with customer stakeholders, consultants, and supply chain partners so that the solution delivered is fit for purpose throughout its operational lifespan.

As greenfield development can be lengthy, upgrading existing spaces is a popular alternative. Called 'retrofitting', this route can reduce costs by 40% and project timelines by 30%. When building in pre-existing spaces, maintaining continuity in live environments is crucial. For example, our team recently developed a data hall within a contained 1,000m² existing facility. Engineers used profile modelling to identify an optimal cooling configuration based on hot aisle containment and installed a 380V DC power system to maximise energy efficiency. This resulted in 96.2% efficiency across rectifiers and converters. The project delivered 136 cabinets, against a brief of 130, and, crucially, didn’t disrupt business-as-usual operations, using a phased integration for the early deployment of IT systems.

Maintaining continuity

In certain spaces, such as national defence and highly sensitive operations, maintaining continuity is fundamental. Critical infrastructure maintenance in these environments needs to prioritise security and reliability, as these facilities sit at the heart of national operations.
Ongoing operational management requires a 24/7 engineering presence, supported by proactive maintenance management, comprehensive systems monitoring, a strategic critical spares strategy, and a robust event and incident management process. This constant presence, from the initial stages of consultation through to ongoing operational support, delivers clear benefits that compound over time; the same team that understands the design rationale can anticipate potential issues and respond swiftly when challenges arise. Using 3D modelling to coordinate designs and time-lapse visualisations depicting project progress can keep stakeholders up to date.

Asset management in critical environments like CNIs also demands strict maintenance scheduling and control, coupled with complete risk transparency to customers. Total honesty and trust are non-negotiable, so weekly client meetings can maintain open communication channels, ensuring customers are fully informed about system status, upcoming maintenance windows, and any potential risks on the horizon.

Meeting high expectations

These high-demand environments have high expectations, so keeping engineering teams engaged and motivated is key to long-term performance. A holistic approach to staff engagement should focus on continuous training and development to deliver greater continuity and deeper site expertise. When engineers intimately understand customer expectations and site needs, they can maintain the seamless service these critical operations demand.

Focusing on continuity delivers measurable results. For one defence-grade data centre customer, we have maintained 100% uptime over eight years, from day one of operations. Consistent processes and dedicated personnel form a long-term commitment to operational excellence.

Optimising spaces for the future

Self-delivery naturally lends itself to growing, evolving relationships with customers. By transitioning to self-delivering entire projects and operations, organisations can benefit from a single point of contact while maintaining control over most aspects of service delivery. Rather than offering generic solutions, established relationships allow for bespoke approaches that anticipate future requirements and build in flexibility from the outset.

A continuous improvement model ensures long-term capability development, with energy efficiency improvement representing a clear focus area as sustainability requirements become increasingly stringent. AI and HPC workloads are pushing rack densities higher, creating new demands for thermal management, airflow, and power draw. Many operators are also embedding smart systems - from IoT sensors to predictive analytics tools - into designs. These platforms provide real-time visibility of energy use, asset performance, and environmental conditions, enabling data-driven decision-making and continuous optimisation.

Operators may also upgrade spaces to higher-efficiency systems and smart cooling, which support better PUE outcomes and long-term energy savings. When paired with digital tools for energy monitoring and predictive maintenance, teams can deliver on smarter operations and provide measurable returns on investment.

Continuity: A strategic tool

Uptime is critical - and engineering continuity is not just beneficial, but essential. From the initial stages of design and consultation through to ongoing management and future optimisation, data centres need consistent teams, transparent processes, and strategic relationships that endure.
The end-to-end approach transforms continuity from an operational requirement into a strategic advantage, enabling facilities to adapt to evolving demands while maintaining constant uptime. When consistency becomes the foundation, exceptional performance follows.
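As a brief aside on the PUE outcomes mentioned above, the metric itself is a simple ratio that energy-monitoring platforms track per reporting period. The sketch below uses hypothetical meter readings rather than figures from the EMCOR UK projects described in the article.

```python
# Minimal sketch of the PUE figure referenced above: total facility energy divided
# by IT equipment energy over a reporting period. The meter readings are
# hypothetical, not values from the EMCOR UK projects described in the article.

def pue(total_facility_kwh: float, it_load_kwh: float) -> float:
    """Power Usage Effectiveness for a reporting period."""
    if it_load_kwh <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kwh / it_load_kwh

# Example month: 1,420 MWh through the utility meter, 1,050 MWh delivered to the IT bus
print(f"PUE = {pue(1_420_000, 1_050_000):.2f}")   # ~1.35
```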

Rethinking fuel control
In this exclusive article for DCNN, Jeff Hamilton, Fuel Oil Team Manager at Preferred Utilities Manufacturing Corporation, explores how distributed control systems can enhance reliability, security, and scalability in critical backup fuel infrastructure:

Distributed architecture for resilient infrastructure

Uninterrupted power is non-negotiable for data centres to provide continuity through every possible scenario, from extreme weather events to grid instability in an ageing infrastructure. Generators, of course, are central to this resilience, but we must also consider the fuel storage infrastructure that powers them. The way the fuel is monitored, delivered, and secured by a control system ultimately determines whether a backup system succeeds or fails when it is needed most.

The risks of centralised control

A traditional fuel control system typically uses a centralised controller such as a programmable logic controller (PLC) to manage all components. The PLC coordinates data from sensors, controls pumps, logs events, and communicates with building automation systems. Often, this controller connects through hardwired, point-to-point circuits that span large distances throughout the facility. This setup creates a couple of potential vulnerabilities:

1. If the central controller fails, the entire fuel system can be compromised. A wiring fault or software error may take down the full network of equipment it supports.
2. Cybersecurity is also a concern when using a centralised controller, especially if it’s connected to broader network infrastructure. A single breach can expose your entire system.

Whilst these vulnerabilities may be acceptable in some industrial situations, modern data centres demand more robust and secure solutions. Decentralisation in control architecture addresses these concerns.

Distributed logic and redundant communications

Next-generation fuel control systems are adopting architectures with distributed logic, meaning that control is no longer centralised in one location. Instead, each field controller - or “node” - has its own processor and local interface. These nodes operate autonomously, running dedicated programs for their assigned devices (such as tank level sensors or transfer pumps). These nodes then communicate with one another over redundant communication networks.

This peer-to-peer model eliminates the need for a master controller. If one node fails or if communication is interrupted, others continue operating without disruption. This means that pump operations, alarms, and safety protocols all remain active because each node has its own logic and control.

This model increases both uptime and safety; it also simplifies installation. Since each node handles its own logic and display, it needs far less wiring than centralised systems. Adding new equipment involves simply installing a new node and connecting it to the network, rather than overhauling the entire system.

Built-in cybersecurity through architecture

A system’s underlying architecture plays a key role in determining its vulnerability to cyber attack. Centralised systems can provide a single entry point to an entire system. Distributed control architectures offer a fundamentally different security profile. Without a single controller, there is no single target. Each node operates independently and the communication network does not require internet-facing protocols.
In some applications, distributed systems have even been configured to work in physical isolation, particularly where EMP protection is required. Attackers seeking to disrupt operations would need to compromise multiple nodes simultaneously, a task substantially more difficult than targeting a central controller. Even if one segment is compromised or disabled, the rest of the system continues to function as designed. This creates a hardened, resilient infrastructure that aligns with zero-trust security principles.

Safety and redundancy by default

Of course, any fuel control system must not just be secure; it must also be safe. Distributed systems offer advantages here as well. Each node can be programmed with local safety interlocks. For example, if a tank level sensor detects overfill, the node managing that tank can shut off the pump without needing permission from a central controller. Other safety features often include dual-pump rotation to prevent uneven wear, leak detection, and temperature or pressure monitoring with response actions. These processes run locally and independently. Even if communication between nodes is lost, the safety routines continue.

Additionally, touchscreens or displays on individual nodes allow on-site personnel to access diagnostics and system data from any node on the network. This visibility simplifies troubleshooting and provides more oversight of real-time conditions.

Scaling with confidence

Data centres require flexibility to grow and adapt. However, traditional control systems make changes like upgrading infrastructure, increasing power, and installing additional backup systems costly and complex, often requiring complete rewiring or reprogramming. Distributed control systems make scaling more manageable. Adding a new generator or day tank, for example, involves connecting a new controller node and loading its program. Since each node contains its own logic and communicates over a shared network, the rest of the system continues operating during the upgrade. This minimises downtime and reduces installation costs. Some systems even allow live diagnostics during commissioning, which can be particularly valuable when downtime is not an option.

A better approach for critical infrastructure

Data centres face incredible pressure to deliver continuous performance, efficiency, and resilience. Backup fuel systems are a vital part of this reliability strategy, but the way these systems are controlled and monitored is changing. Distributed control architectures offer a smarter, safer path forwards.

Preferred Utilities Manufacturing Corporation is committed to supporting data centres to better manage their critical operations. This commitment is reflected in products and solutions like its Preferred Fuel System Controller (FSC), a distributed control architecture that offers all the features described throughout this article, including redundant, masterless, node-based communication, providing secure, safe, and flexible fuel system control. With Preferred’s expertise, a distributed control architecture can be applied to system sizes ranging from 60 to 120 day tanks.
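The masterless, node-based pattern described in this article can be sketched in a few lines. The example below is a conceptual illustration only - the class, threshold, and control rules are assumptions made for the sketch, not Preferred's FSC implementation.

```python
# Conceptual sketch of the masterless, node-based pattern described above: each
# node owns its own interlock logic and acts locally, even with no peer network.
# This illustrates the architecture only; it is not Preferred's FSC implementation,
# and the class, threshold, and rules are assumptions made for the sketch.

from dataclasses import dataclass

@dataclass
class DayTankNode:
    name: str
    high_level_pct: float = 90.0    # assumed overfill interlock threshold
    pump_running: bool = False

    def on_level_reading(self, level_pct: float, peers_reachable: bool) -> str:
        """Make a local control decision without consulting a central controller."""
        if level_pct >= self.high_level_pct:
            self.pump_running = False           # local safety interlock: stop the fill pump
            return f"{self.name}: overfill interlock tripped, fill pump stopped"
        if not peers_reachable:
            # Peer network lost: keep executing the node's own program autonomously
            return f"{self.name}: peers unreachable, continuing local control"
        self.pump_running = level_pct < 80.0    # simple illustrative demand rule
        return f"{self.name}: fill pump {'running' if self.pump_running else 'off'}"

node = DayTankNode("DayTank-3")
print(node.on_level_reading(92.5, peers_reachable=True))    # interlock acts locally
```

The point of the pattern is that the overfill interlock fires inside the node itself, so losing the peer network or any other controller never disables the safety behaviour.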


