Data Centre Architecture Insights & Best Practices


America’s AI revolution needs the right infrastructure
In this article, Ivo Ivanov, CEO of DE-CIX, argues his case for why America’s AI revolution won’t happen without the right kind of infrastructure:

Boom or bust

Artificial intelligence might well be the defining technology of our time, but its future rests on something much less tangible hiding beneath the surface: latency. Every AI service, whether training models across distributed GPU-as-a-Service communities or running inference close to end users, depends on how fast, how securely, and how cost-effectively data can move. Network latency is simply the delay in traffic transmission caused by the distance the data needs to travel: the lower the latency (i.e. the faster the transmission), the better the performance of everything from autonomous vehicles to the applications we carry in our pockets.

Technology applications have always tended to outpace network capabilities, but we’re feeling it more acutely now because of the sheer pace of AI growth. Depending on where you were in 2012, the average latency for the top 20 applications could be 200 milliseconds or more. Today, there’s virtually no application in the top 100 that would function effectively with that kind of latency.

That’s why internet exchanges (IXs) have begun to dominate the conversation. An IX is like an airport for data. Just as an airport coordinates the safe landing and departure of dozens of airlines, allowing them to exchange passengers and cargo seamlessly, an IX brings together networks, clouds, and content platforms to exchange traffic seamlessly. The result is faster connections, lower latency, greater efficiency, and a smoother journey for every digital service that depends on it. Deploying these IXs creates what is known as “data gravity”: a magnetic pull that draws in networks, content, and investment. Once this gravity takes hold, ecosystems begin to grow on their own, localising data and services, reducing latency, and fuelling economic growth.

I recently spoke about this at a first-of-its-kind regional AI connectivity summit, The future of AI connectivity in Kansas & beyond, hosted at Wichita State University (WSU) in Kansas, USA. It was the perfect location - given that WSU is the planned site of a new carrier-neutral IX - and the start of a much bigger plan to roll out IXs across university campuses nationwide. Discussions at the summit reflected a growing recognition that America’s AI economy cannot depend solely on coastal hubs or isolated mega-data centres. If AI is to deliver value across all parts of the economy, from aerospace and healthcare to finance and education, it needs a distributed, resilient, and secure interconnection layer reaching deep into the heartland. What is beginning in Wichita is part of a much bigger picture: building the kind of digital infrastructure that will allow AI to flourish.

Networking changed the game, but AI is changing the rules

For all its potential, AI’s crowning achievement so far might be the wake-up call it has given us. It has magnified every weakness in today’s networks. Training models requires immense compute power. Finding the data centre space for this can be a challenge, but new data transport protocols mean that AI processing could, in future, be spread across multiple data centre facilities. Meanwhile, inference - and especially multi-agent AI inference - demands ultra-low latency, as AI services interact with systems, people, and businesses in real time.
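As a back-of-the-envelope illustration of why distance dominates latency, consider the sketch below. The roughly 200km-per-millisecond figure for light in optical fibre is a standard physics approximation, and the route lengths are invented for illustration - this is not DE-CIX data:

```python
# Light propagates through optical fibre at roughly two-thirds of c,
# i.e. about 200 km per millisecond (a standard rule of thumb).
FIBRE_KM_PER_MS = 200.0

def round_trip_ms(route_km: float) -> float:
    """Best-case round-trip propagation delay over a fibre route,
    ignoring switching, queuing, and protocol overheads."""
    return 2 * route_km / FIBRE_KM_PER_MS

# Hypothetical comparison: exchanging traffic at a local IX versus
# detouring it via a distant hub and back.
print(f"Local IX (50 km path):      {round_trip_ms(50):.2f} ms")
print(f"Detour via hub (2,500 km):  {round_trip_ms(2500):.2f} ms")
```

Even before router hops and congestion are added, the detour alone costs tens of milliseconds - exactly the “tromboning effect” described below, and the core argument for exchanging traffic locally.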
But for both of these scenarios, the efficiency and speed of the network is key. If the network cannot keep pace (if data needs to travel too far), these applications become too slow to be useful. That’s why the next breakthrough in AI won’t be in bigger or better models, but in the infrastructure that connects them all. By bringing networks, clouds, and enterprises together on a neutral platform, an IX makes it possible to aggregate GPU resources across locations, create agile GPU-as-a-Service communities, and deliver real-time inference with the best performance and highest level of security.

AI changes the geography of networking too. Instead of relying only on mega-hubs in key locations, we need interconnection spokes that reach into every region where people live, work, and innovate. Otherwise, businesses in the middle of the country face the “tromboning effect”, where their data detours hundreds of miles to another city to be exchanged and processed before returning a result - adding latency, raising costs, and weakening performance. We need to make these distances shorter, reduce path complexity, and allow data to move freely and securely between every player in the network chain. That’s how AI is rewriting the rulebook: latency, underpinned by distance and geography, matters more than ever.

Building ecosystems and 'data gravity'

When we establish an IX, we’re doing more than just connecting networks; we’re laying the foundations of a future-proof ecosystem. I’ve seen this happen countless times. The moment a neutral (meaning data centre and carrier neutral) exchange is in place, it becomes a magnet that draws in networks, content providers, data centres, and investors. The pull of “data gravity” transforms a market from being dependent on distant hubs into a self-sustaining digital environment. What may look like a small step - a handful of networks exchanging traffic locally - very quickly becomes an accelerant for rapid growth.

Dubai is one of the clearest examples. When we opened our first international platform there in 2012, 90% of the content used in the region was hosted outside the Middle East, with latency above 200 milliseconds. A decade later, 90% of that content is localised within the region and latency has dropped to just three milliseconds. This was a direct result of the gravity created by the exchange, pulling more and more stakeholders into the ecosystem.

For AI, that localisation isn’t just beneficial; it’s essential. Training and inference both depend on data being close to where it is needed. Without the gravity of an IX, content and compute remain scattered and far away, and performance suffers. With it, entire regions can unlock the kind of digital transformation that AI demands.

The American challenge

There was a time when connectivity infrastructure was dominated by a handful of incumbents, but that time has long since passed. Building AI-ready infrastructure isn’t something that one organisation or sector can do alone. Everywhere that has succeeded in building an AI-ready network environment has done so through partnerships - between data centre, network, and IX operators, alongside policymakers, technology providers, universities, and - of course - the business community itself. When those pieces of the puzzle are assembled, the result is a healthy ecosystem that benefits everyone. This collaborative model, like the one envisaged for the IX at WSU, is exactly what the US needs if it is to realise the full potential of AI.
Too much of America’s digital economy still depends on coastal hubs, while the centre of the country is underserved. That means businesses in aerospace, healthcare, finance, and education - many of which are based deep in the US heartland - must rely on services delivered from other states and regions, and that isn’t sustainable when it comes to AI. To solve this, we need a distributed layer of interconnection that extends across the entire nation. Only then can we create a truly digital America where every city has access to the same secure, high-performance infrastructure required to power its AI-driven future.

For more from DE-CIX, click here.

AWS outage sparks call for resilient DC strategies
This Monday’s Amazon Web Services (AWS) outage demonstrates the importance of investing in resilient data centre strategies, according to maintenance specialists Arfon Engineering. The worldwide outage saw millions unable to access popular apps and websites - including Alexa, Snapchat, and Reddit - after a Domain Name System (DNS) error took down the major AWS data centre site in Virginia. With hundreds of platforms down for over eight hours, it was the largest internet disruption since a CrowdStrike update caused a global IT meltdown last year. The financial impact of the crash is expected to reach into the hundreds of billions, while the potential reputational damage could be even more severe in the long run.

A preventable disaster

Although the downtime was not caused by a lack of maintenance or a physical malfunction of equipment and building services, its consequences do present an opportunity for operators to adopt predictive maintenance strategies.

Alice Oakes, Service and Support Manager at Arfon, comments, “The chaos brought by Monday’s outage shows the sheer damage that can be caused by something as simple as servers going down.

"While it might’ve been unavoidable, this is certainly not the case for downtime caused by equipment failures and reactive maintenance.

“This is where predictive maintenance can make a real difference; it's more resilient, cost-effective, and environmentally responsible than typical reactive or preventative approaches, presenting operators with the chance to stay ahead of potential issues.”

Predictive maintenance strategies incorporate condition-based monitoring (CBM), which uses real-time data to assess equipment health and forecast potential failures well in advance. This enables informed, proactive maintenance decisions before the point of downtime, eliminating unnecessary interventions and extending asset life in the process. CBM also reduces the frequency of unnecessary replacements, contributing to lower carbon emissions and reduced energy consumption in a sector under scrutiny for its environmental impact.

Alice continues, “This incident is a timely reminder that resilience should be built into every layer of data centre infrastructure, especially the physical equipment powering them.

"With billions set to be invested in UK data centres over the coming years, operators have a golden opportunity to future-proof their facilities.

“Predictive maintenance should be a cornerstone of both new-build and retrofit facilities to ensure continuity in a sector where downtime simply isn’t an option.”
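To make the CBM principle concrete, here is a minimal sketch of a rolling-trend check on a sensor feed. The sensor, thresholds, and window size are hypothetical illustrations, not Arfon's methodology:

```python
from collections import deque

class ConditionMonitor:
    """Toy condition-based monitoring check: flag an asset for inspection
    when a sensor reading's rolling average breaches an alert threshold."""

    def __init__(self, alert_threshold: float, window: int = 6):
        self.alert_threshold = alert_threshold
        self.readings = deque(maxlen=window)  # rolling window of samples

    def add_reading(self, value: float) -> bool:
        """Record a sample; return True when the rolling average breaches
        the threshold, i.e. maintenance should be scheduled proactively."""
        self.readings.append(value)
        avg = sum(self.readings) / len(self.readings)
        return avg > self.alert_threshold

# Hypothetical bearing-temperature feed (degrees C) for a cooling fan
monitor = ConditionMonitor(alert_threshold=70.0)
for temp in [61, 63, 66, 69, 72, 75, 78]:
    if monitor.add_reading(temp):
        print(f"Trend alert at {temp} C - schedule inspection before failure")
```

Real CBM platforms add vibration analysis, anomaly detection, and remaining-useful-life models, but the principle is the same: act on trends before they become failures.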

Data centre delivery: How consistency endures
In this exclusive article for DCNN, Steve Clifford, Director of Data Centres at EMCOR UK, describes how end-to-end data centre design, building, and maintenance is essential for achieving data centre uptime and optimisation:

Keeping services live

Resilience and reliability. These aren’t optional features of data centres; they are essential requirements demanding precision and a keen eye for detail from all stakeholders. If a patchwork of subcontractors is delivering data centre services, it can muddy the waters by complicating supply chains. This heightens the risk of miscommunication, which can cause project delays and operational downtime. Effective design and implementation are essential at a time when the data centre market is undergoing significant expansion, with take-up of capacity expected to grow by 855MW - 22% year-on-year growth - in Europe alone.

In-house engineering and project management teams can prioritise open communication to help build, manage, and maintain data centres over time. This end-to-end approach allows for continuity from initial consultation through to long-term operational excellence, so data centres can do what they do best: uphold business continuity and serve millions of people.

Designing and building spaces

Before a data centre can be built, logistical challenges need to be addressed. In many regions, grid availability is limited, land is constrained, and planning approvals can take years. Compliance is key too: no matter the space, builds should align with ISO frameworks, local authority regulations, and - in some cases - critical national infrastructure (CNI) standards.

Initially, teams need to uphold a customer’s business case, identify the optimal location, address cooling concerns, identify which risks to mitigate, and understand what space for expansion or improvement should be factored in now. While pre-selection of contractors and consultants is vital at this stage, it makes strategic sense to select a complementary delivery team that can manage mobilisation and long-term performance too. Engineering providers should collaborate with customer stakeholders, consultants, and supply chain partners so that the solution delivered is fit for purpose throughout its operational lifespan.

As greenfield development can be lengthy, upgrading existing spaces is a popular alternative. Called 'retrofitting', this route can cut costs by 40% and project timelines by 30%. When building in pre-existing spaces, maintaining continuity in live environments is crucial. For example, our team recently developed a data hall within an existing 1,000m² facility. Engineers used profile modelling to identify an optimal cooling configuration based on hot aisle containment and installed a 380V DC power system to maximise energy efficiency, achieving 96.2% efficiency across rectifiers and converters. The project delivered 136 cabinets against a brief of 130 and, crucially, didn’t disrupt business-as-usual operations, using phased integration for the early deployment of IT systems.

Maintaining continuity

In certain spaces, such as national defence and other highly sensitive operations, maintaining continuity is fundamental. Critical infrastructure maintenance in these environments needs to prioritise security and reliability, as these facilities sit at the heart of national operations.
Ongoing operational management requires a 24/7 engineering presence, supported by proactive maintenance management, comprehensive systems monitoring, a strategic critical spares strategy, and a robust event and incident management process. This constant presence, from the initial stages of consultation through to ongoing operational support, delivers benefits that compound over time; the same team that understands the design rationale can anticipate potential issues and respond swiftly when challenges arise. Using 3D modelling to coordinate designs, alongside time-lapse visualisations depicting project progress, can keep stakeholders up to date.

Asset management in critical environments such as CNI also demands strict maintenance scheduling and control, coupled with complete risk transparency for customers. Total honesty and trust are non-negotiable, so weekly client meetings can maintain open communication channels, ensuring customers are fully informed about system status, upcoming maintenance windows, and any potential risks on the horizon.

Meeting high expectations

These high-demand environments have high expectations, so keeping engineering teams engaged and motivated is key to long-term performance. A holistic approach to staff engagement should focus on continuous training and development to deliver greater continuity and deeper site expertise. When engineers intimately understand customer expectations and site needs, they can maintain the seamless service these critical operations demand.

Focusing on continuity delivers measurable results. For one defence-grade data centre customer, we have maintained 100% uptime over eight years, from day one of operations. Consistent processes and dedicated personnel form a long-term commitment to operational excellence.

Optimising spaces for the future

Self-delivery naturally lends itself to growing, evolving relationships with customers. By transitioning to self-delivering entire projects and operations, organisations can benefit from a single point of contact while maintaining control over most aspects of service delivery. Rather than offering generic solutions, established relationships allow for bespoke approaches that anticipate future requirements and build in flexibility from the outset. A continuous improvement model ensures long-term capability development, with energy efficiency a clear focus area as sustainability requirements become increasingly stringent.

AI and HPC workloads are pushing rack densities higher, creating new demands for thermal management, airflow, and power draw. Many operators are also embedding smart systems - from IoT sensors to predictive analytics tools - into designs. These platforms provide real-time visibility of energy use, asset performance, and environmental conditions, enabling data-driven decision-making and continuous optimisation. Operators may also upgrade spaces to higher-efficiency systems and smart cooling, which support better PUE outcomes and long-term energy savings. When paired with digital tools for energy monitoring and predictive maintenance, teams can deliver smarter operations and provide measurable returns on investment.

Continuity: A strategic tool

Uptime is critical - and engineering continuity is not just beneficial, but essential. From the initial stages of design and consultation through to ongoing management and future optimisation, data centres need consistent teams, transparent processes, and strategic relationships that endure.
The end-to-end approach transforms continuity from an operational requirement into a strategic advantage, enabling facilities to adapt to evolving demands while maintaining constant uptime. When consistency becomes the foundation, exceptional performance follows.
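A side note on the arithmetic behind figures like the 96.2% quoted above: end-to-end electrical efficiency is the product of the efficiencies of every conversion stage, which is why removing stages - as 380V DC distribution does - pays off. The stage values below are invented for illustration (chosen so the DC chain lands near that figure); they are not EMCOR UK's measurements:

```python
from math import prod

def chain_efficiency(stage_efficiencies: list[float]) -> float:
    """Overall efficiency of a power chain: the product of its stages."""
    return prod(stage_efficiencies)

# Hypothetical legacy AC chain: UPS (rectify + invert), PDU transformer,
# and server PSU each lose a few percent.
ac_chain = [0.94, 0.98, 0.92]
# Hypothetical 380V DC chain: one central rectifier, one DC/DC stage.
dc_chain = [0.975, 0.987]

print(f"AC chain:  {chain_efficiency(ac_chain):.1%}")  # -> 84.8%
print(f"DC chain:  {chain_efficiency(dc_chain):.1%}")  # -> 96.2%
```

Fewer conversion stages means fewer multiplications by a number less than one, so the shorter chain wins almost by construction.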

Rethinking fuel control
In this exclusive article for DCNN, Jeff Hamilton, Fuel Oil Team Manager at Preferred Utilities Manufacturing Corporation, explores how distributed control systems can enhance reliability, security, and scalability in critical backup fuel infrastructure:

Distributed architecture for resilient infrastructure

Uninterrupted power is non-negotiable for data centres to provide continuity through every possible scenario, from extreme weather events to grid instability in an ageing infrastructure. Generators, of course, are central to this resilience, but we must also consider the fuel storage infrastructure that powers them. The way fuel is monitored, delivered, and secured by a control system ultimately determines whether a backup system succeeds or fails when it is needed most.

The risks of centralised control

A traditional fuel control system typically uses a centralised controller, such as a programmable logic controller (PLC), to manage all components. The PLC coordinates data from sensors, controls pumps, logs events, and communicates with building automation systems. Often, this controller connects through hardwired, point-to-point circuits that span large distances throughout the facility. This setup creates two potential vulnerabilities:

1. If the central controller fails, the entire fuel system can be compromised. A wiring fault or software error may take down the full network of equipment it supports.
2. Cybersecurity is also a concern when using a centralised controller, especially if it is connected to broader network infrastructure. A single breach can expose the entire system.

Whilst these vulnerabilities may be acceptable in some industrial situations, modern data centres demand more robust and secure solutions. Decentralisation in control architecture addresses these concerns.

Distributed logic and redundant communications

Next-generation fuel control systems are adopting architectures with distributed logic, meaning that control is no longer centralised in one location. Instead, each field controller - or "node" - has its own processor and local interface. These nodes operate autonomously, running dedicated programs for their assigned devices (such as tank level sensors or transfer pumps). The nodes then communicate with one another over redundant communication networks.

This peer-to-peer model eliminates the need for a master controller. If one node fails or communication is interrupted, the others continue operating without disruption: pump operations, alarms, and safety protocols all remain active because each node has its own logic and control. This model increases both uptime and safety; it also simplifies installation. Since each node handles its own logic and display, it needs far less wiring than a centralised system. Adding new equipment involves simply installing a new node and connecting it to the network, rather than overhauling the entire system.

Built-in cybersecurity through architecture

A system’s underlying architecture plays a key role in determining its vulnerability to cyber attack. Centralised systems can provide a single entry point to an entire system. Distributed control architectures offer a fundamentally different security profile: without a single controller, there is no single target. Each node operates independently, and the communication network does not require internet-facing protocols.
In some applications, distributed systems have even been configured to work in physical isolation, particularly where EMP protection is required. Attackers seeking to disrupt operations would need to compromise multiple nodes simultaneously - a task substantially more difficult than targeting a central controller. Even if one segment is compromised or disabled, the rest of the system continues to function as designed. This creates a hardened, resilient infrastructure that aligns with zero-trust security principles.

Safety and redundancy by default

Of course, any fuel control system must not just be secure; it must also be safe. Distributed systems offer advantages here as well. Each node can be programmed with local safety interlocks. For example, if a tank level sensor detects overfill, the node managing that tank can shut off the pump without needing permission from a central controller. Other safety features often include dual-pump rotation to prevent uneven wear, leak detection, and temperature or pressure monitoring with response actions. These processes run locally and independently; even if communication between nodes is lost, the safety routines continue.

Additionally, touchscreens or displays on individual nodes allow on-site personnel to access diagnostics and system data from any node on the network. This visibility simplifies troubleshooting and provides greater oversight of real-time conditions.

Scaling with confidence

Data centres require flexibility to grow and adapt. However, traditional control systems make changes like upgrading infrastructure, increasing power, and installing additional backup systems costly and complex, often requiring complete rewiring or reprogramming. Distributed control systems make scaling more manageable. Adding a new generator or day tank, for example, involves connecting a new controller node and loading its program. Since each node contains its own logic and communicates over a shared network, the rest of the system continues operating during the upgrade. This minimises downtime and reduces installation costs. Some systems even allow live diagnostics during commissioning, which can be particularly valuable when downtime is not an option.

A better approach for critical infrastructure

Data centres face incredible pressure to deliver continuous performance, efficiency, and resilience. Backup fuel systems are a vital part of this reliability strategy, but the way these systems are controlled and monitored is changing. Distributed control architectures offer a smarter, safer path forwards.

Preferred Utilities Manufacturing Corporation is committed to supporting data centres in better managing their critical operations. This commitment is reflected in products and solutions like its Preferred Fuel System Controller (FSC), a distributed control architecture that offers all the features described throughout this article, including redundant, masterless, node-based communication for secure, safe, and flexible fuel system control. With Preferred’s expertise, a distributed control architecture can be applied to systems ranging from 60 to 120 day tanks.
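To make the idea of local safety interlocks concrete, below is a minimal, hypothetical sketch of a node that protects its own tank regardless of what the rest of the network is doing. It is illustrative only - not the FSC's actual firmware or interface:

```python
class TankNode:
    """Toy distributed-control node: owns one day tank and one fill pump,
    and enforces its safety interlock locally, with no master controller."""

    HIGH_LEVEL_CUTOFF = 0.95  # stop filling at 95% of tank capacity

    def __init__(self, node_id: str):
        self.node_id = node_id
        self.pump_running = False
        self.level = 0.0  # fraction of capacity, from the local level sensor

    def on_level_reading(self, level: float) -> None:
        """Called for every sensor sample; the interlock runs locally,
        so it still works if the peer network is down."""
        self.level = level
        if self.pump_running and level >= self.HIGH_LEVEL_CUTOFF:
            self.pump_running = False  # local overfill interlock
            print(f"[{self.node_id}] Overfill interlock: pump stopped")

    def start_fill(self) -> None:
        if self.level < self.HIGH_LEVEL_CUTOFF:
            self.pump_running = True

node = TankNode("day-tank-3")
node.start_fill()
for reading in (0.80, 0.90, 0.96):  # simulated sensor samples
    node.on_level_reading(reading)
```

Because every node carries this logic itself, the failure of any single controller or network segment cannot disable overfill protection on the others.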

Pelagos planning ambitious 250MW facility in Gibraltar
Pelagos Data Centres, a developer of large-scale data centre infrastructure, has announced plans to build a major new data centre near the Port of Gibraltar, with capacity of up to 250MW by 2033. Unveiled at a launch event at the offices of Gibraltar’s Chief Minister, Fabian Picardo, the project represents an investment of around £1.8 billion. It is the largest development currently planned in the territory by value, and among the largest in its history. The facility will be built in five phases on a 20,000m² site. The first stage is scheduled to be operational in late 2027, with later phases delivered at intervals of around 18 months.

Transforming Gibraltar’s digital landscape

Funded entirely by private investment and backed by the Government of Gibraltar, the project is positioned as a step forward for the territory’s digital and economic development. It is intended to help meet Europe’s growing demand for data centre capacity, particularly as AI adoption accelerates across industries. The site will operate independently of Gibraltar’s existing power grid and include a public leisure facility as part of the development.

Konstantin Sokolov, Chairman of Pelagos Data Centres, comments, "The scale of this project marks a new chapter for Gibraltar and for Europe's digital capabilities.

"Just as electricity and the internet transformed society in the past, AI is now emerging as the defining technology of our time with the power to redefine entire industries, economies, and communities.

"With our new facility, Pelagos Data Centres is laying the foundation for the next era of AI-driven innovation, positioning Gibraltar as a strategic hub and enabling Europe's brightest minds to unlock the full potential of this revolutionary technology."

Chief Minister Fabian Picardo adds, "I am delighted that Pelagos Data Centres have decided that Gibraltar is the place to establish their first facility and that the whole community will benefit from their massive investment and its huge economic impact.

"I look forward to this project becoming a reality as soon as possible."

Jobs, efficiency, and sustainability

The development is expected to create up to 500 jobs during construction and around 100 permanent positions once operational. Pelagos currently employs 50 full-time staff across London and Gibraltar, and plans to expand its local workforce significantly. The facility will be built to Tier III standards, carrier-neutral, and designed to serve both public and private sector clients. It will pursue international certifications covering information security, quality, sustainability, and energy management, with a targeted Power Usage Effectiveness (PUE) of 1.2 or better.

The project’s sustainability strategy includes powering operations with a combination of renewable energy and liquefied natural gas (LNG), with the aim of achieving net-zero operational emissions by 2030. Cooling systems will be designed to minimise water use, and the company is exploring heat recovery options to support community projects.

Sir Joe Bossano, Minister for Economic Development and Inward Investment, says, "This is the most significant infrastructure investment in Gibraltar since the early 1990s, when the GSLP Government brought state-of-the-art telecommunications as inward investment from the United States and made possible the creation of a centre for online services. Then, we future-proofed Gibraltar's economy. Today, we are doing so again.
"The technology of the future - on which every advanced economy will depend - will be artificial intelligence. AI requires data, processing power, and energy resources on a scale never before seen. "The Ministry for Economic Development will put all its resources at the service of this initiative to ensure that it is delivered in the shortest possible time. In this field, speed of delivery is everything. Gibraltar should be the fastest jurisdiction on the planet when it comes to delivery." A further technical briefing and press conference is planned for the first quarter of 2026, ahead of construction beginning later that year.

Planning approved for new DC in Hertfordshire, UK
Outline planning has been approved for a new 5,000m² data centre at 45 Maylands Avenue in Hemel Hempstead, Hertfordshire, UK. Designed by architecture firm Scott Brownrigg for Northtree Investment Management, the project is intended to provide much-needed digital infrastructure while creating a new, high-quality workplace and public realm.

The proposals seek to maximise space on the industrial site by replacing an existing two-storey warehouse and office building with a new three-storey facility, complete with office accommodation, a substation, car parking, and servicing areas. The architectural company says the designs echo the scale of neighbouring logistics and light industrial buildings, using a contemporary architectural language and high-quality materials to "enhance the frontage to Maylands Avenue." Existing levels on the site will be utilised to maximise available space while reducing the height of the building facing the street.

The current access from Maylands Avenue will be enhanced to provide accessible parking and a point of arrival for guests, pedestrians, and those arriving by bicycle, while access from Cleaveland Way will be gated and dedicated to HGV and staff vehicles. A setback from the roadside creates an opportunity to reinforce the boulevard and improve the quality of the public realm along Maylands Avenue. New landscaping with seating areas is intended to encourage pedestrian and cycle movement and contribute to the visual amenity of the estate.

The sustainability strategy includes a fabric-first approach to the design, as well as a layout that allows for naturally ventilated offices via openable windows while maintaining the security of restricted spaces. A mixture of locally native trees and shrub species will be planted along the southern and western boundaries of the site to create a vegetative buffer for the development and habitat for local wildlife.

Scott Brownrigg claims its proposal for 45 Maylands Avenue will make the most of the available site, densifying industrial land use whilst carefully considering how the occupied spaces can positively contribute to and improve upon the existing street scene.

Yondr to build 550MW Dallas campus
Yondr Group, a global developer, owner, and operator of hyperscale data centres, has secured a 163-acre site just south of Dallas in Lancaster, Texas, USA, to develop a campus with the capacity to accommodate 550MW of critical IT load. The project is situated in one of the nation’s most sought-after data centre corridors.

The acquisition is Yondr’s first announced expansion under its newly appointed CEO, Aaron Wangenheim. The Dallas site joins Yondr’s growing North American footprint, which includes two data centres totalling 96MW in Northern Virginia and a 27MW data centre in Toronto, Canada. The company is reportedly also in advanced discussions for sites in several other tier-one US metros, and continues to expand in Europe, where it has existing assets in London, Frankfurt, and Amsterdam.

“The US is a key market for Yondr’s next phase of growth and Dallas is one of the largest and fastest-growing data centre markets in the world. This investment in Dallas is just the beginning,” says Aaron Wangenheim, CEO of Yondr. “Our proven ability to deliver reliable, resilient, and sustainable data centre solutions at scale - backed by the strength of our investors DigitalBridge and La Caisse - positions us incredibly well to support clients in tier-one markets where they need us.”

In addition to job creation, the project is expected to generate significant tax revenue for the region and open opportunities for local suppliers and contractors throughout construction and operations.

Mayor Clyde Hairston of the City of Lancaster comments, "We are delighted to welcome Yondr, a respected data centre developer and operator, to Lancaster. Yondr has pledged to create full-time jobs as a result of this project and [to] provide significant financial support for local events and community initiatives.

“As Lancaster continues to rise as a shining star of Texas, Yondr’s investment further solidifies our city’s place on the map as a hub for innovation, infrastructure, and opportunity. We look forward to seeing their campus take shape and their impact flourish within our community."

The campus is expected to break ground in 2026.

For more from Yondr Group, click here.

DC BLOX secures $1.15bn for Atlanta data centre
DC BLOX, a provider of connected data centres and fibre networks, has announced that it has closed $1.15 billion (£858 million) in green loan financing for the construction of a data centre campus in Douglas County, Georgia, USA. The funds will support the development of a 120MW data centre and include campus expansion to support an additional 80MW, available in 2027.

“Securing this capital confirms confidence in our execution track record,” comments Melih Ileri, SVP of Capital Markets & Strategy at DC BLOX. “Continuing to deliver our projects on time and with excellence has earned us the trust of our customers and investors, leading to this historic growth in our business.”

This project comes on the heels of recently announced DC BLOX projects, including multiple hyperscale edge nodes across the US Southeast. With additional hyperscale-ready data centre capacity available in Conyers and Douglasville, Georgia, DC BLOX believes it is set to rapidly expand its presence around Atlanta.

“With this latest project announcement, DC BLOX continues to deliver on its mission to build the foundational digital infrastructure needed to drive the Southeast’s growing economy,” claims Jeff Uphues, CEO of DC BLOX. “Atlanta is the fastest-growing data centre market in the US today and we are proud to enable our customers to expand their footprint in our region.”

This financing follows the prior $265 million (£197.5 million) green loan secured from industry lenders, as well as the growth equity committed by Post Road Group in the fourth quarter of 2024.

“The DC BLOX management team has done a terrific job positioning the business for success in the Southeast, with a consistent focus on serving the customer and community,” says Michael Bogdan, Managing Partner at Post Road Group. “We are thankful to all our capital partners who have helped capitalise the company to meet the tremendous hyperscale and edge growth the company has experienced.”

Those involved in the deal

• ING Capital served as Structuring and Administrative Agent
• ING, Mizuho Bank, and Natixis Corporate & Investment Banking (Natixis CIB) served as Initial Coordinating Lead Arrangers and Joint Bookrunners
• First Citizens Bank served as Coordinating Lead Arranger
• CoBank ACB, LBBW New York Branch, The Toronto-Dominion Bank New York Branch, and KeyBank National Association served as Joint Lead Arrangers
• The Huntington National Bank served as Mandated Lead Arranger
• ING and Natixis CIB also served as Joint Green Loan Coordinators
• A&O Shearman served as counsel to DC BLOX
• Milbank served as counsel to the lenders

For more from DC BLOX, click here.

'Cranes key to productivity in data centre construction'
With companies and consumers increasingly reliant on cloud-based computing and services, data centre construction has moved higher up the agenda across the world. Recently re-categorised as 'Critical National Infrastructure' in the UK, the market is highly competitive and demand for new facilities is high. However, these projects are very sensitive to risk. Challenges include the highly technical nature of some of the work, which relies on a specialist supply chain, and long lead times for equipment such as servers, computer chips, and backup generators - in some cases, up to two years.

Time is of the essence: every day of delay during a construction programme can have a multimillion-pound impact in lost income, and project teams can be penalised for falling behind. However, help is at hand from an unexpected source: the cranage provider.

Cutting construction time by half

Marr Contracting, an Australian provider of heavy-lift luffing tower cranes and cranage services, has been working on data centres around the world for several years. Its methodology is helping data centre projects reach completion in half the average time.

“The first time that I spoke to a client about their data centre project, they told me that they were struggling with the lifting requirements,” explains Simon Marr, Managing Director at Marr Contracting. “There were lots of heavy precast components, and sequencing them correctly alongside other elements of the programme was proving difficult.

“It was a traditional set-up with mobile cranes sitting outside the building structure, which made the site congested and ‘confused.’

"There was a clash between the critical path works of installing the in-ground services and the construction of the main structure, as the required mobile crane locations were hindering the in-ground works and the in-ground works were hindering where the mobile cranes could be placed. This in turn resulted in an extended programme.”

The team at Marr suggested a different approach: place fewer, larger-capacity cranes in strategic locations so that they could service the whole site and allow the in-ground works to proceed concurrently. By adopting this philosophy, the project was completed in half the time of a typical build. Marr has partnered with the client on every development since, with the latest project completed in just 35 weeks.

“It’s been transformational,” claims Simon. “The solution removes complexity and improves productivity by allowing construction to happen across multiple work fronts. This, in turn, reduces the number of cranes on the project.”

Early engagement is key

Simon believes early engagement is key to achieving productivity and efficiency gains on data centre projects. He says, “There is often a disconnect between the engineering and planning of a project and how cranes integrate into the project’s construction logic.

"The current approach, where the end-user of the crane issues a list of requirements for a project with no visibility on the logic behind how these cranes will align with the construction methodology, is flawed.

“It creates a situation where more cranes are usually added to an already congested site to fill the gap that could have been covered by one single tower crane.”

One of the main pressure points specific to data centre projects is the requirement around services. “The challenge with data centres is that a lot of power and water is needed, which means lots of in-ground services,” continues Simon.
“The ideal would be to build these together, but that’s not possible with a traditional cranage solution, because you’re making a compromise between installing the in-ground services and delaying that work so that the mobile cranes can support the construction of the structure. Ultimately, the programme falls behind.

“We’ve seen clients try to save money by downsizing the tower crane and putting it in the centre of the server hall. But this hinders the completion of the main structure and delays the internal fit-out works.

“Our approach is to use cranes that can do heavier lifts but that take up a smaller area, away from the critical path and outside the building structure. The crane solution should allow the concurrent delivery of critical path works - in turn, making the crane a servant to the project, not the hero.

“With more sites being developed in congested urban areas, particularly new, taller data centres with heavier components, this is going to be more of an issue in the future.”

Thinking big

One of the benefits of early engagement and strategically deploying heavy-lift tower cranes is that it opens the door for the constructor to “think big” with their construction methodology. This appeals to the data centre market because it enables constructors to use design for manufacture and assembly (DfMA). By using prefabricated, pre-engineered modules, DfMA allows for the rapid construction and deployment of data centre facilities. Fewer, heavier lifts should reduce risk and improve safety, because more components can be assembled offsite, delivered to the site, and then placed through fewer major crane lifts instead of multiple smaller lifts.

Simon claims, “By seeking advice from cranage experts early in the bid and design development stage of a project, the project can benefit from lower project costs, improved safety, higher quality, and quicker construction."

Datum launches second Manchester data centre, MCR2
UK data centre provider Datum Datacentres has officially launched MCR2, its newest data centre in Manchester, marking a milestone for both the company and the region’s £500 million regeneration initiative. The well-attended opening ceremony took place on Thursday, 26 June, celebrating the completion of the almost two-year construction project and signalling a boost for Manchester’s position as a UK tech hub.

The ribbon was cut by Emma Taylor, Labour Councillor for the Sharston Ward, and the event was attended by distinguished guests including members of Manchester City Council, who collaborated closely with Datum throughout the project. Their joint efforts sought to ensure the facility aligns with the goals of Wythenshawe's ongoing regeneration, creating a resource to support the community's sustainable growth and innovation.

Commenting on the launch, Matt Edgley, COO at Datum Datacentres, says, “We are thrilled to have officially opened MCR2. From the outset, our vision for MCR2 was to set new standards in operational resilience and reliability while embedding sustainability at its core. This facility stands as a testament to our commitment to fostering positive social and environmental change, supporting the local economy, and playing an active role in the regeneration of Wythenshawe.”

MCR2 was built in response to Manchester’s rapidly growing demand for data centre infrastructure. The new facility provides capacity for up to 1,200 racks, each capable of up to 30kW power delivery on dual circuits, supported by 2N-level resilience. MCR2 offers a design PUE of 1.25, a 100% power availability SLA, and a focus on sustainability, security, and "operational excellence." Security is further bolstered by an on-site, police-linked Alarm Receiving Centre and the site’s NSI Gold certification (BS5979 SOC).

Emma Taylor, Labour Councillor for the Sharston Ward, comments, “Data centres are a critical part of our data infrastructure in Manchester and, as anchors for investment, play a really important part in supporting local growth. This multimillion-pound investment by Datum really demonstrates confidence in the region and I’m really excited for what the future holds. As someone who grew up just metres from the site of what is now MCR2, I’d like to thank the team at Datum for bringing a bit of life back into the fringes of Wythenshawe town centre.”

Datum’s design and construction partner, Keysource, delivered the facility. Jon Healy, Managing Director at Keysource, a Salute company, adds, “It’s been great to deliver on another successful project with Datum Datacentres. We set out to challenge the status quo and drive the highest possible standards across the project design, construction, and sustainability. Collaboration between Datum and Keysource has been at the fore to deliver on key business drivers and is testament to our talented people involved. We look forward to the next project.”

For more from Datum Datacentres, click here.
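For context on the PUE figure above: Power Usage Effectiveness is total facility power divided by IT power, so a design PUE of 1.25 means that for every watt delivered to IT equipment, roughly a further quarter of a watt goes on cooling, power conversion, and other overheads. A quick sketch with illustrative numbers (a hypothetical full-load scenario, not Datum's measured data):

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.
    1.0 would mean zero overhead; modern builds often target ~1.2-1.4."""
    return total_facility_kw / it_load_kw

# Hypothetical worst case: all 1,200 racks drawing their full 30kW
it_load = 1200 * 30            # 36,000 kW of IT load
overhead = it_load * 0.25      # cooling, UPS losses, lighting, etc.
print(f"PUE = {pue(it_load + overhead, it_load):.2f}")  # -> 1.25
```

In practice no hall runs every rack at its maximum rating, and real-world PUE varies with load and weather, which is why operators quote a "design" PUE.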


