Data Centre Architecture Insights & Best Practices


Reshaping data infrastructure to help carriers digitally transform
At MWC Barcelona 2026, Yuan Yuan, President of Huawei Data Storage Product Line, shared Chinese multinational technology company Huawei's key insights and innovations for enabling carriers to plan their data infrastructure, address challenges in AI adoption, and prepare for IT architecture transformation in the AI era.

Data preparation for AI: From dormancy to awakening

In the age of AI, data is an essential asset. Yuan noted that in the past two years, over 90% of enterprises have actively embraced AI for business innovation, but fewer than 10% have successfully mastered and scaled the technology. There are three primary challenges: persistent data silos that hinder data collaboration across regions and organisations; a lack of quality data supply, especially industry-specific knowledge; and inefficiencies in data preparation phases such as collection, cleansing, and labelling. As a result, AI applications fall short of commercial viability, raising doubts about the return on investment.

Yuan predicts, "In the future, cold data will become a thing of the past. Data will shift from 'offline' to 'always online,' and retention policies will move from being compliance-driven to a principle of retaining and never deleting. Consequently, data volumes will expand from petabytes to exabytes, which will drive demand for greener, more efficient data infrastructure."

Architectural transformation: From storing data to storing knowledge and memory

As AI agents become the primary consumers of data, data infrastructure must evolve to embrace new data paradigms, including vector, graph, and key-value (KV) semantics. To eliminate AI hallucinations and enable continual AI evolution, data infrastructure must be capable of storing knowledge and memory. Yuan discussed Huawei's AI data platform, an innovative solution that integrates knowledge, memory, and inference acceleration services into a single storage system.
This consolidated approach significantly reduces system complexity and O&M costs. The platform also delivers a major performance upgrade: inference efficiency (measured in tokens generated per second) is multiplied, while latency (time to first token) is reduced by 90%. Furthermore, the continual evolution of data, knowledge, and memory makes AI agents smarter over time.

As Yuan explains, "In the future, every carrier will need its own AI data platform to help agents understand business processes, acquire domain-specific expertise, and iterate and upgrade rapidly. Otherwise, AI will remain nothing more than an expensive toy."

AI adoption planning: From AI exploration to AI-driven service upgrades

Although many carriers have made AI a strategic priority and are beginning to adopt it, significant challenges remain in real-world deployment: inference failure, inference cost, and inference speed. Yuan presented an intelligent computing service platform, jointly developed with a Chinese carrier, that tackles these challenges. The platform uses KV cache technology to improve storage resource utilisation and supports inference applications across different large models, such as DeepSeek and Qwen. It improves cost-effectiveness by eliminating repeated computation across queries. Through the collaboration of on-chip memory, DRAM, and AI storage, the platform enables PB-scale KV cache storage. This improves overall throughput by more than 10 times, reduces inference costs by about 50%, and shortens response times to less than one second. In addition, algorithm optimisation addresses challenges like low KV cache hit ratios and inference failures caused by long-sequence inputs in research report analysis.

Serving as the foundation for AI, the platform has been deployed at scale across the carrier's group to enable multidimensional innovation across services, including internal IT systems, B2C services, B2B services, and B2H services.
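The repeated-computation saving described above can be illustrated with a toy prefix cache. This is a deliberate simplification of KV caching in LLM inference (real systems cache attention key/value tensors and tier them across HBM, DRAM, and external storage, as the platform described here does); the class and figures below are invented for illustration, not taken from Huawei's platform.

```python
# Toy sketch of prefix-based KV caching: repeated queries that share a
# prompt prefix reuse the cached state instead of recomputing the prefill.
import hashlib

class PrefixKVCache:
    def __init__(self):
        self.store = {}   # prefix hash -> cached "KV state"
        self.hits = 0
        self.misses = 0

    def _key(self, tokens):
        return hashlib.sha256(" ".join(tokens).encode()).hexdigest()

    def get_or_compute(self, tokens, compute_fn):
        k = self._key(tokens)
        if k in self.store:
            self.hits += 1            # reuse: no recomputation needed
            return self.store[k]
        self.misses += 1
        state = compute_fn(tokens)    # the expensive prefill work
        self.store[k] = state
        return state

cache = PrefixKVCache()
shared_prompt = ["You", "are", "a", "helpful", "assistant"]
for _ in range(3):                    # repeated queries share one prefill
    cache.get_or_compute(shared_prompt, lambda t: {"kv": len(t)})
print(cache.hits, cache.misses)       # -> 2 1
```

The cost saving claimed for the production platform comes from the same mechanism: the larger and more durable the cache tier, the more queries hit it rather than recomputing.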
Yuan says, "Planning AI training and inference platforms requires more than focusing on computing power and models; deep collaboration between storage and compute is also essential to improve system-level efficiency and user experience."

Yuan highlighted that AI is reshaping data infrastructure. In the AI era, storage systems will evolve into intelligent engines that not only store critical data assets, but also serve as the knowledge sources and memory carriers for the continuous evolution of AI agents. He called on carriers to prioritise the accumulation and protection of quality data, and to plan and build a unified AI data platform that supports a wide range of large model applications while enabling service innovation for both internal operations and external offerings.

Huawei says it will continue to advance technological innovation and architectural upgrades to help carriers digitally transform.

For more from Huawei, click here.

Crestchic unveils 600kW liquid-cooled loadbank
Crestchic, a UK manufacturer of loadbanks and transformers for testing power systems and data centres, has launched its new 600kW Liquid Cooled Loadbank at Data Centre World London 2026, aimed at supporting commissioning in the growing liquid-cooled data centre market.

As rack power densities increase, operators are increasingly adopting liquid cooling to manage higher thermal loads. Crestchic says the new system has been designed to provide accurate thermal validation and precision electrical testing for liquid-cooled infrastructure.

The 600kW loadbank delivers up to 648kW at 415V and features stable ΔT thermal control to ±0.5°C, enabling repeatable testing during commissioning. Temperature accuracy is maintained regardless of flow variation, while built-in protections cover flow, pressure, overload, underload, and thermal shock.

Designed for liquid-cooled data centre commissioning

The unit uses a single-vessel architecture, reducing its footprint compared with multi-vessel systems at similar power levels. This compact design makes it easier to position in plant rooms and simplifies transport and handling. The platform includes a stackable structure, flush-mounted connections, heavy-duty castors, and dual-side forklift pockets, allowing two units to be transported within a standard-height ISO shipping container.

The system integrates with Crestchic’s VCS software, providing live monitoring of supply and hydraulic data, real-time load profiling, and the ability to cluster up to 240 loadbanks for hybrid air- and liquid-cooled testing.

Paul Brickman, Commercial Director at Crestchic, says, “The move towards liquid cooling is accelerating as rack densities increase, particularly with AI and high-performance computing workloads.

“Our new 600kW Liquid Cooled Loadbank has been designed from the ground up to serve this market, giving commissioning engineers the precision, reliability, and control they need to bring critical infrastructure online with confidence."
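The ΔT control described above rests on a simple thermal relationship: the heat carried away by the coolant is Q = ṁ · c_p · ΔT. A rough sketch of the arithmetic, where the 10 K coolant temperature rise is an assumed figure for illustration, not a Crestchic specification:

```python
# Coolant flow needed to absorb a given heat load: Q = m_dot * c_p * dT.
# The delta-T value is an illustrative assumption, not a Crestchic spec.
C_P_WATER = 4186  # specific heat of water, J/(kg*K)

def flow_kg_per_s(power_w: float, delta_t_k: float) -> float:
    """Mass flow required to carry power_w at a delta_t_k coolant rise."""
    return power_w / (C_P_WATER * delta_t_k)

# A 600 kW load with an assumed 10 K rise needs roughly 14.3 kg/s of water;
# halving the allowed temperature rise doubles the required flow.
print(round(flow_kg_per_s(600_000, 10), 1))  # -> 14.3
```

This inverse relationship between allowed ΔT and required flow is why holding temperature steady across flow variation matters during commissioning tests.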
The 600kW Liquid Cooled Loadbank is available for sale or rental through Crestchic’s global network. For more from Crestchic, click here.

McLaren appointed for 70MW London data centre phase
UK construction firm McLaren Construction has been appointed to deliver the shell and core of the first 70MW building at global data centre developer and operator Ada Infrastructure’s Docklands campus in London.

The project marks Ada Infrastructure’s first European development and forms part of a planned 210MW campus in the Royal Docks. McLaren’s contract also covers enabling infrastructure for the wider site and provision for a future district heating network.

The development will comprise three 70MW data centre buildings, alongside a community facility and public realm improvements, including upgraded pedestrian and cycle routes along the River Thames and works to the river wall, including a new flood defence barrier.

The buildings will incorporate air and liquid cooling systems designed to operate without water evaporation, as well as low-carbon construction materials and connection points for district heating. The campus is targeting a BREEAM Excellent rating and is designed to support AI and high-density workloads.

A 210MW campus in London's Royal Docks

James Moloney, Head of Ada Infrastructure EMEA, says, “The appointment of McLaren Construction is an important step in bringing this vision to life.

"[Its] experience delivering complex data centre and infrastructure projects will be instrumental as we transform this long-vacant site into a sustainable, future-focused campus that also enhances public spaces and contributes to the wider regeneration of the Royal Docks.”

McLaren’s supply chain partners include Keltbray for CFA piling, Menard for BMC piling, Gallagher for groundworks and civils, and William Hare for the steel frame. The shell and core contract is scheduled for completion in mid-2028, with the first building expected to be ready for occupation by the end of 2028.

Expert cleaning for critical environments
IT Cleaning is the UK’s trusted authority in specialist IT and technical cleaning to ISO 14644-1:2022 Class 8, delivering expert services where precision is critical and failure is not an option. From data centres and data halls to server rooms and comms rooms, the company protects vital infrastructure with meticulous, industry-approved cleaning solutions.

Every service is carried out by highly trained technicians using advanced anti-static methods, designed to safeguard sensitive equipment and reduce operational risk. With minimal disruption and maximum attention to detail, IT Cleaning ensures technology environments remain clean, compliant, and performance-ready.

Operating nationwide, the company cleans for organisations that demand absolute reliability, strict compliance, and exceptional standards. Its reputation is built on technical expertise, consistent delivery, and a no-compromise approach to quality. For businesses that depend on uninterrupted IT performance, IT Cleaning is the specialist cleaning partner of choice.

Click here to visit the company's website and find out more.

Thorn, Zumtobel to exhibit at Data Centre World
Thorn and Zumtobel, both lighting brands of the Zumtobel Group, are to present a "unified approach" to data centre lighting at Data Centre World 2026.

The companies say the focus will be on three operational priorities for data centre operators and delivery teams: reduced energy consumption, reliable operation, and consistent control across white space, plant, circulation, and perimeter areas.

The stand will outline how a coordinated lighting and controls strategy can support specification, installation, and ongoing operation across different data centre environments. The Zumtobel Group says its approach is intended to support consistency across projects, while also simplifying long-term maintenance and operational management.

Lighting controls for data centres

A central element of the stand will be the LITECOM control platform, presented as a way to connect a defined portfolio of luminaires across different zones of a data centre. The companies say this is intended to support scheduling, presence detection, daylight strategies, scene setting, and portfolio standardisation.

The stand will also feature TECTON II, shown as part of a continuous-row lighting infrastructure approach, which is designed to support rapid, tool-free assembly and future adaptation. Lighting applications on show will cover white space, technical areas, offices, and exterior zones.

Products listed for demonstration include:

• Thorn: Aquaforce Pro, ForceLED, Piazza, Omega Pro 2, IQ Beam
• Zumtobel: IZURA, TECTON II, MELLOW LIGHT, AMPHIBIA, LANOS

All are shown as being controlled via LITECOM.

The stand design itself is intended to reflect the Zumtobel Group's stated sustainability principles, using reused and modular components from previous events, with minimal new-build elements. In addition, graphics have been consolidated to reduce printing and waste.
Neil Raithatha, Head of Marketing, Thorn and Zumtobel Lighting UK & Ireland, notes, “Data centre customers need lighting that is consistent, efficient, and straightforward to manage. “Our presentation this year brings together proven luminaires with a control platform that helps project teams deliver quickly and run reliably, from the white space to the perimeter.” Thorn and Zumtobel will be exhibiting at Stand F140 at Excel London on 4–5 March 2026. For more from Thorn and Zumtobel, click here.

Data centre waste heat could warm millions of UK homes
New analysis from EnergiRaven, a UK provider of energy management software, and Viegand Maagøe, a Danish sustainability and ESG consultancy, suggests that waste heat from the next generation of UK data centres could be used to heat more than 3.5 million homes by 2035, provided the necessary heat network infrastructure is developed.

The research estimates that projected growth in data centres could generate enough recoverable heat to supply between 3.5 million and 6.3 million homes, depending on data centre design efficiency and other technical factors. The report argues that without investment in large-scale heat network infrastructure, much of this heat will be lost.

The study highlights a risk that the UK will expand data centre and AI infrastructure without making use of the waste heat produced, missing an opportunity to reduce household energy costs and improve energy resilience.

“Our national grid will be powering these data centres - it’s madness to invest in the additional power these facilities will need and waste so much of it as unused heat, driving up costs for taxpayers and bill payers,” argues Simon Kerr, Head of Heat Networks at EnergiRaven. “Microsoft has said it wants its data centres to be ‘good neighbours’ - giving heat back to their communities should be an obvious first step.”

Regional opportunities and proximity to housing

The research points to examples where data centres are located close to both new housing developments and areas affected by fuel poverty. Around Greater Manchester, for example, 15,000 homes are planned in the Victoria North development, with a further 14,000 to 20,000 planned in Adlington. The area also includes more than a dozen existing data centres, with additional facilities planned. According to the analysis, these sites could potentially supply heat to nearby new housing, reducing the need for individual gas boilers and supporting lower-carbon heating.
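The study's underlying inputs are not public, so the sketch below shows only how an estimate of this shape can be assembled. Every figure in it is an assumption chosen for illustration, not data from the EnergiRaven/Viegand Maagøe analysis.

```python
# Back-of-the-envelope waste heat estimate. All inputs are illustrative
# assumptions -- not figures from the EnergiRaven/Viegand Maagoe study.
IT_LOAD_GW = 10           # assumed future UK data centre IT load
HOURS_PER_YEAR = 8760
RECOVERY_FRACTION = 0.6   # assumed share of waste heat recoverable to a network
HOME_DEMAND_MWH = 10      # assumed annual heat demand per home, MWh

# Nearly all IT power ends up as low-grade heat; recover a fraction of it.
heat_twh = IT_LOAD_GW * HOURS_PER_YEAR * RECOVERY_FRACTION / 1000
homes_millions = heat_twh * 1_000_000 / HOME_DEMAND_MWH / 1_000_000

print(f"{heat_twh:.1f} TWh/yr recoverable -> ~{homes_millions:.1f}M homes")
```

With these assumed inputs the sketch lands at roughly 5.3 million homes, inside the report's 3.5 to 6.3 million range; the spread in the published estimate comes from varying exactly these kinds of parameters (load, recovery efficiency, household demand).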
Moreover, the study maps how similar patterns could be replicated across the UK, linking waste heat sources with residential demand through heat networks.

Using waste heat for space heating is common in parts of northern Europe, particularly in Nordic countries. There, waste heat from sources such as data centres, power plants, incinerators, and sewage treatment facilities is often connected to district heat networks, supplying homes via heat interface units instead of individual boilers.

In the UK, a number of cities have been designated as Heat Network Zones, where heat networks have been identified as a lower-cost, low-carbon heating option. From 2026, Ofgem will take over regulation of heat networks and new technical standards will be introduced through the Heat Network Technical Assurance Scheme, aimed at improving consumer and investor confidence.

Heat networks, regulation, and policy context

The Warm Homes Plan includes a target to double the proportion of heat demand met by heat networks in England to 7% by 2035, with longer-term ambitions for heat networks to supply around 20% of heat by 2050. The plan also includes funding support for heat network development.

However, Simon argues that current policy does not fully reflect the scale of opportunity from large waste heat sources, continuing, “Current policy in the UK is nudging us towards a patchwork of small networks that might connect heat from a single source to a single housing development. If we continue down this road, we will end up with cherry-picking and small, private monopolies, rather than national infrastructure that can take advantage of the full scale of waste heat sources around the country.

“We know that investment in heat networks and thermal infrastructure consistently drives bills down over time and delivers reliable carbon savings, but these projects require long-term finance.
"Government-backed low-interest loans, pension fund investment, and institutions such as GB Energy all have a role to play in bridging this gap, as does proactivity from local governments, who can take vital first steps by joining forces to map out potential networks and start laying the groundwork with feasibility studies.”

Peter Maagøe Petersen, Director and Partner at Viegand Maagøe, adds, “We should see waste heat as a national opportunity. In addition to heating homes, heat highways can also reduce strain on the electricity grid and act as a large thermal battery, allowing renewables to keep operating even when usage is low and reducing reliance on imported fossil fuels.

"As this data shows, the UK has all the pieces it needs to start taking advantage of waste heat - it just needs to join them together. With denser cities than its Nordic neighbours and a wealth of waste heat on the horizon, the UK is a fantastic place for heat networks. It needs to start focusing on heat as much as it does electricity - not just for lower bills, but for future jobs and energy security.”

PFlow highlights VRC use in data centres
PFlow Industries, a US manufacturer of vertical reciprocating conveyor (VRC) technology, has highlighted the use of VRCs within data centre environments, focusing on its M Series and F Series systems.

VRCs are typically incorporated during the design phase of a facility to support vertical movement of equipment and materials. PFlow states that, compared with conventional lifting approaches, VRCs can be integrated with automated material handling systems and used for intermittent or continuous operation, with routine maintenance requirements kept relatively low.

M Series and F Series applications

The PFlow M Series 2-Post Mechanical Material Lift is designed for higher-cycle environments where frequent vertical movement of equipment is required. The company says the system can handle loads of up to 10,000lb (4,536kg) and operate across multiple floor levels. Standard travel speed is stated as 25 feet (7.62 metres) per minute, with higher speeds available depending on configuration. The M Series is designed for transporting items such as servers, racks, and other technical equipment, with safety features intended to support controlled movement and equipment protection.

The PFlow F Series 4-Post Mechanical VRC is positioned for heavier loads and larger equipment. The standard lifting capacity is up to 50,000lb (22,680kg), with higher capacities available through customisation. The design allows loading and unloading from all four sides and is intended to accommodate oversized infrastructure, including battery systems and large server assemblies. PFlow says the F Series is engineered for high-cycle operation and flexible traffic patterns within facilities.

The company adds that its VRCs are designed as permanent infrastructure elements within buildings rather than standalone equipment. It states that all systems are engineered to meet ASME B20.1 conveyor requirements and are intended for continuous operation in environments where uptime is critical.
Dan Hext, National Sales Director at PFlow Industries, comments, “Every industry is under cost and compliance pressure. Our VRCs help facility operators achieve maximum throughput and efficiency while maintaining the highest levels of safety.”

The blueprint for tomorrow’s sustainable data centres
In this exclusive article for DCNN, Francesco Fontana, Enterprise Marketing and Alliances Director at Aruba, explores how operators can embed sustainability, flexibility, and high-density engineering into data centre design to meet the accelerating demands of AI:

Sustainable design is now central to AI-scale data centres

The explosive growth of AI is straining data centre capacity, prompting operators to both upgrade existing sites and plan large-scale new-builds. Europe’s AI market, projected to grow at a 36.4% CAGR through 2033, is driving this wave of investment as operators scramble to match demand.

Operators face mounting pressure to address the environmental costs of rapid growth, as expansion alone cannot meet the challenge. The path forward lies in designing facilities that are sustainable by default, while balancing resilience, efficiency, and adaptability to ensure data centres can support the accelerating demands of AI.

The cost of progress

Customer expectations for data centres have shifted dramatically in recent years. The rapid uptake of AI and cloud technologies is fuelling demand for colocation environments that are scalable, flexible, and capable of supporting constantly evolving workloads and managing surging volumes of data.

But this evolution comes at a cost. AI and other compute-intensive applications demand vast amounts of processing power, which in turn places new strains on both energy and water resources. Global data centre electricity usage is projected to reach 1,050 terawatt-hours (TWh) by 2026, which would place data centres among the world’s top five national electricity consumers.

This rising consumption has put data centres firmly under the spotlight. Regulators, customers, and the wider public are scrutinising how facilities are designed and operated, making it clear that sustainability can no longer be treated as optional.
To meet these new expectations, operators must balance performance with environmental responsibility, rethinking infrastructure from the ground up.

Steps to a next-generation sustainable data centre

1. Embed sustainability from day one

Facilities designed 'green by default' are better placed to meet both operational and environmental goals, which is why sustainability can’t be an afterthought. This requires renewable energy integration from the outset through on-site solar, hydroelectric systems, or long-term clean power purchase agreements. Operators across Europe are also committing to industry frameworks like the Climate Neutral Data Centre Pact and the European Green Digital Coalition, ensuring progress is independently verified.

Embedding sustainability into the design and operation of data centres not only reduces carbon intensity but also creates long-term efficiency gains that help manage AI’s heavy energy demands.

2. Build for flexibility and scale

Modern businesses need infrastructures that can grow with them. For operators, this means creating resilient IT environments with the space and power capacity to support future demand. Offering adaptable options - such as private cages and cross-connects - gives customers the freedom to scale resources up or down, as well as tailor facilities to their unique needs.

This flexibility underpins cloud expansion, digital transformation initiatives, and the integration of new applications - all while helping customers remain agile in a competitive market.

3. Engineer for the AI workload

AI and high-performance computing (HPC) workloads demand far more power and cooling capacity than traditional IT environments, and conventional designs are struggling to keep up. Facilities must be engineered specifically for high-density deployments.
Advanced cooling technologies, such as liquid cooling, allow operators to safely and sustainably support power densities far above 20 kW per rack, essential for next-generation GPUs and other AI-driven infrastructure. Rethinking power distribution, airflow management, and rack layout ensures high-density computing can be delivered efficiently without compromising stability or sustainability.

4. Location matters

Where a data centre is built plays a major role in its sustainability profile, and regional providers often offer greater flexibility and more personalised services to meet customer needs. Italy, for example, has become a key destination for new facilities. Its cloud computing market is estimated at €10.8 billion (£9.4 billion) in 2025 and is forecast to more than double to €27.4 billion (£23.9 billion) by 2030, growing at a CAGR of 20.6%. Significant investments from hyperscalers in recent years are accelerating growth, making the region a hotspot for operators looking to expand in Europe.

5. Stay compliant with regulations and certifications

Strong regulatory and environmental compliance is fundamental. Frameworks such as the General Data Protection Regulation (GDPR) safeguard data, while certifications like LEED (Leadership in Energy and Environmental Design) demonstrate energy efficiency and environmental accountability. Adhering to these standards ensures legal compliance, but it also improves operational transparency and strengthens credibility with customers.

Sustainability and performance as partners

The data centres of tomorrow must scale sustainably to meet the demands of AI, cloud, and digital transformation. This requires embedding efficiency and adaptability into every stage of design and operation. Investment in renewable energy, such as hydro and solar, will be crucial to reducing emissions. Equally, innovations like liquid cooling will help manage the thermal loads of compute-heavy AI environments.
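The Italian market figures quoted earlier can be sanity-checked with the standard compound-growth formula, value × (1 + rate)^years; a small sketch:

```python
# CAGR sanity check of the market figures quoted in the article:
# EUR 10.8bn in 2025 growing at 20.6% per year for five years to 2030.
def compound(value: float, rate: float, years: int) -> float:
    """Project a value forward at a constant compound annual growth rate."""
    return value * (1 + rate) ** years

projected = compound(10.8, 0.206, 5)
print(f"EUR {projected:.1f}bn")  # -> EUR 27.6bn, consistent with the ~27.4bn forecast
```

The small gap between ~27.6 and the quoted 27.4 simply reflects rounding in the published CAGR.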
Emerging technologies - including agentic AI systems that autonomously optimise energy use and breakthroughs in quantum computing - promise to take efficiency even further.

In short, sustainability and performance are no longer competing objectives; together, they form the foundation of a resilient digital future where AI can thrive without compromising the planet.

For more from Aruba, click here.

InfraPartners, JLL partner to accelerate AI DC delivery
InfraPartners, a designer and builder of prefabricated AI data centres, and JLL, a global commercial real estate and investment management company, have formed a strategic agreement to accelerate the development and operation of AI data centres.

The partnership brings together InfraPartners’ prefabricated AI data centre designs and JLL’s capabilities in site selection, project management, construction oversight, financial structuring, and facilities management. The companies state that the combined model is intended to address persistent challenges in data centre development, particularly the period between site identification and operational readiness.

As investment in AI infrastructure grows, operators increasingly require deployment models that offer predictable schedules, reduced risk, and scalable designs suitable for GPU-heavy environments. Data centre construction continues to face risks associated with labour shortages, schedule delays, and complex financing. InfraPartners and JLL say they aim to manage these issues jointly by integrating design, prefabrication, delivery, and long-term operations into a single framework.

Prefabrication and integrated delivery for AI infrastructure

“Our clients are asking for faster, lower-risk routes to delivering AI infrastructure,” says Michalis Grigoratos, CEO at InfraPartners. “Our prefabricated, upgradeable digital infrastructure integrates seamlessly with JLL’s expertise across the full project lifecycle, so, together, we’re focused on providing a superior product that keeps pace with AI infrastructure changes and market growth.

"Our globally scalable, repeatable approach includes site selection, prefabrication, and long-term operations, reducing time-to-first-token and maximising performance across the lifecycle.”

Matt Landek, JLL Division President, Data Centers and Critical Environments, adds, “AI infrastructure demands a new approach - one that’s as dynamic and high-performing as the workloads it supports.
“With InfraPartners, we are delivering a unique blueprint that brings real estate, engineering, and operational precision into a unified model.”

Kristen Vosmaer, Managing Director at JLL, oversees global programme management, including JLL White Space and facilities management solutions, and supports delivery of the partnership. He comments, “This is one of the first collaborations to fully integrate data centre design, manufacturing, construction, commissioning, computer deployment, and lifecycle management for institutional-grade real estate delivery, marking a significant shift in shortening the time to monetisation for how mission-critical infrastructure assets are developed and maintained.”

The companies plan to offer end-to-end capabilities intended to accelerate the delivery of AI-ready facilities for enterprise, government, and cloud operators. Initial deployment efforts will focus on high-growth AI markets in EMEA and the United States from Q1 2026, with plans to expand into additional regions.

For more from InfraPartners, click here.

America’s AI revolution needs the right infrastructure
In this article, Ivo Ivanov, CEO of DE-CIX, argues his case for why America’s AI revolution won’t happen without the right kind of infrastructure:

Boom or bust

Artificial intelligence might well be the defining technology of our time, but its future rests on something much less tangible hiding beneath the surface: latency. Every AI service, whether training models across distributed GPU-as-a-Service communities or running inference close to end users, depends on how fast, how securely, and how cost-effectively data can move. Network latency is simply the delay in traffic transmission caused by the distance the data needs to travel: the lower the latency (i.e. the faster the transmission), the better the performance of everything from autonomous vehicles to the applications we carry in our pockets.

There’s always been a trend of technology applications outpacing network capabilities, but we’re feeling it more acutely now due to the sheer pace of AI growth. Depending on where you were in 2012, the average latency for the top 20 applications could be up to or more than 200 milliseconds. Today, there’s virtually no application in the top 100 that would function effectively with that kind of latency.

That’s why internet exchanges (IXs) have begun to dominate the conversation. An IX is like an airport for data. Just as an airport coordinates the safe landing and departure of dozens of airlines, allowing them to exchange passengers and cargo seamlessly, an IX brings together networks, clouds, and content platforms to seamlessly exchange traffic. The result is faster connections, lower latency, greater efficiency, and a smoother journey for every digital service that depends on it.

Deploying these IXs creates what is known as “data gravity”, a magnetic pull that draws in networks, content, and investment. Once this gravity takes hold, ecosystems begin to grow on their own, localising data and services, reducing latency, and fuelling economic growth.
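The distance-latency relationship described above can be put in numbers: light in optical fibre travels at roughly 200,000 km/s (about two-thirds of its vacuum speed), so path length sets a hard floor under latency before any routing or queuing delay. A rough sketch, with illustrative distances that are assumptions rather than DE-CIX measurements:

```python
# Propagation delay in optical fibre: ~200,000 km/s (refractive index ~1.5).
# The distances below are illustrative assumptions, not DE-CIX figures.
SPEED_IN_FIBRE_KM_PER_S = 200_000

def round_trip_ms(path_km: float) -> float:
    """Best-case round-trip propagation delay, ignoring routing and queuing."""
    return 2 * path_km / SPEED_IN_FIBRE_KM_PER_S * 1000

# Data detouring ~2,000 km to a distant hub and back pays ~20 ms
print(round(round_trip_ms(2000), 2))
# The same exchange handled ~50 km away locally costs ~0.5 ms
print(round(round_trip_ms(50), 2))
```

No protocol optimisation can buy this delay back, which is why shortening the physical path via local exchange points is the lever that matters.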
I recently spoke about this at a first-of-its-kind regional AI connectivity summit, The future of AI connectivity in Kansas & beyond, hosted at Wichita State University (WSU) in Kansas, USA. It was the perfect location - given that WSU is the planned site of a new carrier-neutral IX - and the start of a much bigger plan to roll out IXs across university campuses nationwide.

Discussions at the summit reflected a growing recognition that America’s AI economy cannot depend solely on coastal hubs or isolated mega-data centres. If AI is to deliver value across all parts of the economy, from aerospace and healthcare to finance and education, it needs a distributed, resilient, and secure interconnection layer reaching deep into the heartland. What is beginning in Wichita is part of a much bigger picture: building the kind of digital infrastructure that will allow AI to flourish.

Networking changed the game, but AI is changing the rules

For all its potential, AI’s crowning achievement so far might be the wake-up call it has given us. It has magnified every weakness in today’s networks. Training models requires immense compute power. Finding the data centre space for this can be a challenge, but new data transport protocols mean that AI processing could, in the future, be spread across multiple data centre facilities. Meanwhile, inference - and especially multi-agent AI inference - demands ultra-low latency, as AI services interact with systems, people, and businesses in real time.

For both of these scenarios, the efficiency and speed of the network is key. If the network cannot keep pace (if data needs to travel too far), these applications become too slow to be useful. That’s why the next breakthrough in AI won’t be in bigger or better models, but in the infrastructure that connects them all.
By bringing networks, clouds, and enterprises together on a neutral platform, an IX makes it possible to aggregate GPU resources across locations, create agile GPU-as-a-Service communities, and deliver real-time inference with the best performance and the highest level of security.

AI changes the geography of networking too. Instead of relying only on mega-hubs in key locations, we need interconnection spokes that reach into every region where people live, work, and innovate. Otherwise, businesses in the middle of the country face the “tromboning effect”, where their data detours hundreds of miles to another city to be exchanged and processed before returning a result - adding latency, raising costs, and weakening performance. We need to make these distances shorter, reduce path complexity, and allow data to move freely and securely between every player in the network chain. That’s how AI is rewriting the rulebook: latency, underpinned by distance and geography, matters more than ever.

Building ecosystems and 'data gravity'

When we establish an IX, we’re doing more than just connecting networks; we’re kindling the first embers of a future-proof ecosystem. I’ve seen this happen countless times. The moment a neutral (meaning data centre and carrier neutral) exchange is in place, it becomes a magnet that draws in networks, content providers, data centres, and investors. The pull of “data gravity” transforms a market from being dependent on distant hubs into a self-sustaining digital environment. What may look like a small step - a handful of networks exchanging traffic locally - very quickly becomes an accelerant for rapid growth.

Dubai is one of the clearest examples. When we opened our first international platform there in 2012, 90% of the content used in the region was hosted outside of the Middle East, with latency above 200 milliseconds. A decade later, 90% of that content is localised within the region and latency has dropped to just three milliseconds.
This was a direct result of the gravity created by the exchange, pulling more and more stakeholders into the ecosystem. For AI, that localisation isn’t just beneficial; it’s essential. Training and inference both depend on data being closer to where it is needed. Without the gravity of an IX, content and compute remain scattered and far away, and performance suffers. With it, entire regions can unlock the kind of digital transformation that AI demands.

The American challenge

There was a time when connectivity infrastructure was dominated by a handful of incumbents, but that time has long since passed. Building AI-ready infrastructure isn’t something that one organisation or sector can do alone. Everywhere that has succeeded in building an AI-ready network environment has done so through partnerships - between data centre, network, and IX operators, alongside policy makers, technology providers, universities, and - of course - the business community itself. When those pieces of the puzzle come together, the result is a healthy ecosystem that benefits everyone.

This collaborative model, like the one envisaged for the IX at WSU, is exactly what the US needs if it is to realise the full potential of AI. Too much of America’s digital economy still depends on coastal hubs, while the centre of the country is underserved. That means businesses in aerospace, healthcare, finance, and education - many of which are based deep in the US heartland - must rely on services delivered from other states and regions, and that isn’t sustainable when it comes to AI. To solve this, we need a distributed layer of interconnection that extends across the entire nation. Only then can we create a truly digital America where every city has access to the same secure, high-performance infrastructure required to power its AI-driven future.

For more from DE-CIX, click here.


