Exploring Modern Data Centre Design


Datacloud Middle East comes to Dubai
Taking place in Dubai, UAE on 10–12 February 2026, Datacloud Middle East will highlight the region’s rapid emergence as a global data centre hub, driven by hyperscaler investment, sovereign AI strategies, and large-scale digital transformation. Over three days, the event will examine how the Middle East will build future-ready infrastructure to support AI at scale while advancing sustainability and innovation. More than 50 industry experts will share insights on preparing for AI-driven workloads, with focused discussions on energy strategy, high-density design, and major developments such as Stargate UAE.

Driving data centre acceleration in the Middle East

The agenda will also address financing and delivery challenges, including capital deployment, modular construction, and international expansion. Sessions will explore operational excellence and sustainability, showcasing advanced cooling technologies, sovereign AI initiatives, and interconnection strategies that will enable resilient, high-performance connectivity across the region. With over 500 attendees, Datacloud Middle East will offer a comprehensive view of how gigawatt-scale campuses, cutting-edge cooling, and strategic partnerships will shape the Middle East’s AI infrastructure leadership. Click here to secure your place now.

DCNN to host webinar with CRH
Resilient data centre infrastructure isn’t built at commissioning; it’s built at conception. DCNN and CRH, a US data centre construction specialist, are coming together for a powerful panel discussion exploring how early collaboration with building material providers and site engineers can shape smarter, stronger, and more sustainable data centres. The webinar, 'From the ground up: How future‑proofing data centres starts at the beginning of the project', is a must‑attend session for anyone involved in planning, designing, or delivering next‑generation facilities.

Date: 19 February 2026
Time: 3pm BST (10am EST)
Location: Online (Zoom)

Why join this webinar?
• Understand how early‑stage decisions influence long‑term resilience
• Hear directly from CRH’s global leaders in sustainability, innovation, and infrastructure delivery
• Gain insights across the full project lifecycle - from planning to execution
• Connect with experts shaping the future of data centre construction

Meet the panel
Moderator: Joe Peck, Assistant Editor, DCNN
Frans Vreeswijk, VP Customer Solutions Strategy, CRH Americas
Jenessa Miglietta, VP Sustainability & Innovation, CRH Americas
Thomas Donoghue, VP Industry Innovation, CRH Group

Attendees will gain insights into how local providers mitigate challenges and address critical issues, along with practical ideas for accelerating construction timelines. They will also learn strategies for expanding partnerships with essential suppliers. Click here to register now and be part of the conversation that starts at the foundation.

Vertiv predicts data centre innovation trends
Data centre innovation is continuing to be shaped by macro forces and technology trends related to AI, according to a report from Vertiv, a global provider of critical digital infrastructure. The Vertiv Frontiers report, which draws on expertise from across the organisation, details the technology trends driving current and future innovation, from powering up for AI to digital twins and adaptive liquid cooling.

Scott Armul, Chief Product and Technology Officer at Vertiv, says, “The data centre industry is continuing to rapidly evolve how it designs, builds, operates, and services data centres in response to the density and speed of deployment demands of AI factories.

“We see cross-technology forces, including extreme densification, driving transformative trends such as higher voltage DC power architectures and advanced liquid cooling that are important to deliver the gigawatt scaling that is critical for AI innovation.

"On-site energy generation and digital twin technology are also expected to help advance the scale and speed of AI adoption.”

The Vertiv Frontiers report builds on and expands Vertiv’s previous annual Data Centre Trends predictions. It identifies four macro forces driving data centre innovation:

• Extreme densification — accelerated by AI and HPC workloads
• Gigawatt scaling at speed — with data centres now being deployed rapidly and at unprecedented scale
• Data centre as a unit of compute — as the AI era requires facilities to be built and operated as a single system
• Silicon diversification — noting data centre infrastructure must adapt to an increasing range of chips and compute

The report details how these macro forces have in turn shaped five key trends impacting specific areas of the data centre landscape:

1. Powering up for AI
Most current data centres still rely on hybrid AC/DC power distribution from the grid to the IT racks, which involves three to four conversion stages and some inefficiency. This approach is under strain as power densities increase, largely driven by AI workloads. The shift to higher voltage DC architectures enables significant reductions in current, conductor size, and the number of conversion stages, while centralising power conversion at the room level. Hybrid AC and DC systems remain pervasive, but as full DC standards and equipment mature, higher voltage DC is likely to become more prevalent as rack densities increase. On-site generation - and microgrids - will also drive adoption of higher voltage DC.

2. Distributed AI
The billions of dollars invested in AI data centres to support large language models (LLMs) have so far been aimed at enabling widespread adoption of AI tools by consumers and businesses. Vertiv believes AI is becoming increasingly critical to businesses, but how - and from where - inference services are delivered will depend on each organisation's specific requirements and conditions. While this will affect businesses of all types, highly regulated industries (such as finance, defence, and healthcare) may need to maintain private or hybrid AI environments via on-premise data centres, due to data residency, security, or latency requirements. Flexible, scalable high-density power and liquid cooling systems could enable this capacity through new builds or retrofitting of existing facilities.

3. Energy autonomy accelerates
Short-term, on-site energy generation capacity has been essential for most standalone data centres for decades to support resiliency. However, widespread power availability challenges are creating conditions for extended energy autonomy, especially at AI data centres. Investment in on-site power generation, via natural gas turbines and other technologies, has several intrinsic benefits but is primarily driven by power availability challenges. Technology strategies such as 'Bring Your Own Power (and Cooling)' are likely to be part of ongoing energy autonomy plans.

4. Digital twin-driven design and operations
With increasingly dense AI workloads and more powerful GPUs comes a demand to deploy these complex AI factories at speed. Using AI-based tools, data centres can be mapped and specified virtually - via digital twins - and the IT and critical digital infrastructure can be integrated, often as prefabricated modular designs, and deployed as units of compute, reducing time-to-token by up to 50%. This approach will be important to efficiently achieving the gigawatt-scale buildouts required for future AI advancements.

5. Adaptive, resilient liquid cooling
AI workloads and infrastructure have accelerated the adoption of liquid cooling, but, conversely, AI can also be used to further refine and optimise liquid cooling solutions. Liquid cooling has become mission-critical for a growing number of operators, and AI, in conjunction with additional monitoring and control systems, has the potential to make these systems smarter and more robust by predicting potential failures and managing fluids and components effectively. This trend should lead to increasing reliability and uptime for high-value hardware and the associated data and workloads.

For more from Vertiv, click here.
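The reduced-current argument behind trend 1 can be made concrete with a simple Ohm's-law sketch. The load, voltages, and conductor resistance below are illustrative assumptions, not figures from the Vertiv report:

```python
# Illustrative comparison: current drawn by a 120 kW rack row at two
# distribution voltages. Higher voltage means lower current, which
# permits smaller conductors and lower resistive (I^2 * R) losses.
# All figures are hypothetical examples, not Vertiv specifications.

def current_amps(power_w: float, voltage_v: float) -> float:
    """Current for a given load power at a given distribution voltage."""
    return power_w / voltage_v

def resistive_loss_w(power_w: float, voltage_v: float, resistance_ohm: float) -> float:
    """I^2 * R loss dissipated in the distribution conductors."""
    i = current_amps(power_w, voltage_v)
    return i ** 2 * resistance_ohm

load = 120_000            # 120 kW of IT load (assumed)
feeder_resistance = 0.01  # ohms, assumed conductor resistance

for volts in (415, 800):  # legacy AC distribution vs higher-voltage DC
    i = current_amps(load, volts)
    loss = resistive_loss_w(load, volts, feeder_resistance)
    print(f"{volts} V: {i:.0f} A, ~{loss:.0f} W lost in conductors")
```

Roughly doubling the distribution voltage halves the current and cuts conductor losses by about a factor of four, which is the core of the case for higher voltage DC at high rack densities.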

Rethinking cooling, power, and design for AI
In this article for DCNN, Gordon Johnson, Senior CFD Manager at Subzero Engineering, shares his predictions for the data centre industry in 2026. He explains that surging rack densities and GPU power demands are pushing traditional air cooling beyond its limits, driving the industry towards hybrid cooling environments where airflow containment, liquid cooling, and intelligent controls operate as a single system. These trends point to the rise of a fundamentally different kind of data centre - one that understands its own demands and actively responds to them.

Predictions for data centres in 2026

By 2026, the data centre will no longer function as a static host for digital infrastructure; instead, it will behave as a dynamic, adaptive system - one that evolves in real time alongside the workloads it supports. The driving force behind this shift is AI, which is pushing power, cooling, and physical design beyond previously accepted limits. Rack densities that once seemed impossible - 80 to 120 kW - are now commonplace. As GPUs push past 700 W, the thermal cost of compute is redefining core engineering assumptions across the industry. Traditional air-cooling strategies alone can no longer keep pace. However, the answer isn’t simply replacing air with liquid; what’s emerging instead is a hybrid environment, where airflow containment, liquid cooling, and predictive controls operate together as a single, coordinated system. As a result, the long-standing divide between “air-cooled” and “liquid-cooled” facilities is fading. Even in high-performing direct-to-chip (DTC) environments, significant residual heat must still be managed and removed by air. Preventing hot and cold air from mixing becomes critical - not just for stability, but for efficiency. In high-density and HPC environments, controlled airflow is now essential to reducing energy consumption and maintaining predictable performance.
By 2026, AI will also play a more active role in managing the thermodynamics of the data centre itself. Coolant distribution units (CDUs) are evolving beyond basic infrastructure into intelligent control points. By analysing workload fluctuations in real time, CDUs can adapt cooling delivery, protect sensitive IT equipment, and mitigate thermal events before they impact performance, making liquid cooling not only more reliable but secure and scalable. This evolution is accelerating the divide between legacy data centres and a new generation of AI-focused facilities. Traditional data centres were built for consistent loads and flexible whitespace. AI infrastructure demands something different: modular design, fault-predictive monitoring, and engineering frameworks proven at hyperscale. To fully unlock AI’s potential, data centre design must evolve alongside it. Immersion cooling sits at the far end of this transition. While DTC remains the preferred solution today and for the foreseeable future, immersion is increasingly viewed as the long-term endpoint for ultra-high-density computing. It addresses thermal challenges that DTC can only partially relieve, enabling facilities to remove much of their airflow infrastructure altogether. Adoption remains gradual due to cost, maintenance requirements, and operational disruption - to name a few - but the real question is no longer if immersion will arrive, but how prepared operators will be when it eventually does. At the same time, the pace of AI growth is exposing the limitations of global supply chains. Slow manufacturing cycles and delayed engineering can no longer support the speed of deployment required. For example, Subzero Engineering’s new manufacturing and R&D facility in Vietnam (serving the APAC region) reflects a broader shift towards localised production and highly skilled regional workforces. 
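The workload-responsive CDU behaviour described above can be sketched as a simple proportional control loop. This is a hypothetical illustration of the idea, not Subzero Engineering's or any vendor's implementation; the setpoint, gain, and flow limits are invented:

```python
# Minimal sketch of a CDU-style proportional controller that adjusts
# coolant flow in response to measured coolant return temperature.
# Setpoint, gain, and pump limits are assumed example values.

def adjust_flow(current_flow_lpm: float,
                return_temp_c: float,
                setpoint_c: float = 45.0,
                gain_lpm_per_c: float = 2.0,
                min_flow: float = 20.0,
                max_flow: float = 120.0) -> float:
    """Increase flow when return temperature exceeds the setpoint,
    decrease it when running cool, clamped to the pump's limits."""
    error = return_temp_c - setpoint_c
    new_flow = current_flow_lpm + gain_lpm_per_c * error
    return max(min_flow, min(max_flow, new_flow))

# Example: a workload spike pushes return temperature to 50 C,
# 5 C over setpoint, so flow steps up from 60 to 70 L/min.
print(adjust_flow(current_flow_lpm=60.0, return_temp_c=50.0))  # 70.0
```

Production CDUs layer prediction, redundancy management, and failure detection on top of loops like this; the sketch only shows the basic feedback step.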
By investing in R&D, application engineering, and precision manufacturing, Subzero Engineering is building the capacity needed to support global demand while developing local expertise that strengthens the industry as a whole. Taken together, these trends point to the rise of a fundamentally different kind of data centre - one that understands its own demands and actively responds to them. Cooling, airflow, energy, and structure are no longer separate considerations, but parts of a synchronised ecosystem. By 2026, data centres will become active contributors to the computing lifecycle itself. Operators that plan for adaptability today will be best positioned to lead in the next phase of the digital economy. For more from Subzero Engineering, click here.

PFlow highlights VRC use in data centres
PFlow Industries, a US manufacturer of vertical reciprocating conveyor (VRC) technology, has highlighted the use of VRCs within data centre environments, focusing on its M Series and F Series systems. VRCs are typically incorporated during the design phase of a facility to support vertical movement of equipment and materials. PFlow states that, compared with conventional lifting approaches, VRCs can be integrated with automated material handling systems and used for intermittent or continuous operation, with routine maintenance requirements kept relatively low.

M Series and F Series applications

The PFlow M Series 2-Post Mechanical Material Lift is designed for higher-cycle environments where frequent vertical movement of equipment is required. The company says the system can handle loads of up to 10,000lb (4,536kg) and operate across multiple floor levels. Standard travel speed is 25 feet (7.62 metres) per minute, with higher speeds available depending on configuration. The M Series is designed for transporting items such as servers, racks, and other technical equipment, with safety features intended to support controlled movement and equipment protection.

The PFlow F Series 4-Post Mechanical VRC is positioned for heavier loads and larger equipment. The standard lifting capacity is up to 50,000lb (22,680kg), with higher capacities available through customisation. The design allows loading and unloading from all four sides and is intended to accommodate oversized infrastructure, including battery systems and large server assemblies. PFlow says the F Series is engineered for high-cycle operation and flexible traffic patterns within facilities.

The company adds that its VRCs are designed as permanent infrastructure elements within buildings rather than standalone equipment. It states that all systems are engineered to meet ASME B20.1 conveyor requirements and are intended for continuous operation in environments where uptime is critical.
Dan Hext, National Sales Director at PFlow Industries, comments, “Every industry is under cost and compliance pressure. Our VRCs help facility operators achieve maximum throughput and efficiency while maintaining the highest levels of safety.”

The blueprint for tomorrow’s sustainable data centres
In this exclusive article for DCNN, Francesco Fontana, Enterprise Marketing and Alliances Director at Aruba, explores how operators can embed sustainability, flexibility, and high-density engineering into data centre design to meet the accelerating demands of AI:

Sustainable design is now central to AI-scale data centres

The explosive growth of AI is straining data centre capacity, prompting operators to both upgrade existing sites and plan large-scale new-builds. Europe’s AI market, projected to grow at a 36.4% CAGR through 2033, is driving this wave of investment as operators scramble to match demand. Operators face mounting pressure to address the environmental costs of rapid growth, as expansion alone cannot meet the challenge. The path forward lies in designing facilities that are sustainable by default, while balancing resilience, efficiency, and adaptability to ensure data centres can support the accelerating demands of AI.

The cost of progress

Customer expectations for data centres have shifted dramatically in recent years. The rapid uptake of AI and cloud technologies is fuelling demand for colocation environments that are scalable, flexible, and capable of supporting constantly evolving workloads and managing surging volumes of data. But this evolution comes at a cost. AI and other compute-intensive applications demand vast amounts of processing power, which in turn places new strains on both energy and water resources. Global data centre electricity usage is projected to reach 1,050 terawatt-hours (TWh) by 2026, placing data centres among the world’s top five national consumers. This rising consumption has put data centres firmly under the spotlight. Regulators, customers, and the wider public are scrutinising how facilities are designed and operated, making it clear that sustainability can no longer be treated as optional.
To survive amid these new expectations, operators must balance performance with environmental responsibility, rethinking infrastructure from the ground up.

Steps to a next-generation sustainable data centre

1. Embed sustainability from day one
Facilities designed 'green by default' are better placed to meet both operational and environmental goals, which is why sustainability can’t be an afterthought. This requires renewable energy integration from the outset through on-site solar, hydroelectric systems, or long-term clean power purchase agreements. Operators across Europe are also committing to industry frameworks like the Climate Neutral Data Centre Pact and the European Green Digital Coalition, ensuring progress is independently verified. Embedding sustainability into the design and operation of data centres not only reduces carbon intensity but also creates long-term efficiency gains that help manage AI’s heavy energy demands.

2. Build for flexibility and scale
Modern businesses need infrastructures that can grow with them. For operators, this means creating resilient IT environments with the space and power capacity to support future demand. Offering adaptable options - such as private cages and cross-connects - gives customers the freedom to scale resources up or down, as well as tailor facilities to their unique needs. This flexibility underpins cloud expansion, digital transformation initiatives, and the integration of new applications - all while helping customers remain agile in a competitive market.

3. Engineering for the AI workload
AI and high-performance computing (HPC) workloads demand far more power and cooling capacity than traditional IT environments, and conventional designs are struggling to keep up. Facilities must be engineered specifically for high-density deployments.
Advanced cooling technologies, such as liquid cooling, allow operators to safely and sustainably support power densities far above 20 kW per rack, essential for next-generation GPUs and other AI-driven infrastructure. Rethinking power distribution, airflow management, and rack layout ensures high-density computing can be delivered efficiently without compromising stability or sustainability.

4. Location matters
Where a data centre is built plays a major role in its sustainability profile, as regional providers often offer greater flexibility and more personalised services to meet customer needs. Italy, for example, has become a key destination for new facilities. Its cloud computing market is estimated at €10.8 billion (£9.4 billion) in 2025 and is forecast to more than double to €27.4 billion (£23.9 billion) by 2030, growing at a CAGR of 20.6%. Significant investments from hyperscalers in recent years are accelerating growth, making the region a hotspot for operators looking to expand in Europe.

5. Stay compliant with regulations and certifications
Strong regulatory and environmental compliance is fundamental. Frameworks such as the General Data Protection Regulation (GDPR) safeguard data, while certifications like LEED (Leadership in Energy and Environmental Design) demonstrate energy efficiency and environmental accountability. Adhering to these standards ensures legal compliance, but it also improves operational transparency and strengthens credibility with customers.

Sustainability and performance as partners

The data centres of tomorrow must scale sustainably to meet the demands of AI, cloud, and digital transformation. This requires embedding efficiency and adaptability into every stage of design and operation. Investment in renewable energy, such as hydro and solar, will be crucial to reducing emissions. Equally, innovations like liquid cooling will help manage the thermal loads of compute-heavy AI environments.
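The thermal loads mentioned above translate directly into coolant flow requirements via the heat-balance relation Q = ṁ·c·ΔT. A rough sketch, with an assumed rack load and temperature rise (illustrative values only):

```python
# Rough estimate of the water flow needed to remove a rack's heat load,
# using Q = m_dot * c_p * delta_T. The rack load and temperature rise
# are illustrative assumptions, not Aruba design parameters.

CP_WATER = 4186.0  # J/(kg*K), specific heat of water

def required_flow_lps(heat_load_w: float, delta_t_c: float) -> float:
    """Litres per second of water needed to absorb heat_load_w with a
    delta_t_c coolant temperature rise (1 litre of water ~ 1 kg)."""
    kg_per_s = heat_load_w / (CP_WATER * delta_t_c)
    return kg_per_s

# A hypothetical 100 kW rack with a 10 C coolant temperature rise:
flow = required_flow_lps(100_000, 10.0)
print(f"{flow:.2f} L/s")
```

The same arithmetic shows why air struggles at these densities: air's volumetric heat capacity is roughly 3,500 times lower than water's, so the equivalent airflow would be enormous.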
Emerging technologies - including agentic AI systems that autonomously optimise energy use and breakthroughs in quantum computing - promise to take efficiency even further. In short, sustainability and performance are no longer competing objectives; together, they form the foundation of a resilient digital future where AI can thrive without compromising the planet. For more from Aruba, click here.

atNorth's ICE03 wins award for environmental design
atNorth, an Icelandic operator of sustainable Nordic data centres, has received the Environmental Impact Award at the Data Center Dynamics Awards for its expansion of the ICE03 data centre in Akureyri, Iceland. The category recognises projects that demonstrate clear reductions in the environmental impact of data centre operations. The expansion was noted for its design approach, which combines environmental considerations with social and economic benefits. ICE03 operates in a naturally cool climate and uses renewable energy to support direct liquid cooling. The site was constructed with sustainable materials, including Glulam and Icelandic rockwool, and was planned with regard for the surrounding landscape.

Heat-reuse partnership and local benefits

atNorth says all its new sites are built to accommodate heat reuse equipment. For ICE03, the company partnered with the local municipality to distribute waste heat to community projects, including a greenhouse where school children learn about ecological cultivation and sustainable food production. The initiative reduces the data centre’s carbon footprint, supports locally grown produce, and contributes to a regional circular economy. ICE03 operates with a PUE below 1.2, compared with a global average of 1.56.

During the first phase of ICE03’s development, more than 90% of the workforce was recruited locally, and the company says it intends to continue hiring within the area as far as possible. atNorth also states it supports educational and community initiatives through volunteer activity and financial contributions. Examples include donating mechatronics equipment to the Vocational College of Akureyri and supporting local sports, events, and search and rescue services.

The ICE03 site has also enabled a new point of presence (PoP) in the region, established by telecommunications company Farice. The PoP provides direct routing to mainland Europe via submarine cables, improving network resilience for northern Iceland.
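The PUE figures quoted above follow from the standard definition: total facility power divided by IT equipment power. A quick illustration with assumed loads shows what the gap between 1.2 and 1.56 means in overhead terms:

```python
# PUE = total facility power / IT equipment power.
# The IT load below is a hypothetical example chosen to show how a
# PUE of 1.2 compares with the 1.56 global average cited above.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: 1.0 would mean zero overhead."""
    return total_facility_kw / it_load_kw

it_load = 10_000  # kW of IT load (assumed)

efficient_site = pue(12_000, it_load)  # 2 MW overhead  -> PUE 1.20
average_site = pue(15_600, it_load)    # 5.6 MW overhead -> PUE 1.56

print(efficient_site, average_site)
```

At this assumed scale, the difference is 3.6 MW of continuous overhead power, which is why cool climates and heat reuse feature so prominently in Nordic site design.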
Ásthildur Sturludóttir, the Mayor of Akureyri, estimates that atNorth’s total investment in the town will reach around €109 million (£95.7 million), with long-term investment expected to rise to approximately €200 million (£175.6 million). This, combined with the wider economic and social contribution of the site, is positioned as a model for future data centre development. Eyjólfur Magnús Kristinsson, CEO of atNorth, comments, “We are delighted that our ICE03 data centre has been recognised for its positive impact on its local environment. "There is a critical need for a transformation in the approach to digital infrastructure development to ensure the scalability and longevity of the industry. Data centre operators must take a holistic approach to become long-term, valued partners of thriving communities.” For more from atNorth, click here.

Data centre delivery: How consistency endures
In this exclusive article for DCNN, Steve Clifford, Director of Data Centres at EMCOR UK, describes how end-to-end data centre design, building, and maintenance is essential for achieving data centre uptime and optimisation:

Keeping services live

Resilience and reliability. These aren’t optional features of data centres; they are essential requirements that demand precision and a keen eye for detail from all stakeholders. If a patchwork of subcontractors is delivering data centre services, that can muddy the waters by complicating supply chains. This heightens the risk of miscommunication, which can cause project delays and operational downtime. Effective design and implementation are essential at a time when the data centre market is undergoing significant expansion, with take-up of capacity expected to grow by 855MW - or 22% year on year - in Europe alone. In-house engineering and project management teams can prioritise open communication to help build, manage, and maintain data centres over time. This end-to-end approach allows for continuity from initial consultation through to long-term operational excellence, so data centres can do what they do best: uphold business continuity and serve millions of people.

Designing and building spaces

Before a data centre can be built, logistical challenges need to be addressed. In many regions, grid availability is limited, land is constrained, and planning approvals can take years. Compliance is key too: no matter the space, builds should align with ISO frameworks, local authority regulations, and - in some cases - critical national infrastructure (CNI) standards. Initially, teams need to uphold a customer’s business case, identify the optimal location, address cooling concerns, identify which risks to mitigate, and understand what space for expansion or improvement should be factored in now.
While pre-selection of contractors and consultants is vital at this stage, it makes strategic sense to select a complementary delivery team that can manage mobilisation and long-term performance too. Engineering providers should collaborate with customer stakeholders, consultants, and supply chain partners so that the solution delivered is fit for purpose throughout its operational lifespan. As greenfield development can be lengthy, upgrading existing spaces - known as retrofitting - is a popular alternative that can cut costs by 40% and project timelines by 30%.

When building in pre-existing spaces, maintaining continuity in live environments is crucial. For example, our team recently developed a data hall within a contained 1,000m² existing facility. Engineers used profile modelling to identify an optimal cooling configuration based on hot aisle containment and installed a 380V DC power system to maximise energy efficiency. This resulted in 96.2% efficiency across rectifiers and converters. The project delivered 136 cabinets, against a brief of 130, and, crucially, didn’t disrupt business-as-usual operations, using phased integration for the early deployment of IT systems.

Maintaining continuity

In certain spaces, like national defence and highly sensitive operations, maintaining continuity is fundamental. Critical infrastructure maintenance in these environments needs to prioritise security and reliability, as these facilities sit at the heart of national operations. Ongoing operational management requires a 24/7 engineering presence, supported by proactive maintenance management, comprehensive systems monitoring, a strategic critical spares strategy, and a robust event and incident management process.
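If the 96.2% figure quoted for the retrofit above is read as an end-to-end number, it is worth noting that efficiencies of chained conversion stages multiply. A sketch with invented per-stage values (not figures from the EMCOR UK project):

```python
from functools import reduce

# Overall efficiency of a power conversion chain is the product of its
# stage efficiencies. The stage values below are assumed examples, not
# measurements from any real 380V DC installation.

def chain_efficiency(stage_efficiencies: list[float]) -> float:
    """Multiply per-stage efficiencies to get end-to-end efficiency."""
    return reduce(lambda acc, eff: acc * eff, stage_efficiencies, 1.0)

# Three hypothetical stages between the supply and the IT load:
stages = [0.99, 0.985, 0.987]
print(f"{chain_efficiency(stages):.1%}")  # 96.2%
```

This multiplication is also why DC distribution, which removes conversion stages outright, can lift end-to-end efficiency even when each individual stage is already in the high nineties.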
This constant presence, from the initial stages of consultation through to ongoing operational support, delivers clear benefits that compound over time; the same team that understands the design rationale can anticipate potential issues and respond swiftly when challenges arise. Using 3D modelling to coordinate designs, and time-lapse visualisations depicting project progress, can keep stakeholders up to date. Asset management in critical environments like CNI sites also demands strict maintenance scheduling and control, coupled with complete risk transparency for customers. Total honesty and trust are non-negotiable, so weekly client meetings maintain open communication channels, ensuring customers are fully informed about system status, upcoming maintenance windows, and any potential risks on the horizon.

Meeting high expectations

These high-demand environments have high expectations, so keeping engineering teams engaged and motivated is key to long-term performance. A holistic approach to staff engagement should focus on continuous training and development to deliver greater continuity and deeper site expertise. When engineers intimately understand customer expectations and site needs, they can maintain the seamless service these critical operations demand. Focusing on continuity delivers measurable results: for one defence-grade data centre customer, we have maintained 100% uptime over eight years, from day one of operations. Consistent processes and dedicated personnel form a long-term commitment to operational excellence.

Optimising spaces for the future

Self-delivery naturally lends itself to growing, evolving relationships with customers. By transitioning to self-delivering entire projects and operations, organisations can benefit from a single point of contact while maintaining control over most aspects of service delivery. Rather than offering generic solutions, established relationships allow for bespoke approaches that anticipate future requirements and build in flexibility from the outset. A continuous improvement model ensures long-term capability development, with energy efficiency representing a clear focus area as sustainability requirements become increasingly stringent.

AI and HPC workloads are pushing rack densities higher, creating new demands for thermal management, airflow, and power draw. Many operators are also embedding smart systems - from IoT sensors to predictive analytics tools - into designs. These platforms provide real-time visibility of energy use, asset performance, and environmental conditions, enabling data-driven decision-making and continuous optimisation. Operators may also upgrade spaces with higher-efficiency systems and smart cooling, which support better PUE outcomes and long-term energy savings. When paired with digital tools for energy monitoring and predictive maintenance, teams can deliver smarter operations and provide measurable returns on investment.

Continuity: A strategic tool

Uptime is critical - and engineering continuity is not just beneficial, but essential. From the initial stages of design and consultation through to ongoing management and future optimisation, data centres need consistent teams, transparent processes, and strategic relationships that endure. The end-to-end approach transforms continuity from an operational requirement into a strategic advantage, enabling facilities to adapt to evolving demands while maintaining constant uptime. When consistency becomes the foundation, exceptional performance follows.

Schneider Electric unveils AI DC reference designs
Schneider Electric, a French multinational specialising in energy management and industrial automation, has announced new data centre reference designs developed with NVIDIA, aimed at supporting AI-ready infrastructure and easing deployment for operators. The designs include integrated power management and liquid cooling controls, with compatibility for NVIDIA Mission Control, the company’s AI factory orchestration software. They also support deployment of NVIDIA GB300 NVL72 racks with densities of up to 142kW per rack.

Integrated power and cooling management

The first reference design provides a framework for combining power management and liquid cooling systems, including Motivair technologies. It is designed to work with NVIDIA Mission Control to help manage cluster and workload operations. This design can also be used alongside Schneider Electric’s other data centre blueprints for NVIDIA Grace Blackwell systems, allowing operators to manage the power and liquid cooling requirements of accelerated computing clusters.

A second reference design sets out a framework for AI factories using NVIDIA GB300 NVL72 racks in a single data hall. It covers four technical areas - facility power, cooling, IT space, and lifecycle software - with versions available under both ANSI and IEC standards.

Deployment and performance focus

According to Schneider Electric, operators are facing significant challenges in deploying GPU-accelerated AI infrastructure at scale. Its designs are intended to speed up rollout and provide consistency across high-density deployments. Jim Simonelli, Senior Vice President and Chief Technology Officer at Schneider Electric, says, “Schneider Electric is streamlining the process of designing, deploying, and operating advanced AI infrastructure with its new reference designs.
"Our latest reference designs, featuring integrated power management and liquid cooling controls, are future-ready, scalable, and co-engineered with NVIDIA for real-world applications - enabling data centre operators to keep pace with surging demand for AI.” Scott Wallace, Director of Data Centre Engineering at NVIDIA, adds, “We are entering a new era of accelerated computing, where integrated intelligence across power, cooling, and operations will redefine data centre architectures. "With its latest controls reference design, Schneider Electric connects critical infrastructure data with NVIDIA Mission Control, delivering a rigorously validated blueprint that enables AI factory digital twins and empowers operators to optimise advanced accelerated computing infrastructure.” Features of the controls reference design The controls system links operational technology and IT infrastructure using a plug-and-play approach based on the MQTT protocol. It is designed to provide: • Standardised publishing of power management and liquid cooling data for use by AI management software and enterprise systems• Management of redundancy across cooling and power distribution equipment, including coolant distribution units and remote power panels• Guidance on measuring AI rack power profiles, including peak power and quality monitoring Reference design for NVIDIA GB300 NVL72 The NVIDIA GB300 NVL72 reference design supports clusters of up to 142kW per rack. A data hall based on this design can accommodate three clusters powered by up to 1,152 GPUs, using liquid-to-liquid coolant distribution units and high-temperature chillers. The design incorporates Schneider Electric’s ETAP and EcoStruxure IT Design CFD models, enabling operators to create digital twins for testing and optimisation. It builds on earlier blueprints for the NVIDIA GB200 NVL72, reflecting Schneider Electric’s ongoing collaboration with NVIDIA. 
The company now offers nine AI reference designs covering a range of scenarios, from prefabricated modules and retrofits to purpose-built facilities for NVIDIA GB200 and GB300 NVL72 clusters. For more from Schneider Electric, click here.

'Cranes key to productivity in data centre construction'
With companies and consumers increasingly reliant on cloud-based computing and services, data centre construction has moved higher up the agenda across the world. With data centres recently re-categorised as 'Critical National Infrastructure' in the UK, the market is highly competitive and demand for new facilities is high.

However, these projects are very sensitive to risk. Challenges include the highly technical nature of some of the work, which relies on a specialist supply chain, and long lead times for equipment such as servers, computer chips, and backup generators – in some cases, up to two years. Time is of the essence. Every day of delay during a construction programme can have a multimillion-pound impact in lost income, and project teams can be penalised for falling behind. However, help is at hand from an unexpected source: the cranage provider.

Cutting construction time by half

Marr Contracting, an Australian provider of heavy-lift luffing tower cranes and cranage services, has been working on data centres around the world for several years. Its methodology is helping data centre projects reach completion in half the average time.

“The first time that I spoke to a client about their data centre project, they told me that they were struggling with the lifting requirements,” explains Simon Marr, Managing Director at Marr Contracting. “There were lots of heavy precast components, and sequencing them correctly alongside other elements of the programme was proving difficult.

“It was a traditional set-up with mobile cranes sitting outside the building structure, which made the site congested and ‘confused’.

"There was a clash between the critical path works of installing the in-ground services and the construction of the main structure, as the required mobile crane locations were hindering the in-ground works and the in-ground works were hindering where the mobile cranes could be placed.
This in turn resulted in an extended programme.”

The team at Marr suggested a different approach: to place fewer, yet larger-capacity cranes in strategic locations so that they could service the whole site and allow the in-ground works to proceed concurrently. By adopting this philosophy, the project was completed in half the time of a typical build. Marr has partnered with the client on every development since, with the latest project completed in just 35 weeks.

“It’s been transformational,” claims Simon. “The solution removes complexity and improves productivity by allowing construction to happen across multiple work fronts. This, in turn, reduces the number of cranes on the project.”

Early engagement is key

Simon believes early engagement is key to achieving productivity and efficiency gains on data centre projects. He says, “There is often a disconnect between the engineering and planning of a project and how cranes integrate into the project’s construction logic.

"The current approach, where the end-user of the crane issues a list of requirements for a project, with no visibility of the logic behind how these cranes will align with the construction methodology, is flawed.

“It creates a situation where more cranes are usually added to an already congested site to fill a gap that could have been covered by a single tower crane.”

One of the main pressure points specific to data centre projects is the requirement around services. “The challenge with data centres is that a lot of power and water is needed, which means lots of in-ground services,” continues Simon. “The ideal would be to build these together, but that’s not possible with a traditional cranage solution because you’re making a compromise on whether you install the in-ground services or whether you delay that work so that the mobile cranes can support the construction of the structure.
Ultimately, the programme falls behind.”

“We’ve seen clients try to save money by downsizing the tower crane and putting it in the centre of the server hall. But this hinders the completion of the main structure and delays the internal fit-out works.

“Our approach is to use cranes that can do heavier lifts but that take up a smaller area, away from the critical path and outside the building structure. The crane solution should allow the concurrent delivery of critical path works – in turn, making the crane a servant to the project, not the hero.

“With more sites being developed in congested urban areas, particularly new, taller data centres with heavier components, this is going to be more of an issue in the future.”

Thinking big

One of the benefits of early engagement and strategically deploying heavy-lift tower cranes is that it opens the door for the constructor to “think big” with their construction methodology. This appeals to the data centre market as it enables constructors to work to design for manufacture and assembly (DfMA). By using prefabricated, pre-engineered modules, DfMA aims to allow for the rapid construction and deployment of data centre facilities. Fewer, heavier lifts should reduce risk and improve safety, because more components can be assembled offsite, delivered to site, and then placed through fewer major crane lifts instead of multiple smaller lifts.

Simon claims, “By seeking advice from cranage experts early in the bid and design development stage of a project, the project can benefit from lower project costs, improved safety, higher quality, and quicker construction."


