Exploring Modern Data Centre Design


Rethinking cooling, power, and design for AI
In this article for DCNN, Gordon Johnson, Senior CFD Manager at Subzero Engineering, shares his predictions for the data centre industry in 2026. He explains that surging rack densities and GPU power demands are pushing traditional air cooling beyond its limits, driving the industry towards hybrid cooling environments where airflow containment, liquid cooling, and intelligent controls operate as a single system. These trends point to the rise of a fundamentally different kind of data centre - one that understands its own demands and actively responds to them.

Predictions for data centres in 2026
By 2026, the data centre will no longer function as a static host for digital infrastructure; instead, it will behave as a dynamic, adaptive system - one that evolves in real time alongside the workloads it supports. The driving force behind this shift is AI, which is pushing power, cooling, and physical design beyond previously accepted limits. Rack densities that once seemed impossible - 80 to 120 kW - are now commonplace. As GPUs push past 700 W, the thermal cost of compute is redefining core engineering assumptions across the industry.

Traditional air-cooling strategies alone can no longer keep pace. However, the answer isn't simply replacing air with liquid; what's emerging instead is a hybrid environment, where airflow containment, liquid cooling, and predictive controls operate together as a single, coordinated system.

As a result, the long-standing divide between "air-cooled" and "liquid-cooled" facilities is fading. Even in high-performing direct-to-chip (DTC) environments, significant residual heat must still be managed and removed by air. Preventing hot and cold air from mixing becomes critical - not just for stability, but for efficiency. In high-density and HPC environments, controlled airflow is now essential to reducing energy consumption and maintaining predictable performance.

By 2026, AI will also play a more active role in managing the thermodynamics of the data centre itself. Coolant distribution units (CDUs) are evolving beyond basic infrastructure into intelligent control points. By analysing workload fluctuations in real time, CDUs can adapt cooling delivery, protect sensitive IT equipment, and mitigate thermal events before they impact performance, making liquid cooling not only more reliable, but also more secure and scalable.

This evolution is accelerating the divide between legacy data centres and a new generation of AI-focused facilities. Traditional data centres were built for consistent loads and flexible whitespace. AI infrastructure demands something different: modular design, fault-predictive monitoring, and engineering frameworks proven at hyperscale. To fully unlock AI's potential, data centre design must evolve alongside it.

Immersion cooling sits at the far end of this transition. While DTC remains the preferred solution today and for the foreseeable future, immersion is increasingly viewed as the long-term endpoint for ultra-high-density computing. It addresses thermal challenges that DTC can only partially relieve, enabling facilities to remove much of their airflow infrastructure altogether. Adoption remains gradual due to cost, maintenance requirements, and operational disruption - to name a few - but the real question is no longer if immersion will arrive, but how prepared operators will be when it eventually does.

At the same time, the pace of AI growth is exposing the limitations of global supply chains. Slow manufacturing cycles and delayed engineering can no longer support the speed of deployment required. For example, Subzero Engineering's new manufacturing and R&D facility in Vietnam (serving the APAC region) reflects a broader shift towards localised production and highly skilled regional workforces. By investing in R&D, application engineering, and precision manufacturing, Subzero Engineering is building the capacity needed to support global demand while developing local expertise that strengthens the industry as a whole.

Taken together, these trends point to the rise of a fundamentally different kind of data centre - one that understands its own demands and actively responds to them. Cooling, airflow, energy, and structure are no longer separate considerations, but parts of a synchronised ecosystem. By 2026, data centres will become active contributors to the computing lifecycle itself. Operators that plan for adaptability today will be best positioned to lead in the next phase of the digital economy.

For more from Subzero Engineering, click here.
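To make the CDU behaviour described above more concrete, here is a minimal, hypothetical control-loop sketch in Python. The telemetry fields, setpoints, and proportional gain are illustrative assumptions only; they do not represent Subzero Engineering's or any vendor's actual implementation, which would involve far more sophisticated, safety-qualified control.

```python
# Hypothetical sketch: a CDU nudging coolant flow based on rack telemetry.
# All names, setpoints, and the gain are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class RackTelemetry:
    power_kw: float        # measured rack power draw
    return_temp_c: float   # coolant temperature returning from the cold plates


class CduController:
    """Simple proportional controller steering flow towards a target return temperature."""

    def __init__(self, target_return_c: float = 45.0, gain_lpm_per_c: float = 8.0,
                 min_flow_lpm: float = 60.0, max_flow_lpm: float = 400.0):
        self.target_return_c = target_return_c
        self.gain = gain_lpm_per_c
        self.min_flow = min_flow_lpm
        self.max_flow = max_flow_lpm
        self.flow_lpm = min_flow_lpm

    def update(self, t: RackTelemetry) -> float:
        # Hotter-than-target return coolant -> increase flow; cooler -> back off.
        error_c = t.return_temp_c - self.target_return_c
        self.flow_lpm += self.gain * error_c
        self.flow_lpm = max(self.min_flow, min(self.max_flow, self.flow_lpm))
        return self.flow_lpm


if __name__ == "__main__":
    cdu = CduController()
    for sample in (RackTelemetry(95.0, 47.2), RackTelemetry(118.0, 49.8), RackTelemetry(82.0, 44.1)):
        print(f"{sample.power_kw:6.1f} kW, return {sample.return_temp_c:4.1f} C "
              f"-> flow {cdu.update(sample):5.1f} L/min")
```

In a real deployment, the same loop would also feed events to the monitoring or DCIM layer, which is how thermal excursions could be flagged before they affect workloads.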

PFlow highlights VRC use in data centres
PFlow Industries, a US manufacturer of vertical reciprocating conveyor (VRC) technology, has highlighted the use of VRCs within data centre environments, focusing on its M Series and F Series systems.

VRCs are typically incorporated during the design phase of a facility to support the vertical movement of equipment and materials. PFlow states that, compared with conventional lifting approaches, VRCs can be integrated with automated material handling systems and used for intermittent or continuous operation, with routine maintenance requirements kept relatively low.

M Series and F Series applications
The PFlow M Series 2-Post Mechanical Material Lift is designed for higher-cycle environments where frequent vertical movement of equipment is required. The company says the system can handle loads of up to 10,000lb (4,536kg) and operate across multiple floor levels. Standard travel speed is stated as 25 feet (7.62 metres) per minute, with higher speeds available depending on configuration. The M Series is designed for transporting items such as servers, racks, and other technical equipment, with safety features intended to support controlled movement and equipment protection.

The PFlow F Series 4-Post Mechanical VRC is positioned for heavier loads and larger equipment. The standard lifting capacity is up to 50,000lb (22,680kg), with higher capacities available through customisation. The design allows loading and unloading from all four sides and is intended to accommodate oversized infrastructure, including battery systems and large server assemblies. PFlow says the F Series is engineered for high-cycle operation and flexible traffic patterns within facilities.

The company adds that its VRCs are designed as permanent infrastructure elements within buildings rather than standalone equipment. It states that all systems are engineered to meet ASME B20.1 conveyor requirements and are intended for continuous operation in environments where uptime is critical.

Dan Hext, National Sales Director at PFlow Industries, comments, "Every industry is under cost and compliance pressure. Our VRCs help facility operators achieve maximum throughput and efficiency while maintaining the highest levels of safety."

The blueprint for tomorrow’s sustainable data centres
In this exclusive article for DCNN, Francesco Fontana, Enterprise Marketing and Alliances Director at Aruba, explores how operators can embed sustainability, flexibility, and high-density engineering into data centre design to meet the accelerating demands of AI:

Sustainable design is now central to AI-scale data centres
The explosive growth of AI is straining data centre capacity, prompting operators to both upgrade existing sites and plan large-scale new-builds. Europe's AI market, projected to grow at a 36.4% CAGR through 2033, is driving this wave of investment as operators scramble to match demand.

Operators face mounting pressure to address the environmental costs of rapid growth, as expansion alone cannot meet the challenge. The path forward lies in designing facilities that are sustainable by default, while balancing resilience, efficiency, and adaptability to ensure data centres can support the accelerating demands of AI.

The cost of progress
Customer expectations for data centres have shifted dramatically in recent years. The rapid uptake of AI and cloud technologies is fuelling demand for colocation environments that are scalable, flexible, and capable of supporting constantly evolving workloads and managing surging volumes of data.

But this evolution comes at a cost. AI and other compute-intensive applications demand vast amounts of processing power, which in turn places new strains on both energy and water resources. Global data centre electricity usage is projected to reach 1,050 terawatt-hours (TWh) by 2026, placing data centres among the world's top five national consumers.

This rising consumption has put data centres firmly under the spotlight. Regulators, customers, and the wider public are scrutinising how facilities are designed and operated, making it clear that sustainability can no longer be treated as optional. To meet these new expectations, operators must balance performance with environmental responsibility, rethinking infrastructure from the ground up.

Steps to a next-generation sustainable data centre
1. Embed sustainability from day one
Facilities designed 'green by default' are better placed to meet both operational and environmental goals, which is why sustainability can't be an afterthought. This requires renewable energy integration from the outset through on-site solar, hydroelectric systems, or long-term clean power purchase agreements. Operators across Europe are also committing to industry frameworks like the Climate Neutral Data Centre Pact and the European Green Digital Coalition, ensuring progress is independently verified. Embedding sustainability into the design and operation of data centres not only reduces carbon intensity but also creates long-term efficiency gains that help manage AI's heavy energy demands.

2. Build for flexibility and scale
Modern businesses need infrastructures that can grow with them. For operators, this means creating resilient IT environments with the space and power capacity to support future demand. Offering adaptable options - such as private cages and cross-connects - gives customers the freedom to scale resources up or down, as well as tailor facilities to their unique needs. This flexibility underpins cloud expansion, digital transformation initiatives, and the integration of new applications - all while helping customers remain agile in a competitive market.

3. Engineering for the AI workload
AI and high-performance computing (HPC) workloads demand far more power and cooling capacity than traditional IT environments, and conventional designs are struggling to keep up. Facilities must be engineered specifically for high-density deployments. Advanced cooling technologies, such as liquid cooling, allow operators to safely and sustainably support power densities far above 20 kW per rack, essential for next-generation GPUs and other AI-driven infrastructure. Rethinking power distribution, airflow management, and rack layout ensures high-density computing can be delivered efficiently without compromising stability or sustainability.

4. Location matters
Where a data centre is built plays a major role in its sustainability profile, and regional providers often offer greater flexibility and more personalised services to meet customer needs. Italy, for example, has become a key destination for new facilities. Its cloud computing market is estimated at €10.8 billion (£9.4 billion) in 2025 and is forecast to more than double to €27.4 billion (£23.9 billion) by 2030, growing at a CAGR of 20.6%. Significant investments from hyperscalers in recent years are accelerating growth, making the region a hotspot for operators looking to expand in Europe.

5. Stay compliant with regulations and certifications
Strong regulatory and environmental compliance is fundamental. Frameworks such as the General Data Protection Regulation (GDPR) safeguard data, while certifications like LEED (Leadership in Energy and Environmental Design) demonstrate energy efficiency and environmental accountability. Adhering to these standards ensures legal compliance, but it also improves operational transparency and strengthens credibility with customers.

Sustainability and performance as partners
The data centres of tomorrow must scale sustainably to meet the demands of AI, cloud, and digital transformation. This requires embedding efficiency and adaptability into every stage of design and operation. Investment in renewable energy, such as hydro and solar, will be crucial to reducing emissions. Equally, innovations like liquid cooling will help manage the thermal loads of compute-heavy AI environments. Emerging technologies - including agentic AI systems that autonomously optimise energy use and breakthroughs in quantum computing - promise to take efficiency even further.

In short, sustainability and performance are no longer competing objectives; together, they form the foundation of a resilient digital future where AI can thrive without compromising the planet.

For more from Aruba, click here.
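To illustrate why rack densities far above 20 kW push operators towards liquid cooling, the short sketch below applies the standard heat-transfer relation Q = m_dot x c_p x delta_T to compare the air and water flow needed to remove a given rack load. The rack powers and temperature rises are illustrative assumptions, not figures from Aruba.

```python
# Rough comparison of air vs water flow needed to remove a rack heat load,
# using Q = m_dot * c_p * delta_T. Rack powers and temperature rises are
# illustrative assumptions only.

CP_AIR = 1005.0      # J/(kg*K), specific heat of air
CP_WATER = 4186.0    # J/(kg*K), specific heat of water
RHO_AIR = 1.2        # kg/m^3, air density at roughly room conditions
RHO_WATER = 1000.0   # kg/m^3


def air_flow_m3_per_s(load_w: float, delta_t_k: float) -> float:
    """Volumetric airflow needed to carry away load_w with a delta_t_k air temperature rise."""
    mass_flow = load_w / (CP_AIR * delta_t_k)
    return mass_flow / RHO_AIR


def water_flow_l_per_min(load_w: float, delta_t_k: float) -> float:
    """Coolant flow needed to carry away load_w with a delta_t_k water temperature rise."""
    mass_flow = load_w / (CP_WATER * delta_t_k)
    return mass_flow / RHO_WATER * 1000.0 * 60.0


if __name__ == "__main__":
    for rack_kw in (20, 50, 100):
        load = rack_kw * 1000.0
        air = air_flow_m3_per_s(load, delta_t_k=12.0)        # ~12 K air-side rise
        water = water_flow_l_per_min(load, delta_t_k=10.0)   # ~10 K liquid-side rise
        print(f"{rack_kw:>3} kW rack: ~{air:5.2f} m^3/s of air  vs  ~{water:6.1f} L/min of water")
```

Even with generous temperature rises, the air volumes required at 50 to 100 kW per rack quickly become impractical to move and contain, which is why direct-to-chip liquid loops take over the bulk of the heat at these densities.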

atNorth's ICE03 wins award for environmental design
atNorth, an Icelandic operator of sustainable Nordic data centres, has received the Environmental Impact Award at the Data Center Dynamics Awards for its expansion of the ICE03 data centre in Akureyri, Iceland. The category recognises projects that demonstrate clear reductions in the environmental impact of data centre operations.

The expansion was noted for its design approach, which combines environmental considerations with social and economic benefits. ICE03 operates in a naturally cool climate and uses renewable energy to support direct liquid cooling. The site was constructed with sustainable materials, including glulam and Icelandic rockwool, and was planned with regard for the surrounding landscape.

Heat-reuse partnership and local benefits
atNorth says all its new sites are built to accommodate heat-reuse equipment. For ICE03, the company partnered with the local municipality to distribute waste heat to community projects, including a greenhouse where school children learn about ecological cultivation and sustainable food production. The initiative reduces the data centre's carbon footprint, supports locally grown produce, and contributes to a regional circular economy. ICE03 operates with a PUE below 1.2, compared with a global average of 1.56.

During the first phase of ICE03's development, more than 90% of the workforce was recruited locally, and the company says it intends to continue hiring within the area as far as possible. atNorth also states it supports educational and community initiatives through volunteer activity and financial contributions. Examples include donating mechatronics equipment to the Vocational College of Akureyri and supporting local sports, events, and search and rescue services.

The ICE03 site has also enabled a new point of presence (PoP) in the region, established by telecommunications company Farice. The PoP provides direct routing to mainland Europe via submarine cables, improving network resilience for northern Iceland.

Ásthildur Sturludóttir, the Mayor of Akureyri, estimates that atNorth's total investment in the town will reach around €109 million (£95.7 million), with long-term investment expected to rise to approximately €200 million (£175.6 million). This, combined with the wider economic and social contribution of the site, is positioned as a model for future data centre development.

Eyjólfur Magnús Kristinsson, CEO of atNorth, comments, "We are delighted that our ICE03 data centre has been recognised for its positive impact on its local environment.

"There is a critical need for a transformation in the approach to digital infrastructure development to ensure the scalability and longevity of the industry. Data centre operators must take a holistic approach to become long-term, valued partners of thriving communities."

For more from atNorth, click here.
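For context on the PUE figures quoted above: power usage effectiveness is total facility energy divided by the energy delivered to IT equipment. The short sketch below uses an assumed 10 MW IT load (an illustrative figure, not atNorth's) to show how much overhead energy separates a sub-1.2 facility from the 1.56 global average.

```python
# PUE = total facility energy / IT equipment energy.
# The 10 MW IT load is an assumed, illustrative figure.

def overhead_mw(it_load_mw: float, pue: float) -> float:
    """Non-IT load (cooling, power distribution, lighting) implied by a given PUE."""
    return it_load_mw * (pue - 1.0)


IT_LOAD_MW = 10.0

for pue in (1.56, 1.2):
    print(f"PUE {pue:.2f}: facility load {IT_LOAD_MW * pue:5.1f} MW, "
          f"overhead {overhead_mw(IT_LOAD_MW, pue):4.1f} MW")

saving = overhead_mw(IT_LOAD_MW, 1.56) - overhead_mw(IT_LOAD_MW, 1.2)
print(f"Running at 1.2 instead of 1.56 avoids ~{saving:.1f} MW of continuous overhead, "
      f"roughly {saving * 8760:,.0f} MWh per year.")
```

On those assumed numbers, the difference is about 3.6 MW of continuous overhead, which is the scale of saving that free cooling and heat reuse make possible.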

Data centre delivery: How consistency endures
In this exclusive article for DCNN, Steve Clifford, Director of Data Centres at EMCOR UK, describes how end-to-end data centre design, building, and maintenance is essential for achieving data centre uptime and optimisation:

Keeping services live
Resilience and reliability. These aren't optional features of data centres; they are essential requirements that demand precision and a keen eye for detail from all stakeholders. If a patchwork of subcontractors is delivering data centre services, that can muddy the waters by complicating supply chains. This heightens the risk of miscommunication, which can cause project delays and operational downtime.

Effective design and implementation are essential at a time when the data centre market is undergoing significant expansion, with take-up of capacity expected to grow by 855 MW - a 22% year-on-year increase - in Europe alone. In-house engineering and project management teams can prioritise open communication to help build, manage, and maintain data centres over time. This end-to-end approach allows for continuity from initial consultation through to long-term operational excellence, so data centres can do what they do best: uphold business continuity and serve millions of people.

Designing and building spaces
Before a data centre can be built, logistical challenges need to be addressed. In many regions, grid availability is limited, land is constrained, and planning approvals can take years. Compliance is key too: no matter the space, builds should align with ISO frameworks, local authority regulations, and - in some cases - critical national infrastructure (CNI) standards.

Initially, teams need to uphold a customer's business case, identify the optimal location, address cooling concerns, identify which risks to mitigate, and understand what space for expansion or improvement should be factored in now. While pre-selection of contractors and consultants is vital at this stage, it makes strategic sense to select a complementary delivery team that can manage mobilisation and long-term performance too. Engineering providers should collaborate with customer stakeholders, consultants, and supply chain partners so that the solution delivered is fit for purpose throughout its operational lifespan.

As greenfield development can be lengthy, upgrading existing spaces is a popular alternative. Called 'retrofitting', this route can reduce cost by 40% and project timelines by 30%. When building in pre-existing spaces, maintaining continuity in live environments is crucial. For example, our team recently developed a data hall within a contained 1,000m² existing facility. Engineers used profile modelling to identify an optimal cooling configuration based on hot aisle containment and installed a 380V DC power system to maximise energy efficiency. This resulted in 96.2% efficiency across rectifiers and converters. The project delivered 136 cabinets against a brief of 130 and, crucially, didn't disrupt business-as-usual operations, using a phased integration for the early deployment of IT systems.

Maintaining continuity
In certain spaces, like national defence and highly sensitive operations, maintaining continuity is fundamental. Critical infrastructure maintenance in these environments needs to prioritise security and reliability, as these facilities sit at the heart of national operations.
Ongoing operational management requires a 24/7 engineering presence, supported by proactive maintenance management, comprehensive systems monitoring, a strategic critical spares strategy, and a robust event and incident management process. This constant presence, from the initial stages of consultation through to ongoing operational support, delivers clear benefits that compound over time; the same team that understands the design rationale can anticipate potential issues and respond swiftly when challenges arise. Using 3D modelling to coordinate designs and time-lapse visualisations depicting project progress can keep stakeholders up to date.

Asset management in critical environments like CNI also demands strict maintenance scheduling and control, coupled with complete risk transparency to customers. Total honesty and trust are non-negotiable, so weekly client meetings can maintain open communication channels, ensuring customers are fully informed about system status, upcoming maintenance windows, and any potential risks on the horizon.

Meeting high expectations
These high-demand environments have high expectations, so keeping engineering teams engaged and motivated is key to long-term performance. A holistic approach to staff engagement should focus on continuous training and development to deliver greater continuity and deeper site expertise. When engineers intimately understand customer expectations and site needs, they can maintain the seamless service these critical operations demand.

Focusing on continuity delivers measurable results. For one defence-grade data centre customer, we have maintained 100% uptime over eight years, from day one of operations. Consistent processes and dedicated personnel form a long-term commitment to operational excellence.

Optimising spaces for the future
Self-delivery naturally lends itself to growing, evolving relationships with customers. By transitioning to self-delivering entire projects and operations, organisations can benefit from a single point of contact while maintaining control over most aspects of service delivery. Rather than offering generic solutions, established relationships allow for bespoke approaches that anticipate future requirements and build in flexibility from the outset. A continuous improvement model ensures long-term capability development, with energy efficiency improvement representing a clear focus area as sustainability requirements become increasingly stringent.

AI and HPC workloads are pushing rack densities higher, creating new demands for thermal management, airflow, and power draw. Many operators are also embedding smart systems - from IoT sensors to predictive analytics tools - into designs. These platforms provide real-time visibility of energy use, asset performance, and environmental conditions, enabling data-driven decision-making and continuous optimisation. Operators may also upgrade spaces to higher-efficiency systems and smart cooling, which support better PUE outcomes and long-term energy savings. When paired with digital tools for energy monitoring and predictive maintenance, teams can deliver smarter operations and provide measurable returns on investment.

Continuity: A strategic tool
Uptime is critical – and engineering continuity is not just beneficial, but essential. From the initial stages of design and consultation through to ongoing management and future optimisation, data centres need consistent teams, transparent processes, and strategic relationships that endure.
The end-to-end approach transforms continuity from an operational requirement into a strategic advantage, enabling facilities to adapt to evolving demands while maintaining constant uptime. When consistency becomes the foundation, exceptional performance follows.
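As a rough illustration of why the conversion efficiency quoted in the retrofit example earlier in this article matters, the sketch below compares the annual losses of a high-efficiency DC distribution chain with a less efficient baseline. The 96.2% figure comes from the article; the 90% baseline, the 500 kW IT load, and the tariff are hypothetical assumptions, not EMCOR UK project data.

```python
# Annual energy and cost impact of power-chain efficiency for a fixed IT load.
# The 96.2% figure is quoted in the article; the 90% baseline, 500 kW IT load,
# and £0.25/kWh tariff are hypothetical assumptions for illustration.

HOURS_PER_YEAR = 8760
IT_LOAD_KW = 500.0
TARIFF_GBP_PER_KWH = 0.25


def annual_loss_kwh(it_load_kw: float, chain_efficiency: float) -> float:
    """Energy lost in rectifiers/converters to deliver it_load_kw continuously for a year."""
    input_kw = it_load_kw / chain_efficiency
    return (input_kw - it_load_kw) * HOURS_PER_YEAR


for eff in (0.962, 0.90):
    loss = annual_loss_kwh(IT_LOAD_KW, eff)
    print(f"Chain efficiency {eff:.1%}: ~{loss:,.0f} kWh lost per year "
          f"(~£{loss * TARIFF_GBP_PER_KWH:,.0f})")
```

On these assumptions, the higher-efficiency chain avoids several hundred megawatt-hours of losses per year for a single hall, which is the kind of margin that digital energy monitoring is used to track and protect.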

Schneider Electric unveils AI DC reference designs
Schneider Electric, a French multinational specialising in energy management and industrial automation, has announced new data centre reference designs developed with NVIDIA, aimed at supporting AI-ready infrastructure and easing deployment for operators.

The designs include integrated power management and liquid cooling controls, with compatibility for NVIDIA Mission Control, the company's AI factory orchestration software. They also support deployment of NVIDIA GB300 NVL72 racks with densities of up to 142kW per rack.

Integrated power and cooling management
The first reference design provides a framework for combining power management and liquid cooling systems, including Motivair technologies. It is designed to work with NVIDIA Mission Control to help manage cluster and workload operations. This design can also be used alongside Schneider Electric's other data centre blueprints for NVIDIA Grace Blackwell systems, allowing operators to manage the power and liquid cooling requirements of accelerated computing clusters.

A second reference design sets out a framework for AI factories using NVIDIA GB300 NVL72 racks in a single data hall. It covers four technical areas: facility power, cooling, IT space, and lifecycle software, with versions available under both ANSI and IEC standards.

Deployment and performance focus
According to Schneider Electric, operators are facing significant challenges in deploying GPU-accelerated AI infrastructure at scale. Its designs are intended to speed up rollout and provide consistency across high-density deployments.

Jim Simonelli, Senior Vice President and Chief Technology Officer at Schneider Electric, says, "Schneider Electric is streamlining the process of designing, deploying, and operating advanced AI infrastructure with its new reference designs.

"Our latest reference designs, featuring integrated power management and liquid cooling controls, are future-ready, scalable, and co-engineered with NVIDIA for real-world applications - enabling data centre operators to keep pace with surging demand for AI."

Scott Wallace, Director of Data Centre Engineering at NVIDIA, adds, "We are entering a new era of accelerated computing, where integrated intelligence across power, cooling, and operations will redefine data centre architectures.

"With its latest controls reference design, Schneider Electric connects critical infrastructure data with NVIDIA Mission Control, delivering a rigorously validated blueprint that enables AI factory digital twins and empowers operators to optimise advanced accelerated computing infrastructure."

Features of the controls reference design
The controls system links operational technology and IT infrastructure using a plug-and-play approach based on the MQTT protocol. It is designed to provide:

• Standardised publishing of power management and liquid cooling data for use by AI management software and enterprise systems
• Management of redundancy across cooling and power distribution equipment, including coolant distribution units and remote power panels
• Guidance on measuring AI rack power profiles, including peak power and quality monitoring

Reference design for NVIDIA GB300 NVL72
The NVIDIA GB300 NVL72 reference design supports clusters of up to 142kW per rack. A data hall based on this design can accommodate three clusters powered by up to 1,152 GPUs, using liquid-to-liquid coolant distribution units and high-temperature chillers.
The design incorporates Schneider Electric’s ETAP and EcoStruxure IT Design CFD models, enabling operators to create digital twins for testing and optimisation. It builds on earlier blueprints for the NVIDIA GB200 NVL72, reflecting Schneider Electric’s ongoing collaboration with NVIDIA. The company now offers nine AI reference designs covering a range of scenarios, from prefabricated modules and retrofits to purpose-built facilities for NVIDIA GB200 and GB300 NVL72 clusters. For more from Schneider Electric, click here.
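The controls reference design described above standardises telemetry publication over MQTT. As a purely illustrative sketch, and not Schneider Electric's or NVIDIA's actual topic structure or payload schema, a coolant distribution unit might publish readings to a broker roughly like this using the open-source paho-mqtt client.

```python
# Hypothetical MQTT telemetry publisher for a coolant distribution unit (CDU).
# Topic name, payload fields, and broker address are invented for illustration;
# they are not Schneider Electric's or NVIDIA's actual schema.
import json
import time

import paho.mqtt.client as mqtt  # pip install paho-mqtt

BROKER_HOST = "broker.example.local"       # assumed on-site MQTT broker
TOPIC = "dc/hall1/cdu/cdu-01/telemetry"    # invented topic hierarchy

# paho-mqtt 1.x style constructor; on 2.x use mqtt.Client(mqtt.CallbackAPIVersion.VERSION2).
client = mqtt.Client()
client.connect(BROKER_HOST, 1883)
client.loop_start()

try:
    while True:
        payload = {
            "timestamp": time.time(),
            "supply_temp_c": 32.1,     # coolant supply temperature
            "return_temp_c": 42.6,     # coolant return temperature
            "flow_lpm": 310.0,         # coolant flow, litres per minute
            "rack_power_kw": 128.4,    # measured rack power draw
        }
        client.publish(TOPIC, json.dumps(payload), qos=1)
        time.sleep(5)
finally:
    client.loop_stop()
    client.disconnect()
```

Downstream software, whether a DCIM platform or an orchestration layer, would typically subscribe to such topics rather than polling each device, which is the plug-and-play property this kind of controls design aims for.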

'Cranes key to productivity in data centre construction'
With companies and consumers increasingly reliant on cloud-based computing and services, data centre construction has moved higher up the agenda across the world. Recently re-categorised as 'Critical National Infrastructure' in the UK, the market is highly competitive and demand for new facilities is high.

However, these projects are very sensitive to risk. Challenges include the highly technical nature of some of the work, which relies on a specialist supply chain, and long lead times for equipment such as servers, computer chips, and backup generators – in some cases, up to two years. Time is of the essence. Every day of delay during a construction programme can have a multimillion-pound impact in lost income, and project teams can be penalised for falling behind. However, help is at hand from an unexpected source: the cranage provider.

Cutting construction time by half
Marr Contracting, an Australian provider of heavy-lift luffing tower cranes and cranage services, has been working on data centres around the world for several years. Its methodology is helping data centre projects reach completion in half the average time.

"The first time that I spoke to a client about their data centre project, they told me that they were struggling with the lifting requirements," explains Simon Marr, Managing Director at Marr Contracting. "There were lots of heavy precast components, and sequencing them correctly alongside other elements of the programme was proving difficult.

"It was a traditional set-up with mobile cranes sitting outside the building structure, which made the site congested and 'confused.'

"There was a clash between the critical path works of installing the in-ground services and the construction of the main structure, as the required mobile crane locations were hindering the in-ground works and the in-ground works were hindering where the mobile cranes could be placed. This in turn resulted in an extended programme."

The team at Marr suggested a different approach: to place fewer, yet larger-capacity cranes in strategic locations so that they could service the whole site and allow the in-ground works to proceed concurrently. By adopting this philosophy, the project was completed in half the time of a typical build. Marr has partnered with the client on every development since, with the latest project completed in just 35 weeks.

"It's been transformational," claims Simon. "The solution removes complexity and improves productivity by allowing construction to happen across multiple work fronts. This, in turn, reduces the number of cranes on the project."

Early engagement is key
Simon believes early engagement is key to achieving productivity and efficiency gains on data centre projects. He says, "There is often a disconnect between the engineering and planning of a project and how cranes integrate into the project's construction logic.

"The current approach, where the end-user of the crane issues a list of requirements for a project, with no visibility on the logic behind how these cranes will align with the construction methodology, is flawed.

"It creates a situation where more cranes are usually added to an already congested site to fill the gap that could have been covered by one single tower crane."

One of the main pressure points on projects that is specific to data centres is the requirement around services. "The challenge with data centres is that a lot of power and water is needed, which means lots of in-ground services," continues Simon.

"The ideal would be to build these together, but that's not possible with a traditional cranage solution, because you're making a compromise on whether you install the in-ground services or whether you delay that work so that the mobile cranes can support the construction of the structure. Ultimately, the programme falls behind."

"We've seen clients try to save money by downsizing the tower crane and putting it in the centre of the server hall. But this hinders the completion of the main structure and delays the internal fit-out works.

"Our approach is to use cranes that can do heavier lifts but that take up a smaller area, away from the critical path and outside the building structure. The crane solution should allow the concurrent delivery of critical path works – in turn, making the crane a servant to the project, not the hero.

"With more sites being developed in congested urban areas, particularly new, taller data centres with heavier components, this is going to be more of an issue in the future."

Thinking big
One of the benefits of early engagement and strategically deploying heavy-lift tower cranes is that it opens the door for the constructor to "think big" with their construction methodology. This appeals to the data centre market as it enables constructors to work to design for manufacture and assembly (DfMA). By using prefabricated, pre-engineered modules, DfMA aims to allow for the rapid construction and deployment of data centre facilities. Fewer, heavier lifts should reduce risk and improve safety because more components can be assembled offsite, delivered to the site, and then placed through fewer major crane lifts instead of multiple, smaller lifts.

Simon claims, "By seeking advice from cranage experts early in the bid and design development stage of a project, the project can benefit from lower project costs, improved safety, higher quality, and quicker construction."

InfraPartners launches Advanced Research and Engineering
InfraPartners, a designer and builder of prefabricated AI data centres, today announced the launch of a new research function, InfraPartners Advanced Research and Engineering. Led by recent hire Bal Aujla, previously the Global Head of Innovation Labs at BlackRock, InfraPartners has assembled a team of experts based in Europe and the US to act as a resource for AI innovation in the data centre industry. The function seeks to foster industry collaboration to provide forward-looking insights and thought leadership.

AI demand is driving a surge in new global data centre builds, which are projected to triple by 2030, with AI-specific infrastructure expected to drive approximately 70% of this growth. Additionally, regulation, regionalisation, and geopolitical shifts are reshaping infrastructure needs. As a result, operators are looking at new ways to meet these changes with solutions that deliver scale, schedule certainty, and accelerated time-to-value while improving sustainability and avoiding technology obsolescence.

InfraPartners Advanced Research and Engineering intends to accelerate data centre innovation by identifying and focusing on the biggest opportunities and challenges of this next wave of AI-driven growth. With plans for gigawatts (GW) of data centre builds globally and projected investments reaching nearly $7 trillion (£5.15 trillion), the impact of new innovation will be significant. Through partnerships with industry experts, regulators, and disruptive newcomers, the InfraPartners Advanced Research and Engineering team aims to foster a community where ideas and research can be shared to grow data centre knowledge, capabilities, and opportunities. These efforts will aim to advance the digital infrastructure sector as a whole.

"At InfraPartners, our new research function represents the deliberate convergence of expertise from across the AI and data centre ecosystem. We're bringing together professionals with diverse perspectives and backgrounds in artificial intelligence, data centre architecture, power infrastructure, and capital allocation to address the evolving needs of AI and the significant value it can bring to the world," says Bal Aujla, Director, Head of Advanced Research and Engineering at InfraPartners.

"This integrated team approach enables us to look at opportunities and challenges from end to end and across every layer of the stack. We're no longer approaching digital infrastructure as a siloed engineering challenge. Instead, the new team will focus on the initiatives that have the most impact on transforming data centre architecture and creating business value."

InfraPartners Advanced Research and Engineering says it has developed a new design philosophy that prioritises flexibility, upgradeability, and rapid refresh cycles. Called the 'Upgradeable Data Center', this design, it claims, future-proofs data centre investments and enables greater resilience and sustainability in a fast-changing digital landscape.

"The Upgradeable Data Center reflects the fact that data centres must now be built to evolve. In a world where GPU generations shift every 12–18 months and designs change significantly each time, it is no longer viable to build static infrastructure with decades-long refresh cycles. Our design approach enables operators to deploy the latest GPUs and upgrade data centre infrastructure in a seamless way," notes Harqs Singh, Chief Technology Officer at InfraPartners.
In its first white paper, 'Data Centers Transforming at the Pace of Technology Change', the team explores the rapid growth of AI workloads and its impact on digital infrastructure, including the GPU technology roadmap, increasing rack densities, and the implications for the modern data centre. It highlights the economic risks and commercial opportunities emerging from these trends and introduces how the Upgradeable Data Center seeks to enable new data centres to transform at the pace of technology change.

InfraPartners' model is to build 80% of the data centre offsite and 20% onsite, helping address key industry challenges like skilled labour shortages and power constraints, whilst aligning infrastructure investment with business agility and long-term growth.

Pulsant launches data centre redesign programme
Pulsant, an edge infrastructure provider, today announced a nationwide initiative to enhance and standardise the client experience across all sites in its platformEDGE network. The programme builds on the successful redesign of its Croydon data centre, which received positive survey feedback.

The project aims to deliver a consistent experience across all Pulsant data centres and will involve ongoing client input and feedback during each site implementation. The roll-out is scheduled to take place throughout 2025, with South Gyle in Edinburgh earmarked as the next location.

Commenting on the decision, Ben Cranham, Chief Operating Officer at Pulsant, says, "Our commitment to delivering the optimal client experience drives us to continuously survey and respond to an evolving set of onsite requirements. The success of our Croydon site redesign has shown us the value in that approach, and this nationwide roll-out reflects our dedication to ensuring that all our clients, regardless of which data centre they use, enjoy the same level of experience and support from Pulsant."

For more from Pulsant, click here.

Legrand unveils new Starline Series-S Track Busway
Legrand has unveiled the next-generation Starline Series-S Track Busway power distribution system. The product combines the performance, functionality, and flexibility of Starline's proven track busway technology with the added benefit of an IP54 ingress-protection rating. As a result, the company says it is meeting the growing commercial and industrial demand for splashproof, highly dust-resistant, and extremely flexible electrical power distribution systems.

According to Data Bridge Market Research, the busway market will reach a valuation of US$3.5bn by 2028, growing at a CAGR of 8.6%. This growth will be driven by a growing desire for user-friendly, cost-effective, and highly adaptable electrical distribution systems that support changing load locations in environments with limited space. For over 30 years, Starline has been meeting this need with innovative, easy-to-use track busway designs that reduce costs while maximising efficiency and scalability. With a design that features multiple mounting options, a continuous access slot, and configurable plug-in units that can be connected and disconnected without de-energising the busway, Starline has helped some of the biggest names in the commercial and industrial space eliminate their dependency on busduct as well as pipe-and-wire solutions.

The new Series-S Track Busway family of products allows users to enjoy all the benefits of its innovative design anywhere that water, dust, or other contaminants require up to an IP54 (including IP44) or NEMA 3R rating. This opens Starline's track busway technology for use in wet environments, dusty or debris-contaminated manufacturing, areas with sprinkler or splashproof requirements, and outdoor applications.

Starline Series-S Track Busway's protection level extends to its uniquely designed plug-in units, which are offered with a wide variety of watertight IEC and NEMA-rated devices to meet any need. Other features include:

• Available in five and 10 foot sections, with custom lengths upon request
• 100 to 1,200 A systems, four pole, rated up to 600Vac or 600Vdc
• UL, IEC, and ETL certifications
• Aluminium housing with corrosion-resistant coating
• Splashproof and highly dust-resistant design with watertight IEC and NEMA device options
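To put the 100 to 1,200 A ratings above in context, the short sketch below applies the standard three-phase power formula P = √3 × V × I × PF. The power factor and the example currents are generic electrical arithmetic for illustration, not Legrand specifications.

```python
# Real power available from a balanced three-phase busway at a given current rating.
# Generic electrical arithmetic for illustration; not Legrand specifications.
import math

V_LINE = 600.0        # volts, line-to-line (the AC rating quoted above)
POWER_FACTOR = 0.95   # assumed load power factor


def three_phase_kw(current_a: float, v_line: float = V_LINE, pf: float = POWER_FACTOR) -> float:
    """Real power (kW) carried by a balanced three-phase feed at the given current."""
    return math.sqrt(3) * v_line * current_a * pf / 1000.0


for amps in (100, 400, 1200):
    print(f"{amps:>5} A at {V_LINE:.0f} V, PF {POWER_FACTOR}: ~{three_phase_kw(amps):,.0f} kW")
```

On these assumptions, a fully rated 1,200 A run at 600 V corresponds to roughly 1.2 MW of real power, which is why busway of this class is sized per row or per hall rather than per rack.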


