Exploring Modern Data Centre Design


Data centre delivery: How consistency endures
In this exclusive article for DCNN, Steve Clifford, Director of Data Centres at EMCOR UK, describes how end-to-end data centre design, building, and maintenance is essential for achieving data centre uptime and optimisation.

Keeping services live

Resilience and reliability. These aren’t optional features of data centres; they are essential requirements that demand precision and a keen eye for detail from all stakeholders. If a patchwork of subcontractors is delivering data centre services, supply chains become more complicated and harder to oversee. This heightens the risk of miscommunication, which can cause project delays and operational downtime.

Effective design and implementation are essential at a time when the data centre market is undergoing significant expansion, with take-up of capacity expected to grow by 855MW - a 22% year-on-year increase - in Europe alone. In-house engineering and project management teams can prioritise open communication to help build, manage, and maintain data centres over time. This end-to-end approach allows for continuity from initial consultation through to long-term operational excellence, so data centres can do what they do best: uphold business continuity and serve millions of people.

Designing and building spaces

Before a data centre can be built, logistical challenges need to be addressed. In many regions, grid availability is limited, land is constrained, and planning approvals can take years. Compliance is key too: no matter the space, builds should align with ISO frameworks, local authority regulations, and - in some cases - critical national infrastructure (CNI) standards.

Initially, teams need to uphold a customer’s business case, identify the optimal location, address cooling concerns, identify which risks to mitigate, and understand what space for expansion or improvement should be factored in now. While pre-selection of contractors and consultants is vital at this stage, it makes strategic sense to select a complementary delivery team that can manage mobilisation and long-term performance too. Engineering providers should collaborate with customer stakeholders, consultants, and supply chain partners so that the solution delivered is fit for purpose throughout its operational lifespan.

As greenfield development can be lengthy, upgrading existing spaces is a popular alternative. Known as 'retrofitting', this route can cut costs by 40% and project timelines by 30%. When building in pre-existing spaces, maintaining continuity in live environments is crucial. For example, our team recently developed a data hall within an existing 1,000m² facility. Engineers used profile modelling to identify an optimal cooling configuration based on hot aisle containment and installed a 380V DC power system to maximise energy efficiency, achieving 96.2% efficiency across rectifiers and converters. The project delivered 136 cabinets against a brief of 130 and, crucially, didn’t disrupt business-as-usual operations, using phased integration for the early deployment of IT systems.
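To illustrate why a 380V DC distribution chain can lift end-to-end efficiency, here is a minimal sketch comparing it with a conventional double-conversion AC chain. The per-stage efficiency figures are illustrative assumptions, not measurements from the project described above.

```python
# Rough comparison of end-to-end power-chain efficiency for a conventional
# double-conversion AC UPS chain versus a 380V DC chain.
# All per-stage efficiencies below are illustrative assumptions only.

def chain_efficiency(stages):
    """Multiply per-stage efficiencies to get end-to-end efficiency."""
    eff = 1.0
    for stage_eff in stages:
        eff *= stage_eff
    return eff

# Conventional AC chain: UPS rectifier, UPS inverter, server PSU (AC-DC)
ac_chain = [0.97, 0.97, 0.94]

# 380V DC chain: one rectification stage feeding DC-DC server PSUs
# (multiplies to roughly the 96.2% figure cited in the article)
dc_chain = [0.98, 0.982]

for name, stages in (("AC chain", ac_chain), ("380V DC chain", dc_chain)):
    print(f"{name}: {chain_efficiency(stages):.1%} end-to-end")
```

Fewer conversion stages means fewer places to lose energy, which is the essential argument for DC distribution in dense halls.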
Maintaining continuity

In certain spaces, such as national defence and other highly sensitive operations, maintaining continuity is fundamental. Critical infrastructure maintenance in these environments needs to prioritise security and reliability, as these facilities sit at the heart of national operations.

Ongoing operational management requires a 24/7 engineering presence, supported by proactive maintenance management, comprehensive systems monitoring, a strategic critical spares strategy, and a robust event and incident management process. This constant presence, from the initial stages of consultation through to ongoing operational support, delivers benefits that compound over time: the same team that understands the design rationale can anticipate potential issues and respond swiftly when challenges arise. Using 3D modelling to coordinate designs and time-lapse visualisations to depict project progress can keep stakeholders up to date.

Asset management in critical environments such as CNI also demands strict maintenance scheduling and control, coupled with complete risk transparency for customers. Total honesty and trust are non-negotiable, so weekly client meetings can maintain open communication channels, ensuring customers are fully informed about system status, upcoming maintenance windows, and any potential risks on the horizon.

Meeting high expectations

These high-demand environments have high expectations, so keeping engineering teams engaged and motivated is key to long-term performance. A holistic approach to staff engagement should focus on continuous training and development to deliver greater continuity and deeper site expertise. When engineers intimately understand customer expectations and site needs, they can maintain the seamless service these critical operations demand.

Focusing on continuity delivers measurable results. For one defence-grade data centre customer, we have maintained 100% uptime over eight years, from day one of operations. Consistent processes and dedicated personnel form a long-term commitment to operational excellence.

Optimising spaces for the future

Self-delivery naturally lends itself to growing, evolving relationships with customers. By transitioning to self-delivering entire projects and operations, organisations can benefit from a single point of contact while maintaining control over most aspects of service delivery. Rather than offering generic solutions, established relationships allow for bespoke approaches that anticipate future requirements and build in flexibility from the outset. A continuous improvement model ensures long-term capability development, with energy efficiency improvement representing a clear focus area as sustainability requirements become increasingly stringent.

AI and HPC workloads are pushing rack densities higher, creating new demands for thermal management, airflow, and power draw. Many operators are also embedding smart systems - from IoT sensors to predictive analytics tools - into designs. These platforms provide real-time visibility of energy use, asset performance, and environmental conditions, enabling data-driven decision-making and continuous optimisation. Operators may also upgrade spaces to higher-efficiency systems and smart cooling, which support better PUE outcomes and long-term energy savings. When paired with digital tools for energy monitoring and predictive maintenance, teams can deliver smarter operations and provide measurable returns on investment.
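As a simple illustration of the PUE metric mentioned above, the sketch below computes power usage effectiveness from total facility and IT power readings and estimates the energy saved by an efficiency upgrade. The figures are hypothetical and not drawn from any specific site.

```python
# Power Usage Effectiveness (PUE) = total facility power / IT equipment power.
# A value closer to 1.0 means less energy is spent on cooling, power
# conversion, and other overheads. All figures below are hypothetical.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    return total_facility_kw / it_load_kw

before_upgrade = pue(total_facility_kw=1800.0, it_load_kw=1000.0)  # 1.80
after_upgrade = pue(total_facility_kw=1300.0, it_load_kw=1000.0)   # 1.30

annual_it_energy_mwh = 1000.0 * 8760 / 1000  # 1 MW of IT load over a year
saving_mwh = (before_upgrade - after_upgrade) * annual_it_energy_mwh
print(f"PUE improved from {before_upgrade:.2f} to {after_upgrade:.2f}, "
      f"saving roughly {saving_mwh:,.0f} MWh of overhead energy per year")
```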
Continuity: A strategic tool

Uptime is critical, and engineering continuity is not just beneficial, but essential. From the initial stages of design and consultation through to ongoing management and future optimisation, data centres need consistent teams, transparent processes, and strategic relationships that endure.

The end-to-end approach transforms continuity from an operational requirement into a strategic advantage, enabling facilities to adapt to evolving demands while maintaining constant uptime. When consistency becomes the foundation, exceptional performance follows.

Schneider Electric unveils AI DC reference designs
Schneider Electric, a French multinational specialising in energy management and industrial automation, has announced new data centre reference designs developed with NVIDIA, aimed at supporting AI-ready infrastructure and easing deployment for operators. The designs include integrated power management and liquid cooling controls, with compatibility for NVIDIA Mission Control, the company’s AI factory orchestration software. They also support deployment of NVIDIA GB300 NVL72 racks with densities of up to 142kW per rack.

Integrated power and cooling management

The first reference design provides a framework for combining power management and liquid cooling systems, including Motivair technologies. It is designed to work with NVIDIA Mission Control to help manage cluster and workload operations. This design can also be used alongside Schneider Electric’s other data centre blueprints for NVIDIA Grace Blackwell systems, allowing operators to manage the power and liquid cooling requirements of accelerated computing clusters.

A second reference design sets out a framework for AI factories using NVIDIA GB300 NVL72 racks in a single data hall. It covers four technical areas: facility power, cooling, IT space, and lifecycle software, with versions available under both ANSI and IEC standards.

Deployment and performance focus

According to Schneider Electric, operators are facing significant challenges in deploying GPU-accelerated AI infrastructure at scale. Its designs are intended to speed up rollout and provide consistency across high-density deployments.

Jim Simonelli, Senior Vice President and Chief Technology Officer at Schneider Electric, says, “Schneider Electric is streamlining the process of designing, deploying, and operating advanced AI infrastructure with its new reference designs.

"Our latest reference designs, featuring integrated power management and liquid cooling controls, are future-ready, scalable, and co-engineered with NVIDIA for real-world applications - enabling data centre operators to keep pace with surging demand for AI.”

Scott Wallace, Director of Data Centre Engineering at NVIDIA, adds, “We are entering a new era of accelerated computing, where integrated intelligence across power, cooling, and operations will redefine data centre architectures.

"With its latest controls reference design, Schneider Electric connects critical infrastructure data with NVIDIA Mission Control, delivering a rigorously validated blueprint that enables AI factory digital twins and empowers operators to optimise advanced accelerated computing infrastructure.”

Features of the controls reference design

The controls system links operational technology and IT infrastructure using a plug-and-play approach based on the MQTT protocol. It is designed to provide:

• Standardised publishing of power management and liquid cooling data for use by AI management software and enterprise systems
• Management of redundancy across cooling and power distribution equipment, including coolant distribution units and remote power panels
• Guidance on measuring AI rack power profiles, including peak power and quality monitoring
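To make the MQTT-based, plug-and-play publishing model described above more concrete, here is a minimal sketch of how a rack's power and coolant telemetry might be published to a broker. The broker address, topic layout, and payload fields are illustrative assumptions only, not part of Schneider Electric's specification.

```python
# Minimal sketch of publishing rack power and liquid cooling telemetry over
# MQTT, in the spirit of the plug-and-play model described above.
# Broker address, topic names, and payload schema are illustrative assumptions.
import json
import time

import paho.mqtt.publish as publish  # pip install paho-mqtt

BROKER = "dcim-broker.example.local"   # hypothetical broker address
TOPIC = "datahall1/rack07/telemetry"   # hypothetical topic layout

payload = {
    "timestamp": time.time(),
    "power_kw": 138.4,           # instantaneous rack draw (design max ~142kW)
    "coolant_supply_c": 32.1,    # CDU supply temperature, degrees Celsius
    "coolant_return_c": 44.7,    # CDU return temperature, degrees Celsius
    "coolant_flow_lpm": 210.0,   # coolant flow, litres per minute
}

# QoS 1 gives at-least-once delivery to downstream management software
publish.single(TOPIC, json.dumps(payload), qos=1, hostname=BROKER, port=1883)
```

A shared topic and payload convention like this is what lets AI management software and enterprise systems consume power and cooling data without bespoke integrations for each device.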
Reference design for NVIDIA GB300 NVL72

The NVIDIA GB300 NVL72 reference design supports clusters of up to 142kW per rack. A data hall based on this design can accommodate three clusters powered by up to 1,152 GPUs, using liquid-to-liquid coolant distribution units and high-temperature chillers.

The design incorporates Schneider Electric’s ETAP and EcoStruxure IT Design CFD models, enabling operators to create digital twins for testing and optimisation. It builds on earlier blueprints for the NVIDIA GB200 NVL72, reflecting Schneider Electric’s ongoing collaboration with NVIDIA. The company now offers nine AI reference designs covering a range of scenarios, from prefabricated modules and retrofits to purpose-built facilities for NVIDIA GB200 and GB300 NVL72 clusters.

For more from Schneider Electric, click here.

'Cranes key to productivity in data centre construction'
With companies and consumers increasingly reliant on cloud-based computing and services, data centre construction has moved higher up the agenda across the world. Recently re-categorised as 'Critical National Infrastructure' in the UK, the market is highly competitive and demand for new facilities is high.

However, these projects are very sensitive to risk. Challenges include the highly technical nature of some of the work, which relies on a specialist supply chain, and long lead times for equipment such as servers, computer chips, and backup generators - in some cases, up to two years. Time is of the essence: every day of delay during a construction programme can have a multimillion-pound impact in lost income, and project teams can be penalised for falling behind. However, help is at hand from an unexpected source: the cranage provider.

Cutting construction time by half

Marr Contracting, an Australian provider of heavy-lift luffing tower cranes and cranage services, has been working on data centres around the world for several years. Its methodology is helping data centre projects reach completion in half the average time.

“The first time that I spoke to a client about their data centre project, they told me that they were struggling with the lifting requirements,” explains Simon Marr, Managing Director at Marr Contracting. “There were lots of heavy precast components, and sequencing them correctly alongside other elements of the programme was proving difficult.

“It was a traditional set-up with mobile cranes sitting outside the building structure, which made the site congested and ‘confused’. There was a clash between the critical path works of installing the in-ground services and the construction of the main structure, as the required mobile crane locations were hindering the in-ground works and the in-ground works were hindering where the mobile cranes could be placed. This in turn resulted in an extended programme.”

The team at Marr suggested a different approach: to place fewer, yet larger-capacity cranes in strategic locations so that they could service the whole site and allow the in-ground works to proceed concurrently. By adopting this philosophy, the project was completed in half the time of a typical build. Marr has partnered with the client on every development since, with the latest project completed in just 35 weeks.

“It’s been transformational,” claims Simon. “The solution removes complexity and improves productivity by allowing construction to happen across multiple work fronts. This, in turn, reduces the number of cranes on the project.”

Early engagement is key

Simon believes early engagement is key to achieving productivity and efficiency gains on data centre projects. He says, “There is often a disconnect between the engineering and planning of a project and how cranes integrate into the project’s construction logic. The current approach, where the end-user of the crane issues a list of requirements for a project, with no visibility of the logic behind how these cranes will align with the construction methodology, is flawed.

“It creates a situation where more cranes are usually added to an already congested site to fill the gap that could have been covered by one single tower crane.”

One of the main pressure points specific to data centre projects is the requirement for services. “The challenge with data centres is that a lot of power and water is needed, which means lots of in-ground services,” continues Simon.
“The ideal would be to build these together, but that’s not possible with a traditional cranage solution, because you’re making a compromise on whether you install the in-ground services or whether you delay that work so that the mobile cranes can support the construction of the structure. Ultimately, the programme falls behind.

“We’ve seen clients try to save money by downsizing the tower crane and putting it in the centre of the server hall. But this hinders the completion of the main structure and delays the internal fit-out works.

“Our approach is to use cranes that can do heavier lifts but that take up a smaller area, away from the critical path and outside the building structure. The crane solution should allow the concurrent delivery of critical path works - in turn, making the crane a servant to the project, not the hero.

“With more sites being developed in congested urban areas, particularly new, taller data centres with heavier components, this is going to be more of an issue in the future.”

Thinking big

One of the benefits of early engagement and strategically deploying heavy-lift tower cranes is that it opens the door for the constructor to “think big” with their construction methodology. This appeals to the data centre market as it enables constructors to adopt design for manufacture and assembly (DfMA). By using prefabricated, pre-engineered modules, DfMA allows for the rapid construction and deployment of data centre facilities. Fewer, heavier lifts should reduce risk and improve safety, because more components can be assembled offsite, delivered to site, and then placed through fewer major crane lifts instead of multiple smaller lifts.

Simon claims, “By seeking advice from cranage experts early in the bid and design development stage of a project, the project can benefit from lower project costs, improved safety, higher quality, and quicker construction."

InfraPartners launches Advanced Research and Engineering
InfraPartners, a designer and builder of prefabricated AI data centres, today announced the launch of a new research function, InfraPartners Advanced Research and Engineering. Led by recent hire Bal Aujla, previously the Global Head of Innovation Labs at BlackRock, InfraPartners has assembled a team of experts based in Europe and the US to act as a resource for AI innovation in the data centre industry. The function seeks to foster industry collaboration to provide forward-looking insights and thought leadership.

AI demand is driving a surge in new global data centre builds, which are projected to triple by 2030, with AI-specific infrastructure expected to drive approximately 70% of this growth. Additionally, regulation, regionalisation, and geopolitical shifts are reshaping infrastructure needs. As a result, operators are looking at new ways to meet these changes with solutions that deliver scale, schedule certainty, and accelerated time-to-value, while improving sustainability and avoiding technology obsolescence.

InfraPartners Advanced Research and Engineering intends to accelerate data centre innovation by identifying and focusing on the biggest opportunities and challenges of this next wave of AI-driven growth. With plans for gigawatts (GW) of data centre builds globally and projected investments reaching nearly $7 trillion (£5.15 trillion), the impact of new innovation will be significant. Through partnerships with industry experts, regulators, and disruptive newcomers, the team aims to foster a community where ideas and research can be shared to grow data centre knowledge, capabilities, and opportunities. These efforts aim to advance the digital infrastructure sector as a whole.

“At InfraPartners, our new research function represents the deliberate convergence of expertise from across the AI and data centre ecosystem. We’re bringing together professionals with diverse perspectives and backgrounds in artificial intelligence, data centre architecture, power infrastructure, and capital allocation to address the evolving needs of AI and the significant value it can bring to the world,” says Bal Aujla, Director, Head of Advanced Research and Engineering at InfraPartners.

“This integrated team approach enables us to look at opportunities and challenges from end to end and across every layer of the stack. We’re no longer approaching digital infrastructure as a siloed engineering challenge. Instead, the new team will focus on the initiatives that have the most impact on transforming data centre architecture and creating business value.”

InfraPartners Advanced Research and Engineering says it has developed a new design philosophy that prioritises flexibility, upgradeability, and rapid refresh cycles. Called the 'Upgradeable Data Center', this design, it claims, future-proofs data centre investments and enables greater resilience and sustainability in a fast-changing digital landscape.

“The Upgradeable Data Center reflects the fact that data centres must now be built to evolve. In a world where GPU generations shift every 12-18 months and designs change significantly each time, it is no longer viable to build static infrastructure with decades-long refresh cycles. Our design approach enables operators to deploy the latest GPUs and upgrade data centre infrastructure in a seamless way,” notes Harqs Singh, Chief Technology Officer at InfraPartners.
In its first white paper, 'Data Centers Transforming at the Pace of Technology Change', the team explores the rapid growth of AI workloads and its impact on digital infrastructure, including the GPU technology roadmap, increasing rack densities, and the implications for the modern data centre. It highlights the economic risks and commercial opportunities emerging from these trends and introduces how the Upgradeable Data Center seeks to enable new data centres to transform at the pace of technology change.

InfraPartners' model is to build 80% of the data centre offsite and 20% onsite, helping to address key industry challenges such as skilled labour shortages and power constraints, while aligning infrastructure investment with business agility and long-term growth.

Pulsant launches data centre redesign programme
Pulsant, an edge infrastructure provider, today announced a nationwide initiative to enhance and standardise the client experience across all sites in its platformEDGE network. The programme builds on the successful redesign of its Croydon data centre, which received positive survey feedback.

The project aims to deliver a consistent experience across all Pulsant data centres, and will involve ongoing client input and feedback during each site implementation. The roll-out is scheduled to take place throughout 2025, with South Gyle in Edinburgh earmarked as the next location.

Commenting on the decision, Ben Cranham, Chief Operating Officer at Pulsant, says, "Our commitment to delivering the optimal client experience drives us to continuously survey and respond to an evolving set of onsite requirements. The success of our Croydon site redesign has shown us the value in that approach and this nationwide roll-out reflects our dedication to ensuring that all our clients, regardless of which data centre they use, enjoy the same level of experience and support from Pulsant."

For more from Pulsant, click here.

Legrand unveils new Starline Series-S Track Busway
Legrand has unveiled the next generation of its Starline Series-S Track Busway power distribution system. The product combines the performance, functionality, and flexibility of Starline’s proven track busway technology with the added benefit of an IP54 ingress-protection rating. As a result, the company is meeting growing commercial and industrial demand for splashproof, highly dust-resistant, and flexible electrical power distribution systems.

According to Data Bridge Market Research, the busway market will reach a valuation of US$3.5bn by 2028, growing at a CAGR of 8.6%. This growth will be driven by demand for user-friendly, cost-effective, and highly adaptable electrical distribution systems that support changing load locations in environments with limited space. For over 30 years, Starline has been meeting this need with easy-to-use track busway designs that reduce costs while maximising efficiency and scalability. With a design that features multiple mounting options, a continuous access slot, and configurable plug-in units that can be connected and disconnected without de-energising the busway, Starline has helped some of the biggest names in the commercial and industrial space eliminate their dependency on busduct as well as pipe-and-wire solutions.

The new Series-S Track Busway family of products allows users to enjoy the benefits of this design anywhere that water, dust, or other contaminants require up to an IP54 (including IP44) or NEMA 3R rating. This opens the track busway technology for use in wet environments, dusty or debris-contaminated manufacturing, areas with sprinkler or splash-proof requirements, and outdoor applications.

The Starline Series-S Track Busway’s protection level extends to its plug-in units, which are offered with a wide variety of watertight IEC and NEMA-rated devices to meet any need. Other features include:

• Available in five- and ten-foot sections, with custom lengths on request
• 100-to-1,200-amp systems, four pole, rated up to 600Vac or 600Vdc
• UL, IEC, and ETL certifications
• Aluminium housing with corrosion-resistant coating
• Splashproof and highly dust-resistant design with watertight IEC and NEMA device options
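As a rough illustration of the capacity implied by the ratings listed above, the sketch below estimates the power a three-phase busway run could carry at a given voltage and current. The voltage, power factor, and derating figures are illustrative assumptions, not Legrand specifications.

```python
# Rough three-phase busway capacity estimate: P = sqrt(3) * V_LL * I * PF.
# Voltage, power factor, and derating below are illustrative assumptions only.
import math

def three_phase_capacity_kw(line_voltage_v: float, current_a: float,
                            power_factor: float = 0.95) -> float:
    """Estimated capacity of a three-phase busway run, in kW."""
    return math.sqrt(3) * line_voltage_v * current_a * power_factor / 1000.0

# A hypothetical 400V system using the largest 1,200A rating mentioned above,
# with an 80% continuous-load derating factor applied.
nameplate_kw = three_phase_capacity_kw(400.0, 1200.0)
continuous_kw = nameplate_kw * 0.8

print(f"Nameplate capacity: {nameplate_kw:.0f} kW")
print(f"Continuous (80% derated): {continuous_kw:.0f} kW")
```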

Schneider Electric delivers data centre project for Loughborough University
Schneider Electric has delivered a new data centre modernisation project for Loughborough University, in collaboration with its elite partner, on365. The project saw Schneider Electric and on365 modernise the university’s IT infrastructure with new energy-efficient technologies, including an EcoStruxure Row Data Center, InRow Cooling solution, Galaxy VS UPS, and EcoStruxure IT software, enabling the university to harness resilient IT infrastructure, data analytics, and digital services to support new breakthroughs in sporting research.

As Loughborough University is known for its sports-related subjects and is home to world-class sporting facilities, IT is fundamental to its operations, from the high-performance computing (HPC) servers which support analytical research projects, to a highly virtualised data centre environment that provides critical applications including finance, administration, and security. To overcome a series of data centre challenges, including requirements for a complete redesign, modernisation of legacy cooling systems, improved cooling efficiencies, and greater visibility of its distributed IT assets, the university undertook the project at its Haslegrave and Holywell Park data centres.

Delivered in two phases, the project first saw on365 modernise the Haslegrave facility by replacing an outdated raised floor and deploying an EcoStruxure Row Data Center solution. This significantly improved the overall structure, enabling an efficient data centre design. During the upgrade, it also brought other parts of the infrastructure under the IT department’s control, using new InRow DX units to deliver improved cooling reliability and a greater ability to cope with unplanned weather events such as heat waves, which had adversely affected IT and cooling operations in the past. The solution also created space for future IT expansion and extended a ‘no single points of failure’ design throughout the facility. This made the environment more suitable for a new generation of compact and powerful servers, and the approach was replicated at Holywell Park thereafter. Further improvements in resilience and efficiency were achieved with Schneider Electric’s Galaxy VS UPS with lithium-ion batteries.

“At the foundational level of everything which is data-driven at the university, the Haslegrave and Holywell data centres are the power behind a host of advancements in sports science, and our transition towards a more sustainable operation,” says Mark Newall, IT Specialist at Loughborough University. “Working with Schneider Electric and on365 has enabled our data centre to become more efficient, effective and resilient.”

The university has also upgraded the software used to manage and control its infrastructure, deploying the company’s EcoStruxure IT platform to provide enhanced visibility and data-driven insights that help identify and mitigate potential faults before they become critical. This, in conjunction with a new three-year Schneider Electric services agreement delivered via on365, has given the university 24x7 access to maintenance support.

The university also operates a large distributed edge network environment, protected by more than 60 APC Smart-UPS devices. As part of its services agreement, all critical power systems are monitored and maintained via EcoStruxure IT, providing real-time visibility and helping IT personnel to manage the campus network more efficiently.
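As a simplified illustration of the kind of threshold-based check a monitoring platform can run across a fleet of UPS devices, here is a minimal sketch. The device names, readings, and thresholds are hypothetical and do not represent the EcoStruxure IT data model or API.

```python
# Minimal sketch of a fleet-wide UPS health check: flag devices whose battery
# runtime or temperature crosses a threshold so engineers can act before a
# fault becomes critical. All readings and thresholds below are hypothetical.
from dataclasses import dataclass

@dataclass
class UpsReading:
    name: str
    runtime_min: float       # estimated battery runtime at current load
    battery_temp_c: float    # battery temperature, degrees Celsius

MIN_RUNTIME_MIN = 10.0
MAX_BATTERY_TEMP_C = 40.0

readings = [
    UpsReading("edge-cab-07", runtime_min=23.5, battery_temp_c=27.0),
    UpsReading("edge-cab-12", runtime_min=8.2, battery_temp_c=29.5),   # low runtime
    UpsReading("edge-cab-31", runtime_min=18.0, battery_temp_c=43.1),  # running hot
]

for ups in readings:
    issues = []
    if ups.runtime_min < MIN_RUNTIME_MIN:
        issues.append(f"runtime {ups.runtime_min:.1f} min below {MIN_RUNTIME_MIN} min")
    if ups.battery_temp_c > MAX_BATTERY_TEMP_C:
        issues.append(f"battery at {ups.battery_temp_c:.1f}C exceeds {MAX_BATTERY_TEMP_C}C")
    if issues:
        print(f"ALERT {ups.name}: " + "; ".join(issues))
```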

Secure I.T. Environments installs micro data centre for Barnet Hospital ICU
Secure I.T. Environments has announced the completion of a project for Barnet Hospital, designing, supplying, and installing its custom 42U Micro Data Centre 3 at the hospital’s Intensive Care Unit (ICU). The new edge micro data centre provides critical network services and communications for the operational side of the ICU and the wards it supports, and includes passive air-conditioning for up to a 12kW load.

Designed to a high security specification, the cabinet is secure against access or damage by the general public and provides improved reliability over the previous data centre. The installation took three days and involved the movement of equipment between old and new cabinets, structured cabling of the new cabinet, power supply installation, and testing. Secure I.T. Environments will also be providing maintenance for the cooling system in the new data centre.

Chris Wellfair, Project Director at Secure I.T. Environments, says, “Intensive Care Units can be one of the most challenging locations in a hospital to install a data centre, as reliability and security are critical characteristics for any technology used. Our micro data centre range not only meets that standard, but can handle high density applications with ease, and fit elegantly and quietly into any environment.”

Indigo Telecom Group announces plans to recruit 100 people
4site, a subsidiary of Indigo Telecom Group, has announced it will be recruiting more than 100 people over the next three years to support its plans for international expansion. Offering open-location and office-based positions across Ireland, the roles will span Fibre Planners, GIS Engineers, Design Engineers, Telecoms Surveyors, and Project Managers, as well as business support roles in accounts, sales, and operations. Indigo will be recruiting the 100 workers locally from Limerick and the Mid-West, in the vicinity of the company’s Irish headquarters at Raheen Business Park.

Established in Magor, South Wales in 1997, Indigo Telecom Group employs more than 400 people across 10 offices in the UK, Ireland, France, Germany, and the Netherlands. The company is now focused on expanding its skills portfolio to capitalise on market opportunities around Fibre to the Home (FTTH), wireless, 5G, data centres, digitisation, and telco network services. In 2020 alone, Indigo Telecom Group welcomed 140 people to the team. 4site works closely with Limerick Institute of Technology and the University of Limerick to create job opportunities for their highly skilled graduates.

Providing network infrastructure to fixed and mobile carriers and the enterprise sector since 1997, Indigo Telecom Group delivers design, build, and support services to a dynamic market where businesses and consumers demand powerful connectivity. A reputation for reliability has made the company a trusted partner to some of the biggest companies in the world, including Vodafone, Nokia, BT, and NTT.

The COVID-19 pandemic has demonstrated the critical importance of telecommunications infrastructure in keeping businesses and societies connected. Because of the economic and social disruption, people across the globe have relied more than ever on connectivity for information, interacting with loved ones, and working from home. During this period, Indigo Telecom Group has scaled operations to meet the growing demand of its customers.

Tánaiste and Minister for Enterprise, Trade and Employment Leo Varadkar says, “I am really pleased to see that Indigo Telecom Group, through its Irish subsidiary 4site, is expanding in Limerick and will be recruiting over 100 people over the next three years. This is in addition to the 140 new staff that Indigo Telecom Group hired in 2020 - approximately 90 of which were here in Ireland - and underlines the company’s continued commitment to the Mid-West. This year more than ever we have relied on our communications networks to keep in touch and I welcome the expansion of this sector here. I wish Indigo and 4site every success with the expansion plans.”

Kevin Taylor MBE, Chairman of Indigo Telecom Group, comments, “We’re really excited to invest in Ireland, and specifically within Limerick and surrounding areas. This provides a great opportunity for local staff to join an organisation which is on a high-growth trajectory and with plans to expand in 2021 and beyond. For people considering a career in telecoms or a new challenge, there couldn’t be a better time to join a sector that is experiencing exponential growth and playing a critical role in the way we all connect with each other.”

CyrusOne strengthens sustainable design and delivery practice
CyrusOne has announced the appointment of Stuart Gray as Engineering Director Europe to strengthen its Design & Construction team. Stuart joins the company with more than 20 years of data centre sector and technical engineering expertise, and will be responsible for driving consistency, efficiency, and sustainable practice in the design, delivery, and commissioning of developments throughout CyrusOne’s data centre portfolio in Europe.

“I’m delighted Stuart has decided to join CyrusOne. It is an exciting time for our business in Europe as we have an ambitious development pipeline to deliver against increased demand for capacity. As a subject matter expert in mission critical, Stuart will lead engineering throughout the full lifecycle of our data centre developments - from design development through to commissioning - and further strengthen the breadth and depth of our offering to customers. Stuart will also drive efficiency and consistency and champion our continued development of sustainable and environmental design against the backdrop of our ‘Zero Carbon by 2040’ pledge,” comments Richard Brandon, CyrusOne’s Senior Director of European Design and Construction.

“I am delighted to join CyrusOne's specialist European Design & Construction team and look forward to bringing the experience and engineering knowledge I have gained in my career to this exciting and challenging new role," says Stuart. "As CyrusOne continues to build the data centres of the future, I am eager to elevate our designs and technical engineering capabilities to help meet the company’s ambitious environment and sustainability objectives.”

Stuart joins the company from construction company Structure Tone, where he was a project director responsible for operational delivery on all live UK data centre projects. He has also worked across multiple areas of the data centre industry, from electrical contracting, data centre engineering design, and technical services management to specialist main contracting, delivering technically complex schemes for a wide range of enterprise, hyperscale, and colocation customers.

Stuart’s appointment follows CyrusOne’s recent ‘Zero Carbon by 2040’ pledge to reduce carbon emissions across the company’s global data centre portfolio. The pledge builds on sustainability efforts already put forward by CyrusOne, including purchasing renewables, leveraging green power, and integrating sustainable design components across its facilities around the world.


