Data Centre Build News & Insights


ECL developing 35MW Santa Clara data centre
ECL, a US data-centre-as-a-service company, has announced plans to develop a 35MW data centre in Santa Clara, California, designed to support high-density AI workloads using a mix of power sources.

The facility, known as CSC-1, will combine on-grid electricity with hydrogen and natural gas generation. The approach is intended to address growing demand for power in data centre markets where grid capacity is limited.

CSC-1 will launch with rack densities ranging from 75kW to 270kW, and the site is based on ECL’s FlexGrid architecture, which integrates multiple power inputs and is designed to operate alongside local utility infrastructure. The system is expected to deliver a power usage effectiveness (PUE) of below 1.15 while supporting lower emissions through a combination of cooling methods.

The development will follow a phased approach, starting with an initial 2.5MW deployment and scaling up to full capacity as demand increases. This model is intended to allow operators to begin AI workloads earlier, without waiting for full site completion.

Modular power approach addresses grid constraints

Northern California remains a constrained market for data centre power, with delays in grid connections affecting new developments. As a result, alternative approaches such as on-site power generation are becoming more widely adopted.

ECL’s FlexGrid system uses modular power blocks that can be deployed incrementally. This allows capacity to be added over time, aligning infrastructure growth with demand for AI compute.

The system also incorporates different cooling methods, including direct-to-chip and air cooling. When hydrogen is used as a power source, by-product water can be reused within the cooling process, reducing the need for additional water supply.

The architecture is designed to meet Tier III-level reliability requirements and includes a real-time management platform to monitor and adjust power generation, cooling, and rack-level operations.
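For context on the sub-1.15 figure quoted above, power usage effectiveness is simply total facility power divided by the power reaching IT equipment. A minimal sketch follows; the 2.5MW phase size comes from the article, but the IT-load split used in the example is purely illustrative:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power.

    A perfectly efficient facility would score 1.0; every kilowatt of
    overhead (cooling, power conversion, lighting) raises the ratio.
    """
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# Illustrative numbers only: a 2.5MW initial phase in which roughly
# 2.25MW reaches IT equipment would land comfortably under 1.15.
print(round(pue(2500, 2250), 3))  # 1.111
```

A PUE below 1.15 therefore implies that less than about 13% of the facility's total draw goes to non-IT overhead.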
Yuval Bachar, Co-founder and CEO of ECL, comments, “A 35MW facility delivered in Santa Clara in under a year would have been unthinkable through traditional grid-connected development.

"Every major AI operator in the Bay Area is staring at the same maths, with years-long interconnection queues pitted against AI deployment needs that are growing by the minute.

"By phasing growth through modular power blocks, ECL matches infrastructure deployment to the actual pace of AI demand rather than forcing customers to overbuild or wait. This site demonstrates that power architecture itself can become the enabling layer for AI scale rather than the constraint.”

ECL is currently accepting enquiries from prospective tenants for the site.

For more from ECL, click here.

'AI growth doesn’t have to break the grid'
A UK high‑performance computing (HPC) data centre has reportedly cut its carbon emissions by three quarters while easing pressure on the electricity system, offering a blueprint for how the fast‑growing AI sector can expand without overwhelming the grid.

Stellium Datacenters, which operates one of the UK's largest purpose-built data centre campuses near Newcastle, has switched to a new way of sourcing electricity. This matches its power use with renewable generation hour by hour, rather than relying on annual averages.

The move comes as data centres face mounting scrutiny over their energy use, with concerns growing that AI and cloud computing could strain local grids and push up energy costs. That scrutiny has intensified in recent months, with MPs launching an inquiry through the Environmental Audit Committee into the environmental impact of data centres, including their growing electricity and water use and the pressure they place on local grids.

Working with renewable energy supplier Good Energy, Stellium now runs its site on a 100% renewable, hourly‑matched electricity supply, linking consumption directly to power generated by more than 3,300 independent UK renewable generators.

This approach allows the company to show exactly when its electricity demand is met by renewable sources, achieving an hourly matching score of 95.4%, more than double the current market average of around 43%. Planned additions, including large-scale battery storage, are expected to lift this to 97–98% while being able to show exactly which UK renewable assets powered the data centre and when.

'Hourly matching' as an improved metric

Traditionally, many data centres rely on renewable certificates that show clean electricity was generated somewhere on the grid over a year, even if fossil fuels were used at the time the power was actually consumed.
Some “100% renewable” tariffs relying on this system mask continued reliance on fossil-fuelled power at precisely the moments when the grid is most constrained. By contrast, hourly matching provides a much clearer picture of real‑world impact, demonstrating which users are sourcing clean, homegrown power versus relying on fossil‑fuelled generation at peak times.

Stellium says the change has transformed conversations with customers, regulators, and auditors, particularly global AI and technology firms with strict net zero and reporting requirements. The company says it can now demonstrate, in detail, which renewable assets powered its operations, when they did so, and where they are located.

Paul Mellon, Operations Director at Stellium, notes, “Data centres often get bad press for their high, inflexible energy use. But this shows that AI and high‑performance computing don’t have to come at the expense of the grid or the climate.

"By switching to hourly‑matched renewable power, we’ve been able to cut emissions dramatically while giving customers the transparency they increasingly demand.”

Nigel Pocklington, CEO of Good Energy, adds, “By matching electricity use with renewable generation hour by hour, Stellium can show when clean power is actually being used.

"That kind of transparency cuts carbon emissions, reduces reliance on fossil fuels at peak times, and proves that digital growth and a resilient energy system can go hand in hand.”

Explosive data centre growth in the UK

The case comes as the UK prepares for a major expansion in data centre capacity to support AI, cloud computing, and data‑driven industries. As planners, communities, and policymakers look more closely at how new developments will affect local infrastructure, Stellium’s experience suggests that data centres can respond by sourcing and reporting their energy responsibly, rather than relying on offsetting or misleading annualised accounting.
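The hourly matching score discussed above can be made concrete. A common definition (an assumption here; Good Energy's exact methodology is not given in the article) is the share of each hour's consumption covered by contracted renewable generation in that same hour:

```python
def hourly_matching_score(demand_kwh, renewable_kwh):
    """Fraction of consumption matched by renewable generation in the
    same hour, summed over all hours.

    In each hour, at most that hour's demand can count as matched, so
    surplus generation in one hour cannot offset a shortfall in
    another (unlike annualised accounting).
    """
    matched = sum(min(d, r) for d, r in zip(demand_kwh, renewable_kwh))
    return matched / sum(demand_kwh)

# Illustrative 4-hour profile (kWh): generation exceeds demand in two
# hours and falls short in the other two.
demand = [100, 100, 100, 100]
renewables = [150, 60, 120, 80]
print(round(hourly_matching_score(demand, renewables), 2))  # 0.85
```

Annualised accounting would score this same profile as fully matched, since total generation (410kWh) exceeds total demand (400kWh); the gap between the two figures is exactly what hourly matching exposes.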
With pressure growing on the sector to prove its environmental credentials, the model demonstrates that practical solutions may already exist, and that AI‑driven growth can be aligned with a cleaner, more resilient electricity system.

For more from Stellium Datacenters, click here.

STL launches Neuralis US data centre platform
STL, an optical and digital systems company, has launched its Neuralis data centre connectivity portfolio in the United States, targeting infrastructure designed for artificial intelligence and high-density computing environments. The announcement was made by STL Optical Connectivity NA, the company’s US subsidiary, at Data Center World 2026 in Washington, D.C.

Neuralis is designed to support evolving data centre requirements, particularly the shift towards AI workloads, hyperscale computing, and edge deployments. These trends are increasing demand for high-speed, high-density connectivity within and between facilities. The portfolio focuses on managing the transition from traditional north–south traffic flows to more intensive east–west traffic, driven by GPU-based architectures and AI training processes.

Designed for high-density AI infrastructure

The Neuralis portfolio is structured around two main areas. The first focuses on maximising data centre space through the use of high-density, pre-terminated fibre cabling. This approach moves connection work into manufacturing environments, reducing on-site installation time and complexity. The second addresses data centre interconnect (DCI), supporting large-scale data transfer between sites. This includes fibre infrastructure designed for high-capacity environments, with cables capable of supporting large fibre counts for AI deployments.

STL has developed the portfolio through collaboration with customers, with a focus on addressing space, density, and deployment challenges in modern data centres. The company’s manufacturing process covers the full fibre lifecycle, including preform production, fibre drawing, cabling, and connector integration. Production for the US market is supported by STL’s facility in Lugoff, South Carolina.

Ankit Agarwal, Managing Director of STL, notes, "AI demands a level of precision and density that traditional cabling simply cannot meet.
"With STL Neuralis, we are providing the high-speed, low-latency foundation that allows GPU clusters to perform at their peak, moving complexity out of the field and into a controlled, high-precision factory environment."

The launch reflects increasing demand for infrastructure capable of supporting AI-driven workloads, as operators continue to scale data centre capacity across North America.

For more from STL, click here.

OVHcloud expands quantum cloud platform with Quandela
OVHcloud, a French cloud computing provider, has made photonic quantum computing company Quandela’s Belenos quantum computer available through its Quantum platform, expanding access to quantum computing across Europe. The announcement was made at the Quantum Defence Summit, with the addition of Belenos marking a further development of OVHcloud’s cloud-based quantum offering.

The OVHcloud Quantum platform provides access to quantum systems through a Quantum-as-a-Service model, allowing organisations to use quantum computing resources without requiring dedicated hardware.

Belenos is based on photonic quantum technology and offers a capacity of 12 qubits. It is intended to support experimentation with algorithms across a range of areas, including image processing, artificial intelligence, and quantum machine learning. Potential applications also extend to fields such as simulation, engineering, and environmental modelling.

Expanding access to quantum computing in Europe

OVHcloud says it has been supporting the European quantum ecosystem since 2022, providing access to quantum emulators through its infrastructure. The platform currently includes multiple emulators, enabling users to test and develop applications across different quantum computing approaches. The addition of Belenos introduces a physical quantum processing unit to the platform, complementing existing emulator-based access.

Miroslaw Klaba, R&D Director at OVHcloud, comments, “We are delighted to deliver on the promise of the Quantum platform by adding a second reference quantum computer, Belenos, from the French company Quandela.

"The quantum revolution accelerates and OVHcloud is taking its part as the European cloud leader within the ecosystem.”

The system is available through a usage-based pricing model, with billing calculated per second and no long-term commitment required.
Niccolò Somaschi, CEO and co-founder of Quandela, notes, “The integration of Belenos 12 qubits into the OVHcloud portfolio marks a decisive step for quantum in Europe. Accessible through the cloud, this photonic computer becomes a concrete tool for businesses.

"With OVHcloud, we are offering data scientists and innovators alike the means to develop their algorithms on a flexible and sovereign infrastructure.”

The expansion reflects ongoing efforts to increase accessibility to quantum computing, supporting research and development across industry and academia.

For more from OVHcloud, click here.

Lonestar unveils space-based data storage service, StarVault
Lonestar, a space-based data storage company building orbital and lunar data centres, has announced the launch of StarVault, the "world's first" commercial space-based data storage service, alongside plans to expand its orbital infrastructure through a new agreement with Sidus Space, a US space and defence technology company.

The platform is designed to store data off-planet, combining space-based infrastructure with cryptographic key management. It is intended for use by organisations seeking additional resilience for critical data.

Lonestar has also ordered a second orbital payload from Sidus Space to increase storage capacity and redundancy. The first payload is currently in development and is scheduled to launch in October aboard the LizzieSat-4 satellite, with a second launch planned for 2027.

Expansion of orbital data infrastructure

The expansion follows earlier test missions and increasing interest from sectors including government, finance, and critical infrastructure. The StarVault platform is designed to provide an additional layer of data protection, supporting resilience against risks such as cyber incidents, environmental disruption, and geopolitical instability.

Steve Eisele, CEO of Lonestar, says, “Demand for off-planet data security has exceeded expectations. With StarVault, we are not just launching a new category; we are scaling it.”

Sidus Space is building the initial payload, with further deployments expected as Lonestar develops its orbital data storage network. The companies state that the initiative represents an early step in the development of space-based data infrastructure, with a focus on secure storage beyond traditional terrestrial data centres.

SambaNova, Intel unveil hybrid AI platform
SambaNova, a company specialising in AI hardware and software, and American multinational technology company Intel have announced a new hybrid-chip platform designed to address data centre capacity constraints linked to AI workloads.

The architecture combines GPUs for prefill processing, Intel Xeon 6 processors for system control and workload execution, and SambaNova’s reconfigurable dataflow units (RDUs) for inference decoding. The platform is expected to be available in the second half of 2026 for enterprise, cloud, and sovereign AI deployments.

The design targets agent-based AI workloads, which require coordinated processing across multiple stages, including data input, model inference, and execution of external tools and applications.

Hybrid approach to AI infrastructure

The platform reflects a shift towards heterogeneous computing in data centres, where different processor types are used for specific tasks rather than relying solely on GPUs. In this model, GPUs handle the initial processing of prompts, while RDUs manage high-throughput inference tasks. Xeon 6 processors act as both the host system and execution layer, coordinating workloads, running code, and managing interactions with external systems.

Rodrigo Liang, CEO and co-founder of SambaNova Systems, explains, “Agentic AI is moving into production, and the winning pattern we’re seeing is GPUs to start the job, Intel Xeon 6 to run it, and SambaNova RDUs to finish it fast.

"Together with Intel, we’re giving customers a blueprint they can deploy in existing air-cooled data centres, with broad x86 coverage for the coding agents and tools they already use today.”

Kevork Kechichian, Executive Vice President and General Manager of the Data Center Group at Intel, adds, “The data centre software ecosystem is built on x86 and it runs on Xeon, providing a mature, proven foundation that developers, enterprises, and cloud providers rely on at scale.
"Workloads of the future will require a heterogeneous mix of computing, and this collaboration with SambaNova delivers a cost-efficient, high-performance inference architecture designed to meet customer needs at scale, powered by Xeon 6.”

The companies state that the approach is intended to support increasing demand for AI inference, particularly as agent-based systems move from testing into production environments. Additional industry participants highlighted the growing need for scalable infrastructure to support coding agents and similar workloads, which rely on CPUs for execution alongside accelerators for inference.

The announcement marks an expansion of the existing collaboration between SambaNova and Intel, with a focus on enabling large-scale AI deployment across data centre environments.
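The division of labour described in this article — one processor type for prefill, another for decode, and a host CPU coordinating the two — can be sketched conceptually. Everything below is illustrative pseudostructure, not a SambaNova or Intel API; the names are invented for the sketch:

```python
# Conceptual sketch of disaggregated inference: prefill (parallel
# prompt processing) and decode (token-by-token generation) run on
# different processor types, with a host CPU routing work between them.
from dataclasses import dataclass, field

@dataclass
class Request:
    prompt: str
    kv_cache: dict = None                       # state produced by prefill
    output: list = field(default_factory=list)  # tokens produced by decode

def prefill(req: Request) -> Request:
    # GPU role: ingest the whole prompt in one parallel pass and build
    # the key/value cache the decoder will read from.
    req.kv_cache = {"tokens": req.prompt.split()}
    return req

def decode(req: Request, max_tokens: int) -> Request:
    # Accelerator (RDU in the article's model): generate output tokens
    # sequentially from the cached state.
    for i in range(max_tokens):
        req.output.append(f"tok{i}")
    return req

def host_schedule(requests, max_tokens=3):
    # Host CPU role: coordinate the two stages and hand completed
    # results to downstream tools and applications.
    return [decode(prefill(r), max_tokens) for r in requests]

done = host_schedule([Request("summarise this document")])
print(done[0].output)  # ['tok0', 'tok1', 'tok2']
```

The point of the split is that the two stages have different hardware profiles: prefill is compute-bound and parallel, while decode is memory-bandwidth-bound and sequential, so matching each to a suitable processor can raise overall throughput.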

Black & White Engineering makes senior tech hires
Data centre design consultancy Black & White Engineering has appointed Charlie Bater as Chief Technical Officer and Paul Cook as Global Director of Technology & Innovation, expanding its senior technical leadership team.

The appointments come as the company continues to grow internationally, now operating across 24 locations with more than 1,000 employees. The move, the company says, reflects increasing demand for integrated, data-led engineering approaches across data centre and critical infrastructure projects.

Charlie Bater takes on the newly created CTO role, having spent eight years with the business, most recently as Global Datacentre Director. During that time, he has supported regional expansion, technical standards, and project delivery consistency. The creation of the CTO role forms part of a wider update to the company’s technical leadership structure, aimed at supporting growth and strengthening engineering capability.

Paul Cook joins the senior leadership team as Global Director of Technology & Innovation, working alongside Charlie Bater to develop a more structured approach to technology and innovation across projects. He brings experience across sectors including utilities, ports, pharmaceutical research and development, and healthcare. Prior to joining Black & White Engineering, he worked at Yondr Group and ISG in roles focused on technology, research and development, and digital integration.

A focus on technical leadership and innovation

Charlie Bater says, “Stepping into the CTO role is an incredible opportunity, and I’m grateful for the trust placed in me. Having grown with the business over the past seven years, I’ve seen first hand the strength of our people and the ambition that drives Black & White.
“My focus is to build on our position as a leading data centre design consultancy by further enabling a technical function that drives innovation, supports our teams, and ensures we continue delivering high-quality solutions for our clients across global markets.”

The appointments are part of the continued development of the company’s Global Engineering Team, a central function that supports project teams, technical direction, and consistency across regions.

Paul Cook adds, “A consistent theme throughout my career has been understanding how complex environments operate in practice and how better integration of infrastructure, digital capability, and operational processes can improve performance and resilience.

“At Black & White, the opportunity is to build a Technology and Innovation capability that is practical and supports how projects are delivered day to day, while also ensuring that buildings are designed to provide operational insight and enable effective performance over their lifecycle, supported by a structured research and development framework that ensures innovation is captured and applied in a measurable way. That means being clear about where technology adds value, improving how data is used, and strengthening decision-making from the earliest stages of a project.”

The company says its Global Engineering Team will continue to support early-stage technical planning, bid development, and standardisation across projects, with a focus on consistency, efficiency, and long-term performance.

For more from Black & White Engineering, click here.

DataScope, BCEI sign global data centre agreement
DataScope, a UK provider of construction management software, and Burr Computer Environments (BCEI), an engineering and construction management firm specialising in data centres, have signed a global enterprise agreement to deploy DataScope’s full software suite across BCEI’s data centre projects worldwide.

The agreement will cover all current and future developments, including projects delivered in collaboration with EdgeConneX. It formalises a partnership that began in September 2020 with the deployment of DataTouch Daily Site Co-ordination in Santiago, Chile, and has since expanded across multiple international data centre campuses. Locations where the system has been implemented so far include Brussels, Jakarta, Kuala Lumpur, Frankfurt, Chicago, Atlanta, New Albany, and Austin.

Over this period, DataScope’s platform has been used to provide visibility of labour allocation, site attendance, and workforce competency tracking. It has also supported the management of high-risk activities, alongside reported improvements in communication and collaboration across project teams.

BCEI has additionally used the system to manage key health and safety processes digitally, including permits, safety communications, RAMS, and safety observations. The companies say this has enabled the use of real-time safety data to support proactive risk management across projects.

Supporting global scaling and consistency

The enterprise agreement is intended to support BCEI’s continued global expansion, enabling more consistent reporting, improved operational control, and greater efficiency across its data centre portfolio.

Jason Crowell, Environmental, Health, and Safety Director at BCEI, comments, “Data centre delivery is evolving rapidly and our clients demand both predictability and absolute reliability.
"This global agreement ensures we have the digital backbone to scale efficiently while maintaining the highest safety standards across every region we operate in.”

Joe Desormeaux, VP, Mission Critical at DataScope, adds, “This enterprise agreement marks a significant milestone in our journey with BCEI. What began in 2020 with the successful deployment of DataTouch in Santiago has grown into a truly global partnership spanning multiple continents and some of the most complex data centre projects in the world.

“We are incredibly proud of what has been achieved together to date, from establishing robust workforce management and digital permit controls to creating best-in-class daily coordination processes. We look forward to the next phase of this partnership and to supporting BCEI’s continued growth across its global data centre portfolio.”

Nebius to operate 310MW Polarnode data centre
Dutch AI cloud company Nebius will operate a 310MW data centre campus in Lappeenranta, Finland, in a project developed by Finnish data centre project developer Polarnode.

Construction is already under way in the Pajarila district, with the first phase expected to become operational in 2027. Once complete, the site is set to be among the largest AI-focused data centres in Europe.

The development reflects increasing demand for large-scale infrastructure to support AI workloads, with operators seeking locations that offer access to power, cooling efficiency, and connectivity. Finland is increasingly seen as a suitable location for data centre development, due to its access to low-carbon energy, established grid infrastructure, and favourable climate conditions for cooling.

Nebius states it is targeting more than 3GW of contracted power capacity globally by the end of 2026, with the Lappeenranta site contributing to that target.

The local impact

Mikko Toivanen, Chair at Polarnode, says, “It is fantastic that the first data centre project in Lappeenranta and its surrounding area has advanced to the construction phase on an accelerated schedule, and that [...] Nebius will be operating the campus.

"In terms of scale, the project is historic, and this major investment is excellent news for the whole of Southeast Finland.”

Arkady Volozh, founder and CEO of Nebius, adds, “We have been building in Finland for many years and are pleased to be expanding our presence here. Lappeenranta represents a significant addition to our global AI infrastructure build-out and will make a significant contribution to achieving our capacity goals.”

Polarnode reports that the project will create around 700 direct construction roles, alongside additional indirect employment through subcontractors. Once operational, the facility is expected to employ more than 100 permanent staff.
The company has also announced further data centre developments in Nokia, Pori, and Kuopio, as part of its expansion in Northern Europe.

Tuomo Sallinen, Mayor of Lappeenranta, notes, “Lappeenranta offers an increasingly attractive environment for innovation, with our universities playing a key role in developing top talent tailored to the needs of high-tech industries. The new data centre will position our city at the forefront of Finland’s AI ecosystem.”

For more from Polarnode, click here.

JSM constructing power infrastructure for Maincubes Berlin DC
JSM Group, a provider of integrated utility infrastructure solutions, has commenced construction of the high-voltage substation and cable route for Maincubes’ new data centre campus in Nauen, Germany.

The start of the works follows the granting of a building permit for the energy infrastructure and represents a major milestone in the delivery of the Hub Berlin campus. JSM is responsible for the delivery of the accompanying 110kV cable route and substation - critical components that will underpin the campus’s long-term energy security and scalability. The approximately six-kilometre cable route will transport electricity from renewable energy sources via the modern E.DIS distribution network to the site’s 110kV substation.

Enabling high-performance infrastructure for cloud and AI

The new campus has been designed with a grid connection capacity of 200 megawatts - with further expansion options available - to support high-performance computing environments, including advanced AI workloads and complex data analytics. Maincubes selected Nauen as the site for its new campus due to the Berlin-Brandenburg region’s stable energy supply, strong renewable generation from wind and photovoltaics, and favourable conditions for sustainable growth.

Oliver Menzel, CEO of Maincubes, comments, “The start of construction of the substation is the next visible step on our journey towards Hub Berlin.

"In Nauen, a state-of-the-art data centre location is being created: regionally rooted and internationally connected. In doing so, we are consistently continuing the success story of Maincubes and reinforcing our commitment to sustainable, energy efficient, and resilient digital infrastructure.”

JSM leadership perspective

Michael Booth, CEO of JSM Group, says, “This project highlights JSM Group’s capability to deliver complex, high-voltage energy infrastructure for mission critical environments.
"Data centres of this scale demand absolute reliability, technical excellence, and close collaboration with our partners. We are proud to be playing a central role in enabling Maincubes’ expansion in the Berlin region and supporting the delivery of sustainable, high-performance digital infrastructure.”

Michael Wiebersinsky, Mayor of the City of Nauen, adds, “With the new data centre campus, our region is developing into a highly modern location where future innovations can emerge. From a sustainability perspective, it gives me confidence that Nauen will be a reliable partner for the operating company, Maincubes.”

Hanjo During, Managing Director of E.DIS Netz, notes, “With the campus currently under development, we will connect a particularly high-performance data centre to our regional electricity distribution network. With the campus planned here in Nauen, the connected capacity will increase significantly in the future.”

Through the Nauen development, Maincubes says it continues to expand its presence in the capital region, building on the "successful operation of its first Berlin data centre, BER01."


