News


ECL developing 35MW Santa Clara data centre
ECL, a US data-centre-as-a-service company, has announced plans to develop a 35MW data centre in Santa Clara, California, USA, designed to support high-density AI workloads using a mix of power sources. The facility, known as CSC-1, will combine on-grid electricity with hydrogen and natural gas generation. The approach is intended to address growing demand for power in data centre markets where grid capacity is limited.

CSC-1 will launch with rack densities ranging from 75kW to 270kW, and the site is based on ECL’s FlexGrid architecture, which integrates multiple power inputs and is designed to operate alongside local utility infrastructure. The system is expected to deliver a power usage effectiveness (PUE) of below 1.15 while supporting lower emissions through a combination of cooling methods.

The development will follow a phased approach, starting with an initial 2.5MW deployment and scaling up to full capacity as demand increases. This model is intended to allow operators to begin AI workloads earlier, without waiting for full site completion.

Modular power approach addresses grid constraints

Northern California remains a constrained market for data centre power, with delays in grid connections affecting new developments. As a result, alternative approaches such as on-site power generation are becoming more widely adopted.

ECL’s FlexGrid system uses modular power blocks that can be deployed incrementally. This allows capacity to be added over time, aligning infrastructure growth with demand for AI compute.

The system also incorporates different cooling methods, including direct-to-chip and air cooling. When hydrogen is used as a power source, by-product water can be reused within the cooling process, reducing the need for additional water supply.

The architecture is designed to meet Tier III-level reliability requirements and includes a real-time management platform to monitor and adjust power generation, cooling, and rack-level operations.

Yuval Bachar, Co-founder and CEO of ECL, comments, “A 35MW facility delivered in Santa Clara in under a year would have been unthinkable through traditional grid-connected development.

"Every major AI operator in the Bay Area is staring at the same maths, with years-long interconnection queues pitted against AI deployment needs that are growing by the minute.

"By phasing growth through modular power blocks, ECL matches infrastructure deployment to the actual pace of AI demand rather than forcing customers to overbuild or wait. This site demonstrates that power architecture itself can become the enabling layer for AI scale rather than the constraint.”

ECL is currently accepting enquiries from prospective tenants for the site.

For more from ECL, click here.
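As a rough illustration of what the PUE figure implies, the sketch below divides total facility power by PUE to estimate usable IT load, then converts that load into rack counts at the quoted densities. It assumes the 35MW figure refers to total facility power, which the announcement does not specify.

```python
# Illustrative only: relating facility power to IT load via PUE.
# Reading 35MW as total facility power is an assumption, not an ECL figure.

def it_power_from_facility(facility_mw: float, pue: float) -> float:
    """PUE = total facility power / IT power, so IT power = facility / PUE."""
    return facility_mw / pue

def racks_supported(it_mw: float, rack_kw: float) -> int:
    """Number of racks of a given density a given IT load can feed."""
    return int(it_mw * 1000 // rack_kw)

facility_mw = 35.0   # announced site capacity
pue = 1.15           # upper bound cited for CSC-1
it_mw = it_power_from_facility(facility_mw, pue)
print(f"IT load at PUE {pue}: {it_mw:.1f} MW")   # ~30.4 MW

for density_kw in (75, 270):  # announced rack density range
    print(f"{density_kw}kW racks supported: {racks_supported(it_mw, density_kw)}")
```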

Huber+Suhner expands Microsoft Azure fibre collaboration
Huber+Suhner, a Swiss fibre optic cable manufacturer, has strengthened its collaboration with Microsoft Azure to support the wider deployment of hollow core fibre (HCF) connectivity across the Azure network. The company plans further investment in production capabilities to increase manufacturing volumes as Microsoft expands the use of HCF across additional Azure regions. The collaboration is focused on supporting cloud and AI infrastructure requirements.

Huber+Suhner has worked with Microsoft’s Azure fibre team in Romsey, UK, since 2017, following the acquisition of Lumenisity, a University of Southampton spin-out. Together, the organisations have developed HCF cable and connector technologies which are already deployed within the Azure network. Higher-capacity variants are also in development to support future infrastructure growth.

The two companies have jointly developed and qualified a range of outside plant (OSP) and inside plant (ISP) cable designs for field deployment. Work is also ongoing to develop higher-density HCF cable designs for future network requirements.

At Huber+Suhner’s manufacturing facility in Herisau, Switzerland, dedicated processes have been introduced to integrate HCF into multi-fibre loose-tube cables, with scope to increase capacity as demand grows.

Connector development supporting HCF deployment

Alongside cable development, Huber+Suhner has developed a mode-converting HCF connector designed for hyperscale and metro optical environments. These connectors are manufactured at the company’s Cube Optics facility in Mainz, Germany, with further investment planned to expand production capacity.

With both HCF cable and connector designs qualified, Huber+Suhner says it is extending its portfolio to support end-to-end fibre connectivity across cloud infrastructure.

Jürgen Walter, COO Communication Segment at Huber+Suhner, comments, “Huber+Suhner is proud to support Microsoft as HCF connectivity solutions move to deployment at scale.

"Building on our foundations of innovation and quality, we can expect further advances in our HCF connectivity portfolio as the pace of adoption accelerates. Together, we look forward to shaping the future of cloud connectivity and unlocking the full potential of HCF.”

Colin Wallace, GM Cloud Network Engineering at Microsoft Azure, adds, “We value our long-standing collaboration with Huber+Suhner, which has helped us transition HCF technology from advanced research into operational deployment in the Microsoft Azure network.

"These HCF cable and connector technologies are already deployed and carrying live traffic over Azure HCF links today, and this integrated capability will help us rapidly co-design and scale connectivity solutions for the future of cloud and AI network infrastructure.”

The relevance of HCF

HCF technology enables data to be transmitted through air rather than glass, allowing for significantly lower latency in optical networks. Microsoft’s Double-Nested Anti-Resonant Nodeless Fibre design also supports lower signal loss and higher launch powers compared to standard single-mode fibre, reducing the need for optical amplification in some metro networks.

The use of HCF in data centre environments is expected to support greater flexibility in site location, as well as improved efficiency in distributed AI workloads by reducing latency between compute clusters. However, wider deployment presents technical challenges, including the need for robust cable designs and compatible termination methods.

Huber+Suhner says its HCF connectors are designed to interface with standard single-mode fibre systems while protecting the hollow core structure and maintaining performance in operational environments.

For more from Huber+Suhner, click here.
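To put the latency claim in numbers, the sketch below compares one-way propagation delay in conventional silica fibre against a hollow core, where light travels through air at close to vacuum speed. The refractive indices are typical textbook values, not Huber+Suhner or Microsoft specifications.

```python
# Back-of-envelope comparison of one-way propagation delay in standard
# single-mode fibre (SMF) versus hollow core fibre (HCF).
# Group indices below are typical literature values, used as assumptions.

C = 299_792.458  # speed of light in vacuum, km/s

def one_way_delay_us(distance_km: float, group_index: float) -> float:
    """One-way propagation delay in microseconds: t = d * n / c."""
    return distance_km * group_index / C * 1e6

N_SMF = 1.468    # typical group index of silica single-mode fibre
N_HCF = 1.0003   # in HCF, light propagates through air, n close to 1

for d in (10, 80, 400):  # example metro / regional span lengths, km
    smf = one_way_delay_us(d, N_SMF)
    hcf = one_way_delay_us(d, N_HCF)
    print(f"{d:>3} km: SMF {smf:8.1f} us, HCF {hcf:8.1f} us "
          f"({(1 - hcf / smf) * 100:.0f}% lower)")
```

On these assumptions, the delay reduction is roughly a third per unit distance, which is what makes HCF attractive for spreading latency-sensitive compute clusters across more distant sites.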

'AI growth doesn’t have to break the grid'
A UK high‑performance computing (HPC) data centre has reportedly cut its carbon emissions by three quarters while easing pressure on the electricity system, offering a blueprint for how the fast‑growing AI sector can expand without overwhelming the grid.

Stellium Datacenters, which operates one of the UK's largest purpose-built data centre campuses near Newcastle, has switched to a new way of sourcing electricity. This matches its power use with renewable generation hour by hour, rather than relying on annual averages.

The move comes as data centres face mounting scrutiny over their energy use, with concerns growing that AI and cloud computing could strain local grids and push up energy costs. That scrutiny has intensified in recent months, with MPs launching an inquiry through the Environmental Audit Committee into the environmental impact of data centres, including their growing electricity and water use and the pressure they place on local grids.

Working with renewable energy supplier Good Energy, Stellium now runs its site on a 100% renewable, hourly‑matched electricity supply, linking consumption directly to power generated by more than 3,300 independent UK renewable generators.

This approach allows the company to show exactly when its electricity demand is met by renewable sources, achieving an hourly matching score of 95.4%, more than double the current market average of around 43%. Planned additions, including large-scale battery storage, are expected to lift this to 97–98% while being able to show exactly which UK renewable assets powered the data centre and when.

'Hourly matching' as an improved metric

Traditionally, many data centres rely on renewable certificates that show clean electricity was generated somewhere on the grid over a year, even if fossil fuels were used at the time power was actually consumed. Some “100% renewable” tariffs relying on this system mask continued reliance on fossil-fuelled power at precisely the moments when the grid is most constrained.

By contrast, hourly matching provides a much clearer picture of real‑world impact, demonstrating which users are sourcing clean, homegrown power versus relying on fossil‑fuelled generation at peak times.

Stellium says the change has transformed conversations with customers, regulators, and auditors, particularly global AI and technology firms with strict net zero and reporting requirements. The company says it can now demonstrate, in detail, which renewable assets powered its operations, when they did so, and where they are located.

Paul Mellon, Operations Director at Stellium, notes, “Data centres often get bad press for their high, inflexible energy use. But this shows that AI and high‑performance computing don’t have to come at the expense of the grid or the climate.

"By switching to hourly‑matched renewable power, we’ve been able to cut emissions dramatically while giving customers the transparency they increasingly demand.”

Nigel Pocklington, CEO of Good Energy, adds, “By matching electricity use with renewable generation hour by hour, Stellium can show when clean power is actually being used.

"That kind of transparency cuts carbon emissions, reduces reliance on fossil fuels at peak times, and proves that digital growth and a resilient energy system can go hand in hand.”

Explosive data centre growth in the UK

The case comes as the UK prepares for a major expansion in data centre capacity to support AI, cloud computing, and data‑driven industries.

As planners, communities, and policymakers look more closely at how new developments will affect local infrastructure, Stellium’s experience suggests that data centres can respond by sourcing and reporting their energy responsibly, rather than relying on offsetting or misleading annualised accounting. With pressure growing on the sector to prove its environmental credentials, the model demonstrates that practical solutions may already exist, and that AI‑driven growth can be aligned with a cleaner, more resilient electricity system.

For more from Stellium Datacenters, click here.
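A minimal sketch of how an hourly matching score can be computed follows: consumption in each hour only counts as matched if renewable generation was procured in that same hour, so a surplus in one hour cannot paper over a shortfall in another. The data and method below are invented for illustration; they are not Stellium's or Good Energy's actual methodology.

```python
# Hourly matching: per-hour coverage of demand by same-hour renewables.
# Figures are invented to illustrate the metric, not real supplier data.

def hourly_matching_score(consumption_kwh, renewable_kwh):
    """Share of total consumption met by renewables in the same hour."""
    matched = sum(min(c, r) for c, r in zip(consumption_kwh, renewable_kwh))
    return matched / sum(consumption_kwh)

# Four example hours: demand is flat, renewable supply fluctuates.
demand    = [100, 100, 100, 100]
renewable = [120,  80, 100,  60]   # hour-1 surplus cannot offset hour-2

print(f"Hourly matching score: {hourly_matching_score(demand, renewable):.1%}")
# -> 85.0%
```

Under annualised accounting, the same data would score 90% (360kWh of renewables against 400kWh of demand), because the surplus hours offset the shortfalls; hourly matching is precisely what exposes that gap.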

How to define the right sovereign cloud strategy
In this exclusive article for DCNN, Joe Baguley, CTO EMEA at Broadcom, gives his insight into how a workload-first approach to sovereign cloud, underpinned by data classification, flexible architecture, and strong partnerships, is reshaping European digital competitiveness:

Reclaiming control and competitiveness

Across Europe, governments and enterprises alike are increasingly recognising that data control holds the keys to innovation. This means a change in attitudes towards cloud sovereignty; it’s no longer seen as a simple compliance factor, but as a top priority for competitiveness and trust.

The European Union is taking steps to support this shift, placing greater emphasis on sovereign infrastructure as part of its broader digital strategy. A clear example is the €180 million (£156 million) tender launched by the European Commission through its Cloud III Dynamic Purchasing System, aimed at procuring sovereign cloud services for EU institutions.

To ensure cloud sovereignty, the first step is preparation: organisations need a clear understanding of where their data resides, how it moves, and who controls it. Answering these questions requires a clearly defined strategy, one that aligns workloads with the most appropriate cloud environments and establishes effective data governance. Importantly, it has to support the development of flexible cloud architectures capable of meeting regulatory demands while still enabling innovation.

Designing cloud strategies around workload needs

At the heart of a successful sovereign cloud strategy lies a simple principle: placing the right workload in the right environment. There is no single solution that fits all applications. Enterprises must align each workload with the cloud environment that best meets its compliance, operational, and performance requirements to determine whether it belongs in a public, private, or sovereign cloud. Some applications may thrive in a hyperscaler environment, while others require the control and security of a sovereign setup.

This reality has made hybrid cloud strategies the norm. Over the past decade, many organisations initially committed to a single hyperscaler for all workloads, only to realise that different applications have different requirements. Today, IT leaders increasingly need to adopt a ‘right workload, right place’ mindset, recognising that some applications may remain on premises, others run optimally in public clouds, and some require sovereign environments for regulatory or operational reasons.

This hybrid approach enables organisations to balance innovation with control while avoiding vendor lock-in and making more effective use of the strengths of different cloud ecosystems.

Data classification comes first

Of course, organisations cannot secure or govern what they do not fully understand. Comprehensive data classification is a critical first step. Misclassified data is a frequent source of compliance risk, while over-classification, often a product of risk aversion, can create extra operational complexity and cost. Many organisations treat all data as highly classified simply to be safe, but this can lead to over-investment in secure infrastructure where it is not needed.

Mapping data flows across borders and providers is equally important. Compliance blind spots often appear when data is inadvertently stored or processed in jurisdictions with restrictive data laws.

Understanding where sensitive data resides, how it moves, and which regulations apply is essential to reducing risk, demonstrating accountability, and maintaining trust with partners and customers. Retrofitting compliance into existing infrastructure is costly and complex; embedding that understanding into cloud architecture from the outset is far more efficient.

Building flexibility into architecture

Flexibility is the cornerstone of effective sovereign cloud implementations. Architectures built for interoperability and portability allow workloads to move seamlessly across private, public, and sovereign clouds. This adaptability is vital for managing risks posed by geopolitical or regulatory change.

Hyperscalers cannot always guarantee sovereignty due to extraterritorial legislation such as the US CLOUD Act, which permits government access to data held by American companies abroad. By contrast, working with local cloud operators enables enterprises to maintain jurisdictional control over their data while still leveraging the latest technology.

Moreover, working with local cloud operators can provide additional technological sovereignty benefits, ranging from investment in the local ecosystem and industrial base to addressing supply chain concerns, promoting interoperability, avoiding vendor lock-in, strengthening operational control, and managing dependency concerns.

Sovereignty should be viewed not as a constraint, but as a design principle guiding infrastructure, data placement, and application deployment. Organisations that prioritise adaptability can balance regulatory compliance with innovation and long-term strategic growth.

Partnerships powering sovereign cloud

Partnerships also play a pivotal role. No single vendor or platform can solve sovereignty challenges on its own and, in today's interconnected supply chains, no single region offers a perfectly vertically integrated set of suppliers.

Open source is often presented as a route to greater autonomy. In reality, however, open source solutions raise questions about code provenance, reliability when deployed at scale, and differing dependencies on support.

The most successful sovereign cloud environments combine global technology providers, local operators, and trusted EMEA partners (such as evoila and Arvato). This collaborative approach not only strengthens compliance and transparency, but also accelerates innovation by ensuring that governance does not become a barrier to progress. Meanwhile, the presence of a local ecosystem guarantees the ability to operate and support solutions with a high degree of autonomy.

As regulatory and geopolitical landscapes evolve, organisations that foster open dialogue across their supply chain and internal teams will be best placed to adapt. Sovereignty is as much about alignment, strategic choices, and accountability as it is about infrastructure.

From compliance requirement to strategic asset

Sovereign cloud has moved beyond a purely compliance-driven requirement and is increasingly becoming a source of strategic advantage. Organisations that commit to the ‘right workload, right place’ mindset, maintain clear data classification, build flexible architecture, and prioritise interoperability are the ones that will gain a competitive advantage. This approach allows organisations to scale globally whilst remaining aligned to regulatory and geopolitical shifts. Sovereignty is an enabler of AI and should be treated as such.
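As a concrete sketch of the ‘right workload, right place’ principle described above, the snippet below maps a workload's data classification to a target environment. The classification tiers and placement rules are illustrative assumptions for this example, not Broadcom's framework or any specific regulatory mapping.

```python
# A hedged sketch: data classification drives workload placement.
# Tiers and rules here are hypothetical, purely for illustration.

from enum import Enum

class Classification(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    REGULATED = 4   # e.g. subject to EU-only jurisdiction requirements

def place_workload(classification: Classification,
                   needs_eu_jurisdiction: bool) -> str:
    """Map a workload to a target environment from its data class."""
    if classification is Classification.REGULATED or needs_eu_jurisdiction:
        return "sovereign cloud (EU-operated, EU jurisdiction)"
    if classification is Classification.CONFIDENTIAL:
        return "private cloud / on-premises"
    return "public cloud (hyperscaler)"

print(place_workload(Classification.PUBLIC, False))        # public cloud
print(place_workload(Classification.CONFIDENTIAL, False))  # private cloud
print(place_workload(Classification.INTERNAL, True))       # sovereign cloud
```

The point of encoding the rules, rather than deciding case by case, is the one the article makes: classification has to come first, because the placement decision is only as good as the labels feeding it.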

Scaleway selected for EU sovereign cloud framework
French cloud computing provider Scaleway has been selected by the European Commission as one of four cloud providers under the Cloud III Dynamic Purchasing System, a €180 million (£156 million) programme supporting access to sovereign cloud services for EU institutions.

The framework, which runs for up to six years, enables EU bodies and agencies to procure cloud services through a pre-approved group of providers. Selection follows an evaluation process based on the European Commission’s Cloud Sovereignty Framework, which assesses legal, operational, and technical criteria. As part of the programme, Scaleway will be eligible to participate in project-specific competitions to deliver cloud services, including for sensitive and critical workloads.

Cloud III is managed by the Directorate-General for Digital Services and was introduced in 2025 as the European Commission’s primary framework for cloud procurement. The initiative promotes a multi-cloud model, allowing institutions to select from a limited group of approved providers rather than relying on a single vendor. It is designed to support resilience, continuity, and flexibility across public sector digital infrastructure.

The framework also supports deployment of cloud environments for critical systems, alongside fallback capabilities for existing cloud or on-premises infrastructure in the event of disruption.

A framework supporting a sovereign and multi-cloud approach

A key element is the Cloud Sovereignty Framework, which establishes a consistent set of criteria for assessing cloud providers. This is intended to improve transparency and standardisation in how sovereignty is defined and applied across the European cloud sector.

Scaleway operates as a European-owned provider, with infrastructure and operations based within Europe. Its platform is designed to support data localisation and compliance with European regulatory requirements.

Damien Lucas, CEO of Scaleway, comments, “At Scaleway, we are committed to contributing to Europe’s digital autonomy, not only through our technology and our alignment with European regulatory frameworks, but also through how we build and invest in our ecosystem.

“Today, for every euro spent with Scaleway, around 68 cents are reinvested in the European economy, compared to around 20 cents when relying on international hyperscalers.

"Directing investment towards truly European cloud providers helps strengthen local capabilities and ensures that value, expertise, and innovation remain anchored in Europe.”

The company notes that the selection reflects an increasing focus across Europe on sovereign cloud infrastructure, as demand grows for secure, compliant platforms to support data and artificial intelligence workloads.

Carrier opens €12m Montluel HVAC testing facility
Carrier, a manufacturer of HVAC, refrigeration, and fire and security equipment, has opened a new testing facility at its European Centre of Excellence in Montluel, France, to support the development of cooling and heating technologies for data centres, industry, and large commercial buildings.

The €12 million (£10.4 million) investment expands the company’s research and development capacity, with a focus on high-performance systems aligned with electrification trends and the use of lower-impact refrigerants. Testing at the site follows Eurovent-certified performance methodologies.

The expansion comes as demand for data centre infrastructure continues to grow across Europe. According to JLL’s 2026 Global Data Center Outlook, the EMEA region is expected to add 13GW of new capacity by 2030, driven by hyperscale deployments and artificial intelligence workloads, particularly in markets such as London, Frankfurt, and Paris.

Increased capacity for HVAC system testing

The new laboratory is designed to support testing across a wide range of operating conditions. It enables evaluation of air-cooled chillers up to 3,200kW, air-source heat pumps up to 1,500kW, and water-source systems up to 6,000kW.

The facility can simulate temperatures ranging from −20°C to +60°C, with humidity control, and supports water flow rates of up to 1,600m³/h. This allows for testing under varied and demanding conditions relevant to real-world applications.

Bertrand Rotagnon, Executive Director, Commercial Business Line and Data Centres Europe at Carrier, comments, “With these new test laboratory facilities, we’re raising the bar on how we support customers and partners in Europe.

“The combination of higher test capacity and advanced environmental control lets us validate performance with zero tolerance, earlier, and bring solutions to market faster, giving customers the confidence to move ahead on high-efficiency cooling and heating for data centres, industry, and district heating.”

Nicolas Fonte, Director, Systems Engineering at Carrier Climate Solutions Europe, adds, “The new testing facility expands our engineering team's ability to test and validate chillers and heat pumps for very wide and [the] most critical operating conditions.

“This new equipment enables us to validate performance, with high precision, of next-generation chillers and large heat pump platforms supporting [increasing] customers' requests for future infrastructures.”

The development forms part of the company's stated ongoing investment in HVAC technologies to meet increasing performance, efficiency, and regulatory requirements across European markets.

For more from Carrier, click here.
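As a quick plausibility check relating two of the quoted figures, the water flow a chiller test needs follows from the heat balance Q = ṁ·cp·ΔT. The sketch below assumes a typical chilled-water temperature difference of 5K, which is an assumption for illustration, not a Carrier test parameter.

```python
# Relating the lab's 6,000kW water-source test capacity to its 1,600 m3/h
# flow capability via Q = m_dot * cp * dT. The 5 K delta-T is assumed.

RHO_WATER = 998.0   # kg/m^3 at ~20 C
CP_WATER = 4.186    # kJ/(kg*K)

def flow_required_m3h(load_kw: float, delta_t_k: float) -> float:
    """Volumetric water flow needed to absorb a given thermal load."""
    mass_flow = load_kw / (CP_WATER * delta_t_k)   # kg/s
    return mass_flow / RHO_WATER * 3600            # m^3/h

# Largest water-source system the lab can test, at an assumed 5 K delta-T:
print(f"{flow_required_m3h(6000, 5):.0f} m3/h")    # ~1,030 m3/h, within 1,600
```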

EPRI, OCP aim to advance DCs as flexible grid resources
EPRI (the Electric Power Research Institute), an independent, non-profit energy research and development organisation, and the Open Compute Project (OCP), a non-profit organisation that develops and shares open hardware standards and designs for data centre infrastructure, have announced a collaboration focused on developing data centres as flexible resources for power systems.

The initiative aims to support digital infrastructure growth while improving how data centres interact with electricity networks, particularly as demand increases from artificial intelligence and other compute-intensive workloads. By working together, the organisations intend to support improved integration between data centres and power systems while developing technical frameworks to enable more flexible operation.

Arshad Mansoor, President and CEO of EPRI, comments, “We’re in the midst of an energy revolution, and it must be smart, flexible, and innovative to keep rates affordable for customers across the globe.

“Through this collaboration with OCP, EPRI is combining rigorous power system science with open, scalable data centre innovation to advance practical solutions that enable data centres to operate as flexible, grid-supporting resources - strengthening reliability and affordability for all.”

Developing flexible data centre energy models

The collaboration brings together stakeholders across the energy and data centre sectors, including a European group involving DCFlex, National Grid, NESO, PPC, RTE, and RWE. This group is working to develop frameworks that reflect operational requirements, with a focus on improving resilience and scalability as data centre capacity expands.

Activities include work on shared standards, testing environments, and implementation guidance for flexible data centre operations.

Zane Ball, Chief Technology Officer at OCP, notes, “With a growing member base and top-tier data centre expertise coming together with a single vision, our collaboration creates opportunities for harmonised standards, shared testing environments, and coordinated guidance for implementing flexible, resilient, and affordable data centre solutions.”

EPRI says it is also supporting the work through field demonstrations at data centres in Europe and the United States, exploring flexible load approaches that could support grid stability and reduce barriers to connection.

Carrier launches AquaEdge chiller
Carrier, a manufacturer of HVAC, refrigeration, and fire and security equipment, has introduced the AquaEdge 19MV4 centrifugal chiller, designed to support cooling requirements in high-density AI data centres. The system forms part of the company’s QuantumLeap portfolio and is intended for use in environments where increasing compute density and rising temperatures place pressure on existing cooling infrastructure.

The chiller is designed to deliver between 2.1 MW and 3.3 MW of cooling capacity, supporting workloads driven by high-performance GPUs. It is also engineered to operate with chilled-water temperatures of up to 35°C and condensing temperatures up to 55°C, aligning with liquid cooling approaches such as direct-to-chip and rear-door heat exchangers.

Designed for high-density cooling environments

Carrier states that the system uses a variable-speed centrifugal compressor capable of operating between 10% and 100% load, allowing it to respond to fluctuating AI workloads without frequent cycling.

Marti Urpinas, Senior Technical Manager, Vertical Markets EMEA, DC Applied at Carrier, comments, “AI workloads are reshaping data centre specifications, pushing our customers to seek greater thermal headroom without sacrificing power stability.

"That sounds like a tall order, but the AquaEdge 19MV4 isn’t a ‘standard’ chiller; it’s a variable-speed centrifugal platform that delivers cooling continuity for high-density racks, even as operators push chilled-water temperatures higher to support direct-to-chip architectures.”

The unit is designed to restart within 150 seconds following a power interruption, supporting thermal recovery and reducing the risk of overheating in high-density environments. It also incorporates harmonic filtering to limit electrical distortion and protect associated infrastructure, including uninterruptible power supplies (UPS).

Carrier reports that the system can achieve a coefficient of performance (COP) of up to 6.75 and an integrated part load value (IPLV) of 11.4 under AHRI test conditions. The chiller is available with refrigerants including R-1234ze and R-515B, supporting compliance with EU F-Gas regulations. Additionally, noise levels are specified at below 80dBA under defined operating conditions.

For more from Carrier, click here.
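For context on what the COP figure means for electrical demand, the short sketch below applies the definition COP = cooling output / electrical input to the quoted capacity range. The capacity and COP values are those cited in the announcement; treating them as holding at the same operating point is a simplification for illustration.

```python
# Relating quoted COP to electrical draw: COP = cooling / electrical input.
# Assumes, for illustration, that the peak COP applies across the range.

def electrical_input_kw(cooling_kw: float, cop: float) -> float:
    """Electrical power needed to deliver a given cooling duty."""
    return cooling_kw / cop

for cooling_mw in (2.1, 3.3):            # quoted capacity range
    kw = electrical_input_kw(cooling_mw * 1000, 6.75)
    print(f"{cooling_mw} MW cooling at COP 6.75 -> ~{kw:.0f} kW input")
# -> ~311 kW and ~489 kW respectively
```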

ZIEHL-ABEGG highlights ZAbluefin fan
German ventilation manufacturer ZIEHL-ABEGG has outlined the performance characteristics of its ZAbluefin centrifugal fan, designed for HVAC and air handling unit applications.

The fan uses a biomimetic blade design, including a corrugated leading edge and twisted geometry, to improve airflow efficiency. A serrated trailing edge is intended to reduce turbulence and noise while maintaining stable performance under varying airflow conditions. According to the company, the design supports energy efficiency at typical operating points, particularly in environments where airflow may be disrupted.

Focus on efficiency and low-noise operation

The ZAbluefin fan is designed to reduce sound output, with a focus on minimising tonal noise, making it suitable for noise-sensitive environments. Its performance curve allows for a wider operating range without flow separation, enabling system designers to meet different requirements without oversizing equipment. The fan is also intended to support compliance with current and future efficiency regulations.

The product range covers diameters from 250mm to 1,120mm, with airflow capability of up to around 90,000m³/h and static pressure up to approximately 2,500Pa. This allows use across both compact and large-scale HVAC systems.

ZIEHL-ABEGG has also developed a one-piece mounting system to support installation. The mount is designed for multiple orientations, including horizontal and vertical configurations, and is intended to simplify installation and reduce component variation.

The company states that the combined fan and mounting design aims to improve efficiency, reduce noise, and simplify deployment across a range of HVAC applications.

For more from ZIEHL-ABEGG, click here.
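To give a sense of the power involved at the top of the quoted range, the sketch below computes ideal air power as flow times pressure rise. Note the two maxima will not generally coincide at a single operating point on a real fan curve, and the efficiency figure is an assumption for illustration, not a ZIEHL-ABEGG specification.

```python
# Illustrative only: ideal (lossless) air power P = Q * dp, then shaft
# power at an assumed efficiency. Not a ZAbluefin operating point.

def air_power_kw(flow_m3h: float, static_pa: float) -> float:
    """Ideal air power: volumetric flow (m^3/s) times pressure rise (Pa)."""
    return (flow_m3h / 3600) * static_pa / 1000

q, dp = 90_000, 2_500       # quoted maximum flow and static pressure
ideal = air_power_kw(q, dp)
assumed_efficiency = 0.65   # assumption for illustration only

print(f"Ideal air power: {ideal:.1f} kW")                        # 62.5 kW
print(f"Shaft power at 65% efficiency: {ideal / assumed_efficiency:.1f} kW")
```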

A look ahead to DTX + UCX Manchester 2026
DTX + UCX Manchester, one of the UK’s leading business transformation events, will return to Manchester Central on 29–30 April 2026. As the flagship event of Manchester Tech Week, it’s set to bring together a renowned roster of speakers with an agenda dedicated to the event’s theme: 'From Purpose to Practice: Igniting Curiosity, Building Trust, Confronting Risk'.

In an unmissable Day 1 keynote, two of the world's most formidable cyber authorities will tackle the defining challenge of our era: how to leverage AI against emerging threats. Featuring Howard Marshall, the former Deputy Assistant Director of the FBI's Cyber Division, and Kelly Bissell, the former Corporate VP of Product Abuse and Risk at Microsoft, this session offers a rare look at the front lines of global defence. Together, they will share high-level insights on moving from reactive monitoring to autonomous, real-time protection.

The momentum continues into Day 2 with a deep dive into the next frontier of automation. Chief AI Officer Chiru Bhavansikar (Arhasi AI), Rahul Kulkarni (AWS), and Andreas Kollegger (Neo4j) will take the stage to dismantle the complexities of Agentic AI, highlighting how knowledge graphs are building the brains behind the next generation of intelligent systems.

Across both days, senior technology leaders from Liverpool City Region, GCHQ, the Home Office, and N Brown will share real-world case studies and practical insights, focusing on cyber resilience strategies, regulatory requirements, and deploying AI in a secure, ethical, and commercially viable way.

The event brings together IT decision makers covering the entire tech stack, including ITSM, cyber security, IT infrastructure and cloud, data management, communications and collaboration, customer experience, and AI and automation. Visitors can expect exclusive panels, workshops, technical deep-dives, and community meetups.

To attend, you can register for a free pass on the event’s website.

For more from DTX, click here.


