News


Kao Data backs Discover Tech careers programme
Kao Data, a developer and operator of data centres, has joined the Cisco-led Discover Tech programme, aimed at widening access to technology careers for young people across the UK.

As an employer member, the data centre operator joins organisations including Adobe, Accenture UK&I, IBM, and World Wide Technology, alongside CDW, FDM, BBC, Highpoint, and Softcat. The initiative focuses on supporting underrepresented groups to explore opportunities in the technology sector.

A pilot programme launched in February 2026 engaged 100 students, with most participants reporting increased interest in technology careers and in the participating employers. The scheme will expand in July, offering a two-day programme for around 600 young people in London and Manchester. The first day provides an introduction to the sector, while the second involves on-site visits with employer partners, covering areas such as AI, cybersecurity, and cloud computing.

Kao Data will host around 40 students at its Harlow campus on 15 July, providing an overview of data centre infrastructure and its role in supporting digital services. The site is located at Kao Park, associated with early fibre optic research led by Sir Charles Kao.

Industry initiative targets skills gap

The programme forms part of wider efforts to address skills shortages in the technology sector, particularly within digital infrastructure and data centres.

Kalay Moodley, Chief People Officer at Kao Data, comments, “Discover Tech is exactly the kind of initiative the sector needs. The data centre industry is facing a significant skills shortage, and if we are serious about closing that gap we have to reach into communities that have historically been overlooked and show young people what a career in our industry can look like.

“Hosting these students at Kao Park is a real privilege. This is the birthplace of fibre optic networks, the technology that carries the modern internet, and we want every young person who walks onto our campus to leave understanding that the digital world they use every day is built by people, and that those people could be them.”

Rachel Morar, Managing Director at Connectr Early Engagement, says, “We're delighted to welcome Kao Data to the Discover Tech family.

"This group of employers has come together with the ambition of positively impacting 7,000 young people over the next three years, myth-busting about the sector and getting students excited about where they fit into the ecosystem.

“Kao Data shares our passion for making sure young people are informed about the tech sector at this pivotal decision-making age in their school and college careers.

“Data centres will play a vital role in the UK's economic growth, and Kao Data joining the programme will bring invaluable insights for our Y12s to learn from in July.”

The initiative also supports Kao Data’s wider education and skills activities, including its Kao Academy programme for primary schools and its Critical Careers campaign focused on data centre roles.

For more from Kao Data, click here.

Neterra adds fourth Sofia–Frankfurt data route
Neterra, an independent Bulgarian global telecommunications provider, has launched a fourth independent data transmission route between Sofia and Frankfurt, expanding capacity and network resilience. The company states it is the only provider in the region operating four separate and geographically diverse routes between the two locations. The infrastructure is supported by its NetIX internet exchange platform.

The additional route has been introduced in response to increasing disruption across international networks. Recent outages have shown that multiple routes can be affected at the same time, impacting services across major platforms.

Dean Belev, Senior Product Manager for Connectivity and NetIX at Neterra, says, “When external interruptions occurred in international infrastructure, we saw that even three routes were not always enough to guarantee the quality we strive for.

“That is why we initiated the construction of a fourth line based on our specific requirements. Now, all four routes are completely independent, not only in their physical paths, but also in terms of operators and equipment used. This represents the highest level of protection we can offer our customers.”

The new route is already supporting several hundred customers using data transmission, internet access, and NetIX platform services in the region.

Capacity upgrade planned across all routes

Alongside the new route, Neterra plans to increase capacity across its Sofia–Frankfurt network, upgrading from N × 100Gbps to N × 400Gbps across all four routes. The upgrade reportedly follows continued growth in demand and the expansion of NetIX, which currently operates at around 7Tbps capacity. The combined expansion is intended to improve both performance and resilience across one of Europe’s key data connectivity corridors.

For more from Neterra, click here.

How to ensure your infrastructure complies with DORA
In this exclusive article for DCNN, Chris Noon, Director of Solution Engineering, International at Alkira, outlines how financial institutions must embed security, resilience, and transparency into their network infrastructure to meet the demands of DORA:

Rethinking network infrastructure

The Digital Operational Resilience Act (DORA) marks a major change in how the European financial sector manages technology risk. Instead of focusing only on solvency, DORA emphasises keeping digital services running smoothly. For enterprise organisations, this means every part of the technology stack, especially the network infrastructure connecting cloud environments and data centres, must be reviewed with operational resilience and security in mind.

With this new framework, financial institutions are ultimately responsible for their digital resilience, even as they rely more on a complex network of ICT third-party service providers. To manage this, IT and compliance teams need to shift from reactive security to building systems where resilience is built in from the start.

The core pillars of DORA compliance

DORA requires financial organisations to have a complete strategy for managing ICT risks. This strategy should address five main areas: ICT risk management, incident reporting, operational resilience testing, third-party risk management, and information sharing.

From an infrastructure point of view, the regulation says organisations must treat their network and cloud providers as essential parts of service delivery. IT teams should make sure providers go beyond just offering a service-level agreement and also give clear information about how their systems are built, managed, and secured.

Security by design in network infrastructure

To build security by design, start by choosing infrastructure platforms that follow well-known industry standards. When reviewing a network provider, IT teams should look for signs of a "born-in-the-cloud" or "security-first" approach. This shows the platform was built to work in high-risk, tightly regulated settings.

Key indicators of a security-by-design approach include:

• Identity and access governance — Providers should have strong identity and access management (IAM) features, such as multi-factor authentication (MFA), detailed role-based access control (RBAC), and policy-based access control (PBAC). This helps make sure only authorised people can change important network settings.

• Encrypted connectivity — Security by design means data must be protected both while moving and when stored. Network providers should make it easy to use encryption across multi-cloud and hybrid setups without making operations more complicated.

• Independent validation — Security claims need to be supported by third-party audits. Certifications like SOC 2 Type II, which cover security, availability, and confidentiality, are important standards. These reports give the proof needed for the due diligence required by DORA.

Building for operational resilience

Operational resilience means a company can handle, respond to, and recover from technology problems. For DORA, this means the network should not have a single point of failure. A resilient setup is usually spread out so if one part fails, traffic is rerouted to keep services running.

IT teams should choose providers that focus on high availability as a key part of their services. This means having constant monitoring and alerts to catch problems early. The provider should also have a clear and tested incident response plan.
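To make the monitoring idea concrete, here is a minimal sketch of continuous path probing with failover between two redundant routes. The hostnames, port, and thresholds are illustrative assumptions, not any specific provider's design or API:

```python
import socket
import time

# Hypothetical redundant routes between a site and a cloud region; the
# hostnames and thresholds below are placeholders for illustration only.
ROUTES = {"primary": "path-a.example.net", "secondary": "path-b.example.net"}
MAX_LATENCY_MS = 150.0   # alert threshold for a degraded path
PROBE_INTERVAL_S = 30.0  # how often the active path is probed

def check_health(host: str, port: int = 443, timeout: float = 2.0):
    """Probe a path by timing a TCP connect; returns latency in ms, or None."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000.0
    except OSError:
        return None

def monitor(active: str = "primary") -> None:
    """Continuously probe the active path and fail over when it degrades."""
    while True:
        latency = check_health(ROUTES[active])
        if latency is None or latency > MAX_LATENCY_MS:
            standby = "secondary" if active == "primary" else "primary"
            # A real platform would raise an alert and trigger rerouting;
            # this sketch simply logs and switches the active path.
            print(f"ALERT: {active} degraded ({latency}); failing over to {standby}")
            active = standby
        time.sleep(PROBE_INTERVAL_S)
```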
DORA requires financial institutions to report major ICT incidents to regulators quickly, so the network provider must be able to supply the needed data and logs for fast investigation and reporting.

Managing third-party risk and oversight

A major challenge with DORA is the extra oversight of third-party providers. Financial organisations now have to include clear contract terms about oversight and audit rights. This need for transparency can be hard for some traditional technology providers to handle.

When choosing an infrastructure partner, organisations should pick providers with clear processes for handling compliance questions. This means they can share security policies, operational procedures, and proof of regular penetration testing under non-disclosure agreements. The provider should act as a partner, helping the customer meet regulatory requirements, not just supplying a technical service.

The role of Infrastructure-as-a-Service (IaaS)

As financial institutions update their networks, many are choosing Infrastructure-as-a-Service (IaaS) models to handle the complexity of multi-cloud environments. These platforms connect on-premises data centres with different cloud service providers, acting as the system’s central hub.

To meet DORA requirements, an IaaS platform must show it does not create new risks. It should be built on a well-known cloud infrastructure that already meets strong security standards. Using a resilient IaaS model helps IT teams see their whole network clearly, making risk management and compliance easier.

Practical steps for IT teams

To get ready for DORA, IT and risk management teams should take these practical steps with their network providers:

1. Conduct comprehensive due diligence — Check current and potential providers to make sure they meet DORA’s rules for security controls, incident response, and resilience testing (a minimal sketch of such a check appears at the end of this article).

2. Audit contractual arrangements — Make sure contracts clearly state audit rights, service levels, and the provider’s duty to help during a regulatory inquiry.

3. Evaluate multi-cloud strategy — Check if your current network setup allows you to quickly move workloads between cloud providers if one goes down.

4. Establish clear reporting lines — Decide how the network provider will communicate during an incident and what information they will give to support your reporting needs.

Looking forward

DORA is an ongoing operational process, not a one-off project. As regulations change, the need for operational resilience will only grow. Financial institutions that focus on security by design and pick infrastructure partners who value transparency and reliability will be better prepared for these changes.

In the end, resilience is something everyone shares. The financial organisation is still responsible to the regulator, but its compliance success depends on its technology providers. By choosing providers who see compliance as a key part of their design, organisations can build a digital foundation that meets DORA and supports the future of digital finance.
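Returning to step 1 above, a due-diligence review can be tracked as a simple checklist. The sketch below uses entirely hypothetical attribute names; real DORA due diligence is a legal and organisational exercise, not merely a script:

```python
# Hypothetical DORA-aligned due-diligence checklist. The attribute names are
# illustrative assumptions, not terms drawn from the regulation itself.
REQUIRED = [
    "mfa_enforced",               # identity and access governance
    "rbac_available",
    "encryption_in_transit",      # encrypted connectivity
    "encryption_at_rest",
    "soc2_type2_report",          # independent validation
    "tested_incident_response",   # operational resilience
    "audit_rights_in_contract",   # third-party oversight
]

def assess_provider(profile: dict) -> list:
    """Return the checklist items a provider profile fails to satisfy."""
    return [item for item in REQUIRED if not profile.get(item, False)]

# Example: a profile assembled during a vendor review (invented data).
provider = {item: True for item in REQUIRED}
provider["soc2_type2_report"] = False
provider["audit_rights_in_contract"] = False

print("Gaps to remediate:", assess_provider(provider) or "none")
# -> Gaps to remediate: ['soc2_type2_report', 'audit_rights_in_contract']
```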

Yondr powers up 27MW Toronto data centre
Yondr Group, a global developer, owner, and operator of hyperscale data centres, has energised its 27MW data centre in Toronto, marking its entry into the Canadian market. The 4.5-acre (18,211m²) site is expected to be ready for service in mid-2026 and forms part of the company’s wider expansion across North America and Europe. The facility is designed to provide hyperscale capacity to support growing demand for digital infrastructure in the region.

The data centre incorporates a closed-loop cooling system to reduce water usage and has been developed in line with the Toronto Green Standard. The site also includes electric vehicle charging points, cycle parking, bird-friendly glazing, and landscaping using native and pollinator plant species. Yondr states that the development aligns with its target to achieve net zero scope one and two emissions by 2030.

Sustainability and community engagement initiatives

Alongside the build, Yondr has partnered with the University of Toronto to support a scholarship programme for undergraduate students across disciplines including computer science, commerce, life sciences, and physical sciences. The programme offers awards of up to CA$5,000 (£2,700), with two students supported to date.

The company has also contributed to local initiatives, including pre-apprenticeship placements, apprentice site tours, support for youth sports teams, and a tree planting event linked to Earth Day.

John Madden, Chief Data Center Officer at Yondr Group, says, “We’re proud to mark the energisation of our Toronto data centre campus - a major milestone that moves us another step closer to delivering critical digital infrastructure for the region.

"Demand for capacity is accelerating at a pace we’ve never experienced before, driven by AI scale and a shift towards compute-led economies. Our Toronto campus forms a key part of Yondr’s strategy to deliver the next generation of sustainable, high-performance data centre capacity across North America and beyond.”

Todd Sauer, VP Design & Construction Americas at Yondr Group, adds, “This campus has been designed with future demand and long-term environmental responsibility in mind, integrating innovative cooling efficiency, resilience, and local sustainability standards from the outset.

"Combined with our delivery model and rapid campus deployment approach, we’re unlocking speed, scale, and certainty for customers as they plan the digital infrastructure of tomorrow.

“We’re committed to building not just capacity, but lasting value. From delivering hyperscale-ready infrastructure to working with academic partners like the University of Toronto to invest in future talent pipelines, this project represents a significant commitment to the region and its long-term digital growth.”

For more from Yondr Group, click here.

FTTH Conference 2026 highlights Europe’s fibre momentum
The FTTH Conference 2026 has successfully concluded, bringing together industry leaders, policymakers, and investors to assess progress and priorities for Europe’s fibre future. Discussions confirmed steady momentum in fibre rollout, alongside a growing focus on adoption, investment sustainability, and enabling regulatory frameworks.

Key topics included the strategic role of fibre as backbone infrastructure for data centre interconnectivity, network resilience and cybersecurity, cloud applications, and technologies of the future such as artificial intelligence (AI). The event also provided the stage to recognise excellence across the sector through the FTTH Awards and FTTH Innovation Awards 2026.

FTTH Awards 2026 winners:

• Individual Award — Trevor Linney, Network Technology Director at Openreach
• Operator Award — Netomnia (United Kingdom)
• Champion of Diversity Award — SIRO (Ireland)

FTTH Innovation Awards 2026 winners:

• Passive Infrastructure — Homes Passed+ featuring FiberMag by Emtelle
• Active Infrastructure Central Network — Multi-PON line card LLLT-A by Nokia
• Active Infrastructure Home Network — SDG 8000 and 9000 Series mesh Wi-Fi solutions by Adtran
• Planning, Workflow, Mapping/GIS Software — NET Scan by TKI
• Installation Equipment, Tools, Test & Measurement — FTB-Lite Series by EXFO
• Artificial Intelligence & Software — Mosaic One Clarity by Adtran

FTTH Council Europe President Francesco Nonno comments, “The FTTH Conference 2026 confirms that Europe is facing the last phase of fibre coverage and of migration from copper to fibre, which is proving difficult in a number of countries.

"Policy choices, such as copper switch off, are much welcomed to accelerate this process, while the focus is clearly shifting towards building resilient, future-proof networks that will underpin Europe’s digital decade.”

The FTTH Council Europe says it now looks ahead to the next edition of the event, FTTH Conference 2027, taking place from 16–18 March 2027 at MiCo Milan in Italy, where the industry will continue to shape the future of fibre connectivity.

For more from FTTH, click here.

ECL developing 35MW Santa Clara data centre
ECL, a US data-centre-as-a-service company, has announced plans to develop a 35MW data centre in Santa Clara, California, USA, designed to support high-density AI workloads using a mix of power sources. The facility, known as CSC-1, will combine on-grid electricity with hydrogen and natural gas generation. The approach is intended to address growing demand for power in data centre markets where grid capacity is limited.

CSC-1 will launch with rack densities ranging from 75kW to 270kW. The site is based on ECL’s FlexGrid architecture, which integrates multiple power inputs and is designed to operate alongside local utility infrastructure. The system is expected to deliver a power usage effectiveness (PUE) of below 1.15 while supporting lower emissions through a combination of cooling methods.

The development will follow a phased approach, starting with an initial 2.5MW deployment and scaling up to full capacity as demand increases. This model is intended to allow operators to begin AI workloads earlier, without waiting for full site completion.

Modular power approach addresses grid constraints

Northern California remains a constrained market for data centre power, with delays in grid connections affecting new developments. As a result, alternative approaches such as on-site power generation are becoming more widely adopted.

ECL’s FlexGrid system uses modular power blocks that can be deployed incrementally. This allows capacity to be added over time, aligning infrastructure growth with demand for AI compute. The system also incorporates different cooling methods, including direct-to-chip and air cooling. When hydrogen is used as a power source, by-product water can be reused within the cooling process, reducing the need for additional water supply.

The architecture is designed to meet Tier III-level reliability requirements and includes a real-time management platform to monitor and adjust power generation, cooling, and rack-level operations.

Yuval Bachar, Co-founder and CEO of ECL, comments, “A 35MW facility delivered in Santa Clara in under a year would have been unthinkable through traditional grid-connected development.

"Every major AI operator in the Bay Area is staring at the same maths, with years-long interconnection queues pitted against AI deployment needs that are growing by the minute.

"By phasing growth through modular power blocks, ECL matches infrastructure deployment to the actual pace of AI demand rather than forcing customers to overbuild or wait. This site demonstrates that power architecture itself can become the enabling layer for AI scale rather than the constraint.”

ECL is currently accepting enquiries from prospective tenants for the site.

For more from ECL, click here.
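For context on the PUE target, here is a back-of-envelope calculation. It assumes, purely for illustration, that the full 35MW is IT load; the announcement does not state the actual split between IT and facility power:

```python
# PUE (power usage effectiveness) = total facility power / IT equipment power.
# Illustrative assumption: the quoted 35MW is treated entirely as IT load.
it_load_mw = 35.0
pue = 1.15  # the announced upper bound

total_facility_mw = it_load_mw * pue
overhead_mw = total_facility_mw - it_load_mw

print(f"Total facility power: {total_facility_mw:.2f} MW")   # 40.25 MW
print(f"Cooling and overhead budget: {overhead_mw:.2f} MW")  # 5.25 MW
```

Under this assumption, a PUE below 1.15 would leave less than 5.25MW for everything that is not compute, which is why cooling efficiency features so heavily in the design.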

Huber+Suhner expands Microsoft Azure fibre collaboration
Huber+Suhner, a Swiss fibre optic cable manufacturer, has strengthened its collaboration with Microsoft Azure to support the wider deployment of hollow core fibre (HCF) connectivity across the Azure network. The company plans further investment in production capabilities to increase manufacturing volumes as Microsoft expands the use of HCF across additional Azure regions. The collaboration is focused on supporting cloud and AI infrastructure requirements.

Huber+Suhner has worked with Microsoft’s Azure fibre team in Romsey, UK, since 2017, following the acquisition of Lumenisity, a University of Southampton spin-out. Together, the organisations have developed HCF cable and connector technologies which are already deployed within the Azure network. Higher-capacity variants are also in development to support future infrastructure growth.

The two companies have jointly developed and qualified a range of outside plant (OSP) and inside plant (ISP) cable designs for field deployment. Work is also ongoing to develop higher-density HCF cable designs for future network requirements. At Huber+Suhner’s manufacturing facility in Herisau, Switzerland, dedicated processes have been introduced to integrate HCF into multi-fibre loose-tube cables, with scope to increase capacity as demand grows.

Connector development supporting HCF deployment

Alongside cable development, Huber+Suhner has developed a mode-converting HCF connector designed for hyperscale and metro optical environments. These connectors are manufactured at the company’s Cube Optics facility in Mainz, Germany, with further investment planned to expand production capacity. With both HCF cable and connector designs qualified, Huber+Suhner says it is extending its portfolio to support end-to-end fibre connectivity across cloud infrastructure.

Jürgen Walter, COO Communication Segment at Huber+Suhner, comments, “Huber+Suhner is proud to support Microsoft as HCF connectivity solutions move to deployment at scale.

"Building on our foundations of innovation and quality, we can expect further advances in our HCF connectivity portfolio as the pace of adoption accelerates. Together, we look forward to shaping the future of cloud connectivity and unlocking the full potential of HCF.”

Colin Wallace, GM Cloud Network Engineering at Microsoft Azure, adds, “We value our long-standing collaboration with Huber+Suhner, which has helped us transition HCF technology from advanced research into operational deployment in the Microsoft Azure network.

"These HCF cable and connector technologies are already deployed and carrying live traffic over Azure HCF links today, and this integrated capability will help us rapidly co-design and scale connectivity solutions for the future of cloud and AI network infrastructure.”

The relevance of HCF

HCF technology enables data to be transmitted through air rather than glass, allowing for significantly lower latency in optical networks. Microsoft’s Double-Nested Anti-Resonant Nodeless Fibre design also supports lower signal loss and higher launch powers compared to standard single-mode fibre, reducing the need for optical amplification in some metro networks.
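To put the latency claim in rough numbers, propagation delay per kilometre is n/c, where n is the group index of the medium. The figures below are generic textbook values, not Microsoft's measured results:

```python
# Propagation delay per km: t = n / c. Back-of-envelope, generic values only.
C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

n_silica = 1.47   # typical group index of standard single-mode fibre
n_air = 1.0003    # air in a hollow core, close to vacuum

delay_glass_us = n_silica / C_KM_PER_S * 1e6  # microseconds per km
delay_air_us = n_air / C_KM_PER_S * 1e6

print(f"Silica fibre: {delay_glass_us:.2f} us/km")  # ~4.90 us/km
print(f"Hollow core:  {delay_air_us:.2f} us/km")    # ~3.34 us/km
print(f"Reduction:    {(1 - delay_air_us / delay_glass_us) * 100:.0f}%")  # ~32%
```

A roughly one-third cut in propagation delay per kilometre is what makes HCF attractive for latency-sensitive links between compute clusters.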
The use of HCF in data centre environments is expected to support greater flexibility in site location, as well as improved efficiency in distributed AI workloads by reducing latency between compute clusters. However, wider deployment presents technical challenges, including the need for robust cable designs and compatible termination methods. Huber+Suhner says its HCF connectors are designed to interface with standard single-mode fibre systems while protecting the hollow core structure and maintaining performance in operational environments.

For more from Huber+Suhner, click here.

'AI growth doesn’t have to break the grid'
A UK high‑performance computing (HPC) data centre has reportedly cut its carbon emissions by three quarters while easing pressure on the electricity system, offering a blueprint for how the fast‑growing AI sector can expand without overwhelming the grid.

Stellium Datacenters, which operates one of the UK's largest purpose-built data centre campuses near Newcastle, has switched to a new way of sourcing electricity. This matches its power use with renewable generation hour by hour, rather than relying on annual averages.

The move comes as data centres face mounting scrutiny over their energy use, with concerns growing that AI and cloud computing could strain local grids and push up energy costs. That scrutiny has intensified in recent months, with MPs launching an inquiry through the Environmental Audit Committee into the environmental impact of data centres, including their growing electricity and water use and the pressure they place on local grids.

Working with renewable energy supplier Good Energy, Stellium now runs its site on a 100% renewable, hourly‑matched electricity supply, linking consumption directly to power generated by more than 3,300 independent UK renewable generators. This approach allows the company to show exactly when its electricity demand is met by renewable sources, achieving an hourly matching score of 95.4%, more than double the current market average of around 43%. Planned additions, including large-scale battery storage, are expected to lift this to 97–98% while being able to show exactly which UK renewable assets powered the data centre and when.

'Hourly matching' as an improved metric

Traditionally, many data centres rely on renewable certificates that show clean electricity was generated somewhere on the grid over a year, even if fossil fuels were used at the time power was actually consumed. Some “100% renewable” tariffs relying on this system mask continued reliance on fossil-fuelled power at precisely the moments when the grid is most constrained.

By contrast, hourly matching provides a much clearer picture of real‑world impact, demonstrating which users are sourcing clean, homegrown power versus relying on fossil‑fuelled generation at peak times.

Stellium says the change has transformed conversations with customers, regulators, and auditors, particularly global AI and technology firms with strict net zero and reporting requirements. The company says it can now demonstrate, in detail, which renewable assets powered its operations, when they did so, and where they are located.

Paul Mellon, Operations Director at Stellium, notes, “Data centres often get bad press for their high, inflexible energy use. But this shows that AI and high‑performance computing don’t have to come at the expense of the grid or the climate.

"By switching to hourly‑matched renewable power, we’ve been able to cut emissions dramatically while giving customers the transparency they increasingly demand.”

Nigel Pocklington, CEO of Good Energy, adds, “By matching electricity use with renewable generation hour by hour, Stellium can show when clean power is actually being used.

"That kind of transparency cuts carbon emissions, reduces reliance on fossil fuels at peak times, and proves that digital growth and a resilient energy system can go hand in hand.”
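To illustrate why hourly matching is a tougher test than annual matching, here is a minimal sketch over invented data; the method shown is a simple hour-by-hour minimum, not Good Energy's actual scoring methodology:

```python
# Toy comparison of annual vs hourly renewable matching.
# consumption[i] and renewables[i] are MWh in hour i; the data is invented.
consumption = [10, 10, 10, 10]   # steady data centre load
renewables  = [25,  5,  5,  5]   # generation concentrated in one hour

# Annual-style matching: totals over the whole period.
annual = min(sum(renewables) / sum(consumption), 1.0)

# Hourly matching: only generation in the same hour as consumption counts.
matched = sum(min(c, r) for c, r in zip(consumption, renewables))
hourly = matched / sum(consumption)

print(f"Annual matching: {annual:.0%}")  # 100% - looks fully renewable
print(f"Hourly matching: {hourly:.0%}")  # 62% - exposes unmatched hours
```

On this toy data the annual view reports 100% renewable supply, while the hourly view shows that only 62% of consumption was actually matched in time — the gap the Stellium approach is designed to expose and close.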
Explosive data centre growth in the UK

The case comes as the UK prepares for a major expansion in data centre capacity to support AI, cloud computing, and data‑driven industries. As planners, communities, and policymakers look more closely at how new developments will affect local infrastructure, Stellium’s experience suggests that data centres can respond by sourcing and reporting their energy responsibly, rather than relying on offsetting or misleading annualised accounting.

With pressure growing on the sector to prove its environmental credentials, the model demonstrates that practical solutions may already exist, and that AI‑driven growth can be aligned with a cleaner, more resilient electricity system.

For more from Stellium Datacenters, click here.

How to define the right sovereign cloud strategy
In this exclusive article for DCNN, Joe Baguley, CTO EMEA at Broadcom, gives his insight into how a workload-first approach to sovereign cloud, underpinned by data classification, flexible architecture, and strong partnerships, is reshaping European digital competitiveness:

Reclaiming control and competitiveness

Across Europe, governments and enterprises alike are increasingly recognising that data control holds the keys to innovation. This means a change in attitudes towards cloud sovereignty; it’s no longer seen as a simple compliance factor, but as a top priority for competitiveness and trust.

The European Union is taking steps to support this shift, placing greater emphasis on sovereign infrastructure as part of its broader digital strategy. A clear example is the €180 million (£156 million) tender launched by the European Commission through its Cloud III Dynamic Purchasing System, aimed at procuring sovereign cloud services for EU institutions.

To ensure cloud sovereignty, the first step is preparation: organisations need a clear understanding of where their data resides, how it moves, and who controls it. Answering these questions requires a clearly defined strategy, one that aligns workloads with the most appropriate cloud environments and establishes effective data governance. Importantly, it has to support the development of flexible cloud architectures capable of meeting regulatory demands while still enabling innovation.

Designing cloud strategies around workload needs

At the heart of a successful sovereign cloud strategy lies a simple principle: placing the right workload in the right environment. There is no single solution that fits all applications. Enterprises must align each workload with the cloud environment that best meets its compliance, operational, and performance requirements to determine whether it belongs in a public, private, or sovereign cloud. Some applications may thrive in a hyperscaler environment, while others require the control and security of a sovereign setup.

This reality has made hybrid cloud strategies the norm. Over the past decade, many organisations initially committed to a single hyperscaler for all workloads, only to realise that different applications have different requirements. Today, IT leaders increasingly need to adopt a ‘right workload, right place’ mindset, recognising that some applications may remain on premises, others run optimally in public clouds, and some require sovereign environments for regulatory or operational reasons.

This hybrid approach enables organisations to balance innovation with control while avoiding vendor lock-in and making more effective use of the strengths of different cloud ecosystems.
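As an illustration of the ‘right workload, right place’ principle, here is a minimal sketch of a placement rule keyed to data sensitivity and jurisdiction. The tiers, field names, and rule itself are hypothetical, not Broadcom's framework:

```python
from dataclasses import dataclass

# Hypothetical classification tiers and placement rule, for illustration only.
@dataclass
class Workload:
    name: str
    sensitivity: str             # "public" | "internal" | "regulated"
    eu_jurisdiction_required: bool

def place(w: Workload) -> str:
    """Map a workload to a cloud environment based on its classification."""
    if w.sensitivity == "regulated" or w.eu_jurisdiction_required:
        return "sovereign cloud (EU-operated)"
    if w.sensitivity == "internal":
        return "private cloud / on-premises"
    return "public hyperscaler"

for w in [
    Workload("marketing site", "public", False),
    Workload("HR records", "internal", False),
    Workload("payments ledger", "regulated", True),
]:
    print(f"{w.name}: {place(w)}")
```

In practice, such a rule would be driven by the data classification exercise described next, rather than by hard-coded labels.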
Data classification comes first

Of course, organisations cannot secure or govern what they do not fully understand, so comprehensive data classification is a critical first step. Misclassified data is a frequent source of compliance risk, and over-classification, often a product of risk aversion, can create extra operational complexity and cost. Many organisations treat all data as highly classified simply to be safe, but this can lead to over-investment in secure infrastructure where it is not needed.

Mapping data flows across borders and providers is equally important. Compliance blind spots often appear when data is inadvertently stored or processed in jurisdictions with restrictive data laws. Understanding where sensitive data resides, how it moves, and which regulations apply is essential to reducing risk, demonstrating accountability, and maintaining trust with partners and customers. Retrofitting compliance into existing infrastructure is costly and complex; embedding that understanding into cloud architecture from the outset is far more efficient.

Building flexibility into architecture

Flexibility is the cornerstone of effective sovereign cloud implementations. Architectures built for interoperability and portability allow workloads to move seamlessly across private, public, and sovereign clouds. This adaptability is vital for managing risks posed by geopolitical or regulatory change.

Hyperscalers cannot always guarantee sovereignty due to extraterritorial legislation such as the US CLOUD Act, which permits government access to data held by American companies abroad. By contrast, working with local cloud operators enables enterprises to maintain jurisdictional control over their data while still leveraging the latest technology. Moreover, working with local cloud operators can provide additional technological sovereignty benefits, ranging from investment in the local ecosystem and industrial base to addressing supply chain concerns, promoting interoperability, avoiding vendor lock-in, strengthening operational control, and managing dependency concerns.

Sovereignty should be viewed not as a constraint, but as a design principle guiding infrastructure, data placement, and application deployment. Organisations that prioritise adaptability can balance regulatory compliance with innovation and long-term strategic growth.

Partnerships powering sovereign cloud

Partnerships also play a pivotal role. No single vendor or platform can solve sovereignty challenges by itself and, in today’s interconnected supply chain, no region has a perfectly vertically integrated supplier base.

Open source is often presented as a route to greater autonomy. In reality, however, open source solutions raise questions about code provenance, reliability when deployed at scale, and dependencies on support.

The most successful sovereign cloud environments combine global technology providers, local operators, and trusted EMEA partners (such as evoila and Arvato). This collaborative approach not only strengthens compliance and transparency, but also accelerates innovation by ensuring that governance does not become a barrier to progress. Meanwhile, the presence of a local ecosystem guarantees the ability to operate and support solutions with a high degree of autonomy.

As regulatory and geopolitical landscapes evolve, organisations that foster open dialogue across their supply chain and internal teams will be best placed to adapt. Sovereignty is as much about alignment, strategic choices, and accountability as it is about infrastructure.

From compliance requirement to strategic asset

Sovereign cloud has moved beyond a purely compliance-driven requirement and is increasingly becoming a source of strategic advantage. Organisations that commit to the ‘right workload, right place’ mindset, maintain clear data classification and flexible architecture, and prioritise interoperability are the ones that will gain a competitive advantage. This approach allows organisations to scale globally whilst remaining aligned to regulatory and geopolitical shifts. Sovereignty is an enabler of AI and should be treated as such.

Scaleway selected for EU sovereign cloud framework
French cloud computing provider Scaleway has been selected by the European Commission as one of four cloud providers under the Cloud III Dynamic Purchasing System, a €180 million (£156 million) programme supporting access to sovereign cloud services for EU institutions.

The framework, which runs for up to six years, enables EU bodies and agencies to procure cloud services through a pre-approved group of providers. Selection follows an evaluation process based on the European Commission’s Cloud Sovereignty Framework, which assesses legal, operational, and technical criteria. As part of the programme, Scaleway will be eligible to participate in project-specific competitions to deliver cloud services, including for sensitive and critical workloads.

Cloud III is managed by the Directorate-General for Digital Services and was introduced in 2025 as the European Commission’s primary framework for cloud procurement. The initiative promotes a multi-cloud model, allowing institutions to select from a limited group of approved providers rather than relying on a single vendor. It is designed to support resilience, continuity, and flexibility across public sector digital infrastructure. The framework also supports deployment of cloud environments for critical systems, alongside fallback capabilities for existing cloud or on-premises infrastructure in the event of disruption.

A framework supporting a sovereign and multi-cloud approach

A key element is the Cloud Sovereignty Framework, which establishes a consistent set of criteria for assessing cloud providers. This is intended to improve transparency and standardisation in how sovereignty is defined and applied across the European cloud sector.

Scaleway operates as a European-owned provider, with infrastructure and operations based within Europe. Its platform is designed to support data localisation and compliance with European regulatory requirements.

Damien Lucas, CEO of Scaleway, comments, "At Scaleway, we are committed to contributing to Europe’s digital autonomy, not only through our technology and our alignment with European regulatory frameworks, but also through how we build and invest in our ecosystem.

“Today, for every euro spent with Scaleway, around 68 cents are reinvested in the European economy, compared to around 20 cents when relying on international hyperscalers.

"Directing investment towards truly European cloud providers helps strengthen local capabilities and ensures that value, expertise, and innovation remain anchored in Europe.”

The company notes that the selection reflects an increasing focus across Europe on sovereign cloud infrastructure, as demand grows for secure, compliant platforms to support data and artificial intelligence workloads.


