
Latest News


OVHcloud expands quantum cloud platform with Quandela
OVHcloud, a French cloud computing provider, has made photonic quantum computing company Quandela’s Belenos quantum computer available through its Quantum platform, expanding access to quantum computing across Europe. The announcement was made at the Quantum Defence Summit, with the addition of Belenos marking a further development of OVHcloud’s cloud-based quantum offering.

The OVHcloud Quantum platform provides access to quantum systems through a Quantum-as-a-Service model, allowing organisations to use quantum computing resources without requiring dedicated hardware. Belenos is based on photonic quantum technology and offers a capacity of 12 qubits. It is intended to support experimentation with algorithms across a range of areas, including image processing, artificial intelligence, and quantum machine learning. Potential applications also extend to fields such as simulation, engineering, and environmental modelling.

Expanding access to quantum computing in Europe

OVHcloud says it has been supporting the European quantum ecosystem since 2022, providing access to quantum emulators through its infrastructure. The platform currently includes multiple emulators, enabling users to test and develop applications across different quantum computing approaches. The addition of Belenos introduces a physical quantum processing unit to the platform, complementing existing emulator-based access.

Miroslaw Klaba, R&D Director at OVHcloud, comments, “We are delighted to deliver on the promise of the Quantum platform by adding a second reference quantum computer, Belenos, from the French company Quandela. The quantum revolution accelerates and OVHcloud is taking its part as the European cloud leader within the ecosystem.”

The system is available through a usage-based pricing model, with billing calculated per second and no long-term commitment required.

Niccolò Somaschi, CEO and co-founder of Quandela, notes, “The integration of Belenos 12 qubits into the OVHcloud portfolio marks a decisive step for quantum in Europe. Accessible through the cloud, this photonic computer becomes a concrete tool for businesses. With OVHcloud, we are offering data scientists and innovators alike the means to develop their algorithms on a flexible and sovereign infrastructure.”

The expansion reflects ongoing efforts to increase accessibility to quantum computing, supporting research and development across industry and academia.

For more from OVHcloud, click here.

EPRI, OCP aim to advance DCs as flexible grid resources
EPRI (the Electric Power Research Institute), an independent, non-profit energy research and development organisation, and the Open Compute Project (OCP), a non-profit organisation that develops and shares open hardware standards and designs for data centre infrastructure, have announced a collaboration focused on developing data centres as flexible resources for power systems.

The initiative aims to support digital infrastructure growth while improving how data centres interact with electricity networks, particularly as demand increases from artificial intelligence and other compute-intensive workloads. By working together, the organisations intend to support improved integration between data centres and power systems while developing technical frameworks to enable more flexible operation.

Arshad Mansoor, President and CEO of EPRI, comments, “We’re in the midst of an energy revolution, and it must be smart, flexible, and innovative to keep rates affordable for customers across the globe. Through this collaboration with OCP, EPRI is combining rigorous power system science with open, scalable data centre innovation to advance practical solutions that enable data centres to operate as flexible, grid-supporting resources - strengthening reliability and affordability for all.”

Developing flexible data centre energy models

The collaboration brings together stakeholders across the energy and data centre sectors, including a European group involving DCFlex, National Grid, NESO, PPC, RTE, and RWE. This group is working to develop frameworks that reflect operational requirements, with a focus on improving resilience and scalability as data centre capacity expands. Activities include work on shared standards, testing environments, and implementation guidance for flexible data centre operations.

Zane Ball, Chief Technology Officer at OCP, notes, “With a growing member base and top-tier data centre expertise coming together with a single vision, our collaboration creates opportunities for harmonised standards, shared testing environments, and coordinated guidance for implementing flexible, resilient, and affordable data centre solutions.”

EPRI says it is also supporting the work through field demonstrations at data centres in Europe and the United States, exploring flexible load approaches that could support grid stability and reduce barriers to connection.

LS Electric wins $115m data centre contract
LS Electric, a South Korean manufacturer of electrical equipment and automation systems, has secured a $115 million (£84.9 million) contract to supply power infrastructure for a series of data centre developments across North America. The projects will support major technology companies expanding capacity for artificial intelligence and other compute-intensive applications, where consistent and high-quality power is required.

Under the agreement, LS Electric will deliver switchgear and distribution transformers designed for continuous operation in high-demand environments.

Expanding North American manufacturing footprint

The deal comes as data centre operators increase their focus on power systems that offer reliability, adaptability, and long-term support as facilities scale to meet rising workloads. Large-scale developments of this kind also require suppliers able to meet strict technical standards while maintaining consistent delivery across manufacturing, logistics, and on-site coordination. LS Electric says it will support the projects from design through to commissioning.

To fulfil the contract, LS Electric will utilise its growing industrial presence in North America, including operations in Utah and Texas, such as MCM Engineering II and its Bastrop campus. These facilities will support production and system integration, as well as ongoing regional expansion in engineered power infrastructure.

LS Electric states it will continue to expand its offering for the sector, focusing on technologies that support reliable and energy-efficient data centre performance.

For more from LS Electric, click here.

Lonestar unveils space-based data storage service, StarVault
Lonestar, a space-based data storage company building orbital and lunar data centres, has announced the launch of StarVault, the "world's first" commercial space-based data storage service, alongside plans to expand its orbital infrastructure through a new agreement with Sidus Space, a US space and defence technology company.

The platform is designed to store data off-planet, combining space-based infrastructure with cryptographic key management. It is intended for use by organisations seeking additional resilience for critical data. Lonestar has also ordered a second orbital payload from Sidus Space to increase storage capacity and redundancy. The first payload is currently in development and is scheduled to launch in October aboard the LizzieSat-4 satellite, with a second launch planned for 2027.

Expansion of orbital data infrastructure

The expansion follows earlier test missions and increasing interest from sectors including government, finance, and critical infrastructure. The StarVault platform is designed to provide an additional layer of data protection, supporting resilience against risks such as cyber incidents, environmental disruption, and geopolitical instability.

Steve Eisele, CEO of Lonestar, says, “Demand for off-planet data security has exceeded expectations. With StarVault, we are not just launching a new category; we are scaling it.”

Sidus Space is building the initial payload, with further deployments expected as Lonestar develops its orbital data storage network. The companies state that the initiative represents an early step in the development of space-based data infrastructure, with a focus on secure storage beyond traditional terrestrial data centres.

Mission Critical Group invests in WattEV
Mission Critical Group (MCG), a critical power infrastructure company, has announced a strategic investment in WattEV to support the development of 800V DC power infrastructure for AI data centres. The partnership focuses on advancing power delivery systems designed to meet the increasing demands of high-density AI workloads, including generative AI and inference applications.

As part of the agreement, Mission Critical Group will support the industrialisation and deployment of a medium-voltage solid-state transformer (SST) platform. This technology is intended to enable the transition to 800V DC architectures within large-scale data centre environments. The companies state that traditional AC-based power systems are facing limitations as AI workloads scale, driving interest in alternative approaches to power distribution. The proposed 800V DC architecture enables direct conversion from medium-voltage AC, with the aim of improving efficiency and reducing system complexity. The modular design is intended to support flexible deployment, faster installation, and easier expansion.

High-density power delivery

Jeff Drees, CEO of Mission Critical Group, says, “We are building the next evolution in modular power delivery. The investment in WattEV highlights our commitment to advancing solutions for ultra-high-density AI workloads, including generative AI and inference.”

Michael Maiello, SVP of Innovation at Mission Critical Group, adds, “We are moving beyond incremental improvements to a fundamentally different power architecture. By converting the ultra-high-power demands of AI directly from medium-voltage AC to 800 VDC, we unlock the full efficiency and performance benefits of 800 VDC distribution.”

Salim Youssefzadeh, CEO of WattEV, concludes, “Our technology is already proven in high-power, real-world applications where efficiency and reliability are critical. Together with MCG, we’re bringing that performance into the data centre to accelerate the adoption of 800 VDC architectures with confidence and speed.”

The companies state that the collaboration aims to support the deployment of scalable power infrastructure for next-generation AI data centres.

For more from Mission Critical Group, click here.
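The efficiency case for higher-voltage DC distribution comes down to simple arithmetic: at a fixed power level, current falls as bus voltage rises, and resistive conductor losses fall with the square of the current. A minimal sketch of that relationship (the power and voltage figures below are illustrative assumptions, not MCG or WattEV specifications):

```python
# Illustrative arithmetic only: P = V * I, so distribution current scales
# inversely with bus voltage, and resistive losses (I^2 * R) scale with
# the square of the current. Figures are examples, not vendor data.
def current_amps(power_w: float, voltage_v: float) -> float:
    """Current required to deliver a given power at a given DC bus voltage."""
    return power_w / voltage_v

one_megawatt = 1_000_000.0
print(current_amps(one_megawatt, 48.0))   # legacy 48 V bus: ~20,833 A
print(current_amps(one_megawatt, 800.0))  # 800 V bus: 1,250 A
```

Roughly a 16-fold reduction in current for the same delivered power, which is the kind of headroom the announcement associates with 800V DC architectures.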

Carrier launches AquaEdge chiller
Carrier, a manufacturer of HVAC, refrigeration, and fire and security equipment, has introduced the AquaEdge 19MV4 centrifugal chiller, designed to support cooling requirements in high-density AI data centres. The system forms part of the company’s QuantumLeap portfolio and is intended for use in environments where increasing compute density and rising temperatures place pressure on existing cooling infrastructure.

The chiller is designed to deliver between 2.1 MW and 3.3 MW of cooling capacity, supporting workloads driven by high-performance GPUs. It is also engineered to operate with chilled-water temperatures of up to 35°C and condensing temperatures up to 55°C, aligning with liquid cooling approaches such as direct-to-chip and rear-door heat exchangers.

Designed for high-density cooling environments

Carrier states that the system uses a variable-speed centrifugal compressor capable of operating between 10% and 100% load, allowing it to respond to fluctuating AI workloads without frequent cycling.

Marti Urpinas, Senior Technical Manager, Vertical Markets EMEA, DC Applied at Carrier, comments, “AI workloads are reshaping data centre specifications, pushing our customers to seek greater thermal headroom without sacrificing power stability. That sounds like a tall order, but the AquaEdge 19MV4 isn’t a ‘standard’ chiller; it’s a variable-speed centrifugal platform that delivers cooling continuity for high-density racks, even as operators push chilled-water temperatures higher to support direct-to-chip architectures.”

The unit is designed to restart within 150 seconds following a power interruption, supporting thermal recovery and reducing the risk of overheating in high-density environments. It also incorporates harmonic filtering to limit electrical distortion and protect associated infrastructure, including uninterruptible power supplies (UPS).

Carrier reports that the system can achieve a coefficient of performance (COP) of up to 6.75 and an integrated part load value (IPLV) of 11.4 under AHRI test conditions. The chiller is available with refrigerants including R-1234ze and R-515B, supporting compliance with EU F-Gas regulations. Additionally, noise levels are specified at below 80dBA under defined operating conditions.

For more from Carrier, click here.
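As a rough guide to what the quoted efficiency figure means, COP is the ratio of cooling output to electrical input, so a chiller's electrical draw can be estimated by dividing capacity by COP. A minimal sketch, assuming the quoted COP of up to 6.75 applies at the 3.3 MW upper capacity (in practice the figure varies with load and operating conditions):

```python
# Illustrative arithmetic only: COP (coefficient of performance) is cooling
# output divided by electrical input, so electrical draw can be estimated
# as capacity / COP. Real-world draw depends on load and conditions.
def electrical_draw_mw(cooling_capacity_mw: float, cop: float) -> float:
    """Estimate electrical input power (MW) from cooling capacity and COP."""
    return cooling_capacity_mw / cop

# At the quoted 3.3 MW upper capacity and a COP of 6.75:
draw = electrical_draw_mw(3.3, 6.75)
print(f"{draw:.2f} MW")  # prints "0.49 MW"
```

That is, roughly 0.5 MW of electrical input per 3.3 MW of cooling delivered at the quoted best-case efficiency.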

ZIEHL-ABEGG highlights ZAbluefin fan
German ventilation manufacturer ZIEHL-ABEGG has outlined the performance characteristics of its ZAbluefin centrifugal fan, designed for HVAC and air handling unit applications. The fan uses a biomimetic blade design, including a corrugated leading edge and twisted geometry, to improve airflow efficiency. A serrated trailing edge is intended to reduce turbulence and noise while maintaining stable performance under varying airflow conditions. According to the company, the design supports energy efficiency at typical operating points, particularly in environments where airflow may be disrupted.

Focus on efficiency and low-noise operation

The ZAbluefin fan is designed to reduce sound output, with a focus on minimising tonal noise, making it suitable for noise-sensitive environments. Its performance curve allows for a wider operating range without flow separation, enabling system designers to meet different requirements without oversizing equipment. The fan is also intended to support compliance with current and future efficiency regulations.

The product range covers diameters from 250mm to 1,120mm, with airflow capability of up to around 90,000m³/h and static pressure up to approximately 2,500Pa. This allows use across both compact and large-scale HVAC systems.

ZIEHL-ABEGG has also developed a one-piece mounting system to support installation. The mount is designed for multiple orientations, including horizontal and vertical configurations, and is intended to simplify installation and reduce component variation. The company states that the combined fan and mounting design aims to improve efficiency, reduce noise, and simplify deployment across a range of HVAC applications.

For more from ZIEHL-ABEGG, click here.

A look ahead to DTX + UCX Manchester 2026
DTX + UCX Manchester, one of the UK’s leading business transformation events, will return to Manchester Central on 29–30 April 2026. As the flagship event of Manchester Tech Week, it’s set to bring together a renowned roster of speakers with an agenda dedicated to the event’s theme: 'From Purpose to Practice: Igniting Curiosity, Building Trust, Confronting Risk'.

In an unmissable Day 1 keynote, two of the world's most formidable cyber authorities will tackle the defining challenge of our era: how to leverage AI against emerging threats. Featuring Howard Marshall, the former Deputy Assistant Director of the FBI's Cyber Division, and Kelly Bissell, the former Corporate VP of Product Abuse and Risk at Microsoft, this session offers a rare look at the front lines of global defence. Together, they will share high-level insights on moving from reactive monitoring to autonomous, real-time protection.

The momentum continues into Day 2 with a deep dive into the next frontier of automation. Chief AI Officer Chiru Bhavansikar (Arhasi AI), Rahul Kulkarni (AWS), and Andreas Kollegger (Neo4j) will take the stage to dismantle the complexities of Agentic AI, highlighting how knowledge graphs are building the brains behind the next generation of intelligent systems.

Across both days, senior technology leaders from Liverpool City Region, GCHQ, the Home Office, and N Brown will share real-world case studies and practical insights, focusing on cyber resilience strategies, regulatory requirements, and deploying AI in a secure, ethical, and commercially viable way.

The event brings together IT decision makers covering the entire tech stack, including ITSM, cyber security, IT infrastructure and cloud, data management, communications and collaboration, customer experience, and AI and automation. Visitors can expect exclusive panels, workshops, technical deep-dives, and community meetups. To attend, you can register for a free pass on the event’s website.
For more from DTX, click here.

Rebellions, SKT, Arm partner on AI infrastructure
Rebellions, a South Korean semiconductor company, has announced a collaboration with South Korean telecommunications company SK Telecom (SKT) and British semiconductor and software design company Arm to develop AI inference infrastructure for sovereign AI and telecom-focused data centres.

The partnership will focus on building an AI server combining an Arm-designed data centre CPU with Rebellions’ AI accelerators. The system will be tested within SK Telecom’s data centre environment before potential wider deployment. The initiative is intended to address growing demand for AI inference infrastructure, particularly in sectors requiring data sovereignty and telecom-specific processing capabilities.

The planned platform will integrate Arm’s AGI CPU, based on the Neoverse CSS V3 architecture, with Rebellions’ RebelCard accelerator. The companies will also work together on the supporting software stack, including firmware, to ensure compatibility and performance.

Development and validation of the AI server platform

Testing will take place in SK Telecom’s operational data centres, where the infrastructure will be assessed for performance and stability. This includes evaluating its suitability for large-scale data processing and AI models used in telecommunications environments. There are also plans to assess the use of SK Telecom’s A.X K1 foundation model on the platform as part of the validation process. Following testing, the companies will consider broader deployment opportunities, with a focus on telecom operators and public sector organisations that require independent AI infrastructure.

Jinwook Oh, CTO of Rebellions, says, “We expect this 'one-team' collaboration of experts to serve as a significant precedent in the industry for building AI-specialised infrastructure.”

Jaeshin Lee, Vice President and Head of AI Business Development at SK Telecom, adds, “By providing a full package that combines inference-optimised infrastructure with our proprietary foundation model, A.X K1, we will further strengthen our competitiveness in the AI data centre market.”

Eddie Ramirez, Vice President of the Cloud AI Business Unit at Arm, notes, “As AI infrastructure expands globally, CPUs play a critical role in coordinating workloads across accelerators, memory, and networking. Arm AGI CPU, built on Arm Neoverse CSS V3, was designed to deliver the performance and efficiency required for large-scale AI deployments. Together with Rebellions and SK Telecom, we’re enabling scalable infrastructure for sovereign AI and telecommunications markets.”

SambaNova, Intel unveil hybrid AI platform
SambaNova, a company specialising in AI hardware and software, and American multinational technology company Intel have announced a new hybrid-chip platform designed to address data centre capacity constraints linked to AI workloads. The architecture combines GPUs for prefill processing, Intel Xeon 6 processors for system control and workload execution, and SambaNova’s reconfigurable dataflow units (RDUs) for inference decoding. The platform is expected to be available in the second half of 2026 for enterprise, cloud, and sovereign AI deployments.

The design targets agent-based AI workloads, which require coordinated processing across multiple stages, including data input, model inference, and execution of external tools and applications.

Hybrid approach to AI infrastructure

The platform reflects a shift towards heterogeneous computing in data centres, where different processor types are used for specific tasks rather than relying solely on GPUs. In this model, GPUs handle the initial processing of prompts, while RDUs manage high-throughput inference tasks. Xeon 6 processors act as both the host system and execution layer, coordinating workloads, running code, and managing interactions with external systems.

Rodrigo Liang, CEO and co-founder of SambaNova Systems, explains, “Agentic AI is moving into production, and the winning pattern we’re seeing is GPUs to start the job, Intel Xeon 6 to run it, and SambaNova RDUs to finish it fast. Together with Intel, we’re giving customers a blueprint they can deploy in existing air-cooled data centres, with broad x86 coverage for the coding agents and tools they already use today.”

Kevork Kechichian, Executive Vice President and General Manager of the Data Center Group at Intel, adds, “The data centre software ecosystem is built on x86 and it runs on Xeon, providing a mature, proven foundation that developers, enterprises, and cloud providers rely on at scale. Workloads of the future will require a heterogeneous mix of computing, and this collaboration with SambaNova delivers a cost-efficient, high-performance inference architecture designed to meet customer needs at scale, powered by Xeon 6.”

The companies state that the approach is intended to support increasing demand for AI inference, particularly as agent-based systems move from testing into production environments. Additional industry participants highlighted the growing need for scalable infrastructure to support coding agents and similar workloads, which rely on CPUs for execution alongside accelerators for inference.

The announcement marks an expansion of the existing collaboration between SambaNova and Intel, with a focus on enabling large-scale AI deployment across data centre environments.
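The division of labour described above (GPUs for prompt prefill, RDUs for token decode, a host CPU coordinating the stages) can be pictured as a simple stage router on the host. The sketch below is purely illustrative: the backend names and dispatch logic are hypothetical assumptions for explanation, not SambaNova's or Intel's API.

```python
# Hypothetical sketch of stage-based dispatch in a heterogeneous inference
# pipeline: prompt prefill is routed to one device class and token decode
# to another, while host-side code coordinates the stages. Backend names
# are invented for illustration and do not reflect any real vendor API.
from dataclasses import dataclass

@dataclass
class Backend:
    name: str
    role: str  # pipeline stage this device class serves: "prefill" or "decode"

BACKENDS = [Backend("gpu-pool", "prefill"), Backend("rdu-pool", "decode")]

def route(stage: str) -> Backend:
    """Pick the backend registered for a given pipeline stage."""
    for backend in BACKENDS:
        if backend.role == stage:
            return backend
    raise ValueError(f"no backend registered for stage {stage!r}")

def run_inference(prompt: str) -> str:
    # Host-side orchestration: prefill on the GPU pool, decode on the RDU pool.
    prefill_on = route("prefill").name
    decode_on = route("decode").name
    return f"prefill:{prefill_on} -> decode:{decode_on}"

print(run_inference("hello"))  # prints "prefill:gpu-pool -> decode:rdu-pool"
```

The point of the pattern is that each stage has different hardware economics: prefill is compute-bound and suits GPUs, sustained token decode favours high-throughput accelerators, and general-purpose CPUs handle the surrounding tool and application logic.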


