
Latest News


Data centre district heating project delivered at QMUL
Schneider Electric, a player in energy management and automation, and its EcoXpert Partner, Advanced Power Technology (APT), have delivered a data centre modernisation project at Queen Mary University of London (QMUL). Together, the companies have created a platform for heat recovery at the university's data centre, enabling waste heat from the facility to be fed into a campus-wide district heating network that provides heating and hot water for nearby buildings and student accommodation. The project reduces the campus' Scope 1 CO2 emissions, in line with Queen Mary's sustainability goals, and has also cut the university's energy bills. Furthermore, the new energy-efficient data centre has given the university increased resiliency and processing power for its on-premises, large-scale research and intensive computing applications, helping it to provision for future expansion.

Queen Mary University of London is ranked 94th in the world in the 2025-26 edition of the US News and World Report Best Global Universities rankings, and today has over 32,000 students from more than 170 nationalities and 5,700 staff, with nine Nobel Prize winners among its former staff and students. It says it is committed to conducting "world-leading research" and adheres to the principles of sustainable development across all areas of its operational and academic activities. Its vision is to create and oversee the evolution of the large-scale distributed computing infrastructure needed to maintain the UK's position as a world leader in particle physics. As such, the university is a participant in the Grid for Particle Physics (GridPP) project, a collaboration among particle physicists, computer scientists, and engineers to analyse data generated by high-energy physics experiments, such as those conducted at the world-famous Large Hadron Collider (LHC) at CERN in Switzerland.

The size, scale, and importance of this work means the university must operate and maintain a highly efficient, on-premises data centre, ensuring it meets the technical requirements of existing and future research, especially work requiring High Throughput Computing (HTC) applications. Prior to the modernisation project, Queen Mary's data centre was experiencing reliability, scalability, and availability issues which required manual, on-site interventions to fix. It was also becoming outdated, and its operations were at times impacted by a build-up of heat in its server racks caused by inefficient cooling systems. Future research computing could also have been hindered by the data centre's hosting limitations. The refresh was therefore vital to improve and stabilise day-to-day operations. In addition, the facility's proximity to the campus' district heating network presented an opportunity for a new solution to be designed and implemented to bring the data centre in line with the university's sustainability goals.

Schneider Electric's data centre, power, and cooling solutions were already installed across Queen Mary's estate, so when it came to the plans to upgrade its operations, the university sought help directly from Schneider Electric's partner ecosystem. Schneider Electric's long-standing EcoXpert Partner, Advanced Power Technology (APT), an independent supplier of critical power and cooling systems, was selected to help Queen Mary meet its modernisation and sustainability goals.
Key to the strategy was the integration of components including Schneider Electric's EcoStruxure Row Data Centre system. It also incorporated APC NetShelter Racks, APC NetBotz environmental monitoring equipment, InRow cooling, and EcoStruxure Data Centre Expert software. The new configuration provided by APT, according to Schneider Electric, delivered a more energy-efficient cooling solution and enabled the heat recovery to support the university's sustainability strategy – allowing Queen Mary to transfer waste heat and reuse it directly for heating and hot water across various buildings, including student accommodation, via a district heating system.

Professor Jonathan Hays, Queen Mary University of London, comments, “The support we've had from APT and Schneider Electric has been unparalleled. Both companies came together to help us develop an exciting and innovative project which would enable us to provision for the future. The biggest impact is that we were able to deliver on what we promised while improving our sustainability. The new data centre is more reliable and efficient than ever and, through the heat recovery, we have significantly reduced our spending on heating and hot water while gaining enhanced reputational benefits from taking a lead on sustainability within our data centre operations.”

“The project at Queen Mary demonstrates how digital infrastructure can be a catalyst for net zero, allowing today’s organisations to benefit from the power of advanced computing,” adds Mark Yeeles, Vice President, Secure Power division, Schneider Electric UK & Ireland. “By combining innovative engineering with sustainable data centre solutions, the university has developed an enhanced infrastructure platform that will meet its research computing requirements while supporting its sustainability strategy.”

“Schneider Electric’s EcoStruxure Data Centre solutions were essential to help Queen Mary bring together its power, cooling, racks, and management systems, and support the deployment of its high-density IT equipment needed for its research,” claims John Andrew, Technical Sales Manager, APT. “This approach also created a platform to support its sustainability objectives via heat reuse, while enabling the University to act proactively and preventatively to intercept and remediate potential future issues.”

For more from Schneider Electric, click here.

PoliCloud raises €7.5 million
PoliCloud, a provider and developer of high performance computing (HPC) cloud infrastructure, has announced its €7.5 million (£6.42 million) seed fundraise. The funding was led by Global Ventures, a VC firm in MENA, with participation from MI8 Limited, a Hong Kong multi-family office; OneRagtime, a Paris-based venture capital firm; Inria, France’s National Institute for Research in Digital Science and Technology; and other private investors. The proceeds will be used to hire the operating team and grow the business globally, with a focus on public entities in Europe.

PoliCloud says it is responding to demand following global cloud growth of around 20% annually. Accelerated demand for AI requires affordable and scalable computing power, and the market is ripe for a Europe-led solution to lessen dependence on the US cloud providers who currently dominate the $800 billion (£583.9 billion) market.

David Gurlé, Founder of PoliCloud, claims, “PoliCloud is meeting a critical market demand for sovereign cloud infrastructure that is not only secure and abundant, but also eco-responsible. Our unique edge computing capabilities deliver significant benefits to both public and private sector users.

“The time is right for a new, European solution that reduces reliance on US cloud providers and offers affordable, scalable computing power, especially as AI adoption accelerates. We are grateful to Global Ventures and all our investors for their support as we enter this exciting phase of expansion.”

Current cloud expansion suffers from high usage costs and dependence on hyperscalers - such as Google or Amazon - whose models use massive, centralised data facilities with high implementation costs and challenging environmental conditions. In this respect, PoliCloud claims it has the following competitive advantages:
● 'Unlimited and flexible computing power,' provided by federating with the grid. By y/e 2025, it says it will have more than 1,000 GPUs, and by y/e 2026 more than 20,000 GPUs;
● Computing resources are delivered to where they are needed and 'empower local communities;'
● Small footprint and energy needs;
● Rapid time to market, with flexibility and adaptability;
● Capex and Opex offset by sharing unused capacity; and
● 'More resilient, higher performance, and more scalable by design.'

PoliCloud’s operating model combines its hardware and infrastructure with Hivenet’s distributed storage and computing software. PoliCloud designs, builds, and operates its own computers and micro-data centres. It was launched in February 2025 at the World Artificial Intelligence Cannes Festival (WAICF) with support from the five cities of the Alpes-Maritimes.

Simon Sharp, Senior Partner of Global Ventures, comments, “Global Ventures is delighted to lead PoliCloud’s seed fundraise and work again with David and his talented management team, following their track record of successful delivery in Hivenet. [...] Their distributed data centres have multiple competitive advantages: delivering next-gen, sovereign computing resources where they are needed; with more resilience; faster performance; greater security; while being cheaper to build and maintain. The exponential growth in AI demand and the need for reliable, scalable computing power means the company’s future is a very bright one.”

Stephanie Hospital, Founder & CEO of OneRagtime, adds, “As an early investor and believer in David and Hivenet, [...] OneRagtime is excited to invest in PoliCloud.
The company is uniquely positioned to provide decentralised, unlimited computing power affordably, securely, and in an eco-responsible way – for which substantial demand exists.”

Bruno Sportisse, CEO of Inria, says, "Inria Participations is delighted to become an investor in PoliCloud as it is a logical extension of Inria's existing strategic partnership with Hivenet. Inria and PoliCloud share the same philosophy of a decentralised path to the cloud and for secure, distributed computing, but where resources can also be shared according to need. Achieving this goal is of strategic importance for France and its digital sovereignty."

Guillaume Dhamelincourt, Managing Director of MI8, concludes, “The opportunity to invest in PoliCloud was compelling for MI8 as the world embraces AI and rapidly adjusts its demand for computing power. The multiple use cases for PoliCloud - such as SMEs, but also public enterprises who want to stay mindful of their IT strategy's impact - make for an attractive market environment, and we look forward to PoliCloud’s future growth with great confidence.”

L2Tek launches Gigalight silicon photonics transceivers
L2Tek, a technical distributor of components for broadcast, professional video, and high-speed data networks, has announced the immediate availability of Gigalight’s silicon photonics (SiPh) transceiver portfolio for customers across the UK and Europe. The new offering includes 100G, 400G, and 800G QSFP modules engineered to meet the bandwidth and density requirements of AI workloads in hyperscale data centres and edge computing.

Silicon photonics technology enables the integration of optical and electronic components on a single chip, offering a compact footprint, reduced power consumption, and support for high data rates. Gigalight’s SiPh-based transceivers are designed for short-to-medium reach applications and leverage PAM4 modulation.

“Gigalight’s silicon photonics portfolio, now available exclusively through L2Tek to UK and European customers, delivers the bandwidth, density, and energy efficiency needed for AI data centres. These transceivers use advanced modulation and chip-level integration to meet the demands of modern interconnects while offering a compelling alternative to traditional solutions,” claims Mark Scott-South, Director, L2Tek.

The 100G DR1 transceiver, available in QSFP28 format, supports up to 500 metres over a single 1310nm wavelength. The 400G DR4 model, housed in a QSFP-DD form factor, delivers four 100G lanes over a 500-metre link, making it suitable for high-density leaf-spine and AI cluster deployments. New to the lineup is Gigalight’s 800G DR8 transceiver, based on a new silicon photonics platform and available in OSFP format. It supports eight lanes of 100G PAM4 and is engineered, the company says, for performance across a range of data centre conditions. The transceiver is compliant with IEEE 802.3 standards and would be suited to high-performance computing, cloud infrastructure, and data centre interconnects.

Compared to traditional transmitter technologies such as distributed modulated lasers (DMLs) and externally modulated lasers (EMLs), silicon photonics could offer cost-effective scalability at volume, while maintaining the bandwidth and integration benefits required by next-generation network infrastructure. While EMLs continue to serve long-haul and regional network requirements, SiPh modules are emerging as a strong alternative for AI/GPU clusters and cloud computing environments. For more from L2Tek, click here.
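As a rough illustration of how the lane counts and PAM4 modulation mentioned above combine into the headline module rates, the short sketch below tallies aggregate bandwidth and approximate per-lane symbol rate for each form factor. It is not taken from L2Tek or Gigalight documentation and uses nominal 100G lanes, ignoring FEC and encoding overhead (real DR lanes run slightly faster, around 53 GBd).

```python
# Illustrative only: nominal lane rates, no FEC/encoding overhead.
PAM4_BITS_PER_SYMBOL = 2  # PAM4 encodes 2 bits per symbol (4 amplitude levels)

modules = {
    "100G DR1 (QSFP28)":  {"lanes": 1, "gbps_per_lane": 100},
    "400G DR4 (QSFP-DD)": {"lanes": 4, "gbps_per_lane": 100},
    "800G DR8 (OSFP)":    {"lanes": 8, "gbps_per_lane": 100},
}

for name, m in modules.items():
    total_gbps = m["lanes"] * m["gbps_per_lane"]
    symbol_rate_gbd = m["gbps_per_lane"] / PAM4_BITS_PER_SYMBOL  # per-lane baud rate
    print(f"{name}: {m['lanes']} x {m['gbps_per_lane']}G = {total_gbps}G aggregate, "
          f"~{symbol_rate_gbd:.0f} GBd per lane (PAM4)")
```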

Reinforce cooling to avoid summer downtime, operators urged
Off the back of unseasonably high spring temperatures, data centre operators are being encouraged to prepare for the summer heat by working with specialist partners to supplement cooling during emergencies, maintenance, and upgrades. The callout comes from temporary power generation and temperature control company Aggreko, which has warned that the combination of rising temperatures and ageing infrastructure could significantly impact uptime on industrial, commercial, and retail sites across the UK.

Temperatures exceeding 25°C are now becoming increasingly common throughout the nation, placing older generations of equipment, which aren’t designed to operate in these ranges, at risk of overheating and subsequently failing. The chances of breakdowns are drastically raised if equipment hasn’t been properly maintained, with blocked condenser coils potentially forcing a system to overwork to the point of compressor failure. In the data centre sector, even a brief failure in cooling systems could lead to catastrophic consequences. Without adequate temperature control, overheating can lead to hardware damage, data loss, and service outages, resulting in severe financial penalties.

As temperatures this year have already reached over 29°C, Chris Smith, Head of Temperature Control for UK and Ireland at Aggreko, has called upon data centre operators to assess their cooling capacity to ensure that critical operations remain uninterrupted. He says, “If recent temperatures are anything to go by, then this summer is set to bring even more extreme conditions capable of driving equipment to the point of failure. If facilities rely on ageing HVAC systems to keep processes ticking, then the risk of breakdowns during heatwaves only increases.

“Working with a specialist in both HVAC and power can be the real difference maker. Doing so provides contractors with the opportunity to leverage specialist expertise and tailored solutions that address immediate cooling needs and safeguard operations against the risks posed by extreme temperatures.”

Aggreko claims that with a 'thorough understanding of the challenges of critical temperature applications,' its team of technical experts can help determine the temporary and supplementary cooling, heating, and dehumidification solutions required based on a project's location and temperature requirements. Its cooling provision spans industrial chillers ranging from 50kW to 1500kW, air conditioners in sizes from 50kW to 200kW, and cooling towers with single units from 2500kW or combined units for multi-megawatt projects. For more from Aggreko, click here.

Nasuni achieves AWS Energy & Utilities Competency status
Nasuni, a unified file data platform company, has announced that it has achieved Amazon Web Services (AWS) Energy & Utilities Competency status. This designation recognises that Nasuni has demonstrated expertise in helping customers leverage AWS cloud technology to transform complex systems and accelerate the transition to a sustainable energy and utilities future. To receive the designation, AWS Partners undergo a rigorous technical validation process, including a customer reference audit. The AWS Energy & Utilities Competency gives energy and utilities customers the ability to more easily select skilled partners to help accelerate their digital transformations.

"Our strategic collaboration with AWS is redefining how energy companies harness seismic data,” comments Michael Sotnick, SVP of Business & Corporate Development at Nasuni. “Together, we’re removing traditional infrastructure barriers and unlocking faster, smarter subsurface decisions. By integrating Nasuni’s global unified file data platform with the power of AWS solutions including Amazon Simple Storage Service (S3), Amazon Bedrock, and Amazon Q, we’re helping upstream operators accelerate time to first oil, boost capital efficiency, and prepare for the next era of data-driven exploration."

AWS says it is enabling scalable, flexible, and cost-effective solutions for organisations ranging from startups to global enterprises. To support the integration and deployment of these solutions, AWS established the AWS Competency Program to help customers identify AWS Partners with industry experience and expertise. By bringing together Nasuni’s cloud-native file data platform with Amazon S3 and other AWS services, the company claims energy customers could eliminate data silos, reduce interpretation cycle times, and unlock the value of seismic data for AI-driven exploration. For more from Nasuni, click here.

Bitrise first mobile DevOps platform to launch data centre in EU
Bitrise, a mobile DevOps platform, today announced plans to launch a data centre in the Netherlands in response to increased demand for data residency in the European Union (EU). The new data centre will be the first in the EU operated by a DevOps platform, aiming to provide businesses with a fully-hosted and managed solution to meet the stringent data security and compliance requirements of the region. Bitrise will invest $3 million (£2.2 million) in the project, supporting the anticipated 22% year-on-year growth in European data centre capacity in 2025 as the continent focuses on operational resilience and data sovereignty.

“In an era of geopolitical volatility and increasing regulatory complexity, mobile innovation in Europe demands sovereignty, speed, and security,” announces Barnabás Birmacher, CEO and Co-founder of Bitrise. “By launching the first EU-hosted DevOps platform, Bitrise is giving customers total control over their data, ensuring compliance and empowering them to scale development faster and more securely.”

This investment in an Amsterdam-based data centre marks a step forward in enhancing support for EU customers. By replicating the data centre model used in the US, Bitrise intends to deploy the same high-performance Apple M4 and Linux-based infrastructure in Europe, allowing businesses to choose their data residency and meet risk and compliance requirements. This expansion is a direct response to the growing demand from EU-based companies and global brands operating in the region. By strengthening data security and sovereignty, customers should have access to the tools they need to scale development securely and stay competitive in a rapidly changing market.

“With data sovereignty becoming a critical priority for European businesses, Bitrise’s move to launch an EU-based data centre couldn’t be more timely," comments Reza Malekzadeh, General Partner at Partech and Bitrise board member. "Bitrise is setting a new standard for DevOps in Europe by giving companies the ability to meet stringent regulatory requirements without compromising on speed or innovation.”

Recent regulatory changes and international data transfer challenges have created a complex environment for companies operating across borders. The data centre market in Europe is estimated to grow by $291.7 billion (£214.3 billion) from 2024 to 2028, driven by demand for local data processing and storage solutions. European companies in security-sensitive and regulated industries often rely on cloud providers in the US or spend millions to build their own local infrastructure. This has created a major gap in the market for compliant, hosted solutions.

“We recognise the critical need for sovereign hosting solutions for mobile CI/CD infrastructure in the EU,” Barnabás says. “This move not only strengthens our presence in Europe, but underscores our commitment to solving the complex challenges our partners face, allowing them to innovate and scale without compromise.”

Bitrise’s Amsterdam data centre will, according to the company, emulate Bitrise’s existing infrastructure model, providing:
• Access to the fastest Apple Silicon and Linux machines for iOS and Android.
• Advanced physical and network security measures.
• Full compliance with EU data protection standards.
• High-speed connectivity to ensure rapid build times.

The data centre will support all Bitrise products and services, aiming to allow customers to build, test, and automate their applications without source code ever leaving the EU.
In contrast to the majority of DevOps providers with US-only hosting capabilities, Bitrise’s expansion, the company claims, creates a DevOps platform that caters to data residency and digital operational resilience requirements. “By filling this gap in the market, we’re addressing a critical need for businesses throughout the EU,” Barnabás continues. “Our ability to quickly deploy and scale infrastructure based on our successful US model allows us to move fast and establish a strong presence in this underserved market.”

Chemists create molecular magnet, boosting data storage by 100x
Scientists at The University of Manchester have designed a molecule that can remember magnetic information at the highest temperature ever recorded for this kind of material. In a boon for the future of data storage technologies, the researchers have made a new single-molecule magnet that retains its magnetic memory up to 100 Kelvin (-173°C) – around the temperature of the moon at night. The finding, published in the journal Nature, is a significant advance on the previous record of 80 Kelvin (-193°C).

While still a long way from working in a standard freezer, or at room temperature, data storage at 100 Kelvin could be feasible in huge data centres, such as those used by Google. If perfected, these single-molecule magnets could pack vast amounts of information into incredibly small spaces – possibly more than three terabytes of data per square centimetre. That’s around half a million TikTok videos squeezed into a hard drive the size of a postage stamp. The research was led by The University of Manchester, with computational modelling led by the Australian National University (ANU).

David Mills, Professor of Inorganic Chemistry at The University of Manchester, comments, “This research showcases the power of chemists to deliberately design and build molecules with targeted properties. The results are an exciting prospect for the use of single-molecule magnets in data storage media that is 100 times more dense than the absolute limit of current technologies.

“Although the new magnet still needs cooling far below room temperature, it is now well above the temperature of liquid nitrogen (77 Kelvin), which is a readily available coolant. So, while we won’t be seeing this type of data storage in our mobile phones for a while, it does make storing information in huge data centres more feasible.”

Magnetic materials have long played an important role in data storage technologies. Currently, hard drives store data by magnetising tiny regions made up of many atoms all working together to retain memory. Single-molecule magnets can store information individually and don’t need help from any neighbouring atoms to retain their memory, offering the potential for incredibly high data density. Until now, however, the challenge has always been the incredibly cold temperatures needed for them to function.

The key to the new magnet’s success is its unique structure, with the element dysprosium located between two nitrogen atoms. These three atoms are arranged almost in a straight line – a configuration predicted to boost magnetic performance, but now realised for the first time. Usually, when dysprosium is bonded to only two nitrogen atoms, it tends to form molecules with more bent or irregular shapes. In the new molecule, the researchers added a chemical group called an alkene that acts like a molecular pin, binding to dysprosium to hold the structure in place.

The team at the Australian National University developed a new theoretical model to simulate the molecule’s magnetic behaviour, allowing them to explain why this particular molecular magnet performs so well compared to previous designs. Now, the researchers will use these results as a blueprint to guide the design of even better molecular magnets.
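For readers who want to sanity-check the storage comparison above, here is a back-of-the-envelope calculation. The areal density comes from the article; the postage-stamp area and average video size are illustrative assumptions, not figures from the research.

```python
# Rough check of "half a million videos on a postage stamp" (assumed inputs marked).
AREAL_DENSITY_TB_PER_CM2 = 3   # "more than three terabytes per square centimetre" (article)
STAMP_AREA_CM2 = 5             # assumed area of a typical postage stamp
AVG_VIDEO_MB = 30              # assumed size of a short-form video

capacity_tb = AREAL_DENSITY_TB_PER_CM2 * STAMP_AREA_CM2
videos = capacity_tb * 1_000_000 / AVG_VIDEO_MB   # 1 TB = 1,000,000 MB (decimal units)
print(f"~{capacity_tb} TB on a stamp-sized area, roughly {videos:,.0f} videos")
# -> ~15 TB, roughly 500,000 videos, consistent with "around half a million"
```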

InfraPartners launches Advanced Research and Engineering
InfraPartners, a designer and builder of prefabricated AI data centres, today announced the launch of a new research function, InfraPartners Advanced Research and Engineering. Led by recent hire Bal Aujla, previously the Global Head of Innovation Labs at BlackRock, the function brings together a team of experts based in Europe and the US to act as a resource for AI innovation in the data centre industry. It seeks to foster industry collaboration to provide forward-looking insights and thought leadership.

AI demand is driving a surge in new global data centre builds, which are projected to triple by 2030, with AI-specific infrastructure expected to drive approximately 70% of this growth. Additionally, regulation, regionalisation, and geopolitical shifts are reshaping infrastructure needs. As a result, operators are looking at new ways to meet these changes with solutions that deliver scale, schedule certainty, and accelerated time-to-value while improving sustainability and avoiding technology obsolescence.

InfraPartners Advanced Research and Engineering intends to accelerate data centre innovation by identifying and focusing on the biggest opportunities and challenges of this next wave of AI-driven growth. With plans for gigawatts (GW) of data centre builds globally and projected investments reaching nearly $7 trillion (£5.15 trillion), the impact of new innovation will be significant. Through partnerships with industry experts, regulators, and disruptive newcomers, the InfraPartners Advanced Research and Engineering team aims to foster a community where ideas and research can be shared to grow data centre knowledge, capabilities, and opportunities. These efforts aim to advance the digital infrastructure sector as a whole.

“At InfraPartners, our new research function represents the deliberate convergence of expertise from across the AI and data centre ecosystem. We’re bringing together professionals with diverse perspectives and backgrounds in artificial intelligence, data centre architecture, power infrastructure, and capital allocation to address the evolving needs of AI and the significant value it can bring to the world,” says Bal Aujla, Director, Head of Advanced Research and Engineering at InfraPartners. “This integrated team approach enables us to look at opportunities and challenges end-to-end and across every layer of the stack. We’re no longer approaching digital infrastructure as a siloed engineering challenge. Instead, the new team will focus on the initiatives that have the most impact on transforming data centre architecture and creating business value.”

InfraPartners Advanced Research and Engineering says it has developed a new design philosophy that prioritises flexibility, upgradeability, and rapid refresh cycles. Called the 'Upgradeable Data Center,' this design, it claims, future-proofs data centre investments and enables greater resilience and sustainability in a fast-changing digital landscape.

“The Upgradeable Data Center reflects the fact that data centres must now be built to evolve. In a world where GPU generations shift every 12–18 months and designs change significantly each time, it is no longer viable to build static infrastructure with decades-long refresh cycles. Our design approach enables operators to deploy the latest GPUs and upgrade data centre infrastructure in a seamless way,” notes Harqs Singh, Chief Technology Officer at InfraPartners.
In its first white paper, Data Centers Transforming at the pace of Technology change, the team explores the rapid growth of AI workloads and its impact on digital infrastructure, including the GPU technology roadmap, increasing rack densities, and the implications for the modern data centre. It highlights the economic risks and commercial opportunities emerging from these trends and introduces how the Upgradeable Data Center is seeking to enable new data centres to transform at the pace of technology change. InfraPartners' model is to build 80% of the data centre offsite and 20% onsite, helping address key industry challenges like skilled labour shortages and power constraints, whilst aligning infrastructure investment with business agility and long-term growth.

LSC completes new dark fibre route in Kansas City
Light Source Communications (LSC), an owner-operator of networks serving enterprises throughout the US, has announced it has completed work on a new dark fibre network in Kansas City, Missouri, USA, aiming to deliver new opportunities to the region’s tech-rich ecosystem at a time when artificial intelligence (AI) and other high-performance computing (HPC) technologies are driving demand for greater connectivity. The 35-mile metro ring already has a major hyperscaler as the anchor tenant, with more on the way, as well as connections to four data centres so far. The Kansas City route is the first of LSC’s four new network builds to be completed this year, with projects in Las Vegas, Phoenix, and Tulsa also on track to be finished in 2025. “We’re thrilled to raise a toast to this exciting milestone,” says Debra Freitas, CEO of LSC. “Expanding into the Kansas City market is a key step in our strategic growth across high-demand US regions. As AI and HPC continue to drive unprecedented connectivity needs, we remain committed to delivering high-capacity, low-latency solutions to organisations of all sizes. As a carrier-neutral, customer-agnostic provider, LSC is proud to support the evolving demands of today’s digital economy.” Kansas City is the third-fastest-growing tech market in the US and has emerged as a hub for data centre projects. The area’s infrastructure, existing tech sector, and trained workforce make it a prime location for LSC’s dark fibre network. The route will be entirely underground with a high fibre count and conduit system. The Kansas City project follows a similar pattern to LSC’s other dark fibre builds underway. In Las Vegas, the company is building a 60-mile route that intends to bring hyper-connectivity to one of the country’s fastest-growing data centre markets. The Phoenix network will encompass 300+ miles and 15 rings. In Tulsa, LSC is adding 80 miles of new fibre to its existing 50-mile network. In addition to all of the networks being underground, all are anchored by a hyperscale tenant.

IBM, RIKEN unveil first Quantum System Two outside of the US
IBM, an American multinational technology corporation, and RIKEN, a national research laboratory in Japan, have unveiled the first IBM Quantum System Two ever to be deployed outside of the United States and beyond an IBM quantum data centre. The availability of this system also marks a milestone as the first quantum computer to be co-located with RIKEN's supercomputer, Fugaku — one of the most powerful classical systems on Earth. This effort is supported by the 'Development of Integrated Utilisation Technology for Quantum and Supercomputers' programme of the New Energy and Industrial Technology Development Organisation (NEDO), an organisation under the jurisdiction of Japan's Ministry of Economy, Trade, and Industry (METI), as part of the 'Project for Research and Development of Enhanced Infrastructures for Post 5G Information and Communications Systems.'

IBM Quantum System Two at RIKEN is powered by IBM's 156-qubit IBM Quantum Heron, one of the company's quantum processors. IBM Heron's quality, as measured by the two-qubit error rate across a 100-qubit layered circuit, is 3 x 10⁻³ — which, the company claims, is 10 times better than the previous-generation 127-qubit IBM Quantum Eagle. IBM Heron's speed, as measured by the CLOPS (circuit layer operations per second) metric, is 250,000, which would reflect another 10-times improvement over IBM Eagle in the past year. At a scale of 156 qubits, with these quality and speed metrics, IBM says Heron is the most performant quantum processor in the world. This latest Heron is capable of running quantum circuits that are beyond brute-force simulation on classical computers, and its connection to Fugaku will enable RIKEN teams to use quantum-centric supercomputing approaches to push forward research on advanced algorithms, such as fundamental chemistry problems.

The new IBM Quantum System Two is co-located with Fugaku within the RIKEN Center for Computational Science (R-CCS), one of Japan's high-performance computing (HPC) centres. The computers are linked through a high-speed network at the fundamental instruction level to form a proving ground for quantum-centric supercomputing. This low-level integration aims to allow RIKEN and IBM engineers to develop parallelised workloads, low-latency classical-quantum communication protocols, and advanced compilation passes and libraries. Because quantum and classical systems will ultimately offer different computational strengths, the goal is to allow each paradigm to perform the parts of an algorithm for which it is best suited.

This new development expands IBM's global fleet of quantum computers and was officially launched during a ribbon-cutting ceremony on 24 June 2025 in Kobe, Japan. The event featured opening remarks from RIKEN President Makoto Gonokami; Jay Gambetta, IBM Fellow and Vice President of IBM Quantum; Akio Yamaguchi, General Manager of IBM Japan; as well as local parliament members and representatives from the Kobe Prefecture and City, METI, NEDO, and MEXT.

"The future of computing is quantum-centric and with our partners at RIKEN we are taking a big step forward to make this vision a reality," claims Jay Gambetta, VP, IBM Quantum. "The new IBM Quantum System Two, powered by our latest Heron processor and connected to Fugaku, will allow scientists and engineers to push the limits of what is possible."
"By combining Fugaku and the IBM Quantum System Two, RIKEN aims to lead Japan into a new era of high-performance computing," says Mitsuhisa Sato, Division Director of the Quantum-HPC Hybrid Platform Division, RIKEN Center for Computational Science. "Our mission is to develop and demonstrate practical quantum-HPC hybrid workflows that can be explored by both the scientific community and industry. The connection of these two systems enables us to take critical steps toward realising this vision." The installation of IBM Quantum System Two at RIKEN is poised to expand previous efforts by RIKEN and IBM researchers as they seek to discover algorithms that offer quantum advantage: the point at which a quantum computer can solve a problem faster, cheaper, or more accurately than any known classical method. This includes work recently featured on the cover of Science Advances, based on sample-based quantum diagonalisation (SQD) techniques to accurately model the electronic structure of iron sulphides, a compound present widely in nature and organic systems. The ability to realistically model such a complex system is essential for many problems in chemistry, and was historically believed to require fault-tolerant quantum computers. SQD workflows are among the first demonstrations of how the near-term quantum computers of today can provide scientific value when integrated with powerful classical infrastructure. For more from IBM, click here.


