Data Centre Software for Smarter Operations


Schneider joins OpenUSD alliance to advance digital twins
Global energy technology company Schneider Electric, alongside AVEVA and ETAP, has joined the Alliance for OpenUSD (AOUSD) to accelerate the development of interoperable digital twins and simulation-ready 3D assets for industrial environments. The three companies join existing contributors such as NVIDIA, Pixar, Adobe, and Autodesk. The announcement was made during Schneider Electric’s Innovation Summit North America in Las Vegas, which brought together more than 2,500 industry leaders to discuss the future of resilient and intelligent energy systems.

OpenUSD is an extensible framework designed to improve interoperability between software tools and data formats used to build virtual environments and industrial digital twins. By joining the alliance, the three companies aim to advance open standards that support industrial simulation, collaborative design, and large-scale AI infrastructure planning.

Supporting next-generation digital twins

The collaboration aligns the companies more closely with NVIDIA’s vision for real-time, physically accurate digital twins that can model buildings, data centres, factories, and emerging AI infrastructure. Many organisations now use NVIDIA Omniverse libraries to develop digital twin applications that optimise design, performance, and sustainability. By adopting OpenUSD as a shared foundation, Schneider Electric, AVEVA, and ETAP aim to support new capabilities across:

• SimReady asset development, enabling interoperable models of physical infrastructure such as power and cooling systems for use in Omniverse-based simulations.
• Digital twin collaboration, allowing integrated views of complex systems, including data centres, energy networks, and industrial facilities built on platforms such as EcoStruxure, AVEVA, and ETAP.
• AI infrastructure planning, using tools including the NVIDIA Omniverse DSX Blueprint to support the co-design of gigawatt-scale AI factories.
These capabilities are intended to support more accurate modelling of thermal behaviour, power distribution, airflow, and other operational variables within data centres and industrial sites.

Jim Simonelli, Senior Vice President and Chief Technology Officer for Data Centres at Schneider Electric, comments, “Joining the Alliance allows us to contribute to a shared digital language that empowers collaboration, simulation, and innovation across the AI ecosystem.”

Rev Lebaredian, Vice President of Omniverse and Simulation Technology at NVIDIA, notes, “To efficiently design and operate complex systems like AI factories, industries need a robust, simulation-ready foundation.

"Schneider Electric’s expertise in energy management, hardware, and software, combined with NVIDIA Omniverse libraries, will accelerate the creation of the AI factories and intelligent grids of the future.”

Expanding long-term collaboration

The three companies have an established partnership with NVIDIA across digital twin development and AI infrastructure design. They are co-developing reference architectures and integrated hardware and software approaches to support power, cooling, and energy management for next-generation AI factories.

At the recent GTC DC event, Schneider Electric was named as a power, cooling, and energy technology partner for the NVIDIA AI Factory Research Center, which is powered by the NVIDIA Vera Rubin platform. The facility serves as a foundation for the Omniverse DSX Blueprint and supports research in generative AI, scientific computing, and advanced manufacturing. In March 2025, ETAP by Schneider Electric released a new digital twin tool capable of accurately modelling the power requirements of AI factories.

For more from Schneider Electric, click here.

Red Hat adds support for OpenShift on NVIDIA BlueField DPUs
Red Hat, a US provider of open-source software, has announced support for running Red Hat OpenShift on NVIDIA BlueField data processing units (DPUs). The company says the development is intended to help organisations deploy AI workloads with improved security, networking, and storage performance.

According to Red Hat, modern AI applications increasingly compete with core infrastructure services for system resources, which can affect performance and security. The company states that running OpenShift with BlueField aims to separate AI workloads from infrastructure functions, such as networking and security, to improve operational efficiency and reduce system contention. It says the platform will support enhanced networking, more streamlined lifecycle management, and resource offloading to the DPU.

Workload isolation and resource efficiency

Red Hat states that by shifting networking services and infrastructure management tasks to the DPU, CPU resources can be freed up for AI applications. The company also highlights acceleration features for data-plane and storage-traffic processing, including support for NVMe over Fabrics and optimised Open vSwitch data paths. Additional features include distributed routing for multi-tenant environments and security controls designed to reduce attack surfaces by isolating workloads away from infrastructure services.

Support for BlueField on OpenShift will be offered initially as a technical preview, with broader integration planned. Red Hat notes that ongoing work with NVIDIA aims to add further support for the NVIDIA DOCA software framework and third-party network functions. The companies also expect future capability enhancements with the next generation of BlueField hardware and integration with NVIDIA’s Spectrum-X Ethernet networking for distributed AI environments.
Ryan King, Vice President, AI and Infrastructure, Partner Ecosystem Success at Red Hat, comments, “As the adoption of generative and agentic AI grows, the demand for advanced security and performance in data centres has never been higher, particularly with the proliferation of AI workloads.

"Our collaboration with NVIDIA to enable Red Hat OpenShift support for NVIDIA BlueField DPUs provides customers with a more reliable, secure, and high-performance platform to address this challenge and maximise their hardware investment.”

Justin Boitano, Vice President, Enterprise Products at NVIDIA, adds, “Data-intensive AI reasoning workloads demand a new era of secure and efficient infrastructure.

"The Red Hat OpenShift integration of NVIDIA BlueField builds on our longstanding work to empower organisations to achieve unprecedented scale and performance across their AI infrastructure.”

For more from Red Hat, click here.

Cadence adds NVIDIA DGX SuperPOD to digital twin platform
Cadence, a developer of electronic design automation software, has expanded its Reality Digital Twin Platform library with a digital model of NVIDIA’s DGX SuperPOD with DGX GB200 systems. The addition is aimed at supporting data centre designers and operators in planning and managing facilities for large-scale AI workloads.

The Reality Digital Twin Platform enables users to create detailed digital replicas of data centres, simulating power, cooling, space, and performance requirements before physical deployment. By adding the NVIDIA DGX SuperPOD, Cadence says, engineers can model AI factory environments with greater accuracy, supporting faster deployment and improved operational efficiency.

Digital twins for AI data centres

Michael Jackson, Senior Vice President of System Design and Analysis at Cadence, says, “Rapidly scaling AI requires confidence that you can meet your design requirements with the target equipment and utilities.

"With the addition of a digital model of NVIDIA’s DGX SuperPOD with DGX GB200 systems to our Cadence Reality Digital Twin Platform library, designers can model behaviourally accurate simulations of some of the most powerful accelerated systems in the world, reducing design time and improving decision-making accuracy for mission-critical projects.”

Tim Costa, General Manager of Industrial and Computational Engineering at NVIDIA, adds, “Creating the digital twin of our DGX SuperPOD with DGX GB200 systems is an important step in enabling the ecosystem to accelerate AI factory buildouts.

"This step in our ongoing collaboration with Cadence fills a crucial need as the pace of innovation increases and time-to-service shrinks.”

The Cadence Reality Digital Twin Platform allows engineers to drag and drop vendor-provided models into simulations to design and test data centres. It can also be used to evaluate upgrade paths, failure scenarios, and long-term performance. The library currently contains more than 14,000 items from over 750 vendors.
Industry engagement

The addition of the NVIDIA model is part of Cadence’s ongoing collaboration with NVIDIA, following earlier support for the NVIDIA Omniverse blueprint for AI factory design. Cadence will highlight the expanded platform at the AI Infra Summit in Santa Clara from 9-11 September, where company experts will take part in keynotes, panels, and talks on chip efficiency and simulation-driven data centre operations.

For more from Cadence, click here.

Microchip enhances TrustMANAGER platform
International cybersecurity regulations continue to adapt to the evolving threat landscape. One major focus is outdated firmware in IoT devices, which can present significant security vulnerabilities.

To address these challenges, Microchip Technology, an American semiconductor manufacturer, is enhancing its TrustMANAGER platform to include secure code signing and Firmware Over-the-Air (FOTA) update delivery, as well as remote management of firmware images, cryptographic keys, and digital certificates. These advancements support compliance with the European Cyber Resilience Act (CRA), which mandates strong cybersecurity measures for digital products sold in the European Union (EU). Aligned with standards such as the European Telecommunications Standards Institute (ETSI) EN 303 645 baseline cybersecurity requirements for consumer IoT and the International Society of Automation (ISA)/International Electrotechnical Commission (IEC) 62443 standards for the security of industrial automation and control systems, the CRA sets a precedent that is expected to influence regulations worldwide.

Microchip’s ECC608 TrustMANAGER leverages Kudelski IoT’s keySTREAM Software as a Service (SaaS) to deliver a secure authentication Integrated Circuit (IC) designed to store, protect, and manage cryptographic keys and certificates. With the addition of FOTA services, the platform helps customers securely deploy real-time firmware updates to remotely patch vulnerabilities and comply with cybersecurity regulations.
“As evolving cybersecurity regulations require connected device manufacturers to prioritise the implementation of mechanisms for secure firmware updates, lifecycle credential management, and effective fleet deployment, the addition of FOTA services to Microchip’s TrustMANAGER platform offers a scalable solution that removes the need for manual, expensive, static infrastructure security updates," says Nuri Dagdeviren, Corporate Vice President of Microchip’s Security Products Business Unit. "FOTA updates allow customers to save resources while fulfilling compliance requirements and helping to future-proof their products against emerging threats and evolving regulations."

Further enhancing cybersecurity compliance, the Microchip WINCS02PC Wi-Fi network controller module used in the TrustMANAGER development kit is now certified against the Radio Equipment Directive (RED) for secure and reliable cloud connectivity. RED establishes strict standards for radio devices in the EU, focusing on network security, data protection, and fraud prevention. Beginning 1 August 2025, all wireless devices sold in the EU market must adhere to RED cybersecurity provisions.

By incorporating these additional services, TrustMANAGER - governed by keySTREAM - tackles key challenges in IoT security, regulatory compliance, device lifecycle management, and fleet management. The solution is designed to serve IoT device manufacturers and industrial automation providers.

For more from Microchip, click here.

Hitachi Vantara launches Virtual Storage Platform 360
Hitachi Vantara, the data storage, infrastructure, and hybrid cloud management subsidiary of Hitachi, today announced the launch of Virtual Storage Platform 360 (VSP 360), a unified management software solution designed to help customers simplify data infrastructure management, improve decision-making, and streamline the delivery of data services. With support for block, file, object, and software-defined storage, VSP 360 consolidates multiple management tools and aims to enable IT teams, including those with limited storage expertise, to more efficiently control hybrid cloud deployments, gain predictive AIOps insights, and simplify data lifecycle governance.

Organisations today are struggling to manage sprawling data environments spread across disparate storage systems, fragmented data silos, and complex application workloads, all while grappling with overextended IT teams and rising demands for compliance and AI readiness. A recent survey showed that AI has led to a dramatic increase in the amount of data storage businesses require, with the amount of data expected to increase by 122% by 2026. The survey also revealed that many IT leaders are being forced to implement AI before their data infrastructure is ready to handle it, with many embarking on a journey of experimentation, hoping to find additional ways to recover some of the cost of their investments.

VSP 360 seeks to address these obstacles by integrating data management tools across enterprise storage to monitor key performance indicators, including storage capacity utilisation and overall system health, helping to deliver optimal performance and efficient resource management. It also intends to improve end-to-end visibility, leveraging AIOps observability to break down data silos, as well as streamlining the deployment of VSP One data services.

“VSP 360 represents a bold step forward in unifying the way enterprises manage their data,” says Octavian Tanase, Chief Product Officer, Hitachi Vantara.
“It’s not just a new management tool - it’s a strategic approach to modern data infrastructure that gives IT teams complete command over their data, wherever it resides. With built-in AI and automation, and by making it available via SaaS, private deployment, or your mobile phone, we're empowering our customers to make faster, smarter decisions and eliminate the traditional silos that slow innovation.”

“VSP 360 gives our customers the unified visibility and control they’ve been asking for,” claims Dan Pittet, Senior Solutions Architect, Stoneworks Technologies. “The ability to manage block, file, object, and software-defined storage from a single AI-driven platform helps streamline operations and reduce complexity across hybrid environments. It’s especially valuable for IT teams with limited resources who need to respond quickly to evolving data demands without compromising performance or governance.”

"VSP 360 hits the mark for what modern enterprises need," states Ashish Nadkarni, Group Vice President and General Manager, Worldwide Infrastructure Research, IDC. "It goes beyond monitoring to deliver true intelligence across the storage lifecycle. The solution's robust data resiliency helps businesses maintain continuous operations and protect their critical assets, even in the face of unexpected disruptions. By integrating advanced analytics, automation, and policy enforcement, Hitachi Vantara is giving customers the agility and resilience needed to thrive in a data-driven economy.”

For more from Hitachi, click here.

R&M introduces latest version of its DCIM software
R&M, a globally active developer and provider of high-end infrastructure solutions for data and communications networks, is now offering Release 5 of its DCIM software, inteliPhy net. With Release 5, inteliPhy net becomes a digital architect for data centres. Computer rooms can be flexibly designed according to the demand, application, size, and category of the data centre. Planners can position infrastructure modules intuitively on an arbitrary floor plan using drag-and-drop, and inteliPhy net enables detailed 2D and 3D visualisations that are also suitable for project presentations.

With inteliPhy net, it is possible to insert, structure, and move racks, rack rows, and enclosures with just a few clicks, R&M tells us. Patch panels, PDUs, cable ducts, and pre-terminated trunk cables can be added, adapted, and connected virtually just as quickly. The software finds optimal routes for the trunk cables and calculates the cable lengths.

inteliPhy net contains an extensive library of templates for the entire infrastructure, such as racks, patch panels, cables, and power supplies. Models for active devices with data on weight, size, ports, slots, feed connections, performance, and consumption are also included. Users can configure metamodels and save them for future planning. During planning, inteliPhy net generates an inventory list that can be used directly for cost calculations and orders. The planning process results in a database with a digital twin of the computer room. It serves as the basis for the entire Data Centre Infrastructure Management (DCIM), which is the main function of inteliPhy net.

R&M also now offers ready-made KPI reports with zero-touch configuration for inteliPhy net. Users can link the reports with environmental, monitoring, infrastructure, and operating data to monitor the efficiency of the data centre.
Customisable dashboards and automated KPI analyses help them to regulate power consumption and temperatures more precisely, and to utilise resources more efficiently.

Another new feature is the interaction of inteliPhy net with R&M's packaging-saving assembly service. Customers can, for example, configure Netscale 48 patch panels individually with inteliPhy net. R&M assembles the patch panels completely ready for installation and delivers them in a single package. The concept saves a considerable amount of individual packaging for small parts, reducing raw material consumption, waste, and the time required for installation.

For more from R&M, click here.

New partnership will integrate digital twin software with DCIM platform
Schneider Electric has partnered with DC Smarter, a German IT services provider specialising in augmented reality solutions, to integrate its digital twin software, DC Vision, with Schneider Electric’s Data Centre Infrastructure Management (DCIM) platform, EcoStruxure IT Advisor.

Available immediately for purchase via Schneider Electric, DC Vision utilises data from EcoStruxure IT and combines it with augmented reality (AR) to create a comprehensive software solution that helps optimise infrastructure performance and increase operational resilience.

By integrating these technologies and consolidating data from DCIM, IT Service Management (ITSM), and Building Management System (BMS) technologies within the digital twin, data centre operators can gain granular insights into the health and status of their infrastructure and make informed decisions to increase performance, sustainability, and reliability. Further, by removing conventional IT silos, the digital twin allows owners, operators, and IT managers to visualise the data centre environment using real-time information, creating a comprehensive digital replica that leverages all relevant performance data from the data centre.

DC Vision, integrated with Schneider Electric EcoStruxure IT Advisor, also provides key opportunities for seamless collaboration between on-site workers and remote technicians. Its Remote Assist feature, for example, makes it possible for experts and colleagues to interact with the platform in real time and share a common AR view of their critical environment, supporting collective problem-solving and the processing of complex tasks, regardless of location.

Schneider Electric's digital twin technology is a core element of DC Vision’s advanced capabilities. With Schneider Electric's EcoStruxure IT Advisor solution, a virtual image of the physical data centre can be created, including the necessary database.
The virtual 3D representation allows precise monitoring and analysis of the entire IT infrastructure. Additionally, the integration of AR software provides IT personnel with easy-to-understand real-time status information and actionable, contextual instructions, which help to quickly identify faults and properly handle complex service tasks.

For more from Schneider Electric, click here.

VictoriaMetrics and CMS team up to monitor the universe
VictoriaMetrics has announced its role assisting the monitoring tasks of the Compact Muon Solenoid (CMS) experiment at CERN, the European laboratory for particle physics.

Tailor-made monitoring solutions

The CMS experiment is one of four particle physics detectors built at the Large Hadron Collider (LHC). Located deep underground at the border of Switzerland and France, the project is currently focused on experiments investigating standard model physics, extra dimensions, and dark matter. The computing infrastructure needed to deal with the multi-petabyte data sets produced by CMS requires best-in-class systems to monitor workload and data management, data transfers, and the submission of production requests.

The CMS experiment has long relied on scalable, open-source solutions to satisfy its real-time and historical monitoring needs. However, after encountering storage and scalability issues with long-term monitoring solutions such as Prometheus and InfluxDB, the CMS monitoring team began the search for alternatives.

Edging out existing technology

The CMS monitoring team engaged VictoriaMetrics following a post on Medium by CTO and Co-Founder Aliaksandr Valialkin, which benchmarked VictoriaMetrics against other popular monitoring systems, and was won over by the detail on display.

"We were searching for alternative solutions following performance issues with Prometheus and InfluxDB. VictoriaMetrics' walkthrough of use cases and concise detail gave us excellent insight into how they could help us. The solution's backwards compatibility with Prometheus made implementation into the CMS monitoring cluster as smooth and seamless as possible," says V. Kuznetsov from Cornell University, a member of the CMS collaboration.

Initially implementing VictoriaMetrics as backend storage for Prometheus, the CMS monitoring team progressed to using the solution as front-end storage to replace InfluxDB and Prometheus. This had the added impact of removing cardinality issues with InfluxDB.
Since installing VictoriaMetrics, the CMS monitoring team has had zero issues with cardinality or with using the software on the operational side. The team gained added confidence in the open-source flexibility of VictoriaMetrics after seamlessly implementing new features for vmalert, the solution's alerting component.

"Working with CMS to monitor the experiment computing infrastructure is a great honour for the team here. The number of use cases for monitoring and observability is growing exponentially, and seeing our tech applied to cutting-edge science is testament to how critical monitoring has become. Our open-source, community-driven model is and will be at the core of our offering, granting us the flexibility to serve projects as complex as CMS infrastructure in the future," says Roman Khavronenko, Co-Founder of VictoriaMetrics.
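For readers unfamiliar with the "backend storage for Prometheus" pattern described above, the following is a minimal configuration sketch. The hostname is illustrative; the `remote_write` mechanism is standard Prometheus, and single-node VictoriaMetrics documents accepting it on port 8428 at `/api/v1/write`.

```yaml
# prometheus.yml - minimal sketch (hostname is illustrative)
global:
  scrape_interval: 15s

# Forward all scraped samples to VictoriaMetrics as remote storage.
remote_write:
  - url: http://victoriametrics.example.internal:8428/api/v1/write

scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ['localhost:9090']
```

Because VictoriaMetrics also implements the Prometheus querying API, dashboards that previously queried Prometheus can typically be repointed at the VictoriaMetrics endpoint with little change, which is what makes the migration path from backend storage to full replacement practical.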

Uptime Institute finds downtime consequences worsening as efforts to curb outage frequency fall short
The digital infrastructure sector is struggling to achieve a measurable reduction in outage rates and severity, and the financial consequences and overall disruption from outages are steadily increasing, according to Uptime Institute, which today released the findings of its 2022 annual Outage Analysis report.

“Digital infrastructure operators are still struggling to meet the high standards that customers expect and service level agreements demand – despite improving technologies and the industry’s strong investment in resiliency and downtime prevention,” says Andy Lawrence, Founding Member and Executive Director, Uptime Institute Intelligence.

“The lack of improvement in overall outage rates is partly the result of the immensity of recent investment in digital infrastructure, and all the associated complexity that operators face as they transition to hybrid, distributed architectures,” comments Lawrence. “In time, both the technology and operational practices will improve, but at present, outages remain a top concern for customers, investors, and regulators. Operators will be best able to meet the challenge with rigorous staff training and operational procedures to mitigate the human error behind many of these failures.”

Uptime’s annual outage analysis is unique in the industry, and draws on multiple surveys, information supplied by Uptime Institute members and partners, and its database of publicly reported outages.

Key findings include:

• High outage rates haven’t changed significantly. One in five organisations report experiencing a ‘serious’ or ‘severe’ outage (involving significant financial losses, reputational damage, compliance breaches, and, in some severe cases, loss of life) in the past three years, marking a slight upward trend in the prevalence of major outages. According to Uptime’s 2022 Data Centre Resiliency Survey, 80% of data centre managers and operators have experienced some type of outage in the past three years – a marginal increase over the norm, which has fluctuated between 70% and 80%.

• The proportion of outages costing over $100,000 has soared in recent years. Over 60% of failures result in at least $100,000 in total losses, up substantially from 39% in 2019. The share of outages that cost upwards of $1 million increased from 11% to 15% over the same period.

• Power-related problems continue to dog data centre operators. Power-related outages account for 43% of outages that are classified as significant (causing downtime and financial loss). The single biggest cause of power incidents is uninterruptible power supply (UPS) failures.

• Networking issues are causing a large portion of IT outages. According to Uptime’s 2022 Data Centre Resiliency Survey, networking-related problems have been the single biggest cause of all IT service downtime incidents – regardless of severity – over the past three years. Outages attributed to software, network, and systems issues are on the rise due to complexities from the increasing use of cloud technologies, software-defined architectures, and hybrid, distributed architectures.

• The overwhelming majority of human error-related outages involve ignored or inadequate procedures. Nearly 40% of organisations have suffered a major outage caused by human error over the past three years. 85% of these incidents stem from staff failing to follow procedures or from flaws in the processes and procedures themselves.

• External IT providers cause most major public outages. The more workloads that are outsourced to external providers, the more these operators account for high-profile, public outages. Third-party, commercial IT operators (including cloud, hosting, colocation, and telecommunications providers) account for 63% of all publicly reported outages that Uptime has tracked since 2016. In 2021, commercial operators caused 70% of all outages.

• Prolonged downtime is becoming more common in publicly reported outages. The gap between the beginning of a major public outage and full recovery has stretched significantly over the last five years. Nearly 30% of these outages in 2021 lasted more than 24 hours, a disturbing increase from just 8% in 2017.

• Public outage trends suggest there will be at least 20 serious, high-profile IT outages worldwide each year. Of the 108 publicly reported outages in 2021, 27 were serious or severe. This ratio has been consistent since the Uptime Intelligence team began cataloguing major outages in 2016, indicating that roughly one-fourth of publicly recorded outages each year are likely to be serious or severe.
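The closing statistic can be sanity-checked with a couple of lines of arithmetic: the 27 serious or severe outages out of 108 publicly reported ones in 2021 work out to exactly one-fourth, matching the report's "roughly one-fourth" characterisation.

```python
# Sanity check of the ratio quoted in Uptime's 2022 findings.
# All figures come from the article; nothing here is new data.

publicly_reported_2021 = 108   # outages Uptime publicly recorded in 2021
serious_or_severe_2021 = 27    # of which were classed serious or severe

ratio = serious_or_severe_2021 / publicly_reported_2021
print(f"Serious/severe share in 2021: {ratio:.0%}")  # prints "Serious/severe share in 2021: 25%"
```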

Kao Data to host Speechmatics' HPC and AI supercomputer
Kao Data has announced it has secured a new customer contract with Speechmatics, a provider of highly accurate speech-to-text capabilities to global enterprise businesses. Speechmatics' new high-performance computing (HPC) deployment includes NVIDIA A100 GPUs, housed within an advanced Supermicro computing infrastructure, which will be hosted at Kao Data's Harlow campus. The supercomputer will allow Speechmatics to expand its GPU-accelerated neural network research and development, supporting increasing customer demand for its leading speech recognition technology.

Kao Data was chosen for its standing as a premier UK location for HPC and AI, in addition to its commitments to sustainability and energy efficiency. Speechmatics' high-density supercomputer will benefit from bespoke colocation and an SLA-backed PUE of 1.2, and will be powered by 100% renewable energy.

Recognised in both 2020 and 2021 by the FT1000 as one of Europe's fastest-growing companies, and named within NVIDIA's prestigious Inception Program, Speechmatics exploits deep learning technology to deliver highly accurate and ultra-fast speech-to-text technology. The company utilises NVIDIA's acclaimed GPU hardware in pursuit of its aim of understanding every voice, regardless of demographic, accent, or dialect.

Speechmatics' supercomputer will work in tandem with its hyperscale cloud deployments, providing a pioneering example of fluent, hybrid HPC in action. Central to this capability are Megaport's hyperscale connectivity solutions at Kao Data's Harlow campus, which provide seamless connectivity and on-ramps between the on-premises supercomputer and Speechmatics' instances within Amazon Web Services (AWS), Microsoft Azure, and Google Cloud.

"Powering modern machine learning research and development requires an advanced computing infrastructure which only becomes more demanding as you scale," says Will Williams, VP Machine Learning, Speechmatics.
"As we continue to develop our technological edge through the use of self-supervised learning in our models, it's crucial to ensure our data centre provider can scale with our demands, but in a sustainable way. Kao Data was the obvious choice for us as the best HPC data centre operator in the UK with the benefit of being powered by 100% renewable energy." "The opportunity to support Speechmatics in this crucial phase of the company's expansion is an exciting prospect for our organisation, and further underpins Kao Data's position as the UK's preeminent provider of sustainable data centres for HPC and AI," says Lee Myall, CEO, Kao Data. "We look forward to working closely with Speechmatics to help power and support their unique approach of applying neural networks to speech recognition."


