Data Centre Operations: Optimising Infrastructure for Performance and Reliability


Schneider Electric unveils AI DC reference designs
Schneider Electric, a French multinational specialising in energy management and industrial automation, has announced new data centre reference designs developed with NVIDIA, aimed at supporting AI-ready infrastructure and easing deployment for operators.

The designs include integrated power management and liquid cooling controls, with compatibility for NVIDIA Mission Control, the company’s AI factory orchestration software. They also support deployment of NVIDIA GB300 NVL72 racks with densities of up to 142kW per rack.

Integrated power and cooling management

The first reference design provides a framework for combining power management and liquid cooling systems, including Motivair technologies. It is designed to work with NVIDIA Mission Control to help manage cluster and workload operations. This design can also be used alongside Schneider Electric’s other data centre blueprints for NVIDIA Grace Blackwell systems, allowing operators to manage the power and liquid cooling requirements of accelerated computing clusters.

A second reference design sets out a framework for AI factories using NVIDIA GB300 NVL72 racks in a single data hall. It covers four technical areas: facility power, cooling, IT space, and lifecycle software, with versions available under both ANSI and IEC standards.

Deployment and performance focus

According to Schneider Electric, operators are facing significant challenges in deploying GPU-accelerated AI infrastructure at scale. Its designs are intended to speed up rollout and provide consistency across high-density deployments.

Jim Simonelli, Senior Vice President and Chief Technology Officer at Schneider Electric, says, “Schneider Electric is streamlining the process of designing, deploying, and operating advanced AI infrastructure with its new reference designs.
"Our latest reference designs, featuring integrated power management and liquid cooling controls, are future-ready, scalable, and co-engineered with NVIDIA for real-world applications - enabling data centre operators to keep pace with surging demand for AI.” Scott Wallace, Director of Data Centre Engineering at NVIDIA, adds, “We are entering a new era of accelerated computing, where integrated intelligence across power, cooling, and operations will redefine data centre architectures. "With its latest controls reference design, Schneider Electric connects critical infrastructure data with NVIDIA Mission Control, delivering a rigorously validated blueprint that enables AI factory digital twins and empowers operators to optimise advanced accelerated computing infrastructure.” Features of the controls reference design The controls system links operational technology and IT infrastructure using a plug-and-play approach based on the MQTT protocol. It is designed to provide: • Standardised publishing of power management and liquid cooling data for use by AI management software and enterprise systems• Management of redundancy across cooling and power distribution equipment, including coolant distribution units and remote power panels• Guidance on measuring AI rack power profiles, including peak power and quality monitoring Reference design for NVIDIA GB300 NVL72 The NVIDIA GB300 NVL72 reference design supports clusters of up to 142kW per rack. A data hall based on this design can accommodate three clusters powered by up to 1,152 GPUs, using liquid-to-liquid coolant distribution units and high-temperature chillers. The design incorporates Schneider Electric’s ETAP and EcoStruxure IT Design CFD models, enabling operators to create digital twins for testing and optimisation. It builds on earlier blueprints for the NVIDIA GB200 NVL72, reflecting Schneider Electric’s ongoing collaboration with NVIDIA. 
The company now offers nine AI reference designs covering a range of scenarios, from prefabricated modules and retrofits to purpose-built facilities for NVIDIA GB200 and GB300 NVL72 clusters. For more from Schneider Electric, click here.
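As a rough sanity check on the figures cited above (72 GPUs per GB300 NVL72 rack, up to 142kW per rack, and up to 1,152 GPUs per data hall), the implied rack count and worst-case IT load can be worked out directly, assuming purely for illustration that all GPUs sit in fully populated NVL72 racks:

```python
GPUS_PER_RACK = 72       # one NVIDIA GB300 NVL72 rack holds 72 GPUs
MAX_RACK_POWER_KW = 142  # peak rack density cited in the reference design
TOTAL_GPUS = 1152        # data hall maximum from the design

# Assumes every GPU sits in a fully populated NVL72 rack (an illustration,
# not Schneider Electric's stated hall layout).
racks = TOTAL_GPUS // GPUS_PER_RACK
it_load_kw = racks * MAX_RACK_POWER_KW  # worst-case IT load, cooling excluded

print(racks)       # 16 racks
print(it_load_kw)  # 2272 kW, roughly 2.3 MW of IT load
```

Note that this counts only IT load at peak rack density; facility-level power would be higher once cooling and distribution overheads are included.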

Duos deploys fifth edge data centre
Duos Technologies Group, through its operating subsidiary Duos Edge AI, a provider of adaptive, versatile, and streamlined edge data centre (EDC) solutions tailored to meet evolving needs in any environment, has announced its latest data centre deployment, moving towards its anticipated goal of 15 deployments by the year's end.

The latest EDC, deployed in partnership with Dumas Independent School District (ISD), places an on-premises facility in Dumas, Texas. The project marks another milestone in Duos Edge AI’s expansion into rural communities, providing low-latency compute and connectivity that directly support K-12 education and regional growth.

The Dumas ISD edge data centre will serve as a localised hub for real-time data processing, enabling advanced educational tools, stronger digital infrastructure, and improved connectivity for students and staff across the district.

“As Director of Information Technology for Dumas ISD, I am excited about our partnership with Duos Edge AI,” says Raymond Brady, Director of Information Technology at Dumas ISD. “This collaboration brings direct, on premise access to a cutting-edge data centre, an extraordinary opportunity for a rural community like Dumas. It will significantly strengthen the district’s technology capabilities and support our mission of achieving academic excellence through collaboration with students, parents, and the community. I look forward to working with Duos Edge AI as we continue to provide innovative technology for our students and staff, ensuring every student is prepared for success.”

“This partnership with Dumas ISD is a perfect example of how edge technology can create lasting impact in rural communities,” adds Doug Recker, newly appointed President of Duos Technologies Group and founder of its subsidiary, Duos Edge AI.
“By placing powerful computing infrastructure directly on campus, we’re helping schools like Dumas unlock real-time digital tools that drive student achievement, workforce readiness, and community growth.”

This deployment is part of Duos Edge AI’s broader 2025 plan to establish 15 modular EDCs nationwide, with a focus on underserved and high-growth markets. By locating advanced computing infrastructure closer to end users, Duos Edge AI aims to ensure reliable, secure, and scalable technological access for schools, healthcare facilities, and local communities.

For more from Duos Edge AI, click here.

Cadence adds NVIDIA DGX SuperPOD to digital twin platform
Cadence, a developer of electronic design automation software, has expanded its Reality Digital Twin Platform library with a digital model of NVIDIA’s DGX SuperPOD with DGX GB200 systems. The addition is aimed at supporting data centre designers and operators in planning and managing facilities for large-scale AI workloads.

The Reality Digital Twin Platform enables users to create detailed digital replicas of data centres, simulating power, cooling, space, and performance requirements before physical deployment. By adding the NVIDIA DGX SuperPOD, Cadence says engineers can model AI factory environments with greater accuracy, supporting faster deployment and improved operational efficiency.

Digital twins for AI data centres

Michael Jackson, Senior Vice President of System Design and Analysis at Cadence, says, “Rapidly scaling AI requires confidence that you can meet your design requirements with the target equipment and utilities.

"With the addition of a digital model of NVIDIA’s DGX SuperPOD with DGX GB200 systems to our Cadence Reality Digital Twin Platform library, designers can model behaviourally accurate simulations of some of the most powerful accelerated systems in the world, reducing design time and improving decision-making accuracy for mission-critical projects.”

Tim Costa, General Manager of Industrial and Computational Engineering at NVIDIA, adds, “Creating the digital twin of our DGX SuperPOD with DGX GB200 systems is an important step in enabling the ecosystem to accelerate AI factory buildouts.

"This step in our ongoing collaboration with Cadence fills a crucial need as the pace of innovation increases and time-to-service shrinks.”

The Cadence Reality Digital Twin Platform allows engineers to drag and drop vendor-provided models into simulations to design and test data centres. It can also be used to evaluate upgrade paths, failure scenarios, and long-term performance. The library currently contains more than 14,000 items from over 750 vendors.
Industry engagement

The addition of the NVIDIA model is part of Cadence’s ongoing collaboration with NVIDIA, following earlier support for the NVIDIA Omniverse blueprint for AI factory design.

Cadence will highlight the expanded platform at the AI Infra Summit in Santa Clara from 9-11 September, where company experts will take part in keynotes, panels, and talks on chip efficiency and simulation-driven data centre operations.

For more from Cadence, click here.

Nokia, Supermicro partner for AI-optimised DC networking
Finnish telecommunications company Nokia has entered into a partnership with Supermicro, a provider of application-optimised IT systems, to deliver integrated networking platforms designed for AI, high-performance computing (HPC), and cloud workloads. The collaboration combines Supermicro’s advanced switching hardware with Nokia’s data centre automation and network operating system for cloud providers, hyperscalers, enterprises, and communications service providers (CSPs).

Building networks for AI-era workloads

Data centres are under increasing pressure from the rising scale and intensity of AI and cloud applications. Meeting these requirements demands a shift in architecture that places networking at the core, with greater emphasis on performance, scalability, and automation.

The joint offering integrates Supermicro’s 800G Ethernet switching platforms with Nokia’s Service Router Linux (SR Linux) Network Operating System (NOS) and Event-Driven Automation (EDA). Together, these form an infrastructure platform that automates the entire network lifecycle - from initial design through deployment and ongoing operations. According to the companies, customers will benefit from "a pre-validated solution that shortens deployment timelines, reduces operational costs, and improves network efficiency."

Industry perspectives

Cenly Chen, Chief Growth Officer, Senior Vice President, and Managing Director at Supermicro, says, "This collaboration gives our customers more choice and flexibility in how they build their infrastructure, with the confidence that Nokia’s SR Linux and EDA are tightly integrated with our systems.

"It strengthens our ability to deliver networked compute architectures for high-performance workloads, while simplifying orchestration and automation with a unified platform."
Vach Kompella, Senior Vice President and General Manager of the IP Networks Division at Nokia, adds, "Partnering with Supermicro further validates Nokia SR Linux and Event-Driven Automation as the right software foundation for today’s data centre and IP networks.

"It also gives us significantly greater reach into the enterprise market through Supermicro’s extensive channels and direct sales, aligning with our strategy to expand in cloud, HPC, and AI-driven infrastructure."

For more from Nokia, click here.

Duos Edge AI awarded patent for modular DC entryway
The US Patent and Trademark Office has granted Duos Edge AI, a provider of edge data centre (EDC) systems, a patent for a new entryway design for modular data centres. The system aims to improve security and protect mission-critical equipment by combining a two-door access configuration with filtration to reduce the intrusion of dust, dirt, and moisture.

Duos Edge AI, a subsidiary of Duos Technologies Group, develops modular edge data centres intended to provide reliable, low-latency data access in areas where traditional infrastructure is limited. The patented entryway is designed to support these facilities in remote or rural locations by improving equipment resilience and service uptime.

Supporting communities

The company’s edge data centres are used by schools, hospitals, warehouses, carriers, and first responders. By enhancing environmental protection for infrastructure, the new design is expected to strengthen operational continuity in sectors that depend on constant access to digital services.

Doug Recker, President and founder of Duos Edge AI, says, "This patent demonstrates our commitment to delivering ruggedised, field-ready edge data centres that meet the unique needs of rural and underserved markets.

"By addressing critical challenges like environmental intrusion, we are setting a higher standard for reliability and long-term value for our customers."

The modular approach aligns with Duos Edge AI’s wider focus on delivering scalable, rapidly deployable facilities that move data processing closer to users. This can help reduce latency, support real-time applications, and expand digital access in regions with growing demand.

For more from Duos Edge AI, click here.

World's first AI internet exchange launched by DE-CIX
DE-CIX, an internet exchange (IX) operator, has announced the launch of what it calls the world’s first AI Internet Exchange (AI-IX), making its global network of internet exchanges “AI-ready.”

The company has completed the first phase of the rollout, connecting more than 50 AI-related networks – including providers of AI inference and GPU services, alongside a range of cloud platforms – to its ecosystem. DE-CIX says it now operates over 160 cloud on-ramps globally, supported by its proprietary multi-AI routing system. The new exchange capabilities are available across all DE-CIX locations worldwide, including its flagship site in Frankfurt.

Two-phase rollout

The second phase of deployment will see DE-CIX AI-IX made Ultra Ethernet-ready, designed to support geographically distributed AI training as workloads move away from large centralised data centres. Once complete, the operator says it will be the first to offer an exchange capable of supporting both AI inference and AI training.

AI inference – where trained models are applied in real-world use cases – depends on low-latency, high-security connections. According to DE-CIX CEO Ivo Ivanov, the growth of AI agents and AI-enabled devices is creating new demand for direct interconnection. “This is the core benefit of the DE-CIX AI-IX, which uses the unique DE-CIX AI router to enable seamless multi-agent inference for today’s complex use cases and tomorrow’s innovation in all industry segments,” Ivo says.

Ultra Ethernet and AI training

Phase two focuses on AI model training. DE-CIX CTO Thomas King says that Ultra Ethernet, a new networking protocol optimised for AI, will enable disaggregated computing across metropolitan areas. This could reduce reliance on centralised data centres and create more cost-effective, resilient private AI infrastructure.

“Until now, huge, centralised data centres have been needed to quickly process AI computing loads on parallel clusters,” Thomas explains.
“Ultra Ethernet is driving the trend towards disaggregated computing, enabling AI training to be carried out in a geographically distributed manner.”

DE-CIX hardware is already compatible with Ultra Ethernet, and the operator plans to introduce it once network equipment vendors make the necessary software features available.

For more from DE-CIX, click here.

Colocation's role in an AI world
In this article, Mark Pestridge, Executive Vice President & General Manager, Telehouse Europe, explores colocation’s dual role in supporting today’s transformation and tomorrow’s artificial intelligence (AI):

Behind the curtain of AI

The proliferation of AI solutions is a source of headlines that can easily obscure the critical role of data centres as ever more workloads run in the background. Recent research by S&P Global Market Intelligence, commissioned by Telehouse, shines a light on what is going on behind the scenes. It reveals that 76% of AI workloads are hosted in the cloud or in data centres, including colocation facilities.

AI, however, is not the sole focus of every organisation. Many are undertaking technical transformations that data centres must be capable of supporting, while also providing infrastructure with the flexibility to scale for future AI integrations. This question of flexibility is becoming central to how colocation providers serve a growing market.

The growth of AI and its processing needs

The research shows between 16% and 20% of overall workloads are AI-related. While hyperscalers lead with 35% of these workloads, third-party colocation data centres hold a 10% share. Within this space, several factors are shaping buying decisions, including GPU-as-a-Service (18%), access to specialised cooling (15%), and GPU-based compute installation (13%).

One key issue is becoming increasingly important as AI workloads grow: effective cooling. The GPUs used in AI processing are only as reliable as the cooling systems that support them, which demands a more advanced set of cooling technologies to cope with the heat generated by their higher power densities. This brings liquid cooling into play, as it requires less power than traditional air cooling, making it a more efficient, reliable method. Liquid cooling may be essential for AI growth, but it also offers more immediate gains for data centre operators.
By enabling higher rack densities – some reaching up to 100 kW – it can accommodate a wider range of power-hungry workloads within a smaller footprint. At the same time, its lower energy requirements help to reduce costs and improve PUE ratings. These efficiency gains contribute directly to the reduction of carbon footprints and boost attainment of environmental, social, and governance (ESG) goals.

Cloud on-ramps for AI lift-off

As well as meeting present-day digital demands from customers, data centre operators must consider cloud on-ramps for AI workloads. More than 90% of companies view on-ramps as either critical or quite important to AI/ML architecture, with the potential for multiple AI-related functions. This includes moving data in and out of the cloud for training and inferencing purposes, alongside the transportation of AI-related data for analysis.

Cloud on-ramps are also important for digital transformation strategies. As a direct, secure connection, they enable faster, more reliable access to cloud services from a data centre. Cloud exchanges with links to the leading cloud providers, such as Amazon Web Services and Microsoft Azure, enable low-latency connectivity between business networks and other cloud services via a private connection. By providing access to cloud on-ramps and cloud exchanges, colocation services can sit at the centre of distributed AI workloads that make use of public cloud services.

The value of consultancy and expertise

The research is also clear about the role of expertise in guiding data centre choices for both transformation strategies and longer-term AI plans. A fifth of respondents (20%) cite AI/ML consulting services as the main differentiator when considering a colocation provider. This is especially true for healthcare and life sciences companies, which value the availability of IT skills and expertise.
Remote and smart-hands services offered by leading data centre operators also play an important role in guiding organisations towards their goals. In colocation centres, smart-hands services make it easier to configure changes to existing equipment and to install new equipment.

Colocation has a central role

As AI workloads grow to power dramatic new use cases, colocation is expanding its critical support. It offers the infrastructure, connectivity, and expertise that organisations require to achieve their current targets and meet the requirements of an AI-shaped future. The combination of advanced cooling systems, direct cloud connectivity, and value-added services makes colocation fundamental to evolving workloads, whether focused on digital transformation or AI.

For more from Telehouse, click here.
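The PUE ratings discussed above have a simple definition: total facility power divided by IT equipment power, with 1.0 as the theoretical ideal. A minimal sketch, using hypothetical cooling-overhead figures to show why lower-power cooling improves the ratio:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power (ideal = 1.0)."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# Hypothetical 1 MW of IT load; overhead figures are illustrative only.
it_kw = 1000.0
air_cooled = pue(it_kw + 500.0, it_kw)     # 500 kW cooling/losses -> PUE 1.5
liquid_cooled = pue(it_kw + 200.0, it_kw)  # 200 kW cooling/losses -> PUE 1.2
```

The same IT load served with less cooling power yields a lower PUE, which is the efficiency effect the article attributes to liquid cooling.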

Macquarie, Dell bring AI factories to Australia
Australian data centre operator Macquarie Data Centres, part of Macquarie Technology Group, is collaborating with US multinational technology company Dell Technologies with the aim of providing a secure, sovereign home for AI workloads in Australia.

Macquarie Data Centres will host the Dell AI Factory with NVIDIA within its AI and cloud data centres. This approach seeks to power enterprise AI, private AI, and neo cloud projects while achieving high standards of data security within sovereign data centres.

This development will be particularly relevant for critical infrastructure providers and highly regulated sectors such as healthcare, finance, education, and research, which have strict regulatory compliance conditions relating to data storage and processing. The collaboration aims to give them the secure, compliant foundation needed to build, train, and deploy advanced AI applications in Australia, such as AI digital twins, agentic AI, and private LLMs.

Answering the Government’s call for sovereign AI

The Australian Government has linked the data centre sector to its 'Future Made in Australia' policy agenda. Data centres and AI also play an important role in the Australian Federal Government’s new push to improve Australia’s productivity.

“For Australia's AI-driven future to be secure, we must ensure that Australian data centres play a core role in AI, data, infrastructure, and operations,” says David Hirst, CEO, Macquarie Data Centres. “Our collaboration with Dell Technologies delivers just that, the perfect marriage of global tech and sovereign infrastructure.”

Sovereignty meets scalability

Dell AI Factory with NVIDIA infrastructure and software will be supported by Macquarie Data Centres’ newest purpose-built AI and cloud data centre, IC3 Super West. The 47MW facility is, according to the company, "purpose-built for the scale, power, and cooling demands of AI infrastructure." It is due to be ready in mid-2026 with the entire end-state power secured.
“Our work with Macquarie Data Centres helps bring the Dell AI Factory with NVIDIA vision to life in Australia,” comments Jamie Humphrey, General Manager, Australia & New Zealand Specialty Platforms Sales, Dell Technologies ANZ. “Together, we are enabling organisations to develop and deploy AI as a transformative and competitive advantage in Australia in a way that is secure, sovereign, and scalable.”

Macquarie Technology Group and Dell Technologies have been collaborating for more than 15 years.

For more from Macquarie Data Centres, click here.

Cloudera is bringing Private AI to data centres
Cloudera, a hybrid platform for data, analytics, and AI, today announced the latest release of Cloudera Data Services, bringing Private AI on premises and aiming to give enterprises secure, GPU-accelerated generative AI capabilities behind their firewall. With built-in governance and hybrid portability, Cloudera says organisations can now build and scale their own sovereign data cloud in their own data centre, "eliminating security concerns." The company claims it is the only vendor that delivers the full data lifecycle with the same cloud-native services on premises and in the public cloud.

Concerns about keeping sensitive data and intellectual property secure are a key factor holding back AI adoption for enterprises across industries. According to Accenture, 77% of organisations lack the foundational data and AI security practices needed to safeguard critical models, data pipelines, and cloud infrastructure. Cloudera argues that it directly addresses the biggest security and intellectual property risks of enterprise AI, allowing customers to "accelerate their journey from prototype to production from months to weeks."

Through this release, the company claims users could reduce infrastructure costs and streamline data lifecycles, boosting data team productivity, as well as accelerating workload deployment, enhancing security by automating complex tasks, and achieving faster time-to-value for AI deployment.

As part of this release, both Cloudera AI Inference Service and AI Studios are now available in data centres. Both of these tools are designed to tackle the barriers to enterprise AI adoption and have previously been available in cloud only.

Details of the products

• Cloudera AI Inference Service, accelerated by NVIDIA: The company says this is one of the industry’s first AI inference services to provide embedded NVIDIA NIM microservice capabilities, streamlining the deployment and management of large-scale AI models in data centres.
It continues to suggest the engine helps deploy and manage the AI production lifecycle, right in the data centre, where data already securely resides.

• Cloudera AI Studios: The company claims this offering democratises the entire AI application lifecycle, offering "low-code templates that empower teams to build and deploy GenAI applications and agents."

Data and comments

According to an independent Total Economic Impact (TEI) study - conducted by Forrester Consulting and commissioned by Cloudera - a composite organisation representative of interviewed customers who adopted Cloudera Data Services on premises saw:

• An 80% faster time-to-value for workload deployment
• A 20% increase in productivity for data practitioners and platform teams
• Overall savings of 35% from the modern cloud-native architecture

The study also highlighted operational efficiency gains, with some organisations improving hardware utilisation from 30% to 70% and reporting they needed between 25% and 50% less capacity after modernising.

“Historically, enterprises have been forced to cobble together complex, fragile DIY solutions to run their AI on premises,” comments Sanjeev Mohan, an industry analyst. “Today, the urgency to adopt AI is undeniable, but so are the concerns around data security. What enterprises need are solutions that streamline AI adoption, boost productivity, and do so without compromising on security.”

Leo Brunnick, Cloudera’s Chief Product Officer, claims, “Cloudera Data Services On-Premises delivers a true cloud-native experience, providing agility and efficiency without sacrificing security or control.

“This release is a significant step forward in data modernisation, moving from monolithic clusters to a suite of agile, containerised applications.”

Toto Prasetio, Chief Information Officer of BNI, states, "BNI is proud to be an early adopter of Cloudera’s AI Inference Service.
"This technology provides the essential infrastructure to securely and efficiently expand our generative AI initiatives, all while adhering to Indonesia's dynamic regulatory environment. "It marks a significant advancement in our mission to offer smarter, quicker, and more dependable digital banking solutions to the people of Indonesia." This product is being demonstrated at Cloudera’s annual series of data and AI conferences, EVOLVE25, starting this week in Singapore.

F5 and Equinix expand collaboration
US technology company F5 and Equinix, a US-based data centre and colocation provider, today announced an expansion of their partnership to support secure deployment of modern applications and AI workloads across hybrid multi-cloud environments.

The collaboration integrates the F5 Application Delivery and Security Platform (ADSP) with Equinix’s Network Edge and Equinix Fabric with the intention of enabling global, virtualised deployment of application services without requiring physical infrastructure. The move is aimed at helping enterprises reduce the operational complexity and cost associated with managing distributed digital infrastructure, while supporting regulatory compliance and improved security.

A key feature of the expanded offering is the availability of F5 Distributed Cloud Customer Edge as a virtual network function (VNF) on Equinix Network Edge. This should enable organisations to provision F5’s application delivery and security services across Equinix’s global infrastructure in near real-time, allowing for rapid scalability without physical hardware deployments. The system supports a range of AI-related use cases, including low-latency environments for inference and retrieval-augmented generation (RAG), while also addressing concerns around data sovereignty and privacy.

John Maddison, Chief Product and Corporate Marketing Officer at F5, comments, “AI is putting massive new demands on infrastructure, especially at the edge, where latency, security, and control are critical.

"Enterprises need faster, more secure ways to deploy and connect applications and AI workloads globally without the complexity of managing physical infrastructure.
Our expanded partnership with Equinix gives customers exactly that: a flexible, high-performance foundation to support AI-driven use cases and deliver exceptional digital experiences across any environment.”

Key features of the offering

• Support for distributed AI workloads – Enables secure, high-speed connections for AI use cases, including inference and RAG, while protecting sensitive data
• Global deployment without physical infrastructure – Allows enterprises to launch application services in new locations using virtual functions, reducing time to market and capital expenditure
• Improved agility and responsiveness – Provides the ability to scale and adapt infrastructure to changing demands across multiple environments
• Unified policy enforcement – Supports consistent application of security and compliance policies across different regions and jurisdictions

The integration also provides F5 customers with access to Equinix’s global interconnection ecosystem, including low-latency links to major cloud providers, while Equinix users can now deploy F5 services directly through the Network Edge platform. Customers of either company can use existing purchasing agreements to access the joint system.

Maryam Zand, Vice President of Partnerships and Ecosystem Development at Equinix, says, “Organisations are racing to adopt AI, but legacy infrastructure can slow them down or expose them to unnecessary risk.

"By partnering with F5, we’re giving our customers a seamless way to scale their AI applications and modern distributed workloads with built-in security, compliance, and performance. This solution can help businesses innovate faster, safeguard their operations, and maintain a competitive edge.”


