Artificial Intelligence in Data Centre Operations


Schneider Electric unveils AI DC reference designs
Schneider Electric, a French multinational specialising in energy management and industrial automation, has announced new data centre reference designs developed with NVIDIA, aimed at supporting AI-ready infrastructure and easing deployment for operators. The designs include integrated power management and liquid cooling controls, with compatibility for NVIDIA Mission Control, the company’s AI factory orchestration software. They also support deployment of NVIDIA GB300 NVL72 racks with densities of up to 142kW per rack.

Integrated power and cooling management

The first reference design provides a framework for combining power management and liquid cooling systems, including Motivair technologies. It is designed to work with NVIDIA Mission Control to help manage cluster and workload operations. This design can also be used alongside Schneider Electric’s other data centre blueprints for NVIDIA Grace Blackwell systems, allowing operators to manage the power and liquid cooling requirements of accelerated computing clusters.

A second reference design sets out a framework for AI factories using NVIDIA GB300 NVL72 racks in a single data hall. It covers four technical areas: facility power, cooling, IT space, and lifecycle software, with versions available under both ANSI and IEC standards.

Deployment and performance focus

According to Schneider Electric, operators are facing significant challenges in deploying GPU-accelerated AI infrastructure at scale. Its designs are intended to speed up rollout and provide consistency across high-density deployments. Jim Simonelli, Senior Vice President and Chief Technology Officer at Schneider Electric, says, “Schneider Electric is streamlining the process of designing, deploying, and operating advanced AI infrastructure with its new reference designs.
"Our latest reference designs, featuring integrated power management and liquid cooling controls, are future-ready, scalable, and co-engineered with NVIDIA for real-world applications - enabling data centre operators to keep pace with surging demand for AI.”

Scott Wallace, Director of Data Centre Engineering at NVIDIA, adds, “We are entering a new era of accelerated computing, where integrated intelligence across power, cooling, and operations will redefine data centre architectures.

"With its latest controls reference design, Schneider Electric connects critical infrastructure data with NVIDIA Mission Control, delivering a rigorously validated blueprint that enables AI factory digital twins and empowers operators to optimise advanced accelerated computing infrastructure.”

Features of the controls reference design

The controls system links operational technology and IT infrastructure using a plug-and-play approach based on the MQTT protocol. It is designed to provide:

• Standardised publishing of power management and liquid cooling data for use by AI management software and enterprise systems
• Management of redundancy across cooling and power distribution equipment, including coolant distribution units and remote power panels
• Guidance on measuring AI rack power profiles, including peak power and quality monitoring

Reference design for NVIDIA GB300 NVL72

The NVIDIA GB300 NVL72 reference design supports clusters of up to 142kW per rack. A data hall based on this design can accommodate three clusters powered by up to 1,152 GPUs, using liquid-to-liquid coolant distribution units and high-temperature chillers. The design incorporates Schneider Electric’s ETAP and EcoStruxure IT Design CFD models, enabling operators to create digital twins for testing and optimisation. It builds on earlier blueprints for the NVIDIA GB200 NVL72, reflecting Schneider Electric’s ongoing collaboration with NVIDIA.
The company now offers nine AI reference designs covering a range of scenarios, from prefabricated modules and retrofits to purpose-built facilities for NVIDIA GB200 and GB300 NVL72 clusters. For more from Schneider Electric, click here.
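The MQTT-based publishing approach described above can be sketched in a few lines. This is a minimal illustration only, not Schneider Electric's published schema: the topic hierarchy, field names, and device identifiers are all assumptions invented for the example.

```python
import json

def telemetry_message(site, device_type, device_id, readings):
    # Build an MQTT topic and JSON payload for publishing power/cooling
    # telemetry in a standardised form. The "dc/<site>/<type>/<id>/telemetry"
    # hierarchy and the field names are illustrative assumptions only.
    topic = f"dc/{site}/{device_type}/{device_id}/telemetry"
    payload = json.dumps({"deviceId": device_id, **readings})
    return topic, payload

# Example: a coolant distribution unit (CDU) reporting loop conditions.
topic, payload = telemetry_message(
    "hall-1", "cdu", "cdu-03",
    {"supplyTempC": 32.5, "returnTempC": 41.2, "flowLpm": 410.0},
)
print(topic)  # dc/hall-1/cdu/cdu-03/telemetry
```

In a real deployment, an MQTT client (for example, the Eclipse Paho library) would publish this payload to a broker, with AI management software and enterprise systems subscribing to the relevant topics.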

Nokia, Supermicro partner for AI-optimised DC networking
Finnish telecommunications company Nokia has entered into a partnership with Supermicro, a provider of application-optimised IT systems, to deliver integrated networking platforms designed for AI, high-performance computing (HPC), and cloud workloads. The collaboration combines Supermicro’s advanced switching hardware with Nokia’s data centre automation and network operating system for cloud providers, hyperscalers, enterprises, and communications service providers (CSPs).

Building networks for AI-era workloads

Data centres are under increasing pressure from the rising scale and intensity of AI and cloud applications. Meeting these requirements demands a shift in architecture that places networking at the core, with greater emphasis on performance, scalability, and automation.

The joint offering integrates Supermicro’s 800G Ethernet switching platforms with Nokia’s Service Router Linux (SR Linux) Network Operating System (NOS) and Event-Driven Automation (EDA). Together, these form an infrastructure platform that automates the entire network lifecycle - from initial design through deployment and ongoing operations. According to the companies, customers will benefit from "a pre-validated solution that shortens deployment timelines, reduces operational costs, and improves network efficiency."

Industry perspectives

Cenly Chen, Chief Growth Officer, Senior Vice President, and Managing Director at Supermicro, says, "This collaboration gives our customers more choice and flexibility in how they build their infrastructure, with the confidence that Nokia’s SR Linux and EDA are tightly integrated with our systems.

"It strengthens our ability to deliver networked compute architectures for high-performance workloads, while simplifying orchestration and automation with a unified platform."
Vach Kompella, Senior Vice President and General Manager of the IP Networks Division at Nokia, adds, "Partnering with Supermicro further validates Nokia SR Linux and Event-Driven Automation as the right software foundation for today’s data centre and IP networks.

"It also gives us significantly greater reach into the enterprise market through Supermicro’s extensive channels and direct sales, aligning with our strategy to expand in cloud, HPC, and AI-driven infrastructure."

For more from Nokia, click here.

World's first AI internet exchange launched by DE-CIX
DE-CIX, an internet exchange (IX) operator, has announced the launch of what it calls the world’s first AI Internet Exchange (AI-IX), making its global network of internet exchanges “AI-ready.” The company has completed the first phase of the rollout, connecting more than 50 AI-related networks – including providers of AI inference and GPU services, alongside a range of cloud platforms – to its ecosystem. DE-CIX says it now operates over 160 cloud on-ramps globally, supported by its proprietary multi-AI routing system. The new exchange capabilities are available across all DE-CIX locations worldwide, including its flagship site in Frankfurt.

Two-phase rollout

The second phase of deployment will see DE-CIX AI-IX made Ultra Ethernet-ready, designed to support geographically distributed AI training as workloads move away from large centralised data centres. Once complete, the operator says it will be the first to offer an exchange capable of supporting both AI inference and AI training.

AI inference – where trained models are applied in real-world use cases – depends on low-latency, high-security connections. According to DE-CIX CEO Ivo Ivanov, the growth of AI agents and AI-enabled devices is creating new demand for direct interconnection. “This is the core benefit of the DE-CIX AI-IX, which uses the unique DE-CIX AI router to enable seamless multi-agent inference for today’s complex use cases and tomorrow’s innovation in all industry segments,” Ivo says.

Ultra Ethernet and AI training

Phase two focuses on AI model training. DE-CIX CTO Thomas King says that Ultra Ethernet, a new networking protocol optimised for AI, will enable disaggregated computing across metropolitan areas. This could reduce reliance on centralised data centres and create more cost-effective, resilient private AI infrastructure. “Until now, huge, centralised data centres have been needed to quickly process AI computing loads on parallel clusters,” Thomas explains.
“Ultra Ethernet is driving the trend towards disaggregated computing, enabling AI training to be carried out in a geographically distributed manner.”

DE-CIX hardware is already compatible with Ultra Ethernet, and the operator plans to introduce it once network equipment vendors make the necessary software features available.

For more from DE-CIX, click here.

Macquarie, Dell bring AI factories to Australia
Australian data centre operator Macquarie Data Centres, part of Macquarie Technology Group, is collaborating with US multinational technology company Dell Technologies with the aim of providing a secure, sovereign home for AI workloads in Australia. Macquarie Data Centres will host the Dell AI Factory with NVIDIA within its AI and cloud data centres. This approach seeks to power enterprise AI, private AI, and neocloud projects while achieving high standards of data security within sovereign data centres.

This development will be particularly relevant for critical infrastructure providers and highly regulated sectors such as healthcare, finance, education, and research, which face strict regulatory compliance conditions relating to data storage and processing. The collaboration aims to give them the secure, compliant foundation needed to build, train, and deploy advanced AI applications in Australia, such as AI digital twins, agentic AI, and private LLMs.

Answering the Government’s call for sovereign AI

The Australian Government has linked the data centre sector to its 'Future Made in Australia' policy agenda. Data centres and AI also play an important role in the Australian Federal Government’s new push to improve Australia’s productivity.

“For Australia's AI-driven future to be secure, we must ensure that Australian data centres play a core role in AI, data, infrastructure, and operations,” says David Hirst, CEO of Macquarie Data Centres. “Our collaboration with Dell Technologies delivers just that: the perfect marriage of global tech and sovereign infrastructure.”

Sovereignty meets scalability

Dell AI Factory with NVIDIA infrastructure and software will be supported by Macquarie Data Centres’ newest purpose-built AI and cloud data centre, IC3 Super West. The 47MW facility is, according to the company, "purpose-built for the scale, power, and cooling demands of AI infrastructure." It is due to be ready in mid-2026, with the entire end-state power already secured.
“Our work with Macquarie Data Centres helps bring the Dell AI Factory with NVIDIA vision to life in Australia,” comments Jamie Humphrey, General Manager, Australia & New Zealand Specialty Platforms Sales, Dell Technologies ANZ. “Together, we are enabling organisations to develop and deploy AI as a transformative and competitive advantage in Australia in a way that is secure, sovereign, and scalable.” Macquarie Technology Group and Dell Technologies have been collaborating for more than 15 years. For more from Macquarie Data Centres, click here.

Cloudera is bringing Private AI to data centres
Cloudera, a hybrid platform for data, analytics, and AI, today announced the latest release of Cloudera Data Services, bringing Private AI on premises and aiming to give enterprises secure, GPU-accelerated generative AI capabilities behind their firewall. With built-in governance and hybrid portability, Cloudera says organisations can now build and scale their own sovereign data cloud in their own data centre, "eliminating security concerns." The company claims it is the only vendor that delivers the full data lifecycle with the same cloud-native services on premises and in the public cloud.

Concerns about keeping sensitive data and intellectual property secure are a key factor holding back AI adoption for enterprises across industries. According to Accenture, 77% of organisations lack the foundational data and AI security practices needed to safeguard critical models, data pipelines, and cloud infrastructure. Cloudera argues that this release directly addresses the biggest security and intellectual property risks of enterprise AI, allowing customers to "accelerate their journey from prototype to production from months to weeks."

Through this release, the company claims users can reduce infrastructure costs and streamline data lifecycles, boosting data team productivity, accelerating workload deployment, enhancing security by automating complex tasks, and achieving faster time-to-value for AI deployment. As part of this release, both Cloudera AI Inference Service and AI Studios are now available in data centres. Both tools are designed to tackle the barriers to enterprise AI adoption and were previously available in the cloud only.

Details of the products

• Cloudera AI Inference Service, accelerated by NVIDIA: The company says this is one of the industry’s first AI inference services to provide embedded NVIDIA NIM microservice capabilities, streamlining the deployment and management of large-scale AI models in data centres.
The company suggests the engine helps deploy and manage the AI production lifecycle, right in the data centre, where data already securely resides.

• Cloudera AI Studios: The company claims this offering democratises the entire AI application lifecycle, offering "low-code templates that empower teams to build and deploy GenAI applications and agents."

Data and comments

According to an independent Total Economic Impact (TEI) study - conducted by Forrester Consulting and commissioned by Cloudera - a composite organisation representative of interviewed customers who adopted Cloudera Data Services on premises saw:

• An 80% faster time-to-value for workload deployment
• A 20% increase in productivity for data practitioners and platform teams
• Overall savings of 35% from the modern cloud-native architecture

The study also highlighted operational efficiency gains, with some organisations improving hardware utilisation from 30% to 70% and reporting they needed between 25% and 50% less capacity after modernising.

“Historically, enterprises have been forced to cobble together complex, fragile DIY solutions to run their AI on premises,” comments Sanjeev Mohan, an industry analyst. “Today, the urgency to adopt AI is undeniable, but so are the concerns around data security. What enterprises need are solutions that streamline AI adoption, boost productivity, and do so without compromising on security.”

Leo Brunnick, Cloudera’s Chief Product Officer, claims, “Cloudera Data Services On-Premises delivers a true cloud-native experience, providing agility and efficiency without sacrificing security or control.

“This release is a significant step forward in data modernisation, moving from monolithic clusters to a suite of agile, containerised applications.”

Toto Prasetio, Chief Information Officer of BNI, states, "BNI is proud to be an early adopter of Cloudera’s AI Inference Service.
"This technology provides the essential infrastructure to securely and efficiently expand our generative AI initiatives, all while adhering to Indonesia's dynamic regulatory environment. "It marks a significant advancement in our mission to offer smarter, quicker, and more dependable digital banking solutions to the people of Indonesia." This product is being demonstrated at Cloudera’s annual series of data and AI conferences, EVOLVE25, starting this week in Singapore.

Report: 'UK risks losing billions in AI investment'
According to a new report published today by trade association TechUK, the Data Centre Alliance, Copper Consultancy, and law firm Charles Russell Speechlys, the UK risks losing out on billions in AI investment if it doesn’t take clear steps to unlock data centre development.

The report, How to Make the UK an AI Leader, brings together reflections from some of the biggest data centre developers - as well as planners and construction, engineering, and legal professionals - at a recent industry roundtable organised by Copper and Charles Russell Speechlys. The roundtable, and subsequent report, lay bare the challenges facing data centre development in the UK, and the impact these could have on investment into UK plc. Key regulatory barriers – such as energy availability, energy cost, and planning complexity – were identified alongside low public awareness as the main issues hobbling development of data centres in the UK.

Luisa Cardani, Head of Data Centre Programme at TechUK, says, “The insights in this report echo what TechUK and the sector have been advocating for a long time: the UK has the talent, the ambition, and the capability to lead in AI and digital infrastructure, but leadership is not guaranteed. It requires bold decisions, cross-sector collaboration, and a shared national vision.”

Steve Hone, CEO at Data Centre Alliance, adds, “As [a] trade association representing the UK data centre digital sector, we were delighted to be invited to collaborate in the recent roundtable which has culminated in the creation of this report.

"This timely report is an important contribution to the debate and hopefully will act as a catalyst for the action needed to ensure the UK’s digital infrastructure remains world leading.”

The report notes how high energy prices are currently hindering the UK's global competitiveness in data centres and AI - actively dissuading investment in the UK.
Given the resource-intensive nature of data centres, the report suggests that the industry needs the Government to intervene through targeted subsidies, reducing costs to match energy costs in rival regions like the US and the Nordics.

Concerns have also been raised about 'AI Growth Zones' being seen as a "silver bullet for the industry." While, according to the report, the industry welcomes government support, the current planning framework is seen as "overly rigid" and "risks misalignment with actual demand and repeating past planning mistakes like Slough's unplanned growth." In response, a new planning use class could allow for flexible, demand-led planning, which would be especially important in the fast-moving AI industry.

Finally, public perception is seen as a critical non-regulatory issue for the sector to tackle, with half the UK’s population not knowing what a data centre is. Such low awareness leaves the public open to misinformation and a fundamental lack of understanding as to why data centres are critical to a future economy. The report calls on the industry to engage more proactively with the public on the needs case for data centres, supported by the Government outlining why their development is critical to growth.

Ronan Cloud, Director at Copper Consultancy, argues, “While Grey Belt reforms are beneficial, considerable planning inertia remains. Government should create a dedicated planning use class for data centres at once, distinct from broader industrial uses.

"This tailored classification would increase planning approvals and accommodate future technological developments.”

Kevin Gibbs, Senior Consultant at Charles Russell Speechlys, comments, “Whilst the Government’s AI Opportunities Action Plan commits £2 billion to AI Growth Zones to accelerate infrastructure delivery, there is much more that both the industry and Government can, and should, be doing.
"To truly become an AI leader and unlock economic growth, the UK needs to make a clear and compelling case for data centres. It needs to act now to alleviate some of the barriers.”

Kioxia announces 245.76TB SSD for enterprise AI
Memory manufacturer Kioxia Europe has expanded its LC9 Series of enterprise solid-state drives (SSDs) with the launch of a 245.76TB model, available in both 2.5-inch and Enterprise and Datacentre Standard Form Factor (EDSFF) E3.L formats. According to the company, it is the first NVMe SSD of this capacity to be offered in these form factors.

The new model adds to the previously announced 122.88TB SSD and is aimed at enterprise environments, particularly those handling generative AI workloads. These workloads require large-scale, high-speed storage with high energy efficiency to support training large language models (LLMs), creating embeddings, and building vector databases used in retrieval-augmented generation (RAG).

The LC9 Series is based on a 32-die stack of 2Tb BiCS FLASH QLC 3D flash memory, using Kioxia’s CBA (CMOS directly bonded to array) technology. This combination enables 8TB in a compact 154-ball grid array (BGA) package. The design leverages advancements in wafer processing, materials science, and wire bonding.

The new drives are intended for use in data lakes and other large-scale data environments where high performance and storage density are essential. In such use cases, traditional hard disk drives (HDDs) can limit throughput and underutilise GPUs. By comparison, Kioxia says that each LC9 drive can deliver up to 245.76TB while reducing the need for multiple HDDs, lowering power consumption, improving cooling efficiency, and ultimately reducing total cost of ownership (TCO).
Key specifications of the LC9 Series SSDs include:

• Capacity up to 245.76TB in 2.5-inch and E3.L form factors
• 122.88TB models also available in 2.5-inch and E3.S form factors
• Designed to PCIe 5.0 (up to 128GT/s Gen5 single x4 or dual x2), NVMe 2.0, and NVMe-MI 1.2c specifications
• Support for the Open Compute Project (OCP) Datacentre NVMe SSD specification v2.5 (partial compliance)
• Flexible Data Placement (FDP) support to reduce write amplification and extend drive lifespan
• Security options including SIE, SED, and FIPS SED
• CNSA 2.0 signing algorithm, intended for future quantum security standards

“We continue to drive innovation with the new Kioxia LC9 Series, providing cutting-edge technology that enables our data centre and hyperscaler customers to stay ahead,” claims Paul Rowan, Vice President and Chief Marketing Officer at Kioxia Europe. “The 32-die stack of 2Tb BiCS FLASH QLC 3D flash memory, coupled with our innovative CBA technology and the E3.L form factor within the LC9 Series SSDs, addresses the unique requirements of generative AI applications for speed, scale, and efficiency.”

The LC9 Series SSDs are currently sampling to select customers and will be showcased at the Future of Memory and Storage 2025 conference, taking place from 5 to 7 August in Santa Clara, USA.

For more from Kioxia, click here.
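The density argument behind these drives is easy to see with rough arithmetic. As an illustration only: the 2PB target and the 24TB nearline HDD capacity below are assumptions, while the 245.76TB figure is the LC9 capacity quoted above.

```python
import math

TARGET_TB = 2000.0   # assumed 2PB data lake (illustrative)
LC9_TB = 245.76      # LC9 SSD capacity quoted in the article
HDD_TB = 24.0        # assumed nearline HDD capacity (illustrative)

# Number of drives needed to reach the target raw capacity,
# ignoring RAID/erasure-coding overhead for simplicity.
ssd_count = math.ceil(TARGET_TB / LC9_TB)
hdd_count = math.ceil(TARGET_TB / HDD_TB)
print(ssd_count, hdd_count)  # 9 84
```

Fewer drives means fewer slots, less power, and less cooling, which is the TCO argument Kioxia is making; real sizing would also need to account for redundancy and endurance.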

Cybersecurity teams pushing back against AI hype
Despite industry hype and pressure from business leaders to accelerate adoption, cybersecurity teams are reportedly taking a cautious approach to artificial intelligence (AI). This is according to a new survey from ISC2, a non-profit organisation that provides cybersecurity training and certifications. While AI is widely promoted as a game-changer for security operations, only a small proportion of practitioners have integrated these tools into their daily workflows, with many remaining hesitant due to concerns over privacy, oversight, and unintended risks.

The survey of over 1,000 cybersecurity professionals found that just 30% of cybersecurity teams are currently using AI tools in their daily operations, while 42% are still evaluating their options. Only 10% said they have no plans to adopt AI at all.

Adoption is most advanced in industrial sectors (38%), IT services (36%), and professional services (34%). Larger organisations with more than 10,000 employees are further ahead on the adoption curve, with 37% actively using AI tools. In contrast, smaller businesses - particularly those with fewer than 99 staff or between 500 and 2,499 employees - show the lowest uptake, with only 20% using AI. Among the smallest organisations, 23% say they have no plans to evaluate AI security tools at all.

Andy Ward, SVP International at Absolute Security, comments, “The ISC2 research echoes what we’re hearing from CISOs globally. There’s real enthusiasm for the potential of AI in cybersecurity, but also a growing recognition that the risks are escalating just as fast.

"Our research shows that over a third (34%) of CISOs have already banned certain AI tools like DeepSeek entirely, driven by fears of privacy breaches and loss of control.
"AI offers huge promise to improve detection, speed up response times, and strengthen defences, but without robust strategies for cyber resilience and real-time visibility, organisations risk sleepwalking into deeper vulnerabilities.

"As attackers leverage AI to reduce the gap between vulnerability and exploitation, our defences must evolve with equal urgency. Now is the time for security leaders to ensure their people, processes, and technologies are aligned, or risk being left dangerously exposed.”

Arkadiy Ukolov, Co-Founder and CEO at Ulla Technology, adds, “It’s no surprise to see security professionals taking a measured, cautious approach to AI. While these tools bring undeniable efficiencies, privacy and control over sensitive data must come first.

"Too many AI solutions today operate in ways that risk exposing confidential information through third-party platforms or unsecured systems.

"For AI to be truly fit for purpose in cybersecurity, it must be built on privacy-first foundations, where data remains under the user’s control and is processed securely within an enclosed environment. Protecting sensitive information demands more than advanced tech alone; it requires ongoing staff awareness, training on AI use, and a robust infrastructure that doesn’t compromise security."

Despite this caution, where AI has been implemented, the benefits are clear: 70% of those already using AI tools report positive impacts on their cybersecurity team’s overall effectiveness. Key areas of improvement include network monitoring and intrusion detection (60%), endpoint protection and response (56%), vulnerability management (50%), threat modelling (45%), and security testing (43%).

Looking ahead, AI adoption is expected to have a mixed impact on hiring. Over half of cybersecurity professionals believe AI will reduce the need for entry-level roles by automating repetitive tasks.
However, 31% anticipate that AI will create new opportunities for junior talent or demand new skill sets, helping to rebalance some of the projected reductions in headcount. Encouragingly, 44% said their hiring plans have not yet been affected, though the same proportion report that their organisations are actively reconsidering the skills and roles required to manage AI technologies.

Industry analysts urge data trust over AI hype
As organisations increasingly embrace AI to unlock new operations, industry analysts at the Gartner Data & Analytics Summit in Sydney delivered a critical reminder: without trustworthy data, even the most advanced AI systems can lead businesses astray. Amid rising interest in generative AI and autonomous agents, business leaders are being reminded that flashy AI capabilities are meaningless if built on unreliable data. According to information technology research and advisory company Gartner's 2024 survey, data quality and availability remain the biggest barriers to effective AI implementation. If the foundation is flawed, so is the intelligence built on top of it.

While achieving perfect data governance is an admirable goal, it's often impractical in fast-moving business environments. Instead, analysts recommend implementing "trust models" that assess the reliability of data based on its origin, lineage, and level of curation. These models enable more nuanced, risk-aware decision-making and can prevent the misuse of data without stalling innovation.

Richard Bovey, Chief for Data at AND Digital, comments, "Trust in data isn't just a technical challenge, it's deeply cultural and organisational. While advanced tools and trust models can help address the reliability of data, true confidence in data quality comes from clear ownership, clear practices, and a company-wide commitment to transparency.

"Too often, organisations are rushing into AI initiatives without fixing the basics. According to our research, 56% of businesses are implementing AI despite knowing their data may not be accurate, in order to avoid falling behind their competitors.

"Businesses must take a data-and-AI approach to their technical operations to build trust, cross-functional collaboration, and ongoing education. Only then can AI initiatives truly succeed."

At the summit, autonomy was a central theme.
AI systems may act independently in low-risk or time-sensitive situations, but full autonomy still raises concerns: while users accept AI advice, they're still adjusting to autonomous AI decision-making.

Stuart Harvey, CEO of Datactics, argues, "One of the biggest misconceptions we see is the belief that AI performance is purely a function of the model itself, when in reality, it all starts with data. Without well-governed, high-quality data, even the most sophisticated AI systems will produce inconsistent or misleading results.

"Organisations often underestimate the foundational role of data management, but these aren't back-office tasks; they're strategic enablers of trustworthy AI, and businesses that rush into AI without addressing fragmented or unverified data sources put themselves at significant risk. Strong data foundations aren't just nice to have in today's technical landscape; they're essential for reliable, ethical, and scalable AI adoption."

Gartner predicts that by 2027, 20% of business processes will be fully managed by autonomous analytics, and that these "perceptive" systems will move beyond dashboards, offering proactive, embedded insights. The company also believes that by 2030, AI agents will replace 30% of SaaS interfaces, turning apps into intelligent data platforms. To thrive, data leaders should prioritise trust, influence, and organisational impact, or risk being sidelined.

For more from Gartner, click here.
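The "trust model" idea the analysts describe can be sketched as a simple scoring function that rates data on origin, lineage, and curation, then gates how it may be used. The weights, categories, and thresholds below are illustrative assumptions, not Gartner's methodology.

```python
# Score a dataset's trustworthiness from its origin, lineage, and curation,
# then map the score to a usage decision. All weights and cut-offs here are
# invented for illustration.
ORIGIN_SCORES = {"first_party": 1.0, "partner": 0.7, "scraped": 0.3}

def trust_score(origin, lineage_documented, curated):
    score = 0.5 * ORIGIN_SCORES.get(origin, 0.0)  # origin weighted 50%
    if lineage_documented:
        score += 0.25                              # lineage weighted 25%
    if curated:
        score += 0.25                              # curation weighted 25%
    return score

def usage_tier(score):
    if score >= 0.75:
        return "approved for AI training"
    if score >= 0.5:
        return "use with review"
    return "quarantine"

print(usage_tier(trust_score("first_party", True, False)))  # approved for AI training
```

A gate like this lets lower-trust data through for low-stakes analytics while keeping it out of model training, which is the risk-aware decision-making the analysts describe.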

'More than a third of UK businesses unprepared for AI risks'
Despite recognising artificial intelligence (AI) as a major threat, with nearly a third (30%) of UK organisations surveyed naming it among their top three risks, many remain significantly unprepared to manage AI risk. Recent research from CyXcel, a global cyber security consultancy, highlights a concerning gap: nearly a third (29%) of UK businesses surveyed have only just implemented their first AI risk strategy - and 31% don’t have any AI governance policy in place. This gap exposes organisations to substantial risks including data breaches, regulatory fines, reputational harm, and critical operational disruption, especially as AI threats continue to grow and rapidly evolve.

CyXcel’s research shows that nearly a fifth (18%) of UK and US companies surveyed are still not prepared for AI data poisoning - a type of cyberattack that targets the training datasets of AI and machine learning (ML) models - or for a deepfake or cloning security incident (16%).

Responding to these mounting threats and geopolitical challenges, CyXcel has launched its Digital Risk Management (DRM) platform, which aims to provide businesses with insight into evolving AI risks across major sectors, regardless of business size or jurisdiction. The DRM seeks to help organisations identify risks and implement the right policies and governance to mitigate them.

Megha Kumar, Chief Product Officer and Head of Geopolitical Risk at CyXcel, comments, “Organisations want to use AI but are worried about risks – especially as many do not have a policy and governance process in place. The CyXcel DRM provides clients across all sectors, especially those that have limited technological resources in house, with a robust tool to proactively manage digital risk and harness AI confidently and safely.”

Edward Lewis, CEO of CyXcel, adds, “The cybersecurity regulatory landscape is rapidly evolving and becoming more complex, especially for multinational organisations.
"Governments worldwide are enhancing protections for critical infrastructure and sensitive data through legislation like the EU’s Cyber Resilience Act, which mandates security measures such as automatic updates and incident reporting. Similarly, new laws expected in the UK next year would introduce mandatory ransomware reporting and stronger regulatory powers. With new standards and controls continually emerging, staying current is essential.”


