Artificial Intelligence in Data Centre Operations


Kao Data unveils blueprint to accelerate UK AI ambitions
Kao Data, a specialist developer and operator of data centres engineered for AI and advanced computing, has released a strategic report charting a clear path to accelerate the UK's AI ambitions in support of the UK Government's AI Opportunities Action Plan.

The report, AI Taking, Not Making, delivers practical recommendations to bridge the gap between government and industry - helping organisations to capitalise on the recent £31 billion in commitments from leading US technology firms including Microsoft, Google, NVIDIA, OpenAI, and CoreWeave, and highlighting key pitfalls which could prevent them from materialising.

Drawing on Kao Data's expertise in delivering hyperscale-inspired AI infrastructure, the report identifies three strategic pillars which must be addressed for the UK to secure the much-anticipated economic boost from the Government’s Plan for Change. Key areas include energy pricing and grid modernisation, proposed amendments to the UK’s AI copyright laws, and coordinated investment strategies across the country’s energy and data centre infrastructure systems.

"Matt Clifford's AI Opportunities Action Plan has galvanised the industry around a bold vision for Britain's digital future, and the recent investment pledges from global technology leaders signal tremendous confidence in our potential," says Spencer Lamb, MD & CCO of Kao Data. "What's needed now is focused collaboration between industry and government to transform these commitments into world-class infrastructure. Our new report offers a practical roadmap to make this happen - drawing on our experience developing data centres engineered for AI and advanced computing, and operating those which already power some of the world's most demanding workloads."
The new Kao Data report outlines concrete opportunities for partnership across a series of strategic pillars, including integrating data centres into Energy Intensive Industry (EII) frameworks, implementing zonal power pricing near renewable energy generation, and accelerating grid modernisation to unlock the projected 10GW AI opportunity by 2030.

Additionally, the report proposes new measures to evolve UK AI copyright law with a pragmatic approach that protects creative industries whilst ensuring Britain remains a competitive location for the large-scale AI training deployments essential for attracting frontier AI workloads. Further, the paper shares key considerations to optimise the government's AI Growth Zones (AIGZs), defining benefit structures that create stronger alignment between public infrastructure programmes and the private sector to ensure rapid deployment of sovereign UK AI capacity.

"Britain possesses extraordinary advantages: world-leading research institutions, exceptional engineering talent, and now substantial investment to back the country’s AI ambitions," Spencer continues. "By working in partnership with government, we believe we can transform these strengths into the physical infrastructure that will power the next generation of industrial-scale AI innovations and deliver solutions that position the UK at the forefront of the global AI race."

To download the full report, click here. For more from Kao Data, click here.

Start Campus, Nscale to deploy NVIDIA Blackwell in the EU
Start Campus, a designer, builder, and operator of sustainable data centres, in partnership with hyperscaler Nscale, has announced one of the European Union’s first deployments of the NVIDIA GB300 NVL72 platform at its SIN01 data centre in Sines, Portugal. The project supports Microsoft’s AI infrastructure strategy and marks a milestone in the development of advanced, sovereign AI capacity within the EU.

Nscale, a European-headquartered AI infrastructure company operating globally, selected Start Campus’s site for its strategic location, readiness, and scalability. The first phase of the deployment is scheduled to go live in early 2026 at the SINES Data Campus.

High-density power to support next-generation AI
The NVIDIA GB300 NVL72 platform is designed for high-performance AI inference and training workloads, supporting larger and more complex model development. Start Campus says the installation will accommodate rack densities exceeding 130 kW, with power and cooling systems engineered to meet the requirements of ultra-dense AI computing.

Portugal’s government has welcomed the investment as a key step in strengthening the country’s position in the European digital economy. Castro Almeida, Minister of Economy and Territorial Cohesion, comments, “This investment in Sines confirms international confidence in Portugal as a destination for innovation and technology. It strengthens our position in the global digital economy and supports high-value job creation.”

Miguel Pinto Luz, Minister of Infrastructure and Housing, adds, “Start Campus illustrates how digital and infrastructure strategies can align to deliver long-term sustainability.
"Sines demonstrates the convergence of the digital transition with Portugal’s geographic advantages - particularly its port, which plays a strategic role in connecting new submarine cables and enabling low-carbon investment.”

Recent research by Copenhagen Economics projects that data centre investment could contribute up to €26 billion (£22.6 billion) to Portugal’s GDP by 2030, creating tens of thousands of jobs. Portugal’s location supports strong connectivity through high-capacity subsea cables such as Equiano, 2Africa, and EllaLink, providing low-latency global links. Data from national grid operator Redes Energéticas Nacionais (REN) shows that renewable energy supplied 71% of Portugal’s electricity consumption in 2024, rising to 81% in early 2025. The country’s energy costs also remain below EU and Euro Area averages.

The next steps
Robert Dunn, CEO of Start Campus, says, “With SIN01 now at full capacity and expanding to meet demand, we have demonstrated that the SINES Data Campus is ready for ultra-dense, next-generation AI workloads.

"Partnering with Nscale and NVIDIA on this deployment highlights Portugal’s role as a leader in sustainable AI infrastructure.”

The company is also progressing work on its next facility, the 180 MW SIN02 data centre, which will form part of the same campus.

Josh Payne, CEO and founder of Nscale, notes, “AI requires an environment that combines scale, resilience, and sustainability. This deployment demonstrates our ability to deliver advanced infrastructure in the European Union while meeting the technical demands of modern workloads.

"Partnering with Start Campus allows us to lay the groundwork for the next generation of AI.”

Nscale is expanding its European footprint, including building the UK’s largest AI supercomputer with Microsoft at its Loughton campus and partnering with Aker ASA on Stargate Norway - a joint venture linked to multi-billion-euro agreements with Microsoft. For more from Start Campus, click here.

Rethinking infrastructure for the AI era
In this exclusive article for DCNN, Jon Abbott, Technologies Director, Global Strategic Clients at Vertiv, explains how the challenge for operators is no longer simply maintaining uptime; it’s adapting infrastructure fast enough to meet the unpredictable, high-intensity demands of AI workloads:

Built for backup, ready for what’s next
Artificial intelligence (AI) is changing how facilities are built, powered, cooled, and secured. The industry is now facing hard questions about whether existing infrastructure, designed for traditional enterprise or cloud loads, can be successfully upgraded to support the pace and intensity of AI-scale deployments.

Data centres are being pushed to adapt quickly, and the pressure is mounting from all sides: from soaring power densities to unplanned retrofits, and from tighter build timelines to demands for grid interactivity and physical resilience. What’s clear is that we’ve entered a phase where infrastructure is no longer just about uptime; instead, it’s about responsiveness, integration, and speed.

The new shape of demand
Today’s AI systems don’t scale in neat, predictable increments; they arrive with sharp step-changes in power draw, heat generation, and equipment footprint. Racks that once averaged under 10kW are being replaced by those consuming 30kW, 40kW, or even 80kW - often in concentrated blocks that push traditional cooling systems to their limits.

This is also a physical problem. Heavier and wider AI-optimised racks require new planning for load distribution, cooling system design, and containment. Many facilities are discovering that the margins they once relied on - in structural tolerance, space planning, or energy headroom - have already evaporated.

Cooling strategies, in particular, are under renewed scrutiny. While air cooling continues to serve much of the IT estate, the rise of liquid-cooled AI workloads is accelerating.
Rear-door heat exchangers and direct-to-chip cooling systems are no longer reserved for experimental deployments; they are being actively specified for near-term use. Most of these systems do not replace air entirely, but work alongside it. The result is a hybrid cooling environment that demands more precise planning, closer system integration, and a shift in maintenance thinking.

Deployment cycles are falling behind
One of the most critical tensions AI introduces is the mismatch between innovation cycles and infrastructure timelines. AI models evolve in months, but data centres are typically built over years. This gap creates mounting pressure on procurement, engineering, and operations teams to find faster, lower-risk deployment models.

As a result, there is increasing demand for prefabricated and modular systems that can be installed quickly, integrated smoothly, and scaled with less disruption. These approaches are not being adopted to reduce cost; they are being adopted to save time and to de-risk complex commissioning across mechanical and electrical systems. Integrated uninterruptible power supply (UPS) and power distribution units, factory-tested cooling modules, and intelligent control systems are all helping operators compress build timelines while maintaining performance and compliance.

Where operators once sought redundancy above all, they are now prioritising responsiveness as well: the ability to flex infrastructure around changing workload patterns.

Security matters more when the stakes rise
AI infrastructure is expensive, energy-intensive, and often tied to commercially sensitive operations. That puts physical security firmly back on the agenda - not only for hyperscale operators, but also for enterprise and colocation facilities managing high-value compute assets. Modern data centres are now adopting a more layered approach to physical security.
It begins with perimeter control, but extends through smart rack-level locking systems, biometric or multi-factor authentication, and role-based access segmentation. For some facilities - especially those serving AI training operations - real-time surveillance and environmental alerting are being integrated directly into operational platforms. The aim is to reduce blind spots between security and infrastructure and to help identify risks before they interrupt service.

The invisible fragility of hybrid environments
One of the emerging risks in AI-scale facilities is the unintended fragility created by multiple overlapping systems. Cooling loops, power chains, telemetry platforms, and asset tracking tools all work in parallel, but without careful integration, they can fail to provide a coherent operational picture.

Hybrid cooling systems may introduce new points of failure that are not always visible to standard monitoring tools. Secondary fluid networks, for instance, must be managed with the same criticality as power infrastructure. If overlooked, they can become weak points in otherwise well-architected environments. Likewise, inconsistent commissioning between systems can lead to drift, incompatibility, and inefficiency.

These challenges are prompting many operators to invest in more integrated control platforms that span thermal, electrical, and digital infrastructure. The goal is not just to see issues but to act on them quickly - to re-balance loads, adapt cooling, or respond to anomalies in real time.

Power systems are evolving too
As compute densities rise, so too does energy consumption. Operators are looking at how backup systems can do more than sit idle: UPS fleets are being turned into grid-support assets, and demand response and peak shaving programmes are becoming part of energy strategy.
Many data centres are now exploring microgrid models that incorporate renewables, fuel cells, or energy storage to offset demand and reduce reliance on volatile grid supply.

What all of this reflects is a shift in mindset. Infrastructure is no longer a fixed investment; it is a dynamic capability - one that must scale, flex, and adapt in real time. Operators who understand this are best placed to succeed in a fast-moving environment.

From resilience to responsiveness
The old model of data centre resilience was built around failover and redundancy. Today, resilience also means responsiveness: the ability to handle unexpected load spikes, adjust cooling to new workloads, maintain uptime under tighter energy constraints, and secure physical systems across more fluid operating environments.

This shift is already reshaping how data centres are specified, how vendors are selected, and how operators evaluate return on infrastructure investment. What once might have been designed in isolated disciplines - cooling, power, controls, access - is now being engineered as part of a joined-up, system-level operational architecture. Intelligent data centres are not defined by their scale, but by their ability to stay ahead of what’s coming next.

For more from Vertiv, click here.

AI infrastructure as a Trojan horse for climate infrastructure
Data centres are getting bigger, denser, and more power-hungry than ever before. The rapid rise of artificial intelligence (AI) is accelerating this expansion, driving one of the largest capital build-outs of our time. Left unchecked, hyperscale growth could deepen strains on energy, water, and land - while concentrating economic benefits in just a few regions. But this trajectory isn’t inevitable.

In this whitepaper, Shilpika Gautam, CEO and founder of Opna, explores how shifting from training-centric hyperscale facilities to inference-first, modular, and distributed data centres can align AI’s growth with climate resilience and community prosperity. The paper examines:

• How right-sized, locally integrated data centres can anchor clean energy projects and strengthen grids through flexible demand,
• Opportunities to embed circularity by reusing waste heat and water, and to drive demand for low-carbon materials and carbon removal, and
• The need for transparency, contextual siting, and community accountability to ensure measurable, lasting benefits.

Decentralised compute decentralises power. By embracing modular, inference-first design, AI infrastructure can become a force for both planetary sustainability and shared prosperity. You can download the whitepaper for yourself by clicking this link.

Saudi Arabia’s first integrated data science and AI diploma
DataVolt, a Saudi Arabian developer and operator of sustainable data centres, has partnered with the Energy & Water Academy (EWA) and Innovatics to launch the Kingdom of Saudi Arabia’s first fully industry-integrated Diploma in Data Science and Artificial Intelligence (AI). The new programme is designed to blend academic learning with practical, real-world experience, helping to prepare Saudi talent for the digital economy.

Announced at the LEARN event in Riyadh, the Diploma is approved by the Technical and Vocational Training Corporation (TVTC) and College of Excellence (CoE), and endorsed by the Ministry of Communications and Information Technology (MCIT). It is also supported by the Human Resources Development Fund (HRDF).

Unlike traditional academic pathways, the programme combines classroom study with applied projects. Students will work with sponsoring companies, including DataVolt, on live industry challenges, developing proof-of-concept AI applications and gaining hands-on experience that directly aligns with workforce needs.

As part of its commitment, DataVolt will sponsor five students from the first cohort and guarantee them employment after graduation. They will join the company’s operations supporting high-power-density workloads at its data centres, including its planned AI campus in Oxagon, NEOM. DataVolt is inviting other organisations to co-sponsor the inaugural class of 100 students, with a target of 50% female participation. The first intake is scheduled to begin in November 2025.

Industry-led learning for the digital future
Rajit Nanda, CEO at DataVolt, says, “DataVolt is not only building the Kingdom’s next-generation data centres, but also the local Saudi talent to power them, ensuring the country is prepared to lead the global AI economy in the long term.
"Our investment in this first-of-its-kind Diploma demonstrates our commitment to Vision 2030, and we encourage our partners across the industry to join us in sponsoring the programme and future-proofing the local workforce.”

With AI expected to contribute around US$320 billion (£239.1 billion) to the Middle East economy by 2030, and Saudi Arabia set to see the greatest share of that value, the initiative supports the country’s Vision 2030 and the National Strategy for Data and AI (NSDAI). The programme aims to help bridge the national skills gap and contribute to the target of training 20,000 AI professionals over the next five years.

Salwa Smaoui, CEO of Innovatics, comments, “This Diploma is not just education; it is a strategic workforce initiative. Our mission is to ensure every graduate is ready to contribute from day one to the Kingdom’s most ambitious AI projects.”

Tariq Alshamrani, CEO of EWA, adds, “EWA is proud to partner with DataVolt and Innovatics to deliver this programme. Together, we are developing the next generation of data scientists and AI professionals who will power the Kingdom’s digital future.”

DataVolt continues to expand its data centre footprint across Saudi Arabia. Earlier this year, the company signed an agreement with NEOM to design and develop the region’s first sustainable, net-zero AI campus in Oxagon. The first phase of the 1.5 GW development, backed by an initial investment of US$5 billion (£3.7 billion), is expected to begin operations in 2028. For more from DataVolt, click here.

Paving the way for efficient high-density AI at 400G & 800G
AI workloads are reshaping the data centre. As back-end traffic scales and racks densify, the interconnect choices you make today will determine the performance, efficiency, and scalability of tomorrow’s AI infrastructure. In this fast, focused 30-minute live tech talk, Siemon’s experts will share a practical, cabling-led view to help you plan smarter and deploy faster. Drawing on field experience and expectations from large-scale AI deployments, the session will give you clear context and actionable guidance before your next design, upgrade, or AI back-end project begins.

Discover:
• AI market overview & nomenclature: A clear look at scale-up vs scale-out networks and where each fits in AI planning.
• Reference designs & deployment sizes: Common GPU pod approaches (including air-cooled and liquid-to-chip) and what they mean for density and footprint.
• AI network connection points: Critical interconnect considerations for high-performance AI back-end networks.
• AI network cabling considerations: What to evaluate when selecting cables for demanding 400G/800G workloads.
• Cabling options that improve efficiency: Real-world examples of how architecture choices affect deployment efficiency, including a 1024-GPU comparison.

Walk away with:
• A clear understanding of high-density interconnect options.
• Insight into proven deployment strategies and the trade-offs that matter.
• Confidence to make informed decisions that scale with AI workloads.

Speaker: Ryan Harris, Director, Systems Engineering (High-Speed Interconnect), Siemon
Date: Thursday, 2 October 2025
Time: 2:00–2:30 PM BST | 3:00–3:30 PM CET

This is the must-see tech talk for anyone planning, designing, or deploying high-density AI data centres. Don’t miss your chance to get the insight that can accelerate your next project and keep your infrastructure ready for the demands ahead. Register now via this link to secure your spot. For more from Siemon, click here.

Schneider Electric unveils AI DC reference designs
Schneider Electric, a French multinational specialising in energy management and industrial automation, has announced new data centre reference designs developed with NVIDIA, aimed at supporting AI-ready infrastructure and easing deployment for operators. The designs include integrated power management and liquid cooling controls, with compatibility for NVIDIA Mission Control, the company’s AI factory orchestration software. They also support deployment of NVIDIA GB300 NVL72 racks with densities of up to 142kW per rack.

Integrated power and cooling management
The first reference design provides a framework for combining power management and liquid cooling systems, including Motivair technologies. It is designed to work with NVIDIA Mission Control to help manage cluster and workload operations. This design can also be used alongside Schneider Electric’s other data centre blueprints for NVIDIA Grace Blackwell systems, allowing operators to manage the power and liquid cooling requirements of accelerated computing clusters.

A second reference design sets out a framework for AI factories using NVIDIA GB300 NVL72 racks in a single data hall. It covers four technical areas: facility power, cooling, IT space, and lifecycle software, with versions available under both ANSI and IEC standards.

Deployment and performance focus
According to Schneider Electric, operators are facing significant challenges in deploying GPU-accelerated AI infrastructure at scale. Its designs are intended to speed up rollout and provide consistency across high-density deployments.

Jim Simonelli, Senior Vice President and Chief Technology Officer at Schneider Electric, says, “Schneider Electric is streamlining the process of designing, deploying, and operating advanced AI infrastructure with its new reference designs.
"Our latest reference designs, featuring integrated power management and liquid cooling controls, are future-ready, scalable, and co-engineered with NVIDIA for real-world applications - enabling data centre operators to keep pace with surging demand for AI.”

Scott Wallace, Director of Data Centre Engineering at NVIDIA, adds, “We are entering a new era of accelerated computing, where integrated intelligence across power, cooling, and operations will redefine data centre architectures.

"With its latest controls reference design, Schneider Electric connects critical infrastructure data with NVIDIA Mission Control, delivering a rigorously validated blueprint that enables AI factory digital twins and empowers operators to optimise advanced accelerated computing infrastructure.”

Features of the controls reference design
The controls system links operational technology and IT infrastructure using a plug-and-play approach based on the MQTT protocol. It is designed to provide:

• Standardised publishing of power management and liquid cooling data for use by AI management software and enterprise systems
• Management of redundancy across cooling and power distribution equipment, including coolant distribution units and remote power panels
• Guidance on measuring AI rack power profiles, including peak power and quality monitoring

Reference design for NVIDIA GB300 NVL72
The NVIDIA GB300 NVL72 reference design supports clusters of up to 142kW per rack. A data hall based on this design can accommodate three clusters powered by up to 1,152 GPUs, using liquid-to-liquid coolant distribution units and high-temperature chillers. The design incorporates Schneider Electric’s ETAP and EcoStruxure IT Design CFD models, enabling operators to create digital twins for testing and optimisation. It builds on earlier blueprints for the NVIDIA GB200 NVL72, reflecting Schneider Electric’s ongoing collaboration with NVIDIA.
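To make the MQTT approach concrete, the sketch below shows the general publish pattern such a controls system relies on: each piece of facility equipment publishes telemetry on a standardised topic that management software can subscribe to. This is an illustrative sketch only; the topic layout, field names, and device identifiers are assumptions for the example, not Schneider Electric's actual schema, and the network transport is omitted.

```python
import json

def telemetry_message(site, device_type, device_id, readings):
    """Build an MQTT-style (topic, payload) pair for a piece of facility equipment.

    Topic segments follow a hypothetical site/device-type/device-id hierarchy,
    so subscribers can use wildcards (e.g. 'dc/hall1/cdu/+/telemetry') to
    collect all readings of one equipment class.
    """
    topic = f"dc/{site}/{device_type}/{device_id}/telemetry"
    payload = json.dumps(readings, sort_keys=True)  # stable key order for downstream diffing
    return topic, payload

# Example: a coolant distribution unit (CDU) reporting liquid-cooling data.
topic, payload = telemetry_message(
    site="hall1",
    device_type="cdu",
    device_id="cdu-03",
    readings={"supply_temp_c": 32.5, "return_temp_c": 42.1, "flow_lpm": 880},
)

print(topic)    # dc/hall1/cdu/cdu-03/telemetry
print(payload)
```

In a real deployment the (topic, payload) pair would be handed to an MQTT client library for publication to a broker; the point here is only the plug-and-play pattern the reference design describes, where standardised topics decouple equipment from the AI management software consuming the data.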
The company now offers nine AI reference designs covering a range of scenarios, from prefabricated modules and retrofits to purpose-built facilities for NVIDIA GB200 and GB300 NVL72 clusters. For more from Schneider Electric, click here.

Nokia, Supermicro partner for AI-optimised DC networking
Finnish telecommunications company Nokia has entered into a partnership with Supermicro, a provider of application-optimised IT systems, to deliver integrated networking platforms designed for AI, high-performance computing (HPC), and cloud workloads. The collaboration combines Supermicro’s advanced switching hardware with Nokia’s data centre automation and network operating system for cloud providers, hyperscalers, enterprises, and communications service providers (CSPs).

Building networks for AI-era workloads
Data centres are under increasing pressure from the rising scale and intensity of AI and cloud applications. Meeting these requirements demands a shift in architecture that places networking at the core, with greater emphasis on performance, scalability, and automation.

The joint offering integrates Supermicro’s 800G Ethernet switching platforms with Nokia’s Service Router Linux (SR Linux) Network Operating System (NOS) and Event-Driven Automation (EDA). Together, these form an infrastructure platform that automates the entire network lifecycle - from initial design through deployment and ongoing operations. According to the companies, customers will benefit from "a pre-validated solution that shortens deployment timelines, reduces operational costs, and improves network efficiency."

Industry perspectives
Cenly Chen, Chief Growth Officer, Senior Vice President, and Managing Director at Supermicro, says, "This collaboration gives our customers more choice and flexibility in how they build their infrastructure, with the confidence that Nokia’s SR Linux and EDA are tightly integrated with our systems.

"It strengthens our ability to deliver networked compute architectures for high-performance workloads, while simplifying orchestration and automation with a unified platform."
Vach Kompella, Senior Vice President and General Manager of the IP Networks Division at Nokia, adds, "Partnering with Supermicro further validates Nokia SR Linux and Event-Driven Automation as the right software foundation for today’s data centre and IP networks. "It also gives us significantly greater reach into the enterprise market through Supermicro’s extensive channels and direct sales, aligning with our strategy to expand in cloud, HPC, and AI-driven infrastructure." For more from Nokia, click here.

World's first AI internet exchange launched by DE-CIX
DE-CIX, an internet exchange (IX) operator, has announced the launch of what it calls the world’s first AI Internet Exchange (AI-IX), making its global network of internet exchanges “AI-ready.” The company has completed the first phase of the rollout, connecting more than 50 AI-related networks – including providers of AI inference and GPU services, alongside a range of cloud platforms – to its ecosystem. DE-CIX says it now operates over 160 cloud on-ramps globally, supported by its proprietary multi-AI routing system. The new exchange capabilities are available across all DE-CIX locations worldwide, including its flagship site in Frankfurt.

Two-phase rollout
The second phase of deployment will see DE-CIX AI-IX made Ultra Ethernet-ready, designed to support geographically distributed AI training as workloads move away from large centralised data centres. Once complete, the operator says it will be the first to offer an exchange capable of supporting both AI inference and AI training.

AI inference – where trained models are applied in real-world use cases – depends on low-latency, high-security connections. According to DE-CIX CEO Ivo Ivanov, the growth of AI agents and AI-enabled devices is creating new demand for direct interconnection. “This is the core benefit of the DE-CIX AI-IX, which uses the unique DE-CIX AI router to enable seamless multi-agent inference for today’s complex use-cases and tomorrow’s innovation in all industry segments,” Ivo says.

Ultra Ethernet and AI training
Phase two focuses on AI model training. DE-CIX CTO Thomas King says that Ultra Ethernet, a new networking protocol optimised for AI, will enable disaggregated computing across metropolitan areas. This could reduce reliance on centralised data centres and create more cost-effective, resilient private AI infrastructure. “Until now, huge, centralised data centres have been needed to quickly process AI computing loads on parallel clusters,” Thomas explains.
“Ultra Ethernet is driving the trend towards disaggregated computing, enabling AI training to be carried out in a geographically distributed manner.” DE-CIX hardware is already compatible with Ultra Ethernet and the operator plans to introduce it once network equipment vendors make the necessary software features available. For more from DE-CIX, click here.

Macquarie, Dell bring AI factories to Australia
Australian data centre operator Macquarie Data Centres, part of Macquarie Technology Group, is collaborating with US multinational technology company Dell Technologies with the aim of providing a secure, sovereign home for AI workloads in Australia. Macquarie Data Centres will host the Dell AI Factory with NVIDIA within its AI and cloud data centres. This approach seeks to power enterprise AI, private AI, and neo cloud projects while achieving high standards of data security within sovereign data centres.

This development will be particularly relevant for critical infrastructure providers and highly regulated sectors such as healthcare, finance, education, and research, which have strict regulatory compliance conditions relating to data storage and processing. The collaboration hopes to give them the secure, compliant foundation needed to build, train, and deploy advanced AI applications in Australia, such as AI digital twins, agentic AI, and private LLMs.

Answering the Government’s call for sovereign AI
The Australian Government has linked the data centre sector to its 'Future Made in Australia' policy agenda. Data centres and AI also play an important role in the Australian Federal Government’s new push to improve Australia’s productivity.

“For Australia's AI-driven future to be secure, we must ensure that Australian data centres play a core role in AI, data, infrastructure, and operations,” says David Hirst, CEO, Macquarie Data Centres. “Our collaboration with Dell Technologies delivers just that: the perfect marriage of global tech and sovereign infrastructure.”

Sovereignty meets scalability
Dell AI Factory with NVIDIA infrastructure and software will be supported by Macquarie Data Centres’ newest purpose-built AI and cloud data centre, IC3 Super West. The 47MW facility is, according to the company, "purpose-built for the scale, power, and cooling demands of AI infrastructure." It is due to be ready in mid-2026, with its entire end-state power already secured.
“Our work with Macquarie Data Centres helps bring the Dell AI Factory with NVIDIA vision to life in Australia,” comments Jamie Humphrey, General Manager, Australia & New Zealand Specialty Platforms Sales, Dell Technologies ANZ. “Together, we are enabling organisations to develop and deploy AI as a transformative and competitive advantage in Australia in a way that is secure, sovereign, and scalable.” Macquarie Technology Group and Dell Technologies have been collaborating for more than 15 years. For more from Macquarie Data Centres, click here.


