Latest News


De-risking data centre construction
In this article for DCNN, Straightline Consulting’s Craig Eadie discusses how the rapid rise of hyperscale and AI-driven data centres is amplifying project complexity and risk, and why comprehensive commissioning from day one is now critical. Craig highlights how long-lead equipment delays, tightening grid constraints, and beginning commissioning before quality control is complete can derail builds, stressing the need for commissioning teams to actively manage supply chains, verify equipment readiness, and address power availability early:

How can comprehensive commissioning de-risk data centre construction?

Data centres are the backbone of the global economy. As more businesses and governments depend on digital infrastructure, demand for hyperscale and megascale facilities is surging. With that growth comes greater risk. Project timelines are tighter, systems are more complex, and the cost of failure has never been higher. From small enterprise facilities to multi-megawatt hyperscale builds, it’s critical that commissioning teams control and mitigate risk throughout the process.

It’s rarely one big crisis that causes a data centre project to fail. More often, it’s a chain of small missteps - from quality control and documentation to equipment delivery or communication - that compound into disaster. Taking a comprehensive approach to commissioning, from day one to handover, can significantly de-risk the process.

Managing upwards (in the supply chain)

It wasn’t long ago that a 600 kW project was considered large. Now, the industry routinely delivers facilities of 25, 40, or even 60 MW. Both sites and the delivery process are becoming more complex as well, with advanced systems, increasing digitisation, and external pressures on manufacturers and their supply chains. However, the core challenges remain the same; it’s just the consequences that have become far more serious.

Long-lead equipment like generators or switchgear can have wait times of 35 to 50 weeks. Clients often procure equipment a year in advance, but that doesn’t guarantee it will arrive on time. There’s no use expecting delivery on 1 July if the manufacturer is still waiting on critical components.

Commissioning teams can de-risk this process by actively managing the equipment supply chain. Factory visits to check part inventories and verify assembly schedules can ensure that if a generator is going to be late, everyone knows early enough to re-sequence commissioning activities and keep the project moving. The critical path may shift, but the project doesn’t grind to a halt.

Managing the supply chain on behalf of the customer is an increasingly important part of commissioning complex, high-stakes facilities like data centres. Fortunately, many companies are realising that spending a little up front is better than paying a few hundred thousand dollars every week once the project is late.

Securing power

More and more clients are facing grid limitations. As AI applications grow, so too do power demands, and utilities often can’t keep pace. A data centre without power is just an expensive warehouse, which is why some clients are turning to behind-the-meter solutions like near-site wind farms or rooftop solar to secure their timelines, while bigger players are negotiating preferential rates and access with utilities. In many markets, however, this approach is meeting increasingly stern regulatory pushback.

You can have a perfectly coordinated build, but if the grid can’t deliver power on time, it’s game over. Power availability needs to be considered as early as possible in the process, and sometimes you have to get creative about solving these challenges.

Commissioning without quality control is just firefighting

One of the most common mistakes we see is starting commissioning before quality control is complete. This turns commissioning into a fault-finding exercise instead of a validation process. The intended role of commissioning is to confirm that systems have been installed correctly and work as designed. If things aren’t ready, commissioning becomes firefighting, and that’s where schedules slip.

Data centres are not forgiving environments. You can’t simply shut down a hyperscale AI facility to fix an oversight after it’s gone live. There is no “more or less right” in commissioning. It’s either right or it isn’t.

Technology-driven transparency and communication

One of the biggest improvements we’ve seen in recent years has come through better project visibility. By using cutting-edge platforms like Facility Grid, commissioning teams have a complete cradle-to-grave record of every asset in a facility. If a switchboard is built in a factory in Germany and installed on a project in France, it’s tracked from manufacturing to installation. Every test and every piece of documentation is uploaded. If a server gets plugged into a switchboard, the platform knows who did it, when they did it, and what comments were made, with a photographic backup of every step.

It means that commissioning, construction, and design teams can collaborate across disciplines with full transparency. Tags and process gates ensure that no stage is marked complete until all required documentation and quality checks are in place. That traceability removes ambiguity. It helps keep everyone accountable and on the same page, even on the most complex projects, where adjustments are an essential part of reducing the risk of delays and disruption.

The biggest differences between a project that fails and one that succeeds are communication and clear organisational strategies. Precise processes, reliable documentation, early engagement, and constant communication - everyone on the project team needs to be pulling in the same direction, whether they’re part of the design, construction, or commissioning and handover processes.

This isn’t just about checking boxes and handing over a building; commissioning is about de-risking the whole process so that you can deliver a complex, interconnected, multi-million-pound facility that works, that’s safe, and that will stay operational long after the servers spin up and the clients move in.

In the past, the commissioning agent was typically seen as a necessary evil. Now, in the data centre sector and other high-stakes, mission-critical industries, commissioning is a huge mitigator of risk and an enabler of success.
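The "process gate" idea described above can be made concrete with a small sketch. The following Python is a hypothetical illustration only - not Facility Grid's actual data model - and the stage and document names are invented:

from dataclasses import dataclass, field

@dataclass
class Stage:
    """One commissioning stage for an asset (e.g. a switchboard install)."""
    name: str
    required_docs: set[str]                      # evidence that must be uploaded
    uploaded_docs: set[str] = field(default_factory=set)
    qc_passed: bool = False                      # quality-control sign-off

    def gate_open(self) -> bool:
        # The gate refuses completion until QC has signed off and every
        # required document is on file.
        return self.qc_passed and self.required_docs <= self.uploaded_docs

stage = Stage("switchboard-install", {"factory-test-report", "install-photos"})
stage.uploaded_docs.add("factory-test-report")
print(stage.gate_open())   # False: photos missing, QC not signed off
stage.uploaded_docs.add("install-photos")
stage.qc_passed = True
print(stage.gate_open())   # True: the stage may now be marked complete

The same pattern scales to any asset type: the gate simply refuses completion until the evidence trail is whole.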

Europe races to build its own AI backbone
Recent outages across global cloud infrastructure have once again served as a reminder of how deeply Europe depends on foreign hyperscalers. When platforms running on AWS or services protected by Cloudflare fail, European factories, logistics hubs, retailers, and public services can stall instantly.

US-based cloud providers currently dominate Europe’s infrastructure landscape. According to market data, Amazon, Microsoft, and Google together control roughly 70% of Europe’s public cloud market. In contrast, all European providers combined account for only about 15%, a share that has declined sharply over the past decade. For European enterprises, this means limited leverage over resilience, performance, data governance, and long-term sovereignty.

This structural dependency is now extending from cloud infrastructure directly into artificial intelligence and its underlying investments. Between 2018 and 2023, US companies attracted more than €120 billion (£104 billion) in private AI investment, while the European Union drew about €32.5 billion (£28 billion) over the same period. In 2024 alone, US-based AI firms raised roughly $109 billion (£81 billion), more than six times the total private AI investment in Europe that year.

Europe is therefore trying to close the innovation gap while simultaneously tightening regulation, creating a paradox in which calls for digital sovereignty grow louder even as reliance on non-European infrastructure deepens. The European Union’s Apply AI Strategy is designed to move AI out of research environments and into real industrial use, backed by more than €1 billion in funding. However, most of the computing power, cloud platforms, and model infrastructure required to deploy these systems at scale still comes from outside Europe. This creates a structural risk: even as AI adoption accelerates inside European industry, much of the strategic control over its operation may remain in foreign hands.

Why industrial AI is Europe’s real testing ground

For any large-scale technology strategy to succeed, it must be tested and refined through real-world deployment, not only shaped at the policy level. The effectiveness of Europe’s AI push will ultimately depend on how quickly new rules, funding mechanisms, and technical standards translate into working systems, and how fast feedback from practice can inform the next iteration.

This is where industrial environments become especially important. They produce large amounts of real-time data, and the results of AI use are quickly visible in productivity and cost. As a result, industrial AI is becoming one of the main testing grounds for Europe’s AI ambitions. The companies applying AI in practice will be the first to see what works, what does not, and what needs to be adjusted.

According to Giedrė Rajuncė, CEO and co-founder of GREÏ, an AI-powered operational intelligence platform for industrial sites, this shift is already visible on the factory floor, where AI is changing how operations are monitored and optimised in real time.

She notes, “AI can now monitor operations in real time, giving companies a new level of visibility into how their processes actually function. I call it a real-time revolution, and it is available at a cost no other technology can match. Instead of relying on expensive automation as the only path to higher effectiveness, companies can now plug AI-based software into existing cameras and instantly unlock 10–30% efficiency gains.”

She adds that Apply AI reshapes competition beyond technology alone, stating, “Apply AI is reshaping competition for both talent and capital. European startups are now competing directly with US giants for engineers, researchers, and investors who are increasingly focused on industrial AI. From our experience, progress rarely starts with a sweeping transformation. It starts with solving one clear operational problem where real-time detection delivers visible impact, builds confidence, and proves return on investment.”

The data confirms both movement and caution. According to Eurostat, 41% of large EU enterprises had adopted at least one AI-based technology in 2024. At the same time, a global survey by McKinsey & Company shows that 88% of organisations worldwide are already using AI in at least one business function.

“Yes, the numbers show that Europe is still moving more slowly,” Giedrė concludes. “But they also show something even more important. The global market will leave us no choice but to accelerate. That means using the opportunities created by the EU’s push for AI adoption before the gap becomes structural.”

BAC immersion cooling tank gains Intel certification
BAC (Baltimore Aircoil Company), a provider of data centre cooling equipment, has received certification for its immersion cooling tank as part of the Intel Data Center Certified Solution for Immersion Cooling, covering fourth- and fifth-generation Xeon processors. The programme aims to validate immersion technologies that meet the efficiency and sustainability requirements of modern data centres.

The Intel certification process involved extensive testing of immersion tanks, cooling fluids, and IT hardware. It was developed collaboratively by BAC, Intel, ExxonMobil, Hypertec, and Micron. The programme also enables Intel to offer a warranty rider for single-phase immersion-cooled Xeon processors, providing assurance on durability and hardware compatibility.

Testing was carried out at Intel’s Advanced Data Center Development Lab in Hillsboro, Oregon. BAC’s immersion cooling tanks, including its CorTex technology, were used to validate performance, reliability, and integration between cooling fluid and IT components.

“Immersion cooling represents a critical advancement in data centre thermal management, and this certification is a powerful validation of that progress,” says Jan Tysebeart, BAC’s General Manager of Data Centers. “Our immersion cooling tanks are engineered for the highest levels of efficiency and reliability.

"By participating in this collaborative certification effort, we’re helping to ensure a trusted, seamless, and superior experience for our customers worldwide.”

Joint testing to support industry adoption

The certification builds on BAC’s work in high-efficiency cooling design. Its Cobalt immersion system, which combines an indoor immersion tank with an outdoor heat-rejection unit, is designed to support low Power Usage Effectiveness values while improving uptime and sustainability.

Jan continues, “Through rigorous joint testing and validation by Intel, we’ve proven that immersion cooling can bridge IT hardware and facility infrastructure more efficiently than ever before.

“Certification programmes like this one are key to accelerating industry adoption by ensuring every element - tank, fluid, processor, and memory - meets the same high standards of reliability and performance.”

For more from BAC, click here.

Portable data centre to heat Scandinavian towns
Power Mining, a Baltics-based manufacturer of personal Bitcoin mining devices, has developed a portable data centre that heats towns using residual heat from Bitcoin mining. The first two data centres, housed in shipping containers, will be shipped to a Scandinavian town, where they will be connected to the municipal heating system. In one year, one Power Mining data centre can reportedly mine up to 9.7 Bitcoin and heat up to 2000 homes. Drawing 1.6 MW of power, the data centre achieves 95% energy efficiency, providing the municipality with 1.52 MW of heat.

A portable data centre design

The data centres are built in Latvia, at a cost starting from €300,000 (£262,000). Because they are housed in shipping containers, they can easily be shipped around the world. Each data centre is made up of eight server closets, each outfitted with 20 Whatsminer M63S++ servers that consume 10 kW of electricity apiece and give off an equivalent amount of heat while mining Bitcoin, raising the incoming coolant temperature by 10-14°C.

Each server closet is equipped with warm and cool fluid collectors, which send the warmed liquid to a built-in heat pump station, where a 1.7 MW heat exchanger redistributes heat from the data centre to the town’s heating grid. If the heating grid does not require additional heat from the data centre, the heated fluid is redirected to a built-in dry cooler, which adjusts the temperature to suit the needs of the servers. This way, the data centre is able to cool itself while also helping to balance the municipality’s heating grid.

Steps towards increased energy efficiency

The development of a passive heating data centre is one step towards increased energy efficiency in Bitcoin mining. While classical data centres typically collect heat at approximately 27°C, Power Mining says its data centres can collect heat at up to 65°C, providing cities with a more efficient source of heat.

European data centres already account for more than 3% of the continent’s total electricity consumption, a figure expected to surpass 150 TWh annually - equivalent to Poland’s entire electricity demand. Up to 40% of this energy is turned into heat, which is most often released into the atmosphere. If this energy were collected and redirected into heating, it could supply up to 10 million European households with heat. Heat collection from data centres could become one of the most effective ways to combine digitalisation and climate goals.
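A quick back-of-the-envelope check of the figures above - a sketch in Python using only the numbers quoted in this article:

# Back-of-the-envelope check of the figures quoted in the article.
closets = 8
servers_per_closet = 20
server_power_kw = 10           # each Whatsminer M63S++ draws ~10 kW

total_power_mw = closets * servers_per_closet * server_power_kw / 1000
print(total_power_mw)          # 1.6 MW, matching the stated capacity

heat_recovery_efficiency = 0.95
usable_heat_mw = total_power_mw * heat_recovery_efficiency
print(usable_heat_mw)          # 1.52 MW delivered to the heating grid

The 1.52 MW of recovered heat is what the 1.7 MW heat exchanger passes on to the municipal grid, with headroom to spare.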

Aggreko expands liquid-cooled load banks for AI DCs
Aggreko, a British multinational temporary power generation and temperature control company, has expanded its liquid-cooled load bank fleet by 120MW to meet rising global demand for commissioning equipment used in high-density data centres. The company plans to double this capacity in 2026, supporting deployments across North America, Europe, and Asia, as operators transition to liquid cooling to manage the growth of AI and high-performance computing.

Increasing rack densities, now reaching between 300kW and 500kW in some environments, have pushed conventional air-cooling systems to their limits. Liquid cooling is becoming the standard approach, offering far greater heat removal efficiency and significantly lower power consumption. As these systems mature, accurate simulation of thermal and electrical loads has become essential during commissioning to minimise downtime and protect equipment.

The expanded fleet enables Aggreko to provide contractors and commissioning teams with equipment capable of testing both primary and secondary cooling loops, including chiller lines and coolant distribution units. The load banks also simulate electrical demand during integrated systems testing.

Billy Durie, Global Sector Head – Data Centres at Aggreko, says, “The data centre market is growing fast, and with that speed comes the need to adopt energy-efficient cooling systems. With this come challenges that demand innovative testing solutions.

“Our multi-million-pound investment in liquid-cooled load banks enables our partners - including those investing in hyperscale data centre delivery - to commission their facilities faster, reduce risks, and achieve ambitious energy efficiency goals.”

Supporting commissioning and sustainability targets

Liquid-cooled load banks replicate the heat output of IT hardware, enabling operators to validate cooling performance before systems go live. This approach can improve Power Usage Effectiveness and Water Usage Effectiveness while reducing the likelihood of early operational issues.

Manufactured with corrosion-resistant materials and advanced control features, the equipment is designed for use in environments where reliability is critical. Remote operation capabilities and simplified installation procedures are also intended to reduce commissioning timelines.

With global data centre power demand projected to rise significantly by 2030, driven by AI and high-performance computing, the ability to validate cooling systems efficiently is increasingly important. Aggreko says it also provides commissioning support as part of project delivery, working with data centre teams to develop testing programmes suited to each site.

Billy continues, “Our teams work closely with our customers to understand their infrastructure, challenges, and goals, developing tailored testing solutions that scale with each project’s complexity.

"We’re always learning from projects, refining our design and delivery to respond to emerging market needs such as system cleanliness, water quality management, and bespoke, end-to-end project support.”

Aggreko states that the latest investment strengthens its ability to support high-density data centre construction and aligns with wider moves towards more efficient and sustainable operations.

Billy adds, “The volume of data centre delivery required is unprecedented. By expanding our liquid-cooled load bank fleet, we’re scaling to meet immediate market demand and to help our customers deliver their data centres on time.

"This is about providing the right tools to enable innovation and growth in an era defined by AI.”

For more from Aggreko, click here.

AVK and Rolls-Royce form new capacity agreement
AVK, a supplier of innovative power solutions for data centres in Europe, and Rolls-Royce’s Power Systems division, an expert in power solutions, have announced a new multi-year capacity framework as an addition to their established System Integrator Agreement.

The new capacity framework cements a closer collaboration between the two companies, with a focus on increasing industrial capacity for genset orders whilst accelerating joint innovation across the data centre and critical power markets. It comes 12 months after AVK announced a record-breaking year of sales and collaboration between the two companies, with 2024 seeing AVK deliver its 500th mtu generator from Rolls-Royce. The HVO-ready generators were primarily sold into the data centre sector.

Under the new Memorandum of Understanding (MoU) agreed between the parties, the relationship moves considerably beyond 2024’s achievements to become a strategically integrated, longer-term alliance. The new framework formalises a five-year capacity partnership between Rolls-Royce Power Systems and AVK, with Rolls-Royce increasing supply and AVK committing to order volume. A parallel six-year master framework designates AVK as the exclusive System Integrator for mtu generator sets across the UK and Ireland until 2031.

The agreement defines commercial terms, pricing, delivery lead times, compliance standards, and anti-bribery obligations, positioning AVK as Rolls-Royce’s strategic partner for turnkey, mission-critical energy solutions. As a result, there will be stronger alignment across production planning, product roadmaps, and go-to-market activity, accelerating innovation and boosting the resilience of supply chains. The increased capacity under the new framework will provide end-to-end certainty for customers that specify mtu solutions for decarbonisation and resilience programmes. Finally, the new agreement supports AVK’s ambition to serve the expected surge in data centre demand while reinforcing Rolls-Royce’s commitment to scalable power platforms.

“We are pleased to have signed a Capacity Agreement and extended our System Integrator contract with AVK,” says Vittorio Pierangeli, Senior VP Power Generation at Rolls-Royce. “This agreement reflects the commitment from both parties to continue our collaboration and support the rapidly growing European data centre market. With this in place, we look forward to working closely together in the coming years.

“This is an exciting time for our industry, where innovation and collaboration are more important than ever. Technology shifts are accelerating, and the acceptance of alternative fuels is just one example of how the market is evolving. We are investing extensively to introduce our gas product range into this space, complementing our established diesel portfolio. With both diesel and gas solutions, as well as energy storage options, Rolls-Royce is uniquely positioned to meet future demands. Partners such as AVK will also be well placed to capitalise on these opportunities. We look forward to strengthening our partnership as the market continues to develop."

Ben Pritchard, CEO of AVK, believes the new framework deal is testament to the company’s long-standing partnership with Rolls-Royce. He comments, “This is not simply about growth. It’s about a much stronger, more enduring collaboration. With Rolls-Royce one of our valued technology partners, we have both achieved impressive growth, individually as well as in our work together. Driving growth in innovative solutions on an international scale, our joint projects can be seen succeeding across Europe. We’re excited for this to develop even further.”

Ben continues, “Our relationship with Rolls-Royce has evolved in an exciting way, and this new multi-year capacity agreement secures access to equipment across the UK and Europe just as the data centre market is about to boom. The new framework puts us in a position where we have the industrial capacity to deliver technology in line with the growing demands of the market. This makes AVK a pivotal part of the coming energy transition. AVK and Rolls-Royce are positioned to deliver more than equipment – together we will provide integrated, future-proofed power solutions that combine industrial scale, sustainability, and the delivery certainty required by today’s mission-critical operators.”

For more from AVK, click here.

Study finds consumer GPUs can cut AI inference costs
A peer-reviewed study has found that consumer-grade GPUs, including Nvidia’s RTX 4090, can significantly reduce the cost of running large language model (LLM) inference.

The research, published by io.net - a US developer of decentralised GPU cloud infrastructure - and accepted for the 6th International Artificial Intelligence and Blockchain Conference (AIBC 2025), provides the first open benchmarks of heterogeneous GPU clusters deployed on the company’s decentralised cloud platform.

The paper, Idle Consumer GPUs as a Complement to Enterprise Hardware for LLM Inference, reports that clusters built from RTX 4090 GPUs can deliver between 62% and 78% of the throughput of enterprise-grade H100 hardware at roughly half the cost. For batch processing or latency-tolerant workloads, token costs fell by up to 75%. The study also notes that, while H100 GPUs remain more energy efficient on a per-token basis, extending the life of existing consumer hardware and using renewable-rich grids can reduce overall emissions.

Aline Almeida, Head of Research at IOG Foundation and lead author of the study, says, “Our findings demonstrate that hybrid routing across enterprise and consumer GPUs offers a pragmatic balance between performance, cost, and sustainability.

"Rather than a binary choice, heterogeneous infrastructure allows organisations to optimise for their specific latency and budget requirements while reducing carbon impact.”

Implications for LLM development and deployment

The research outlines how AI developers and MLOps teams can use mixed hardware clusters to improve cost-efficiency. Enterprise GPUs can support real-time applications, while consumer GPUs can be deployed for batch tasks, development, overflow capacity, and workloads with higher latency tolerance. Under these conditions, the study reports that organisations can achieve near-H100 performance with substantially lower operating costs (a simple sketch of this routing logic follows the key findings below).

Gaurav Sharma, CEO of io.net, comments, “This peer-reviewed analysis validates the core thesis behind io.net: that the future of compute will be distributed, heterogeneous, and accessible.

"By harnessing both data-centre-grade and consumer hardware, we can democratise access to advanced AI infrastructure while making it more sustainable.”

The company also argues that the study supports its position that decentralised networks can expand global compute capacity by making distributed GPU resources available to developers through a single, programmable platform.

Key findings include:

• Cost-performance ratios — Clusters of four RTX 4090 GPUs delivered 62% to 78% of H100 throughput at around half the operational cost, achieving the lowest cost per million tokens ($0.111–0.149).

• Latency profiles — H100 hardware maintained sub-55ms P99 time-to-first-token even at higher loads, while consumer GPU clusters were suited to workloads tolerating 200–500ms tail latencies, such as research, development environments, batch jobs, embeddings, and evaluation tasks.
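As a rough illustration of the hybrid routing idea the study describes - a hypothetical sketch in Python, not io.net's actual scheduler - a router could pick the cheapest cluster whose tail latency fits each request's budget. Only the RTX 4090 cost range comes from the study; the H100 cost figure is an assumption consistent with "roughly half the cost":

from dataclasses import dataclass

@dataclass
class Cluster:
    name: str
    p99_ttft_ms: float         # 99th-percentile time-to-first-token
    cost_per_m_tokens: float   # USD per million tokens

# Figures loosely based on the ranges reported in the study; the H100
# cost is an assumed placeholder, not a benchmarked number.
CLUSTERS = [
    Cluster("h100", p99_ttft_ms=55, cost_per_m_tokens=0.30),
    Cluster("rtx4090", p99_ttft_ms=400, cost_per_m_tokens=0.13),
]

def route(latency_budget_ms: float) -> Cluster:
    """Return the cheapest cluster whose tail latency fits the budget."""
    eligible = [c for c in CLUSTERS if c.p99_ttft_ms <= latency_budget_ms]
    if not eligible:
        raise ValueError("no cluster meets the latency budget")
    return min(eligible, key=lambda c: c.cost_per_m_tokens)

print(route(100).name)   # h100: an interactive request needs low latency
print(route(500).name)   # rtx4090: a batch job tolerates higher latency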

A round-up of DataCentres Ireland 2025
DataCentres Ireland was again heralded as a success by exhibitors, alongside many of the visitors and speakers, as it delivered more attendees, more content, and more delegates than ever before. Total attendance was up 23% year-on-year across the two days of the event, drawing over 3000 people interested in the sector.

Louisa Cilenti, Chief Legal Officer at Clear Decisions, notes, “DataCentres Ireland was an outstanding forum to get underneath the strategic issues shaping Ireland’s future as a global data centre hub.

"From Minister Dooley’s address to the highly practical breakout sessions, the day struck the perfect balance between policy depth and real-world innovation.

"The relaxed venue made meaningful networking effortless and, as a startup, Clear Decisions really valued the genuine peer environment. A fantastic event that brings the whole ecosystem into one conversation.”

Day 1 was busy from the outset, starting with a keynote address from Minister of State Timmy Dooley TD, who outlined the Irish Government’s recognition of the essential role data centres play in modern society and its wish to work with the data centre community, both to leverage AI and in recognition of the sector’s impact in attracting foreign direct investment. Day 1 attendance grew a massive 34.7% year-on-year, with exhibitors and attendees commenting on the buzz in the exhibition hall.

Day 2 got off to a slower start, though footfall built and there was a good feeling in the hall. The day brought over 600 new individuals for exhibitors to network and do business with, similar to the numbers achieved on Day 2 in 2024.

This was the largest visitor and total attendance in the event's 15-year history, with over 3000 attendees across the two days. The number of exhibitors also grew, with the exhibition featuring over 140 individual stands and showcasing more than 180 companies and thousands of brands.

An event of opportunities

Exhibitors and attendees acknowledged that DataCentres Ireland provided a professional business environment where people were able to network with colleagues; see the latest in products, services, technology, and equipment; and listen to industry leaders and experts discussing the latest issues, approaches, and ideas affecting data centres and critical environments.

Paul Flanagan, EMEA Regional Director West, Camfil, comments, “As usual, a well-organised event by Stepex. Great variety of exhibitors and visitors, so plenty of networking opportunities, along with being able to see all the latest and greatest technologies and services available on the market today.

"The data centre segment is getting smarter and more collaborative, and this event guarantees any visitor the opportunity to appreciate that in many ways.”

Exhibitors commented on the quality of attendees present at DataCentres Ireland and the lack of 'time wasters' at the show, giving them more time to engage with buyers, discuss their needs, and forge lasting business contacts and relationships.

The conference programme featured 90 international and local experts and industry leaders, addressing a wide range of issues, from data centre development, regionalisation, and training and staff retention to the impact of AI on data centres, decarbonisation, energy reduction, and heat re-use.

One of the many highlights was an insightful presentation by Mark Foley, CEO of Mark Foley Strategic Solutions, on the state of the Irish grid network and what could be done to make the grid more flexible for data centres. All presentations were broadly supportive of data centres and their continued development in Ireland.

Mark comments, “An excellent conference at a challenging time for the sector in Ireland. The key issues were discussed, and practical and innovative solutions are now emerging, if Government and regulators make the right decisions.”

For more from DataCentres Ireland, click here.

Macquarie tops out $350M, 47MW AI data centre
The New South Wales Treasurer, Hon. Daniel Mookhey MP, today poured the final concrete on Macquarie Data Centres’ newest 47 megawatt (MW) facility, IC3 Super West. The ceremony marks the completion of the building’s external structure and a major milestone toward its scheduled opening in September 2026.

IC3 Super West will be the only data centre to add new AI capacity to Sydney’s north zone in 2026, Macquarie states. With all the end-state power already secured, the facility is being purpose-built to meet growing demand from hyperscalers, enterprises, and neoclouds for GPU and high-performance computing capacity in the Tier 1 hub. The facility is part of Macquarie Data Centres’ 200MW development pipeline adding more AI and cloud capacity to the market.

NSW Treasurer, Hon. Daniel Mookhey MP, says, “Companies like Macquarie Data Centres keep investing, keep expanding, and keep believing that NSW can be a global home for high-tech infrastructure. And it happens because the government has chosen to take planning and investment delivery seriously.

“In the years ahead, thousands of businesses will run smarter because this building exists. Research will accelerate because this building exists. AI capability will expand because this building exists. And NSW will be more competitive – globally competitive – because this building exists.”

Sydney is seeking to cement its position as a leading hub for AI, cloud, and digital innovation, supported by new initiatives such as the NSW Government’s new Investment Delivery Authority, which aims to accelerate future technology infrastructure projects like Macquarie Data Centres’ recently announced 150MW planned site.

Macquarie Data Centres Group Executive, David Hirst, comments, “IC3 Super West is the next data centre in our pipeline of sites planned to add circa 200MW of AI and cloud capacity in Sydney. Demand for high-density AI infrastructure is the most significant megatrend we’ve seen in over 25 years in the data centre industry.

“IC3 Super West, opening in Q3 2026, is purpose-built for the high-density power and liquid cooling demands of new AI technology. Sovereign data centres keep Australia competitive in the global market and are the foundation of our AI future.”

IC3 Super West is the third facility to be built at the provider’s 65MW Macquarie Park Data Centre Campus in Sydney’s north zone and is designed to support a hybrid mix of air and liquid cooling for direct-to-chip, high-density AI and cloud workloads. Phase 1 of the build is a circa $350 million investment and will deliver the complete core and shell, with 6MW of IT load fitted out.

For more from Macquarie, click here.

OpenNebula, Canonical partner on cloud security
OpenNebula Systems, a global open-source technology provider, has formed a new partnership with UK developer Canonical to offer Ubuntu Pro as a built-in, security-maintained operating system for hypervisor nodes running OpenNebula. The collaboration is intended to streamline installation, improve long-term maintenance, and reinforce security and compliance for enterprise cloud environments.

OpenNebula is used for virtualisation, cloud deployment, and multi-cluster Kubernetes management. It integrates with a range of technology partners, including NetApp and Veeam, and is supported by relationships with NVIDIA, Dell Technologies, and Ampere. These partnerships support its use in high-performance and AI-focused environments.

Beginning with the OpenNebula 7.0 release, Ubuntu Pro becomes an optional operating system for hypervisor nodes. Canonical’s long-term security maintenance, rapid patch delivery, and established update process are designed to help teams manage production systems where new vulnerabilities emerge frequently.

Integrated security maintenance for hypervisor nodes

With Ubuntu Pro embedded into OpenNebula workflows, users will gain access to extended security support, expedited patching, and coordinated lifecycle updates. The approach aims to reduce operational risk and maintain compliance across large-scale, distributed environments.

Constantino Vázquez, VP of Engineering Services at OpenNebula Systems, explains, “Our mission is to provide a truly sovereign and secure multi-tenant cloud and edge platform for enterprises and public institutions.

"Partnering with Canonical to integrate Ubuntu Pro into OpenNebula strengthens our customers’ confidence by combining open innovation with long-term stability, security, and compliance.”

Mark Lewis, VP of Application Services at Canonical, adds, “Ubuntu Pro provides the secure foundation that modern cloud and AI infrastructures demand.

"By embedding Ubuntu Pro into OpenNebula, we are providing enterprises [with] a robust and compliance-ready environment from the bare metal to the AI workload - making open source innovation ready for enterprise-grade operations.”
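As a loose illustration of what a security-maintained hypervisor node can mean in practice, here is a hypothetical pre-flight check in Python. It assumes the `pro` CLI from Canonical's ubuntu-advantage-tools is installed and that its JSON status output includes an `attached` field; the check itself is invented for illustration and is not part of OpenNebula:

import json
import subprocess

def ubuntu_pro_attached() -> bool:
    """Return True if this host is attached to an Ubuntu Pro subscription.

    Hypothetical pre-flight check for a hypervisor node. Shells out to
    `pro status --format json` and reads the `attached` field (assumed
    to be present in the JSON output).
    """
    result = subprocess.run(
        ["pro", "status", "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout).get("attached", False)

if __name__ == "__main__":
    # Raises FileNotFoundError if the `pro` CLI is not installed.
    if not ubuntu_pro_attached():
        raise SystemExit("node is not attached to Ubuntu Pro; aborting")
    print("Ubuntu Pro attached: node eligible to join the cluster")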


