News


AirTrunk announces second hyperscale data centre campus in Melbourne
AirTrunk, a hyperscale data centre specialist in the Asia Pacific and Middle East region, has announced the acquisition of a new site in Melbourne’s North West for its second Melbourne campus, to be known as MEL2.

With over 354MW of capacity, MEL2 will add more than AUD $5 billion (£2.48bn) in new direct investment and lift AirTrunk’s total deployable capacity in Melbourne to over 630MW. Across MEL1 and MEL2, AirTrunk’s investment in the city’s digital infrastructure will exceed AUD $7 billion (£3.45bn), delivering one of the largest economic and productivity boosts to Victoria.

MEL2 is expected to create over 4,000 jobs during its multi-phase construction and more than 200 direct jobs once operational. In addition, AirTrunk will boost the local supply chain, creating in excess of 1,000 full-time jobs to support its data centres.

The new site will complement AirTrunk’s existing Australian campuses, giving global AI and cloud customers greater geographical diversity across the Sydney and Melbourne markets. AirTrunk will operate five campuses nationally - SYD1 (121MW+), SYD2 (158MW+), SYD3 (330MW+), MEL1 (276MW+), and MEL2 (354MW+) - delivering a combined capacity of more than 1.2GW.

Robin Khuda, Founder & CEO of AirTrunk, says, “Australia has set bold ambitions to become a global AI hub, and demand for AI-ready infrastructure continues to grow. MEL2 is part of our response. Working closely with Invest Victoria, we’re expanding in Melbourne to support Australia’s AI future while creating new opportunities for local businesses and communities.

“AI data centres require significant upfront investment, and AirTrunk’s strong balance sheet and proven regional track record help give global AI customers confidence in reliable, on-time deployment in Australia.”

Victorian Premier, the Hon. Jacinta Allan, adds, “Victoria is leading Australia’s digital transformation, and investments like this will strengthen our state’s position as a hub for cloud and AI innovation, create thousands of jobs, and deliver sustainable infrastructure that supports our growing technology ecosystem."

AirTrunk’s expansion in Melbourne follows last week’s announcement of a new hyperscale campus in Osaka, Japan, delivering up to 100MW of IT load and representing more than AUD $3 billion (£1.48bn) in new direct investment in that market. OSK2 and MEL2 - which will become AirTrunk’s fourteenth and fifteenth data centres respectively - expand the company’s hyperscale platform to a total capacity in excess of 2.6GW across six markets in Asia Pacific and the Middle East: Australia, Singapore, Japan, Malaysia, Hong Kong, and Saudi Arabia.

AirTrunk’s Melbourne expansion comes as Australia advances its National AI Plan, released in late 2025, which outlines the country’s ambition to become a global hub for artificial intelligence. The plan is built around three pillars: capturing the opportunity through investment in infrastructure and skills, spreading the benefits across industries and communities, and keeping Australians safe through responsible AI governance.

By delivering a new hyperscale data centre in Melbourne, AirTrunk says it is directly supporting these national goals, enabling smarter government services, faster business innovation, and stronger human connection, while creating opportunities for local talent and suppliers.

For more from AirTrunk, click here.

Telehouse Canada, Megaport partner to expand cloud options
Telehouse Canada, an operator of colocation data centres, has announced a strategic partnership with Network-as-a-Service (NaaS) provider Megaport, expanding cloud connectivity capabilities across its Canadian data centre portfolio.

The agreement enables Telehouse Canada customers to access Megaport’s global ecosystem, which includes more than 280 cloud on-ramps and over 300 service providers. Through the integration, organisations can establish scalable, private connections to leading cloud platforms and global IT services directly from Telehouse Canada facilities.

Customers can access the Megaport Portal from all Telehouse Canada data centres, allowing them to design flexible, high-performance network architectures that support hybrid and multi-cloud workloads, as well as more traditional enterprise use cases.

Simplified networking and AI-focused connectivity

By leveraging Megaport’s global platform, organisations can scale connectivity on demand and streamline network operations. Services available include Megaport Cloud Router, which enables direct data transfer between multiple cloud environments, and API-based integration to automate deployment and ongoing management.

Atsushi Kubo, President and CEO of Telehouse Canada, says the partnership enhances the value of its data centre ecosystem, stating, “This collaboration reflects our shared commitment to delivering high-quality, efficient connectivity.

"Alongside colocation, we are providing customers with access to a highly interconnected environment that supports growth and reduces complexity through Megaport’s platform.”

The partnership also provides access to Megaport’s AI Exchange (AIx), a connectivity ecosystem designed to support AI-driven workloads. AIx enables organisations to interconnect with GPU-as-a-Service providers, neocloud platforms, third-party AI models, and storage and compute resources, supporting the rapid delivery of AI services at a global scale.

Michael Reid, CEO of Megaport, notes, “As organisations operate across increasingly complex environments, connectivity and compute must work seamlessly together.

"Partnering with Telehouse Canada allows us to extend our capabilities into a strong local ecosystem, giving customers the foundations required to support advanced workloads today and adapt as their needs evolve.”

Telehouse Canada and Megaport say they plan to continue developing the partnership, with a shared focus on strengthening secure, high-performance digital infrastructure to support Canadian organisations and international connectivity requirements.

For more from Telehouse, click here.

Yondr completes RFS milestone at Northern Virginia campus
Yondr Group, a global developer, owner, and operator of hyperscale data centres, has completed the first ready-for-service (RFS) milestone for the second building at its 96MW hyperscale data centre campus in Loudoun County, Northern Virginia, USA.

The milestone marks the delivery of the first 12MW of capacity within a 48MW facility, which forms part of Yondr’s wider Northern Virginia development. The second building is scheduled to become fully operational in 2026.

Developed in partnership with JK Land Holdings, the project supports Yondr’s strategy to deliver additional cloud and digital infrastructure capacity as global data demand continues to rise, driven in part by the growing adoption of artificial intelligence workloads.

Yondr completed the first 48MW data centre at the Northern Virginia campus in 2024. The company also has a further 240MW of capacity planned on an adjacent parcel of land, which would bring the total campus capacity to 336MW.

Northern Virginia remains one of the world’s most established data centre markets, accounting for close to 60% of the primary data centre inventory in the United States and hosting more than 30 million square feet of operational data centre space.

Continued expansion across North America

Yondr says the Loudoun County development reflects its broader growth strategy across North America. In addition to Northern Virginia, the company currently has a 27MW project under development in Toronto and has secured a 163-acre site in Lancaster, south of Dallas, where it plans to develop a 550MW hyperscale campus.

Todd Sauer, VP Design & Construction Americas at Yondr Group, says, “This RFS milestone is the latest in a series of achievements across our North American data centre portfolio and continues the strong progress we’re making in the important Northern Virginia market.”

John Madden, Chief Data Centre Officer at Yondr Group, adds, “As demand for capacity continues to increase, we are stepping up our investment in North America, a high-growth, dynamic market full of opportunities.

"We look forward to expanding in the region and continuing to deliver scalable, reliable infrastructure that meets our customers’ evolving requirements.”

For more from Yondr Group, click here.

Enecom upgrades data storage with Infinidat's InfiniBox
Infinidat, a provider of enterprise data storage systems, has announced that Enecom, a Japanese ICT services provider operating primarily in the Chugoku region, has upgraded its enterprise data infrastructure using multiple InfiniBox storage systems.

Enecom has deployed five InfiniBox systems across its environment. Two systems support the company’s EneWings enterprise cloud service, two are used for internal virtual infrastructure, and one is dedicated to backup and verification. The deployment is intended to support service availability, scalability, and resilience as data volumes increase.

According to Enecom, the investment was driven by customer requirements for high system reliability, concerns around cyber security, and the rising cost and operational impact of legacy storage platforms.

Masayuki Chikaraishi, Solution Service Department, Solution Business Division at Enecom, says, “When we were choosing how to upgrade our storage infrastructure, our customers told us that system reliability was particularly important and that the threat of damage caused by cyberattacks was a major concern.

"We also had to address the rising costs of the legacy systems and the fallout when hardware failures occurred. For the longer term, we needed to be proactive to be able to handle the expected future growth in cloud demand and to strengthen the appeal of our EneWings brand.”

Availability and cyber resilience focus

Enecom says it is using an active-active configuration across two InfiniBox systems to maintain service continuity during maintenance and software upgrades.

Takashi Ueki, Solution Service Department, Solution Business Division at Enecom, notes, “Many of our customers are concerned that even the slightest outage will affect their business.

"By using two InfiniBox systems in an active-active cluster configuration, we can continue to provide services with higher reliability and peace of mind without interruption, even when performing maintenance or software version upgrades.”

Cyber resilience was also a key consideration. Enecom is using InfiniSafe features within the InfiniBox platform, including immutable snapshots and recovery capabilities, to support rapid restoration following cyber incidents.

Masayuki continues, “InfiniBox provides high-speed, tamper-proof, immutable snapshot creation as a standard feature to enable rapid recovery from a future cyberattack. Keeping data within Japan for data security reasons will become more important in the future.”

For more from Infinidat, click here.

Motivair by Schneider Electric introduces new CDUs
Motivair, a US provider of liquid cooling solutions for data centres and AI computing, owned by Schneider Electric, has introduced a new range of coolant distribution units (CDUs) designed to address the increasing thermal requirements of high-performance computing (HPC) and AI workloads.

The new units are designed for installation in utility corridors rather than within the white space, reflecting changes in how liquid cooling infrastructure is being deployed in modern data centres. According to the company, this approach is intended to provide operators with greater flexibility when integrating cooling systems into different facility layouts. The CDUs will be available globally, with manufacturing scheduled to increase from early 2026.

Motivair states that the range supports a broader set of operating conditions, allowing data centre operators to use a wider range of chilled water temperatures when planning and operating liquid-cooled environments. The additions expand the company’s existing liquid cooling portfolio, which includes floor-mounted and in-rack units for use across hyperscale, colocation, edge, and retrofit sites.

Cooling design flexibility for AI infrastructure

Motivair says the new CDUs reflect changes in infrastructure design as compute densities increase and AI workloads become more prevalent. The company notes that operators are increasingly placing CDUs outside traditional IT spaces to improve layout flexibility and maintenance access; having multiple CDU deployment options allows cooling approaches to be aligned more closely with specific data centre designs and workload requirements.

The company highlights space efficiency, broader operating ranges, easier access for maintenance, and closer integration with chiller plant infrastructure as key considerations for operators planning liquid cooling systems.

Andrew Bradner, Senior Vice President, Cooling Business at Schneider Electric, says, “When it comes to data centre liquid cooling, flexibility is key, with customers demanding a more diverse and larger portfolio of end-to-end solutions.

"Our new CDUs allow customers to match deployment strategies to a wider range of accelerated computing applications while leveraging decades of specialised cooling experience to ensure optimal performance, reliability, and future-readiness.”

The launch marks the first new product range from Motivair since Schneider Electric acquired the company in February 2025.

Rich Whitmore, CEO of Motivair, comments, “Motivair is a trusted partner for advanced liquid cooling solutions, and our new range of technologies enables data centre operators to navigate the AI era with confidence.

"Together with Schneider Electric, our goal is to deliver next-generation cooling solutions that adapt to any HPC, AI, or advanced data centre deployment to deliver seamless scalability, performance, and reliability when it matters most.”

For more from Schneider Electric, click here.

Funding for community projects from Kao SEED Fund
Harlow-based community groups are celebrating new funding awards from the Kao SEED Fund Harlow, sharing a total of £30,000 to power community initiatives that aim to create positive social and environmental change.

Run by advanced data centre operator Kao Data, the second Kao SEED Fund (Social Enterprise and Environment Development Fund) was launched in September as part of the company’s ongoing commitment to supporting the town where it operates its AI data centre campus.

Developed in partnership with Harlow Council, the fund offered grants of between £500 and £2,500 to local community groups and not-for-profit organisations to help launch new programmes or create new pilot initiatives. The wide-ranging projects include a fully sustainable theatre production of Alice in Wonderland, a women-only boxing programme, free tuition for disadvantaged pupils, and a forest garden for a Scout group.

Funding local communities

Councillor Dan Swords, Leader of Harlow Council, comments, “In Harlow, we are building a community where innovation, opportunity, and local pride go hand in hand.

"The Kao SEED Fund is a fantastic example of how business and local government can work together to invest in the people and projects that make a real difference.

"The Harlow SEED Fund will help community groups across our town to roll out new projects or fund existing work in order to reach more residents and continue to make Harlow a great place to live.”

Lizzy McDowell, Director of Marketing at Kao Data, adds, “We have been so impressed with the creativity and dedication behind the community projects across Harlow.

"It was incredibly difficult to narrow down the applications, but we’re thrilled to support a further 20 inspiring groups, through our Kao SEED Fund initiative, that make such a tangible difference, from environmental programmes [and] arts initiatives through to youth and wellbeing projects.”

The Kao SEED Fund was launched for the first time in Harlow in September in order to recognise and invest in community-led projects that make the town a better place to live and work.

The 20 funded Harlow projects are: Butterfly Effect Wellbeing, Changing Lives Football, Epping and Harlow Community Transport, Harlow Arts Trust, Harlow Band Stand, Harlow Hospital Radio, Matipo Arts, Norman Booth Recreation Centre, Open Road Vision, PATACC, Plant pots and Wellies, Potter Street Health & Wellbeing Hub, Razed Roof, Rise Community, Roots to Wellbeing, The Frequency Machine, The Parent Hood of Harlow, The Scouts, The Victoria Hall Performing Arts Association, and Yellow Brick Road.

For more from Kao Data, click here.

ABB, Ark deploy medium voltage UPS in UK
ABB, a multinational corporation specialising in industrial automation and electrification products, has completed what it describes as the UK’s first deployment of a medium voltage uninterruptible power supply (UPS) system at Ark Data Centres’ Surrey campus. The installation, with a capacity of 25MVA, is intended to support rising demand for high-density AI computing and large-scale digital workloads.

Ark Data Centres is among the early adopters of ABB’s medium voltage power architecture, which combines grid connection and UPS at the same voltage level to accommodate the growing electrical requirements of AI hardware. The project was delivered in partnership with ABB and JCA.

The installation forms part of Ark’s ongoing expansion, including electrical capacity for next-generation GPUs used in AI training and inference. These systems support high-throughput computing across sectors such as research, healthcare, finance, media, and entertainment, and require stable, scalable power infrastructure.

Medium voltage architecture for AI workloads

Andy Garvin, Chief Operating Officer at Ark Data Centres, comments, “AI is accelerating data centre growth and intensifying the pressure to deliver capacity that is efficient, resilient, and sustainable. With ABB, we’ve delivered a first-of-its-kind solution that positions Ark to meet these challenges while supporting the UK’s digital future.”

Stephen Gibbs, UK Distribution Solutions Marketing and Sales Director at ABB Electrification, adds, “We’re helping data centres design from day one for emerging AI workloads. Our medium voltage UPS technology is AI-ready and a critical step in meeting the power demands of future high-density racks. Delivered as a single solution, we are supporting today’s latest technology and futureproofing for tomorrow’s megawatt-powered servers.”

ABB’s new medium voltage data centre architecture integrates HiPerGuard, the industry’s first solid-state medium voltage UPS, with its UniGear MV switchgear and Zenon ZEE600 control system into a single, end-to-end system. This approach is intended to eliminate interface risks and streamline coordination across design, installation, and commissioning.

Steve Hill, Divisional Contracts Director at JCA, says, “Delivering a project of this scale brings challenges. Having one partner responsible for the switchgear, UPS, and controls reduced complexity and helped keep the programme on track.

"Working alongside ABB, we were able to coordinate the installation and commissioning effectively so that Ark could benefit from the new system without delays or risks.”

The system reportedly provides up to 25MVA of conditioned power, achieving 98% efficiency under heavy load and freeing floor space for AI computing equipment. Stabilising power at medium voltage is also expected to reduce generator intervention and energy losses.

For more from ABB, click here.

Supermicro launches liquid-cooled NVIDIA HGX B300 systems
Supermicro, a provider of application-optimised IT systems, has announced the expansion of its NVIDIA Blackwell architecture portfolio with new 4U and 2-OU liquid-cooled NVIDIA HGX B300 systems, now available for high-volume shipment. The systems form part of Supermicro's Data Centre Building Block approach, delivering GPU density and power efficiency for hyperscale data centres and AI factory deployments.

Charles Liang, President and CEO of Supermicro, says, "With AI infrastructure demand accelerating globally, our new liquid-cooled NVIDIA HGX B300 systems deliver the performance density and energy efficiency that hyperscalers and AI factories need today.

"We're now offering the industry's most compact NVIDIA HGX B300 options - achieving up to 144 GPUs in a single rack - whilst reducing power consumption and cooling costs through our proven direct liquid-cooling technology."

System specifications and architecture

The 2-OU liquid-cooled NVIDIA HGX B300 system, built to the 21-inch OCP Open Rack V3 specification, enables up to 144 GPUs per rack. The rack-scale design features blind-mate manifold connections, modular GPU and CPU tray architecture, and component liquid cooling. The system supports eight NVIDIA Blackwell Ultra GPUs at up to 1,100 watts thermal design power each. A single ORV3 rack supports up to 18 nodes with 144 GPUs in total, scaling with NVIDIA Quantum-X800 InfiniBand switches and Supermicro's 1.8-megawatt in-row coolant distribution units.

The 4U Front I/O HGX B300 Liquid-Cooled System offers the same compute performance in a traditional 19-inch EIA rack form factor for large-scale AI factory deployments. The 4U system uses Supermicro's DLC-2 technology to capture up to 98% of the heat generated by the system through liquid cooling. Supermicro NVIDIA HGX B300 systems feature 2.1 terabytes of HBM3e GPU memory per system.

Both the 2-OU and 4U platforms deliver performance gains at cluster level by doubling compute fabric network throughput to up to 800 gigabits per second via integrated NVIDIA ConnectX-8 SuperNICs when used with NVIDIA Quantum-X800 InfiniBand or NVIDIA Spectrum-4 Ethernet. With the DLC-2 technology stack, data centres can reportedly achieve up to 40% power savings, reduce water consumption through 45°C warm water operation, and eliminate chilled water and compressors.

Supermicro says it delivers the new systems as fully validated, tested racks before shipment. The systems expand Supermicro's portfolio of NVIDIA Blackwell platforms, including the NVIDIA GB300 NVL72, NVIDIA HGX B200, and NVIDIA RTX PRO 6000 Blackwell Server Edition. Each system is also NVIDIA-certified.

For more from Supermicro, click here.
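As a rough check on the rack-level figures above: the node count, GPUs per node, GPU TDP, and CDU rating are the announcement's numbers, while treating the CDU rating against GPU heat alone is an illustrative simplification (CPUs, memory, and networking add further load in practice):

```python
# Rack-level arithmetic for the ORV3 configuration described above.
# Input figures are taken from the announcement; everything derived
# from them is a simple back-of-envelope illustration.

GPUS_PER_NODE = 8        # NVIDIA Blackwell Ultra GPUs per HGX B300 system
NODES_PER_RACK = 18      # maximum nodes in one ORV3 rack
GPU_TDP_W = 1_100        # per-GPU thermal design power, watts
CDU_CAPACITY_KW = 1_800  # 1.8MW in-row coolant distribution unit

gpus_per_rack = GPUS_PER_NODE * NODES_PER_RACK       # 144 GPUs, as quoted
gpu_heat_kw = gpus_per_rack * GPU_TDP_W / 1_000      # 158.4 kW of GPU heat

# GPU heat only - real racks would dissipate more once CPUs, memory,
# and network switches are included, lowering this figure:
racks_per_cdu = int(CDU_CAPACITY_KW // gpu_heat_kw)

print(gpus_per_rack, gpu_heat_kw, racks_per_cdu)     # 144 158.4 11
```

The arithmetic matches the quoted 144-GPU rack and shows why a single 1.8MW CDU is sized to serve a row of such racks rather than one.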

tde expands tML breakout module for 800GbE ethernet
trans data elektronik (tde), a German manufacturer of fibre optic and copper cabling systems for data centres, has further developed its tML system platform to meet increased network requirements, with the new breakout modules now supporting transceivers up to 800GbE.

QSFP, QSFP-DD, and OSFP transceivers can now be used more efficiently and split into ports with lower data rates (4 x 100GbE or 8 x 100GbE). This allows data centre and network operators to increase the port density of their switch and router chassis and make better use of existing hardware. The company says the new breakout module is particularly suitable for use in high-speed data centres and modern telecommunications infrastructures.

“Breakout applications have become firmly established in the high-speed sector,” explains André Engel, Managing Director of tde.

"With our tML breakout modules, customers can now use transceivers up to 800GbE and still split them into smaller, clearly structured port speeds.

"This allows them to combine maximum port density with very clear, structured cabling."

Efficient use of MPO-based high-speed transceivers

The current high-speed transceivers in the QSFP, QSFP-DD, and OSFP form factors have MPO connectors with 12, 16, or 24 fibres - in multimode (MM) or single-mode (SM). Typical applications such as SR4, DR4, and FR4 use eight fibres of the 12-fibre MPO, while SR8, DR8, and FR8 use sixteen fibres of a 16- or 24-fibre MPO.

This is where tde says its tML breakout modules come in. Depending on the application, the modules split the incoming transmission rate into, for example, four or eight 100GbE channels with LC duplex connections. This allows multiple dedicated links with lower data rates to be provided from a single high-speed port - for switches, routers, or storage systems, for example. Alternatively, special versions with other connector faces such as MDC, SN, SC, or E2000 are available.

Front MPO connectors and maximum packing density

tde also relies on front-integrated MPO connectors for the latest generation of tML breakout modules. The MPO connections are plugged in directly from the front via patch cables. Compared with conventional solutions with rear MPO connectors, this aims to simplify structured patching, ensure clarity in the rack, and facilitate moves, adds, and changes during operation. A high port density can be achieved without the need for separate fanout cables: eight tML breakout modules can be installed in a one-height-unit tML module carrier.

Future-proofing and investment protection

tde says it has designed the tML breakout module for maximum ease of use. Patching takes place entirely at the front patch panel level, supporting structured and clear cabling. Since modules in the tML module carrier can be mixed and matched depending on the desired application and requirements, the breakout module offers high packing density alongside flexibility; fibre-optic and copper modules can also be combined.

André concludes, “With the addition of the tML breakout module, our tML system platform is well equipped for the future and will remain competitive in the long term.”
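The fibre counts in the section above follow directly from the lane structure of parallel-optic transceivers: each lane needs a transmit and a receive fibre, and a breakout module presents each lane on its own duplex port. A small sketch of that arithmetic (the helper function is illustrative, not part of any tde product):

```python
# Fibre arithmetic for the MPO breakout applications described above.
# Lane counts follow the article; the helper itself is illustrative.

def fibres_used(lanes: int) -> int:
    """Each parallel-optic lane (e.g. one 100G channel of DR4 or DR8)
    needs one transmit fibre and one receive fibre."""
    return lanes * 2

# SR4 / DR4: 4 lanes -> 8 fibres of a 12-fibre MPO
assert fibres_used(4) == 8

# SR8 / DR8: 8 lanes -> 16 fibres of a 16- or 24-fibre MPO
assert fibres_used(8) == 16

# A breakout module then exposes each lane on its own LC duplex port,
# e.g. 8 x 100GbE links from a single 800GbE OSFP port:
breakout_ports = [f"LC duplex {i + 1}" for i in range(8)]
print(len(breakout_ports))  # 8
```

This is why an 8-lane transceiver maps cleanly onto eight LC duplex front ports while leaving the remaining fibres of a 24-fibre MPO unused.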

Energy Estate unveils Tasmanian subsea cables and hubs
Energy Estate Digital, a digital infrastructure platform backed by Energy Estate, has set out plans for new data centre hubs and subsea connectivity in Tasmania as part of a wider programme to support the growth of artificial intelligence infrastructure across Australia, New Zealand, and key international markets.

The company is developing subsea cable routes between Australia and New Zealand, as well as to major global hubs including California, Japan, and India. These new links are intended to support the expanding AI sector by connecting regions that offer land availability, renewable energy potential, and access to water resources.

The platform, launched in December 2024, aligns with national objectives under the Australian National AI Plan announced recently by the Federal Government. As part of its approach to sovereign capability, the company says it intends to offer “golden shares” to councils and economic development agencies in landing-point regions.

Two proposed subsea cable landings in Tasmania will form part of the network: the CaliNewy route from California will come ashore at Bell Bay, while the IndoMaris route from Oman and India will land near Burnie. These proposed locations are designed to complement existing cable links between Victoria and Tasmania and future upgrades anticipated through the Marinus Link project.

Large-scale energy and infrastructure precincts are expected to develop around these landings, hosting AI facilities, data centre campuses, and other power-intensive industries such as manufacturing, renewable fuels production, and electrified transport. These precincts will be supported by renewable energy and storage projects delivered by Energy Estate and its partners.

Partnership to develop industrial and digital precincts

Energy Estate has signed a memorandum of understanding with H2U Group to co-develop energy and infrastructure precincts in Tasmania, beginning with the Bell Bay port and wider industrial area. In 2025, H2U signed a similar agreement with TasPorts to explore a large-scale green hydrogen and ammonia facility within the port.

Bell Bay has been identified by the Tasmanian and Australian Federal Governments as a strategic location for industrial development, particularly for hydrogen and green manufacturing projects. Energy Estate and H2U plan to produce a masterplan that builds on existing infrastructure, access to renewable energy, and the region’s established industrial expertise. The work will also align with ongoing efforts within the Bell Bay Advanced Manufacturing Zone.

The digital infrastructure hub proposed for Bell Bay will be the first of three locations Energy Estate intends to develop in Northern Tasmania. The company states that the scale of interest reflects Tasmania’s emerging position as a potential global centre for AI-related activity.

Beyond Tasmania, Energy Estate is advancing similar developments in other regions, including the Hunter in New South Wales; Bass Coast and Portland in Victoria; Waikato, Manawatu, and South Canterbury in New Zealand; and the Central Valley in California.


