News


Yondr completes RFS milestone at Northern Virginia campus
Yondr Group, a global developer, owner, and operator of hyperscale data centres, has completed the first ready-for-service (RFS) milestone for the second building at its 96MW hyperscale data centre campus in Loudoun County, Northern Virginia, USA.

The milestone marks the delivery of the first 12MW of capacity within a 48MW facility, which forms part of Yondr’s wider Northern Virginia development. The second building is scheduled to become fully operational in 2026.

Developed in partnership with JK Land Holdings, the project supports Yondr’s strategy to deliver additional cloud and digital infrastructure capacity as global data demand continues to rise, driven in part by the growing adoption of artificial intelligence workloads.

Yondr completed the first 48MW data centre at the Northern Virginia campus in 2024. The company also has a further 240MW of capacity planned on an adjacent parcel of land, which would bring the total campus capacity to 336MW.

Northern Virginia remains one of the world’s most established data centre markets, accounting for close to 60% of the primary data centre inventory in the United States and hosting more than 30 million square feet of operational data centre space.

Continued expansion across North America

Yondr says the Loudoun County development reflects its broader growth strategy across North America. In addition to Northern Virginia, the company currently has a 27MW project under development in Toronto and has secured a 163-acre site in Lancaster, south of Dallas, where it plans to develop a 550MW hyperscale campus.

Todd Sauer, VP Design & Construction Americas at Yondr Group, says, “This RFS milestone is the latest in a series of achievements across our North American data centre portfolio and continues the strong progress we’re making in the important Northern Virginia market.”

John Madden, Chief Data Centre Officer at Yondr Group, adds, “As demand for capacity continues to increase, we are stepping up our investment in North America, a high-growth, dynamic market full of opportunities.

"We look forward to expanding in the region and continuing to deliver scalable, reliable infrastructure that meets our customers’ evolving requirements.”

For more from Yondr Group, click here.

Enecom upgrades data storage with Infinidat's InfiniBox
Infinidat, a provider of enterprise data storage systems, has announced that Enecom, a Japanese ICT services provider operating primarily in the Chugoku region, has upgraded its enterprise data infrastructure using multiple InfiniBox storage systems.

Enecom has deployed five InfiniBox systems across its environment. Two systems support the company’s EneWings enterprise cloud service, two are used for internal virtual infrastructure, and one is dedicated to backup and verification. The deployment is intended to support service availability, scalability, and resilience as data volumes increase.

According to Enecom, the investment was driven by customer requirements for high system reliability, concerns around cyber security, and the rising cost and operational impact of legacy storage platforms.

Masayuki Chikaraishi, Solution Service Department, Solution Business Division at Enecom, says, “When we were choosing how to upgrade our storage infrastructure, our customers told us that system reliability was particularly important and that the threat of damage caused by cyberattacks was a major concern.

"We also had to address the rising costs of the legacy systems and the fallout when hardware failures occurred. For the longer term, we needed to be proactive to be able to handle the expected future growth in cloud demand and to strengthen the appeal of our EneWings brand.”

Availability and cyber resilience focus

Enecom says it is using an active-active configuration across two InfiniBox systems to maintain service continuity during maintenance and software upgrades.

Takashi Ueki, Solution Service Department, Solution Business Division at Enecom, notes, “Many of our customers are concerned that even the slightest outage will affect their business.

"By using two InfiniBox systems in an active-active cluster configuration, we can continue to provide services with higher reliability and peace of mind without interruption, even when performing maintenance or software version upgrades.”

Cyber resilience was also a key consideration. Enecom is using InfiniSafe features within the InfiniBox platform, including immutable snapshots and recovery capabilities, to support rapid restoration following cyber incidents.

Masayuki continues, “InfiniBox provides high-speed, tamper-proof, immutable snapshot creation as a standard feature to enable rapid recovery from a future cyberattack. Keeping data within Japan for data security reasons will become more important in the future.”

For more from Infinidat, click here.

Motivair by Schneider Electric introduces new CDUs
Motivair, a US provider of liquid cooling solutions for data centres and AI computing, owned by Schneider Electric, has introduced a new range of coolant distribution units (CDUs) designed to address the increasing thermal requirements of high performance computing and AI workloads.

The new units are designed for installation in utility corridors rather than within the white space, reflecting changes in how liquid cooling infrastructure is being deployed in modern data centres. According to the company, this approach is intended to provide operators with greater flexibility when integrating cooling systems into different facility layouts.

The CDUs will be available globally, with manufacturing scheduled to increase from early 2026.

Motivair states that the range supports a broader set of operating conditions, allowing data centre operators to use a wider range of chilled water temperatures when planning and operating liquid cooled environments. The additions expand the company’s existing liquid cooling portfolio, which includes floor-mounted and in-rack units for use across hyperscale, colocation, edge, and retrofit sites.

Cooling design flexibility for AI infrastructure

Motivair says the new CDUs reflect changes in infrastructure design as compute densities increase and AI workloads become more prevalent. The company notes that operators are increasingly placing CDUs outside traditional IT spaces to improve layout flexibility and maintenance access, as having multiple CDU deployment options allows cooling approaches to be aligned more closely with specific data centre designs and workload requirements.

The company highlights space efficiency, broader operating ranges, easier access for maintenance, and closer integration with chiller plant infrastructure as key considerations for operators planning liquid cooling systems.

Andrew Bradner, Senior Vice President, Cooling Business at Schneider Electric, says, “When it comes to data centre liquid cooling, flexibility is the key with customers demanding a more diverse and larger portfolio of end-to-end solutions.

"Our new CDUs allow customers to match deployment strategies to a wider range of accelerated computing applications while leveraging decades of specialised cooling experience to ensure optimal performance, reliability, and future-readiness.”

The launch marks the first new product range from Motivair since Schneider Electric acquired the company in February 2025.

Rich Whitmore, CEO of Motivair, comments, “Motivair is a trusted partner for advanced liquid cooling solutions and our new range of technologies enables data centre operators to navigate the AI era with confidence.

"Together with Schneider Electric, our goal is to deliver next-generation cooling solutions that adapt to any HPC, AI, or advanced data centre deployment to deliver seamless scalability, performance, and reliability when it matters most.”

For more from Schneider Electric, click here.

Funding for community projects from Kao SEED Fund
Harlow-based community groups are celebrating new funding awards from the Kao SEED Fund Harlow, sharing a total of £30,000 to power community initiatives that aim to create positive social and environmental change.

Run by advanced data centre operator Kao Data, the second Kao SEED Fund (Social Enterprise and Environment Development Fund) was launched in September as part of the company’s ongoing commitment to supporting the town where it operates its AI data centre campus. Developed in partnership with Harlow Council, the fund offered grants of between £500 and £2,500 to local community groups and not-for-profit organisations to help launch new programmes or create new pilot initiatives.

The wide-ranging projects include funding to support a fully sustainable theatre production of Alice in Wonderland, a women-only boxing programme, free tuition for disadvantaged pupils, and a forest garden for a Scout group.

Funding local communities

Councillor Dan Swords, Leader of Harlow Council, comments, “In Harlow, we are building a community where innovation, opportunity, and local pride go hand in hand.

"The Kao SEED Fund is a fantastic example of how business and local government can work together to invest in the people and projects that make a real difference.

"The Harlow SEED Fund will help community groups across our town to roll out new projects or fund existing work in order to reach more residents and continue to make Harlow a great place to live.”

Lizzy McDowell, Director of Marketing at Kao Data, adds, “We have been so impressed with the creativity and dedication behind the community projects across Harlow.

"It was incredibly difficult to narrow down the applications, but we’re thrilled to support a further 20 inspiring groups, through our Kao SEED Fund initiative, that make such a tangible difference, from environmental programmes [and] arts initiatives through to youth and wellbeing projects.”

The Kao SEED Fund was launched for the first time in Harlow in September in order to recognise and invest in community-led projects that make the town a better place to live and work.

The 20 funded Harlow projects are: Butterfly Effect Wellbeing, Changing Lives Football, Epping and Harlow Community Transport, Harlow Arts Trust, Harlow Band Stand, Harlow Hospital Radio, Matipo Arts, Norman Booth Recreation Centre, Open Road Vision, PATACC, Plant pots and Wellies, Potter Street Health & Wellbeing Hub, Razed Roof, Rise Community, Roots to Wellbeing, The Frequency Machine, The Parent Hood of Harlow, The Scouts, The Victoria Hall Performing Arts Association, and Yellow Brick Road.

For more from Kao Data, click here.

ABB, Ark deploy medium voltage UPS in UK
ABB, a multinational corporation specialising in industrial automation and electrification products, has completed what it describes as the UK’s first deployment of a medium voltage uninterruptible power supply (UPS) system at Ark Data Centres’ Surrey campus. The installation, with a capacity of 25MVA, is intended to support rising demand for high-density AI computing and large-scale digital workloads.

Ark Data Centres is among the early adopters of ABB’s medium voltage power architecture, which combines grid connection and UPS at the same voltage level to accommodate the growing electrical requirements of AI hardware. The project was delivered in partnership with ABB and JCA.

The installation forms part of Ark’s ongoing expansion, including electrical capacity for next-generation GPUs used in AI training and inference. These systems support high-throughput computing across sectors such as research, healthcare, finance, media, and entertainment, and require stable, scalable power infrastructure.

Medium voltage architecture for AI workloads

Andy Garvin, Chief Operating Officer at Ark Data Centres, comments, “AI is accelerating data centre growth and intensifying the pressure to deliver capacity that is efficient, resilient, and sustainable. With ABB, we’ve delivered a first-of-its-kind solution that positions Ark to meet these challenges while supporting the UK’s digital future.”

Stephen Gibbs, UK Distribution Solutions Marketing and Sales Director at ABB Electrification, adds, “We’re helping data centres design from day one for emerging AI workloads. Our medium voltage UPS technology is AI-ready and a critical step in meeting the power demands of future high-density racks. Delivered as a single solution, we are supporting today’s latest technology and futureproofing for tomorrow’s megawatt-powered servers.

"ABB’s new medium voltage data centre architecture integrates HiPerGuard, the industry’s first solid-state medium voltage UPS, with its UniGear MV switchgear and Zenon ZEE600 control system into a single, end-to-end system. This approach eliminates interface risks and streamlines coordination across design, installation, and commissioning.”

Steve Hill, Divisional Contracts Director at JCA, says, “Delivering a project of this scale brings challenges. Having one partner responsible for the switchgear, UPS, and controls reduced complexity and helped keep the programme on track.

"Working alongside ABB, we were able to coordinate the installation and commissioning effectively so that Ark could benefit from the new system without delays or risks.”

The system reportedly provides up to 25MVA of conditioned power, achieving 98% efficiency under heavy load and freeing floor space for AI computing equipment. Stabilising power at medium voltage should also reduce generator intervention and energy losses.

For more from ABB, click here.
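To give a rough sense of what the quoted 98% efficiency means at this scale, here is an illustrative calculation. It assumes the full 25MVA is delivered at unity power factor (25MW); the article states only the MVA rating and the efficiency figure, so the power factor is an assumption for illustration.

```python
# Illustrative UPS loss arithmetic from the figures quoted above.
# Assumes unity power factor (25 MVA -> 25 MW output), which the article
# does not state; the efficiency figure (98% under heavy load) is quoted.

capacity_va = 25_000_000   # 25 MVA rating
efficiency = 0.98          # quoted efficiency under heavy load

output_w = capacity_va * 1.0       # unity power factor assumed
input_w = output_w / efficiency    # power drawn from the grid
losses_w = input_w - output_w      # power dissipated in the UPS

print(round(losses_w))  # ≈ 510204 W (~0.5 MW) dissipated at full load
```

Even at 98% efficiency, roughly half a megawatt of loss at full load illustrates why efficiency figures matter at this scale.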

Supermicro launches liquid-cooled NVIDIA HGX B300 systems
Supermicro, a provider of application-optimised IT systems, has announced the expansion of its NVIDIA Blackwell architecture portfolio with new 4U and 2-OU liquid-cooled NVIDIA HGX B300 systems, now available for high-volume shipment. The systems form part of Supermicro's Data Centre Building Block approach, delivering GPU density and power efficiency for hyperscale data centres and AI factory deployments.

Charles Liang, President and CEO of Supermicro, says, "With AI infrastructure demand accelerating globally, our new liquid-cooled NVIDIA HGX B300 systems deliver the performance density and energy efficiency that hyperscalers and AI factories need today.

"We're now offering the industry's most compact NVIDIA HGX B300 options - achieving up to 144 GPUs in a single rack - whilst reducing power consumption and cooling costs through our proven direct liquid-cooling technology."

System specifications and architecture

The 2-OU liquid-cooled NVIDIA HGX B300 system, built to the 21-inch OCP Open Rack V3 specification, enables up to 144 GPUs per rack. The rack-scale design features blind-mate manifold connections, modular GPU and CPU tray architecture, and component liquid cooling. The system supports eight NVIDIA Blackwell Ultra GPUs at up to 1,100 watts thermal design power each. A single ORV3 rack supports up to 18 nodes with 144 GPUs in total, scaling with NVIDIA Quantum-X800 InfiniBand switches and Supermicro's 1.8-megawatt in-row coolant distribution units.

The 4U Front I/O HGX B300 Liquid-Cooled System offers the same compute performance in a traditional 19-inch EIA rack form factor for large-scale AI factory deployments. The 4U system uses Supermicro's DLC-2 technology to capture up to 98% of the heat generated by the system through liquid cooling. Supermicro NVIDIA HGX B300 systems feature 2.1 terabytes of HBM3e GPU memory per system.

Both the 2-OU and 4U platforms deliver performance gains at cluster level by doubling compute fabric network throughput to up to 800 gigabits per second via integrated NVIDIA ConnectX-8 SuperNICs when used with NVIDIA Quantum-X800 InfiniBand or NVIDIA Spectrum-4 Ethernet. With the DLC-2 technology stack, data centres can reportedly achieve up to 40% power savings, reduce water consumption through 45°C warm water operation, and eliminate chilled water and compressors.

Supermicro says it delivers the new systems as fully validated, tested racks before shipment. The systems expand Supermicro's portfolio of NVIDIA Blackwell platforms, including the NVIDIA GB300 NVL72, NVIDIA HGX B200, and NVIDIA RTX PRO 6000 Blackwell Server Edition. Each system is also NVIDIA-certified.

For more from Supermicro, click here.
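The rack-level arithmetic implied by the figures above can be checked with a short sketch. All inputs come from the article; the derived totals (rack GPU heat, racks per CDU) are our own sums, and they count GPU heat only, so CPUs, NICs, and other components would add further load.

```python
# Illustrative rack-density arithmetic from the figures quoted above.
# Inputs are the article's numbers; the totals below are derived sums.

GPUS_PER_NODE = 8            # NVIDIA Blackwell Ultra GPUs per HGX B300 node
NODES_PER_RACK = 18          # 2-OU nodes in a single ORV3 rack
GPU_TDP_W = 1_100            # watts thermal design power per GPU
CDU_CAPACITY_W = 1_800_000   # 1.8 MW in-row coolant distribution unit

gpus_per_rack = GPUS_PER_NODE * NODES_PER_RACK
gpu_heat_per_rack_w = gpus_per_rack * GPU_TDP_W

print(gpus_per_rack)         # 144, matching the quoted rack density
print(gpu_heat_per_rack_w)   # 158400 W of GPU heat per rack (GPUs only)
print(CDU_CAPACITY_W // gpu_heat_per_rack_w)  # 11 racks' GPU heat per CDU
```

The 144-GPU figure falls out directly from 18 nodes of 8 GPUs, and a single 1.8MW CDU covers the GPU heat of roughly eleven such racks before other component loads are counted.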

tde expands tML breakout module for 800GbE ethernet
trans data elektronik (tde), a German manufacturer of fibre optic and copper cabling systems for data centres, has further developed its tML system to meet increased network requirements, with the new breakout modules now supporting transceivers up to 800GbE.

QSFP, QSFP-DD, and OSFP transceivers can now be used more efficiently and split into ports with lower data rates (4 x 100GbE or 8 x 100GbE). This allows data centre and network operators to increase the port density of their switch and router chassis and make better use of existing hardware. The company says the new breakout module is particularly suitable for use in high-speed data centres and modern telecommunications infrastructures.

“Breakout applications have become firmly established in the high-speed sector,” explains André Engel, Managing Director of tde.

"With our tML breakout modules, customers can now use transceivers up to 800GbE and still split them into smaller, clearly structured port speeds.

"This allows them to combine maximum port density with very clear, structured cabling."

Efficient use of MPO-based high-speed transceivers

The current high-speed transceivers in the QSFP, QSFP-DD, and OSFP form factors have MPO connectors with 12, 16, or 24 fibres - in multimode (MM) or single-mode (SM). Typical applications such as SR4, DR4, and FR4 use eight fibres of the 12-fibre MPO, while SR8, DR8, and FR8 use sixteen fibres of a 16- or 24-fibre MPO.

This is where tde says it comes in with its tML breakout modules. Depending on the application, the modules split the incoming transmission rate into, for example, four 100GbE or eight 100GbE channels with LC duplex connections. This allows multiple dedicated links with lower data rates to be provided from a single high-speed port - for switches, routers, or storage systems, for example. Alternatively, special versions with other connector faces such as MDC, SN, SC, or E2000 are available.

Front MPO connectors and maximum packing density

tde also relies on front-integrated MPO connectors for the latest generation of tML breakout modules. The MPO connections are plugged in directly from the front via patch cables. Compared to conventional solutions with rear MPO connectors, this aims to simplify structured patching, ensure clarity in the rack, and facilitate moves, adds, and changes during operation. A high port density can be achieved without the need for separate fanout cables. Eight tML breakout modules can be installed in the tML module carrier within one height unit.

Future-proofing and investment protection

tde says it has designed the tML breakout module for maximum ease of use. Patching takes place only at the front patch panel level, supporting structured and clear cabling. Because the tML module carrier can be mixed and matched depending on the desired application and requirements, the breakout module is designed to offer high packing density. Fibre-optic and copper modules can also be combined.

André concludes, “With the addition of the tML breakout module, our tML system platform is well equipped for the future and will remain competitive in the long term.”
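The fibre arithmetic described above (eight fibres for four-lane applications, sixteen for eight-lane) maps directly onto the number of LC duplex breakout ports, since each LC duplex port carries one transmit/receive fibre pair. A minimal sketch, using the article's application groupings as labels (this is an illustrative model, not tde documentation):

```python
# Simplified model of MPO fibre usage for the breakout applications named
# in the article - an illustrative sketch, not a tde specification.

# Fibres used per application family (each lane needs a Tx and an Rx fibre).
FIBRES_USED = {
    "SR4/DR4/FR4": 8,    # 4 lanes -> 8 of the 12-fibre MPO's fibres
    "SR8/DR8/FR8": 16,   # 8 lanes -> 16 fibres of a 16- or 24-fibre MPO
}

def lc_duplex_ports(fibres_used: int) -> int:
    """One LC duplex port carries one Tx/Rx fibre pair."""
    return fibres_used // 2

print(lc_duplex_ports(FIBRES_USED["SR4/DR4/FR4"]))  # 4 breakout channels
print(lc_duplex_ports(FIBRES_USED["SR8/DR8/FR8"]))  # 8 breakout channels
```

This is why a four-lane application yields a 4 x 100GbE split and an eight-lane application an 8 x 100GbE split at the module's LC duplex front.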

Energy Estate unveils Tasmanian subsea cables and hubs
Energy Estate Digital, a digital infrastructure platform backed by Energy Estate, has set out plans for new data centre hubs and subsea connectivity in Tasmania as part of a wider programme to support the growth of artificial intelligence infrastructure across Australia, New Zealand, and key international markets.

The company is developing subsea cable routes between Australia and New Zealand, as well as major global hubs including California, Japan, and India. These new links are intended to support the expanding AI sector by connecting regions that offer land availability, renewable energy potential, and access to water resources.

The platform, launched in December 2024, aligns with national objectives under the Australian National AI Plan announced recently by the Federal Government. As part of its approach to sovereign capability, the company says it intends to offer “golden shares” to councils and economic development agencies in landing-point regions.

Two proposed subsea cable landings in Tasmania will form part of the network: the CaliNewy route from California will come ashore at Bell Bay, while the IndoMaris route from Oman and India will land near Burnie. These proposed locations are designed to complement existing cable links between Victoria and Tasmania and future upgrades anticipated through the Marinus Link project.

Large-scale energy and infrastructure precincts are expected to develop around these landings, hosting AI facilities, data centre campuses, and other power-intensive industries such as manufacturing, renewable fuels production, and electrified transport. These precincts will be supported by renewable energy and storage projects delivered by Energy Estate and its partners.

Partnership to develop industrial and digital precincts

Energy Estate has signed a memorandum of understanding with H2U Group to co-develop energy and infrastructure precincts in Tasmania, beginning with the Bell Bay port and wider industrial area.

In 2025, H2U signed a similar agreement with TasPorts to explore a large-scale green hydrogen and ammonia facility within the port. Bell Bay has been identified by the Tasmanian Government and the Australian Federal Government as a strategic location for industrial development, particularly for hydrogen and green manufacturing projects.

Energy Estate and H2U plan to produce a masterplan that builds on existing infrastructure, access to renewable energy, and the region’s established industrial expertise. The work will also align with ongoing efforts within the Bell Bay Advanced Manufacturing Zone.

The digital infrastructure hub proposed for Bell Bay will be the first of three locations Energy Estate intends to develop in Northern Tasmania. The company states that the scale of interest reflects Tasmania’s emerging position as a potential global centre for AI-related activity.

Beyond Tasmania, Energy Estate is advancing similar developments in other regions, including the Hunter in New South Wales; Bass Coast and Portland in Victoria; Waikato, Manawatu, and South Canterbury in New Zealand; and the Central Valley in California.

Study finds consumer GPUs can cut AI inference costs
A peer-reviewed study has found that consumer-grade GPUs, including Nvidia’s RTX 4090, can significantly reduce the cost of running large language model (LLM) inference. The research, published by io.net - a US developer of decentralised GPU cloud infrastructure - and accepted for the 6th International Artificial Intelligence and Blockchain Conference (AIBC 2025), provides the first open benchmarks of heterogeneous GPU clusters deployed on the company’s decentralised cloud platform.

The paper, Idle Consumer GPUs as a Complement to Enterprise Hardware for LLM Inference, reports that clusters built from RTX 4090 GPUs can deliver between 62% and 78% of the throughput of enterprise-grade H100 hardware at roughly half the cost. For batch processing or latency-tolerant workloads, token costs fell by up to 75%.

The study also notes that, while H100 GPUs remain more energy efficient on a per-token basis, extending the life of existing consumer hardware and using renewable-rich grids can reduce overall emissions.

Aline Almeida, Head of Research at IOG Foundation and lead author of the study, says, “Our findings demonstrate that hybrid routing across enterprise and consumer GPUs offers a pragmatic balance between performance, cost, and sustainability.

"Rather than a binary choice, heterogeneous infrastructure allows organisations to optimise for their specific latency and budget requirements while reducing carbon impact.”

Implications for LLM development and deployment

The research outlines how AI developers and MLOps teams can use mixed hardware clusters to improve cost-efficiency. Enterprise GPUs can support real-time applications, while consumer GPUs can be deployed for batch tasks, development, overflow capacity, and workloads with higher latency tolerance. Under these conditions, the study reports that organisations can achieve near-H100 performance with substantially lower operating costs.

Gaurav Sharma, CEO of io.net, comments, “This peer-reviewed analysis validates the core thesis behind io.net: that the future of compute will be distributed, heterogeneous, and accessible.

"By harnessing both data-centre-grade and consumer hardware, we can democratise access to advanced AI infrastructure while making it more sustainable.”

The company also argues that the study supports its position that decentralised networks can expand global compute capacity by making distributed GPU resources available to developers through a single, programmable platform.

Key findings include:

• Cost-performance ratios — Clusters of four RTX 4090 GPUs delivered 62% to 78% of H100 throughput at around half the operational cost, achieving the lowest cost per million tokens ($0.111–0.149).

• Latency profiles — H100 hardware maintained sub-55ms P99 time-to-first-token even at higher loads, while consumer GPU clusters were suited to workloads tolerating 200–500ms tail latencies, such as research, development environments, batch jobs, embeddings, and evaluation tasks.
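The cost-per-million-tokens metric used in the findings above follows from hourly price and sustained throughput. A minimal sketch of that arithmetic: the hourly prices below are hypothetical placeholders for illustration; only the relative figures (62-78% of H100 throughput at roughly half the cost) come from the study as reported here.

```python
# Illustrative cost-per-million-tokens arithmetic. Hourly prices and absolute
# throughputs below are hypothetical; the ~70% throughput / ~50% cost ratio
# mirrors the relative figures reported by the study.

def cost_per_million_tokens(hourly_cost_usd: float, tokens_per_sec: float) -> float:
    tokens_per_hour = tokens_per_sec * 3600
    return hourly_cost_usd / tokens_per_hour * 1_000_000

# Hypothetical H100 baseline: $2.00/hour, 1,000 tokens/second sustained.
h100_cost = cost_per_million_tokens(hourly_cost_usd=2.0, tokens_per_sec=1000.0)

# Consumer cluster at ~70% of that throughput and ~half the hourly cost.
rtx_cost = cost_per_million_tokens(hourly_cost_usd=1.0, tokens_per_sec=700.0)

print(round(h100_cost, 4))  # 0.5556 USD per million tokens
print(round(rtx_cost, 4))   # 0.3968 USD per million tokens
```

Under these placeholder prices the consumer cluster comes out cheaper per token despite its lower throughput, which is the direction of the study's finding; the study's own dollar figures depend on its measured prices and throughputs, not these.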

A round-up of DataCentres Ireland 2025
DataCentres Ireland was again heralded as a success by the exhibitors, alongside many of the visitors and speakers, as it delivered more attendees, more content, and more delegates than ever before. Total attendance was up 23.0% year on year across the two days of the event, drawing more than 3,000 people interested in the sector.

Louisa Cilenti, Chief Legal Officer at Clear Decisions, notes, “DataCentres Ireland was an outstanding forum to get underneath the strategic issues shaping Ireland’s future as a global data centre hub.

"From Minister Dooley’s address to the highly practical breakout sessions, the day struck the perfect balance between policy depth and real-world innovation.

"The relaxed venue made meaningful networking effortless and, as a startup, Clear Decisions really valued the genuine peer environment. A fantastic event that brings the whole ecosystem into one conversation.”

Day 1 was busy from the outset, starting with a keynote address from Minister of State Timmy Dooley TD, who detailed the Irish Government’s recognition of the essential role data centres play in modern society and its wish to work with the data centre community, both to leverage AI and in recognition of the impact of data centres in attracting foreign direct investment. Day 1 saw a massive 34.7% growth in attendance year on year, with exhibitors and attendees commenting on the buzz in the exhibition hall.

Day 2 had a slower start, though footfall built with a good feeling in the hall. The event delivered over 600 new individuals for exhibitors to network and do business with, similar numbers to those achieved on Day 2 in 2024.

This was the largest visitor and total attendance in the event's 15-year history, delivering over 3,000 attendees across the two days. The number of exhibitors also grew, with the exhibition featuring over 140 individual stands and showcasing more than 180 companies and thousands of brands.

An event of opportunities

Exhibitors and attendees acknowledged that DataCentres Ireland provided a professional business environment where people were able to network with colleagues; see the latest in products, services, technology, and equipment; and listen to industry leaders and experts discussing the latest issues, approaches, and ideas affecting data centres and critical environments.

Paul Flanagan, EMEA Regional Director West, Camfil, comments, “As usual, a well-organised event by Stepex. Great variety of exhibitors and visitors, so plenty of networking opportunities along with being able to see all the latest and greatest technologies and services available on the market today.

"The data centre segment is getting smarter and more collaborative, and this event guarantees any visitor the opportunity to appreciate that in many ways.”

Exhibitors commented on the quality of attendees present at DataCentres Ireland and the lack of 'time wasters' at the show, giving them more time to engage with buyers, discuss their needs, and forge lasting business contacts and relationships.

The conference programme featured 90 international and local experts and industry leaders, addressing a wide range of issues, from data centre development, regionalisation, training and staff retention, and the impact of AI on data centres to decarbonisation, energy reduction, and heat re-use.

One of the many highlights was an insightful presentation by Mark Foley, CEO of Mark Foley Strategic Solutions, on the state of the Irish grid network and what could be done to make the grid more flexible for data centres. All presentations were broadly supportive of data centres and their continued development in Ireland.

Mark comments, “An excellent conference at a challenging time for the sector in Ireland.

"The key issues were discussed, and practical and innovative solutions are now emerging if Government and regulators make the right decisions.”

For more from DataCentres Ireland, click here.


