18 December 2025
Nostrum details availability of new data centres in Spain
 
17 December 2025
1547 announces the McAllen Internet Exchange (MCT-IX)
 
17 December 2025
Vertiv, GreenScale to deploy DC platforms across Europe
 
17 December 2025
InfraRed invests in Spanish data centre platform
 
16 December 2025
Ireland’s first liquid-cooled AI supercomputer
 

Latest News


Pure DC signs Europe’s largest hyperscale DC lease for 2025
Pure Data Centres Group (Pure DC), a designer, developer, and operator of hyperscale cloud and AI data centres, today announced it has signed 2025’s largest standalone hyperscale data centre lease in Europe. A hyperscale customer is leasing the entire 78MW campus, situated in Westpoort, Amsterdam, with Pure DC investing over €1 billion (£877 million) to develop the site.

As part of the lease deal, Pure DC has purchased a site and secured planning permissions and 100MVA of power via a private substation. The land was purchased on a long leasehold from the Port of Amsterdam. Given the permitting, power, and supply constraints within Amsterdam, securing the site reportedly required complex negotiations and creative partnership over many months.

Regional growth and energy resilience

The company believes this infrastructure investment is set to play a pivotal role in supporting the region’s digital growth and energy resilience. Alongside Pure DC’s investment, the development will provide over 1,000 jobs and support further roles through the extended supply chain, which the company hopes will drive demand for skilled positions, utilising local companies wherever possible. Once complete, the data centre will provide approximately 80 permanent skilled jobs, including engineers, maintenance, security, and administrative staff.

Designated AMS01, Pure DC’s data centre campus will comprise three 85-metre towers, powered by a private substation with a firm connection into the 50kV grid. Each of the three towers will house 26MW of data halls, designed to support high-density compute with high-efficiency cooling and to achieve the Netherlands’ energy efficiency target PUE of 1.2. The private substation is already constructed and live, with development of the data halls expected to begin in January 2026.

Dame Dawn Childs, CEO of Pure Data Centres, says, “Amsterdam is one of Europe’s most constrained markets for digital infrastructure and Pure DC has again demonstrated its ability to unlock new, low-latency, high-quality capacity.

"This deal demonstrates how our specialist teams have the creativity and approach to deliver compelling proposals for even previously distressed assets - delivering solutions for local authorities, potential customers, and our supply chain.”

Pure DC says it is committed to working with local communities near current and future operational locations, noting that, in Amsterdam, it aims to replicate programmes running in its other projects - including working with local schools and universities to provide training, career guidance, and outreach programmes; supporting local charitable organisations; and working with community partners on environmental conservation projects.

For more from Pure DC, click here.
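As a rough illustration of what the quoted 1.2 PUE target implies at this scale - a back-of-envelope sketch that treats the 78MW campus figure as pure IT load, which is our assumption rather than a published breakdown - power usage effectiveness is total facility power divided by IT power:

```python
# Back-of-envelope PUE arithmetic for the AMS01 figures quoted above.
# Assumption: the 78MW campus capacity is all IT load; real designs
# differ, so treat the outputs as indicative only.
it_load_mw = 78.0   # three towers x 26MW of data halls
pue_target = 1.2    # Netherlands energy efficiency target cited above

total_facility_mw = it_load_mw * pue_target    # PUE = total / IT
overhead_mw = total_facility_mw - it_load_mw   # cooling, distribution, etc.

print(f"Total facility power at full load: {total_facility_mw:.1f} MW")  # 93.6 MW
print(f"Non-IT overhead at full load: {overhead_mw:.1f} MW")             # 15.6 MW
```

On those assumptions, a full-load draw of roughly 93.6MW would also sit within the 100MVA secured via the private substation.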

Motivair by Schneider Electric introduces new CDUs
Motivair, a US provider of liquid cooling solutions for data centres and AI computing, owned by Schneider Electric, has introduced a new range of coolant distribution units (CDUs) designed to address the increasing thermal requirements of high-performance computing and AI workloads.

The new units are designed for installation in utility corridors rather than within the white space, reflecting changes in how liquid cooling infrastructure is being deployed in modern data centres. According to the company, this approach is intended to provide operators with greater flexibility when integrating cooling systems into different facility layouts.

The CDUs will be available globally, with manufacturing scheduled to increase from early 2026. Motivair states that the range supports a broader set of operating conditions, allowing data centre operators to use a wider range of chilled water temperatures when planning and operating liquid-cooled environments. The additions expand the company’s existing liquid cooling portfolio, which includes floor-mounted and in-rack units for use across hyperscale, colocation, edge, and retrofit sites.

Cooling design flexibility for AI infrastructure

Motivair says the new CDUs reflect changes in infrastructure design as compute densities increase and AI workloads become more prevalent. The company notes that operators are increasingly placing CDUs outside traditional IT spaces to improve layout flexibility and maintenance access, as having multiple CDU deployment options allows cooling approaches to be aligned more closely with specific data centre designs and workload requirements. The company highlights space efficiency, broader operating ranges, easier access for maintenance, and closer integration with chiller plant infrastructure as key considerations for operators planning liquid cooling systems.

Andrew Bradner, Senior Vice President, Cooling Business at Schneider Electric, says, “When it comes to data centre liquid cooling, flexibility is the key with customers demanding a more diverse and larger portfolio of end-to-end solutions.

"Our new CDUs allow customers to match deployment strategies to a wider range of accelerated computing applications while leveraging decades of specialised cooling experience to ensure optimal performance, reliability, and future-readiness.”

The launch marks the first new product range from Motivair since Schneider Electric acquired the company in February 2025.

Rich Whitmore, CEO of Motivair, comments, “Motivair is a trusted partner for advanced liquid cooling solutions and our new range of technologies enables data centre operators to navigate the AI era with confidence.

"Together with Schneider Electric, our goal is to deliver next-generation cooling solutions that adapt to any HPC, AI, or advanced data centre deployment to deliver seamless scalability, performance, and reliability when it matters most.”

For more from Schneider Electric, click here.

Funding for community projects from Kao SEED Fund
Harlow-based community groups are celebrating new funding awards from the Kao SEED Fund Harlow, sharing a total of £30,000 to power community initiatives that aim to create positive social and environmental change.

Run by advanced data centre operator Kao Data, the second Kao SEED Fund (Social Enterprise and Environment Development Fund) was launched in September as part of the company’s ongoing commitment to supporting the town where it operates its AI data centre campus. Developed in partnership with Harlow Council, the fund offered grants of between £500 and £2,500 to local community groups and not-for-profit organisations to help launch new programmes or create new pilot initiatives.

The wide-ranging projects include funding to support a fully sustainable theatre production of Alice in Wonderland, a women-only boxing programme, free tuition for disadvantaged pupils, and a forest garden for a Scout group.

Funding local communities

Councillor Dan Swords, Leader of Harlow Council, comments, “In Harlow, we are building a community where innovation, opportunity, and local pride go hand in hand.

"The Kao SEED Fund is a fantastic example of how business and local government can work together to invest in the people and projects that make a real difference.

"The Harlow SEED Fund will help community groups across our town to roll out new projects or fund existing work in order to reach more residents and continue to make Harlow a great place to live.”

Lizzy McDowell, Director of Marketing at Kao Data, adds, “We have been so impressed with the creativity and dedication behind the community projects across Harlow.

"It was incredibly difficult to narrow down the applications, but we’re thrilled to support a further 20 inspiring groups, through our Kao SEED Fund initiative, that make such a tangible difference, from environmental programmes [and] arts initiatives through to youth and wellbeing projects.”

The Kao SEED Fund was launched for the first time in Harlow in September in order to recognise and invest in community-led projects that make the town a better place to live and work. The 20 funded Harlow projects are: Butterfly Effect Wellbeing, Changing Lives Football, Epping and Harlow Community Transport, Harlow Arts Trust, Harlow Band Stand, Harlow Hospital Radio, Matipo Arts, Norman Booth Recreation Centre, Open Road Vision, PATACC, Plant pots and Wellies, Potter Street Health & Wellbeing Hub, Razed Roof, Rise Community, Roots to Wellbeing, The Frequency Machine, The Parent Hood of Harlow, The Scouts, The Victoria Hall Performing Arts Association, and Yellow Brick Road.

For more from Kao Data, click here.

atNorth's ICE03 wins award for environmental design
atNorth, an Icelandic operator of sustainable Nordic data centres, has received the Environmental Impact Award at the Data Center Dynamics Awards for its expansion of the ICE03 data centre in Akureyri, Iceland. The category recognises projects that demonstrate clear reductions in the environmental impact of data centre operations.

The expansion was noted for its design approach, which combines environmental considerations with social and economic benefits. ICE03 operates in a naturally cool climate and uses renewable energy to support direct liquid cooling. The site was constructed with sustainable materials, including Glulam and Icelandic rockwool, and was planned with regard for the surrounding landscape.

Heat-reuse partnership and local benefits

atNorth says all its new sites are built to accommodate heat reuse equipment. For ICE03, the company partnered with the local municipality to distribute waste heat to community projects, including a greenhouse where school children learn about ecological cultivation and sustainable food production. The initiative reduces the data centre’s carbon footprint, supports locally grown produce, and contributes to a regional circular economy. ICE03 operates with a PUE below 1.2, compared with a global average of 1.56.

During the first phase of ICE03’s development, more than 90% of the workforce was recruited locally, and the company says it intends to continue hiring within the area as far as possible. atNorth also states it supports educational and community initiatives through volunteer activity and financial contributions. Examples include donating mechatronics equipment to the Vocational College of Akureyri and supporting local sports, events, and search and rescue services.

The ICE03 site has also enabled a new point of presence (PoP) in the region, established by telecommunications company Farice. The PoP provides direct routing to mainland Europe via submarine cables, improving network resilience for northern Iceland.

Ásthildur Sturludóttir, the Mayor of Akureyri, estimates that atNorth’s total investment in the town will reach around €109 million (£95.7 million), with long-term investment expected to rise to approximately €200 million (£175.6 million). This, combined with the wider economic and social contribution of the site, is positioned as a model for future data centre development.

Eyjólfur Magnús Kristinsson, CEO of atNorth, comments, “We are delighted that our ICE03 data centre has been recognised for its positive impact on its local environment.

"There is a critical need for a transformation in the approach to digital infrastructure development to ensure the scalability and longevity of the industry. Data centre operators must take a holistic approach to become long-term, valued partners of thriving communities.”

For more from atNorth, click here.
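To put the two PUE figures quoted above side by side - purely illustrative arithmetic, comparing the overhead energy (PUE minus 1) consumed per unit of IT energy:

```python
# Overhead comparison implied by the PUE figures cited above
# (illustrative only; PUE varies with season, load, and metering boundary).
pue_ice03 = 1.2    # stated upper bound for ICE03
pue_global = 1.56  # global average quoted in the article

overhead_ice03 = pue_ice03 - 1.0    # 0.20 units of overhead per unit of IT energy
overhead_global = pue_global - 1.0  # 0.56 units per unit of IT energy

reduction = 1.0 - overhead_ice03 / overhead_global
print(f"Non-IT overhead reduction vs the global average: {reduction:.0%}")  # ~64%
```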

ABB, Ark deploy medium voltage UPS in UK
ABB, a multinational corporation specialising in industrial automation and electrification products, has completed what it describes as the UK’s first deployment of a medium voltage uninterruptible power supply (UPS) system at Ark Data Centres’ Surrey campus. The installation, with a capacity of 25MVA, is intended to support rising demand for high-density AI computing and large-scale digital workloads.

Ark Data Centres is among the early adopters of ABB’s medium voltage power architecture, which combines grid connection and UPS at the same voltage level to accommodate the growing electrical requirements of AI hardware. The project was delivered in partnership with ABB and JCA.

The installation forms part of Ark’s ongoing expansion, including electrical capacity for next-generation GPUs used in AI training and inference. These systems support high-throughput computing across sectors such as research, healthcare, finance, media, and entertainment, and require stable, scalable power infrastructure.

Medium voltage architecture for AI workloads

Andy Garvin, Chief Operating Officer at Ark Data Centres, comments, “AI is accelerating data centre growth and intensifying the pressure to deliver capacity that is efficient, resilient, and sustainable. With ABB, we’ve delivered a first-of-its-kind solution that positions Ark to meet these challenges while supporting the UK’s digital future.”

Stephen Gibbs, UK Distribution Solutions Marketing and Sales Director at ABB Electrification, adds, “We’re helping data centres design from day one for emerging AI workloads. Our medium voltage UPS technology is AI-ready and a critical step in meeting the power demands of future high-density racks. Delivered as a single solution, we are supporting today’s latest technology and futureproofing for tomorrow’s megawatt-powered servers.

"ABB’s new medium voltage data centre architecture integrates HiPerGuard, the industry’s first solid-state medium voltage UPS, with its UniGear MV switchgear and Zenon ZEE600 control system into a single, end-to-end system. This approach eliminates interface risks and streamlines coordination across design, installation, and commissioning.”

Steve Hill, Divisional Contracts Director at JCA, says, “Delivering a project of this scale brings challenges. Having one partner responsible for the switchgear, UPS, and controls reduced complexity and helped keep the programme on track.

"Working alongside ABB, we were able to coordinate the installation and commissioning effectively so that Ark could benefit from the new system without delays or risks.”

The system reportedly provides up to 25MVA of conditioned power, achieving 98% efficiency under heavy load and freeing floor space for AI computing equipment. Stabilising power at medium voltage should also reduce generator intervention and energy losses.

For more from ABB, click here.
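For a sense of scale, the quoted 98% heavy-load efficiency can be turned into an indicative loss figure - a sketch that assumes the full 25MVA is delivered as real power (unity power factor), which the article does not state:

```python
# Indicative conversion-loss arithmetic for the 25MVA UPS described above.
# Assumption: full 25MVA delivered as real power (unity power factor);
# actual losses depend on load profile and power factor.
throughput_mw = 25.0
efficiency = 0.98  # heavy-load efficiency quoted in the article

losses_kw = throughput_mw * (1 - efficiency) * 1000
print(f"Heat dissipated by the UPS at heavy load: ~{losses_kw:.0f} kW")  # ~500 kW
```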

Supermicro launches liquid-cooled NVIDIA HGX B300 systems
Supermicro, a provider of application-optimised IT systems, has announced the expansion of its NVIDIA Blackwell architecture portfolio with new 4U and 2-OU liquid-cooled NVIDIA HGX B300 systems, now available for high-volume shipment. The systems form part of Supermicro's Data Centre Building Block approach, delivering GPU density and power efficiency for hyperscale data centres and AI factory deployments.

Charles Liang, President and CEO of Supermicro, says, "With AI infrastructure demand accelerating globally, our new liquid-cooled NVIDIA HGX B300 systems deliver the performance density and energy efficiency that hyperscalers and AI factories need today.

"We're now offering the industry's most compact NVIDIA HGX B300 options - achieving up to 144 GPUs in a single rack - whilst reducing power consumption and cooling costs through our proven direct liquid-cooling technology."

System specifications and architecture

The 2-OU liquid-cooled NVIDIA HGX B300 system, built to the 21-inch OCP Open Rack V3 specification, enables up to 144 GPUs per rack. The rack-scale design features blind-mate manifold connections, modular GPU and CPU tray architecture, and component liquid cooling. The system supports eight NVIDIA Blackwell Ultra GPUs at up to 1,100 watts thermal design power each. A single ORV3 rack supports up to 18 nodes with 144 GPUs total, scaling with NVIDIA Quantum-X800 InfiniBand switches and Supermicro's 1.8-megawatt in-row coolant distribution units.

The 4U Front I/O HGX B300 Liquid-Cooled System offers the same compute performance in a traditional 19-inch EIA rack form factor for large-scale AI factory deployments. The 4U system uses Supermicro's DLC-2 technology to capture up to 98% of heat generated by the system through liquid cooling.

Supermicro NVIDIA HGX B300 systems feature 2.1 terabytes of HBM3e GPU memory per system. Both the 2-OU and 4U platforms deliver performance gains at cluster level by doubling compute fabric network throughput up to 800 gigabits per second via integrated NVIDIA ConnectX-8 SuperNICs when used with NVIDIA Quantum-X800 InfiniBand or NVIDIA Spectrum-4 Ethernet.

With the DLC-2 technology stack, data centres can reportedly achieve up to 40% power savings, reduce water consumption through 45°C warm water operation, and eliminate chilled water and compressors.

Supermicro says it delivers the new systems as fully validated, tested racks before shipment. The systems expand Supermicro's portfolio of NVIDIA Blackwell platforms, including the NVIDIA GB300 NVL72, NVIDIA HGX B200, and NVIDIA RTX PRO 6000 Blackwell Server Edition. Each system is also NVIDIA-certified.

For more from Supermicro, click here.
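As a quick cross-check of the rack-level figures above - an illustrative upper bound, since the 1,100W value is a per-GPU maximum rather than a sustained draw:

```python
# Rack-density arithmetic from the figures quoted above (upper bound only;
# real sustained power sits below the per-GPU TDP maximum, and CPUs, NICs,
# and fans add further load on top of this).
nodes_per_rack = 18     # maximum nodes in one ORV3 rack
gpus_per_node = 8       # NVIDIA HGX B300: eight Blackwell Ultra GPUs
gpu_tdp_w = 1100        # up to 1,100W thermal design power per GPU

gpus_per_rack = nodes_per_rack * gpus_per_node   # 144 GPUs
gpu_heat_kw = gpus_per_rack * gpu_tdp_w / 1000   # 158.4 kW from GPUs alone
print(f"{gpus_per_rack} GPUs per rack; ~{gpu_heat_kw:.0f} kW of GPU heat at TDP")
```

A GPU-only heat load approaching 160kW per rack helps explain why the article pairs these systems with megawatt-class in-row coolant distribution units.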

Sabey's Manhattan facility becomes AI inference hub
Sabey Data Centers, a data centre developer, owner, and operator, has said that its New York City facility at 375 Pearl Street is becoming a hub for organisations running advanced AI inference workloads. The facility, known as SDC Manhattan, offers dense connectivity, scalable power, and flexible cooling infrastructure designed to host latency-sensitive, high-throughput systems. As enterprises move from training to deployment, inference infrastructure has become critical for delivering real-time AI applications across industries.

Tim Mirick, President of Sabey Data Centers, says, "The future of AI isn't just about training; it's about delivering intelligence at scale. Our Manhattan facility places that capability at the edge of one of the world's largest and most connected markets.

"That's an enormous advantage for inference models powering everything from financial services to media to healthcare."

Location and infrastructure

Located within walking distance of Wall Street and major carrier hotels, SDC Manhattan is among the few colocation facilities in Manhattan with available power. The facility has nearly one megawatt of turnkey power available and seven megawatts of utility power across two powered shell spaces. The site provides access to numerous network providers as well as low-latency connectivity to major cloud on-ramps and enterprises across the Northeast.

Sabey says it offers organisations the ability to deploy inference clusters close to their users, reducing response times and enabling real-time decision-making. The facility's liquid-cooling-ready infrastructure supports hybrid cooling configurations to accommodate GPUs and custom accelerators.

For more from Sabey Data Centers, click here.

Siemens, nVent develop reference design for AI DCs
German multinational technology company Siemens and nVent, a US manufacturer of electrical connection and protection systems, are collaborating on a liquid cooling and power reference architecture intended for hyperscale AI environments. The design aims to support operators facing rising power densities, more demanding compute loads, and the need for modular infrastructure that maintains uptime and operational resilience.

The joint reference architecture is being developed for 100MW-scale AI data centres using liquid-cooled infrastructure such as the NVIDIA DGX SuperPOD with GB200 systems. It combines Siemens’ electrical and automation technology with NVIDIA’s reference design framework and nVent’s liquid cooling capabilities. The companies state that the architecture is structured to be compatible with Tier III design requirements.

Reference model for power and cooling integration

“We have decades of expertise supporting customers’ next-generation computing infrastructure needs,” says Sara Zawoyski, President of Systems Protection at nVent. “This collaboration with Siemens underscores that commitment.

"The joint reference architecture will help data centre managers deploy our cutting-edge cooling infrastructure to support the AI buildout.”

Ciaran Flanagan, Global Head of Data Center Solutions at Siemens, adds, “This reference architecture accelerates time-to-compute and maximises tokens-per-watt, which is the measure of AI output per unit of energy.

“It’s a blueprint for scale: modular, fault-tolerant, and energy-efficient. Together with nVent and our broader ecosystem of partners, we’re connecting the dots across the value chain to drive innovation, interoperability, and sustainability, helping operators build future-ready data centres that unlock AI’s full potential.”

Reference architectures are increasingly used by data centre operators to support rapid deployment and consistent interface standards. They are particularly relevant as facilities adapt to higher rack-level densities and more intensive computing requirements.

Siemens says it contributes its experience in industrial electrical systems and automation, ranging from medium- and low-voltage distribution to energy management software. nVent adds that it brings expertise in liquid cooling, working with chip manufacturers, original equipment makers, and hyperscale operators.

For more from Siemens, click here.
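The tokens-per-watt metric mentioned in the quote above is simply inference throughput divided by power draw. A minimal sketch, using made-up placeholder numbers that describe no real system:

```python
# Hypothetical illustration of the tokens-per-watt metric cited above.
# Both input values are invented placeholders, not measurements.
tokens_per_second = 50_000.0  # hypothetical cluster inference throughput
power_draw_watts = 120_000.0  # hypothetical total power draw for that workload

tokens_per_watt = tokens_per_second / power_draw_watts
print(f"~{tokens_per_watt:.2f} tokens per second per watt of power drawn")
```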

tde expands tML breakout module for 800GbE
trans data elektronik (tde), a German manufacturer of fibre optic and copper cabling systems for data centres, has further developed its tML system to meet increased network requirements, with the new breakout modules now supporting transceivers up to 800GbE. QSFP, QSFP-DD, and OSFP transceivers can now be used more efficiently and split into ports with lower data rates (4 x 100GbE or 8 x 100GbE). This allows data centre and network operators to increase the port density of their switch and router chassis and make better use of existing hardware. The company says the new breakout module is particularly suitable for use in high-speed data centres and modern telecommunications infrastructures.

“Breakout applications have become firmly established in the high-speed sector,” explains André Engel, Managing Director of tde.

"With our tML breakout modules, customers can now use transceivers up to 800GbE and still split them into smaller, clearly structured port speeds.

"This allows them to combine maximum port density with very clear, structured cabling."

Efficient use of MPO-based high-speed transceivers

The current high-speed transceivers in the QSFP, QSFP-DD, and OSFP form factors have MPO connectors with 12, 16, or 24 fibres - in multimode (MM) or single-mode (SM). Typical applications such as SR4, DR4, and FR4 use eight fibres of the 12-fibre MPO, while SR8, DR8, and FR8 use sixteen fibres of a 16- or 24-fibre MPO.

This is where tde says it comes in with its tML breakout modules. Depending on the application, the modules split the incoming transmission rate into, for example, four 100GbE or eight 100GbE channels with LC duplex connections. This allows multiple dedicated links with lower data rates to be provided from a single high-speed port - for switches, routers, or storage systems, for example. Alternatively, special versions with other connector faces such as MDC, SN, SC, or E2000 are available.

Front MPO connectors and maximum packing density

tde also relies on front-integrated MPO connectors for the latest generation of tML breakout modules. The MPO connections are plugged in directly from the front via patch cables. Compared to conventional solutions with rear MPO connectors, this aims to simplify structured patching, ensure clarity in the rack, and facilitate moves, adds, and changes during operation. A high port density can be achieved without the need for separate fanout cables. Eight tML breakout modules can be installed in the tML module carrier within one height unit.

Future-proofing and investment protection

tde says it has designed the tML breakout module for maximum ease of use. All patching takes place at the front patch panel level, supporting structured and clear cabling. Because modules in the tML module carrier can be mixed and matched depending on the desired application and requirements, and fibre-optic and copper modules can be combined, the breakout module offers high packing density without sacrificing flexibility.

André concludes, “With the addition of the tML breakout module, our tML system platform is well equipped for the future and will remain competitive in the long term.”
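The density claim above can be illustrated with simple arithmetic - a sketch that assumes each breakout module serves one 800GbE transceiver split into 8 x 100GbE, which matches the example split quoted but is our assumption about per-module capacity:

```python
# Illustrative breakout-density arithmetic for the tML module carrier
# described above. Assumption: one 800GbE port per module, split into
# 8 x 100GbE LC duplex links; tde's actual configurations may differ.
modules_per_height_unit = 8   # stated: eight breakout modules per 1U carrier
links_per_module = 8          # 800GbE split into eight 100GbE LC duplex links

links_per_height_unit = modules_per_height_unit * links_per_module  # 64
print(f"Up to {links_per_height_unit} x 100GbE breakout links per height unit")
```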

Energy Estate unveils Tasmanian subsea cables and hubs
Energy Estate Digital, a digital infrastructure platform backed by Energy Estate, has set out plans for new data centre hubs and subsea connectivity in Tasmania as part of a wider programme to support the growth of artificial intelligence infrastructure across Australia, New Zealand, and key international markets.

The company is developing subsea cable routes between Australia and New Zealand, as well as to major global hubs including California, Japan, and India. These new links are intended to support the expanding AI sector by connecting regions that offer land availability, renewable energy potential, and access to water resources.

The platform, launched in December 2024, aligns with national objectives under the Australian National AI Plan announced recently by the Federal Government. As part of its approach to sovereign capability, the company says it intends to offer “golden shares” to councils and economic development agencies in landing-point regions.

Two proposed subsea cable landings in Tasmania will form part of the network: the CaliNewy route from California will come ashore at Bell Bay, while the IndoMaris route from Oman and India will land near Burnie. These proposed locations are designed to complement existing cable links between Victoria and Tasmania and future upgrades anticipated through the Marinus Link project.

Large-scale energy and infrastructure precincts are expected to develop around these landings, hosting AI facilities, data centre campuses, and other power-intensive industries such as manufacturing, renewable fuels production, and electrified transport. These precincts will be supported by renewable energy and storage projects delivered by Energy Estate and its partners.

Partnership to develop industrial and digital precincts

Energy Estate has signed a memorandum of understanding with H2U Group to co-develop energy and infrastructure precincts in Tasmania, beginning with the Bell Bay port and wider industrial area. In 2025, H2U signed a similar agreement with TasPorts to explore a large-scale green hydrogen and ammonia facility within the port.

Bell Bay has been identified by the Tasmanian Government and the Australian Federal Government as a strategic location for industrial development, particularly for hydrogen and green manufacturing projects. Energy Estate and H2U plan to produce a masterplan that builds on existing infrastructure, access to renewable energy, and the region’s established industrial expertise. The work will also align with ongoing efforts within the Bell Bay Advanced Manufacturing Zone.

The digital infrastructure hub proposed for Bell Bay will be the first of three locations Energy Estate intends to develop in Northern Tasmania. The company states that the scale of interest reflects Tasmania’s emerging position as a potential global centre for AI-related activity.

Beyond Tasmania, Energy Estate is advancing similar developments in other regions, including the Hunter in New South Wales; Bass Coast and Portland in Victoria; Waikato, Manawatu, and South Canterbury in New Zealand; and the Central Valley in California.


