Latest News


Supermicro launches liquid-cooled NVIDIA HGX B300 systems
Supermicro, a provider of application-optimised IT systems, has announced the expansion of its NVIDIA Blackwell architecture portfolio with new 4U and 2-OU liquid-cooled NVIDIA HGX B300 systems, now available for high-volume shipment. The systems form part of Supermicro's Data Centre Building Block approach, delivering GPU density and power efficiency for hyperscale data centres and AI factory deployments.

Charles Liang, President and CEO of Supermicro, says, "With AI infrastructure demand accelerating globally, our new liquid-cooled NVIDIA HGX B300 systems deliver the performance density and energy efficiency that hyperscalers and AI factories need today.

"We're now offering the industry's most compact NVIDIA HGX B300 options - achieving up to 144 GPUs in a single rack - whilst reducing power consumption and cooling costs through our proven direct liquid-cooling technology."

System specifications and architecture

The 2-OU liquid-cooled NVIDIA HGX B300 system, built to the 21-inch OCP Open Rack V3 specification, enables up to 144 GPUs per rack. The rack-scale design features blind-mate manifold connections, modular GPU and CPU tray architecture, and component-level liquid cooling. The system supports eight NVIDIA Blackwell Ultra GPUs at up to 1,100 watts thermal design power each. A single ORV3 rack supports up to 18 nodes with 144 GPUs in total, scaling with NVIDIA Quantum-X800 InfiniBand switches and Supermicro's 1.8-megawatt in-row coolant distribution units.

The 4U Front I/O HGX B300 Liquid-Cooled System offers the same compute performance in a traditional 19-inch EIA rack form factor for large-scale AI factory deployments. The 4U system uses Supermicro's DLC-2 technology to capture up to 98% of the heat generated by the system through liquid cooling.

Supermicro NVIDIA HGX B300 systems feature 2.1 terabytes of HBM3e GPU memory per system. Both the 2-OU and 4U platforms deliver performance gains at cluster level by doubling compute fabric network throughput to up to 800 gigabits per second via integrated NVIDIA ConnectX-8 SuperNICs when used with NVIDIA Quantum-X800 InfiniBand or NVIDIA Spectrum-4 Ethernet.

With the DLC-2 technology stack, data centres can reportedly achieve up to 40% power savings, reduce water consumption through 45°C warm water operation, and eliminate chilled water and compressors. Supermicro says it delivers the new systems as fully validated, tested racks before shipment.

The systems expand Supermicro's portfolio of NVIDIA Blackwell platforms, including the NVIDIA GB300 NVL72, NVIDIA HGX B200, and NVIDIA RTX PRO 6000 Blackwell Server Edition. Each system is also NVIDIA-certified.

For more from Supermicro, click here.
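As a rough illustration of the rack-level arithmetic behind the figures quoted above (18 nodes per ORV3 rack, eight GPUs per node, up to 1,100 watts thermal design power per GPU, and up to 98% heat capture via DLC-2), the sketch below estimates GPU count and the GPU heat load a liquid loop would need to absorb. It is a back-of-the-envelope calculation only: it ignores CPUs, NICs, and other components, applies the 98% capture figure to GPU heat alone, and uses illustrative variable names rather than anything drawn from Supermicro tooling.

```python
# Back-of-the-envelope estimate of rack-level GPU count and heat load,
# using only the figures quoted in the announcement above.

NODES_PER_RACK = 18        # ORV3 rack populated with 2-OU nodes
GPUS_PER_NODE = 8          # NVIDIA HGX B300 baseboard
GPU_TDP_W = 1_100          # maximum quoted watts per Blackwell Ultra GPU
HEAT_CAPTURE_RATIO = 0.98  # "up to 98%" DLC-2 heat capture, applied here to GPU heat only

gpus_per_rack = NODES_PER_RACK * GPUS_PER_NODE
gpu_heat_kw = gpus_per_rack * GPU_TDP_W / 1_000
liquid_heat_kw = gpu_heat_kw * HEAT_CAPTURE_RATIO

print(f"GPUs per rack:           {gpus_per_rack}")         # 144
print(f"GPU thermal load:        {gpu_heat_kw:.1f} kW")    # ~158.4 kW
print(f"Heat captured by liquid: {liquid_heat_kw:.1f} kW") # ~155.2 kW
```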

Sabey's Manhattan facility becomes AI inference hub
Sabey Data Centers, a data centre developer, owner, and operator, has said that its New York City facility at 375 Pearl Street is becoming a hub for organisations running advanced AI inference workloads. The facility, known as SDC Manhattan, offers dense connectivity, scalable power, and flexible cooling infrastructure designed to host latency-sensitive, high-throughput systems. As enterprises move from training to deployment, inference infrastructure has become critical for delivering real-time AI applications across industries.

Tim Mirick, President of Sabey Data Centers, says, "The future of AI isn't just about training; it's about delivering intelligence at scale. Our Manhattan facility places that capability at the edge of one of the world's largest and most connected markets.

"That's an enormous advantage for inference models powering everything from financial services to media to healthcare."

Location and infrastructure

Located within walking distance of Wall Street and major carrier hotels, SDC Manhattan is one of the few colocation facilities in Manhattan with available power. The facility has nearly one megawatt of turnkey power available and seven megawatts of utility power across two powered shell spaces. The site provides access to numerous network providers as well as low-latency connectivity to major cloud on-ramps and enterprises across the Northeast.

Sabey says it offers organisations the ability to deploy inference clusters close to their users, reducing response times and enabling real-time decision-making. The facility's liquid-cooling-ready infrastructure supports hybrid cooling configurations to accommodate GPUs and custom accelerators.

For more from Sabey Data Centers, click here.

Siemens, nVent develop reference design for AI DCs
German multinational technology company Siemens and nVent, a US manufacturer of electrical connection and protection systems, are collaborating on a liquid cooling and power reference architecture intended for hyperscale AI environments. The design aims to support operators facing rising power densities, more demanding compute loads, and the need for modular infrastructure that maintains uptime and operational resilience.

The joint reference architecture is being developed for 100MW-scale AI data centres using liquid-cooled infrastructure such as the NVIDIA DGX SuperPOD with GB200 systems. It combines Siemens' electrical and automation technology with NVIDIA's reference design framework and nVent's liquid cooling capabilities. The companies state that the architecture is structured to be compatible with Tier III design requirements.

Reference model for power and cooling integration

"We have decades of expertise supporting customers' next-generation computing infrastructure needs," says Sara Zawoyski, President of Systems Protection at nVent. "This collaboration with Siemens underscores that commitment.

"The joint reference architecture will help data centre managers deploy our cutting-edge cooling infrastructure to support the AI buildout."

Ciaran Flanagan, Global Head of Data Center Solutions at Siemens, adds, "This reference architecture accelerates time-to-compute and maximises tokens-per-watt, which is the measure of AI output per unit of energy.

"It's a blueprint for scale: modular, fault-tolerant, and energy-efficient. Together with nVent and our broader ecosystem of partners, we're connecting the dots across the value chain to drive innovation, interoperability, and sustainability, helping operators build future-ready data centres that unlock AI's full potential."

Reference architectures are increasingly used by data centre operators to support rapid deployment and consistent interface standards. They are particularly relevant as facilities adapt to higher rack-level densities and more intensive computing requirements.

Siemens says it contributes its experience in industrial electrical systems and automation, ranging from medium- and low-voltage distribution to energy management software. nVent adds that it brings expertise in liquid cooling, working with chip manufacturers, original equipment makers, and hyperscale operators.

For more from Siemens, click here.
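The tokens-per-watt metric cited in the quote above is defined there as AI output per unit of energy. As a purely illustrative sketch of how such a figure can be computed (the function name and example numbers below are hypothetical and are not drawn from Siemens, nVent, or NVIDIA material):

```python
# Illustrative tokens-per-watt calculation, following the definition in the
# quote above: AI output (tokens) per unit of energy. All figures are hypothetical.

def tokens_per_second_per_watt(tokens_per_second: float, power_watts: float) -> float:
    """Throughput-normalised efficiency: tokens generated per second per watt drawn.
    Numerically equivalent to tokens produced per joule of energy consumed."""
    return tokens_per_second / power_watts

# Hypothetical example: a cluster sustaining 1.2 million tokens/s while drawing 2.4 MW.
efficiency = tokens_per_second_per_watt(1_200_000, 2_400_000)
print(f"{efficiency:.2f} tokens per joule")  # 0.50
```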

tde expands tML breakout module for 800GbE Ethernet
trans data elektronik (tde), a German manufacturer of fibre optic and copper cabling systems for data centres, has further developed its tML system to meet increased network requirements, with the new breakout modules now supporting transceivers up to 800GbE. QSFP, QSFP-DD, and OSFP transceivers can now be used more efficiently and split into ports with lower data rates (4 x 100GbE or 8 x 100GbE). This allows data centre and network operators to increase the port density of their switch and router chassis and make better use of existing hardware. The company says the new breakout module is particularly suitable for use in high-speed data centres and modern telecommunications infrastructures.

"Breakout applications have become firmly established in the high-speed sector," explains André Engel, Managing Director of tde. "With our tML breakout modules, customers can now use transceivers up to 800GbE and still split them into smaller, clearly structured port speeds.

"This allows them to combine maximum port density with very clear, structured cabling."

Efficient use of MPO-based high-speed transceivers

The current high-speed transceivers in the QSFP, QSFP-DD, and OSFP form factors have MPO connectors with 12, 16, or 24 fibres - in multimode (MM) or single-mode (SM). Typical applications such as SR4, DR4, and FR4 use eight fibres of the 12-fibre MPO, while SR8, DR8, and FR8 use sixteen fibres of a 16- or 24-fibre MPO. This is where tde says its tML breakout modules come in. Depending on the application, the modules split the incoming transmission rate into, for example, four 100GbE or eight 100GbE channels with LC duplex connections. This allows multiple dedicated links with lower data rates to be provided from a single high-speed port - for switches, routers, or storage systems, for example. Alternatively, special versions with other connector faces such as MDC, SN, SC, or E2000 are available.

Front MPO connectors and maximum packing density

tde also relies on front-integrated MPO connectors for the latest generation of tML breakout modules. The MPO connections are plugged in directly from the front via patch cables. Compared to conventional solutions with rear MPO connectors, this aims to simplify structured patching, ensure clarity in the rack, and facilitate moves, adds, and changes during operation. A high port density can be achieved without the need for separate fanout cables. Eight tML breakout modules can be installed in the tML module carrier within one height unit.

Future-proofing and investment protection

tde says it has designed the tML breakout module for maximum ease of use. It is patched only at the front patch panel level, which supports structured, clear cabling. Since the tML module carrier can be mixed and matched depending on the desired application and requirements, the breakout module is designed to offer high packing density. Fibre-optic and copper modules can also be combined.

André concludes, "With the addition of the tML breakout module, our tML system platform is well equipped for the future and will remain competitive in the long term."
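As a purely illustrative aid to the fibre counts described above (eight fibres of a 12-fibre MPO for SR4/DR4/FR4-style applications, sixteen fibres for SR8/DR8-style applications), the sketch below maps a breakout application to the number of LC duplex ports it yields. The names and figures follow the article and common optics conventions; they are not taken from tde product data.

```python
# Illustrative mapping from MPO-based breakout applications to LC duplex ports.
# Each full-duplex lane uses one transmit and one receive fibre, so the number
# of breakout ports is fibres_used / 2. Values follow the article and common
# optics conventions, not tde documentation.

BREAKOUTS = {
    #  application       fibres used    lane rate (GbE)
    "400G SR4/DR4/FR4": {"fibres_used": 8,  "lane_gbe": 100},
    "800G SR8/DR8":     {"fibres_used": 16, "lane_gbe": 100},
}

for name, spec in BREAKOUTS.items():
    ports = spec["fibres_used"] // 2  # one Tx + one Rx fibre per duplex port
    print(f"{name}: {ports} x {spec['lane_gbe']}GbE LC duplex ports")
```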

Energy Estate unveils Tasmanian subsea cables and hubs
Energy Estate Digital, a digital infrastructure platform backed by Energy Estate, has set out plans for new data centre hubs and subsea connectivity in Tasmania as part of a wider programme to support the growth of artificial intelligence infrastructure across Australia, New Zealand, and key international markets.

The company is developing subsea cable routes between Australia and New Zealand, as well as to major global hubs including California, Japan, and India. These new links are intended to support the expanding AI sector by connecting regions that offer land availability, renewable energy potential, and access to water resources. The platform, launched in December 2024, aligns with national objectives under the Australian National AI Plan announced recently by the Federal Government. As part of its approach to sovereign capability, the company says it intends to offer "golden shares" to councils and economic development agencies in landing-point regions.

Two proposed subsea cable landings in Tasmania will form part of the network: the CaliNewy route from California will come ashore at Bell Bay, while the IndoMaris route from Oman and India will land near Burnie. These proposed locations are designed to complement existing cable links between Victoria and Tasmania and future upgrades anticipated through the Marinus Link project.

Large-scale energy and infrastructure precincts are expected to develop around these landings, hosting AI facilities, data centre campuses, and other power-intensive industries such as manufacturing, renewable fuels production, and electrified transport. These precincts will be supported by renewable energy and storage projects delivered by Energy Estate and its partners.

Partnership to develop industrial and digital precincts

Energy Estate has signed a memorandum of understanding with H2U Group to co-develop energy and infrastructure precincts in Tasmania, beginning with the Bell Bay port and wider industrial area. In 2025, H2U signed a similar agreement with TasPorts to explore a large-scale green hydrogen and ammonia facility within the port.

Bell Bay has been identified by the Tasmanian Government and the Australian Federal Government as a strategic location for industrial development, particularly for hydrogen and green manufacturing projects. Energy Estate and H2U plan to produce a masterplan that builds on existing infrastructure, access to renewable energy, and the region's established industrial expertise. The work will also align with ongoing efforts within the Bell Bay Advanced Manufacturing Zone.

The digital infrastructure hub proposed for Bell Bay will be the first of three locations Energy Estate intends to develop in Northern Tasmania. The company states that the scale of interest reflects Tasmania's emerging position as a potential global centre for AI-related activity.

Beyond Tasmania, Energy Estate is advancing similar developments in other regions, including the Hunter in New South Wales; Bass Coast and Portland in Victoria; Waikato, Manawatu, and South Canterbury in New Zealand; and the Central Valley in California.

De-risking data centre construction
In this article for DCNN, Straightline Consulting's Craig Eadie discusses how the rapid rise of hyperscale and AI-driven data centres is amplifying project complexity and risk, as well as why comprehensive commissioning from day one is now critical. Craig highlights how long-lead equipment delays, tightening grid constraints, and beginning commissioning before quality control can derail builds, stressing the need for commissioning teams to actively manage supply chains, verify equipment readiness, and address power availability early:

How can comprehensive commissioning de-risk data centre construction?

Data centres are the backbone of the global economy. As more businesses and governments depend on digital infrastructure, demand for hyperscale and megascale facilities is surging. With that growth comes greater risk. Project timelines are tighter, systems are more complex, and the cost of failure has never been higher. From small enterprise facilities to multi-megawatt hyperscale builds, it's critical that commissioning teams control and mitigate risk throughout the process.

It's rarely one big crisis that causes a data centre project to fail. More often, it's a chain of small missteps - from quality control and documentation to equipment delivery or communication - that compound into disaster. Taking a comprehensive approach to commissioning from day one to handover can significantly de-risk the process.

Managing upwards (in the supply chain)

It wasn't long ago that a 600 kW project was considered large. Now, the industry routinely delivers facilities of 25, 40, or even 60 MW. Both sites and the delivery process are getting more complex as well, with advanced systems, increasing digitisation, and external pressures on manufacturers and their supply chains. However, the core challenges remain the same; it's just the consequences that have become far more serious.

Long-lead equipment like generators or switchgear can have wait times of 35 to 50 weeks. Clients often procure equipment a year in advance, but that doesn't guarantee it will arrive on time. There's no use expecting delivery on 1 July if the manufacturer is still waiting on critical components.

Commissioning teams can de-risk this process by actively managing the equipment supply chain. Factory visits to check part inventories and verify assembly schedules can ensure that if a generator is going to be late, everyone knows early enough to re-sequence commissioning activities and keep the project moving. The critical path may shift, but the project doesn't grind to a halt.

Managing the supply chain on behalf of the customer is an increasingly important part of commissioning complex, high-stakes facilities like data centres. Luckily, a lot of companies are realising that spending a little up front is better than paying a few hundred thousand dollars every week when the project is late.

Securing power

More and more clients are facing grid limitations. As AI applications grow, so too do power demands, and utilities often can't keep pace. A data centre without power is just an expensive warehouse, which is why some clients are turning to behind-the-meter solutions like near-site wind farms or rooftop solar to secure their timelines, while bigger players negotiate preferential rates and access with utilities. This approach is meeting with increasingly stern regulatory pushback in a lot of markets, however.

You can have a perfectly coordinated build, but if the grid can't deliver power on time, it's game over. Power availability needs to be considered as early as possible in the process, and sometimes you have to get creative about solving these challenges.

Commissioning without quality control is just firefighting

One of the most common mistakes we see is starting commissioning before quality control is complete. This turns commissioning into a fault-finding exercise instead of a validation process. The intended role of commissioning is to confirm that systems have been installed correctly and work as designed. If things aren't ready, commissioning becomes firefighting, and that's where schedules slip.

Data centres are not forgiving environments. You can't simply shut down a hyperscale AI facility to fix an oversight after it's gone live. There is no "more or less right" in commissioning. It's either right or it isn't.

Technology-driven transparency and communication

One of the biggest improvements we've seen in recent years has come through better project visibility. By using cutting-edge platforms like Facility Grid, commissioning teams have a complete cradle-to-grave record of every asset in a facility. If a switchboard is built in a factory in Germany and installed in a project in France, it's tracked from manufacturing to installation. Every test and every piece of documentation is uploaded. If a server gets plugged into a switchboard, the platform knows who did it, when they did it, and what comments were made, with a photographic backup of every step.

It means that commissioning, construction, and design teams can collaborate across disciplines with full transparency. Tags and process gates ensure that no stage is marked complete until all required documentation and quality checks are in place. That traceability removes ambiguity. It helps keep everyone accountable and on the same page, even on the most complex projects, where adjustments are an essential part of reducing the risk of delays and disruption.

The biggest differences between a project that fails and one that succeeds are communication and clear organisational strategies. Precise processes, reliable documentation, early engagement, and constant communication - everyone on the project team needs to be pulling in the same direction, whether they're part of the design, construction, or commissioning and handover processes.

This isn't just about checking boxes and handing over a building; commissioning is about de-risking the whole process so that you can deliver a complex, interconnected, multi-million-pound facility that works, that's safe, and that will stay operational long after the servers spin up and the clients move in. In the past, the commissioning agent was typically seen as a necessary evil. Now, in the data centre sector and other high-stakes, mission-critical industries, commissioning is a huge mitigator of risk and an enabler of success.
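To illustrate the process-gate principle Craig describes - no stage marked complete until all required documentation and quality checks are in place - here is a minimal, hypothetical sketch. It is not Facility Grid's API or data model; the class and field names are invented purely for illustration.

```python
# Minimal, hypothetical sketch of a commissioning process gate: a stage cannot
# be marked complete until every required document is attached and QC has passed.
# Illustrative only; this does not reflect Facility Grid's actual API.

from dataclasses import dataclass, field

@dataclass
class StageGate:
    name: str
    required_docs: set[str]                          # e.g. {"factory_test_report", "install_photo"}
    attached_docs: set[str] = field(default_factory=set)
    qc_passed: bool = False
    complete: bool = False

    def attach(self, doc: str) -> None:
        self.attached_docs.add(doc)

    def mark_complete(self) -> None:
        missing = self.required_docs - self.attached_docs
        if missing or not self.qc_passed:
            raise ValueError(f"Gate '{self.name}' blocked: missing={missing}, qc_passed={self.qc_passed}")
        self.complete = True

# Usage: the gate refuses completion until documentation and QC are in place.
gate = StageGate("switchboard installation", {"factory_test_report", "install_photo"})
gate.attach("factory_test_report")
gate.qc_passed = True
try:
    gate.mark_complete()      # raises: install_photo still missing
except ValueError as err:
    print(err)
gate.attach("install_photo")
gate.mark_complete()          # now succeeds
print(gate.complete)          # True
```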

Europe races to build its own AI backbone
Recent outages across global cloud infrastructure have once again served as a reminder of how deeply Europe depends on foreign hyperscalers. When platforms run on AWS or services protected by Cloudflare fail, European factories, logistics hubs, retailers, and public services can stall instantly.

US-based cloud providers currently dominate Europe's infrastructure landscape. According to market data, Amazon, Microsoft, and Google together control roughly 70% of Europe's public cloud market. In contrast, all European providers combined account for only about 15%. This share has declined sharply over the past decade. For European enterprises, this means limited leverage over resilience, performance, data governance, and long-term sovereignty.

This same structural dependency is now extending from cloud infrastructure directly into artificial intelligence and its underlying investments. Between 2018 and 2023, US companies attracted more than €120 billion (£104 billion) in private AI investment, while the European Union drew about €32.5 billion (£28 billion) over the same period. In 2024 alone, US-based AI firms raised roughly $109 billion (£81 billion), more than six times the total private AI investment in Europe that year.

Europe is therefore trying to close the innovation gap while simultaneously tightening regulation, creating a paradox in which calls for digital sovereignty grow louder even as reliance on non-European infrastructure deepens. The European Union's Apply AI Strategy is designed to move AI out of research environments and into real industrial use, backed by more than €1 billion in funding. However, most of the computing power, cloud platforms, and model infrastructure required to deploy these systems at scale still comes from outside Europe. This creates a structural risk: even as AI adoption accelerates inside European industry, much of the strategic control over its operation may remain in foreign hands.

Why industrial AI is Europe's real proving ground

For any large-scale technology strategy to succeed, it must be tested and refined through real-world deployment, not only shaped at the policy level. The effectiveness of Europe's AI push will ultimately depend on how quickly new rules, funding mechanisms, and technical standards translate into working systems, and how fast feedback from practice can inform the next iteration.

This is where industrial environments become especially important. They produce large amounts of real-time data, and the results of AI use are quickly visible in productivity and cost. As a result, industrial AI is becoming one of the main testing grounds for Europe's AI ambitions. The companies applying AI in practice will be the first to see what works, what does not, and what needs to be adjusted.

According to Giedrė Rajuncė, CEO and co-founder of GREÏ, an AI-powered operational intelligence platform for industrial sites, this shift is already visible on the factory floor, where AI is changing how operations are monitored and optimised in real time. She notes, "AI can now monitor operations in real time, giving companies a new level of visibility into how their processes actually function. I call it a real-time revolution, and it is available at a cost no other technology can match. Instead of relying on expensive automation as the only path to higher effectiveness, companies can now plug AI-based software into existing cameras and instantly unlock 10-30% efficiency gains."

She adds that Apply AI reshapes competition beyond technology alone, stating, "Apply AI is reshaping competition for both talent and capital. European startups are now competing directly with US giants for engineers, researchers, and investors who are increasingly focused on industrial AI. From our experience, progress rarely starts with a sweeping transformation. It starts with solving one clear operational problem where real-time detection delivers visible impact, builds confidence, and proves return on investment."

The data confirms both movement and caution. According to Eurostat, 41% of large EU enterprises had adopted at least one AI-based technology in 2024. At the same time, a global survey by McKinsey & Company shows that 88% of organisations worldwide are already using AI in at least one business function.

"Yes, the numbers show that Europe is still moving more slowly," Giedrė concludes. "But they also show something even more important. The global market will leave us no choice but to accelerate. That means using the opportunities created by the EU's push for AI adoption before the gap becomes structural."

BAC immersion cooling tank gains Intel certification
BAC (Baltimore Aircoil Company), a provider of data centre cooling equipment, has received certification for its immersion cooling tank as part of the Intel Data Center Certified Solution for Immersion Cooling, covering fourth- and fifth-generation Xeon processors. The programme aims to validate immersion technologies that meet the efficiency and sustainability requirements of modern data centres.

The Intel certification process involved extensive testing of immersion tanks, cooling fluids, and IT hardware. It was developed collaboratively by BAC, Intel, ExxonMobil, Hypertec, and Micron. The programme also enables Intel to offer a warranty rider for single-phase immersion-cooled Xeon processors, providing assurance on durability and hardware compatibility.

Testing was carried out at Intel's Advanced Data Center Development Lab in Hillsboro, Oregon. BAC's immersion cooling tanks, including its CorTex technology, were used to validate performance, reliability, and integration between cooling fluid and IT components.

"Immersion cooling represents a critical advancement in data centre thermal management, and this certification is a powerful validation of that progress," says Jan Tysebeart, BAC's General Manager of Data Centers. "Our immersion cooling tanks are engineered for the highest levels of efficiency and reliability.

"By participating in this collaborative certification effort, we're helping to ensure a trusted, seamless, and superior experience for our customers worldwide."

Joint testing to support industry adoption

The certification builds on BAC's work in high-efficiency cooling design. Its Cobalt immersion system, which combines an indoor immersion tank with an outdoor heat-rejection unit, is designed to support low Power Usage Effectiveness values while improving uptime and sustainability.

Jan continues, "Through rigorous joint testing and validation by Intel, we've proven that immersion cooling can bridge IT hardware and facility infrastructure more efficiently than ever before.

"Certification programmes like this one are key to accelerating industry adoption by ensuring every element - tank, fluid, processor, and memory - meets the same high standards of reliability and performance."

For more from BAC, click here.

Portable data centre to heat Scandinavian towns
Power Mining, a Baltics-based personal Bitcoin mining device manufacturer, has developed a portable data centre that will heat towns using residual heat from Bitcoin mining. The first two data centres, housed in shipping containers, will be shipped to a Scandinavian town, where they will be connected to the municipal heating system. In one year, one Power Mining data centre can reportedly mine up to 9.7 Bitcoin and heat up to 2,000 homes. Drawing 1.6 MW of power, the data centre achieves 95% energy efficiency, providing the municipality with 1.52 MW of heat.

A portable data centre design

The data centres are built in Latvia, at a cost starting from €300,000 (£262,000). Because they are assembled inside a shipping container, they can easily be shipped around the world. Each data centre is made up of eight server closets, each outfitted with 20 Whatsminer M63S++ servers that consume 10 kW of electricity each and create an equivalent amount of heat. The servers raise the incoming coolant temperature by 10-14°C while mining Bitcoin.

Each server closet is equipped with warm and cool fluid collectors, which send the warmed liquid to a built-in heat pump station, where a 1.7 MW heat exchanger redistributes heat from the data centre to the town's heating grid. If the heating grid does not require additional heat from the data centre, the heated fluid is redirected to a built-in dry cooler, which adjusts the temperature to suit the needs of the servers. This way, the data centre is able to cool itself and also contribute to balancing the municipality's heating grid.

Steps towards increased energy efficiency

The development of a passive heating data centre is one step towards increased energy efficiency in Bitcoin mining. While classical data centres can collect heat at approximately 27°C, Power Mining says its data centres can collect heat at up to 65°C, providing cities with more efficient sources of heat.

European data centres already account for more than 3% of the continent's total electricity consumption, which is expected to surpass 150 TWh annually - equivalent to all of Poland's electricity demand. Up to 40% of this energy is turned into heat, which is most often released into the atmosphere. If this energy were collected and redirected back to heating, it could provide up to 10 million European households with heat. Heat collection from data centres could become one of the most effective ways to combine digitalisation and climate goals.
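The headline power figures above are internally consistent, as the following back-of-the-envelope sketch shows. It uses only the numbers quoted in the article; the variable names are illustrative.

```python
# Cross-check of the figures quoted above: 8 server closets, 20 miners per
# closet, 10 kW per miner, and 95% of consumed power recovered as usable heat.

CLOSETS = 8
MINERS_PER_CLOSET = 20
POWER_PER_MINER_KW = 10
HEAT_RECOVERY = 0.95

total_power_mw = CLOSETS * MINERS_PER_CLOSET * POWER_PER_MINER_KW / 1_000
usable_heat_mw = total_power_mw * HEAT_RECOVERY

print(f"Total electrical draw: {total_power_mw:.2f} MW")  # 1.60 MW
print(f"Usable heat to grid:   {usable_heat_mw:.2f} MW")  # 1.52 MW
```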

Aggreko expands liquid-cooled load banks for AI DCs
Aggreko, a British multinational temporary power generation and temperature control company, has expanded its liquid-cooled load bank fleet by 120MW to meet rising global demand for commissioning equipment used in high-density data centres. The company plans to double this capacity in 2026, supporting deployments across North America, Europe, and Asia, as operators transition to liquid cooling to manage the growth of AI and high-performance computing.

Increasing rack densities, now reaching between 300kW and 500kW in some environments, have pushed conventional air-cooling systems to their limits. Liquid cooling is becoming the standard approach, offering far greater heat removal efficiency and significantly lower power consumption. As these systems mature, accurate simulation of thermal and electrical loads has become essential during commissioning to minimise downtime and protect equipment.

The expanded fleet enables Aggreko to provide contractors and commissioning teams with equipment capable of testing both primary and secondary cooling loops, including chiller lines and coolant distribution units. The load banks also simulate electrical demand during integrated systems testing.

Billy Durie, Global Sector Head – Data Centres at Aggreko, says, "The data centre market is growing fast, and with that speed comes the need to adopt energy-efficient cooling systems. With this come challenges that demand innovative testing solutions.

"Our multi-million-pound investment in liquid-cooled load banks enables our partners - including those investing in hyperscale data centre delivery - to commission their facilities faster, reduce risks, and achieve ambitious energy efficiency goals."

Supporting commissioning and sustainability targets

Liquid-cooled load banks replicate the heat output of IT hardware, enabling operators to validate cooling performance before systems go live. This approach can improve Power Usage Effectiveness and Water Usage Effectiveness while reducing the likelihood of early operational issues.

Manufactured with corrosion-resistant materials and advanced control features, the equipment is designed for use in environments where reliability is critical. Remote operation capabilities and simplified installation procedures are also intended to reduce commissioning timelines. With global data centre power demand projected to rise significantly by 2030, driven by AI and high-performance computing, the ability to validate cooling systems efficiently is increasingly important.

Aggreko says it also provides commissioning support as part of project delivery, working with data centre teams to develop testing programmes suited to each site.

Billy continues, "Our teams work closely with our customers to understand their infrastructure, challenges, and goals, developing tailored testing solutions that scale with each project's complexity.

"We're always learning from projects, refining our design and delivery to respond to emerging market needs such as system cleanliness, water quality management, and bespoke, end-to-end project support."

Aggreko states that the latest investment strengthens its ability to support high-density data centre construction and aligns with wider moves towards more efficient and sustainable operations.

Billy adds, "The volume of data centre delivery required is unprecedented. By expanding our liquid-cooled load bank fleet, we're scaling to meet immediate market demand and to help our customers deliver their data centres on time.

"This is about providing the right tools to enable innovation and growth in an era defined by AI."

For more from Aggreko, click here.


