Data Centres


Trane adds CRAH units to DC cooling portfolio
Trane, an American manufacturer of heating, ventilation, and air conditioning (HVAC) systems, has expanded its data centre thermal management range with the addition of a Computer Room Air Handler (CRAH) system. The unit is designed to maintain airflow and temperature conditions for servers and other electronic equipment, aiming to support operational uptime while reducing energy use.

The CRAH system offers a broad capacity range and customisable configurations and is equipped with Trane’s Symbio controller. The controller enables leader designation and dynamic reassignment for up to 32 units, allowing continuous operation and access to digital tools for lifecycle management.

According to Trane, the new airside system is intended for both colocation and hyperscale data centre operators seeking flexible integration into existing or new-build facilities.

Steve Obstein, Vice President and General Manager, Data Centres, Trane Technologies, says, “Expansion of our airside offer gives our colo and hyperscale customers greater flexibility for configuring custom systems and addresses the growing trend toward a single-source solutions provider.”

Integration and lifecycle support

The CRAH addition is part of Trane’s wider approach to unifying and integrating thermal management systems through smart controls. The company offers local service teams across North America and remote monitoring capabilities for predictive maintenance and operational oversight.

Recent updates to Trane’s thermal management portfolio include:

• Scalable liquid cooling platforms
• A fan coil wall platform
• Larger capacity and higher ambient temperature air-cooled chillers

The CRAH system has been developed to operate alongside these technologies as part of a consolidated data centre cooling strategy, with the aim of improving efficiency, reliability, and sustainability.

For more from Trane, click here.
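The leader designation and dynamic reassignment behaviour described above follows a familiar failover pattern: one unit coordinates the group, and if it stops responding, another healthy unit takes over so operation continues. The Python sketch below is a minimal, hypothetical illustration of that general pattern only; the class names, methods, and timeout values are assumptions for illustration and do not represent Trane's Symbio controller implementation.

```python
from __future__ import annotations

import time

# Minimal, hypothetical sketch of leader designation and dynamic reassignment
# among up to 32 networked cooling units. Illustrates the general failover
# pattern only; not Trane's Symbio controller logic.

MAX_UNITS = 32
HEARTBEAT_TIMEOUT_S = 10.0  # assumed value: unit considered offline after this

class CrahUnit:
    def __init__(self, unit_id: int):
        self.unit_id = unit_id
        self.last_heartbeat = time.monotonic()

    def heartbeat(self) -> None:
        """Record that the unit is alive and communicating."""
        self.last_heartbeat = time.monotonic()

    def is_healthy(self) -> bool:
        return (time.monotonic() - self.last_heartbeat) < HEARTBEAT_TIMEOUT_S

def elect_leader(units: list[CrahUnit]) -> CrahUnit | None:
    """Designate the healthy unit with the lowest ID as group leader.

    If the current leader stops reporting, the next call returns a different
    healthy unit, which is the 'dynamic reassignment' step.
    """
    healthy = [u for u in units if u.is_healthy()]
    return min(healthy, key=lambda u: u.unit_id) if healthy else None

if __name__ == "__main__":
    group = [CrahUnit(i) for i in range(1, MAX_UNITS + 1)]
    leader = elect_leader(group)
    print(f"Leader: unit {leader.unit_id}")  # unit 1 while all units are healthy
```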

DC BLOX secures $1.15bn for Atlanta data centre
DC BLOX, a provider of connected data centres and fibre networks, has announced that it has closed $1.15 billion (£858 million) in green loan financing for the construction of a data centre campus in Douglas County, Georgia, USA. The funds will support the development of a 120 MW data centre and include campus expansion to support an additional 80 MW, available in 2027.

“Securing this capital confirms confidence in our execution track record,” comments Melih Ileri, SVP of Capital Markets & Strategy at DC BLOX. “Continuing to deliver our projects on time and with excellence has earned us the trust of our customers and investors, leading to this historic growth in our business.”

This project comes on the heels of recently announced DC BLOX projects including multiple hyperscale edge nodes across the US Southeast. With additional hyperscale-ready data centre capacity available in Conyers and Douglasville, Georgia, DC BLOX believes it is set to rapidly expand its presence around Atlanta.

“With this latest project announcement, DC BLOX continues to deliver on its mission to build the foundational digital infrastructure needed to drive the Southeast’s growing economy,” claims Jeff Uphues, CEO of DC BLOX. “Atlanta is the fastest-growing data centre market in the US today and we are proud to enable our customers to expand their footprint in our region.”

This financing follows the prior $265 million (£197.5 million) green loan secured from industry lenders, as well as the growth equity that was committed by Post Road Group in the fourth quarter of 2024.

“The DC BLOX management team has done a terrific job positioning the business for success in the Southeast, with a consistent focus on serving the customer and community,” says Michael Bogdan, Managing Partner at Post Road Group. “We are thankful to all our capital partners who have helped capitalise the company to meet the tremendous hyperscale and edge growth the company has experienced.”

Those involved in the deal

• ING Capital served as Structuring and Administrative Agent
• ING, Mizuho Bank, and Natixis Corporate & Investment Banking (Natixis CIB) served as Initial Coordinating Lead Arrangers and Joint Bookrunners
• First Citizens Bank served as Coordinating Lead Arranger
• CoBank ACB, LBBW New York Branch, The Toronto-Dominion Bank New York Branch, and KeyBank National Association served as Joint Lead Arrangers
• The Huntington National Bank served as Mandated Lead Arranger
• ING and Natixis CIB also served as Joint Green Loan Coordinators
• A&O Shearman served as counsel to DC BLOX
• Milbank served as counsel to the lenders

For more from DC BLOX, click here.

Why data centres should care about atmospheric chemistry
Data centres are multiplying to satisfy the world’s appetite for computational power, driven by AI and other emerging technologies. The outcome has been an unprecedented surge in energy demand and greenhouse gas (GHG) emissions. Here, Alexander Krajete, CEO at emissions treatment specialist Krajete, explains why data centres must look beyond their direct carbon footprint and adopt a holistic approach to multi-emission capture and valorisation:

What's changed?

Data centres once had a modest footprint, accounting for under 1% of global GHG emissions, according to the International Energy Agency. But rising demand from AI, streaming, and blockchain is set to more than double their energy use from 415 TWh in 2024 to 945 TWh by 2030.

Some tech giants share these predictions. Google stated in its 2024 Environmental Report that “in spite of the progress we're making, we face significant challenges that we’re actively working through. In 2023, our total GHG emissions increased 13% year-over-year, primarily driven by increased data centre energy consumption and supply chain emissions.”

A holistic approach to data centre sustainability

Some leading tech companies claim to have purchased or generated enough renewable electricity to match 100% of their operational energy consumption. As the IEA notes, buying renewable energy or certificates doesn’t guarantee a data centre runs on clean power 24/7, due to the intermittency of renewables and potential mismatches in location or grid.

A more accurate, holistic calculation also includes indirect emissions throughout the supply chain - the so-called scope three emissions. These include mining raw materials like copper, silicon, and lithium - used in a data centre’s server racks - or the production of building materials like aluminium, steel, and concrete.

Complying with new sustainability regulations

Although not specifically aimed at data centres, the EU’s Corporate Sustainability Reporting Directive (CSRD) requires organisations, including tech companies, to report on their sustainability performance, including scope one, two, and three emissions. In addition, in 2024, the European Commission adopted legislation specifically aimed at “establishing an EU-wide scheme to rate the sustainability of EU data centres.” To comply with these new legal obligations, data centre operators must examine their environmental footprint holistically.

Why atmospheric chemistry matters to data centres

Although reducing the amount of CO2 in the atmosphere remains vital, we must also address other gases that can harm our ecosystems and climate. These chemicals include nitrogen oxides (NOX), carbon monoxide (CO), hydrogen sulphide (H2S), sulphur oxides (SOX), hydrocarbons, and various metals. Once released, these gases can react with one another, leading to secondary pollutants whose consequences are yet to be fully understood. They originate from combustion-heavy sectors like mining, cement, and energy, all contributors to scope two and three emissions.

Traditionally, there have been two ways of capturing atmospheric pollutants. Take CO2 as an example. The sacrificial method uses limestone to remove CO2 and other gases, creating non-reusable carbonates. The regenerative amine-based method produces reusable amine carbamates but emits harmful, amine-based degradation products.

Advanced adsorption is a low-energy, low-emission regenerative process that captures and valorises emissions at temperatures below 100°C, far lower than the 150–200°C required for amine-based methods. Pollutant gases weakly bind to a complex inorganic filter, allowing for easy separation. It can be applied at the exhaust point of any combustion process, such as cement factory chimneys or stationary diesel engines. By supporting the adoption of advanced adsorption technology throughout their supply chains, data centres can address their scope two and three emissions more effectively and meet their sustainability goals.

Multi-emission capture is the key to sustainable data centres

Thanks to innovative technologies like advanced adsorption, we can go beyond capturing and neutralising pollutants like nitrogen oxides. We can also transform these emissions into valuable by-products like fertilisers, supporting a circular economy.

As the world’s insatiable demand for data grows, data centres must adopt holistic sustainability strategies that withstand the test of time. Multi-emission capture must be part of the solution, enabling data centres to balance the growing need for powerful AI with the needs of our planet.

BSRIA first UKAS-accredited provider for BTS 4/2024
BSRIA, a consultancy and testing organisation, has become the first organisation to receive UKAS accreditation in accordance with BTS 4/2024 for airtightness testing of Raised Access Plenum Floors (RAPFs), following a successful ISO 17025 audit earlier in 2024. The accreditation formally extends BSRIA’s scope of approved activities and introduces an industry-recognised methodology for testing RAPFs, which play a key role in airflow management in data centres.

Chris Knights, BSRIA Building Performance Evaluation Business Manager and lead author of BTS 4/2024, comments, “The UKAS accreditation ensures we continue to provide independent testing to the highest standards of quality, repeatability, and traceability.

"This is a significant advancement, enabling the industry to adopt a dedicated standard that supports higher-performing building services for owners and operators.”

BTS 4/2024 standard

The accreditation follows the introduction of BTS 4/2024 Airtightness Testing of Raised Access Plenum Floors, which sets out a methodology for measuring RAPF air leakage. The standard is designed to support efficient airflow management by ensuring conditioned air in underfloor voids is directed to the intended occupied areas rather than escaping through cavities, risers, stairwells, or other adjacent spaces.

RAPFs are widely used in modern construction, particularly in data centres, where optimised airflow is important for both cooling performance and energy efficiency. BTS 4/2024 supersedes previous guidance, BG 65/2016 Floor Plenum Airtightness – Guidance and Testing Methodology, and incorporates clearer guidance and refined testing processes developed in response to industry feedback.

Chris continues, “An effectively constructed and sealed raised access plenum floor is essential for achieving the air distribution performance intended during the design phase.

"The methodology in BTS 4/2024 provides clear criteria and a step-by-step process for verifying as-built performance.

"With increasing demand for high-performing environments such as data centres, specifying BTS 4/2024 supports effective air distribution and helps ensure RAPFs deliver on design intent.”
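BTS 4/2024 itself defines the test procedure and acceptance criteria, which are not reproduced here. As general background, plenum airtightness tests of this kind typically pressurise the void with a calibrated fan, record airflow at several pressure differentials, fit a power-law relationship between flow and pressure, and report leakage at a reference pressure normalised by plenum area. The Python sketch below illustrates that generic fan-pressurisation calculation under those assumptions; the function names, reference pressure, and sample figures are hypothetical and are not taken from BTS 4/2024.

```python
import numpy as np

# Generic fan-pressurisation calculation for a floor plenum, shown for
# illustration only. BTS 4/2024 defines the actual procedure and criteria;
# the reference pressure and sample data below are hypothetical.

def fit_power_law(dp_pa: np.ndarray, q_m3_per_h: np.ndarray) -> tuple[float, float]:
    """Fit Q = C * dP**n by least squares on the log-log data.

    Returns (C, n): the flow coefficient and flow exponent.
    """
    n, log_c = np.polyfit(np.log(dp_pa), np.log(q_m3_per_h), 1)
    return float(np.exp(log_c)), float(n)

def leakage_at_reference(c: float, n: float, ref_pa: float, plenum_area_m2: float) -> float:
    """Air leakage at the reference pressure, normalised by plenum area (m3/h per m2)."""
    return c * ref_pa ** n / plenum_area_m2

if __name__ == "__main__":
    # Hypothetical test readings: pressure differentials (Pa) and measured flows (m3/h)
    dp = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
    q = np.array([310.0, 480.0, 620.0, 740.0, 850.0])

    c, n = fit_power_law(dp, q)
    rate = leakage_at_reference(c, n, ref_pa=50.0, plenum_area_m2=400.0)
    print(f"C = {c:.1f}, n = {n:.2f}, leakage at 50 Pa = {rate:.2f} m3/h per m2")
```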

Macquarie, Dell bring AI factories to Australia
Australian data centre operator Macquarie Data Centres, part of Macquarie Technology Group, is collaborating with US multinational technology company Dell Technologies with the aim of providing a secure, sovereign home for AI workloads in Australia.

Macquarie Data Centres will host the Dell AI Factory with NVIDIA within its AI and cloud data centres. This approach seeks to power enterprise AI, private AI, and neo cloud projects while achieving high standards of data security within sovereign data centres.

This development will be particularly relevant for critical infrastructure providers and highly regulated sectors such as healthcare, finance, education, and research, which have strict regulatory compliance conditions relating to data storage and processing. This collaboration hopes to give them the secure, compliant foundation needed to build, train, and deploy advanced AI applications in Australia, such as AI digital twins, agentic AI, and private LLMs.

Answering the Government’s call for sovereign AI

The Australian Government has linked the data centre sector to its 'Future Made in Australia' policy agenda. Data centres and AI also play an important role in the Australian Federal Government’s new push to improve Australia’s productivity.

“For Australia's AI-driven future to be secure, we must ensure that Australian data centres play a core role in AI, data, infrastructure, and operations,” says David Hirst, CEO, Macquarie Data Centres. “Our collaboration with Dell Technologies delivers just that, the perfect marriage of global tech and sovereign infrastructure.”

Sovereignty meets scalability

Dell AI Factory with NVIDIA infrastructure and software will be supported by Macquarie Data Centres’ newest purpose-built AI and cloud data centre, IC3 Super West. The 47MW facility is, according to the company, "purpose-built for the scale, power, and cooling demands of AI infrastructure." It is due to be ready in mid-2026, with its entire end-state power already secured.

“Our work with Macquarie Data Centres helps bring the Dell AI Factory with NVIDIA vision to life in Australia,” comments Jamie Humphrey, General Manager, Australia & New Zealand Specialty Platforms Sales, Dell Technologies ANZ. “Together, we are enabling organisations to develop and deploy AI as a transformative and competitive advantage in Australia in a way that is secure, sovereign, and scalable.”

Macquarie Technology Group and Dell Technologies have been collaborating for more than 15 years.

For more from Macquarie Data Centres, click here.

Sabey's Ashburn campus opening for tours
Sabey Data Centers, a data centre developer, owner, and operator, has announced that its Ashburn campus in Virginia, USA, will be featured as an exclusive tour stop during the 2025 Data Center Frontier Trends Summit. The off-site tour will take place on Thursday, 28 August 2025, offering attendees an up-close look at the infrastructure and sustainable design powering mission-critical IT environments.

Located in the centre of Loudoun County’s Data Center Alley, Sabey’s 38-acre campus includes two completed buildings providing more than 36 MW of power. The site features flexible colocation and powered shell space, along with access to multiple Tier 1 connectivity providers. The campus is Energy Star Certified and equipped with low PUE design and advanced cooling technologies. Attendees will tour Sabey’s secure facility and view key IT and critical infrastructure equipment.

Tour details

When: Thursday, 28 August 2025 | 1:30pm (transportation departs from Hyatt Regency Reston at 12:30pm)
Duration: Approximately 1.5 hours
Where: Sabey Data Centers - Ashburn, 21741 Red Rum Drive, Ashburn, Virginia 20147

The tour has limited space and pre-registration is required via the Data Center Frontier Trends Summit website.

For more from Sabey, click here.

Microchip launches Adaptec SmartRAID 4300 accelerators
Semiconductor manufacturer Microchip Technology has introduced the Adaptec SmartRAID 4300 series, a new family of NVMe RAID storage accelerators designed for use in server OEM platforms, storage systems, data centres, and enterprise environments. The series aims to support scalable, software-defined storage (SDS) solutions, particularly for high-performance workloads in AI-focused data centres.

The SmartRAID 4300 series uses a disaggregated architecture, separating software and hardware elements to improve efficiency. The accelerators integrate with Microchip’s PCIe-based storage controllers to offload key RAID processes from the host CPU, while the main storage software stack runs directly on the host system. This approach allows data to flow at native PCIe speeds, while offloading parity-based functions such as XOR to dedicated accelerator hardware.

According to internal testing by Microchip, the new architecture has delivered input/output (I/O) performance gains of up to seven times compared with the company’s previous generation products.

Architecture and capabilities

The SmartRAID 4300 accelerators are designed to work with Gen 4 and Gen 5 PCIe host CPUs and can support up to 32 CPU-attached x4 NVMe devices and 64 logical drives or RAID arrays. This is intended to help address data bottlenecks common in conventional in-line storage solutions by taking advantage of expanded host PCIe infrastructure.

By removing the reliance on a single PCIe slot for all data traffic, Microchip aims to deliver greater performance and system scalability. Storage operations such as writes now occur directly between the host CPU and the NVMe endpoints, while the accelerator handles redundancy tasks.

Brian McCarson, Corporate Vice President of Microchip’s Data Centre Solutions Business Unit, says, “Our innovative solution with separate software and hardware addresses the limitations of traditional architectures that rely on a PCIe host interface slot for all data flows.

"The SmartRAID 4300 series allows us to enhance performance, efficiency, and adaptability to better support modern enterprise infrastructure systems.”

Power efficiency and security

Power optimisation features include automatic idling of processor cores and autonomous power reduction mechanisms. To help maintain data integrity and system security, the SmartRAID 4300 series incorporates features such as secure boot and update, hardware root of trust, attestation, and Self-Encrypting Drive (SED) support.

Management tools and compatibility

The series is supported by Microchip’s Adaptec maxView management software, which includes an HTML5-based web interface, the ARCCONF command line tool, and plug-ins for both local and remote management. The tools are accessible through standard desktop and mobile browsers and are designed to remain compatible with existing Adaptec SmartRAID utilities.

For out-of-band management via Baseboard Management Controllers (BMCs), the series supports Distributed Management Task Force (DMTF) standards, including Platform-Level Data Model (PLDM) and Redfish Device Enablement (RDE), using the MCTP protocol.

For more from Microchip, click here.
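The parity offload mentioned above refers to the XOR arithmetic used by parity RAID levels: the parity block is the bitwise XOR of the data blocks in a stripe, so any single missing block can be rebuilt by XOR-ing the surviving blocks with the parity. The short Python sketch below illustrates that calculation in the abstract; it is purely a conceptual example of the kind of parity function being offloaded, not code for, or an interface to, the SmartRAID 4300 hardware.

```python
# Conceptual illustration of RAID-style XOR parity, the kind of calculation
# offloaded to accelerator hardware. Not SmartRAID 4300 code or an API.

def xor_blocks(blocks: list[bytes]) -> bytes:
    """Bitwise XOR of equal-length blocks, e.g. computing a stripe's parity block."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

if __name__ == "__main__":
    data = [b"block-A1", b"block-B2", b"block-C3"]   # data blocks in one stripe
    parity = xor_blocks(data)                        # parity written alongside the data

    # Recover a lost block: XOR the surviving blocks with the parity block.
    recovered = xor_blocks([data[0], data[2], parity])
    assert recovered == data[1]
    print("Recovered block:", recovered)
```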

Digital Connexion announces first DGX-ready Chennai data centre
Data centre operator Digital Connexion today announced that its MAA10 facility in Ambattur, Chennai, has been certified as part of the NVIDIA DGX-Ready Data Center program. This certification reflects the facility’s capabilities to support accelerated computing workloads required for AI training and GPU-intensive computing.

The company says the MAA10 data centre is purpose-built to offer a resilient, GPU-optimised environment capable of supporting compute-intensive AI training and inference workloads. In line with global operational standards, MAA10 is compliant with ASHRAE W2 thermal guidelines, which ensures stable and efficient cooling in environments with elevated heat loads. The facility supports both air and liquid cooling configurations, enabling flexible deployment of diverse infrastructure from conventional GPU servers to high-density systems requiring advanced thermal management. It also features a 'unique' N+2C power architecture, offering an added layer of redundancy that aims to enhance uptime and operational reliability.

“The ability to process and manage data at scale is foundational to successful AI deployments," says CR Srinivasan, Chief Executive Officer, Digital Connexion.

"As AI adoption accelerates across India’s key industries, so does the need for infrastructure that can overcome data gravity barriers and support increasingly intensive AI workloads.

"Our certification as part of the NVIDIA DGX-Ready Data Center program strengthens MAA10’s position as a purpose-built, high-performance environment engineered to aggregate, process, and manage large volumes of AI data, empowering enterprises to innovate at scale.”

As Indian enterprises embed AI more deeply into their operations, the amount of data to be managed - and thus the need for reliable data centres - continues to grow. As indicated by the Data Gravity Index Report 2.0, by the end of 2025, Delhi will have generated 12.3k exabytes of data, boosting the need for optimised data management.

MAA10 is TIA-942 Rated 3, which highlights the facility’s capability to maintain critical operations even during maintenance activities. The data centre also holds an IGBC Platinum rating, reflecting its alignment with high benchmarks in sustainability, energy efficiency, and responsible resource management.

Digital Connexion asserts that with "dedicated infrastructure engineered to handle dynamic GPU load patterns, MAA10 is positioned to support enterprises developing and deploying data-intensive AI applications in India."

Scolmore introduces IEC Lock C21 Locking Connector
Scolmore, a UK-based manufacturer of electrical wiring accessories, circuit protection products, and lighting equipment, has expanded its IEC Lock range with the addition of a new C21 locking connector, compatible with both C20 and C22 inlets.

Featuring a side button release, the IEC Lock C21 is designed to offer extra protection against accidental disconnection, making it an appropriate choice for applications where reliability is essential.

Built to handle heat, the company says the C21 is a durable, lockable connector that protects vibration-sensitive appliances against power loss. The product is particularly suited to data centres, servers, and other industrial equipment where maintaining the proper device temperature is critical to operational success.

BSDI announces 5,000-acre campus in Montana
Big Sky Digital Infrastructure (BSDI), a Quantica Infrastructure (Quantica) company, has announced a major project: a 5,000-acre energy and digital infrastructure campus outside Billings, Montana, USA. The initial projected capacity is 500 MW of renewable power and battery energy storage, expandable to 1 GW. The company plans to begin construction of the Big Sky Campus in 2026.

“Montana has always been a state that builds its future on the strength of its people and natural resources,” says Damon Obie, a Montana native and co-founder of Big Sky Digital Infrastructure. “The Big Sky Campus represents a unique opportunity to build on the industries that powered our history with the digital economy that will define our future.

"This project is about creating opportunities for Montanans, so our communities can thrive in the digital age while staying true to our values and heritage.”

John Chesser, co-founder of Big Sky Digital Infrastructure, adds, “A well-planned digital economy can support communities through employment opportunities and infrastructure investments.

“This project uses the rising demand for hyperscale, AI, and cloud computing to deliver land, renewable energy, and high-speed fibre in one integrated solution.”

“Having worked in the Montana power industry for over twenty years,” comments Charlie Baker, BSDI’s Chief Financial Officer, “I look forward to bringing BSDI’s approach of combining traditional grid power with planned renewable and battery energy storage to help customers meet sustainability and reliability goals.

"Improvements to in-state telecommunications that come with this will benefit the whole community, including schools, healthcare, and community services.”

The site is expected to be connected to hundreds of miles of new fibre-ready underground conduit, enabling diverse routes to major metropolitan areas and aiming to ensure fast, resilient connectivity. The site will also include large-scale renewable energy and battery energy storage to support the campus.

Through this project, the BSDI team expects to create construction jobs and permanent positions, boosting local economic development and workforce training.


