Sunday, April 27, 2025

Cooling


Schneider experts explore liquid cooling for AI data centres
Schneider Electric has released its latest white paper, Navigating Liquid Cooling Architectures for Data Centres with AI Workloads. The paper provides a thorough examination of liquid cooling technologies and their applications in modern data centres, particularly those handling high-density AI workloads.

The demand for AI is growing at an exponential rate. As a result, the data centres required to enable AI technology are generating substantial heat, particularly those containing AI servers with accelerators used for training large language models and for inference workloads. This heat output is increasing the need for liquid cooling to maintain optimal performance, sustainability, and reliability. Schneider Electric’s white paper guides data centre operators and IT managers through the complexities of liquid cooling, offering clear answers to critical questions about system design, implementation, and operation.

Over its 12 pages, authors Paul Lin, Robert Bunger, and Victor Avelar identify two main categories of liquid cooling for AI servers: direct-to-chip and immersion cooling. They also describe the components and functions of a coolant distribution unit (CDU), which are essential for managing temperature, flow, pressure, and heat exchange within the cooling system.

“AI workloads present unique cooling challenges that air cooling alone cannot address,” says Robert Bunger, Innovation Product Owner, CTO Office, Data Centre Segment, Schneider Electric. “Our white paper aims to demystify liquid cooling architectures, providing data centre operators with the knowledge to make informed decisions when planning liquid cooling deployments. Our goal is to equip data centre professionals with practical insights to optimise their cooling systems. By understanding the trade-offs and benefits of each architecture, operators can enhance their data centres’ performance and efficiency.”

The white paper outlines three key elements of liquid cooling architectures:

Heat capture within the server: utilising a liquid medium (e.g. dielectric oil or water) to absorb heat from IT components.

CDU type: selecting the appropriate CDU based on heat exchange method (liquid-to-air or liquid-to-liquid) and form factor (rack-mounted or floor-mounted).

Heat rejection method: determining how to transfer heat effectively to the outdoors, either through existing facility systems or dedicated setups.

The paper details six common liquid cooling architectures, combining different CDU types and heat rejection methods, and provides guidance on selecting the best option based on factors such as existing infrastructure, deployment size, speed, and energy efficiency. With the increasing demand for AI processing power and the corresponding rise in thermal loads, liquid cooling is becoming a critical component of data centre design. The white paper also addresses industry trends such as the need for greater energy efficiency, compliance with environmental regulations, and the shift towards sustainable operations.

“As AI continues to drive the need for advanced cooling solutions, our white paper provides a valuable resource for navigating these changes,” Robert adds. “We are committed to helping our customers achieve their high-performance goals while improving sustainability and reliability.”

The white paper is particularly timely in light of Schneider Electric's recent collaboration with NVIDIA to optimise data centre infrastructure for AI applications. The partnership introduced the first publicly available AI data centre reference designs, leveraging NVIDIA's advanced AI technologies and Schneider Electric's expertise in data centre infrastructure. Schneider claims that the reference designs set new standards for AI deployment and operation, providing data centre operators with innovative solutions to manage high-density AI workloads efficiently.

For more information and to download the white paper, click here. For more from Schneider Electric, click here.
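
As a rough illustration of the "heat capture within the server" element, the sketch below estimates the coolant flow needed to carry a given rack heat load at a chosen temperature rise, and contrasts it with the airflow an air-cooled design would need for the same load. The 80 kW load and 10 K rise are assumptions for demonstration, not figures from the white paper.

```python
# Minimal sketch: coolant flow needed to capture a rack's heat load.
# Q = m_dot * cp * dT  ->  m_dot = Q / (cp * dT)
# The 80 kW load and 10 K temperature rise are illustrative assumptions.

Q_watts = 80_000          # rack heat load to capture (assumed)
delta_t = 10.0            # coolant temperature rise across the servers, K (assumed)

cp_water = 4186.0         # specific heat of water, J/(kg*K)
cp_air = 1005.0           # specific heat of air, J/(kg*K)
rho_air = 1.2             # approximate air density, kg/m^3

water_kg_s = Q_watts / (cp_water * delta_t)
air_kg_s = Q_watts / (cp_air * delta_t)

print(f"Water flow: {water_kg_s:.2f} kg/s (~{water_kg_s * 60:.0f} L/min)")
print(f"Equivalent air flow: {air_kg_s / rho_air:.1f} m^3/s for the same load and dT")
```

The large gap between the two flow figures is one way to see why high-density AI racks push operators from air towards liquid cooling.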

Vertiv cooling unit seeks to lower carbon footprint
Vertiv, a global provider of critical digital infrastructure and continuity solutions, has introduced new, highly efficient Vertiv Liebert PDX-PAM direct expansion perimeter units using low global warming potential (GWP), non-flammable R513A refrigerant. Available now in the EMEA region, the system is designed to operate with an eco-friendly refrigerant (compared with legacy refrigerants) to enable increased efficiency, reliability, and maximum flexibility of installation.

Liebert PDX-PAM allows data centre owners to comply with the EU F-Gas Regulation 2024/573 and supports their pressing sustainability goals. The non-flammable R513A refrigerant provides up to a 70% GWP reduction compared with the traditional R410A, without compromising safety or reliability. No additional safety devices are required, as is the case for units using flammable refrigerants, reducing installation costs and CAPEX.

"In an era where efficiency and reliability are paramount, we recognise the urgent need for eco-friendly alternatives to stay ahead of regulatory requirements and provide our customers with state-of-the-art innovations,” states Karsten Winther, President for Vertiv in Europe, Middle East and Africa. “With this new solution, we're not just addressing our customers' current sustainability objectives; we're actively innovating and advancing the future of cooling technology and setting new heights for efficiency and reliability."

Liebert PDX-PAM is available from 10 kW to 80 kW with a wide range of airflow configurations, options, and accessories, making the unit easily adaptable to various installation needs, from small to medium data centres, including edge computing applications, UPS and battery rooms. In conjunction with the Liebert PDX-PAM units, a wide choice of cooling solutions is available for managing heat rejection externally, depending on the specific system configuration.

Vertiv is seeking to raise the technology threshold with Liebert PDX-PAM, a low-GWP, non-flammable R513A refrigerant solution with inverter-driven brushless motor compressors, a staged coil design with an innovative patent-pending filter, electronic expansion valves, and state-of-the-art electronically commutated (EC) fans, all included as standard features. The integrated Vertiv Liebert iCOM controller enables seamless synchronisation of these components, allowing complete modulation of performance. In this way, the Liebert PDX-PAM unit can adapt to changing operating conditions and heat load efficiently and reliably. Full continuous modulation significantly reduces annual power consumption, resulting in a more cost-effective solution thanks to enhanced part-load efficiency. Precise monitoring of the machine's operation also facilitates performance tracking and more timely, effective maintenance, creating opportunities for predictive maintenance.

“The introduction of low GWP refrigerants for direct expansion systems marks a significant advancement in sustainable air-cooling technology,” says Lucas Beran, Research Director at Dell’Oro Group. “By utilising low-GWP and non-flammable refrigerants, Vertiv complies with EU F-Gas Regulation requirements and aims to reduce carbon footprints without compromising on safety or efficiency. This innovation is significant for data centre operators aiming to achieve their sustainability goals while maintaining high operational standards."

For more from Vertiv, click here.
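
The quoted "up to 70% GWP reduction" can be sanity-checked with commonly cited 100-year GWP values (roughly 2,088 for R410A and 631 for R513A under IPCC AR4); these figures come from public refrigerant tables rather than from Vertiv's documentation, so treat them as an assumption in the sketch below.

```python
# Rough check of the claimed GWP reduction for R513A versus R410A.
# GWP values are commonly cited IPCC AR4 100-year figures (assumed here).
gwp_r410a = 2088
gwp_r513a = 631

reduction = 1 - gwp_r513a / gwp_r410a
print(f"GWP reduction: {reduction:.0%}")   # ~70%, in line with the claim
```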

Data centre cooling market to reach £13.2bn in 2028
According to new research from global analyst Omdia, the data centre thermal management market has surged to a staggering $7.67bn (£6bn), outpacing previous forecasts. This unprecedented growth is poised to continue at a robust CAGR of 18.4% until 2028, fuelled largely by AI-driven demand and innovations in high-density infrastructure, marking a pivotal moment for the industry.

As AI computing becomes ubiquitous, demand for liquid cooling has surged dramatically. Key trends include the rapid adoption of rear door heat exchangers (RDHx) combined with single-phase (1-P) direct-to-chip cooling, achieving an impressive 65% year-over-year growth and frequently integrating heat reuse applications. This period also sees a strategic blend of air and liquid cooling technologies, creating a balanced and efficient approach to thermal management.

Omdia’s Principal Analyst, Shen Wang, explains, “In 2023, the global data centre cooling market experienced increased consolidation, with Top 5 and Top 10 concentration ratios rising by 5% from the previous year. Omdia expanded vendor coverage in its report to include 49 companies, up from 40, adding Chinese OEMs and direct liquid cooling component suppliers. Vertiv, Johnson Controls, and Stulz retained their top three positions, with Vertiv notably gaining 6% market share due to strong North American demand and cloud partnerships.”

Market growth for data centre cooling was primarily constrained by production capacity, particularly for components like cooling distribution units (CDUs), rather than by a lack of demand. Numerous supply chain players struggled to satisfy soaring market needs, causing component shortages. However, improvements forecast for 2024 are expected to alleviate this issue, unlocking orders delayed from the previous year by supply chain bottlenecks. During this time, liquid cooling adoption witnessed robust growth, particularly in North America and China, with new vendors entering the scene and tracked companies exhibiting significant expansion. In this near-$1bn (£785m) liquid cooling market, direct-to-chip vendor CoolIT remains the leader, followed by immersion cooling leader Sugon and server vendor Lenovo.

The data centre thermal management market is advancing due to AI's growing influence and sustainability requirements. Despite strong growth prospects, the industry faces challenges with supply chain constraints in liquid cooling and with embracing sustainable practices. Moving forward, the integration of AI-optimised cooling systems, strategic vendor partnerships, and a continued push for energy-efficient and environmentally friendly solutions will shape the industry's evolution. Successfully addressing these challenges will ensure growth and establish thermal management as a cornerstone of sustainable and efficient data centre operations, aligning technology with environmental stewardship.

Shen adds, “Data centre cooling is projected to be a $16.8bn (£13.2bn) market by 2028, fuelled by digitalisation, high power capacity demand, and a shift towards eco-friendly infrastructure, with liquid cooling emerging as the biggest technology in the sector.”
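
The relationship between the current market size, the quoted 18.4% CAGR, and the 2028 forecast follows the standard compound-growth formula. The sketch below applies it, treating the $7.67bn figure as a 2023 base year (an assumption), so small differences from Omdia's published $16.8bn figure reflect base-year and rounding choices in their model rather than the formula itself.

```python
# Compound annual growth: future = present * (1 + cagr) ** years
present_bn = 7.67      # reported market size, $bn (assumed to be the 2023 base)
cagr = 0.184           # reported CAGR to 2028
years = 5              # 2023 -> 2028 (assumption)

projected = present_bn * (1 + cagr) ** years
implied_cagr = (16.8 / present_bn) ** (1 / years) - 1   # CAGR implied by the $16.8bn forecast

print(f"Projection at 18.4% CAGR: ${projected:.1f}bn")
print(f"CAGR implied by the $16.8bn forecast: {implied_cagr:.1%}")
```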

Schneider reveals data centre White Space portfolio
Schneider Electric, the leader in digital transformation of energy management and automation, has unveiled its revamped data centre White Space portfolio, covering the area of the facility where racks and IT equipment sit. The new portfolio includes the second generation of NetShelter SX Enclosures (NetShelter SX Gen2), new NetShelter Aisle Containment, and a future update to the NetShelter Rack PDU Advanced, designed to meet the evolving needs of modern data centres - particularly those handling high-density applications and AI workloads - as well as regulatory requirements such as the European Energy Efficiency Directive (EED).

The NetShelter SX Gen2 enclosures are specifically engineered to support the demands of contemporary data centres. The new racks can support up to 25% more weight than previous models - approximately 4,000 pounds (1,814 kilograms) - which is essential for accommodating the heavier, denser equipment associated with AI and high-performance computing. Enhanced perforation in the doors increases airflow, vital for cooling high-density server configurations, and the racks offer more space and better cable management options for larger, more complex server setups. With physical security remaining an important requirement, the enclosures feature all-steel construction and three-point locking systems to improve data centre protection. The NetShelter SX Gen2 racks reduce their overall climate change impact by around 3.3% per rack and are designed to be highly recyclable, with approximately 97% of the rack recyclable. The racks are available in standard sizes of 42U, 45U, and 48U, along with wide, extra-wide, and deep models.

“Our NetShelter SX Gen2 enclosures are a leap forward in addressing the critical requirements of high-density applications,” says Elliott Turek, Director of Category Management, Secure Power Division, Schneider Electric. “With enhanced weight support, airflow management, and physical security, we are enabling our customers to optimise their data centre operations while also advancing sustainability.”

Advanced cooling and flexibility with NetShelter Aisle Containment

The latest NetShelter Aisle Containment can achieve up to 20% more cooling capacity, which is crucial for managing the heat generated by AI servers and other high-density applications. The system incorporates an airflow controller that automates fan speed, reducing fan energy consumption by up to 40% compared to traditional passive cooling systems. The vendor-neutral containment systems provide greater flexibility and speed of setup for data centre operators, allowing easier integration and adaptation to existing builds. The new design also simplifies installation and field modifications, while reducing energy expenses by between 5 and 10%.

“Containment remains paramount in today's high-density data centres," Elliott notes. "Even in liquid-cooled applications, air heat rejection plays a critical role. Our NetShelter Aisle Containment solutions not only enhance cooling capacity but also offer significant energy savings, aligning with our commitment to sustainability.”

Security and management with NetShelter Rack PDU Advanced and Secure NMC3

The NetShelter Rack PDU Advanced with Secure NMC3 is an updated power distribution unit equipped with advanced security features and enhanced management capabilities. The Secure NMC3 network management card provides robust cybersecurity measures and enables third-party validation of firmware updates for consistent compliance. Support for mass firmware updates significantly reduces the manual effort required to keep the PDUs secure and up to date, which is crucial for maintaining security across large deployments. The PDU is suitable for a range of applications, including those with power requirements up to and including 70 kW per rack, making it a versatile solution for various data centre configurations. It includes features that enhance energy efficiency and operational reliability, contributing to the overall sustainability of the data centre.

“Security and efficiency are at the forefront of our advanced PDUs,” Elliott explains. “By integrating expanded security and management features, we are ensuring that our customers can maintain secure and efficient operations with ease.”

All products in Schneider Electric’s revamped White Space portfolio are available for quotation and order (Secure NMC3 coming in Q4). For more from Schneider Electric, click here.
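
The claimed fan energy saving of up to 40% from automated fan-speed control is consistent with the fan affinity laws, under which fan power scales roughly with the cube of speed. The sketch below is a generic illustration of that relationship; the nominal power and the speed reductions shown are assumptions chosen for demonstration, not Schneider Electric data.

```python
# Fan affinity laws (approximation): power scales with the cube of fan speed.
# A controller that trims fan speed to match actual airflow demand therefore
# saves energy rapidly at part load. Values below are illustrative assumptions.
full_speed_power_kw = 10.0   # assumed fan power at 100% speed

for speed_fraction in (1.0, 0.9, 0.84, 0.7):
    power = full_speed_power_kw * speed_fraction ** 3
    saving = 1 - power / full_speed_power_kw
    print(f"{speed_fraction:>4.0%} speed -> {power:4.1f} kW ({saving:.0%} saving)")
```

Running at roughly 84% of full speed already yields about a 40% reduction in fan power, which is the order of saving the announcement describes.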

Vertiv to supply cooling solutions for EcoDataCenter plant
Vertiv, a global provider of critical digital infrastructure and continuity solutions, has been awarded a contract by Swedish data centre company EcoDataCenter to supply high-efficiency chilled water cooling solutions for EcoDataCenter’s state-of-the-art plants being built in Falun, Sweden.

EcoDataCenter, founded in 2014, has been very successful with its state-of-the-art data centres and continues to grow and expand its operations to support rising demand for AI and high-performance computing (HPC). EcoDataCenter’s commitment to sustainability is closely aligned with Vertiv's focus on efficient infrastructure and principles of environmental stewardship, so it was a natural choice for the company to extend its relationship with Vertiv by naming it as a solutions provider on this new project.

EcoDataCenter operates multiple data centre facilities across four Swedish locations. The two new data centres in Falun are planned to be commissioned at the beginning of 2025. The project includes the expected installation of 96 Vertiv Liebert PCW chilled water cooling units, for a total capacity of around 12 MW. These floor-mounted systems feature optimised coils and an aerodynamic design of the internal components, including patented elements, allowing a reduction in energy consumption. Moreover, the units are customised to customer specifications, further enabling enhanced cooling efficiency and effective waste heat reuse.

"We selected Vertiv’s cooling systems due to their energy-efficient, reliable solutions, exceptional expertise and service. Vertiv is quick to translate technological advances into products, and its innovations integrate seamlessly with our deployments," says Mikael Svanfeldt, CTO at EcoDataCenter.

"This framework agreement with EcoDataCenter is a feather in the cap for Vertiv in the Swedish market. EcoDataCenter and Vertiv have a history of working together to apply innovative, efficient, and reliable solutions to support EcoDataCenter's sustainability goals. This knowledge sharing helps both companies to anticipate future needs," adds Victor Elm, Strategic Segment and Partners Director, Colocation and Hyperscale for Northern Europe at Vertiv.

The companies plan to continue their technology partnership to support AI and HPC applications. For more from Vertiv, click here.
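
A quick back-of-envelope check on the Falun figures: 96 Liebert PCW units for roughly 12 MW implies an average duty of about 125 kW per unit. The sketch below shows the arithmetic; the even split across units is an assumption for illustration, since the actual configuration is customised per data hall.

```python
# Back-of-envelope: average cooling duty per chilled water unit (assumes an even split).
total_capacity_kw = 12_000   # ~12 MW total, as reported
units = 96                   # Vertiv Liebert PCW units expected to be installed

print(f"Average duty per unit: {total_capacity_kw / units:.0f} kW")   # ~125 kW
```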

STULZ launches new coolant management and distribution unit
STULZ, a global mission-critical air conditioning specialist, has announced the launch of CyberCool CMU - an innovative new coolant management and distribution unit (CDU) designed to maximise heat exchange efficiency in liquid cooling solutions. Launched at Data Centre World Frankfurt 2024 earlier this week, CyberCool CMU seeks to offer industry-leading levels of energy efficiency, flexibility, and reliability within a small footprint, while providing precise control over an entire liquid cooling system.

"The rapid advancement of high-performance computing, artificial intelligence (AI) and machine learning (ML) has led to a massive increase in data centre rack and server power density," explains Joerg Desler, Global Director Technology at STULZ. "Central processing units (CPUs) and graphics processing units (GPUs) are expected to exceed 1,000 W per processor in the next few years. These processing requirements are placing tremendous demands on data centre cooling systems, and where liquid cooling was once an option, it is rapidly becoming essential."

CyberCool CMU has been developed to maximise heat exchange by isolating the facility water system (FWS) and technology cooling system (TCS) elements of a liquid cooling system. This significantly reduces the risk of cross-contamination and leaks, thereby enhancing overall reliability. It also provides precise control over each side of the cooling system, enabling better management of coolant flow rates, temperatures, and pressure, which improves overall system efficiency.

As it is precision engineered, CyberCool CMU accurately controls the supply temperature and flow rate of the coolant with minimal power consumption. Comprising premium-grade water pumps, plate heat exchangers, water valves, and controllers, CyberCool CMU provides a reliable and efficient liquid coolant supply. High coolant quality is ensured through sanitary-grade stainless-steel pipelines, and to enhance system compatibility the unit offers a range of structural, electrical, and control options, including the flexibility to accommodate customer-specific configurations and power loads. Alongside a series of standard unit configurations and capacities, the new product line can offer a high level of customisation, adapting to specific needs in the DLC market.

Data centres are under increasing pressure to become more sustainable, so CyberCool CMU is designed to integrate seamlessly with ancillary STULZ air conditioning products, providing an efficient end-to-end system, and it supports ASHRAE’s guidelines for water cooling specifications. To achieve the highest standards of reliability and usability, CyberCool CMU’s software and hardware are harmonised with any liquid cooling solution, while its intuitive touchscreen display provides clear menu navigation. Multiple variable-speed pumps adapt to the required liquid flow rates, delivering energy efficiency gains as well as built-in redundancy.

Joerg concludes, "The transition to liquid cooling in data centres is well underway and we are confident that CyberCool CMU can meet the heat transfer demands of these systems sustainably, efficiently, reliably and flexibly."

For more from STULZ, click here.
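
To illustrate the kind of control a CDU performs, the snippet below runs a simple proportional loop that adjusts the facility-water valve to hold the technology cooling system (TCS) supply temperature at its setpoint. This is a generic sketch, not STULZ's actual control logic; the setpoint, gain, and crude plant response are all illustrative assumptions.

```python
# Generic sketch of a CDU-style proportional control loop (not STULZ's actual logic).
# The facility-water (FWS) valve opening is adjusted to hold the TCS supply
# temperature at setpoint; the "plant response" below is a crude illustration.

setpoint_c = 32.0        # desired TCS supply temperature, deg C (assumed)
valve = 0.5              # FWS valve opening, 0..1
kp = 0.05                # proportional gain (assumed)
tcs_supply_c = 36.0      # starting TCS supply temperature (assumed)

for step in range(10):
    error = tcs_supply_c - setpoint_c            # positive -> too warm, open valve
    valve = min(1.0, max(0.0, valve + kp * error))
    # Crude plant response: more facility water pulls the TCS supply temperature
    # down towards 28 C; less lets it drift up towards 40 C.
    tcs_supply_c += 0.3 * ((40.0 - 12.0 * valve) - tcs_supply_c)
    print(f"step {step}: valve {valve:.2f}, TCS supply {tcs_supply_c:.2f} C")
```

A real CDU controller would also manage pump speed, differential pressure, and redundancy, but the loop above shows the basic idea of regulating the isolated TCS loop against the FWS side.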

Sonic Edge partners with Iceotope to launch dedicated AI Pods
Sonic Edge, a provider of modular data centres (MDCs), is partnering with Iceotope, a global precision liquid cooling expert, to launch new Iceotope AI Pods.

Sonic Edge provides a range of edge and high-performance computing (HPC)-ready MDCs that enable organisations to run their operations anywhere in the world. With the significant increase in the compute densities required for AI applications, Sonic Edge recognised the opportunity to design and build containerised MDCs, or Pods, that are AI application-ready, incorporating advanced precision liquid cooling technology from Iceotope. The resulting Iceotope AI Pods are multi-tenant MDCs with a capacity of up to 450 kW and a 12 x 4 m footprint, and can be deployed either on-premises or in remote locations. They are designed to include everything found in a standard data centre facility, such as UPS backup, fire suppression, and monitoring and evaluation.

Stuart Priest, Founder and CEO, Sonic Edge, explains, “There are many organisations, particularly start-ups, that can’t afford to wait for colocation space to become available for their operations. They are looking to get their own high-performance, AI-ready MDCs up and running fast. We’re excited about our collaboration with Iceotope because we can now provide cloud or edge providers with multi-tenant Pods that have Iceotope’s advanced precision liquid cooling built in.”

David Craig, CEO, Iceotope, adds, “We’re seeing an unprecedented surge in data generation and the evolving role of data centres as interactive AI powerhouses. To meet this demand – and with scalability, serviceability, and sustainability at the forefront of industry demands – our precision liquid cooling is pivotal to providers such as Sonic Edge. We are delighted to be partnering with them to have our technology incorporated into fast and easy-to-deploy Pods to facilitate high-performance AI.”

Rapid implementation and cost-effectiveness are major benefits of the AI Pods, according to Stuart Priest. He notes, “To build and get a data centre up and running can take five or six years, whereas with an AI Pod it takes just 16 weeks from order to delivery. Everything needed to make it operational is there from day one, and we offer ‘tier three ready’ as standard. We also ensure that the Iceotope AI Pods adhere to all relevant industry compliance standards. The highest levels of security can also be incorporated, ranging from SR1 to SR8.”

Flexibility is at the heart of the Iceotope AI Pods. Stuart continues, “Our Pods are meticulously designed to adapt seamlessly to customers’ growing requirements. We believe in building a solution to fit the project, rather than trying to fit the project into the solution. With Iceotope AI Pods, you can literally ‘pay as you grow’.”

For more from Iceotope, click here.
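
For context on what a 450 kW capacity in a 12 x 4 m Pod implies, the short sketch below works out the average power density over the footprint; the even distribution across the floor area is an assumption for illustration only.

```python
# Back-of-envelope power density for the AI Pod footprint (assumes even distribution).
capacity_kw = 450.0
length_m, width_m = 12.0, 4.0

density = capacity_kw / (length_m * width_m)
print(f"Average power density: {density:.1f} kW per square metre")   # ~9.4 kW/m^2
```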

Carrier launches new chiller range for data centres
Carrier has launched a new range of high-performance chillers for data centres, designed to minimise energy use and carbon emissions while cutting running costs for operators. Available in capacities from 400 kW to 2,100 kW, the Eurovent- and AHRI-certified units are based on proven Carrier screw compressors, ensuring efficient, reliable operation and a long working life. Carrier is part of Carrier Global Corporation, a global leader in intelligent climate and energy solutions.

The new AquaForce 30XF air-cooled screw chillers are equipped with an integrated hydronic free-cooling system and variable-speed inverter drives, which combine to deliver energy savings of up to 50% during total free-cooling operation. The chillers, which use the ultra-low global warming potential refrigerant HFO R-1234ze(E), are claimed to offer excellent resilience with an ultra-fast recovery system that, in the event of a power cut, can resume 100% of cooling output within two minutes of power being restored. This ensures cooling is maintained for critical servers and data is protected. The chiller can operate in a wide range of ambient conditions, from -20 to 55°C, making it suitable for use in cold, temperate, and hot climates, while Carrier's smart monitoring system ensures optimum efficiency and performance. Variable-speed fans further increase energy efficiency and support quiet operation at part load.

To further enhance chiller performance, the units are equipped with a dual power supply (400/230V or 400/400V) with an electronic harmonic filter. The filter automatically monitors and maintains the quality of the power supply, preventing damage to the chiller's electrical components and improving overall system efficiency. The hydronic free-cooling system is available in a glycol-free option for applications where glycol cannot be used. This operates with glycol in the outdoor unit only, and enables the size of the glycol-free indoor units to be reduced by up to 15%.

"The new AquaForce 30XF has been designed specifically to meet the strict environmental, efficiency and reliability requirements of data centre applications, and to ensure servers keep running cool around the clock," says Raffaele D'Alvise, Carrier HVAC Marketing and Communication Director. "The chiller helps data centre operators achieve their budget and sustainability goals by reducing energy consumption and carbon emissions, while providing excellent resilience and an extended working life."

The AquaForce 30XF is part of Carrier's comprehensive range of cooling solutions for data centres, which includes AquaSnap 30RBP air-cooled scroll chillers, the AquaEdge 19DV water-cooled centrifugal chiller, and the AquaForce 61XWH-ZE water-cooled heat pump, plus computer room air conditioners, air handlers and fan walls, all supported by Carrier BluEdge lifecycle services and support to maintain optimum performance.
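
How much a hydronic free-cooling system actually saves depends heavily on how many hours the ambient temperature sits below the water loop temperature. The sketch below shows the counting logic on a synthetic hourly temperature profile; the thresholds and data are illustrative assumptions, not Carrier performance figures.

```python
# Illustrative estimate of free-cooling opportunity from an hourly ambient profile.
# Thresholds and the synthetic temperature data are assumptions, not Carrier figures.
import random

random.seed(0)
hourly_ambient_c = [random.gauss(11, 8) for _ in range(8760)]  # synthetic year of hourly temps

full_free_cooling_below_c = 5.0      # assumed: compressors can stay off below this
partial_free_cooling_below_c = 16.0  # assumed: free cooling assists the compressors

full = sum(t < full_free_cooling_below_c for t in hourly_ambient_c)
partial = sum(full_free_cooling_below_c <= t < partial_free_cooling_below_c for t in hourly_ambient_c)

print(f"Full free cooling:    {full / 8760:.0%} of hours")
print(f"Partial free cooling: {partial / 8760:.0%} of hours")
```

With real site weather data and the chiller's actual changeover temperatures, the same counting approach gives a first estimate of annual compressor run-time avoided.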

LiquidStack opens new facility to scale liquid cooling production
LiquidStack, a provider of liquid cooling solutions for data centres, has announced its new US manufacturing site and headquarters in Carrollton, Texas. The new facility is a major milestone in the company's mission to deliver high-performance, cost-effective, and reliable liquid cooling solutions for high-performance data centre and edge computing applications. With a significant uptick in liquid cooling demand associated with scaling generative AI, the new facility enables LiquidStack to respond to customers' needs in an agile fashion, while maintaining the standards and services the company is known for.

LiquidStack’s full range of liquid cooling solutions is being manufactured on site, including direct-to-chip coolant distribution units (CDUs), single-phase and two-phase immersion cooling solutions, and the company’s MacroModular and MicroModular prefabricated data centres. The site will also host a service training and demonstration centre for customers and the company's global network of service engineers and partners.

“We are seeing incredibly high demand for liquid cooling globally as a result of the introduction of ultra-high TDP chips that are driving the scale and buildout of generative AI. Our investment in this new facility allows us to serve the rapidly growing market while creating new, high-skilled jobs right here in Carrollton,” says Joe Capes, CEO, LiquidStack.

The new manufacturing facility and headquarters occupies over 20,000 sq ft. It has been in operation since December 2023, and a formal ribbon-cutting ceremony will be held on March 22, 2024. Expected attendees include members of the city council and the Metrocrest Chamber of Commerce, as well as LiquidStack customers and partners.


