Infrastructure Management


How smart innovation is helping data centres hit net zero
By Simon Prichard, Product Strategy Manager, Mitsubishi Electric.

As data centres become an increasingly critical element of the UK’s digital infrastructure, the question continues to circle: how do we make them as energy efficient as possible? It is a vital question. Data centres are a huge source of carbon emissions, collectively pumping out the equivalent CO2 of a mid-sized country every year. To reach the UK’s carbon reduction goals, the emissions of the whole data centre industry must be cut significantly over the next few vital years.

Of course, the environmental impact of data centres is already being closely considered, and green construction is coming to the forefront of design. In fact, many centre operators have pledged to reach zero-carbon emissions by 2030. At an industry level, the Climate Neutral Data Centre Pact has committed to matching data centre electricity demand with 75% renewable or hourly carbon-free energy by 31 December 2025, and 100% by 31 December 2030. As part of this push, three areas demand focus: investing in sustainable and efficient cooling solutions (cooling is a big energy user in data centres), reducing the level of embodied carbon in data centres, and using analytics and smart insights to optimise how power is used.

Making cooling sustainable

Data centres and IT cooling rooms are unique environments, where a constant temperature must be maintained year-round. The spaces are built to be resilient, with back-up generators and cooling equipment designed to work reliably 24 hours a day. This equipment is crucial: it is there to stop computers from overheating and shutting down. When humidity levels vary, for example, reliable close-control cooling must kick in and keep equipment at the right temperature to work effectively.
And it isn’t just the performance of cooling equipment at the start of its lifecycle that needs to be taken into account: a chiller also needs to keep delivering that level of performance every day through to year 10 and beyond. This constant cooling requires a lot of energy, so the challenge is to cool spaces reliably in the most energy efficient, sustainable way possible. Advanced controls can help here, as can planned, preventative maintenance, which keeps products working as efficiently as possible. Chiller diagnostic checks and run-performance evaluations, in combination with inverter technology, all promote optimised performance and reduce wear and tear, which in turn keeps energy use as effective as possible.

Harnessing analytics and smart insights

As technology continues to evolve, it is becoming possible to harness real-time data visualisation and smart insights to optimise the use of cooling equipment. By giving organisations this kind of detailed insight into the operation of their data centres, it is possible to reduce energy use in certain areas and identify systems that could be running more efficiently. This includes mapping all of a data centre’s systems, locating and tagging them, and then converting ‘invisible’ raw data into actionable intelligence to make things work better and more efficiently. This is particularly useful when data centres run multiple systems from different vendors, which often can’t talk to one another or require the use of numerous dashboards. By seeing the data all together, it is possible to better address many challenges - including performance management, billing accuracy, energy costs and system capacity - and, importantly, make the changes needed to keep systems operating in the most energy efficient way possible.
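As a rough illustration of what converting multi-vendor raw data into a single actionable view involves, here is a minimal sketch. The vendor formats, field names and efficiency threshold are invented for illustration; they are not any real vendor’s API.

```python
# Illustrative sketch: normalising cooling telemetry from multiple
# (hypothetical) vendor formats into one tagged, comparable view.
# Field names and thresholds are assumptions, not any vendor's real API.

def normalise(reading: dict) -> dict:
    """Map a vendor-specific reading onto a common schema."""
    if reading.get("vendor") == "A":
        return {"unit": reading["unit_id"], "kw": reading["power_kW"],
                "supply_c": reading["supply_temp_C"]}
    if reading.get("vendor") == "B":
        # Vendor B reports watts and tenths of a degree.
        return {"unit": reading["id"], "kw": reading["w"] / 1000,
                "supply_c": reading["t10"] / 10}
    raise ValueError("unknown vendor")

def flag_inefficient(readings: list[dict], kw_limit: float = 50.0) -> list[str]:
    """Return units drawing more power than an (assumed) efficiency limit."""
    common = [normalise(r) for r in readings]
    return [r["unit"] for r in common if r["kw"] > kw_limit]

raw = [
    {"vendor": "A", "unit_id": "CRAC-1", "power_kW": 42.0, "supply_temp_C": 20.0},
    {"vendor": "B", "id": "CRAC-2", "w": 61000, "t10": 215},
]
print(flag_inefficient(raw))  # only CRAC-2 exceeds the 50 kW limit
```

Once readings share one schema, cross-vendor comparisons like this become a single query rather than a tour of separate dashboards.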
This innovation can support everyone involved in data centres - including manufacturing companies, building owners, facility managers and even government organisations - to reduce the cost of deploying and maintaining IT cooling equipment, and bring down both the carbon footprint and the cost of running data centres.

Reusing, not wasting, energy

When it comes to improving data centre sustainability, there is also a big opportunity to reuse energy rather than waste it. Heat pumps are the key here: they can take heat recovered from data centres - heat that would otherwise be released into the environment - and upgrade it for distribution to heat local spaces, potentially via a heat network. Turning ‘waste’ heat into something useful reduces the amount of energy needed to generate new heat across the wider network.

Tackling embodied carbon

As well as cutting the energy used for cooling and heating within a data centre, it is also important to consider the whole process of building, running, maintaining - and even demolishing - a data centre, and the carbon footprint of each stage. Fundamentally, data centres are buildings, which means they contain a lot of ‘embodied’ carbon too: the overall CO2 emissions in the metal, bricks and mortar used in the construction of the building, for example. By identifying how design, construction and management decisions affect a data centre’s carbon footprint, and how to minimise it, it is possible to bring down the overall environmental impact of a data centre. Data centres undoubtedly play a very important role in keeping our IT systems running, but in doing so they are inevitably energy intensive spaces.
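The heat-reuse idea described in this article can be put in rough numbers. A heat pump with a coefficient of performance (COP) of, say, 3 delivers 3 kWh of useful heat for every 1 kWh of electricity, with the balance drawn from recovered waste heat; the COP and load figures below are illustrative assumptions, not measurements from any particular facility.

```python
# Back-of-envelope for upgrading recovered data centre heat with a heat
# pump. COP = useful heat out / electrical energy in; the remainder of
# the useful heat comes from the recovered 'waste' heat itself.

def heat_network_supply(waste_heat_kwh: float, cop: float) -> tuple[float, float]:
    """Electricity needed and useful heat delivered when upgrading
    `waste_heat_kwh` of recovered heat with a heat pump of the given COP."""
    # useful = waste + electricity, and COP = useful / electricity,
    # so electricity = waste / (COP - 1).
    electricity = waste_heat_kwh / (cop - 1)
    useful_heat = electricity * cop
    return electricity, useful_heat

elec, heat = heat_network_supply(waste_heat_kwh=2000.0, cop=3.0)
print(f"{elec:.0f} kWh of electricity upgrades 2000 kWh of waste heat "
      f"into {heat:.0f} kWh of useful heat")
```

In this toy case, two-thirds of the heat delivered to the network is recovered rather than newly generated, which is the saving the article describes.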
Through more sophisticated analytics and monitoring, and by considering how energy can be reused and how embodied carbon can be reduced across every stage of building and running a data centre, it is possible to keep carbon emissions, energy use and running costs down. On the road to reaching net-zero carbon emissions by 2050, and the even more ambitious goal of achieving a zero-carbon data centre by 2030, these factors will be key. www.mitsubishielectric.com

Time for a coffee? It’s time to discuss five key topics influencing enterprise storage
By Gareth Beanland, UK and Ireland Country Manager at Infinidat

As we all head back to office life, even if just for a small proportion of the working week, old customs, like strategy conversations in the kitchen or over a cup of coffee, are returning. A lot has changed in the workplace, and these topical chit-chats may seem like a quaint relic of a long-forgotten, pre-COVID age, but they are as important as ever, if not more so. If you are thinking about long-term IT expenditure, reducing cost of ownership and what to do about enterprise storage in the next three to five years, here are five good conversation starters to get things going.

Can enterprise storage help guard against cyber-attacks?

Yes, and the key issue to discuss is consistency. Due to the cyber threats of ransomware and malware, it is imperative for enterprises to implement modern data protection and cyber resilience practices and capabilities across their primary and secondary storage estates. Features to add include immutable snapshots, logical air gapping, fenced forensic environments and virtually instantaneous recovery. Cyber criminals are targeting data backups as well as primary storage, so secondary storage needs to be secure and robust enough to withstand attacks too. Remember, it is not a question of if you will suffer a cyber-attack - attacks are pretty much a given these days - but when and how often. When you are considering enterprise storage technology to guard against a cyber-attack, look for solutions with purpose-built backup appliances for your secondary storage environments, to nullify ransomware and malware with automated functions. This will ensure business continuity and protect one of your company’s biggest strategic assets - its data. It means that when a hacker declares they have taken your data ransom, you can go back to the immutable snapshots and simply ignore the cyber-criminal.
There will be no need to pay any ‘ransom’, as you have a good copy of the data to recover.

Can you detect when data storage is overly complex?

Yes, and the key issue to discuss is storage consolidation. When enterprise storage becomes overly complex, your data ends up in silos and becomes costly to utilise and manage. This is a common nightmare, and one that storage administrators too often tolerate. By consolidating your storage arrays, you not only reduce the number you have to manage, but also the number of independent silos that require management, dramatically simplifying the entire data process.

Can you tell if a storage solution provider can scale adequately?

Yes: look at the company they keep. One of the easiest and best ways to vet the capabilities of an enterprise storage solution provider is to see if they count any Cloud Service Providers (CSPs) and Managed Service Providers (MSPs) as customers. It shows robustness and technical resilience if CSPs and MSPs trust a storage vendor to this degree. If CSPs and MSPs are relying on the vendor’s storage platform for their own clients, then CIOs and IT decision-makers can be confident that its storage capabilities are top-notch and proven. It may mean that 90,000 back-ups per day are being run on that storage technology instead of only 30,000. As an additional check, look for storage providers who list CSPs, MSPs and clients in highly regulated markets like banking, healthcare and insurance. Those clients will inevitably set their storage suppliers stringent SLA requirements, and you can be confident the suppliers will meet them.

How can we shift from managing infrastructure to focusing on applications?

There has been a shift within enterprise IT towards managing applications rather than managing the infrastructure. Many CIOs now talk about adopting a ‘set-it-and-forget-it’ approach, which seems to be the preferred mode to reinforce this focus on applications.
They want their storage systems to be automated and autonomous. To benefit from the same level of comfort and avoid painful, extensive ‘performance tuning’, seek storage vendors offering intelligent, Neural Cache functionality. Because such a system employs a form of machine learning, it can adjust automatically to ongoing changes in the application infrastructure, reflecting the set-it-and-forget-it mentality. To go a stage further, look out for support for Red Hat’s Ansible framework, which lets storage admins allow the DevOps and operations teams to allocate and configure storage within the parameters established in Ansible, so those teams can work with enterprise storage platforms without causing any problems.

Can we reduce CAPEX and OPEX by consolidating our storage?

Yes. This is one of today’s biggest trends in enterprise storage, because companies are always looking to reduce expenditure, and storage can be one of the most expensive pieces of their data centre infrastructure budgets. This is particularly exacerbated in an era where data and storage growth is exponential. When a data centre uses vast sets of arrays, it needs more rack space, larger data centres, more power, more cooling and more daily operational management. This is all highly resource and cost intensive. By consolidating storage, both CAPEX and OPEX can be dramatically reduced. A set-it-and-forget-it approach to your storage environment also lowers the operational manpower required. There is no need for RAID groups or LUNs, and no requirement to ‘play’ with the storage to optimise application performance or perform any other configuration.

Technology has liberated us to work remotely. As the COVID pandemic has proven, it does work, but there is nothing like the spontaneous interactions that take place in an office environment for sparking new ideas, especially when it comes to enterprise IT strategy.
Informal conversations over a cup of coffee are hugely valuable for sounding out advice and new ideas or discussing collaboration opportunities. Ultimately, they lead to better decision-making and greater efficiency and effectiveness.
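Returning to the first question, the immutable-snapshot recovery it describes can be sketched as a toy model: a frozen, point-in-time copy that later tampering with the live volume cannot alter. This is purely illustrative; real arrays enforce immutability in the storage layer, not in application code.

```python
# Toy model of immutable snapshots and instant recovery. Illustrative
# only - not any storage vendor's actual implementation.
from types import MappingProxyType

class Volume:
    def __init__(self, blocks: dict):
        self.blocks = dict(blocks)
        self.snapshots = []

    def snapshot(self):
        # MappingProxyType is a read-only view over a private copy:
        # nothing that happens to the live volume can change it.
        frozen = MappingProxyType(dict(self.blocks))
        self.snapshots.append(frozen)
        return frozen

    def restore(self, snap):
        self.blocks = dict(snap)

vol = Volume({"file.db": "good data"})
snap = vol.snapshot()
vol.blocks["file.db"] = "ENCRYPTED BY RANSOMWARE"   # attack on live data
vol.restore(snap)                                    # ignore the ransom note
print(vol.blocks["file.db"])  # back to "good data"
```

The point of the model: because the snapshot cannot be written to after creation, a known-good copy always exists to roll back to.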

ESR announces over $1bn first close of inaugural Data Centre Fund
ESR has announced the first close of over $1 billion in equity commitments for its inaugural vehicle, Data Centre Fund 1, dedicated to the development of its growing data centre business. ESR DC Fund 1 brings together some of the world’s largest institutional investors, including sovereign wealth and pension funds. ESR will raise a separate discretionary capital sleeve to co-invest into the fund, which is likely to close at its hard cap of $1.5 billion. Additionally, the partners have an upsize option for a further equity commitment of $1.5 billion, which would bring the total investment capacity to as much as $7.5 billion over time. ESR’s current data centre development portfolio comprises projects in prime locations across Asia’s major data centre clusters, including Hong Kong, Osaka, Tokyo, Seoul, Sydney, Mumbai and Singapore, delivering 300MW of IT load. Amongst these projects is a key asset the group acquired in Osaka, which will be developed into a multi-phase data centre campus with a development potential of up to 95MW of IT load to serve both hyperscalers and colocation operators in the rapidly growing Osaka market. Jeffrey Shen and Stuart Gibson, Co-Founders and Co-CEOs of ESR, say: “APAC is the prime market for data centre development and investment in the new era of digitalisation. The substantial first close of our inaugural data centre fund marks a significant milestone for ESR as we continue to grow and scale our digital infrastructure business. We thank our capital partners for their strong support for this exciting effort.
“As the largest new economy real estate platform in APAC, we are looking to play into the critical need for digital infrastructure in a big way going forward by leveraging our core competitive advantages, with a singular focus on supporting our capital partners and customers to thrive and capitalise on the continued rise of the new economy and digital transformation in APAC.”

Diarmid Massey, ESR Data Centres CEO, highlights: “With nearly $60 billion of New Economy AUM, digital infrastructure is a key strategic focus for ESR Group. Naturally, our ambition is to offset high energy consumption by aligning with our ESG strategy to refurbish, re-develop and convert some of our existing 39.8 million sqm GFA of assets into large and edge data centres, and to explore sustainable options through actual renewable energy generation from rooftops.”

Devashish Gupta, ESR Data Centres CIO, elaborates: “The APAC Data Centre fund is uniquely placed to take advantage of ESR Group’s adjacencies in land, power and fibre origination, a strong pipeline of recently acquired data centre-specific sites, a dedicated team of experienced data centre professionals, and partnerships with best-in-class data centre operators for colocation assets. Our ability to offer powered-shell, fully fitted and colocation assets to serve hyperscalers, enterprises and operators provides a scalable solution with shorter ready-for-service timelines for our customers, and risk-adjusted strategies for our capital partners.”

MyCena announces channel drive in critical infrastructure sectors
MyCena has announced that it is actively recruiting channel partners in the critical infrastructure sectors and contact centres. MyCena’s drive for partners comes as the threats of phishing and ransomware continue to grow. According to Proofpoint’s annual phishing report, 83% of organisations experienced a successful email-based phishing attack in 2021, and 54% said they dealt with more than three successful attacks. Julia O’Toole, CEO and Founder of MyCena, explains: “Amid rapidly evolving cyber threats, the Ukraine-Russia conflict and a new generation of cyber-attackers like Lapsus$, organisations need to understand the difference between identity and access, and how mixing the two has led to companies giving away access control to their employees.

“In the physical world, there is no ambiguity. You use your identity to identify yourself when you cross a border or sit an exam, for example. And you use keys to open doors: doors don’t recognise people and open for them. In companies, managers hand access keys to employees when they join and take them back when they leave.

“But in the digital world, identity and access have been confused. Employees have used their identity and made their own access keys, or passwords, to open the company’s doors and access all assets. This has mechanically opened companies up to a whole range of new risks, including password phishing, unauthorised sharing, loss and fraud, thus compromising the integrity and privacy of all their data.

“The use of single access has further accelerated breaches and supply-chain compromises, as it removed obstacles and facilitated the spread of infection to multiple networks in one go.”

As cyber-attacks increase in severity and frequency, the absence of access control and segmentation is particularly dangerous for critical infrastructure and contact centres.
As seen in the Okta breach by ransomware group Lapsus$, customer support providers and contact centres represent a huge attack surface through which many critical infrastructure sectors can be infected. MyCena’s patented solutions provide access segmentation, control and security to organisations by distributing strong, unique, encrypted passwords to employees in real time for every system, whether IT, OT, IoT, SSH, RDP, web applications or legacy systems. Passwords remain encrypted from creation through distribution, storage and use to expiry, eliminating the risks of human error, password fraud and man-in-the-middle attacks. Julia continues: “Companies over-rely on tools like Multi-Factor Authentication without understanding where access control falls short. When you allow employees to make their own passwords, you have lost control of access. After losing access control, companies should consider all their passwords compromised by default, which means that a second factor cannot guarantee legitimate access. With access control, multi-factor authentication provides an additional validation. Without access control, MFA provides a false sense of security.

“As ransomware groups continue to intensify their attacks on critical infrastructure, the pressure to ‘shield up’ will continue to mount. Trusted channel partners can play a key role in preventing those attacks and limiting their effects.

“MyCena’s solutions are a simple, effective and secure way to prevent cyber breaches. By regaining their access control, organisations regain control of their data and infrastructure, while relieving their employees of the mental load of remembering company passwords. By segmenting access, organisations develop structural cyber resilience and thus protect themselves against ransomware.”
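The underlying principle - machine-generated, unique credentials per system, so no human ever chooses or reuses a password - can be sketched in a few lines. This illustrates the general idea only, not MyCena’s actual product or its encryption pipeline.

```python
# Illustrative only: one strong, random, unique password per system,
# generated by machine rather than chosen by an employee. Sketches the
# principle of segmented access, not MyCena's implementation.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 24) -> str:
    """Cryptographically secure random password."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# Hypothetical systems; each gets its own credential, so phishing one
# password reveals nothing about access to the others.
systems = ["vpn", "erp", "ssh-bastion", "crm"]
vault = {name: generate_password() for name in systems}

assert len(set(vault.values())) == len(systems)  # all unique
```

Segmentation falls out naturally: because no credential is shared between systems, a single compromise cannot spread to multiple networks in one go.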

SUPERNAP signs PPA with WHA Utilities & Power to power its data centre
SUPERNAP will produce its own energy and lower its carbon footprint, leading a green approach to digital transformation and bringing renewable energy to the digital infrastructure of Thailand. In line with the company’s policy of helping to save the planet and reduce global warming and the greenhouse effect, the project will also help SUPERNAP and its clients to reduce electricity costs significantly throughout the system’s life, while offsetting 18,250 tonnes of CO2 emissions. “SUPERNAP has been the forerunner in the region since our hyperscale facility opened in 2017. Since then, our leading technology has provided 100% uptime. Our commitment to providing the best digital infrastructure is once again demonstrated with this initiative towards efficiency and sustainability. WHAUP has been chosen to install the solar power system at SUPERNAP because of its expertise in engineering and safety and its solid experience in the installation of solar power systems. We are confident in the skills and professionalism of the company,” says Sunita Bottse, Chief Executive Officer of SUPERNAP. SUPERNAP has started working with WHA Utilities & Power on the implementation of the solar farm. The solar farm will be built on SUPERNAP’s land at its data centre premises in the Eastern Economic Corridor (EEC), outside the Bangkok flood zone and close to international network landing stations with links across Thailand. “SUPERNAP is a Tier IV-certified data centre colocation and cloud services provider with the most advanced technology in the ASEAN region. It is driven by demand in Asia Pacific for purpose-built data centres that can guarantee performance, availability and disaster risk reduction. The growth of data and applications in the region derives from the need to stay closer to businesses and consumers to improve customer experience using cloud, AI, IoT and big data.
SUPERNAP is the leader in Asia, offering higher service capabilities than any other data centre in Southeast Asia. Having such a great company as our customer reinforces WHAUP’s position as a leading provider of solar power systems,” comments Dr. Niphon Bundechanan, Chief Executive Officer, WHA Utilities and Power PLC (WHAUP). The Power Purchase Agreement (PPA) covers engineering, procurement and construction (EPC), as well as an energy storage system to store excess power and release it when the solar energy system cannot generate enough power to satisfy demand. Furthermore, WHAUP will be responsible for operation and maintenance of the system for 20 years. The project began in early April and is scheduled for completion in the autumn of this year. By being the first colocation and cloud data centre to implement renewable energy, SUPERNAP will contribute to the development of the region’s green digital infrastructure, supporting the national strategy to reduce greenhouse gas emissions, as well as lowering the carbon footprint of its clients.

Colt Technology Services and Oracle expand collaboration
Colt Technology Services has announced the deeper integration of its Colt On Demand service with Oracle Cloud Infrastructure (OCI) FastConnect, offering customers an even smoother user experience. The new API-based feature is focused on delivering productivity and efficiency gains via an enhanced end-to-end customer experience. It offers an improved customer interface through the integration between the OCI platform and the Colt On Demand portal. This allows customers to provision a more secure private connection from their on-premises environment to their preferred Oracle Cloud Region directly from the OCI Portal, rather than switching between the OCI Portal and the Colt On Demand portal. This represents the latest expansion of the relationship between Colt and Oracle, and follows the announcement of the global FastConnect partnership and the interconnection of the Colt IQ Network with the Oracle Cloud Regions. Colt On Demand for OCI allows customers to self-provision highly secure, high-bandwidth connectivity to Oracle Cloud Regions in a matter of minutes, and to dynamically scale bandwidth up or down in near-real-time. Colt On Demand for OCI is controlled by an online customer portal and purchased through a flexible, pay-as-you-use commercial model. Colt Technology Services’ CEO, Keri Gilder, says: “Our growing partnership with Oracle has allowed us to develop this new API-based feature, which delivers increased efficiency for customers and reduces the probability of human error, as critical connectivity data is transferred automatically from the OCI Portal to the Colt On Demand Portal without the need to switch back and forth, cutting and pasting.”
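To make the human-error point concrete, here is a hypothetical sketch of what API-driven provisioning looks like from the customer side: the connection parameters travel as structured, validated data rather than being copied between portals. The service name, fields and allowed bandwidth tiers below are entirely invented; they are not Colt’s or Oracle’s real API.

```python
# Hypothetical sketch of API-driven connectivity provisioning. Every
# field name and value here is an illustrative assumption, not the real
# Colt On Demand or OCI FastConnect interface.
import json

def build_provision_request(region: str, bandwidth_mbps: int) -> str:
    """Assemble a provisioning payload. Validating and passing the data
    programmatically removes the copy-and-paste step - and with it, the
    chance of a mistyped value - described above."""
    allowed = {100, 500, 1000, 10000}  # assumed bandwidth tiers
    if bandwidth_mbps not in allowed:
        raise ValueError(f"bandwidth must be one of {sorted(allowed)}")
    return json.dumps({
        "service": "fastconnect-private-peering",
        "cloud_region": region,
        "bandwidth_mbps": bandwidth_mbps,
    })

req = build_provision_request("uk-london-1", 1000)
print(req)
```

Scaling bandwidth up or down then becomes another validated API call rather than a second round of form-filling.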

Guide from Siemon provides design advice for next gen data centres
Siemon has announced a new data centre application and product guide designed to provide data centre professionals with comprehensive guidance for selecting, designing and deploying business-critical IT infrastructure in the data centre. The rise of connected technologies and growing data volumes are driving change in the data centre. New data centre interconnect (DCI) technology, distributed cloud and full-mesh switch fabric architectures in highly virtualised environments result in the need for higher-density and more complex fibre links. Siemon’s guide highlights a range of specialised fibre solutions, including high-density fibre enclosures and smaller-diameter cables and assemblies, that help manage these critical connections in tight data centre spaces. As transmission speeds migrate to 400/800G Ethernet, the guide also offers recommendations on how to migrate to high-density fibre channels utilising existing infrastructure. 400 Gigabit applications have initially been deployed in cloud data centres for switch-to-switch uplinks. Examples are given of how 400G switch-to-switch links can be deployed using singlemode channels that support Base-8 MTP fibre trunks, either with 8-fibre MTP jumpers or with MTP-LC cassettes and LC fibre patch cords. A 400G switch could then be connected to 4x100G switches using the same Base-8 trunk and a breakout MTP-LC cord. Other examples include 100G switch-to-server downlinks and how they can support breakout to 4x25G channels using direct attach copper cable assemblies (DACs). Also showcased within the guide is Siemon’s comprehensive portfolio of data centre infrastructure solutions, including category 6A shielded copper cabling, pre-terminated and high-density fibre patching solutions, automated infrastructure management (AIM) for remote monitoring and a real-time view of data centre connections, as well as a new toolless fibre routing system for protecting and routing fibre cabling.
Information about the company’s data centre design services completes the guide, plus details of its ecosystem of partners, including Arista and Cisco. “With this new application and product guide we encompass all the support that Siemon can offer to data centre professionals,” explains Alberto Zucchinali, Data Centre Solutions and Services Manager at Siemon. “It provides a single access point to the most relevant information to help data centre professionals identify the right solutions to support a robust, scalable and standards compliant network infrastructure.”
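The 400G breakout examples described in the guide rest on simple fibre arithmetic, sketched below. The assumption (consistent with 400G-DR4-style optics) is that a 400G port uses four fibre pairs - exactly one Base-8 trunk - and can break out into four 100G links of one pair each; treat the figures as illustrative rather than a substitute for the guide’s channel designs.

```python
# Fibre arithmetic for a 400G-DR4-style deployment over Base-8 MTP
# trunks: 4 fibre pairs (8 fibres) per 400G port, one pair per 100G
# breakout link. Illustrative sketch of the counting only.
FIBRES_PER_BASE8_TRUNK = 8
FIBRES_PER_400G_PORT = 8      # 4 transmit + 4 receive

def trunks_needed(ports_400g: int) -> int:
    """One Base-8 trunk carries exactly one 400G DR4-style port."""
    return ports_400g * FIBRES_PER_400G_PORT // FIBRES_PER_BASE8_TRUNK

def breakout_100g_links(ports_400g: int) -> int:
    """Each 400G port can split into four 100G links (one pair each)."""
    return ports_400g * 4

# A row with 12 400G uplinks: 12 trunks, or 48 100G links if broken out.
print(trunks_needed(12), breakout_100g_links(12))
```

The same counting shows why Base-8 trunking migrates cleanly: the trunk installed for one 400G link is re-usable, unchanged, as four 100G channels.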

Green decentralised energy solutions could help bridge expansion gap
As recent reports show rapidly escalating data centre supply in the FLAP markets, construction site stakeholders need to identify energy solutions to keep powering this rapid expansion, says Aggreko. According to a recent CBRE report, new supply surged in 2021, with 397MW coming online across the FLAP markets. With this momentum expected to continue into 2022, Aggreko is encouraging data centre owners and operators to get ahead of the curve when it comes to energy scarcity and powering increasing amounts of space.

“Though the European market’s continued boom is undoubtedly good news, certain worrying trends can be identified by delving into the data behind this continued growth,” says Billy Durie, Global Head for Data Centres at Aggreko. “Nowhere is this more apparent than in the fact that the FLAP markets experienced their two largest quarters for take-up in the second half of the year, with 105MW in Q3 and 96MW in Q4. Taking this into account, the question must be asked: how is all of this going to be powered?

“Such scale, coupled with the fact that CBRE has anticipated higher costs caused by energy price rises, will pose challenges for those building new facilities to service this growth. Namely, data centre stakeholders may find themselves unable to power expanded halls or increased take-up in existing premises due to grid constraints.”

The effects of grid strain can be seen clearly in Amsterdam, where a moratorium on data centre construction has been in place since July 2019 in both the city and the nearby municipality of Haarlemmermeer. Though not as extreme, other areas are experiencing similar issues. For example, CBRE has identified government-imposed restrictions on build activity as a key challenge for the rapidly expanding Frankfurt market. Such limitations highlight the role decentralised energy solutions could play in meeting data centre construction demand.
Specifically, by using generator technology as a bridging solution both during the project and when facilities come online, power provision can be maintained while grid capacity is increased for these new or expanded facilities.

Billy says: “Looked at from the outside, it could be argued that the European data centre sector is experiencing what might be regarded as an enviable problem. Yet this remains a pressing concern, and one that must be addressed if the FLAP markets’ upward trajectory is to continue unabated. The provision of green distributed energy equipment on a hired basis is one way to bridge this gap between ever-growing data centre demand and increasingly unreliable and constrained grid infrastructure.

“Whether through the use of alternative fuels such as HVO alongside Stage V generators for larger sites, or hybrid battery systems for smaller loads, the technology is already there to assist this transition sustainably. Considering these facilities are increasingly situated in built-up areas subject to emissions controls, this is also a vital concern.”

Kao Data expands Harlow campus with second 10MW data centre
Kao Data has announced the expansion of its Harlow data centre campus, with construction now underway on its second 10MW facility, underpinning infrastructure in the UK innovation corridor. Following a recent investment from Infratil (and the launch of its Slough data centre last month), the new facility, named KLON-02, will build upon the company’s infrastructure, energy efficiency and sustainability capabilities, providing customers with a reduced carbon footprint and a low total cost of ownership (TCO) over the lifecycle. KLON-02 will offer up to 10MW of capacity and provide an energy efficient home for almost 1,800 racks of IT equipment across 3,400m2 of technical space in four Technology Suites. Once fully operational, the facility will be NVIDIA DGX-Ready data centre certified and OCP-Ready, enabling High Performance Computing (HPC), Artificial Intelligence (AI) and enterprise computing users to scale quickly and efficiently by deploying pre-populated OCP Accepted racks, or bespoke high-density architectures, within its hyperscale-inspired design. In similar fashion to KLON-01, the new carrier-neutral data centre will benefit from access to resilient, low-latency connectivity via multiple major networks, including BT Openreach, euNetworks, Jisc/Janet, the London Internet Exchange (LINX), LUMEN and Vorboss. Furthermore, Kao Data’s continued partnership with Megaport will deliver high-performance connectivity to all major Tier I and Tier II cloud service providers, including Amazon Web Services (AWS), Google Cloud, Microsoft Azure and Alibaba Cloud. Central to the design and build of KLON-02 will be the highest sustainability features, from ensuring the new architecture is BREEAM ‘Excellent’ certified and that both its Technology Suites and related cooling infrastructure conform to hyperscale design principles, to ensuring customers’ mission-critical workloads are delivered with an SLA-backed PUE of <1.2.
These will be complemented by the use of 100% renewable energy and 100% sustainable Hydrotreated Vegetable Oil (HVO) in its backup generators.

“The expansion of our Harlow campus is another strategic milestone in Kao Data’s evolution as we continue to scale the business’s high performance data centre offering across the UK and Europe,” says Lee Myall, CEO, Kao Data. “With our second 10MW facility in Harlow closely following the launch of our 16MW data centre in Slough, Kao Data is bringing advanced colocation capacity at industrial scale to the west and east of London.”

“KLON-02 will build on the award-winning design and build principles of our KLON-01 facility, offering greater efficiencies for our customers’ colocation environments,” adds Paul Finch, COO, Kao Data. “It’s exciting to be able to continue to innovate within our construction, engineering and technical principles, and I am confident our second facility at Harlow will raise the bar again for sustainable, energy efficient data centres.”
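For context on what an SLA-backed PUE of <1.2 means in practice: PUE (Power Usage Effectiveness) is total facility energy divided by IT equipment energy, so 1.2 caps the cooling, power distribution and lighting overhead at 20% of the IT load. The figures below are illustrative assumptions, not Kao Data’s operational numbers.

```python
# PUE = total facility power / IT equipment power. A value of 1.0 would
# mean zero overhead; <1.2 caps overhead at 20% of IT load. Example
# figures are illustrative only.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    return total_facility_kw / it_load_kw

it_load = 10_000.0   # kW of IT equipment (a 10MW facility at capacity)
overhead = 1_500.0   # assumed kW of cooling, distribution losses, lighting

value = pue(it_load + overhead, it_load)
print(f"PUE = {value:.2f}")  # 1.15, inside a <1.2 SLA
```

Viewed this way, every kilowatt shaved off cooling overhead improves the metric directly, which is why cooling design dominates PUE discussions.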

Airedale launches data centre chiller range with Enhanced Free Cooling
Airedale has announced the launch of the DCS™ range, a family of air-cooled chillers specifically optimised for demanding data centre environments, featuring ground-breaking Enhanced Free Cooling technology. With uptime and energy efficiency hard-wired into every facet of their design, the DCS™ range has evolved from decades of worldwide data centre experience. The range has been designed by the Data Centre Solutions team, based at Airedale's global headquarters in Leeds, UK, to deliver powerful, reliable and sustainable cooling in the most demanding of environments.

Building on existing Airedale chiller platforms - DeltaChill, TurboChill and OptiChill - the DCS team has implemented multiple upgrades to meet the unique requirements of the data centre market. Enhanced controls functionality, such as compressor fast-start and optimised head pressure control, has been added, along with hardware upgrades such as battery-free UPS and Automatic Transfer Switches to conserve uptime. The chillers are also designed to operate at increased supply/return water temperatures (20/32°C), in line with data centre trends and to maximise free cooling potential. Available with a range of compressor technologies and refrigerants, including the low-GWP option R1234ze, and featuring the Enhanced Free Cooling package, this range of super-high efficiency chillers offers unparalleled energy savings and a much reduced carbon footprint.

Enhanced Free Cooling

With a complete redesign of the V-block condenser coils and the implementation of larger EC fans to maximise air flow, the Enhanced Free Cooling package is able to deliver a higher percentage of full free cooling over the course of a year. These mechanical upgrades, coupled with the ability of the DCS chillers to operate at increased maximum supply/return water temperatures, can deliver up to 39% annual energy cost savings over traditional free cooling methods.
The reduced reliance on mechanical cooling in turn lowers the stress on components such as compressors, reducing maintenance requirements and increasing the longevity of the unit. Over its full life cycle, the combination of reduced energy consumption and closer approach temperatures leads to improved PUEs, allowing data centre operators to more easily meet their environmental targets.

Key features of the Enhanced Free Cooling package include:

• Larger 910mm EC fans delivering increased air flow (compared to 800mm on standard units)
• Five rows of free cooling coils for increased free cooling capacity (compared to three rows on standard units)
• Increased maximum supply/return water temperatures, up to a maximum of 20/32°C (68/90°F)
• Closer approach temperatures

Patrick Cotton, Product Manager for chillers at Airedale, says, "In modern high-capacity data centres, balancing performance with sustainability, without compromise, is a priority, as data centre operators work hard to reduce their carbon footprint whilst maintaining availability during a global surge in demand for data."

Patrick continues, "Working closely with some of the world's largest data centre operators, we have been able to design and deliver our DCS range of chiller products to meet worldwide efficiency targets, whilst delivering the future of data centre cooling."

The DCS™ chiller range includes TurboChill DCS™ (200kW to 1830kW), DeltaChill DCS™ (110kW to 1100kW) and OptiChill DCS™ (1850kW). Enhanced Free Cooling is available on TurboChill DCS™ and DeltaChill DCS™.
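The benefit of raising supply/return water temperatures can be seen with a simple thought experiment: a chiller can run in full free cooling whenever the ambient air is colder than the supply water by at least the approach temperature, so warmer water and a closer approach both widen that window. The sketch below is purely illustrative (not Airedale's control logic) and uses a fabricated ambient profile:

```python
import random

def free_cooling_hours(ambient_temps_c, supply_c, approach_c):
    """Count hours where ambient <= supply - approach, i.e. the air is
    cold enough to cool the water without mechanical refrigeration."""
    threshold = supply_c - approach_c
    return sum(1 for t in ambient_temps_c if t <= threshold)

# Fabricated year of hourly ambient temperatures (rough UK-like spread,
# for comparison only - not real climate data).
random.seed(0)
year = [random.gauss(10, 6) for _ in range(8760)]

# Conventional setpoints vs the higher water temperatures and closer
# approach described above (illustrative values).
legacy = free_cooling_hours(year, supply_c=14, approach_c=4)    # threshold 10 °C
enhanced = free_cooling_hours(year, supply_c=20, approach_c=2)  # threshold 18 °C
print(legacy, enhanced)  # enhanced window is substantially larger
```

More hours in full free cooling means fewer compressor run-hours, which is where both the energy savings and the reduced component wear come from.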


