Features


How data centres can tackle their environmental impact
By Inspired Energy

Environmental impact is something more and more data centres are looking into as part of the Climate Neutral Data Centre Pact, under which data centres commit to becoming climate neutral by 2030. Alongside this, demand for data centres is increasing, with data becoming the new fuel as we continue to innovate and rely more on data and technology in our day-to-day lives. But how can data centres tackle their environmental impact without affecting efficiency?

Tackling the environmental impact

With many data centres having already developed net zero and heat decarbonisation plans, here are some key areas of focus for tackling environmental impact without compromising efficiency:

Greenhouse gas (GHG) emissions

GHG emissions are a main measure for many businesses on their journey to net zero; data centres, however, are often measured by carbon intensity. This metric provides a relative comparison of GHG emissions after factoring in the scale of a business and its emission rate. Most businesses are already reporting on their Scope 1 and 2 emissions, and only large LLPs are required to report on some of their Scope 3 emissions under the Streamlined Energy and Carbon Reporting (SECR) scheme, although some businesses are voluntarily reporting their Scope 3 emissions as it's likely they will become mandatory in the future. As some of the largest energy consumers, data centres that report on their Scope 3 emissions are demonstrating their commitment to sustainability.

Sourcing renewable energy

As energy-intensive users, data centres are turning towards renewable energy to support their net zero commitments, and some larger data centres are securing renewable energy partnerships to power their sites. Renewable energy procurement can help to reduce or even eliminate Scope 2 emissions, and there are a number of purchasing options: a virtual or physical Corporate Power Purchase Agreement (PPA), where renewable energy is purchased directly from an energy generator; green tariffs secured through an energy provider; on-site generation; and renewable energy certificates.

Calculating your Power Usage Effectiveness (PUE)

Data centres should look to optimise their PUE score to achieve maximum efficiency by reducing the amount of energy used for anything other than running equipment. Data centres looking to reduce their PUE may find it challenging to cut costs and improve their environmental impact at the same time. However, there are ways to optimise PUE:
● Allowing server rooms to operate at a higher temperature
● Reducing the density, and therefore the energy consumed, per square metre - this helps to dissipate heat but can be counteractive to other practices
● Improving the flow of cool air in computer rooms through containment solutions
● Optimising the production of cool air through combined use of outside air and heat exchangers
● Locating data centres in the Arctic or under the sea
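As a rough illustration of the metric itself (not a tool from the article), PUE is simply total facility energy divided by the energy delivered to IT equipment, so a value close to 1.0 means very little energy is spent on cooling, lighting and power distribution. The short sketch below, with made-up meter readings, shows the calculation; WUE (water usage effectiveness, mentioned in the cooling section below) follows the same pattern with litres of water in the numerator.

def pue(total_facility_kwh, it_equipment_kwh):
    # PUE = total facility energy / IT equipment energy (dimensionless, >= 1.0)
    return total_facility_kwh / it_equipment_kwh

def wue(annual_water_litres, it_equipment_kwh):
    # WUE = annual site water usage / IT equipment energy (litres per kWh)
    return annual_water_litres / it_equipment_kwh

# Hypothetical annual meter readings for a small facility
total_kwh = 5_200_000   # everything: IT, cooling, lighting, power distribution
it_kwh = 3_700_000      # energy delivered to IT equipment only
water_l = 6_300_000     # cooling water consumed over the year

print(f"PUE: {pue(total_kwh, it_kwh):.2f}")      # ~1.41
print(f"WUE: {wue(water_l, it_kwh):.2f} L/kWh")  # ~1.70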
Cooling

Data centres use water to cool their equipment in cooling towers. The many servers in a hyperscale data centre require a larger cooling capacity, but water consumption varies with the climate and the type of cooling system used. Large data centres may rely on evaporative cooling, which uses less energy but requires more water - often the preferred approach as it's less expensive. Whilst data centres need to start paying attention to their water consumption and WUE, they should also explore how to use water more efficiently - from increasing cycles of concentration in cooling towers to switching to chemical treatments that demineralise evaporative surfaces, and recommissioning to reduce cooling load.

Hardware

Hardware can also be part of the solution when it comes to reducing emissions whilst maintaining performance efficiency. New processors that increase computing power without increasing energy consumption aren't far off, as the ongoing 'arms race' between chip-makers has encouraged innovation in the energy-to-computing ratio. Updating old, outdated, broken and inefficient equipment is also part of supporting the overall effort to tackle environmental impact.

Software

AI software can also play a role by helping data centres better manage their infrastructure and maximise the utilisation of their CPUs. This will help to mitigate consumption issues and deliver energy savings. Improving CPU performance will help data centres keep up with demand for data processing and reduce the number of processors they need to achieve the same or more computing. Data storage can also be optimised to reduce environmental impact.

Working through these key areas of focus, your data centre can tackle its environmental impact and further its progress towards carbon neutrality by 2030.

Smart power management in buildings and data centres
By Matthias Gerber, Market Manager LAN Cabling, Reichle & De Massari and Carsten Ludwig, Market Manager DCr, Reichle & De-Massari AG

Regulations regarding the energy savings required to reach climate goals are becoming increasingly stringent. Recent research from Deloitte shows that buildings are currently responsible for 30-40% of all urban emissions. This must be reduced by 80-90% in order to achieve COP21 targets by 2050. Using intelligence in digital buildings is essential to achieving this. However, according to the BSRIA 'Trends in the global structured cabling markets' report (April 2022), no more than 1-2% of buildings deploy cutting-edge smart technologies. Buildings everywhere need to be renovated and managed smarter.

For buildings to become more energy efficient, they need to become smarter. All energy needs to be used wisely and not wasted on anything that might be considered unnecessary - an intelligent building connects all devices, automates processes and leverages data to improve performance. Intelligent buildings continuously learn, adapt and respond. This can highlight areas where energy is being wasted and help find solutions so that HVAC systems, smart lighting and other in-building systems (sensors) reduce their energy consumption. This approach makes it possible to better manage resources and utilities.

Sensors provide the foundation for intelligent buildings. Today, almost every device can function as a sensor. In the past, every system would have its own sensors, but these didn't exchange data or interact. In the smart home environment, we're seeing something similar today: numerous 'island' solutions, with each manufacturer using their own platform and integrated devices. However, it makes more sense to use an existing sensor in an installed device, instead of adding the same type of sensor to multiple devices. Convergence is now allowing information from individual devices to be used to optimise the performance of other devices and the system as a whole.

The converged network brings IT and OT together. Security protocols can run from enterprise servers, removing the need for protection of individual networks. One single interface and dashboard can be used to manage and control lighting, heating, ventilation and security. An 'all-IP' network allows all devices to use one common language, supporting integration and optimisation. All building technology and building management devices communicate in the same way, without barriers, over Ethernet/Internet Protocol (Ethernet/IP), with the LAN providing the basis for physical communication. IP-based convergence enables sharing of resources across applications and brings standardisation, availability, reliability and support for new deployments.

The 'digital ceiling' concept supports 'all over IP' implementations. The data network and PoE are extended through an entire building's ceiling, making it possible to connect building automation devices within defined zones via pre-installed overhead connecting points. The 'digital ceiling' will increasingly provide services that building occupants and managers are going to need in the near future and for years to come, enhancing user experience while reducing energy usage, making maintenance and the addition of new devices faster and easier, lowering installation and device costs, and increasing layout flexibility.
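The article doesn't prescribe a specific protocol for this all-IP convergence, but as a hypothetical sketch of the idea, the snippet below uses MQTT (a common IP-based messaging protocol in building automation, assumed here rather than named in the text) to let one dashboard process consume readings from sensors that previously lived on separate 'island' systems. Topic names and the broker address are invented for illustration.

import json
import paho.mqtt.client as mqtt  # pip install paho-mqtt

BROKER = "building-broker.local"   # hypothetical on-site MQTT broker

def on_message(client, userdata, msg):
    # Every subsystem (HVAC, lighting, occupancy) publishes JSON to its own topic,
    # so a single dashboard can correlate them instead of each running in isolation.
    reading = json.loads(msg.payload)
    print(f"{msg.topic}: {reading}")
    if msg.topic.startswith("building/occupancy") and reading.get("occupied") is False:
        # Example cross-system rule: an occupancy sensor can inform the lighting system.
        client.publish("building/lighting/zone1/command", json.dumps({"state": "off"}))

client = mqtt.Client()  # paho-mqtt 1.x style; 2.x additionally requires a CallbackAPIVersion argument
client.on_message = on_message
client.connect(BROKER)
client.subscribe("building/#")   # all building telemetry on one converged network
client.loop_forever()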
A closer look at energy management in the data centre

Data centres are responsible for some 2% of greenhouse gas emissions, which is almost the same as the entire global airline industry. For some time, designers and operators have been using the Greenhouse Gas Protocol, developed by businesses, NGOs, government bodies and other stakeholders, to evaluate their supply chain and performance. Similar initiatives, such as the recently published Climate Neutral Data Centre Pact initiated by CISPE, point the way ahead for the data centre industry.

Besides security, energy and (especially) cooling are key talking points in the data centre industry. Of every kilowatt used in the DC, the biggest portion is turned into heat. You can improve power efficiency by using more efficient equipment and thereby reduce heat production, but there is always some excess heat. The question is how to deal with this heat in a way that results in the least environmental harm.

One approach is using liquid-cooled PCBs so that components on the circuit board don't heat up and, therefore, don't pass on heat to the chassis or rack. This uses dedicated horizontal pre-terminated rack boxes which have connections for fibre and copper, cooling fluid and power. However, you will need specialised hardware for this, which will be difficult to swap out when you want to make changes. Cooling precisely at source is very efficient: by cooling individual components, heat is reduced and their operational lifetime improves. This approach is complicated and requires preparation based on the needs of individual applications, the nature of the business the hardware is used for, and the business case.

Another way of dealing with heat is distributing the components across a wider area. Concentrating as many racks as possible in a huge data centre may bring economies of scale and some practical benefits, but it also concentrates a vast amount of heat production in one space - and isn't always technically necessary. When hardware becomes outdated or a new user needs to be connected, edge hardware could be moved from, for example, a mid-sized enterprise customer location to a hyperscale facility. Later, it might even be moved from the mid-sized location to an even smaller private location.

Intelligent architecture that carefully considers hot and cold aisles is another approach. The less room you need to cool, the less energy you need. Bad examples can be found, with huge rooms being cooled even though they house just one small containment in a corner. The airflow is a mess in such a case, and the use of energy is extremely inefficient. Cooled air needs to be as close to the equipment as possible and should only cool targeted areas.

Wherever DCIM software is used, power and cooling monitoring is engaged as a minimum - in fact, these are often the ONLY monitored KPIs. Not only does this help avoid energy waste, but it also improves the stability of the system, avoiding malfunctions or even fires. If connectivity doesn't work, performance is harmed, but there's no physical danger to the system or to people. However, not knowing the status of power and cooling can lead to real damage.

It's also interesting to consider the true price of all the data we're generating. Many people wonder why we're spending so much power on storage. Some don't realise how much power their phones, tablets and computers use, and how much energy it takes to transport and store all this data.
Maybe it would be good if we thought about the environment every time we posted something online!

Scope 3 emissions: the time for bold leadership is now
By David Craig, Chief Executive Officer, Iceotope

Technology offers us a path forward to meet the demands of the current climate challenges. The real question is: do we have the leadership to embrace it?

Lately, it seems as if every time we scroll through our news feeds, we are confronted with the realities of climate change. Europe is in the midst of its worst energy crisis in decades - one that began even prior to the war in Ukraine, which has exacerbated the situation. The Intergovernmental Panel on Climate Change (IPCC) is regularly issuing reports on the severe impact of climate change on global society. Weather patterns around the world are becoming more extreme, resulting in fires, floods and more powerful hurricanes than ever before.

The best corporations don't just want to be seen as doing the right thing, but genuinely are doing the right thing when it comes to climate change. Businesses and industries have begun to track their carbon emissions through the Greenhouse Gas (GHG) Protocol, a widely sponsored international standardised framework to measure greenhouse gas emissions. The protocol divides emissions into three different scopes, commonly defined as:
● Scope 1 - direct emissions from owned or controlled sources.
● Scope 2 - indirect emissions from the generation of purchased electricity, steam, heating and cooling consumed by the reporting company.
● Scope 3 - all other indirect emissions that occur in a company's value chain.

Today, the scopes are designed to be voluntary. Scopes 1 and 2 are relatively easy for a company to own and track. Scope 3 goes to the level of actually understanding the carbon lifecycle of your entire footprint from cradle to grave. The corporate value chain measures Scope 3 emissions across 15 different categories, from goods and services to transportation, business travel and end-of-life product disposal. For many companies, this accounts for more than 70% of their carbon footprint. Companies that are signing up to Scope 1 and 2 will absolutely insist that their supply chains do so as well.

For data centre users, this can be a bit tricky, particularly when it comes to the cloud, where GHG emissions can be harder to calculate. When workloads are moved to the cloud, an organisation is no longer generating direct emissions or purchasing energy, which are covered under Scope 1 and 2 respectively. Those emissions are now part of Scope 3. Add to that the fact that a significant proportion of carbon emissions across computing platforms actually comes from hardware manufacturing as well as operational system use. Meta recently shared a meta-analysis of sustainability reports and lifecycle analyses from researchers identifying this trend. Despite improvements in software and hardware efficiencies, a mobile device, for example, may need a three-year longer lifespan to amortise the carbon footprint created by its manufacture.

The good news is that there are solutions data centre operators can incorporate that immediately help to reduce carbon emissions. One is precision immersion liquid cooling. Liquid cooling offers the ability to reduce infrastructure energy use by 40%, slash water consumption by more than 90% and improve pPUE to 1.03, further enhanced by server energy reductions often of 10% and more. Alternative forms of power generation, such as hydrogen fuel cells, are quickly becoming economically viable alternatives to UPS and diesel generators.
Microsoft has been testing and implementing the technology and sees long-term benefits beyond reducing carbon emissions.

What all of this requires, however, is the courage to lead. Many of these 'low-hanging fruits' are new ways of doing business. They involve moving away from a well-known technology solution to one that has greater perceived risk. Now is not the time for incrementalism. We are facing a global climate crisis and need bold leadership to make hard choices and take swift action. There is a real opportunity for competitive advantage for companies willing to adopt new technologies and find a new way to do business. A business doesn't grow by cutting back. A Gartner study from 2019 shows what differentiates winning companies in times of change, stating: "first, progressive business leaders prepare to navigate turns even before the turns are clearly in view. Second, their mindset and actions before, during and after the turns separate their organisation from the pack and determine its long-term destiny." Those that invest when change is happening are the companies that rocket and accelerate out of the downturn.

The Scope 1, 2 and 3 protocols create a language, a framework and an expectation. In other words, they create an opportunity for businesses to navigate one of the biggest technology challenges we will face this decade. Governments, particularly in the UK and Europe, are committed to net zero initiatives and to ensuring they are met. The public is embracing the seriousness of the crisis, and the youth of today will hold us all accountable. It's no longer about optics, but rather our competitive ability to survive and keep our planet healthy. Technology will play a significant role in this. Those who become the heroes of this story will be those who demonstrate bold leadership and embrace the changes that need to come.

Identifying and evaluating the real embodied carbon cost of a data centre
By Ed Ansett, Founder and Chairman of i3 Solutions Group

Global emissions from new-build projects are at record levels. Consequently, construction is moving further away from, not closer to, net zero buildings. With the current focus very much on the carbon footprint of facility operations, a new white paper presents the case for taking a 'Whole Life Carbon' approach when assessing data centre carbon impact.

According to the United Nations Environment Programme (UNEP), the carbon cost of building is rising. The UNEP Global Alliance for Buildings and Construction (GlobalABC) global status report highlighted two concerning trends: firstly, that 'CO2 emissions from the building sector are the highest ever recorded…' and, secondly, that the 'new GlobalABC tracker finds the sector is losing momentum toward decarbonisation.'

Embodied carbon costs are mainly incurred at the construction stage of any building project. However, these costs go further than simply the carbon price of materials - including concrete and steel - and their use. And while it is true that not all buildings are the same in embodied carbon terms, in almost all cases these emissions (created at the beginning of the building lifecycle) simply cannot be reduced over time. This is often the case, and especially so in data centres, so it is incumbent on the sector to consider the best ways to identify, consider and evaluate the real embodied carbon cost of infrastructure-dense and energy-intensive buildings.

Technical environments and energy-intensive buildings such as data centres differ greatly from other forms of commercial real estate, such as offices, warehouses and retail developments. Focusing on the data centre, let's take, for example, a new-build 50MW facility. It is clear that in order to meet its design objective it is going to require a great deal more power and cooling infrastructure, plant and equipment to function in comparison with other forms of buildings.

Embodied carbon in data centres

Embodied carbon in a data centre comprises all those emissions not attributed to operations - that is, to the use of energy and water in its day-to-day running. It's a long list which includes emissions associated with resource extraction, manufacturing and transportation, as well as those created during the installation of materials and components used to construct the built environment. Embodied carbon also includes the lifecycle emissions from the ongoing use of all of the above, from maintenance, repair and replacements to end-of-life activities such as deconstruction and demolition, transportation, waste processing and disposal. These lifecycle emissions must be considered when accounting for the total carbon cost. The complexity of mission critical facilities makes it more important than ever to have a comprehensive process to consider and address all sources of embodied carbon emissions early in design and equipment procurement. Only by early and detailed assessment can operators identify the actions which can contribute to immediate embodied carbon reductions.

Calculating whole life carbon

The boundaries used to measure the embodied carbon and emissions of a building at different points in the construction and operating lifecycle are the Cradle to Gate, Cradle to Site, Cradle to Use and Cradle to Grave carbon calculations, where 'cradle' refers to the earth or ground from which raw materials are extracted.
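To make the arithmetic concrete - this is an illustrative sketch, not the white paper's methodology, and every figure is invented - whole life carbon is essentially the sum of up-front embodied emissions, the recurring embodied emissions from maintenance and equipment replacement, operational emissions over the service life, and end-of-life emissions:

# Hypothetical figures in tonnes of CO2e for a notional facility
upfront_embodied = 30_000             # materials, construction, initial plant and equipment
replacement_embodied_per_year = 800   # ongoing maintenance and equipment replacement
operational_per_year = 4_000          # energy and water use in day-to-day running
end_of_life = 1_500                   # deconstruction, waste processing, disposal
service_life_years = 20

whole_life_carbon = (
    upfront_embodied
    + replacement_embodied_per_year * service_life_years
    + operational_per_year * service_life_years
    + end_of_life
)
print(f"Whole life carbon: {whole_life_carbon:,} tCO2e")  # 127,500 tCO2e

Even in a toy model like this, the point the white paper makes is visible: the up-front and replacement terms are fixed the moment the facility is built and equipped, whereas only the operational term can be reduced in later years.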
For data centres, these higher levels of infrastructure are equipment-related, additional and important considerations, because in embodied carbon terms they will be categorised under Scope 3 of the GHG Protocol Standards - also referred to as value-chain emissions. Much of the Scope 3 emissions will be produced by upstream activities that include materials for construction. Especially important for data centres, however, is that they also include the carbon cost of ongoing maintenance and replacement of the facility's plant and equipment.

That brings us to whole of life calculations, which combine embodied and operational carbon. Combining embodied and operational emissions to analyse the entire lifecycle of a building throughout its useful life and beyond is the Whole Life Carbon approach. It ensures that operational emissions, together with the embodied carbon of materials, components and construction activities, are calculated and available to allow comparisons between different design and construction approaches.

Data centre sustainability is more than simply operational efficiency

The great efforts to improve efficiency and reduce energy use - as measured through improvements in PUE - have slowed operational carbon emissions even as demand and the scale of facilities have surged. But reducing the operational energy of a facility is measured over time, and such reductions are not accounted for until five, 10 or 30 years into the future. Embodied carbon, however, is mostly spent up-front as the building is constructed; there is, therefore, a compelling reason to include embodied carbon within all analyses and data centre design decisions. A 'Whole Life' carbon approach that considers both embodied and operational emissions provides the opportunity to contribute positively to global goals to reduce greenhouse gas emissions - and will save financial costs.

Delivering a resilient and sustainable electricity supply to data centres
By Antony White, Client Delivery Manager, UK Power Networks Services

As the UK's data centre market continues to grow and mature, forecasters are predicting 36% revenue growth and a 29% uplift in power demand by the end of 2025. While this may not match the growth of some smaller European markets, the increase in the UK is still significant. That means the challenge for the UK's electrical infrastructure - to meet the significant power demand of a data centre with a truly resilient supply - remains. Sustainability pressures add complexity for data centres, which should seek alternative energy from renewable sources and invest in new energy technologies, including on-site renewable generation and battery storage.

Meeting capacity requirements with a high-quality network connection

Data centres need to be connected to the local high-voltage electricity network - generally the 132kV network - which requires complex electrical infrastructure solutions combined with experienced asset management. In most cases, connections also need to be fed from at least two sources to maintain supply in the event of power interruptions on the local network. Many of the UK's data centres are located in the south-east of England, where there has historically been relatively easy access to high-voltage energy infrastructure from existing networks. Having enough capacity on the local electricity network to support the day-to-day running requirements and other energy-intensive requirements, such as air-conditioning and cooling, is a key factor in data centre location. Other important factors include having enough physical space and nearby connectivity to data networks and other utilities.

UK Power Networks Services understands the requirements for these connections and the time and expertise needed to design and build them. Current supply chain headaches can result in long lead times for equipment, and detailed knowledge of the market, the equipment required, and experience in high-voltage design and build connections projects is essential for any data centre project. Due to global supply constraints, equipment such as 132kV switchboards and transformers can take anywhere between 12 and 18 months to procure once a detailed design is agreed. The challenge for new data centres is to understand exactly what equipment will be needed to fulfil the capacity requirements of a site, know the market and what is currently available, and then engage in procurement activities early to meet the project's timeline.

Maintaining a resilient energy supply

Customers of data centres also need assurance that their connectivity will be available 24/7. There must never be the risk of speed issues or service interruptions, let alone a prolonged impact to service. Maintaining a resilient energy supply is therefore crucial. While an uninterruptible power supply (UPS) to a data centre is achieved from multiple sources, power outages caused by equipment failure are not out of the realm of possibility. Local backup generators may be able to keep some operations running; however, the demand of a sizeable data centre is usually too great for this to be a viable option, especially where sustainability and low-carbon requirements prevent the use of diesel backup. This is where UK Power Networks Services' experience and expertise as an independent connections provider is key.
Data centres need a connections partner that understands the local electricity network, can design a fit-for-purpose connection, is experienced in high-voltage engineering, understands the equipment required, and has experience in equipment selection and knowledge of the market and what is available. Resilience is not just down to high-quality equipment and expertise in design and build projects. Managing electrical assets is a specialist subject that works best with a long-term perspective. Data centres need to know when to replace equipment to optimise performance, what technological innovations to integrate, and how and when to dispose of obsolete equipment. Ongoing operations, maintenance and asset management of that equipment will be required to keep the electricity infrastructure operating effectively and continually in service.

Powering data centres sustainably

As electricity demand from data centres is very high, there is pressure to ensure they are powered as sustainably as possible. To satisfy local planning and social responsibility requirements, the first step should be to ensure the power purchased is from green energy sources. There are other ways data centres can increase their sustainability credentials while also reducing the impact of the rising cost of sourcing all power supplies through the market. Renewable generation opportunities are available due to the large footprint that data centres occupy. These large areas may make solar PV viable - whether on the roof of buildings or in the surrounding land. Some sites may even have space for wind generation. Other opportunities are emerging as technology advances, such as providing electric vehicle charging on site for staff and visitors, or integrating battery storage into the local network. Battery storage could be used as an alternative to diesel backup generation and, as technology develops, may play a bigger part in managing the ever-increasing demand of the data centre.

Making your project a reality

When choosing a partner to power a data centre, considerations must include extensive high-voltage experience, a track record of safety, equipment procurement experience, and a full end-to-end solution encompassing design, build, operations and maintenance. UK Power Networks Services has this experience and can also provide capital finance options.

Hybrid cloud: how enterprises can build resources to suit their own needs
By Jack Bedell-Pearce, CEO and Co-Founder of 4D Data Centres

With so many issues that can cause inefficiency, IT leaders need to ensure the right foundations are in place in order to optimise the management of hybrid cloud. Every environment is different and there is no one-size-fits-all cloud infrastructure. So how can organisations prepare and build resources that work for them?

Why is optimising hybrid cloud management important?

Hybrid cloud is more than just sharing workloads between the two major hyperscale cloud providers, Azure and AWS. It also encompasses other infrastructure environments such as on-premises servers, private clouds and servers in colocation. No one platform is necessarily better than another, but it is important to regularly evaluate them individually to make sure they are ticking the right boxes. Five essential areas on which platforms need to be monitored are performance (including compute, latency and bandwidth), reliability, resilience, security and cost efficiency. In addition, green credentials have recently become a sixth important factor, with companies realising that colocation data centres and some hyperscalers are able to offer significant improvements in cooling efficiency and, in the case of colocation, high-density cooling for High Performance Computing (HPC) systems (a simple scoring sketch appears later in this section).

Not all platforms are equal when evaluated against these criteria, so it is important for companies to consider what to prioritise in their business when matching their workloads with the relevant platforms. Public cloud is very good at providing entry-level services and scaling quickly for fast-growing businesses, but for more mature companies (especially those with readily available capital and potentially legacy systems), a blend of public cloud, private cloud and colocation may be a more cost-efficient and reliable option. This is demonstrated in a whitepaper by Andreessen Horowitz, which shows the financial cost to enterprises of miscalculating the mix, and the significant savings discovered by repatriating servers back into data centres.

The right foundations and implementing good practice

In the same way you wouldn't advise someone to put all their savings into one asset class, large companies should avoid being overly dependent on a single platform. Aside from the obvious downtime risk associated with a single point of failure, there is the potential risk of being trapped and unable to avoid price inflation if your sole IT platform is provided by a third-party vendor. Once the right foundations are in place, enterprises need to become more organised and build IT resources through good practice. Examples of this include:
● Governance - how do you ensure the business is aware of and responsive to departmental needs (without departments going off and just doing their own thing)?
● Security/identity/access management - making sure that, as services spread out, the right people have the right level of access. Data leaks can occur through poor basic hygiene and configuration.
● Stepping back and assessing how they're using what is deployed; an example of this is Brandwatch doing front-end visualisation in GCP (Google Cloud Platform), as it had some good assets for the development team, while the backend data was stored in colocation.
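Pulling together the evaluation criteria described earlier, a simple (and entirely illustrative) way to structure the per-workload comparison is to score each platform against the six factors and weight them by what the business prioritises. The platforms, scores and weights below are invented purely for the sake of the sketch.

# Illustrative scores out of 5 for one hypothetical workload
criteria_weights = {
    "performance": 0.25, "reliability": 0.20, "resilience": 0.15,
    "security": 0.15, "cost_efficiency": 0.15, "green_credentials": 0.10,
}

platform_scores = {
    "public_cloud": {"performance": 4, "reliability": 4, "resilience": 4,
                     "security": 4, "cost_efficiency": 2, "green_credentials": 3},
    "private_cloud": {"performance": 4, "reliability": 4, "resilience": 3,
                      "security": 5, "cost_efficiency": 3, "green_credentials": 3},
    "colocation": {"performance": 5, "reliability": 4, "resilience": 3,
                   "security": 4, "cost_efficiency": 4, "green_credentials": 4},
}

def weighted_score(scores, weights):
    # Weighted sum across the six evaluation areas
    return sum(scores[c] * w for c, w in weights.items())

for platform, scores in platform_scores.items():
    print(f"{platform}: {weighted_score(scores, criteria_weights):.2f}")

The numbers matter far less than the exercise: repeating it per workload is what surfaces candidates for repatriation, or for staying in public cloud.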
How can optimisation pitfalls be avoided/mitigated?

In order to minimise mistakes, enterprises should orchestrate across their different business units to overcome the 'one pane of glass' challenge for provisioning and delivery, be aware of and in control of costs, and recognise the different approaches involved. The different potential costs of hyperscale cloud versus running your own versus colocation should also be considered, with the cost of equipment and so on taken into account. Additionally, monitoring and reporting of the end-to-end solution, using the right tools for multicloud/hybrid use, must be factored in. This will ensure accurate and consistent alerting, as well as raising awareness of what is actually being deployed and where, removing assumptions about resilience. Other areas to be aware of are the overlap or expansion of products and services: as each provider continues to expand its product set, integration must be consistent and done at regular intervals to avoid being left behind. Integrating services and applications can also help with silos, but businesses must be careful of non-standardised interfaces to avoid future migration nightmares.

Once hybrid cloud management is optimised, what should CIOs do next?

Whilst CIOs might get close, it is unlikely that they will ever fully optimise their hybrid cloud setup. As with all technology, trends and advancements are happening regularly, so staying up to date is not something businesses can 'fit and forget'. Technologies will continue to evolve, and part of the role of CIOs is to ensure they are not left behind and are tweaking their infrastructure accordingly and frequently. Perfecting cloud services demands a commitment to agility and change. Trends are endemic to the cloud and will continue to evolve at speed as adoption increases. Tracking and unpacking trends will help your enterprise to open doors by leveraging the expertise and knowledge of the industry. As the world continues to embrace cloud services, these opportunities will be essential to sustained growth in 2022 and beyond.

Your holistic cloud security checklist
By Massimo Bandinelli, Aruba Enterprise Marketing Manager

Chances are that your organisation migrated to the cloud to enhance security and reliability, and to reduce the resource burden on your IT staff. But while it's true that cloud enables your organisation to be more efficient, secure and reliable, it doesn't mean you can forget about security. In fact, this common misconception can leave organisations like yours vulnerable to cyber attacks and regulatory scrutiny. Whether you're selecting a public cloud provider, implementing a hybrid cloud solution or building your own private cloud, there's a whole host of security factors to consider. With this in mind, let's take a look at what should be on your cloud security checklist.

Digital measures

Back-ups: No matter which security measures you've put in place to protect your organisation's cloud, the truth is that no measure can guarantee 100% security. That's why back-ups are crucial - ensuring continuity of service and minimising business disruption in the event of a successful cyber attack. When backing up cloud data, it's suggested that organisations adhere to the 3-2-1 model. This means keeping three copies of data on at least two devices, with one copy offsite. It's helpful to have one 'live' back-up, as this updates automatically and can be restored in a matter of minutes when disaster strikes. At the same time, it's important to have a 'cold' back-up - an offline back-up which isn't connected to your live systems, and therefore can't be tampered with by malicious actors.

Encryption: Encryption is one of the most effective measures for securing data stored in the cloud. It involves converting your data into an unreadable format before it's transferred or stored, so it stays unintelligible even if malicious actors gain access to it (a short illustrative sketch follows the data sovereignty point below). In particular, encrypting data when it's 'in flight' is crucial, as this is when it's most vulnerable. This is particularly true for organisations using hybrid cloud solutions, in which data is regularly transferred between various applications and cloud services.

Data sovereignty: Data sovereignty is a legal principle which says that data is subject to the laws of the country in which it's stored. Awareness of this concept is steadily increasing, as more organisations begin using public cloud solutions and public awareness of how organisations collect and store consumer data grows. Data sovereignty is particularly relevant to EU or UK-based organisations who use large-scale public cloud providers with US data centres. If your organisation's data is stored in data centres outside your jurisdiction, it could be subject to local laws and can be accessed by local law enforcement, regardless of where your HQ is. This creates interesting legal tensions. For example, US laws like the CLOUD Act or FISA require US cloud service providers to hand over data to the US authorities if asked, even if the data is stored within the borders of another country. Meanwhile, EU GDPR legislation states that data can only be accessed by law enforcement based on requests arising under EU law - a clear conflict. To protect against current and future legal conflicts, many organisations are turning to sovereign cloud solutions, which are designed to comply with local laws on data privacy, access, control and transfer. In practice, this means only working with local cloud providers, or building your own on-premises private cloud storage.
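Picking up the encryption point above, the sketch below shows the basic at-rest pattern using the widely available Python cryptography library's Fernet recipe (symmetric, authenticated encryption). It is a minimal illustration rather than a recommendation of any particular product, and the file names are hypothetical; in practice the key itself must be stored separately and securely, for example in a key management service.

from cryptography.fernet import Fernet  # pip install cryptography

# Generate a symmetric key once and store it somewhere safer than the data itself.
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"customer-records.csv contents..."   # hypothetical sensitive payload

# Encrypt before the data is uploaded or stored.
token = cipher.encrypt(plaintext)
with open("customer-records.enc", "wb") as f:
    f.write(token)

# Anyone holding only the ciphertext sees an unreadable token; decryption requires the key.
with open("customer-records.enc", "rb") as f:
    restored = cipher.decrypt(f.read())
assert restored == plaintext

Data 'in flight' is normally protected separately by TLS on the connection itself; the sketch above covers the stored copy.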
Identity and access management: Unsurprisingly, poor password hygiene (using simple passwords, or reusing login credentials) is a top cause of cloud data breaches. Remember last year's Colonial Pipeline hack? That happened because a single employee reused login credentials, which were then re-sold on the dark web following a completely unrelated data breach. To secure your organisation's cloud, it's crucial that employees use complex passwords, and that multifactor authentication is enabled to avoid credential sharing (a brief sketch appears at the end of this checklist). For enhanced protection, many organisations are turning to end-to-end identity and access management solutions. These take the responsibility for password management away from employees and enable organisations to centrally manage all employees' digital identities. In addition to implementing robust identity management, it's important to think about who has access to your cloud applications and systems. Not all employees need high-level privileges, and the number of administrators should be kept to an absolute minimum.

Patching: As with all software, it's crucial to apply security updates and patches to your cloud solutions as soon as they become available - before malicious actors can exploit vulnerabilities. If you're working with a public cloud provider, make sure both parties understand who's responsible for updating and patching software and applications. This will help to ensure that this vital work is done quickly, and nothing gets overlooked.

Physical measures

Redundancy: In a nutshell, redundancy is the practice of storing cloud data on multiple drives, in case of system failure. For companies operating in the cloud, ensuring redundancy is just as important as having multiple back-ups in place. But they aren't the same thing! Back-ups are copies of data that can be restored in case of emergency, while redundancy is about ensuring reliability and uptime in the event of drive failure. To explain this, let's take a look at two contrasting examples. Situation one: a hacker deletes important data stored in your organisation's cloud. In this instance, having a fully redundant cloud solution wouldn't get you very far, as the data would simply be deleted across all locations. This is where having back-ups is essential. Situation two: a drive on one of your organisation's cloud servers fails during the working day. Here, having a fully redundant cloud solution comes into its own, enabling you to continue working with no interruption.

Perimeter security: Ensuring the security of your cloud data goes beyond the digital sphere. Increasingly, malicious actors are adding new, physical attack vectors to their already impressive arsenal. This includes the physical delivery of ransomware, where malicious actors gain entry to data centres either through stealth or deception and feed in ransomware that can lie undetected until activation. It's imperative that organisations and data centre providers stay vigilant and implement a range of perimeter security measures to protect data centres, especially those organisations with on-premises facilities that wouldn't otherwise implement the same level of security as a Tier 4 data centre would operate. This means a combination of CCTV, anti-intrusion sensors and bollards, in addition to sophisticated entry control systems which require employees to authenticate themselves using biometrics. These might feel a bit Mission Impossible, but they're becoming commonplace among reputable data centre providers.
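To make the multifactor authentication point from the identity and access management section concrete, here is a minimal sketch of the time-based one-time password (TOTP) mechanism most authenticator apps use, via the pyotp library. The secret is generated on the spot purely for illustration; real deployments enrol the secret into the user's authenticator app and verify codes server-side alongside the password.

import pyotp  # pip install pyotp

# Enrolment: generate a per-user secret and share it with the user's authenticator app
# (usually as a QR code). Store it server-side, encrypted.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Provisioning URI:", totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCorp"))

# Login: the user supplies their password *and* the current 6-digit code.
code_from_user = totp.now()        # here we fake the authenticator app for the demo
print("MFA passed:", totp.verify(code_from_user))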
The bottom line?

There's a lot to consider when it comes to cloud security. But with a common-sense strategy in place and the right partners on board, you'll find it's surprisingly manageable. If you haven't already taken a holistic look at your cloud security, now is the time. After all, adopting a head-in-the-sand approach is just waiting for problems to begin.

Peer Software and Pulsar Security enhance ransomware detection across cloud storage systems
Peer Software has announced the formation of a strategic alliance with Pulsar Security. Through the alliance, Peer Software will leverage Pulsar Security's team of cyber security experts to continuously monitor and analyse emerging and evolving ransomware and malware attack patterns on unstructured data. PeerGFS will utilise these attack patterns to enable an additional layer of cyber security detection and response. These capabilities will enhance the Malicious Event Detection (MED) feature incorporated in PeerGFS.

"Each ransomware and malware attack is encoded to infiltrate and propagate through a storage system in a unique manner that gives it a digital fingerprint," says Duane Laflotte, CTO, Pulsar Security. "By understanding the unique behaviour patterns of ransomware and malware attacks and matching these against the real-time file event streams that PeerGFS collects across the distributed file system, Peer can now empower its customers with an additional layer of fast and efficient cyber security monitoring. We are excited to be working with Peer Software on this unique capability."

As part of the agreement, Pulsar Security will also work with Peer Software to educate and inform enterprise customers on emerging trends in cyber security, and how to harden their systems against attacks through additional services like penetration testing, vulnerability assessments, dark web assessments, phishing simulations, red teaming, and wireless intrusion prevention.

"Ransomware attacks have become so common that almost every storage infrastructure architecture plan now also requires a cyber security discussion," says Jimmy Tam, CEO, Peer Software. "But whereas other storage-based ransomware protection strategies have focused mainly on the recovery from an attack, Peer Software's goal in working with Pulsar Security is to prioritise the early detection of an attack and limiting the spread in order to minimise damage, speed recovery, and keep data continuously available for the business."
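PeerGFS's Malicious Event Detection is proprietary and its actual patterns aren't published here, but the general idea of matching behaviour patterns against a real-time file event stream can be illustrated with a deliberately simplified sketch: flag any account that renames or overwrites an unusually large number of files within a short window, a classic ransomware tell. The thresholds, event fields and alert action are all assumptions made for the example.

from collections import defaultdict, deque
import time

WINDOW_SECONDS = 60       # sliding window length (assumption)
RENAME_THRESHOLD = 200    # renames/overwrites per account per window that trigger an alert

events_by_user = defaultdict(deque)  # account -> timestamps of suspicious file events

def on_file_event(user, action, path, ts=None):
    """Feed every file event from the stream; alert on suspicious bursts."""
    if action not in {"rename", "overwrite"}:
        return
    ts = ts or time.time()
    window = events_by_user[user]
    window.append(ts)
    # Drop events that have fallen out of the sliding window
    while window and ts - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= RENAME_THRESHOLD:
        print(f"ALERT: {user} touched {len(window)} files in {WINDOW_SECONDS}s "
              f"(last: {path}) - possible ransomware activity")

# Example: simulate a burst of renames from one account
for i in range(250):
    on_file_event("svc-backup", "rename", f"/share/docs/file{i}.docx.locked")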

Snowflake launches workload to respond to threats with the Data Cloud
Snowflake has announced the launch of a new cyber security workload that enables cyber security teams to better protect their enterprises with the Data Cloud. Using Snowflake's platform and an extensive ecosystem of partners delivering security capabilities with connected applications, cyber security teams can quickly gain visibility and automation at cloud scale.

Organisations today are faced with a continuously evolving threat landscape, with 55% of security pros reporting that their organisation experienced an incident or breach involving supply chains or third-party providers in the past 12 months, according to Forrester. Current security architectures built around legacy security information and event management (SIEM) systems are not designed to handle the volume and variety of data necessary to stay ahead of cyber threats. With legacy SIEMs imposing restrictive ingest costs, limited retention windows and proprietary query languages, security teams struggle to gain the visibility they need to protect their organisations.

With Snowflake's cyber security workload, customers gain access to the power and elasticity of Snowflake's platform to natively handle structured, semi-structured and unstructured logs. Customers are able to efficiently store years of high-volume data, search with scalable on-demand compute resources, and gain insights using universal languages like SQL and Python (the latter currently in private preview). With Snowflake, organisations can also unify their security data with enterprise data in a single source of truth, enabling contextual data from HR systems or IT asset inventories to inform detections and investigations for higher-fidelity alerts, and running fast queries on massive amounts of data. Teams gain unified visibility across their security posture, eliminating data silos without prohibitive data ingest or retention costs. Beyond threat detection and response, the cyber security workload supports a broad range of use cases including security compliance, cloud security, identity and access, vulnerability management, and more.

"With Snowflake as our security data lake, we are able to simplify our security program architecture and remove data management overhead," says Prabhath Karanth, Sr. Director of Security, Compliance & Trust, TripActions. "Snowflake has been vital in helping us gain a complete picture of our security posture, eliminating blind spots and reducing noise so we can continue to provide user trust where it matters most. Deploying a modern technology stack from Snowflake is a pivotal piece of our cyber security strategy."

Snowflake's rich ecosystem of partners enables best-of-breed security

Snowflake is heavily investing in its extensive ecosystem of partners to transform the security industry and enable customers to choose best-of-breed applications that fit their needs. Snowflake integrates with partners including Hunters, Panther Labs, and Securonix to deliver industry-leading cyber security capabilities to customers with the Data Cloud using connected applications. Snowflake's modern security architecture allows customers to gain control of their data, leverage pre-built content and security capabilities on top of their existing Snowflake environments, and utilise a single copy of data across cyber security use cases. With Snowflake's Data Cloud, tightly integrated connected applications, and data from providers on Snowflake Data Marketplace, Snowflake is pioneering a new standard architecture for security teams looking to achieve their security goals.
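As a purely illustrative sketch of the 'SQL over security data' idea (the connection parameters, database and table below are hypothetical and not taken from Snowflake's announcement), a detection team could run an ordinary query over authentication logs stored in Snowflake using the standard snowflake-connector-python package:

import snowflake.connector  # pip install snowflake-connector-python

# Placeholder credentials - in practice these would come from a secrets manager.
conn = snowflake.connector.connect(
    account="my_account", user="sec_analyst", password="...",
    warehouse="SECURITY_WH", database="SECURITY", schema="LOGS",
)

# Hypothetical table of authentication events; flag accounts with many failures
# in the last 24 hours - a simple brute-force indicator.
query = """
    SELECT user_name, COUNT(*) AS failed_logins
    FROM auth_events
    WHERE success = FALSE
      AND event_time >= DATEADD('hour', -24, CURRENT_TIMESTAMP())
    GROUP BY user_name
    HAVING COUNT(*) > 50
    ORDER BY failed_logins DESC
"""

for user_name, failed_logins in conn.cursor().execute(query):
    print(f"{user_name}: {failed_logins} failed logins in the last 24h")

conn.close()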
Snowflake Ventures, which focuses on investing in companies that help accelerate and augment the growth and adoption of the Snowflake Data Cloud, has already invested in Hunters.ai, Lacework, Panther, and Securonix. These investments have helped drive product alignment to further eliminate security data silos and enable data-driven strategies for joint customers. “Snowflake is leading the security data lake movement, helping defenders bring their data and analytics together in a unified, secure, and scalable data platform,” says Omer Singer, Head of Cybersecurity Strategy, Snowflake. “With Snowflake’s cyber security workload, we further empower security teams in the Data Cloud so that they can collaborate with diverse stakeholders and succeed in their vital mission to protect the enterprise.”

Moving to the cloud is the basis of a good business continuity plan
By Amir Hashmi, CEO and Founder of zsah

A business continuity plan (BCP) is a thorough and complex plan to fight the ever-present and ever-costly risk of downtime, and moving operations to the cloud is the best shortcut to take. A business continuity plan is, broadly speaking, a set of processes and principles to improve resilience and ensure a business can continue functioning. Given the importance of IT to productivity for almost every organisation in the 21st century, downtime - when IT systems are offline - is its antithesis.

Thanks to the rapid adoption of digital tools spurred on by the pandemic and the general move to online we have seen throughout the world, there is a tremendous amount of risk out there for businesses with online assets, from cyber attacks and ransomware to natural disasters and power outages. However, using cloud-based IT assets such as remote desktops, SaaS applications and cloud storage of data can be a shortcut to protecting their continuity - and therefore the continuity of your business.

According to Veeam's 2021 Data Protection Report, the average cost of downtime is $84,650 per hour - that's $1,410 per minute. Naturally, this figure is skewed by larger organisations reporting higher sums. Still, small and medium businesses are increasingly impacted, as they are seen as easier targets - and they have far less capital to absorb the blow. Although downtime has an infinite number of causes, from natural disasters to cyber attacks, two factors remain consistent: it is costly for modern businesses and often preventable. The key to this prevention is a good business continuity plan.
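To put the Veeam figure in perspective, a quick back-of-the-envelope calculation (illustrative only; your own hourly cost will differ) shows how availability targets translate into money over a year of roughly 8,766 hours:

COST_PER_HOUR = 84_650          # Veeam 2021 average cost of downtime (USD)
HOURS_PER_YEAR = 8_766          # 365.25 days

for availability in (0.999, 0.9999):
    downtime_hours = HOURS_PER_YEAR * (1 - availability)
    print(f"{availability:.2%} availability -> "
          f"{downtime_hours:.1f} h of downtime/year -> "
          f"~${downtime_hours * COST_PER_HOUR:,.0f}")

# 99.90% availability -> 8.8 h of downtime/year -> ~$742,042
# 99.99% availability -> 0.9 h of downtime/year -> ~$74,204

The gap between those two lines is, in effect, the budget argument for the high availability, continuous operations and disaster recovery measures discussed next.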
If we disregard the parts of a BCP that deal with the physical security of assets and focus on the digital continuity of IT systems, we can say that a good BCP focuses on three things, which according to IBM are:
• High availability: the systems a business provides so that the enterprise can still access its applications and operate even if it experiences local failures in areas such as IT, processes and physical facilities.
• Continuous operations: the systems a business has in place to keep running smoothly during disruption or maintenance, whether planned or otherwise.
• Disaster recovery: the systems a business has in place to recover its data centre at another location safely and securely if a significant event leaves the current site damaged beyond repair or inoperable.

Of course, this is not a universally prescriptive solution. As businesses have varied sizes and needs, one size never fits all. However, many of these essential issues are automatically covered if enterprises move storage, desktops and digital tools to the cloud, rather than store and operate them from on-site servers or even personal devices.

Firstly, cloud providers automatically encrypt and protect your information through extensive cyber security measures, and often duplicate it across multiple sites, areas or even time zones to protect it against physical or cyber damage. Doing this yourself is a costly and time-consuming task with huge risks if not done correctly. Here, you benefit from economies of scale, as providers with deep pockets develop and invest in the most thorough, innovative and automated protection measures.

This means that your data, your applications and therefore the continuity of your business are protected from all but the most apocalyptic and unforeseen of circumstances, including data loss, power outages, ransomware attacks and many other causes of downtime. You are now (nearly) continuously operable and, just as importantly, operable from anywhere. This, in turn, makes hybrid working or working from home a far easier and safer experience for new and existing members of your team, with cyber security measures and encryption embedded in your teams' operating systems and tools, no matter what device they use. As the ability to work in a hybrid way is now an expectation of staff across most of the modern, industrialised world, making this process more accessible is a wise investment to attract and retain future employees.

It's not a magic cure, but it's a start

The cloud is the obvious answer for a company that requires always-accessible and always-operational data storage and applications. This is true whether you use public cloud resources or a dedicated, off-premises private cloud server operated by a dedicated IT team on your behalf. The cloud is nothing new, and it certainly is not a single-point cure for IT pain points. Still, it is undoubtedly one of the most transformational changes you can make to aid both security and operational efficiency. However, if you want to avoid unmonitored cloud usage causing a surge in costs, make sure you have the resources to dedicate to its use. Better yet, outsource to experts: an IT managed service provider will ensure that your move to the cloud, and its continued use, is managed effectively.


