
Features


Scope 3 emissions: the time for bold leadership is now
By David Craig, Chief Executive Officer, Iceotope

Technology offers us a path forward to meet the demands of the current climate challenges. The real question becomes: do we have the leadership to embrace it?

Lately, it seems as if every time we scroll through our news feeds, we are confronted with the realities of climate change. Europe is in the midst of its worst energy crisis in decades, a crisis that began even before the war in Ukraine exacerbated the situation. The Intergovernmental Panel on Climate Change (IPCC) is regularly issuing reports on the severe impact of climate change on global society. Weather patterns around the world are becoming more extreme, resulting in fires, floods and more powerful hurricanes than ever before.

The best corporations don’t just want to be seen as doing the right thing; they genuinely are doing the right thing when it comes to climate change. Businesses and industries have begun to track their carbon emissions through the Greenhouse Gas (GHG) Protocol, a widely used international standardised framework for measuring greenhouse gas emissions. The protocol divides emissions into three scopes, commonly defined as:

● Scope 1 - direct emissions from owned or controlled sources.
● Scope 2 - indirect emissions from the generation of purchased electricity, steam, heating and cooling consumed by the reporting company.
● Scope 3 - all other indirect emissions that occur in a company’s value chain.

Today, the scopes are voluntary. Scopes 1 and 2 are relatively easy for a company to own and track. Scope 3 requires actually understanding the carbon lifecycle of your entire footprint from cradle to grave. The corporate value chain measures Scope 3 emissions across 15 different categories, from goods and services to transportation to business travel to end-of-life product disposal. For many companies, this accounts for more than 70% of their carbon footprint. Companies that are signing up to Scope 1 and 2 will absolutely insist that their supply chains do so as well.

For data centre users, this can be tricky, particularly when it comes to the cloud, where GHG emissions can be harder to calculate. When workloads are moved to the cloud, an organisation is no longer generating direct emissions or purchasing energy, which are covered under Scope 1 and Scope 2 respectively. Those emissions are now part of Scope 3. Add to that, a significant proportion of carbon emissions across computing platforms actually comes from hardware manufacturing as well as operational system use. Meta recently shared a meta-analysis of sustainability reports and lifecycle analyses from researchers identifying this trend. Despite improvements in software and hardware efficiencies, a mobile device, for example, may need a lifespan three years longer to amortise the carbon footprint created by its manufacture.

The good news is that there are solutions data centre operators can incorporate that immediately help to reduce carbon emissions. One is precision immersion liquid cooling. Liquid cooling can reduce infrastructure energy use by 40%, slash water consumption by more than 90% and improve pPUE to 1.03, further enhanced by server energy reductions often of 10% and more. Alternative forms of power generation, such as hydrogen fuel cells, are quickly becoming economically viable alternatives to UPS and diesel generators.
Microsoft has been testing and implementing the technology and sees long-term benefits beyond reducing carbon emissions.

What all of this requires, however, is the courage to lead. Many of these ‘low hanging fruits’ are new ways of doing business. They mean moving away from a well-known technology solution to one that has greater perceived risk. Now is not the time for incrementalism. We are facing a global climate crisis and need bold leadership to make hard choices and take swift action.

There is a real competitive advantage opportunity for companies willing to adopt new technologies and find a new way to do business. A business doesn’t grow by cutting back. A Gartner study from 2019 shows what differentiates winning companies in times of change, stating: “first, progressive business leaders prepare to navigate turns even before the turns are clearly in view. Second, their mindset and actions before, during and after the turns separate their organisation from the pack and determine its long-term destiny.” Those that invest when change is happening are the companies that accelerate out of the downturn.

The Scope 1, 2 and 3 protocols create a language, a framework and an expectation. In other words, they create an opportunity for businesses to navigate one of the biggest technology challenges we will face this decade. Governments, particularly in the UK and Europe, are committed to net zero initiatives and to ensuring they are met. The public is embracing the seriousness of the crisis, and the youth of today will hold us all accountable. It’s no longer about optics, but rather our competitive ability to survive and keep our planet healthy. Technology will play a significant role in this. Those who become the heroes of this story will be those who demonstrate bold leadership and embrace the changes that need to come.
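To make the GHG Protocol scope framework described earlier in this article more concrete, here is a minimal, hypothetical accounting sketch in Python. The categories and tonnage figures are invented for illustration and are not drawn from any real inventory; they simply show how Scope 3 line items can come to dominate a footprint.

```python
# Hypothetical GHG Protocol scope accounting sketch.
# Categories and tonnage figures are illustrative only.

from collections import defaultdict

# Each line item: (description, scope, tonnes CO2e per year)
inventory = [
    ("On-site diesel generators",        1, 120.0),
    ("Purchased electricity",            2, 950.0),
    ("Purchased IT hardware (embodied)", 3, 1800.0),
    ("Cloud services",                   3, 640.0),
    ("Business travel",                  3, 210.0),
    ("End-of-life equipment disposal",   3, 75.0),
]

def totals_by_scope(items):
    """Sum tonnes CO2e per GHG Protocol scope."""
    totals = defaultdict(float)
    for _, scope, tonnes in items:
        totals[scope] += tonnes
    return dict(totals)

if __name__ == "__main__":
    totals = totals_by_scope(inventory)
    grand_total = sum(totals.values())
    for scope in sorted(totals):
        share = 100 * totals[scope] / grand_total
        print(f"Scope {scope}: {totals[scope]:8.1f} tCO2e ({share:4.1f}%)")
    # With figures like these, Scope 3 accounts for well over 70% of the
    # total, mirroring the pattern described in the article.
```

With these illustrative numbers, Scope 3 comes out at roughly 72% of the total, which is the kind of distribution the article describes for many companies.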

Identifying and evaluating the real embodied carbon cost of a data centre
By Ed Ansett, Founder and Chairman of i3 Solutions Group

Global emissions from new-build projects are at record levels. Consequently, construction is moving further away from, not closer to, net zero buildings. With the current focus very much on the carbon footprint of facility operations, a new white paper presents the case for taking a ‘Whole Life Carbon’ approach when assessing data centre carbon impact.

According to the United Nations Environment Programme (UNEP), the carbon cost of building is rising. The UNEP Global Alliance for Buildings and Construction (GlobalABC) global status report highlighted two concerning trends: firstly, that ‘CO2 emissions from the building sector are the highest ever recorded…’ and, secondly, that the ‘new GlobalABC tracker finds the sector is losing momentum toward decarbonisation.’

Embodied carbon costs are mainly incurred at the construction stage of any building project. However, these costs go further than simply the carbon price of materials - including concrete and steel - and their use. And while it is true that not all buildings are the same in embodied carbon terms, in almost all cases these emissions, created at the beginning of the building lifecycle, simply cannot be reduced over time. Since this is often the case, and especially so for data centres, it is incumbent on the sector to consider the best ways to identify, consider and evaluate the real embodied carbon cost of infrastructure-dense and energy-intensive buildings.

Technical environments and energy-intensive buildings such as data centres differ greatly from other forms of commercial real estate, such as offices, warehouses and retail developments. Focusing on the data centre, let’s take as an example a new-build 50MW facility. It is clear that, in order to meet its design objective, it is going to require a great deal more power and cooling infrastructure plant and equipment to function in comparison with other forms of buildings.

Embodied carbon in data centres

Embodied carbon in a data centre comprises all those emissions not attributed to operations and the use of energy and water in its day-to-day running. It’s a long list which includes emissions associated with resource extraction, manufacturing and transportation, as well as those created during the installation of materials and components used to construct the built environment. Embodied carbon also includes the lifecycle emissions from the ongoing use of all of the above, from maintenance, repair and replacements to end-of-life activities such as deconstruction and demolition, transportation, waste processing and disposal. These lifecycle emissions must be considered when accounting for the total carbon cost.

The complexity of mission critical facilities makes it more important than ever to have a comprehensive process to consider and address all sources of embodied carbon emissions early in design and equipment procurement. Only by early and detailed assessment can operators identify the best actions to contribute to immediate embodied carbon reductions.

Calculating whole life carbon

Boundaries used to measure the embodied carbon and emissions of a building at different points in the construction and operating lifecycle are Cradle to Gate, Cradle to Site, Cradle to Use and Cradle to Grave, where ‘cradle’ refers to the earth or ground from which raw materials are extracted.
For data centres, these higher levels of infrastructure are additional, equipment-related considerations of real importance, because in embodied carbon terms they will be categorised under Scope 3 of the GHG Protocol Standards - also referred to as value-chain emissions. Much of the Scope 3 total will be produced by upstream activities, which include materials for construction. Especially important for data centres, however, is that Scope 3 also includes the carbon cost of ongoing maintenance and replacement of the facility plant and equipment.

That brings us to whole-of-life calculations, which combine embodied and operational carbon. Combining embodied and operational emissions to analyse the entire lifecycle of a building throughout its useful life and beyond is the Whole Life Carbon approach. It ensures that operational emissions, together with the embodied carbon of materials, components and construction activities, are calculated and available to allow comparisons between different design and construction approaches.

Data centre sustainability is more than simply operational efficiency

The great efforts to improve efficiency and reduce energy use - as measured through improvements in PUE - have slowed operational carbon emissions even as demand and the scale of facilities have surged. But reductions in the operational energy of the facility are measured over time and are not accounted for until five, 10 or 30 years into the future. Embodied carbon, however, is mostly spent up front as the building is constructed; there is, therefore, a compelling reason to include embodied carbon within all analyses and data centre design decisions. A ‘Whole Life’ carbon approach that considers both embodied and operational emissions provides the opportunity to contribute positively to global goals to reduce greenhouse gas emissions - and will save financial costs.
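As a rough, hypothetical illustration of the Whole Life Carbon approach described above, the sketch below simply combines up-front embodied carbon, recurring embodied carbon from plant replacement, and operational carbon over a facility's service life. Every figure and category name is an invented placeholder, not an estimate for any real building.

```python
# Hypothetical Whole Life Carbon sketch for a data centre.
# All quantities are illustrative placeholders, not real estimates.

def whole_life_carbon(
    upfront_embodied_t,     # cradle-to-site construction emissions (tCO2e)
    annual_replacement_t,   # embodied carbon of plant/equipment replaced each year
    annual_operational_t,   # operational emissions from energy and water use
    service_life_years,
):
    """Combine embodied and operational carbon over the building's life."""
    recurring_embodied = annual_replacement_t * service_life_years
    operational = annual_operational_t * service_life_years
    embodied_total = upfront_embodied_t + recurring_embodied
    return {
        "embodied": embodied_total,
        "operational": operational,
        "whole_life": embodied_total + operational,
    }

if __name__ == "__main__":
    result = whole_life_carbon(
        upfront_embodied_t=60_000,
        annual_replacement_t=1_500,
        annual_operational_t=4_000,
        service_life_years=20,
    )
    for key, value in result.items():
        print(f"{key:>11}: {value:,.0f} tCO2e")
    # Note how the up-front embodied figure is already 'spent' on day one,
    # while operational reductions only accrue over the 20-year life.
```

The point of the sketch is the one the article makes: embodied carbon is locked in at the start, so it has to be assessed alongside, not after, the operational figures.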

Delivering a resilient and sustainable electricity supply to data centres
By Antony White, Client Delivery Manager, UK Power Networks Services

As the UK’s data centre market continues to grow and mature, forecasters are predicting 36% revenue growth and a 29% uplift in power demand by the end of 2025. While this may not match the growth of some smaller European markets, the increase in the UK is still significant. That means the challenge for the UK’s electrical infrastructure - to meet the significant power demand of a data centre with a truly resilient supply - remains. Sustainability pressures add complexity for data centres, which should seek alternative energy from renewable sources and invest in new energy technologies, including on-site renewable generation and battery storage.

Meeting capacity requirements with a high-quality network connection

Data centres need to be connected to the local high-voltage electricity network - generally the 132kV network - which requires complex electrical infrastructure solutions combined with experienced asset management. In most cases, connections also need to be fed from at least two sources to maintain supply in the event of power interruptions on the local network. Many of the UK’s data centres are located in the south-east of England, where there has historically been relatively easy access to high-voltage energy infrastructure from existing networks. Having enough capacity on the local electricity network to support the day-to-day running requirements and other energy-intensive requirements, such as air-conditioning and cooling, is a key factor in data centre location. Other important factors include having enough physical space and nearby connectivity to data networks and other utilities.

UK Power Networks Services understands the requirements for these connections and the time and expertise needed to design and build them. Current supply chain headaches can result in long lead times for equipment, so detailed knowledge of the market and the equipment required, along with experience in high-voltage design and build connections projects, is essential for any data centre project. Due to global supply constraints, equipment such as 132kV switchboards and transformers can take anywhere between 12 and 18 months to procure once a detailed design is agreed. The challenge for new data centres is to understand exactly what equipment will be needed to fulfil the capacity requirements of a site, to know the market and what is currently available, and then to engage in procurement activities early enough to meet the project’s timeline.

Maintaining a resilient energy supply

Customers of data centres also need assurance that their connectivity will be available 24/7. There must never be the risk of speed issues or service interruptions, let alone a prolonged impact to service. Maintaining a resilient energy supply is therefore crucial. Even where an uninterruptible power supply (UPS) to a data centre is achieved from multiple sources, power outages caused by equipment failure are not out of the realm of possibility. Local backup generators may be able to keep some operations running; however, the demand of a sizeable data centre is usually too great for this to be a viable option, especially where sustainability and low carbon requirements prevent the use of diesel backup. This is where UK Power Networks Services' experience and expertise as an independent connections provider is key.
Data centres need a connections partner that understands the local electricity network, can design a fit-for-purpose connection, is experienced in high-voltage engineering, understands the equipment required, and has experience in equipment selection along with knowledge of the market and what is available. Resilience is not just down to high-quality equipment and expertise in design and build projects. Managing electrical assets is a specialist subject that works best with a long-term perspective. Data centres need to know when to replace equipment to optimise performance, what technological innovations to integrate, and how and when to dispose of obsolete equipment. Ongoing operations, maintenance and asset management of that equipment will be required to keep the electricity infrastructure operating effectively and continually in service.

Powering data centres sustainably

As electricity demand for data centres is very high, there is pressure to ensure they are powered as sustainably as possible. To satisfy local planning and social responsibility requirements, the first step should be to ensure the power purchased is from green energy sources. There are other ways data centres can increase their sustainability credentials while also reducing the impact of rising energy costs of sourcing all power supplies through the market. Renewable generation opportunities are available due to the large footprint that data centres occupy. These large areas may make solar PV viable - whether on the roof of buildings or in the surrounding land. Some sites may even have space for wind generation. Other opportunities are emerging as technology advances, such as providing electric vehicle charging on site for staff and visitors or integrating battery storage into the local network. Battery storage could be used as an alternative to diesel backup generation and, as technology develops, may play a bigger part in managing the ever-increasing demand of the data centre.

Making your project a reality

When choosing a partner to power a data centre, considerations must include extensive high-voltage experience, a track record of safety, equipment procurement experience, and a full end-to-end solution encompassing design, build, operations and maintenance. UK Power Networks Services has this experience and can also provide capital finance options.
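As a rough, hypothetical illustration of how the roof-mounted solar PV mentioned above might be sized, the sketch below uses the standard simplified yield formula (area × irradiance × panel efficiency × performance ratio). Every figure is an assumption for illustration, not a design value.

```python
# Rough solar PV yield estimate (illustrative assumptions only).

def annual_pv_yield_kwh(area_m2, irradiance_kwh_per_m2_yr,
                        panel_efficiency, performance_ratio):
    """Simplified yield estimate: E = A * H * r * PR."""
    return area_m2 * irradiance_kwh_per_m2_yr * panel_efficiency * performance_ratio

if __name__ == "__main__":
    # Assumed: 10,000 m2 of roof, ~1,000 kWh/m2/yr of irradiation,
    # 20% efficient panels and a 0.8 performance ratio.
    yield_kwh = annual_pv_yield_kwh(10_000, 1_000, 0.20, 0.80)
    print(f"Estimated annual generation: {yield_kwh:,.0f} kWh (~{yield_kwh/1e6:.1f} GWh)")
```

Even a generous roof area yields only a modest fraction of a large facility's annual consumption, which is consistent with the article's framing of on-site renewables as one of several complementary measures rather than a replacement for the grid connection.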

Hybrid cloud: how enterprises can build resources to suit their own needs
By Jack Bedell-Pearce, CEO and Co-Founder of 4D Data Centres

With so many issues that can cause inefficiency, IT leaders need to ensure the right foundations are in place in order to optimise the management of hybrid cloud. Every environment is different and there is no one-size-fits-all cloud infrastructure. So how can organisations prepare and build resources that work for them?

Why is optimising hybrid cloud management important?

Hybrid cloud is more than just sharing workloads between the two major hyperscale cloud providers, Azure and AWS. It also encompasses other infrastructure environments such as on-premises servers, private clouds and servers in colocation. No one platform is necessarily better than another, but it is important to regularly evaluate them individually to make sure they are ticking the right boxes. Five essential areas on which platforms need to be monitored are performance (including compute, latency, bandwidth and so on), reliability, resilience, security and cost efficiency. Green credentials have recently become a sixth important factor, with companies realising that colocation data centres and some hyperscalers are able to offer significant improvements in cooling efficiency and, in the case of colocation, high-density cooling for High Performance Computing (HPC) systems.

Not all platforms are equal when evaluated against these criteria, so it is important for companies to consider what to prioritise in their business when matching their workloads with the relevant platforms. Public cloud is very good at providing entry-level services and scaling quickly for fast-growing businesses, but for more mature companies (especially those with readily available capital and potentially legacy systems), a blend of public cloud, private cloud and colocation may be a more cost-efficient and reliable option. This is demonstrated in a whitepaper by Andreessen Horowitz, which shows the financial cost to enterprise companies of miscalculating the mix, and the significant cost savings discovered by repatriating servers back into data centres.

The right foundations and implementing good practice

In the same way you wouldn’t advise someone to put all their savings into one asset class, large companies should avoid being overly dependent on a single platform. Aside from the obvious downtime risk associated with a single point of failure, there is the potential risk of being trapped and unable to avoid price inflation if your sole IT platform is provided by a third-party vendor. Once the right foundations are in place, enterprises need to become more organised and build IT resources through good practice. Examples of this include:

● Governance - how do you ensure the business is aware of, and responsive to, departmental needs (without departments going off and just doing their own thing)?
● Security/identity/access management - making sure that as services spread out, the right people have the right level of access. Data leaks can occur through poor basic hygiene and configuration.
● Stepping back and assessing how they’re using what is deployed; an example of this is Brandwatch doing front-end visualisation in GCP (Google Cloud Platform), as it had some good assets for their development team, while the back-end data was stored in colocation.

How can optimisation pitfalls be avoided/mitigated?
In order to minimise mistakes, enterprises should orchestrate across different businesses to overcome the ‘one pane of glass’ challenge for provisioning and delivery, be aware of and in control of costs, and recognise different approaches. The different potential costs of hyperscale cloud versus running your own infrastructure versus colocation should also be considered, with the cost of the equipment and so on taken into account. Additionally, monitoring and reporting of the end-to-end solution, using the right tools for multicloud/hybrid use, must be factored in. This will ensure accurate and consistent alerting, as well as raising awareness of what is actually being deployed and where, removing assumptions about resilience. Other areas to be aware of are the overlap or expansion of products and services: as each provider continues to expand its product set, integration must be consistent and done at regular intervals to avoid being left behind. Integrating services and applications can also help with silos, but businesses must be careful of non-standardised interfaces to avoid future migration nightmares.

Once hybrid cloud management is optimised, what should CIOs do next?

Whilst CIOs might get close, it is unlikely that they will ever fully optimise their hybrid cloud setup. As with all technology, trends and advancements are happening regularly, so staying up to date is not something businesses can ‘fit and forget’. Technologies will continue to evolve, and part of the role of CIOs is to ensure they are not left behind and are tweaking their infrastructure accordingly and frequently. Perfecting cloud services demands a commitment to agility and change. Trends are endemic to the cloud and will continue to evolve at speed as adoption increases. Tracking and unpacking trends will help your enterprise to open doors by leveraging the expertise and knowledge of the industry. As the world continues to embrace cloud services, these opportunities will be essential to sustained growth in 2022 and beyond.
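As a purely hypothetical sketch of the kind of per-workload platform evaluation described above, the snippet below scores candidate platforms against the six criteria (performance, reliability, resilience, security, cost efficiency, green credentials) with a simple weighted sum. The weights and scores are invented and would differ for every organisation and workload.

```python
# Hypothetical weighted scoring of hosting platforms for one workload.
# All weights and 1-5 scores are illustrative, not benchmarks.

CRITERIA_WEIGHTS = {
    "performance": 0.25,
    "reliability": 0.20,
    "resilience": 0.15,
    "security": 0.15,
    "cost_efficiency": 0.15,
    "green_credentials": 0.10,
}

platform_scores = {
    "public_cloud":  {"performance": 4, "reliability": 4, "resilience": 5,
                      "security": 4, "cost_efficiency": 2, "green_credentials": 4},
    "private_cloud": {"performance": 4, "reliability": 4, "resilience": 3,
                      "security": 5, "cost_efficiency": 3, "green_credentials": 3},
    "colocation":    {"performance": 5, "reliability": 4, "resilience": 4,
                      "security": 4, "cost_efficiency": 4, "green_credentials": 5},
}

def weighted_score(scores):
    """Combine per-criterion scores into a single weighted figure."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

if __name__ == "__main__":
    ranking = sorted(platform_scores.items(),
                     key=lambda kv: weighted_score(kv[1]), reverse=True)
    for name, scores in ranking:
        print(f"{name:>14}: {weighted_score(scores):.2f}")
```

Run per workload rather than per organisation, this kind of scoring is one way to make the "match workloads with the relevant platforms" advice repeatable rather than anecdotal.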

Your holistic cloud security checklist
By Massimo Bandinelli, Aruba Enterprise Marketing Manager

Chances are that your organisation migrated to the cloud to enhance security and reliability and to reduce the resource burden on your IT staff. But while it’s true that cloud enables your organisation to be more efficient, secure and reliable, it doesn’t mean you can forget about security. In fact, this common misconception can leave organisations like yours vulnerable to cyber attacks and regulatory scrutiny. Whether you’re selecting a public cloud provider, implementing a hybrid cloud solution or building your own private cloud, there is a whole host of security factors to consider. With this in mind, let’s take a look at what should be on your cloud security checklist.

Digital measures

Back-ups: No matter which security measures you’ve put in place to protect your organisation’s cloud, the truth is that no measure can guarantee 100% security. That’s why back-ups are crucial - ensuring continuity of service and minimising business disruption in the event of a successful cyber attack. When backing up cloud data, organisations should adhere to the 3-2-1 model. This means keeping three copies of data on at least two devices, with one copy offsite. It’s helpful to have one ‘live’ back-up, as this updates automatically and can be restored in a matter of minutes when disaster strikes. At the same time, it’s important to have a ‘cold’ back-up - an offline back-up which isn’t connected to your live systems and therefore can’t be tampered with by malicious actors.

Encryption: Encryption is one of the most effective measures for securing data stored in the cloud. It involves converting your data into an unreadable format before it’s transferred or stored, so it stays unintelligible even if malicious actors gain access to it. In particular, encrypting data while it’s ‘in flight’ is crucial, as this is when it’s at its most vulnerable. This is particularly true for organisations using hybrid cloud solutions, in which data is regularly transferred between various applications and cloud services.

Data sovereignty: Data sovereignty is a legal principle which says that data is subject to the laws of the country in which it’s stored. Awareness of this concept is steadily increasing, as more organisations begin using public cloud solutions and public awareness of how organisations collect and store consumer data grows. Data sovereignty is particularly relevant to EU or UK-based organisations that use large-scale public cloud providers with US data centres. If your organisation’s data is stored in data centres outside your jurisdiction, it could be subject to local laws and can be accessed by local law enforcement, regardless of where your HQ is. This creates interesting legal tensions. For example, US laws like the CLOUD Act or FISA require US cloud service providers to hand over data to the US authorities if asked, even if the data is stored within the borders of another country. Meanwhile, EU GDPR legislation states that data can only be accessed by law enforcement based on requests arising under EU law - a clear conflict. To protect against current and future legal conflicts, many organisations are turning to sovereign cloud solutions, which are designed to comply with local laws on data privacy, access, control and transfer. In practice, this means only working with local cloud providers, or building your own on-premises private cloud storage.
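To illustrate the encryption item above in the simplest possible terms, here is a minimal sketch using the widely used Python cryptography package to encrypt data before it is stored or transferred. Key management (arguably the harder problem) is deliberately out of scope; in practice a managed key service or your provider's native encryption would normally handle it.

```python
# Minimal sketch: encrypt data before it leaves your control.
# Requires the 'cryptography' package (pip install cryptography).

from cryptography.fernet import Fernet

# In practice the key would come from a key management service,
# never be hard-coded, and would be rotated regularly.
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"customer-record: example payload"
ciphertext = cipher.encrypt(plaintext)   # safe to store or transmit
restored = cipher.decrypt(ciphertext)    # only possible with the key

assert restored == plaintext
print("ciphertext sample:", ciphertext[:40], b"...")
```

The same principle applies whether the data is at rest in object storage or in flight between a hybrid cloud's components: if it is encrypted before it moves, an interception yields nothing usable.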
Identity and access management: Unsurprisingly, poor password hygiene (using simple passwords, or reusing login credentials) is a top cause of cloud data breaches. Remember last year’s Colonial Pipeline hack? That happened because a single employee reused login credentials, which were then re-sold on the dark web following a completely unrelated data breach. To secure your organisation’s cloud, it’s crucial that employees use complex passwords and that multifactor authentication is enabled to avoid credential sharing. For enhanced protection, many organisations are turning to end-to-end identity and access management solutions. These take the responsibility for password management away from employees and enable organisations to centrally manage all employees’ digital identities. In addition to implementing robust identity management, it’s important to think about who has access to your cloud applications and systems. Not all employees need high-level privileges, and the number of administrators should be kept to an absolute minimum.

Patching: As with all software, it's crucial to apply security updates and patches to your cloud solutions as soon as they become available - before malicious actors can exploit vulnerabilities. If you’re working with a public cloud provider, make sure both parties understand who’s responsible for updating and patching software and applications. This will help to ensure that this vital work is done quickly and nothing gets overlooked.

Physical measures

Redundancy: In a nutshell, redundancy is the practice of storing cloud data on multiple drives in case of system failure. For companies operating in the cloud, ensuring redundancy is just as important as having multiple back-ups in place. But they aren’t the same thing! Back-ups are copies of data that can be restored in case of emergency, while redundancy is about ensuring reliability and uptime in the event of drive failure. To explain this, let’s take a look at two contrasting examples. Situation one: a hacker deletes important data stored in your organisation’s cloud. In this instance, having a fully redundant cloud solution wouldn’t get you very far, as the data would simply be deleted across all locations. This is where having back-ups is essential. Situation two: a drive on one of your organisation’s cloud servers fails during the working day. Here, having a fully redundant cloud solution comes into its own, enabling you to continue working with no interruption.

Perimeter security: Ensuring the security of your cloud data goes beyond the digital sphere. Increasingly, malicious actors are adding new, physical attack vectors to their already impressive arsenal. This includes the physical delivery of ransomware, where malicious actors gain entry to data centres either through stealth or deception and feed in ransomware that can lie undetected until activation. It’s imperative that organisations and data centre providers stay vigilant and implement a range of perimeter security measures to protect data centres, especially those organisations with on-premise facilities that wouldn’t otherwise implement the same level of security as a Tier 4 data centre would operate. This means a combination of CCTV, anti-intrusion sensors and bollards, in addition to sophisticated entry control systems which require employees to authenticate themselves using biometrics. These might feel a bit Mission Impossible, but they’re becoming commonplace among reputable data centre providers.

The bottom line?
There’s a lot to consider when it comes to cloud security. But with a common-sense strategy in place and the right partners on board, you’ll find it’s surprisingly manageable. If you haven’t already taken a holistic look at your cloud security, now is the time. After all, adopting a head-in-the-sand approach is just waiting for problems to begin.

Peer Software and Pulsar Security enhance ransomware detection across cloud storage systems
Peer Software has announced the formation of a strategic alliance with Pulsar Security. Through the alliance, Peer Software will leverage Pulsar Security’s team of cyber security experts to continuously monitor and analyse emerging and evolving ransomware and malware attack patterns on unstructured data. PeerGFS will utilise these attack patterns to enable an additional layer of cyber security detection and response. These capabilities will enhance the Malicious Event Detection (MED) feature incorporated in PeerGFS. “Each ransomware and malware attack is encoded to infiltrate and propagate through a storage system in a unique manner that gives it a digital fingerprint,” says Duane Laflotte, CTO, Pulsar Security. “By understanding the unique behaviour patterns of ransomware and malware attacks and matching these against the real-time file event streams that PeerGFS collects across the distributed file system, Peer can now empower its customers with an additional layer of fast and efficient cyber security monitoring. We are excited to be working with Peer Software on this unique capability.” As part of the agreement, Pulsar Security will also work with Peer Software to educate and inform enterprise customers on emerging trends in cyber security, and how to harden their systems against attacks through additional services like penetration testing, vulnerability assessments, dark web assessments, phishing simulations, red teaming, and wireless intrusion prevention. “Ransomware attacks have become so common that almost every storage infrastructure architecture plan now also requires a cyber security discussion,” says Jimmy Tam, CEO, Peer Software. “But whereas other storage-based ransomware protection strategies have focused mainly on the recovery from an attack, Peer Software’s goal in working with Pulsar Security is to prioritise the early detection of an attack and limiting the spread in order to minimise damage, speed recovery, and keep data continuously available for the business.”
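Peer Software's MED feature works on the real-time file event streams inside PeerGFS, but as a loose, hypothetical illustration of the general idea - flagging bursts of file modifications that look like mass encryption - the following sketch uses the open source Python watchdog library. The threshold, window and path are arbitrary assumptions, and this is in no way Peer's or Pulsar Security's actual detection logic.

```python
# Hypothetical sketch: flag bursts of file modifications that could
# indicate ransomware-style mass encryption. Not PeerGFS/MED code.
# Requires the 'watchdog' package (pip install watchdog).

import time
from collections import deque
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

WINDOW_SECONDS = 10      # sliding window length (arbitrary)
ALERT_THRESHOLD = 200    # modifications per window considered suspicious

class BurstDetector(FileSystemEventHandler):
    def __init__(self):
        self.events = deque()  # timestamps of recent modification events

    def on_modified(self, event):
        if event.is_directory:
            return
        now = time.time()
        self.events.append(now)
        # Drop events that have fallen out of the sliding window.
        while self.events and now - self.events[0] > WINDOW_SECONDS:
            self.events.popleft()
        if len(self.events) > ALERT_THRESHOLD:
            print(f"ALERT: {len(self.events)} file modifications in "
                  f"{WINDOW_SECONDS}s - possible ransomware activity")

if __name__ == "__main__":
    observer = Observer()
    # Path is illustrative; point at the share you want to watch.
    observer.schedule(BurstDetector(), path="/data/shared", recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    finally:
        observer.stop()
        observer.join()
```

Real detection, as described in the announcement, matches far richer behavioural fingerprints than a simple event count, but the sketch shows why a live file event stream is such useful raw material for early detection.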

Snowflake launches workload to respond to threats with the Data Cloud
Snowflake announced the launch of a new cyber security workload that enables cyber security teams to better protect their enterprises with the Data Cloud. Using Snowflake’s platform and an extensive ecosystem of partners delivering security capabilities with connected applications, cyber security teams can quickly gain visibility and automation at cloud scale. Organisations today are faced with a continuously evolving threat landscape, with 55% of security pros reporting that their organisation experienced an incident or breach involving supply chains or third-party providers in the past 12 months, according to Forrester. Current security architectures built around legacy security information and event management (SIEM) systems are not designed to handle the volume and variety of data necessary to stay ahead of cyber threats. With legacy SIEMs imposing restrictive ingest costs, limited retention windows and proprietary query languages, security teams struggle to gain the visibility they need to protect their organisations.

With Snowflake’s cyber security workload, customers gain access to the power and elasticity of Snowflake’s platform to natively handle structured, semi-structured, and unstructured logs. Customers are able to efficiently store years of high-volume data, search with scalable on-demand compute resources, and gain insights using universal languages like SQL and Python (currently in private preview). With Snowflake, organisations can also unify their security data with enterprise data in a single source of truth, enabling contextual data from HR systems or IT asset inventories to inform detections and investigations for higher-fidelity alerts, and running fast queries on massive amounts of data. Teams gain unified visibility across their security posture, eliminating data silos without prohibitive data ingest or retention costs. Beyond threat detection and response, the cyber security workload supports a broad range of use cases including security compliance, cloud security, identity and access, vulnerability management, and more.

“With Snowflake as our security data lake, we are able to simplify our security program architecture and remove data management overhead,” says Prabhath Karanth, Sr. Director of Security, Compliance & Trust, TripActions. “Snowflake has been vital in helping us gain a complete picture of our security posture, eliminating blind spots and reducing noise so we can continue to provide user trust where it matters most. Deploying a modern technology stack from Snowflake is a pivotal piece of our cyber security strategy.”

Snowflake’s rich ecosystem of partners enables best-of-breed security

Snowflake is heavily investing in its extensive ecosystem of partners to transform the security industry and enable customers to choose best-of-breed applications that fit their needs. Snowflake integrates with partners including Hunters, Panther Labs, and Securonix to deliver industry-leading cyber security capabilities to customers with the Data Cloud using connected applications. Snowflake’s modern security architecture allows customers to gain control of their data, leverage pre-built content and security capabilities on top of their existing Snowflake environments, and utilise a single copy of data across cyber security use cases. With Snowflake’s Data Cloud, tightly integrated connected applications, and data from providers on Snowflake Data Marketplace, Snowflake is pioneering a new standard architecture for security teams looking to achieve their security goals.
Snowflake Ventures, which focuses on investing in companies that help accelerate and augment the growth and adoption of the Snowflake Data Cloud, has already invested in Hunters.ai, Lacework, Panther, and Securonix. These investments have helped drive product alignment to further eliminate security data silos and enable data-driven strategies for joint customers. “Snowflake is leading the security data lake movement, helping defenders bring their data and analytics together in a unified, secure, and scalable data platform,” says Omer Singer, Head of Cybersecurity Strategy, Snowflake. “With Snowflake’s cyber security workload, we further empower security teams in the Data Cloud so that they can collaborate with diverse stakeholders and succeed in their vital mission to protect the enterprise.”
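As a hedged illustration of what running "universal languages like SQL" over a security data lake can look like in practice, the sketch below issues a simple failed-login detection query through the snowflake-connector-python package. The connection parameters, table and column names are entirely hypothetical and are not part of any Snowflake-provided schema or the workload's built-in content.

```python
# Hypothetical example: a simple detection query over authentication logs
# stored in Snowflake. Table/column names and credentials are placeholders.
# Requires: pip install snowflake-connector-python

import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # placeholder
    user="security_analyst",   # placeholder
    password="********",       # use a secrets manager in practice
    warehouse="SECURITY_WH",
    database="SECURITY",
    schema="LOGS",
)

# Flag users with an unusually high number of failed logins in the last day.
DETECTION_SQL = """
SELECT user_name,
       COUNT(*) AS failed_logins
FROM   auth_events
WHERE  event_type = 'LOGIN_FAILED'
  AND  event_time >= DATEADD('day', -1, CURRENT_TIMESTAMP())
GROUP BY user_name
HAVING COUNT(*) > 20
ORDER BY failed_logins DESC
"""

cur = conn.cursor()
try:
    cur.execute(DETECTION_SQL)
    for user_name, failed_logins in cur:
        print(f"{user_name}: {failed_logins} failed logins in 24h")
finally:
    cur.close()
    conn.close()
```

Because the query runs where the logs already live, retention and ingest limits of a traditional SIEM do not constrain how far back such a search can look, which is the architectural point the article is making.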

Moving to the cloud is the basis of a good business continuity plan
By Amir Hashmi, CEO and Founder of zsah

A business continuity plan (BCP) is a thorough and complex plan to fight the ever-present and ever-costly risk of downtime, and moving operations to the cloud is the best shortcut to take. A business continuity plan is, broadly speaking, a set of processes and principles to improve resilience and ensure a business can continue functioning. Given the importance of IT to productivity for almost every organisation in the 21st century, downtime - when IT systems are offline - is its antithesis. Thanks to the rapid adoption of digital tools spurred on by the pandemic and the general move to online we have seen throughout the world, there is a tremendous amount of risk out there for businesses with online assets, from cyber attacks and ransomware to natural disasters and power outages. However, using cloud-based IT assets such as remote desktops, SaaS applications and cloud storage of data can be a shortcut to protecting their continuity - and therefore the continuity of your business.

According to Veeam’s 2021 Data Protection Report, the average cost of downtime is $84,650 per hour - roughly $1,410 per minute. Naturally, this figure is skewed by larger organisations reporting higher sums. Still, small and medium businesses are increasingly impacted, as they are seen as easier targets - and they have far less capital to absorb the blow. Although downtime has an infinite number of causes, from natural disasters to cyber attacks, two factors remain consistent: it is costly for modern businesses, and it is often preventable. The key to this prevention is a good business continuity plan.

If we disregard the part of a BCP that considers the physical security of assets and focus on the digital continuity of IT systems, we can say that a good BCP focuses on three things, which according to IBM are:

• High availability: the systems provided in a business that allow the enterprise to have access to applications so it can still operate even if it experiences local failures in areas such as IT, processes and physical facilities.
• Continuous operations: the system a business has in place that allows it to run smoothly during times when disruption or maintenance takes place, whether planned or otherwise.
• Disaster recovery: the system a business has in place that allows it to recover its data centre at another location safely and securely if a significant event leaves the current site either damaged beyond repair or inoperable.

Of course, this is not a universally prescriptive solution. As businesses have varied sizes and needs, one size never fits all. However, many of these essential issues are automatically covered if enterprises move storage, desktops and digital tools to the cloud rather than store and operate them from on-site servers or even on personal devices. Firstly, cloud providers automatically encrypt and protect your information through extensive cyber security measures, and often duplicate it across multiple sites, regions or even time zones to protect it against physical or cyber damage. Doing this yourself is a costly and time-consuming task with huge risks if not done correctly. Here, you benefit from economies of scale, as providers with deep pockets develop and invest in the most thorough, innovative and automated protection measures.
This means that your data, your applications and therefore the continuity of your business are protected from all but the most apocalyptic and unforeseen of circumstances, including data loss, power outages, ransomware attacks and many other causes of downtime. You are now (nearly) continuously operable and, just as importantly, are operable from anywhere. This, in turn, makes hybrid working or working from home a far easier and safer experience for new and existing members of your team, with cyber security measures and encryption embedded in your team’s operating systems and tools, no matter what device they use. With hybrid working now seen as an expectation by staff across the board, and across most of the modern, industrialised world, making this process more accessible is a wise investment to attract and retain future employees.

It’s not a magic cure, but it’s a start

The cloud is the obvious answer for a company that requires always-accessible and always-operational data storage and applications. This is true whether you use public cloud resources or a dedicated, off-premises private cloud server operated by a dedicated IT team on your behalf. The cloud is nothing new, and it certainly is not a single-point cure for IT pain points. Still, it is undoubtedly one of the most transformational changes you can make to aid both security and operational efficiency. However, if you want to avoid unmonitored cloud usage causing a surge in costs, make sure you have the resources to dedicate to its use. Better yet, outsource to experts: an IT managed-service provider will ensure that your move onto the cloud, and its continued use, is managed effectively.
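To put the downtime figures quoted above into context, a minimal bit of arithmetic shows how quickly even ambitious availability targets translate into real money at the reported average cost of $84,650 per hour. The availability levels shown are generic illustrations, not guarantees of any particular platform or provider.

```python
# Downtime cost arithmetic using the Veeam-reported average of $84,650/hour.
# Availability levels are generic illustrations.

HOURS_PER_YEAR = 8_760
COST_PER_HOUR = 84_650  # USD, average from Veeam's 2021 Data Protection Report

for availability in (0.99, 0.999, 0.9999):
    downtime_hours = HOURS_PER_YEAR * (1 - availability)
    annual_cost = downtime_hours * COST_PER_HOUR
    print(f"{availability:.2%} availability -> "
          f"{downtime_hours:6.1f} h/yr downtime, ~${annual_cost:,.0f}/yr")
```

At 99% availability the implied exposure runs into millions of dollars a year, which is why the high-availability, continuous-operations and disaster-recovery pillars described above are worth planning for explicitly.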

NHS trusts to digitise pathology following cloud contract with Sectra
Sectra has signed a contract for digital pathology with the East Suffolk and North Essex NHS Foundation Trust and the West Suffolk NHS Foundation Trust. The solution will enable pathologists to review and collaborate around cases in a way that is not possible with microscopes. This will reduce variation and increase efficiency in primary diagnostics, thereby improving cancer care. The two NHS trusts in the East of England will transform pathology services for a population of more than a million people, following a new agreement signed during the fourth quarter of Sectra's 2021/2022 fiscal year. The contract comprises the digital pathology module of the enterprise imaging subscription service Sectra One Cloud. The solution will be delivered as a fully managed service where Sectra takes responsibility for all hardware, software, and other IT components. Pathologists working at both the East Suffolk and North Essex NHS Foundation Trust and the West Suffolk NHS Foundation Trust will be able to better collaborate as they move away from microscopes and glass slides to analysing high-resolution digital images that can be accessed from almost anywhere. The move to Sectra's digital pathology solution will support healthcare professionals in delivering timely diagnoses for patients. Rather than having to wait for glass slides to be transported from one site to another, pathology specialists will be able to easily and quickly access digital images of patient tissues to carry out their reports. Multidisciplinary teams will also be able to view images without delay, and the trusts will be able to pool their pathology resources more effectively to make best use of capacity, while improving working flexibility for professionals through home working. Sarah Rollo, Pathology Project Manager at West Suffolk NHS Foundation Trust, says: "The recent pandemic has highlighted the importance of being able to access slides remotely and the ability to provide flexibility and resilience in our service. Our consultants will have the ability to report routine and urgent work from home, shortly after it has been issued out of the laboratory. Consultants will be able to work collaboratively on cases from remote locations with simultaneous access to view and annotate patient slides. As a district general hospital, a proportion of our patients are referred to specialist hospitals. Improved data sharing with these specialist centres will improve the turnaround time in the patient's pathway. Digital pathology has also facilitated the introduction of digital processes within the laboratory, reducing the need for manual transcription and improving patient safety." Dr Yinka Fashedemi, Clinical Lead for Cellular Pathology with East Suffolk and North Essex NHS Foundation Trust, says: "This new technology will allow our staff to analyse and share samples remotely, in turn enabling us to further improve the service we provide by making sure our patients receive their test results as quickly as possible. It will also make it easier to work in collaboration with colleagues based at specialist hospitals to obtain second opinions so that patients can begin any treatment they may need promptly. We are pleased to be one of the first trusts in the country to introduce this digital pathology system which will further support our staff to deliver the best possible service." The digital program is expected to support specialist pathways, such as cancer pathways. 
It will support around 1.5 million examinations per year and will also pave the way for introducing emerging technologies, such as artificial intelligence, into the diagnostic process. The trusts will be the first in the UK to deploy the digital pathology solution on the Microsoft Azure Cloud, with a fully managed service provided by Sectra minimising IT and infrastructure burdens for the trust. Significantly, this will allow the trusts to make use of archive storage facilities now available in the cloud that will help to manage high volumes of data associated with digital pathology at a sustainable cost. Jane Rendall, Managing Director for Sectra in the UK and Ireland, says: "NHS diagnostic services are undergoing the biggest transformation seen in decades, if not centuries, as disciplines like pathology digitise. East Suffolk and North Essex NHS Foundation Trust and West Suffolk NHS Foundation Trust are at the forefront of that journey and are particularly innovative in their use of cloud computing to ensure their program remains scalable and sustainable. We are extremely proud to have the opportunity to support this transformation that will deliver significant benefits to healthcare professionals and patients."

Structured cabling: helping to overcome increasing demand
By Andreas Sila, VP Market Management, Data Centre, HUBER+SUHNER

Data centres are facing increasing pressure to grow and meet the almost unprecedented demand required for modern life. As the Internet of Things (IoT), Industry 4.0 and the ever-growing expansion of 5G networks continue to gather pace, it is essential for data centre operators to invest in the latest technologies and solutions to enable immediate growth and ensure future demands are met. The challenge of keeping up with connectivity and bandwidth not only impacts enterprise and colocation data centres, but hyperscale data centres too. Everyone involved in these operations, including the tech giants with multiple hyperscale data centres, must recognise the need to optimise existing space while maintaining the ability to scale at a rapid rate. Structured cabling may hold the key to enhancing operations, saving time and space while allowing more focus and investment to be spent on revenue-driving technologies.

Maximised space, optimised operations

To make the most of the physical space available, high-quality fibre-optic solutions are crucial to provide adequate fibre capacity and bandwidth, allowing high data throughput with low loss and latency. This includes structured cabling systems, fibre-optic bandwidth expansion and all-optical switching. For operators looking for the maximum return on investment, a clear understanding of the solutions available, and how each one can be tailored to meet individual business needs, is essential. They must also keep in mind the demands they may face in the future when looking to leverage these solutions for their operations. Every data centre will have a different amount of space to work with, but the approach often remains the same. Smart, high-density solutions can be deployed to maximise the available space, and should there be any room left over, revenue-generating technologies can be utilised to increase profit. Modern fibre-optic management systems can work for both small and hyperscale data centres, with a high-density solution offering organisational support for cable systems, freeing up room for more cables. Fibre density can be maximised by using optical distribution frames (ODFs) capable of containing a high number of fibre cassettes and ports.

Accessibility is the aim

As data centres - and the number of fibres they contain - grow, any significant adjustment of the cables can leave operations vulnerable to the risks of human error. Accessibility must therefore be a major consideration in fibre management; an unorganised data centre creates an environment prone to technicians making unintentional, and costly, mistakes. All it takes is the accidental removal of the wrong patch cord, or getting caught up in other cables when carrying out moves, for an accidental breakage or a poor-quality connection to occur. Live links are easily disconnected and can lead to devastating expenses. If the network architecture is disorganised or poorly labelled, new employees may struggle to understand the system, which can lead to further problems. A proper structured cabling approach should be paramount for any operation. If the initial installation is not organised from the outset, it will only lead to subpar performance.
High-quality installation is vital in making sure cables are integrated in an accessible way, making it easier to upgrade when required rather than carrying out quick ‘fixes’ which can often lead to further issues down the line. With greater ease of management, the likelihood of damage is reduced when making MACs (moves, adds and changes), cutting out unnecessary downtime.

Tailored solutions to match requirements

No technology can provide a one-size-fits-all solution when it comes to a successful structured cabling strategy. Every data centre requires different solutions to ensure optimised performance. Finding and working with specialised experts can help prepare your operations for future growth and pinpoint the exact requirements needed to achieve it. Using industry experts like HUBER+SUHNER means installations can be managed to give an operator peace of mind that unnecessary expenses and downtime can be avoided. With the right considerations and investment in fibre-optic solutions, data centre operators can rely on an enhanced set-up that will enable revenue-driving services to meet demand both now and in the future.
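As a deliberately simple, hypothetical sketch of why accurate records matter when making MACs, the snippet below models a patch-panel inventory that refuses a disconnect request on a port recorded as carrying a live link. The port identifiers and labels are invented, and real fibre-management and DCIM systems are far richer, but the principle of checking the record before touching the cord is the same.

```python
# Hypothetical patch-panel record: refuse to disconnect live links.
# Labels and port identifiers are invented for illustration.

class PatchPanel:
    def __init__(self):
        # port id -> {"label": str, "live": bool}
        self.ports = {}

    def patch(self, port, label, live=False):
        self.ports[port] = {"label": label, "live": live}

    def request_disconnect(self, port):
        record = self.ports.get(port)
        if record is None:
            raise KeyError(f"{port}: no record - investigate before touching")
        if record["live"]:
            raise RuntimeError(f"{port} ({record['label']}) is live - "
                               "change must be scheduled, not ad hoc")
        print(f"{port} ({record['label']}) cleared for disconnection")
        del self.ports[port]

if __name__ == "__main__":
    panel = PatchPanel()
    panel.patch("ODF-01/P12", "spine-leaf uplink", live=True)
    panel.patch("ODF-01/P13", "decommissioned tenant", live=False)
    panel.request_disconnect("ODF-01/P13")   # allowed
    try:
        panel.request_disconnect("ODF-01/P12")
    except RuntimeError as err:
        print("Blocked:", err)
```

The software check is only as good as the labelling and record-keeping behind it, which is exactly the discipline a proper structured cabling approach is meant to instil.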


