During the COVID-19 pandemic, hackers and fraudsters have been extremely active. Stimulated by the shift to remote working, Bitdefender’s Mid-Year Threat Landscape Report 2020 recorded a 715% year-on-year increase in detected and blocked ransomware attacks during the first half of 2020. No organisation, datacentre or state is immune to these constantly evolving attacks, and for the cyber-criminals who perpetrate them, they can be highly rewarding.
Not so, perhaps, with the recent attacks on the Republic of Ireland’s healthcare system. An Irish minister has described them as “possibly the most significant cyber-crime attack on the Irish state” to date, and yet the government is standing firm, refusing to pay a ransom after consulting cyber-security experts. Taoiseach (Irish PM) Micheál Martin has also stressed that the country’s health and emergency services remain open. Nevertheless, he admits it may take some days to fully assess the impact of the ransomware attack on the HSE, the country’s health service.
BBC News reports: “The National Cyber Security Centre (NCSC) has said the HSE became aware of a significant ransomware attack on some of its systems in the early hours of Friday morning and the NCSC was informed of the issue and immediately activated its crisis response plan.”
The international attack caused widespread disruption in many of the Republic of Ireland’s hospitals, including at Dublin’s Rotunda Hospital. It had to cancel outpatient appointments, with the exception of women in their 36th week of pregnancy or later. All other gynaecology clinics were cancelled, but those with urgent concerns were encouraged to attend. The National Maternity Hospital in Dublin also reported significant disruption to its services on the date of the attack, 14th May 2021.
Hospitals such as St Columcille’s Hospital in Dublin, Children’s Health Ireland (CHI) at Crumlin Hospital, and the UL Hospitals Group – which consists of six hospital sites in the mid-west – advised their patients of disruption even to virtual online appointments and to anything involving electronic records. This disruption is causing significant delays to patient-related healthcare services and appointments.
HSE chief executive, Paul Reid, says the attack is focused on accessing data stored on central servers. BBC News adds: “Rotunda Hospital Master Professor Fergal Malone said they had discovered during the night that they were victims of the ransomware attack, which is affecting all of its electronic systems and records. Prof Malone said he believed it could also have affected other hospitals, which was why they had shut down all of their computer systems.”
ZDNet reports: “The attack was identified as a human-operated ransomware variant known as “Conti”, which has been on the rise in recent months.” Daphne Leprince-Ringuet explains in her article for the publication: “Conti operates on the basis of “double extortion” attacks, which means that attackers threaten to release information stolen from the victims if they refuse to pay the ransom. The idea is to push the threat of data exposure to further blackmail victims into meeting hackers’ demands.”
Rather than give in to the cyber-criminals, the NCSC has decided upon a remediation strategy. This involves isolating the hacked systems, perhaps by providing an air gap to ensure that no other systems can be infected with the ransomware. The plan is then to wipe, rebuild and update all of the infected devices. The HSE will work with its anti-virus software partners to ensure that anti-virus solutions are updated and that all infected devices are thoroughly cleaned before remote back-ups are used to restore the systems safely.
Leprince-Ringuet adds: “The HSE has confirmed that it is in the process of assessing up to 2,000 patient-facing IT systems, which each include multiple servers and devices, to enable recovery in a controlled way. There are 80,000 HSE devices to be checked before they can be brought back online. Priority is given to key patient care systems, including diagnostic imaging, laboratory systems and radiation oncology, and some systems have already been recovered.”
There is also a requirement under the European Union’s General Data Protection Regulation (GDPR) to ensure that sensitive personal data is secured. A data leak can lead to severe financial penalties – not just a disruption in service delivery. The precautions taken by the HSE, the NCSC and their partners are therefore sensible, albeit drastic and highly disruptive. Prevention is often better than cure, but hackers are becoming increasingly sophisticated in how they carry out their attacks. The initial weakest link is often not an organisation’s technology but its workforce, who can unwittingly trigger an attack by clicking on a phishing email link, for example.
Jim McGann, VP Marketing & Business Development at Index Engines, offers his thoughts on the regulation: “GDPR puts personal data back in the hands of the citizens. So, if you have a company doing business in the EU, including from the US, you have to comply.” He adds that GDPR highlights a key problem that organisations have with data management. Quite often, they find it hard to locate the personal data on their systems or in their paper records. Consequently, they can’t know whether the data needs to be kept, deleted, modified or rectified. With potentially enormous fines looming over their heads, GDPR places a new level of responsibility on their shoulders.
However, he says there is a solution: “We provide information management solutions; the ability to apply policies to ensure compliance with data protection regulations. Petabytes of data have been collated, but organisations have no real understanding of what data exists. Index Engines provides the knowledge of this data, by looking at the different sources to understand what can be purged. Many organisations can free up 30% of their data, and this allows them to manage their data more effectively. Once organisations can manage the content – the data – they can then put the policies around it, as most companies know what type of files contain personal data.”
He then explains: “Much of this is very sensitive and so few companies like to talk on the record about this, but we do a lot of work with legal advisory firms to help organisations achieve compliance.” Index Engines, for example, completed some work with a Fortune 500 electronics manufacturer, which found that 40% of its data no longer held any business value. The company therefore decided to purge it from its datacentre.
So, there is a need to train staff to prevent ransomware attacks, and to have the ability to back up and restore data quickly. It’s also imperative to air gap sensitive data to ensure service continuity – perhaps to the cloud with backup-as-a-service (BUaaS), or to tape. Most people would be forgiven for thinking that tape is an old, obsolete technology, but it still plays a crucial role in storing and backing up data. After all, cloud systems can themselves be prone to attack, and so it’s sensible to have more than one means of backing up data.
Chris Ducker, Head of Product Marketing at Orgvue, and a former head of proposition marketing (Europe) at Sungard Availability Services, commented a couple of years ago: “If you look at on-premise, you have tape back-ups, a large IT infrastructure in place and that would be backed up onto tape or other services in a datacentre. This then gets stored in a safe location and then, when an incident occurs, you have to move it physically and get them to the right location to load them and get them up and running. This is a much slower process than cloud recovery provides. If you have a replicated environment, you can spin up servers more quickly than with traditional back-up.”
He says certain environments can be mirrored for business-critical applications using cloud services. However, there will be applications for which traditional tape back-up is more efficient, and so a tiered approach defines which method is used for which workload. In his view, it’s important to avoid being blinkered by any assumption that cloud recovery is the only option.
Cloud recovery, and cloud storage more widely, is part of a mix of options. There is therefore a need to understand the business outcomes an organisation wants to achieve, and why recovery might be needed, to ensure the right solutions are put in place to maintain data security, data integrity, and business and service continuity.
Where once the rule was simply to keep three copies of your data to guarantee recovery, the industry has now adopted the 3-2-1 rule. Although it is well known in the industry, Carbonite’s advice on its website is worth repeating:
1. Keep at least three copies of your data. That includes the original copy and at least two backups.
2. Keep the backed-up data on two different storage types. The chances of having two failures of the same storage type are much higher than for two completely different types of storage. Therefore, if you have data stored on an internal hard drive, make sure you have a secondary storage type, such as external or removable storage, or the cloud.
3. Keep at least one copy of the data offsite. Even if you have two copies on two separate storage types but both are stored onsite, a local disaster could wipe out both of them. Keep a third copy in an offsite location, like the cloud.
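The three checks above are mechanical enough to express in a few lines of code. The sketch below is purely illustrative – the inventory entries are invented for the example, not drawn from any system described in this article:

```python
# Illustrative check of the 3-2-1 rule against a hypothetical backup inventory.
# The entries (names, media types, locations) are invented for this example.
backups = [
    {"name": "primary",    "media": "internal-disk", "offsite": False},
    {"name": "tape-copy",  "media": "tape",          "offsite": False},
    {"name": "cloud-copy", "media": "cloud",         "offsite": True},
]

def satisfies_3_2_1(copies):
    total = len(copies)                          # rule 1: at least 3 copies
    media = {c["media"] for c in copies}         # rule 2: 2+ storage types
    offsite = any(c["offsite"] for c in copies)  # rule 3: 1+ copy offsite
    return total >= 3 and len(media) >= 2 and offsite

print(satisfies_3_2_1(backups))  # True for this inventory
```

Dropping the cloud copy would fail the check twice over: only two copies remain, and none of them is offsite.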
In response, David Trossell of Bridgeworks comments: “With WAN Acceleration you can have three different copies on two different types of media in two locations. The different media types could be cloud and tape. You can have an onsite copy too, offering better resilience.”
Exponential data volumes
The trouble is that data volumes are increasing exponentially – and this can become a major issue, whether an organisation is backing up its data, restoring it after a ransomware attack, or indexing it to comply with regulations such as GDPR. There is still a solution, though. It’s not WAN Optimisation, which can’t handle encrypted data, and it’s not SD-WANs on their own.
Even SD-WANs can be boosted with a WAN Acceleration overlay, using solutions such as Bridgeworks PORTrockIT, which uses a combination of artificial intelligence, machine learning and data parallelisation to accelerate data over wide area networks – and it can handle encrypted data, whether destined for the cloud or for tape. WAN Acceleration mitigates latency and packet loss while increasing bandwidth utilisation. Even government agencies need to be able to deliver.
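A back-of-the-envelope calculation shows why latency, rather than raw bandwidth, often throttles WAN transfers, and why parallelising data streams helps. The figures below are illustrative assumptions, not measurements of PORTrockIT or any other product:

```python
# Why latency limits single-stream throughput on a fat WAN link, and why
# parallelisation helps. All figures are illustrative assumptions.

LINK_BPS = 10e9                # 10 Gb/s WAN link
RTT_S = 0.06                   # ~60 ms round trip, e.g. coast-to-coast US
TCP_WINDOW_BYTES = 64 * 1024   # classic 64 KiB TCP window (no window scaling)

# A single TCP stream can have at most one window "in flight" per round trip,
# so its throughput is capped at window / RTT, regardless of link speed.
single_stream_bps = (TCP_WINDOW_BYTES * 8) / RTT_S
print(f"single stream: {single_stream_bps / 1e6:.1f} Mb/s")  # far below 10 Gb/s

# N parallel streams multiply the data in flight, until the link saturates.
for n in (10, 100, 1000):
    aggregate_bps = min(n * single_stream_bps, LINK_BPS)
    print(f"{n:>4} streams: {aggregate_bps / 1e9:.2f} Gb/s")
```

On these assumed numbers a single stream manages under 10 Mb/s of a 10 Gb/s link; it takes on the order of a thousand parallel streams (or much larger windows) to approach line rate, which is the gap that data parallelisation targets.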
As mentioned above, the additional challenge is to meet the increasing number of regulatory requirements, including data protection and data management over a WAN. One such organisation – a US government agency – approached Bridgeworks, in 2017, to undertake a proof of concept (POC) using multiple 10Gb/s WAN connections. This was with the aim of enabling high speed data transfer between its North-West and North-East coast US-based datacentres.
With accelerated WAN links, this US government agency hoped to replace its reliance on local tape back-up. The POC eliminated the need for tape back-ups in the North-West datacentre and showed the potential for significant cost savings, achieved by gaining ‘almost local area network (LAN) performance’ over the 2,500 miles between the two datacentres. This was done by leveraging solutions such as Commvault, NetApp SnapMirror and cloud-integrated StorageGRID, accelerated with PORTrockIT.
A senior enterprise engineer at the agency comments: “The networking team were extremely impressed with the ability to throttle bandwidth consumption. We started low and reached our initial target of 600MB/s of throughput on the initial learning process without any impact on other traffic using the same WAN link.”
Petabyte scale movements
PORTrockIT has now been accelerating the US government agency’s WANs for several years, and the final implementation enabled the agency to realise its plan to move away from local tape back-up, resulting in significant cost savings. Data is now moved at petabyte scale and at great speed.
By delivering 90-95% utilisation of the allocated 10Gb/s WAN bandwidth over 24 hours a day, 7 days a week and over the entire course of a year, the agency was also able to gain a significant return on investment on the cost of the WAN links themselves. This was in addition to the massive acceleration of the transferred data. This would be crucial in the event of any recovery from a ransomware attack, and it can accelerate real-time analysis as well as Backup-as-a-Service.
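The arithmetic behind that petabyte-scale claim is straightforward. The short calculation below simply works through the utilisation figures quoted above; the constants are taken from the article, not from any agency data:

```python
# Rough arithmetic for a 10 Gb/s link at 90-95% utilisation, 24/7.
# Constants come from the figures quoted in the article.

LINK_BPS = 10e9            # 10 Gb/s allocated WAN bandwidth
SECONDS_PER_DAY = 24 * 3600

for util in (0.90, 0.95):
    bytes_per_day = LINK_BPS * util / 8 * SECONDS_PER_DAY
    tb_per_day = bytes_per_day / 1e12
    pb_per_year = bytes_per_day * 365 / 1e15
    print(f"{util:.0%} utilisation: ~{tb_per_day:.0f} TB/day, "
          f"~{pb_per_year:.1f} PB/year")
```

At 90-95% utilisation the link shifts roughly 97-103 TB per day, or some 35-37 PB per year – comfortably petabyte scale, as the article states.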
While the US agency moved away from its local tape back-ups to the cloud, there is still a role for tape back-ups for disaster recovery and for regulatory compliance. A combination of solutions can be deployed, as suggested by Ducker, to ensure that data is kept secure and readily available for whenever it is needed – even after a ransomware attack. The key is to be ready for any potential eventuality to ensure that, for example, a healthcare system can continue to operate to deliver healthcare services to its patients and to support its staff.
By Graham Jarvis, Freelance Business and Technology Journalist