by Andrew Cormack, Chief Regulatory Adviser, Jisc Technologies
December 11th, 2017
Over the past decade European legislators, courts and regulators – regarded as setting the world’s highest standards – have been increasingly clear about the importance of security and incident response in protecting privacy.
In 2009 the ePrivacy Directive identified “processing of traffic data to the extent strictly necessary for the purposes of ensuring network and information security” as a legitimate interest of public network operators (Directive 2009/136/EC, Recital 53); in 2016 the European Court of Justice extended this permission to websites (Case C-582/14, Breyer v Germany).
The 2016 General Data Protection Regulation (GDPR) added “public authorities, … computer emergency response teams (CERTs), computer security incident response teams (CSIRTs), … providers of electronic communications networks and services and … providers of security technologies and services”, and recognised that a wider range of personal data might be needed (Regulation (EU) 2016/679, Recital 49).
In 2017 Europe’s data protection regulators, the Article 29 Working Party, expressed concern that the proposed ePrivacy Regulation did not explicitly permit security measures such as patching (Opinion 01/2017 on the Proposed Regulation for the ePrivacy Regulation (2002/58/EC), page 20). Later in the year they said that measures to prevent, detect, mitigate and report security breaches should be a legal obligation – not just a permitted option – for every organisation handling personal data (Article 29 Working Party, Draft Guidelines on Personal Data Breach Notification, 3 October 2017).
This involves a paradox: to protect the personal data on their systems and networks, security and incident response teams must themselves process personal data. Fortunately, regulators also provide guidance on balancing privacy protection against privacy invasion. “Legitimate interest” is not just a convenient phrase, but one of the most deeply analysed terms in data protection law.
Under the GDPR, when acting for a “legitimate interest” an organisation must ensure:
- that the purpose of its processing is, indeed, legitimate;
- that the processing is the least intrusive way to achieve that purpose; and, uniquely,
- that the benefit of the processing is not overridden by its effect on individuals’ rights and freedoms.
Thus an activity may be both for a legitimate purpose and done in the least intrusive way possible, but still be prohibited if it creates an unjustified risk to individuals. Fortunately, responsibly-conducted security and incident response will rarely do so.
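The three conditions are cumulative, so the test behaves like a checklist that must pass in full. A minimal sketch in Python (the class and field names are purely illustrative, not drawn from the Regulation or any real library):

    from dataclasses import dataclass

    # Hypothetical checklist encoding the three-part "legitimate interest" test.
    @dataclass
    class LegitimateInterestAssessment:
        purpose_is_legitimate: bool   # e.g. network and information security (Recital 49)
        least_intrusive_means: bool   # no less privacy-invasive way achieves the purpose
        benefit_outweighs_risk: bool  # balancing against individuals' rights and freedoms

        def processing_permitted(self) -> bool:
            # All three conditions must hold; failing any one prohibits the processing.
            return (self.purpose_is_legitimate
                    and self.least_intrusive_means
                    and self.benefit_outweighs_risk)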
The law is now clear that preventive security and incident response are legitimate purposes. The next test – “least intrusive” or, in GDPR terms, “necessary” – might appear to create a problem. Detecting incidents almost always involves recording information about all use – both legitimate and malicious – of a network or system. Can this be “necessary”? In fact, there is no less intrusive way to achieve the objective, since on today’s Internet all users and systems are likely to be attacked and we cannot predict which ones actually will be.
Modern incident response practice further reduces the privacy risk by automating the initial processing of logs and only requiring human inspection when these show evidence of a likely incident. A focus on IP addresses and other identifiers that are not directly linked to individuals means teams mostly work with “pseudonyms”. By using directly identifying information, such as names and addresses, only when an incident has been confirmed and the human victims need assistance, security teams implement pseudonymisation, the GDPR’s preferred privacy-protecting approach.
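As a rough illustration of that workflow, the sketch below triages flow-style log records keyed only by source IP address (a pseudonym) and flags a source for human review only when its count of suspicious events crosses a threshold. The field names, indicator list and threshold are all assumptions made for this example, not any real tool’s interface:

    from collections import Counter

    # Illustrative first-pass triage: automated scanning, humans only see likely incidents.
    SUSPICIOUS_PORTS = {23, 445, 3389}  # example indicators; real teams use richer IoC sets
    REVIEW_THRESHOLD = 100              # suspicious events per source before a human looks

    def triage(log_records):
        """Count suspicious events per source IP; return only sources needing review."""
        hits = Counter()
        for record in log_records:      # each record: {"src_ip": ..., "dst_port": ...}
            if record["dst_port"] in SUSPICIOUS_PORTS:
                hits[record["src_ip"]] += 1
        # Only pseudonymous identifiers (IP addresses) leave this function; names and
        # account details are looked up later, and only if an incident is confirmed.
        return {ip: n for ip, n in hits.items() if n >= REVIEW_THRESHOLD}

Everything below the threshold is processed and discarded by the machine; an analyst only ever sees the flagged pseudonyms.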
Security and incident response teams must still ensure their activities satisfy the third, balancing, test. The benefits of an activity must be greater than the risk it creates for individuals: a small benefit that requires significant intrusion will not be justified. Teams already seek to ensure their work creates benefits, rather than risks, for system, network and data security: this care will usually reduce the risks to individual users’ rights and freedoms as well. Teams should, however, check that they are not creating unnecessary or unjustified risks for individuals, for example by keeping information in greater quantity, or for longer, than the likely security benefits can justify. A method for assessing the risk/benefit balance can be found in my paper – “Incident Response: Protecting Individual Rights Under the General Data Protection Regulation” – along with examples and a detailed legal analysis.
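One practical safeguard against the “greater quantity, or for longer” risk is to enforce the agreed retention period in the logging pipeline itself, so data cannot quietly accumulate beyond what the security benefit justifies. A minimal sketch (the 90-day window and directory path are assumptions for illustration, not a recommendation):

    import os
    import time

    # Illustrative retention enforcement: destroy logs older than the justified period.
    RETENTION_DAYS = 90             # assumed period; set by the team's own balancing test
    LOG_DIR = "/var/log/ir-team"    # hypothetical location of security logs

    def purge_expired_logs(log_dir=LOG_DIR, retention_days=RETENTION_DAYS):
        cutoff = time.time() - retention_days * 86400
        for name in os.listdir(log_dir):
            path = os.path.join(log_dir, name)
            if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
                os.remove(path)     # data older than the benefit justifies is deleted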
Guidance on the balancing test also encourages teams to share their knowledge – for example alerting others to problems, providing Indicators of Compromise, or reporting vulnerabilities – since improving others’ security can be counted on the benefits side of the test. Sharing must be done in ways that minimise risk: for example, it is rarely necessary to disclose local or direct identifiers to help others protect themselves. Sharing relevant information within a trusted community – such as FIRST – that has a common objective of improving security, and policies and practices to support that objective, should be seen as strongly supporting, and supported by, privacy and data protection law.
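In practice that can mean selecting only the attacker-side fields of an incident record before publication and refusing to release anything that identifies the victim. A rough sketch (the record fields and the internal address range are invented for illustration):

    import ipaddress

    # Illustrative sanitisation of incident records before sharing with a trusted community.
    INTERNAL_NET = ipaddress.ip_network("10.0.0.0/8")  # hypothetical local address range

    def sanitise_for_sharing(record):
        """Keep attacker-side indicators; drop local and direct identifiers."""
        shared = {
            "attacker_ip": record["attacker_ip"],
            "malware_hash": record["malware_hash"],
            "c2_domain": record["c2_domain"],
        }
        # Never release victim details: no local IPs, usernames or email addresses.
        if ipaddress.ip_address(shared["attacker_ip"]) in INTERNAL_NET:
            raise ValueError("refusing to share an internal address")
        return shared

Building the shared record by explicitly selecting fields, rather than deleting sensitive ones, means anything not deliberately chosen stays private by default.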