Monetary penalty for marketing phonecalls: Tribunal upholds ‘lenient’ penalty

December 16th, 2014 by Robin Hopkins

A telephone call made for direct marketing purposes is against the law when it is made to the number of a telephone subscriber who has registered with the Telephone Preference Service (‘TPS’) as not wishing to receive such calls on that number, unless the subscriber has notified the caller that he does not, for the time being, object to such calls being made on that line by that caller: see regulation 21 of the Privacy and Electronic Communications (EC Directive) Regulations 2003, as amended (‘PECR’).

The appellant in Amber UPVC Fabrications v IC (EA/2014/0112) sells UPVC windows and the like. It relies heavily on telephone calls to market its products and services. It made nearly four million telephone calls in the period May 2011 to April 2013, of which approximately 80% to 90% were marketing calls.

Some people complained to the Information Commissioner about these calls. The Commissioner found that the appellant had committed serious PECR contraventions – he relied on 524 unsolicited calls made in contravention of PECR. The appellant admitted that it made 360 of the calls. The appellant was issued with a monetary penalty under section 55A of the Data Protection Act 1998, as incorporated into PECR.

The appellant was issued with a monetary penalty to the value of £50,000. It appealed to the Tribunal. Its appeal did not go very well.

The Tribunal found the appellant’s evidence to be “rather unsatisfactory in a number of different ways. They took refuge in broad assertions about the appellant’s approach to compliance with the regulations, without being able to demonstrate that they were genuinely familiar with the relevant facts. They were able to speak only in general terms about the changes to the appellant’s telephone systems that had been made from time to time, and appeared unfamiliar with the detail. They had no convincing explanations for the numerous occasions when the appellant had failed to respond to complaints and correspondence from TPS or from the Commissioner. The general picture which we got was of a company which did as little as possible as late as possible to comply with the regulations, and only took reluctant and belated action in response to clear threats of legal enforcement.”

The Tribunal set out in detail the flaws with the appellant’s evidence. It concluded that “the penalty was appropriate (or, indeed, lenient) in the circumstances, and the appellant has no legitimate complaint concerning its size”.

This decision is notable not only for its detailed critique (in terms of PECR compliance) of the appellant’s business practices and evidence on appeal, but also more widely for its contribution to the developing jurisprudence on monetary penalties and the application of the conditions under section 55A DPA. Thus far, the cases have been Scottish Borders (DPA appeal allowed, in a decision largely confined to the facts), Central London Community Healthcare NHS Trust (appeal dismissed at both First-Tier and Upper Tribunal levels) and Niebel (PECR appeal allowed and upheld on appeal).

The Amber case is most closely linked to Niebel, which concerned marketing text messages. The Amber decision includes commentary on and interpretation of the binding Upper Tribunal decision in Niebel on how the section 55A conditions for issuing a monetary penalty should be applied. For example:

PECR should be construed so as to give proper effect to the Directive which it implements – see the Tribunal’s discussion of the Marleasing principle.

The impact of the ‘contravention’ can be assessed cumulatively, i.e. as the aggregate effect of the contraventions asserted in the penalty notice. In Niebel, the asserted contravention was a specified number of text messages which had been complained about, but the Tribunal in Amber took the view that, in other cases, the ICO need not frame the relevant contravention solely by reference to complaints – it could extrapolate, where the evidence supported this, to form a wider conclusion on contraventions.

Section 55A requires an assessment of the “likely” consequences of the “kind” of contravention. “Likely” has traditionally been taken to mean “a significant and weighty chance”, but the Tribunal in Amber considered that, in this context, it might mean “more than fanciful”, i.e. “a real, a substantial rather than merely speculative, possibility, a possibility that cannot sensibly be ignored”.

The “kind” of contravention includes the method of contravention, the general content and tenor of the communication, and the number or scale of the contravention.

“Substantial” (as in “substantial damage or substantial distress”) probably means “more than trivial, ie, real or of substance”. Damage or distress can be substantial on a cumulative basis, i.e. even if the individual incidents do not themselves cause substantial damage or substantial distress.

“Damage” is different to “distress” but is not confined to financial loss – for example, personal injury or property interference could suffice.

“Distress” means something more than irritation.

The significant and weighty chance of causing substantial distress to one person is sufficient for the threshold test to be satisfied.

Where the number of contraventions is large, there is a higher inherent chance of affecting somebody who, because of their particular unusual circumstances, is likely to suffer substantial damage or substantial distress due to the PECR breach.

The Amber decision is, to date, the most developed analysis at First-Tier Tribunal level of the monetary penalty conditions. The decision will no doubt be cited and discussed in future cases.

11KBW’s James Cornwall appeared for the ICO in both Amber and Niebel.

Robin Hopkins @hopkinsrobin

Above and below the waterline: IPT finds that Prism and Tempora are lawful

December 5th, 2014 by Robin Hopkins

The now famous revelations by US whistleblower Edward Snowden focused on US government programmes under which vast amounts of data about individuals’ internet usage and communications were said to have been gathered. The allegations extended beyond the US: the UK government and security agencies, for example, were also said to be involved in such activity.

Unsurprisingly, concerns were raised about the privacy implications of such activity – in particular, whether it complied with individuals’ rights under the European Convention on Human Rights (privacy under Article 8; freedom of expression under Article 10).

The litigation before the Investigatory Powers Tribunal

Litigation was commenced in the UK by Privacy International, Liberty, Amnesty International and others. The cases were heard by a five-member panel of the Investigatory Powers Tribunal (presided over by Mr Justice Burton) in July of this year. The IPT gave judgment ([2014] UKIPTrib 13_77-H) today.

In a nutshell, it found that the particular information-gathering activities it considered – carried out in particular by GCHQ and the Security Service – are lawful.

Note the tense: they are lawful. The IPT has not determined whether or not they were lawful in the past. The key difference is this: an essential element of lawfulness is whether the applicable legal regime under which such activity is conducted is sufficiently accessible (i.e. is it available and understandable to people?). That turns in part on what the public is told about how the regime operates. During the course of this litigation, the public has been given (by means of the IPT’s open judgment) considerably more detail in this regard. This, says the IPT, certainly makes the regime lawful on a prospective basis. The IPT has not determined whether, prior to these supplementary explanations, the ‘in accordance with the law’ requirement was satisfied.

With its forward-looking, self-referential approach, this judgment is unusual. It is also unusual in that it proceeded to test the legality of the regimes largely by reference to assumed rather than established facts about the Prism and Tempora activities. This is because not much about those activities has been publicly confirmed, due to the ‘neither confirm nor deny’ principle which is intrinsic to intelligence and security activity.

Prism

The first issue assessed by reference to assumed facts was called the “Prism” issue: this was about the collection/interception by US authorities of data about individuals’ internet communications and the assumed sharing of such data with UK authorities, who could then retain and use it. Would this arrangement be lawful under Article 8(2) ECHR? In particular, was it “in accordance with the law”, which in essence means did it have a basis in law and was it sufficiently accessible and foreseeable to the potentially affected individuals? (These are the so-called Weber requirements, from Weber and Saravia v Germany [2008] 46 EHRR SE5).

When it comes to intelligence, accessibility and foreseeability are difficult to achieve without giving the game away to a self-defeating extent. The IPT recognised that the Weber principles need tweaking in this context. The following ‘nearly-Weber’ principles were applied as the decisive tests for ‘in accordance with the law’ in this context:

“(i) there must not be an unfettered discretion for executive action. There must be controls on the arbitrariness of that action.

(ii) the nature of the rules must be clear and the ambit of them must be in the public domain so far as possible, an “adequate indication” given (Malone v UK [1985] 7 EHRR 14 at paragraph 67), so that the existence of interference with privacy may in general terms be foreseeable.”

Those tests will be met if:

“(i) Appropriate rules or arrangements exist and are publicly known and confirmed to exist, with their content sufficiently signposted, such as to give an adequate indication of it.

(ii) They are subject to proper oversight.”

On the Prism issue, the IPT found that those tests are met. The basis in law comes from the Security Service Act 1989, the Intelligence Services Act 1994 and the Counter-Terrorism Act 2008. Additionally, the Data Protection Act 1998, the Official Secrets Act 1989 and the Human Rights Act 1998 restrain the use of data of the sort at issue here. Taken together, there are sufficient and specific statutory limits on the information that each of the Intelligence Services can obtain, and on the information that each can disclose.

In practical terms, there are adequate arrangements in place to safeguard against arbitrary or unfettered use of individuals’ data. These included the “arrangements below the waterline” (i.e. those which are not publicly explained) which the Tribunal was asked to – and did – take into account.

Oversight of this regime comes through Parliament’s Intelligence and Security Committee and the Interception of Communications Commissioner.

Further, these arrangements are “sufficiently signposted by virtue of the statutory framework … and the statements of the ISC and the Commissioner… and as now, after the two closed hearings that we have held, publicly disclosed by the Respondents and recorded in this judgment”.

Thus, in part thanks to closed evidence of the “below the waterline” arrangements and open disclosure of more detail about those arrangements, the Prism programme (on the assumed facts before the IPT) is lawful, i.e. it is a justified intrusion into Article 8 ECHR rights.

The alleged Tempora interception operation

Unlike the Prism programme, the second matter scrutinised by the IPT – the alleged Tempora programme – involved the interception of communications by UK authorities. Here, in contrast to Prism (where the interception is done by someone else), the Regulation of Investigatory Powers Act 2000 is pivotal.

This works on a system of warrants for interception. The warrants are issued under section 8 of RIPA (supplemented by sections 15 and 16) by the Secretary of State, rather than by a member of the judiciary. The regime is governed by the Interception of Communications Code of Practice.

The issue for the IPT was: is this warrant system (specifically, the section 8(4) provision for ‘certified’ warrants) in accordance with the law, for ECHR purposes?

This has previously been considered by the IPT in the British Irish Rights Watch case in 2004. Its answer was that the regime was in accordance with the law. The IPT in the present cases re-examined the issue and took the same view. It rejected a number of criticisms of the certified warrant regime, including:

The absence of a tightly focused, ‘targeting’ approach at the initial stages of information-gathering is acceptable and inevitable.

There is no call “for search words to be included in an application for a warrant or in the warrant itself. It seems to us that this would unnecessarily undermine and limit the operation of the warrant and be in any event entirely unrealistic”.

There is also “no basis for objection by virtue of the absence for judicial pre-authorisation of a warrant. The United Kingdom system is for the approval by the highest level of government, namely by the Secretary of State”.

Further, “it is not necessary that the precise details of all the safeguards should be published, or contained in legislation, delegated or otherwise”.

The overall assessment was very similar to that for Prism: in light of the statutory regime, the oversight mechanisms, the open and closed evidence of the arrangements (above and below the “waterline”) and additional disclosures by the Respondents, the regime for gathering, retaining and using intercepted data was in accordance with the law – both as to Article 8 and Article 10 ECHR.

Conclusion

This judgment is good news for the UK Government and the security bodies, who will no doubt welcome the IPT’s sympathetic approach to the practical exigencies of effective intelligence operations in the digital age. These paragraphs encapsulate the complaints and the IPT’s views:

“158. Technology in the surveillance field appears to be advancing at break-neck speed. This has given rise to submissions that the UK legislation has failed to keep abreast of the consequences of these advances, and is ill fitted to do so; and that in any event Parliament has failed to provide safeguards adequate to meet these developments. All this inevitably creates considerable tension between the competing interests, and the ‘Snowden revelations’ in particular have led to the impression voiced in some quarters that the law in some way permits the Intelligence Services carte blanche to do what they will. We are satisfied that this is not the case.

159. We can be satisfied that, as addressed and disclosed in this judgment, in this sensitive field of national security, in relation to the areas addressed in this case, the law gives individuals an adequate indication as to the circumstances in which and the conditions upon which the Intelligence Services are entitled to resort to interception, or to make use of intercept.”

11KBW’s Ben Hooper and Julian Milford appeared for the Respondents.

Robin Hopkins @hopkinsrobin

In the wake of Google Spain: freedom of expression down (but not out)

July 15th, 2014 by Robin Hopkins

The CJEU’s judgment in Google Spain was wrong and has created an awful mess.

That was the near-unanimous verdict of a panel of experts – including 11KBW’s Anya Proops – at a debate hosted by ITN and the Media Society on Monday 14 July and entitled ‘Rewriting History: Is the new era in Data Protection compatible with journalism?’.

The most sanguine participant was the Information Commissioner, Christopher Graham. He cautioned against ‘Chicken Licken’ (the sky is falling in) alarmism – we should wait and see how the right to be forgotten (RTBF) pans out in practice. He was at pains to reassure the media that its privileged status in data protection law was not in fact under threat: the s. 32 DPA exemption, for example, was here to stay. There remains space, Google Spain notwithstanding, to refuse inappropriate RTBF requests, he suggested – at least as concerns journalism which is in the public interest (a quality which is difficult to pin down both in principle and in practice).

‘I am Chicken Licken!’, was the much less sanguine stance of John Battle, ITN’s Head of Compliance. Google Spain is a serious intrusion into media freedom, he argued. This was echoed by The Telegraph’s Holly Watt, who likened the RTBF regime to book-burning.

Peter Barron, Google’s Director of Communications and Public Affairs for Europe, Africa and the Middle East, argued that in implementing its fledgling RTBF procedure, Google was simply doing as told: it had not welcomed the Google Spain judgment, but that judgment is now the law, and implementing it was costly and burdensome. On the latter point, Chris Graham seemed less than entirely sympathetic, pointing out that Google’s business model is based heavily on processing other people’s personal data.

John Whittingdale MP, Chairman of the Culture, Media & Sport Select Committee, was markedly Eurosceptic in tone. Recent data protection judgments from the CJEU have overturned what we in the UK had understood the law to be – he was referring not only to Google Spain, but also to Digital Rights Ireland (on which see my DRIP post from earlier today). The MOJ or Parliament need to intervene and restore sanity, he argued.

Bringing more legal rigour to bear was Anya Proops, who homed in on the major flaws in the Google Spain judgment. Without there having been any democratic debate (and without jurisprudential analysis), the CJEU has set a general rule whereby privacy trumps freedom of expression. This is hugely problematic in principle. It is also impracticable: the RTBF mechanism doesn’t actually work in practice, for example because it leaves Google.com (as opposed to Google.co.uk or another EU domain) untouched – a point also made by Professor Luciano Floridi, Professor of Philosophy and Ethics of Information at the University of Oxford.

There were some probing questions from the audience too. Mark Stephens, for example, asked Chris Graham how he defined ‘journalism’ (answer: ‘if it walks and quacks like a journalist’…) and how he proposed to fund the extra workload which RTBF complaints would bring for the ICO (answer: perhaps a ‘polluter pays’ approach?).

Joshua Rozenberg asked Peter Barron if there was any reason why people should not switch their default browsers to the RTBF-free Google.com (answer: no) and whether Google would consider giving aggrieved journalists rights of appeal within a Google review mechanism (the Google RTBF mechanism is still developing).

ITN is making the video available on its website this week. Those seeking further detail can also search Twitter for the hashtag #rewritinghistory or see Adam Fellows’ blog post.

The general tenor from the panel was clear: Google Spain has dealt a serious and unjustifiable blow to the freedom of expression.

Lastly, one of my favourite comments came from ITN’s John Battle, referring to the rise of data protection as a serious legal force: ‘if we’d held a data protection debate a year ago, we’d have had one man and his dog turn up. Now it pulls in big crowds’. I do not have a dog, but I have been harping on for some time about data protection’s emergence from the shadows to bang its fist on the tables of governments, security bodies, big internet companies and society at large. It surely will not be long, however, before the right to freedom of expression mounts a legal comeback, in search of a more principled and workable balance between indispensable components of a just society.

Robin Hopkins @hopkinsrobin

Surveillance powers to be kept alive via DRIP

July 15th, 2014 by Robin Hopkins

The legal framework underpinning state surveillance of individuals’ private communications is in turmoil, and it is not all Edward Snowden’s fault. As I write this post, two hugely important developments are afoot.

Prism/Tempora

The first is the challenge by Privacy International and others to the Prism/Tempora surveillance programmes implemented by GCHQ and the security agencies. Today is day 2 of the 5-day hearing before the Investigatory Powers Tribunal. To a large extent, this turmoil was unleashed by Snowden.

DRIP – the background

The second strand of the turmoil is thanks to Digital Rights Ireland and others, whose challenge to the EU’s Data Retention Directive 2006/24 was upheld by the CJEU in April of this year. That Directive provided for traffic and location data (rather than content-related information) about individuals’ online activity to be retained by communications providers for a period of 6-24 months and made available to policing and security bodies. In the UK, that Directive was implemented via the Data Retention (EC Directive) Regulations 2009, which mandated retention of communications data for 12 months.

In Digital Rights Ireland, the CJEU held the Directive to be invalid on the grounds of incompatibility with the privacy rights enshrined under the EU’s Charter of Fundamental Rights. Strictly speaking, the CJEU’s judgment (on a preliminary ruling) then needed to be applied by the referring courts, but in reality the foundation of the UK’s law fell away with the Digital Rights Ireland judgment. The government has, however, decided that it needs to maintain the status quo in terms of the legal powers and obligations which were rooted in the invalid Directive.

On 10 July 2014, the Home Secretary made a statement announcing that this gap in legal powers was to be plugged on a limited-term basis. A Data Retention and Investigatory Powers (DRIP) Bill would be put before Parliament, together with a draft set of regulations to be made under the envisaged Act. If passed, these would remain in place until the end of 2016, by which time longer-term solutions could be considered. Ms May said this would:

“…ensure, for now at least, that the police and other law enforcement agencies can investigate some of the criminality that is planned and takes place online. Without this legislation, we face the very prospect of losing access to this data overnight, with the consequence that police investigations will suddenly go dark and criminals will escape justice. We cannot allow this to happen.”

Today, amid the ministerial reshuffle and shortly before the summer recess, the Commons is debating DRIP on an emergency basis.

Understandably, there has been much consternation about the extremely limited time allotted for MPs to debate a Bill of such enormous significance for privacy rights (I entitled my post on the Digital Rights Ireland case “Interfering with the fundamental rights of practically the entire European population”, which is a near-verbatim quote from the judgment).

DRIP – the data retention elements

The Bill is short. A very useful summary can be found in the Standard Note from the House of Commons Library (authored by Philippa Ward).

Clause 1 provides power for the Secretary of State to issue a data retention notice on a telecommunications services provider, requiring them to retain certain data types (limited to those set out in the Schedule to the 2009 Regulations) for up to 12 months. There is a safeguard that the Secretary of State must consider whether it is “necessary and proportionate” to give the notice for one or more of the purposes set out in s22(2) of RIPA.

Clause 2 then provides the relevant definitions.

The Draft Regulations explain the process in more detail. Note in particular regulation 5 (the matters the Secretary of State must consider before giving a notice) and regulation 9 (which provides for oversight by the Information Commissioner of the requirements relating to integrity, security and destruction of retained data).

DRIP – the RIPA elements

DRIP is also being used to clarify (says the government) or extend (say some critics) RIPA 2000. In this respect, as commentators such as David Allen Green have pointed out, it is not clear why the emergency legislation route is necessary.

Again, to borrow the nutshells from the House of Commons Library’s Standard Note:

Clause 3 amends s5 of RIPA regarding the Secretary of State’s power to issue interception warrants on the grounds of economic well-being.

Clause 4 aims to clarify the extra-territorial reach of RIPA in relation to both interception and communications data by adding specific provisions. This confirms that requests for interception and communications data to overseas companies that are providing communications services within the UK are subject to the legislation.

Clause 5 clarifies the definition of “telecommunications service” in RIPA to ensure that internet-based services, such as webmail, are included in the definition.

Criticism

The Labour front bench is supporting the Coalition. A number of MPs, including David Davis and Tom Watson, have been vociferous in their opposition (see for example the proposed amendments tabled by Watson and others here). So too have numerous academics and commentators. I won’t try to link to all of them here (as there are too many). Nor can I link to a thorough argument in defence of DRIP (as I have not been able to find one). For present purposes, an excellent forensic analysis comes from Graham Smith at Cyberleagle.

I don’t seek to duplicate that analysis. It is, however, worth remembering this: the crux of the CJEU’s judgment was that the Directive authorised such vast privacy intrusions that stringent safeguards were required to render it proportionate. In broad terms, that proportionality problem can be fixed in two ways: reduce the extent of the privacy intrusions and/or introduce much better safeguards. DRIP does not seek to do the former. The issue is whether it offers sufficient safeguards for achieving an acceptable balance between security and privacy.

MPs will consider that today and Peers later this week. Who knows? – courts may even be asked for their views in due course.

Robin Hopkins @hopkinsrobin

Some results may have been removed under data protection law in Europe. Learn more.

July 3rd, 2014 by Robin Hopkins

This is the message that now regularly greets those using Google to search for information on named individuals. It relates, of course, to the CJEU’s troublesome Google Spain judgment of 13 May 2014.

I certainly wish to learn more.

So I take Google up on its educational offer and click through to its FAQ page, where the folks at Google tell me inter alia that “Since this ruling was published on 13 May 2014, we’ve been working around the clock to comply. This is a complicated process because we need to assess each individual request and balance the rights of the individual to control his or her personal data with the public’s right to know and distribute information”.

The same page also leads me to the form on which I can ask Google to remove from its search results certain URLs about me. I need to fill in gaps like this: “This URL is about me because… This page should not be included as a search result because…” 

This is indeed helpful in terms of process, but I want to understand more about the substance of decision-making. How does (and/or should) Google determine whether or not to accede to my request? Perhaps understandably (as Google remarks, this is a complicated business on which the dust is yet to settle), Google doesn’t tell me much about that just yet.

So I look to the obvious source – the CJEU’s judgment itself – for guidance. Here I learn that I can in principle ask that “inadequate, irrelevant or no longer relevant” information about me not be returned through a Google search. I also get some broad – and quite startling – rules of thumb, for example at paragraph 81, which tells me this:

“In the light of the potential seriousness of that interference, it is clear that it cannot be justified by merely the economic interest which the operator of such an engine has in that processing. However, inasmuch as the removal of links from the list of results could, depending on the information at issue, have effects upon the legitimate interest of internet users potentially interested in having access to that information, in situations such as that at issue in the main proceedings a fair balance should be sought in particular between that interest and the data subject’s fundamental rights under Articles 7 and 8 of the Charter. Whilst it is true that the data subject’s rights protected by those articles also override, as a general rule, that interest of internet users, that balance may however depend, in specific cases, on the nature of the information in question and its sensitivity for the data subject’s private life and on the interest of the public in having that information, an interest which may vary, in particular, according to the role played by the data subject in public life.”

So it seems that, in general (and subject to the sensitivity of the information and my prominence in public life), my privacy rights trump Google’s economic rights and other people’s rights to find information about me in this way. So the CJEU has provided some firm steers on points of principle.

But still I wish to learn more about how these principles will play out in practice. Media reports in recent weeks have told us about the volume of ‘right to be forgotten’ requests received by Google.

The picture this week has moved on from volumes to particulars. In the past few days, we have begun to learn how Google’s decisions filter back to journalists responsible for the content on some of the URLs which objectors pasted into the forms they sent to Google. We learn that journalists and media organisations, for example, are now being sent messages like this:

“Notice of removal from Google Search: we regret to inform you that we are no longer able to show the following pages from your website in response to certain searches on European versions of Google.”

Unsurprisingly, some of those journalists find this puzzling and/or objectionable. Concerns have been ventilated in the last day or two, most notably by the BBC’s Robert Peston (who feels that, through teething problems with the new procedures, he has been ‘cast into oblivion’) and The Guardian’s James Ball (who neatly illustrates some of the oddities of the new regime). See also The Washington Post’s roundup of UK media coverage.

That coverage suggests that the Google Spain ruling – which made no overt mention of free expression rights under Article 10 ECHR – has started to bite into the media’s freedom. The Guardian’s Chris Moran, however, has today posted an invaluable piece clarifying some misconceptions about the right to be forgotten. Academic commentators such as Paul Bernal have also offered shrewd insights into the fallout from Google Spain.

So, by following the trail from Google’s pithy new message, I am able to learn a fair amount about the tenor of this post-Google Spain world.

Inevitably, however, given my line of work, I am interested in the harder edges of enforcement and litigation: in particular, if someone objects to the outcome of a ‘please forget me’ request to Google, what exactly can they do about it?

On such questions, it is too early to tell. Google says on its FAQ page that “we look forward to working closely with data protection authorities and others over the coming months as we refine our approach”. For its part, the ICO tells us that it and its EU counterparts are working hard on figuring this out. Its newsletter from today says for example that:

“The ICO and its European counterparts on the Article 29 Working Party are working on guidelines to help data protection authorities respond to complaints about the removal of personal information from search engine results… The recommendations aim to ensure a consistent approach by European data protection authorities in response to complaints when takedown requests are refused by the search engine provider.”

So for the moment, there remain lots of unanswered questions. For example, the tone of the CJEU’s judgment is that DPA rights will generally defeat economic rights and the public’s information rights. But what about a contest between two individuals’ DPA rights?

Suppose, for example, that I am an investigative journalist with substantial reputational and career investment in articles about a particular individual, and that he then persuades Google to ensure that my articles do not surface in EU Google searches for his name. Those articles also contain my name, work and opinions, i.e. they also contain my personal data. In acceding to the ‘please forget me’ request without seeking my input, could Google be said to have processed my personal data unfairly, whittling away my online personal and professional output (at least to the extent that the relevant EU Google searches are curtailed)? Could this be said to cause me damage or distress? If so, can I plausibly issue a notice under s. 10 of the DPA, seek damages under s. 13, or ask the ICO to take enforcement action under s. 40?

The same questions could arise, for example, if my personal backstory is heavily entwined with that of another person who persuades Google to remove from its EU search results articles discussing both of us – that may be beneficial for the requester, but detrimental to me in terms of the adequacy of personal data about me which Google makes available to the interested searcher.

So: some results may have been removed under data protection law in Europe, and I do indeed wish to learn more. But I will have to wait.

Robin Hopkins @hopkinsrobin

GCHQ’s internet surveillance – privacy and free expression join forces

July 3rd, 2014 by Robin Hopkins

A year ago, I blogged about Privacy International’s legal challenge – alongside Liberty – against GCHQ, the Security Services and others concerning the Prism/Tempora programmes which came to public attention following Edward Snowden’s whistleblowing. That case is now before the Investigatory Powers Tribunal. It will be heard for 5 days, commencing on 14 July.

Privacy International has also brought a second claim against GCHQ: in May 2014, it issued proceedings concerning the use of ‘hacking’ tools and software by intelligence services.

It has been announced this week that Privacy International is party to a third challenge which has been filed with the Investigatory Powers Tribunal. This time, the claim is being brought alongside seven internet service providers: GreenNet (UK), Chaos Computer Club (Germany), GreenHost (Netherlands), Jimbonet (Korea), Mango (Zimbabwe), May First/People Link (US) and Riseup (US).

The claim is interesting on a number of fronts. One is the interplay between global reach (see the diversity of the claimants’ homes) and this specific legal jurisdiction (the target is GCHQ and the jurisdiction is the UK – as opposed, for example, to bringing claims in the US). Another is that it sees private companies – and therefore Article 1 Protocol 1 ECHR issues about property, business goodwill and the like – surfacing in the UK’s internet surveillance debate.

Also, the privacy rights not only of ‘ordinary’ citizens (network users) but also specifically those of the claimants’ employees are being raised.

Finally, this claim sees the right to free expression under Article 10 ECHR – conspicuously absent, for example, in the Google Spain judgment – flexing its muscle in the surveillance context. Privacy and free expression rights are so often in tension, but here they make common cause.

The claims are as follows (quoting from the claimants’ press releases):

(1) By interfering with network assets and computers belonging to the network providers, GCHQ has contravened the UK Computer Misuse Act and Article 1 of the First Additional Protocol (A1AP) of the European Convention of Human Rights (ECHR), which guarantees the individual’s peaceful enjoyment of their possessions

(2) Conducting surveillance of the network providers’ employees is in contravention of Article 8 ECHR (the right to privacy) and Article 10 ECHR (freedom of expression)

(3) Surveillance of the network providers’ users that is made possible by exploitation of their internet infrastructure, is in contravention of Arts. 8 and 10 ECHR; and

(4) By diluting the network providers’ goodwill and relationship with their users, GCHQ has contravened A1AP ECHR.

Robin Hopkins @hopkinsrobin

Privacy, electronic communications and monetary penalties: new Upper Tribunal decision

June 12th, 2014 by Robin Hopkins

Panopticon reported late last year that the First-tier Tribunal overturned the first monetary penalty notice issued by the Information Commissioner for breaches of the Privacy and Electronic Communications Regulations 2003. This was the decision in Niebel v IC (EA/2012/0260).

The Information Commissioner appealed against that decision. The Upper Tribunal gave its decision on the appeal yesterday: see IC v Niebel (GIA/177/2014). It dismissed the Commissioner’s appeal and upheld the First-tier Tribunal’s cancellation of the £300,000 penalty imposed for the sending of marketing text messages.

I appeared in this case, as did James Cornwell (also of the Panopticon fold), so I will not be offering an analysis of the case just now. With any luck, one of my colleagues will be cajoled into doing so before too long.

It is worth pointing out simply that this is the first binding decision on the meaning of the various limbs of s. 55A of the DPA 1998, which contains the preconditions for the issuing of a monetary penalty notice.

Robin Hopkins @hopkinsrobin

Google Spain and the CJEU judgment it would probably like to forget.

May 19th, 2014 by Akhlaq Choudhury

In the landmark judgment in Google Spain SL and Google Inc. v Agencia Espanola de Proteccion de Datos and Gonzales (13th May 2014), the CJEU found that Google is a data controller and is engaged in processing personal data within the meaning of Directive 95/46 whenever an internet search about an individual results in the presentation of information about that individual with links to third party websites. The judgment contains several findings which fundamentally affect the approach to data protection in the context of internet searches, and which may have far-reaching implications for search engine operators as well as other websites which collate and present data about individuals.

The case was brought by Mr Costeja Gonzales, who was unhappy that two newspaper reports of a 16-year-old repossession order against him for the recovery of social security debts would come up whenever a Google search was performed against his name. He asked both the newspaper and Google Spain or Google Inc. to remove or conceal the link to the reports on the basis that the matter had long since been resolved and was now entirely irrelevant. The Spanish Data Protection Agency rejected his complaint against the newspaper on the basis that publication was legally justified. However, his complaint against Google was upheld. Google took the matter to court, which made a reference to the CJEU.

The first question for the CJEU was whether Google was a data controller for the purposes of Directive 95/46. Going against the opinion of the Advocate General (see earlier post), the Court held that the collation, retrieval, storage, organisation and disclosure of data undertaken by a search engine when a search is performed amounted to “processing” within the meaning of the Directive; and that as Google determined the purpose and means of that processing, it was indeed the controller. This is so regardless of the fact that such data is already published on the internet and is not altered by Google in any way.

The Court went on to find that the activity of search engines makes it easy for any internet user to obtain a structured overview of the information available about an individual, thereby enabling them to establish a detailed profile of that person involving a vast number of aspects of his private life. This entails a significant interference with rights to privacy and to data protection, which could not be justified by the economic interests of the search engine operator. In a further remark that will send shockwaves through many commercial operators providing search services, it was said that as a “general rule” the data subject’s rights in this regard will override “not only the economic interest of the operator of the search engine but also the interest of the general public in finding that information upon a search relating to the data subject’s name” (at paras 81 and 97). Exceptions would exist, e.g. for those in public life where “the interference with…fundamental rights is justified by the preponderant interest of the general public in having…access to the information in question”.

However, the Court did not stop there with a mere declaration about interference. Given the serious nature of the interference with privacy and data protection rights, the Court said that search engines like Google could be required by a data subject to remove links to websites containing information about that person, even without requiring simultaneous deletion from those websites.

Furthermore, the CJEU lent support to the “right to be forgotten” by holding that the operator of a search engine could be required to delete links to websites containing a person’s information. The reports about Mr Costeja Gonzales’s financial difficulties in 1998 were no longer relevant having regard to his right to private life and the time that had elapsed, and he had therefore established the right to require Google to remove links to the relevant reports from the list of search results against his name. In so doing, he did not even have to establish that the publication caused him any particular prejudice.

The decision clearly has huge implications, not just for search engine operators like Google, but also other operators providing web-based personal data search services. Expect further posts in coming days considering some of the issues arising from the judgment.

Akhlaq Choudhury

Interfering with the fundamental rights of practically the entire European population

April 10th, 2014 by Robin Hopkins

In the Digital Rights Ireland case, the Grand Chamber of the CJEU has this week declared invalid the 2006 Directive which provides for the mass retention – and disclosure to policing and security authorities – of individuals’ online traffic data. It found this regime to be a disproportionate interference with privacy rights. Depending on your perspective, this is a major step forward for digital privacy, or a major step backwards in countering terrorism and serious crime. It probably introduces even more uncertainty in terms of the wider project of data protection reform at the EU level. Here is my synopsis of this week’s Grand Chamber judgment.

Digital privacy vs national security: a brief history

There is an overlapping mesh of rights under European law which aims to protect citizens’ rights with respect to their personal data – an increasingly important strand of the broader right to privacy. The Data Protection Directive (95/46/EC) was passed in 1995, when the internet was in its infancy. It provides that personal data must be processed (obtained, held, used, disclosed) fairly and lawfully, securely, for legitimate purposes and so on.

Then, as the web began to mature into a fundamental aspect of everyday life, a supplementary Directive was passed in 2002 (2002/58/EC) on privacy and electronic communications. It is about privacy, confidentiality and the free movement of electronic personal data in particular.

In the first decade of the 21st century, however, security objectives became increasingly urgent. Following the London bombings of 2005 in particular, the monitoring of would-be criminals’ web activity was felt to be vital to effective counter-terrorism and law enforcement. The digital confidentiality agenda needed to make space for a measure of state surveillance.

This is how Directive 2006/24 came to be. In a nutshell, it provides for traffic and location data (rather than content-related information) about individuals’ online activity to be retained by communications providers and made available to policing and security bodies. This data was to be held for a minimum of six months and a maximum of 24 months.

That Directive – like all others – is however subject to the EU’s Charter of Fundamental Rights. Article 7 of that Charter enshrines the right to respect for one’s private and family life, home and communications. Article 8 is about the right to the protection and fair processing of one’s personal data.

Privacy and Digital Rights Ireland prevail

Digital Rights Ireland took the view that the 2006 Directive was not compatible with those fundamental rights. It asked the Irish courts to refer the question to the CJEU. Similar references were made in separate litigation before the Austrian courts.

The CJEU gave its answer this week. In Digital Rights Ireland Ltd v Minister for Communications, Marine and Natural Resources and Others (C‑293/12) joined with Kärntner Landesregierung and Others (C‑594/12), the Grand Chamber held the 2006 Directive to be invalid on the grounds of its incompatibility with fundamental privacy rights.

The Grand Chamber accepted that, while privacy rights were interfered with, this was in pursuit of compelling social objectives (the combatting of terrorism and serious crime). The question was one of proportionality. Given that fundamental rights were being interfered with, the Courts would allow the European legislature little leeway: anxious scrutiny would be applied.

Here, in no particular order, are some of the reasons why the 2006 Directive failed its anxious scrutiny test (quotations are all from the Grand Chamber’s judgment). Unsurprisingly, this reads rather like a privacy impact assessment which data controllers are habitually called upon to conduct.

The seriousness of the privacy impact

First, consider the nature of the data which, under Articles 3 and 5 of the 2006 Directive, must be retained and made available. “Those data make it possible, in particular, to know the identity of the person with whom a subscriber or registered user has communicated and by what means, and to identify the time of the communication as well as the place from which that communication took place. They also make it possible to know the frequency of the communications of the subscriber or registered user with certain persons during a given period.”

This makes for a serious incursion into privacy: “Those data, taken as a whole, may allow very precise conclusions to be drawn concerning the private lives of the persons whose data has been retained, such as the habits of everyday life, permanent or temporary places of residence, daily or other movements, the activities carried out, the social relationships of those persons and the social environments frequented by them.”

Second, consider the volume of data gathered and the number of people affected. Given the ubiquity of internet communications, the 2006 Directive “entails an interference with the fundamental rights of practically the entire European population”.

Admittedly, the 2006 regime does not undermine “the essence” of data protection rights (because it is confined to traffic data – the contents of communications are not retained), and is still subject to data security rules (see the seventh data protection principle under the UK’s DPA 1998).

Nonetheless, this is a serious interference with privacy rights. It has objective and subjective impact: “it is wide-ranging, and it must be considered to be particularly serious… the fact that data are retained and subsequently used without the subscriber or registered user being informed is likely to generate in the minds of the persons concerned the feeling that their private lives are the subject of constant surveillance.”

Such a law, said the Grand Chamber, can only be proportionate if it includes clear and precise laws governing the scope of the measures and providing minimum safeguards for individual rights. The 2006 Directive fell short of those tests.

Inadequate rules, boundaries and safeguards

The regime has no boundaries, in terms of affected individuals: it “applies even to persons for whom there is no evidence capable of suggesting that their conduct might have a link, even an indirect or remote one, with serious crime”.

It also makes no exception for “persons whose communications are subject, according to rules of national law, to the obligation of professional secrecy”.

There are no sufficiently specific limits on the circumstances in which the retained data can be accessed by security bodies, on the purposes to which that data can be put by those bodies, or on the persons with whom those bodies may share the data.

There are no adequate procedural safeguards: no court or administrative authority is required to sign off the transfers.

There are also no objective criteria for justifying the retention period of 6-24 months.

The Grand Chamber’s conclusion

In summary, the Grand Chamber found that “in the first place, Article 7 of Directive 2006/24 does not lay down rules which are specific and adapted to (i) the vast quantity of data whose retention is required by that directive, (ii) the sensitive nature of that data and (iii) the risk of unlawful access to that data, rules which would serve, in particular, to govern the protection and security of the data in question in a clear and strict manner in order to ensure their full integrity and confidentiality. Furthermore, a specific obligation on Member States to establish such rules has also not been laid down…”

There was also an international transfer aspect to its concern: “in the second place, it should be added that that directive does not require the data in question to be retained within the European Union…”

This last point is of course highly relevant to another of the stand-offs between digital privacy and national security which looms in UK litigation, namely the post-Snowden litigation against security bodies.

Robin Hopkins @hopkinsrobin

The Google/Safari users case: a potential revolution in DPA litigation?

January 16th, 2014 by Robin Hopkins

I posted earlier on Tugendhat J’s judgment this morning in Vidal-Hall and Others v Google Inc [2014] EWHC 13 (QB). The judgment is now available here – thanks as ever to Bailii.

This is what the case is about: a group of claimants say that, by tracking and collating information relating to their internet usage on the Apple Safari browser without their consent, Google (a) misused their private information, (b) breached their confidences, and (c) breached its duties under the Data Protection Act 1998 – in particular, under the first, second, sixth and seventh data protection principles. They sought damages and injunctive relief.

As regards damages, “what they claim damages for is the damage they suffered by reason of the fact that the information collected from their devices was used to generate advertisements which were displayed on their screens. These were targeted to their apparent interests (as deduced from the information collected from the devices they used). The advertisements that they saw disclosed information about themselves. This was, or might have been, disclosed also to other persons who either had viewed, or might have viewed, these same advertisements on the screen of each Claimant’s device” (paragraph 24).

It is important to note that “what each of the Claimants claims in the present case is that they have suffered acute distress and anxiety. None of them claims any financial or special damage. And none of them claims that any third party, who may have had sight of the screen of a device used by them, in fact thereby discovered information about that Claimant which was detrimental” (paragraph 25).

The Claimants needed permission to serve proceedings on the US-based Google. They got permission and served their claim forms. Google then sought to have that service nullified, by seeking an order declaring that the English court has no jurisdiction to try these particular claims (i.e. it was not saying that it could never be sued in the English courts).

Tugendhat J disagreed – as things stand, the claims will now progress before the High Court (although Google says it intends to appeal).

Today’s judgment focused in part on construction of the CPR rules about service outside of this jurisdiction. I wanted to highlight some of the other points.

One of the issues was whether the breach of confidence and misuse of private information claims were “torts”. Tugendhat J said this of the approach: “Judges commonly adopt one or both of two approaches to resolving issues as to the meaning of a legal term, in this case the word “tort”. One approach is to look back to the history or evolution of the disputed term. The other is to look forward to the legislative purpose of the rule in which the disputed word appears”. Having looked to the history, he observed that “history does not determine identity. The fact that dogs evolved from wolves does not mean that dogs are wolves”.

The outcome (paragraphs 68-71): misuse of private information is a tort (and the oft-cited proposition that “the tort of invasion of privacy is unknown in English law” needs revisiting) but breach of confidence is not (given Kitetechnology BV v Unicor GmbH Plastmaschinen [1995] FSR 765).

Google also objected to the DPA claims being heard. This was partly because they were raised late; this objection was dismissed.

Google also said that, based on Johnson v MDU [2007] EWCA Civ 262; (2007) 96 BMLR 99, financial loss was required before damages under section 13 of the DPA could be awarded. Here, the Claimants alleged no financial loss. The Claimants argued against the Johnson proposition: they relied on Copland v UK 62617/00 [2007] ECHR 253, argued for a construction of the DPA that accords with Directive 95/46/EC as regards relief, and argued that – unlike in Johnson – this was a case in which their Article 8 ECHR rights were engaged. Tugendhat J has allowed this to proceed to trial, where it will be determined: “This is a controversial question of law in a developing area, and it is desirable that the facts should be found”.

If the Johnson approach is overturned – i.e. if the requirement for financial loss is dispensed with, at least for some types of DPA claim – then this could revolutionise data protection litigation in the UK. Claims under section 13 could be brought without claimants having suffered financially due to the alleged DPA breaches they have suffered.

Tugendhat J went on to find that there were sufficiently serious issues to be tried here so as to justify service out of the jurisdiction – it could not be said that they were “not worth the candle”.

Further, there was an arguable case that the underlying information was, contrary to Google’s case, “private” and that it constituted “personal data” for DPA purposes (Google say the ‘identification’ limb of that definition is not met here).

Tugendhat J was also satisfied that this jurisdiction was “clearly the appropriate one” (paragraph 134). He accepted the argument of Hugh Tomlinson QC (for the Claimants) that “in the world in which Google Inc operates, the location of documents is likely to be insignificant, since they are likely to be in electronic form, accessible from anywhere in the world”.

Subject to an appeal from Google, the claims will proceed in the UK. Allegations about Google’s conduct in other countries are unlikely to feature. Tugendhat J indicated a focus on what Google has done in the UK, to these individuals: “I think it very unlikely that a court would permit the Claimants in this case to adduce evidence of what Mr Tench refers to as alleged wrongdoing by Google Inc against other individuals, in particular given that it occurred in other parts of the world, governed by laws other than the law of England” (paragraph 47).

Robin Hopkins @hopkinsrobin