There's rarely a quiet week in data protection — and this one was no exception. Below are three developments from the past seven days that caught my eye.

Story #1: Death of the one stop shop?

Have reports of the death of the GDPR's one stop shop been greatly exaggerated, in light of guidance issued last week by the European Data Protection Board? No, they have not.

The EDPB's guidance (link here) means that it will now be more difficult for some organisations, and close to impossible for others, to rely on the OSS mechanism that has become a controversial feature of GDPR enforcement in recent years.

That's because the guidance makes clear that an organisation's place of central administration (i.e., its main establishment) in the EU must (1) take the decisions on the purposes and means of personal data processing and (2) have the power to have those decisions implemented. In practice, this means companies can no longer use a serviced office in the EU, or defer the ultimate decisions on the processing of personal data to the U.S., and expect to be able to rely on the OSS.

Although the law hasn't changed, what makes this different is that the guidance doesn't pull its punches.

The EDPB warns companies not to forum shop (using those words) in order to identify a main establishment. Companies must also be able to provide evidence that decision-making powers are exercised in the EU. And the fact that a company has a main establishment for certain processing activities doesn't mean that all processing conducted by that entity falls within the scope of the OSS (i.e., where the decisions on other processing are taken elsewhere).

Regulators and privacy interest groups will now be more emboldened than ever to challenge — or, possibly, ignore — a company's assertion that it is subject to the jurisdiction of a lead supervisory authority. And the regulator whose authority will be challenged most often will inevitably be the Irish Data Protection Commission.

Indeed, it doesn't seem coincidental that the EDPB opinion was requested by the French supervisory authority, which has a history of finding creative solutions to avoid the OSS. These include bringing enforcement actions under the ePrivacy Directive and using the GDPR's urgency procedure to assert jurisdiction over companies whose lead regulator is — also not coincidentally — the DPC.

The power dynamics between EU national regulators make for a fascinating, albeit one-dimensional, story. The DPC has spent five years being criticised by its counterparts for its alleged unwillingness to bring enforcement actions, its sluggishness in bringing the actions that it does bring, and its tendency to issue lighter sanctions than the EDPB would like.

And now the EDPB's guidance will further chip away at the DPC's authority — whether or not that was the intention. Companies under the DPC's supervision should pay particular attention to the guidance, but it is essential reading for organisations across the EU — and particularly for those with U.S. parent companies.

Story #2: AI and recruitment

In the UK, it's application season for aspirant lawyers, which got me thinking about recruitment and — of course — data protection (and AI).

The BBC ran a story last week about how AI hiring tools may filter out the best candidates. The piece (link here) is a little unbalanced, but it's still worth a read to understand the ways that businesses are using AI for recruitment and how individuals are challenging those practices.

At the outset, it's worth saying that there has always been bias in recruitment. Clearly, that's undesirable — and part of the reason that organisations are turning to AI and automated decision-making. But expecting technology — particularly a nascent one — to eliminate bias altogether, or to operate sight unseen, is wishful thinking.

Software can help to eliminate the biases you're aware of — conscious and unconscious alike, even if awareness of the latter may not seem axiomatic. And it can do this well, often very well. But what about the "unknown unknown" biases (to borrow from Rumsfeld)? And the biases that may be inherent in the training data?

In spite of these challenges, the use of AI in recruiting — and the HR function generally — is going to increase. The BBC cites a recent IBM survey which found that 42% of respondents were already using AI screening and a further 40% were considering doing so.

With that in mind, what are some of the data protection considerations for using AI in recruitment?

  1. Transparency. What are you telling applicants about how you'll use their data, how the software will make decisions about them and the consequences of those decisions? This is not always straightforward, but as always, people are less likely — and will have fewer grounds — to complain if they've been made aware of what to expect.
  2. Risk assessments. One is always conscious of avoiding death by data protection impact assessment, but conducting a robust DPIA is critical (and legally required).
  3. Data subject rights. The GDPR gives individuals the right not to be subject to solely automated decisions (including profiling) that have legal or similarly significant effects on them. That makes it important to understand the level of human involvement in the process. Will the AI sort applications into categories, all of which are reviewed by a member of the HR team? Or will candidates be screened out automatically if they don't meet your educational background requirements? In any event, you will need internal processes in place that allow applicants to request human intervention or to contest the decision.
  4. Vendor management. Doing your diligence is vital. What are you buying, and from whom? What is the source of the training data? How are the vendor's models tested and verified? What are their security measures? If you're not comfortable, do not proceed.

Story #3: Settling data subject complaints

I came across an interesting regulatory enforcement action in Belgium involving a fact pattern with which most companies will be familiar.

The scenario goes something like this. An individual makes a subject rights request and/or complains to the regulator about the controller, often in the context of a wider (e.g., employment) dispute. Ultimately, the parties settle their differences — a condition of which is that the individual withdraws their request and/or complaint.

In the Belgian case, an individual made a number of personal data deletion requests to a website, which were not honoured. The individual complained to the Belgian regulator, the APD, which ordered the controller to comply — but seemingly it did not.

Some time later, the individual informed the APD that his data had been removed from the website, such that further regulatory action was no longer needed. The APD decided otherwise — on the basis that once it had found the individual's claim admissible, it needed to assess whether the conduct complained of constituted a GDPR breach.

In practice, when informed that a complaint has been resolved, regulators will often leave things there. But as the Belgian example shows, that may not always be the case.

It's self-evident that regulatory authorities have an independent remit — indeed, an obligation — to investigate GDPR non-compliance. As such, it's important to be aware that they may find that honouring the subject rights request, or even reaching a financial settlement, did not cure the underlying issue.

It's a bit like when my kids are at each other's throats but, just as quickly, they make up. I may still want to know what's gone on — and to dole out a sanction, if appropriate.

So what are the learnings here (from the APD's decision, not my parenting skills)?

Clearly, it's preferable not to get to the point where the individual has complained to the regulator because you have ignored their request or complaint. But given the degree to which individuals routinely weaponise data protection in (unrelated) disputes, it won't always be possible to stop them going to the regulator.

But starting on the back foot (i.e., because you ignored the individual's request without good grounds for doing so) is likely to make the regulator more willing to pursue your case — or at least to look at it more closely. And if the parties settle, think about how to communicate that to the regulator. Will it be done by the controller, the individual, or both?

I've seen all three options work well, but it's a case-specific decision, taking into account the nature of the complaint (and the potential or actual non-compliance in question), the status of the case and the tenor of previous communications with the regulator.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.