EDPB's problematic opinion on consent-or-pay models
The seemingly narrow rejection of the consent-or-pay model for large online platforms relies on shoddy argumentation, spelling trouble for data subjects when it comes to smaller controllers
The EDPB has recently released its Opinion 08/2024 on Valid Consent in the Context of Consent or Pay Models Implemented by Large Online Platforms. I was against this Opinion from the moment it was requested, because I knew it would be drafted under the pressure of deadlines and would be limited to large online platforms (LOPs). Additionally, the work on the guidelines, which allow for course-correction and stakeholder feedback, was already underway. Now that this Opinion has been released, the EDPB will needlessly duplicate — though they didn’t have a choice — the work on those guidelines. Privacy professionals don’t have an unlimited amount of time on their hands to read all these documents, which may conflict with each other, and the national guidelines remain in place and will probably not be rescinded in any event. The Opinion sets a precedent, and the EDPB is now somewhat backed into a corner in its future guidance and decision-making. I really don’t see what this has achieved, and I’m not sure the DPAs got what they were hoping for out of it either. This is probably the most crucial GDPR legal debate, yet neither the EDPB, which often displays remarkable lucidity, nor the DPAs seem to have strategized and thought this through. The result is some of the most lackluster work the EDPB has put out in recent memory, and I have a pretty high opinion of them. But maybe it’s the DPAs playing politics — protecting the news media sector (and maybe SMEs) more specifically — when they should be applying the law.
Detriments and the Schrödinger's commodity
If refusing consent (to the use of personal data for targeted behavioral ads) means having to take the time to enter your financial details, having that data retained for a decade, spending money (however small the amount), perhaps budgeting it and otherwise keeping track of all these payments (not to mention the receipts flooding your inbox!), is that a detriment? Well, for the EDPB the answer to that question is apparently a very tough nut to crack. The Opinion seemingly makes sure not to exclude (i) those with tons of money and time on their hands and (ii) situations where even consenting entails payment and refusing consent is only marginally (so as not to affect the freedom of choice) pricier, instead of acknowledging that charging even a cent is enough to deter most from refusing to consent:
In order to avoid detriment within the meaning of Recital 42 GDPR and ensure that data subjects have the possibility to make a genuine choice, the manner in which the service is offered as well as the fee (if any) should not be such to effectively inhibit data subjects from making a free choice, for example by nudging the data subject towards consenting. Therefore, the fee in question should not be inappropriately high, which is further addressed in Section 4.2.1.4.2.
If the data subject refuses to consent or withdraws consent, and does not pay the requested fee, they would not be able to use the service, which may [oh, really?] constitute a detriment for the data subject. […]
They acknowledge that personal data is not a commodity (Recital 24 of the Digital Content Directive), yet pontificate over 42 pages about how tying the refusal to give consent to payment requires balancing and a “case-by-case” analysis. Gosh, I really wonder, why does refusing consent come with a price tag? Could it have to do with the economic interests of the controller?1 If so, doesn’t that itself acknowledge that data can be a commodity? It’s absurd to say that personal data is not a tradable commodity2 and then immediately switch to discussing pricing:
In respect of the imposition of a fee to access the 'equivalent alternative' version of the service, the EDPB recalls that personal data cannot be considered as a tradeable commodity, and controllers should bear in mind the need of preventing the fundamental right to data protection from being transformed into a feature that data subjects have to pay to enjoy. Controllers should assess, on a case-by-case basis, both whether a fee is appropriate at all and what amount is appropriate in the given circumstances, taking into account possible alternatives to behavioural advertising that entail the processing of less personal data as well as the data subjects' position.
Refusing consent shall not be harder than giving it
The EDPB makes a somewhat sweeping and unsubstantiated (though likely correct) assumption that
Controllers must ensure that data subjects have a real freedom of choice when asked to consent to processing of their personal data, and they may not limit data subjects’ autonomy by making it harder to refuse rather than to consent. […]
The GDPR contains no explicit requirement that refusing consent shall not be harder than giving it, yet the EDPB addresses this in a single sentence and a footnote, which says that the same applies to withdrawing consent (something the GDPR does explicitly require), pointing to Art. 7(2)3. No further analysis needed, apparently. But when it comes to the obvious question of whether consent can be freely given when refusing it is conditioned on payment, we get excessive word salad that doesn’t even address some of the crucial arguments against consent-or-pay. But back to the previous point: the EDPB is even more explicit in another footnote:
Whilst consenting can often be done by a single action, refusing consent could potentially require the data subject to go through a longer and more cumbersome payment process, possibly connected with further data processing activities.
Yet again, this is where the analysis stops. No mention that this point alone is enough to kill pretty much all consent-or-pay models where consenting is free but refusal comes with a receipt.
The third option
Consent has to be informed, but the EDPB doesn’t point out that data subjects could easily be confused and may — just to get to the content they want — consent anyway when presented with three options: one to consent to behavioral advertising, a paid (ad-free) tier and a free alternative that uses little or no personal data. It’s not a stretch to assume that users could be confused as to why they’re being offered a paid option for something they can (seemingly) just get for free.
I have a feeling the reason they don’t mention it is that the EDPB is fixated on controllers “considering”4 the offer of a third, free-of-charge alternative that uses little or no personal data, as they already stated in their response to the Cookie Pledge5.
This alternative must entail no processing for behavioural advertising purposes and may for example be a version of the service with a different form of advertising involving the processing of less (or no) personal data, e.g. contextual or general advertising or advertising based on topics the data subject selected from a list of topics of interests. […]
While I agree that, in principle, this is the optimal solution and that such an equivalent (!) alternative (in place of an unnecessary paid tier) is desirable (legally mandatory, even), I’m afraid the EDPB is confusing lex lata with lex ferenda. The GDPR does not contain a “just a little personal data is fine” provision that would exempt such ad targeting from needing to obtain consent, nor does the EDPB make clear what sort of personal data processing that would even entail6. Recital 68 DSA states in no uncertain terms that the GDPR requires consent for the use of personal data in ad targeting. I’m skeptical that such an exemption can be read into legitimate interests under the GDPR, and there is absolutely no room for it under the ePrivacy Directive anyway7. The recently passed Political Advertising Regulation also contains no such exemptions, and the data subject must provide consent for the use of personal data specifically for the purpose of targeting and delivery of political ads.
This is why, in my opinion, a European Advertising Code is necessary to provide for limited, targeted and specific exemptions to the need to obtain consent when processing personal data for the targeting of contextual ads. This should, at a minimum (and perhaps at a maximum), include approximate8 and ephemeral location data, which is often inferred from the IP address and which would otherwise again require consent under the ePrivacy Directive. Other exempted data points could also include the data subject’s approximate age group and gender, but those may not be as ephemeral.
There are technical ways around these restrictions while still providing for at least some targeting. Apple’s Private Relay is a VPN on steroids, a hybrid between a regular VPN and anonymity networks like Tor. Importantly, when users connect to it, they are assigned an IP address within their jurisdiction. While Private Relay doesn’t provide the same level of unidentifiability as the Tor network, it may still be enough to escape the GDPR’s scope (but perhaps not that of the ePrivacy Directive). The point is that approximate location targeting (with the resolution of an entire jurisdiction) is maintained without the publisher or the ad network touching IP addresses, which may otherwise be considered personal data. The ad industry often sees Private Relay as a problem because it makes IP address tracking hard, if not impossible — though IP address tracking is a very bad tracking technology in general — but in this case it could actually be a solution. Google is also working on a roughly similar IP address masking service.
We don’t have enough case law on legitimate interests, and perhaps the CJEU is willing to read some exemptions into that basis, but even then Art. 21(2) GDPR provides for an unqualified and free-of-charge9 opt-out from such processing. Even assuming legitimate interests can be used (and most people don’t opt out), the other laws I previously mentioned do not seem to offer such off-ramps for reasonably well-targeted contextual ads. If we want contextual ads to be workable in practice and to show politicians that a more privacy-preserving and economically viable internet is possible, such off-ramps are necessary.
Processing principles and other aspects of consent
Credit where credit is due, the analysis surrounding network and lock-in effects is pretty solid, though I would’ve liked it even more if interoperability had been mentioned as a mitigating factor. But a lot of what remains uses plenty of weasel words (“likely”, “may”) and analysis of other GDPR provisions that shows just how eager the EDPB is to dance around this problem instead of stating the obvious clearly and in no uncertain terms: the GDPR does not permit the consent-or-pay model except in circumstances where other rights — like freedom of the press and its ability to finance itself — outweigh the right to data protection, and such an interpretation is needed to prevent the GDPR from being declared invalid.
Some of the analysis around consent and GDPR principles, chiefly fairness, is also interesting but — despite clearly pointing in that direction — bewilderingly does not lead the EDPB to conclude that it’s near impossible for controllers to claim that they can employ the consent-or-pay model:
[…] Key elements of the fairness principle include, among others, the need for the processing to correspond with data subjects’ reasonable expectations, the need for the controller to not unfairly discriminate against data subjects10 or exploit their [financial?] needs or vulnerabilities, the need to avoid or account for power imbalances, and the need to avoid any deceptive or manipulative language or design. In this respect, the EDPB recalls the need to avoid deceptive design patterns. Additionally, the controller should take into account the processing’s wider impact on individuals’ rights and dignity, and grant the highest degree of autonomy possible to data subjects. This is key for controllers to bear in mind especially whenever the processing they engage in is particularly intrusive. The EDPB also notes that fairness can act as an easily-understandable touchstone or reference point for controllers when evaluating a ‘consent or pay’ model. In this regard, it is important that controllers are able to demonstrate why they consider certain choices are in line with the principle of fairness as described in the previous paragraph. This is particularly important if the controller narrows down the data subject's range of choices (e.g. by not providing a Free Alternative Without Behavioural Advertising, as described below in Section 4.2.1.1) or which may risk unduly influencing the data subject's choice (e.g. by charging a fee that is such to effectively inhibit data subjects from making a free choice).
Additionally, as previously observed by the EDPB, where a clear imbalance of power exists, consent can only be used in ‘exceptional circumstances’ and where the controller, in line with the accountability principle, can prove that there are no ‘adverse consequences at all’ for the data subject if they do not consent, notably if data subjects are offered an alternative that does not have any negative impact. In the context of this Opinion, such an alternative could [not should?] be the offering of the Free Alternative Without Behavioural Advertising.
As a power imbalance sufficient for the EDPB to consider it a detriment does not exist in many everyday situations, I suppose data subjects may soon be left to scour an app store for half an hour before they find a QR code reader that doesn’t spy on them without requesting payment in return. They make an important observation that such an imbalance of power is especially an issue when the data subject is a child…
Additionally, the target or predominant audience of the platform is an element to be considered. For example, if the platform is primarily directed at children, through the design or marketing of the service, or it is used predominantly by children or other vulnerable persons, this may also lead to a clear imbalance between the controller and the data subjects.
…but, once more, they don’t go the extra mile — in this case to consider that children often also use services where kids are not the main audience. Some service-splitting (like YouTube Kids or Instagram for Kids) is possible, perhaps based on the kinds of content presented, but kids may inevitably be confronted with consent screens that are otherwise meant for adults. In some ways that’s inevitable with consent in other scenarios as well, and perhaps controllers are just supposed to bite the bullet when there’s a mismatch. Determining whether the data subject is a child may itself be a violation of the principle of data minimization, which shows how the GDPR may work back on itself to prevent such circumventions. Nevertheless, if consent-or-pay is accepted, kids and teenagers are especially likely to suffer a detriment (and the likely loss of the service) if they don’t consent, as they simply don’t have the financial means to pay for the tracking-free option. This obvious fact is not mentioned, yet far less important factors are (and repeatedly). (Yes, the DSA bans targeting minors with ads based on profiling, but that is not a complete ban, either in terms of the personal data processed for targeting or the entities and activities covered.)
Equivalent means actually equivalent
What I was positively surprised about is how strongly the Opinion came out against another tactic that will likely be used to evade the GDPR after (if?) consent-or-pay is defeated: tying the refusal to give consent to more ads, the loss of certain (non-essential) features and functionality, or other detriments. Previous guidelines on consent indicated that the EDPB was ready to accept at least some “incentives”, but this Opinion seems somewhat less forgiving if those incentives alter the functioning of a service, though the versions don’t have to be absolutely identical11:
If the Alternative Version differs from the Version With Behavioural Advertising only to the extent necessary as a consequence of the controller not being able to process personal data for behavioural advertising purposes, it can be in principle regarded as equivalent.
In other cases, the assessment can depend, taking the Version With Behavioural Advertising as point of departure, on whether the Alternative Version in essence contains the same elements and functions. While equivalence exists if the Alternative Version includes in principle the same features and functions (functional equivalence), the Alternative Version and the Version with Behavioural Advertising do not have to be absolutely identical.
If, compared to the Version With Behavioural Advertising, the Alternative Version is not of a different or degraded quality, and no functions are suppressed (unless any changes are a direct consequence of the controller not being able to process personal data for the purposes for which it sought consent), then the Alternative Version can likely be considered to be genuinely equivalent to the Version With Behavioural Advertising.
The more the Alternative Version differs from the Version With Behavioural Advertising, the less likely it is for the Alternative Version to be considered as genuinely equivalent, although this remains a case-by-case assessment.
Equivalence - meaning ‘having the same value’ - points in two directions. On one hand, as indicated above, if the Alternative Version was of a lower quality or is less rich in functionalities than the Version With Behavioural Advertising, users would not be presented with a real choice.
On the other hand, the possibility of including additional functionalities in the Alternative Version should be evaluated with caution: this is because a genuine equivalence between the versions of the service, as described above, has to be maintained, and users need to be able to make a genuine choice.
This doesn’t rule out alterations completely, but it’s a relief that the EDPB puts its foot down at least here, whereas I thought this would be a space where they’d try to insert more balancing and complexity. In a footnote they also mention Recitals 36 and 37 DMA, which almost explicitly don’t permit this for gatekeepers:
See Article 5(2) DMA. In addition, see Recital 36 of the DMA: ‘[t]o ensure that gatekeepers do not unfairly undermine the contestability of core platform services, gatekeepers should enable end users to freely choose to opt-in to such data processing and sign-in practices by offering a less personalised but equivalent alternative, and without making the use of the core platform service or certain functionalities thereof conditional upon the end user’s consent.’ Recital 37 of the DMA adds that: ‘[t]he less personalised alternative should not be different or of degraded quality compared to the service provided to the end users who provide consent, unless a degradation of quality is a direct consequence of the gatekeeper not being able to process such personal data or signing in end users to a service’.
But then again, the DMA also says the alternative “should not be different”, which it clearly is if you charge for access to it — a point conveniently ignored by the EDPB. I wish they weren’t as apologetic elsewhere and had left more wiggle room here, as some reasonable compromises could be struck (especially in other jurisdictions).
Enforcement and broader outlook
Consent is not some bureaucratic box-ticking exercise to placate the legal department while the controller more or less still does everything as before. It’s an essential component and safeguard of European data protection regimes that is meant to restrict processing when data subjects are not comfortable with it. What is the point of narrowly (yet correctly) interpreting every other legal basis, only to then “course-correct” by devaluing consent to a pro-forma instrument that only results in consent fatigue? Even more absurdly, the EDPB was supportive of a ban on behavioral ads but for some reason feels the need to mince words when it comes to making a far less radical consent requirement a reality.
The Norwegian Privacy Appeals Board also dangerously suggested that a consent-or-pay model may be appropriate on Grindr of all places. If consent is this easy to defeat, then why bother with it at all? Sorry to be blunt, but the EDPB has frankly lost its mind if it thinks DPAs can police these (more or less non-existent) nuances. That is true even if Art. 5(2) and Art. 7(1) provide for an inverse burden of proof, though I don’t know how much national courts care about it, given that even the General Court has trouble applying it (EDPS v. SRB). Industry could soon pick up on this weakness and, before you know it, the EU could be (in some areas at least) reduced to a US-style data-sharing free-for-all.
Obviously there’s a plethora of other arguments against consent-or-pay that the EDPB does not address but that I intend to in the consultation on the guidelines. I understand that opinions and guidelines are not extensive legal analyses, but it seems worth pointing out that the legislator didn’t intend for us to live in a consent-or-pay world, nor are the DPAs properly resourced and equipped to handle that world. It doesn’t seem to have dawned on the EDPB that watering down consent this way basically kills any viable centralized consent control mechanism (essentially browser signals) in the ePrivacy Regulation. If they’re so worried about the news media and smaller controllers, why did they sign onto banning behavioral ads in the first place? Are they worried about the CJEU invalidating the GDPR because of a Charter of Fundamental Rights (CFR) breach? Wouldn’t that admit that a ban on behavioral ads would infringe on the CFR even more? I don’t get these internal contradictions.
Maybe I’m a little too harsh and the EDPB was spooked by Meta v. Bundeskartellamt, but the Opinion finds its way around that and correctly points out that other language versions of the judgment don’t contain wording as strong as “if necessary for an appropriate fee”. The Opinion starts off pretty badly but improves towards the end, though the EDPB drags in lots of other GDPR requirements that aren’t directly related to consent-or-pay. The analysis is repetitive and superficial, without taking any hard stances on the issue.
This Opinion should alarm anyone in civil society who has been working in or around data protection. There was initial reluctance from noyb, but I really appreciate how they’ve come around to agreeing that consent-or-pay is almost categorically banned by the GDPR. If the EDPB is this lenient, then god help us when they water this down even further for smaller and less important products and services. In my assessment, the EDPB is far from ready (or even willing) to take the jump and (almost completely) rule out consent-or-pay, which is why it’s that much more important to lobby it (before and) after the consultation on the upcoming guidelines opens. This is arguably a far weaker stance on the issue than the one they took in their Guidelines 05/2020 on consent under Regulation 2016/679, which (apart from their more lax view on “incentives”) are out of step with this Opinion.
The first-mover advantage of whoever gets their case before the Court of Justice first should not be underestimated. The Court caused a lot of trouble with its uncalled-for “if necessary for an appropriate fee” obiter dictum in Meta v. Bundeskartellamt. More may be in store, as judges were asking consent-or-pay questions in the Schrems v. Facebook Ireland Ltd. (C-446/21) hearing in December. The AG Opinion in that case is due April 25th. The European Commission is also investigating Meta’s implementation of the consent-or-pay model under Art. 5(2) DMA and Art. 38 DSA12, so they must be a target of lobbying as well. Meta’s appeal on that front will likely land before the General Court next year and would almost certainly be appealed to the Court of Justice afterwards. Hopefully Schrems or others can get their case before the Court of Justice before then. Otherwise, civil society should think about submitting informal interventions before the Court.
Further reading
The fundamental rights of the data subjects also, as a rule, outweigh the economic interests of the controller (see C-131/12), which is another fact not mentioned by the EDPB.
I think it actually sometimes is and should be in the context of Art. 6(1)(b) but that’s a discussion for another time.
This is a small slip up by the EDPB as the relevant provisions are actually contained in Art. 7(3), not 7(2).
The EDPB apparently thinks they don’t have to. The Opinion uses the word “consider” in this and similar contexts so many times that it’s starting to resemble ICO guidance. I guess we have to “pretty please” ask controllers not to infringe our rights now? (I suspect a document editing war took place over this wording.)
The pledge is dead in the water because it essentially just spells out the existing legal requirements, which the industry was, surprise surprise (to no one who has paid attention to this field), not too keen to sign onto.
I’m guessing they mean advertising that is not behavioral, but despite an attempt at a definition in Section 2.1.2 of the Opinion, it’s not exactly clear what would and wouldn’t fall into that category.
The ePrivacy Directive itself needs amendments (or clarifications in the recitals), as it may even require consent for the transmission (using IP addresses, which are communications metadata) and placement of ads onto terminal equipment. Art. 5(3) has been (correctly) read very restrictively and it’s not even clear if sidebars, widgets and other content the user has not explicitly requested would require consent either.
“Approximate” is also a relative term and, given sufficient precision, such data may even be sensitive data since it could indicate religious affiliation (e.g. the data subject lives in a predominantly Muslim neighborhood) or ethnicity (e.g. the data subject lives in a Roma community). This is why the legislator should provide a more specific measurement of maximum resolution for such data. Alternatively, data subjects could declare their approximate location themselves as they may want to receive ads that are not wholly irrelevant to them (see Erik Bugge’s ideas).
Though GDPR does not explicitly say that other conditions (such as “without detriment”) can be attached to that opt-out. Moreover, Art. 21(2) may be read as requiring a free-of-charge opt-out even when the legal basis is consent, but that is not mentioned in the Opinion.
[Such as discriminating between wealthy and poor data subjects.]
What sort of divergences are permissible? Perhaps changing the color of a button, but controllers will try things that are much more brazen.
The EDPB has also picked up on this in their Opinion. It seems ad systems are DSA recommender systems in the mind of the European Commission. But Art. 38 DSA doesn’t require the less personalized option to be free of charge or even maintained as the default, though maybe that can be read into the DSA through its provisions on dark patterns and risk mitigation. I think the Art. 5(2) DMA (partial) duplication of the GDPR’s requirements makes sense, not only for clarity and legal certainty, but also because violations of those provisions are much more harmful when committed by gatekeepers and must thus be sanctioned much more vigorously. However, the application of Art. 38 DSA to ad systems seems like an unnecessary duplication.