The Graveyard of Past Consents
User consent is one of the foundational requirements of good data privacy. In simple terms, if users are asked to share personal data, they should understand why that data is being requested and how it will be used. If they choose to share it, they should be empowered to exercise control over their information.
The question of what constitutes sufficient consent, and whether or when consent should be required, has been the topic of ongoing discussion among companies, academics, and lawmakers for years. A broad swath of new(ish) global privacy laws, including Europe’s GDPR, California’s CCPA, Brazil’s LGPD, and China’s PIPL, has expanded the requirements for what constitutes a valid consent and imposed significant penalties on organizations that don’t comply with those requirements.
To the detriment of both users and organizations, however, the current approach to consent focuses almost exclusively on the moment that the initial consent is obtained. User interfaces are built to offer users only a binary choice of whether or not to share their data. Once that decision is made, the user relinquishes control over their data, loses visibility into how their data may continue to be used, and in most cases may soon forget that they even shared their data in the first place. This myopic focus on only the initial granting of consent, and the lack of technological tools that would enable users to retain real visibility and control over their data, gives rise to an insecure and deeply problematic state of affairs for users.
Perspective Matters
To appreciate the full extent of the consent problem, it can be helpful to shift our perspective away from a single interaction between a user and an organization, which is the standard framing, and to look more broadly at how users share data in the real world. In our current digital environment, users share data, create new accounts, and otherwise engage with various online systems on a daily basis. They order lunch from a restaurant and share their name, address, and preferences with the restaurant (or a third-party delivery service), and likely share payment information with a third-party payment provider. If they order from a different restaurant at dinner, the process repeats, potentially with a new delivery service and a new payment provider. So it goes throughout an average day, week, month, and year. Buy a couch, sign up for a VPN, open a bank account, buy a t-shirt, book a flight, pay your gas bill, rent a hotel room, buy some new books, or open an account at the library — in every case, a user is asked to share their personal data in order to make the service work.
In each case, the user has to make a choice at the moment of engagement — the moment of ordering, for example, or the moment of creating a new account — about whether or not to consent to the use of their information. But once that consent is provided, for most practical purposes, that consent dies and disappears forever. Over weeks, months, and years of regular online activity, users develop what we might call a “graveyard of past consents” — an ever-growing accumulation of consents previously provided, buried in places they can scarcely recall. Where did you buy that t-shirt? What conditions did you agree to when you created an account to buy a couch? What did that hotel’s Privacy Policy say about the third parties it might share your data with? Are you still using that library account? The result, for all of us, is that when taken across the span of all the myriad products we buy and the services we use in our increasingly digital lives, we’re left with our personal data scattered — out of sight, out of mind, and for all practical purposes entirely out of our control.
Real Transparency and Control Have Remained Illusory
Privacy laws aren’t the cause of this unfortunate state of affairs, but by themselves they’ve been unable to solve the problem. The GDPR requires that organizations give users the right to withdraw their consent at any time and stipulates that it should be as easy to withdraw consent as to provide it in the first place. Like a number of other privacy laws, the GDPR also provides users with “data subject rights”: the ability to exercise a measure of control over their data after they’ve shared it. Users can ask an organization to delete their personal data, to correct it, or to provide a copy of it.
In the framework of our current digital environment, however, the ability to exercise the control that these rights are meant to provide is severely curtailed by some very basic operational facts. Once you’ve consented, there’s no easy way to see and track — across multiple organizations, services, and products — what consent you provided, what data you shared, and what’s being done with your data. The best-case scenario within the current framework is that a user will actively remember all the myriad companies and organizations with whom they’ve shared their data and then systematically reach out to each of those companies to request a file containing their data, or ask that company to delete their data. In the case of a deletion request, they would be left to hope that these organizations have actually followed their instructions, without any real way to confirm the actions taken. Very few users exercise their data rights, and those who do achieve little in return for their efforts.
This isn’t to criticize the creation of these rights, by any means. It’s to say, instead, that we need to do better in enabling these rights so they can be exercised in a simple way that delivers useful results. Our failure to make data truly visible and to place it under ongoing control has real costs, both for the individuals who share their data and for the organizations and governments with whom they share it.
Building for Better Privacy
The solution to this problem lies not in new laws and regulations, nor in improved platform messaging, nor in user education. What the solution requires is technology that’s actually designed and engineered to give users visibility and control over their data, and that extends that visibility and control beyond the moment of consent and throughout the lifecycle of any given piece of data. What we need, in other words, is technology that breathes new life back into our data.
There are good operational models already up and running. Tim Berners-Lee’s Solid project is a web-standards-based specification that lets people use a personal data store (a Pod) to control if, when, how, and with whom their data is shared. Instead of users needing to independently share their data with every company that wants or needs a copy of it, the Solid Pod model lets users store their data in their own Pod and then control how that data gets shared. A user’s Pod can thus serve as a control center for personal data, allowing the Pod-holder to see the organizations they’re sharing data with, and the particular data elements they’re sharing, and then make ongoing choices about that data.
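To make the model concrete, here is a minimal sketch of what that flow can look like using Inrupt’s open-source @inrupt/solid-client library. This is an illustration rather than any particular production implementation: the Pod URL and the organization’s WebID are hypothetical, and the user is assumed to have already logged in through @inrupt/solid-client-authn-browser.

```typescript
// A minimal sketch of the Pod model: the data lives in the user's own Pod,
// and the user grants an organization access to it. The Pod URL and WebID
// below are hypothetical, and an authenticated session is assumed.
import {
  buildThing,
  createSolidDataset,
  createThing,
  saveSolidDatasetAt,
  setThing,
  universalAccess,
} from "@inrupt/solid-client";
import { fetch } from "@inrupt/solid-client-authn-browser"; // authenticated fetch

const RESOURCE = "https://pod.example.com/alice/delivery"; // hypothetical Pod URL

async function shareDeliveryAddress(): Promise<void> {
  // Write the address into the user's Pod, not the restaurant's database.
  const address = buildThing(createThing({ name: "address" }))
    .addStringNoLocale("https://schema.org/streetAddress", "42 Example Street")
    .build();
  const dataset = setThing(createSolidDataset(), address);
  await saveSolidDatasetAt(RESOURCE, dataset, { fetch });

  // Grant the delivery service's WebID read-only access. The grant lives
  // alongside the data, where the user can inspect or revoke it later.
  await universalAccess.setAgentAccess(
    RESOURCE,
    "https://id.example-delivery.com/app#it", // hypothetical organization WebID
    { read: true, write: false },
    { fetch }
  );
}
```

The design point worth noting is that the access grant is attached to the resource in the user’s Pod, so sharing never requires handing an organization its own independent copy of the data.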
Building on this, Berners-Lee’s new development effort Inrupt has created an enterprise-grade Solid server (ESS) to deploy Pod services, which are already being used by a range of companies and governments. An ESS Pod allows its owner to see, and change, a consent grant at any time; when a grant is withdrawn, the organization can be notified that it no longer has consent to process the data for that purpose (the sketch after the list below shows what this can look like in code). Through their Pods, users can see:
- Who accessed their data
- What data was accessed
- When the data was accessed
- Whether the data was read or written
- What, if any, changes were made to the data
- What application was used to access the data
- Whether the data was accessed via consent
- If consent was provided, when that consent was granted and for what specific purpose
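As a rough illustration of the withdrawal side, the sketch below uses the same open-source client API to enumerate who holds access to a resource and then clear a grant. The same caveats apply: the URL is hypothetical, an authenticated session is assumed, and this uses the general Solid access API rather than ESS’s own services, which surface the richer audit detail listed above.

```typescript
// A rough sketch of reviewing and withdrawing a grant via the general
// Solid access API. The resource URL is hypothetical, and the user is
// assumed to be logged in already.
import { universalAccess } from "@inrupt/solid-client";
import { fetch } from "@inrupt/solid-client-authn-browser";

const RESOURCE = "https://pod.example.com/alice/delivery"; // hypothetical

async function reviewAndWithdraw(organizationWebId: string): Promise<void> {
  // Who currently has access to this resource, and in which modes?
  const accessByAgent = await universalAccess.getAgentAccessAll(RESOURCE, { fetch });
  for (const [webId, modes] of Object.entries(accessByAgent ?? {})) {
    console.log(webId, modes); // e.g. { read: true, write: false, ... }
  }

  // Withdrawing consent is a single call: clear every access mode
  // previously granted to the organization's WebID.
  await universalAccess.setAgentAccess(
    RESOURCE,
    organizationWebId,
    { read: false, append: false, write: false, controlRead: false, controlWrite: false },
    { fetch }
  );
}
```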
In short, ESS Pods have been able to demonstrate that millions of users can have a simple and intuitive way to see the personal data they’ve shared online and then exercise real, continuous control over it.
Consent that Works in the Present and Scales for the Future
Until organizations have adopted privacy-centered frameworks through which users can actually see and control their personal information, we shouldn’t expect those users to feel any sense of control, or comfort, with respect to their data. The reality of the average user’s day, and the sheer volume of personal information they share on a regular basis, means that expecting users to remember and track where they’ve shared their data, and the terms under which they’ve shared it, is simply unrealistic. What we need, then, is to build and implement systems that let users see and control their data past the moment of initial consent — and to make this process so easy that it comes as second nature.
Building consent that works in the present, and that scales for the future, will not only yield benefits for users but also for the companies, non-profits, and governments that need this data to operate — because through transparency and control, these organizations can build and leverage real user trust.
Eliott Behar is a lawyer and writer working at the intersection of human rights and technology. A former war crimes prosecutor, he has worked on international justice initiatives in the Balkans, West Africa, Iraq, and Myanmar. As a technology lawyer he worked as Security Counsel for Apple and advises on data privacy and security issues for tech companies and governments.