Constraining Digital Con Men
Your cell phone company has granular records mapping your location, going back months. Your internet service provider maintains detailed records of your browsing history. Data brokers like Acxiom and Equifax group you into categories based on attributes they believe you have, such as possible medical conditions or employment status, and sell those profiles to anyone willing to pay for them, like companies that might evaluate you for a loan or a job.
You have little visibility into how these companies use this information, and little ability to stop them from using it even if you uncover practices you find disturbing. Often, the only available recourse is to seek an alternative to the service, which, in a highly consolidated market, usually doesn’t exist. In the absence of a comprehensive privacy law, the sector-specific laws on the books frequently don’t apply at all, contain definitional loopholes, or are under-enforced when they do apply. The result is a digital ecosystem in which companies are incentivized to push the boundaries of what is legal, ethical, and moral, and have little reason to seriously weigh how they might be putting their users at risk of manipulation, privacy violations, discrimination, or physical peril.
The skewed power dynamic between tech companies and the people who rely on them for specialized products and services invites exploitation. The services people receive are ones they need but generally can’t perform for themselves. People trust tech companies with their sensitive information in order to receive those services, but are ill-equipped to supervise how that information is being used. And the rewards for violating the service recipient’s trust by sharing or selling their data, manipulating them, or prioritizing profits over preventing harm are lucrative. Yet that set of circumstances isn’t unique to the tech sector. It’s the same paradigm that compelled legislatures, judges, and regulators to conclude that certain professionals, like lawyers, doctors, and financial advisors, owe fiduciary duties to their clients, and that has led legal scholars like Jack Balkin and Jonathan Zittrain to propose applying duties of confidentiality, care, and loyalty to certain data collectors.
Building on their work, I argue in a new article in the Seattle University Law Review that applying fiduciary duties to data collectors could enable broader regulation of digital harms than a privacy law would otherwise likely encompass and, in turn, help move the American approach from treating privacy as a commodity toward treating it as a right. My goal isn’t to criticize Europe’s landmark General Data Protection Regulation (GDPR), but to illustrate how the GDPR’s elevated commitment to privacy as a fundamental right could be echoed, and improved on, in an approach better suited to U.S. law.
The main objective of any comprehensive privacy law for the United States should be to reform the incentives of tech companies to exploit their users, both by imposing more meaningful obligations and prohibitions and by ensuring that those requirements cover the full range of harms that those companies engender or directly inflict. Absent such a law, the default presumption under current U.S. law is, broadly speaking, that a data collector owes its users nothing beyond not explicitly lying to them, unless one of a few sector-specific laws applies. A compulsory, generally applicable information fiduciary law would reverse that presumption, and duties of care and loyalty could encompass digital harms, like discrimination and manipulation, that networked technologies enable and existing privacy laws generally don’t reach.
A duty of loyalty could incorporate a prohibition on using dark patterns to extract more information than a user might otherwise intend to share, or forbid buried requirements to submit to mandatory arbitration. A duty of care could extend to platforms’ failure to enforce their own rules. A duty of confidentiality maps cleanly onto the traditional focus of privacy law: breach notification requirements, security obligations, and limitations on sharing information with third parties. These broader duties would prevent some of these harms by directly prohibiting the conduct that inflicts them, while also pushing companies to anticipate how their products could be exploitative in themselves or weaponized by third parties. The statute could enumerate some of the obligations these duties cover and delegate further elaboration through rulemaking to the FTC or to a new agency. A fiduciary law could also include any of the individual rights the GDPR creates, such as rights of access, correction, and deletion.
In addition, fiduciary duties are designed to accommodate the prerogatives of professionals performing a trade while protecting the people trusting those professionals with sensitive data. You trust your doctor or lawyer with your sensitive information, and both laws and codes of professional ethics prevent them from using that information to their benefit and your detriment in ways that you can’t understand or control. The fiduciary concept recognizes that, absent legal or professional sanction, doctors and lawyers would be constrained only by conscience from profiting off their patients’ and clients’ information, and that conscience will rarely suffice to prevent abuses. A framework built on the principle that privacy is normatively worth protecting, like the public policy considerations that compelled the creation of fiduciary duties for doctors and lawyers, would also counteract the insidious framing of privacy as something trivial that people should always be able to trade away and companies should be free to trample. These rules can be flexible enough to accommodate the legitimate needs of professionals delivering necessary services while protecting individuals from harms they are not in a position to prevent. Shouldn’t we embrace the idea that people deserve to be shielded from exploitation they cannot prevent on their own, and hold data collectors to a similar standard?
In order to meaningfully change the ecosystem, a law applying fiduciary duties to data collectors would have to apply to companies regardless of sector; the classification would have to be compulsory; violations would have to carry significant fines; and individuals would need avenues to vindicate their claims, such as a private right of action to sue in court or administrative claims to which a regulatory agency would be required to respond. Under-inclusive privacy laws have been a key factor in creating the current environment of impunity, and the vast majority of tech companies would see no reason to voluntarily submit to the kinds of rules that would meaningfully curb their current license to collect first and ask questions later. Enforcers must also have sufficient resources and authority to make violating the law a significant risk. These considerations apply to any comprehensive privacy law, regardless of the precise approach it takes: the best-designed privacy law in the world will be ineffective without significant penalties for violations and a ready, willing, and able enforcer to bring the law’s objectives to fruition.
There has also been thoughtful criticism leveled against the information fiduciary approach. Some have argued that the conflicts of interest between data subjects and data collectors are too deeply entrenched for fiduciary duties to be meaningful. But that argument underestimates the depth of the conflicts in the professional relationships that traditional fiduciary duties were designed to check, and have been largely, if imperfectly, successful in checking. Another concern is the potential conflict between a company’s duties to its shareholders and its duties as an information fiduciary. But a fiduciary law can make explicit that duties to shareholders remain generally applicable but are superseded where the two conflict, as the New York Privacy Act would do. And while a fiduciary framework wouldn’t fix the lack of competition bolstering unaccountable tech companies, it doesn’t purport to, and it would not preclude additional, badly needed reforms that directly tackle competition.
The under-inclusivity and under-enforcement of anemic privacy laws have engendered our frustrating reality of constant privacy abuses and the resulting resignation of consumers at their wits’ end. Applying fiduciary duties to data collectors would reframe a policy discourse that has dragged privacy into the domain of the trivial, for which people supposedly don’t need strong legal protections. It would broaden the obligations of data collectors to cover not only the privacy harms they often escape liability for now, but also other digital harms, like manipulation and discrimination. And it would help recalibrate a digital ecosystem that rewards exploitation, pushing companies to guard against the harms to privacy, fairness, free expression, and other democratic values that they currently write off as the cost of doing business, if they bother to think about them at all.