Licensure as Data Governance

Moving toward an industrial policy for artificial intelligence

Data and Democracy

A Knight Institute and Law and Political Economy Project essay series considering how big data is changing our system of self-government

[In the late 1950s, the U.S.] government abdicated … responsibility to establish rules, safeguards, and standards relating to the collection and use of personal data for the purpose of directing human behavior. Plainly, all of this might have gone differently. Plenty of people believed at the time that a people machine was entirely and utterly amoral. “My own opinion is that such a thing (a) cannot work, (b) is immoral, (c) should be declared illegal,” [soon-to-be-FCC Chair] Newton Minow had written to Arthur Schlesinger in 1959. “Please advise.”

—Jill Lepore, If Then: How the Simulmatics Corporation Invented the Future, 323.

Introduction

Data protectors and privacy regulators face a crisis of overwork and underresourcing. Enforcement of privacy laws is too often belated, if it comes at all. Massive firms with myriad data points on tens of millions of people face fines for data misuse and security breaches that are the economic equivalent of a parking ticket. Potentially worse than all these well-recognized barriers to accountability is a known unknown: namely, the black box problem. Even the most diligent regulators and civil society groups have little idea of the full scope and intensity of data extraction, analysis, and use at leading firms, given the triple barriers of trade secrecy, nondisclosure agreements, and technical complexity now effectively hiding their actions from public scrutiny. This crisis is likely to continue unless there is a fundamental shift in the way we regulate the collection, analysis, transfer, and use of data.

At present, policymakers tend to presume that the data practices of firms are legal, and only investigate and regulate when there is suspicion of wrongdoing. What if the presumption were flipped? That is, what if a firm had to certify that its data practices met clear requirements for security, nondiscrimination, accuracy, appropriateness, and correctability, before it collected, analyzed, or used data? Such a standard may not seem administrable now, given the widespread and rapid use of data—and the artificial intelligence (AI) it powers—at firms of all sizes. But such requirements could be applied, at first, to the largest firms’ most troubling data practices, and only gradually (if at all) to smaller ones and less menacing data practices. For example, would it really be troubling to require firms to demonstrate basic security practices once they have accumulated sensitive data on over 1 million people, before they continue to collect even more? Scholars have argued that certain data practices should not be permitted at all. Rather than expecting underfunded, understaffed regulators to overcome the monumental administrative and black box problems mentioned above, responsibility could be built into the structure of data-driven industries via licensure schemes that require certain standards to be met before large-scale data practices expand even further.

To give a concrete example motivating this flipped presumption about data practices, consider the emergence of health inferences from data that is not, on its face, health-predictive. For instance, an AI program, reviewing only writing samples, “predicted, with 75 percent accuracy, who would get Alzheimer’s disease.” This type of inference could be used in subtle or secretive ways by the firms making it, as well as by employers, marketers, financial institutions, and other important decision-makers. Such predictions may have massive impacts on those projected to have Alzheimer’s, including denial of life insurance or long-term care insurance, denial of employment, or loss of other opportunities. Even where such uses of the data are illegal, complex and expensive legal systems may make it very difficult to enforce one’s rights. Governments should ensure ex ante that predictions are done and used in a responsible way, much as federally funded research is often channeled through institutional review boards in order to respect ethical and legal standards.

A licensure regime for data and the AI it powers would enable citizens to democratically shape data’s scope and proper use, rather than resigning ourselves to being increasingly influenced and shaped by forces beyond our control. To ground the case for more ex ante regulation, Part I describes the expanding scope of data collection, analysis, and use, and the threats that scope poses to data subjects. Part II critiques consent-based models of data protection, while Part III examines the substantive foundation of licensure models. Part IV addresses a key challenge to my approach: the free expression concerns raised by the licensure of large-scale personal data collection, analysis, and use. Part V concludes with reflections on the opportunities created by data licensure frameworks and potential limitations upon them.

I. The Expanding Scope of Data Collection, Analysis, and Use

As data collection becomes more prevalent, massive firms are privy to exceptionally comprehensive and intimate details about individuals. These can include transport, financial, retail, health, leisure, entertainment, location, and many other kinds of data. Once large enough stores of data exist, there are increasing opportunities to draw inferences about persons based on extrapolations from both humanly recognizable and ad hoc, machine learning-recognizable groups.

Much observation can occur without persons’ consent. Even when consent is obtained, improper data collection may occur. Increasingly desperate individuals may be effectively coerced via their circumstances to permit comprehensive, 360-degree surveillance of key aspects of their lives. For example, many people now cannot access credit because of a “thin credit file”—industry lingo for someone without much of a repayment history. Fintech firms promise “financial inclusion” for those willing to give lenders access to their social media activity and other intimate data. Critics characterize this “opportunity” as predatory inclusion, and it is easy to see why: Those enticed into the sphere of payday lenders may not only contract into unsustainable debt-repayment schemes but may also become marks for other exploitative businesses, such as for-profit universities or unlicensed rehab centers.

The economic logic here (giving up more privacy in exchange for lower interest rates or other favorable terms) is, in principle, illimitable. For example, a firm may give a loan applicant a reduced rate on a car loan, if she allows it (and its agents) to download and then analyze all information on her mobile phone during the term of the loan, and to resell the data. Such borrowers may never even know what is done with their data, thanks to all-pervasive trade secrecy in the industry. Tracker apps on cell phones may allow firms to record their employees’ location at all times. According to one aggrieved worker, her boss “bragged that he knew how fast she was driving at specific moments ever since she had installed [an] app on her phone.”

Even when such comprehensive surveillance is not consented to, AI can operate as a “prediction machine,” analyzing data to make damaging inferences about individuals. These inferences may be entirely unexpected, based on correlations that can only be found in vast troves of data. For example, a person’s proclivity to be depressed may be related to the apps or websites they visit (or how they use those apps and websites). The websites themselves may keep such data, or it may be collected by third parties with commercial or other relationships with the sites or apps.

Correlations based on how a person uses their phone or computer may be entirely unexpected. For example, a high-ranking Catholic cleric in the U.S. was recently reported to be a user of the gay dating app Grindr by journalists at the digital publication The Pillar, based on location data purchased from commercial brokers. As journalist Molly Olmsted explained:

According to one privacy engineer who has worked on issues related to location data, Pillar (or the group that had offered CNA [the Catholic News Agency] the data back in 2018) probably purchased a data set from a data broker, which in turn had likely purchased the data from a third-party ad network that Grindr uses. Grindr itself would not have been the source of the data, but the ad network would have been given full access to the users’ information as long as they agreed to Grindr’s terms of services. (In 2018, Grindr, which uses highly granular location information, was found to have shared users’ anonymized locations, race, sexual preferences, and even HIV status with third-party analytic firms.)

Whatever your views about the cleric in this case, the generalized exposure of a person’s dating practices, or other intimate inferences based on location-based data, is something a mature privacy law should prevent.

Researchers have also analyzed certain activities of people who extensively searched for information about Parkinson’s disease on Bing, including their mouse movements six months before they entered those search terms. Most users of the internet are probably unaware that not just what they click on, but how fast and smoothly they move their mouse to do so, can be recorded and traced by the sites they are using. The group of Bing users who searched for Parkinson’s—which it is probably safe to assume is far more likely to have Parkinson’s than the population as a whole—tended to have certain tremors in their mouse movements distinct from other searchers. These tremor patterns were undetectable by humans—only machine learning could distinguish the group identified as having a higher propensity for Parkinson’s, based in part on microsecond-by-microsecond differences in the speed and motion of hand movements.
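
To make the mechanism concrete, the sketch below (in Python, using scikit-learn) shows how a cursor trace could be reduced to a few tremor-like statistics and fed to an ordinary classifier. It is a toy illustration on synthetic data, not the Bing researchers’ actual pipeline; every function name, feature, and threshold here is invented for the example.

```python
# Hypothetical sketch (not the researchers' actual method): reduce cursor
# traces to tremor-like statistics and fit a classifier on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def tremor_features(t, xs, ys):
    """Summarize a cursor trace as a few coarse movement statistics."""
    dt = np.clip(np.diff(t), 1e-6, None)             # time between samples
    speed = np.hypot(np.diff(xs), np.diff(ys)) / dt
    jerk = np.diff(speed) / dt[1:]                   # how abruptly speed changes
    return [speed.mean(), speed.std(), np.abs(jerk).mean()]

def synth_trace(jitter, n=300, rng=None):
    """Synthetic trace: smooth random movement plus high-frequency jitter."""
    if rng is None:
        rng = np.random.default_rng()
    t = np.cumsum(rng.uniform(0.01, 0.03, n))        # timestamps in seconds
    path = np.cumsum(rng.normal(size=(n, 2)), axis=0) + jitter * rng.normal(size=(n, 2))
    return t, path[:, 0], path[:, 1]

rng = np.random.default_rng(0)
traces = [(synth_trace(0.0, rng=rng), 0) for _ in range(100)] + \
         [(synth_trace(2.0, rng=rng), 1) for _ in range(100)]
X = np.array([tremor_features(*trace) for trace, label in traces])
y = np.array([label for trace, label in traces])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

The point is not the particular model but how little raw material such an inference requires: timestamps and screen coordinates that nearly any website can observe.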

There is at present no widely available defense to such detection—no close-at-hand, privacy-enhancing strategy that can prevent such technologies of inference, once developed, from classifying someone as likely to develop grave illness. Perhaps a clever technologist could develop tools to “fuzz” tremor movements, or to smooth or normalize their transmission so that they do not signal abnormality. But we should not expect internet users to defend themselves against such invasive categorization by joining an arms race of deflection and rediscovery, obfuscation and clarification, encryption and decryption. It is a waste of our time. And those of higher socioeconomic status have many more resources at hand to engage in such arms races, thus adding the insult of rising inequality to the injury of privacy harm.

Moreover, a patchwork of weak and often underenforced privacy laws is no match for the threats posed by large-scale data processing, which can be used to either overtly or secretly rank, rate, and evaluate people, often to their detriment and unfairly. Without a society-wide commitment to fair data practices, a troubling era of digital discrimination will be entrenched. The more data about signals of future distress, illness, or disability are available, the better AI will predict these conditions, enabling unscrupulous actors to take advantage of them.

II. The Impracticality of Consent-Based Models in an Age of Big Data

Many systems of privacy regulation begin with consent, shifting responsibility to data subjects to decide to whom they should grant their data, and to whom they should deny it. On the consent-based view, it is up to the data subject to vet the reliability of entities seeking access to her data and monitor their ongoing abidance with the promises they made when they obtained it. A consent-based model makes sense as part of a contractarian and propertarian view of legal order: Data are like objects, property of a person who may freely contract to give or share the data with others. On this view, just as a corporation needs consent to, say, take a person’s property, it needs consent to take personal data. Once a contract is struck, it governs the future exchange and use of the data.

This consent-based approach has multiple infirmities. Much data arises out of observation unrestricted by even theoretical contracts. To give an example: A Google user may “consent” to data collection while using the service, but no one is asked before the cameras on Google’s street-photographing cars roll down the road to grab data for its mapping service. The firm does blot out faces when publishing the images to the internet, but the principle remains: All manner of information can be garnered by microphones, cameras, and sensors trained on the public at large, and connected to particular persons, automobiles, or households via massive facial recognition databases. Aerial surveillance can trace a person’s every outdoor step and movement from home and back each day, as Baltimore’s recent “spy plane” litigation has revealed.

Even for data that can be practically bound by contract, terms-of-service agreements are continually subject to change. These changes almost always favor service providers—despite the serious lock-in costs and reliance interests of users. If one is dealing with a monopoly or must-have service, there is no real choice but to accept its terms. Even when choice is superficially available, terms of service are by and large similar and imposed as a fait accompli: The user either accepts them or does not get to use the service. Sometimes, under exceedingly gentle pressure from regulators, firms will magnanimously offer assurances about certain limits on the uses of data. However, data subjects are now under surveillance by so many firms that it is impossible for them to audit such assurances comprehensively. How can a person with a job and family to take care of try to figure out which of thousands of data controllers has information about them, has correct information, and has used it in a fair and rigorous manner? In the U.S., even the diligent will all too often run into the brick walls of trade secrecy, proprietary business methods, and malign neglect if they do so much as ask about how their data has been used, with whom it has been shared, and how it has been analyzed. Europeans may make “subject access requests,” but there are far too many data gathering and data processing firms for the average person to review the results in a comprehensive way.

The analogy between data and property also breaks down in an era of digitization, when data can be so easily copied, transferred, stored, and backed up. Securing a house is a relatively easy matter compared with securing one’s data. Even the most conscientious data subjects need only slip once, failing to read a key clause in terms of service, or transacting with an unreliable or insecure counterparty. Then critical, discrediting, disadvantaging, or embarrassing data about them could end up copied and recopied, populating countless databases. And even when the data subject has clearly been wronged by a data controller or processor, the judiciary may make litigation to obtain redress difficult or impossible.

As data and its analysis, sharing, and inferences proliferate, the consent model becomes less and less realistic. There is simply too much for any individual to keep track of. Nor have data unions risen to the challenge, given the immense difficulty of attaining any kind of bargaining leverage vis-à-vis first-party data collectors. There are myriad data gatherers, hundreds of scoring entities, and blacklists covering housing, voting, travel, and employment. Even if consent-based regimes are well-administered, they can result in data gaps that impede, for instance, both medical research and opportunities for clinical care. Forced into an environment where few adverse uses of compromising data and inferences are forbidden (and where the penalties for such uses are not sufficient to deter wrongdoers), data subjects easily forego valuable opportunities (such as future AI-enabled diagnoses) or eschew low-risk chances to contribute to the public good by sharing data (for research studies).

Moreover, we cannot reasonably expect good administration of consent-based regimes in many areas. In the U.S., regulatory agencies are ill-suited to enforce others’ contracts. Even when they do bring cases on the basis of deception claims, the penalties for failure to comply have frequently been dismissed as a mere cost of doing business. First Amendment defenses may also complicate any lawsuit predicated on an effort to stop the transfer or analysis of data and inferences. In Europe, while data protection authorities are empowered by law to advance the interests of data subjects via the General Data Protection Regulation (GDPR), they have in practice proven reluctant to impose the types of penalties necessary to ensure adherence to the law.

III. Beyond Consent: The Ex Ante Regulatory Imperative

One way to promote large-scale data processing that is more responsive to the public interest is to ensure that proper scrutiny occurs before the collection, analysis, and use of data. If enacted via a licensure regime, this scrutiny would enable a true industrial policy for big data, deterring misuses and thereby helping to channel AI development in more socially useful directions. As AI becomes more invasive and contested, there will be increasing calls for licensure regimes. To be legislatively viable, proposals for licensure need theoretical rigor and practical specificity. What are the broad normative concerns motivating licensure? And what types of uses should be permitted?

Cognizant of these queries, some legislators and regulators have begun to develop an explicitly governance-driven approach to data. While not embracing licensure outright, Sen. Sherrod Brown of Ohio has demonstrated how substantive limits may be enforced on large-scale data collection, analysis, and use. His Data Accountability and Transparency Act would amount to a Copernican shift in U.S. governance of data, putting civil rights protection at the core of this approach to data regulation. This reflects a deep concern about the dangers of discrimination against minoritized or disadvantaged groups, as well as against the “invisible minorities” I have previously described in The Black Box Society. For example, the mouse-microtremor inference mentioned above might be prevented by the Data Accountability and Transparency Act, which would forbid the calculation of the inference itself by entities that intend to discriminate based on it (or, more broadly, by entities that have not demonstrated a personal or public health rationale for creating, disseminating, or using it). On the other hand, the inference may be permissible as a way of conducting “public or peer-reviewed scientific, historical, or statistical research in the public interest, but only to the extent such research is not possible using anonymized data.” Thus, the generalizable finding may be made public, but its harmful use against an individual would be precluded by preventing a firm with no reasonable method of improving the person’s health from making the inference. This avoids the “runaway data” problem I described in The Black Box Society, where data collection and analysis initially deemed promising and helpful become a bane for individuals stigmatized by them.

Such assurances should enable more societal trust in vital data collection initiatives, such as those for health research, pandemic response, and data-driven social reform. For a chilling example of a loss of trust in a situation without such protections, we need only turn to the misuse of prescription databases in the U.S. In the 2000s, patients were assured that large databases of prescription drug use would be enormously helpful to them if they ended up in an emergency room away from home, since emergency doctors could have immediate access to this part of their medical record and avoid potentially dangerous drug interactions. However, that use of the database was not immediately profitable and did not become widespread. Rather, the database became a favored information source of private insurers seeking to deny coverage to individuals on the basis of “preexisting conditions.” To avoid such future misuses and abuses of trust, we must develop ways of preventing discriminatory uses of personal data, and of shaping the data landscape generally, rather than continuing with a regime of post hoc, partial, and belated regulation.

Sensitive to such misuses of data, ethicists have called for restrictions on certain types of AI, with a presumption that it be banned unless licensed. For example, it may be reasonable for states to develop highly specialized databases of the faces of terrorists. But to deploy such powerful technology to ticket speeders or ferret out benefits fraud is inappropriate, like using a sledgehammer to kill a fly. A rational government would not license the technology for such purposes, even if it would be entirely reasonable to do so for other purposes (for example, to prevent pandemics via early detection of infection clusters). Nor would it enable many of the forms of discrimination and mischaracterization now enabled by light-to-nonexistent regulation of large-scale data collection, analysis, and use.

The first order of business for a reformed data economy is to ensure that inaccurate, irresponsible, and damaging data collection, analysis, and use are limited. Rather than assuming that data collection, processing, and use are in general permitted, and that regulators must struggle to catch up and outlaw particular bad acts, a licensure regime flips the presumption. Under it, large-scale data collectors, brokers, and analysts would need to apply for permission for their data collection, analysis, use, and transfer (at the very least for new data practices, if older ones are “grandfathered” and thus assumed to be licensed). To that end, a stricter version of the Data Accountability and Transparency Act might eventually insist that data brokers obtain a license from the government in order to engage in the collection, sale, analysis, and use of data about identifiable people.

IV. Free Expression Concerns Raised by the Licensure of Large-Scale Personal Data Collection, Analysis, and Use

Whether applied to data or the AI it powers, licensure regimes will face challenges based on free expression rights. The ironies here are manifold. The classic scientific process is open, inviting a community of inquirers to build on one another’s works; meanwhile, the leading corporate data hoarders most likely to be covered by the licensing regime proposed here are masters of trade secrecy, aggressively blocking transparency measures. Moreover, it is now clear that the corporate assertion of such alleged constitutional rights results in databases that chill speech and online participation. It is one thing to go to a protest when security personnel watch from afar. It is quite another when the police can immediately access your name, address, and job from a quick face scan purchased from an unaccountable private firm.

This may be one reason why the American Civil Liberties Union decisively supported the regulation of Clearview AI (a firm providing facial recognition services) under the Illinois Biometric Information Privacy Act (BIPA), despite Clearview’s insistence (to courts and the public at large) that it has a First Amendment right to gather and analyze data unimpeded by BIPA. If unregulated, the firm’s activities seem far more likely to undermine a robust public sphere than to promote it. Moreover, even if its data processing were granted free expression protections, such protections may be limited by “time, place, and manner” restrictions. In that way, the licensure regime I am proposing is much like permit requirements for parades, which recognize the need to balance the parade organizers’ and marchers’ free expression rights against the public need for safe and orderly streets. Given the privacy and security concerns raised by mass data collection, analysis, and use, restrictions on data practices thus may be subject to only intermediate scrutiny in the U.S. Even more sensible is the Canadian rejection of the data aggregator’s free expression claim tout court.

When an out-of-control data gathering industry’s handiwork can be appropriated by both government and business decision-makers, data and inferences reflect both knowledge and power: They are descriptions of the world that also result in actions done within it. They blur the boundary between speech and conduct, observation and action, in ways that law can no longer ignore. Mass data processing is unlike the ordinary language (or “natural language”) traditionally protected by free expression protections. Natural language is a verbal system of communication and meaning-making. I can state something to a conversation partner and hope that my stated (and perhaps some unstated) meanings are conveyed to that person. By contrast, in computational systems, data are part of a project of “operational language”; their entry into the system produces immediate effects. As Mark Andrejevic explains in Automated Media, there is no interpretive gap in computer processing of information. The algorithm fundamentally depends on the binary (1 or 0), supplemented by the operators “and, or, not.” In Andrejevic’s words, “machine ‘language’ … differs from human language precisely because it is non-representational. For the machine, there is no space between sign and referent: there is no ‘lack’ in a language that is complete unto itself. In this respect, machine language is ‘psychotic’ … [envisioning] the perfection of social life through its obliteration.” This method of operation is so profoundly different from human language—or the other forms of communication covered by free expression protections—that courts should be exceptionally careful before extending powerful “rights to speak” to the corporate operators of computational systems that routinely abridge human rights to privacy, data protection, and fair and accurate classification.

Unregulated AI is always at risk of distorting reality. Philosophers of social science have explained the limits and constraints algorithmic processing has imposed on social science models and research. Scholars in critical data studies have exposed the troubling binaries that have failed to adequately, fairly, and humanely represent individuals. For example, Os Keyes has called data science a “profound threat for queer people” because of its imposition of gender binaries on those who wish to escape them (and who seek societal acceptance of their own affirmation of their gender). In this light, it may well be the case that an entity should only process data about data subjects’ gender (and much else) if it has been licensed to do so, with licensure authorities fully cognizant of the concerns that Keyes and other critical data scholars have raised.

The shift to thinking of large-scale data processing as a privilege, instead of as a right, may seem jarring to American ears, given the expansion of First Amendment coverage over the past century. However, even in the U.S. it is roundly conceded that there are certain particularly sensitive pieces of “information” that cannot simply be collected and disseminated. A die-hard cyberlibertarian or anarchist may want to copy and paste bank account numbers or government identification numbers onto anonymous websites, but that is illegal because complex sociotechnical systems like banks and the Social Security Administration can only function on a predicate of privacy and informational control. We need to begin to do the same with respect to facial recognition and other biometrics, and to expand this caution with respect to other data that may be just as invasive and stigmatizing. Just as there is regulation of federally funded human subjects research, similar patterns of review and limitation must apply to the new forms of human classification and manipulation now enabled by massive data collection.

A licensure regime for big data analytics also puts some controls on the speed and ubiquity of the correlations such systems can make. Just as we may want to prevent automated bots from dominating forums like Twitter, we can and should develop a societal consensus toward limiting the degree to which automated correlations of often biased, partial, and secret data influence our reputations and opportunities.

This commitment is already a robust part of finance regulation. For example, when credit scores are calculated, the Fair Credit Reporting Act imposes restrictions on the data that can affect them. Far from being a forbidden content-based restriction on the “speech” of scoring, such restrictions are vital to a fair credit system. The Equal Credit Opportunity Act goes further, restricting what a creditor’s scoring system may consider. Such scoring systems may not use certain characteristics—such as race, sex, gender, marital status, national origin, religion, or receipt of public assistance—as a factor in assessing a customer’s creditworthiness. Far from being a relic of the activist 1970s, restrictions like this are part of contemporary efforts to ensure a fairer credit system.

European examples abound as well. In Germany, the United Kingdom, and France, agencies cannot use ethnic origin, political opinion, trade union membership, or religious beliefs when calculating credit scores. Germany and the United Kingdom also prohibit the use of health data in credit score calculations, while France allows it. Such restrictions might be implemented as part of a licensure regime for the use of AI-driven propensity scoring in many fields. For example, authorities may license systems that credibly demonstrate to authorized testing and certification bodies that they do not process data on forbidden grounds, while denying a license to those that do.
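
As a rough illustration of what “credibly demonstrating” compliance to a testing or certification body might involve, the sketch below checks a scoring system’s declared inputs against a jurisdiction’s forbidden grounds. The field names, jurisdiction codes, and manifest format are hypothetical; a real review would also need to probe for proxy variables, since simply omitting a field does not prevent its indirect inference.

```python
# Hypothetical licensure check: does a scoring system's declared input list
# avoid the grounds a jurisdiction forbids? Field names and lists are
# illustrative only, loosely following the European examples above.
FORBIDDEN_GROUNDS = {
    "DE": {"ethnic_origin", "political_opinion", "trade_union_membership",
           "religious_belief", "health_status"},
    "UK": {"ethnic_origin", "political_opinion", "trade_union_membership",
           "religious_belief", "health_status"},
    "FR": {"ethnic_origin", "political_opinion", "trade_union_membership",
           "religious_belief"},   # health data treated differently, per above
}

def license_review(declared_inputs, jurisdiction):
    """Return (approve, violations) for a scoring system's declared inputs."""
    violations = set(declared_inputs) & FORBIDDEN_GROUNDS[jurisdiction]
    return (len(violations) == 0, violations)

approved, violations = license_review(
    {"payment_history", "income", "outstanding_debt", "health_status"}, "DE")
if approved:
    print("license may issue")
else:
    print("license denied; forbidden inputs:", sorted(violations))
```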

Moreover, credit scores themselves feature as forbidden data in some other determinations. For example, many U.S. states prevent them from being used by employers. California, Hawaii, and Massachusetts ban the use of credit scoring for automobile insurance. A broad coalition of civil rights and workers’ rights groups rejects these algorithmic assessments of personal worth and trustworthiness. The logical next step for such activism is to develop systems of evaluation that better respect human dignity and social values in the construction of actionable reputations—those with direct and immediate impact on how we are classified, treated, and evaluated. For example, many have called for the nationalization of at least some credit scores. Compared with that proposal, a licensure regime for such algorithmic assessments of propensity to repay is moderate.

To be sure, there will be some difficult judgment calls to be made, as is the case with any licensure regime. But size-based triggers can blunt the impact of licensure regimes on expression, focusing restrictions on the firms with the most potential to cause harm. These firms are so powerful that they are almost governmental in their own right. The EU’s Digital Services Act proposal, for example, includes obligations that would only apply to platforms that reach 10 percent of the EU population (about 45 million people). The Digital Markets Act proposal includes obligations that would only apply to firms that provide “a core platform service that has more than 45 million monthly active end users established or located in the Union and more than 10,000 yearly active business users established in the Union in the last financial year.” In the U.S., the California Consumer Privacy Act applies to companies that have data on 50,000 California residents. Many U.S. laws requiring security breach notifications trigger at around 500-1,000 records breached. In short, a nuanced licensing regime can be developed that is aimed primarily at the riskiest collections of data, and that imposes such obligations (or less rigorous ones) on smaller entities only as the value and administrability of the requirements for larger firms are demonstrated.
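
A size-based trigger is straightforward to express. The sketch below is a hypothetical tiering function, loosely modeled on the thresholds cited above (EU-scale platform cutoffs, the 1 million-record example from the introduction, and state-level record counts); the tier names and exact numbers are illustrative, not a statutory proposal.

```python
# Hypothetical size-based trigger: ex ante obligations attach only above
# certain thresholds. Tier names and cutoffs are illustrative only.
def obligation_tier(monthly_active_users, records_held):
    if monthly_active_users >= 45_000_000:          # DSA/DMA-scale platforms
        return "full licensure review"
    if records_held >= 1_000_000:                   # sensitive data at scale
        return "security and audit certification"
    if records_held >= 50_000:                      # CCPA-scale data holdings
        return "registration and breach-notification duties"
    return "no ex ante obligations"

print(obligation_tier(monthly_active_users=60_000_000, records_held=5_000_000))
print(obligation_tier(monthly_active_users=20_000, records_held=80_000))
```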

V. Conclusion

One part of the “grand bargain for big data” I outlined in 2013, followed by the “redescription of health privacy” I proposed in 2014, is a reorientation of privacy and data protection advocacy. The state, its agencies, and the corporations they charter only deserve access to more data about persons if they can demonstrate that they are actually using that data to advance human welfare. Without proper assurances that the abuse of data has been foreclosed, citizens should not accede to the large-scale data grabs now underway.

Not only ex post enforcement but also ex ante licensure is necessary to ensure that data are only collected, analyzed, and used for permissible purposes. This article has sketched the first steps toward translating the general normative construct of a “social license” for data use into a specific licensure framework. Of course, more conceptual work remains to be done, both substantively (elaborating grounds for denying a license) and practically (estimating the resources needed to develop the first iteration of the licensing proposal). The consent model has enjoyed the benefits of such conceptual work for decades; now it is time to devote similar intellectual energy to a licensing model.

Ex ante licensure of large-scale data collection, analysis, use, and sharing should become common in jurisdictions committed to enabling democratic governance of personal data. Defining permissible purposes for the licensure of large-scale personal data collection, analysis, use, and sharing will take up an increasing amount of time for regulators, and law enforcers will need new tools to ensure that regulations are actually being followed. The articulation and enforcement of these specifications will prove an essential foundation of an emancipatory industrial policy for AI.

 

I wish to thank David Baloche, Jameel Jaffer, Margot Kaminski, Amy Kapczynski, Gianclaudio Malgieri, Rafi Martina, Paul Ohm, Paul Schwartz, and Ari Ezra Waldman for very helpful comments on this work. I, of course, take responsibility for any faults in it. I also thank the Knight First Amendment Institute and the Law and Political Economy Project for the opportunity to be in dialogue on these critical issues.

 


 

© 2021, Frank Pasquale.

Frank Pasquale is a professor of law at Brooklyn Law School, an affiliate fellow at the Yale Information Society Project, and the Minderoo High Impact Distinguished Fellow at the AI Now Institute.