US federal privacy laws – status quo and developments

Vision and priorities

 

From a legal standpoint, privacy protection in the U.S. is primarily a counterbalance to the state’s power of intrusion into citizens’ lives. The Fourth Amendment to the U.S. Constitution is written precisely in this vein, accounting for the government activities that constitute searches and seizures and for how public authorities’ violations of citizens’ rights are to be addressed. Alas, equivalent protection is not afforded to individuals against potential harms committed by private entities.

Conversely, data processing by private entities is governed by a host of state and federal laws applicable on a sectoral basis. If there is no applicable sectoral law, or if the applicable sectoral law excludes certain types of stakeholders, which is often the case, organizations face no impediment to collecting and processing individuals’ data at their own discretion, subject only to the Federal Trade Commission’s (FTC) unfairness and deception enforcement authority and practice.


The lack of a federal constitutional right to privacy from private entities makes for a lax consumer privacy protection landscape that shields private organizations from government action against their arbitrary use of data, unless a state has its own data privacy law containing a more restrictive privacy regime. To date, three states have adopted comprehensive privacy regimes: California, Virginia, and Colorado.


In her paper “US Privacy Law, the GDPR and Information Fiduciaries,” Lindsey Barrett observes: “The scattered nature of privacy protections for individuals against private entities in the United States largely reflects a prioritization of corporate flexibility over individual rights. While an omnibus regime assumes that data collection should be justified, a sectoral regime assumes that any governmental limits on collection should be justified.”


Practically, U.S. sectoral statutes afford privacy protections for health information (HIPAA), financial information (FCRA, GLBA), students’ information (FERPA), electronic communications and state surveillance (ECPA), children’s online information (COPPA), genetic information (GINA), and video rental records (VPPA).


In his article for Wirecutter, Thorin Klosowski provides very helpful insight into this patchwork of disparate federal privacy statutes. The sticking point is that the protection provided by these sectoral privacy statutes is incomplete, suffering from a series of limitations. Below is a short overview of some of these limitations.

 

Sectoral statutes

The Electronic Communications Privacy Act (ECPA) prevents unauthorized government access to private electronic communications and sets rules concerning employers’ monitoring of employee communications. Critics often point out that ECPA, which was passed in 1986, is outdated: back then the World Wide Web did not exist, and nobody carried a cell phone. As a result, ECPA today is incapable of protecting individuals against modern surveillance tactics such as law enforcement access to older data stored on servers, in cloud storage documents, and in search queries.


Another example is the Family Educational Rights and Privacy Act (FERPA), the statute regulating the collection of student data, which applies to public schools benefiting from public funding. FERPA does not apply to other entities that collect student data, such as an organization that conducts an official-looking survey as part of a test students are required to take and then sells the information to data brokers.


The Genetic Information Nondiscrimination Act (GINA), the statute governing misuse of genetic data, prohibits only the use of genetic data in employment or insurance decisions.


The Health Insurance Portability and Accountability Act (HIPAA), which protects health privacy, applies only to information collected by healthcare providers and similar covered businesses, including doctors, hospitals, pharmacies, and insurers. Any other collection or use of health information, for instance by health applications like Fitbit, is not protected, nor does the law restrict who can ask for your COVID-19 vaccination status.


The Gramm-Leach-Bliley Act (GLBA), which applies to financial institutions (i.e., banks, insurance providers, mortgage lenders), doesn’t restrict how companies use the data they collect, as long as they disclose such usage beforehand. Covered entities are, however, obliged to give consumers the right to opt out of having their information shared, which is a fairly weak protection considering the sensitivity of the data at hand.


The Fair Credit Reporting Act (FCRA) offers stronger protections, especially in the context of employment. Even so, it applies only to consumer reporting agencies (CRAs). Organizations not fitting the CRA definition, such as Facebook, Google, or a data broker that buys, sells, or shares financial information, are regulated by neither GLBA nor FCRA.


As for the Children’s Online Privacy Protection Act (COPPA), which imposes certain limits on a company’s collection of data about children under 13, it has been overrun by the very industry it was supposed to regulate. According to a 2019 column in The Washington Post, surveys show that four out of five American preteens use some form of social media, with YouTube being the most popular but Instagram, Facebook, and Snapchat also widely used, even though all four services officially prohibit users younger than 13.


Furthermore, per the same source, other popular online offerings, such as the game Fortnite, which has proved so engrossing to preteen boys that parents worry about addiction, maintain they are “not directed” at children. But these services also don’t ask users how old they are. This tactic, lawyers say, helps the companies sidestep COPPA, which restricts the tracking and targeting of those younger than 13 but requires “actual knowledge” of a user’s age as a key trigger for enforcement.


On top of all this comes the shortcoming of a heavy reliance on notice and choice, which fails again and again to provide the transparency it was meant to. This is something the E.U. data privacy regime accounted for by elevating the requirements for meaningful consent, prohibiting the toxic practice of “take it or leave it,” and providing individuals with meaningful pathways for administrative and judicial redress.

 

The Federal Trade Commission

U.S. privacy protections are equally hampered by pragmatic shortcomings, namely the scarce resources allocated to privacy policymaking and enforcement. The Federal Trade Commission (FTC) is the primary federal agency responsible for protecting individuals from digital exploitation in a commercial context, including data privacy, security, and misuse of data by companies.


Essentially, by wielding Section 5 of the FTC Act, which prohibits unfair or deceptive practices in or affecting commerce, the FTC fills in the gaps left behind by the scattershot sectoral privacy regulations.

For example, the FTC has the authority to police an app or website that violates its own privacy policy, thereby deceiving consumers about its privacy promises. The FTC can also investigate violations of privacy-related marketing language.


Nevertheless, the agency’s ability to police abusive privacy practices is obstructed by the limits of its statutory authority, which make it a reactive rather than proactive body in shaping privacy practices, and by the scarcity of its resources (i.e., manpower, legal tools, and money) relative to its humongous responsibility. Some groups have also recently called on the FTC to extend its power to abusive data practices.


Moreover, the FTC’s authority does not extend to common carriers or non-profits, a limitation that leaves these entities free to violate individuals’ privacy with near impunity. Lacking general rulemaking authority, the FTC’s approach to shaping industry practice is reactive rather than proactive, building layers of precedent but not a consistent policy across the market.


Furthermore, the FTC typically uses its deception authority in privacy and data security cases and rarely relies on its unfairness authority, as the latter requires the agency to meet the elevated threshold of “a clear theory of substantial likelihood of harm to consumers that is not outweighed by any countervailing benefits.” And although the FTC’s enforcement priorities have evolved over the years beyond tangible financial harm, its narrow definition of privacy harms as physical or financial injuries has allowed reputational harms, emotional harms, manipulation, and discrimination to persist.


On top of that, the judiciary has created additional barriers for individuals looking to vindicate their privacy rights by enforcing a narrow interpretation of standing doctrine. For example, the Supreme Court has ruled that the violation of a statute does not constitute injury per se.


As the reader can see from the short analysis above, the U.S. appears to prioritize corporate flexibility. The default mindset is that privacy laws must allow data collection, as opposed to a default requirement that data collection be justified and accounted for by the data collector.


In general terms, in the U.S., whoever stores data is deemed to own the right to store and use it, even if the data was collected without permission, except to the extent regulated by scattershot laws and rules.


The framing of privacy as a good that individuals should be able to commoditize as they please, the limited definition of digital harm, and a legally and resource-constrained FTC hamper the law’s capacity to sufficiently protect individuals from evolving digital threats.

 

Having said that, this sectoral approach leaves open doors and vulnerabilities, both subjective and objective, to be exploited to the detriment of individuals’ right to privacy. As Lindsey Barrett notes: “These narrowly defined laws frequently fail to protect against new kinds of digital harms, such as manipulation or discrimination. A sectoral approach prioritizes the ability of industry to move fast and break things and subordinates strong privacy protections for individuals in favor of corporate flexibility to exploit them.”

 

We need a Privacy Framework

While U.S. privacy professionals are not waiting for a panacea for all digital harms, a privacy framework primarily focused on proactively protecting the rights of individuals, rather than on retroactively correcting damage to individuals and society, is needed.

 

Author: Petruta Pirvan, Founder and Legal Counsel, Data Privacy and Digital