Legal risks arising from the use of biometric data and AI

September 2023  |  SPECIAL REPORT: TECHNOLOGY, MEDIA & TELECOMMUNICATIONS SECTOR

Financier Worldwide Magazine


Technological advances, including higher computing power at lower cost and improvements in model design, have enabled the training of artificial intelligence (AI) models on increasingly large data sets, improving the accuracy and power of AI capabilities and leading to a proliferation of AI-based programmes and tools.

State-of-the-art AI models are generally trained on large amounts of data, including text, images or video, often collected from various sources. However, to the extent such data includes personal data, the training and use of such AI models could create privacy-related legal concerns. Companies need to be aware of, and comply with, applicable data privacy laws, including those that relate to the collection and use of biometric data.

Biometric data is seen by regulators as warranting a higher level of security and more rigorous privacy compliance because biometric markers often uniquely identify an individual, may require invasive or intimate access to an individual to collect and, unlike other categories of personal data, generally cannot be changed. Even if an individual’s Social Security number is compromised, a new number can be obtained, whereas an individual’s faceprint, iris scan or fingerprint is unique and fixed. The current legal landscape in the US governing the use of biometric data presents a patchwork of laws that could easily trip up a business. This article surveys the US legal landscape and considers recent cases that may impact the use of biometric data in connection with AI applications.

The US legal landscape

Illinois is generally seen as having the strongest biometric data protection regime within the US. The Illinois Biometric Information Privacy Act (BIPA), which requires notice to and informed written consent from Illinois residents whose biometric information is collected, stored or used, was enacted in 2008 and paved the way for other states, including Texas and Washington, to pass similar legislation.

Notably, Illinois is the only state that provides a private right of action to its residents, increasing potential liability for businesses. Under BIPA, companies can face $1,000 in liquidated damages per negligent violation and $5,000 per intentional or reckless violation. Moreover, a recent Illinois Supreme Court decision held that each violation of BIPA, including each fingerprint scan or each facial scan of an image, constitutes a separate violation of the statute that gives rise to a separate claim for damages. As a result, a business may accrue potentially astronomical damages for violating BIPA. In 2022, 1.3 million Illinois residents claimed shares of a $650m settlement with Facebook over alleged BIPA violations.
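To see how quickly per-scan accrual can compound, consider a purely hypothetical illustration. The workforce size, scan frequency and work schedule below are illustrative assumptions, not figures from any actual case; only the $1,000 and $5,000 amounts come from the statute.

```python
# Hypothetical illustration of per-scan damages accrual under BIPA.
# Workforce, scan frequency and schedule are assumptions for
# illustration only; they are not drawn from any actual case.

NEGLIGENT_DAMAGES = 1_000   # statutory liquidated damages per negligent violation
RECKLESS_DAMAGES = 5_000    # per intentional or reckless violation

employees = 500             # assumed workforce using fingerprint timekeeping
scans_per_day = 2           # clock-in and clock-out
workdays_per_year = 250     # assumed work schedule

total_scans = employees * scans_per_day * workdays_per_year

print(f"Total scans (each a potential violation): {total_scans:,}")
print(f"Exposure at ${NEGLIGENT_DAMAGES:,} per violation: ${total_scans * NEGLIGENT_DAMAGES:,}")
print(f"Exposure at ${RECKLESS_DAMAGES:,} per violation: ${total_scans * RECKLESS_DAMAGES:,}")
```

Under these assumptions, a single year of routine timekeeping scans yields 250,000 potential violations and a theoretical nine-figure exposure at the lower statutory amount, which helps explain the scale of recent BIPA settlements.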

Several other states have introduced bills specifically addressing biometric data, while others, including California and Virginia, have implemented broader privacy laws that also cover biometric data. Many businesses will opt to comply with the most stringent laws, rather than try to implement state-by-state policies and practices, which could be burdensome. Further, while there is no comprehensive federal privacy legislation, recent statements by the US Federal Trade Commission (FTC) regarding data collection practices for use in AI may have implications for businesses using AI technologies.

Collection of biometric data

Over the past year there has been a meteoric rise in generative AI tools – software that allows the user to generate new content, such as photographs, images and audio. One popular app was Lensa, which allowed users to upload ‘selfies’ that the app would process and use to generate artistic avatars of the user. In a putative class action filed under the BIPA in February 2023 against Lensa owner Prisma Labs, Inc., plaintiffs alleged that Prisma falsely represented to users that their biometric identifiers, namely their facial geometry, would not be collected, a representation that conflicted with Prisma’s practice of collecting users’ images and with other statements in Prisma’s privacy policy. Plaintiffs also highlighted that, for a short period, Prisma’s privacy policy disclosed that ‘face data’ is used to ‘train’ Prisma’s ‘neural network algorithms’, but that the disclosure was removed shortly thereafter.

Regardless of which party prevails in that litigation, the case reinforces the importance of complying with legal requirements when collecting biometric data from users.

Biometric data in training data sets

In general, businesses can comply with state biometric data laws in one of several ways: implementing blanket policies that satisfy the most stringent biometric data regulations; segregating data by state and applying the applicable legal requirements to each data segment; geofencing residents of certain states to avoid collecting their biometric data entirely; or not doing business in the applicable state. However, these strategies may not be available where businesses are implementing AI models trained on vast troves of information that may include biometric data that was not collected in a compliant manner.
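As a rough sketch of how the first three strategies might be operationalised in a collection pipeline, consider the following. The state groupings, consent model and function are hypothetical assumptions for illustration only, not legal guidance.

```python
# Hypothetical sketch of gating biometric collection by user state.
# The state lists and consent requirements are illustrative
# assumptions only; actual compliance rules require legal review.

BIOMETRIC_STATUTE_STATES = {"IL", "TX", "WA"}  # states with biometric-specific laws
GEOFENCED_STATES = {"IL"}                      # states where collection is avoided entirely

def may_collect_biometrics(state: str, has_written_consent: bool) -> bool:
    """Return True if biometric collection may proceed under this
    illustrative policy; False if it should be blocked."""
    if state in GEOFENCED_STATES:
        # Strategy: avoid collecting from these residents entirely.
        return False
    if state in BIOMETRIC_STATUTE_STATES:
        # Strategy: require notice plus informed written consent,
        # the most stringent standard, before any collection.
        return has_written_consent
    # Blanket-policy strategy: apply the strictest rule everywhere.
    return has_written_consent
```

In practice, the design choice is between maintaining per-state logic like this and simply applying the strictest rule everywhere, which, as noted above, many businesses prefer for simplicity.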

Many data sets used to train AI models are aggregated through web scraping, often without obtaining the consent of the content owners. This can lead to class action suits and the risk of potentially high damages awards. For example, in 2019, IBM received heated criticism when it created a facial recognition tool based on approximately 1 million photos downloaded, without the consent of users, from the photo hosting site Flickr. A class action lawsuit was subsequently filed against IBM under the BIPA, alleging that it improperly collected the biometric data of Illinois residents. The parties recently agreed to dismiss the case.

Third parties using such data sets may also be subject to class action suits. For example, in 2022 a federal judge dismissed BIPA lawsuits brought against Amazon and Microsoft for use of the same dataset created by IBM that included images of Illinois residents, on the basis that the companies’ activities did not “primarily and substantially” occur in the state of Illinois.

By contrast, a May 2022 settlement in ACLU v. Clearview AI demonstrates the risk the BIPA presents for database creators, as well as third parties relying on such databases. Clearview AI provides a facial recognition platform, and, according to Clearview, as of February 2022 it had amassed 10 billion facial photos in its database with plans to reach 100 billion, by scraping pictures from social media and other sites. The ACLU filed an action alleging that Clearview violated the BIPA by collecting and using biometric data of Illinois residents without obtaining prior consent. Clearview settled, agreeing to a permanent ban on selling or granting access to its facial recognition database to private entities and a five-year ban from selling or granting access to its database to any governmental or private entities in Illinois, after which it could resume business with local or state law enforcement agencies in Illinois. Clearview also agreed to delete all facial vectors in the Clearview app that were previously created. Aside from the impact on Clearview’s business, the case demonstrates how important it is for businesses to diligence the sources of their biometric data when contracting with third parties for access to such data. To the extent an AI-based business builds its own tools and software using a third-party database, losing immediate access to that database could have material consequences for its business.

Businesses using biometric data also need to be focussed on potential federal law issues. The FTC has the authority to require ‘algorithm disgorgement’ in cases where it determines data was improperly obtained. Under this enforcement approach, the FTC can require a company to delete the improperly obtained data along with any products developed from such data. The FTC recently demanded that WW (formerly known as Weight Watchers) delete any models or algorithms developed in whole or in part using personal information collected from children in violation of the Children’s Online Privacy Protection Act (COPPA).

Nonetheless, the fact that the FTC may face practical challenges in imposing algorithm disgorgement on AI models does not mean that it will not seek to do so, leaving businesses with the burden of determining how to comply. This uncertainty drives home the importance of complying with all laws governing data collection and use.
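One practical way a business might position itself for such an order is to maintain provenance records linking each model to the data sets used to train it, so that models built in whole or in part on improperly obtained data can be identified. The sketch below is a hypothetical illustration; the record structure, field names and helper function are assumptions, not an FTC-prescribed approach.

```python
# Hypothetical sketch: recording training-data provenance so that
# models derived from a tainted data set can be identified if a
# deletion or disgorgement obligation arises. Illustrative only.

from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    dataset_id: str
    source: str         # e.g., licensed vendor, first-party collection
    consent_basis: str  # e.g., "informed written consent", "none documented"

@dataclass
class ModelRecord:
    model_id: str
    trained_on: list[str] = field(default_factory=list)  # dataset_ids used in training

def models_requiring_review(models: list[ModelRecord],
                            tainted_dataset_ids: set[str]) -> list[str]:
    """Return ids of models trained in whole or in part on tainted data sets."""
    return [m.model_id for m in models
            if tainted_dataset_ids.intersection(m.trained_on)]
```

If a data set’s consent basis is later found deficient, a lookup of this kind identifies every derived model that might fall within a deletion obligation.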

While AI technology presents seemingly limitless possibilities for biometric-based applications for businesses and consumers, it behoves any business seeking to use or implement AI technology in connection with biometric data to tread carefully given the rapidly shifting legal landscape. Businesses should be aware of the widespread reach of the BIPA and the potential ramifications of damages, settlements or other remedies. Beyond the US, other jurisdictions are regulating, or are seeking to regulate, uses of biometric data, further fracturing the patchwork of applicable rules and regulations and increasing the importance of using biometric data in compliance with applicable laws.

 

Mana Ghaemmaghami is an associate at Skadden, Arps, Slate, Meagher & Flom LLP and Affiliates. She can be contacted on +1 (212) 735 2594 or by email: mana.ghaemmaghami@skadden.com.

© Financier Worldwide

