Best Practices to Avoid a Wrongful Collection Event

Technology is making our lives easier than ever. With just a few clicks, packages are delivered straight to our doors. Doctor appointments can be made online in seconds. If something goes awry, we can turn to chatbots for 24/7 support.
The price to pay for convenience? Website owners know what we buy, which medical conditions we search, and the problems we share with virtual assistants.
While regulators and lawmakers work toward answers on privacy in the digital age, technology moves faster. In turn, privacy litigation is on the rise — the number of federal data privacy lawsuits filed has more than doubled since 2018. Cases show no sign of slowing down anytime soon, either.
Below, we’ll explore the evolving legal landscape through examples of litigation and how businesses can reduce their exposure to legal action.
What qualifies as wrongful collection of data?
When businesses are hit with a wrongful collection of data lawsuit, it may come as a total surprise.
Without a federal, comprehensive privacy law in the United States, what qualifies as a “wrongful” data collection practice is decided by a patchwork of state laws and regulatory entities. With the regulatory and legal landscape in flux, it’s common for a business to be unintentionally mishandling data.
While specific requirements vary, a general guideline for businesses looking to protect data is to provide consumers with a privacy notice and opportunities to opt-in and opt-out of data collection:
Notice: Consumers should know what personal information a business collects and what it discloses to third parties.
Opt-out: Provide consumers with the opportunity to opt out of certain data collection practices, such as targeted advertising or the sharing of their personal information.
Opt-in: Require users to clearly express consent for the collection of their data through cookie consent banners or sign-up options.
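The notice/opt-out/opt-in pattern above can also be enforced in code by gating every collection call on a recorded, affirmative consent choice. The sketch below is illustrative only; the `ConsentStore` class and purpose names are hypothetical, not from any specific consent-management framework.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentStore:
    """Records each user's affirmative opt-in choices per purpose.

    Hypothetical sketch: a real implementation would also persist
    timestamps and privacy-notice versions for audit purposes.
    """
    choices: dict = field(default_factory=dict)  # user_id -> set of purposes

    def opt_in(self, user_id: str, purpose: str) -> None:
        self.choices.setdefault(user_id, set()).add(purpose)

    def opt_out(self, user_id: str, purpose: str) -> None:
        self.choices.get(user_id, set()).discard(purpose)

    def allowed(self, user_id: str, purpose: str) -> bool:
        # Default is deny: no affirmative action means no collection.
        return purpose in self.choices.get(user_id, set())

def collect_analytics(store: ConsentStore, user_id: str, event: dict) -> bool:
    """Records the event only if the user opted in to analytics tracking."""
    if not store.allowed(user_id, "analytics"):
        return False
    # ... send event to the analytics pipeline ...
    return True
```

The design choice worth noting is the default: with no recorded choice, `allowed` returns False, so collection never happens by silence or inaction.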
Since the implementation of the European Union’s General Data Protection Regulation (GDPR) in 2018, which was created to give individuals better control over their data in the digital era, other comprehensive consumer privacy bills have followed suit, like the California Consumer Privacy Act (CCPA) and other state laws in Colorado, Connecticut, and Delaware (to name a few).
As legislators take strides to protect consumer data state-by-state, businesses need to prioritize understanding what collection practices commonly lead to lawsuits and the related compliance measures they can take to reduce their legal exposure.
What is considered sensitive information?
Personally identifiable information (PII): Any information that can be used to identify a person, such as Social Security number, email, address, or biometric identifiers.
Protected health information (PHI): Any information in a medical record that can be used to identify a patient and that was used in the course of providing a health care service, such as diagnosis or treatment. PHI encompasses PII, as well as health insurance information, healthcare correspondence, and billing information.
Payment card information (PCI): Any data used during payment card transactions, established by the Payment Card Industry Data Security Standard (PCI-DSS). PII also falls under the umbrella of PCI.
What data collection practices are being challenged?
Tracking pixels and HIPAA
Tracking pixels are tiny snippets of code embedded on websites that allow businesses to collect data on users’ online activities, like clicks and time spent on page. By creating unique IDs per visitor, pixels are able to track user activity across sessions and devices.
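To make the mechanism concrete, here is a hedged sketch of the server side of a tracking pixel: the endpoint hands back a 1x1 image and assigns each new visitor a persistent random ID (normally stored in a cookie), which is what lets activity be stitched together across sessions. Function and cookie names are illustrative assumptions, not any vendor's actual API.

```python
import secrets

# The classic "pixel" payload: bytes of a tiny transparent GIF.
PIXEL_GIF = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00"
             b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01"
             b"\x00\x00\x02\x02D\x01\x00;")

def handle_pixel_request(cookies: dict, page_url: str, log: list) -> tuple:
    """Simulates a pixel endpoint: returns the image plus cookies to set.

    If the visitor has no ID yet, mint a random one; either way, log the
    visit against that ID so activity links up across sessions.
    """
    visitor_id = cookies.get("_vid") or secrets.token_hex(8)
    log.append({"visitor_id": visitor_id, "url": page_url})
    return PIXEL_GIF, {"_vid": visitor_id}
```

Because the browser sends the same `_vid` cookie back on every request, two page views from one browser share a visitor ID — exactly the cross-session linkage described above, and the reason a pixel on a sensitive page can tie health-related browsing to an identifiable person.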
Controversy surrounded tracking pixels after it was alleged that personal information was being sent to third-party companies, like Meta and Google. In 2022, The Markup found that one-third of the top 100 hospitals in the United States featured pixel technology on their websites. In some cases, tracking pixels were found in password-protected patient portals. PHI, such as full names and medication details, was being sent to third-party organizations in violation of the Health Insurance Portability and Accountability Act (HIPAA).
Following the news, privacy class-action lawsuits rolled in against Meta, as well as the healthcare entities using tracking technology. Plaintiffs claimed that patient information was improperly collected without consent and proper disclosure.
In response, the HHS Office for Civil Rights (OCR) released guidance prohibiting healthcare providers from using pixels on public-facing web pages and warned that user web activity constituted "individually identifiable health information" when IP addresses were linked to public-facing websites that cater to specific health conditions or providers.
A recent ruling found that HHS took its guidance too far — the combination of a visited webpage and an IP address cannot, by itself, qualify as individually identifiable health information. However, ad tracking is still prohibited on user-authenticated pages (where someone is logged in and identifiable).
Tracking pixels and the VPPA
Outside of healthcare, consumers have sued companies for sharing video viewing history and PII with third parties without their consent.
In the last several years, there has been a surge of cases alleging that any business with videos present on their website was improperly collecting and sharing their personal information.
The creative legal twist? The Video Privacy Protection Act (VPPA) was passed in 1988, after a Supreme Court nominee's video rental history was leaked during the nomination process, long before we all turned to the world wide web.
The VPPA has specific language prohibiting the disclosure of personally identifiable information of a “consumer.” Unsurprisingly, a law from the VHS era defines consumer — “subscriber of goods or services from a videotape service provider” — differently than we would today, resulting in higher dismissal rates compared to other privacy litigation.
Additionally, defendants in the Central District of California have successfully defended against VPPA claims by arguing that their business was not “centered, tailored, or focused around providing and delivering audiovisual content,” and therefore, they were not a “video tape service provider” under the statute.
If more courts follow suit, this could prove to be a fruitful defense and effectively limit the scope of the VPPA.
Biometric data and BIPA
The Biometric Information Privacy Act (BIPA) is a privacy law in Illinois that regulates the use of biometric data, including fingerprints, eye scans, voiceprints, and facial geometry scans. The law requires that entities obtain an individual’s consent before collecting, obtaining, or disclosing an individual’s biometric data.
Since the act's passage in 2008, lawsuits have surfaced alleging that employers collecting biometric information failed to provide advance notice or shared the data with third parties without consent. Importantly, BIPA is the only biometric privacy law in the United States that allows an individual or private party to file a civil lawsuit against a business.
A 2023 ruling decided that a separate BIPA claim accrues each time biometric data is collected, even if the same entity repeatedly collects the same biometric data. In simple terms, the same business could be liable for thousands of individual BIPA violations, at $1,000 to $5,000 each.
To shield businesses from potentially devastating financial results, the Illinois legislature amended BIPA in 2024. The amendment treats repeated collection of the same biometric information as a single violation, reversing the 2023 ruling.
Businesses should still remain vigilant with how they collect and share biometric information. In light of the news, the common litigation strategy could shift to focus on cases with a large number of one-time violations rather than many repeated violations.
Data risks with AI chatbots
The European Union’s Artificial Intelligence (AI) Act is the first comprehensive regulatory framework for AI. The act regulates AI systems based on risk level, with each risk level coming with its own set of requirements and provider obligations.
With 68% of people admitting to using an automated chatbot for assistance, there's no denying that chatbots and digital assistants have become commonplace across the web. However, the AI Act classifies chatbots as "limited-risk" AI systems because they pose specific risks of impersonation and deception.
While past complaints over the use of chatbots didn't go far in court, the introduction of the AI Act's regulatory framework could reignite legal action. Businesses should follow transparency obligations and make it abundantly clear to website users that they are communicating with an AI system and not a person.
In addition, typical privacy policies apply and users should have a clear understanding of what happens to information they share with chatbots and if their conversations are being recorded or stored.
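In code, meeting these transparency obligations can be as simple as putting the required disclosures at the front of every session. The sketch below is an assumption-laden illustration: the disclosure wording and session structure are invented here, not language mandated by the AI Act.

```python
# Hypothetical disclosure text; actual wording should come from counsel.
AI_DISCLOSURE = "You are chatting with an automated assistant, not a human."

def start_chat_session(records_transcript: bool) -> list:
    """Opens a chat session with transparency messages sent up front.

    The AI disclosure is unconditional; the recording notice is added
    whenever the business stores or records conversations.
    """
    messages = [AI_DISCLOSURE]
    if records_transcript:
        messages.append("This conversation may be recorded and stored.")
    return messages
```

Putting the disclosure in the session-opening path, rather than in a buried policy page, means no conversation can begin without it.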
How businesses can put privacy first
Privacy notices and consent
Website and app privacy policies should clearly explain the use of tracking technology and provide users with the opportunity to opt out of tracking when required. If a business is using tracking pixels or collecting biometrics, it should be able to explain why the technology and process are necessary for business operations.
Users should be required to take affirmative action to indicate their consent to privacy notices, like clicking a button or checking a box. Businesses also need to comply with their privacy policy by practicing what is stated and regularly auditing such practices.
Get the right teams involved
When implementing new approaches to collecting, storing, or using data, businesses should make sure to include legal, marketing, and IT teams at the table. By involving all necessary parties, businesses reduce the risk that ad-tracking technology will end up in places it shouldn’t be.
Vendor management
Businesses should look for ways to reduce their third-party risk. Before working with a new vendor, establish a process to make sure that contractors have reasonable data security practices in place. For existing partnerships, businesses should keep an inventory of their vendors and what sensitive information vendors can access.
Data mapping
Data mapping is necessary for businesses to be compliant with the GDPR, CCPA, and other privacy laws. Businesses generate and collect an overwhelming amount of information — data mapping helps create a fluid snapshot of where data is coming from, what is being collected, why it’s being collected, and how long it will be retained. Data mapping also helps businesses understand where the most sensitive information lives so they can implement stronger security controls where necessary.
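A data map can start as a simple structured inventory. The sketch below uses illustrative field names (not drawn from any statute or framework) to record what each data element is, where it comes from, why it's collected, and how long it's kept, then filters for the sensitive entries that need stronger controls.

```python
from dataclasses import dataclass

@dataclass
class DataMapEntry:
    element: str         # what is collected, e.g. "email address"
    source: str          # where it comes from, e.g. "signup form"
    purpose: str         # why it is collected
    retention_days: int  # how long it is kept
    sensitive: bool      # flags PII/PHI/PCI needing stronger controls

def sensitive_entries(data_map: list) -> list:
    """Surfaces the entries that warrant stronger security controls."""
    return [e for e in data_map if e.sensitive]

# Example inventory (entries are made up for illustration).
data_map = [
    DataMapEntry("email address", "signup form", "account creation", 730, True),
    DataMapEntry("page view counts", "web analytics", "product metrics", 90, False),
    DataMapEntry("card number", "checkout", "payment processing", 1, True),
]
```

Even this minimal shape answers the questions regulators and courts keep asking: what was collected, from where, for what purpose, and for how long.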
Check cyber coverage
Due to the current regulatory and legal landscape, many cyber insurance providers have introduced strict, comprehensive exclusions for costly wrongful collection of data claims. Businesses should check their current cyber policies and be aware of where they are protected (and where they are not). By taking steps to reduce exposure to litigation and prioritize privacy, businesses may be able to find more comprehensive coverage.