
Dechert Cyber Bits

 

Issue 74 - April 10, 2025


EU’s Highest Court Rules on Automated Decision-Making

The Court of Justice of the EU (“CJEU”) recently issued a significant ruling regarding the scope of data subjects’ right of access under the GDPR in relation to automated decision-making, including profiling. The CJEU clarified that individuals must receive meaningful information about the logic involved in automated decision-making processes, balancing transparency with the protection of other fundamental rights and commercial considerations, such as third-party data or trade secrets, in line with the principle of proportionality.

The CJEU specified that individuals must receive an explanation of the “procedures and principles” applied in automated decision-making, including what personal data was used and how it was utilized, rather than a detailed explanation of the algorithms or the full algorithm itself. This information should be provided in a concise, transparent, intelligible, and easily accessible form, using clear and plain language.

The Court determined that Member States cannot enact laws that entirely deny individuals the right of access whenever that right might jeopardize a trade secret. Instead, organizations are required to submit the purportedly protected information to the relevant supervisory authority or court. This allows for a case-by-case assessment that balances the rights and interests involved and determines the extent of the data subject's right of access to the information.

Takeaway: The CJEU's clarification that "meaningful information" does not require disclosing underlying algorithms or trade secrets to data subjects will reassure organizations. However, businesses may be concerned about having to share trade secrets with supervisory authorities or courts. The decision highlights the importance of transparency and clarity in automated decision-making, which is particularly challenging with AI-driven processes where staff may be far removed from the decision logic.


Illinois Judge Scraps Rulings Applying BIPA Change Retroactively

Recently, Judge Elaine E. Bucklo of the United States District Court for the Northern District of Illinois vacated her prior rulings in two separate cases that applied a recent amendment to the state's Biometric Information Privacy Act (“BIPA”). Specifically, the court vacated its prior determination that the amendment was a clarification of existing law and therefore could be applied retroactively to conduct occurring before the amendments went into effect. 

The BIPA amendment at issue—which went into effect on August 2, 2024—created a new limitation on liability. Originally, BIPA (as interpreted by the Illinois Supreme Court) provided statutory damages on a per-violation basis, meaning that a defendant could be held liable for wrongfully collecting or disclosing the same information multiple times (e.g., an employer who fingerprinted employees each day). Under the amendment, all unauthorized collection (or disclosure) of the same biometric information pertaining to the same person using the same method constitutes a single violation, no matter how many times that specific piece of biometric information is collected or disclosed.

The first case to consider the retroactivity of the amendment was Gregg v. Central Transport LLC, decided in November of 2024. In Gregg, Judge Bucklo held, as a matter of first impression, that the amendment was a “clarification” to existing law, so it could be applied retroactively. In other words, defendants could leverage the new rule limiting recovery for repeated violations even in cases where the plaintiff filed suit before August 2, 2024. Judge Bucklo took the same approach two days later in Amigon v. Old Dominion Freight Line Inc. Following Gregg, however, two other judges in the same district reached the opposite conclusion, that the BIPA amendment applied only prospectively—Judge Alexakis in Schwartz v. Supply Network, Inc. and Judge Ellis in Giles v. Sabert Corp. Now, on reconsideration of her orders in Gregg and Amigon, Judge Bucklo has reversed course and joined her colleagues in holding that "the better interpretation of the amendment is that it effected a change in the law" and, therefore, cannot be applied retroactively to claims that were filed before the amendment. 

Takeaway: The plaintiffs' and defense bars alike have been waiting for further clarification as to whether the BIPA amendment would apply retroactively. The implications are huge—a difference in damages that could swing billions of dollars in large pending consumer class actions. While this question is by no means settled, these two cases are a setback for potential BIPA defendants seeking to limit their exposure from claims filed pre-amendment. What was previously a 2-1 split among district courts has now become a 3-0 consensus against retroactivity. Notably, however, these decisions all have involved litigation that commenced before the amendment. Courts have not yet considered whether this approach extends to future suits arising out of pre-amendment conduct. That will no doubt be the subject of more litigation. Stay tuned.


New York AG Secures Another Data Security Settlement with Auto Insurer

On March 20, 2025, New York Attorney General Letitia James secured $975K in penalties from Root Insurance Co. (“Root”), in connection with claims that the auto insurer failed to protect drivers’ personal information from being swept up in an industry-wide hacking campaign that targeted online auto-insurance quoting applications. According to the Agreement, some of the stolen information was then used to perpetrate unemployment benefits fraud.

Specifically, the Agreement states that Root’s deficient security practices enabled bad actors to exploit vulnerabilities in the pre-fill feature of Root’s auto-insurance quoting tool. The tool would populate users’ personal information before the user had the chance to enter that data themselves, disclosing users’ full driver’s license numbers, which also appeared in a PDF auto-generated at the end of the process. According to the Agreement, Root knew about bad actors’ exploitation of the pre-fill feature in January 2021 but failed to adequately assess the breach or its scope, and used deficient controls in its attempts to thwart later attacks. Root did not admit any wrongdoing in connection with the settlement.

In addition to penalties, the Agreement requires Root to take steps to bolster its information security practices, including by maintaining a comprehensive information security plan, developing and maintaining a data inventory of private information, and maintaining reasonable authentication procedures for access to private information, among other requirements. 

Takeaway: The Agreement with Root is another in a series of settlements by the NY Attorney General, highlighting a sustained focus on cybersecurity enforcement. Although it may seem like a "blame the victim" scenario, Root's alleged knowledge of the vulnerability in January 2021 and failure to act may have prompted the NY Attorney General's action. Post-breach, businesses must conduct thorough forensic analysis and implement recommended remediation to mitigate future liability.


Websites’ Visible Privacy Statements are Sufficient Under PA Wiretapping Law

A federal judge ruled that websites that disclose third-party data collection in privacy statements that a "reasonably prudent person" could see do not violate Pennsylvania's wiretapping laws. Judge William S. Stickman IV’s opinion in Popa v. Harriet Carter Gifts, Inc. emphasizes the importance of user consent—whether express or implied—to online tracking, as well as the role privacy policies play in determining whether a user consents to such tracking.

Pennsylvania’s Wiretapping and Electronic Surveillance Control Act makes it illegal to intentionally intercept, disclose, or use online communications without the consent of all parties involved. In his opinion, Judge Stickman found that the plaintiff implicitly consented to alleged interception of her data despite her insistence that she had never reviewed the defendant-website’s privacy statements.

The court’s application of the reasonable person standard looks to common sense; both the visitor and the website must be reasonable. With respect to the visitor, the court evaluated whether a reasonable person could have been alerted that third parties may track that person’s online activity. With respect to the website, the court determined that if a privacy policy is reasonably conspicuous on a website, a visitor’s consent to the policy may be implied. Here, because the policy was reasonably conspicuous, the plaintiff was deemed to be on notice of the defendant's data collection practices and therefore to have implicitly consented to the terms of the agreement.

Takeaway: The court's opinion in Popa v. Harriet Carter Gifts, Inc. underscores the importance of intuitive website design and prominently displayed privacy policies that clearly outline data collection practices, allowing users to review them before using the site. This case reinforces that a consumer's claim of not having seen or read the policy is insufficient if the policy is reasonably conspicuous and written in clear language. Privacy policies must be conspicuous, straightforward, and transparent, adhering to the old adage “say what you do, do what you say.”


UK Regulator Begins Enforcing Online Safety Act Codes

On March 17, 2025, Ofcom, the UK’s communications regulator, began enforcing its Illegal Harms Codes of Practice under the UK Online Safety Act (“OSA”), marking the first major milestone in enforcement of the OSA. The Codes were originally published in December 2024, giving online service providers a three-month preparation period. The OSA applies to providers of search services and of services that allow users to share content online or to interact with each other online. The Codes require organizations to implement stringent safety measures to combat illegal content. Key requirements include appointing a senior executive responsible for OSA compliance, adequately funding content moderation teams, enhancing algorithmic testing to curb the spread of illegal content, and removing accounts associated with terrorist organizations. Additionally, organizations must proactively detect and eliminate child sexual exploitation and abuse material using advanced tools like automated hash-matching. However, Ofcom has indicated that it will take a risk-based approach to enforcement and that the risk levels posed by a service will dictate the extent to which specific measures set out in the Codes are expected to be implemented.

The OSA covers over 100,000 online services, including search engines and platforms hosting user-generated content, and addresses 130 priority offences such as child sexual abuse, terrorism, and fraud. Failure to comply with the OSA’s measures, including completing the risk assessment process, could result in fines of up to 10% of an organization’s global revenue or £18 million, whichever is greater. Business disruption measures, such as blocking orders, are also on the table for more serious infringements. Ofcom has indicated its readiness to enforce these regulations and will hold further consultations to expand the Codes, potentially including measures like banning accounts that share child sexual abuse material and implementing crisis response protocols.

Takeaway: The enforcement of Ofcom’s Illegal Harms Codes under the OSA represents a significant shift towards proactive regulation of online harms. Companies must now demonstrate accountability and take measures to prevent and remove illegal content. Attention will focus on Ofcom’s initial enforcement actions and whether it will employ its strongest powers or adopt a more collaborative approach. Further developments are anticipated.


Dechert Tidbits

FTC Democrat Members Challenge Dismissals in Federal Court

On March 27, 2025, recently terminated FTC members Rebecca Kelly Slaughter and Alvaro M. Bedoya filed a lawsuit in the federal district court for the District of Columbia, contesting their dismissals by President Trump. The lawsuit argues that their terminations violate long-established legal precedents that protect FTC Commissioners from removal without cause.

FTC to Hold Workshop on The Attention Economy

The FTC announced a workshop titled “The Attention Economy: How Big Tech Firms Exploit Children and Hurt Families,” scheduled for May 28, 2025, at the FTC’s headquarters in Washington, D.C., and to be streamed online. The event will gather parents, child safety experts, and government leaders to discuss the impact of Big Tech's addictive design features on children and families and explore solutions such as age verification and parental consent requirements.

AI Is a Growing Focus for Corporate Boards and Proxy Proposals, According to Analysts

According to a report released Wednesday by ISS-Corporate, public companies and their investors increased their focus on artificial intelligence last year. The report, together with a customer advisory warning issued by the US Commodity Futures Trading Commission the same day, shows that AI’s opportunities and inherent risks are top of mind for boardrooms and shareholders across industries.


We are honored to have been recognized in The Legal 500, Chambers USA, nominated by The American Lawyer for the Best Client-Law Firm Team award with our client Flo Health, Inc., and named Law360 Cybersecurity & Privacy Practice Group of the year! Thank you to our clients for entrusting us with the types of matters that led to these recognitions.


Recent News and Publications



Dechert Cyber Bits Partner Committee


"Dechert has assembled a truly global team of privacy and data security lawyers. The cross-practice specialization ensures that clients have access to lawyers dedicated to solving a range of clients’ legal issues both proactively and reactively during a data security related crisis or a litigation."

"The privacy and security team collaborates seamlessly across the globe when advising clients."
- Quotes from The Legal 500

 

Dechert’s global Cyber, Privacy and AI practice provides a multidisciplinary, integrated approach to clients’ privacy and cybersecurity needs. Our practice is top ranked by The Legal 500, and our partners are well-known thought leaders and sought-after advisors in the space with unparalleled expertise and experience. Our litigation team provides pre-breach counseling and handles all aspects of data breach investigations, as well as the defense of government regulatory enforcement actions and class action litigation, for clients across a broad spectrum of industries. We have handled over a thousand data breach investigations of all types—including ransom/cyber extortion, vendor/supply chain compromise, and DDoS attacks—brought by threat actors of all kinds, from nation-state actors to organized crime to insiders. We also represent clients holistically through the entire life cycle of issues, providing sophisticated, solution-oriented advice and counseling on cutting-edge data-driven products and services, including trend forecasting, personalized content, and targeted advertising across sectors, under such key laws as the CCPA, CPRA and other state consumer privacy laws; Section 5 of the FTC Act; the EU/UK GDPR; the e-Privacy Directive; and cross-border data transfer rules. We also conduct privacy and cybersecurity diligence for mergers and acquisitions, financings, corporate transactions, and securities offerings.

View Previous Issues