
US: Online safety for children in the spotlight

There has been rapid growth in regulation of how internet users – particularly children – are treated online. And compliance with the varying requirements across the world has become ever more challenging. Reflecting growing concerns about children’s online experiences, we report on two key developments in regulation of online safety in the US at a state and federal level.

01 New York takes a bold step to protect minors on social media

New York Governor Kathy Hochul has signed into law a “nation-leading” piece of legislation, intended to protect children against the perceived harms of social media. The Stop Addictive Feeds Exploitation (SAFE) for Kids Act will prohibit social media platforms from displaying “addictive feeds” to children under the age of 18 without parental consent.  

The law applies to social media platforms regardless of their state of incorporation, with respect to any covered minor/covered user physically in the state of New York at the time of accessing the social media platform.

Limits to addictive algorithmically curated content

The legislation aims to limit the amount of content that is algorithmically curated and then presented to children based on their social media activity. 

Algorithmic content recommendations are based not only on a user’s “likes” and “follows”, but also on other factors like the time of day, or the amount of time spent looking at a particular piece of content. In its statement of legislative intent, the New York legislature describes such content as being “addictive” and intended to “creat[e] a feed tailor-made to keep each user on the platform for longer periods.” 

Algorithmic content recommendations must be disabled by default for users under 18 years of age, unless or until the social media platform obtains verifiable parental consent to switch that function on. 

Verifiable parental consent

The statute directs the NY Attorney General to promulgate regulations identifying acceptable methods for obtaining verifiable parental consent, but it seems plausible that such regulations may broadly mirror the parental consent rules of COPPA, the leading federal law for protecting minors’ privacy online. 

Non-addictive content

The SAFE for Kids Act will still permit social media platforms to present minors with non-addictive feeds and non-addictive content, such as feeds listed in chronological order, to ensure that children can still obtain all the core benefits of social media.

Time-barring notifications

Furthermore, the SAFE for Kids Act will curtail the hours during which minors can receive notifications about their social media feeds, specifically between midnight and 6 a.m. 

Balancing children’s rights against privacy concerns

New York’s SAFE for Kids Act is not the first state law to tackle the perceived addictiveness and harms of social media for children.  Previously, Arkansas, Ohio, and California have all passed their own laws intended to curb certain addictive aspects of social media for children. 

But these other laws have been blocked by courts, for the time being, on constitutional grounds related to the First Amendment and privacy concerns (regarding the compelled disclosures of parents’ personal data to verify parental consent). 

The leading opponent of these laws and the plaintiff in the legal challenges is a tech trade association, NetChoice, which has already come out in forceful opposition to the New York SAFE for Kids Act. Thus, it remains to be seen whether New York’s attempt will survive legal challenge.

02 Bipartisan Congressional efforts to battle online social harms

Meanwhile, at a federal level, there has been a focus on the impact of Generative AI, which creates some serious, and potentially very personal, harms online.

The AI Roadmap on online harms 

The Bipartisan Senate AI Working Group's AI Roadmap also encourages the relevant Senate committees to consider or develop AI-specific legislation concerning social harms, including to:

  • address online child sexual abuse material (CSAM), including ensuring existing protections specifically cover AI-generated CSAM
  • address similar issues with non-consensual distribution of intimate images and other harmful deepfakes
  • protect children from potential AI-powered harms online by ensuring companies take reasonable steps to consider such risks in product design and operation. The AI Working Group is also concerned by data demonstrating the mental health impact of social media and expresses support for further study and action by the relevant agencies to understand and combat this issue
  • ban the use of AI for social scoring, protecting fundamental freedoms, in contrast with the widespread use of such a system by the Chinese Communist Party.

The Take it Down Act

In response to the rise and proliferation of non-consensual deepfake pornographic images and content – targeting celebrities and private citizens (including minors) alike – a bipartisan group of Senators, led by Senator Ted Cruz, has just introduced a US federal bill, called the Take It Down Act. 

This would require social media platform providers, among other things, to “take down” deepfake porn imagery (including taking reasonable steps to remove copies of the images, including copies shared in private groups) within 48 hours after receipt of a victim’s request.  

The bill would empower the Federal Trade Commission to enforce the requirements and seeks to criminalize the publishing or threatened publication of deepfake sexually explicit images.


The Take It Down Act joins an existing bipartisan bill, the Disrupt Explicit Forged Images and Non-Consensual Edits Act (the DEFIANCE Act), which was introduced in the Senate in January 2024 (led by Senator Dick Durbin) and in the House of Representatives in March 2024 (led by Representative Alexandria Ocasio-Cortez). The DEFIANCE Act would characterize non-consensual sexually explicit images as digital forgeries and provide a right of action for victims against the images’ creators, distributors and possessors. 

While the DEFIANCE Act has been criticized by some as being overbroad or potentially stifling innovation, its supporters have argued that it would not create liability for technology platforms (unlike the Take It Down Act, which would require social media platforms to police and remove content).

Looking ahead – AI as part of the solution?

While attention is focused on how AI can exacerbate online harms in creating and disseminating harmful content, companies and regulators are also looking to AI solutions for verifying age and identity, detecting illegal and harmful content, and automating mitigation. 

As tech advances and new regulations come into force, companies will need to continue to build and refine their compliance solutions to meet the requirements of the new regimes.

Learn more: Games and Interactive Entertainment Webinar: The new regulatory frontier – online safety and privacy