Meta is going to automatically limit the type of content that teen Instagram and Facebook accounts can see on the platforms, the company announced on Tuesday. These accounts will automatically be restricted from seeing harmful content, such as posts about self-harm, graphic violence and eating disorders. The changes come as Meta has been facing increased scrutiny over claims that its services are harmful to young users.
Although Meta already doesn’t recommend this type of content to teens in places like Reels and Explore, the new changes mean it will no longer appear in Feed and Stories either, even if it was shared by someone a teen follows.
“We regularly consult with experts in adolescent development, psychology and mental health to help make our platforms safe and age-appropriate for young people, including improving our understanding of which types of content may be less appropriate for teens,” the company wrote in a blog post. “Take the example of someone posting about their ongoing struggle with thoughts of self-harm. This is an important story, and can help destigmatize these issues, but it’s a complex topic and isn’t necessarily suitable for all young people.”
The company notes that although Instagram and Facebook allow users to share content about their own struggles with suicide, self-harm and eating disorders, Meta’s policy is to not recommend this content and to make it harder to find. Now, when users search for terms related to these topics, Meta will hide the results and instead display expert resources for help. Meta says it already hides results for suicide and self-harm search terms, and that it’s now extending this protection to cover more terms.
Meta is also automatically placing all teen accounts in Instagram’s and Facebook’s most restrictive content control setting. The setting is already applied automatically to new teens joining the platforms; now it will also apply to teens who are already using the apps. The content recommendation controls, called “Sensitive Content Control” on Instagram and “Reduce” on Facebook, are designed to make it harder for users to come across potentially sensitive content or accounts in places like Search and Explore.
Meta will also send notifications encouraging teens to update their settings to make their experience on the platforms more private. The notification will pop up when a teen interacts with an account they aren’t friends with.
Meta says the updates will roll out to all teen accounts in the coming weeks.
The measures announced today come as Meta is scheduled to testify before the Senate on child safety on January 31, alongside X (formerly Twitter), TikTok, Snap and Discord. Committee members are expected to press executives from the companies on their platforms’ inability to protect children online.
The changes also come as more than 40 states are suing Meta, alleging that the company’s services are contributing to young users’ mental health problems. The lawsuit alleges that over the past decade, Meta “profoundly altered the psychological and social realities of a generation of young Americans” and that it is using “powerful and unprecedented technologies to entice, engage, and ultimately ensnare youth and teens.” It accuses Meta of disregarding “the serious dangers to promote their products to prominence to make a profit.”
In addition, Meta has received another formal request for information (RFI) from European Union regulators who are seeking more information regarding the company’s response to child safety concerns on Instagram. The regulators are asking the company what it’s doing to tackle risks related to the sharing of self-generated child sexual abuse material (SG-CSAM) on Instagram.