This page was created programmatically; to read the article in its original location, visit the link below:
https://www.cnbc.com/2025/07/23/meta-instagram-teen-safety.html
If you wish to have this article removed from our site, please contact us.
Facebook and Instagram icons are seen displayed on an iPhone.
Jakub Porzycki | Nurphoto | Getty Images
Meta on Wednesday introduced new safety features for teen users, including enhanced direct messaging protections designed to prevent "exploitative content."
Teens will now see more information about who they're chatting with, such as when the Instagram account was created and other safety tips, to help them spot potential scammers. Teens will also be able to block and report accounts in a single action.
"In June alone, they blocked accounts 1 million times and reported another 1 million after seeing a Safety Notice," the company said in a release.
The policy is part of a broader push by Meta to protect teens and children on its platforms, following mounting scrutiny from policymakers who accused the company of failing to shield young users from sexual exploitation.
Meta said it removed nearly 135,000 Instagram accounts earlier this year that were sexualizing children on the platform. The removed accounts were found to be leaving sexualized comments or requesting sexual images from adult-managed accounts featuring children.
The takedown also included 500,000 Instagram and Facebook accounts that were linked to the original profiles.
Meta is now automatically placing teen and child-representing accounts into the strictest message and comment settings, which filter out offensive messages and limit contact from unknown accounts.
Users must be at least 13 to use Instagram, but adults can run accounts representing younger children as long as the account bio makes clear that an adult manages the account.
The platform was recently accused by several state attorneys general of implementing addictive features across its family of apps that have detrimental effects on children's mental health.
Meta announced last week that it had removed about 10 million profiles for impersonating large content producers through the first half of 2025, as part of a company effort to combat "spammy content."
Congress has renewed efforts to regulate social media platforms with a focus on child safety. The Kids Online Safety Act was reintroduced in Congress in May after stalling in 2024.
The measure would require social media platforms to exercise a "duty of care" to prevent their products from harming children.
Snapchat was sued by New Mexico in September, with the state alleging the app was creating an environment where "predators can easily target children through sextortion schemes."