At the end of 2024, Meta introduced Instagram Teen Accounts, intended as a safety net to protect young minds from sensitive content and ensure safe online interactions, backed by age-detection techniques. Teen accounts are automatically set to private, offensive words are hidden, and messages from strangers are blocked.
According to an investigation by the youth-focused non-profits Design It For Us and Accountable Tech, Instagram's teen guardrails are not living up to that promise. Over a two-week period, five test accounts registered as teens were monitored, and all of them were shown sexual content despite Meta's promises.
A barrage of sexual content

All of the test accounts were served inappropriate content despite having the app's sensitive content filters enabled. “5 out of 5 of our test teen accounts were algorithmically recommended body image and disordered eating content,” the report said.
In addition, 80% of the participants reported feeling distressed while using Instagram Teen Accounts. Notably, only one of the five test accounts was shown any educational images or videos.
“(Approximately) 80% of the content in my feed related to relationships or crude sex jokes. This material generally stopped short of being fully explicit or showing graphic imagery, but it left very little to the imagination,” one of the testers said.
According to the two groups' report, a shocking 55% of the flagged content depicted sexual acts, sexual behavior, or sexual imagery. Such videos had racked up hundreds of thousands of likes, with one amassing more than 3.3 million.
Instagram's algorithm also pushed content promoting harmful ideas such as “ideal” body types, body shaming, and unhealthy eating habits. Another worrying theme was videos promoting alcohol consumption, alongside videos nudging users toward steroids and supplements to achieve a certain masculine body type.
A complete package of harmful media
Despite Meta's claims of filtering problematic content, especially for teen users, the test accounts were also shown racist, homophobic, and misleading material. Once again, such clips had collectively drawn millions of likes. Videos depicting gun violence and domestic abuse were also pushed to the teen accounts.

“Some of our test teen accounts did not receive Meta's default protections. One account received no sensitive content controls, while some did not get protection from offensive comments,” the report noted.
This is not the first time Instagram (and Meta's other social media platforms, generally) has been found serving problematic content. In 2021, leaked internal documents revealed that Meta was aware of Instagram's harmful effects, especially on young girls dealing with mental health and body image issues.
In a statement shared with The Washington Post, Meta called the report's findings flawed and downplayed the sensitivity of the flagged content. A month earlier, the company had also expanded its teen safety protections to Facebook and Messenger.
“A manufactured report does not change the fact that tens of millions of teens now have a safer experience thanks to Instagram Teen Accounts,” a Meta spokesperson said. However, the spokesperson added that the company was looking into the problematic content recommendations.

