0 like 0 dislike
in General Factchecking by Newbie (310 points)
TikTok may be sending very harmful videos related to suicide and eating disorders to teens within minutes of their creating an account. There is a lot of content related to body image and mental health on TikTok, both negative and positive.
closed with the note: Fact-check selected.

5 Answers

0 like 0 dislike
by Novice (540 points)
selected by

My research on this topic suggests that the study referenced in this article is flawed; a study with high validity would need to be designed differently. Reading the study mentioned in the article, there are several clear problems with how the findings are presented. First, the study states, “Within 2.6 minutes, TikTok recommended suicide content. Within 8 minutes, TikTok served content related to eating disorders.” Nothing in this finding establishes that the content served to the account was negative, or that it encouraged eating disorders or other mental health struggles. As stated in the article, “The spokesperson said the CCDH does not distinguish between positive and negative videos on given topics, adding that people often share empowering stories about eating disorder recovery.” This is an important distinction, because the study claims TikTok is pushing harmful content without ever showing that the content was actually harmful; the word choice appears calculated to support the conclusion the researchers were looking for.

In addition, the researchers themselves acknowledge that “TikTok operates through a recommendation algorithm that constructs a personalized endless-scroll ‘For You’ feed, ostensibly based on the likes, follows, watch-time, and interests of a user.” In an earlier paragraph, they explain that the accounts posing as young teens would pause for a few seconds on eating disorder, self-harm, or suicide content and would like and interact with only those posts. It is contradictory to explain how the algorithm works and then bait it into producing the “negative” result. They go on to say, “TikTok identifies the user’s vulnerability and capitalizes on it,” which, as already explained, is simply the algorithm doing what an algorithm does. Finally, as the article notes, the study periods were short, and as far as my research shows, the findings have not been replicated.

The article was written by Samantha Murphy Kelly, who covers how technology affects people's lives. There are no obvious biases associated with her, other than that CNN is a media platform; neither she nor the network stands to gain from amplifying a flawed study like the one referenced above. The CCDH, on the other hand, makes money from producing studies and from claiming to "counter digital hate," hence the name. It seems the authors of the study and the researchers were not especially thorough.

CCDH study link: https://counterhate.com/wp-content/uploads/2022/12/CCDH-Deadly-by-Design_120922.pdf

CNN article link: https://www.cnn.com/2022/12/15/tech/tiktok-teens-study-trnd

Exaggerated/ Misleading
0 like 0 dislike
by Newbie (300 points)

The claim being made is wildly exaggerated. While harmful content is all over the internet and can be accessed by teens, it isn't pushed onto them automatically the moment they create a TikTok account; more often than not, they are seeking it out. Per the original article, “These accounts briefly paused on and liked content about body image and mental health. The CCDH said the app recommended videos about body image and mental health about every 39 seconds within a 30-minute period.” In other words, the research was skewed in one direction to fit a narrative. The addictive nature of the TikTok algorithm is a real problem and should be reviewed and restricted, but the videos pushed to a viewer are based on their previous likes, comments, shares, and even lingering on a video a second longer than on others. So coaching the feed to show content related to suicide and eating disorders undermines the validity of the study. Furthermore, the study does not indicate whether the content about dieting or eating habits is positive or negative, even though there is a large community of survivors who share their experiences to empower others and keep them from following the same path.
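To make that point concrete, here is a minimal, hypothetical sketch (in Python) of how any engagement-weighted recommender drifts toward whatever a user likes and lingers on. This is not TikTok's actual algorithm; the topic names, weights, and update rules are all invented purely to illustrate why "coaching" the feed tends to produce the result the study reported.

```python
import random

# Toy illustration (NOT TikTok's real system): a feed that samples topics
# in proportion to accumulated engagement scores. All values are invented.
topics = ["cooking", "sports", "comedy", "body_image", "mental_health"]
scores = {t: 1.0 for t in topics}  # start with a neutral, uniform feed

def next_video(scores):
    """Pick the next recommended topic, weighted by accumulated engagement."""
    return random.choices(topics, weights=[scores[t] for t in topics], k=1)[0]

def watch(topic, liked, dwell_seconds):
    """Update scores the way the study's test accounts behaved:
    like and linger only on the sensitive topics."""
    scores[topic] += 0.1                  # any impression counts a little
    if liked:
        scores[topic] += 2.0              # likes count a lot
    scores[topic] += 0.2 * dwell_seconds  # dwell time counts too

# Simulate an account that likes and pauses only on the sensitive topics,
# as the CCDH test accounts did.
random.seed(0)
for _ in range(200):
    topic = next_video(scores)
    engaged = topic in ("body_image", "mental_health")
    watch(topic, liked=engaged, dwell_seconds=8 if engaged else 1)

share = sum(scores[t] for t in ("body_image", "mental_health")) / sum(scores.values())
print(f"Share of feed weight now on the engaged topics: {share:.0%}")
```

Under these made-up weights, a feed that starts out uniform ends up dominated by the two topics the simulated account engaged with, which is roughly the outcome the study's methodology guarantees regardless of the topic chosen.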

Sources:

https://www.cnn.com/2022/12/15/tech/tiktok-teens-study-trnd

Exaggerated/ Misleading
0 like 0 dislike
by Newbie (310 points)
I do believe that TikTok may push potentially harmful content onto teens within minutes, because TikTok's algorithm can quickly amplify content linked to detrimental habits such as severe dieting, self-harm, or substance use. Teens may be exposed to dangerous or inappropriate content soon after joining or after interacting with specific videos, due to the platform's design, which prioritizes engagement. How quickly this short-form content surfaces raises legitimate concerns about teen safety and mental health.
True
0 like 0 dislike
by Newbie (280 points)

Overall, my investigation found that the claim is largely supported: TikTok can expose teen users to harmful content related to suicide and eating disorders very quickly after account creation, although the experience is not identical for every user. The platform hosts both harmful and supportive mental health content, but its recommendation algorithm appears capable of rapidly amplifying risky material based on minimal engagement. This suggests the issue is less about the existence of such content and more about how quickly and intensely it can be delivered to vulnerable users.

For primary sources, I focused mainly on the original research from the Center for Countering Digital Hate, whose study tested TikTok accounts posing as teens. The study found that accounts could be recommended self-harm and eating disorder content within minutes, sometimes repeatedly in short intervals, which helped demonstrate how the algorithm behaves in real time rather than relying on self-reported experiences. The study can be found here: https://www.counterhate.com/tiktok. Additionally, TikTok’s own public statements act as a primary source; the company has said it works to remove harmful content and promote safety, which shows how the platform frames its responsibility and moderation efforts.

For secondary sources, I used reporting from CNN, which summarized and contextualized the study: https://www.cnn.com/2022/12/15/tech/tiktok-teens-study-trnd. This article explained the findings clearly and connected them to broader concerns about teen mental health and social media. I also referenced coverage from outlets like ABC News (https://abcnews.go.com/GMA/Family/tiktok-pushes-harmful-content-teens-39-seconds-new/story?id=95357982), which reinforced the claim by highlighting how quickly harmful content can appear once a user engages with certain videos. These sources helped verify that multiple organizations reported similar findings.

Each source may carry some bias or underlying interest. The Center for Countering Digital Hate is an advocacy organization focused on online harms, so it may emphasize negative outcomes to push for regulation. CNN and ABC News aim to inform but may highlight more alarming aspects of the story to attract readership. TikTok, as a company, has a clear incentive to downplay risks and emphasize safety measures to protect its reputation and user base. Recognizing these perspectives is important when weighing the evidence.

Evidence supporting the claim includes the experimental findings showing that new teen accounts were exposed to harmful content within minutes and that the algorithm intensified recommendations based on minimal interaction. Multiple reports consistently found similar patterns, strengthening the reliability of the claim. Additionally, broader research on social media algorithms supports the idea that engagement-based systems can push users toward more extreme content over time.

However, some evidence complicates or slightly undermines the claim. Not all users will have the same experience; the algorithm depends heavily on behavior, meaning exposure can vary. TikTok also removes large amounts of harmful content and promotes positive mental health resources, which suggests the platform is not solely pushing negative material. Furthermore, the study conditions (accounts deliberately engaging with certain content) may not perfectly reflect every real teen user’s experience.

To contact the original source of the claim, I reached out to the Center for Countering Digital Hate through their official website contact page and social media accounts to ask for clarification on their methodology and findings. As is often the case, there was no direct response, but the organization’s public reports and explanations serve as its official position.

Exaggerated/ Misleading
0 like 0 dislike
by Newbie (420 points)

CNN accurately summarized a 2022 Center for Countering Digital Hate study which found that TikTok test accounts registered as 13-year-olds were shown suicide/self-harm content in as little as 2.6 minutes and eating-disorder content in about 8 minutes. However, the study used controlled test accounts and does not prove that every teen will receive this content or that TikTok directly causes eating disorders or suicidal behavior. TikTok says it does not allow content that promotes, glorifies, or normalizes suicide, self-harm, or eating disorders, but it does allow supportive recovery or awareness content ("New resources to support our community's well-being," TikTok Newsroom). This matters because TikTok disputes that harmful content is allowed, yet admits that these topics appear on the platform. CNN reported the CCDH findings and included TikTok’s pushback, which argued that the study did not reflect real user behavior because researchers repeatedly engaged with specific topics ("TikTok may push potentially harmful content to teens within minutes, study finds," KRDO).

Some potential biases: TikTok has a business interest in defending its safety systems and avoiding regulation or reputational damage, while CNN frames the story around risk because child safety and social media harms are newsworthy topics. The CNN report found that new teen accounts were shown suicide and self-harm content within minutes, with eating-disorder content appearing soon after, and it notes that TikTok acknowledges mental health–related content exists on the platform, which supports the idea that teens can quickly be exposed to both harmful and related material. According to TikTok’s statements cited by CNN, the study may not reflect the typical user experience because the test accounts intentionally engaged with sensitive topics, which likely influenced the algorithm. TikTok also emphasizes that it removes content promoting self-harm or eating disorders and aims to direct users toward supportive resources, suggesting that exposure is not inevitable for all teens.

Exaggerated/ Misleading
