This claim is true. It is verified by Meta’s official website as well as by reputable news sources, including The New York Times and The Guardian, which confirmed Meta’s announcement of AI safety guidelines and new parental controls for teenage users’ accounts.
For my primary source, I found Meta’s article on the new safeguards, written by Adam Mosseri, Head of Instagram, and Alexandr Wang, Chief AI Officer. According to the article, parents will be able to set time limits on AI interactions or turn them off altogether. Parents will also receive information about the topics their teens are discussing with AI characters. Finally, AI experiences for teens are now guided by PG-13 ratings, meaning that AI characters “...should not give age-inappropriate responses that would feel out of place in a PG-13 movie” (Meta).
This information is corroborated by reliable secondary sources, including The New York Times and The Guardian, which also discuss the chatbots’ tendency to engage in provocative discussions about race and to spread medical disinformation. These articles further examine how AI chatbots have been “blamed for driving some children to suicide and sending some adults into delusional spirals” (The New York Times). Meta’s changes came in response to reports that its chatbot standards had deemed sexual conversations between AI characters and children acceptable. Meta spokesperson Andy Stone stated that the company is “...in the process of revising the document and that such conversations with children never should have been allowed” (Reuters).
No available evidence or apparent bias undermines this claim. Reputable news agencies, as well as the article published by Meta itself, support the claim that parents will soon be able to block or limit their child’s interactions with AI characters on Instagram.