The claim that parents will soon be able to limit their children's interactions with AI chatbots is true.
According to meta.com, the company says it is making "technological interventions" by adding new tools to protect children on Messenger, Instagram, and WhatsApp. Meta also maintains various partnerships, for instance with the Tech Coalition, aimed at keeping kids safe globally.
Furthermore, in the article "Meta Adding AI Chatbot Safety Features for Teens" on TheHill.com, Julia Shapero writes, "The social media giant announced Friday it will add new parental controls for AI chatbots that will allow parents to turn off their teens' access to one-on-one chats with AI characters...". This supports the claim that Meta is following through on these plans for protection. In addition, the head of Instagram, Adam Mosseri, and Meta's chief AI officer, Alexandr Wang, state in a blog post, "We recognize parents already have a lot on their plates when it comes to navigating the internet safely with their teens, and we're committed to providing them with helpful tools and resources that make things simpler for them, especially as they think about new technology like AI". This means that parents will be able to oversee what their kids are doing with AI chatbots and ensure it is age-appropriate.
Similarly, The New York Times, another reliable source, goes into detail on the new safety features. The article highlights how, after much scrutiny from parents, teens' AI chatbot access will be restricted to a "...limited set of characters on age-appropriate topics like education, sports, and hobbies – not romance or other inappropriate content...". To avoid further backlash, Meta is taking action to protect the adolescents on its platforms.
Overall, my findings, drawn from multiple sources (The New York Times, Meta.com, and The Hill), indicate that this claim is true. No evidence was found that contradicts it.