Meta Platforms Inc., the parent company of Instagram and Facebook, has announced a significant change aimed at protecting young users. After heavy criticism over AI chatbots that sometimes behaved in flirtatious or otherwise inappropriate ways, Meta will give parents more power to manage how their teenagers use these AI features. The move signals that the company is taking the safety of minors more seriously after months of unwelcome scrutiny.
The company said on Friday that parents will soon be able to turn off their teens’ private chats with AI characters. This means that if a teenager is chatting with an AI assistant or one of Meta’s AI personalities, their parents will have the option to stop these one-on-one conversations completely. The feature is expected to launch early next year on Instagram in the United States, United Kingdom, Canada, and Australia.
This update follows weeks of criticism directed at Meta for allowing AI chatbots to have suggestive or overly personal conversations with young users. In August, Reuters reported that Meta’s AI system did not fully prevent uncomfortable or adult-style chats with minors. This raised serious questions about whether the company was doing enough to ensure child safety in the digital world.
Meta’s decision now shows a shift in how the tech giant approaches online safety. “We want our platforms to be a place where young people can explore safely,” Instagram head Adam Mosseri said in a blog post. Along with Chief AI Officer Alexandr Wang, Mosseri explained that Meta wants parents to be more involved and informed about how their teens use artificial intelligence features.

The company has also announced that its AI experiences for teenagers will be guided by the PG-13 movie rating system. This means the chatbots will be designed to filter out adult content and sensitive topics, much as a PG-13 film avoids material considered inappropriate for teenage viewers. Just as movie ratings help parents judge what is suitable for their children to watch, Meta hopes the PG-13 standard will create a safer, more comfortable environment for teens online.
But this change goes beyond just limiting access. Meta’s new tools will also allow parents to block specific AI characters if they feel that a certain bot is not appropriate for their teen. Parents can also view broad topics that their teens discuss with AI chatbots or the Meta AI assistant—without reading the exact messages. This gives them insight into their child’s online activities while still respecting their privacy.
Meta clarified that even if parents choose to disable private AI chats, teens will still have access to Meta’s main AI assistant for general and safe interactions. The company said that this assistant will operate with “age-appropriate defaults,” meaning it will automatically avoid mature or harmful content when interacting with younger users.
This announcement comes at a time when U.S. regulators are paying close attention to how AI companies manage the safety of minors. Many experts and parents have raised concerns that AI chatbots could expose teens to emotional manipulation, suggestive language, or even harmful advice. The growing popularity of AI assistants, while exciting, also poses new challenges for families and regulators trying to ensure online safety.
Meta’s move may also be seen as an effort to rebuild trust. Over the past few years, the company has faced repeated criticism for how its platforms affect teenagers’ mental health. Reports have shown that long hours on apps like Instagram can lead to anxiety, depression, and negative self-image, especially among young girls. The addition of AI chatbots made things more complicated, as these bots could simulate emotional or romantic conversations that some found inappropriate for minors.
By allowing parents to control these interactions, Meta is trying to balance innovation with responsibility. Many believe that artificial intelligence can be an amazing tool for creativity and learning—but only if it is properly managed. Teenagers today often use technology to express themselves, explore interests, or seek advice. However, without proper safety rules, even helpful AI tools can cross the line into dangerous territory.
“Technology should help young people grow, not put them at risk,” Wang said. His statement reflects a growing awareness within the company that AI must be handled with extra care when it comes to children and teenagers.
Parents around the world are likely to welcome these new controls. In recent years, they have voiced frustration about not being able to monitor or limit their children’s digital activities effectively. While many social media apps already offer parental controls, the addition of AI chat tools made things trickier because these conversations often felt personal and private. Meta’s new update might ease some of those worries by letting parents step in when they feel it’s necessary.
Still, experts say that technology alone cannot solve every problem. Parents, teens, and companies must work together to create safer online spaces. The new AI settings are a step in the right direction, but digital education—teaching teens about online boundaries, privacy, and respect—remains equally important.
As social media continues to evolve, companies like Meta will need to adapt constantly. The use of AI in platforms such as Instagram and Facebook is growing rapidly, and it can shape how young people think, talk, and behave. By giving parents more say and setting clearer boundaries, Meta is showing that it is aware of its responsibility to protect the next generation of users.
In the end, this move is not just about technology—it’s about trust. Teenagers want to explore and connect, but they also need safety nets. Parents want to guide and protect, but they also want their children to enjoy the benefits of modern technology. Meta’s new approach tries to meet both needs by giving families the tools to make smart choices together.