Ofcom Issues Warning to Tech Companies Over Chatbots Mimicking Real and Fictional People

Ofcom, the UK’s communications regulator, has issued a warning to tech companies regarding the potential risks posed by chatbots that impersonate real and fictional individuals. The warning follows reports of distressing incidents where chatbots were created to imitate deceased British teenagers Brianna Ghey and Molly Russell. These incidents have raised concerns about the safety and impact of chatbot-generated content, particularly under the new UK Online Safety Act.

The warning highlights that user-created chatbots could fall under the scope of the UK’s digital safety laws. Ofcom clarified that platforms allowing users to build and share their own chatbots, including those that mimic real people or fictional characters, are subject to the regulations of the Online Safety Act. This covers any service that lets users create chatbots for others to interact with, as well as services where chatbot-generated content can be shared on social media or messaging platforms.


While Ofcom did not mention the US-based Character.AI platform by name, it emphasized that any platform or app where users can create and share chatbot content would need to comply with the law. The Online Safety Act, which is set to be fully enforced next year, requires platforms to ensure the safety of their users, particularly children, by proactively removing harmful or illegal content and providing tools for users to report inappropriate material.

In its guidance, Ofcom referred to troubling incidents where Character.AI users had created bots that impersonated Brianna Ghey, a transgender teenager who was tragically murdered in 2023, and Molly Russell, who died by suicide in 2017 after being exposed to harmful online content. These cases raised alarms about the potential emotional harm caused by chatbots that mimic real people, particularly in sensitive and traumatic situations.

Another case cited by Ofcom involved a teenager in the US who developed a relationship with a Character.AI bot based on a character from the TV series Game of Thrones and later died by suicide. These incidents have prompted further scrutiny of the role of chatbots in shaping online behavior, particularly among vulnerable users.

The Online Safety Act, whose provisions are still being phased in, sets out specific obligations for social media platforms and other services that host user-generated content. Under the law, the largest platforms will be required to proactively monitor and remove illegal and harmful content, including chatbot-generated material, and to provide users with clear ways to report harmful content. Platforms will also be required to conduct regular risk assessments to identify potential dangers posed by their services, especially to children.

The Molly Rose Foundation (MRF), a charity founded by the family of Molly Russell, welcomed Ofcom’s guidance as a “clear signal” that chatbots could cause significant harm. However, the foundation also said further clarification was needed on whether chatbot-generated content could be considered illegal under the Act. Jonathan Hall KC, the UK’s independent reviewer of terrorism legislation, had earlier pointed out that current laws do not adequately address the challenges posed by AI chatbots. In response to these concerns, Ofcom plans to issue additional guidance on how platforms should handle illegal content generated by chatbots.

Ben Packer, a partner at the law firm Linklaters, noted that Ofcom’s warning highlighted the complexity of the Online Safety Act and its wide-reaching scope. He pointed out that the Act was drafted before the rapid growth of AI tools and chatbots was fully anticipated, making it difficult for the law to address all the potential risks associated with these technologies.

Character.AI, the platform involved in the incidents, responded by stating that it takes safety seriously and that it moderates content on its platform both proactively and in response to user reports. The company also confirmed that the bots mimicking Brianna Ghey, Molly Russell, and the Game of Thrones character had been removed following the incidents. However, removing individual bots does not address the broader question of how AI chatbots should be regulated to prevent harm.


The concerns raised by these incidents reflect a growing awareness of the potential dangers posed by AI and chatbot technologies. As these technologies continue to evolve, it will be important for regulators, tech companies, and advocacy groups to work together to ensure that users, particularly vulnerable individuals, are protected from harmful content.

The UK’s Online Safety Act is a significant step toward holding digital platforms responsible for the content their users share. The challenge will be keeping pace with rapid technological innovation: regulators will need to adapt their strategies to address new risks as they emerge, so that digital platforms remain safe spaces for all users.
