Authorities Call for Investigation Into TikTok Over AI-Generated Political Disinformation

The scandal began when a TikTok account gained popularity by posting videos that appeared deliberately crafted to attract younger viewers. The clips featured young women in national colours delivering emotionally charged messages calling for the country to leave the European Union. At first glance they looked like an organic expression of political opinion. A closer look, however, raised alarm: odd linguistic patterns, unusual syntax, and audiovisual content that was polished yet unnatural suggested the material had been produced, or heavily assisted, by artificial intelligence.

More troubling still was the pace at which the account gained engagement. Within weeks the videos were being watched, shared, and discussed widely, demonstrating how quickly AI-assisted narratives can permeate online conversation. Then, seemingly without warning, the profile disappeared from TikTok, leaving open questions about who created it, how it spread so fast, and whether the platform did enough to curb it.

In an official complaint to the European Commission, senior officials argued that the content went beyond ordinary political speech. According to Deputy Digitalization Minister Dariusz Standerski, the material poses a risk to public order, information security, and the integrity of democratic processes in Poland and in the European Union as a whole. He added that the nature of the narratives, the way they were shared, and the use of synthetic audiovisual content suggest the platform is not meeting the obligations placed on it as a Very Large Online Platform (VLOP).


Editorially, this case points to a broader shift in how influence operations are conducted. Disinformation is no longer limited to crude propaganda or text posts from anonymous accounts. AI systems can now generate believable faces, natural voices, and emotionally charged messages at scale. On platforms built around short-form video and algorithmic amplification, that risk is magnified: content that looks engaging and familiar can go viral faster than fact-checkers or moderators can respond.

Government representatives were open in their assessment of the content's origin. One spokesperson claimed the videos were certainly Russian disinformation, citing Russian-style syntax in the clips. Although such allegations cannot be publicly substantiated without disclosing intelligence material, they are consistent with long-standing warnings from European security services about foreign efforts to manipulate public opinion online.

TikTok, for its part, maintained that it had acted within its own rules. A company representative said by email that it had engaged with Polish officials and had removed material that violated its policies. The response underscores a recurring tension between regulators and platforms: for the platform, taking down content once it is flagged demonstrates compliance; for regulators, the real question is whether systemic safeguards are robust enough to prevent such content from spreading so widely in the first place.

The European Commission confirmed that it had received the request for scrutiny and pointed to existing obligations under the Digital Services Act. A Commission spokesperson said that very large online platforms must assess the risks posed by their services, including those related to artificial intelligence, and added that in March 2024 the Commission had already sent requests for information to a number of online platforms, including TikTok, asking them to explain how they are addressing AI-related risks.

This is not the first time TikTok has come under regulatory pressure in Europe. Last year the Commission opened formal proceedings against the platform, owned by the Chinese company ByteDance, over concerns that it had not done enough to limit election interference, particularly around the Romanian presidential vote in November 2024. Those proceedings marked a shift from warnings to enforcement, signalling that regulators are increasingly willing to put the full Digital Services Act to the test.

Very large online platforms carry heightened obligations under the Act. They are expected to proactively limit harmful content, address systemic risks, and be transparent about how their algorithms amplify political material. Failure to meet these requirements can result in fines of up to 6 percent of a company's global annual turnover, a sum large enough to command the attention of even the biggest technology companies.

Governments across the European Union have grown increasingly sensitive to the risk of foreign interference in elections and domestic politics. Cyberattacks, espionage, and organized influence campaigns have heightened concern about the digital ecosystem. Although Russia has repeatedly denied interfering in foreign elections, European policymakers continue to treat such risks as credible, given that new AI technologies reduce both the cost and the complexity of manipulation.

For society at large, the episode raises uncomfortable questions. How are ordinary users supposed to distinguish genuine political expression from AI-driven persuasion? How quickly should platforms be expected to respond to content that is technically sophisticated but not obviously illegal? And who bears responsibility when algorithms, rather than human editors, decide what millions of people watch every day?

The predicament highlights how hard it is for regulators to keep pace with technology. Laws such as the Digital Services Act provide a framework, but enforcement depends on constant vigilance, technical expertise, and cooperation from the platforms themselves. For TikTok and its peers, the scrutiny is not going away any time soon. The more closely AI-generated content resembles ordinary expression, the harder it will become to draw the line between free speech and covert manipulation.

Kristina Roberts

Kristina Roberts is a reporter and author covering a wide spectrum of stories, from celebrity and influencer culture to business, music, technology, and sports.
