How violent extremists are thriving online – and why it’s getting harder to catch them

Violent extremists are using AI to evade social media moderation, according to the head of one of Europe's leading violence prevention task forces.

Speaking at TikTok’s Trust and Safety forum in Dublin, Judy Korn, managing director of the Violence Prevention Network, said the nature of extremism was changing along with the tactics used.

“[While] Islamist and far-right extremism is rising,” said Ms Korn, “the sharpest rise in ideologies is in unclear violence and nihilistic violent extremism.”

Nihilistic extremism, she said, isn’t driven by an ideology beyond simply “destruction and chaos” – which makes it harder to catch on social media because it won’t trigger traditional content filters.

At the same time, violent extremists are using generative AI to spread their messages.


“Generative AI is much more clever than violent extremists, because it learns faster than a human being how to produce content that conveys the desired message without violating regulations and without violating the platform guidelines,” said Ms Korn.

Violent extremists are also getting younger.

In the UK, more than 50% of people referred to the deradicalisation programme Prevent are under 18, and one in five people arrested for terrorism is legally a child, according to counterterrorism police.

During the event, TikTok announced a number of new measures to better protect its users, including new educational prompts for users searching for terms related to extremism in Germany.


As well as measures around extremism, TikTok also announced a new invisible watermark that will help users recognise when content has been made with AI.

The company has had AI detection tools for years, including the widely adopted C2PA labelling system, which embeds provenance information in the metadata of content.

“The common industry challenge is that, with these methods, the labels might get removed if you download the content and re-upload or re-edit it elsewhere,” said Jade Nester, director of data public policy, Europe, at TikTok.

“These invisible watermarks help us address this by adding a robust technological watermark that only we can read.”
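The gap between the two approaches can be illustrated with a short sketch. The example below is purely illustrative and assumes nothing about TikTok's or C2PA's actual formats: it uses Python's Pillow library to attach a hypothetical provenance label to an image's metadata, then shows how a simple download-and-re-save step silently drops that label, which is the weakness a watermark embedded in the content itself is meant to address.

```python
# Minimal sketch (illustrative only, not TikTok's or C2PA's real implementation).
# It shows why metadata-based provenance labels are fragile: re-encoding the file
# without explicitly copying its metadata drops the label, whereas a watermark
# embedded in the pixels themselves would survive this step.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# 1. Create an image and attach a hypothetical provenance label in its metadata.
img = Image.new("RGB", (64, 64), color=(30, 144, 255))
meta = PngInfo()
meta.add_text("ai_provenance", "generated-by-ai")   # hypothetical label key
img.save("labelled.png", pnginfo=meta)

print(Image.open("labelled.png").info.get("ai_provenance"))   # -> 'generated-by-ai'

# 2. "Re-upload": open the file and re-save it without carrying the metadata over.
Image.open("labelled.png").save("reuploaded.png")

print(Image.open("reuploaded.png").info.get("ai_provenance"))  # -> None, label is gone
```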

TikTok also announced the rollout of a new, gamified “wellness hub” to encourage users to focus on their mental wellbeing through meditation, affirmations and controlled screen time.


