16 Jan 2023
by Tom Drew OBE

How terrorists are capitalising on the cost of AI (Guest blog by Faculty)

Guest blog by Tom Drew OBE, Head of Counter Terrorism at Faculty #NatSec2023

For the best part of a decade, terrorist organisations and extremist groups have used myriad online platforms to host harmful and abhorrent propaganda, aiming to radicalise vulnerable people and mobilise them to violence.

Aware that their content can no longer achieve an enduring presence on the best-known content sharing and social networking platforms, groups like Daesh and al-Qaeda have for the past nine years disseminated new propaganda via URLs that send the viewer to dozens of small platforms, on which the content is hosted.

The approach works on the premise that these smaller platforms cannot develop robust moderation tooling to tackle the propaganda and that, even if one of these small platforms does remove the content, it will always exist somewhere on the internet for supporters to access.

This logic, while malign, is proven to work. One Daesh propaganda video released in the past week has - at the time of writing - been viewed more than 10,000 times across more than a dozen small platforms.

The overwhelming majority of these small hosting platforms are not indifferent to terrorist exploitation of their services. Naturally, most are morally opposed to these groups' ideology, aims and actions, and financially incentivised to take action against their propaganda, as no major advertiser would knowingly associate its brand with violent extremism.

While some initiatives exist to help small platforms respond to the upload of historic, known propaganda, Big Tech's experience has shown that the only way to robustly detect and remove new propaganda - the kind that attracts more than 10,000 views in a week - is through AI.

Tech giants like YouTube and Facebook have developed proprietary AI solutions that can quickly and robustly detect terrorist and other harmful, illegal content and prioritise this for human moderation. These classification models traditionally seek to identify features in content or metadata that are exclusive to the harm type the platform is seeking to detect. For example, computer vision models can be trained to detect specific forms of extreme violence in a video, with high precision.
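To make the shape of that triage step concrete, here is a minimal, illustrative sketch in Python. The scoring function is a stub standing in for a proprietary, fine-tuned model, and the threshold value is a hypothetical choice; nothing here describes Faculty's or any platform's actual implementation.

```python
# Illustrative sketch only: score_frame stands in for a proprietary,
# fine-tuned computer vision model; the threshold is a hypothetical value.
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class ModerationDecision:
    video_id: str
    max_score: float
    flag_for_human_review: bool

def triage_video(
    video_id: str,
    frames: Iterable[bytes],
    score_frame: Callable[[bytes], float],
    threshold: float = 0.98,  # conservative cut-off, chosen to keep precision high
) -> ModerationDecision:
    """Score sampled frames and queue the video for human moderation
    if any frame resembles the harm type the model was trained on."""
    max_score = 0.0
    for frame in frames:
        max_score = max(max_score, score_frame(frame))
        if max_score >= threshold:
            break  # stop scanning once the threshold is crossed
    return ModerationDecision(video_id, max_score, max_score >= threshold)

# Usage with a stubbed scorer (a real deployment would call a trained model):
decision = triage_video("upload-123", [b"frame1", b"frame2"], score_frame=lambda f: 0.42)
print(decision)
```

The key design point is that the model never removes content on its own: it prioritises the highest-scoring uploads for a human moderator, which is how the large platforms described above keep humans in the loop.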

The challenge for the responsible owners of small platforms that want to take action on terrorist propaganda hosted on their services is the financial cost to their business of doing so. Many such platforms do not have the skills, experience or time to develop their own AI content moderation tooling once that effort is weighed against essential platform development and troubleshooting. Equally, the compute costs of running third-party AI models can spiral if not carefully managed once they are deployed live against millions of pieces of content every day.
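As a purely back-of-envelope illustration of how those costs scale (every figure below is an assumption chosen for the arithmetic, not a measured cost), even a fraction of a second of compute per item becomes significant at platform volumes:

```python
# Hypothetical figures chosen only to illustrate the scaling; real costs
# depend on the platform's traffic, the model and the cloud provider.
uploads_per_day = 5_000_000      # assumed daily volume of new content
gpu_seconds_per_item = 0.2       # assumed inference time per item
gpu_hour_cost_usd = 1.50         # assumed on-demand GPU price per hour

gpu_hours_per_day = uploads_per_day * gpu_seconds_per_item / 3600
daily_cost = gpu_hours_per_day * gpu_hour_cost_usd

print(f"{gpu_hours_per_day:,.0f} GPU-hours/day ≈ ${daily_cost:,.0f}/day "
      f"≈ ${daily_cost * 365:,.0f}/year")
# -> 278 GPU-hours/day ≈ $417/day ≈ $152,083/year
```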

It is against this background that Faculty has recently started working with the Global Internet Forum to Counter Terrorism (GIFCT) to widen access to terrorism moderation tooling for smaller content hosting platforms. The partnership sees us work with a community of technologists and cloud providers to attempt to develop a delivery model that facilitates small platforms’ access to terrorism classification models at no cost.

GIFCT is a non-profit founded by Facebook, Microsoft, Twitter, and YouTube in 2017 to prevent terrorists and violent extremists from exploiting digital platforms while respecting human rights. Better equipping a community of small platforms to take quicker, more robust action on terrorist content has been one of its aims since its foundation.

GIFCT invests in the development and distribution of cross-platform technical solutions to support member companies in preventing terrorism and violent extremism. As part of this, GIFCT has funded a partnership with Faculty and is offering small online platforms that are GIFCT members free access to a suite of advanced AI models we have developed over the past five years to classify Daesh and al-Qaeda propaganda in multiple formats with exceptionally high performance. Our Daesh video classification model detects 94% of the group's content with 99.995% precision, meaning only one in every 20,000 videos it flags to a human moderator is a false alarm.
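For readers who want to see what those headline figures imply, the short calculation below works through the arithmetic; the upload volume in the second half is a made-up example, not Faculty data.

```python
recall = 0.94        # share of Daesh videos the model detects
precision = 0.99995  # share of flagged videos that really are propaganda

# 1 - precision is the fraction of flagged videos that are false alarms.
print(f"1 false alarm per {round(1 / (1 - precision)):,} flagged videos")
# -> 1 false alarm per 20,000 flagged videos

# Hypothetical worked example: 100,000 genuine propaganda videos uploaded.
true_positives = 100_000 * recall                              # 94,000 caught
missed = 100_000 - true_positives                              # 6,000 missed
false_positives = true_positives / precision - true_positives  # flags that are not propaganda
print(f"caught {true_positives:,.0f}, missed {missed:,.0f}, "
      f"~{false_positives:.1f} false flags sent to moderators")
```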

To ensure the financial burden of running these models is minimised for small platforms, we are now working, with the support of GIFCT and its extensive network of online safety advocates, to secure agreements for a genuinely novel approach to minimising and spreading the cost of delivering AI content moderation tooling, one that could also offer a pathway for other harm types.

We will also be working to ensure that the ethical and privacy-preserving approaches already built into the AI classifiers are applied to the implementation of the models that GIFCT and Faculty are making available. In practice this will require working closely with online platforms and moderators to ensure that content classification is never based on personal or protected characteristics, but on features unique and fundamental to individual terrorist groups.

We are under no illusions that securing these agreements will be straightforward. But we are committed to working to get it right to help make the internet safer and make it harder for terrorists to recruit and radicalise online.



Authors

Tom Drew OBE

Head of Counter Terrorism, Faculty