Online extremism has evolved. What can we do about it?
There is a real call for sober, outcome-orientated, multi-disciplinary programmes to prevent and counter violent extremism (P-CVE), which need to remain as agile and adaptive in their delivery as the issues they seek to address.
The internet, as a medium to socialise, congregate, discuss, and debate, has become a major facilitator of violent extremism. Online radicalisation is a complex process comprising societal malaise, and interaction and immersion within introverted, isolated countercultures, resulting in the acceptance of extremist doctrines as absolute truth. Within this context, any act advancing the dominance of the preferred ideology becomes a virtuous one. Such online communities have become virtual 'enabling environments' which encourage intricate networks to form and act as echo-chambers for extremist narratives and mortality salience. Media packets illustrate and amplify messages heard in the offline domain but can, occasionally, indoctrinate and radicalise individuals with limited human contact.
As part of a deliberate strategy, by both jihadist and far-right organisations, to tailor standalone content to their target audiences, extremist media production and distribution entities (MPDEs) have emerged, imitating mainstream news agency formats in an attempt to project credibility and candour. Extremist content has become remarkably easy to obtain and disseminate, and is finding its way onto mainstream platforms such as Facebook and YouTube. These developments have been facilitated by wider technological and communicative shifts online, but also reflect the increasingly decentralised structure of contemporary terrorist organisations.
This phenomenon has significant implications for countermeasures seeking to tackle radicalisation and recruitment, as well as for interventions targeting those deemed 'at risk'. The shifting dynamics of online usage, new technologies, data architecture and infrastructure present moving goalposts for both policymakers and P-CVE practitioners. Nevertheless, the crux of the issue remains constant: how to protect liberal democracy and those vulnerable to malign influence, and prevent virtual recruitment grounds from flourishing, without eroding the very rights, freedoms, and civil liberties one seeks to defend?
Policy prescriptions have tended to follow one of two main tactics. The first is 'negative measures', centred on denial of service and restriction of access to extremist material: the removal of websites, and the filtering, monitoring, censoring, and blocking of content. These methods were initially pursued by governments enthusiastic about finding technical solutions for what were perceived as technical dilemmas. However, policies advocating state censorship raise all manner of freedom-of-speech and civil-liberty questions and, whilst flagging and removing content is potentially effective if targeted appropriately, technical measures can also be crudely implemented and overzealous.
It has also been floated that non-violent extremist forums may offer a 'safety valve' for radicals to vent their frustrations, mitigating the likelihood of violent extremism. As a practitioner, I am of the view that monitoring extremist discussions not only offers greater value to intelligence gathering and the identification of emergent security concerns but, perhaps more importantly, affords a better understanding of commonly espoused grievances, which helps in the design and delivery of better CVE programmes.
More practically speaking, the sheer scale and transnational nature of the internet mean that efforts to shut down or block forums, even if ethically and technically viable, are only nominally disruptive. This is due to the constant circulation of standalone, user-generated content, the number of active community members, and their exploitation of legal ambiguities, restrictive jurisdictions, and disclaimer clauses.
‘Positive measures’ aim to offer counter- or alternative narratives which either directly challenge or indirectly neutralise extremist messages. Whilst conceptually sound, in reality many strategic CVE communication campaigns are ill-defined, experience problems of legitimacy and credibility, struggle to reach, and be shared by, their target audiences, and find impact-level indicators notoriously difficult to capture and measure. Yet there are a range of hybrid options beyond this false dichotomy; a few that spring to mind include:
Combining strategic, commensurate negative measures with the effective prosecution of prolific online extremist administrators, facilitators, and those involved in MPDEs, deterring the production and distribution of extremist content, and therefore its availability at source.
Empowering online communities to self-regulate through strengthening report and complaint mechanisms, and compelling social media platforms to exercise a duty of care, backed up by sanctions for non-compliance when extremist content is allowed to circulate in the mainstream.
Bolstering critical media literacy through comprehensive educational programmes which strengthen abilities to critically evaluate and assess online content, helping reduce the appeal of extremist media, as well as promoting conduct-awareness and positive normative behavioural patterns.
Supporting strategic communications programmes which facilitate the production, packaging and dissemination of counter- and alternative content from informed, credible grassroots voices, that challenge extremist narratives, and stimulate debate independent of direct state involvement or perceived influence.
Whatever the approach, there is a real call for sober, outcome-orientated, evidence-based, multi-disciplinary CVE programming, which needs to remain as agile and adaptive as the issues it seeks to address. There is no archetypal CVE guidebook because there is no generalisable terrorist typology, and violent radicalisation viewed through a single theoretical lens is at best unhelpful and at worst misleading.
Although stratcomms methodologies may add to a comprehensive CVE toolkit, they are not realistic frameworks for addressing the psycho-social processes underpinning radicalisation, at least not in the online space alone. Drawing upon several conceptual tools is essential when exploring how cultures, settings, and interactions inform and shape our normative behaviours and world-views.
In practical terms, this means convening teams that ‘bridge the gap’ between expertise, policy, and practice, and developing CVE programming that synthesises and operationalises multi-disciplinary methodologies to better understand and address the subtleties of extremism, whilst providing credible theories of change and results frameworks to measure the outcome and impact of interventions.
This post is an abstract from my academic paper Surfing the Jihadisphere: How the Internet can Facilitate Violent Radicalisation available for free download from the RESEARCH tab above.