Free to Be Extreme

EVENT TRANSCRIPT: Free to Be Extreme

DATE: 12.30pm – 1.30pm, 23 January 2020

VENUE: Grimond Room, Portcullis House, Westminster, SW1A 2LW United Kingdom

SPEAKER: Nikita Malik, Director of the Centre on Radicalisation and Terrorism (CRT) at the Henry Jackson Society

EVENT CHAIR: Rt Hon Jeremy Wright QC MP, Dr Paul Stott

 

Rt Hon Jeremy Wright QC MP: Good afternoon everybody, we’ll make a start, it’s a little after 12.30. Thank you for coming and thank you to the Henry Jackson Society for arranging today’s event. You are not going to hear much from me, my name is Jeremy Wright, I used to be the Secretary of State for Digital, Culture, Media and Sport, in which capacity I helped to produce the Online Harms White Paper, which is referred to – as you’ll know – in the paper you are going to hear about.

The star of the show is Nikita, who sits on my right. You are going to hear a lot more from her, as the author of this work, but I just wanted to introduce it by saying that I think this is a hugely important contribution to what is a hugely important debate. There won’t be anybody in this room who doesn’t subscribe to the view that extremism is a problem for our society, and that extremism promulgated online is a particularly troubling facet of that problem. And I hope everyone would also subscribe to the view that we, as a society, need to do something about not just extremism but the way in which it is promulgated online. I think that is a view shared not just by everyone here, but by those in politics and those who work in the technology sector. The Online Harms White Paper was focused on social media companies, on search engines, online platforms and the like, but this is a problem that extends across our society: the more we do online, the more we have to think about this issue. But of course, if you are thinking about this issue, particularly if you are an online company seeking to do something about it, you are entitled to say – well I accept the principle, I know something must be done, but what? And how practically are we supposed to go about dealing with the issue of extremism online? How do we recognise it, how do we categorise it, and how do we therefore combat it? And that, I think, is where this report plays a significant part. Because what I think Nikita and her colleagues have been able to do is to focus on the practicalities of this debate. Not just, is it a good idea to address online extremism – we all think it is – but how do you do it, how do you specifically address this challenge. And I think providing online companies in particular, regulators and governments with a framework in which to address this challenge, is hugely helpful. 
And without it, it’s going to be very difficult, actually, to implement the system that we all agree must be there.

So, I pay tribute to what Nikita and her colleagues have been able to do. It’s a start, not a finished article, and I’m sure she will recognise that openly. But it’s a start that is important to make in setting out the framework, the rubric within which all of us who care about online extremism – and there are other types of online problem – can start to address those issues. So without any further words from me, I’m going to pass on to Nikita. When I walk out, as I shortly will, it will be nothing to do with what she is saying at the time, it’s because I have to be elsewhere – I’ll be leaving in about ten minutes’ time, I apologise for that. But Nikita, please, tell us about the report.

Nikita Malik: Thank you. So, for those of you who haven’t met me, my name is Nikita Malik, I head our Centre on Radicalisation and Terrorism at the Henry Jackson Society, and this report was something that I wanted to write for a very long time, and it was made possible by Facebook, who made very clear that they wouldn’t influence the findings in any way, and also gave us some time to do the research. It was also – I benefited greatly from four research assistants, none of whom could be here today, because they have all gone on and got jobs, so good for them but we all miss having them here, and they are acknowledged as well.

It was a very big project. For some time I have been studying terrorism offences and compiling profiles of terrorists, because that is the bread and butter of the Henry Jackson Society’s work. One of our most cited pieces of work is ‘Islamist Terrorism 1998 to 2015’. So I began to look at 2015 to 2019, to see what profiles of terrorists were available, and of course I didn’t want to just focus on Islamism, I wanted to look at the far right as well. And this was where really, I wouldn’t call them problems, but the challenges started. The biggest challenge in this space is, you know, what is extremism? Of course, we have a Commission for Countering Extremism, we have some of them in the audience today, and they have defined hateful extremism, but it’s very difficult to map, particularly when you look at the far right. So, I then sent about 13 freedom of information requests, to the Home Office and to the CPS, to understand how many people had been convicted of violent extremism, even online, in the UK over the last five years. How many people had been banned from coming to the UK because of fears of violent extremism, how many people had been prevented from speaking on public platforms – could anyone answer those questions? Unfortunately, all of the freedom of information requests were denied. So then it became a project of trying to figure out what fits the profile of violent extremism. If we are asking technology companies to monitor people, organisations, content, could we give them examples of what fits violent extremism?

Now, yesterday evening, the Henry Jackson Society had an event with Jonathan Hall QC, we were very lucky and privileged to have him, and one of the questions that was asked – because he is looking at terrorism legislation – is whether he will look at violent extremism. And his answer was no, he is not going to look at violent extremism, he only looks at terrorism legislation. And I didn’t want to say anything at the time but, really, I think violent extremism is a very important precursor to terrorism. So, the fact that we don’t have an offence for violent extremism, and we don’t have any case studies on that, is challenging. But he also spoke about the kind of conflict that exists in monitoring somebody who might be ready to commit a violent act – so you have enough information on them, whether you come from the Prevent side or the police side, to be monitoring this individual, and you know that they are going to do something bad, but you are letting the offence play out because you want to gather as much intelligence as you can about this individual to prosecute them – at what point do you step in and have enough evidence to prosecute an individual for terrorism? If we were able to step back a little and look at violent extremism, perhaps we would be able to prosecute people for different offences, earlier on. Now, to get into the nitty gritty of the report itself, the methodology: given that the FOI requests were all denied, the research assistants who helped on the project and I began to put together a number of case studies from what was available in open source. We put together 260 names – individuals that we believe met the threshold for inclusion in the report. But of course, we were looking at the online space, so they had to have some element of activity online, or their content had to be online, they had to have some kind of relationships online.
So, from the 260, that really brought us down to 107 profiles. Now, I’ll just refer to my notes because I want to make sure that I get this right. When we reduced it, the way that we identified an individual as being extremist – because at the time that we were doing this study the Commission’s report on hateful extremism had not been released, but frankly even after it was released I find the categories still apply – was, number one, that they were convicted of terrorism offences in a British court and that the case was publicly available under the Counter Terrorism Division section of the Crown Prosecution Service website; two, that they were prosecuted under non-terrorism legislation, but the offence could be matched to the UK government’s definition of extremism. So typically, this included offences contrary to common law, such as murder, or public order offences such as incitement to hatred based on race, religion or sexual orientation. These cases were found primarily through the Community Security Trust (CST) open source log of cases, and were collectively referred to as extremism-related. Additional cases were found by reading the annual reports released by Tell MAMA from 2015 to 2019, and three cases overlapped between CST and Tell MAMA during this period. So, we had 107 cases that we began to look at, but what I truly wanted to understand, beyond whether they met this threshold of inclusion for extremism or not, was what patterns and what indicators technology companies, or others – criminal courts for example – could use to begin to map some of these profiles for levels of harm, or dangerousness.
So, the first step was to use five in-depth, qualitative cases – only one of which, that of Alison Chabloz, overlapped with the 107 cases – five individual cases of people who had been banned from entering the UK under the unacceptable behaviours policy, or people who had been labelled extremists by the Home Office, had sued, and had then lost the case; and I was able to look qualitatively at how the courts and the government, for these five cases, determined whether this person was truly an extremist or not. That was fascinating because they used a number of things to determine that. This then led to the twenty indicators that I created, looking at things like history – does the person have a background of criminality? – and looking at things like when statements were made, and this was all in open source court material, of course with the appeals that came with that: the history of the speaker, the intent, the agency. Were they saying something of their own accord? Had they said something extreme on multiple occasions? Perhaps the most fascinating part that came out of this was influence. It seemed to me that courts, at least in the real world and offline, were very interested in a speaker’s influence. Somebody who had the position of an Imam at a mosque, for example, was seen as a very influential figure. So if a person had been denied citizenship to the UK because of fears of extremism – and in this case the person had been – the case put forward by the lawyer was that the audience listening to this kind of extremist speech can go somewhere else if they don’t like what he’s saying, his audience is quite small. But the court’s pushback was that because he occupied a position of authority as an Imam, because somebody has influence, their statements, if they are extremist, are somehow much more harmful than those of the typical average Joe.
And of course I began to look at things like incitement to violence, membership of a proscribed group, or an extremist group such as Al-Muhajiroun, or other far-right groups that have not yet been proscribed – so you’re looking at proscription, incitement to violence, influence. Another two indicators that were fascinating were creation of content online versus sharing content online. If you create content online, are you more harmful? One of the cases in here, for example, made various clippings into a slide show, and then created a YouTube presentation. He’d obviously put a lot of effort into what he was putting out from Islamic State. Is that person more harmful than somebody who retweets an anti-Semitic tweet, or retweets posts that talk about hatred of refugees, and how does influence play into that? If somebody is more influential, or they are a political leader and they retweet that, is that more harmful than the average Joe who spends an entire day creating an extremist or terrorist presentation that goes on YouTube and doesn’t have as many views? So things like views, and how much the content was re-shared and liked on the platform – all of those indicators were then factored into this framework of twenty indicators. So it wasn’t easy, and by all means it’s certainly not complete, it’s just a start, and just based on open and accessible material. So after examining those five cases to understand what the courts were using – history, agency, content, space – space was very important, because if I say something at a university, we know that under UK law and the statutory safeguarding duty, extremists are not permitted to speak at universities without having somebody to contest their arguments. What about if I say something extreme in my own home? Can we map similar parallels in the online space – a public post versus a private post that only a few people could see? So behind creating these twenty indicators was really this idea of a grading system of harm.
So the more indicators you hit, the more harmful you are, and the more intent you have. We can then have a scale of 0 to 15.2, where at one end of the spectrum we have people who aren’t so harmful online, and on the other end we have people who are very harmful online – and how do those two factors play into how technology companies are supposed to work with the police, for example? So one thing that I’d quickly flag is that this extremism grading framework measures harm as a result of extremist activity on social media platforms, rather than harm from disrupted terrorist activity. So individuals may have high sentencing under existing counter-terrorism legislation, but possess a low level of extremism on the indicator index. An example is R v Salim Wakil. Wakil received a sentence of 30 years imprisonment, but his low extremism indicator score of two, which is one of the lowest on the database of 107 cases, reflects his low social media presence, his absence of public broadcasts, and his absence of sharing of extremist beliefs online. So this framework indicates an assessment of online harm, as perpetrated by individuals such as Hussain Rashid, who had an extensive social media presence, and whose prolific sharing of dangerous, hateful, and violent content resulted in a high grade of 11.5 in this case. In some ways, one of the biggest challenges was determining what we mean by harm online, versus terrorism, because in my opinion they are quite different. In fact, some of the people on the very low end of this grading index are quite dangerous terrorists, they have very long sentences, but they’re using online platforms to share logistical information, or they’re using them to transfer money to relatives who might be out with ISIS. They’re not using these platforms to share extremist views, for public broadcast or publication, or to reach out to people for them to join their extremist causes.
Then at the most harmful end of the scale, I thought, what would be the worst case for a technology company? It would be somebody who uses these platforms very publicly, sharing a lot of posts, or having a massive audience, probably linked to a proscribed organisation or to an extremist organisation, and doing so with intent – having a history of violence, maybe a history with the Prevent programme, having been given warnings which they’ve tended to ignore. One of the things that has come out of this is how police and technology companies would work together in a hypothetical grading system of harm. At the lower end of the scale you would have terrorists who the police are probably very well aware of. So perhaps the police would be informing the technology companies about these individuals – and I know they work with law enforcement already – but it would be good for them to know, say if somebody is coming out of jail after having served some time for a different offence, that they don’t then use the platform to amplify their views, or recruit individuals. Then on the higher harm scale, you have people like Alison Chabloz and others who technology companies may know are very harmful, are saying very dangerous things, but who the police are perhaps unable to arrest, as all that is in their toolkit is hate crime legislation. The case of Alison Chabloz is a very interesting one, because it was a private organisation that started the case against her, and even though she has an order on her not to post online, she continues to break that order and do so anyway. So what kind of relationship would the police and technology companies need to have at the higher level of the harm scale? What can technology companies do? A fascinating question for me in this – and I know there are many people here in the audience today who work in policing and in Prevent – is how useful is actually banning an organisation or an individual?
It’s very easy to ban people on the lower end of the spectrum, because they might have terrorism links, and they don’t have a prolific presence on the online platforms, so banning them won’t really make a difference. But we know from the offline space that when you ban an individual or an organisation, there tend to be new ones that come in their place: they splinter under different names, or they send new representatives, who have the same ideas and do so under a new branding. So are there instead different actions that technology companies can take to ensure that their platforms are not used by extremists in this way? If someone has met a certain level of harm, could they be stopped from having a public event, for example, or from having a comments section on their YouTube video, or from live streaming? What’s fascinating is that these decisions can be justified, as the behaviour could violate the terms and conditions of a company, unlike the offline space where people can sue for their freedom of speech, or they can say that they have the right to express these opinions. In fact, they have sued in the past, and that is why this area is quite a problematic one to map out. So I think there are a lot of options available which aren’t just a binary ban or no-ban option, for groups that could splinter and change, and also a lot of work that can be done on migration. There are a lot of statistics in this report, which is available online. I ran a number of indicators to see who was the most harmful: social media platforms were used most for glorifying and justifying violence, they were used for praising and supporting a [inaudible] extremist, they were used to incite violence, to share content, and to publicly broadcast views. The top platform used by offenders was Facebook, at 29.3%, followed by Twitter and then WhatsApp – which is ironic, because Facebook sponsored this report, and it’s not good news that offenders use Facebook in this way.
My point, and I think I’ll end with this so we have enough time for questions and answers, is that platforms will continue to be used, in my opinion, in this way. There is a lot of work that can be done with government and with technology companies to create a framework of harm, to monitor some of this, and to have different options available. But what I also noticed was migration – there is of course migration that happens once somebody is banned from a platform. It happens in two ways. One, if somebody is banned, they tend to move to an alternative, or what we call ‘alt-tech’, platform, where they can air their views. Banning can sometimes be effective, because on a platform like Facebook they can have a huge audience, they can have a blue tick, which gives them legitimacy, they can amplify their views. I think Facebook also commissioned some review papers on this, and one of them found that moving to an alt-tech platform actually reduces audience engagement. Extremists will always have a core audience, in my opinion, who will follow them wherever they may go, but what we’re trying to stop is this amplification, and the new followers that can come to them on an open social media platform. So migration does happen, but it doesn’t happen as a substitution unless they’re banned – it’s often done as a complement to open social media platforms. By that I mean, people will use Facebook and YouTube, and we see this in live streaming as well, to educate their audiences on which alt-tech platforms to go to. So it’s complementary rather than substituting one for the other, unless of course an individual like Tommy Robinson is banned, and then they go on and try to find a new platform and a new audience, and take their old audience with them.
I hope that this gives some indication of what can be done. In terms of breaking down responsibility, one of the things I put down in the recommendations in the report, which is freely available online, is that perhaps profiles graded at the lower levels of extremism could be handled by police – a lot of them are terrorist offenders who aren’t using social media platforms for prolific sharing or for affiliation, more for logistical reasons. So perhaps tech companies could work with police for the lower levels of harm, but the profiles at the higher levels of harm, which I would say is 5.8 and above, could perhaps be handled by a new research department. I know that [inaudible], which is a big consortium of tech companies, has become independent, so perhaps there could be more research to look at how some of these harmful people are using the platforms. It’s just the beginning, I think, and it’s just an idea of how some of this work can be made more systematic, and more consistent between platforms as well. Thank you, I think that it’s time for questions.

Dr Paul Stott: Thank you, Nikita, we’ve got around about 30 minutes left for questions. If we could please ensure that when people put their question they also give their name, and affiliation if they have one, and also, so we can get through as many questions as possible, try to ask questions as opposed to making statements. I’ll take questions in groups of three. And just finally, when people are leaving we do have some copies of the report, I think a limited number, which my colleague Jamila has. We may be giving priority to journalists, I’m not sure how many we are able to do, we had some photocopier issues this morning. So, questions please.

Carol Shaw: Yes, Carol Shaw. I’m sorry to be late, so you might have mentioned this, but I recently read something about social media platforms – that the owners of those platforms are going to be held accountable. I think they were looking at sexual harassment and so on, but I would presume, what happens with, say, a terrorist organisation or somebody like Tommy Robinson?

Nikita Malik: Certainly, I have read that too, and we have people from Facebook here today who might be able to comment as well, but I think governments are increasingly looking at fines for social media companies to hold them accountable for terrorist propaganda. But the purpose of the report, and where Tommy Robinson and others – Pamela Geller, Robert Spencer – are different from terrorists, is that they have not yet, or will not, commit acts of terrorism. In fact they are influencers, so they are able to influence terrorists, like [inaudible] did – many of his speeches were found with people before they did the Mumbai terror attacks. But they themselves are very careful about what they say on platforms, so that their material can stay online, and so that they can help people migrate to an alt-tech platform where things can become more logistical. So I think that the key difference here is terrorist propaganda, which, over the last two or three years, I do think technology companies have become very good at removing – a stat that is quoted often is that they remove 90% of propaganda before anyone can see it, because they are able to use artificial intelligence systems to do that. But that can’t replace human review and expertise, because humans will find ways to very creatively ensure that their material still stays online, which perhaps a computer will be unable to pick up on.

Dr Paul Stott: Ok, if I could take the woman in the front row, and then the gentleman here please.

Audience: Some of the names you’ve mentioned, I don’t know about, but I assume [inaudible] there are a lot of anonymous accounts, a lot of shared accounts, maybe they tweak the algorithms on Twitter to make their posts look more relevant [inaudible]. But how do we target [inaudible] accounts that are completely anonymous, that are sharing and generating traffic towards these posts? How do we have that conversation with social media companies, and how close are we to getting them to [inaudible].

Nikita Malik: I think that’s a fascinating area, I really do – false accounts. Even if somebody is banned, we still see on YouTube, regularly, snippets of that person, snippets of speeches of Anwar al-Awlaki, people who are posing as them, or they switch the names around or move letters around, and one of the most bizarre videos that I’ve seen is someone actually dressing up like that person. So it’s not that person, but they’re trying to look like them and mimicking their way of speaking so that they can still communicate to their audience that way. And that takes me back to the question of bans and whether they are effective. They can be effective, because as I mentioned earlier we’ve seen it with terrorist propaganda, but one of the things that I really thought about when writing this, and I don’t know the answer, is how law enforcement and technology companies would work to ensure that a person doesn’t open another account under another name, or how internet service providers could track them – because people have a motive, so they’ll do it. One can hope that if somebody has served time for an offence, they don’t want to reoffend, maybe they are engaging with Prevent or are being made to engage with Channel, and so they don’t want to go online and do that. But the bots, and the traffic, and sometimes the comments – a video can be very innocent, and it’s the comments, in their thousands and thousands, that are very problematic. So I think there has been some headway made – though I don’t speak for tech companies, but just having worked with YouTube for two years, we flagged this with them and now they remove the comments if a video violates certain conditions – so I think the technology is there, and in some ways I think the technology companies are having to deal with this much faster than government, if I’m honest.
Governments take time to put a travel ban in place, and often it takes a politicised agenda – and I say this without any bias, for both Islamists and the far right – it takes a group to lobby and raise awareness that an organisation should be on a terror sanction list, or that a person should be on a travel ban, and the government will then decide and debate, and there will be appeals; Zakir Naik certainly was told that he would be able to appeal later on if he could prove he was not an extremist. But in my opinion, tech companies don’t have the time to do that. When they’re dealing with a massive scale and multiple countries, the content is just insanely large, and so I think that it’s promising in some ways that they’ve gone over and beyond in trying to regulate their own terms and conditions, but I do think there is some more work to be done.

Audience: Thanks again, from [inaudible] strategy. I was just wondering how you thought tech companies could operate within this framework in the online retail space – so, the procurement of things that could then be used in terror offences, or as part of their extreme action?

Nikita Malik: That’s a very good question, thank you. When I was putting the framework together, it measured cumulative harm – so as people hit more of the indicators of harm, intent, agency, background, they become more harmful – and it can certainly be applied to other fields. So, if you are looking at criminality online, of course there are going to be certain things that have very yes-or-no answers: somebody possessing child pornography, for example, is an offence. But where would you draw the line on gun possession, or using the internet to solicit certain things that are in that grey area, like extremism is? Of course, possession of terrorist propaganda is an offence, that’s an easy one, but where do you get into that muddled area of people who might be influencers in this network? Perhaps we could keep an eye on them from a law enforcement framework by using a scale of criminality, in courts or offline as well.

Dr Paul Stott: Gentleman in the red jumper, and then the woman just in front there.

Audience: Thank you, my name is [inaudible], I work in Prevent. How much does geographical context apply to this risk and harm framework, because surely what would be deemed extreme here may not necessarily be so in Pakistan, and are social media companies expected to apply a standardised rule for [inaudible].

Nikita Malik: That’s a fascinating question, particularly if you look at the US, or if you look at freedom of speech, or even if you look at political leaders who might be extremist but who occupy positions of power and legitimacy. I don’t have an answer to that, really. I was looking at UK cases, which I think could be replicated in the US – maybe the indicators would be different, but certainly some indicators would still carry through, things like audience amplification and engagement, intent, history, links to proscribed networks. In the US it is, I believe, easier, because they have no-fly lists with people on there, and they have laws regarding material support which are stronger than the UK’s. So I think it’s just a start, really. I had four or five months to write it, and so I did just look at the UK, but it’s a start from which to look at more countries, where there may be more complications and changes would be needed in those contexts.

Audience: I’ve actually got two questions. I’m Megan, from the National Secular Society. The first question is regarding holding platforms to account and fining them. My worry is that companies are going to be doing what will minimise their chances of being fined, in the most efficient way, and that will mean using a sledgehammer approach. They will be motivated to be far more cautious and to block things, which will be very detrimental to freedom of speech. It’s already the case that there can be a bit of a sledgehammer approach to things – I’ve had articles from the BBC that have been blocked by Facebook, so there’s already an indication that they can be detrimental to freedom of speech. And my second question was, you mentioned Zakir Naik. This is where I think there needs to be action regarding charities, because Zakir Naik is the trustee of a UK charity. I’ve seen personally quite a few registered charities sharing what I would consider to be extremist content online. When we have informed the Charity Commission about it they have engaged with those charities and that material has been removed. But that’s literally been on a monthly basis of looking at the new charities registering, going through their websites and finding something – and I generally find something every month. And that’s just from, for example, the beginning of last year, and there must be hundreds of these charities with quite extremist messages. It’s one thing to be an organisation with extreme views, it’s quite another to be a charity sharing this stuff, and to be facilitated by the state with tax credits, gift aid and things like that. So I don’t know if that needs acting on.

Nikita Malik: So your first question, on a sledgehammer approach and whether we cast the net too wide, is certainly something I worry about as well, particularly when it comes to things like religious criticism. Criticism of religion can be seen as offensive, and things like satire, art and irony could be picked up incorrectly; I certainly think that is a real risk, and we should be very careful not to do that. In fact my own personal approach has always been, and this perhaps informed the research, that unless it’s very clear that harm is present, imminent harm, a person has a right to an opinion, even if it is offensive; that has certainly been the case with trying to prove travel bans against extremists and making the case for that, where there must be real and probable imminent harm. Perhaps that is more in the US line of thinking. Maybe that is what informed this framework as well: that you allow people to be online with restrictions, rather than ban them. Another thing I worried about too is the same stat that I quoted earlier, 98% of material being removed. We are, fortunately or unfortunately, part of that mechanism, because we are trusted flaggers for YouTube, so we flag a lot of that material. But something we raise time and time again is that when we flag the material it is removed, and we can never see it again; it is indefinitely removed, so we capture each video before we flag it, because we want to be able to access it. So I think ideas around access, legitimacy, posts and someone’s freedom of speech are all difficult areas, and in my opinion, rather than removing or banning, maybe a framework where we can do a little bit, rather than too much, is a start.
Regarding your second question on charities and legitimacy, absolutely. We had a number of extremist organisations who have charitable status, and we too have raised concerns about this, because if we go to Amazon and say the Amazon Smile platform is being used to raise funding for someone who’s linked to Hezbollah, they will say this charity has registered status, so what can we do, we are just operating off what the government has told us. And that takes me back to the point about technology firms almost having to move much faster in this space than governments, because it takes a lobbying effort to raise awareness and public interest for these people to then be removed, and that process is incredibly slow. So I don’t have an answer for that yet. I know Zakir Naik has a YouTube channel which has millions of viewers, so if he can’t come to the UK and give a speech at a university, people can be watching him online. So I think it’s a really tough one, and one I don’t have an answer to yet, I’m afraid.

Dr Paul Stott: We’ve got two more questions scheduled: the gentleman there, and then I’ll take the one there.

Audience: Thanks very much, Hugo [inaudible] from ABS Group, which is a defence and security trade body. I also research public and private sector cooperation, and I’m picking up on the comments about companies moving more quickly than government, and also your reflections on the police, which are obviously an associated structure of central government. I’m interested in your reflections on UK policing structures and [inaudible]: do we have the appropriate structures to fulfil some of those recommendations going forward? This strikes me in daily business as a very disaggregated structure, with local forces, regional units and the centre. To what extent do you see the UK’s policing structure as capable of adopting these recommendations?

Nikita Malik: If I’m totally honest, I had an initial meeting at the National Digital Exploitation Service, the arm of the police, and following that I haven’t really presented the findings yet, though I did present them to David Ormerod, who is a criminal prosecutor. I don’t know whether they will take it on board, but I hope they do, and I’ll certainly share it afterwards. Regarding what they’re doing, which might not directly answer your question but I think is fascinating and goes into that idea of precursor crimes: police are involved in infiltrating some of these groups. They do it much more with the FBI in the US, but here there is still infiltration happening, which in some ways has benefits, and they certainly do learn a lot from the material being online, so there is that interest there to pursue a case. Certainly, from what I have read in court cases, there are police who have had intelligence and have gone a little too far perhaps, so there is that aspect there. And I know that tech companies do work with the police and have their own law enforcement teams, and I know that they have a website where people can raise concerns about somebody who is posting things online that are offensive. In fact, in these 107 cases there were many I have missed out here, many that had previous warnings from the police or friends. They were sharing this disturbing material with friends, and friends were saying this isn’t right, trying to tell them to take it off, don’t keep it online, and when they didn’t take it off, going to the police and flagging it to them, which is how it operates in the offline space as well. There is a huge amount of work done by community efforts, by people within the community flagging this, and perhaps they can feed back on that too.

Dr Paul Stott: We’ve got a question there, and then at the back.

Audience: Hello, Claire Evans from Internet Matters; we help families to stay safe online. I should say we’re a not-for-profit, but we’re funded by business. I’m really fascinated by this pace issue, about tech companies having to work faster than government, and of course that’s right. When we speak to tech companies about this kind of material, or child sexual exploitation material, they make a very clear distinction: if it’s illegal, it’s very easy, they can take it down. As soon as you are shy of that illegality, you are asking tech companies to engage in moral judgements, and I’d just like to invite you to comment on your thinking on that, the appropriateness of that, whether it is a reasonable expectation from society, and how we help them do it in a meaningful and consistent way.

Nikita Malik: Absolutely, I think that was one of the driving forces behind trying to figure out a framework: you are asking technology companies or social media platforms to go over and beyond because they have to, because you’re not providing them with a list of groups or organisations, and even if you do, perhaps they don’t want to be assessing that, because, and I’m talking in a UK context, there may be countries with a certain agenda. So, how can you create something that is consistent, that is fair, that has an appeals process, where you can justify to somebody that they have violated the terms and conditions because of a, b, c and d? The idea is that creating 20 indicators, some of which cover things like following and intent, agency, creation, support, and what the material in the background was, will assist in these moral judgements, and will make it less difficult to determine what to do on a case-by-case basis. Certainly, I spoke to Jeremy Wright about this when he wrote the foreword, and one of the things he put in the Online Harms White Paper, and I’m sure he’d love to speak about it, is this idea of an independent regulator who is truly independent, not just for extremism and terrorism but for things like child sexual exploitation and crime; to be completely, well, you’re part of government, but with no agenda as to what tech companies should be doing, and actually taking on board some of the ideas about what harm is online and how to stop harmful things from happening. When the Online Harms White Paper came out, a lot of organisations, including ours, were very excited about it, so now that things are settled, I really do hope that some of its recommendations are taken on board.

Audience: [inaudible] from Facebook, on the counter-terrorism and dangerous organisations team, and we were part of the process of making sure that this got funded so that we could have better, more difficult conversations, so consider me a masochist. But I think it’s important to say that, compared to five years ago, our jobs didn’t exist. I don’t come from a tech background; I come from a practitioner and academic background combating violent extremism and terrorism, and nobody in my sector thought we were a tech company. So it’s good that we first of all need in-house knowledge, so that we know what we’re talking about, but also, to your point, our policies have to be applied globally, because if content originates in Singapore, and then somebody from Indonesia shares that content with somebody in France, what jurisdiction is that? And so we have to have really objective policies, and the basis for those has to revolve around three pillars that oftentimes completely clash with each other: privacy, security and free speech. Those three pillars are what all of our tech companies and governments are having to constantly navigate; if you alter just one of the three, what are we solving for, and what are the trade-offs with the other two pillars, for any decision we make? Which is an impossible task. But to the point of your paper, as we’re trying to scale this up and apply it more globally: defining terrorism is something that, again, countries don’t agree on exactly, but we’re at least in a better place there, with a lot of literature to go off and lists to work from; when we get to violent extremism, or non-violent extremism that might be just as disruptive and especially harmful in how it incites others, there’s very little to go off.
Especially in the UK, if we just look at white supremacy, or extreme right-wing terrorism, which is a western phenomenon that’s on the increase, it’s amazing that there’s only one such group on the UK proscription list, National Action; that’s it, that’s the only designated white supremacist or far-right group on the list. That did help us be firmer in our actions against that group, but when we’re having to define things ourselves, we’re going off a lot of the indicators that you highlighted; we’re going off things like hate speech, incitement, or relation to designated entities. So I guess my question really is: this paper is helpful for us, we distribute it internally and share it among the Global Internet Forum to Counter Terrorism and broader frameworks, but does this also give government a way to maybe create that grey list, rather than the black and white list of using the big ‘t’ word, terrorism, which has such legal repercussions? Does this give a framework for maybe a better banning system, or a better awareness system around extremist groups or extremist individuals, where we can also mitigate by taking halfway measures sometimes, whether that’s not allowing access to Live, or not allowing comments? Because it really helps us when governments do guide, because otherwise we are always going to have to go too far and reap the consequences of that.

Nikita Malik: I absolutely agree. I think it is a relationship between government and tech companies, and sometimes that relationship can break down, when I see, for example, testimonies that tech companies have to give about why material is still online, with governments holding them to account. I think sometimes that is precisely what you said: terrorism is itself a heavily contested term, and then you have the underbelly of that, extremism and violent extremism. We have a definition of hateful extremism now, but extremism is not an offence that you can prosecute somebody for, so these 107 cases are all prosecutions for terrorism and hate crime, and as a result the sentencing is very different, because if you’re prosecuted for terrorism you’re sentenced for longer than for hate crime, even though sometimes the things that you’re doing online can be quite similar, or one can actually be quite harmful. So it’s definitely something that I am really waiting for: we have a definition now, but will we ever have somebody prosecuted for an offence of hateful extremism, rather than for hate crime or terrorism? If we do, that’s fantastic for researchers, because we can then use that case as a basis. At the moment the cases we use, and there are five cases in here, are all people denied rights because of extremism concerns; these are the benchmark cases we use to determine what an extremist is. And in the real world, when labelled an extremist, as I said earlier, someone can sue, and they have, because it’s heavily contested, and sometimes they’ve lost, which is even more helpful for us because then we can really use the case.
But in the online space there is that advantage that you can say you violated our terms and conditions, so you cannot use this platform, because you are a customer; whereas in the offline space it is more about human rights, my right to express my opinion. So I think that balancing act will be really interesting to see play out.

Dr Paul Stott: Ok, we’ve got a few minutes left so I think this may well be the last question.

Audience: Thanks, I’m Rachel Risen from Albany Associates. You talk about the human element, the flagging and the capturing of the YouTube videos, and the necessity of that, and we have done research into this field with people for quite a while. How do you think institutions that work in flagging and in this space, whether tech companies or researchers, should deal with the impact of being exposed to this propaganda material online? They do it in a way that protects society, but they themselves could be exposed and potentially radicalised.

Nikita Malik: That is an amazing question, and you and I know very well, because we used to work together looking at a lot of child propaganda by IS, that it can be quite harrowing to see that on a day-to-day basis, and I think that as an industry we’re vastly unprepared for the consequences of that. Just talking about the consequences of cases earlier, not related to this report: there are FBI cases where agents have worked to infiltrate a group, and many of them have turned and joined the group. There is a very famous case of a US agent, a woman, who fell in love with the ISIS fighter she was supposed to be stopping, went to meet him, and then ended up leaking intelligence to him. I can see that exposure in that way is quite dangerous, and it’s also not monitored properly, and there is this idea in the defence and security industry that it’s just part of the job, and if you can’t do it then you shouldn’t be in that role. That is just from desk-based work; I know people who work as Channel intervention providers, facing these people day to day, who also don’t have the psychological support that they need, such as therapy sessions. It’s a tough question; I think it’s about raising awareness of it and trying to put some kind of rules around it. I think there should be rules around how much time people can be exposed to extreme and graphic violence like that, and there should be mandatory therapy offered. We offer it in our think tank, but that’s just because I said we should; it’s not the norm that this should happen.
My final point on this, as I’m quite passionate about it as you can see, is that there is still an element of shame around this: the idea that to ask for help, or to see that there might be repercussions down the line, is to be seen as weak and unable to do the job. But there are massive repercussions: you become completely normalised to certain types of violence, I’m not bothered by a beheading video, and other things can end up bothering you much more. I think there should definitely be a group made up of people who are working in this field, who should have access to care and safeguarding in that way.

Dr Paul Stott: Ok, thank you everyone. I’m afraid the clock has rather beaten us; do download the report, and thank you everyone for your attendance.
