EVENT TRANSCRIPT: Exploding Data: Reclaiming Our Cyber Security in the Digital Age
DATE: 1:00 pm – 2:00 pm, 6 September 2018
VENUE: Millbank Tower
Westminster, SW1P 4RS United Kingdom
SPEAKER: Secretary Michael Chertoff
EVENT CHAIR: Dr Alan Mendoza
Dr Alan Mendoza: Well, hello and good afternoon. Welcome to another session of the Henry Jackson Society. Today we’ll be looking at this wonderful book, Exploding Data: Reclaiming Our Cyber Security in the Digital Age, and I am of course delighted to have our dear friend Michael Chertoff. Secretary Chertoff was the second Secretary of Homeland Security, from 2005 to 2009, in the US. Obviously, a very interesting time in the evolution of the Homeland Security area in general. Some of the things we take for granted now of course came into being in your tenure. He was of course a judge before that at the United States Court of Appeals for the Third Circuit, a federal prosecutor, an Assistant U.S. Attorney General and United States Attorney for the District of New Jersey. And of course currently he is the executive chairman and co-founder of the Chertoff Group and senior counsel to Covington & Burling. Basically, there is nothing this man doesn’t know about judicial and security issues, and we’re delighted he’s here to share some of his expertise and to talk about his new book.
Secretary Michael Chertoff: Thank you very much for that kind introduction. And it’s good to be here on what has been a week of remarkably good weather. It’s the first time I’ve said that I’m coming to London for the weather. But it’s great to be here. I could give you a little background on how I came to write the book. As Alan said, I held a series of government jobs. In 2001, I had just been appointed by President Bush to be the Assistant Attorney General for the Criminal Division at the Department of Justice, and in those days there was no Homeland Security Department, so all response to terror issues came from the Department of Justice. On September 11th, as I was heading into work, I was on the telephone with one of my deputies and he told me a plane had hit the World Trade Center, and like many people I thought: Well, it was a private pilot that got turned around. When I heard a second plane had crashed, we both realised that this was an attack. So we went over to the FBI, where they had the Operations Center, and by coincidence Mueller was the new FBI director. And he and I sat there for the next days and weeks afterwards, first trying to determine who carried out the attacks and what else was coming, and the big concern was ‘Will there be more attacks?’. Obviously that day we had the Pentagon and then we had the plane that was headed for Washington that came down over Shanksville, Pennsylvania. But one of the things I learned in the weeks and months afterwards is that we’re dealing with a global security challenge from a non-state network that is acting to carry out terror attacks. The conventional defences we use do not work. Typically, we build defences against enemy actions with radar: if a plane or missile was coming, you would have radar that would detect it, and you’d shoot it down or respond.
But when your attackers are coming in a flow of travellers, and you can’t see who they are because they are not wearing uniforms or flying flags, the question is how you detect the people you need to stop and block. And what we learned was that the key to doing this was data. In the initial weeks and months after 9/11 we worked with a lot of the companies that accumulate data to understand the kind of data that exists about people and how it might be used to identify potential threats, terror threats, to the United States. One of the things we learned is that there is a form of data the airlines collect for every traveller called Passenger Name Record information. It is your address, your contact telephone number, your passport number, your form of payment, and a couple of other items that are pretty minor. But when you collect them together it is possible – and we saw this retrospectively – to connect people up to individuals who are known terrorists. And actually when we went back and tried this as an exercise on the 19 hijackers, we discovered we were able to connect 15 of them together and then to known terrorist financiers or operatives in other parts of the world. So that was my introduction to the use of data as a way of developing warnings against attacks. And we put into place a legally supportable but nevertheless somewhat novel programme of using data in order to secure the country. In the years since then, though, the techniques and the skills that were developed have been transformed into something entirely different, which is the use of data for commercial purposes and actually even for carrying out attacks itself.
A couple of years after I left my job, I remember attending an event where someone from the big social media companies was talking about the Snowden episode and how the government is using surveillance and ‘Oh, that’s so terrible’, and I thought ‘Wow, you are collecting 100 times more data than the government in its wildest dreams would collect, and the government is collecting under judicial supervision and with the intent to protect you from getting killed, and you’re collecting to sell stuff’. And that got me interested in the way in which data is generated, including the data we don’t actually know we are putting out there but is put out there about us. For example, this is on the record, so you can all tweet, you can all make notes or put something on Facebook about this presentation. I don’t generally use social media. So I will not generate data about this set of remarks. You all do, and that data will be uploaded to the cloud. And then someone who has access to the data can assemble it all and get multiple views of my interaction today. And when you think about it, through the course of your day you go through a lot of interactions that will generate data. Not just your social media. The video cameras they have here, the locational data that is generated by your telephone. If you use your credit card to pay for a taxi, that will generate data. In the US, maybe here, you have a card that you can use in the supermarket that gets you a discount, and that will record everything you bought that day. And so there will be data out there that is sufficient to put together a picture of everything you did from the time you woke up to the time you went to sleep. And actually if you use a Fitbit or a sports watch and wear it while you sleep, you generate data about what you do while you’re sleeping.
So that’s really an enormous amount of information. And the effect of that is that, unlike in the old days – where if you did something in public there weren’t that many people who saw it, it probably wasn’t recorded, and if it was, it probably wasn’t kept for a long time, and there was no capability of sharing it – now the increase in the data we generate, the infinite storage capacity, and the ability to analyse it and make use of it mean that every single scrap of information can be used for whatever purpose people want. And so that raised a couple of questions in my mind. If it’s not possible to really keep all of your data to yourself anymore, do we need to rethink the legal rules and the social rules around how data is collected? You know, typically in the US and in the UK, if you went back 50 years, the general rule was you had privacy protection against somebody searching your house or wiretapping your phone. But if you were in public, you had no protection. You were in public, you consented, it’s all out there. Nowadays, though, the assumptions have changed. In those days public meant, again, a limited number of people seeing, a limited number of people recording. Now public means constant records can be generated. Does that mean we need to afford people protection for the data they generate in public in a way we never used to do before? And so my thinking has evolved to the point that I believe we need to start to ask not just the question of hiding your data but the question of what is a way to control the data even after it has been generated. If you generate data in a supermarket, or locational data, or something of that sort, can the vendor simply resell it? Should there be restrictions on selling it? Should you have to be asked before it’s resold or repurposed for something other than you intended?
Or maybe, for something you didn’t even know about, you should be contacted: there is information about you that we have, we want to use it for x purpose, do you consent to that? And that moves the discussion away from the old idea of protecting you from someone collecting towards the new idea of protecting you in terms of how the data is used once it is collected. So in thinking about this, I looked back a little bit historically at how privacy developed around 100 years ago. And originally the idea was that privacy was all about property. There was an expression, ‘every person’s home is their castle’. They said ‘every man’s home’; I’m updating it for modern times. The idea was it was a matter of not letting people into your house to collect your papers or search your house. In the 19th century we developed photography, we developed telephone usage, and that offered opportunities for people to take your photograph even though you were in public and to repurpose it for something other than you intended, and likewise to tap your phone without ever setting foot in your house. And so initially, when people raised privacy objections to wiretapping or to misuse, misappropriation of your image, the courts said: ‘Look, if it’s not in your house, there’s no problem.’ But eventually the courts said: ‘Maybe the technology has changed and we’re missing the boat.’ And so the courts began to say: ‘Actually you do have a right to control how your image is used for promotional purposes.’ And likewise the courts said: ‘We are now going to impose a requirement of a judicial warrant for wiretapping, even if it is outside your house.’ So it seemed to me that we’re at a point now where the technology has sufficiently changed that it is no longer good to apply the old rules to the new technology. We need to reset the legal benchmarks. And interestingly, in the United States that is going on now.
And we had a decision in the Supreme Court about six months ago that said, for the first time, that for the government to get data about your location from your telecom company it’s not sufficient to get a subpoena, which used to be the case because the records were viewed as the property of the telephone company. Instead we’re going to require a warrant, because the volume of information is so great that it has really qualitatively changed what we’re doing here. Likewise, the courts put limitations on the ability to surveil somebody in public 24/7 with the new technology – again, because qualitatively it’s different than physical surveillance in the old days. So a good deal of my book is about where we need to change the law and the policies to protect ourselves in a new era, and part of what I wanted to put out there is where this would lead if we didn’t do something about it. It seemed to me that we are moving away from a problem with privacy – because in a sense what I’m saying is that privacy, in the sense of confidentiality, has already eroded – and we are looking at a problem with our freedom. That we would actually see the amount of data being collected, if it’s unrestrained in its use, affecting our autonomy. I will give you an example. In China there is something (inaudible) called the Social Credit Score whereby, monitoring everything you do online, what your friends do, and what you do in real time, using the data you generate, they can characterise you as a good citizen or a bad citizen. And good citizens get better jobs, better housing, better education, and vice-versa for the bad citizens. Even in the US now, there is a (inaudible) company that will give you a discount if you put a device in your car that demonstrates how you drive. You start suddenly, you stop suddenly, you accelerate quickly. And they can combine that with other data – for example, if you went to a restaurant, you ordered a glass of wine and you got into the car, they may very well know that.
And they talk about how this can be used to adjust your premium rate. Now they’re presenting it as a benefit to you, to lower your rate if you’re driving well, but we all know the flipside: they’re going to raise your rate if you’re not. I know you have national health here. In the US, we have private health. Imagine if your health insurer were to raise your premium because you’re eating too many fatty foods, you’re not getting enough exercise, you’re not sleeping well, and you tend to drive in an erratic way. Wouldn’t you find yourself very rapidly beginning to say: ‘Uh, maybe I don’t want to order dessert, because that’s going to go on my record.’ Or maybe I shouldn’t forgo exercise today, because that’s going to go on my record. Now some people will say: ‘That’s great. We are going to make everybody healthier.’ But I’m thinking to myself that’s a little bit like having a nanny sitting on your shoulder saying: ‘You can’t have that. You can’t have this. Do this, do that.’ And that is not what we call freedom. And I think few of us would trade our freedom for a world in which we were healthier because someone told us every single thing we should do every minute of the day. So to me, this is really about our autonomy and creating rules that will give us control and limit the uses of data. And I’ll give you one example. Many of us sign consents to use certain kinds of platforms. The problem is in many cases we don’t really have a choice. A platform is a near monopoly or a monopoly of a service, and if we want to partake in the service, sometimes we don’t have a choice. We’re told ‘You have to do this.’ Then you really don’t have consent. So maybe the government needs to say to companies that are monopolies, as it would in the field of energy or anything else, ‘You can’t force people to give up their data as a condition of using the service so you can monetise the data and make a profit.’ Now it may be fair, as an alternative, to have a fee to pay.
I’m not saying that platforms shouldn’t be able to make a fair living. But the point is maybe it has to be with the option to pay in cash as opposed to paying with your data. Because once you pay cash, you are kind of done. So I think that’s one issue we need to talk about. The second thing is we need to talk about, or must be clearer about, what is done with data. Maybe some things will be off limits or (inaudible). Finally, what I want to talk about is one we have to do ourselves. And my argument is we need to be more mindful of when we share our data. There are times that it makes sense – and I frequently get asked this, as you do too – if I get on a site or an application: can we use your location to improve service? Now if I’m using a map service, I say ‘yes’, because it doesn’t make sense to use a map service or GPS if they don’t know where I am. But for many things – say, when I want to buy something or I want to read an article, and they say we’ll improve service if you give us your location – I say no. So I make a deliberate decision about what the benefit to me is in giving out my information. And I think we all need to be more mindful. Finally, just briefly before we open the floor to questions. As I said a little bit earlier, one of the things we are seeing is that this data can be used not only to protect you against bad guys – because government surveillance still plays a critical role in keeping dangerous people and terrorists out of our countries, and for example Europeans have now moved into the realm of collecting data about travellers because they worry about foreign fighters.
But data can also be used to attack us, and we saw that in the 2016 election and in other elections, where the Russians used their ability to get granular data about people’s preferences – and that was the Cambridge Analytica episode – to target people with messages that would either move them in favour of or against a particular candidate, or maybe just dissuade them from voting. So that is using your personal data to weaponise it and actually subvert your activity as a voting member of the public. And that I think is an increasing challenge. The last thing I would say is we’re getting to the internet of things, which is going to collect exponentially even more data than we are collecting now. Your refrigerator is going to know what you have in it and how quickly you go through the cheesecake. And now they have these Alexa and Echo devices. That’s like putting a recording device in your own home, and there have been stories about someone making a mistake and all of a sudden their conversation is recorded without their knowing it and transmitted to a third party. Again we have to think about – speaking personally – I think I can turn the lights on or the music on myself. I don’t need to say: ‘Alexa, do that for me.’ Because I don’t necessarily want to run the risk that Alexa is going to record when I don’t want to be recorded. Again, this goes to the issue of mindfulness. So there are a lot of issues in the book – I also talk a little about cyberwar and how we conceive of that and what rules should apply there – but it’s really a brave new world and I’m happy afterwards to have a chat with you all about that.
Dr Alan Mendoza: Thank you very much, Michael. Effective way (inaudible) a lot of issues in your assigned time. But let me take the chair’s question first, starting just at the end of that: mindfulness. It’s quite an important concept actually, and you mentioned of course the importance of all of us being very careful with this. But isn’t it true that we’re human, we’re going to make mistakes in this area? You suggested as much already in some of your points. Surely the onus cannot be on every single individual to rigorously police every interaction they have in today’s world in every way. And if you agree with that statement, what is the legal framework to change that?
Secretary Michael Chertoff: I do agree with your statement, and I think that comes back to the point I made earlier: we need to be given control over our data, meaning that once you submit your data, it doesn’t mean the company that acquired it is free to do whatever it wants. They need to advise you of every change in the use of the data. And they need to get affirmative permission, not merely blast an email at you with 70 pages of text that, if you don’t respond to it, is treated as a yes. So that’s why I think the law has to – much as it gives you control over the use of a photograph of you for commercial purposes, which goes back 100 years – give you control of your data and require the data holder to ask for permission to change how it’s used.
Dr Alan Mendoza: The data holder would obviously respond by saying: ‘But that’s crazy. I’ve got a billion people’s data on my box. How do you expect me to send that notice to everyone?’
Secretary Michael Chertoff: Well, if you want to be in that business, you’re going to have to build the infrastructure to allow you to do that. And ironically, the companies probably most suited to communicate with a billion people are those that are collecting this amount of data, because they are the ones to which people are always connected. So I think it is practical. I recognise it may diminish the monetary value of data. It may change the business model of some of these companies. And maybe what they’ll decide is that instead of using your data as the licence fee, they’ll simply charge you a monthly fee. And that’s fair. There’s no problem with that.
Dr Alan Mendoza: Okay, let’s open up to the floor. Could you give your name and any organisation when you ask your questions?
Audience Member: I’m James Kidner from Improbable, a technology start-up; (inaudible) from the Foreign Office. So I’ve seen this from both the government and the private sector side. Can you talk a little bit about what I would call the transnational implications of all this? Because law is a construct that is finding it harder to adjust to the pace of all this change. And you’ve talked about how different states and different regimes have very different attitudes to the use of citizens’ data. How can you set this up? Is the role of books like yours (inaudible) to set up an exemplary model that others can then follow, or is it to challenge everyone who is flirting with that model – in different nations, different regimes, different systems of government – and see whether they can come to a consensus? Do you start with a good model, or do you start with some kind of agreement at what I might call the United Nations level?
Secretary Michael Chertoff: You know, I think that is a good model. But you’ve identified an important point, which I actually explicitly talk about in the book, which is that our laws are still based on borders for the most part, but our data has no border. It’s borderless. And so we’ve seen two separate sets of issues. One is the issue of lawful access to data when it’s held in another country. So for example, there was a case about a year ago where a US court issued process requiring Microsoft to turn over data that was held on a server in Ireland. And the company said: ‘Well, you have to go to an Irish court. We don’t have the data here.’ And the government said: ‘Well, you can get access to the data. It’s like a bank. You know, we can get bank records if you do business in the US.’ And the counterargument was: ‘No, it’s not like bank records. It’s like a safety deposit box. If you have a Swiss bank doing business in the US, you can’t make them open a safety deposit box in Geneva. You’ve got to go to a Swiss court.’ Eventually we got by this problem because a treaty was signed between the US and the UK, and legislation was just passed that creates an authorisation for a series of bilateral and multilateral treaties where we agree upon standards for turning over information. Either it would be through a mutual legal assistance treaty request to a court in another country, or we could have an agreement like we have with the UK, where we agree there are courts that can directly request data from the provider. But the law that applies is going to depend on the citizenship of the person whose data it is. And so we can have a consistent rule about what principles apply that is not focused on the happenstance of where the data is located but rather on the citizenship of the person whose data it is.
So my hope is that the approach the US and the UK have developed will now migrate to other countries in Europe, the Commonwealth countries, other democratic countries. I recognise China and Russia and North Korea are a totally different story, but at least in much of the world there is a sufficient similarity in legal rules that we’ll be able to do that. The harder question is what you do when the substantive law actually differs. So as you know, in Europe you now have the right to be forgotten. And when that was originally upheld by the European court, what the search engines said was: ‘Okay, if we have to delete a reference to someone on a search engine, we will do it for the domain in which the person lives, or maybe in Europe.’ But now I gather the argument is: ‘No, you have to delete it everywhere.’ For example, if I’m sitting in Geneva and I’m Swiss, and I make an investment in France – or say I’m French – I have a right to be forgotten. That doesn’t mean the information was false; that’s a different story. It just means it is unflattering or unhelpful to me. I might be able to get a court or a regulatory body in Europe to say to Google: ‘When anybody searches in Europe, you cannot show this listing.’ But are they able to say to Google, you can’t show it in the US? Because the Americans are going to say: ‘Well, time out. We fought a war of independence. We get to make our own rules. Our First Amendment says you can’t censor or remove truthful speech. Even untruthful speech is protected.’ So I think that’s going to be a challenge, and a harder challenge, because the substantive rules are different. And we’re seeing this in some sense now with Russia and China, who have laid down certain rules about what companies have to do in order to do business there that are incompatible with what we believe here in the West. I think that’s where the issues of global data confront land-based law.
Dr Alan Mendoza: Okay, next question. You have been stunned into silence by data. Yes?
Audience Member: Alastair Masten, member of the Henry Jackson Society. I’m very ignorant about China, so could you please expand a bit on that? For example, if you’re a Chinese citizen and you’re going for a job in a private corporation, to what extent would they have the access you’re referring to?
Secretary Michael Chertoff: So I don’t know that this approach is fully deployed yet. And I’m not sure that ‘private corporation’ means exactly the same thing there as it does here. But my assumption is that when this is fully deployed, it will basically affect everything, every interaction you have. Because it’s a marvellous method of social control. You can slowly dial up or dial back the benefits and the detriments in a way that people get the message very quickly. It’s like training a dog, you know, with rewards and punishment.
Audience Member: Sorry, I am asking a lot of questions. Secretary Chertoff, what would your reaction be to that? Does it have some benefits? Obviously it has disadvantages.
Secretary Michael Chertoff: No, I think the disadvantages absolutely outweigh the benefits. I understand, because I have heard people say: ‘What’s wrong with making people eat healthy?’, and it’s an interesting philosophical question. Because I think freedom includes the freedom to make choices that are unwise, as long as they only affect you. I am (inaudible) my John Stuart Mill stuff from when I was in college. There are many totalitarian regimes that believe they know better than you what to do, and that it’s actually for your own good. My view is that’s great when you’re the parent of a small child, but when you’re an adult you get to make your own decisions, within reason. And there is also inevitably a constraining effect in having micro-incentives on behaviour, because it tends to kill innovation. Some of the best innovations have come because people have been free to experiment and sometimes fail. I also can’t resist saying that if, for example, you’re going to try to manage people’s eating habits, you’re going to face the fact that literally every week, at least in the US, maybe here as well, there’s a new study that refutes the prior study. So for a long time eggs were very (inaudible), now eggs are good. And then coffee was bad, then coffee is good. But then again coffee is bad, but then they say that coffee is actually good. So I’m not sure who is smart enough to regulate all that.
Dr Alan Mendoza: Let’s continue on (inaudible) in a moment. I take your point entirely, and of course I am going to play devil’s advocate here for a moment. You say we should all have the freedom to do whatever we want as long as we’re not affecting someone else, but in the case of the driver that you mentioned, his drinking definitely has an impact on his reflexes, and it could lead to someone dying. And in other cases – okay, maybe eggs and things are a bit different – there are some pretty established harms; smoking, for example, clearly imposes a burden. And for all of us taxpayers here who pay for the National Health Service, why are we paying for the lung cancer chap who has killed himself?
Secretary Michael Chertoff: So even with smoking – I don’t smoke, but I think people should be free to do it, and actually a lot of people here are walking around doing it – in this case it becomes a philosophical argument. One of the arguments, by the way, that we hear in the US against government paying for health care is that people say: ‘Great, the government will pay for healthcare, and then, since they are paying for it, they’ll tell you what to do.’ And freedom involves sometimes doing things that can be harmful to you. Now, we do make driving under the influence illegal. But I’m not sure about measuring alcohol consumption and then trying to correlate it with when you get behind the wheel of a car: it may have some effect over the long term, but I’m not sure it has any practical effect in the short term. So you can probably pick one or two things where monitoring you maybe has a benefit, but I would say in general I’m sceptical about the idea that because the government is paying for something, they now own you as a beneficiary.
Dr Alan Mendoza: Next question. Yes?
Audience Member: David Canisbetty. We have all seen the problem here with Russia recently, with the GRU officers et cetera. In light of that, could you speak a bit more about cyberwar, the brave new world that we’re going into: the red lines, the use of cyber by both state and non-state actors, the sort of sanctions that might come into play, and the dangers of serious escalation and potential war?
Secretary Michael Chertoff: Well, that’s another topic I cover in the book. I cover a lot of ground. We are in a low-grade cyber conflict – not in the UK and the US particularly, but in parts of the world. And the best example of that is Ukraine. Ukraine is a petri dish for Russian hybrid war tactics. In 2015/16 they shut the lights off by attacking the power grid. And recently the US government disclosed that we found malware on our own power grid that is Russian malware. Now, it hadn’t done anything yet, and it’s not clear whether that is reconnaissance or prepositioning a weapon. But I think it’s actually a little bit of a threat: look what we can do to you. More recently, there was NotPetya, which looked like ransomware but wasn’t really ransomware. It was a wiper, destroying and locking down data. And the perpetrators never asked for ransom, because they were not really interested in it. Again, it was a broad-based attack on Ukraine. It was based on malware that was inserted into a very commonly used accounting software package used for financial reporting. And also, interestingly, although it was targeted at entities in Ukraine, it was not controlled in a way that would only attack certain institutions. Anybody who had an office there and used that software could be infected. Maersk, the big shipping line, lost a huge amount of money because their operations were compromised all across the company, because they had an office in Ukraine. And then of course we had the North Koreans attack Sony, because Sony was going to put out a movie that was unflattering to Kim. Now, I’m always careful not to call this cyberwarfare, because once you call something warfare, now you’re talking about a set of countermeasures that is not limited to cyberspace, as I think has probably been said. But it is a destructive attack. It’s a form of conflict. And that raises two interesting challenges with deterrence. One is attribution.
In order to deter, you have to respond in some way, and in order to respond you have to know who you’re responding to. In the old days everybody knew: if a plane came over to drop a bomb, or a missile did, you could see where it came from, and we would respond, and that deterred. Nowadays, attackers obfuscate their involvement. So for example, in 2007, when Estonia was attacked by Russians, the government said: ‘This is not the government. These are criminal and patriotic hackers.’ Often what happens is that the attack does not go directly from the attacker to the victim but may bounce around a number of hop points, or they may even send someone into the United States with a thumb drive with malware to launch the attack from within the US. So one issue of attribution is the technical issue of demonstrating that the attack came from a particular country. The second, related issue is how you establish the responsibility of a government in an era of deniability. As we said, the Russians will often make deals with criminal groups: they will leave the criminal groups alone as long as a) their criminality is directed outside of Russia and b) they are there to lend a hand to the government when the government wants them to do something. And that requires intelligence not just about technical capabilities but about intent. And when you get that, you have to be careful about what you use – you don’t want to compromise sources and methods. One solution, which I suggest in the book, is broadening the idea of state responsibility, somewhat like what we’ve done with terrorism, and saying to a country: ‘Look, if an attack emanates from your territory, and you get notice, and you don’t do anything about it, either because you don’t want to or you can’t, it’s on you. Now you’re responsible for it, and we’ll treat it as if it’s done by the government.’ So you can use the concept of responsibility to help with the attribution.
The second question is: what is an effective response that doesn’t get you into an escalatory spiral? When the atomic bomb was stolen by the Russians, for the first time we had to think about what would be a deterrent in a nuclear age. And Eisenhower had a project put together, called Project Solarium, designed to study all the ins and outs of the use of these weapons and how you would manage them strategically. And in the end they came up with a very durable solution, which is essentially that we would preserve the right to use them first, but in practice there was a norm that we would not use a nuclear weapon except in the most extraordinary circumstances. Even when they developed low-yield nuclear weapons and some said this is no different than TNT. Not only the US but the Russians and the Chinese, and eventually the British and the French, kind of accepted that once you cross the nuclear threshold it’s very hard to know where you’re stopping, and so we’re better off not crossing it. We need to have a similar study in some way on cyberspace. You know, there are people that say you should hack back and destroy the server that attacked you. The problem is the server may be hosting other functions, including civilian functions. So there is a real risk that you a) hurt innocent people and b) wind up escalating. Up to now, what we’ve done is we’ve named and shamed, we’ve called people out, we’ve charged people by indictment, which has the effect of embarrassing the adversary country, so that has some value. And we’ve pursued financial sanctions. A little bit of a risk with financial sanctions is, you know, if the sanction is relatively targeted, there’s probably not a lot of risk. But sometimes they talk about disconnecting Iran or Russia from the international global financial system. They are not allowed to use SWIFT or correspondent relationships.
The problem with that is once a country feels it’s no longer benefitting from the system, because the system has become a battlefield, what’s to stop them from saying: ‘Okay, we are going to bring the system down’? Right now they won’t do it because it would actually hurt them as much as it would hurt us. But you have got to think through whether the kind of sanctions you use will put at risk the stake in the venture that right now is inhibiting them. So to me the attribution issue and the issue of how you scale a response are the matters we are talking about in the US, and I know the current Secretary of Homeland Security spoke yesterday, I think, where she said we strike back harder than we get struck. And I understand the logic behind that. As one actually implements that, though, you’re going to get into some subtle issues about to what extent we are exposing ourselves. We run the risk of these things escalating more.
Dr Alan Mendoza: Very well. Right, who is next? Yes?
Audience Member: You mentioned the Russians influencing elections recently outside their own country, abroad. And obviously there has been a lot of talk about that. And I mean for someone like me who only has access normally to publicly available information, there hasn’t been any very credible, or at least very much credible, evidence presented to us. I mean it’s a difficult question; obviously we know intelligence agencies don’t like to reveal how they know what they know, and often if they reveal what they know, they reveal how. So we know they have a counterintelligence problem. But equally, for someone like myself it almost seems as though it’s something that has become publicly accepted without very much publicly presented evidence. Can you comment on that?
Secretary Michael Chertoff: Sure. I mean I would say three things. I’d say, first of all, it was repeatedly affirmed in the US by the heads of all the intelligence agencies that it was the Russians. The private companies have now acknowledged that they’ve seen the evidence of Russian activities in social media and things of that sort. But if you want the best single example, if you look at the indictment that was issued by the special counsel some months back about Russian intervention, they named 12 individuals by name. It is literally like an exquisite day-by-day revelation of exactly what they did, including people who travelled to the US, you know, with emails quoted that were obtained. I mean, there’s value in this. In other words, we haven’t seen the 12 in the courtroom, but the granular detail makes it absolutely clear that they did it, and I think it’s very credible.
Audience Member: So as a follow-up, would you say that – obviously this is a tricky issue – I personally would assume that most countries have programmes that try to influence elections in lots of different ways, possibly not excluding online activities and social media activities and that sort of stuff. So that raises the question: does the US participate in the effort to influence elections in other countries? It’s a (inaudible) programme. I mean the point really is: to what extent does the Russian effort outstrip what the United States does?
Secretary Michael Chertoff: So I think that – I’m not aware of another country that has a programme to actually masquerade or use algorithms to manipulate social media platforms the way the Russians have done. Now, the Russians would say to you: ‘Well, the US is always talking about democracy, so that’s trying to subvert our system. Or you’re talking about human rights.’ To me a key distinction is: are you doing it openly, you know, with accurate identification of who you are, or are you masquerading as someone, or manipulating a database, or using troll farms to attack certain people in a systematic way? That’s where I think you get into off-limits behaviour. If the Russians want to give speeches saying that their system is better than ours, that’s fine, you know. And in the US, because of free speech, they could broadcast that or put out Russia Today. They can do that. The problem here isn’t that they are advocating for a particular position; the problem is that they are doing it in a way that’s manipulative, covert, and at some point even disruptive.
Audience Member: I have been thinking about this over the last year or two. When David Cameron asked Obama to tell the British people that voting to leave the EU would put Britain at the back of a queue that most of us suspect doesn’t really exist – you know, that is manipulative and covert.
Secretary Michael Chertoff: It’s not covert. I mean if he said it, it would be-
Audience Member: Sorry, the request from Cameron and his team to Obama was covert in the sense that it wasn’t publicised; it was kept secret at the time, so it’s covert in that sense. And the question I have is: what is morally the difference between a British Prime Minister asking a foreign president to essentially tell a lie in order to threaten his own people, and on the other hand some guy with an algorithm on a server going online, registering many Facebook accounts and telling people to think in a different way? Where is the-
Secretary Michael Chertoff: Well, I guess I would say-
Audience Member: What makes those two situations so fundamentally different?
Secretary Michael Chertoff: I guess I would say: whatever Obama said – I don’t remember what Obama said – he believed it to be true, and it was Obama. I am not sure what he did was a favour to Cameron. I think he wouldn’t have done it if he didn’t believe it. And it was Obama; there is no question about it. If Putin wants to get a Facebook account and say ‘you should vote for Brexit’, I have no problem with that. But when people are pretending to be somebody’s friend in order to drive a story, or when a negative story about a particular politician all of a sudden rises way up on the search engine because you have coordinated systems that are constantly retweeting it or republishing it – that to me is deceptive, it is manipulative, and it subverts the system. So I see a vast difference between the two, and I haven’t seen anybody other than the Russians approach this in anywhere near this systematic form, and not just about elections. We’ve seen them generate social media efforts to create violence. They have a fake Black Lives Matter group and a fake White Lives Matter group, and then they start to infiltrate legitimate platforms and raise conspiracy theories, and that results in people actually showing up and you get violence like you got in Charlottesville, where somebody got killed.
Audience Member: Sorry, go on about this-
Dr Alan Mendoza: Right, one very brief, final one and then we go on.
Audience Member: So very briefly. In fact, earlier you spoke about the data collected on all of us and whether that affects our freedoms. And just now you mentioned the idea of systematic misrepresentation through these sorts of systems. So it seems to me that the obligation to make sure the people using these systems are real people rather than algorithms lies with the firms providing the platforms. Myself, for 20 years or more, I have persistently entered false data about myself whenever registering with, obviously, a huge range of services and so on online, for the very reason that I did not wish to be spied upon, because essentially the internet has now become a massive spying service. How do you draw the line between an individual who is seeking to preserve some kind of privacy against the tide of intrusion and an organisation like the one you’ve just described?
Secretary Michael Chertoff: So as with most (inaudible) in the law, it’s about intent. If your intent in concealing the fact is simply to protect your privacy, I view that as qualitatively different than if your intent is to mislead other people. And that distinction is one that exists in all areas of life. We have the white lie, which is: ‘Oh, I would love to come to your party but I’m afraid I have a prior engagement’, if you really don’t want to go. Or you have: ‘Here is some food that’s healthy for you’ when you have actually tainted it with poison. There is a huge difference, and that goes to the question of intent, which is what I said.
Dr Alan Mendoza: Okay, next question. Yes, Sir?
Audience Member: (inaudible) Royal United Services Institute. You mentioned an example of data that was very helpful for terrorism prevention in the US, namely PNR, Passenger Name Records. And I was wondering if we should be careful about how we regulate data collection from the standpoint of privacy. Notice (inaudible) whether we should be careful to make sure that the collection of data that might actually be used for (inaudible) purposes afterwards is not impeded by privacy rules.
Secretary Michael Chertoff: That’s a really good question, because there is actually a part of the book I didn’t discuss that deals exactly with that question: what changes should be made in the way that the government collects data. And what I argue is that, at least in the US and probably in the UK, it is now very binary. The general rule is either the government can collect or it can’t collect, and on the question of whether it can collect, you have your legal rules about what your felonies are, etc. The challenge is that often the relevance of the data is not evident until much after you’ve collected it. And if you don’t collect and save it, you can never go back and look and see that there’s a connection. On the other hand, we don’t want to have just open collection. So what I argue is we need to treat surveillance as a kind of continuum. Access, which is simply the ability to access data, assuming proper legal permission. Collection, which means you collect but nobody looks at it or does anything with it. Then inspection, where someone does look at it, and that’s where you make use of it. And then execution, where you actually take action. And my argument is that the threshold for access and collection should be relatively low, but with the understanding that when you collect, you can’t look at it or do anything with it until or unless you get a predicate that gives you the authority at a higher standard of proof. So you might be able to collect data, like metadata, like PNR data, and not look at it, but if there’s a particular reason – you identify a known terrorist – and now you want to go and see all the people that are connected to that known terrorist, you can then get permission to go and look at the data in order to see who is connected to them. That gives you more flexibility. And the reason I come to this conclusion comes obviously from my experience when I was a prosecutor many years ago. There was a famous murder case, and it had been unsolved for a period of years.
Then we came up with an idea of how we might solve the case and prosecute people for it. But in order to do that we needed to go back to the evidence that existed years ago, which fortunately had been kept. And then we were able to connect some of that evidence with what we had now found, and that was instrumental in enabling us to convict the people involved in the murder. So it taught me that sometimes facts don’t become significant until much later, when you can somehow get a trigger to make that connection. So I do think that we should be more subtle in the way we organise our rules and create thresholds that calibrate as you move along the continuum.
Dr Alan Mendoza: Okay, we only have time for one more question. Yes?
Audience Member: Leading on from your remarks about individuals in a far Eastern country: is it possible, and is it happening, that organisations themselves can be assessed or rated using the data they possess, the same data they are using to calibrate us as people? And if it is possible for an organisation to be regularly assessed in such a way, could such information be available to normal individuals like myself?
Secretary Michael Chertoff: Well, certainly one of the reasons I think this is a concern is exactly what you say about employers, prospective employers, not just using the information they themselves collect but buying information. I’m sure there are firms now that will sell you information about individuals that they collect from all over the internet, and there will be more and more information collected, and that could affect your ability to get a job or affect your ability to get into university or something like that. So I think that is exactly what’s going on and it’s going to get worse. Now, will individuals be able to buy that? I mean the answer might be yes. If a company wanted to sell it, then I suspect it may be rather expensive. But I think the concern I have with part of this is that there are firms that are in the business of buying the data from all over the place, not just what they themselves collect. It’s everything that’s out there that can be resold for money. And once it’s resold for money, it can be provided for money to somebody else, and that’s exactly why I think the control ought to be left with the individuals whose data it is. And you also have to make sure that people can’t be prejudiced by exercising that control. What do I mean? If I say I really don’t want to hand all my personal data over to an employer and the employer says ‘In that case I am not hiring you’, I don’t think that’s consent. So I think we need to look as well at what can be used as a carrot or a stick to make you consent. Consent should be real, and I recognise this undercuts the business model of some of the data brokers, but you know, we’ve also undercut the business model of tainted and adulterated pharmaceuticals, and that’s what the government is supposed to be doing: protecting us.
Dr Alan Mendoza: Well, my thanks. That has been a fascinating hour. I think the key thing that I’m taking away from this is just the rapid pace of the evolution of this whole subject. When you were Homeland Security Secretary, I wonder if you imagined that 10 years hence you would be having this kind of conversation.
Secretary Michael Chertoff: Oh no, this was just a glimmer on the horizon.
Dr Alan Mendoza: So if that is the change over 10 years, where will we be in another 10 years’ time? I think that’s why the central message of the book – that control actually needs to revert to the subjects who owned the data originally – seems to me a very sensible way of proceeding and could be the safe path. I think we’re all a little uneasy about not having a precedent when it comes to this evolving relationship. So I would like to thank our guest for his views on the subject, and you can of course purchase the book outside, with words written in by Secretary Chertoff. So thank you for coming and we’ll see you outside in a minute for some book (inaudible)
Secretary Michael Chertoff: Great, thank you.