Future Politics

Nikita – Good afternoon, and thank you very much for attending what I think is our last event of 2018; we have saved the best for last, with Jamie Susskind and his brilliant new book, Future Politics. My name is Nikita Malik, I am the Director of the Centre on Radicalisation and Terrorism at the Henry Jackson Society, and with me today I have Jamie Susskind, who is an author, speaker and practising barrister. A past fellow of Harvard University's Berkman Klein Center for Internet and Society, he also studied History and Politics at Oxford. Today we are going to be talking about Jamie's book, which, as I mentioned, looks at one of the most important questions of our time: how digital technology will transform politics and society. Jamie argues that rapid and relentless innovation in a range of technologies, from AI to virtual reality, will transform the way we live together. This is a groundbreaking work of political analysis which challenges readers to rethink what it means to be free or equal, what it means to have power or property, and what it means for a political system to be just or democratic. It proposes ways in which we can and must regain control. We will begin with Jamie speaking for about 20 minutes on the findings in his book. It is a very multifaceted book, so I am sorry to give you only 20 minutes to discuss it; then we would like to open the floor to questions. I know I have many questions, but I will try to hold back on the chair's prerogative. Shall we begin?

 

 

Jamie Susskind – Well, thank you Nikita, and thank you all for coming out at lunchtime to hear my talk. There is a story that Henry Ford used to tell. He was the man who brought the first car to the mass market, and he used to say that when he asked people what they really wanted in relation to transport, they would say they wanted faster horses. I think about that story a lot when I think about the future of politics, because a lot of our thinking on this is about faster horses. We imagine the future in terms of today: a sleeker, chrome-and-glass version of what we have now. There is another school of thought, to which I subscribe, which holds that the developments taking place in digital technology could be as transformational for humanity as writing or the agricultural revolution. The difference between now and the future will be the difference between a horse and a car, rather than between a horse and a merely faster horse. What I thought I would do in my 20 minutes is to set out what I think are some of the main technological changes which are taking place. I will very briefly sketch out what I think the political implications of those changes are, and the politics is perhaps what we can focus on in the second half of the event. So basically, when it comes to technology, there are three big things happening in the world around us. The first, to put it broadly, is increasingly capable systems: non-human computational systems which are able to do things we in the past thought only humans could do, and which in many cases can now do them better. Many of these capacities are well known. Computers can beat us at almost any game we have ever devised. Some are less well known: they can diagnose lung cancers and skin cancers better than the best doctors, and they can lip-read and mimic human speech as well as the best humans can.
A company here in the UK claims to have developed a chatbot, a form of artificial intelligence system which you can converse with, that can answer the questions on the examination to become a member of the Royal College of General Practitioners with a higher score than the doctors who take that exam and pass it: 82% accuracy, compared with 71% from the people who treat our illnesses. Often underpinning the development of AI is the growth in processing power, which has been increasing exponentially since the end of the last century. The second factor is the explosion of data in the world. Most of the AI systems around us are not generally intelligent, and are not really aiming to be generally intelligent, with creativity or consciousness. They are built to do very narrow tasks, and are often better at them than we are. What is interesting about AlphaGo, the system from Google DeepMind which was able to beat the very best human players at the very complex Chinese game of Go, is how quickly it improved. In 2016, when that system beat Lee Sedol, it beat him 4-1, and the grandmaster managed to win one game off it. In 2017, another system beat another grandmaster 3-0, and it was much less close. Later in 2017, a system called AlphaGo Zero managed to beat the original AlphaGo system 100 times in a row. In the space of just two years, you go from people saying it is impossible for machines ever to beat humans at Go, to the machine which beat the best player being beaten 100 times in a row. And what is fascinating is that the machine which beat the original system 100 times in a row was not taught any strategy by a human Go player. Previous iterations of these systems, like the one which beat Garry Kasparov at chess, were built on human strategy. AlphaGo Zero had none of that; it learned to be the best by playing against itself over and over again. What I am interested in in my book is the implication of systems like these, not just their extraordinary capacities, but what happens when things go wrong.
These systems rely heavily on the quality of the data they are fed. Nowhere is this more obvious than with chatbots. There is the famous example of Microsoft, which launched a chatbot called Tay on Twitter in 2016, aiming to show how far conversational artificial intelligence had come. That system was designed to become more intelligent through its interactions on Twitter. It lasted 16 hours before Microsoft had to take it down, because over the course of those 16 hours, learning from us, it had become a racist, extremely sexually aggressive monster, which showed the old adage of data scientists to be true: garbage in, garbage out. My favourite example, and I know I can say it because we are all adults, though I notice there is a camera pointing at me, is that one young bloke started talking to Tay on Twitter and Tay's response was, "Fuck my robot pussy, I am such a naughty robot." This was not the PR coup Microsoft had hoped for. Anyway, that is trend number one: increasingly capable systems. AI systems can do amazing things, but they can also go wrong in spectacular ways.

 

The second idea is increasingly integrated technology. You and I, looking around this room, have interacted with technology in our lifetimes through the keyboard, the mouse and the screen. Before that, the computer was the size of a room and you would walk inside it. Since 2008-09, we have been living in the time of what David Rose calls the glass slab, where phones and iPads in our homes are the principal way in which we interact with technology. There is a school of thought which argues that in future, technology won't even resemble a glass slab; it won't resemble technology at all. It will be integrated into the world around us, in our clothes and in our bodies, and, in what is sometimes called the internet of things, in objects we never thought of as technology. So there will be processing power and sensors and data-gathering capacities everywhere. The distinctions we hold in our heads between real and virtual, online and offline, cyberspace and real space, will make much less sense to our children. Cisco says there will be 50 billion devices connected to the internet by 2020, which is in a couple of years. Not only is technology becoming more powerful, it is becoming more ubiquitous. That is important. The power of technology used to be confined to what now sounds archaic: the realm of cyberspace. That is no longer the case. Technology acts on us in a very real sense in real space too.

 

The third major trend is what I call the increasingly quantified society. This is in some respects the most sobering of the trends. Every two hours, we as a civilisation produce as much data as human beings did from the dawn of time until 2003. Every two hours. And that rate is speeding up. It is said that by 2020 there will be the equivalent of three million books of data for every human being on the planet. Now, that to me marks quite a profound civilisational change. For most of our forebears, everything which happened in their lives was forgotten as soon as it had happened, and that remained true even after the most advanced technologies for recording and processing information up to that point had been developed. Most of the time, what you said, who you associated with, what you purchased, what you cared about and what you felt at any given moment: these things happened and then they were lost. Now, increasingly, the opposite is true. Everything about our lives is caught and captured through data in a permanent or semi-permanent form, and fed into those instruments of computational analysis I talked about earlier: machine learning systems which learn from the vast amounts of data in the world and thereby become more capable. I don't make any claims about any particular company, Google, Facebook, whatever it is, or any particular technology or fad, but when you step back and look at the life cycle of an average human being and the epoch we are moving into, those three trends are likely to define the next 100 years of human existence: increasingly capable systems, increasingly integrated technology and the increasingly quantified society. Now, the reason I spent 500 pages on this is that the rest of my book is about what it means for the way we will live together. It would seem to me surprising if these profound changes did not affect our politics, in ways which are themselves quite profound.
In the past, when there have been major changes in the way we store information, there have usually been profound political changes. The first empires as we understand them did not really exist until the invention of writing, because that technology was required for processing information on the scale needed for empire-building. Something similar is happening now. My thesis is that when you suspect profound, transformational change is happening, what it pays to do is step back from the daily barrage of news and the daily cycle of stories about tech, which are becoming increasingly ubiquitous. Even when I started writing this book two years ago, I could not have imagined that tech policy scandals would be dominating the news every day as they are. The useful thing to do is look at the most fundamental concepts: power, freedom, democracy and justice. That is how I structure the book: looking at where each idea comes from, what it means today and what it might mean for our children in the future. To summarise the book in a sentence: those who control the most powerful technologies will increasingly exert control over the rest of us. They will be able to affect the democratic process in ways that are benign, but also in ways that are harmful. They will be able to control what we do and do not do, our freedom, and get us to do things we would not usually do. They will distribute goods in society, whether it is jobs (72% of CVs are no longer read by humans; they are only read by algorithms), insurance, or access to healthcare or credit. Questions of social justice will be determined by technology. So to my mind, the code that software engineers write and the decisions they take will have consequences which are genuinely political, not merely corporate and certainly not just commercial. That is the grand summary.
I would love it if, in the questions, we could drill down into these themes, because what I have told you already is perhaps not much of a surprise if you are into technology. Just to focus on one concept before we begin, and I think it is the most helpful one: the concept of power. You hear a lot of people saying that technology is power and that technology firms are too powerful. What I try to do in the book is analyse why that is the case. My conclusion is that there are three reasons why technology gives those who control it a certain degree of power. First, technology contains rules which all of us have to follow. If you go in a self-driving car and it refuses to go over a certain speed, or to park on a double yellow line or in an area which the GPS says would be trespassing, you are being subjected to the code written into that platform. Just as when you don't have the password for a document or a platform, it won't let you in, no matter how strong your moral claim is. Just as when Obama gave Gordon Brown DVDs of American films as a gift, and the Prime Minister found that they wouldn't work because they were coded to play only in North America. It doesn't matter how powerful you are in normal political terms; you cannot get a technology to do something which is against its coding. If we are surrounded by these systems, then those who write the code write the rules. That is form number one of power. Form number two is scrutiny: information gathered about us. The more information is gathered, the easier it is to influence and manipulate us. This is the basis of all online political advertising. It is said that Cambridge Analytica, working on behalf of Donald Trump, had 5,000 data points on 200 million Americans, which it used to micro-target messages at individuals, framing each message in the way the data suggested was most attractive. The second aspect of having data gathered about us is that being watched is itself a form of power over us.
We are less likely to do things perceived as sinful, shameful or in some sense wrong. You see a lot of stories on the news about people being caught out by technology. There was a case not so long ago in the States where a man was accused of murdering his wife, having claimed she had been killed in the course of a home invasion. But at the time of her death she had been wearing a Fitbit, and the Fitbit data suggested that she had been moving around the property at running speed, and that her heart rate was consistent with someone undergoing serious exertion. The data suggested she had been running away, and that evidence was enough to convict him.

 

Another story I heard recently was of a couple who lived together and shared a pair of smart scales. They are scales you stand on, and they send data to your phone with your weight and body mass index, so it keeps track of it. God knows why you would want to, but the technology exists. The woman in this relationship was away on business and was surprised to receive an update on her phone from her smart scales giving her latest body mass index, which she noted with concern was lower than when she had last weighed in. She then realised that the person her boyfriend was having an affair with had idly stepped on the scales. The bigger picture is that we are in a transition period in which we don't yet realise that we are being watched the whole time. Once we cotton on to that, we are likely to change our behaviour. That is a form of power. The third and final form of power, before I sit down, is what I call perception. We rely on third parties to gather information about the world beyond our immediate perception. In the past it was principally books, the news and mass media; now it is increasingly technology. We rely on technologies to gather, in some cases write, and in many cases order the news, to choose which slice of reality we see. That is important because it shapes what we see as true or false, right or wrong, beautiful or ugly. And the same applies whenever we search for information. We use Google, which works by typing in words and getting results. I don't think in the future it will work like that; I think it is more likely to be a natural language conversation where you ask a question and you get an answer. But the answers you get, and the answers you don't get, will be important, because they will frame what you do and don't know about the world. So when I step back and look at new technologies, I ask whether they have the power to write the rules, to scrutinise us by gathering data about us, and to control our perception of the world.
Particularly if they can do more than one of those things, say, gather data about us and then alter our view of the world, they grant those who control them a large amount of power. That might be the state, as in China, or it might be a private tech firm. I think the battle between those two entities will be a big one in the coming century. So that is power. Maybe we can talk a little more about democracy, freedom and justice in the questions. That is an overview of the book, so you don't need to get it now. Thanks very much!

 

Nikita – Thank you very much, Jamie. That was an excellent overview. I will now open the floor to questions, and I request that you please introduce yourself and your organisational affiliation before asking your question.

 

Question – Hello, my name is James Challinor, I am from the Wolves of Westminster. You touched earlier on how technology is going to play a greater and greater role, and you gave the example of the AI which can pass the medical exam at a higher rate than doctors do. That is inevitably going to have a big impact on the job market. Did you explore that avenue in your book?

Jamie Susskind – I did explore that theme across a couple of chapters in my book, and broadly I agree with the thesis. My starting point when it comes to technological unemployment, which is the phenomenon whereby machines and non-human systems might increasingly be able to do our jobs better than us, is this: you don't need to live in a world of 100% technological unemployment for it to be a profoundly transformed world from the one we currently live in. If even only 5-10% of the country, let alone 20-30%, could never work under any circumstances because their work could be done better by a machine, then you require a welfare system and an economy which are profoundly different from the ones we have, and I try to explore some of the options for that. Just to try to make good the thesis a little: there are two terms which are helpful when looking at technological unemployment. Neither of them is mine; both are my brother's, who writes about this stuff. One is frictional technological unemployment, which takes place when a person in one location is made unemployed because a robot does his job better, and there is a job in another city, but he is not trained to do that job and it is not where he lives. There are enough jobs and enough tasks for humans to do, but physical and practical constraints, like a lack of training or transport, mean that they can't be filled, so that person cannot be employed. That is phase one of technological unemployment. The other is structural technological unemployment, where even if there were perfect retraining and perfect flexibility in the labour market, machines have advanced to such a level that they can do the economic tasks better than a human being, no matter how fluid the transport or training situation. There is just not enough work for humans to do. One counter-argument which is often made is that machines generate jobs as well as destroying them, which is certainly what they did in the past, in the industrial revolution.
The difficulty is that it prompts the next question: those jobs which are generated, would they be done better by machines or AI systems? I think there will be strong economic incentives for companies to employ machines wherever the work would be done better by machines and artificial intelligence systems. It is an issue our generation will have to wrestle with. We will have to explore philosophically why we arrange labour through a labour market, and whether it is ethical to require work when that work is (inaudible) or unavailable. It is going to require quite a philosophical transformation, and we are going to have to find other ways of getting the things we get from work: an income, social status, esteem, welfare and respect.

 

Question – My question is about power and democracy. The concern I have is about regulation and how we regulate these power brokers. There are instances recently where Facebook has deleted the accounts of people whose political views Instagram, Facebook, Twitter and WhatsApp do not like. There is the example of President Trump saying a large number of his followers have been deleted. Goldman Sachs has instructed all its employees not to donate to Trump; they can only donate to the Democrats. We're on a spiral here, chaps, and it would be interesting to pick your brain as to how we can exert leverage over these organisations, not just in an immediate sense but also in 50 years' time.

 

Jamie Susskind – I think it's a profound question. I won't speak to the Goldman Sachs example, but rather to what we are going to do about companies which acquire a degree of power through the technologies they control. To use the example of speech: one of the most sacred forms of speech, political speech, now takes place online, and because it takes place online it is subject to the rules of those platforms. As you rightly say, they can decide what may and may not be said. They can also decide the form of what is said; a tweet simply will not send if it is over 280 characters. That is a form of regulation of the debate as well. So for our freedom of speech we rely on these intermediaries. That is one of many freedoms for which, I argue, we will rely on tech companies in the future. I think it is a good example because it is one of the earliest, and the one we are becoming most aware of now. Now, a lot of people, like yourself, will ask: how do we regulate these companies? I think it is the right question, but we are nowhere near being able to answer it. You only need to look at the congressional hearings with the head of Google, where a congressman was berating him about his iPhone, which is manufactured by a different company, or when Mark Zuckerberg was asked by Senator Hatch how Facebook makes money if it doesn't charge its users. The level of technical sophistication among politicians, and I don't say this to deprecate politicians, is too low for us to have faith that the laws they make will be an improvement rather than the opposite. Another problem with regulation, with the state co-opting technology, is that you move towards a system like China's, where the state has a great deal of control over technology but uses it for its own ends. There have been some developments along these lines in Australia over the last few weeks in relation to encryption.
So you have to keep an eye on the power of the state and balance it against the power of tech. To come back to this particular example, one of the forms of regulation I would favour, if we ever get there, and this is a trite point which anyone familiar with these debates will know, is transparency. It is an elementary point of western political philosophy that when power is exerted over you, you are entitled to see how it is being used and on what terms. That is why we ask our legislators to debate in public. Jack Dorsey came before Congress and told them, in effect: I have some news. Inadvertently, Twitter's algorithm went haywire and downgraded 700,000 accounts from the public consciousness. Some of those people were politicians. Some of those politicians were running for re-election at the time. The algorithmic problem may well have led to some of them losing elections, I don't know, but don't worry, because I, out of the goodness of my heart, have come here to tell you, and to tell you we have it under control. That seems to me an inadequate state of affairs. We shouldn't have to rely on the goodwill of the Jack Dorseys and the Mark Zuckerbergs of the world to make sure their technology is being used for the public good. What I favour is more of a system where you have a fence at the top of the cliff rather than an ambulance at the bottom: some insight into how the algorithms work, so that civic-minded individuals and journalists can keep an eye on them, and can say, before the catastrophe happens, that something is going wrong here. That would seem to me more consonant with the political principles we have inherited from the past, relating to the power of transparency and sunlight being a great disinfectant. So I think too much of technology is a black box to us, and as that increases, I don't think it is sustainable, either by our old principles or by the new ones we will have to develop.

 

Question – I am a researcher in the House of Lords, and one of the reasons people around the world say that British legislation is of a high standard is the scrutiny, particularly in the second chamber, in its trustful collaboration with the House of Commons. Apropos of what you said earlier, politicians, and it's true, they say it publicly, just don't have the knowledge of IT which is needed. What is your recommendation, don't worry, I won't quote you, on what steps they need to take?

Jamie Susskind – What, aside from reading my book?

Question – Well, I have read your book and it is amazing. I love it.

Jamie Susskind – Well, this has been a great question; we can just leave it there. The answer is not an easy one. There are some politicians in the Houses of Lords and Commons who do take this stuff seriously. What we need is generational change. It sounds vapid, but it is true. I'm not saying this is only about age, but in some ways it is about age: whether it is thought acceptable to say, "I barely know how to use my email" or "I rely on my grandkids to do my tweeting for me," as if that were a badge of honour, as if the personal inclination of a politician towards technology should determine how much they need to know about it. Saying "I don't know about technology" should be as disqualifying for office as saying "I don't understand economics," or "I don't speak the English language properly," or "I don't know the geography of this country at all." If we believe that the major political issues of the next 50 years are going to be technological, then our politicians have to understand technology. We also need the engineers in tech companies to be well trained in morality and philosophy, and a lot of my book is geared towards those individuals, so I don't let them off the hook. Nor do I let us off the hook. We need to start looking at technology as citizens, not just as consumers, and we must demand of our politicians that they are as well versed in it as they should be. But the trouble is that generational change takes a generation, and technology is moving much faster than that. Look, there are some instances where politicians have done a good job of introducing innovative laws which affect technology. One is the GDPR in the European Union. It is not a perfect piece of legislation, and having read it I can tell you that it is an extremely boring piece of legislation, but it is a far-sighted attempt to control the way data is collected, stored and transferred, and that affects a lot of the issues in my book. They just don't have that in the US; it is the Wild West as to how your data is used.
Still less so with regard to China. There is no... and I am not saying this to make a political point... there is no doubt that Europe has been the best regional player at trying to regulate technology, even if the results have not always been benign. So what do politicians need to do? They need to learn about tech. They don't need a PhD; they don't need to be software engineers. What they need is to understand what a machine learning system is. You need to understand what blockchain is before you start distributing welfare payments using it. You do need to know how the internet works if you are going to try to regulate it. Most politicians are not thick. They are well meaning, but for most of them it is just not a priority, just as climate change wasn't a priority 20 or 25 years ago. I am aware it is still not a priority for many people now. That is the kind of generational change whereby something goes from being off the agenda to being on it. I think tech should be on the agenda.

 

Question – How will technology affect democracy?

Jamie Susskind – My thinking is similar to yours. The faster-horses version is direct democracy. Security concerns would have to be resolved, but if they could be, there is no reason why you couldn't swipe left or swipe right on a policy five or ten times per day on matters of local or national concern. You could also have liquid democracy, where you delegate your vote: on health policy, for example, you say, I am going to let this consortium of doctors vote on my behalf whenever a vote comes up. You could also have an AI system on your phone where you tell it your values, you answer a questionnaire, and based on your lived experience and values it could recommend a particular policy to you, or vote on your behalf, or vote on your behalf a thousand times a day with varying degrees of strength: this individual strongly supports this policy, but with the following change. There we are moving away from faster horses and approaching something quite different from what we currently have. The real change, the next phase in human self-organisation, is to look at something like data democracy, or what I call AI democracy. It is strange, I think, or it may come to be regarded as strange, that we claim to have governments which represent the people when the data on the basis of which they claim to represent the people is so scarce: a tick in a box every few years, choosing between four or five options. If there is going to be the equivalent of three million books of data per person, then could a government truly be said to represent the people, or be democratic or legitimate in some other meaningful way, unless as a matter of course it properly reflected the data which is gathered about people? We put that data to good use for entertainment and social purposes. There will inevitably be calls to put it to political purposes.

Finally, if we allow AI to perform operations on us, to trade stocks on the market, to diagnose cancers and to look after our children and the elderly, it is not far-fetched to ask which political decisions might be better taken by non-human systems. A lot of people balk at this, and I share their fears, but I would say two things. First, a lot of decisions are administrative in nature: how to run the local traffic system, or how to distribute water in a given region. It might well be that AI can contribute successfully in that area in the future; there are some tasks which may not be best done by our representatives. What I would also say is that we are already allowing machines to perform functions of extreme political importance without realising it or calling it that. 72% of CVs are no longer read by human eyes; they are initially scanned by systems before they get through to the next phase. Perhaps not for the jobs that folks in this room do, but for a lot of jobs. To me, the distribution of jobs, a job being the most economically useful thing you can have, being largely determined by automated systems run by private companies already suggests that we have become comfortable with AI systems making decisions of political importance, even if we don't call it politics with a capital P. So in so far as we still live together in human civilisations, and assuming these systems do not become so powerful that they destroy us all, direct democracy is the faster horses, and the car is a politics constituted on data, in which non-human systems increasingly take decisions of importance. You can see the trajectory: in the past it was priests, princes and kings, then the people, then their representatives, then whatever comes next. We are not there yet.

 

Question – When you mention direct democracy, listening to you, one of the things which comes to mind is that it is completely global, completely transnational. A lot of the problems we have are at a level higher than that of nation states. Why try to circumscribe people within borders which were established in the 18th century and so on, when everything is moving towards other borders which are not these borders?

 

Jamie Susskind – I think that is a very profound question. With technology one can certainly imagine better ways of constituting human communities, some transnational, some hyper-local. At present, though, we still live in a world governed by the nation state, which tends to be the ultimate sanctioning force within its geographical region. That power to make the ultimate decisions remains an important one, and you would have to delegate and redistribute it if you were to reorganise along those lines. That's the philosophical answer. The practical answer is that I am not one of those people who believes that technology will erode the nation state, just as I think it was naïve for people in the 90s to say that the internet, because it was by nature a decentralised technology, would lead to greater decentralisation. Technologies bear the stamp of the world into which they are born, and the technologies we are currently developing are being born into a world of nation states, a world of powerful companies, a world of what might loosely be described as democratic capitalism. These technologies will, in the first instance, take on the forms of those who are most powerful in developing them. The internet, which was initially seen as this great decentraliser, has increasingly come under the control of national governments and big companies. The technologies I describe in my book will do the same unless we somehow manage to change course, which is why I don't predict the death of the nation state. In fact, I predict the supercharged state: in the next 50 years it will be much, much easier to be a state than it has ever been in the past, to keep an eye on your population and to enforce the rules through technological means as well as traditional ones, by coding them into the technologies we use.
When your car doesn't drive over the speed limit, for example; and of course the speed limit might change when you cross the border into France, where that code is enforced in a different way. So I think it will be easier, regardless of the philosophical merits of organising ourselves differently, to organise ourselves as states, and in particular for the machinery of state to be more powerful.

Nikita – I am afraid we have 10 minutes left, so perhaps we will take one more question, and then I will ask a question myself before we finish.

Question – I'm an in-house lawyer for TK Maxx. I just wanted to ask: the current form of Prime Minister's Questions, do you see that changing?

Jamie Susskind – Honestly, I have never directed my mind to that question. One of the great things about writing about the slightly more distant future is that I don't need to get bogged down in day-to-day politics, so I'm not going to give you a bullshit answer to this. Suffice to say, I don't think our system functions very well now, quite aside from the problems posed by technology. I think the rules and the norms of the debate could do with some updating so as to engender greater trust. Do I see an AI system doing a better job than the current politicians? In not so long, maybe, and I don't think it's a very high threshold. Other than that I don't really have an answer to that question. I'm not the person to ask.

Nikita – My question is about the ethical debates which come out of the material you have written. If you look at the worst-case scenario, pitting AI against humans, what do you think humans have access to in terms of justice? You mentioned several times discrimination on the part of AI systems that are reading CVs, for example. What kind of mechanisms would you need to even start a case on something like that? Or take my area, which is terrorism: look at predictive policing and the use of data. What avenues would people have to challenge the way their data is used to stop them committing a crime before they have even committed one, because the data is telling the police that this person's behaviour is very likely to be a terrorist's behaviour? Being a lawyer yourself, I would be very interested, because this throws open a huge amount of change in the legal system. What avenues would people have to protect themselves from the more malicious uses of AI?

 

Jamie Susskind – There is inevitably going to be an enormous amount of work done to shoehorn our current laws into forms which can deal with new political problems. It is the job of the common law, the courts and Parliament to develop the law in a way which properly governs the society they seek to govern. I don't like pitting it as humans versus AI, because one of the big messages of the book is that, at least currently, behind every technology is a person or a company that can be held responsible for their creations. Giving shape to that responsibility in legal terms will be an important legal development. Just because questions about our freedom and our democratic process increasingly reside in technology, that doesn't necessarily mean that those things will be made worse. If you write the right code and embed the right rules, you might have a recruitment system which is more just than the one we have at the moment. Being a discrimination lawyer by background, I can say that we are highly imperfect in the way we offer jobs to people and employ them. That said, we are still in the realm of cock-ups. I don't know if anyone saw the story about Amazon over the summer, but one of the biggest and most sophisticated tech companies in the world was using a machine learning algorithm to recruit people. They gave it data about their most successful employees from the last ten years, on the basis that a machine learning system was the best way of detecting a pattern that humans sometimes can't see: find the pattern between what's on someone's CV and what makes them a successful employee at Amazon. The problem was that Amazon had had a highly male-dominated work culture; most of the successful managerial roles were held by men. When the machine learning system processed the data, lo and behold, it ascertained that the most likely indicator of being successful at Amazon was being a man.
So if your CV contained the words "women's volleyball team", it went to the bottom of the pile. Or the name of an all-girls school or university, it went to the bottom of the pile. That's an example of an algorithm getting it catastrophically wrong, which Amazon used for four years. No doubt there are many great female candidates who didn't get the job of their dreams because the algorithm didn't function. Think about it the other way as well. Start with a principle of justice: say you believe in positive discrimination, so you believe it is important to recruit people from non-traditional backgrounds or genders into an organisation. There is no reason why you couldn't code that into a recruitment algorithm, so that it is weighted in favour of justice rather than against it. There is no reason why you can't build systems which leave the world better than they found it rather than worse. I quite often make this point to software engineers, and often it lands and you see their eyes widen: actually, every decision they take is a political decision of sorts, before we even get to the realm of laws and of people enforcing their rights, whatever those rights are. I would like to see a world in which technologies are developed consciously, alongside principles of justice and freedom. And when you train to be a computer scientist, the ethics course should not be a voluntary bolt-on you do in your third year; it should be integral, as it is to law or medicine. That's a long-winded answer to your question, but I don't have a shorter one. There's no one policy which can solve the problems arising from technology. What you need is better technologists and better politicians.
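[Editor's note: the two mechanisms Susskind describes here, a recruiter model absorbing bias from skewed historical data and an explicit policy weight pushing the other way, can be sketched in miniature. This is a hypothetical Python toy with invented keywords and data, not Amazon's actual system.]

```python
# Toy illustration: a keyword-based CV scorer trained on biased hiring
# history absorbs that bias; an explicit policy adjustment counteracts it.
from collections import defaultdict

def train_keyword_weights(history):
    """Learn a weight per CV keyword from (keywords, was_hired) pairs.
    Weight = hire rate among CVs containing that keyword."""
    hits, seen = defaultdict(int), defaultdict(int)
    for keywords, hired in history:
        for kw in keywords:
            seen[kw] += 1
            hits[kw] += int(hired)
    return {kw: hits[kw] / seen[kw] for kw in seen}

def score(cv_keywords, weights, adjustments=None):
    """Sum learned weights plus any explicit policy adjustments."""
    adjustments = adjustments or {}
    return sum(weights.get(kw, 0.0) + adjustments.get(kw, 0.0)
               for kw in cv_keywords)

# Invented history mirroring a male-dominated workforce: past hires
# correlate with "mens_rugby", not with skill keywords.
history = [
    ({"python", "mens_rugby"}, True),
    ({"java", "mens_rugby"}, True),
    ({"python", "womens_volleyball"}, False),
    ({"java", "womens_volleyball"}, False),
]
weights = train_keyword_weights(history)

# The learned model penalises the gendered keyword...
biased = score({"python", "womens_volleyball"}, weights)
# ...but a deliberate counter-weight, of the kind described above,
# restores parity with an otherwise identical CV.
corrected = score({"python", "womens_volleyball"}, weights,
                  adjustments={"womens_volleyball": 1.0})
```

The point of the sketch is that both the bias and its correction live in the same place, the code, which is why every such weighting choice is, as Susskind puts it, a political decision of sorts.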

Nikita – Thank you so much for coming today.
