AI: Legal, Ethical, and Policy Challenges

Artificial intelligence is fraught with legal, ethical, and public policy challenges. This episode brings together two esteemed experts to discuss these issues and present guidance for both commercial companies and public sector policymakers.


Dr. David A. Bray began work in public service at age 15, later serving in the private sector before returning as IT Chief for the CDC’s Bioterrorism Preparedness and Response Program during 9/11; volunteering to deploy to Afghanistan to “think differently” on military and humanitarian issues; and serving as a Senior Executive advocating for increased information interoperability, cybersecurity, and civil liberty protections. He serves as a Visiting Executive In-Residence at Harvard University, a member of the Council on Foreign Relations, and a Visiting Associate at the University of Oxford. He has received both the Arthur S. Flemming Award and the Roger W. Jones Award for Executive Leadership. In 2015, he was chosen to be an Eisenhower Fellow to Taiwan and Australia, and in 2016, Business Insider named him one of the top “24 Americans Who Are Changing the World.”

Kay Firth-Butterfield is a Barrister and part-time Judge and has worked as a mediator, arbitrator, business owner and professor in the United Kingdom. In the United States, she is Chief Officer, and member, of the Lucid.ai Ethics Advisory Panel (EAP) and an adjunct Professor of Law. Kay is a humanitarian with a strong sense of social justice and has advanced degrees in Law and International Relations. Kay co-founded the Consortium for Law and Policy of Artificial Intelligence and Robotics at the University of Texas and taught its first course: Artificial Intelligence and Emerging Technologies: Law and Policy. She is Vice Chair of the IEEE Industry Connections Committee “Global Initiative for Ethical Considerations in the Design of Autonomous Systems”. 

Transcript

Michael Krigsman: Welcome to Episode #203 of CXOTalk. I’m Michael Krigsman, and CXOTalk brings together the most innovative, most original, most interesting business thinkers to have an in-depth conversation about important and very often disruptive issues. And these are people who are genuinely shaping some important part of our world. Today, on Episode #203 of CXOTalk, we’re going to be discussing artificial intelligence, and particularly the ethical, public policy, and legal challenges and implications associated with it. We have two amazing guests. Our first guest is Kay Firth-Butterfield, who is a legal scholar and one of the world’s top experts in the ethical and legal issues associated with artificial intelligence and robotics. And our other guest is David Bray, who is the CIO for the Federal Communications Commission. So, let’s begin with Kay. Kay Firth-Butterfield, how are you, and thanks so much for taking the time!

Kay Firth-Butterfield: Thank you for giving me the opportunity!

Michael Krigsman: So Kay, tell us about your background.

Kay Firth-Butterfield: Well, my background, as you say, I’m a lawyer. I was a barrister and judge in England before I moved to the United States about eight years ago. And here I’ve been teaching law, and thinking about artificial intelligence. I recently was the chief officer of an AI company and ran the ethics advisory panel. We were the first AI company that actually allowed its chief officer to go out and give talks, and talk about what we were doing. I’d say it was a great privilege for me to be there at the founding edge of that work and to be able to talk about what we were doing. In the last month, I moved to be the executive director of AI-Austin, and that’s a brand new collaboration between academia, industry, government, the medical school, and others. And although we’re based in Austin, we actually have a very global outreach program. So, I’m really looking forward to taking that forward with my co-founding advocates. I still do have an academic background: I’m a Distinguished Scholar at the [Robert S.] Strauss Center at the University of Texas, and I teach a course on artificial intelligence and the law for the law school there. I am founder of a consortium at the University of Texas which we call “CLEAR,” because its actual title is quite long: the Consortium on Law and Ethics of Artificial Intelligence and Robotics. And then, I am Vice Chair of the IEEE’s project (another snappy title): The Global Initiative for Ethical Considerations in the Design of Autonomous Systems.

Michael Krigsman: Ok. So, if we want to talk about law and AI, you’re the person to talk to. [Laughter]

Kay Firth-Butterfield: Well, I’ll try to be! [Laughter]

Michael Krigsman: Ok. And, our other guest is my good friend and colleague David Bray, who has been on CXOTalk several other times, and David is the CIO for the Federal Communications Commission. Dr. David Bray, welcome to CXOTalk again!

David Bray: Thanks for having me, Michael. It’s great to be here and I look forward to discussing how we can both deal with the basic fundamentals of artificial intelligence; and how we can begin to use it in organizations both public and private; as well as how we can actually make sense of the ethical issues involved in AI use.

Michael Krigsman: Ok. So let’s dive in. When we talk about AI, what do we mean? David, tell us, what are we actually talking about here? I think we need to get that out of the way first.

David Bray: I think that’s absolutely true. So, artificial intelligence probably means many different things to different people. People talk about it as machine learning, as neural networks… It really is using technology to try and emulate, basically, something that appears to be intelligent. And I want to be very careful about using the phrase “appears to be intelligent,” because we have to answer the question of what intelligent behavior is in the first place. We can be very human-centric and say, “Well, humans are intelligent because we are able to make sense of challenges put before us. We can be goal-oriented.”

I think when it comes to talking about AI, what we’re really talking about is using technology to solve problems or achieve goals in ways that appear to mirror intelligence beyond just something that someone has programmed the machine explicitly to do.

Michael Krigsman: And Kay, as an attorney, how do you think about the definition of AI?

Kay Firth-Butterfield: Well, I will piggy-back on what David said, in terms of AI. But I think that basically we’re talking about a body of scholarship that has been going on now for a long time, and a number of different ways of achieving what we call “artificial intelligence.” But again, I’m going to piggy-back on David as to what might be intelligence. I think that we need to perhaps lift from the UK government’s recent report the idea that “artificial intelligence” can be adequately used as an umbrella for all the different things going on ─ all the different scholarship in this space.

Michael Krigsman: When we talk about AI, it seems like there has been explosive growth in the jargon value of AI, with commercial companies wanting to piggy-back on top of the terminology. And so, clearly AI is very important, but can either of you shed a little bit more light, more specifically, on why it’s so important, why this explosive growth, and why we should actually care about the legal, policy, and ethical issues of AI?

David Bray: So … Go ahead, you can go first, Kay.

Michael Krigsman: Please, Kay, go ahead.

Kay Firth-Butterfield: Ok. [Laughter] I think that one of the things that sticks out in my mind is some research that McKinsey [& Co.] did recently, where they describe AI as a contributing factor to the transformation of society. And I just want to quote what they’re saying about the transformation of our society: that it’s happening ten times faster, and at three hundred times the scale, or roughly three thousand times the impact, of the industrial revolution. And you know, a lot of people compare this revolution to the industrial revolution. But I think it’s the speed, and the real, core underpinning that AI is contributing to that transformation of our society, that makes these discussions so important.

David Bray: So, I would build on what Kay was saying, and really say that I think AI has had three waves. We’re kind of in wave three in my opinion. The first wave: you can actually go back to a Nobel Prize winner Herb [A.] Simon. Herbert Simon, interestingly enough, actually started off actually with New York government and public service, and he observed what he called administrative behavior. And what he saw in terms of how people did administrative behavior was, generally, people didn’t go beyond sort of the landscape of what they already knew to be true. And he had this conclusion that the challenge is “How can you help people in organizations go beyond the landscape of what people knew to be true.” And interestingly enough it led him to do behavioral economics, behavioral psychology, and eventually to artificial intelligence. So, in some respects, observing how people make decisions in public service gave rise to artificial intelligence. That was a first wave.

The later waves tried to look at what were called decision support systems and expert systems, and that would be the late ’80s and the ’90s. And I think where we are now is the third wave, which is really occurring, quite frankly, as Kay said, because computers have gotten fast enough; memory has gotten cheap enough; and the internet is now connecting things so that we can actually do distributed problem-solving at a scale that frankly was not possible in the ’80s or ’90s, or when Herb Simon was trying to do his work in the ’70s.

So, why has AI become the buzzword? In some respects, I would say it’s replaced “cloud” as the new buzzword; cloud was the buzzword four or five years ago. That said, interestingly enough, even though it’s replaced that buzzword, in some respects AI is only possible now because we do have cloud computing. So, you have elasticity of CPU cycles, of memory, and quite frankly, just the sheer scope of being able to collect data and trying to make sense of it. That’s why I think artificial intelligence has reached the crescendo that we’re hearing about right now.

Kay Firth-Butterfield: And I think also that we’re actually beginning to see artificial intelligence out in the world, so the general public can see it so much more. Interestingly, Future Advocacy did a YouGov poll in the United Kingdom just recently, which showed how little the general public understands about AI. But when people go out their door and can see, for example, an autonomous vehicle or truck, that’s really leading to the much greater reporting in the press about AI. And so, I think it’s not just the technology, but also the fruits of the technology that are being seen, that contribute to the conversation I think is so important at the moment.

Michael Krigsman: So we see, as you said, technology such as autonomous vehicles coming out, and if you’re in San Francisco you can often see these driverless cars, or autonomous cars, driving around the streets, and things like chatbots that are visceral reminders to people, or make people aware of the personal impact of these technologies. So, it’s not just hidden beneath the surface. But all of this creates a set of dynamics with profound implications for ethics, for the legal system, and for policymakers. And, Kay, why? Why is that the case?

Kay Firth-Butterfield: Because we, as lawyers, are always catching up. And so, for example, in a common law system, unless you have legislation, you have to wait for something to happen before you can have case law decisions made about it. So, we’re sort of in this holding pattern at the moment where we’re either waiting for governments to create legislation, or for self-regulation to spin out ─ and I think that’s vitally important ─ or the case law piece. And so, you know, if you look at Europe for example, they have gone with regulation of a lot of these things, and more and more so. Whereas of course, in the United States, we have seen very slow progress, just through the NHTSA trying to work out how to govern or regulate safety on those vehicles.

Michael Krigsman: And David, what about … Why is this such a potential quagmire? Why is this so fraught with difficulty and challenge from a policy perspective?

David Bray: So, that is where I’ll put on my Eisenhower Fellow hat, where I was in Taiwan and Australia and had a chance last February and March to talk to them, both about their strategies for the Internet of Everything, but also the expected impacts of artificial intelligence. And I think, first, there is the need for educating people in a way that is accessible to everyone, not just computer scientists, as to what artificial intelligence can and cannot do. I think we may have the challenge of people who have been educated in those respects through the movies, and the movies, of course, show a very unrealistic situation in terms of artificial intelligence deciding to change its ultimate goal, and somehow taking over the world or something like that. You know, we actually do not currently have a programming language that allows the program itself to change its ultimate goal. It may be able to change sub-goals, but we don’t have the ability to have a machine change its ultimate goal yet. And people will say, “Yeah, but…” ─ again, that’s unlikely.

It’s trying to have a conversation that involves everyone, not just the experts on artificial intelligence, that is going to make tackling these issues, both in the public sector and in the private sector, challenging. And I think, as Kay said too, we need to have a few more demonstration projects before there is any rush to try and do any policy. If you don’t even begin to show what’s possible ─ both in a good sense, and also what you maybe want to try and avoid ─ it’s really hard to have an informed conversation. So hopefully over the next year or two, we can try to show how artificial intelligence, beyond these autonomous cars, maybe can make local communities healthier or safer, or maybe address things at the national level.

One of the things I’m tracking with interest is in California. They actually are using machine learning to help set bail decisions. So you feed in the facts of the case, and it actually makes a bail recommendation. The interesting thing about that is [it] actually helps weed out things that shouldn’t relate to your bail decision ─ it shouldn’t be related to your height, or your weight, or your gender, or your race. So, in some respects, artificial intelligence in that sense could actually make things more ethical, because we know what the algorithm is, and we know it’s not taking in extraneous information that should not be important.

Kay Firth-Butterfield: Except, looking at that on a different footing ─ using it to help with sentencing ─ you know, there was recent research done around bias, in-built bias, in sentencing, such that people of color were still getting heavier sentences than white people under the models. So I think we have to be very careful around how we build these systems.

David Bray: Oh, I agree 100%, and that’s where one of the things I really want to see happen is making the algorithm open-source in terms of what weights and what factors it’s considering. So I agree. If you’re only going to base it on past decisions, and past decisions were made with human bias, then don’t be surprised that the machine itself is going to be biased, too. I think that’s where there needs to be a conversation about where you want to get your data. Because if your data is biased, it will result in biased decisions. However, that’s also where the machine itself can probably pick up on, and actually begin to identify where, these past human decisions were biased. I mean, we know it’s a sad reality: your height should not relate to the amount of money you earn as, say, a Chief Financial Officer, but we know that there’s a very strong correlation ─ the taller you are, the more you earn as a Chief Financial Officer, even though height has no relation to the job. And so, we know that we humans all have inherent biases, even if we try not to. That’s actually why I’m a big fan of the phrase “collective intelligence,” and what collective intelligence is: how do you arrange both human and technology nodes so they make smarter, more intelligent decisions that are ─ I mean, you can never remove bias ─ but as unbiased as possible? So, I think it’s worth talking about not only doing pioneering projects on artificial intelligence and learning what works and doesn’t work, but also doing experiments on collective intelligence, that combination of humans as well as technology nodes, to ideally begin to remove bias from both groups.
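
To make the two ideas above concrete ─ excluding protected attributes such as race from a model’s inputs, and then auditing the trained model’s recommendations by group, since biased historical decisions can still leak through correlated features ─ here is a minimal, hypothetical sketch in Python. The column names, the synthetic data, and the use of scikit-learn are illustrative assumptions, not a description of the California bail system mentioned above.

```python
# Hypothetical illustration only: synthetic data and made-up column names,
# not the California bail system discussed in the conversation.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for historical bail decisions (1 = release recommended).
df = pd.DataFrame({
    "prior_offenses":   [0, 2, 5, 1, 0, 3, 4, 0, 1, 6],
    "charge_severity":  [1, 3, 4, 2, 1, 3, 5, 2, 1, 5],
    "failed_to_appear": [0, 1, 1, 0, 0, 1, 1, 0, 0, 1],
    "race":             ["A", "B", "B", "A", "B", "A", "B", "A", "A", "B"],
    "released":         [1, 0, 0, 1, 1, 0, 0, 1, 1, 0],
})

# Idea 1: the model only ever sees features that should matter; protected
# attributes such as race are excluded from its inputs.
features = ["prior_offenses", "charge_severity", "failed_to_appear"]

X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
    df[features], df["released"], df["race"],
    test_size=0.5, random_state=0, stratify=df["released"],
)

model = LogisticRegression().fit(X_train, y_train)

# Idea 2: audit the recommendations by group anyway, because features that are
# correlated with race can still carry historical bias into the model.
audit = pd.DataFrame({
    "group": g_test.values,
    "recommended_release": model.predict(X_test),
})
print(audit.groupby("group")["recommended_release"].mean())
```

Even with race removed from the inputs, the audit step matters: features such as prior offenses can act as proxies for the protected attribute, which is the kind of in-built bias Kay points to above.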

Michael Krigsman: So is the issue here the fact that we’re now asking machines to make decisions, and exercise judgement, that people otherwise would have? Is that the issue that’s kind of driving the ethics? What’s driving [it]?

Kay Firth-Butterfield: Certainly, that’s one of the issues. And if you think about what Europe is doing with the general directive that will come into force next year, so very soon, they’re saying, “Well, we want transparency, we want openness.” So, if a machine is making decisions that adversely affect citizens of the EU, we want that machine to be able to explain itself, because the human would have been able to, and so therefore the machine should be able to. So, I think it’s very much around that human-in-the-loop notion: that previously human beings were doing this, and now we are increasingly giving over these decisions to our artificial intelligences.
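
As a rough illustration of what “a machine being able to explain itself” could look like in the simplest case, here is a hypothetical sketch for a transparent linear model that reports how much each input pushed one automated decision. The feature names, weights, and applicant values are invented for illustration; they are not drawn from any real system or from the EU rules being discussed.

```python
# Hypothetical illustration only: invented weights and values, not a real system.
# For a linear score, each input's contribution is simply weight * value, so the
# "explanation" is a ranked list of how much each factor pushed the decision.
feature_weights = {"missed_payments": -1.2, "income_to_debt_ratio": 0.8, "years_at_address": 0.1}
intercept = 0.5
applicant = {"missed_payments": 3, "income_to_debt_ratio": 1.5, "years_at_address": 2}

score = intercept
contributions = {}
for name, weight in feature_weights.items():
    contributions[name] = weight * applicant[name]  # how much this input moved the score
    score += contributions[name]

decision = "approve" if score >= 0 else "decline"
print(f"decision: {decision} (score = {score:.2f})")
for name, value in sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"  {name}: {value:+.2f}")
```

For more complex models the same idea requires dedicated explanation techniques, which is part of why the tension David raises next ─ explaining results without giving away the underlying intellectual property ─ is a hard one.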

Michael Krigsman: But David …

David Bray: Just to jump in there real quick. I was going to say, I think I would even go one step further: it’s not just about handing over judgement and decisions to a machine that a human would make otherwise. It really is about the loss of a locus of control ─ a loss of a locus of control for the individual. So, when you’re in an autonomous car, you know, you are not driving; the car is driving, unless you have the ability to stop in the next … [garbled text], but again, within milliseconds that might not be possible. It’s really about: are we handing over control to an entity that we are willing to trust, that will be as fair, if not more fair, than a human? And that’s where it gets to what Kay said with Europe. The interesting thing with the Europe question is that it applies not just to artificial intelligence but to what they call “autonomous systems.” So, the question is, “Will this require companies like Google and Facebook to be able to explain why certain results showed up at the top of the page as opposed to the bottom, and are they actually going to be willing or able to do that?” Because their search algorithm, their ranking, is in some respects their intellectual property. And it’s going to create some interesting challenges around how much companies are actually going to be able to explain why the system is doing things a certain way, while at the same time protecting intellectual property. And I think the interesting experiment for the next two or three years is going to be, “How can you do that and, at the same time, preserve possibly your unique advantages in the organization?”

Michael Krigsman: But David Bray … Please, let me … So David, let me just interject here. So how is this different from what currently is happening with existing technology, because Google and many other companies do personalize the data that is presented to us, and so these challenges are there. So, why is AI any different?

David Bray: So, I think it’s just the scale at which it may be used, and the scale and the impacts of the decisions. There’s always been the ability to tailor your experience, even before the Internet, in terms of what services were provided to you. People were making sense by hand of what things you should receive in the mail in terms of ads, or what was called “automated data processing” in the 1970s. And it’s interesting to note that, as Kay mentioned about the law, obscenity laws came before privacy laws, and that obscenity laws came about in the late 1800s because people started moving into the cities. When people were living closer together, they realized they might look out a window and see something they didn’t want to see. So, that led to obscenity laws. And then privacy laws came about in the 1970s, when you started doing automated data processing. And again, these machines were nowhere near as fast as what we have today, but somehow there could be a correlation of “This person lives at this address; they’re getting this type of heart medication; they also are on this type of insurance.” At what point do you need to say, “Well, those are correlations you shouldn’t draw unless that person is giving consent”? So with artificial intelligence, much like those things that came before, it’s just the scale and the impact of the decisions this machine might be able to make that will affect your life. So you’re right, it’s the same trend. But I think it’s the sheer scope and impact that we need to take into consideration.

Kay Firth-Butterfield: And I think it’s coupled with things that are going on in our society, which give it a bigger reach. Say, for example, our aging population. You know, if we decide that we might go the same route as Japan, and introduce more artificial intelligence devices ─ in the form of robots, for example ─ into our elder care, then that’s going to make the technology so ubiquitous that the scope is so much broader. The other way that we might go would obviously be immigration, to cover some of the care needs that we have for our elder population. And again, there are going to be different choices around the world.

Michael Krigsman: So the issue then, is one of scale and then one of pervasiveness. Is that why the issue of, the challenge of AI ethics has received such a high profile in recent days?

David Bray: I would agree… Go ahead, Kay.

Kay Firth-Butterfield: I was going to say “Yes!” In a nutshell, yes. But I think that the AI ethics point really came to the lips of the general media, and of those people who weren’t really thinking about this, through perhaps DeepMind’s original creation of its ethics board. And obviously, you know, the seminal quote from Stephen Hawking on the first of May, 2014, when he said that this could be the best thing that we’ve ever done, or our last. And I think that really captured the attention of the media. And while there were lots of us thinking about these things before, it’s become so much more part of the public conversation now.

David Bray: And I would build on that and say that I do think it’s the winning of Jeopardy by Watson, the winning of a Go championship; there’s been a series of events that are making this much more real to people. If you think about it, how many of us in the last ten years have been on a plane where at some point in time, and probably for a majority of the flight, the plane was on autopilot? And none of us were running around saying, “Oh dear, this plane is being flown by itself!” You know, it was always there in the background. It’s just now becoming increasingly visible to people. Sure, it’s actually raising interesting questions of: Will this impact employment? Will this impact jobs ─ the nature of work? And so, it’s raising a lot of interesting questions. I mean, the good news is we’re talking about it in some respects before the technology’s even able to do some of the things that people are claiming it might be able to do in the future.

Michael Krigsman: Kay Firth-Butterfield, you are one of the top legal scholars and ethicists in the world thinking about these issues. And so, when it comes to AI ethics, is there kind of a framework, or an approach, that we can use to break it down and look at the problem?

Kay Firth-Butterfield: I think that the way that I have always seen it is that we need to be thinking about responsible design, and also about companies that create AI taking responsibility. We’re a nascent industry, or at least a young one, but we need to have a level of maturity around the product. And so, what I was doing when I was at Lucid.ai was really talking about the fact that we need to be thinking about responsible design from the moment that we have the idea of a product, through to the way that it’s sold and used. And so, I think it’s a continuum, and it’s something, certainly, that in my new role at AI-Austin we’re going to be looking at, working with companies that are not only producers of AI, but users of AI.

Michael Krigsman: David Bray, you have been in public service for much of your career, and have had quite a number of roles looking at these kinds of issues. So, Kay talked about responsible design ─ really building it in from the ground up, looking at the technology through development, through the point of release as a product. What is your thought on that?

David Bray: So, I’d say I’m very supportive. I would say, from my perspective, having served in public service as well as in the private sector, you can plan for something to be used a certain way, or designed a certain way, and the reality is humans will find uses that you never intended: good, bad, and mundane. Unfortunately, when the Mumbai terrorist attacks happened, the attackers actually used the things that you and I use on a daily basis for web searches. They used GPS. They used social media, unfortunately, both to plan the attacks and to execute them. And I don’t know of any engineer [who] could have changed the design of web searches or GPS or social media to prevent that from happening. And so, we should recognize that yes, design is a good part, but there’s still going to be human agency that may use it in ways you never intended, and they may even be good ways that you never intended.

I really want to make sure that when we’re thinking about how we design, and actually how we begin to incorporate these things into society, we can give people the ability to actually indicate their preferences for what they want done, either with them, to them, or with their data, in association with artificial intelligence. So there may be some people like, “I’m all in! I would like to have an autonomous car. I’d like to, when I retire, have AI providing care, and that’s something that I have now given my consent to.” There may be other people wanting to live off the grid, and that’s also perfectly fine, too.

So, how can we continue to recognize … It’s almost an inversion of the Golden Rule, which is, you know, “Do unto others as you would have [done] unto you.” It’s almost sort of the interesting tweak to say, “Do unto others as they will permit you to do unto them.” And that’s again recognizing that what artificial intelligence is, is giving up some control. It’s recognizing that something else is making a decision ─ and in some respects it could be the same thing for a human ─ but it’s letting the human who is having those actions occur indicate their preferences as to what they’re comfortable with. And ultimately, when you do those things in the public space, having openness and transparency ─ so, as Kay mentioned, it’s not just being able to explain why the machine made a decision, but also being able to indicate: What was its range of possibilities? What is it actually considering? What is it not considering when it makes these decisions? So we can, again, have some informed understanding about the scope and sheer impact of artificial intelligence.

Kay Firth-Butterfield: Well, I’ll just echo some of those things. I think that everything that David just said is great, and really important, and it leads to the need for us to have a much more open conversation about some of the things that we’re doing. One of the great things about having this conversation with you, Michael, is that we’re reaching people who will be using AI, and we’re able to have this interdisciplinary conversation, which is so important to have at this level and at a wider level.

Michael Krigsman: So, Kay, this issue of the unintended consequences of AI, of the use of AI ─ really, of any kind of technology. We don’t know in advance how people will apply these technologies. How does that inform the shaping of laws, policy, and the ethical thinking as well?

Kay Firth-Butterfield: Well, yes, of course, you know, with every technology that we’ve ever built as humans, there have been bad actors. And so, my raison d’être when I’m thinking about this is thinking about things that we can do to be as safe as possible, and to educate people correctly in the use of the technology. But I agree with David that there are going to be bad actors who are going to use technology in bad ways. The best thing we can do is try and stay ahead of those people doing those things. It sounds like a cop-out answer, but, you know, ever since someone picked up a rock and hit somebody else on the head with it, we’ve been having this trouble as human beings.

David Bray: Yeah, and I would actually reinforce what Kay’s saying. I mean, when the car came out, that allowed interstate crime, which was something that had never been possible before. I mean, you could now potentially drive to a state that you weren’t living in, commit a crime, and drive out, and the local law enforcement wouldn’t know who you were because you were not a resident of that city. Does that mean we shouldn’t have cars? No. But I think, again, we should recognize that it’s how we humans choose to use things, whether good or bad, that will have impacts. What can we do, as Kay mentioned, to educate the public, and to ideally make it available to as many people as possible? And I think it gets to another ethical dimension that’s worth talking about, which is: I personally would like to see artificial intelligence be available to as many people as possible, so it’s not just a niche only available to a few. And so, I applaud efforts like OpenAI and other endeavors that are really rolling it out so it can be used by everyone, and it’s not limited to a few niche actors, because I think that’s going to be so key to making sure we can have these informed conversations. I would not be surprised ─ I hope ─ if in the future you have students as early as elementary school and middle school beginning to do experiments with artificial intelligence, so that as they grow up, they are much more aware of what it can or cannot do, and how it can enhance their lives.

Kay Firth-Butterfield: And I think that’s really an important thing because, you know, one of the things we have been talking about is taking some control for ourselves as individuals. And unless we empower people to do that through education, then people are not going to be able to take back that power. And also, I think there’s an issue around what we’re seeing on social media at the moment. I have seen a lot of it on Twitter in the last two days ─ people saying, “Oh well, we’ll move. We have to defend our privacy.” And there’s a lot of fear of surveillance ─ switching to Tor, and more secure uses of email, and things like that. That is not a positive sign for the way that some people in our society are thinking about artificial intelligence.

Michael Krigsman: Well, of course, there’s also great concern that the robots are going to be taking over our jobs, and especially in light of the political climate today, those concerns are particularly pronounced. And so, what about that? That must intersect the ethical perspectives in one way or another as well. How do we think about that?

Kay Firth-Butterfield: Well, I think that we do have to be very worried about it, because AI, in my view, is a technology that will benefit mankind, or humankind, enormously. And there are some great challenges that we have as humans, and for our planet, that we really can’t solve without AI. And so, we certainly don’t want to see a groundswell of opinion against AI by people who are losing their jobs to it. We’ve all read the Oxford Martin study, and the Bank of America [Merrill Lynch] study, that say that 47%, and I think 52%, of jobs currently done in America will go to automation in the next 15 or 20 years. But we have to think about the complexity of job loss, because we don’t know what the future jobs are going to be. What we do know is that as people lose their jobs ─ something that hasn’t been done in the past ─ we need to, and can, use AI to retool and re-skill that workforce to create the jobs of the future.

David Bray: So, I would build on that, too. I mean, if we go back to the analogy of when the car came out, I’m sure there was a large portion of the world’s population that was involved in raising horses and taking care of horses and things like that. But that didn’t mean we shouldn’t recognize the car as an advance just because people were worried about losing their jobs taking care of horses. And so, I think this raises a question: as jobs are lost because they can be automated, what do we as a society owe those people whose jobs have been displaced, to help them re-tool and retrain as best as possible for something else? And the jury is out as to whether more jobs will be created vs. destroyed as a result of artificial intelligence. So, we need to monitor that and be aware of it. We must also be aware that there is what’s called the “unemployment effect” on people’s health, which is that we humans need to have a purpose. And so, a future in which we don’t need to work because artificial intelligence is doing everything may actually not be the nirvana it sounds like, because we won’t find purpose. Or we may find purpose in avocations as opposed to vocations. But that’s a collective conversation we need to have, which is, “Where are we going together as a society? How can we make sure we bring as many people along?” As Kay said, ideally make it so they’re not as fearful of artificial intelligence.

I personally think the future is really going to be about pairing humans with AIs. Right now, artificial intelligence is a lot like a five-year-old. So, for example, if you have a five-year-old, none of us teach a five-year-old specifically to speak using subject, verb, and object. We just expose the four- and five-year-old to enough language before they actually begin to construct sentences on their own, and eventually they might say something like, “I walked to the school today.” And when you ask them why they say it that way, as opposed to, say, “To school today I walked,” the five-year-old is just going to say, “Well, I never heard it said that way before.” They’re not going to have a deeper reason why. And so, I think right now, while there’s going to be plenty of automation that’s possible by machines and by artificial intelligence, when you ask the deeper question of “Why?”, right now it’s just going to be, “Because that’s what I have seen in the data,” or “That’s what I’ve never seen before.” It’s not going to be telling you the deeper reason. That’s going to require humans, at the moment, to be able to dive deeper. And so, I think it’s really going to be about pairing humans and artificial intelligence, at least for the next 20 years, in my opinion.

Kay Firth-Butterfield: I certainly agree with that and I think that it’s a great, great thing for us to have the augmentation of AI as humans. We’ll be able to do our jobs better, and as I say, perhaps solve some of these intractable, currently intractable problems. I think two points I wanted to just come back to on David’s comment: One is that it was easier for people who had been looking after horses to perhaps move to looking after cars. You know, grooming a horse, and polishing the car. They’re manual labor. If we are looking at a change which actually requires a change from manual labor to coding, or any of those sort of things, then that’s a much bigger gap to bridge and we need to think about how that might be managed. And also, as a historian by background, I really worry about the analogies with the industrial revolution because the industrial revolution hurt a great deal of people over a long period. And yes, we came through it and we developed something better. But, it looks as if this industrial revolution will be much faster, and we need to prepare not to hurt as many people very quickly.

David Bray: And I think that’s very true, actually, because it’s worth noting that when the industrial revolution happened, and people moved from working on farms basically 24/7 to factories, doing basically rote, repetitive actions ─ aside from the fact that doing rote, repetitive actions for twelve hours at a time is not healthy for anyone ─ what was also very painful was that the way society collectively dealt with that transition from agriculture to industry was actually through alcohol and gin. And similarly, when we made the transition from the industrial revolution to the post-WWII era, in which people didn’t need to work as much, and actually worked 9-5 jobs, the interesting thing that happened was, there are some who actually argue that the way we dealt with that was through TV dinners and sitcoms, which, while not as bad as alcohol, shows we still needed an outlet. So Kay’s absolutely right. It’s going to happen in a much shorter time period. It may be as big a change, if not bigger. And so, having, again, that conversation about what we as a society owe each other is really key to have now, because we don’t know! And none of us knows if the job we’re currently doing today will, in two or three years, be done better by machines.

Michael Krigsman: You know, one of the big differences, I think, between this change that’s taking place, in terms of the fears around job displacement, versus the period of factory automation: when factories were being automated, they brought in robots, and people could walk into the factory and see, “This machine is doing this task. It’s a physical thing, and I can see how this task that I used to do is now being done, and so I understand how my job is being displaced.” However, with AI, I think part of the fear is that there’s this unseen hand ─ there’s the computer that’s a black box, and we have no visibility or transparency into it ─ and it’s changing things; it’s making my job go away, or I have the fear that it’s making my job go away. But it’s not tangible. And that changes the psychology of how people relate to the technology.

David Bray: Yeah, I would agree. My experience is ─ again, I previously worked in the Bioterrorism Preparedness and Response Program, so we dealt with bioterrorism ─ and what makes bioterrorism such a challenging subject is that it is not seeable. If you say something bad has happened, even if nothing actually has, the fact that it’s not seeable makes people worried, makes people fearful, and makes people worry something’s occurred. And so, we humans don’t do well with things that are invisible. And right now, most artificial intelligences are not written in a way in which you can easily show what they’re doing, like you said with the factory example. So I think part of the interesting ethics of design going forward is: how easily can you elucidate what the machine is designed to consider ─ recognizing, again, that it’s not going to be like A+B+C+D; that’s not what artificial intelligence does. It’s goal-bound. It’s often exploring a space much larger than you can express in a diagram. But something that can help people understand what it is possibly doing, and what it is not doing, in order to help people overcome that possible fear factor.

Kay Firth-Butterfield: And I think that’s interesting, because actually you have two sides of the equation here. We have the AI that can’t be seen because it’s locked away in our computers, or in the “black box,” as it’s often talked about. But when we actually do put AI into robots, it’s very interesting, because we then see them as being created in our own image. And I think that’s really interesting: you see robots being given names, and people relating to robots almost as if they’re human.

David Bray: Right, yeah. And well even with computers, how many of us have wanted to hit a computer because it did something we didn’t want it to do, thinking it would somehow respond to the fact that we hit it. But, you’re right, there are these cases of young children being educated through a robot, and then they become their friends and they hug them. So, we do anthropomorphize machines if we can see them.

Kay Firth-Butterfield: And I was just reading today that Google’s looking at cybersecurity, and had given the three algorithms they were using human names. [Laughter]

Michael Krigsman: So we want to have warm, fuzzy AI algorithms that make us feel good. We have really just a couple of minutes left, so let’s finish up. Kay first, and then I’ll ask David in a minute: what advice and suggestions do you have for people who are thinking about the law, and the evolving law, in regard to AI?

Kay Firth-Butterfield: Well I think the advice to lawyers is that very soon, you will be receiving… You will see those cases coming across your desk, and you need to get up to speed around artificial intelligence. And, what’s going on in artificial intelligence now, I think just going back to that job creation thing, actually there are going to be a lot of jobs around, so we’re not going to kill all the lawyers by automating them just yet, because we are going to see experts needed in court. For example, instead of cross-examining a driver, we might have to cross-examine an algorithm, a.k.a. an expert on the system. If you are in any business, you need to be looking at what AI can do for you, and what the impact of AI will be on your business. So there are two pieces of that, because I genuinely believe that AI will change everything. And if you don’t start looking now, you will be too far behind.

Michael Krigsman: And David Bray, your thoughts on guidance for policymakers who are looking at the policy, the public sector policy, and regulatory side of this. Any thoughts or guidance for us, very quickly.

David Bray: So again, I’m wearing my Eisenhower Fellow hat, not my FCC hat. The conversation I had in Australia and Taiwan is that cloud computing, in some respects, is the appetizer; artificial intelligence and the Internet of Everything are really going to be the main course that we’re going to be consuming over the next five years. And I don’t know if I can necessarily give advice to policymakers, but I’ll say what Kay said: any organization and any entity should recognize that this will disrupt how you operate, and it’s a question of whether or not you are very intentional about it, or someone else is going to do it to you. So, start on that journey now. Start having conversations. And if there’s one thing I really call out, it’s: look at the OpenAI effort, and other efforts like it that are trying to make this open and available to people, as a place to either begin experimenting or, if you don’t have the time to experiment, maybe have some of your employees begin to experiment with what’s possible. Because we’re only going to get the expertise we need in this era through the experiments that we need to do with artificial intelligence.

Kay Firth-Butterfield: And I think just to quickly add to that, we need to have more networking. We need to talk about this more. So, thank you very much for this opportunity.

David Bray: Yes, thank you Michael for the great service!

Michael Krigsman: Well, thank you both! This has been an amazing conversation, and in about a week, we’ll have the transcript up on the CXOTalk site, and you can dig in and watch the replay. Just a tremendous amount of information. You have been watching Episode #203 of CXOTalk. Our guests today have been David Bray, who is here in his Eisenhower Fellowship… “wearing his Eisenhower Fellowship hat” is the right way to say it, although he’s also the CIO of the FCC. And we have been talking as well with Kay Firth-Butterfield, who is truly one of the world’s leading experts on the law and ethics of AI. And a clear message has been that AI is going to be changing a lot of parts of our lives, and for all of us, regardless of the job that we do, the time to start learning about this, thinking about this, and understanding more of it is now. So, thank you so much, and we have another CXOTalk tomorrow, actually. So join us! Thanks so much, everybody. Have a great day. Bye-bye!

Published Date: Nov 10, 2016

Author: Michael Krigsman

Episode ID: 394