Ever been shadow-banned? Ever wondered if an algorithm is changing your perception of reality? We talk about AI’s black box problem and much more in this episode of The AI Show.
What happens when you don’t know why a smart system made a specific decision?
Today’s guest is Nell Watson. She chairs the Ethics Certification Program for AI systems for the IEEE Standards Association. She’s also the vice-chair of the Transparency of Autonomous Systems working group. She’s on the AI faculty at Singularity University … she’s an author … and she’s been a judge for the X-Prize.
Scroll down for the audio, subscription links, the video, and a full transcript.
Listen to the podcast here:
Subscribe on your favorite podcasting platform:
Or, watch the AI black box problem interview on YouTube
What we talk about
We’ve probably all heard the stories about this … in one, an image recognition system distinguished between dogs and wolves because all the wolf photos it was trained on also had SNOW in the background. Clearly, that’s a system that will fail in other circumstances.
But unless you know why an AI system is doing what it’s doing, it’s pretty hard to fix. So today we’re talking about transparency in AI. How important is it to know why a smart system made a decision … and, can we engineer know-ability into all our AI systems?
Full transcript: Solving AI’s black box problem
John Koetsier: What happens when you don’t know why a smart system made a specific decision?
Welcome to the AI Show. My name is John Koetsier.
Today we’re talking about why AI does what it does: the infamous black box problem. We’ve probably all heard the stories about this … in one, an image recognition system distinguished between dogs and wolves because the wolf photos it had been trained on had snow in the background. Clearly, that’s a system that’s going to fail in other circumstances.
But unless you know why an AI system is doing what it’s doing, it’s pretty hard to fix. So today we’re talking about transparency in AI. How important is it to know why a smart system made a decision, and can we engineer know-ability into all our AI systems?
I’m super excited to introduce today’s guest who is going to help us figure all this out.
Our guest today chairs the Ethics Certification Program for AI systems for the IEEE Standards Association. She’s also the vice-chair of the Transparency of Autonomous Systems working group. She’s on the AI faculty at Singularity University. She’s an author and she’s been a judge for the X-Prize. Her name is Nell Watson. Let’s bring her in right now. Nell, welcome!
Nell Watson: Hello.
John Koetsier: So glad you could join us. You’ve had a really amazing career. Can you tell us a couple of the highlights before we dive into everything?
Nell Watson: Yeah, I think I happened to be trying to solve a very difficult problem with reconstructing the body in 3D just using two-dimensional images, like from a standard photograph. And in trying to figure out those very difficult problems I, almost in desperation, started applying machine learning techniques, and that was at just the time when deep learning was starting to become possible.
And at just the right time we started to apply these technologies, and since then I’ve worked to develop the space of technology within AI, but more specifically looking at the ethical implications as well, and trying to create good rules around AI and how it’s used. And then also a little bit of socialization as well: how do we teach machines about our preferences and what we would like our engagements with them to look like?
John Koetsier: Interesting, and it had to be super cool to be a judge on the X-Prize as well.
Nell Watson: Absolutely. I’ve been selected as one of the judges working on the Avatar X-Prize and that is essentially kind of a hybridization of robotics and human intelligence. Essentially it’s like a telepresence robot that anyone can inhabit and then use that to explore another space.
Now this is very important, for example, for visiting family if you’re far away or doing a specific task in a dangerous location. But it’s also useful, especially in the time of these potential pandemics going on in the world: being able to do something or to have a presence without putting oneself at direct risk is very important.
And I suppose also it might help with things like the ongoing migration crises. For example, if you can enable people to participate in a strong economy, but they can take that money and spend it in their own homeland and improve their own homeland, then that offers so many more possibilities than just sort of desperately trying to cross borders and then ending up in a tricky economic position if you manage it.
John Koetsier: Yes, understandable. Interesting, very interesting. Let’s get to what we’re actually talking about today. Can you summarize the black box problem for us?
Nell Watson: Sure. Essentially a lot of neural networks, which are one of the most commonly thought-of methods of AI in recent years, are very stochastic, that is, they’re very random. If you put input into them, they don’t always come out the same way.
It’s a little bit like the Plinko game on The Price Is Right, you know, when you put the token in at the top and it goes [plink-plinking sounds] and you’re never quite sure where it’s going to land. Now with machine learning you maybe get a better understanding of where it might land, or where it’s probably going to land, but a lot of the time it’s kind of random. Sometimes you put the same input in and you get a slightly different result out, and that means it’s very difficult to understand exactly how these systems are making the statistical predictions that they are, and whether or not those predictions are based on reality, or whether they’re based on some kind of biased perception of reality which might be due to limited training data.
For example, the dogs-versus-wolves system that keyed on the weather. I’ve heard of a similar system that was trying to detect tanks, where the images of tanks were all taken at a certain time of year as well. And then when you bring these things out into the field they completely break.
Now it’s one thing if your system doesn’t work; it’s another thing if you think that it works but actually it’s working in the wrong way and may be forming very biased impressions as a result. And it’s a little bit more complicated even than that, because sometimes what bias means to a sociologist or even a legal scholar is a different thing from what a statistician would use that word to represent.
And sometimes the world is complicated and sometimes people are different in different ways. And yet say we tell a system that men and women are typically of similar heights and similar body strengths, when common sense tells us that, although there are outliers in both cohorts, on average men do tend to be taller and do tend to have more upper-body strength. If you try to lock the system into being perfectly equitable like that, it may go completely awry when it’s faced with a difficult reality that it cannot reconcile.
And so trying to mitigate bias in these systems is very difficult because we don’t always know how they’re working or even necessarily the best way to fix it.
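A quick illustration of the failure mode Nell is describing: below is a minimal, hypothetical sketch (not from the interview) of a classifier that latches onto a spurious “snow in the background” feature that happens to track the “wolf” label in training data, then falls apart when that correlation disappears in the field.

```python
# Toy illustration: a classifier that latches onto a spurious "snow" feature
# instead of anything about the animal itself. (Hypothetical data, for illustration only.)
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Training photos: label 1 = wolf, 0 = dog.
# Feature 0: "snow in background" -- spuriously present in every wolf photo.
# Feature 1: a genuine but noisy cue about the animal itself.
y_train = rng.integers(0, 2, n)
snow = y_train.astype(float)                   # snow appears iff the photo is a wolf
animal_cue = y_train + rng.normal(0, 2.0, n)   # informative, but weak
X_train = np.column_stack([snow, animal_cue])

clf = LogisticRegression().fit(X_train, y_train)
print("training accuracy:", clf.score(X_train, y_train))   # looks excellent

# In the field, snow no longer tracks the label.
y_test = rng.integers(0, 2, n)
snow_test = rng.integers(0, 2, n).astype(float)             # snow now unrelated to the animal
animal_cue_test = y_test + rng.normal(0, 2.0, n)
X_test = np.column_stack([snow_test, animal_cue_test])
print("field accuracy:", clf.score(X_test, y_test))         # drops sharply toward chance
```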
John Koetsier: Yeah. Talk about that, maybe diving into the black box problem a little more. Do you have some examples as well of what are the dangers of not knowing how a smart system arrived at its conclusions?
Nell Watson: The dangers are potentially pretty large, especially in the world of what’s called federated learning, and that means that different devices can be doing little operations on data and then sharing that around. So maybe even your phone might be doing some limited machine learning based on some of the data that it’s getting in.
For example, if Siri or Bixby or Alexa is trained on your voice, the training that teaches the system to recognize your voice specifically is typically done on the phone, but the resulting model might then be shared with other systems. And so understanding where data came from, who was responsible for it, how much trust to give it, those are all big problems in and of themselves. I think that it’s very important to know who is accountable for a system.
So who owns this thing, what its purpose is, and what processes it’s running. What hardware resources might this system have access to? Is it going to be able to look through your camera? Is it going to be able to mess with the battery on your device to tell you that you need to upgrade soon? These kinds of questions are filtering into the public consciousness and people are beginning to be aware of the ways that machines lie to us.
For example, the engine emissions fiasco of a few years ago, where a couple of different manufacturers engineered primitive autonomous systems that were designed to lie to humans. They were designed to figure out when people were testing for something and then to do things in a different way than they would typically do on the highway.
So machines are able to solve for all kinds of difficult problems, and that means that they’re also able to solve for obfuscation. They’re able to figure out how to lie to us or how to hide something in plain sight so that we don’t notice it, and that means that our impressions can be manipulated in different ways. Maybe even our sense of consent, or our beliefs about things.
All of this stuff is liable to manipulation by smart machines. And to my mind, the only real way to fight this stuff is by disinfecting it with lots of sunlight, that is, being able to peer into the system and get a better impression of what it’s doing, for what purpose, and who is benefiting from that.
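For readers who want a concrete picture of the federated learning Nell mentioned a moment ago, here is a minimal, hypothetical sketch of federated averaging: each device trains on its own private data, and only the model weights, never the raw data, are sent back to be averaged.

```python
# Hypothetical federated-averaging sketch: each "phone" trains locally on its own
# data and shares only model weights; the server averages them.
import numpy as np

rng = np.random.default_rng(42)
true_w = np.array([2.0, -1.0])   # the relationship every device is trying to learn

def local_update(w, X, y, lr=0.1, steps=50):
    """Plain gradient descent on one device's private data (linear regression)."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Simulate three devices, each with its own private dataset.
devices = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(0, 0.1, 100)
    devices.append((X, y))

global_w = np.zeros(2)
for _ in range(5):
    # Each device starts from the current global model and trains locally.
    local_ws = [local_update(global_w.copy(), X, y) for X, y in devices]
    # The server only ever sees weights, not data, and averages them.
    global_w = np.mean(local_ws, axis=0)

print("learned weights:", global_w)   # approaches [2.0, -1.0]
```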
John Koetsier: That’s a great segue because what I wanted to ask next is, so the average consumer … you, me, in our everyday life, not our working life as well … should people who are affected by smart systems be able to know what they’re dealing with, how they’re being profiled, or what the smart systems are concluding about them, how they’re being categorized?
Nell Watson: I think there should be a couple of different layers of information for those who either feel a need to dig into it or have the technical knowledge to make sense of it. But not everyone has the time or the inclination or the ability, and I think one important corollary of transparency is explainability.
It’s no good to have something which is overt if you can’t make sense of it. If it’s just like a bunch of hex characters or something, generally speaking very few people can understand that. So for transparency to be meaningful, it needs to be explainable as well.
And that’s why I would recommend boiling things down into as plain and simple language as possible, but also giving a grading to systems. So basically kind of like a report card, if you will, so that people can easily understand how much credence to give to the system, right? Because it may have an A in one area, but a D in another, and then maybe that gives you a reason to not trust it so much.
Another example I would give: in a lot of places in the world, I know in California, in the UK and other places, on the front of restaurants they have these hygiene ratings, and they’re mandated to have them on the door or the window. So you immediately know how much credence to give the food in terms of whether it’s going to be good for your tummy or not. And that’s the kind of thing we’ve been working on at the IEEE, to create something like that, so that you know right at the first encounter whether or not you want to continue to engage with the system.
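To make the “report card” idea concrete, here is a purely illustrative sketch; the grading dimensions and names are hypothetical and not the actual IEEE certification scheme.

```python
# Hypothetical "AI report card" -- illustrative only, not the IEEE certification format.
from dataclasses import dataclass

@dataclass
class AIReportCard:
    system_name: str
    transparency: str      # letter grades A-F for each dimension
    explainability: str
    data_provenance: str
    accountability: str

    def worst_grade(self) -> str:
        # 'F' sorts after 'A' alphabetically, so max() returns the weakest grade.
        return max([self.transparency, self.explainability,
                    self.data_provenance, self.accountability])

card = AIReportCard("example-recommender", transparency="A",
                    explainability="D", data_provenance="B", accountability="C")
print(f"{card.system_name}: weakest area graded {card.worst_grade()}")
```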
John Koetsier: That’s super interesting because algorithms and smart systems control our lives in so many different ways that we are not even aware of when we browse Facebook … what we see, what we don’t see, who sees what we post, who doesn’t see what we post.
When we search on Google there are smart systems that are interpreting not only what we’re posting or what we’re searching, but also maybe some things about us and where we are and other things like that, and then shaping perhaps the answers to a search query or perhaps our view of reality based on what information is coming in on those calculations.
And that’s pretty invisible to most people.
Nell Watson: Oh yeah. Oh yeah, it’s happening all the time. A lot of the big tech companies model us, right? They make a little voodoo doll if you will, of us.
John Koetsier: A mini-me.
Nell Watson: Yeah, and so they can kind of poke that in different ways and model its responses. And so sometimes these tech companies know what you’re going to do better than you may know yourself. For example, your navigational app on your phone understands that you’re gunning the car a little bit more than usual, so maybe you’re in a bad mood, and typically when you’re in a bad mood you go and you buy a big tub of ice cream at the local store, you know? And so maybe it gives you a coupon for a competitor’s brand who has paid some system for the privilege of being showcased … these are the sorts of things which are possible today and which are starting to be rolled out.
And we know that because there are lots of patents coming out of these companies, and you can kind of see the things that they’re working on. For example, potentially shadow banning people altogether. So shadow banning is this thing where you think that your posts are being read by other people, but then mysteriously nobody seems to respond to them, right?
John Koetsier: Yes, yes.
Nell Watson: And you’re kind of put in a corner, but at least if nobody’s reacted to your post in a week, you might have a guess that something’s gone wrong. But if you go to Reddit there’s something called Subreddit Simulator, and it uses this GPT-2 system that came out of OpenAI, which is basically a very, very sophisticated text generator. You can play around with it at AIdungeon.io, which is a really cool little game.
It’s kind of like a choose-your-own-adventure, except it creates it on the fly based on all of your responses. It’s amazing, but over at that Subreddit Simulator subreddit they’re basically simulating an entire Reddit experience with like hundreds of different posters, posting stuff and reacting to things that the others have posted. So what if you actually get put into a corner facing a bunch of bots which have been trained on your peer group? So they sound like your guys and they use the same kind of memes and things, and the same way of looking at the world ostensibly, but actually you may have been placed in a virtual ghetto with glass walls.
John Koetsier: Wow.
Nell Watson: And you would never even know it.
John Koetsier: Wow.
Nell Watson: And you know one of the big tech companies has a patent on that very idea.
John Koetsier: Wow.
Nell Watson: So, yeah.
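For reference, the GPT-2 model Nell mentions was publicly released by OpenAI, and a minimal sketch of sampling from it looks like this, using the Hugging Face transformers pipeline (the prompt is just an example, and this assumes the transformers library and PyTorch are installed).

```python
# Minimal GPT-2 text-generation sketch using the Hugging Face `transformers` pipeline.
# Requires: pip install transformers torch
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # for repeatable samples

# Generate a few continuations of an example prompt, Subreddit-Simulator style.
samples = generator(
    "I can't believe nobody is talking about",
    max_length=40,
    num_return_sequences=3,
)
for s in samples:
    print(s["generated_text"])
    print("---")
```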
John Koetsier: And you can see social benefit from that from certain angles, right? You can see some of, I forget what they call them, these men who have never had a girlfriend, and some of them had …
Nell Watson: Right.
John Koetsier: Say again?
Nell Watson: Yeah, Hikikomori, sort of … type.
John Koetsier: Yeah, yeah, that sort of thing. Exactly, and they’re involuntarily celibate is what they say. And they post these very hateful things in a lot of different places … 4chan, maybe Reddit, those sorts of things, and if you can at least let them feel like they’re getting it out there, maybe that could be good. But I mean there are other ways as well where people with maybe divergent political opinions that one platform doesn’t like … feel like they’re having conversations about something that’s actually real and meaningful, and it’s really fake.
Nell Watson: Yes, indeed. It’s potentially powerfully undemocratic, and worse, you know if you mess with people’s sense of reality, then honestly I think that if you do that, that’s kind of like a form of abuse called gaslighting.
And that can literally drive people crazy if they feel that something is messing with them. A lot of people may have severe mental distress because they have a delusion that they’re being persecuted, they have paranoid schizophrenia or something like that. But in actual fact, some kind of AI-related system might actually be literally out for you at some point, which is a very disquieting kind of thought. And I believe it was W. H. Auden who wrote that “Those whom the gods wish to destroy, they first make mad.”
John Koetsier: Yes.
Nell Watson: And so if those big tech companies don’t like our opinions on something, how will they be disposed to treat us?
John Koetsier: Exactly. Mad in the old fashioned sense, crazy, not just angry.
Nell Watson: Right.
John Koetsier: Although maybe angry as well, as we see a lot of what’s going on on social. So we’ve talked a little bit about some of the impact on individuals, but all of us individually make up a society, make up a group, right? What are some of the potential societal-level dangers of a lack of transparency in AI or autonomous systems?
Nell Watson: Well, I think that a lot of things such as science and medicine are particularly dependent on what’s called ‘epistemic virtue,’ which is basically knowing that you know what you know, not just inventing or confabulating things that sound favorable, but actually doing the hard work of examining something carefully and rationally to figure out how much truth is in it or not. And I can see in our culture the polarization we’re experiencing, the splitting into echo chambers and people pointing fingers at each other and saying ‘this is fake news … no, no, this is fake news.’ And if you look at the news of different political ideologies, they often look at the same situation from completely, bizarrely different perspectives, like night and day. And that’s already troublesome, very troublesome.
But when you add machine intelligence into that mix it gets even trickier, partially because machines can invent things or counterfeit things so perfectly and accurately now, but also because the algorithms which enable social media are driven by engagement.
John Koetsier: Yes.
Nell Watson: And outrage is the strongest form of engagement typically, and so they tend to whip people up even more, and on top of that they tend to amplify the least agreeable people of a certain group.
John Koetsier: Unfortunately.
Nell Watson: And then sort of push them in front of another group, which then sort of metastasizes the outrage even more. I don’t think our civilization has quite figured out how to deal with these technologies yet, and I think it’s going to take us a while to do so. And that’s why I find it so important to have good rules around them, and ideally, hopefully, some international coordination on that as well, but that isn’t easy either.
John Koetsier: Yeah, so let’s talk about that a little bit. If I’m a developer building smart systems, I want to do it in an ethical way. How can I build in transparency to my AI?
Nell Watson: Well, it’s important on a lot of levels … first of all, to build transparency into the organization that creates something. So to have an organization which actually respects the sharing of secrets. Some companies are notoriously secretive and even in the same department people aren’t allowed to talk about products or new projects with each other, you know, to try and keep [a lid] on things.
And that makes it even harder to understand how your work might affect something else in another system or to understand that something isn’t being done properly. You know, maybe some manager has pushed for something to be done very quickly, but in a very improper manner. And if there’s more eyes on those kinds of things then they’re easier to keep track of.
But more than that, I think transparency of what hardware or what resources a system has access to, what data it’s using, what data it might be sending to somebody else. As well as whether there’s been any vetting or certification of that technology: whether it’s using something highly proprietary, very sort of personal corporate black-box stuff, or whether it’s using more open-source kinds of technologies that have probably had more of a shakedown in the actual real world, with, again, more eyes doing a mutual safety check.
John Koetsier: Yes, yes. Interesting. So if we get that right, what’s that look like?
Nell Watson: If we get it right, the 2020s are going to be very interesting, in a good way, because machine intelligence has incredible opportunities to solve very difficult problems in the world. So essentially it is a technique of making sense out of chaos.
Whether that’s a chaotic environmental system like climate, or a chaotic social system like our deeply polarized societies, we can use these technologies to find all kinds of correlations and make predictions, and make sense out of things that are like a horrible puddle of mud. But we can only enjoy this if we know that it’s secure, if we know that the devices hooked into this network aren’t going to be hacked, which is a tremendous problem with the Internet of Things.
John Koetsier: Yes.
Nell Watson: Where you have light bulbs, for example, that are cool: you can mess around with your phone and change the color of the lighting and stuff. But those typically have a default password, and if you know what the default password is, your entire home network is easily hacked. If we can build encryption into machine learning that’s going to be a game changer, for example, that will enable one to do … basically to work with data which is encrypted, right? So to use encrypted data for machine learning, and that means that if somebody has your data, they don’t necessarily have all of your secrets. And in fact, using blockchain-type technologies, we can actually take away the key to access some data at a later stage.
John Koetsier: Sure. Sort of like differential privacy from Apple?
Nell Watson: Yes, something along those lines. Yes, there are a couple of different potential techniques, and there’s another company called Fitchain which I think is doing some interesting stuff as well. Basically they enable a market for data and a market for models, and so you can kind of put your hand up and say, ‘I have a problem, who can help me to solve it?’ And this means that you can deploy machine learning in an encrypted way in, you know, not in six months, but in maybe six minutes, for example.
John Koetsier: Yes.
Nell Watson: That’s going to be a game changer, but we’re only going to be able to enjoy the benefits of machine learning if we can do it in a secure way and in a way that is reasonably wise and ethical as well.
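Since John brought up Apple-style differential privacy earlier, here is a minimal, hypothetical sketch of one classic building block, the Laplace mechanism: calibrated noise is added to an aggregate statistic so that no individual’s record can be confidently inferred, while the aggregate remains useful. This is an illustration, not Apple’s actual implementation.

```python
# Minimal Laplace-mechanism sketch (one building block of differential privacy).
import numpy as np

rng = np.random.default_rng(7)

def private_count(values, epsilon=1.0):
    """Return a noisy count of True values.

    The sensitivity of a counting query is 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale 1/epsilon
    gives epsilon-differential privacy for this single query.
    """
    true_count = int(np.sum(values))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: how many of 10,000 users enabled some feature?
users = rng.random(10_000) < 0.3
print("true count:   ", int(users.sum()))
print("noisy release:", round(private_count(users, epsilon=0.5), 1))
```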
John Koetsier: Exactly, exactly. Excellent. Well, you know, I want to thank you so much for joining us on the AI Show. It’s been a real pleasure.
For all the listeners, whether you’re on the platform right now, or whether you’re listening on the podcast later on, please like, subscribe, share, or comment. Please rate it and review it. Thank you so much.
Until next time, this is John Koetsier with the AI Show.