Michael
I’m Michael Stevens.
Renato
I’m Renato Beninatto.
Michael
And today on Globally Speaking we have a very special event. We’re coming from the Jewelbox Theatre here in Seattle, Washington.
Renato
It’s our live show.
Michael
It’s our live show and we’re actually coming full circle on content.
Renato
The first episode that we recorded for Globally Speaking was “Will I have a job tomorrow?”
Michael
We were discussing the implications of machine learning and whether the machines would take our jobs. And today we have a guest whose expertise is in that area.
Renato
Why don’t we let our guest introduce himself?
Tripp
My name is Tripp Parker. I currently work for Amazon, in Alexa Health and Wellness, on artificial intelligence, machine learning and that space. I’ve been doing it for about 10 years, at Amazon and Microsoft.
Renato
So, artificial intelligence. That’s something that scares a lot of people. Is it going to take my job?
Tripp
It probably depends on how good you are at it! I think a better way of looking at whether or not there’s going to be a particular type of job is not to look at jobs as a whole but at the tasks within a job. Being an accountant today looks very different than it did 10 or 25 years ago, and that will continue to be the case. I don’t think we can say X job will go away. I think it will just look very, very different as technology gets applied in different ways to those jobs.
Renato
So, the science changes, the accounting practice changes, and the technology follows those changes, expedites them and creates more opportunities.
Tripp
Yeah, and the technology also can solve certain problems that could only be solved by a human in the past. And so, then humans need to adjust and say “Okay, how do I keep myself looking forward? What are the new tasks that need to be done as this technology gets applied in different ways?”
Michael
So you’re working in health and wellness. That’s a pretty big task for a machine. Can you take us back a little bit through some of the research you’ve been a part of, maybe even before Amazon? How is it that machines learn? How do we find confidence in what they’re telling us?
Tripp
You can look at a machine learning system as something that’s just trying to make predictions. It’s trying to establish patterns between input and desired output. And that’s all it’s trying to do: it’s trying to establish correlations and then make predictions with various levels of confidence. You probably saw something like this if you watched Watson battle Ken Jennings on Jeopardy. You could actually see Watson’s confidence level whenever it tried to answer a question. It’s not that the machine understands anything; it’s that it’s trying to establish patterns in the question, the structure and that kind of stuff, and in what it anticipates might be a good answer. And so sometimes it’s highly confident, sometimes it’s not. That’s all machine learning and AI really does. It’s a fancy prediction system. That’s where the research is: you just find better and better ways of making predictions.
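To make that concrete, here is a minimal sketch of a prediction system that reports its confidence. The library (scikit-learn) and the toy features are our own assumptions; the episode names neither.

```python
# A toy "fancy prediction system": it maps inputs to desired outputs and
# attaches a confidence score to every guess, much like Watson's visible
# confidence level on Jeopardy. The features here are invented.
from sklearn.linear_model import LogisticRegression

# Training data: inputs (two made-up image features) and desired outputs.
X_train = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.2], [0.8, 0.3]]
y_train = ["wolf", "wolf", "dog", "dog"]

model = LogisticRegression().fit(X_train, y_train)

# The model doesn't "understand" dogs or wolves; it reports a probability.
for label, p in zip(model.classes_, model.predict_proba([[0.5, 0.5]])[0]):
    print(f"{label}: {p:.2f} confidence")
```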
Michael
One of the areas where a lot of the research began to see big benefits was around images. Do you have any stories around how machines pick up things from images?
Tripp
One of the first funny stories about artificial neural nets trying to do image recognition starts with a basic question: how do you tell the difference between a dog and a wolf? That’s something you would probably struggle to describe.
Michael
And yet a small child could likely do it.
Tripp
What you would do with that small child is never describe how to do it, just keep pointing: “that’s a dog; that’s a wolf; that’s a dog; that’s a wolf.” And over time, the child will learn the difference. I have a three-year-old and it’s the same thing with tree versus bush or hedge or whatever: you just point, you give it names and over time the child learns. Artificial neural nets do something very similar. So, there’s this very famous example where researchers tried to train an artificial neural net to tell the difference between a wolf and a dog and gave it a bunch of pictures. Over time, they started giving it new pictures and it started guessing correctly. They were like, great, that’s awesome! It knows the difference between a wolf and a dog. We don’t know what exactly it knows, but it keeps predicting correctly. And then eventually it started getting them wrong, and you’re like, why? Researchers ended up figuring out that people like to take pictures of wolves when they’re on snow. And so, what the AI was actually looking for was snow. If there’s snow in the picture, you could put a Chihuahua in it, and it’s going to be like, “That’s totally a wolf.” And you could put a really mean-looking wolf in a house and it would be like, “That’s totally a dog; don’t worry about it.” It’s a fancy prediction algorithm and you don’t know what exactly it’s looking at and what patterns it’s establishing. It’s really hard to verbalize.
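A toy reconstruction of that failure mode, with invented features and data: the “snow” feature separates the training labels perfectly, so the model latches onto snow instead of the animal.

```python
# Spurious correlation, wolf-vs-dog style (hypothetical features and data).
# In training, every wolf photo happens to contain snow; animal size overlaps
# between classes, so snow is the only clean pattern the model can learn.
from sklearn.tree import DecisionTreeClassifier

# Features: [snow_in_picture, animal_size]
X_train = [
    [1, 0.9],  # wolf on snow
    [1, 0.5],  # wolf on snow
    [0, 0.5],  # dog indoors
    [0, 0.3],  # dog indoors
]
y_train = ["wolf", "wolf", "dog", "dog"]

model = DecisionTreeClassifier().fit(X_train, y_train)

print(model.predict([[1, 0.1]]))  # Chihuahua on snow -> ['wolf']
print(model.predict([[0, 0.9]]))  # mean-looking wolf in a house -> ['dog']
```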
Renato
From what I remember, if you let machine learning algorithms go by themselves, they become racist.
Tripp
There are other examples of this, like when people try to predict recidivism for parole boards. For people in prison, we try to predict how likely they are to recommit a crime. You give the system all the data and it will actually end up being racist. It’s been shown that it will look at African Americans unfairly: that one factor makes them more likely to be predicted to recommit offenses. The AI is just looking for patterns in data and that’s it. It’s very amoral in that sense.
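One way such unfairness gets surfaced in audits (the episode doesn’t specify a method, and the numbers below are invented) is to compare error rates across groups, for example the false positive rate: how often people who did not reoffend were flagged as high risk anyway.

```python
# Comparing false positive rates across two groups (hypothetical records).
# Audits of real recidivism tools found one group wrongly flagged as
# "high risk" far more often among people who never reoffended.
records = [
    # (group, predicted_high_risk, actually_reoffended)
    ("A", True, False), ("A", True, False), ("A", False, False), ("A", True, True),
    ("B", False, False), ("B", False, False), ("B", True, False), ("B", True, True),
]

def false_positive_rate(group: str) -> float:
    # Among people in `group` who did NOT reoffend, how many were flagged?
    negatives = [r for r in records if r[0] == group and not r[2]]
    return sum(r[1] for r in negatives) / len(negatives)

for g in ("A", "B"):
    print(f"group {g}: FPR = {false_positive_rate(g):.2f}")  # A: 0.67, B: 0.33
```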
Michael
Your background is interesting; you’re coming from a little bit of a different perspective. We have people in our industry with a computational linguistics background, we have coders who come to machine learning, but you look at the applications out there a little differently. Can you talk about that background?
Tripp
My background is computer science, computer engineering, but I was also a philosophy major, and so I look at it from a philosophical, top-down view: what kinds of problems should we as people, as humans, be solving, and how do we apply our values to those problems? How do you balance your ethics or your theology against data-ism? Right? Are we just going to follow the data or are we going to follow our values? My answer is: you never just follow the data. There’s an infinite amount of data. There’s an infinite number of facts in the world. You always interpret them through some prism, some goal. And so, what I really try to do (and that’s why I like healthcare) is really think about the humans and what it is I’m actually trying to solve. From a philosophical standpoint, you start there, then you look at the applications: how can I apply this methodology to solve those problems? But you start with a prism, you start with a priority structure. Your brain does; you can’t help it.
Renato
Stephen Hawking died recently, and in his final writings he warned humanity about the threats of artificial intelligence. You have the most advanced minds in our industry warning us about the challenges of artificial intelligence. Is that something that concerns you? You said that artificial intelligence is amoral, so how do you integrate that element into the business side of things? Because business is amoral too.
Tripp
Sure, yes. Business has incentives; AI only has incentives insofar as we give it incentives. It’s an interesting question. It’s actually a hard technical problem to get AIs to do what you want them to do. And I think the concern of the Stephen Hawkings and the Elon Musks of the world is not so much that the machine’s going to wake up and hate us. It’s more that we can’t reliably get them to do what we want them to do. The reason is that you have a lot of what we call tacit knowledge. You have a lot of concerns and values that you understand but can’t articulate. A philosopher would call this Polanyi’s paradox, named after the philosopher Michael Polanyi, who pointed out that you know more than you can say. For instance, you know the difference between a dog and a wolf, but you probably can’t really describe it. You know more than you can say, and so if we try to build an AI, we can’t really say what we want it to do; we can only kind of describe it with various incentive structures. If we give it something really important, it might do something like think the Chihuahua on snow is a wolf, because it’s learning something different from what you have in your head.
Renato
One approach is to look at it as a tool, as something to contribute and help humans make decisions: a decision support system, an information support system, not an end in itself.
Tripp
There are two tacks, two different kinds of applications for it. There’s the kind of application where it can assist a human in performing their job better by noticing patterns that the human would never be able to notice. And this is actually a really great application for it. Imagine if an Echo device or something like that could be sitting next to a doctor, listening in, gathering all the data and making recommendations to a person who could then make decisions on it. That doesn’t really scare most people, right?
Renato
It’s a support system for an expert.
Tripp
Correct. Think of it as just a better version of your iPhone. It’s something that you use, and it provides information so that you can do better. However, it’s not the thing making all the decisions. So that doesn’t scare people, and that’s one application everyone’s cool with. But there are certain applications (and this is where healthcare and other mission-critical areas worry people) where, if we artificially inject humans into the system, we limit how good the AI can be. I don’t want every decision an autopilot flying a plane wants to make, every adjustment to the speed and the wings and everything else, to go through a human: “Is this okay? Good? Okay.” “Is this okay? Good? Okay.” You don’t want that going through a human because it’s too slow. We’re limiting what the AI can do. If I have to run it through a human, we’re eliminating the value of the AI. These are two different things that have to be tackled differently. There’s technology that can help you be better at what you do, and then there’s technology that may simply be better than you at doing that task. And so, I think you have to take it on a task-by-task level.
Michael
What are some of the more dubious applications you’ve seen of this?
Tripp
99% of AI and machine learning is used for advertising. 99% of these techniques are there to get you to click on ads, engage with social networks and those kinds of things. That’s the vast majority of it. Everything else is specialized niche areas. I’ve worked on some of those applications. Now I work on healthcare, which I feel much better about when I go to sleep at night. There are some really nefarious things that go on in advertising and those applications. I’ve gotten very good job offers from both political parties in the United States to help them build better systems for political targeting, classification and that kind of stuff, because if I can swing half a percent in Wisconsin, I might be able to actually swing an American election, and that’s scary. Some of the more nefarious applications of this that I’ve seen are in classifying users. It’s not so much about lying to them; it’s about selectively providing them with information. You could even take certain sets of information out of an article, so the person will walk away from it able to have an intelligent conversation about X political topic, or whatever, but missing something key that would have changed their opinion on it. These are the kinds of applications of AI that I really, really, really hate and really worry about. Just by knowing who you are, where you come from, your gender, your sex, all these kinds of things, I can predict what’s going to change your opinion about X. And if I can do that, I can manipulate you to do Y. I really worry about this.
Renato
Who is manipulating? Is it the artificial intelligence or is it the human behind the artificial intelligence?
Tripp
In that case, that would be a human that is explicitly giving that goal. In AI or machine learning, you establish a cost function, so you’re giving it the goal that you want. I want this kind of person to not vote. I want this kind of person to vote this way. So, it’s a person doing it.
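In code, the cost function is exactly where the human-chosen goal enters. A minimal sketch (the line-fitting task and all numbers are invented for illustration): the same optimization machinery chases whatever objective someone writes down.

```python
# The "goal" lives entirely in the cost function a human defines. Here the
# invented goal is fitting y = w * x; swap in a different cost and the same
# descent loop pursues a completely different objective.
def cost(w, data):
    # Mean squared error: the human-chosen definition of "good".
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def grad(w, data):
    # Derivative of the cost with respect to w.
    return sum(2 * x * (w * x - y) for x, y in data) / len(data)

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # roughly y = 2x
w = 0.0
for _ in range(200):
    w -= 0.05 * grad(w, data)  # step downhill on the cost surface

print(f"learned w = {w:.2f}")  # ~2.04: the system met the goal it was given
```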
Renato
You’re talking about an adversarial system. You mentioned getting offers from both political parties. You have two sides fighting with each other and who wins? The one who gets the smarter engineer.
Tripp
That’s why it’s an arms race in a lot of these scenarios. One of the things that worries me is that it is an arms race and we can’t slow down. No one wants to slow down and start thinking, “okay, what is the safest way of proceeding in some of these areas?” Some areas I don’t have worries about; localization would be one of them. But I do have worries with political advertising and with cybersecurity. It is an arms race and you can’t afford to slow down and ask questions. You want to keep pushing forward because there’s no prize for second place.
Renato
Yeah.
Tripp
You lose and you’re done, right? You’re out of power, someone else wins. I really worry about the ethics of AI in those scenarios because there are perverse incentives.
Michael
You do have a sense of the problems AI is trying to solve. Where does language fit in the hierarchy of complexity?
Tripp
The hardest things for AI to solve are the really creative problems, like creating music that you’re going to like to listen to. Right? People are working on this, by the way; it’s not something I think is impossible. But that is the hardest problem because you can’t train really well on it; it’s a fundamentally creative problem, and that’s part of why it’s really rare that good music is made. Otherwise, we’d all be making music. Below that, you have things that are a combination of repetitive and creative, and that’s where I would put really good localization and really good translation. Some things you can translate really regularly because it’s a common sentence, a common phrase, a common expression. But think about the best novels you’ve ever read. It’s more than that, too. Right? There’s an art to it. And so, for really good, reliable localization, where the AIs just do all of it, there’s going to be a really high bar to get rid of humans in that process. It’s going to be more like a tool for very talented people, a starting point, because there’s an art to it and that art is hard to pin down. That’s where I would put localization. And then there are the very low-level tasks, like calculus. That stuff I’m not worried about. Facial recognition. That stuff is easy. There’s no art to that. That’s just pure math.
Renato
It’s easy to recognize a face but it’s hard to say whether it’s beautiful or ugly, right?
Tripp
Yeah. Or describe it.
Renato
That’s creative.
Michael
We’re going to take some questions from the audience.
Question
This is Mark, I’m from WordBee. Elon Musk is probably one of the most successful and smartest business people around. His biggest concern in the world right now is AI. He went to President Obama and said, “I’m concerned about AI. I think it’s going to be the end of our species. It’s pretty serious.” But let’s break it down to our group right here. Is AI going to replace translators?
Tripp
I think what it’s going to do is replace certain tasks that a job has to perform. And this is not specific to translators; lawyers, accountants, doctors, nurses and software engineers are going to have to worry about that. There’s definitely a lot of work that translators are doing today that AI is going to take away. Will there still be translators? Yeah. There are still horses; we don’t use them for all the same tasks we used to, but they’re still here. It’s the same thing. Your job will still be around. It will probably look a lot different in ten years than it does today. So, stay at the front of that wave if you’re worried about it.
Question
I have a question, having a translation background myself. Say ten translators are given a sentence to translate, and they’re all good translators. You’re going to get ten correct answers. And if you ask somebody who’s not involved in translation to evaluate what they’re reading in the native language it’s been translated into, whether it’s good or not is going to be based on their own value system. Would you say that AI would itself evolve to the point where it has its own value system?
Tripp
We call that annotator agreement. Annotators would look at the translation, say, “this is good; this is bad,” and mark it up. The AI would then include that in the feedback loop as it keeps translating, and so it would learn the wisdom of the crowd. It would not develop its own value system; it would learn the crowd’s. We might even be able to classify you versus someone else: this kind of person is going to like this kind of translation; this person from Prague is going to want a different kind of translation than this other person in a suburb outside Prague, maybe. You could classify, and then over time, if you have enough data, that’s probably how you would do it. In a lot of contexts, you would call that inter-annotator agreement, and it would learn your values.
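A small sketch of what measuring that agreement can look like (the labels below are invented): Cohen’s kappa scores how often two annotators agree beyond what chance alone would produce, and scores like these feed the feedback loop Tripp describes.

```python
# Inter-annotator agreement on ten translations (hypothetical labels):
# Cohen's kappa = (observed agreement - chance agreement) / (1 - chance).
from collections import Counter

a = ["good", "good", "bad", "good", "bad", "good", "good", "bad", "good", "good"]
b = ["good", "bad", "bad", "good", "bad", "good", "good", "good", "good", "good"]

observed = sum(x == y for x, y in zip(a, b)) / len(a)

# Chance agreement: the probability both annotators pick the same label at
# random, given how often each label appears in their individual judgments.
ca, cb = Counter(a), Counter(b)
chance = sum((ca[l] / len(a)) * (cb[l] / len(b)) for l in set(a) | set(b))

kappa = (observed - chance) / (1 - chance)
print(f"observed = {observed:.2f}, chance = {chance:.2f}, kappa = {kappa:.2f}")
```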
Question
This is Geoff from Smartcat. I was just wondering if you think AI is going to give people in our industry more work or less in the future, and to what extent and in what volume?
Tripp
I would be really reluctant to give a prediction there; I don’t really have a perspective. I would be surprised if in the future there are fewer jobs overall in any context. However, the types of jobs and what’s done in those jobs will look very different, and so they might not be the same people doing them. There are a lot of people who do this kind of study, breaking down the tasks and seeing how much is suitable for machine learning and how much is not. I would have to know more; I’m not comfortable making a prediction.
Renato
As a general rule, knowledge work is increasing and manufacturing, farm work and other manual work are decreasing, and what we have is this gap. There is a scarcity of knowledge workers, and that’s the challenge for advanced economies.
Tripp
One of the questions with AI is whether or not it’s going to do to knowledge tasks something similar to what we did with manual tasks, where manual tasks are going away and manufacturing jobs are decreasing because we’ve automated them. Are we going to do something similar with AI? It’s possible, but there are a lot of things we’re not really close to automating. Anything easily cognitively repetitive, sure, we can probably automate. Anything that’s not, anything that’s creative, anything that’s an art, anything that’s interpersonal, that gets really hard to automate in any kind of cost-effective manner. There’s a lack of data, and if I don’t have the data, I can’t build a model on it.
Michael
Thank you, Tripp!
Renato
Thank you very much. [Applause and cheers]