Artificial Intelligence Survey (33:27), with Angelina Fabbro
Angelina Fabbro, Developer Advocate at Mozilla, talks about what makes us human in an insightful overview of existing projects to create artificial intelligence in our lifetimes.
[MUSIC]

>> Okay, to close out our surveys, I felt it most appropriate, as we all work toward sentient robots for the rest of the sessions, to finish it out properly with one of my personal favorite submissions that came in from the call for makers... call for speakers: Angelina Fabbro, talking about robot ethics, AI, and all the sundry thereafter.

>> Yep, sounds about right. So let's do this.

[MUSIC]

>> Do you like our owl?
>> It's artificial?
>> Of course it is.
>> Must be expensive.
>> Very. I'm Rachel.
>> Deckard.
>> It seems you feel our work is not a benefit to the public.
>> Replicants are like any other machine. They're either a benefit or a hazard. If they're a benefit, it's not my problem.
>> May I ask you a personal question?
>> Sure.
>> Have you ever retired a human by mistake?
>> No.
>> But in your position, that is a risk.
>> Is this to be an empathy test? Capillary dilation of the so-called blush response? Fluctuation of the pupil? Involuntary dilation of the iris?
>> We call it Voight-Kampff for short.
>> Mr. Deckard, Dr. Eldon Tyrell.
>> Demonstrate it. I want to see it work.
>> Where's the subject?
>> I want to see it work on a person. I want to see a negative before I provide you with a positive.
>> What's that going to prove?
>> Indulge me.
>> On you?
>> Try her.

>> So, what is it that makes us human? Sounds kind of like a fluff question, but I promise you that it's not. Let's take a look at what happens in the next scene of this movie. How many people know what this movie is? Like, everybody, right? Okay, for the few of you that don't, this is the movie Blade Runner. It sort of set the genre for cyberpunk. I grew up with it as a film, and it's part of what inspired me to make robots. But let's just take a look at this next scene and ask some more questions.

>> Think.
>> She's a replicant, isn't she?
>> I'm impressed. How many questions does it normally take to spot one?
>> I don't get it, Tyrell.
>> How many questions?
>> Twenty, thirty, cross-referenced.
>> It took more than a hundred for Rachel, didn't it?
>> She doesn't know.
>> She's beginning to suspect, I think.
>> Suspect? How can it not know what it is?
>> Commerce is our goal here at Tyrell. More human than human is our motto. Rachel is an experiment, nothing more.

>> So, in this film we have this idea of a replicant, which is some sort of android or humanoid robot that has sophisticated enough emotional intelligence and decision-making capabilities, and a chassis, or body, that is convincingly human. So much so that replicants of a particular version, the Nexus-6, are so convincingly human that people like Deckard, this gentleman who is trained to administer a test to find out whether he's dealing with an AI or a human, find it really, really difficult to tell which it is. In this particular case he does, after hundreds and hundreds of questions in this empathy test, figure out that this woman is a replicant. But he also makes the observation that this android has no idea that it is not a person.
So I want to go back to the question I was asking. I heard somebody scoff in the audience when I asked what it is that makes us human, because it sounds like a really fluff question, but I mean this as a very, very serious question. What is it that makes us human and separates us from animals? The most common answer people usually give, at least in the space of academia and intellectual pursuits, is generally language. Actually, there's not too much that separates us from animals, but our sophisticated use of language, and all of the semantic context that comes with it, is probably the most significant thing.

And so how do we know that we are humans and not machines? In this particular clip I showed, Rachel, the android, is unaware that she is not a human but one of these replicants, these androids that humans have actually made. So how do we know that we are humans and not machines? What is the possible evidence that I, standing right here, am a human being? Someone put up their hand and tell me why they think I'm a human being and not an android like Rachel.

Okay, right there: piercings. So the assumption is that androids would not have blood, or not have punctures. But I think we could probably make an android that has piercings. Creativity, that's an interesting one. So the idea is creativity; that's one. Anybody else?

Never seen a robot that looks like a person. Okay. So there are some robots, and I'm going to show some of these androids, that are starting to look more like people. But so far we have that uncanny valley: people are trying to make robots that look like humans, and we get close, but almost too close and too perfect, and it gets a little bit weird. Anybody else? Yes.

>> Nonverbal communication.

>> Nonverbal communication. So my gestures and the way that I'm talking seem distinctly human to you; some level of social competence, I would say. Pick one more, just for fun. Chris, why not? Okay.

>> Randomness.

>> Randomness, okay. You've done this before, I see; he's thought about this.

So when we're thinking about what it is that makes us human, generally we land on these ideas of emotional intelligence, social intelligence, decision making, and a lot of the central issues surrounding our study of consciousness. That is what we usually say makes us human, and language is a big feature of it.

So, my background: before I worked at Mozilla, and before I was doing full-stack development and all that, I went to school for cognitive science, where I studied neuroscience and artificial intelligence. For those of you who don't know what cognitive science is, it's the interdisciplinary study of mind/brain and consciousness. I say mind/brain because, for those of you who are dualists in the room, I hate to disappoint you, but all of the evidence weighs in favor of materialism. Probably mind and brain are the same thing.
Probably our research over the years is going to demonstrate whether this is true or not. Probably consciousness is an emergent property of the relationships of neurons and things like that in the brain. But we're still all working this out.

So, a little bit of interesting stuff about that. In cognitive science I concentrated on neuroscience and AI, but it's an interdisciplinary field, so we get aspects of psychology, philosophy, linguistics, and anthropology. At least, when you go to Wikipedia, that's the image in the article, so I figured I'd give people the light version; there can be other fields that contribute as well. And I do think interdisciplinary work is the approach for solving most things when it comes to people, especially because we are not silos. We can't just look at psychology and pretend that's going to explain all of us. We can't just look at neuroscience and pretend that's going to explain consciousness. The collaboration of these fields is really working towards the goal of explaining consciousness and understanding it.

A big thing we do over in the merged field of artificial intelligence and neuroscience, called computational neuroscience, is try to make representations of the mind, and that's a really big thing. There are a few labs, on the east coast and in Canada, that are right now writing brains in Python, trying to model neural connections and do that sort of stuff.
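To give a flavor of what that kind of modeling looks like at its very smallest, here is a minimal leaky integrate-and-fire neuron in Python. This is a textbook toy, not code from any of the labs mentioned in the talk, and every parameter value is illustrative.

```python
# Minimal leaky integrate-and-fire neuron: the simplest kind of model
# used in computational neuroscience. All parameters are illustrative.

def simulate_lif(input_current=1.5, duration_ms=100.0, dt=0.1,
                 tau=10.0, v_rest=0.0, v_threshold=1.0, v_reset=0.0):
    """Simulate one neuron; return the times (ms) at which it spikes."""
    v = v_rest
    spike_times = []
    for step in range(int(duration_ms / dt)):
        # The membrane potential leaks toward rest and is driven by input.
        v += (-(v - v_rest) + input_current) * (dt / tau)
        if v >= v_threshold:          # Threshold crossed: emit a spike...
            spike_times.append(step * dt)
            v = v_reset               # ...and reset the membrane potential.
    return spike_times

# A stronger input current produces more frequent spikes.
print(simulate_lif(input_current=1.5))
```

Real brain models wire thousands of these (or richer variants) together; the point here is only that "modeling neural connections" starts from equations this small.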
So the central question of cognitive science is: what is consciousness? And if we figure out what consciousness is, can we give consciousness, or program consciousness, into something like a robot or an android, or something that is a machine?

So, the Human Genome Project: who's familiar with that? Okay, most people. That project started in 1990, and I think when people estimated how long mapping out the genome would take, they thought it was going to take 30, even 50 years; even with all the collaboration, they thought it was going to take a pretty long time. But it was actually complete about 13 years later, in 2003. Now, although that's done, we're still figuring out what to do with the genome. But I do know that now, and I did do this, I can pay about a hundred bucks, send away a swab from the inside of my mouth, have my genome come back to me, look at the data online in a web interface, and it will tell me that I have a predisposition for X, or a resistance to Y, based on my genetics. And of course, genetics are not necessarily deterministic, which is why I use the word predisposition, but I think it's really fascinating that we underestimated our capacity for progress in this respect.

Almost as a next step past the Human Genome Project, we have the Human Connectome Project and the Brain Activity Map project. These are efforts to essentially map all the neural connections in the brain. It's impossible to do completely, because everybody's brain has different neural connections, but we can definitely see trends. When we do things like functional MRI imaging, which is when you see those pictures of brains with splotches of color, activated regions, that shows the blood flow to active regions of the brain while people are doing certain tasks. Recently, just this past year at Carnegie Mellon University, researchers were able to show that even though there are differences in the wiring of brains between people, when people are happy, these particular areas light up, and when people are really sad or disgusted, those areas light up. So we're making a lot of progress in this field.

And then, to extend beyond that: who here is familiar with Ray Kurzweil? Yeah. Did you read his earlier stuff and find it kind of sucked? I just really wasn't into the singularity stuff and all that; I kind of thought it was all bunk. But then I saw this book when I was at Chapters, which is, I guess, some Canadian chain, and I thought, this seems like pop cognitive science, and pop cognitive science usually enrages me, so I bought it as my plane reading. Because I want to at least think critically about this stuff. And I read his book and learned a lot about Ray Kurzweil. He's actually at Google now, I think, and his work on natural language processing is something that came to be an asset for Siri, and I think Google Voice now as well. He does a really excellent job in his book of looking at some of the ways cognitive science has contributed to our understanding of the mind.

In particular, he has a pattern recognition theory of mind. That book I really recommend; it's been vetted by somebody who knows cognitive science at least somewhat, and it's not ridiculous. He talks about the pattern recognition theory of mind, hierarchical hidden Markov models, and a lot of ideas that basically amount to the fact that we human beings are pattern-matching machines. And after reading his book, I thought: hey, this correlates really well with a lot of the stuff going on over in neuroscience. And it correlates really well with the kinds of things we can do over in computing, which is where a lot of his work is, in software for Siri.
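To make the hidden-Markov-model idea a little more concrete, here is a toy recognizer in Python using the standard HMM forward algorithm, which scores how well a sequence of observations fits a model. Kurzweil's hierarchical HMMs stack many such recognizers; the states, observations, and probabilities below are invented purely for illustration.

```python
# Toy hidden Markov model "pattern recognizer". The forward algorithm
# computes P(observations | model); a higher score means the observed
# sequence looks more like the pattern the model encodes.

def forward(observations, states, start_p, trans_p, emit_p):
    """Return the total probability of the observation sequence."""
    # alpha[s] = P(observations so far, currently in hidden state s)
    alpha = {s: start_p[s] * emit_p[s][observations[0]] for s in states}
    for obs in observations[1:]:
        alpha = {s: sum(alpha[prev] * trans_p[prev][s] for prev in states)
                    * emit_p[s][obs]
                 for s in states}
    return sum(alpha.values())

# Invented model: is the speaker "greeting" or "closing" a conversation?
states = ["greeting", "closing"]
start_p = {"greeting": 0.8, "closing": 0.2}
trans_p = {"greeting": {"greeting": 0.6, "closing": 0.4},
           "closing":  {"greeting": 0.1, "closing": 0.9}}
emit_p = {"greeting": {"hello": 0.7, "bye": 0.3},
          "closing":  {"hello": 0.2, "bye": 0.8}}

print(forward(["hello", "hello", "bye"], states, start_p, trans_p, emit_p))
```

Speech recognizers of that era chained models like this over phonemes and words, which is roughly the sense in which "we are pattern-matching machines" maps onto code.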
So my hypothesis, after following this stuff and after examining how we underestimated our progress on the Human Genome Project, is this: I actually think that in the next 30 to 50 years, probably within my lifetime, we will have mapped out the likely neural connections for most people; that we'll have better ideas about what does and doesn't make the emergent property of consciousness possible; and that we'll be trying to actually program that. We will actually be trying to program androids and human-like robots. So the question being: can we make non-human machines that are indistinguishable from humans?

Because that's really what the story of Blade Runner is about, and a lot of other sci-fi. In fact, if you go look up robots on the TV Tropes wiki, there's something like 41 robot-related tropes, and the most popular one is robots that end up really being like humans: we treat them poorly, and then they rebel and take over the world.

So my talk today is to tell you a little bit about cognitive science, because this is an excuse for me to do that. Normally when I give talks, I talk about full-stack development for the web and stuff like that. But this is really interesting to me, because I think people believe this is a purely fictional thing that's not going to come to pass. I'm actually pretty confident, given how fast progress is being made, that in our lifetime we will start to see androids integrating themselves into society. And I do think we need to start thinking about what that means for us, and what we teach our children about it, as well.

So, the things we need in order to have a robot that's like a human, drawing from some of the suggestions of what makes me human. The first, of course, is convincing language use. I feel like that one we can kind of do with software a little bit, and I'll show you an example in a moment. Then convincing intelligence and decision making: those of you programming AI for your robots here this weekend are already going to do some pretty sophisticated intelligence and decision making, but in this context I mean like a human, which would actually mean some randomness, or some flaws, in order to be realistic. I said the same word twice, so that's embarrassing, but: convincing emotional and social interactions. You see what I mean about the error. That's planned; I planned that. Running for the door. A convincing body or chassis: the frame of the robot should look convincingly human. And I also think one criterion is that the robot should be fully autonomous, which means a robot should know when it needs to replenish its power source and have some way to do that. For us it's: I'm hungry, and then we go eat food. For the robot it might be: my batteries are low, plug myself in, or something like that.

So, "artificial intelligence," in quotes. I think as we move towards understanding mind, brain, and consciousness, and start bringing more and more of that into computing, we start emulating humans in software and hardware more and more. It doesn't just stay programming artificial intelligence: if we're emulating ourselves, I think we're really just programming intelligence. I think we're almost putting ourselves down if we say "artificial" and then go and try to make something like us. Just a point of contention that I have, in any case.

We have to talk about the Turing test, necessarily. How many people are familiar with the Turing test?
Okay, so not as many people as knew Blade Runner and the Genome Project. Alan Turing, being one of the fathers of modern computing, was also really involved with the early AI hypotheses. The idea of the Turing test is that you have one person on one side of a screen, and either a computer or a human on the other. An artificial intelligence, a computer, an android, whatever, passes the Turing test if the person on the other side of the screen can't tell whether they're talking to a computer or a human. There's an XKCD comic which pokes a little bit of fun at that.

So let's go back to convincing language use. Let's think about all these things and what we can do to combine them to pass, say, the Turing test. For convincing language use, I think a pretty good example is actually something like Cleverbot. I had a small conversation with Cleverbot earlier. I said, hey Cleverbot, say hello to RobotsConf. It said: you're a robot. It was onto me too. [LAUGH] And I said, don't be silly, I'm a human. It said: you're a silly robot. Now, Cleverbot is just an AI, a sophisticated AI that learns from the responses it gets. And you can actually have some pretty interesting conversations with it, where you ask it things and it says things back to you, and it can tease you. If you start off a song, like The Fresh Prince of Bel-Air, it totally knows all of The Fresh Prince of Bel-Air. This thing has actually won lots and lots of awards as a sophisticated, pass-the-Turing-test kind of example. Although, if you experiment with Cleverbot on your own, you'll find that after about four or five minutes of conversation you'll say something and Cleverbot will just give you a nonsensical answer. Like, let's see: do you like grapefruits? We'll make it do that. Do do do. "Why do you ask?" Okay, this can go on; that one was actually a sensible answer. So if you play with Cleverbot in your own time and you haven't already, do it for about four or five minutes and start asking it some really hard questions about emotions. Try to contradict yourself, and contradict Cleverbot, and you can get it into a state where you go: wait a minute. Definitely, definitely not a human. And there are actually whole websites dedicated to people posting their Cleverbot conversations, like, wow, look at how real this conversation felt to me. I find that particularly fascinating, because it's really not hard for people to project themselves onto this sort of stuff.
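For a sense of how shallow "convincing language use" can be under the hood, here is an ELIZA-style pattern-matching responder in Python. Cleverbot is far more sophisticated, since it learns from millions of real conversations; the rules below are invented for illustration, but the failure mode is the same one described above: when nothing matches, out comes a canned non-answer.

```python
# ELIZA-style chatbot: match the user's message against a few regex
# rules and echo pieces of it back. All rules here are invented.
import random
import re

RULES = [
    (r"\bi am (.+)",   ["Why do you say you are {0}?",
                        "How long have you been {0}?"]),
    (r"\bi feel (.+)", ["What makes you feel {0}?"]),
    (r"\byou are (.+)", ["What makes you think I am {0}?"]),
    (r"\?$",           ["Why do you ask?", "What do you think?"]),
]
# When no rule matches, fall back to a vague non-answer.
FALLBACKS = ["Tell me more.", "I see.", "Go on."]

def respond(message):
    text = message.lower().strip()
    for pattern, templates in RULES:
        match = re.search(pattern, text)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACKS)

print(respond("I am a human"))              # "Why do you say you are a human?"
print(respond("Do you like grapefruits?"))  # "Why do you ask?" / "What do you think?"
```

It is striking how far this trick goes before people notice, which is exactly the projection effect the Cleverbot conversation websites illustrate.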
So, convincing decision making. That one is pretty interesting. IBM's Watson computer is probably most known for winning Jeopardy against the longest-running champion, Ken Jennings, which was pretty fascinating. But keep in mind that IBM's Watson is a supercomputer: all of its hardware would not fit into a reasonably sized human chassis. There's a lot going on there, but we do have some really sophisticated software that makes sophisticated decisions that are kind of like a person's. And the interesting thing about IBM's Watson is that it's now undergoing trials for diagnosing cancer, and they've found that it is better than humans at diagnosing cancer. So I think the argument might be that Watson can't be like a human if it's better than us, I don't know.

And then we move on to convincing social and emotional interaction. Someone pointed out that I'm talking with my hands, so there's clearly some nonverbal communication going on. This is Kismet, from the MIT lab. It looks a little bit garish compared to some of the newer robots that have facial features, but Kismet's early work was very, very interesting, because it had, I think, about nine different emotions it could convey, and it would interact with a person. It didn't have very sophisticated logic or reasoning, but it did respond with these pronounced emotional cues, and people responded to it. When it looked sad, people would feel bad that the robot felt bad; when it looked happy, people were like, oh, aren't you adorable. And this is not really surprising, considering how much of this we project onto our pets. Like my cats: I'm pretty sure they don't give a crap how my day was when I come home from work. They may seem concerned; I may talk to them and project onto them, but they don't actually care. They would like to be pet, they would like me to clean their cat box, and they would probably like to be fed at some point; those are the things that motivate cats. But I like to pretend my cat cares, you know what I mean? And I think the same thing is definitely true for robots. People name them; I'm going to show you a robot I built in a little bit, and we named it. People actually assign gender to robots as well. There was a video earlier from the Spark people where they call their chip a "he," actually. And I thought that was very interesting, that they chose to call it a he and not an it, because as human beings we really want to interact with things as though they are part of our social world. It's just a very natural thing that humans do.
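Kismet's trick, simple cues without deep reasoning, can be caricatured in a few lines. The sketch below keeps two internal drives and maps them to a displayed emotion; Kismet's real architecture was far richer, and the drives, stimuli, and thresholds here are invented for illustration.

```python
# Toy Kismet-style emotional display: internal drives go up and down
# with stimuli, and a displayed emotion is read off the drive space.
# No reasoning happens anywhere; all numbers are invented.

class ToyKismet:
    def __init__(self):
        self.arousal = 0.0   # -1 (bored) .. +1 (overstimulated)
        self.valence = 0.0   # -1 (distressed) .. +1 (content)

    def perceive(self, stimulus):
        effects = {"face": (0.2, 0.3), "toy": (0.3, 0.2),
                   "loud_noise": (0.5, -0.4), "nothing": (-0.3, -0.1)}
        da, dv = effects.get(stimulus, (0.0, 0.0))
        self.arousal = max(-1.0, min(1.0, self.arousal + da))
        self.valence = max(-1.0, min(1.0, self.valence + dv))

    def expression(self):
        # Map the drive space onto a displayed emotion.
        if self.arousal > 0.6 and self.valence < 0: return "fear"
        if self.valence > 0.3:  return "happiness"
        if self.valence < -0.3: return "sadness"
        if self.arousal < -0.3: return "boredom"
        return "interest"

robot = ToyKismet()
for stimulus in ["nothing", "face", "toy", "loud_noise", "loud_noise"]:
    robot.perceive(stimulus)
    print(f"{stimulus:>10} -> {robot.expression()}")
```

Even a loop this crude gets people to read feelings into the machine, which is the whole point of the Kismet story.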
But these social and emotional cues are really not complete without transitioning into a convincing body and chassis.

[FOREIGN]

So, we're not there yet. The interesting thing about these Actroids, and she's wearing an "I love Hello Kitty" shirt, is that it's actually a division of Sanrio working on these. So the same people that brought you Hello Kitty and those adorable creatures are making androids, which I find pretty interesting. So it's working toward a convincing body, and when she comes up on the screen people go, ugh, it's not convincing yet. But the point is really to show you that there's work being done in this space, and they will iterate on it over time. And there will probably be a point where it becomes more convincing, and that uncanny valley becomes a smaller valley.

So the next thing is autonomy. So often we talk about the coming robot apocalypse: we're going to make these android-like things, or maybe they won't even look that human; maybe we'll just make things with enough intelligence that they seem to approximate sentience, or are sentient, which is an ambiguous thing. And then they're going to come after us. They're going to turn on us, and just relentlessly crush humanity. And someone is nodding vigorously over there. [LAUGH] That's not a good thing. But right now, with the current state of things, part of the reason that can't happen is because of this. [BLANK_AUDIO] Yes, I'm not joking.

So, part of my research at SFU: I worked on robots in the SFU Autonomy Lab, and what the Autonomy Lab does, as well as trying to make autonomous robots, is look at the issues surrounding why robots can't be autonomous. And one of the number-one reasons is power. Oh my gosh. It's actually really expensive to run all of this hardware, and all of these servos, and all of these things. When I was plugging things into my Arduino for the first time, a few years ago, I suddenly plugged in so many things that I wondered: how come it's just not working anymore? Well, Angelina, because all of those things keep drawing your power, and now you need another power source. But, my robot! I wanted a robot butler, and now I'm stuck, because how do I make a robot butler without it carrying around a giant, heavy power source, which then takes energy to actually move, and you see where this is going, right? It's a complicated problem, but I do think there are scientists and roboticists innovating around it. I know, because I was in a lab that is specifically trying to solve this problem. One of the things they are working on is: can a robot have an algorithm such that it knows the optimal point, while completing work, to go to a recharge station, and how much work it can afford to do? And can it do something where it caches little power supplies? Like, if you were to take robots to space and you need to do some work on a planet, say they were colonizing Mars, since we're talking about the future here: could robots know how to distribute power sources for themselves when they get there, and when to go charge? We actually did software simulations of this sort of stuff, because we found it particularly interesting. So I do think there's probably innovation coming in this power space, and we're going to start solving that problem, but we've got a little bit to go. So thanks to this, you are safe for now.
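The recharge-scheduling question can be sketched as a very simple policy: before committing to a task, check whether the battery covers the round trip plus a safety reserve, and recharge first if not. This is a toy model, not the Autonomy Lab's actual algorithm; the energy costs and numbers below are invented.

```python
# Toy recharge-decision policy. Each work site is given as its distance
# from the charger, and travel costs one unit of charge per step.
# All numbers are invented for illustration.

BATTERY_FULL = 100.0

def can_afford(battery, round_trip_cost, reserve=10.0):
    """True if the trip leaves at least `reserve` units of charge."""
    return battery - round_trip_cost >= reserve

def work_day(sites):
    battery = BATTERY_FULL
    for i, distance in enumerate(sites, start=1):
        cost = 2 * distance  # out to the site and back to the charger
        if not can_afford(battery, cost):
            print(f"battery={battery:.0f}: recharging before site {i}")
            battery = BATTERY_FULL
        battery -= cost
        print(f"finished site {i} (distance {distance}), battery={battery:.0f}")

work_day([5, 12, 8, 20, 15])
```

The real research problem is much harder, since travel costs are uncertain and the robot wants to maximize work done rather than just survive, but this is the shape of the decision.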
So then the question is: is making convincing humanoid robots possible? And I'm going to say yes. I'm going to say that's going to happen a lot sooner rather than later. And even if we don't necessarily get robots that approach consciousness, or that we can say are the exact same thing as human beings, I think we can get ones that are pretty close, and that is probably going to happen during our lifetime. I don't know if it will be on a mass scale, but maybe in 30, 35 years we have half androids and half humans, and everybody sort of has to come together about things like android health, human and android socialization, and the [UNKNOWN] of politics. We think it's a little bit silly, but I don't, actually. I think these may be things we actually start to talk about.

So: kinda, sorta. We're kind of there. You look at the Actroid and you're kind of like, hmm. We know that power supply is kind of a problem. Consciousness is really hard. Just to be fair, I'm not trying to trivialize and say these are easy problems to figure out. They are very, very difficult, which is why people are spending lots and lots of time on them, and why it's an interdisciplinary study. So please don't think I'm coming here to say, oh yeah, Angelina said consciousness is just going to be solved, we'll know what it is in 30 years. I just think it's probably going to happen sooner rather than later. It's not going to be hundreds of years; it's going to be sooner than that.

So, cognitive science eventually led me to artificial intelligence, and holy crap, there's a lot to learn there. If anybody wants to talk about good old-fashioned AI, and algorithms, and stuff like that, come find me later; we can wax about that a little bit. And eventually my study in AI led to robotics, and it led me to the Richard Tapia Celebration of Diversity in Computing, and, in the same breath, its robotics competition in 2007.

So we had this little robot; mine, or rather our team's, is the one on the left. I'm so sorry it's small, but it's really hard to dig up photos of this. This was in 2007, so I guess almost seven years ago. What you're seeing there is the iRobot Create, like a Roomba base, and on top of that a microATX, not-very-micro form factor computer, because that's what we had. Stacked on top of that is a really crappy Logitech webcam, and behind it are your standard sort of junk speakers, so it could announce things for us. And if you look really closely, there's a tiny little Moogle from Final Fantasy VII, just because I'm a nerd. This robot's name is Caprica, actually. Its job was to navigate around a room; it was a search and rescue task, where we had to navigate around obstacles. There were these three colored pylons, and every time the robot encountered a goal, it had to announce the colored pylons out loud. And it had to go through these closed spaces as fast as possible and find the survivors; the scenario was something like a virus outbreak or a crash, a model search and rescue task. It got points deducted if it duplicated one of the results.
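Pylon spotting of that sort is typically done by color thresholding the webcam frame. Here is a sketch using OpenCV's HSV color space; this is not our team's original code, and the color ranges and the synthetic test frame are invented for illustration (a real robot would tune the ranges against its own camera and lighting).

```python
# Spot colored pylons in a camera frame by thresholding in HSV space.
# Ranges and the synthetic test image are invented for illustration.
import cv2
import numpy as np

PYLON_RANGES = {  # HSV lower/upper bounds (OpenCV hue runs 0-179)
    "red":   ((0, 120, 80),   (10, 255, 255)),
    "green": ((45, 120, 80),  (75, 255, 255)),
    "blue":  ((100, 120, 80), (130, 255, 255)),
}

def spot_pylons(frame_bgr, min_pixels=200):
    """Return the pylon colors visible in a BGR camera frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    seen = []
    for color, (lo, hi) in PYLON_RANGES.items():
        mask = cv2.inRange(hsv, np.array(lo, dtype=np.uint8),
                                np.array(hi, dtype=np.uint8))
        if cv2.countNonZero(mask) >= min_pixels:  # enough pixels: pylon there
            seen.append(color)
    return seen

# Synthetic test frame: a green rectangle on a gray background.
frame = np.full((240, 320, 3), 128, dtype=np.uint8)
frame[60:180, 100:220] = (0, 200, 0)  # BGR green block
print("Announcing pylons:", spot_pylons(frame))  # -> ['green']
```

On the real robot, the announcement step would just feed the detected list to the junk speakers via text-to-speech.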
And this is my team, hanging out; you can see Caprica again and all that. My friend Lauren in the front, I don't know what his deal is. I think he just didn't have a lot of sleep; he looks really bewildered. And the woman right in the middle there, Angelica, went on to do her PhD in robotics. She did a really awesome project where she plays her flute and a robot plays a theremin along with her.

So the robotics competition led me to working in the SFU Autonomy Lab, and that led me to this guy named Richard Vaughan, who one day brought up this idea of not building evil robots. Six or seven years ago, I thought this was complete fluff. Like, Arduino was just coming out, I think, and I remember when it did we were all like, oh man, we totally should have used one of those, it's way smaller. And the interesting thing is he doesn't just teach this idea to his grad students in his grad courses. If you take operating systems 300 with him, the basic operating systems course at the university, he gives, or was giving, this talk in all of his classes, and people were just so confused: robots aren't going to take over the world, people aren't building dangerous robots. But the thing is, people did start building dangerous robots. We've got killer drones that kind of just, you know, hang out. What happens if somebody hacks those? I kind of think that's a little bit terrifying. Actually, I'm less worried about the robots that exist right now, and more worried about the people behind them.

So he came up with this idea to give a small ethics talk: why don't we use our forces for good? Why don't we build helping robots? People are working on that right now. There's Paro. Paro is a seal, a plush seal, developed in Japan and given to people in hospice care, and the elderly actually find interacting with Paro very comforting when their attendant isn't with them. There's a robot called Keepon, which really looks like two yellow tennis balls stuck together. Keepon was originally developed to help socialize children with autism, or on the autistic spectrum. The idea is that if we use some very simple, similar-to-human emotional cues, step by step, we can socialize people on the autism and Asperger's spectrum gradually, without overwhelming them with stimulus, which is often a problem for people on the spectrum. So it's possible to use these things for good, and we don't necessarily have to build killing machines. But I still think someone probably will. You know, come on, someone's going to mess this up. And so that really brings up the question of: what makes an evil robot?

[SOUND]
[MUSIC] [SOUND]

That thing's pretty freaking terrifying. So, who knows where this one's from? No, not Terminator. Battlestar Galactica, sorry. Terminator's cool too; I mean, that's a pretty obvious one. But this is one of the early Cylons, which were developed, you know, for the military and all that. I think I got the order of this wrong, but in any case, the interesting thing about the Cylons is that they do go on to look very humanoid, or pardon me, to be androids and look very, very human-like.

And so we ask ourselves what makes an evil robot. Well, in that video there, that's something that is very clearly programmed to kill, and that seems pretty evil to me; I think anything that does harm is pretty terrible. But the interesting thing about a lot of these stories is that often there are robots, and humans programmed them, and the humans didn't treat them very well, and then the robots rebelled. And the interesting thing is, as we build more and more complex decision making, if we program things like emotion and expect them to act like us, and then treat robots that are meant to act like us poorly, why would they stick around? Why would they put up with that? We're all talking really hypothetically, but think about it: if we program them to be like us, who would want to be a slave? Nobody. That's terrible; that's just awful. I don't want to make analogies to actual historical events, because that's completely inappropriate, but I just mean that we shouldn't treat things that are like ourselves as less than ourselves, if that makes sense.

So there's this idea from Isaac Asimov; he was, obviously, a science fiction writer, and probably some of you have read his stuff. He had these laws of robotics. A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law. And the third one: a robot must protect its own existence, as long as such protection does not conflict with the First and Second Laws. And this is all well and good, but it only really protects the people, right? What happens when these androids are programmed to behave like people? They should have reactions like people, which could mean they get upset, and they do things like that.
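The three laws read almost like a prioritized rule system, which you can caricature in a few lines of code. Of course, real "harm" can't be reduced to boolean flags on an action; the fields and scenarios below are invented purely for illustration of the priority ordering and the Second Law's exception clause.

```python
# Asimov's laws caricatured as a prioritized rule check.
# Action fields are invented; a real system could never flatten
# "harm" into booleans like this.

def permitted(action):
    """Check a proposed action against the three laws, in priority order."""
    # First Law: may not injure a human, or through inaction allow harm.
    if action.get("harms_human") or action.get("ignores_human_in_danger"):
        return False, "violates First Law"
    # Second Law: must obey human orders, unless obeying breaks the First Law.
    if action.get("disobeys_order") and not action.get("order_would_harm_human"):
        return False, "violates Second Law"
    # Third Law: must protect itself, subordinate to the first two laws.
    if action.get("destroys_self"):
        return False, "violates Third Law"
    return True, "permitted"

print(permitted({"harms_human": True}))                                 # First Law
print(permitted({"disobeys_order": True, "order_would_harm_human": True}))  # allowed
print(permitted({"fetch": "coffee"}))                                   # permitted
```

Notice what the code makes obvious: every clause protects the humans, and only the last, lowest-priority one concerns the robot itself, which is exactly the gap in the laws being pointed out here.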
So, this is a reference to George Orwell's 1984. A lot of people, when they read that book, and it was actually 1984 at the time, thought, you know, we're never going to live in a surveillance state like that. It's never going to happen. We're never going to have to worry about that sort of stuff. It's all science fiction, you know? We can watch these clips and tell ourselves that this is never, ever going to be a thing.

So, about those killer robots. We just watched the clip with the Cylon, and it's pretty easy to demonize them when they look like this. But what about when they start to look like this? This is Caprica Six. Caprica is actually a planet in the Battlestar Galactica universe, sort of analogous to Earth, and she is named Caprica Six throughout the series. And she's a Cylon. She is an android. And the Cylons, in their later iterations, have actually come about in a sort of biological-technology way. In fact, throughout the series you realize they keep referring to the Cylons as not human, yet they're based on biological technology. And if we were to make something like that, based on biological technology, and we've made them, then they're even closer to humans than just silicon bits. And that actually messes with my mind just a little bit.

So, fiction does not necessarily stay as such. The media often normalizes fiction as the unattainable or improbable, but as we've seen with things like surveillance states being a concern, right now privacy is a pretty important topic for all of us working in tech. And I think people working in this space in 30 to 50 years might have the same sort of problem, where the media has normalized these ideas of robots as kind of a joke, as something that could never come to pass, but that's just simply not true. Things in fiction are often the inspiration to go make them; things in fiction can definitely come to pass.

So: ethics, right. Ethics. Don't make bad things. Don't make bad robots. Somebody probably will; try and stop them. If you make something that you want to be like a human, treat it like a human. Don't treat it as less than. Don't treat it as a complete tool for you, because as things approach more and more human-like properties, I just think that's awful.

So, RobotsConf, I would like you to make me a promise: I pledge to use the things I learn today to make the world a better place, and not cause suffering to any human, or any android that seems pretty darn human.

And that's it. [SOUND]