Un-artificial Intelligence: How People Learn
34:34 with Melinda Seckington

HAL, Skynet, KITT... we've always been intrigued by artificial intelligence, but have you ever stopped to consider the un-artificial? Most developers are familiar with the basics of AI: how do you make a computer, an algorithm, a system learn something? How do you model real-world problems in such a way that an artificial mind can process them? What most don't realize, though, is that the same principles can be applied to people. This talk looks at some of the theories behind how machines learn versus how people learn, and maps them to real-life examples of how our users learn their way around interfaces and how designers and developers apply learning methodologies in their day-to-day actions.
-
0:00
[MUSIC]
-
0:12
[APPLAUSE] >> Okay, thanks everyone.
-
0:18
So I'm gonna be talking about unartificial intelligence.
-
0:23
So to start things off, can anyone tell me what the reference in the poster is, RUR?
-
0:30
Does anyone know this?
-
0:32
Good.
-
0:34
Oh, one person up there.
-
0:35
[INAUDIBLE] >> Sorry.
-
0:38
>> [INAUDIBLE] >> It's a reference from
-
0:42
Rossum's Universal Robots, and it's a Czech play that was written in the 1920s.
-
0:48
And it is the first time the term robot was used in the English language.
-
0:55
And robot comes from the Czech word robota, and it means serf labor.
-
1:02
In RUR,
-
1:06
we get told a story about a factory where robots are made, and
-
1:12
it tells us how a robot rebellion leads to the end of the human race.
-
1:18
And this is something that we've always been fascinated by.
-
1:21
Ever since the industrial revolution, we've had this fascination with machines
-
1:26
coming to life, and basically causing the downfall of us as humans.
-
1:31
And as our technology is evolving,
-
1:33
you can see that our stories about technology are evolving as well.
-
1:37
So just thinking of recent movies,
-
1:39
you've got Her, how many people here have seen this?
-
1:43
Okay, quite a few. So here, a guy falls in love with his operating system.
-
1:50
There's Chappie, that came out recently.
-
1:52
And this is about a police robot that gets new programming, and
-
1:57
he basically becomes the first robot to think and feel for himself, and
-
2:00
has to learn everything as a child.
-
2:02
And then we have Ex Machina, and this is my favorite of the three.
-
2:06
Cuz it's basically about a Turing test, and
-
2:09
this is an experiment to evaluate whether this robot can be considered alive.
-
2:14
So I'm a huge movie geek.
-
2:17
And I love looking at movies, especially ones with an AI or sci-fi slant.
-
2:23
In my day job though I'm a developer.
-
2:26
Specifically a back end developer.
-
2:27
So it's a bit rare, I guess, here at Future of Web Design.
-
2:31
And I work at FutureLearn.
-
2:34
So we're a social learning platform, and
-
2:37
we provide courses for anyone in the world.
-
2:41
We work and
-
2:41
partner with Universities and cultural institutions, and provide free learning.
-
2:48
Our mission is to pioneer the best
-
2:51
learning experiences for everyone, everywhere.
-
2:54
Everywhere.
-
2:56
Now what this means though,
-
2:57
is that everyone on our team is also encouraged to learn more about the theory,
-
3:03
and the principles of how we're building these learning experiences.
-
3:07
So it's not just a slogan that we have somewhere on the wall.
-
3:10
We're actually trying to make sure that everyone across our entire team,
-
3:14
be it from marketing to the product team, understands how we are doing this.
-
3:19
So, we'll have internal talks about pedagogy.
-
3:22
We are all encouraged to take courses on our own platform, but
-
3:25
also on other platforms.
-
3:26
And we've got learning technologists working side by side with us,
-
3:30
so that we can constantly ask what are we doing, and
-
3:33
are we doing it in such a way that we're providing a good learning experience?
-
3:38
So what has this got to do with AI?
-
3:41
So, my own background is in AI, back when I was in university,
-
3:47
I did a computer science degree, and specialized in machine learning, and
-
3:51
specifically in facial expression recognition.
-
3:54
So, how to look at and recognize emotions.
-
3:58
And what I realized from listening to those pedagogy presentations internally,
-
4:03
is that how machines learn,
-
4:05
so how the artificial learn, is very similar to how people learn,
-
4:11
so how the unartificial learn.
-
4:15
So that's what this talk today is gonna be about.
-
4:17
I'll be explaining a bit of the basics of artificial intelligence, and
-
4:23
use that to explain unartificial intelligence, so how humans learn.
-
4:29
So to start things off, what is intelligence?
-
4:32
How do we define this?
-
4:33
What makes something, or someone, intelligent?
-
4:37
So I did what every geek would do, and I looked it up in the Dungeons and
-
4:41
Dragons rulebook.
-
4:42
[LAUGH] And we get the following quote.
-
4:48
Intelligence determines how well your character learns and reasons.
-
4:52
This ability is important for wizards,
-
4:54
because it affects how many spells they can cast,
-
4:56
how hard the spells are to resist, and how powerful those spells can be.
-
5:01
It's also important for any character who wants to have a wide assortment of skills.
-
5:08
But now, obviously, the parts about wizards and
-
5:11
spells aren't really relevant to all of us, but the parts that are, are how well your
-
5:16
character learns and reasons, and having a wide assortment of skills.
-
5:22
So this is all what intelligence is about.
-
5:25
And the reason why I fall back on the Dungeons and
-
5:27
Dragons rulebook, is that it also includes wisdom.
-
5:32
Here, they really make the differentiation between intelligence and wisdom.
-
5:37
And as a kid, I always thought this was a bit odd.
-
5:41
I always struggled at the time trying to understand this, but
-
5:45
it makes perfect sense.
-
5:46
Wisdom is about what we know, and what knowledge we have.
-
5:52
While intelligence is about the skills and the knowledge, and how we apply it,
-
5:56
rather than the actual owning of it.
-
5:59
So intelligence is not about being intellectual.
-
6:03
Here's another definition from a proper dictionary.
-
6:07
And it's the ability to acquire and apply knowledge and skills.
-
6:11
So again it's not about just having the knowledge or
-
6:13
skills, it's knowing how to obtain them.
-
6:15
How to reason about them, and how to use them.
-
6:21
So what do we then mean by artificial intelligence?
-
6:25
Well, it actually has two meanings.
-
6:28
On one side, we use it to mean the intelligence of machines, or software.
-
6:35
But on the other hand, we also actually use it quite a lot to indicate the field
-
6:39
of study that's researching how to create intelligence within machines.
-
6:45
And looking at this research area,
-
6:47
there are four different approaches that we can look at.
-
6:53
Mainly because it started ages ago,
-
6:56
and how we think about AI has kind of evolved over time.
-
6:59
What do we mean by artificial intelligence?
-
7:03
So on the left side, you have systems that think like humans, and
-
7:07
systems that act like humans.
-
7:09
So this very much looks at intelligence, compared to us as humans.
-
7:14
Then on the right-hand side, you've got systems that think rationally, and
-
7:18
systems that act rationally.
-
7:20
And this is very much looking at the ideal concept of intelligence.
-
7:25
A system is rational, if it does the right thing, but what is the right thing?
-
7:32
And then we can split it up this way.
-
7:33
On the top, we have think like humans and think rationally.
-
7:37
So it's very much the idea of thinking versus acting, thought versus behavior.
-
7:45
So let's look a little bit closer at systems that think like humans.
-
7:48
What do we actually mean by this?
-
7:52
So the research in this area mainly looks at cognitive science.
-
7:56
So these researchers are mainly trying to come up with a theory of the mind.
-
8:02
And using this theory of the mind, to express it as a computer program.
-
8:07
So if we understand how our brains work, we can recreate this, and
-
8:11
reuse this in computers.
-
8:14
And it's the other way around as well.
-
8:17
If we understand how computer programs think,
-
8:20
we can get a better understanding of how humans think.
-
8:23
So this entire area is focused on the brain.
-
8:28
Then we have systems that think rationally.
-
8:31
And this is very much what we call the logicist approach.
-
8:35
And this is completely rooted in logic.
-
8:39
And this is where traditional, kind of classic AI comes from.
-
8:43
This is where we started off with AI.
-
8:45
And in this case,
-
8:48
it's very much about how any problem can be defined in logical notation.
-
8:54
And once every problem is defined in logical notation,
-
8:56
we can then solve any problem, because any problem in logical notation can be solved.
-
9:03
The problem with this, though, is two-sided.
-
9:05
On one side, not everything can actually be described in logical notation,
-
9:11
and on the other hand, not every problem actually has a solution, and
-
9:15
there are some cases where a yes-or-no answer might not be the answer.
-
9:21
There are some things that are a bit more random, a bit more fuzzy.
-
9:25
So this area is still appropriate for some types of AI,
-
9:29
but when we're thinking about intelligence, it's not really appropriate.
-
9:35
So now we have the area that looks at systems that act like humans.
-
9:40
And this very much comes down to the Turing Test.
-
9:43
How many people here know what the Turing Test is?
-
9:47
I'm assuming quite a lot, yeah.
-
9:49
It was introduced in the 1950s and
-
9:52
it's Turing's response to the question, "Can machines think?", but
-
9:57
rather than focus on what we mean by thinking, he proposed another question.
-
10:03
Can machines do what we as humans do?
-
10:09
So he proposed a game, the imitation game.
-
10:13
So, he proposed the Turing test, which is based on the imitation game.
-
10:21
And this is actually a party game from the 1950s, where an observer, or
-
10:27
judge, would have a text conversation with a man and a woman.
-
10:33
And the judge would have to decide which of the people they were talking to
-
10:37
was the man, and which one was the woman.
-
10:39
It's a bit of a lame party game, if you ask me.
-
10:42
But Turing turned it around and decided to do it with humans and computers.
-
10:49
So now you have a judge conversing in text with two participants.
-
10:56
Which of the participants is a human, and which one is a computer?
-
11:00
So this is what traditionally the Turing test is.
-
11:04
But it has some limitations.
-
11:06
So to start with, it's purely text-based.
-
11:09
So we're looking at computers that can chat, and
-
11:12
we're not looking at any other type of human behavior at all.
-
11:15
But humans do much more than just talk.
-
11:18
We can dance, we can sing, we can run, we can do all these other types of stuff,
-
11:23
and the Turing test is only looking at a very, very small subset of it.
-
11:28
So, rather than actually attempting to answer the question that Turing proposed
-
11:32
before, can machines do what we as humans do, we're actually answering this.
-
11:38
Can machines appear to respond in text as humans do?
-
11:41
Which isn't quite as catchy.
-
11:44
So, next to that though,
-
11:48
it also doesn't look at all types of behavior that humans do.
-
11:52
There is some human behavior that we can deem unintelligent,
-
11:59
and there is some intelligent behavior which we think is inhuman.
-
12:03
So the first chat bot, for instance, that won a Turing test did
-
12:09
it by showing human behavior that was unintelligent, like throwing typos in text
-
12:15
and making grammar mistakes that a proper computer wouldn't do.
-
12:21
At the same time, other chat bots would fail because they could do things that
-
12:26
humans couldn't do, like solve really complex mathematical equations.
-
12:33
This introduces this weird idea of Artificial Stupidity.
-
12:38
We're creating AI systems to win the Turing test,
-
12:43
which are just as dumb as we are.
-
12:46
It's pretty much this idea of dumbing down an algorithm just for
-
12:50
the sake of passing it off as human.
-
12:52
This feels really anticlimactic to me.
-
12:55
We're creating these AI systems to be just as dumb
-
13:00
as we are, rather than smarter than we are, or at least just as intelligent as we are.
-
13:04
So the final area that I want to look at is systems that act rationally.
-
13:10
And this is the area of intelligent agents.
-
13:16
So a rational agent is one that acts to achieve the best outcome or
-
13:22
in the case of uncertainty, the best expected outcome.
-
13:27
So, this doesn't necessarily mean, though, like Agent Smith in The Matrix,
-
13:32
that it's a standalone agent that acts like a robot in the world.
-
13:37
With agents, we're thinking of a more abstract term.
-
13:42
So you can think of any type of computer program, parts of your code.
-
13:47
It's just anything that acts in this way.
-
13:51
So the most basic way that we can think about this is with this diagram.
-
13:58
You have an agent that lives somewhere in the environment.
-
14:03
And it has sensors that can observe the environment.
-
14:09
And it has effectors with which it can make changes to its environment.
-
14:13
So as I said, this is not necessarily a standalone program.
-
14:17
There are way more things that we can think about in this way.
-
14:23
And with humans, we can think about it in the same way.
-
14:26
Us as humans live in an environment and
-
14:29
we have sensors, which are our five senses, sound, sight, etc.
-
14:34
And we have effectors with which we can change our environment, so our hands,
-
14:37
our feet, our voice, anything that makes changes to the world.
-
14:43
But how do we actually reason about it?
-
14:49
We've got this big black box in the middle and
-
14:51
this is the thing that makes the decisions for us.
-
14:53
It gets the observations in, and
-
14:55
it has to make decisions on how to affect the world.
-
14:59
So how do we do this?
-
15:01
So here's the kind of first order version that you can think of.
-
15:05
It's really simple.
-
15:07
You get observations in.
-
15:09
The agent creates a state of what its current world is.
-
15:16
There's a set of condition-action rules, so these are just if-then rules, and
-
15:20
it can decide what action it should take.
-
15:23
So the really simple example that I always think of is email filters, which again,
-
15:29
nobody would really consider an agent in the movie sense.
-
15:33
But it pretty much is.
-
15:38
So here's an example.
-
15:39
The environment in this case is our inbox.
-
15:42
You get an incoming mail from a specific email address.
-
15:46
And your filters have an if-then rule that if it matches this email address,
-
15:51
it should apply this label.
-
15:53
So it makes the decision to apply that label.
-
15:56
So it's a really simple loop, trying to come up with actions to take on the world.
-
16:02
So again, we have this as humans as well.
-
16:05
The unartificial think about these types of things as well.
-
16:10
So before going on to how humans do it, let's look at dogs and animals.
-
16:16
So who here knows who Pavlov is?
-
16:20
Okay.
-
16:21
So yeah, he's a Russian physiologist who was known for
-
16:24
his work in classical conditioning.
-
16:26
And he trained dogs to associate the sound of a buzzer with food.
-
16:32
So, a couple of years back, I thought I'd see if I could do this.
-
16:38
I've got two cats, Casey and Dusty, and they're really cute, but they're also
-
16:45
really super annoying whenever they're hungry, which tends to be most of the time.
-
16:52
So I wondered, what would happen if I could train them like Pavlov did?
-
16:57
Would that work?
-
17:01
So I tried.
-
17:03
So first up, this was the beginning state.
-
17:08
So my cats would smell the food that I give them and because they wanted
-
17:14
to show that they wanted the food, they'd race to the kitchen and be all excited.
-
17:20
So this was the current state.
-
17:23
I then started training them with a standard iPhone alarm.
-
17:27
So I'd only actually feed them if the iPhone alarm had gone off.
-
17:32
And then I'd stand up and go to the kitchen and feed them.
-
17:35
And then they'd race to the kitchen and follow me and be all excited.
-
17:39
Eventually they associated that noise with food.
-
17:44
So whenever the iPhone alarm went, they'd race to the kitchen and be excited.
-
17:49
So the experiment was a success.
-
17:52
Just like Pavlov, I managed to train them to associate this noise with food.
-
17:59
It didn't mean though that they weren't hungry the rest of the time.
-
18:03
So even though it was successful,
-
18:06
it still meant they were really annoying, because they just always wanted food.
-
18:09
The funny thing though is this is about three years ago that I did this,
-
18:14
and even now when there are TV shows or movies that use that iPhone alarm,
-
18:20
my cats will jump up and race to the kitchen.
-
18:25
So, yeah, it does work.
-
18:27
And, we're not that different from cats, actually.
-
18:30
We use these same principles on ourselves to create habits.
-
18:35
So, just thinking about one that I do every morning,
-
18:39
whenever I hear my alarm clock, I know that alarm clock means I need to get up.
-
18:44
And I decide that I should get up, on good days, that is.
-
18:49
And it's the same with us as developers.
-
18:52
We are trained to know that if we see failing tests,
-
18:56
we associate that with a bad situation, so we need to stop and
-
19:01
think and actually fix those tests before doing anything else.
-
19:06
So these are all very simplified loops of behaviors and rules.
-
19:13
The question though is, how do we actually obtain those rules?
-
19:16
How do we learn that a failing test is bad?
-
19:19
How do we learn that an alarm clock means I need to get up?
-
19:24
How do we do that bit?
-
19:27
So then, things start getting a little bit more complex.
-
19:31
So now we have a bit more of a larger diagram and
-
19:36
it's kind of got two loops within it.
-
19:40
The first one focuses on the performance element.
-
19:45
This performance element is pretty much the entire agent that you saw before.
-
19:49
This is the thing that creates a state of the world, has the rules about
-
19:55
the current state of the world, and decides what actions to take.
-
20:00
Only now we have a learning element.
-
20:03
And this thing is what influences the performance element to change the rules
-
20:08
and to change the way we see the world.
-
20:12
So this is what makes us make better decisions.
-
20:14
This actually gets feedback
-
20:17
from the critic, and this takes some observations of past events and
-
20:21
figures out what feedback it can pass on to the learning elements and
-
20:25
then on to the performance elements.
-
20:27
So it's this kind of constant loop of learning from the actions that we take,
-
20:32
observing what it does, and seeing if we can actually use it again in future.
-
20:36
And then finally, there is the problem generator.
-
20:40
And this bit is responsible for suggesting actions to lead to new experiences,
-
20:46
so if we didn't have this element, the agent would always remain doing
-
20:50
what it thinks is best rather than trying out new things.
-
20:54
Without it, we wouldn't have any experiments.
-
20:56
We wouldn't have any exploration.
-
20:58
And we'd be stuck doing the best thing rather than having a bit of randomness in
-
21:02
and trying to do new things.
-
21:06
So humans do this as well.
-
21:07
We have the same types of loops that we learn from.
-
21:12
In a bit more human terms, you can look at it like this.
-
21:15
So you have the main control, which is what we use to make decisions and
-
21:20
observe the world.
-
21:22
We have reflection moments,
-
21:24
where we reflect on the things that we've learnt and seen in the past.
-
21:29
And an understanding element, which understands the decisions that we've made,
-
21:33
and how to change the decisions we're making in the future.
-
21:36
And a planning element, which is kind of this experimental section of our brain.
-
21:45
So the main thing we're interested in here, is the learning element.
-
21:48
How do you define
-
21:51
this entire section, and how does it learn what changes to make?
-
21:55
This is where learning algorithms come in.
-
22:00
So there are kind of three areas of learning algorithms that we can look at.
-
22:03
First is supervised learning.
-
22:07
So in this case all the feedback that you get is upfront.
-
22:10
You basically get a bunch of labeled training data and
-
22:14
you're trying to infer the rules for which input belongs with which label.
-
22:19
So here's a really basic example.
-
22:21
On the left you have a series of shapes and then you have a series of labels.
-
22:27
So it will basically go this shape is a circle.
-
22:30
This shape is a square.
-
22:32
This shape is a triangle.
-
22:34
We're basically creating a mapping between the input and the output so
-
22:40
then when new input comes along, we can go, hey, I've seen this before; it's a square.
-
22:48
Then we have unsupervised learning.
-
22:50
And in this case, our feedback,
-
22:52
our input is just a bunch of data, with no labels, no anything.
-
22:58
And in this case, the algorithm needs to learn what patterns and what
-
23:03
structure there are in that input, even though there aren't any specific outputs.
-
23:08
So here's another basic example.
-
23:11
You get a bunch of shapes in.
-
23:13
And the learning algorithm based on what it sees,
-
23:18
so things that it finds in common, creates these two different groups.
-
23:23
So in this case it thinks: everything that has edges, OK,
-
23:25
let's put that in one group.
-
23:27
Everything that doesn't have edges, put it in this other group.
-
23:31
So it's about finding the commonalities between things and
-
23:34
then making decisions based on that.
-
23:37
The final one, and that's the most like human learning, is reinforcement learning.
-
23:43
So in this case we actually get proper feedback in our algorithms.
-
23:48
So the agent will make a decision.
-
23:50
And it then gets feedback whether or not it's wrong or right.
-
23:55
And it's much more general than the other types of algorithms.
-
23:58
But at the same time, it has to also have a much better understanding of
-
24:02
how its environment works.
-
24:04
And it needs something to tell it whether or not it's right or wrong.
-
24:09
So again, basic example.
-
24:11
In this case, the agent just makes the decision of calling it a triangle, and
-
24:18
then someone, most probably a teacher or a developer,
-
24:23
tells it whether it's wrong or right.
-
24:27
So as it goes along, it learns what rules there are
-
24:31
and can adapt to its environment.
-
24:36
So in the same way that machines can learn through different types of algorithms,
-
24:41
humans also learn through different types of activities.
-
24:45
So when you're traditionally thinking of learning,
-
24:49
most of the time you're thinking about somebody standing in front of a lecture
-
24:53
room, talking to people, and giving information that way. But there are many
-
25:03
other ways that people learn, and I think that most people don't really
-
25:03
realize that we need all these things to get a good learning experience.
-
25:07
So this is an overview of 16 of the different types of
-
25:11
learning activities that there are.
-
25:13
I'll just highlight a couple here because I don't want to go over all 16.
-
25:16
So the first one is delivered.
-
25:18
And that's very much where learners are presented with information.
-
25:23
This is that traditional, someone standing up in front of a stage and
-
25:27
talking to people and telling them how things are.
-
25:30
So this is very similar to supervised learning.
-
25:33
You get presented with some contents which contains all the information
-
25:38
that you need.
-
25:40
You have labeled data basically.
-
25:43
So at FutureLearn, we do this with some of our steps.
-
25:49
We've got article steps and video steps, so
-
25:52
we're delivering content that users can absorb and process in one go.
-
25:58
So it's delivered content.
-
26:01
So the same thing we do as developers and as designers.
-
26:05
Whenever we want to learn something new, we will learn from books and
-
26:09
YouTube videos and other types of videos of conferences like this.
-
26:13
It's very much, again, about delivered content.
-
26:16
Second one that I wanted to look at is conversational and collaborative.
-
26:20
So here it's very much about learning with others, and
-
26:24
constructing a shared understanding by talking to others.
-
26:28
So, this is more along the lines of unsupervised learning.
-
26:31
Where there's not really any main content, it's more about
-
26:35
the act of recognizing the patterns that you have in conversations with others.
-
26:39
So again, at FutureLearn, we encourage people to have conversations about what
-
26:46
they've learned, so rather than just being about content that they can consume,
-
26:52
we also have conversations that are happening around pieces of content.
-
26:57
So it's about learning through those conversations.
-
27:01
And again, as developers we also do this pretty much day to day with pairing.
-
27:09
We're conversing with others and working together with them and
-
27:11
that leads us to learn something better.
-
27:15
And the same as with cross-disciplinary teams.
-
27:20
Can't say that word.
-
27:23
Even between designers and
-
27:24
developers, when we're working together we learn from each other.
-
27:28
Or even if it's wider type of cross-disciplinary teams like
-
27:32
having people from the marketing team or strategy team work alongside you.
-
27:37
It allows you to learn more.
-
27:42
The third one that I wanted to look at is assessing.
-
27:45
So, in this case, it's about receiving constructive feedback, and
-
27:49
learning from that feedback.
-
27:50
So again, this is like reinforcement learning.
-
27:53
It's all about the feedback that you get back.
-
27:56
And again it's more general, but it works.
-
28:00
And again, at FutureLearn, we've got peer review assignments.
-
28:03
So learners have to submit an assignment and
-
28:08
they'll get randomly assigned to someone else who has finished their assignment.
-
28:13
And you get feedback from those people.
-
28:15
So it's all about getting this constructive feedback and
-
28:18
learning from the feedback itself.
-
28:21
It's also all about actually creating the feedback.
-
28:25
You also learn from creating the
-
28:28
feedback, rather than just from receiving it.
-
28:31
So it works both ways.
-
28:34
And we do the same with pull requests.
-
28:37
Pull requests are all about receiving constructive feedback, if you
-
28:42
work in a good team that is.
-
28:47
Here are 8 of the 16 that you saw before, and again, it's
-
28:50
just a couple of examples of the types of activities that we do when learning.
-
28:55
As I said before, I'm not going to go through all of them.
-
28:59
I just wanted you to stop and think about this conference.
-
29:04
If you think about it what I'm doing now here on stage and
-
29:08
what all the other speakers are doing on stage is delivered learning.
-
29:14
So you guys are all listening to us and learning from what we're saying.
-
29:19
Then again, all of the speakers are doing performative learning.
-
29:24
So we're all presenting before an audience and
-
29:27
we're learning from that experience as well.
-
29:31
Then you can think of construction learning,
-
29:36
as in the workshops that happened on Monday, for everyone that went there.
-
29:39
But it's all about learning through designing and making and
-
29:43
building things yourself.
-
29:45
And then conversational learning is what everyone here does during the breaks.
-
29:49
You talk to each other and you converse with each other and you learn from that.
-
29:54
So there's all these different ways that we learn.
-
29:58
And most of the time, we only consider one of them,
-
30:00
rather than thinking about all the different ways that people can learn.
-
30:05
So what makes us different from machines?
-
30:09
There are quite a few things right now, but I think that we're getting closer
-
30:14
and closer to having machines that aren't that different from us.
-
30:18
So the first is contextual.
-
30:20
Right now, anything new that we learn, we know in what situation,
-
30:25
or in what context, we're learning it.
-
30:28
So unlike machines, we're not really bound to one domain, or one purpose.
-
30:33
We are constantly switching between contexts, and we constantly know
-
30:37
to what type of context we're applying the knowledge that we're learning.
-
30:44
Besides that we're also constantly learning.
-
30:46
We don't have an off switch for it.
-
30:48
Even though we might not be consciously learning a new skill or
-
30:52
new knowledge, it's something that we're doing all the time.
-
30:56
We're always processing information and
-
30:58
we're always adapting to the people, the world, and the environment around us.
-
31:02
So right now I'm looking at everyone and
-
31:05
trying to figure out which jokes of mine aren't working.
-
31:07
I'm learning from that whereas
-
31:11
machines are all very much state-based and they go from one state to the other.
-
31:16
So one moment they'll be learning and one moment they'll be observing.
-
31:19
And one moment they'll be trying to figure out whether or
-
31:21
not their decisions were right, so they're not constantly learning.
-
31:28
The other thing is prior knowledge.
-
31:30
So we've got a huge backlog of years and years of things that we know.
-
31:35
And we can make associations between different types of
-
31:38
information that we know.
-
31:40
So we can discover patterns and connections in ways that
-
31:44
other people might not have and machines don't have this yet.
-
31:48
Machines are kind of like babies.
-
31:50
They're really, really blank slates and
-
31:53
it takes time for machines to learn new things.
-
31:58
Next to that, we are emotional, and we attach value to certain skills,
-
32:02
information, and experiences, and we use that emotion to make better decisions,
-
32:09
and again, machines don't have that yet.
-
32:12
And last but not least, social, we learn from other people.
-
32:16
And it's one of the key things that makes us different from machines,
-
32:20
but also different from most animals.
-
32:23
It's mainly through having conversations with others, and learning at events like
-
32:28
this and from our friends and our family that we know how to learn things.
-
32:35
So we have started creating machines with all these different abilities, but
-
32:40
not all of them combined yet.
-
32:43
So machines need to be able to do all of these things together in a generalized way
-
32:48
before we can consider any type of true artificial intelligence.
-
32:53
They need to learn the way humans do before we can call them that and
-
32:59
I don't think we're that far off.
-
33:02
I know it was said yesterday that predictions are bullshit,
-
33:07
but here's one anyway and I'm definitely not the first one to say this, but
-
33:12
I believe we will have artificial intelligence in this century, so
-
33:17
most probably in most of our lifetimes.
-
33:20
And I don't think this is a type of AI that will be plotting our downfall
-
33:24
as we see in most movies.
-
33:27
Rather, we'll have machines that can learn and reason about skills and
-
33:32
knowledge the way that we as humans do and that's when things, I think,
-
33:36
will get really interesting.
-
33:38
Because, in a world where humans and machines learn in the same way,
-
33:44
does that mean they'll learn together?
-
33:47
So, will our schools actually become places where both humans and
-
33:51
machines learn?
-
33:53
Will those with artificial intelligence be treated the same way as
-
33:57
the unartificial intelligence?
-
34:00
So when we're thinking of web design of the future, consider this.
-
34:07
Will what we design and develop for humans also be used for machines?
-
34:14
Will what we design and develop for humans also work for machines?
-
34:19
So, thanks for listening.
-
34:21
[APPLAUSE]
-
34:27
[MUSIC]