Perpetuating Bias and Amplifying Hate with Artificial Intelligence
6:52 with Hope Armstrong and Michelle Zohlman
Machine learning opens up a substantial ethical debate, as it can perpetuate bias and amplify hate. We'll look at real-world examples where AI got it wrong, and then we'll explore some best practices to prevent mistakes.
New Terms:
- Artificial intelligence (AI): creating intelligent machines that can simulate human thinking
- Machine learning: a subset of AI; the ability of a computer to analyze information and then create new rules, enabling the computer to learn and create on its own
Further Reading:
- Algorithmic Justice League
- Weapons of Math Destruction, book by Cathy O'Neil
- Dear Facebook, this is how you're breaking democracy
- When It Comes to Gorillas, Google Photos Remains Blind (Wired)
- Machines Taught by Photos Learn a Sexist View of Women (Wired)
- The spread of true and false news online, a study in Science magazine
- Who's watching the algorithms? (Lowy Institute)
- Why Does Siri Sound White (ConveyUX conference)
- Alan Cooper's keynote "The Oppenheimer Moment" (IxDA 2018)
- How I'm fighting bias in algorithms, TED Talk by Joy Buolamwini
- Soap dispenser sensor that doesn't recognize darker skin tone (Facebook video)
- Free Speech Is Not the Same As Free Reach
- Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice, by Rashida Richardson, Jason Schultz, and Kate Crawford
- Algorithms of Oppression: How Search Engines Reinforce Racism, book by Safiya Umoja Noble
- The Bias Embedded in Algorithms, articles curated by Safiya Umoja Noble on Pocket
- Race After Technology, book by Ruha Benjamin
- Automating Inequality, book by Virginia Eubanks
MICHELLE: You've probably heard about self-driving cars, voice assistants, or some artificial intelligence approaches to science and healthcare, like IBM Watson, which helps us dig through vast libraries of complicated information in an instant. All of these things are powered by machine learning, which is a subset of artificial intelligence.

Artificial intelligence, abbreviated as AI, is the overarching concept of creating intelligent machines that can simulate human thinking. Machine learning is a subset of AI: the ability of a computer to analyze information and then create new rules. So instead of humans doing all of the programming, we can point computers in the right direction and they can continue on a path of self-improvement. We're giving computers the chance to learn and create on their own.
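To make that concrete, here is a minimal sketch of "the computer creates the rules," using scikit-learn and a made-up spam example. The features, labels, and data are invented for illustration; this is not from the video.

```python
# A minimal sketch of machine learning "creating the rules": instead of
# hand-writing an is_spam() function, we show the model labeled examples
# and let it infer a rule on its own. Toy data, for illustration only.
from sklearn.tree import DecisionTreeClassifier

# Each example: [number of exclamation marks, contains the word "free" (0/1)]
features = [[0, 0], [1, 0], [5, 1], [7, 1], [0, 1], [6, 0]]
labels = [0, 0, 1, 1, 0, 1]  # 0 = not spam, 1 = spam (human-provided labels)

model = DecisionTreeClassifier()
model.fit(features, labels)  # the "learning" step: the tree derives its own rules

# The model now applies a rule we never wrote explicitly.
print(model.predict([[4, 1]]))  # -> [1], classified as spam
```

Notice that whatever rule comes out depends entirely on the examples that went in, which is exactly where the ethical problems discussed in the rest of this video begin.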
Machine learning opens up a substantial ethical debate. Cathy O'Neil, author of the book Weapons of Math Destruction, explains it like this: "Big Data processes codify the past. They do not invent the future. Doing that requires moral imagination, and that's something only humans can provide. We have to explicitly embed better values into our algorithms, creating Big Data models that follow our ethical lead. Sometimes that will mean putting fairness ahead of profit."

Increasingly, machine learning is organizing information, such as search results, according to invisible parameters that reveal biases in society. Even though developers may not identify as racist or sexist, the algorithms or the data fed to the system might be. The software we create can amplify inequalities and make them even worse.
HOPE: The data that informs machine learning can have biases; this is called algorithmic bias. In 2016, University of Virginia computer science professor Vicente Ordonez noticed a sexist pattern in two image collections commonly used for research: images of shopping and washing are linked to women, while images of coaching and shooting are linked to men. This is just one example of how machine learning can amplify sexism.
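One way such a skew can be surfaced is by counting how often activity labels co-occur with gender labels in a dataset's annotations. Here is a rough sketch, assuming a hypothetical list of annotated images; the field names and records are invented.

```python
# Hypothetical audit: count (activity, gender) label pairs in a dataset's
# annotations to expose lopsided associations. All records are invented.
from collections import Counter

annotations = [
    {"activity": "shopping", "person": "woman"},
    {"activity": "shopping", "person": "woman"},
    {"activity": "cooking",  "person": "woman"},
    {"activity": "coaching", "person": "man"},
    {"activity": "shooting", "person": "man"},
    {"activity": "cooking",  "person": "man"},
]

pairs = Counter((a["activity"], a["person"]) for a in annotations)
for (activity, person), count in sorted(pairs.items()):
    print(f"{activity:10s} {person:6s} {count}")
```

A real audit would run over millions of annotations, but the idea is the same: if "shopping" almost always co-occurs with "woman," a model trained on the data will learn that association as if it were a rule.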
It can amplify racial bias, too. In a TEDx Talk, Joy Buolamwini explains how facial recognition software didn't detect her face: those who developed the algorithm hadn't trained it to recognize a range of skin tones and facial structures.
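A common safeguard is to evaluate a model per demographic subgroup rather than with a single overall score, since a detector that looks accurate on average can still fail badly for one group. A sketch with made-up detection results:

```python
# Disaggregated evaluation sketch: report detection rates per skin-tone
# group instead of one overall accuracy. The results below are invented.
from collections import defaultdict

results = [  # (skin_tone_group, was the face detected?)
    ("lighter", True), ("lighter", True), ("lighter", True), ("lighter", False),
    ("darker", True), ("darker", False), ("darker", False), ("darker", False),
]

totals, hits = defaultdict(int), defaultdict(int)
for group, detected in results:
    totals[group] += 1
    hits[group] += detected  # True counts as 1

for group in totals:
    print(f"{group}: detection rate {hits[group] / totals[group]:.0%}")
# Overall accuracy here is 50%, which hides a 75% vs. 25% gap between groups.
```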
In addition to perpetuating biases, technology can shape our perception of reality. Guillaume Chaslot, a former software engineer at YouTube, created the algorithm that predicts which video to play next. He admitted the algorithm tries to find rabbit holes that will draw you into more videos on a given topic. Algorithms optimized for increasing watch time can inadvertently lead to radicalization.
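The core mechanic is simpler than it sounds: if the only objective is predicted watch time, the most engaging candidate wins no matter what it claims. A toy "up next" ranker follows; the titles and numbers are invented, and this is not YouTube's actual system.

```python
# Toy "up next" ranker optimized purely for predicted watch time.
# Titles and minutes are invented; no real recommender is shown here.
candidates = [
    ("calm news recap", 3.0),
    ("sensational conspiracy deep-dive", 11.0),
    ("balanced explainer", 4.5),
]

def up_next(videos):
    # The only objective: maximize expected watch time. Nothing in this
    # scoring function knows or cares whether a video is true or harmful.
    return max(videos, key=lambda video: video[1])

print(up_next(candidates))  # -> the most engaging video wins
```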
A comprehensive study in Science magazine found that fake news is shared six times more than accurate stories. When systems don't fact-check content, virality becomes the only success metric; this is how conspiracy theories spread.
Similarly, something as innocuous as autocompletion in the Google search bar shapes our reality. Let's search for "climate change is." The search prediction makes recommendations based on your search history as well as trending searches in your area. I live in an area that's supportive of reversing climate change, so my results mostly reflect that; in a place resistant to climate change, the results may look very different. While google.com generally looks the same for everyone, it's subtle nudges like these that keep people in differing realities. It's one of the reasons why political polarization is at a 30-year high.
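You can approximate this behavior as ranking completions of a prefix by how often they're searched in a given region. A toy sketch follows; the regions and query counts are invented, and real autocomplete systems are far more complex.

```python
# Toy autocomplete: rank completions of a prefix by regional query
# frequency. Counts are invented to show how two regions can be
# suggested very different "realities" for the same prefix.
regional_counts = {
    "region_a": {"climate change is real": 900,
                 "climate change is accelerating": 400,
                 "climate change is a hoax": 50},
    "region_b": {"climate change is a hoax": 700,
                 "climate change is real": 300,
                 "climate change is exaggerated": 250},
}

def suggest(prefix, region, k=3):
    counts = regional_counts[region]
    matches = {q: n for q, n in counts.items() if q.startswith(prefix)}
    return sorted(matches, key=matches.get, reverse=True)[:k]

print(suggest("climate change is", "region_a"))
print(suggest("climate change is", "region_b"))  # same prefix, different reality
```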
Rashida Richardson is the Director of Policy Research at AI Now, a research institute studying the social implications of artificial intelligence. She has said that while there are anti-discrimination laws to protect people regarding housing, credit, and employment, those laws have limits, and racial disparities persist because of historical exclusion. In a paper she co-authored, the team researched predictive policing systems that forecast criminal activity. The predictions are flawed because they're trained on racially discriminatory data: the systems took corrupt, illegal, and racially biased police activity from the past and created more of it for the future.
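The feedback loop she describes can be demonstrated in a few lines: if patrols are sent wherever past arrest records are highest, and patrolling generates new records, the initial skew compounds on its own. A deliberately oversimplified simulation, with all neighborhoods and numbers invented:

```python
# Deliberately oversimplified feedback loop: patrols go where historical
# arrest records are highest, and patrolling a neighborhood produces new
# records there. A starting skew from biased data compounds every round.
records = {"neighborhood_a": 30, "neighborhood_b": 10}  # biased history

for year in range(5):
    hotspot = max(records, key=records.get)  # "predict" next year's hotspot
    records[hotspot] += 5                    # patrols there create more records
    print(f"year {year}: {records}")
# neighborhood_a's lead only grows: the system keeps recreating its own past.
```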
MICHELLE: So what do you think? Should computers be making automatic decisions? Is it okay in some applications but not in others? Consider its application in finance, insurance, healthcare, employment, and law enforcement. Should algorithms amplify any content? Who should decide? As machine learning is still relatively new, I don't think anyone has perfected this yet. Here are a few ideas.
Carefully select training data: ensure the algorithm is fed a dataset that reflects a wide spectrum of humanity, and audit it for biases. Put measures in place to regulate the system once it's released to the world. If a process relies on an algorithm to make decisions automatically, provide an option for manual review (see the sketch below); an algorithm cannot understand the nuances of a complicated decision the way a human can. Sometimes AI is not an appropriate solution; it has limitations.
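One concrete way to provide that fallback is to route low-confidence automated decisions to a person. A sketch, assuming a model that returns a decision with a confidence score; the classifier stub and the threshold are illustrative, not a real API.

```python
# Human-in-the-loop sketch: decisions the model isn't confident about are
# queued for manual review instead of being applied automatically.
REVIEW_THRESHOLD = 0.9  # illustrative choice; tuned per application in practice

def classify(application):
    # Stand-in for a real model; returns (decision, confidence).
    return ("approve", 0.62)

def decide(application):
    decision, confidence = classify(application)
    if confidence < REVIEW_THRESHOLD:
        return "sent to manual review"  # a human makes the final call
    return decision

print(decide({"applicant": "example"}))  # -> sent to manual review
```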
Build diverse teams: a diverse group minimizes bias in products and better serves a broad audience. Do your due diligence before releasing a product by conducting user testing with a diverse range of participants.

AI is an emerging technology that's just beginning to develop best practices. Do you have any other ideas for making artificial intelligence ethical? For more information, check out the Algorithmic Justice League link in the teacher's notes.