With different algorithms performing different kinds of operations, how do we figure out which algorithms are more efficient than others? In this video, let's talk about Big O notation.
Glossary
- Order of growth - a measure of how much the time taken to execute operations increases as the input size increases
- Big O - a theoretical definition of the complexity of an algorithm as a function of the size of the input
Resources
Runtime Data
Listed below are the data points used to plot the runtime graphs seen in the video.
Linear Search
| n | Worst-case runtime (operations) |
|---|---|
| 10 | 10 |
| 100 | 100 |
| 1000 | 1000 |
| 10000 | 10000 |
| 100000 | 100000 |
| 1000000 | 1000000 |
Binary Search
| n | Worst-case runtime (operations) |
|---|---|
| 10 | 4 |
| 100 | 7 |
| 1000 | 10 |
| 10000 | 14 |
| 100000 | 17 |
| 1000000 | 20 |
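Both columns follow a simple pattern: in the worst case, linear search checks all n values, while binary search needs at most floor(log2(n)) + 1 tries. Here is a minimal Python sketch (my own illustration, not part of the course files) that reproduces the table values:

```python
import math

# Worst-case operation counts for both strategies.
# Linear search may have to check every one of the n values;
# binary search halves the remaining range on each try,
# so it needs at most floor(log2(n)) + 1 tries.
for n in [10, 100, 1000, 10000, 100000, 1000000]:
    linear_worst = n
    binary_worst = math.floor(math.log2(n)) + 1
    print(f"n = {n:>7}: linear = {linear_worst:>7}, binary = {binary_worst:>2}")
```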
[MUSIC]
0:00
We have a vague understanding that
Britney's approach is better in most cases
0:04
but just like with linear search,
it helps to visualize this.
0:08
Much like we did with linear search,
when determining the efficiency of
0:12
an algorithm (and
0:16
remember, we're still only looking
at efficiency in terms of time,
0:17
time complexity as it's called),
0:20
we always want to evaluate how
the algorithm performs in the worst case
0:22
scenario.
0:26
Now you might be thinking that doesn't
seem fair because, given a series
0:27
of data,
if the target value we're searching for
0:31
is somewhere near the front
of the list, then linear
0:34
search may perform just as well, if not
slightly better, than binary search.
0:37
And that is totally true.
0:41
Remember a crucial part of learning
algorithms is understanding what works
0:43
better in a given context.
0:47
When measuring efficiency though,
we always use the worst case scenario as
0:49
a benchmark, because remember, it can
never perform worse than the worst case.
0:54
Let's plot these values on the graph we
started earlier, with the number of tries
0:59
or the runtime of the algorithm on the
y-axis, and the maximum number of values
1:04
in the series, or n, on the horizontal axis
to represent the worst case scenario.
1:09
We have two data points.
1:14
When n equals 10, Britney took
four tries using binary search.
1:15
And when n equals 100,
it took seven tries, but
1:19
even side by side,
these data points are sort of meaningless.
1:22
Remember that,
1:26
while there's quite a difference between
the runtime of linear search and
1:27
binary search at an n value of 100,
for a computer, that shouldn't matter.
1:31
What we should check out is how
the algorithm performs at levels of n that
1:36
might actually slow a computer down.
1:40
As n grows larger and larger, how do
these algorithms compare to one another?
1:43
Let's add that to the graph.
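If you'd like to recreate the graph from the video yourself, a quick sketch using matplotlib could look like the following (assuming matplotlib is installed; the styling is mine, not the video's):

```python
import math
import matplotlib.pyplot as plt

ns = [10, 100, 1000, 10000, 100000, 1000000]
linear = ns                                          # worst case: n tries
binary = [math.floor(math.log2(n)) + 1 for n in ns]  # worst case: ~log2(n) tries

plt.plot(ns, linear, label="linear search")
plt.plot(ns, binary, label="binary search")
plt.xlabel("n (maximum number of values)")
plt.ylabel("worst-case runtime (tries)")
plt.legend()
plt.show()
```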
1:48
[SOUND] Okay,
now a picture starts to emerge.
1:49
As n gets really large,
1:52
the performance of these two
algorithms differs significantly.
1:54
The difference is kind of staggering,
actually.
1:58
Even with the simple game, we saw that
binary search was better, but now,
2:01
we have a much more complete
idea of how much better.
2:06
For example, when n is 1,000,
the runtime of linear search measured
2:09
by the number of operations, or
turns, is also 1,000.
2:14
For binary search,
it takes just 10 operations.
2:18
Now let's look at what happens when
we increase n by a factor of 10.
2:21
At 10,000,
2:25
linear search takes 10,000 operations
while binary search takes 14 operations.
2:26
After increasing n by a factor of 10,
binary search only needs four more
2:32
operations to find the value.
2:37
If we increase n by
a factor of ten once more,
2:39
to an n value of 100,000,
binary search takes only 17 operations.
2:42
It is blazing fast.
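You can verify those counts yourself. Below is one possible sketch of the two strategies with a try counter added; the function and variable names are mine, and binary search assumes the values are sorted:

```python
def linear_search(values, target):
    # Check each value front to back, counting tries.
    tries = 0
    for value in values:
        tries += 1
        if value == target:
            return tries
    return tries

def binary_search(values, target):
    # Halve the sorted range on each try, counting tries.
    low, high, tries = 0, len(values) - 1, 0
    while low <= high:
        tries += 1
        mid = (low + high) // 2
        if values[mid] == target:
            return tries
        elif values[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return tries

values = list(range(1, 10001))       # n = 10,000, sorted
print(linear_search(values, 10000))  # worst case: 10,000 tries
print(binary_search(values, 10000))  # 14 tries
```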
2:48
What we've done here is plot on
a graph how the algorithm performs
2:50
as the input set it is
working on increases.
2:55
In other words,
we've plotted the growth rate
2:58
of the algorithm also known
as the order of growth.
3:01
Different algorithms grow at different
rates, and by evaluating their growth
3:04
rates, we're getting a much better
picture of their performance,
3:09
because we know how the algorithm
will hold up as n grows larger.
3:12
This is so important.
3:17
In fact, it is the standard way
of evaluating an algorithm, and
3:19
brings us to a concept called big O.
3:23
You might have heard
this word thrown about.
3:26
And if you found it confusing,
don't worry.
3:28
We've already built a definition
in the past few videos.
3:30
We just need to bring it all together.
3:34
Let's start with a common statement
you'll see in studies on algorithms.
3:36
Big O is a theoretical definition
of the complexity of an algorithm
3:40
as a function of the size.
3:45
Wow, what a mouthful.
3:47
This sounds really intimidating but
it's really not.
3:48
Let's break it down.
3:51
Big O is a notation used
to describe complexity.
3:52
And what I mean by notation is that
it simplifies everything we've
3:56
talked about down into a single variable.
4:01
An example of complexity written
in terms of big O looks like this.
4:04
As you can see,
it starts with an upper case letter O,
4:09
that's why we call it big O.
4:12
It's literally a big O.
4:14
The O comes from order of
magnitude of complexity.
4:16
So that's where we get the big O from.
4:20
Now, complexity here refers to the exercise
we have been carrying out in measuring
4:22
efficiency.
4:26
If it takes Britney 4 tries when n is 10,
4:27
how long does the algorithm
take when n is 10 million?
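The video leaves that question hanging, but the pattern from the runtime data above answers it. Assuming the same floor(log2(n)) + 1 worst case, a quick check:

```python
import math

n = 10_000_000
print(math.floor(math.log2(n)) + 1)  # 24 tries in the worst case
```

So even at an n of 10 million, binary search needs only about 24 tries.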
4:30
When we use big O for this,
the variable used, which we'll get to,
4:34
distills that information down so that
by reading the variable, you get a big
4:38
picture view without having to run through
data points and graphs just like we did.
4:42
It's important to remember
that complexity is relative.
4:47
When we evaluate the complexity
of the binary search algorithm,
4:51
we're doing it relative to other
search algorithms, not all algorithms.
4:55
Big O is a useful notation for
understanding both time and
5:00
space complexity.
5:03
But only when comparing amongst
algorithms that solve the same problem.
5:04
The last bit in the definition of
big O is a function of the size, and
5:09
all this means is that big O measures
complexity as the input size grows.
5:13
That's because what's important is not
how an algorithm performs on
5:18
a single data set,
5:22
but on all possible data sets.
5:24
[SOUND] You will also see big O referred
to as the upper bound of the algorithm.
5:26
And what that means is that big O
measures how the algorithm performs in
5:31
the worst case scenario.
5:35
So that's all big O is, nothing special.
5:37
It's just a notation that condenses
the data points and graphs
5:40
we've built up down to one variable.
5:44
Okay, so
what do these variables look like?
5:46
For John's strategy, linear search, we say
that it has a time complexity of big O of n,
5:49
so that's big O
with an n inside parentheses.
5:55
For Britney's strategy, binary search, we
say that it has a time complexity of big
5:59
O of log n, that's big O with something
called a log and an n inside parentheses.
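To preview what those two expressions mean in practice, here is a tiny sketch (my own illustration, not from the video) comparing how O(n) and O(log n) grow:

```python
import math

# O(n): worst-case work grows in step with n.
# O(log n): worst-case work grows with log2(n).
for n in [100, 10_000, 1_000_000]:
    print(f"n = {n:>9}:  O(n) -> {n} ops,  O(log n) -> {math.floor(math.log2(n)) + 1} ops")
```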
6:04
Now don't worry if you
don't understand that.
6:09
We'll go into that in more
detail later on in the course.
6:10
Each of these has a special meaning but
6:14
it helps to work through all of
them to get a big picture view, so
6:16
over the next few videos, let's examine
what are called common complexities or
6:20
common values of big O that you will
run into and should internalize.
6:24