With different algorithms performing different kinds of operations, how do we figure out which algorithms are more efficient than others? In this video, let's talk about Big O notation.
Glossary
Order of growth: a measure of how much the time taken to execute operations increases as the input size increases
Big O: a theoretical definition of the complexity of an algorithm as a function of the size
Resources
Runtime Data
Listed below are the data points used to plot the runtime graphs seen in the video.
Linear Search
n        Worst case runtime
10       10
100      100
1000     1000
10000    10000
100000   100000
1000000  1000000
Binary Search
n        Worst case runtime
10       4
100      7
1000     10
10000    14
100000   17
1000000  20
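The worst-case figures above follow a simple pattern: linear search may take up to n tries, while binary search needs about log2(n) tries, rounded up. As a hedged sketch (not part of the course materials; function names are mine), here is how you might reproduce both tables in Python:

```python
import math

def linear_worst_case(n):
    # Linear search may have to check every one of the n values.
    return n

def binary_worst_case(n):
    # Binary search halves the remaining range on each try, so the
    # worst case is roughly log2(n) tries, rounded up.
    return math.ceil(math.log2(n))

for n in [10, 100, 1000, 10000, 100000, 1000000]:
    print(n, linear_worst_case(n), binary_worst_case(n))
```

Running this prints the same pairs as the tables above, for example 20 tries for binary search when n is 1,000,000.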

0:00
[MUSIC]

0:04
We have a vague understanding that Britney's approach is better in most cases

0:08
but just like with linear search, it helps to visualize this.

0:12
Much like we did with linear search when determining the efficiency of

0:16
an algorithm, and

0:17
remember, we're still only looking at efficiency in terms of time.

0:20
Time complexity, as it's called.

0:22
We always want to evaluate how the algorithm performs in the worst case

0:26
scenario.

0:27
Now you might be thinking that that doesn't seem fair, because given a series

0:31
of data, if the target value we're searching for

0:34
is somewhere near the front of the list then linear

0:37
search may perform just as well if not slightly better than binary search.

0:41
And that is totally true.

0:43
Remember a crucial part of learning algorithms is understanding what works

0:47
better in a given context.

0:49
When measuring efficiency though, we always use the worst case scenario as

0:54
a benchmark, because remember, it can never perform worse than the worst case.

0:59
Let's plot these values on the graph we started earlier, with the number of tries

1:04
or the runtime of the algorithm on the y-axis, and the maximum number of values

1:09
in the series, or n on the horizontal axis to represent the worst case scenario.

1:14
We have two data points.

1:15
When n equals 10, Britney took four tries using binary search.

1:19
And when n equals 100, it took seven tries, but

1:22
even side by side, these data points are sort of meaningless.

1:26
Remember that,

1:27
while there's quite a difference between the runtime of linear search and

1:31
binary search at an n value of 100, for a computer, that shouldn't matter.

1:36
What we should check out is how the algorithm performs at levels of n that

1:40
might actually slow a computer down.

1:43
As n grows larger and larger, how do these algorithms compare to one another?

1:48
Let's add that to the graph.

1:49
[SOUND] Okay, now a picture starts to emerge.

1:52
As n gets really large,

1:54
the performance of these two algorithms differs significantly.

1:58
The difference is kind of staggering, actually.

2:01
Even with the simple game, we saw that binary search was better, but now,

2:06
we have a much more complete idea of how much better.

2:09
For example, when n is 1,000, the runtime of linear search measured

2:14
by the number of operations, or tries, is also 1,000.

2:18
For binary search, it takes just 10 operations.

2:21
Now let's look at what happens when we increase n by factor of 10.

2:25
At 10,000,

2:26
linear search takes 10,000 operations while binary search takes 14 operations.

2:32
After an increase by a factor of 10, binary search only needs four more

2:37
operations to find the value.

2:39
If we increase n by a factor of ten once more,

2:42
to an n value of 100,000, binary search takes only 17 operations.

2:48
It is blazing fast.
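To see where counts like 14 and 17 come from, here is a hedged sketch of binary search instrumented to count its tries (plain Python, not from the video; the function name and return shape are mine):

```python
def binary_search(values, target):
    """Search a sorted list for target, returning (index, tries taken)."""
    low, high = 0, len(values) - 1
    tries = 0
    while low <= high:
        tries += 1
        mid = (low + high) // 2
        if values[mid] == target:
            return mid, tries
        elif values[mid] < target:
            low = mid + 1   # discard the lower half
        else:
            high = mid - 1  # discard the upper half
    return -1, tries

# Worst case for n = 10,000: a target that isn't in the list.
index, tries = binary_search(list(range(10000)), 10000)
print(index, tries)  # -1 14
```

Each pass through the loop discards half of the remaining values, which is why the try count grows so slowly as n grows.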

2:50
What we've done here is plotted on a graph how the algorithm performs

2:55
as the input set it is working on increases.

2:58
In other words, we've plotted the growth rate

3:01
of the algorithm also known as the order of growth.

3:04
Different algorithms grow at different rates, and by evaluating their growth

3:09
rates, we're getting a much better picture of their performance.

3:12
Because we know how the algorithm will hold up as n grows larger.

3:17
This is so important.

3:19
In fact, it is the standard way of evaluating an algorithm, and

3:23
brings us to a concept called big O.

3:26
You might have heard this word thrown about.

3:28
And if you found it confusing, don't worry.

3:30
We've already built a definition in the past few videos.

3:34
We just need to bring it all together.

3:36
Let's start with a common statement you'll see in studies on algorithms.

3:40
Big O is a theoretical definition of the complexity of an algorithm

3:45
as a function of the size.

3:47
Wow, what a mouthful.

3:48
This sounds really intimidating but it's really not.

3:51
Let's break it down.

3:52
Big O is a notation used to describe complexity.

3:56
And what I mean by notation is that it simplifies everything we've

4:01
talked about down into a single variable.

4:04
An example of complexity written in terms of big O looks like this.

4:09
As you can see, it starts with an upper case letter O,

4:12
that's why we call it big O.

4:14
It's literally a big O.

4:16
The O comes from order of magnitude of complexity.

4:20
So that's where we get the big O from.

4:22
Now, complexity here refers to the exercise we have been carrying out in measuring

4:26
efficiency.

4:27
If it takes Britney 4 tries when n is 10,

4:30
how long does the algorithm take when n is 10 million?

4:34
When we use big O for this, the variable used which we'll get to,

4:38
distills that information down so that by reading the variable, you get a big

4:42
picture view without having to run through data points and graphs like we did.

4:47
It's important to remember that complexity is relative.

4:51
When we evaluate the complexity of the binary search algorithm,

4:55
we're doing it relative to other search algorithms, not all algorithms.

5:00
Big O is a useful notation for understanding both time and

5:03
space complexity.

5:04
But only when comparing amongst algorithms that solve the same problem.

5:09
The last bit in the definition of big O is a function of the size, and

5:13
all this means is that big O measures complexity as the input size grows.

5:18
Because it's not important to understand how an algorithm performs in

5:22
a single data set.

5:24
But in all possible data sets.

5:26
[SOUND] You will also see big O referred to as the upper bound of the algorithm.

5:31
And what that means is that big O measures how the algorithm performs in

5:35
the worst case scenario.

5:37
So that's all big O is, nothing special.

5:40
It's just a notation that condenses the data points and

5:44
graphs we've built up down to one variable.

5:46
Okay, so what do these variables look like?

5:49
For John's strategy, linear search, we say that it has a time complexity of big O and

5:55
then n, so that's again big O with an n inside parentheses.

5:59
For Britney's strategy, binary search, we say that it has a time complexity of big

6:04
O of log n, that's big O with something called a log and an n inside parentheses.

6:09
Now don't worry if you don't understand that.

6:10
We'll go into that in more detail later on in the course.
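For contrast with the O(log n) behavior of binary search, here is a hedged sketch of John's linear search instrumented the same way (plain Python; names are mine, not from the course). In the worst case its try count equals n, which is why we write its time complexity as O(n):

```python
def linear_search(values, target):
    # Check each value front to back, counting every comparison.
    tries = 0
    for index, value in enumerate(values):
        tries += 1
        if value == target:
            return index, tries
    return -1, tries  # target absent: tries == len(values), the worst case

# With n = 1,000 and a missing target, linear search makes 1,000 tries.
print(linear_search(list(range(1000)), -1))  # (-1, 1000)
```

Comparing this with the binary search counts above is exactly the growth-rate comparison the graphs in the video show.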

6:14
Each of these has a special meaning but

6:16
it helps to work through all of them to get a big picture view, so

6:20
over the next few videos, let's examine what are called common complexities or

6:24
common values of big O that you will run into and should internalize.