As you learn about algorithms you will run (pun intended) into some common runtimes that algorithms exhibit. In this video we'll look at two of them: constant and logarithmic runtimes.
Glossary
Constant Time, O(1): The runtime of the algorithm is independent of the size of the data set. Whether n is 1 or 1 million, it takes the same amount of time to execute the algorithm.
Logarithmic Time, O(log n): The runtime of the algorithm increases logarithmically as the size of the data set increases.
0:00
In our discussions of complexity, we made one assumption,

0:03
that the algorithm as a whole had a single measure of complexity.

0:07
That isn't true and we'll get at how we arrive at these measures for

0:11
the entire algorithm at the end of this exercise.

0:13
But each step in the algorithm has its own space and time complexity.

0:19
In linear search, for example, there are multiple steps and

0:22
the algorithm goes like this.

0:24
Start at the beginning of the list or range of values.

0:27
Compare the current value to the target.

0:29
If the current value is the target value that we're looking for, we're done.

0:33
If it's not, we'll move on sequentially to the next value in the list and

0:37
repeat step two.

0:39
If we reach the end of the list without finding the target, the target value is not in the list.
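Those steps can be sketched as a short function. This is a Python sketch of my own (the course's language isn't shown in this excerpt, and the function name and return convention are mine):

```python
def linear_search(values, target):
    """Return the index of target in values, or None if it's absent."""
    for index, value in enumerate(values):
        # Step 2: compare the current value to the target.
        if value == target:
            return index  # found it, we're done
        # Not a match: move on to the next value and repeat step 2.
    # We reached the end of the list, so the target is not in it.
    return None
```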

0:42
Let's go back to step two for a second.

0:45
Comparing the current value to the target.

0:47
Does the size of the dataset matter for this step?

0:51
When we're at step two we're already at that position in the list and

0:56
all we're doing is reading the value to make a comparison.

0:59
Reading the value is a single operation.

1:02
And if we were to plot it on a graph of runtime (number of operations) against n,

1:06
it looks like this.

1:08
A straight line that takes constant time regardless of the size of n.

1:13
Since this takes the same amount of time in any given case,

1:17
we say that the runtime is constant time, it doesn't change.

1:21
In big O notation, we represent this as big O, with a 1 inside parentheses.

1:27
Now, when I first started learning all this,

1:30
I was really confused as to how to read this, even if it was in my own head.

1:34
Should I say big O of 1?

1:36
When you see this written, you're going to read this as constant time.

1:40
So reading a value in a list is a constant time operation.
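To make that concrete, here's a small illustrative sketch (Python; the variable names are mine). Reading by index is one operation whether the list holds ten items or a million:

```python
small = list(range(10))
large = list(range(1_000_000))

# Indexing is a single read operation; the list's size doesn't matter.
first = small[5]         # constant time
second = large[999_999]  # also constant time
```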

1:44
This is the most ideal case when it comes to runtimes,

1:47
because input size does not matter.

1:49
And we know that regardless of the size of n,

1:51
the algorithm runtime will remain the same.

1:54
The next step up in complexity, so to speak,

1:57
is the situation we encountered with the binary search algorithm.

2:02
Traditionally, explaining the time complexity of binary search involves math.

2:06
I'm going to try to do it both with and without.

2:10
When we played the game using binary search,

2:13
we noticed that with every turn we were able to discard half of the data.

2:17
But there's another pattern that emerges that we didn't explore.

2:21
Let's say n equals 10.

2:23
How long does it take to find an item at the tenth position of the list?

2:28
We can write this out.

2:29
So we go from 10 to 5, then 8, then 9, and then to 10.

2:33
Here it takes us four tries to cut down the list to just one element and

2:38
find the value we're looking for.

2:40
Let's double the value of n to 20 and see how long it takes for

2:44
us to find an item at the 20th position.

2:46
So we start at 20, and then we pick 10.

2:48
From there we go to 15, 17, 19, and finally 20.

2:53
So here it takes us five tries.

2:55
Okay, let's double it again so that n is 40.

2:58
And we try to find the item in the 40th position.

3:02
So when we start at 40, the first midpoint we're going to pick is 20.

3:05
From there we go to 30, then 35, 37, 39, and then 40.

3:11
Notice that every time we double the value of n, the number of operations

3:17
it takes to reduce the list down to a single element only increases by one.
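That doubling pattern can be verified with a quick sketch (Python, my own code; it counts worst-case comparisons by putting the target at the last position):

```python
def binary_search_steps(n):
    """Count the comparisons binary search makes, worst case, on 1..n."""
    low, high = 1, n
    target = n  # worst case: the target sits at the last position
    steps = 0
    while low <= high:
        steps += 1
        mid = (low + high) // 2
        if mid == target:
            break
        elif mid < target:
            low = mid + 1   # discard the lower half
        else:
            high = mid - 1  # discard the upper half
    return steps

# Doubling n only adds one more step:
for n in (10, 20, 40, 80):
    print(n, binary_search_steps(n))
```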

3:22
There is a mathematical relationship to this pattern,

3:26
and it's called a logarithm of n.

3:28
You don't really have to know what logarithms truly are.

3:31
But I know that some of you like underlying explainers, so

3:34
I'll give you a quick one.

3:35
If you've taken algebra classes, you may have learned about exponents.

3:39
Here's a quick refresher.

3:41
2 times 1 = 2.

3:43
Now this can be written as 2 raised to the first power, because it is our base case.

3:48
2 times 1 is 2, and 2 times 2 is 4.

3:52
This can be written as 2 raised to the second power because we're multiplying

3:57
2 twice.

3:58
First we multiply 2 times 1.

4:00
Then the result of that times 2.

4:02
2 times 2 times 2 is 8 and we can write this as 2 raised to the third

4:07
power because we're multiplying 2 three times.

4:11
In 2 raised to 2 and 2 raised to 3, the 2 and

4:14
3 there are called exponents and they define how the number grows.

4:19
With 2 raised to 3, we start with the base value and multiply by it three times.

4:25
The inverse of an exponent is called a logarithm.

4:29
So if I say log to the base 2 of 8 = 3,

4:32
I'm basically saying the opposite of an exponent.

4:36
Instead of asking how many times I have to multiply the base,

4:40
I'm asking, how many times do I have to divide 8 by 2 to get the value 1?

4:45
This takes three operations.

4:47
What about the result of log to the base 2 of 16?

4:51
That evaluates to 4.
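That "divide until you reach 1" idea can be written out directly. A minimal Python sketch, assuming n is a power of 2 (the name is mine):

```python
def log2_by_division(n):
    """How many times must n be halved to reach 1? (assumes n is a power of 2)"""
    count = 0
    while n > 1:
        n //= 2      # divide by 2, just like discarding half the list
        count += 1
    return count
```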

4:53
So why does any of this matter?

4:54
Notice that this is sort of how binary search works.

4:58
Log to the base 2 of 16 = 4.

5:02
If n was 16, how many tries does it take to get to that last element?

5:06
Well we start in the middle at 8, that's too low so we move to 12.

5:11
Then we move to 14, then to 15, and then to 16,

5:15
which is five tries, or log to the base 2 of 16 + 1.

5:20
In general, for a given value of n,

5:23
the number of tries it takes to find a value in the worst case scenario is log to the base 2 of n, plus 1.
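A sketch of that formula using the standard library's math.log2 (the flooring is my addition, so the formula also covers values of n that aren't exact powers of 2):

```python
import math

def worst_case_tries(n):
    # Worst-case comparisons for binary search on n sorted items:
    # floor(log2(n)) + 1
    return math.floor(math.log2(n)) + 1
```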

5:29
And because this pattern is overall a logarithmic pattern,

5:33
we say that the runtime of such algorithms is logarithmic.

5:37
If we plot these data points on our graph, a logarithmic runtime looks like this.

5:42
In big O notation, we represent a logarithmic runtime as O(log n),

5:48
which is written as big O with log n inside parentheses, or

5:53
sometimes as lg n inside parentheses.

5:57
When you see this, read it as logarithmic time.

6:00
As you can see on the graph as n grows really large,

6:04
the number of operations grows very slowly and nearly flattens out.

6:09
Since this line is below the line for a linear runtime,

6:13
which we'll look at in a second, you might often hear algorithms with

6:17
logarithmic runtimes being called sublinear.

6:20
Logarithmic or sublinear runtimes are preferred to linear because they're more

6:25
efficient, but in practice linear search has its own set of advantages,

6:29
which we'll take a look at in the next video.