Space complexity is a measure of how much working storage, or extra storage, is needed as a particular algorithm grows. In this video, let's take a look at the space complexity of our algorithms.
Glossary
Space complexity: a measure of how much working storage, or extra storage, is needed as a particular algorithm grows.
Tail call: a call, inside a function, to the same function itself as the last operation. Also called tail recursion.
At the beginning of this series, I mentioned that there were two ways of measuring the efficiency of an algorithm. The first was time complexity, or how the runtime of an algorithm grows as n grows larger. The second is space complexity. We took a pretty long route to build up this example, but now we're in a good place to discuss space complexity.

Space complexity is a measure of how much working storage, or extra storage, is needed as a particular algorithm grows. We don't think about it much these days, but every single thing we do on a computer takes up space in memory. In the early days of computing, considering memory usage was of paramount importance, because memory was limited and really expensive. These days we're spoiled; our devices are rich with memory. That's okay when we write everyday code, because most of us aren't dealing with enormously large data sets. When we write algorithms, however, we need to think about this, because we want to design our algorithms to perform as efficiently as they can as the size of the data set, n, grows really large.

Like time complexity, space complexity is measured in the worst-case scenario using big O notation. Since you are familiar with the different kinds of complexities, let's dive right into an example.

In our iterative implementation of binary search, the first one we wrote, the one that uses a while loop, let's look at what happens to our memory usage as n gets large. Let's bring up that function and say we start off with a list of ten elements. Inspecting the code, we see that our solution relies heavily on two variables, first and last. First points to the start of the list and last to the end. When we eliminate a set of values, we don't actually create a sublist. Instead, we just redefine first and last, as you see here, to point to a different section of the original list. Since the algorithm only considers the values between first and last when determining the midpoint, by redefining first and last as the algorithm proceeds, we can find the solution using just the original list. This means that for any value of n, the space complexity of the iterative version of binary search is constant, or that the iterative version of binary search takes constant space. Remember that we would write this as big O of 1.
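To make this concrete, here's a minimal sketch of the iterative version along those lines. Names like values and midpoint are illustrative; the exact code from earlier in the course may differ slightly.

def binary_search(values, target):
    # first and last are the only extra variables the algorithm needs,
    # no matter how large the list grows: constant, O(1), space.
    first = 0
    last = len(values) - 1
    while first <= last:
        midpoint = (first + last) // 2
        if values[midpoint] == target:
            return midpoint
        elif values[midpoint] < target:
            first = midpoint + 1   # discard the left half by moving first
        else:
            last = midpoint - 1    # discard the right half by moving last
    return None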

This might seem confusing, because as n grows, we need more storage to account for that larger list size. That is true, but that storage is not what space complexity cares about measuring. We care about what additional storage is needed as the algorithm runs and tries to find a solution. Assume something simple: say that for a given size of a list, represented by a value n, it takes n amount of space to store it. Then for the iterative version of binary search, regardless of how large the list is, at the start, middle, and end of the algorithm's run, the amount of storage required does not get larger than n. And this is why we consider it to run in constant space.

It's an entirely different story with the recursive version, however. In the recursive version of binary search, we don't make use of variables to keep track of which portion of the list we're working with. Instead, on every recursive function call we create a new list, a sublist, containing a subset of the values. Let's assume we have a list of size n and that, in the worst-case scenario, the target element is the last one in the list. Calling the recursive implementation of binary search on this list and target would lead to a scenario like this. The function calls itself and creates a new list that goes from the midpoint to the end of the list. Since we're discarding half the values, the size of the sublist is n/2. The function keeps calling itself, creating a new sublist that's half the size of the current one, until it arrives at a single-element list and a stopping condition.
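As a rough sketch, assuming the slicing-based version discussed here (your exact code may differ), the recursive implementation looks something like this:

def recursive_binary_search(values, target):
    # Each call slices the list, allocating a brand new sublist of half
    # the size: n/2, then n/4, and so on, down to a single element.
    if len(values) == 0:
        return False
    midpoint = len(values) // 2
    if values[midpoint] == target:
        return True
    elif values[midpoint] < target:
        return recursive_binary_search(values[midpoint + 1:], target)
    else:
        return recursive_binary_search(values[:midpoint], target)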

This pattern that you see here, where the size of the sublist is reduced by a factor of two on each execution of the algorithmic logic, well, we've seen that pattern before. Do you remember where? This is exactly how binary search works: it discards half the values every time until it finds a solution. We know that because of this pattern, the running time of binary search is logarithmic, and in fact, the space complexity of the recursive version of binary search follows the same pattern. If we start out with a memory allocation of size n that matches the list, then on each function call of recursive binary search we need to allocate additional memory of size n/2, n/4, and so on, until we have a sublist that is either empty or contains a single value. Because of this, we say that the recursive version of the binary search algorithm runs in logarithmic space, with a big O of log n.
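One way to convince yourself of this is to count how many sublists the worst case creates as n grows. This small helper is purely illustrative, not part of the course code, and it shows the count growing like log base 2 of n:

def count_sublists(n):
    # Halve n the way each recursive call does, counting how many
    # new sublists get created before we reach a single element.
    calls = 0
    while n > 1:
        n //= 2
        calls += 1
    return calls

for n in [10, 100, 1000, 1000000]:
    print(n, count_sublists(n))   # prints 3, 6, 9, and 19 calls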

Now, there's an important caveat here: this totally depends on the language. Remember how I said that a programming language like Swift can do some tricks so that recursion depth doesn't matter? The same concept applies here; if you care to read more about it, it's called tail call optimization. If you think of a function as having a head and a tail, and the recursive call to the function itself is the last line of code in the function, as it is in our case, we call this tail recursion, since it's the last part of the function that calls itself. The trick that Swift uses to reduce the amount of space, and therefore computing overhead, needed to keep track of these recursive calls is called tail call optimization, or tail call elimination. It's one of those things you'll see thrown around a lot in algorithm discussions, but it may not always be relevant to you.
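If it helps to see the distinction, here's a sketch in Python, purely illustrative since Python won't optimize either version, contrasting a non-tail call with a tail call:

def factorial(n):
    # Not a tail call: after the recursive call returns, we still have
    # to multiply by n, so each stack frame must be kept alive.
    if n <= 1:
        return 1
    return n * factorial(n - 1)

def factorial_tail(n, accumulator=1):
    # Tail call: the recursive call is the very last operation, so a
    # language with tail call optimization could reuse the stack frame.
    if n <= 1:
        return accumulator
    return factorial_tail(n - 1, n * accumulator)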

So what of all this is relevant to us? Well, Python does not implement tail call optimization, so the recursive version of binary search takes logarithmic space. If we had to choose between the two implementations, given that the time complexity, or runtime, of both versions, the iterative and the recursive, is the same, we should definitely go with the iterative implementation in Python, since it runs in constant space.

Okay, that was a lot. But with all of this, we've now established two important ways to distinguish between algorithms that handle the same task and determine which one we should use.