One class of algorithms we likely won't work with, but that is fun to explore, is algorithms with exponential runtimes.
Glossary
Polynomial runtimes  O(n^k): An algorithm is considered to have a polynomial runtime if, for a given value of n, its worst-case runtime is in the form of n raised to the k power. Examples of this are O(n^2) and O(n^3). Algorithms with polynomial runtimes are considered efficient.
Exponential runtime  O(k^n): A runtime where the number of operations increases exponentially as the size of the data set increases. Exponential runtimes are considered inefficient.
Combinatorial or factorial runtime  O(n!): Runtimes where, as the size of n increases, the number of operations increases by n!, where n! is the factorial of n.
0:00
The runtimes we've looked at so far are all called polynomial runtimes.

0:05
An algorithm is considered to have a polynomial run time if, for

0:09
a given value of n,

0:10
its worst-case runtime is in the form of n raised to the k power,

0:15
where k just means some value.

0:17
So it could be n squared, where k = 2, for a quadratic runtime, and cubed for

0:21
a cubic runtime, and so on.

0:23
All of those are in the form of n raised to some power.

0:27
Anything that is bounded by this, and what I mean by that is,

0:31
if we had a hypothetical line on our graph of n raised to the k power,

0:35
anything that falls under this line is considered to have a polynomial runtime.

0:40
Algorithms with an upper bound, or

0:42
a runtime with a big O value that is polynomial,

0:45
are considered efficient algorithms, and are likely to be used in practice.

0:50
Now, the next class of runtimes that we're going to look at are runtimes that we

0:55
don't consider efficient.

0:57
And these are called exponential runtimes.

1:00
With these runtimes, as n increases slightly,

1:03
the number of operations increases exponentially.

1:06
And as we'll see in a second,

1:08
these algorithms are far too expensive to be used.

1:12
An exponential runtime is an algorithm with a big O value of

1:16
some number raised to the nth power.
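To make that concrete, here's a small Python sketch (my own illustration, not from the video) comparing a quadratic operation count with an exponential one:

```python
# Compare polynomial (n ** 2) growth with exponential (2 ** n) growth.
for n in (1, 2, 4, 8, 16, 32):
    print(f"n = {n:>2}: n**2 = {n ** 2:>5}, 2**n = {2 ** n:>12}")
```

The quadratic column stays manageable, while the exponential column quickly dwarfs it.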

1:19
Imagine that you wanted to break into a locker that had a padlock on it.

1:23
Let's assume you forgot your code.

1:25
This lock takes a two-digit code, and each digit of the code ranges from 0 to 9.

1:30
You start by setting the dials to 0.

1:33
And then, with the first dial remaining on 0, you change the second dial to 1,

1:38
and try and open it.

1:39
If it doesn't work, you set it to 2, and try again.

1:42
You would keep doing this, and

1:43
if you still haven't succeeded with the second dial set to 9,

1:47
then you go back to that first dial, set it to 1, and start the second dial over.

1:52
The range of values you'd have to go through is 00 to 99,

1:57
which is 100 values.

1:58
This can be generalized as 10 to the 2nd power,

2:02
since there are 10 values on each dial and 2 dials.

2:07
Searching through each individual value, until you stumble on the right one,

2:11
is a strategy called brute force.

2:13
And brute force algorithms have exponential runtimes.

2:16
Here, there are two dials, so n is 2, and each dial has 10 values.

2:20
So again, we can generalize this algorithm as 10 raised to n,

2:25
where n represents the number of dials.

2:27
The reason that this algorithm is so inefficient is because with just one more

2:32
dial on the lock, the number of operations increases significantly.

2:36
With three dials, the number of combinations in the worst-case scenario,

2:41
where the correct code is the last value in the range, is 10 raised to 3, or

2:45
1,000 values.

2:46
With an additional dial, it becomes 10 raised to 4, or 10,000 values.

2:51
As n increases, the number of operations increases exponentially,

2:55
to a point where it's unsolvable in a realistic amount of time.
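The dial-by-dial search described above can be sketched in Python like this (the function name and setup are my own assumptions for illustration):

```python
from itertools import product

def brute_force_lock(secret, dials):
    """Try every code from all zeros upward until one matches.

    With 10 values per dial, the worst case is 10 ** dials attempts --
    exponential in the number of dials.
    """
    attempts = 0
    for combo in product("0123456789", repeat=dials):
        attempts += 1
        if "".join(combo) == secret:
            return attempts
    return None

# Worst case: the correct code is the last one in the range.
print(brute_force_lock("99", dials=2))    # 100 attempts, i.e. 10 ** 2
print(brute_force_lock("999", dials=3))   # 1,000 attempts, i.e. 10 ** 3
```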

3:00
Now you might think, well, any computer can crack a four-digit numerical lock.

3:05
And that's true, because n here is sufficiently small.

3:08
But this is the same principle that we use for passwords.

3:12
In a typical password field, implemented well, users are allowed to use letters of

3:16
the English alphabet, so up to 26 characters.

3:19
Numbers from 0 to 9, and

3:21
a set of special characters of which there can be around 33.

3:25
So typically, that means each character in a password can be one of 69 possible values.

3:32
This means that for a onecharacter password,

3:35
it takes 69 raised to the nth power, where n is 1,

3:38
which equals 69 operations, in the worst-case scenario,

3:42
to figure out the password.

3:43
Just increasing n to 2 increases the number of operations needed

3:49
to guess the password to 69 squared, or 4,761 operations.

3:54
Now usually, on a secure website, there isn't really a limit.

3:58
But in general, passwords are limited to around 20 characters in length.

4:03
With each character being a possible 69 values, and there being 20 characters.

4:08
The number of operations needed to guess the password,

4:11
in the worst-case scenario, is 69 raised to the 20th power,

4:15
or approximately a 6 followed by 36 zeros.

4:22
An Intel CPU with five cores can carry out roughly 65,000

4:26
million instructions per second.

4:29
That's a funny number, I know.

4:31
To crack our 20-character password in this very simplistic model, it

4:36
would take this Intel CPU on the order of 10^18 years to brute-force the password.

4:43
So while this algorithm would eventually produce a result, it is so

4:47
inefficient that it's pointless.

4:49
This is one of the reasons why people recommend you have longer passwords.

4:53
Since brute forcing is exponential in the worst case,

4:56
each character you add increases the number of combinations by an exponent.
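The arithmetic in this example is easy to reproduce. The sketch below takes the video's figures (a 69-character alphabet and roughly 65,000 million instructions per second) as assumptions in a deliberately simplistic model:

```python
ALPHABET = 69                         # letters + digits + special characters
OPS_PER_SECOND = 65_000 * 1_000_000   # "65,000 million" instructions per second
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

for length in (1, 2, 20):
    worst_case = ALPHABET ** length
    years = worst_case / OPS_PER_SECOND / SECONDS_PER_YEAR
    print(f"{length:>2} characters: {worst_case:.2e} guesses, ~{years:.1e} years")
```

Each extra character multiplies the number of guesses by 69, which is why the time to crack grows so quickly.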

5:01
The next class of exponential algorithms is best highlighted

5:05
by a popular problem known as the traveling salesman problem.

5:09
The problem statement goes like this.

5:11
Given a list of cities, and the distance between each pair of cities.

5:15
What is the shortest possible route that visits each city, and

5:19
then returns to the origin city?

5:21
This seems like a simple question.

5:23
But let's start with a simple case, three cities, A, B, and C.

5:28
To figure out what the shortest route is,

5:30
we need to come up with all the possible routes.

5:33
With three cities, we have 6 routes, in theory at least.

5:37
Some of these routes can be discarded.

5:39
Because ABC is the same as CBA, but in the opposite direction.

5:43
But as we know, sometimes

5:45
going from A to C through B may take a different route than going from C to A through B.

5:50
So we'll stick to the six routes, and from there, we could determine the shortest,

5:54
no big deal.

5:55
Now, if we increase this to four cities, we jump to 24 combinations.

6:00
The mathematical relationship that defines this is called a factorial, and

6:05
is written out as n followed by an exclamation point.

6:08
Factorials are basically n times (n − 1) times (n − 2), and so on, until you reach the number 1.

6:15
So for example, the factorial of 3 is 3 x 2 x 1, which is 6.

6:20
Which is the number of combinations we came up with for three cities.

6:24
The factorial of 4 is 4 x 3 x 2 x 1, or 24,

6:28
which is the number of combinations we arrived at with four cities.
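In Python, the route counts match the factorials directly, and a brute-force solver simply checks every permutation (the `shortest_route` helper and the distance-dictionary format are my own assumptions for illustration):

```python
from itertools import permutations
from math import factorial

print(len(list(permutations("ABC"))))   # 6 routes for three cities
print(factorial(3))                     # 3 x 2 x 1 = 6
print(factorial(4))                     # 4 x 3 x 2 x 1 = 24

def shortest_route(cities, distances):
    """Brute-force TSP: check all n! orderings of the cities.

    `distances` maps a (city, city) pair to the distance between them.
    """
    def total(route):
        # Include the final leg back to the origin city.
        legs = zip(route, route[1:] + (route[0],))
        return sum(distances[a, b] for a, b in legs)
    return min(permutations(cities), key=total)
```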

6:33
In solving the traveling salesman problem,

6:36
a brute-force algorithm will have a factorial runtime.

6:40
Or a combinatorial runtime, as it's also called.

6:44
At low values of n, algorithms with a factorial runtime may be used.

6:48
But with an n value of, say, 200,

6:50
it would take longer than humans have been alive to solve the problem.

6:55
For the sake of completeness, let's plot a combinatorial runtime on our graph, so

7:00
that we can compare.

7:01
An algorithm, such as one that solves the traveling salesman problem,

7:06
has a worstcase runtime of big O of n factorial.

7:09
Studying exponential runtimes like this is useful for two reasons.

7:14
First, in studying how to make such algorithms efficient,

7:17
we develop strategies that are useful across the board.

7:20
And can potentially be used to make existing algorithms even more efficient.

7:25
Second, it's important to be aware of problems that take a long time to solve.

7:29
Knowing right off the bat that a problem is somewhat unsolvable in a realistic

7:34
time means you can focus your efforts on other aspects of the problem.

7:38
As beginners, though, we're going to steer clear of all this, and

7:41
focus our efforts on algorithms with polynomial runtimes,

7:45
since we're much more likely to work with, and learn about, such algorithms.

7:49
Now that we know some of the common complexities,

7:52
in the next video, let's talk about how we determine the complexity of an algorithm.

7:56
Because there are some nuances.