
One class of algorithms that we likely won't work with, but that is fun to explore, is the class of algorithms with exponential runtimes.

#### Glossary

*Polynomial runtime - O(n^k)*: An algorithm is considered to have a polynomial runtime if, for a given value of `n`, its worst-case runtime is in the form of `n` raised to the `k` power. Examples of this are `O(n^2)` and `O(n^3)`. Algorithms with polynomial runtimes are considered efficient.

*Exponential runtime - O(k^n)*: A runtime where the number of operations increases exponentially as the size of the data set increases. Exponential runtimes are considered inefficient.

*Combinatorial or factorial runtime - O(n!)*: A runtime where, as the size of `n` increases, the number of operations grows by `n!`, where `n!` is the factorial of `n`.
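To get a feel for how quickly these classes diverge, here is a small Python sketch that tabulates the operation counts each runtime class implies for the same input size `n` (using `k = 2` for both the polynomial and exponential cases):

```python
import math

# Compare how many operations each runtime class implies
# for the same input size n.
for n in (1, 5, 10, 15):
    polynomial = n ** 2             # O(n^k) with k = 2
    exponential = 2 ** n            # O(k^n) with k = 2
    factorial = math.factorial(n)   # O(n!)
    print(f"n={n:2d}  n^2={polynomial:>4}  2^n={exponential:>6}  n!={factorial:>15}")
```

Even at `n = 15`, the factorial column has already passed a trillion operations while the polynomial column is still in the hundreds.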

#### Video Transcript

The runtimes we've looked at so far are all called polynomial runtimes. An algorithm is considered to have a polynomial runtime if, for a given value of n, its worst-case runtime is in the form of n raised to the k power, where k just means some value. So it could be n squared, where k = 2, for a quadratic runtime, cubed for a cubic runtime, and so on. All of those are in the form of n raised to some power. Anything that is bounded by this, and what I mean by that is, if we drew a hypothetical line on our graph of n raised to the k power, anything that falls under that line is considered to have a polynomial runtime. Algorithms with an upper bound, or a big O value, that is polynomial are considered efficient algorithms and are likely to be used in practice.

The next class of runtimes we're going to look at are runtimes we don't consider efficient. These are called exponential runtimes. With these runtimes, as n increases slightly, the number of operations increases exponentially, and as we'll see in a second, these algorithms are far too expensive to be used. An exponential runtime is an algorithm with a big O value of some number raised to the nth power.

Imagine that you wanted to break into a locker that had a padlock on it. Let's assume you forgot your code. This lock takes a two-digit code, and each digit of the code ranges from 0 to 9. You start by setting both dials to 0. Then, with the first dial remaining on 0, you change the second dial to 1 and try to open it. If it doesn't work, you set it to 2 and try again. You keep doing this, and if you still haven't succeeded with the second dial set to 9, you go back to the first dial, set it to 1, and start the second dial over.
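The dial-by-dial search just described can be sketched in Python; the `secret` code here is a made-up stand-in value:

```python
# Brute-force a 2-dial padlock where each dial runs 0-9.
# The secret code is a made-up value for illustration.
secret = (4, 2)

attempts = 0
found = None
for first in range(10):        # first dial: 0-9
    for second in range(10):   # second dial: 0-9
        attempts += 1
        if (first, second) == secret:
            found = (first, second)
            break
    if found:
        break

print(f"Cracked {found} in {attempts} attempts (worst case: {10 ** 2})")
```

The nested loops mirror the two dials: the worst case is every combination, 10 raised to the number of dials.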
The range of values you'd have to go through is 0-0 to 9-9, which is 100 values. This can be generalized as 10 to the 2nd power, since there are 10 values on each dial, raised to 2 dials. Searching each individual value until you stumble on the right one is a strategy called brute force, and brute-force algorithms have exponential runtimes. Here there are two dials, so n is 2, and each dial has 10 values, so again, we can generalize this algorithm as 10 raised to n, where n represents the number of dials.

The reason this algorithm is so inefficient is that with just one more dial on the lock, the number of operations increases significantly. With three dials, the number of combinations in the worst-case scenario, where the correct code is the last value in the range, is 10 raised to 3, or 1,000 values. With an additional wheel, it becomes 10 raised to 4, or 10,000 values. As n increases, the number of operations increases exponentially, to a point where the problem is unsolvable in a realistic amount of time.

Now, you might think that any computer can crack a four-digit numerical lock, and that's true, because n here is sufficiently small. But this is the same principle we use for passwords. In a typical password field, implemented well, users are allowed to use the letters of the English alphabet, so up to 26 characters; numbers from 0 to 9; and a set of special characters, of which there can be around 33. So typically, each character in a password can be 1 of about 69 values. This means that for a one-character password, it takes 69 to the nth power, with n being 1, which equals 69 operations in the worst-case scenario, to figure out the password. Just increasing n to 2 increases the number of operations needed to guess the password to 69 squared, or 4,761 operations.
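The 69-value alphabet arithmetic above can be checked directly; the character counts are the ones given in the video:

```python
letters = 26           # English alphabet
digits = 10            # 0-9
specials = 33          # approximate count of special characters
alphabet = letters + digits + specials  # 69 possible values per character

# Worst-case number of guesses for passwords of length 1 and 2.
for length in (1, 2):
    print(f"length {length}: {alphabet ** length} operations")
```

Each extra character multiplies the worst case by another factor of 69, which is exactly the exponential `O(k^n)` pattern from the glossary.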
Now, usually on a secure website there isn't really a hard limit, but in general, passwords are limited to around 20 characters in length. With each character being one of 69 possible values, and there being 20 characters, the number of operations needed to guess the password in the worst-case scenario is 69 raised to the 20th power, or approximately a 6 followed by 36 zeros. An Intel CPU with five cores can carry out roughly 65,000 million instructions per second; that's a funny number, I know. To crack our 20-character password in this very simplistic model, it would take this Intel CPU 2 raised to the 20th power years to brute-force the password. So while this algorithm would eventually produce a result, it is so inefficient that it's pointless. This is one of the reasons people recommend longer passwords: since brute forcing is exponential in the worst case, each character you add increases the number of combinations by another power.

The next class of exponential algorithms is best highlighted by a popular problem known as the traveling salesman. The problem statement goes like this: given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city and then returns to the origin city? This seems like a simple question, but let's start with a simple case: three cities, A, B, and C. To figure out what the shortest route is, we need to come up with all the possible routes. With three cities, we have 6 routes, in theory at least. Some of these routes could be discarded, because A-B-C is the same as C-B-A, just in the opposite direction. But as we know, sometimes going from A to C through B may take a different route than going from C to A through B.
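Enumerating every possible route for three cities, as described above, is straightforward with Python's `itertools.permutations`; the city names are just labels:

```python
from itertools import permutations

cities = ["A", "B", "C"]

# Every ordering of the cities is a candidate route.
routes = list(permutations(cities))
for route in routes:
    print("-".join(route))
print(f"{len(routes)} routes for {len(cities)} cities")
```

The six orderings printed here are exactly the six routes the video counts for three cities.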
So we'll stick with the six routes, and from there we could determine the shortest; no big deal. Now, if we increase this to four cities, we jump to 24 combinations. The mathematical relationship that defines this is called a factorial, and it is written out as n followed by an exclamation point. A factorial is basically n times (n - 1), repeated until you reach the number 1. So, for example, the factorial of 3 is 3 x 2 x 1, which is 6, the number of combinations we came up with for three cities. The factorial of 4 is 4 x 3 x 2 x 1, or 24, which is the number of combinations we arrived at with four cities. In solving the traveling salesman problem, the most efficient algorithm will have a factorial runtime, or a combinatorial runtime, as it's also called. At low values of n, algorithms with a factorial runtime may be used, but with an n value of, say, 200, it would take longer than humans have been alive to solve the problem. For the sake of completeness, let's plot a combinatorial runtime on our graph so that we can compare. An algorithm such as one that solves the traveling salesman problem has a worst-case runtime of big O of n factorial.

Studying exponential runtimes like these is useful for two reasons. First, in studying how to make such algorithms efficient, we develop strategies that are useful across the board and can potentially be used to make existing algorithms even more efficient. Second, it's important to be aware of problems that take a long time to solve. Knowing right off the bat that a problem is somewhat unsolvable in a realistic amount of time means you can focus your efforts on other aspects of the problem. As beginners, though, we're going to steer clear of all this and focus our efforts on algorithms with polynomial runtimes.
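A brute-force traveling-salesman solver checks all n! orderings, which is where the factorial runtime comes from. Here is a minimal sketch for four cities; the distance table is invented purely for illustration:

```python
from itertools import permutations

# Hypothetical symmetric distances between four cities.
dist = {
    ("A", "B"): 10, ("A", "C"): 15, ("A", "D"): 20,
    ("B", "C"): 35, ("B", "D"): 25, ("C", "D"): 30,
}

def d(a, b):
    """Distance between two cities, in either direction."""
    return dist.get((a, b)) or dist[(b, a)]

cities = ["A", "B", "C", "D"]
best_route, best_length = None, float("inf")

# Check every one of the 4! = 24 orderings: factorial runtime.
for route in permutations(cities):
    length = sum(d(route[i], route[i + 1]) for i in range(len(route) - 1))
    length += d(route[-1], route[0])  # return to the origin city
    if length < best_length:
        best_route, best_length = route, length

print(best_route, best_length)
```

With four cities this finishes instantly, but since each added city multiplies the number of orderings, the same loop becomes hopeless long before n reaches 200.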
After all, we're much more likely to work with, and learn about, such algorithms. Now that we know some of the common complexities, in the next video let's talk about how we determine the complexity of an algorithm, because there are some nuances.
