Constant and Logarithmic Time (6:32) with Pasan Premaratne
As you learn about algorithms, you will run (pun intended) into some common runtimes that algorithms exhibit. In this video we'll look at two of them: constant and logarithmic runtimes.
Constant Time - O(1): The runtime of the algorithm is independent of the size of the data set. Whether n is 1 or 1 million, it takes the same amount of time to execute the algorithm.
Logarithmic Time - O(log n): The runtime of the algorithm increases logarithmically as the size of the data set increases.
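As a minimal sketch of the constant-time definition above (in Python; the course's own language isn't shown here), reading an element by index costs the same whether the list holds ten items or a million:

```python
# Reading an element by index is O(1): the cost does not depend on
# how many items the list holds.
small = list(range(10))
large = list(range(1_000_000))

print(small[5])  # 5
print(large[5])  # 5, the same single operation despite the larger list
```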
In our discussions of complexity, we made one assumption: that the algorithm as a whole had a single measure of complexity. That isn't true, and we'll get at how we arrive at these measures for the entire algorithm at the end of this exercise. But each step in the algorithm has its own space and time complexity. In linear search, for example, there are multiple steps, and the algorithm goes like this:

1. Start at the beginning of the list or range of values.
2. Compare the current value to the target.
3. If the current value is the target value we're looking for, we're done.
4. If it's not, move on sequentially to the next value in the list and repeat step two.
5. If we reach the end of the list, the target value is not in the list.

Let's go back to step two for a second: comparing the current value to the target. Does the size of the dataset matter for this step? When we're at step two, we're already at that position in the list, and all we're doing is reading the value to make a comparison. Reading the value is a single operation. And if we were to plot it on a graph of runtime per operation against n, it looks like this: a straight line that takes constant time regardless of the size of n. Since this takes the same amount of time in any given case, we say that the runtime is constant time; it doesn't change. In big O notation, we represent this as big O with a 1 inside parentheses: O(1). Now, when I first started learning all this, I was really confused as to how to read this, even if it was in my own head. Should I say "big O of 1"? When you see this written, you're going to read it as constant time. So reading a value in a list is a constant-time operation. This is the most ideal case when it comes to runtimes, because input size does not matter.
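The linear search steps described above can be sketched in Python (a simple illustration, not the course's own code):

```python
def linear_search(values, target):
    """Return the index of target in values, or None if it is absent."""
    for index, current in enumerate(values):
        # Step two: comparing the current value to the target is a
        # single read plus a comparison -- a constant-time operation.
        if current == target:
            return index  # found the target, we're done
    return None  # reached the end of the list; target is not present

print(linear_search([3, 7, 11, 15], 11))  # 2
print(linear_search([3, 7, 11, 15], 4))   # None
```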
And we know that regardless of the size of n, the algorithm's runtime will remain the same. The next step up in complexity, so to speak, is the situation we encountered with the binary search algorithm. Traditionally, explaining the time complexity of binary search involves math. I'm going to try to do it both with and without.

When we played the game using binary search, we noticed that with every turn we were able to discard half of the data. But there's another pattern that emerges that we didn't explore. Let's say n equals 10. How long does it take to find an item at the tenth position of the list? We can write this out: we go from 10 elements, picking 5, then 8, then 9, and finally 10. Here it takes us four tries to cut down the list to just one element and find the value we're looking for. Let's double the value of n to 20 and see how long it takes for us to find an item at the 20th position. So we start at 20, and then we pick 10. From there we go to 15, 17, 19, and finally 20. So here it takes us five tries. Okay, let's double it again so that n is 40, and we try to find the item in the 40th position. When we start at 40, the first midpoint we pick is 20. From there we go to 30, then 35, 37, 39, and then 40; that's six tries. Notice that every time we double the value of n, the number of operations it takes to reduce the list down to a single element only increases by one. There is a mathematical relationship to this pattern, and it's called the logarithm of n. You don't really have to know what logarithms truly are, but I know that some of you like underlying explainers, so I'll give you a quick one.

If you've taken algebra classes, you may have learned about exponents. Here's a quick refresher. 2 times 1 = 2. This can be written as 2 raised to the first power, because it is our base case.
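The doubling pattern above can be checked with a short Python sketch (an illustration with a hypothetical helper, not code from the course). Searching for the last position in a sorted range of 1..n always discards the lower half, so we can count the probes:

```python
def tries_to_find_last(n):
    """Count binary-search probes needed to reach position n in 1..n."""
    low, high, tries = 1, n, 0
    while low <= high:
        mid = (low + high) // 2
        tries += 1
        if mid == n:
            return tries
        low = mid + 1  # the target n is always in the upper half

# Doubling n only adds one probe each time.
for n in (10, 20, 40, 80):
    print(n, tries_to_find_last(n))  # 4, 5, 6, 7 tries respectively
```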
2 times 2 is 4. This can be written as 2 raised to the second power, because we're multiplying 2 twice: first we multiply 2 times 1, then the result of that times 2. 2 times 2 times 2 is 8, and we can write this as 2 raised to the third power, because we're multiplying 2 three times. In 2 raised to 2 and 2 raised to 3, the 2 and 3 there are called exponents, and they define how the number grows. With 2 raised to 3, we start with the base value and multiply it by itself three times.

The inverse of an exponent is called a logarithm. So if I say log to the base 2 of 8 = 3, I'm basically saying the opposite of an exponent. Instead of asking how many times I have to multiply this value, I'm asking: how many times do I have to divide 8 by 2 to get the value 1? That takes three operations. What about the result of log to the base 2 of 16? That evaluates to 4.

So why does any of this matter? Notice that this is sort of how binary search works. Log to the base 2 of 16 = 4. If n were 16, how many tries would it take to get to that last element? Well, we start in the middle at 8; that's too low, so we move to 12. Then we move to 14, then to 15, and then to 16, which is five tries, or log to the base 2 of 16, plus 1. In general, for a given value of n, the number of tries it takes to find an item in the worst-case scenario is log n + 1. And because this pattern is overall a logarithmic pattern, we say that the runtime of such algorithms is logarithmic. If we plot these data points on our graph, a logarithmic runtime looks like this. In big O notation, we represent a logarithmic runtime as O(log n), which is written as big O with log n inside parentheses, or even sometimes as ln n inside parentheses. When you see this, read it as logarithmic time.
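The "divide until you reach 1" reading of a logarithm can be sketched in Python (the function name here is just for illustration):

```python
import math

def divisions_to_one(n):
    """Count how many times n must be divided by 2 to reach 1."""
    count = 0
    while n > 1:
        n //= 2
        count += 1
    return count

print(divisions_to_one(8))   # 3, agreeing with log2(8) = 3
print(divisions_to_one(16))  # 4, agreeing with log2(16) = 4
print(math.log2(16))         # 4.0
```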
As you can see on the graph, as n grows really large, the number of operations grows very slowly and the curve nearly flattens out. Since this line is below the line for a linear runtime, which we'll look at in a second, you might often hear algorithms with logarithmic runtimes called sublinear. Logarithmic or sublinear runtimes are preferred to linear because they're more efficient, but in practice linear search has its own set of advantages, which we'll take a look at in the next video.
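To see just how slowly that curve rises, here's a quick sketch printing the worst-case probe count (about log2(n) + 1, per the pattern above) for increasingly large n:

```python
import math

# Binary search needs about log2(n) + 1 probes in the worst case,
# which grows far more slowly than n itself.
for n in (16, 1_000, 1_000_000):
    print(n, math.floor(math.log2(n)) + 1)  # 5, 10, 20
```

A million-item list needs only about 20 probes, while linear search could need a million.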