
Computer Science > Introduction to Algorithms > Algorithms in Code > Binary Search in Code

Calvin Secrest
24,815 Points

I am stumped on the complexity and Big O notation of this algorithm

In the video for binary search, the worst case scenario is the target '8', but does that mean the Big O notation would be O(log 8), given n = 8? Also, since this is binary search, it should have logarithmic time in the worst case. Yet the while loop is as follows:

```python
while first <= last:
    midpoint = (first + last) // 2
```

The midpoint calculation divides by two, and in the list the midpoint is at index 4. Is that why we divide by two? Also, how can the while loop be constant time and logarithmic time together in the same condition? I hope this isn't a confusing question, but any explanation would help.

Thanks!

3 Answers

Because the length of the list is halved with each iteration of the while loop, we get a time complexity of O(log(n)). If it is helpful for you to think of O(log(n)) in concrete terms, substitute 8 for n: this algorithm does indeed need at most floor(log2(8)) + 1 = 4 iterations of the while loop to complete in the worst case.

Imagine we have some while loop that will iterate at most log2(n) + 1 times for any input of size n and any target, and within that while loop there is some constant number k of constant-time operations. The total maximum number of operations needed is then about k * (log2(n) + 1). In big O notation, multiplication by a constant works as follows: O(|k|·g) = O(g) if k is nonzero (source: Wikipedia).

It's not that the constant-time operations just go away; it's that they become increasingly irrelevant compared to the logarithmic-time operation as n becomes large. I hope this helps; the question was a little hard to understand.
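To make the iteration count concrete, here is a minimal binary search sketch (assumed variable names, not the exact code from the video) that counts how many times the while loop runs. Searching for 8 in a sorted 8-element list hits the floor(log2(8)) + 1 = 4 bound:

```python
def binary_search(values, target):
    """Return (index, loop_iterations) for target in a sorted list, or (-1, loop_iterations)."""
    first = 0
    last = len(values) - 1
    iterations = 0
    while first <= last:
        iterations += 1
        midpoint = (first + last) // 2  # halves the remaining search range
        if values[midpoint] == target:
            return midpoint, iterations
        elif values[midpoint] < target:
            first = midpoint + 1        # discard the left half
        else:
            last = midpoint - 1         # discard the right half
    return -1, iterations

# Worst case for n = 8: target 8 is found only after 4 halvings.
index, steps = binary_search(list(range(1, 9)), 8)
print(index, steps)  # -> 7 4
```

Each pass through the loop does a constant amount of work (one comparison, one integer division), so the total work is that constant times the number of halvings, which is O(log n).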

gustavoalvarado
Python Development Techdegree Student 7,278 Points

The complexity is a logarithmic runtime for the upper bound, i.e. O(log n).

That's because one part of the algorithm, the while loop, involves a logarithmic runtime. Even if other parts run in constant time, remember that we use the worst case scenario to predict any possible situation.

Have a nice time!
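To see why the constant-time parts don't change the answer, here is a small sketch (a hypothetical `count_iterations` helper, not code from the course) showing that doubling n adds at most one extra loop iteration in the worst case:

```python
import math

def count_iterations(n, target):
    """Binary search over range(n), returning only how many loop passes it takes."""
    values = list(range(n))
    first, last, steps = 0, n - 1, 0
    while first <= last:
        steps += 1
        midpoint = (first + last) // 2
        if values[midpoint] == target:
            return steps
        elif values[midpoint] < target:
            first = midpoint + 1
        else:
            last = midpoint - 1
    return steps

# Searching for a value larger than everything in the list forces the
# full descent, so steps equals floor(log2(n)) + 1 for each power of two:
for n in (8, 16, 32, 1024):
    assert count_iterations(n, n) == math.floor(math.log2(n)) + 1
```

Going from n = 8 to n = 1024 (128 times more elements) only raises the worst case from 4 to 11 iterations, which is what O(log n) growth looks like in practice.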

> It's not that the constant time operations just go away, it's that they are increasingly irrelevant compared to the logarithmic time operation as n becomes large.

Excellent explanation, in case this wasn't already clear.
Thanks!