I am stumped on the complexity and Big O notation of this algorithm
In the video for binary search, the worst-case scenario is the target '8', which means the Big O notation would be O(log 8) given n = 8. Also, since this is binary search, it has logarithmic time in the worst case. Yet the while loop reads as follows:
```python
while first <= last:
    midpoint = (first + last) // 2
```
It divides by two, but the midpoint of the list is at 4; is that why we divide by two? Also, how can the while loop be constant time and logarithmic time together in the same condition? I hope this isn't a confusing question, but any explanation would help.
Because the length of the list is halved with each iteration of the while loop, we get a time complexity of O(log n).

If it helps to think of O(log n) in concrete terms, substitute 8 for n: this algorithm does indeed have a worst-case performance of floor(log2(8)) + 1 = 4 iterations of the while loop.

Now imagine some while loop that iterates at most log2(n) + 1 times for any input of size n and any target within it, and within each iteration some constant number k of constant-time operations takes place. The maximum total number of operations is then about k * (log2(n) + 1). In big O notation, multiplication by a constant works as follows: O(|k| * g) = O(g) if k is nonzero (source: Wikipedia).

It's not that the constant-time operations just go away; it's that they become increasingly irrelevant compared to the logarithmic term as n grows large. I hope this helps, the question was a little hard to understand.
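To make this concrete, here is a sketch of an iterative binary search (assuming a sorted list, as in the video) with an iteration counter added, so you can check that the count never exceeds floor(log2(n)) + 1:

```python
def binary_search(values, target):
    """Iterative binary search over a sorted list.

    Returns (index, iterations); index is None if target is absent.
    The iteration counter is only for demonstrating the O(log n) bound.
    """
    first, last = 0, len(values) - 1
    iterations = 0
    while first <= last:
        iterations += 1
        midpoint = (first + last) // 2  # halves the remaining range each pass
        if values[midpoint] == target:
            return midpoint, iterations
        elif values[midpoint] < target:
            first = midpoint + 1
        else:
            last = midpoint - 1
    return None, iterations

# Searching for 8 in a sorted list of n = 8 elements hits the worst case:
# floor(log2(8)) + 1 = 4 iterations.
index, steps = binary_search([1, 2, 3, 4, 5, 6, 7, 8], 8)
print(index, steps)  # prints 7 4
```

Try other list sizes: doubling n to 16 only adds one more iteration in the worst case, which is exactly the logarithmic growth being described.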
The complexity is a logarithmic runtime for the upper bound, i.e. O(log n).

That is because one part of the algorithm involves logarithmic runtime (the while loop). Even if other parts run in constant time, remember that we use the worst-case scenario to account for any possible situation.

Have a nice time!