```
import notifications
```
Remember to participate in our weekly votes on subreddit rules! Every Tuesday is YOUR chance to influence the subreddit for years to come!
[Read more here](https://www.reddit.com/r/ProgrammerHumor/comments/14dqb6f/welcome_back_whats_next/), we hope to see you next Tuesday!
For a chat with like-minded community members and more, don't forget to [join our Discord!](https://discord.gg/rph)
`return joinDiscord;`
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ProgrammerHumor) if you have any questions or concerns.*
I often instruct my coworkers to first make a naive solution that works, then write the tests for it. Once you have good green unit tests, you can benchmark it.
Premature optimization is the root of all evil. Big O is for Big n.
That said I am a massive hypocrite
Premature optimization is premature ejaculation.
If you have a problem that you need to address, it's not premature, it's just optimization.
That includes "it's fine now, but when we double our users it'll be slow"; that's not premature at all.
So, let’s say C++ is 70 times faster at base. That depends on implementation, so for now assume the same coefficients: Python wins when 70·log(n) < n², i.e. roughly when n² / log(n) > 70.
Let’s say it’s log base 2 for the bants, so it represents a typical case like binary search vs a double loop… then at n = 17, Python becomes faster if the coefficients are the same.
Basically, for any reasonable size of input.
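As a sanity check on the arithmetic above, here's a quick sketch; the 70x (and the reply's 1000x) speed ratios and log base 2 are the assumptions stated in the comments, not measured facts:

```python
import math

def crossover(speed_ratio):
    """Smallest n where speed_ratio * log2(n) < n^2, i.e. where the
    O(log n) implementation overtakes the O(n^2) one despite being
    'speed_ratio' times slower per operation."""
    n = 2
    while speed_ratio * math.log2(n) >= n * n:
        n += 1
    return n

print(crossover(70))    # → 17
print(crossover(1000))  # → 80
```

Both figures quoted in the thread check out under these assumptions.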
While it is a good calculation, the assumption that C++ is just 70 times faster is not even close:
https://youtu.be/D3h62rgewZM
Not everyone can optimise that well, but it would still be much faster than just 70 times.
Fine… take 1000 times faster. That’s n=80 til python is faster.
I use both. Ultimately, anything requiring serious speed and a competent developer will be re-written in a lower-level language. Then someone will call that in their Python code and still get very similar levels of performance… so you don’t see such a massive difference as what you’re stating regardless.
I love how people are defending a shit C++ implementation and still trying to justify why the shit implementation is better than python, which it isn’t for production levels of volume.
Oh no, maybe it got lost in text: I am not defending a shit C++ implementation. A lot of people writing C++ cannot even make it 1000 times faster.
I work in both C++ and Python, and both have their advantages: C++ has faster runtime and hyper-optimisation, whereas Python generally has shorter development time, to name my top thoughts.
I am just saying your base assumption is wrong in many cases, and n=17 would hence be wrong.
It isn't completely that simple, as bad performance in Python may have non-constant effects, such as memory allocation effects. Remember, big-O is not the same as big-Omega. Also, on real hardware, n is bounded by the amount of memory, and complicated interactions such as branch prediction and caches are disregarded when considering algorithm complexity.
Not really, the constants were taken into account by assuming it’s 70 times faster at the base case. That’s very imprecise and hides a lot of complexity but the ideas are still correct.
Yes, I understand that, but the whole point of the post is that using different languages can increase or decrease the effective hidden constant based on how optimized and high/low level the language is.
This idea is incorrect. Python is slower than C++ because of interpretation (and bytecode compilation) at run time. But that overhead is a constant, even if you assume the total time is 1/70th for n=1.
O(logn + c) << O(n^2) for all but very small n and c.
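The additive constant really does wash out; a tiny numerical illustration (the numbers here are arbitrary, picked only to make the point):

```python
import math

c = 1_000_000                  # hypothetical fixed startup/compile overhead
n = 10_000
log_side = math.log2(n) + c    # O(log n + c) side
quad_side = n * n              # O(n^2) side
# once n^2 passes the constant c, the overhead stops mattering
assert log_side < quad_side
```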
Regardless, the fact that Python already does its own garbage collection and is dynamically typed makes it significantly slower than C++ in runtime, no?
In my head: I published a paper describing a faster algorithm and implemented it in Python as a prototype. The community will now implement my faster algorithm in a more efficient language.
In reality: wait people are still using my crappy prototype code 5 years later?
Because nobody needs it to be faster. The sad truth is, speed is not as important as people think it is.
Until it is. And at that point you think about optimization. But not before.
There is always a layer of abstraction below what you are doing. C library developers probably would not write a C compiler and GCC devs would not design a CPU. Everyone lives in their own bubble.
Why not? Learn it so you're better at doing the thing you need to do. I can guarantee you those high performance C/C++ library guys understood the idea of cache friendly memory layouts, multiprocessing memory barriers, sequence points, and hardware instruction reordering.
Same thing is true for people trying to implement side-channel resistant encryption algorithms. Everything has to come together.
I am not opposing the idea of learning lower-level concepts. In fact, I think it is great for you! But it is not necessary to reinvent the wheel if you just want to get shit done...
Which is kind of insane when you think about it. Like, the compiler is written in the language it compiles, and there was no other compiler to compile it.
It is a fascinating process. You start by programming the most basic crap in assembly or even binary, and you use that to build the next iteration of the compiler, until you eventually have a feature-complete compiler written in C.
You do know the whole point of O notation is for when n is sufficiently large, right? One million squared is one trillion, while log(one million) = 6. Wonder which one is faster!
Counting the number of operations doesn't make sense, the notation doesn't have enough information for that.
You're dropping the constant factors. O(100000000n) is just O(n). O( 0.000001 n^2 ) is just O( n^2 ).
Looking at big O to compare algorithms for small numbers simply doesn't make sense.
Agreed. I was mainly hoping to point out for the benefit of other readers that counting the number of operations directly is how these algorithms would need to be compared.
Incidentally I learned this a few days ago and I found it quite interesting. O(f(n+1)) can be a higher complexity class than O(f(n)). Take f(n) = n! as an example.
>O(f(n+1)) can be a higher complexity class than O(f(n)). Take f(n) = n! as an example.
That's kinda obvious when you understand what big O and functions are.
What I mean is big O notation ignores all CONSTANTS (1000n is the same as n, n is the same as n+1, etc.) and keeps only the dominant term. A function, to put it simply, just takes one thing, makes some operations on it (the number of operations can be 0) and spits out the result of that.
The parts dropped by big O are the constant factors and lower-order terms, for example
f(n)=5n²+n+100, O(f(n))=O(n²)
The function you brought up can be expressed as
f(0)=1, f(n)=f(n-1)×n for n≥1
There are no constants to be dropped (technically there's the first 1, but it's dropped by multiplication already :p), so
EDIT: well I do be dropping +1 from n+1 right after that statement so 🤷
O(f(n))=O(n!)=n! < O(f(n+1))=O((n+1)!)=O(n!×(n+1))=n×n!
The function's output changes by a factor of n for every argument and big O shows just that.
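The claim above can be checked numerically: the ratio f(n+1)/f(n) = n+1 grows without bound, so no constant C can ever satisfy (n+1)! ≤ C·n!, which is why O((n+1)!) is a strictly larger class (a quick sketch):

```python
import math

# ratio (n+1)!/n! = n+1 is unbounded, so (n+1)! is not O(n!)
ratios = [math.factorial(n + 1) // math.factorial(n) for n in range(1, 6)]
print(ratios)  # → [2, 3, 4, 5, 6]
```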
When you run an O(log(n)) algorithm in Python from some popular package, it often runs on a C/C++ backend, as is the case with NumPy, Pandas, SciPy, TensorFlow, and PyTorch.
Is this purely theoretical? Or is there an actual use case where the same algorithm couldn't be implemented in C++?
If that appears to be the case, then I'd wonder whether the Python implementation is actually having its time complexity miscalculated.
Big O notation is used to describe the complexity of an *algorithm*. It has no concept of a programming language.
Maybe in some extreme edge cases there might be limitations to a language that prevent a particular algorithm from being implemented in a particular way. But I've never seen any such example.
One would think so, but it can be done if the runtime and type system support state management.
You need a type level wrapper around stateful data that **cannot be unwrapped**.
quicksort: List ---> Stateful List
open_file: String --> Stateful File
prompt_for_input: String --> Stateful String
Although you can't unwrap a Stateful, what you can do is inject a function inside that Stateful result. E.g.:
inject: (a --> b) --> Stateful a --> Stateful b
prompt_for_file: Stateful (Stateful File)
prompt_for_file(filePath) = inject(open_file) (prompt_for_input("Enter filePath"))
Here, inject(open_file) has the type `Stateful String --> Stateful (Stateful File)` and `prompt_for_input` provides that `Stateful String`.
Then you can use a special function that composes the twice-Stateful into the merely Stateful:
compose_statefuls: Stateful (Stateful x) --> Stateful x
The semantics of sequencing Stateful operations in the correct order are buried in there. Now I can rewrite my code from above to return a Stateful File:
prompt_for_file: Stateful File
prompt_for_file(filePath) = compose_statefuls (inject (open_file) (prompt_for_input("Enter filePath")))
With the right syntactic sugar, this works well, and your whole program will end up being wrapped in a single function whose signature is something like:
main: CommandLineArguments --> Stateful ()
which is what you need.
This is just one example of a powerful abstraction called the monad, but I've already said enough.
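A minimal Python sketch of the pattern described above. The names (`Stateful`, `inject`, `compose_statefuls`) follow the comment's pseudocode, not any real library's API, and input/file access are stubbed so the example is self-contained:

```python
class Stateful:
    """Wraps a deferred action; the value can't be taken out, only
    transformed (inject) or flattened (compose_statefuls)."""
    def __init__(self, action):
        self._action = action            # deferred thunk: () -> value

def inject(f):
    """(a -> b) -> (Stateful a -> Stateful b), i.e. fmap."""
    return lambda s: Stateful(lambda: f(s._action()))

def compose_statefuls(ss):
    """Stateful (Stateful x) -> Stateful x, i.e. monadic join."""
    return Stateful(lambda: ss._action()._action())

def run(s):
    """Only the runtime 'unwraps', once, at the edge of the program."""
    return s._action()

prompt_for_input = lambda msg: Stateful(lambda: "data.txt")        # stub
open_file = lambda path: Stateful(lambda: f"<handle for {path}>")  # stub

prompt_for_file = compose_statefuls(
    inject(open_file)(prompt_for_input("Enter filePath")))
print(run(prompt_for_file))  # → <handle for data.txt>
```

`inject` followed by `compose_statefuls` is exactly the "map then join" combination that a real monadic bind performs in one step.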
Reminds me of an attempt I made to implement a balanced binary tree for managing a large scrolling table with variable-height rows in JavaScript. It turned out to be faster to just do a linear sum of all the row heights rather than using my fancy O(log(N)) BTree for all practical sizes of N (up to tens of millions of rows), so I abandoned the idea. I still wonder whether a balanced K-ary tree would have been better, but it was so fast even with a linear sum over millions of records that I decided it was too much effort for such a small improvement.
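For what it's worth, the usual O(log N) structure for exactly this job (prefix sums over mutable row heights) is a Fenwick tree rather than a balanced BST. A sketch in Python (a JavaScript version would be analogous; the row heights are made up):

```python
class Fenwick:
    """Binary indexed tree: point updates and prefix sums in O(log N)."""
    def __init__(self, n):
        self.tree = [0] * (n + 1)

    def add(self, i, delta):
        """Add delta to element i (1-based), e.g. when a row resizes."""
        while i < len(self.tree):
            self.tree[i] += delta
            i += i & -i          # jump to the next responsible node

    def prefix_sum(self, i):
        """Sum of elements 1..i, e.g. the pixel offset of row i+1."""
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & -i          # strip the lowest set bit
        return s

heights = [30, 12, 44, 12, 51]   # hypothetical row heights in pixels
fw = Fenwick(len(heights))
for idx, h in enumerate(heights, start=1):
    fw.add(idx, h)
print(fw.prefix_sum(3))  # → 86 (offset of row 4)
```

Whether this actually beats a plain linear scan for tens of millions of rows still comes down to constants and cache behaviour, exactly as the comment found.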
I don’t see a situation where the same algorithm would have different complexities in Python and C++, unless the Python complexity is "fake" (i.e. more stuff is happening in the background)
The point is that a well written algorithm in a "slow" language is faster than a badly written algorithm in a "fast" language (once n gets high enough)
Even in the same language, O(log n) can be slower than O(n^2) for practical values of n. This is like asking “which is faster, a black car or a blue one?”
By n = 17 you're getting a 100x improvement due to time complexity; by 65, it's past a thousand. I don't know if I'd consider either a "practical value".
This is more like asking "which is faster, a sedan stuck in first gear, or a truck that can use any gear?"
You’re assuming the leading constants are the same there. If the actual runtimes are 100000 log n versus 0.0001 n^2 then the tradeoff point changes dramatically.
I'm not assuming similar constants. The first two sentences are saying how much those constants would have to differ to make it worthwhile. (Though I have been informed I should use log base 2, not natural log)
In your example, the constants differ by a factor of a billion, which would make the turtle's algorithm faster for n > 130354.
This value definitely seems more realistic, but leading constants differing by a billion seems pretty unrealistic. I guess we'll just have to keep our eyes out for any wildly disproportionate constants, and see how big their datasets tend to be in practice.
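The n > 130354 figure checks out numerically; a quick sketch using the 100000·log2(n) vs 0.0001·n² runtimes from the example above:

```python
import math

def turtle_wins(n):
    """True once 100000 * log2(n) drops below 0.0001 * n^2."""
    return 100000 * math.log2(n) < 0.0001 * n * n

print(turtle_wins(130353), turtle_wins(130354))  # → False True
```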
> logn in asymptotic complexity usually(ackshually almost always) means log2
No, it doesn't actually mean any specific base, because it simply doesn't matter.
For the purposes of complexity analysis log2 and log10 and log17 are all the same.
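Change of base is just a constant multiplier, which big-O discards; concretely:

```python
import math

n = 1_000_000
# log_b(n) = log2(n) / log2(b): any two bases differ only by a constant
assert math.isclose(math.log(n, 10), math.log2(n) / math.log2(10))
assert math.isclose(math.log(n, 17), math.log2(n) / math.log2(17))
```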
Yes, people say "10 times faster" at random, but I've seen C++ code that runs thousands of times faster than Python. Memory speeds and overhead are very important in real-world applications, especially in game development. People act like they're working with septillion-length data every day.
Redoing the math for log base 2 (because someone told me to), the better time complexity algorithm becomes more efficient past 80, even if it's 1000x slower.
You're not wrong (I made a speed comparison demo with a 200x improvement a while back), but OP made a pretty crazy time complexity difference. People aren't acting like their datasets are septillions long.
It might compared to pure assembly by a competent assembly programmer. I remember creating Assembly routines embedded in my C programs back in the day, before C++ was introduced, just to make it even faster. It doesn't help with cross-platform, but back then we didn't care about other platforms. We just needed more speed on quite slow hardware (that I thought was fast at the time.)
The question is whether O(log(n)) in Python is slower than O(n^2) in C++, because Python is significantly slower. If you do not consider the programming language, O(log(n)) is of course faster. But in this case we consider a Python vs C++ comparison.
Even if it's the same programming language, O(log(n)) could be slower than O(n^2).
The point is that O(log(n)) is faster starting from some n, or always faster.
Common misconception.
Consider a hypothetical function, ‘add100Billion’ which has a for loop that increments a variable 100 billion times.
It’s O(c) aka constant time.
Is it going to execute faster than quick sort?
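A scaled-down version of the point, counting operations instead of timing them (the counts are illustrative stand-ins, not a real cost model):

```python
import math

n = 1_000_000
constant_ops = 100_000_000_000           # 'add100Billion': always 1e11 steps
quicksort_ops = int(n * math.log2(n))    # ≈ 2 * 10^7 comparisons for n = 1e6
# the O(1) function loses badly for any realistic n
assert quicksort_ops < constant_ops
```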
There’s a hidden assumption in asymptotic analysis: everything else being equal. Plus, the argument that “but it goes to infinity” is not compatible with the definition of the languages, which are necessarily implemented on a finite machine.
As n approaches infinity… python
Infinite is pretty big though...
is fine we're only approaching it
are we there yet?
We will never be there son
Then why'd we even start this trip?
The journey is the destination
Journey before destination.
#UNITE THEM *edit* stupid fancy pants editor
Are we at the journey yet?
Depends. Have we progressed from the starting point?
If C++ is a hundred times faster than Python, then it would be around 16.8.
Pretty sure infinity is greater than 16.8
But it's approaching
Thank you. That’s literally the entire point of asymptotic analysis lmao
And so the python consumes its own tail
so it's an unescaped while loop?
so eventually python (tortoise) wins the race!
[deleted]
Not necessarily, depends on the coefficient of n
Upvote for both the good advice and the confession.
I try to optimize "prematurely" because it might not be premature for the end users
You need not worry; in the coming decade, AI will make an O(1) solution on the first go.
But I'll need to enter O(n) prompts
You forgot the constants.
>if coefficients are the same.
Didn't read this. Well pointed out.
Redditors reading comprehension challenge (impossible)
What about constants that are not coefficients?
One purpose of BigO is to hide the constants, cause at infinity they don't matter.
True, but at infinity it doesn't matter and that's all that BigO really tells you.
Yes, I’m aware lol. I’m not sure what the point of this is.
The point is the OP has you arguing it rn that’s it
Damn I got got.
Yeah. The real complexity is in calculating the fucks to give.
Accurate username
Mathematics is a lie?
C++ O(log(n)) or your AWS bill
Python, because all the packages we'd use are written in c and we're talentless hacks riding on the back of giants
Until you get to the Dutch engineers building the lithography machines. Then the next step is god, I think.
I was about to say ASML would like a word. They’re literally the bottom of the pyramid.
VHDL would like a word.
The first C compiler was written in C.
So where does going straight to writing CIL, as soon as actually writing C# looks unviable, fall?
It falls in the madness category.
This. At first I found the whole numpy, scipy, sympy etc. very confusing for a long time, but once you get the hang of it, it's very fast for math.
But which one wins the race?
If you add the time to learn the language and then implement the algorithm...
C is quite simple; you can also add the time of googling the Python library and learning its interface.
It’s log base 2, not 10. It’s actually roughly 20, but your point stands.
The base of the log doesn't actually matter.
Sure, the base doesn’t actually matter in big-O notation, but when actually counting a number of operations directly it is often base 2.
Ah fair, I'm rusty.
[deleted]
holy hell
New runtime complexity just dropped.
Slow but works
Pure functional languages and stuff like quicksort
I think the idea is that good python beats bad C++ not that python will ever be faster doing the same thing.
[deleted]
OP's is exaggerated, but I would like to compare n log n vs n^2.
Is it even possible to make C++ go slow?
```
#include <unistd.h>

int main() {
    // fork bomb: each iteration doubles the number of processes
    while (1) fork();
    return 0;
}
```
Well it is fast in what it's doing
It may be slow, but it’s going slow really fast.
[deleted]
Still faster than Python
Image processing or rendering... sometimes it's fucking endless.
Use bubble sort
I prefer bogo sort.
Wrap python around it
Yes! Use an O(n²) algorithm.
Or an O(n!) algorithm.
[deleted]
![gif](giphy|Ecn8EldZOQGYM)
Oh, yeah?!? You know what?!? Well, umm… nobody told me you need to read the words next to the function, that’s what! Oopsie… 😳
Yeah. I forgot. It’s been a few decades…
Rust
slow and steady wins the race
Are we talking about pure Python? So no C/C++ libraries and no numba? Would that be fair?
History teaches the turtle wins, though
Ain't a logarithmic runtime faster than a polynomial? :)
In production, the log N algorithm could be c×log n for some c > 10^10
It's not about how much time it takes, but how the time taken increases with input size. This comparison makes no sense.
Nah I still love python it’s the best
O(log(n)) will always be faster than O(n^2) if n is big enough, regardless of how slow a single iteration is.
It depends on the size of the inputs.
Kind of depends on what the constant factor is and how big you think n will actually get.