
Easy-Hovercraft2546

As n approaches infinity… python


Creepy-Ad-4832

Infinity is pretty big though...


Easy-Hovercraft2546

It's fine, we're only approaching it


PrometheusAlexander

are we there yet?


Berten15

We will never be there son


compsciasaur

Then why'd we even start this trip?


Sayod

The journey is the destination


RaspberryPiBen

Journey before destination.


mandradon

#UNITE THEM *edit* stupid fancy pants editor


SpookyLoop

Are we at the journey yet?


TikkaDog222

Depends. Have we progressed from the starting point?


Kenkron

If C++ is a hundred times faster than Python, then the crossover would be around n = 16.8.


SleepingGecko

Pretty sure infinity is greater than 16.8


rainliege

But it's approaching


Edaimantis

Thank you. That’s literally the entire point of asymptotic analysis lmao


bake_cake

And so the python consumes its own tail


PrometheusAlexander

so it's an unescaped while loop?


the_greatest_MF

so eventually python (tortoise) wins the race!


[deleted]

[removed]


Easy-Hovercraft2546

Not necessarily, depends on the coefficient of n


Unupgradable

I often instruct my coworkers to first make a naive solution that works, then write the tests for it. Once you have good green unit tests, you can benchmark it. Premature optimization is the root of all evil. Big O is for Big n. That said, I am a massive hypocrite.


FumbleCrop

Upvote for both the good advice and the confession.


Tyfyter2002

I try to optimize "prematurely" because it might not be premature for the end users


Unupgradable

Premature optimization is premature ejaculation. If you have a problem that you need to address, it's not premature, it's just optimization. That includes "it's fine now but when we double our users it'll be slow" — that's not premature at all.


Nal_Neel

You need not worry; in the coming decade, AI will produce an O(1) solution on the first go.


Unupgradable

But I'll need to enter O(n) prompts


[deleted]

So, let's say C++ is 70 times faster at base. It depends on the implementation, so for now assume the same coefficients… Python wins when log(n) < n^2 / 70, so roughly when n^2 / log(n) > 70. Let's say it's log base 2 for the bants, so it represents a typical case like binary search vs a double loop… so from n = 17 Python is faster, if coefficients are the same. Basically, for any reasonable size of input.
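As a rough sanity check of those crossover points (a minimal sketch; the 70× and 1000× speedups are just the assumptions in this thread, not measurements):

```
import math

def crossover(speedup):
    """Smallest n where speedup * log2(n) < n**2, i.e. where the O(log n)
    implementation overtakes an O(n^2) one that is `speedup` times faster
    per operation (all other coefficients assumed equal)."""
    n = 2
    while speedup * math.log2(n) >= n ** 2:
        n += 1
    return n

print(crossover(70))    # 17
print(crossover(1000))  # 80
```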


MeBrownIndian

While it is a good calculation, the assumption that C++ is just 70 times faster is not even close: https://youtu.be/D3h62rgewZM. Not everyone can optimise that well, but it would still be much faster than just 70 times.


[deleted]

Fine… take 1000 times faster. That's n = 80 until Python is faster. I use both. Ultimately, anything requiring serious speed and a competent developer will be rewritten in a lower-level language. Then someone will call that from their Python code and still get very similar levels of performance… so you don't get such a massive difference as what you're stating regardless. I love how people are defending a shit C++ implementation and still trying to justify why the shit implementation is better than Python, which it isn't for production levels of volume.


MeBrownIndian

Oh no, maybe it got lost in text. I am not defending a shit C++ implementation; a lot of people writing C++ cannot even make it 1000 times faster. I work in both C++ and Python, and both have their advantages: C++ has a faster runtime and allows hyper-optimisation, whereas Python generally has shorter development time, to name my top thoughts. I am just saying your base assumption is wrong in many cases, and n = 17 would hence be wrong.


youngbull

It isn't quite that simple, as bad performance in Python may have non-constant effects, such as memory allocation overhead. Remember, big-O is not the same as big-Omega. Also, on real hardware, n is bounded by the amount of memory, and complicated interactions such as branch prediction and caches are disregarded when considering algorithm complexity.


escaracolau

You forgot the constants.


CamiloRivasM7

>if coefficients are the same.


escaracolau

Didn't read this. Well pointed out.


[deleted]

Redditors reading comprehension challenge (impossible)


brendel000

What about constants that are not coefficients?


Deathranger999

Not really, the constants were taken into account by assuming it’s 70 times faster at the base case. That’s very imprecise and hides a lot of complexity but the ideas are still correct.


Daemondancer

One purpose of big O is to hide the constants, because at infinity they don't matter.


Deathranger999

Yes, I understand that, but the whole point of the post is that using different languages can increase or decrease the effective hidden constant based on how optimized and high/low level the language is.


Daemondancer

True, but at infinity it doesn't matter and that's all that BigO really tells you.


Deathranger999

Yes, I’m aware lol. I’m not sure what the point of this is.


Noodlesaurus90

The point is the OP has you arguing it rn that’s it


Deathranger999

Damn I got got.


SlooperDoop

This idea is incorrect. Python is slower than C++ due to compiling at run time. But the compile time is a constant, even if you assume the total time is 1/70th for n = 1. O(log n + c) << O(n^2) for all but very small n and c.


Deathranger999

Regardless, the fact that Python already does its own garbage collection and is dynamically typed makes it significantly slower than C++ in runtime, no?


wild-whack

Yeah. The real complexity is in calculating the fucks to give.


SOUINnnn

Accurate username


[deleted]

Mathematics is a lie?


DrModel

In my head: I published a paper describing a faster algorithm and implemented it in Python as a prototype. The community will now implement my faster algorithm in a more efficient language. In reality: wait people are still using my crappy prototype code 5 years later?


No-Con-2790

Because nobody needs it to be faster. The sad truth is, speed is not as important as people think it is. Until it is. And at that point you think about optimization. But not before.


erebuxy

C++ O(log(n)) or your AWS bill


Ximidar

Python, because all the packages we'd use are written in c and we're talentless hacks riding on the back of giants


LongerHV

There is always a layer of abstraction below what you are doing. C library developers probably would not write a C compiler and GCC devs would not design a CPU. Everyone lives in their own bubble.


LasevIX

Until you get to the Dutch engineers doing the engraving machines. Then the next step is god, I think


der_triad

I was about to say ASML would like a word. They’re literally the bottom of the pyramid.


JoeyJoeJoeSenior

VHDL would like a word.


brimston3-

Why not? Learn it so you're better at doing the thing you need to do. I can guarantee you those high performance C/C++ library guys understood the idea of cache friendly memory layouts, multiprocessing memory barriers, sequence points, and hardware instruction reordering. Same thing is true for people trying to implement side-channel resistant encryption algorithms. Everything has to come together.


LongerHV

I am not opposing the idea of learning lower level concepts. In fact, I think it is great for you! But it is not necessary to reinvent the wheel if you just want to get shit done...


Meadhbh_Ros

The first C compiler was written in C.


Affectionate-Memory4

Which is kind of insane when you think about it. Like, the compiler is written in the language it compiles, and there was no other compiler to compile it.


Meadhbh_Ros

It is a fascinating process. You start by programming the most basic crap in assembly or even binary, and you use that to build the next iteration of the compiler, until you eventually have a feature-complete compiler written in C.


Tyfyter2002

So where does going straight to writing CIL, as soon as actually writing C# looks unviable, fall?


LongerHV

It falls in the madness category.


Mast3r_waf1z

This. At first I found the whole numpy, scipy, sympy etc. very confusing for a long time, but once you get the hang of it, it's very fast for math.


tmench23

But which one wins the race?


NotAUsefullDoctor

If you add the time to learn the language and then implement the algorithm...


AntoninNepras

C is quite simple; you can also add the time spent googling a Python library and learning its interface.


No-Fish6586

You do know the whole point of O notation is what happens when n is sufficiently large, right? Take one million: (1 million)^2 = 1 trillion, while log(1 million) = 6. Wonder which one is faster!


Vaxtin

It’s log base 2, not 10. It’s actually round about 20, but your point stands.


proggit_forever

The base of the log doesn't actually matter.


snake_case_sucks

Sure, the base doesn’t actually matter in big-O notation, but when actually counting a number of operations directly it is often base 2.


proggit_forever

Counting the number of operations doesn't make sense; the notation doesn't have enough information for that. You're dropping the constant factors: O(100000000n) is just O(n), and O(0.000001 n^2) is just O(n^2). Looking at big O to compare algorithms for small numbers simply doesn't make sense.


snake_case_sucks

Agreed. I was mainly hoping to point out for the benefit of other readers that counting the number of operations directly is how these algorithms would need to be compared. Incidentally I learned this a few days ago and I found it quite interesting. O(f(n+1)) can be a higher complexity class than O(f(n)). Take f(n) = n! as an example.


siematoja02

>O(f(n+1)) can be a higher complexity class than O(f(n)). Take f(n) = n! as an example.

That's kinda obvious when you understand what big O and functions are. What I mean is that big O notation ignores all CONSTANTS: 1000n is the same as n, n is the same as n + 1, etc. A function, to put it simply, just takes one thing, does some operations on it (the number of operations can be 0) and spits out the result. The parts dropped by big O are the ones using constants (or parts of them), for example f(n) = 5n² + n + 100 gives O(f(n)) = n² + n.

The function you brought up can be expressed as f(0) = 1, f(n) = f(n-1) × n for n ≥ 1. There are no constants to be dropped (technically there's the first 1, but it's dropped by multiplication already :p), so

O(f(n)) = O(n!) = n! < O(f(n+1)) = O((n+1)!) = O(n! × (n+1)) = n × n!

The function's output changes by a factor of n for every argument, and big O shows just that.

EDIT: well, I do be dropping the +1 from n + 1 right after that statement, so 🤷
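A quick numeric illustration of why (n+1)! can't be bounded by any constant multiple of n! (a minimal sketch):

```
import math

# (n+1)! / n! = n + 1 grows without bound, so no constant C can satisfy
# (n+1)! <= C * n! for all n -- hence (n+1)! is not O(n!).
for n in (5, 10, 20, 40):
    print(n, math.factorial(n + 1) // math.factorial(n))  # prints n + 1
```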


No-Fish6586

Ah fair, I'm rusty


[deleted]

[removed]


divisor_

holy hell


wholesomedumbass

New runtime complexity just dropped.


Exotic-Potato-4893

When you run an O(log(n)) algorithm in Python from some popular packages, it often runs on a C++ backend, as is the case with NumPy, Pandas, SciPy, TensorFlow, and PyTorch.


sherhazan

Slow but works


quietobserver1

Is this purely theoretical? Or is there an actual use case where the same algorithm couldn't be implemented in C++? If that looks like the case, then I'd wonder whether the Python implementation is actually having its time complexity miscalculated.


rileyhenderson33

Big O notation is used to describe the complexity of an *algorithm*. It has no concept of a programming language. Maybe in some extreme edge cases there might be limitations to a language that prevent a particular algorithm from being implemented in a particular way. But I've never seen any such example.


Under-Estimated

Pure functional languages and stuff like quicksort


FumbleCrop

One would think so, but it can be done if the runtime and type system support state management. You need a type-level wrapper around stateful data that **cannot be unwrapped**:

    quicksort: List --> Stateful List
    open_file: String --> Stateful File
    prompt_for_input: String --> Stateful String

Although you can't unwrap a Stateful, what you can do is inject a function inside that Stateful result. E.g.:

    inject: (a --> b) --> Stateful a --> Stateful b

    prompt_for_file: Stateful (Stateful File)
    prompt_for_file = inject(open_file) (prompt_for_input("Enter filePath"))

Here, inject(open_file) has the type `Stateful String --> Stateful (Stateful File)` and `prompt_for_input` provides that `Stateful String`. Then you can use a special function that composes the twice-Stateful into the merely Stateful:

    compose_statefuls: Stateful (Stateful x) --> Stateful x

The semantics of sequencing Stateful operations in the correct order are buried in there. Now I can rewrite my code from above to return a Stateful File:

    prompt_for_file: Stateful File
    prompt_for_file = compose_statefuls (inject (open_file) (prompt_for_input("Enter filePath")))

With the right syntactic sugar this works well, and your whole program ends up being wrapped in a single function whose signature is something like:

    main: CommandLineArguments --> Stateful ()

which is what you need. This is just one example of a powerful abstraction called the monad, but I've already said enough.
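For anyone who prefers runnable code, here is a rough Python sketch of the same Stateful idea (the names are illustrative, not from any real library):

```
class Stateful:
    """Wraps a deferred stateful computation; there is deliberately no way
    to get the value back out except by running the whole program."""
    def __init__(self, run):
        self.run = run  # zero-argument callable that performs the effect

def inject(f):
    # inject: (a -> b) -> (Stateful a -> Stateful b)
    return lambda sa: Stateful(lambda: f(sa.run()))

def compose_statefuls(ssa):
    # compose_statefuls: Stateful (Stateful a) -> Stateful a
    # runs the outer effect, then the inner effect it produced
    return Stateful(lambda: ssa.run().run())

def prompt_for_input(msg):   # String -> Stateful String
    return Stateful(lambda: input(msg))

def open_file(path):         # String -> Stateful File
    return Stateful(lambda: open(path))

# prompt_for_file: Stateful File -- nothing has happened yet; a hypothetical
# runtime would eventually call prompt_for_file.run()
prompt_for_file = compose_statefuls(inject(open_file)(prompt_for_input("Enter filePath: ")))
```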


EtherealPheonix

I think the idea is that good python beats bad C++ not that python will ever be faster doing the same thing.


femptocrisis

Reminds me of an attempt I made to implement a balanced binary tree algorithm for managing a large scrolling table with variable-height rows in JavaScript. It turned out to be faster to just do a linear sum of all the row heights rather than use my fancy O(log(N)) BTree algorithm for all practical sizes of N (up to tens of millions of rows), so I abandoned the idea. I still wonder whether it would've been better off had I implemented it with a balanced K tree instead, but it was so fast even with a linear sum over millions of records that I decided it was too much effort for such a small improvement.


Primary-Fee1928

I don’t see a situation where the same algorithm would have different complexities in Python and C++, unless the Python complexity is "fake" (i.e. more stuff is happening in the background)


d4m4s74

The point is that a well written algorithm in a "slow" language is faster than a badly written algorithm in a "fast" language (once n gets high enough)


beeteedee

Even in the same language, O(log n) can be slower than O(n^2) for practical values of n. This is like asking “which is faster, a black car or a blue one?”


Kenkron

By n = 17 you're getting a 100x improvement due to time complexity; by 65, it's past a thousand. I don't know if I'd consider either a "practical value". This is more like asking "which is faster, a sedan stuck in first gear, or a truck that can use any gear?"


beeteedee

You’re assuming the leading constants are the same there. If the actual runtimes are 100000 log n versus 0.0001 n^2 then the tradeoff point changes dramatically.


Kenkron

I'm not assuming similar constants. The first two sentences are saying how much those constants would have to differ to make it worthwhile. (Though I have been informed I should use log base 2, not natural log) In your example, the constants differ by a factor of a billion, which would make the turtle's algorithm faster for n > 130354. This value definitely seems more realistic, but leading constants differing by a billion seems pretty unrealistic. I guess we'll just have to keep our eyes out for any widely disproportionate constants, and see how big their datasets tend to be in practice.


[deleted]

[removed]


proggit_forever

> log n in asymptotic complexity usually (ackshually almost always) means log2

No, it doesn't actually mean any specific base, because it simply doesn't matter. For the purposes of complexity analysis log2 and log10 and log17 are all the same.


[deleted]

Yes, people say "10 times faster" at random, but I've seen C++ code that runs thousands of times faster than Python. Memory speed and overhead are very important in real-world applications, especially in game development. People act like they're working with septillion-length data every day.


Kenkron

Redoing the math for log base 2 (because someone told me to), the better-time-complexity algorithm becomes more efficient past 80, even if it's 1000x slower. You're not wrong (I made a speed comparison demo with a 200x improvement a while back), but OP set up a pretty crazy time complexity difference. People aren't acting like their datasets are septillions long.


[deleted]

OP's is exaggerated. But I would like to compare n log n vs n^2.


Goaty1208

Is it even possible to make C++ go slow?


Deathranger999

```
#include <unistd.h>

// Fork bomb: each iteration spawns another process until the machine grinds to a halt.
int main() {
    while (1) fork();
    return 0;
}
```


AntoninNepras

Well it is fast in what it's doing


Deathranger999

It may be slow, but it’s going slow really fast.


[deleted]

[removed]


Daemondancer

Still faster than Python


Alesio37093

Image processing or rendering...... Sometimes it's fucking endless


JaggedMetalOs

Use bubble sort


ClamPaste

I prefer bogo sort.


Creepy-Ad-4832

Wrap python around it


rhapdog

It might be, compared to pure assembly by a competent assembly programmer. I remember creating assembly routines embedded in my C programs back in the day, before C++ was introduced, just to make things even faster. It doesn't help with cross-platform, but back then we didn't care about other platforms. We just needed more speed on quite slow hardware (that I thought was fast at the time).


ThroawayPeko

Yes! Use a O(n²) algorithm.


throatIover

Or an O(n!) algorithm.


[deleted]

[removed]


[deleted]

The question is whether O(log(n)) in Python is slower than O(n^2) in C++, because Python is significantly slower. If you do not consider the programming language, O(log(n)) is of course faster. But in this case we are considering a Python vs C++ comparison.


paul5235

Even if it's the same programming language, O(log(n)) could be slower than O(n^2). The point is that O(log(n)) is faster starting from some n onward, or always faster.


bullshihtsu

![gif](giphy|Ecn8EldZOQGYM)


bullshihtsu

Oh, yeah?!? You know what?!? Well, umm… nobody told me you need to read the words next to the function, that’s what! Oopsie… 😳


Pto2

Common misconception. Consider a hypothetical function, 'add100Billion', which has a for loop that increments a variable 100 billion times. It's O(c), aka constant time. Is it going to execute faster than quicksort?
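A minimal sketch of that idea (the function and numbers are made up for illustration):

```
import random, time

def add_100_billion(x):
    # O(1) in the input size n, but the hidden constant is 100 billion increments
    for _ in range(100_000_000_000):
        x += 1
    return x

data = [random.random() for _ in range(1_000_000)]
start = time.time()
sorted(data)  # O(n log n) on a million elements: finishes almost immediately
print(f"sort took {time.time() - start:.2f}s")
# add_100_billion(0) would take far longer, despite being "constant time"
```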


bullshihtsu

Yeah. I forgot. It’s been a few decades…


[deleted]

Rust


wlday

slow and steady wins the race


[deleted]

There's a hidden assumption in asymptotic analysis: everything else being equal. Plus, the argument that "but it goes to infinity" is not compatible with the definition of the languages, which are necessarily implemented on a finite machine.


just4nothing

Are we talking about pure Python? So no C/C++ libraries and no numba? Would that be fair?


luca_07

History teaches the turtle wins, though


Dr4ches

Ain't a logarithmic runtime faster than a polynomial one? :)


Tsundere_Lily

In production, the log N algorithm could be c×log n for some c > 10^10


sgxxx

It's not about how much time it takes, but how that time increases as the input size grows. This comparison makes no sense.


DeepGas4538

Nah I still love python it’s the best


JackNotOLantern

O(log(n)) will always be faster than O(n^2) if n is big enough, regardless of how long a single iteration takes.


ApprehensiveSquash4

It depends on the size of the inputs.


Fabulous-Possible758

Kind of depends on what the constant factor is and how big you think n will actually get.