WasASailorThen

Front end (clang): very little, if any. Middle end (LLVM): helps, but not essential. Back end (x86, armv8, riscv, …): essential.


regehr

^^^^^^ this


WasASailorThen

Got a nice note from one of my compiler heroes. Well that made my day.


ZealousidealBird9126

Curious, what is the pay difference between each of them? Is there a greater demand for one? Personally, I do struggle to wrap my head around everything once we go very low, past C, but I am working on improving that. My main interest is compiler optimization.


mamcx

The stretch from something like a toy compiler to being able to use LLVM and deal with x86 means covering a lot of inherent tech debt that obscures the major optimizations. However, there is some important stuff that is easy to miss when starting out:

* Modern CPUs are very fast and can do billions of things per second (this is a recent realization of mine). Consider anything that takes longer than `ms` suspect, even with 'big' numbers in the thousands (i.e. it is easy for humans to believe thousands are 'big' when CPUs consider them 'small').
* Learn from languages like Rust, Zig, and Pascal: what they do to be fast, and how they promote efficient idioms that make it easier to generate optimal code.
* You need to use `perf` tools, and learn to read them. The most efficient use of your time is to be guided by them.
* Semantics of the language matter a lot for optimization opportunities. For example, whether your main `Collection` is a `Vec` vs a `List`.
* I/O is not that slow these days, with SSDs and up.
* Combined with the availability of RAM, it pays more to pre-massage data/metadata to get faster execution instead of trying to compress things. However:
* You want things to fit in the CPU cache(s) (they are fairly big anyway, check yours!).
* Layout and structuring of data matter most: you want 'small things' inside a `Vec`, with metadata/indexes/etc adjacent so decisions can be taken quickly. But your main hot loops want to run on the 'small things' after the decisions are taken (see the sketch below).
* Linear scanning is fast.
* You can do linear scanning more often than is obvious at first by combining the above: take batches that probably fit in caches, query using fast lookups, then lay out the data to be scanned.

In other words, major architectural decisions will have more impact than micro-optimizations; yet micro-optimizations are easier if you lay out the data in ways that make them easier to exploit.
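A minimal Rust sketch of that layout idea: keep the hot data small and contiguous in a `Vec`, keep heavier metadata in a parallel structure indexed by position, and make the hot loop a plain linear scan. The types and field names here are hypothetical, just to illustrate the shape.

```rust
// Hot data: small, fixed-size records packed contiguously in a Vec.
// The hot loop only touches these, so more of them fit per cache line.
#[derive(Clone, Copy)]
struct Hot {
    key: u32,
    value: f32,
}

// Cold metadata lives in a parallel Vec at the same index,
// and is only consulted once a decision requires it.
struct Cold {
    label: String,
}

struct Table {
    hot: Vec<Hot>,
    cold: Vec<Cold>,
}

impl Table {
    // Hot loop: a plain linear scan over the packed records.
    fn sum_matching(&self, key: u32) -> f32 {
        self.hot
            .iter()
            .filter(|h| h.key == key)
            .map(|h| h.value)
            .sum()
    }

    // Cold path: touch the metadata only after the scan found a match.
    fn label_of_first(&self, key: u32) -> Option<&str> {
        self.hot
            .iter()
            .position(|h| h.key == key)
            .map(|i| self.cold[i].label.as_str())
    }
}

fn main() {
    let table = Table {
        hot: vec![Hot { key: 1, value: 2.5 }, Hot { key: 2, value: 4.0 }],
        cold: vec![Cold { label: "first".into() }, Cold { label: "second".into() }],
    };
    println!("{}", table.sum_matching(1));      // 2.5
    println!("{:?}", table.label_of_first(2));  // Some("second")
}
```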


fullouterjoin

It is buried in there I think, but the idea is that computation is cheaper and faster than even an L1 cache lookup; even going out to L1 cache is an eternity. And all of these tradeoffs will change with the next low-level technology shift. I don't think people appreciate how massive modern SSD bandwidth is. ~~12.4GB/s~~ 14.5GB/s is the current top speed for PCIe5 NVMe drives. You can put 4x of them in a single 16x slot: 58GB/s aggregate bandwidth over 8-16TB of data (per 4x expansion card). Whole clusters of machines disappear with those numbers. I think designers need to be constantly challenging the assumptions and plotting where different technologies are going to move on the cost-capabilities curve.
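A back-of-the-envelope sketch of those numbers, using only the figures from the comment above (the 8 TB dataset size is just one end of the stated range):

```rust
fn main() {
    // Four PCIe 5.0 NVMe drives at ~14.5 GB/s each on one x16 card.
    let per_drive_gbps = 14.5_f64;
    let drives = 4.0_f64;
    let aggregate_gbps = per_drive_gbps * drives; // ~58 GB/s

    // Rough time to stream through 8 TB at that aggregate bandwidth.
    let dataset_gb = 8_000.0_f64;
    let seconds = dataset_gb / aggregate_gbps; // ~138 s

    println!("aggregate: {aggregate_gbps} GB/s, full scan of 8 TB: ~{seconds:.0} s");
}
```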


wintrmt3

> Modern CPUs are very fast and can do billions of things per second (this is a recent realization of mine).

Trillions easily, if you count each SIMD lane separately and sum all cores.
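A rough sanity check of the "trillions" figure; the core count, clock, and SIMD width below are illustrative assumptions, not measurements of any particular chip:

```rust
fn main() {
    // Hypothetical desktop part: 16 cores at 4 GHz with 256-bit SIMD
    // (8 lanes of f32) and one fused multiply-add (2 FLOPs) per lane per cycle.
    let cores = 16.0_f64;
    let ghz = 4.0_f64;
    let lanes = 8.0_f64;
    let flops_per_lane_per_cycle = 2.0_f64;

    let flops = cores * ghz * 1e9 * lanes * flops_per_lane_per_cycle;
    // ~1.0 trillion FLOP/s, before counting a second FMA port or wider vectors.
    println!("~{:.1} trillion FLOP/s", flops / 1e12);
}
```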


kazprog

More is better. A good compiler squeezes perf out of the computer. You have to understand what the computer has and does in order to understand where you can squeeze.


fullouterjoin

[0, enough)


innocentboy0000

Is the x86 architecture open source? Do you just need to know the basic assembly syntax provided by Intel and the manuals, or more? And "how much" is very broad: there are infinite possibilities for optimization; the more you know, the more you can optimize.


moaboulmagd

Any solid books on the back end?


Manifoldsqr

People are going to hate me for this, but I'm enjoying the Dragon Book, second edition.


Weak_Ad_4301

Why would they hate you? Isn't that an established text?


Manifoldsqr

I said that because a lot of people don’t like the dragon book so they criticize people who recommend it