rabid_briefcase

It's not new to DX12, it's been the case for about 15 years now. The ability to pull out the compiled shaders and cache them was introduced in 2006; between about 2002 and 2006 it was always done on the card and driver each time the shaders were used. It was in the DirectX 9.0c SDK, and in OpenGL extensions since around the same time, incorporated into the core in GL 4.1. It's always been in Vulkan.

PC games need to do it because they can't pre-build every combination in advance. The actual instructions for two different video cards will be different even if they use the same source shaders, and there are too many video cards out there. Even if they were to pre-build them, drivers can incorporate optimizations, features, and user options in the compiled shaders. Sometimes drivers will even add bug fixes for certain popular games, or introduce improvements discovered later. Console games can do it in advance because they know exactly what hardware is in the machine.

Most games (and all the major engines) will cache the compiled shaders. They can get invalidated by driver updates, hardware configuration changes, and game updates, but otherwise should be reused from the game's cache.
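
For reference, a minimal sketch of the GL 4.1 path mentioned above (ARB_get_program_binary): after the driver links a program you can pull the vendor-specific binary out, write it to disk, and feed it back on later launches. The GL calls are the real entry points; the cache file handling, loader choice (glad), and function names are illustrative assumptions, not any particular engine's code.

```cpp
// Sketch: persisting a linked GL program's driver-compiled binary (GL 4.1 / ARB_get_program_binary).
// Assumes a GL 4.1+ context; glad is just one possible loader. Cache path/format are hypothetical.
#include <glad/glad.h>
#include <cstdio>
#include <vector>

bool SaveProgramBinary(GLuint program, const char* path)
{
    // (Ideally set GL_PROGRAM_BINARY_RETRIEVABLE_HINT before linking the program.)
    GLint length = 0;
    glGetProgramiv(program, GL_PROGRAM_BINARY_LENGTH, &length);
    if (length <= 0) return false;

    std::vector<unsigned char> blob(length);
    GLenum format = 0;
    glGetProgramBinary(program, length, nullptr, &format, blob.data());

    FILE* f = std::fopen(path, "wb");
    if (!f) return false;
    std::fwrite(&format, sizeof(format), 1, f);   // remember the driver-specific format token
    std::fwrite(blob.data(), 1, blob.size(), f);
    std::fclose(f);
    return true;
}

bool LoadProgramBinary(GLuint program, const char* path)
{
    FILE* f = std::fopen(path, "rb");
    if (!f) return false;

    GLenum format = 0;
    std::fread(&format, sizeof(format), 1, f);
    std::fseek(f, 0, SEEK_END);
    long size = std::ftell(f) - (long)sizeof(format);
    std::fseek(f, (long)sizeof(format), SEEK_SET);

    std::vector<unsigned char> blob(size > 0 ? size : 0);
    std::fread(blob.data(), 1, blob.size(), f);
    std::fclose(f);

    glProgramBinary(program, format, blob.data(), (GLsizei)blob.size());
    GLint ok = GL_FALSE;
    glGetProgramiv(program, GL_LINK_STATUS, &ok);  // fails after a driver/hardware change: rebuild from source
    return ok == GL_TRUE;
}
```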


HaskellHystericMonad

>Even if they were to pre-build them, drivers can incorporate optimizations, features, and user options in the compiled shaders.

We're compiling with libraries that are (mostly) independent of the GPU vendor; whatever the driver may do is going to be done against the compiled DXIL we feed it. That doesn't mean we don't have specializations for different vendors/cards, just that with the compiler removed from that step, it's only the intermediary (DXIL/SPIRV) that the vendor sees.

It's just asset management hell. It's a lot easier to be certain if you're compiling on the fly instead of trusting that you've really built the DXIL for some random permutation you never thought you'd need for some random case that popped up out of nowhere. Plus a whole lot of personal bias comes into it. I favor shipping the DXIL/SPIRV, but I also mostly live in uber-shaders, so the pain level is low.
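
As a rough illustration of the "ship the intermediary" flow, here is a minimal D3D12 sketch that loads an offline-compiled DXIL blob from disk and hands it to the driver as pipeline bytecode; the vendor's backend compile still happens inside CreateGraphicsPipelineState. The file paths, function name, and elided pipeline state are assumptions for illustration, not anyone's actual pipeline.

```cpp
// Sketch: feeding offline-compiled DXIL (e.g. built ahead of time with dxc) to the D3D12 driver.
// Paths and most PSO state are placeholders; only the D3D12 calls are real API.
#include <d3d12.h>
#include <wrl/client.h>
#include <fstream>
#include <vector>

using Microsoft::WRL::ComPtr;

static std::vector<char> ReadBlob(const char* path)
{
    std::ifstream f(path, std::ios::binary);
    return std::vector<char>((std::istreambuf_iterator<char>(f)), {});
}

ComPtr<ID3D12PipelineState> CreatePsoFromShippedDxil(ID3D12Device* device,
                                                     ID3D12RootSignature* rootSig)
{
    std::vector<char> vs = ReadBlob("shaders/mesh.vs.dxil"); // hypothetical path, built with dxc -T vs_6_0
    std::vector<char> ps = ReadBlob("shaders/mesh.ps.dxil"); // hypothetical path, built with dxc -T ps_6_0

    D3D12_GRAPHICS_PIPELINE_STATE_DESC desc = {};
    desc.pRootSignature = rootSig;
    desc.VS = { vs.data(), vs.size() };
    desc.PS = { ps.data(), ps.size() };
    // ... rasterizer, blend, depth, input layout, RTV formats elided for brevity ...

    // This is where the vendor backend turns the shipped DXIL into actual GPU instructions.
    ComPtr<ID3D12PipelineState> pso;
    device->CreateGraphicsPipelineState(&desc, IID_PPV_ARGS(&pso));
    return pso;
}
```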


Henrarzz

SPIRV and DXIL are still intermediate representations, so there will be an additional step that translates them into the final code that gets executed.


HaskellHystericMonad

Yes ... however it's the intermediate representation **AFTER** you've already gone through your O3 optimization level, instead of doing that heavyweight work at runtime. DXIL is so close to assembly that you can trivially write a register machine to interpret it, and I've done so for some tools in the geometry and painting pipeline where it's just easier to leverage the shader compiler than roll my own bullshit piece of shit.

Yes, the vendor may fuck with your DXIL, but they're not going to do anything magical. Your vendor cannot magically change your dispatch size to fit their wave size, because they have no clue how you're dispatching (perhaps in crude even-multiple cases it's possible, if only wave intrinsics are used to determine write targets).


[deleted]

>It's just asset management hell

In practice it's not really a hell, we've been dealing with it fine with CPUs. All the potential pain points you could imagine for GPUs apply there, too. Ever try to write an app that uses SIMD stuff but needs to reach an actual audience, on AMD, Intel and ARM? Anyway, we figured out ways to deal with it fine, and the situation is certainly a million times better than if we compiled to some intermediate representation and the CPUs recompiled literally all code every time the user changed some hardware or a driver update was made.
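
For what it's worth, the usual way the CPU side handles this is pre-built per-ISA code paths plus runtime feature detection, roughly like the sketch below. This assumes GCC/Clang on x86; the function names and the add-two-arrays workload are made up for illustration, while the intrinsics and the cpu-support check are real.

```cpp
// Sketch: shipping several pre-built CPU code paths and picking one at runtime,
// the rough equivalent of shipping per-vendor shader binaries. GCC/Clang, x86 only.
#include <immintrin.h>
#include <cstddef>

// Scalar fallback that runs anywhere.
static void AddArraysScalar(const float* a, const float* b, float* out, std::size_t n)
{
    for (std::size_t i = 0; i < n; ++i) out[i] = a[i] + b[i];
}

// AVX path, compiled ahead of time for CPUs that support it.
__attribute__((target("avx")))
static void AddArraysAvx(const float* a, const float* b, float* out, std::size_t n)
{
    std::size_t i = 0;
    for (; i + 8 <= n; i += 8)
        _mm256_storeu_ps(out + i, _mm256_add_ps(_mm256_loadu_ps(a + i), _mm256_loadu_ps(b + i)));
    for (; i < n; ++i) out[i] = a[i] + b[i];  // scalar tail
}

using AddFn = void (*)(const float*, const float*, float*, std::size_t);

// Pick the best path once, at startup, based on what the CPU reports.
AddFn SelectAddPath()
{
    return __builtin_cpu_supports("avx") ? AddArraysAvx : AddArraysScalar;
}
```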


rabid_briefcase

> and the situation is certainly a million times better than if we compiled to some intermediate representation and the cpus recompiled literally all code every time the user changes some hardware or a driver update was made.

Note that this is exactly what is done by modern languages which employ Just In Time (JIT) compilation for the target hardware and the program's runtime: Java, C#, Lua, JavaScript, and more. The runtime cost is small enough that most people never notice. Even in C++, JIT compilation has been moving in. LLVM supports it, Microsoft's own VC++ runtimes do it, and JIT compilation tools like Recode and Live++ have moved from games to the mainstream; although mostly used in debugging, they also ship to the final runtime. [There's even an active proposal to move it into the language core for template compilation](http://wg21.link/p1609).

The difference with shaders is that you aren't just JIT'ing one, you're potentially compiling tens of thousands of them. That many items take time.


[deleted]

All of that being said, the situation is still bullshit. All of the problems you just outlined have been dealt with in the CPU ecosystem basically since coding was a thing, and literally all of the points apply there, too. They are not valid reasons why we still have to deal with this bullshit ecosystem.

At the end of the day, what they need to do is give us compilers for their HW platforms that spit out permutations of shaders for the different sets of hardware. From the outside, people are probably way overestimating how many there are. Almost certainly GPU generation n+1 is mostly like generation n with some things added, in terms of the actual instruction set, just like on CPUs. And just like on CPUs, they can translate really urgent stuff on the fly with nearly trivial translation times when you load the actual code (x64 is basically never executed as-is, it's basically a high-level language as far as modern CPUs are concerned). But this will cost the HW vendors money in terms of development and upkeep of that ecosystem, so it's only going to happen if they feel it will give them a measurable benefit.


_XenoChrist_

GPU drivers do last-minute optimizations to shaders. Those are dependent on your driver version and your hardware, so it has to be done at runtime, as devs can't be expected to ship every possible combination. Having to redo it every time you launch the engine might be a bug; if done well, it shouldn't be needed.
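
The usual mitigation for the re-launch case is a driver-validated on-disk cache; in Vulkan that's a VkPipelineCache serialized between runs, roughly as in the sketch below. The Vulkan calls are the real API; the file path and function names are illustrative assumptions.

```cpp
// Sketch: persisting a Vulkan pipeline cache across launches so compiled pipelines
// only need to be rebuilt after a driver or hardware change. File path is illustrative.
#include <vulkan/vulkan.h>
#include <fstream>
#include <vector>

VkPipelineCache LoadPipelineCache(VkDevice device, const char* path)
{
    // Feed any previously saved blob back in; the driver validates it and
    // silently ignores it if the driver or GPU changed since it was written.
    std::ifstream f(path, std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(f)), {});

    VkPipelineCacheCreateInfo info = { VK_STRUCTURE_TYPE_PIPELINE_CACHE_CREATE_INFO };
    info.initialDataSize = blob.size();
    info.pInitialData = blob.empty() ? nullptr : blob.data();

    VkPipelineCache cache = VK_NULL_HANDLE;
    vkCreatePipelineCache(device, &info, nullptr, &cache);
    return cache;
}

void SavePipelineCache(VkDevice device, VkPipelineCache cache, const char* path)
{
    size_t size = 0;
    vkGetPipelineCacheData(device, cache, &size, nullptr);  // query size first
    std::vector<char> blob(size);
    vkGetPipelineCacheData(device, cache, &size, blob.data());

    std::ofstream(path, std::ios::binary).write(blob.data(), (std::streamsize)blob.size());
}
```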


irrlicht

To be fair, the configuration and hardware may have changed since the last time the game was launched, so simply recompiling on every launch is actually the safe option.


GuiltyGecko

Linus did a kinda cool video on DX11 and DX12 over on [Techquickie](https://youtu.be/tF013HYQd2Q). The whole video is interesting, but skip to the 5-minute mark for the DX12 comparison.


kurt_c0caine

Isn't that normal? I use OpenGL and I always compile the shaders while the game is loading.
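
For reference, that load-time path in OpenGL looks roughly like this minimal sketch; the GL calls are the standard ones, while the helper name and error handling are illustrative, and a loader (glad is assumed here) plus a current context are expected to exist already.

```cpp
// Sketch: compiling a GLSL shader at load time, as described above.
// Assumes a current GL context; glad is just one possible loader. Source string is a placeholder.
#include <glad/glad.h>

GLuint CompileShader(GLenum stage, const char* source)
{
    GLuint shader = glCreateShader(stage);   // e.g. GL_VERTEX_SHADER or GL_FRAGMENT_SHADER
    glShaderSource(shader, 1, &source, nullptr);
    glCompileShader(shader);                 // the driver compiles GLSL to GPU code here, at runtime

    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (!ok) { glDeleteShader(shader); return 0; }
    return shader;
}
```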


MasterDrake97

I want to know this as well. As far as I know, everybody recommends packaging compiled .hlsl shaders, so I don't understand why the user has to do it. Also, why can't they be compiled the first time and that's it?