Slang takes ~30x as long as shaderc to compile a simple compute shader #6358
Thanks for providing this benchmark; we will look into this issue.
I am able to improve the performance quite a bit in this benchmark in #6396, but I should also note that Slang will never be as fast as GLSL compilers, similar to how a C++ compiler can never be as fast as a C compiler, due to the more powerful type system and more flexible compiler architecture. There might still be one or two things we can do from here to get another 20-30% speedup, but it is unlikely we can get much better performance on these small examples. Note that many components in the compiler incur a flat cost at the beginning that is supposed to be amortized when compiling larger code. The advanced type system in Slang often allows users to write more compact, generic code, all of which helps with compile time when handling more complex application code. In particular, Slang allows you to precompile modules into .slang-module files, so you never need to reparse the same module twice. In this example, if you first convert the .slang file to a .slang-module file and then generate code from there, you will be able to bypass the front-end entirely and get much shorter compile times.
Thank you @csyonghe! I'll build #6396 and verify the performance improvement. I had some ideas around further optimizations (e.g., I saw …). Just to check my understanding, the Slang module system wouldn't help for small shaders like this, right?
If compile time is a concern, the idea is to always precompile all .slang files into slang-modules before your application starts; see https://shader-slang.org/slang/user-guide/link-time-specialization.html
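As a rough source-level sketch of that workflow (the file and function names here are made up for illustration, not part of the original report):

```slang
// common.slang -- precompiled ahead of time into common.slang-module
module common;

// Public so it is visible to modules that import `common`.
public float addOne(float x)
{
    return x + 1.0;
}
```

```slang
// main.slang -- importing the precompiled binary module avoids
// reparsing and rechecking common.slang on every compile.
import common;

public float addTwo(float x)
{
    return addOne(addOne(x));
}
```

At build time (or application startup), each .slang file would be compiled once into its .slang-module form, and later compiles would load those binary modules instead of re-running the front-end on the source.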
I took a quick look into …
Yes, this is consistent with the profiling results I've been seeing, and there is no obvious bottleneck in the system. I am not identifying any low-hanging fruit here that would significantly improve performance from the current state. Since it is not clear if there are any actionable items, would you mind if we close this issue? I think our users can avoid paying the front-end cost anyway if they can architect their code to use Slang modules.
Sure, this can be closed; I feel like there's probably more to find here (should compiling a file this small require 275K dynamic casts?), but this speedup is good to see. I'll create another issue for the larger-scale benchmark testing out modules if I find issues there. Thanks!
I agree that there are definitely more optimizations to be found, but it is unlikely that a single change or fix is going to make things significantly different, and the returns will be diminishing. Just to show why the type system is complex, here is an example of what needs to happen when checking a simple expression: …
Step 3 there can spawn a lot of additional checks, because Slang allows things like:
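For illustration, a blanket conformance of that kind can be sketched roughly as follows (the interface and method names are placeholders, chosen to match the discussion below):

```slang
interface IInterface1
{
    float get();
}

interface IInterface2
{
    float getTwice();
}

// An extension making every type that conforms to IInterface1
// also conform to IInterface2.
extension<T : IInterface1> T : IInterface2
{
    float getTwice() { return 2.0 * this.get(); }
}
```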
That makes all types that conform to IInterface1 also conform to IInterface2, so if we have something like:
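For example, a generic overload candidate constrained on IInterface2, called with an argument that only declares conformance to IInterface1 (again, the names below are placeholders):

```slang
// A candidate overload constrained on IInterface2.
float process<T : IInterface2>(T value)
{
    return value.getTwice();
}

// A type that only declares conformance to IInterface1.
struct MyType : IInterface1
{
    float get() { return 1.0; }
}

float test(MyType m)
{
    // Checking this call requires proving MyType : IInterface2,
    // which only holds through the extension above.
    return process(m);
}
```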
To know whether this candidate is applicable, we need to not only check whether an argument type conforms to IInterface2 directly, but also whether it conforms indirectly, for example by conforming to IInterface1 and picking up IInterface2 through the extension. In GLSL, which does not have generics, the corresponding checking step is much simpler. There are certainly more optimizations we can do algorithmically to make things fast; in fact, Slang already uses some caches to hold checked subtype relationships and operator overload resolution results, but there can be more opportunities for optimization. This is just to give you an idea of the complexity of the type system, and hopefully it explains why Slang's type checking is going to take more time than the GLSL compiler's.
I am going to close the issue now, but I am happy to work with anyone who is interested in optimizing the compiler to see if there is more we can do here.
Hi Slang team! I've been running into some issues affecting hot-reload workflows, where re-compiling small shaders is common.
The ToT version of Slang (as of 944c19b) takes 48-49 ms on my Windows computer to compile an 841-byte compute shader (the shader.slang included in the benchmark repository linked below) to SPIR-V. The compilation imports no modules, does not include Slang global session creation time or I/O time, uses optimization level 0, and is averaged over 128 runs. I've included a benchmark you can use to reproduce this issue; more information about it is below.
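To give a sense of scale, a compute shader of roughly that shape might look like the sketch below; this is illustrative only and is not the actual benchmark source:

```slang
// Illustrative sketch only -- not the 841-byte shader used in the benchmark.
RWStructuredBuffer<float> inputBuffer;
RWStructuredBuffer<float> outputBuffer;

[shader("compute")]
[numthreads(64, 1, 1)]
void computeMain(uint3 threadId : SV_DispatchThreadID)
{
    // Trivial per-element transform.
    outputBuffer[threadId.x] = inputBuffer[threadId.x] * 2.0 + 1.0;
}
```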
The options and targets used in my benchmark are:
In comparison, shaderc (using shaderc_shared from the 1.4.304.0 Vulkan SDK) compiles the GLSL equivalent in ~1.6 ms, about 30 times as quickly.
The generated SPIR-V files are similar, although shaderc's is slightly larger.
I've put together a benchmark at https://github.com/NBickford-NV/slang-compile-timer to test this under controlled conditions. It first initializes each shader compiler, then times how long it takes to compile a shader to SPIR-V 128 times and averages the results. (Varying the number of repetitions doesn't change the result much, so the first compilation isn't significantly more expensive.)
To build the benchmark (currently only tested on Windows), run:
I've included a Release binary compiled using Visual Studio 2022 17.12.3.
Then to benchmark Slang, run:

./slang-compile-timer shader.slang

And to benchmark shaderc, run:

./slang-compile-timer --shaderc shader.comp.glsl

Thank you!
package.zip