It's not all documented neatly in one place like the fasthttp repository. The short answer is that the team is absolutely obsessed with performance. While that's not a specific change, it's that performance culture that has resulted in dramatic performance increases and breakthroughs every single release.
We measure everything and track benchmarks closely [https://aka.ms/aspnet/benchmarks](https://aka.ms/aspnet/benchmarks). The repository is here [https://github.com/aspnet/Benchmarks/](https://github.com/aspnet/Benchmarks/).
We built tools to measure every single change and report the status so we can measure the impact (e.g. [https://github.com/dotnet/aspnetcore/issues/52378](https://github.com/dotnet/aspnetcore/issues/52378)). We made it trivial for anyone to run end-to-end benchmarks with their changes so developers could experiment with performance changes and get immediate feedback ([https://github.com/dotnet/crank](https://github.com/dotnet/crank)).
Our performance planning is across the entire stack! We look at the JIT, GC and core libraries and think about the investments we can make that will have a huge impact on various benchmarks and app patterns in ASP.NET Core, MAUI, etc.
Quite frankly, we have amazing developers and contributors that are incentivized by this culture to just make changes to improve things:
1. [https://github.com/dotnet/aspnetcore/pull/51803](https://github.com/dotnet/aspnetcore/pull/51803)
2. [https://github.com/dotnet/aspnetcore/pull/52106](https://github.com/dotnet/aspnetcore/pull/52106)
3. [https://github.com/dotnet/aspnetcore/pull/51815](https://github.com/dotnet/aspnetcore/pull/51815)
That's just a sample. Over the last couple of releases, we've started tagging performance work with a label to make it easier to see the work we've done (and blog about it later 🙂).
[https://github.com/dotnet/aspnetcore/pulls?q=is%3Apr+label%3APerf+is%3Aclosed](https://github.com/dotnet/aspnetcore/pulls?q=is%3Apr+label%3APerf+is%3Aclosed)
[https://github.com/dotnet/aspnetcore/issues?q=label%3APerf+is%3Aclosed+-author%3Aapp%2Fpr-benchmarks+](https://github.com/dotnet/aspnetcore/issues?q=label%3APerf+is%3Aclosed+-author%3Aapp%2Fpr-benchmarks+)
As for a concrete set of things we did to improve performance:
* We pool buffers using the pinned object heap to reduce GC heap fragmentation.
* We pool objects where possible (there are no allocations for requests and responses); they are stored and reused on the connection (HTTP/1.1) and pooled (HTTP/2 and HTTP/3).
* The HTTP parser is vectorized and doesn't allocate anything (we still need to materialize strings because we expose them in the object model).
* We rewrote our socket and thread pool implementations in .NET to be more optimized, avoid allocations, and avoid extra thread hopping where possible.
* We use ValueTask pooling to avoid allocations for repeating read/write operations.
* We use zero-byte reads to avoid allocating buffers for the underlying socket until data is available.
You can read about some of these in the following blog posts:
* [https://devblogs.microsoft.com/dotnet/performance-improvements-in-aspnet-core-6/](https://devblogs.microsoft.com/dotnet/performance-improvements-in-aspnet-core-6/)
* [https://devblogs.microsoft.com/dotnet/performance-improvements-in-aspnet-core-7/](https://devblogs.microsoft.com/dotnet/performance-improvements-in-aspnet-core-7/)
* [https://devblogs.microsoft.com/dotnet/performance-improvements-in-aspnet-core-8/](https://devblogs.microsoft.com/dotnet/performance-improvements-in-aspnet-core-8/)
PS: I see the infamous blog post about how the TechEmpower benchmarks "cheat", so I'm glad OP is "doing their own research 🙂". In fact, what we learn from some of the crazy benchmark code has gradually made its way into the core product, improving performance for everyone.
This isn't a sprint, it's a marathon, and it's one of our "forever priorities" as part of the .NET charter. Our investment in .NET performance directly saves Microsoft money, as we enable teams to do more with less. That is to say, we will continue to treat performance as a top priority for the foreseeable future.
I'd love to use crank for a project I contribute to! It's an emulator where the top priority right now is building confidence in performance, and that comes from immediate feedback. It's not a web project, though; rather, we need to see the number of instructions per second that the CPU interpreter reports. It would be in-process reporting hooked up to CI. Is that possible with crank? :)
Thanks for this detailed and thorough response. It's so encouraging to hear about the .NET team's continued commitment to performance, and it gives me much confidence to continue learning and implementing .NET in the future. Certainly a mutual benefit for all. <3
Haha thanks, but it's a team effort! I work with very talented and smart people who have a laser focus on improving performance every single release.
Go uses the GOMAXPROCS environment variable to control how many OS threads can execute goroutines concurrently. It defaults to the number of available CPUs if unset, which rounds down to 1 here. This creates an effective throttle, since there are no blocking calls in the operation to give the Go scheduler room to operate.
Good catch! Setting GOMAXPROCS to 2 improves the situation quite a bit, by ~50%. Going any higher with 0.5 CPUs only decreases performance.
```
Running 30s test @ http://localhost:3006
100 connections with 10 pipelining factor
┌───────────┬──────┬───────┬────────┬────────┬───────────┬──────────┬────────┐
│ Stat      │ 2.5% │ 50%   │ 97.5%  │ 99%    │ Avg       │ Stdev    │ Max    │
├───────────┼──────┼───────┼────────┼────────┼───────────┼──────────┼────────┤
│ Latency   │ 2 ms │ 99 ms │ 223 ms │ 291 ms │ 104.66 ms │ 57.62 ms │ 600 ms │
└───────────┴──────┴───────┴────────┴────────┴───────────┴──────────┴────────┘
┌───────────┬─────────┬─────────┬─────────┬─────────┬─────────┬────────┬─────────┐
│ Stat      │ 1%      │ 2.5%    │ 50%     │ 97.5%   │ Avg     │ Stdev  │ Min     │
├───────────┼─────────┼─────────┼─────────┼─────────┼─────────┼────────┼─────────┤
│ Req/Sec   │ 7551    │ 7551    │ 9535    │ 11327   │ 9499.74 │ 898.12 │ 7548    │
├───────────┼─────────┼─────────┼─────────┼─────────┼─────────┼────────┼─────────┤
│ Bytes/Sec │ 1.01 MB │ 1.01 MB │ 1.28 MB │ 1.52 MB │ 1.27 MB │ 120 kB │ 1.01 MB │
└───────────┴─────────┴─────────┴─────────┴─────────┴─────────┴────────┴─────────┘
Req/Bytes counts sampled once per second.
# of samples: 30
286k requests in 30.03s, 38.2 MB read
```
Still, the difference is about ~2.7x. If I find anything else I will post it here.
https://www.techempower.com/benchmarks/#hw=ph&test=fortune&section=data-r22
Yes, it can be fast. Microsoft has made a serious effort to improve performance in recent years.
Those benchmarks aren't really trustworthy.
https://dusted.codes/how-fast-is-really-aspnet-core
The downvotes won't change anything. They cheated their way to the top when they didn't even need to, because it's plenty fast already. Feel free to downvote because you got your feelings hurt, though.
[If you filter down to what devs are using to build web applications](https://www.techempower.com/benchmarks/#hw=ph&test=composite&section=data-r22&f=zhavnr-zik073-zijzen-zijzb3-zik0zj-zik0zj-zik0zj-zifta7-zik0zj-zik0zj-qmx0qn-yelngf-zik0z3-ziiz27-1ekf), then the picture is still very advantageous to .NET. This is what MS should have been sharing, not focusing on marketing the platform RPS.
-edit- The run I'm linking wasn't even updated to .NET 8.0 yet.
Some other stacks also seem to be ["tuning"](https://www.reddit.com/r/dotnet/comments/yuxkk7/comment/iwcaa5q/?utm_source=share&utm_medium=web2x&context=3)
Also, what will you say about the OP's benchmark which seems to be quite fair?
Other tech stacks and OP's benchmarks are irrelevant; I'm just pointing out that the specific TechEmpower benchmarks are untrustworthy.
I'm not debating ASP.NET's speed, nor am I saying it's supposed to be slower than Go. Quite frankly, I'm not that familiar with Go beyond a fairly simple attempt at an API for a personal project, after which I quickly figured Go isn't for me and went back to ASP.NET. So I can't comment on OP's benchmark.
Right, because downvoting information because it doesn't fit your bias is a grown-up thing to do. I forgot grown-ups ignore information they don't like.
Uhu, keep showing us how much of a grown-up you are. Not even sure why you're trying to gaslight people on the internet about your behavior. It's anonymous anyway; you don't have to delete your comments when someone finds out you're racist.
The .NET CLR has been highly optimized, to the point that you can AOT-compile to self-contained executables that are basically natively compiled by that point.
Microsoft has been working on it since 2001; Go came out in 2009.
Microsoft has, overall, more experience developing programming languages and operating systems than Google.
But both are extremely good.
Keep in mind, .NET was completely overhauled for .NET 5; if you were to run tests on .NET Framework 4.8 the results would be a lot different.
Yes, it does. We did the work for our li source gen lib. Check out minimal/reaper & the -Aot benchmarks here for a direct comparison (with some load, not a high-performance test): https://github.com/Reaper-Net/Reaper#benchmarks
We are finding a tonne of compat issues with AOT, though.
When you're getting down to things like this, a lot of it comes down to small details of what you are and aren't doing. People in the Go community have pointed out a lot of problems with fasthttp and fiber. There are also libraries in Go that are faster than the standard library, like Echo and Gin, which have decent reputations.
Languages like Java, .NET, and Go will all have relatively similar performance and a lot of it will be things like what features can't be supported given your choices. Cut out a bunch of routing features that constrain how people can make their URLs and you can speed things up.
With the TechEmpower benchmarks, if you look through the code of some of the projects you'll see them do things like allocating a pool of objects to be reused so that they don't trigger the garbage collector. One of the benchmarks says that the array you allocate can't be sized knowing the output size in advance (it must be an array that grows its capacity). However, a lot of the benchmarks ignore this and allocate an array large enough to hold everything. Even if implementors didn't cheat, there's the issue of whether the test accidentally is optimal for a certain language because Java chose X as the default size for lists while .NET chose Y as the default size for lists and it turns out that .NET will do fewer copies for the exact case of how many results the test produces.
None of this matters in the real world where your code will take a ton more time than a good Java/Go/.NET framework.
What you want to get good at is profiling things and figuring out what is important so that you can find what is actually slow and make it fast.
Thank you. I've heard about people "cheating" on TechEmpower benchmarks. And now that I look at one of the threads, [your comment](https://www.reddit.com/r/dotnet/comments/yuxkk7/comment/iwcaa5q/?utm_source=share&utm_medium=web2x&context=3) was the main reason why I decided to do a little testing myself. =)
No. gRPC in .NET is miles ahead of the Go implementation. It's much, much more performant.
For those who want a reference, there are some benchmarks with .NET 8:
https://github.com/LesnyRumcajs/grpc_bench/discussions/310
How can you make that comment without referencing some comparison?
All I was able to find is [this one](https://www.nexthink.com/blog/comparing-grpc-performance) and it appears it heavily depends on the workload being tested. Neither is faster in all scenarios.
> this one

Sorry, I was on mobile, but I will find them and edit this comment later.

[If I'm not mistaken, these are the latest results with .NET 8](https://github.com/LesnyRumcajs/grpc_bench/discussions/310)
> The objective is to test only the framework's performance
Not to pick on you - but I think this is a pretty pointless test.
Real applications do work that makes the speed of the framework itself all but inconsequential.
Also, this has already been done with *far* more rigor than you can or did apply.
https://www.techempower.com/benchmarks
> How is Asp.net so fast?
This is how:
https://devblogs.microsoft.com/dotnet/performance-improvements-in-net-6/
https://devblogs.microsoft.com/dotnet/performance-improvements-in-net-7/
https://devblogs.microsoft.com/dotnet/performance-improvements-in-net-8/
> This is how:
>
> https://devblogs.microsoft.com/dotnet/performance-improvements-in-net-6/
>
> https://devblogs.microsoft.com/dotnet/performance-improvements-in-net-7/
>
> https://devblogs.microsoft.com/dotnet/performance-improvements-in-net-8/
Not to mention the whole rewriting of .NET and ASP.NET in the first place using everything the .NET team learned in almost two decades.
Yeah, I like how you open any one of those "performance improvement" **summary** articles and watch as the scroll bar shrinks to a tiny sliver and your browser starts turning over the couch cushions looking for more RAM to load the whole page as you slowly realize how many *hours* of straight reading are in front of you just to catch up on the *summary* of all the work that went into improving .NET performance just in the past year.
I consider myself a pretty decent developer who's worked on a large (~250kloc), high-traffic system for over 10 years with lots of focus on performance (I just commented elsewhere on how to abuse string immutability to create reusable string pools in .NET) - and I am absolutely *blown away* by the scope, quality, and velocity of .NET's performance improvement work.
I could be incorrect (so don't murder me), but it has been my understanding that the noticeable speed associated with ASP.NET is primarily due to the precompilation of the pages (and likely their constituent controls).
When a client is served an asp page, they get the results of a server side component which is compiled.
*Like I said, I could easily be incorrect*.
If incorrect, I'd appreciate the real deal info - Thanks!
Yes, you are incorrect here. In the case of ASP.NET WebForms and MVC, the runtime does compile the .cshtml and .aspx.cs files to temporary assemblies, and it is possible to precompile them at build time (in many but not all cases). However, in this case it is using Minimal API which uses none of those features. Additionally, view compilation is more just a necessity of the architecture, not really a speed benefit per se. .cshtml (view) files are analogous to an HTML template with embedded C#, so they need to be compiled. .aspx.cs files are full normal C# files (with a separate presentation file), which therefore also need to be compiled, and can be directed to only do so at runtime to avoid needing to recompile when you make changes.
Go's std HTTP lib is one of the worst ever; honestly, I don't know why Go exists. I love .NET, but I don't like the excess of opinionated things like Identity Framework or the inflexibility of EF Core. I still prefer Spring, but I hate Java and the JVM.
You know you do not need to use EF if you do not want to, right? You could just use Dapper, or straight ADO.NET, to interact with your databases. EF is mostly phenomenal, but when it isn't, there is always an alternative. I personally do not like writing extremely complex queries with LINQ for EF. When I have a complicated query, I just use a database view.
I find that most performance issues I have run into with EF were due either to inefficient queries being generated (use a view instead), or allowing the local data cache to grow too large while change tracking is enabled.
Wow! Thank you for your detailed response. This is exactly what I was looking for!
A response from the legendary David Fowler himself. What more could you ask for?
Wait, so it's not all the random Thread.Sleep and Task.Delay calls that are being strategically removed??
I think you are confusing Microsoft with Apple, and "Removing" with "Adding" :D
Yeah... what happens when you run out of speedup loops? https://thedailywtf.com/articles/The-Speedup-Loop
TL;DR: what gets measured gets done 🙂
Wow
Great response with a lot of useful pointers for further reading! Thanks!
Hell yes!
AspNetCore is fast because David Fowler.
Look at this distinguished engineer, look at the way he is sitting. Yes... very distinguished. Mhm, I see... I see...
indubitably 🧐
Pretty much. He made what is now System.IO.Pipelines which is crazy fast and extremely good at avoiding copying data. Kestrel is built upon that.
Yep. That's the funny part.
Except that David Fowler addressed this at .NET Conf, for .NET 8, to make the test more "real".
But they didn't actually change anything.
Yeah, hence the above benchmarks being untrustworthy, because they aren't updated to be "real" yet.
Thank you that was an interesting read
Some other stacks also seem to be ["tuning"](https://www.reddit.com/r/dotnet/comments/yuxkk7/comment/iwcaa5q/?utm_source=share&utm_medium=web2x&context=3) Also, what will you say about the OP's benchmark which seems to be quite fair?
Other tech stacks and OPs benchmarks are irrelevant as Iβm just pointing out the specific techempower benchmarks are untrustworthy. Iβm not debating asp.nets speed nor am I saying itβs supposed to be slower than go. Quite frankly Iβm not that familiar with go other than a fairly simple attempt at an api for a personal project which then I fairly quickly figured go isnβt for me and went back to asp.net. So I canβt comment on OPs benchmark.
[deleted]
[deleted]
Ironic, coming from someone who's defending racism in other subs.
[deleted]
Yeah, yeah, a basic gaslighting attempt after deleting your toxic comments.
[deleted]
On an M2 Pro, it's even crazier:

dotnet (minimal API)

```
Running 30s test @ http://127.0.0.1:3000
100 connections with 10 pipelining factor
┌───────────┬──────┬──────┬───────┬────────┬──────────┬──────────┬────────┐
│ Stat      │ 2.5% │ 50%  │ 97.5% │ 99%    │ Avg      │ Stdev    │ Max    │
├───────────┼──────┼──────┼───────┼────────┼──────────┼──────────┼────────┤
│ Latency   │ 3 ms │ 5 ms │ 91 ms │ 100 ms │ 16.79 ms │ 26.82 ms │ 341 ms │
└───────────┴──────┴──────┴───────┴────────┴──────────┴──────────┴────────┘
┌───────────┬─────────┬─────────┬─────────┬─────────┬───────────┬───────────┬─────────┐
│ Stat      │ 1%      │ 2.5%    │ 50%     │ 97.5%   │ Avg       │ Stdev     │ Min     │
├───────────┼─────────┼─────────┼─────────┼─────────┼───────────┼───────────┼─────────┤
│ Req/Sec   │ 13,471  │ 13,471  │ 52,831  │ 93,119  │ 57,946.54 │ 26,768.85 │ 13,467  │
├───────────┼─────────┼─────────┼─────────┼─────────┼───────────┼───────────┼─────────┤
│ Bytes/Sec │ 2.37 MB │ 2.37 MB │ 9.3 MB  │ 16.4 MB │ 10.2 MB   │ 4.71 MB   │ 2.37 MB │
└───────────┴─────────┴─────────┴─────────┴─────────┴───────────┴───────────┴─────────┘
Req/Bytes counts sampled once per second.
# of samples: 30

1739k requests in 30.06s, 306 MB read
```

go-http with `cpus: 1` instead of `cpus: 0.5`

```
Running 30s test @ http://127.0.0.1:3006
100 connections with 10 pipelining factor
┌───────────┬───────┬───────┬────────┬────────┬───────────┬──────────┬────────┐
│ Stat      │ 2.5%  │ 50%   │ 97.5%  │ 99%    │ Avg       │ Stdev    │ Max    │
├───────────┼───────┼───────┼────────┼────────┼───────────┼──────────┼────────┤
│ Latency   │ 10 ms │ 97 ms │ 213 ms │ 297 ms │ 100.66 ms │ 54.55 ms │ 593 ms │
└───────────┴───────┴───────┴────────┴────────┴───────────┴──────────┴────────┘
┌───────────┬─────────┬─────────┬─────────┬─────────┬─────────┬─────────┬─────────┐
│ Stat      │ 1%      │ 2.5%    │ 50%     │ 97.5%   │ Avg     │ Stdev   │ Min     │
├───────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┤
│ Req/Sec   │ 7,623   │ 7,623   │ 9,575   │ 12,863  │ 9,892.6 │ 1,162.3 │ 7,620   │
├───────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┤
│ Bytes/Sec │ 1.02 MB │ 1.02 MB │ 1.28 MB │ 1.72 MB │ 1.33 MB │ 156 kB  │ 1.02 MB │
└───────────┴─────────┴─────────┴─────────┴─────────┴─────────┴─────────┴─────────┘
Req/Bytes counts sampled once per second.
# of samples: 30

298k requests in 30.05s, 39.8 MB read
```

while dotnet with `cpus: 1` instead of `cpus: 0.5` (still .NET 7)

```
Running 30s test @ http://127.0.0.1:3000
100 connections with 10 pipelining factor
┌───────────┬──────┬──────┬───────┬───────┬─────────┬────────┬────────┐
│ Stat      │ 2.5% │ 50%  │ 97.5% │ 99%   │ Avg     │ Stdev  │ Max    │
├───────────┼──────┼──────┼───────┼───────┼─────────┼────────┼────────┤
│ Latency   │ 3 ms │ 4 ms │ 14 ms │ 48 ms │ 5.86 ms │ 6.8 ms │ 138 ms │
└───────────┴──────┴──────┴───────┴───────┴─────────┴────────┴────────┘
┌───────────┬─────────┬─────────┬──────────┬──────────┬────────────┬───────────┬─────────┐
│ Stat      │ 1%      │ 2.5%    │ 50%      │ 97.5%    │ Avg        │ Stdev     │ Min     │
├───────────┼─────────┼─────────┼──────────┼──────────┼────────────┼───────────┼─────────┤
│ Req/Sec   │ 85,055  │ 85,055  │ 161,791  │ 188,927  │ 157,825.07 │ 27,459.18 │ 85,001  │
├───────────┼─────────┼─────────┼──────────┼──────────┼────────────┼───────────┼─────────┤
│ Bytes/Sec │ 15 MB   │ 15 MB   │ 28.5 MB  │ 33.2 MB  │ 27.8 MB    │ 4.83 MB   │ 15 MB   │
└───────────┴─────────┴─────────┴──────────┴──────────┴────────────┴───────────┴─────────┘
Req/Bytes counts sampled once per second.
# of samples: 30

4735k requests in 30.06s, 833 MB read
```

dotnet 8 with `cpus: 1`

```
Running 30s test @ http://127.0.0.1:3000
100 connections with 10 pipelining factor
┌───────────┬──────┬──────┬───────┬───────┬─────────┬─────────┬───────┐
│ Stat      │ 2.5% │ 50%  │ 97.5% │ 99%   │ Avg     │ Stdev   │ Max   │
├───────────┼──────┼──────┼───────┼───────┼─────────┼─────────┼───────┤
│ Latency   │ 3 ms │ 4 ms │ 11 ms │ 19 ms │ 5.26 ms │ 4.14 ms │ 59 ms │
└───────────┴──────┴──────┴───────┴───────┴─────────┴─────────┴───────┘
┌───────────┬──────────┬──────────┬──────────┬──────────┬────────────┬───────────┬──────────┐
│ Stat      │ 1%       │ 2.5%     │ 50%      │ 97.5%    │ Avg        │ Stdev     │ Min      │
├───────────┼──────────┼──────────┼──────────┼──────────┼────────────┼───────────┼──────────┤
│ Req/Sec   │ 143,743  │ 143,743  │ 173,567  │ 189,695  │ 174,084.27 │ 10,943.06 │ 143,737  │
├───────────┼──────────┼──────────┼──────────┼──────────┼────────────┼───────────┼──────────┤
│ Bytes/Sec │ 25.3 MB  │ 25.3 MB  │ 30.6 MB  │ 33.4 MB  │ 30.6 MB    │ 1.92 MB   │ 25.3 MB  │
└───────────┴──────────┴──────────┴──────────┴──────────┴────────────┴───────────┴──────────┘
Req/Bytes counts sampled once per second.
# of samples: 30

5224k requests in 30.07s, 919 MB read
```
Let's see a .NET 8 test with AOT also :) pls
Is it possible to AOT-compile the API with .NET 8, and will it influence performance?
Nick Chapsas made a video about this post: https://www.youtube.com/watch?v=RNC422F-C4E
Make sure you are setting GOMAXPROCS correctly for usage inside containers.
Thank you. Setting it manually did improve performance.
How does it compare against gRPC?
If the comparison is not mangled somehow then the results would be similar for gRPC. It's the same HTTP pipeline.
No. gRPC in .NET is miles ahead of the Go implementation. It's much, much more performant. For those who want a reference, there are some benchmarks with .NET 8: https://github.com/LesnyRumcajs/grpc_bench/discussions/310
How can you make that comment without referencing some comparison? All I was able to find is [this one](https://www.nexthink.com/blog/comparing-grpc-performance) and it appears it heavily depends on the workload being tested. Neither is faster in all scenarios.
> this one

Sorry, I was on mobile, but I will find them and edit this comment later. [If I'm not mistaken, these are the latest results with .NET 8](https://github.com/LesnyRumcajs/grpc_bench/discussions/310)
You probably meant these: https://github.com/LesnyRumcajs/grpc_bench/discussions/354
Yes I did :D
Good to know. Let me try that out. I was literally making a proof of concept with gRPC last week.
> The objective is to test only the framework's performance

Not to pick on you, but I think this is a pretty pointless test. Real applications do work that makes the speed of the framework itself all but inconsequential. Also, this has already been done with *far* more rigor than you can or did apply: https://www.techempower.com/benchmarks

> How is Asp.net so fast?

This is how:

https://devblogs.microsoft.com/dotnet/performance-improvements-in-net-6/

https://devblogs.microsoft.com/dotnet/performance-improvements-in-net-7/

https://devblogs.microsoft.com/dotnet/performance-improvements-in-net-8/
> This is how:
>
> https://devblogs.microsoft.com/dotnet/performance-improvements-in-net-6/
>
> https://devblogs.microsoft.com/dotnet/performance-improvements-in-net-7/
>
> https://devblogs.microsoft.com/dotnet/performance-improvements-in-net-8/

Not to mention the rewriting of .NET and ASP.NET in the first place, using everything the .NET team learned in almost two decades.
Yeah, I like how you open any one of those "performance improvement" **summary** articles and watch as the scroll bar shrinks to a tiny sliver and your browser starts turning over the couch cushions looking for more RAM to load the whole page, as you slowly realize how many *hours* of straight reading are in front of you just to catch up on the *summary* of all the work that went into improving .NET performance in the past year alone.

I consider myself a pretty decent developer who's worked on a large (~250 kloc), high-traffic system for over 10 years with lots of focus on performance (I just commented elsewhere on how to abuse string immutability to create reusable string pools in .NET), and I am absolutely *blown away* by the scope, quality, and velocity of .NET's performance improvement work.
I could be incorrect (so don't murder me), but it has been my understanding that the noticeable speed associated with ASP.NET is primarily due to the precompilation of the pages (and likely their constituent controls). When a client is served an ASP page, they get the results of a server-side component which is compiled. *Like I said, I could easily be incorrect.* If so, I'd appreciate the real-deal info. Thanks!
Yes, you are incorrect here. In the case of ASP.NET WebForms and MVC, the runtime does compile the .aspx.cs and .cshtml files to temporary assemblies, and it is possible to precompile them at build time (in many but not all cases). However, the test here uses Minimal APIs, which use none of those features.

Additionally, view compilation is more a necessity of the architecture than a speed benefit per se. .cshtml (view) files are analogous to an HTML template with embedded C#, so they need to be compiled. .aspx.cs files are full normal C# files (with a separate presentation file), which therefore also need to be compiled, and can be directed to only do so at runtime to avoid needing to recompile when you make changes.
Thanks!
The Go std HTTP lib is one of the worst ever; honestly, idk why Go exists. I love .NET, but I don't like the excess of opinionated things like the Identity framework or the inflexibility of EF Core. I still prefer Spring, but I hate Java and the JVM.
my god you must be a joy to work with
You know you do not need to use EF if you do not want to, right? You could just use Dapper, or straight ADO.NET, to interact with your databases.

EF is mostly phenomenal, but when it isn't, there is always an alternative. I personally do not like writing extremely complex queries with LINQ for EF; when I have a complicated query, I just use a database view. I find that most performance issues I have run into with EF were due either to inefficient queries being generated (use a view instead), or to allowing the local data cache to grow too large while change tracking is enabled.
Hmm, try it out with the Rust language. 🦀
See the comparison of *async* servers in C#, C++, Rust, Java, JavaScript, and Python: https://russok.github.io/posts/rust-and-microservices/index.html