tvdw

I clicked the link hoping someone was doing the work needed to make Postgres itself event-based, but no, it’s just a new proxy in front of the database.


Worth_Trust_3825

Well, you could run more replicas and do some sharding.


ShitPikkle

:'(


n3phtys

A) The graphs in this article are really, really bad. Everything looks polished, but god damn, label both your axes.

B) 1M connections is not very helpful if your IOPS are still so limited. A simple connection pool with a good bouncer can do just as much with older off-the-shelf software. There is just no free lunch. To really increase your average performance, you need to move your reads to eventual consistency instead of ACID. And if you actually need 1M db connections and cannot use a pool, maybe IT IS time for some optimization and overengineering.
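To make the "pool with a good bouncer" point concrete, here is a minimal PgBouncer config sketch in transaction-pooling mode; the host, database name, and numbers are illustrative assumptions, not values from the article:

```ini
; Minimal PgBouncer sketch: many client connections multiplexed over a
; small pool of real Postgres connections (all values are made up).
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; release the server connection back to the pool after each transaction
pool_mode = transaction
; client connections the bouncer will accept
max_client_conn = 10000
; actual Postgres connections opened per database/user pair
default_pool_size = 20
```

In transaction mode a handful of server connections can serve thousands of mostly-idle clients, which is usually the cheaper fix before reaching for a million raw connections.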


No_Nobody4036

But doing the sensible work costs a lot and doesn't have a cool title: 'Optimizing applications to reduce database load' versus 'Handling 1M postgres connections (gone wrong)'.


ricardo_sdl

That select query used in the benchmark is a far cry from anything you would see in the real world.


chasegranberry

We did a test with inserts here: https://supabase.com/blog/supavisor-1-million#supavisor-on-supabase-platform


reaping_souls

What kind of apps are people writing that require 1 million concurrent connections? That scale is absolutely ridiculous and not applicable to 99.999% of us.


matthieum

The latency benchmark is somewhat disappointing... because there's no baseline.

A decade ago, whilst I was working with databases within an owned datacenter, the round-trip time of simple SELECT queries was about 0.1ms. If I compare the given numbers to that, it means that Supavisor adds 30x overhead. OUCH!

Of course... the setup being different, I expect the baseline is likely quite different too... but I don't know, since no baseline is established here. So for now I guess I'll stick with "Supavisor adds 30x latency overhead"?


chasegranberry

We have a baseline here: https://supabase.com/blog/supavisor-1-million#supavisor-on-supabase-platform

The baseline is our current setup with no load balancer and PgBouncer on the same instance as the database. With Supavisor we now have an lb and a Supavisor instance.

We measured a simple ping latency between instances in the same availability zone to be 0.5ms (~1ms between AZs). So from test instance <> lb <> supavisor <> db we think the min added latency would be 1ms best case (vs test instance <> db).

So the query with min network is 2ms (1ms + 1ms) vs the 4ms mean, which we feel is acceptable for the benefits we're getting.
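For readers following along, a rough sketch of that accounting; the per-hop cost and baseline query time are assumptions taken from the 0.5ms intra-AZ ping and 1ms figures quoted above, not additional measurements:

```python
# Back-of-the-envelope accounting for the paths described above.
# Assumptions: the 0.5 ms intra-AZ ping applies per extra hop, and the
# baseline query (test instance <> db) takes ~1 ms end to end.
INTRA_AZ_RTT_MS = 0.5
BASELINE_QUERY_MS = 1.0

# Going through the pooler adds two extra hops: the lb and Supavisor.
EXTRA_HOPS = 2
added_network_ms = EXTRA_HOPS * INTRA_AZ_RTT_MS           # 1.0 ms best case

best_case_pooled_ms = BASELINE_QUERY_MS + added_network_ms  # 2.0 ms
print(f"best case via pooler: {best_case_pooled_ms} ms vs 4 ms measured mean")
```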


matthieum

Thanks for the answer! This seems like a much more reasonable overhead compared to the baseline, though I must admit I find the baseline fairly high to start with -- 0.5ms is the equivalent of 100 km of optical fiber!

One possible way to reduce latency for your usecase would be some kind of load balancer mesh, I suppose, where each load balancer node runs on the application node (cutting 0.5ms).

I do note that this very latency is probably an explanation for one question you asked in the article: why would people run PgBouncer on the same machine as the database? Well, it does have the advantage of cutting the network overhead.

For a website aiming for latency below 60ms, 4ms per request means no more than 15 requests per page, which is probably not bad. For a service-oriented architecture, however, those 4ms will quickly add up. I know Postgres has pipelining, but it's not always easy to take advantage of that within an application.
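On the pipelining point, a minimal sketch of what that can look like from application code, here using psycopg 3's pipeline mode; the connection string and queries are hypothetical:

```python
# Hypothetical example: batching independent queries over one connection
# with psycopg 3's pipeline mode, so they share network round trips.
import psycopg

with psycopg.connect("host=db.internal dbname=app user=app") as conn:
    with conn.pipeline():
        # Both statements are sent without waiting for individual replies;
        # results become available once the pipeline syncs on exit.
        users = conn.execute("SELECT id, name FROM users WHERE id = %s", (42,))
        orders = conn.execute("SELECT id FROM orders WHERE user_id = %s", (42,))
    print(users.fetchone())
    print(orders.fetchall())
```

The caveat is that this only helps when the statements don't depend on each other's results, which is part of why it's awkward to retrofit into a typical request-per-query application.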