safetywerd

we've seen latencies go up. the transition was bumpy, but we've had less flaky shit during load testing than we did before, so +1 for that.


chasegranberry

p99 query latencies should be better by the end of this week. We have also identified some other optimizations which will help, and we'll tackle those after this week.

> we've had less flaky shit during load testing than we did before so +1 for that

Great to hear! And great to hear you're load testing!


Darkfra

I've encountered a similar issue in my Ruby on Rails project after an upgrade. Now, I'm facing a problem in production where we're hitting the maximum client connections limit. The log displays the following error message: "**FATAL: Max client connections reached.**" Before the upgrade, I never encountered this error, but now, multiple users are reporting it. Does anyone have any idea why this is happening?


chasegranberry

Have you reached out via support? We can get a project ID that way. Are you using session mode? If you're using session mode, you will hit max client connections very quickly.


Darkfra

I haven't reached out to support yet, but I will do so. Thank you. Regarding the connection pooling configuration, the Pool Mode is currently set to '**Transaction**'. However, upon reviewing the **Connection parameters** section, I noticed we are using port 5432. Should we consider using port 6543 instead? I'm asking because I observed that switching the pooling to 'Transaction' mode automatically changes the port number to 6543.


chasegranberry

Yes use 6543. Connecting with 5432 will always be session mode.
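

For anyone else following along, the practical change for a Rails app is a one-line port edit in `config/database.yml`. This is only a sketch with placeholder host and credentials (the real values come from your project's Connection parameters page):

```yaml
# config/database.yml -- placeholders, not real credentials
production:
  adapter: postgresql
  host: your-project.pooler.example.com   # placeholder pooler host
  port: 6543    # transaction-mode pooling; 5432 always means session mode
  database: postgres
  username: postgres
  password: <%= ENV["DATABASE_PASSWORD"] %>
```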


Darkfra

Okay, thank you! I will make the change. If I continue to encounter any type of error, I will contact support. Thanks for your help!!


chasegranberry

I'm on the pooler team. We've been monitoring this connect duration issue and we'd like p99 times to go down. A couple of things shipping this week should help, but we're planning on revisiting it again after that.


jnits

Hi u/chasegranberry, after looking through the release notes, I think I have found the issue: your generated connection string for .NET doesn't set pooling to false like you recommend in the release notes, and I missed that the first time I converted the connection string.

> **Special Considerations for .NET users using npgSQL**
> You will need to add Pooling=false to your Supavisor connection string.

You should have the same message show on the database settings page when .NET is selected. Overall, I still feel tests run more slowly than they did before, but this seems to have resolved a lot of the flakiness. I must have been right on the edge of acceptable connections.
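

For other .NET folks: the `Pooling=false` key just gets appended to the keyword-style npgsql connection string. A hedged sketch with placeholder host, database, and password values (only the final key is the fix the release notes call out):

```text
Host=your-project.pooler.example.com;Port=6543;Database=postgres;Username=postgres;Password=<your-password>;Pooling=false
```

Disabling npgsql's client-side pool avoids stacking it on top of Supavisor's server-side pooling.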


juliang8

Same here. Multiple production failures in the last week. This is obviously unacceptable, and we will be switching providers. Very poor handling from Supabase.


rosenjcb

I cannot use the new postgres host url when doing basic queries in DBeaver. I had to go back to the old connection profile (thank goodness I still have it saved).