We are using loopback-connector-mongodb#4.2.0 to connect to MongoDB. Our database admins report that our application's connection count keeps growing. When they raise the issue, we simply restart our applications and the connection count drops drastically (from roughly 800 active connections down to 500 on MongoDB).
We need to implement connection pooling in LoopBack, but we are not sure which settings to put in the datasource.json file. Can you please give some suggestions?
thanks
Arun S.
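(For context, pool settings for this connector normally go straight into the datasource definition and are passed through to the MongoDB driver. A rough sketch of a datasource.json entry follows; the pool option name shown, poolSize, is an assumption that depends on the driver version bundled with connector 4.2.0, so verify it before relying on it.)

    {
      "mongoDs": {
        "name": "mongoDs",
        "connector": "mongodb",
        "url": "mongodb://user:pass@host:27017/mydb",
        "poolSize": 10
      }
    }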
I have a web application running on an Azure App Service.
On the back end I'm using a TCP connection to our database (a Neo4j graph DB); the best practice is to open the TCP connection and keep it alive in order to be more responsive when we perform queries.
The issue I've encountered is that the database logs the exception "Connection reset by peer".
Reading around on the web, I found that Azure may have a TCP idle timeout configured by default, reportedly set to 4 minutes, which could be the root cause of my issue.
Does anyone know how to configure TCP keep-alive so that connections stay open indefinitely for App Services on Azure?
I found instructions for Google Cloud on the web, but nothing about Azure.
Thank you in advance.
OaicStef
From everything I can find, that is not an adjustable setting. Here is a forum link saying it will not be changing, and that post is a couple of years old at this point: https://social.msdn.microsoft.com/Forums/en-US/32b76114-67a4-4e6b-ac45-61b0f0a0829f/changing-the-4-minute-request-time-out-for-app-services?forum=windowsazurewebsitespreview
I think you are going to have to add logic to your app that tests the connection; if it has been closed, either reopen it or create a new one. I don't know what language you are using, so I can't make specific suggestions there.
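For illustration, the pattern is roughly the following (a Ruby-flavoured sketch only, since the actual stack isn't known here; open_connection and healthy? are hypothetical placeholders for whatever your Neo4j driver actually provides, e.g. a cheap "RETURN 1" query):

    # Sketch: verify the connection before use and reopen it if the idle
    # socket was reset. open_connection and healthy? are hypothetical helpers
    # standing in for the real driver calls.
    def with_connection
      @conn = open_connection if @conn.nil? || !healthy?(@conn)
      yield @conn
    rescue IOError, Errno::ECONNRESET
      @conn = open_connection   # the peer closed the idle socket; retry once
      yield @conn
    end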
Edit
I will add that the total number of TCP connections that can be open on a single App Service is about 6k, at least on the S1 tier. Keep that in mind: if you don't have pooling on the server side, or you are not disposing of connections, you will exhaust the TCP pool and start getting errors. I recommend you configure an alert for that.
We are currently using PgBouncer (installed on the database server) for database connection pooling. At the same time we use the Npgsql library, which has its own connection pool. I have read recommendations that we should disable pooling in Npgsql and use only PgBouncer.
There is a performance problem when we disable connection pooling in Npgsql.
According to my test, it takes about 100 ms to open a connection to PgBouncer, even though network latency to the PgBouncer server is <1 ms.
Executing 5 queries over 5 separate connections therefore takes more than 500 ms, which is too much.
Are we using it correctly? That connection latency is killing our performance.
I also tried connecting to PgBouncer from a different server on the network and it took 8 to 22 ms, so I assume there is some network issue as well.
There is no reason to disable connection pooling in Npgsql unless you have errors or compatibility issues.
PGBouncer helps with scalability by handling many more concurrent connections at once without overloading Postgres (which creates a process for each new connection). This does not mean that creating new connections is any faster, so it's better to reuse an existing pool.
Npgsql would maintain a pool of connections from your application to pgbouncer, and pgbouncer would have a pool of connections from itself to Postgres. This works fine and will ensure both network hops are as efficient as possible.
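In practice that just means leaving pooling on in the Npgsql connection string and pointing it at PgBouncer's port rather than at Postgres directly; a sketch (host, port, and pool sizes are placeholder values):

    Host=pgbouncer.internal;Port=6432;Database=appdb;Username=app;Pooling=true;Minimum Pool Size=5;Maximum Pool Size=50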
I'm building a fairly advanced RoR application that runs several database connections concurrently. However, I'm still looking for answers to the following:
Does the connection handler automatically add a connection to the pool once it's established, like in the following?
ActiveRecord::Base.establish_connection(spec)
When would it disconnect an active connection if there's been no activity for a while? Or should it be disconnected manually?
Are there any alternatives to disconnecting the database connection like the following?
ActiveRecord::Base.remove_connection(connection_object)
How can I increase the pool size, which may need to differ across database adapters?
Looking forward to some tips.
I am running a tonne of jobs in parallel using Sidekiq and a lot of them are failing to connect to the database because I've only got a connection pool size of 5.
I'd like to just bump that up to 15 (at least on localhost) but was wondering what the possible negative consequences of that might be.
The setup is Ruby on Rails; the default pool size is 5.
It depends on many factors like:
how much memory you want to allocate to your database pool
how long your connections last
the timeout on the connections
the locality of your database server compared to your app/web server
Some connection pools also have other tweaks, such as the minimum number of connections to keep open (even if unused) and the maximum number of open connections, which looks like what you are trying to set.
I have heard that you can potentially saturate your network card with as few as 10 open connections.
I think the only real answer is to monitor your CPU/memory/IO usage with your current setting so you have some sort of baseline, then bump the connection count up and compare.
Personally I think you should be fine with 15 connections, assuming you aren't already pushing your server to the limit or running a tiny VM with 256 MB of RAM :)
Setting the value too high could exhaust the number of connections Postgres allows (check your default, but it may be around 100). This is especially a problem if you shut your services down abruptly without letting them close their connections gracefully; when you then try to restart your app server, it will error out saying Postgres is not accepting any more connections. That isn't strictly caused by setting the value too high, since it can happen either way, but a higher value will definitely accelerate the issue.
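For reference, in Rails the pool size is just the pool key in config/database.yml; a sketch with placeholder values (for Sidekiq, the pool generally needs to be at least as large as the worker concurrency, since each worker thread checks out its own connection):

    # config/database.yml -- development shown, values illustrative
    development:
      adapter: postgresql
      database: myapp_development
      pool: 15               # was 5; cover Sidekiq concurrency plus web threads
      checkout_timeout: 5    # seconds to wait for a free connection from the pool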
I'm using CentOS (Linux) and was wondering what the maximum number of connections is that a single server can handle through epoll (edge-triggered, one-shot).
At the moment I've managed 100,016 connections doing nonstop ping-pongs. How many socket connections can one server handle?
I don't think it is unlimited. If anyone has tried it, could you please share?
500,000 TCP connections from a single server is the gold standard these days. The record is over a million. It does require kernel tuning. See, for example, Linux Kernel Tuning for C500k.
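The tuning mostly comes down to raising file-descriptor limits and keeping per-socket buffer memory small; a hedged sketch of the usual knobs (values are illustrative, not tuned for any particular workload):

    # /etc/sysctl.conf -- system-wide limits (illustrative values)
    fs.file-max = 1000000
    net.core.somaxconn = 4096
    net.ipv4.tcp_rmem = 4096 4096 16777216
    net.ipv4.tcp_wmem = 4096 4096 16777216

    # /etc/security/limits.conf -- per-process open-file limit for the server user
    youruser  soft  nofile  1000000
    youruser  hard  nofile  1000000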