PostgreSQL: Session Timeout? - delphi

I'm looking for a way to control the session timeout of a PostgreSQL (9.0) client on Windows.
When does a session die? What happens to it after it dies?
How can I force a session to die? (For example, when it is "locked" on some wrong, long-running query and I want to force the server to release its resources.)
Thanks:
dd
Let me expand on this to make it clearer:
The database needs to know which sessions are dead.
A dead session must be released, because it is only holding resources; if that never happens, we accumulate many locks, or we run out of available connections (reach the maximum).
Other databases (Firebird, EDB) define a timeout parameter for this.
When it is reached, the session is marked dead and the user's connection is aborted.
To avoid exhausting the timeout, the client needs to do something periodically that extends the period.
There are three ways to reach the timeout:
1.) the client program hangs, freezes, or is closed;
2.) the network connection breaks;
3.) the client sends some very long query/stored procedure that doesn't finish.
If the timeout is not handled by the server, somebody's transaction, lock, etc. may stay alive for X hours, and there is only one way to remove it: restart the db server service.
Other databases handle dead sessions by refusing any further interaction with the server, so the client gets an error and the client software has to be restarted.
Some databases support returning to an "inactive" but "not dead" session, so the client can continue its work.
So, with this preface, I ask my question again:
How can I control the client's session timeout under PostgreSQL? A system variable, an SQL parameter, etc.?
How can I extend this time?
What happens if a long query exhausts the period?
When does the PostgreSQL server release the resources held by the client?
Thanks:
dd

I don't understand the first part of your question, but to kill a running session you can use pg_terminate_backend().
To cancel the query of a running session (leaving the connection open), use pg_cancel_backend().
Both functions are explained in the manual:
http://www.postgresql.org/docs/current/static/functions-admin.html#FUNCTIONS-ADMIN-SIGNAL-TABLE
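For example (12345 stands for whatever backend ID you actually find; note that in 9.0 the process-id column of pg_stat_activity is called procpid, renamed to pid in 9.2):

-- see what each backend is doing and how long its current query has been running
SELECT procpid, usename, current_query, now() - query_start AS runtime
FROM pg_stat_activity;

-- cancel only the running query; the connection stays open
SELECT pg_cancel_backend(12345);

-- kill the whole session and release everything it holds
SELECT pg_terminate_backend(12345);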

There are three ways to reach the timeout: 1.) the client program hangs, freezes, or is closed; 2.) the network connection breaks; 3.) the client sends some very long query/stored procedure that doesn't finish.
For 2, the tcp_keepalives_* settings might be useful: http://www.postgresql.org/docs/8.4/static/runtime-config-connection.html
For 3, there is a statement_timeout setting: http://www.postgresql.org/docs/8.4/static/runtime-config-client.html, but note that this only terminates the statement, not the connection.
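To illustrate (the role name and all values here are made up), statement_timeout for case 3 can be set per session or as a default for one role:

-- abort any statement in this session that runs longer than 5 minutes
SET statement_timeout = '5min';

-- or make that the default for one login role
ALTER ROLE app_user SET statement_timeout = '5min';

and the keepalive probing for case 2 is tuned in postgresql.conf:

tcp_keepalives_idle = 60       # seconds of inactivity before the first probe
tcp_keepalives_interval = 10   # seconds between probes
tcp_keepalives_count = 5       # unanswered probes before the connection is declared dead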

Related

Elixir: DBConnection queue_target and queue_interval explained

I am reading the DBConnection documentation, and I don't quite understand the following quote:
Our goal is to wait at most :queue_target for a connection. If all connections checked out during a :queue_interval takes more than :queue_target, then we double the :queue_target. If checking out connections take longer than the new target, then we start dropping messages.
Could you please explain this to me with examples?
In my app I have a very heavy operation that is executed by a periodic worker. I would like it to have a timeout of 1 minute, or no timeout at all. Which queue_target and queue_interval should I set to avoid: 'Elixir.DBConnection.ConnectionError', message => <<"tcp recv: closed (the connection was closed by the pool, possibly due to a timeout or because the pool has been terminated)">>
In the regular case I would like my queue timeout to be 5 seconds. How can I achieve this with queue_target and queue_interval?
The timeouts you're referring to are set with the :timeout option in the execution functions (e.g. execute/4). :queue_target and :queue_interval only affect the pool's ability to begin new requests (i.e. requests to check out connections from the pool), not requests that have already checked out connections and are being processed.
Keep in mind that all attempts to check out connections during a :queue_interval must take longer than :queue_target for these values to have any effect. Normally you'd test different values and monitor your database's ability to keep up in order to find optimal values for your environment.
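To sketch how that maps onto configuration (an Ecto-style repo config; the app and module names are hypothetical, and since :queue_target doubles once under load, checkouts may be allowed up to roughly twice that value before the pool starts dropping them):

config :my_app, MyApp.Repo,
  queue_target: 5_000,    # aim to hand out a connection within 5 s
  queue_interval: 1_000,  # window over which checkout times are measured
  timeout: 60_000         # per-query timeout, for the heavy periodic worker

The 1-minute worker query is governed by :timeout (or the :timeout option passed to the individual call), not by the two queue settings.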

Web2py server problem

I am running a web2py server which handles requests that may take anywhere from a few seconds to a few minutes to complete. Once a connection is made to the server and it is processing a request that takes about 2-3 minutes, new connections to the server have to wait until the former request is completed.
I don't know whether some parameters in web2py can be tweaked for this. Is there any way out of this problem?
web2py does not lock the server when busy with a connection, but it does lock the user session, on purpose. That means other users can connect, but not the one that started the original request. In the action that takes a long time you can do:
session._unlock(response)
and this problem (if the diagnosis is correct) will go away.
Anyway, it is not a good idea to have requests that take so long. The web server may kill your process, and it is bad for usability. You should have a db table where you queue such tasks and handle them in a background process (explained in the manual), then use Ajax or HTML5 websockets (web2py/gluon/contrib/comet_messaging.py) to check progress on the long-running task, as sketched below.
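A minimal sketch of that queue-table pattern (the table, field, and do_long_work names are made up for illustration; web2py's own scheduler in gluon/scheduler.py is the fuller, supported version):

import time

# model: a queue table for long-running jobs
db.define_table('task_queue',
    Field('payload', 'text'),
    Field('status', default='pending'),   # pending -> running -> done
    Field('result', 'text'))

# action: enqueue the work instead of doing it inline, and return immediately
def submit():
    task_id = db.task_queue.insert(payload=request.vars.payload)
    return dict(task_id=task_id)

# separate background process: poll the queue and run one task at a time
def worker():
    while True:
        task = db(db.task_queue.status == 'pending').select(limitby=(0, 1)).first()
        if task:
            task.update_record(status='running'); db.commit()
            result = do_long_work(task.payload)   # the 2-3 minute job
            task.update_record(status='done', result=result); db.commit()
        else:
            time.sleep(5)

The browser can then poll a status action via Ajax until the row's status becomes done.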
Please bring this up on the web2py mailing list and we will help with more concrete examples.

How to handle pending connections to a server that is designed to handle a limited number of connections at a time

I wonder what is the best approach to handle the following scenario:
I have a server that is designed to handle only 10 connections at a time, during which the server is busy interacting with the clients. However, while the server is busy, there may be new clients who want to connect (as part of the next 10 connections that the server is going to accept). The server should only accept the new connections after it finishes with all of the previous 10 clients.
Now, I would like to have an automatic way for the pending clients to wait and connect to the server once it becomes available (i.e. finished with the previous 10 clients).
So far, I can think of two approaches: 1. have a file watch on the client side, so that the client watches for a file written by the server; when the server finishes with 10 clients, it writes the file, and the pending clients know it's time to connect; 2. make the pending clients try to connect to the server every 5-10 seconds or so until they succeed, and have the server return a message indicating whether it is ready.
Any other suggestion would be much welcome. Thanks.
Of the two options you provide, I am inclined toward the second option of "pinging" the server. I think it is more complicated to have the server write a file that triggers another attempt on the client side.
I would think that you should be able to have the client wait and simply send it a READY signal. Keep a running queue of connection requests (from Socket.Connection.EndPoint, I believe). When one socket completes, accept the next socket off the queue, as in the sketch below.
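A rough Python sketch of that server-side queue (the port, batch size, and READY protocol are all illustrative, not a definitive implementation):

import socket
import threading
import queue

BATCH_SIZE = 10          # serve this many clients at a time
pending = queue.Queue()  # accepted-but-waiting client sockets

def acceptor(server):
    # accept everyone immediately; they wait in our queue instead of retrying
    while True:
        conn, _addr = server.accept()
        pending.put(conn)

def serve(conn):
    try:
        conn.sendall(b'READY\n')   # tell the client its turn has come
        conn.recv(4096)            # ... interact with the client here ...
    finally:
        conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('0.0.0.0', 9000))
server.listen(128)
threading.Thread(target=acceptor, args=(server,), daemon=True).start()

while True:
    batch = [pending.get() for _ in range(BATCH_SIZE)]  # block until 10 are queued
    workers = [threading.Thread(target=serve, args=(c,)) for c in batch]
    for w in workers:
        w.start()
    for w in workers:
        w.join()   # finish all 10 before starting on the next batch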

Is there any reason to use a database connection pool with ActiveRecord?

What are the benefits to using an external connection pool?
I've heard that most other applications will open up a connection for each unit of work. In Rails, for example, I'd take that to mean that each request could open a new connection. I'm assuming a connection pool would make that possible.
The only benefit I can think of is that it allows you to have 1,000 frontend processes without having 1,000 postgres processes running.
Are there any other benefits?
Rails has connection pooling built in:
Simply use ActiveRecord::Base.connection as with Active Record 2.1 and earlier (pre-connection-pooling). Eventually, when you’re done with the connection(s) and wish it to be returned to the pool, you call ActiveRecord::Base.clear_active_connections!. This will be the default behavior for Active Record when used in conjunction with Action Pack’s request handling cycle.
Manually check out a connection from the pool with ActiveRecord::Base.connection_pool.checkout. You are responsible for returning this connection to the pool when finished by calling ActiveRecord::Base.connection_pool.checkin(connection).
Use ActiveRecord::Base.connection_pool.with_connection(&block), which obtains a connection, yields it as the sole argument to the block, and returns it to the pool after the block completes.
This has been available since version 2.2. You'll see a pool parameter in your database.yml for controlling it:
pool: number indicating size of connection pool (default 5)
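For example, the block form from the quoted docs in use (the query is illustrative):

# borrow a connection from AR's pool; it is returned when the block exits
ActiveRecord::Base.connection_pool.with_connection do |conn|
  conn.execute("SELECT 1")
end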
I don't think there would be much point in layering another pooling system underneath it and it could even confuse AR's pooling if you tried it.

Connection Timeout and Connection Lifetime

What are the advantages and disadvantages of Connection Timeout=0?
And what is the use of Connection Lifetime=0?
e.g.:
(Database=TestDB;
port=3306;
Uid=usernameID;
Pwd=myPassword;
Server=192.168.10.1;
Pooling=false;
Connection Lifetime=0;
Connection Timeout=0)
And what is the use of connection pooling?
Timeout is how long you wait for a response to a request before you give up. Timeout=0 means you will keep waiting for the connection forever. Good, I guess, if you are connecting to a really slow server where it is normal for it to take 12 hours to respond :-). Generally a bad thing. You want to put some kind of reasonable timeout on a request, so that you can realize your target is down and move on with your life.
Connection Lifetime is how long a connection lives before it is killed and recreated. A lifetime of 0 means never kill and recreate. Normally not a bad thing, because killing and recreating a connection is slow. Through various bugs your connections may get stuck in an unstable state (like when dealing with weird three-way transactions), but 99% of the time it is good to keep the connection lifetime infinite.
Connection pooling is a way to deal with the fact that creating a connection is very slow. So rather than make a new connection for every request, you keep a pool of, say, 10 premade connections. When you need one, you borrow it, use it, and return it. You can adjust the size of the pool to change how your app behaves: a bigger pool means more connections and more threads doing work at a time, but this could also overwhelm whatever you are connecting to.
In summary:
Connection Timeout=0 is bad; make it something reasonable, like 30 seconds.
Connection Lifetime=0 is okay.
Pooling=false is bad; you will almost certainly want pooling enabled (a corrected example string follows).
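Putting that together, a more typical version of the connection string above (values illustrative) would be:

Database=TestDB;
port=3306;
Uid=usernameID;
Pwd=myPassword;
Server=192.168.10.1;
Pooling=true;
Connection Lifetime=0;
Connection Timeout=30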
I know this is an old thread but I think it is important to point out an instance in which you may want to disable Connection Pooling or use Connection Lifetime.
In some environments (especially when using Oracle, or at least in my experience) the web application is designed so that it connects to the database using the user's credentials vs a fixed connection string located in the server's configuration file. In this case enabling connection pooling will cause the server to create a connection pool for each user accessing the website (See Pool Fragmentation). Depending on the scenario this could either be good or bad.
However, connection pooling becomes a problem when the database server is configured to kill connections that exceed a maximum idle time, because the server may kill connections that still reside in the connection pool. In this scenario a non-zero Connection Lifetime comes in handy, since it lets the client throw away connections that the server has already closed anyway.
