Let's say I am using IMAP IDLE to monitor changes in a mail folder.
The IMAP spec says that an IDLE connection should stay alive for at most 30 minutes, but it is recommended to pick a shorter interval, say 20 minutes, then cancel the IDLE and start a new one.
I am wondering what would happen if the mailbox contents changed between cancelling the old IDLE and starting the new one; an email could potentially be missed. Given that RECENT is a bit vague, this seems to require fetching the message list each time, between the old IDLE ending and the new one starting.
But this is almost the same as polling every 20 minutes, and defeats some of the benefit of IDLE.
Alternatively, a new IDLE session could be started prior to terminating the expiring one.
But in any case, I think this problem has already been solved so here I am asking for recommendations.
Thanks,
Paul
As you know, the purpose of the IMAP IDLE command (RFC 2177) is to make it possible for the server to transmit status updates to the client in real time. In this context, status updates means untagged IMAP server responses such as EXISTS, RECENT, FETCH or EXPUNGE that are sent when new messages arrive, message status is updated or a message is removed.
However, these IMAP status updates can be returned by any IMAP command, not just IDLE - for example, the NOOP command (see RFC 3501 section 6.1.2) can be used to poll for server updates as well (it predates the IDLE command). IDLE only makes it possible to get these updates more efficiently - if you don't use the IDLE command, server updates will simply be sent when the client executes another command (or, in some cases, even when no command is in progress) - see RFC 3501 sections 5.2 and 5.3 for details.
This means that if a message is changed between the IDLE canceling and the new IDLE command, the status updates should not be lost, just as they are not lost if you never used IDLE in the first place (and use NOOP every few seconds instead, for example) - they should simply be sent after the new IDLE command is started.
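For illustration, here is a minimal sketch of that NOOP-style polling with Python's imaplib (the host, credentials and folder are placeholders, not anything from the question):

    import imaplib

    # Placeholder host/credentials, for illustration only.
    conn = imaplib.IMAP4_SSL("imap.example.com")
    conn.login("user@example.com", "password")
    conn.select("INBOX")

    # NOOP gives the server a chance to send any pending untagged
    # updates (EXISTS, RECENT, EXPUNGE, ...) without changing state.
    conn.noop()

    # Untagged responses collected since the last command can then be
    # read out by type, e.g. the current message count from EXISTS.
    typ, exists = conn.response("EXISTS")
    print("EXISTS:", exists)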
Another approach would be to remember the highest UID of the folder being monitored. Whenever you think there is a chance that you missed an update, do a UID search from that point onward, i.e. UID SEARCH UID <last_seen_uid+1>:*
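A minimal sketch of that check with Python's imaplib (server, credentials and the stored last_seen_uid are placeholders):

    import imaplib

    # The client would persist the highest UID it has processed so far.
    last_seen_uid = 4242  # placeholder value

    conn = imaplib.IMAP4_SSL("imap.example.com")
    conn.login("user@example.com", "password")
    conn.select("INBOX")

    # Ask for everything with a UID higher than the last one we saw.
    typ, data = conn.uid("SEARCH", None, "UID %d:*" % (last_seen_uid + 1))
    uids = [int(u) for u in data[0].split()] if data and data[0] else []

    # The range n:* always matches at least the highest existing UID,
    # so drop anything we have already seen.
    new_uids = [u for u in uids if u > last_seen_uid]
    print("Missed message UIDs:", new_uids)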
I am using spring cloud aws messaging (2.0.1.RELEASE) in Java to consume from an SQS queue. If it's relevant, we use default settings, Java 10 and Spring Cloud Finchley.SR2.
We recently had an issue where a message could not be processed due to an application bug, leading to an exception and no confirmation (deletion) of the message. The message is later retried (this is desirable), presumably after the visibility timeout has elapsed; again, default values are in use and we have not customised the settings here.
We didn't spot the error above for a few days, meaning the message's receive count was very high and the message had conceptually been on the queue for a while (several days by now). We considered creating a CloudWatch SQS alarm to alert us to a similar situation in future. The only suitable metric appeared to be ApproximateAgeOfOldestMessage.
Sadly, when observing this metric, the max age doesn't go much above 5 minutes, despite my knowing the message was several days old. If a message keeps getting older each time it is received but never acknowledged (deleted), and instead becomes available again after the visibility timeout elapses, shouldn't this graph be much, much higher?
I don't know if this is something specific to the way that spring cloud aws messaging consumes the message or whether it's a general SQS quirk, but my expectation was that if a message was put on the queue 5 days ago and a consumer had not successfully consumed it, the max age would be 5 days.
Is it in fact the case that if a message is received by a consumer but not ultimately deleted, the max age is actually the interval between receive calls?
Can anyone confirm whether my expectation is incorrect, i.e. that this is indeed how SQS is expected to behave (it doesn't consider the age to be the time since the message was first put on the queue, but instead the time between receive calls)?
Based on a similar question on AWS forums, this is apparently a bug with regular SQS queues where only a single message is affected.
In order to have a useful alarm for this issue, I would suggest setting up a dead-letter queue (to which messages are automatically moved after a configurable number of receives without a delete), and alarming on the size of the dead-letter queue (ApproximateNumberOfMessagesVisible).
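A rough sketch of that setup with boto3 (queue names, ARNs, the threshold and the SNS topic are placeholders, not values from the question):

    import json
    import boto3

    sqs = boto3.client("sqs")
    cloudwatch = boto3.client("cloudwatch")

    # Hypothetical queue URL/ARN, for illustration only.
    main_queue_url = "https://sqs.eu-west-1.amazonaws.com/123456789012/main-queue"
    dlq_arn = "arn:aws:sqs:eu-west-1:123456789012:main-queue-dlq"

    # Redrive policy: after 5 receives without a delete, SQS moves the
    # message to the dead-letter queue.
    sqs.set_queue_attributes(
        QueueUrl=main_queue_url,
        Attributes={
            "RedrivePolicy": json.dumps(
                {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "5"}
            )
        },
    )

    # Alarm as soon as anything is sitting in the dead-letter queue.
    cloudwatch.put_metric_alarm(
        AlarmName="dlq-not-empty",
        Namespace="AWS/SQS",
        MetricName="ApproximateNumberOfMessagesVisible",
        Dimensions=[{"Name": "QueueName", "Value": "main-queue-dlq"}],
        Statistic="Maximum",
        Period=300,
        EvaluationPeriods=1,
        Threshold=0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:eu-west-1:123456789012:ops-alerts"],
    )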
I think this might have to do with how this metric handles poison-pill messages: after three or more receive attempts, the message is no longer included in the metric. From the AWS docs:
After a message is received three times (or more) and not processed, the message is moved to the back of the queue and the ApproximateAgeOfOldestMessage metric points at the second-oldest message that hasn't been received more than three times. This action occurs even if the queue has a redrive policy.
I am looking for a way to control the session timeout of the PGSQL (9.0) client (Windows).
When does a session die? What happens to it after it dies?
How can I force a session to die? (For example, it is "locked" on some bad long-running query, and I want to force the server to release the resources.)
Thanks for it:
dd
I am extending this to make it clearer:
The database needs to know which sessions are dead.
A dead session must be released, because it only holds resources; if this is not done, we accumulate many locks, or we can run out of available connections (reach the maximum).
Other databases (Firebird, EDB) define a timeout parameter for this.
When it is reached, the session is marked dead and the user's connection is aborted.
To avoid exhausting the timeout, the client needs to periodically do something that extends the period.
There are three ways to reach the timeout:
1.) the client program hangs, freezes, or is closed;
2.) the network connection breaks;
3.) the client sends a very long query/stored procedure that doesn't finish.
If the timeout is not handled by the server, somebody's transaction, lock, etc. may stay alive for hours, and the only way to remove it is to restart the DB server service.
Other databases treat dead sessions as no longer able to interact with the server, so the client gets an error and has to restart the client software.
Some databases support returning to an "inactive" but "not dead" session, so the client can continue its work.
So, with this preface I ask my question again:
How can I control the client's session timeout under pgsql? A system variable, an SQL parameter, etc.?
How can I extend this time?
What happens if a long query runs past the period?
When does the pgsql server release the resources held by the client?
Thanks:
dd
I don't understand the first part of your question, but to kill a running session you can use pg_terminate_backend()
To cancel just the query of a running session, use pg_cancel_backend()
Both functions are explained in the manual:
http://www.postgresql.org/docs/current/static/functions-admin.html#FUNCTIONS-ADMIN-SIGNAL-TABLE
There are three ways to reach the timeout: 1.) the client program hangs, freezes, or is closed; 2.) the network connection breaks; 3.) the client sends a very long query/stored procedure that doesn't finish.
For 2, the tcp_keepalives_* settings might be useful: http://www.postgresql.org/docs/8.4/static/runtime-config-connection.html
For 3, there is a statement_timeout setting: http://www.postgresql.org/docs/8.4/static/runtime-config-client.html but this will only terminate the statement, not the connection.
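A small sketch of those pieces using psycopg2 from Python (the connection details and the pid are placeholders; on 9.0 the pg_stat_activity columns are procpid/current_query, while newer versions use pid/query):

    import psycopg2

    # Hypothetical connection details, for illustration only.
    conn = psycopg2.connect("dbname=mydb user=postgres host=localhost")
    conn.autocommit = True
    cur = conn.cursor()

    # Abort any single statement that runs longer than 5 minutes
    # (per-session setting; can also be set in postgresql.conf).
    cur.execute("SET statement_timeout = '5min'")

    # List sessions and what they are currently running (9.0 column names).
    cur.execute("SELECT procpid, current_query FROM pg_stat_activity")
    for pid, query in cur.fetchall():
        print(pid, query)

    # Cancel just the running query of one session (pid is a placeholder)...
    cur.execute("SELECT pg_cancel_backend(%s)", (12345,))

    # ...or terminate the whole session.
    cur.execute("SELECT pg_terminate_backend(%s)", (12345,))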
I am running a web2py server which handles requests that may take from a few seconds to a few minutes to complete. Once a connection is made to the server and it is processing a request that takes about 2-3 minutes, new connections to the server have to wait until the former request is completed.
I don't know if we can tweak some parameters in web2py for this. Is there any way out of this problem?
web2py does not lock the server when busy with a connection but it does lock the user session, on purpose. That means other users can connect but not the one that started the original request. In the action that takes time you can do:
session._unlock(response)
and this problem (if diagnosis is correct) will go away.
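For example, a minimal sketch of an action using that call (the action name and the slow work are made up for illustration; in web2py controllers, session and response are provided by the framework):

    # controllers/default.py (action name is hypothetical)
    import time

    def long_task():
        # Release the session lock so other requests from the same user
        # are not blocked while this request runs.
        session._unlock(response)

        # Stand-in for the slow work (2-3 minutes in the question).
        time.sleep(120)
        return dict(status="done")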
Anyway, it is not a good idea to have requests that take so long. The web server may kill your process and it is not good for usability. You should have a db table where you queue such tasks and handle them in a background process (explained in the manual), then use ajax or html5 websockets (web2py/gluon/contrib/comet_messaging.py) to check progress on the long-running task.
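A rough sketch of that queue-and-poll pattern (the table, field and action names are made up; the background worker that actually processes the queue is not shown):

    # models/db.py -- db is the DAL instance defined by the scaffolding app
    db.define_table(
        "task_queue",
        Field("payload", "text"),
        Field("status", default="pending"),   # pending / running / done
        Field("result", "text"),
    )

    # controllers/default.py
    def submit_task():
        # Enqueue the work and return immediately; a separate background
        # process polls task_queue and does the actual processing.
        task_id = db.task_queue.insert(payload=request.vars.payload)
        return dict(task_id=task_id)

    def task_status():
        # Polled from the browser via ajax until status == "done".
        task = db.task_queue(request.vars.task_id)
        return dict(status=task.status if task else "unknown",
                    result=task.result if task else None)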
Please bring this up on the web2py mailing list and we will help with more concrete examples.
I wonder what is the best approach to handle the following scenario:
I have a server that is designed to handle only 10 connections at a time, during which the server is busy interacting with the clients. However, while the server is busy, there may be new clients who want to connect (as part of the next 10 connections that the server is going to accept). The server should only accept the new connections after it finishes with all previous 10 clients.
Now, I would like to have an automatic way for the pending clients to wait and connect to the server once it becomes available (i.e. finished with the previous 10 clients).
So far, I can think of two approaches: 1. have a file watch on the client side, so that the client watches for a file written by the server; when the server finishes with the 10 clients, it writes the file, and the pending clients know it's time to connect; 2. make the pending clients try to connect to the server every 5-10 seconds until they succeed, and have the server return a message indicating whether it is ready.
Any other suggestion would be much welcome. Thanks.
Of the two options you provide, I am inclined toward the 2nd option of "pinging" the server. I think it is more complicated to have the server write a file that triggers the clients' next attempt.
I would think that you should be able to have the client wait and simply send a READY signal. Keep a running queue of connection requests (from Socket.Connection.EndPoint, I believe). When one socket completes, accept the next socket off the queue.
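As an illustration of the retry approach (sketched in Python with the standard socket module; the host, port and delay are arbitrary, and the original discussion may well be about .NET sockets, but the pattern is the same):

    import socket
    import time

    # Hypothetical server address and retry settings.
    HOST, PORT = "server.example.com", 9000
    RETRY_DELAY = 5  # seconds between attempts

    def connect_when_available():
        # Keep retrying until the server accepts the connection; the server
        # simply stops accept()ing while it is busy with its 10 clients.
        while True:
            try:
                return socket.create_connection((HOST, PORT), timeout=10)
            except OSError:
                time.sleep(RETRY_DELAY)

    sock = connect_when_available()
    print("Connected:", sock.getpeername())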
My application has a long running request that takes over a minute. If I'm using Chrome or Firefox I just need to be patient. If I use IE however, at the one minute mark I get the popup that says I've reached a Network Connection Timeout.
Why is that?
The default Internet Explorer timeout is 1 minute. Since your process is a long-running one, IceFaces doesn't send the response in time and the request times out.
You can avoid this by spawning a new thread for your long running process and returning the response immediately. IceFaces has plenty of polling or push options available to you to let your client know when the long-running process is done.
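Framework specifics aside, the general pattern the answer describes looks roughly like this (sketched in Python with threading; all names are made up for illustration):

    import threading
    import uuid

    # In-memory registry of job results; in a real app this would live in
    # a shared store that the polling endpoint can read.
    results = {}

    def long_running_process(job_id):
        # ... minutes of work ...
        results[job_id] = "done"

    def start_request():
        # Return immediately with a job id; the browser then polls (or the
        # server pushes) until the job is marked done.
        job_id = str(uuid.uuid4())
        threading.Thread(target=long_running_process, args=(job_id,), daemon=True).start()
        return job_id

    def poll_status(job_id):
        return results.get(job_id, "pending")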