Is there a race condition in IMAP's idle-search-idle? - imap

I have a bunch of code that does IMAP commands “search, idle, done, search, idle, done, search, ...”.
Is it possible that some messages arrive between search and idle commands and, thus, will only be received by that code after the idle return / timeout?
EDIT1: I tried this with GMail: I slept 60 seconds between the message processing and IDLE, and IDLE didn't return before its timeout even though messages had arrived. To make sure I didn't miss an event from IDLE, I dumped the client-side send/recv traffic and tried an additional read() before sending IDLE after the sleep(), while sending test messages during the sleep().
EDIT2: Using two connections, one for fetching the mail (using SEARCH) and another sitting in IDLE to get instant "there are new messages" events, avoids the race condition, but some claim that approach has problems of its own.

A properly implemented server will notify you of the new messages as soon as you start IDLE, if it hasn't already notified you about them in response to some other command.
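Concretely, mail that arrives between SEARCH and IDLE is announced via an untagged EXISTS response, which a compliant server sends as soon as IDLE starts if it hasn't announced it already. A minimal sketch of recognizing that announcement (Python; the response lines are illustrative, and a real client would read them off the socket after sending IDLE, since imaplib has no built-in IDLE support):

```python
import re

# Untagged EXISTS responses announce the new mailbox size, e.g. "* 24 EXISTS".
# A compliant server sends one right after IDLE starts if mail arrived earlier.
EXISTS_RE = re.compile(r"^\* (\d+) EXISTS\r?$")

def exists_count(lines):
    """Return the message count from the last untagged EXISTS line, or None."""
    count = None
    for line in lines:
        m = EXISTS_RE.match(line)
        if m:
            count = int(m.group(1))
    return count

# Example: responses received right after sending IDLE
lines = ["+ idling", "* 24 EXISTS", "* 1 RECENT"]
print(exists_count(lines))  # -> 24
```

So as long as the client processes untagged responses arriving at IDLE start (and not only those arriving mid-IDLE), the gap between SEARCH and IDLE is covered.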

Related

Jobs pushing to queue, but not processing

I am using AWS SQS and I am running into two issues.
Sometimes messages are present in the queue but I am not able to read them: when I fetch, I get an empty array, as if there were no messages in the queue at all.
When I delete a message from the queue, the call returns
sqs.delete_message({queue_url: queue_url, receipt_handle: receipt_handle})
=> Aws::EmptyStructure
but when I check the queue in the AWS console, the message is still present, even after refreshing the page more than 10 times.
Can you help me understand why this happens?
1. You may need to implement Long Polling.
SQS is a distributed system. By default, when you read from a queue, AWS samples only a small subset of its servers for the response, which is why you sometimes receive an empty array. This is known as Short Polling.
With Long Polling, SQS queries all of its servers and waits until a message is available (or the wait time expires) before responding.
To enable it, set the parameter WaitTimeSeconds > 0 (up to 20) when calling the ReceiveMessage API.
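A sketch of the long-polling receive, assuming an SQS client object exposing receive_message(**kwargs) in the style of boto3 (the Python SDK; the question's snippet is the Ruby equivalent):

```python
def receive_long_poll(sqs, queue_url, wait_seconds=20, max_messages=10):
    """Long-poll ReceiveMessage. WaitTimeSeconds > 0 makes SQS query all of
    its servers and hold the request open until a message arrives or the
    wait expires (max 20 s), instead of sampling a subset of servers."""
    resp = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=max_messages,
        WaitTimeSeconds=wait_seconds,
    )
    return resp.get("Messages", [])
```

With boto3 this would be called as `receive_long_poll(boto3.client("sqs"), queue_url)`; long polling can also be set queue-wide via the ReceiveMessageWaitTimeSeconds queue attribute.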
2. Visibility Timeout may be too short.
The Visibility Timeout controls how long a message that one poller has received stays invisible to other pollers. If the visibility timeout is too short, other pollers may start reading the message before your first poller has processed and deleted it, since SQS allows the same message to be received more than once. From the docs:
The ReceiptHandle is associated with a specific instance of receiving a message. If you receive a message more than once, the ReceiptHandle is different each time you receive a message. When you use the DeleteMessage action, you must provide the most recently received ReceiptHandle for the message (otherwise, the request succeeds, but the message might not be deleted).
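That behavior suggests a receive-process-delete loop where the delete always uses the handle from the most recent receive. A sketch, assuming an SQS-like client with receive_message/delete_message; `handler` is a hypothetical callback doing the actual work:

```python
def process_and_delete(sqs, queue_url, handler):
    """Receive one message, process it, then delete it with the receipt
    handle from *this* receive. A handle saved from an earlier receive of
    the same message may silently fail to delete it."""
    resp = sqs.receive_message(QueueUrl=queue_url,
                               MaxNumberOfMessages=1,
                               WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        handler(msg["Body"])                    # finish the work first
        sqs.delete_message(QueueUrl=queue_url,  # then delete with the
                           ReceiptHandle=msg["ReceiptHandle"])  # fresh handle
    return len(resp.get("Messages", []))
```

If processing can outlast the visibility timeout, either raise the timeout or call ChangeMessageVisibility to extend it before it expires.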

Looking to implement write timeout when there is a delay in writing message to a queue

We are working on a billing invoice system. As part of processing a request, we need to make an asynchronous call by placing a message on a queue. We run at 20 TPS and have an SLA of 12 seconds for the entire transaction. Occasionally we have observed that when the MQ server becomes very slow but is still operational, just writing the message to the queue takes a long time. We want to handle this scenario with a system that throws an exception when writing the message to the queue exceeds a predefined limit.
In simple words, we want to implement a write timeout when there is a delay in writing a message in the queue. Any help is appreciated.
We know how to set a timeout for receiving a response, but we are unable to find any way to set a timeout for writing a message to the queue.
We have found some suggestions about revalidating the destination, but in our case we already know the destination is operational, and our system only becomes slow while waiting for the write to complete.
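Where the messaging client itself offers no send timeout, one application-level workaround is to run the blocking send on a worker thread and bound only the wait. A sketch in Python (`send_fn` stands in for the real queue-write call; the same pattern maps to an ExecutorService in Java):

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

_senders = ThreadPoolExecutor(max_workers=4)  # bounded pool for queue writes

def send_with_timeout(send_fn, message, timeout_s):
    """Run a blocking queue write on a worker thread and stop waiting after
    timeout_s. Caveat: the underlying send keeps running in the background,
    so the message may still land on the queue after the timeout fires,
    which the surrounding transaction must tolerate."""
    future = _senders.submit(send_fn, message)
    try:
        return future.result(timeout=timeout_s)
    except FutureTimeout:
        raise TimeoutError(f"queue write exceeded {timeout_s}s") from None
```

Because the send is not actually cancelled, duplicate or late writes are possible; pairing this with idempotent message handling avoids double billing.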

Erlang dead letter queue

Let's say my Erlang application receives an important message from the outside (through an exposed API endpoint, for example). Due to a bug in the application or an incorrectly formatted message the process handling the message crashes.
What happens to the message? How can I influence what happens to the message? And what happens to the other messages waiting in the process mailbox? Do I have to introduce a hierarchy of processes just to make sure that no messages are lost?
Is there something like Akka's dead letter queue in Erlang? Let's say I want to handle the message later - either by fixing the message or fixing the bug in the application itself, and then rerunning the message processing.
I am surprised how little information about this topic is available.
There is no information because there is no dead letter queue. If your application crashed while processing your message, the message had already been received, so why would it go to a dead letter queue (if one existed)?
Such a queue would be a major scalability issue with little use: you would get arbitrary messages that could not be delivered, totally out of context.
If you need to make sure a message is processed, you usually use a mechanism that gives you a reply back when the message has been processed, such as a gen_server call.
And if your messages are so important that losing them would be a catastrophe, you should probably persist them in an external DB, because otherwise, if your machine crashes, what would happen to all the messages in transit?
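The "reply when processed" idea behind gen_server:call can be sketched outside Erlang as well; here in Python, with a worker thread standing in for the process and a per-request reply queue standing in for the call (`msg.upper()` is a stand-in for real processing):

```python
import queue
import threading

def start_worker():
    """gen_server:call-style pattern: the caller blocks until the worker
    replies, so a crash during processing surfaces at the call site
    instead of the message silently vanishing from a mailbox."""
    inbox = queue.Queue()

    def loop():
        while True:
            msg, reply = inbox.get()
            try:
                result = msg.upper()        # stand-in for real processing
                reply.put(("ok", result))
            except Exception as exc:        # processing crashed: tell caller
                reply.put(("error", exc))

    threading.Thread(target=loop, daemon=True).start()

    def call(msg, timeout=5):
        reply = queue.Queue(maxsize=1)
        inbox.put((msg, reply))
        status, value = reply.get(timeout=timeout)
        if status == "error":
            raise value
        return value

    return call
```

In Erlang itself, gen_server:call gives you this for free, plus a timeout and a monitor on the server process, so the caller also learns if the server dies before replying.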

Unordered socket read & close notification using IOCP

Most server frameworks/examples using sockets and I/O completion ports issue notifications in a way whose purpose I couldn't completely figure out.
On read completion, packets are processed, and they are usually reordered to work around thread-scheduling issues that cause packets to be handled out of order, even though IOCP itself guarantees a FIFO queue.
The problem arises when a socket is closed, either gracefully or due to an error. In both situations I have seen the close notification delivered to the application (e.g. an HTTP server using the framework), again because of the OS thread scheduler, "before" the notification for data read earlier.
I think the close notification should be queued in such a way that the application receives it after the previous reads.
Is there an intended purpose behind the code I saw, or does the correct behavior depend on the situation?
What you suggest makes sense, and I would imagine that any code that handles graceful close (a read returning 0 bytes) would do so by processing it after any preceding successful read. Errors coming out of GetQueuedCompletionStatus(), such as connection-reset errors, are harder to integrate into the receive flow, as they occur out of band as far as the received data is concerned. Your question is a bit vague and depends very much on the code you're using and how you (or the people who wrote that code) want to handle these things. There is no single correct way, IMHO.
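One way to get the sequencing the question asks for is to stamp each completion with a per-connection sequence number when it is issued and hold out-of-order ones back, giving the close the final number. A language-neutral sketch of that reordering buffer (in Python; a real IOCP server would do this in the completion-thread callback):

```python
import heapq

class InOrderDelivery:
    """Per-connection reordering buffer: completions are stamped with a
    sequence number when issued, and the application handler only sees
    them in sequence order, so a close can never overtake an earlier read."""

    def __init__(self):
        self.next_seq = 0
        self.held = []                      # min-heap of (seq, event)

    def complete(self, seq, event, handler):
        heapq.heappush(self.held, (seq, event))
        while self.held and self.held[0][0] == self.next_seq:
            _, ev = heapq.heappop(self.held)
            handler(ev)                     # delivered strictly in order
            self.next_seq += 1
```

Out-of-band errors from GetQueuedCompletionStatus() can be folded into the same stream by assigning them the next sequence number for that connection when they are observed.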

Suspending already executing task NSOperationQueue

I have a problem suspending the task currently being executed; I have tried setting NSOperationQueue's setSuspended:YES to pause and setSuspended:NO to resume the process.
According to the Apple docs, I cannot suspend an already executing task.
If you want to issue a temporary halt to the execution of operations, you can suspend the corresponding operation queue using the setSuspended: method. Suspending a queue does not cause already executing operations to pause in the middle of their tasks. It simply prevents new operations from being scheduled for execution. You might suspend a queue in response to a user request to pause any ongoing work, because the expectation is that the user might eventually want to resume that work.
My app needs to suspend a time-consuming upload operation when the internet connection is unavailable, and resume the same operation once the connection is available again. Is there any workaround for this, or do I need to restart the currently executing task from zero?
I think you need to start from zero; otherwise two problems arise. If you resume the current upload, you cannot be sure you haven't missed any packets. And if the connection only becomes available after a long period of time, the server may have deleted the data you uploaded previously because the operation was incomplete.
Whether or not you can resume or pause an operation queue is not your issue here...
Even if it worked the way you imagine (and it doesn't), by the time you get back to servicing the TCP connection it may very well be in a bad state: it could have timed out or been closed remotely.
You will want to find out what your server supports and use the relevant parts of a REST (or similar) service to resume a stalled upload on a brand-new, fresh connection.
If you haven't yet, print this out and put it on the walls of your cube, make t-shirts for your family members to wear... maybe add it as a screensaver?
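What the resume looks like depends entirely on the server's API; a common shape (e.g. offset-based or HTTP Range uploads) is to ask the server how much it already has and send only the rest. A sketch with a hypothetical `server` object:

```python
def resume_upload(server, data, chunk_size=4096):
    """Resume a stalled upload on a fresh connection: ask the (hypothetical)
    server for the byte offset it has already stored, then send only the
    remainder in chunks. Returns the number of bytes sent this attempt."""
    start = server.offset()                 # bytes the server already has
    for i in range(start, len(data), chunk_size):
        server.append(data[i:i + chunk_size])
    return len(data) - start
```

On iOS the same idea would be wrapped in an NSOperation that queries the offset in main() and can be cancelled and re-enqueued when connectivity changes.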
