Azure Worker Role reading from Queue

I've implemented a web role that writes to a queue. This is working fine. Then I developed a worker role to read from the queue. When I run it in debug mode from my local machine it reads the messages from the queue fine, but when I deploy the worker role it doesn't seem to be reading the queue, as the messages eventually end up in the dead letter queue. Does anyone know what could be causing this behavior? Below are some bits that might be key in figuring this out:
queueClient = QueueClient.Create(queueName, ReceiveMode.PeekLock);

var queueDescription = new QueueDescription(QueueName)
{
    RequiresSession = false,
    DefaultMessageTimeToLive = TimeSpan.FromMinutes(2),
    EnableDeadLetteringOnMessageExpiration = true,
    MaxDeliveryCount = 20
};

Increase the QueueDescription.DefaultMessageTimeToLive to ~10 minutes.
This property dictates how long a message may live in the queue before being processed (i.e., before Message.Complete() is called on it). If it remains in the queue for more than 2 minutes, it is automatically moved to the dead letter queue (since you set EnableDeadLetteringOnMessageExpiration to true).
TTL is useful in these messaging scenarios:
- if a message has not been processed within N minutes of arriving, it may no longer be useful to process it at all
- if a message was attempted many times and was never completed (the receiver never called msg.Complete()), it may need special processing
So, to be safe, use a somewhat higher value for DefaultMessageTimeToLive.
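For example, a minimal sketch of the suggested change, reusing the queue description from the question:

var queueDescription = new QueueDescription(QueueName)
{
    RequiresSession = false,
    // ~10 minutes instead of 2, so slow or delayed processing
    // does not expire messages into the dead letter queue.
    DefaultMessageTimeToLive = TimeSpan.FromMinutes(10),
    EnableDeadLetteringOnMessageExpiration = true,
    MaxDeliveryCount = 20
};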
Hope it Helps!
Sree

Serverless SQS consumer skips messages

I am using the Serverless Framework to consume messages from SQS. Some of the messages sent to the queue do not get consumed. They go straight to the in-flight SQS status and from there to my dead letter queue. When I look at my log of the consumer, I can see that it consumed and successfully processed 9/10 messages. One is always not consumed and ends up in the dead letter queue. I am setting reservedConcurrency to 1 so that only one consumer can run at a time. The function consumer timeout is set to 30 seconds. This is the consumer code:
module.exports.mySQSConsumer = async (event, context) => {
  context.callbackWaitsForEmptyEventLoop = false;
  console.log(event.Records);
  await new Promise((res, rej) => {
    setTimeout(() => {
      res();
    }, 100);
  });
  console.log('DONE');
  return true;
}
The consumer function configuration follows:
functions:
  mySQSConsumer:
    handler: handler.mySQSConsumer
    timeout: 30 # seconds
    reservedConcurrency: 1
    events:
      - sqs:
          arn: arn:aws:sqs:us-east-1:xyz:my-test-queue
          batchSize: 1
          enabled: true
If I remove the await function, it will process all messages. If I increase the timeout to 200ms, even more messages go straight to the in-flight status and from there to the dead letter queue. This code is very simple. Any ideas why it's skipping some messages? The messages that don't get consumed don't even show up in the log using the first console.log() statement. They seem entirely ignored.
I figured out the problem. The way SQS queue events trigger a Lambda function works differently than I thought. The messages get pushed into the Lambda function, not pulled by it. I think this could be engineered better by AWS, but it is what it is.
The issue was the Default Visibility Timeout set to 30 seconds together with Reserved Concurrency set to 1. When the SQS queue gets filled up quickly with thousands of records, AWS starts pushing the messages to the Lambda function at a rate that is faster than the rate at which the single function instance can process them. AWS "assumes" that it can simply spin up more instances of the Lambda to keep up with the backpressure. However, the concurrency limit doesn't let it spin up more instances - the Lambda function is throttled. As a result, the function starts returning failure to the AWS backend for some messages, which will, consequently, hide the failed messages for 30 seconds (the default setting) and put them back into the queue after this period for reprocessing. Since there are so many records to process by the single instance, 30 seconds later, the Lambda function is still busy and can't process those messages again. So the situation repeats itself and the messages go back to invisibility for 30 seconds. This repeats a total of 3 times. After the third attempt, the messages go to the dead letter queue (we configured our SQS queue that way).
To resolve this issue, we increased the Default Visibility Timeout to 5 minutes. That's enough time for the Lambda function to process through most of the messages in the queue while the failed ones wait in invisibility. After 5 minutes, they get pushed back into the queue and since the Lambda function is no longer busy, it will process most of them. Some of them have to go to invisibility twice before being successfully processed.
So the remedy to this problem is either increasing the Default Visibility Timeout like we did, or increasing the number of failures necessary before a message goes to the dead letter queue.
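If the queue is managed in the same serverless.yml, the change can be sketched roughly as follows (the resource and queue names here are illustrative, not from our actual stack):

resources:
  Resources:
    MyTestQueue:
      Type: AWS::SQS::Queue
      Properties:
        QueueName: my-test-queue
        VisibilityTimeout: 300 # 5 minutes instead of the default 30 seconds
        RedrivePolicy:
          deadLetterTargetArn:
            Fn::GetAtt: [MyTestDeadLetterQueue, Arn]
          maxReceiveCount: 5 # allow more attempts before dead-lettering
    MyTestDeadLetterQueue:
      Type: AWS::SQS::Queue
      Properties:
        QueueName: my-test-queue-dlq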
I hope this helps someone.

Usage of FreeRTOS xQueueSelectFromSet and xQueueReceive

I am using FreeRTOS v8.2.3 on a PIC32 microcontroller. I have a case where I need to post the following 3 events to 3 corresponding queues from an ISR, in order to unblock a task awaiting one of these events at a time:
a) SETUP packet arrival
b) Transfer completed event 1
c) Transfer completed event 2
My execution sequence and requirements are as follows:
Case 1 (execution is blocked for an event at point_1):
As a SETUP packet arrives while waiting at point_1 of execution -
i) the waiting task should be unblocked
ii) the SETUP is received from its queue and processed
Some code is processed and execution reaches point_2.
Case 2 (execution is blocked for an event at point_2):
If any one of the SETUP or transfer complete events occurs at point_2 -
i) unblock the wait
ii) receive the transfer_complete_1 or transfer_complete_2 event from its queue to carry out some additional transfers and loop at point_2
iii) if it was a SETUP queue event, do not receive it, but go to point_1
The code does not seem to work when I try to use xQueueReceive and xQueueSelectFromSet on the SETUP queue, even when one of them is used at point_1 and the other at point_2.
But it seems to work fine if I use xQueueSelectFromSet in both places and verify the queue set member handle that caused the event before proceeding further.
Given the requirement above, the problem with using xQueueSelectFromSet in both places is that
- the xQueueSelectFromSet calls will be placed back to back, first on a SETUP event at point_2 and then immediately at point_1, which is not intentional
- the xQueueSelectFromSet call at point_1 is also not desired
Hence, can anyone please explain whether and how to use both a queue set and xQueueReceive on the same queue? If that is not possible, how is a requirement like this typically implemented in FreeRTOS?
This is a duplicate of a question asked on the FreeRTOS support forum, so below is a duplicate of the answer I gave there:
I don't fully understand your usage scenario, but some points which may help.
1) If a queue is a member of a queue set, then the queue can only be read after its handle has been returned from the queue set. Further, if a queue's handle is returned from a queue set, then the item must be read from the queue. If either of these requirements is not met, the state of the queue set will no longer match that of the queues in the set.
2) If the same task is reading from multiple queues, then it is probably not necessary to use a queue set at all. See the "alternatives to using queue sets" section on the following page: http://www.freertos.org/Pend-on-multiple-rtos-objects.html
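For illustration, here is a minimal sketch of the rule in point 1 (the queue set, queue handles, and message variables are hypothetical names, not taken from the question):

QueueSetMemberHandle_t xActivatedMember;

/* Block until one of the queues in the set has data available. */
xActivatedMember = xQueueSelectFromSet( xQueueSet, portMAX_DELAY );

if( xActivatedMember == xSetupQueue )
{
    /* The handle came from the set, so the item must be read now; a
       block time of 0 is safe because the data is known to be there. */
    xQueueReceive( xSetupQueue, &xSetupPacket, 0 );
    /* ...process the SETUP packet, then return to point_1... */
}
else if( xActivatedMember == xTransferQueue1 )
{
    xQueueReceive( xTransferQueue1, &xTransferEvent, 0 );
    /* ...carry out the additional transfers, then loop at point_2... */
}

Reading a member queue only through this pattern keeps the state of the set consistent with the state of the queues it contains.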

Does await Task.Delay really enable a web server to process more simultaneous requests?

From Pro Asynchronous Programming with .NET:
for (int nTry = 0; nTry < 3; nTry++)
{
    try
    {
        AttemptOperation();
        break;
    }
    catch (OperationFailedException) { }

    Thread.Sleep(2000);
}
While sleeping, the thread doesn't consume any CPU-based resources, but the fact that the thread is alive means that it is still consuming memory resources. On a desktop application this is probably no big deal, but on a server application, having lots of threads sleeping is not ideal because if more work arrives on the server, it may have to spin up more threads, increasing memory pressure and adding additional resources for the OS to manage.

Ideally, instead of putting the thread to sleep, you would like to simply give it up, allowing the thread to be free to serve other requests. When you are ready to continue using CPU resources again, you can obtain a thread (not necessarily the same one) and continue processing. You can solve this problem by not putting the thread to sleep, but rather using await on a Task that is deemed to be completed in a given period.
for (int nTry = 0; nTry < 3; nTry++)
{
    try
    {
        AttemptOperation();
        break;
    }
    catch (OperationFailedException) { }

    await Task.Delay(2000);
}
I don't follow the author's reasoning. While it's true that calling await Task.Delay will release this thread (which is processing a request), it's also true that the task created by Task.Delay will occupy some other thread to run on. So does this code really enable the server to process more simultaneous requests, or is the text wrong?
Task.Delay does not occupy some other thread. It gives you a task without blocking. It starts a timer that completes that task in its callback. The timer does not use any thread while waiting.
It is a common myth that async actions like delays or IO just push work to a different thread. They do not. They use OS facilities to truly use zero threads while the operation is in progress. (They obviously need to use some thread to initiate and complete the operation.)
If async were just pushing work to a different thread, it would be mostly useless. Its value would be just to keep the UI responsive in client apps. On the server it would only cause harm. It is not so.
The value of async IO is to reduce memory usage (fewer thread stacks), context switching, and thread-pool utilization.
The async version of the code you posted would scale to literally tens of thousands of concurrent requests (if you increase the ASP.NET limits appropriately, which is a simple web.config change) with small memory usage.
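As a quick illustration, here is a standalone sketch (not from the book or the question) showing that the continuation after await Task.Delay may resume on a different thread pool thread, while no thread at all is consumed during the wait:

using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        Console.WriteLine($"Before delay: thread {Thread.CurrentThread.ManagedThreadId}");
        await Task.Delay(2000); // arms a timer; no thread blocks during the 2 seconds
        Console.WriteLine($"After delay: thread {Thread.CurrentThread.ManagedThreadId}");
    }
}

Running this typically prints two different thread IDs, which is exactly the "obtain a thread (not necessarily the same one) and continue processing" behavior the book describes.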

Will a sleeping script be counted in Entry Processes?

I'm developing a dating website with a messaging feature. The message controller script sleeps for a second between checks for new messages. But my hosting provider allows me only 20 Entry Processes. So:
Will a sleeping script be counted in Entry Processes?
If there are more than 20 users, will the limit be reached and will I get a resource limit error?
If the answer is YES, then how do I achieve the same goal within this limit?
Sample Code:
...
while ($isNewMessage) {
    $isNewMessage = $model->checkNewMessage();
    sleep($this->sleepTime); // 1 second
}
...

QTP Recovery scenario used to "skip" consecutive FAILED steps with 0 timeout -- how can I restore original timeout value?

Suppose I use QTP's recovery scenario manager to set the playback synchronization timeout to 0. The handler would return with "continue with next statement".
I'd do that to make sure that any following playback statements don't waste their time waiting for the next non-existing/non-matching step before failing:
I have a lot of GUI tests that kind of get stuck: let's say 10 controls are missing, so their (consecutive) playback steps produce 10 timeout waits before failing. If the playback timeout is 30 seconds, I lose 10 x 30 seconds = 5 minutes of execution time, while it really would be sufficient to wait for 30 seconds ONCE (because the app does not change anymore -- we already waited a full timeout period).
Now if I have 100 test cases (= action iterations), this possibly happens 100 times, wasting 500 minutes of my test execution time window.
That's why I came up with the idea of a recovery scenario function setting the timeout to 0 after/upon the first failed playback step. This would accelerate the run while skipping the rightly-FAILED step, yet would not compromise the precision/reliability of identifying the next matching GUI context (which creates a PASSED step).
Then of course upon the next passed playback step, I would want to restore the original timeout value. How could I do that? This is my question.
One cannot define a recovery scenario function that is called for PASSED steps.
I am currently thinking about setting a method function for Reporter.ReportEvent and "sniffing" for PASSED log entries there. I'd install that method function in the recovery scenario function which sets the timeout to 0. Then, when the "sniffer" function senses a ReportEvent call with PASSED status during one of the following playback steps, I'd reset everything (i.e. restore the original timeout and uninstall the method function). (I am 99% sure, however, that the .Click and .Set methods do not call ReportEvent to write their result status... so this option probably won't work.)
Better ideas? This really bugs me.
It sounds to me like your tests aren't designed correctly; if you fail to find an object, why do you continue?
One possible (non recovery scenario) solution would be to use RegisterUserFunc to override the methods you are using in order to do an obj.Exist(0) before running the required method.
Function MyClick(obj)
    If obj.Exist(1) Then
        obj.Click
    Else
        Reporter.ReportEvent micFail, "Click failed, no object", "Object does not exist"
    End If
End Function

RegisterUserFunc "Link", "Click", "MyClick"
RegisterUserFunc "WebButton", "Click", "MyClick"
' etc.
If you have many controls, some of which may be missing, and you know that after the 10 seconds you mentioned (when the first timeout occurs) nothing more will show up, then you can use the Exist method with a timeout parameter.
Something like this:
timeout = 10
For Each control In controls
    If control.Exist(timeout) Then
        ' do something with the control
    Else
        timeout = 0
    End If
Next
Now only the first timeout will be 10 seconds. Each and every subsequent timeout in your collection of controls will be 0, which will save you time.
