I am having issues trying to get the status of how many things are left to process using SignalR. I have a starting number and every time an item completes I have it increment a counter. However, the user isn't being notified in the manner the code would suggest.
I'm not entirely sure how to word this, but here goes. I'm queuing up a series of passengers to process and then processing them. If my understanding is correct, the processing starts immediately after the first thread is queued. After everyone is queued, every second there is a SignalR call to inform the user of where we are in the process. However, the SignalR call isn't working as expected.
Next, code:
StatusInfo.SendStatus("Retrieving Passenger Details");
foreach (var passenger in manifestResponse.Manifest.PassengerList)
{
//Spin up all the threads.
PassengerThreads++;
TotalPassengers++;
//StatusInfo.SendStatus(TotalPassengers - PassengerThreads, 0, TotalPassengers, StartTime);
ThreadPool.QueueUserWorkItem(new WaitCallback(GetSinglePassengerDetails), passenger);
if (TotalPassengers % 5 == 0)
{
StatusInfo.SendStatus(TotalPassengers - PassengerThreads, 0, TotalPassengers, StartTime);
}
}
//Wait for them to be done.
do
{
StatusInfo.SendStatus(TotalPassengers - PassengerThreads, 0, TotalPassengers, StartTime);
Thread.Sleep(1000);
}
while (PassengerThreads > 0);
So what is happening is that I send the work to the pool to run; however, during the status loop nothing is actually sent back. When I open the console in the browser, there's a 20-second gap between "Retrieving Passenger Details" and the first X-of-Y status. Is there something I'm doing wrong here? Maybe I'm using the wrong threading model? Thanks.
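For reference, GetSinglePassengerDetails isn't shown above; the waiting loop assumes it decrements PassengerThreads when it finishes. A bare -- on a shared counter can lose updates when several pool threads finish at once, so here is a sketch of what a thread-safe version of that callback might look like using Interlocked (the body is hypothetical):

using System.Threading;

private void GetSinglePassengerDetails(object state)
{
    try
    {
        // ... fetch the details for the passenger passed in via state ...
    }
    finally
    {
        // Atomic decrement; a plain PassengerThreads-- is not thread-safe.
        Interlocked.Decrement(ref PassengerThreads);
    }
}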
I am using the Serverless Framework to consume messages from SQS. Some of the messages sent to the queue do not get consumed: they go straight to the in-flight SQS status and from there to my dead letter queue. When I look at my consumer's log, I can see that it consumed and successfully processed 9 out of 10 messages; one is always not consumed and ends up in the dead letter queue. I am setting reservedConcurrency to 1 so that only one consumer can run at a time, and the consumer function's timeout is set to 30 seconds. This is the consumer code:
module.exports.mySQSConsumer = async (event, context) => {
    context.callbackWaitsForEmptyEventLoop = false;
    console.log(event.Records);
    await new Promise((res, rej) => {
        setTimeout(() => {
            res();
        }, 100);
    });
    console.log('DONE');
    return true;
}
The consumer function configuration follows:
functions:
  mySQSConsumer:
    handler: handler.mySQSConsumer
    timeout: 30 # seconds
    reservedConcurrency: 1
    events:
      - sqs:
          arn: arn:aws:sqs:us-east-1:xyz:my-test-queue
          batchSize: 1
          enabled: true
If I remove the await, it processes all the messages. If I increase the delay to 200 ms, even more messages go straight to the in-flight status and from there to the dead letter queue. This code is very simple. Any ideas why it's skipping some messages? The messages that don't get consumed don't even show up in the log via the first console.log() statement; they seem entirely ignored.
I figured out the problem. The SQS-to-Lambda event triggering works differently than I thought: the messages get pushed into the Lambda function, not pulled by it. I think this could be engineered better by AWS, but it is what it is.
The issue was the Default Visibility Timeout set to 30 seconds together with Reserved Concurrency set to 1. When the SQS queue fills up quickly with thousands of records, AWS starts pushing messages to the Lambda function at a rate faster than the single function instance can process them. AWS "assumes" it can simply spin up more instances of the Lambda to keep up with the backpressure, but the concurrency limit doesn't let it spin up more instances: the Lambda function is throttled. As a result, the function starts returning failure to the AWS backend for some messages, which consequently hides the failed messages for 30 seconds (the default setting) and puts them back into the queue after that period for reprocessing. Since there are so many records for the single instance to process, 30 seconds later the Lambda function is still busy and can't process those messages again, so the situation repeats itself and the messages go back into invisibility for another 30 seconds. This repeats a total of 3 times, after which the messages go to the dead letter queue (we configured our SQS queue that way).
To resolve this issue, we increased the Default Visibility Timeout to 5 minutes. That's enough time for the Lambda function to process through most of the messages in the queue while the failed ones wait in invisibility. After 5 minutes, they get pushed back into the queue and since the Lambda function is no longer busy, it will process most of them. Some of them have to go to invisibility twice before being successfully processed.
So the remedy to this problem is either increasing the Default Visibility Timeout, like we did, or increasing the number of failures allowed before a message goes to the dead letter queue (the maxReceiveCount in the queue's redrive policy); a sketch of both settings follows.
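For reference, here is a sketch of how both settings could be declared in serverless.yml; the resource names and the separate dead letter queue are made up for illustration, and the DLQ resource is assumed to be defined elsewhere in the same file:

resources:
  Resources:
    MyTestQueue:
      Type: AWS::SQS::Queue
      Properties:
        QueueName: my-test-queue
        VisibilityTimeout: 300   # 5 minutes instead of the 30-second default
        RedrivePolicy:
          deadLetterTargetArn:
            Fn::GetAtt: [MyTestDeadLetterQueue, Arn]
          maxReceiveCount: 5     # more attempts before the dead letter queue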
I hope this helps someone.
I have a loop running on a background thread, and I want to inform the user of its progress.
So, in this loop, I call
[self performSelectorOnMainThread:@selector(updateProgressBarto:) withObject:@(value) waitUntilDone:YES];
I put some logging before and after the loop, and made the following observations:
If I enable the above line in order to show the progress, my logging console shows:
2017-06-17 16:43:49.675 myApp[8523:551864] Start Import
2017-06-17 16:43:59.119 myApp[8523:551864] Done Importing
that's a delta of roughly 9.5 seconds
without the progress bar, it looks like
2017-06-17 16:47:06.052 myApp[8611:556572] Start Import
2017-06-17 16:47:12.776 myApp[8611:556572] Done Importing
down to 6.7 seconds
As a comparison, if the loop is run in a Background Fetch, where there is no UI involved at all, logging shows:
2017-06-17 16:45:12.199 myApp[8523:553684] Start Import
2017-06-17 16:45:13.084 myApp[8523:553684] Done Importing
which is less than a second.
if I set
waitUntilDone:NO
I get the unwanted side effect that the progress bar is updated only 3 times, rather than 50+ times.
The technical question: is this something I/the user has to live with, or are there any perceptual tricks to solve this?
The psychological question:
Would you/the user prefer six seconds without visual feedback over nine seconds with feedback?
Your insights are very welcome.
Why don't you do the UI update asynchronously using GCD? There should be very little overhead in the background thread and little latency in the UI:
dispatch_async(dispatch_get_main_queue(), ^{
    [self updateProgressBarto:@(value)];
});
Also, maybe you don't need to update the UI on every iteration, but only every 10th or 100th:
if (!(i % 10)) {
    // update UI progress bar
}
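Putting the two together, a sketch of what the import loop might look like; count and the per-item work are placeholders for the actual import:

dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
    for (NSUInteger i = 0; i < count; i++) {
        // ... import item i ...

        if (!(i % 10)) { // post progress only every 10th iteration
            float progress = (float)i / count;
            dispatch_async(dispatch_get_main_queue(), ^{
                [self updateProgressBarto:@(progress)];
            });
        }
    }
});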
I've implemented a Web role that writes to a queue. This is working fine. Then I developed a Worker role to read from the queue. When I run it in debug mode from my local machine, it reads the messages from the queue fine, but when I deploy the Worker role it doesn't seem to be reading the queue, as the messages eventually end up in the dead letter queue. Anyone know what could be causing this behavior? Below are some bits that might be key to figuring this out:
queueClient = QueueClient.Create(queueName, ReceiveMode.PeekLock);
var queueDescription = new QueueDescription(QueueName)
{
    RequiresSession = false,
    DefaultMessageTimeToLive = TimeSpan.FromMinutes(2),
    EnableDeadLetteringOnMessageExpiration = true,
    MaxDeliveryCount = 20
};
Increase QueueDescription.DefaultMessageTimeToLive to around 10 minutes.
This property dictates how long a message may live in the queue before being processed (that is, before Message.Complete() is called on it). If it remains in the queue for more than 2 minutes, it will automatically be moved to the dead letter queue (as you set EnableDeadLetteringOnMessageExpiration to true).
TTL is useful in these messaging scenarios:
- if a message has not been processed within N minutes of arriving, it might not be useful to process it any more
- if a message was attempted many times and never completed (the receiver never called msg.Complete()), it might need special processing
So, to be safe, use a somewhat higher value for DefaultMessageTimeToLive.
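For example, the QueueDescription from the question with a roomier TTL (10 minutes is a suggestion, not a magic number):

var queueDescription = new QueueDescription(QueueName)
{
    RequiresSession = false,
    // Give the deployed worker role enough time to receive and Complete() each message.
    DefaultMessageTimeToLive = TimeSpan.FromMinutes(10),
    EnableDeadLetteringOnMessageExpiration = true,
    MaxDeliveryCount = 20
};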
Hope it Helps!
Sree
Well, I found a weird point in this message loop. First, look at the code below:
MSG msg = {0};
while( WM_QUIT != msg.message )
{
    if( PeekMessage( &msg, NULL, 0, 0, PM_REMOVE ) )
    {
        TranslateMessage( &msg );
        DispatchMessage( &msg );
    }
    else
    {
        Render(); // Do some rendering
    }
}
This is from a DirectX tutorial, and this part is the message loop. If I click the mouse, the click goes into the queue as a message, so input like this gets processed in the window procedure. Since PeekMessage then returns true, Render() will not be called in the frame in which I clicked. I think the code should be changed from if~else to if~if so that it still renders when I click. Can you explain this?
Your understanding is close, but not quite right. The loop isn't run once per frame. Rather, what happens is that for every iteration of the loop, either a single message is processed or Render is called. Effectively this makes rendering the lowest priority, but keeps your application responsive. The loop may be run many times or few times for each frame drawn, depending on how much work there is to do.
Does Render directly call Present? Or does it invalidate the window? If it invalidates the window, you would not want to change to always calling Render like you mentioned, because you'd risk not redrawing the window between renders.
Essentially this loop will process any pending Win32 messages for your window, and if there aren't any, it will render a frame. If it sees a WM_QUIT message, it exits the loop to quit the app.
There's no need for a 'throttle' because DirectX Present will block the thread (i.e. suspend it) if there are already 3 frames pending to render.
This model assumes you are doing one frame 'Update' per 'Render' call which isn't that realistic for a game, but it is simple for the tutorial. Extending the tutorial loop with StepTimer would look something like:
#include "StepTimer.h"

DX::StepTimer g_timer;

...

MSG msg = {0};
while( WM_QUIT != msg.message )
{
    if( PeekMessage( &msg, NULL, 0, 0, PM_REMOVE ) )
    {
        TranslateMessage( &msg );
        DispatchMessage( &msg );
    }
    else
    {
        g_timer.Tick([&]()
        {
            Update(g_timer); // Update world/game state
        });

        Render(); // Do some rendering
    }
}

...

void Render();
void Update(DX::StepTimer& timer);
StepTimer defaults to using variable step updates which means Update is called once per frame with whatever time delta and then Render is called once.
You can use a fixed-step update (say 60 times a second) like this:
g_timer.SetFixedTimeStep(true);
g_timer.SetTargetElapsedSeconds(1.f / 60.f);
In this mode, you'll have all pending Win32 messages processed, then Update is called as many times as needed to keep up an average of 60 fixed-step updates per second, and then Render is called once.
The Render() inside the else basically gives preference to handling messages in the queue over rendering. Moving the mouse over the directx rendered window will add messages quickly to the message queue, but not fast enough to cause rendering to be delayed to any degree you'd ever see it. There is no advantage to rendering with each iteration because the iterations happen much faster than each frame is generated in your swapchain and much faster than a new message could swamp your queue. Most computers today will run this loop more than once per millisecond and even mouseover events happen less often than this. You wouldn't be wrong to render with every iteration, it's just unnecessary. With the example running, moving your mouse over the directx window as quickly as you can will cause fewer than 10% of the iterations of this loop to handle a message and delay rendering.
This message loop is executed as quickly as possible and has no facility to detect when the swapchain is ready to render. PeekMessage checks whether there's a message in the queue; if there is, it processes it, and if not, it renders. What you're worried about is that a sequence of window events will cause the render to be delayed, but that's practically impossible: no matter how fast messages are sent to the queue, the swapchain is rendered more than 10 times faster than it needs to be, even for 60fps. This loop is the cause of high CPU utilization. It is probably written this way to simplify the tutorial, as it's an inherently complicated environment. You might modify the swap chain in a separate thread if you're worried about the message queue delaying frame rendering.
To improve the CPU efficiency of the example program, just add a Sleep(8); at the bottom of the Render() routine. This will cause the message handler/render thread to pause between cycles, handling messages and rendering about 120 times per second. You can improve on this by using high-resolution timers and a modulus-based sleep between cycles.
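A sketch of that idea; ThrottledRender is a hypothetical wrapper around the tutorial's Render() that sleeps off whatever remains of an ~8 ms budget instead of a fixed 8 ms:

#include <windows.h>

void ThrottledRender()
{
    static LARGE_INTEGER freq = {};
    if (freq.QuadPart == 0)
        QueryPerformanceFrequency(&freq);   // counts per second, constant after boot

    LARGE_INTEGER start, end;
    QueryPerformanceCounter(&start);

    Render();                               // the existing rendering routine

    QueryPerformanceCounter(&end);
    LONGLONG elapsedMs = (end.QuadPart - start.QuadPart) * 1000 / freq.QuadPart;
    if (elapsedMs < 8)
        Sleep(static_cast<DWORD>(8 - elapsedMs)); // cap the loop near 120 cycles/sec
}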
A good source of information to improve this example can be found here.
I looked into GCDAsyncSocket.m, at the code that handles read timeouts. If I don't extend the timeout, it seems that the socket gets closed, and there is no option to keep the socket alive. I can't use an infinite timeout (timeout = -1) because I still need to know when it has timed out, but I don't want it to disconnect. I'm not sure there is a reason behind this. Does anyone know?
- (void)doReadTimeoutWithExtension:(NSTimeInterval)timeoutExtension
{
    if (currentRead)
    {
        if (timeoutExtension > 0.0)
        {
            currentRead->timeout += timeoutExtension;

            // Reschedule the timer
            dispatch_time_t tt = dispatch_time(DISPATCH_TIME_NOW, (timeoutExtension * NSEC_PER_SEC));
            dispatch_source_set_timer(readTimer, tt, DISPATCH_TIME_FOREVER, 0);

            // Unpause reads, and continue
            flags &= ~kReadsPaused;
            [self doReadData];
        }
        else
        {
            LogVerbose(@"ReadTimeout");
            [self closeWithError:[self readTimeoutError]];
        }
    }
}
FYI, there is a pull request at https://github.com/robbiehanson/CocoaAsyncSocket/pull/126 that adds this keep-alive feature but it is not pulled yet.
I am the original author of AsyncSocket, and I can tell you why I did it that way: there are too many ways for protocols to handle timeouts. So I implemented a "hard" timeout and left "soft" timeouts up to the application author.
The usual way to do a "soft" timeout is with an NSTimer or dispatch_after. Set one of those up, and when the timer fires, do whatever you need to do. Meanwhile, use an infinite timeout on the actual readData call. Note that infinite timeouts aren't actually infinite. The OS will still time out after, say, 10 minutes without successfully reading. If you really want to keep the connection alive forever, you might be able to set a socket option.
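A sketch of that "soft" timeout pattern with GCDAsyncSocket; self.socket and self.didReceiveData are hypothetical properties (the flag would be set by the socket delegate when data arrives):

- (void)startReadWithSoftTimeout:(NSTimeInterval)timeout
{
    self.didReceiveData = NO;

    // -1 = no hard timeout: the socket stays open however long the read takes
    // (subject to the OS-level limit mentioned above).
    [self.socket readDataWithTimeout:-1 tag:0];

    __weak typeof(self) weakSelf = self;
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(timeout * NSEC_PER_SEC)),
                   dispatch_get_main_queue(), ^{
        if (!weakSelf.didReceiveData) {
            // "Soft" timeout: the connection is still alive. Ping the peer,
            // surface it in the UI, or choose to disconnect ourselves here.
        }
    });
}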