I'm developing a dating website with a messaging feature. The message controller script sleeps for a second between checks for new messages. But my hosting provider only allows me 20 Entry Processes. So:
Will a sleeping script be counted as an Entry Process?
If there are more than 20 users, will the limit be reached and will I get a resource limit error?
If the answer is YES, then how do I achieve the same goal within this limit?
Sample Code:
...
// Long-poll: keep checking for a new message once per second until one arrives.
$isNewMessage = $model->checkNewMessage();
while (!$isNewMessage) {
    sleep($this->sleepTime); // 1 second
    $isNewMessage = $model->checkNewMessage();
}
...
I am using the Serverless Framework to consume messages from SQS. Some of the messages sent to the queue do not get consumed. They go straight to the in-flight SQS status and from there to my dead letter queue. When I look at the consumer's log, I can see that it consumed and successfully processed 9/10 messages. One is always not consumed and ends up in the dead letter queue.

I am setting reservedConcurrency to 1 so that only one consumer can run at a time. The consumer function timeout is set to 30 seconds. This is the consumer code:
module.exports.mySQSConsumer = async (event, context) => {
  context.callbackWaitsForEmptyEventLoop = false;
  console.log(event.Records);

  // Simulate roughly 100 ms of asynchronous work per message.
  await new Promise((res, rej) => {
    setTimeout(() => {
      res();
    }, 100);
  });

  console.log('DONE');
  return true;
};
The consumer function configuration follows:
functions:
  mySQSConsumer:
    handler: handler.mySQSConsumer
    timeout: 30 # seconds
    reservedConcurrency: 1
    events:
      - sqs:
          arn: arn:aws:sqs:us-east-1:xyz:my-test-queue
          batchSize: 1
          enabled: true
If I remove the await call, it will process all messages. If I increase the timeout to 200 ms, even more messages go straight to the in-flight status and from there to the dead letter queue. This code is very simple. Any ideas why it's skipping some messages? The messages that don't get consumed don't even show up in the log from the first console.log() statement. They seem to be entirely ignored.
I figured out the problem. The SQS queue Lambda function event triggering works differently than I thought. The messages get pushed into the Lambda function, not pulled by it. I think this could be engineered better by AWS, but it is what it is.
The issue was the Default Visibility Timeout being set to 30 seconds together with Reserved Concurrency set to 1. When the SQS queue fills up quickly with thousands of records, AWS starts pushing the messages to the Lambda function at a rate faster than the single function instance can process them. AWS "assumes" that it can simply spin up more instances of the Lambda to keep up with the backpressure. However, the concurrency limit doesn't let it spin up more instances, so the Lambda function is throttled.

As a result, failures start being returned to the AWS backend for some messages, which consequently hides the failed messages for 30 seconds (the default setting) and puts them back into the queue after this period for reprocessing. Since there are so many records for the single instance to process, 30 seconds later the Lambda function is still busy and can't process those messages again. So the situation repeats itself and the messages go back into invisibility for 30 seconds. This repeats a total of 3 times. After the third attempt, the messages go to the dead letter queue (we configured our SQS queue that way).
To resolve this issue, we increased the Default Visibility Timeout to 5 minutes. That's enough time for the Lambda function to process through most of the messages in the queue while the failed ones wait in invisibility. After 5 minutes, they get pushed back into the queue and since the Lambda function is no longer busy, it will process most of them. Some of them have to go to invisibility twice before being successfully processed.
So the remedy to this problem is either increasing the Default Visibility Timeout, as we did, or increasing the number of failures allowed before a message goes to the dead letter queue.
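For illustration, here is a hedged sketch of how both knobs could be set if the queue were defined in the serverless.yml resources section. The resource names (MyTestQueue, MyTestDLQ), the DLQ queue name, and the exact values are assumptions for the example, not the configuration used in this answer:

resources:
  Resources:
    MyTestQueue:
      Type: AWS::SQS::Queue
      Properties:
        QueueName: my-test-queue
        VisibilityTimeout: 300            # 5 minutes instead of the 30-second default
        RedrivePolicy:
          deadLetterTargetArn:
            Fn::GetAtt: [MyTestDLQ, Arn]
          maxReceiveCount: 5              # allow more failures before dead-lettering
    MyTestDLQ:
      Type: AWS::SQS::Queue
      Properties:
        QueueName: my-test-queue-dlq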
I hope this helps someone.
I have a Service Bus queue in Azure and a Service Bus queue trigger function. When I first publish the function and send a message to the Service Bus queue, the function gets triggered and runs OK.
But if I leave it alone and don't send any messages to the queue for, say, about an hour, and then I send a message, the function doesn't get triggered. I have to manually run the function again in the portal by pressing 'run', or I have to re-publish it to Azure.
How do I keep it running so I don't have to restart it every hour or so?
My app might not send a new message to the queue for a couple of hours or even days.
FYI - I read here that the function shuts down after 5 minutes. I can't use Functions if this is the case, and I don't want to use a timer trigger because then I'd be running the function more than I would want, wasting money. Right?
If my only/best choice here is to run a timer function every 30 minutes throughout the day, I might just have to suck it up and do it that way, but I'd prefer to have the function run when a message hits the queue.
FYI2 - ultimately I want to hide the message until a certain date when I first push it to the queue (e.g. set it to show up 1 week from the time it's placed in the queue). What am I trying to accomplish? I want to send an email out to registered students after a class has ended, and the class might be scheduled 1-30 days in advance. So when the class is scheduled, I don't want the function to run until after the class ends, which might be 1 week, 2 weeks and 2 days, 3 weeks and 3 days, etc. after the class is initially scheduled in my app.
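As an aside on the "hide the message until a certain date" part: Service Bus supports scheduling a message so it only becomes visible at a later time. A minimal sketch of the sending side with the older Microsoft.ServiceBus.Messaging client, where the connection string, the class end date, and the payload are placeholders:

using System;
using Microsoft.ServiceBus.Messaging;

// Placeholders: real values come from configuration and the app's class schedule.
string connectionString = "Endpoint=sb://...";          // needs Send rights on the queue
DateTime classEndDateUtc = DateTime.UtcNow.AddDays(14); // hypothetical class end date

var client = QueueClient.CreateFromConnectionString(connectionString, "yogabandy2017");

var message = new BrokeredMessage("post-class email payload")
{
    // The message stays invisible until this UTC time, e.g. one week after the class ends.
    ScheduledEnqueueTimeUtc = classEndDateUtc.AddDays(7)
};

client.Send(message);

With this approach the queue trigger only fires when the message becomes visible, so no timer function is needed.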
Here is the snippet from the function.json
{
  "generatedBy": "Microsoft.NET.Sdk.Functions-1.0.0.0",
  "configurationSource": "attributes",
  "bindings": [
    {
      "type": "serviceBusTrigger",
      "connection": "Yogabandy2017_RootManageSharedAccessKey_SERVICEBUS",
      "queueName": "yogabandy2017",
      "accessRights": "listen",
      "name": "myQueueItem"
    }
  ],
  "disabled": false,
  "scriptFile": "..\\bin\\PostEventEmailFunction.dll",
  "entryPoint": "PostEventEmailFunction.Function1.Run"
}
Here is the function
public static class Function1
{
    [FunctionName("Function1")]
    public static void Run([ServiceBusTrigger("yogabandy2017", AccessRights.Listen, Connection = "Yogabandy2017_RootManageSharedAccessKey_SERVICEBUS")]string myQueueItem, TraceWriter log)
    {
        log.Info($"C# ServiceBus queue trigger function processed message: {myQueueItem}");
    }
}
I've been asked to post my payment plan
I suspect you're using scenario 1 or 2 below, which will shut down after a period of no activity.
1. The Function App is on a Free/Shared App Service Plan. Always On is not available, so you'll have to upgrade your plan.
2. The Function App is on a Basic/Standard/Premium App Service Plan. You'll need to ensure that Always On is checked in the function's Application Settings.
3. If the Function App is using a Consumption Plan, this should work: it will wake up the function app on each request. For your scenario I would recommend this approach, as it only utilises resources when required.
FYI1 - If you get the above working, you won't need the timer.
A Consumption Plan will show up in the Application Settings. Please check your service plan under:
1. Platform features
2. App Service plan
3. Pricing Tier
Please provide your app name.
I've implemented a Web role that writes to a queue. This is working fine. Then I developed a Worker role to read from the queue. When I run it in debug mode from my local machine it reads the messages from the queue fine, but when I deploy the Worker role it doesn't seem to be reading the queue, as the messages eventually end up in the dead letter queue. Does anyone know what could be causing this behavior? Below are some bits that might be key in figuring this out.
queueClient = QueueClient.Create(queueName, ReceiveMode.PeekLock);

var queueDescription = new QueueDescription(QueueName)
{
    RequiresSession = false,
    DefaultMessageTimeToLive = TimeSpan.FromMinutes(2),
    EnableDeadLetteringOnMessageExpiration = true,
    MaxDeliveryCount = 20
};
Increase QueueDescription.DefaultMessageTimeToLive to ~10 minutes.
This property dictates how long a message may live in the queue before it is processed (i.e. before Message.Complete() is called). If it remains in the queue for more than 2 minutes, it will automatically be moved to the dead letter queue (since you set EnableDeadLetteringOnMessageExpiration to true).
TTL is useful in messaging scenarios such as:
if a message has not been processed within N minutes of arriving, it might no longer be useful to process it
if a message has been attempted many times and was never completed (the receiver never called msg.Complete()), it might need special processing
So, to be safe, use a somewhat higher value for DefaultMessageTimeToLive.
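For illustration, a minimal sketch of the same queue description with a longer TTL (the 10-minute value is only an example):

// Same queue description as in the question, but with a longer TTL so a slow
// or temporarily idle consumer can still call msg.Complete() before expiry.
var queueDescription = new QueueDescription(QueueName)
{
    RequiresSession = false,
    DefaultMessageTimeToLive = TimeSpan.FromMinutes(10), // was 2 minutes
    EnableDeadLetteringOnMessageExpiration = true,
    MaxDeliveryCount = 20
};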
Hope it Helps!
Sree
I have reminder functionality using SignalR in ASP.NET MVC.
I have a user interface to set the reminder time. If the current time matches the reminder time, it invokes a popup.
I successfully implemented this functionality with SignalR by checking the database once every 30 seconds using a JavaScript timer. If the current time does not match, it returns '0'; if it matches, it returns '1' and the popup is shown across all browsers. But can this polling of the database every 30 seconds be replaced by SignalR? Is there any way to bring this whole thing into SignalR?
You can use System.Threading.Timer to create a periodic method call on the server and broadcast to clients. Following the StockTicker sample project:
_timer = new Timer(UpdateStockPrices, null, _updateInterval, _updateInterval);
This creates a callback delegate and invokes UpdateStockPrices repeatedly with a period of _updateInterval.
In this callback (code given below) you can broadcast the reminder message from the server to all clients, or to the clients associated with that reminder.
You can write code such as:
Clients.All.updateStockPrice(stock);
You can refer to the Timer documentation here:
http://msdn.microsoft.com/en-us/library/system.threading.timer.aspx
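To make this concrete, here is a hedged sketch adapted from that pattern to the reminder scenario. The hub and method names (ReminderHub, showReminder) are hypothetical, and the database check is only indicated by a comment:

using System;
using System.Threading;
using Microsoft.AspNet.SignalR;

// Server-side ticker that replaces the 30-second JavaScript polling:
// it checks for due reminders on the server and pushes them to clients.
public class ReminderTicker
{
    private static readonly ReminderTicker _instance = new ReminderTicker();
    private readonly TimeSpan _updateInterval = TimeSpan.FromSeconds(30);
    private readonly Timer _timer;

    private ReminderTicker()
    {
        _timer = new Timer(CheckReminders, null, _updateInterval, _updateInterval);
    }

    public static ReminderTicker Instance
    {
        get { return _instance; }
    }

    private void CheckReminders(object state)
    {
        // Hypothetical: query the database for reminders whose time has arrived,
        // then broadcast each one to the connected clients.
        var hubContext = GlobalHost.ConnectionManager.GetHubContext<ReminderHub>();
        hubContext.Clients.All.showReminder("A reminder is due");
    }
}

public class ReminderHub : Hub
{
}

On the client, a JavaScript handler for showReminder would display the popup, just as the existing polling code does today.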
Yes, you can use a Timer at the appdomain scope, application scope or at the hub level. Just get the sample from NuGet, called "Microsoft.AspNet.SignalR.Sample". It implements a stock ticker that periodically broadcasts changes to all clients.
I have implemented a Google App Engine application which uploads documents to specific folders in Google Docs. A month ago I started having response time issues (deadline exceeded on GdataClient.GetDocList, the fetch-url call, in the GData client) when querying for a specific folder in Google Docs. This caused a lot of tasks to queue up in the Task Queue.
When I saw this, I paused the queues for a while, about 24 hours. When I restarted the queue, nearly all of the files were uploaded again, except for 10 of the files/tasks.
When I implemented the GetDocList call, I added retry/sleep functionality to avoid the intermittent "DeadlineExceeded" errors I got during my .GetNextLink().href loop. I know this is not good "cloud" design, but I was forced to do it to get it stable enough for production. For every sleep I extend the wait time, and I only retry 5 times. The last time I wait about 25 seconds before retrying.
What I think is that all the tasks in the queues retried so many times (even though I have limited the tasks to run serially, one at a time, at a maximum of 5 per minute) that the App Engine app was blacklisted by the Google Docs API.
Can this happen?
What do I need to do to be able to query the Google Docs API from the same App Engine instance again?
Do I need to migrate the App Engine app to a new Application ID?
When I try this from my development environment, the code works: it queries the folder structure and returns a result within the time limit.
The folder structure I'm querying is rather big, which means I need to fetch it page by page via .GetNextLink().href. In my development environment, the folder structure contains far fewer folders.
Anyway, this had been working very well for about a year in the production App Engine instance, but it stopped working around the 4th or 5th of March.
The user-account which is queried is currently using 7000 MB (3%) of the available 205824 MB.
When I use the code from the dev environment but with a completely different Google Apps domain / app ID / Google account, I cannot reproduce the error.
When I change max-results to 1 (instead of 100, 50 or 20), I succeed intermittently. But with max-results at 1 I need to query many thousands of times, and since I only succeed with at most 3 fetches in a row before my exponential back-off quits, I never get my whole result set. The folder I query consists of between 300 and 400 folders, which in turn consist of at least 2-6 subfolders with PDF files in them.
I have tried a max-results of 2, and then the fetch fails on every occasion. If I change back to a max-results of 1, it succeeds on one or two fetches in a row, but this is not sufficient, since I need the whole folder structure to be able to find the correct folder to store the file in.
I have tried this from my local environment, i.e. from a completely different IP address, and it still fails. This means the App Engine app is not blocked from accessing Google Docs. The max-results change from 2 to 1 also supports that.
Conclusion:
The slow response time from the Google Docs API must be due to the large number of files and collections inside the collection I'm looping through. Keep in mind that this collection contains about 3500 MB. Is this an issue?
Log:
DocListUrl to get entries from = https://docs.google.com/feeds/default/private/full/folder:XXXXXXX/contents?max-results=1.
Retrying RetryGetDocList, wait for 1 seconds.
Retrying RetryGetDocList, wait for 1 seconds.
Retrying RetryGetDocList, wait for 4 seconds.
Retrying RetryGetDocList, wait for 9 seconds.
Retrying RetryGetDocList, wait for 16 seconds.
Retrying RetryGetDocList, wait for 25 seconds.
ApplicationError: 5
Traceback (most recent call last):
File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/webapp/_webapp25.py", line 703, in call
handler.post(*groups)
File "/base/data/home/apps/XXXX/prod-43.358023265943651014/DocsHandler.py", line 418, in post
success = uploader.Upload(blob_reader, fileToUpload.uploadSize, fileToUpload.MainFolder, fileToUpload.ruleTypeReadableId ,fileToUpload.rootFolderId,fileToUpload.salesforceLink,fileToUpload.rootFolder, fileToUpload.type_folder_name, fileToUpload.file_name, currentUser, client, logObj)
File "/base/data/home/apps/XXXX/prod-43.358023265943651014/DocsClasses.py", line 404, in Upload
collections = GetAllEntries('https://docs.google.com/feeds/default/private/full/%s/contents?max-results=1' % (ruleTypeFolderResourceId), client)
File "/base/data/home/apps/XXXX/prod-43.358023265943651014/DocsClasses.py", line 351, in GetAllEntries
chunk = RetryGetDocList(client.GetDocList , chunk.GetNextLink().href)
File "/base/data/home/apps/XXX/prod-43.358023265943651014/DocsClasses.py", line 202, in RetryGetDocList
return functionCall(uri)
File "/base/data/home/apps/XXX/prod-43.358023265943651014/gdata/docs/client.py", line 142, in get_doclist
auth_token=auth_token, **kwargs)
File "/base/data/home/apps/XXXX/prod-43.358023265943651014/gdata/client.py", line 635, in get_feed
**kwargs)
File "/base/data/home/apps/XXXXX/prod-43.358023265943651014/gdata/client.py", line 265, in request
uri=uri, auth_token=auth_token, http_request=http_request, **kwargs)
File "/base/data/home/apps/XXXX/prod-43.358023265943651014/atom/client.py", line 117, in request
return self.http_client.request(http_request)
File "/base/data/home/apps/XXXXX/prod-43.358023265943651014/atom/http_core.py", line 420, in request
http_request.headers, http_request._body_parts)
File "/base/data/home/apps/XXXXX/prod-43.358023265943651014/atom/http_core.py", line 497, in _http_request
return connection.getresponse()
File "/base/python_runtime/python_dist/lib/python2.5/httplib.py", line 206, in getresponse
deadline=self.timeout)
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/urlfetch.py", line 263, in fetch
return rpc.get_result()
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 592, in get_result
return self.__get_result_hook(self)
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/urlfetch.py", line 371, in _get_fetch_result
raise DeadlineExceededError(str(err))
DeadlineExceededError: ApplicationError: 5
Regards
/Jens
Occasionally, responses from the Google Documents List API exceed the deadline for App Engine HTTP requests. This can happen when extremely large sets of documents are returned by the API.
To work around this, set the max-results parameter to a number smaller than 1000.
Also, retry the request using exponential back-off.
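For illustration, a minimal back-off sketch assuming a fetch callable like the client.GetDocList used in the question's traceback (the function and parameter names here are hypothetical):

import time

def fetch_with_backoff(fetch, uri, max_attempts=5):
    """Call fetch(uri), retrying with exponential back-off on errors such as deadline exceptions."""
    for attempt in range(max_attempts):
        try:
            return fetch(uri)
        except Exception:  # e.g. urlfetch DeadlineExceededError
            if attempt == max_attempts - 1:
                raise
            time.sleep(2 ** attempt)  # wait 1, 2, 4, 8 ... seconds between attempts

# usage (hypothetical): chunk = fetch_with_backoff(client.GetDocList, chunk.GetNextLink().href)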
To work around failing uploads, use the App Engine Task Queue to complete uploads, as well as resumable uploads with the API.
You can ask the App Engine team to increase the HTTP timeout of your application to a large number of seconds, which would allow this request to succeed. However, the team rarely approves such a request without a strong need.