Request timeout when uploading big data

I sometimes get a request timeout when I upload big data (>= 1 GB).
Smaller data is no problem.
Why might that be?
I have already set the timeout to 0.
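Setting the client-side timeout to 0 doesn't help if a server, proxy, or load balancer along the way enforces its own limit. A common mitigation is to split a multi-gigabyte upload into fixed-size chunks so each individual request stays short. A minimal sketch of the offset arithmetic (function name and the 10 MiB chunk size are illustrative, not from the question):

```python
def chunk_offsets(total_size, chunk_size=10 * 1024 * 1024):
    """Yield (start, end) byte offsets for uploading a large file in
    fixed-size chunks, so each request finishes well before any
    intermediate timeout can fire. `end` is exclusive."""
    start = 0
    while start < total_size:
        end = min(start + chunk_size, total_size)
        yield start, end
        start = end
```

Each (start, end) pair would then be sent as one ranged request; the final chunk is simply shorter than the rest.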

Related

OneDrive - uploading file using Graph API results in occasional 416

I'm using the "get an upload session, upload chunks" method. Generally, it's working. I'm using a chunk size of 640 * 1024, which according to the docs is legit.
Occasionally I'll get a response code of 416 (Requested Range Not Satisfiable). The Upload large files with an upload session documentation is not very clear on what to do when I get one:
On failures when the client sent a fragment the server had already received, the server will respond with HTTP 416 Requested Range Not Satisfiable.
I am keeping track in my code of the chunks, which I believe I am doing correctly. Is there another reason I might get a 416? I will say that I was getting crappy upload speeds from my ISP at this time.
If I do get a 416, should I just retry it? Or should I ignore it, and believe the docs, that (for some reason) that chunk has already been received?
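The safe recovery for a 416 is neither a blind retry nor a blind skip: issue a GET against the upload session URL, read the `nextExpectedRanges` array the service returns, and resume from the offset the server reports. A sketch of that offset calculation (helper name is my own; the 640 * 1024 chunk size is the one from the question):

```python
CHUNK_SIZE = 640 * 1024  # multiple of 320 KiB, per the question

def next_chunk_range(next_expected_ranges, total_size, chunk_size=CHUNK_SIZE):
    """Given the nextExpectedRanges array from a GET on the upload
    session URL (e.g. ["1310720-"]), return (start, end, content_range)
    for the next fragment to send, or None if nothing remains.
    `end` is inclusive, matching the Content-Range header format."""
    if not next_expected_ranges:
        return None  # server has every byte; the upload is complete
    start = int(next_expected_ranges[0].split("-")[0])
    if start >= total_size:
        return None
    end = min(start + chunk_size, total_size) - 1
    return start, end, f"bytes {start}-{end}/{total_size}"
```

Resuming from the server's reported offset handles both causes at once: if the fragment really was already received, you naturally move past it; if your local bookkeeping drifted, you re-sync to what the server actually has.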

What is the difference between "dailyLimitExceeded" and "quotaExceeded" in YouTube data API?

When I run a program using the YouTube Data API, I sometimes get a dailyLimitExceeded error and sometimes a quotaExceeded error. In the case of dailyLimitExceeded, if I run it again after a few seconds,
sometimes it works and sometimes it doesn't. In the case of quotaExceeded, however, the quota is not reset until 5 o'clock (Pacific time) the next day. Only after the quota has been reset can the program resume normal operation.
I understand quotaExceeded to some extent, but I don't know exactly what dailyLimitExceeded is.
It's because of the latency of the data refresh rate. If you send requests to YouTube too frequently, YouTube protects itself and temporarily returns dailyLimitExceeded, but a bit later a new request will work properly.
Check out the report:
https://issuetracker.google.com/150106191
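In practice, then, the two errors call for different handling: dailyLimitExceeded is transient and worth retrying with backoff, while quotaExceeded won't clear until the quota resets, so retrying immediately just burns requests. A sketch of that split (the function and the way it reports errors are illustrative, not the YouTube client library's API):

```python
import time

# Error reasons worth retrying after a short wait
RETRYABLE = {"dailyLimitExceeded", "rateLimitExceeded"}

def call_with_backoff(request_fn, max_tries=5, base_delay=1.0):
    """Call request_fn() -> (error_reason_or_None, result). Retry
    transient errors with exponential backoff; fail fast on
    quotaExceeded, which only resets the next day."""
    for attempt in range(max_tries):
        reason, result = request_fn()
        if reason is None:
            return result
        if reason not in RETRYABLE:
            raise RuntimeError(f"not retryable: {reason}")
        time.sleep(base_delay * 2 ** attempt)
    raise RuntimeError("retries exhausted")
```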

Application Insights Alert (Response Time) triggering when all my requests are below the limit

I'm experiencing weird behaviour in production: my Insights alert for response time (greater than 3 seconds in the last 5 minutes) is triggering all the time, but if I look into the logged requests, most of them are below my threshold.
In the first picture you can see that there is a time in the last 24 hours where my average response time is greater than a minute, but in the second picture we can see that for the same time there is no request taking more than 30 seconds. So how can the average be greater than 1 minute?
We are using Azure Mobile Apps, SignalR (but the SignalR requests are being ignored using a filter) and our Insights version is 2.6.4.
(Images omitted.)

QPS/Call quota limit?

I keep getting:
Response:
{"errors":[{"message":"You have made too many requests recently. Please, be chill."}]}
When trying to download all my tasks. Is there a published QPS or other quota limit, so I know how long I should pause between requests?
(I work at Asana)
As stated in the documentation, the current request limit is around 100 per minute. The error response you are getting back also contains a Retry-After header with the number of seconds you must wait before you can make a request again.
We may also institute a daily limit at some point in the future -- we think 100 per minute is a reasonable burst rate, but not a reasonable sustained rate throughout the day. However, we are not enforcing a daily limit yet.
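Honoring that Retry-After header is the whole fix: sleep for the number of seconds it reports, then reissue the request. A minimal sketch (the `do_request` callable and its `(status, headers, body)` return shape are assumptions for illustration, not Asana's client API):

```python
import time

def request_with_retry_after(do_request, max_tries=3):
    """Call do_request() -> (status, headers, body). On a 429, sleep
    for the Retry-After number of seconds the server sent, then retry."""
    for _ in range(max_tries):
        status, headers, body = do_request()
        if status != 429:
            return body
        # Server tells us exactly how long to back off
        time.sleep(float(headers.get("Retry-After", "1")))
    raise RuntimeError("still rate limited after retries")
```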

PHP script just hangs on large file uploads

I have a php script that allows users to upload multiple files to server on POST, then redirect to next page.
It seems to have been working for some time, but lately users are reporting that it hangs indefinitely. They fill in all the fields, select files to upload, hit post, wait for hours, then give up and close the window. But when I check, it appears the files were successfully uploaded and intact; just the fields were not posted.
It seems the script cannot transition to the next section, where the form fields get parsed and inserted into a MySQL database. I've done some small tests and cannot recreate the problem, although I don't have the time to test with large files such as 200M.
The max total filesize any user would upload would be 200M so I feel my php core settings are sufficient. Here is what I have:
max_execution_time = 7200
max_file_uploads = 20
max_input_time = 7200
memory_limit = 8000M
output_buffering = 4096
upload_max_filesize = 500M
Anything else in the core settings that could perhaps be giving me this problem? Or would it be a browser problem?
This is most likely your users' connection speed. Ask one of your users for their connection speed, and have them use Google Chrome and watch the status bar, which shows the percentage progress of the upload. Or I recommend trying this yourself and throttling your bandwidth somehow. Remember, your users most likely have a maximum of 1.5 Mbps up unless they have FiOS or a better connection (e.g. a T1).

Resources