I am getting intermittent 500 errors when hitting the YouTube ContentID/partner API. At times, this is the response:
{"errors":[{"domain":"youtubePartner","reason":"internalError","message":"An internal error has occurred."}],"code":500,"message":"An internal error has occurred."}
At other times, this is the response:
{"code":500,"message":null}
And at other times, the request succeeds.
This happens most often when inserting a claim, next most often when setting ownership, and less often (but still regularly) when creating an asset or setting advertising options.
Is there any alternative to adding retry logic?
These are most likely transient errors; implementing an exponential backoff retry policy is your best option for now.
That said, the underlying issue should have been fixed by now.
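If you do add retries, a minimal backoff wrapper might look like this (a TypeScript sketch; the attempt count, base delay, and example endpoint are illustrative assumptions, and makeRequest stands in for whatever client call you are making):

// Retry transient 5xx responses with exponential backoff plus jitter.
// Success and non-retryable (non-5xx) responses are returned immediately.
async function withBackoff(
  makeRequest: () => Promise<Response>,
  maxAttempts = 5,
  baseDelayMs = 500,
): Promise<Response> {
  let response = await makeRequest();
  for (let attempt = 1; attempt < maxAttempts && response.status >= 500; attempt++) {
    // Double the delay on each attempt; jitter keeps concurrent clients
    // from retrying in lockstep.
    const delayMs = baseDelayMs * 2 ** (attempt - 1) * (1 + Math.random());
    await new Promise((resolve) => setTimeout(resolve, delayMs));
    response = await makeRequest();
  }
  return response;
}

// Usage with a hypothetical endpoint:
// const res = await withBackoff(() => fetch("https://example.com/v1/claims", { method: "POST" }));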
Related
I am developing a Node.js app which connects to the Microsoft Graph API.
Oftentimes, I get back a 429 status code, which is described as "Too Many Requests" in the error documentation.
Sometimes the message returned is:
TooManyRequests. Client application has been throttled and should not attempt to repeat the request until an amount of time has elapsed.
Other times, it returns:
"TooManyRequests. The server is busy. Please try again later.".
Unfortunately, it is not returning a Retry-After field in the headers, even though their best-practices documentation claims that it should.
This is entirely in development, and I have not been hitting the service much, as it has all been during debugging. I realize Microsoft is often changing how this works. I just find it difficult to develop an app around a service which does not even provide a Retry-After field, and seems to have a lot of problems (I am using the v1.0 endpoint).
When I wait 5 minutes (as I have seen recommended), the service still errors out. Here is an example return response:
{
  "error": {
    "code": "TooManyRequests",
    "message": "The server is busy. Please try again later.",
    "innerError": {
      "request-id": "d963bb00-6bdf-4d6b-87f9-973ef00de211",
      "date": "2017-08-31T23:09:32"
    }
  }
}
Could this relate at all to the operation being carried out?
I am updating a range from A2:L3533. They are all text values. I am wondering if this could impact the throttling. I have not found any guidance regarding using "smaller" operation sets.
Without seeing your code, it is hard to diagnose exactly what is going on. That said, your Range here is enormous and will almost certainly result in issues.
From the documentation:
Large Range implies a Range of a size that is too large for a single API call. Many factors such as number of cells, values, numberFormat, and formulas contained in the range can make the response so large that it becomes unsuitable for API interaction. The API makes a best attempt to return or write to the requested data. However, the large size involved might result in an API error condition because of the large resource utilization.
To avoid this, we recommend that you read or write for large Range in multiple smaller range sizes.
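In practice that means slicing the write into row chunks and issuing one call per chunk. A TypeScript sketch of the idea (the drive-item path, worksheet name, and 500-row chunk size are illustrative assumptions; auth and retry handling are omitted):

// Split one huge update (e.g. A2:L3533) into smaller row chunks and
// PATCH each chunk separately against the Graph workbook range endpoint.
const GRAPH = "https://graph.microsoft.com/v1.0";

async function updateRangeInChunks(
  itemId: string,       // drive item id of the workbook (assumed known)
  sheet: string,        // worksheet name, e.g. "Sheet1"
  values: string[][],   // all rows for columns A..L, starting at row 2
  accessToken: string,
  chunkRows = 500,      // illustrative chunk size; tune to taste
): Promise<void> {
  for (let offset = 0; offset < values.length; offset += chunkRows) {
    const chunk = values.slice(offset, offset + chunkRows);
    const firstRow = 2 + offset;                 // data starts at row 2
    const lastRow = firstRow + chunk.length - 1;
    const address = `A${firstRow}:L${lastRow}`;
    const url = `${GRAPH}/me/drive/items/${itemId}/workbook/worksheets/` +
      `${sheet}/range(address='${address}')`;
    const res = await fetch(url, {
      method: "PATCH",
      headers: {
        Authorization: `Bearer ${accessToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ values: chunk }),
    });
    if (!res.ok) throw new Error(`Chunk ${address} failed: ${res.status}`);
  }
}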
The operation couldn’t be completed. An internal error occurred in the Places API library. If you believe this error represents a bug, please file a report using the instructions on our community and support page (https://developers.google.com/places/support).
I'm using the Current Place API, and I've been getting this error all day today. It worked fine until now, but today it started throwing that error every time.
I thought it could be because of the limits, but I've raised the limit by enabling billing, and the Google Developers Console shows I made 50 requests in the last day and 500 for the entire month. The default limit is 1,000 requests per day, or 150k per day if you've enabled billing. So it seems this is not the reason.
Any ideas what could cause this problem?
Judging by other people's comments and this answer, it seems this was a temporary issue on Google's end. Everything started working on its own.
Background
I am creating a series of requests to grab a chunk of a file. The chunk size stays the same, so the number of requests varies depending on which file I am downloading. For smaller files, and thus a smaller number of requests, I seem to succeed reliably. However, once my request chain reaches the 10+ ballpark, I start to get an error.
Error
I am getting an error from what appears to be Alamofire.
Error code -999 cancelled.
Looking at other proposed solutions
From the searching I have done, it seems this occurs when either the session manager is deallocated or another request is kicked off before the previous one has received a response.
I made my session manager static, as suggested in other posts, to handle the deallocation issue, but I still get this error.
I don't think the next request is being fired before the first finishes, as my logs seem to print in order and the failure is rather random; if overlapping requests were the cause, I would expect the failure to occur quite reliably.
Is there anything else that causes this error code?
Additional Logs
NetworkFilesClient.swift:351 - Error downloading chunk URL: MY_URL_HERE,
Range: bytes=29360128-33554432,
Error: Error Domain=NSURLErrorDomain Code=-999 "cancelled"
UserInfo={NSErrorFailingURLKey=MY_URL_HERE,
NSLocalizedDescription=cancelled,
NSErrorFailingURLStringKey=MY_URL_HERE}
What works for me:
sessionManager.session.finishTasksAndInvalidate()
I put this at the end of my response handling. Why does it help? Presumably because finishTasksAndInvalidate() lets any in-flight tasks complete before the session is invalidated, whereas tearing the session down (or letting the manager deallocate) cancels pending tasks with exactly this -999 "cancelled" error.
I am observing some really strange behavior.
Sometimes the following URL works; sometimes it just says "Connect Failure" and returns no results.
Is there a timeout? It might work 1 out of 10-15 times; I have tried waiting for 10 minutes, but it just keeps saying the same thing. Stability issues? Something I'm doing wrong? What's happening?
http://query.yahooapis.com/v1/public/yql?q=select%20Name,Symbol,LastTradePriceOnly,Volume,Change%20from%20yahoo.finance.quotes%20where%20symbol%20in%20(%22AAPL%22%2C%22GOOGL%22)&format=json&diagnostics=true&env=store://datatables.org/alltableswithkeys
I'm using ASINetworkQueue to execute multiple ASIHTTPRequests, and if any request fails I'd like the queue to cancel any pending requests and end. From reading the docs, this should be the default behaviour. But I'm finding that even after a request fails, I still get 'requestStarted' for most of the remaining requests, and 'requestFailed' for all of them; is this how it is supposed to be? I'm guessing it's maybe because my requests are quite small and they start before the queue has a chance to cancel them once a failure is detected. I tried explicitly setting setShouldCancelAllRequestsOnFailure:YES, but this made no difference.
Without knowing the exact nature of your requests ... short answer: Yes, it's working how it's supposed to. Your requests are starting before a failure occurs. Longer answer: Try setting the queue's maxConcurrentOperationCount property. This may help you control the request pipeline a bit better if you need to test for failure.