I'm using the "get an upload session, upload chunks" method. Generally, it's working. I'm using a chunk size of 640 * 1024, which according to the docs is legit.
Occasionally I'll get a response code of 416 (Requested Range Not Satisfiable). The "Upload large files with an upload session" documentation is not very clear on what to do when I get one:
On failures when the client sent a fragment the server had already received, the server will respond with HTTP 416 Requested Range Not Satisfiable.
I am keeping track in my code of the chunks, which I believe I am doing correctly. Is there another reason I might get a 416? I will say that I was getting crappy upload speeds from my ISP at this time.
If I do get a 416, should I just retry it? Or should I ignore it and believe the docs that (for some reason) the chunk has already been received?
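For what it's worth, the documented way to resynchronize after a 416 is to issue a GET against the uploadUrl and read `nextExpectedRanges` from the response, then resume from the offset the server reports rather than blindly retrying. A minimal Python sketch of the bookkeeping (the helper names are mine; the actual network call is omitted):

```python
def parse_next_expected_ranges(ranges, total_size):
    """Turn Graph's nextExpectedRanges strings (e.g. "26214400-")
    into inclusive (start, end) byte offsets."""
    parsed = []
    for r in ranges:
        start_s, _, end_s = r.partition("-")
        start = int(start_s)
        end = int(end_s) if end_s else total_size - 1
        parsed.append((start, end))
    return parsed

def next_chunk(ranges, total_size, chunk_size=640 * 1024):
    """Pick the next chunk to send: the first range the server still
    expects, capped at chunk_size. Returns None when nothing is left."""
    spans = parse_next_expected_ranges(ranges, total_size)
    if not spans:
        return None  # server already has everything
    start, end = spans[0]
    return (start, min(start + chunk_size - 1, end))
```

On a 416, GET the uploadUrl, pull `nextExpectedRanges` out of the JSON, and feed it to `next_chunk`; if the chunk you were about to send starts before the next expected offset, the server already has it and you can skip ahead instead of retrying.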
Related
I'm receiving a server error when trying to process large audio files. The audio files are originally audio/m4a @ 32 kHz, and per the documentation's recommendations I am converting/compressing them to audio/amr_wb @ 16 kHz. These files are well below the 180-minute audio limit, yet I'm still receiving a server error when processing them.
GaxError Exception occurred in retry method that was not classified as transient, caused by 8:Received message larger than max (5371623 vs. 4194304)
I'm using version V1p1beta and the method long_running_recognize to transcribe these audio files. My files are hosted on Google Cloud Storage and I'm providing the uri in my api call.
How can I send large audio files to the API without the server enforcing a size restriction? It seems wrong to recommend using FLAC or WAV and to set a 180-minute audio length limit if the server can't even handle my hour-long audio file that has been encoded to AMR_WB.
Thanks for any help
The Speech-to-Text API has since released the v1 endpoint; I suggest trying this version. I was able to get a proper response using a 90-minute audio file.
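For reference, a hedged sketch of what the v1 call might look like in Python (the bucket path is a placeholder, and the config assumes the AMR_WB @ 16 kHz setup described above; requires the google-cloud-speech package and credentials):

```python
def build_recognize_args(gcs_uri):
    """Assemble the config/audio pair for a long-running recognition
    of an AMR_WB file hosted on Google Cloud Storage."""
    config = {
        "encoding": "AMR_WB",
        "sample_rate_hertz": 16000,
        "language_code": "en-US",
    }
    audio = {"uri": gcs_uri}
    return config, audio

def transcribe(gcs_uri):
    # v1 client, not v1p1beta1; import kept local so the helpers above
    # work without the library installed.
    from google.cloud import speech
    client = speech.SpeechClient()
    config, audio = build_recognize_args(gcs_uri)
    operation = client.long_running_recognize(config=config, audio=audio)
    return operation.result(timeout=3600)
```

Because the audio is referenced by `uri` rather than sent inline, the 4 MiB gRPC message cap on request payloads should not come into play.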
The Google Cloud Speech API's streaming requires the length of a streaming session to be less than 60 seconds. To stream beyond this limitation, we need to split the streaming data into several chunks, using something like single_utterance, for example. When submitting such a chunk, we use "RequestObserver.onCompleted()" to mark the end of the streaming session. However, it seems the gRPC threads created to handle the streaming keep running even after the final results arrive, causing the error "io.grpc.StatusRuntimeException: OUT_OF_RANGE: Exceeded maximum allowed stream duration of 65 seconds".
Are there any other mechanisms that could be used to properly terminate the gRPC threads, so that they don't keep running until the 65-second limit is hit? (It seems we could use SpeechClient.close() or SpeechClient.shutdown() to release all background resources, but that would require recreating the SpeechClient instances, which would be somewhat heavy.)
Are there any other recommended ways to stream data beyond the 60-second limit, without splitting the stream into several chunks?
Parameters: [Encoding=LINEAR16, Rate=44100]
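A common workaround is to restart the streaming session just before the limit rather than relying on single_utterance: each restart gets a fresh 60-second budget without recreating the SpeechClient. A rough sketch of the session-splitting bookkeeping (pure Python, helper names are mine; the actual gRPC calls are omitted):

```python
def split_into_sessions(chunks, chunk_ms, limit_ms=55_000):
    """Group audio chunks into sessions, each kept safely under the
    streaming duration limit so the stream can be restarted in between.
    chunk_ms is the playback duration of one chunk in milliseconds."""
    session, elapsed = [], 0
    for chunk in chunks:
        if elapsed + chunk_ms > limit_ms and session:
            yield session          # close this stream before the cap
            session, elapsed = [], 0
        session.append(chunk)
        elapsed += chunk_ms
    if session:
        yield session
```

For each yielded session, open a new streaming call, send the chunks, call onCompleted(), and gather the results; keeping each stream under the cap this way avoids the OUT_OF_RANGE error while reusing the same client.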
I am calling the Microsoft Graph REST API from Node.js (JavaScript). Even a GET operation for a single empty cell returns a status code 429 "TooManyRequests - The server is busy. Please try again later." error. Another SO question [Microsoft Graph throttling Excel updates] has answers that point to MS documentation about making smaller requests. Unfortunately, the suggestions are rather vague.
My question is: does the size of the file in OneDrive have an impact on throttling? The file I am attempting to update is over 4 MB in size, but the updates (PATCH) that I have attempted are only 251 bytes (12 cells), and I continue to get the error. Even a GET for a single cell receives it. This happened after 72 hours of inactivity. I am using a business account, and unfortunately MS support will not help, as they will only speak to Admins.
Assuming this is an unrelated issue, as I do have about 3500 rows (of about 12 columns) to update, what is the best "chunk size" to update them in? Is 50 ok? Is 100 ok? Thank You!
NOTE: This same throttling happens in the Graph Explorer, not just via code. Also, there is no Retry-After field returned in the Response Headers.
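In the absence of a Retry-After header, the usual fallback is small batches plus exponential backoff with jitter. A sketch of both pieces (the batch size of 50 is a guess, not a documented limit):

```python
import itertools
import random

def batches(rows, size=50):
    """Split the rows into PATCH-sized batches; one request per batch."""
    it = iter(rows)
    while True:
        batch = list(itertools.islice(it, size))
        if not batch:
            return
        yield batch

def backoff_delay(attempt, base=1.0, cap=60.0):
    """Exponential backoff with full jitter, for 429 responses that
    carry no Retry-After header: sleep a random time in
    [0, min(cap, base * 2**attempt)] before retrying."""
    return random.uniform(0, min(cap, base * 2 ** attempt))
```

On a 429, honor Retry-After when it is present; otherwise sleep `backoff_delay(attempt)` and retry the same batch, increasing `attempt` on each consecutive failure.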
My app communicates with a server over TCP, using AsyncSocket. There are two situations in which communication takes place:
The app sends the server something, and the server responds. The app needs to read this response and do something with the information in it. This response is always the same length (e.g., 6 bytes).
The app is "idling" and the server initiates communication at some time (unknown to the app). The app needs to read whatever the server is sending (could be any number of bytes, but the first byte will indicate how many bytes are following so I know when to stop reading) and process this information.
The first situation is working fine. readDataToLength:timeout:tag returns what I need and I can do with it what I want. It's the second situation that I'm unsure of how to implement. I can't use readDataToLength:timeout:tag, since I don't know the length beforehand.
I'm thinking I could do something with readDataWithTimeout:tag:, setting the timeout to -1. That makes the socket constantly listen for anything that's coming in, I believe. However, that will probably interfere with data coming in as a response to something I sent out (situation 1). The app can't distinguish incoming data from situation 1 or situation 2 anymore.
Can anybody here help me solve this?
Your error is in the network protocol design.
Unless your protocol carries information that identifies each message's type, there's no way to distinguish a response from a server-initiated communication, and network latency prevents the obvious time-based approach you've described from working reliably.
One simple way to fix the protocol in your case (if the server-initiated messages are always less than 255 bytes) is to add a seventh byte at the beginning of the response, with the value 0xFF.
This way you can readDataWithTimeout:tag: for 1 byte.
On timeout, you retry until data arrives.
If the received value is 0xFF, you read 6 more bytes with readDataToLength:timeout:tag:, and interpret them as the response to the request you sent earlier.
If it's some other value, you read that many bytes with readDataToLength:timeout:tag:, and process the server-initiated message.
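The dispatch logic above is language-agnostic; here is a rough Python model of the framing (helper names are mine), which may help when reasoning about partial reads:

```python
def parse_frames(buf):
    """Parse the framing described above from a byte buffer:
    a 0xFF marker prefixes a 6-byte reply to our own request;
    any other first byte N prefixes an N-byte server-initiated message.
    Returns (frames, leftover), where frames is a list of
    ("response" | "server", payload) tuples and leftover is any
    trailing incomplete frame to be retried when more data arrives."""
    frames = []
    i = 0
    while i < len(buf):
        marker = buf[i]
        length = 6 if marker == 0xFF else marker
        if i + 1 + length > len(buf):
            break  # incomplete frame; wait for more bytes
        kind = "response" if marker == 0xFF else "server"
        frames.append((kind, buf[i + 1 : i + 1 + length]))
        i += 1 + length
    return frames, buf[i:]
```

The same state machine maps directly onto the AsyncSocket callbacks: the 1-byte read decides the marker, and the follow-up read length is either 6 or the marker's value.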
I mentioned Amazon CDN and iOS devices because I am not sure which part is the culprit.
I host jpg and PDF files in Amazon CDN.
I have an iOS application that downloads a large number of jpg and PDF files in a queue. I have tried using dataWithContentsOfURL and ASIHttpRequest, but I get the same result. ASIHttpRequest at least gives a callback to indicate that there is a problem with the download, so I can force it to retry.
But this happens very often. Out of 100 files, usually 1-5 files have to be redownloaded. If I check the file size, it is smaller than the original file size and can't be opened.
The corrupted files are usually different every time.
I've tried this on different ISP and network. It's the same.
Is there a configuration that I missed in Amazon CDN, or is there something else I missed in the iOS download? Is it not recommended to queue a large number of files for download?
I wouldn't download more than 3 or 4 items at once on an iPhone. Regardless of implementation limitations (ASIHTTPRequest is decent anyway) or the potential for disk thrashing, you have to code for the 3G use case unless your app is explicitly marked (as in, with an Info.plist setting) that it requires Wi-Fi.
A solution exists within ASIHTTPRequest itself to make these operations sequential rather than concurrent. Add your ASIHTTPRequest objects to an ASINetworkQueue (the one returned by [ASINetworkQueue queue] will do fine). They will be executed one after the other.
Note that if you encounter any errors, all the requests on the queue will by default be cancelled unless you set the queue's shouldCancelAllRequestsOnFailure to NO.
EDIT: I just noticed you already mentioned using a download queue. I would therefore suspect it's more of an issue at the other end than on your device. Connections could be dropping for various reasons: the keep-alive setting is too short, resources are too low on the server so it's timing out, or some part of the physical link between the server and the Internet backbone is failing intermittently. To be sure, though, you would need to test your app on multiple devices and confirm it fails consistently across all of them before you could really say that.
You could possibly try reducing the number of concurrent downloads:
ASIHTTPRequest.sharedQueue.maxConcurrentOperationCount = 2;
This changes the default ASIHTTPRequest queue - if you're using your own queue set the value on that instead.
The default is 4, which is above the limit recommended by the HTTP 1.1 RFC when using persistent connections and when all the content is on the same server.
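Independent of the queue settings, it's worth verifying each transfer against Content-Length and retrying truncated downloads, since a too-small, unopenable file is exactly the symptom described in the question. A rough Python sketch of the idea (stdlib only; in the app this check would sit in the request's completion handler):

```python
import urllib.request

def transfer_complete(content_length, received):
    """A truncated transfer shows up as received < Content-Length.
    If the server sent no Content-Length, we can't verify."""
    return content_length is None or received == int(content_length)

def download_verified(url, dest, retries=3):
    """Download sequentially, verify the byte count, and retry
    truncated transfers. Returns True on a verified download."""
    for _ in range(retries):
        with urllib.request.urlopen(url) as resp:
            expected = resp.headers.get("Content-Length")
            data = resp.read()
        if transfer_complete(expected, len(data)):
            with open(dest, "wb") as f:
                f.write(data)
            return True
    return False
```

The same length check can be done in the ASIHTTPRequest completion callback by comparing the response's expected content length against the bytes actually received, retrying only the files that fail it.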