HTTP error code 502 returned frequently when calling the Google EmbeddedAssistant API via Protocol Buffers over HTTP - google-assistant-sdk

I'm trying to call the Google Assistant API using Protocol Buffers (protobuf) over HTTP, as described here:
https://googleapis.github.io/HowToRPC#grpc-fallback-experimental
My problem is that I frequently get HTTP error code 502 when sending requests to the back-end service.
To test this, I wrote a Python script that sends the pre-built protobuf binaries via HTTP POST and checks the response. The test results are:
32 KB audio data (about 1 second of audio): 20 posts, 0 returned 502, failure rate 0%
64 KB (2 * 32 KB): 20 posts, 0 failed, failure rate 0%
96 KB (3 * 32 KB): 20 posts, 3 failed, failure rate 15%
128 KB (4 * 32 KB): 20 posts, 10 failed, failure rate 50%
192 KB (6 * 32 KB): 20 posts, 19 failed, failure rate 95%
The HTTP status code is 200 for successful requests and 502 for failed ones.
As the numbers show, the larger the audio payload, the higher the failure rate.
The Python code that posts the pre-built protobuf binaries is below; the file fl contains just the protobuf binaries.
def postData():
    url = "https://embeddedassistant.googleapis.com/$rpc/google.assistant.embedded.v1alpha2.EmbeddedAssistant/Assist"
    headers = {"Content-Type": "application/x-protobuf",
               "Accept": "text/plain",
               "Connection": "keep-alive",
               "Authorization": repToken}
    with open(fl, 'rb') as f:  # open the protobuf payload in binary mode
        r = requests.post(url, data=f, headers=headers)
    print(r.status_code)
    with open(fl + "_out", 'wb') as fd:
        fd.write(r.content)
I also tried posting binary files containing invalid protobuf data, e.g. an MP3 file.
In that case, no matter the size of the file, the HTTP status code returned is always 400 with the following message, which is expected:
Invalid PROTO payload received. Invalid request data in stream body, unexpected message type: 7a
It seems the back-end service imposes some kind of limit on data-transfer latency, which makes it work poorly over low-bandwidth connections?


AVPlayer won't play audio files from FFMPEG

Before requesting audio data AVPlayer requests byte range 0-1 from FFMPEG.
FFMPEG gives a 200 response, but AVPlayer requires a 206 response.
This causes the request to fail, so audio can't be played.
Expected behavior:
Play tracks when streaming through ffmpeg
Current behavior: When trying to stream with ffmpeg we get "Operation Stopped"
Sample FFMPEG command:
ffmpeg -i "/path/to/audio/track.mp3" -vn -strict -2 -acodec pcm_u8 -f wav -listen 1 -seekable 1 http://localhost:8090/restream.wav
Player Log:
Error Domain=AVFoundationErrorDomain Code=-11850 "Operation Stopped" UserInfo={NSLocalizedFailureReason=The server is not correctly configured., NSLocalizedDescription=Operation Stopped, NSUnderlyingError=0x600003bcc4b0 {Error Domain=NSOSStatusErrorDomain Code=-12939 "(null)"}}
!av_interleaved_write_frame(): Broken pipe
!Connection to tcp://localhost:8090 failed: Connection refused
!Connection to tcp://localhost:8090 failed: Connection refused
!Connection to tcp://localhost:8090 failed: Connection refused
!Error writing trailer of http://localhost:8090/restream.wav: Broken pipe
This error is defined by Apple as:
"The HTTP server sending the media resource is not configured as expected. This might mean that the server does not support byte range requests."
And summarised nicely in this StackOverflow post:
When AVPlayerItem receives a video URL, it does the following:
1. Sends an HTTP request with the header Range: bytes=0-1.
2. If the response code is 206 and 1 byte of data is returned, it proceeds to step 3; otherwise an AVErrorServerIncorrectlyConfigured error occurs.
3. Continues sending HTTP requests to download segments covering the whole duration; each video-data response must also be 206.
In my situation, when the range [0-1] request is sent, the server replies 200 OK, so the error occurs.
Network Log:
GET /file.wav HTTP/1.1
Host: localhost:1234
X-Playback-Session-Id: F72F1139-6F4C-4A22-B334-407672045A86
Range: bytes=0-1
Accept: */*
User-Agent: AppleCoreMedia/1.0.0.18C61 (iPhone; U; CPU OS 14_3 like Mac OS X; en_us)
Accept-Language: en-us
Accept-Encoding: identity
Connection: keep-alive
HTTP/1.1 200 OK
Content-Type: application/octet-stream
Transfer-Encoding: chunked
Reproduce using this sample app:
This can also be reproduced with standard ffmpeg by pointing AVPlayer at a local or remote ffmpeg URL.
Can we solve this by making changes to FFMPEG or AVPlayer?
I asked this question on the ffmpeg mailing list, and it seems it's not possible:
If I understand your request correctly, you are not asking for a
change of a return type (you could probably do that yourself) but for
the implementation of byte range requests. I don't think this is
possible at all with FFmpeg.
Also it's not possible to get AVPlayer to ignore the byte range request:
it is an apple standard that media providers need to support http 1.1
with the range header (check out the iTunes store guidelines for
podcasts for example), so I wouldn't expect it anytime soon
SO Q'n: Is there a way to stop the avplayer sending a range http header field
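One workaround direction (a sketch, not from the thread; the file path and port are assumptions) is to have ffmpeg write its output to a file and put a minimal range-aware HTTP server in front of it, so AVPlayer's bytes=0-1 probe gets the 206 it expects:

```python
# Minimal sketch of a range-aware server: ffmpeg is assumed to write
# restream.wav to disk (instead of listening on HTTP itself), and this
# server answers AVPlayer's "Range: bytes=0-1" probe with 206.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

FILE = "restream.wav"  # hypothetical path written by ffmpeg

class RangeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        size = os.path.getsize(FILE)
        rng = self.headers.get("Range")
        with open(FILE, "rb") as f:
            if rng and rng.startswith("bytes="):
                # Parse "bytes=start-end"; an open end means "to EOF".
                start_s, _, end_s = rng[len("bytes="):].partition("-")
                start = int(start_s)
                end = int(end_s) if end_s else size - 1
                f.seek(start)
                body = f.read(end - start + 1)
                self.send_response(206)
                self.send_header("Content-Range", f"bytes {start}-{end}/{size}")
            else:
                body = f.read()
                self.send_response(200)
            self.send_header("Accept-Ranges", "bytes")
            self.send_header("Content-Type", "audio/wav")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

# Usage: HTTPServer(("", 8090), RangeHandler).serve_forever()
```

Note this serves a static snapshot of the file; serving a still-growing file would need more care, but it is enough to get past the 0-1 probe.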

DocumentDB returns "Request rate is large", Parse on Azure

I'm running Parse on Azure (Parse Server on managed Azure services).
It includes DocumentDB as the database, which has a limit on requests per second.
Some Parse Cloud functions are large and the request rate is too high (even for the S3 tier), so I'm getting this error (seen using Visual Studio Team Services (was Visual Studio Online) and streaming logs):
error: Uncaught internal server error. { [MongoError: Message: {"Errors":["Request rate is large"]}
ActivityId: a4f1e8eb-0000-0000-0000-000000000000, Request URI: rntbd://10.100.99.69:14000/apps/f8a35ed9-3dea-410f-a89a-28650ff41381/services/2d8e5320-89e6-4750-a06f-174c12013c69/partitions/53e8a085-9fed-4880-bd90-f6191765f625/replicas/131091039101528218s]
name: 'MongoError',
message: 'Message: {"Errors":["Request rate is large"]}\r\nActivityId: a4f1e8eb-0000-0000-0000-000000000000, Request URI: rntbd://10.100.99.69:14000/apps/f8a35ed9-3dea-410f-a89a-28650ff41381/services/2d8e5320-89e6-4750-a06f-174c12013c69/partitions/53e8a085-9fed-4880-bd90-f6191765f625/replicas/131091039101528218s' } MongoError: Message: {"Errors":["Request rate is large"]}
ActivityId: a4f1e8eb-0000-0000-0000-000000000000, Request URI: rntbd://10.100.99.69:14000/apps/f8a35ed9-3dea-410f-a89a-28650ff41381/services/2d8e5320-89e6-4750-a06f-174c12013c69/partitions/53e8a085-9fed-4880-bd90-f6191765f625/replicas/131091039101528218s
at D:\home\site\wwwroot\node_modules\mongodb-core\lib\cursor.js:673:34
at handleCallback (D:\home\site\wwwroot\node_modules\mongodb-core\lib\cursor.js:159:5)
at setCursorDeadAndNotified (D:\home\site\wwwroot\node_modules\mongodb-core\lib\cursor.js:501:3)
at nextFunction (D:\home\site\wwwroot\node_modules\mongodb-core\lib\cursor.js:672:14)
at D:\home\site\wwwroot\node_modules\mongodb-core\lib\cursor.js:585:7
at queryCallback (D:\home\site\wwwroot\node_modules\mongodb-core\lib\cursor.js:241:5)
at Callbacks.emit (D:\home\site\wwwroot\node_modules\mongodb-core\lib\topologies\server.js:119:3)
at null.messageHandler (D:\home\site\wwwroot\node_modules\mongodb-core\lib\topologies\server.js:397:23)
at TLSSocket.<anonymous> (D:\home\site\wwwroot\node_modules\mongodb-core\lib\connection\connection.js:302:22)
at emitOne (events.js:77:13)
How to handle this error?
TL;DR:
1. Upgrade the old S3 collection to a new single collection under the new pricing scheme. This can support up to 10K RU (up from 2500 RU).
2. Delete the old S3 collection and create a new partitioned collection. This requires support for partitioned collections in Parse.
3. Implement a backoff strategy in line with the x-ms-retry-after-ms response header.
Long answer:
Each request to DocumentDB returns an HTTP header with the request charge for that operation. The number of request units is configured per collection. As I understand it, you have 1 collection of size S3, so this collection can only handle 2500 request units per second.
DocumentDB scales by adding multiple collections. With the old configuration using S1 -> S3 you must do this manually, i.e. you must distribute your data over the collections using an algorithm such as consistent hashing, a map, or perhaps date. With the new pricing in DocumentDB you can use partitioned collections: by defining a partition key, DocumentDB will shard your data for you. If you see sustained rates of RequestRateTooLarge errors, I recommend scaling out the partitions. However, you will need to investigate whether Parse supports partitioned collections.
When you receive an HTTP 429 RequestRateTooLarge, there's also a header called x-ms-retry-after-ms: ###, where ### denotes the number of milliseconds to wait before you retry the operation. What you can do is implement a back-off strategy which retries the operation. Do note that if you have clients hanging on the server during retries, you may build up request queues and clog the server. I recommend adding a queue to handle such bursts. For short bursts of requests, this is a nice way to handle it without scaling up the collections.
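That back-off strategy can be sketched as follows (the operation and response shapes here are placeholders for illustration, not the Parse or DocumentDB client API):

```python
# Sketch of a backoff loop that honors "x-ms-retry-after-ms" on HTTP 429,
# as described above. `operation` is a placeholder callable returning a
# dict with "status" and "headers"; a real client would wrap its SDK call.
import time

def with_backoff(operation, max_retries=5):
    """Retry `operation` on 429, waiting the server-suggested delay."""
    for attempt in range(max_retries):
        resp = operation()
        if resp["status"] != 429:
            return resp
        # Fall back to exponential backoff if the header is missing.
        delay_ms = int(resp["headers"].get("x-ms-retry-after-ms",
                                           100 * 2 ** attempt))
        time.sleep(delay_ms / 1000.0)
    raise RuntimeError("request rate still too large after retries")
```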
I used mLab as an external MongoDB database and configured the Parse app in Azure to use it instead of DocumentDB.
I wasn't willing to pay that much for a "performance" increase.

Twitter stream API - Erlang client

I'm very new to the Erlang world and I'm trying to write a client for the Twitter Stream API. I'm using httpc:request to make a POST request and I constantly get a 401 error; I'm obviously doing something wrong with how I'm sending the request. What I have looks like this:
fetch_data() ->
    Method = post,
    URL = "https://stream.twitter.com/1.1/statuses/filter.json",
    %% httpc expects headers as a list of {Field, Value} tuples
    Headers = [{"Authorization", "OAuth oauth_consumer_key=\"XXX\", oauth_nonce=\"XXX\", oauth_signature=\"XXX%3D\", oauth_signature_method=\"HMAC-SHA1\", oauth_timestamp=\"XXX\", oauth_token=\"XXX-XXXXX\", oauth_version=\"1.0\""}],
    ContentType = "application/json",
    Body = "{\"track\":\"keyword\"}",
    HTTPOptions = [],
    Options = [],
    R = httpc:request(Method, {URL, Headers, ContentType, Body}, HTTPOptions, Options),
    R.
At this point I'm confident there's no issue with the signature, as the same signature works just fine when accessing the API with curl. I'm guessing there's some issue with how I'm making the request.
The response I'm getting with the request made the way demonstrated above is:
{ok,{{"HTTP/1.1",401,"Unauthorized"},
[{"cache-control","must-revalidate,no-cache,no-store"},
{"connection","close"},
{"www-authenticate","Basic realm=\"Firehose\""},
{"content-length","1243"},
{"content-type","text/html"}],
"<html>\n<head>\n<meta http-equiv=\"Content-Type\" content=\"text/html; charset=utf-8\"/>\n<title>Error 401 Unauthorized</title>\n</head>\n<body>\n<h2>HTTP ERROR: 401</h2>\n<p>Problem accessing '/1.1/statuses/filter.json'. Reason:\n<pre> Unauthorized</pre>\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n</body>\n</html>\n"}}
When trying with curl I'm using this:
curl --request 'POST' 'https://stream.twitter.com/1.1/statuses/filter.json' --data 'track=keyword' --header 'Authorization: OAuth oauth_consumer_key="XXX", oauth_nonce="XXX", oauth_signature="XXX%3D", oauth_signature_method="HMAC-SHA1", oauth_timestamp="XXX", oauth_token="XXX-XXXX", oauth_version="1.0"' --verbose
and I'm getting the events just fine.
Any help on this would be greatly appreciated, new with Erlang and I've been pulling my hair out on this one for quite a while.
There are several issues with your code:
In Erlang you are encoding parameters as a JSON body while with curl, you are encoding them as form data (application/x-www-form-urlencoded). Twitter API expects the latter. In fact, you get a 401 because the OAuth signature does not match, as you included the track=keyword parameter in the computation while Twitter's server computes it without the JSON body, as it should per OAuth RFC.
You are using httpc with default options. This will not work with the streaming API as the stream never ends. You need to process results as they arrive. For this, you need to pass {sync, false} option to httpc. See also stream and receiver options.
Eventually, while httpc can work initially to access the Twitter streaming API, it brings little value compared to the code you need to develop around it to stream from the Twitter API. Depending on your needs you might want to replace it with a simple client built directly on ssl, especially considering ssl can decode HTTP packets (what is left for you is the HTTP chunked encoding).
For example, if your keywords are rare, you might get a timeout from httpc. Besides, it might be easier to update the list of keywords or your code with no downtime without httpc.
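The first issue above (the body encoding changing the OAuth signature) can be illustrated with a sketch of the RFC 5849 signature base string (in Python, purely for illustration; the keys and parameter values are made up):

```python
# Sketch of OAuth 1.0a signature-base-string construction (RFC 5849), to
# show why body encoding matters: form-encoded body params such as
# track=keyword are folded into the signed string, a JSON body is not.
import base64, hashlib, hmac
from urllib.parse import quote

def signature_base_string(method, url, params):
    # oauth_* params plus any form-encoded body params, sorted and
    # percent-encoded per RFC 5849.
    norm = "&".join("%s=%s" % (quote(k, safe=""), quote(v, safe=""))
                    for k, v in sorted(params.items()))
    return "&".join(quote(s, safe="") for s in (method.upper(), url, norm))

def hmac_sha1_sign(base_string, consumer_secret, token_secret):
    key = "%s&%s" % (quote(consumer_secret, safe=""), quote(token_secret, safe=""))
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()
```

Signing with track=keyword included while the server computes the signature without it (because the body is JSON) yields exactly the mismatch that produces the 401.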
A streaming client directly based on ssl could be implemented as a gen_server (or a simple process, if you do not follow OTP principles) or even better a gen_fsm to implement reconnection strategies. You could proceed as follows:
Connect using ssl:connect/3,4 specifying that you want the socket to decode the HTTP packets with {packet, http_bin} and you want the socket to be configured in passive mode {active, false}.
Send the HTTP request packet (preferably as an iolist, with binaries) with ssl:send/2,3. It shall spread on several lines separated with CRLF (\r\n), with first the query line (GET /1.1/statuses/filter.json?... HTTP/1.1) and then the headers including the OAuth headers. Make sure you include Host: stream.twitter.com as well. End with an empty line.
Receive the HTTP response. You can implement this with a loop (since the socket is in passive mode), calling ssl:recv/2,3 until you get http_eoh (end of headers). Note down whether the server will send you data chunked or not by looking at the Transfer-Encoding response header.
Configure the socket in active mode with ssl:setopts/2 and specify you want packets as raw and data in binary format. In fact, if data is chunked, you could continue to use the socket in passive mode. You could also get data line by line or get data as strings. This is a matter of taste: raw is the safest bet, line by line requires that you check the buffer size to prevent truncation of a long JSON-encoded tweet.
Receive data from Twitter as messages sent to your process, either with receive (simple process) or in the handle_info handler (if you implemented this with a gen_server). If data is chunked, you shall first receive the chunk size, then the tweets, and finally the end of the chunk (cf. RFC 2616). Be prepared for tweets that spread over several chunks (i.e. maintain some kind of buffer). The best approach here is to do the minimum decoding in this process and send tweets to another process, possibly in binary format.
You should also handle errors and socket being closed by Twitter. Make sure you follow Twitter's guidelines for reconnection.
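The chunk framing described above (each chunk is "<hex size>\r\n<data>\r\n", terminated by a zero-size chunk) can be decoded as in this sketch (shown in Python rather than Erlang, purely for illustration):

```python
# Sketch of decoding HTTP chunked transfer encoding (RFC 2616): each chunk
# is "<hex size>\r\n<data>\r\n"; a zero-size chunk terminates the stream.
# Tweets may span chunk boundaries, so callers still buffer decoded bytes.
def decode_chunked(stream: bytes) -> bytes:
    body = b""
    pos = 0
    while True:
        eol = stream.index(b"\r\n", pos)
        # The size line may carry extensions after ';' -- ignore them.
        size = int(stream[pos:eol].split(b";")[0], 16)
        if size == 0:
            break  # zero-length chunk marks the end of the stream
        data_start = eol + 2
        body += stream[data_start:data_start + size]
        pos = data_start + size + 2  # skip trailing CRLF after the data
    return body
```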

Why is the JSON response NULL when I send a video file larger than 20MB via POST?

I am creating a POST request and using it to send a video to the server. On the server side, I decode the video and save it to a file directory. If the video sent is under 20MB, everything works as expected and I get a valid JSON response; otherwise my response dictionary is NULL or returns "The operation couldn’t be completed. (Cocoa error 3840.)"
$result = mysqli insert statement; // placeholder for the actual insert
$videoDirectory = 'userVideos/'.$unique_id.'.mp4';
$decodedVideo = base64_decode($video);
file_put_contents($videoDirectory, $decodedVideo);
if (!$result['error'])
{
    $e = "register into Str33trider successfully";
    print json_encode(array('results' => $videoCaption));
    exit();
}
I've even edited my apache config file
<IfModule mod_php5.c>
php_value post_max_size 200M
php_value upload_max_filesize 200M
php_value memory_limit 320M
php_value max_file_uploads 200M
php_value max_execution_time 30000
php_value max_input_time 259200
php_value session.gc_maxlifetime 1200
</IfModule>
When you receive a response for a POST request, first check the status code.
If the POST request succeeded:
If the status code equals 200 (OK) or 204 (No Content), the response body is likely empty or it describes the result of the operation. With either status code, the request hasn't created a resource which can be identified by a URI.
If the status code equals 201 (Created), the request created a resource on the server; the response body may describe the result of the operation, and the response should contain a Location header where the new resource can be found.
Usually, the web service API describes the details of the response body (if any), its content type and character encoding. Possibly there is more than one format that can be sent, e.g. JSON or XML.
If the POST request failed:
The server will send a corresponding status code and optionally a response body containing details about the error. Oftentimes, the server may send a response body in a content type which does not match the Accept header of the request.
Note:
A client should always also check the content type of the response body (if any) and decode it accordingly. In case of server errors, the content type may often be text/html instead of the content type specified in the Accept header, e.g. application/json.
So, if you log the complete error description for Cocoa error 3840, you will read that the given text is likely not JSON, since JSON must start with either a '[' or '{'. This indicates that you got an error message from the server which is not JSON. Decode the error message so that it is human-readable and log it to the console to see what the server is telling you.
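The recommended pattern (sketched in Python for illustration; the client in the question is iOS) is to check the status code and content type before attempting to parse JSON, and to surface the raw body otherwise:

```python
# Sketch: inspect status code and Content-Type before parsing JSON, and
# log the server's raw message instead of a bare JSON parse error.
import json

def parse_response(status, headers, body_bytes):
    content_type = headers.get("Content-Type", "")
    text = body_bytes.decode("utf-8", errors="replace")
    if 200 <= status < 300 and content_type.startswith("application/json"):
        return json.loads(text)
    # Not JSON: surface the server's actual message (often text/html).
    raise ValueError(f"HTTP {status} ({content_type or 'no content type'}): {text}")
```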

Streaming Results from Mochiweb

I have written a web-service using Erlang and Mochiweb. The web service returns a lot of results and takes some time to finish the computation.
I'd like to return results as soon as the program finds them, instead of waiting until it has found them all.
edit:
I found that I can use chunked transfer to stream results, but it seems I can't find a way to close the connection. So any idea on how to close a MochiWeb request?
To stream data of yet unknown size with HTTP 1.1 you can use HTTP chunked transfer encoding. In this encoding, each chunk of data is prepended by its size in hexadecimal. The last chunk is a zero-length chunk: its chunk size is coded as 0 and it carries no data.
If the client doesn't support HTTP 1.1, the server can send data as binary chunks and close the connection at the end of the stream.
In MochiWeb this all works as follows:
The HTTP response should be started with Response = Request:respond({Code, ResponseHeaders, chunked}). (By the way, look at the code comments.)
Then chunks can be sent to the client with Response:write_chunk(Data). To indicate the end of the stream to the client, a chunk of zero length should be sent: Response:write_chunk(<<>>).
When handling of the current request is over, MochiWeb decides whether the connection should be closed or can be reused as an HTTP persistent connection.
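At the wire level, what write_chunk produces looks like this (sketched in Python purely to show the framing, not MochiWeb's implementation):

```python
# Sketch of HTTP chunked-transfer framing: each chunk is
# "<hex size>\r\n<data>\r\n", and a zero-length chunk ends the stream,
# mirroring Response:write_chunk(Data) and Response:write_chunk(<<>>).
def encode_chunk(data: bytes) -> bytes:
    return b"%x\r\n%s\r\n" % (len(data), data)

def encode_stream(chunks) -> bytes:
    return b"".join(encode_chunk(c) for c in chunks) + encode_chunk(b"")
```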
