Python Requests POST timeout prematurely closed connection

I'm doing file uploads that send to an nginx reverse proxy. If I set the Python Requests timeout to 10 seconds and upload a large file, nginx reports "client prematurely closed connection" and forwards an empty body to the upstream server. If I remove the Requests timeout, the file uploads without any issues. As I understand it, the timeout should only apply if the client fails to receive or send any bytes, which I don't believe is the case since it's in the middle of uploading the file. It seems to behave more like a total time limit, cutting the connection after 10 seconds with no exception being raised by Requests. Is sending bytes treated differently from reading bytes for timeout purposes? I haven't set anything for stream or tried any kind of multipart upload. I would like to set a timeout but I'm confused as to why the connection is getting aborted early - thanks for any help.
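For reference, here is a minimal sketch of the kind of upload in question (the URL and file name are placeholders). Per the Requests documentation, timeout is not a time limit on the entire request; it can be given as a (connect, read) pair:

    import requests

    # timeout is (connect timeout, read timeout), not a cap on total
    # upload time. The read timeout governs how long the client waits
    # for the server to respond, not how long the upload may take.
    with open("large_file.bin", "rb") as f:  # placeholder file name
        resp = requests.post(
            "https://proxy.example.com/upload",  # placeholder URL
            data=f,
            timeout=(5, 10),  # 5 s to connect, 10 s read timeout
        )
        print(resp.status_code)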

Related

Request consistently returns 502 after 1 minute while it successfully finishes in the container (application)

To preface this, I know an HTTP request that takes longer than 1 minute is bad design, and I'm going to look into Cloud Tasks, but I do want to figure out why this issue is happening.
So, as specified in the title, I have a simple API request to a Cloud Run service (fully managed) that takes longer than 1 minute; it does some DB operations, generates PDFs, and uploads them to GCS. When I make this request from the client (browser), it consistently gives me back a 502 response after 1 minute of waiting (presumably coming from the HTTP Load Balancer).
However, when I look at the logs, the request completes successfully (in about 4 to 5 minutes).
I'm also getting one of these "errors" for each PDF that's being generated and uploaded to GCS, but from what I've read, these shouldn't really be the issue.
To verify that it's not just some timeout issue with the application code or the browser, I put a 5 min sleep on a random API call on a local build and everything worked fine and dandy.
I have set the request timeout on Cloud Run to the maximum (15 min), the max concurrency to the default 80, the CPU and RAM to 2 and 2 GB respectively, and the timeout on the Fastify (Node.js) server to 15 min as well. Furthermore, I went through the logs and couldn't spot an error indicating that the instance was out of memory, or any other error, around the time I'm receiving the 502. Finally, I also followed the advice to use strace for a more in-depth look at system calls, just in case something was going very wrong there, but from what I saw everything looked fine.
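For reference, limits like these can be set with the gcloud CLI along the following lines (a sketch only; the service name is a placeholder):

    # Hypothetical service name; --timeout is in seconds (900 s = 15 min).
    gcloud run services update my-service \
      --timeout=900 \
      --concurrency=80 \
      --cpu=2 \
      --memory=2Gi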
In the end, my suspicion is that there's some weird race condition in the routing between the container and the gateway/load balancer, but I know next to nothing about Knative (on which Cloud Run is built), so again, it's just a hunch.
If anyone has any more ideas on why this is happening, please let me know!

ActiveMQ Artemis Error - AMQ224088: Timeout (10 seconds) while handshaking with LB

My question is related to the question already posted here.
It's indicated in the original post that the timeout happens about once a month. In our setup we are receiving it once every 10 seconds, and our production logs are filled with these handshake exception messages. Would setting the timeout value for the handshake apply to our scenario as well?
Yes. Setting handshake-timeout=0 on the relevant acceptor URL in your broker.xml applies here even with the higher volume of timeouts.
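For example, a minimal sketch of the relevant acceptor in broker.xml (the acceptor name, host, and port are placeholders; only the handshake-timeout parameter matters here):

    <acceptors>
       <!-- handshake-timeout=0 disables the 10-second handshake check -->
       <acceptor name="artemis">tcp://0.0.0.0:61616?handshake-timeout=0</acceptor>
    </acceptors>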

With Delphi and Indy 10.6 how do I keep the TIdTCPServer from dropping a client on read timeout

I've noticed that, when my TIdTCPServer is set for a 30-second read timeout, it still drops that specific client without me telling it to, even though I handle the EIdReadTimeout in ServerExecute, raise it again there, and then handle it in ServerException and show the timeout message there. On the client side I handle the EIdReadTimeout and it doesn't drop the server connection.
I want the server side to do the same thing, so I can set the server timeout at 30 seconds and the client's at 45 seconds; that way the server can send a timeout message to the client for a retry rather than aborting the connection entirely.

HttpsURLConnection POST request of a large file throws IOException after several minutes

I've been working on this bug for several days now and couldn't solve it.
I wrote an HttpsURLConnection client to upload large files (>1 GB) using POST requests.
I also implemented the server side using com.sun.net.httpserver.HttpServer.
As the files are quite big, I have to use the setFixedLengthStreamingMode/setChunkedStreamingMode settings on my connection (the bug is reproduced using either).
Please notice I'm using an HTTPS connection for the upload as well.
I'm uploading the file to several servers simultaneously (a separate thread for each HTTP client, each connected to a different server).
I have set a limit on the concurrent uploads, so at any time only X threads have an open URLConnection (the bug was reproduced with X = 1..4).
(The other threads wait on a semaphore.)
My problem is this:
When uploads take less than 5 minutes (less than 4:50 minutes, to be accurate), everything works just fine.
If the first batch of threads takes more than 5 minutes to finish, then an exception is thrown for every active thread:
java.io.IOException: Error writing request body to server
at sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(Unknown Source)
at sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(Unknown Source)
The exception is thrown while trying to write to the HttpURLConnection output stream.
[outputStream.write(buffer,0,len);]
The next batch of threads will work just fine (even if they take more than 5 minutes).
Please notice that the servers are completely identical, and the second batch does not fail, which leads me to think that the problem is not on the server side.
(If it were, the second batch was supposed to fail after 5 minutes as well...)
I have reproduced this issue with/without connect/read timeouts on the connection.
Furthermore, on the server side I've seen that the file is created and keeps growing until the exception occurs.
About 20-40 seconds after the client throws the exception, the server throws an IOException "read timeout".
I collected a TCP/IP capture using Wireshark and saw that the server sends me a FIN packet at about the time of the client exception; I have no idea why.
(All connections seem to be functioning prior to that.)
I have read many threads on similar issues but couldn't find any proper solution
(including Using java.net.URLConnection to fire and handle HTTP requests).
Any ideas on why this is happening?
How can I find the cause of it?
How can I solve it?
Many Thanks.
P.S.
I didn't publish the code because it is pretty long...
but if it would help in understanding my problem, I will be glad to do so.

How to handle pending connections to a server that is designed to handle a limited number of connections at a time

I wonder what is the best approach to handle the following scenario:
I have a server that is designed to handle only 10 connections at a time, during which the server is busy interacting with those clients. However, while the server is busy, there may be new clients who want to connect (as part of the next 10 connections that the server is going to accept). The server should only accept the new connections after it finishes with all of the previous 10 clients.
Now, I would like to have an automatic way for the pending clients to wait and connect to the server once it becomes available (i.e. finished with the previous 10 clients).
So far, I can think of two approaches:
1. Have a file watch on the client side, so that the client watches for a file written by the server. When the server finishes with 10 clients, it writes the file, and the pending clients know it's time to connect.
2. Make the pending clients try to connect to the server every 5-10 seconds or so until success, with the server returning a message indicating whether it is ready.
Any other suggestion would be much welcome. Thanks.
Of the two options you provide, I am inclined toward the 2nd option of "pinging" the server. I think it is more complicated to have the server write a file to the client to trigger another attempt.
I would think that you should be able to have the client wait and simply send a READY signal. Keep a running queue of connection requests (from Socket.Connection.EndPoint, I believe). When one socket completes, accept the next socket off the queue.
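A minimal sketch of the polling approach in Python (the host, port, and READY banner are placeholders, not part of any real protocol):

    import socket
    import time

    HOST, PORT = "server.example.com", 5000  # placeholder endpoint

    def connect_when_ready(retry_delay=5.0):
        """Keep retrying until the server accepts and signals readiness."""
        while True:
            try:
                sock = socket.create_connection((HOST, PORT), timeout=10)
                banner = sock.recv(16)
                if banner.startswith(b"READY"):
                    return sock  # server has a free slot; keep this connection
                sock.close()  # server is busy; back off and try again
            except OSError:
                pass  # connection refused or timed out; retry after a delay
            time.sleep(retry_delay)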
