Why are Connect Timeout and Response Timeout not working in JMeter? - load-testing

I have a very simple thread group that simulates 100 users hitting www.google.com. I notice that Connect Timeout and Response Timeout are not working as expected.
In the HTTP Request, I've specified Connect Timeout to be 5ms and Response Timeout 7ms.
But in the results, I see requests failing that shouldn't have failed. In this case, Connect Time is 3 ms and Response Time = Load - Connect = 9 - 3 = 6 ms.
Can some kind soul please show me what is going on here? Thanks a bunch :)

The JMeter result is correct here. The sample should fail because you set the expected connect time to 5 ms and the response time to 7 ms, and the load time of the request is 9 ms, which is greater than the values you provided.
To put it simply: JMeter's default timeout is 21 seconds. If you don't provide a timeout value, a request fails automatically once its load time exceeds 21 seconds.
In your case, since you already provided expected timeout values and all of the requests are taking longer than those values, they fail, which is normal.
Try increasing the connect and response timeouts and run the test again.
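To see why the two timeouts measure different phases, here is a minimal Python stdlib sketch (an analogy, not JMeter itself): the connect timeout bounds only the TCP handshake, while the response timeout bounds the wait for data afterwards. The port and delay values are made up for the demo.

```python
import socket
import threading
import time

# A tiny local server that accepts immediately but replies late.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # OS picks a free port
port = srv.getsockname()[1]
srv.listen(1)

def reply_late(delay=0.2):
    conn, _ = srv.accept()
    time.sleep(delay)          # simulate a slow backend response
    conn.sendall(b"hello")
    conn.close()

threading.Thread(target=reply_late, daemon=True).start()

# "Connect Timeout": bounds only the TCP handshake -- succeeds here.
sock = socket.create_connection(("127.0.0.1", port), timeout=1.0)

# "Response Timeout": bounds the wait for response bytes.
sock.settimeout(0.05)          # shorter than the server's 0.2 s delay
try:
    sock.recv(1024)
    timed_out = False
except socket.timeout:
    timed_out = True           # the sample fails, as in JMeter
sock.close()
```

Here the connection itself is fast, so the connect timeout never fires; only the read times out, which mirrors the asker's numbers (Connect Time 3 ms vs. the larger response wait).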

Related

Python Requests POST timeout prematurely closed connection

I'm doing file uploads that are sent to an nginx reverse proxy. If I set the Python requests timeout to 10 seconds and upload a large file, nginx reports "client prematurely closed connection" and forwards an empty body to the server. If I remove the requests timeout, the file uploads without any issues. As I understand it, the timeout should only apply if the client fails to receive or send any bytes, which I don't believe is the case since it's in the middle of uploading the file. It seems to behave more like a time limit, cutting the connection after 10 seconds with no exception raised by requests. Is sending bytes treated differently than reading bytes for timeout purposes? I haven't set anything for stream or tried any kind of multipart upload. I would like to set a timeout, but I'm confused as to why the connection is being aborted early. Thanks for any help.
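For reference, `requests` accepts the timeout either as a single number (applied to both phases) or as a `(connect, read)` tuple. A sketch of a tuple-based configuration for slow uploads follows; the URL and file path are hypothetical placeholders, and the exact interaction between a blocking `send()` of a large body and the read timeout depends on urllib3's socket handling, so treat the comments as a plausible explanation rather than a definitive one.

```python
import requests

# Hypothetical endpoint -- substitute your nginx reverse-proxy URL.
UPLOAD_URL = "https://example.com/upload"

# (connect, read): the connect timeout bounds the TCP handshake; the
# read timeout bounds individual socket operations. Because the timeout
# is set on the underlying socket, a long blocking send() of a large
# request body can also trip it -- it is not a total-duration cap, but
# it is also not purely a "server went silent" timer.
TIMEOUT = (3.05, 300.0)  # generous read timeout for large uploads

def upload(path):
    with open(path, "rb") as f:
        # Streaming the file object keeps memory flat for big files.
        return requests.post(UPLOAD_URL, data=f, timeout=TIMEOUT)
```

Raising the second element of the tuple (rather than removing the timeout entirely) keeps protection against a truly unresponsive server while allowing slow uploads to finish.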

Request consistently returns 502 after 1 minute while it successfully finishes in the container (application)

To preface this, I know an HTTP request that's longer than 1 minute is bad design and I'm going to look into Cloud Tasks but I do want to figure out why this issue is happening.
So, as the title says, I have a simple API request to a Cloud Run service (fully managed) that takes longer than 1 minute: it does some DB operations, generates PDFs, and uploads them to GCS. When I make this request from the client (browser), it consistently gives me back a 502 response after 1 minute of waiting (presumably coming from the HTTP load balancer).
However, when I look at the logs, the request completes successfully (in about 4 to 5 minutes).
I'm also getting one of these "errors" for each PDF that's being generated and uploaded to GCS, but from what I've read, these shouldn't really be the issue.
To verify that it's not just some timeout issue with the application code or the browser, I put a 5 min sleep on a random API call on a local build and everything worked fine and dandy.
I have set the request timeout on Cloud Run to the maximum (15min), the max concurrency to the default 80, amount of CPU and RAM to 2 and 2GB respectively and the timeout on the Fastify (node.js) server to 15 min as well. Furthermore I went through the logs and couldn't spot an error indicating that the instance was out of memory or any other error around the time that I'm receiving the 502 error. Finally, I also followed the advice to use strace to have a more in depth look at system calls, just in case something's going very wrong there but from what I saw, everything looked fine.
In the end my suspicion is that there's some weird race condition in routing between the container and gateway/load balancer but I know next to nothing about Knative (on which Cloud Run is built) so again, it's just a hunch.
If anyone has any more ideas on why this is happening, please let me know!

In what cases does Google Cloud Run respond with "The request failed because the HTTP connection to the instance had an error."?

We've been running Google Cloud Run for a little over a month now and noticed that we periodically have cloud run instances that simply fail with:
The request failed because the HTTP connection to the instance had an error.
This message is nearly always* preceded by the following message (these are the only messages in the log):
This request caused a new container instance to be started and may thus take longer and use more CPU than a typical request.
* I cannot find, nor recall, a case where that isn't true, but I have not done an exhaustive search.
A few things that may be of importance:
Our concurrency level is set to 1 because our requests can take up to the maximum amount of memory available, 2GB.
We have received errors that we've exceeded the maximum memory, but we've dialed back our usage to obviate that issue.
This message appears to occur shortly after 30 seconds (e.g., 32, 35) and our timeout is set to 75 seconds.
In my case, this error was always thrown 120 seconds after receiving the request. I figured out that the issue was Node 12's default request timeout of 120 seconds. So if you are running a Node server, you can either change the default timeout or upgrade to Node 13, since the default timeout was removed there: https://github.com/nodejs/node/pull/27558.
If your logs didn't catch anything useful, most probably the instance is crashing because you are running heavy CPU tasks. A mention of this can be found on the Google Issue Tracker:
A common cause for 503 errors on Cloud Run would be when requests use
a lot of CPU and as the container is out of resources it is unable to
process some requests
For me, the issue was resolved by upgrading Node in the Dockerfile, from "FROM node:13.10.1 AS build" to "FROM node:14.10.1 AS build".

What is the default client-side timeout for WL.Client.invokeProcedure in IBM Worklight?

When we invoke Worklight's WL.Client.invokeProcedure, the second parameter can contain a timeout value. The documentation says:
timeout: Integer. Number of milliseconds to wait for the server response before failing with a request timeout.
However, it doesn't say what the default timeout is. From observation, it appears that this may be 15s. Can anyone confirm?
If memory serves me right, the default WL.Client.invokeProcedure timeout is at 30 seconds.
I don't know how you're testing it, but 15 seconds might just be the amount of time it takes in your setup for the call to fail (there could be a failed response from the backend at 15 seconds).
Odd that this is not documented, though. I've opened a documentation defect for this.

Icefaces: Network Connection Timeouts in IE only

My application has a long running request that takes over a minute. If I'm using Chrome or Firefox I just need to be patient. If I use IE however, at the one minute mark I get the popup that says I've reached a Network Connection Timeout.
Why is that?
The default Internet Explorer timeout is 1 minute. Since your process is a long-running one, IceFaces doesn't send the response in time, and the request times out.
You can avoid this by spawning a new thread for your long running process and returning the response immediately. IceFaces has plenty of polling or push options available to you to let your client know when the long-running process is done.
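The pattern described above (return immediately, do the work in the background, let the client poll for completion) is language-agnostic. Here is a minimal Python sketch of the idea, purely as an illustration of the structure an IceFaces/Java implementation would follow; the function names and the in-memory job table are invented for the example.

```python
import threading
import time
import uuid

# In-memory job registry: job_id -> "running" | "done".
# A real application would use a persistent or shared store.
jobs = {}

def long_task(job_id):
    time.sleep(0.1)  # stand-in for the minute-long request
    jobs[job_id] = "done"

def start_job():
    # Respond immediately with a job id; the browser polls afterwards,
    # so no single HTTP request ever exceeds the 1-minute IE timeout.
    job_id = str(uuid.uuid4())
    jobs[job_id] = "running"
    threading.Thread(target=long_task, args=(job_id,), daemon=True).start()
    return job_id

def check_job(job_id):
    # The polling (or push-notified) endpoint the client calls.
    return jobs[job_id]
```

Each poll request completes in milliseconds, so no browser timeout is ever hit regardless of how long the underlying task runs.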