I am using https://gatling.io for load testing an application. I really appreciate the tool's default reporting. After searching the documentation, it's not clear to me how particular requests get classified as "KO" (i.e., not OK).
We are currently using all the default settings from Gatling.
From inspecting gatling.conf, we suspect that requests need to respond within 10 seconds.
Is this assumption correct?
Anything that fails a check() will be marked KO.
Even if you haven't added a check yourself, Gatling checks for an HTTP 2xx or 3xx response by default:
https://gatling.io/docs/2.3/http/http_check/
If you're seeing KOs and haven't added any checks, it's likely you're getting some 4xx or 5xx responses.
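For illustration, here is a minimal sketch of adding such a check explicitly with the Gatling 2.x Scala DSL; the base URL, request name, and path are placeholders, and the 200-399 range simply mirrors the "2xx or 3xx" rule described above:

import io.gatling.core.Predef._
import io.gatling.http.Predef._

class ExplicitStatusCheckSimulation extends Simulation {

  // Placeholder base URL; point this at the application under test.
  val httpProtocol = http.baseURL("http://localhost:8080")

  val scn = scenario("explicit status check")
    .exec(
      http("home")   // placeholder request name
        .get("/")    // placeholder path
        // Treat any 2xx/3xx status as OK and anything else as KO.
        .check(status.in(200 to 399))
    )

  setUp(scn.inject(atOnceUsers(1)).protocols(httpProtocol))
}

Any request whose check fails shows up as KO in the report, so an explicit check like this also lets you tighten or relax the rule per request (for example, accepting an expected 404).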
This question is perhaps more a tip for people searching for a solution to the same problem (as I eventually found the solution myself).
I had an application that makes some HTTP requests to a local server (a mix of GET/POST with JSON content in the request/response bodies). The server is a third-party application, and after I upgraded it to a recent version, my Delphi app no longer worked.
It turned out that it was now hanging on the statement:
IdHTTP.Post(URL, Payload, BytesStreamResult); // placeholder names for the actual URL, request body, and response stream
Since a manual Postman request still worked, the problem had to be on the Delphi client side.
Further isolating the issue showed that the HTTP POST request did get an HTTP 200 response with valid HTTP response headers, but then was getting stuck reading the response body. It was hanging on:
IOHandler.ReadLn
When I compared the headers with the Postman response, I noticed that 'Transfer-Encoding: chunked' was missing from the Delphi response.
Finally, I noticed the code related to TIdHTTP's hoKeepOrigProtocol option, which is not set by default.
So, my POST request was "downgraded" to an HTTP 1.0 request, and I guess this made the (updated) server respond differently (I'm not an RFC expert, but I suspect 'chunked' may be an HTTP 1.1-only option).
After setting this option, everything worked like before (and indeed, the response was now read as "chunked" in Delphi).
Summary:
Shouldn't hoKeepOrigProtocol be the default option? (Why punish good citizens for those who are not?)
Can we intercept this? My POST now assumes a streamed response up front, and so it hangs because the server never writes anything to the buffer.
What would that high-level code look like? It seems to be a mix of interpreting the response headers and then deciding whether more of the response needs to be read.
(I didn't do anything specific regarding timeouts, either. I have the impression it hangs forever, or at least longer than 10 minutes...)
TIdHTTP supports non-chunked responses just fine (and yes, chunked transfer encoding is an HTTP 1.1-only feature), so the hang would have to be caused by the server sending a malformed response (a bug that should be reported to the server's author).
When reading a non-chunked, non-MIME response, TIdHTTP does not use IOHandler.ReadLn to read the response's body, as you claim; it uses ReadLn only when reading the response's headers.
But, since you did not show what the response actually looks like, nobody can explain for sure exactly why the hang occurs.
Shouldn't hoKeepOrigProtocol be the default option?
At the time the option was first introduced, no. There were enough buggy HTTP 1.1 servers around that downgrading to HTTP 1.0 was warranted.
However, that was many years ago. Nowadays, HTTP 1.1 is much more mature, and such buggy servers are rare. So, feel free to submit a change/pull request to Indy's GitHub repo if you feel the default behavior should be changed.
Can we intercept this?
No. The behavior you describe is most likely caused by a bug in the HTTP server. Either it is not sending all of the data it should be, or else the response is likely malformed in a way that makes TIdHTTP expect more data than is actually being sent. Either way, all you can do is assign a non-infinite timeout to TIdHTTP.
it didn't do anything specific regarding time-outs, either. I have the impression it hangs forever, or at least > 10 minutes.
Indy is designed to use infinite timeouts by default. You can assign custom timeouts to TIdHTTP's ConnectTimeout and ReadTimeout properties.
Setting this prevents the HTTP protocol downgrade:
IdHTTP.HTTPOptions := IdHTTP.HTTPOptions + [hoKeepOrigProtocol];
This is, of course, dependent upon how the server handles the protocol version, and whether or not that causes issues.
I make an HTTP request and the response is HTML, but the response Gatling receives is incomplete. What should I do?
I think the part I need is in the resources Gatling fetches; it is under the 'table' tag.
The server may not be returning the complete response due to an error or a problem with the server-side code. In this case, you should check the server logs to see if there are any errors, and you should also check the HTTP response headers to see if there are any indications of what went wrong.
The HTTP request may be failing or being blocked by a firewall or other network security device. In this case, you should check the network logs to see if the request is being sent and received successfully, and you should also check any network security settings to ensure that the request is not being blocked.
The HTML response may not be well-formed or may be missing some elements, such as the 'table' element you mentioned. In this case, you should validate the HTML using a tool such as the W3C HTML Validator, and you should also check the HTML source to ensure that all required elements are present.
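If you want Gatling itself to flag responses where that markup never arrives, one option is an explicit body check, sketched below with the Gatling 2.x Scala DSL; the base URL, path, and request name are placeholders, and it assumes the table is part of the main HTML document rather than loaded later by JavaScript (which Gatling, not being a browser, would not execute):

import io.gatling.core.Predef._
import io.gatling.http.Predef._

class TableCheckSimulation extends Simulation {

  // Placeholder base URL; point this at the server returning the HTML page.
  val httpProtocol = http.baseURL("http://localhost:8080")

  val scn = scenario("page contains table")
    .exec(
      http("page with table")   // placeholder request name
        .get("/page")           // placeholder path
        // Mark the request KO if the table markup is missing from the response body.
        .check(substring("<table").exists)
        // Keep the full body in the session for inspection while debugging.
        .check(bodyString.saveAs("responseBody"))
    )

  setUp(scn.inject(atOnceUsers(1)).protocols(httpProtocol))
}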
User issue, as concluded on the Gatling community forum.
Setup
I have two different applications, both written in Go. The first is the server, and the second is a smaller app that makes calls to the server. They use the http package for making calls and the router package for setting up endpoints.
The Problem
When the device makes a specific call to the server, a 408 (StatusRequestTimeout) response is returned. This response is not due to our server actually timing out; it is just used to describe the error (more on this below). The first time the device makes this call, it receives the 408 and proceeds normally. However, if the same call is made again, a new 'third' call is sent to the server immediately after the second call finishes. This third call is identical to the first two. There is no HTTP retry logic enabled for this call.
The bug
Why is this third call being issued? When the code is updated to return a 400 status instead of a 408, the third call is no longer made. Additionally, changing other calls to return a 408 instead of a 400 makes them exhibit the same behavior of sending an extra third call. I have been unable to find documentation explaining this behavior, or other articles describing it.
Hunch
I have found many articles like this one which indicate browsers will sometimes retry requests. Additionally, some other Stack Overflow posts like this indicate that the http package doesn't retry requests without setting up your own retry logic. Again, we have set this up, but it is not enabled for this particular call, and debugging shows that we never enter our custom retry logic.
I believe this is a Chromium feature. I've tried to replicate it with Firefox without success; however, Edge exhibits the same behavior. Chrome's dev tools (and Edge's), however, only show two network calls: the first and the third. I think it could also be the http library, but it is very strange that the behavior differs between browsers.
Bug Fix
Given the nature of what a 408 response is supposed to entail, I have decided to move away from using it for custom error responses. At this point, I'm just curious why the behavior is what it is, whether my hunch is correct, or whether something else is at play.
Let's start with the is408Message() method, which is here. It checks whether the buffer carries a 408 Request Timeout status code. This method is used by another method to inspect the response from the server; in the case of a 408 Request Timeout, the persistConn is closed with an errServerClosedIdle error. The error is assigned to the persistConn.closed field.
In the main loop of the http Transport, there is a call to persistConn.roundTrip here, which returns as its error the value stored in the persistConn.closed field. A few lines below you can find a method called pconn.shouldRetryRequest, which takes the error returned by persistConn.roundTrip as an argument and returns true when that error is errServerClosedIdle. Since the whole operation is wrapped in a for loop, the request will be sent again.
It could be valuable for you to analyze the shouldRetryRequest method, because there are multiple conditions that must be met for the request to be retried. For example, the request will not be repeated when the connection was being used for the first time (i.e., it had not been reused from the idle pool).
Using the JMeter GUI, I recorded a test scenario (placing an order), and the recording ran successfully. But when I replay the test script, it doesn't behave as recorded: it does not place an order.
After querying the developers, I found that for each item selected, the server generates a CSRF token and puts it in the URL path (like /cart/add/type/product_id/7245985/_csrf_token/b46c0aec2e5891808ec42141b1956943204ae8f8) when the item is added to the shopping cart. This is all recorded in the script, and this path with the token is what adds the item to the cart.
My question is: how do I handle this dynamic token when it is embedded in the URL path?
Any help is appreciated.
If you have not already added a View Results Tree listener to your Test Plan, add it now. You can use it to view the details of requests and responses. JMeter considers a request successful if it gets "some" response from the server; it does not matter whether the response is functionally valid or not. So, to make sure that JMeter is sending valid parameters and receiving the expected responses, you will have to check the details of requests and responses in the View Results Tree listener.
You can also add Response Assertions to requests so that JMeter itself verifies it is getting the expected responses.
Important Tips:
Use the View Results Tree listener for debugging only. In a real load test, keep it disabled, as it consumes a lot of memory.
Do not use Response Assertions excessively, as they consume a lot of memory as well.
JMeter is not a browser-based tool; it deals only with back-end requests, so it is expected to be very fast. There is nothing wrong with that, and you should remove unnecessary timers rather than trying to slow it down.
If your requests involve some kind of login authorization, then have a look at this question for further details: Load testing using jmeter with basic authentication
Recording doesn't guarantee a working script; it only gives you a "skeleton", and usually you need to perform some correlation (the process of extracting a mandatory dynamic parameter from a previous response and adding it to the next request).
Reference material:
Building a Web Test Plan
Building an Advanced Web Test Plan
How to use JMeter for Login Authentication?
How to make JMeter behave more like a real browser
I have a web application which I need to load test using LoadRunner. When I record the website using VuGen it works fine and there is no application bug. But when I try to replay the script, it fails after login, while navigating to the next page, say, Transaction. At the end of the log, I receive this error:
Action.c(252): Error -26612: HTTP Status-Code=500 (Internal Server Error)
for "http://rob.com/common/transaction
Please help me to resolve this error.
LoadRunner generates HTTP requests just as your browser does; this error is the same one you would get if you went to that URL in your browser. Error code 500 is a generic server error, returned when there is no better (more specific) error to return.
Most likely the login process requires some form of authentication which is protected against replay attacks by some form of token. It is up to you to capture this token using correlations in LoadRunner and replay it as the server expects. The Correlation Studio in VuGen should detect and identify the token for you, but since authentication methods vary, it is sometimes impossible to do this automatically and you will have to create a manual correlation. Please consult the product documentation for more details on how to do this. If your website is publicly available online, then post its URL and I will try to record the script on my machine.
Thanks,
Boris.
Most common reasons
You are not checking each request for a valid result, treating an HTTP 200 status as an assumed success without examining the content that is returned. As a result, when the returned data is incorrect, your code does not branch to handle the exception. Go one or two steps beyond the point where your business process came off the rails under that assumed success, and you will get a 500 status for an out-of-context action 100% of the time.
Missed dynamic element. Record three times. Compare the code. Address the changing components.