I have been using yandex.tank for a few days to perform load tests.
I have set up the URL list in different ways, but I cannot achieve my goal.
I want to simulate a real visit (like a web browser):
request
html response
request of objects embedded in the code
I can create a grouped list of the objects embedded in the code, but the results are reported for each request individually. For example:
My "home" tag in "Cumulative Cases Info" shows me:
4554 28.21% / avg 171.2 ms
171.2 ms is the average time for each of the objects. I want the average time for the full request (HTML plus embedded objects).
Is it possible to perform a load test by making requests like those indicated with yandex.tank? Or with another load testing tool?
Yandex-tank (actually Phantom, its default load generator) doesn't parse responses and therefore knows nothing about embedded resources. You'd better try JMeter as a load generator, since its HTTP Request sampler has an option to retrieve embedded resources - http://jmeter.apache.org/usermanual/component_reference.html#HTTP_Request
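JMeter does this parsing for you when the option is enabled, but to illustrate what "retrieving embedded resources" actually involves, here is a minimal Python sketch that extracts resource URLs from an HTML response. The tag/attribute coverage is deliberately incomplete (no CSS `url()` references, no `<iframe>`, etc.); a real browser or JMeter handles many more cases.

```python
from html.parser import HTMLParser

class ResourceExtractor(HTMLParser):
    """Collects URLs of embedded resources (images, scripts, stylesheets)."""
    def __init__(self):
        super().__init__()
        self.resources = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("img", "script") and "src" in attrs:
            self.resources.append(attrs["src"])
        elif tag == "link" and attrs.get("rel") == "stylesheet" and "href" in attrs:
            self.resources.append(attrs["href"])

html = """
<html><head><link rel="stylesheet" href="/style.css"></head>
<body><img src="/logo.png"><script src="/app.js"></script></body></html>
"""
parser = ResourceExtractor()
parser.feed(html)
print(parser.resources)  # ['/style.css', '/logo.png', '/app.js']
```

A tool that simulates a full page visit would fetch each of these URLs after the main request and report the combined time, which is exactly the aggregation the question is asking for.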
I'm making a new iOS (Swift) app to test some concepts, and I'm using the GitHub Search API to retrieve a list of filtered repositories.
The request is working fine so far, but I'm having trouble understanding the pagination process and how to know when I have reached the end of the results.
From what I have seen, the Search API returns a maximum of 1,000 results, broken into pages of at most 100 results each. But the total-count field in the returned JSON shows far more available results (I imagine it shows the total number of repositories that satisfy the query, not the maximum the API will return).
The only information I have found so far about the pages (and the pagination process) in the GitHub documentation comes in the response headers, like:
Status: 200 OK
Link: <https://api.github.com/resource?page=2>; rel="next",
<https://api.github.com/resource?page=5>; rel="last"
X-RateLimit-Limit: 20
X-RateLimit-Remaining: 19
Can anyone suggest the best approach to detect the end of the pages in this case?
Should I try to parse the information from the header, or infer it somehow from the returned JSON? I even got the "Link" header value, but I don't know how to parse it.
I am using JMeter and BlazeMeter to do some load testing. After I record a test case, say Login, I have 5 API calls recorded as part of that test case.
On generating the report, my report looks untidy and displays all the API calls.
I tried using a Simple Controller, but that did not help.
Is there a way I can display Login as the test case in the JMeter HTML report, and see the API calls on expanding that section?
This is how my report looks now.
Any help would be appreciated.
Thanks!!
[Image: current report statistics section]
Transaction Controller is what you're looking for. It can operate in 2 modes:
Default: you will have 5 individual child samplers plus a Transaction Controller result containing the sum of all nested samplers' response times
Generate parent sample: you will have only the "cumulative" time instead of the 5 individual samplers
See Using JMeter's Transaction Controller article for more details.
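As a rough illustration of how this looks in the saved test plan, the fragment below sketches a Transaction Controller named Login in JMX form (the recorded HTTP samplers would sit inside its `hashTree`). Property names are to the best of my recollection; verify against a .jmx file saved from your own JMeter version.

```xml
<TransactionController guiclass="TransactionControllerGui"
                       testclass="TransactionController"
                       testname="Login" enabled="true">
  <!-- "Generate parent sample": report one cumulative Login entry -->
  <boolProp name="TransactionController.parent">true</boolProp>
  <!-- whether timer durations are included in the generated sample -->
  <boolProp name="TransactionController.includeTimers">false</boolProp>
</TransactionController>
```

With `TransactionController.parent` set to false you get the default mode instead: the 5 child samplers plus the summed transaction time.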
I want to save the final data from the console output to a file, without the intermediate output.
How can I do that?
The report module exports all the info into the HTML report in JSON format. You can get some info from there (cumulative percentiles, for example). You don't even have to modify the Python code in that case; just add some JS to the page that generates a table.
On the other hand, if you want something more than the info included there, you should implement it in the report module.
What particular pieces of last screen data are you interested in?
P.S. By the way, you can create a couple of templates and then provide the template parameter in the report section of load.ini to specify which one you want to use.
This screen is a good report only for "const" benchmarking. For "line" and "step" ramping, the last screen always shows the worst timings and resource usage. But we are thinking about this feature request.
100 users search for a term called "maths" through Google search; now I want to put this under load test through JMeter.
I want to know whether I have to create a parameter for this type of load test.
You can achieve it simply by creating a test plan as below:
Add an HTTP sampler with a CSV Data Set Config (an element that allows you to read different parameters from a text file for parameterization) and map the text file data to a PARAMETER_VARIABLE (read this BlazeMeter blog for a better understanding).
Use https://www.google.com/search?q=${PARAMETER_VARIABLE} as the URL in the HTTP sampler.
Set the number of threads to 100.
NOTE: You are not supposed to load test Google search; it may be illegal. Also, try to find the answer by googling before posting a question.
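To make the substitution concrete, here is a small Python sketch of what JMeter effectively does per thread: read one value from the CSV file and splice it into the URL in place of ${PARAMETER_VARIABLE}, URL-encoding it along the way. The search terms are made up for illustration.

```python
from urllib.parse import quote_plus

# Hypothetical search terms, one per line, as they would appear in the
# text file read by JMeter's CSV Data Set Config.
csv_lines = ["maths", "load testing", "jmeter tutorial"]

# Each JMeter thread substitutes one value into ${PARAMETER_VARIABLE};
# the equivalent URL construction looks like this:
urls = [f"https://www.google.com/search?q={quote_plus(term)}" for term in csv_lines]
for url in urls:
    print(url)
# https://www.google.com/search?q=maths
# https://www.google.com/search?q=load+testing
# https://www.google.com/search?q=jmeter+tutorial
```

Note that JMeter's HTTP sampler also URL-encodes parameter values for you when the "URL Encode" box is checked for the parameter.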
I am trying to implement a sort of mail merge for printed documents in Ruby on Rails 3.2. I have about 8,000 recipients, and the original template is in Microsoft Word. The template includes images (photos) and contains about 10-20 pages.
The actual situation is that I have rewritten the original template in Textile (RedCloth), and the pictures are inserted from the internet (HTTP addresses). I have done all the personalization, etc. So I generate an HTML file and must divide it into many small files of 1,000 pages each. In total I need to print about 8,000 x 20 pages = 160,000 pages.
Does anyone know how to print it to PDF from HTML? Or how to insert commands for changing the paper tray (for the first and last pages), or for binding after every 20 pages, etc.?
Thank you for any idea :-)
Here's one idea: in your Rails app, set it up to return one HTML document per user. Also, have a nice /users/ index method that returns a list of users in something convenient, maybe JSON format.
Now, you want a local script, written in Ruby, Bash, or whatever is convenient, to:
fetch a list of users from that /users/ method, probably saving it to a file
loop over the list of users (reading from the file, so they're not all in memory) and fetch the HTML of each document
generate a PDF from each downloaded HTML file, either inside the loop or by looping over files in a directory where you saved the HTML. Use wkhtmltopdf or similar.
send each PDF to the printer, again either inside the same loop or by looping over the saved PDFs.
If you wanted to get fancy, and a little more efficient, you could use a queueing system like Resque, make each of those bullet points into a queue, and run one worker per queue. That would let you start printing some PDFs while others are still being downloaded and converted, so it should take less time overall. But if you're not already familiar with a queueing system like that, a simple script should get it done as well.
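The simple-script variant of those steps can be sketched as follows (in Python rather than Ruby, for brevity). The base URL, the `/users.json` and `/users/:id.html` endpoints, and the use of `wkhtmltopdf` and `lp` are all assumptions standing in for whatever your app and print setup actually provide.

```python
import json
import subprocess
import urllib.request
from pathlib import Path

BASE_URL = "http://localhost:3000"  # assumed address of the Rails app

def fetch_users(path="users.json"):
    """Step 1: fetch the /users/ index and save it to a file."""
    with urllib.request.urlopen(f"{BASE_URL}/users.json") as resp:
        Path(path).write_bytes(resp.read())
    return path

def pdf_command(html_file, pdf_file):
    """Build the wkhtmltopdf invocation for one document."""
    return ["wkhtmltopdf", html_file, pdf_file]

def print_command(pdf_file):
    """Build the lp invocation that sends one PDF to the default printer."""
    return ["lp", pdf_file]

def run(path="users.json"):
    """Steps 2-4: loop over users; download HTML, convert, print."""
    users = json.loads(Path(path).read_text())
    for user in users:
        html_file = f"user_{user['id']}.html"
        pdf_file = f"user_{user['id']}.pdf"
        with urllib.request.urlopen(f"{BASE_URL}/users/{user['id']}.html") as resp:
            Path(html_file).write_bytes(resp.read())
        subprocess.run(pdf_command(html_file, pdf_file), check=True)
        subprocess.run(print_command(pdf_file), check=True)

if __name__ == "__main__":
    run(fetch_users())
```

Because each step is a separate function, splitting them into Resque-style queues later (one worker downloading, one converting, one printing) is a mechanical change.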