I am running a JMeter job in Jenkins using the Performance plugin. I need to fail the job if the average response time exceeds 3 seconds. I see the Duration Assertion in JMeter, but that works on each thread (each HTTP request). Instead, is it possible to apply a duration assertion to the average response time for each page?
This is how I tried adding the BeanShell Listener and Assertion:
Recording Controller
**Home Page**
BeanShell Listener
Debug Sampler
**Page1**
BeanShell Listener
Debug Sampler
Beanshell Assertion
View Results Tree
You can implement this check with a bit of BeanShell scripting.
Add a BeanShell Listener at the same level where all your requests live.
Put the following code into the BeanShell Listener's "Script" area:
// Running totals are kept in JMeter Variables (note: these are per-thread)
String requests = vars.get("requests");
String times = vars.get("times");
long requestsSum = 0;
long timesSum = 0;
if (requests != null && times != null) {
    log.info("requests: " + requests);
    requestsSum = Long.parseLong(requests);
    timesSum = Long.parseLong(times);
}

// Add the elapsed time of the sample that just finished
long thisRequest = sampleResult.getTime();
timesSum += thisRequest;
requestsSum++;
vars.put("requests", String.valueOf(requestsSum));
vars.put("times", String.valueOf(timesSum));

// Mark the sample failed once the running average exceeds 3 seconds
long average = timesSum / requestsSum;
if (average > 3000) {
    sampleResult.setSuccessful(false);
    sampleResult.setResponseMessage("Average response time is greater than threshold");
}
The code above records the running sum of response times and the total number of requests in the times and requests JMeter Variables, and fails the current sample once the running average exceeds 3000 ms. Note that JMeter Variables are local to each thread, so this computes a per-thread average.
See the How to use BeanShell: JMeter's favorite built-in component guide for comprehensive information on BeanShell scripting in Apache JMeter.
Based on the other answer, I managed to create something that works with multiple threads.
Add the following code as the script of a JSR223 Listener; you can also save it to a file and load it from there for easy reuse. I pass the duration threshold, in seconds, as a script parameter.
import org.apache.jmeter.util.JMeterUtils;

// Total number of samples expected for this thread group: loop count * number of threads
int totalRequests = Integer.parseInt(ctx.getThreadGroup().getSamplerController().getProperty("LoopController.loops").getStringValue()) * ctx.getThreadGroup().getNumThreads();

// Accumulators live in JMeter properties, which are shared across all threads.
// Key them by the sampler label so each sampler gets its own average.
String label = sampleResult.getSampleLabel();
long requestsCount = JMeterUtils.getPropDefault("requestsCount" + label, 0);
long timesSum = JMeterUtils.getPropDefault("times" + label, 0);

long thisRequestTime = sampleResult.getTime();
timesSum += thisRequestTime;
requestsCount++;
JMeterUtils.setProperty("requestsCount" + label, String.valueOf(requestsCount));
JMeterUtils.setProperty("times" + label, String.valueOf(timesSum));

long average = timesSum / requestsCount;
// The threshold is passed in seconds as the first script parameter
long threshold = Integer.parseInt(args[0]) * 1000;

// Only evaluate the assertion once the last expected sample has completed
if (requestsCount >= totalRequests) {
    if (average > threshold) {
        sampleResult.setSuccessful(false);
        sampleResult.setResponseMessage("Average response time is greater than threshold, average: " + average + ", threshold: " + threshold);
    }
    log.info("Average response time (" + label + "): " + average + ", threshold: " + threshold);
}
This stores the accumulators in global properties, which persist for the full JVM run. To keep runs consistent with each other, I added a setUp Thread Group with a JSR223 Sampler containing this code:
import org.apache.jmeter.util.JMeterUtils;

// Caution: this clears ALL JMeter properties, including the ones loaded
// from jmeter.properties, not just the accumulators set by the listener above
JMeterUtils.getJMeterProperties().clear();
log.info("cleared properties");
I'm querying one API and sending data to another. I'm also querying a MySQL database. All of this happens about 40 times in one second; then the script waits a minute and repeats. I have a feeling I'm at the limit of what PHP can do.
My question is about two variables that randomly revert to their last value from the previous loop. They only change their value after the call to self::apiCall() (below, in the second function). Both $product and $productId will randomly change their value, about once every 40 loops or so.
I upgraded PHP to 7.2, increased the memory limit to 512 MB, and assigned some variables to null to save memory. I'm not getting any official memory warnings, but watching the variables randomly revert to their previous values is perplexing. Here's what the code looks like:
/**
 * The initial create-products loop, which calls the secondary function where
 * the variables can change.
 **/
public static function createProducts() {
    // Create connection
    $conn = new mysqli(SERVERNAME, USERNAME, PASSWORD, DBNAME, PORT);
    // Check connection
    if ($conn->connect_error) {
        die("Connection failed: " . $conn->connect_error);
    }
    // Go through each queued row and create the product it describes
    $productResults = mysqli_query($conn, "SELECT * FROM product_creation_queue");
    if (mysqli_num_rows($productResults) > 0) {
        while ($row = mysqli_fetch_assoc($productResults)) {
            // Pass the queued product JSON along (the column name here is assumed)
            self::createProduct($conn, $row['product']);
        }
    }
}
/**
 * The second function, where I see both $product and $productId changing
 * from time to time, which completely breaks the code. Their values
 * only change after the call to self::api_call(), which is simply a
 * curl function that hits an API endpoint.
 **/
public static function createProduct($mysqlConnection, $product) {
    // Convert back to an array from JSON
    $productArray = json_decode($product, TRUE);
    // Here the value of $productId is one thing
    $productId = $productArray['product']['id'];
    // Here is the curl call
    $addProduct = self::api_call(TOKEN, SHOP, ENDPOINT, $product, 'POST');
    // And randomly, here it can revert to its last value from a previous loop
    echo $productId;
}
The problem was that the whole 40-query procedure sometimes took more than one minute to complete, so the cron job that started the procedure every minute would launch the next run before the first had finished, and the overlapping runs interfered with each other. The queries usually took less than one minute; when they took longer, the conflicts appeared, which is what made the failures look random.
I reduced the number of queries per minute, so the process now completes in well under 60 seconds and no variables are ever overwritten. I still don't understand how the variables would change if two PHP processes run at the same time; it seems like they should be siloed. The likely explanation is that the two processes were isolated in memory but shared external state: both read the same queue table and hit the same API, so each saw the other's side effects.
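One common way to make the overlap impossible, regardless of how long a run takes, is to have the cron entry point take an exclusive lock so a second run exits immediately instead of processing the queue again. A minimal sketch (the lock file path is illustrative):

<?php
// Prevent overlapping cron runs: try to take an exclusive, non-blocking lock
$lock = fopen('/tmp/create_products.lock', 'c');
if (!flock($lock, LOCK_EX | LOCK_NB)) {
    exit(0); // a previous run is still in progress
}
// ... call createProducts() here ...
flock($lock, LOCK_UN);
fclose($lock);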
I'm currently using an svg conversion library which wraps puppeteer:
https://github.com/etienne-martin/svg-to-img
After each call to its convert function, the library waits 500 ms; if no other calls arrive in that window, it closes the browser instance, and the next call invokes puppeteer.launch again.
I'm using this inside a Docker container running in a Kubernetes cluster. I'm wondering how expensive it is to continually call puppeteer.launch versus connecting to an already-running instance of headless Chrome.
I'm considering instead always having a Docker container running an instance of headless Chrome, and connecting to it from my Docker container doing the SVG conversion.
Before doing this, though, I wanted to get a sense of what goes on behind the scenes in launch vs. connect.
Short Answer:
Using puppeteer.connect() / browser.disconnect() whenever possible is best from a performance standpoint and is ≈ 146 times faster than using puppeteer.launch() / browser.close() (according to my benchmark tests).
Detailed Answer:
I ran some tests to compare the performance of calling puppeteer.connect() / browser.disconnect() versus puppeteer.launch() / browser.close().
Each method was tested 10,000 times, and the total time for all iterations and the average time for each iteration were recorded.
My tests found that using puppeteer.connect() / browser.disconnect() is approximately 146 times faster than using puppeteer.launch() / browser.close().
You can perform the tests on your own machine using the code provided below.
Benchmark (puppeteer.launch() / browser.close()):
'use strict';
const puppeteer = require('puppeteer');
const { performance } = require('perf_hooks');
const iterations = 10000;
(async () => {
let browser;
const start_time = performance.now();
for (let i = 0; i < iterations; i++) {
browser = await puppeteer.launch();
await browser.close();
}
const end_time = performance.now();
const total_time = end_time - start_time;
const average_time = total_time / iterations;
process.stdout.write (
'Total Time:\t' + total_time + ' ms\n'
+ 'Average Time:\t' + average_time + ' ms\n'
+ 'Iterations:\t' + iterations.toLocaleString() + '\n'
);
})();
Result:
Total Time: 1339075.0866550002 ms
Average Time: 133.90750866550002 ms
Iterations: 10,000
Benchmark (puppeteer.connect() / browser.disconnect()):
'use strict';
const puppeteer = require('puppeteer');
const { performance } = require('perf_hooks');
const iterations = 10000;
(async () => {
let browser = await puppeteer.launch();
const browserWSEndpoint = browser.wsEndpoint();
browser.disconnect();
const start_time = performance.now();
for (let i = 0; i < iterations; i++) {
browser = await puppeteer.connect({
browserWSEndpoint,
});
browser.disconnect();
}
const end_time = performance.now();
const total_time = end_time - start_time;
const average_time = total_time / iterations;
process.stdout.write (
'Total Time:\t' + total_time + ' ms\n'
+ 'Average Time:\t' + average_time + ' ms\n'
+ 'Iterations:\t' + iterations.toLocaleString() + '\n'
);
process.exit();
})();
Result:
Total Time: 9198.328596000094 ms
Average Time: 0.9198328596000094 ms
Iterations: 10,000
Puppeteer Source Code:
You can view what is happening behind the scenes by inspecting the source code of the functions in question:
puppeteer.connect() source code
browser.disconnect() source code
puppeteer.launch() source code
browser.close() source code
puppeteer.launch()
puppeteer.launch() starts the Chromium instance and connects to it afterwards. Starting a Chromium instance takes between 100 and 150 ms depending on your hardware. The connect happens almost instantly, as it is a WebSocket on the local machine.
puppeteer.connect()
puppeteer.connect() only connects to an existing chromium instance.
If the instance you are connecting to is on the same machine as your script, the connection is effectively instant (<1 ms), as before.
If you run the Chromium instance on a second machine, you introduce a network delay for the puppeteer.connect() call and for every subsequent puppeteer call. The delay depends entirely on the network, but if your machines are in the same location it should be below 10 ms.
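For example, here is a minimal sketch of connecting to a Chromium instance running on another host. The chrome hostname and port 9222 are assumptions about your setup, and the browserURL option requires a Puppeteer version that supports it; the remote instance would be started with --headless --remote-debugging-port=9222 --remote-debugging-address=0.0.0.0:

'use strict';
const puppeteer = require('puppeteer');

(async () => {
  // Connect to the already-running instance instead of launching a new one
  const browser = await puppeteer.connect({ browserURL: 'http://chrome:9222' });
  const page = await browser.newPage();
  await page.goto('https://example.com');
  // Disconnect rather than close() so the shared instance keeps running
  browser.disconnect();
})();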
svg-to-img
Regarding the library you linked: it does not appear to support connecting to an existing puppeteer instance. Alternatively, you could put the library on its own machine and offer an API that receives the SVG code and returns the image; that way the Chromium instance can keep running.
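A rough sketch of such a service (assuming svg-to-img's from(...).toPng() API from its README; the port is illustrative):

'use strict';
const http = require('http');
const svgToImg = require('svg-to-img');

// POST raw SVG markup to this service and receive a PNG back
http.createServer((req, res) => {
  let body = '';
  req.on('data', (chunk) => { body += chunk; });
  req.on('end', async () => {
    try {
      const png = await svgToImg.from(body).toPng();
      res.writeHead(200, { 'Content-Type': 'image/png' });
      res.end(png);
    } catch (err) {
      res.writeHead(500);
      res.end(String(err));
    }
  });
}).listen(3000);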
Consider an example of testing APIs with Gatling. Due to a weird requirement, I had to create a scenario for each user:
var scenarioList // this is a mutable list
I have plenty of scenarios added to this list, since the request body must differ for each user or the request won't be processed. Each individual scenario currently has the following injection profile configured:
Ex:
scenarioList += scenario1.inject(rampUsers(1) over (1 minute))
scenarioList += scenario2.inject(rampUsers(1) over (1 minute))
scenarioList += scenario3.inject(rampUsers(1) over (1 minute))
...and so on.
Now, in the global setUp, all these scenarios are called as below:
setUp(scenarioList: _*).assertions(
forAll.successfulRequests.percent.gte(90)
)
Suppose I have 1000 users (the scenarioList size is 1000). The problem is that all 1000 users start at the same time, but I want to ramp these users up. So the question becomes one of ramping up the scenarios instead of starting them all in parallel.
Is this possible? If not, is there another approach I could follow?
I don't have the luxury of running the same scenario with multiple users, as the request body changes for each user. Please let me know.
I was able to solve this problem by using feeders within the scenario, so I don't need to create multiple scenarios.
With feeders, Gatling provides a way to parameterize the request body of any HTTP request.
Code Example:
// 'req' is my request-body template and randomStringGenerator is a helper
// from my own code; the feeder produces a fresh session id for every user
var randomSession = Iterator.continually(Map("randsession" -> req.replace("0000000000", randomStringGenerator.randomString(10))))

val httpConf = http
  .baseURL("http://localhost:5000")
  .acceptHeader("text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8")
  .userAgentHeader("Mozilla/4.0 (compatible; IE; GACv10.0.0.1)")

val scn = scenario("Activate")
  .feed(randomSession) // pulls a new randsession value for each user
  .exec(http("activate request")
    .post("/login/activate")
    .body(StringBody("""${randsession}"""))
    .check(status.is(200)))
  .pause(5)

setUp(
  scn.inject(atOnceUsers(5))
).protocols(httpConf)
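Since everything is now a single parameterized scenario, the ramp-up asked about in the question becomes straightforward; for example (same DSL as above, with illustrative numbers):

setUp(
  scn.inject(rampUsers(1000) over (10 minutes))
).protocols(httpConf)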
I am building an MVC application that includes asynchronous image uploads, so each image, once uploaded, calls the action. Image uploading can be CPU-intensive and take time, so we are trying to avoid in-action processing.
I have read about using async actions, but we are also processing images at other times, so we have opted to handle image processing through a console application.
What is the proper method for calling a console application from an MVC action asynchronously? Basically, we just want to pass the console app some parameters and tell it to start, without waiting for any kind of response from it.
Our program file is an exe.
Speed is our main concern here.
Thanks so much for your help!
EDIT
As per Brian's suggestion, here is what we added:
Process pcx = new System.Diagnostics.Process();
ProcessStartInfo pcix = new System.Diagnostics.ProcessStartInfo();
pcix.FileName = "C:\\utils_bin\\fileWebManager\\ppWebFileManager.exe";
pcix.Arguments = WrkGalId.ToString() + " " + websiteId.ToString() + " 19 \"" + dFileName + "\"";
pcix.UseShellExecute = false;
pcix.WindowStyle = ProcessWindowStyle.Hidden;
pcx.StartInfo = pcix;
pcx.Start();
You would use Process.Start to execute an external application, e.g.:
Process.Start(@"C:\path\to\my\application.exe");
Process.Start returns as soon as the process has been created; since you never call WaitForExit, the MVC action will not block while the console application runs.
I have a Jenkins job configured to run hourly. I want the success build mail to be sent only once a day. Email-ext gives me the option to send emails for every success, failure, etc., but what I want is the ability to send the success email only once a day.
This is an old question and you have probably found your own workaround already, but I had a similar need and I thought I'd share my solution anyway. What I was trying to do was generate a once-daily summary email of jobs in a failed state. This is fundamentally very similar to sending a once-daily success report for a single job.
My solution uses a Groovy build step coupled with the Email-Ext plugin's pre-send script feature. I got the idea from the Nabble thread referenced in the comments above. See also Email-Ext Recipes on the Jenkins site.
Here's the initial Groovy script that determines which builds have failed, configured under Execute System Groovy Script. You could do something similar to determine whether your build succeeded or failed:
// List the names of jobs you want to ignore for this check
ignore = [ ]
// Find all failed and unstable jobs
failed = hudson.model.Hudson.instance.getView("All").items.findAll{ job ->
job.getDisplayName() != "Daily Jenkins Job Nag" &&
!ignore.contains(job.getDisplayName()) &&
job.isBuildable() &&
job.lastCompletedBuild &&
(job.lastCompletedBuild.result == hudson.model.Result.FAILURE ||
job.lastCompletedBuild.result == hudson.model.Result.UNSTABLE)
}
// Log the job names so the build results are legible
failed.each { job ->
println(job.getDisplayName() +
" " + job.lastCompletedBuild.result +
" at build " + job.lastCompletedBuild.number +
" (" + job.lastCompletedBuild.timestamp.format("yyyy-MM-dd'T'HH:mm ZZZZ") + ")");
}
// Return failure if there are any failed jobs
return failed.size
Then, down in the Editable Email Notification section, I set the Email-Ext plugin to notify on failure. I set Content Type to Plain Text (text/plain), left Default Content empty, and set the following as the Pre-send Script:
failed = hudson.model.Hudson.instance.getView("All").items.findAll{ job ->
job.getDisplayName() != "Daily Jenkins Job Nag" &&
job.isBuildable() &&
job.lastCompletedBuild &&
(job.lastCompletedBuild.result == hudson.model.Result.FAILURE ||
job.lastCompletedBuild.result == hudson.model.Result.UNSTABLE)
}
def output = StringBuilder.newInstance()
output << "<html>\n"
output << " <body>\n"
output << "<p>Jenkins reports the following failed jobs:</p>"
output << " <ul>\n"
failed.each { job ->
url = hudson.model.Hudson.instance.rootUrl + job.url + "/" + job.lastCompletedBuild.number + "/"
output << " <li>"
output << "" + job.displayName + ""
output << " " + job.lastCompletedBuild.result
output << " at build " + job.lastCompletedBuild.number
output << " (" + job.lastCompletedBuild.timestamp.format("yyyy-MM-dd'T'HH:mm ZZZZ") + ")"
output << "</li>\n"
}
output << " </ul>\n"
output << " </body>\n"
output << "</html>"
msg.setContent(output.toString(), "text/html")
The key is that you have access to the msg object, which is a MimeMessage. You can set the content of the MIME message to whatever you want.
In this case, I'm generating a list of failed jobs, but in your case it would be whatever message you want to receive for your once-daily success report. Depending on what you need, you could have Email-Ext send a result for every build rather than just for failed builds.
How about suppressing e-mails if insufficient time has elapsed since the previous e-mail? Although not precisely what was requested, a pre-send script like this might be worth considering for its simplicity.
if (build.result != hudson.model.Result.SUCCESS) {
cancel = true;
}
else {
try {
long minEmailGap = 1000 * 60 * 60 * 16; // 16 hours in milliseconds
File file = new File("/TimestampForMyJob.txt");
if (file.exists() == false) {
file.createNewFile();
}
else {
long currentTime = (new Date()).getTime();
if (file.lastModified() + minEmailGap > currentTime) {
cancel = true;
}
else {
file.setLastModified(currentTime);
}
}
}
catch(IOException e) {
// We can't tell whether the e-mail should be sent out or not, so we do nothing
// and it just gets sent anyway - probably the best we can do with this exception.
}
}
Well, there is no plugin that can do that for you. The default email feature in Jenkins is very simple and it works fine. There is the Email-ext plugin, though, and it can do a lot more for you.
First of all, with Email-ext you can configure a specific trigger for the email notification: it can fire on success or failure, similar to the default behaviour of Jenkins, but there are also more refined triggers such as First Failure and Still Failing. This gives you a great deal of control over when and to whom (recipient list, committer, or requester) Jenkins sends an email. In my case, a good configuration here helped a lot with the email traffic generated by Jenkins, and you can send specific emails in specific situations to specific lists of people.
The other option, if you really do not need that level of control and just want to limit the email traffic to one summary per day, is to set up a mailing list. Most mailing list engines will let you send a daily digest of all email traffic on the list. That should be enough, although I do not feel it is a good option in the long term; I would definitely give the Email-ext plugin a try.