I'm building some simple load testing for my API, and to make sure everything is on the up and up I'd like to also review the response headers and data. But when I run my test from the command line and then re-open the GUI, add a View Results Tree listener, and load the created file, the response headers and response data are empty.
I entered the following values into user.properties (I also tried uncommenting those values in jmeter.properties and changing them there, with the same result):
jmeter.save.saveservice.output_format=csv (tried xml, omitting it, jtl)
jmeter.save.saveservice.data_type=false
jmeter.save.saveservice.label=true
jmeter.save.saveservice.response_code=true
jmeter.save.saveservice.response_data.on_error=true
jmeter.save.saveservice.response_message=true
jmeter.save.saveservice.successful=true
jmeter.save.saveservice.thread_name=true
jmeter.save.saveservice.time=true
jmeter.save.saveservice.subresults=false
jmeter.save.saveservice.assertions=false
jmeter.save.saveservice.latency=true
jmeter.save.saveservice.bytes=true
jmeter.save.saveservice.hostname=true
jmeter.save.saveservice.thread_counts=true
jmeter.save.saveservice.sample_count=true
jmeter.save.saveservice.response_message=true
jmeter.save.saveservice.assertion_results_failure_message=true
jmeter.save.saveservice.timestamp_format=HH:mm:ss
jmeter.save.saveservice.default_delimiter=;
jmeter.save.saveservice.print_field_names=true
But still no luck when opening the result file. I tried giving the file after the -l flag the extension .csv, .jtl, even .xml, but none of them show me the headers and data.
I'm running it locally on Mac OS X 10.10 using the following command; the JMeter version is 2.12:
java -jar ApacheJMeter.jar -n -t /Users/[username]/Documents/API_test.jmx -l results_15.jtl
I don't know if it's not even saving that data, or if the listeners can't read it, or if I've been cursed, but any help is appreciated.
It works fine if I add a Listener and run it using the GUI, but if I try to run my larger tests that way, well, things don't end well for anyone.
So my question is:
How do I save the response header and data to a file when using the command line, and how do I then view said file in jmeter?
Add a Simple Data Writer (under Listeners) and output to a file (NB: a different file from your log). Under the 'Configure' button there are all sorts of options for what to save. One of the checkboxes is Save Response Header.
This file can get huge if you're saving a bunch of things for every request; one strategy is to check everything but only save the response body for errors, as sketched below. But you can do whatever works for you.
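If you prefer properties to the GUI checkboxes, that strategy looks roughly like this in user.properties (a sketch using the standard save-service keys):

# headers are small, so save them for every sample
jmeter.save.saveservice.requestHeaders=true
jmeter.save.saveservice.responseHeaders=true
# save the response body only for failed samples
jmeter.save.saveservice.response_data=false
jmeter.save.saveservice.response_data.on_error=true
# CSV cannot carry headers/bodies, so use XML
jmeter.save.saveservice.output_format=xml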
You can also turn on "Functional Test Mode" which will produce a large file but will contain pretty much anything you might need to debug your test.
Beware: this can create a very large JTL file, so don't forget to turn it off for your large test runs! See "JMeter Maven mojo throws IllegalArgumentException with large JTL file".
Alternatively, use a View Results Tree listener in the GUI for a small sample of the requests and check the request/response there (including headers) to debug or verify your test.
Add the lines below to your user.properties file:
jmeter.save.saveservice.output_format=xml
jmeter.save.saveservice.response_data=true
jmeter.save.saveservice.samplerData=true
jmeter.save.saveservice.requestHeaders=true
jmeter.save.saveservice.url=true
Then restart JMeter (and your command prompt session) so the new properties are picked up.
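To sanity-check what actually got saved without re-opening the GUI, you can peek at the result file with a few lines of Python (a minimal sketch, assuming output_format=xml and the results_15.jtl name from the question):

import xml.etree.ElementTree as ET

root = ET.parse("results_15.jtl").getroot()     # <testResults> element
for sample in root:                             # <httpSample>/<sample> children
    print(sample.get("lb"), sample.get("rc"))   # label and response code
    print(sample.findtext("responseHeader", "")[:200])  # empty if not saved
    print(sample.findtext("responseData", "")[:200])

If those fields come back empty, the data was never written and the listener is not at fault.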
I'm using the NodeMCU environment to write a Lua script for the ESP8266. It uses the FATFS module to create several files with the following pattern:
LOG_xxxxyymmdd_hhmmss.txt, where xxxx is an incremental file number and the rest is a timestamp (yymmdd_hhmmss).
I was running a test: creating one file, filling it with a small amount of data (~200 bytes), closing it, and repeating this X times. After the first hour had passed, the files stopped being created. Here is the list of files that were created successfully:
(the full list is on Pastebin: https://pastebin.com/0xn3MBQt)
And here are some filenames it couldn't create:
LOG_0193000101_005909.txt
LOG_0194000101_005908.txt
LOG_0000000000_000000.txt
LOG.txt
I'm really troubled by this. Is there some kind of filename size limit, such that when searching for files during open() it returns an error because of a name ambiguity or something like that? If anyone has a clue about this, please tell me so I can test it. Thanks.
I'm trying to use simple-storage from my extension, but I can't retrieve values between browser sessions. Here's the thing: from my main code, I created a value this way:
var ss = require("sdk/simple-storage");
ss.storage.foo = [{id:"bar1", properties:{a:"aaa", b:"bbb"}}]
console.log(ss.storage.foo);
This was OK; I could see the object through the log. But then I closed the browser, commented out the "foo" definition (line 2), and the console log showed "undefined".
I know cfx run by default uses a fresh profile each time it runs, so simple storage won't persist from one run to the next. But I'm using
cfx -b firefox run --profiledir=$HOME/.mozilla/firefox/nightly.ext-dev
So I'm sure I'm using the same profile every time.
What could be happening? What am I missing? Any idea is welcome! Thanks in advance!
Thanks to Noitidart's answer, I discovered that the problem was that the file is only saved when you close Firefox properly. If you just kill it through the console, the data is not persisted.
This is how simple-storage works. It creates a folder in your ProfD folder, which is your profile directory: https://github.com/mozilla/addon-sdk/blob/master/lib/sdk/simple-storage.js#L188
let storeFile = Cc["@mozilla.org/file/directory_service;1"].
                getService(Ci.nsIProperties).
                get("ProfD", Ci.nsIFile);
storeFile.append(JETPACK_DIR_BASENAME);
storeFile.append(jpSelf.id);
storeFile.append("simple-storage");
file.mkpath(storeFile.path);
storeFile.append("store.json");
return storeFile.path;
The exact location of the file is inside your profile folder: a folder named jetpack, then your add-on id, then a folder called simple-storage, and finally a file in that folder called store.json. Example path:
ProfD/jetpack/addon-id/simple-storage/store.json
It then writes data to that file. Every time your profile folder is recreated (due to the temporary profiles that jpm / cfx use), your data is erased.
You should just use OS.File to create your own file to save data. OS.File is a better way than nsIFile, which is what simple-storage uses. Save it outside the ProfD folder, but make sure to remove it on uninstall of your addon; otherwise you pollute your users' computers.
Just in case someone else finds this question while using jpm: --profiledir was removed from jpm, so to make jpm run use the same profile directory (and thereby the same simple-storage data), you have to run it with the --profile option pointing at the profile path, not the profile name.
jpm run --profile path/to/profile
For future readers, an alternative to Noitidart's recommendation of using OS.File is to use the low-level API io/file.
You can create a file using fileIO.open(path); if the file doesn't exist, it will be created. You can control reading and writing by including the second argument: fileIO.open(path, mode).
The mode can be:
r - Read-only mode
w - Write-only mode
b - Binary mode
It defaults to r. You can use this to read and write a file (obviously the file cannot be in the ProfD folder, or it will be removed each time jpm / cfx is run).
I am trying to work out what's wrong with my current Lithium setup. I have installed Xdebug and verified that the remote host can establish the connection as requested.
http://myinstance.com/test/lithium/tests/cases/analysis/logger/adapter/CacheTest?filters[]=lithium\test\filter\Coverage
Please note that in a fresh installation in my local environment, the "Coverage" filter works as expected.
I added some test code inside the "apply" function in coverage.php, but it is not even called! Does anyone have experience debugging the above URL?
I am not able to understand why the Coverage filter is not called and executed. Any hints are highly appreciated!
The filters in the query string are added to the options in lithium\test\Controller::__invoke() and then passed into the test Report object created by the test Dispatcher. The Report object finds the test filter class and then runs the applyFilter() method for that test filter, as can be seen in lines 140 to 143 of the current code, so those lines would be another place to debug. They should wrap the run() method of your tests with the filter code inside the apply() method, which uses xdebug_get_code_coverage() and related functions. You said you added test code in the apply method and it isn't called; I'm not sure what the issue is. Are you sure you are pointing to the right server and code location? It is also possible to run tests from the command line; maybe you should try that. See the code comments in lithium\console\command\Test or run li3 test --help for info on how to use the command-line test runner.
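For example, the test from the question's URL could be run with the same filter from the command line (a sketch; confirm the exact --filters syntax against li3 test --help):

li3 test lithium/tests/cases/analysis/logger/adapter/CacheTest --filters=Coverage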
I can confirm that on nginx I also have /test/lithium/tests/cases/analysis/logger/adapter/CacheTest?filters[]=lithium\x5Ctest\x5Cfilter\x5CCoverage in my access log. The \x5C is the expected URL encoding of the backslash character.
I have been scouring the internet for days; I have a problem similar to this one.
I need to retrieve the console output in raw (plain) text. But if I can get it in HTML, that is fine too; I can always parse it. The only thing is that I need to get it during the build step, which is a problem since the content at the location where it should be available is truncated...
I have tried retrieving the console output from the following URLs (relative to the job):
/consoleText
/logText/progressiveText
/logText/progressiveHTML
The two text ones are plain text and would be perfect if not for the truncation; the same goes for the HTML one... exactly what I need, only it's truncated.
I am sure it is possible to retrieve this information somehow, since when viewing /consoleFull there is a real-time update of the console, without truncation or buffering.
However, upon examining that web page, instead of finding the content I desired, I found this code where it should have been (I did not include the full page's code, since it would be mostly irrelevant, and I believe those answering would be able to find out what should be there on their own):
new Ajax.Request(href, {
    method: "post",
    parameters: {"start": e.fetchedBytes},
    requestHeaders: headers,
    onComplete: function(rsp, _) {
        var stickToBottom = scroller.isSticking();
        var text = rsp.responseText;
        if (text != "") {
            var p = document.createElement("DIV");
            e.appendChild(p); // Needs to be first for IE
            // Use "outerHTML" for IE; workaround for:
            // http://www.quirksmode.org/bugreports/archives/2004/11/innerhtml_and_t.html
            if (p.outerHTML) {
                p.outerHTML = '<pre>' + text + '</pre>';
                p = e.lastChild;
            }
            else p.innerHTML = text;
            Behaviour.applySubtree(p);
            if (stickToBottom) scroller.scrollToBottom();
        }
        e.fetchedBytes = rsp.getResponseHeader("X-Text-Size");
        e.consoleAnnotator = rsp.getResponseHeader("X-ConsoleAnnotator");
        if (rsp.getResponseHeader("X-More-Data") == "true")
            setTimeout(function() { fetchNext(e, href); }, 1000);
        else
            $("spinner").style.display = "none";
    }
});
Specifically, I am hoping there is a way for me to get the content of text, whatever it may be. I am not familiar with this language, so I am not sure how I might get the content I want. Plugins won't help, since I want to retrieve this content as part of my script during the build step.
You did a pretty good investigation already. I can only add the following: all console-related plug-ins I know of are designed as post-build actions.
The Log Trigger plugin provides a post-build action that allows Hudson
builds to search their console log for a given regular expression and
if found, trigger additional downstream jobs.
So it looks like there is no straightforward solution to your problem. I can see the following options:
1. Use tee or something similar (applicable to shell build steps only)
This solution is far from universal, but it can provide quick access to the latest console output produced by a command or set of commands.
tee - read from standard input and write to standard output and files
Using aliases at the system level, other Jenkins build steps can be modified to produce a copy of their console output. The file with the console output can then be referenced through Jenkins or in any other way; a minimal shell sketch follows.
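For a shell build step it can look like this (a sketch; run_tests.sh is a placeholder for your actual build command):

./run_tests.sh 2>&1 | tee "$WORKSPACE/console_copy.log"

The tee duplicates stdout/stderr into a file inside the workspace, where it can be archived or read by later build steps.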
2. Modify Jenkins code
You can just do a quick fix for internal usage or provide a patch introducing a specific system-wide setting.
3. Mimic /console behavior
Code in your example is used to request updates from the Jenkins server. As you may expect, the server side can return a piece of the log starting at some offset.
Periodically, the console page sends a request to the server with a single, straightforward parameter: start, the byte offset fetched so far. The response is the chunk of console output to be appended, and the X-Text-Size response header carries the updated offset (start) value for the next request. You can easily tell there is no new data by analyzing Content-Length, and the X-More-Data header says whether to keep polling.
So the answer is: use url/job-name/build-number/logText/progressiveHtml, specify the start offset, send the request, and receive the console update.
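That loop is easy to reproduce in a few lines of Python (a sketch; the job URL is a placeholder, requests is a third-party package, and it assumes the endpoint accepts GET as well as the POST the page itself uses):

import time
import requests  # third-party: pip install requests

base = "http://jenkins.example.com/job/my-job/123"  # placeholder job/build URL
start = 0
while True:
    rsp = requests.get(base + "/logText/progressiveText",  # plain-text variant
                       params={"start": start})
    if rsp.text:
        print(rsp.text, end="")                   # new chunk of console output
    start = int(rsp.headers.get("X-Text-Size", start))
    if rsp.headers.get("X-More-Data") != "true":
        break                                     # build finished, log complete
    time.sleep(1)                                 # same 1-second delay the page uses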
I had a similar issue: the last part of my Jenkinsfile build script needs to parse the console log for particular error messages to put in an email build report.
First attempt: HTTP request.
It felt like a hack; it mostly worked, but I ran into issues when we locked down access to the Jenkins server and my build nodes could no longer perform anonymous HTTP GETs on the page.
Second attempt: use the APIs to enumerate the log lines.
It felt like the right thing to do, but it failed horribly: my nodes would take 30 minutes to get through the 100 MB log files. My presumption is that the Jenkins server was not caching the file, so each request involved re-reading the entire file up to the point of the last read.
Third and most successful solution: run grep on the server.
node('master') {
sh 'grep some_criteria $JENKINS_HOME/workspace/path/to/job/console.log'
}
It was fast and reliable, and it didn't matter how big the log files were.
Yes, this required trust of the Jenkins admin and knowledge of the directory paths on the Jenkins server - but since I was the admin, I trusted myself to do the right thing. Your mileage may vary.
To add some insight: while the Jenkins build was in progress, the response for the .../consoleText URL maxed out at exactly 10,000 lines.
I was using the 'requests' package in Python. I tried the same URL with curl and again received only the first 10K lines.
Only after the build had finished did both methods return the full log (>22K lines in my case).
I will research further and hope to report back.
[2015-08-18] Update: It seems that this is a known issue (see here) and it was fixed in Jenkins 1.618 and later. I am still running 1.615, so I cannot verify.
Amir
I'm pretty sure suds is not caching my WSDLs and XSDs like I expect it to. Here's how I know that cached objects are not being used:
It takes about 30 seconds to create a client: client = Client(url)
The logger entries show consistent digestion of the XSD and WSDL files during the entire 30 seconds
Wireshark is showing consistent TCP traffic to the server storing the XSD and WSDL files during the entire 30 seconds
I see the files in the cache being updated each time I run my program
I have a small program that creates a suds client, sends a single request, gets the response, then ends. My expectation is that each time I run the program, it should fetch the WSDL and XSD files from the file cache, not from the URLs. Here's why I think that:
client.options.cache.duration is set to ('days', 1)
client.options.cache.location is set to c:\docume~1\mlin\locals~1\temp\suds and I see the cache files being generated and re-generated each time I run the program
For a moment I thought that maybe the cache is not reused between runs of a program, but in that case a file cache would make little sense; an in-memory cache would do just fine.
Am I misunderstanding how suds caching is supposed to work?
The problem is in the suds library itself. In cache.py, although ObjectCache.get() always gets a valid file pointer, it hits an exception (EOFError) doing pickle.load(fp). When that happens, the file is just downloaded again.
Here's the sequence of events:
DocumentReader.open():
Trying http://172.28.50.249/wsdl/billingServices/v3.0/RequestScrubAddress.wsdl
Loading ObjectCache 51012453-document
Loading pickled object...
Exception raised:
Got None from cache
Downloading... Done
Saving FileCache 51012453-document... Done
So it doesn't really matter that the new cache file was saved, because the same thing happens the next time the program runs. This happens for ALL of the WSDL and XSD files.
I fixed the problem by opening the cache file in binary mode when reading and writing. Specifically, the changes I made were in cache.py:
1) In FileCache.put(), change this line:
f = self.open(fn, 'w')
to
f = self.open(fn, 'wb')
2) In FileCache.getf(), change this line:
return self.open(fn)
to
return self.open(fn, 'rb')
I don't know the codebase well enough to know if these changes are safe, but suds is now pulling the objects from the file cache, the service is still running successfully, and loading the client went from 16 seconds down to 2.5 seconds. Much better if you ask me.
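If you want to check the effect on your own system, a quick timing sketch (the WSDL URL is a placeholder; after the fix, the second construction should be served from the file cache):

import time
from suds.client import Client

URL = "http://example.com/service?wsdl"  # placeholder WSDL URL

t0 = time.time()
client = Client(URL)  # cold: downloads the WSDL/XSDs and populates the cache
print("cold start: %.1fs" % (time.time() - t0))

t0 = time.time()
client = Client(URL)  # warm: with the binary-mode fix, loads from the cache
print("warm start: %.1fs" % (time.time() - t0))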
Hopefully this fix, or something similar, can be introduced back into the suds mainline. I've already sent it to the suds mailing list (fedora-suds-list at redhat dot com).