I am using Ant (the build tool) to run JMeter functional scripts. I want to get the hostname or website name that all my JMeter scripts are running against.
I checked the jmeter.properties file and tried some changes, but no luck.
I fixed it and want to share the solution.
I uncommented the following setting in jmeter.properties:
jmeter.save.saveservice.hostname=true
so that the hostname is written to the JTL file. From the JTL file I extracted it with the following XPath:
/testResults/httpSample/#Host
That's it. You can use this as an XSL variable for reporting or for any other purpose.
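For example, in an XSL stylesheet the value could be captured as a variable roughly like this (a sketch; check your own JTL output for the exact attribute name, which may be hn depending on JMeter version and save settings):
<xsl:variable name="hostname" select="/testResults/httpSample[1]/@hn"/>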
You can use the following built-in functions in JMeter:
${__machineName} - to get the machine name
${__machineIP} - to get the IP address
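For example, they can be used anywhere JMeter evaluates functions, such as in a User Defined Variables element (the variable names here are just illustrative):
runHost = ${__machineName}
runIP = ${__machineIP}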
My goal is to put my telegraf config into source control. To do so, I have a repo in my user's home directory with the appropriate config file which has already been tested and proven working.
I have added the path to the new config file in the "default" environment variables file:
/etc/default/telegraf
like this:
TELEGRAF_CONFIG_PATH="/home/ubuntu/some_repo/telegraf.conf"
... as well as other required variables such as passwords.
However, when I attempt to run
telegraf --test
It says No config file specified, and could not find one in $TELEGRAF_CONFIG_PATH etc.
Further, if I force it by
telegraf --test --config /home/ubuntu/some_repo/telegraf.conf
Then the process fails because it is missing the other required variables.
Questions:
What am I doing wrong?
Is there not also a way of specifying a config directory too (I would like to break my file down into separate input files)?
Perhaps as an alternative to all of this... is there not a way of specifying additional configuration files to be included from within the default /etc/telegraf/telegraf.conf file? (I've been unable to find any mention of this in documentation).
What am I doing wrong?
See what user:group owns /etc/default/telegraf. This file is better used when running telegraf as a service via systemd. Additionally, if you run env do you see the TELEGRAF_CONFIG_PATH variable? What about your other variables? If not, then you probably need to source the file first.
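For example, a quick check plus a way to pull the file into the current shell (a sketch; set -a marks every variable the file defines for export):
env | grep TELEGRAF
set -a
. /etc/default/telegraf
set +a
telegraf --test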
Is there not also a way of specifying a config directory too (I would like to break my file down into separate input files)?
Yes! Take a look at all the options of telegraf with telegraf --help and you will find:
--config-directory <directory> directory containing additional *.conf files
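For example (assuming the conventional /etc/telegraf/telegraf.d directory; any directory of *.conf files works):
telegraf --config /etc/telegraf/telegraf.conf --config-directory /etc/telegraf/telegraf.d --test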
Perhaps as an alternative to all of this... is there not a way of specifying additional configuration files to be included from within the default /etc/telegraf/telegraf.conf file? (I've been unable to find any mention of this in documentation).
That is not the route I would suggest going down. Check out the config-directory option I mentioned above.
Ok, after a LOT of trial and error, I figured everything out. For those facing similar issues, here is your shortcut to the answer:
Firstly, remember that after adding variables to the /etc/default/telegraf file, the file effectively has to be reloaded. On Ubuntu with systemctl, for example, that means restarting the telegraf service.
You can verify that the variables have been loaded successfully using this:
$ sudo strings /proc/<pid>/environ
where <pid> is the "Main PID" from the telegraf status output
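With a reasonably recent systemd you can combine the PID lookup and the check into one line (a sketch):
$ sudo strings /proc/$(systemctl show -p MainPID --value telegraf)/environ | grep TELEGRAF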
Secondly, when testing (e.g. telegraf --test), and this is the part that is not necessarily intuitive and isn't documented, you will ALSO have to load the same environment variables into the current shell (e.g. export VAR=value) such that running
$ env
shows the same results as the previous command.
Hint: sourcing the env file directly (e.g. with set -a; . /etc/default/telegraf; set +a) is a good method for loading it in one go rather than doing it manually.
I have changed our Jenkins setup from everything running on one machine to a master/agent (slave) setup. Before that everything worked fine; now I am facing issues where some of the programs I call cannot find the files they need to access.
Case 1:
(Please don't ask why it is so complicated; the file structure is given and I can't change it.)
I am calling a Python script that itself calls a batch file:
import os
import subprocess
filepath = os.path.abspath(os.path.join(pamFolder, "run.bat"))
p = subprocess.Popen(filepath, cwd=pamFolder, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
... and the batch file in turn calls a JAR file with the actual program:
java -XX:-UseGCOverheadLimit -cp "../..;../../libs/*" -jar ..\..\myjarfile.jar
Inside the JAR file, an access to a file on disk fails with an error message saying the file can't be found:
ERR : The file was not found in the specified path 'U:\somefile.txt'. Please check this path for access and your configuration!
Case 2:
I am calling a batch file from Jenkins that calls some other exe and finally tries to open a file in Excel via the COM interface. Here I get the following exception (Excel can't access the file):
Unhandled Exception: System.Runtime.InteropServices.COMException: Microsoft Excel kann auf die Datei 'D:\Jenkins\workspace\myJob\someDir\someFile.xlsm' nicht zugreifen.
Question
As previously mentioned, both jobs were working in the previous setup. Both files DO exist.
I suspect that Jenkins / the programs are trying to find the files on the master where they are not available.
Is there any way to tell Jenkins that the called tools are fully executed on the slave node or in some other way tell them where to find these files?
EDIT
The job is already running on the slave. The console shows Running on [slave name] in D:/Jenkins/workspace/xxxxx.
The master is configured in a way that only jobs assigned to it run on the master. So pretty much all jobs should run on the slave.
EDIT2 / SOLUTION
It turned out that the 2 issues are caused by different things.
Case 1: Solved by using the UNC path.
Case 2: Solved by a combination of granting the necessary permissions as described here and running the slave service under a user with admin rights.
In my experience, this issue usually has to do with your SCM setup.
But as you stated that the files DO exist, could it be that U:\ is a network share? Then consider changing your path to use a UNC path.
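For example (hypothetical server and share names), use \\fileserver\data\somefile.txt instead of U:\somefile.txt. Mapped drive letters exist per user and per logon session, so a Jenkins slave running as a Windows service typically does not see the U: mapping even though your interactive user does.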
If that's not the case, check whether your Jenkins slave has sufficient user rights to access the file.
You can tell Jenkins to run the job on the designated slave as follows:
Under Nodes > [SLAVE] > Configure, specify a label for the slave.
Under [Job] > Configure > Restrict where this project can be run, enter the label.
Now when you build, the console output of the job should read something along the lines of Running on [SLAVE] (build_agent_01) in C:/jenkins, and the files should then be accessible.
I am running the following Maven command on Jenkins:
clean org.jacoco:jacoco-maven-plugin:prepare-agent install
The JaCoCo exec file is created as shown below:
target/coverage-reports/jacoco-int-test.exec
I would like to generate this file under the following path instead, since all other projects use the same convention:
target/jacoco.exec
I could not figure out why it is generated this way, or how to change it to "target/jacoco.exec".
I will use this report in SonarQube analysis.
I would appreciate your help; thanks in advance.
As per the documentation of prepare-agent, the destFile parameter controls the location of the output file; its default is ${project.build.directory}/jacoco.exec, which is exactly target/jacoco.exec. So check your POMs to find where it is modified to target/coverage-reports/jacoco-int-test.exec.
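For reference, such an override typically looks something like this in a POM (a sketch of what to search for; the surrounding plugin configuration will differ in your build). Removing the destFile element, or pointing it back at the default, gives you target/jacoco.exec again:
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>prepare-agent</goal>
      </goals>
      <configuration>
        <!-- this is what redirects the exec file away from target/jacoco.exec -->
        <destFile>${project.build.directory}/coverage-reports/jacoco-int-test.exec</destFile>
      </configuration>
    </execution>
  </executions>
</plugin>
The same setting can also come from the jacoco.destFile user property, so it is worth checking parent POM properties and the Jenkins job's Maven options as well.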
I have recorded a UI test with Selenium Builder (a Firefox plugin) and saved it as a .json file.
Now I am trying to run it from the command line using the SeInterpreter JAR.
My command is this:
java -jar SeInterpreter.jar <path of .json test file>
I have found information here.
I have downloaded the project but have not found the SeInterpreter.jar file anywhere; I have also searched specifically for the JAR file but could not get it.
Is there any other better way to achieve this?
After some analysis and searching, I finally found the SeInterpreter.jar here:
http://seleniumbuilder.github.io/se-builder/tools.html
Hope this will be helpful for others with a similar query.
I am trying to start Neo4j on OS X and change which configuration file is used. I'd like to start a test server for unit tests with a different port and a database that is deleted during startup (I will solve the deletion part in a shell script, which will stop and start the server).
My problem is that Neo4j ignores the configuration file given as a parameter. My call looks like this (from the terminal, with bin as the current folder):
./neo4j start -server -Dorg.neo4j.server.properties=conf/neo4j-server-test.properties
The default configuration file is still chosen.
Thanks for your help
There are no 'ad hoc' arguments to the neo4j command script, so your arguments after start are ignored. You need to either make a modified version of the neo4j command script, or swap out neo4j-server.properties files.
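A minimal sketch of the swap approach (hypothetical file names; back up the original first):
cp conf/neo4j-server.properties conf/neo4j-server.properties.bak
cp conf/neo4j-server-test.properties conf/neo4j-server.properties
./neo4j restart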
You can use ineo to manage two or more Neo4j instances:
https://github.com/cohesivestack/ineo
You can specify the path to a folder containing a customized configuration file with an environment variable:
NEO4J_CONF=../conf neo4j start
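For example, to run a separate test instance on different ports you could keep a second config directory and point NEO4J_CONF at it (a sketch; the property names are the Neo4j 3.x connector settings, so adjust them to your version):
# conf-test/neo4j.conf
dbms.connector.http.listen_address=:7475
dbms.connector.bolt.listen_address=:7688
dbms.directories.data=data-test
Then start the server against that directory:
NEO4J_CONF=$(pwd)/conf-test neo4j start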