I am using Jenkins as my deployment pipeline. I have a JMeter project that I will be executing in a build step in Jenkins. That JMeter project has a dependency on a CSV file for parameters. How do I get that file included in the Jenkins pipeline, and how do I tell JMeter where to look for it in the CSV Data Set Config?
I could also do it via a Gradle command if that is an option.
Thanks.
The easiest option is running JMeter in command-line non-GUI mode; the relevant Jenkins Pipeline snippet would be:
node {
    stage 'Run JMeter Test'
    bat 'c:/jmeter/bin/jmeter.bat -n -t c:/jmeter/extras/Test.jmx -l test.jtl'
}
The above setup assumes a Windows operating system; if your Jenkins master or build agent is running Linux, Unix or macOS, just change bat to sh. You will also need to amend the JMeter installation path to reflect your environment. See Running a JMeter Test via Jenkins Pipeline - A Tutorial for an example configuration.
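For example, on a Linux agent the equivalent stage might look like this (the installation path here is an assumption; adjust it to your environment):
node {
    stage 'Run JMeter Test'
    // same non-GUI invocation as above, just via sh on a Unix-like agent
    sh '/opt/jmeter/bin/jmeter -n -t /opt/jmeter/extras/Test.jmx -l test.jtl'
}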
In the case of command-line non-GUI JMeter execution, you need to copy your CSV file(s) to the "bin" folder of your JMeter installation.
In the case of the JMeter Ant Task, the same approach applies: drop your CSV file(s) into JMeter's "bin" folder.
In the case of the JMeter Maven Plugin, you will need to copy the CSV file(s) to the src/test/jmeter folder of your Maven project.
And finally, you can just use full (not relative) path(s) to the CSV file(s) in your CSV Data Set Config elements.
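For instance, in the non-GUI case the pipeline itself can copy the CSV into JMeter's "bin" folder before the run. A minimal sketch; the CSV file name and paths below are placeholders:
node {
    stage 'Run JMeter Test'
    // copy the CSV dependency into JMeter's "bin" folder, which the answer
    // above names as the lookup location for the CSV Data Set Config
    bat 'copy /Y parameters.csv c:\\jmeter\\bin\\'
    bat 'c:/jmeter/bin/jmeter.bat -n -t c:/jmeter/extras/Test.jmx -l test.jtl'
}
With that in place, the CSV Data Set Config can reference the file by its bare name, parameters.csv.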
Related
I am trying to configure a Jenkins seed job where the whole logic is in a provided DSL script. I want to separate that script from its configuration, which I want to keep in an additional YAML file. When I try to read that file:
@Grab('org.yaml:snakeyaml:1.17')
import org.yaml.snakeyaml.Yaml
def workDir = SEED_JOB.getWorkspace()
def config = new Yaml().load(("${workDir}/config.yml" as File).text)
I receive the error:
java.io.FileNotFoundException: /var/lib/jenkins/workspace/test.dsl/config.yml (No such file or directory)
I suppose that Jenkins is looking for the file on the master host, not the agent node where the workspace is located.
Is it possible to read a YAML file in a DSL build step on the agent node? Or do I always have to execute that seed job on my master host?
This seems not to be possible, as the Job DSL script is executed on the master. You can try to force the job to run on the master with the label master.
From the documentation in section Script location:
Job DSL scripts are executed on the Jenkins master node, but the seed job's workspace which contains the script files may reside on a build node. This means that direct access to the file specified by FILE may not be possible from a DSL script. See Distributed builds for details.
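As a possible alternative, Job DSL provides a readFileFromWorkspace helper for this case; it reads files from the seed job's workspace even when that workspace lives on a build node. A minimal sketch, assuming config.yml sits at the root of the seed job's workspace:
@Grab('org.yaml:snakeyaml:1.17')
import org.yaml.snakeyaml.Yaml

// readFileFromWorkspace resolves the path relative to the seed job's
// workspace, wherever that workspace is located
def config = new Yaml().load(readFileFromWorkspace('config.yml'))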
I have some Windows slaves on my Jenkins, so I need to copy files to them in a pipeline. I have heard about the Copy To Slave and Copy Artifact plugins, but they don't have a pipeline syntax manual, so I don't know how to use them in a pipeline.
A direct copy doesn't work:
def inputFile = input message: 'Upload file', parameters: [file(name: 'parameters.xml')]
new hudson.FilePath(new File("${ENV:WORKSPACE}\\parameters.xml")).copyFrom(inputFile)
This code returns an error:
Caused: java.io.IOException: Failed to copy /var/lib/jenkins/jobs/_dev/jobs/(TEST)job/builds/107/parameters.xml to d:\Jenkins\workspace\_dev\(TEST)job\parameters.xml
Is there any way to copy a file from the master to a slave in a Jenkins Pipeline?
As I understand it, copyFrom is executed on your Windows node, therefore the source path is not accessible.
I think you want to look into the stash/unstash steps (Jenkins Pipeline: Basic Steps), which work across different nodes. Also this example might be helpful.
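A minimal sketch of the stash/unstash approach (the node labels and file name are assumptions):
node('master') {
    // pick the file up where it was produced and stash it under a name
    stash name: 'params', includes: 'parameters.xml'
}
node('windows') {
    // recreate the stashed file in this node's workspace
    unstash 'params'
}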
The Pipeline DSL context runs on the master node even when you write node('someAgentName') in your pipeline.
Try stash/unstash, but it is bad for large files.
Try the External Workspace Manager Plugin. It has pipeline steps and is good for large files.
Try using intermediate storage: archive() and sh("wget $url") will be helpful (see the sketch below).
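A rough sketch of the intermediate-storage idea, using the archiveArtifacts step and the build's artifact URL (the file name and agent label are assumptions):
node('master') {
    // publish the file from the master's workspace as a build artifact
    archiveArtifacts artifacts: 'parameters.xml'
}
node('some-agent') {
    // pull the artifact back down from the master over HTTP
    sh "wget ${env.BUILD_URL}artifact/parameters.xml"
}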
If the requirement is to copy an executable to the test slave and to publish the test results, this is easy to do without the Copy to Slave plugin.
A shared folder should be created on each test slave (normal Windows shared folder).
After build: the build script copies the executable to the shared directory on each slave. A simple batch script using the copy command is sufficient for this.
stage('Copy to slaves') {
    steps {
        bat 'call "copy-to-slave.bat"'
    }
}
During test: The test script copies the executable to another directory and runs it.
After test: Post-build action "Publish Robot Framework test results" can be used to report the test results. It is not necessary to copy the test result files back to the master first.
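For illustration, the copy could also be inlined in the stage instead of calling a separate batch file; the share name and artifact path below are hypothetical:
stage('Copy to slaves') {
    steps {
        // copy the freshly built executable to the Windows share on a
        // test slave; \\testslave1\share is a made-up share name
        bat 'copy /Y build\\output\\app.exe \\\\testslave1\\share\\'
    }
}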
I recommend the Pipeline: Phoenix AutoTest plugin.
Jenkins plugin website:
https://plugins.jenkins.io/phoenix-autotest/#documentation
GitHub repository of the plugin:
https://github.com/jenkinsci/phoenix-autotest-plugin
I have a Groovy script with Ant commands in it. The script runs successfully on my local machine, but when I try it with Jenkins the Groovy script always fails. Jenkins always returns the error "ant can't create task or type p4Change". I already added Apache Ant support in the global configuration. How do I configure Ant to successfully run the Groovy script I have? Any ideas? Thanks.
Sample code snippets:
I have an execute.groovy file with Ant commands:
ant = new AntBuilder()
def checkChanges() {
    ant.p4change(description: "Checking", port: 'perforce:1666', user: optional, view: "'${workspace}'...")
}
And I created a batch file that will run the execute.groovy file:
call groovy execute.groovy project bopolz18 -c bopolz18.Workspace 150718
On my machine this works well, but in Jenkins, when I execute the batch command, it fails with the error mentioned above.
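One thing worth checking is whether the Ant that AntBuilder embeds on the Jenkins machine actually carries the optional Perforce task definitions; registering the task explicitly is one way to find out. A sketch, assuming the standard Ant optional Perforce task class is on the classpath (its availability depends on your Ant version):
ant = new AntBuilder()
// register the optional Perforce task explicitly so the embedded Ant can
// resolve p4change; P4Change is the standard Ant optional task class
ant.taskdef(name: 'p4change',
        classname: 'org.apache.tools.ant.taskdefs.optional.perforce.P4Change')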
I have a Jenkins pipeline that builds a Java artifact, copies it to a directory and then attempts to execute an external script.
I am using this syntax within the pipeline script to execute the external script:
dir('/opt/script-directory') {
    sh './run.sh'
}
The script is just a simple Docker build script, but the build will fail with this exception:
java.io.IOException: Failed to mkdirs: /opt/script-directory@tmp/durable-ae56483c
The error is confusing because the script does not create any directories. It is just building a Docker image and placing the freshly built Java artifact in that image.
If I create a different job in Jenkins that executes the external script as its only build step and then call that job from my pipeline script using this syntax:
build 'docker test build'
everything works fine: the script executes within the other job and the pipeline continues as expected.
Is this the only way to execute a script that is external to the workspace?
What am I doing wrong with my attempt at executing the script from within the pipeline script?
The issue is that the jenkins user (or whatever user runs the Jenkins slave process) does not have write permission on /opt, and the sh step wants to create the script-directory@tmp/durable-ae56483c sub-directory there.
Either remove the dir block and use the absolute path to the script:
sh '/opt/script-directory/run.sh'
or give the jenkins user write permission on the /opt folder (not preferred, for security reasons).
This looks like a bug in Jenkins; durable directories are meant to store recovery information, e.g. before executing an external script using sh.
For now, all you can do is make sure that /opt/script-directory has +r, +w and +x set for the jenkins user.
Another workaround would be not to change the current directory, but just execute sh with the full path:
sh '/opt/script-directory/run.sh'
I had a similar concern when trying to execute a script in a Jenkins pipeline using a Jenkinsfile.
I was trying to run a script restart_rb.sh with sudo.
To run it I specified the present working directory ($PWD):
sh 'sudo sh $PWD/restart_rb.sh'
Our internal build system uses a shell script to set up the environment for building projects. Then the actual build tools (Ant or make) can reference environment variables for configuring various things. In essence, it does:
$ /path/to/setup_env.sh .
[build env] $ ant compile
Note that the first command launches and initializes a new shell and expects all subsequent build operations to be performed in that shell.
Now I am trying to replicate the same within Jenkins. How do I run a shell script and then have the subsequent ant build step take place in the same environment?
The 'Execute Shell' built-in as well as the EnvInject plugin didn't help since they discard any changes to the environment before moving to the next build step.
I'd prefer not to modify the ant build file since the same should continue to work in the current internal build system.
This is a "solution" that worked out for us. The key idea is that the setup_env.sh script launches a new shell in which it exports a bunch of environment variables. What we needed was access to those variable definitions. So we did a three part Jenkins Build:
Step 1 - Execute Shell
Use the 'Execute Shell' Jenkins built-in to run our setup_env.sh script. Then feed the newly launched shell a simple Python script that dumps the environment to a file.
/path/to/setup_env.sh . <<< 'python <<SC
print "Exporting env to buildenv.properties file"
import os
f = open("buildenv.properties", "w")
env = os.environ
for k in env:
    f.write("%s=%s\n" % (k, env[k]))
f.close()
print "Done exporting env"
SC'
Step 2 - Inject Environment Variables
Now we use the EnvInject Plugin to inject environment variables from the file dumped in the previous step. The config here is simple: just specify the dumped properties file name as the Properties File Path field value.
Step 3 - Invoke Ant
Here, we kick off the normal ant build. Since the environment now contains all the required definitions, the build completes as normal.
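For what it's worth, in a Pipeline job the same effect can be achieved in a single step, since everything fed to the launched shell runs in one environment. A sketch reusing the question's invocation (the paths are the question's placeholders):
node {
    stage 'Build'
    // pipe the build command into the shell that setup_env.sh launches,
    // so ant runs with the environment the script sets up
    sh 'echo "ant compile" | /path/to/setup_env.sh .'
}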
Try the EnvInject Plugin.