Jenkins running parallel scripts

I am new to Jenkins and need some help.
I have 4 shell scripts: test1.sh, test2.sh, test3.sh and test4.sh.
I want test2.sh to run only if test1.sh runs successfully, and test4.sh to run only if test3.sh runs successfully.
I also want test1.sh and test3.sh to run in parallel.
How can I achieve this in Jenkins?
I am using "Execute shell script on remote host using ssh" and "Conditional steps (multiple)" (just exploring). I have also set up SSH keys to communicate with the remote server.
An illustration using a screenshot or some other means would be helpful.
Thank you!

First, ensure that test1.sh and test3.sh return the standard success code (0) when they succeed. Then the simple way, which works in any shell, not just Jenkins, is this command line:
((test1.sh && test2.sh) &) ; ((test3.sh && test4.sh) &)
Each pair of parentheses forms a subshell, the double ampersand means "run the second only if the first succeeds", and the single ampersand means "run in the background". So you get the equivalent of two backgrounded shells, each running two scripts, and each exiting early if its first script does not return 0.
The Jenkins-specific solution is to have a node with two (or more) executors. Create two jobs and tie both to that node. Each job runs a single shell step: either test1.sh && test2.sh or test3.sh && test4.sh.

In Jenkins declarative or scripted pipelines you can create parallel executions for each shell script, or for any other command or program you like.
stage('run-parallel-branches') {
    steps {
        parallel(
            a: {
                echo "This is branch a"
            },
            b: {
                echo "This is branch b"
            }
        )
    }
}
Reference: https://www.jenkins.io/blog/2017/09/25/declarative-1/
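Applied to the original question, a minimal declarative sketch could look like this (assuming the four scripts sit in the job's workspace; with the remote-host setup, each chain would be wrapped in an ssh invocation instead):
stage('run-test-chains') {
    parallel {
        stage('chain-1-2') {
            steps {
                // test2.sh runs only if test1.sh exits with 0
                sh './test1.sh && ./test2.sh'
            }
        }
        stage('chain-3-4') {
            steps {
                sh './test3.sh && ./test4.sh'
            }
        }
    }
}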

Related

How to transform a 'bat' directive from Jenkinsfile for execution in the Script Console?

A 'bat' script from my Jenkinsfile is failing for no apparent reason. I already tested it by running it directly on the agent machine, so now I want to run it under Jenkins manually, through the Script Console. How do I go about transforming this line into the exactly equivalent console command?
bat 'set \"ANDROID_HOME=%USERPROFILE%\\AppData\\Local\\Android\\Sdk\" && gradlew.bat assembleDebug'
I tried this with no luck; I probably didn't escape something correctly, perhaps too many inner quotes for the cmd /c command?
println "cmd \\c \"set \"ANDROID_HOME=%USERPROFILE%\\AppData\\Local\\Android\\Sdk\" && gradlew.bat assembleDebug\" ".execute().text
cmd \c should be cmd /c, and you can also return the command's output like so:
"cmd /c \"set \"ANDROID_HOME=%USERPROFILE%\\AppData\\Local\\Android\\Sdk\" && gradlew.bat assembleDebug\"".execute().text
Anyway, the Script Console is only the first step in tracking down run-time issues in pipeline steps, as it only tells you whether the command itself works.
Next you want to isolate your problem in a separate pipeline, so you can work out CPS and sandboxing problems. Fun stuff.
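For example, a throwaway pipeline to isolate that one step could look like this (a sketch; the 'windows' agent label is an assumption):
pipeline {
    agent { label 'windows' } // hypothetical label for the Windows agent
    stages {
        stage('isolate-bat') {
            steps {
                bat 'set "ANDROID_HOME=%USERPROFILE%\\AppData\\Local\\Android\\Sdk" && gradlew.bat assembleDebug'
            }
        }
    }
}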

Jenkinsfile Pipeline dynamic environment modification at runtime

I need to get GitVersion.exe variables in my Jenkins pipeline.
The GitVersion documentation gives a hint on how to do that: essentially, call gitversion /output buildserver.
This call adds the variables only within the current step, and they are lost once the step completes. I can show the call executes by combining it with a set command in the same bat execution; the second set call, in its own step, shows the variables are gone from the environment.
bat 'nuget install GitVersion.CommandLine -OutputDirectory c:/packages -Version 3.6.5'
bat 'c:/packages/GitVersion.CommandLine.3.6.5/tools/GitVersion.exe /output buildserver && set'
bat 'set'
The GitVersion documentation is aware of that and suggests using EnvInject.
Installing the plugin and executing the same pipeline did not change the result. I have read that the plugin is not made for pipelines, so that may have something to do with it.
Pipelines support a syntax for environment.
Following that syntax I can set static variables at the top of my pipeline like this:
environment {
    ASuperVariable = 'MySuperVariable'
}
What I need is to combine those, so that I can add run-time variables to the Jenkinsfile pipeline.
environment {
    bat 'gitversion /output buildserver'
}
Now obviously the above call is not even syntactically correct. Is there a way to mark a section so that the contained environment changes are available to other steps?
EDIT:
This is still unsolved. At the moment I have to create a batch script and pass the tool into it as an argument. Inside the batch file I can call the tool to add to the environment of the batch script and use that while the batch is running. A multi-line bat step in the Jenkinsfile could be a solution if the process remains the same across all the lines.
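For illustration, a single multi-line bat step runs as one cmd session, so a variable set on one line is still visible on later lines of the same step (a sketch with a placeholder variable):
bat '''
    set ASuperVariable=MySuperVariable
    rem still the same cmd session, so the variable set above is visible here
    echo %ASuperVariable%
'''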
I am not sure whether you would be able to use a scripted pipeline, or at least a script block inside a declarative one. It would be quite easy doing so:
withEnv(['ASuperVariable=MySuperVariable']) {
    echo env.ASuperVariable
}
Or when calling a Windows cmd script:
node('win') {
    withEnv(['ASuperVariable=MySuperVariable']) {
        bat 'echo %ASuperVariable%'
    }
}
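Building on that, a run-time value can be captured in a script block and then fed into withEnv. A sketch, assuming the GitVersion path from the question and its /showvariable flag:
node('win') {
    // the leading '@' suppresses cmd's command echo so only the value is captured
    def semVer = bat(
        script: '@c:/packages/GitVersion.CommandLine.3.6.5/tools/GitVersion.exe /showvariable SemVer',
        returnStdout: true
    ).trim()
    withEnv(["GITVERSION_SEMVER=${semVer}"]) {
        bat 'echo %GITVERSION_SEMVER%'
    }
}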

Jenkins pipeline: ssh to a server gets stuck on job

I need to ssh to a server from a simple Jenkins pipeline and do a deploy, which is simply moving to a directory and doing a git fetch and some other commands (npm install among others). The thing is that when the Jenkins job opens the ssh connection to the remote server, it connects fine but then gets stuck and I have to stop it. I have now reduced the script to simply ssh to the server and run pwd, but it still connects and then gets stuck until I abort. What am I missing? Here is the simple pipeline script; the output is in the screenshot.
pipeline {
    agent any
    stages {
        stage('Connect to server') {
            steps {
                sh "ssh -t -t jenkins@10.x.x.xx"
                sh "pwd"
            }
        }
        stage('branch status') {
            steps {
                sh "git status"
            }
        }
    }
}
Jenkins executes each sh step as a separate shell script: the content is written to a temporary file on the Jenkins node and only then executed. Each command runs in a separate session and is not aware of the previous one, so neither the ssh session nor changes to environment variables persist between the two steps.
More importantly, though, you are forcing pseudo-terminal allocation with the -t flag. This is pretty much the opposite of what you want to achieve, i.e. running shell commands non-interactively. Simply
sh "ssh jenkins@10.x.x.xx pwd"
is enough for your example to work. Placing the commands on separate lines would not work in a regular shell script either, regardless of Jenkins. However, you still need the private key available on the node, otherwise the job will hang waiting for you to enter a password interactively. Normally you will want to use the SSH Agent Plugin to provide the private key at runtime.
script {
    sshagent(["your-ssh-credentials"]) {
        sh "..."
    }
}
For running longer command sequences, see What is the cleanest way to ssh and run multiple commands in Bash?
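Putting that together for the deploy described in the question, the whole remote sequence can go into a single ssh call (a sketch; the credentials ID and the remote path are placeholders):
script {
    sshagent(['your-ssh-credentials']) {
        // one remote session runs the entire deploy; && stops at the first failure
        sh "ssh -o StrictHostKeyChecking=no jenkins@10.x.x.xx 'cd /var/www/app && git fetch && npm install'"
    }
}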

How to run a docker-compose instance in a Jenkins pipeline

I've set up a home-based CI server for working with a personal project. Below you can see what happens for the branch "staging". It works fine, however the problems with such a pipeline config are:
1) The only way to stop the instance seems to be to abort the build in Jenkins, which leads to exit code 143 and the build being marked red instead of green.
2) If the machine reboots, I have to trigger the build manually.
3) I suppose there should be a better way of handling this?
Thanks
stage('Staging') {
    when {
        branch 'staging'
    }
    environment {
        NODE_ENV = 'production'
    }
    steps {
        sh 'docker-compose -f docker-compose/staging.yml build'
        sh 'docker-compose -f docker-compose/staging.yml up --abort-on-container-exit'
    }
    post {
        always {
            sh 'docker-compose -f docker-compose/staging.yml rm -f -s'
            sh 'docker-compose -f docker-compose/staging.yml down --rmi local --remove-orphans'
        }
    }
}
So, what's the goal here? Are you trying to deploy to staging? If so, what do you mean by that? If Jenkins is to launch a long-running process (say, a docker container running a webserver), then the shell command line must be able to start it and then have its exit status tell the Jenkins pipeline whether the start was successful.
One option is to wrap the docker-compose call in a script that executes it, checks the result, and exits with the appropriate exit code. Another is to use yet another automation tool to help (e.g. ansible).
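As an illustration of the wrapper idea, starting compose detached lets the step return a meaningful exit status instead of blocking the build (a sketch; the health-check URL is hypothetical):
stage('Staging') {
    steps {
        sh 'docker-compose -f docker-compose/staging.yml up -d --build'
        // naive readiness check; curl -f makes the step fail on HTTP errors
        sh 'sleep 5 && curl -fsS http://localhost:8080/ > /dev/null'
    }
}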
The first question remains: what are you trying to get Jenkins to do, and how would that work on the command line? If you can model the command line, then you can encapsulate it in a script file and have Jenkins start it.
Jenkins pipeline code looks like Groovy and mostly behaves like Groovy. This can lead us to believe that adding complex logic to the pipeline is a good idea, but that turns Jenkins into our IDE, which is hard to debug and a trap into which I've fallen several times.
A somewhat easier approach is to have some other tool that lets you test easily on the command line, and then have Jenkins build the environment in which to run that command-line process. Jenkins handles what it is good at:
scheduling jobs
determining on which nodes jobs run
running steps in parallel
making the output pretty or easily understood by us carbon-based life forms
I am using parallel stages.
Here is a minimal example:
pipeline {
    agent any
    options {
        parallelsAlwaysFailFast() // https://stackoverflow.com/q/54698697/4480139
    }
    stages {
        stage('Parallel') {
            parallel {
                stage('docker-compose up') {
                    steps {
                        sh 'docker-compose up'
                    }
                }
                stage('test') {
                    steps {
                        sh 'sleep 10'
                        sh 'docker-compose down --remove-orphans'
                    }
                }
            }
        }
    }
    post {
        always {
            sh 'docker-compose down --remove-orphans'
        }
    }
}

Jenkins Job fails when pytest test fails

I just wanted to explore pytest and integrate it into Jenkins. My sample pytest test cases are:
def a(x):
    return x + 1

def test_answer():
    assert a(2) == 3

def test_answer2():
    assert a(0) == 2
I then generated a standalone pytest script, which I run in Jenkins, generating an XML file to be parsed for results.
As test_answer2 fails, the Jenkins job also fails. I'm assuming this is because the exit code returned is non-zero. How would I get around this, i.e. have the Jenkins job not fail even if one or more tests do fail? Thanks
If you are calling the test execution from a batch file, a shell script, or directly via command execution in Jenkins, you can do the following:
Windows:
<your test execution calls>
exit 0
Linux:
set +e
<your test execution calls>
set -e
This will ignore any error raised within the scripts, and Jenkins will show the status as successful.
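If the calls live in a pipeline sh step rather than a standalone script, the same trick can be applied inline (a sketch; the pytest invocation is an example):
sh '''
    set +e
    python -m pytest
    set -e
'''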
In addition to the already posted answers: you can also mark your test as xfail (expected to fail), or skip it entirely, for example:
@pytest.mark.skip(reason="no way of currently testing this")
def test_the_unknown():
    ...
More about skipping can be found in the pytest documentation.
On the Jenkins side you can also manipulate the state of your build via a simple try/catch statement:
try {
    bat "python -m pytest ..."
} catch (pytestError) {
    // rewrite the state of your build as you want: SUCCESS or FAILED
    // currentBuild.result = "FAILED"
    currentBuild.result = "SUCCESS" // your case
    println pytestError
}
But be aware that this marks the whole build as a success every time for that pytest step. Best practice is simply to skip tests via @pytest.mark.skip as described above.
The suggestion above (wrapping the calls in set +e / set -e) did NOT work for us.
We use Jenkins running on a Windows system, so our tests are listed in the Jenkins "Execute Windows batch command" section.
I was able to solve this by separating the tests that might have failures with a single & (rather than &&). For example:
"C:\Program Files\Git\bin\sh.exe" -c python -m venv env && pip3 install -r requirements.txt && py.test --tap-files test_that_may_fail.py & py.test --tap-files next_test.py & py.test
Since we use pytest, any failures are flagged in Python with an assert. Using && would cause the Jenkins job to abort and not run the other tests.
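A pipeline-level alternative worth mentioning: let pytest write a JUnit XML report and feed it to the junit step, so failing tests mark the build UNSTABLE instead of FAILED. A sketch, assuming the JUnit plugin is installed:
pipeline {
    agent any
    stages {
        stage('test') {
            steps {
                script {
                    // returnStatus: true keeps a non-zero pytest exit code from failing the step
                    bat(script: 'python -m pytest --junitxml=results.xml', returnStatus: true)
                }
            }
        }
    }
    post {
        always {
            junit 'results.xml' // marks the build UNSTABLE when tests failed
        }
    }
}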
