How to start up a server in Jenkins to run acceptance tests in parallel?

As part of the build, these are the stages I have:
1. Compile and package the project
2. Parallel jobs:
   - Run unit tests
   - Start up the server by executing the jar file created in stage 1
   - Wait for the server to be ready, then run acceptance tests against it
The question is: when the acceptance tests are finished, how do I signal the other parallel branch to shut down the server? At the moment the jar execution process keeps running even if I kill the build from Jenkins.
Also, in the event of the build being cancelled manually, how can we capture that event and signal the server to shut itself down using a scripted pipeline/Groovy?
Sample code:
node('jenkins-jenkins-slave') {
    stage('Parallel jobs') {
        parallel(
            'Preparing server for acceptance test': {
                def propertiesString = ''
                def jvmOptions = ''
                def properties = readProperties file: propertiesFile
                properties.each { k, v ->
                    propertiesString += "-D${k}=${v} "
                }
                sh "java ${jvmOptions} -Dloader.path=${loaderPath} ${propertiesString}-jar ${jarFile} --spring.profiles.active=${activeProfiles}"
            },
            'Acceptance tests': {
                def scriptName = 'wait_until_env_ready.sh'
                if (!fileExists(scriptName)) {
                    def scriptContent = libraryResource scriptName
                    writeFile file: scriptName, text: scriptContent
                    sh "chmod +x ${scriptName}"
                }
                sh './wait_until_env_ready.sh http://localhost:8090/manage/health'
                sh 'mvn -B test -Dmaven.repo.local=/root/.m2/repository -DskipTests=true -DskipITs=false'
                // How to signal the other branch to shut down the jar execution?
            }
        )
    }
}

What you are looking for is a post { always } section.
You also need some way to terminate the server; for example, find its PID and kill it within that block.
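For the scripted pipeline in the question, the equivalent of post { always } is a try/finally. A minimal sketch: start the server in the background in the same branch as the tests, record its PID, and kill it in the finally block, which also runs when the build is aborted. The names app.jar, server.log and server.pid are placeholders, not from the question:

node('jenkins-jenkins-slave') {
    try {
        // Start the server in the background and remember its PID
        sh 'nohup java -jar app.jar > server.log 2>&1 & echo $! > server.pid'
        sh './wait_until_env_ready.sh http://localhost:8090/manage/health'
        sh 'mvn -B test -DskipTests=true -DskipITs=false'
    } finally {
        // finally runs on success, failure and manual abort alike,
        // so the server is shut down in every outcome
        sh 'kill $(cat server.pid) || true'
    }
}

Starting the server and the tests in the same branch sidesteps the inter-branch signalling entirely; if you keep the parallel layout instead, the kill still belongs in a try/finally (or post { always }) wrapped around the parallel step.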

Related

Create process dump when Jenkins pipeline step runs into timeout

We run unit tests on Jenkins and one of our tests freezes sometimes.
We have timeouts defined in the Jenkins pipeline and the freeze triggers the timeout and that kills the testing process.
Is there a way (via Jenkins pipelines, maybe via Groovy) to execute a command (e.g. create a process dump of the testing process) as soon as we run into a timeout, but (of course) before the timeout kills the testing process?
You can wrap your test execution in a try-catch and do whatever you need after catching the timeout exception. Here is a sample Pipeline.
pipeline {
    agent any
    stages {
        stage('Hello') {
            steps {
                script {
                    try {
                        timeout(unit: 'SECONDS', time: 5) {
                            echo "Running your Tests here!!!!"
                            sleep 10
                        }
                    } catch (e) {
                        echo "The tests errored out! " + e.getCauses()
                        if (e.getCauses()[0] instanceof org.jenkinsci.plugins.workflow.steps.TimeoutStepExecution$ExceededTimeout) {
                            echo "This is a timeout, do whatever you want..."
                        }
                    }
                }
            }
        }
    }
}
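If the dump has to be captured before the process dies, one untested sketch (assuming a JVM test process and jstack on the PATH; exactly which signal reaches which process first depends on how Jenkins kills the process tree) is to trap the TERM signal inside the sh step itself:

#!/bin/sh
# Illustrative wrapper, not from the original answer:
# run the tests in the background and dump threads when TERM arrives
mvn test &
TEST_PID=$!
trap 'jstack $TEST_PID > thread_dump.txt 2>/dev/null || true' TERM
wait $TEST_PID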

How to run a task if tests fail in Jenkins

I have a site in production. And I have a simple playwright test that browses to the site and does some basic checks to make sure that it's up.
I'd like to have this job running in Jenkins every 5 minutes, and if the tests fail I want to run a script that will restart the production server. If the tests pass, I don't want to do anything.
What's the easiest way of doing this?
I have the MultiJob plugin that I thought I could use, and have the restart triggered on the failed test step, but it doesn't seem to have the ability to trigger specifically on fail.
Something like the following will do the job for you. I'm assuming you have a second job that will take care of the restart.
pipeline {
    agent any
    triggers {
        cron('*/5 * * * *')
    }
    stages {
        stage("Run the Test") {
            steps {
                echo "Running the Test"
                // I'm returning exit code 1 so Jenkins will think this failed
                sh '''
                    echo "RUN SOMETHING"
                    exit 1
                '''
            }
        }
    }
    post {
        success {
            echo "Success: Do nothing"
        }
        failure {
            echo 'I failed :(, Execute restart Job'
            // Executing the restart Job.
            build job: 'RestartJob'
        }
    }
}
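If the restart is just a script reachable from the same agent, a hedged alternative to a second job (restart_prod.sh is a placeholder, not from the question) is to run it directly in the failure block:

post {
    failure {
        // Run the restart script directly instead of triggering another job
        sh './restart_prod.sh'
    }
}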

Jenkins declarative pipeline with docker and git

I am trying to build a pipeline for a Node.js application using Git and Docker. I have written a declarative Jenkinsfile with which everything works smoothly. I have set up SCM polling every two minutes and it gets invoked correctly, but the problem is that while the old pipeline is still running, the new poll gets queued with the message "Waiting for next available executor". I wanted to know if I have done everything correctly and what I am missing.
My complete code can be found here.
I have tried running npm start in the deliver.sh file with & to make it run in the background, and used the input message option in the Jenkinsfile to stop the pipeline from finishing, because otherwise, with only "npm start &" and without "input message", the pipeline reaches its end and the app container that was created gets killed. I am sure this approach is not correct. I then tried npm start without & and without input message; when SCM polling was invoked the pipeline started executing stages, but since the last container is already published on port 3000, it obviously won't publish a new one to 3000, so the pipeline returns an error.
Dockerfile
FROM node:alpine
COPY . .
EXPOSE 3000
Jenkinsfile
pipeline {
    triggers {
        pollSCM 'H/2 * * * *'
    }
    agent {
        dockerfile {
            args '-p 3000:3000'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'npm install'
            }
        }
        stage('Deliver') {
            steps {
                sh './jenkins/scripts/deliver.sh'
                // input message: 'Finished using the web site? (Click "Proceed" to continue)'
                // sh './jenkins/scripts/kill.sh'
            }
        }
    }
}
deliver.sh script
# set -x
# npm start &
npm start
# sleep 1
# Copy the process ID of npm start to a file named .pidfile; this ID will
# be used when the user presses any key to stop the app
# echo $! > .pidfile
# set +x
Any help in this regard would be highly appreciated.
Add
disableConcurrentBuilds()
inside an options section (at the top level of the pipeline block) to prevent two builds from running at the same time.
pipeline {
    triggers {
        pollSCM 'H/2 * * * *'
    }
    agent {
        dockerfile {
            args '-p 3000:3000'
        }
    }
    options {
        disableConcurrentBuilds()
    }
    stages {
        stage('Build') {
            steps {
                sh 'npm install'
            }
        }
        stage('Deliver') {
            steps {
                sh './jenkins/scripts/deliver.sh'
                // input message: 'Finished using the web site? (Click "Proceed" to continue)'
                // sh './jenkins/scripts/kill.sh'
            }
        }
    }
}
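With disableConcurrentBuilds(), a poll that fires while a build is still running is held as a single pending build instead of starting alongside it, so the running container keeps port 3000 to itself until the pipeline finishes.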

Re run Jenkins job on a different slave upon failure

I have a job which is compatible with two slaves (configured in different locations). I often experience connectivity issues due to VPN session timeouts, so I am trying to figure out a way to automatically run the job on slave 2 if it fails on slave 1. Please let me know if there is a plugin or any other way to accomplish this.
I think with a freestyle project it would be hard to implement your requirement.
Pipeline script
Check this if you don't know this plugin: How to create a pipeline script
According to this answer, the Pipeline plugin allows you to write jobs that run on multiple slave nodes using labels:
node('linux') {
    git url: 'https://github.com/jglick/simple-maven-project-with-tests.git'
    sh "make"
    step([$class: 'ArtifactArchiver', artifacts: 'build/program', fingerprint: true])
}
node('windows && amd64') {
    git url: 'https://github.com/jglick/simple-maven-project-with-tests.git'
    sh "mytest.exe"
}
I created this simple pipeline script and it works (this example does not use a label, but you could add one):
def exitStatusInMasterNode = 'success'
node {
    echo 'Hello World in node master'
    echo 'status: ' + exitStatusInMasterNode
    exitStatusInMasterNode = 'failure'
}
node {
    echo 'Hello World in node slave'
    echo 'master status: ' + exitStatusInMasterNode
}
The exitStatusInMasterNode variable can be shared across nodes.
So if your slave1 fails, you can set exitStatusInMasterNode to failure, and at the start of your slave2 block you can check whether exitStatusInMasterNode is failure in order to run the same build on that slave.
Example:
def exitStatusInMasterNode = 'none'
node {
    try {
        echo 'Hello World in Slave-1'
        throw new Exception('Simulating an error')
        exitStatusInMasterNode = 'success'
    } catch (err) {
        echo err.message
        exitStatusInMasterNode = 'failure'
    }
}
node {
    if (exitStatusInMasterNode == 'success') {
        echo 'Job in slave 1 was success. Slave-2 will not be executed'
        currentBuild.result = 'SUCCESS'
        return
    }
    echo 'Re launch the build in Slave-2 due to failure on Slave-1'
    // exec simple tasks or stages
}
Log of the simulated error in slave1:
Running on Jenkins in .../multiple_nodes
Hello World in Slave-1
Simulating an error
Running on Jenkins in .../multiple_nodes
Re launch the build in Slave-2 due to failure on Slave-1
Finished: SUCCESS
Log when there is no error in slave1 (comment out the line: throw new Exception):
Running on Jenkins in .../multiple_nodes
Hello World in Slave-1
Running on Jenkins in .../multiple_nodes
Job in slave 1 was success. Slave-2 will not be executed
Finished: SUCCESS
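A hedged variant of the same pattern with explicit labels (assuming agents labelled slave1 and slave2 exist, and sh 'make test' stands in for your real build steps):

def firstTrySucceeded = true
node('slave1') {
    try {
        sh 'make test' // your real build steps
    } catch (err) {
        echo "slave1 failed: ${err.message}"
        firstTrySucceeded = false
    }
}
if (!firstTrySucceeded) {
    // Repeat the same steps on the second agent only if the first failed
    node('slave2') {
        sh 'make test'
    }
}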

Pass variables between Jenkins stages

I want to pass a variable which I read in stage A to stage B somehow. I see in some examples that people write it to a file, but I guess that is not really a nice solution. I tried writing it to an environment variable, but I wasn't really successful with that. How can I set it up properly?
To get it working I tried a lot of things, and read that I should use """ instead of ''' to start a shell, and escape variables as \${foo}.
Below is what I have as a pipeline:
#!/usr/bin/env groovy
pipeline {
    agent { node { label 'php71' } }
    environment {
        packageName = 'my-package'
        packageVersion = ''
        groupId = 'vznl'
        nexus_endpoint = 'http://nexus.devtools.io'
        nexus_username = 'jenkins'
        nexus_password = 'J3nkins'
    }
    stages {
        // Package dependencies
        stage('Install dependencies') {
            steps {
                sh '''
                    echo Skip composer installation
                    #composer install --prefer-dist --optimize-autoloader --no-interaction
                '''
            }
        }
        // Unit tests
        stage('Unit Tests') {
            steps {
                sh '''
                    echo Running PHP code coverage tests...
                    #composer test
                '''
            }
        }
        // Create artifact
        stage('Package') {
            steps {
                echo 'Create package refs'
                sh """
                    mkdir -p ./build/zpk
                    VERSIONTAG=\$(grep 'version' composer.json)
                    REGEX='"version": "([0-9]+.[0-9]+.[0-9]+)"'
                    if [[ \${VERSIONTAG} =~ \${REGEX} ]]
                    then
                        env.packageVersion=\${BASH_REMATCH[1]}
                        /usr/bin/zs-client packZpk --folder=. --destination=./build/zpk --name=${env.packageName}-${env.packageVersion}.zpk --version=${env.packageVersion}
                    else
                        echo "No version found!"
                        exit 1
                    fi
                """
            }
        }
        // Publish ZPK package to Nexus
        stage('Publish packages') {
            steps {
                echo "Publish ZPK Package"
                sh "curl -u ${env.nexus_username}:${env.nexus_password} --upload-file ./build/zpk/${env.packageName}-${env.packageVersion}.zpk ${env.nexus_endpoint}/repository/zpk-packages/${groupId}/${env.packageName}-${env.packageVersion}.zpk"
                archive includes: './build/**/*.{zpk,rpm,deb}'
            }
        }
    }
}
As you can see, the packageVersion that I read in the Package stage needs to be used in the Publish stage as well.
General tips on the pipeline are of course always welcome as well.
A problem in your code is that you are assigning the version to an environment variable within the sh step. That step executes in its own isolated process, inheriting the parent process's environment variables.
However, the only ways of passing data back to the parent are STDOUT/STDERR or the exit code. As you want a string value, it is best to echo the version from the sh step and assign it to a variable within the script context.
If you reuse the node, the script context persists and the variable will be available in the subsequent stage. A working example is below. Note that any attempt to put this within a parallel block is liable to fail, as the version variable can be written to by multiple processes.
#!/usr/bin/env groovy
pipeline {
    environment {
        AGENT_INFO = ''
    }
    agent {
        docker {
            image 'alpine'
            reuseNode true
        }
    }
    stages {
        stage('Collect agent info') {
            steps {
                echo "Current agent info: ${env.AGENT_INFO}"
                script {
                    def agentInfo = sh script: 'uname -a', returnStdout: true
                    println "Agent info within script: ${agentInfo}"
                    AGENT_INFO = agentInfo.replace("\n", "")
                    env.AGENT_INFO = AGENT_INFO
                }
            }
        }
        stage("Print agent info") {
            steps {
                script {
                    echo "Collected agent info: ${AGENT_INFO}"
                    echo "Environment agent info: ${env.AGENT_INFO}"
                }
            }
        }
    }
}
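Assigning to env.AGENT_INFO inside the script block works because values written to env during a run remain visible to all later stages of that run.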
Another option, which doesn't involve script and stays purely declarative, is to stash the values in a small temporary environment file.
You can then use this stash (like a temporary cache that only lives for the run) even if the workload is sprayed out across parallel or distributed nodes.
Something like:
pipeline {
    agent any
    stages {
        stage('first stage') {
            steps {
                // Write out any environment variables you like to a temporary file
                sh 'echo export FOO=baz > myenv'
                // Stash it away for later use
                stash 'myenv'
            }
        }
        stage('later stage') {
            steps {
                // Unstash the temporary file and apply it
                unstash 'myenv'
                // Use the unstashed vars ("." instead of "source", since the
                // sh step may run a shell that lacks the bash-only "source")
                sh '. ./myenv && echo $FOO'
            }
        }
    }
}
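Applied back to the original question, a minimal sketch of capturing packageVersion on the Groovy side with returnStdout (assuming GNU grep and a composer.json that contains a plain "version" field) might look like:

stage('Package') {
    steps {
        script {
            // Extract the semantic version in the shell, then capture it
            // in the script context via returnStdout
            env.packageVersion = sh(
                script: 'grep -oP \'"version":\\s*"\\K[0-9]+\\.[0-9]+\\.[0-9]+\' composer.json',
                returnStdout: true
            ).trim()
        }
        echo "Packaging ${env.packageName}-${env.packageVersion}.zpk"
    }
}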
