Jenkins pipeline: files present in sh block are not present in later steps

Here is my stage block
stage('Test') {
    steps {
        echo 'Testing ${env.JOB_NAME}:${env.BUILD_ID} on ${env.JENKINS_URL}..'
        sh """
            docker run -v /tmp/work/report:/report ${env.REPO}:${env.BUILD_ID} ./manage.py jenkins --enable-coverage --output-dir=/report
            ls /work/report
            cat /work/report/*.xml
        """
        archiveArtifacts artifacts: '/work/report/*.xml'
        // junit '/work/report/*.xml'
    }
}
The files are present in the 'sh' block, as the output of the ls and cat shows. However, in the next steps, archiveArtifacts and (if I enable it) junit, the files are not found. What am I missing?

You are mounting the local folder /tmp/work/report to /report inside the container, but then you are testing /work/report. Also note that archiveArtifacts and junit resolve their patterns relative to the Jenkins workspace, so files written to a path outside the workspace will not be found.
Also make sure to test it outside the pipeline by doing the docker run manually: when the container exits, anything written outside the mounted volume is gone.
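A minimal sketch of one way to line the paths up, assuming the Docker daemon runs on the same host as the agent so that the workspace path is visible to it (the report/ directory name is illustrative):
stage('Test') {
    steps {
        sh """
            mkdir -p report
            docker run -v ${env.WORKSPACE}/report:/report ${env.REPO}:${env.BUILD_ID} ./manage.py jenkins --enable-coverage --output-dir=/report
        """
        // relative patterns are resolved against the workspace
        archiveArtifacts artifacts: 'report/*.xml'
        junit 'report/*.xml'
    }
}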

Related

how to go inside a specific directory and run commands inside it in a jenkins pipeline

I am trying to run a Gradle command inside a Jenkins pipeline, and for that I need to cd into the directory where the Gradle files are.
I added a cd command inside my pipeline, but that is not working. I did this:
stage('build & SonarQube Scan') {
    withSonarQubeEnv('sonarhost') {
        sh 'cd $WORKSPACE/sonarqube-scanner-gradle/gradle-basic'
        sh 'echo ${PWD}'
        sh 'gradle tasks --all'
        sh 'gradle sonarqube --debug'
    }
}
But the cd is not working. I tried the dir step as suggested in the pipeline docs, but I want to cd inside the $WORKSPACE folder.
How can I fix this?
Each sh step starts a new shell in the workspace, so the cd from the first sh does not carry over to the next one. The dir step is the correct approach, and its path is resolved relative to the workspace, so it should be used like this:
dir('sonarqube-scanner-gradle/gradle-basic') {
    ...
}
Similar to how you have used withSonarQubeEnv.
Alternatively, you can simply chain all the commands in a single sh step:
sh 'cd $WORKSPACE/sonarqube-scanner-gradle/gradle-basic && echo ${PWD} && ...'
This is not recommended, but since everything runs in one shell invocation, the cd does take effect for the commands that follow it.
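For reference, a sketch of the whole stage using dir; the path comes from the question, and I'm assuming the Gradle setup is otherwise as shown:
stage('build & SonarQube Scan') {
    withSonarQubeEnv('sonarhost') {
        // dir() is resolved relative to the current directory, i.e. the workspace
        dir('sonarqube-scanner-gradle/gradle-basic') {
            sh 'pwd'
            sh 'gradle tasks --all'
            sh 'gradle sonarqube --debug'
        }
    }
}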

Docker Multi Stage Build access Test Reports in Jenkins when Tests Fail

I am doing a multi-stage build in Docker to separate the app from testing. At some point in my Dockerfile I run:
RUN pytest --junitxml=test-reports/junit.xml
In my Jenkinsfile, respectively, I do:
def app = docker.build("app:${BUILD_NUMBER}", "--target test .")
app.inside {
    sh "cp -r /app/test-reports test-reports"
}
junit 'test-reports/junit.xml'
Now if my tests fail, the build fails, which is good. But the rest of the stage is not executed, i.e. I don't have access to the test-reports folder. How can I manage that?
I resolved a similar task by using an always block (inside a post section) after the build stage.
Please check if the code below can help.
always {
    script {
        sh '''#!/bin/bash
            docker create -it --name test_report app:${BUILD_NUMBER} /bin/bash
            docker cp test_report:/app/test-reports ./test-reports
            docker rm -f test_report
        '''
    }
    junit 'test-reports/junit.xml'
}
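For orientation, a rough sketch of where such an always block sits in declarative syntax. The stage layout and image tag are taken from the question; note that whether the app:${BUILD_NUMBER} tag still exists after a failed test run depends on how the build is split up:
stage('Build & Test') {
    steps {
        sh 'docker build --target test -t app:${BUILD_NUMBER} .'
    }
    post {
        always {
            sh '''#!/bin/bash
                docker create --name test_report app:${BUILD_NUMBER}
                docker cp test_report:/app/test-reports ./test-reports
                docker rm -f test_report
            '''
            junit 'test-reports/junit.xml'
        }
    }
}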

Copy build artifacts from inside docker to host

This is my jenkinsfile:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo '####################################################'
                echo 'Building Docker container'
                echo '####################################################'
                script {
                    sh 'docker build -t my-gcc:1.0 .'
                }
            }
        }
        stage('Run') {
            steps {
                echo '##########################################################'
                echo 'Running Docker Image'
                echo '##########################################################'
                script {
                    sh 'docker run --privileged -i my-gcc:1.0'
                    sh 'docker cp my-gcc:1.0:/usr/src/myCppProject/build/*.hex .'
                }
            }
        }
        stage('Program') {
            steps {
                echo '#######################################################'
                echo 'Programming target'
                echo '#######################################################'
                script {
                    sh 'openocd -d0 -f board/st_nucleo_f4.cfg -c "init;targets;halt;flash write_image erase Testbench.hex;shutdown"'
                }
            }
        }
    }
}
The docker image is built and then run. After this I would like to extract the hex file from the container to the Jenkins working directory so that I can flash it to the board.
But when I try to copy the file I get this error:
+ docker cp my-gcc:1.0:1.0:/usr/src/myCppProject/build/*.hex .
Error: No such container:path: my-gcc:1.0:1.0:/usr/src/myCppProject/build/*.hex
I tried to access other folders in the container and copy their content, but I always get the same error. It seems that I cannot access any folder or file in the container.
What am I doing wrong?
Regards
Martin
Jenkins has some standard support for Docker; this is described in Using Docker with Pipeline in the Jenkins documentation. In particular, Jenkins knows how to use a Docker image that contains just tools, combined with the project's workspace directory. I'd use that support instead of trying to script docker cp.
That might look roughly like so:
pipeline {
    agent none
    stages {
        stage('Build') {
            // Jenkins will run `docker build` for you
            agent { dockerfile { args '--privileged' } }
            steps {
                // The current working directory is bind-mounted into the container;
                // the image's `ENTRYPOINT`/`CMD` is ignored.
                // Copy the file out of the container:
                sh "cp /usr/src/myCppProject/build/*.hex ."
            }
        }
        stage('Program') {
            agent any // so not in Docker
            steps {
                sh 'openocd -d0 -f board/st_nucleo_f4.cfg -c "init;targets;halt;flash write_image erase Testbench.hex;shutdown"'
            }
        }
    }
}
If you use this approach, also consider whether you should run the main build sequence via Jenkins pipeline steps, via a sh invocation that runs a shell script or a Makefile, or whether a Dockerfile is actually the right tool. It might make sense to build a Docker image out of your customized compiler, but then use the Jenkins pipeline support to build the firmware image for the target board, rather than trying to do it all in a Dockerfile.
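A sketch of that split, assuming a my-gcc:1.0 image that contains only the toolchain and a build that can be driven from the bind-mounted workspace (the make target here is hypothetical):
stage('Build') {
    // toolchain-only image; the sources come from the checked-out workspace
    agent { docker { image 'my-gcc:1.0' } }
    steps {
        // hypothetical build command; the .hex lands directly in the workspace
        sh 'make Testbench.hex'
    }
}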
In the invocation you show, you can't directly docker cp a file out of an image. When you start the container, use docker run --name to give it a name, then docker cp from that container name.
sh 'docker run --name builder ... my-gcc:1.0'
// docker cp does not expand wildcards, so copy a concrete file (or the whole build directory)
sh 'docker cp builder:/usr/src/myCppProject/build/Testbench.hex .'
sh 'docker rm builder'

Run commands inside Docker container without mounting project directory

My Jenkins pipeline uses the docker-workflow plugin. It builds a Docker image and tags it app. The build step fetches some dependencies and bakes them into the image along with my app.
I want to run two commands inside a container based on that image. The commands should be executed in the built environment, with access to the dependencies. I tried using Image.inside, but it seems to fail because inside mounts the project directory over the working directory (?) and so the dependencies aren't available.
docker.image("app").inside {
sh './run prepare-tests'
sh './run tests'
}
I tried using docker.script.withDockerContainer too, but the commands don't seem to run inside the container. The same seems to be true for Image.withRun. At least with that I could specify a command, but it seems I would have to specify both commands in one statement. Also, it seems that withRun doesn't fail the build if the command doesn't exit cleanly.
docker
    .image("app")
    .withRun('', 'bash -c "./app prepare-tests && ./app tests"') { container ->
        sh "exit \$(docker wait ${container.id})"
    }
Is there a way to use Image.inside without mounting the project directory? Or is there a more elegant way of doing this?
The docker DSL, like docker.image().inside() {} etc., mounts the Jenkins job workspace directory into the container and makes it the working directory, which overrides the WORKDIR set in the Dockerfile.
You can verify that from the Jenkins console output.
1) cd into the image's WORKDIR first
docker.image("app").inside {
sh '''
cd <WORKDIR of image specifyed in Dockerfile>
./run prepare-tests
./run tests
'''
}
2) Run the container in sh, rather than via the docker DSL
sh '''
    docker run -i app bash -c "./run prepare-tests && ./run tests"
'''
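On the question's concern about exit codes: the sh step fails the build when its command returns non-zero, and docker run returns the container's exit code, so the second variant will fail the stage if either command fails. With cleanup of the container it might look like:
sh 'docker run --rm app bash -c "./run prepare-tests && ./run tests"'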

How to create a Jenkins input that's non-blocking and based on previous command output

I have two issues that are both part of the same problem. I am running terraform inside a Jenkinsfile; this all happens in a docker container that runs on a specific node. I have a few different environments with the ec2_plugin, labeled 'environment_ec2'. It's done this way since we use ansible, and I want to be able to execute ansible locally in the VPC.
1) How do you create an input and stage that are only executed if a previous command returns a specific output?
2) How can I make this non-blocking?
node('cicd_ec2') {
    stage('Prepare Environment') {
        cleanWs()
        checkout scm
    }
    withAWSParameterStore(credentialsId: 'jenkin_cicd', naming: 'relative', path: '/secrets/cicd/', recursive: true, regionName: 'us-east-1') {
        docker.image('jseiser/jenkins_devops:0.7').inside {
            stage('Configure Git Access') {
                sh 'mkdir -p ~/.ssh'
                sh 'mv config ~/.ssh/config'
                sh 'chmod 600 ~/.ssh/config'
                sh "echo '$BITBUCKET_CLOUD' > ~/.ssh/bitbucket_rsa"
                sh 'chmod 600 ~/.ssh/bitbucket_rsa'
                sh "echo '$CICD_CODE_COMMIT_KEY' > ~/.ssh/codecommit_rsa"
                sh 'chmod 600 ~/.ssh/codecommit_rsa'
                sh "echo '$IDAUTO_CICD_MGMT_PEM' > ~/.ssh/idauto-cicd-mgmt.pem"
                sh 'chmod 600 ~/.ssh/idauto-cicd-mgmt.pem'
                sh 'ssh-keyscan -t rsa bitbucket.org >> ~/.ssh/known_hosts'
                sh 'ssh-keyscan -t rsa git-codecommit.us-east-1.amazonaws.com >> ~/.ssh/known_hosts'
            }
            stage('Terraform') {
                sh './init-ci.sh'
                sh 'terraform validate'
                sh 'terraform plan -detailed-exitcode -out=create.tfplan'
            }
            input 'Deploy stack?'
            stage('Terraform Apply') {
                sh 'terraform apply -no-color create.tfplan'
            }
            stage('Ansible') {
                sh 'ansible-galaxy -vvv install -r requirements.yml'
                sh 'ansible-playbook -i ~/ vpn.yml'
            }
        }
    }
}
I only want to run the input and terraform apply if the result of the command below is 2:
terraform plan -detailed-exitcode
Since this all has to run on an EC2 instance and has to use this container, I am not sure how I can do the input outside of a node, as is usually recommended. If the input sits long enough, the instance may go down, the rest of the code would run on a new instance/workspace, and the information I need from the git repos and the terraform plan would not be present. The git repo that I check out contains the terraform configurations, the ansible configurations, and some SSH configuration so that terraform and ansible are able to pull in their modules/roles from private git repos. The create.tfplan that I would need IF terraform has changes would also need to be passed around.
I'm just really confused how I can get a good input, only ask for that input if I really need to run terraform apply, and how I can make it non-blocking.
I had to adapt this from my work in progress, which is based on a declarative pipeline, but I hope it still mostly works:
def tfPlanExitCode
node {
    stage('Checkout') {
        checkout scm
    }
    stage('Plan') {
        // returnStatus makes sh return the exit code instead of failing the step
        tfPlanExitCode = sh(script: 'terraform plan -out=create.tfplan -detailed-exitcode', returnStatus: true)
        stash 'workspace'
    }
}
// returnStatus yields an int, so compare against 2, not "2"
if (tfPlanExitCode == 2) {
    input('Deploy stack?')
    stage('Apply') {
        node {
            unstash 'workspace'
            sh 'terraform apply -no-color create.tfplan'
        }
    }
}
The building blocks are:
- don't allocate an executor while the input is waiting (possibly for hours)
- stash your workspace contents (you can optionally specify which files to copy) and unstash them later on the agent that continues the build
The visualization might be a bit screwed up when some builds have the Apply stage and some don't. That's why I'm using declarative pipelines, which allow stages to be skipped nicely and explicitly.
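A rough declarative sketch of the same idea, assuming terraform is available on the agents. TF_PLAN_EXIT is a variable name I'm introducing here; beforeInput makes the when condition run before the prompt, and the stage-level input directive pauses before an agent is allocated:
pipeline {
    agent none
    stages {
        stage('Plan') {
            agent any
            steps {
                checkout scm
                script {
                    // -detailed-exitcode: 0 = no changes, 2 = changes present
                    env.TF_PLAN_EXIT = sh(script: 'terraform plan -out=create.tfplan -detailed-exitcode', returnStatus: true)
                }
                stash 'workspace'
            }
        }
        stage('Apply') {
            when {
                beforeInput true
                expression { env.TF_PLAN_EXIT == '2' }
            }
            // the prompt is shown before any executor is taken for this stage
            input { message 'Deploy stack?' }
            agent any
            steps {
                unstash 'workspace'
                sh 'terraform apply -no-color create.tfplan'
            }
        }
    }
}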
