Inside a Jenkins stage's steps, I am trying to build an image, run the container with a volume, and then stash a file so that I can unstash it later.
But unfortunately it neither creates the volume nor stashes the file.
Here is the Jenkins code:
stage('Android') {
agent {
label buildLabel()
}
steps {
checkout scm
sh '''
mkdir -p `pwd`/build_target
docker build -t android_build -f docker/Dockerfile.android .
docker run --rm -v `pwd`/build_target:/home/gradle/reactapp/android/app/build/outputs/apk/ android_build
ls -la `pwd`/build_target/*
'''
stash includes: 'build_target/app-release.apk', name: 'apk'
androidApkUpload apkFilesPattern: '**/app-release.apk', googleCredentialsId: 'jenkins_apk_upload', trackName: 'internal'
}
}
My solution would be to configure an environment variable under Global properties on http://jenkins-server/configure.
In the build script, I can use that variable to get the shared path on the host.
On all agent hosts, I mount the same NFS path to it:
mount -t nfs 10.6.188.1:/root /root/pacotest1
on every node
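A sketch of how the build stage could then drop the APK onto the shared path instead of relying on stash (SHARED_BUILD_DIR is a hypothetical global property pointing at the NFS mount, e.g. /root/pacotest1; BUILD_TAG is the standard Jenkins variable):
sh '''
mkdir -p "$SHARED_BUILD_DIR/$BUILD_TAG"
docker run --rm -v "$SHARED_BUILD_DIR/$BUILD_TAG":/home/gradle/reactapp/android/app/build/outputs/apk/ android_build
'''
Any later stage, on any agent, then finds the APK under $SHARED_BUILD_DIR/$BUILD_TAG.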
I'm building a Jenkins pipeline. I have a builder image in my repo, and I've uploaded a secret file that my build job needs to the Jenkins credentials as a Secret file. I need to copy this file into the working directory of a docker run command that runs a command on the builder image.
I'm using this to retrieve the file as an env variable:
withCredentials([
    file(credentialsId: 'keystore', variable: 'KEYSTORE')
]) {
    sh "docker run parameters ${image} -e ${KEYSTORE} command..."
}
Any ideas on how I can make that file available inside the container when I run the docker image?
If you want the credentials file to be available inside a docker container, you'll also need to volume-mount it (note that the options must come before the image name, and -e expects a NAME=value pair):
docker run parameters -e KEYSTORE=${KEYSTORE} -v ${KEYSTORE}:${KEYSTORE}:ro ${image}
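In pipeline context that might look roughly like this (a sketch; image and the trailing command are placeholders from the question):
withCredentials([file(credentialsId: 'keystore', variable: 'KEYSTORE')]) {
    // $KEYSTORE is the host-side temp path of the secret file; let the shell
    // expand it so the secret path is not interpolated into the Groovy string.
    sh "docker run -v \$KEYSTORE:\$KEYSTORE:ro -e KEYSTORE=\$KEYSTORE ${image} command..."
}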
In case you use the built-in Docker support, Jenkins will take care of that:
pipeline {
agent {
dockerfile {
dir 'build'
}
}
stages {
stage('Build') {
steps {
withCredentials([file(credentialsId: 'keystore', variable: 'KEYSTORE')]) {
sh 'ls -l'
}
}
}
}
}
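This works because the Docker agent bind-mounts the job workspace (and, if I remember correctly, its workspace@tmp sibling, which is where the bound secret file is written) into the container, so the path in $KEYSTORE also exists inside it.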
This is my Jenkinsfile:
pipeline {
agent any
stages {
stage('Build') {
steps {
echo '####################################################'
echo 'Building Docker container'
echo '####################################################'
script {
sh 'docker build -t my-gcc:1.0 .'
}
}
}
stage('Run') {
steps {
echo '##########################################################'
echo 'Running Docker Image'
echo '##########################################################'
script {
sh 'docker run --privileged -i my-gcc:1.0'
sh 'docker cp my-gcc:1.0:/usr/src/myCppProject/build/*.hex .'
}
}
}
stage('Program') {
steps {
echo '#######################################################'
echo 'Programming target'
echo '#######################################################'
script {
sh 'openocd -d0 -f board/st_nucleo_f4.cfg -c "init;targets;halt;flash write_image erase Testbench.hex;shutdown"'
}
}
}
}
}
The docker image is built and then run; after this I would like to extract the hex file from the container into the Jenkins working directory so that I can flash it to the board.
But when I try to copy the file I get this error:
+ docker cp my-gcc:1.0:/usr/src/myCppProject/build/*.hex .
Error: No such container:path: my-gcc:1.0:/usr/src/myCppProject/build/*.hex
I tried to access other folders in the container and copy their content, but I always get the same error. It seems that I cannot access any folder or file in the container.
What am I doing wrong?
Regards
Martin
Jenkins has some standard support for Docker; this is described in Using Docker with Pipeline in the Jenkins documentation. In particular, Jenkins knows how to use a Docker image that contains just tools, combined with the project's workspace directory. I'd use that support instead of trying to script docker cp.
That might look roughly like so:
pipeline {
agent none
stages {
stage('Build') {
// Jenkins will run `docker build` for you
agent { dockerfile { args '--privileged' } }
steps {
// The current working directory is bind-mounted into the container;
// the image's `ENTRYPOINT`/`CMD` is ignored.
// Copy the file out of the container:
sh "cp /usr/src/myCppProject/build/*.hex ."
}
}
stage('Program') {
agent any // so not in Docker
steps {
sh 'openocd -d0 -f board/st_nucleo_f4.cfg -c "init;targets;halt;flash write_image erase Testbench.hex;shutdown"'
}
}
}
}
If you use this approach, also consider whether you should run the main build sequence via Jenkins pipeline steps, or a sh invocation that runs a shell script, or a Makefile, or if a Dockerfile is actually right. It might make sense to build a Docker image out of your customized compiler, but then use the Jenkins pipeline support to build the image for the target board rather than trying to do it all in a Dockerfile.
In the invocation you show, you can't directly docker cp a file out of an image: docker cp needs a (running or stopped) container, and my-gcc:1.0 is an image name. When you start the container, use docker run --name to give it a name, then docker cp from that container name (and note that docker cp does not expand shell wildcards, so name the file explicitly).
sh 'docker run --name builder ... my-gcc:1.0'
sh 'docker cp builder:/usr/src/myCppProject/build/Testbench.hex .'
sh 'docker rm builder'
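If the image's build already produced the file and you don't need to run anything, a variant of the same idea is docker create, which sets up the container's filesystem without starting it (a sketch with the same names):
sh 'docker create --name builder my-gcc:1.0'
sh 'docker cp builder:/usr/src/myCppProject/build/Testbench.hex .'
sh 'docker rm builder'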
I have 3 different Docker images that I need to build from a Jenkinsfile: Wildfly, Postgres, and Virtuoso, each with its own Dockerfile. As of now, I am using the below command to build these images.
The directory structure, with Docker as the root directory, is:
Docker -> build -> 1. wildfly 2. postgres 3. virtuoso
In my Jenkinsfile I have the below command to build an image:
stage('Building test images')
{
sh 'docker build -t virtuoso -f $WORKSPACE/build/virtuoso/Dockerfile .'
}
But I am getting the error below:
Step 7/16 : COPY ./install $VIRT_HOME/install
COPY failed: stat /var/lib/docker/tmp/docker-builder636357036/install: no such file or directory
[Pipeline] }
For reference below is my dockerfile:
FROM virtuoso:latest
ENV var1 /opt/virtuoso-opensource
ENV VIRT_db /opt/virtuoso-opensource/var/lib/virtuoso/db
ENV RUN_CONFIG=/opt/virtuoso-opensource/install/config
RUN export PATH=$PATH:/opt/virtuoso-opensource/bin
RUN mkdir $var1/install
COPY ./install $var1/install
WORKDIR $VIRT_db
CMD ["/opt/virtuoso-opensource/bin/init.sh"]
The workspace is /home/jenkins/Docker, and my guess is that I am running the docker build command from the $WORKSPACE directory while it should run from the virtuoso directory.
My question is: how do I build the images from the Jenkinsfile?
Thanks in advance.
The easiest way to solve this is to change into the proper folder in the script before executing the docker build command, e.g.:
stage('Building test images') {
steps {
sh '''
cd $WORKSPACE/build/virtuoso
docker build -t virtuoso .
'''
}
}
Below is the answer:
stage('Build images'){
echo "workspace directory is ${workspace}"
dir ("$workspace/build/virtuoso")
{
sh 'docker build -t virtuoso -f $WORKSPACE/build/virtuoso/Dockerfile .'
}
dir ("$workspace/build/wildfly")
{
sh 'docker build -t wildfly -f $WORKSPACE/build/wildfly/Dockerfile .'
}
dir ("$workspace/build/postgres")
{
sh 'docker build -t postgres -f $WORKSPACE/build/postgres/Dockerfile .'
}
}
Thanks for helping me out.
My Jenkins pipeline uses the docker-workflow plugin. It builds a Docker image and tags it app. The build step fetches some dependencies and bakes them into the image along with my app.
I want to run two commands inside a container based on that image. The commands should be executed in the built environment, with access to the dependencies. I tried using Image.inside, but it seems to fail because inside mounts the project directory over the image's working directory, so the dependencies aren't available.
docker.image("app").inside {
sh './run prepare-tests'
sh './run tests'
}
I tried using docker.script.withDockerContainer too, but the commands don't seem to run inside the container. The same seems to be true for Image.withRun. At least with that I could specify a command, but it seems that I'd have to specify both commands in one statement. Also, it seems that withRun doesn't fail the build if the command doesn't exit cleanly.
docker.image("app").withRun('', 'bash -c "./app prepare-tests && ./app tests"') { container ->
sh "exit \$(docker wait ${container.id})"
}
Is there a way to use Image.inside without mounting the project directory? Or is there a more elegant way of doing this?
The docker DSL (docker.image().inside() {} etc.) mounts the Jenkins job workspace directory into the container and makes it the working directory, which overrides the WORKDIR set in the Dockerfile.
You can verify that in the Jenkins console output.
1) cd into the image's WORKDIR first
docker.image("app").inside {
sh '''
cd <WORKDIR of the image as specified in the Dockerfile>
./run prepare-tests
./run tests
'''
}
2) Run the container via sh, rather than via the docker DSL
sh '''
docker run -i app bash -c "./run prepare-tests && ./run tests"
'''
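Note that a plain docker run like this does not mount the Jenkins workspace at all, so the image's own WORKDIR and files stay intact; if the commands also need workspace files, add a bind mount explicitly (a sketch, with /workspace as an assumed target path):
sh '''
docker run -i -v "$WORKSPACE":/workspace app bash -c "./run prepare-tests && ./run tests"
'''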
I am running Docker in Docker (specifically, to run Jenkins, which then runs Docker builder containers to build project images, and then runs these together with the test containers).
This is how the Jenkins image is built and started:
docker build --tag bb/ci-jenkins .
mkdir $PWD/volumes/
docker run -d --network=host \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /usr/bin/docker:/usr/bin/docker \
-v $PWD/volumes/jenkins_home:/var/jenkins_home \
--name ci-jenkins bb/ci-jenkins
Jenkins works fine. But then there is a Jenkinsfile-based job, which runs this:
docker run -i --rm -v /var/jenkins_home/workspace/forkMV_jenkins-VOLTRON-3057-KQXKVJNXOU4DGSUG3P27IR3QEDHJ6K7HPDEZYN7W6HCOTCH3QO3Q:/tmp/build collab/collab-services-api-mvn-builder:2a074614 mvn -B -T 2C install
And this ends up with an error:
The goal you specified requires a project to execute but there is no POM in this directory (/tmp/build).
When I docker exec -it <container> sh into the container, /tmp/build is empty. But when I am in the Jenkins container, the path /var/jenkins_home/...QO3Q/ exists and contains the workspace with all the files checked out and prepared.
So I wonder: how can Docker happily mount the volume and yet it ends up empty?
What's even more confusing, this setup works for my colleague on a Mac.
I am on Linux, Ubuntu 17.10, latest Docker.
After some research, calming down, and thinking, I realized that Docker-in-Docker is not really so much "-in-" as it is "Docker-next-to-Docker".
The trick that makes a container able to run another container is sharing /var/run/docker.sock through a volume: -v /var/run/docker.sock:/var/run/docker.sock
The docker client in the container then actually calls the Docker daemon on the host.
The volume source path (left of the :) does not refer to the middle container, but to the host filesystem!
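A hypothetical illustration of the trap (the paths are made up):
# Issued from INSIDE the Jenkins container, but resolved by the HOST daemon:
docker run --rm -v /var/jenkins_home/workspace/job:/tmp/build alpine ls /tmp/build
# -> empty, unless /var/jenkins_home/workspace/job also exists on the host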
After realizing that, the fix is to make the path to the Jenkins workspace directory the same in the host filesystem and in the Jenkins (middle) container:
docker run -d --network=host \
...
-v /var/jenkins_home:/var/jenkins_home
And voilà! It works. (I created a symlink instead of moving it; that seems to work too.)
It is a bit complicated if you're looking at a colleague's Mac, because Docker is implemented a bit differently there: it runs in an Alpine-Linux-based VM but pretends not to. (Not 100% sure about that.) On Windows, I read that the paths have another layer of abstraction: a mapping from C:/somewhere/... to a Linux-like path.
I hope I'll save someone hours of figuring this out :)
Alternative Solution with Docker cp
I was facing the same problem of mounting volumes from a build that runs in a Docker container inside a Jenkins server running on Kubernetes. As we use Docker-in-Docker (dind), I couldn't mount the volume in either of the ways proposed here. I'm still not sure what the reason is, but I found an alternative: use docker cp to copy the build artifacts.
Multi-stage Docker Image for Tests
I'm using the following Dockerfile stage for Unit + Integration tests.
#
# Build stage for building the Jar
#
FROM maven:3.2.5-jdk-8 as builder
MAINTAINER marcello.desales@gmail.com
# Only copy what's necessary to pull just the dependencies from the registry
ADD ./pom.xml /opt/build/pom.xml
# As some entries in pom.xml refer to the settings, keep it next to the pom
ADD ./settings.xml /opt/build/settings.xml
WORKDIR /opt/build/
# Prepare by downloading dependencies
RUN mvn -s settings.xml -B -e -C -T 1C org.apache.maven.plugins:maven-dependency-plugin:3.0.2:go-offline
# Run the full packaging after copying the source
ADD ./src /opt/build/src
RUN mvn -s settings.xml install -P embedded -Dmaven.test.skip=true -B -e -o -T 1C verify
# Building only this stage can be done with the --target builder switch
# 1. Build: docker build -t config-builder --target builder .
# When running this first-stage image, only the unit tests are verified.
# Override that by removing the "!" to run the integration tests instead:
# 2. docker run --rm -ti config-builder mvn -s settings.xml -Dtest="*IT,*IntegrationTest" test
CMD mvn -s settings.xml -Dtest="!*IT,!*IntegrationTest" -P jacoco test
Jenkins Pipeline for Tests
My Jenkins pipeline has a stage for running Unit and Integration tests in parallel.
What I do is build the test image in one stage and then run the tests in parallel.
I use docker cp to copy the build artifacts from inside the test docker container, which can be started after running the tests in a named container.
Alternatively, you can use Jenkins stash to carry the test results to a post stage (see the publish sketch after the pipeline code below).
At this point, I solved the problem with a docker run --name tests-SHA, then docker start tests-SHA, and then docker cp tests-SHA:/path ., where . is the current workspace directory; this gives a result similar to a docker volume mounted on the current directory.
stage('Build Test Image') {
steps {
script {
currentBuild.displayName = "Test Image"
currentBuild.description = "Building the docker image for running the test cases"
}
echo "Building docker image for tests from build stage ${env.GIT_COMMIT}"
sh "docker build -t tests:${env.GIT_COMMIT} -f ${paas.build.docker.dockerfile.runtime} --target builder ."
}
}
stage('Tests Execution') {
parallel {
stage('Execute Unit Tests') {
steps {
script {
currentBuild.displayName = "Unit Tests"
currentBuild.description = "Running the unit tests cases"
}
sh "docker run --name tests-${env.GIT_COMMIT} tests:${env.GIT_COMMIT}"
sh "docker start tests-${env.GIT_COMMIT}"
sh "docker cp tests-${env.GIT_COMMIT}:/opt/build/target ."
// https://jenkins.io/doc/book/pipeline/jenkinsfile/#advanced-scripted-pipeline#using-multiple-agents
stash includes: '**/target/*', name: 'build'
}
}
stage('Execute Integration Tests') {
when {
expression { paas.integrationEnabled == true }
}
steps {
script {
currentBuild.displayName = "Integration Tests"
currentBuild.description = "Running the Integration tests cases"
}
sh "docker run --rm tests:${env.GIT_COMMIT} mvn -s settings.xml -Dtest=\"*IT,*IntegrationTest\" -P jacoco test"
}
}
}
}
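A hypothetical follow-up stage could then unstash those results for publishing (the stage name and report pattern here are assumptions):
stage('Publish Test Results') {
    steps {
        // Restore the artifacts stashed by the unit-test stage
        unstash 'build'
        junit '**/target/surefire-reports/*.xml'
    }
}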
A better approach is to use the Jenkins Docker plugin and let it do all the mounting for you; just add -v /var/run/docker.sock:/var/run/docker.sock to its inside function arguments.
E.g.:
docker.build("bb/ci-jenkins")
docker.image("bb/ci-jenkins").inside('-v /var/run/docker.sock:/var/run/docker.sock')
{
...
}
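Inside that block, shell steps talk to the host daemon through the mounted socket (a sketch; the image name is a placeholder):
docker.image("bb/ci-jenkins").inside('-v /var/run/docker.sock:/var/run/docker.sock') {
    // This docker CLI call runs inside the container but is
    // served by the host daemon through the mounted socket.
    sh 'docker build -t my-app-image .'
}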