Copy build artifacts from inside docker to host

This is my jenkinsfile:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo '####################################################'
                echo 'Building Docker container'
                echo '####################################################'
                script {
                    sh 'docker build -t my-gcc:1.0 .'
                }
            }
        }
        stage('Run') {
            steps {
                echo '##########################################################'
                echo 'Running Docker Image'
                echo '##########################################################'
                script {
                    sh 'docker run --privileged -i my-gcc:1.0'
                    sh 'docker cp my-gcc:1.0:/usr/src/myCppProject/build/*.hex .'
                }
            }
        }
        stage('Program') {
            steps {
                echo '#######################################################'
                echo 'Programming target'
                echo '#######################################################'
                script {
                    sh 'openocd -d0 -f board/st_nucleo_f4.cfg -c "init;targets;halt;flash write_image erase Testbench.hex;shutdown"'
                }
            }
        }
    }
}
The docker image is built and then run; after this I would like to extract the hex file from the container to the Jenkins working directory so that I can flash it to the board.
But when I try to copy the file I get this error:
+ docker cp my-gcc:1.0:1.0:/usr/src/myCppProject/build/*.hex .
Error: No such container:path: my-gcc:1.0:1.0:/usr/src/myCppProject/build/*.hex
I tried to access other folders in the container and copy their contents, but I always get the same error. It seems that I cannot access any folder or file in the container this way.
What am I doing wrong?
Regards
Martin

Jenkins has some standard support for Docker; this is described in Using Docker with Pipeline in the Jenkins documentation. In particular, Jenkins knows how to use a Docker image that contains just tools, combined with the project's workspace directory. I'd use that support instead of trying to script docker cp.
That might look roughly like so:
pipeline {
    agent none
    stages {
        stage('Build') {
            // Jenkins will run `docker build` for you
            agent { dockerfile { args '--privileged' } }
            steps {
                // The current working directory is bind-mounted into the container;
                // the image's `ENTRYPOINT`/`CMD` is ignored.
                // Copy the file out of the container:
                sh "cp /usr/src/myCppProject/build/*.hex ."
            }
        }
        stage('Program') {
            agent any // so not in Docker
            steps {
                sh 'openocd -d0 -f board/st_nucleo_f4.cfg -c "init;targets;halt;flash write_image erase Testbench.hex;shutdown"'
            }
        }
    }
}
If you use this approach, also consider whether you should run the main build sequence via Jenkins pipeline steps, an sh invocation that runs a shell script, or a Makefile, or whether a Dockerfile is actually the right tool. It might make sense to build a Docker image out of your customized compiler, but then use the Jenkins pipeline support to build the firmware image for the target board rather than trying to do it all in a Dockerfile.
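For example, if the image only provides the cross-toolchain and the project builds with make (an assumption here; substitute your actual build command), the Build stage could compile directly in the bind-mounted workspace, so the .hex file lands in the Jenkins workspace without any copying:
stage('Build') {
    // The image supplies only the compiler; Jenkins bind-mounts the workspace,
    // so build outputs appear directly on the host side.
    // Add args '--privileged' to the dockerfile block if the build really needs it.
    agent { dockerfile true }
    steps {
        sh 'make'   // or your build script; assumed to produce Testbench.hex in the workspace
    }
}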
In the invocation you show, you can't directly docker cp a file out of an image; docker cp only works on containers. When you start the container, use docker run --name to give it a known name, then docker cp from that container name.
sh 'docker run --name builder ... my-gcc:1.0'
sh 'docker cp builder:/usr/src/myCppProject/build/*.hex .'
sh 'docker rm builder'
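One caveat: docker cp does not expand shell wildcards such as *.hex, so in practice you would name the file explicitly or copy the whole directory. For example, assuming the artifact is the Testbench.hex that the Program stage flashes:
sh 'docker cp builder:/usr/src/myCppProject/build/Testbench.hex .'
// or copy the whole build directory and pick out the .hex afterwards
sh 'docker cp builder:/usr/src/myCppProject/build ./build'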

Related

Using a docker command as a variable in a Jenkins sh script?

I am working on a CI/CD pipeline where I have a DEV, TEST and PROD server. With a Jenkins pipeline I deploy my newest image onto my DEV server. Now I want to take the image from my DEV server by reading out its sha256 ID and put it on my TEST server.
I have a Jenkins Pipeline for that:
pipeline {
    agent any
    tools {
        dockerTool 'docker-19.03.9'
    }
    environment {
    }
    stages {
        stage('DeployToTEST-Angular') {
            steps {
                script {
                    echo 'Deploying image...'
                    sh 'docker stop mycontainertest'
                    sh 'docker rm mycontainertest'
                    sh 'docker run -d --name mycontainertest [cannot show the environment variables I pass] --restart always angular:latest'
                }
            }
        }
    }
}
As you can see, I currently use the :latest tag, but I want something like this:
pipeline {
    agent any
    tools {
        dockerTool 'docker-19.03.9'
    }
    environment {
    }
    stages {
        stage('DeployToTEST-Angular') {
            steps {
                script {
                    echo 'Deploying image...'
                    sh 'docker stop mycontainertest'
                    sh 'docker rm mycontainertest'
                    sh 'docker run -d --name mycontainertest [cannot show the environment variables I pass] --restart always $imageofDev'
                }
            }
        }
    }
}
$imageofDev = docker inspect mycontainerdev | grep -o 'sha256:[^"]*' // this command works and gives me back the raw sha256 ID
So that it uses the actual sha256 ID of my dev image.
I don't know how I can define this variable and later use its value in this Jenkins pipeline. How can I do this?
When you build the image, choose an image name and tag that you can remember later. Jenkins provides several environment variables that you can use to construct this, such as BUILD_NUMBER; in a multibranch pipeline you also have access to BRANCH_NAME and CHANGE_ID; or you can directly run git in your pipeline code.
def shortCommitID = sh script: 'git rev-parse --short HEAD', returnStdout: true
def dockerImage = "project:${shortCommitID.trim()}"
def registry = 'registry.example.com'
def fullDockerImage = "${registry}/${dockerImage}"
Now that you know the Docker image name you're going to use everywhere, you can just use it; you never need to go off and look up the image ID. Using the scripted pipeline Docker integration, for example:
docker.withRegistry("https://${registry}") {
    def image = docker.build(dockerImage)
    image.push()
}
Since you know the registry/image:tag name, you can just use it in other Jenkins directives too:
def containerName = 'container'
// stop an old container, if any
sh "docker stop ${containerName} || true"
sh "docker rm ${containerName} || true"
// start a new container that outlives this pipeline
sh "docker run -d --name ${containerName} ${fullDockerImage}"

Jenkins pipeline docker agent: start docker container from Dockerfile in privileged mode

In my Jenkins pipeline, the pipeline code and Dockerfile are available in GitLab:
pipeline {
    agent { dockerfile true }
    stages {
        stage('Test') {
            steps {
                sh '''
                    java -version
                    chmod 777 /data
                '''
            }
        }
    }
}
From the Dockerfile the image gets created and the docker container gets started, but it is missing some privileges; it cannot even create a directory.
I need to start the docker container with privileges so that I can perform chmod, mkdir, etc.
agent { dockerfile ... } supports arguments; see the docs:
agent {
    // Equivalent to "docker build -f Dockerfile.build ."
    dockerfile {
        filename 'Dockerfile.build'
        args '--privileged'
    }
}
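Applied to the pipeline in the question, that might look roughly like this (a sketch; it assumes the Dockerfile sits at the repository root, so the plain dockerfile block is enough):
pipeline {
    agent {
        dockerfile {
            args '--privileged'
        }
    }
    stages {
        stage('Test') {
            steps {
                sh '''
                    java -version
                    chmod 777 /data
                '''
            }
        }
    }
}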

Jenkins pipeline and Docker multi-stage builds howto

Question
I have to configure CI/CD for a number of Git repositories with the help of Jenkins (and DockerHub as the CD target). I did that with the help of Docker multi-stage builds (see Considerations). I'm afraid I've misunderstood/overcomplicated a simple idea.
Is Jenkins + Docker multi-stage build a good/best practice? Am I applying the idea correctly?
Considerations
From this presentation I assume using Docker inside Jenkins is a good idea. After reading the article Using Multi-Stage Builds to Simplify and Standardize Build Processes, Docker multi-stage builds look to be the next step in using Jenkins + Docker.
Answers to a similar question also say Docker multi-stage builds make sense, but they don't provide an example of a realisation.
Implementation
Jenkins creates the pipeline from the SCM repository.
Git repository
|- Dockerfile
|- Jenkinsfile
|- project-folder
   |- src
   |- pom.xml
Dockerfile
FROM alpine as source
RUN apk --update --no-cache add git
COPY project-folder repo
FROM maven:3.6.3-jdk-8 as test
COPY --from=source repo repo
WORKDIR repo
RUN mvn clean test
FROM maven:3.6.3-jdk-8 as build
COPY --from=test repo repo
WORKDIR repo
RUN mvn clean package
FROM openjdk:8 as final
MAINTAINER xxx <xxx@gmail.com>
LABEL owner="xxx"
COPY --from=build repo/target/some-lib-1.8.jar /usr/local/some-lib.jar
ENTRYPOINT ["java", "-jar", "/usr/local/some-lib.jar"]
Jenkinsfile
I used docker build --target for more granularity in the Jenkins UI.
#!/usr/bin/env groovy
def imageId = "use-name/image-name:1.$BUILD_NUMBER"
pipeline {
    agent {
        // separate agent (launched as a JAR on the host machine) to avoid running Docker inside Docker
        label 'docker'
    }
    stages {
        stage('Test') {
            steps {
                script {
                    sh "docker build --no-cache --target test -t ${imageId} ."
                }
            }
        }
        stage('Build') {
            steps {
                script {
                    sh "docker build --target build -t ${imageId} ."
                }
            }
        }
        stage('Image') {
            steps {
                script {
                    sh "docker build --target final -t ${imageId} ."
                }
            }
        }
        stage('Deploy') {
            steps {
                script {
                    docker.withRegistry('', 'dockerhub') {
                        dockerImage = docker.build("${imageId}")
                        dockerImage.push()
                    }
                }
            }
        }
        stage('Clean') {
            steps {
                sh "docker rmi ${imageId}"
            }
        }
    }
}
Following taleodor's answer, I would suggest the following Jenkinsfile:
pipeline {
    agent {
        // separate agent (launched as a JAR on the host machine) to avoid running Docker inside Docker
        label 'docker'
    }
    environment {
        imageId = "use-name/image-name:1.${env.BUILD_NUMBER}"
        docker_registry = 'your_docker_registry'
        docker_creds = credentials('your_docker_registry_creds')
    }
    stages {
        stage('Docker build') {
            steps {
                sh "docker build --no-cache --force-rm -t ${imageId} ."
            }
        }
        stage('Docker push') {
            steps {
                sh '''
                    docker login $docker_registry --username $docker_creds_USR --password $docker_creds_PSW
                    docker push $imageId
                    docker logout
                '''
            }
        }
        stage('Clean') {
            steps {
                sh "docker rmi ${imageId}"
            }
        }
    }
}
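One small refinement you might consider: passing the password via --password exposes it in the process list and makes the Docker CLI print a warning; piping it in with --password-stdin avoids that, using the same _USR/_PSW variables that credentials() provides:
sh '''
    echo "$docker_creds_PSW" | docker login $docker_registry --username "$docker_creds_USR" --password-stdin
    docker push $imageId
    docker logout
'''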

Passing jenkins secrets file to docker image run

I'm building a Jenkins pipeline. I have a builder image in my repo, and I've uploaded a secret file that my build job needs to the Jenkins credentials as a Secret file. I need to copy this file into the working directory of a docker run command that runs a command on the builder image.
I'm using this to retrieve the file as an env variable:
withCredentials([
    file(credentialsId: 'keystore', variable: 'KEYSTORE')
]) {
    try {
        docker run parameters ${image} -e ${KEYSTORE} command...
    }
}
Any ideas on how I can make that file available inside the container when running the docker image?
If you want to get the credentials file inside a docker container, you'll also need to volume mount it.
docker run parameters -e KEYSTORE=${KEYSTORE} -v ${KEYSTORE}:${KEYSTORE}:ro ${image} command...
In case you use the built-in Docker support, Jenkins will take care of that:
pipeline {
    agent {
        dockerfile {
            dir 'build'
        }
    }
    stages {
        stage('Build') {
            steps {
                withCredentials([file(credentialsId: 'keystore', variable: 'KEYSTORE')]) {
                    sh 'ls -l'
                }
            }
        }
    }
}
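Inside the withCredentials block, $KEYSTORE holds the path of a temporary copy of the secret file (typically under the workspace's @tmp directory, which the Docker agent also mounts), so steps running in the container can read it directly; for example (keystore.jks is just an illustrative destination name):
sh 'cp "$KEYSTORE" ./keystore.jks'   // or point your build tool straight at "$KEYSTORE"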

How to build multiple docker containers from jenkinsfile?

I have 3 different Docker images that I need to build from a Jenkinsfile: Wildfly, Postgres and Virtuoso, each with its own Dockerfile. As of now, I am using the command below to build these images.
The directory structure is as follows; Docker is the root directory:
Docker -> build -> 1. wildfly 2. postgres 3. virtuoso
In my Jenkinsfile I have the command below to build an image:
stage('Building test images') {
    sh 'docker build -t virtuoso -f $WORKSPACE/build/virtuoso/Dockerfile .'
}
But I am getting the error below:
Step 7/16 : COPY ./install $VIRT_HOME/install
COPY failed: stat /var/lib/docker/tmp/docker-builder636357036/install: no such file or directory
[Pipeline] }
For reference, below is my Dockerfile:
FROM virtuoso:latest
ENV var1 /opt/virtuoso-opensource
ENV VIRT_db /opt/virtuoso-opensource/var/lib/virtuoso/db
ENV RUN_CONFIG=/opt/virtuoso-opensource/install/config
RUN export PATH=$PATH:/opt/virtuoso-opensource/bin
RUN mkdir $var1/install
COPY ./install $var1/install
WORKDIR $VIRT_db
CMD ["/opt/virtuoso-opensource/bin/init.sh"]
The workspace is /home/jenkins/Docker, and my guess is that I am running the docker build command from the $WORKSPACE directory while it should run from the virtuoso directory.
My question is: how do I build the images from the Jenkinsfile?
Thanks in advance.
The easiest way to solve this would be to enter the proper folder in the script before executing the docker build command, e.g.:
stage('Building test images') {
    steps {
        sh '''
            cd $WORKSPACE/build/virtuoso
            docker build -t virtuoso .
        '''
    }
}
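Equivalently, you can keep the cd out of the script and pass the build context explicitly; the final argument to docker build is the context directory, and the COPY paths in the Dockerfile are resolved relative to it:
sh 'docker build -t virtuoso -f $WORKSPACE/build/virtuoso/Dockerfile $WORKSPACE/build/virtuoso'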
Below is the answer:
stage('Build images') {
    echo "workspace directory is ${workspace}"
    dir("$workspace/build/virtuoso") {
        sh 'docker build -t virtuoso -f $WORKSPACE/build/virtuoso/Dockerfile .'
    }
    dir("$workspace/build/wildfly") {
        sh 'docker build -t wildfly -f $WORKSPACE/build/wildfly/Dockerfile .'
    }
    dir("$workspace/build/postgres") {
        sh 'docker build -t postgres -f $WORKSPACE/build/postgres/Dockerfile .'
    }
}
Thanks for helping me out.
