Jenkins pipeline marked as failed but all steps succeeded - docker

I set up a pipeline to build my WildFly Swarm based microservice, build a Docker image, and push it to a Docker repository. I wrote the pipeline script and ran a build, and from the logs (attached below) we can see that the build was successful, but the stage view shows a failed status with the error: "Script returned exit code 1".
The script that failed was:
docker build -f Dockerfile -t swarm-microservice .
It makes no difference whether I change the script execution from:
sh '''docker build -f Dockerfile -t swarm-microservice .'''
to:
script {
    docker.build("swarm-microservice", '.')
}
I have also tried forcing a zero exit status by changing the script to:
sh '''docker build -f Dockerfile -t swarm-microservice . || true '''
But it's not helping. What am I missing?
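For what it's worth, `|| true` only masks the status of the command it is attached to, and an `sh` step's result is the status of the last command in the script, so the exit code 1 may well come from a different `sh` step than the docker build. A minimal shell illustration (the `false` below is just a placeholder for any failing command):

```shell
# '|| true' rewrites only the status of the command it is attached to;
# the sh step still returns the status of the LAST command in the script.
false || true                       # stands in for: docker build ... || true
echo "status after '|| true': $?"   # prints: status after '|| true': 0
```

If another command runs after the build inside the same (or another) `sh` step and returns 1, the stage is still marked as failed.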
Started by user test
[Pipeline] node
Running on node-lin02 in /build/workspace/XYZ/some-docker-image
[Pipeline] copyArtifacts
Copied 2 artifacts from "XYZ » swarm-microservice" build number 49
Copied 0 artifacts from "XYZ » swarm-microservice » XYZ-swarm-microservice" build number 49
[some-docker-image] Running shell script
[some-docker-image] Running shell script
+ docker build -f Dockerfile -t swarm-microservice .
Sending build context to Docker daemon 380.9MB
Step 1/4 : FROM openjdk:8u151-jdk-slim
---> 22f79f57057d
Step 2/4 : ADD swarm-microservice-swarm.jar /opt/swarm-microservice-swarm.jar
---> Using cache
---> 16f16f07a4da
Step 3/4 : EXPOSE 8281
---> Using cache
---> 26820815d1d1
Step 4/4 : ENTRYPOINT java -jar -Djava.net.preferIPv4Stack=true -XX:MaxRAM=512m /opt/swarm-microservice-swarm.jar -S src
---> Using cache
---> 41c896987ba6
Successfully built 41c896987ba6
Successfully tagged swarm-microservice:latest
[Pipeline] sh
[some-docker-image] Running shell script
+ set -e
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (tag docker image)
[Pipeline] sh
[some-docker-image] Running shell script
+ docker tag swarm-microservice my.awesome.demo.so.test.repo.com:5000/XYZ/swarm-microservice
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (push docker image to repository)
[Pipeline] sh
[some-docker-image] Running shell script
+ docker push my.awesome.demo.so.test.repo.com:5000/XYZ/swarm-microservice
The push refers to a repository [my.awesome.demo.so.test.repo.com:5000/XYZ/swarm-microservice]
2e35fec0db40: Preparing
c09cee929b6f: Preparing
5f09fc66f922: Waiting
cec7521cdf36: Waiting
2e35fec0db40: Layer already exists
063e7100cc44: Layer already exists
latest: digest: sha256:xxx size: 1788
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS

Related

Trying to push image to DockerHub in containerized Jenkins

I'm setting up a CI pipeline in Jenkins running in a container. I'm using the official jenkins/jenkins:latest Docker image with no modifications. On Jenkins itself I installed the Docker plugins, added a Docker installation in the global tool configuration, and added dockerTool to the pipeline's tools section.
I created the container with the following command:
docker run -d -u root -p 8080:8080 -p 50000:50000 -v /var/run/docker.sock:/var/run/docker.sock -v jenkins_home:/var/jenkins_home jenkins/jenkins:latest
I mounted docker.sock so that image builds use the Docker daemon on the host machine.
Here is the part of the Jenkinsfile that is failing:
stage('PUSH') {
    steps {
        script {
            dockerImage = docker.build 'mygithub/spring-petclinic:latest'
            docker.withRegistry( '', 'dockerHubCreds' ) {
                dockerImage.push()
            }
        }
    }
}
Building the image succeeds. The build fails only when I try to push the image to Docker Hub. It says there is no such file/directory called docker, yet the previous step literally printed out a docker build command.
I have provided the logs below.
+ docker build -t qeqoos/spring-petclinic:latest .
Sending build context to Docker daemon 63.25MB
Step 1/4 : FROM openjdk:8-jre-alpine3.9
---> f7a292bbb70c
Step 2/4 : COPY target/spring-petclinic-2.5.0-SNAPSHOT.jar /usr/bin/spring-petclinic.jar
---> ced11038c9dd
Step 3/4 : EXPOSE 80
---> Running in f222a20aad19
Removing intermediate container f222a20aad19
---> 3cd6a16e7890
Step 4/4 : ENTRYPOINT ["java", "-jar", "/usr/bin/spring-petclinic.jar", "--server.port=80"]
---> Running in 0a392d01e56b
Removing intermediate container 0a392d01e56b
---> 9afe8b544a7b
Successfully built 9afe8b544a7b
Successfully tagged qeqoos/spring-petclinic:latest
[Pipeline] withEnv
[Pipeline] {
[Pipeline] withDockerRegistry
$ docker login -u qeqoos -p ******** https://index.docker.io/v1/
[Pipeline] // withDockerRegistry
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
java.io.IOException: error=2, No such file or directory
at java.base/java.lang.ProcessImpl.forkAndExec(Native Method)
at java.base/java.lang.ProcessImpl.<init>(ProcessImpl.java:340)
at java.base/java.lang.ProcessImpl.start(ProcessImpl.java:271)
at java.base/java.lang.ProcessBuilder.start(ProcessBuilder.java:1107)
Caused: java.io.IOException: Cannot run program "docker": error=2, No such file or directory
at java.base/java.lang.ProcessBuilder.start(ProcessBuilder.java:1128)
at java.base/java.lang.ProcessBuilder.start(ProcessBuilder.java:1071)
at hudson.Proc$LocalProc.<init>(Proc.java:252)
at hudson.Proc$LocalProc.<init>(Proc.java:221)
at hudson.Launcher$LocalLauncher.launch(Launcher.java:995)
at hudson.Launcher$ProcStarter.start(Launcher.java:507)
at hudson.Launcher$ProcStarter.join(Launcher.java:518)
at org.jenkinsci.plugins.docker.commons.impl.RegistryKeyMaterialFactory.materialize(RegistryKeyMaterialFactory.java:101)
at org.jenkinsci.plugins.docker.workflow.AbstractEndpointStepExecution2.doStart(AbstractEndpointStepExecution2.java:53)
at org.jenkinsci.plugins.workflow.steps.GeneralNonBlockingStepExecution.lambda$run$0(GeneralNonBlockingStepExecution.java:77)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Finished: FAILURE
Is there a better approach for running Jenkins in a container, or any advice on how to make Jenkins push the image? Thank you.
The jenkins/jenkins:latest image doesn't include a Docker client binary by default; you only mounted the Unix socket into the container, which is definitely not enough.
The command in the output, I think, just prints the docker command it plans to use; it doesn't mean it has already run it.
So, in your case, you need to make a Docker client available in the Jenkins container:
Either use a bind mount:
docker run -v `which docker`:/usr/bin/docker ......
Or, if the host's Docker client is not suitable for the container environment, download a prebuilt Docker client from here and copy the docker binary into the container.
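Putting the two mounts together, a sketch of the full run command might look like this (assuming the Docker client that `which docker` finds on the host is a static binary compatible with the container's environment; otherwise use a downloaded client as described above):

```shell
docker run -d -u root \
  -p 8080:8080 -p 50000:50000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$(which docker)":/usr/bin/docker \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:latest
```

The socket mount gives the container access to the host's Docker daemon; the binary mount gives it a client to talk to that daemon with.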

Jenkins pipeline build a dockerfile which contains a base image of my dockerhub repository

I am extremely new to Jenkins. I tried out a few basic pipeline examples, which worked.
My concrete use case is the following:
I have a base image in my docker hub repository : my_dockerhub_rep/myImage:v1
Now I want to build another image based on this base image through a Jenkins pipeline.
So I wrote the following Dockerfile:
FROM my_docker_hub_rep/myImage:v1
RUN /bin/bash -c 'echo entering in MC container'
To build this image from Jenkins, I wrote the following Jenkinsfile:
pipeline {
    agent {
        dockerfile {
            filename "/home/user/Desktop/Dockerfile"
            registryUrl ""
            registryCredentialsId 'dockerHub'
        }
    }
    stages {
        stage('Build') {
            steps {
                echo 'hello !'
                sh 'echo LM_LICENSE_FILE = $LM_LICENSE_FILE'
            }
        }
    }
}
The Jenkins server can successfully log in to the Docker repository at first, but as soon as it tries to fetch the base image it throws a "pull access denied" error: the repository doesn't exist or requires docker login.
What I don't understand is: if it could log in to the Docker repo once, then why not again?
Here is the Jenkins console output:
Started by user unknown or anonymous
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /var/lib/jenkins/workspace/docker_test
[Pipeline] {
[Pipeline] withEnv
[Pipeline] {
[Pipeline] withDockerRegistry
$ docker login -u mydockerID-p ******** https://index.docker.io/v1/
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /var/lib/jenkins/workspace/docker_test#tmp/a548cbfa-5d55-4a2c-87a7-4954052d7e5b/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Declarative: Agent Setup)
[Pipeline] isUnix
[Pipeline] readFile
[Pipeline] sh
+ docker build -t b2f2e9020bdfdbcd1bc3d0a6f0f28b1c7abff41b -f /home/user/Desktop/Dockerfile .
Sending build context to Docker daemon 2.095kB
Step 1/8 : FROM my_docker_rep/myImage:v1
pull access denied for my_docker_rep/myImage, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withDockerRegistry
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE
PS: I already added jenkins to the user group.
Your Dockerfile should use a full image reference: <registry>/<repository>/<imagename>:<image_tag>. If it doesn't, the Docker daemon will always try to pull the image from the default Docker registry. Change the FROM line of your Dockerfile to contain the full image path and it will work.
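As a sketch, using the names from the question and assuming the repository really lives on Docker Hub (docker.io is Docker Hub's registry prefix; substitute your own registry host if it is private):

```dockerfile
# Fully qualified reference: <registry>/<repository>/<imagename>:<image_tag>
FROM docker.io/my_docker_hub_rep/myImage:v1
RUN /bin/bash -c 'echo entering in MC container'
```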

Jenkins: how to run shell script

Jenkins could not run my shell script. I've installed Jenkins on my Kubernetes cluster.
Here is part of the build console output:
Start building Frontend and Backend Docker images
[Pipeline] }
[Pipeline] container
[Pipeline] {
[Pipeline] container
[Pipeline] {
[Pipeline] echo
Building Bmf Frontend Image
[Pipeline] sh
[Pipeline] echo
Buildinging Bmf Backend Images
[Pipeline] sh
+ chmod +x build.sh
[Pipeline] // stage
[Pipeline] }
[Pipeline] sh
+ chmod +x build.sh
[Pipeline] sh
+ ./build.sh --build_bmf_frontend
build.sh - Script for building the Web Application, Docker image and Helm chart
Usage: ./build.sh <options>
--build_bmf_frontend : [optional] Build Bmf Frontend image
--build_bmf_backend : [optional] Build Bmf Backend image
--push_bmf_frontend : [optional] Push Bmf Frontend image
--push_bmf_backend : [optional] Push Bmf Backend image
--delete_frontend : [optional] Delete Bmf Frontend image
--delete_backend : [optional] Delete Bmf Backend image
--deploy_stage : [optional] Deploy to Stage Server
--deploy_production : [optional] Deploy to Production Server
--registry reg : [optional] A custom docker registry
--docker_usr user : [optional] Docker registry username
--docker_psw pass : [optional] Docker registry password
--tag tag : [optional] A custom app version
-h | --help : Show this usage
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
The job is executed on a Kubernetes worker provisioned from the Kubernetes Pod Template.
Here is the relevant stage of my Jenkinsfile:
stage('Build BMF Frontend') {
    steps {
        container('jnlp') {
            echo 'Building Bmf Frontend Image'
            sh "chmod +x build.sh"
            sh "./build.sh --build_bmf_frontend"
        }
    }
}
Below is a screenshot of my Jenkins workspace.
When using the snippet generator, all sh commands are generated with single quotes. Perhaps you should use those instead?
https://jenkins.io/doc/book/pipeline/getting-started/#snippet-generator
Your pipeline is executing correctly.
The output suggests that the issue is in your script: build.sh runs, but it prints its usage text, which usually means it did not recognize the option it was given.
Even though Jenkins executes successfully, you don't get the result you expect because of a problem in your code.
You may want to check your build.sh script, probably the section where it parses input options.
I am executing it with sh, which has a less extensive syntax ;)
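For reference, option parsing can be written so it works under plain `sh` as well as bash. A minimal sketch (the flag names come from the question; the echo bodies are placeholders for the real build logic):

```shell
#!/bin/sh
# Hypothetical POSIX-sh option parsing for a script like build.sh;
# the echo bodies below are placeholders, not the real build logic.
parse_build_args() {
    while [ $# -gt 0 ]; do
        case "$1" in
            --build_bmf_frontend) echo "building frontend" ;;
            --build_bmf_backend)  echo "building backend" ;;
            -h|--help)            echo "Usage: ./build.sh <options>"; return 0 ;;
            *)                    echo "Unknown option: $1" >&2; return 1 ;;
        esac
        shift
    done
}

parse_build_args --build_bmf_frontend   # prints: building frontend
```

If the real build.sh uses a bash-only construct (arrays, `[[ ... ]]`, etc.) to parse its options, running it with `sh` would fall through to the usage text exactly as seen in the log.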

Bind Volumes to Docker container in a pipeline job

So, I have this pipeline job that builds completely inside a Docker container. The Docker image used is pulled from a local repository before build and has almost all the dependencies required to run my project.
The problem is I need a way to define volumes to bind mount from the host into the container, so that I can perform some analysis using a tool that is available on my host system but not in the container.
Is there a way to do this from inside a Jenkinsfile (Pipeline script)?
I'm not fully clear whether this is what you mean, but if it isn't, let me know and I'll try to figure it out.
What I understand by mounting from host to container is mounting the contents of the Jenkins workspace inside the container.
For example in this pipeline:
pipeline {
    agent { node { label 'xxx' } }
    options {
        buildDiscarder(logRotator(numToKeepStr: '3', artifactNumToKeepStr: '1'))
    }
    stages {
        stage('add file') {
            steps {
                sh 'touch myfile.txt'
                sh 'ls'
            }
        }
        stage('Deploy') {
            agent {
                docker {
                    image 'lvthillo/aws-cli'
                    args '-v $WORKSPACE:/project'
                    reuseNode true
                }
            }
            steps {
                sh 'ls'
                sh 'aws --version'
            }
        }
    }
    post {
        always {
            cleanWs()
        }
    }
}
In the first stage I just add a file to the workspace, purely in Jenkins; nothing with Docker.
In the second stage I start a Docker container that contains the AWS CLI (which is not installed on our Jenkins slaves). The container is started with the workspace mounted at the /project folder inside it. Now I can execute AWS CLI commands and I have access to the text file. In a later stage (not shown in the pipeline) you can use the file again in a different container or on the Jenkins slave itself.
Output:
[Pipeline] {
[Pipeline] stage
[Pipeline] { (add file)
[Pipeline] sh
[test] Running shell script
+ touch myfile.txt
[Pipeline] sh
[test] Running shell script
+ ls
myfile.txt
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Deploy)
[Pipeline] getContext
[Pipeline] sh
[test] Running shell script
+ docker inspect -f . lvthillo/aws-cli
.
[Pipeline] withDockerContainer
FJ Arch Slave 7 does not seem to be running inside a container
$ docker run -t -d -u 201:201 -v $WORKSPACE:/project -w ... lvthillo/aws-cli cat
$ docker top xx -eo pid,comm
[Pipeline] {
[Pipeline] sh
[test] Running shell script
+ ls
myfile.txt
[Pipeline] sh
[test] Running shell script
+ aws --version
aws-cli/1.14.57 Python/2.7.14 Linux/4.9.78-1-lts botocore/1.9.10
[Pipeline] }
$ docker stop --time=1 3652bf94e933cbc888def1eeaf89e1cf24554408f9e4421fabfd660012a53365
$ docker rm -f 3652bf94e933cbc888def1eeaf89e1cf24554408f9e4421fabfd660012a53365
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] cleanWs
[WS-CLEANUP] Deleting project workspace...[WS-CLEANUP] done
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
In your case you can mount your data into the container, perform your work there, and in a later stage do the analysis on the Jenkins slave itself (without Docker).
Assuming you are on Linux, run the following command:
docker run -it --rm -v /local_dir:/image_root_dir/mount_dir image_name
Here is some detail:
-it: interactive terminal
--rm: remove the container after exiting it
-v: mount your local directory into the container as a volume
Since the mount will 'cover' the directory in your image, you should always mount onto a new directory under your image's root directory.
Visit Use bind mounts for more information.
PS: run
sudo -s
and type the password before you run docker; that saves you a lot of time, since you don't have to type sudo in front of docker every time.
PS2: suppose you have an image with a long name and the image ID is 5ed6274db6ce. You can refer to it by just the first three (or more) digits of its ID:
docker run [options] 5ed
If several images share the same first three digits, use four or more.
For example, given the following two images:
REPOSITORY                      IMAGE ID
My_Image_with_very_long_name    5ed6274db6ce
My_Image_with_very_long_name2   5edc819f315e
you can simply run
docker run [options] 5ed6
to run the image My_Image_with_very_long_name.

Jenkins Docker Pipelining inside Docker

I'm following along with this tutorial:
https://www.linkedin.com/pulse/building-docker-pipeline-cloudbees-jenkins-jay-johnson
I'm running Jenkins on Docker 17:
docker run -d -p 8080:8080 -p 50000:50000 --name jenkins jenkins
I followed the instructions and replaced Jay's credentials with my own. I added my creds to Global and then renamed the creds in the pipeline script. When I attempt the build, though, I'm getting the following error:
Proceeding
[Pipeline] withEnv
[Pipeline] {
[Pipeline] withDockerRegistry
Wrote authentication to /var/jenkins_home/.dockercfg
[Pipeline] {
[Pipeline] stage (Building)
Using the ‘stage’ step without a block argument is deprecated
Entering stage Building
Proceeding
[Pipeline] sh
[alfred-master] Running shell script
+ docker build -t jayjohnson/django-slack-sphinx:testing django
/var/jenkins_home/workspace/alfred-master#tmp/durable-713ce0d7/script.sh: 2: /var/jenkins_home/workspace/alfred-master#tmp/durable-713ce0d7/script.sh: docker: not found
[Pipeline] }
[Pipeline] // withDockerRegistry
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 127
Finished: FAILURE
I'm assuming this is looking for the docker binary.
How can I build a docker image from a repo from inside a Docker container?
The issue is here:
/var/jenkins_home/workspace/alfred-master#tmp/durable-713ce0d7/script.sh: 2: /var/jenkins_home/workspace/alfred-master#tmp/durable-713ce0d7/script.sh: docker: not found
I'm assuming your build is running on the master instance, which is just a basic installation of Jenkins with no extra tools.
You'll want to run an agent (slave) and connect it to your master; make sure the agent has Docker installed, and then you will be able to run those commands.
You can either set this up yourself or use an open-source option. Currently in my own setup I'm using this image, which has everything I need (well, personally, I've forked it and added some of my own tools as well).
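As a rough sketch of the agent approach, one way is to start an inbound agent container that has access to Docker. The image name and argument order below follow the jenkins/inbound-agent conventions; the URL, secret, and agent name are placeholders you would take from your master's node configuration, and this assumes the host Docker client binary is usable inside the agent container:

```shell
docker run -d --name jenkins-agent \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$(which docker)":/usr/bin/docker \
  jenkins/inbound-agent \
  -url http://my-jenkins-master:8080 <agent-secret> <agent-name>
```

Builds that run on this agent can then invoke docker against the host daemon, while the master stays a plain Jenkins installation.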
