The problem is that Jenkins can't reliably remove all images by itself. Sometimes it removes all of them, sometimes only some, leaving dangling images behind. It appears to happen at random.
My setup:
jenkins 2.346.2-jdk11
docker 20.10.17, build 100c701
Ubuntu 20.04.4 LTS
Some code snippets from the Jenkinsfile:
pipeline {
    agent any
    options {
        skipStagesAfterUnstable()
        buildDiscarder(logRotator(numToKeepStr: '30'))
        timestamps()
    }
    ...build some jar file...
    stages {
        stage("Build docker images") {
            steps {
                script {
                    echo "Building docker images"
                    def buildArgs = """\
                        -f Dockerfile \
                        ."""
                    def image = docker.build(
                        "simple-java-maven-app:latest",
                        buildArgs)
                }
            }
        }
        stage("Push to Dockerhub") {
            steps {
                script {
                    echo "Pushing the image to docker hub"
                    def localImage = "${params.Image_Name}:${params.Image_Tag}"
                    def repositoryName = "generaltao725/${localImage}"
                    sh "docker tag ${localImage} ${repositoryName}"
                    ...push to hub...
                }
            }
        }
    }
    post {
        always {
            script {
                echo 'I will always say Hello again!'
                sh "docker rmi -f generaltao725/simple-java-maven-app simple-java-maven-app"
                sh "docker system prune -f"
                sh "docker images"
            }
        }
    }
}
The full code is here: https://github.com/GeneralTao2/simple-java-maven-app/blob/for_sharing/Jenkinsfile
A snippet from the logs:
[Pipeline] { (Declarative: Post Actions)
[Pipeline] script
[Pipeline] {
[Pipeline] echo
21:23:40 I will always say Hello again!
[Pipeline] sh
21:23:41 + docker rmi -f generaltao725/simple-java-maven-app simple-java-maven-app
21:23:41 Untagged: generaltao725/simple-java-maven-app:latest
21:23:41 Untagged: simple-java-maven-app:latest
21:23:41 Deleted: sha256:daffc41b3af93166db4c19d8b4414051b3b4518e6ddd257c748ab6706060ca0d
21:23:41 Deleted: sha256:68b669ea8fdc6f95d9b3804098adc846d08e205f01a5766a9ce9c406a82f41d2
21:23:41 Deleted: sha256:1eafd5ac1d9d3f6e3b578ac0faea1cf6bbda85ab1a2106b590e6a67cc2cfa887
21:23:41 Deleted: sha256:a4f900510305bbd11d46c1b09dabbb03864e46e1b3e9fe4839dbd96a917f6958
21:23:41 Deleted: sha256:f0a6ad878e8be605a4753d9b1aa2d076fcdd3040ddc6d220d60d03e27f4a3864
[Pipeline] sh
21:23:41 + docker system prune -f
21:23:41 Total reclaimed space: 0B
[Pipeline] sh
21:23:42 + docker images
21:23:42 REPOSITORY TAG IMAGE ID CREATED SIZE
21:23:42 openjdk 11 47a932d998b7 2 months ago 654MB
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
What I actually see when I run docker images after the pipeline execution:
root@470d20ccbca3:/# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
generaltao725/simple-java-maven-app latest baa11625ecc8 23 hours ago 674MB
<none> <none> 2a8333ccbffb 23 hours ago 674MB
I have been working on this for several days, and there is nothing specific about it in the docs... I would be glad for any help!
The method I wanted to use still doesn't work, but I found a workaround with pretty much the same effect. You can use docker exec from the host side:
docker exec jenkins-blueocean /bin/bash -c "docker rmi \$(docker images -q -f dangling=true)"
It can be triggered by cron or any webhook handler. I use a webhook, and so far it works.
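For example, a crontab entry on the host could run the same cleanup hourly (a sketch of that idea; jenkins-blueocean is the container name used above, and the hourly schedule is an arbitrary choice):
0 * * * * docker exec jenkins-blueocean /bin/bash -c "docker image prune -f"
Here docker image prune -f is the built-in equivalent of docker rmi $(docker images -q -f dangling=true): both remove dangling images.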
I have this script running in sequential stages:
stage("Set up first image"){
agent {
docker {
label "${RUNNING_NODE}"
image "${IMAGE}"
reuseNode true
}
}
steps {
echo "This is treasure!"
}
}
And this is the output:
16:01:55 [Pipeline] {
16:01:55 [Pipeline] echo
16:01:55 This is treasure!
16:01:55 [Pipeline] }
16:01:55 $ docker stop --time=1 ca63
16:01:56 $ docker rm -f ca63
16:01:56 [Pipeline] // withDockerContainer
Docker stops the container immediately after the stage finishes, but I want this container to stay alive so I can use it in another stage. Is there any way to keep the container running after the stage completes?
Thanks in advance!
I want to delete the image from the previous build. I'm able to get its image id; however, the job dies every time it hits the docker rmi command.
stage('Clean old Image') {
    steps {
        script {
            def imageName = "${registry}" + "/" + "${branchName}"
            env.imageName = "${imageName}"
            def oldImageID = sh(
                script: 'docker images -qf reference=\${imageName}:\${imageTag}',
                returnStdout: true
            )
            echo "Image Name: " + "${imageName}"
            echo "Old Image: ${oldImageID}"
            if ("${oldImageID}" != '') {
                echo "Deleting image id: ${oldImageID}..."
                sh 'docker rmi -f $oldImageID'
            } else {
                echo "No image to delete..."
            }
        }
    }
}
The stage log console shows these error messages:
Shell Script -- docker rmi -f $oldImageID (self time 282ms)
+ docker rmi -f
"docker rmi" requires at least 1 argument.
See 'docker rmi --help'.
Usage: docker rmi [OPTIONS] IMAGE [IMAGE...]
Remove one or more images
but actually, the image id does persist, as shown in the stage log:
Print Message -- Old Image: 267848fadb74 (self time 11ms)
Old Image: 267848fadb74
Try passing in double quotes (") instead of single quotes (') with ${oldImageID}, so that Groovy interpolates the variable:
sh "docker rmi -f ${oldImageID}"
This is my Jenkinsfile:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo '####################################################'
                echo 'Building Docker container'
                echo '####################################################'
                script {
                    sh 'docker build -t my-gcc:1.0 .'
                }
            }
        }
        stage('Run') {
            steps {
                echo '##########################################################'
                echo 'Running Docker Image'
                echo '##########################################################'
                script {
                    sh 'docker run --privileged -i my-gcc:1.0'
                    sh 'docker cp my-gcc:1.0:/usr/src/myCppProject/build/*.hex .'
                }
            }
        }
        stage('Program') {
            steps {
                echo '#######################################################'
                echo 'Programming target'
                echo '#######################################################'
                script {
                    sh 'openocd -d0 -f board/st_nucleo_f4.cfg -c "init;targets;halt;flash write_image erase Testbench.hex;shutdown"'
                }
            }
        }
    }
}
The Docker image is built and then run. After this I would like to extract the hex file from the container to the Jenkins working directory so that I can flash it to the board.
But when I try to copy the file I get this error:
+ docker cp my-gcc:1.0:1.0:/usr/src/myCppProject/build/*.hex .
Error: No such container:path: my-gcc:1.0:1.0:/usr/src/myCppProject/build/*.hex
I tried to access other folders in the container and copy their content, but I always get the same error. It seems that I cannot access any folder or file in the container this way.
What am I doing wrong?
Regards
Martin
Jenkins has some standard support for Docker; this is described in Using Docker with Pipeline in the Jenkins documentation. In particular, Jenkins knows how to use a Docker image that contains just tools, combined with the project's workspace directory. I'd use that support instead of trying to script docker cp.
That might look roughly like so:
pipeline {
    agent none
    stages {
        stage('Build') {
            // Jenkins will run `docker build` for you
            agent { dockerfile { args '--privileged' } }
            steps {
                // The current working directory is bind-mounted into the container;
                // the image's `ENTRYPOINT`/`CMD` is ignored.
                // Copy the file out of the container:
                sh "cp /usr/src/myCppProject/build/*.hex ."
            }
        }
        stage('Program') {
            agent any // so not in Docker
            steps {
                sh 'openocd -d0 -f board/st_nucleo_f4.cfg -c "init;targets;halt;flash write_image erase Testbench.hex;shutdown"'
            }
        }
    }
}
If you use this approach, also consider whether you should run the main build sequence via Jenkins pipeline steps, or a sh invocation that runs a shell script, or a Makefile, or if a Dockerfile is actually right. It might make sense to build a Docker image out of your customized compiler, but then use the Jenkins pipeline support to build the image for the target board rather than trying to do it all in a Dockerfile.
In the invocation you show, you can't directly docker cp a file out of an image. When you start the container, use docker run --name to give it a name, then docker cp from that container name.
sh 'docker run --name builder ... my-gcc:1.0'
sh 'docker cp builder:/usr/src/myCppProject/build/*.hex .'
sh 'docker rm builder'
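One caveat (my note, not part of the original answer): docker cp does not expand wildcards in the container path, so the *.hex pattern above will not match anything. Copying the whole build directory and globbing on the host side is more reliable:
sh 'docker cp builder:/usr/src/myCppProject/build ./build'
sh 'cp build/*.hex .'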
In my Jenkinsfile, I have a step that runs a Docker image pulled from my Docker Hub.
stage('pull image and run') {
    steps {
        sh '''
            docker login -u <username> -p <password>
            docker run -d -p 9090:3000 <tag>
        '''
    }
}
This step works fine the first time I run the script. However, if I run it a second time, I get this error:
Login Succeeded
+ docker run -d -p 9090:3000 <tag>
669955464d74f9b5186b437b7127ca0a24f6ea366f3a903c673489bec741cf78
docker: Error response from daemon: driver failed programming external connectivity on endpoint distracted_driscoll (db16abd899cf0cbd4f26cf712b1eee4ace5b491e061e2e31795c2669296068eb): Bind for 0.0.0.0:9090 failed: port is already allocated.
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 125
Finished: FAILURE
Obviously, port 9090 is not available, so the execution failed.
Question:
What is the correct way to upgrade an app inside a docker container?
I can stop the container before running docker run, but I can't find a proper way to do that in Jenkinsfile steps (see the sketch after this question).
Any suggestion?
Thanks
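For illustration, here is a sketch (not from the original thread) of the approach the question describes: give the container a fixed name and force-remove any previous instance before starting a new one. The name myapp is a hypothetical placeholder:
stage('pull image and run') {
    steps {
        sh '''
            docker login -u <username> -p <password>
            docker rm -f myapp || true
            docker run -d --name myapp -p 9090:3000 <tag>
        '''
    }
}
docker rm -f stops and removes the named container in one step, and || true keeps the step from failing on the very first run, when no such container exists yet.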
Jenkins has really good Docker support for running your build inside a Docker container. A good example can be found here.
One declarative example doing a Maven build would be:
pipeline {
    agent {
        docker {
            image 'maven:3-alpine'
            args '-v /tmp:/tmp'
            registryUrl 'https://myregistry.com/'
            registryCredentialsId 'myPredefinedCredentialsInJenkins'
        }
    }
    stages {
        stage("01") {
            steps {
                sh "mvn -v"
            }
        }
        stage("02") {
            steps {
                sh "mvn --help"
            }
        }
    }
}
In a scripted pipeline, it would be:
node {
    docker.withRegistry('https://registry.example.com', 'credentials-id') {
        docker.image('node:14-alpine').inside("-v /tmp:/tmp") {
            stage('Test') {
                sh 'node --version'
            }
        }
    }
}
So, I have this pipeline job that builds completely inside a Docker container. The Docker image used is pulled from a local repository before the build and has almost all the dependencies required to run my project.
The problem is that I need a way to define volumes to bind mount from host to container, so that I can perform some analysis using a tool that is available on my host system but not in the container.
Is there a way to do this from inside a Jenkinsfile (Pipeline script)?
I'm not fully clear on whether this is what you mean, but if it isn't, let me know and I'll try to figure it out.
What I understand by mounting from host to container is mounting the contents of the Jenkins workspace inside the container.
For example, in this pipeline:
pipeline {
    agent { node { label 'xxx' } }
    options {
        buildDiscarder(logRotator(numToKeepStr: '3', artifactNumToKeepStr: '1'))
    }
    stages {
        stage('add file') {
            steps {
                sh 'touch myfile.txt'
                sh 'ls'
            }
        }
        stage('Deploy') {
            agent {
                docker {
                    image 'lvthillo/aws-cli'
                    args '-v $WORKSPACE:/project'
                    reuseNode true
                }
            }
            steps {
                sh 'ls'
                sh 'aws --version'
            }
        }
    }
    post {
        always {
            cleanWs()
        }
    }
}
In the first stage I just add a file to the workspace, purely in Jenkins, nothing to do with Docker.
In the second stage I start a Docker container which contains the AWS CLI (this is not installed on our Jenkins slaves). We start the container and mount the workspace into the /project folder of the container. Now I can execute AWS CLI commands and have access to the text file. In a later stage (not shown in this pipeline) you can use the file again in a different container or on the Jenkins slave itself.
Output:
[Pipeline] {
[Pipeline] stage
[Pipeline] { (add file)
[Pipeline] sh
[test] Running shell script
+ touch myfile.txt
[Pipeline] sh
[test] Running shell script
+ ls
myfile.txt
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Deploy)
[Pipeline] getContext
[Pipeline] sh
[test] Running shell script
+ docker inspect -f . lvthillo/aws-cli
.
[Pipeline] withDockerContainer
FJ Arch Slave 7 does not seem to be running inside a container
$ docker run -t -d -u 201:201 -v $WORKSPACE:/project -w ... lvthillo/aws-cli cat
$ docker top xx -eo pid,comm
[Pipeline] {
[Pipeline] sh
[test] Running shell script
+ ls
myfile.txt
[Pipeline] sh
[test] Running shell script
+ aws --version
aws-cli/1.14.57 Python/2.7.14 Linux/4.9.78-1-lts botocore/1.9.10
[Pipeline] }
$ docker stop --time=1 3652bf94e933cbc888def1eeaf89e1cf24554408f9e4421fabfd660012a53365
$ docker rm -f 3652bf94e933cbc888def1eeaf89e1cf24554408f9e4421fabfd660012a53365
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] cleanWs
[WS-CLEANUP] Deleting project workspace...[WS-CLEANUP] done
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
In your case you can mount your data into the container, perform the work there, and in a later stage do your analysis on the Jenkins slave itself (without Docker).
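If you also need a host directory other than the workspace, the docker agent's args string is passed straight to docker run, so extra -v flags work the same way (a sketch; /data/reports is a hypothetical host path):
agent {
    docker {
        image 'lvthillo/aws-cli'
        args '-v $WORKSPACE:/project -v /data/reports:/reports'
        reuseNode true
    }
}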
Assuming you are on Linux, run the following command:
docker run -it --rm -v /local_dir:/image_root_dir/mount_dir image_name
Here are some details:
-it: interactive terminal
--rm: remove the container after you exit it
-v: volume; mounts your local directory into the container
Since the mount will 'cover' (shadow) the directory in your image, you should always mount onto a new directory under your image's root directory.
Visit Use bind mounts to get more information.
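A concrete instance (hypothetical paths and image, my addition):
docker run -it --rm -v /home/me/reports:/mnt/reports ubuntu ls /mnt/reports
This starts an ubuntu container, mounts the host directory /home/me/reports at /mnt/reports inside it, and lists its contents.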
PS: run
sudo -s
and type the password once before you run docker; that saves you a lot of time, since you don't have to type sudo in front of every docker command.
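Alternatively (my addition, not part of the original answer), you can add your user to the docker group once and skip sudo entirely; this takes effect after logging out and back in:
sudo usermod -aG docker $USER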
PS2: suppose you have an image with a long name whose image ID is 5ed6274db6ce. You can run it using just the first few digits of the ID:
docker run [options] 5ed
If more than one image shares the same first three digits, use four or more digits. For example, if you have the following two images:
REPOSITORY IMAGE ID
My_Image_with_very_long_name 5ed6274db6ce
My_Image_with_very_long_name2 5edc819f315e
you can simply run
docker run [options] 5ed6
to run the image My_Image_with_very_long_name.