I want to delete the image from the previous build. I'm able to get its image ID, but the job dies every time it hits the docker rmi command.
stage('Clean old Image') {
    steps {
        script {
            def imageName = "${registry}" + "/" + "${branchName}"
            env.imageName = "${imageName}"
            def oldImageID = sh(
                script: 'docker images -qf reference=\${imageName}:\${imageTag}',
                returnStdout: true
            )
            echo "Image Name: " + "${imageName}"
            echo "Old Image: ${oldImageID}"
            if ( "${oldImageID}" != '' ) {
                echo "Deleting image id: ${oldImageID}..."
                sh 'docker rmi -f $oldImageID'
            } else {
                echo "No image to delete..."
            }
        }
    }
}
The stage log console shows these error messages:
Shell Script -- docker rmi -f $oldImageID (self time 282ms)
+ docker rmi -f
"docker rmi" requires at least 1 argument.
See 'docker rmi --help'.
Usage: docker rmi [OPTIONS] IMAGE [IMAGE...]
Remove one or more images
But the image ID does exist, as the stage log shows:
Print Message -- Old Image: 267848fadb74 (self time 11ms)
Old Image: 267848fadb74
Try passing in " instead of ' around the script with ${oldImageID}. With single quotes, Groovy does not interpolate the string; the shell then looks for an environment variable named oldImageID, which does not exist (it is only a Groovy local variable), so docker rmi is invoked with no argument:
sh "docker rmi -f ${oldImageID}"
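Putting the pieces together, here is a sketch of the cleanup script block (hedged: variable and stage names are taken from the question; trim() is added because returnStdout keeps the trailing newline, and double quotes let Groovy interpolate before the shell runs):

```groovy
script {
    def imageName = "${registry}/${branchName}"
    // trim() drops the trailing newline that returnStdout preserves.
    def oldImageID = sh(
        script: "docker images -qf reference=${imageName}:${imageTag}",
        returnStdout: true
    ).trim()
    if (oldImageID) {
        echo "Deleting image id: ${oldImageID}..."
        // Double quotes: Groovy interpolates oldImageID before the shell sees it.
        sh "docker rmi -f ${oldImageID}"
    } else {
        echo "No image to delete..."
    }
}
```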
The problem is that Jenkins can't remove all images by itself. Sometimes it removes all of them, sometimes only some, leaving dangling images behind. It appears to happen at random.
My setup:
jenkins 2.346.2-jdk11
docker 20.10.17, build 100c701
Ubuntu 20.04.4 LTS
Some code snippets from the Jenkinsfile:
pipeline {
    agent any
    options {
        skipStagesAfterUnstable()
        buildDiscarder(logRotator(numToKeepStr: '30'))
        timestamps()
    }
    ...build some jar file...
    stages {
        stage("Build docker images") {
            steps {
                script {
                    echo "Building docker images"
                    def buildArgs = """\
                        -f Dockerfile \
                        ."""
                    def image = docker.build(
                        "simple-java-maven-app:latest",
                        buildArgs)
                }
            }
        }
        stage("Push to Dockerhub") {
            steps {
                script {
                    echo "Pushing the image to docker hub"
                    def localImage = "${params.Image_Name}:${params.Image_Tag}"
                    def repositoryName = "generaltao725/${localImage}"
                    sh "docker tag ${localImage} ${repositoryName}"
                    ...push to hub...
                }
            }
        }
    }
    post {
        always {
            script {
                echo 'I will always say Hello again!'
                sh "docker rmi -f generaltao725/simple-java-maven-app simple-java-maven-app"
                sh "docker system prune -f"
                sh "docker images"
            }
        }
    }
}
The full code is here: https://github.com/GeneralTao2/simple-java-maven-app/blob/for_sharing/Jenkinsfile
A snippet from the logs:
[Pipeline] { (Declarative: Post Actions)
[Pipeline] script
[Pipeline] {
[Pipeline] echo
21:23:40 I will always say Hello again!
[Pipeline] sh
21:23:41 + docker rmi -f generaltao725/simple-java-maven-app simple-java-maven-app
21:23:41 Untagged: generaltao725/simple-java-maven-app:latest
21:23:41 Untagged: simple-java-maven-app:latest
21:23:41 Deleted: sha256:daffc41b3af93166db4c19d8b4414051b3b4518e6ddd257c748ab6706060ca0d
21:23:41 Deleted: sha256:68b669ea8fdc6f95d9b3804098adc846d08e205f01a5766a9ce9c406a82f41d2
21:23:41 Deleted: sha256:1eafd5ac1d9d3f6e3b578ac0faea1cf6bbda85ab1a2106b590e6a67cc2cfa887
21:23:41 Deleted: sha256:a4f900510305bbd11d46c1b09dabbb03864e46e1b3e9fe4839dbd96a917f6958
21:23:41 Deleted: sha256:f0a6ad878e8be605a4753d9b1aa2d076fcdd3040ddc6d220d60d03e27f4a3864
[Pipeline] sh
21:23:41 + docker system prune -f
21:23:41 Total reclaimed space: 0B
[Pipeline] sh
21:23:42 + docker images
21:23:42 REPOSITORY TAG IMAGE ID CREATED SIZE
21:23:42 openjdk 11 47a932d998b7 2 months ago 654MB
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
What I actually see from docker images after the pipeline execution:
root@470d20ccbca3:/# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
generaltao725/simple-java-maven-app latest baa11625ecc8 23 hours ago 674MB
<none> <none> 2a8333ccbffb 23 hours ago 674MB
I have been working on this for several days and found nothing specific about it in the docs... I would be glad for any help!
The method I wanted to use still doesn't work, but I found a workaround with much the same effect. You can use docker exec from the host side:
docker exec jenkins-blueocean /bin/bash -c "docker rmi \$(docker images -q -f dangling=true)"
It can be triggered by cron or any web-hook handler. I use a webhook and so far it works.
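One caveat with that workaround: when there are no dangling images, `$(docker images -q -f dangling=true)` expands to nothing and `docker rmi` fails with the same "requires at least 1 argument" error from the first question. `docker image prune -f` handles the empty case cleanly. A sketch, with `docker` stubbed by a shell function so the expansion behavior can be shown without a live daemon (drop the stub in real use):

```shell
# Stub docker so this sketch runs without a real daemon (remove in real use).
docker() { echo "docker $*"; }

# Fragile: an empty substitution leaves `docker rmi` with no image argument.
dangling=""                  # simulates `docker images -q -f dangling=true` with nothing to remove
docker rmi $dangling         # real docker would fail: "requires at least 1 argument"

# Safer: prune is a no-op when nothing is dangling.
docker image prune -f
```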
I have a job in a pipeline that cleans up Docker images. It runs on each worker individually. This is frustrating because when I add jenkins-cpu-worker3, I'll have to update the job.
I'd like to run this job in such a way that it runs on all workers without having to update it each time a new worker is added. I also want the job to run regardless of what I name each worker. It needs to run on all workers no matter what.
Is there a way to query Jenkins from within the pipeline to get a list or array of all the workers that exist? I was leafing through documentation and posts online and have not found a working solution. If possible, I'd like to do this without any additional Jenkins plugins.
pipeline {
    agent any
    stages {
        stage('Cleanup jenkins-cpu-worker1') {
            agent {
                node {
                    label 'jenkins-cpu-worker1'
                }
            }
            steps {
                sh "docker container prune -f"
                sh "docker image prune -f"
                sh '''docker images | awk '{print $1 ":" $2}' | xargs docker image rm || true'''
                sh "docker network prune -f"
                sh "docker volume prune -f"
            }
        }
        stage('Cleanup jenkins-cpu-worker2') {
            agent {
                node {
                    label 'jenkins-cpu-worker2'
                }
            }
            steps {
                sh "docker container prune -f"
                sh "docker image prune -f"
                sh '''docker images | awk '{print $1 ":" $2}' | xargs docker image rm || true'''
                sh "docker network prune -f"
                sh "docker volume prune -f"
            }
        }
Here is an improved version of your pipeline. It dynamically gets all the active agents and runs your cleanup task on them in parallel.
pipeline {
    agent any
    stages {
        stage('CleanupWorkers') {
            steps {
                script {
                    echo "Something"
                    parallel parallelJobs()
                }
            }
        }
    }
}
def parallelJobs() {
    def jobs = [:]
    for (def nodeName in getAllNodes()) {
        jobs[nodeName] = getStage(nodeName)
    }
    return jobs
}
def getStage(def nodeName) {
    return {
        stage("Cleaning $nodeName") {
            node(nodeName) {
                sh '''
                    echo "Starting cleaning"
                    docker container prune -f
                    docker image prune -f
                    docker images | awk '{print $1 ":" $2}' | xargs docker image rm || true
                    docker network prune -f
                    docker volume prune -f
                '''
            }
        }
    }
}
def getAllNodes() {
    def nodeNames = []
    Jenkins.instance.getNodes().each { node ->
        // Ignore offline agents
        if (!node.getComputer().isOffline()) {
            nodeNames.add(node.getNodeName())
        }
    }
    return nodeNames
}
I'm trying to use a variable holding the SHA of a Gemfile. The problem is that when I use it in an sh command, the arguments after it are not interpreted.
So, for example:
docker build ${VAR} .
will result in an error stating that "docker build" requires exactly 1 argument, because the trailing " ." of the command is not being picked up.
Here is the code that tries to pull an image, builds it and publish it:
def GEMFILE_SHA = ""
pipeline {
    .....
    stages {
        stage("Build Docker Image and Push to Artifactory - Snapshot Repository") {
            steps {
                container("docker") {
                    script {
                        GEMFILE_SHA = sh(returnStdout: true, script: "sha256sum Gemfile | cut -d ' ' -f1 | head -n1", label: "Set Gemfile sha")
                    }
                    sh script: "docker login -u ${DOCKER_REGISTRY_CREDS_USR} -p ${DOCKER_REGISTRY_CREDS_PSW} ${DOCKER_REGISTRY_URL}", label: "Docker Login."
                    catchError(buildResult: 'SUCCESS', stageResult: 'SUCCESS') {
                        sh script: "docker pull ${DOCKER_REGISTRY_URL}/${DOCKER_REPO}:${GEMFILE_SHA}", label: "Pull Cached Image."
                    }
                    sh script: "docker build --network=host --no-cache -t ${DOCKER_REGISTRY_URL}/${DOCKER_REPO}:${GEMFILE_SHA} .", label: "Build Docker Image."
                    sh script: "docker push ${DOCKER_REGISTRY_URL}/${DOCKER_REPO}:${GEMFILE_SHA}", label: "Push Docker Image."
                }
            }
        }
    }
}
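A likely culprit here (the same one resolved in the last answer on this page): sh(returnStdout: true) returns the command's stdout including the trailing newline, so the newline lands in the middle of the later docker command lines and everything after the tag, such as the final " .", ends up on a new line. A minimal sketch of the fix, trimming the captured value:

```groovy
// trim() strips the trailing newline that returnStdout preserves,
// so "${...}:${GEMFILE_SHA} ." stays on one line.
GEMFILE_SHA = sh(
    returnStdout: true,
    script: "sha256sum Gemfile | cut -d ' ' -f1 | head -n1",
    label: "Set Gemfile sha"
).trim()
```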
So I have this setup:
stage('Build') {
    steps {
        sh """ docker-compose -f docker-compose.yml up -d """
        sh """ docker-compose -f docker-compose.yml exec -T app buildApp """
    }
}
stage('Start UI server') {
    steps {
        script { env.NETWORK_ID = get network id with some script }
        sh """ docker-compose -f docker-compose.yml exec -d -T app startUiServer """
    }
}
stage('UI Smoke Testing') {
    agent {
        docker {
            alwaysPull true
            image 'some custom image'
            registryUrl 'some custom registry'
            registryCredentialsId 'some credentials'
            args "-u root --network ${env.NETWORK_ID}"
        }
    }
    steps { sh """ run the tests """ }
}
And for some reason the pipeline fails with this error most of the time, though not every time:
java.io.IOException: Failed to run image 'my image'. Error: docker: Error response from daemon: network 3c5b5b45ca0e not found.
The network ID is the right one; I've checked.
Any ideas why this is failing? I'd really appreciate any help.
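One hedged suggestion, not a confirmed diagnosis: if the compose network is ever removed and recreated (for example by a prune or a docker-compose down between builds), its ID changes but its default name, `<project>_default`, stays stable, so a cached ID can go stale while a name keeps working. A sketch referencing the network by name (`app_default` is a hypothetical project name; substitute your own):

```groovy
stage('UI Smoke Testing') {
    agent {
        docker {
            image 'some custom image'
            // Use the stable network *name* instead of the volatile ID.
            // "app_default" is hypothetical: compose names the network <project>_default.
            args "-u root --network app_default"
        }
    }
    steps { sh """ run the tests """ }
}
```

If the ID must be captured dynamically, trimming it guards against the trailing-newline problem seen elsewhere on this page.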
In my Jenkinsfile I have this:
stage ('Build Docker') {
    steps {
        script {
            image1 = docker.build "docker1:${BRANCH_NAME}"
        }
        script {
            image2 = docker.build "docker2:${BRANCH_NAME}"
        }
    }
}
stage ('Run Docker Acceptance Tests') {
    steps {
        script {
            container1 = image1.run "-v /tmp/${BRANCH_NAME}:/var/lib/data"
            container1Id = container1.id
            container1IP = sh script: "docker inspect ${container1Id} | grep IPAddress | grep -v null | cut -d \'\"\' -f 4 | head -1", returnStdout: true
        }
        // let containers start up
        sleep 20
        script {
            container2 = image2.run("-v /tmp/${BRANCH_NAME}:/var/lib/data --add-host=MY_HOST:${container1IP}")
        }
    }
}
When it gets to running container2, I get this output:
[resources] Running shell script
00:01:33.775 + docker run -d -v /tmp/master:/var/lib/data --add-host=MY_HOST:172.17.0.3
00:01:33.775 "docker run" requires at least 1 argument(s).
00:01:33.775 See 'docker run --help'.
Clearly it's not appending the image name when running the docker image.
I tried hardcoding the IP address to test whether it worked, like this:
container2= image2.run("-v /tmp/${BRANCH_NAME}:/var/lib/data --add-host=MY_HOST:172.17.0.3")
And then it worked and ran the command correctly:
00:00:29.386 [resources] Running shell script
00:00:29.641 + docker run -d -v /tmp/master:/var/lib/data --add-host=MY_HOST:172.17.0.3 docker-name:branch
I don't understand why it's not picking up the image name.
I have even tried this, getting the same error:
container2= image2.run("-v /tmp/${BRANCH_NAME}:/var/lib/data --add-host=MY_HOST:${container1IP} docker2:${BRANCH_NAME}")
As a final step I tried:
sh "docker run -v /tmp/${BRANCH_NAME}:/var/lib/data --add-host=MY_HOST:${container1IP} docker2:${BRANCH_NAME}"
Again, it seems to strip off the rest of the command after resolving ${container1IP}.
Managed to fix it: it was due to a hidden newline character.
container1IP = sh (script: "docker inspect ${container1Id} | grep IPAddress | grep -v null| cut -d \'\"\' -f 4 | head -1", returnStdout: true).trim()
Trimming the variable fixed it.