Jenkins - How to run 'docker system prune' in the pipeline

This is my Jenkinsfile for building a Docker image and pushing it to Docker Hub. Everything works just great.
I would like to clean up the untagged images after the build process. Currently I run docker system prune -f manually on the Jenkins node. Is there any way to incorporate this into the pipeline when the agent is none?
pipeline {
    agent none
    stages {
        stage('Build Jar') {
            agent {
                docker {
                    image 'maven:3.6.0'
                    args '-v $HOME/.m2:/root/.m2'
                }
            }
            steps {
                sh 'mvn clean package'
            }
        }
        stage('Build Image') {
            steps {
                script {
                    app = docker.build("myimagename")
                }
            }
        }
        stage('Push Image') {
            steps {
                script {
                    app.push("latest")
                }
            }
        }
    }
}
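One possible way to fold the cleanup into the pipeline (an untested sketch; the 'docker-host' label is an assumption for whichever node actually accumulates the images) is to append a final stage with its own agent that runs the prune:
        stage('Prune Docker data') {
            agent { label 'docker-host' }   // assumption: label of the node where images pile up
            steps {
                // -f skips the confirmation prompt; 'docker image prune -f' would be a
                // narrower alternative that removes only dangling images
                sh 'docker system prune -f'
            }
        }
The same sh step could also live in a post section so it runs even when the build fails, though with agent none a node still has to be allocated there.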

Related

Enabling Jenkins to build on a separate Docker in Docker container

I have a Docker-in-Docker setup, not on the same node as the Jenkins node, that builds Docker images. When I try to build using the Jenkins node I receive:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
To fix this, I can build a Docker image using the following in the Jenkinsfile:
stage('Docker Build') {
    agent any
    steps {
        script {
            withDockerServer([uri: "tcp://10.44.10.8:2375"]) {
                withDockerRegistry([credentialsId: 'docker', url: "https://index.docker.io/v1/"]) {
                    def image = docker.build("ron/reactive")
                    image.push()
                }
            }
        }
    }
}
This works as expected; I can use the above pipeline config to build a Docker container.
I'm now attempting to use the Docker server running at tcp://10.44.10.8:2375 to package a Java Maven project in a new container running on Docker. I've defined the pipeline build as:
pipeline {
    agent any
    stages {
        stage('Maven package') {
            agent {
                docker {
                    image 'maven:3-alpine'
                    args '-v /root/.m2:/root/.m2'
                }
            }
            stages {
                stage('Build') {
                    steps {
                        sh 'mvn -B -DskipTests clean package'
                    }
                }
            }
        }
    }
}
And receive this message from Jenkins with no further output:
[Pipeline] }
[Pipeline] // stage
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Maven package)
[Pipeline] node
Still waiting to schedule task
‘Jenkins’ doesn’t have label ‘dockerserverlabel’
I've configured the Docker label in Jenkins to match the 'Docker Build' settings from the Jenkinsfile above.
But it seems I'm missing some other configuration in Jenkins and/or the Jenkinsfile that would enable the Docker image to be built on tcp://10.44.10.8:2375.
I'm working through https://www.jenkins.io/doc/tutorials/build-a-java-app-with-maven/, which describes a pipeline for building a Maven project on Docker:
pipeline {
    agent {
        docker {
            image 'maven:3-alpine'
            args '-v /root/.m2:/root/.m2'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B -DskipTests clean package'
            }
        }
    }
}
But how to configure the build on a separate Docker container is not described.
Can this Jenkins config:
stage('Docker Build') {
    agent any
    steps {
        script {
            withDockerServer([uri: "tcp://10.44.10.8:2375"]) {
                withDockerRegistry([credentialsId: 'docker', url: "https://index.docker.io/v1/"]) {
                    def image = docker.build("ron/reactive")
                    image.push()
                }
            }
        }
    }
}
be used with
pipeline {
    agent {
        docker {
            image 'maven:3-alpine'
            args '-v /root/.m2:/root/.m2'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B -DskipTests clean package'
            }
        }
    }
}
?
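A possible way to combine the two (an untested sketch, reusing the daemon URI from above) would be to run the Maven stage through docker.image(...).inside wrapped in withDockerServer:
stage('Maven package') {
    agent any
    steps {
        script {
            // Point the Docker Pipeline steps at the remote daemon, then run the
            // Maven build inside a container started on that daemon.
            withDockerServer([uri: "tcp://10.44.10.8:2375"]) {
                docker.image('maven:3-alpine').inside('-v /root/.m2:/root/.m2') {
                    sh 'mvn -B -DskipTests clean package'
                }
            }
        }
    }
}
One caveat: inside() bind-mounts the Jenkins workspace into the container, so this only works if the remote daemon can actually see the agent's filesystem (for example, when the agent runs on the Docker host itself).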

How to push an openshift built container image to a private registry through a Jenkinsfile

Using the OpenShift oc new-app command, I have built a container image. Instead of pushing the image to a local container registry, I want to push the generated image to a private registry. As I am using Jenkins for CI/CD, I want to automate the process of generating the image and pushing it to the private registry.
I am able to achieve the generation part, but I am stuck on pushing the image to a private registry through the Jenkinsfile. Any pointers on how to achieve this are appreciated.
The article 'Jenkins Building Docker Image and Sending to Registry' discusses how this might be done.
pipeline {
    environment {
        registry = "gustavoapolinario/docker-test"
        registryCredential = 'dockerhub'
        dockerImage = ''
    }
    agent any
    tools { nodejs "node" }
    stages {
        stage('Cloning Git') {
            steps {
                git 'https://github.com/gustavoapolinario/node-todo-frontend'
            }
        }
        stage('Build') {
            steps {
                sh 'npm install'
            }
        }
        stage('Test') {
            steps {
                sh 'npm test'
            }
        }
        stage('Building image') {
            steps {
                script {
                    dockerImage = docker.build registry + ":$BUILD_NUMBER"
                }
            }
        }
        stage('Deploy Image') {
            steps {
                script {
                    docker.withRegistry('', registryCredential) {
                        dockerImage.push()
                    }
                }
            }
        }
        stage('Remove Unused docker image') {
            steps {
                sh "docker rmi $registry:$BUILD_NUMBER"
            }
        }
    }
}
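In that example the empty first argument to docker.withRegistry means Docker Hub. For a private registry, the registry URL and its credentials ID would go there instead (a sketch; the URL and credentials ID below are placeholder values):
stage('Deploy to private registry') {
    steps {
        script {
            // 'https://registry.example.com' and 'private-registry-creds' are hypothetical examples
            docker.withRegistry('https://registry.example.com', 'private-registry-creds') {
                dockerImage.push("$BUILD_NUMBER")
                dockerImage.push('latest')
            }
        }
    }
}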

Jenkins declarative pipeline with separate docker images for a whole pipeline and some stage(s)

It's possible to provide different Docker images for different Jenkins stages, but is it possible to provide a default Docker image for the whole pipeline while providing a specific Docker image for some stage(s)?
For example, this is possible:
pipeline {
    agent none
    stages {
        stage('first stage') {
            agent {
                docker { image 'first_docker' }
            }
            steps {
                sh 'echo "just do it"'
            }
        }
        stage('second stage') {
            agent {
                docker { image 'second_docker' }
            }
            steps {
                sh 'echo "did it"'
            }
        }
    }
}
The question is about:
pipeline {
    agent {
        docker { image 'default_docker' }
    }
    stages {
        stage('first stage') {
            steps {
                sh 'echo "just do it"'
            }
        }
        stage('second stage') {
            agent {
                docker { image 'second_docker' }
            }
            steps {
                sh 'echo "did it"'
            }
        }
    }
}
I don't mean the case where the default Docker image has Docker inside it, producing a 'Matryoshka' (nested doll) setup.

How can I run something during agent setup in a Jenkins declarative pipeline?

Our current Jenkins pipeline looks like this:
pipeline {
    agent {
        docker {
            label 'linux'
            image 'java:8'
            args '-v /home/tester/.gradle:/.gradle'
        }
    }
    environment {
        GRADLE_USER_HOME = '/.gradle'
        GRADLE_PROPERTIES = credentials('gradle.properties')
    }
    stages {
        stage('Build') {
            steps {
                sh 'cp ${GRADLE_PROPERTIES} ${GRADLE_USER_HOME}/'
                sh './gradlew clean check'
            }
        }
    }
    post {
        always {
            junit 'build/test-results/**/*.xml'
        }
    }
}
We mount /.gradle because we want to reuse cached data between builds. The problem is, if the machine is a brand new build machine, the directory on the host does not yet exist.
Where do I put setup logic that runs beforehand, so that I can ensure this directory exists on the host before the Docker image is run?
You can run a prepare stage before all the other stages and switch agents after that:
pipeline {
    agent { label 'linux' } // slave where the docker agent needs to run
    environment {
        GRADLE_USER_HOME = '/.gradle'
        GRADLE_PROPERTIES = credentials('gradle.properties')
    }
    stages {
        stage('Prepare') {
            steps {
                // prepare the host, e.g. create the Gradle cache dir that is mounted below
                // (the mkdir is an example; adjust to whatever setup the host needs)
                sh 'mkdir -p /home/tester/.gradle'
            }
        }
        stage('Build') {
            agent {
                docker {
                    label 'linux' // should be the same as the slave label
                    image 'java:8'
                    args '-v /home/tester/.gradle:/.gradle'
                }
            }
            steps {
                sh 'cp ${GRADLE_PROPERTIES} ${GRADLE_USER_HOME}/'
                sh './gradlew clean check'
            }
        }
    }
    post {
        always {
            junit 'build/test-results/**/*.xml'
        }
    }
}
Specifying a Docker Label: Pipeline provides a global option in the Manage Jenkins page, and on the Folder level, for specifying which agents (by Label) to use for running Docker-based Pipelines.

Reuse agent (docker container) in Jenkins between multiple stages

I have a pipeline with multiple stages, and I want to reuse a docker container between only "n" number of stages, rather than all of them:
pipeline {
    agent none
    stages {
        stage('Install deps') {
            agent {
                docker { image 'node:10-alpine' }
            }
            steps {
                sh 'npm install'
            }
        }
        stage('Build, test, lint, etc') {
            agent {
                docker { image 'node:10-alpine' }
            }
            parallel {
                stage('Build') {
                    agent {
                        docker { image 'node:10-alpine' }
                    }
                    // This fails because it runs in a new container, and the node_modules
                    // created during the first installation are gone at this point.
                    // How do I reuse the same container created in the install deps step?
                    steps {
                        sh 'npm run build'
                    }
                }
                stage('Test') {
                    agent {
                        docker { image 'node:10-alpine' }
                    }
                    steps {
                        sh 'npm run test'
                    }
                }
            }
        }
        // Later on, there is a deployment stage which MUST deploy using a specific node,
        // which is why "agent none" is used in the first place.
    }
}
See the reuseNode option for the Jenkins Pipeline docker agent:
https://jenkins.io/doc/book/pipeline/syntax/#agent
pipeline {
    agent any
    stages {
        stage('NPM install') {
            agent {
                docker {
                    /*
                     * Reuse the workspace on the agent defined at top-level of
                     * Pipeline, but run inside a container.
                     */
                    reuseNode true
                    image 'node:12.16.1'
                }
            }
            environment {
                /*
                 * Change HOME, because the default is usually the root dir, and
                 * the Jenkins user may not have write permissions in that dir.
                 */
                HOME = "${WORKSPACE}"
            }
            steps {
                sh 'env | sort'
                sh 'npm install'
            }
        }
    }
}
You can use a scripted pipeline, where you can put multiple stages inside a single docker.image(...).inside block, e.g.:
node {
    checkout scm
    docker.image('node:10-alpine').inside {
        stage('Build') {
            sh 'npm run build'
        }
        stage('Test') {
            sh 'npm run test'
        }
    }
}
(code untested)
For a declarative pipeline, one solution is to use a Dockerfile in the root of the project. For example:
Dockerfile
FROM node:10-alpine
# further instructions
Jenkinsfile
pipeline {
    agent {
        dockerfile true
    }
    stages {
        stage('Build') {
            steps {
                sh 'npm run build'
            }
        }
        stage('Test') {
            steps {
                sh 'npm run test'
            }
        }
    }
}
