Jenkinsfile - Agents questions - docker

I have a few questions about the following example:
pipeline {
    agent { label "docker" }
    stages {
        stage('Build') {
            agent {
                docker {
                    image 'maven:3.5.0-jdk-8'
                }
            }
            steps {
                ...
            }
        }
    }
}
Question 1:
When I declare an agent at the top level of the Jenkinsfile, it will be used for all stages below. So what is the difference between:
agent { label "docker" }
and
agent {
    docker {
        image 'maven:3.5.0-jdk-8'
    }
}
Will the first one use the docker agent and the second a docker agent with the maven image as the execution environment? Where is the agent with label "docker" configured/installed?
Question 2:
How does the label tag work? I know that an agent is already created somewhere and with label I just point to it - as in the example above, by default I use the "docker" agent? Does that also mean that during steps {...} this agent will be overridden by the maven agent?
Question 3:
A last question about the following example:
pipeline {
    agent {
        docker {
            image 'maven:3-alpine'
            args '-v ... -e ...'
        }
    }
    stages {
        stage('Maven Build') {
            steps {
                sh '...'
            }
        }
        stage('Docker push') {
            agent {
                docker 'openjdk:8-jdk-alpine'
            }
            steps {
                script {
                    docker.build("my-image")
                }
            }
        }
    }
    post {
        ...
    }
}
I want to build the first stage in a docker container with the maven:3-alpine image. During the build the following error is printed:
...tmp/durable-54b54bdc/script.sh: line 1: docker: not found
So I modified this example, here is the working result:
pipeline {
    agent any
    stages {
        stage('Docker push') {
            steps {
                script {
                    docker.build("my-image")
                }
            }
        }
    }
}
How does agent any work in this case? Which agent can execute docker.build?

Answer 1:
agent { label "docker" }
This will try to find an agent with the label docker and execute the steps on that agent.
agent {
    docker {
        image 'maven:3.5.0-jdk-8'
    }
}
This will try to pull the docker image named maven:3.5.0-jdk-8, start a container from it, and execute the steps of the pipeline inside that container. Which node the container is started on depends on the Docker Label configured in the global Jenkins configuration (Manage Jenkins → Configure System).
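The two forms can also be combined, so the container is explicitly started on a node with a given label. This is a sketch; the label docker and the image tag are assumptions about your setup:

```groovy
// Runs the maven container on a node labeled "docker".
agent {
    docker {
        label 'docker'              // node that must host the container
        image 'maven:3.5.0-jdk-8'   // image pulled and started on that node
    }
}
```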
Answer 2:
Defining the agent at the top level of the Pipeline ensures that an executor on an agent labeled docker will be assigned. To my knowledge, the docker container will be created on the agent labeled docker, and the steps will be executed inside that container.
Answer 3:
The reason could be that you have not configured the Docker Label (see Answer 1): the task may have been executed on the master, where docker is not installed. The other example probably works because the job happened to be executed on an agent where docker is installed.
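Rather than relying on agent any landing on a Docker-capable node, the stage can be pinned explicitly. A minimal sketch, assuming your Docker-capable nodes carry the label docker:

```groovy
pipeline {
    agent { label 'docker' }   // only nodes that actually have the Docker CLI
    stages {
        stage('Docker push') {
            steps {
                script {
                    docker.build("my-image")  // needs the docker binary on this node
                }
            }
        }
    }
}
```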

Related

Using container and Docker image for Windows and Linux task

I have a requirement to run specific stages on Linux and Windows agents. I am able to achieve this by putting agents and labels in every single stage, but the pipeline then looks for a new executor in every stage, which consumes a lot of time.
So I am trying to find a way to shorten the pipeline runtime by using the same Linux label for the Linux tasks and a Windows docker image for the Windows tasks.
With only Linux tasks the pipeline works fine and no new executor is assigned, which indeed saves runtime. However, when there is a parameter to build the project, I am unable to invoke the Docker agent. I am appending the sample Jenkins pipeline code below for quick reference.
The error is: docker: not found.
I think this is because Docker is not available on the label Linux. But if I put an agent in every single stage it works fine, though it consumes a lot of time starting the agent every time. So I want to achieve this with the same resources.
pipeline {
    agent {
        label 'Linux'
    }
    stages {
        stage('Linux') {
            steps {
                container('linux') {
                    script {
                        // some task
                    }
                }
            }
        }
        stage('env setup') {
            steps {
                container('linux') {
                    script {
                        // some task
                    }
                }
            }
        }
        stage('windows') {
            agent {
                docker {
                    label 'docker label'
                    image "docker image"
                    reuseNode true
                }
            }
            stages {
                stage('build') {
                    steps {
                        script {
                            // build with windows
                        }
                    }
                }
                stage('Push') {
                    steps {
                        script {
                            // push with windows
                        }
                    }
                }
            }
        }
        stage('Deploy') {
            steps {
                container('linux') {
                    script {
                        // deploy using linux
                    }
                }
            }
        }
    }
}

Jenkins Pipeline with Dockerfile configuration

I am struggling to get the right configuration for my Jenkins Pipeline.
It works, but I could not figure out how to separate the test & build stages.
Requirements:
Jenkins Pipeline with separate test & build stages
Test stage requires chromium (I currently use node alpine image + adding chromium)
Build stage is building a docker image, which is published later (publish stage)
Current Setup:
Jenkinsfile:
pipeline {
    environment {
        ...
    }
    options {
        ...
    }
    stages {
        stage('Restore') {
            ...
        }
        stage('Lint') {
            ...
        }
        stage('Build & Test DEV') {
            steps {
                script {
                    dockerImage = docker.build(...)
                }
            }
        }
        stage('Publish DEV') {
            steps {
                script {
                    docker.withRegistry(...) {
                        dockerImage.push()
                    }
                }
            }
        }
    }
}
Dockerfile:
FROM node:12.16.1-alpine AS build
#add chromium for unit tests
RUN apk add chromium
...
ENV CHROME_BIN=/usr/bin/chromium-browser
...
# works but runs both tests & build in the same jenkins stage
RUN npm run test-ci
RUN npm run build
...
This works, but as you can see, "Build & Test DEV" is a single stage.
I would like to have 2 separate Jenkins stages (Test, Build).
I already tried using the Jenkins docker agent and defining the image for the test stage inside the Jenkinsfile, but I don't know how to add the missing chromium package there.
Jenkinsfile:
pipeline {
    agent {
        docker {
            image 'node:12.16.1-alpine'
            //add chromium package here?
            //set Chrome_bin env?
        }
    }
I also thought about using a docker image that already includes chromium, but couldn't find any official images.
Would really appreciate your help / insights how to make this work.
You can either build your customized image (which includes the installation of Chromium), push it to a registry, and then pull it from that registry:
node {
    docker.withRegistry('https://my-registry') {
        docker.image('my-custom-image').inside {
            sh 'make test'
        }
    }
}
Or build the image directly with Jenkins with your Dockerfile:
node {
    def testImage = docker.build("test-image", "./dockerfiles/test")
    testImage.inside {
        sh 'make test'
    }
}
This builds test-image from the Dockerfile found at ./dockerfiles/test/Dockerfile.
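The Dockerfile behind such a test image could look like this. This is a sketch based on the node:12.16.1-alpine image and the apk approach from the question; the path ./dockerfiles/test/Dockerfile is an assumption:

```dockerfile
# Hypothetical ./dockerfiles/test/Dockerfile for the test image
FROM node:12.16.1-alpine

# Chromium is needed for the headless unit tests
RUN apk add --no-cache chromium
ENV CHROME_BIN=/usr/bin/chromium-browser
```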
Reference: Using Docker with Pipeline
In general, I would execute the npm run commands inside the Groovy syntax and not inside the Dockerfile. Your code would then look something like this:
pipeline {
    agent {
        docker {
            image 'node:12.16.1-alpine'
            args '-u root:root' // better would be to use sudo, but this should work
        }
    }
    stages {
        stage('Preparation') {
            steps {
                sh 'apk add chromium'
            }
        }
        stage('build') {
            steps {
                sh 'npm run build'
            }
        }
        stage('test') {
            steps {
                sh 'npm run test'
            }
        }
    }
}
I would also suggest that you collect the results within Jenkins with the Warnings Next Generation Jenkins plugin.
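Collecting the results with that plugin could look like this in a post block. This is a sketch; the choice of the esLint tool and the report path are assumptions about your linter setup:

```groovy
post {
    always {
        // recordIssues is provided by the Warnings Next Generation plugin;
        // the ESLint report pattern below is a placeholder.
        recordIssues tools: [esLint(pattern: '**/eslint-report.xml')]
    }
}
```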

passing Jenkins env variables between stages on different agents

I've looked at this Pass Artifact or String to upstream job in Jenkins Pipeline and this Pass variables between Jenkins stages and this How do I pass variables between stages in a declarative Jenkins pipeline?, but none of these questions seem to deal with my specific problem.
Basically I have a pipeline consisting of multiple stages, each run on its own agent.
In the first stage I run a shell script. Here two variables are generated. I would like to use these variables in the next stage. The methods I've seen so far seem to only work when passing variables within the same agent.
pipeline {
    stages {
        stage("stage 1") {
            agent {
                docker {
                    image 'my_image:latest'
                }
            }
            steps {
                sh """
                    export VAR1=foo
                    export VAR2=bar
                """
            }
        }
        stage("stage 2") {
            agent {
                docker {
                    image 'my_other_image:latest'
                }
            }
            steps {
                sh 'echo "$VAR1 $VAR2"'
                //expecting to see "foo bar" printed here
            }
        }
    }
}
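Exported shell variables die with the shell that set them, so they never reach another agent. One approach that survives an agent change (a sketch, not from the original question) is to write the values to files, stash them, and read them back in the next stage:

```groovy
pipeline {
    agent none
    stages {
        stage('stage 1') {
            agent { docker { image 'my_image:latest' } }
            steps {
                // Persist the values to the workspace instead of exporting them.
                sh 'echo "foo" > var1.txt && echo "bar" > var2.txt'
                stash name: 'vars', includes: 'var*.txt'
            }
        }
        stage('stage 2') {
            agent { docker { image 'my_other_image:latest' } }
            steps {
                unstash 'vars'   // copies the files into this agent's workspace
                script {
                    env.VAR1 = readFile('var1.txt').trim()
                    env.VAR2 = readFile('var2.txt').trim()
                }
                sh 'echo "$VAR1 $VAR2"'   // prints "foo bar"
            }
        }
    }
}
```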

Can't use specific images in Jenkinsfile

I've got a problem with a pipeline in which I want to build ARM images.
So I wanted to use an arm32v7 image as the build agent:
pipeline {
    agent {
        docker {
            image 'arm32v7/docker:dind'
        }
    }
    stages {
        stage('Build atom images') {
            steps {
                // Building my images.
            }
        }
        stage('Push to registry') {
            agent any
            steps {
                script {
                    withDockerRegistry(credentialsId: 'cred', url: 'https://registry.custom') {
                        // Pushing images to registry.
                    }
                }
            }
        }
    }
}
But when the pipeline runs, here's what I get:
+ docker inspect -f . arm32v7/docker:dind
.
Failed to run top '81cc646256b727780420048da5ff10e5a3256510fc8a787137651941ee54d8a0'. Error: Error response from daemon: Container 81cc646256b727780420048da5ff10e5a3256510fc8a787137651941ee54d8a0 is not running
This happens with every kind of image I choose. Can you help me with this? Am I missing something?

Jenkins declarative pipeline with multiple slaves

I have a pipeline with multiple stages, some of them are in parallel. Up until now I had a single code block indicating where the job should run.
pipeline {
    triggers { pollSCM '0 0 * * 0' }
    agent {
        dockerfile {
            label 'jenkins-slave'
            filename 'Dockerfile'
        }
    }
    stages {
        stage('1') {
            steps { sh "blah" }
        } // stage
    } // stages
} // pipeline
What I need to do now is run a new stage on a different slave, NOT in docker.
I tried adding an agent statement for that stage, but it seems it tries to run that stage within a docker container on the second slave.
stage('test new slave') {
    agent { node { label 'e2e-aws' } }
    steps {
        sh "ifconfig"
    } // steps
} // stage
I get the following error message
13:14:23 unknown flag: --workdir
13:14:23 See 'docker exec --help'.
I tried setting the agent to none for the pipeline and using an agent for every stage, and ran into 2 issues:
1. My post actions show an error.
2. The stages that have parallel stages also had an error.
I can't find any examples that are similar to what I am doing.
You can use a node block to select a node on which to run a particular stage.
pipeline {
    agent any
    stages {
        stage('Init') {
            steps {
                node('master') {
                    echo "Run inside a MASTER"
                }
            }
        }
    }
}
