How do I specify a node type that has docker installed in Jenkinsfile?

I want to use Docker to build my code. Not all Jenkins slaves have Docker installed, so if I just specify:
agent {
    dockerfile true
}
it fails with the message "docker not found". I believe this is because Docker is not installed on the Jenkins slave/master that picked up the job. So my question is: how can I ensure that this Jenkins job gets picked up by a node that has Docker installed?
If I use:
agent {
    node {
        label 'has-docker'
        dockerfile true
    }
}
I get the following error:
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 2: Only one agent type is allowed per agent section @ line 2, column 5.
agent {
^
WorkflowScript: 2: No agent type specified. Must be one of [docker, dockerfile, label, any, none] @ line 2, column 5.
agent {

Using two agent directives resolves the issue:
agent none for the entire pipeline and a dockerfile agent for the specific stage.
Your Jenkinsfile should look like this:
pipeline {
    agent none
    stages {
        stage('stage_name') {
            agent {
                dockerfile {
                    label 'slave-label'
                    dir 'relative/location/of/Dockerfile'
                }
            }
            ...
        }
    }
}

You can provide an extra label argument inside the docker block, as documented here.
docker
Execute the Pipeline, or stage, with the given container which will be dynamically provisioned on a node pre-configured to accept Docker-based Pipelines, or on a node matching the optionally defined label parameter.
pipeline {
    agent {
        docker {
            image 'node:14-alpine'
            label 'docker-node'
        }
    }
    stages {
        ...
    }
}
It's similar if you're using dockerfile instead of docker.
pipeline {
    agent {
        dockerfile {
            filename 'Dockerfile'
            dir 'build'
            label 'docker-node'
        }
    }
    stages {
        ...
    }
}

You can use the example below (modify it as per your needs):
agent {
    // Equivalent to "docker build -f Dockerfile.build --build-arg version=1.0.2 ./build/"
    dockerfile {
        filename 'Dockerfile.build'
        dir 'build'
        label 'my-defined-label'
        additionalBuildArgs '--build-arg version=1.0.2'
        args '-v /tmp:/tmp'
    }
}

You need to give the agents that have Docker installed a common label and use that label inside the agent section of the pipeline. The Jenkins documentation states the following about labels when defining a node:
Labels (or tags) are used to group multiple agents into one logical group. For example, if you have multiple Windows agents and you have a job that must run on Windows, then you could configure all your Windows agents to have the label windows, and then tie that job to this label. This would ensure that your job runs on one of your Windows agents, but not on any agents without this label.
Labels do not necessarily have to represent the operating system on the agent; you can also use labels to note the CPU architecture, or that a certain tool is installed on the agent.
Multiple labels must be separated by a space. For example, windows docker would assign two labels to the agent: windows and docker.
Labels may contain any non-space characters, but you should avoid special characters such as any of these: !&|<>(), as other Jenkins features allow for defining label expressions, where these characters may be used.
In your case, do the same by labeling nodes that have Docker installed with a common label such as `docker`.
As for the error you are encountering, the fix is mentioned here:
agent {
    dockerfile {
        label 'docker'
    }
}

Related

Conditionally use docker agent or different agentLabel based on user choice in jenkins pipeline

I want to be able to use an AWS node to run our Jenkins build stages, or one of our PCs connected to hardware that can also test the systems. We have some PCs set up with a recognised agent, and for the AWS node I want to run the steps in a Docker container.
Based on this, I would like to decide whether to use the Docker agent or the custom agent based on parameters passed in by the user at build time. My Jenkinsfile looks like this:
#!/usr/bin/env groovy
@Library('customLib')
import groovy.transform.Field
pipeline {
    agent none
    parameters {
        choice(name: 'NODE', choices: ['PC1', 'PC2', 'dockerNode', 'auto'], description: 'Select which node to run on? Select auto for auto assignment.')
    }
    stages {
        stage('Initialise build environment') {
            agent { // agent
                docker {
                    image 'docker-image'
                    label 'docker-agent'
                    registryUrl 'url'
                    registryCredentialsId 'dockerRegistry'
                    args '--entrypoint=\'\''
                    alwaysPull true
                }
            }
            steps {
                script {
                    if (params.NODE != 'auto' && params.NODE != 'dockerNode') {
                        agentLabel = params.NODE + ' && ' + 'custom_string' // this will lead to running the stages on our PCs
                    } else {
                        agentLabel = 'docker-agent'
                    }
                    env.NODE = params.NODE
                }
            }
        } // initialise build environment
        stage('Waiting for available node') {
            agent { label agentLabel } // if auto or AWSNode is chosen, I want to run in the docker container defined above; otherwise, use the PC.
            post { cleanup { cleanWs() } }
            stages {
                stage('Import and Build project') {
                    steps {
                        script {
                            ...
                        }
                    }
                } // Build project
                ...
            }
        } // Waiting for available node
    }
}
Assuming the PC option works fine if I don't add this Docker option, and the Docker option also works fine on its own, how do I add a label to my custom Docker agent and use it conditionally like above? Running this Jenkinsfile, I get the message "There are no nodes with the label 'docker-agent'", which seems to say that it can't find the label I defined.
Edit: I worked around this problem by adding two outer stages, each with a different agent and a when block that decides whether that stage runs (see the sketch after the pros/cons below).
Pros: I'm able to choose where the steps are run.
Cons:
The inner build and archive stages are identical, so they are simply repeated for each outer stage. (This may become a pro if I want to run a separate HW test stage on only one of the agents.)
Both outer stages always run, which means the Docker container is always pulled from the registry even if it is not used (i.e. if we choose to run the build process on a PC).
Stages do not seem to lend themselves easily to being encapsulated into functions.
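A minimal sketch of that two-stage workaround follows. The stage names and echo placeholders are mine; the image, labels and parameter values are taken from the Jenkinsfile above, and whether params can be interpolated in a stage-level label depends on your Jenkins version. Note that beforeAgent true makes Jenkins evaluate the when condition before allocating the agent, which should also avoid the always-pulled container mentioned in the cons:
pipeline {
    agent none
    parameters {
        choice(name: 'NODE', choices: ['PC1', 'PC2', 'dockerNode', 'auto'], description: 'Select which node to run on? Select auto for auto assignment.')
    }
    stages {
        stage('Build on PC') {
            when {
                beforeAgent true   // evaluate before grabbing an agent, so a skipped stage allocates nothing
                expression { params.NODE != 'auto' && params.NODE != 'dockerNode' }
            }
            agent { label "${params.NODE} && custom_string" }
            steps {
                echo 'building on the selected PC'   // placeholder for the real build/archive steps
            }
        }
        stage('Build in Docker') {
            when {
                beforeAgent true   // avoids pulling the image when this stage is skipped
                expression { params.NODE == 'auto' || params.NODE == 'dockerNode' }
            }
            agent {
                docker {
                    image 'docker-image'
                    label 'docker-agent'
                }
            }
            steps {
                echo 'building inside the container'   // placeholder for the same build/archive steps
            }
        }
    }
}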

Why is Jenkins not using the correct image for docker builds?

No matter what I try, I seem to be unable to get a declarative pipeline to build my project inside a Docker container with the correct image.
I have verified the following:
Jenkins does build the correct image (based on messages in the log)
When I build the image manually, it is build correctly
When building the project inside a container with the correct image, the build succeeds
The Jenkins steps do run in a container with some image.
As far as I can tell, Jenkins simply uses the base image and not the correct one, resulting from the dockerfile I specify.
Things I've tried:
Let Jenkins figure it out
pipeline {
    agent dockerfile
Using docker at the top level:
pipeline {
    agent {
        dockerfile {
            filename 'Dockerfile'
            reuseNode true
        }
    }
    stages {
        stage('configure') {
            steps {
Use docker in each step
pipeline {
    agent none
    stages {
        stage('configure') {
            agent {
                dockerfile {
                    filename 'Dockerfile'
                    reuseNode true
                }
            }
            steps {
The examples are abbreviated due to their number. Docker is not mentioned anywhere outside of the sections shown, and simply removing the Docker parts and using a regular agent works fine.
Logs
The logs are not very helpful. They simply state that the image is built, verify that it exists, and then fail to execute commands that were just installed (meson in this case).
First of all, I suggest you read:
The Pipeline syntax for the agent declaration, and in particular the dockerfile section
This basic example of how to use dockerfile. Start with the minimal pipeline with agent { dockerfile true } and please show us the logs, for example.
Without any logs or a more detailed explanation of your setup, it is difficult to help you.
I can certainly tell you that the second try is wrong because
reuseNode is valid for docker and dockerfile, and only has an effect when used on an agent for an individual stage.
On the other hand, I am not sure how the third try could ever work: with agent none you are forcing each stage to have an agent section, but in the stage's agent section you have the reuseNode option set to true. Isn't that a contradiction? How could you reuse a top-level node if one does not exist?
I know this is not an answer, but it is also too long to fit in the comments, in my opinion.
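For reference, the minimal dockerfile pipeline suggested above could look like this; the sanity-check step is mine and assumes the Dockerfile (which installs meson, per the question) sits in the repository root:
pipeline {
    agent { dockerfile true }   // build ./Dockerfile and run all steps inside the resulting image
    stages {
        stage('check') {
            steps {
                // if the steps really run in the image built from the Dockerfile,
                // a tool installed there (meson, in the question) must be on the PATH
                sh 'which meson && meson --version'
            }
        }
    }
}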
I always use it like this, with a pre-built image:
pipeline {
    agent {
        docker { image 'node:16-alpine' }
    }
    stages {
        stage('Test') {
            steps {
                sh 'node --version'
            }
        }
    }
}
But I can only guess what you want to do inside the docker environment.

Creating a Python Pipeline on Jenkins but get access denied from docker

I've created a Jenkinsfile in my Git repository that is defined as this:
pipeline {
    // The none parameter in the agent section means that no global agent will be allocated for the entire Pipeline's
    // execution and that each stage directive must specify its own agent section.
    agent none
    stages {
        stage('Build') {
            agent {
                docker {
                    // This image parameter (of the agent section's docker parameter) downloads the python:3.8
                    // Docker image and runs this image as a separate container. The Python container becomes
                    // the agent that Jenkins uses to run the Build stage of the Pipeline project.
                    image 'python:3.8.3'
                }
            }
            steps {
                // This sh step runs the Python command to install the application's dependencies
                sh 'pip install -r requirements.txt'
            }
        }
    }
}
When I tried to run the job with this pipeline, I got the following error:
I also tried to use image python:latest but this option didn't work either.
Can someone explain this to me? :)
Go to Computer Management -> Local Users and Groups and make sure the user Jenkins runs as is added to the docker-users group.
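Once the user is in that group (the change typically takes effect after a log off/on or a restart of the Jenkins service), a quick sanity check from a pipeline can confirm that the Jenkins account reaches the Docker daemon. A minimal sketch, assuming a Windows agent (hence bat):
pipeline {
    agent any
    stages {
        stage('Check Docker access') {
            steps {
                // fails with an access-denied error if the Jenkins user still cannot reach the Docker daemon
                bat 'docker version'
            }
        }
    }
}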

Run multiple stages of a Jenkins declarative pipeline in the same Docker slave

Objective
The objective here is to migrate a scripted Jenkins pipeline to declarative. The scripted pipeline runs on a Docker slave managed by Kubernetes, and the working syntax is as below:
slave = 'dtr#tes.com/namespace/image:1.0'
dockerNode(image: slave) {
    stage('1') { echo "1" }
    stage('2') { echo "2" }
}
The scripted pipeline works perfectly.
Concerns
I am trying to use dockerNode in a declarative pipeline, but in declarative syntax dockerNode is only allowed after steps inside a stage,
e.g.:
pipeline {
    agent any
    stages {
        stage('1and2') {
            steps {
                dockerNode(image: slave) {
                    echo "1"
                    echo "2"
                }
            }
        }
    }
}
This forces us to club bulky steps into one stage rather than splitting them across multiple stages. So we would like your help to understand how we can better structure this and have multiple stages that always run in the same container. The container images are managed by Kubernetes (a kube pod with Docker images).
To use one container for all of the stages, you need to specify it in the agent section:
pipeline {
    agent {
        label 'docker-agent-label'
    }
}
To use it like this, you need to configure a pod template under 'Manage Jenkins' -> 'Manage Nodes and Clouds' -> 'Configure Clouds' -> 'Add a new cloud', or use an existing one.
The cloud must be Kubernetes if your Jenkins host is integrated with k8s.
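A sketch of the resulting pipeline, assuming 'docker-agent-label' is the label you gave that pod template; because the agent is declared once at the top level, every stage runs on the same pod/container:
pipeline {
    agent {
        label 'docker-agent-label'   // label of the pod template configured under Configure Clouds
    }
    stages {
        stage('1') {
            steps { echo "1" }
        }
        stage('2') {
            steps { echo "2" }
        }
    }
}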

How can I load the Docker section of my Jenkins pipeline (Jenkinsfile) from a file?

I have multiple pipelines using Jenkinsfiles that retrieve a Docker image from a private registry. I would like to be able to load the Docker-specific information into the pipelines from a file, so that I don't have to modify all of my Jenkinsfiles when the Docker label or credentials change. I attempted to do this using the example Jenkinsfile below:
def common
pipeline {
    agent none
    options {
        timestamps()
    }
    stages {
        stage('Image fetch') {
            steps {
                script {
                    common = load('/home/jenkins/workspace/common/docker_image')
                    common.fetchImage()
                }
            }
        }
    }
}
With docker_image containing:
def fetchImage() {
    agent {
        docker {
            label 'target_node'
            image 'registry-url/image:latest'
            alwaysPull true
            registryUrl 'https://registry-url'
            registryCredentialsId 'xxxxxxx-xxxxxx-xxxx'
        }
    }
}
I got the following error when I executed the pipeline:
Required context class hudson.FilePath is missing Perhaps you forgot
to surround the code with a step that provides this, such as:
node,dockerNode
How can I do this using a declarative pipeline?
There are a few issues with this:
You can allocate a node only at the top level:
pipeline {
    agent ...
}
Or you can use a per-stage node allocation like so:
pipeline {
    agent none
    ....
    stages {
        stage("My stage") {
            agent ...
            steps {
                // run my steps on this agent
            }
        }
    }
}
You can check the docs here
The steps are supposed to be executed on the allocated node (or in some cases they can be executed without allocating a node at all).
Declarative Pipeline and Scripted Pipeline are two different things. Yes, it's possible to mix them, but scripted pipeline is meant either to abstract some logic into a shared library, or to provide you with a way to be a "hard core master ninja" and write your own fully custom pipeline using scripted pipeline and none of the declarative sugar.
I am not sure how your Docker <-> Jenkins connection is set up, but you would probably be better off installing a plugin and using agent templates to provide the agents you need.
If you have a Docker Swarm you can install the Docker Swarm Plugin and then in your pipeline you can just configure pipeline { agent { label 'my-agent-label' } }. This will automatically provision your Jenkins with an agent in a container which uses the image you specified.
If you have exposed /var/run/docker.sock to your Jenkins, then you could use Yet Another Docker Plugin, which has the same concept.
This way you can move the agent configuration into the agent template, and your pipeline only needs a label to get the agent it requires.
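If you would rather keep the Docker details in a shared library (the shared-library route mentioned earlier), a rough sketch of a global step is shown below. The step name insideBuildImage, the node label, registry URL and credentials id are placeholders; the docker.withRegistry and docker.image(...).inside calls come from the Docker Pipeline plugin:
// vars/insideBuildImage.groovy in a shared library (hypothetical name)
def call(Closure body) {
    // allocate a node that can reach Docker, then run the body inside the registry image
    node('target_node') {
        docker.withRegistry('https://registry-url', 'registry-credentials-id') {
            docker.image('registry-url/image:latest').inside {
                body()
            }
        }
    }
}
Each Jenkinsfile can then call insideBuildImage { sh '...' } from a steps block, so a change of image, label or credentials only touches the library.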
