How can I load the Docker section of my Jenkins pipeline (Jenkinsfile) from a file?

I have multiple pipelines using Jenkinsfiles that retrieve a Docker image from a private registry. I would like to be able to load the Docker-specific information into the pipelines from a file, so that I don't have to modify all of my Jenkinsfiles when the Docker label or credentials change. I attempted to do this using the example Jenkinsfile below:
def common
pipeline {
    agent none
    options {
        timestamps()
    }
    stages {
        stage('Image fetch') {
            steps {
                script {
                    common = load('/home/jenkins/workspace/common/docker_image')
                    common.fetchImage()
                }
            }
        }
    }
}
With docker_image containing:
def fetchImage() {
    agent {
        docker {
            label 'target_node'
            image 'registry-url/image:latest'
            alwaysPull true
            registryUrl 'https://registry-url'
            registryCredentialsId 'xxxxxxx-xxxxxx-xxxx'
        }
    }
}
I got the following error when I executed the pipeline:
Required context class hudson.FilePath is missing
Perhaps you forgot to surround the code with a step that provides this, such as: node, dockerNode
How can I do this using a declarative pipeline?

There are a few issues with this:
You can allocate a node only at the top level:
pipeline {
    agent ...
}
Or you can use a per-stage node allocation like so:
pipeline {
    agent none
    ...
    stages {
        stage("My stage") {
            agent ...
            steps {
                // run my steps on this agent
            }
        }
    }
}
You can check the Pipeline syntax documentation for details.
The steps are supposed to be executed on the allocated node (or in some cases they can be executed without allocating a node at all).
Declarative Pipeline and Scripted Pipeline are two different things. Yes, it's possible to mix them, but Scripted Pipeline is meant either to abstract some logic into a shared library, or to give you a way to be a "hard core master ninja" and write your own fully custom pipeline using scripted syntax and none of the declarative sugar.
I am not sure how your Docker <-> Jenkins connection is set up, but you would probably be better off installing a plugin and using agent templates to provide the agents you need.
If you have a Docker Swarm you can install the Docker Swarm Plugin and then in your pipeline you can just configure pipeline { agent { label 'my-agent-label' } }. This will automatically provision your Jenkins with an agent in a container which uses the image you specified.
If you have exposed /var/run/docker.sock to your Jenkins, then you could use Yet Another Docker Plugin, which has the same concept.
This way you can move the agent configuration into the agent template, and your pipeline will only need a label to get the agent it requires.
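If the goal is simply to keep the registry details in one place, a shared library global step is the usual vehicle. Below is a minimal sketch, not a drop-in solution: the node label, registry URL, and credentials ID are placeholders echoed from the question, and fetchImage is a hypothetical step name.

// vars/fetchImage.groovy in a shared library (illustrative)
def call() {
    node('target_node') {  // placeholder label
        // docker.withRegistry and docker.image come from the Docker Pipeline plugin
        docker.withRegistry('https://registry-url', 'xxxxxxx-xxxxxx-xxxx') {
            docker.image('registry-url/image:latest').pull()
        }
    }
}

A Jenkinsfile can then call fetchImage() inside a script block, so a change of label or credentials touches only the library.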

Related

I can't mix ECS and k8s slaves

My top level pipeline directive looks like this:
pipeline {
    agent {
        kubernetes {
            defaultContainer "mycontainer"
            yaml 'my full podspec'
        }
    }
Then later in a stage I want to use ECS like this:
stage('my stage') {
    agent { node { label 'my-ecs-cluster'; customWorkspace 'myworkspace' } }
I get this error:
Also: java.lang.ClassCastException: com.cloudbees.jenkins.plugins.amazonecs.ECSSlave cannot be cast to org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave
If I put the ECS agent in the top-level pipeline directive and then switch to k8s slaves in subsequent stages, I don't get an error, so this is only an issue one way. Very frustrating. I only have a few stages that need the ECS slave, so having to define it as the default in the pipeline definition is a pain.
The bigger issue is that if I define the ECS slaves in the pipeline definition, they are provisioned every time, and in some cases I don't want them provisioned at all because they aren't needed.
To add more info: I don't think setting agent any or agent none will work either, because when I specify the agent in multiple stages each stage uses a different container. I want to be able to set the ECS agent at the top-level pipeline when required, because then all the stages run in the same container and use the same workspace.
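For reference, a minimal sketch of the direction that reportedly works (ECS slave at the top level, Kubernetes agent per stage); the labels and podspec strings are placeholders from the post:

pipeline {
    agent { node { label 'my-ecs-cluster' } }  // ECS slave as the pipeline default
    stages {
        stage('k8s stage') {
            // switching to a Kubernetes agent in a stage does not raise the cast error
            agent {
                kubernetes {
                    defaultContainer 'mycontainer'
                    yaml 'my full podspec'
                }
            }
            steps {
                sh 'echo running in the pod'
            }
        }
    }
}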

How to set an agent{} in a shared pipeline library in Jenkins?

I am writing a shared pipeline library for projects using either a Docker agent or agents specified via labels. I want the agent{} section to be configurable.
In a regular Jenkinsfile for a project using Docker the agent section looks like this:
agent {
    docker {
        label 'docker'
        image 'my-image'
    }
}
The agent section of projects not using Docker looks like this:
agent {
    node {
        label 'FOO'
        label 'BAR'
    }
}
What is the correct syntax for the agent section of the shared pipeline lib that would yield either the first or the second agent{} examples from above?
// /vars/sharedPipeline.groovy
def call(body) {
    def pipelineParams = [:]
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = pipelineParams
    body()
    pipeline {
        agent {
            // <---- What goes here? What is the value that pipelineParams.buildAgent should have?
        }
    }
    ...
}
I want to avoid solutions that will force me to:
manually invoke 'docker run' from within a stage
check the label to decide what to do next
Update:
Based on what I have found so far, it is not possible to dynamically choose between Docker and non-Docker agents at the top level of the pipeline. There are some workarounds available; they either require specifying an agent for each individual stage, or they use docker.image().inside() chains guarded by control-flow constructs, as sketched below.
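A rough sketch of the second workaround, assuming the library is handed a node label, an optional dockerImage, and a buildCommand (all three parameter names are made up for this example):

// /vars/sharedPipeline.groovy (scripted workaround, illustrative names)
def call(body) {
    def pipelineParams = [:]
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = pipelineParams
    body()

    node(pipelineParams.label) {
        checkout scm
        if (pipelineParams.dockerImage) {
            // run the build inside the requested container
            docker.image(pipelineParams.dockerImage).inside {
                sh pipelineParams.buildCommand
            }
        } else {
            // plain build on the node itself
            sh pipelineParams.buildCommand
        }
    }
}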

Scripted Jenkinsfile Docker Agent, how to specify the reuseNode flag and is it required?

According to the Jenkins documentation referenced here, to ensure that a Docker agent defined on a particular stage runs on the same node as the one defined at the top level of the pipeline, the flag reuseNode must be set to true.
reuseNode
A boolean, false by default. If true, run the container on the node specified at the top level of the Pipeline, in the same workspace, rather than on a new node entirely. This option is valid for docker and dockerfile, and only has an effect when used on an agent for an individual stage.
For declarative pipelines this can be achieved using:
agent {
    docker {
        image 'gradle-java:0.0.1'
        reuseNode true
    }
}
However I am unable to find any example of how to set this in scripted pipelines.
Can somebody help with how to achieve this in scripted pipelines?
I found that the way to do this in a scripted pipeline, via docker.image(dockerImage).inside(dockerArgs), is not to include it at all. From what I can tell, unlike the declarative pipeline, this runs on the same node by default.
Instead, if you wanted to run on a different node, you would wrap the call in a node block:
node {
    docker.image(dockerImage).inside(dockerArgs) {
        sh '''echo container'''
    }
}
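Conversely, a sketch of the same-node default described above: within an already allocated node, calling inside() without a nested node block keeps the container on that node (the label is a placeholder):

node('my-node') {
    // the container below also runs on my-node; no nested node block needed
    docker.image(dockerImage).inside(dockerArgs) {
        sh '''echo container'''
    }
}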
In newer versions of the declarative pipeline this has been enhanced, and the suggestion is to use a label:
agent {
    docker {
        image 'maven:3-alpine'
        label 'my-defined-label'
        args '-v /tmp:/tmp'
    }
}
If you want to do the same with a scripted pipeline, pass the agent label name to node(agentName); it would look like:
node("my-defined-label") {
docker.image('maven:3-alpine').inside('-v $HOME/.m2:/root/.m2') {
stage('Build') {
sh 'mvn -B'
}
}
}

Do I have to use a node block in Declarative Jenkins pipelines?

I was reading about the best practices of a Jenkins pipeline.
I have created a declarative pipeline that does not execute parallel jobs, and I want to run everything on the same slave.
I use:
agent {
    label 'xxx'
}
The rest of my pipeline looks like:
pipeline {
    agent {
        label 'xxx'
    }
    triggers {
        pollSCM pipelineParams.polling
    }
    options {
        buildDiscarder(logRotator(numToKeepStr: '3'))
    }
    stages {
        stage('stage1') {
            steps {
                xxx
            }
        }
        stage('stage2') {
            steps {
                xxx
            }
        }
    }
    post {
        always {
            cleanWs()
        }
        failure {
            xxx
        }
        success {
            xxx
        }
    }
}
Then I read a list of pipeline best practices. Point 4 says:
Do: All Material Work Within a Node
Any material work within a pipeline should occur within a node block.
Why? By default, the Jenkinsfile script itself runs on the Jenkins
master, using a lightweight executor expected to use very few
resources. Any material work, like cloning code from a Git server or
compiling a Java application, should leverage Jenkins distributed
builds capability and run on an agent node.
I suspect this is for scripted pipelines.
Now my questions are:
Do I ever have to create a node inside a stage in a declarative pipeline (it is possible), or should I use agent inside the stage when I want to run that stage on another specific agent?
My current pipeline defines a label that matches 4 agents, yet the whole pipeline always executes on one agent (which is what I want). I would have expected stage1 to run on slaveX and stage2 on slaveY. Why is this not happening?
The documentation is quite misleading.
What the documentation is suggesting is to take advantage of distributed builds. Distributed builds are activated by using either the agent or the node block.
The agent should be used when you want to run the pipeline almost exclusively on one node. The node block allows for more flexibility, as it lets you specify where a granular task should be executed.
If you are running the pipeline on some agent and you encapsulate a step with node pointing to the same agent, there won't be any effect except that a new executor will be allocated to the step encapsulated in node.
There is no obvious benefit in doing so; you will simply be consuming executors that you don't need.
In conclusion, you are already using distributed builds when using agent, and this is what the documentation is vaguely recommending.
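To illustrate the agent-versus-node distinction, here is a minimal sketch; the labels and commands are invented for the example:

pipeline {
    agent { label 'linux' }  // the whole pipeline runs on a 'linux' agent
    stages {
        stage('Build') {
            steps {
                sh 'make'
            }
        }
        stage('Sign') {
            steps {
                script {
                    // a granular task sent to a different machine via a node block
                    node('windows') {
                        bat 'sign.cmd'
                    }
                }
            }
        }
    }
}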

Jenkins Pipeline: Are agents required to utilize Jenkinsfile?

I am investigating the use of Jenkins Pipeline (specifically using Jenkinsfile). The context of my implementation is that I'm deploying a Jenkins instance using Chef. Part of this deployment may include some seed jobs, which will pull job configurations from source control (Jenkinsfile), to automate creation of our build jobs via Chef.
I've investigated the Jenkins documentation for both Pipeline and Jenkinsfile, and it seems to me that in order to use Jenkins Pipeline, agents are required to be configured and set up in addition to the Jenkins master.
Am I understanding this correctly? Must Jenkins agents exist in order to use Jenkins Pipeline's Jenkinsfile? This specific line in the Jenkinsfile documentation leads me to believe this to be true:
Jenkinsfile (Declarative Pipeline)
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building..'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing..'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying....'
            }
        }
    }
}
The Declarative Pipeline example above contains the minimum necessary
structure to implement a continuous delivery pipeline. The agent
directive, which is required, instructs Jenkins to allocate an
executor and workspace for the Pipeline.
Thanks in advance for any Jenkins guidance!
The 'agent' part of the pipeline is required; however, this does not mean that you are required to have an external agent in addition to your master. If all you have is the master, the pipeline will execute on the master. If you have additional agents available, the pipeline will execute on whichever agent happens to be available when you run the pipeline.
If you go into Manage Jenkins -> Manage Nodes and Clouds, you can see that 'Master' itself is treated as one of the default nodes. With the declarative format, agent any indicates any available agent, which includes 'Master' as well, depending on the node configuration.
If you configure any new node, it can then be treated as a new agent in the pipeline, and agent any can be replaced by agent { label 'Node_Name' }.
You may refer to the Jenkins documentation on nodes and agents, which briefly covers the agent, node, and slave concepts.
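As a minimal illustration of pinning the pipeline to a named node (Node_Name is a placeholder):

pipeline {
    agent { label 'Node_Name' }  // run only on the configured node
    stages {
        stage('Build') {
            steps {
                echo 'Building on Node_Name..'
            }
        }
    }
}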
