Jenkins Docker Cloud - Dynamic Agent Template Image

Setup
I'm trying to set up a Jenkins CI pipeline:
A Jenkins main node which manages the pipeline
A build node which generates Docker containers (stages 1 & 2)
Another node which runs these containers; they are launched by Jenkins via the Docker remote API (stage 3)
Pipeline
Here is the pipeline (truncated):
pipeline {
    agent {
        label 'Build'
    }
    options {
        skipDefaultCheckout()
    }
    stages {
        stage('1') {
            steps {
                sh script: "$WORKSPACE/jenkins/build_packages.sh", label: "Build Packages"
            }
        }
        stage('2') {
            steps {
                sh script: "$WORKSPACE/jenkins/build_container.sh", label: "Build Container"
            }
        }
        stage('3') {
            steps {
                script {
                    def executions = [:]   // declaration elided in the truncated original
                    for (def key in testsuites.keySet()) {
                        def testsuite = key
                        executions[testsuite] = {
                            node('Docker-Test') {
                                execute_testsuite(testsuite, testsuites[testsuite])
                            }
                        }
                    }
                    parallel(executions)
                }
            }
        }
    }
}
Stage 2 pushes the built image to a remote registry, and stage 3 uses this image. This works perfectly.
Problem
My problem is that if the CI is triggered by two different developers at the same time, the image may change while the testsuites are executing...
What I would like to do is push a differently tagged image (e.g. using the git hash as the Docker tag) and use that image in stage 3. And probably use the built-in Jenkins functions to build and push the image instead of my own scripts.
But the image is set in the Docker Cloud - Agent Template configuration (see the screen capture below). Is there a way to modify this from within the pipeline?
[Screen capture: Docker Cloud - Agent Template configuration]
This seems to be possible, at least with Kubernetes:
https://support.cloudbees.com/hc/en-us/articles/360049905312-Dynamically-selecting-the-docker-image-for-a-downstream-Pipeline-using-Kubernetes-Pod-Templates
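For reference, one way to get a per-commit image without touching the Agent Template is to compute the tag and drive the container from the pipeline itself via the Docker Pipeline plugin. A sketch; the registry URL, credentials id and image name are placeholders, not values from the setup above:

script {
    // Unique tag per commit, so concurrent builds cannot overwrite each other
    def tag = sh(script: 'git rev-parse --short HEAD', returnStdout: true).trim()

    // Stage 2 equivalent: build and push the per-commit image
    docker.withRegistry('https://registry.example.com', 'registry-creds') {
        docker.build("myproject/test-image:${tag}").push()
    }

    // Stage 3 equivalent: run each testsuite inside exactly that image
    node('Docker-Test') {
        docker.withRegistry('https://registry.example.com', 'registry-creds') {
            docker.image("myproject/test-image:${tag}").inside {
                execute_testsuite(testsuite, testsuites[testsuite])
            }
        }
    }
}

Because the image reference is computed inside the run, two concurrent builds each test their own commit's image instead of racing on a shared tag.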

Related

Conditionally use docker agent or different agentLabel based on user choice in jenkins pipeline

I want to be able to use an AWS node to run our Jenkins build stages, or one of our PCs connected to hardware that can also test the systems. We have some PCs installed that have a recognised agent, and for the AWS node I want to run the steps in a Docker container.
Based on this, I would like to decide whether to use the docker agent or the custom agent based on parameters passed in by the user at build time. My Jenkinsfile looks like this:
#!/usr/bin/env groovy
@Library('customLib')
import groovy.transform.Field

pipeline {
    agent none
    parameters {
        choice(name: 'NODE', choices: ['PC1', 'PC2', 'dockerNode', 'auto'], description: 'Select which node to run on? Select auto for auto assignment.')
    }
    stages {
        stage('Initialise build environment') {
            agent {
                docker {
                    image 'docker-image'
                    label 'docker-agent'
                    registryUrl 'url'
                    registryCredentialsId 'dockerRegistry'
                    args '--entrypoint=\'\''
                    alwaysPull true
                }
            }
            steps {
                script {
                    if (params.NODE != 'auto' && params.NODE != 'dockerNode') {
                        agentLabel = params.NODE + ' && ' + 'custom_string' // this will lead to running the stages on our PCs
                    } else {
                        agentLabel = 'docker-agent'
                    }
                    env.NODE = params.NODE
                }
            }
        } // initialise build environment
        stage('Waiting for available node') {
            agent { label agentLabel } // if auto or dockerNode is chosen, I want to run in the docker container defined above; otherwise, use the PC
            post { cleanup { cleanWs() } }
            stages {
                stage('Import and Build project') {
                    steps {
                        script {
                            ...
                        }
                    }
                } // Build project
                ...
            }
        } // Waiting for available node
    }
}
Assuming the PC option works fine if I don't add this docker option, and the docker option also works fine on its own, how do I add a label to my custom docker agent and use it conditionally like above? Running this Jenkinsfile, I get the message 'There are no nodes with the label docker-agent', which seems to say that it can't find the label I defined.
Edit: I worked around this problem by adding two stages, each with a when block that decides whether the stage runs or not, and each with a different agent.
Pros: I'm able to choose where the steps are run.
Cons:
The inner build and archive stages are identical, so they are simply repeated for each outer stage. (This may become a pro if I want to run a separate HW test stage on only one of the agents.)
Both outer stages always run, which means the docker container is always pulled from the registry even if it is not used (i.e. if we choose to run the build process on a PC).
Seems like stages do not lend themselves easily to being encapsulated into functions.
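A sketch of that two-stage workaround; Declarative's beforeAgent option evaluates the when condition before the agent is allocated, which would also avoid the unnecessary image pull listed in the cons. Stage bodies here are illustrative:

stage('Build on PC') {
    when {
        beforeAgent true   // decide before allocating the agent
        expression { params.NODE != 'auto' && params.NODE != 'dockerNode' }
    }
    agent { label "${params.NODE} && custom_string" }
    steps {
        echo 'build and archive on the PC' // illustrative body
    }
}
stage('Build in Docker') {
    when {
        beforeAgent true   // a skipped stage never pulls the image
        expression { params.NODE == 'auto' || params.NODE == 'dockerNode' }
    }
    agent {
        docker {
            image 'docker-image'
            label 'docker-agent'
            registryUrl 'url'
            registryCredentialsId 'dockerRegistry'
            alwaysPull true
        }
    }
    steps {
        echo 'build and archive in the container' // illustrative body
    }
}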

Will a Jenkins pipeline compile twice when building a tag?

I want to set up a Jenkins pipeline which builds a Docker image whenever Jenkins is building a tag, so I used buildingTag() in the when condition. This works fine, but I have some trouble understanding Jenkins at this point.
Every commit triggers the "Compile" stage. If a tag is built, will the "Compile" stage be executed twice: once in the run on e.g. the master branch, and once more when the "Tag" build job is started explicitly? If so, how could this be avoided?
pipeline {
    agent any
    environment {
        APP_NAME = 'myapp'
    }
    stages {
        stage('Compile') {
            steps {
                echo "Start compiling..."
            }
        }
        stage('Build Docker Image') {
            when { buildingTag() }
            steps {
                echo "Building a Docker image..."
            }
        }
    }
}
For a multibranch project, branch builds are separate from tag builds, so yes, each build would run the compile stage. They will also have separate workspaces, so they should not affect each other.
If you don't want a stage to run at tag build, just add a when { not { buildingTag() } } expression to that stage.
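Applied to the example above, a sketch:

stage('Compile') {
    // Skip compiling in the tag build; the branch build already compiled this commit
    when { not { buildingTag() } }
    steps {
        echo "Start compiling..."
    }
}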

Jenkins Docker pipeline stuck on "Waiting for next available executor"

In my project I have a Jenkins pipeline which should execute two stages in a provided Docker image, and a third stage on the same machine but outside the container. Running this third stage on the same machine is crucial, because the previous stages produce some output that is needed later. These files are stored on the machine through mounted volumes.
In order to be sure these files are accessible in the third stage, I manually select a specific node. Here is my pipeline (modified a little, because it's from work):
pipeline {
    agent {
        docker {
            label 'jenkins-worker-1'
            image 'custom-image:1.0'
            registryUrl 'https://example.com/registry'
            args '-v $HOME/.m2:/root/.m2'
        }
    }
    stages {
        stage('Test') {
            steps {
                sh 'mvn test'
            }
        }
        stage('Package') {
            steps {
                sh 'mvn package'
                sh 'mv target workdir/'
            }
        }
        stage('Upload') {
            agent {
                node {
                    label 'jenkins-worker-1'
                }
            }
            steps {
                sh 'uploader.sh workdir'
            }
        }
    }
}
The node is preconfigured for uploading, so I can't simply upload the built target from the Docker container; it has to be done from the physical machine.
And here is my problem: while the first two stages work perfectly fine, the third stage cannot start, because "Waiting for next available executor" suddenly appears in the logs. It's obvious the node is waiting for itself, and I cannot use another machine. It looks like Docker is blocking something and Jenkins thinks the node is busy, so it waits eternally.
I am looking for a solution that will allow me to run stages both inside and outside the container, on the same machine.
Apparently the nested stages feature would solve this problem, but unfortunately it is only available since version 1.3 of the Declarative Pipeline plugin, and my node has 1.2.9.
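For reference, on a plugin version that supports it, the usual fix is to hold the node at the top level and let the container stages reuse it through the docker agent's reuseNode option, so the Upload stage never has to wait for a second executor. A sketch (reuseNode availability on a given plugin version is an assumption to verify):

pipeline {
    // Hold jenkins-worker-1 for the whole run on a single executor
    agent { label 'jenkins-worker-1' }
    stages {
        stage('Test') {
            agent {
                docker {
                    image 'custom-image:1.0'
                    registryUrl 'https://example.com/registry'
                    args '-v $HOME/.m2:/root/.m2'
                    reuseNode true   // run the container on the node held above
                }
            }
            steps {
                sh 'mvn test'
            }
        }
        stage('Package') {
            agent {
                docker {
                    image 'custom-image:1.0'
                    registryUrl 'https://example.com/registry'
                    args '-v $HOME/.m2:/root/.m2'
                    reuseNode true
                }
            }
            steps {
                sh 'mvn package'
                sh 'mv target workdir/'
            }
        }
        stage('Upload') {
            // No per-stage agent: runs directly on jenkins-worker-1, outside the container
            steps {
                sh 'uploader.sh workdir'
            }
        }
    }
}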

Jenkins Declarative Pipeline - SCM

I am following a Jenkins tutorial. The sample code I read is:
pipeline {
    agent none
    stages {
        stage('Build') {
            agent {
                docker {
                    image 'python:2-alpine'
                }
            }
            steps {
                sh 'python -m py_compile sources/add2vals.py sources/calc.py'
            }
        }
        stage('Test') {
            agent {
                docker {
                    image 'qnib/pytest'
                }
            }
            steps {
                sh 'py.test --verbose --junit-xml test-reports/results.xml sources/test_calc.py'
            }
            post {
                always {
                    junit 'test-reports/results.xml'
                }
            }
        }
        stage('Deliver') {
            agent {
                docker {
                    image 'cdrx/pyinstaller-linux:python2'
                }
            }
            steps {
                sh 'pyinstaller --onefile sources/add2vals.py'
            }
            post {
                success {
                    archiveArtifacts 'dist/add2vals'
                }
            }
        }
    }
}
So basically there are three stages: Build, Test and Deliver. They all use different images to spin up different containers. But this Jenkins job is configured to use Git as the SCM.
So when this Jenkins build runs, say the project is built in the first container. Then the second stage tests the project in another container, followed by the delivery in the third container. How does this Jenkins job make sure that these three stages operate on the code sequentially?
Based on my understanding, each stage would need to perform a git clone/git pull, and before the stage finishes, a git push would be required.
If SCM is configured through Jenkins to use Git, do we need to include the git clone/git pull, as well as git push, in the corresponding shell scripts (under steps), or is it already taken into consideration by the SCM function of Jenkins?
Thanks
In this case, you must ensure that the binary in the QA environment is the same one that goes to the UAT environment and then to Production.
For this, you should use an artifact repository or registry (Artifactory, Nexus, Docker Registry, etc.) to promote the artifacts to the Production environment.
See this link for how it was done in the Pipeline.
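On the checkout question itself: when a stage has its own agent, Declarative Pipeline normally performs the checkout scm step automatically at the start of that stage, so all three stages see the same commit and no git push is needed. Files generated in one stage can also be handed to a later stage on another agent with stash/unstash; a minimal sketch reusing the tutorial's stages:

stage('Build') {
    agent { docker { image 'python:2-alpine' } }
    steps {
        sh 'python -m py_compile sources/add2vals.py sources/calc.py'
        stash name: 'compiled', includes: 'sources/**'   // hand results to later stages
    }
}
stage('Test') {
    agent { docker { image 'qnib/pytest' } }
    steps {
        unstash 'compiled'   // restore the Build stage output into this workspace
        sh 'py.test --verbose --junit-xml test-reports/results.xml sources/test_calc.py'
    }
}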

How can I load the Docker section of my Jenkins pipeline (Jenkinsfile) from a file?

I have multiple pipelines using Jenkinsfiles that retrieve a Docker image from a private registry. I would like to be able to load the Docker-specific information into the pipelines from a file, so that I don't have to modify all of my Jenkinsfiles when the Docker label or credentials change. I attempted to do this using the example Jenkinsfile below:
def common

pipeline {
    agent none
    options {
        timestamps()
    }
    stages {
        stage('Image fetch') {
            steps {
                script {
                    common = load('/home/jenkins/workspace/common/docker_image')
                    common.fetchImage()
                }
            }
        }
    }
}
With docker_image containing:
def fetchImage() {
    agent {
        docker {
            label 'target_node'
            image 'registry-url/image:latest'
            alwaysPull true
            registryUrl 'https://registry-url'
            registryCredentialsId 'xxxxxxx-xxxxxx-xxxx'
        }
    }
}
I got the following error when I executed the pipeline:
Required context class hudson.FilePath is missing
Perhaps you forgot to surround the code with a step that provides this, such as: node, dockerNode
How can I do this using a declarative pipeline?
There are a few issues with this:
You can allocate a node only at the top level:
pipeline {
    agent ...
}
Or you can use a per-stage node allocation like so:
pipeline {
    agent none
    ....
    stages {
        stage("My stage") {
            agent ...
            steps {
                // run my steps on this agent
            }
        }
    }
}
You can check the docs here
The steps are supposed to be executed on the allocated node (or in some cases they can be executed without allocating a node at all).
Declarative Pipeline and Scripted Pipeline are two different things. Yes, it's possible to mix them, but Scripted Pipeline is meant either to abstract some logic into a shared library, or to give you a way to be a "hard core master ninja" and write your own fully custom pipeline using scripted steps and none of the declarative sugar.
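To make the original load() approach itself work, two things would have to change: load() must run inside an allocated node (that is what the hudson.FilePath error is about), and the loaded file must contain scripted steps rather than a declarative agent block. A sketch under those assumptions, with the label and credentials id as placeholders:

// docker_image - scripted steps only; a declarative 'agent' block is not valid here
def fetchImage() {
    docker.withRegistry('https://registry-url', 'xxxxxxx-xxxxxx-xxxx') {
        docker.image('registry-url/image:latest').pull()
    }
}
return this   // required so load() can call fetchImage() on the returned object

And in the Jenkinsfile:

stage('Image fetch') {
    agent { label 'target_node' }   // allocates the node and workspace that load() needs
    steps {
        script {
            common = load('/home/jenkins/workspace/common/docker_image')
            common.fetchImage()
        }
    }
}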
I am not sure how your Docker <-> Jenkins connection is set up, but you would probably be better off if you installed a plugin and used agent templates to provide the agents you need.
If you have a Docker Swarm you can install the Docker Swarm Plugin and then in your pipeline you can just configure pipeline { agent { label 'my-agent-label' } }. This will automatically provision your Jenkins with an agent in a container which uses the image you specified.
If you have exposed /var/run/docker.sock to your Jenkins, then you could use Yet Another Docker Plugin, which has the same concept.
This way you can move the agent configuration into the agent template, and your pipeline will only use a label to get the agent it needs.
