Jenkins Declarative: Kubernetes Plugin with multiple agents

I am trying to set up a Jenkins declarative pipeline to use two different agents during its execution. The agents are dynamically spawned by the Kubernetes plugin. For the sake of argument and simplicity, let's assume I want to do this:
On Agent 1 (Cloud name: "ubuntu"):
Run apt-get and some installs
Run a shell script
Additional steps
On Agent 2 (Cloud name: "fedora"):
Run dnf and some installs
Run a shell script
Additional steps
The problem I have is that if I use a global agent declaration:
pipeline {
agent {
kubernetes {
cloud 'ubuntu'
label "ubuntu-agent"
containerTemplate {
name 'support'
image 'blenderfox/support'
ttyEnabled true
command 'cat'
}
}
}
...
}
Then that agent is used across all the stages unless I declare an agent on each of the stages.
If I use agent none:
pipeline {
agent none
...
}
Then I have to declare an agent spec for each stage, for example:
stage ("apt update") {
agent {
kubernetes {
cloud 'ubuntu'
label "ubuntu-agent"
containerTemplate {
name 'support'
image 'blenderfox/support'
ttyEnabled true
command 'cat'
}
}
}
steps {
sh """
apt update
"""
}
}
While this would work for me, in that I can declare per stage which agent I want, the problem with this method is that it spins up a new agent for each stage, so state isn't carried over between, for example, these two stages:
stage ("apt-update") {
agent {
....
}
steps {
sh """
apt update
"""
}
}
stage ("apt-install") {
agent {
....
}
steps {
sh """
apt install -y ....
"""
}
}
Can I reuse the same agent across stages? For example, something like this:
stage ("provision agent") {
agent {
...
label "ubuntu-agent"
...
}
steps {
sh """
echo "Provisioning agent"
"""
}
}
stage ("apt-update") {
agent {
label "ubuntu-agent" //reuse agent from previous stage
}
steps {
sh """
apt update
"""
}
}
stage ("apt-install") {
agent {
label "ubuntu-agent" //reuse agent from previous stage
}
steps {
sh """
apt install -y ....
"""
}
}

Found a solution. Very hacky but it works:
pipeline {
agent none
stages {
stage ("Provision dev agent") {
agent {
kubernetes {
cloud 'dev-cloud'
label "dev-agent-${env.BUILD_NUMBER}"
slaveConnectTimeout 300
idleMinutes 5
yamlFile "jenkins-dev-agent.yaml"
}
}
steps {
sh """
## Do any agent init steps here
"""
}
}
stage ("Do something on dev agent") {
agent {
kubernetes {
label "dev-agent-${env.BUILD_NUMBER}"
}
}
steps {
sh """
## Do something here
"""
}
}
stage ("Provision production agent") {
agent {
kubernetes {
cloud 'prod-cloud'
label "prod-agent-${env.BUILD_NUMBER}"
slaveConnectTimeout 300
idleMinutes 5
yamlFile "jenkins-prod-agent.yaml"
}
}
steps {
sh """
## Do any agent init steps here
"""
}
}
stage ("Do something on prod agent") {
agent {
kubernetes {
label "prod-agent-${env.BUILD_NUMBER}"
}
}
steps {
sh """
## Do something here
"""
}
}
}
}
The agent YAML files vary, but you can do something like this:
spec:
  containers:
  - name: docker
    image: docker:18.06.1
    command: ["tail", "-f", "/dev/null"]
    imagePullPolicy: Always
    volumeMounts:
    - name: docker
      mountPath: /var/run/docker.sock
  volumes:
  - hostPath:
      path: "/var/run/docker.sock"
    name: "docker"
And then use the agent like so:
stage ("docker build") {
agent {
kubernetes {
label "dev-agent-${env.BUILD_NUMBER}"
}
}
steps {
container('docker') {
sh """
## docker build....
"""
}
}
}

There's a solution for this using sequential stages: you define a stage with your agent, and then you can nest other stages inside it.
pipeline {
agent none
stages {
stage ("Provision dev agent") {
agent {
kubernetes {
cloud 'dev-cloud'
slaveConnectTimeout 300
yamlFile "jenkins-dev-agent.yaml"
}
}
stages {
stage ("Do something on dev agent") {
steps {
sh """
## Do something here
"""
}
}
stage ("Do something else on dev agent") {
steps {
sh """
## Do something here
"""
}
}
}
}
stage ("Provision prod agent") {
agent {
kubernetes {
cloud 'prod-cloud'
slaveConnectTimeout 300
yamlFile "jenkins-prod-agent.yaml"
}
}
stages {
stage ("Do something on prod agent") {
steps {
sh """
## Do something here
"""
}
}
stage ("Do something else on prod agent") {
steps {
sh """
## Do something here
"""
}
}
}
}
}
}

Related

Executing Jenkins Pipeline on a single agent with docker

What I'm trying to achieve:
I'm trying to execute a pipeline script where SCM (AccuRev) is checked out on 'any' agent and then the stages are executed on that same agent, using the local workspace. The build stage specifically expects the code checkout to already be available in the workspace that is mapped into the container.
The problem:
When I have more than one agent added to the Jenkins configuration, the SCM step will check out the code on one agent and then the build step will start the container on the other agent, which is a problem because the code was checked out on the first agent.
What works:
Jenkins configured with a single agent/node
pipeline {
agent none
stages {
stage('Checkout') {
agent any
steps {
checkout accurev(depot: 'MyDepot', serverName: 'AccuRev', stream: 'SomeStream', wspaceORreftree: 'none')
}
}
stage('Compile') {
agent {
docker {
image 'ubuntu'
}
}
steps {
sh '''#!/bin/bash
make -j16
'''
}
}
}
}
What I have tried, but doesn't work:
Jenkins configured with 2 agent(s)/node(s)
pipeline {
agent {
docker {
image 'ubuntu'
}
}
stages {
stage('Checkout') {
steps {
checkout accurev(depot: 'MyDepot', serverName: 'AccuRev', stream: 'SomeStream', wspaceORreftree: 'none')
}
}
stage('Compile') {
steps {
sh '''#!/bin/bash
make -j16
'''
}
}
}
}
The above doesn't work because it is expecting AccuRev to be installed in the container. I could go this route, but it is not really scalable and will cause issues on containers that are based on an older OS. There are also permission issues within the container.
I also tried adding 'reuseNode true' to the docker agent, as in the below:
pipeline {
agent none
stages {
stage('Checkout') {
agent any
steps {
checkout accurev(depot: 'MyDepot', serverName: 'AccuRev', stream: 'SomeStream', wspaceORreftree: 'none')
}
}
stage('Compile') {
agent {
docker {
image 'ubuntu'
reuseNode true
}
}
steps {
sh '''#!/bin/bash
make -j16
'''
}
}
}
}
I'm somewhat aware of, or have read about, the automatic 'checkout scm' behaviour, as in the following, but this is odd because there is no place to define the target stream/branch to check out. That is why I'm declaring a specific stage to handle the SCM checkout. It is possible this would handle the checkout without needing to specify the agent, but I don't understand how to do this (see the sketch after the example below).
pipeline {
agent any
stages {
stage ('Build') {
steps {
sh 'cat Jenkinsfile'
}
}
}
}
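(For reference, the implicit checkout referred to above is roughly equivalent to calling checkout scm explicitly; the stream/branch comes from the SCM configured on the job itself rather than from the Jenkinsfile, which is why there is nowhere to specify it. A minimal sketch, assuming the job is configured with the AccuRev stream:)
stage('Checkout') {
    steps {
        // Uses the SCM (and stream/branch) configured on the Pipeline/Multibranch job,
        // so no depot/stream arguments appear in the Jenkinsfile
        checkout scm
    }
}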
Edit: adding a solution that seems to work, but it needs more testing before I can confirm.
The following seems to do what I want, executing the checkout stage on 'any' agent and then reusing the same agent to execute the build stage in a container.
pipeline {
agent any
stages {
stage('Checkout') {
steps {
checkout accurev(depot: 'MyDepot', serverName: 'AccuRev', stream: 'SomeStream', wspaceORreftree: 'none')
}
}
stage('Compile') {
agent {
docker {
image 'ubuntu'
reuseNode true
}
}
steps {
sh '''#!/bin/bash
make -j16
'''
}
}
}
}
The below appears to have given me the functionality that I needed. The pipeline starts on "any" agent, allowing the host level to handle the Checkout stage, and "reuseNode" tells the pipeline to start the container on the same node, where the workspace is located.
pipeline {
agent any
stages {
stage('Checkout') {
steps {
checkout accurev(depot: 'MyDepot', serverName: 'AccuRev', stream: 'SomeStream', wspaceORreftree: 'none')
}
}
stage('Compile') {
agent {
docker {
image 'ubuntu'
reuseNode true
}
}
steps {
sh '''#!/bin/bash
make -j16
'''
}
}
}
}

How to run dynamic stages in parallel on Jenkins with a separate Kubernetes agent for each stage

I tried combining things I found on the syntax, but this is as close as I can get. It creates multiple stages, but Jenkins says they have no steps.
I can get it to run a bunch of parallel steps on the same agent if I move the agent syntax down to where the "Test" stage is defined, but I want to spin up separate pods for each one so I can actually use the Kubernetes cluster effectively and do my work in parallel.
Attached is an example Jenkinsfile for reference:
def parallelStagesMap
def generateStage(job) {
return {
stage ("$job.key") {
agent {
kubernetes {
cloud 'kubernetes'
yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: name
    image: image
    command:
    - sleep
    args:
    - infinity
"""
}
}
steps {
sh """
do some important stuff
"""
}
}
}
}
pipeline {
agent none
stages {
stage('Create List of Stages to run in Parallel') {
steps {
script {
def map = [
"name" : "aparam",
"name2" : "aparam2"
]
parallelStagesMap = map.collectEntries {
["${it.key}" : generateStage(it)]
}
}
}
}
stage('Test') {
steps {
script {
parallel parallelStagesMap
}
}
}
stage('Release') {
agent etc
steps {
etc
}
}
}
}
To run your dynamically created jobs in parallel, you will have to use the scripted pipeline syntax.
The equivalent of the declarative kubernetes agent in a scripted pipeline is podTemplate and node (see the full documentation):
podTemplate(yaml: '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: maven
    image: maven:3.8.1-jdk-8
    command:
    - sleep
    args:
    - 99d
''') {
node(POD_LABEL) {
...
}
}
Notice that podTemplate can receive the cloud parameter in addition to the yaml, but it defaults to kubernetes, so there is no need to pass it.
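For example, here is a minimal sketch of targeting a non-default cloud explicitly ('my-other-cloud' is a hypothetical cloud name from the Jenkins clouds configuration; the pod spec reuses the Maven image from the example above):
// Hypothetical cloud name configured under Manage Jenkins » Clouds
def podYaml = '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: maven
    image: maven:3.8.1-jdk-8
    command:
    - sleep
    args:
    - 99d
'''
podTemplate(cloud: 'my-other-cloud', yaml: podYaml) {
    node(POD_LABEL) {
        container('maven') {
            sh 'mvn -version'
        }
    }
}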
So in your case you can use this syntax to run the jobs in parallel on different agents:
// Assuming the YAML is the same for all nodes; if not, it can be passed as a parameter
podYaml = """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: name
    image: image
    command:
    - sleep
    args:
    - infinity
"""
pipeline {
agent none
stages {
stage('Create List of Stages to run in Parallel') {
steps {
script {
def map = ["name" : "aparam",
"name2" : "aparam2"]
parallel map.collectEntries {
["${it.key}" : generateStage(it)]
}
}
}
}
}
}
def generateStage(job) {
return {
stage(job.key) {
podTemplate(yaml:podYaml) {
node(POD_LABEL) {
// Each execution runs on its own node (pod)
sh "do some important stuff with ${job.value}"
}
}
}
}
}
As explained in this answer:
Dynamic parallel stages can be created only by using Scripted Pipelines. The built-in Declarative Pipeline API (like agent) is not available.
So, you can't run dynamic stages in parallel on different agents.
To achieve what you want to do, a solution would be to trigger another pipeline that runs on a new Kubernetes pod and wait for its completion before the next steps.
Here are the Jenkinsfiles for a better understanding:
Main job Jenkinsfile:
def parallelJobsMap
def triggerJob(item) {
return {
build job: 'myChildJob', parameters: [string(name: 'MY_PARAM', value: "${item.value}")], wait: true
}
}
pipeline {
agent none
stages {
stage('Create List of Stages to run in Parallel') {
steps {
script {
def map = [
"name" : "aparam",
"name2" : "aparam2"
]
parallelJobsMap = map.collectEntries {
["${it.key}" : triggerJob(it)]
}
}
}
}
stage('Test') {
steps {
script {
parallel parallelJobsMap
}
}
}
stage('Release') {
agent any
steps {
echo "Release stuff"
}
}
}
}
Child job Jenkinsfile:
pipeline {
agent none
parameters {
string(
name: 'MY_PARAM',
description: 'My beautiful parameter',
defaultValue: 'A default value',
trim: true
)
}
stages {
stage ("Job") {
agent {
kubernetes {
cloud 'kubernetes'
yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: name
    image: image
    command:
    - sleep
    args:
    - infinity
"""
}
}
steps {
echo "Do some important stuff with the parameter " + params.MY_PARAM
}
}
}
}

Jenkins Environment Variables Conditional set

So I have done a lot of different renditions of this with no success unless the environment was set before the stages. I am trying to define the environment for AWS creds depending on the branch I'm in: if it's qa, then use the qa creds for the environment. BUT it does not get set when it's inside the stage phase.
agent {
docker {
image '/terraform-npm:latest'
registryCredentialsId 'dockerhubPW'
}
}
stages {
stage('Initialize Dev Environment') {
when {
branch 'dev'
}
environment {
TF_VAR_aws_access_key = credentials('dev-aws-access-key-id')
TF_VAR_aws_secret_key = credentials('dev-aws-secret-access-key')
AWS_ACCESS_KEY_ID = credentials('dev-aws-access-key-id')
AWS_SECRET_ACCESS_KEY = credentials('dev-aws-secret-access-key')
AWS_REGION = "us-west-2"
}
steps {
sh 'terraform init -backend-config="bucket=${GIT_BRANCH}-terraform-state" -backend-config="dynamodb_table=${GIT_BRANCH}-terraform-state-locking" -backend-config="region=$AWS_REGION" -backend-config="key=${GIT_BRANCH}-terraform-state/terraform.tfstate"'
}
}
Obviously, if I set it before the stages phase in the pipeline, it works:
agent {
docker {
image '/terraform-npm:latest'
registryCredentialsId 'dockerhubPW'
}
}
environment {
TF_VAR_aws_access_key = credentials('dev-aws-access-key-id')
TF_VAR_aws_secret_key = credentials('dev-aws-secret-access-key')
AWS_ACCESS_KEY_ID = credentials('dev-aws-access-key-id')
AWS_SECRET_ACCESS_KEY = credentials('dev-aws-secret-access-key')
AWS_REGION = "us-west-2"
}
stages {
stage('Initialize Dev Environment') {
when {
branch 'dev'
}
steps {
sh 'terraform init -backend-config="bucket=${GIT_BRANCH}-terraform-state" -backend-config="dynamodb_table=${GIT_BRANCH}-terraform-state-locking" -backend-config="region=$AWS_REGION" -backend-config="key=${GIT_BRANCH}-terraform-state/terraform.tfstate"'
}
}
My question is: is there a way to set the environment variables before the stages phase, but conditionally, depending on the branch?
Well, yes, there is.
First option: you can run a combination of scripted and declarative pipeline (please note that I haven't checked that it works; this is just to send you down the right path):
// scripted pipeline
node('master') {
stage("Init variables") {
if (env.GIT_BRANCH == 'dev') {
env.AWS_REGION = "us-west-2"
}
else {
// ...
}
}
}
// declarative pipeline
pipeline {
agent {
docker {
image '/terraform-npm:latest'
registryCredentialsId 'dockerhubPW'
}
}
stages {
stage('Use variables') {
steps {
sh 'echo $AWS_REGION'
}
}
}
}
Another option is to use the withEnv step inside steps:
stage('Initialize Dev Environment') {
when {
branch 'dev'
}
steps {
withEnv(['AWS_REGION=us-west-2']) {
sh 'echo $AWS_REGION'
}
}
}
Thank you MaratC for guiding me down the right path, it definitely helped. Here is what I used:
steps {
withCredentials([string(credentialsId: 'qa-aws-access-key-id', variable: 'TF_VAR_aws_access_key'),string(credentialsId: 'qa-aws-secret-access-key', variable: 'TF_VAR_aws_secret_key'),string(credentialsId: 'qa-aws-access-key-id', variable: 'AWS_ACCESS_KEY_ID'),string(credentialsId: 'qa-aws-secret-access-key', variable: 'AWS_SECRET_ACCESS_KEY')])
{
sh 'terraform plan -var-file=${GIT_BRANCH}.tfvars -out=${GIT_BRANCH}-output.plan'
}
}
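For completeness, here is a hedged sketch that combines the branch check from the first option with withCredentials, so the credential set is chosen per branch (the dev-/qa- credential IDs are the ones used above; the prefix variable is purely illustrative):
steps {
    script {
        // Pick a credentials prefix from the branch (assumed IDs: 'dev-aws-...' / 'qa-aws-...')
        def prefix = env.GIT_BRANCH == 'dev' ? 'dev' : 'qa'
        withCredentials([
            string(credentialsId: "${prefix}-aws-access-key-id", variable: 'AWS_ACCESS_KEY_ID'),
            string(credentialsId: "${prefix}-aws-secret-access-key", variable: 'AWS_SECRET_ACCESS_KEY')
        ]) {
            sh 'terraform plan -var-file=${GIT_BRANCH}.tfvars -out=${GIT_BRANCH}-output.plan'
        }
    }
}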

Jenkins declarative pipeline with separate docker images for a whole pipeline and some stage(s)

It's possible to provide different Docker images for different Jenkins stages, but is it possible to provide a default Docker image for the whole pipeline while providing a specific Docker image for some stage(s)?
For example, this is possible:
pipeline {
agent none
stages {
stage('first stage') {
agent {
docker { image 'first_docker' }
}
steps {
sh 'echo "just do it"'
}
}
stage('second stage') {
agent {
docker { image 'second_docker' }
}
steps {
sh 'echo "did it"'
}
}
}
}
The question is about:
pipeline {
agent {
docker { image 'default_docker' }
}
stages {
stage('first stage') {
steps {
sh 'echo "just do it"'
}
}
stage('second stage') {
agent {
docker { image 'second_docker' }
}
steps {
sh 'echo "did it"'
}
}
}
}
I don't mean the case where the default Docker image has Docker inside it, thus producing a 'Matryoshka' (a nested doll).

How can I run something during agent setup in a Jenkins declarative pipeline?

Our current Jenkins pipeline looks like this:
pipeline {
agent {
docker {
label 'linux'
image 'java:8'
args '-v /home/tester/.gradle:/.gradle'
}
}
environment {
GRADLE_USER_HOME = '/.gradle'
GRADLE_PROPERTIES = credentials('gradle.properties')
}
stages {
stage('Build') {
steps {
sh 'cp ${GRADLE_PROPERTIES} ${GRADLE_USER_HOME}/'
sh './gradlew clean check'
}
}
}
post {
always {
junit 'build/test-results/**/*.xml'
}
}
}
We mount /.gradle because we want to reuse cached data between builds. The problem is, if the machine is a brand new build machine, the directory on the host does not yet exist.
Where do I put setup logic that runs beforehand, so that I can ensure this directory exists before the Docker image is run?
You can run a Prepare stage before all the other stages and switch the agent after that:
pipeline {
agent { label 'linux' } // slave where docker agent needs to run
environment {
GRADLE_USER_HOME = '/.gradle'
GRADLE_PROPERTIES = credentials('gradle.properties')
}
stages {
stage('Prepare') {
steps {
// prepare the host here, e.g. make sure the mounted Gradle cache directory exists
sh 'mkdir -p /home/tester/.gradle'
}
}
stage('Build') {
agent {
docker {
label 'linux' // should be same as slave label
image 'java:8'
args '-v /home/tester/.gradle:/.gradle'
}
}
steps {
sh 'cp ${GRADLE_PROPERTIES} ${GRADLE_USER_HOME}/'
sh './gradlew clean check'
}
}
}
post {
always {
junit 'build/test-results/**/*.xml'
}
}
}
Specifying a Docker Label
Pipeline provides a global option in the Manage Jenkins page, and on the Folder level, for specifying which agents (by Label) to use for running Docker-based Pipelines.
Resources