The documentation of the Jenkins Kubernetes Plugin states:
Unlike scripted k8s template, declarative templates do not inherit from parent template. You need to explicitly declare the inheritance if necessary.
Plugin Readme
Unfortunately there is no example of how to explicitly state the inheritance from the build's main template. I tried using the label, but then the inheritance seems to be ignored.
def parentLabel = "my-project-${UUID.randomUUID().toString()}"

pipeline {
    agent {
        kubernetes {
            label parentLabel
            yamlFile "jenkins-agent.yml"
            // a global template in the cloud configuration
            inheritFrom "docker"
        }
    }
    stages {
        // .. stages using the above agent
        stage('Test Container') {
            agent {
                kubernetes {
                    label "xcel-spring-stage-${UUID.randomUUID().toString()}"
                    inheritFrom parentLabel
                    yaml """
apiVersion: v1
kind: Pod
metadata:
  namespace: build
  labels:
    project: x-celerate-spring-application
spec:
  containers:
  - name: spring-application
    # defined in previous stages, skipped for brevity
    image: ${env.IMAGE_NAME}:${version}.${env.BUILD_NUMBER}
"""
                }
            }
        }
    }
}
How, or by what template name, can I reference the template declared at the top of the pipeline in an inheritFrom statement in the stage's agent declaration, so that the inheritance is actually defined explicitly?
The documentation about default inheritance has been updated; it now states:
You need to explicitly declare the inheritance if necessary using the field inheritFrom.
There are two examples in the documentation:
podTemplate(inheritFrom: 'mypod', containers: [
    containerTemplate(name: 'maven', image: 'maven:3.8.1-jdk-11')
]) {
    node(POD_LABEL) {
        …
    }
}
or in declarative pipeline:
pipeline {
    agent {
        kubernetes {
            inheritFrom 'mypod'
            yaml '''
spec:
  containers:
  - name: maven
    image: maven:3.8.1-jdk-11
'''
            …
        }
    }
    stages {
        …
    }
}
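Applying that pattern to the pipeline in the question, the stage-level agent would name a pod template that exists outside the pipeline itself, for example the globally configured "docker" template mentioned in the comment; as the question observes, pointing inheritFrom at the top-level agent's label does not appear to work. A minimal sketch follows, where the container name, image and steps are placeholders:

stage('Test Container') {
    agent {
        kubernetes {
            // 'docker' refers to a pod template defined in the Kubernetes cloud
            // configuration, not to the label of the pipeline-level agent
            inheritFrom 'docker'
            yaml '''
spec:
  containers:
  - name: spring-application
    image: example-registry/spring-application:latest
'''
        }
    }
    steps {
        container('spring-application') {
            sh 'echo run the tests here'
        }
    }
}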
Related
Using a Jenkins declarative pipeline and an Artifactory file spec, how do I download an Artifactory artifact that contains parentheses in the artifact name? Is there a way to escape the parentheses?
For example, I have two artifacts in my Artifactory repository:
default-generic-local/one/two/aaabbbccc(1234).txt
default-generic-local/one/two/aaabbbccc1234.txt
When I run the pipeline defined below, it downloads aaabbbccc1234.txt. I would expect it to download aaabbbccc(1234).txt instead.
Here's an example of the pipeline script and file spec I'm using with my pipeline job:
pipeline {
    agent any
    stages {
        stage('Download') {
            steps {
                rtServer(
                    id: 'my-art-server',
                    url: 'https://my.artifactory.url',
                    credentialsId: 'my-artifactory-creds')
                rtDownload(
                    serverId: 'my-art-server',
                    spec: '''
                    {
                        "files": [
                            {
                                "pattern": "default-generic-local/one/two/aaabbbccc(1234).txt",
                                "target": "output/",
                                "flat": "true"
                            }
                        ]
                    }''',
                    failNoOp: true)
            }
        }
    }
    post {
        always {
            cleanWs()
        }
    }
}
Jenkins version 2.235.2
kubernetes-plugin version 1.26.4
I'm trying to parametrize the yamlFile used as the pod template with an env variable based on the branch I'm building. What I have right now is:
pipeline {
    environment {
        MASTER_BRANCH = "origin/dev"
        BUILD_POD = "${env.GIT_BRANCH == env.MASTER_BRANCH ? 'jenkins/build-pod-prod.yaml' : 'jenkins/build-pod.yaml'}"
    }
    agent {
        kubernetes {
            idleMinutes 3
            yamlFile env.BUILD_POD
            defaultContainer 'docker'
        }
    }
}
But that just uses a default template with only the jnlp container. I've also tried:
yamlFile env.BUILD_POD
yamlFile "${env.BUILD_POD}"
yamlFile "${BUILD_POD}"
yamlFile "$BUILD_POD"
yamlFile $BUILD_POD
But none of those worked. I don't know if it's a misunderstanding on my side or a bug.
I also tried writing the pipeline as a scripted one, which seems more versatile, but I don't know how to accomplish what I need there either.
Thanks all in advance.
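A pattern often used for this kind of branching, sketched below only as an assumption and not verified against this exact setup, is to compute the file name in plain Groovy before the pipeline block, so the value already exists by the time the agent section is evaluated. It assumes the branch variable is already populated when the Jenkinsfile starts; in a multibranch job env.BRANCH_NAME may be the safer check than env.GIT_BRANCH:

// evaluated before the declarative pipeline starts
def buildPod = env.BRANCH_NAME == 'dev' ? 'jenkins/build-pod-prod.yaml' : 'jenkins/build-pod.yaml'

pipeline {
    agent {
        kubernetes {
            idleMinutes 3
            yamlFile buildPod
            defaultContainer 'docker'
        }
    }
    stages {
        // ...
    }
}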
I'm trying to set up a "generic" build system using Docker with Jenkins to build and run tests with a pipeline.
I use a wrapper script (pulled from a repo) that contains most of the stuff Docker needs. The only thing that changes is the tag for the images.
How can I define this tag in the build configuration, as an environment variable or similar, so that it can then be passed to the actual pipeline script?
Simplified script:
pipeline {
    stages {
        stage("Build test image") {
            dockerImage = docker.build("...", "--build-arg MYBRANCH=${SOMEVAR}")
        }
    }
}
So how can I set SOMEVAR per build configuration?
I could have a custom Jenkinsfile per branch, but that will eventually end up as a maintenance nightmare (I already have 7 branches to build).
It can be defined statically in environment or dynamically in parameters. In the case of parameters, you provide the values when running the build through the interface or the API.
pipeline {
    agent any
    environment {
        SOMEVAR = "123"
    }
    parameters {
        choice(name: 'CHOICE_VAR', choices: ['1', '2', '3'], description: 'Type...')
        string(name: 'STRING_VAR', defaultValue: '', description: 'Type...')
    }
    stages {
        stage("Build test image") {
            steps {
                script {
                    // use whichever source fits: the static environment value or a build parameter
                    dockerImage = docker.build("...", "--build-arg MYBRANCH=${env.SOMEVAR}")
                    dockerImage = docker.build("...", "--build-arg MYBRANCH=${params.CHOICE_VAR}")
                    dockerImage = docker.build("...", "--build-arg MYBRANCH=${params.STRING_VAR}")
                }
            }
        }
    }
}
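When the job is triggered from another pipeline rather than through the web interface or REST API, the parameter values can also be passed with the built-in build step; the job name below is only an illustration:

// trigger the parameterized job from another pipeline (job name is hypothetical)
build job: 'my-docker-build', parameters: [
    string(name: 'CHOICE_VAR', value: '2'),
    string(name: 'STRING_VAR', value: 'feature-x')
]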
I am using Jenkins scripted pipeline in a multibranch job.
There is a parameter that is only supposed to be available in the trunk, not in any of the branches of the multibranch job.
Currently, with a scripted pipeline, this is easy to do (inside a shared library or directly in the Jenkinsfile):
def jobParams = [
    booleanParam(defaultValue: false, description: 'param1', name: 'param1')
]

if (whateverCondition) {
    jobParams.add(booleanParam(defaultValue: false, description: 'param2', name: 'param2'))
}

properties([
    parameters(jobParams)
])
I am currently trying to migrate to Jenkins declarative syntax, but I don't see a simple way to create a parameter that is only available under some conditions (I know I can ignore it, but I don't want it to show up at all).
The only solution so far is to move the pipeline to a shared library as well (possible since Declarative 1.2). I don't like this solution because the entire pipeline must be replicated, which seems a bit too extreme for just one line.
if (whateverCondition) {
    pipeline {
        agent any
        parameters {
            booleanParam(defaultValue: false, description: 'param1', name: 'param1')
            booleanParam(defaultValue: false, description: 'param2', name: 'param2')
        }
        (...)
    }
} else {
    pipeline {
        agent any
        parameters {
            booleanParam(defaultValue: false, description: 'param1', name: 'param1')
        }
        (...)
    }
}
Is there a way I can extract just the parameter definition part of the declarative pipeline into a global variable of a shared library, or something similar?
Thanks in advance for any help!
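One workaround that avoids duplicating the whole pipeline, sketched below as an assumption and not verified against your setup, is to keep building the parameter list in Groovy inside a shared-library global variable and apply it with the properties step from a script block; applyJobParams is a made-up helper name:

// vars/applyJobParams.groovy in the shared library (hypothetical helper)
def call(boolean includeParam2) {
    def jobParams = [
        booleanParam(defaultValue: false, description: 'param1', name: 'param1')
    ]
    if (includeParam2) {
        jobParams.add(booleanParam(defaultValue: false, description: 'param2', name: 'param2'))
    }
    properties([parameters(jobParams)])
}

// Jenkinsfile: the declarative pipeline itself stays identical for every branch
pipeline {
    agent any
    stages {
        stage('Init') {
            steps {
                script {
                    applyJobParams(env.BRANCH_NAME == 'trunk')
                }
            }
        }
        // (...)
    }
}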
My company has a small pipeline library that we implicitly load for every build. Is there a way to transparently overload the node { ... } block of every build?
My specific case is that I'm provisioning kubernetes slaves with the kubernetes plugin, and I want to provide a default YAML template, while allowing users to pick another template or override specific values. Eg:
node {
    // Gets you a Pod with a DinD engine with a low CPU/Mem request/limit
}
Optionally overridden by name:
node('2-core') {
    // Gets you a Pod with a DinD engine with a 2 CPU / more Mem request/limit
}
Or overridden with a template:
import com.foo.utils.PodTemplates

slaveTemplates = new PodTemplates()

slaveTemplates.bigPod {
    node {
        // Big node
    }
}
Or:
def label = "mypod-${UUID.randomUUID().toString()}"

podTemplate(label: label, yaml: """
apiVersion: v1
kind: Pod
metadata:
  labels:
    some-label: some-label-value
spec:
  containers:
  - name: redis
    image: redis
"""
) {
    node(label) {
        // Same small pod as before PLUS a redis container
    }
}
This seems trickiest, since you want the values of the parent to override the values of the child.
You can do this, but, in my opinion, it will lead to confusing behavior and possibly strange error cases.
For example:
echo.groovy (in the shared library's vars/ directory)
def call(String string) {
    steps.echo "Calling step echo: $string"
}
Jenkinsfile
echo 'hello'
Output:
Calling step echo: hello
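Applied to the node use case in the question, the same mechanism would look roughly like the sketch below: a node global variable that wraps the default pod template around the original step. The YAML, image and resource values are placeholders, and the caveats above about confusing behavior apply just as much here:

// vars/node.groovy in the shared library (sketch; YAML and resource values are illustrative)
def call(Closure body) {
    // no label given: wrap the body in a default low-resource DinD pod
    def label = "default-${UUID.randomUUID().toString()}"
    steps.podTemplate(label: label, yaml: """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: dind
    image: docker:dind
    resources:
      requests:
        cpu: 500m
        memory: 512Mi
""") {
        steps.node(label) {
            body()
        }
    }
}

def call(String label, Closure body) {
    // an explicit label (e.g. '2-core') falls through to the original step
    steps.node(label) {
        body()
    }
}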
There is a blog post here that demonstrates this a little more in depth.
CloudBees offers paid support for some pipeline restriction tools that might solve your use case.
The heaviest way to accomplish this is, of course, to write a plugin.