Jenkins kubernetes-plugin not understanding env variable in scripted pipeline

Jenkins version 2.235.2
kubernetes-plugin version 1.26.4
I'm trying to parameterize the yamlFile used as the pod template with an env variable based on the branch I'm building. What I have right now is:
pipeline {
    environment {
        MASTER_BRANCH = "origin/dev"
        BUILD_POD = "${env.GIT_BRANCH == env.MASTER_BRANCH ? 'jenkins/build-pod-prod.yaml' : 'jenkins/build-pod.yaml'}"
    }
    agent {
        kubernetes {
            idleMinutes 3
            yamlFile env.BUILD_POD
            defaultContainer 'docker'
        }
    }
}
But that falls back to a default template with just the jnlp container. I've also tried:
yamlFile env.BUILD_POD
yamlFile "${env.BUILD_POD}"
yamlFile "${BUILD_POD}"
yamlFile "$BUILD_POD"
yamlFile $BUILD_POD
But none of that worked. I don't know if it's a misunderstanding on my side or a bug.
I also tried writing the pipeline as a scripted one, which seems more versatile, but I can't figure out how to accomplish what I need there either.
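For reference, the scripted equivalent I was aiming for would be something like the sketch below (untested; it assumes a multibranch project, where the readTrusted step can read the pod YAML from SCM before any node is allocated):
// Untested sketch: pick the pod template file by branch, then hand its
// contents to podTemplate (the kubernetes-plugin's scripted syntax).
def podFile = env.GIT_BRANCH == 'origin/dev' ? 'jenkins/build-pod-prod.yaml'
                                             : 'jenkins/build-pod.yaml'

podTemplate(yaml: readTrusted(podFile), idleMinutes: 3) {
    node(POD_LABEL) {
        container('docker') {
            sh 'docker version'
        }
    }
}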
Thanks all in advance.

Related

Kubernetes cloud agent for Jenkins always offline during pipeline execution

I'm trying to deploy microservices into a k8s pod using a Jenkins pipeline and a k8s cloud agent.
I configured my agent in Jenkins, but during execution I always get a message saying my agent is offline.
You can find my Jenkins configuration and my Jenkinsfile below.
Regards,
Jenkinsfile
pipeline {
    environment {
        MAVEN_SETTINGS = ''
        MAVEN_ENV = 'maven-3.6.3'
        JDK_ENV = 'jdk-1.2'
        GIT_URL = 'PRIVATE REPO'
    }
    agent any
    parameters {
        booleanParam(name: "RELEASE",
            description: "Release",
            defaultValue: false
        )
    }
    stages {
        stage('build & deploy') {
            when {
                expression { !params.RELEASE }
            }
            steps {
                withMaven(
                    maven: env.MAVEN_ENV,
                    mavenSettingsConfig: env.MAVEN_SETTINGS,
                    jdk: env.JDK_ENV) {
                    sh "mvn clean deploy -U"
                }
            }
        }
        stage('create image') {
            steps {
                script {
                    docker.withRegistry('https://registry.digitalocean.com', 'images-credential') {
                        def customImage = docker.build("images-repo:${params.IMAGE_VERSION}", "./target")
                        customImage.push("${params.IMAGE_VERSION}")
                    }
                }
            }
        }
        stage('Deploy Pod') {
            agent { label 'kubepod' }
            steps {
                script {
                    kubernetesDeploy(configs: "/kubernetes/pod.yml", kubeconfigId: "mykubeconfig")
                    kubernetesDeploy(configs: "/kubernetes/services.yml", kubeconfigId: "mykubeconfig")
                }
            }
        }
    }
}
(Screenshots in the original post: Kubernetes agent config, pod template config, Jenkins global security config, and the build log.)
Set "TCP port for inbound agents" to Fixed, using port 50000.
You do not have to create a direct connection. Go to the Kubernetes cloud details and add your service account token and the SSL certificate for that token to make the connection. You also have an issue in your pipeline; correct it like this:
stage('test') {
    agent {
        kubernetes {
            cloud 'cloud name'
            yaml """
                # pod spec goes here
            """
        }
    }
    steps {
        // ...
    }
}
Please follow this article to start from the basics.

Passing Jenkins env variables between stages on different agents

I've looked at this Pass Artifact or String to upstream job in Jenkins Pipeline and this Pass variables between Jenkins stages and this How do I pass variables between stages in a declarative Jenkins pipeline?, but none of these questions seem to deal with my specific problem.
Basically I have a pipeline consisting of multiple stages, each run on its own agent.
In the first stage I run a shell script that generates two variables. I would like to use these variables in the next stage. The methods I've seen so far only seem to work when passing variables within the same agent.
pipeline {
    stages {
        stage("stage 1") {
            agent {
                docker {
                    image 'my_image:latest'
                }
            }
            steps {
                sh ("""
                    export VAR1=foo
                    export VAR2=bar
                """)
            }
        }
        stage("stage 2") {
            agent {
                docker {
                    image 'my_other_image:latest'
                }
            }
            steps {
                sh ('echo "$VAR1 $VAR2"')
                // expecting to see "foo bar" printed here
            }
        }
    }
}
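One way that does carry values across agents (a sketch added for illustration, not from the original question) is to compute them in a script block and assign them to env, since env assignments persist for the remainder of the build:
pipeline {
    agent none
    stages {
        stage("stage 1") {
            agent { docker { image 'my_image:latest' } }
            steps {
                script {
                    // env.* assignments survive across stages and agents
                    env.VAR1 = sh(returnStdout: true, script: 'echo foo').trim()
                    env.VAR2 = sh(returnStdout: true, script: 'echo bar').trim()
                }
            }
        }
        stage("stage 2") {
            agent { docker { image 'my_other_image:latest' } }
            steps {
                sh 'echo "$VAR1 $VAR2"'   // prints: foo bar
            }
        }
    }
}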

How to build docker images using a Declarative Jenkinsfile

I'm new to using Jenkins....
I'm trying to automate the production of an image (to be stashed in a repo) using a declarative Jenkinsfile. I find the documentation to be confusing (at best). Simply put, how can I convert the following scripted example (from the docs)
node {
    checkout scm
    def customImage = docker.build("my-image:${env.BUILD_ID}")
    customImage.push()
}
to a declarative Jenkinsfile....
You can use scripted pipeline blocks in a declarative pipeline as a workaround:
pipeline {
    agent any
    stages {
        stage('Build image') {
            steps {
                echo 'Starting to build docker image'
                script {
                    def customImage = docker.build("my-image:${env.BUILD_ID}")
                    customImage.push()
                }
            }
        }
    }
}
I'm using the following approach:
steps {
    withDockerRegistry([ credentialsId: "<CREDENTIALS_ID>", url: "<PRIVATE_REGISTRY_URL>" ]) {
        // the following commands are executed while logged in to the docker registry
        sh 'docker push <image>'
    }
}
Where:
CREDENTIALS_ID is the ID under which you stored your docker registry credentials in Jenkins.
PRIVATE_REGISTRY_URL is the URL of your private docker registry. If you are using Docker Hub, it should be empty.
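For example, with hypothetical values (a credentials ID of dockerhub-creds and Docker Hub as the registry, so the URL is left empty), this could look like:
steps {
    // 'dockerhub-creds' is a hypothetical credentials ID; url is empty for Docker Hub
    withDockerRegistry([ credentialsId: 'dockerhub-creds', url: '' ]) {
        sh 'docker push myorg/my-image:latest'
    }
}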
I cannot recommend the declarative syntax for building a Docker image because it seems that every important step requires falling back to the old scripted syntax. But if you must, a hybrid approach seems to work.
First, a detail about the scm step: when I defined the Jenkins "Pipeline script from SCM" project that fetches my Jenkinsfile with a declarative pipeline from git, Jenkins cloned the repo as the first step in the pipeline even though I did not define an scm step.
For the build and push steps, I can only find solutions that are a hybrid of old-style scripted pipeline steps inside the new-style declarative syntax. For example, see gustavoapolinario's work on Medium:
https://medium.com/@gustavo.guss/jenkins-building-docker-image-and-sending-to-registry-64b84ea45ee9
which has this hybrid pipeline definition:
pipeline {
    environment {
        registry = "gustavoapolinario/docker-test"
        registryCredential = 'dockerhub'
        dockerImage = ''
    }
    agent any
    stages {
        stage('Cloning Git') {
            steps {
                git 'https://github.com/gustavoapolinario/microservices-node-example-todo-frontend.git'
            }
        }
        stage('Building image') {
            steps {
                script {
                    dockerImage = docker.build registry + ":$BUILD_NUMBER"
                }
            }
        }
        stage('Deploy Image') {
            steps {
                script {
                    docker.withRegistry('', registryCredential) {
                        dockerImage.push()
                    }
                }
            }
        }
        stage('Remove Unused docker image') {
            steps {
                sh "docker rmi $registry:$BUILD_NUMBER"
            }
        }
    }
}
Because the first step here is a clone, I think he built this example as a standalone pipeline project in Jenkins (not a Pipeline script from SCM project).

External workspace manager plugin with declarative pipeline

I want to use the mentioned plugin with a declarative pipeline; to be precise, I want to convert the following documentation example to a declarative pipeline:
The pipeline code in the upstream job is the following:
stage('Stage 1. Allocate workspace in the upstream job')
def extWorkspace = exwsAllocate 'diskpool1'
node('linux') {
    exws(extWorkspace) {
        stage('Stage 2. Build in the upstream job')
        git url: 'https://github.com/alexsomai/dummy-hello-world.git'
        def mvnHome = tool 'M3'
        sh "${mvnHome}/bin/mvn clean install -DskipTests"
    }
}
And the downstream's Pipeline code is:
stage('Stage 3. Select the upstream run')
def run = selectRun 'upstream'
stage('Stage 4. Allocate workspace in the downstream job')
def extWorkspace = exwsAllocate selectedRun: run
node('test') {
    exws(extWorkspace) {
        stage('Stage 5. Run tests in the downstream job')
        def mvnHome = tool 'M3'
        sh "${mvnHome}/bin/mvn test"
    }
}
Thanks!
I searched everywhere for a clear answer to this, yet never found a definitive one. So I pulled the External Workspace Manager Plugin code and read it. The answer is simple, as long as the plugin's model doesn't change.
Anytunc's answer is very close, but the issue is getting the path from the External Workspace Plugin and getting it into the customWorkspace configuration.
What I ended up doing was creating a method:
def getExternalWorkspace() {
    extWorkspace = exwsAllocate diskPoolId: "jenkins"
    return extWorkspace.getCompleteWorkspacePath()
}
and setting my agent to:
agent {
    node {
        label 'Linux'
        customWorkspace getExternalWorkspace()
    }
}
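Putting those two pieces together, a minimal end-to-end sketch (my own assembly of the above, assuming a disk pool with ID "jenkins" is configured for the plugin and an agent labeled 'Linux' exists):
def getExternalWorkspace() {
    // assumes a disk pool with ID 'jenkins' is defined in the plugin configuration
    def extWorkspace = exwsAllocate diskPoolId: 'jenkins'
    return extWorkspace.getCompleteWorkspacePath()
}

pipeline {
    agent {
        node {
            label 'Linux'
            customWorkspace getExternalWorkspace()
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'pwd'   // prints the allocated external workspace path
            }
        }
    }
}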
If you'd rather not set the entire pipeline to that path, you could create as many external workspaces as you want, then use
...
steps {
    dir(getExternalWorkspace()) {
        // do fancy stuff
        ...
    }
}
...
You can use this agent directive:
agent {
    node {
        label 'my-defined-label'
        customWorkspace '/some/other/path'
    }
}

How to set PATH in Jenkins Declarative Pipeline

In a Jenkins scripted pipeline you can set the PATH env variable like this:
node {
    git url: 'https://github.com/jglick/simple-maven-project-with-tests.git'
    withEnv(["PATH+MAVEN=${tool 'M3'}/bin"]) {
        sh 'mvn -B verify'
    }
}
Notice the PATH+MAVEN, as explained here: https://jenkins.io/doc/pipeline/steps/workflow-basic-steps/#code-withenv-code-set-environment-variables
A list of environment variables to set, each in the form VARIABLE=value, or VARIABLE= to unset variables otherwise defined. You may also use the syntax PATH+WHATEVER=/something to prepend /something to $PATH.
But I didn't find how to do it in a declarative pipeline using the environment syntax (as explained here: https://jenkins.io/doc/pipeline/tour/environment).
environment {
    DISABLE_AUTH = 'true'
    DB_ENGINE = 'sqlite'
}
Ideally I would like to update the PATH to use custom tools for all my stages.
It is possible with the environment section:
pipeline {
    agent { label 'docker' }
    environment {
        PATH = "/hot/new/bin:${env.PATH}"
    }
    stages {
        stage('build') {
            steps {
                echo "PATH is: ${env.PATH}"
            }
        }
    }
}
See this answer for info.
As a workaround, you can define an environment variable and use it in the sh step:
pipeline {
    agent any
    environment {
        MAVEN_HOME = tool('M3')
    }
    stages {
        stage('Maven') {
            steps {
                sh '${MAVEN_HOME}/bin/mvn -B verify'
            }
        }
    }
}
Check the following link; it explains how to configure your tools.
Using the declarative pipeline, things become a bit different, but overall it is easier to understand.
declarative-maven-project
Using the tools section in a pipeline is only allowed for pre-installed Global Tools. Some tools are provided by plugins, but if one doesn't exist, I'm afraid you cannot set up that environment via the pipeline tools declaration.
I hope I'm wrong!
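For reference, the tools directive for such a pre-installed Global Tool looks roughly like this (assuming a Maven installation named 'M3' is configured under Manage Jenkins > Global Tool Configuration):
pipeline {
    agent any
    tools {
        // 'M3' must match the name of a Maven installation in Global Tool Configuration
        maven 'M3'
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B verify'   // mvn resolves via the tools directive, which prepends it to PATH
            }
        }
    }
}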
