How can I template my Pod definition with the Jenkins kubernetes plugin?

I am using the Jenkins kubernetes plugin to run pipeline builds:
pipeline {
  agent {
    kubernetes {
      label 'kind'
      defaultContainer 'jnlp'
      yaml """
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: dind
...
I want to template a particular field of the YAML with an integer from 0 to 4 that is rotated in round-robin fashion (i.e. the first build is templated with 0, the second with 1, and so on, wrapping back to 0 after 4).
How can I achieve this?

You can use podTemplate. The code below is from https://github.com/jenkinsci/kubernetes-plugin; you can use variables to prepare any kind of pod you need.
If this is not what you need, can you provide an example of what you are trying to do?
def label = "mypod-${UUID.randomUUID().toString()}"
podTemplate(label: label, containers: [
    containerTemplate(name: 'maven', image: 'maven:3.3.9-jdk-8-alpine', ttyEnabled: true, command: 'cat'),
    containerTemplate(name: 'golang', image: 'golang:1.8.0', ttyEnabled: true, command: 'cat')
]) {
    node(label) {
        stage('Get a Maven project') {
            git 'https://github.com/jenkinsci/kubernetes-plugin.git'
            container('maven') {
                stage('Build a Maven project') {
                    sh 'mvn -B clean install'
                }
            }
        }
        stage('Get a Golang project') {
            git url: 'https://github.com/hashicorp/terraform.git'
            container('golang') {
                stage('Build a Go project') {
                    sh """
                    mkdir -p /go/src/github.com/hashicorp
                    ln -s `pwd` /go/src/github.com/hashicorp/terraform
                    cd /go/src/github.com/hashicorp/terraform && make core-dev
                    """
                }
            }
        }
    }
}
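For the round-robin part of the question, here is a minimal sketch (an assumption, not taken from the plugin docs): it derives the rotating value from BUILD_NUMBER, assuming build numbers increase by one per build, and interpolates it into the pod YAML.
// Sketch only: rotate through 0..4 using the build number.
// Assumes BUILD_NUMBER increments by one for every build.
def slot = env.BUILD_NUMBER.toInteger() % 5

podTemplate(label: "kind-${slot}", yaml: """
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: dind
    slot: "${slot}"
""") {
    node("kind-${slot}") {
        echo "This build uses slot ${slot}"
    }
}
Any other field of the YAML can be templated the same way, since the triple-quoted string is a Groovy GString.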

Related

Jenkins pipeline - start pod with image built in previous stage in a different pod

I have Jenkins deployed on a Kubernetes cluster A, and it can spawn pods in the same cluster A. My workflow is the following:
Start a pod with an image designed to build a Docker image. This image will be built with a deterministic tag example/appname:${GIT_COMMIT}.
After the image was built in the previous pod, start a new pod with the image from the previous build (example/appname:${GIT_COMMIT}), and run multiple test tools.
Currently I am successfully making a build with:
podTemplate(yaml: """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: aws-dockerizer
    image: example/aws-dockerizer:0.1.7
    command: ['cat']
    tty: true
    volumeMounts:
    - name: dockersock
      mountPath: /var/run/docker.sock
  volumes:
  - name: dockersock
    hostPath:
      path: /var/run/docker.sock
""") {
  node(POD_LABEL) {
    stage('Clone') {
      git url: 'https://github.com/example/my-app/', branch: '${build_branch_name}', credentialsId: 'github-app'
      container('aws-dockerizer') {
        stage('Build and deploy') {
          withAWS(credentials: 'aws-credentials', region: 'eu-central-1') {
            sh '''#!/usr/bin/env bash
              git config --global --add safe.directory ${WORKSPACE}
              scripts/build_and_push_docker_image_to_aws.sh
            '''
          }
        }
      }
    }
  }
}
I'd like to add the following stage to my pipeline. Note that the new pod "experience-${GIT_COMMIT}" CANNOT be started because the image is not available until the previous step is complete.
podTemplate(
  label: "experience-${GIT_COMMIT}",
  yaml: """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: experience
    image: example.dkr.ecr.eu-central-1.amazonaws.com/example/appname:${GIT_COMMIT}
    command: ['cat']
    tty: true
""") {
  stage('Run tests') {
    node("experience-${GIT_COMMIT}") {
      stage('Run tests') {
        container('experience') {
          stage('Run Rspec') {
            sh 'bundle exec rspec'
          }
          post {}
        }
      }
    }
  }
}
Any idea if this is possible? What are the DSL constructs / concepts I need to use? How do I "merge" the two pieces of code to achieve what I want?
I've tried to play around a bit, but when I declare both pod templates at the beginning, it hangs the job until the 2nd pod is ready, which it never will be...
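One possible shape, offered as a sketch under assumptions rather than a verified answer: keep the two podTemplate blocks sequential instead of nested, so the second pod is only requested after the first node block has finished and the image has been pushed. buildPodYaml and testPodYaml below are hypothetical variables holding the two YAML definitions already shown above.
// Sketch: run the two pods one after the other in a scripted pipeline.
// buildPodYaml / testPodYaml are placeholders for the YAML strings shown above.
podTemplate(yaml: buildPodYaml) {
    node(POD_LABEL) {
        stage('Build and push image') {
            // build and push example/appname:${GIT_COMMIT} here
        }
    }
}

// Only reached after the first block has completed, so the image already exists.
podTemplate(yaml: testPodYaml) {
    node(POD_LABEL) {
        stage('Run tests') {
            container('experience') {
                sh 'bundle exec rspec'
            }
        }
    }
}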

Set environment variables to Jenkins pipeline for deploying kubernetes pods

I have a Kubernetes cluster where Jenkins is deployed and scaled.
Below is the podTemplate YAML I run in my Jenkins pipeline:
podTemplate(yaml: """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: docker
    image: docker/compose
    command: ['cat']
    tty: true
    volumeMounts:
    - name: dockersock
      mountPath: /var/run/docker.sock
  volumes:
  - name: dockersock
    hostPath:
      path: /var/run/docker.sock
"""
) {
  def image = "image/name"
  node(POD_LABEL) {
    stage('Build and run Docker image') {
      git branch: 'test',
          credentialsId: 'github-credentials',
          url: 'https://github.com/project/project.git'
      container('docker') {
        sh "docker-compose up -d --build"
      }
    }
  }
}
I get the following error:
[Pipeline] sh
+ docker-compose up -d --build
The ENV variable is not set. Defaulting to a blank string.
[Pipeline] }
What is the best practice to set env vars during this kind of deployment?
Update:
Yes, it works outside of Jenkins.
I have listed the env var file in the docker-compose YAML file:
...
  context: ./validator
  ports:
    - '${VALIDATOR_PROD_ONE_PORT}:${VALIDATOR_PROD_ONE_PORT}'
  volumes:
    - ./validator/logs_1:/usr/src/app/logs
  env_file: .env.validator.test
...
Of course I can set env vars in the Jenkins pipeline before executing docker-compose build, for example:
container('docker') {
  sh 'echo "someEnvVar=value" > .env.validator.test'
  sh "docker-compose up -d --build"
}
This also works, but it's not beautiful (:
You should be able to set environment variables on your stage as follows:
stage('Build and run Docker image') {
  environment {
    SOME_ENV_VAR = "SOME_VAL"
  }
  git branch: 'test',
      credentialsId: 'github-credentials',
      url: 'https://github.com/project/project.git'
  container('docker') {
    sh "docker-compose up -d --build"
  }
}
This essentially sets shell environment variables, which should take precedence over those in the .env file.
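For the scripted pipeline shown in the question, a minimal alternative sketch (the port value below is only an example) is to wrap the docker-compose call in withEnv:
// Sketch: pass the variables directly to the shell step via withEnv.
// The value below is an example; substitute your real port.
container('docker') {
    withEnv(['VALIDATOR_PROD_ONE_PORT=8080']) {
        sh 'docker-compose up -d --build'
    }
}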

Use "label" or define a pod template in jenkinsfile for kubernetes-plugin?

In General
I'm trying to use label with the kubernetes-plugin for Jenkins, but I am getting a little confused.
In my pipeline below I'm trying to build a test job in parallel steps with different labels (agents).
I have already configured the plugin with a pod template and container in my Jenkins config, using the same settings as in the podTemplate defined in the pipeline.
Issue
The problem is that when I use the agent label in stage 2, the jnlp image runs instead of the image I point to in the config, someimage:latest.
In stage 1, where I define the pod in the pipeline, there is no problem and the required images run fine.
Question
What am I doing wrong?
Here is my jenkinsfile and config of the kubernetes-plugin in Jenkins:
def podTemplate = """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: k8s
    image: someimage:latest
    command:
    - sleep
    args:
    - infinity
    volumeMounts:
    - name: workspace-volume
      mountPath: /home/jenkins/agent
    workingDir: "/home/jenkins/agent"
  volumes:
  - name: "workspace-volume"
    persistentVolumeClaim:
      claimName: "jenkins-worker-pvc"
      readOnly: false
"""
pipeline {
  agent none
  stages {
    stage("Parallel") {
      parallel {
        stage("1.k8s") {
          agent {
            kubernetes {
              yaml podTemplate
              defaultContainer 'k8s'
            }
          }
          steps {
            sh """
              mvn -version
            """
          }
        }
        stage("2. k8s") {
          agent { label 'k8s' }
          steps {
            sh """
              mvn -version
            """
          }
        }
        stage("win") {
          agent { label 'windows' }
          steps {
            bat "dir"
          }
        }
      }
    }
  }
}
You did not specify an image for the stages with labels k8s and windows.
You can read in the docs that:
The plugin creates a Kubernetes Pod for each agent started, defined by the Docker image to run, and stops it after each build.
Agents are launched using JNLP, so it is expected that the image connects automatically to the Jenkins master.
You are using podTemplate, and I would advise setting up a container; this might look like the following:
podTemplate(containers: [
    containerTemplate(name: 'maven', image: 'maven:3.3.9-jdk-8-alpine', ttyEnabled: true, command: 'cat'),
    containerTemplate(name: 'golang', image: 'golang:1.8.0', ttyEnabled: true, command: 'cat')
]) {
    node(POD_LABEL) {
        stage('Get a Maven project') {
            git 'https://github.com/jenkinsci/kubernetes-plugin.git'
            container('maven') {
                stage('Build a Maven project') {
                    sh 'mvn -B clean install'
                }
            }
        }
        stage('Get a Golang project') {
            git url: 'https://github.com/hashicorp/terraform.git'
            container('golang') {
                stage('Build a Go project') {
                    sh """
                    mkdir -p /go/src/github.com/hashicorp
                    ln -s `pwd` /go/src/github.com/hashicorp/terraform
                    cd /go/src/github.com/hashicorp/terraform && make core-dev
                    """
                }
            }
        }
    }
}
You can read more about Container Configuration and Container Group Support
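One more thing worth checking, as an assumption based on the description rather than part of the answer above: if the globally configured pod template labelled k8s does define a container for someimage:latest, steps under agent { label 'k8s' } still run in the jnlp container unless they are wrapped in container():
// Sketch: explicitly select the container from the globally configured
// pod template, so the step does not run in the default jnlp container.
// 'k8s' is assumed to be the container name used in the Jenkins config.
stage("2. k8s") {
  agent { label 'k8s' }
  steps {
    container('k8s') {
      sh 'mvn -version'
    }
  }
}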

Kubernetes Pod Template Pre Pod,Service,Deployment yaml file

I am newly studying Kubernetes out of my own interest, and I am trying to create Jenkins jobs to deploy our application. I have one master and one worker machine, both are up and running, and I can ping each machine from the other.
As of now I don't have any pods or deployments; the cluster is a fresh setup.
Right now the Jenkinsfile contains a pod template for nodejs and docker with a single stage to install NPM modules.
def label = "worker-${UUID.randomUUID().toString()}"
podTemplate(
  cloud: 'kubernetes',
  namespace: 'test',
  imagePullSecrets: ['regcred'],
  label: label,
  containers: [
    containerTemplate(name: 'nodejs', image: 'nodejscn/node:latest', ttyEnabled: true, command: 'cat'),
    containerTemplate(name: 'docker', image: 'nodejscn/node:latest', ttyEnabled: true, command: 'cat'),
    containerTemplate(name: 'kubectl', image: 'k8spoc1/kubctl:latest', ttyEnabled: true, command: 'cat')
  ],
  volumes: [
    hostPathVolume(hostPath: '/var/run/docker.sock', mountPath: '/var/run/docker.sock'),
    hostPathVolume(hostPath: '/root/.m2/repository', mountPath: '/root/.m2/repository')
  ]
) {
  node(label) {
    def scmInfo = checkout scm
    def image_tag
    def image_name
    sh 'pwd'
    def gitCommit = scmInfo.GIT_COMMIT
    def gitBranch = scmInfo.GIT_BRANCH
    def commitId
    commitId = scmInfo.GIT_COMMIT[0..7]
    image_tag = "${scmInfo.GIT_BRANCH}-${scmInfo.GIT_COMMIT[0..7]}"
    stage('NPM Install') {
      container('nodejs') {
        withEnv(["NPM_CONFIG_LOGLEVEL=warn"]) {
          sh 'npm install'
        }
      }
    }
  }
}
Now the question is: if I run the Jenkins job with the above code, will the docker and nodejs images mentioned above be downloaded from the Docker registry and saved onto my local machine? How does this work? Can someone please explain it to me?
The above code is for the Jenkins plugin https://github.com/jenkinsci/kubernetes-plugin.
Running the above Jenkins job would run it on an agent or on the master, and the images would be downloaded to that agent/master. The plugin is used to set up Jenkins agents, so if there are no agents, it would run on the master.
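As a side note (an assumption, not mentioned in the answer above), containerTemplate accepts an alwaysPullImage flag that controls whether the node re-pulls the image on every build or reuses the copy already cached on that node:
// Sketch: with alwaysPullImage: false (the default), a node that already has
// nodejscn/node:latest cached will not download it again for the next build.
containerTemplate(name: 'nodejs', image: 'nodejscn/node:latest', ttyEnabled: true, command: 'cat', alwaysPullImage: false)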

Run Kaniko in Jenkins Slave

I want to run Kaniko as a slave in Jenkins. My pipeline is running on the Docker plugin; how can I set the GCR credentials with Kaniko?
I want to upload the GCR credentials to the Jenkins master server.
My pipeline Groovy is shown below:
node("kaniko-jnlp") {
stage('Building Stage') {
git 'https://github.com/jenkinsci/docker-jnlp-slave.git'
sh ''' /kaniko/executor -f `pwd`/Dockerfile -c `pwd` --insecure-
skip-tls-verify --cache=true
--- destination=gcr.io/project/project:v1 '''
}
I am using Kaniko to build images and push to a private repo. My Kaniko docker image uses a Kubernetes pull-secret for authentication, but you should be able to use the following code:
stage('Kaniko') {
  environment {
    ARTIFACTORY_CREDS = credentials('your-credentials')
  }
  steps {
    sh "echo ********** EXAMPLE APP **********"
    container(name: 'kaniko', shell: '/busybox/sh') {
      withEnv(['PATH+EXTRA=/busybox']) {
        sh '''#!/busybox/sh
        /kaniko/executor --context `pwd` --cleanup --dockerfile=your/Dockerfile --build-arg ARTIFACTORY_USER=$ARTIFACTORY_CREDS_USR --build-arg ARTIFACTORY_PASS=$ARTIFACTORY_CREDS_PSW --destination=your.docker.repo/team/image:tag
        '''
      }
    }
  }
}
I run my whole pipeline encapsulated inside a pod; here is how I use Kaniko:
pipeline {
  agent {
    kubernetes {
      yaml """
apiVersion: v1
kind: Pod
metadata:
  labels:
    jenkins: worker
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:debug
    command: ["/busybox/cat"]
    tty: true
    volumeMounts:
    - name: dockercred
      mountPath: /root/.docker/
  volumes:
  - name: dockercred
    secret:
      secretName: dockercred
"""
    }
  }
  stages {
    stage('Stage 1: Build with Kaniko') {
      steps {
        container('kaniko') {
          sh '''/kaniko/executor --context=git://github.com/repository/project.git \
            --destination=repository/image:tag \
            --insecure \
            --skip-tls-verify \
            -v=debug'''
        }
      }
    }
  }
}
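To address the GCR part of the original question, one possible approach is sketched below. It assumes a Jenkins "Secret file" credential with the hypothetical id 'gcr-key' holding a GCP service-account JSON key, and relies on Kaniko honouring GOOGLE_APPLICATION_CREDENTIALS when pushing to gcr.io:
// Sketch: expose the service-account key file to kaniko so it can push to GCR.
// 'gcr-key' is a hypothetical Jenkins credential id; adjust to your setup.
container('kaniko') {
    withCredentials([file(credentialsId: 'gcr-key', variable: 'GOOGLE_APPLICATION_CREDENTIALS')]) {
        sh '/kaniko/executor -f `pwd`/Dockerfile -c `pwd` --destination=gcr.io/project/project:v1'
    }
}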
