Jenkins pipeline - start pod with image built in previous stage in a different pod - docker

I have Jenkins deployed on a Kubernetes cluster A, and it can spawn pods in the same cluster A. My workflow is the following:
Start a pod with an image designed to build a Docker image. This image will be built with a deterministic tag, example/appname:${GIT_COMMIT}.
After the image has been built in the previous pod, start a new pod with the image from that build (example/appname:${GIT_COMMIT}) and run multiple test tools.
Currently I am successfully building with:
podTemplate(yaml: """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: aws-dockerizer
    image: example/aws-dockerizer:0.1.7
    command: ['cat']
    tty: true
    volumeMounts:
    - name: dockersock
      mountPath: /var/run/docker.sock
  volumes:
  - name: dockersock
    hostPath:
      path: /var/run/docker.sock
""") {
  node(POD_LABEL) {
    stage('Clone') {
      git url: 'https://github.com/example/my-app/', branch: '${build_branch_name}', credentialsId: 'github-app'
      container('aws-dockerizer') {
        stage('Build and deploy') {
          withAWS(credentials: 'aws-credentials', region: 'eu-central-1') {
            sh '''#!/usr/bin/env bash
              git config --global --add safe.directory ${WORKSPACE}
              scripts/build_and_push_docker_image_to_aws.sh
            '''
          }
        }
      }
    }
  }
}
I'd like to add the following stage to my pipeline. Note that the new pod "experience-${GIT_COMMIT}" CANNOT be started before the previous stage completes, because the image does not exist until then.
podTemplate(
  label: "experience-${GIT_COMMIT}",
  yaml: """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: experience
    image: example.dkr.ecr.eu-central-1.amazonaws.com/example/appname:${GIT_COMMIT}
    command: ['cat']
    tty: true
""") {
  stage('Run tests') {
    node("experience-${GIT_COMMIT}") {
      container('experience') {
        stage('Run Rspec') {
          sh 'bundle exec rspec'
        }
      }
    }
  }
}
Any idea if this is possible? What's the DSL / what concepts do I need to use? How do I "merge" the two pieces of code to achieve what I want?
I've tried to play around a bit, but when I declare both pod templates at the beginning, the job hangs until the 2nd pod is ready, which it never will be...
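One approach that should work here (a sketch, not tested against your setup): keep two separate scripted podTemplate blocks and run them one after the other. The Kubernetes plugin only requests a pod when the corresponding node step actually executes, so the second pod is never asked for until the first block has finished pushing the image. Image names, registry paths, credentials and the build script below are taken from the question; capturing the commit SHA into a Groovy variable with git rev-parse is an addition so it can be interpolated into the second template.
def gitCommit

podTemplate(yaml: """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: aws-dockerizer
    image: example/aws-dockerizer:0.1.7
    command: ['cat']
    tty: true
    volumeMounts:
    - name: dockersock
      mountPath: /var/run/docker.sock
  volumes:
  - name: dockersock
    hostPath:
      path: /var/run/docker.sock
""") {
  node(POD_LABEL) {
    stage('Clone, build and push') {
      git url: 'https://github.com/example/my-app/', branch: '${build_branch_name}', credentialsId: 'github-app'
      // Capture the commit so the second pod template can interpolate the image tag.
      gitCommit = sh(script: 'git rev-parse HEAD', returnStdout: true).trim()
      container('aws-dockerizer') {
        withAWS(credentials: 'aws-credentials', region: 'eu-central-1') {
          sh 'scripts/build_and_push_docker_image_to_aws.sh'
        }
      }
    }
  }
}

// This block is only evaluated after the first one has returned, so the image
// example/appname:<commit> already exists in the registry when the pod is requested.
podTemplate(yaml: """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: experience
    image: example.dkr.ecr.eu-central-1.amazonaws.com/example/appname:${gitCommit}
    command: ['cat']
    tty: true
""") {
  node(POD_LABEL) {
    stage('Run tests') {
      container('experience') {
        sh 'bundle exec rspec'
      }
    }
  }
}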

Related

Using Parameters in yamlFile of kubernetes agent in jenkins declarative pipeline

I have a shared yaml file for multiple pipelines and I would like to parameterize the tag of one of the images in the yaml file.
What would be the simplest way to do this? At the moment I am maintaining multiple KubernetesPods.yaml files, such as KubernetesPods-1.5.0.yaml, and interpolating the parameter into the name (yamlFile "KubernetesPods-${params.AGENT_POD_SPEC}.yaml"), but this does not scale well.
Can I get parameters into the yaml without having to have the yaml written out in every pipeline?
Example pipeline:
pipeline {
  agent {
    kubernetes {
      yamlFile 'KubernetesPods.yaml'
    }
  }
  parameters {
    choice(
      name: 'AGENT_POD_SPEC',
      choices: ['1.5.0','1.3.0','1.2.0','1.4.0'],
      description: 'Agent pod configuration'
    )
  }
}
Example KubernetesPods.yaml:
kind: Pod
spec:
  containers:
  - name: example-image
    image: example/image:<IMAGE-TAG-I-WANT-TO-PARAMETERIZE>
    imagePullPolicy: Always
    command:
    - cat
You can do it using yaml instead of yamlFile
pipeline {
  agent {
    kubernetes {
      yaml """
kind: Pod
spec:
  containers:
  - name: example-image
    image: example/image:${params.AGENT_POD_SPEC}
    imagePullPolicy: Always
    command:
    - cat
"""
    }
  }
  parameters {
    choice(
      name: 'AGENT_POD_SPEC',
      choices: ['1.5.0','1.3.0','1.2.0','1.4.0'],
      description: 'Agent pod configuration'
    )
  }
}
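An alternative, if you want a single shared definition rather than pasting the YAML into every Jenkinsfile, is to move it into a shared library. This is only a sketch: podYaml is a hypothetical global variable (vars/podYaml.groovy) that returns the interpolated YAML string.
// vars/podYaml.groovy in a shared library (hypothetical helper)
def call(String tag) {
  return """
kind: Pod
spec:
  containers:
  - name: example-image
    image: example/image:${tag}
    imagePullPolicy: Always
    command:
    - cat
"""
}
Each pipeline then only references the helper (add @Library('your-shared-lib') _ at the top if the library is not loaded implicitly):
pipeline {
  agent {
    kubernetes {
      yaml podYaml(params.AGENT_POD_SPEC)
    }
  }
  parameters {
    choice(name: 'AGENT_POD_SPEC', choices: ['1.5.0','1.3.0','1.2.0','1.4.0'], description: 'Agent pod configuration')
  }
  stages {
    stage('Example') {
      steps {
        container('example-image') {
          sh 'echo running in the parameterized image'
        }
      }
    }
  }
}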

Use "label" or define a pod template in jenkinsfile for kubernetes-plugin?

In General
I'm trying to use label with the kubernetes-plugin for Jenkins, but I'm a little confused.
In my pipeline below I'm trying to build a test job in parallel steps with different labels (agents).
I have already configured the plugin with a pod template and container in my Jenkins config, using the same settings as the podTemplate defined in the pipeline.
Issue
The problem is that when I use an agent label in stage 2, the jnlp image runs instead of the image I point to in the config, someimage:latest.
In stage 1, where I define the pod in the pipeline, there is no problem and the required images run fine.
Question
What am I doing wrong?
Here are my Jenkinsfile and the config of the kubernetes-plugin in Jenkins:
def podTemplate = """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: k8s
    image: someimage:latest
    command:
    - sleep
    args:
    - infinity
    volumeMounts:
    - name: workspace-volume
      mountPath: /home/jenkins/agent
    workingDir: "/home/jenkins/agent"
  volumes:
  - name: "workspace-volume"
    persistentVolumeClaim:
      claimName: "jenkins-worker-pvc"
      readOnly: false
"""
pipeline {
  agent none
  stages {
    stage("Parallel") {
      parallel {
        stage("1.k8s") {
          agent {
            kubernetes {
              yaml podTemplate
              defaultContainer 'k8s'
            }
          }
          steps {
            sh """
              mvn -version
            """
          }
        }
        stage("2. k8s") {
          agent { label 'k8s' }
          steps {
            sh """
              mvn -version
            """
          }
        }
        stage("win") {
          agent { label 'windows' }
          steps {
            bat "dir"
          }
        }
      }
    }
  }
}
You did not specify an image for the stages with the labels k8s and windows.
You can read in the docs that:
The plugin creates a Kubernetes Pod for each agent started, defined by the Docker image to run, and stops it after each build.
Agents are launched using JNLP, so it is expected that the image connects automatically to the Jenkins master.
You are using podTemplate, and I would advise setting up containers; this might look like the following:
podTemplate(containers: [
    containerTemplate(name: 'maven', image: 'maven:3.3.9-jdk-8-alpine', ttyEnabled: true, command: 'cat'),
    containerTemplate(name: 'golang', image: 'golang:1.8.0', ttyEnabled: true, command: 'cat')
]) {
  node(POD_LABEL) {
    stage('Get a Maven project') {
      git 'https://github.com/jenkinsci/kubernetes-plugin.git'
      container('maven') {
        stage('Build a Maven project') {
          sh 'mvn -B clean install'
        }
      }
    }
    stage('Get a Golang project') {
      git url: 'https://github.com/hashicorp/terraform.git'
      container('golang') {
        stage('Build a Go project') {
          sh """
            mkdir -p /go/src/github.com/hashicorp
            ln -s `pwd` /go/src/github.com/hashicorp/terraform
            cd /go/src/github.com/hashicorp/terraform && make core-dev
          """
        }
      }
    }
  }
}
You can read more about Container Configuration and Container Group Support
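If you want to keep selecting the globally configured pod template by its label, remember that the steps of such a stage run in the jnlp container by default. A possible fix (a sketch, assuming the container in that global template is named k8s, as in the config described above) is to switch containers explicitly, e.g. for stage 2:
stage("2. k8s") {
  agent { label 'k8s' }
  steps {
    // Run inside the 'k8s' container from the globally configured pod template
    // instead of the default jnlp container.
    container('k8s') {
      sh 'mvn -version'
    }
  }
}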

kubernetes jenkins docker command not found

Installed Jenkins using helm
helm install --name jenkins -f values.yaml stable/jenkins
Jenkins plugins installed:
- kubernetes:1.12.6
- workflow-job:2.31
- workflow-aggregator:2.5
- credentials-binding:1.16
- git:3.9.3
- docker:1.1.6
Defined a Jenkins pipeline to build a Docker container:
node {
  checkout scm
  def customImage = docker.build("my-image:${env.BUILD_ID}")
  customImage.inside {
    sh 'make test'
  }
}
This throws the error: docker not found.
You can define an agent pod with containers for the required tools (Docker, Maven, Helm, etc.) in the pipeline.
First, create agentpod.yaml with the following values:
apiVersion: v1
kind: Pod
metadata:
  labels:
    some-label: pod
spec:
  containers:
  - name: maven
    image: maven:3.3.9-jdk-8-alpine
    command:
    - cat
    tty: true
    volumeMounts:
    - name: m2
      mountPath: /root/.m2
  - name: docker
    image: docker:19.03
    command:
    - cat
    tty: true
    securityContext:
      privileged: true
    volumeMounts:
    - name: dockersock
      mountPath: /var/run/docker.sock
  volumes:
  - name: dockersock
    hostPath:
      path: /var/run/docker.sock
  - name: m2
    hostPath:
      path: /root/.m2
Then configure the pipeline as:
pipeline {
  agent {
    kubernetes {
      defaultContainer 'jnlp'
      yamlFile 'agentpod.yaml'
    }
  }
  stages {
    stage('Build') {
      steps {
        container('maven') {
          sh 'mvn package'
        }
      }
    }
    stage('Docker Build') {
      steps {
        container('docker') {
          sh "docker build -t dockerimage ."
        }
      }
    }
  }
}
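If you prefer to keep the docker.build syntax from the question, a minimal sketch (assuming the same agentpod.yaml above and the Docker Pipeline plugin that provides docker.build) is to run it inside the docker container, which has the CLI and the mounted socket:
stage('Docker Build') {
  steps {
    container('docker') {
      script {
        // The docker CLI in this container talks to the node's daemon
        // through the mounted /var/run/docker.sock.
        def customImage = docker.build("my-image:${env.BUILD_ID}")
      }
    }
  }
}
Note that customImage.inside { ... } from the question relies on mounting the workspace from the Docker host, which does not map cleanly onto a Kubernetes agent pod; running the tests in a dedicated container of the agent pod is usually simpler.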
It seems like you have only installed the plugins but not the packages. There are two possibilities:
- Configure the plugin to install the package through Jenkins: go to Manage Jenkins, open Global Tool Configuration, add a Docker installation under Docker (e.g. Docker-latest), check "Install automatically", add an installer, and save.
- If Docker is already installed on the machine, update the PATH variable in Jenkins with the location of Docker.
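For the second option, Jenkins' PATH+ prefix syntax (also used in the Kaniko answer further down) is a lightweight way to put an existing docker binary on the PATH for a single step; the install location below is only an example:
withEnv(['PATH+DOCKER=/usr/local/bin']) {  // prepend a hypothetical docker install location to PATH
  sh 'docker version'
}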

Run Kaniko in Jenkins Slave

I want to run Kaniko as a slave in Jenkins. My pipeline is running on the Docker plugin; how can I set the GCR credentials with Kaniko?
I want to upload the GCR credentials to the Jenkins master server.
My pipeline Groovy is shown below:
node("kaniko-jnlp") {
stage('Building Stage') {
git 'https://github.com/jenkinsci/docker-jnlp-slave.git'
sh ''' /kaniko/executor -f `pwd`/Dockerfile -c `pwd` --insecure-
skip-tls-verify --cache=true
--- destination=gcr.io/project/project:v1 '''
}
I am using Kaniko to build images and push to a private repo. My Kaniko docker image uses a Kubernetes pull-secret for authentication, but you should be able to use the following code:
stage('Kaniko') {
  environment {
    ARTIFACTORY_CREDS = credentials('your-credentials')
  }
  steps {
    sh "echo ********** EXAMPLE APP **********"
    container(name: 'kaniko', shell: '/busybox/sh') {
      withEnv(['PATH+EXTRA=/busybox']) {
        sh '''#!/busybox/sh
          /kaniko/executor --context `pwd` --cleanup --dockerfile=your/Dockerfile --build-arg ARTIFACTORY_USER=$ARTIFACTORY_CREDS_USR --build-arg ARTIFACTORY_PASS=$ARTIFACTORY_CREDS_PSW --destination=your.docker.repo/team/image:tag
        '''
      }
    }
  }
}
I run my whole pipeline encapsulated inside a pod; here is how I use Kaniko:
pipeline {
  agent {
    kubernetes {
      yaml """
apiVersion: v1
kind: Pod
metadata:
  labels:
    jenkins: worker
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:debug
    command: ["/busybox/cat"]
    tty: true
    volumeMounts:
    - name: dockercred
      mountPath: /root/.docker/
  volumes:
  - name: dockercred
    secret:
      secretName: dockercred
"""
    }
  }
  stages {
    stage('Stage 1: Build with Kaniko') {
      steps {
        container('kaniko') {
          sh '''/kaniko/executor --context=git://github.com/repository/project.git \
            --destination=repository/image:tag \
            --insecure \
            --skip-tls-verify \
            -v=debug'''
        }
      }
    }
  }
}
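For the GCR part of the original question, a possible variant of the pod definition above (a sketch; the secret and volume names are illustrative) is to create a Kubernetes docker-registry secret from a GCR service-account key (username _json_key, password set to the key file contents) and project it to /kaniko/.docker/config.json, which is where the kaniko executor image looks for Docker credentials by default. The relevant fragment of the container/volume spec would then be:
    volumeMounts:
    - name: gcr-cred
      mountPath: /kaniko/.docker/
  volumes:
  - name: gcr-cred
    projected:
      sources:
      - secret:
          name: gcr-secret           # hypothetical docker-registry secret created from the service-account key
          items:
          - key: .dockerconfigjson
            path: config.json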

Error pushing kaniko build image to azure container registry from jenkins groovy pipeline

I have a scenario: I run my Jenkins in a K8s cluster in Minikube, and I run a Groovy script within my Jenkins pipeline which builds the Docker image using Kaniko (which builds a Docker image without a Docker daemon) and pushes it to Azure Container Registry. I have created a secret to authenticate to Azure.
But when I push an image, I get an error:
" [36mINFO[0m[0004] Taking snapshot of files...
[36mINFO[0m[0004] ENTRYPOINT ["jenkins-slave"]
error pushing image: failed to push to destination Testimage.azurecr.io/test:latest: unexpected end of JSON input
[Pipeline] }"
My Groovy script:
def label = "kaniko-${UUID.randomUUID().toString()}"
podTemplate(name: 'kaniko', label: label, yaml: """
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:debug
    imagePullPolicy: Always
    command:
    - /busybox/cat
    tty: true
    volumeMounts:
    - name: jenkins-pv
      mountPath: /root
  volumes:
  - name: jenkins-pv
    projected:
      sources:
      - secret:
          name: pass
          items:
          - key: .dockerconfigjson
            path: .docker/config.json
"""
) {
  node(label) {
    stage('Build with Kaniko') {
      git 'https://github.com/jenkinsci/docker-jnlp-slave.git'
      container(name: 'kaniko', shell: '/busybox/sh') {
        sh '''#!/busybox/sh
          /kaniko/executor -f `pwd`/Dockerfile -c `pwd` --skip-tls-verify --destination=testimage.azurecr.io/test:latest
        '''
      }
    }
  }
}
Could you please help me overcome the error? Also:
How do I know the name of the image built by Kaniko?
I am just pushing something like registry.acr.io/test:latest; is an incorrect image name the reason I get the JSON input error?
