Jenkins Pipeline Kubernetes: Define pod yaml dynamically

I am trying to run a test docker image in kubernetes which will test my application. The application container and the test container share the same version, which is incremented whenever the tests or the application change. How can I define the pod yaml dynamically for the kubernetes plugin, so that I can get the version in the first stage (which runs outside the kubernetes cluster) and then update the pod yaml with the right version of the container?
APP_VERSION = ""
pod_yaml = """
apiVersion: v1
kind: Pod
metadata:
  labels:
    some-label: ci--my-app
spec:
  containers:
  - name: test-runner
    image: my.docker.registry/app-tester:${-> APP_VERSION}
    imagePullPolicy: Always
    command:
    - cat
    tty: true
"""
pipeline {
    agent none
    stages {
        stage('Build and Upload') {
            agent { node { label 'builder' } }
            steps {
                script {
                    APP_VERSION = sh(
                        script: "cat VERSION",
                        returnStdout: true
                    ).trim()
                }
            }
        }
        stage('Deploy and Test application') {
            agent {
                kubernetes {
                    label 'ci--data-visualizer-kb'
                    defaultContainer 'jnlp'
                    yaml pod_yaml
                }
            }
            steps {
                container('test-runner') {
                    sh "echo ${APP_VERSION}"
                    sh "ls -R /workspace"
                }
            }
        }
    }
}
The kubernetes block in the pipeline does not accept lazy evaluation of the pod_yaml string containing ${-> APP_VERSION}. Is there any workaround for this, or am I doing it totally wrong?
PS: I cannot use a scripted pipeline for other reasons, so I have to stick to the declarative pipeline.

It might be a bit odd, but if you're out of other options, you can use the jinja2 template engine and Python to dynamically generate the file you want.
Check it out - it's quite robust.
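For illustration, a rough sketch of how that could plug into the declarative pipeline above; the render_pod.py helper, its flags, and the pod.yaml.j2 template are my own assumptions, not something from this answer:
renderedPodYaml = ''

node('builder') {
    checkout scm
    // read the version and render the Jinja2 template
    // (render_pod.py and pod.yaml.j2 are hypothetical names used only for this sketch)
    def appVersion = sh(script: 'cat VERSION', returnStdout: true).trim()
    sh "python3 render_pod.py --template pod.yaml.j2 --app-version ${appVersion} --out pod.rendered.yaml"
    renderedPodYaml = readFile('pod.rendered.yaml')
}

pipeline {
    agent {
        kubernetes {
            defaultContainer 'jnlp'
            yaml renderedPodYaml   // already fully resolved, no lazy evaluation needed
        }
    }
    stages {
        stage('Deploy and Test application') {
            steps {
                container('test-runner') {
                    sh 'ls -R /workspace'
                }
            }
        }
    }
}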

Related

Check if container exists when using Kubernetes plugin for jenkins

Our pipeline by default tries to use a container that matches the name of the current stage.
If this container doesn't exist, the container 'default' is used.
This works, but when the container matching the stage name doesn't exist, a ProtocolException occurs that we can't catch because it is thrown by a thread outside our control.
Is there a way to check if a container actually exists when using the Kubernetes plugin for Jenkins to prevent this exception from appearing? It seems like a basic function but I haven't been able to find anything like this online.
I can't show the actual code but here's a pipeline-script example extract that would trigger this exception:
node(POD_LABEL) {
    stage('Check Version (Maven)') {
        container('containerThatDoesNotExist') {
            try {
                sh 'mvn --version'
            } catch (Exception e) {
                // catch Exception
            }
        }
    }
}
java.net.ProtocolException: Expected HTTP 101 response but was '400 Bad Request'
at okhttp3.internal.ws.RealWebSocket.checkResponse(RealWebSocket.java:229)
at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:196)
at okhttp3.RealCall$AsyncCall.execute(RealCall.java:203)
at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
You can run a pre-stage that finds the currently running container by exec'ing a kubectl command against the server. The tricky point is that kubectl does not exist on the worker, so in that case:
Pull a kubectl image on the worker.
Add a stage that gets the running container; use a label or timestamp to get the desired one.
Use the right container, 'default' or rather 'some-container'.
Example:
pipeline {
    environment {
        CURRENT_CONTAINER = "default"
    }
    agent {
        kubernetes {
            defaultContainer 'jnlp'
            yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: some-app
    image: XXX/some-app
    imagePullPolicy: IfNotPresent
    tty: true
  - name: kubectl
    image: gcr.io/cloud-builders/kubectl
    imagePullPolicy: IfNotPresent
    command:
    - cat
    tty: true
'''
        }
    }
    stages {
        stage('Set Container Name') {
            steps {
                container('kubectl') {
                    withCredentials([
                        string(credentialsId: 'minikube', variable: 'api_token')
                    ]) {
                        script {
                            CURRENT_CONTAINER = sh(
                                script: 'kubectl get pods -n jenkins -l job-name=pi -o jsonpath="{.items[*].spec.containers[0].name}"',
                                returnStdout: true
                            ).trim()
                            echo "Exec container ${CURRENT_CONTAINER}"
                        }
                    }
                }
            }
        }
        stage('Echo Container Name') {
            steps {
                echo "CURRENT_CONTAINER is ${CURRENT_CONTAINER}"
            }
        }
    }
}

Passing variable to jenkins yaml podTemplate

I am using Jenkins with the kubernetes plugin to run my jobs, and I need to run a pipeline that:
builds a docker image
pushes it to the registry
uses that same image in the following steps to run the tests.
Container(image: A): build image B
Container(image: B): test image B
So I would like to use variables and substitute them inside the kubernetes podTemplate, like this:
pipeline {
    agent none
    stages {
        stage("Build image") {
            // some script that builds the image
            steps {
                script {
                    def image_name = "busybox"
                }
            }
        }
        stage('Run tests') {
            environment {
                image = "$image_name"
            }
            agent {
                kubernetes {
                    yaml """\
                        apiVersion: v1
                        kind: Pod
                        metadata:
                          labels:
                            some-label: some-label-value
                        spec:
                          containers:
                          - name: busybox
                            image: "${env.image}"
                            command:
                            - cat
                            tty: true
                        """.stripIndent()
                }
            }
            steps {
                container('busybox') {
                    sh 'echo "I am alive!!"'
                }
            }
        }
    }
}
but the variable is empty as I get:
[Normal][ci/test-10-g91lr-xtc20-s1ng1][Pulling] Pulling image "null"
[Warning][ci/test-10-g91lr-xtc20-s1ng1][Failed] Error: ErrImagePull
[Warning][ci/test-10-g91lr-xtc20-s1ng1][Failed] Failed to pull image "null": rpc error: code = Unknown desc = Error response from daemon: pull access denied for null, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Do you know how I can achieve a similar behaviour ?
Thank you zett42 for your answer, I was able to achieve my objective with your suggestions.
Basically the solution was to set a global environment variable in the build stage. I post the full solution here to help others with the same problem:
pipeline {
    agent none
    stages {
        stage("Build image") {
            // some script that builds the image
            steps {
                script {
                    env.image_name = "busybox"
                }
            }
        }
        stage('Run tests') {
            agent {
                kubernetes {
                    yaml """\
                        apiVersion: v1
                        kind: Pod
                        metadata:
                          labels:
                            some-label: some-label-value
                        spec:
                          containers:
                          - name: busybox
                            image: "${env.image_name}"
                            command:
                            - cat
                            tty: true
                        """.stripIndent()
                }
            }
            steps {
                container('busybox') {
                    sh 'echo "I am alive!!"'
                }
            }
        }
    }
}
To better understand it, it was useful to read this article:
https://e.printstacktrace.blog/jenkins-pipeline-environment-variables-the-definitive-guide/
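To make the difference concrete, here is a small self-contained sketch (my own, not from the linked article) contrasting a script-local def with an env. assignment:
pipeline {
    agent any
    stages {
        stage('Set') {
            steps {
                script {
                    def local_name = "busybox"   // visible only inside this script block
                    env.image_name = "busybox"   // visible in later stages and inside yaml strings
                }
            }
        }
        stage('Read') {
            steps {
                echo "env.image_name = ${env.image_name}"   // prints busybox
                // referencing local_name here would fail: it went out of scope
            }
        }
    }
}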

Running parallel kubernetes jobs in Jenkins pipeline

I'm running performance tests on Jenkins. A test may include multiple instances of the same container to generate the necessary load. I can't hardcode the number of instances as it varies based on the test params.
I've tried to use the following code:
pipeline {
    agent any
    stages {
        stage('Running Jmeter') {
            agent {
                kubernetes {
                    label "jmeter_tests_executor"
                    yaml '''
apiVersion: batch/v1
kind: Job
metadata:
  name: jmeter
  namespace: jenkins
spec:
  parallelism: 2
  backoffLimit: 1
  ttlSecondsAfterFinished: 100
  ...
But it doesn't work: it hangs on pod scheduling (the Job works fine if you apply this manifest directly to the kubernetes cluster, without Jenkins).
If someone has experience with this, please share your workarounds or ideas on how to implement it.
Maybe try something like this
stage("RUN LOAD TEST") {
steps {
script {
//params.each creates an array of stages
paramsToTest.each {param ->
load["load test"] = {
stage("Executing run ${param}") {
agent {
kubernetes {
label "jmeter_tests_executor"
yaml '''
apiVersion: batch/v1
kind: Job
metadata:
name: jmeter
namespace: jenkins
spec:
parallelism: 2
backoffLimit: 1
ttlSecondsAfterFinished: 100
...
'''
}
}
steps {
<EXECUTE LOAD TEST COMMAND>
}
}
}
parallel(load) //actually executes the parallel stages
}
}
}
}
What this does is iterate over an array and generate a stage for each element. The agent parameters in each generated stage should tell Jenkins to create a new pod for each parallel execution.
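As a stripped-down, scripted sketch of that pattern (mine, without the kubernetes Job part), the branches are collected into a map first and parallel is called once:
// Standalone sketch of the map-of-closures pattern (illustration only):
// one branch per parameter, all branches run with a single parallel() call.
def paramsToTest = ['10-users', '50-users', '100-users']
def branches = [:]

node {
    paramsToTest.each { param ->
        branches["load test ${param}"] = {
            stage("Executing run ${param}") {
                echo "would run the load test for ${param} here"
            }
        }
    }
    parallel(branches)   // executes all generated branches concurrently
}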

Jenkins writeYaml before the pipeline starts and read it in agent.kubernetes.yamlFile

I have a pipeline that needs a modified yaml file for different environments. For that I read the template, overwrite the parameter and save it again before the pipeline { ... } part starts.
node {
    stage('Adjust serviceAccountName to env') {
        checkout scm
        def valuesYaml = readYaml(file: 'build_nodes.yaml')
        valuesYaml.spec.serviceAccountName = 'user-test'
        sh 'rm -f build_nodes_new.yaml'
        writeYaml file: 'build_nodes_new.yaml', data: valuesYaml
    }
}
Once I want to load the file, the problem is that it can't be found:
pipeline {
    environment {
        ENV_VAR=....
    }
    agent {
        kubernetes {
            label 'some_label'
            yamlFile 'build_nodes_new.yaml'
        }
    }
    stages {
        stage('Assume Role') { ... }
Throws an error:
java.io.FileNotFoundException: URL:
/rest/api/1.0/projects/PROJECT/repos/backend/browse/build_nodes_new.yaml?at=feature%2Fmy-branch-name&start=0&limit=500
Do I have to save the yaml file somewhere else? If I run ls -la, the file is displayed.
This is because you wrote the yaml file on a regular node and then try to read it from a container in Kubernetes. It's as if they're on different machines; in fact, they very likely are. You could pass the contents as a string to the kubernetes agent, or you could write the file to a filesystem that the pod can mount.
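A minimal sketch of the first option (my own, reusing the question's build_nodes.yaml handling; the Assume Role step body is just a placeholder):
podYamlText = ''

node {
    stage('Adjust serviceAccountName to env') {
        checkout scm
        def valuesYaml = readYaml(file: 'build_nodes.yaml')
        valuesYaml.spec.serviceAccountName = 'user-test'
        sh 'rm -f build_nodes_new.yaml'
        writeYaml file: 'build_nodes_new.yaml', data: valuesYaml
        // keep the rendered yaml as a string so the k8s agent never needs the file
        podYamlText = readFile('build_nodes_new.yaml')
    }
}

pipeline {
    agent {
        kubernetes {
            label 'some_label'
            yaml podYamlText   // string contents instead of a yamlFile lookup
        }
    }
    stages {
        stage('Assume Role') {
            steps {
                echo 'running on the pod built from build_nodes_new.yaml'
            }
        }
    }
}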
I had a similar issue and the below worked for me, thanks #sam_ste:
def get_yaml() {
    node {
        sh 'env'
        echo GERRIT_PATCHSET_REVISION
        echo "${GERRIT_PATCHSET_REVISION}"
        return """
apiVersion: v1
kind: Pod
metadata:
  labels:
    some-label: some-label-value
spec:
  containers:
  - name: simplekube
    image: dhub.net/jenkins/simplekube:${GERRIT_PATCHSET_REVISION}
    command:
    - cat
    tty: true
    securityContext:
      runAsUser: 0
"""
    }
}
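For completeness, a possible way to consume that function (my addition, not shown above): capture the returned string before the pipeline block and hand it to the kubernetes agent; the container name simplekube comes from the template in get_yaml().
// Possible usage of get_yaml() (illustration only, not part of the original answer)
podYaml = get_yaml()

pipeline {
    agent {
        kubernetes {
            yaml podYaml
        }
    }
    stages {
        stage('Run in dynamic pod') {
            steps {
                container('simplekube') {
                    sh 'echo "running inside the dynamically built pod"'
                }
            }
        }
    }
}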

Jenkins Kubernetes Plugin doesn't execute entrypoint of Docker image

I'm fairly new to the Jenkins Kubernetes Plugin and Kubernetes in general - https://github.com/jenkinsci/kubernetes-plugin
I want to use the plugin for the E2E test setup inside my CI.
Inside my Jenkinsfile I have a podTemplate which looks like and is used as follows:
def podTemplate = """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: website
    image: ${WEBSITE_INTEGRATION_IMAGE_PATH}
    command:
    - cat
    tty: true
    ports:
    - containerPort: 3000
  - name: cypress
    resources:
      requests:
        memory: 2Gi
      limits:
        memory: 4Gi
    image: ${CYPRESS_IMAGE_PATH}
    command:
    - cat
    tty: true
"""
pipeline {
    agent {
        label 'docker'
    }
    stages {
        stage('Prepare') {
            steps {
                timeout(time: 15) {
                    script {
                        ci_machine = docker.build("${WEBSITE_IMAGE_PATH}")
                    }
                }
            }
        }
        stage('Build') {
            steps {
                timeout(time: 15) {
                    script {
                        ci_machine.inside("-u root") {
                            sh "yarn build"
                        }
                    }
                }
            }
            post {
                success {
                    timeout(time: 15) {
                        script {
                            docker.withRegistry("https://${REGISTRY}", REGISTRY_CREDENTIALS) {
                                integrationImage = docker.build("${WEBSITE_INTEGRATION_IMAGE_PATH}")
                                integrationImage.push()
                            }
                        }
                    }
                }
            }
        }
        stage('Browser Tests') {
            agent {
                kubernetes {
                    label "${KUBERNETES_LABEL}"
                    yaml podTemplate
                }
            }
            steps {
                timeout(time: 5, unit: 'MINUTES') {
                    container("website") {
                        sh "yarn start"
                    }
                    container("cypress") {
                        sh "yarn test:e2e"
                    }
                }
            }
        }
    }
}
In the Dockerfile that builds the image I added an ENTRYPOINT:
ENTRYPOINT ["bash", "./docker-entrypoint.sh"]
However, it seems that it is not executed by the kubernetes plugin.
Am I missing something?
As per Define a Command and Arguments for a Container docs:
The command and arguments that you define in the configuration file
override the default command and arguments provided by the container
image.
This table summarizes the field names used by Docker and Kubernetes:
| Docker field name | K8s field name |
|------------------:|:--------------:|
| ENTRYPOINT | command |
| CMD | args |
Defining a command implies ignoring your Dockerfile ENTRYPOINT:
When you override the default ENTRYPOINT and CMD, these rules apply:
If you supply a command but no args for a Container, only the supplied command is used. The default ENTRYPOINT and the default CMD defined in the Docker image are ignored.
If you supply only args for a Container, the default ENTRYPOINT
defined in the Docker image is run with the args that you supplied.
So you need to replace command in your pod template with args, which preserves your Dockerfile ENTRYPOINT (the args then act like a Dockerfile CMD).
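For example, a minimal sketch (mine, not the asker's final template) of how the pod template from the question might look after that change; the website container keeps its ENTRYPOINT while cypress stays idle so test steps can still be exec'd into it:
def podTemplate = """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: website
    image: ${WEBSITE_INTEGRATION_IMAGE_PATH}
    # no command here, so ENTRYPOINT ["bash", "./docker-entrypoint.sh"] runs;
    # use args instead if you only want to override the Dockerfile CMD
    ports:
    - containerPort: 3000
  - name: cypress
    image: ${CYPRESS_IMAGE_PATH}
    command:
    - cat
    tty: true
"""
With the website container running its own process, the sh "yarn start" step inside container("website") would no longer be needed; the cypress container still uses command: cat so the plugin can exec sh "yarn test:e2e" into it.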
