Running parallel Kubernetes jobs in a Jenkins pipeline

I'm running performance tests on Jenkins. A test may include multiple instances of the same container to generate the necessary load. I can't hardcode the number of instances because it varies based on the test parameters.
I've tried to use the following code:
pipeline {
    agent any
    stages {
        stage('Running Jmeter') {
            agent {
                kubernetes {
                    label "jmeter_tests_executor"
                    yaml '''
apiVersion: batch/v1
kind: Job
metadata:
  name: jmeter
  namespace: jenkins
spec:
  parallelism: 2
  backoffLimit: 1
  ttlSecondsAfterFinished: 100
...
But it doesn't work: it hangs on pod scheduling (the Job runs fine if you apply this manifest directly to the Kubernetes cluster, without Jenkins).
If anyone has experience with this, please share your workarounds or ideas on how to implement it.

Maybe try something like this:
stage("RUN LOAD TEST") {
steps {
script {
//params.each creates an array of stages
paramsToTest.each {param ->
load["load test"] = {
stage("Executing run ${param}") {
agent {
kubernetes {
label "jmeter_tests_executor"
yaml '''
apiVersion: batch/v1
kind: Job
metadata:
name: jmeter
namespace: jenkins
spec:
parallelism: 2
backoffLimit: 1
ttlSecondsAfterFinished: 100
...
'''
}
}
steps {
<EXECUTE LOAD TEST COMMAND>
}
}
}
parallel(load) //actually executes the parallel stages
}
}
}
}
What this does is take a collection of parameters and generate one stage per entry. The agent block in each generated stage should tell Jenkins to create a new pod for each parallel execution.
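Note, too, that the Kubernetes plugin builds a Pod for the agent out of the yaml you give it, so a batch/v1 Job manifest will not be turned into an actual Job, which is likely why scheduling hangs. If the generated declarative stages above give you trouble, here is a hedged sketch of the same idea in fully scripted form, using the plugin's podTemplate and node(POD_LABEL) steps; the fixed instance count and the justb4/jmeter image are placeholders for your own parameters and load-test image, not something the question prescribes:

def branches = [:]
def instanceCount = 2   // placeholder: derive this from your test params

for (int i = 0; i < instanceCount; i++) {
    def idx = i   // capture the loop variable for the closure
    branches["jmeter-${idx}"] = {
        podTemplate(yaml: '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: jmeter
    image: justb4/jmeter:5.5
    command:
    - cat
    tty: true
''') {
            node(POD_LABEL) {
                container('jmeter') {
                    // placeholder for the real load-test command
                    sh 'jmeter --version'
                }
            }
        }
    }
}
parallel branches

Each branch gets its own pod, and the degree of parallelism is controlled entirely by how many branches you build.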

Related

Check if a container exists when using the Kubernetes plugin for Jenkins

Our pipeline by default tries to use a container that matches the name of the current stage.
If this container doesn't exist, the container 'default' is used.
This functionality works but the problem is that when the container that matches the name of the stage doesn't exist, a ProtocolException occurs, which isn't catchable because it is thrown by a thread that is out of our control.
Is there a way to check if a container actually exists when using the Kubernetes plugin for Jenkins to prevent this exception from appearing? It seems like a basic function but I haven't been able to find anything like this online.
I can't show the actual code but here's a pipeline-script example extract that would trigger this exception:
node(POD_LABEL) {
    stage('Check Version (Maven)') {
        container('containerThatDoesNotExist') {
            try {
                sh 'mvn --version'
            } catch (Exception e) {
                // catch Exception
            }
        }
    }
}
java.net.ProtocolException: Expected HTTP 101 response but was '400 Bad Request'
    at okhttp3.internal.ws.RealWebSocket.checkResponse(RealWebSocket.java:229)
    at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:196)
    at okhttp3.RealCall$AsyncCall.execute(RealCall.java:203)
    at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
You can run a preliminary stage that finds the currently running container by executing a kubectl command against the API server. The tricky point is that kubectl does not exist on the worker, so in that case:
1. Pull a kubectl image on the worker.
2. Add a stage that gets the running container; use a label or timestamp to pick the desired one.
3. Use the right container, 'default' or rather 'some-container'.
Example:
pipeline {
    environment {
        CURRENT_CONTAINER = "default"
    }
    agent {
        kubernetes {
            defaultContainer 'jnlp'
            yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: some-app
    image: XXX/some-app
    imagePullPolicy: IfNotPresent
    tty: true
  - name: kubectl
    image: gcr.io/cloud-builders/kubectl
    imagePullPolicy: IfNotPresent
    command:
    - cat
    tty: true
'''
        }
    }
    stages {
        stage('Set Container Name') {
            steps {
                container('kubectl') {
                    withCredentials([
                        string(credentialsId: 'minikube', variable: 'api_token')
                    ]) {
                        script {
                            CURRENT_CONTAINER = sh(
                                script: 'kubectl get pods -n jenkins -l job-name=pi -o jsonpath="{.items[*].spec.containers[0].name}"',
                                returnStdout: true
                            ).trim()
                            echo "Exec container ${CURRENT_CONTAINER}"
                        }
                    }
                }
            }
        }
        stage('Echo Container Name') {
            steps {
                echo "CURRENT_CONTAINER is ${CURRENT_CONTAINER}"
            }
        }
    }
}
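Following the same idea, here is a hedged sketch of how a later stage might pick its container at runtime and fall back to 'default' when no container matches the stage name. It relies on HOSTNAME equalling the pod name inside the agent pod, reuses the jenkins namespace from the example, assumes the pod's service account may read pods, and uses 'maven' as the hypothetical stage-named container:

stage('Check Version (Maven)') {
    steps {
        container('kubectl') {
            script {
                // Inside the agent pod, HOSTNAME defaults to the pod name.
                def names = sh(
                    script: 'kubectl get pod "$HOSTNAME" -n jenkins -o jsonpath="{.spec.containers[*].name}"',
                    returnStdout: true
                ).trim().tokenize()
                // Fall back to 'default' when no container matches the stage.
                env.TARGET_CONTAINER = names.contains('maven') ? 'maven' : 'default'
            }
        }
        container(env.TARGET_CONTAINER) {
            sh 'mvn --version'
        }
    }
}

Because the container name is resolved before container() is called, the uncatchable ProtocolException never gets a chance to be thrown.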

Passing variable to jenkins yaml podTemplate

I am using Jenkins with the Kubernetes plugin to run my jobs, and I need a pipeline that:
builds a Docker image,
pushes it to the registry,
uses that same image in the following steps to perform the tests.
Container(image:A): build image B
Container(image:B) : test image B
So I would like to use variables and substitute them inside the Kubernetes pod template, like this:
pipeline {
    agent none
    stages {
        stage("Build image") {
            // some script that builds the image
            steps {
                script {
                    def image_name = "busybox"
                }
            }
        }
        stage('Run tests') {
            environment {
                image = "$image_name"
            }
            agent {
                kubernetes {
                    yaml """\
apiVersion: v1
kind: Pod
metadata:
  labels:
    some-label: some-label-value
spec:
  containers:
  - name: busybox
    image: "${env.image}"
    command:
    - cat
    tty: true
""".stripIndent()
                }
            }
            steps {
                container('busybox') {
                    sh 'echo "I am alive!!"'
                }
            }
        }
    }
}
but the variable is empty as I get:
[Normal][ci/test-10-g91lr-xtc20-s1ng1][Pulling] Pulling image "null"
[Warning][ci/test-10-g91lr-xtc20-s1ng1][Failed] Error: ErrImagePull
[Warning][ci/test-10-g91lr-xtc20-s1ng1][Failed] Failed to pull image "null": rpc error: code = Unknown desc = Error response from daemon: pull access denied for null, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Do you know how I can achieve this behaviour?
Thank you zett42 for your answer; I was able to achieve my objective with your suggestions.
Basically, the solution was to set a global environment variable in the build stage. I'm posting the full solution here to help others with the same problem:
pipeline {
    agent none
    stages {
        stage("Build image") {
            // some script that builds the image
            steps {
                script {
                    env.image_name = "busybox"
                }
            }
        }
        stage('Run tests') {
            agent {
                kubernetes {
                    yaml """\
apiVersion: v1
kind: Pod
metadata:
  labels:
    some-label: some-label-value
spec:
  containers:
  - name: busybox
    image: "${env.image_name}"
    command:
    - cat
    tty: true
""".stripIndent()
                }
            }
            steps {
                container('busybox') {
                    sh 'echo "I am alive!!"'
                }
            }
        }
    }
}
To understand it better, it was useful to read this article:
https://e.printstacktrace.blog/jenkins-pipeline-environment-variables-the-definitive-guide/
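In short, the difference between the failing attempt and the working one comes down to Groovy scoping versus pipeline environment variables; a minimal illustration:

script {
    def image_name = "busybox"   // local to this script block; invisible to later stages
    env.image_name = "busybox"   // pipeline-wide environment variable; usable in a later agent yaml
}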

Jenkins writeYaml before the pipeline starts and read it in agent.kubernetes.yamlFile

I have a pipeline that needs a modified yaml file for different environments. For that I read the template, overwrite the parameter and save it again before the pipeline { ... } part starts.
node {
    stage('Adjust serviceAccountName to env') {
        checkout scm
        def valuesYaml = readYaml(file: 'build_nodes.yaml')
        valuesYaml.spec.serviceAccountName = 'user-test'
        sh 'rm -f build_nodes_new.yaml'
        writeYaml file: 'build_nodes_new.yaml', data: valuesYaml
    }
}
The problem is that when I want to load the file, it can't be found:
pipeline {
    environment {
        ENV_VAR=....
    }
    agent {
        kubernetes {
            label 'some_label'
            yamlFile 'build_nodes_new.yaml'
        }
    }
    stages {
        stage('Assume Role') { ... }
Throws an error:
java.io.FileNotFoundException: URL:
/rest/api/1.0/projects/PROJECT/repos/backend/browse/build_nodes_new.yaml?at=feature%2Fmy-branch-name&start=0&limit=500
Do I have to save the YAML file somewhere else? If I run ls -la, the file is displayed.
This is because you wrote the YAML file on a regular node and then tried to read it from a container in k8s. It's like they're on different machines. In fact, they very likely are. You could pass the contents as a string to the k8s node, or you could write it to a filesystem that the k8s pod can mount.
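For the first suggestion, here is a minimal sketch of passing the rendered contents as a string instead of a file, reusing the question's build_nodes.yaml and replacing yamlFile with yaml (the echo in the final stage is purely illustrative):

podYamlText = ''

node {
    stage('Adjust serviceAccountName to env') {
        checkout scm
        def valuesYaml = readYaml(file: 'build_nodes.yaml')
        valuesYaml.spec.serviceAccountName = 'user-test'
        sh 'rm -f build_nodes_new.yaml'
        writeYaml file: 'build_nodes_new.yaml', data: valuesYaml
        // Keep the rendered YAML in memory so the Kubernetes agent
        // does not have to find the file in a different workspace.
        podYamlText = readFile('build_nodes_new.yaml')
    }
}

pipeline {
    agent {
        kubernetes {
            label 'some_label'
            yaml podYamlText   // pass the string instead of yamlFile
        }
    }
    stages {
        stage('Assume Role') {
            steps {
                echo 'running in the pod built from the modified YAML'
            }
        }
    }
}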
I had a similar issue and the below worked for me. Thanks #sam_ste
def get_yaml() {
    node {
        sh 'env'
        echo GERRIT_PATCHSET_REVISION
        echo "${GERRIT_PATCHSET_REVISION}"
        return """
apiVersion: v1
kind: Pod
metadata:
  labels:
    some-label: some-label-value
spec:
  containers:
  - name: simplekube
    image: dhub.net/jenkins/simplekube:${GERRIT_PATCHSET_REVISION}
    command:
    - cat
    tty: true
    securityContext:
      runAsUser: 0
"""
    }
}
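As a hedged sketch of how the returned string might then be consumed, you could call get_yaml() up front (it allocates a regular node itself) and feed the result to the agent; the Test stage body here is illustrative only:

podYaml = get_yaml()   // runs on a regular node before the pipeline starts

pipeline {
    agent {
        kubernetes {
            yaml podYaml
        }
    }
    stages {
        stage('Test') {
            steps {
                container('simplekube') {
                    sh 'echo "running in the dynamically generated pod"'
                }
            }
        }
    }
}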

How can I use pipeline environment variables to configure my Jenkins agent?

I am trying to use environment variables to configure my Jenkins agent as follows:
pipeline {
    environment {
        TEST = "test"
    }
    agent {
        kubernetes {
            label 'kubernetes'
            defaultContainer 'jnlp'
            yaml """
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: "${env.TEST}"
...
but ${env.TEST} comes out as null. Using ${env.BUILD_NUMBER} works as expected, so it seems the agent doesn't have access to environment variables defined in the pipeline.
Is there any way to get this to work?
You have it basically right. env.VALUE is used for actual user environment variables (e.g. if I run Jenkins on an agent that has KUBECONFIG set, as per an AMI or otherwise, that would be env.KUBECONFIG). It is confusing, but typically in a shared library you define global environment variables as follows:
env.MY_VALUE = "some value"
When you reference env.VALUE, it is the actual user environment variables you are checking. For values you set in the environment closure, you can simply refer to them as MY_VALUE:
pipeline {
    environment {
        TEST = "test"
    }
    agent {
        kubernetes {
            label 'kubernetes'
            defaultContainer 'jnlp'
            yaml """
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: "${TEST}"
...

Jenkins Pipeline Kubernetes: Define pod yaml dynamically

I am trying to run a test Docker image in Kubernetes which will test my application. The application container and the test container have the same version, which is incremented whenever the tests or the application change. How can I define the pod YAML dynamically for the Kubernetes plugin, so that I can get the version in the first stage (which runs outside the Kubernetes cluster) and then update the pod YAML with the right version of the container?
APP_VERSION = ""
pod_yaml = """
apiVersion: v1
kind: Pod
metadata:
  labels:
    some-label: ci--my-app
spec:
  containers:
  - name: test-runner
    image: my.docker.registry/app-tester:${-> APP_VERSION}
    imagePullPolicy: Always
    command:
    - cat
    tty: true
"""
pipeline {
    agent none
    stages {
        stage('Build and Upload') {
            agent { node { label 'builder' } }
            steps {
                script {
                    APP_VERSION = sh(
                        script: "cat VERSION",
                        returnStdout: true
                    ).trim()
                }
            }
        }
        stage('Deploy and Test application') {
            agent {
                kubernetes {
                    label 'ci--data-visualizer-kb'
                    defaultContainer 'jnlp'
                    yaml pod_yaml
                }
            }
            steps {
                container('test-runner') {
                    sh "echo ${APP_VERSION}"
                    sh "ls -R /workspace"
                }
            }
        }
    }
}
The kubernetes block in the pipeline does not accept lazy evaluation of the pod_yaml string, which contains ${-> APP_VERSION}. Is there a workaround for this, or am I doing it totally wrong?
PS: I cannot use a scripted pipeline for other reasons, so I have to stick to the declarative pipeline.
It might be a bit odd, but if you're out of other options, you can use the Jinja2 template engine and Python to dynamically generate the file you want.
Check it out; it's quite robust.
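Alternatively, the env-variable pattern from the "Passing variable to jenkins yaml podTemplate" answer above can be applied here as well. A hedged sketch that stays declarative: set env.APP_VERSION in the build stage and inline the YAML in the test stage, so the string is interpolated when that stage starts rather than when the top-level pod_yaml variable is defined:

pipeline {
    agent none
    stages {
        stage('Build and Upload') {
            agent { node { label 'builder' } }
            steps {
                script {
                    // Global env var instead of the script-level APP_VERSION
                    env.APP_VERSION = sh(script: 'cat VERSION', returnStdout: true).trim()
                }
            }
        }
        stage('Deploy and Test application') {
            agent {
                kubernetes {
                    label 'ci--data-visualizer-kb'
                    defaultContainer 'jnlp'
                    // Inlined so interpolation happens when this stage starts,
                    // by which time APP_VERSION is already set.
                    yaml """
apiVersion: v1
kind: Pod
metadata:
  labels:
    some-label: ci--my-app
spec:
  containers:
  - name: test-runner
    image: my.docker.registry/app-tester:${env.APP_VERSION}
    imagePullPolicy: Always
    command:
    - cat
    tty: true
"""
                }
            }
            steps {
                container('test-runner') {
                    sh 'echo $APP_VERSION'
                    sh 'ls -R /workspace'
                }
            }
        }
    }
}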
