Check if a container exists when using the Kubernetes plugin for Jenkins

Our pipeline by default tries to use a container that matches the name of the current stage.
If this container doesn't exist, the container 'default' is used.
This works, but when the container matching the stage name doesn't exist, a ProtocolException occurs, and it isn't catchable because it is thrown by a thread that is out of our control.
Is there a way to check whether a container actually exists when using the Kubernetes plugin for Jenkins, so that this exception never appears? It seems like a basic feature, but I haven't been able to find anything like it online.
I can't show the actual code but here's a pipeline-script example extract that would trigger this exception:
node(POD_LABEL) {
    stage('Check Version (Maven)') {
        container('containerThatDoesNotExist') {
            try {
                sh 'mvn --version'
            } catch (Exception e) {
                // catch Exception
            }
        }
    }
}
java.net.ProtocolException: Expected HTTP 101 response but was '400 Bad Request'
at okhttp3.internal.ws.RealWebSocket.checkResponse(RealWebSocket.java:229)
at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:196)
at okhttp3.RealCall$AsyncCall.execute(RealCall.java:203)
at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

You can run a preliminary stage that finds the currently running container by executing a kubectl command against the cluster. The tricky point is that kubectl does not exist on the worker, so in that case:
Pull a kubectl image onto the worker.
Add a stage that looks up the running container - use a label or timestamp to select the desired one.
Use the right container, 'default' or rather 'some-container'.
Example:
pipeline {
    environment {
        CURRENT_CONTAINER = "default"
    }
    agent {
        kubernetes {
            defaultContainer 'jnlp'
            yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: some-app
    image: XXX/some-app
    imagePullPolicy: IfNotPresent
    tty: true
  - name: kubectl
    image: gcr.io/cloud-builders/kubectl
    imagePullPolicy: IfNotPresent
    command:
    - cat
    tty: true
'''
        }
    }
    stages {
        stage('Set Container Name') {
            steps {
                container('kubectl') {
                    withCredentials([
                        string(credentialsId: 'minikube', variable: 'api_token')
                    ]) {
                        script {
                            CURRENT_CONTAINER = sh(
                                script: 'kubectl get pods -n jenkins -l job-name=pi -o jsonpath="{.items[*].spec.containers[0].name}"',
                                returnStdout: true
                            ).trim()
                            echo "Exec container ${CURRENT_CONTAINER}"
                        }
                    }
                }
            }
        }
        stage('Echo Container Name') {
            steps {
                echo "CURRENT_CONTAINER is ${CURRENT_CONTAINER}"
            }
        }
    }
}
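This is not part of the answer above, but to connect it back to the original question, here is a rough, untested sketch of an extra stage that could be appended to the stages block, using CURRENT_CONTAINER to pick the container and falling back to 'default' (the stage name and mvn command are taken from the question):
stage('Check Version (Maven)') {
    steps {
        script {
            // fall back to 'default' when the lookup did not return a usable name
            def target = CURRENT_CONTAINER ?: 'default'
            container(target) {
                sh 'mvn --version'
            }
        }
    }
}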

Related

Passing variable to jenkins yaml podTemplate

I am using Jenkins with the kubernetes plugin to run my jobs, and I need a pipeline that:
builds a docker image
pushes it to the registry
uses that same image in the following steps to run the tests.
Container(image: A): build image B
Container(image: B): test image B
So I would like to use variables and substitute them inside the kubernetes podTemplate, like this:
pipeline {
    agent none
    stages {
        stage("Build image") {
            // some script that builds the image
            steps {
                script {
                    def image_name = "busybox"
                }
            }
        }
        stage('Run tests') {
            environment {
                image = "$image_name"
            }
            agent {
                kubernetes {
                    yaml """\
                        apiVersion: v1
                        kind: Pod
                        metadata:
                          labels:
                            some-label: some-label-value
                        spec:
                          containers:
                          - name: busybox
                            image: "${env.image}"
                            command:
                            - cat
                            tty: true
                        """.stripIndent()
                }
            }
            steps {
                container('busybox') {
                    sh 'echo "I am alive!!"'
                }
            }
        }
    }
}
but the variable is empty as I get:
[Normal][ci/test-10-g91lr-xtc20-s1ng1][Pulling] Pulling image "null"
[Warning][ci/test-10-g91lr-xtc20-s1ng1][Failed] Error: ErrImagePull
[Warning][ci/test-10-g91lr-xtc20-s1ng1][Failed] Failed to pull image "null": rpc error: code = Unknown desc = Error response from daemon: pull access denied for null, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Do you know how I can achieve a similar behaviour?
Thank you zett42 for your answer, I was able to achieve my objective with your suggestions.
Basically, the solution was to set a global environment variable in the build stage. I'm posting the full solution here to help others with the same problem:
pipeline {
    agent none
    stages {
        stage("Build image") {
            // some script that builds the image
            steps {
                script {
                    env.image_name = "busybox"
                }
            }
        }
        stage('Run tests') {
            agent {
                kubernetes {
                    yaml """\
                        apiVersion: v1
                        kind: Pod
                        metadata:
                          labels:
                            some-label: some-label-value
                        spec:
                          containers:
                          - name: busybox
                            image: "${env.image_name}"
                            command:
                            - cat
                            tty: true
                        """.stripIndent()
                }
            }
            steps {
                container('busybox') {
                    sh 'echo "I am alive!!"'
                }
            }
        }
    }
}
To understand it better, it was useful to read this article:
https://e.printstacktrace.blog/jenkins-pipeline-environment-variables-the-definitive-guide/

Jenkins writeYaml before pipeline starts and read in agent.kubernetes.yamlFile

I have a pipeline that needs a modified yaml file for different environments. For that I read the template, overwrite the parameter and save it again before the pipeline { ... } part starts.
node {
    stage('Adjust serviceAccountName to env') {
        checkout scm
        def valuesYaml = readYaml(file: 'build_nodes.yaml')
        valuesYaml.spec.serviceAccountName = 'user-test'
        sh 'rm -f build_nodes_new.yaml'
        writeYaml file: 'build_nodes_new.yaml', data: valuesYaml
    }
}
But when I then want to load the file, the problem is that it can't be found:
pipeline {
    environment {
        ENV_VAR=....
    }
    agent {
        kubernetes {
            label 'some_label'
            yamlFile 'build_nodes_new.yaml'
        }
    }
    stages {
        stage('Assume Role') { ... }
Throws an error:
java.io.FileNotFoundException: URL:
/rest/api/1.0/projects/PROJECT/repos/backend/browse/build_nodes_new.yaml?at=feature%2Fmy-branch-name&start=0&limit=500
Do I have to save the yaml file somewhere else? If I run ls -la, the file is displayed.
This is because you wrote the yaml file on a regular node and then tried to read it from a container in k8s. It's like they're on different machines; in fact, they very likely are. You could pass the contents as a string to the k8s node, or you could write it to a filesystem that the k8s pod can mount.
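A minimal sketch of the first option (passing the contents as a string rather than a file), assuming the same build_nodes.yaml template as in the question; POD_YAML is an illustrative variable name and the final stage body is just a placeholder:
def POD_YAML

node {
    stage('Adjust serviceAccountName to env') {
        checkout scm
        def valuesYaml = readYaml(file: 'build_nodes.yaml')
        valuesYaml.spec.serviceAccountName = 'user-test'
        sh 'rm -f build_nodes_new.yaml'
        writeYaml file: 'build_nodes_new.yaml', data: valuesYaml
        // read the rendered YAML back into a plain string while still on this node,
        // instead of relying on a workspace file the k8s agent cannot see
        POD_YAML = readFile 'build_nodes_new.yaml'
    }
}

pipeline {
    agent {
        kubernetes {
            label 'some_label'
            // 'yaml' takes the pod spec inline, so no file lookup happens on the agent
            yaml POD_YAML
        }
    }
    stages {
        stage('Assume Role') {
            steps {
                echo 'running inside the pod built from POD_YAML'
            }
        }
    }
}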
I had a similar issue and the below worked for me. Thanks #sam_ste
def get_yaml() {
    node {
        sh 'env'
        echo GERRIT_PATCHSET_REVISION
        echo "${GERRIT_PATCHSET_REVISION}"
        return """
apiVersion: v1
kind: Pod
metadata:
  labels:
    some-label: some-label-value
spec:
  containers:
  - name: simplekube
    image: dhub.net/jenkins/simplekube:${GERRIT_PATCHSET_REVISION}
    command:
    - cat
    tty: true
    securityContext:
      runAsUser: 0
"""
    }
}

Jenkins Kubernetes Plugin doesn't execute entrypoint of Docker image

I'm fairly new to the Jenkins Kubernetes Plugin and Kubernetes in general - https://github.com/jenkinsci/kubernetes-plugin
I want to use the plugin for the E2E test setup inside my CI.
Inside my Jenkinsfile I have a podTemplate which looks and is used as follows:
def podTemplate = """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: website
    image: ${WEBSITE_INTEGRATION_IMAGE_PATH}
    command:
    - cat
    tty: true
    ports:
    - containerPort: 3000
  - name: cypress
    resources:
      requests:
        memory: 2Gi
      limits:
        memory: 4Gi
    image: ${CYPRESS_IMAGE_PATH}
    command:
    - cat
    tty: true
"""
pipeline {
    agent {
        label 'docker'
    }
    stages {
        stage('Prepare') {
            steps {
                timeout(time: 15) {
                    script {
                        ci_machine = docker.build("${WEBSITE_IMAGE_PATH}")
                    }
                }
            }
        }
        stage('Build') {
            steps {
                timeout(time: 15) {
                    script {
                        ci_machine.inside("-u root") {
                            sh "yarn build"
                        }
                    }
                }
            }
            post {
                success {
                    timeout(time: 15) {
                        script {
                            docker.withRegistry("https://${REGISTRY}", REGISTRY_CREDENTIALS) {
                                integrationImage = docker.build("${WEBSITE_INTEGRATION_IMAGE_PATH}")
                                integrationImage.push()
                            }
                        }
                    }
                }
            }
        }
        stage('Browser Tests') {
            agent {
                kubernetes {
                    label "${KUBERNETES_LABEL}"
                    yaml podTemplate
                }
            }
            steps {
                timeout(time: 5, unit: 'MINUTES') {
                    container("website") {
                        sh "yarn start"
                    }
                    container("cypress") {
                        sh "yarn test:e2e"
                    }
                }
            }
        }
    }
}
In the Dockerfile that builds the image I added an ENTRYPOINT:
ENTRYPOINT ["bash", "./docker-entrypoint.sh"]
However, it seems that it's not executed by the kubernetes plugin.
Am I missing something?
As per the Define a Command and Arguments for a Container docs:
The command and arguments that you define in the configuration file override the default command and arguments provided by the container image.
This table summarizes the field names used by Docker and Kubernetes:
| Docker field name | K8s field name |
|------------------:|:--------------:|
| ENTRYPOINT | command |
| CMD | args |
Defining a command implies ignoring your Dockerfile ENTRYPOINT:
When you override the default ENTRYPOINT and CMD, these rules apply:
If you supply a command but no args for a Container, only the supplied command is used. The default ENTRYPOINT and the default CMD defined in the Docker image are ignored.
If you supply only args for a Container, the default ENTRYPOINT defined in the Docker image is run with the args that you supplied.
So you need to replace command in your pod template with args, which preserves your Dockerfile ENTRYPOINT (args acts as the equivalent of a Dockerfile CMD).
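For example, a sketch of how the website container from the pod template above might look after that change; whether cat is still needed as args depends on what docker-entrypoint.sh does with its arguments, so treat the values as illustrative:
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: website
    image: ${WEBSITE_INTEGRATION_IMAGE_PATH}
    # no "command:", so the image's ENTRYPOINT (bash ./docker-entrypoint.sh) runs;
    # "args" is passed to it, playing the role of a Dockerfile CMD. If the
    # entrypoint script ends with `exec "$@"`, cat then keeps the container
    # alive for the plugin's sh steps, as before.
    args:
    - cat
    tty: true
    ports:
    - containerPort: 3000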

How to run sidecar container in jenkins pipeline running inside kubernetes

I need to build and run some tests using a fresh database. I thought of using a sidecar container to host the DB.
I've installed Jenkins using helm inside my kubernetes cluster, following Google's own tutorial.
I can launch simple 'hello world' pipelines, which start on a new pod.
Next, I tried the Jenkins documentation for running an instance of mysql as a sidecar.
node {
    checkout scm
    docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw"') { c ->
        docker.image('mysql:5').inside("--link ${c.id}:db") {
            /* Wait until mysql service is up */
            sh 'while ! mysqladmin ping -hdb --silent; do sleep 1; done'
        }
        docker.image('centos:7').inside("--link ${c.id}:db") {
            /*
             * Run some tests which require MySQL, and assume that it is
             * available on the host name `db`
             */
            sh 'make check'
        }
    }
}
At first, it complained that docker was not found, and the internet suggested using a custom jenkins slave image with docker installed.
Now, if I run the pipeline, it just hangs in the loop waiting for the db to be ready.
Disclaimer: New to jenkins/docker/kubernetes
Eventually I found this method.
It relies on the kubernetes pipeline plugin, and allows running multiple containers in the agent pod while sharing resources.
Note that label should not be an existing label; otherwise, when the build runs, your podTemplate will be unable to find the containers you made. With this method you are creating a new set of containers in an entirely new pod.
def databaseUsername = 'app'
def databasePassword = 'app'
def databaseName = 'app'
def databaseHost = '127.0.0.1'
def jdbcUrl = "jdbc:mariadb://$databaseHost/$databaseName".toString()

podTemplate(
    label: label,
    containers: [
        containerTemplate(
            name: 'jdk',
            image: 'openjdk:8-jdk-alpine',
            ttyEnabled: true,
            command: 'cat',
            envVars: [
                envVar(key: 'JDBC_URL', value: jdbcUrl),
                envVar(key: 'JDBC_USERNAME', value: databaseUsername),
                envVar(key: 'JDBC_PASSWORD', value: databasePassword),
            ]
        ),
        containerTemplate(
            name: "mariadb",
            image: "mariadb",
            envVars: [
                envVar(key: 'MYSQL_DATABASE', value: databaseName),
                envVar(key: 'MYSQL_USER', value: databaseUsername),
                envVar(key: 'MYSQL_PASSWORD', value: databasePassword),
                envVar(key: 'MYSQL_ROOT_PASSWORD', value: databasePassword)
            ],
        )
    ]
) {
    node(label) {
        stage('Checkout') {
            checkout scm
        }
        stage('Waiting for environment to start') {
            container('mariadb') {
                sh """
                    while ! mysqladmin ping --user=$databaseUsername --password=$databasePassword -h$databaseHost --port=3306 --silent; do
                        sleep 1
                    done
                """
            }
        }
        stage('Migrate database') {
            container('jdk') {
                sh './gradlew flywayMigrate -i'
            }
        }
        stage('Run Tests') {
            container('jdk') {
                sh './gradlew test'
            }
        }
    }
}
You should be using the kubectl CLI (with manifest yaml files) to create those mysql and centos pods, services and other k8s objects, and then run the tests against the mysql database using the mysql service DNS name.
This is how we have tested new database deployments.
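As a rough illustration only (the manifest, names and namespace below are hypothetical, not from the answer): a mysql Pod plus Service could be applied with kubectl apply -f mysql.yaml, after which tests reach the database via the service DNS name mysql.ci.svc.cluster.local (or just mysql from inside the same namespace) on port 3306:
# hypothetical manifest: a throwaway mysql Pod and a Service exposing it by DNS
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  namespace: ci
  labels:
    app: mysql
spec:
  containers:
  - name: mysql
    image: mysql:5
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: my-secret-pw
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: ci
spec:
  selector:
    app: mysql
  ports:
  - port: 3306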

Jenkins Pipeline Kubernetes: Define pod yaml dynamically

I am trying to run a test docker image in kubernetes which will test my application. The application container and test container have the same version, which is incremented whenever the tests or the application change. How can I define the pod yaml dynamically for the kubernetes plugin so that I can get the version in the first stage (which runs outside the kubernetes cluster) and then update the pod yaml with the right version of the container?
APP_VERSION = ""
pod_yaml = """
apiVersion: v1
kind: Pod
metadata:
  labels:
    some-label: ci--my-app
spec:
  containers:
  - name: test-runner
    image: my.docker.registry/app-tester:${-> APP_VERSION}
    imagePullPolicy: Always
    command:
    - cat
    tty: true
"""
pipeline {
    agent none
    stages {
        stage('Build and Upload') {
            agent { node { label 'builder' } }
            steps {
                script {
                    APP_VERSION = sh(
                        script: "cat VERSION",
                        returnStdout: true
                    ).trim()
                }
            }
        }
        stage('Deploy and Test application') {
            agent {
                kubernetes {
                    label 'ci--data-visualizer-kb'
                    defaultContainer 'jnlp'
                    yaml pod_yaml
                }
            }
            steps {
                container('test-runner') {
                    sh "echo ${APP_VERSION}"
                    sh "ls -R /workspace"
                }
            }
        }
    }
}
The kubernetes block in the pipeline does not accept lazy evaluation of the string pod_yaml, which contains ${-> APP_VERSION}. Is there any workaround for this, or am I doing it totally wrong?
PS: I cannot use a scripted pipeline for other reasons, so I have to stick to the declarative pipeline.
It might be a bit odd, but if you're out of other options, you can use the jinja2 template engine and Python to dynamically generate the file you want.
Check it out - it's quite robust.
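Alternatively - not part of the answer above, but mirroring the env-variable approach from the 'Passing variable to jenkins yaml podTemplate' answer earlier on this page - the version could be stored in env.APP_VERSION in the first stage and interpolated eagerly in the second stage's agent yaml. A sketch under that assumption:
pipeline {
    agent none
    stages {
        stage('Build and Upload') {
            agent { node { label 'builder' } }
            steps {
                script {
                    // env.* is visible to later stages, unlike a plain script variable
                    env.APP_VERSION = sh(script: "cat VERSION", returnStdout: true).trim()
                }
            }
        }
        stage('Deploy and Test application') {
            agent {
                kubernetes {
                    defaultContainer 'jnlp'
                    yaml """\
                        apiVersion: v1
                        kind: Pod
                        spec:
                          containers:
                          - name: test-runner
                            image: my.docker.registry/app-tester:${env.APP_VERSION}
                            imagePullPolicy: Always
                            command:
                            - cat
                            tty: true
                        """.stripIndent()
                }
            }
            steps {
                container('test-runner') {
                    // the shell sees APP_VERSION as an environment variable
                    sh 'echo $APP_VERSION'
                }
            }
        }
    }
}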
