Using the Jenkins kubernetes-plugin with a declarative Jenkinsfile, how do you run stages in parallel when each stage uses the same kind of container? (e.g. SonarQube analysis and unit testing both run in a Maven container.)
I have tried the following in Jenkinsfile:
def label = "my-build-${UUID.randomUUID().toString()}"
podTemplate(label: label, containers: [
    containerTemplate(name: 'maven', image: 'maven:3.5.3-jdk-8-alpine', command: 'cat', ttyEnabled: true),
    containerTemplate(name: 'maven2', image: 'maven:3.5.3-jdk-8-alpine', command: 'cat', ttyEnabled: true),
  ],
  volumes: [
    hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock'),
    hostPathVolume(mountPath: '/root/.m2/repository', hostPath: '/root/.m2'),
    hostPathVolume(mountPath: '/tmp', hostPath: '/tmp')
  ]) {
  node(label) {
    def myRepo = checkout scm
    def gitCommit = myRepo.GIT_COMMIT
    def gitBranch = myRepo.GIT_BRANCH
    def didFail = false
    def throwable = null
    try {
      stage('Tests In Parallel') {
        parallel StaticCodeAnalysis: {
          container('maven') {
            withSonarQubeEnv('sonarqube-cluster') {
              // requires SonarQube Scanner for Maven 3.2+
              sh "mvn help:evaluate -Dexpression=settings.localRepository"
              sh "mvn clean"
              sh "mvn compiler:help jar:help resources:help surefire:help clean:help install:help deploy:help site:help dependency:help javadoc:help spring-boot:help org.jacoco:jacoco-maven-plugin:prepare-agent"
              sh "mvn org.jacoco:jacoco-maven-plugin:prepare-agent package sonar:sonar -Dsonar.host.url=$SONAR_HOST_URL -Dsonar.login=$SONAR_AUTH_TOKEN -Dsonar.exclusions=srcgen/**/* -Dmaven.test.skip=true"
            }
          }
        }, unitTests: {
          container('maven2') {
            sh "mvn clean package"
            junit allowEmptyResults: true, testResults: '**/surefire-reports/*.xml'
          }
        },
        failFast: true
      }
    } catch (e) {
      didFail = true
      throwable = e
    } finally {
      sh "rm -rf /tmp/${label}"
    }
    if (didFail) {
      echo "Something went wrong."
      error throwable
    }
  }
}
Everything seems to work, and in the Blue Ocean UI I can see the two stages running correctly at the same time. However, when one of the parallel stages finishes, the other fails with java.lang.NoClassDefFoundError for classes that have definitely already been used and were present while the stage was running.
It almost looks like all the agents that are spun up use the same workspace directory, i.e. /home/jenkins/workspace/job_name/.
The Maven commands create the folder /home/jenkins/workspace/job_name/target/classes, but when the failure occurs it looks as if the container that was being used has lost all the content of that classes folder.
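If the shared-workspace suspicion is right, one way to rule it out is to give each parallel branch its own directory, so the two Maven builds cannot overwrite each other's target/ output. A minimal sketch using the standard dir step (the directory names and the shortened Maven goals are made up for illustration, not taken from the pipeline above):
parallel StaticCodeAnalysis: {
  container('maven') {
    // the SonarQube analysis works in its own sub-directory of the workspace
    dir('sonar-build') {
      checkout scm
      sh "mvn clean verify sonar:sonar"
    }
  }
}, unitTests: {
  container('maven2') {
    // the unit tests get a separate checkout and therefore their own target/ folder
    dir('unit-test-build') {
      checkout scm
      sh "mvn clean package"
      junit allowEmptyResults: true, testResults: '**/surefire-reports/*.xml'
    }
  }
},
failFast: true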
I am configuring jenkins + jenkins agents in kubernetes using this guide:
https://akomljen.com/set-up-a-jenkins-ci-cd-pipeline-with-kubernetes/
which gives the below example of a jenkins pipeline using multiple/different containers for different stages:
def label = "worker-${UUID.randomUUID().toString()}"
podTemplate(label: label, containers: [
    containerTemplate(name: 'gradle', image: 'gradle:4.5.1-jdk9', command: 'cat', ttyEnabled: true),
    containerTemplate(name: 'docker', image: 'docker', command: 'cat', ttyEnabled: true),
    containerTemplate(name: 'kubectl', image: 'lachlanevenson/k8s-kubectl:v1.8.8', command: 'cat', ttyEnabled: true),
    containerTemplate(name: 'helm', image: 'lachlanevenson/k8s-helm:latest', command: 'cat', ttyEnabled: true)
  ],
  volumes: [
    hostPathVolume(mountPath: '/home/gradle/.gradle', hostPath: '/tmp/jenkins/.gradle'),
    hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock')
  ]) {
  node(label) {
    def myRepo = checkout scm
    def gitCommit = myRepo.GIT_COMMIT
    def gitBranch = myRepo.GIT_BRANCH
    def shortGitCommit = "${gitCommit[0..10]}"
    def previousGitCommit = sh(script: "git rev-parse ${gitCommit}~", returnStdout: true)
    stage('Test') {
      try {
        container('gradle') {
          sh """
            pwd
            echo "GIT_BRANCH=${gitBranch}" >> /etc/environment
            echo "GIT_COMMIT=${gitCommit}" >> /etc/environment
            gradle test
          """
        }
      }
      catch (exc) {
        println "Failed to test - ${currentBuild.fullDisplayName}"
        throw(exc)
      }
    }
    stage('Build') {
      container('gradle') {
        sh "gradle build"
      }
    }
    stage('Create Docker images') {
      container('docker') {
        withCredentials([[$class: 'UsernamePasswordMultiBinding',
                          credentialsId: 'dockerhub',
                          usernameVariable: 'DOCKER_HUB_USER',
                          passwordVariable: 'DOCKER_HUB_PASSWORD']]) {
          sh """
            docker login -u ${DOCKER_HUB_USER} -p ${DOCKER_HUB_PASSWORD}
            docker build -t namespace/my-image:${gitCommit} .
            docker push namespace/my-image:${gitCommit}
          """
        }
      }
    }
    stage('Run kubectl') {
      container('kubectl') {
        sh "kubectl get pods"
      }
    }
    stage('Run helm') {
      container('helm') {
        sh "helm list"
      }
    }
  }
}
But why would you bother with this level of granularity? E.g. why not just have one container that has everything you need (jnlp, helm, kubectl, Java, etc.) and use that for all your stages?
I know that from a purist perspective it's good to keep containers/images as small as possible, but if that's the only argument I would rather have one container and not bother my end users (developers writing Jenkinsfiles) with picking the right container. They should not have to worry about things at this level; they just need to be able to get an agent, and that's it.
Or am I missing some functional reason for this multiple-container setup?
Using one single image to handle every step is functionally feasible, but it adds an operational burden.
You won't always find an image that fulfils all your needs, i.e. the desired tools at the desired versions. Most likely, you are going to have to build one yourself.
To do that you need to build Docker images for different architectures (amd64/arm64) and maintain a Docker registry to store the built image, a process that gets more time-consuming as the image gets more complicated. More importantly, it is very likely that some of your tools favour a particular Linux distribution, so combining everything into one image can be difficult and does not always work.
Imagine you need a newer version of one tool in one of your pipeline's steps: with an all-in-one image you have to repeat the whole process of building and uploading the image. With separate containers you only need to change the image version in your pipeline, which minimises operational effort, as illustrated below.
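For example (a hypothetical snippet, not from the question; the image tags are only for illustration), bumping a single tool is a one-line change to the relevant containerTemplate, with no image rebuild or registry push involved:
podTemplate(label: label, containers: [
    // only the helm container changes; gradle, docker and kubectl stay exactly as they were
    containerTemplate(name: 'gradle', image: 'gradle:4.5.1-jdk9', command: 'cat', ttyEnabled: true),
    containerTemplate(name: 'helm', image: 'lachlanevenson/k8s-helm:v2.14.3', command: 'cat', ttyEnabled: true) // was :latest
  ]) {
  // ... stages unchanged ...
}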
I have a problem passing some variables from a Jenkins pipeline to an Ansible playbook.
My variables in pipeline:
pipeline {
  agent {
    label agentLabel
  }
  parameters {
    string(
      defaultValue: 'build/promo-api.zip',
      description: 'name and path of the artifact',
      name: 'ARTIFACT_ZIP')
    string(
      defaultValue: 'qa-promoapi-mbo.example.com',
      description: 'name and path of the vhost QA',
      name: 'QA_NGINX_VHOST')
  }
  stages {
    [...]
    stage ('Deploy') {
      steps {
        script {
          if (env.DEPLOY_ENV == 'staging') {
            echo 'Run LUX-staging build'
            def ENV_SERVER = '192.168.1.30'
            def UML_SUFFIX = 'stage-mon'
            sh 'ansible-playbook nginx-depl.yml --limit 127.0.0.1'
            echo 'Run STAGE SG deploy'
            ENV_SERVER = 'stage-sg-mbo-api.example.com'
            UML_SUFFIX = 'stage-sg'
            sh 'ansible-playbook nginx-depl.yml --limit 127.0.0.1'
          } else {
            echo 'Run QA build'
            def ENV_SERVER = '192.168.1.28'
            def UML_SUFFIX = 'qa'
            sh "ansible-playbook nginx-depl.yml --limit 127.0.0.1"
          }
        }
      }
    }
  }
}
Using this debug task I'm able to see, in Ansible's scope, the variables defined in the parameters block (ARTIFACT_ZIP and QA_NGINX_VHOST):
tasks:
  - name: "Ansible | List all known variables and facts"
    debug:
      var: hostvars[inventory_hostname]
The problem is that I cannot pass the variables from the script block: ENV_SERVER and UML_SUFFIX (these variables are unique to each server and must be changed accordingly).
In playbook vars are defined like this:
vars:
  params_ENV_SERVER: "{{ lookup('env', 'ENV_SERVER') }}"
  params_ARTIFACT_ZIP: "{{ lookup('env', 'ARTIFACT_ZIP') }}"
  params_STG_NGINX_VHOST: "{{ lookup('env', 'STG_NGINX_VHOST') }}"
  params_UML_SUFFIX: "{{ lookup('env', 'UML_SUFFIX') }}"
How do I define the variables correctly so they are passed to the Ansible playbook from the Jenkins pipeline script block?
To make the environment variables available to the shell step that executes Ansible, you can use the withEnv step as follows:
[...]
script {
  if (env.DEPLOY_ENV == 'staging') {
    echo 'Run LUX-staging build'
    withEnv(["ENV_SERVER=192.168.1.30", "UML_SUFFIX=stage-mon"]) {
      sh 'ansible-playbook nginx-depl.yml --limit 127.0.0.1'
    }
[...]
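Applied to the whole Deploy step from the question, the same pattern would look roughly like this (a sketch only; the hosts and suffixes are the ones from the question's script block):
script {
  if (env.DEPLOY_ENV == 'staging') {
    echo 'Run LUX-staging build'
    withEnv(["ENV_SERVER=192.168.1.30", "UML_SUFFIX=stage-mon"]) {
      sh 'ansible-playbook nginx-depl.yml --limit 127.0.0.1'
    }
    echo 'Run STAGE SG deploy'
    withEnv(["ENV_SERVER=stage-sg-mbo-api.example.com", "UML_SUFFIX=stage-sg"]) {
      sh 'ansible-playbook nginx-depl.yml --limit 127.0.0.1'
    }
  } else {
    echo 'Run QA build'
    withEnv(["ENV_SERVER=192.168.1.28", "UML_SUFFIX=qa"]) {
      sh 'ansible-playbook nginx-depl.yml --limit 127.0.0.1'
    }
  }
}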
I'm fairly new into Jenkins Kubernetes Plugin and Kubernetes in general - https://github.com/jenkinsci/kubernetes-plugin
I want to use the plugin for E2E tests setup inside my CI.
Inside my Jenkinsfile I have a podTemplate which is defined and used as follows:
def podTemplate = """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: website
    image: ${WEBSITE_INTEGRATION_IMAGE_PATH}
    command:
    - cat
    tty: true
    ports:
    - containerPort: 3000
  - name: cypress
    resources:
      requests:
        memory: 2Gi
      limits:
        memory: 4Gi
    image: ${CYPRESS_IMAGE_PATH}
    command:
    - cat
    tty: true
"""
pipeline {
  agent {
    label 'docker'
  }
  stages {
    stage('Prepare') {
      steps {
        timeout(time: 15) {
          script {
            ci_machine = docker.build("${WEBSITE_IMAGE_PATH}")
          }
        }
      }
    }
    stage('Build') {
      steps {
        timeout(time: 15) {
          script {
            ci_machine.inside("-u root") {
              sh "yarn build"
            }
          }
        }
      }
      post {
        success {
          timeout(time: 15) {
            script {
              docker.withRegistry("https://${REGISTRY}", REGISTRY_CREDENTIALS) {
                integrationImage = docker.build("${WEBSITE_INTEGRATION_IMAGE_PATH}")
                integrationImage.push()
              }
            }
          }
        }
      }
    }
    stage('Browser Tests') {
      agent {
        kubernetes {
          label "${KUBERNETES_LABEL}"
          yaml podTemplate
        }
      }
      steps {
        timeout(time: 5, unit: 'MINUTES') {
          container("website") {
            sh "yarn start"
          }
          container("cypress") {
            sh "yarn test:e2e"
          }
        }
      }
    }
  }
}
In the Dockerfile that builds the image, I added an ENTRYPOINT:
ENTRYPOINT ["bash", "./docker-entrypoint.sh"]
However, it seems that it's not executed by the Kubernetes plugin.
Am I missing something?
As per Define a Command and Arguments for a Container docs:
The command and arguments that you define in the configuration file
override the default command and arguments provided by the container
image.
This table summarizes the field names used by Docker and Kubernetes:
| Docker field name | K8s field name |
|------------------:|:--------------:|
| ENTRYPOINT | command |
| CMD | args |
Defining a command implies ignoring your Dockerfile ENTRYPOINT:
When you override the default ENTRYPOINT and CMD, these rules apply:
If you supply a command but no args for a Container, only the supplied command is used. The default ENTRYPOINT and the default CMD defined in the Docker image are ignored.
If you supply only args for a Container, the default ENTRYPOINT
defined in the Docker image is run with the args that you supplied.
So you need to replace command in your pod template with args, which will preserve your Dockerfile ENTRYPOINT (the args then act like a Dockerfile CMD).
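Applied to the cypress container from the question, that would look roughly like this (a sketch; whether cat is the right argument depends on what docker-entrypoint.sh does with it):
  - name: cypress
    image: ${CYPRESS_IMAGE_PATH}
    resources:
      requests:
        memory: 2Gi
      limits:
        memory: 4Gi
    tty: true
    # no command: here, so the image ENTRYPOINT (bash ./docker-entrypoint.sh) is preserved
    # and "cat" is passed to it as an argument, like a CMD; the entrypoint script needs to
    # exec "$@" (or otherwise keep running) so the agent container stays alive for the sh steps
    args:
    - cat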
I need to build and run some tests using a fresh database. I thought of using a sidecar container to host the DB.
I've installed jenkins using helm inside my kubernetes cluster using google's own tutorial.
I can launch simple 'hello world' pipelines which will start on a new pod.
Next, I tried the Jenkins documentation for running an instance of MySQL as a sidecar.
node {
  checkout scm
  docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw"') { c ->
    docker.image('mysql:5').inside("--link ${c.id}:db") {
      /* Wait until mysql service is up */
      sh 'while ! mysqladmin ping -hdb --silent; do sleep 1; done'
    }
    docker.image('centos:7').inside("--link ${c.id}:db") {
      /*
       * Run some tests which require MySQL, and assume that it is
       * available on the host name `db`
       */
      sh 'make check'
    }
  }
}
At first, it complained that docker was not found, and the internet suggested using a custom jenkins slave image with docker installed.
Now, if I run the pipeline, it just hangs in the loop waiting for the db to be ready.
Disclaimer: New to jenkins/docker/kubernetes
Eventually I found this method.
It relies on the Kubernetes pipeline plugin, and allows running multiple containers in the agent pod while sharing resources.
Note that label should not be an existing label; otherwise, when the build runs, your podTemplate will be unable to find the containers you made. With this method you are creating a new set of containers in an entirely new pod.
// any label that is not already in use; a random suffix guarantees a fresh pod
def label = "db-test-${UUID.randomUUID().toString()}"
def databaseUsername = 'app'
def databasePassword = 'app'
def databaseName = 'app'
def databaseHost = '127.0.0.1'
def jdbcUrl = "jdbc:mariadb://$databaseHost/$databaseName".toString()

podTemplate(
  label: label,
  containers: [
    containerTemplate(
      name: 'jdk',
      image: 'openjdk:8-jdk-alpine',
      ttyEnabled: true,
      command: 'cat',
      envVars: [
        envVar(key: 'JDBC_URL', value: jdbcUrl),
        envVar(key: 'JDBC_USERNAME', value: databaseUsername),
        envVar(key: 'JDBC_PASSWORD', value: databasePassword),
      ]
    ),
    containerTemplate(
      name: "mariadb",
      image: "mariadb",
      envVars: [
        envVar(key: 'MYSQL_DATABASE', value: databaseName),
        envVar(key: 'MYSQL_USER', value: databaseUsername),
        envVar(key: 'MYSQL_PASSWORD', value: databasePassword),
        envVar(key: 'MYSQL_ROOT_PASSWORD', value: databasePassword)
      ],
    )
  ]
) {
  node(label) {
    stage('Checkout') {
      checkout scm
    }
    stage('Waiting for environment to start') {
      container('mariadb') {
        sh """
          while ! mysqladmin ping --user=$databaseUsername --password=$databasePassword -h$databaseHost --port=3306 --silent; do
            sleep 1
          done
        """
      }
    }
    stage('Migrate database') {
      container('jdk') {
        sh './gradlew flywayMigrate -i'
      }
    }
    stage('Run Tests') {
      container('jdk') {
        sh './gradlew test'
      }
    }
  }
}
You should be using the kubectl CLI (with manifest YAML files) to create those MySQL and CentOS pods, services and other Kubernetes objects, and run the tests against the MySQL database using the MySQL service's DNS name.
This is how we have tested new database deployments; a rough sketch of that approach is below.
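For illustration only (the manifest path, service name and Gradle property below are hypothetical, not from this answer), a pipeline stage following that approach could look like:
stage('Provision test database') {
  container('kubectl') {
    // k8s/mysql.yaml would define a Deployment (or Pod) plus a Service named "mysql"
    sh 'kubectl apply -f k8s/mysql.yaml'
    // wait until the deployment reports ready before running the tests
    sh 'kubectl rollout status deployment/mysql --timeout=120s'
  }
}
stage('Run tests') {
  container('jdk') {
    // the tests reach the database through the service DNS name, e.g. mysql.<namespace>.svc
    sh './gradlew test -DdbHost=mysql'
  }
}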
I am trying to run a test Docker image in Kubernetes which will test my application. The application container and test container have the same version, which is incremented whenever the tests or the application change. How can I define the pod YAML dynamically for the Kubernetes plugin, so that I can get the version in the first stage (which runs outside the Kubernetes cluster) and then update the pod YAML with the right version of the container?
APP_VERSION = ""

pod_yaml = """
apiVersion: v1
kind: Pod
metadata:
  labels:
    some-label: ci--my-app
spec:
  containers:
  - name: test-runner
    image: my.docker.registry/app-tester:${-> APP_VERSION}
    imagePullPolicy: Always
    command:
    - cat
    tty: true
"""

pipeline {
  agent none
  stages {
    stage('Build and Upload') {
      agent { node { label 'builder' } }
      steps {
        script {
          APP_VERSION = sh(
            script: "cat VERSION",
            returnStdout: true
          ).trim()
        }
      }
    }
    stage('Deploy and Test application') {
      agent {
        kubernetes {
          label 'ci--data-visualizer-kb'
          defaultContainer 'jnlp'
          yaml pod_yaml
        }
      }
      steps {
        container('test-runner') {
          sh "echo ${APP_VERSION}"
          sh "ls -R /workspace"
        }
      }
    }
  }
}
The kubernetes block in the pipeline does not accept lazy evaluation of the pod_yaml string that contains ${-> APP_VERSION}. Is there any workaround for this, or am I doing it totally wrong?
PS: I cannot use the scripted pipeline for other reasons. So, I have to stick to the declarative pipeline.
It might be a bit odd, but if you're out of other options, you can use the Jinja2 template engine and Python to dynamically generate the file you want.
Check it out; it's quite robust.
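Sketching what that suggestion could look like from the Jenkinsfile side (the template name pod.yaml.j2 and the assumption that python3 with the jinja2 package is available on the builder agent are mine, not the answer's):
stage('Build and Upload') {
  agent { node { label 'builder' } }
  steps {
    script {
      APP_VERSION = sh(script: 'cat VERSION', returnStdout: true).trim()
      // pod.yaml.j2 is a copy of the pod definition with
      //   image: my.docker.registry/app-tester:{{ app_version }}
      sh """
        python3 -c "import sys, jinja2; print(jinja2.Template(open('pod.yaml.j2').read()).render(app_version=sys.argv[1]))" ${APP_VERSION} > pod.yaml
      """
      // the rendered manifest can then be read back (readFile 'pod.yaml') and used
      // wherever a concrete pod definition is needed
    }
  }
}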