I need to build and run some tests using a fresh database, and I thought of using a sidecar container to host the DB.
I've installed Jenkins using Helm inside my Kubernetes cluster, following Google's own tutorial.
I can launch simple 'hello world' pipelines, which start on a new pod.
Next, I tried the example from the Jenkins documentation for running an instance of MySQL as a sidecar.
node {
    checkout scm
    docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw"') { c ->
        docker.image('mysql:5').inside("--link ${c.id}:db") {
            /* Wait until mysql service is up */
            sh 'while ! mysqladmin ping -hdb --silent; do sleep 1; done'
        }
        docker.image('centos:7').inside("--link ${c.id}:db") {
            /*
             * Run some tests which require MySQL, and assume that it is
             * available on the host name `db`
             */
            sh 'make check'
        }
    }
}
At first it complained that docker was not found, and the internet suggested using a custom Jenkins slave image with Docker installed.
Now, if I run the pipeline, it just hangs in the loop waiting for the DB to be ready.
Disclaimer: New to jenkins/docker/kubernetes
Eventually I found this method.
It relies on the Kubernetes Pipeline plugin and allows running multiple containers in the agent pod while sharing resources.
Note that label should not be an existing label; otherwise, when the build runs, your podTemplate will be unable to find the containers you defined. With this method you are creating a new set of containers in an entirely new pod.
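For example, a unique label can be generated at the top of the script (a minimal sketch; the variable name and prefix are only illustrative):

def label = "db-test-${UUID.randomUUID().toString()}"
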
def databaseUsername = 'app'
def databasePassword = 'app'
def databaseName = 'app'
def databaseHost = '127.0.0.1'
def jdbcUrl = "jdbc:mariadb://$databaseHost/$databaseName".toString()

podTemplate(
    label: label,
    containers: [
        containerTemplate(
            name: 'jdk',
            image: 'openjdk:8-jdk-alpine',
            ttyEnabled: true,
            command: 'cat',
            envVars: [
                envVar(key: 'JDBC_URL', value: jdbcUrl),
                envVar(key: 'JDBC_USERNAME', value: databaseUsername),
                envVar(key: 'JDBC_PASSWORD', value: databasePassword),
            ]
        ),
        containerTemplate(
            name: "mariadb",
            image: "mariadb",
            envVars: [
                envVar(key: 'MYSQL_DATABASE', value: databaseName),
                envVar(key: 'MYSQL_USER', value: databaseUsername),
                envVar(key: 'MYSQL_PASSWORD', value: databasePassword),
                envVar(key: 'MYSQL_ROOT_PASSWORD', value: databasePassword)
            ],
        )
    ]
) {
    node(label) {
        stage('Checkout') {
            checkout scm
        }
        stage('Waiting for environment to start') {
            container('mariadb') {
                sh """
                    while ! mysqladmin ping --user=$databaseUsername --password=$databasePassword -h$databaseHost --port=3306 --silent; do
                        sleep 1
                    done
                """
            }
        }
        stage('Migrate database') {
            container('jdk') {
                sh './gradlew flywayMigrate -i'
            }
        }
        stage('Run Tests') {
            container('jdk') {
                sh './gradlew test'
            }
        }
    }
}
You should use the kubectl CLI (with YAML manifest files) to create those MySQL and CentOS pods, services and other k8s objects, and run the tests against the MySQL database using the MySQL service DNS name.
This is how we have tested new database deployments.
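A minimal sketch of that approach, assuming the agent pod contains a kubectl container and that k8s/mysql.yaml defines a MySQL Deployment plus a Service named mysql (the file name, labels, service name and db.host property are illustrative, not from the thread):

stage('Provision test database') {
    container('kubectl') {
        // apply the manifests and wait until the database pod is ready
        sh 'kubectl apply -f k8s/mysql.yaml'
        sh 'kubectl wait --for=condition=Ready pod -l app=mysql --timeout=120s'
    }
    container('jdk') {
        // tests reach the database through the service DNS name, e.g. mysql.<namespace>.svc.cluster.local
        sh './gradlew test -Ddb.host=mysql'
    }
}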
Our pipeline by default tries to use a container that matches the name of the current stage.
If this container doesn't exist, the container 'default' is used.
This functionality works, but when the container matching the stage name doesn't exist, a ProtocolException occurs which we cannot catch, because it is thrown by a thread that is out of our control.
Is there a way to check if a container actually exists when using the Kubernetes plugin for Jenkins to prevent this exception from appearing? It seems like a basic function but I haven't been able to find anything like this online.
I can't show the actual code but here's a pipeline-script example extract that would trigger this exception:
node(POD_LABEL) {
    stage('Check Version (Maven)') {
        container('containerThatDoesNotExist') {
            try {
                sh 'mvn --version'
            } catch (Exception e) {
                // catch Exception
            }
        }
    }
}
java.net.ProtocolException: Expected HTTP 101 response but was '400 Bad Request'
at okhttp3.internal.ws.RealWebSocket.checkResponse(RealWebSocket.java:229)
at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:196)
at okhttp3.RealCall$AsyncCall.execute(RealCall.java:203)
at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
You can run a preliminary stage that finds the currently running containers by executing a kubectl command against the API server. The tricky point is that kubectl does not exist on the worker, so in that case:
Pull a kubectl image onto the worker.
Add a stage that gets the running containers; use a label or timestamp to pick the desired one.
Use the right container, 'default' or rather 'some-container'.
Example:
pipeline {
    environment {
        CURRENT_CONTAINER = "default"
    }
    agent {
        kubernetes {
            defaultContainer 'jnlp'
            yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: some-app
    image: XXX/some-app
    imagePullPolicy: IfNotPresent
    tty: true
  - name: kubectl
    image: gcr.io/cloud-builders/kubectl
    imagePullPolicy: IfNotPresent
    command:
    - cat
    tty: true
'''
        }
    }
    stages {
        stage('Set Container Name') {
            steps {
                container('kubectl') {
                    withCredentials([
                        string(credentialsId: 'minikube', variable: 'api_token')
                    ]) {
                        script {
                            CURRENT_CONTAINER = sh(
                                script: 'kubectl get pods -n jenkins -l job-name=pi -o jsonpath="{.items[*].spec.containers[0].name}"',
                                returnStdout: true
                            ).trim()
                            echo "Exec container ${CURRENT_CONTAINER}"
                        }
                    }
                }
            }
        }
        stage('Echo Container Name') {
            steps {
                echo "CURRENT_CONTAINER is ${CURRENT_CONTAINER}"
            }
        }
    }
}
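A follow-up stage, added inside the stages block above, could then run its steps in the detected container. This is only a sketch, assuming CURRENT_CONTAINER now holds the name of a container that exists in the agent pod; the mvn command is illustrative:

        stage('Run In Detected Container') {
            steps {
                container(CURRENT_CONTAINER) {
                    sh 'mvn --version'   // any step that should run in the detected container
                }
            }
        }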
I am configuring Jenkins plus Jenkins agents in Kubernetes using this guide:
https://akomljen.com/set-up-a-jenkins-ci-cd-pipeline-with-kubernetes/
which gives the example below of a Jenkins pipeline that uses multiple/different containers for different stages:
def label = "worker-${UUID.randomUUID().toString()}"
podTemplate(label: label, containers: [
    containerTemplate(name: 'gradle', image: 'gradle:4.5.1-jdk9', command: 'cat', ttyEnabled: true),
    containerTemplate(name: 'docker', image: 'docker', command: 'cat', ttyEnabled: true),
    containerTemplate(name: 'kubectl', image: 'lachlanevenson/k8s-kubectl:v1.8.8', command: 'cat', ttyEnabled: true),
    containerTemplate(name: 'helm', image: 'lachlanevenson/k8s-helm:latest', command: 'cat', ttyEnabled: true)
],
volumes: [
    hostPathVolume(mountPath: '/home/gradle/.gradle', hostPath: '/tmp/jenkins/.gradle'),
    hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock')
]) {
    node(label) {
        def myRepo = checkout scm
        def gitCommit = myRepo.GIT_COMMIT
        def gitBranch = myRepo.GIT_BRANCH
        def shortGitCommit = "${gitCommit[0..10]}"
        def previousGitCommit = sh(script: "git rev-parse ${gitCommit}~", returnStdout: true)

        stage('Test') {
            try {
                container('gradle') {
                    sh """
                        pwd
                        echo "GIT_BRANCH=${gitBranch}" >> /etc/environment
                        echo "GIT_COMMIT=${gitCommit}" >> /etc/environment
                        gradle test
                    """
                }
            }
            catch (exc) {
                println "Failed to test - ${currentBuild.fullDisplayName}"
                throw(exc)
            }
        }
        stage('Build') {
            container('gradle') {
                sh "gradle build"
            }
        }
        stage('Create Docker images') {
            container('docker') {
                withCredentials([[$class: 'UsernamePasswordMultiBinding',
                                  credentialsId: 'dockerhub',
                                  usernameVariable: 'DOCKER_HUB_USER',
                                  passwordVariable: 'DOCKER_HUB_PASSWORD']]) {
                    sh """
                        docker login -u ${DOCKER_HUB_USER} -p ${DOCKER_HUB_PASSWORD}
                        docker build -t namespace/my-image:${gitCommit} .
                        docker push namespace/my-image:${gitCommit}
                    """
                }
            }
        }
        stage('Run kubectl') {
            container('kubectl') {
                sh "kubectl get pods"
            }
        }
        stage('Run helm') {
            container('helm') {
                sh "helm list"
            }
        }
    }
}
But why would you bother with this level of granularity? E.g. why not just have one container that has everything you need (jnlp, helm, kubectl, Java, etc.) and use it for all your stages?
I know that from a purist perspective it's good to keep containers/images as small as possible, but if that's the only argument, I would rather have one container and not bother my end users (the developers writing Jenkinsfiles) with picking the right container. They should not have to worry about things at this level; they just need to be able to get an agent, and that's it.
Or am I missing some functional reason for this multiple-container setup?
Using one single image to handle the whole process is functionally feasible, but it adds a burden to your operations.
We don't always find an image that fulfills all our needs, i.e. the desired tools at the desired versions. Most likely, you are going to have to build one.
To achieve this, you need to build Docker images for different architectures (amd/arm) and maintain/use a Docker registry to store the built image; this process gets more time-consuming as your image becomes more complicated. More importantly, it is very likely that some of your tools favour a particular Linux distro, and you will find it difficult to make everything work in a single image.
Imagine you need a newer version of one tool in one of your pipeline's steps: with a single image you would have to repeat the whole process of building and uploading it. With separate containers, you only need to change the image version in your pipeline, which minimises your operational effort.
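Taking the kubectl entry from the pod template in the question as an example, such an upgrade comes down to editing one image tag (the newer tag shown here is purely illustrative):

containerTemplate(name: 'kubectl', image: 'lachlanevenson/k8s-kubectl:v1.8.8', command: 'cat', ttyEnabled: true),
// after the upgrade (the tag below is illustrative):
containerTemplate(name: 'kubectl', image: 'lachlanevenson/k8s-kubectl:v1.10.3', command: 'cat', ttyEnabled: true),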
Using the kubernetes-plugin with a declarative Jenkinsfile, how do you run stages in parallel when each stage uses the same kind of container (e.g. SonarQube analysis and unit tests both run in a Maven container)?
I have tried the following in my Jenkinsfile:
def label = "my-build-${UUID.randomUUID().toString()}"
podTemplate(label: label, containers: [
    containerTemplate(name: 'maven', image: 'maven:3.5.3-jdk-8-alpine', command: 'cat', ttyEnabled: true),
    containerTemplate(name: 'maven2', image: 'maven:3.5.3-jdk-8-alpine', command: 'cat', ttyEnabled: true),
],
volumes: [
    hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock'),
    hostPathVolume(mountPath: '/root/.m2/repository', hostPath: '/root/.m2'),
    hostPathVolume(mountPath: '/tmp', hostPath: '/tmp')
]) {
    node(label) {
        def myRepo = checkout scm
        def gitCommit = myRepo.GIT_COMMIT
        def gitBranch = myRepo.GIT_BRANCH
        def didFail = false
        def throwable = null
        try {
            stage('Tests In Parallel') {
                parallel StaticCodeAnalysis: {
                    container('maven') {
                        withSonarQubeEnv('sonarqube-cluster') {
                            // requires SonarQube Scanner for Maven 3.2+
                            sh "mvn help:evaluate -Dexpression=settings.localRepository"
                            sh "mvn clean"
                            sh "mvn compiler:help jar:help resources:help surefire:help clean:help install:help deploy:help site:help dependency:help javadoc:help spring-boot:help org.jacoco:jacoco-maven-plugin:prepare-agent"
                            sh "mvn org.jacoco:jacoco-maven-plugin:prepare-agent package sonar:sonar -Dsonar.host.url=$SONAR_HOST_URL -Dsonar.login=$SONAR_AUTH_TOKEN -Dsonar.exclusions=srcgen/**/* -Dmaven.test.skip=true"
                        }
                    }
                }, unitTests: {
                    container('maven2') {
                        sh "mvn clean package"
                        junit allowEmptyResults: true, testResults: '**/surefire-reports/*.xml'
                    }
                },
                failFast: true
            }
        } catch (e) {
            didFail = true
            throwable = e
        } finally {
            sh "rm -rf /tmp/${label}"
        }
        if (didFail) {
            echo "Something went wrong."
            error throwable
        }
    }
}
All seems to work, and in the Blue Ocean UI I can see the two stages running correctly at the same time. However, when one of the stages inside the parallel stage finishes, the other fails with 'java.lang.NoClassDefFoundError' for classes that have definitely already been used and were there while the stage was running.
It almost looks like all agents that are spun up use the same workspace directory, i.e. /home/jenkins/workspace/job_name/.
The Maven commands create the folder /home/jenkins/workspace/job_name/target/classes, but when the failure happens, it looks as if the container that was being used lost all the content of that classes folder.
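If that is the case (all containers in one agent pod mount the same workspace volume), one possible workaround is to give each parallel branch its own working directory, so that one branch's mvn clean cannot remove the other branch's target folder. This is only a sketch under that assumption, with abbreviated Maven goals, not something taken from the thread:

parallel StaticCodeAnalysis: {
    container('maven') {
        dir('analysis') {      // separate checkout/working copy for this branch
            checkout scm
            sh 'mvn package sonar:sonar -Dmaven.test.skip=true'
        }
    }
}, unitTests: {
    container('maven2') {
        dir('tests') {         // separate checkout/working copy for this branch
            checkout scm
            sh 'mvn clean package'
        }
    }
},
failFast: true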
I am trying to run a test Docker image in Kubernetes which will test my application. The application container and the test container have the same version, which is incremented whenever the tests or the application change. How can I define the pod YAML dynamically for the Kubernetes plugin, so that I can get the version in the first stage (which runs outside the Kubernetes cluster) and then update the pod YAML with the right version of the container?
APP_VERSION = ""
pod_yaml = """
apiVersion: v1
kind: Pod
metadata:
  labels:
    some-label: ci--my-app
spec:
  containers:
  - name: test-runner
    image: my.docker.registry/app-tester:${-> APP_VERSION}
    imagePullPolicy: Always
    command:
    - cat
    tty: true
"""
pipeline {
    agent none
    stages {
        stage('Build and Upload') {
            agent { node { label 'builder' } }
            steps {
                script {
                    APP_VERSION = sh(
                        script: "cat VERSION",
                        returnStdout: true
                    ).trim()
                }
            }
        }
        stage('Deploy and Test application') {
            agent {
                kubernetes {
                    label 'ci--data-visualizer-kb'
                    defaultContainer 'jnlp'
                    yaml pod_yaml
                }
            }
            steps {
                container('test-runner') {
                    sh "echo ${APP_VERSION}"
                    sh "ls -R /workspace"
                }
            }
        }
    }
}
The kubernetes block in the pipeline does not accept lazy evaluation of the string pod_yaml, which contains ${-> APP_VERSION}. Is there any workaround for this, or am I doing it totally wrong?
PS: I cannot use a scripted pipeline for other reasons, so I have to stick to the declarative pipeline.
It might be a bit odd, but if you're out of other options, you can use the Jinja2 template engine and Python to dynamically generate the file you want.
Check it out - it's quite robust.
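A minimal sketch of that idea, assuming the repository keeps a pod.yaml.j2 template and that the jinja2-cli package can be installed with pip (the file names and the wiring into the agent are assumptions, not part of the answer):

stage('Render pod template') {
    agent { node { label 'builder' } }
    steps {
        sh '''
            pip install --user jinja2-cli                      # provides the jinja2 command; may need ~/.local/bin on PATH
            jinja2 pod.yaml.j2 -D app_version=$(cat VERSION) > pod.yaml
        '''
    }
}

The rendered pod.yaml could then be passed to the kubernetes agent, for example through its yamlFile option, depending on how the workspace is shared between stages.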
I couldn't find anything this specific anywhere on the internet, so I kindly ask for your help with this one :)
Context
I have defined a podTemplate with a few containers, by using the containerTemplate methods:
ubuntu:trusty (14.04 LTS)
postgres:9.6
and finally, wurstmeister/kafka:latest
Doing some Groovy coding in the Pipeline, I install several dependencies into my ubuntu:trusty container, such as the latest Git, Golang 1.9, etc., and I also check out my project from GitHub.
After all those dependencies are dealt with, I manage to compile, run migrations (which means Postgres is up and running and my app is connected to it), and spin up my app just fine, until it complains that Kafka is not running because it couldn't connect to any broker.
Debugging sessions
After some debug sessions I have ps aux'ed each and every container to make sure all the services I needed were running in their respective containers, such as:
container(postgres) {
    sh 'ps aux'              // Shows Postgres, as expected
}
container(linux) {
    sh 'ps aux | grep post'  // Does not show Postgres, as expected
    sh 'ps aux | grep kafka' // Does not show Kafka, as expected
}
container(kafka) {
    sh 'ps aux'              // Does NOT show any Kafka running
}
I have also set the KAFKA_ADVERTISED_HOST_NAME variable to 127.0.0.1, as explained in the image docs, without success, using the following code:
containerTemplate(
    name: kafka,
    image: 'wurstmeister/kafka:latest',
    ttyEnabled: true,
    command: 'cat',
    envVars: [
        envVar(key: 'KAFKA_ADVERTISED_HOST_NAME', value: '127.0.0.1'),
        envVar(key: 'KAFKA_AUTO_CREATE_TOPICS_ENABLE', value: 'true'),
    ]
)
Questions
The image's documentation (https://hub.docker.com/r/wurstmeister/kafka/) is explicit about starting a Kafka cluster with docker-compose up -d.
1) How do I actually do that with this Kubernetes plugin + Docker + Groovy + Pipeline combo in Jenkins?
2) Do I actually need to do that? The Postgres image docs (https://hub.docker.com/_/postgres/) also mention running the instance with docker run, but I didn't need to do that at all, which makes me think that containerTemplate is probably doing it automatically. So why is it not doing the same for the Kafka container?
Thanks!
So... the problem is with this image and the way Kubernetes works with it.
Kafka does not start because you override the image's Docker CMD with command: 'cat', which causes start-kafka.sh to never run.
Because of the above, I suggest using a different image. The template below worked for me.
containerTemplate(
    name: 'kafka',
    image: 'quay.io/jamftest/now-kafka-all-in-one:1.1.0.B',
    resourceRequestMemory: '500Mi',
    ttyEnabled: true,
    ports: [
        portMapping(name: 'zookeeper', containerPort: 2181, hostPort: 2181),
        portMapping(name: 'kafka', containerPort: 9092, hostPort: 9092)
    ],
    command: 'supervisord -n',
    envVars: [
        containerEnvVar(key: 'ADVERTISED_HOST', value: 'localhost')
    ]
),