I am trying to set up a testing framework for my Kubernetes cluster using Jenkins and the Jenkins Kubernetes plugin.
I can get Jenkins to provision pods and run basic unit tests, but it is less clear how I can run tests that involve coordination between multiple pods.
Essentially I want to do something like this:
podTemplate(label: 'pod1', containers: [containerTemplate(...)]) {
    node('pod1') {
        container('container1') {
            // start service 1
        }
    }
}
podTemplate(label: 'pod2', containers: [containerTemplate(...)]) {
    node('pod2') {
        container('container2') {
            // start service 2
        }
    }
    stage('Run test') {
        node {
            sh 'run something that causes service 1 to query service 2'
        }
    }
}
I have two main problems:
Pod lifecycle:
As soon as the block after the podTemplate exits, the pods are terminated. Is there an accepted way to keep the pods alive until a specified condition has been met?
ContainerTemplate from docker image:
I am using a docker image to provision the containers inside each Kubernetes pod, but the files that should be inside those images do not seem to be visible/accessible inside the 'container' blocks, even though the environments and installed dependencies are correct for the repo. How do I actually get the service defined in the docker image to run in a Jenkins-provisioned pod?
It has been some time since I asked this question, and in the meantime I have learned some things that let me accomplish what I was asking about, though maybe not as neatly as I would have liked.
The solution to multi-service tests ended up being simply using a pod template that has the google cloud library, and assigning that worker a service-account credential plus a secret key so that it can run kubectl commands on the cluster.
Dockerfile for worker, replace "X"s with desired version:
FROM google/cloud-sdk:alpine
# Install some utilities.
RUN apk add --no-cache \
    git \
    curl \
    bash \
    openssl
# Used to install a custom version of kubectl.
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/vX.XX.X/bin/linux/amd64/kubectl &&\
    chmod +x ./kubectl &&\
    mv ./kubectl /usr/local/bin/kubectl
# Helm to manage deployments.
RUN curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh &&\
    chmod 700 get_helm.sh && ./get_helm.sh --version vX.XX.X
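Before the pipeline below can use this image, it has to be built and pushed somewhere the cluster can pull from. A minimal sketch, reusing the placeholder image name from the pipeline:
docker build -t your-docker-repo-here .
docker push your-docker-repo-here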
Then in the Groovy pipeline:
pipeline {
  agent {
    kubernetes {
      label 'kubectl_helm'
      defaultContainer 'jnlp'
      serviceAccount 'helm'
      yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: gcloud
    image: your-docker-repo-here
    command:
    - cat
    tty: true
"""
    }
  }
  environment {
    GOOGLE_APPLICATION_CREDENTIALS = credentials('google-creds')
  }
  stages {
    stage('Do something') {
      steps {
        container('gcloud') {
          sh 'kubectl apply -f somefile.yaml'
          sh 'helm install something somerepo/somechart'
        }
      }
    }
  }
}
Now that I have access to both helm and kubectl commands, I can bring pods and services up and down at will. It still doesn't solve the problem of using those pods' internal context to access files, but at least it gives me a way to run integration tests.
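For example, an integration-test stage along these lines can bring a second service up and exercise it from the first. This is only a sketch: the file, deployment, service, and endpoint names are all placeholders, and it assumes curl is available inside the service 1 image.
stage('Integration test') {
  steps {
    container('gcloud') {
      // bring up the second service and wait for it to become ready
      sh 'kubectl apply -f service2.yaml'
      sh 'kubectl rollout status deployment/service2 --timeout=120s'
      // have service 1 query service 2 over the cluster network
      sh 'kubectl exec deploy/service1 -- curl -sf http://service2:8080/health'
      // tear it down again
      sh 'kubectl delete -f service2.yaml'
    }
  }
}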
NOTE: For this to work properly, you will need a Kubernetes service account matching the serviceAccount name used in the pod definition, plus credentials stored in the Jenkins credentials store. For the helm commands to work, you will need Tiller installed on your Kubernetes cluster (this applies to Helm 2). Also, do not change the name of the env key GOOGLE_APPLICATION_CREDENTIALS, as the gsutil tools will be looking for that environment variable.
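For reference, a sketch of creating such a service account with kubectl (the names match the pipeline above; cluster-admin is used here only for brevity and should be scoped down in production):
kubectl create serviceaccount helm --namespace default
kubectl create clusterrolebinding helm --clusterrole=cluster-admin --serviceaccount=default:helm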
Related
I am trying to deploy a container using Helm from a Jenkins pipeline. I have installed the Kubernetes plugin for Jenkins and provided it the URL of my locally running Kubernetes cluster and the config file in credentials. When I do 'Test connection', it shows 'Connected to Kubernetes 1.16+'.
But when I run the helm install command from the pipeline, it gives this error:
Error: Kubernetes cluster unreachable: the server could not find the requested resource
Note: I am able to do all operations using the CLI, and also from the Jenkins pipeline by using withCredentials and passing the credential file variable name (created in the Jenkins credentials store). I just want to do this without wrapping it inside withCredentials.
Both Jenkins and Kubernetes are running separately on Windows 10. Please help.
Helm uses the kubectl config file. I'm using a step like this:
steps {
  container('helm') {
    withCredentials([file(credentialsId: 'project-credentials', variable: 'PULL_KEYFILE')]) {
      sh """
        gcloud auth activate-service-account --key-file=${PULL_KEYFILE} --project project-name
        gcloud container clusters get-credentials cluster-name --zone us-east1
        kubectl create namespace ${NAMESPACE} --dry-run -o yaml | kubectl apply -f -
        helm upgrade --install release-name .
      """
    }
  }
}
I'm trying to automate deployments using the official ArgoCD docker image (https://hub.docker.com/r/argoproj/argocd/dockerfile)
I've created a declarative Jenkins pipeline using the Kubernetes plugin for the agents, defining the pod in YAML; the container definitions look like this:
pipeline {
  agent {
    kubernetes {
      yaml """
kind: Pod
metadata:
  name: agent
spec:
  containers:
  - name: maven
    image: maven:slim
    command:
    - cat
    tty: true
    volumeMounts:
    - name: jenkins-maven-cache
      mountPath: /root/.m2/repository
  - name: argocd
    image: argoproj/argocd:latest
    command:
    - cat
    tty: true
...
I'm trying to run commands inside that container; that step in the pipeline looks like this:
stage('Build') {
  steps {
    container('maven') {
      sh 'echo testing' // this works just fine
    }
  }
}
stage('Deploy') {
  steps {
    container('argocd') {
      sh "echo testing" // this does not work
      // more deploy scripts here, once sh works
    }
  }
}
So I have two containers, one where the sh script works just fine and another where it doesn't. The sh steps in the "argocd" container just hang for 5 minutes, after which Jenkins kills them; the exit message is:
process apparently never started in /home/jenkins/agent/workspace/job-name#tmp/durable-46cefcae (running Jenkins temporarily with -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true might make the problem clearer)
I can't even echo a simple string in this particular container.
It works fine in other containers, such as the official Maven image from Docker that I use to build the Spring Boot application. I can also run commands directly in the argocd container manually from the command line with docker exec, but Jenkins just won't do it in the pipeline for some reason.
What could it be?
I am running the latest version (1.33) of the durable task plugin.
Update:
Turns out that the image for Argo CD (the continuous deployment tool), argoproj/argocd:latest, does not include any commands other than argocd, so the issue was with the container image I tried to use and not with Jenkins itself. My solution was to install the Argo CD CLI into a custom docker container and use that instead of the official one.
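For reference, a minimal sketch of such a custom image; the base image choice is an assumption, vX.Y.Z is a placeholder version, and the CLI binary comes from the project's GitHub release artifacts:
FROM debian:stable-slim
# curl and CA certificates are needed to fetch the CLI binary
RUN apt-get update && apt-get install -y --no-install-recommends curl ca-certificates \
    && rm -rf /var/lib/apt/lists/*
# Install the Argo CD CLI; replace vX.Y.Z with the desired release
RUN curl -sSL -o /usr/local/bin/argocd \
      https://github.com/argoproj/argo-cd/releases/download/vX.Y.Z/argocd-linux-amd64 \
    && chmod +x /usr/local/bin/argocd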
I've just encountered a similar issue with a custom docker image created by myself.
It turns out I was using USER nobody in the Dockerfile of that image, and somehow this left the Jenkins agent pod unable to run cat or any other shell command from my pipeline script. Running the specific container as the root user worked for me.
So in your case I would add a securityContext with runAsUser: 0, like below:
...
- name: argocd
  image: argoproj/argocd:latest
  command:
  - cat
  tty: true
  securityContext:
    runAsUser: 0
...
Kubernetes reference: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
If the issue is Jenkins related, here are some things that may help to solve the problem:
Issues with the working directory: if you updated Jenkins from some older version, the workdir used to be /home/jenkins, while in recent versions it should be /home/jenkins/agent.
Or, if you are running it on Windows, the path should start with C:\dir and not with /dir.
You can try a fresh clean install with apt-get --purge remove jenkins and then apt-get install jenkins.
This is not your case, as you run the latest version of the durable task plugin; but for other people's reference, versions prior to 1.28-1.30 caused the same issue.
If your Jenkins is clean, the issue should be investigated in a different way; it seems that the script is not returning an exit code to the sh step and/or is being executed in a different shell.
I would try a small sh file placed in the working directory of the container:
#!/bin/bash
echo "testing"
echo $?
and try to run it with source my_script.sh
or with bash my_script.sh
$? is the exit code of the latest bash operation; printing it will make sure that your script terminated correctly. The source command runs the script in the same shell that is calling it, so its shell variables are accessible; the bash command runs it in a separate subshell instead.
Currently I am trying to add a version number or build number to the Docker image I deploy on my Kubernetes cluster. Previously I was working only with the :latest tag, but with the latest tag I ran into problems pulling from the Docker Hub image registry. So now I want to use the build number with my docker image, like <image-name>:{build-number}.
Application Structure
In Kubernetes, I am using a deployment and a service. I define my image repository in the deployment file like the following:
containers:
- name: test-kube-deployment-container
  image: samplekubernetes020/testimage:latest
  ports:
  - name: http
    containerPort: 8085
    protocol: TCP
Here, instead of the latest tag, I want to use the build number with my image in the deployment YAML.
Can I use an environment variable to hold the random build number, accessing the image like <image-name>:${buildnumber}?
And if I use an environment variable to provide that number, how can I generate a random number into an environment variable?
Updates On Image Version Implementation
My modified Jenkinsfile contains a step like the following to assign the image version number to the image, but I am still not getting the updated result after pushing changes to the repository.
I created a step like the following in the Jenkinsfile:
stage('imagebuild') {
  steps {
    sh 'docker build -f /var/lib/jenkins/workspace/jpipeline/pipeline/Dockerfile -t spacestudymilletech010/spacestudykubernetes /var/lib/jenkins/workspace/jpipeline/pipeline'
    sh 'docker login --username=my-username --password=my-password'
    sh "docker tag spacestudymilletech010/spacestudykubernetes:latest spacestudymilletech010/spacestudykubernetes:${VERSION}"
    sh 'docker push spacestudymilletech010/spacestudykubernetes:latest'
  }
}
And my deployment YAML file contains the following:
containers:
- name: test-kube-deployment-container
  image: spacestudymilletech010/spacestudykubernetes:latest
  ports:
  - name: http
    containerPort: 8085
    protocol: TCP
Confusions
NB: When I check the Docker Hub repository, it shows the latest push status every time.
So my confusions are:
Is there any problem with pulling the latest image in my deployment.yaml file?
Or is the problem in how I am tagging the image on the machine where I build and push it?
The standard way, or at least the way that has worked for most of us, is to create versioned or tagged images. For example:
samplekubernetes020/testimage:1
samplekubernetes020/testimage:2
samplekubernetes020/testimage:3
...
...
Now I will try to answer your actual question, which is: how do I update the image in my deployment when the image tag changes?
Enter Solution
When you compile and build a new image with the latest version of the code, tag it with an incremental unique version; this tag can be a build number or anything else unique.
Then push this tagged image to the docker registry.
Once the image is uploaded, you can use kubectl or the Kubernetes API to update the deployment with the latest container image:
kubectl set image deployment/my-deployment test-kube-deployment-container=samplekubernetes020/testimage:1 --record
The above set of steps generally takes place in your CI pipeline, where you store the image version (or image:version) in an environment variable.
Update (post comment)
Since you are using Jenkins, you can get the current build number, commit id, and many other variables in the Jenkinsfile itself, as Jenkins injects these values at build runtime. This works for me; just a reference:
environment {
  NAME = "myapp"
  VERSION = "${env.BUILD_ID}-${env.GIT_COMMIT}"
  IMAGE = "${NAME}:${VERSION}"
}
stages {
  stage('Build') {
    steps {
      echo "Running ${VERSION} on ${env.JENKINS_URL}"
      git branch: "${BRANCH}", .....
      echo "for branch ${env.BRANCH_NAME}"
      sh "docker build -t ${NAME} ."
      sh "docker tag ${NAME}:latest ${IMAGE_REPO}/${NAME}:${VERSION}"
    }
  }
}
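A push stage could be added alongside Build inside the stages block; a sketch, assuming IMAGE_REPO is defined in the environment block or as a job parameter, and that docker login has already happened on the agent:
stage('Push') {
  steps {
    // pushes the uniquely versioned tag built in the Build stage
    sh "docker push ${IMAGE_REPO}/${NAME}:${VERSION}"
  }
}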
This Jenkins pipeline approach worked for me.
I use the Jenkins build number as a tag for the docker image and push it to Docker Hub, then apply the YAML file to the k8s cluster and update the image in the deployment with the same tag.
A sample pipeline script snippet is here:
stage('Build Docker Image') {
  sh 'docker build -t {dockerId}/{projectName}:${BUILD_NUMBER} .'
}
stage('Push Docker Image') {
  withCredentials([string(credentialsId: 'DOCKER_HUB_PASSWORD', variable: 'DOCKER_HUB_PASSWORD')]) {
    sh "docker login -u {dockerId} -p ${DOCKER_HUB_PASSWORD}"
  }
  sh 'docker push {dockerId}/{projectName}:${BUILD_NUMBER}'
}
stage('Deploy To Kubernetes Cluster') {
  sh 'kubectl apply -f {yaml file name}.yaml'
  sh 'kubectl set image deployments/{deploymentName} {container name given in deployment yaml file}={dockerId}/{projectName}:${BUILD_NUMBER}'
}
I am using Docker BuildKit to build a docker image for each microservice.
./build.sh
export DOCKER_BUILDKIT=1
# ....
docker build -t ....
# ...
This works on my machine with Docker (18.09.2).
However, it does not work with Jenkins, which I set up as follows:
EKS is provisioned with a Terraform module:
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "5.0.0"
  # ....
}
Jenkins is deployed on EKS (v1.12.10-eks-ffbd9, docker://18.6.1) via this Helm Chart.
Jenkins plugins as defined in Values of the helm release:
kubernetes:1.18.1
workflow-job:2.33
workflow-aggregator:2.6
credentials-binding:1.19
git:3.11.0
blueocean:1.19.0
bitbucket-oauth:0.9
The Jenkins pipeline is declarative and uses a pod template where the container image is docker:18-dind and the container name is dind.
This is my Jenkinsfile:
pipeline {
  agent {
    kubernetes {
      defaultContainer 'jnlp'
      yamlFile 'jenkins-pod.yaml'
    }
  }
  stages {
    stage('Build Backends') {
      steps {
        container('dind') {
          sh 'chmod +x *sh'
          sh './build.sh -t=dev'
        }
        containerLog 'dind'
      }
    }
  }
}
When Jenkins executes this pipeline, it shows this error:
buildkit not supported by daemon
I am not sure which software I should upgrade to make docker buildkit work, and to which version:
the Terraform EKS module, which is now 5.0.0?
Or
the docker:18-dind image, which provides the environment of the ephemeral Jenkins slaves?
Or
the Jenkins plugin kubernetes:1.18.1?
As per the docker-ce sources, there are two requirements for the isSessionSupported check to pass and a buildkit session to start:
dockerCli.ServerInfo().HasExperimental
versions.GreaterThanOrEqualTo(dockerCli.Client().ClientVersion(), "1.31")
So:
check the version of your docker-cli library,
and whether the HasExperimental option is enabled.
To check whether it has experimental support, run:
docker version -f '{{.Server.Experimental}}'
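If experimental mode turns out to be disabled, it can be switched on daemon-side, for example via /etc/docker/daemon.json (a sketch; the daemon must be restarted afterwards):
{
  "experimental": true
}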
Docker BuildKit support came out of experimental in 18.09, so you may need to upgrade docker inside EKS:
EKS (v1.12.10-eks-ffbd9, docker://18.6.1)
Or perhaps you have an old dind image (the 18-dind tag should be new enough, but an older version of this tag pointing to 18.06 or 18.03 would not be). You can try 18.09-dind and 19-dind, which should both work if the build is actually happening inside dind.
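Since jenkins-pod.yaml is not shown in the question, here is a sketch of what it could look like with a newer dind image (the container name matches the pipeline; dind requires a privileged security context):
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: dind
    image: docker:19-dind
    securityContext:
      privileged: true  # dind needs a privileged container to run dockerd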
I have created a Dockerfile for a Node JNLP slave which can be used with the Kubernetes plugin of Jenkins. I am extending from the official image jenkinsci/jnlp-slave:
FROM jenkinsci/jnlp-slave
USER root
MAINTAINER Aryak Sengupta <aryak.sengupta@hyland.com>
LABEL Description="Image for NodeJS slave"
COPY cert.crt /usr/local/share/ca-certificates
RUN update-ca-certificates
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash \
&& apt-get install -y nodejs
ENTRYPOINT ["jenkins-slave"]
I have this image saved inside my pod template (in the K8s plugin configuration). Now, when I try to run a build on this slave, I find that two containers get spawned inside the pod (a screenshot proves the same).
My Pod template looks like this:
And my Kubernetes configuration looks like this:
Now if I do a simple docker ps, I find that two containers started up (why?):
Now, inside the Jenkins job configuration, whatever I add to the build step gets executed in the first container.
Even if I use the official Node container inside my PodTemplate, the result is still the same:
I have tried to print the Node version inside my Jenkins job, and the output is "Node not found". Also, to verify my hunch, I did a docker exec into the second container and tried to print the Node version there; in that case, it works absolutely fine.
This is what my build step looks like:
So, to boil it down, I have two major questions:
Why do two separate containers (one for JNLP and one with all my custom changes) start up whenever I fire up the Jenkins job?
Why is my job running in the first container, where Node isn't installed? How do I achieve the desired behaviour of building my project with Node using this configuration?
What am I missing?
P.S. - Please do let me know if the question turns out to be unclear in some parts.
Edit: I understand that this can be done using the Jenkins Pipeline plugin, where I can explicitly mention the container name, but I need to do this from the Jenkins UI. Is there any way to specify the container name along with the slave name, which I am already doing like this:
The Jenkins kubernetes plugin will always create a JNLP slave container inside the pod that is created to perform the build. The podTemplate is where you define the other containers you need in order to perform your build.
In this case it seems you would want to add a Node container to your podTemplate. In your build you would then have the build happen inside the named Node container.
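For example, a Node container could be declared like this in the podTemplate (the image tag is an assumption; command: 'cat' with a TTY keeps the container alive for build steps):
containerTemplate(name: 'node', image: 'node:8-alpine', ttyEnabled: true, command: 'cat')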
You shouldn't really care where the Pod runs. All you need to do is make sure you add a container that has the resources you need (like Node in this case). You can add as many containers as you want to a podTemplate. I have some with 10 or more containers for steps like PMD, Maven, curl, etc.
I use a Jenkinsfile with pipelines.
podTemplate(cloud: 'k8s-houston', label: 'api-hire-build',
  containers: [
    containerTemplate(name: 'maven', image: 'maven:3-jdk-8-alpine', ttyEnabled: true, command: 'cat'),
    containerTemplate(name: 'pmd', image: 'stash.company.com:8443/pmd:pmd-bin-5.5.4', alwaysPullImage: false, ttyEnabled: true, command: 'cat')
  ],
  volumes: [
    persistentVolumeClaim(claimName: 'jenkins-pv-claim', mountPath: '/mvn/.m2nrepo')
  ]
)
{
  node('api-hire-build') {
    stage('Maven compile') {
      container('maven') {
        sh "mvn -Dmaven.repo.local=/mvn/.m2nrepo/repository clean compile"
      }
    }
    stage('PMD SCA (docker)') {
      container('pmd') {
        sh 'run.sh pmd -d "$PWD"/src -f xml -reportfile "$PWD"/target/pmd.xml -failOnViolation false -rulesets java-basic,java-design,java-unusedcode -language java'
        sh 'run.sh pmd -d "$PWD"/src -f html -reportfile "$PWD"/target/pmdreport.html -failOnViolation false -rulesets java-basic,java-design,java-unusedcode -language java'
        sh 'run.sh cpd --files "$PWD"/src --minimum-tokens 100 --failOnViolation false --language java --format xml > "$PWD"/target/duplicate-code.xml'
      }
      archive 'target/duplicate-code.xml'
      step([$class: 'PmdPublisher', pattern: 'target/pmd.xml'])
    }
  }
}
Alright, so I've figured out the solution; mhang li's answer was the clue, but it didn't explain it one bit.
Basically, you need to take the official Jenkins slave image found here and modify it to include the changes for your slave as well. Essentially, you are clubbing the JNLP and slave containers into one and building a combined image.
The modification will look like this (picking up from the Dockerfile linked):
FROM jenkins/slave:3.27-1
MAINTAINER Oleg Nenashev <o.v.nenashev@gmail.com>
LABEL Description="This is a base image, which allows connecting Jenkins agents via JNLP protocols" Vendor="Jenkins project" Version="3.27"
# Make sure you also include the jenkins-slave wrapper script copied here
COPY jenkins-slave /usr/local/bin/jenkins-slave
# INCLUDE CODE FOR YOUR SLAVE. E.g. install node, java, whatever
ENTRYPOINT ["jenkins-slave"]
Now, name the slave container jnlp (reason: bug). So now you will have one container spawning, which will be your JNLP + slave combined. All in all, your Kubernetes plugin pod template will look something like this (notice the custom URL of the docker image I have put in; also, make sure you don't include a Command To Run unless you need one).
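Since the screenshot is not reproduced here, a rough pipeline equivalent of that pod template looks like this (the image URL is a placeholder; the args value is the Kubernetes plugin's standard JNLP invocation):
podTemplate(label: 'custom-jnlp', containers: [
    containerTemplate(name: 'jnlp',
        image: 'your-registry/custom-jnlp-slave:latest',
        args: '${computer.jnlpmac} ${computer.name}')
]) {
    node('custom-jnlp') {
        // runs inside the combined JNLP + slave container
        sh 'node --version'
    }
}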
Done! Your builds should now run within this container and should function exactly as you programmed them in the Dockerfile!
Set Container Template -> Name to jnlp.
https://issues.jenkins-ci.org/browse/JENKINS-40847