docker buildkit not supported by daemon in AWS EKS kubernetes cluster

I am using Docker BuildKit to build a Docker image for each microservice.
./build.sh
export DOCKER_BUILDKIT=1
# ....
docker build -t ....
# ...
This works on my machine with docker (18.09.2).
However, it does not work with Jenkins, which I set up as follows:
EKS is provisioned with a Terraform module
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "5.0.0"
# ....
}
Jenkins is deployed on EKS (v1.12.10-eks-ffbd9, docker://18.6.1) via this Helm chart.
Jenkins plugins, as defined in the values of the Helm release:
kubernetes:1.18.1
workflow-job:2.33
workflow-aggregator:2.6
credentials-binding:1.19
git:3.11.0
blueocean:1.19.0
bitbucket-oauth:0.9
Jenkins Pipeline is declarative, and it uses a Pod template where the container image is docker:18-dind and the container name is dind.
This is my Jenkinsfile:
pipeline {
  agent {
    kubernetes {
      defaultContainer 'jnlp'
      yamlFile 'jenkins-pod.yaml'
    }
  }
  stages {
    stage('Build Backends') {
      steps {
        container('dind') {
          sh 'chmod +x *sh'
          sh './build.sh -t=dev'
        }
        containerLog 'dind'
      }
    }
  }
}
When Jenkins executes this pipeline, it shows this error:
buildkit not supported by daemon
I am not sure which software I should upgrade to make Docker BuildKit work, and to which version:
the Terraform EKS module, which is now 5.0.0?
Or
the docker:18-dind image, which provides the environment for the ephemeral Jenkins agents?
Or
the Jenkins plugin kubernetes:1.18.1?

As per the docker-ce sources, there are two requirements for the isSessionSupported check to pass so that a BuildKit session can be started:
dockerCli.ServerInfo().HasExperimental
versions.GreaterThanOrEqualTo(dockerCli.Client().ClientVersion(), "1.31")
So:
check the version of your docker CLI library, and
check whether the HasExperimental option is enabled on the daemon.
To check whether the daemon has experimental support, run:
docker version -f '{{.Server.Experimental}}'
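If that prints false, the daemon is running without experimental features (note that from 18.09 onward BuildKit no longer requires experimental mode). A minimal sketch of checking both requirements and turning experimental mode on, assuming you can edit the daemon configuration where dockerd runs; the daemon.json path below is the Docker default, and the daemon must be restarted for the change to take effect:

# Both API versions must be >= 1.31 for a BuildKit session:
docker version -f 'client API: {{.Client.APIVersion}}  server API: {{.Server.APIVersion}}'

# Enable experimental features on the daemon, then restart dockerd
# (for a dind setup, restarting the dind container restarts the daemon):
echo '{ "experimental": true }' > /etc/docker/daemon.json

# Verify the setting was picked up:
docker version -f '{{.Server.Experimental}}'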

Docker BuildKit support came out of experimental in 18.09, so you may need to upgrade Docker inside EKS:
EKS (v1.12.10-eks-ffbd9, docker://18.6.1)
Or perhaps you have an old dind image (the 18-dind tag should be new enough, but an older pull of this tag pointing to 18.06 or 18.03 would not be). You can try 18.09-dind or 19-dind, which should both work if the build is actually happening inside dind.
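One quick way to confirm which daemon the build actually talks to is to print both versions from a shell step in the dind container; a small check, assuming the same container('dind') block as in the Jenkinsfile above:

# If the server version shows 18.06.x here, the build is hitting the node's
# Docker daemon (too old for BuildKit) instead of the dind daemon.
docker version -f 'client: {{.Client.Version}}  server: {{.Server.Version}}'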

Related

Connect kubernetes with Jenkins pipeline

I am trying to deploy a container using Helm from a Jenkins pipeline. I have installed the Kubernetes plugin for Jenkins and provided it with the URL of my locally running Kubernetes cluster and the config file in credentials. When I do 'Test connection', it shows 'Connected to Kubernetes 1.16+'.
But when I run the helm install command from the pipeline, it gives this error:
Error: Kubernetes cluster unreachable: the server could not find the requested resource
Note: I am able to do all operations using the CLI, and also from the Jenkins pipeline by using withCredentials and passing the credentials file variable name (created in Jenkins credentials). I just want to do this without wrapping it inside withCredentials.
Both Jenkins and Kubernetes are running separately on Windows 10. Please help.
Helm uses the kubectl config file. I'm using a step like this:
steps {
  container('helm') {
    withCredentials([file(credentialsId: 'project-credentials', variable: 'PULL_KEYFILE')]) {
      sh """
        gcloud auth activate-service-account --key-file=${PULL_KEYFILE} --project project-name
        gcloud container clusters get-credentials cluster-name --zone us-east1
        kubectl create namespace ${NAMESPACE} --dry-run -o yaml | kubectl apply -f -
      """
      sh 'helm upgrade --install release-name .'
    }
  }
}
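If the goal is simply to avoid wrapping every call in withCredentials, note that both kubectl and helm honor the KUBECONFIG environment variable, so pointing it at a config file that is already available to the agent also works; a minimal sketch, with a hypothetical path:

# Point kubectl and helm at an existing config file for the rest of this shell session
export KUBECONFIG=/path/to/kubeconfig   # hypothetical location of your config file
kubectl cluster-info                    # sanity check that the cluster is reachable
helm upgrade --install release-name .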

How to copy code from a Jenkins server to its Docker agent

I have a Jenkins server that I plan to use for CI for a cross-platform C++ project. To simplify the process, I have two Docker images: one builds the project for the Android NDK, and the other builds for Ubuntu. For example, I am using a Jenkinsfile as shown below for the Android build:
pipeline {
  agent {
    docker {
      image 'image4android:latest'
    }
  }
  stages {
    stage('Build') {
      steps {
        sh 'cd /path/to/project && cmake --build .'
      }
    }
  }
}
I want to run linting and formatting on the Jenkins master, as that is the same for all platforms. Then I need to copy the linted/formatted code to each container for building. How can I use things like docker cp on the Jenkins master to copy the project code to the android/ubuntu container?
You can set volumes in Jenkins -> Manage Jenkins -> Manage Nodes and Clouds -> Configure Clouds -> Docker Agent templates -> (Add new template or use existing) -> Container settings -> Volumes.
You can also use Jenkins global variables such as WORKSPACE, though I'm not sure whether Jenkins expands variables in the container settings.
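For illustration, an entry in that Volumes field uses the same host-path:container-path form as docker run -v; a hypothetical mapping that shares the master's workspace directory with the agent container could look like this:

/var/jenkins_home/workspace/my-project:/home/project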

Jenkins sh script hangs when run in specific container

I'm trying to automate deployments using the official ArgoCD docker image (https://hub.docker.com/r/argoproj/argocd/dockerfile)
I've created a declarative Jenkins pipeline using the Kubernetes plugin for the agents and define the pod using YAML; the container definition looks like this:
pipeline {
  agent {
    kubernetes {
      yaml """
kind: Pod
metadata:
  name: agent
spec:
  containers:
  - name: maven
    image: maven:slim
    command:
    - cat
    tty: true
    volumeMounts:
    - name: jenkins-maven-cache
      mountPath: /root/.m2/repository
  - name: argocd
    image: argoproj/argocd:latest
    command:
    - cat
    tty: true
...
I'm trying to run commands inside that container; that step in the pipeline looks like this:
stage('Build') {
  steps {
    container('maven') {
      sh 'echo testing' // this works just fine
    }
  }
}
stage('Deploy') {
  steps {
    container('argocd') {
      sh "echo testing" // this does not work
      // more deploy scripts here, once sh works
    }
  }
}
So I have two containers: one where the sh script works just fine and another where it doesn't. The sh script in the "argocd" container just hangs for 5 minutes and then Jenkins kills it; the exit message is:
process apparently never started in /home/jenkins/agent/workspace/job-name#tmp/durable-46cefcae (running Jenkins temporarily with -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true might make the problem clearer)
I can't even echo a simple string in this particular container.
It works fine in other containers, like the official Maven image from Docker Hub that I use to build the Spring Boot application. I can also run commands directly in the argocd container manually from the command line with docker exec, but Jenkins just won't run them in the pipeline for some reason.
What could it be?
I am running the latest version (1.33) of the durable task plugin.
Update:
It turns out that the image for Argo CD (the continuous deployment tool), argoproj/argocd:latest, does not include any commands other than argocd, so the issue was with the container image I tried to use and not with Jenkins itself. My solution was to install the Argo CD CLI into a custom Docker image and use that instead of the official one.
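A minimal sketch of the commands such a custom image could run at build time, assuming a base image that already has a shell and curl available (the URL is the generic latest-release download link published by the Argo CD project):

# Download the standalone Argo CD CLI binary and make it executable
curl -sSL -o /usr/local/bin/argocd \
  https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
chmod +x /usr/local/bin/argocd
argocd version --client   # confirm the CLI runs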
I've just encountered a similar issue with a custom Docker image I created myself.
It turns out I was using USER nobody in the Dockerfile of that image, and because of this the Jenkins agent pod was unable to run the cat command or any other shell command from my pipeline script. Running the specific container as the root user worked for me.
So in your case I would add a securityContext with runAsUser: 0, like below.
...
- name: argocd
  image: argoproj/argocd:latest
  command:
  - cat
  tty: true
  securityContext:
    runAsUser: 0
...
Kubernetes reference: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
If the issue is Jenkins-related, here are some things that may help to solve the problem:
Issues with the working directory: if you updated Jenkins from an older version, the workdir used to be /home/jenkins, while in recent versions it should be /home/jenkins/agent.
If you are running it on Windows, the path should start with C:\dir and not with /dir.
You can try a clean reinstall with apt-get --purge remove jenkins and then apt-get install jenkins.
This is not your case, as you run the latest version of the Durable Task plugin, but for other people's reference: versions prior to 1.28-1.30 caused the same issue.
If your Jenkins is clean, the issue should be investigated in a different way; it seems that it's not returning an exit code to the sh command and/or the script is executed in a different shell.
I would try creating a shell script placed in the working directory of the container:
#!/bin/bash
echo "testing"
echo $?
and try to run it with source my_script.sh
or with bash my_script.sh
$? is the exit code of the latest shell operation; having it printed will make sure that your script terminated correctly. The source command runs the script in the same shell that is calling it, so the shell variables are accessible, while bash runs it in a subshell instead.

How to version docker images with build number in Jenkins to deploy as Kubernetes deployment?

Currently I am trying to add a version number or build number to the Docker image that I deploy on a Kubernetes cluster. Previously I was working only with the :latest tag, but when using the latest tag I ran into problems pulling from the Docker Hub image registry, so I want to use the build number with my Docker image, like <image-name>:{build-number}.
Application Structure
In my Kubernetes setup, I am using a deployment and a service. I define my image repository in my deployment file like the following:
containers:
  - name: test-kube-deployment-container
    image: samplekubernetes020/testimage:latest
    ports:
      - name: http
        containerPort: 8085
        protocol: TCP
Here, instead of the latest tag, I want to use the build number with my image in the deployment YAML.
Can I use an environment variable to hold the build number, so the image can be referenced like <image-name>:${buildnumber}?
If I want to use an environment variable to provide that number, how can I generate it and assign it to the environment variable?
Updates On Image Version Implementation
My modified Jenkinsfile contains a step like the following to assign the version number to the image, but I still don't see the updated result after pushing changes to the repository.
I created a step like the following in the Jenkinsfile:
stage('imagebuild') {
  steps {
    sh 'docker build -f /var/lib/jenkins/workspace/jpipeline/pipeline/Dockerfile -t spacestudymilletech010/spacestudykubernetes /var/lib/jenkins/workspace/jpipeline/pipeline'
    sh 'docker login --username=my-username --password=my-password'
    sh "docker tag spacestudymilletech010/spacestudykubernetes:latest spacestudymilletech010/spacestudykubernetes:${VERSION}"
    sh 'docker push spacestudymilletech010/spacestudykubernetes:latest'
  }
}
And my deployment YAML file contains the following:
containers:
  - name: test-kube-deployment-container
    image: spacestudymilletech010/spacestudykubernetes:latest
    ports:
      - name: http
        containerPort: 8085
        protocol: TCP
Confusions
NB: When I check the Docker Hub repository, it shows the latest push status every time.
So my confusions are:
Is there any problem with pulling the latest image in my deployment.yaml file?
Or is the problem in how I am tagging and pushing the image on the machine where I build it?
The standard way, or at least the way that has worked for most of us, is to create versioned or tagged images. For example:
samplekubernetes020/testimage:1
samplekubernetes020/testimage:2
samplekubernetes020/testimage:3
...
...
Now I will try to answer your actual question, which is: how do I update the image in my deployment when my image tag changes?
Enter Solution
When you compile and build a new image with the latest version of the code, tag it with an incremental, unique version. This tag can be anything unique, such as the build number.
Then push this tagged image to the Docker registry.
Once the image is uploaded, you can use kubectl or the Kubernetes API to update the deployment with the latest container image:
kubectl set image deployment/my-deployment test-kube-deployment-container=samplekubernetes020/testimage:1 --record
The above steps generally take place in your CI pipeline, where you store the image version (or the full image:version string) in an environment variable.
Update Post comment
Since you are using Jenkins, you can get the current build number, commit ID, and many other variables in the Jenkinsfile itself, as Jenkins injects these values at build runtime. This works for me; just for reference:
environment {
  NAME = "myapp"
  VERSION = "${env.BUILD_ID}-${env.GIT_COMMIT}"
  IMAGE = "${NAME}:${VERSION}"
}
stages {
  stage('Build') {
    steps {
      echo "Running ${VERSION} on ${env.JENKINS_URL}"
      git branch: "${BRANCH}", .....
      echo "for branch ${env.BRANCH_NAME}"
      sh "docker build -t ${NAME} ."
      sh "docker tag ${NAME}:latest ${IMAGE_REPO}/${NAME}:${VERSION}"
    }
  }
}
This Jenkins pipeline approach worked for me.
I am using the Jenkins build number as a tag for the Docker image and pushing it to Docker Hub, then applying the YAML file to the k8s cluster and updating the image in the deployment with the same tag.
A sample pipeline script snippet is here:
stage('Build Docker Image') {
  sh 'docker build -t {dockerId}/{projectName}:${BUILD_NUMBER} .'
}
stage('Push Docker Image') {
  withCredentials([string(credentialsId: 'DOKCER_HUB_PASSWORD', variable: 'DOKCER_HUB_PASSWORD')]) {
    sh "docker login -u {dockerId} -p ${DOKCER_HUB_PASSWORD}"
  }
  sh 'docker push {dockerId}/{projectName}:${BUILD_NUMBER}'
}
stage("Deploy To Kubernetes Cluster") {
  sh 'kubectl apply -f {yaml file name}.yaml'
  sh 'kubectl set image deployments/{deploymentName} {container name given in deployment yaml file}={dockerId}/{projectName}:${BUILD_NUMBER}'
}

Calling docker stack deploy on a docker host from within a Jenkins container

On my OS X host, I'm using Docker CE (18.06.1-ce-mac73 (26764)) with Kubernetes enabled and using Kubernetes orchestration. From this host, I can run a stack deploy to deploy a container to Kubernetes using this simple docker-compose file (kube-compose.yml):
version: '3.3'
services:
  web:
    image: dockerdemos/lab-web
    volumes:
      - "./web/static:/static"
    ports:
      - "9999:80"
and this command-line run from the directory containing the compose file:
docker stack deploy --compose-file ./kube-compose.yml simple_test
However, when I attempt to run the same command from my Jenkins container, Jenkins returns:
this node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again
I do not want the docker client in the Jenkins container to be initialized for a swarm since I'm not using Docker swarm on the host.
The Jenkins container is defined in a docker-compose file that includes a volume mount to the Docker host's socket endpoint:
version: '3.3'
services:
  jenkins:
    # contains embedded docker client & blueocean plugin
    image: jenkinsci/blueocean:latest
    user: root
    ports:
      - "8080:8080"
      - "50000:50000"
    volumes:
      - ./jenkins_home:/var/jenkins_home
      # run Docker from the host system when the container calls it.
      - /var/run/docker.sock:/var/run/docker.sock
      # root of simple project
      - .:/home/project
    container_name: jenkins
I have also followed this guide to proxy requests to the Docker host with socat: https://github.com/docker/for-mac/issues/770, and this one: Docker-compose: deploying service in multiple hosts.
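For reference, the socat approach from that issue boils down to running a small proxy container on the host that forwards a TCP port to the Docker socket; roughly like this, assuming port 1234 as used in the Jenkinsfile below:

# Expose the host's Docker socket on localhost:1234 so other containers can reach it over TCP
docker run -d --name socat-proxy \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -p 127.0.0.1:1234:1234 \
  bobrik/socat TCP-LISTEN:1234,fork UNIX-CONNECT:/var/run/docker.sock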
Finally, I'm using the following Jenkinsfile to call stack deploy on my host. Jenkins has the Docker plugin installed:
node {
  checkout scm
  stage('Deploy To Kube') {
    docker.withServer('tcp://docker.for.mac.localhost:1234') {
      sh 'docker stack deploy app --compose-file /home/project/kube-compose.yml'
    }
  }
}
I've also tried changing the withServer signature to:
docker.withServer('unix:///var/run/docker.sock')
and I get the same error response. I am, however, able to telnet to the docker host from the Jenkins container so I know it's reachable. Also, as I mentioned earlier, I know the message is saying to run swarm init, but I am not deploying to swarm.
I checked the version of the docker client in the Jenkins container and it is the same version (Linux variant, however) as I'm using on my host:
Docker version 18.06.1-ce, build d72f525745
Here's the code I've described: https://github.com/ewilansky/localstackdeploy.git
Please let me know if it's possible to do what I'm hoping to do from the Jenkins container. The purpose for all of this is to provide a simple, portable demonstration of a pipeline and deploying to Kubernetes is the last step. I understand that this is not the approach that would be taken anywhere outside of a local development environment.
Here is an approach that's working well for me until the Jenkins Docker plug-in or the Kubernetes Docker Stack Deploy command can support the remote deployment scenario I described.
I'm now using the Kubernetes client kubectl from the Jenkins container. To minimize the size increase of the Jenkins container, I added just the Kubernetes client to the jenkinsci/blueocean image, which is built on Alpine Linux. This Dockerfile shows the addition:
FROM jenkinsci/blueocean
USER root
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
RUN mv ./kubectl /usr/local/bin/kubectl
RUN mkdir /root/.kube
COPY kube-config /root/.kube/config
I took this approach, which added ~100 MB to the image size, rather than installing the Alpine Linux Kubernetes package, which almost doubled the size of the image in my testing. Granted, the Kubernetes package has all the Kubernetes components, but all I needed was the Kubernetes client. This is similar to the requirement that the Docker client be present in the Jenkins container in order to run Docker commands on the host.
Notice in the Dockerfile that there is a reference to the Kubernetes config file:
kube-config /root/.kube/config
I started with the Kubernetes configuration file on my host machine (the computer running Docker for Mac). I believe that if you enable Kubernetes in Docker for Mac, the Kubernetes client configuration will be present at ~/.kube/config. If not, install the Kubernetes client tools separately. In the Kubernetes configuration file that you copy over to the Jenkins container via the Dockerfile, just change the server value so that the Jenkins container points at the Docker for Mac host:
server: https://docker.for.mac.localhost:6443
If you're using a Windows machine, I think you can use docker.for.win.localhost. There's a discussion about this here: https://github.com/docker/for-mac/issues/2705 and other approaches described here: https://github.com/docker/for-linux/issues/264.
After recomposing the Jenkins container, I was able to use kubectl to create a deployment and service for my app, which is now running in the Docker for Mac Kubernetes host. In my case, here are the two commands I added to my Jenkinsfile:
stage('Deploy To Kube') {
  sh 'kubectl create -f /kube/deploy/app_set/sb-demo-deployment.yaml'
}
stage('Configure Kube Load Balancer') {
  sh 'kubectl create -f /kube/deploy/app_set/sb-demo-service.yaml'
}
There are loads of options for Kubernetes container deployments. In my case, I simply needed to deploy my web app (with replicas) behind a load balancer. All of that is defined in the two yaml files called by kubectl. This is a bit more involved than docker stack deploy, but achieves the same end result.
