As you already know, Kubernetes 1.24 is moving on from the dockershim.
I need your help here, because all of our deployments in Jenkins run through a Docker pod agent via the Jenkins Kubernetes plugin.
Here is an example from one of our Jenkins pipelines:
agent {
    kubernetes {
        // label 'test'
        defaultContainer 'jnlp'
        yaml """
apiVersion: v1
kind: Pod
metadata:
  labels:
    component: ci
spec:
  # Use service account that can deploy to all namespaces
  serviceAccountName: jenkins
  containers:
  - name: docker
    image: docker:latest
    #image: debian:buster
    command:
    - cat
    tty: true
    volumeMounts:
    - mountPath: /var/run/docker.sock
      name: docker-sock
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
"""
    }
}
And then in this stage we build our image:
stage('Create & Tag Image') {
    steps {
        container('docker') {
            sh '''
            aws ecr get-login-password --region eu-central-1 | docker login --username AWS --password-stdin < AWS ECR URL >
            docker build --build-arg -t < AWS ECR URL > --network=host .
            '''
        }
    }
}
The result is an error that Docker cannot connect to the Docker socket, because, as I mentioned, from version 1.24 Kubernetes no longer supports the Docker daemon on the nodes.
I would like to ask how you build and deploy on Kubernetes 1.24 now.
I read that there are some tools, such as img, buildah, kaniko, or buildkit-cli-for-kubectl, that don't require Docker.
Can you recommend a solution or otherwise help with this subject?
We are using EKS from AWS.
Thank you
You can try Mirantis cri-dockerd; some explanation can be found here.
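If you would rather avoid the Docker daemon entirely, kaniko is a common replacement for this kind of pipeline on EKS. The following is only a minimal sketch, not a drop-in replacement: it assumes a ConfigMap named docker-config containing a config.json that maps your ECR registry to the ecr-login credential helper (which the kaniko executor image ships), and that the node role or the jenkins service account (via IRSA) is allowed to push to ECR.

pipeline {
    agent {
        kubernetes {
            defaultContainer 'kaniko'
            yaml """
apiVersion: v1
kind: Pod
spec:
  serviceAccountName: jenkins
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:debug
    command:
    - /busybox/cat
    tty: true
    volumeMounts:
    - name: docker-config
      mountPath: /kaniko/.docker/
  volumes:
  - name: docker-config
    configMap:
      name: docker-config   # config.json with {"credHelpers": {"<AWS ECR URL host>": "ecr-login"}}
"""
        }
    }
    stages {
        stage('Create & Tag Image') {
            steps {
                container('kaniko') {
                    // kaniko builds and pushes in one step; no Docker daemon or socket is needed
                    sh '/kaniko/executor --context `pwd` --dockerfile Dockerfile --destination < AWS ECR URL >:latest'
                }
            }
        }
    }
}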
I'm new to DevOps and trying to build my code using Jenkins and deploy it to a Kubernetes cluster hosted on IBM Cloud. But when I run the docker run command in the Jenkins script I keep getting the error below, even though I have installed all the latest plugins.
+ docker run hello-world
docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
Here's the Jenkins script, which I don't know is right or wrong. I searched a couple of articles and questions, but none of them gave me a positive result.
I also tried Jenkins Docker in Docker on GCP/Kubernetes.
podTemplate(
    cloud: "kubernetes",
    label: "mypod",
    containers: [
        containerTemplate(
            name: "nodejs",
            image: "node",
            ttyEnabled: true,
            command: 'cat',
            alwaysPullImage: true,
            resourceRequestCpu: '200m',
            resourceRequestMemory: '100Mi',
        ),
        containerTemplate(
            name: "docker",
            image: "",
            ttyEnabled: true,
            command: 'cat',
            alwaysPullImage: true,
            resourceRequestCpu: '200m',
            resourceRequestMemory: '100Mi',
        ),
        containerTemplate(
            name: "helm",
            image: "alpine/helm",
            ttyEnabled: true,
            command: 'cat',
            alwaysPullImage: true,
            resourceRequestCpu: '200m',
            resourceRequestMemory: '100Mi',
        )
    ],
    volumes: [
        hostPathVolume(hostPath: '/var/run/docker.sock', mountPath: '/var/run/docker.sock')
    ]
) {
    node("mypod") {
        def commitId
        stage("Fetch repo") {
            checkout scm
            commitId = sh(script: 'git rev-parse --short HEAD', returnStdout: true).trim()
        }
        stage("Installing packages") {
            container("nodejs") {
                sh 'npm install'
            }
        }
        stage("Build") {
            container("nodejs") {
                sh 'npm run build'
            }
        }
        def repository
        stage("Docker") {
            container('docker') {
                docker.withRegistry("https://us.icr.io/api", "ibm-cloud") {
                    sh "docker run hello-world"
                }
            }
        }
        stage("Deploy") {
            container("helm") {
                sh 'helm version'
            }
        }
    }
}
This is the deployment file of my Jenkins pod
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-uat
  labels:
    app: jenkins
    chart: jenkins-5.0.18
    release: jenkins-uat
    heritage: Helm
spec:
  selector:
    matchLabels:
      app: jenkins
      release: jenkins-uat
  template:
    metadata:
      labels:
        app: jenkins
        chart: jenkins-5.0.18
        release: jenkins-uat
        heritage: Helm
    spec:
      securityContext:
        fsGroup: 1001
      containers:
        - name: jenkins
          image: docker.io/bitnami/jenkins:2.235.1-debian-10-r7
          imagePullPolicy: "IfNotPresent"
          securityContext:
            runAsUser: 1001
          env:
            - name: JENKINS_USERNAME
              value: "hlpjenkin"
            - name: JENKINS_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: jenkins-uat
                  key: jenkins-password
            - name: JENKINS_HOME
              value: "/opt/bitnami/jenkins/jenkins_home"
            - name: DISABLE_JENKINS_INITIALIZATION
              value: "no"
          ports:
            - name: http
              containerPort: 8080
            - name: https
              containerPort: 8443
          livenessProbe:
            httpGet:
              path: /login
              port: http
            initialDelaySeconds: 180
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 6
          readinessProbe:
            httpGet:
              path: /login
              port: http
            initialDelaySeconds: 30
            periodSeconds: 5
            timeoutSeconds: 3
            successThreshold: 1
            failureThreshold: 3
          resources:
            limits: {}
            requests:
              cpu: 300m
              memory: 512Mi
          volumeMounts:
            - name: jenkins-data
              mountPath: /bitnami/jenkins
      volumes:
        - name: jenkins-data
          persistentVolumeClaim:
            claimName: jenkins-uat
So I have installed Jenkins as a container in my k8s cluster :) and managed to reproduce the same error:
docker run --rm hello-world
docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
See 'docker run --help'.
How to fix it:
In order to fix this, you definitely need access to Docker on your K8s node. A very good explanation of how that works was given by jpetazzo.
Technically you do not need "Docker in Docker" (that is, the "full Docker setup" in Docker). You just want to be able to run Docker from your CI system, while this CI system itself is in a container, so that your CI system (e.g. Jenkins) can start containers.
So when you start your CI container (Jenkins or other), instead of hacking something together with Docker-in-Docker, start it with access to /var/run/docker.sock on the host.
Below you can see the part of my YAML that is responsible for that.
That gives my CI container access to the Docker socket, so the CI container will be able to start containers.
Except that instead of starting “child” containers, it will start “sibling” containers, but that is perfectly fine in our context.
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  template:
    spec:
      containers:
      - env:
        volumeMounts:
        - mountPath: /var/run/docker.sock
          name: docker-sock
        ...
      volumes:
      - hostPath:
          path: /var/run/docker.sock
          type: File
        name: docker-sock
So in my case, the pipeline I've created produces the following logs:
####pipeline
pipeline {
    agent any
    stages {
        stage('second_stage') {
            steps {
                sh 'docker run --rm hello-world'
            }
        }
    }
}
####logs
+ docker run --rm hello-world
Hello from Docker!
I had a similar problem and fixed it by adding my user to the docker group so it can execute docker. This happens when your user does not have permission to use Docker.
You need to follow the post-installation steps after installing Docker.
Create the docker group:
sudo groupadd docker
Add your user to the docker group:
sudo usermod -aG docker $USER
Restart the docker service:
sudo service docker stop and then sudo service docker start
Exit/log out of the current session and log back in to verify.
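A quick way to verify after logging back in (assuming a standard Docker installation):

groups $USER                 # should now include "docker"
docker run --rm hello-world  # should work without sudo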
So I see a couple of problems in your podTemplate.
First of all, for the docker container, you didn't specify any image. You should use a Docker image in this container. Create your own image with Docker installed in it, or you can use https://hub.docker.com/r/volaka/ibm-cloud-cli. It includes the ibmcloud CLI, kubectl, helm and docker for Kubernetes automation on IBM Cloud.
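As a rough sketch, the corrected docker containerTemplate could look like this (docker:19.03 is just one image that ships the docker CLI; keep the hostPathVolume for /var/run/docker.sock that you already declare):

containerTemplate(
    name: "docker",
    image: "docker:19.03",
    ttyEnabled: true,
    command: 'cat',
    alwaysPullImage: true,
    resourceRequestCpu: '200m',
    resourceRequestMemory: '100Mi',
),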
The second thing is, I think, related to the Jenkins Kubernetes plugin. Once you create a podTemplate in a pipeline, even if you edit the template, sometimes the changes are not reflected in the latest pod. I had this kind of error, so I deleted and recreated the pipeline with the edited podTemplate. I am saying this because even though you have declared your volume binding in the podTemplate, I don't see it in the created pod's YAML. So I recommend you recreate your pipeline with your final podTemplate.
I have created a detailed walkthrough about how to install, configure and automate Jenkins pipelines on IBM Kubernetes Service. Feel free to check it: https://volaka.gitbook.io/jenkins-on-k8s/
I installed Jenkins using Helm:
helm install --name jenkins -f values.yaml stable/jenkins
Jenkins plugins installed:
- kubernetes:1.12.6
- workflow-job:2.31
- workflow-aggregator:2.5
- credentials-binding:1.16
- git:3.9.3
- docker:1.1.6
I defined a Jenkins pipeline to build a Docker container:
node {
    checkout scm
    def customImage = docker.build("my-image:${env.BUILD_ID}")
    customImage.inside {
        sh 'make test'
    }
}
It throws the error: docker not found.
You can define an agent pod with containers for the required tools (Docker, Maven, Helm, etc.) in the pipeline.
First, create agentpod.yaml with the following values:
apiVersion: v1
kind: Pod
metadata:
  labels:
    some-label: pod
spec:
  containers:
  - name: maven
    image: maven:3.3.9-jdk-8-alpine
    command:
    - cat
    tty: true
    volumeMounts:
    - name: m2
      mountPath: /root/.m2
  - name: docker
    image: docker:19.03
    command:
    - cat
    tty: true
    securityContext:
      privileged: true
    volumeMounts:
    - name: dockersock
      mountPath: /var/run/docker.sock
  volumes:
  - name: dockersock
    hostPath:
      path: /var/run/docker.sock
  - name: m2
    hostPath:
      path: /root/.m2
Then configure the pipeline as:
pipeline {
    agent {
        kubernetes {
            defaultContainer 'jnlp'
            yamlFile 'agentpod.yaml'
        }
    }
    stages {
        stage('Build') {
            steps {
                container('maven') {
                    sh 'mvn package'
                }
            }
        }
        stage('Docker Build') {
            steps {
                container('docker') {
                    sh "docker build -t dockerimage ."
                }
            }
        }
    }
}
It seems like you have only installed the plugins but not the packages. There are two possibilities.
Configure the plugins to install the packages using Jenkins:
Go to Manage Jenkins
Global Tool Configuration
Docker -> fill in the name (e.g. docker-latest)
Check "Install automatically" and then add an installer (download from here).
Then save.
If you have installed Docker on the machine yourself, then update the PATH variable in Jenkins with the location of Docker.
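For example, inside a pipeline you can prepend a directory to PATH for a block of steps; the /usr/local/bin path below is only an assumption, use whatever directory actually contains the docker binary on your agent:

withEnv(['PATH+DOCKER=/usr/local/bin']) {   // PATH+<name> prepends the value to PATH
    sh 'docker version'
}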
I want to run Kaniko as a slave in Jenkins. My pipeline runs via the Docker plugin; how can I set the GCR credentials for Kaniko?
I want to upload the GCR credentials to the Jenkins master server.
My pipeline Groovy is shown below:
node("kaniko-jnlp") {
stage('Building Stage') {
git 'https://github.com/jenkinsci/docker-jnlp-slave.git'
sh ''' /kaniko/executor -f `pwd`/Dockerfile -c `pwd` --insecure-
skip-tls-verify --cache=true
--- destination=gcr.io/project/project:v1 '''
}
I am using Kaniko to build images and push to a private repo. My Kaniko docker image uses a Kubernetes pull-secret for authentication, but you should be able to use the following code:
stage('Kaniko') {
    environment {
        ARTIFACTORY_CREDS = credentials('your-credentials')
    }
    steps {
        sh "echo ********** EXAMPLE APP **********"
        container(name: 'kaniko', shell: '/busybox/sh') {
            withEnv(['PATH+EXTRA=/busybox']) {
                sh '''#!/busybox/sh
                /kaniko/executor --context `pwd` --cleanup --dockerfile=your/Dockerfile --build-arg ARTIFACTORY_USER=$ARTIFACTORY_CREDS_USR --build-arg ARTIFACTORY_PASS=$ARTIFACTORY_CREDS_PSW --destination=your.docker.repo/team/image:tag
                '''
            }
        }
    }
}
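Since this question is specifically about GCR, another documented option is to mount a GCP service-account key as a Kubernetes secret and point GOOGLE_APPLICATION_CREDENTIALS at it. A rough fragment of the kaniko pod spec, assuming a secret named kaniko-secret whose key is kaniko-secret.json:

containers:
- name: kaniko
  image: gcr.io/kaniko-project/executor:debug
  command:
  - /busybox/cat
  tty: true
  env:
  - name: GOOGLE_APPLICATION_CREDENTIALS
    value: /secret/kaniko-secret.json   # path of the mounted key inside the container
  volumeMounts:
  - name: kaniko-secret
    mountPath: /secret
volumes:
- name: kaniko-secret
  secret:
    secretName: kaniko-secret           # created from the GCP service-account JSON key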
I run my whole pipeline encapsulated inside a pod; here is how I use Kaniko:
pipeline {
    agent {
        kubernetes {
            yaml """
apiVersion: v1
kind: Pod
metadata:
  labels:
    jenkins: worker
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:debug
    command: ["/busybox/cat"]
    tty: true
    volumeMounts:
    - name: dockercred
      mountPath: /root/.docker/
  volumes:
  - name: dockercred
    secret:
      secretName: dockercred
"""
        }
    }
    stages {
        stage('Stage 1: Build with Kaniko') {
            steps {
                container('kaniko') {
                    sh '/kaniko/executor --context=git://github.com/repository/project.git \
                        --destination=repository/image:tag \
                        --insecure \
                        --skip-tls-verify \
                        -v=debug'
                }
            }
        }
    }
}
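For reference, the dockercred secret used above can be created from an existing Docker config.json (assuming your registry credentials are already stored in $HOME/.docker/config.json on the machine where you run kubectl):

kubectl create secret generic dockercred \
  --from-file=config.json=$HOME/.docker/config.json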
I have a scenario: I run Jenkins in a K8s cluster in Minikube, and I run a Groovy script within my Jenkins pipeline which builds the Docker image using Kaniko (which builds a Docker image without a Docker daemon) and pushes it to Azure Container Registry. I have created a secret to authenticate to Azure.
But when I push an image, I get this error:
" [36mINFO[0m[0004] Taking snapshot of files...
[36mINFO[0m[0004] ENTRYPOINT ["jenkins-slave"]
error pushing image: failed to push to destination Testimage.azurecr.io/test:latest: unexpected end of JSON input
[Pipeline] }"
My Groovy script:
def label = "kaniko-${UUID.randomUUID().toString()}"
podTemplate(name: 'kaniko', label: label, yaml: """
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:debug
    imagePullPolicy: Always
    command:
    - /busybox/cat
    tty: true
    volumeMounts:
    - name: jenkins-pv
      mountPath: /root
  volumes:
  - name: jenkins-pv
    projected:
      sources:
      - secret:
          name: pass
          items:
          - key: .dockerconfigjson
            path: .docker/config.json
"""
) {
    node(label) {
        stage('Build with Kaniko') {
            git 'https://github.com/jenkinsci/docker-jnlp-slave.git'
            container(name: 'kaniko', shell: '/busybox/sh') {
                sh '''#!/busybox/sh
                /kaniko/executor -f `pwd`/Dockerfile -c `pwd` --skip-tls-verify --destination=testimage.azurecr.io/test:latest
                '''
            }
        }
    }
}
Could you please help me overcome this error? And also:
How do I know the name of the image built by Kaniko?
I am just pushing it like registry.acr.io/test:latest; is that perhaps an incorrect image name, and is that why I get the "unexpected end of JSON input" error?