Deploying in Kubernetes version 1.24 - docker

As you already know, Kubernetes 1.24 is moving on from dockershim.
I need your help here because all of our deployments in Jenkins run through a Docker pod agent via the Jenkins Kubernetes plugin.
Here is an example from one of our Jenkins pipelines:
agent {
    kubernetes {
        // label 'test'
        defaultContainer 'jnlp'
        yaml """
apiVersion: v1
kind: Pod
metadata:
  labels:
    component: ci
spec:
  # Use service account that can deploy to all namespaces
  serviceAccountName: jenkins
  containers:
  - name: docker
    image: docker:latest
    #image: debian:buster
    command:
    - cat
    tty: true
    volumeMounts:
    - mountPath: /var/run/docker.sock
      name: docker-sock
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
"""
    }
}
And then in this stage we build our image:
stage('Create & Tag Image') {
    steps {
        container('docker') {
            sh '''
                aws ecr get-login-password --region eu-central-1 | docker login --username AWS --password-stdin < AWS ECR URL >
                docker build --build-arg -t < AWS ECR URL > --network=host .
            '''
        }
    }
}
The result is an error saying it can't reach the Docker socket; as I mentioned, with Kubernetes 1.24 the nodes no longer run a Docker daemon.
I would like to ask how you deploy now on Kubernetes 1.24.
I read that there are some tools, such as img, buildah, kaniko, or buildkit-cli-for-kubectl, that don't require Docker.
Can you recommend a solution or help with this subject?
We are using EKS from AWS.
Thank you

You can try Mirantis cri-dockerd; some explanations can be found here.
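Alternatively, since you already list kaniko among the candidates: it keeps roughly the same pipeline shape but needs no Docker socket at all. A rough sketch, assuming the jenkins service account (or the node instance role / IRSA) is allowed to push to ECR, and keeping your < AWS ECR URL > placeholder:
agent {
    kubernetes {
        defaultContainer 'jnlp'
        yaml """
apiVersion: v1
kind: Pod
spec:
  serviceAccountName: jenkins
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:debug
    command:
    - /busybox/cat
    tty: true
"""
    }
}
// ... rest of the pipeline unchanged ...
stage('Create & Tag Image') {
    steps {
        container(name: 'kaniko', shell: '/busybox/sh') {
            sh '''#!/busybox/sh
            # kaniko bundles the Amazon ECR credential helper; this config tells it
            # to use the pod's AWS credentials (node role or IRSA) for the registry
            echo '{"credsStore":"ecr-login"}' > /kaniko/.docker/config.json
            /kaniko/executor --context `pwd` --dockerfile `pwd`/Dockerfile --destination "< AWS ECR URL >:latest"
            '''
        }
    }
}
This is only a sketch of the idea, not a drop-in replacement; the build-arg and network options from your docker build call would need to be mapped to the corresponding kaniko flags.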

Related

How to upload and download docker images using nexus registry/repository?

I was able to publish a Docker image using the Jenkins pipeline, but I cannot pull the Docker image from Nexus. I used kaniko to build the image.
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test-app
  name: test-app
  namespace: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-app
  template:
    metadata:
      labels:
        app: test-app
    spec:
      hostNetwork: false
      containers:
      - name: test-app
        image: ip_adress/demo:0.1.0
        imagePullPolicy: Always
        resources:
          limits: {}
      imagePullSecrets:
      - name: registrypullsecret
service.yml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: test-app
  name: test-app-service
  namespace: jenkins
spec:
  ports:
  - nodePort: 32225
    port: 8081
    protocol: TCP
    targetPort: 8081
  selector:
    app: test-app
  type: NodePort
Jenkins pipeline main script
stage('Build Image') {
    container('kaniko') {
        script {
            sh '''
            /kaniko/executor --dockerfile `pwd`/Dockerfile --context `pwd` --destination="$ip_adress:8082/demo:0.1.0" --insecure --skip-tls-verify
            '''
        }
    }
}
stage('Kubernetes Deployment') {
    container('kubectl') {
        withKubeConfig([credentialsId: 'kube-config', namespace: 'jenkins']) {
            sh 'kubectl get pods'
            sh 'kubectl apply -f deployment.yml'
            sh 'kubectl apply -f service.yml'
        }
    }
}
I've created a Dockerfile for a Spring Boot Java application. I've pushed the image to Nexus using the Jenkins pipeline, but I can't deploy it.
kubectl get pod -n jenkins
test-app-... 0/1 ImagePullBackOff
kubectl describe pod test-app-.....
Error from server (NotFound): pods "test-app-.." not found
docker pull $ip_adress:8081/repository/docker-releases/demo:0.1.0
Error response from daemon: Get "https://$ip_adress/v2/": http: server
gave HTTP response to HTTPS client
ip_adress: private IP address
How can I make it pull over plain HTTP?
First of all, for the docker pull error, try editing /etc/docker/daemon.json on the machine where you run Docker and adding your registry ip:port like this: { "insecure-registries": ["172.16.4.93:5000"] }, then restart Docker. On the cluster nodes, which use containerd, the equivalent setting lives in /etc/containerd/config.toml (sketched below).
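A rough sketch of that containerd setting (the address is only an example, and newer containerd versions prefer a hosts.toml file under config_path instead):
# /etc/containerd/config.toml -- then restart containerd on the node
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."172.16.4.93:5000"]
  endpoint = ["http://172.16.4.93:5000"]
[plugins."io.containerd.grpc.v1.cri".registry.configs."172.16.4.93:5000".tls]
  insecure_skip_verify = true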
If there is still a problem, add your Nexus registry credentials to your Kubernetes YAML as described in the link below:
https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
If we want to use a private registry to pull images in Kubernetes, we need to configure the registry endpoint and credentials as a secret and use it in the pod/deployment configuration.
Note: the secret must be in the same namespace as the Pod.
Refer to the official Kubernetes documentation above for more details about configuring a private registry in Kubernetes.
In your case you are using the secret registrypullsecret, so check one more time whether it is configured properly. If not, try following the documentation mentioned above.
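For reference, a minimal sketch of creating that secret in the jenkins namespace (the server, user, password and email values here are placeholders for your Nexus settings):
kubectl create secret docker-registry registrypullsecret \
  --namespace jenkins \
  --docker-server=<ip_adress>:8082 \
  --docker-username=<nexus-user> \
  --docker-password=<nexus-password> \
  --docker-email=<email>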

Jenkins Docker Container can't access docker.sock in GKE Autopilot

I have installed Jenkins using Helm v3 in a GKE Autopilot cluster with the default chart values. I am trying to build a Docker image from Jenkins but I am getting a permission issue (autogke-no-write-mode-hostpath).
Jenkinsfile
pipeline {
    agent {
        kubernetes {
            defaultContainer 'jnlp'
            yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: nodejs
    image: node:16
    command:
    - cat
    tty: true
    resources:
      requests:
        memory: "4Gi"
        cpu: "1000m"
  - name: gcloud-sdk
    image: google/cloud-sdk:latest
    command:
    - cat
    tty: true
  - name: docker
    image: docker:latest
    command:
    - cat
    tty: true
    volumeMounts:
    - mountPath: /var/run/docker.sock
      name: docker-sock
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
'''
        }
    }
    environment {
        // GKE
        PROJECT_ID = 'example-0000'
        // Docker
        GCR_HOSTNAME = 'us.gcr.io'
        DOCKER_IMG = "${env.GCR_HOSTNAME}/${env.PROJECT_ID}/my-app"
        DOCKER_IMG_TAG = "${env.DOCKER_IMG}:${env.BRANCH_NAME}-${env.BUILD_NUMBER}"
        NODE_ENV = 'production'
        HOME = "${WORKSPACE}"
        NPM_CONFIG_CACHE = "${WORKSPACE}/.npm"
    }
    options {
        disableConcurrentBuilds(abortPrevious: true)
        parallelsAlwaysFailFast()
    }
    stages {
        stage('Build: Docker Image') {
            when {
                beforeAgent true
                anyOf { branch 'master'; branch 'sandbox' }
            }
            steps {
                container('docker') {
                    sh "sed -i 's#__NODE_ENV__#${NODE_ENV}#' ./Dockerfile"
                    sh "docker build -t ${env.DOCKER_IMG_TAG} ."
                    withCredentials([file(credentialsId: 'my-gcr-cred', variable: 'GCR_MANAGER_KEY')]) {
                        sh 'chmod 600 $GCR_MANAGER_KEY'
                        sh('cat $GCR_MANAGER_KEY | docker login -u _json_key --password-stdin https://' + "${env.GCR_HOSTNAME}")
                        sh "docker push ${DOCKER_IMG_TAG}"
                        sh "docker logout https://${env.GCR_HOSTNAME}"
                    }
                }
            }
            post {
                always {
                    sh "docker rmi ${env.DOCKER_IMG_TAG}"
                }
            }
        }
    }
}
The error that I am getting:
ERROR: Unable to create pod kubernetes jenkins/pro-7-bmvlz-n2z9s-bqxmd.
Failure executing: POST at: https://10.100.108.3/api/v1/namespaces/jenkins/pods. Message: admission webhook "gkepolicy.common-webhooks.networking.gke.io" denied the request: GKE Policy Controller rejected the request because it violates one or more policies: {"[denied by autogke-no-write-mode-hostpath]":["hostPath volume docker-sock in container docker is accessed in write mode; disallowed in Autopilot. Requested by user: 'system:serviceaccount:jenkins:jenkinsv2', groups: 'system:serviceaccounts,system:serviceaccounts:jenkins,system:authenticated'."]}. Received status: Status(apiVersion=v1, code=400, details=null, kind=Status, message=admission webhook "gkepolicy.common-webhooks.networking.gke.io" denied the request: GKE Policy Controller rejected the request because it violates one or more policies: {"[denied by autogke-no-write-mode-hostpath]":["hostPath volume docker-sock in container docker is accessed in write mode; disallowed in Autopilot. Requested by user: 'system:serviceaccount:jenkins:jenkinsv2', groups: 'system:serviceaccounts,system:serviceaccounts:jenkins,system:authenticated'."]}, metadata=ListMeta(_continue=null, remainingItemCount=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=[denied by autogke-no-write-mode-hostpath], status=Failure, additionalProperties={}).
What you are trying to access is the Docker socket at /var/run from the Docker container in Jenkins.
However, GKE Autopilot is a managed Kubernetes service and you don't have access to the underlying infrastructure.
GKE Autopilot does not allow you to use hostPath volumes or mount a folder with write permission.
You are only allowed to perform read operations.
HostPort and hostNetwork are not permitted because node management is handled by GKE. Using hostPath volumes in write mode is prohibited, while using hostPath volumes in read mode is allowed only for /var/log/ path prefixes.
Read more at: https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview
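One pattern that stays within these Autopilot restrictions (and is used in other answers on this page) is to build with kaniko instead of the Docker socket. A rough sketch of the pod spec fragment, assuming a Secret named gcr-push-key created from a GCR-capable service-account JSON key (the names are placeholders, not your existing objects):
  - name: kaniko
    image: gcr.io/kaniko-project/executor:debug
    command:
    - /busybox/cat
    tty: true
    env:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /secret/gcr-key.json
    volumeMounts:
    - name: gcr-key
      mountPath: /secret
  volumes:
  - name: gcr-key
    secret:
      secretName: gcr-push-key   # hypothetical Secret holding the service-account key
In the build stage you would then run something like /kaniko/executor --context `pwd` --dockerfile `pwd`/Dockerfile --destination ${DOCKER_IMG_TAG} inside container('kaniko') instead of the docker build/login/push/rmi sequence, which removes the need for the docker-sock hostPath entirely.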

Jenkins build Docker daemon not running on kubernetes cluster

I'm new to DevOps and I'm trying to build my code using Jenkins and deploy it to a Kubernetes cluster hosted on IBM Cloud. But when I run the docker run command in the Jenkins script I keep getting this error. I have installed all the latest plugins.
+ docker run hello-world
docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
Here's the Jenkins script, which I don't know is right or wrong. I searched a couple of articles and questions, but none of them gave me a positive result.
I tried this: Jenkins Docker in Docker on GCP/Kubernetes.
podTemplate(
    cloud: "kubernetes",
    label: "mypod",
    containers: [
        containerTemplate(
            name: "nodejs",
            image: "node",
            ttyEnabled: true,
            command: 'cat',
            alwaysPullImage: true,
            resourceRequestCpu: '200m',
            resourceRequestMemory: '100Mi',
        ),
        containerTemplate(
            name: "docker",
            image: "",
            ttyEnabled: true,
            command: 'cat',
            alwaysPullImage: true,
            resourceRequestCpu: '200m',
            resourceRequestMemory: '100Mi',
        ),
        containerTemplate(
            name: "helm",
            image: "alpine/helm",
            ttyEnabled: true,
            command: 'cat',
            alwaysPullImage: true,
            resourceRequestCpu: '200m',
            resourceRequestMemory: '100Mi',
        )
    ],
    volumes: [
        hostPathVolume(hostPath: '/var/run/docker.sock', mountPath: '/var/run/docker.sock')
    ]
) {
    node("mypod") {
        def commitId
        stage("Fetch repo") {
            checkout scm
            commitId = sh(script: 'git rev-parse --short HEAD', returnStdout: true).trim()
        }
        stage("Installing packages") {
            container("nodejs") {
                sh 'npm install'
            }
        }
        stage("Build") {
            container("nodejs") {
                sh 'npm run build'
            }
        }
        def repository
        stage("Docker") {
            container('docker') {
                docker.withRegistry("https://us.icr.io/api", "ibm-cloud") {
                    sh "docker run hello-world"
                }
            }
        }
        stage("Deploy") {
            container("helm") {
                sh 'helm version'
            }
        }
    }
}
This is the deployment file of my Jenkins pod
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-uat
  labels:
    app: jenkins
    chart: jenkins-5.0.18
    release: jenkins-uat
    heritage: Helm
spec:
  selector:
    matchLabels:
      app: jenkins
      release: jenkins-uat
  template:
    metadata:
      labels:
        app: jenkins
        chart: jenkins-5.0.18
        release: jenkins-uat
        heritage: Helm
    spec:
      securityContext:
        fsGroup: 1001
      containers:
        - name: jenkins
          image: docker.io/bitnami/jenkins:2.235.1-debian-10-r7
          imagePullPolicy: "IfNotPresent"
          securityContext:
            runAsUser: 1001
          env:
            - name: JENKINS_USERNAME
              value: "hlpjenkin"
            - name: JENKINS_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: jenkins-uat
                  key: jenkins-password
            - name: JENKINS_HOME
              value: "/opt/bitnami/jenkins/jenkins_home"
            - name: DISABLE_JENKINS_INITIALIZATION
              value: "no"
          ports:
            - name: http
              containerPort: 8080
            - name: https
              containerPort: 8443
          livenessProbe:
            httpGet:
              path: /login
              port: http
            initialDelaySeconds: 180
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 6
          readinessProbe:
            httpGet:
              path: /login
              port: http
            initialDelaySeconds: 30
            periodSeconds: 5
            timeoutSeconds: 3
            successThreshold: 1
            failureThreshold: 3
          resources:
            limits: {}
            requests:
              cpu: 300m
              memory: 512Mi
          volumeMounts:
            - name: jenkins-data
              mountPath: /bitnami/jenkins
      volumes:
        - name: jenkins-data
          persistentVolumeClaim:
            claimName: jenkins-uat
So I have installed Jenkins as a container in my k8s cluster :) and managed to reproduce the same error:
docker run --rm hello-world
docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
See 'docker run --help'.
How to fix it:
In order to fix it, you definitely need access to Docker on your K8s node. A very good explanation of how that works was given by jpetazzo.
Technically you do not need "Docker in Docker" (that is, the "full Docker setup" in Docker). You just want to be able to run Docker from your CI system, while this CI system itself is in a container, so that your CI system (such as Jenkins) can start containers.
So when you start your CI container (Jenkins or other), instead of hacking something together with Docker-in-Docker, start it with access to /var/run/docker.sock on the main host.
Below you can see the part of my YAML that is responsible for that.
It allows my CI container to access the Docker socket, so the CI container will be able to start containers.
Except that instead of starting "child" containers, it will start "sibling" containers, but that is perfectly fine in our context.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  ...
spec:
  template:
    spec:
      containers:
      - env:
        volumeMounts:
        - mountPath: /var/run/docker.sock
          name: docker-sock
        ...
      volumes:
      - hostPath:
          path: /var/run/docker.sock
          type: File
        name: docker-sock
So in my case, the pipeline I've created produces the following logs:
####pipeline
pipeline {
    agent any
    stages {
        stage('second_stage') {
            steps {
                sh 'docker run --rm hello-world'
            }
        }
    }
}
####logs
+ docker run --rm hello-world
Hello from Docker!
I had a similar problem and fixed it by adding my user to the docker group so it can execute docker. This happens when your user is unable to access Docker.
You need to follow the post-installation steps after installing Docker.
Create the docker group:
sudo groupadd docker
Add your user to the docker group:
sudo usermod -aG docker $USER
Restart the docker service:
sudo service docker stop and sudo service docker start
Log out of the current user and log back in to verify.
So I see a couple of problems in your podTemplate.
First of all, for the docker container you didn't specify any image. You should use a Docker image in this container. Create your own image with docker installed in it, or you can use https://hub.docker.com/r/volaka/ibm-cloud-cli. It includes the ibmcloud CLI, kubectl, helm and docker for Kubernetes automation on IBM Cloud.
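For illustration, the docker containerTemplate could look like this (docker:latest here is just one possible image choice; the other values are copied from your template):
containerTemplate(
    name: "docker",
    image: "docker:latest",   // or volaka/ibm-cloud-cli as suggested above
    ttyEnabled: true,
    command: 'cat',
    alwaysPullImage: true,
    resourceRequestCpu: '200m',
    resourceRequestMemory: '100Mi',
),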
The second thing, I think, is related to the Jenkins Kubernetes plugin. Once you create a podTemplate in a pipeline, even if you edit the template, sometimes the changes are not reflected in the latest pod. I had this kind of error, so I deleted and recreated the pipeline with the edited podTemplate. I am saying this because even though you have declared your volume binding in the podTemplate, I don't see it in the created pod's YAML. So I recommend that you recreate your pipeline with your final podTemplate.
I have created a detailed walkthrough about how to install, configure and automate Jenkins pipelines on IBM Kubernetes Service. Feel free to check it. https://volaka.gitbook.io/jenkins-on-k8s/

Rolling out a new backend version + Kubernetes + Gitlab CI + Google Cloud

I don't know how to update my backend workload on my Kubernetes cluster. My GitLab pipeline runs without errors, but my active revision is still from my first push, so how can I update the revision to trigger the rolling update? Can I integrate an automatic rollout into GitLab CI?
.gitlab-ci.yml
image: docker:latest
services:
  - docker:dind
variables:
  DOCKER_DRIVER: overlay
  SPRING_PROFILES_ACTIVE: gitlab-ci
stages:
  - build
  - package
  - deploy
maven-build:
  image: maven:3-jdk-8
  stage: build
  script: "mvn package -B"
  artifacts:
    paths:
      - target/*.jar
docker-build:
  stage: package
  script:
    - docker build -t registry.gitlab.com/projectX/ft-backend .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker push registry.gitlab.com/projectX/ft-backend
k8s-deploy:
  image: google/cloud-sdk
  stage: deploy
  script:
    - echo "$GOOGLE_KEY" > key.json
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config set compute/zone europe-west3-a
    - gcloud config set project projectX
    - gcloud config unset container/use_client_certificate
    - gcloud container clusters get-credentials development --zone europe-west3-a --project projectX
    - kubectl delete secret registry.gitlab.com
    - kubectl create secret docker-registry registry.gitlab.com --docker-server=https://registry.gitlab.com --docker-username=MYNAME --docker-password=$REGISTRY_PASSWD --docker-email=MYMAIL
    - kubectl apply -f deployment.yml
deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ft-backend
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: ft-backend
    spec:
      containers:
      - name: ft-backend
        image: registry.gitlab.com/projectX/ft-backend
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: registry.gitlab.com
Google Cloud Workload
As discussed in the comments, you have to update your Deployment's .spec.template to trigger a rollout. An easy way to do that is to tag your image upon release.
In your .gitlab-ci.yml file you can use the CI_COMMIT_SHA variable:
# in your docker-build job, update build and push:
- docker build -t registry.gitlab.com/projectX/ft-backend:${CI_COMMIT_SHA} .
- docker push registry.gitlab.com/projectX/ft-backend:${CI_COMMIT_SHA}
# in your k8s-deploy job add this:
- kubectl set image deployment/ft-backend ft-backend=registry.gitlab.com/projectX/ft-backend:${CI_COMMIT_SHA}
That would both version your image on your GitLab project registry, and trigger a rollout.
Like Clorichel mentioned in the comments, you'd need to modify your deployment to trigger a rollout. You could use something like Gitflow and semantic versioning (if you're not already) to tag your container image. For example, in .gitlab-ci.yml you could add the Git tag to your container image:
script:
  - docker build -t registry.gitlab.com/projectX/ft-backend:$CI_COMMIT_TAG .
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
  - docker push registry.gitlab.com/projectX/ft-backend:$CI_COMMIT_TAG
In the deployment.yml you would reference the new version:
spec:
  containers:
  - name: ft-backend
    image: registry.gitlab.com/projectX/ft-backend:YOUR_NEW_GIT_TAG
    imagePullPolicy: Always
    ports:
    - containerPort: 8080
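One detail this leaves open is how deployment.yml picks up the new tag on each release. A simple approach (just a sketch, keeping the YOUR_NEW_GIT_TAG string in the file as a placeholder marker) is to substitute it in the k8s-deploy job before applying:
# in the k8s-deploy script, before kubectl apply -f deployment.yml:
- sed -i "s|ft-backend:YOUR_NEW_GIT_TAG|ft-backend:${CI_COMMIT_TAG}|" deployment.yml
- kubectl apply -f deployment.yml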

Error pushing kaniko build image to azure container registry from jenkins groovy pipeline

I have a scenario where I run Jenkins in a K8s cluster in Minikube. I run a groovy script within my Jenkins pipeline which builds the Docker image using kaniko (which builds a Docker image without a Docker daemon) and pushes it to Azure Container Registry. I have created a secret to authenticate to Azure.
But when I push an image, I get this error:
" [36mINFO[0m[0004] Taking snapshot of files...
[36mINFO[0m[0004] ENTRYPOINT ["jenkins-slave"]
error pushing image: failed to push to destination Testimage.azurecr.io/test:latest: unexpected end of JSON input
[Pipeline] }"
My groovy script:
def label = "kaniko-${UUID.randomUUID().toString()}"
podTemplate(name: 'kaniko', label: label, yaml: """
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:debug
    imagePullPolicy: Always
    command:
    - /busybox/cat
    tty: true
    volumeMounts:
    - name: jenkins-pv
      mountPath: /root
  volumes:
  - name: jenkins-pv
    projected:
      sources:
      - secret:
          name: pass
          items:
          - key: .dockerconfigjson
            path: .docker/config.json
"""
) {
    node(label) {
        stage('Build with Kaniko') {
            git 'https://github.com/jenkinsci/docker-jnlp-slave.git'
            container(name: 'kaniko', shell: '/busybox/sh') {
                sh '''#!/busybox/sh
                /kaniko/executor -f `pwd`/Dockerfile -c `pwd` --skip-tls-verify --destination=testimage.azurecr.io/test:latest
                '''
            }
        }
    }
}
Could you please help me overcome this error? And also:
How do I know the name of the image built by kaniko?
I'm just pushing it like registry.acr.io/test:latest; is an incorrect image name perhaps the reason I get the JSON input error?
