I don't know how to update my backend workload on my Kubernetes cluster. My GitLab pipeline runs without errors, but the active revision is still the one from my first push. How can I update the revision so that a rolling update is triggered? Can I integrate an automatic rollout into GitLab CI?
.gitlab-ci.yml
image: docker:latest
services:
  - docker:dind
variables:
  DOCKER_DRIVER: overlay
  SPRING_PROFILES_ACTIVE: gitlab-ci
stages:
  - build
  - package
  - deploy
maven-build:
  image: maven:3-jdk-8
  stage: build
  script: "mvn package -B"
  artifacts:
    paths:
      - target/*.jar
docker-build:
  stage: package
  script:
    - docker build -t registry.gitlab.com/projectX/ft-backend .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker push registry.gitlab.com/projectX/ft-backend
k8s-deploy:
  image: google/cloud-sdk
  stage: deploy
  script:
    - echo "$GOOGLE_KEY" > key.json
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config set compute/zone europe-west3-a
    - gcloud config set project projectX
    - gcloud config unset container/use_client_certificate
    - gcloud container clusters get-credentials development --zone europe-west3-a --project projectX
    - kubectl delete secret registry.gitlab.com
    - kubectl create secret docker-registry registry.gitlab.com --docker-server=https://registry.gitlab.com --docker-username=MYNAME --docker-password=$REGISTRY_PASSWD --docker-email=MYMAIL
    - kubectl apply -f deployment.yml
deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ft-backend
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: ft-backend
    spec:
      containers:
        - name: ft-backend
          image: registry.gitlab.com/projectX/ft-backend
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
      imagePullSecrets:
        - name: registry.gitlab.com
[Screenshot: Google Cloud workload overview]
As discussed in the comments, you have to change your Deployment's .spec.template to trigger a rollout. An easy way to do that is to tag your image on each release.
In your .gitlab-ci.yml file you can use the CI_COMMIT_SHA variable:
# in your docker-build job, update build and push:
- docker build -t registry.gitlab.com/projectX/ft-backend:${CI_COMMIT_SHA} .
- docker push registry.gitlab.com/projectX/ft-backend:${CI_COMMIT_SHA}
# in your k8s-deploy job add this:
- kubectl set image deployment/ft-backend ft-backend=registry.gitlab.com/projectX/ft-backend:${CI_COMMIT_SHA}
That would both version your image on your GitLab project registry, and trigger a rollout.
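If you also want the pipeline to fail when the new Pods never become ready, you could wait for the rollout to finish. A minimal sketch, assuming the same job and Deployment names as above (the timeout value is just an illustration):
# in your k8s-deploy job, after `kubectl set image`:
- kubectl rollout status deployment/ft-backend --timeout=120s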
Like Clorichel mentioned in the comments, you'd need to modify your Deployment to trigger a rollout. You could use something like Gitflow and semantic versioning (if you're not already) to tag your container image. For example, in the .gitlab-ci.yml you could add the Git tag to your container image:
script:
  - docker build -t registry.gitlab.com/projectX/ft-backend:$CI_COMMIT_TAG .
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
  - docker push registry.gitlab.com/projectX/ft-backend:$CI_COMMIT_TAG
In the deployment.yml you would reference the new version:
spec:
  containers:
    - name: ft-backend
      image: registry.gitlab.com/projectX/ft-backend:YOUR_NEW_GIT_TAG
      imagePullPolicy: Always
      ports:
        - containerPort: 8080
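If you'd rather not edit deployment.yml by hand for every release, one option is to keep a placeholder in the manifest and substitute it during the deploy job. A minimal sketch, assuming a literal <VERSION> placeholder in the image line (the same pattern used by a pipeline further down this page):
# deployment.yml
      image: registry.gitlab.com/projectX/ft-backend:<VERSION>
# in your k8s-deploy job, before `kubectl apply`:
- sed -i "s/<VERSION>/${CI_COMMIT_TAG}/g" deployment.yml
- kubectl apply -f deployment.yml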
Related
As you already know, Kubernetes 1.24 is moving on from dockershim.
I need your help here, because all of our deployments in Jenkins run through a Docker pod agent via the Jenkins Kubernetes plugin.
Here is an example of part of one of our Jenkins pipelines:
agent {
  kubernetes {
    // label 'test'
    defaultContainer 'jnlp'
    yaml """
apiVersion: v1
kind: Pod
metadata:
  labels:
    component: ci
spec:
  # Use service account that can deploy to all namespaces
  serviceAccountName: jenkins
  containers:
    - name: docker
      image: docker:latest
      #image: debian:buster
      command:
        - cat
      tty: true
      volumeMounts:
        - mountPath: /var/run/docker.sock
          name: docker-sock
  volumes:
    - name: docker-sock
      hostPath:
        path: /var/run/docker.sock
"""
  }
}
And then, basically, this is the stage in which we build our image:
stage('Create & Tag Image') {
  steps {
    container('docker') {
      sh '''
        aws ecr get-login-password --region eu-central-1 | docker login --username AWS --password-stdin < AWS ECR URL >
        docker build --build-arg -t < AWS ECR URL > --network=host .
      '''
    }
  }
}
The result is an error: the build can no longer talk to the Docker socket because, as I mentioned, since version 1.24 Kubernetes no longer supports the Docker daemon.
I would like to ask how you deploy on Kubernetes 1.24 now.
I read that there are some tools, such as img, buildah, kaniko, or buildkit-cli-for-kubectl, that don't require Docker.
Can you recommend a solution or help with this subject?
We are using EKS from AWS.
Thank you
You can try Mirantis cri-dockerd; some explanations can be found here.
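If you would rather remove the dependency on a Docker daemon entirely, kaniko (one of the tools you list) fits this kind of Jenkins pipeline well. A rough sketch, reusing the placeholders from your current stage; registry authentication (for example an ECR credential helper or an IAM role on the EKS nodes) is left out here and would need to be wired up separately:
# Pod template: a kaniko container instead of docker; the :debug tag includes a shell, which the Jenkins `sh` step needs
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:debug
      command:
        - cat
      tty: true

# Build stage: build and push without any docker.sock mount
stage('Create & Tag Image') {
  steps {
    container('kaniko') {
      sh '''
        /kaniko/executor --context `pwd` --dockerfile Dockerfile --destination < AWS ECR URL >:latest
      '''
    }
  }
}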
We're using GitLab for CI/CD. I'll include the GitLab CI/CD file (.gitlab-ci.yml) we're using:
services:
  - docker:19.03.11-dind
workflow:
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH || $CI_COMMIT_BRANCH == "developer" || $CI_COMMIT_BRANCH == "stage" || ($CI_COMMIT_BRANCH =~ (/^([A-Z]([0-9][-_])?)?SPRINT(([-_][A-Z][0-9])?)+/i))
      when: always
    - if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH || $CI_COMMIT_BRANCH != "developer" || $CI_COMMIT_BRANCH != "stage" || ($CI_COMMIT_BRANCH !~ (/^([A-Z]([0-9][-_])?)?SPRINT(([-_][A-Z][0-9])?)+/i))
      when: never
stages:
  - build
  - Publish
  - deploy
cache:
  paths:
    - .m2/repository
    - target
build_jar:
  image: maven:3.8.3-jdk-11
  stage: build
  script:
    - mvn clean install package -DskipTests=true
  artifacts:
    paths:
      - target/*.jar
docker_build_dev:
  stage: Publish
  image: docker:19.03.11
  services:
    - docker:19.03.11-dind
  variables:
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $IMAGE_TAG .
    - docker push $IMAGE_TAG
  only:
    - /^([A-Z]([0-9][-_])?)?SPRINT(([-_][A-Z][0-9])?)+/i
    - developer
docker_build_stage:
  stage: Publish
  image: docker:19.03.11
  services:
    - docker:19.03.11-dind
  variables:
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $IMAGE_TAG .
    - docker push $IMAGE_TAG
  only:
    - stage
deploy_dev:
  stage: deploy
  image: stellacenter/aws-helm-kubectl
  variables:
    ENV_VAR_NAME: development
  before_script:
    - apt update
    - apt-get install gettext-base
    - aws configure set aws_access_key_id ${DEV_AWS_ACCESS_KEY_ID}
    - aws configure set aws_secret_access_key ${DEV_AWS_SECRET_ACCESS_KEY}
    - aws configure set region ${DEV_AWS_DEFAULT_REGION}
  script:
    - sed -i "s/<VERSION>/${CI_COMMIT_SHORT_SHA}/g" patient-service.yml
    - mkdir -p $HOME/.kube
    - cp $KUBE_CONFIG_DEV $HOME/.kube/config
    - chown $(id -u):$(id -g) $HOME/.kube/config
    - export KUBECONFIG=$HOME/.kube/config
    - cat patient-service.yml | envsubst | kubectl apply -f patient-service.yml -n ${KUBE_NAMESPACE_DEV}
  only:
    - /^([A-Z]([0-9][-_])?)?SPRINT(([-_][A-Z][0-9])?)+/i
    - developer
deploy_stage:
  stage: deploy
  image: stellacenter/aws-helm-kubectl
  variables:
    ENV_VAR_NAME: stage
  before_script:
    - apt update
    - apt-get install gettext-base
    - aws configure set aws_access_key_id ${DEV_AWS_ACCESS_KEY_ID}
    - aws configure set aws_secret_access_key ${DEV_AWS_SECRET_ACCESS_KEY}
    - aws configure set region ${DEV_AWS_DEFAULT_REGION}
  script:
    - sed -i "s/<VERSION>/${CI_COMMIT_SHORT_SHA}/g" patient-service.yml
    - mkdir -p $HOME/.kube
    - cp $KUBE_CONFIG_STAGE $HOME/.kube/config
    - chown $(id -u):$(id -g) $HOME/.kube/config
    - export KUBECONFIG=$HOME/.kube/config
    - cat patient-service.yml | envsubst | kubectl apply -f patient-service.yml -n ${KUBE_NAMESPACE_STAGE}
  only:
    - stage
We merged the script like this so that the stage and development environments don't conflict during deployment. Previously, we had a separate Dockerfile for each environment (stage and developer). Now I want to merge the Dockerfile and the Kubernetes YAML file as well. I merged them, but the Dockerfile is not being picked up correctly: after the pipeline succeeds, Kubernetes shows the warning "Back-off restarting failed container". I don't know how to clear this warning in Kubernetes. I'm enclosing the merged Dockerfile and YAML file for your reference.
Kubernetes YAML file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: patient-app
  labels:
    app: patient-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: patient-app
  template:
    metadata:
      labels:
        app: patient-app
    spec:
      containers:
        - name: patient-app
          image: registry.gitlab.com/stella-center/backend-services/patient-service:<VERSION>
          imagePullPolicy: Always
          ports:
            - containerPort: 8094
          env:
            - name: ENV_VAR_NAME
              value: "${ENV_VAR_NAME}"
      imagePullSecrets:
        - name: gitlab-registry-token-auth
---
apiVersion: v1
kind: Service
metadata:
  name: patient-service
spec:
  type: NodePort
  selector:
    app: patient-app
  ports:
    - port: 8094
      targetPort: 8094
Dockerfile
FROM maven:3.8.3-jdk-11 AS MAVEN_BUILD
COPY pom.xml /build/
COPY src /build/src/
WORKDIR /build/
RUN mvn clean install package -DskipTests=true
FROM openjdk:11
WORKDIR /app
COPY --from=MAVEN_BUILD /build/target/patient-service-*.jar /app/patient-service.jar
ENV PORT 8094
EXPOSE $PORT
ENTRYPOINT ["java","-Dspring.profiles.active=$ENV_VAR_NAME","-jar","/app/patient-service.jar"]
In the Dockerfile, before switching to that last line, we used:
ENTRYPOINT ["java","-Dspring.profiles.active=development","-jar","/app/patient-service.jar"]   # developer Dockerfile
ENTRYPOINT ["java","-Dspring.profiles.active=stage","-jar","/app/patient-service.jar"]   # stage Dockerfile
At that time it was working fine and I wasn't facing any issue on Kubernetes. I only added the environment variable so the image can pick up whether to run as development or stage. I don't know why the warning is happening. Please help me sort this out. Thanks in advance.
kubectl describe pods
Name:         patient-app-6cd8c88d6-s7ldt
Namespace:    stellacenter-dev
Priority:     0
Node:         ip-192-168-49-35.us-east-2.compute.internal/192.168.49.35
Start Time:   Wed, 25 May 2022 20:09:23 +0530
Labels:       app=patient-app
              pod-template-hash=6cd8c88d6
Annotations:  kubernetes.io/psp: eks.privileged
Status:       Running
IP:           192.168.50.146
IPs:
  IP:           192.168.50.146
Controlled By:  ReplicaSet/patient-app-6cd8c88d6
Containers:
  patient-app:
    Container ID:   docker://2d3431a015a40f551e51285fa23e1d39ad5b257bfd6ba75c3972f422b94b12be
    Image:          registry.gitlab.com/stella-center/backend-services/patient-service:96e21d80
    Image ID:       docker-pullable://registry.gitlab.com/stella-center/backend-services/patient-service@sha256:3f9774efe205c081de4df5b6ee22cba9940f974311b0942a8473ee02b9310b43
    Port:           8094/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Wed, 25 May 2022 20:09:24 +0530
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sxbzc (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-sxbzc:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>
Your Dockerfile uses the exec form of ENTRYPOINT. This form doesn't expand environment variables, so Spring is literally getting the string $ENV_VAR_NAME as the profile name and failing on it.
Spring knows how to set properties from environment variables, though. Rather than building that setting into the Dockerfile, you can use an environment variable to set the profile name at deploy time.
# Dockerfile: do not set `-Dspring.profiles.active`
ENTRYPOINT ["java", "-jar", "/app/patient-service.jar"]
# Deployment YAML: do set `$SPRING_PROFILES_ACTIVE`
env:
- name: SPRING_PROFILES_ACTIVE
value: "${ENV_VAR_NAME}" # Helm: {{ quote .Values.environment }}
With this approach, though, deployment-specific settings still live in your src/main/resources/application-*.yml files, so changing one of them means rebuilding the jar, rebuilding the Docker image, and redeploying. That doesn't make sense for most settings, particularly since Spring can also read them directly from environment variables: if one of these values needs to change, you can just change the Kubernetes configuration and redeploy, without recompiling anything.
# Deployment YAML: don't use Spring profiles; directly set variables instead
env:
- name: SPRING_DATASOURCE_URL
value: "jdbc:postgresql://postgres-dev/database"
Run the following command to see why your pod crashes:
kubectl describe pod -n <your-namespace> <your-pod>
Additionally, the output of kubectl get pod -o yaml -n <your-namespace> <your-pod> has a status section that holds the reason for restarts. You might have to look up the exit code; e.g. 137 stands for OOM (out of memory).
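If you only need the exit code of the last crash, a jsonpath query keeps it short. A small sketch (namespace and pod name are placeholders, and the [0] index assumes a single-container pod):
kubectl get pod -n <your-namespace> <your-pod> \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}'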
I built a Spring Boot project and I want to deploy it to minikube using GitLab CI/CD. I'm able to deploy the application by applying deployment.yml directly from my local machine.
But I get the following error when I try to deploy it from GitLab.
Error
$ kubectl apply -f deployment.yml
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Cleaning up project directory and file based variables
ERROR: Job failed: exit code 1
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-management
spec:
  # the target number of Pods
  replicas: 2
  selector:
    matchLabels:
      app: user-management
  template:
    metadata:
      labels:
        app: user-management
    spec:
      containers:
        - name: user-management7
          image: registry.gitlab.com/PROFILE_NAME/user-management
          imagePullPolicy: Always
          ports:
            - containerPort: 8082
      imagePullSecrets:
        - name: registry.gitlab.com
.gitlab-ci.yml
image: docker:latest
services:
  - docker:dind
  - mysql:8
variables:
  DOCKER_DRIVER: overlay
  SPRING_PROFILES_ACTIVE: gitlab-ci
stages:
  - build
  - package
  - test
  - deploy-tb
  - deploy-prod
maven-build:
  image: maven:3-jdk-8
  stage: build
  script: "mvn package -B"
  artifacts:
    paths:
      - target/*.jar
docker-build:
  stage: package
  script:
    - docker build -t registry.gitlab.com/PROFILE_NAME/user-management .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker push registry.gitlab.com/PROFILE_NAME/user-management
test:
  image: maven:3-jdk-8
  services:
    - mysql:8
  script:
    - "mvn clean test"
  artifacts:
    when: always
    reports:
      junit:
        - target/surefire-reports/TEST-*.xml
deploy-tb:
  image:
    name: bitnami/kubectl:latest
    entrypoint: [ "" ]
  stage: deploy-tb
  script:
    - kubectl apply -f deployment.yml
  environment:
    name: prod
    url: registry.gitlab.com/PROFILE_NAME/user-management
I don't know what I'm missing here.
According to the GitLab documentation, you first need to install the GitLab agent for Kubernetes.
These are the steps to install the agent in your cluster:
Define a configuration repository.
Register an agent with GitLab.
Install the agent into the cluster.
Note: On self-managed GitLab instances, a GitLab administrator needs to set up the GitLab Agent Server (KAS).
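Once an agent is registered and installed, CI jobs can talk to the cluster by selecting the agent's kubectl context instead of relying on a local kubeconfig. A minimal sketch of how your deploy job could look, assuming the agent configuration lives in the same project and the agent is called my-agent (both are placeholders):
deploy-tb:
  image:
    name: bitnami/kubectl:latest
    entrypoint: [ "" ]
  stage: deploy-tb
  script:
    # the context name is <path/to/project-with-agent-config>:<agent-name>
    - kubectl config use-context PROFILE_NAME/user-management:my-agent
    - kubectl apply -f deployment.yml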
I get the following error message in my GitLab CI pipeline and I can't do anything with it. Yesterday the pipeline still worked, and I didn't change anything in the YAML, so I don't know where I made the mistake. I also reset my code to the last working commit, but the error still occurs.
$ kubectl set image deployment/ft-backend ft-backend=registry.gitlab.com/projectX/ft-backend
Error from server (NotFound): deployments.extensions "ft-backend" not found
.gitlab-ci.yml
image: docker:latest
services:
  - docker:dind
variables:
  DOCKER_DRIVER: overlay
  SPRING_PROFILES_ACTIVE: gitlab-ci
stages:
  - build
  - package
  - deploy
maven-build:
  image: maven:3-jdk-8
  stage: build
  script: "mvn package -B"
  artifacts:
    paths:
      - target/*.jar
docker-build:
  stage: package
  script:
    - docker build -t registry.gitlab.com/projectX/ft-backend:${CI_COMMIT_SHA} .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker push registry.gitlab.com/projectX/ft-backend:${CI_COMMIT_SHA}
k8s-deploy:
  image: google/cloud-sdk
  stage: deploy
  script:
    - echo "$GOOGLE_KEY" > key.json
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config set compute/zone europe-west3-a
    - gcloud config set project projectX
    - gcloud config unset container/use_client_certificate
    - gcloud container clusters get-credentials development --zone europe-west3-a --project projectX
    - kubectl delete secret registry.gitlab.com
    - kubectl create secret docker-registry registry.gitlab.com --docker-server=https://registry.gitlab.com --docker-username=MY_NAME --docker-password=$REGISTRY_PASSWD --docker-email=MY_MAIL
    - kubectl set image deployment/ft-backend ft-backend=registry.gitlab.com/projectX/ft-backend:${CI_COMMIT_SHA}
    - kubectl apply -f deployment.yml
I suppose that when you invoke the command:
kubectl set image deployment/ft-backend ft-backend=registry.gitlab.com/projectX/ft-backend
the deployment ft-backend does not exist in your cluster. Does the command kubectl get deployment ft-backend return the same result?
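If that is the case, the Deployment has to exist before kubectl set image can patch it. A minimal sketch, assuming deployment.yml defines the ft-backend Deployment: apply the manifest first, then point it at the freshly pushed image:
# k8s-deploy job: create/refresh the Deployment first, then update its image
- kubectl apply -f deployment.yml
- kubectl set image deployment/ft-backend ft-backend=registry.gitlab.com/projectX/ft-backend:${CI_COMMIT_SHA}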
In newer Kubernetes versions, use kubectl create deployment to create the Deployment, for example:
$ kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.4
I am setting up a cloud DevOps deployment pipeline using GitLab CI (gitlab.com), Kubernetes, and Docker. I am following the example posts "Continuous delivery of a Spring Boot application with GitLab CI and Kubernetes" and "Kubectl delete/create secret forbidden (Google Cloud Platform)".
Below is the source of my .gitlab-ci.yml file:
image: docker:latest
services:
  - docker:dind
variables:
  DOCKER_DRIVER: overlay
  SPRING_PROFILES_ACTIVE: gitlab-ci
stages:
  - build
  - package
  - deploy
maven-build:
  image: maven:3-jdk-8
  stage: build
  script: "mvn package -B"
  artifacts:
    paths:
      - target/*.jar
docker-build:
  stage: package
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker build -t registry.gitlab.com/username/mta-hosting-optimizer .
    - docker push registry.gitlab.com/username/mta-hosting-optimizer
k8s-deploy:
  image: google/cloud-sdk
  stage: deploy
  script:
    - echo "$GOOGLE_KEY" > key.json
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config set compute/zone europe-west1-c
    - gcloud config set project mta-hosting-optimizer
    - gcloud config unset container/use_client_certificate
    - gcloud config set container/use_client_certificate True
    - gcloud container clusters get-credentials mta-hosting-optimizer
    - kubectl create -f admin.yaml --validate=false
    - kubectl create clusterrolebinding serviceaccounts-cluster-admin --clusterrole=cluster-admin --group=system:serviceaccounts
    - kubectl delete secret registry.gitlab.com
    - kubectl create secret docker-registry registry.gitlab.com --docker-server=https://registry.gitlab.com --docker-username=username --docker-password=$REGISTRY_PASSWD --docker-email=email@email.com
    - kubectl apply -f deployment.yml
Deployment fails at the line below
- kubectl create -f admin.yaml --validate=false
The error message displayed upon this failure is as follow:
error: error converting YAML to JSON: yaml: mapping values are not allowed in this context
ERROR: Job failed: exit code 1
The admin.yaml file's source is as follows:
apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRoleBinding metadata: name: kubernetes-dashboard labels: k8s-app: kubernetes-dashboard roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-admin subjects: - kind: ServiceAccount name: kubernetes-dashboard namespace: kube-system
The Maven build and Docker build/package stages work fine; this is the only stage that fails. I would appreciate everyone's help in resolving this issue.
Thank you very much.
You have a YAML validation error. This means that your YAML isn't formatted correctly.
You most likely wanted to format your admin.yaml file this way:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
Also, as Matthew L Daniel already pointed out, you shouldn't disable validation of the YAML files.
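To catch formatting problems like this before the pipeline ever runs, you can have kubectl parse the manifest locally without creating anything. A small sketch using standard kubectl flags (run it from the repository root, where admin.yaml lives; on older kubectl versions the flag is just --dry-run):
# parse and print the object without sending it to the cluster
kubectl create -f admin.yaml --dry-run=client -o yaml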