I get the following error message in my GitLab CI pipeline and I can't make sense of it. Yesterday the pipeline still worked, but I didn't change anything in the YAML and I don't know where I made a mistake. I even reset my code to the last working commit, but the error still occurs.
$ kubectl set image deployment/ft-backend ft-backend=registry.gitlab.com/projectX/ft-backend
Error from server (NotFound): deployments.extensions "ft-backend" not found
.gitlab-ci.yml
image: docker:latest
services:
  - docker:dind

variables:
  DOCKER_DRIVER: overlay
  SPRING_PROFILES_ACTIVE: gitlab-ci

stages:
  - build
  - package
  - deploy

maven-build:
  image: maven:3-jdk-8
  stage: build
  script: "mvn package -B"
  artifacts:
    paths:
      - target/*.jar

docker-build:
  stage: package
  script:
    - docker build -t registry.gitlab.com/projectX/ft-backend:${CI_COMMIT_SHA} .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker push registry.gitlab.com/projectX/ft-backend:${CI_COMMIT_SHA}

k8s-deploy:
  image: google/cloud-sdk
  stage: deploy
  script:
    - echo "$GOOGLE_KEY" > key.json
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config set compute/zone europe-west3-a
    - gcloud config set project projectX
    - gcloud config unset container/use_client_certificate
    - gcloud container clusters get-credentials development --zone europe-west3-a --project projectX
    - kubectl delete secret registry.gitlab.com
    - kubectl create secret docker-registry registry.gitlab.com --docker-server=https://registry.gitlab.com --docker-username=MY_NAME --docker-password=$REGISTRY_PASSWD --docker-email=MY_MAIL
    - kubectl set image deployment/ft-backend ft-backend=registry.gitlab.com/projectX/ft-backend:${CI_COMMIT_SHA}
    - kubectl apply -f deployment.yml
I suppose that when you invoke the command:

kubectl set image deployment/ft-backend ft-backend=registry.gitlab.com/projectX/ft-backend

the deployment ft-backend does not exist in your cluster. Does the command kubectl get deployment ft-backend return the same result?

Note that the old way of creating deployments with kubectl run is not supported in newer versions; use kubectl create deployment instead:

$ kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.4
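If the deployment genuinely does not exist yet, a minimal sketch for the k8s-deploy job (assuming your image path and container name; verify both) that bootstraps it on the first run and rolls the image afterwards:

  # sketch: create the deployment if missing, then roll the image forward
  - kubectl get deployment ft-backend || kubectl create deployment ft-backend --image=registry.gitlab.com/projectX/ft-backend:${CI_COMMIT_SHA}
  - kubectl set image deployment/ft-backend ft-backend=registry.gitlab.com/projectX/ft-backend:${CI_COMMIT_SHA}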
I've been trying to build a simple GitLab CI pipeline which builds an image and pushes it to Google Container Registry. I am running into this error:

ERROR: error during connect: Get "http://docker:2375/v1.24/info": dial tcp: lookup docker on 169.254.169.254:53: no such host

I have tried all the solutions posted across GitLab issue threads, but none helped. I am using public runners, and it's a pretty simple CI script.
image: docker:latest

variables:
  GCR_IMAGE: <GCR_IMAGE>

services:
  - docker:dind

build:
  stage: build
  before_script:
    - docker info
    - echo $GOOGLE_CLOUD_ACCOUNT | docker login -u _json_key --password-stdin https://us.gcr.io
  script:
    - docker build -t $GCR_IMAGE:latest .
    - docker push $GCR_IMAGE:$CI_COMMIT_SHA
Relevant issue thread: https://gitlab.com/gitlab-org/gitlab-runner/-/issues/4794
Using gitlab-runner 15.7.1
A few weeks ago I encountered this problem and was able to solve it with this method:
image:
  name: docker:20.10.16

services:
  - name: docker:20.10.16-dind

variables:
  DOCKER_HOST: tcp://docker:2376/
  DOCKER_TLS_CERTDIR: "/certs"
  DOCKER_TLS_VERIFY: 1
  DOCKER_CERT_PATH: "$DOCKER_TLS_CERTDIR/client"

build:  # job wrapper added; the original snippet listed the hooks without one
  before_script:
    - until docker info; do sleep 1; done
    - echo $GOOGLE_CLOUD_ACCOUNT | docker login -u _json_key --password-stdin https://us.gcr.io
  script:
    - docker build -t $GCR_IMAGE:latest -t $GCR_IMAGE:$CI_COMMIT_SHA .  # tag both so the push below succeeds
    - docker push $GCR_IMAGE:$CI_COMMIT_SHA
Also add this configuration to the runner:
[[runners]]
  [runners.kubernetes]
    namespace = "{{.Release.Namespace}}"
    image = "ubuntu:20.04"
  [[runners.kubernetes.volumes.empty_dir]]
    name = "docker-certs"
    mount_path = "/certs/client"
    medium = "Memory"
image: atlassian/default-image:3

pipelines:
  tags:
    ecr-release-*:
      - step:
          services:
            - docker
          script:
            - apt update -y
            - apt install python3-pip -y
            - pip3 --version
            - pip3 install awscli
            - aws configure set aws_access_key_id "AKIA6J47DSdaUIAZH46DKDDID6UH"
            - aws configure set aws_secret_access_key "2dWgDxx5i7Jre0aZJ+tQ3oDve5biYk0ZMDKKASA7554QoJSJSJS"
            - curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
            - chmod +x ./kubectl
            - mv ./kubectl /usr/local/bin/kubectl
            - aws eks update-kubeconfig --name build_web --region us-west-2
            - kubectl apply -f eks/aws-auth.yaml
            - kubectl apply -f eks/deployment.yaml
            - kubectl apply -f eks/service.yaml

definitions:
  services:
    docker:
      memory: 3072
Here is my bitbucket-pipelines.yml. When I run the Bitbucket pipeline I get the error shown in the screenshot. I think I have already added the AWS access credentials. Please take a look.
You need to create a service account and grant it permissions; you also need a certificate to connect to the Kubernetes API server.
Here is a nice explanation with all the details, which might be helpful for you: https://medium.com/codeops/continuous-deployment-with-bitbucket-pipelines-ecr-and-aws-eks-791a30b7c84b
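For illustration, a minimal sketch of such a service account bound to the built-in edit role (the account name and namespace are assumptions; scope the role down for production):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: bitbucket-deployer
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: bitbucket-deployer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
  - kind: ServiceAccount
    name: bitbucket-deployer
    namespace: default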
The problem is resolved by changing the kubeconfig file. You need to specify the profile you want to use. By default, update-kubeconfig creates the authentication credentials and puts something like this inside the file:
- name: arn:aws:eks:{region}:{account-id}:cluster/{cluster-name}
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
        - --region
        - {region}
        - eks
        - get-token
        - --cluster-name
        - {cluster-name}
      command: aws
      env:
        - name: AWS_PROFILE
          value: {profile}
      interactiveMode: IfAvailable
      provideClusterInfo: false
For some reason the AWS CLI is not picking up the AWS_PROFILE environment variable value, so in this case I solved it by manually updating the kubeconfig and specifying --profile in the aws command arguments:
- name: arn:aws:eks:{region}:{account-id}:cluster/{cluster-name}
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
        - --region
        - {region}
        - eks
        - get-token
        - --profile
        - {profile}
        - --cluster-name
        - {cluster-name}
      command: aws
      # env:
      #   - name: AWS_PROFILE
      #     value: {profile}
      interactiveMode: IfAvailable
      provideClusterInfo: false
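Alternatively (an assumption worth verifying against your CLI version), recent AWS CLI releases will record the profile in the generated kubeconfig for you if you pass it when creating the entry:

aws eks update-kubeconfig --name {cluster-name} --region {region} --profile {profile}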
I just created my first CI/CD pipeline on GitLab, which creates a Docker container for a Next.js app and deploys it on Google Cloud Run.
My cloudbuild.yaml:
# File: cloudbuild.yaml
steps:
  # build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: [ 'build', '-t', 'gcr.io/$PROJECT_ID/inook-web', '.' ]
  # push the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: [ 'push', 'gcr.io/$PROJECT_ID/inook-web' ]
  # deploy to Cloud Run
  - name: 'gcr.io/cloud-builders/gcloud'
    args: [ 'run', 'deploy', 'inook-web', '--image', 'gcr.io/$PROJECT_ID/inook-web', '--region', 'europe-west1', '--platform', 'managed', '--allow-unauthenticated' ]
My .gitlab-ci.yml:
# File: .gitlab-ci.yml
image: docker:latest

stages: # List of stages for jobs, and their order of execution
  - deploy-test
  - deploy-prod

deploy-test:
  stage: deploy-test
  image: google/cloud-sdk
  services:
    - docker:dind
  script:
    - echo $GCP_SERVICE_KEY > gcloud-service-key.json # Google Cloud service accounts
    - gcloud auth activate-service-account --key-file gcloud-service-key.json
    - gcloud config set project $GCP_PROJECT_ID
    - gcloud builds submit . --config=cloudbuild.yaml
I get the following error message in the CI/CD pipeline:
https://ibb.co/ZXLWrj1
However, the deployment actually succeeds on GCP: https://ibb.co/ZJjtXzG
Any idea what I can do to fix the pipeline error?
What worked for me was to add a custom bucket for the gcloud builds submit to push logs to. Thanks #slauth for pointing me in the right direction.
Updated command:
gcloud builds submit . --config=cloudbuild.yaml --gcs-log-dir=gs://inook_test_logs
If you add a bucket at the end of the command, then it works.
gcloud builds submit . --config=cloudbuild.yaml --gcs-log-dir=gs://my_bucket_name_on_gcp
Remember to create a bucket on GCP :D
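If you don't have a bucket yet, a one-liner with gsutil creates it (the bucket name and location here are placeholders):

gsutil mb -l europe-west1 gs://my_bucket_name_on_gcp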
I am trying to build a CI/CD pipeline in GitLab. The goal is to build a docker image from a Dockerfile, run tests on the running container, push the image to DockerHub, then deploy it to a Kubernetes cluster. This is what I currently have for my gitlab-ci.yml.
variables:
  DOCKER_TLS_CERTDIR: "/certs"
  DOCKER_DRIVER: overlay2
  CONTAINER_IMAGE: ${DOCKER_USER}/my_app

services:
  - docker:19.03.12-dind

build:
  image: docker:19.03.12
  stage: build
  script:
    - echo ${DOCKER_PASSWORD} | docker login --username ${DOCKER_USER} --password-stdin
    - docker pull ${CONTAINER_IMAGE}:latest || true
    - docker build --cache-from ${CONTAINER_IMAGE}:latest --tag ${CONTAINER_IMAGE}:$CI_COMMIT_SHA --tag ${CONTAINER_IMAGE}:latest .
    - docker push ${CONTAINER_IMAGE}:$CI_COMMIT_SHA
    - docker push ${CONTAINER_IMAGE}:latest

deploy:
  image:
    name: bitnami/kubectl:1.16.15
    entrypoint: [""]
  stage: deploy
  variables:
    GIT_STRATEGY: none
  script:
    - kubectl get pods -A # <- Won't work until I pass a Kubeconfig file with cluster details
I have a few main questions:
1. How can I deploy this image? I know I need to pass a KUBECONFIG file to bitnami/kubectl, but I am not sure how to do that with GitLab CI/CD.
2. Can I pass the built image to a test stage before pushing it to DockerHub?
---
stages:
  - test app
  - build
  - test
  - deploy

test app:
  stage: test app
  image: node:latest
  script:
    - git clone (path to code)
    - npm install
    - npm run lint
    - npm audit fix
  rules:
    - if: '$CI_COMMIT_REF_NAME != "master"'

build image:
  stage: build
  script:
    - docker build -t your_image:$CI_COMMIT_REF_NAME .
    - docker push your_image:$CI_COMMIT_REF_NAME
  rules:
    - if: '$CI_COMMIT_REF_NAME != "master"'

test image:
  stage: test
  image: anchore:latest # one you have built yourself, or use another testing suite
  script:
    - anchore-cli image add user/image:v1
    - anchore-cli image wait user/image:v1
    - anchore-cli image content user/image:v1
    - anchore-cli image vuln user/image:v1 all
    - anchore-cli evaluate check user/image:v1 > result.txt
    - if [ $(grep -ci "fail" result.txt) -ge 1 ]; then exit 1; fi
  rules:
    - if: '$CI_COMMIT_REF_NAME != "master"'

deploy image:
  image:
    name: kubectl:latest # build your own image with kubectl installed
    entrypoint: [""]
  stage: deploy
  tags:
    - privileged
  # Optional: Manual gate
  when: manual
  dependencies:
    - build image
  script:
    - kubectl config set-cluster k8s --server="$CLUSTER_ADDRESS"
    - kubectl config set clusters.k8s.certificate-authority-data $CA_AUTH_DATA
    - kubectl config set-credentials gitlab-service-account --token=$K8S_TOKEN
    - kubectl config set-context default --cluster=k8s --user=gitlab-service-account --namespace=my-service
    - kubectl config use-context default
    - kubectl set image $K8S_DEPLOYMENT_NAME $CI_PROJECT_NAME=$IMAGE_TAG
    - kubectl rollout restart $K8S_DEPLOYMENT_NAME
1. Have variables passed in for the cluster address, cert data, and token, so you can target other clusters: pre-prod, prod, QA...
2. You can't test an image that isn't in a registry, as the testing suite needs to pull the image from somewhere. You should have a cleanup script running to remove old images from your repo anyway, so the initial push should go to a (test location), like: docker push untrusted/image:v1
You should also have before_script and after_script hooks: before_script calls docker login, and after_script calls docker logout, as sketched below.
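A sketch of those hooks, assuming the same DOCKER_USER/DOCKER_PASSWORD variables as the question:

before_script:
  - echo "$DOCKER_PASSWORD" | docker login --username "$DOCKER_USER" --password-stdin
after_script:
  - docker logout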
I do not have an answer for deploying to Kubernetes, but I do recommend publishing a test/construction image to DockerHub when working on a merge request/development branch. Then only deploy the latest image when you merge the branch to master.
---
stages:
  - build
  - test
  - deploy

build image:
  stage: build
  script:
    - docker build -t your_image:test .
    - docker push your_image:test
  rules:
    - if: '$CI_COMMIT_REF_NAME != "master"'

test image:
  stage: test
  image: your_image:test
  script:
    - commands to test image
  rules:
    - if: '$CI_COMMIT_REF_NAME != "master"'

deploy image:
  stage: deploy
  script:
    - docker build -t your_image:latest .
    - docker push your_image:latest
  rules:
    - if: '$CI_COMMIT_REF_NAME == "master"'
---
stages:
  - build
  - test
  - deploy

build image:
  stage: build
  script:
    - docker build -t your_image:$CI_COMMIT_REF_NAME .
    - docker push your_image:$CI_COMMIT_REF_NAME
  rules:
    - if: '$CI_COMMIT_REF_NAME != "master"'

test image:
  stage: test
  image: your_image:$CI_COMMIT_REF_NAME
  script:
    - commands to test image
  rules:
    - if: '$CI_COMMIT_REF_NAME != "master"'

deploy image:
  stage: deploy
  script:
    - docker build -t your_image:latest .
    - docker push your_image:latest
    - export BRANCH=${CI_COMMIT_TITLE#*\'}; export BRANCH=${BRANCH%\' into*}
    - docker rmi your_image:$BRANCH
  rules:
    - if: '$CI_COMMIT_REF_NAME == "master"'
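For clarity, the BRANCH extraction in the deploy job above relies on the default merge commit title; traced by hand with a hypothetical title it behaves like this:

# hypothetical merge commit title: Merge branch 'feature-x' into 'master'
CI_COMMIT_TITLE="Merge branch 'feature-x' into 'master'"
BRANCH=${CI_COMMIT_TITLE#*\'}   # drop everything through the first quote -> feature-x' into 'master'
BRANCH=${BRANCH%\' into*}       # drop the trailing "' into ..." -> feature-x
echo "$BRANCH"                  # prints feature-x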
I don't know how to update my backend workload on my Kubernetes cluster. My GitLab pipeline runs without errors, but my active revision is still from my first push, so how can I bump the revision to trigger the rolling-update action? Can I integrate an automatic rollout into GitLab CI?
.gitlab-ci.yml
image: docker:latest
services:
  - docker:dind

variables:
  DOCKER_DRIVER: overlay
  SPRING_PROFILES_ACTIVE: gitlab-ci

stages:
  - build
  - package
  - deploy

maven-build:
  image: maven:3-jdk-8
  stage: build
  script: "mvn package -B"
  artifacts:
    paths:
      - target/*.jar

docker-build:
  stage: package
  script:
    - docker build -t registry.gitlab.com/projectX/ft-backend .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker push registry.gitlab.com/projectX/ft-backend

k8s-deploy:
  image: google/cloud-sdk
  stage: deploy
  script:
    - echo "$GOOGLE_KEY" > key.json
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config set compute/zone europe-west3-a
    - gcloud config set project projectX
    - gcloud config unset container/use_client_certificate
    - gcloud container clusters get-credentials development --zone europe-west3-a --project projectX
    - kubectl delete secret registry.gitlab.com
    - kubectl create secret docker-registry registry.gitlab.com --docker-server=https://registry.gitlab.com --docker-username=MYNAME --docker-password=$REGISTRY_PASSWD --docker-email=MYMAIL
    - kubectl apply -f deployment.yml
deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ft-backend
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: ft-backend
    spec:
      containers:
        - name: ft-backend
          image: registry.gitlab.com/projectX/ft-backend
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
      imagePullSecrets:
        - name: registry.gitlab.com
(screenshot: Google Cloud workload view)
As discussed in comments, you have to update your Deployment .spec.template to trigger a rollout. An easy way for you to do it is to tag your image upon release.
In your .gitlab-ci.yml file you can use the CI_COMMIT_SHA variable:
# in your docker-build job, update build and push:
- docker build -t registry.gitlab.com/projectX/ft-backend:${CI_COMMIT_SHA} .
- docker push registry.gitlab.com/projectX/ft-backend:${CI_COMMIT_SHA}
# in your k8s-deploy job add this:
- kubectl set image deployment/ft-backend ft-backend=registry.gitlab.com/projectX/ft-backend:${CI_COMMIT_SHA}
That would both version your image on your GitLab project registry, and trigger a rollout.
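If you also want the pipeline to fail when the rollout doesn't converge, a small addition (an assumption, not part of the original answer) at the end of the k8s-deploy script:

- kubectl rollout status deployment/ft-backend --timeout=120s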
Like Clorichel mentioned in the comments, you'd need to modify your deployment to trigger a rollout. You could use something like Gitflow and Semantic Versioning (if you're not already) to tag your container image. For example, in .gitlab-ci.yml you could add the Git tag to your container image:
script:
  - docker build -t registry.gitlab.com/projectX/ft-backend:$CI_COMMIT_TAG .
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
  - docker push registry.gitlab.com/projectX/ft-backend:$CI_COMMIT_TAG
In the deployment.yml you would reference the new version:
spec:
  containers:
    - name: ft-backend
      image: registry.gitlab.com/projectX/ft-backend:YOUR_NEW_GIT_TAG
      imagePullPolicy: Always
      ports:
        - containerPort: 8080
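One way to avoid editing deployment.yml by hand on every release is to substitute the tag in CI just before applying; a sketch using sed (the pattern assumes the image line shown above):

- sed -i "s|image: registry.gitlab.com/projectX/ft-backend:.*|image: registry.gitlab.com/projectX/ft-backend:${CI_COMMIT_TAG}|" deployment.yml
- kubectl apply -f deployment.yml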