Preferred way to Build/Test/Deploy docker images in GitLab CI/CD - docker

I am trying to build a CI/CD pipeline in GitLab. The goal is to build a docker image from a Dockerfile, run tests on the running container, push the image to DockerHub, then deploy it to a Kubernetes cluster. This is what I currently have for my gitlab-ci.yml.
variables:
  DOCKER_TLS_CERTDIR: "/certs"
  DOCKER_DRIVER: overlay2
  CONTAINER_IMAGE: ${DOCKER_USER}/my_app

services:
  - docker:19.03.12-dind

build:
  image: docker:19.03.12
  stage: build
  script:
    - echo ${DOCKER_PASSWORD} | docker login --username ${DOCKER_USER} --password-stdin
    - docker pull ${CONTAINER_IMAGE}:latest || true
    - docker build --cache-from ${CONTAINER_IMAGE}:latest --tag ${CONTAINER_IMAGE}:$CI_COMMIT_SHA --tag ${CONTAINER_IMAGE}:latest .
    - docker push ${CONTAINER_IMAGE}:$CI_COMMIT_SHA
    - docker push ${CONTAINER_IMAGE}:latest

deploy:
  image:
    name: bitnami/kubectl:1.16.15
    entrypoint: [""]
  stage: deploy
  variables:
    GIT_STRATEGY: none
  script:
    - kubectl get pods -A  # <- Won't work until I pass a kubeconfig file with cluster details
I have a few main questions:
How can I deploy this image? I know I need to pass a KUBECONFIG file to bitnami/kubectl, but I am not sure how to do that with GitLab CI/CD.
Can I pass the built image to a test stage before pushing it to DockerHub?
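One common pattern for the first question is to store the kubeconfig as a File-type CI/CD variable and point kubectl at it through the KUBECONFIG environment variable. A minimal sketch, assuming a File variable named KUBE_CONFIG (hypothetical name) holds the kubeconfig contents:

deploy:
  image:
    name: bitnami/kubectl:1.16.15
    entrypoint: [""]
  stage: deploy
  variables:
    GIT_STRATEGY: none
    # GitLab writes a File-type variable to a temp file and expands $KUBE_CONFIG to that file's path,
    # which kubectl picks up via the KUBECONFIG environment variable
    KUBECONFIG: $KUBE_CONFIG
  script:
    - kubectl get pods -A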

---
stages:
  - test_app
  - build
  - test
  - deploy

test app:
  stage: test_app
  image: node:latest
  script:
    - git clone (path to code)
    - npm install
    - npm run lint
    - npm audit fix
  rules:
    - if: '$CI_COMMIT_REF_NAME != "master"'

build image:
  stage: build
  script:
    - docker build -t your_image:$CI_COMMIT_REF_NAME .
    - docker push your_image:$CI_COMMIT_REF_NAME
  rules:
    - if: '$CI_COMMIT_REF_NAME != "master"'

test image:
  stage: test
  image: anchore:latest  # one you have built yourself, or use another testing suite
  script:
    - anchore-cli image add user/image:v1
    - anchore-cli image wait user/image:v1
    - anchore-cli image content user/image:v1
    - anchore-cli image vuln user/image:v1 all
    - anchore-cli evaluate check user/image:v1 > result.txt
    - if [ $(grep -ci "fail" result.txt) -ge 1 ]; then exit 1; fi
  rules:
    - if: '$CI_COMMIT_REF_NAME != "master"'

deploy image:
  image:
    name: kubectl:latest  # build your own image that has kubectl installed
    entrypoint: [""]
  stage: deploy
  tags:
    - privileged
  # Optional: Manual gate
  when: manual
  dependencies:
    - build image
  script:
    - kubectl config set-cluster k8s --server="$CLUSTER_ADDRESS"
    - kubectl config set clusters.k8s.certificate-authority-data $CA_AUTH_DATA
    - kubectl config set-credentials gitlab-service-account --token=$K8S_TOKEN
    - kubectl config set-context default --cluster=k8s --user=gitlab-service-account --namespace=my-service
    - kubectl config use-context default
    - kubectl set image $K8S_DEPLOYMENT_NAME $CI_PROJECT_NAME=$IMAGE_TAG
    - kubectl rollout restart $K8S_DEPLOYMENT_NAME
1. Have variables passed in for the cluster address, cert data, and token, so you can target other clusters: pre-prod, prod, QA...
2. You can't test an image that isn't in a registry, as the testing suite needs to pull the image from somewhere. You should have a cleanup script running to clean up old images in your registry anyway, so the initial push should be to a test location,
like: docker push untrusted/image:v1
You should also have before and after scripts: before_script calls docker login and after_script calls docker logout, as in the sketch below.
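A minimal sketch of that, reusing the credential variables from the original question:

before_script:
  - echo "$DOCKER_PASSWORD" | docker login --username "$DOCKER_USER" --password-stdin
after_script:
  - docker logout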

I do not have an answer for deploying to Kubernetes, but I do recommend publishing a test/construction image to DockerHub while working on a merge request/development branch, and then only deploying the latest image when you merge the branch to master.
---
stages:
  - build
  - test
  - deploy

build image:
  stage: build
  script:
    - docker build -t your_image:test .
    - docker push your_image:test
  rules:
    - if: '$CI_COMMIT_REF_NAME != "master"'

test image:
  stage: test
  image: your_image:test
  script:
    - commands to test image
  rules:
    - if: '$CI_COMMIT_REF_NAME != "master"'

deploy image:
  stage: deploy
  script:
    - docker build -t your_image:latest .
    - docker push your_image:latest
  rules:
    - if: '$CI_COMMIT_REF_NAME == "master"'
---
stages:
  - build
  - test
  - deploy

build image:
  stage: build
  script:
    - docker build -t your_image:$CI_COMMIT_REF_NAME .
    - docker push your_image:$CI_COMMIT_REF_NAME
  rules:
    - if: '$CI_COMMIT_REF_NAME != "master"'

test image:
  stage: test
  image: your_image:$CI_COMMIT_REF_NAME
  script:
    - commands to test image
  rules:
    - if: '$CI_COMMIT_REF_NAME != "master"'

deploy image:
  stage: deploy
  script:
    - docker build -t your_image:latest .
    - docker push your_image:latest
    # extract the source branch name from the merge commit title, e.g. "Merge branch 'feature-x' into 'master'"
    - export BRANCH=${CI_COMMIT_TITLE#*\'}; export BRANCH=${BRANCH%\' into*}
    # remove the now-merged branch image (docker has no "delete" subcommand; rmi removes the local tag)
    - docker rmi your_image:$BRANCH
  rules:
    - if: '$CI_COMMIT_REF_NAME == "master"'

Related

Gitlab pipeline fails, even though deployment happened on GCP

I just created my first CI/CD pipeline on Gitlab, which creates a docker container for a Next.js app, and deploys it on Google Cloud Run.
My cloudbuild.yaml:
# File: cloudbuild.yaml
steps:
  # build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/inook-web', '.']
  # push the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/inook-web']
  # deploy to Cloud Run
  - name: "gcr.io/cloud-builders/gcloud"
    args: ['run', 'deploy', 'inook-web', '--image', 'gcr.io/$PROJECT_ID/inook-web', '--region', 'europe-west1', '--platform', 'managed', '--allow-unauthenticated']
My .gitlab-ci.yml:
# File: .gitlab-ci.yml
image: docker:latest

stages:  # List of stages for jobs, and their order of execution
  - deploy-test
  - deploy-prod

deploy-test:
  stage: deploy-test
  image: google/cloud-sdk
  services:
    - docker:dind
  script:
    - echo $GCP_SERVICE_KEY > gcloud-service-key.json  # Google Cloud service accounts
    - gcloud auth activate-service-account --key-file gcloud-service-key.json
    - gcloud config set project $GCP_PROJECT_ID
    - gcloud builds submit . --config=cloudbuild.yaml
I get the following error message in the CI/CD pipeline:
https://ibb.co/ZXLWrj1
However, the deployment actually succeeds on GCP: https://ibb.co/ZJjtXzG
Any idea what I can do to fix the pipeline error?
What worked for me was to add a custom bucket for the gcloud builds submit command to push logs to. Thanks #slauth for pointing me in the right direction.
Updated command:
gcloud builds submit . --config=cloudbuild.yaml --gcs-log-dir=gs://inook_test_logs
If you add a bucket at the end of the command, then it works.
gcloud builds submit . --config=cloudbuild.yaml --gcs-log-dir=gs://my_bucket_name_on_gcp
Remember to create a bucket on GCP :D
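A sketch of creating such a bucket with gsutil; the bucket name and location here are placeholders:

gsutil mb -l europe-west1 gs://my_bucket_name_on_gcp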

How to configure Gitlab Runner to connect to Artifactory?

I am trying to set up a GitLab Runner to connect to Artifactory and pull images. My yml file to set up the Runner looks like below:
gitlabUrl: https://gitlab.bayer.com/
runnerRegistrationToken: r*******-
rbac:
  create: false
  serviceAccountName: iz-sai-s
  serviceAccount.name: iz-sai-s
runners:
  privileged: true
resources:
  limits:
    memory: 32000Mi
    cpu: 4000m
  requests:
    memory: 32000Mi
    cpu: 2000m
What changes are needed to configure my runner properly to connect to the Artifactory URL and pull images from there?
This is an example where my runner runs as a docker container with an image that has the Artifactory CLI configured in it. In your case your runner should have the JFrog CLI configured. Next, it needs an API key to access Artifactory, which you will generate in Artifactory and store in GitLab as a CI/CD variable (the exact path is your repo > Settings > CI/CD > Variables).
First it authenticates, then it uploads:
publish_job:
  stage: publish_artifact
  image: xxxxxplan/jfrog-cli
  variables:
    ARTIFACTORY_BASE_URL: https://xxxxx.com/artifactory
    REPO_NAME: my-rep
    ARTIFACT_NAME: my-artifact
  script:
    - jfrog rt c --url="$ARTIFACTORY_BASE_URL"/ --apikey="$ARTIFACTORY_KEY"
    - jfrog rt u "target/demo-0.0.1-SNAPSHOT.jar" "$REPO_NAME"/"$ARTIFACT_NAME"_"$CI_PIPELINE_ID.jar" --recursive=false
Mark the answer as accepted if it fulfils your requirement.
Also, make sure to use indentation in your question; it is missing.
Edit 1: Adding the whole gitlab_ci.yml
stages:
  - build_unittest
  - static_code_review
  - publish_artifact

image: maven:3.6.1-jdk-8-alpine

cache:
  paths:
    - .m2/repository
    - target/

variables:
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository"

build_unittest_job:
  stage: build_unittest
  script: 'mvn clean install'
  tags:
    - my-docker
  artifacts:
    paths:
      - target/*.jar
    expire_in: 20 minutes
  when: manual

code_review_job:
  stage: static_code_review
  variables:
    SONARQUBE_BASE_URL: https://xxxxxx.com
  script:
    - mvn sonar:sonar -Dsonar.projectKey=xxxxxx -Dsonar.host.url=https://xxxxx -Dsonar.login=xxxxx
  tags:
    - my-docker
  cache:
    paths:
      - /root/.sonar/cache
      - target/
      - .m2/repository
  when: manual

publish_job:
  stage: publish_artifact
  image: plan/jfrog-cli
  variables:
    ARTIFACTORY_BASE_URL: https://xxxx/artifactory
    REPO_NAME: maven
    ARTIFACT_NAME: myart
  script:
    - jfrog rt c --url="$ARTIFACTORY_BASE_URL"/ --apikey="$ARTIFACTORY_KEY"
    - jfrog rt u "target/demo-SNAPSHOT.jar" "$REPO_NAME"/"$ARTIFACT_NAME"_"$CI_PIPELINE_ID.jar" --recursive=false
  tags:
    - my-docker
  when: manual

gitlab-ci.yml deployment on multiple hosts

I need to deploy my application on multiple servers.
I have hosted my source code on GitLab.
I have set up environment variables and a .gitlab-ci.yml file.
It works great for a single server: I can build docker images and push these images to a registry.
Then I deploy these images on a Kubernetes infrastructure.
All operations are described in .gitlab-ci.yml.
What I need to do is to "repeat" the .gitlab-ci.yml steps for each server.
I need a different set of environment variables for each server (I will need one docker image for each server, for each upgrade of my application).
Is there a way to do this with gitlab-ci?
Thanks
** EDIT **
Here is my .gitlab-ci.yml:
stages:
  - build
  - deploy

build:
  stage: build
  script:
    - docker image build -t my_ci_registry_url/myimagename .
    - docker login -u "${CI_REGISTRY_USER}" -p "${CI_REGISTRY_PASSWORD}" "${CI_REGISTRY}"
    - docker push my_ci_registry_url/myimagename

deploy:
  stage: deploy
  environment: production
  script:
    - kubectl delete --ignore-not-found=true secret mysecret
    - kubectl create secret docker-registry mysecret --docker-server=$CI_REGISTRY --docker-username=$CI_REGISTRY_USER --docker-password=$CI_REGISTRY_PASSWORD
    - kubectl apply -f myapp.yml
    - kubectl rollout restart deployment/myapp-deployment
In order to run the same job with different environment variables, you can use YAML anchors.
For example:
stages:
  - build
  - deploy

.deploy: &deploy
  stage: deploy
  environment: production
  script:
    - some use of $SPECIAL_ENV # from `variables` defined in each job
    - some use of $OTHER_SPECIAL_ENV # from `variables` defined in each job

build:
  stage: build
  script:
    - ...

deploy env 1:
  variables:
    SPECIAL_ENV: $SPECIAL_ENV_1 # from `CI/CD > Variables`
    OTHER_SPECIAL_ENV: $OTHER_SPECIAL_ENV_1 # from `CI/CD > Variables`
  <<: *deploy

deploy env 2:
  variables:
    SPECIAL_ENV: $SPECIAL_ENV_2 # from `CI/CD > Variables`
    OTHER_SPECIAL_ENV: $OTHER_SPECIAL_ENV_2 # from `CI/CD > Variables`
  <<: *deploy

deploy env 3:
  variables:
    SPECIAL_ENV: $SPECIAL_ENV_3 # from `CI/CD > Variables`
    OTHER_SPECIAL_ENV: $OTHER_SPECIAL_ENV_3 # from `CI/CD > Variables`
  <<: *deploy
That way, the three deploy jobs will run in parallel in the deploy stage.
You can save the variables in Settings > CI/CD > Variables if they contain sensitive data. If not, just write them in your .gitlab-ci.yml.
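On newer GitLab releases, parallel: matrix offers an alternative to anchors for this, generating one job per variable set. A minimal sketch, with hypothetical variable values:

deploy:
  stage: deploy
  parallel:
    matrix:
      # one deploy job is generated per value of SERVER_ENV
      - SERVER_ENV: [server1, server2, server3]
  script:
    - echo "deploying to $SERVER_ENV"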

How to use docker image from build step in deploy step in CircleCI 2.0?

I have been struggling for the last few days to migrate from CircleCI 1.0 to 2.0, and while the build process is done, deployment is still a big issue. The CircleCI documentation is not really of big help.
Here is a config.yml similar to what I have:
version: 2
jobs:
  build:
    docker:
      - image: circleci/node:8.9.1
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install required stuff
          command: [...]
      - run:
          name: Build
          command: docker build -t project .
  deploy:
    docker:
      - image: circleci/node:8.9.1
    steps:
      - checkout
      - run:
          name: Deploy
          command: |
            bash scripts/deploy/deploy.sh
            docker tag project [...]
            docker push [...]
workflows:
  version: 2
  build-deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
          filters:
            branches:
              only: develop
The issue is in the deploy job. I have to specify the docker: image key, but I want to reuse the environment from the build job, where all the required stuff is already installed. Surely I could just install it in the deploy job, but having multiple deploy jobs leads to code duplication, which is something I do not want.
You probably want to persist to the workspace and attach it in your deploy job.
You won't need to use `- checkout` after that.
https://circleci.com/docs/2.0/configuration-reference/#persist_to_workspace
jobs:
  build:
    docker:
      - image: circleci/node:8.9.1
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install required stuff
          command: [...]
      - run:
          name: Build
          command: docker build -t project .
      - persist_to_workspace:
          root: ./
          paths:
            - ./
  deploy:
    docker:
      - image: circleci/node:8.9.1
    steps:
      - attach_workspace:
          at: ./
      - run:
          name: Deploy
          command: |
            bash scripts/deploy/deploy.sh
            docker tag project [...]
            docker push [...]
workflows:
  version: 2
  build-deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
          filters:
            branches:
              only: develop
If you label the image built by the build stage, you can then reference it in the deploy stage: https://docs.docker.com/compose/compose-file/#labels
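A sketch of what such labeling could look like at build time; the label key com.example.project is arbitrary and only for illustration, and this assumes the deploy job can reach the same Docker daemon that built the image:

- run:
    name: Build with a label
    command: |
      docker build --label "com.example.project=project" -t project .
      # later, find images carrying that label
      docker images --filter "label=com.example.project=project"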

Kubernetes deployment.extensions not found

I get the following error message in my Gitlab CI pipeline and I can't do anything with it. Yesterday the pipeline still worked, but I didn't change anything in the yml and I don't know where I made the mistake. I also reset my code to the last working commit, but the error still occurs.
$ kubectl set image deployment/ft-backend ft-backend=registry.gitlab.com/projectX/ft-backend
Error from server (NotFound): deployments.extensions "ft-backend" not found
.gitlab-ci.yml
image: docker:latest

services:
  - docker:dind

variables:
  DOCKER_DRIVER: overlay
  SPRING_PROFILES_ACTIVE: gitlab-ci

stages:
  - build
  - package
  - deploy

maven-build:
  image: maven:3-jdk-8
  stage: build
  script: "mvn package -B"
  artifacts:
    paths:
      - target/*.jar

docker-build:
  stage: package
  script:
    - docker build -t registry.gitlab.com/projectX/ft-backend:${CI_COMMIT_SHA} .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker push registry.gitlab.com/projectX/ft-backend:${CI_COMMIT_SHA}

k8s-deploy:
  image: google/cloud-sdk
  stage: deploy
  script:
    - echo "$GOOGLE_KEY" > key.json
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config set compute/zone europe-west3-a
    - gcloud config set project projectX
    - gcloud config unset container/use_client_certificate
    - gcloud container clusters get-credentials development --zone europe-west3-a --project projectX
    - kubectl delete secret registry.gitlab.com
    - kubectl create secret docker-registry registry.gitlab.com --docker-server=https://registry.gitlab.com --docker-username=MY_NAME --docker-password=$REGISTRY_PASSWD --docker-email=MY_MAIL
    - kubectl set image deployment/ft-backend ft-backend=registry.gitlab.com/projectX/ft-backend:${CI_COMMIT_SHA}
    - kubectl apply -f deployment.yml
I suppose that when you are invoking the command:
kubectl set image deployment/ft-backend ft-backend=registry.gitlab.com/projectX/ft-backend
the deployment ft-backend does not exist in your cluster. Does the command kubectl get deployment ft-backend return the same result?
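Note that in the posted .gitlab-ci.yml the manifest is applied only after kubectl set image. If the deployment has never been created, applying the manifest first avoids the NotFound error. A minimal sketch of the reordered script, assuming deployment.yml defines the ft-backend deployment:

script:
  - kubectl apply -f deployment.yml  # create or update the deployment first
  - kubectl set image deployment/ft-backend ft-backend=registry.gitlab.com/projectX/ft-backend:${CI_COMMIT_SHA}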
On newer Kubernetes versions, the old way of creating deployments is not supported; use this command instead:
$ kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.4
