Best practices when implementing a CI/CD pipeline using GitHub/Jenkins/Kubernetes

This question is more advice-related, so I hope it's not flagged for anything. I just really need help :(
I am trying to implement CI/CD using GitHub/Jenkins/Kubernetes.
At a high level, this is what should happen:
Build on Jenkins
Push to container registry
Deploy built image on Kubernetes development cluster
Once testing is finished on the development cluster, deploy it to a client
testing cluster and finally to the production cluster
So far, I have created a job on Jenkins which is triggered by a GitHub webhook.
This job is responsible for the following things:
Checkout from GitHub
Run unit tests / call REST API and send unit test results
Build artifacts using Maven / call a REST API and report whether the build
succeeded or failed
Build docker image
Push the docker image to the container registry (the docker image has
incremented versions which match the BUILD_NUMBER environment variable)
The above tasks are more or less completed and I don't need much assistance with them (unless anyone thinks the aforementioned steps are not best practice).
I do need help with the part where I deploy to the Kubernetes cluster.
For local testing, I have set up a local cluster using Vagrant boxes and it works. In order to deploy the built image on the development cluster, I am thinking about doing it like this:
Point Jenkins build server to Kubernetes development cluster
Deploy using deployment.yml and service.yml (available in my repo)
This part I need help with...
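For context, a stripped-down sketch of the kind of deployment.yml I mean (names, ports and the registry path are just placeholders; the image tag would be templated with the Jenkins BUILD_NUMBER):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          # placeholder tag, replaced with the BUILD_NUMBER at deploy time
          image: registry.example.com/my-app:BUILD_NUMBER
          ports:
            - containerPort: 8080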
Is this wrong practice? Is there a better/easier way to do it?
Also, is there a way to migrate between clusters? E.g. development cluster to client testing cluster, and client testing cluster to production cluster, etc.
When searching on the internet, the name Helm comes up a lot, but I am not sure if it is applicable to my use case. I would test it and see, but I am a bit hard pressed for time, which is why I can't.
Would appreciate any help y'all could provide.
Thanks a lot

There are countless ways of doing this. Take Helm out for now as you are just starting.
If you are already using GitHub and Docker, then I would recommend pushing your code/changes/config/Dockerfile to GitHub so that it auto-triggers a Docker build on Docker Hub (or on Jenkins in your case, if you don't want to use Docker Hub for builds). It can be a multi-stage Docker build in which you build the code, run the tests, throw away the dev environment, and finally produce a production Docker image. Once the image is produced, it triggers a webhook to your Kubernetes deployment job/manifests to deploy to the test environment, followed by a manual trigger to deploy to production.
The docker images can be tagged with the SHA of the commits in GitHub/Git so that you can deploy and roll back based on commits.
Reference: https://cloud.google.com/kubernetes-engine/docs/tutorials/gitops-cloud-build
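For illustration only (not part of the referenced tutorial), rolling back then amounts to re-deploying an older commit's tag. A minimal sketch using kubectl set image, where GOOD_SHA is a hypothetical variable holding the commit you want to roll back to:

rollback:
  stage: deploy
  when: manual
  script:
    # GOOD_SHA is a placeholder for the commit SHA of the last known-good image
    - kubectl set image deployment/my-app my-app=$CI_REGISTRY_IMAGE:$GOOD_SHA --namespace production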
Here is my GitLab implementation of the GitOps workflow:
# Author: IjazAhmad

image: docker:latest

stages:
  - build
  - test
  - deploy

services:
  - docker:dind

variables:
  CI_REGISTRY: dockerhub.example.com
  CI_REGISTRY_IMAGE: $CI_REGISTRY/$CI_PROJECT_PATH
  DOCKER_DRIVER: overlay2

before_script:
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY

docker-build:
  stage: build
  script:
    - docker pull $CI_REGISTRY_IMAGE:latest || true
    - docker build --cache-from $CI_REGISTRY_IMAGE:latest --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA --tag $CI_REGISTRY_IMAGE:latest .

docker-push:
  stage: build
  script:
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    - docker push $CI_REGISTRY_IMAGE:latest

unit-tests:
  stage: test
  script:
    - echo "running unit tests on the image"
    - echo "running security testing on the image"
    - echo "pushing the results to build/test pipeline dashboard"

sast:
  stage: test
  script:
    - echo "running security testing on the image"
    - echo "pushing the results to build/test pipeline dashboard"

dast:
  stage: test
  script:
    - echo "running security testing on the image"
    - echo "pushing the results to build/test pipeline dashboard"

testing:
  stage: deploy
  script:
    - sed -i "s|CI_IMAGE|$CI_REGISTRY_IMAGE|g" k8s-configs/deployment.yaml
    - sed -i "s|TAG|$CI_COMMIT_SHA|g" k8s-configs/deployment.yaml
    - kubectl apply --namespace webproduction-test -f k8s-configs/
  environment:
    name: testing
    url: https://testing.example.com
  only:
    - branches

staging:
  stage: deploy
  script:
    - sed -i "s|CI_IMAGE|$CI_REGISTRY_IMAGE|g" k8s-configs/deployment.yaml
    - sed -i "s|TAG|$CI_COMMIT_SHA|g" k8s-configs/deployment.yaml
    - kubectl apply --namespace webproduction-stage -f k8s-configs/
  environment:
    name: staging
    url: https://staging.example.com
  only:
    - master

production:
  stage: deploy
  script:
    - sed -i "s|CI_IMAGE|$CI_REGISTRY_IMAGE|g" k8s-configs/deployment.yaml
    - sed -i "s|TAG|$CI_COMMIT_SHA|g" k8s-configs/deployment.yaml
    - kubectl apply --namespace webproduction-prod -f k8s-configs/
  environment:
    name: production
    url: https://production.example.com
  when: manual
  only:
    - master
Links:
Trigger Jenkins builds by pushing to Github
Triggering a Jenkins build from a push to Github
Jenkins: Kick off a CI Build with GitHub Push Notifications

Look at Spinnaker for continuous delivery. After the image is built and pushed to the registry, have a webhook in Spinnaker trigger a deployment to the required Kubernetes cluster. Spinnaker works well with Kubernetes and you should definitely try it out.

I understand that you are trying to implement GitOps. My advice is to review this article, where you can start to figure out a bit more about the components you need:
https://www.weave.works/blog/managing-helm-releases-the-gitops-way
Basically, you need to implement your own Helm charts for your custom services and manage them using Flux. I recommend using a different repository per environment and letting Flux manage the deployment to each environment based on the state of the master branch of that repo.
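As a rough sketch only (the exact API version and layout depend on your Flux setup; all names and the image tag below are placeholders), a Flux v2 HelmRelease for a custom chart can look roughly like this:

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: my-service
  namespace: dev
spec:
  interval: 5m
  chart:
    spec:
      chart: ./charts/my-service        # chart kept in the environment repo
      sourceRef:
        kind: GitRepository
        name: my-environment-repo
  values:
    image:
      repository: registry.example.com/my-service
      tag: abc1234                      # e.g. the commit SHA built by CI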

Related

How to setup Gitlab CI E2E tests using Multiple dockers

I am a bit lost with automated testing using GitLab CI. I hope I can explain my problem so somebody can help me. I'll try to explain the situation first, after which I'll try to ask a question (which is harder than it sounds).
Situation
Architecture
React frontend with Jest unit tests and Cypress e2e tests
Django API server 1 including a Postgres database and tests
Django API server 2 with a MongoDB database (which communicates with the other API)
Gitlab
For the two APIs, there is a Dockerfile and a docker-compose file. These work fine and are set up correctly.
We are using GitLab for CI/CD, where we have the following stages in this order:
Build: where the Docker images for 1, 2 & 3 are built separately and pushed to the private registry
Test: where the unit tests and e2e tests (should) run
Release: where the docker images are released
Deploy: where the docker images are deployed
Goal
I want to set up GitLab CI such that it runs the Cypress tests. But for this, all the built Docker images are needed. Currently, I am not able to use all the containers together when performing the end-to-end tests.
Problem
I don't really get how I would achieve this.
Can I use the images that are built in the build stage for my e2e tests, and can somebody give me an example of how this would be achieved? (By running the built containers as a service?)
Do I need one docker-compose file including all containers and databases?
Do I even need dind?
I hope somebody can give me some advice on how to achieve this. An example would be even better but I don't know if somebody would want to do that.
Thanks for taking the time to read!
(if needed) Example of the API server 1
build-api:
  image: docker:19
  stage: build
  services:
    - docker:19-dind
  script:
    - cd api
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker pull $IMAGE_TAG_API:latest || true
    - docker build -f ./Dockerfile --cache-from $IMAGE_TAG_API:latest --tag $IMAGE_TAG_API:$CI_COMMIT_SHA .
    - docker push $IMAGE_TAG_API:$CI_COMMIT_SHA

test-api:
  image: docker:19
  stage: test
  services:
    - postgres:12.2-alpine
    - docker:19-dind
  variables:
    DB_NAME: project_ci_test
    POSTGRES_HOST_AUTH_METHOD: trust
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker pull $IMAGE_TAG_API:$CI_COMMIT_SHA
    - docker run $IMAGE_TAG_API:$CI_COMMIT_SHA sh -c "python manage.py test"
  after_script:
    - echo "Pytest tests complete"
  coverage: "/TOTAL.+ ([0-9]{1,3}%)/"

release-api-staging:
  image: docker:19
  stage: release
  services:
    - docker:19-dind
  only:
    refs: [ master ]
    changes: [ ".gitlab-ci.yml", "api/**/*" ]
  environment:
    name: staging
  script:
    - docker pull $IMAGE_TAG_API:$CI_COMMIT_SHA
    - docker tag $IMAGE_TAG_API:$CI_COMMIT_SHA $IMAGE_TAG_API:latest
    - docker push $IMAGE_TAG_API:latest
The answer is a bit late, but I'll still try to explain the approach briefly for other developers with the same issues. I also created an example project in GitLab containing 3 microservices, where Server A runs end-to-end tests and is dependent on Server B and Server C.
When e2e testing full-stack applications, you have to either:
mock all the responses of the microservices;
test against a deployed environment;
or spin up the environment temporarily in the pipeline.
As you noted, you want to spin up the environment temporarily in the pipeline. The following steps should be taken:
Deploy all backends as docker images in GitLab's private registry;
Mimic your docker-compose.yml services in 1 job in the pipeline;
connect the dots together.
Deploy backends as docker images in GitLab private registry
First you have to publish your docker images in the private registry of GitLab. You do this because you can then reuse those images in another job. For this approach you need docker:dind. A simple example job to publish to a private registry on GitLab looks like:
before_script:
  - echo -n $CI_JOB_TOKEN | docker login -u gitlab-ci-token --password-stdin $CI_REGISTRY

publish:image:docker:
  stage: publish
  image: docker
  services:
    - name: docker:dind
      alias: docker
  variables:
    CI_DOCKER_NAME: ${CI_REGISTRY_IMAGE}/my-docker-image
  script:
    - docker pull $CI_REGISTRY_IMAGE || true
    - docker build --pull --cache-from $CI_REGISTRY_IMAGE --tag $CI_DOCKER_NAME --file Dockerfile .
    - docker push $CI_DOCKER_NAME
  only:
    - master
To see a real-world example, I have an example project that is publicly available.
Mimic your docker-compose.yml services in 1 job in the pipeline
Once you have dockerized all backends and published the images to a private registry, you can start to mimic your docker-compose.yml with a GitLab job. A basic example:
test:e2e:
  image: ubuntu:20.04
  stage: test
  services:
    - name: postgres:12-alpine
      alias: postgress
    - name: mongo
      alias: mongo
    # my backend image
    - name: registry.gitlab.com/[MY_GROUP]/my-docker-image
      alias: server
  script:
    - curl http://server:3000    # expecting the server to expose port 3000, this should work
    - curl http://mongo:27017    # should work
    - curl http://postgress:5432 # should work!
Run the tests
Now that everything is running in a single job in GitLab, you can simply start your front-end in detached mode and run Cypress to test it. Example:
script:
  - npm run start &                # start in detached mode
  - wait-on http://localhost:8080  # see: https://www.npmjs.com/package/wait-on
  - cypress run                    # make sure cypress is available as well
Conclusion
Your docker-compose.yml is not meant to run in a pipeline. Mimic it instead using GitLab services. Dockerize all backends and store them in GitLab's private registry. Spin up all services in your pipeline and run your tests.
This article might shed some light.
https://jessie.codes/article/running-cypress-gitlab-ci/
Essentially, you create two docker-compose files: one for your Cypress tests and one for the items to be tested. This gets around the issues with images being able to access Node and Docker.
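As a very rough sketch of that idea (file names, images, ports and versions below are assumptions, not taken from the article): one compose file for the stack under test and one for the Cypress runner, combined at run time so they share a network.

# docker-compose.sut.yml - the stack under test (hypothetical images)
version: "3.8"
services:
  frontend:
    image: registry.gitlab.com/my-group/frontend:latest
    ports:
      - "8080:80"
  api:
    image: registry.gitlab.com/my-group/api:latest

# docker-compose.cypress.yml - the test runner, pointed at the frontend
version: "3.8"
services:
  cypress:
    image: cypress/included:12.17.4
    environment:
      - CYPRESS_baseUrl=http://frontend
    depends_on:
      - frontend

You would then run them together, e.g. docker-compose -f docker-compose.sut.yml -f docker-compose.cypress.yml up --exit-code-from cypress, so the pipeline fails when the Cypress container fails.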

Automate local deployment of docker containers with gitlab runner and gitlab-ci without privileged user

We have a prototype-oriented develop environment, in which many small services are being developed and deployed to our on-premise hardware. We're using GitLab to manage our code and GitLab CI / CD for continuous integration. As a next step, we also want to automate the deployment process. Unfortunately, all documentation we find uses a cloud service or kubernetes cluster as target environment. However, we want to configure our GitLab runner in a way to deploy docker containers locally. At the same time, we want to avoid using a privileged user for the runner (as our servers are so far fully maintained via Ansible / services like Portainer).
Typically, our .gitlab-ci.yml looks something like this:
stages:
  - build
  - test
  - deploy

dockerimage:
  stage: build
  # builds a docker image from the Dockerfile in the repository, and pushes it to an image registry

sometest:
  stage: test
  # uses the docker image from build stage to test the service

production:
  stage: deploy
  # should create a container from the above image on system of runner without privileged user
TL;DR: How can we configure our local GitLab Runner to locally deploy docker containers from images defined in GitLab CI/CD without the use of privileges?
The build stage is usually the one where people use Docker-in-Docker (dind). To avoid having to use a privileged user, you can use the kaniko executor image in GitLab.
Specifically, you would use the kaniko debug image like this:
dockerimage:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
  rules:
    - if: $CI_COMMIT_TAG
You can find examples of how to use it in GitLab's documentation.
If you want to use that image in the deploy stage, you simply need to reference the created image.
You could do something like this:
production:
  stage: deploy
  image: $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
With this method, you do not need a privileged user. But I assume this is not what you are looking to do in your deployment stage. Usually, you would just use the image you created in the container registry to deploy the container locally. The last method explained would only deploy the image in the GitLab runner.
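If the goal is to (re)start the container on the runner's host itself, one hedged option (an assumption about your setup, not something the kaniko docs prescribe) is a runner with the shell executor whose user is in the docker group; the deploy job then needs neither privileged mode nor dind, though note that docker-group membership is itself effectively root-equivalent. A sketch, with the tag and container name as placeholders:

production:
  stage: deploy
  tags:
    - local-shell-runner            # assumed tag of the on-premise shell-executor runner
  rules:
    - if: $CI_COMMIT_TAG
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker pull $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
    - docker rm -f my-service || true        # "my-service" is a placeholder container name
    - docker run -d --name my-service -p 8080:8080 $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG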

Keeping docker builds in Gitlab CI with docker-compose

I have a repository that includes three parts: frontend, admin and server. Each contains its own Dockerfile.
After building the images I wanted to add a test for admin. My tests pass but take a lot of time, because each stage pulls the base image and builds everything from scratch (around 8 minutes per stage). This is my .gitlab-ci.yml:
image: tmaier/docker-compose

services:
  - docker:dind

stages:
  - build
  - test

build:
  stage: build
  script:
    - docker login -u $CI_DEPLOY_USER -p $CI_DEPLOY_PASSWORD $CI_REGISTRY
    - docker-compose build
    - docker-compose push

test:admin:
  stage: test
  script:
    - docker-compose -f docker-compose.yml -f docker-compose.test.yml up admin
I am not quite sure if I need to push/pull images between stages or if I should do that with artifacts/cache/whatever. As I understand it, I only need to push/pull if I want to deploy my images to another server. But I also added a docker-compose push, which runs through, yet GitLab doesn't show me any images in my registry.
I have been researching this a lot, but most example code I found was only about a single docker container and didn't make use of docker-compose.
Any ideas? :)
GitLab currently has no way to share Docker images between stages as artifacts. They have had an outstanding feature request for this for 3 years.
You'll need to push the docker image to the docker registry and pull it in later stages that need it (or do everything related to the image in one stage).
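For example (a sketch based on your existing jobs, assuming the docker-compose push in the build stage actually lands in the registry), the test job can pull the images instead of rebuilding them:

test:admin:
  stage: test
  script:
    - docker login -u $CI_DEPLOY_USER -p $CI_DEPLOY_PASSWORD $CI_REGISTRY
    - docker-compose pull admin
    - docker-compose -f docker-compose.yml -f docker-compose.test.yml up --no-build admin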
Mark, could you show the files docker-compose.yml and docker-compose.test.yml?
Maybe you are pushing and pulling different images. By the way, try placing docker login in a before_script section so that it works for all jobs.
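In other words, something along these lines (reusing the same variables as the build job):

before_script:
  - docker login -u $CI_DEPLOY_USER -p $CI_DEPLOY_PASSWORD $CI_REGISTRY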

gitlab-runner - deploy docker image to a server

I can set up a gitlab-runner with a docker image as below:
image: laravel/laravel:v1

stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - npm install
    - composer install
    - cp .env.example .env
    - php artisan key:generate
    - php artisan storage:link

test:
  stage: test
  script: echo "Running tests"

deploy_staging:
  stage: deploy
  script:
    - echo "What shall I do?"
  environment:
    name: staging
    url: https://staging.example.com
  only:
    - master
It passes the build stage and test stage, and I believe a docker image/container is ready for deployment. From Google, I gather that the next step may use "docker push", for example to push to AWS ECS or somewhere in GitLab. What I actually wish to understand is whether I can push it directly to another remote server (e.g. by scp)?
A docker image is a combination of different layers which are built when you use the docker build command. It reuses existing layers and gives the combination of layers a name, which is your image name. They are usually stored somewhere in /var/lib/docker.
In general, all necessary data is stored on your system, yes. But it is not advised to copy these layers directly to a different machine, and I am not quite sure it would work properly. Docker advises you to use a "docker registry". Installing your own registry on your remote server is very simple, because the registry can also be run as a container (see the docs).
I'd advise you to stick to the solutions proposed by the Docker team and use the public Docker Hub registry, or your own registry if you have sensitive data.
You are using GitLab. GitLab provides its own registry. You can push your images to your own GitLab registry and pull them from your remote server. Your remote server only needs to authenticate against your registry and you're done. GitLab CI can directly build and push your images to your own registry on each push to the master branch, for example. You can find many examples in the docs.
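As a hedged sketch of that last option (variable names like SSH_PRIVATE_KEY, DEPLOY_USER and DEPLOY_HOST are assumed CI/CD variables, the container name is a placeholder, and the job image is assumed to have an SSH client available), the deploy stage could have the remote server pull and run the image from the GitLab registry:

deploy_staging:
  stage: deploy
  script:
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | ssh-add -
    # the remote host logs in to the GitLab registry, pulls the image and restarts the container
    - ssh -o StrictHostKeyChecking=no $DEPLOY_USER@$DEPLOY_HOST "docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY && docker pull $CI_REGISTRY_IMAGE:latest && (docker rm -f laravel-app || true) && docker run -d --name laravel-app -p 80:80 $CI_REGISTRY_IMAGE:latest"
  environment:
    name: staging
    url: https://staging.example.com
  only:
    - master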

Docker-in-Docker with Gitlab Shared runner for building and pushing docker images to registry

I've been trying to set up GitLab CI so that it can build a docker image, and came across the fact that DinD was initially enabled only for separate runners, with a blog post suggesting it would soon be enabled for shared runners.
Running DinD requires enabling privileged mode on runners, which is set as a flag while registering the runner, but I couldn't find an equivalent mechanism for shared runners.
The shared runners are now capable of building Docker images. Here is the job that you can use:
stages:
  - build
  - test
  - deploy

# ...
# other jobs here
# ...

docker:image:
  stage: deploy
  image: docker:1.11
  services:
    - docker:dind
  script:
    - docker version
    - docker build -t $CI_REGISTRY_IMAGE:latest .
    # push only for tags
    - "[[ -z $CI_BUILD_TAG ]] && exit 0"
    - docker tag $CI_REGISTRY_IMAGE:latest $CI_REGISTRY_IMAGE:$CI_BUILD_TAG
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    - docker push $CI_REGISTRY_IMAGE:$CI_BUILD_TAG
This job assumes that you are using the Container Registry provided by GitLab. It pushes the images only when the build commit is tagged with a version number.
Documentation for Predefined variables.
Note that you will need to cache, or generate as temporary artifacts, any dependencies for your service which are not committed in the repository. This is supposed to be done in other jobs; e.g. node_modules is generally not contained in the repository and must be cached from the build/test stage.
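For instance, a minimal cache configuration for node_modules (a sketch; paths depend on your project layout):

cache:
  key: "$CI_COMMIT_REF_SLUG"
  paths:
    - node_modules/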
