Clarification on where testing falls in the pipeline - docker

I was reading something that made me question if I'm implementing testing in my pipeline correctly.
I currently have it set up like the following:
Local feature branch for a microservice pushed to remote
PR to merge the feature branch into the testing branch for the microservice
Accepted and merged
Triggers CI/CD pipeline:
First, a Linux VM is created
The microservice is deployed to the VM
The tests are run for the microservice
If all testing passes, it builds the Docker image and pushes it to the container registry
The deployment stage then pulls the image and deploys to the Kubernetes cluster
My questions are the following:
The dev should be testing locally, but should testing automatically happen when the PR is submitted?
Should I be building the Docker image first, creating a running container from the image, and testing that (that is, after all, what is getting deployed)?
In other words, is the flow:
Build app -> test -> if passes, build image -> deploy
Build image -> test -> if passes, deploy
Build app -> run certain tests -> if passes, build image -> run more tests -> deploy
Thanks for clarifying.

There are a couple of things I'd rearrange here.
As @DanielMann notes in a comment, your service should have a substantial set of unit tests. These don't require running in a container and don't have dependencies on any external resources; you might be able to use mock implementations of service clients, or alternate implementations like SQLite as an in-process database. Once your CI system sees a commit, before it builds an image, it should run these unit tests.
You should then run as much of the test sequence as you can against the pull request. (If the integration tests fail after you've merged to the main branch, your build is broken, and that can be problematic if you need an emergency fix.) In particular, you can build a Docker image from the PR branch. Depending on how your system is set up, you may be able to test that directly: either with an alternate plain-Docker environment, in a dedicated Kubernetes namespace, or using a local Kubernetes environment like minikube or kind.
Absolutely test the same Docker image you will be running in production.
In summary, I'd rearrange your workflow as follows (a rough pipeline sketch appears after the list):
Local feature branch for a microservice pushed to remote
PR to merge the feature branch into the testing branch for the microservice
Triggers CI pipeline:
Runs unit tests (without Docker)
Builds a Docker image (and optionally pushes it to a registry)
Deploys to some test setup and runs integration tests (with Docker)
Accepted and merged
Triggers CI/CD pipeline:
Runs unit tests (without Docker)
Builds a Docker image and pushes it
Deploys to the production Kubernetes environment
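Expressed as a pipeline configuration, that could look roughly like the sketch below. The question doesn't name a CI system, so GitLab CI syntax is used purely as an illustration; the image names, test commands, branch name and deploy step are placeholders, not part of the original setup.

    # .gitlab-ci.yml (illustrative sketch only)
    stages: [unit-test, build, integration-test, deploy]

    unit-test:
      stage: unit-test
      image: node:18                    # placeholder; whatever toolchain the service uses
      script:
        - npm ci && npm test            # fast unit tests, no Docker required

    build-image:
      stage: build
      image: docker:24
      services: [docker:24-dind]        # runner must allow Docker-in-Docker
      script:
        - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
        - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
        - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"

    integration-test:
      stage: integration-test
      script:
        # hypothetical script: run the just-built image (plain Docker, a test
        # namespace, or kind/minikube) and exercise it
        - ./run-integration-tests.sh "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"

    deploy:
      stage: deploy
      script:
        # assumes kubectl is already configured for the target cluster
        - kubectl set image deployment/my-service app="$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
      only:
        - testing                       # deploy only after the PR has been merged

The important property is that the image the integration tests ran against is byte-for-byte the same image that gets deployed.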

Related

Gitlab CI: How to configure cypress e2e tests with multiple server instances?

My goal is to run a bunch of e2e tests every night to check if the code changes made the day before break core features of our app.
Our platform is an Angular app which calls 3 separate Node.js backends (auth-backend, old- and new-backend). We also use MongoDB as the database.
Let's assume each of the 4 projects has a branch called develop, which is the only branch that should be tested.
My approach would be the following:
Run each backend plus the database in its own Docker container.
For that, I need to either get the latest build of each project from GitLab using SSH,
or clone the repo into the Docker container and run a build inside it.
After all projects are running on the right ports (which I'd specify somewhere), I start the npm script that runs the Cypress e2e tests.
All of that should be defined in some file. Is that even possible?
I do not have experience with GitLab CI, but I know that other CI systems provide the possibility to run e.g. bash scripts.
So I guess you can do the following:
Write a local bash script that pulls all the repos (GitLab can provide secret keys, so you can use these to authenticate against your GitLab repos).
Once all of these repos are pulled, run the build commands for your different repos.
Since some of your repos work with and depend on each other, you may have to add a build command for exactly this use case, so that you always have a production-like state, or whatever you need.
After you have pulled and built your repos, start the servers for your backends.
Your Angular app presumably uses some kind of environment variables to define which servers to send requests to, so you also have to set those in the build command/script for your app.
Then you should be able to run your tests; a rough GitLab CI sketch of such a job follows.
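Translated into a .gitlab-ci.yml, a scheduled job along those lines might look roughly like this. The repo URL, ports and npm scripts are hypothetical, the MongoDB service alias is an assumption, and a GitLab pipeline schedule (e.g. nightly) would trigger it.

    # sketch of a scheduled e2e job; assumes CI_JOB_TOKEN (or a deploy key) can read the other repos
    e2e:
      image: cypress/base:14
      services:
        - name: mongo:4
          alias: mongodb               # backends would point at mongodb:27017
      script:
        # fetch and start each backend on its expected port
        - git clone "https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.example.com/group/auth-backend.git"
        - (cd auth-backend && npm ci && npm start &)
        # ... same for old-backend and new-backend ...
        - npm ci                       # install the Angular app and Cypress
        - npm run e2e                  # hypothetical script: serve the app and run the Cypress suite
      only:
        - schedules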
Personally, I think Docker is overkill for this use case. Alternatively, you could define and run a pipeline that always creates a new develop state of your backends and pushes the Docker images to your own server. Then you could create your test pipeline, which first starts the Docker containers on that server (so you do not have an "in-pipeline" server). That brings up all your backends, so the test pipeline can then run your e2e tests against those backend servers.
I would also advise not running this pipeline every night, but rather whenever the develop state of one of those linked repos changes.
If you need help setting this up, feel free to contact me.

CI/CD pipeline deployment flow for test and prod environment

I am trying to implement a CI/CD pipeline for my microservice deployment, built with Spring Boot. I am trying to use my SVN repository, Kubernetes and Jenkins to implement the pipeline. While exploring deployment using Kubernetes and Jenkins, I found tutorials and many videos for deploying to both test and prod environments by defining the stages in the Jenkinsfile, and also by adding shell scripts in the Jenkins configuration.
Confusion
My doubt is this: when we deploy into the test environment, how can we deploy the same build into the prod environment after the proper testing is finished? Do I need to add a separate shell script for prod? Or are we deploying serially, using one script for both test and prod?
It's completely up to you how you want to do this. In general, we create separate k8s clusters for prod and staging (etc.), and Jenkins needs to deploy to a different cluster depending on the stage of your pipeline. If you want true CI/CD, then one pipeline is enough, and it deploys to both clusters (or environments).
Most of the time businesses don't want continuous deployment straight to production (for obvious reasons). They want manual testing in QA environments before anything reaches prod.
As k8s is container based, deploying the same image to different envs is really easy. You just build your Spring Boot app once, and then deploy the resulting image to different envs as needed.
A simple pipeline:
Code pushed and build triggered.
Build with unit tests.
Generate the docker image and push to registry.
Run your kubectl / helm / etc. to deploy the newly built image on STAGING.
Check if the deployment was successful
If you want to deploy the same image to prod, continue the pipeline with the steps below (you can pause here for QA as well: https://jenkins.io/doc/pipeline/steps/pipeline-input-step/):
Run your kubectl / helm / etc. to deploy the newly built image on PRODUCTION.
Check if the deployment was successful
If your QA needs more time, you can also create a different Jenkins job and trigger it manually (even the QA engineers can trigger it).
If your QA and PM are techies, then they can also merge branches or close PRs, which can auto-trigger Jenkins and run prod deployments.
EDIT (response to comment):
You are making REST calls to the k8s API. Even kubectl apply -f foo.yaml makes such a REST call. It doesn't matter where you make the call from, provided your kubectl is configured correctly and can communicate with the k8s server. You can have multiple clusters configured for kubectl and use kubectl --context <staging-cluster> apply -f foo.yaml. You can pick the context name from a Jenkins env variable or some other mechanism.
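To illustrate, the manifest itself can be identical for staging and prod; the only thing the pipeline varies between the two kubectl invocations is the context (and, on a release, the image tag). A minimal sketch with placeholder names and registry:

    # deploy.yaml - applied unchanged to each cluster, e.g.:
    #   kubectl --context staging-cluster apply -f deploy.yaml
    #   kubectl --context prod-cluster    apply -f deploy.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-spring-boot-app
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: my-spring-boot-app
      template:
        metadata:
          labels:
            app: my-spring-boot-app
        spec:
          containers:
            - name: app
              image: registry.example.com/my-spring-boot-app:1.0.42   # the one image built by CI
              ports:
                - containerPort: 8080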
We're working on an open source project called Jenkins X, which is a proposed sub-project of the Jenkins foundation, aimed at automating CI/CD on Kubernetes using Jenkins and GitOps for promotion.
When you merge a change to the master branch, Jenkins X creates a new semantically versioned distribution of your app (pom.xml, jar, docker image, helm chart). The pipeline then automates the generation of Pull Requests to promote your application through all of the Environments via GitOps.
Here's a demo of how to automate CI/CD with multiple environments on Kubernetes using GitOps for promotion between environments and Preview Environments on Pull Requests - using Spring Boot and nodejs apps (but we support many languages + frameworks).

Pipeline GitHub -> Travis CI -> Docker

I have a GitHub repository that is linked to an automated build on Docker Hub. Consequently, on each commit to the master branch, Docker Hub triggers a build of the Docker image.
Also, each commit is tested by Travis CI automatically.
My question is: is there any way to trigger the Docker build only if Travis finishes successfully? Do I need some sort of webhook or something like that for my goal?
You could trigger the Travis CI test after the repository is pushed. Then, in the deploy step, you could trigger a build on Docker Hub, or even do the build inside Travis and just push the image to the repository you are using.
Travis has a nice overview of how to make this flow happen here.
The gist is that you're going to need to have sudo: required, so you're going to be running in a VM instead of inside Docker, as is the standard way in Travis. You also need to add docker as a service, much like you'd add redis or postgres for an integration test. The Pushing Docker Image to a Registry section has a lot of info on setting things up for the actual deployment. I'd use an actual deploy step with the script provider, rather than after_success, but that's up to you.
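A minimal .travis.yml along those lines might look like the sketch below. The test and push commands are hypothetical, and DOCKER_USERNAME / DOCKER_PASSWORD are assumed to be defined as encrypted environment variables in the Travis settings.

    sudo: required
    services:
      - docker
    script:
      - ./run-tests.sh                 # whatever the test stage already runs
    deploy:
      provider: script
      script: >
        docker build -t myorg/myimage:latest . &&
        echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin &&
        docker push myorg/myimage:latest
      on:
        branch: master

With this layout the image is only built and pushed when the script phase (the tests) has passed on master.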

CI and CD implementation issues

I am looking to implement CI/CD in my current project; here is what I think will work.
The environment consists of:
- Jenkins
- Git
- Docker
- Gradle
- Linux servers
- Sonar
- Ansible
Each tool will be used as follows.
Git: developers will push their code to this VCS.
Jenkins: on detecting a check-in, Jenkins will trigger a build and deploy it to one of the servers.
Sonar: will be used for code coverage and will check the code before it is built through Jenkins.
Ansible: will be used to quickly prepare newly added nodes so that code can be deployed to them.
Docker: in case we need fresh test environments every time, we can use a Docker + Ansible combo; a sketch of such a playbook follows.
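As an illustration of that Docker + Ansible combination, a minimal playbook sketch might look like the following; the host group, package name (Debian/Ubuntu assumed) and image are placeholders, not details from this setup.

    # playbook sketch: prepare a node and run the service as a container
    - hosts: test_servers
      become: yes
      tasks:
        - name: Ensure Docker is installed
          package:
            name: docker.io
            state: present
        - name: Run the latest build of the service as a container
          docker_container:
            name: my-service
            image: registry.example.com/my-service:latest
            state: started
            restart_policy: always
            published_ports:
              - "8080:8080"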
The flow of work will be:
The developer runs unit tests on their machine and commits the code to the server.
Jenkins will pull the code from Git, run Sonar on it, and generate reports.
Jenkins will create a build and deploy it to the dev server.
A Jenkins job will run and perform the integration testing on the dev server.
Any other automated tests can be run.
Finally, builds are pushed to the next server using Jenkins.
I will use shell commands inside Jenkins to push compiled code from one server to another.
In this scenario, can someone answer the following?
Where does Sonar fit in, and how should it be used?
I see there are dedicated CD tools; can't I push compiled code to the servers using shell scripts written inside the Jenkins jobs to deploy things automatically? What extra benefit does a CD tool provide?
Is it wise to create a fresh test environment each time, or can we keep using the old one again and again?
Will this be complete CI/CD?
Can someone share their implementation?
You say you plan to use Git. I'll outline a scenario using Git on GitHub:
Developers push code changes here as pull requests
The SonarQube GitHub Plugin kicks off an initial analysis of only the code changed in the PR looking for the introduction of new issues (note that coverage and duplications are not included in this check)
Once the PR is merged, Jenkins (in one job or several, depending on your needs)
builds
fires integration tests & any other automated tests
runs SonarQube Scan. Note that this comes last to include integration test results.
pushes the build to the next server
Note that the ability to break the build when the project doesn't pass the SonarQube quality gate you've set up may be desirable in your situation. Unfortunately, it's not available in the current server version, 5.2. It is available in 5.1, and it should return soon.

Which is the best practice for using Jenkins?

Using a single server that contains only one Jenkins instance building for dev, test, etc.
Using a separate Jenkins instance on each of the dev and test servers to build and run tests.
Edit:
This is a step-by-step explanation of our deployment and release model:
Our server-side developers develop and commit/push their code to GitHub.
The CI server where Jenkins is located polls SCM, fetches the changes, and builds; within the CI server, it runs the unit tests.
After the build process, artifacts are deployed to the repository server (the Artifactory server).
Then the CI server deploys the latest successful build to the development server.
The client mobile developers can then develop against the latest successful snapshot build of the server side.
That is our standard deployment process.
By the way,
We also do a test deployment to the test server via the CI server, with a different job on Jenkins (the same CI server), but this is handled/triggered manually.
Preproduction and production transitions are also done manually (preproduction and production are different servers, of course).
Questions:
Integration tests should be run on the test server. How can I arrange that while building the system on the remote CI server instead of building it on the same machine (the test server)?
As a further step, what would be the best option for constructing a continuous delivery system?
Thanks
A good approach is to have a single CI system that builds the system continuously as development makes changes. Each build also runs all the unit tests and results in some kind of package that can be deployed. That can be further connected with automation that deploys the package and runs other tests, or it can be used by e.g. testers to further test the system.
Depending on your release model and branching strategy, as well as the type of system/product, this basic setup can be adjusted to fit your needs.
If you want more details please explain what you build and how you release/deploy.
