I am trying to implement a CI/CD pipeline for my microservice deployment, built with Spring Boot. I plan to use my SVN repository, Kubernetes, and Jenkins for the pipeline. While exploring deployment with Kubernetes and Jenkins, I found tutorials and many videos on deploying to both test and prod environments by defining them in the Jenkinsfile, and also by adding a shell script in the Jenkins configuration.
Confusion
My doubt is this: when we deploy to the test environment, how do we deploy the same build to the prod environment once testing is finished? Do I need to add a separate shell script for prod, or do we deploy serially using one script for both test and prod?
It's completely up to you how you want to do this. In general, we create separate k8s clusters for prod and staging (etc.), and Jenkins deploys to a different cluster depending on your pipeline. If you want true CI/CD, then one pipeline is enough - it will deploy to both clusters (or environments).
Most of the time businesses don't want continuous deployment all the way to production (for obvious reasons). They want manual testing in a QA environment before anything is deployed to prod.
As k8s is container based, deploying the same image to different envs is really easy: you build your Spring Boot app once, then deploy the resulting image to different envs as needed.
A simple pipeline (a minimal Jenkinsfile sketch follows the list):
Code pushed and build triggered.
Build with unit tests.
Generate the Docker image and push it to the registry.
Run your kubectl / helm / etc. to deploy the newly built image on STAGING.
Check whether the deployment was successful.
If you want to deploy the same image to prod, continue the pipeline with the steps below (you can pause here for QA as well: https://jenkins.io/doc/pipeline/steps/pipeline-input-step/):
Run your kubectl / helm / etc. to deploy the newly built image on PRODUCTION.
Check whether the deployment was successful.
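A minimal declarative Jenkinsfile sketch of that flow might look like the following. The image name, registry, deployment names, and kubectl contexts are placeholders, and the contexts are assumed to already exist in the kubeconfig on the Jenkins agent:

```groovy
pipeline {
    agent any
    environment {
        // Hypothetical registry/image; one image is built and reused for every environment.
        IMAGE = "registry.example.com/my-app:${env.BUILD_NUMBER}"
    }
    stages {
        stage('Build & unit tests') {
            steps {
                sh 'mvn clean verify'          // build the Spring Boot jar and run unit tests
            }
        }
        stage('Docker image') {
            steps {
                sh 'docker build -t "$IMAGE" .'
                sh 'docker push "$IMAGE"'
            }
        }
        stage('Deploy to STAGING') {
            steps {
                sh 'kubectl --context staging set image deployment/my-app my-app="$IMAGE"'
                sh 'kubectl --context staging rollout status deployment/my-app'
            }
        }
        stage('QA sign-off') {
            steps {
                // Pauses the pipeline until someone approves (the input step from the link above)
                input message: 'Staging looks good. Deploy to PRODUCTION?'
            }
        }
        stage('Deploy to PRODUCTION') {
            steps {
                sh 'kubectl --context production set image deployment/my-app my-app="$IMAGE"'
                sh 'kubectl --context production rollout status deployment/my-app'
            }
        }
    }
}
```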
If your QA needs more time, you can also create a different Jenkins job and trigger it manually (even the QA engineers can trigger this).
If your QA and PM are techies, they can also merge branches or close PRs, which can auto-trigger Jenkins and run prod deployments.
EDIT (response to comment):
You are making REST calls to the k8s API; even kubectl apply -f foo.yaml makes such a call. It doesn't matter where you make this call from, provided your kubectl is configured correctly and can communicate with the k8s server. You can have multiple clusters configured for kubectl and use kubectl --context <staging-cluster> apply -f foo.yaml. You can pick the context name from a Jenkins env variable or some other mechanism.
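As a rough sketch (the context names and the parameter are assumptions; both contexts must already be configured in the kubeconfig available to the Jenkins agent):

```groovy
pipeline {
    agent any
    parameters {
        // Hypothetical kubectl contexts for the two clusters
        choice(name: 'KUBE_CONTEXT', choices: ['staging-cluster', 'prod-cluster'],
               description: 'Which cluster to deploy to')
    }
    stages {
        stage('Deploy') {
            steps {
                sh "kubectl --context ${params.KUBE_CONTEXT} apply -f foo.yaml"
            }
        }
    }
}
```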
We're working on an open source project called Jenkins X, which is a proposed sub-project of the Jenkins foundation aimed at automating CI/CD on Kubernetes using Jenkins and GitOps for promotion.
When you merge a change to the master branch, Jenkins X creates a new semantically versioned distribution of your app (pom.xml, jar, Docker image, Helm chart). The pipeline then automates the generation of Pull Requests to promote your application through all of the Environments via GitOps.
Here's a demo of how to automate CI/CD with multiple environments on Kubernetes, using GitOps for promotion between environments and Preview Environments on Pull Requests - using Spring Boot and Node.js apps (though we support many languages + frameworks).
Related
I work for a small startup. We have 3 environments (Production, Development, and Staging), and GitHub is used as the VCS.
All environments run on EC2 with Docker.
Can someone suggest a simple CI/CD solution that can trigger builds automatically after certain branches are merged, with a manual trigger option as well?
For example, if anything is merged into dev-merge, build and deploy to Development; the same for Staging, pushing the image to ECR and rolling out the Docker update.
We tried Jenkins but felt it was over-complicated for our small-scale infra.
GitHub Actions was also evaluated (self-hosted runners), but it needs the workflow YAMLs to live in the repos.
We are looking for something that gives us the option to modify the pipeline or overall flow without code-hosted CI/CD config (like the way Jenkins lets you either use a Jenkinsfile or configure the job manually via the GUI).
Any opinions about TeamCity?
We have 150 applications, each with its own Jenkinsfile in a Git repository. We deploy each application one by one in Jenkins by triggering the desired branch, and when the pipeline reaches the k8s deployment stage, we select the environment (dev/test/prod) from a dropdown to deploy to the desired environment. Now I need help preparing a single Jenkins job to deploy all 150 applications to k8s at once. Please suggest a step-by-step process.
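One possible sketch, not from the thread and assuming each application already has its own parameterized pipeline job that accepts a TARGET_ENV parameter: a parent pipeline that reads the list of job names and triggers them one by one.

```groovy
// Hypothetical "deploy-all" parent pipeline. The apps.txt file (one downstream
// job name per line) and the TARGET_ENV parameter are assumptions about how
// the 150 individual jobs are set up.
pipeline {
    agent any
    parameters {
        choice(name: 'TARGET_ENV', choices: ['dev', 'test', 'prod'],
               description: 'Environment to deploy all applications to')
    }
    stages {
        stage('Deploy all applications') {
            steps {
                script {
                    for (jobName in readFile('apps.txt').readLines()) {
                        // Trigger each application's existing pipeline with the chosen environment
                        build job: jobName,
                              parameters: [string(name: 'TARGET_ENV', value: params.TARGET_ENV)],
                              wait: true
                    }
                }
            }
        }
    }
}
```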
I was reading something that made me question if I'm implementing testing in my pipeline correctly.
I currently have it set up like the following:
Local feature branch for a microservice pushed to remote
PR to merge the feature branch into the microservice's testing branch
Accepted and merged
Triggers CI/CD pipeline:
First, a Linux VM is created
The microservice is deployed to the VM
The tests are run for the microservice
If all testing passes, it builds the Docker image and pushes it to the container registry
The deployment stage then pulls the image and deploys to the Kubernetes cluster
My questions are the following:
The dev should be testing locally, but should testing automatically happen when the PR is submitted?
Should I be building the Docker image first, creating a running container of the image and testing that (that is after all what is getting deployed)?
In other words, is the flow:
Build app -> test -> if passes, build image -> deploy
Build image -> test -> if passes, deploy
Build app -> run certain tests -> if passes, build image -> run more tests -> deploy
Thanks for clarifying.
There are a couple of things I'd rearrange here.
As #DanielMann notes in a comment, your service should have a substantial set of unit tests. These don't require running in a container and don't have dependencies on any external resources; you might be able to use mock implementations of service clients, or alternate implementations like SQLite as an in-process database. Once your CI system sees a commit, before it builds an image, it should run these unit tests.
You should then run as much of the test sequence as you can against the pull request. (If the integration tests fail after you've merged to the main branch, your build is broken, and that can be problematic if you need an emergency fix.) In particular, you can build a Docker image from the PR branch. Depending on how your system is set up, you may be able to test that directly: either with an alternate plain-Docker environment, in a dedicated Kubernetes namespace, or using a local Kubernetes environment like minikube or kind.
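For example, assuming Jenkins (as elsewhere in this thread), a PR-only stage could load the freshly built image into a throwaway kind cluster and run the integration tests against it. The image name, manifest path, and test script below are placeholders:

```groovy
// A stage to drop into the PR pipeline; assumes Docker and kind are available on the agent.
stage('Integration tests on kind') {
    steps {
        sh """
            kind create cluster --name pr-test
            kind load docker-image my-service:${env.BUILD_NUMBER} --name pr-test
            kubectl --context kind-pr-test apply -f k8s/test/
            kubectl --context kind-pr-test rollout status deployment/my-service
            ./run-integration-tests.sh
        """
    }
    post {
        always {
            sh 'kind delete cluster --name pr-test'    // always clean up the throwaway cluster
        }
    }
}
```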
Absolutely test the same Docker image you will be running in production.
In summary, I'd rearrange your workflow as:
Local feature branch for a microservice pushed to remote
PR to merge the feature branch into the microservice's testing branch
Triggers CI pipeline:
Run unit tests (without Docker)
Builds a Docker image (and optionally pushes to a registry)
Deploy to some test setup and run integration tests (with Docker)
Accepted and merged
Triggers CI/CD pipeline:
Runs unit tests (without Docker)
Builds a Docker image and pushes it
Deploy to the production Kubernetes environment
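Assuming a Jenkins multibranch pipeline (which this thread centers on; your CI system may differ), both halves of that flow can live in one Jenkinsfile, with when conditions separating PR builds from merges to the main branch. The image name, registry, and test script are placeholders:

```groovy
pipeline {
    agent any
    environment {
        // Hypothetical registry/image; the same image is tested on PRs and deployed after merge.
        IMAGE = "registry.example.com/my-service:${env.BUILD_NUMBER}"
    }
    stages {
        stage('Unit tests') {
            steps { sh 'mvn test' }                 // no Docker needed here
        }
        stage('Build image') {
            steps { sh 'docker build -t "$IMAGE" .' }
        }
        stage('Integration tests') {
            when { changeRequest() }                // only on pull requests
            steps { sh './run-integration-tests.sh' }
        }
        stage('Push and deploy') {
            when { branch 'main' }                  // only after merge
            steps {
                sh 'docker push "$IMAGE"'
                sh 'kubectl --context production set image deployment/my-service my-service="$IMAGE"'
            }
        }
    }
}
```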
I have a system with dozens of microservices, all built and released the same way - each is in a Docker container and deployed in a Kubernetes cluster.
There are multiple clusters (Dev1, Dev2, QA ... Prod).
We are using Jenkins to deploy each microservice. Each microservice has its own pipeline, and this pipeline is duplicated for each environment, like so:
DEV1 (view)
dev1_microserviceA (job / pipeline)
dev1_microserviceB
...
dev1_microserviceX
DEV2
dev2_microserviceA
dev2_microserviceB
...
dev2_microserviceX
...
PROD
prod_microserviceA
prod_microserviceB
...
prod_microserviceX
Each of those pipelines is almost identical; the differences are really just a matter of parameters like the environment, the name of the microservice, and the name of the Git repo.
Some common code is in libraries that each pipeline uses. Is this the proper / typical setup and the most refactored setup? I'd like to avoid having to create a pipeline for each microservice and for each environment, but I'm not sure what my further refactoring options are. I am new to Jenkins and DevOps.
I've looked into parameterized pipelines, but I do not want to have to enter a parameter each time I need to build, and I also need to be able to chain builds and see the results of all builds at a glance, in each environment.
I would use Declarative Pipelines where you can define your logic in a local Jenkinsfile in your repositories.
Using Jenkins, you can have a "master" Jenkinsfile and/or project that you can inherit by invoking the upstream project. This will allow you to effectively share your instructions and reduce duplication.
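For example, with a Jenkins shared library the per-repository Jenkinsfile can shrink to a single call. The library name and the microservicePipeline step are hypothetical; the actual stages would be defined once in the library and reused by every service:

```groovy
// Per-repository Jenkinsfile: all shared logic lives in the library's
// vars/microservicePipeline.groovy global step (hypothetical name).
@Library('my-shared-pipeline') _

microservicePipeline(
    serviceName:  'orders-service',
    gitRepo:      'https://git.example.com/team/orders-service.git',
    environments: ['dev1', 'dev2', 'qa', 'prod']
)
```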
What is typically never covered when it comes to CI/CD is the "management" of deployments. Since most CI/CD services are stateless, they have no notion of which applications are deployed.
GitLab has come a long way with this but Jenkins is far behind.
At the end of the day you will have to either create a separate project for each repository/purpose (due to how Jenkins works) OR (recommended) have a "master" project that lets you pass in things like the project name, Git repo URL, application-specific variables and values, and so on.
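A sketch of such a "master" project, assuming it is a single parameterized pipeline job (the names, registry, and kubectl contexts are placeholders):

```groovy
pipeline {
    agent any
    parameters {
        string(name: 'SERVICE_NAME', defaultValue: 'microserviceA')
        string(name: 'GIT_REPO', defaultValue: 'https://git.example.com/team/microserviceA.git')
        choice(name: 'TARGET_ENV', choices: ['dev1', 'dev2', 'qa', 'prod'])
    }
    stages {
        stage('Checkout') {
            steps { git url: params.GIT_REPO, branch: 'master' }
        }
        stage('Build & deploy') {
            steps {
                sh "docker build -t registry.example.com/${params.SERVICE_NAME}:${env.BUILD_NUMBER} ."
                sh "docker push registry.example.com/${params.SERVICE_NAME}:${env.BUILD_NUMBER}"
                // Assumes one kubectl context per environment, named after the environment
                sh "kubectl --context ${params.TARGET_ENV} apply -f k8s/"
            }
        }
    }
}
```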
I am trying to implement a CI/CD pipeline for my Spring Boot microservice deployment. I plan to use Jenkins and Kubernetes to build the CI/CD pipeline, and I have one SVN code repository for version control.
Nature Of Application
My application is a single microservice that needs to be deployed for multiple tenants. The code is the same, but the database configuration differs per tenant, and I am managing the configuration using Spring Cloud Config Server.
My Requirement
My requirement is that when I commit code to my SVN repository, Jenkins needs to pull the code, build the project (Maven), create Docker images for multiple tenants, and deploy them.
The point is that a commit to one code repository needs to trigger the build of multiple Docker images from that same repo - one code repo, multiple Docker image builds. The Dockerfile contains different config for each Docker image, i.e. for each tenant. So my requirement is to build multiple Docker images for different tenants, with different configuration added in the Dockerfile, from one code repo using Jenkins.
My Analysis
I am currently planning to do this by adding multiple Jenkins pipeline jobs connected to the same code repo. Within each pipeline job, I can add a different configuration, because the image name for each tenant needs to be different and the images need to be pushed to Docker Hub.
My Confusion
Here is my confusion:
Can I add multiple pipeline jobs from the same code repository in Jenkins?
If I can add multiple pipeline jobs from the same code repo, how can I deploy the image for every tenant to Kubernetes? Do I need to add separate jobs for deployment, or is one single job enough?
You seem to be going about it a bit wrong.
Since your code is the same for all tenants and the only difference is config, you are better off creating a single Docker image and deploying it along with tenant-specific configuration when deploying to Kubernetes.
So a change in your repository triggers one Jenkins build and produces one Docker image. Then you can have either multiple Jenkins jobs or multiple steps in the pipeline that deploy the Docker image with tenant-specific config to Kubernetes.
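A sketch of that approach, assuming a Helm chart and a Spring profile per tenant (tenant names, chart path, and value keys are placeholders):

```groovy
pipeline {
    agent any
    environment {
        // One image for all tenants; hypothetical repository name.
        IMAGE = "myrepo/my-service:${env.BUILD_NUMBER}"
    }
    stages {
        stage('Build & push') {
            steps {
                sh 'mvn clean package'
                sh 'docker build -t "$IMAGE" .'
                sh 'docker push "$IMAGE"'
            }
        }
        stage('Deploy per tenant') {
            steps {
                script {
                    // Each tenant gets its own Helm release, namespace, and Spring profile,
                    // but they all run the same image.
                    for (tenant in ['tenant-a', 'tenant-b']) {
                        sh "helm upgrade --install my-service-${tenant} ./chart --namespace ${tenant} --set image=${env.IMAGE} --set springProfile=${tenant}"
                    }
                }
            }
        }
    }
}
```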
If you don't want to heed the above, here are the answers to your questions:
You can create multiple pipelines from the same repository in Jenkins (select New Item > Pipeline multiple times).
You can keep a list of tenants and loop through it, or run all deployments in parallel in a single pipeline stage.
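A sketch of the parallel variant - a stage to drop into an existing declarative pipeline; tenant names and manifest paths are placeholders:

```groovy
stage('Deploy to all tenants') {
    steps {
        script {
            def tenants = ['tenant-a', 'tenant-b', 'tenant-c']
            def branches = [:]
            for (t in tenants) {
                def tenant = t                              // capture the loop value for the closure below
                branches[tenant] = {
                    sh "kubectl apply -f k8s/${tenant}/"    // tenant-specific manifests
                }
            }
            parallel branches                               // run all tenant deployments at the same time
        }
    }
}
```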