Jenkins Pipeline building micro-services with multiple repos - docker

I'm trying to put together a Jenkins pipeline that builds a docker application composed of multiple containers. Each service is in its own git repository.
i.e.
Service1 github.com/testproject/service1
Service2 github.com/testproject/service2
Service3 github.com/testproject/service3
I can create a Jenkinsfile that builds the individual services, but I'd like a way to build and test the application end-to-end if any single service changes (avoiding rebuilding the unchanged services).
I could maintain 3 separate Jenkinsfiles and 3 separate pipelines to achieve this, but it seems like a lot of duplication. Is there a way to have a single pipeline that will let me achieve this?
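One possible shape of such a pipeline, as a hedged sketch only (the repo URLs, registry name and compose-based end-to-end test below are assumptions): a single parameterized pipeline that rebuilds only the service that changed and then tests the composed application.

pipeline {
    agent any
    parameters {
        choice(name: 'CHANGED_SERVICE', choices: ['service1', 'service2', 'service3'],
               description: 'Service whose change triggered this run')
    }
    stages {
        stage('Rebuild changed service') {
            steps {
                dir(params.CHANGED_SERVICE) {
                    // Check out only the repo of the changed service.
                    git url: "https://github.com/testproject/${params.CHANGED_SERVICE}.git"
                    script {
                        // The other services keep their previously pushed images.
                        docker.build("registry/testproject/${params.CHANGED_SERVICE}:${env.BUILD_NUMBER}")
                    }
                }
            }
        }
        stage('End-to-end test') {
            steps {
                // Assumes a compose file that pulls the latest images of the unchanged services.
                sh 'docker-compose -f e2e/docker-compose.yml up --abort-on-container-exit'
            }
        }
    }
}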

Related

Deploy Kubernetes using Helm and Jenkins for Microservice Based Application

We are developing a web application which has multiple micro-services in the same repository.
For our existing monolithic application, we are deploying to Kubernetes using Helm and Jenkins. When it comes to micro-services, we are struggling to define our CI/CD pipeline strategy. Below are the unclear issues:
For the monolithic application I have one Dockerfile, one Jenkinsfile and one Helm chart. For the build stage I am building the image(s) using the following command:
docker.build("registry/myrepo/image:${env.BUILD_NUMBER}"
For the monolithic application, I have one chart with multiple values files such as values.dev.yaml and values.prod.yaml, which I have configured for multiple environments.
So our questions are:
1. How should we build and push multiple containers from multiple Dockerfiles in a Jenkinsfile for micro-services? At present every micro-service has its own Dockerfile in its own root.
2. Is it possible for Jenkins to distinguish which micro-services we would like to deploy? Sometimes we make changes only to a specific service and want to deploy just those changes. Should we set up independent pipelines, or is there a way to handle this in the same pipeline?
3. How should we organize our Helm chart to deploy micro-services to Kubernetes? Should we create a chart per service, or create multiple templates that refer to a single values.yaml?
Looks like you are almost there.
Have separate pipelines for the micro-services; these would build, verify and deploy docker images to a docker registry. Have a separate pipeline for verifying and deploying the whole stack using Helm.
I assume you would be using git events to identify the changes? When there is a change in a microservice, it would be committed to a single repository. This would trigger a git event, which you can use to trigger the pipeline of the respective microservice.
As the Helm chart represents your whole application stack, I would suggest keeping it as a single chart. If the complexity of the micro-services increases, split it into subcharts.
Multiple charts can be a future maturity level when teams associated with each microservice can deploy the upgrades independently without affecting availability of the whole stack.
Have a separate job in Jenkins for each microservice.
Have a separate job to deploy the whole application using the Helm chart.
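As a rough illustration of that split (the registry URL, credential ID and job name are hypothetical), each microservice repo could carry a Jenkinsfile along these lines, which builds and pushes its image and then hands over to the separate stack-deployment job:

pipeline {
    agent any
    stages {
        stage('Build & push image') {
            steps {
                script {
                    def image = docker.build("registry/myrepo/service-a:${env.BUILD_NUMBER}")
                    docker.withRegistry('https://registry.example.com', 'registry-creds') {
                        image.push()
                    }
                }
            }
        }
        stage('Trigger stack deployment') {
            steps {
                // Hand over to the separate Helm pipeline mentioned above.
                build job: 'deploy-whole-stack', wait: false
            }
        }
    }
}

The separate 'deploy-whole-stack' job would then essentially run helm upgrade --install against the single chart, picking values.dev.yaml or values.prod.yaml per environment.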

Jenkins CI/CD setup for a microservices system

I have a system with dozens of microservices, all built and released the same way - each is in a docker container and deployed in a Kubernetes cluster.
There are multiple clusters (Dev1, dev2, QA ... Prod)
We are using Jenkins to deploy each microservice. Each microservice has its own pipeline, and this pipeline is duplicated for each environment, like so:
DEV1 (view)
dev1_microserviceA (job / pipeline)
dev1_microserviceB
...
dev1_microserviceX
DEV2
dev2_microserviceA
dev2_microserviceB
...
dev2_microserviceX
...
PROD
prod_microserviceA
prod_microserviceB
...
prod_microserviceX
Each of those pipelines is almost identical; the differences are really just parameters like the environment, the name of the microservice, and the name of the git repo.
Some common code is in libraries that each pipeline uses. Is this the proper / typical setup, and the most refactored one? I'd like to avoid having to create a pipeline for each microservice and for each environment, but I am not sure what my further refactoring options are. I am new to Jenkins & devops.
I've looked into parametrized pipelines but I do not want to have to enter a parameter each time I need to build, and I also need to be able to chain builds, and see the results of all builds at a glance, in each environment.
I would use Declarative Pipelines where you can define your logic in a local Jenkinsfile in your repositories.
Using Jenkins, you can have a "master" Jenkinsfile and/or project that you can inherit by invoking the upstream project. This will allow you to effectively share your instructions and reduce duplication.
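For example, each repository's Jenkinsfile could stay very thin and simply invoke the shared upstream project with its own parameters; the job and parameter names here are only illustrative, not a known setup:

pipeline {
    agent none
    stages {
        stage('Delegate to shared pipeline') {
            steps {
                // All the real build/deploy logic lives in the shared "master" project.
                build job: 'master-microservice-build', parameters: [
                    string(name: 'SERVICE_NAME', value: 'microserviceA'),
                    string(name: 'GIT_URL', value: 'https://github.com/org/microserviceA.git'),
                    string(name: 'ENVIRONMENT', value: 'dev1')
                ]
            }
        }
    }
}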
What is typically never covered when it comes to CI/CD is the "management" of deployments. Since most CI/CD services are stateless, they have no notion of which applications are deployed.
GitLab has come a long way with this but Jenkins is far behind.
At the end of the day you will have to either create a separate project for each repository/purpose, due to how Jenkins works, OR (recommended) have a "master" project that lets you pass in things like project name, git repo URL, specific application variables and values, and so on.
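The "master" project side of that could then be a single parameterized pipeline; again a sketch only, with assumed parameter names matching the snippet above:

pipeline {
    agent any
    parameters {
        string(name: 'SERVICE_NAME', defaultValue: '', description: 'Microservice to build')
        string(name: 'GIT_URL', defaultValue: '', description: 'Git repository URL')
        string(name: 'ENVIRONMENT', defaultValue: 'dev1', description: 'Target environment')
    }
    stages {
        stage('Checkout') {
            steps {
                git url: params.GIT_URL
            }
        }
        stage('Build image') {
            steps {
                script {
                    docker.build("registry/${params.SERVICE_NAME}:${env.BUILD_NUMBER}")
                }
            }
        }
        stage('Deploy') {
            steps {
                // Environment-specific deployment, e.g. one values file per environment.
                sh "helm upgrade --install ${params.SERVICE_NAME} ./chart -f values.${params.ENVIRONMENT}.yaml"
            }
        }
    }
}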

Jenkins Docker image building for Different Tenant from same code repository

I am trying to implement a CI/CD pipeline for my Spring Boot micro-service deployment. I am planning to use Jenkins and Kubernetes to build the CI/CD pipeline, and I have one SVN code repository for version control.
Nature Of Application
The nature of my application is that one microservice needs to be deployed for multiple tenants. The code is the same, but the database configuration is different for each tenant, and I am managing the configuration using Spring Cloud Config Server.
My Requirement
My requirement is that when I commit code into my SVN code repository, Jenkins needs to pull my code, build the project (Maven), create Docker images for multiple tenants, and deploy them.
The key point is that a commit to one code repository needs to build multiple docker images from that same repo - one code repo, multiple docker image builds. The Dockerfile contains different configuration for each docker image, i.e. for each tenant. So my requirement is to build multiple docker images for different tenants, with different configuration added in the Dockerfile, from one code repo using Jenkins.
My Analysis
I am currently planning to do this by adding multiple Jenkins pipeline jobs connected to the same code repo. Within each Jenkins pipeline job, I can add a different configuration, because the image name needs to be different for each tenant and the images need to be pushed to Docker Hub.
My Confusion
Here is my confusion:
Can I add multiple pipeline jobs from the same code repository using Jenkins?
If I can add multiple pipeline jobs from the same code repo, how can I deploy the image for every tenant to Kubernetes? Do I need to add jobs for deployment, or is one single job enough to deploy?
You seem to be going about it a bit wrong.
Since your code is the same for all tenants and the only difference is configuration, it is better to create a single docker image and deploy it along with tenant-specific configuration when deploying to Kubernetes.
So a change in your repository will trigger one Jenkins build and produce one docker image. Then you can have either multiple Jenkins jobs or multiple steps in the pipeline which deploy the docker image with tenant-specific config to Kubernetes.
If you don't want to heed the above, here are the answers to your questions:
You can create multiple pipelines from the same repository in Jenkins (select New Item > Pipeline multiple times).
You can keep a list of tenants and just loop through them, OR run all deployments in parallel, in a single pipeline stage.
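A minimal sketch of that single-stage loop, assuming the single image from the build and one manifest (or values file) per tenant; the tenant names and paths are made up:

pipeline {
    agent any
    stages {
        stage('Deploy per tenant') {
            steps {
                script {
                    def tenants = ['tenant-a', 'tenant-b', 'tenant-c']
                    tenants.each { tenant ->
                        // Same image for every tenant; only the config differs.
                        sh "kubectl apply -f k8s/${tenant}/deployment.yaml"
                    }
                }
            }
        }
    }
}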

CI/CD pipeline deployment flow for test and prod environment

I am trying to implement a CI/CD pipeline for my microservice deployment, created in Spring Boot. I am using an SVN repository, Kubernetes and Jenkins to implement the pipeline. While exploring deployment with Kubernetes and Jenkins, I found tutorials and many videos about deploying to both test and prod environments by defining the stages in the Jenkinsfile, and also by adding a shell script in the Jenkins configuration.
Confusion
My doubt is this: when we deploy into the test environment, how can we deploy the same build into the prod environment after proper testing is finished? Do I need to add a separate shell script for prod, or do we deploy serially using one script for both test and prod?
It's completely up to you how you want to do this. In general, we create separate k8s clusters for prod and staging (etc.), and your Jenkins needs to deploy to a different cluster depending on your pipeline. If you want true CI/CD, then one pipeline is enough - it will deploy to both clusters (or environments).
Most of the time businesses don't want CI on production (for obvious reasons). They want manual testing on QA environments before it's deployed to prod.
As k8s is container based, deploying the same image to different envs is really easy. You just build your spring boot app once, and then deploy it to different envs as needed.
A simple pipeline:
Code pushed and build triggered.
Build with unit tests.
Generate the docker image and push to registry.
Run your kubectl / helm / etc. to deploy the newly built image on STAGING.
Check if the deployment was successful
If you want to deploy the same to prod, continue the pipeline with (you can pause here for QA as well https://jenkins.io/doc/pipeline/steps/pipeline-input-step/):
Run your kubectl / helm / etc. to deploy the newly built image on PRODUCTION.
Check if the deployment was successful
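Put together as a Jenkinsfile, the pipeline above could look roughly like this (the image name, cluster contexts and Maven build step are assumptions):

pipeline {
    agent any
    stages {
        stage('Build with unit tests') {
            steps { sh './mvnw clean verify' }
        }
        stage('Docker image') {
            steps {
                script {
                    // Build once; the same image is promoted through all environments.
                    docker.build("registry/myrepo/app:${env.BUILD_NUMBER}").push()
                }
            }
        }
        stage('Deploy to STAGING') {
            steps {
                sh "kubectl --context staging-cluster set image deployment/app app=registry/myrepo/app:${env.BUILD_NUMBER}"
            }
        }
        stage('QA sign-off') {
            steps {
                // Pause here until someone approves promotion (the input step linked above).
                input message: 'Promote this build to PRODUCTION?'
            }
        }
        stage('Deploy to PRODUCTION') {
            steps {
                sh "kubectl --context prod-cluster set image deployment/app app=registry/myrepo/app:${env.BUILD_NUMBER}"
            }
        }
    }
}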
If your QA needs more time, then you can also create a different Jenkins job and trigger it manually (even the QA engineers can trigger this).
If your QA and PM are techies, then they can also merge branches or close PRs, which can auto-trigger Jenkins and run prod deployments.
EDIT (response to comment):
You are making REST calls to the k8s API. Even kubectl apply -f foo.yaml will make this REST call. It doesn't matter from where you make this call - given that your kubectl is configured correctly and can communicate with the k8s server. You can have multiple clusters configured for kubectl and use kubectl --context <staging-cluster> apply -f foo.yaml. You can pick the context name from a Jenkins env variable or some other mechanism.
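For instance, the context could come from a build parameter; a tiny sketch with made-up context names:

pipeline {
    agent any
    parameters {
        choice(name: 'KUBE_CONTEXT', choices: ['staging-cluster', 'prod-cluster'],
               description: 'kubectl context to deploy to')
    }
    stages {
        stage('Deploy') {
            steps {
                sh "kubectl --context ${params.KUBE_CONTEXT} apply -f foo.yaml"
            }
        }
    }
}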
We're working on an open source project called Jenkins X, which is a proposed sub-project of the Jenkins foundation aimed at automating CI/CD on Kubernetes using Jenkins and GitOps for promotion.
When you merge a change to the master branch, Jenkins X creates a new semantically versioned distribution of your app (pom.xml, jar, docker image, helm chart). The pipeline then automates the generation of Pull Requests to promote your application through all of the Environments via GitOps.
Here's a demo of how to automate CI/CD with multiple environments on Kubernetes using GitOps for promotion between environments and Preview Environments on Pull Requests - using Spring Boot and nodejs apps (but we support many languages + frameworks).

Restrict Jenkins declarative pipeline agents at the server level?

I'm currently looking into replacing a number of existing Jenkins builds with declarative pipeline builds.
The Jenkinsfile allows me to specify agents in a number of ways (name, labels, docker image), but what I really need is a way to restrict this at the server level.
The goal is to force all pipeline-based builds to run inside Docker containers, and not directly on the node.
Is this possible? If so: how?
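For reference (this only illustrates the agent forms the question mentions, not the server-level restriction being asked about), the options look like this in a declarative Jenkinsfile; label and image names are placeholders:

pipeline {
    // Run directly on a node chosen by label (a node's name also works as a label):
    // agent { label 'linux-build-01' }

    // Run inside a Docker container - the form the question wants to enforce for all builds:
    agent {
        docker { image 'maven:3.9-eclipse-temurin-17' }
    }
    stages {
        stage('Build') {
            steps { sh 'mvn -B verify' }
        }
    }
}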
