Deploy Kubernetes using Helm and Jenkins for Microservice Based Application - jenkins

We are developing a web application that has multiple micro-services in the same repository.
For our existing monolithic application, we deploy to Kubernetes using Helm and Jenkins. When it comes to micro-services, we are struggling to define our CI/CD pipeline strategy. Below are the unclear issues:
For the monolithic application I have one Dockerfile, one Jenkinsfile and one Helm chart. In the build stage I build the image using the following command:
docker.build("registry/myrepo/image:${env.BUILD_NUMBER}")
For the monolithic application, I have one chart with multiple values files, such as values.dev.yaml and values.prod.yaml, configured for the different environments.
So our questions are:
1. How should we build and push multiple containers from multiple Dockerfiles in the Jenkinsfile for micro-services? At present, every micro-service has its own Dockerfile in its own root directory.
2. Is it possible for Jenkins to distinguish which micro-services we would like to deploy? Sometimes we make changes only to a specific service and would like to deploy only those changes. Should we organize independent pipelines, or is there a way to handle this in the same pipeline?
3. How should we organize our Helm chart to deploy micro-services to Kubernetes? Should we create multiple charts per service, or create multiple templates that refer to only one values.yaml?

Looks like you are almost there.
Have separate pipelines for the micro-services; these would build, verify and push Docker images to a registry. Have a separate pipeline for verifying and deploying the whole stack using Helm.
I assume you would be using git events to identify changes. When there is a change in a micro-service, it is committed to the single repository, which triggers a git event you can use to start the pipeline of the respective micro-service.
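A minimal sketch of that change detection, assuming the services live in top-level directories of the repository (the directory names here are made up); in a real job the file list would come from `git diff` on the triggering commit:

```shell
# In Jenkins this would be something like:
#   changed_files=$(git diff --name-only HEAD~1 HEAD)
# Stubbed with example paths for illustration.
changed_files="service1/src/main/App.java
service2/Dockerfile
service1/pom.xml"

# The top-level directory of each changed file names the affected service;
# trigger only those services' pipelines.
affected_services=$(printf '%s\n' "$changed_files" | cut -d/ -f1 | sort -u)
echo "$affected_services"
```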
As the Helm chart represents your whole application stack, I would suggest keeping it as a single chart. If the complexity of the micro-services increases, split it into subcharts.
Multiple charts can be a future maturity level, when the teams associated with each micro-service can deploy upgrades independently without affecting the availability of the whole stack.
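As a sketch, the single-chart-with-subcharts layout could be an umbrella Chart.yaml along these lines (the chart and service names are placeholders, not values from the question):

```yaml
# Hypothetical umbrella chart: one release for the whole stack,
# one subchart per microservice under charts/.
apiVersion: v2
name: my-stack
version: 0.1.0
dependencies:
  - name: service1
    version: 0.1.0
    repository: "file://charts/service1"
  - name: service2
    version: 0.1.0
    repository: "file://charts/service2"
```

The existing per-environment files keep working against the umbrella chart, so deploying an environment stays a single command, e.g. `helm upgrade --install my-stack . -f values.dev.yaml`.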

Have a separate job in Jenkins for each micro-service.
Have a separate job to deploy the whole application using the Helm chart.
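A per-service build-and-push job could look roughly like the following scripted Jenkinsfile; the registry URL, credentials ID and service directory are placeholders, not values from the question:

```groovy
// Hypothetical Jenkinsfile for one microservice: build the image from the
// service's own directory, then push it to the private registry.
node {
    def image

    stage('Checkout') {
        checkout scm
    }

    stage('Build image') {
        // Each microservice keeps its Dockerfile in its own root,
        // so use that directory as the build context.
        dir('service1') {
            image = docker.build("registry/myrepo/service1:${env.BUILD_NUMBER}")
        }
    }

    stage('Push image') {
        docker.withRegistry('https://registry.example.com', 'registry-credentials') {
            image.push()
            image.push('latest')
        }
    }
}
```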

Related

Jenkins Pipeline building micro-services with multiple repos

I'm trying to put together a Jenkins pipeline that builds a docker application composed of multiple containers. Each service is in its own git repository.
i.e.
Service1 github.com/testproject/service1
Service2 github.com/testproject/service2
Service3 github.com/testproject/service3
I can create a Jenkinsfile that builds the individual services, but I'd like a way to build and test the application end-to-end if any single service changes (avoiding rebuilding the unchanged services).
I could maintain 3 separate Jenkinsfiles and 3 separate pipelines to achieve this, but it seems like a lot of duplication. Is there a way to have a single pipeline that will let me achieve this?
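One common way to avoid that duplication is a Jenkins Shared Library: each service repo keeps a tiny Jenkinsfile that calls a shared step, and the shared step triggers a common end-to-end job. A rough sketch, in which the library name, step name and job name are all hypothetical:

```groovy
// vars/buildService.groovy in the shared library (hypothetical layout).
def call(Map config) {
    node {
        stage('Checkout') {
            checkout scm
        }
        stage('Build') {
            // Only this service is rebuilt; the others keep their images.
            docker.build("registry/testproject/${config.name}:${env.BUILD_NUMBER}")
        }
        stage('End-to-end') {
            // One downstream job composes all three services and tests them.
            build job: 'e2e-tests',
                  parameters: [string(name: 'CHANGED_SERVICE', value: config.name)]
        }
    }
}
```

Each repo's Jenkinsfile then shrinks to two lines: `@Library('ci-lib') _` followed by `buildService(name: 'service1')`.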

Jenkins Docker image building for Different Tenant from same code repository

I am trying to implement a CI/CD pipeline for my Spring Boot micro-service deployment. I plan to use Jenkins and Kubernetes for the CI/CD pipeline, and I have one SVN code repository for version control.
Nature Of Application
My application is one microservice that needs to be deployed for multiple tenants. The code is the same, but the database configuration is different for each tenant, and I am managing the configuration using Spring Cloud Config Server.
My Requirement
My requirement is that when I commit code to my SVN repository, Jenkins needs to pull the code, build the project (Maven), create Docker images for multiple tenants, and deploy them.
The point is that a commit to one code repository needs to build multiple Docker images from the same code, i.e. one code repo, multiple image builds. The Dockerfile contains different configuration for each image, i.e. for each tenant. So my requirement is that I need Jenkins to build multiple Docker images for the different tenants, with the different configuration added in the Dockerfile, from one code repo.
My Analysis
I am currently planning to do this by adding multiple Jenkins pipeline jobs connected to the same code repo. Within each pipeline job I can add a different configuration, because the image name for each tenant needs to be kept different and the images need to be pushed to Docker Hub.
My Confusion
My confusion is:
Can I add multiple pipeline jobs from the same code repository using Jenkins?
If I can, how do I deploy the image for every tenant to Kubernetes? Do I need to add jobs for deployment, or is one single job enough?
You seem to be going about it a bit wrong.
Since your code is the same for all tenants and only the config differs, you should instead create a single Docker image and deploy it along with tenant-specific configuration when deploying to Kubernetes.
So a change in your repository will trigger one Jenkins build and produce one Docker image. Then you can have either multiple Jenkins jobs, or multiple steps in one pipeline, which deploy that image with tenant-specific config to Kubernetes.
If you don't want to heed the above, here are the answers to your questions:
You can create multiple pipelines from the same repository in Jenkins (select New Item > Pipeline multiple times).
You can keep a list of tenants and just loop through them, or run all deployments in parallel, in a single pipeline stage.
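The loop/parallel idea could be sketched like this in a scripted pipeline; the tenant names, chart path and values files are placeholders:

```groovy
// Deploy the single image to every tenant, in parallel.
def tenants = ['tenant-a', 'tenant-b', 'tenant-c']

stage('Deploy tenants') {
    def branches = [:]
    tenants.each { tenant ->
        branches[tenant] = {
            // Same image for everyone; only the config differs per tenant.
            sh """
                helm upgrade --install myapp-${tenant} ./chart \
                    --namespace ${tenant} \
                    -f config/values-${tenant}.yaml \
                    --set image.tag=${env.BUILD_NUMBER}
            """
        }
    }
    parallel branches
}
```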

Building tenant specific docker images using Jenkins & Deployment in kubernetes

My Application Structure
I am developing a tenant-based application in a service-oriented architecture, deployed with Kubernetes and Jenkins. The application contains 15-20 microservices developed using Spring Boot. Each microservice needs to be deployed separately for different customers: if I have 5 customers, I need to deploy the 15 microservices for each of those 5 customers. This is the description of my tenancy model.
Deployment Planning
For this application I am planning to use Kubernetes and Jenkins for deployment and to implement the CI/CD pipeline.
My Findings
The nature of my application is that images for different customers are built from the same code by using the Spring Cloud Config Server active-profile functionality. In my Dockerfile, I launch the particular image by defining which profile is active, like the following:
java -jar -Dspring.profiles.active=<Profile_Name> dbdata-0.0.1-SNAPSHOT.jar
Here I am configuring the profiles in the config server, so I am using the same code to create multiple images, one per customer.
Confusion
If I follow this style, how can I create and launch different images from the same code repository using Jenkins? Is it possible to launch multiple images from the same code repository using Jenkins?
In summary, how should I approach building and deploying multiple images for the application structure above?
As you have several microservices, it's better to use tools like Helm + ChartMuseum to simplify management of these services. In this case you will have an individual release (and Kubernetes namespace) per tenant. You can use different Docker image tags if different images per tenant are required.
As for the Jenkins part, I don't see any problems (you can build any number of Docker images from one repo):
create a job to produce & upload the Docker image(s)
create a job to produce & upload the Helm chart(s)
create job(s) to deploy/update the releases in Kubernetes
It's not required to build different Docker images if they differ only in the command line. The command line (or an environment variable) can be overridden in the Kubernetes resource description.
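For example, instead of baking the profile into each image, the same image can be parameterized per tenant in the Deployment; the image name and tenant label below are placeholders:

```yaml
# One image for all tenants; only the environment differs.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dbdata-tenant-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dbdata
      tenant: tenant-a
  template:
    metadata:
      labels:
        app: dbdata
        tenant: tenant-a
    spec:
      containers:
        - name: dbdata
          image: registry/myrepo/dbdata:0.0.1-SNAPSHOT
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: tenant-a   # overrides the active profile per tenant
```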

How to manage deployment?

I am new to DevOps, and need to develop a strategy for a growing business that will handle many different services/nodes (like 100).
I've been learning about Docker, and it seems like Docker Cloud is a good service, but I just don't really know the standard use cases of the various services, and how to compare them.
I need some guidance as to how to manage the development environment, deployment, production environment, and server administration. Are Docker Cloud, Chef Cloud, and AWS ECS tools that can help with all of these, or only some aspects? How do these services differ?
If you are only starting out with DevOps, I would begin with the most basic pipeline and its foundational elements.
The reason is that if you have no experience, you have to get it from somewhere, and you need to understand the basics of Docker Engine and its foundational elements. In addition, you need to design the pipeline.
Here is one basic uni-container pipeline with which you can start getting some experience:
Maven - use the standard, well-understood versioning scheme in your Dockerfile(s) so your Docker tags will be e.g. 0.0.1-SNAPSHOT or 0.0.1 for a release
Maven - get familiar with and use the Spotify docker-maven-plugin
Jenkins - this will do your pulls / pushes to Nexus 3
Nexus 3 - this will proxy both Docker Hub and Maven Central and be your private registry
Deploy Server (test/dev) - Jenkins will scp docker-compose files onto this environment and tear your environments up & down
Cleanup - clean up all your environments with Spotify's docker-gc (ideally daily; get Jenkins to do this)
Once you have the above going, then move onto cloud services, orchestration etc - but first get the basics right.
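The deploy step from the list above (Jenkins copying compose files to the test/dev server) could look roughly like this; the host, user and paths are placeholders, and SSH keys for Jenkins are assumed:

```shell
# Copy the compose file to the target environment and (re)start the stack.
scp docker-compose.yml deploy@test-server:/opt/myapp/docker-compose.yml
ssh deploy@test-server 'cd /opt/myapp && docker-compose pull && docker-compose up -d'

# Tearing the environment down again:
ssh deploy@test-server 'cd /opt/myapp && docker-compose down'
```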

Setting up a private multi-project test cloud infrastructure with OpenStack and Jenkins

I want to deploy a private cloud test infrastructure using OpenStack and Jenkins for multiple projects. I thought of creating a template for OpenStack with one Jenkins installation serving as master. For the projects, I thought of separating them into nodes, i.e. each project would get one node. Is this a sensible structure? Or should I install one Jenkins installation per project + VM?
1) How would you organize a private multi-project test cloud infrastructure?
2) Jenkins saves configuration and job information to /var/lib/jenkins by default; how do I manage the object storage for each project?
When you say node, I'm assuming you mean a machine running nova-compute and hosting VM instances. If this is the case, then I honestly wouldn't worry about trying to bind a project to a specific node - treat the entire openstack pool of resources you have as a global cluster, assign in projects, and let them spin up and tear down as they need.
You will likely find it beneficial to have an image with Jenkins pre-installed as a publicly available image, assuming you want a master Jenkins per project in your cloud. If you're running Jenkins as a stand-alone item per project, an m1.medium may be sufficient, but you might find you want to use an m1.large. It all depends on what your Jenkins instance is doing in each project.
If you want the jenkins data to persist across destroying and recreating the jenkins master instance, then you could use a volume and specifically mount /var/lib/jenkins into it - but you will need to manage the coordination of jenkins startup and having the volume attached appropriately. You may find it easier to give the jenkins instance a larger base disk and just back up and restore the data per project if you need to destroy and recreate the jenkins instance.
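A minimal sketch of the volume approach, assuming the OpenStack CLI is configured; the volume size, names and device path are placeholders:

```shell
# Create a persistent volume for Jenkins data and attach it to the master.
openstack volume create --size 50 jenkins-data
openstack server add volume jenkins-master jenkins-data --device /dev/vdb

# On the instance (first attach only): create a filesystem, then mount it
# over the Jenkins home before starting Jenkins:
#   mkfs.ext4 /dev/vdb
#   mount /dev/vdb /var/lib/jenkins
```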
Another way to do this would be to share a master jenkins external to your openstack cloud and use the jclouds jenkins plugin to spin up jenkins instances and slaves as you need for projects. This isn't providing any segregation between projects in jenkins, which may not be to your liking based on the question above.
