Best practice for keeping Helm chart in remote server for Jenkins deployment

Currently I am trying to deploy a sample micro-service, developed using Spring Boot, with Jenkins and Kubernetes on my on-premise server. For that I have already created my Kubernetes resources using a Helm chart.
I tested the Helm chart deployment by logging in to the remote machine and creating the chart in my home directory. Using the terminal command "helm install" I deployed it into the Kubernetes cluster, and the endpoint is working successfully.
My Confusion
So far I have only tested from the terminal. Now I am trying to add the helm install command to my Jenkins pipeline job. Where should I keep this Helm chart? Do I need to copy it to the /var/lib/jenkins directory (the Jenkins home directory), or do I only need to give the full path in the command?
What is the best practice for storing a Helm chart for Jenkins deployments? I want to follow the standard way of implementing this, but I am new to CI/CD pipelines.

The Helm chart(s) should almost certainly be source controlled.
One reasonable approach is to keep a Helm chart in the same repository as your service. Then when Jenkins builds your project, it will also have the chart available and can directly run helm install. (It can also pass credentials it owns via helm install --set options to set values during deployment.) This scales reasonably well, since it also means developers can make local changes to charts as part of their development work.
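A sketch of that approach as a Jenkinsfile deploy stage (the chart path, release name, and credential ID are hypothetical, not from the original question):

```groovy
// Jenkinsfile (Declarative Pipeline) -- assumes the chart lives in ./chart
// inside the service repository; 'registry-password' is a hypothetical
// Jenkins credential ID bound via the Credentials Binding plugin.
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                withCredentials([string(credentialsId: 'registry-password', variable: 'REG_PASS')]) {
                    sh '''
                        helm upgrade --install my-service ./chart \
                          --set image.tag=${BUILD_NUMBER} \
                          --set imageCredentials.password=${REG_PASS}
                    '''
                }
            }
        }
    }
}
```

Using helm upgrade --install instead of plain helm install makes the step idempotent: re-running the job updates the existing release rather than failing because it already exists.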
You can also set up a "repository" of charts. In your Jenkins setup one path is just to keep a second source control repository with charts, and check that out during deployment. Some tools like Artifactory also support keeping Helm charts that can be directly deployed without an additional checkout. The corresponding downside here is that if something like a command line or environment variable changes, you need coordinated changes in two places to make it work.
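With a chart repository, the pipeline does not need the chart checked out at all. A sketch (the repository URL, chart name, and version are hypothetical):

```groovy
// Jenkinsfile stage fetching the chart from a chart repository
// instead of a source checkout; names and URL are illustrative.
stage('Deploy from chart repo') {
    steps {
        sh '''
            helm repo add myrepo https://charts.example.com
            helm repo update
            helm upgrade --install my-service myrepo/my-service --version 1.2.3
        '''
    }
}
```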

I suggest the following path for the SDLC of Helm charts and the apps whose deployment they describe:
keep the Spring Boot app source code (incl. Dockerfile) in a dedicated repo (the CI process builds a Docker image out of it)
keep the app's Helm chart source code (which references the app image) in a dedicated repo (the CI process builds the Helm chart out of it, tags it with a version and pushes it to an artifact registry, e.g. Artifactory or Harbor)
To deploy the chart using a Jenkins job, you code into the pipeline the steps you would use to deploy the Helm chart manually.
A modern alternative to the last step would be to use the GitOps methodology. In that case, you'd only put the latest released chart's tag in the GitOps repository, and the deployment would be done by a GitOps operator.
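In the GitOps variant, the only change Jenkins commits is a version bump in the GitOps repository; a hypothetical sketch of the file it would edit (field names depend on the operator you choose):

```yaml
# deployments/my-service.yaml in the GitOps repo (names are illustrative);
# the GitOps operator watches this file and reconciles the cluster to match.
chart: my-service
repoURL: https://charts.example.com
targetRevision: 1.2.3   # Jenkins bumps this after a successful chart release
```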

Related

Best Practices for Installing Jenkins Instance / Pre-configured Jenkins from scratch

The Jenkins landscape is vast, and new progress is difficult to keep track of, especially if you are not a full-time DevOps engineer.
I am currently in the process of setting up a Jenkins CI system from scratch. I am looking for the best possible way to get the Jenkins instance up and running. I have looked at options such as running from the JAR, setting it up as a service, Docker, Blue Ocean, etc.
I was wondering if you can please share your experience if there is a pre-configured setup or a scalable Jenkins solution already available in the market which is ready to be configured/deployed.
One of the key groups using this Jenkins instance would be the test-automation engineers running their Selenium tests (I am ideally looking at a Windows Server installation, although CentOS is an option), and I would like to make it work as easily as possible for them.
I'm a Jenkins admin. In my company I've set up Jenkins on our Kubernetes cluster using the Helm chart, with a custom Docker image preloaded with plugins (you don't want to rely on the plugin update site during startup). All configuration is done with the Configuration as Code plugin. We're using the Kubernetes plugin for horizontal scaling. No builds are allowed on the build controller; everything is done within agents, which are custom Docker images inspired by these images. This works very well, and I'm very happy with the setup. There is also a Jenkins Kubernetes Operator which looks promising, but I haven't tried it myself.
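A minimal sketch of what such a Configuration as Code file can look like (the values are illustrative, not the actual setup described above):

```yaml
# jenkins.yaml, loaded by the Configuration as Code plugin
# (pointed to via the CASC_JENKINS_CONFIG environment variable)
jenkins:
  systemMessage: "Configured entirely from code"
  numExecutors: 0          # no builds run on the controller itself
  clouds:
    - kubernetes:          # Kubernetes plugin: agents scale out as pods
        name: "kubernetes"
        namespace: "jenkins"
        jenkinsUrl: "http://jenkins.jenkins.svc.cluster.local:8080"
```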
If you're not on Kubernetes, you can take a look at the Jenkins Evergreen project.
PS: The Blue Ocean project is dead, but the folks over at CloudBees are currently in the process of overhauling the UX. They just released a weekly version where they got rid of all tables, so the design is slowly becoming more responsive, and a new set of icons is also coming up.
Maybe the nearest you can get to a pre-configured Jenkins instance is using the Docker image (https://hub.docker.com/r/jenkins/jenkins). But even with the Docker image, you have to select plugins and so on. Maybe you want to raise an issue as a proposal in the Jenkins Docker repository (GitHub repo: https://github.com/jenkinsci/docker/issues) to make it possible to pre-configure Jenkins?

Does Jenkins (not Jenkins X) have gitops support?

I am trying to set up Kubernetes for my company. I have looked a good amount into Jenkins X and, while I really like the roadmap, I have come to the realization that it is likely not mature enough for my company to use at this time. (The UI being in preview, a flaky command line, random IP address needs, and poor Windows support are a few of the issues that have led me to that conclusion.)
But I understand that the normal Jenkins is very mature and can run on Kubernetes. I also understand that it can have dynamically created build agents run in the cluster.
But I am not sure about gitops support. When I try to google it (gitops jenkins) I get a bunch of information that includes Jenkins X.
Is there an easy(ish) way for normal Jenkins to use GitOps? If so, how?
Update:
By GitOps, I mean something similar to what Jenkins X supports. (Meaning changes to the cluster stored in a Git repository. And merging causes a deployment.)
Yes, this is what Jenkins (or other CI/CD tools) do. You can declare a deployment pipeline in a Jenkinsfile that is triggered on merge (commit to master) and have other steps for other branches (if you want).
I recommend deploying with kubectl using kustomize and storing the config files in your Git repository. You parameterize different environments, e.g. staging and production, with overlays. You may, for example, deploy with only 2 replicas in staging but with 6 replicas and more memory resources in production.
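A sketch of that overlay layout (the directory names, service name, and resource values are illustrative):

```yaml
# overlays/production/kustomization.yaml
resources:
  - ../../base            # the shared manifests live in base/
patches:
  - path: replica-patch.yaml
---
# overlays/production/replica-patch.yaml
# Production gets 6 replicas and a higher memory limit than staging.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 6
  template:
    spec:
      containers:
        - name: my-service
          resources:
            limits:
              memory: 1Gi
```

The pipeline then deploys an environment with `kubectl apply -k overlays/production`.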
Using Jenkins for this, I would create a docker agent image with kubectl, so your steps can use the kubectl command line tool.
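Such an agent image can be quite small; a sketch (the base image tag and kubectl version are example pins, pick ones that match your cluster):

```dockerfile
# Agent image with kubectl on top of the standard Jenkins inbound agent
FROM jenkins/inbound-agent:latest
USER root
RUN curl -fsSLo /usr/local/bin/kubectl \
      "https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl" \
 && chmod +x /usr/local/bin/kubectl
USER jenkins
```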
Jenkins on Kubernetes
But I understand that the normal Jenkins is very mature and can run on Kubernetes. I also understand that it can have dynamically created build agents run in the cluster.
I have not had the best experience with this. It may work - or it may not work so well. I currently host Jenkins outside the Kubernetes cluster. I think that Jenkins X together with Tekton may be an upcoming promising solution for this, but I have not tried that setup.

Deploy Kubernetes using Helm and Jenkins for Microservice Based Application

We are developing a web application which has multiple micro-services in the same repository.
For our existing monolithic application, we deploy to Kubernetes using Helm and Jenkins. When it comes to the micro-services, we are struggling to define our CI/CD pipeline strategy. Below are the unclear issues:
For the monolithic application I have one Dockerfile, one Jenkinsfile and one Helm chart. In the build stage I build the image(s) using the following command:
docker.build("registry/myrepo/image:${env.BUILD_NUMBER}")
For the monolithic application, I have one chart with multiple values files, such as values.dev.yaml and values.prod.yaml, configured for multiple environments.
So our questions are;
1. How should we build and push multiple containers from multiple Dockerfiles in the Jenkinsfile for the micro-services? At present, every micro-service has its own Dockerfile in its own root.
2. Is it possible for Jenkins to distinguish which micro-services we would like to deploy? I mean, sometimes we make changes only to a specific service and would like to deploy only those changes. Should we organize independent pipelines, or is there a way to handle this in the same pipeline?
3. How should we organize our Helm chart to deploy the micro-services to Kubernetes? Should we create multiple charts per service, or multiple templates that refer to only one values.yaml?
Looks like you are almost there.
Have separate pipelines for the micro-services; these build, verify and deploy Docker images to a Docker registry. Have a separate pipeline for verifying and deploying the whole stack using Helm.
I assume you would be using Git events to identify changes? When there is a change in a micro-service, it would be committed to the single repository, triggering a Git event that you can use to trigger the pipeline of the respective micro-service.
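With a Multibranch/Declarative Pipeline you can also keep a single Jenkinsfile and skip stages for unchanged services using the built-in `changeset` condition (the paths and image names below are hypothetical):

```groovy
// One stage per micro-service; the stage only runs when files under
// that service's directory changed in the triggering commits.
stage('Build payment-service') {
    when { changeset "services/payment/**" }
    steps {
        sh 'docker build -t registry/myrepo/payment:${BUILD_NUMBER} services/payment'
    }
}
```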
As the Helm chart represents your whole application stack, I would suggest having it as a single chart. If the complexity of a micro-service increases, split it out as a subchart.
Multiple charts can be a future maturity level when teams associated with each microservice can deploy the upgrades independently without affecting availability of the whole stack.
Have a separate job in Jenkins for each micro-service.
Have a separate job to deploy the whole application using the Helm chart.
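A single "umbrella" chart with subcharts, as suggested above, could be declared like this (chart names, versions, and repository locations are illustrative):

```yaml
# Chart.yaml of the umbrella chart (Helm v3 apiVersion: v2 schema);
# each micro-service is pulled in as a subchart dependency.
apiVersion: v2
name: my-stack
version: 0.1.0
dependencies:
  - name: payment-service
    version: 1.2.0
    repository: "file://../payment-service"   # local path, or a chart repo URL
  - name: order-service
    version: 0.9.3
    repository: "https://charts.example.com"
```

Running `helm dependency update` resolves the subcharts, and `helm install my-stack ./` deploys the whole stack as one release.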

Jenkins configuration using command line

I am trying to move the complete eco-system of our SAAS product to Kubernetes (and use Docker containers).
I am supposed to give a bash script which will set up everything. Only manual intervention should be setting up the Kubernetes cluster and mounting Persistent Volumes.
We were using Jenkins for code deployment and cron jobs. I am able to create the Jenkins service, but I cannot find a way to configure it using the command line. I tried finding ways online but could not find any good documentation.
First, welcome to Kubernetes. Second, there are a lot of tools and templates out there; I would recommend you check out Helm.
This is the Jenkins chart if you want to check:
https://github.com/helm/charts/tree/master/stable/jenkins
There is also a "fork" of Jenkins for containerized environments that I like; you can read more about Jenkins X here.
You can use the Helm package manager and simply install the stable Jenkins chart.
Before using Helm v2 you have to set up Tiller on the Kubernetes cluster (Helm v3 no longer uses Tiller).
$ helm install --name my-release stable/jenkins
This installs the stable Jenkins chart using Helm.
https://github.com/helm/charts/tree/master/stable/jenkins
I can add that you can store the Jenkins home folder, as well as the plugins and artifacts folders, on a persistent volume and mount that volume into the Jenkins pod as part of the Helm installation. You can also make daily snapshots/backups of the Jenkins disk. This way, the Jenkins deployment becomes very smooth, quick and reliable.

Continuous Deployment using Jenkins and Docker

We are building a java based high-availability service for a financial application. I am part of the team for managing continuous integration using Jenkins.
Lately we introduced continuous deployment too in the list and we opted for Docker containers.
Here is the infrastructure:
The production cluster will have 3 RHEL machines running the following docker containers on each of them:
3 instances of Wildfly
Cassandra
Nginx
The application IDE is NetBeans, and the source code is in Git.
Currently we are doing manual deployment on this infrastructure.
Please suggest me some tools which I use with Jenkins to complete the continuous deployment process.
You might want Jenkins to trigger on each push to your Git repository. There are plugins that help you do that with a webhook: the GitLab plugin is one solution, and similar solutions exist for GitHub and other Git hosts.
Instead of relying heavily on bash and Jenkins configuration, you might want to set up a Jenkins pipeline with the Pipeline plugin or even the Pipeline: Multibranch plugin. With those, you can automate your build in Groovy code (a Jenkinsfile) stored in a repository, with the possibility to add functionality with other plugins building on them.
You can then use the docker pipeline plugin to easily build docker containers, push docker images and run code inside docker containers.
I would suggest building your services inside docker so that your jenkins machine does not have all the different dependencies installed (and therefore maybe conflicting versions). Use docker containers with all the dependencies and run your build code in there with the docker pipeline plugin from groovy.
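A sketch of running a build inside a container with the Docker Pipeline plugin (the image tag and build command are examples, assuming a Maven-based Java service):

```groovy
// Jenkinsfile -- the build runs inside a Maven container, so the Jenkins
// agent host only needs Docker, not its own Java/Maven installation.
pipeline {
    agent { docker { image 'maven:3.9-eclipse-temurin-17' } }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B clean package'
            }
        }
    }
}
```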
Install a registry solution to push and pull your docker images to.
Use the Pipeline: Shared Groovy Libraries plugin to extract libraries from your Jenkinsfiles so that they can be reused. Those library files should live in their own repository, which your Jenkins knows about and keeps up to date. You can even share an entire pipeline process between multiple projects, which then simply add parameters in their Jenkinsfiles.
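A shared-library step lives under `vars/` in its own repository; a minimal hypothetical example (the step and library names are illustrative):

```groovy
// vars/deployChart.groovy in the shared-library repository.
// Projects call this as a custom pipeline step: deployChart('my-service', './chart')
def call(String release, String chartPath) {
    sh "helm upgrade --install ${release} ${chartPath}"
}
```

A project's Jenkinsfile then loads the library with `@Library('my-shared-lib') _` (the library name is whatever you registered in the Jenkins global configuration) and calls `deployChart(...)` like any built-in step.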
That's a lot of text; if you find something interesting and want to see more code, just ask. I am currently setting all this up myself.
