Adding Helm Lint to existing Jenkins CI/CD Pipeline

I have an existing Jenkins CI/CD pipeline where the structure of the pipeline is not defined inside the Jenkinsfile but in external Groovy files in the same project.
I also have multiple YAML files defining different environments like dev, prod and qal.
To validate these YAML files I wanted to use helm lint. But I can only find information about using helm lint directly inside the Jenkinsfile (commands like sh do not seem to work in the external Groovy scripts). Also, as I am completely new to Helm, I am not able to transfer this information to my setup.
Do you know a guide for implementing helm lint in such a setup or have an example repository?
Thanks
Simon
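For context: external Groovy files that Jenkins loads as a shared library (or via `load`) run in the pipeline's script context, so pipeline steps like `sh` are available inside them. A minimal sketch, with a hypothetical step name and illustrative paths:

```groovy
// vars/helmLint.groovy — hypothetical shared-library step; the step name,
// chart directory and values-file names are illustrative, not from the question.
// `sh` works here because Jenkins executes vars/*.groovy inside the pipeline
// context, where all pipeline steps can be called directly.
def call(String chartDir, List<String> valueFiles) {
    valueFiles.each { vf ->
        // Lint the chart once per environment values file (dev, qal, prod, ...)
        sh "helm lint ${chartDir} -f ${vf}"
    }
}
```

Usage from a Jenkinsfile or another library script might then look like `helmLint('charts/myapp', ['values-dev.yaml', 'values-qal.yaml', 'values-prod.yaml'])`.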

Related

Chef deployment using Declarative Pipeline

I would like to create a declarative Jenkins pipeline setup for continuous integration and deployment. My only confusion is how Jenkins and Chef are going to communicate in this process: after continuous integration, I want Chef to take over, install the JAR or ZIP packages from the JFrog repo, and deploy them on the several nodes. Maven is my build tool. In the Jenkins pipeline I can set things up until the build is done; is there anything I can do in the post section of the pipeline for the Chef communication for deployment, or does it have to be done elsewhere? Please share some suggestions.
You can achieve this by setting up a Chef workstation on one of the Jenkins nodes.
Check out the image at the link below, but use Chef instead of Ansible. I suppose this might work; I haven't used Chef recently.
Link: https://thenucleargeeks.com/2020/06/07/jenkins-openshift-pipeline/
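To make the "post section" idea concrete, a hedged sketch of what triggering a Chef converge from a declarative pipeline could look like, assuming the Jenkins node is configured as a Chef workstation with `knife` set up (the node query and SSH user are placeholders):

```groovy
// Hypothetical sketch: after a successful build, kick off chef-client on the
// target nodes so the Chef recipes pull the new artifact from the JFrog repo
// and deploy it. Node-name pattern and user are illustrative.
post {
    success {
        sh 'knife ssh "name:app-node-*" "sudo chef-client" -x deployuser'
    }
}
```

The actual deployment logic (fetching the JAR/ZIP, placing it, restarting services) would live in the Chef cookbooks, not in Jenkins.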

Best practice for keeping Helm chart in remote server for Jenkins deployment

Currently I am trying to deploy a sample microservice developed with Spring Boot using Jenkins and Kubernetes on my on-premise server. For that I have already created my Kubernetes resources using a Helm chart.
I tested the Helm chart deployment by logging in to the remote machine, where I created the chart in my home directory. Using the terminal command "helm install" I deployed it into the Kubernetes cluster, and the endpoint is working successfully.
My Confusion
So far I have only tested from the terminal. Now I am trying to add the helm install command to my Jenkins pipeline job. So where do I need to keep this Helm chart? Do I need to copy it to the /var/lib/jenkins directory (the Jenkins home directory)? Or do I only need to give the full path in the command?
What is the best practice for storing a Helm chart for Jenkins deployment? I am confused about how to follow the standard way of implementation. I am new to CI/CD pipelines.
The Helm chart(s) should almost definitely be source controlled.
One reasonable approach is to keep a Helm chart in the same repository as your service. Then when Jenkins builds your project, it will also have the chart available, and can directly run helm install. (Possibly it can pass credentials it owns to helm install --set options to set values during deployment.) This scales reasonably well, since it also means developers can make local changes to charts as part of their development work.
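A sketch of what that looks like in practice, assuming the chart lives in the service repo under `charts/myapp`; the release name, value keys and credential id are illustrative:

```groovy
// Deploy stage assuming the chart is versioned alongside the service code,
// so a relative path works after checkout. All names here are placeholders.
stage('Deploy') {
    steps {
        withCredentials([string(credentialsId: 'db-password', variable: 'DB_PASS')]) {
            // --set injects build-time values and Jenkins-owned credentials
            sh """helm upgrade --install myapp ./charts/myapp \
                  --set image.tag=${env.BUILD_NUMBER} \
                  --set db.password=\$DB_PASS"""
        }
    }
}
```

Using `helm upgrade --install` rather than plain `helm install` makes the step idempotent across repeated pipeline runs.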
You can also set up a "repository" of charts. In your Jenkins setup one path is just to keep a second source control repository with charts, and check that out during deployment. Some tools like Artifactory also support keeping Helm charts that can be directly deployed without an additional checkout. The corresponding downside here is that if something like a command line or environment variable changes, you need coordinated changes in two places to make it work.
I suggest following the path below for the SDLC of Helm charts and the apps whose deployment they describe:
keep the Spring Boot app source code (incl. Dockerfile) in a dedicated repo (the CI process builds a Docker image out of it)
keep the app's Helm chart source code (which references the app image) in a dedicated repo (the CI process builds the Helm chart out of it, tags it with a version and pushes it to an artifact registry, e.g. Artifactory or Harbor)
To deploy the chart using a Jenkins job, you code into the pipeline the steps you would use to deploy the Helm chart manually.
A modern alternative to the last step would be using the GitOps methodology. In that case, you'd only put the latest released chart's tag in the GitOps repository, and the deployment would be done by a GitOps operator.
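The chart-CI part of that flow can be sketched as two pipeline stages, assuming Helm 3.8+ (which supports pushing charts to OCI registries such as Harbor or Artifactory); the registry URL, chart name and version scheme are placeholders:

```groovy
// Hypothetical chart-CI stages for the flow above: package the chart with a
// version, then push it to an OCI artifact registry. Names are illustrative.
stage('Package chart') {
    steps {
        sh "helm package charts/myapp --version 1.2.${env.BUILD_NUMBER}"
    }
}
stage('Publish chart') {
    steps {
        sh "helm push myapp-1.2.${env.BUILD_NUMBER}.tgz oci://registry.example.com/charts"
    }
}
```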

CI/CD pipeline deployment flow for test and prod environment

I am trying to implement a CI/CD pipeline for my microservice deployment created in Spring Boot. I am trying to use my SVN repository, Kubernetes and Jenkins for implementing the pipeline. While exploring deployment using Kubernetes and Jenkins, I found tutorials and many videos for deploying to both test and prod environments by creating and defining them in the Jenkinsfile, and also adding the shell script in the Jenkins configuration.
Confusion
Here is my doubt: when we are deploying into the test environment, how can we deploy the same into the prod environment after the proper testing is finished? Do I need to add a separate shell script for prod? Or are we deploying serially, using one script for both test and prod?
It's completely up to you how you want to do this. In general, we create separate k8s clusters for prod and staging (etc.). Your Jenkins then needs to deploy to a different cluster depending on your pipeline. If you want true CI/CD, then one pipeline is enough, which will deploy to both clusters (or environments).
Most of the time businesses don't want CI on production (for obvious reasons). They want manual testing on QA environments before anything is deployed to prod.
As k8s is container based, deploying the same image to different envs is really easy. You just build your Spring Boot app once, and then deploy it to different envs as needed.
A simple pipeline:
1. Code is pushed and the build is triggered.
2. Build with unit tests.
3. Generate the Docker image and push it to a registry.
4. Run your kubectl / helm / etc. to deploy the newly built image on STAGING.
5. Check if the deployment was successful.
If you want to deploy the same to prod, continue the pipeline with the following (you can pause here for QA as well: https://jenkins.io/doc/pipeline/steps/pipeline-input-step/):
6. Run your kubectl / helm / etc. to deploy the newly built image on PRODUCTION.
7. Check if the deployment was successful.
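The staging → approval → production flow above can be sketched in declarative pipeline stages; the cluster context names and chart path are assumptions about how kubeconfig is set up:

```groovy
// Sketch of the staging / approval / production flow; context names and the
// chart path are illustrative placeholders.
stage('Deploy to staging') {
    steps {
        sh 'helm upgrade --install myapp ./chart --kube-context staging'
    }
}
stage('Approval') {
    steps {
        // The pipeline pauses here until someone (e.g. QA) confirms —
        // this is the input step linked above.
        input message: 'Deploy to production?'
    }
}
stage('Deploy to production') {
    steps {
        sh 'helm upgrade --install myapp ./chart --kube-context production'
    }
}
```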
If your QA needs more time, then you can also create a different Jenkins job and trigger it manually (even the QA engineers can trigger it).
If your QA and PM are techies, then they can also merge branches or close PRs, which can auto-trigger Jenkins and run prod deployments.
EDIT (response to comment):
You are making REST calls to the k8s API. Even kubectl apply -f foo.yaml makes such a REST call. It doesn't matter from where you make this call, given that your kubectl is configured correctly and can communicate with the k8s server. You can have multiple clusters configured for kubectl and use kubectl --context <staging-cluster> apply -f foo.yaml. You can pick the context name from a Jenkins env variable or some other mechanism.
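One way to pick the context, as a sketch: expose it as a pipeline parameter. The context names here are assumptions about the kubeconfig on the Jenkins agent:

```groovy
// Hypothetical: choose the target cluster per run via a build parameter.
// Context names must match entries in the agent's kubeconfig.
pipeline {
    agent any
    parameters {
        choice(name: 'TARGET', choices: ['staging-cluster', 'prod-cluster'])
    }
    stages {
        stage('Deploy') {
            steps {
                // Same manifest, different cluster — only the context changes.
                sh "kubectl --context ${params.TARGET} apply -f foo.yaml"
            }
        }
    }
}
```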
We're working on an open source project called Jenkins X, which is a proposed subproject of the Jenkins foundation aimed at automating CI/CD on Kubernetes using Jenkins and GitOps for promotion.
When you merge a change to the master branch, Jenkins X creates a new semantically versioned distribution of your app (pom.xml, JAR, Docker image, Helm chart). The pipeline then automates the generation of Pull Requests to promote your application through all of the environments via GitOps.
Here's a demo of how to automate CI/CD with multiple environments on Kubernetes, using GitOps for promotion between environments and Preview Environments on Pull Requests, with Spring Boot and Node.js apps (but we support many languages and frameworks).

Pipeline by using Jenkins with private SVN repository

I am trying to implement a CI/CD pipeline for my microservice-oriented project using Kubernetes and Jenkins. My code repository is on my on-premise server, where I created an SVN repository.
I am interested to know: can I use my private SVN code repository with Jenkins?
The reason for my doubt is that every example shows the creation of a pipeline with Jenkins and a GitHub project.
You can use the shell command in your pipeline, so you are free to use SVN with Jenkins:
https://tortoisesvn.net/docs/nightly/TortoiseSVN_en/tsvn-cli-main.html
Some info there:
Run bash command on jenkins pipeline
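Besides shelling out to the SVN CLI, Jenkins also has a Subversion plugin whose checkout step can be used directly in a pipeline. A sketch, where the repository URL and credentials id are placeholders:

```groovy
// Hypothetical: check out a private SVN repo inside a pipeline, assuming the
// Subversion plugin is installed and credentials are stored in Jenkins.
checkout([$class: 'SubversionSCM',
          locations: [[remote: 'https://svn.example.com/repo/trunk',
                       credentialsId: 'svn-creds',
                       local: '.']]])
```

The Pipeline Syntax snippet generator in Jenkins can produce this block for your exact repository settings.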

Continuous Deployment using Jenkins and Docker

We are building a Java-based high-availability service for a financial application. I am part of the team managing continuous integration using Jenkins.
Lately we introduced continuous deployment as well, and we opted for Docker containers.
Here is the infrastructure:
The production cluster will have 3 RHEL machines running the following docker containers on each of them:
3 instances of Wildfly
Cassandra
Nginx
The application IDE is NetBeans and the source code is in Git.
Currently we are doing manual deployment on this infrastructure.
Please suggest some tools which I can use with Jenkins to complete the continuous deployment process.
You might want Jenkins to trigger on each push to your Git repository. There are plugins that help you do that with a webhook. The GitLab plugin is one solution; similar solutions exist for GitHub and other Git hosts.
Instead of heavily relying on bash and Jenkins configuration, you might want to set up a Jenkins pipeline with the Jenkins Pipeline plugin or even the Pipeline: Multibranch plugin. With those you can automate your build in Groovy code (a Jenkinsfile) in a repository, with the possibility to add functionality through other plugins building on them.
You can then use the Docker Pipeline plugin to easily build Docker containers, push Docker images and run code inside Docker containers.
I would suggest building your services inside Docker so that your Jenkins machine does not need all the different dependencies installed (and therefore maybe conflicting versions). Use Docker containers with all the dependencies and run your build code in there with the Docker Pipeline plugin from Groovy.
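A sketch of that build-in-Docker approach with the Docker Pipeline plugin; the image names, registry URL and credentials id are placeholders:

```groovy
// Hypothetical scripted pipeline using the Docker Pipeline plugin.
node {
    checkout scm
    // Run the Maven build inside a container with the toolchain, so the
    // Jenkins machine itself needs no Java/Maven installed.
    docker.image('maven:3-eclipse-temurin-17').inside {
        sh 'mvn -B package'
    }
    // Build the service image and push it to your private registry.
    docker.withRegistry('https://registry.example.com', 'registry-creds') {
        docker.build("myservice:${env.BUILD_NUMBER}").push()
    }
}
```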
Install a registry solution to push and pull your docker images to.
Use Pipeline: Shared Groovy Libraries to extract libraries from your Jenkinsfiles so that they can be reused. Those library files should have their own repository, which your Jenkins knows about and keeps up to date. Possibly you can even have an entire pipeline process shared between multiple projects, which simply add parameters in their Jenkinsfile.
A lot of text and no examples. If you think something is interesting and you want to see some code just ask. I am currently setting all this up.
