I have a Jenkins server running on a Virtual Machine, with 2 agents configured.
I am trying to execute the following stages using Jenkins Pipeline as Code, in a Continuous Delivery format.
Build -> Unit Test -> Containerize -> Deploy to Dev Kubernetes Cluster -> Carry out minimal testing -> Deploy to Staging Kubernetes Cluster -> Integration Tests -> Deploy to UAT
I have 2 microservices in different repositories.
I have a scenario where my Service 1 deploys Kubernetes artifacts to the Dev Kubernetes cluster while, at the same time, my Service 2 deploys Kubernetes artifacts to the Staging cluster.
Can I isolate the contexts in any way so that the Kubernetes cluster information is not mixed up between the 2 pipelines?
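Ideally I'd like each pipeline to pin its own cluster configuration, something like the following sketch (the kubeconfig paths and file names here are just placeholders I made up):

    // Jenkinsfile for Service 1 - a sketch, assuming one kubeconfig per cluster
    pipeline {
        agent any
        environment {
            // this pipeline only ever sees the Dev cluster's credentials
            KUBECONFIG = '/var/jenkins/kubeconfigs/dev-cluster.yaml'
        }
        stages {
            stage('Deploy to Dev') {
                steps {
                    sh 'kubectl apply -f k8s/service1.yaml'
                }
            }
        }
    }

The Service 2 pipeline would do the same with a staging-cluster kubeconfig, so neither pipeline could accidentally touch the other's cluster.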
I have installed the Kubernetes plugin and added 2 Kubernetes slave nodes under Jenkins -> Manage Jenkins -> Configure System -> Cloud.
Then when adding jobs I can't see a place to configure the target environment. There are simply no tabs for it in my job configuration, unlike in the examples I have seen.
I have three VMs which I have used to deploy the develop, staging and master branches of a project.
Let's say Jenkins is running on a VM named JEN
Develop branch on VM named DEV
Staging branch on VM named STAGE
And Master branch on VM named MASTER
I have made three slave nodes (DEV, STAGE, MASTER) on Jenkins, and the three different branches' Jenkinsfiles run on the different VMs (DEV, STAGE, MASTER).
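For example, each branch's Jenkinsfile pins its node with the corresponding label (a minimal sketch using the node names above):

    // Jenkinsfile on the develop branch; DEV is the slave node's label
    pipeline {
        agent { label 'DEV' }
        stages {
            stage('Deploy') {
                steps {
                    sh './deploy.sh'   // deploy.sh is a placeholder for the branch's deploy script
                }
            }
        }
    }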
Another approach I am considering is:
Not to make DEV, STAGE and MASTER slave nodes; that is, we have only one Jenkins node (JEN).
Run the pipeline and the tests in it on JEN, and use ANSIBLE to deploy remotely to DEV, STAGE and MASTER.
How would that compare with the first approach?
First, a small correction: it is spelled Ansible, not ANSIBLE.
Second, the interest of an Ansible deployment model is that it is agentless (as opposed to Jenkins, which needs an agent listener, agent.jar)
So if what you need to deploy is not the sources but deliverables, Ansible is more suited for that task, provided the target machines are accessible through SSH.
The Jenkins pipeline would simply do a tower_cli call to the right Ansible Job Template: that is what I have in my deployment platform.
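A minimal sketch of such a stage, assuming tower-cli is installed on the agent and a Job Template named deploy-app exists (both names are illustrative):

    // Sketch: trigger an Ansible Tower Job Template from the pipeline
    pipeline {
        agent any
        stages {
            stage('Deploy via Ansible Tower') {
                steps {
                    // --monitor blocks until the Tower job finishes
                    sh 'tower-cli job launch --job-template=deploy-app --monitor'
                }
            }
        }
    }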
I am trying to implement a CI/CD pipeline using Kubernetes and Jenkins. I am planning to use a Kubernetes HA cluster with 3 master and 5 worker machines / nodes.
Now I am exploring implementation tutorials for the CI/CD pipeline, and also how Jenkins is used with a Kubernetes HA cluster. While reading, I got a bit confused about Jenkins, so I am adding my questions here.
1. I have 8 VMs in total - 3 masters and 5 workers (the Kubernetes cluster). If I install Jenkins on any one of the worker machines, is there any problem when integrating it into the CI/CD pipeline for deployment?
2. I previously read the following link to understand the implementation:
https://dzone.com/articles/easily-automate-your-cicd-pipeline-with-jenkins-he
Is it mandatory to use a Jenkins master and slave? The tutorial suggests that if kubectl, helm and docker are installed, then there is no need for a Jenkins slave. What is the idea behind master and slave here?
3. If I am installing both the Jenkins master and slave on Kubernetes cluster worker machines / nodes, do the master and slave need to be on separate VMs? I am still confused about where to install Jenkins.
I have just started on CI/CD pipelines with Kubernetes and Jenkins.
Jenkins has two parts: the master, which manages all the jobs, and the workers, which perform the jobs.
The Jenkins master supports many kinds of workers (slaves) via plugins - you can have standalone nodes, Docker based slaves, Kubernetes scheduled Docker slaves, etc.
Where you run the Jenkins master doesn't really matter very much; what is important is how you configure it to run your jobs.
Since you are on Kubernetes, I would suggest checking out the Kubernetes plugin for Jenkins. When you configure the master to use this plugin, it will create a new Kubernetes pod for each job, and this pod will run the Docker based Jenkins slave image. The way this works is that the plugin watches the job queue, notices there isn't a slave to run a queued job, starts a container from the Jenkins slave Docker image, which registers itself with the master, runs the job, and then gets deleted. So you do not need to create slave nodes directly in this setup.
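For illustration, a declarative pipeline can describe its pod inline with this plugin; the container name and Maven image below are just examples, not a required setup:

    // Sketch: each run of this pipeline gets its own throwaway pod
    pipeline {
        agent {
            kubernetes {
                // the plugin injects the jnlp (slave) container automatically
                yaml '''
    apiVersion: v1
    kind: Pod
    spec:
      containers:
      - name: maven
        image: maven:3-jdk-8
        command: ['sleep']
        args: ['infinity']
    '''
            }
        }
        stages {
            stage('Build') {
                steps {
                    container('maven') {
                        sh 'mvn -B package'
                    }
                }
            }
        }
    }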
When you are in a Kubernetes cluster with a container based workflow, you don't need to worry about where to run the containers; let Kubernetes figure that out for you. Just use Helm to launch the Jenkins master, then connect to the Jenkins master and configure it to use Kubernetes slaves.
I am trying to implement a CI/CD pipeline for my microservice deployment, created in Spring Boot. I am trying to use my SVN repository, Kubernetes and Jenkins to implement the pipeline. When I explored deployment using Kubernetes and Jenkins, I found tutorials and many videos about deploying to both test and prod environments by defining them in the Jenkinsfile, and also about adding shell scripts in the Jenkins configuration.
Confusion
Here I have a doubt: when we deploy into the test environment, how can we deploy the same build into the prod environment after proper testing is finished? Do I need to add a separate shell script for prod? Or do we deploy serially, using one script for both test and prod?
It's completely up to you how you want to do this. In general, we create separate k8s clusters for prod and staging (etc.), and your Jenkins needs to deploy to a different cluster depending on your pipeline. If you want true CI/CD, then one pipeline is enough - it will deploy to both clusters (or environments).
Most of the time businesses don't want CI on production (for obvious reasons). They want manual testing on QA environments before it's deployed to prod.
As k8s is container based, deploying the same image to different envs is really easy. You just build your Spring Boot app once, and then deploy it to different envs as needed.
A simple pipeline:
Code pushed and build triggered.
Build with unit tests.
Generate the docker image and push to registry.
Run your kubectl / helm / etc. to deploy the newly built image on STAGING
Check if the deployment was successful
If you want to deploy the same to prod, continue the pipeline (you can pause here for QA sign-off as well: https://jenkins.io/doc/pipeline/steps/pipeline-input-step/):
Run your kubectl / helm / etc. to deploy the newly built image on PRODUCTION
Check if the deployment was successful
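Put together as a Jenkinsfile, the above could look roughly like this (the registry, image name and manifest paths are placeholders; the context names must match your own kubectl config):

    // Sketch of the pipeline above: build once, deploy the same image twice
    pipeline {
        agent any
        stages {
            stage('Build & Unit Tests') {
                steps { sh 'mvn -B clean package' }
            }
            stage('Docker Image') {
                steps {
                    sh 'docker build -t registry.example.com/myapp:${BUILD_NUMBER} .'
                    sh 'docker push registry.example.com/myapp:${BUILD_NUMBER}'
                }
            }
            stage('Deploy to STAGING') {
                steps { sh 'kubectl --context staging-cluster apply -f k8s/' }
            }
            stage('QA Sign-off') {
                // the input step pauses the pipeline until someone approves
                steps { input message: 'Deploy to PRODUCTION?' }
            }
            stage('Deploy to PRODUCTION') {
                steps { sh 'kubectl --context prod-cluster apply -f k8s/' }
            }
        }
    }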
If your QA needs more time, then you can also create a different Jenkins job and trigger it manually (even the QA engineers can trigger this).
If your QA and PM are techies, then they can also merge branches or close PRs, which can auto trigger Jenkins and run prod deployments.
EDIT (response to comment):
You are making REST calls to the k8s API. Even kubectl apply -f foo.yaml makes such a REST call. It doesn't matter where you make this call from, as long as your kubectl is configured correctly and can communicate with the k8s server. You can have multiple clusters configured for kubectl and use kubectl --context <staging-cluster> apply -f foo.yaml. You can pick the context name from a Jenkins env variable or some other mechanism, as sketched below.
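For instance (the variable and context names are illustrative only):

    // Sketch: the target cluster comes from an environment variable
    pipeline {
        agent any
        environment {
            KUBE_CONTEXT = 'staging-cluster'   // could equally come from a job parameter
        }
        stages {
            stage('Deploy') {
                steps {
                    sh 'kubectl --context $KUBE_CONTEXT apply -f foo.yaml'
                }
            }
        }
    }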
We're working on an open source project called Jenkins X, a proposed sub-project of the Jenkins Foundation aimed at automating CI/CD on Kubernetes using Jenkins and GitOps for promotion.
When you merge a change to the master branch, Jenkins X creates a new semantically versioned distribution of your app (pom.xml, jar, docker image, helm chart). The pipeline then automates the generation of Pull Requests to promote your application through all of the Environments via GitOps.
Here's a demo of how to automate CI/CD with multiple environments on Kubernetes, using GitOps for promotion between environments and Preview Environments on Pull Requests - using Spring Boot and Node.js apps (but we support many languages + frameworks).
I'm trying to figure out the best strategy for containerizing builds in a Jenkins CI/CD infrastructure using Docker. From what I see, I have 2 options:
(1) Use ephemeral slaves that get provisioned on-demand on Docker hosts using the Docker Plugin: https://wiki.jenkins-ci.org/display/JENKINS/Docker+Plugin
Once the build completes the slave is disposed. As a consequence, only one build ever gets run on a single slave.
(2) Use static slaves (e.g. VMs) that run builds inside Docker containers using the CloudBees Docker Custom Build Environment Plugin: https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Custom+Build+Environment+Plugin As a consequence, multiple (isolated) builds can run on a single slave.
What are the main advantages/disadvantages of one approach over the other? When and why should I choose one over the other? This does not appear at all obvious to me.
I suspect builds are lighter weight than slaves, so for a CI/CD infrastructure orchestrating a large end-to-end pipeline with many jobs running, (2) would be more scalable - each Jenkins slave incurs at least 2 threads on the master node.
Edit
My preference is option 1 (ephemeral slaves) with the Docker plugin.
With this plugin, you declare your build images in the global Jenkins settings, and you can assign labels to your Docker images.
In your job, you just have to use the relevant label, and the Docker plugin will create the corresponding slave in a new container.
With the Docker plugin, Jenkins will spin up a new slave in a few seconds. So even if you're using a pipeline with a lot of stages, it will work fine.
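For example, once a label such as docker-maven is declared in the global settings (the label name is illustrative), a job simply requests it:

    // The Docker plugin provisions a container slave for this label on demand
    pipeline {
        agent { label 'docker-maven' }
        stages {
            stage('Build') {
                steps { sh 'mvn -B clean verify' }
            }
        }
    }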
This is what I'm going to implement at ForgeRock (my company):
2 powerful bare metal machines (with SSD, 32 cores and 1 TB of RAM)
The Jenkins Docker plugin
Maven artifact caching using Artifactory (so as not to download the internet)
The Docker container will use a local Maven cache (so I'm sure not to use an old/odd Maven artifact)
I did a POC on a small bare metal machine and it works well :)
If you are using ephemeral slaves without Maven caching, performance can become a problem.
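As an illustration of the caching idea (this sketch uses the Docker Pipeline agent syntax rather than the Docker plugin's global configuration; the cache path is made up):

    // Mount a host-side Maven repository into the build container,
    // so dependencies survive across ephemeral slaves
    pipeline {
        agent {
            docker {
                image 'maven:3-jdk-8'
                args '-v /var/cache/maven:/root/.m2'
            }
        }
        stages {
            stage('Build') {
                steps { sh 'mvn -B package' }
            }
        }
    }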
Regarding Jenkins plugins, there is a new one developed by Nicolas De Loof: the Docker Slaves plugin.
I have to try this new plugin.