We have 150 applications, each with its own Jenkinsfile in a Git repository. We deploy each application one by one in Jenkins by triggering the desired branch, and when the pipeline reaches the k8s deployment stage, we select the environment (dev/test/prod) from a dropdown to deploy to the desired environment. Now I need help preparing a single Jenkins job that deploys all 150 applications to k8s at once. Please suggest a step-by-step process.
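One common approach is a parameterized orchestrator pipeline that fans out to every application's existing job in parallel. The sketch below is an illustration only: the job names, the apps list, and the ENVIRONMENT parameter are assumptions that would have to match the per-app jobs you already have (and those Jenkinsfiles would need to read the environment from a parameter instead of an interactive dropdown).

    // Hypothetical orchestrator Jenkinsfile. The app list and the
    // ENVIRONMENT parameter name are assumptions; they must match the
    // per-app jobs that already exist.
    def apps = ['app-001', 'app-002', 'app-003'] // ... all 150 job names

    pipeline {
        agent any
        parameters {
            choice(name: 'ENVIRONMENT', choices: ['dev', 'test', 'prod'],
                   description: 'Target k8s environment for every application')
        }
        stages {
            stage('Deploy all applications') {
                steps {
                    script {
                        def branches = [:]
                        apps.each { app ->
                            branches[app] = {
                                // Trigger the app's existing job, passing the
                                // environment instead of the manual dropdown.
                                build job: app, parameters: [
                                    string(name: 'ENVIRONMENT', value: params.ENVIRONMENT)
                                ]
                            }
                        }
                        parallel branches
                    }
                }
            }
        }
    }

If 150 simultaneous downstream builds would overload your agents, the same loop can be run in batches instead of one big parallel map.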
Related
I work for a small startup. We have 3 environments (Production, Development, and Staging), and GitHub is used as the VCS.
All environments run on EC2 with Docker.
Can someone suggest a simple CI/CD solution that can trigger builds automatically after certain branches are merged, with a manual trigger option as well?
For example, if anything is merged into dev-merge, build and deploy to Development; the same for Staging, pushing the image to ECR and rolling out the Docker update.
We tried Jenkins but felt it was over-complicated for our small-scale infra.
GitHub Actions was also evaluated (self-hosted runners), but it requires YAML files to live in the repos.
We are looking for something that gives us the option to modify the pipeline or overall flow without repo-hosted CI/CD config (the way Jenkins lets you either use a Jenkinsfile or configure the job manually via the GUI).
Any opinions about TeamCity?
I am trying to implement a CI/CD pipeline for my microservice deployment, built in Spring Boot. I am using an SVN repository, Kubernetes, and Jenkins to implement the pipeline. While exploring deployment with Kubernetes and Jenkins, I found tutorials and many videos on deploying to both test and prod environments by defining them in the Jenkinsfile, and also by adding a shell script in the Jenkins configuration.
Confusion
My doubt is this: when we deploy into the test environment, how do we deploy the same build into the prod environment after proper testing is finished? Do I need to add a separate shell script for prod? Or do we deploy serially, using one script for both test and prod?
It's completely up to you how you want to do this. In general, we create separate k8s clusters for prod and staging (etc.), and your Jenkins needs to deploy to a different cluster depending on your pipeline. If you want true CI/CD, then one pipeline is enough, and it will deploy to both clusters (or environments).
Most of the time businesses don't want CI on production (for obvious reasons). They want manual testing on QA environments before anything is deployed to prod.
As k8s is container-based, deploying the same image to different environments is really easy: you build your Spring Boot app once, then deploy that image to each environment as needed.
A simple pipeline:
Code is pushed and the build is triggered.
Build with unit tests.
Generate the Docker image and push it to the registry.
Run your kubectl / helm / etc. to deploy the newly built image on STAGING.
Check whether the deployment was successful.
If you want to deploy the same build to prod, continue the pipeline with the steps below (you can pause here for QA as well: https://jenkins.io/doc/pipeline/steps/pipeline-input-step/):
Run your kubectl / helm / etc. to deploy the newly built image on PRODUCTION.
Check whether the deployment was successful.
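A minimal Jenkinsfile sketch of that flow, assuming the registry, image name, kubectl context names, and deployment name are all placeholders you would replace:

    pipeline {
        agent any
        environment {
            // Placeholder registry and app name; tag each build uniquely.
            IMAGE = "registry.example.com/myapp:${env.BUILD_NUMBER}"
        }
        stages {
            stage('Build + unit tests') {
                steps { sh './mvnw clean verify' }
            }
            stage('Docker image') {
                steps {
                    sh 'docker build -t "$IMAGE" .'
                    sh 'docker push "$IMAGE"'
                }
            }
            stage('Deploy to STAGING') {
                steps {
                    sh 'kubectl --context staging-cluster set image deployment/myapp myapp="$IMAGE"'
                    // rollout status waits and fails the build if the rollout fails.
                    sh 'kubectl --context staging-cluster rollout status deployment/myapp'
                }
            }
            stage('QA gate') {
                // The input step pauses the pipeline until someone approves.
                steps { input message: 'Promote this build to PRODUCTION?' }
            }
            stage('Deploy to PRODUCTION') {
                steps {
                    sh 'kubectl --context prod-cluster set image deployment/myapp myapp="$IMAGE"'
                    sh 'kubectl --context prod-cluster rollout status deployment/myapp'
                }
            }
        }
    }

The same image tag is reused in both deploy stages, which is exactly the "build once, deploy anywhere" property described above.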
If your QA needs more time, then you can also create a different Jenkins job and trigger it manually (even the QA engineers can trigger it).
If your QA and PM are technical, then they can also merge branches or close PRs, which can auto-trigger Jenkins and run the prod deployment.
EDIT (response to comment):
You are making REST calls to the k8s API. Even kubectl apply -f foo.yaml makes such a REST call. It doesn't matter where you make this call from, provided your kubectl is configured correctly and can communicate with the k8s server. You can have multiple clusters configured for kubectl and use kubectl --context <staging-cluster> apply -f foo.yaml. You can pick the context name from a Jenkins environment variable or some other mechanism.
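For example, a small sketch of a deploy step that picks the context per environment (the TARGET_ENV parameter and the "-cluster" context naming are assumptions; the contexts must already exist in the kubeconfig):

    // TARGET_ENV is a hypothetical job parameter, e.g. 'staging' or 'prod'.
    sh "kubectl --context ${params.TARGET_ENV}-cluster apply -f foo.yaml"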
We're working on an open source project called Jenkins X, a proposed sub-project of the Jenkins foundation aimed at automating CI/CD on Kubernetes using Jenkins and GitOps for promotion.
When you merge a change to the master branch, Jenkins X creates a new semantically versioned distribution of your app (pom.xml, jar, Docker image, Helm chart). The pipeline then automates the generation of Pull Requests to promote your application through all of the environments via GitOps.
Here's a demo of how to automate CI/CD with multiple environments on Kubernetes, using GitOps for promotion between environments and Preview Environments on Pull Requests, with Spring Boot and Node.js apps (but we support many languages and frameworks).
I would like to set up a Jenkins server that runs test scripts based on successful build deployments on other Jenkins servers. For example, if the QA Jenkins server is named JQA1OnMachine1 and I have three others named
J2OnMachine2, J3OnMachine3, and J4OnMachine4 (different Jenkins servers on different boxes), can JQA1OnMachine1 (the QA Jenkins) poll the others at a regular interval to see if a build was deployed successfully? If so, can anyone tell me how?
Jenkins master/slave along with the Jenkins Pipeline plugin would be one of the better ways to implement this. However, since you don't want to use that approach, you can explore PSTools to remotely capture processes or files on the different servers.
Your builds could update a file on the build server upon completion, and your QA machine can run a script with PSTools to monitor that file and trigger the QA testing based on its content.
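A rough sketch of that idea as a scheduled pipeline on the QA Jenkins. Everything here is an assumption to adapt: the remote host name, the credentials placeholder, the marker-file path, and the downstream test job; PsExec (from PSTools) must be on the PATH of the Windows agent.

    pipeline {
        agent any
        triggers { cron('H/15 * * * *') } // poll roughly every 15 minutes
        stages {
            stage('Check remote build marker') {
                steps {
                    // Read the marker file the build job writes on completion;
                    // PSEXEC_PASS is a placeholder for an injected credential.
                    bat 'psexec \\\\Machine2 -u builduser -p %PSEXEC_PASS% cmd /c type C:\\builds\\last_status.txt > status.txt'
                    script {
                        if (readFile('status.txt').trim() == 'SUCCESS') {
                            build job: 'qa-test-suite' // hypothetical test job
                        }
                    }
                }
            }
        }
    }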
I'm setting up a development environment where I have Jenkins as the CI server (using pipelines), and the last build step in the Jenkinsfile is a deployment to staging. The idea is to have a staging environment for each branch that is pushed.
Whenever someone deletes a branch (sometimes after merging), Jenkins automatically removes its respective job.
I wonder if there is a way to run a custom script before the automatic job removal; then I would be able to connect to the staging server and stop or remove all services that are running for the job that is about to be deleted.
The plugin multibranch-action-triggers-plugin might be worth a look.
This plugin enables building/triggering other jobs when a Pipeline job is created or deleted, or when a Run (also known as Build) is deleted by a Multi Branch Pipeline Job.
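For the staging-teardown use case, the job that the plugin triggers could look something like this sketch. The parameter name, staging host, and container naming scheme are all assumptions; check which variables the plugin actually passes to the triggered job.

    pipeline {
        agent any
        parameters {
            // Assumed to be filled in by the triggering mechanism.
            string(name: 'BRANCH_NAME', description: 'Branch whose job was deleted')
        }
        stages {
            stage('Tear down staging services') {
                steps {
                    // Stop/remove whatever runs for this branch on the staging host.
                    sh "ssh deploy@staging.example.com 'docker rm -f app-${params.BRANCH_NAME} || true'"
                }
            }
        }
    }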
I want to configure Jenkins to build my code on one server and then deploy it on another server, also using Jenkins. Both servers run Linux. I want to automate the entire process as much as possible. I went through some plugins like Pipeline, Job Import Plugin, etc.
Can anyone guide me on how to go about it? Which plugins will be useful? Any example or tutorial would be helpful. The configuration of the Build Pipeline plugin on Jenkins was not seamless for me.
Thanks,
Bhargav
I would work it this way:
1. Install Jenkins on your first server.
2. Install the following plugins: SSH Credentials, SSH Slaves, Copy To Slave, and restart Jenkins.
3. Go to Manage Jenkins -> Manage Credentials, and add SSH credentials for your second server.
4. Go to Manage Jenkins -> Manage Nodes, and create a passive slave. The launch method should be "Launch slave agents on Unix machines via ssh". Use the credentials that you added in step 3.
5. Create a job to build your code. In the advanced options of the job, indicate that it must only be built on the master node.
6. Create a job to deploy your code on the second server. In the advanced options of the job, indicate that it must only be built on the slave node. In the "Build Environment" section, check the "Copy files into workspace before building" box and configure which files you want to copy from the first server (https://wiki.jenkins-ci.org/display/JENKINS/Copy+To+Slave+Plugin).
The code will be copied into the Jenkins slave's workspace.
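If you prefer Pipeline over freestyle jobs, a minimal Jenkinsfile sketch of the same idea follows. The node labels, build command, and deploy script are placeholders; stash/unstash moves the build output between the two nodes instead of the Copy To Slave plugin.

    pipeline {
        agent none
        stages {
            stage('Build on first server') {
                agent { label 'master' }
                steps {
                    sh 'make build' // placeholder build command
                    // Save the build output so the next stage can retrieve it.
                    stash name: 'artifacts', includes: 'build/**'
                }
            }
            stage('Deploy on second server') {
                agent { label 'second-server' } // the slave node created above
                steps {
                    unstash 'artifacts' // copies files into this node's workspace
                    sh './deploy.sh'    // placeholder deploy script
                }
            }
        }
    }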