I have inherited a Jenkins installation which is used by multiple remote teams and runs on an Amazon EKS cluster. For reasons that are not really relevant, I need to move this Jenkins workload to a new EKS cluster.
Deploying Jenkins itself is not a major issue; I am doing so using Helm. The persistence of the existing Jenkins deployment is backed by an Amazon EBS volume, and the new deployment's will be as well. The mount point will be /var/jenkins_home.
I'm trying to find a simple way of migrating everything from the current jenkins installation and configuration to the new one. This includes mainly:
Authorization Strategy (RBAC)
Jobs
Plugins
Cloud and Agent Config
I know that everything required is most likely in Jenkins Home. Could I in theory just dump out the current Jenkins Home folder and import it into the new running Jenkins container using kubectl cp or something like that? Is there an easier way? Is this unsafe in some way?
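For illustration, the kubectl cp approach described above might look like the following sketch, assuming the pods are named jenkins-0 in a jenkins namespace and the kube contexts are old-cluster and new-cluster (all hypothetical names):

    # Put Jenkins into quiet/shutdown mode or scale it down first, so the
    # copy is consistent; copying a live JENKINS_HOME can catch half-written files.
    kubectl --context old-cluster -n jenkins exec jenkins-0 -- \
        tar czf /tmp/jenkins_home.tgz -C /var/jenkins_home .
    kubectl --context old-cluster -n jenkins cp jenkins-0:/tmp/jenkins_home.tgz ./jenkins_home.tgz

    kubectl --context new-cluster -n jenkins cp ./jenkins_home.tgz jenkins-0:/tmp/jenkins_home.tgz
    kubectl --context new-cluster -n jenkins exec jenkins-0 -- \
        tar xzf /tmp/jenkins_home.tgz -C /var/jenkins_home
    # Restart the pod so Jenkins re-reads the restored home directory.
    kubectl --context new-cluster -n jenkins delete pod jenkins-0

Archiving with tar before copying tends to be more reliable than copying the directory tree file by file, and it preserves file permissions.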
My requirement is to trigger CI and CD on an on-prem Kubernetes infrastructure whenever a PR is raised. Jenkins X was an ideal candidate, but unfortunately, due to a few proxy issues, it didn't come to fruition.
Coming to kubernetes-operator, I am looking for a few clarifications.
I have a 4-node cluster, with one node being the leader.
Do I have to set up a new instance of Jenkins beforehand on my K8s cluster, or does kubernetes-operator do that for me?
I am looking to access the Jenkins instance under the domain jenkins.mybusinessunit.myorg.com/jenkins.
Do I have to do any additional configuration to enable a master-slave setup?
Does kubernetes-operator provide a feature to support a CI/CD model like Jenkins X?
Thanks in advance.
As per your comments, you are actually interested in more of a cloud-native solution to operating Jenkins, so here goes.
Since you already have a Kubernetes cluster and would like to use the Jenkins Kubernetes operator, I would recommend you use the Jenkins Kubernetes plugin for managing your workloads.
The Jenkins Kubernetes plugin enables you to run each of your pipelines in a separate Pod in your Kubernetes cluster, and once the required Service resources are set up, the communication between master and slave pods is completely handled by the plugin. I would recommend that you look into its documentation, which is quite good (in comparison to other plugins).
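For illustration, a pipeline running on such an ephemeral pod might look like this minimal sketch (the container image, tool, and shell step are placeholders, not from the question):

    // Run the build inside a throwaway pod created by the Kubernetes plugin.
    podTemplate(containers: [
        containerTemplate(name: 'maven', image: 'maven:3.9-eclipse-temurin-17',
                          command: 'sleep', args: '99d')
    ]) {
        node(POD_LABEL) {
            stage('Build') {
                // Execute the step in the maven container of the pod.
                container('maven') {
                    sh 'mvn -B -version'
                }
            }
        }
    }

The plugin adds its own agent container to the pod and tears the whole pod down when the build finishes.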
Now, since you are also using the Jenkins Kubernetes operator, you should know that the plugin is installed as one of the default plugins and is available as soon as your Jenkins instance is spun up. I would recommend you read through the Jenkins Kubernetes operator documentation to get a better grasp of what happens while it is running.
So now I will move on to your questions.
Do I have to set up a new instance of Jenkins beforehand on my K8s cluster, or does kubernetes-operator do that for me?
If you install the Jenkins Kubernetes operator via its Helm chart, then no: a Jenkins master instance is included. Otherwise, if you install the controller into your cluster manually, you will need to create a Jenkins custom resource, and the operator will create a Jenkins instance for you.
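For reference, a minimal Jenkins custom resource might look like the sketch below; the exact schema is in the operator's documentation, and the name and image here are illustrative:

    apiVersion: jenkins.io/v1alpha2
    kind: Jenkins
    metadata:
      name: example
    spec:
      master:
        containers:
          - name: jenkins-master
            image: jenkins/jenkins:lts

Applying this with kubectl apply makes the operator provision and configure the master Pod for you.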
I am looking to access the Jenkins instance under the domain jenkins.mybusinessunit.myorg.com/jenkins.
Use an Ingress + load balancer + DNS service, or expose the Pod via a NodePort. Note that exposing your master Pod via NodePort may require you to make your Jenkins master instance publicly available (and that may not be wise).
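A sketch of the Ingress variant for your hostname; the ingress class, Service name, and port are assumptions that depend on how your Jenkins was deployed:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: jenkins
    spec:
      ingressClassName: nginx
      rules:
        - host: jenkins.mybusinessunit.myorg.com
          http:
            paths:
              - path: /jenkins
                pathType: Prefix
                backend:
                  service:
                    name: jenkins-http   # whatever Service fronts your master Pod
                    port:
                      number: 8080

You would then point a DNS record for the hostname at the load balancer backing your ingress controller.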
Do I have to do any additional configuration to enable a master-slave setup?
Please refer to the documentation of the Jenkins Kubernetes plugin and the Jenkins Kubernetes operator. All the details are provided there, but the configuration is rather minimal.
Does kubernetes-operator provide a feature to support a CI/CD model like Jenkins X?
No. The Jenkins Kubernetes operator is there only to manage your Jenkins instance and backups in an immutable fashion. Jenkins X can be used in combination with Jenkins, but neither replaces the other completely.
I am trying to set up Kubernetes for my company. I have looked a good amount into Jenkins X and, while I really like the roadmap, I have come to the realization that it is likely not mature enough for my company to use at this time. (UI in preview, flaky command line, random IP address needs and poor Windows support are a few of the issues that have led me to that conclusion.)
But I understand that the normal Jenkins is very mature and can run on Kubernetes. I also understand that it can have dynamically created build agents running in the cluster.
But I am not sure about GitOps support. When I try to google it (gitops jenkins), I get a bunch of information that includes Jenkins X.
Is there an easy(ish) way for normal Jenkins to use GitOps? If so, how?
Update:
By GitOps, I mean something similar to what Jenkins X supports (meaning changes to the cluster are stored in a Git repository, and merging causes a deployment).
I mean something similar to what Jenkins X supports (meaning changes to the cluster are stored in a Git repository, and merging causes a deployment).
Yes, this is what Jenkins (or other CI/CD tools) does. You can declare a deployment pipeline in a Jenkinsfile that is triggered on merge (a commit to master) and have other steps for other branches (if you want).
I recommend deploying with kubectl using kustomize and storing the config files in your Git repository. You parameterize different environments, e.g. staging and production, with overlays. You may, for example, deploy with only 2 replicas in staging but with 6 replicas and more memory resources in production; a sketch of such an overlay follows.
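A production overlay that raises the replica count could look like this sketch (the paths and the Deployment name are placeholders):

    # overlays/production/kustomization.yaml
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
      - ../../base        # shared manifests for all environments
    replicas:
      - name: my-app      # hypothetical Deployment name
        count: 6

A staging overlay would reference the same base but set count: 2 (and, say, smaller memory requests via a patch).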
Using Jenkins for this, I would create a Docker agent image with kubectl in it, so your pipeline steps can use the kubectl command-line tool.
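Putting the two together, a sketch of such a Jenkinsfile, assuming a hypothetical agent image mycompany/kubectl-agent and the overlay layout above:

    // Deploy the kustomize overlay matching the branch that was merged.
    pipeline {
        agent { docker { image 'mycompany/kubectl-agent' } }
        stages {
            stage('Deploy staging') {
                when { branch 'develop' }
                steps {
                    sh 'kubectl apply -k overlays/staging'
                }
            }
            stage('Deploy production') {
                when { branch 'master' }
                steps {
                    sh 'kubectl apply -k overlays/production'
                }
            }
        }
    }

With a multibranch pipeline job, merging to master then triggers exactly one production deployment, which is the GitOps loop you describe.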
Jenkins on Kubernetes
But I understand that the normal Jenkins is very mature and can run on Kubernetes. I also understand that it can have dynamically created build agents running in the cluster.
I have not had the best experience with this. It may work, or it may not work so well. I currently host Jenkins outside the Kubernetes cluster. I think that Jenkins X together with Tekton may be an upcoming, promising solution for this, but I have not tried that setup.
I fear that this may be a duplicate of some issue, and if so I apologize. Permit me to describe what I would like in full, and perhaps the community could point me to the components.
My Setup:
A docker-for-mac Kubernetes cluster.
Jenkins running as a service in my local cluster using nginx-ingress, at something like jenkins.local.com.
~/jenkins_home on my local machine is mapped to /var/jenkins_home in the container image, thus exposing my localhost filesystem.
~/code/github is exposed to /var/github on the container image as well.
I keep my various repositories in ~/code/github, or ~/code/scm in general. The behavior I would like is as follows:
I make a commit locally, not pushed to any branch, in ~/code/github/my_project.
Jenkins sees this in its filesystem and triggers a build by running the Jenkinsfile at ~/code/github/my_project/Jenkinsfile.
This would give me a Jenkins-based development loop for local work, with quick feedback on tests without me pushing to a branch on the SCM.
What are my options for making Jenkins react to each commit in this way?
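One option to sketch (not the only one): since the repository is already visible inside the Jenkins container, you could configure a Pipeline job whose SCM points at file:///var/github/my_project (assuming your Jenkins and git plugin versions permit local file:// checkouts) and let the job poll it frequently:

    // Jenkinsfile sketch: poll the locally mounted repository every minute.
    pipeline {
        agent any
        triggers {
            pollSCM('* * * * *')   // check for new local commits each minute
        }
        stages {
            stage('Test') {
                steps {
                    sh 'echo run your test suite here'
                }
            }
        }
    }

Polling only notices committed changes, which matches the requirement of reacting to each local commit without a push.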
I'm new to Jenkins/Docker. So far I've found that lots of official Jenkins documents recommend it be used with Docker. But the necessity and advantages of running Jenkins as a Docker container remain vague to me. In my case, it's a Node/React app and the required environment is not complicated.
Disadvantages I've found running Jenkins as a Docker container:
High hard-drive usage.
Directory paths in the Docker container are more complicated to deal with, especially when working with SSH in pipeline scripts.
Without Docker, I can easily achieve the same, and the Blue Ocean plugin is also available.
So, what are the main benefits of Docker with Jenkins/Jenkins Pipeline? Are there pitfalls for my Node application in using Jenkins without Docker? Articles to help me dive in are also appreciated.
Jenkins as Code
The main advantage of Jenkins in Docker is that it helps you to get Jenkins as Code.
Advantages of Jenkins as code are:
SCM: the code can be put under version control.
History is transparent; backup and roll-back become easy.
The code is the documentation of your Jenkins setup.
Jenkins becomes portable, so you can run Jenkins locally to try new plugins etc. A minimal sketch of such a setup follows this list.
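For example, a "Jenkins as code" setup can boil down to a small Dockerfile kept in Git; the file names and the Configuration as Code wiring below are assumptions for illustration:

    # Build a reproducible Jenkins image from code.
    FROM jenkins/jenkins:lts

    # Install a pinned plugin set (one plugin per line in plugins.txt).
    COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
    RUN jenkins-plugin-cli --plugin-file /usr/share/jenkins/ref/plugins.txt

    # Pre-load system configuration via the Configuration as Code plugin.
    COPY jenkins.yaml /usr/share/jenkins/ref/jenkins.yaml
    ENV CASC_JENKINS_CONFIG=/usr/share/jenkins/ref/jenkins.yaml

Rebuilding and restarting the container reproduces the exact same Jenkins, which is what makes backup, roll-back and local experiments cheap.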
Jenkins pipelines work really well with Docker. As #Ivthillo mentioned: there is no need to install additional tools; you just use images of those tools. Jenkins will download them from the internet for you (Docker Hub).
For each stage in the pipeline you can use a different image (i.e. tool). Essentially you get "micro Jenkins agents" which only exist temporarily. This makes your Jenkins setup much cleaner.
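A minimal sketch of what that looks like in a Jenkinsfile (the images and commands are examples):

    // Each stage runs in its own throwaway container.
    pipeline {
        agent none
        stages {
            stage('Build') {
                agent { docker { image 'node:18' } }
                steps {
                    sh 'npm ci && npm run build'
                }
            }
            stage('Lint') {
                agent { docker { image 'node:18-alpine' } }
                steps {
                    sh 'npx eslint .'
                }
            }
        }
    }

Nothing Node-specific is installed on the Jenkins host itself; each stage pulls the tool image it needs and discards it afterwards.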
A disadvantage is:
Jenkins' initial (Groovy) configuration is poorly documented on the web.
Simple Node setup
Most of these arguments also hold for a simple Node setup.
Changing the Node version, or running multiple jobs each with a different Node version, becomes easy.
Add your Jenkinsfile inside the Node repo, so everyone with a Jenkins+Docker setup can run your CI/CD.
And finally: gathering knowledge on running your app inside a container will enable you to run your production app in Docker in the future.
Getting started
A while ago I wrote a small blog on how to get started with Jenkins and Docker, i.e. how to create a Jenkins image for development which you can launch and destroy in seconds.
I want to deploy a private cloud test infrastructure using OpenStack and Jenkins for multiple projects. I thought of creating a template for OpenStack with one Jenkins installation to use as master. For the projects, I thought of separating them into nodes, i.e. each project would get one node. Is this a sensible structure? Or should I install one Jenkins installation per project+VM?
1) How would you organize a private multi-project test cloud infrastructure?
2) Jenkins saves configuration and job information to /var/lib/jenkins by default; how do I manage the object storage for each project?
When you say node, I'm assuming you mean a machine running nova-compute and hosting VM instances. If this is the case, then I honestly wouldn't worry about trying to bind a project to a specific node - treat your entire pool of OpenStack resources as a global cluster, assign projects to it, and let them spin up and tear down as they need.
You will likely find it beneficial to have an image with Jenkins pre-installed as a publicly available image, assuming you want a master Jenkins per project in your cloud. If you're running Jenkins as a stand-alone item per project, an m1.medium may be sufficient, but you might find you want to use an m1.large. It all depends on what your Jenkins instance is doing in each project.
If you want the Jenkins data to persist across destroying and recreating the Jenkins master instance, you could use a volume and specifically mount /var/lib/jenkins onto it - but you will need to coordinate Jenkins startup with the volume being attached appropriately. You may find it easier to give the Jenkins instance a larger base disk and just back up and restore the data per project if you need to destroy and recreate the Jenkins instance.
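With the OpenStack CLI, the volume approach is roughly the following sketch; the names, size, and device path are illustrative:

    # Create a persistent volume and attach it to the Jenkins instance.
    openstack volume create --size 50 jenkins-data
    openstack server add volume jenkins-master jenkins-data --device /dev/vdb

    # Inside the instance: format the volume once, stop Jenkins,
    # then mount it where Jenkins keeps its data.
    sudo mkfs.ext4 /dev/vdb
    sudo mount /dev/vdb /var/lib/jenkins

The coordination problem mentioned above is making sure the attach and mount happen before the Jenkins service starts, e.g. via an fstab entry plus a boot script or cloud-init.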
Another way to do this would be to share a master Jenkins external to your OpenStack cloud and use the JClouds Jenkins plugin to spin up Jenkins instances and slaves as you need for projects. This doesn't provide any segregation between projects in Jenkins, though, which may not be to your liking based on the question above.