The Jenkins landscape is vast, and new developments are hard to keep track of, especially if you are not doing DevOps work day to day.
I am currently in the process of setting up a Jenkins CI system from scratch and am looking for the best possible way to get the instance up and running. I have looked at options such as running it from the JAR, setting it up as a service, Docker, Blue Ocean, etc.
I was wondering if you could share your experience: is there a pre-configured setup or a scalable Jenkins solution already available on the market that is ready to be configured/deployed?
One of the key tenants on this Jenkins instance would be the test automation folks running their Selenium tests (I am ideally looking at a Windows Server installation, although CentOS is an option), and I would like to make working with it as easy as possible for them.
I'm a Jenkins admin. At my company I've set up Jenkins on our Kubernetes cluster using the Helm chart, with a custom Docker image preloaded with plugins (you don't want to rely on the plugin update site during startup). All configuration is done with the Configuration as Code plugin. We use the Kubernetes plugin for horizontal scaling. No builds are allowed on the build controller; everything runs in agents, which are custom Docker images inspired by these images. This works very well, and I'm very happy with the setup. There is also a Jenkins Kubernetes Operator which looks promising, but I haven't tried it myself.
If you're not on Kubernetes, you can take a look at the Jenkins Evergreen project.
PS: The Blue Ocean project is dead, but the folks over at CloudBees are currently in the process of overhauling the UX. They just released a weekly version in which they got rid of all tables, so the design is slowly becoming more responsive, and a new set of icons is also on the way.
Probably the nearest you can get to a pre-configured Jenkins instance is the official Docker image (https://hub.docker.com/r/jenkins/jenkins). But even with the Docker image you still have to select plugins and so on. Maybe you want to raise an issue as a proposal in the Jenkins Docker repository to make it possible to pre-configure Jenkins (GitHub repo: https://github.com/jenkinsci/docker/issues)?
Related
I am trying to set up Kubernetes for my company. I have looked a good amount into Jenkins X and, while I really like the roadmap, I have come to the realization that it is likely not mature enough for my company to use at this time. (The UI being in preview, a flaky command line, random IP address requirements and poor Windows support are a few of the issues that have led me to that conclusion.)
But I understand that the normal Jenkins is very mature and can run on Kubernetes. I also understand that it can have dynamically created build agents run in the cluster.
But I am not sure about gitops support. When I try to google it (gitops jenkins) I get a bunch of information that includes Jenkins X.
Is there an easy(ish) way for normal Jenkins to use GitOps? If so, how?
Update:
By GitOps, I mean something similar to what Jenkins X supports (meaning changes to the cluster are stored in a Git repository, and merging causes a deployment).
I mean something similar to what Jenkins X supports (meaning changes to the cluster are stored in a Git repository, and merging causes a deployment).
Yes, this is what Jenkins (and other CI/CD tools) do. You can declare a deployment pipeline in a Jenkinsfile that is triggered on merge (a commit to master) and have other steps for other branches (if you want).
I recommend deploying with kubectl using kustomize and storing the config files in your Git repository. You parameterize the different environments, e.g. staging and production, with overlays. You may, for example, deploy with only 2 replicas in staging but with 6 replicas and more memory resources in production.
Using Jenkins for this, I would create a Docker agent image containing kubectl, so your steps can use the kubectl command-line tool.
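A minimal sketch of such a Jenkinsfile, assuming a hypothetical agent label 'kubectl' whose image contains the kubectl binary, and kustomize overlays stored in the repository under overlays/staging and overlays/production:

```groovy
// Sketch only: deploy the kustomize overlays that live in this Git repository.
// The agent label and overlay paths are assumptions about your setup.
pipeline {
    agent { label 'kubectl' }   // hypothetical agent whose image includes kubectl
    stages {
        stage('Deploy staging') {
            steps {
                sh 'kubectl apply -k overlays/staging'
            }
        }
        stage('Deploy production') {
            when { branch 'master' }   // only a merge/commit to master reaches production
            steps {
                sh 'kubectl apply -k overlays/production'
            }
        }
    }
}
```

The branch condition assumes a multibranch pipeline, which is also what gives you the merge-triggered behaviour described above.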
Jenkins on Kubernetes
But I understand that the normal Jenkins is very mature and can run on Kubernetes. I also understand that it can have dynamically created build agents run in the cluster.
I have not had the best experience with this. It may work, or it may not work so well. I currently host Jenkins outside the Kubernetes cluster. I think that Jenkins X together with Tekton may be a promising upcoming solution for this, but I have not tried that setup.
I'm new to Jenkins/Docker. So far I've found that a lot of the official Jenkins documentation recommends using it with Docker, but the necessity and advantages of running Jenkins as a Docker container remain vague to me. In my case, it's a Node/React app and the required environment is not complicated.
Disadvantages I've found running Jenkins as a Docker container:
High hard-drive usage
Directory paths inside the Docker container are more complicated to deal with, especially when working with SSH in pipeline scripts
Without Docker I can easily achieve the same thing, and the Blue Ocean plugin is also available.
So, what are the main benefits of Docker with Jenkins/Jenkins Pipeline? Are there pitfalls for my Node application in using Jenkins without Docker? Articles to help me dive into this are also appreciated.
Jenkins as Code
The main advantage of Jenkins in Docker is that it helps you to get Jenkins as Code.
Advantages of Jenkins as code are:
SCM: the code can be put under version control
History is transparent, and backup and roll-back become easy.
The code is the documentation of your Jenkins setup.
Jenkins becomes portable, so you can run Jenkins locally to try new plugins etc.
Jenkins pipelines work really well with Docker. As @Ivthillo mentioned: there is no need to install additional tools, you just use the images of those tools. Jenkins will download them from the internet (Docker Hub) for you.
For each stage in the pipeline you can use a different image (i.e. tool). Essentially you get "micro Jenkins agents" which only exist temporarily. This makes your Jenkins setup much cleaner.
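For illustration, a small declarative pipeline (it needs the Docker Pipeline plugin) where each stage runs in its own temporary container; the image names and commands are only examples:

```groovy
// Sketch: every stage gets its own short-lived "micro agent" pulled from Docker Hub.
pipeline {
    agent none
    stages {
        stage('Backend checks') {
            agent { docker { image 'python:3.11-slim' } }
            steps {
                sh 'python --version'
                // e.g. sh 'pip install -r requirements.txt && pytest'
            }
        }
        stage('Frontend build') {
            agent { docker { image 'node:18-alpine' } }
            steps {
                sh 'node --version'
                // e.g. sh 'npm ci && npm run build'
            }
        }
    }
}
```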
A disadvantage is:
Jenkins initial (Groovy) configuration is poorly documented on the web.
Simple Node setup
Most of these arguments also hold for a simple Node setup.
Changing the Node version, or running multiple jobs each with a different Node version, becomes easy.
Add your Jenkinsfile inside the Node repo, so everyone with a Jenkins+Docker setup can run your CI/CD (a small sketch follows below).
And finally: the knowledge you gather on running your app inside a container will enable you to run your production app in Docker in the future.
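As a sketch of the points above (the image tag and npm scripts are assumptions about your repository):

```groovy
// Sketch of a Jenkinsfile kept inside the Node repository.
// Changing the Node version is just a matter of changing the image tag.
pipeline {
    agent { docker { image 'node:18-alpine' } }   // switch to e.g. 'node:20-alpine' to bump Node
    stages {
        stage('Install') {
            steps { sh 'npm ci' }
        }
        stage('Test') {
            steps { sh 'npm test' }
        }
    }
}
```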
Getting started
A while ago I wrote a small blog on how to get started with Jenkins and Docker, i.e. creating a Jenkins image for development which you can launch and destroy in seconds.
I wanted to build a Jenkins server which would run tests of my Puppet code on Vagrant. The issue I found is that we already run our servers as VMs, either in VMware or AWS, and Vagrant will not work as another layer of virtualisation.
Does anyone have an idea how I can create a test platform for my Puppet code? What I want to test is the deployment of manifests on the nodes themselves, i.e. if I deploy a web server class or make changes to it, I would like to check whether it affects/breaks the deployment of other classes.
The idea would be to iterate over all the classes/roles and see if the deployments pass. I would like to make it automatic and independent of our engineers. At the moment we are running manual tests with vagrant up, but there are too many roles to do that by hand.
Any ideas how can I tackle this?
You can use either the Docker or the AWS provider for Vagrant.
In the case of the AWS provider you need to set up rsync to get your environment onto the newly launched instance.
If your Vagrant scripts are robust, you can use the same script for both local deployment on your workstation and AWS/Docker deployment on the CI server.
There are drawbacks to these techniques: in the case of Docker you are limited to the same kernel that the Jenkins server is running; in the case of AWS you will incur additional costs. However, with AWS you don't need to allocate as many resources to your Jenkins server, so you might even save money this way, because you will be paying for the extra VMs only while your tests are running. Just make sure you shut them down when you are done.
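As a rough sketch of how a Jenkins pipeline could drive this with the Docker provider (the 'vagrant' agent label is an assumption; for the AWS provider you would pass --provider=aws instead):

```groovy
// Sketch: run the existing Vagrant-based Puppet tests on a CI agent via the Docker provider.
// Assumes a hypothetical agent labelled 'vagrant' that has Vagrant and Docker installed.
pipeline {
    agent { label 'vagrant' }
    stages {
        stage('Provision and test') {
            steps {
                sh 'vagrant up --provider=docker'
                sh 'vagrant provision'   // re-applies the Puppet manifests
            }
        }
    }
    post {
        always {
            sh 'vagrant destroy -f'   // make sure instances are torn down even on failure
        }
    }
}
```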
Is there any special reason why you want to use Vagrant? I'm not sure whether you are setting up your production environment with Vagrant or not.
In case you are not bound to Vagrant, I would recommend thinking about using a Docker image to prepare a lightweight environment to run your setups and verifications in.
When doing your tests, spin up a container from your image that contains your Puppet distribution and run your setups/tests inside it. If you have special kernel requirements, use a separate Jenkins slave/agent machine rather than executing jobs on the Jenkins master.
If you are not sure how to get started using Jenkins with Docker, have a look at the examples section of the Jenkins documentation. The provided examples show the declarative pipeline syntax, which is still a bit new. Also consider the collapsed "Toggle Scripted Pipeline" sections, which show the Groovy pipeline scripts that are a lot more forgiving for Jenkins pipeline beginners.
Those should be quite good pointers for getting started with running and testing your Puppet scripts inside Docker. For building and using a Docker image there should be more than enough tutorials out there.
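As a hedged sketch of such a pipeline (the puppet/puppet-agent image, the entrypoint reset and the manifests/ path are assumptions; adapt them to your repository):

```groovy
// Sketch: validate and no-op apply Puppet manifests inside a throwaway container.
pipeline {
    agent {
        docker {
            image 'puppet/puppet-agent'   // assumed image containing the puppet binary
            args '--entrypoint='          // the image's ENTRYPOINT is puppet itself, so reset it
        }
    }
    stages {
        stage('Syntax check') {
            steps {
                sh 'find manifests -name "*.pp" -print0 | xargs -0 puppet parser validate'
            }
        }
        stage('Dry run') {
            steps {
                // --noop compiles and simulates the catalog without changing anything
                sh 'puppet apply --noop manifests/site.pp'
            }
        }
    }
}
```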
Let me know if this was a hint in the right direction or if I misinterpreted your question.
I'm a bit of a Jenkins newbie, so I have this question.
I am currently working on a project about CD. We are using Jenkins to build a Docker image, push it to the registry and deploy it to OpenShift afterwards... Although this process works like a charm, there is a tricky problem I'd like to solve. There is not only one OpenShift but three (and increasing) environments/regions where I want to deploy these images.
This is how we are currently doing it:
Setting region tokens as secret text:
$region_1 token1
$region_2 token2
$region_3 token3
Then
build $docker_image
push $docker_image to registry
deploy into Region1.ip.to.openshift:port -token $region_1
deploy into Region2.ip.to.openshift:port -token $region_2
deploy into Region3.ip.to.openshift:port -token $region_3
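Roughly, in Jenkinsfile terms, each job looks something like this today (the image name, credential IDs and deploy.sh are placeholders for what we actually use):

```groovy
// Placeholder sketch of the current per-job flow: build, push, then one deploy block per region.
pipeline {
    agent any
    environment {
        IMAGE = 'registry.example.com/myapp:latest'   // placeholder image name
    }
    stages {
        stage('Build and push') {
            steps {
                sh 'docker build -t "$IMAGE" .'
                sh 'docker push "$IMAGE"'
            }
        }
        stage('Deploy to all regions') {
            steps {
                // one credentials block per region: adding a region means editing every job by hand
                withCredentials([string(credentialsId: 'region_1', variable: 'TOKEN')]) {
                    sh './deploy.sh region1.ip.to.openshift:port "$TOKEN" "$IMAGE"'
                }
                withCredentials([string(credentialsId: 'region_2', variable: 'TOKEN')]) {
                    sh './deploy.sh region2.ip.to.openshift:port "$TOKEN" "$IMAGE"'
                }
                withCredentials([string(credentialsId: 'region_3', variable: 'TOKEN')]) {
                    sh './deploy.sh region3.ip.to.openshift:port "$TOKEN" "$IMAGE"'
                }
            }
        }
    }
}
```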
Thus, whenever we need to add a new "region" to the Jenkins jobs, we have to edit every job manually...
Since the number of Docker images and also the number of OpenShift regions/environments is increasing, we are looking for a way to "automate" this, or to make it as easy as possible to add a new OpenShift region, since ALL the jobs (old and new) must deploy their images to the new environments/regions...
I have been reading the documentation for a while, but Jenkins is so powerful and has so many features/options that I somehow get lost reading all the docs...
I don't know if doing a Pipeline process or similar would help...
Any help is welcome :)
I want to use the Amazon EC2 plugin for setting up autoscaled slaves.
We aim to script everything using Chef, and so far I haven't found anything for this Jenkins plugin. I want to write a cookbook of my own, but I am wondering what the best way to do it is.
Generally, management of the build machine is done through the EC2 plugin itself; it already installs the Jenkins remoting JAR for you, so all you need to do beyond that is make sure Java is installed.
There are two methods to use Amazon EC2 plugin and Chef together:
Run Chef to do provisioning on each slave launch or build start
Build pre-baked slave images using Chef and something like Packer and provide them to Jenkins Amazon EC2 plugin
Cons of the first approach:
It may take a lot of time, depending on what software you are installing with Chef, so it adds latency to the build start and an extra bill for machine time.
You don't always get the same build environment you had last time, which may lead to heisenbugs and hard troubleshooting.
The second approach is known as Immutable Server. It has its cons too:
Gives you an extra bill for AMI storage.
Less flexible: you can't just change some version numbers or add a requirement for some new software and start a new Jenkins build. You have to rebuild your slave images first. And if you need even slightly different environments, you have to build and keep several pre-baked images.
I myself use the second approach right now. You can check the source code here. Specifically, the configuration of the Amazon EC2 plugin with Chef is done here.