I'm trying to work out a way to back up and restore Jenkins so that we can provision a new Jenkins instance automatically.
I cannot work out a way to back up and restore /path/to/jenkins/plugins without including the binaries. We would like the backup to be in XML format, just like everything else in Jenkins. My assumption was that we could somehow back up the XML files, and Jenkins could restore the plugin binaries if they are missing, since it has access to Maven.
I would prefer to avoid using config management tools to install plugins, as then I have to manage plugin versions in a way that feels too controlled. I'm happy to just back up whatever is there and restore it elsewhere when needed. The developers should be free to install plugins without involving me or Puppet.
Googling the issue is difficult, since "plugin" is used in so many other contexts.
The link below says it governs plugins as well, but I cannot see how this is; maybe I'm missing something.
http://jenkins-ci.org/content/keeping-your-configuration-and-data-subversion
I have ported the idea to git and it generally works, except that plugins do not reappear by magic on the new machine; only the default plugins come back.
Can anyone suggest an approach?
If you don't want to back up the plugin binary files, you can use the Jenkins REST API to get the list of current plugins:
http://jenkins:8080/pluginManager/api/json?tree=plugins[shortName,version]&pretty=true.
(You can use tree=plugins[*] to see a more complete list of fields in the API.)
Save this data as part of your configuration backup and use the Jenkins API to restore the plugins when you're re-deploying.
There's additional documentation, including how to update plugins, on the pluginManager API page: http://jenkins:8080/pluginManager/api
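For illustration, here is a minimal sketch of that save/restore cycle in shell, assuming jq is available, JENKINS_URL/USER/TOKEN point at your instance, and (on newer Jenkins versions) that you also pass a CSRF crumb where required:

    # Save the current plugin list as shortName:version pairs.
    curl -sg -u "$USER:$TOKEN" \
      "$JENKINS_URL/pluginManager/api/json?tree=plugins[shortName,version]" \
      | jq -r '.plugins[] | "\(.shortName):\(.version)"' > plugins.txt

    # Restore: ask Jenkins to fetch each plugin binary from the update center.
    while IFS=: read -r name version; do
      curl -s -u "$USER:$TOKEN" -X POST \
        -H 'Content-Type: text/xml' \
        -d "<jenkins><install plugin=\"${name}@${version}\"/></jenkins>" \
        "$JENKINS_URL/pluginManager/installNecessaryPlugins"
    done < plugins.txt

plugins.txt is plain text rather than XML, but it is small, diff-friendly, and sits comfortably next to the XML files in version control.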
The best idea I've come up with to date is to split the instance into an OS disk and a Jenkins disk mounted on /var/lib/jenkins, and use your cloud provider's snapshot feature to back up the Jenkins disk periodically. For many organisations, I believe, Jenkins is always going to be a "flaky" server, or a pet, that needs nurturing and does not benefit much from automation beyond what is used to maintain the OS.
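For example, on AWS that periodic backup could be a scheduled snapshot of the volume holding /var/lib/jenkins (the volume id below is a placeholder):

    aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
      --description "jenkins-home $(date +%F)"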
To back up Jenkins components, you can use Handy Backup. A best practice is to set up at least daily backups in differential or mixed (full/differential) mode. This is an advantage over using a plugin, due to the assured regularity of backups.
I have a significant amount of pre-configuration that I want to automate for Jenkins, e.g. pre-configuring Gerrit for the Gerrit Trigger plugin, pre-configuring SAML, libraries, etc.
I'm aware of two methods typically used to do similar tasks:
The Configuration as Code plugin + YAML configuration
Groovy scripts executed from the init.groovy.d directory of the Jenkins home on startup
My users want to be able to update Jenkins configuration from the UI without needing to update YAML, suggesting the Configuration as Code plugin isn't fit for our purpose, as I believe it reapplies the config when the Jenkins container is restarted.
My hunch is to use Groovy scripts that remove themselves after the first execution so that they don't reapply themselves on restart.
Is there a more standard way of pre-configuring Jenkins, or is Groovy my best bet?
TL;DR: Use the file system
Why? There is no "standard" way to achieve what you intend; the two approaches that you suggest are viable options for sure.
From an operational point of view, however, it is good to select a solution which is
generic (so it can cover all aspects of Jenkins configuration) and
"simple" to use
Now,
"Configuration as code" makes you depend on the corresponding plugin -- it may or may not support a specific configuration option
With groovy, it is sometimes quite difficult to find out how to set a Jenkins configuration option (and how to store the setting permanently).
Since all Jenkins configuration data is stored on-disk, another option for bootstrapping Jenkins with a well-defined configuration is to pre-fill those configuration files with proper content right away:
You can be sure that this works in all cases, including all border cases (like secret/encrypted data)
Users can change the data later on as needed
Usually, it's quite easy to find the proper configuration file
On the downside, there is a risk that the configuration file format might change with newer versions of the core or of some plugin. However, a similar risk exists for the two other solutions that you suggested.
Tip: for rolling out such pre-configured Jenkins setups, it is helpful to disable the Jenkins setup wizard by setting jenkins.install.runSetupWizard to false.
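A minimal sketch of such a bootstrap, assuming your prepared configuration files live in a local seed-config directory and that you run the war directly:

    # Seed the Jenkins home with prepared configuration files
    # (config.xml, credentials.xml, per-plugin *.xml, ...).
    export JENKINS_HOME=/var/lib/jenkins
    cp -r ./seed-config/. "$JENKINS_HOME/"

    # Start Jenkins with the setup wizard disabled.
    java -Djenkins.install.runSetupWizard=false -jar jenkins.war --httpPort=8080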
When you combine words like pre-configuring Jenkins, init.groovy.d, Jenkins home, Jenkins startup, etc., it sounds confusing o_O
When Jenkins is ready to use, most folks just need to create jobs or pipelines. If you need to create a job or pipeline, you just need to install and configure some plugins. Very few of them need Groovy, because the goal is "easy to use".
Advanced users are able to create their own plugins in Java, but almost everything is already available as a plugin.
You can use Groovy in scripted or declarative pipelines.
So if your question is more like "What is the best way to create and configure jobs or pipelines?", I can advise you to:
Use scripted or declarative pipelines as much as possible.
Use only verified and supported plugins.
Stop calling shell scripts stored on the hard drive.
Stop using complicated configurations; almost all requirements are already implemented and documented.
If you have a requirement and no plugin seems to help, ask here on Stack Overflow or develop your own plugin focused on configurability, so you can release it for the benefit of the Jenkins community.
I was recently put in charge of all Jenkins-related work at my job, and was tasked with storing build artifacts from our declarative pipelines in a place where:
- They are accessible to everyone on the team
- They can be stored for long periods of time
Ideally they would be visible on the Jenkins interface, where they appear when using the default 'archiveArtifacts' command. I know this saves them in the JENKINS_HOME directory. The problem is that I have to discard old builds to avoid running out of space and the artifacts are deleted with them. Furthermore, I don't have access to the server that Jenkins runs on because it's managed by a separate team, so I can't go into JENKINS_HOME.
I looked into a few artifact repository managers like Nexus and Artifactory, but from my understanding those are only supposed to be used for full releases. I'm looking to save artifacts after every merge, which can happen multiple times a day.
I'm currently saving them on a functional user's home directory, but I'm the only one with direct access to it so that's no good. I also looked into plugins like ArtifactDeployer, which doesn't support pipelines and only does as much as a 'cp' command as far as I could tell.
I ended up creating some freestyle jobs that copy artifacts from the pipelines and save them directly in their workspace. This way they're stored on our Jenkins slaves and visible through the interface to anyone who has permission to view job workspaces.
Nexus does not care what kind of artifacts you drop into it; it's a good idea to use it.
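A raw hosted repository is a natural fit for per-merge CI artifacts: every upload is a plain HTTP PUT, which drops straight into a pipeline sh step. A sketch, where the repository name, credentials, and paths are assumptions:

    # Upload a build artifact to a Nexus "raw" hosted repository.
    curl --fail -u "$NEXUS_USER:$NEXUS_PASS" \
      --upload-file build/app.tar.gz \
      "https://nexus.example.com/repository/ci-artifacts/app/${BUILD_NUMBER}/app.tar.gz"

Keyed by ${BUILD_NUMBER}, each merge gets its own browsable path, and Nexus cleanup policies can handle retention instead of Jenkins' discard-old-builds setting.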
I want to display non-code differences between current build and the latest known successful build on Jenkins.
By non-code differences I mean things like:
Environment variables (includes Jenkins parameters) (set), maybe with some filter
Version of system tool packages (rpm -qa | sort)
Versions of python packages installed (pip freeze)
While I know how to save and archive these files as part of the build, the only part that is not clear is how to generate the diff/change report of differences found between the current build and the last successful build.
Please note that I am looking for a pipeline compatible solution and ideally I would prefer to make this report easily accessible on Jenkins UI, like we currently have with SCM changelogs.
Or to rephrase this, how do I create build manifest and diff it against last known successful one? If anyone knows a standard manifest format that can easily be used to combine all these information it would be great.
you always ask the most baller questions, nice work. :)
We always try to push as many things into code as possible, because of the same sort of lack of traceability you're describing with non-code configuration. We start with Jenkinsfiles, so we capture a lot of the build configuration there (in a way that still shows changes in source control). For system tool packages, we get that into the app by using Docker and by inheriting from a specific tag of the Docker base image. So even if we want to change system packages, or even the Python version, that would manifest as an update of the FROM line in the app's Dockerfile. Even environment variables can be micromanaged by Docker, to address your other example. There's more detail about how we try to sidestep your question at https://jenkins.io/blog/2017/07/13/speaker-blog-rosetta-stone/.
There will always be things that are hard to capture as code, and builds will therefore still fail and be hard to debug occasionally, so I hope someone pipes up with a clean solution to your question.
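A minimal sketch of the manifest-and-diff idea from the question, assuming each build archives its manifest directory with archiveArtifacts (so the lastSuccessfulBuild permalink can serve it) and that the job is readable with the credentials at hand:

    # Capture the non-code state of this build, sorted for stable diffs.
    mkdir -p manifest
    env | sort > manifest/env.txt        # consider filtering secrets first
    rpm -qa | sort > manifest/rpms.txt
    pip freeze | sort > manifest/pip.txt

    # Diff against the manifests archived by the last successful build.
    for f in env rpms pip; do
      curl -sf "$JENKINS_URL/job/$JOB_NAME/lastSuccessfulBuild/artifact/manifest/$f.txt" \
        -o "last-$f.txt" || continue
      diff -u "last-$f.txt" "manifest/$f.txt" > "manifest/$f-changes.diff" || true
    done

The resulting .diff files can be archived alongside the manifests, or rendered on the build page (for example via the HTML Publisher plugin) to approximate the SCM-changelog experience.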
I am using Ubuntu 14.04, installed Jenkins, and configured jobs by installing some plugins.
I want to know if there is an efficient way to take a backup of all plugins and jobs.
Edit: I also want to know how to restore the backup.
Thanks in advance for any kind of help.
You can check this path in your installation: /opt/app/jenkins/var/lib/jenkins. This is the location where all your Jenkins data is stored, and you can build a backup strategy around archiving the contents of this folder.
Alternatively, you can also use the SCM Sync Configuration plugin.
To answer your question as a whole: the approach listed first will help you restore the files on the system; all you then need is to restart Jenkins, and you should have restored the system fully. This approach gives you full insurance against a collapse of your particular Jenkins instance.
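A minimal sketch of that backup/restore cycle in shell, assuming the common /var/lib/jenkins home (the exclusions are judgment calls; workspaces are recreated on the next build):

    # Back up the Jenkins home, skipping bulky, recreatable directories.
    tar --exclude='./workspace' --exclude='./caches' \
        -czf "jenkins-backup-$(date +%F).tar.gz" -C /var/lib/jenkins .

    # Restore on the new machine, then restart Jenkins.
    tar -xzf "jenkins-backup-$(date +%F).tar.gz" -C /var/lib/jenkins
    sudo service jenkins restart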
There is also a plugin named ThinBackup, which does the backup/restore work very smoothly.
I am working on a Grails app and regularly need to deploy hotfixes to a remote server. I am using Jenkins with the Grails plugin for automation.
My problem is the following:
Most of the time I fix a few classes, with no big changes in the app (such as a new database schema or new plugins). However, each time I create a patch I have to upload a 75 MB war file through SSH, which takes between 15 and 20 minutes. Most of the data is not needed (i.e. all the packaged JARs). It would be sufficient to upload only the freshly compiled classes from WEB-INF/classes/ and reload the servlet container (in my case Jetty).
Is anybody experienced with this, preferably with Jenkins?
Check the nojars argument for the war task: http://www.grails.org/doc/1.3.7/ref/Command%20Line/war.html
This way you can place all your .jars (which are usually the biggest files inside a .war) in some other directory on the server and just reference that directory in your Jetty classpath.
Or you could write a shell script to explode the .war file (after all it's just a regular .zip file), add the compiled classes and then re-package it.
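Something along those lines (all paths are illustrative):

    # Explode the existing war, drop in the freshly compiled classes, re-package.
    mkdir -p /tmp/war && cd /tmp/war
    unzip -q /path/to/app.war
    cp -r /path/to/build/WEB-INF/classes/. WEB-INF/classes/
    zip -qr /tmp/app-patched.war .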
You could try using CloudBees to do continuous delivery releases. They also use deltas to upload your changes, and deployments don't affect the user experience at all.
An easy-to-use plugin is available to make the process seamless from within your Grails app and in a Jenkins build. I've written a blog post about how to get it all working easily.
I remember seeing this subject on the mailing list...
http://grails.1312388.n4.nabble.com/Incremental-Deployment-td3066617.html
...they recommend using rsync or xdelta3 to transfer only updated files. I haven't tried it, but it might help you?
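If you keep an exploded copy of the webapp on the server, the rsync variant could look like this (host and paths are assumptions):

    # Transfer only the changed files, then bounce Jetty.
    rsync -avz --delete build/exploded-war/ deploy@example.com:/opt/jetty/webapps/app/
    ssh deploy@example.com 'sudo service jetty restart'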
Maybe the Cloud Foundry Micro Cloud is an option; a deployment transfers just the deltas and not the whole war file.