Chef: Jenkins supermarket cookbook automated job deployment

I am currently using the Jenkins supermarket cookbook to deploy an instance of Jenkins running as a service on my Chef-managed node. So far I have modified my _master_war_ recipe to install Jenkins, start it as a service using runit, install all the plugin versions I need, enable matrix-based security, and create a base administrator account. It should also be noted that we are using BitBucket for source control.
I also want to add my jobs as part of the automated Jenkins deployment. From what I understand, the way to do this is to copy the job configuration files from a directory on my Chef workstation to a directory (Chef's file cache path) on the managed node.
Per the Jenkins public supermarket cookbook readme:
The :create action requires a Jenkins job config.xml. This config file must exist on the target node and contain a valid Jenkins job configuration file. Because the Jenkins CLI actually reads and generates its own copy of this file, do NOT write this configuration inside of the Jenkins job. We recommend putting them in Chef's file cache path.
As these job configurations do change periodically, I'm wondering what the best way is to maintain the most recent copy of the job configuration file(s) on my Chef workstation for deployment to my managed node.
Am I understanding the cookbook documentation correctly in that we will need a local copy of the job configuration file (on the Chef workstation), which is then copied to Chef's file cache path on the managed node?
Thanks in advance for any help anyone is able to provide.

Personally, I consider setting up Jenkins jobs to belong much more to the domain of Jenkins than of Chef. The Jenkins community has developed several "jobs as code" approaches, the most popular being the Job DSL and Jenkins Pipelines, with the latter probably being the better starting point.
What remains for the Chef cookbook is to define the bootstrap job: either a job for the "Bitbucket Organisation Folder" plugin (the one job that points to your organisation at BitBucket), or a so-called "seed job" for the Job DSL.
Regarding the automated setup of pipelines, I recommend a look at Torben Knerr's examples. This setup uses:
Pipeline (as defined in Jenkinsfiles) for the actual build jobs
a Job DSL seed job to set up the pipeline job(s), as sketched below
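A minimal seed script in that style might look like this; the repository URL, credentials ID, and job name are placeholders, not taken from your question:
// Job DSL script executed by the seed job (placeholder names throughout).
// It creates one pipeline job whose build logic lives in the repository's Jenkinsfile.
pipelineJob('my-app-pipeline') {
    definition {
        cpsScm {
            scm {
                git {
                    remote {
                        url('https://bitbucket.example.org/scm/myproject/my-app.git')
                        credentials('bitbucket-credentials-id')
                    }
                    branch('*/master')
                }
            }
            scriptPath('Jenkinsfile')   // path of the Jenkinsfile within the repository
        }
    }
}
The seed job itself is just an ordinary free-style job with a "Process Job DSLs" build step pointing at scripts like this one; that is the one job the Chef cookbook still has to create.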
Regarding the actual implementation in Chef, you can see an example in a cookbook of mine. The template resource copies a file from the cookbook (in the templates/ subdirectory) into some temporary path, from where the jenkins_job resource picks it up (on the Jenkins server).
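As a rough sketch of that pattern (resource names and paths here are illustrative, not copied from that cookbook), the recipe code boils down to something like:
# Render the job definition shipped with the cookbook into Chef's file cache path...
template "#{Chef::Config[:file_cache_path]}/my-app-config.xml" do
  source 'my-app-config.xml.erb'
end

# ...and have the jenkins_job resource from the jenkins cookbook load it from there.
jenkins_job 'my-app' do
  config "#{Chef::Config[:file_cache_path]}/my-app-config.xml"
  action :create
end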
So I'm not sure if you got it right regarding:
will need a local copy of the job configuration file (on the Chef workstation)
You only need the file on your workstation once, in order to add it to the cookbook; after that, Chef copies it from the cookbook to the managed node.

Related

jenkins-as-code: purpose of jobs

I want to use Jenkins and store the configuration and the pipeline in my SCM (e.g. Git). To do so, I created a directory, let's say "jobs", in the root of my project, where I will store jobs.groovy files written as Job DSL plugin files.
Should I do everything in a single job file, like fetching the source code, testing it, maybe building Docker images if necessary, then deploying to the AWS cloud? Or should I create a different job for each operation? If so, how can I create a pipeline using these job files?
Look at the Jenkins Configuration as Code plugin. The following link may be helpful:
https://github.com/tomasbjerre/jenkins-configuration-as-code-sandbox
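Independent of that plugin, one common layout for the jobs/ directory described in the question is a small Job DSL script per pipeline job, while the individual operations (fetch, test, Docker build, deploy) live as stages inside that repository's Jenkinsfile. A minimal sketch, with hypothetical repository and job names:
// jobs/myApp.groovy -- one pipeline job per deliverable; the build/test/deploy
// steps are not scripted here but as stages in the repository's Jenkinsfile.
pipelineJob('my-app') {
    definition {
        cpsScm {
            scm {
                git('https://git.example.com/my-org/my-app.git', '*/master')
            }
            scriptPath('Jenkinsfile')
        }
    }
}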

Jenkins: Automated job configuration using Seed Jobs and Jenkinsfile

I am trying to understand how to best deploy an instance of Jenkins, complete with plugins, users and jobs using Chef. I am currently using the Chef Jenkins Supermarket cookbook.
I am attempting to achieve automated deployment of our Pipelines as part of the project. From what I have gathered, the best way to go about this is to have Chef configure a seed job during Jenkins' initial setup and configuration.
The seed job should specify, among other things, the git repository from which to find and use a Jenkinsfile for a given job. I've found this resource by Daniel Spilker to be helpful in explaining seed jobs.
So the seed Jenkins job would be run, and it would then generate the Jenkins job we have scripted with it; in this case, the seed job would pull the Jenkinsfile from source control and configure a new Jenkins job (our pipeline) based on the details of that Jenkinsfile.
Am I correct that this is the proper way not only to automate Jenkins job configuration, but also to always have an up-to-date configuration for any given job in the event the job configuration changes?
If we used a seed job to set up our pipeline, what are some possible solutions for having the initial seed job run automatically once Jenkins is fully configured by Chef?
As for job configuration changes that may occur over time, would we need to set up the seed job to poll source control periodically in case the Jenkinsfile has been modified? (It may be helpful to note that we are currently using BitBucket for source control.)
Just getting started with pipeline as code. Thanks to everybody in advance for their patience and guidance.
I've mentioned this a bit in your other questions, but the least painful approach is to treat Jenkins as a database, not a web service. Have Chef do the basic install, but then configure the initial bits by hand. For DR, rely on your backups rather than Chef.

Can I store Jenkins configuration in the project repo (like Travis CI)?

How do you maintain the Jenkins job configuration in SCM alongside the source code?
As source code evolves, so does the job configuration. It would be ideal to be able to keep the job configuration in SCM, for the following benefits:
easy to see a history of the changes, including the author and the description
able to rebuild an old branch/tag by checking out that revision, with the build just working
not having to scroll through the UI to find the appropriate section and make a change
I see there is a Jenkins Job Builder plugin. I prefer a solution along the lines of Travis CI, where the job configuration is maintained in a YAML file (.travis.yml). Any good suggestions?
Note: Most of our projects are using Java & Maven.
Update 2016: Jenkins now provides a Jenkinsfile, which offers exactly this. It is supported by the core Jenkins developers and actively developed.
Benefits:
Creating a Jenkinsfile, which is checked into source control, provides a number of immediate benefits (a minimal example follows the list below):
Code review/iteration on the Pipeline
Audit trail for the Pipeline
Single source of truth for the Pipeline, which can be viewed and edited by multiple members of the project.
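As an illustration, a minimal declarative Jenkinsfile for a Maven project could look like the following; the stage contents are just an assumption on my part:
// Jenkinsfile -- lives next to the source code and is versioned with it.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B clean package'
            }
        }
        stage('Test') {
            steps {
                sh 'mvn -B verify'
            }
        }
    }
    post {
        always {
            junit 'target/surefire-reports/*.xml'   // publish test results on every run
        }
    }
}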
I've written a plugin that does this!
Other than my plugin, you have some (limited) options with existing Jenkins plugins:
Use a single test script
If you configure your Jenkins to simply run:
$ bash run_tests.sh
You can then check a run_tests.sh file into your SCM repo, and you're now tracking changes to how you run tests. However, this won't track the configuration of any plugins.
Similarly, if you're using Maven, the Maven Project Plugin simply runs a specified goal for your repo.
The Literate Plugin does allow Jenkins to run the commands in your README.md, but it hasn't yet been released.
Track changes to Jenkins configuration
You can use the SCM Sync configuration plugin to write configuration changes to SCM, so you at least have a persistent record. This is global, across all projects on your Jenkins instance.
There's also the job config history plugin, which stores config history on the filesystem.
Write Jenkins configuration from SCM
The Jenkins job builder project you mentioned lets you check config changes into SCM and have them applied to your Jenkins instance. Again, this is across all projects on your Jenkins instance.
Write Jenkins configuration from another job
You can use the Job DSL Plugin with a repo of groovy scripts. Jenkins then polls that repo, executes the groovy scripts, which create job configurations.
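A minimal Job DSL script of that kind might look like this (hypothetical repository and job name, reusing the run_tests.sh idea from above):
// One of the groovy scripts in the polled repository; when the seed job runs it,
// the Job DSL plugin creates or updates the corresponding free-style job.
job('my-app-tests') {
    scm {
        git('https://github.com/example/my-app.git')
    }
    triggers {
        scm('H/5 * * * *')   // poll the repository roughly every 5 minutes
    }
    steps {
        shell('bash run_tests.sh')
    }
}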
Discussions
Issue 996 (now closed) discusses this, and it has also been discussed on the mailing list: 'Keeping track of Hudson's configuration changes', and 'save hudson config in svn'.
You can do all of this, and a lot more, with the Workflow plugin. Workflow is one of the most advanced techniques for using Jenkins, and it has very strong support.
It is based on a Groovy DSL and allows you to keep the whole configuration in the SCM of your choice (e.g. Git, SVN, ...).

Way to clone a job from one jenkins to another

I have two Jenkins instances, both masters, and each has 5 slave nodes. I have one job on the first Jenkins that needs to be cloned for each job.
I can clone the job on the first Jenkins and its slaves, but not on the second master. Is there a way to clone a job from one Jenkins to another?
One more question: can I archive the job at some defined location other than the Jenkins master, maybe on a slave?
I assume you have a job called "JOB" on "Jenkins1" and you want to copy it to "Jenkins2":
curl JENKINS1_URL/job/JOB/config.xml | java -jar jenkins-cli.jar -s JENKINS2_URL create-job JOB
You might need to add a username and password if you have turned on security in Jenkins. The jenkins-cli.jar is available from your $JENKINS_URL/cli.
Ideally you should make sure you have the same plugins installed on both Jenkins1 and Jenkins2. The more similar you can make the two Jenkins masters, the fewer problems you will have importing the job.
For the second part of your question: slaves don't store any Jenkins configuration. All configuration is done on the master. There are a lot of backup plugins; some back up the whole Jenkins, some back up just job configuration, some back up individual jobs, export them to files, or even store/track changes in an SCM such as SVN.
So "archiving job configuration to slave" simply makes no sense. But at the end of the day, a job configuration is simply an .xml file, and you can take that file and copy it anywhere you want.
As for the first part of the question, it's unclear what you want. Do you want to clone a job automatically (as part of another job's process), programmatically (through some script) or manually (through the UI, other means)?
Edit:
Go to your JENKINS_HOME directory on the server filesystem, navigate to the jobs folder, then select the specific job folder that you want.
Copy the config.xml to another server; this will create the same job with the same configuration (make sure your plugins are the same)
Copy the whole job_name folder if you want to preserve history, builds, artifacts, etc.
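As a sketch of those steps on the command line (host names are made up, and the /var/lib/jenkins path is just the common default JENKINS_HOME on the target; adjust both):
# Copy only the job definition to the second master...
ssh jenkins2 'mkdir -p /var/lib/jenkins/jobs/JOB'
scp "$JENKINS_HOME/jobs/JOB/config.xml" jenkins2:/var/lib/jenkins/jobs/JOB/
# ...or copy the whole job folder, including build history and artifacts.
rsync -a "$JENKINS_HOME/jobs/JOB/" jenkins2:/var/lib/jenkins/jobs/JOB/
# Make the second master pick up the new job without a restart.
java -jar jenkins-cli.jar -s JENKINS2_URL reload-configuration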

Best way to use Jenkins to install snapshot on remote machine?

I'm running Jenkins on one server and want to use Chef to automatically install a snapshot (including runtime artifacts, etc.) on a separate server.
Currently Jenkins uses ssh to invoke Chef on the separate machine. Is there a better way?
Maven is also involved in this.
I've found that the majority of "Deploy"-type plugins are lacking in customization. We use "Execute" (bash or batch) build steps to trigger deployment scripts on remote machines (written in house, be they Puppet, Chef, or plain bash/batch).
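For instance, an "Execute shell" build step along these lines (host, user, path, and recipe names are made up, not from the question) pushes the freshly built snapshot and triggers the deployment on the target machine:
# Copy the Maven snapshot produced by this build to the target server...
scp target/myapp-*-SNAPSHOT.jar deploy@app-server:/opt/myapp/releases/
# ...and run the in-house deployment logic there, in this case a Chef run
# limited to the deployment recipe.
ssh deploy@app-server 'sudo chef-client --override-runlist "recipe[myapp::deploy]"'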
The correlation between builds and deployments is achieved through "Promotions" and explained in detail here:
How to promote a specific build number from another job in Jenkins?
