When running Jenkins jobs that kick off Terraform scripts, the workspaces are initialised each time. Is there a way to preserve the downloaded plugins when Terraform initialises? Would the -plugin-dir option be the best approach?
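A minimal sketch of what this could look like in a Jenkinsfile, assuming the job runs on an agent where a shared cache directory (here /var/cache/terraform-plugins, a placeholder path) survives between builds:

pipeline {
    agent { label 'terraform' }
    environment {
        // Terraform's built-in plugin cache; providers are reused across workspace wipes.
        TF_PLUGIN_CACHE_DIR = '/var/cache/terraform-plugins'
    }
    stages {
        stage('Init') {
            steps {
                // Alternatively, if the providers are pre-seeded into a local directory,
                // 'terraform init -plugin-dir=/var/cache/terraform-plugins' skips discovery entirely.
                sh 'terraform init -input=false'
            }
        }
        stage('Plan') {
            steps {
                sh 'terraform plan -input=false -out=tfplan'
            }
        }
    }
}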
I am trying to configure the global build discard option in CloudBees Jenkins using Groovy. For now, I am manually configuring the global build discarder in Configure System.
But I couldn't find enough documentation on doing this with Groovy. I could only find scripts for fetching the list of Jenkins jobs and for adding a build discard property to a pipeline.
CloudBees recommends this to run the build discarder:
ExtensionList.lookupSingleton(BackgroundGlobalBuildDiscarder.class).doRun()
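For reference, a hedged Script Console sketch of configuring the global build discarder itself in Groovy; the class names below are taken from recent Jenkins cores and should be verified against your own version and the CloudBees documentation:

import hudson.ExtensionList
import hudson.tasks.LogRotator
import jenkins.model.BackgroundGlobalBuildDiscarder
import jenkins.model.GlobalBuildDiscarderConfiguration
import jenkins.model.SimpleGlobalBuildDiscarderStrategy

// Keep at most 30 days / 50 builds per job (artifact limits left unset).
def rotator = new LogRotator(30, 50, -1, -1)

// Register a global "simple build discarder" strategy wrapping that rotator.
def config = ExtensionList.lookupSingleton(GlobalBuildDiscarderConfiguration.class)
config.getConfiguredBuildDiscarders().add(new SimpleGlobalBuildDiscarderStrategy(rotator))
config.save()

// Run the background discarder immediately instead of waiting for its schedule.
ExtensionList.lookupSingleton(BackgroundGlobalBuildDiscarder.class).doRun()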
I have automated Jenkins master and slaves deployment and redeployment successfully.
I know how to manually create pipeline jobs and add github repos to use their Jenkinsfiles for the steps.
My issue is: how can I automate adding the pipeline jobs to Jenkins after it has been destroyed and redeployed, without having to manually create the pipeline jobs and point them to a Jenkinsfile each time?
I have seen this done before in a container environment with chef and docker when redeployed or updated it re-adds all the pipelines automatically again.
I don't want to use the UI at all, except to confirm job status and progress and to verify settings.
I would recommend looking at the JobDSL Plugin to create jobs, using a seed job to create them on initial Jenkins startup. The Jenkins Configuration-as-Code plugin can be used to set up any other configuration outside the jobs.
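As an illustration, a minimal JobDSL seed-job script that recreates a pipeline job pointing at a repository's Jenkinsfile (the job name and repository URL are placeholders):

pipelineJob('my-app-pipeline') {
    definition {
        cpsScm {
            scm {
                git {
                    remote {
                        url('https://github.com/example-org/my-app.git')
                        // credentials('github-credentials-id')  // hypothetical credentials ID
                    }
                    branch('*/main')
                }
            }
            // Use the Jenkinsfile kept in the repository for the build steps.
            scriptPath('Jenkinsfile')
        }
    }
}

Running such a script from a seed job after each redeploy (or wiring it into the Configuration-as-Code setup) recreates every pipeline without touching the UI.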
We are building a java based high-availability service for a financial application. I am part of the team for managing continuous integration using Jenkins.
Lately we introduced continuous deployment too in the list and we opted for Docker containers.
Here is the infrastructure:
The production cluster will have 3 RHEL machines running the following docker containers on each of them:
3 instances of Wildfly
Cassandra
Nginx
Application IDE is Netbeans and source code is in git.
Currently we are doing manual deployment on this infrastructure.
Please suggest some tools I can use with Jenkins to complete the continuous deployment process.
You might want Jenkins to trigger on each push to your git repository. There are plugins that help you do that with a webhook: the Gitlab plugin is one solution, and similar solutions exist for GitHub and other git hosts.
Instead of relying heavily on bash and Jenkins configuration, you might want to set up a Jenkins pipeline with the Pipeline plugin, or even the Pipeline: Multibranch plugin. With those you can automate your build in Groovy code (a Jenkinsfile) kept in a repository, with the possibility to add functionality through other plugins building on them.
You can then use the docker pipeline plugin to easily build docker containers, push docker images and run code inside docker containers.
I would suggest building your services inside Docker so that your Jenkins machine does not need all the different dependencies installed (and therefore possibly conflicting versions). Use Docker containers with all the dependencies and run your build code in there with the Docker Pipeline plugin from Groovy.
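A minimal sketch of a scripted Jenkinsfile that runs the build inside a container via the Docker Pipeline plugin (the image tag and build command are placeholders for your stack):

node {
    checkout scm
    // The workspace is mounted into the container, so only Docker is needed on the agent.
    docker.image('maven:3.9-eclipse-temurin-17').inside {
        sh 'mvn -B clean verify'
    }
}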
Install a registry solution to push and pull your docker images to.
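Building and pushing an image could then look like this; the registry URL and credentials ID are assumptions for whatever registry you install:

node {
    checkout scm
    // Build the image from the Dockerfile in the workspace and tag it with the build number.
    def image = docker.build("my-app:${env.BUILD_NUMBER}")
    docker.withRegistry('https://registry.example.com', 'registry-credentials-id') {
        image.push()
        image.push('latest')
    }
}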
Use Pipeline: Shared Groovy Libraries to extract libraries from your Jenkinsfiles so that they can be reused. Those library files should live in their own repository, which your Jenkins knows about and keeps up to date. You could even have an entire pipeline shared between multiple projects that simply set parameters in their Jenkinsfiles.
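For example, a hedged sketch of a shared-library step kept as vars/deployService.groovy in the library repository (the library name and parameters are placeholders):

// vars/deployService.groovy
def call(Map args = [:]) {
    def image = args.image ?: error('deployService: image is required')
    stage("Deploy ${image}") {
        sh "docker pull ${image}"
        sh "docker run -d ${image}"
    }
}

// In a project's Jenkinsfile the library is loaded and the step reused:
// @Library('my-shared-lib') _
// deployService(image: "registry.example.com/my-app:${env.BUILD_NUMBER}")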
A lot of text and no examples. If you think something is interesting and you want to see some code just ask. I am currently setting all this up.
We have a Jenkins system to automate builds from Github; now we are implementing a Saltstack system. I need to integrate Jenkins with the salt-master so that it passes all new builds to the master, which then distributes them across the salt clients (minions).
The Saltstack setup is in the AWS cloud, and the Jenkins machine is outside the cloud in a local setup.
You could enable the salt-api and use the following plugin: https://wiki.jenkins-ci.org/display/JENKINS/saltstack-plugin. Then all of your Jenkins builds can execute states / orchestrations etc. on any minions on a per-job basis.
Another way of doing this is to have a minion running on the salt-master and to install a Jenkins slave on the same box. Then restrict the Jenkins jobs to that slave and execute the commands as if you were at the command line. NOTE: this option requires a bit more configuration.
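A sketch of that second option: a pipeline pinned to the agent running on the salt-master, shelling out to the salt CLI (the label, target and state name are placeholders):

pipeline {
    agent { label 'salt-master' }
    stages {
        stage('Deploy via Salt') {
            steps {
                // Push the freshly built release out to the targeted minions.
                sh "sudo salt 'web*' state.apply myapp"
            }
        }
    }
}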
I'm currently using jenkins with clustered slaves.
There is a multi-configuration job (triggered by git hooks) which does nothing; its purpose is just to pull a git repository on all slaves. This repository stores all the scripts needed by the other Jenkins jobs. Thanks to this, we can update the jobs' build scripts through git and ease the maintenance of our Jenkins instance (I'm the only one familiar with Jenkins, and so set up this mess for my team).
However, the scripts aren't pulled in the same place on each slave:
On two slaves, the repository is pulled into /builds/workspace/<multi-configuration job's name>/label/<slave's name> and into /builds/workspace/<multi-configuration job's name>.
On the last slave, the repository is only pulled into /builds/workspace/<multi-configuration job's name>/label/<slave's name>.
So, I have two questions:
Is it a good idea to use such a multi-configuration job to synchronize build scripts on all the slaves?
Why doesn't the multi-configuration job pull the repository to the same place on all slaves?
About the configuration:
The source code management and build triggers are configured to allow triggering from git hooks (Poll SCM is checked).
There is no build step.
Here is the configuration matrix:
(The master node is unchecked)