Trigger Jenkins with Puppet

Recently our organization decided to move from using the Maven Cargo plugin to using Puppet to deploy our applications. We still have all of our build and test jobs in Jenkins. What I'm trying to figure out is how to trigger a specific Jenkins job when a specific line changes in a Puppet manifest. We use a manifest that lists all of our deployed components and their versions. If I change the version of one of the components, I want a specific test job to be triggered based on which component was changed. Eventually I will also want to roll back the Puppet change if the test fails. Has anyone done something similar?

I haven't used it for your specific use case, but for a "pull" scenario where you want to monitor the contents of the Puppet manifest for changes, the Jenkins FSTrigger plugin should work, as long as your Jenkins job can access the Puppet manifest file. You can set it up to look for changes in the entire file content, or just in a particular part of it.
If you want a "push" scenario to trigger a build as soon as the Puppet manifest is changed, you could write a script that runs after the changes are saved, checks which components have been changed, and triggers a build via the Jenkins CLI.

Related

Global Jenkins script that will be executed before a build is started

I'm looking for a way to automatically execute a globally configured script BEFORE a Jenkins job is started.
My use case: all Jenkins jobs are only allowed to start if a specific environment variable is set.
If the variable is not set, the build should be aborted.
I found the Global Post Script Plugin (https://wiki.jenkins.io/display/JENKINS/Global+Post+Script+Plugin); I need the opposite of what this plugin does.
Maybe there's another solution?
I needed to chmod my /data/jenkins/.npm and /data/jenkins/.sbt directories before running all my builds.
I could either add a prebuild step to every job (redundant and messy) or I could go under Manage Jenkins -> Configure System.
We have a Cloud -> Amazon EC2 configuration section with "Init script" - you can add what you want to run there on slave startup.
However, if you really want to run something for every job (and running it on slave startup isn't enough), you probably don't want to configure it manually for each job.
I suggest you look into the Job DSL, as you can define a preBuildSteps section on any/all jobs, which can then reference a common snippet (e.g. a shell script to run); a minimal sketch follows.
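As a hedged illustration of that approach, here is a Job DSL sketch; the job names and Maven goals are hypothetical, and the chmod paths come from the use case above. preBuildSteps is available on Maven-type jobs; for freestyle jobs the same shell() call would simply go first under steps {}.

// One common snippet, prepended to every generated job.
def fixPermissions = 'chmod -R u+w /data/jenkins/.npm /data/jenkins/.sbt'

['app-build', 'app-test'].each { name ->
    mavenJob(name) {
        preBuildSteps {
            shell(fixPermissions)   // runs before the Maven build proper
        }
        goals('clean package')      // hypothetical
    }
}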
Partial Solution:
Take a look at the Global Pre Script plugin. This plugin is less feature-rich than the Global Post Script plugin, but it should do at least a part of what you want. It notably lacks the option to abort the build, but it is able to manipulate parameters or other preconditions that your jobs rely on. You may also be able to submit a PR to add some means of preventing the build from executing.
Some options:
Modify Global Pre Script to be able to cleanly abort the build from Groovy.
Change your existing jobs to check for a precondition (manually or via script). This is not the most scalable option.
Replace your existing jobs with Pipeline jobs and use Shared Libraries to bottleneck the logic. (This is what I do; a minimal sketch follows this list.)
Generate your jobs using the Job DSL Plugin and enforce a pre-build step in every generated job. (This is what I also do.)
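For the Shared Library route, here is a minimal sketch of a global step that enforces the precondition from the question; the library name, variable name, and file name are assumptions:

// vars/requireEnvVar.groovy in the Shared Library (name hypothetical).
// Called at the top of a Jenkinsfile, it aborts the build cleanly when
// the required environment variable is missing.
def call(String name) {
    if (!env."${name}") {
        error "Required environment variable '${name}' is not set; aborting."
    }
}

// Usage at the top of each Jenkinsfile:
//   @Library('my-shared-lib') _
//   requireEnvVar('DEPLOY_ENV')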
Limitations:
Something to keep in mind for both global plugins: neither provides a proper build step, and the Groovy code executes on the master.
One use case that neither plugin will handle is a between-job slave cleanup/sanity check.

How can Jenkins read a polling text file checked in GIT to trigger a deployment?

Current scenario: build and deployment happen in the development environment, the code is checked in to Git, and the JAR file is placed in Nexus. Then a change request (CR) is raised to deploy the same build to the QA environments. Two parameterized text files are attached to the CR (one contains the Nexus path, the other the website URL); these act as inputs to the parameterized build, along with the selection of the environment. Then the deploy is run.
Target scenario: we want to remove the CR step. Instead, a file containing the parameters that were previously attached to the CR would be pushed to Git; its values should then be copied into the respective parameters of the parameterized Jenkins job, with the environment selected from the dropdown.
What is the best way to achieve this: creating another Jenkins job that reads the parameters from the file, or is there another way?
P.S. We don't want to edit the existing parameterized Jenkins jobs.
Using the Jenkins GitHub Plugin, you can create a separate job with a GitHub build trigger. By adding the GitHub repo (where the parameter file is pushed) to this Jenkins job, you can process the file to get the parameters you want in order to kick off the appropriate Jenkins jobs.
For Jenkins to process the parameters, one option is to use the EnvInject Plugin. (As suggested in this answer.) Another suggestion: Extended Choice Parameter Plugin (from this answer).
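If that separate job is written as a Pipeline, a minimal sketch might look like this. The file name, property keys, parameter names, and downstream job name are all assumptions, and readProperties comes from the Pipeline Utility Steps plugin:

// Reader job: triggered by a push, reads the parameter file from the
// repo, and kicks off the existing parameterized job untouched.
pipeline {
    agent any
    triggers { githubPush() }   // GitHub plugin hook trigger
    stages {
        stage('Read params and trigger deploy') {
            steps {
                script {
                    // deploy.properties: nexusPath=..., siteUrl=..., env=QA
                    def props = readProperties file: 'deploy.properties'
                    build job: 'existing-deploy-job', parameters: [
                        string(name: 'NEXUS_PATH',  value: props['nexusPath']),
                        string(name: 'SITE_URL',    value: props['siteUrl']),
                        string(name: 'ENVIRONMENT', value: props['env'])
                    ]
                }
            }
        }
    }
}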

Deploy web app via Jenkins

I have recently started to mess about with Jenkins and am unsure how to deploy my web app to a basic server. I've gotten into the Pipeline (https://jenkins.io/doc/book/pipeline/) and it seems like a fantastic way to work.
Where I'm a bit stuck is in two spots:
Once my repo is in my workspace within Jenkins, how do I prep it so I am only deploying the files necessary for the application? For example, I don't need my src/ directory or my Vagrantfile when I'm deploying things.
How do I deploy my app to the server? I see examples all over the place, but I am getting a bit lost since there seems to be so many ways to do this. I'm assuming scp or something like that...?
To build off of #2, is there a way to deploy web apps as transactions (in one shot) rather than file-by-file?
Please let me know if I can provide any information for potential answers!
I can't speak to your specific use case but a common way to do this is the build-and-deploy model, where you will have 2 Jenkins jobs. The "build" job will check out from source, run build commands such as maven or make, and lastly will "archive" the build artifacts. The latter is an option under the 'post-build actions' tab at the bottom.
In the "deploy" job, you will grab the artifacts of your choice. You can fetch a single file, all of them, and everything in between. This requires use of the 'Copy Artifact' plug-in and it allows you to copy files generated by other jobs. Now you can run your usual deploy script in the 'Execute Command' box. Most command line paradigms are supported out of the box such as setting environment variables.
The instructions above assume that you want to run your application off of a host that you've provisioned as a Jenkins slave.
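For comparison, here is a sketch of the deploy side as a Pipeline. The job name, artifact filter, and deploy script are assumptions; copyArtifacts is the step contributed by the Copy Artifact plugin, and the build job's equivalent post-build action is archiveArtifacts artifacts: 'target/*.war'.

// Deploy job as a Pipeline (names are hypothetical).
pipeline {
    agent any
    stages {
        stage('Fetch artifact') {
            steps {
                copyArtifacts projectName: 'my-app-build',
                              selector: lastSuccessful(),
                              filter: 'target/*.war'
            }
        }
        stage('Deploy') {
            steps {
                sh './deploy.sh target/*.war'   // your usual deploy script
            }
        }
    }
}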
Use artifacts as mentioned by Paul Back, or a 3rd-party artifact repository server (such as Artifactory) as in the video.
Deploying file-by-file is always tricky and error-prone. Why not spin up a fresh server with the new release (manually verified once)?
Jenkins & Ansible is the answer here. This is how I deploy to production, since I am in no need to use anything like Docker (too many issues with particular app) so have to run the app natively. Quick example would be
You monitor a specific branch in gitlab / github or whatever else and then call a webhook on push / merge etc on that branch, at this point you deal with anything you need to do by running a playbook on the jenkins job that monitors that branch (jenkins).
in my case jenkins and ansible run on the same server. Jenkins runs the ansible playbook that does whatever I need to do.
for example with ansible, I copy certain files that need to be there, run configs / change filenames etc. setup nginx, run composer,
you get the point.
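A minimal sketch of that webhook-triggered job as a Pipeline; the playbook path, inventory, and trigger are assumptions (the GitLab plugin provides its own trigger):

pipeline {
    agent any
    triggers { githubPush() }   // webhook trigger from the GitHub plugin
    stages {
        stage('Deploy') {
            steps {
                // Jenkins and Ansible on the same box, as described above
                sh 'ansible-playbook -i inventory/production deploy.yml'
            }
        }
    }
}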

Jenkins: what started the build

There are two machines with Jenkins: one for building, the second for testing. If a job succeeds on the first machine, it triggers a testing job on the second machine via an HTTP request. For example:
http://<2nd_jenkins_ip>:8080/job/<job_name>/buildWithParameters?BUILD_NUMBER=167
The problem: something seems to launch some of the testing jobs automatically, but it shouldn't. I have deactivated nightly builds, but it happened again, and I can't find the reason.
Question: Is there any way to display the IP/URL of the machine that started the build (e.g. in the console output)? If not, can I find this information elsewhere (e.g. in the Jenkins/Linux logs)?
EDIT1:
Console shows:
Started by user anonymous
Building on master in workspace <my_workspace>
Cleaning local Directory ./test_data
Checking out ...
Following svn checkout and other build steps.
In the Jenkins_HOME directory on the server, look under jobs/<jobname>/builds/<select the last build you want by date>
In there, open the log file (it has no extension) with any text editor; it will usually show a more detailed cause at the top of the file.
There are many ways you can prevent unwanted builds. One way is to configure an Authentication Token under the job's configuration -> Build Triggers -> Trigger builds remotely. Once a token is set, other (rogue/old) scripts cannot trigger the job without providing it, e.g.:
http://<2nd_jenkins_ip>:8080/job/<job_name>/buildWithParameters?token=<TOKEN>&BUILD_NUMBER=167
This, however, does not prevent manual triggering through the UI or other projects triggering through Jenkins' internal methods (not the URL).
I've also seen inconsistent behavior with jobs configured on a schedule/timer, where configuration changes didn't take effect until Jenkins was restarted.

Delegate specific part of build to slave

I have a project where part of the build process is to create a native library on a remote machine. This is currently a manual process outside of the CI builds made by Jenkins.
The setup in question: the Jenkins master server builds a Git-based Maven project, which has a dependency on a native library that can only be built on a specific machine. Jenkins can't compile this module, and because of this, it is currently a manual process.
I would like to install a Jenkins slave on the machine that creates the native library and have it return the compiled files to the Jenkins master, without handling any other part of the build.
I am having trouble figuring out whether this is even possible. The articles I have found on the subject discuss Jenkins slaves as a means of distributing the build, but I want the slave to take responsibility for a small part of the build process and nothing else. The Jenkins master should just send the build request to the slave and wait for the result, instead of trying to compile the code itself.
I do exactly the same. My setup is very similar to what Mark O'Connor and gaige are advising, and I am using the Copy Artifact plugin.
job A: produces a zip file on a Mac
job B runs on slave B (a Windows machine), takes the zip as input and produces an MSI
Here's the important part in the config of job B (a Pipeline-style sketch follows the list):
restrict the job B on the proper slave using labels
make sure job B happens after job A
make sure artifacts from job A are sent to job B before your build
build your stuff
archive artifacts produced by job B
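For comparison, the same flow as a single Pipeline; the labels, scripts, and file names are assumptions. stash/unstash moves the zip between the two slaves; for large artifacts, archiveArtifacts plus the Copy Artifact plugin scales better.

pipeline {
    agent none
    stages {
        stage('Build zip on Mac') {
            agent { label 'mac' }
            steps {
                sh './build-mac.sh'                    // produces app.zip
                stash name: 'mac-zip', includes: 'app.zip'
            }
        }
        stage('Build MSI on Windows') {
            agent { label 'windows' }
            steps {
                unstash 'mac-zip'                      // fetch job A's output
                bat 'package-msi.bat app.zip'          // produces app.msi
                archiveArtifacts artifacts: '*.msi'
            }
        }
    }
}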
Delegating part of a job to a slave is something that would have to be done external to Jenkins, for example, using ssh.
However, as #kan indicates, you most likely want to extract the native library build as a separate job and then have that job execute on a particular slave, or any slave that meets a specific criteria.
To do this, my suggestion would be to use Labels in the node configurations to determine which slaves can be used for building that particular job.
In Jenkins > nodes > <slave node>, use the Labels property to set one-word labels that indicate your specific requirements, such as the OS or processor type.
Then, in the jobs that are node-specific, check Restrict where this project can be run and set the Label Expression to something that meets your criteria. If the criteria is simple, it will just be a single word, if you need a boolean, you can use those as well (such as OSX&&Lion in our case).
I believe this is all in the standard version of Jenkins, without need for a special plugin. Leave me a comment if it isn't and I'll try and diagnose which plugin enables this functionality.
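In Pipeline, the equivalent of "Restrict where this project can be run" is the label expression passed to node; scripted syntax shown, and the build command is an assumption:

// Runs only on nodes whose labels satisfy the expression.
node('OSX && Lion') {
    checkout scm
    sh 'make native-lib'   // hypothetical build command
}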
This problem is solved by using a binary repository manager to centralize your software artifacts. Personally I use Nexus, but it could be something as dumb as a remote file system.
The idea is to publish the built artifact after each Jenkins job (if you don't like Nexus, you could use one of the Publish over plugins) and retrieve it as a build dependency in the next job.
This approach means it no longer matters where the build executes, and has the added advantage of decoupling the build of each module component.
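A sketch of that decoupling in scripted Pipeline, shown together for brevity (in practice these would be two separate jobs; the label and Maven setup are assumptions):

// Job 1: the native library, pinned to its special slave, is built
// and published to the repository manager (here via Maven deploy).
node('native-builder') {
    checkout scm
    sh 'mvn -B deploy'     // pushes the artifact to Nexus
}

// Job 2: any downstream job, on any slave, resolves the library from
// Nexus as an ordinary dependency declared in its pom.xml.
node {
    checkout scm
    sh 'mvn -B package'
}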
