Jenkins - Webhooks or PollSCM

In a scenario where continuous integration is important, which is the better option for triggering builds: webhooks or PollSCM?
This is my current understanding of both methods:
PollSCM is a heavy operation, and relying on it to trigger builds means it has to run frequently. On the other hand, its configuration is easier, and it is safer than webhooks because Jenkins initiates the communication with the code repository.
Webhooks give you the exact build trigger time without constant checking. On the other hand, there are security concerns in accepting connections from outside, and configuration is not as easy as with PollSCM.
I am looking forward to learning the exact pros and cons of both approaches.

If your build cycle is very short (a few minutes) and if you want to trigger a build for each commit, the Webhooks solution is better.
But if your build cycle is longer (15-20 minutes) and you don't need to build for each commit, PollSCM is a good candidate :)
In my company, we are using Git/Stash and Jenkins + a Webhook to trigger a build every time something is committed. For the pull requests, we are using the Stash pullrequest builder plugin for Jenkins.
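For reference, here is a minimal declarative pipeline sketch of the two trigger styles; the cron spec, the githubPush() trigger and the build command are just examples to adapt, not a prescription:

    // Polling variant: Jenkins asks the repository for changes on a schedule.
    pipeline {
        agent any
        triggers {
            pollSCM('H/15 * * * *')   // check roughly every 15 minutes; adjust to taste
        }
        stages {
            stage('Build') {
                steps { sh './build.sh' }   // placeholder build step
            }
        }
    }

    // Webhook variant: replace the triggers block with e.g. githubPush()
    // (GitHub plugin) and let the repository call Jenkins instead:
    // triggers { githubPush() }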

How do I structure Jobs in Jenkins?

I have been tasked with setting up automated deployment and, after some research, settled on Jenkins to get the job done. Prior to this I had approximately zero knowledge of Jenkins, beyond hearing the name. I have no real knowledge of DevOps beyond what I have learnt in the last couple of weeks; no formal training, no actual books, just Google searches.
We are not running a full blown/classic CI/CD process; this is a business decision. The basic requirements are:
Source code will be stored in GitHub.
Pull requests must be peer approved.
Pull requests must pass build/unit/db deploy tests.
Commits to specific branches must trigger a deployment to a related specific environment (Production, Staging or Development).
The basic functionality that I am attempting to support covers (what I currently see as) two separate processes:
On creation of a pull request, application is built, unit tests run, and db deploy tested. Status info must be passed to GitHub.
On commit to one of three specific branches (master, staging and dev) the application should be built, and deployed to one of three environments (production, staging and dev).
I have managed to cobble together a pipeline that does the first task rather well. I am using the Generic Webhook Trigger, and manually handling all steps using a declarative pipeline stored in source control. This works rather well so far and, after much hacking, I am quite happy with the shape of it.
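A stripped-down sketch of that pipeline's shape, based on the requirements above; the concrete commands and the status-reporting steps are placeholders, not the actual Jenkinsfile:

    pipeline {
        agent any
        stages {
            stage('Build')          { steps { sh './build.sh' } }                    // placeholder build command
            stage('Unit tests')     { steps { sh './run-unit-tests.sh' } }           // placeholder test command
            stage('DB deploy test') { steps { sh './scripts/db-deploy-test.sh' } }   // placeholder script
        }
        post {
            // Report the result back to GitHub, e.g. via a commit-status plugin.
            success { echo 'report SUCCESS to GitHub here' }
            failure { echo 'report FAILURE to GitHub here' }
        }
    }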
I am now starting work on the next bit, automated deployment.
On to my actual question(s).
In short, how do I split this up into Jobs in Jenkins?
To my mind, there are 1, 2 or 4 Jobs to be created:
One Job to Rule them All
This seems sub-optimal to me, as the pipeline will include relatively complex conditional logic and, depending on whether the Job is triggered by a Pull Request or a Commit, different stages will be run. The historical data will be so polluted as to be near useless.
OR
One job for handling pull requests
One job for handling commits
Historical data for deployments across all environments will be intermixed. I am a little concerned that I will end up with >1 Jenkinsfile in my repository. Although I see no technical reason why I can't have >1 Jenkinsfile, every example I see uses a single file. Is it OK to have >1 Jenkinsfile (Jenkinsfile_Test & Jenkinsfile_Deploy) in the repository?
OR
One job for handling pull requests
One job for handling commits to Development
One job for handling commits to Staging
One job for handling commits to Production
This seems to have some benefit over the previous option, because historical data for deployments into each environment will not cross-pollute each other. But now we're well over the >1 Jenkinsfile (perceived) limit, and I will end up with Jenkinsfile_Test, Jenkinsfile_Deploy_Development, Jenkinsfile_Deploy_Staging and Jenkinsfile_Deploy_Production. This method also brings either extra complexity (common code in a shared library) or copy/paste code reuse, which I certainly want to avoid.
My primary objective is for this to be maintainable by someone other than myself, because Bus Factor. A real DevOps/Jenkins person will have to update/manage all of this one day, and I would strongly prefer them not to suffer from my ignorance.
I have done countless searches, but I haven't found anything that provides the direction I need here. Searches for best practices make no mention of handling >1 Jenkinsfile, instead focusing on the contents of a single pipeline.
After further research, I have found an answer to my core question. This might not be the absolute correct answer, but it makes sense to me, and serves my needs.
While it is technically possible to have >1 Jenkinsfile in a project, that does not appear to align with best practices.
The best practice appears to be to create a separate repository for each Jenkinsfile, which maps 1:1 with a Job in Jenkins.
To support my specific use case I have removed the Jenkinsfile from my main source code repository. I then created 4 new repositories:
Project_Jenkinsfile_Test
Project_Jenkinsfile_Deploy_Development
Project_Jenkinsfile_Deploy_Staging
Project_Jenkinsfile_Deploy_Production
Each repository contains a single Jenkinsfile and a readme.md that, in theory, contains useful information.
This separation gives me a nice view of the historical success/failure of the Test runs as a whole, and Deployments to each environment separately.
It is highly likely that I will later create a fifth repository:
Project_Jenkinsfile_Deploy_SharedLibrary
This last repository would contain pipeline code that is shared amongst the four 'core' pipelines. Once I have the 'core' pipelines up and running properly, I will consider refactoring what I can into this shared library.
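As a rough sketch of how the 'core' pipelines could consume such a library; the library name and the deployApp step are hypothetical, and the library itself still has to be registered in the Jenkins global configuration:

    // vars/deployApp.groovy in the (hypothetical) shared-library repository
    def call(String environment) {
        echo "Deploying to ${environment}"
        // common deployment steps shared by all environments would live here
    }

    // Jenkinsfile in e.g. Project_Jenkinsfile_Deploy_Staging
    @Library('project-shared-library') _   // library name as configured in Jenkins; assumed here
    pipeline {
        agent any
        stages {
            stage('Deploy') {
                steps { deployApp('staging') }
            }
        }
    }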
I will not accept my own answer at this point, in the hope that more answers are forthcoming.
Here's a proposal I would try for your requirements based on the experience at my last job.
Job1: builds and runs unit tests on every commit on master or whatever your main dev branch is (checks every 20 minutes or whatever suits you); this job usually finds compile and unit test issues very fast
Job2 (optional): run integration tests and various static code checks (e.g. clang-tidy, valgrind, cppcheck, etc.) every night, if the last run of Job1 was successful; this job usually finds lots of things, but probably takes a lot of time, so let it run only at night
Job3: builds and tests every pull request for release branches; this way you get some information in your pull request about whether it's mature enough to be merged into the release branches
Job4: deploys to the appropriate environment on every commit on a release branch; on dev and staging you could probably trigger some more tests, if you have them
So Job1, Job2 and Job3 should run all the time. If pull requests to your release branches are approved by QA (i.e. reviews OK and tests successful) and merged to release branches, the deployment is done by Job4 automatically.
It depends on your requirements and your dev process whether you want to trigger Job4 only manually instead.
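To make the scheduling concrete, the triggers for Job1 and Job2 could look roughly like this if they were pipelines; the cron strings are only examples of "every 20 minutes" and "at night":

    // Job1: poll the main dev branch roughly every 20 minutes
    triggers {
        pollSCM('H/20 * * * *')
    }

    // Job2: run nightly (around 2 AM, spread by H); checking that the last
    // Job1 run succeeded would still need to be done inside the job or via a plugin.
    triggers {
        cron('H 2 * * *')
    }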

Mark Jenkins Job chain as failed if any downstream job fails

I have a set of chained freestyle Jenkins jobs which we use as a build pipeline for several projects. We recently integrated it with our source repository management (Phabricator) so it reports the continuous integration build result back to the merge request (whether it failed or passed).
Due to some limitations with Phabricator, the way we trigger the pipeline is through an AWS Lambda function which knows which job is first in the chain and starts it; since it's all chained together, triggering the first job executes the whole pipeline.
The issue is that technically we are triggering a single job (which triggers a downstream job, and so on). If the first job passes, it reports a green build to Phabricator even if the second job fails; it won't wait for any of the downstream projects to finish, so as long as the first one passed, it will say the build is green.
As I see it, there are two questions that come to my mind to solve this:
1. Is there a way to mark the job as failed if the downstream project fails?
2. Is there a way to trigger the chain instead of a single job? That way I think it would return the result of the chain instead of just the first job.
Any thoughts and advice are welcome.
Have you considered rewriting your Jenkins chain as a single pipeline job? I think this would make your life a lot easier.
Otherwise, you need to pass the Phabricator build ID down through your chain, and only post back a success response in the final job, with a fail response from any job that fails.
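If a full rewrite is too much at once, one intermediate option is an orchestrator pipeline that the Lambda triggers instead of the first freestyle job; the build step waits for each downstream job and propagates failures by default, so Phabricator gets one result for the whole chain. A rough sketch, with made-up job names:

    pipeline {
        agent none
        stages {
            // Each `build` call waits for the named job and fails this pipeline
            // if that job fails (propagate defaults to true).
            stage('Build')   { steps { build job: 'project-build' } }
            stage('Test')    { steps { build job: 'project-test' } }
            stage('Package') { steps { build job: 'project-package' } }
        }
    }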

Custom Jenkins scheduler

We're seeing a problem with Jenkins and the scheduling of builds. Specifically, we trigger Jenkins to build a pipeline of work with every push to every branch of our git repo. On its own, the whole pipeline can take from 10 to 20 minutes to build. This can cause a problem if pushes to a branch happen faster than the builds complete. Multiply that by the twenty or thirty branches that are in development.
So, I'd like to be able to automatically deprioritise any scheduled builds on Jenkins if they are triggered on a Git commit sha that is no longer the tip of its branch. This is just one example of a factor that might indicate a desired priority. Others would be that branches with open pull requests should have higher priority than those without; or manual input in order to prioritise a PR or branch that needs feedback immediately.
Is there any way to programmatically interact with the queue of jobs on Jenkins and reorder it?
There is the Priority Sorter Plugin, but as far as I know this assigns each build a static priority. I would like to dynamically reprioritise items in the queue based on external info (e.g. from git).
I've found reference to two other plugins whose names indicate that they might do what I want, but I can't find any meaningful documentation on them. The former doesn't provide the options it claims to, and the latter doesn't even exist in the plugins repository. Neither seems to be maintained.
My alternatives seem to be
write my own implementation of hudson.model.Queue, which seems like overkill
maintain a separate queueing service that triggers individual jobs on Jenkins, in which case what is Jenkins even for?
Am I missing something obvious? I can't be the only person who wants more fine-grained control of Jenkins build ordering.
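One partial option, short of a custom scheduler: the queue can be inspected and pruned (though not truly reordered) from the Script Console or a system Groovy step. A rough sketch that cancels queued builds whose commit is no longer the branch tip; the COMMIT_SHA/BRANCH_NAME parameters and the tip map are assumptions you would have to supply yourself:

    import jenkins.model.Jenkins
    import hudson.model.ParametersAction

    // Hypothetical map of branch -> current tip sha, populated outside Jenkins
    // (e.g. from a git ls-remote run).
    def tipShaByBranch = ['feature/foo': 'abc123', 'feature/bar': 'def456']

    def queue = Jenkins.instance.queue
    queue.items.each { item ->
        def params = item.getAction(ParametersAction)
        def sha    = params?.getParameter('COMMIT_SHA')?.value
        def branch = params?.getParameter('BRANCH_NAME')?.value
        if (sha && branch && tipShaByBranch[branch] && tipShaByBranch[branch] != sha) {
            println "Cancelling stale queued build for ${branch} @ ${sha}"
            queue.cancel(item)
        }
    }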

Jenkins: how to block a job to make it unrunnable

This is not just another question about concurrent job execution in Jenkins. The problem I have is that there are several jobs that run independently from one another. When they finish, it should be possible to run a manual job. The condition, though, is that all those automated jobs must be in a successful state. Otherwise it should not be possible to run this manual job. It should also not be possible to run, or even schedule a run of, this manual job while those other jobs are running.
I searched for the answer everywhere and checked every possible plugin that serves synchronization, but I could not figure out how to solve the above problem.
IMHO the Delivery Pipeline Plugin (see https://wiki.jenkins-ci.org/display/JENKINS/Delivery+Pipeline+Plugin for the download and http://www.infoq.com/articles/orch-pipelines-jenkins for a thorough description) could do what you want.
You can run a lot of jobs (in parallel or not) and, when (and only when) they succeed, run another job (or more). You can even add manual steps (requiring a button click before your pipeline may continue).
Everything is configurable, and quite stable at the moment.
No-one should be able to manually (or otherwise) start a job that is in "waiting state" for other jobs to finish.
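If the automated jobs can be expressed as stages of a single pipeline instead of separate jobs, the same behaviour can be sketched like this; the stage contents are placeholders. Only when every parallel stage succeeds does the button-gated manual stage become reachable:

    pipeline {
        agent any
        stages {
            stage('Automated jobs') {
                // If any parallel branch fails, the pipeline never reaches the
                // manual stage below, so it cannot be run or approved.
                parallel {
                    stage('Job A') { steps { sh './run-job-a.sh' } }  // placeholder
                    stage('Job B') { steps { sh './run-job-b.sh' } }  // placeholder
                }
            }
            stage('Manual job') {
                input { message 'All automated jobs succeeded. Run the manual job?' }
                steps { sh './run-manual-job.sh' }  // placeholder
            }
        }
    }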
Regarding this question:
Otherwise it should not be possible to run this manual job. It should also not be possible to run or even schedule run of this manual job if those other jobs are running.
You can use the Throttle Concurrent Builds Plugin and create a category which will include your automated jobs and the manual jobs.
If one automated job is running, it will be impossible to launch the manual jobs.
Regarding your first question, did you have a look at the Join plugin?
Cheers
The Promoted Builds Plugin (https://wiki.jenkins-ci.org/display/JENKINS/Promoted+Builds+Plugin) can also be an option. Set up promotions in such a way that manual approval is required and the promotion only succeeds if the automated jobs are done.

Stop a scheduled Jenkins build if it has failed more than 10 times?

I set my Jenkins job to build automatically many times a day via the scheduler.
If the build fails, it sends mail to my team.
However, I don't want to spam the mailbox. How can I set a condition to stop the build scheduler if the build has failed more than 10 times?
Rather than scheduling the job continuously, try the continuous integration paradigm, like this:
Unconditionally schedule the job only rarely. Perhaps once per day, just to ensure that any external factors (missing resources, changed interfaces, etc.) haven't come into play.
Trigger the job when any known source or dependency changes (e.g. source code, jar in your artifact repository, DB schema change, etc.)
Use a suitable plugin to retry failures.
I recommend the Naginator plugin for this. It can nag a limited number of times, and it auto-throttles: it nags frequently to begin with, then less frequently after a protracted period of failure.
Even if you don't change how the job is triggered, Naginator is probably a good solution for you. Use it to send your emails, instead of using an unconditional on-failure step.
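If you do stay with a plain scheduled pipeline job, one way to implement the "10 failures" condition yourself is to walk back through the previous builds before mailing; a rough sketch, where the threshold and the address are placeholders:

    // Helper defined at the top of the Jenkinsfile: count how many of the
    // immediately preceding builds failed in a row.
    def consecutiveFailures() {
        int count = 0
        def run = currentBuild.previousBuild
        while (run != null && run.result == 'FAILURE') {
            count++
            run = run.previousBuild
        }
        return count
    }

    // e.g. inside post { failure { script { ... } } }
    if (consecutiveFailures() < 10) {
        mail to: 'team@example.com',   // hypothetical address
             subject: "Build failed: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
             body: 'See Jenkins for details.'
    } else {
        echo 'Already failed 10+ times in a row; suppressing further mails.'
    }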
