In my task, I need to trigger the same job again if its current build fails.
I don't want it to trigger if the build succeeds.
Is there a plugin or any other method available to do this?
You can use the Downstream Ext Plugin for this.
With it, the downstream job (my_project in this example) is triggered only if this build fails.
Note: if you want to trigger the same job, be aware that there is a chance of creating an infinite loop. If the build always fails, it will be triggered over and over again...
The best solution is to use the Naginator Plugin.
If the build fails, it will be rescheduled to run again after the time you specified. You can choose how many times to retry running the job. For each consecutive unsuccessful build, you can choose to extend the waiting period.
Jenkins Naginator Plugin
The Jenkins Naginator Plugin can be used to automatically reschedule a build after a failure.
This becomes very useful in scenarios where the build fails for unavoidable reasons, such as database connectivity being lost or the file system being inaccessible.
Configurations
Rescheduling is configured as a post-build action. There are a number of options to pick from based on your expected (unavoidable) build failure reason.
Read further on the configurations here with a screenshot.
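As a point of comparison: if the job were a Pipeline job instead of a freestyle job, similar retry-on-failure behaviour can be sketched with the built-in retry step, retrying the failing steps rather than re-triggering the whole job. The attempt count, delay and script name below are only illustrative assumptions, not Naginator settings.

    // Pipeline-style sketch of "retry on failure with a pause between attempts".
    // Retrying the failing steps avoids re-triggering the whole job, so the
    // infinite-loop concern above does not apply in the same way.
    retry(3) {
        try {
            sh './run_flaky_step.sh'          // hypothetical step that sometimes fails
        } catch (err) {
            sleep time: 60, unit: 'SECONDS'   // pause before retry() makes the next attempt
            throw err
        }
    }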
Related
All my Jenkins jobs are triggered both by a GitHub webhook and via a scheduled build once per week. The build process is heavily cached to make the webhook CI builds finish quickly.
I would like to add a line to my build script that wipes the cache during the weekly scheduled build and makes it build from scratch. Is there a variable in the build script that identifies whether a build was triggered by the webhook or by the schedule?
Maybe the envInject plugin will give you what you need?
This plugin also exposes the cause of the current build as an
environment variable. A build can be triggered by multiple causes at
the same time e.g. an SCM Change could have occurred at the same time
as a user triggers the build manually.
The build cause is exposed as a comma separated list:
BUILD_CAUSE=USERIDCAUSE, SCMTRIGGER, UPSTREAMTRIGGER, MANUALTRIGGER
In addition, each cause is exposed as its own environment variable:
BUILD_CAUSE_USERIDCAUSE=true
BUILD_CAUSE_SCMTRIGGER=true
BUILD_CAUSE_UPSTREAMTRIGGER=true
BUILD_CAUSE_MANUALTRIGGER=true
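Building on that, here is a rough sketch of how the weekly cache wipe could look as an Execute Groovy script build step (Groovy plugin). It assumes the scheduled build shows up as BUILD_CAUSE_TIMERTRIGGER and that the cache lives in a .build-cache directory inside the workspace; both are placeholders for your actual setup, and the same check can just as easily be done as a plain shell condition in your existing build script.

    // Sketch of an "Execute Groovy script" build step. Assumes EnvInject exposes
    // BUILD_CAUSE_TIMERTRIGGER=true for cron-triggered builds (placeholder names).
    def scheduled = System.getenv('BUILD_CAUSE_TIMERTRIGGER') == 'true'
    if (scheduled) {
        // Weekly scheduled build: wipe the cache so this build starts from scratch.
        def cache = new File(System.getenv('WORKSPACE'), '.build-cache')
        if (cache.exists()) {
            cache.deleteDir()
        }
    }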
In Jenkins, I have used the Build Timeout plugin to time out and abort the build if it takes more than a specified amount of time.
Is there a way to trigger another job immediately when this particular job fails with a timeout, so that I can make my work process more efficient?
You can accomplish this by installing the Parameterized Trigger Plugin on Jenkins.
(https://wiki.jenkins.io/display/JENKINS/Parameterized+Trigger+Plugin)
Once it is installed, add Trigger parameterized build on other projects as a Post-build Action; from there you can select the Failure option that you need to trigger the next job.
I have one Jenkins job that triggers another job via "Trigger/call builds on other projects."
The triggered downstream job sometimes fails due to environmental reasons. I'd like to be able to restart the triggered job multiple times until it passes.
More specifically, I have a job which does the following:
1. Triggers a downstream job to configure my test environment. This process is sensitive to environmental issues and may fail. I'd like this to restart multiple times over a period of about an hour or two until it succeeds.
2. Triggers another job to run tests in the configured environment. This should not restart multiple times, because any failure here should be inspected.
I've tried using Naginator for step 1 above (the configuration step). The triggered job does indeed re-run until it passes. Naginator looks so promising, but I'm disappointed to find that when the first execution of the job fails, the upstream job fails immediately despite a subsequent rebuild of the triggered job passing. I need the upstream job to block until the downstream set of jobs passes (or fails to pass) via Naginator.
Can someone help me know what my options are to accomplish this? Can I configure things differently for the upstream job so it relates to the Naginator-managed job better? I'm not wed to Naginator and am open to other plugins or options.
In case it's helpful, my organization is currently using Jenkins 1.609.3, which is a few years old. I'd consider upgrading if that leads to a solution.
I've got a Jenkins job that is intended to do the following:
- Build a project and deploy it to a test server
- Run tests
- If the tests fail, roll back the server to the previous version
- If the tests succeed, update the version in our source control system
Because we have a single test server, we need to ensure that Jenkins is only running a single version of this job at a time. Unfortunately, we can't seem to find a way to run a job on failure and keep the upstream job from executing while the downstream job is running.
Is there an easy way to do this? Is there a better way?
The Jenkins Post Build Task plugin allows you to run tasks in a job after a failure. Rolling the server back sounds more like a task than a job, so that might suit.
Otherwise, there are a couple of plugins that allow for more complex pipelining features. The Pipeline Plugin seems to be the most popular at the moment.
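If you go the Pipeline route, a minimal sketch of the whole flow could look something like the Jenkinsfile below. Folding everything into one pipeline also sidesteps the upstream/downstream blocking problem, because disableConcurrentBuilds() keeps runs serialized against the single test server. The deploy.sh, run_tests.sh, rollback.sh and tag_release.sh scripts are hypothetical placeholders for your own steps, not a drop-in solution.

    // Declarative Pipeline sketch: build/deploy, test, roll back on failure,
    // record the new version on success. Script names are placeholders.
    pipeline {
        agent any
        options {
            disableConcurrentBuilds()   // only one run at a time against the test server
        }
        stages {
            stage('Build and deploy') {
                steps { sh './deploy.sh' }
            }
            stage('Test') {
                steps { sh './run_tests.sh' }
            }
        }
        post {
            failure { sh './rollback.sh' }      // roll the test server back to the previous version
            success { sh './tag_release.sh' }   // update the version in source control
        }
    }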
In the job configuration, under Advanced Project Options (just before the SCM part), click the Advanced... button. You can now choose to Block build when upstream/downstream is executing.
As for running conditional steps on failure:
- Use Post Build Tasks as Paul suggested, or
- Configure logic using Conditional Build steps
In Jenkins, if one build is currently running and the next one is pending, what should I do so that the running build gets aborted and the pending one starts running, and so on?
I have to do this for a few projects, and each project has a few jobs in it. I tried saving the build number as an environment variable in a text file (build_number.txt) and using that number to abort the previously triggered build, but creating a build_number.txt file for each job does not look efficient, and I would end up with many build-number files for every job in every project.
Can anyone please suggest a better approach?
Thanks
Based on the comments, if sending too many emails is the actual problem, you can use Poll SCM to poll once every 15 minutes or so, or even specify a quiet period for the job. This ensures that a build is taken at most once every 15 minutes. Users should test locally before they commit. But if Jenkins itself is used to verify the commits, I don't see anything wrong with sending an email when a build fails. After all, committers are supposed to know about the failure, even if they fixed it in a later update, intentionally or unintentionally.
But if you still want to abort a running job when there are new updates, you can try the following. Let's call the job to be aborted JOB A:
- Create another job that listens for the same updates as the job that needs to be aborted.
- Add a build step that executes a Groovy script.
- In the Groovy script, use the Jenkins APIs to check whether JOB A is running; if it is, use the APIs to abort it (see the sketch below).
Jenkins APIs are available here
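As a rough sketch, a system Groovy script along these lines could do the check-and-abort. 'JOB A' is a placeholder for the real job name, and it needs to run as a system Groovy script so it has access to the Jenkins object model.

    // System Groovy script sketch: abort JOB A if a build of it is still running.
    import jenkins.model.Jenkins
    import hudson.model.Job
    import hudson.model.Result

    def job = Jenkins.instance.getItemByFullName('JOB A', Job)
    def lastBuild = job?.lastBuild
    if (lastBuild != null && lastBuild.isBuilding()) {
        // Interrupt the executor with an ABORTED result, like pressing the stop button.
        lastBuild.executor?.interrupt(Result.ABORTED)
    }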