All my Jenkins jobs are triggered both by a GitHub webhook and by a scheduled build once per week. The build process is heavily cached so that the webhook-triggered CI builds finish quickly.
I would like to add a line to my build script that wipes the cache during the weekly scheduled build and makes it build from scratch. Is there a variable available in the build script that identifies whether a build was triggered by a webhook or by the schedule?
Maybe the EnvInject plugin will give you what you need?
This plugin also exposes the cause of the current build as an
environment variable. A build can be triggered by multiple causes at
the same time e.g. an SCM Change could have occurred at the same time
as a user triggers the build manually.
The build cause is exposed as a comma separated list:
BUILD_CAUSE=USERIDCAUSE, SCMTRIGGER, UPSTREAMTRIGGER, MANUALTRIGGER
In addition, each cause is exposed as a separate environment variable too:
BUILD_CAUSE_USERIDCAUSE=true
BUILD_CAUSE_SCMTRIGGER=true
BUILD_CAUSE_UPSTREAMTRIGGER=true
BUILD_CAUSE_MANUALTRIGGER=true
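Building on that, a minimal sketch of the cache-wipe line for the build script, assuming EnvInject also exposes timer-triggered builds as `BUILD_CAUSE_TIMERTRIGGER=true` (the cause list above doesn't show it, but that is the conventional name); the variables are simulated here so the sketch runs standalone:

```shell
# Simulate what EnvInject would set for a weekly scheduled build,
# and use a temp directory as a stand-in for the real cache path.
BUILD_CAUSE_TIMERTRIGGER=true
CACHE_DIR="$(mktemp -d)"
touch "$CACHE_DIR/cached-artifact"

# In the real job, only this if-block goes into the build script:
# wipe the cache only when the build was started by the schedule.
if [ "${BUILD_CAUSE_TIMERTRIGGER:-false}" = "true" ]; then
    echo "Scheduled build: clearing cache for a from-scratch build"
    rm -rf "$CACHE_DIR"
fi
```

Webhook-triggered builds would leave the variable unset, so the `:-false` default keeps the cache intact for them.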
Ideally, I need a Jenkins pipeline that runs immediately after I start the execution, checking out code and running a build, but performing the deployment only at a particular date/time. So, would it be possible to schedule a particular step in a Jenkins pipeline? This step would be executed at a date/time provided by user input. If that's not possible, is there an alternative way of doing it?
There is no way to schedule individual steps of a Jenkins job, but the following approach may help you achieve what you are trying to do.
Step-1: Install the "Schedule Build Plugin", which gives you the capability to schedule a job to run at a certain date and time.
Step-2: Create a second pipeline job containing the code that you want to execute at that date and time. (Note: you will not schedule this job directly.)
Step-3: In the original pipeline, remove the code you moved into the second job, and add a build step that schedules a build of the job created in Step-2.
I have a (declarative) Jenkins Pipeline that is doing builds and tests continuously. When successful, the application should be deployed on particular test environments once a day, based on some schedule.
For instance, if the build was successful, and current time is
between 11:00 and 14:00, deploy to TestA, but just once a day;
between 14:00 and 18:00 deploy to TestB, but also just once a day;
etc.
I would be able to do the time-slot handling in some Groovy code, but I'm not sure how to "remember" whether there has already been a deployment in this time period today. Of course, it is useless to store that information in the workspace, since later builds may be executed elsewhere.
So what options do I possibly have?
Store a marker file in a shared network location, and check this file and its timestamp in later builds to decide whether a deploy is required. This would probably work, but introduces a dependency on an external resource.
Can I somehow "mark" the Jenkins build when deploying, so that later builds can iterate through previous builds and look for such a marker? For example, by archiving some small text file with the build?
Alternatively, is there any plugin that supports this scenario?
Or any completely different idea?
This seems to be a frequent scenario in CD pipelines, so I wonder how this is done in the wild... Thanks for any hints!
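The marker-file option above can be sketched roughly as follows; a temp directory stands in for the shared network location, and the slot name is just an example:

```shell
# Sketch of the marker-file idea: deploy at most once per day per slot.
# MARKER_DIR would really be a shared path reachable from every agent.
MARKER_DIR="$(mktemp -d)"
SLOT="TestA"
MARKER="$MARKER_DIR/deployed-$SLOT-$(date +%Y-%m-%d)"

deploy_once() {
    if [ -e "$MARKER" ]; then
        echo "already deployed to $SLOT today, skipping"
    else
        echo "deploying to $SLOT"
        # ... actual deployment would go here ...
        touch "$MARKER"   # remember today's deployment for later builds
    fi
}

deploy_once   # first run deploys and writes the marker
deploy_once   # a second run the same day is a no-op
```

Because the marker name embeds today's date, tomorrow's builds see no marker and deploy again, with no cleanup step needed beyond eventually pruning old files.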
You should put the build and deploy stages in separate pipelines. That way the build can occur independently, and the deployment can be triggered by a timer to run exactly once per day.
In this case you'd want the build pipeline to archive its artifacts, so that the deploy pipeline can always deploy a successful build. The Copy Artifacts plugin can be used to get the build pipeline's artifacts into the deploy pipeline's workspace.
There are two jobs on my Jenkins server:
job1: builds every 10 minutes to scan for events; when one occurs, it triggers the downstream job2
job2: a normal job, usually run only when an event occurs.
Problem:
Too many useless Jenkins builds for job1 clutter the UI, since it runs frequently.
It would be good if a build could be discarded when it doesn't trigger the downstream job.
Solution so far: using the Discard Old Builds plugin in a post-build action is one direction, but I have no clue how to get it to work nicely.
With the hints from @JamesD's comments, I was able to use several plugins to achieve this:
Archive Artifacts plugin: archives the param.txt file that is used to pass parameters to the downstream jobs
Groovy Postbuild plugin: adds a Groovy script that checks whether param.txt exists; the build result is set to ABORTED if it doesn't
Discard Old Builds plugin: discards the aborted builds
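The existence check amounts to something like the following plain-shell sketch (the actual plugin runs Groovy with a `manager` object that can set the build result; the file name is taken from the answer above):

```shell
# Plain-shell sketch of the Groovy Postbuild check: if the artifact that
# feeds the downstream job was never written, flag this build for discard.
WORKSPACE_DIR="$(mktemp -d)"        # stand-in for the job workspace
PARAM_FILE="$WORKSPACE_DIR/param.txt"

if [ -f "$PARAM_FILE" ]; then
    RESULT="SUCCESS"   # param.txt exists: the downstream job was triggered
else
    RESULT="ABORTED"   # no param.txt: mark the build so it gets discarded
fi
echo "build result would be set to: $RESULT"
```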
In my task, I need to re-trigger the same job if its current build fails.
I don't want it to be triggered if the build succeeds.
Is there a plugin or any other method available to do this?
You can use Downstream Ext Plugin for this:
my_project will be triggered only if this build fails.
Note: if you want to trigger the same job, you should realize that there is a chance of an infinite loop. If the build always fails, it will be triggered over and over again...
The best solution is to use Naginator Plugin.
If the build fails, it will be rescheduled to run again after the time you specified. You can choose how many times to retry running the job. For each consecutive unsuccessful build, you can choose to extend the waiting period.
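A hedged shell sketch of the retry behavior described above (Naginator itself is configured in the UI; `run_build` is a hypothetical build command that keeps failing):

```shell
# Retry a failing build command, doubling the wait after each consecutive
# failure, up to a fixed number of attempts -- roughly what Naginator does.
run_build() { false; }   # hypothetical build step that always fails

attempts=0
max_attempts=3
delay=1
until run_build; do
    attempts=$((attempts + 1))
    if [ "$attempts" -ge "$max_attempts" ]; then
        echo "giving up after $attempts attempts"
        break
    fi
    echo "build failed, retrying in ${delay}s"
    sleep "$delay"
    delay=$((delay * 2))   # extend the waiting period each time
done
```

The growing delay is what keeps a persistently broken build from hammering the executor, while still retrying promptly after a transient failure.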
Jenkins Naginator Plugin
Jenkins Naginator Plugin can be used to automatically reschedule a build after a failure.
This becomes very useful in scenarios where the build fails for unavoidable reasons, such as lost database connectivity or an inaccessible file system.
Configurations
Rescheduling configuration is available as a post-build action. There are a number of configurations to pick from, based on your expected (unavoidable) build failure reason.
Read further on the configurations here with a screenshot.
In Jenkins, if one build is currently running and the next one is pending, what should I do so that the running one gets aborted, the pending one starts running, and so on?
I have to do this for a few projects, and each project contains a few jobs. I tried saving the build number as an environment variable in a text file (build_number.txt) and using that number to abort the previously triggered build, but creating a build_number.txt file for every job is not efficient, and I would end up creating many such files across all the projects.
Can anyone please suggest a better approach?
Thanks
Based on the comments, if sending too many emails is the actual problem, you can use Poll SCM to poll once every 15 minutes or so, or even specify a quiet period for the job. This ensures that a build is taken at most once in 15 minutes. Users should test locally before they commit. But if Jenkins itself is used for verifying the commits, I don't see anything wrong in sending an email when a build fails. After all, the users are supposed to know that, whether they fixed it in a later update intentionally or unintentionally.
But if you still want to abort a running job when there are updates, you can try the following. Let's call the job to be aborted JOB A:
Create another job that listens for the same updates as the job that needs to be aborted
Add build step to execute groovy script
In the Groovy script, use the Jenkins APIs to check whether JOB A is running. If it is, use the APIs again to abort it.
Jenkins APIs are available here
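The same check-and-abort logic can also be done over the Jenkins REST API rather than the Java API from Groovy; here is a hypothetical sketch where the server URL and job name are assumptions, and `curl` is stubbed with a shell function so the sketch runs offline:

```shell
# Hypothetical check-and-abort via the Jenkins REST API.
JENKINS_URL="http://jenkins.example.com"   # assumption: your server URL
JOB="JOB-A"

# Stub standing in for the real server: pretend a build is in progress.
curl() { echo '{"building":true}'; }

ABORTED="no"
# lastBuild/api/json reports "building":true while a build is running.
if curl -s "$JENKINS_URL/job/$JOB/lastBuild/api/json" | grep -q '"building":true'; then
    ABORTED="yes"
    echo "aborting running build of $JOB"
    # The real abort call would be:
    # curl -X POST "$JENKINS_URL/job/$JOB/lastBuild/stop"
fi
```

In a real setup you would drop the stub, add authentication (an API token), and a CSRF crumb if your instance requires one.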