How to lock builds on CircleCI

The idea is that when two or three commits are pushed to a branch in quick succession, CircleCI shouldn't start all the build jobs for a given step concurrently. It should wait until the first job has finished and only then run the next one in the queue.
I have tried the two orbs below, but with no luck. Please help.
* https://circleci.com/orbs/registry/orb/gastfreund/dynamo-lock?version=1.0.1
* https://circleci.com/orbs/registry/orb/freighthub/lock

You can take a look at https://circleci.com/developer/orbs/orb/eddiewebb/queue#usage-examples as it seems quite robust, is still maintained, and has pretty good documentation. It also lets you define a custom job and queue either the entire workflow or just specific jobs.
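For example, a minimal config sketch (the orb version, wait time, and job names here are illustrative; check the usage-examples page for current parameters):

version: 2.1
orbs:
  queue: eddiewebb/queue@1.5.0   # pin to the latest published version
workflows:
  build-and-deploy:
    jobs:
      - build
      # Blocks here until no older workflow on this branch is still running
      - queue/block_workflow:
          time: '30'             # give up after waiting 30 minutes
          requires:
            - build
      - deploy:
          requires:
            - queue/block_workflow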

Related

How do I prevent code in queued delayed jobs becoming invalid after a deploy?

We use delayed_job in a fairly high traffic Rails app, so there are always jobs in the queue.
After a deploy, when you restart delayed_job, there is potential for the code in already-queued jobs to become invalid. For example, if the deploy removed a class used in a job, that job would fail.
Is there a standard practice for getting round this?
Yes, there is: multi-stage deployments (or whatever the proper name for this is).
Step 1: Deploy a new version of the code that allows both new and old jobs to run at the same time. For example, if you want to remove some class, make new jobs not use it, but don't remove the class yet, so that it's still available to old jobs.
Step 2: Wait until all old jobs are processed.
Step 3: Deploy a change that removes the class and the old job version.
Note: you could implement step 1 by copying the old job code and giving it a new "version". If you had, say, ProjectJobs::Import, you copy it as ProjectJobs::Import2 (this version won't use the class that you want to remove). Since they're different classes, DJ won't have any problem picking the appropriate implementation.
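A minimal Ruby sketch of that versioning idea (LegacyParser and CsvImporter are made-up stand-ins for the class you want to remove and its replacement):

module ProjectJobs
  # Old job version: kept around unchanged so that jobs already sitting
  # in the queue can still find it after the deploy.
  class Import
    def perform
      LegacyParser.parse_file('import.csv')   # uses the class we want to remove
    end
  end

  # New job version: enqueued by all new code, with no LegacyParser
  # dependency. Once the queue holds no more Import entries, delete
  # Import and LegacyParser in a follow-up deploy.
  class Import2
    def perform
      CsvImporter.import('import.csv')
    end
  end
end

# New code enqueues only the new version:
Delayed::Job.enqueue(ProjectJobs::Import2.new)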
I would say that you have to:
* stop all workers right after they finish their current jobs
* deploy the changes
* start the workers again
I think that in your case, this code might be helpful.

How may I configure a Jenkins job to run at a specific time if an upstream job succeeds?

My use case:
Job A is set to run Monday through Friday at 18:00.
Job B is dependent upon Job A succeeding but should only run Monday through Friday at 06:00. (Monday morning's run would depend upon Friday evening's run). I prefer set times rather than delays between jobs.
On any given morning, if I see that Job A failed (and thus Job B never ran), I would like to be able to fix and re-run Job A, then immediately trigger Job B.
What I have found so far only offers part of this use case. I have tinkered with Pipeline and recently upgraded my Jenkins instance to 2.89.3, so I have access to the most recent features and plugins. Filesystem triggering seems doable.
Any suggestions are appreciated.
You can use the options available under "Build Triggers".
Ex: [screenshot of the "Build Triggers" configuration section]
Hope this works for you!
This is a tricky use case, as generally you want a job to follow on immediately from another one rather than waiting for potentially three days. It is further complicated by wanting the job to run straight away when you need it to.
I do not believe there is an "I have finished, so kick off this job at this time" downstream trigger. So for the first part, the only things I can think of are:
* Job A kicks off Job B as soon as it finishes, and Job B sits there with a time checker in it and starts its task when the time matches; or
* Job A archives a file with its exit status as an artifact, and Job B has a cron trigger for 06:00 Monday to Friday that picks up this artifact and then runs, or doesn't, depending on the file contents.
For the second part, you could get the build cause (see "how to get $CAUSE in workflow" for the pipeline implementation, and vote on https://issues.jenkins-ci.org/browse/JENKINS-41272 to get the feature when using the sandbox), and then get your pipeline to behave differently depending on the trigger.
That is, if you went for the second option above, then in Job B you could say: if triggered by cron, read the artifact and act accordingly; if triggered by upstream, just run regardless. A sketch combining both ideas is below.
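A hedged sketch of Job B along those lines (declarative pipeline; the job name and status-file name are made up, copying the artifact assumes the Copy Artifact plugin, and currentBuild.getBuildCauses needs a reasonably recent Pipeline version):

pipeline {
    agent any
    triggers { cron('0 6 * * 1-5') }   // 06:00 Monday to Friday
    stages {
        stage('Gate on Job A') {
            steps {
                script {
                    // Non-empty when Job A triggered this build directly (e.g. after a manual fix)
                    if (currentBuild.getBuildCauses('hudson.model.Cause$UpstreamCause')) {
                        echo 'Triggered by Job A upstream; running regardless.'
                    } else {
                        // Cron trigger: inspect the status file Job A archived
                        copyArtifacts projectName: 'job-a', selector: lastSuccessful()
                        def status = readFile('job-a-status.txt').trim()
                        if (status != 'SUCCESS') {
                            error "Job A reported ${status}; skipping this run."
                        }
                    }
                }
            }
        }
        stage('Do the work') {
            steps { echo 'Running Job B tasks...' }
        }
    }
}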

Jenkins Pipeline and huge amount of parallel steps

I have searched the whole internet for two weeks now, and asked on freenode IRC and in the Jenkins user group mailing list, but got no answer. So here I am; you are my last hope (no pressure).
I have a Jenkins scripted pipeline that generates hundreds of parallel branches that have to run simultaneously on hundreds of slave nodes. At the moment it looks like the Jenkins Blue Ocean user interface is not suited for that: we reach a point where all the steps can't be displayed.
I need to provide some background to help you understand our need. We have a huge project in our company with thousands of Behat/Selenium tests, and running them sequentially now takes more than 30 hours. We implemented a basic solution some time ago where we use a queuing system (RabbitMQ) to store all the tests, with consumers that run the tests by downloading the source code from Jenkins and uploading artifacts back to Jenkins, but this is not as scalable as Jenkins native slaves and it is not maintainable enough (e.g. we don't benefit from real-time output logs and usage statistics).
I know there is an open issue that describes the problem here: https://issues.jenkins-ci.org/browse/JENKINS-41205. But, basically, I need a workaround that works by next week (our development team has been waiting for this new pipeline for a long time now).
Our pipeline looks like this at the moment:
Build --- Unit Tests --- Integration Tests --- Functional Tests ---
          |              |                     |
          tool A         suite A               matrix-A-A-batch 0
          tool B         suite B               matrix-A-A-batch 1
          tool C                               matrix-A-A-batch 2
                                               matrix-A-A-batch 3
                                               ....
                                               "Unable to display more"
You can find the full version of our Jenkinsfile here: https://github.com/willy-ahva/pim-community-dev/blob/086e4ed48ef1a3d880ca16b6f5572f350d26eb03/Jenkinsfile (it may look complicated, but basically the real problem is the "Functional Tests" stage).
My questions are:
Am I using parallel the right way?
Is this only a Jenkins/Blue Ocean issue, and should I contribute to the issue I linked? (If yes, how? I'm not a Java dev at all.)
Should I try to use MultiJob and parallelize jobs instead of steps?
Is there any tool other than parallel that I can use (some kind of fork or whatever)?
Thanks a lot for your help. I love what Jenkins has become with Pipeline and the Blue Ocean UI, and I really want to make it work for our team.
This is probably a poor way to do the parallel tasks. I would instead treat each parallel map entry as a worker, and put your tests into a queue / stack / data structure. Each worker thread pops off the queue as required, so you don't sit there with a million tasks queued as separate branches. You would have to be more careful with your logging so that it is apparent which test failed, but that shouldn't be too tough. A sketch of this is below.
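A rough scripted-pipeline sketch of that worker-pool idea (the node label, the batch names, and the runner script are all made up; ConcurrentLinkedQueue may also need script-security approval in a sandboxed Jenkinsfile):

import java.util.concurrent.ConcurrentLinkedQueue

// Shared, thread-safe queue holding every test batch to run
def tests = new ConcurrentLinkedQueue<String>(['batch-0', 'batch-1', 'batch-2'])

def workers = [:]
int poolSize = 20   // a bounded number of branches for the UI to render

for (int i = 0; i < poolSize; i++) {
    def id = i   // capture the loop variable for the closure
    workers["worker-${id}"] = {
        node('functional-test') {
            String batch
            // poll() is atomic, so no two workers ever take the same batch
            while ((batch = tests.poll()) != null) {
                echo "worker-${id} running ${batch}"
                sh "bin/run-functional-tests ${batch}"   // hypothetical runner script
            }
        }
    }
}
parallel workers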
It's probably not something that's easy to fix, as it is as much a UI design issue as anything else. I would recommend that you give it a poke, though! Who knows, maybe a solution will click for you?
Probably not. In my opinion, that would just make things muddier.
Parallel is your option for forking.
If you really want to keep doing this but don't want the UI to be so unwieldy, you can stop defining each test as a stage. It'll be less clear which test failed when one fails, but the UI should be happier.

Jenkins: how to block a job to make it unrunnable

This is not just another question about concurrent job execution in Jenkins. The problem I have is that there are several jobs that run independently of one another. When they finish, it should be possible to run a manual job, but only on the condition that all of those automated jobs are in a successful state. It should also not be possible to run, or even schedule a run of, this manual job while those other jobs are running.
I searched for the answer everywhere and checked every possible plugin that handles synchronization, but I could not figure out how to solve the above problem.
IMHO the Delivery Pipeline plugin (see https://wiki.jenkins-ci.org/display/JENKINS/Delivery+Pipeline+Plugin for the download and http://www.infoq.com/articles/orch-pipelines-jenkins for a thorough description) could do what you want.
You can run a lot of jobs (in parallel or not), and when (and only when) they succeed, another job (or several) runs. You can even add manual steps (requiring a button click before the pipeline may continue).
Everything is configurable, and it is quite stable at the moment.
No one is able to manually (or otherwise) start a job that is in a "waiting state" for other jobs to finish.
Regarding this question:
Otherwise it should not be possible to run this manual job. It should also not be possible to run, or even schedule a run of, this manual job if those other jobs are running.
You can use the Throttle Concurrent Builds plugin and create a category that includes your automated jobs and the manual job.
If one of the automated jobs is running, it will be impossible to launch the manual job.
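If any of those jobs are pipelines, the same category can also be enforced in code via the plugin's throttle step; a minimal sketch (the category name is made up and must already be defined in the global Throttle configuration):

// Scripted pipeline: everything inside throttle() counts against the category's limit
throttle(['manual-deploy-gate']) {
    node {
        echo 'Runs only when no other job in the manual-deploy-gate category is running'
    }
}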
Regarding your first question, did you have a look at the Join plugin?
Cheers
https://wiki.jenkins-ci.org/display/JENKINS/Promoted+Builds+Plugin can also be an option. Set up promotions in such a way that manual approval is needed, and the build is promoted only if the automated jobs are done.

How do I trigger a job when another completes?

I have two jobs; consider them to be super simple jobs that just print a line and have no triggers or timeouts defined. They work fine when I call them from a controller class through: <name of my class>Job.triggerNow()
What I want is to trigger one job and, as soon as it finishes, trigger a second, different job.
I have tried using the quartzScheduler, but I can't seem to get a JobDetail from my job classes, so I'm not sure what the correct way of doing this is. I also want to pass some results from the first job on to the second one.
I know I can trigger the second job in the last line of my first job's execute method, but this is not desirable since it's technically not part of the first job and couples things more than I would like.
Any help will be greatly appreciated. Thanks.
What it sounds like you are after is an asynchronous "pipeline" of work, where different workers are all in a line and pass data to be worked on from one to the next. This sort of architecture is amazingly flexible and applies to a large number of very common applications.
The best way that I have found to get such an architecture in place with Grails is to use a message queue, like RabbitMQ for example, with a series of queues (one for each step in the pipeline), and then have the controller(s) put messages into the first step of the pipeline.
Then you have a worker (just a service within the Grails app if you use the excellent RabbitMQ Grails plugin) listen to the queue that holds its jobs. As work comes into the queue, the worker pops the job off, processes it, and then puts a message into the queue of the next step in the pipeline.
I've found this to be the best way to architect just about any asynchronous pipeline, since it allows you to scale each piece separately as needed and doesn't have too much overhead. There are also ways to decouple the jobs from having to know about the next step in the pipeline, but I've found that in most cases this isn't really needed and just adds useless complexity.
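As a rough illustration of one step in such a pipeline, here is a sketch using the plain RabbitMQ Java client (the queue names and the process helper are made up; the Grails RabbitMQ plugin essentially wires this up for you behind a service):

import com.rabbitmq.client.*

def channel = new ConnectionFactory(host: 'localhost').newConnection().createChannel()
channel.queueDeclare('resize-images', true, false, false, null)      // this step's queue
channel.queueDeclare('watermark-images', true, false, false, null)   // next step's queue

channel.basicConsume('resize-images', false, new DefaultConsumer(channel) {
    void handleDelivery(String tag, Envelope env, AMQP.BasicProperties props, byte[] body) {
        def result = process(new String(body, 'UTF-8'))   // hypothetical worker logic
        // Hand the result (a String here) straight to the next step in the pipeline
        channel.basicPublish('', 'watermark-images', null, result.bytes)
        channel.basicAck(env.deliveryTag, false)
    }
})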
Quartz is great for jobs that need to happen on a schedule, but a pipeline is much better at processing things as they come in, in a scalable way.
Please have a look at Quartz's JobListener interface. You can utilize:
public void jobWasExecuted(JobExecutionContext context,
                           JobExecutionException jobException);
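For example, a hedged Groovy sketch of such a listener (the job and group names are made up; attach it via the scheduler's ListenerManager so it only fires for the first job):

import org.quartz.*
import org.quartz.listeners.JobListenerSupport

class ChainingJobListener extends JobListenerSupport {
    String getName() { 'chain-first-to-second' }

    // Called by Quartz after a matched job finishes executing
    void jobWasExecuted(JobExecutionContext context, JobExecutionException jobException) {
        if (jobException) return   // the first job failed, so don't chain
        // Pass the first job's result on to the second job
        def data = new JobDataMap(firstJobResult: context.result)
        context.scheduler.triggerJob(JobKey.jobKey('SecondJob', 'myGroup'), data)
    }
}

// Registration (e.g. in BootStrap.groovy):
// quartzScheduler.listenerManager.addJobListener(new ChainingJobListener(),
//     org.quartz.impl.matchers.KeyMatcher.keyEquals(JobKey.jobKey('FirstJob', 'myGroup')))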
I built something similar to this in my web application, using a queue-messaging technique with Redis. I simply define the dependency structure for all the jobs and have a master job whose only purpose is to monitor and update the status of the other jobs and trigger dependent jobs if needed.
Each job reports its status (running/finished/cancelled) via the Redis queue, and the master job pops each queue message and processes it appropriately.
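A loose Groovy sketch of that master loop using the Jedis client (the key name, the dependency map, and the triggerJob helper are all made up):

import redis.clients.jedis.Jedis

def jedis = new Jedis('localhost')
// Made-up dependency structure: job -> prerequisites that must finish first
def deps = [reportJob: ['importJob', 'cleanupJob']]
def finished = [] as Set

while (deps) {
    // Blocks until a worker pushes a "jobName:status" message onto the list
    def (key, msg) = jedis.brpop(0, 'job-status')
    def (job, status) = msg.split(':')
    if (status == 'finished') {
        finished << job
        // Trigger every job whose prerequisites have now all finished
        def ready = deps.findAll { name, prereqs -> finished.containsAll(prereqs) }.keySet()
        ready.each { name ->
            deps.remove(name)
            triggerJob(name)   // hypothetical, e.g. <JobName>Job.triggerNow() in Grails
        }
    }
}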
