We have a three-layer multi-configuration job which at times fails because some sub-job fails on some slaves.
We want to rebuild the whole job from the beginning, across all slaves selected in the parent job, if any of the sub-jobs fail.
I have looked at the Rebuild plugin, but I'm also looking for a programmatic way of solving the problem; any guidance would help.
Try the Jenkins Remote Access API; it can do this.
https://wiki.jenkins-ci.org/display/JENKINS/Remote+access+API
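A minimal sketch of that approach, assuming the parent job is named "parent-matrix" and that you have a user with an API token (all names below are placeholders): check the result of the last build over the JSON API, and re-trigger the whole job, which re-runs it on all the slaves selected in the parent configuration.

```python
import requests  # third-party: pip install requests

JENKINS_URL = "https://jenkins.example.com"  # assumption: your Jenkins base URL
JOB = "parent-matrix"                        # assumption: the parent job's name
AUTH = ("user", "api-token")                 # assumption: user with build permission

# Fetch the last build's status over the JSON API.
resp = requests.get(f"{JENKINS_URL}/job/{JOB}/lastBuild/api/json", auth=AUTH)
resp.raise_for_status()
build = resp.json()

# Re-trigger the whole multi-configuration job if any sub-job made it fail.
# Depending on your security settings, a CSRF crumb may also be required.
if build["result"] in ("FAILURE", "UNSTABLE"):
    requests.post(f"{JENKINS_URL}/job/{JOB}/build", auth=AUTH).raise_for_status()
```

You could run this from a post-build step of the parent job itself, or from a small watchdog job that polls periodically.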
For internal reasons, one of my jobs is allowed to run concurrently, but new builds abort themselves if another build is already running (disabling concurrency doesn't help, since I don't want new builds to be queued for execution once the current build is done).
However, this behaviour is detrimental to the job status preview (the colored ball next to the job name when inside the job's parent folder). It often shows the status as "aborted", which is undesirable - I want to view the latest running build as the source of the job status.
I tried deleting aborted builds from within their own execution, but that's unfortunately neither trivial nor stable, and thus not suitable for this situation. I could probably get a workaround running that deletes them from a separate job, but that's not ideal either.
Anyway, I'm now hoping that I can just tell Jenkins to ignore "aborted" builds in the calculation of the job preview. Unfortunately, I wasn't able to find a setting or plugin that allows me to do this.
Is this possible at all? Or do I need to find another way?
Would something like this help?
https://plugins.jenkins.io/build-blocker-plugin/
I haven't used it myself but it supports blocking builds completely if a build is already running.
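If you instead end up with the separate cleanup job you mentioned (deleting aborted builds from outside their own execution), here is a rough sketch over the REST API; the job name, URL, and credentials are placeholders, and the account needs delete permission:

```python
import requests  # third-party: pip install requests

JENKINS_URL = "https://jenkins.example.com"  # assumption: your Jenkins base URL
JOB = "my-concurrent-job"                    # assumption: the affected job's name
AUTH = ("user", "api-token")                 # assumption: user with delete permission

# List build numbers and results; the 'tree' filter keeps the response small.
resp = requests.get(
    f"{JENKINS_URL}/job/{JOB}/api/json",
    params={"tree": "builds[number,result]"},
    auth=AUTH,
)
resp.raise_for_status()

# Delete every aborted build via its per-build doDelete endpoint, so only
# real results are left to determine the job's status ball.
for build in resp.json()["builds"]:
    if build["result"] == "ABORTED":
        requests.post(
            f"{JENKINS_URL}/job/{JOB}/{build['number']}/doDelete",
            auth=AUTH,
        ).raise_for_status()
```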
Just looking for input on this one, but is there any benefit to disabling or deleting jobs?
Does disabling a job force the slave/node to remove any workspaces for that job, or does it reduce how much the master has to work through?
TIA for any feedback.
Disabling a job is useful when it is a scheduled job, or one triggered by a hook.
A few cases where it could be of interest to disable a job:
You are aware of a third-party service causing issues and you don't want your job to partially execute and fail; you could disable it until the issue gets resolved by the team in charge of the third-party service.
Someone on your team wants to give a demo on a server that gets updated by Jenkins and would like the server not to be redeployed during the demo.
You are working on a job that isn't finished yet, something urgent comes up, and you want to disable the job until you can get back to working on it.
Finally, in my opinion, if you only have pipeline jobs generated from a Jenkinsfile for each project, you will hardly ever need to disable a job.
Actually, there is no real benefit to disabling or deleting jobs. So far I only see one benefit, when you chain Jenkins jobs in a pipeline: disabling lets you test each upstream job without mistakenly triggering the downstream jobs.
Disabling a job doesn't remove any workspaces on the slave/node and doesn't influence how much the master has to work through.
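For what it's worth, disabling and re-enabling are themselves scriptable; a minimal sketch against the REST endpoints, with placeholder names and credentials:

```python
import requests  # third-party: pip install requests

JENKINS_URL = "https://jenkins.example.com"  # assumption: your Jenkins base URL
JOB = "nightly-deploy"                       # assumption: the job to pause
AUTH = ("user", "api-token")                 # assumption: user with configure permission

# POSTing to /disable stops the job from being triggered (by timer, hooks,
# or upstream jobs); /enable turns it back on. Workspaces stay untouched.
requests.post(f"{JENKINS_URL}/job/{JOB}/disable", auth=AUTH).raise_for_status()
# ... later, when the demo or third-party outage is over:
requests.post(f"{JENKINS_URL}/job/{JOB}/enable", auth=AUTH).raise_for_status()
```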
I have a number of multi-config jobs and all have to run on the same machines, one after another.
For example:
Build on all platforms.
Do some automated testing.
Do some automated benchmarking.
These are all happening on the same machines, in that order, but they are different jobs.
The problem is that if I want to add another platform or remove one of them, I will have to do it for every single multi-config job. What I would like is to have a way of defining those platforms in one place and then have the jobs point to that template and run.
I am quite sure I'm not the first one to hit this problem and that there should be some plugin out there, but I haven't been able to find it.
So, is there any simple way of doing this?
We create template jobs in Jenkins which help us create the whole set of jobs required for a platform; we just pass the platform / component name as an input parameter to the template job. We use the Job Copy plugin https://wiki.jenkins-ci.org/display/JENKINS/Jobcopy+Builder+plugin
For deleting the jobs we have another job where, again, the component name is the input parameter, and we use something similar to the answer given here: Is it possible to delete a hudson job programmatically via REST API?
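In the same spirit as that linked answer, deleting a generated job boils down to a POST to its doDelete endpoint; a sketch under assumed names (the URL, credentials, and the component-platform naming scheme are all placeholders):

```python
import requests  # third-party: pip install requests

JENKINS_URL = "https://jenkins.example.com"  # assumption: your Jenkins base URL
AUTH = ("user", "api-token")                 # assumption: user with delete permission

def delete_jobs_for_component(component: str, platforms: list[str]) -> None:
    """Delete every generated job for a component (hypothetical naming scheme)."""
    for platform in platforms:
        job = f"{component}-{platform}"  # assumption: how the copied jobs are named
        requests.post(f"{JENKINS_URL}/job/{job}/doDelete", auth=AUTH).raise_for_status()

delete_jobs_for_component("mycomponent", ["linux", "windows"])
```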
Using Jenkins, I am running 2 builds (Linux+Windows) and one Doxygen job
At the moment, I am using 3 separate SCM polling triggers pointing to the same source code
How can I use a single trigger for all three jobs, provided that I still want to get separate statuses?
For the record: the underlying SCM is Git.
Off the top of my head, some solutions which might do what you are looking for:
Instead of setting an SCM trigger, use a post-receive hook in your repository, which can notify Jenkins that there are new changes (see: https://wiki.jenkins-ci.org/display/JENKINS/Git+Plugin#GitPlugin-Pushnotificationfromrepository). This way Jenkins doesn't have to constantly poll the repository (multiple times for the different jobs), and the trigger would be faster, since there is no waiting for the next polling interval: when there is a push, a build is started right away. A sketch of such a hook follows this list.
Use an extra job, that would do nothing else, just have the SCM polling trigger, and start all three original jobs (without waiting for any of them to finish).
If the configuration is similar for all three jobs, you could consider creating a single project with a matrix configuration. Roughly, what it does is let you have a variable for the build type, with values like linux, windows, doxygen. When the job is triggered, it starts multiple builds with all the possible values; of course you would have to set up the job in a way that the current parameter changes the build process according to what needs to be done. Actually I haven't had to use a matrix configuration yet, so my example may not be the best, but you can probably find lots of examples on the Jenkins wiki, if you think this is a good direction.
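To illustrate the first option, here is a minimal post-receive hook sketch; the Jenkins base URL and clone URL are placeholders, and it relies on the Git plugin's notifyCommit endpoint from the link above:

```python
#!/usr/bin/env python3
import requests  # third-party: pip install requests

JENKINS_URL = "https://jenkins.example.com"     # assumption: your Jenkins base URL
REPO_URL = "ssh://git.example.com/project.git"  # assumption: the clone URL the jobs use

# notifyCommit makes Jenkins poll every job configured with this repository
# URL, so a single push notifies all three jobs, and each keeps its own status.
resp = requests.get(f"{JENKINS_URL}/git/notifyCommit", params={"url": REPO_URL})
resp.raise_for_status()
```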
Is there a way to share variables across Jenkins jobs?
Job1 collects the required source code and labels them using perl scripts.
Then a number of other jobs compile the code, since there are many versions. As of now I have made the other jobs depend on Job1, so the same code can be collected from head, since it was labelled just beforehand in Job1. But this didn't hold during release, since code was going into the repository at odd hours and we had no control over it, so we thought it would be nice to find a way to sync the code using the Perforce label created in Job1. I did not find any way to sync to a particular label that was created in a different job.
So I thought that if we could set an environment variable and then use it in the following jobs, the code would be in perfect sync. But it seems environment variables cannot be shared across jobs.
I would appreciate any ideas and help.
Can you use the "Use Upstream Project revision" option? It allows you to sync to the changeset of another project.
If you want to stick to the label idea, I think it's doable. I haven't tried this, but I would first have Job1 create a new label based on the job name and the build number; both are available in the create label post-build action.
If you launch the downstream job using the Parameterized Trigger plugin, it will have access to the upstream job name and build number as environment variables. The 'P4 Label' field in the downstream job can then use parameter substitution to specify the correct label name to sync to; a rough sketch of the triggering side follows below.
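For example, a hypothetical script step in Job1 that triggers the downstream job with the label name as a build parameter; JENKINS_URL, the downstream job name, and the P4_LABEL parameter are all assumptions, and the downstream job's 'P4 Label' field would be set to ${P4_LABEL}:

```python
import os
import requests  # third-party: pip install requests

JENKINS_URL = "https://jenkins.example.com"  # assumption: your Jenkins base URL
DOWNSTREAM_JOB = "compile-v1"                # assumption: one of the compile jobs
AUTH = ("user", "api-token")                 # assumption: user with build permission

# Label created by Job1's post-build action, e.g. "Job1-123".
label = f"{os.environ['JOB_NAME']}-{os.environ['BUILD_NUMBER']}"

# buildWithParameters passes the label to the parameterized downstream job.
requests.post(
    f"{JENKINS_URL}/job/{DOWNSTREAM_JOB}/buildWithParameters",
    params={"P4_LABEL": label},
    auth=AUTH,
).raise_for_status()
```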
The Perforce plugin can help you.
Look at the section "Sync multiple builds to the same changeset".