I would like to monitor the estimated time of all of my builds to catch the cases where this value is shown as 'N/A'.
In these cases the build gets stuck (probably due to network issues in my environment) and Jenkins won't start new builds for that job until the stuck build is killed manually.
What I am missing is how to get that data for each job, either from the API or some other source.
I would appreciate any suggestions.
Thanks.
For each job, you can click "Trend" on the job run history table, and it will show you the progress of the currently executing build along with a graph of "usual" execution times.
Using the API, you can go to http://jenkins/job/<your_job_name>/<build_number>/api/xml (or /json) and the information is in the <duration> and <estimatedDuration> fields.
Finally, there is the Jenkins Build Timeout plugin that you can use to automatically take care of "stuck" builds.
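For example, here is a minimal Python sketch that walks all jobs through the JSON API and flags running builds with no estimate. The URL is a placeholder, it assumes anonymous read access, and in my experience estimatedDuration comes back as -1 when the UI shows "N/A":

```python
# Minimal sketch: flag running builds whose estimated duration is unknown.
# Assumes a Jenkins instance at JENKINS_URL reachable anonymously; the JSON
# API reportedly returns estimatedDuration = -1 when the UI shows "N/A".
import requests

JENKINS_URL = "http://jenkins"  # hypothetical URL, adjust for your instance

def find_builds_without_estimate():
    # One call for all jobs; 'tree' trims the response to the fields we need.
    resp = requests.get(
        f"{JENKINS_URL}/api/json",
        params={"tree": "jobs[name,lastBuild[number,building,estimatedDuration]]"},
        timeout=30,
    )
    resp.raise_for_status()
    stuck = []
    for job in resp.json().get("jobs", []):
        build = job.get("lastBuild")
        if build and build.get("building") and build.get("estimatedDuration", -1) < 0:
            stuck.append((job["name"], build["number"]))
    return stuck

if __name__ == "__main__":
    for name, number in find_builds_without_estimate():
        print(f"{name} #{number} is running with no duration estimate (N/A)")
```

You could run this from cron and alert on any hits.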
When using Jenkins CLI, I can use the build command with options -v and -s to run a build, waiting for it to finish and printing its output.
Is there any way I can achieve the same result (wait for execution and get job output) with a single call to the REST API? I know this can be done by polling for build status until it finishes and then requesting its output, but I want to know if there is a straightforward option for short-running jobs.
The REST API doesn't offer a blocking call like that, and even if you hacked one together it wouldn't transfer reliably to other jobs: the build may sit waiting for the next available executor, hit race conditions, and so on. Holding a REST request open for that long is not a good option, and nobody recommends it.
So instead of looking for a single REST call, make the polling itself smarter. Rather than polling every second, take the durations of previous builds, use them to predict roughly when the current build will finish, and poll around that time. You can also use Jenkins' own remaining-time estimate for the build. Hope this makes sense.
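For illustration, a rough Python sketch of that predictive-polling idea. The URL, job name, and anonymous access are assumptions (add auth/crumb handling as your setup requires); the queue-item and consoleText endpoints are standard Jenkins REST API:

```python
# Trigger a build, poll near the predicted finish time, then fetch its output.
import time
import requests

JENKINS_URL = "http://jenkins"  # hypothetical
JOB = "my_job"                  # hypothetical job name

def run_and_capture(job):
    # Queue the build; Jenkins returns the queue item URL in the Location header.
    resp = requests.post(f"{JENKINS_URL}/job/{job}/build", timeout=30)
    resp.raise_for_status()
    queue_url = resp.headers["Location"]  # ends with a trailing slash

    # Wait for the queue item to turn into a real build.
    while True:
        item = requests.get(f"{queue_url}api/json", timeout=30).json()
        if "executable" in item:
            build_url = item["executable"]["url"]
            break
        time.sleep(2)

    # Poll near the predicted finish time instead of every second.
    while True:
        build = requests.get(f"{build_url}api/json", timeout=30).json()
        if not build["building"]:
            break
        estimate_ms = build.get("estimatedDuration", 0)   # -1 if unknown
        elapsed_ms = time.time() * 1000 - build["timestamp"]
        remaining = max((estimate_ms - elapsed_ms) / 1000, 2)
        time.sleep(min(remaining, 30))  # cap the sleep so we notice overruns

    # Finally fetch the console output.
    return requests.get(f"{build_url}consoleText", timeout=30).text

print(run_and_capture(JOB))
```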
I have a scheduled notebook job that has been running without issue for a number of days. However, last night it stopped running. Note that I am able to run the job manually without issue.
I raised a previous question on this topic: How to troubleshoot a DSX scheduled notebook?
Following the above instructions, I noticed that there were no log files created at the times when the job should have run. Because I'm able to run the job manually and there are no kernel logs created at the times the scheduled job should have run, I'm presuming there is an issue with the scheduler service.
Are there any other steps I can perform to investigate this issue?
This sounds like a problem with the Scheduling service. I recommend taking it up with DSX support. Currently there is no management UX telling you why a specific job failed or letting you restart a particular execution (that would be a good fit for an enhancement request via https://datascix.uservoice.com/).
I manage a Jenkins server with a few hundred projects in the whole ecosystem. Many of the projects rely on upstream servers, that, unfortunately, are not always responsive. When I have a lag on these servers, my build queue can get to 10 or more. Is there a plugin or setting to send a warning email when the build queue exceeds a particular length?
I have been unable to find a plugin that does this, but you can query Jenkins for the information as detailed here: Jenkins command to get number of builds in queue.
If you have a Jenkins slave available you could set up a job that runs every 15 minutes and just hit each of the other Jenkins servers with the API call to get build queue counts (this is easy if you have just one master and many slaves.)
If you wanted to stay completely outside of Jenkins (not add another job to the mix), you could write a script to poll the Jenkins API for the information. You could then run that script on, say, a 15-minute (or some other relevant time step) timer using cron (or a Windows scheduled task). Admittedly, then you have to dedicate some resources to running this job.
It looks like you could use Python to get the build queue and check the length of the returned list, via the python-jenkins library's get_queue_info().
I haven't mucked about with the Jenkins API much myself so I'm not sure offhand exactly what the script would need, but it should be simple enough once you dig into it.
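Something like this sketch, using python-jenkins plus the stdlib smtplib, run from cron every 15 minutes or so. The URL, credentials, threshold, and addresses are all placeholders:

```python
# Poll the Jenkins queue length and mail a warning when it gets too long.
import smtplib
from email.message import EmailMessage

import jenkins  # pip install python-jenkins

THRESHOLD = 10

def check_queue():
    server = jenkins.Jenkins("http://jenkins", username="user", password="apitoken")
    queue = server.get_queue_info()  # list of queued items
    if len(queue) > THRESHOLD:
        msg = EmailMessage()
        msg["Subject"] = f"Jenkins build queue length is {len(queue)}"
        msg["From"] = "jenkins-monitor@example.com"
        msg["To"] = "team@example.com"
        # Each queue item typically carries a 'task' dict with the job name.
        msg.set_content("\n".join(item["task"]["name"] for item in queue))
        with smtplib.SMTP("localhost") as smtp:
            smtp.send_message(msg)

if __name__ == "__main__":
    check_queue()
```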
I'm using GitHub's integration of Travis CI with Coverity Scan (the free versions of all these services) to test my FLOSS code.
The problem I'm facing is that when working on the code continuously, I'm hitting the Coverity quota pretty soon.
Since I'm working on multiple projects simultaneously, it can easily happen that I switch away from a given project before I'm allowed to submit to Coverity again, potentially leaving flaws in the code for weeks although they would have been caught easily by Coverity.
I would like to avoid this.
The first measure to prevent hitting the quota too frequently is to use a dedicated branch (usually coverity_scan) which does not receive pushes as often as the master and/or feature branches.
However, this puts cognitive load on the user (me), which I'd also like to avoid.
Also, I sometimes still hit the quota (some of my projects are in the 100k-500k lines-of-code range, so they have a lower threshold than usual).
What I would like is to be able to automatically re-trigger a Coverity scan once the quota has expired, if (and only if) the current build hit the quota.
Is something like this possible with plain Travis CI/Coverity features?
Or would I have to set up a separate hook that monitors the Coverity quota and the Travis CI builds?
You don't need to run Coverity on every check-in. It's just too slow.
You should configure your (Coverity build) system to poll your repo for changes, but check infrequently, something like a few times per day.
This will trigger the build when things change, but not on every change that is detected.
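If your scans are driven by the dedicated coverity_scan branch mentioned in the question, one way to implement this is a small script run from cron a few times per day. This sketch (repo path and branch names are assumptions) pushes master to coverity_scan only when there are new commits, which is what triggers the Travis CI Coverity build:

```python
# Infrequent polling: fast-forward coverity_scan to master only on change.
import pathlib
import subprocess

REPO = "/path/to/repo"  # hypothetical local clone
STATE = pathlib.Path(REPO) / ".last_coverity_rev"

def git(*args):
    return subprocess.check_output(["git", "-C", REPO, *args], text=True).strip()

def main():
    git("fetch", "origin")
    head = git("rev-parse", "origin/master")
    last = STATE.read_text().strip() if STATE.exists() else ""
    if head != last:
        # Push master's tip to the coverity_scan branch; Travis picks up the push.
        git("push", "origin", "origin/master:refs/heads/coverity_scan")
        STATE.write_text(head)

if __name__ == "__main__":
    main()
```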
I set my Jenkins job to build automatically many times a day via the scheduler.
If the build fails, it sends mail to my team.
However, I don't want to spam the mailbox. How can I set a condition to stop the build scheduler if it has failed more than 10 times?
Rather than scheduling the job continuously, try the continuous integration paradigm, like this:
Unconditionally schedule the job only rarely. Perhaps once per day, just to ensure that any external factors (missing resources, changed interfaces, etc.) haven't come into play.
Trigger the job when any known source or dependency changes (e.g. source code, jar in your artifact repository, DB schema change, etc.)
Use a suitable plugin to retry failures.
I recommend the Naginator plugin for this. It can nag a limited number of times, and it auto-throttles: it nags frequently to begin with, then less frequently after a protracted period of failure.
Even if you don't change how the job is triggered, Naginator is probably a good solution for you. Use it to send your emails, instead of using an unconditional on-failure step.