travis/coverity: automatically re-schedule a build after a given time

I'm using GitHub's integration of travis-ci with coverity-scan (the free versions of all these services) to test my FLOSS code.
The problem I'm facing is that when continuously working on the code, I'm hitting the Coverity quota pretty quickly.
Since I'm working on multiple projects simultaneously, it can easily happen that I switch away from a given project before I'm allowed to submit another Coverity build, potentially leaving flaws in the code for weeks even though Coverity would have caught them easily.
I would like to avoid this.
The first measure to avoid hitting the quota too frequently is to use a dedicated branch (usually coverity_scan) which does not receive pushes as often as the master and/or feature branches.
However, this puts cognitive load on the user (me), which I would also like to avoid.
Also, sometimes I still hit the quota (some of my projects are in the 100k-500k lines-of-code range, so they have a lower submission quota than usual).
What I would like is to automatically re-trigger a Coverity scan once the quota has expired, if (and only if) the current build hit the quota.
Is something like this possible with plain travis-ci/coverity features?
Or would I have to set up a separate hook that monitors the Coverity quota and the travis-ci builds?

You don't need to run Coverity on every check-in; it's just too slow.
Instead, configure your Coverity build to poll your repository for changes, but only check infrequently, say a few times per day.
This will trigger a build when things have changed, but not on every single change.
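For illustration, here is a minimal sketch of that polling approach in Python, meant to be run from cron a few times per day. It triggers a build of the coverity_scan branch through the Travis API v3 only when the branch has moved since the last run; the repository slug, branch, token handling and state file are assumptions, not something Travis or Coverity provide out of the box.

# Minimal cron-driven poller (sketch). Placeholders: repo slug/URL,
# TRAVIS_TOKEN environment variable, and the local state file.
import os
import subprocess
import requests

REPO_SLUG = "me%2Fmy-project"          # URL-encoded "owner/repo" (placeholder)
REPO_URL = "https://github.com/me/my-project.git"   # placeholder
TRAVIS_TOKEN = os.environ["TRAVIS_TOKEN"]
STATE_FILE = os.path.expanduser("~/.coverity_last_sha")

# Ask GitHub for the current tip of the coverity_scan branch.
head = subprocess.check_output(
    ["git", "ls-remote", REPO_URL, "refs/heads/coverity_scan"],
    text=True,
).split()[0]

last = open(STATE_FILE).read().strip() if os.path.exists(STATE_FILE) else ""
if head != last:
    # Trigger a Travis build of the branch via the v3 "requests" endpoint.
    requests.post(
        f"https://api.travis-ci.com/repo/{REPO_SLUG}/requests",
        headers={
            "Travis-API-Version": "3",
            "Authorization": f"token {TRAVIS_TOKEN}",
        },
        json={"request": {"branch": "coverity_scan"}},
    ).raise_for_status()
    with open(STATE_FILE, "w") as f:
        f.write(head)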

Related

How to get Cloud Run to handle multiple simultaneous deployments?

I've got a project with 4 components, and every component is hosted on Google Cloud Run, with separate deployments for testing and for production. I'm also using Google Cloud Build to handle the build & deployment of the components.
Due to the lack of good webhook events from the source system, I'm currently forced to trigger a rebuild of all components in the project every time there is a new change. That means 8 different images to build and deploy, as testing and production use different build-time settings as well.
I've managed to optimize Cloud Build to handle the 8 concurrent builds pretty nicely, but they all finish around the same time, and then all 8 are pushed to Cloud Run. It often seems like Cloud Run does not like this at all and starts throwing some errors to me that I've been unable to resolve.
The first and more serious problem is that often only about 4-6 of the 8 deployments go through as expected, and the remaining ones are either significantly delayed or just fail: typically the first few go through fine, then a few with significant delays, and the final 1-2 just fail. This seems to be caused by some "reconciliation request quota" being exhausted in the region (in this case europe-north1), which is the error I can see at the top of the Cloud Run services view.
Additionally, and most annoyingly, the Cloud Run dashboard itself does not seem to handle having 8 services deployed: just sitting on the dashboard view listing the services regularly throws me another error related to some read quotas.
I've tried contacting Google via their recommended "Send feedback" button but have received no reply in ~1wk+ (who knows when I sent it, because they don't seem to confirm receipt).
One option I could try to improve the situation is to deploy the "testing" and "production" variants in different regions, but that would be less than optimal, and it feels like this should just be a matter of some simple limit configuration somewhere. Are there other options for me to consider? Or should I just try to set up some synchronization so that not all deployments are fired at once?
Avoiding the need to build and deploy all components at once is not really an option in this case, since they share some code as well, and when that changes all of them would still need to be rebuilt.
This is an issue with Cloud Run; developers are expected to be able to deploy many services in parallel.
The bug should be fixed within a few days or a couple of weeks.
[update] The bug should now be fixed.
Make sure to use the --async flag if you want to deploy in parallel: gcloud run deploy $SERVICE --image $IMAGE --async
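As a rough illustration of that flag, here is a small Python sketch that kicks off several Cloud Run deployments without waiting for each rollout to finish, so the requests are queued server-side instead of all racing at once. The service names, image URLs and region are placeholders.

# Deploy several services with --async (sketch; assumes gcloud is installed
# and authenticated; service names and images are placeholders).
import subprocess

SERVICES = {
    "frontend-test": "gcr.io/my-project/frontend:latest",
    "backend-test": "gcr.io/my-project/backend:latest",
}

for service, image in SERVICES.items():
    subprocess.run(
        [
            "gcloud", "run", "deploy", service,
            "--image", image,
            "--region", "europe-north1",
            "--async",   # return immediately; check rollout status later if needed
        ],
        check=True,
    )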

How to schedule an on-premise Azure DevOps build to run every 5 minutes?

Never mind the rationale; I have a case where a build needs to run every 5 minutes, and the on-premise installation does not support schedules in the YAML.
So, how do we do it? I can probably use the REST API, but that sucks, because it seems I would either create a one-off script or a script that only handles very simple kinds of schedules. Building a reusable solution that could be used in general for other builds seems involved. So, instead of concentrating on my business, I need to go sideways and cover for the deficiencies of the on-premise version of Azure DevOps.
I wonder if there is a better way.
I understand your concern. However, this is not supported at present with an on-premise TFS server.
The UI for defining time-based build triggers isn't flexible enough. It can only support fixed times on days of the week.
Just as you have pointed out in the comment, running a build every 5 minutes would require creating 288 schedules, which is tedious.
Actually, there is already a user voice (feature request) for this:
Scheduled builds - More flexible timing configuration
https://developercommunity.visualstudio.com/idea/365630/scheduled-builds-more-flexible-timing-configuratio.html
Multiple people have commented and echoed this request. After going through the marketplace, I haven't found an appropriate workaround. Sorry for any inconvenience. You could monitor the status of the user voice above.
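If you do fall back to the REST-API route the asker mentions, a minimal sketch could look like the Python below, run every 5 minutes from cron or Windows Task Scheduler. The server URL, project name, build definition id and api-version are assumptions about your installation.

# Queue a build definition via the Builds REST API (sketch).
# Placeholders: SERVER, PROJECT, DEFINITION_ID, AZDO_PAT environment variable.
import os
import requests

SERVER = "https://tfs.example.local/DefaultCollection"
PROJECT = "MyProject"
DEFINITION_ID = 42
PAT = os.environ["AZDO_PAT"]   # personal access token

resp = requests.post(
    f"{SERVER}/{PROJECT}/_apis/build/builds?api-version=5.0",
    auth=("", PAT),            # PAT goes in the password field of basic auth
    json={"definition": {"id": DEFINITION_ID}},
)
resp.raise_for_status()
print("Queued build", resp.json()["id"])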

Jenkins - monitoring the estimated time of builds

I would like to monitor the estimated time of all of my builds to catch the cases where this value is shown as 'N/A'.
In these cases the build gets stuck (probably due to network issues in my environment) and it won't start new builds for that job until killed manually.
What I am missing is how to get that data for each job, either from api or other source.
I would appreciate any suggestions.
Thanks.
For each job, you can click "Trend" on the job run history table, and it will show you the currently executing progress along with a graph of "usual" execution times.
Using the API, you can go to http://jenkins/job/<your_job_name>/<build_number>/api/xml (or /json), and the information is in the <duration> and <estimatedDuration> fields.
Finally, there is a Jenkins timeout plugin that you can use to automatically take care of "stuck" builds.
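Building on those API fields, a small watchdog sketch in Python could flag builds whose estimate shows as N/A (estimatedDuration of -1) or that have been running far past their estimate. The Jenkins URL, the lack of authentication and the 2x threshold are assumptions.

# List every job's last build and flag missing or blown estimates (sketch).
# Add auth=(user, api_token) to requests.get() if your Jenkins requires login.
import time
import requests

JENKINS = "http://jenkins.example.local"   # placeholder
tree = "jobs[name,lastBuild[building,timestamp,duration,estimatedDuration]]"
data = requests.get(f"{JENKINS}/api/json", params={"tree": tree}).json()

now_ms = time.time() * 1000
for job in data["jobs"]:
    build = job.get("lastBuild")
    if not build or not build.get("building"):
        continue                      # only look at builds currently running
    estimate = build["estimatedDuration"]   # milliseconds, -1 when unknown (N/A)
    elapsed = now_ms - build["timestamp"]
    if estimate < 0:
        print(f"{job['name']}: no estimate (N/A), running {elapsed / 60000:.0f} min")
    elif elapsed > 2 * estimate:
        print(f"{job['name']}: running {elapsed / 60000:.0f} min, "
              f"estimate was {estimate / 60000:.0f} min")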

TFS 2010: Rolling CI Builds

I've been looking around online at ways of improving our build time (which is currently ~30-40 minutes, depending on which build agent gets the task), and one common theme I've seen is to use CI builds.
I understand the logic behind this, and it makes sense that it would reduce the time each build takes. Our problem, however, is that building on every check-in is a pointless use of our resources, because in our development branch we only keep the latest successful build. This means that if 2 people check in within a short space of time, whoever checked in last will be the one whose build is kept.
It's for this reason (along with disk space limitations) that we changed to using Rolling Builds, so that we only build the development branch a maximum of once every 45 minutes (obviously we can still manually trigger builds on top of that).
What I want to know (and haven't been able to find anywhere) is whether there's a way of combining rolling builds AND continuous integration. So keep building only once every 45 minutes, but only get and build files that have changed.
I'm not even sure it's possible, and if not then I'll look into other ways, but this seems like something that should be possible.

Tools to assist managing the application promotion process in an enterprise environment

I am curious how others manage code promotion from DEV to TEST to PROD within an enterprise.
What tools or processes do you use to manage the "red tape", entry/exit criteria side of things?
My current organisation is half stuck between some custom online-forms functionality and paper-based dependencies to submit documents, gather approvals and reviews.
All of this is left in the project managers' hands: tracking what has been submitted, passed review and been approved, and advising management if there are any roadblocks that may need approval to be "overlooked" before an application can be promoted to the next environment.
A browser-based application would be ideal... so what's out there? Please show me that your google-fu is better than mine.
It's hard to find a good one via Google. There is a vast array of tools out there for issue management, so I'll mention what we use and what we would like to use.
We currently use Serena products. They have worked well for us in the past. Team Track is our issue management tool and handles the life cycle of any issue we work on. Version Manager is our source control and has the feature of implementing promotional groups like DEV, TEST and PROD. We use DEV, TSTAGE, TEST, PSTAGE and PROD to signify the movement from one to the other, but it's much the same. The two products integrate nicely so that the source associated with the issues is linked, but we have no build process set up in this environment. It's expensive, but it works well.
We are looking to move to a more common system using Jira for issue management, Subversion for source control, Fisheye to link the two together and CruiseControl for build management. This is less expensive, totaling a few thousand for an enterprise license, and provides all the same features, with the added bonus of SVN, which is a very nice code version manager.
I hope that helps.
There are a few different scenarios that I've experienced over the years:
Dev -> Test : There is usually a code freeze date that stops work on new features; the code that has been tagged/labelled/archived gets built and handed to a test environment. This then gets copied onto the machines and the tests are run. This is also usually the least detailed of any push.
Test -> Prod : This has the added wrinkle that production has to go down, which can mean that a "gone fishing" page goes up or IIS doesn't have any sites running while the code is copied over again. There are special cases where a load balancer can act as a switch, so the promotion happens without any customer experiencing downtime, as the ones on the older server will move over once their session ends.
To elaborate on that switch idea, the setup is to have 2 potentially live servers with just one taking requests; the load balancer sends all the traffic to one machine and is switched over once the other server has the updated code ready to go live.
There can also be a staging environment which is between test and production where the process is similar in terms of there is a set date when the promotion happens.
Where I used to work there would be merge days where a developer spent most of a day in Perforce merging code so that it could be promoted from one environment to another.
Now there are a couple of cases where this isn't used:
"Hotfixes" or "Hot patches" would occur where I used to work and in this case the specific files were copied up into the staging and production environments on its own since the code change had to get into Production ASAP since something broke in production or some new thing that had to get done that takes 2 minutes gets done. In this case, the code change getting pushed in had to be reviewed and approved before going out.
Those are the different approaches I've seen used where generally there are schedules and timelines potentially have to be changed or additional resources brought in to make a hard date like if a conference is on a particular weekend that such and such is ready for that.
Of course, in a few places there has been the "Oh, was that broken? Let me see..." and a few minutes later "No, see, it isn't broken for me," where someone changed things without asking permission or anything; some companies still have what they call "cowboy programming."
Another point is the scale of the release:
1) Tiny - This is the case where one web page goes up so that user X can do Y.
2) Small - A handful or so of files that aren't really complicated but aren't exactly trivial.
3) Medium - Where going from one environment to another requires changing a bunch of files and usually has scripts to move.
4) Big - Where there are scheduled promotions and various developers are asked for who is taking which shifts when the live push is done. I had this in a case where there was a data migration to do in addition to a release of some new e-commerce sites.
5) Mammoth - Where everything is brand new including how this would be used. I don't think I've ever seen one of this size but I'd imagine Microsoft or Google would have releases of this size.
Most releases fall somewhere in that spectrum, so the amount of planning and preparation can vary quite a bit; and let's not forget that regulatory compliance can be its own pain in getting some things done.
