Fail a Travis build if a variable has degraded since the previous build - travis-ci

Is it possible to have a Travis job calculate some metric (such as the number of npm audit problems, number of failed tests, number of lint warnings etc) and fail the job if the value is worse than for the previous build (possibly for the previous build on the same branch)?
If so, how could the previous value be stored?
I've thought of an ugly but interesting workaround: storing the metric value as a tag or a note on the git branch, which the job could pick up, but this really abuses git.
So: Is there a "proper" way of storing data from a Travis build so that other builds may then read it?
In case anyone's curious, the problem I'm having is that my yarn audit condition (no high-severity vulnerabilities: yarn audit --groups dependencies || (mask=$? && [ $mask -lt 8 ])) leaves the job stuck whenever a new CVE appears that hasn't been addressed yet, and I need some way to let the job proceed in that scenario. In lieu of interactive jobs where you could respond to a "Deploy anyway?" prompt, I thought one option could be to simply re-trigger the same job and let it pass if previousAuditErrors and auditErrors are equal, i.e., there is no degradation. There may be a better solution to this problem that I haven't thought of, but I think the question above is interesting regardless.
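For what it's worth, one way to approximate this without an external store is Travis's cache: directories: feature, which persists a directory between builds on the same branch on a best-effort basis. A minimal sketch, assuming a cached directory named .metrics (the name is arbitrary) and treating yarn audit's exit-code bitmask as a coarse severity score:
# Rough sketch: ".metrics" must be listed under cache: directories: in
# .travis.yml so it survives between builds on the same branch (best-effort,
# the cache can be evicted at any time).
# yarn audit's exit code is a bitmask of severity levels, so a plain numeric
# comparison is only a coarse "did it get worse" check.
mkdir -p .metrics
current=0
yarn audit --groups dependencies || current=$?
previous=$(cat .metrics/audit-mask 2>/dev/null || echo "$current")
if [ "$current" -gt "$previous" ]; then
  echo "yarn audit result degraded since the previous build (mask $previous -> $current)"
  exit 1
fi
echo "$current" > .metrics/audit-mask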

Related

Is there a standard way to delete successful vnext builds (PR) just after their completion?

The most aggressive build retention policy one can set for pull request builds is described in "Clean up pull request builds": a policy that keeps a minimum of 0 builds.
Still, it means that successful PR builds (with artifacts no one will ever need) will be deleted only after the next automatic retention cleanup - usually the next day, but in reality it results in nearly two days' worth of no-longer-needed builds.
In our particular case it seems desirable to clean up successful PR builds as soon as possible, due to their frequency and the sheer size of their artifacts, which may periodically strain our not yet fully organized infrastructure dedicated to PR handling (it will be significantly improved, but not as soon as we'd like, and those successful PR builds would still remain dead weight).
And as far as I see the only way to do it would be to delete builds manually.
While it is not too difficult to implement, I'd still like to check whether there is a simpler standard way to delete successful PR builds automatically.
P.S.: There is one particularity in our heavily customized build process - we have multiple dependent artifacts: create A, use it to build B, create C to test B, and so on. So simply skipping the Publish Artifacts step on an overall successful build with a custom condition, as suggested below, is not exactly feasible.
Let's look at the problem from a different perspective: The problem isn't that builds are retained, the problem is that your PR builds are publishing artifacts.
You can make the Publish Artifacts steps conditional so that they don't run during PRs. Something like and(succeeded(), ne(variables['Build.Reason'], 'PullRequest')) will make the task only run if it's not a PR.

Jenkins: Starting a build without permanently recording the result

I'm trying to tweak some options in my Jenkins configuration, which is causing many builds to fail. I'd prefer not to keep these failures around in the build history, since they're not technically failures of the repository. In the past, I've just deleted the build after looking at the log, but this is a little tedious.
Is there a way to start a build with an option to not record the result of the build permanently?
Perhaps there's a URL that can be used to trigger a debugging build, something like:
JENKINS_URL/job/JOBNAME/build?DEBUGGING
You can set the "discard old builds" option in your job to only keep 1 build. If you have older builds you want to keep, you can give them the "keep this build forever" property. If you have a large number of jobs to work with, you can use the Configuration Slicing plugin to modify the Max # of builds to keep.
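If the occasional manual delete is still wanted, it can at least be scripted rather than clicked through the UI. A hedged sketch using Jenkins's standard doDelete action, assuming a user with delete permission and an API token (JOBNAME and build number 42 are placeholders; CSRF crumb handling may also be needed depending on the security settings):
# Delete build 42 of JOBNAME via the standard doDelete action.
curl -X POST -u "$JENKINS_USER:$JENKINS_API_TOKEN" \
  "$JENKINS_URL/job/JOBNAME/42/doDelete"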

Manually failing a build after it's complete

Is it possible to set the build result for a build after that build is complete?
I could not find any plugins that do this already, and I was considering writing my own, but I wanted to see if this was even possible before going down that path.
(I have looked at existing code and how the "Fail The Build" plugin works as an example, but my understanding of the Jenkins code base is not advanced enough to understand what all the possibilities are.)
Use case: we have a build pipeline, and near the end of the pipeline there is a deploy-to-qa step that deploys the artifact to a QA environment. We have automated tests before this step to try to catch any problems with the artifact, but our test coverage is not very high in some areas so bugs could still slip through the cracks. I'd like to have the ability to mark a deploy-to-qa build as FAILED after the fact, to denote that that particular pipeline was invalid and is not a candidate for production release. (Basically the same as this Build Pipeline Plugin issue)
After some more investigation in the code, I believe that this is not possible.
From hudson.model.Run:
public void setResult(Result r) {
// state can change only when we are building
assert state==State.BUILDING;
// snip
...
}
So the build result cannot change except when in "building" state.
I could try to muck with the lastSuccessful and lastStable symlinks (as is done with the delete() function in hudson.model.AbstractBuild), but then those would be reset as soon as Jenkins reloaded the build results from jobs/JOBNAME/builds/.
I have an untested suggestion: make a parametrized build, where the parameter determines whether the build fails or not (for example, a simple bat/shell script that tests the parameter via the environment variable it sets and does exit 0 or exit 1; sketched below). This assumes that the build pipeline's manually triggered step will ask for the parameters rather than use default values.
If it does not support interactive build parameters, then some other way is needed to tell this extra build step whether it should fail or not. Maybe editing the upstream build's description or display name to indicate failure, and then allowing the build pipeline to continue to this extra build step, which probably has to use a system Groovy script to dig out the upstream build's description or display name.
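A minimal sketch of that fail-switch step, assuming a boolean build parameter named FAIL_BUILD (the name is made up) that Jenkins exposes to the shell step as an environment variable:
# Fail or pass this step depending on the FAIL_BUILD parameter.
if [ "$FAIL_BUILD" = "true" ]; then
  echo "FAIL_BUILD requested - marking this pipeline step as failed"
  exit 1
fi
exit 0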
I have seen several debates on this topic previously, and the outcome was always that it is theoretically possible to do so, but the codebase is not designed to allow this and it would have to be a very hacky workaround.
It's also been said that this is a bad practice in general, although I don't remember what the argument against it was.
I am facing the same requirement. I haven't found an appropriate plugin, and changing the build status is not just a flag: it has other impacts, e.g. on links (latest successful build etc.). So instead of changing the status of the build I looked for a way of qualifying the build. The Promoted Builds Plugin applies flags to builds to define, for example, different quality stages. Build promotions can be performed manually or based on, e.g., successful downstream project builds. Any successful build can be qualified, and based on the promotion, additional build and post-build actions can be executed, e.g. tagging or archiving.
Actually I was able to do it by changing the build.xml manually to <result>FAILURE</result>.
I then played a little bit with mklink to create some symbolic links and also renamed lastSuccessfulBuild to lastFailedBuild, and it worked. If you are allowed to access the filesystem from within a Jenkins plugin, then it is possible to write one.
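For reference, a hedged sketch of that manual edit on a Unix-like master, assuming the default Jenkins home layout (JOBNAME and build number 42 are placeholders); Jenkins only picks up the change after a restart or a "Reload Configuration from Disk":
# Flip the recorded result of build 42 from SUCCESS to FAILURE on disk.
BUILD_XML="$JENKINS_HOME/jobs/JOBNAME/builds/42/build.xml"
sed -i 's|<result>SUCCESS</result>|<result>FAILURE</result>|' "$BUILD_XML"
# The lastSuccessfulBuild / lastStableBuild symlinks in the builds directory
# may also need re-pointing, as the mklink step above does on Windows.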
In case you are fine with deleting the current build and starting the same build again (same version number, with the next BUILD_NUMBER set back to the deleted one), you could use this plugin to tell it to fail instead of succeed:
Fail The Build Plugin

Can Jenkins (continuous build) pinpoint the commit that caused a build failure?

Jenkins says a build succeeded or failed, but can it identify the exact commit (and author!) that caused a build to fail?
This issue would seem to indicate no.
Edit: From my exchange with Pace:
What I see is "include culprits", which is everyone since the last build. I don't want that. I want THE culprit, with Jenkins doing the binary search. If Jenkins does two builds 10 commits apart, I don't want 10 possible culprits, I want it to find the one.
I haven't yet heard how to do that.
That page was talking about the FindBugs plugin, not the normal build cycle. Depending on how things are set up, Jenkins can identify the exact commit and author that caused a failure. If Jenkins has the appropriate source control plugins installed and is configured to know about the repository the build is tied to, then for every build it will list the changes since the last build.
In addition, Jenkins has the capability in many of its reporting plugins to blame the faulty committer. It can, for example, send an e-mail notification on a failed build to the developer that made the faulty commit.
However, many setups make it difficult for Jenkins to know. For example, if Jenkins is configured for daily builds then there are likely many commits which could have caused the issue. It's also possible that Jenkins isn't configured to know about the source control repository, or there is no source control repository. All of these issues could cause Jenkins to be unable to identify the build breaker.
Specifically for e-mailing faulty committers you can use the email-ext plugin which has options to send e-mails to everyone that committed since the last successful build.
For a humorous take on this subject check out this approach.
I think what you're asking for is impossible in some cases. Determining who the culprit is requires insight into conflict resolution that only a human can provide. Even then, sometimes a manager has to be involved in order to arbitrate. Say, for instance, you get three commits (A, B, C) that depend on a preexisting function, but another commit (D) modifies that function's behavior. Which do you revert? Perhaps the business decision is to keep A, B, C as they are and return D to its original state. The opposite, modifying A, B, C to adapt to the changes in D, is also possible.
In the cases where a machine can handle the arbitration, it is the responsibility of unit tests and static analyzers to determine the culprit (although still imperfectly). Static analyzers sometimes have built-in features that email the person who committed a violation. Unit tests can be written so that they notify the teams or team members responsible for a failed test. Both could work in the same way, by identifying the last committer on the particular line that failed. Still, if the problem is with linking, then perhaps some members should be associated with the particular makefile.

Using Jenkins, Perforce, and Ant, how can I run PMD only on files that have changed since the last green build?

Given that:
There seems to be no easy way to get a list of "changed" files in Jenkins (see here and here)
There seems to be no fast way to get a list of files changed since label xxxx
How can I go about optimising our build so that when we run PMD it only runs against files that have been modified since the last green build.
Backing up a bit… our PMD takes 3–4 minutes to run against ~1.5 million lines of code, and if it finds a problem the report invariably runs out of memory before it completes. I'd love to trim a couple of minutes off of our build time and get a good report on failures. My original approach was that I'd:
get the list of changes from Jenkins
run PMD against a union of that list and the contents of pmd_failures.txt
if PMD fails, include a list of failing files in pmd_failures.txt
More complicated than I'd like, but worth having a build that is faster but still reliable.
Once I realised that Jenkins was not going to easily give me what I wanted, I realised that there was another possible approach. We label every green build. I could simply get the list of files changed since the label and then I could do away with the pmd_failures.txt entirely.
No dice. The idea of getting a list of files changed since label xxxx from Perforce seems to have never been streamlined from:
$ p4 files //path/to/branch/...#label > label.out
$ p4 files //path/to/branch/...#now > now.out
$ diff label.out now.out
Annoying, but more importantly even slower for our many thousands of files than simply running PMD.
So now I'm looking into trying to run PMD in parallel with other build stuff, which is still wasted time and resources and makes our build more complex. It seems daft to me that I can't easily get a list of changed files from Jenkins or from Perforce. Has anyone else found a reasonable workaround for these problems?
I think I've found the answer, and I'll mark my answer as correct if it works.
It's a bit more complex than I'd like, but I think it's worth the 3–4 minutes saved (and avoiding the potential memory issues).
At the end of a good build, save the good changelist as a Perforce counter (post-build task). It looks like this:
$ p4 counter last_green_trunk_cl %P4_CHANGELIST%
When running PMD, read the counter into the property last.green.cl and get the list of files from:
$ p4 files //path/to/my/branch/...#${last.green.cl},now
//path/to/my/branch/myfile.txt#123 - edit change 123456 (text)
//path/to/my/branch/myotherfile.txt#123 - add change 123457 (text)
etc...
(have to parse the output; a sketch of this step follows below)
Run PMD against those files.
That way we don't need the pmd_failures.txt and we only run PMD against files that have changed since the last green build.
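A hedged shell rendering of that parsing step (the counter name matches the answer; mapping depot paths to workspace paths and the actual Ant/PMD invocation are left out since they depend on the local setup):
# Read the last green changelist recorded by the post-build task.
last_green=$(p4 counter last_green_trunk_cl)
# List files changed since then (same p4 files invocation as above) and strip
# the "#rev - action change NNN (type)" suffix to leave bare depot paths.
p4 files "//path/to/my/branch/...#${last_green},now" | sed 's/#[0-9].*$//' > changed_files.txt
# changed_files.txt now holds one depot path per line; map these to local
# workspace paths (e.g. with p4 where) before handing them to the PMD task.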
[EDIT: changed it to use p4 counter, which is way faster than checking in a file. Also, this was very successful so I will mark it as answered]
I'm not 100% sure since I've never used Perforce with Jenkins, but I believe Perforce passes the changelist number through the environment variable $P4_CHANGELIST. With that, you can run p4 filelog -c $P4_CHANGELIST, which should give you the files from that particular changelist. From there, it shouldn't be hard to script something up to get just the changed files (plus the old failures) into PMD.
I haven't used Perforce in a long time, but I believe the -ztag global option makes it easier to parse p4 output from the various scripting languages.
Have you thought about using automatic labels? They're basically just an alias for a changelist number, so it's easier to get the set of files that differ between two automatic labels.
