Travis, is it possible to combine cron-scheduled builds and checks about github changes? - travis-ci

Trying on SO too, after the Travis forum.
I have quite a big project, which takes a long time to build. Because of that, I don’t want it rebuilt on every change pushed to GitHub. At the same time, I’d like to build it automatically every day, but only if there have been GitHub changes since the last build. Scheduling a daily build in Travis doesn’t achieve this: the repo is rebuilt daily anyway, even if the code on GitHub is exactly the same as the day before. Rebuilding a big unchanged codebase for nothing isn’t great.
Is there a way to obtain that in Travis? Should I file a new feature request?

OK, strangely, this doesn't seem to be an interesting problem to anyone else, so I had to find some sort of solution on my own.
As far as I can tell, Travis doesn't support such a feature (I don't know why; it seems pretty basic to me), but it does offer an environment variable that tells you what triggered the build. This can be combined with git log:
if [[ "$TRAVIS_EVENT_TYPE" == "cron" ]]; then
  # Count the commits made since the previous daily cron run.
  # (Note: Travis clones with --depth=50 by default, which is enough
  # history for a 24-hour window on most projects.)
  nchanges=$(git log --since='24 hours ago' --format=oneline | wc -l)
  if [[ $nchanges -eq 0 ]]; then
    cat <<EOT
This is a cron-triggered build and the code didn't change since the
latest build, so we're not rebuilding.
This is based on the git log (--since '24 hours ago'). Please
launch a new build manually if I didn't get it right.
EOT
    exit 0
  fi
fi
This isn't perfect, because the whole VM and its environment are brought up anyway, and the Travis logs show the event without distinguishing it from any other build. But until I find a better solution, at least this is better than building every day for nothing (or building many times a day, even for minimal changes).

Related

Fail a Travis build if a variable has degraded since the previous build

Is it possible to have a Travis job calculate some metric (such as the number of npm audit problems, number of failed tests, number of lint warnings etc) and fail the job if the value is worse than for the previous build (possibly for the previous build on the same branch)?
If so, how could the previous value be stored?
I've thought of an ugly but interesting workaround: storing the metric value as a tag or a note on the git branch, which the job could then pick up, but this really abuses git.
So: Is there a "proper" way of storing data from a Travis build so that other builds may then read it?
In case anyone's curious, the problem I'm having is that my yarn audit condition (no high-severity vulnerabilities: yarn audit --groups dependencies || (mask=$? && [ $mask -lt 8 ])) fails with no way to proceed when there's a new CVE which hasn't been addressed yet, so I need a way to move my jobs forward in such scenarios. In lieu of interactive jobs where you could respond to a "Deploy anyway?" prompt, I thought one way could be simply to re-trigger the same job and let it pass if previousAuditErrors and auditErrors are both 1, i.e. there is no degradation. There may be a better solution to this problem that I haven't thought of, but I think the question above is interesting regardless.
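For reference, the mask check in that condition leans on the fact that yarn audit exits with a bitmask of the severity levels it found (1 = info, 2 = low, 4 = moderate, 8 = high, 16 = critical), so any exit code strictly below 8 means nothing worse than moderate. A minimal sketch of that check, with a hypothetical fake_audit stand-in so it runs without a real project:

```shell
#!/bin/sh
# fake_audit is a stand-in for `yarn audit --groups dependencies`,
# which exits with a bitmask of the severities found:
# 1 = info, 2 = low, 4 = moderate, 8 = high, 16 = critical.
fake_audit() { return "$1"; }

# Pass if the audit is clean, or if the exit-code mask has no
# high (8) or critical (16) bits set, i.e. it is strictly below 8.
audit_ok() {
  fake_audit "$1" || { mask=$?; [ "$mask" -lt 8 ]; }
}

audit_ok 6 && echo "moderate or lower: pass"   # prints: moderate or lower: pass
audit_ok 8 || echo "high severity: fail"       # prints: high severity: fail
```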

Check file creation time in jenkins pipeline

I'm wondering if anybody knows a generic way in Jenkins pipeline to find out the creation time of a file? There are operations to touch a file but seemingly not for reading that time.
What I actually want to do is to clean out workspaces every few days - for most of our builds, we want incremental behaviour and thus don't clean out. However, there are times things build up and we'd like to be able to automatically clean out on occasion.
I need the code to work on both Linux flavours and Windows. My only big idea to date is to actually write the time into a file and read that back. However, it somehow seems wrong!
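For what it's worth, the write-the-time-into-a-file idea is not as wrong as it feels: a marker file's modification time is exactly the portable timestamp the pipeline is missing. A minimal sketch, assuming a POSIX shell is available on both the Linux and Windows agents (e.g. via Git Bash); maybe_clean and the .last_clean name are illustrative, not Jenkins built-ins:

```shell
#!/bin/sh
# Clean the workspace only when the marker file is missing or stale.
# The marker's mtime records the last cleanup; `find -mtime -N` prints
# the marker only if it was touched within the last N days, so an
# empty result means "time to clean again".
maybe_clean() {
  workspace=$1
  max_age_days=3
  if [ -z "$(find "$workspace" -maxdepth 1 -name .last_clean -mtime -"$max_age_days")" ]; then
    echo "cleaning $workspace"
    # rm -rf "$workspace"/build   # the real cleanup would go here
    touch "$workspace/.last_clean"
  fi
}
```

From a pipeline this could be invoked as an sh step on Linux agents (and through Git Bash on Windows ones), so the same logic runs on both.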

TFS Build takes long time

In our company we use gated check-in to make sure committed code doesn't have any problems, and we also run all of our unit tests there. We have around 450 unit tests.
Building the entire solution takes 15-20 seconds, and the tests take maybe 3 minutes on my local computer. When I build it on the server it takes 10 minutes. Why is that? Is there extra stuff being run that I don't know of?
Be aware that there are additional overheads in the workflow prior to the actual build and test cycle (cleaning/getting the workspace is the main culprit most of the time). I have seen the same behaviour myself and never really got the server performance close to what it would be locally.
Once the build is running, you can view the progress and see where the time is being taken, this will also be in the logs.
In the build process parameters you can skip some extra steps if you just want to build the checked in code.
Set all these to False: Clean Workspace, Label sources, Clean build, Update work items with build number.
You could also avoid publishing (if you're doing it) or copying binaries to a drop folder (also, if you're doing it).
As others have suggested, take a look at the build log, it'll tell you what's consuming the time.

Zip old builds in jenkins plugin

Is there a Jenkins plugin to ZIP old builds? I don't want to package just the generated archive (I am deleting those). I want to ZIP just the log data and the data used by tools like FindBugs, Checkstyle, Surefire, Cobertura, etc.
Currently I am running out of disk space due to Jenkins. Some build log files reach 50 MB due to running 3000+ unit tests (most of these are severely broken builds full of stack traces in which everything fails). This happens frequently in some of my projects, so I get this for every bad build. Good builds are milder and may reach around 15 MB, but that is still a bit costly.
The Surefire XML files for these builds are huge too. Since they tend to contain very repetitive data, I could save a lot of disk space by zipping them. But I know of no Jenkins plugin for this.
Note: I am already deleting old builds not needed anymore.
The 'Compress Buildlog' plugin does pretty much exactly what you're asking for, for the logs themselves at least.
https://github.com/daniel-beck/compress-buildlog-plugin
For everything else, you'll probably want an unconditional step after your build completes that manually applies compression to other generated files that will stick around.
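That unconditional step can be as small as one find call. A hedged sketch, assuming the reports live under a Maven-style target/ directory (compress_reports is an illustrative name; adjust the path and patterns to your job):

```shell
#!/bin/sh
# Compress every report XML (Surefire, Checkstyle, FindBugs, ...)
# under the given directory, in place; repetitive XML shrinks a lot
# at gzip's maximum compression level.
compress_reports() {
  find "$1" -name '*.xml' -exec gzip -9 {} +
}

# e.g. compress_reports target/surefire-reports
```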
The administering Jenkins guide gives some guidance on how to do this manually. There are also links to the following plugins
Shelve Project
thinBackup
The last one is really designed to backup Jenkins configuration, but there are also options for build results.
Although this question was asked 3 years ago, other people may be searching for the same thing, so here is my answer.
If you want to compress the current build job's log, use the Compress Buildlog plugin mentioned above.
If you want to compress old Jenkins job logs, use the following script (-mtime +5 matches log files last modified more than 5 days ago):
cd "$JENKINS_HOME/jobs"
find . -name "log" -mtime +5 -exec gzip -9v {} +

Using Jenkins, Perforce, and Ant, how can I run PMD only on files that have changed since the last green build?

Given that:
There seems to be no easy way to get a list of "changed" files in Jenkins (see here and here)
There seems to be no fast way to get a list of files changed since label xxxx
How can I go about optimising our build so that when we run PMD it only runs against files that have been modified since the last green build.
Backing up a bit… our PMD takes 3–4 minutes to run against ~1.5 million lines of code, and if it finds a problem the report invariably runs out of memory before it completes. I'd love to trim a couple of minutes off of our build time and get a good report on failures. My original approach was that I'd:
get the list of changes from Jenkins
run PMD against a union of that list and the contents of pmd_failures.txt
if PMD fails, include a list of failing files in pmd_failures.txt
More complicated than I'd like, but worth having a build that is faster but still reliable.
Once I realised that Jenkins was not going to easily give me what I wanted, I realised that there was another possible approach. We label every green build. I could simply get the list of files changed since the label and then I could do away with the pmd_failures.txt entirely.
No dice. Getting a list of files changed since label xxxx from Perforce seems never to have been streamlined beyond:
$ p4 files //path/to/branch/...#label > label.out
$ p4 files //path/to/branch/...#now > now.out
$ diff label.out now.out
Annoying, but more importantly even slower for our many thousands of files than simply running PMD.
So now I'm looking into trying to run PMD in parallel with other build stuff, which still wastes time and resources and makes our build more complex. It seems daft to me that I can't easily get a list of changed files from Jenkins or from Perforce. Has anyone else found a reasonable workaround for these problems?
I think I've found the answer, and I'll mark my answer as correct if it works.
It's a bit more complex than I'd like, but I think it's worth the 3-4 minutes saved (and potential memory issues).
At the end of a good build, save the good changelist as a Perforce counter (post-build task). It looks like this:
$ p4 counter last_green_trunk_cl %P4_CHANGELIST%
When running PMD, read the counter into the property last.green.cl and get the list of files from:
$ p4 files //path/to/my/branch/...#${last.green.cl},now
//path/to/my/branch/myfile.txt#123 - edit change 123456 (text)
//path/to/my/branch/myotherfile.txt#123 - add change 123457 (text)
etc...
(have to parse the output)
Run PMD against those files.
That way we don't need the pmd_failures.txt and we only run PMD against files that have changed since the last green build.
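The "have to parse the output" step is the only fiddly part. A small sketch of it, assuming the p4 files line format shown above; parse_p4_files is an illustrative name, and deleted files are dropped because PMD can't analyse files that no longer exist on disk:

```shell
#!/bin/sh
# Turn `p4 files` range output into a plain list of paths for PMD:
# keep everything before the '#revision' suffix, skipping deletions.
parse_p4_files() {
  grep -v ' - delete change ' | sed 's/#.*$//'
}

printf '%s\n' \
  '//path/to/my/branch/myfile.txt#123 - edit change 123456 (text)' \
  '//path/to/my/branch/gone.txt#9 - delete change 123457 (text)' \
  | parse_p4_files
# prints: //path/to/my/branch/myfile.txt
```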
[EDIT: changed it to use p4 counter, which is way faster than checking in a file. Also, this was very successful so I will mark it as answered]
I'm not 100% sure since I've never used Perforce with Jenkins, but I believe Perforce passes the changelist number through the environment variable $P4_CHANGELIST. With that, you can run p4 filelog -c $P4_CHANGELIST, which should give you the files from that particular changelist. From there, it shouldn't be hard to script something up to get just the changed files (plus the old failures) into PMD.
I haven't used Perforce in a long time, but I believe the -Ztag parameter makes it easier to parse p4 output in various scripting languages.
Have you thought about using automatic labels? They're basically just an alias for a changelist number, so it's easier to get the set of files that differ between two automatic labels.