For a project, certain preconditions have to be met before a CI build makes sense. Not all of them can be ensured prior to commit. Hence, to save resources, I'd like to evaluate those preconditions in .travis.yml (e.g. using shell commands) and cancel the Travis build if they are not met.
I don't think it'd be helpful or feasible to start experimenting with kill variations in the before_install phase. There must be a better way.
Put these checks in your regular install commands and have them exit with a failure if the preconditions are not met.
install:
- check_preconditions && actually_build
This will result in a "failed" build whenever the preconditions are not met. A (slightly silly) workaround is to abort with success, which instead will show a "successful" build even when nothing was actually built (the command is quoted so that YAML does not parse the braces as a flow mapping):
install:
- '{ check_preconditions && actually_build; true; }'
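For illustration, check_preconditions could be an ordinary shell function or script that signals via its exit status whether a build is worthwhile. A minimal sketch (the file it tests for is purely hypothetical):

```shell
# Hypothetical precondition gate: exit status 0 means "worth building".
check_preconditions() {
    # Assumption for this sketch: a build only makes sense
    # when the top-level build script is present.
    [ -f ./build.sh ]
}
```

With that in place, check_preconditions && actually_build runs the build only when the function succeeds.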
I'm playing with my own yocto layer/recipes to figure out how everything works together, and I'm seeing some very confusing behavior related to the build task. Here is my test recipe called condtest:
LICENSE = "GPLv2+ & LGPLv2+"
DISTRO = ""
do_print() {
echo "print"
}
addtask print
do_fetch() {
echo "fetch"
}
addtask fetch before build
do_build() {
echo "build"
}
addtask build
do_compile() {
:
}
addtask compile
So if I run bitbake -c fetch condtest, I see "fetch" echoed exactly as I would expect, so that makes sense. However, when I run bitbake -c build condtest, bitbake will not echo "build" and instead begins fetching and compiling a bunch of packages. What confuses me further is that if I add the -e flag to the two commands, their output is nearly identical, so I'm not sure why bitbake appears to begin building an entirely different recipe with the default build task instead of using the build task that I defined in my recipe.
The base bbclass file (meta/classes/base.bbclass) sets:
do_build[noexec] = "1"
which means the content of the function is not executed and it is just a placeholder task for the dependency graph. This is why you never see output from the build task.
As mentioned in other answers, there are default dependencies, which is why other recipes execute when you try to run "standard" tasks like do_build.
The other packages are built because there are build-time dependencies (and such dependencies are not needed for the fetch task). The content of your build task is not relevant; the dependencies are stored elsewhere (see the BitBake User Manual, section Build Dependencies, for more information). You can generate a graph of dependencies using the -g flag in the bitbake invocation (see the official docs).
If you want to disable default dependencies, check the documentation for the variable INHIBIT_DEFAULT_DEPS.
It wasn't part of your question, but I see these glitches in your recipe:
You don't have to call addtask for standard tasks; you can find them (along with their documentation) in the reference manual.
If you want to skip a task and preserve the dependency list, you can use do_compile[noexec] = "1".
The DISTRO variable (i.e. its definition) belongs in the global configuration, not in a recipe.
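Putting those points together, a trimmed version of the recipe could look like this (a sketch following the notes above, not a tested recipe):

```
LICENSE = "GPLv2+ & LGPLv2+"

do_print() {
    echo "print"
}
addtask print

# fetch and build are standard tasks, so no addtask lines are needed;
# overriding do_fetch here is only for experimentation
do_fetch() {
    echo "fetch"
}

# keep do_compile in the task graph but skip its (empty) body
do_compile[noexec] = "1"
```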
Edit: I didn't answer why "build" is not echoed; see Richard's answer for the explanation.
I have a project that sometimes doesn't link correctly if you try to rebuild it after making updates. After doing a clean build, it works fine. I'd like to replicate this behavior in Jenkins - first try rebuilding, then if that fails, run the clean script and try again, and only if that fails, fail the build.
How do I build in conditionals like this?
You can use a simple try/catch block in your pipeline script:
try {
// build step without clean
}
catch (error) {
// clean step
// build step
}
(but you might want to consider cleaning before every build and/or figuring out why it's failing)
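As a concrete sketch of that pattern in a scripted pipeline (the two shell scripts are placeholders for your actual build and clean steps):

```groovy
node {
    try {
        sh './build.sh'        // first attempt: incremental build
    } catch (err) {
        sh './clean.sh'        // it failed: clean the workspace
        sh './build.sh'        // retry; if this fails too, the build fails
    }
}
```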
I am trying to set the build name of a Jenkins build only on a successful build. Any failure, whether in building or testing, should use the 'default' (build number) instead.
I can't find any mention of this in the documentation or online. Is this possible?
It is pretty simple to do with Groovy Postbuild: https://wiki.jenkins-ci.org/display/JENKINS/Groovy+Postbuild+Plugin
You have some nice examples there too. So just check
manager.build.result
and then set the name, as they do in Example 3.
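For instance, a Groovy Postbuild script along these lines could set the display name only on success (the name itself is a placeholder; manager is the object the plugin exposes to the script):

```groovy
// Runs as a Groovy Postbuild step; 'manager' is provided by the plugin.
if (manager.build.result == hudson.model.Result.SUCCESS) {
    manager.build.displayName = "release-${manager.build.number}"
}
// On any failure, the default name (the build number) is left untouched.
```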
In the post-build operations you can run a "set of scripts". There you can set a description, run a system Groovy script or a regular Groovy script to change the name, or use any other method of your choosing; you can add many build steps to help you do so. Wrap it in a conditional statement and run it only when the build is successful.
Good luck!
I invoke our gtest suite for iOS in Jenkins using the shell script
#!/bin/sh
pkill -a "iPhone Simulator"
ios-sim launch ${WORKSPACE}/source/apple/build/Debug-iphonesimulator/MyAppTest.app --args --gtest_output=xml:${WORKSPACE}/JUnitTestResultsIOS.xml
exit $?
This always successfully runs the tests, and when the tests pass the xml file gets generated as expected. However, when the tests fail, no xml file is generated, and the "Execute shell command" build step terminates but does not fail the job. I echoed the exit code and it comes back 0 even when the tests fail.
This is even more confusing to me since we have a basically identical script in the same job for running tests on our OSX version. This always writes the xml and successfully fails the job when the tests fail.
This behavior seems totally arbitrary and everything about our configuration seems to be exactly as it should be. What am I missing?
Thanks!
There were two things at work here.
First of all, we had the break_on_failure gtest option enabled, which works great when running tests on a local machine but isn't useful within Jenkins, so we disabled it on the build machine.
The second issue was around how we used the exit code. Since ios-sim launch ... always succeeds, we were always getting an exit code of 0, regardless of whether the tests passed or failed. I ended up using grep to determine whether the resulting xml file indicated any failures, and generated an exit code based on that.
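The check was essentially the following sketch (the pattern assumes gtest's usual failures="N" counters in the XML report; the helper name is my own):

```shell
# Return success (0) when the gtest XML report records at least one failure.
gtest_failed() {
    grep -q 'failures="[1-9]' "$1"
}
```

In the job script this becomes something like if gtest_failed "${WORKSPACE}/JUnitTestResultsIOS.xml"; then exit 1; fi after the ios-sim invocation.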
I'm trying to 'release' my project using the gradle release-plugin
The plugin starts by checking if my working copy is clean, so that only properly versioned stuff gets released.
This works just fine on my local machine. But when I try the same thing in a Jenkins job, the build fails, complaining that various files have changed in the workspace. I decided that a lot of this was just used internally by Jenkins and added it to .gitignore:
caches/
native/
wrapper/
But it also considers gradlew as changed:
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':checkCommitNeeded'.
> You have uncommitted files:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
M gradlew
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Why does Jenkins change that file, and how do I prevent it?
I think the following settings on the Jenkins job might be relevant:
Checkout/merge to local branch (optional) is set to master. Without this setting the release plugin complains about not being on a branch
Clean after checkout is currently checked, but checking/unchecking it didn't make a difference
Make gradlew executable is checked, and at least to me sounds like a likely cause, but unchecking it makes the build fail because gradlew is not executable
Pretty old question, but for the record for anyone coming over here: Jenkins isn't at fault here. You should commit gradlew with the executable bit set:
$ git update-index --chmod=+x gradlew
$ git commit
Then you will no longer need the Jenkins setting that marks it executable, which is the workaround causing your issue.