Yocto - Why can't I override the build task?

I'm playing with my own yocto layer/recipes to figure out how everything works together, and I'm seeing some very confusing behavior related to the build task. Here is my test recipe called condtest:
LICENSE = "GPLv2+ & LGPLv2+"
DISTRO = ""

do_print() {
    echo "print"
}
addtask print

do_fetch() {
    echo "fetch"
}
addtask fetch before build

do_build() {
    echo "build"
}
addtask build

do_compile() {
    :
}
addtask compile
So if I run bitbake -c fetch condtest I see "fetch" echoed, exactly as I would expect. However, when I run bitbake -c build condtest, bitbake does not echo "build"; instead it begins fetching and compiling a bunch of other packages. What confuses me further is that if I add the -e flag to the two commands, their output is nearly identical, so I'm not sure why bitbake appears to start building entirely different recipes with the default build task instead of using the build task that I defined in my recipe.

The base bbclass file (meta/classes/base.bbclass) sets:
do_build[noexec] = "1"
which means the content of the function is not executed and it is just a placeholder task for the dependency graph. This is why you never see output from the build task.
As mentioned in other answers, there are default dependencies, which is why other recipes execute when you try to run "standard" tasks like do_build.
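If you want the body of your do_build to actually run, one possibility (a sketch; BitBake evaluates the noexec varflag for truth, so clearing it in your recipe should re-enable execution of the task body) is:

# in the condtest recipe, after base.bbclass has set do_build[noexec] = "1"
do_build[noexec] = ""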

The other packages are built because there are build-time dependencies (such dependencies are not needed for the fetch task, which is why -c fetch behaves as you expect). The content of your build task is not relevant; the dependencies are stored elsewhere (see the Build Dependencies section of the BitBake User Manual for more information). You can generate a dependency graph by passing -g to bitbake (see the official docs).
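For example (a sketch; the exact set of .dot files written varies with the BitBake version):

bitbake -g condtest
# writes task-depends.dot (and related files) into the current directory;
# inspect them to see which dependencies pull in the other recipes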
If you want to disable the default dependencies, check the documentation for the variable INHIBIT_DEFAULT_DEPS.
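A minimal sketch, assuming the recipe really needs no toolchain at build time (INHIBIT_DEFAULT_DEPS suppresses the implicit dependencies on the cross-compiler and C library):

# in the condtest recipe; only safe because this recipe compiles nothing
INHIBIT_DEFAULT_DEPS = "1"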
It wasn't part of your question, but I see these glitches in your recipe:
You don't have to call addtask for standard tasks; they are already defined, and you can find them (along with their descriptions) in the reference documentation.
If you want to skip a task but preserve its dependency list, you can use do_compile[noexec] = "1".
The DISTRO variable definition belongs in the global configuration, not in a recipe.
Edit: I didn't answer why "build" is not echoed; see Richard's answer for the explanation.

Related

Bazel builds from scratch ignoring cache

I observe that my Bazel build agent frequently builds the project from scratch (including compiling grpc, which is unchanged) instead of taking results from the cache. Is there a way, like query or cquery (pardon my ignorance), to determine why the cache is considered invalid for a particular target? Or any techniques to tackle the cache-invalidation problem?
This is how a Bazel build works. When running a build or a test, Bazel does the following:
Loads the BUILD files relevant to the target.
Analyzes the inputs and their dependencies, applies the specified build rules, and produces an action graph.
Executes the build actions on the inputs until the final build outputs are produced.
If you have any concrete observations or assumptions, please share the complete details!
This is most likely due to rebuild sensitivity to particular environment variables. Many build actions read environment variables and use them to change their outputs. Bazel keeps track of this and will rebuild seemingly unchanged remote targets when your environment changes.
To demonstrate this (a full transcript is sketched below):
1. Build grpc twice, to ensure the second build is fully cached.
2. Change the PATH environment variable (your IDE may do this without you knowing): mkdir ~/bin && export PATH=$PATH:~/bin
3. Rebuild grpc; this should trigger a complete rebuild.
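A sketch of the whole experiment (the grpc target label is hypothetical; substitute whatever target your build uses):

bazel build //third_party/grpc:grpc    # hypothetical label; first build
bazel build //third_party/grpc:grpc    # second build: fully cached, no actions run
mkdir ~/bin && export PATH="$PATH:~/bin"    # perturb the environment
bazel build //third_party/grpc:grpc    # PATH changed, so actions re-run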
There are a couple of helpful flags to combat this rebuild sensitivity, and I'd recommend adding them to your bazelrc:
--incompatible_strict_action_env: freezes the environment and doesn't source environment variables from your shell.
--action_env: sets environment variables explicitly, as needed for your build.
# file //.bazelrc
# Don't source environment from shell
build --incompatible_strict_action_env
# Use action_env as needed for your project
build --action_env=CC=clang

How do I debug an annotation processor in a bazel java_library rule?

I have added an annotation processor as a java_plugin and have added it to the plugins section of my java_library rule. I was wondering what the Bazel options are for stepping through the annotation processor code and the javac compiler's code.
One way to do this is to run bazel build with --subcommands. Bazel will then print out all the commands it executes during a build. You can then find the javac invocation you're interested in, copy the command line (including the cd part, so you're in the correct directory), modify the command line to include the debugging options, and run it manually; a sketch follows below. Then you can debug it like you would any Java program.
One thing to note is that bazel will print only the commands that it actually runs in that build, so if the action you're interested in is already up-to-date, you may have to delete one of its outputs (e.g. the jar output of that library) to get bazel to re-run the action.
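A sketch of the round trip (the label //java/mylib:mylib is hypothetical, and the exact compile command Bazel prints varies by version; on current Bazel it is typically a java invocation of its JavaBuilder, so the JDWP agent flag goes on that java command):

bazel build //java/mylib:mylib --subcommands    # hypothetical label
# Copy the printed compile command, including its leading "cd" into the
# execroot, then re-run it by hand with a debugger agent attached, e.g.:
#   java -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005 \
#       <rest of the printed command>
# and attach your IDE's remote debugger to port 5005, with breakpoints set
# in the annotation processor.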

Clean up unreachable generated files in Bazel

Suppose I have a very minimal project with an empty WORKSPACE and a single package defined at the project root that simply uses touch to create a file called a, as follows:
genrule(
    name = "target",
    cmd = "touch $@",
    outs = ["a"],
)
If I now run
bazel build //:target
the package will be "built" and the a file will be available under bazel-genfiles.
Suppose I now change the BUILD to write the output to a different file, as follows:
genrule(
    name = "target",
    cmd = "touch $@",
    outs = ["b"],
)
Building the same target will result in the file b being available under bazel-genfiles. a will still be there though, even though at this point it's "unreachable" from within the context of the build definition.
Is there a way to ask Bazel to perform some sort of "garbage collection" and remove files (and possibly other content) generated by previous builds that are no longer reachable as-per the current build definition, without getting rid of the entire directory? The bazel clean command seems to adopt the latter behavior.
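A sketch of the repro described above (assuming an older Bazel layout where genrule outputs land under bazel-genfiles):

bazel build //:target    # produces bazel-genfiles/a
# edit BUILD: outs = ["a"] -> outs = ["b"]
bazel build //:target    # produces bazel-genfiles/b
ls bazel-genfiles        # a  b  -- "a" is stale but still present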
There seems to be a feature in the works, but apparently it cannot be triggered on demand; rather, it runs automatically once a certain threshold is reached.
Note that running bazel clean will not actually delete the external directory. To remove all external artifacts, use bazel clean --expunge
bazel clean is the way to remove these.
The stale outputs aren't visible to actions, provided you build with sandboxing. (Not yet available on Windows, only on Linux and macOS.)
What trouble do these files make?

How to specify "default" target labels when running bazel test

We're considering migrating to Bazel from Make. To make the transition easier I would like to have bazel test (no flags / options) run the current directory's tests, if any.
So instead of bazel test my_tests, bazel test would find the current directory's BUILD file, find any *_test rules, and run those.
If you want to do exactly what you said, you can use your own script.
When you run bazel, it actually looks to see whether there is a script named bazel under the tools directory of the current workspace. So if you have an executable at $workspace/tools/bazel, Bazel will run it instead of the bazel binary itself.
This means you can write a script that checks whether the only argument is "test" and, if so, calls bazel-real test :all.
It can also check the exit code to see if there were no tests (it's a specific error code) and return 0 instead.
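A minimal sketch of such a wrapper (assumptions: Bazel sets $BAZEL_REAL to the real binary's path when invoking tools/bazel, and exit code 4 is Bazel's "no test targets found" code):

#!/usr/bin/env bash
# $workspace/tools/bazel: Bazel runs this instead of the real binary.
real="${BAZEL_REAL:-bazel-real}"    # fall back to bazel-real on PATH (assumption)

if [ "$#" -eq 1 ] && [ "$1" = "test" ]; then
  "$real" test :all
  status=$?
  # Exit code 4 means testing was requested but no test targets were found;
  # treat that as success, per the question's requirement.
  [ "$status" -eq 4 ] && exit 0
  exit "$status"
fi

exec "$real" "$@"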
You can use the all target pattern to match all targets in the current package: bazel test :all
You can read more about it here: https://docs.bazel.build/versions/master/user-manual.html#target-patterns
Note however that if there are no test targets in the current package, bazel will give an error: "ERROR: No test targets were found, yet testing was requested.". In this case bazel will give an exit code of 4: https://docs.bazel.build/versions/master/user-manual.html#what-exit-code-will-i-get
I recommend creating an alias called bazel-test for bazel test :all.
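For example, as a shell alias (this is plain shell, not a Bazel feature):

# e.g. in ~/.bashrc
alias bazel-test='bazel test :all'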

How to execute package for one submodule only on Jenkins?

I have a sbt project with 4 modules: module-a, module-b, module-c, module-d.
Each module can be packaged as a WAR. I want to set up a deployment on Jenkins that would build only one of the 4 modules and deploy it to a container.
In detail, I want to have 4 Jenkins jobs - job-a, job-b, job-c, job-d, each building only the defined module (a to d).
For now, I am using clean update test package as the command for the Jenkins sbt build, but this results in packaging all 4 modules that is not necessary.
I already tried project -module-a clean update test package but with no luck.
You may also like to execute project-scoped clean and test tasks as follows:
sbt module-a/clean module-a/test
This solution is slightly shorter and makes it clearer which project the commands apply to.
You don't need to execute the update task, since it's implicitly executed by test (as described by inspect tree test). A packaging variant is sketched below.
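Since the Jenkins job ultimately needs the WAR as well, the same per-project scoping applies to package (a sketch; the module ID module-a is assumed to match your build definition):

sbt module-a/clean module-a/test module-a/package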
There's a way to make it cleaner with an alias. Use the following in the build.sbt:
addCommandAlias("jenkinsJob4ModuleA", "; module-a/clean; module-a/test")
With the alias, execute jenkinsJob4ModuleA to have the same effect as the above solution.
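On Jenkins, the build step for job-a then reduces to:

sbt jenkinsJob4ModuleA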
Quote the argument to project, i.e. project module-a, and don't use a dash before the name of the submodule.
The entire command line for the Jenkins job would then be as follows:
./sbt "project module-a" clean update test
