Bazel test rule that only executes under a certain configuration

I have a custom test rule that validates the size of a produced binary. I want this rule to only execute in a certain configuration (optimized, --compilation_mode=opt) and not run (or be a no-op PASS) otherwise.
Specifically,
bazel test //my:example_size_test should not run the test (preferably, although running and always passing is acceptable)
bazel test -c opt //my:example_size_test should run the test, passing based on the test outcome
Is there a way to achieve this?
I've tried using a macro to conditionally alias to the rule (a sketch follows the list):
size_test is a macro that instantiates
$name_enabled_test, the actual test target of type _size_test
$name_disabled_test, a noop_test rule (custom rule that does essentially exit 0)
$name, an alias that selects between $name_enabled_test and $name_disabled_test depending on the configuration via select
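Roughly, the macro expands to the following (a sketch of the approach described above; :opt_build stands for a hypothetical config_setting matching --compilation_mode=opt):
def size_test(name, **kwargs):
    # The real size check, implemented by the custom _size_test rule.
    _size_test(
        name = name + "_enabled_test",
        **kwargs
    )
    # A test that does essentially `exit 0`.
    noop_test(
        name = name + "_disabled_test",
    )
    # Point the public name at one or the other depending on configuration.
    native.alias(
        name = name,
        actual = select({
            ":opt_build": ":" + name + "_enabled_test",
            "//conditions:default": ":" + name + "_disabled_test",
        }),
        testonly = True,
    )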
However, a hypothetical bazel test //my:example_size_test builds but doesn't run the test. This is documented:
Tests are not run if their alias is mentioned on the command line. To define an alias that runs the referenced test, use a test_suite rule with a single target in its tests attribute.
I've tried using a test_suite instead of alias:
size_test is a macro that instantiates
$name_enabled_test, the actual test target of type _size_test
$name_disabled_test, a noop_test rule (custom rule that does essentially exit 0)
$name, a test_suite that has a tests attribute that selects between $name_enabled_test and $name_disabled_test depending on the configuration
However, this doesn't work because the tests attribute is non-configurable.
Is there an idiomatic (or even roundabout) way to achieve a test that only applies to certain configurations?

Sounds like a job for select().
Define a config_setting for -c opt, and use select in the test's data attribute to depend on the binary. Also pass some flag to the test to indicate whether it should verify the binary's size or not.
I'll give you the example with sh_test because I don't want to assume anything about size_test:
sh_test(
    name = "example_size_test",
    srcs = [...],  # you need to implement this
    deps = ["@bazel_tools//tools/bash/runfiles"],
    data = select({
        ":config_opt": ["//my:binary"],
        "//conditions:default": [],
    }),
    args = select({
        ":config_opt": [],
        "//conditions:default": ["do_not_run"],
    }),
)
If $1 == "do_not_run", the test should exit 0; otherwise it should use a runfiles library (available for Bash, C++, Java, and Python) to retrieve //my:binary's location and check its size.
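For completeness, the :config_opt setting referenced above could be defined like this, matching -c opt / --compilation_mode=opt (a minimal sketch):
config_setting(
    name = "config_opt",
    values = {"compilation_mode": "opt"},
)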

Related

Bazel: how to specify --define in file?

I hope this is a simple question :) In bazel, I can select a config_setting by specifying --define K=V passed from the command line. How can I create a library in my BUILD.bazel that "sets" this config_setting without the need to specify it from command line?
Defaults for flags can be set in a .bazelrc file, for example:
build --define=VERSION=0.0.0-PLACEHOLDER
build --define=FOO=1
You can also define named configs:
build:bar --define=VERSION=0.0.0-PLACEHOLDER
build:bar --define=FOO=1
The above would only become active when passing the --config=bar flag.
Flags passed on the command line will take precedence over those in the .bazelrc file.
It's worth mentioning, though, that changing define values will cause Bazel to analyze everything again, which may take some time depending on the graph, but only affected actions will be executed.
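On the BUILD side, a config_setting can match such a define directly; a minimal sketch using the FOO define from the .bazelrc above:
config_setting(
    name = "foo_enabled",
    define_values = {"FOO": "1"},
)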
Some rules have an env attribute, so you can do, for example:
sh_binary(
    name = "target",
    ...
    env = {
        "K": "V",
    },
)

How to pass variables to Bazel target build?

I am trying to build a Docker image with this code:
container_image(
    name = "docker_image",
    base = "@java_base//image",
    files = [":executable_deploy.jar"],
    cmd = ["java", "-jar", "executable_deploy.jar"],
    env = {"VERSION": "$(VERSION)"},
)
I want to pass a variable to the target built so it can be replaced in $(VERSION). Is this possible?
I have tried with VERSION=1.0.0 bazel build :docker_image, but I get an error:
$(VERSION) not defined.
How can I pass that variable?
According to the docs:
The values of this field (env) support make variables (e.g., $(FOO)) and
stamp variables; keys support make variables as well.
But I don't understand exactly what that means.
Those variables can be set via the --define flag.
There is a section on the rules_docker page about stamping which covers this.
Essentially you can do something like:
bazel build --define=VERSION=1.0.0 //:docker_image
It is also possible to source these key/value pairs from the stable-status.txt and volatile-status.txt files. The Bazel user manual shows how to use these files and how to populate them with --workspace_status_command.
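As a sketch (the script path here is hypothetical), the command can also be set once in .bazelrc; any line the script prints whose key starts with STABLE_ lands in stable-status.txt, and the rest goes to volatile-status.txt:
build --workspace_status_command=tools/workspace_status.sh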
For setting defaults, you could use a .bazelrc file, with something like the following as the contents:
build --define=VERSION=0.0.0-PLACEHOLDER
The flags passed on the command line will take precedence over those in the .bazelrc file.
It's worth mentioning that changing define values will cause Bazel to analyze everything again, which may take some time depending on the graph, but only affected actions will be executed.

Bazel select() based on build config

I am trying to provide some preprocessor definitions at compile time based on whether the user runs bazel test or bazel build.
Specifically, I want a conditional dependency in cc_library.deps and a conditional define in cc_library.defines.
I found that select() is the way to go but I cannot figure out how to know what action the user runs.
I'm not aware of any way to detect the current command (build vs test) using select(), but I think you can achieve something similar with custom keys.
You could define a config_setting block like the following:
# BUILD
config_setting(
    name = "custom",
    values = {
        "define": "enable_my_flag=true",
    },
)
and use it in your library to control the defines:
# BUILD - continued
cc_library(
    name = "mylib",
    hdrs = ["mylib.h"],
    srcs = ["mylib.cc"],
    defines = select({
        ":custom": ["MY_FLAG"],
        "//conditions:default": [],
    }),
)
Now building the library with bazel build :mylib will hit the default case, with no defines present, but if you build with bazel build :mylib --define enable_my_flag=true then the other branch will be selected and MY_FLAG will be defined.
This can be easily extended to the test case, for example by adding the --define to your .bazelrc:
# .bazelrc
test --define enable_my_flag=true
Now every time you run bazel test :mylib_test the define flag will be appended and the library will be built with MY_FLAG defined.
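For reference, the test target itself needs nothing special; a minimal sketch:
cc_test(
    name = "mylib_test",
    srcs = ["mylib_test.cc"],
    deps = [":mylib"],
)
Because the define comes from the test line in .bazelrc, bazel test :mylib_test builds :mylib with MY_FLAG defined, while a plain bazel build :mylib does not.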
Out of curiosity why do you want to run the test on a library built with a different set of defines/dependencies? That might defeat the purpose of the test since in the end you're testing something different from the library you're going to use.

bazel select help -- configuring tcmalloc debug

A project I'm working on, Envoy proxy, uses Bazel and tcmalloc. I'd like to configure it to use the debug version of tcmalloc when compiling for dbg and fastbuild, and the optimized one for opt builds.
There are other conditions as well, e.g. a command-line flag passed to bazel to turn off tcmalloc completely, using this logic:
https://github.com/envoyproxy/envoy/blob/7d2e84d3d0f8a4ffbf4257c450b3e5a6d93d4697/bazel/envoy_build_system.bzl#L166
def tcmalloc_external_dep(repository):
    return select({
        repository + "//bazel:disable_tcmalloc": None,
        "//conditions:default": envoy_external_dep_path("tcmalloc_and_profiler"),
    })
I have a PR out (https://github.com/envoyproxy/envoy/pull/5424), currently failing continuous integration, which changes the logic (https://github.com/envoyproxy/envoy/blob/1ed5aba5894ce519181edbdaee3f52c2971befaf/bazel/envoy_build_system.bzl#L156) to:
def tcmalloc_external_dep(repository):
    return select({
        repository + "//bazel:disable_tcmalloc": None,
        repository + "//bazel:dbg_build": envoy_external_dep_path("tcmalloc_debug"),
        "//conditions:default": envoy_external_dep_path("tcmalloc_and_profiler"),
    })
However, this does not work because we allow disabling tcmalloc on debug builds (which we do in continuous-integration scripts when running TSan). This runs afoul of Bazel, which evidently expects the conditions to be mutually exclusive, whereas I want "first matching rule wins" in this case. I get this error:
ERROR: /home/jmarantz/git4/envoy/test/common/network/BUILD:58:1: Illegal ambiguous match on configurable attribute "malloc" in //test/common/network:dns_impl_test:
//bazel:disable_tcmalloc
//bazel:dbg_build
Multiple matches are not allowed unless one is unambiguously more specialized.
ERROR: Analysis of target '//test/common/network:dns_impl_test' failed; build aborted:
/home/jmarantz/git4/envoy/test/common/network/BUILD:58:1: Illegal ambiguous match on configurable attribute "malloc" in //test/common/network:dns_impl_test:
//bazel:disable_tcmalloc
//bazel:dbg_build
What's the best way to solve this? Can I use a Python conditional on the bazel command-line settings? Can I use AND or OR operators in the conditional expressions to make them mutually exclusive? Or is there another approach I could use?
Not an answer, but perhaps I can give you some ideas:
As of now, you can simulate AND and OR by nesting selects or refactoring your config_settings.
There is a proposal for some changes to add flexibility here:
https://github.com/bazelbuild/proposals/blob/master/designs/2018-11-09-config-setting-chaining.md
You might also find some useful ideas in Skylib.
https://github.com/bazelbuild/bazel-skylib
Yup, you can chain selects using https://github.com/bazelbuild/bazel-skylib/blob/master/lib/selects.bzl#L80. You can also write your own feature-flag rule that can be used in a select and that has arbitrary logic in it; see https://source.bazel.build/bazel/+/0faef9148362a5234df3507441dadb0f32ade59a:tools/cpp/compiler_flag.bzl for an example. It's a rule that can be used in selects, gets the current C++ toolchain, inspects its state, and returns its compiler value. You'll have to follow the thread a bit to see all the pieces. I'll ask for better docs for this.
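For illustration only, skylib's selects.config_setting_group can AND config_settings together. The //bazel:tcmalloc_enabled setting below is hypothetical (Envoy does not define it); the idea is simply to build a combined condition that cannot overlap with disable_tcmalloc:
load("@bazel_skylib//lib:selects.bzl", "selects")

selects.config_setting_group(
    name = "dbg_with_tcmalloc",
    match_all = [
        "//bazel:dbg_build",
        "//bazel:tcmalloc_enabled",  # hypothetical: assumed mutually exclusive with disable_tcmalloc
    ],
)
A select keyed on :dbg_with_tcmalloc and //bazel:disable_tcmalloc would then no longer be ambiguous, provided the two underlying settings really cannot match at the same time.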

Default, platform specific, Bazel flags in bazel.rc

I was wondering if it's possible to have platform-specific default Bazel build flags.
For example, we want to use --workspace_status_command but this must be a shell script on Linux and must point towards a batch script for Windows.
Is there a way we can write in the tools/bazel.rc file something like...
if platform=WINDOWS build: --workspace_status_command=status_command.bat
if platform=LINUX build: --workspace_status_command=status_command.sh
We could generate a .bazelrc file by having the users run a script before building, but it would be cleaner/nicer if this were not necessary.
Yes, kind of. You can specify config-specific bazelrc entries, which you can select by passing --config=<configname>.
For example your bazelrc could look like:
build:linux --cpu=k8
build:linux --workspace_status_command=/path/to/command.sh
build:windows --cpu=x64_windows
build:windows --workspace_status_command=c:/path/to/command.bat
And you'd build like so:
bazel build --config=linux //path/to:target
or:
bazel build --config=windows //path/to:target
You have to be careful not to mix semantically conflicting --config flags (Bazel doesn't prevent you from doing that). Though it will work, the results may be unpredictable when the configs tinker with the same flags.
Passing --config to all commands is tricky; it depends on developers remembering to do this, or on controlling the places where Bazel is called.
I think a better answer would be to teach the version control system how to produce the values, like by putting a git-bazel-stamp script on the $PATH/%PATH% so that git bazel-stamp works.
Then we need workspace_status_command to allow commands from the PATH rather than a path on disk.
The proper way to do this is to wrap cc_library with a custom macro and pass the hardcoded flags to copts. For a full reference, look at envoy_library.bzl.
In short, your steps:
Define a macro to wrap cc_library:
def my_cc_library(
        name,
        copts = [],
        **kwargs):
    cc_library(
        name = name,
        copts = copts + my_flags(),
        **kwargs
    )
Define the my_flags() macro as follows:
config_setting(
    name = "windows_x86_64",
    values = {"cpu": "x64_windows"},
)

config_setting(
    name = "linux_k8",
    values = {"cpu": "k8"},
)

def my_flags():
    x64_windows_options = ["/W4"]
    k8_options = ["-Wall"]
    return select({
        ":windows_x86_64": x64_windows_options,
        ":linux_k8": k8_options,
        "//conditions:default": [],
    })
How it works:
Depending on the --cpu flag value, my_flags() returns different flags.
This value is resolved automatically based on the platform: on Windows it's x64_windows, and on Linux it's k8.
Your my_cc_library macro will then supply these flags to every target in the project.
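Usage then looks like a normal cc_library; a sketch, assuming the macro lives in a hypothetical //bazel:my_cc_library.bzl:
load("//bazel:my_cc_library.bzl", "my_cc_library")

my_cc_library(
    name = "mylib",
    srcs = ["mylib.cc"],
    hdrs = ["mylib.h"],
)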
A better way of doing this was added sometime in 2019, after this question was asked.
If you add common --enable_platform_specific_config to your .bazelrc, then --config=windows will automatically apply on Windows hosts, --config=macos on macOS, --config=linux on Linux, and so on.
You can then add lines to your .bazelrc like:
build:windows --windows-flags
build:linux --linux-flags
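Applied to the original question, the .bazelrc could look roughly like this:
common --enable_platform_specific_config
build:windows --workspace_status_command=status_command.bat
build:linux --workspace_status_command=status_command.sh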
There is one downside, though: this works based on the host rather than the target. So if you're cross-compiling, e.g. to mobile, and want different flags there, you'll have to go with a solution like Envoy's (see the other answer), or (probably better) add transitions into your build graph. (See the discussions here and here. "Flagless builds" are still under development, but there are usable hacks in the meantime.) You could also use the temporary platform_mappings API.
References:
Commit that added this functionality.
Where it appears in the Bazel docs.
