We're starting to use gRPC and are currently using Bazel as our build tool. After an engineer pulls in updates to the proto definitions, they'll need to recompile the protos. Due to the structure of our repository, the proto compile targets are scattered across the repo.
The only option I'm seeing is to use a target naming convention so engineers just need to do something like bazel build //...:compile-proto. Are there other ways to make it easy for engineers to proto compile all updated proto definitions?
If you add a specific tag to each of them, you can use --build_tag_filters.
For example:
a_proto_library(
    name = "compile-proto",
    tags = ["a_proto"],
    [...]
)
and then:
bazel build --build_tag_filters=a_proto //...
You can also wrap the rule in a macro to add the tag automatically.
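For example, a minimal sketch (assuming a_proto_library is your real proto rule, loaded into the .bzl file that defines the macro):

def tagged_proto_library(name, tags = [], **kwargs):
    a_proto_library(
        name = name,
        # Make sure the filter tag is always present.
        tags = tags + ["a_proto"],
        **kwargs
    )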
I don't think //...:compile-proto is a valid target pattern, so unfortunately I don't think that would work (not that you necessarily want to rely on naming conventions anyway). See https://docs.bazel.build/versions/main/guide.html#specifying-targets-to-build
One option is to let bazel do all the updating for you. If you're already doing builds like bazel build //... to build everything, then once you pull in updates to proto definitions, another bazel build //... should rebuild only what has changed.
Another option is to find all rules using bazel query:
https://docs.bazel.build/versions/main/query.html
https://docs.bazel.build/versions/main/query-how-to.html
https://docs.bazel.build/versions/main/query.html#kind
Something like:
targets=$(bazel query "kind('java_proto_library', //...)")
bazel build $targets
Note that query with //... will load every build file in the workspace, but not build anything.
Related
I've set up my bazel crosstool so that I can specifically select the compiler that I want: gcc9, gcc10, ..., clang12, clang13... This works great.
bazel build --compiler=clang13 //:target
I'm scratching my head wondering how I achieve this with platforms! It seems to want to select whatever compiler you specify for the given platform, and if you want to change it, you have to edit the file!
In particular, if I want my compiler to be used by dependencies, whatever I do needs to be compatible with, for example, absl, and grpc.
Is there any way to coerce toolchain selection via --config, --define, or other flags?
# In WORKSPACE
register_toolchains(
    "//toolchains:gcc12",
    "//toolchains:clang13",
    "//toolchains:clang14",
    ...
)

# But how do I tell it that I want clang13, or clang14???
bazel build --platforms=linux_x86 //:target
Here are two ideas that could help you:
do not use register_toolchains() to make all toolchains known to Bazel; instead use --extra_toolchains (https://bazel.build/reference/command-line-reference#flag--extra_toolchains), maybe set via a --config in the .bazelrc (see the sketch after the platform example below). This way Bazel only knows about one compiler toolchain available for resolution. Of course, with this approach you can't use different compiler toolchains for different targets.
make use of constraint_setting()s. https://bazel.build/configure/windows#clang shows how this is done:
platform(
    name = "x64_windows-clang-cl",
    constraint_values = [
        "@platforms//cpu:x86_64",
        "@platforms//os:windows",
        "@bazel_tools//tools/cpp:clang-cl",
    ],
)
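For the first idea, the .bazelrc could look roughly like this (a sketch, reusing the toolchain labels from your register_toolchains() snippet):

# .bazelrc
build:gcc12 --extra_toolchains=//toolchains:gcc12
build:clang13 --extra_toolchains=//toolchains:clang13
build:clang14 --extra_toolchains=//toolchains:clang14

and then:

bazel build --config=clang13 //:target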
I have a repo which uses bazel to build a bunch of Python code. I would like to introduce various flavors of static analysis into the build and have the build fail if these static analyses throw errors. What is the best way to do this?
For example, I'd like to declare something like:
py_library_with_static_analysis(
    name = "foo",
    srcs = ["foo.py"],
)

py_library_with_static_analysis(
    name = "bar",
    srcs = ["bar.py"],
    deps = [":foo"],
)
in a BUILD file and have it error out if there are mypy/flake8/etc. errors in foo.py. I would like to be able to do this gradually, converting libraries/binaries to static analysis one target at a time. I'm not sure if I should do this via a new rule, a macro, an aspect, or something else.
Essentially, I think I'm asking how to run an additional command while building a py_binary/py_library and fail if that command fails.
I could create my own version of a py_library rule and have it run static analysis within the implementation but that seems like something which is really easy to get wrong (my guess is that native.py_library is quite complex?) and there doesn't seem to be a way to instantiate a native.py_library within a custom rule.
I've also played around with macros a bit, but haven't been able to get that to work either. I think my issue there is that a macro doesn't actually specify new commands, only new targets, and I can't figure out how to make the static analysis target be force-built along with the py_library/py_binary I'm interested in.
A macro that adds implicit test targets is not such a bad idea: The test targets will be picked up automatically when you run bazel test //..., which you could do in a gating CI to prevent imperfect code from merging.
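A rough sketch of such a macro (the run_lint.sh script is hypothetical; you'd write one that invokes mypy/flake8 on the files passed as arguments):

# lint.bzl
def py_library_with_static_analysis(name, srcs = [], **kwargs):
    native.py_library(name = name, srcs = srcs, **kwargs)

    # Implicit test target; `bazel test //...` picks it up automatically.
    native.sh_test(
        name = name + "_lint_test",
        srcs = ["//tools:run_lint.sh"],  # hypothetical lint runner script
        data = srcs,
        args = ["$(location %s)" % s for s in srcs],
    )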
Bazel supports a BUILD prelude (which is underdocumented) that you could use to replace all py_binary, py_library, and even py_test with your test-adding wrapper macros with minimal changes to existing code.
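For reference, the prelude is a file at //tools/build_rules:prelude_bazel whose load statements are implicitly prepended to every BUILD file, so something like this would shadow the native rule everywhere (the load path is an assumption):

# tools/build_rules/prelude_bazel
load("//tools:lint.bzl", py_library = "py_library_with_static_analysis")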
If you somehow fail the build instead, it will make it harder to quickly prototype things. Sometimes you want to just quickly try something out, and you don't care about any pydoc violations yet.
In case you do want to fail the build, you might be able to use the Validations Output Group of a rule that you implement to wrap or replace your py_libraries.
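A rough sketch of that approach, as a separate rule you'd instantiate next to each py_library (it assumes mypy is available to the action; in a real setup you'd pass the linter in as a tool):

def _py_lint_impl(ctx):
    marker = ctx.actions.declare_file(ctx.label.name + ".lint_ok")
    ctx.actions.run_shell(
        inputs = ctx.files.srcs,
        outputs = [marker],
        command = "mypy %s && touch %s" % (
            " ".join([f.path for f in ctx.files.srcs]),
            marker.path,
        ),
    )
    # Outputs in the special `_validation` output group are built whenever
    # the target is built, so a failing lint action fails the build.
    return [OutputGroupInfo(_validation = depset([marker]))]

py_lint = rule(
    implementation = _py_lint_impl,
    attrs = {"srcs": attr.label_list(allow_files = [".py"])},
)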
Sometimes I see extensions being loaded from the internet, and sometimes built-in ones.
Canonical example:
load("#bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")
However, I cannot distinguish a local repo from a well-known one just by looking at the load expression.
How can I check the source (location) of any repo which I see in my WORKSPACE/BUILD files?
If the Bazel label is sufficient as a source, you might try fetching repo roots with BUILD files with bazel query 'buildfiles(//...)'.
Otherwise, you could run bazel clean --expunge and then run a build with --experimental_execution_log_file=<FILENAME>. This creates a protobuf-based log of the actions executed by Bazel. In there, all internet repos are downloaded anew because of the clean --expunge.
Check https://github.com/bazelbuild/bazel/tree/master/src/tools/execlog for a parser.
It is super inconvenient that this information is not available another way, as far as I know. I really hope someone swings by and corrects me, but this way you at least know the available sources you can correlate.
I'm new to Bazel, but as far as I understand:

1. Copy the name of the repo, e.g. io_bazel_rules_docker.
2. Search for it through the codebase.
3. Look at how it's being loaded. E.g. if you see

http_archive(
    name = "io_bazel_rules_docker",
    ...
)

or

http_file(
    name = "io_bazel_rules_docker",
    ...
)

then you can conclude where it's coming from.
bazel query --output=build //external:repo_name works just fine.
I was wondering if it's possible to set platform-specific default Bazel build flags.
For example, we want to use --workspace_status_command, but this must be a shell script on Linux and must point to a batch script on Windows.
Is there a way we can write in the tools/bazel.rc file something like...
if platform=WINDOWS build: --workspace_status_command=status_command.bat
if platform=LINUX build: --workspace_status_command=status_command.sh
We could generate a .bazelrc file by having the users run a script before building, but it would be cleaner/nicer if this was not necessary.
Yes, kind of. You can specify config-specific bazelrc entries, which you can select by passing --config=<configname>.
For example your bazelrc could look like:
build:linux --cpu=k8
build:linux --workspace_status_command=/path/to/command.sh
build:windows --cpu=x64_windows
build:windows --workspace_status_command=c:/path/to/command.bat
And you'd build like so:
bazel build --config=linux //path/to:target
or:
bazel build --config=windows //path/to:target
You have to be careful not to mix semantically conflicting --config flags (Bazel doesn't prevent you from doing that). Though it will work, the results may be unpredictable when the configs tinker with the same flags.
Passing --config to all commands is tricky: it depends on developers remembering to do this, or on controlling the places where Bazel is called.
I think a better answer would be to teach the version control system how to produce the values, like by putting a git-bazel-stamp script on the $PATH/%PATH% so that git bazel-stamp works.
Then we need workspace_status_command to allow commands from the PATH rather than a path on disk.
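For reference, a workspace status command just writes KEY VALUE lines to stdout, so such a script could be as small as this sketch:

#!/bin/sh
# Keys with a STABLE_ prefix go into stable-status.txt and
# re-stamp targets whenever their values change.
echo "STABLE_GIT_COMMIT $(git rev-parse HEAD)"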
The proper way to do this is to wrap your cc_library with a custom macro and pass hardcoded flags to copts. For a full reference, look at envoy_library.bzl.
In short, your steps:
Define a macro to wrap cc_library:
def my_cc_library(
        name,
        copts = [],
        **kwargs):
    cc_library(
        name = name,
        copts = copts + my_flags(),
        **kwargs
    )
Define the my_flags() macro as follows (the config_setting targets go in a BUILD file; the function goes in a .bzl file):
config_setting(
    name = "windows_x86_64",
    values = {"cpu": "x64_windows"},
)

config_setting(
    name = "linux_k8",
    values = {"cpu": "k8"},
)

def my_flags():
    x64_windows_options = ["/W4"]
    k8_options = ["-Wall"]
    return select({
        ":windows_x86_64": x64_windows_options,
        ":linux_k8": k8_options,
        "//conditions:default": [],
    })
How it works:
Depending on the --cpu flag value, my_flags() will return different flags. The value is resolved automatically based on the platform: on Windows it's x64_windows, and on Linux it's k8. Then your macro my_cc_library will supply these flags to every target in the project.
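Usage in a BUILD file then looks like any other cc_library (the load path here is an assumption):

load("//tools:my_cc_library.bzl", "my_cc_library")

my_cc_library(
    name = "mylib",
    srcs = ["mylib.cc"],
)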
A better way of doing this was added sometime in 2019, after this question was asked.
If you add
common --enable_platform_specific_config
to your .bazelrc, then --config=windows will automatically apply on Windows hosts, --config=macos on macOS, --config=linux on Linux, etc.
You can then add lines to your .bazelrc like:
build:windows --windows-flags
build:linux --linux-flags
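Applied to the workspace_status_command example from the question, the .bazelrc would be roughly:

common --enable_platform_specific_config
build:windows --workspace_status_command=status_command.bat
build:linux --workspace_status_command=status_command.sh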
There is one downside, though. This works based on the host rather than the target. So if you're cross-compiling, e.g. to mobile, and want different flags there, you'll have to go with a solution like envoy's (see other answer), or (probably better) add transitions into your graph targets. (See discussion here and here. "Flagless builds" are still under development, but there are usable hacks in the meantime.) You could also use the temporary platform_mappings API.
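For the platform_mappings route, a rough sketch of the file (it lives at the workspace root; the platform label and flag are placeholders):

platforms:
  //platforms:ios_arm64
    --cpu=ios_arm64

flags:
  --cpu=ios_arm64
    //platforms:ios_arm64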
References:
Commit that added this functionality.
Where it appears in the Bazel docs.
I have the following maven_jar in my workspace:
maven_jar(
    name = "com_google_code_findbugs_jsr305",
    artifact = "com.google.code.findbugs:jsr305:3.0.1",
    sha1 = "f7be08ec23c21485b9b5a1cf1654c2ec8c58168d",
)
In my project I reference it through @com_google_code_findbugs_jsr305//jar. However, I now want to depend on a third-party library that references @com_google_code_findbugs_jsr305 without the jar target.
I tried looking into both bind and alias, however alias cannot be applied inside the WORKSPACE and bind doesn't seem to allow you to define targets as external repositories.
I could rename the version I use so it doesn't conflict, but that feels like the wrong solution.
IIUC, your code needs to depend on both @com_google_code_findbugs_jsr305//jar and @com_google_code_findbugs_jsr305//:com_google_code_findbugs_jsr305. Unfortunately, there isn't any pre-built rule that generates BUILD files for both of those targets, so you basically have to define the BUILD files yourself. Fortunately, @jart has written most of it for you in the closure rule you linked to. You just need to add //jar:jar by appending a couple of lines; after line 69, add something like:
repository_ctx.file(
    'jar/BUILD',
    "\n".join([
        "package(default_visibility = ['//visibility:public'])",
    ] + _make_java_import('jar', '//:com_google_code_findbugs_jsr305.jar')),
)
This creates a //jar:jar (or equivalently, //jar) target in the repository.