Specify "--build_python_zip" flag within Bazel py_binary rule - bazel

Is it possible to specify the bazel "--build_python_zip" flag from within the py_binary rule so that I don't need to add this flag every time I use Bazel in my workspace?

It doesn't seem that there's a way to specify this per py_binary target, according to this issue.
However, you can use a bazelrc file to store common options. Add the following line to <workspace>/.bazelrc:

    build --build_python_zip

Note that using this method, every invocation of bazel build, bazel test and bazel run will include this flag.
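If you would rather not apply the flag to every build, test, and run, a named config in the same .bazelrc lets you opt in per invocation (the config name "zip" below is just an example, not something Bazel defines):

    build:zip --build_python_zip

Then only invocations that pass --config=zip pick up the flag (the target label is a placeholder):

    bazel build --config=zip //your/package:your_binary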

After PR 9453, you can:

    filegroup(
        name = "foo_zip",
        srcs = [":foo_binary"],
        output_group = "python_zip_file",
    )

    py_binary(
        name = "foo_binary",
        srcs = ["foo.py"],
    )

Then you can invoke bazel as:

    bazel build :foo_zip

and you don't have to specify --build_python_zip. This also allows par_binary rules to co-exist with native zips.

Related

With Bazel how do I make part of one genrule's source files (e.g. header files) available to another genrule?

Maybe this is a no-brainer and I just didn't get the concept yet.
I have a genrule, basically wrapping an existing make/config workflow to integrate it into a Bazel-based build configuration. In my example I'd like to build openssl, and then (with the same approach) some library depending on openssl, say xmlsec1.
My (shortened) rule for openssl looks like this:

    genrule(
        name = "build",
        visibility = ["//visibility:public"],
        srcs = glob(["**/*"], exclude = ["bazel-*"]),
        outs = [
            "libssl.a",
            "libcrypto.a",
            "include/openssl/opensslconf.h",
        ],
        cmd = """
            OUT_DIR="$$(realpath $(RULEDIR))"
            pushd "$$(dirname $(location config))"
            ./config
            make
            make -j6 DESTDIR="$$OUT_DIR" install_sw install_ssldirs
        """,
    )

This builds fine and $OUT_DIR contains all the files I need to build against openssl.
I'd now like to create another genrule building xmlsec1, which needs the path to openssl's header files.
Now if I want to access a header, say include/openssl/opensslv.h, it won't be part of @openssl//:build's artifacts, since I didn't explicitly list it in outs. But doing so results in:

    ERROR: Traceback (most recent call last):
            File "/bla/blubb/.cache/bazel/_bazel_me/f68917ddf601b6533d6db04f8101d580/external/openssl/BUILD.bazel", line 37, column 8, in <toplevel>
                    genrule(
    Error in genrule: rule 'build' has file 'include/openssl/opensslv.h' as both an input and an output

which is correct of course, but what can I do about it?
Removing those header files from srcs doesn't work either, since they wouldn't be available at build time.
One way would be to make install openssl into some destination directory, list each of the dozens of header files explicitly, and use that prefix in all dependent projects. But that doesn't feel right.
What's the recommended way to pass lists of files from one genrule to another?
xmlsec1 could have include/openssl/opensslv.h in its own srcs directly. The build genrule shouldn't really need include/openssl/opensslv.h in its outs, both because that would be a circular dependency, as bazel said, and because the genrule doesn't really build that file: it already exists on disk (I assume it's getting captured by the glob()).
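A rough, untested sketch of what that could look like, assuming the openssl repository makes the header visible (e.g. via exports_files) and with the actual xmlsec1 build command left as a placeholder:

    genrule(
        name = "build_xmlsec1",
        srcs = [
            # outputs of the openssl genrule: libssl.a, libcrypto.a, opensslconf.h
            "@openssl//:build",
            # a plain source file from the openssl repository, listed directly
            "@openssl//:include/openssl/opensslv.h",
        ],
        outs = ["libxmlsec1.a"],
        # placeholder command; a real rule would run xmlsec1's configure/make
        # against the directories containing the files listed above
        cmd = "./configure && make",
    )
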
There may be a nicer way to organize the library though, something like this:

    genrule(
        name = "build_openssl",
        visibility = ["//visibility:private"],
        outs = [
            "libssl.a",
            "libcrypto.a",
            "include/openssl/opensslconf.h",
        ],
        .....,
    )

    cc_library(
        name = "openssl",
        srcs = [":build_openssl"],
        hdrs = [
            "include/openssl/opensslv.h",
            # other headers that openssl should provide
        ],
    )

then your other rules can depend on the openssl cc_library and get both the .a files and the header files. (I have not tested this though)
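For example, something along these lines (again untested; the xmlsec1 target and its source file are placeholders):

    cc_library(
        name = "xmlsec1",
        srcs = ["xmlsec1.c"],           # placeholder source
        deps = ["@openssl//:openssl"],  # provides both the .a files and the headers
        # depending on the include layout, the openssl cc_library may also need
        # includes/strip_include_prefix so '#include <openssl/opensslv.h>' resolves
    )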

How to fail a Bazel build on a rule failure?

I am using Bazel rules in NodeJS in my application. The aim is to simply lint a set of files and fail the build if linting fails. What I'm currently experiencing is that the build is successful despite lint errors.
Here's a part of my BUILD file:
load("#npm//htmlhint:index.bzl", "htmlhint")
filegroup(
name = "htmldata",
srcs = glob(["**/*.html"]),
)
htmlhint(
name = "compile",
data = [
"htmlhint.conf",
"//:htmldata"
],
args = [
"--config",
"htmlhint.conf",
"$(locations //:htmldata)"
]
)
I first load the hinting library, then I define a filegroup for all the HTML files that I want to lint. Afterward, I use the rule with its data and arguments.
To run the build, I use the default command via an npm script: bazel build //...
Your build file is working as expected. Unfortunately it doesn't do what you want, because the macro loaded from @npm//htmlhint:index.bzl sets up a nodejs binary, which is a runnable target: building it only creates the runfiles and the executable, so the build never actually runs the linter.
There are several options to do what you want:
1. Use the htmlhint_test macro to create a test target.
2. Create a custom rule that uses the nodejs binary to build some artefacts, so a lint failure can fail the build (see the sketch at the end of this answer).
However, I suggest the first approach: since htmlhint is a linting tool, it won't produce any meaningful outputs, and it is best kept as part of the test suite.
Here's what you need to do to set up the compile target as a test target:

    diff --git a/BUILD.bazel b/BUILD.bazel
    index 4e58ac5..3db5dbb 100644
    --- a/BUILD.bazel
    +++ b/BUILD.bazel
    @@ -1,11 +1,11 @@
    -load("@npm//htmlhint:index.bzl", "htmlhint")
    +load("@npm//htmlhint:index.bzl", "htmlhint_test")
     filegroup(
         name = "htmldata",
         srcs = glob(["**/*.html"]),
     )
    -htmlhint(
    +htmlhint_test(
         name = "compile",
         data = [
             "htmlhint.conf",

Then you can check it with bazel test //....
If you want to see the output, just run your compile target with bazel run //path/to:compile.
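For completeness, option 2 would look roughly like the sketch below: a small Starlark rule that runs the linter as a build action and only creates its (empty) output file when the linter exits successfully, so a lint error fails a plain bazel build. Everything here is hypothetical and not part of rules_nodejs: the file name lint_check.bzl, the rule name htmlhint_check, and its attribute names are made up.

    # lint_check.bzl (hypothetical file)
    def _htmlhint_check_impl(ctx):
        # The marker file is only created if the linter exits with status 0,
        # so a lint failure fails this build action.
        marker = ctx.actions.declare_file(ctx.label.name + ".ok")
        args = ctx.actions.args()
        args.add("--config", ctx.file.config)
        args.add_all(ctx.files.srcs)
        ctx.actions.run_shell(
            inputs = ctx.files.srcs + [ctx.file.config],
            tools = [ctx.executable.linter],
            outputs = [marker],
            arguments = [args],
            command = '"{linter}" "$@" && touch "{marker}"'.format(
                linter = ctx.executable.linter.path,
                marker = marker.path,
            ),
        )
        return [DefaultInfo(files = depset([marker]))]

    htmlhint_check = rule(
        implementation = _htmlhint_check_impl,
        attrs = {
            "linter": attr.label(executable = True, cfg = "exec", mandatory = True),
            "srcs": attr.label_list(allow_files = True),
            "config": attr.label(allow_single_file = True),
        },
    )

A BUILD file would then instantiate htmlhint_check with the htmlhint binary target as linter and the htmldata filegroup as srcs, and an ordinary bazel build //... would fail whenever linting fails.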

Add Go Test binary to container_image using Bazel

I am building a go test package that I would like to have included in a Docker image. The binary can be built using bazel build //testing/e2e:e2e_test. This creates a binary in the bazel-bin folder. Now I would like to take this binary and add it to a docker image...

    container_image(
        name = "image",
        base = "@alpine_linux_amd64//image",
        entrypoint = ["/e2e_test"],
        files = [":e2e_test"],
    )

This gives me the following error:

    ERROR: ...BUILD.bazel:27:16: in container_image_ rule //testing/e2e:image: non-test target '//testing/e2e:image' depends on testonly target '//testing/e2e:e2e_test' and doesn't have testonly attribute set
    ERROR: Analysis of target '//testing/e2e:image' failed; build aborted: Analysis of target '//testing/e2e:image' failed

Ultimately, what I am trying to accomplish is creating an application that contains a suite of end-to-end tests using the Go Test framework. I can then distribute these tests in a docker container to be run in a testing environment.
The error is saying that a non-testonly target depends on a testonly one, which the docs for testonly say isn't allowed:

    If True, only testonly targets (such as tests) can depend on this target.
    Equivalently, a rule that is not testonly is not allowed to depend on any rule that is testonly.

You can do what you're looking for by making your target testonly like this:

    container_image(
        name = "image",
        testonly = True,
        base = "@alpine_linux_amd64//image",
        entrypoint = ["/e2e_test"],
        files = [":e2e_test"],
    )

Why does bazel fail with header not under the specified strip prefix

I am using bazel 3.7.2 (the same project works OK with bazel 3.3.1).
In my build file I use:

    cc_library(
        name = "xft",
        hdrs = ["@nixpkgs_xft//:include"],
        strip_include_prefix = "/external/nixpkgs_xft/include",
    )

Running bazel build on this target, bazel complains:

    BUILD:71:11: in cc_library rule //target:xft: header 'external/nixpkgs_xft/include/X11/Xft/Xft.h' is not under the specified strip prefix 'external/nixpkgs_xft/include'

Somehow bazel has a different understanding of the header's prefix... How is this supposed to work? The header certainly looks like it is under the prefix:

    external/nixpkgs_xft/include/X11/Xft/Xft.h
    external/nixpkgs_xft/include

OK, the error message is really confusing. I looked through the code and found that what gets compared against the prefix is not the path the error prints: the header's repository-relative path is compared to the prefix, while the error message prints the exec path.
It seems that in newer Bazel you no longer need to include the external/foo part of the path, so this works:

    cc_library(
        name = "xft",
        hdrs = ["@nixpkgs_xft//:include"],
        strip_include_prefix = "/include",
    )

Bazel cc_library dependency on other cc_library when each compile with a different crosstool

I have a code generator tool that generates C/C++ code. This code generator tool is compiled with crosstool1. The generated C/C++ code needs to be compiled with crosstool2.
So the actions are:
1. Using Crosstool1, compile 'code_generator'.
2. Execute 'code_generator' and generate 'generated_code.cpp'.
3. Using Crosstool2, compile 'generated_code.cpp'.
Is it possible to make a cc_library() determine the crosstool to use? I saw that Skylark rules now allow a 'toolchains' parameter, but I'm not sure how it is used, and I do not want to do the heavy lifting of bare-bones C/C++ compilation in Skylark.
Is there an example of using a proper host crosstool and target crosstool other than the TensorFlow example? I get a headache each time I read it :D
Assume //crosstool1:toolchain is the label of a cc_toolchain_suite rule describing the first crosstool, //crosstool2:toolchain is the label of a cc_toolchain_suite for the second, and the build file for the project is:

    cc_binary(
        name = "generator",
        srcs = ["main.cc"],
    )

    genrule(
        name = "generate",
        outs = ["generated.cc"],
        cmd = "$(location :generator) > $@",
        tools = [":generator"],
    )

    cc_binary(
        name = "generated",
        srcs = ["generated.cc"],
    )

Then running:

    bazel build --host_crosstool_top=//crosstool1:toolchain --crosstool_top=//crosstool2:toolchain :generated

will do exactly what you describe: it will use crosstool1 to build :generator and crosstool2 to build :generated. Genrule tools are built in the host configuration by default, so this should just work.
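If you don't want to type both flags on every invocation, they can also be put behind a named config in your .bazelrc (the config name "gen" here is arbitrary):

    build:gen --host_crosstool_top=//crosstool1:toolchain
    build:gen --crosstool_top=//crosstool2:toolchain

and then:

    bazel build --config=gen :generated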
