What is the role of "py_binary" in Bazel?

Does py_binary ultimately generate an executable file, or just an alias for a Python script? What are its benefits? If it produces a real executable, doesn't that defeat the point of Python?

Making something executable can be as simple as running chmod +x and slapping a #!/foo/bar line on top; the thing itself is still whatever interpreted code it was before.
In the case of Bazel, py_binary adds a wrapper script that sets up an execution environment before dispatching to the Python code. Consider e.g. Bazel's runfiles, but also dependencies on other py_library targets.
In addition, you can use the target in places where another target requires an executable as an attribute. A single Python file doesn't have any dependencies Bazel knows about, so it would technically fit there, but it would not integrate well with Bazel.
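A minimal sketch of how that looks in a BUILD file (the target names and files here are made up for illustration):

py_library(
    name = "greeting",
    srcs = ["greeting.py"],
)

py_binary(
    name = "hello",
    srcs = ["hello.py"],
    deps = [":greeting"],  # dependencies Bazel wires into the runfiles tree
)

Running bazel run //:hello executes the wrapper script, which sets up the runfiles and sys.path before handing control to hello.py.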

Related

Bazel rules with unknown output filenames

I have a command that compiles and runs a program, but the intermediate files are randomly named (but contained within a directory). E.g.
build foo.src bar.src -o output_dir
run output_dir
Bazel requires me to pre-declare all of the outputs of my rule, but I can't do that because they're randomly named. Can I somehow name an entire directory instead?
The only alternative I can think of is having the rule zip/unzip the directory before/after it runs the commands, which is a pretty awful solution.
Edit: I found an issue exactly describing the "just zip/unzip everything" solution here. The closing comment says to just use the rules from rules_pkg to zip/unzip stuff. Unfortunately it requires Python too.
Some of the comments in that thread suggest you can use declare_directory() but I don't think that really works.
There are tree artifacts. An example of how to use a tree artifact can be found here.
Tree artifacts are problematic for caching, since Bazel is not aware of the content of the corresponding directory. If for some reason the content of a tree artifact differs between two machines that use the same Bazel cache and the same Bazel configuration, you are in trouble.
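A minimal sketch of a rule that declares a whole directory as its output; the build command and the attribute names are placeholders for whatever your compiler actually is:

def _my_compile_impl(ctx):
    # declare_directory creates a tree artifact: one declared output
    # that covers an entire directory of randomly named files
    out_dir = ctx.actions.declare_directory(ctx.label.name + "_out")
    ctx.actions.run_shell(
        inputs = ctx.files.srcs,
        outputs = [out_dir],
        command = "build {srcs} -o {out}".format(
            srcs = " ".join([f.path for f in ctx.files.srcs]),
            out = out_dir.path,
        ),
    )
    return [DefaultInfo(files = depset([out_dir]))]

my_compile = rule(
    implementation = _my_compile_impl,
    attrs = {"srcs": attr.label_list(allow_files = True)},
)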

Access Cargo features *inside* the build script

How is it possible to access which features the package is being built with, inside the build.rs script? There is an incredibly expensive step in the script which is only needed for a particular Cargo feature, but I can't see any way to access the configured features inside the build script.
Is there any way to read whether or not a given feature is enabled in the build.rs script?
I haven't been able to find this spelled out in one place in the documentation, but there is a reliable mechanism.
Note that Cargo does not pass the package's features as cfg flags when compiling the build.rs script itself, so the standard checks such as cfg! and #[cfg(feature = "...")] (see https://doc.rust-lang.org/reference/conditional-compilation.html and How do I use conditional compilation with `cfg` and Cargo?) work in your main source files but do not reflect the enabled features inside build.rs. Instead, Cargo exposes the features to the build script through environment variables.
Cargo sets a number of environment variables when the build scripts are run:
https://doc.rust-lang.org/cargo/reference/environment-variables.html#environment-variables-cargo-sets-for-build-scripts
Including an environment variable for each feature:
CARGO_FEATURE_<name> — For each activated feature of the package being built, this environment variable will be present where <name> is the name of the feature uppercased and having - translated to _.
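For example, a build.rs along these lines (a minimal sketch; the feature name "expensive" is made up):

// build.rs
use std::env;

fn main() {
    // Cargo sets CARGO_FEATURE_<NAME> for each activated feature of the
    // package being built, uppercased and with `-` replaced by `_`.
    if env::var_os("CARGO_FEATURE_EXPENSIVE").is_some() {
        // run the incredibly expensive step only when the
        // hypothetical `expensive` feature is enabled
        println!("cargo:warning=expensive step enabled");
    }
}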

How do I debug an annotation processor in a bazel java_library rule?

I have added an annotation processor as a java_plugin and have added this into the plugins section of my java_library rule. I was wondering what are the bazel options to step through the annotation processor code and the javac compiler's code?
One way to do this is to run bazel build with --subcommands. Bazel will then print out all the commands it executes during a build. You can then find the javac invocation you're interested in, copy the command line (including the cd part so you're in the correct directory), modify the command line to include the debugging options, and run it manually. Then you can debug it like you would any java program.
One thing to note is that bazel will print only the commands that it actually runs in that build, so if the action you're interested in is already up-to-date, you may have to delete one of its outputs (e.g. the jar output of that library) to get bazel to re-run the action.
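A rough sketch of the workflow, assuming a hypothetical //my/pkg:mylib target and the standard JDWP agent for remote debugging:

bazel build --subcommands //my/pkg:mylib
# Find the javac invocation for :mylib in the output and cd to the
# directory it prints. If it is a java invocation (e.g. of JavaBuilder),
# re-run it by hand with the debug agent added right after `java`:
java -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005 \
  <rest of the copied command line>
# Then attach your IDE's remote debugger to port 5005 and set breakpoints
# in the annotation processor or in javac itself.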

How can I pass a specific macro to each compile in bazel?

Here's an easy version of the BUILD file:
cc_library(
    name = "ab",
    srcs = ['a.c', 'b.c', 'logger.h'],
)
logger.h contains the implementation of a logging function that uses the macro XOC_FILE_ID. XOC_FILE_ID has to contain the name of the source file.
Using __FILE__ instead would not help because __FILE__ expands to the string "logger.h" inside the file logger.h.
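For illustration, logger.h might contain something along these lines (a hypothetical sketch; the real header is not shown in the question):

/* logger.h - expects XOC_FILE_ID to be defined per translation unit */
#include <stdio.h>

static void log_msg(const char *msg)
{
    /* XOC_FILE_ID should expand to a string literal naming the source file */
    printf("[%s] %s\n", XOC_FILE_ID, msg);
}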
That's why I need to compile these files with different defines:
gcc -c [...] -DXOC_FILE_ID="a.c" a.c
gcc -c [...] -DXOC_FILE_ID="b.c" b.c
My failed approaches:
set the attribute local_defines using the value {source_file}: local_defines = ['XOC_FILE_ID="{source_file}"']: does not get replaced
set the attribute local_defines using the make variable $<: local_defines = ['XOC_FILE_ID="$<"']: Bazel aborts, telling me that $(<) is not defined
the same approaches for the attribute copts
Of course, I could try to make Bazel call a compiler wrapper script. However, this would mean that I have to explicitly set PATH to my wrapper script(s) before each call to Bazel. Isn't there a better solution?
You have access to {source_file} in a toolchain definition.
This means you have to write your own toolchain definition.
I tried two ways of writing a toolchain:
Following the Bazel tutorial on toolchains. Afterwards my build was broken: Bazel's default compile options were missing, and cc_library no longer created shared libraries.
Following a hint from a post on bazel-discuss: reuse the toolchain that Bazel itself creates from your environment. That's what I'm going to describe now (for Bazel 3.5.1).
If you want to use a compiler that is not in $PATH, run bazel clean and update $PATH to make the compiler available; Bazel will pick it up.
create a toolchain directory (maybe my-toolchain/) in your workspace
bazel build @bazel_tools//tools/cpp:toolchain
copy BUILD, all *.bzl files, cc_wrapper.sh and builtin_include_directory_paths from $(bazel info output_base)/external/local_config_cc/ to your toolchain directory; copy the files that the symbolic links point to instead of copying the symbolic links themselves
Adapt the BUILD file in my-toolchain/ to your needs, e.g. by adding '-DXOC_FILE_ID=\\"%{source_file}\\"' to the compile_flags of cc_toolchain_config (see the sketch after these steps).
add these lines to your .bazelrc to make Bazel use your new toolchain by default:
build:my-toolchain --crosstool_top=//my-toolchain:toolchain
build --config=my-toolchain
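A rough sketch of the relevant part of the copied BUILD file; everything except the added define is whatever Bazel generated for your machine:

cc_toolchain_config(
    name = "local",
    # ... keep all the generated attributes as copied ...
    compile_flags = [
        # ... keep the generated default flags, then append:
        '-DXOC_FILE_ID=\\"%{source_file}\\"',
    ],
)

The toolchain substitutes %{source_file} per compile action, so each translation unit gets its own value, which is exactly what the plain local_defines attribute could not do.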

How can I run custom tools from a premake build script?

I'm using protocol buffers for data serialization in my C++ application. I would like to add the invocation of the protoc code generator to my premake build script (thus ensuring the generated classes are up to date and avoiding the need to store generated source under version control).
Even their FAQ has a question and answer about this, but the answer is very incomplete for me. Having the ability to call any lua function is great, but where exactly do I put that call? I need to run the protoc compiler before building either the application or the unit tests.
You can certainly call outside code from Premake scripts. But remember: Premake scripts are used to generate build files: Makefiles, C++ projects, etc. The Premake script is run before building the project.
If you want this preprocessing step to run outside of the actual build files (and not be run by make, VC++, Code::Blocks, etc.), then it's easy: Lua's os.execute will execute a command line.
Premake scripts are still Lua scripts. All of the Premake commands are just Lua calls into functions that Premake defines. Premake executes the scripts, then uses the data from them to generate the build files. So all of your Lua code is run during the execution of the script. Where you put this command in your script is irrelevant; wherever it is, it will execute before your build files are generated.
And if you want to run the protoc step during the build (from VC++, makefile, etc.) you can set up a prebuild command. See http://industriousone.com/prebuildcommands for more info and an example.
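Both options in a short Lua sketch; the proto file and output directory are made up for illustration:

-- Option 1: run protoc while premake itself is processing the script,
-- i.e. before any build files are generated.
os.execute("protoc --cpp_out=generated src/messages.proto")

-- Option 2: have the generated project run protoc before every build.
project "MyApp"
    kind "ConsoleApp"
    language "C++"
    prebuildcommands {
        "protoc --cpp_out=generated src/messages.proto"
    }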
