What is the purpose of runfiles in Bazel?

I am trying to understand "runfiles" in Bazel.
What is the purpose of runfiles in Bazel?
What do runfiles achieve that could not be accomplished by placing files at a predictable relative path to an executable?
Why are there runfiles libraries for various target languages?

There are several reasons (see the sketch after this list for how runfiles are declared):
Hermetic tests: How should Bazel differentiate between random files lying around and files that are really needed/intended by a test?
Remote execution: If you use remote execution, it does not help to have the files in your local development environment - you need to move them to the remote executor.
Caching: How can Bazel cache test results if it does not know which input files a test uses?
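For illustration, a minimal sketch (target and file names are hypothetical) of how a test declares its runtime inputs. Everything in data becomes a runfile, which is exactly what Bazel sandboxes, ships to remote executors, and hashes for caching:

    # BUILD -- hypothetical names; the data attribute populates the test's runfiles
    py_test(
        name = "parser_test",
        srcs = ["parser_test.py"],
        data = ["//testdata:sample.json"],  # declared input: sandboxed, uploaded, cached
    )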

@Vertexwahn already gave a good answer on the purpose of runfiles; I'll just add on to this:
Why are there runfiles libraries for various target languages?
For multiplatform support (e.g. Windows runfiles are quite different), convenience, and less boilerplate. They're not strictly necessary because you can use your language's path and I/O APIs at runtime, depending on your environment. Bazel can be quite opinionated about the runfiles layout/structure, which is why some users find it useful to use the runfiles libs' APIs instead.
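To make that concrete, a minimal sketch of resolving a runfile with the Python runfiles library (the workspace name "my_workspace" and the file path are assumptions):

    # parser_test.py -- look up a data file via the runfiles library
    from bazel_tools.tools.python.runfiles import runfiles

    r = runfiles.Create()
    # Rlocation takes a workspace-rooted path, not a filesystem path
    path = r.Rlocation("my_workspace/testdata/sample.json")
    with open(path) as f:
        print(f.read())

The same lookup works unchanged whether the test runs locally, sandboxed, or on a remote executor, which is the portability these libraries exist to provide.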

Related

Bazel: how to disentangle the 81 ways to configure a C/C++ build

I'm writing a new C/C++ library with tasks. It's low level, so it'll need the ability to tune the build for CPU, O/S, whether libraries (or tasks) are built opt or dbg, and which network library is used (e.g. generic TCP, Solarflare, Mellanox, Infiniband). This seems like an ideal Bazel use-case.
I have basic Bazel working, e.g. I can build sample tasks and libraries with dependencies, and so far that works nicely.
Now, to the point: as the code is just C with some C++, it seems impractical to build a whole new toolchain. What Bazel ships with and/or can do with GCC/Clang is good enough; most defaults are fine. Can't I just customize it? Still, I need a simple way to accomplish the following:
allow developers to choose their compiler (typically clang or gcc) and its version
allow developers to decide how to build their code, e.g. x86-32 dbg libraries but x86-32 opt binaries, or 64-bit versions of the same
allow developers to select which network library to link in, and therefore build it but not the others (see the select() sketch after this list)
for dbg builds, developers may want to toss in an extra compilation flag and be assured it's used everywhere when building libraries and tasks
limit developers to valid configurations. For example, if I know the code works on Intel x86 (32- or 64-bit) but PPC processors aren't supported, then developers may have lots of dials to turn for x86, but it's desirable to stop with an error when cpu=ppc
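For the network-library requirement, config_setting plus select() is the usual mechanism; a minimal sketch, with all names and the --define value being assumptions:

    # BUILD -- pick exactly one network backend at build time
    config_setting(
        name = "use_solarflare",
        values = {"define": "netlib=solarflare"},
    )

    config_setting(
        name = "use_tcp",
        values = {"define": "netlib=tcp"},
    )

    cc_library(
        name = "net",
        deps = select({
            ":use_solarflare": [":solarflare_impl"],
            ":use_tcp": [":tcp_impl"],
            # no //conditions:default branch: any other configuration
            # (e.g. an unsupported CPU) fails with an error
        }),
    )

Built with, e.g., bazel build --define netlib=solarflare //:net. Omitting the default branch also gives the "stop with an error on invalid configurations" behaviour for free.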
Which basic approach is best?
allow Bazel to auto-discover the platform and toolchain, and instruct developers to modify it as needed? How?
provide a copy-and-pasted-and-edited custom C/C++ toolchain .bzl file?
Focus on CROSSTOOL only?
Ship the library with customized platform and toolchain files?
TIA

Migrating maven-shade-plugin rules to bazel

I am currently investigating Bazel as a tool to speed up Java builds. I have a somewhat complex build to handle, including shading of many libs.
This shading is performed today using maven-shade-plugin. I could not find a Bazel equivalent.
The solution should be able to:
aggregate multiple input jars
filter in/out files
specify which artifacts to include
parameterize relocations (!)
propose a mechanism equivalent to resource transformers https://maven.apache.org/plugins/maven-shade-plugin/examples/resource-transformers.html
If this is out of reach, I would be very interested in some generic way to specify some input, some output, and "something" to launch to produce the latter from the former.
Any java_binary has an implicit _deploy.jar output that contains all classes and is similar to the shaded jar:
name_deploy.jar: A Java archive suitable for deployment (only built if explicitly requested)
The deploy jar contains all the classes that would be found by a classloader that searched the classpath from the binary's wrapper script from beginning to end.
https://docs.bazel.build/versions/master/be/java.html#java_binary_implicit_outputs
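As a concrete sketch (target name and main class are hypothetical), the deploy jar is requested by appending _deploy.jar to the binary's label:

    # BUILD
    java_binary(
        name = "app",
        srcs = ["Main.java"],
        main_class = "com.example.Main",
    )

    # bazel build //:app_deploy.jar  -> one jar with all transitive classes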
But I don't think Bazel provides any of the other features that you are requesting.

Using large non-bazel dependencies in a bazel project

I would like to use a very large non-Bazel system in a Bazel project. Specifically, ROS2. This dependency provides a large number of Python, C, and C++ libraries which are built using its own hand-rolled build system. Obviously, I would like to avoid having to translate the entire build system over to Bazel.
Broadly, what's the best way of doing this? My instinct was to use a custom repository rule to download the source (since it's split across many repositories), then use a genrule to call the ROS2 build system, then write simple cc_import and py_library rules for each of the individual components that I need.
However, I'm having trouble with the bit where I need to call the foreign build system. It seems that genrules require a list of output files to be specified, while I would like it to make an entire build directory available.
Before I spend any more time on this, I thought I'd ask whether I'm on the right lines, since I'm new to Bazel. Is this a good strategy? How would you approach this problem? Are there any other projects that mainly use Bazel but call other build systems in this way that I can look at?
More recently, you can use rules_foreign_cc to build CMake or make/configure-style projects.
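For instance, wrapping one CMake-built component with rules_foreign_cc might look roughly like this (the repository name, the all_srcs filegroup, and the output library name are all assumptions):

    # BUILD -- sketch of wrapping a single CMake-built ROS2 library
    load("@rules_foreign_cc//foreign_cc:defs.bzl", "cmake")

    cmake(
        name = "rcl",
        lib_source = "@ros2_rcl//:all_srcs",  # filegroup over the fetched sources
        out_static_libs = ["librcl.a"],
    )

rules_foreign_cc tracks the foreign build's output directory for you, which sidesteps the "genrule needs every output listed" problem from the question.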

How to combine bazel aspects and cc_library

I want to build a rule that is very similar to cc_proto_library. The key features are that it would apply an aspect to all the transitive proto_library dependencies and generate .cc and .h files for all of the dependencies. In addition it would generate actions that would compile these into object files.
While I understand how I can do the file generation, I don't see how to easily do the object generation. The native module is not available to rule (or aspect) implementations, and I cannot use a macro on top of the aspect, as I need the object files to be generated in the same package as the proto_library so that they are generated only once.
cc_proto_library can do this, I believe, because it is not written in Skylark and thus has access to more primitives. Is there any way to do this with just Skylark?
This is unfortunately not currently possible. There is no Skylark API to the C++ rules/actions (what we call the C++ sandwich). We have plans to implement this in Q1 2018. There are many tracking issues; this one looks the most relevant: https://github.com/bazelbuild/bazel/issues/2163.
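For reference, the traversal half was already expressible in Skylark; a bare-bones sketch with entirely hypothetical names, using a trivial write action where real code generation would invoke protoc:

    VisitedInfo = provider(fields = ["files"])

    def _visit_impl(target, ctx):
        # one generated file per visited proto_library
        out = ctx.actions.declare_file(target.label.name + ".visited")
        ctx.actions.write(out, "saw %s\n" % target.label)
        transitive = [dep[VisitedInfo].files for dep in ctx.rule.attr.deps]
        return [VisitedInfo(files = depset([out], transitive = transitive))]

    proto_visit_aspect = aspect(
        implementation = _visit_impl,
        attr_aspects = ["deps"],
    )

The missing piece at the time was the next step: compiling the generated .cc files into objects from Skylark, which is what the C++ sandwich API was meant to enable.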

Handling complex and large dependencies

Problem
I've been developing a game in C++ in my spare time, and I've opted to use Bazel as my build tool since I have never had a ton of luck (or fun) working with make or cmake. I also have dependencies in other languages (Python for some of the high-level scripting). I'm using glfw for basic window handling and high-level graphics support, and that works well enough, but now comes the problem: I'm uncertain how I should handle dependencies like glfw in a Bazel world.
For some of my dependencies (like gtest and fruit) I can just reference them in my WORKSPACE file and Bazel handles them automagically but glfw hasn't adopted Bazel. So all of this leads me to ask, what should I do about dependencies that don't use Bazel inside a Bazel project?
Current approach
For many of the simpler dependencies I have, I simply created a new_git_repository entry in my WORKSPACE file and created a BUILD file for the library (sketched below). This works great until you get to really complicated libraries like glfw that have a number of dependencies of their own.
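For simple cases, that pattern looks roughly like this (the URL, tag, and BUILD-file location are assumptions):

    # WORKSPACE
    load("@bazel_tools//tools/build_defs/repo:git.bzl", "new_git_repository")

    new_git_repository(
        name = "glfw",
        remote = "https://github.com/glfw/glfw.git",
        tag = "3.3.8",
        build_file = "//third_party:glfw.BUILD",
    )

The pain starts when glfw.BUILD has to model glfw's own dependency tree, as described next.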
When building glfw for a Linux machine running X11, you now have a dependency on X11, which means adding X11 to my Bazel setup. X11 comes with its own set of dependencies (the X11 libraries like Xcursor) and so on.
glfw also tries to provide basic joystick support, which is provided by default on Linux, which is great! Except that this is provided by the kernel, which means that the kernel is also a dependency of my project. Now, I shouldn't need anything more than the kernel headers, but this still seems like a lot to bring in.
Alternative Options
The reason I took the approach I've taken so far is to make the dependencies required to spin up a machine that can successfully build my game very minimal. In theory they just need a C/C++ compiler, Java 8, and Bazel and they're off to the races. This is great since it also means I can create a Docker container that has Bazel installed and do CI/CD really easily.
I could sacrifice this ease and just say that you need to have libraries like glfw installed before attempting to compile the game, but that brings back the whole "which version is installed and how is it configured" problem that Bazel is supposed to help solve.
Surely there is a simpler solution and I'm overthinking this?
If the glfw project has no BUILD files, then you have the following options:
Build glfw inside a genrule.
If glfw supports some other build system like make, you could create a genrule that runs the tool. This approach has obvious drawbacks, like the not-to-be-underestimated impracticality of having to declare all inputs of that genrule, but it'd be the simplest way of Bazel'izing glfw.
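A rough sketch of such a genrule (the repository name, the all_srcs filegroup, and cmake being available on the host are all assumptions; sandboxing can complicate this further):

    # BUILD -- run glfw's own CMake build inside a genrule
    genrule(
        name = "glfw_build",
        srcs = ["@glfw//:all_srcs"],  # every input must be declared here
        outs = ["libglfw3.a"],
        cmd = """
            cmake -S external/glfw -B $(RULEDIR)/glfw-build -DGLFW_BUILD_EXAMPLES=OFF
            cmake --build $(RULEDIR)/glfw-build --target glfw
            cp $(RULEDIR)/glfw-build/src/libglfw3.a $@
        """,
    )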
Pre-build glfw.o and check it into your source tree.
You can create a cc_library rule for it, and put the .o file in the srcs. Even though this solution is the least flexible of all because you not only restrict the target platform to whatever the .o was built for, but also make it harder to reproduce the whole build, the benefits are sometimes worth the costs.
I view this approach as a last resort. Even in Bazel's own source code there's one cc_library.srcs that includes a raw object file, because it was worth it, as the commit message of 92caf38 explains.
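A sketch of that wrapping rule (file names and link flags are hypothetical):

    # BUILD -- wrap a prebuilt, checked-in archive
    cc_library(
        name = "glfw",
        srcs = ["prebuilt/libglfw3.a"],  # built elsewhere, for one fixed platform
        hdrs = glob(["include/GLFW/*.h"]),
        includes = ["include"],
        linkopts = ["-lX11", "-lpthread", "-ldl"],
    )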
Require that glfw be installed.
You already considered this option. Some people may prefer this to the other approaches.
