(From https://groups.google.com/d/msg/bazel-discuss/LQfL6c-6Wqg/uinZMCTYCgAJ)
Hi--
Is it possible to use Bazel to cross-compile using a toolchain where the compiler flags are not remotely gcc-like?
For example, Bazel seems to want/need to pass -MD -MF foo.d, but the toolchain I have doesn't support these flags, and I don't know of a way to filter them out of the compile invocation.
The only thing I can think of is to point the CROSSTOOL at some wrapper scripts to muck with all the arguments.
--Rob
Ideally, CROSSTOOL would encapsulate all the toolchain/platform-specific flags, and Bazel wouldn't hardcode any flags specific to gcc/Linux. We're getting there, although at a much slower pace than expected (it's quite a painful process).
So you should be able to write your own crosstool (or generate one similarly to how Bazel does it) that does not emit -MD -MF foo.d. Since we're in the process of migrating many internal crosstools, Bazel tries to be smart and will add features that your crosstool is missing. Check CppConfiguration.java and CppLinkActionConfigs.java for these "patches".
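For illustration, here is roughly what that looks like with the newer Starlark toolchain configuration that has since replaced the CROSSTOOL proto: declaring the dependency_file feature yourself, disabled, should keep Bazel's compatibility patches from re-adding it. This is a minimal sketch; the file name, rule name, and all the identifier values are made up, and only the "dependency_file" feature name comes from Bazel itself.

    # toolchain_config.bzl (hypothetical file name)
    load(
        "@bazel_tools//tools/cpp:cc_toolchain_config_lib.bzl",
        "feature",
        "tool_path",
    )

    def _impl(ctx):
        # A present-but-disabled "dependency_file" feature means Bazel
        # sees the feature as already defined and won't patch in the
        # -MD -MF <file>.d flags.
        dependency_file = feature(name = "dependency_file", enabled = False)
        return cc_common.create_cc_toolchain_config_info(
            ctx = ctx,
            toolchain_identifier = "odd-toolchain",  # illustrative values
            host_system_name = "local",
            target_system_name = "local",
            target_cpu = "odd-cpu",
            target_libc = "unknown",
            compiler = "odd-cc",
            abi_version = "unknown",
            abi_libc_version = "unknown",
            tool_paths = [tool_path(name = "gcc", path = "/opt/odd/bin/cc")],
            features = [dependency_file],
        )

    odd_cc_toolchain_config = rule(
        implementation = _impl,
        attrs = {},
        provides = [CcToolchainConfigInfo],
    )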
And regarding wrapper scripts, that's what Bazel has been doing for MSVC builds: translating gcc-like command lines into cl.exe style. We are slowly removing logic from these scripts as the crosstool becomes more powerful (e.g., Bazel at HEAD no longer uses wrapper scripts for linking at all).
Related
I need to use Bazel to manage our source dependencies such that the final build product is purely a function of the toolchain, a vanishingly small number of files from the Linux distribution, and the source code itself. This means building things like libz, libssl, libcrypto, libcurl...
These dependencies depend on each other.
They have their own native (mostly autotools-based) build systems, driven by something like ./configure --prefix=foo && make -j && make install.
It seems to me that Bazel is not well suited to this use case. In particular, we need to manually recreate the make install step for each library in order to copy the make install artifacts out of the execroot. It's also unclear to me how the next dependency reuses those products. So, for example, when building zlib, we produce libz.a and a bunch of header files. Then, when building libcrypto.a, we need to modify CPPFLAGS and LDFLAGS to point at the zlib "installation".
This strikes me as so pedantic that it's begging for code generation to generate the BUILD files.
Is there an alternative approach that doesn't require copying the "make install" logic into a bespoke genrule for each library?
Take a look at rules_foreign_cc (https://github.com/bazelbuild/rules_foreign_cc). This contains rules for integrating with foreign build systems (make, autotools+make, cmake, etc.).
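To make that concrete, here is a hedged sketch of what the zlib -> libcrypto chain from the question might look like with rules_foreign_cc's configure_make rule. The load path matches recent releases of the rules; the repository labels (@zlib, @openssl) and attribute values are illustrative.

    # BUILD (illustrative)
    load("@rules_foreign_cc//foreign_cc:defs.bzl", "configure_make")

    # Runs zlib's ./configure && make && make install in a sandboxed
    # prefix and re-exports the installed headers/libraries to Bazel.
    configure_make(
        name = "zlib",
        lib_source = "@zlib//:all_srcs",
        out_static_libs = ["libz.a"],
    )

    # Depending on :zlib makes the rules set up the include and library
    # search paths (the CPPFLAGS/LDFLAGS plumbing from the question)
    # for OpenSSL's configure/make automatically.
    configure_make(
        name = "openssl",
        lib_source = "@openssl//:all_srcs",
        out_static_libs = ["libssl.a", "libcrypto.a"],
        deps = [":zlib"],
    )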
Does py_binary ultimately generate a real executable, or just an alias for a Python script? What is the benefit? If it produces a real executable, doesn't that defeat the point of using Python?
Making something executable can be as simple as adding a chmod +x and slapping a #!/foo/bar line on top; the thing itself is still whatever interpreted code it was before.
In the case of Bazel, it adds a wrapper script that sets up an execution environment before dispatching to the Python code. Consider e.g. Bazel's runfiles, but also other py_library targets.
In addition, you can use the target in places where an executable is required as an attribute of another target. A single Python file doesn't have any dependencies Bazel knows about, so it would technically fit there, but it wouldn't integrate well with Bazel.
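A minimal sketch of what this looks like in practice; the file and target names are hypothetical:

    # BUILD (illustrative)
    py_library(
        name = "greeting",
        srcs = ["greeting.py"],
    )

    # bazel build //:hello produces a stub script that sets up sys.path
    # for :greeting and the runfiles tree, then hands off to the
    # interpreter; hello.py itself stays ordinary Python source.
    py_binary(
        name = "hello",
        srcs = ["hello.py"],
        deps = [":greeting"],
    )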
I have a Bazel project (the new tcmalloc) I'm trying to integrate into a typical GNU Make project that uses its own build of compiler/libc++. The goal is to not fork the upstream project.
If I pass all the C++ options correctly to Bazel (one set of which is -nostdinc++ -I<path to libc++>), Bazel is unhappy: "The include path '/home/vlovich/myproject/deps/toolchain/libc++/trunk/include' references a path outside of the execution root." (tcmalloc is a git submodule sibling at deps/tcmalloc). It's possible to get this "working" by giving Bazel a custom script to invoke as the compiler that injects those flags so that Bazel never sees them. However, I'd like to just define a toolchain that works properly.
I've read all the documentation I could find on this topic but it's not clear to me how to glue all these docs together.
Specifically, it's not really clear where I should place the toolchain definition files or how to tell Bazel to find those definitions. Is there a way to give Bazel a directory that it uses to find toolchain definitions? Am I expected to create a top-level WORKSPACE at /home/vlovich/myproject, register tcmalloc and my toolchain there, and then invoke Bazel from /home/vlovich/myproject instead of /home/vlovich/myproject/deps/tcmalloc?
Toolchain support is rather complicated and hard to understand if you are not a Bazel maintainer.
You can use the CC and CXX environment variables to set a different compiler, like: CC=your_c_compiler CXX=your_c++_compiler bazel build .... You can also write your own custom wrapper script which will act as a normal C++ compiler.
That -I<path to libc++> does not work because all normal include paths have to be declared in the srcs/hdrs attributes or come in via dependencies listed in the deps attribute. For system-wide dependencies, use -isystem instead. Read more about it at https://stackoverflow.com/a/44061589/4638604
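One Bazel-friendly way to handle a vendored header tree like the libc++ checkout above is to wrap it in a cc_library, so the headers become declared inputs rather than stray -I flags. A sketch; the paths mirror the question (assuming a BUILD file at the repository root) and everything else is hypothetical:

    # BUILD (illustrative)
    cc_library(
        name = "vendored_libcxx_headers",
        hdrs = glob(["deps/toolchain/libc++/trunk/include/**"]),
        # 'includes' adds the directory with -isystem, the declared
        # equivalent of the -nostdinc++ -I<path> flags from the question.
        includes = ["deps/toolchain/libc++/trunk/include"],
    )

    cc_binary(
        name = "app",
        srcs = ["main.cc"],
        deps = [":vendored_libcxx_headers"],
    )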
Is it possible to mix targets built in different compilation modes? I want to compile external dependencies, which rarely change, with -c opt, but build my internal code in dbg or fastbuild mode.
The easiest way to do this is to stick -O2 in the copts of the external dependencies, or to use --per_file_copt in a .bazelrc file.
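A sketch of the first option, pinning a rarely-changing third-party library to -O2 regardless of --compilation_mode (the target and file names are made up; the command-line alternative would be something along the lines of --per_file_copt=external/.*@-O2):

    # BUILD for the third-party package (illustrative)
    cc_library(
        name = "thirdparty",
        srcs = ["thirdparty.c"],
        hdrs = ["thirdparty.h"],
        # Always optimize this library, even in fastbuild/dbg builds of
        # everything that depends on it.
        copts = ["-O2"],
    )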
On some platforms, --compilation_mode has global implications on position-independence and linking behavior. So, it wouldn't necessarily make sense to switch --compilation_mode for part of a build.
I am just starting to get to grips with Nix, so apologies if I missed the answer to my question in the docs.
I want to use Nix to setup a secure production machine with the minimal set of libraries and executables. I don't want any compilers or other build tools present because these can be security risks.
When I install some packages, it seems that they depend on only the minimum set of runtime dependencies. For example if I install apache-tomcat-8.0.23 then I get a Java runtime (JRE) and the pre-built JAR files comprising Tomcat.
On the other hand, some packages seem to include a full build toolchain as dependencies. Taking another Java-based example, when I install spark-1.4.0 Nix pulls down the Java development kit (JDK) which includes a compiler, and it also pulls the Maven build tool etc.
So, my questions are as follows:
Do Nix packages make any distinction between build and runtime dependencies?
Why do some packages appear to depend on build tools whereas others only need runtime? Is this all down to how the package author wrapped up the application?
If a package contains build dependencies that I don't want, is there anything that I, as the operator, can do about it except design my own alternative packaging for the same application?
Many thanks.
The runtime dependencies are a subset of the build-time dependencies; Nix determines them automatically by scanning the generated output for the hash part of each build-time dependency's store path. For example, if you build a package using the compiler /nix/store/abcdef...-foo-1.20, then Nix will scan all files in the generated output for the hash bit abcdef.... If that hash is found, the output is assumed to reference the compiler in some way, so it's kept as a runtime dependency. If that hash does not occur, however, then the generated output has no reference to the compiler, therefore cannot access it at runtime, and foo-1.20 is treated as a build-time-only dependency.
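A toy Python sketch of the scanning idea (this is not Nix's actual implementation; the function name and the hash-extraction details are simplified for illustration):

    import os

    def find_runtime_deps(output_dir, build_inputs):
        # build_inputs: store paths like "/nix/store/abcdef...-foo-1.20".
        # The hash part is the component between "/nix/store/" and the
        # first "-" of the package name.
        hashes = {p.split("/")[3].split("-")[0]: p for p in build_inputs}
        runtime = set()
        for root, _, files in os.walk(output_dir):
            for name in files:
                with open(os.path.join(root, name), "rb") as f:
                    data = f.read()
                for h, path in hashes.items():
                    # Any file mentioning the hash pins the whole store
                    # path as a runtime dependency.
                    if h.encode() in data:
                        runtime.add(path)
        return runtime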
Some packages record large parts of their build environment for informational/debugging purposes. Perl, for example, stores every little detail about the tools used to compile it, so all those store paths end up being treated as runtime dependencies even though Perl doesn't actually need them at runtime. Nix can't know that, though: it just knows that the Perl store path references those tools. Nixpkgs maintainers usually make an effort to clean that up, e.g. by pruning the log file that contains all those store paths from the installation, but there are surely plenty of packages in the database that haven't been optimized to that end yet.
Let's assume that you'd like to compile a version of openssh that does not depend on PAM. Then you can remove that build input from the expression by means of an override, i.e. you replace the pam argument that's normally passed to the openssh build function with null. To do that, store the following in ~/.nixpkgs/config.nix:
{
  packageOverrides = super: let self = super.pkgs; in {
    openssh-without-pam = super.openssh.override {
      pam = null;
    };
  };
}
and now install that package by running:
$ nix-env -f "<nixpkgs>" -iA openssh-without-pam