I need to use Bazel to manage our source dependencies such that the final build product is purely a function of the toolchain, a vanishingly small number of files from the Linux distribution, and the source code itself. This means building things like libz, libssl, libcrypto, libcurl...
These dependencies depend on each other.
They each have their own native (mostly autotools-based) build system, along the lines of ./configure --prefix=foo && make -j && make install.
It seems to me that Bazel is not well suited to this use case. In particular, we need to manually recreate the make install step for each library in order to copy the make install artifacts out of the execroot, and it's unclear to me how the next dependency reuses those products. For example, when building zlib we produce libz.a and a set of header files; then, when building libcrypto.a, we need to modify CPPFLAGS and LDFLAGS to point at the zlib "installation".
This strikes me as so pedantic that it's begging for code generation to generate the BUILD files.
Is there an alternative approach that doesn't require copying the "make install" logic into a bespoke genrule for each library?
Take a look at rules_foreign_cc (https://github.com/bazelbuild/rules_foreign_cc). This contains rules for integrating with foreign build systems (make, autotools+make, cmake, etc.).
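For illustration, a minimal sketch of what this looks like with rules_foreign_cc's configure_make rule. The repository names, the :all_srcs filegroups, and the load path are assumptions based on a recent rules_foreign_cc release, and OpenSSL's nonstandard configure script may need extra attributes in practice:

load("@rules_foreign_cc//foreign_cc:defs.bzl", "configure_make")

# Runs zlib's own ./configure && make && make install inside the sandbox;
# the resulting install tree becomes an ordinary Bazel output.
configure_make(
    name = "zlib",
    lib_source = "@zlib//:all_srcs",
    out_static_libs = ["libz.a"],
)

# Downstream libraries just declare deps; rules_foreign_cc adds the zlib
# headers and libraries to CPPFLAGS/LDFLAGS for you.
configure_make(
    name = "openssl",
    lib_source = "@openssl//:all_srcs",
    out_static_libs = ["libssl.a", "libcrypto.a"],
    deps = [":zlib"],
)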
At the end of a cargo build, I would like to call wasm-opt with specific optimization options on the generated WASM file.
Unfortunately, it seems that Cargo.toml does not support asyncify options.
A good solution would prevent cargo from rebuilding the project after running wasm-opt on the WASM file.
If I used a Cargo build script, it is unclear to me how I could specify a dependency on the wasm-opt build step to avoid unnecessary rebuilds when the Rust source files haven't changed. Any pointers?
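One possible approach (a sketch, not from the original thread) is to drive the post-processing from a small wrapper script and gate wasm-opt on the artifact's timestamp; cargo leaves the .wasm untouched when nothing changed, so the check is reliable. The crate name and optimization flags here are hypothetical:

#!/bin/sh
# build.sh - run cargo, then wasm-opt only if the .wasm actually changed.
set -e
WASM=target/wasm32-unknown-unknown/release/mygame.wasm
OPT=target/wasm32-unknown-unknown/release/mygame.opt.wasm

cargo build --release --target wasm32-unknown-unknown

# -nt means "newer than": skip wasm-opt when cargo left the artifact alone.
if [ ! -e "$OPT" ] || [ "$WASM" -nt "$OPT" ]; then
    wasm-opt --asyncify -O2 "$WASM" -o "$OPT"
fi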
I have a Bazel project (the new tcmalloc) I'm trying to integrate into a typical GNU Make project that uses its own build of compiler/libc++. The goal is to not fork the upstream project.
If I pass all the C++ options correctly to Bazel (one set of which is -nostdinc++ -I<path to libc++>), Bazel is unhappy: The include path '/home/vlovich/myproject/deps/toolchain/libc++/trunk/include' references a path outside of the execution root. (tcmalloc is a sibling git submodule at deps/tcmalloc.) It's possible to get this "working" by giving Bazel a custom script to invoke as the compiler that injects those flags so that Bazel never sees them. However, I'd like to just define a toolchain that works properly.
I've read all the documentation I could find on this topic but it's not clear to me how to glue all these docs together.
Specifically, it's not clear where I should place the toolchain definition files or how to tell Bazel to find those definitions. Is there a way to give Bazel a directory that it uses to find toolchain definitions? Am I expected to create a top-level WORKSPACE at /home/vlovich/myproject, register tcmalloc and my toolchain there, and then invoke Bazel from /home/vlovich/myproject instead of /home/vlovich/myproject/deps/tcmalloc?
Toolchain support is rather complicated, and it is hard to understand if you are not a Bazel maintainer.
You can use the CC and CXX environment variables to set a different compiler, like: CC=your_c_compiler CXX=your_c++_compiler bazel build .... You can also write your own custom script wrapper that acts as a normal C++ compiler.
That -I<path to libc++> does not work because all normal include paths have to be declared in the srcs attribute or via dependencies indicated by the deps attribute. For system-wide dependencies, use -isystem instead. Read more about it at https://stackoverflow.com/a/44061589/4638604.
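A minimal sketch of such a wrapper, using the include path from the question (the compiler binary is an assumption; substitute your own):

#!/bin/sh
# cxx-wrapper.sh - inject the libc++ flags so Bazel never sees them,
# then delegate everything else to the real compiler.
exec clang++ -nostdinc++ \
    -isystem /home/vlovich/myproject/deps/toolchain/libc++/trunk/include \
    "$@"

Then invoke Bazel with CXX pointing at the wrapper, e.g. CXX=/path/to/cxx-wrapper.sh bazel build //....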
I am trying to follow the instructions for contributors here:
https://bazel.build/contributing.html
I have a successful build off of master (i.e. bazel build //src:bazel), but the doc also suggests "you might want to build the various tools Bazel uses." I am trying to do that, for example:
cd src/java_tools/singlejar
bazel build //...
but it fails with:
ERROR: /Users/.../bazel/third_party/protobuf/3.2.0/BUILD:621:1: no such target '//external:gtest': target 'gtest' not declared in package 'external' defined by /Users/plaird/scone/public/bazel/WORKSPACE and referenced by '//third_party/protobuf/3.2.0:test_plugin'.
Do I need to build gtest locally, and then add it to the WORKSPACE file?
bazel build //..., no matter where you invoke it, will build everything in the project. It looks like what you probably want is bazel build //src/java_tools/singlejar/..., which will build all targets under that directory.
In general, though, you probably don't need to compile singlejar separately. I've been working on Bazel for several years and 99% of the time you don't have to build the tools separately.
In terms of the error you're getting, it would be nice if we could get //... building, but it hasn't been a huge priority. The protobuf build is weird and I don't recommend trying to debug it; just jump into whatever you actually want to work on.
I am just starting to get to grips with Nix, so apologies if I missed the answer to my question in the docs.
I want to use Nix to setup a secure production machine with the minimal set of libraries and executables. I don't want any compilers or other build tools present because these can be security risks.
When I install some packages, they seem to depend on only a minimal set of runtime dependencies. For example, if I install apache-tomcat-8.0.23 then I get a Java runtime (JRE) and the pre-built JAR files comprising Tomcat.
On the other hand, some packages seem to include a full build toolchain as dependencies. Taking another Java-based example: when I install spark-1.4.0, Nix pulls down the Java Development Kit (JDK), which includes a compiler, and it also pulls in the Maven build tool, etc.
So, my questions are as follows:
Do Nix packages make any distinction between build and runtime dependencies?
Why do some packages appear to depend on build tools whereas others only need runtime? Is this all down to how the package author wrapped up the application?
If a package contains build dependencies that I don't want, is there anything that I, as the operator, can do about it except design my own alternative packaging for the same application?
Many thanks.
The runtime dependencies are a subset of the build-time dependencies that Nix determines automatically by scanning the generated output for the hash part of each build-time dependency's store path. For example, if you build a package using the compiler /nix/store/abcdef...-foo-1.20, then Nix will scan all files in the generated output for the hash bit abcdef.... If that hash is found, the output is assumed to reference the compiler in some way, so it's kept as a runtime dependency. If that hash does not occur, however, then the generated output has no reference to the compiler and therefore cannot access it at runtime, so foo-1.20 is treated as a build-time-only dependency.
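You can observe the result of that scan yourself (using the illustrative store path from above):

$ nix-store -q --references /nix/store/abcdef...-foo-1.20

lists the direct runtime dependencies that survived the scan, while

$ nix-store -q --requisites /nix/store/abcdef...-foo-1.20

prints the entire runtime closure.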
Some packages record large parts of their build environment for informational/debugging purposes. Perl, for example, stores every little detail about the tools used to compile it, so all those store paths end up being treated as runtime dependencies even though Perl doesn't actually need them at runtime. Nix can't know that, though: it just sees that the Perl store path references those tools. Nixpkgs maintainers usually make an effort to clean that up, e.g. by pruning the logfile that contains all those store paths from the installation, but there are surely still plenty of packages in the database that haven't been optimized in that regard yet.
Let's assume that you'd like to compile a version of openssh that does not depend on PAM. Then you can remove that build input from the expression by means of an override, i.e. you replace the pam argument that's normally passed to the openssh build function with null. To do that, store the following in ~/.nixpkgs/config.nix:
{
  packageOverrides = super: let self = super.pkgs; in {
    openssh-without-pam = super.openssh.override {
      pam = null;
    };
  };
}
and now install that package by running:
$ nix-env -f "<nixpkgs>" -iA openssh-without-pam
I'm using a CMake build in a Jenkins environment and want to build the protobuf compiler from source.
This all works, but in the next step I'm trying to use the compiler to translate my proto files, which doesn't work because it cannot find its own shared objects. I've tried defining the search path in the CMakeLists.txt file, but it won't detect the shared-object location in my repository tree, $PRJ_MIDDLEWARE/protobuf/lib. I've tried telling CMake or the system where to search by defining:
set(CMAKE_LIBRARY_PATH ${CMAKE_LIBRARY_PATH} "$ENV{PRJ_MIDDLEWARE}/protobuf/lib")
set(ENV{LD_LIBRARY_PATH} "$ENV{PRJ_MIDDLEWARE}/protobuf/lib:$ENV{LD_LIBRARY_PATH}")
But it always fails when trying to invoke the protoc compiler I just built. I tried invoking ldconfig from CMake, but that doesn't work because the jenkins user doesn't have the rights to do this. Currently my only solution is to log in to the build server and do this manually as root, but that is not how I want to do it: when the next release moves to a new directory, it has to be done all over again. Do I have other options? Preferably directly from CMake, from Jenkins, or maybe even from Protocol Buffers?
Thank you
Two ideas come to mind:
Build the protobuf compiler as a static binary (I don't know if that's possible, but it usually is).
Set the LD_LIBRARY_PATH environment variable before invoking cmake to point to the location of the protoc shared libraries.
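For the second idea, a sketch of the Jenkins shell build step (paths follow the $PRJ_MIDDLEWARE convention from the question):

# Export before configuring, so every child process, including the
# freshly built protoc invoked during the build, inherits the path.
export LD_LIBRARY_PATH="$PRJ_MIDDLEWARE/protobuf/lib:$LD_LIBRARY_PATH"
mkdir -p build && cd build
cmake ..
make

For the first idea, protobuf's autotools build accepts ./configure --disable-shared, and newer CMake-based protobuf builds expose -Dprotobuf_BUILD_SHARED_LIBS=OFF; either way protoc stops depending on its own shared objects.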