Build versus Runtime Dependencies in Nix

I am just starting to get to grips with Nix, so apologies if I missed the answer to my question in the docs.
I want to use Nix to set up a secure production machine with a minimal set of libraries and executables. I don't want any compilers or other build tools present, because these can be security risks.
When I install some packages, they seem to depend on only a minimal set of runtime dependencies. For example, if I install apache-tomcat-8.0.23, then I get a Java runtime (JRE) and the pre-built JAR files comprising Tomcat.
On the other hand, some packages seem to include a full build toolchain as dependencies. Taking another Java-based example, when I install spark-1.4.0, Nix pulls down the Java Development Kit (JDK), which includes a compiler, and it also pulls in the Maven build tool, etc.
So, my questions are as follows:
Do Nix packages make any distinction between build and runtime dependencies?
Why do some packages appear to depend on build tools whereas others need only runtime dependencies? Is this all down to how the package author wrapped up the application?
If a package contains build dependencies that I don't want, is there anything that I, as the operator, can do about it except design my own alternative packaging for the same application?
Many thanks.

The runtime dependencies are a subset of the build-time dependencies that Nix determines automatically by scanning the generated output for the hash part of each build-time dependency's store path. For example, if you build a package using the compiler /nix/store/abcdef...-foo-1.20, then Nix will scan all files in the generated output for the hash bit abcdef.... If that hash is found, then the output is assumed to reference the compiler in some way, so it's kept as a runtime dependency. If that hash does not occur, however, then the generated output has no reference to the compiler and therefore cannot access it at runtime, so foo-1.20 is treated as a build-time-only dependency.
Some packages record large parts of their build environment for informational/debugging purposes. Perl, for example, stores every little detail about the tools used to compile it, so all those store paths end up being treated as runtime dependencies despite the fact that Perl doesn't actually need them at runtime. Nix can't know that, though: it just knows that the Perl store path references those tools. Nixpkgs maintainers usually make an effort to clean that up, e.g. by pruning the log file that contains all those store paths from the installation, but there are surely still plenty of packages in Nixpkgs that haven't been optimized to that end yet.
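You can inspect these relationships yourself with nix-store: the --references query lists the direct runtime dependencies of a store path, and --requisites (short form -qR) prints the entire runtime closure. The store path below is illustrative; substitute the real one from your system:
$ nix-store -q --references /nix/store/abcdef...-apache-tomcat-8.0.23
$ nix-store -qR /nix/store/abcdef...-apache-tomcat-8.0.23
If a JDK or Maven shows up in that closure, then the package's output still references their store paths somewhere.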
Let's assume that you'd like to compile a version of openssh that does not depend on PAM. You can remove that build input from the expression by means of an override, i.e. by replacing the pam argument that's normally passed to the openssh build function with null. To do that, store the following file in ~/.nixpkgs/config.nix:
{
  packageOverrides = super: let self = super.pkgs; in {
    openssh-without-pam = super.openssh.override {
      pam = null;
    };
  };
}
and now install that package by running:
$ nix-env -f "<nixpkgs>" -iA openssh-without-pam
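To verify that the override worked, you can check the runtime closure of the newly installed package for PAM; this assumes the profile's ssh is first on your PATH, and the grep pattern is just an illustration:
$ nix-store -qR $(which ssh) | grep -i pam
No output means the closure no longer references a PAM store path.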

Related

Bazel: building a tree of dependencies

I need to use Bazel to manage our source dependencies such that the final build product is purely a function of the toolchain, a vanishingly small number of files from the Linux distribution, and the source code itself. This means building things like libz, libssl, libcrypto, libcurl...
These dependencies depend on each other.
They have their own native (mostly autotools based) build systems, based on something like ./configure --prefix=foo && make -j && make install.
It seems to me that Bazel is not well suited to this use case. In particular, we need to manually recreate the make install step for each library, in order to copy make install artifacts out of execroot. It's unclear to me how the next dependency reuses the products. So, for example, when building zlib, we produce libz.a, and a bunch of header files. Then, when building libcrypto.a, we need to modify CPPFLAGS and LDFLAGS to point to the zlib "installation".
This strikes me as so pedantic that it's begging for code generation to generate the BUILD files.
Is there an alternative approach that doesn't require copying the bespoke "make install" logic into a genrule?
Take a look at rules_foreign_cc (https://github.com/bazelbuild/rules_foreign_cc). This contains rules for integrating with foreign build systems (make, autotools+make, cmake, etc.).
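As a rough sketch of what that integration looks like, the configure_make rule drives an autotools-style ./configure && make && make install and exposes the installed headers and libraries to downstream targets, so the CPPFLAGS/LDFLAGS plumbing between zlib and libcrypto is handled for you. Attribute names vary between rules_foreign_cc releases, so treat this as illustrative:
load("@rules_foreign_cc//foreign_cc:defs.bzl", "configure_make")

configure_make(
    name = "zlib",
    # Source tree fetched via an http_archive named "zlib" in the WORKSPACE.
    lib_source = "@zlib//:all_srcs",
    out_static_libs = ["libz.a"],
)
A downstream configure_make or cc_library target can then simply list ":zlib" in its deps instead of recreating the "make install" step by hand.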

Access Cargo features *inside* the build script

How is it possible to access which features the package is being built with, inside the build.rs script? There is an incredibly expensive step in the script which is only needed for a particular cargo feature, but I can't see any way to access config features inside the build script.
Is there any way to read whether or not a given feature is enabled in the build.rs script?
I haven't been able to find documentation here, but was able to figure out one solution by guessing.
Cargo features are available as build features not just in the main source files, but inside the build.rs script as well. So you can use any of the standard ways to check configuration, like the cfg! and #[cfg(feature = "...")] macros, as mentioned in https://doc.rust-lang.org/reference/conditional-compilation.html and How do I use conditional compilation with `cfg` and Cargo?
Cargo sets a number of environment variables when the build scripts are run:
https://doc.rust-lang.org/cargo/reference/environment-variables.html#environment-variables-cargo-sets-for-build-scripts
Including an environment variable for each feature:
CARGO_FEATURE_<name> — For each activated feature of the package being built, this environment variable will be present where <name> is the name of the feature uppercased and having - translated to _.
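A minimal build.rs sketch using that mechanism; the feature name expensive-step is a made-up example standing in for a feature you'd declare in Cargo.toml:
// build.rs
use std::env;

fn main() {
    // Cargo exports CARGO_FEATURE_<NAME> only for activated features,
    // with the feature name uppercased and `-` translated to `_`.
    if env::var_os("CARGO_FEATURE_EXPENSIVE_STEP").is_some() {
        // ... run the incredibly expensive step only when the feature is enabled ...
    }
}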

How to see all currently used packages?

I implemented a ROS package that depends on some other packages.
These packages depend on even more packages and so on...
How can I find out which packages are actually used when building and running nodes in my package?
(Except for looking at ALL the package.xml files manually, because there are multiple cases in which some packages are listed there but already deprecated and not actually used anymore)
So I'm looking for something like a tool/command/script that can list all actual package dependencies.
I think you can do this natively with rospack. To see everything a package depends on (including dependencies of dependencies) without duplicates, just do:
rospack depends my_package
You can get it formatted with indents to see all dependency chains of each package (will include duplicates across chains if more than one package shares the same dependency):
rospack depends-indent my_package
And if you only want to know the immediate dependencies of your package:
rospack depends1 my_package
I'm not sure that addresses the problem you mention with deprecated dependencies, but if a package still explicitly specifies a dependency in its package.xml, how is the system to know it isn't really a dependency? It'd be better to get those package.xml files up to date.

Build the 'protobuf' compiler from source and use it with its shared objects from within CMake

I'm using a CMake build in a Jenkins environment and want to build the protobuf compiler from source.
This all works, but in the next step I'm trying to use the compiler to translate my proto files, which doesn't work because it cannot find its own shared objects. I've tried defining the search path in the CMakeLists.txt file, but it won't detect the shared object location in my repository tree $PRJ_MIDDLEWARE/protobuf/lib. I've tried telling CMake or the system where to search by defining:
set(CMAKE_LIBRARY_PATH ${CMAKE_LIBRARY_PATH} "$ENV{PRJ_MIDDLEWARE}/protobuf/lib")
set(ENV{LD_LIBRARY_PATH} "$ENV{PRJ_MIDDLEWARE}/protobuf/lib:$ENV{LD_LIBRARY_PATH}")
But it always fails when trying to invoke the protoc compiler I just built. I tried invoking ldconfig from CMake, but that doesn't work because the Jenkins user doesn't have the rights to do this. Currently my only solution is to log in to the build server and do this manually as root. But that is not how I want to do this: the next release moves to a new directory, and this has to be done again. Do I have other options? Preferably directly from CMake, from Jenkins, or maybe even Protocol Buffers?
Thank you
Two ideas come to mind:
Build the protobuf compiler as a static binary (I don't know if that's possible here, but it usually is).
Set LD_LIBRARY_PATH environment variable before invoking cmake to point to the location of protoc shared libraries.
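For the second idea, a sketch of what that could look like in a Jenkins shell step, using the middleware path from the question:
$ export LD_LIBRARY_PATH="$PRJ_MIDDLEWARE/protobuf/lib:$LD_LIBRARY_PATH"
$ cmake .. && make
Because the environment is set before CMake runs, any protoc invocation that CMake spawns inherits it, and no root access or ldconfig call is needed.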

Specifying maven nar plugin compiler and linker executable paths

I need to specify the full path to the compiler executable for building with maven-nar.
The aol.properties file seems to only accept certain predefined values for the compiler name.
How do I tell the nar plugin exactly where my compiler and linker executables are? In this case I am trying to compile for iOS from macOS.
Looks like the only way is to add the compiler to CppTasks and install the 'hacked' version on your build host.
Here is the version I would use as a starting point: http://duns.github.com/maven-nar-plugin/cpptasks.html
There is starting to be some effort to merge the NAR branches:
https://github.com/maven-nar/maven-nar-plugin
It would be worthwhile raising issues there.
For Windows 32/64-bit support, different compiler paths were needed in order to run Maven once, without changing environment variables, to build both platforms.
There is a work in progress that I've been using for a while but haven't really published. I'm using Windows only and have never tried it on a Mac.
https://github.com/GregDomjan/maven-nar-plugin/tree/multi
I was aiming to merge it with trunk, but I have not yet had a chance to load the matching cpptasks changes required to allow providing the path and some other settings.
Unfortunately there were also a bunch of other changes around configuration that may be unnecessary.
