Nix: how to use an alternative store path for CI caching

Many CI providers give you a directory whose contents are retained across builds, which you can use as a cache; everything stored elsewhere is lost. This means that any artefacts created during a nix-build and placed in the Nix store (/nix/store) are lost. I'm trying to figure out how to convince Nix to prefer that cache directory over the global /nix/store, but the documentation is a bit lacking.
What I've tried so far:
Adding file:///the/path to substituters and then running nix copy --to that path. However, I discovered that Nix only creates some metadata files in that directory and copies the actual derivation into /nix/store. That's not what I want.
Using local?root=/the/path instead of the file:// URL (by the way, this syntax is not documented anywhere; I only found it in a single GitHub issue!). That made Nix copy the whole derivation to that folder, but I couldn't figure out how to convince nix-build to actually consult that store during the build.

Would it be possible to use something along the lines of nix run --store ~/my-nix nixpkgs.hello -c hello --greeting 'Hi everybody!'? The installation guide points to uses of --store for such a use case, as well as some sections in the manual: 1, 2.
An example of this can be found in nix run's tests.
There is also the environment variable NIX_STORE_DIR, which might be of use; it's documented in nix-shell --help.
There are also several issues in the Nix repo; here's an interesting discussion.
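For what it's worth, a minimal sketch of how the --store route might look in a CI script, assuming the cached directory is exposed as $CI_CACHE_DIR (the variable name and the hello attribute are just placeholders):
# build into a chroot store rooted in the CI cache directory instead of /nix/store
nix-build --store "local?root=$CI_CACHE_DIR" '<nixpkgs>' -A hello
# the same idea with the new-style CLI, as in the nix run example above
nix run --store "local?root=$CI_CACHE_DIR" nixpkgs.hello -c hello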

Related

How to force Nix to "install packages" by building them locally instead of downloading a pre-built binary?

By "install packages" I mean to evaluate Nix build expressions (using nix-env, nix-shell -p, etc.) to build from source instead of using a substitute.
Also cross-posted to Unix & Linux because, as Charles Duffy pointed out, it is more on topic there if it is about command-line tools or configuration. Still leaving this here because I assume forcing a package to always compile from source is possible using only the Nix language, I just don't yet know how. (Or, if it is in fact not possible, someone will point it out, and then this question does belong here.)
Either set the substitute option to false in nix.conf (the default is true) or use --option substitute false when invoking a Nix command.
nix-env --option substitute false -i hello
nix-shell --option substitute false -p hello
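If you go the nix.conf route instead, the relevant line (in /etc/nix/nix.conf or the per-user file listed further below) is simply:
# disable binary substitutes, forcing builds from source
substitute = false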
Might not be the droids you are looking for
As Robert Hensing (comment, chat), Henri Menke (comment), and Vladimír Čunát (comment) pointed out, this may not be the thing that you are really after.
To elaborate: I have been using the most basic Nix features confidently, but got to a point where I need to maintain and deploy a custom fork of a large application written in C, which is quite intimidating at the outset.
I tried to attack the problem in the simplest way, just fetching my fork and re-building it with the new source, so I boiled it down to this question. That said, I suspect that the right direction for me is something along the lines of Nixpkgs/Create and debug packages in the NixOS Wiki.
Only re-build the package itself
Vladimír Čunát commented that "disabling substitutes makes you rebuild everything that's missing locally, even though I suspect that people asking such a question often only want to rebuild the specified package itself."
(This is probably achieved with nix-build or by "just" overriding the original package, but I could be wrong. The latter is mentioned (maybe even demonstrated?) in the NixOS wiki article Development environment with nix-shell, but I haven't been able to read it thoroughly yet.)
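A hedged sketch of the override route (the hello attribute and the local source path are made up):
# rebuild just this one package from a local source tree; its dependencies can still
# come from the binary cache, because only this derivation's hash changes
nix-build -E 'with import <nixpkgs> {}; hello.overrideAttrs (old: { src = /home/me/hello-fork; })'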
Test for reproducibility
One might arrive at formulating this same question when wanting to make sure that subsequent builds are deterministic. As Henri Menke comments, one should use nix-build --check for that.
The --check option is easy to miss; it's not documented in man nix-build or at nix-build in the Nix manual, but at nix-store --realise, because (as man nix-build explains it):
nix-build is essentially a wrapper around nix-instantiate (to translate a high-level Nix expression to a low-level store derivation) and nix-store --realise (to build the store derivation) [and so] all options not listed here are passed to nix-store --realise, except for --arg and --attr / -A which are passed to nix-instantiate.
See detailed examples in the Nix manual at 18.1. Spot-Checking Build Determinism and the next section right after it.
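A hedged example of what that looks like in practice (the hello attribute is only an illustration):
nix-build '<nixpkgs>' -A hello            # first build, possibly fetched as a substitute
nix-build '<nixpkgs>' -A hello --check    # rebuild locally and compare with the existing output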
The relevant parts for the substitute configuration option, from the nix.conf section of the Nix manual:
Name
nix.conf — Nix configuration file
Description
Nix reads settings from two configuration files:
The system-wide configuration file sysconfdir/nix/nix.conf (i.e. /etc/nix/nix.conf on most systems), or $NIX_CONF_DIR/nix.conf if NIX_CONF_DIR is set.
The user configuration file $XDG_CONFIG_HOME/nix/nix.conf, or ~/.config/nix/nix.conf if XDG_CONFIG_HOME is not set.
You can override settings on the command line using the --option flag,
e.g. --option keep-outputs false.
The following settings are currently available:
[..]
substitute
If set to true (default), Nix will use binary substitutes if available. This option can be disabled to force building from source.
(Formerly known as use-binary-caches.)
Notes
Setting substitute to false (either with --option or in nix.conf) won't recompile the package if the command is issued multiple times. That is, hello above would be compiled from source the first time, and then the already present store path would be used if the command is issued again.
This is where it gets fuzzy: it is clear that no recompilation takes place because, as long as the package's Nix build expression doesn't change, the output store hash won't change either, making the next compilation output equivalent to the previous one; hence the action would be superfluous.
So if one were to do some light hacking on a package and just wanted to try it out locally (e.g., with nix-shell), would one have to use -I nixpkgs=a/local/nixpkgs/dir to pick up those changes, and eventually do a recompilation? Or should one use nix-build?
See also question How to nix-build again a built store path?
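As a rough sketch of that workflow (the checkout path is hypothetical, and I'm not certain this is the idiomatic way):
# point the tools at a local nixpkgs checkout so the edited expression is picked up;
# a changed expression produces a new store path, which then gets built from source
nix-shell -I nixpkgs=$HOME/src/nixpkgs -p hello
nix-build $HOME/src/nixpkgs -A hello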

Is there a tool for generating crosstool files for installed compilers?

Bazel uses CROSSTOOL files to figure out how to build things. This can be used to (for example) switch between GCC and Clang by setting --crosstool_top. The problem is that it's far from trivial to construct those files.
Does anyone know of any tools that can inspect a Linux installation and generate the needed crosstool files for any "common" compiler(s) that happen to be installed? Something that would be able to find and support any installed versions of Clang and GCC would be enough; any other compilers (icc, etc.) would be fantastic.
(Alternatively: are there any repos with pre-constructed crosstool files for default installations of all the common compilers?)
Note
I've already found @bazel_tools//tools/cpp:cc_configure.bzl et al., but those seem to only generate configs for the default system compiler, and I'm specifically looking for support for non-default compiler(s).
It's only a variation on cc_configure, but you can use environment variables to tweak the generation. Maybe using CC will be enough? If not, what else would you need (pull requests welcomed)?
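For example, a hedged sketch of the environment-variable route (the target label is made up):
# let the C++ autoconfiguration pick up a non-default compiler via CC
CC=/usr/bin/clang bazel build //my:target
Note that the generated configuration may be cached, so a bazel clean might be needed before the change takes effect.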
There is no repo with premade crosstools yet. There will eventually be one (maybe in the form of Docker containers, we'll see), but currently there's not.

What are the differences among the various .nix files?

I am new to NixOS; this is my understanding of the configuration files.
Configuration files created by installer
/etc/nixos/configuration.nix :: The central point of system description used by nixos-rebuild
/etc/nixos/hardware-configuration.nix :: to be included in the above configuration.nix
Configuration files for packages
<package>.nix on nixpkgs github :: configuration for each module (options are searchable on nixos package page)
These are the ones I do not fully understand:
default.nix (anywhere in the filesystem) :: for nix-shell, like a .bashrc
~/.nixpkgs/config.nix :: nix-env override configuration for each user
~/.config/<various>.nix :: ?? no idea
Am I understanding this right?
Where can I find more information on these configuration files?
Not all of these files are called configuration files; e.g. the <package>.nix files are rather called derivations. What all these files share is the language in which they are written.
/etc/nixos/configuration.nix is indeed where you configure your system and ~/.nixpkgs/config.nix where you configure nix-env.
default.nix doesn't mean anything in particular except that this is the default file that is selected by the commands nix-build and nix-shell when you give them a directory as an argument instead of a specific file. Note e.g. that the nixpkgs collection (on GitHub, like you noticed already) contains a lot of such default.nix files.
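To illustrate (the directory name is made up):
nix-build ./myproject               # picks up ./myproject/default.nix
nix-build ./myproject/default.nix   # equivalent, with the file named explicitly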
To understand all of this better I advise you to read Nix-pills (that's a long series but it's worth it) and of course the NixOS, Nix and nixpkgs manuals.

How to use luadoc in ubuntu/linux?

As the title says, how do I use luadoc in Ubuntu/Linux? I generated documentation in Windows using a batch file, but had no success in Ubuntu. Any ideas?
luadoc
Usage: /usr/bin/luadoc [options|files]
Generate documentation from files. Available options are:
-d path output directory path
-t path template directory path
-h, --help print this help and exit
--noindexpage do not generate global index page
--nofiles do not generate documentation for files
--nomodules do not generate documentation for modules
--doclet doclet_module doclet module to generate output
--taglet taglet_module taglet module to parse input code
-q, --quiet suppress all normal output
-v, --version print version information
First off, I have little experience with Luadoc, but a lot of experience with Ubuntu and Lua, so I'm basing all my points on that knowledge and on a quick install of luadoc that I've just done. Luadoc, as far as I can see, is a Lua library (so it can also be used in Lua scripts as well as from bash). To make documentation (in bash), you just run
luadoc file.lua
(where file is the name of your file that you want to create documentation for)
The options -d and -t are there to choose where you want to put the file and what template you want to use (which I have no clue about, I'm afraid :P). For example (for -d):
luadoc file.lua -d ~/Docs
As far as I can see, there is little else to explain about the actual options (as your code snippet explains what they do well enough).
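A hedged example combining both flags (the template directory is hypothetical):
luadoc file.lua -d ~/Docs -t ~/luadoc-templates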
Now, looking at the errors you obtained when running (lua5.1: ... could not open "index.html" for writing), I'd suggest a few things. One, if you compiled the source code, then you may have made a mistake somewhere, such as not installing dependencies (which I'd be surprised about, because otherwise you wouldn't have been able to make it at all). If you did, you could try getting it from the repos with
sudo apt-get install luadoc
which will install the dependencies too. This is probably the problem, as my working copy of luadoc runs fine from /usr/bin with the command
./luadoc
which means that your luadoc is odd, or you're doing something funny (which I cannot work out from what you've said). I presume that you have lua5.1 installed (considering the errors), so it's not to do with that.
My advice to you is to try running
luadoc file.lua
in the directory of file.lua with any old Lua file (although preferably one with at least a little data in it) and see if it generates an index.html in the same folder (don't change the directory with -d, for testing purposes). If that DOESN'T work, then reinstall it from the repos with apt-get. If doing that and trying luadoc file.lua doesn't work, then reply with the errors, as something bigger is going wrong (probably).

"Bundling" external libraries in Erlang?

I have an Erlang application I have been writing which uses the erldis library for communicating with Redis.
Being a bit of a newbie at actually deploying Erlang applications to production, I wanted to know if there was any way to 'bundle' these external libraries with the application rather than installing them into my system-wide /usr/lib/erlang/lib/ folder.
Currently my directory structure looks like...
\
--\conf
--\ebin
--\src
I have a basic Makefile that I stole from a friend's project, but I am unsure how to write them properly.
I suspect this answer could involve telling me how to write my Makefile properly rather than just which directory to plonk some external library code into.
You should really try to avoid project nesting whenever possible. It can lead to all sorts of problems because of how module/application versioning is structured within Erlang.
In my development environment, I do a few things to simplify dependencies and multiple developed projects. Specifically, I keep most of my projects sourced in a dev directory and create symlinks into an elib dir that is set in the ERL_LIBS environment variable.
~/dev/ngerakines-etap
~/dev/jacobvorreuter-log_roller
~/dev/elib/etap -> ~/dev/ngerakines-etap
~/dev/elib/log_roller -> ~/dev/jacobvorreuter-log_roller
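A sketch of how that might be wired up, following the layout above:
# create the symlinks and let Erlang find the libraries through ERL_LIBS
ln -s ~/dev/ngerakines-etap ~/dev/elib/etap
ln -s ~/dev/jacobvorreuter-log_roller ~/dev/elib/log_roller
export ERL_LIBS=$HOME/dev/elib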
For projects that are deployed, I've either had package-rpm or package-apt make targets that create individual packages per project. Applications get boot scripts and init.d scripts for easy start/stop control, but libraries and dependency projects just get listed as package dependencies.
I use a mochiweb-inspired style. To see an example of this, get your copy of mochiweb:
svn checkout http://mochiweb.googlecode.com/svn/trunk/ mochiweb
and use
path/to/mochiweb/scripts/new_mochiweb.erl new_project_name
to create a sample project with this structure (feel free to delete everything inside src afterwards and use it for your project).
It looks like this:
/
/ebin/
/deps/
/src/
/include/
/support/
/support/include.mk
Makefile
start.sh
ebin contains *.beam files
src contains *.erl files and local *.hrl files
include contains global *.hrl files
deps contains symlinks to root directories of dependencies
Makefile and include.mk take care of including the appropriate paths when the project is built.
start.sh takes care of including the appropriate paths when the project is run.
So, using symlinks in the deps directory, you are able to fine-tune the versions of the libraries you use for every project. It is advised to use relative paths, so afterwards it is enough to rsync this structure to the production server and run it.
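For example, deploying could then be as simple as (host and target path are hypothetical):
rsync -a ./ user@prod-server:/srv/my_project/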
On more global scale I use the following structure:
~/code/erlang/libs/*/
~/code/category/project/*/
~/code/category/project/*/deps/*/
Where every symlink in deps points to the library in ~/code/erlang/libs/ or to another project in the same category.
The simplest way to do this would be to create a folder named erldir, put the beams you need into it, and then, in your start script, use the -pa flag to the Erlang runtime to point out where it should fetch the beams from.
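A minimal sketch of that approach (the application name and paths are made up):
# start the VM with both your own ebin and the bundled beams on the code path
erl -pa ./ebin -pa ./erldir -s my_app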
The correct way (at least if you buy into the OTP distribution model) would be to create a release using reltool (http://www.erlang.org/doc/man/reltool.html) or systools (http://www.erlang.org/doc/man/systools.html) which includes both your application and erldis.
Add the external libraries that you need, anywhere you want them, and add them to your ERL_LIBS environment variable. Separate the paths with a colon on Unix or a semicolon on DOS/Windows.
Erlang will add the "ebin"-named subdirs to its code loading path.
Have your *.app file point out the other applications it depends on.
This is a good halfway-there approach for setting up larger applications.
Another way is to put your lib paths in ~/.erlang:
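%% ~/.erlang is evaluated each time erl starts, so these paths get added automatically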
code:add_pathz("/Users/brucexin/sources/mochiweb/ebin").
code:add_pathz("/Users/brucexin/sources/webnesia/ebin").
code:add_pathz("./ebin").
code:add_pathz("/Users/brucexin/sources/erlang-history/ebin/2.15.2").

Resources