Infinite recursion when referring to pkgs.system from Nix module options section - nix

The following is a minimal reproducer for an infinite recursion error when building a nixos configuration:
(import <nixpkgs/nixos>) {
  configuration = { pkgs, ... }: {
    options = builtins.trace "Building a system with system ${pkgs.system}" {};
  };
  system = "x86_64-linux";
}
When evaluated it fails as follows, unless the reference to pkgs.system is removed:
$ nix-build
error: infinite recursion encountered
at /Users/charles/.nix-defexpr/channels/nixpkgs/lib/modules.nix:496:28:
495| builtins.addErrorContext (context name)
496| (args.${name} or config._module.args.${name})
| ^
497| ) (lib.functionArgs f);
If we look at the implementation of nixos/lib/eval-config.nix:33, we see that the value passed for the system argument is set as an overridable default in pkgs. Does this mean we can't access it until later in the evaluation process?
(In the real-world use case, I'm introspecting a flake -- investigating someFlake.packages.${pkgs.system} to find packages for which to generate configuration options.)
This has been cross-posted to NixOS Discourse; see https://discourse.nixos.org/t/accessing-target-system-when-building-options-for-a-module/

In order for the module system to construct the configuration, it needs to know which config and options items exist, at least to the degree necessary to produce the root attribute set of the configuration.
The loop is as follows:

1. Evaluate the attribute names in config
2. Evaluate the attribute names of the options
3. Evaluate pkgs (your code)
4. Evaluate config._module.args.pkgs (definition of the module argument)
5. Evaluate the attribute names in config (loop)
It can be broken by removing or reducing the dependency on pkgs.
For instance, you could define your "dynamic" options as type = attrsOf foo instead of enumerating each item from your flake as an individual option.
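A minimal sketch of that idea (the option name myPackages and the anything type are placeholders for whatever fits your use case):

```nix
{ lib, ... }: {
  # One option that accepts arbitrary attribute names, instead of one
  # option per flake package. The set of option names no longer depends
  # on pkgs, so the module fixpoint can be constructed without it.
  options.myPackages = lib.mkOption {
    type = lib.types.attrsOf lib.types.anything;
    default = { };
  };
}
```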
Another potential solution is to move the option definitions into a submodule. A submodule without attrsOf, as in attrsOf (submodule x), is generally quite useless, but it may create the necessary indirection that separates your dynamic pkgs-dependent options from the module fixpoint that has pkgs.
(import <nixpkgs/nixos>) {
  configuration = { pkgs, lib, ... }: {
    options.foo = lib.mkOption {
      type = lib.types.submodule {
        options = builtins.trace "Building a system with system ${pkgs.system}" { };
      };
      default = { };
    };
  };
  system = "x86_64-linux";
}
nix-repl> config.foo
trace: Building a system with system x86_64-linux
{ }

As an alternate approach for cases where avoiding the recursion isn't feasible, one can use specialArgs when invoking nixos/lib/eval-config.nix to pass a final value that cannot be overridden through the module system:
let
  configuration = { pkgs, forcedSystem, ... }: {
    options = builtins.trace "Building a system with system ${forcedSystem}" {};
  };
in
(import <nixpkgs/nixos/lib/eval-config.nix>) {
  modules = [ configuration ];
  system = "x86_64-linux";
  specialArgs = { forcedSystem = "x86_64-linux"; };
}

Related

How should I write a flake.nix in order to use nix develop instead of nix develop github:project#shell

NB: add --extra-experimental-features nix-command --extra-experimental-features flakes if you haven't enabled these experimental features in your nix configuration.
This repository proposes loading a shell this way:
nix develop github:informalsystems/cosmos.nix#cosmos-shell
It seems to work.
In order to see whether I've really understood how nix flakes work (I haven't), I am trying to write a flake.nix so that I only have to write
nix develop
There is a devShells field in the outputs of flake.nix in this repo. Note: devShells, not devShell. It is a collection of shells defined in configuration.nix.
My flake.nix:
{
  inputs = {
    nixpkgs.url = "github:nixos/nixpkgs/nixos-unstable"; # not useful for this question
    cosmos_inform_system.url = github:informalsystems/cosmos.nix;
    flake-utils.url = github:numtide/flake-utils;
  };
  outputs = { self, nixpkgs, cosmos_inform_system, flake-utils, ... }:
    flake-utils.lib.eachDefaultSystem (system:
      let
        shells = cosmos_inform_system.devShells;
      in {
        devShell.default = shells.cosmos-shell;
      });
}
warning: flake output attribute 'devShell' is deprecated; use 'devShells.<system>.default' instead
error: attribute 'cosmos-shell' missing
There is a cosmos-shell in configuration.nix under the devShells field, so I don't understand the error.
In order to remove the warning, I replace this line
devShell = shells.cosmos-shell;
by this line
devShells.${system}.default = shells.cosmos-shell;
error: flake attribute 'devShell.aarch64-linux' is not a derivation
I still have the warning.
Take a look at this flake.nix file for an example of how to do that.
nix flake check
works with that
{
  inputs = {
    nixpkgs.url = "github:nixos/nixpkgs/nixos-unstable";
    cosmos_inform_system.url = github:informalsystems/cosmos.nix;
    flake-utils.url = github:numtide/flake-utils;
  };
  outputs = { self, nixpkgs, cosmos_inform_system, flake-utils, ... }:
    flake-utils.lib.eachDefaultSystem (system:
      let
        shells = cosmos_inform_system.devShells;
      in {
        devShells.default = shells.${system}.cosmos-shell;
      }
    );
}
but
nix develop
error: hash mismatch in fixed-output derivation '/nix/store/yz9kjwjhszzpp7g4wnbxj43zwga1zzsy-ica-go-modules.drv':
specified: sha256-ykGo5TQ+MiFoeQoglflQL3x3VN2CQuyZCIiweP/c9lM=
got: sha256-D31e+G/7KAmF3Gkk4IOmU2g/eLlqkrkpwJa7CEjdaAk=
[1/158 built (1 failed)] building ica-go-modules (installPhase): installing
Nevertheless I have exactly the same problem with nix develop github:informalsystems/cosmos.nix#cosmos-shell
So I still believe my flake file does the same thing as nix develop github:informalsystems/cosmos.nix#cosmos-shell.

How is vendorSha256 computed?

I'm trying to understand how the vendorSha256 is calculated when using buildGoModule. In the nixpkgs manual, the only info I get is:
"vendorSha256: is the hash of the output of the intermediate fetcher derivation."
Is there a way I can calculate the vendorSha256 for a nix expression I'm writing? To take a specific example, how was the "sha256-Y4WM+o+5jiwj8/99UyNHLpBNbtJkKteIGW2P1Jd9L6M=" generated here:
{ lib, buildGoModule, fetchFromGitHub }:

buildGoModule rec {
  pname = "oapi-codegen";
  version = "1.6.0";

  src = fetchFromGitHub {
    owner = "deepmap";
    repo = pname;
    rev = "v${version}";
    sha256 = "sha256-doJ1ceuJ/gL9vlGgV/hKIJeAErAseH0dtHKJX2z7pV0=";
  };

  vendorSha256 = "sha256-Y4WM+o+5jiwj8/99UyNHLpBNbtJkKteIGW2P1Jd9L6M=";

  # Tests use network
  doCheck = false;

  meta = with lib; {
    description = "Go client and server OpenAPI 3 generator";
    homepage = "https://github.com/deepmap/oapi-codegen";
    license = licenses.asl20;
    maintainers = [ maintainers.j4m3s ];
  };
}
From the manual:
The function buildGoModule builds Go programs managed with Go modules.
It builds Go modules through a two-phase build:
An intermediate fetcher derivation. This derivation will be used to fetch all of the dependencies of the Go module.
A final derivation will use the output of the intermediate derivation to build the binaries and produce the final output.
You can see this in the output of nix-build when building the above expression. If you run:
nix-build -E 'with import <nixpkgs> { }; callPackage ./yourExpression.nix { }'
you will see these two lines near the top of the output:
these 2 derivations will be built:
/nix/store/j13s3dvlwz5w9xl5wbhkcs7lrkgksv3l-oapi-codegen-1.6.0-go-modules.drv
/nix/store/4wyj1d9f2m0521nlkjgr6al0wfz12yjn-oapi-codegen-1.6.0.drv
The first derivation will be used to fetch all dependencies for your Go module, and the second will be used to build your actual module. So vendorSha256 is the hash of the output of that first derivation.
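Incidentally, these sha256-… strings are SRI hashes: the algorithm name plus the base64-encoded raw digest. (For a derivation output, Nix hashes the NAR serialization of the store path, not a plain file, so this is an illustration of the format only, using nothing beyond the Python standard library.)

```python
import base64
import hashlib

def to_sri(data: bytes) -> str:
    """Format a SHA-256 digest the way Nix prints it: 'sha256-<base64 digest>'."""
    digest = hashlib.sha256(data).digest()
    return "sha256-" + base64.b64encode(digest).decode()

# Illustration with arbitrary bytes, not a real vendor tree:
print(to_sri(b"hello"))  # sha256-LPJNul+wow4m6DsqxbninhsWHlwfp0JecwQzYpOLmCQ=
```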
When you write a Nix expression to build a Go module, you don't know that hash in advance. You only know it once the first derivation has been realised (its dependencies downloaded and hashed). However, you can use Nix's hash validation to find out the value of vendorSha256.
Modify your Nix expression like so:
{ lib, buildGoModule, fetchFromGitHub }:

buildGoModule rec {
  pname = "oapi-codegen";
  version = "1.6.0";

  src = fetchFromGitHub {
    owner = "deepmap";
    repo = pname;
    rev = "v${version}";
    sha256 = "sha256-doJ1ceuJ/gL9vlGgV/hKIJeAErAseH0dtHKJX2z7pV0=";
  };

  vendorSha256 = lib.fakeSha256;

  # Tests use network
  doCheck = false;

  meta = with lib; {
    description = "Go client and server OpenAPI 3 generator";
    homepage = "https://github.com/deepmap/oapi-codegen";
    license = licenses.asl20;
    maintainers = [ maintainers.j4m3s ];
  };
}
The only difference is that vendorSha256 now has the value lib.fakeSha256, which is just a fake/wrong sha256 hash. Nix will try to build the first derivation and will check the hash of the dependencies against this value. Since they will not match, an error will occur:
error: hash mismatch in fixed-output derivation '/nix/store/j13s3dvlwz5w9xl5wbhkcs7lrkgksv3l-oapi-codegen-1.6.0-go-modules.drv':
specified: sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
got: sha256-Y4WM+o+5jiwj8/99UyNHLpBNbtJkKteIGW2P1Jd9L6M=
error: 1 dependencies of derivation '/nix/store/4wyj1d9f2m0521nlkjgr6al0wfz12yjn-oapi-codegen-1.6.0.drv' failed to build
So this answers your question: the value of vendorSha256 you need is sha256-Y4WM+o+5jiwj8/99UyNHLpBNbtJkKteIGW2P1Jd9L6M=. Copy it into your file and you're good to go!

How to define string based on host os in bazel rule definition?

I have the following rule definition:
helm_action = rule(
    attrs = {
        …
        "cluster_aliases": attr.string_dict(
            doc = "key value pair matching for creating a cluster alias where the name used to evoke a cluster alias is different than the actual cluster's name",
            default = DEFAULT_CLUSTER_ALIASES,
        ),
        …
    },
    …
)
I'd like for DEFAULT_CLUSTER_ALIASES value to be based on the host os but
DEFAULT_CLUSTER_ALIASES = {
    "local": select({
        "@platforms//os:osx": "docker-desktop",
        "@platforms//os:linux": "minikube",
    }),
}
errors with:
Error in string_dict: expected value of type 'string' for dict value element, but got select({"@platforms//os:osx": "docker-desktop", "@platforms//os:linux": "minikube"}) (select)
How do I go about defining DEFAULT_CLUSTER_ALIASES based on the host os?
Judging from https://github.com/bazelbuild/bazel/issues/2045, selecting based on host os is not possible.
When you create a rule or macro, it is evaluated during the loading phase, before command-line flags are evaluated. Bazel needs to know the default value in your build rule helm_action during the loading phase, but it can't, because it hasn't yet parsed the command line or analysed the build graph.
The command line is parsed and select statements are evaluated during the analysis phase. As a broad rule, if your select statement isn't in a BUILD.bazel file, it's not going to work. So the easiest way to achieve what you are after is to create a macro that uses your rule, injecting the default. e.g.
# helm_action.bzl

# Add an '_' prefix to your rule to make the rule private.
_helm_action = rule(
    attrs = {
        …
        "cluster_aliases": attr.string_dict(
            doc = "key value pair matching for creating a cluster alias where the name used to evoke a cluster alias is different than the actual cluster's name",
            # Remove the default attribute.
        ),
        …
    },
    …
)
# Wrap your rule in a publicly exported macro.
def helm_action(name, **kwargs):
    _helm_action(
        name = name,
        # Instantiate your rule with a select. Note: for this to type-check,
        # DEFAULT_CLUSTER_ALIASES must now be a select over whole dicts, e.g.
        # select({"@platforms//os:osx": {"local": "docker-desktop"}, ...}).
        cluster_aliases = DEFAULT_CLUSTER_ALIASES,
        **kwargs
    )
It's important to note the difference between a macro and a rule. A macro is a way of generating a set of targets using other build rules, and actually expands out to roughly the equivalent of its contents when used in a BUILD file. You can check this by querying a target with the --output build flag. e.g.
load(":helm_action.bzl", "helm_action")

helm_action(
    name = "foo",
    # ...
)
You can query the output using the command:
bazel query //:foo --output build
This will demonstrate that the select statement is being copied into the BUILD file.
A good example of this approach is in the rules_docker repository.
EDIT: The question was clarified, so I've got an updated answer below but will keep the above answer in case it is useful to others.
A simple way of achieving what you are after is to use Bazel's toolchain API. This is a very flexible API and is what most language rulesets in Bazel use. e.g.
Create a BUILD file with your toolchains:
# //helm:BUILD.bazel
load(":helm_toolchains.bzl", "helm_toolchain")

toolchain_type(name = "toolchain_type")

helm_toolchain(
    name = "osx",
    cluster_aliases = {
        "local": "docker-desktop",
    },
)

toolchain(
    name = "osx_toolchain",
    toolchain = ":osx",
    toolchain_type = ":toolchain_type",
    exec_compatible_with = ["@platforms//os:macos"],
    # Optionally use to restrict target platforms too.
    # target_compatible_with = []
)

helm_toolchain(
    name = "linux",
    cluster_aliases = {
        "local": "minikube",
    },
)

toolchain(
    name = "linux_toolchain",
    toolchain = ":linux",
    toolchain_type = ":toolchain_type",
    exec_compatible_with = ["@platforms//os:linux"],
)
Register your toolchains so that Bazel knows what to look for:
# //:WORKSPACE
# the rest of your workspace...
register_toolchains("//helm:all")
# You may need to register your execution platforms too...
# register_execution_platforms("//your_platforms/...")
Implement the toolchain backend:
# //helm:helm_toolchains.bzl

HelmToolchainInfo = provider(fields = ["cluster_aliases"])

def _helm_toolchain_impl(ctx):
    toolchain_info = platform_common.ToolchainInfo(
        helm_toolchain_info = HelmToolchainInfo(
            cluster_aliases = ctx.attr.cluster_aliases,
        ),
    )
    return [toolchain_info]

helm_toolchain = rule(
    implementation = _helm_toolchain_impl,
    attrs = {
        "cluster_aliases": attr.string_dict(),
    },
)
Update helm_action to use toolchains. e.g.
def _helm_action_impl(ctx):
    cluster_aliases = ctx.toolchains["@your_repo//helm:toolchain_type"].helm_toolchain_info.cluster_aliases
    # ...

helm_action = rule(
    _helm_action_impl,
    attrs = {
        # …
    },
    toolchains = ["@your_repo//helm:toolchain_type"],
)

Nix maintain original order of a set

Assumptions:
You have yq and nix installed on your OS running NixOS or some Linux distro.
Question:
Can nix maintain the original ordering of a set? i.e. If I create a sample.nix file:
{ pkgs }:
let
  dockerComposeConfig = {
    version = "1.0";
    services = {
      srv1 = { name = "srv1"; };
      srv2 = { name = "srv2"; };
    };
  };
in pkgs.writeTextFile {
  name = "docker-compose.json";
  text = builtins.toJSON dockerComposeConfig;
}
When I build and convert the output to yaml below, I notice that the set has been alphabetized by Nix. Is there a workaround that keeps my JSON in the same order as intended by a Docker user, such that the dockerComposeConfig attributes remain in the order they were created?
# Cmd1
nix-build -E "with import <nixpkgs> {}; callPackage ./sample.nix {}"
# Cmd2
cat /nix/store/SOMEHASH-docker-compose.json | yq r - --prettyPrint
Nix attribute sets don't have an ordering to their attributes and they are represented as a sorted array in memory. Canonicalizing values helps with reproducibility.
If it's really important, you could write a function that turns a list of key-value pairs into a JSON object as a Nix string. But that's not going to be as easy to use as builtins.toJSON. I'd consider the JSON as "compiled build output" and not worry too much about aesthetics.
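A minimal sketch of that idea (hand-rolled and only a starting point; note that nested attribute sets passed through builtins.toJSON are still sorted):

```nix
let
  # Render an ordered list of { name, value } pairs as a JSON object string.
  # Order is preserved because Nix lists, unlike attribute sets, keep the
  # order in which their elements were written.
  toOrderedJSON = pairs:
    "{" + builtins.concatStringsSep "," (map
      (p: "${builtins.toJSON p.name}:${builtins.toJSON p.value}")
      pairs) + "}";
in
  toOrderedJSON [
    { name = "version"; value = "1.0"; }
    { name = "services"; value = { srv1 = { name = "srv1"; }; }; }
  ]
```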
Side note: Semantically, they are not even created in any order. The Nix language is declarative: a Nix expression (excluding derivations) describes something that is, not how to create it, although it may be defined in terms of functions.
This is necessary for Nix's laziness to be effective.

Why doesn't `sudo nixos-rebuild` show the output of trace?

I expect this here
let config_ = lib.debug.showVal (config); in
....
systemd = import ./systemd { inherit pkgs; config = config_; };
to show the content of config. Why don't I see it?
$ sudo nixos-rebuild dry-build --show-trace
building the system configuration...
these derivations will be built:
/nix/store/g24yj8lzz2zg921daibfbj2yz5933fwn-hubstaff-1.3.0-9b2ba62.drv
/nix/store/hps81xprfk0b4lhq8z2vycn1jq4ds841-system-path.drv
/nix/store/1s689dqbl45g094mnd5sjzdh44wrd6g5-dbus-1.drv
/nix/store/wqhr5z2f7l0a49fxb4arkwagb1iwmkx4-unit-dbus.service.drv
.....
/nix/store/8r03578gxmk2plvxn4p0jbj8aal63vc6-lightdm.conf.drv
/nix/store/i2ikmkxhgyns0ylj17cw2yv0v82m0lfh-etc.drv
/nix/store/kwqbq4mmim8ph4i3zjbsi5hhwjr6qkg7-nixos-system-machine-18.03.131954.2569e482904.drv
That version of the systemd/default.nix function doesn't evaluate the config attribute of its argument, so it will not evaluate your debugging function either. The Nix language evaluates by need.
To print config, make sure it gets evaluated. A function that may help you is builtins.seq: it evaluates its first argument, but returns the second. Try this at the top of your file:
{ config, pkgs, ... }:

with pkgs;

builtins.seq (lib.debug.showVal config) {
  imports = [
    /* etcetera */
  ];
}
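As a minimal illustration of builtins.seq on its own (evaluable with nix-instantiate --eval):

```nix
# builtins.seq forces evaluation of its first argument, then returns the
# second, so the trace message is printed even though the traced value
# is otherwise unused.
builtins.seq
  (builtins.trace "config was evaluated here" null)
  "the actual result"
```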
