Nix maintain original order of a set - docker

Assumptions:
You have yq and nix installed on your OS, whether that's NixOS or some other Linux distro.
Question:
Can nix maintain the original ordering of a set? For example, if I create a sample.nix file:
{ pkgs }:
let
  dockerComposeConfig = {
    version = "1.0";
    services = {
      srv1 = { name = "srv1"; };
      srv2 = { name = "srv2"; };
    };
  };
in pkgs.writeTextFile {
  name = "docker-compose.json";
  text = builtins.toJSON dockerComposeConfig;
}
When I build it and convert the output to YAML as below, I notice that the set has been alphabetized by Nix. Is there a workaround that keeps my JSON in the ordering intended by a Docker user, so that the dockerComposeConfig attributes remain in the order they were created?
# Cmd1
nix-build -E "with import <nixpkgs> {}; callPackage ./sample.nix {}"
# Cmd2
cat /nix/store/SOMEHASH-docker-compose.json | yq r - --prettyPrint

Nix attribute sets don't have an ordering to their attributes and they are represented as a sorted array in memory. Canonicalizing values helps with reproducibility.
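For illustration (a quick check in nix repl, not part of the original answer), attribute names come back sorted regardless of the order in which they were written:
nix-repl> builtins.attrNames { version = "1.0"; services = { }; }
[ "services" "version" ]
nix-repl> builtins.toJSON { version = "1.0"; services = { }; }
"{\"services\":{},\"version\":\"1.0\"}"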
If it's really important you could write a function that turns a list of key value pairs into a JSON object as a Nix string. But that's not going to be easy to use like builtins.toJSON. I'd consider the JSON as "compiled build output" and not worry too much about aesthetics.
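If you do go down that route, here is a minimal sketch of such a function (not from the original answer; it only handles string values plus nested objects given as pair lists, and relies on builtins.toJSON for escaping):
let
  # Render an ordered list of { name, value } pairs as a JSON object string,
  # preserving list order. Nested objects are themselves given as pair lists.
  toOrderedJSON = pairs:
    "{"
    + builtins.concatStringsSep "," (map
        (p: builtins.toJSON p.name + ":"
            + (if builtins.isList p.value
               then toOrderedJSON p.value
               else builtins.toJSON p.value))
        pairs)
    + "}";
in toOrderedJSON [
  { name = "version"; value = "1.0"; }
  { name = "services"; value = [
      { name = "srv1"; value = [ { name = "name"; value = "srv1"; } ]; }
      { name = "srv2"; value = [ { name = "name"; value = "srv2"; } ]; }
  ]; }
]
This evaluates to {"version":"1.0","services":{"srv1":{"name":"srv1"},"srv2":{"name":"srv2"}}}, i.e. the keys stay in the order you listed them.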
Side note: Semantically, they are not even created in any order. The Nix language is declarative: a Nix expression (excluding derivations) describes something that is, not how to create it, although it may be defined in terms of functions.
This is necessary for Nix's laziness to be effective.

Related

How should I write a flake.nix in order to use nix develop instead of nix develop github:project#shell

NB: add --extra-experimental-features nix-command --extra-experimental-features flakes if you haven't enabled the experimental features in nix.
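As an aside (not part of the original post), the same features can be enabled permanently, for example in ~/.config/nix/nix.conf:
# enable the flake-related experimental features for every nix invocation
experimental-features = nix-command flakes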
This repository proposes loading a shell this way:
nix develop github:informalsystems/cosmos.nix#cosmos-shell
It seems to work.
In order to see whether I've really understood how nix flakes work (I haven't), I am trying to write a flake.nix so that I only have to run
nix develop
There is a devShells field in the outputs of flake.nix in this repo (note: devShells, not devShell). It is a collection of shells defined in configuration.nix.
My flake.nix:
{
  inputs = {
    nixpkgs.url = "github:nixos/nixpkgs/nixos-unstable"; # not useful for this question
    cosmos_inform_system.url = github:informalsystems/cosmos.nix;
    flake-utils.url = github:numtide/flake-utils;
  };
  outputs = { self, nixpkgs, cosmos_inform_system, flake-utils, ... }:
    flake-utils.lib.eachDefaultSystem (system:
      let
        shells = cosmos_inform_system.devShells;
      in {
        devShell.default = shells.cosmos-shell;
      });
}
warning: flake output attribute 'devShell' is deprecated; use 'devShells.<system>.default' instead
error: attribute 'cosmos-shell' missing
There is a cosmos-shell under the devShells field in configuration.nix, so I don't understand the error.
In order to remove the warning, I replace this line
devShell = shells.cosmos-shell;
by this line
devShells.${system}.default = shells.cosmos-shell;
error: flake attribute 'devShell.aarch64-linux' is not a derivation
I still have the warning.
Take a look at this flake.nix file for an example of how to do that.
nix flake check
works with this flake.nix:
{
  inputs = {
    nixpkgs.url = "github:nixos/nixpkgs/nixos-unstable";
    cosmos_inform_system.url = github:informalsystems/cosmos.nix;
    flake-utils.url = github:numtide/flake-utils;
  };
  outputs = { self, nixpkgs, cosmos_inform_system, flake-utils, ... }:
    flake-utils.lib.eachDefaultSystem (system:
      let
        shells = cosmos_inform_system.devShells;
      in {
        devShells.default = shells.${system}.cosmos-shell;
      }
    );
}
but
nix develop
error: hash mismatch in fixed-output derivation '/nix/store/yz9kjwjhszzpp7g4wnbxj43zwga1zzsy-ica-go-modules.drv':
specified: sha256-ykGo5TQ+MiFoeQoglflQL3x3VN2CQuyZCIiweP/c9lM=
got: sha256-D31e+G/7KAmF3Gkk4IOmU2g/eLlqkrkpwJa7CEjdaAk=
[1/158 built (1 failed)] building ica-go-modules (installPhase): installing
Nevertheless, I get exactly the same hash-mismatch error with nix develop github:informalsystems/cosmos.nix#cosmos-shell, so I still think my flake file is equivalent to that command.

Infinite recursion when referring to pkgs.system from Nix module options section

The following is a minimal reproducer for an infinite recursion error when building a nixos configuration:
(import <nixpkgs/nixos>) {
  configuration = { pkgs, ... }: {
    options = builtins.trace "Building a system with system ${pkgs.system}" {};
  };
  system = "x86_64-linux";
}
When evaluated it fails as follows, unless the reference to pkgs.system is removed:
$ nix-build
error: infinite recursion encountered
at /Users/charles/.nix-defexpr/channels/nixpkgs/lib/modules.nix:496:28:
495| builtins.addErrorContext (context name)
496| (args.${name} or config._module.args.${name})
| ^
497| ) (lib.functionArgs f);
If we look at the implementation of nixos/lib/eval-config.nix:33, we see that the value passed for the system argument is set as an overridable default in pkgs. Does this mean we can't access it until later in the evaluation process?
(In the real-world use case, I'm introspecting a flake -- investigating someFlake.packages.${pkgs.system} to find packages for which to generate configuration options.)
This has been cross-posted to NixOS Discourse; see https://discourse.nixos.org/t/accessing-target-system-when-building-options-for-a-module/
In order for the module system to construct the configuration, it needs to know which config and options items exist, at least to the degree necessary to produce the root attribute set of the configuration.
The loop is as follows:
1. Evaluate the attribute names in config
2. Evaluate the attribute names of the options
3. Evaluate pkgs (your code)
4. Evaluate config._module.args.pkgs (the definition of the pkgs module argument)
5. Evaluate the attribute names in config (loop)
It can be broken by removing or reducing the dependency on pkgs.
For instance, you could define your "dynamic" options as type = attrsOf foo instead of enumerating each item from your flake as individual options.
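A minimal sketch of that idea (the option name and surrounding module are illustrative, not from the question):
{ lib, ... }: {
  # A single attrsOf option: the set of option names no longer depends on pkgs,
  # so the module system can enumerate options without forcing the pkgs fixpoint.
  options.someFlakePackages = lib.mkOption {
    type = lib.types.attrsOf lib.types.package;
    default = { };
    description = "Packages picked out of someFlake, keyed by name.";
  };
}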
Another potential solution is to move the option definitions into a submodule. A submodule without attrsOf as in attrsOf (submodule x) is generally quite useless, but it may create a necessary indirection that separates your dynamic pkgs-dependent options from the module fixpoint that has pkgs.
(import <nixpkgs/nixos>) {
  configuration = { pkgs, lib, ... }: {
    options.foo = lib.mkOption {
      type = lib.types.submodule {
        options = builtins.trace "Building a system with system ${pkgs.system}" { };
      };
      default = { };
    };
  };
  system = "x86_64-linux";
}
nix-repl> config.foo
trace: Building a system with system x86_64-linux
{ }
As an alternate approach for cases where avoiding recursion isn't feasible, one can use specialArgs in invoking nixos/lib/eval-config.nix to pass a final value not capable of being overridden through the module system:
let
  configuration = { pkgs, forcedSystem, ... }: {
    options = builtins.trace "Building a system with system ${forcedSystem}" {};
  };
in
(import <nixpkgs/nixos/lib/eval-config.nix>) {
  modules = [ configuration ];
  system = "x86_64-linux";
  specialArgs = { forcedSystem = "x86_64-linux"; };
}

How is vendorSha256 computed?

I'm trying to understand how the vendorSha256 is calculated when using buildGoModule. In the nixpkgs manual the only info I get is:
"vendorSha256: is the hash of the output of the intermediate fetcher derivation."
Is there a way I can calculate the vendorSha256 for a nix expression I'm writing? To take a specific example, how was the "sha256-Y4WM+o+5jiwj8/99UyNHLpBNbtJkKteIGW2P1Jd9L6M=" generated here:
{ lib, buildGoModule, fetchFromGitHub }:

buildGoModule rec {
  pname = "oapi-codegen";
  version = "1.6.0";

  src = fetchFromGitHub {
    owner = "deepmap";
    repo = pname;
    rev = "v${version}";
    sha256 = "sha256-doJ1ceuJ/gL9vlGgV/hKIJeAErAseH0dtHKJX2z7pV0=";
  };

  vendorSha256 = "sha256-Y4WM+o+5jiwj8/99UyNHLpBNbtJkKteIGW2P1Jd9L6M=";

  # Tests use network
  doCheck = false;

  meta = with lib; {
    description = "Go client and server OpenAPI 3 generator";
    homepage = "https://github.com/deepmap/oapi-codegen";
    license = licenses.asl20;
    maintainers = [ maintainers.j4m3s ];
  };
}
From the manual:
The function buildGoModule builds Go programs managed with Go modules.
It builds Go modules through a two-phase build:
An intermediate fetcher derivation. This derivation will be used to fetch all of the dependencies of the Go module.
A final derivation will use the output of the intermediate derivation to build the binaries and produce the final output.
You can see this in the output of nix-build when you build the above expression. If you run:
nix-build -E 'with import <nixpkgs> { }; callPackage ./yourExpression.nix { }'
you see the first 2 lines of output:
these 2 derivations will be built:
/nix/store/j13s3dvlwz5w9xl5wbhkcs7lrkgksv3l-oapi-codegen-1.6.0-go-modules.drv
/nix/store/4wyj1d9f2m0521nlkjgr6al0wfz12yjn-oapi-codegen-1.6.0.drv
The first derivation will be used to fetch all dependencies for your Go module, and the second will be used to build your actual module. So vendorSha256 is the hash of the output of that first derivation.
When you write a Nix expression to build a Go module you don't know that hash in advance. You only know it once the first derivation has been realised (the dependencies downloaded and the hash computed from them). However, you can use Nix's hash validation to find out the value of vendorSha256.
Modify your Nix expression like so:
{ lib, buildGoModule, fetchFromGitHub }:

buildGoModule rec {
  pname = "oapi-codegen";
  version = "1.6.0";

  src = fetchFromGitHub {
    owner = "deepmap";
    repo = pname;
    rev = "v${version}";
    sha256 = "sha256-doJ1ceuJ/gL9vlGgV/hKIJeAErAseH0dtHKJX2z7pV0=";
  };

  vendorSha256 = lib.fakeSha256;

  # Tests use network
  doCheck = false;

  meta = with lib; {
    description = "Go client and server OpenAPI 3 generator";
    homepage = "https://github.com/deepmap/oapi-codegen";
    license = licenses.asl20;
    maintainers = [ maintainers.j4m3s ];
  };
}
The only difference is that vendorSha256 now has the value lib.fakeSha256, which is just a fake/wrong sha256 hash. Nix will try to build the first derivation and will check the hash of the dependencies against this value. Since they will not match, an error will occur:
error: hash mismatch in fixed-output derivation '/nix/store/j13s3dvlwz5w9xl5wbhkcs7lrkgksv3l-oapi-codegen-1.6.0-go-modules.drv':
specified: sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
got: sha256-Y4WM+o+5jiwj8/99UyNHLpBNbtJkKteIGW2P1Jd9L6M=
error: 1 dependencies of derivation '/nix/store/4wyj1d9f2m0521nlkjgr6al0wfz12yjn-oapi-codegen-1.6.0.drv' failed to build
So this answers your question: the value of vendorSha256 you need is sha256-Y4WM+o+5jiwj8/99UyNHLpBNbtJkKteIGW2P1Jd9L6M=. Copy it into your file and you're good to go!

Conditionally create a Bazel rule based on --config

I'm working on a problem in which I only want to create a particular rule if a certain Bazel config has been specified (via '--config'). We have been using Bazel since 0.11 and have a bunch of build infrastructure that works around former limitations in Bazel. I am incrementally porting us up to newer versions. One of the features that was missing was compiler transitions, and so we rolled our own using configs and some external scripts.
My first attempt at solving my problem looks like this:
load("@rules_cc//cc:defs.bzl", "cc_library")

# use this with a select to pick targets to include/exclude based on config
# see __build_if_role for an example
def noop_impl(ctx):
    pass

noop = rule(
    implementation = noop_impl,
    attrs = {
        "deps": attr.label_list(),
    },
)

def __sanitize(config):
    if len(config) > 2 and config[:2] == "//":
        config = config[2:]
    return config.replace(":", "_").replace("/", "_")

def build_if_config(**kwargs):
    config = kwargs['config']
    kwargs.pop('config')
    name = kwargs['name'] + '_' + __sanitize(config)
    binary_target_name = kwargs['name']
    kwargs['name'] = binary_target_name
    cc_library(**kwargs)
    noop(
        name = name,
        deps = select({
            config: [ binary_target_name ],
            "//conditions:default": [],
        })
    )
This almost gets me there, but the problem is that if I want to build a library as an output, then it becomes an intermediate dependency, and therefore gets deleted or never built.
For example, if I do this:
build_if_config(
    name = "some_lib",
    srcs = [ "foo.c" ],
    config = "//:my_config",
)
and then I run
bazel build --config my_config //:some_lib
Then libsome_lib.a does not make it to bazel-out, although if I define it using cc_library, then it does.
Is there a way that I can just create the appropriate rule directly in the macro instead of creating a noop rule and using a select? Or another mechanism?
Thanks in advance for your help!
As I noted in my comment, I was misunderstanding how Bazel figures out its dependencies. The "create a file" section of the Rules Tutorial explains some of the details, and I followed along here for some of my solution.
Basically, the problem was not that the built files were not sticking around; it was that they were never getting built. Bazel did not know to look in the deps variable and build those things: it seems I had to create an action which uses the deps, and then register its output by returning a (list of) DefaultInfo.
Below is my new noop_impl function:
def noop_impl(ctx):
    if len(ctx.attr.deps) == 0:
        return None

    # ctx.attr has the attributes of this rule
    dep = ctx.attr.deps[0]

    # DefaultInfo is apparently some sort of globally available
    # class that can be used to index Target objects
    infile = dep[DefaultInfo].files.to_list()[0]

    outfile = ctx.actions.declare_file('lib' + ctx.label.name + '.a')
    ctx.actions.run_shell(
        inputs = [infile],
        outputs = [outfile],
        command = "cp %s %s" % (infile.path, outfile.path),
    )

    # we can also instantiate a DefaultInfo to indicate what output
    # we provide
    return [DefaultInfo(files = depset([outfile]))]

How can I build custom rules using the output of workspace_status_command?

The bazel build flag --workspace_status_command supports calling a script to retrieve e.g. repository metadata; this is also known as build stamping and is available in rules like java_binary.
I'd like to create a custom rule using this metadata.
I want to use this for a common support function. It should receive the git version and some other attributes and create a version.go output file usable as a dependency.
So I started a journey looking at rules in various bazel repositories.
Rules like rules_docker support stamping with stamp in container_image and let you reference the status output in attributes.
rules_go supports it in the x_defs attribute of go_binary.
This would be ideal for my purpose and I dug in...
It looks like I can get what I want with ctx.actions.expand_template using the entries in ctx.info_file or ctx.version_file as a dictionary for substitutions. But I didn't figure out how to get a dictionary of those files. And those two files seem to be "unofficial", they are not part of the ctx documentation.
Building on what I found out already: How do I get a dict based on the status command output?
If that's not possible, what is the shortest/simplest way to access workspace_status_command output from custom rules?
I've been exactly where you are and I ended up following the path you've started exploring. I generate a JSON description, which also includes information collected from git, to package with the result, and I ended up doing something like this:
def _build_mft_impl(ctx):
    args = ctx.actions.args()
    args.add('-f')
    args.add(ctx.info_file)
    args.add('-i')
    args.add(ctx.files.src)
    args.add('-o')
    args.add(ctx.outputs.out)
    ctx.actions.run(
        outputs = [ctx.outputs.out],
        inputs = ctx.files.src + [ctx.info_file],
        arguments = [args],
        progress_message = "Generating manifest: " + ctx.label.name,
        executable = ctx.executable._expand_template,
    )

def _get_mft_outputs(src):
    return {"out": src.name[:-len(".tmpl")]}

build_manifest = rule(
    implementation = _build_mft_impl,
    attrs = {
        "src": attr.label(mandatory=True,
                          allow_single_file=[".json.tmpl", ".json_tmpl"]),
        "_expand_template": attr.label(default=Label("//:expand_template"),
                                       executable=True,
                                       cfg="host"),
    },
    outputs = _get_mft_outputs,
)
//:expand_template is a label in my case pointing to a py_binary performing the transformation itself. I'd be happy to learn about a better (more native, fewer hops) way of doing this, but (for now) I went with it because it works. A few comments on the approach and your concerns:
AFAIK you cannot read the file in (and perform operations on its contents) in Skylark itself...
...speaking of which, it's probably not a bad thing to keep the transformation (tool) and build description (bazel) separate anyways.
It could be debated what constitutes the official documentation, but while ctx.info_file may not appear in the reference manual, it is documented in the source tree. :) Which is the case for other areas as well (and I hope that is not because those interfaces are considered not yet committed to).
For the sake of completeness, in src/main/java/com/google/devtools/build/lib/skylarkbuildapi/SkylarkRuleContextApi.java there is:
@SkylarkCallable(
    name = "info_file",
    structField = true,
    documented = false,
    doc =
        "Returns the file that is used to hold the non-volatile workspace status for the "
            + "current build request."
)
public FileApi getStableWorkspaceStatus() throws InterruptedException, EvalException;
EDIT: a few extra details, as asked for in the comments.
In my workspace_status.sh I would have for instance the following line:
echo STABLE_GIT_REF $(git log -1 --pretty=format:%H)
In my .json.tmpl file I would then have:
"ref": "${STABLE_GIT_REF}",
I've opted for shell like notation of text to be replaced, since it's intuitive for many users as well as easy to match.
As for the replacement, the relevant portion of the actual code (CLI handling kept out of this) would be:
import re

def get_map(val_file):
    """
    Return a dictionary of key/value pairs from ``val_file``.
    """
    value_map = {}
    for line in val_file:
        (key, value) = line.split(' ', 1)
        value_map.update(((key, value.rstrip('\n')),))
    return value_map

def expand_template(val_file, in_file, out_file):
    """
    Read each line from ``in_file`` and write it to ``out_file`` replacing all
    ${KEY} references with values from ``val_file``.
    """
    def _substitute_variable(mobj):
        return value_map[mobj.group('var')]

    re_pat = re.compile(r'\${(?P<var>[^} ]+)}')
    value_map = get_map(val_file)
    for line in in_file:
        out_file.write(re_pat.subn(_substitute_variable, line)[0])
EDIT2: This is how I expose the Python script to the rest of Bazel:
py_binary(
    name = "expand_template",
    main = "expand_template.py",
    srcs = ["expand_template.py"],
    visibility = ["//visibility:public"],
)
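The expand_template.py entry point itself is not shown above; a hypothetical sketch of the CLI wrapper (flag and variable names are assumptions inferred from the -f/-i/-o arguments built in _build_mft_impl, and it is assumed to live in the same file as get_map/expand_template):
import argparse

def main():
    # -f: workspace status file (ctx.info_file), -i: .json.tmpl template, -o: expanded output
    parser = argparse.ArgumentParser(description="Expand ${KEY} references using a workspace status file.")
    parser.add_argument('-f', dest='values', required=True)
    parser.add_argument('-i', dest='infile', required=True)
    parser.add_argument('-o', dest='outfile', required=True)
    args = parser.parse_args()
    with open(args.values) as val_file, open(args.infile) as in_file, open(args.outfile, 'w') as out_file:
        expand_template(val_file, in_file, out_file)

if __name__ == '__main__':
    main()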
Building on Ondrej's answer, I now use something like this (adapted in the SO editor, might contain small errors):
tools/bazel.rc:
build --workspace_status_command=tools/workspace_status.sh
tools/workspace_status.sh:
echo STABLE_GIT_REF $(git rev-parse HEAD)
version.bzl:
_VERSION_TEMPLATE_SH = """
set -e -u -o pipefail
while read line; do
  export "${line% *}"="${line#* }"
done <"$INFILE" \
  && cat <<EOF >"$OUTFILE"
{ "ref": "${STABLE_GIT_REF}"
, "service": "${SERVICE_NAME}"
}
EOF
"""

def _commit_info_impl(ctx):
    ctx.actions.run_shell(
        outputs = [ctx.outputs.outfile],
        inputs = [ctx.info_file],
        progress_message = "Generating version file: " + ctx.label.name,
        command = _VERSION_TEMPLATE_SH,
        env = {
            'INFILE': ctx.info_file.path,
            'OUTFILE': ctx.outputs.outfile.path,
            'SERVICE_NAME': ctx.attr.service,
        },
    )

commit_info = rule(
    implementation = _commit_info_impl,
    attrs = {
        'service': attr.string(
            mandatory = True,
            doc = 'name of versioned service',
        ),
    },
    outputs = {
        'outfile': 'manifest.json',
    },
)
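A hypothetical BUILD usage of this rule (target and service names are illustrative, not from the answer):
load("//:version.bzl", "commit_info")

# produces manifest.json containing the stamped git ref and the service name
commit_info(
    name = "commit_info",
    service = "my-service",
)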
