NixOS: how to import some configuration from GitLab? infinite recursion encountered

I'm trying to deploy a NixOS VM while storing its configuration on a private GitLab repository.
My configuration.nix looks like this (simplified to only include the relevant bits):
{ pkgs, ... }:
let
  repo = pkgs.fetchFromGitLab { owner = "hectorj"; repo = "nix-fleet"; };
in {
  imports = [
    ./hardware-configuration.nix
    "${repo}/my-server-name/host.nix"
  ];
}
but it is giving me this error:
error: infinite recursion encountered
at /nix/var/nix/profiles/per-user/root/channels/nixos/lib/modules.nix:496:28:
495| builtins.addErrorContext (context name)
496| (args.${name} or config._module.args.${name})
| ^
497| ) (lib.functionArgs f);
I do not understand where the recursion is happening.
It doesn't seem like it's even fetching the repo, as I can put any nonexistent names in the args and get the same error.
I saw https://nixos.org/guides/installing-nixos-on-a-raspberry-pi.html doing something similar without issue:
imports = ["${fetchTarball "https://github.com/NixOS/nixos-hardware/archive/936e4649098d6a5e0762058cb7687be1b2d90550.tar.gz" }/raspberry-pi/4"];
And I can use that line on my VM and it will build fine.
What am I missing?

The recursion is as follows:
1. Compute the configuration
2. Compute the config fixpoint of all modules
3. Find all modules
4. Compute "${repo}/my-server-name/host.nix"
5. Compute repo (pkgs.fetchFromGitLab ...)
6. Compute pkgs
7. Compute config._module.args.pkgs (Nixpkgs can be configured by NixOS options)
8. Compute the configuration (= step 1)
You can break the recursion at step 6 by using builtins.fetchTarball instead.
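For example, a minimal sketch of that option (the GitLab archive URL and branch name are assumptions, and a private repository would additionally need credentials, e.g. via a netrc file, for the fetch to succeed):
{ ... }:
let
  # Assumed archive URL; pin whatever ref you actually want.
  repo = builtins.fetchTarball
    "https://gitlab.com/hectorj/nix-fleet/-/archive/main/nix-fleet-main.tar.gz";
in {
  imports = [
    ./hardware-configuration.nix
    "${repo}/my-server-name/host.nix"
  ];
}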
Alternatively, you can break it around step 7 by using a different "pkgs".
If you're using configuration.nix as part of a larger expression, you may be able to pass an invoked Nixpkgs to NixOS via specialArgs.pkgs2 = import nixpkgs { ... }. This creates a pkgs2 module argument that can't be configured by NixOS itself.
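A minimal sketch of that route, assuming you evaluate the configuration yourself with eval-config.nix (the pkgs2 name mirrors the example below and is purely illustrative):
import <nixpkgs/nixos/lib/eval-config.nix> {
  system = builtins.currentSystem;
  modules = [ ./configuration.nix ];
  # pkgs2 is passed as a special module argument, so it is available before
  # the module fixpoint (and thus the imports) is computed.
  specialArgs.pkgs2 = import <nixpkgs> { };
}
Your configuration.nix could then take { pkgs2, ... }: and call pkgs2.fetchFromGitLab when building its imports.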
Otherwise, you could define pkgs2 in a let binding:
{ pkgs, ... }:
let
  # pkgs2: An independent Nixpkgs that can be constructed before the NixOS
  # imports are resolved.
  pkgs2 = import <nixpkgs> {};
  repo = pkgs2.fetchFromGitLab { owner = "hectorj"; repo = "nix-fleet"; };
in {
  imports = [
    ./hardware-configuration.nix
    "${repo}/my-server-name/host.nix"
  ];
}

Related

What is ${modulesPath} in configuration.nix?

In /etc/nixos/configuration.nix, I have this code
{ lib, pkgs, config, modulesPath, ... }:
with lib;
let
  nixos-wsl = import ./nixos-wsl;
in
{
  imports = [
    "${modulesPath}/profiles/minimal.nix"
    nixos-wsl.nixosModules.wsl
  ];
I would like to know what ${modulesPath} is.
I have tried it in a shell:
echo ${modulesPath}
Nothing.
I have tried to print it in a nix interpreter.
nix repl
${modulesPath}
error: syntax error, unexpected DOLLAR_CURLY
modulesPath
error: undefined variable 'modulesPath'
Nothing there either.
Does somebody know what that is and, more generally, how to get the value of such a "Nix constant"?
Update
I missed something important:
I have to import it in nix repl like this.
nix repl
{modulesPath}: modulesPath
«lambda # (string):1:1»
It says that it is a lambda. I thought it would give a string value.
Quoting from the nixpkgs source:
For NixOS, specialArgs includes modulesPath, which allows you to import extra modules from the nixpkgs package tree without having to somehow make the module aware of the location of the nixpkgs or NixOS directories.
{ modulesPath, ... }: {
  imports = [
    (modulesPath + "/profiles/minimal.nix")
  ];
}
This is performed in nixos/lib/eval-config-minimal.nix, as follows:
lib.evalModules {
  inherit prefix modules;
  specialArgs = {
    modulesPath = builtins.toString ../modules;
  } // specialArgs;
};
Because this is done in <nixpkgs>/nixos/lib, ../modules becomes <nixpkgs>/nixos/modules.
$ nix repl
Welcome to Nix 2.8.1. Type :? for help.
nix-repl> "${toString <nixpkgs>}/nixos/modules/profiles/minimal.nix"
"/nix/store/qdblsqzrzarf9am35r6nqnvlsl7dammk-source/nixos/modules/profiles/minimal.nix"
...run this on your own machine, and you'll get a directory that exists for you.

Install Custom Dependency for KFP Op

I'm trying to setup a simple KubeFlow pipeline, and I'm having trouble packaging up dependencies in a way that works for KubeFlow.
The code simply downloads a config file and parses it, then passes back the parsed configuration.
However, in order to parse the config file, it needs to have access to another internal python package.
I have a .tar.gz archive of the package hosted on a bucket in the same project, and added the URL of the package as a dependency, but I get an error message saying tarfile.ReadError: not a gzip file.
I know the file is good, so it's some intermediate issue with hosting on a bucket or the way kubeflow installs dependencies.
Here is a minimal example:
from kfp import compiler
from kfp import dsl
from kfp.components import func_to_container_op
from google.protobuf import text_format
from google.cloud import storage
import os
import training_reader


def get_training_config(working_bucket: str,
                        working_directory: str,
                        config_file: str) -> training_reader.TrainEvalPipelineConfig:
    # download_file is a helper defined elsewhere that fetches the object from the bucket.
    download_file(working_bucket, os.path.join(working_directory, config_file), "ssd.config")
    pipeline_config = training_reader.TrainEvalPipelineConfig()
    with open("ssd.config", 'r') as f:
        text_format.Merge(f.read(), pipeline_config)
    return pipeline_config


config_op_packages = ["https://storage.cloud.google.com/my_bucket/packages/training-reader-0.1.tar.gz",
                      "google-cloud-storage",
                      "protobuf"]

training_config_op = func_to_container_op(get_training_config,
                                          base_image="tensorflow/tensorflow:1.15.2-py3",
                                          packages_to_install=config_op_packages)


def output_config(config: training_reader.TrainEvalPipelineConfig) -> None:
    print(config)


output_config_op = func_to_container_op(output_config)


@dsl.pipeline(
    name='Post Training Processing',
    description='Building the post-processing pipeline'
)
def ssd_postprocessing_pipeline(
        working_bucket: str,
        working_directory: str,
        config_file: str):
    config = training_config_op(working_bucket, working_directory, config_file)
    output_config_op(config.output)


pipeline_name = ssd_postprocessing_pipeline.__name__ + '.zip'
compiler.Compiler().compile(ssd_postprocessing_pipeline, pipeline_name)
The https://storage.cloud.google.com/my_bucket/packages/training-reader-0.1.tar.gz URL requires authentication. Try to download it in Incognito mode and you'll see the login page instead of the file.
Changing the URL to https://storage.googleapis.com/my_bucket/packages/training-reader-0.1.tar.gz works for public objects, but your object is not public.
The only thing you can do (if you cannot make the package public) is to use the google.cloud.storage library or the gsutil program to download the file from the bucket and then install it manually using subprocess.run([sys.executable, '-m', 'pip', 'install', ...]).
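A rough sketch of that workaround (the bucket and object names are taken from your example; running this at the top of the component function before importing training_reader is an assumption about your setup):
import subprocess
import sys

from google.cloud import storage

def install_private_package(bucket_name: str, blob_name: str) -> None:
    # Download the sdist from the bucket using authenticated GCS access...
    local_path = "/tmp/training-reader-0.1.tar.gz"
    storage.Client().bucket(bucket_name).blob(blob_name).download_to_filename(local_path)
    # ...then install it into the running container's environment.
    subprocess.run([sys.executable, "-m", "pip", "install", local_path], check=True)

install_private_package("my_bucket", "packages/training-reader-0.1.tar.gz")
import training_reader  # importable only after the install above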
Where are you downloading the data from?
What's the purpose of
pipeline_config = training_reader.TrainEvalPipelineConfig()
with open("ssd.config", 'r') as f:
    text_format.Merge(f.read(), pipeline_config)
return pipeline_config
Why not just do the following:
def get_training_config(
    working_bucket: str,
    working_directory: str,
    config_file: str,
    output_config_path: OutputPath('TrainEvalPipelineConfig'),  # OutputPath comes from kfp.components
):
    download_file(working_bucket, os.path.join(working_directory, config_file), output_config_path)
Regarding "the way kubeflow installs dependencies":
Export your component to a loadable component.yaml and you'll see how KFP Lightweight components install dependencies:
training_config_op = func_to_container_op(
    get_training_config,
    base_image="tensorflow/tensorflow:1.15.2-py3",
    packages_to_install=config_op_packages,
    output_component_file='component.yaml',
)
P.S. Some small pieces of info:
@dsl.pipeline(
Not required unless you want to use the dsl-compile command-line program
pipeline_name = ssd_postprocessing_pipeline.__name__ + '.zip'
compiler.Compiler().compile(ssd_postprocessing_pipeline, pipeline_name)
Did you know that you can just call kfp.Client(host=...).create_run_from_pipeline_func(ssd_postprocessing_pipeline, arguments={}) to run the pipeline right away?
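For instance (the host URL and argument values below are placeholders):
import kfp

client = kfp.Client(host='https://<your-kfp-endpoint>')
client.create_run_from_pipeline_func(
    ssd_postprocessing_pipeline,
    arguments={'working_bucket': 'my_bucket',
               'working_directory': 'configs',
               'config_file': 'ssd.config'})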

How to build a NixOps deployment on hydra

Deploying my NixOps machines takes a lot of time, as packages need to be built. I want to do the building regularly on my trusted private Hydra instance.
My current approach involves this release.nix file, but it doesn't work out so well.
{ nixpkgs ? <nixpkgs>, onlySystem ? true, extraModules ? [] }:
let
  nixos = import "${nixpkgs}/nixos";
  buildEnv = conf: (nixos {
    configuration = conf;
  });
  buildTarget = m: let build = buildEnv (buildConf m); in
    if onlySystem then build.system else build.vm;
  buildConf = module: { ... }:
    {
      imports = [ module ] ++ extraModules;
    };
in
{
  machine1 = buildTarget ./machine1/configuration.nix;
  machine2 = buildTarget ./machine2/configuration.nix;
  machine3 = buildTarget ./machine3/configuration.nix;
  machine4 = buildTarget ./machine4/configuration.nix;
}
I don't really understand this code, as I copied it from here.
This builds fine if I run nix-build release.nix locally, but on Hydra I never get a full build. Sometimes builds don't dequeue (they just never get built), sometimes they fail with various error messages. As none of the Hydra problems is reproducible (besides the fact that I never get a full build), I wonder what the best practice for building a NixOps deployment is.
Please note, that I have unfree packages in my deployment. The option nixpkgs.config.allowUnfree = true; is set on the hydra server.
This is not a question about my hydra failures, but about what would be a good way to build a NixOps deployment with Hydra CI.
As far as I know, there is no way to make this super easy. Your code looks OK, but my method is slightly different: I only build the toplevel attribute and I construct the NixOS configuration differently.
I build NixOS 'installations' from inside Nix using something like:
let
  modules = [ ./configuration.nix ];
  nixosSystem = import (pkgs.path + "/nixos/lib/eval-config.nix") {
    inherit (pkgs) system;
    inherit modules;
  };
in
nixosSystem.config.system.build.toplevel
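Put together, a Hydra-friendly release.nix along those lines might look like the following sketch (not a definitive implementation; the machine layout just mirrors your example):
{ nixpkgs ? <nixpkgs> }:
let
  pkgs = import nixpkgs { };
  # Evaluate one NixOS configuration and return its toplevel derivation.
  buildToplevel = module:
    (import (pkgs.path + "/nixos/lib/eval-config.nix") {
      inherit (pkgs) system;
      modules = [ module ];
    }).config.system.build.toplevel;
in
{
  machine1 = buildToplevel ./machine1/configuration.nix;
  machine2 = buildToplevel ./machine2/configuration.nix;
}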

How to import nixos config and merge it with nixops deployment expression

I'm in the process of learning how to use Nix/NixOs/NixOps, and I'm having trouble refactoring a simple NixOps deployment.
My starting point is this working vbox-all.nix file:
{
  server =
    { config, pkgs, ... }:
    {
      # deployment-specific config
      deployment.targetEnv = "virtualbox";
      deployment.virtualbox.memorySize = 1024; # megabytes
      deployment.virtualbox.vcpu = 2; # number of cpus
      # postgres-specific config
      services.postgresql.enable = true;
      services.postgresql.package = pkgs.postgresql96;
      # htop-specific config
      environment.systemPackages =
        [
          pkgs.htop
        ];
    };
}
Running nixops create ./vbox.nix -d mydeployment and then nixops deploy -d mydeployment works perfectly: I get a VirtualBox machine with Postgres 9.6 running and htop installed.
Now, having all of this in one file does not seem to be a good idea for long term maintenance.
Here is the file layout I think I want:
.
├── configuration-all.nix # forms a NixOs config with htop, postgres, etc.
├── htop.nix # NixOs config of just htop
├── postgres.nix # NixOs config of just Postgres
└── vbox-all.nix # NixOps config for virtualbox with htop, postgres, etc.
The idea being that vbox-all.nix imports configuration-all.nix which imports all services/packages/conf I might want (currently postgres and htop).
That's what I cannot get to work.
Here is my configuration-all.nix:
{ config, pkgs, ... }:
{
  imports = [ ./postgres.nix ./htop.nix ];
}
Here is ./postgres.nix:
{ config, pkgs, ... }:
{
  services.postgresql.enable = true;
  services.postgresql.package = pkgs.postgresql96;
}
I think you can guess the content of ./htop.nix, and it doesn't really matter anyway.
And finally, my modified vbox-all.nix:
{
  server =
    { config, pkgs, ... }:
    with (pkgs.callPackage ./configuration-all.nix { });
    {
      # deployment-specific config
      deployment.targetEnv = "virtualbox";
      deployment.virtualbox.memorySize = 1024; # megabytes
      deployment.virtualbox.vcpu = 2; # number of cpus
    };
}
}
When I re-run nixops deploy -d mydeployment, I don't get any errors, but the resulting VM has neither Postgres nor htop.
I must be fundamentally misunderstanding either with or callPackage. To my mind it should: execute the function defined in ./configuration-all.nix (auto-filling all its arguments) and merge the resulting expression with my "deployment-specific config".
I tried a few things like: replacing pkgs.callPackage with import (still no error, but still no good), using inherit (pkgs.callPackage ./configuration-all.nix { }) instead of with, etc. but so far no dice.
I must be missing something small and probably obvious...
Here is the final working vbox-all.nix I figured out while writing my question:
{
  server =
    {
      imports = [ ./configuration-all.nix ];
      # deployment-specific config
      deployment.targetEnv = "virtualbox";
      deployment.virtualbox.memorySize = 1024; # megabytes
      deployment.virtualbox.vcpu = 2; # number of cpus
    };
}
Thanks SO, you're a good rubber duck.
I still need to understand why my other attempts with with and inherit did not work, so don't hesitate to comment or post an alternative answer. I have a lot to learn.
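For what it's worth, a minimal sketch of why the with attempt could not work (this is general Nix behaviour, not NixOps-specific): with only brings attributes into lexical scope for the expression that follows; it never merges them into the attribute set you return, so the deployment only ever saw the deployment.* options:
nix-repl> with { a = 1; }; { b = 2; }
{ b = 2; }
The a attribute was in scope but is not part of the result; importing the file as a module, as in the working version above, is what actually merges the configuration.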

Buildbot nightly build is failing to checkout the branch

I am using Buildbot for building and testing Chromium source. We maintain a local repository of the Chromium source, managed with Gerrit. Buildbot gets triggered successfully when a change is checked in.
I am facing a problem with nightly builds: I want Buildbot to fetch the source and build even when there is no change, but it fails with the following error:
===Running git checkout --force origin/ ===
error: pathspec 'origin/' did not match any file(s) known to git.
===Failed in 0.0. mins===
Can somebody tell me how to get this working?
My master.cfg is as below:
from master import master_utils
from master import slaves_list
from buildbot.schedulers import timed
from buildbot.changes import filter
import config
import master_site_config

ActiveMaster = master_site_config.Chromium
c = BuildmasterConfig = {}
c['change_source'] = []
c['schedulers'] = []
c['status'] = []
pendingRequests = {}
c['builders'] = []

import master_source_cfg
import master_full_cfg

master_source_cfg.Update(config, ActiveMaster, c)
master_full_cfg.Update(config, ActiveMaster, c)

c['logCompressionLimit'] = False
c['projectName'] = ActiveMaster.project_name
c['projectURL'] = config.Master.project_url
c['buildbotURL'] = 'build.chromium.org/p/chromium/'

slaves = slaves_list.SlavesList('slaves.cfg', 'Chromium')

# Trying to find the location of adding factories to the builder
from twisted.python import log
for builder in c['builders']:
    log.msg('My BUILDER', builder)
    log.msg(dir(builder['factory']))
    builder['slavenames'] = slaves.GetSlavesName(builder=builder['name'])

c['slaves'] = master_utils.AutoSetupSlaves(
    c['builders'],
    config.Master.GetBotPassword(),
    missing_recipients=['buildbot@chromium-build-health.appspotmail.com'])
master_utils.VerifySetup(c, slaves)
master_utils.AutoSetupMaster(c, ActiveMaster,
                             enable_http_status_push=ActiveMaster.is_production_host)
c['buildHorizon'] = 3000
c['logHorizon'] = 3000
c['eventHorizon'] = 200
I had missed adding the last part of master.cfg, which contains the configuration for the nightly build:
c['schedulers'].append(
    timed.Nightly(name='nightly',
                  branch='src',
                  change_filter=filter.ChangeFilter(project_re='pj/Sample/chromium.*',
                                                    branch=['buildbot-testing1']),
                  builderNames=['Linux x64'],
                  hour=8,
                  minute=15))
With the above configuration, the nightly gets triggered at the correct time.
However, it is not checking out the 'buildbot-testing1' branch as needed.
Instead, it exits with the following error:
===Running git checkout --force origin/ ===
error: pathspec 'origin/' did not match any file(s) known to git.
===Failed in 0.0. mins===
