NixOS service systemd unit's $PATH does not contain expected dependency - nix

I have the following definition:
hello123 =
  (pkgs.writeScriptBin "finderapp" ''
    #!${pkgs.stdenv.shell}
    # Call hello with a traditional greeting
    ls ${pkgs.ffmpeg-full}/bin/ffmpeg
    ffmpeg --help
    echo hello
  ''
  );
And the service:
systemd.services = {
  abcxyz = {
    enable = true;
    description = "abcxyz";
    serviceConfig = {
      WorkingDirectory = "%h/temp/";
      Type = "simple";
      ExecStart = "${hello123}/bin/finderapp";
      Restart = "always";
      RestartSec = 60;
    };
    wantedBy = [ "default.target" ];
  };
};
However, the service fails to execute ffmpeg:
Jul 10 19:47:54 XenonKiloCranberry systemd[1]: Started abcxyz.
Jul 10 19:47:54 XenonKiloCranberry finderapp[10042]: /nix/store/9yx9s5yjc6ywafadplblzdfaxqimz95w-ffmpeg-full-4.2.3/bin/ffmpeg
Jul 10 19:47:54 XenonKiloCranberry finderapp[10042]: /nix/store/bxfwljbpvl21wsba00z5dm9dmshsk3bx-finderapp/bin/finderapp: line 5: ffmpeg: command not found
Jul 10 19:47:54 XenonKiloCranberry finderapp[10042]: hello
Why is this failing? I assume it's correctly getting ffmpeg as a runtime dependency (verified with nix-store -q --references ...) as stated in another question here: https://stackoverflow.com/a/68330101/1663462
If I add an echo $PATH to the script, it outputs the following:
Jul 10 19:53:44 XenonKiloCranberry finderapp[12011]: /nix/store/x0jla3hpxrwz76hy9yckg1iyc9hns81k-coreutils-8.31/bin:/nix/store/97vambzyvpvrd9wgrrw7i7svi0s8vny5-findutils-4.7.0/bin:/nix/store/srmjkp5pq8c055j0lak2hn0ls0fis8yl-gnugrep-3.4/bin:/nix/store/p34p7ysy84579lndk7rbrz6zsfr03y71-gnused-4.8/bin:/nix/store/vfzp1mavwiz5w3v10hs69962k0gwl26c-systemd-243.7/bin:/nix/store/x0jla3hpxrwz76hy9yckg1iyc9hns81k-coreutils-8.31/sbin:/nix/store/97vambzyvpvrd9wgrrw7i7svi0s8vny5-findutils-4.7.0/sbin:/nix/store/srmjkp5pq8c055j0lak2hn0ls0fis8yl-gnugrep-3.4/sbin:/nix/store/p34p7ysy84579lndk7rbrz6zsfr03y71-gnused-4.8/sbin:/nix/store/vfzp1mavwiz5w3v10hs69962k0gwl26c-systemd-243.7/sbin
That is, these paths:
/nix/store/x0jla3hpxrwz76hy9yckg1iyc9hns81k-coreutils-8.31/bin
/nix/store/97vambzyvpvrd9wgrrw7i7svi0s8vny5-findutils-4.7.0/bin
/nix/store/srmjkp5pq8c055j0lak2hn0ls0fis8yl-gnugrep-3.4/bin
/nix/store/p34p7ysy84579lndk7rbrz6zsfr03y71-gnused-4.8/bin
/nix/store/vfzp1mavwiz5w3v10hs69962k0gwl26c-systemd-243.7/bin
/nix/store/x0jla3hpxrwz76hy9yckg1iyc9hns81k-coreutils-8.31/sbin
/nix/store/97vambzyvpvrd9wgrrw7i7svi0s8vny5-findutils-4.7.0/sbin
/nix/store/srmjkp5pq8c055j0lak2hn0ls0fis8yl-gnugrep-3.4/sbin
/nix/store/p34p7ysy84579lndk7rbrz6zsfr03y71-gnused-4.8/sbin
/nix/store/vfzp1mavwiz5w3v10hs69962k0gwl26c-systemd-243.7/sbin
Which shows that ffmpeg is not in there.

I don't think this is the most elegant solution, as the dependencies have to be known in the service definition instead of in the package/derivation, but it's a solution nonetheless.
We can add additional paths with path = [ pkgs.ffmpeg-full ];:
abcxyz = {
  enable = true;
  description = "abcxyz";
  path = [ pkgs.ffmpeg-full ];
  serviceConfig = {
    WorkingDirectory = "%h/temp/";
    Type = "simple";
    ExecStart = "${hello123}/bin/finderapp";
    Restart = "always";
    RestartSec = 60;
  };
  wantedBy = [ "default.target" ];
};
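If you would rather keep the dependency with the package instead of the service (the concern mentioned above), another option is to wrap the script so ffmpeg ends up on its PATH at build time. A minimal sketch, assuming the hello123 definition from the question; the finderappWrapped name is only for illustration, and symlinkJoin, makeWrapper and wrapProgram come from nixpkgs:
finderappWrapped = pkgs.symlinkJoin {
  name = "finderapp-wrapped";
  paths = [ hello123 ];
  nativeBuildInputs = [ pkgs.makeWrapper ];
  postBuild = ''
    # Prefix PATH so the script can find ffmpeg regardless of the unit's PATH.
    wrapProgram $out/bin/finderapp \
      --prefix PATH : ${pkgs.ffmpeg-full}/bin
  '';
};
The service's ExecStart would then point at "${finderappWrapped}/bin/finderapp", and no path = [ ... ] entry is needed.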

In addition to the previous answers:
- not using PATH at all (calling ffmpeg by its absolute store path)
- adding to PATH via the systemd config
you can add it to PATH inside the wrapper script itself, making the script more self-sufficient and making the extended PATH available to subprocesses, should ffmpeg or any other command need it (probably not in this case).
The ls command has no effect on subsequent commands, nor should it.
What you want is to add it to PATH:
hello123 =
  (pkgs.writeScriptBin "finderapp" ''
    #!${pkgs.stdenv.shell}
    # Call hello with a traditional greeting
    # Note: ''${ escapes the dollar-brace so Nix does not try to interpolate it.
    PATH="${pkgs.ffmpeg-full}/bin''${PATH:+:''${PATH}}"
    ffmpeg --help
    echo hello
  ''
  );
The shell expansion ${PATH:+:${PATH}} takes care of the colon and a pre-existing PATH, if there is one. The simplistic :${PATH} could effectively add . to PATH when PATH is empty, although that would be rare.
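Alternatively, if your nixpkgs is recent enough, pkgs.writeShellApplication can do this bookkeeping for you: everything listed in runtimeInputs is put on PATH by the generated script (which also gets shellcheck and bash strict mode). A hedged sketch of the same script:
hello123 = pkgs.writeShellApplication {
  name = "finderapp";
  # runtimeInputs end up on PATH inside the generated script.
  runtimeInputs = [ pkgs.ffmpeg-full ];
  text = ''
    ffmpeg --help
    echo hello
  '';
};
The resulting binary is still ${hello123}/bin/finderapp, so ExecStart does not change.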

Related

How to simulate generating a source-file in a Bazel action?

Suppose I am writing a custom Bazel rule for foo-compiler.
The user provides a list of source-files to the rule:
foo_library(
    name = "hello",
    srcs = [ "A.foo", "B.foo" ],
)
To build this without Bazel, the steps would be:
Create a config file config.json that lists the sources:
{
    "srcs": [ "./A.foo", "./B.foo" ]
}
Place the config alongside the sources:
$ ls .
A.foo
B.foo
config.json
Call foo-compiler in that directory:
$ foo-compiler .
Now, in my Bazel rule implementation I can declare a file like this:
config_file = ctx.actions.declare_file("config.json")
ctx.actions.write(
    output = config_file,
    content = json_for_srcs(ctx.files.srcs),
)
The file is created and it has the right content.
However, Bazel does not place config.json alongside the srcs.
Is there a way to tell Bazel where to place the file?
Or perhaps I need to copy each source-file alongside the config?
You can do this with ctx.actions.symlink e.g.
srcs = []
# Declare a symlink for each src file in the same directory as the declared
# config file, then write that symlink.
for f in ctx.files.srcs:
    src = ctx.actions.declare_file(f.basename)
    srcs.append(src)
    ctx.actions.symlink(
        output = src,
        target_file = f,
    )

config_file = ctx.actions.declare_file("config.json")
ctx.actions.write(
    output = config_file,
    content = json_for_srcs(ctx.files.srcs),
)

# Run compiler
ctx.actions.run(
    inputs = srcs + [config_file],
    outputs = [],  # TODO: up to you
    tools = [ctx.file.__compiler],  # TODO: update this to match your rule
    executable = ctx.file.__compiler.path,
    arguments = ["."],
    # ...
)
Note that when you return your provider, you should only return the result of your compilation, not the srcs. Otherwise, you'll likely run into problems with duplicate outputs.

Expand a Bazel rule output's directory into a flat output of another rule

I'm trying to package a bundle for uploading to Google Cloud. I have a pkg_web output from an Angular build; when I pass it into the custom rule I'm writing, it arrives as a File object that is a directory of files. The custom rule takes the app.yaml, etc., plus the bundle, and uploads them.
However, the bundle stays a directory, and I need the files of that directory expanded into the root of the command's working directory for uploading.
For example:
- bundle/index.html <-- bundle directory
- bundle/main.js
- app.yaml
and I need:
- index.html
- main.js
- app.yaml
My rule:
deploy(
    name = "deploy",
    srcs = [":bundle"],  # <-- pkg_web rule
    yaml = ":app.yaml",
)
Rule implementation:
def _deploy_pkg(ctx):
    inputs = []
    inputs.append(ctx.file.yaml)
    inputs.extend(ctx.files.srcs)
    script_template = """
#!/bin/bash
gcloud app deploy {yaml_path}
"""
    script_content = script_template.format(yaml_path = ctx.file.yaml.short_path)
    script = ctx.actions.declare_file("%s-deploy" % ctx.label.name)
    ctx.actions.write(script, script_content, is_executable = True)
    runfiles = ctx.runfiles(files = inputs, transitive_files = depset(ctx.files.srcs))
    return [DefaultInfo(executable = script, runfiles = runfiles)]
Thank you for your ideas!
It seems a bit excessive, but I ended up using custom shell commands to accomplish this:
def _deploy_pkg(ctx):
    inputs = []
    out = ctx.actions.declare_directory("out")
    yaml_out = ctx.actions.declare_file(ctx.file.yaml.basename)
    inputs.append(out)
    ctx.actions.run_shell(
        outputs = [yaml_out],
        inputs = [ctx.file.yaml],
        arguments = [ctx.file.yaml.path, yaml_out.path],
        progress_message = "Copying yaml to output directory.",
        command = "cp $1 $2",
    )
    for f in ctx.files.srcs:
        if f.is_directory:
            ctx.actions.run_shell(
                outputs = [out],
                inputs = [f],
                arguments = [f.path, out.path],
                progress_message = "Copying %s to output directory." % f.basename,
                command = "cp -a -R $1/* $2",
            )
        else:
            out_file = ctx.actions.declare_file(f.basename)
            inputs.append(out_file)
            ctx.actions.run_shell(
                outputs = [out_file],
                inputs = [f],
                arguments = [f.path, out_file.path],
                progress_message = "Copying %s to output directory." % f.basename,
                # This is what we're all about here. Just a simple 'cp' command.
                # Copy the input to CWD/f.basename, where CWD is the package where
                # the copy_filegroups_to_this_package rule is invoked.
                # (To be clear, the files aren't copied right to where your BUILD
                # file sits in source control. They are copied to the 'shadow tree'
                # parallel location under `bazel info bazel-bin`)
                command = "cp -a $1 $2",
            )
    ...

starting container process caused "exec: \"/tmp/installer.sh\": permission denied"

I have a base image (named @release_docker//image) and I'm trying to install some apt packages on it (alongside my built binary). Here is what it looks like:
load("#io_bazel_rules_docker//docker/package_managers:download_pkgs.bzl", "download_pkgs")
load("#io_bazel_rules_docker//docker/package_managers:install_pkgs.bzl", "install_pkgs")
download_pkgs(
name = "downloaded-packages",
image_tar = "#release_docker//image",
packages = [
"numactl",
"pciutils",
"python",
],
)
install_pkgs(
name = "installed-packages",
image_tar = "#release_docker//image",
installables_tar = ":downloaded-packages.tar",
output_image_name = "release_docker_with_packages"
)
cc_image(
name = "my-image",
base = ":installed-packages",
binary = ":built-binary",
)
But inside the build docker (a docker image which the build command runs in), when I run bazel build :my-image --action_env DOCKER_HOST=tcp://192.168.1.2:2375, it errors:
+ DOCKER=/usr/bin/docker
+ [[ -z /usr/bin/docker ]]
+ TO_JSON_TOOL=bazel-out/host/bin/external/io_bazel_rules_docker/docker/util/to_json
+ source external/io_bazel_rules_docker/docker/util/image_util.sh
++ bazel-out/host/bin/external/io_bazel_rules_docker/contrib/extract_image_id bazel-out/k8-fastbuild/bin/external/release_docker/image/image.tar
+ image_id=b55375fc9c651e1eff0428490d01b4883de0fca62b5b18e8ede9f3d812b3fc10
+ /usr/bin/docker load -i bazel-out/k8-fastbuild/bin/external/release_docker/image/image.tar
+++ pwd
+++ pwd
++ /usr/bin/docker run -d -v /opt/bazel-root-directory/...[path-to].../downloaded-packages.tar:/tmp/bazel-out/k8-fastbuild/bin/marzban/downloaded-packages.tar -v /opt/bazel-root-directory/...[path-to].../installed-packages.install:/tmp/installer.sh --privileged b55375fc9c651e1eff0428490d01b4883de0fca62b5b18e8ede9f3d812b3fc10 /tmp/installer.sh
/usr/bin/docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"/tmp/installer.sh\": permission denied": unknown.
+ cid=ce62e444aefe1f32a20575750a6ee1cc9c2f79d46f2f60187a8bc23f87b5aa25
I came across the same issue, and it took some time for me to find the actual cause.
As you conjectured, there is a bug in your version of the rules_docker repo. The actual problem is the assumption that a local folder can be directly mounted into the target image; obviously, that assumption fails in the case of DIND (Docker-In-Docker).
Fortunately, this bug has already been fixed by the commit "install_pkgs uses named volumes to work with DIND". As the title suggests, the fix is to use a named volume instead of a short-form -v src:dst bind mount.
So, the solution is to upgrade rules_docker to v0.13.0 or newer.
rules_docker$ git tag --contains 32f12766248bef88358fc1646a3e0a66efd0e502 | head -1
v0.13.0
I was running into the exact same problems as you. If you change "@release_docker//image" to "@release_docker//image:image.tar" it should work.
The rule requires a .tar file (the same format that docker save imageName produces). I didn't look into the code behind the rule, but I'd assume the image also needs access to apt.
Here is a working example
BUILD FILE
load(
    "@io_bazel_rules_docker//docker/package_managers:download_pkgs.bzl",
    "download_pkgs",
)
load(
    "@io_bazel_rules_docker//docker/package_managers:install_pkgs.bzl",
    "install_pkgs",
)

install_pkgs(
    name = "postgresPythonImage",
    image_tar = "@py3_image_base//image:image.tar",
    installables_tar = ":postgresql_pkgs.tar",
    output_image_name = "postgres_python_base",
)

download_pkgs(
    name = "postgresql_pkgs",
    image_tar = "@ubuntu1604//image:image.tar",
    packages = [
        "postgresql",
    ],
)
WORKSPACE
http_archive(
    name = "layer_definitions",
    strip_prefix = "layer-definitions-ade30bae7cb1a8c1fed70e18040936fad75de8a3",
    urls = ["https://github.com/GoogleCloudPlatform/layer-definitions/archive/ade30bae7cb1a8c1fed70e18040936fad75de8a3.tar.gz"],
    sha256 = "af72a1a804934ba154c97c43429ec556eeaadac70336f614ac123b7f5a5db299",
)

load("@layer_definitions//layers/ubuntu1604/base:deps.bzl", ubuntu1604_base_deps = "deps")

ubuntu1604_base_deps()

How to install systemd service on nixos

If I do this:
#!/usr/bin/env bash
set -e;
cd "$(dirname "$BASH_SOURCE")"
ln -sf "$(pwd)/interos-es-mdb.service" '/etc/systemd/system/interos-es-mdb.service'
systemctl enable interos-es-mdb.service
systemctl start interos-es-mdb.service
then I get this error:
ln: failed to create symbolic link '/etc/systemd/system/interos-es-mdb.service': Read-only file system
Does anyone know the right way to install a service on a NixOS machine? (I am the root user.) Here is the service for reference:
[Unit]
Description=Interos MongoDB+ES log capture
After=network.target
[Service]
Environment=interos_emit_only_json=yes
EnvironmentFile=/root/interos/env/es-service.env
StartLimitIntervalSec=0
Type=simple
Restart=always
RestartSec=1
ExecStart=/root/interos/repos/elastic-search-app/syslog-exec.sh
[Install]
WantedBy=multi-user.target
Update:
perhaps what I am looking for is a "per-user" service, not something run as root, etc.
The reason it's broken
NixOS is a declarative operating system. Directories like /etc/systemd/system are generated by NixOS and consist largely of symlinks into the read-only /nix/store, which only the nix-daemon may write to. Therefore, you must create a systemd.services.<yourservice> entry in your configuration.nix to interact with the underlying system; alternatively, you can patch nixpkgs directly and point your configuration to your fork.
All running services not declared explicitly by the user can be assumed to live inside nixpkgs/nixos/modules.
Fix
configuration.nix:
{
  systemd.services.foo = {
    enable = true;
    description = "bar";
    unitConfig = {
      # [Unit] section options go here, e.g. After = [ "network.target" ];
      # ...
    };
    serviceConfig = {
      # [Service] section options; Type belongs here, not in unitConfig.
      Type = "simple";
      ExecStart = "${foo}/bin/foo";
      # ...
    };
    wantedBy = [ "multi-user.target" ];
    # ...
  };
}
user services
almost identical, except they begin with systemd.user.services. In addition, user home directories are not managed declaratively, so you can also place a regular systemd unit file under $XDG_CONFIG_HOME/systemd/user (usually ~/.config/systemd/user) as usual.
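For the declarative per-user variant, the shape is the same as for a system service; a minimal sketch, where the unit name and ExecStart are placeholders:
{
  systemd.user.services.foo = {
    description = "bar (user service)";
    serviceConfig = {
      Type = "simple";
      ExecStart = "${foo}/bin/foo";
      Restart = "on-failure";
    };
    # default.target is the user session's counterpart of multi-user.target
    wantedBy = [ "default.target" ];
  };
}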
relevant:
Full list of valid attributes for systemd.services.<name>, From: NixOS Manual
Module basics, From: Wiki
An appropriate entry in your /etc/nixos/configuration.nix might look like:
let
  # assumes you build a derivation for your software and put it in
  # /etc/nixos/pkgs/interosEsMdb/default.nix
  interosEsMdb = import ./pkgs/interosEsMdb {};
in
{
  systemd.services.interosEsMdb = {
    description = "Interos MongoDB+ES log capture";
    after = [ "network.target" ];
    wantedBy = [ "multi-user.target" ];
    serviceConfig = {
      # change this to refer to your actual derivation
      ExecStart = "${interosEsMdb}/bin/syslog-exec.sh";
      EnvironmentFile = "${interosEsMdb}/lib/es-service.env";
      Restart = "always";
      RestartSec = 1;
    };
  };
}
...assuming you actually build a derivation for interosEsMdb (which is the only sane and proper way to package software on NixOS).
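For completeness, a minimal sketch of what such a derivation might look like; the source location and file layout are assumptions taken from the unit file in the question, not a tested package:
# /etc/nixos/pkgs/interosEsMdb/default.nix (hypothetical)
{ pkgs ? import <nixpkgs> {} }:

pkgs.stdenv.mkDerivation {
  name = "interos-es-mdb";
  # Using a path literal here imports the directory into the nix store.
  src = /root/interos/repos/elastic-search-app;
  dontBuild = true;
  installPhase = ''
    mkdir -p $out/bin $out/lib
    cp syslog-exec.sh $out/bin/syslog-exec.sh
    chmod +x $out/bin/syslog-exec.sh
    # es-service.env is assumed to live in the same source tree.
    cp es-service.env $out/lib/es-service.env
  '';
}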

How to get the name from a nixpkgs derivation in a nix expression to be used by nix-shell?

I'm writing a .nix expression to be used primarily by nix-shell. I'm not sure how to do that. Note this is not on NixOS, but I don't think that is very relevant.
In the particular example I'm looking at, I want to get the version-dependent name from a derivation that looks like this:
idea-ultimate = buildIdea rec {
  name = "idea-ultimate-${version}";
  version = "2017.2.2"; /* updated by script */
  description = "Integrated Development Environment (IDE) by Jetbrains, requires paid license";
  license = stdenv.lib.licenses.unfree;
  src = fetchurl {
    url = "https://download.jetbrains.com/idea/ideaIU-${version}-no-jdk.tar.gz";
    sha256 = "b8eb9d612800cc896eb6b6fbefbf9f49d92d2350ae1c3c4598e5e12bf93be401"; /* updated by script */
  };
  wmClass = "jetbrains-idea";
  update-channel = "IDEA_Release";
};
My nix expression is the following:
let
  pkgs = import <nixpkgs> {};
  stdenv = pkgs.stdenv;
  # idea_name = assert pkgs.jetbrains.idea-ultimate.name != ""; pkgs.jetbrains.idea-ultimate.name;
in rec {
  scalaEnv = stdenv.mkDerivation rec {
    name = "scala-env";
    builder = "./scala-build.sh";
    shellHook = ''
      alias cls=clear
    '';
    CLANG_PATH = pkgs.clang + "/bin/clang";
    CLANGPP_PATH = pkgs.clang + "/bin/clang++";
    # A bug in the nixpkgs openjdk (#29151) makes us resort to Zulu OpenJDK for IDEA:
    # IDEA_JDK = pkgs.openjdk + "/lib/openjdk";
    # PATH = "${pkgs.jetbrains.idea-ultimate}/${idea_name}/bin:$PATH";
    IDEA_JDK = /usr/lib/jvm/zulu-8-amd64;
    # IDEA_JDK = /opt/zulu8.23.0.3-jdk8.0.144-linux_x64;
    # IDEA_JDK = /usr/lib/jvm/java-8-openjdk-amd64;
    buildInputs = with pkgs; [
      ammonite
      boehmgc
      clang
      emacs
      jetbrains.idea-ultimate
      less
      libunwind
      openjdk
      re2
      sbt
      stdenv
      unzip
      zlib
    ];
  };
}
I have commented out setting PATH, because it depends on getting idea_name in the let clause. As an interesting side note: leaving it uncommented as written does not fail outright, but causes a very bizarre error when executing nix-shell about not being able to execute bash. I've also tried the simpler let idea_name = pkgs.jetbrains.idea-ultimate.name; but this fails later when idea_name is used to set PATH, since idea_name ends up being undefined.
Update:
I began exploring with nix-instantiate, but the derivation of interest seems empty:
[nix-shell:/nix/store]$ nix-instantiate --eval --xml -E "((import <nixpkgs> {}).callPackage ./3hk87pqgl2qdqmskxbhy23cyr24q8g6s-nixpkgs-18.03pre114739.d0d905668c/nixpkgs/pkgs/applications/editors/jetbrains { }).idea-ultimate";
<?xml version='1.0' encoding='utf-8'?>
<expr>
<derivation>
<repeated />
</derivation>
</expr>
If your intent is to get idea-ultimate into the nix-shell environment, then just include that package in buildInputs. I see it's already included, so it should already be present in your PATH.
BTW, you can extend your shellHook and export PATH and other variables from there, where you have full bash. Why would you do it from bash? Less copying. When you specify
IDEA_JDK = /usr/lib/jvm/zulu-8-amd64;
in Nix, the path /usr/lib/jvm/zulu-8-amd64 gets copied to the nix store, and IDEA_JDK is set to point to that copy in /nix/store. Was that your intent?
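If the goal was merely to point IDEA at the system JDK without importing it into the store, quoting the value keeps it a plain string that Nix passes through untouched, for example:
# A quoted string is exported verbatim; a bare path literal such as
# /usr/lib/jvm/zulu-8-amd64 is first copied into /nix/store.
IDEA_JDK = "/usr/lib/jvm/zulu-8-amd64";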
Regarding nix-instantiate:
$ nix-instantiate --eval -E 'with import <nixpkgs>{}; idea.pycharm-community.outPath'
"/nix/store/71jk0spr30rm4wsihjwbb1hcwwvzqr4k-pycharm-community-2017.1"
but you still have to strip the double quotes (https://gist.github.com/danbst/a9fc068ff26e31d88de9709965daa2bd)
Also, a nitpick: the assert pkgs.jetbrains.idea-ultimate.name != ""; can be dropped, as it's impossible to have an empty derivation name in Nix.
And another nitpick: you'll soon find it very inconvenient to launch the IDE from a shell every time. It seems like a good idea to specify that some package is used for development, but nix-shell doesn't work well for non-CLI applications, not to mention occasional problems with Nix GC and nix-shell. You'd be better off installing the IDE globally or per-user; it is the better long-term solution.
[ADDENDUM]
You are looking for this (dev-environment.nix):
with import <nixpkgs> { };

buildEnv {
  name = "my-super-dev-env";
  paths = [
    #emacs
    nano
    idea.pycharm-community
  ];
  buildInputs = [ makeWrapper ];
  postBuild = ''
    for f in $(ls -d $out/bin/*); do
      wrapProgram $f \
        --set IDEA_JDK "/path/to/zulu-jdk" \
        --set CLANG_PATH ... \
        --set CLANGPP_PATH ...
    done
  '';
}
which you install using nix-env -if ./dev-environment.nix. It will wrap your programs with those env vars without polluting your workspace (you can pollute it further using nix-shell with a shell hook, if you want).
