I have a base image (named @release_docker//image) and I'm trying to install some apt packages on it (alongside my built binary). Here is what it looks like:
load("@io_bazel_rules_docker//docker/package_managers:download_pkgs.bzl", "download_pkgs")
load("@io_bazel_rules_docker//docker/package_managers:install_pkgs.bzl", "install_pkgs")
load("@io_bazel_rules_docker//cc:image.bzl", "cc_image")

download_pkgs(
    name = "downloaded-packages",
    image_tar = "@release_docker//image",
    packages = [
        "numactl",
        "pciutils",
        "python",
    ],
)

install_pkgs(
    name = "installed-packages",
    image_tar = "@release_docker//image",
    installables_tar = ":downloaded-packages.tar",
    output_image_name = "release_docker_with_packages",
)

cc_image(
    name = "my-image",
    base = ":installed-packages",
    binary = ":built-binary",
)
But when I run bazel build :my-image --action_env DOCKER_HOST=tcp://192.168.1.2:2375 inside the build container (a Docker image in which the build command runs), it errors:
+ DOCKER=/usr/bin/docker
+ [[ -z /usr/bin/docker ]]
+ TO_JSON_TOOL=bazel-out/host/bin/external/io_bazel_rules_docker/docker/util/to_json
+ source external/io_bazel_rules_docker/docker/util/image_util.sh
++ bazel-out/host/bin/external/io_bazel_rules_docker/contrib/extract_image_id bazel-out/k8-fastbuild/bin/external/release_docker/image/image.tar
+ image_id=b55375fc9c651e1eff0428490d01b4883de0fca62b5b18e8ede9f3d812b3fc10
+ /usr/bin/docker load -i bazel-out/k8-fastbuild/bin/external/release_docker/image/image.tar
+++ pwd
+++ pwd
++ /usr/bin/docker run -d -v /opt/bazel-root-directory/...[path-to].../downloaded-packages.tar:/tmp/bazel-out/k8-fastbuild/bin/marzban/downloaded-packages.tar -v /opt/bazel-root-directory/...[path-to].../installed-packages.install:/tmp/installer.sh --privileged b55375fc9c651e1eff0428490d01b4883de0fca62b5b18e8ede9f3d812b3fc10 /tmp/installer.sh
/usr/bin/docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"/tmp/installer.sh\": permission denied": unknown.
+ cid=ce62e444aefe1f32a20575750a6ee1cc9c2f79d46f2f60187a8bc23f87b5aa25
I came across the same issue, and it took me some time to find the actual cause.
As you conjectured, there is a bug in your version of the rules_docker repo: it assumes that a local folder can be mounted directly into the target image. That assumption obviously fails in the case of DinD (Docker-in-Docker).
Fortunately, this bug has already been fixed by the commit "install_pkgs uses named volumes to work with DIND". As the title suggests, the fix is to use a named volume instead of a short-form -v src:dst bind mount.
So, the solution is to upgrade to v0.13.0 or newer, the first release containing that commit:
rules_docker$ git tag --contains 32f12766248bef88358fc1646a3e0a66efd0e502 | head -1
v0.13.0
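For reference, pinning the fixed release in the WORKSPACE might look like the sketch below. The URL and strip_prefix follow the usual rules_docker release layout, and the sha256 is a placeholder; verify both against the actual release page before using this.

```python
# Sketch only: pin rules_docker at v0.13.0, the first release containing the fix.
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "io_bazel_rules_docker",
    # Placeholder: substitute the real sha256 of the release archive.
    sha256 = "<sha256-of-release-archive>",
    strip_prefix = "rules_docker-0.13.0",
    urls = ["https://github.com/bazelbuild/rules_docker/releases/download/v0.13.0/rules_docker-v0.13.0.tar.gz"],
)
```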
I was running into the exact same problem as you. If you change "@release_docker//image" to "@release_docker//image:image.tar", it should work.
The rule requires a .tar file (the same format that docker save imageName produces). I didn't look into the code behind the rule, but I'd assume the image also needs access to apt.
Here is a working example
BUILD FILE
load(
    "@io_bazel_rules_docker//docker/package_managers:download_pkgs.bzl",
    "download_pkgs",
)
load(
    "@io_bazel_rules_docker//docker/package_managers:install_pkgs.bzl",
    "install_pkgs",
)

install_pkgs(
    name = "postgresPythonImage",
    image_tar = "@py3_image_base//image:image.tar",
    installables_tar = ":postgresql_pkgs.tar",
    output_image_name = "postgres_python_base",
)

download_pkgs(
    name = "postgresql_pkgs",
    image_tar = "@ubuntu1604//image:image.tar",
    packages = [
        "postgresql",
    ],
)
WORKSPACE
http_archive(
    name = "layer_definitions",
    strip_prefix = "layer-definitions-ade30bae7cb1a8c1fed70e18040936fad75de8a3",
    urls = ["https://github.com/GoogleCloudPlatform/layer-definitions/archive/ade30bae7cb1a8c1fed70e18040936fad75de8a3.tar.gz"],
    sha256 = "af72a1a804934ba154c97c43429ec556eeaadac70336f614ac123b7f5a5db299",
)

load("@layer_definitions//layers/ubuntu1604/base:deps.bzl", ubuntu1604_base_deps = "deps")

ubuntu1604_base_deps()
I want to run KubernetesPodOperator in Airflow that reads some file and send the content to XCOM.
Definition looks like:
read_file = DefaultKubernetesPodOperator(
    image = 'alpine:3.16',
    cmds = ['bash', '-cx'],
    arguments = ['cat file.json >> /airflow/xcom/return.json'],
    name = 'some-name',
    task_id = 'some_name',
    do_xcom_push = True,
    image_pull_policy = 'IfNotPresent',
)
but I am getting: INFO - stderr from command: cat: can't open '/***/xcom/return.json': No such file or directory
When I use ubuntu:22.04 it works, but I want to make it faster by using a smaller (Alpine) image. Why is it not working with Alpine, and how can I overcome that?
I am making a very short workflow in which I use a tool for my analysis called salmon.
On the HPC that I am working on, I cannot install this tool, so I decided to pull the container from biocontainers.
On the HPC we do not have Docker installed (I also do not have permission to install it), but we have Singularity instead.
So I have to pull the Docker container (from quay.io/biocontainers/salmon:1.2.1--hf69c8f4_0) using Singularity.
The workflow management system that I am working with is nextflow.
This is the short workflow I made (index.nf):
#!/usr/bin/env nextflow
nextflow.preview.dsl=2
container = 'quay.io/biocontainers/salmon:1.2.1--hf69c8f4_0'
shell = ['/bin/bash', '-euo', 'pipefail']
process INDEX {
    script:
    """
    salmon index \
        -t /hpc/genome/gencode.v39.transcripts.fa \
        -i index
    """
}
workflow {
INDEX()
}
I run it using this command:
nextflow run index.nf -resume
But got this error:
salmon: command not found
Do you know how I can fix the issue?
You are so close! All you need to do is move these directives into your nextflow.config or declare them at the top of your process body:
container = 'quay.io/biocontainers/salmon:1.2.1--hf69c8f4_0'
shell = ['/bin/bash', '-euo', 'pipefail']
My preference is to use a process selector to assign the container directive. So for example, your nextflow.config might look like:
process {
    shell = ['/bin/bash', '-euo', 'pipefail']

    withName: INDEX {
        container = 'quay.io/biocontainers/salmon:1.2.1--hf69c8f4_0'
    }
}

singularity {
    enabled = true
    // not strictly necessary, but highly recommended
    cacheDir = '/path/to/singularity/cache'
}
And your index.nf might then look like:
nextflow.enable.dsl=2

params.transcripts = '/hpc/genome/gencode.v39.transcripts.fa'

process INDEX {

    input:
    path fasta

    output:
    path 'index'

    """
    salmon index \\
        -t "${fasta}" \\
        -i index
    """
}

workflow {
    transcripts = file( params.transcripts )
    INDEX( transcripts )
}
If run using:
nextflow run -ansi-log false index.nf
You should see the following results:
N E X T F L O W ~ version 21.04.3
Launching `index.nf` [clever_bassi] - revision: d235de22c4
Pulling Singularity image docker://quay.io/biocontainers/salmon:1.2.1--hf69c8f4_0 [cache /path/to/singularity/cache/quay.io-biocontainers-salmon-1.2.1--hf69c8f4_0.img]
[8a/279df4] Submitted process > INDEX
I'd like to slim down a debian 10 Docker image using Bazel and then flatten the result into a single layer image.
Here's the code I have:
load("@io_bazel_rules_docker//container:container.bzl", "container_image", "container_flatten", "container_bundle", "container_import")
load("@io_bazel_rules_docker//docker/util:run.bzl", "container_run_and_commit")

container_run_and_commit(
    name = "debian10_slimmed_layers",
    commands = ["rm -rf /usr/share/man/*"],
    image = "@debian10//image",
)

# Create an image just so we can flatten it.
container_image(
    name = "debian10_slimmed_image",
    base = ":debian10_slimmed_layers",
)

# Flatten the layers to a single layer.
container_flatten(
    name = "debian10_flatten",
    image = ":debian10_slimmed_image",
)
Where I'm stuck is I can't figure out how to use the output of debian10_flatten to produce a runnable Docker image.
I tried:
container_image(
    name = "debian10",
    base = ":debian10_flatten",
)
That fails with:
2021/06/27 13:16:25 Unable to load docker image from bazel-out/k8-fastbuild/bin/debian10_flatten.tar:
file manifest.json not found in tar
container_flatten gives you the filesystem tarball. You need to add the tarball as tars in debian10, instead of base:

container_image(
    name = "debian10",
    tars = [":debian10_flatten.tar"],
)

base expects another container_image rule (or equivalent). If you had a docker save-style tarball, container_load would be the way to get the container_image equivalent.
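For completeness, the container_load path mentioned above could be sketched like this in the WORKSPACE; the repository name and tarball path here are illustrative, not from the original question:

```python
# Sketch: import a `docker save`-style tarball as a loadable base image (WORKSPACE).
load("@io_bazel_rules_docker//container:container.bzl", "container_load")

container_load(
    name = "saved_debian10",
    # Hypothetical path to a tarball produced by `docker save`.
    file = "//third_party:debian10_saved.tar",
)
```

Targets in that repository (e.g. @saved_debian10//image) can then be used wherever a container_image is expected.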
I figured this out looking at the implementation in container/flatten.bzl. The docs could definitely use some improvements if somebody wants to open a PR (they're generated from the python-style docstring in that .bzl file).
I'm trying to use nixos-generators to build an AMI in my Ubuntu machine which has nix package manager installed. I have this configuration:
$ cat configuration.nix
{ pkgs, ... }:
{
  imports = [ <nixpkgs/nixos/modules/virtualisation/amazon-image.nix> ];
  ec2.hvm = true;
  environment.systemPackages = with pkgs; [ git ];
}
And then I use this command to create an AMI:
$ nixos-generate -f amazon -c ./configuration.nix
these derivations will be built:
/nix/store/zc68psb6kxz9sxqr82bqs7x6c3vamnbd-nixos-amazon-image-20.09pre-git-x86_64-linux.drv
error: a 'x86_64-linux' with features {kvm} is required to build '/nix/store/zc68psb6kxz9sxqr82bqs7x6c3vamnbd-nixos-amazon-image-20.09pre-git-x86_64-linux.drv', but I am a 'x86_64-linux' with features {benchmark, big-parallel, nixos-test}
You can see the above error which I get. I do know that my system is
capable of running KVM virtual machines:
$ kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used
Based on reading the documentation, I populated nix.conf with this:
$ cat ~/.config/nix/nix.conf
system-features = "kvm"
And with that, I get this strange error:
$ nixos-generate -f amazon -c ./configuration.nix
these derivations will be built:
/nix/store/zc68psb6kxz9sxqr82bqs7x6c3vamnbd-nixos-amazon-image-20.09pre-git-x86_64-linux.drv
error: a 'x86_64-linux' with features {kvm} is required to build '/nix/store/zc68psb6kxz9sxqr82bqs7x6c3vamnbd-nixos-amazon-image-20.09pre-git-x86_64-linux.drv', but I am a 'x86_64-linux' with features {"kvm"}
Any idea on how to resolve this issue and build an AMI?
@patogonicus from IRC found that the issue was the double quotes in the nix.conf file:
system-features = kvm
And it worked fine with that configuration.
I'm trying to learn to write Nix expressions, and I thought about doing my own very simple "Hello World!" (as is tradition).
So I have my dir with only this default.nix file :
{ pkgs ? import <nixpkgs> {} }:

derivation {
  system = "x86_64-linux";
  name = "simple-bash-derivation-helloworld";
  builder = pkgs.bash;
  args = [ "-c" "echo 'Hello World' > $out" ];
}
Here is what I get when I try to build it:
nix-build
these derivations will be built:
/nix/store/3grmahx3ih4c50asj84p7xnpqpj32n5s-simple-bash-derivation-helloworld.drv
building path(s) ‘/nix/store/6psl3rc92311w37c1n6nj0a6jac16hv1-simple-bash-derivation-helloworld’
while setting up the build environment: executing ‘/nix/store/wb34dgkpmnssjkq7yj4qbjqxpnapq0lw-bash-4.4-p12’: Permission denied
builder for ‘/nix/store/3grmahx3ih4c50asj84p7xnpqpj32n5s-simple-bash-derivation-helloworld.drv’ failed with exit code 1
error: build of ‘/nix/store/3grmahx3ih4c50asj84p7xnpqpj32n5s-simple-bash-derivation-helloworld.drv’ failed
Removing the args line yields the same issue.
Why do I get a permission issue?
What would be the correct way to make a simple derivation just doing a bash echo?
Please note that this is a learning exercise: I do not want to use stdenv.mkDerivation here for example.
I am running nix-env (Nix) 1.11.9 on an Ubuntu 16.04 system.
Thanks in advance.
Try running the ls command on /nix/store/wb34dgkpmnssjkq7yj4qbjqxpnapq0lw-bash-4.4-p12 and you will see that it is a directory rather than an executable file: pkgs.bash interpolates to the $out of the bash derivation, which is the package's installation directory. If you want to refer to the bash binary itself, use:
builder = "${pkgs.bash}/bin/bash";
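Putting that fix into the original file, the complete default.nix would look like this (identical to the version above except for the builder line):

```nix
{ pkgs ? import <nixpkgs> {} }:

derivation {
  system = "x86_64-linux";
  name = "simple-bash-derivation-helloworld";
  # Point at the bash executable inside the package, not the package directory.
  builder = "${pkgs.bash}/bin/bash";
  args = [ "-c" "echo 'Hello World' > $out" ];
}
```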