Docker to Apptainer/Singularity image for a Julia application: manifest_usage.toml Read-only file system

I'm trying to create a Docker image that also works in Singularity for the software Whippet ( https://github.com/timbitz/Whippet.jl ).
The Docker image works fine, but when I try to use it with Singularity (singularity-ce version 3.9.5), it is not able to write manifest_usage.toml (which is understandable, since in Singularity the command is executed by the invoking user and not by root as in Docker).
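To illustrate the difference (a hedged check, using the image name that appears later in the question): the SIF image filesystem is mounted read-only, so any write into it fails:
singularity exec docker://cloxd/whippet:1.6.1 touch /depot/test
# expected failure: touch: cannot touch '/depot/test': Read-only file system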
Here's my Dockerfile:
FROM julia:bullseye
LABEL version=v1.6.1
RUN apt-get update && apt-get install -y git
RUN mkdir /depot
ENV JULIA_PATH=/usr/local/julia
ENV JULIA_DEPOT_PATH=/depot
ENV JULIA_PROJECT=/whippet
RUN mkdir /whippet/ && cd /whippet && \
git clone --depth 1 --branch v1.6.1 https://github.com/timbitz/Whippet.jl.git . && \
julia --project=/whippet -e 'using Pkg; Pkg.instantiate(); Pkg.precompile();'
RUN chmod 777 -R /depot/
ENV PATH="/whippet/bin/:${PATH}"
RUN whippet-quant.jl -h || echo "Done"
RUN whippet-index.jl -h || echo "Done"
RUN whippet-delta.jl -h || echo "Done"
ENTRYPOINT ["whippet-quant.jl"]
Any suggestions?
I tried using different locations for the JULIA_DEPOT_PATH (also the default ~/.julia) and creating a new user, but I got the same issue.
The command
docker run cloxd/whippet:1.6.1 -h
works nicely, while the same command with Singularity raises the following error:
singularity run docker://cloxd/whippet:1.6.1
INFO: Using cached SIF image
Whippet v1.6.1 loading...
Activating project at `/whippet`
ERROR: LoadError: SystemError: opening file "/depot/logs/manifest_usage.toml": Read-only file system
Stacktrace:
  [1] systemerror(p::String, errno::Int32; extrainfo::Nothing)
    @ Base ./error.jl:176
  [2] #systemerror#80
    @ ./error.jl:175 [inlined]
  [3] systemerror
    @ ./error.jl:175 [inlined]
  [4] open(fname::String; lock::Bool, read::Nothing, write::Nothing, create::Nothing, truncate::Nothing, append::Bool)
    @ Base ./iostream.jl:293
  [5] open(f::Pkg.Types.var"#44#46"{String}, args::String; kwargs::Base.Pairs{Symbol, Bool, Tuple{Symbol}, NamedTuple{(:append,), Tuple{Bool}}})
    @ Base ./io.jl:382
  [6] write_env_usage(source_file::String, usage_filepath::String)
    @ Pkg.Types /usr/local/julia/share/julia/stdlib/v1.8/Pkg/src/Types.jl:487
  [7] Pkg.Types.EnvCache(env::Nothing)
    @ Pkg.Types /usr/local/julia/share/julia/stdlib/v1.8/Pkg/src/Types.jl:345
  [8] EnvCache
    @ /usr/local/julia/share/julia/stdlib/v1.8/Pkg/src/Types.jl:325 [inlined]
  [9] add_snapshot_to_undo(env::Nothing)
    @ Pkg.API /usr/local/julia/share/julia/stdlib/v1.8/Pkg/src/API.jl:1862
 [10] add_snapshot_to_undo
    @ /usr/local/julia/share/julia/stdlib/v1.8/Pkg/src/API.jl:1858 [inlined]
 [11] activate(path::String; shared::Bool, temp::Bool, io::Base.TTY)
    @ Pkg.API /usr/local/julia/share/julia/stdlib/v1.8/Pkg/src/API.jl:1700
 [12] activate(path::String)
    @ Pkg.API /usr/local/julia/share/julia/stdlib/v1.8/Pkg/src/API.jl:1659
 [13] top-level scope
    @ /whippet/bin/whippet-quant.jl:12
in expression starting at /whippet/bin/whippet-quant.jl:12

Finally I found a working solution: after the initialization, it's enough to prepend /tmp/ to the JULIA_DEPOT_PATH, and Julia then uses /tmp/logs/manifest_usage.toml instead of the protected /depot/logs/manifest_usage.toml.
For clarity, here's the full Dockerfile:
FROM julia:bullseye
LABEL version=v1.6.1
RUN apt-get update && apt-get install -y git
RUN mkdir /depot
ENV JULIA_PATH=/usr/local/julia
ENV JULIA_DEPOT_PATH=/depot
ENV JULIA_PROJECT=/whippet
RUN mkdir /whippet/ && cd /whippet && \
git clone --depth 1 --branch v1.6.1 https://github.com/timbitz/Whippet.jl.git . && \
julia --project=/whippet -e 'using Pkg; Pkg.instantiate(); Pkg.precompile();'
RUN chmod 777 -R /depot/
ENV PATH="/whippet/bin/:${PATH}"
RUN whippet-quant.jl -h || echo "Done"
RUN whippet-index.jl -h || echo "Done"
RUN whippet-delta.jl -h || echo "Done"
ENV JULIA_DEPOT_PATH="/tmp/:${JULIA_DEPOT_PATH}"
ENTRYPOINT ["whippet-quant.jl"]

Related

Docker shows me an error on COPY, how to fix?

I'm using this container to set up X11 in GitPod.
ARG base
FROM ${base}
# Dazzle does not rebuild a layer until one of its lines are changed. Increase this counter to rebuild this layer.
ENV TRIGGER_REBUILD=1
# Install Xvfb, JavaFX-helpers and Openbox window manager
RUN sudo install-packages xvfb x11vnc xterm openjfx libopenjfx-java openbox
# Overwrite this env variable to use a different window manager
ENV WINDOW_MANAGER="openbox"
USER root
# Change the default number of virtual desktops from 4 to 1 (footgun)
RUN sed -ri "s/<number>4<\/number>/<number>1<\/number>/" /etc/xdg/openbox/rc.xml
# Install novnc
RUN git clone --depth 1 https://github.com/novnc/noVNC.git /opt/novnc \
&& git clone --depth 1 https://github.com/novnc/websockify /opt/novnc/utils/websockify
COPY novnc-index.html /opt/novnc/index.html
# Add VNC startup script
COPY start-vnc-session.sh /usr/bin/
RUN chmod +x /usr/bin/start-vnc-session.sh
USER gitpod
# This is a bit of a hack. At the moment we have no means of starting background
# tasks from a Dockerfile. This workaround checks, on each bashrc eval, if the X
# server is running on screen 0, and if not starts Xvfb, x11vnc and novnc.
RUN echo "export DISPLAY=:0" >> /home/gitpod/.bashrc.d/300-vnc
RUN echo "[ ! -e /tmp/.X0-lock ] && (/usr/bin/start-vnc-session.sh &> /tmp/display-\${DISPLAY}.log)" >> /home/gitpod/.bashrc.d/300-vnc
USER root
### checks ###
# no root-owned files in the home directory
RUN notOwnedFile=$(find . -not "(" -user gitpod -and -group gitpod ")" -print -quit) \
&& { [ -z "$notOwnedFile" ] \
|| { echo "Error: not all files/dirs in $HOME are owned by 'gitpod' user & group"; exit 1; } }
USER gitpod
This is where it gets sketchy:
# Install novnc
RUN git clone --depth 1 https://github.com/novnc/noVNC.git /opt/novnc \
&& git clone --depth 1 https://github.com/novnc/websockify /opt/novnc/utils/websockify
COPY novnc-index.html /opt/novnc/index.html
I get this output, please help!
COPY failed: file not found in build context or excluded by .dockerignore: stat novnc-index.html: file does not exist
My Dockerfile is in /src and I'm building from /src. I tried rebuilding with the --no-cache flag and with export DOCKER_BUILDKIT=1, but I'm still stuck with this problem.
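The error message narrows it down: the file must exist inside the build context and must not be excluded by .dockerignore. A quick sanity check along these lines, with the paths assumed from the question:
ls -l /src/novnc-index.html   # must exist inside the build context
cat /src/.dockerignore        # must not contain a pattern matching it
docker build /src             # the context is the path passed here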

Use .env file variables during Docker build

I am trying to use the sed command to replace variables during docker build. The first variable I am attempting to replace is $DATABASE_HOST; its value comes from my .env file. I have read online that variables from the .env file are only available at run time, not at build time. Because of this, my sed command has nothing to substitute.
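For example (the image name here is a placeholder), the values only surface at run time:
docker run --env-file .env some-image sh -c 'echo $DATABASE_HOST'   # prints someport
docker build .   # RUN steps during the build never see DATABASE_HOST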
Dockerfile:
# Dockerfile for Sphinx SE
# https://hub.docker.com/_/alpine/
FROM alpine:3.12
# https://sphinxsearch.com/blog/
ENV SPHINX_VERSION 3.4.1-efbcc65
# Install dependencies
RUN apk add --no-cache mariadb-connector-c-dev \
postgresql-dev \
wget \
sed
# set up and expose directories
RUN mkdir -pv /opt/sphinx/log /opt/sphinx/index
VOLUME /opt/sphinx/index
# http://sphinxsearch.com/downloads/sphinx-3.3.1-b72d67b-linux-amd64-musl.tar.gz
RUN wget http://sphinxsearch.com/files/sphinx-${SPHINX_VERSION}-linux-amd64-musl.tar.gz -O /tmp/sphinxsearch.tar.gz \
&& cd /opt/sphinx && tar -xf /tmp/sphinxsearch.tar.gz \
&& rm /tmp/sphinxsearch.tar.gz
# point to sphinx binaries
ENV PATH "${PATH}:/opt/sphinx/sphinx-3.4.1/bin"
RUN indexer -v
# redirect logs to stdout
RUN ln -sv /dev/stdout /opt/sphinx/log/query.log \
&& ln -sv /dev/stdout /opt/sphinx/log/searchd.log
# expose TCP port
EXPOSE 36307
EXPOSE 9306
# Copy base sphinx.conf file to container
VOLUME /opt/sphinx/conf
COPY ./sphinx.conf /opt/sphinx/conf/sphinx.conf
# Copy all docker sphinx.conf files
COPY ./configs/web-finder/docker/ /opt/sphinx/conf/
# look for and replace
RUN sed -i "s+DATABASE_HOST+${DATABASE_HOST}+g" /opt/sphinx/conf/sphinx.conf
# Concat the sphinx.conf files for all apps
# RUN cat /tmp/myconfig.append >> /etc/portage/make.conf && rm -f /tmp/myconfig.append
CMD indexer --all --config /opt/sphinx/conf/sphinx.conf \
&& searchd --nodetach --config /opt/sphinx/conf/sphinx.conf
.env file:
DATABASE_HOST=someport
DATABASE_USERNAME=someusername
DATABASE_PASSWORD=somepassword
DATABASE_SCHEMA=someschema
DATABASE_PORT=3306
SPHINX_PORT=36307
sphinx.conf:
searchd
{
listen = 127.0.0.1:$SPHINX_PORT
log = /opt/sphinx/searchd.log
query_log = /opt/sphinx/query.log
read_timeout = 5
max_children = 30
pid_file = /opt/sphinx/searchd.pid
seamless_rotate = 1
preopen_indexes = 1
unlink_old = 1
binlog_path = /opt/sphinx/
}
With Sphinx, the sphinx.conf file can be 'executable'. I.e. it can actually be a shell script (or PHP, Perl etc.!).
Assuming your .env file creates real (runtime!) environment variables within the container (I'm not overly familiar with Docker), your sphinx.conf file could be ...
#!/bin/sh
set -eu
cat <<EOF
searchd
{
listen = 127.0.0.1:$SPHINX_PORT
log = /opt/sphinx/searchd.log
query_log = /opt/sphinx/query.log
read_timeout = 5
max_children = 30
pid_file = /opt/sphinx/searchd.pid
seamless_rotate = 1
preopen_indexes = 1
unlink_old = 1
binlog_path = /opt/sphinx/
}
EOF
And because it is a shell script, the variables will be expanded automatically :)
It needs to be executable too!
RUN chmod a+x /opt/sphinx/conf/sphinx.conf
Then you don't need the sed command in the Dockerfile at all!
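Alternatively, if you want to keep the sed approach, a sketch using a build argument (the value shown is the one from the question's .env file):
# In the Dockerfile, declare the argument before using it:
ARG DATABASE_HOST
RUN sed -i "s+DATABASE_HOST+${DATABASE_HOST}+g" /opt/sphinx/conf/sphinx.conf
# Then pass the value explicitly at build time:
docker build --build-arg DATABASE_HOST=someport .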

CentOS 6 Docker build using livemedia-creator is failing

I am trying to build a CentOS 6 Docker base image using livemedia-creator on CentOS 7.5 (with the latest patches installed), but the build is failing. Below is the error I am getting.
# livemedia-creator --make-tar --no-virt --iso=CentOS-6.10-x86_64-netinstall.iso --ks=centos-6.ks --image-name=centos-root.tar.xz
Starting package installation process
The installation was stopped due to incomplete spokes detected while running in non-interactive cmdline mode. Since there cannot be any questions in cmdline mode, edit your kickstart file and retry installation.
The exact error message is:
CmdlineError: Missing package: firewalld.
The installer will now terminate.
The kickstart file which I am using is as below
url --url="http://mirrors.kernel.org/centos/6.9/os/x86_64/"
install
keyboard us
lang en_US.UTF-8
rootpw --lock --iscrypted locked
authconfig --enableshadow --passalgo=sha512
timezone --isUtc Etc/UTC
selinux --enforcing
#firewall --disabled
firewall --disable
network --bootproto=dhcp --device=eth0 --activate --onboot=on
reboot
bootloader --location=none
# Repositories to use
repo --name="CentOS" --baseurl=http://mirror.centos.org/centos/6.9/os/x86_64/ --cost=100
repo --name="Updates" --baseurl=http://mirror.centos.org/centos/6.9/updates/x86_64/ --cost=100
# Disk setup
zerombr
clearpart --all
part / --size 3000 --fstype ext4
%packages --excludedocs --nobase --nocore
vim-minimal
yum
bash
bind-utils
centos-release
shadow-utils
findutils
iputils
iproute
grub
-*-firmware
passwd
rootfiles
util-linux-ng
yum-plugin-ovl
%end
%post --log=/tmp/anaconda-post.log
# Post configure tasks for Docker
# remove stuff we don't need that anaconda insists on
# kernel needs to be removed by rpm, because of grubby
rpm -e kernel
yum -y remove dhclient dhcp-libs dracut grubby kmod grub2 centos-logos \
hwdata os-prober gettext* bind-license freetype kmod-libs dracut
yum -y remove dbus-glib dbus-python ebtables \
gobject-introspection libselinux-python pygobject3-base \
python-decorator python-slip python-slip-dbus kpartx kernel-firmware \
device-mapper* e2fsprogs-libs sysvinit-tools kbd-misc libss upstart
#clean up unused directories
rm -rf /boot
rm -rf /etc/firewalld
# Randomize root's password and lock
dd if=/dev/urandom count=50 | md5sum | passwd --stdin root
passwd -l root
#LANG="en_US"
#echo "%_install_lang $LANG" > /etc/rpm/macros.image-language-conf
awk '(NF==0&&!done){print "override_install_langs='$LANG'\ntsflags=nodocs";done=1}{print}' \
< /etc/yum.conf > /etc/yum.conf.new
mv /etc/yum.conf.new /etc/yum.conf
echo 'container' > /etc/yum/vars/infra
rm -f /usr/lib/locale/locale-archive
#Setup locale properly
localedef -v -c -i en_US -f UTF-8 en_US.UTF-8
#disable services
for serv in `/sbin/chkconfig|cut -f1`; do /sbin/chkconfig "$serv" off; done;
mv /etc/rc1.d/S26udev-post /etc/rc1.d/K26udev-post
rm -rf /var/cache/yum/*
rm -f /tmp/ks-script*
rm -rf /etc/sysconfig/network-scripts/ifcfg-*
#Generate installtime file record
/bin/date +%Y%m%d_%H%M > /etc/BUILDTIME
%end
I am not able to figure out where firewalld is being picked up from. Any thoughts on how to fix this issue?

Run dbus-daemon inside Docker container

I am trying to create a Docker container with a custom D-Bus bus running inside.
I configured my Dockerfile as follow:
FROM ubuntu:16.04
COPY myCustomDbus.conf /etc/dbus-1/
RUN apt-get update && apt-get install -y dbus
RUN dbus-daemon --config-file=/etc/dbus-1/myCustomDbus.conf
After building, the socket is created but it is flagged as a regular file ('-'), not as a socket ('s'), and I cannot use it as a bus...
-rwxrwxrwx 1 root root 0 Mar 20 07:25 myCustomDbus.sock
If I remove this file and run the dbus-daemon command again in a terminal, the socket is successfully created :
srwxrwxrwx 1 root root 0 Mar 20 07:35 myCustomDbus.sock
I am not sure if it is a D-Bus problem or a docker one.
Instead of using the "RUN" command, you should use the "ENTRYPOINT" one to run a startup script.
The Dockerfile should look like that :
FROM ubuntu:14.04
COPY myCustomDbus.conf /etc/dbus-1/
COPY run.sh /etc/init/
RUN apt-get update && apt-get install -y dbus
ENTRYPOINT ["/etc/init/run.sh"]
And run.sh :
#!/bin/bash
dbus-daemon --config-file=/etc/dbus-1/myCustomDbus.conf --print-address
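One caveat with this sketch: if myCustomDbus.conf tells the daemon to fork (as the stock system.conf does), the script exits right after starting the bus and the container stops. A common pattern, assuming you still want a long-lived main process, is to fork the bus explicitly and then exec the container's real command:
#!/bin/bash
# Start the bus in the background, then hand PID 1 over to the container's command.
dbus-daemon --config-file=/etc/dbus-1/myCustomDbus.conf --fork --print-address
exec "$@"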
You should use a startup script. A RUN instruction executes only at build time, in a temporary container that is stopped once the layer is committed, so the daemon it starts is gone from the final image.
my run.sh:
if ! pgrep -x "dbus-daemon" > /dev/null
then
# export DBUS_SESSION_BUS_ADDRESS=$(dbus-daemon --config-file=/usr/share/dbus-1/system.conf --print-address | cut -d, -f1)
# or:
dbus-daemon --config-file=/usr/share/dbus-1/system.conf
# and put in Dockerfile:
# ENV DBUS_SESSION_BUS_ADDRESS="unix:path=/var/run/dbus/system_bus_socket"
else
echo "dbus-daemon already running"
fi
if ! pgrep -x "/usr/lib/upower/upowerd" > /dev/null
then
/usr/lib/upower/upowerd &
else
echo "upowerd already running"
fi
then Chrome runs with
--use-gl=swiftshader
without errors.

How to install nvm in a Dockerfile?

I'm trying to install nvm within a Dockerfile. It seems like it installs OK, but the nvm command is not working.
Dockerfile:
# Install nvm
RUN git clone http://github.com/creationix/nvm.git /root/.nvm;
RUN chmod -R 777 /root/.nvm/;
RUN sh /root/.nvm/install.sh;
RUN export NVM_DIR="$HOME/.nvm";
RUN echo "[[ -s $HOME/.nvm/nvm.sh ]] && . $HOME/.nvm/nvm.sh" >> $HOME/.bashrc;
RUN nvm ls-remote;
Build output:
Step 23/39 : RUN git clone http://github.com/creationix/nvm.git /root/.nvm;
---> Running in ca485a68b9aa
Cloning into '/root/.nvm'...
---> a6f61d486443
Removing intermediate container ca485a68b9aa
Step 24/39 : RUN chmod -R 777 /root/.nvm/
---> Running in 6d4432926745
---> 30e7efc5bd41
Removing intermediate container 6d4432926745
Step 25/39 : RUN sh /root/.nvm/install.sh;
---> Running in 79b517430285
=> Downloading nvm from git to '$HOME/.nvm'
=> Cloning into '$HOME/.nvm'...
* (HEAD detached at v0.33.0)
master
=> Compressing and cleaning up git repository
=> Appending nvm source string to /root/.profile
=> bash_completion source string already in /root/.profile
npm info it worked if it ends with ok
npm info using npm@3.10.10
npm info using node@v6.9.5
npm info ok
=> Installing Node.js version 6.9.5
Downloading and installing node v6.9.5...
Downloading https://nodejs.org/dist/v6.9.5/node-v6.9.5-linux-x64.tar.xz...
######################################################################## 100.0%
Computing checksum with sha256sum
Checksums matched!
Now using node v6.9.5 (npm v3.10.10)
Creating default alias: default -> 6.9.5 (-> v6.9.5 *)
/root/.nvm/install.sh: 136: [: v6.9.5: unexpected operator
Failed to install Node.js 6.9.5
=> Close and reopen your terminal to start using nvm or run the following to use it now:
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
---> 9f6f3e74cd19
Removing intermediate container 79b517430285
Step 26/39 : RUN export NVM_DIR="$HOME/.nvm";
---> Running in 1d768138e3d5
---> 8039dfb4311c
Removing intermediate container 1d768138e3d5
Step 27/39 : RUN echo "[[ -s $HOME/.nvm/nvm.sh ]] && . $HOME/.nvm/nvm.sh" >> $HOME/.bashrc;
---> Running in d91126b7de62
---> 52313e09866e
Removing intermediate container d91126b7de62
Step 28/39 : RUN nvm ls-remote;
---> Running in f13c1ed42b3a
/bin/sh: 1: nvm: not found
The command '/bin/sh -c nvm ls-remote;' returned a non-zero code: 127
The error:
Step 28/39 : RUN nvm ls-remote;
---> Running in f13c1ed42b3a
/bin/sh: 1: nvm: not found
The command '/bin/sh -c nvm ls-remote;' returned a non-zero code: 127
The end of my /root/.bashrc file looks like:
[[ -s /root/.nvm/nvm.sh ]] && . /root/.nvm/nvm.sh
Everything else in the Dockerfile works. Adding the nvm stuff is what broke it. Here is the full file.
I made the following changes to your Dockerfile to make it work:
First, replace...
RUN sh /root/.nvm/install.sh;
...with:
RUN bash /root/.nvm/install.sh;
Why? On Redhat-based systems, /bin/sh is a symlink to /bin/bash. But on Ubuntu, /bin/sh is a symlink to /bin/dash. And this is what happens with dash:
root@52d54205a137:/# bash -c '[ 1 == 1 ] && echo yes!'
yes!
root@52d54205a137:/# dash -c '[ 1 == 1 ] && echo yes!'
dash: 1: [: 1: unexpected operator
Second, replace...
RUN nvm ls-remote;
...with:
RUN bash -i -c 'nvm ls-remote';
Why? Because the default .bashrc for a user in Ubuntu contains (almost at the top):
# If not running interactively, don't do anything
[ -z "$PS1" ] && return
And the source-ing of nvm's scripts takes place at the bottom. So we need to make sure that bash is invoked interactively by passing the argument -i.
Third, you could skip the following lines in your Dockerfile:
RUN export NVM_DIR="$HOME/.nvm";
RUN echo "[[ -s $HOME/.nvm/nvm.sh ]] && . $HOME/.nvm/nvm.sh" >> $HOME/.bashrc;
Why? Because bash /root/.nvm/install.sh; will automatically do it for you:
[fedora@myhost ~]$ sudo docker run --rm -it 2a283d6e2173 tail -2 /root/.bashrc
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
Installation of nvm on Ubuntu in a Dockerfile
In the case of Ubuntu 20.04 you only need these commands and everything will work:
FROM ubuntu:20.04
RUN apt update -y && apt upgrade -y && apt install wget bash -y
RUN wget -qO- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
RUN bash -i -c 'nvm ls-remote'
Hopefully it will work.
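If you also want a Node version baked into the image, the same interactive-bash trick should work for the install step (the version here is only an example):
RUN bash -i -c 'nvm install 16 && nvm alias default 16'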
