How to install nvm in a Dockerfile?

I'm trying to install nvm within a Dockerfile. It seems like it installs OK, but the nvm command is not working.
Dockerfile:
# Install nvm
RUN git clone http://github.com/creationix/nvm.git /root/.nvm;
RUN chmod -R 777 /root/.nvm/;
RUN sh /root/.nvm/install.sh;
RUN export NVM_DIR="$HOME/.nvm";
RUN echo "[[ -s $HOME/.nvm/nvm.sh ]] && . $HOME/.nvm/nvm.sh" >> $HOME/.bashrc;
RUN nvm ls-remote;
Build output:
Step 23/39 : RUN git clone http://github.com/creationix/nvm.git /root/.nvm;
---> Running in ca485a68b9aa
Cloning into '/root/.nvm'...
---> a6f61d486443
Removing intermediate container ca485a68b9aa
Step 24/39 : RUN chmod -R 777 /root/.nvm/
---> Running in 6d4432926745
---> 30e7efc5bd41
Removing intermediate container 6d4432926745
Step 25/39 : RUN sh /root/.nvm/install.sh;
---> Running in 79b517430285
=> Downloading nvm from git to '$HOME/.nvm'
=> Cloning into '$HOME/.nvm'...
* (HEAD detached at v0.33.0)
master
=> Compressing and cleaning up git repository
=> Appending nvm source string to /root/.profile
=> bash_completion source string already in /root/.profile
npm info it worked if it ends with ok
npm info using npm@3.10.10
npm info using node@v6.9.5
npm info ok
=> Installing Node.js version 6.9.5
Downloading and installing node v6.9.5...
Downloading https://nodejs.org/dist/v6.9.5/node-v6.9.5-linux-x64.tar.xz...
######################################################################## 100.0%
Computing checksum with sha256sum
Checksums matched!
Now using node v6.9.5 (npm v3.10.10)
Creating default alias: default -> 6.9.5 (-> v6.9.5 *)
/root/.nvm/install.sh: 136: [: v6.9.5: unexpected operator
Failed to install Node.js 6.9.5
=> Close and reopen your terminal to start using nvm or run the following to use it now:
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
---> 9f6f3e74cd19
Removing intermediate container 79b517430285
Step 26/39 : RUN export NVM_DIR="$HOME/.nvm";
---> Running in 1d768138e3d5
---> 8039dfb4311c
Removing intermediate container 1d768138e3d5
Step 27/39 : RUN echo "[[ -s $HOME/.nvm/nvm.sh ]] && . $HOME/.nvm/nvm.sh" >> $HOME/.bashrc;
---> Running in d91126b7de62
---> 52313e09866e
Removing intermediate container d91126b7de62
Step 28/39 : RUN nvm ls-remote;
---> Running in f13c1ed42b3a
/bin/sh: 1: nvm: not found
The command '/bin/sh -c nvm ls-remote;' returned a non-zero code: 127
The end of my /root/.bashrc file looks like:
[[ -s /root/.nvm/nvm.sh ]] && . /root/.nvm/nvm.sh
Everything else in the Dockerfile works. Adding the nvm steps is what broke it.

I made the following changes to your Dockerfile to make it work:
First, replace...
RUN sh /root/.nvm/install.sh;
...with:
RUN bash /root/.nvm/install.sh;
Why? On Red Hat-based systems, /bin/sh is a symlink to /bin/bash, but on Ubuntu, /bin/sh is a symlink to /bin/dash. And this is what happens with dash:
root@52d54205a137:/# bash -c '[ 1 == 1 ] && echo yes!'
yes!
root@52d54205a137:/# dash -c '[ 1 == 1 ] && echo yes!'
dash: 1: [: 1: unexpected operator
Second, replace...
RUN nvm ls-remote;
...with:
RUN bash -i -c 'nvm ls-remote';
Why? Because the default .bashrc for a user on Ubuntu contains (near the top):
# If not running interactively, don't do anything
[ -z "$PS1" ] && return
And the sourcing of nvm's script takes place at the bottom of .bashrc. So we need to make sure bash is invoked interactively by passing the -i flag.
Third, you could skip the following lines in your Dockerfile:
RUN export NVM_DIR="$HOME/.nvm";
RUN echo "[[ -s $HOME/.nvm/nvm.sh ]] && . $HOME/.nvm/nvm.sh" >> $HOME/.bashrc;
Why? Because bash /root/.nvm/install.sh; will automatically do it for you:
[fedora@myhost ~]$ sudo docker run --rm -it 2a283d6e2173 tail -2 /root/.bashrc
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm

Installation of nvm on Ubuntu in a Dockerfile
On Ubuntu 20.04 you can use just these commands and everything will work:
FROM ubuntu:20.04
RUN apt update -y && apt upgrade -y && apt install wget bash -y
RUN wget -qO- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
RUN bash -i -c 'nvm ls-remote'
Hopefully it will work.
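To actually install and use a Node version in later build steps, something like this should work on top of the above (the version number is only an example):
RUN bash -i -c 'nvm install 16 && nvm alias default 16'
RUN bash -i -c 'node --version && npm --version'
The -i flag matters for the same reason as in the previous answer: nvm.sh is only sourced by interactive shells via .bashrc.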

Related

Docker shows me an error on COPY, how to fix?

I'm using this container to set up X11 in GitPod.
ARG base
FROM ${base}
# Dazzle does not rebuild a layer until one of its lines are changed. Increase this counter to rebuild this layer.
ENV TRIGGER_REBUILD=1
# Install Xvfb, JavaFX-helpers and Openbox window manager
RUN sudo install-packages xvfb x11vnc xterm openjfx libopenjfx-java openbox
# Overwrite this env variable to use a different window manager
ENV WINDOW_MANAGER="openbox"
USER root
# Change the default number of virtual desktops from 4 to 1 (footgun)
RUN sed -ri "s/<number>4<\/number>/<number>1<\/number>/" /etc/xdg/openbox/rc.xml
# Install novnc
RUN git clone --depth 1 https://github.com/novnc/noVNC.git /opt/novnc \
&& git clone --depth 1 https://github.com/novnc/websockify /opt/novnc/utils/websockify
COPY novnc-index.html /opt/novnc/index.html
# Add VNC startup script
COPY start-vnc-session.sh /usr/bin/
RUN chmod +x /usr/bin/start-vnc-session.sh
USER gitpod
# This is a bit of a hack. At the moment we have no means of starting background
# tasks from a Dockerfile. This workaround checks, on each bashrc eval, if the X
# server is running on screen 0, and if not starts Xvfb, x11vnc and novnc.
RUN echo "export DISPLAY=:0" >> /home/gitpod/.bashrc.d/300-vnc
RUN echo "[ ! -e /tmp/.X0-lock ] && (/usr/bin/start-vnc-session.sh &> /tmp/display-\${DISPLAY}.log)" >> /home/gitpod/.bashrc.d/300-vnc
USER root
### checks ###
# no root-owned files in the home directory
RUN notOwnedFile=$(find . -not "(" -user gitpod -and -group gitpod ")" -print -quit) \
&& { [ -z "$notOwnedFile" ] \
|| { echo "Error: not all files/dirs in $HOME are owned by 'gitpod' user & group"; exit 1; } }
USER gitpod
This is where it gets sketchy:
# Install novnc
RUN git clone --depth 1 https://github.com/novnc/noVNC.git /opt/novnc \
&& git clone --depth 1 https://github.com/novnc/websockify /opt/novnc/utils/websockify
COPY novnc-index.html /opt/novnc/index.html
I get this output, please help!
COPY failed: file not found in build context or excluded by .dockerignore: stat novnc-index.html: file does not exist
My Dockerfile is in /src and I'm building in /src. I tried rebuilding with the --no-cache flag and with export DOCKER_BUILDKIT=1, but I'm still stuck with this problem.
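For anyone hitting the same message: COPY can only see files inside the build context, and the error names both usual causes, the file missing from the context or a .dockerignore rule excluding it. A quick sanity check, assuming the build really runs from /src (the image tag is a placeholder):
# Run these from the build-context directory (/src)
ls -l novnc-index.html            # the file must exist inside the context
cat .dockerignore                 # and must not be matched by any rule here
docker build --no-cache -t myimage .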

Docker to Apptainer/Singularity image for Julia application: manifest_usage.toml Read-only file system

I'm trying to create a Docker image that also works in Singularity, for the software Whippet (https://github.com/timbitz/Whippet.jl).
The Docker image works fine, but when I try to use it with Singularity (singularity-ce version 3.9.5), it's not able to write manifest_usage.toml (which is understandable, since in Singularity the command is executed by the user and not by root as in Docker).
Here's my Dockerfile:
FROM julia:bullseye
LABEL version=v1.6.1
RUN apt-get update && apt-get install -y git
RUN mkdir /depot
ENV JULIA_PATH=/usr/local/julia
ENV JULIA_DEPOT_PATH=/depot
ENV JULIA_PROJECT=/whippet
RUN mkdir /whippet/ && cd /whippet && \
git clone --depth 1 --branch v1.6.1 https://github.com/timbitz/Whippet.jl.git . && \
julia --project=/whippet -e 'using Pkg; Pkg.instantiate(); Pkg.precompile();'
RUN chmod 777 -R /depot/
ENV PATH="/whippet/bin/:${PATH}"
RUN whippet-quant.jl -h || echo "Done"
RUN whippet-index.jl -h || echo "Done"
RUN whippet-delta.jl -h || echo "Done"
ENTRYPOINT ["whippet-quant.jl"]
Any suggestions?
I tried using different locations for JULIA_DEPOT_PATH (also the default ~/.julia) and creating a new user, but I got the same issue.
The command
docker run cloxd/whippet:1.6.1 -h
works nicely, but the same command with Singularity raises the following error:
singularity run docker://cloxd/whippet:1.6.1
INFO: Using cached SIF image
Whippet v1.6.1 loading...
Activating project at `/whippet`
ERROR: LoadError: SystemError: opening file "/depot/logs/manifest_usage.toml": Read-only file system
Stacktrace:
[1] systemerror(p::String, errno::Int32; extrainfo::Nothing)
@ Base ./error.jl:176
[2] #systemerror#80
@ ./error.jl:175 [inlined]
[3] systemerror
@ ./error.jl:175 [inlined]
[4] open(fname::String; lock::Bool, read::Nothing, write::Nothing, create::Nothing, truncate::Nothing, append::Bool)
@ Base ./iostream.jl:293
[5] open(f::Pkg.Types.var"#44#46"{String}, args::String; kwargs::Base.Pairs{Symbol, Bool, Tuple{Symbol}, NamedTuple{(:append,), Tuple{Bool}}})
@ Base ./io.jl:382
[6] write_env_usage(source_file::String, usage_filepath::String)
@ Pkg.Types /usr/local/julia/share/julia/stdlib/v1.8/Pkg/src/Types.jl:487
[7] Pkg.Types.EnvCache(env::Nothing)
@ Pkg.Types /usr/local/julia/share/julia/stdlib/v1.8/Pkg/src/Types.jl:345
[8] EnvCache
@ /usr/local/julia/share/julia/stdlib/v1.8/Pkg/src/Types.jl:325 [inlined]
[9] add_snapshot_to_undo(env::Nothing)
@ Pkg.API /usr/local/julia/share/julia/stdlib/v1.8/Pkg/src/API.jl:1862
[10] add_snapshot_to_undo
@ /usr/local/julia/share/julia/stdlib/v1.8/Pkg/src/API.jl:1858 [inlined]
[11] activate(path::String; shared::Bool, temp::Bool, io::Base.TTY)
@ Pkg.API /usr/local/julia/share/julia/stdlib/v1.8/Pkg/src/API.jl:1700
[12] activate(path::String)
@ Pkg.API /usr/local/julia/share/julia/stdlib/v1.8/Pkg/src/API.jl:1659
[13] top-level scope
@ /whippet/bin/whippet-quant.jl:12
in expression starting at /whippet/bin/whippet-quant.jl:12
Finally I found a working solution: after the initialization, it's enough to prepend /tmp/ to JULIA_DEPOT_PATH, and Julia then uses /tmp/logs/manifest_usage.toml instead of the protected /depot/logs/manifest_usage.toml.
For clarity, here's the Dockerfile:
FROM julia:bullseye
LABEL version=v1.6.1
RUN apt-get update && apt-get install -y git
RUN mkdir /depot
ENV JULIA_PATH=/usr/local/julia
ENV JULIA_DEPOT_PATH=/depot
ENV JULIA_PROJECT=/whippet
RUN mkdir /whippet/ && cd /whippet && \
git clone --depth 1 --branch v1.6.1 https://github.com/timbitz/Whippet.jl.git . && \
julia --project=/whippet -e 'using Pkg; Pkg.instantiate(); Pkg.precompile();'
RUN chmod 777 -R /depot/
ENV PATH="/whippet/bin/:${PATH}"
RUN whippet-quant.jl -h || echo "Done"
RUN whippet-index.jl -h || echo "Done"
RUN whippet-delta.jl -h || echo "Done"
ENV JULIA_DEPOT_PATH="/tmp/:${JULIA_DEPOT_PATH}"
ENTRYPOINT ["whippet-quant.jl"]

Command `supervisorctl restart` failed with exit code 7 on Mac M1

I am running on a Mac M1 with Docker and gulp.
My first error was `command ld not found`, but I fixed it here:
how to solve running gcc failed exit status 1 in mac m1?
After that it led me to this error.
This is the full error:
[17:09:04] 'restart-supervisor' errored after 1.04 s
[17:14:45] '<anonymous>' errored after 220 ms
[17:14:45] Error in plugin "gulp-shell"
Message:
Command `supervisorctl restart projectname` failed with exit code 7
[17:14:45] 'restart-supervisor' errored after 838 ms
I've done a lot of research.
I tried the suggestions here, but the command isn't found:
https://github.com/Supervisor/supervisor/issues/121
And this as well:
https://github.com/Supervisor/supervisor/issues/1223
I even changed my image to arm64v8/golang:1.17-alpine3.14.
This is my gulpfile.js:
var gulp = require("gulp");
var shell = require('gulp-shell');
gulp.task("build-binary", shell.task(
    'go build'
));
gulp.task("restart-supervisor", gulp.series("build-binary", shell.task(
    'supervisorctl restart projectname'
)));
gulp.task('watch', function() {
    gulp.watch([
        "*.go",
        "*.mod",
        "*.sum",
        "**/*.go",
        "**/*.mod",
        "**/*.sum"
    ],
    {interval: 1000, usePolling: true},
    gulp.series('build-binary', 'restart-supervisor'));
});
gulp.task('default', gulp.series('watch'));
This is my current Dockerfile:
FROM arm64v8/golang:1.17-alpine3.14
RUN apk update && apk add gcc make git libc-dev binutils-gold
# Install dependencies
RUN apk add --update tzdata \
--no-cache ca-certificates git wget \
nodejs npm \
g++ \
supervisor \
&& update-ca-certificates \
&& npm install -g gulp gulp-shell
COPY ops/api/local/supervisor /etc
ENV PATH $PATH:/go/bin
WORKDIR /go/src/github.com/projectname/src/api
In my docker-compose.yaml I have this:
entrypoint:
[
"sh",
"-c",
"npm install gulp gulp-shell && supervisord -c /etc/supervisord.conf && gulp"
]
vim /etc/supervisord.conf:
#!/bin/sh
[unix_http_server]
file=/tmp/supervisor.sock
username=admin
password=revproxy
[supervisord]
nodaemon=false
user=root
logfile=/dev/null
logfile_maxbytes=0
logfile_backups=0
loglevel=info
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock
username=admin
password=revproxy
[program:projectname_api]
directory=/go/src/github.com/projectname/src/api
command=/go/src/github.com/projectname/src/api/api
autostart=true
autorestart=true
stderr_logfile=/go/src/github.com/projectname/src/api/api_err.log
stderr_logfile_maxbytes=0
stdout_logfile=/go/src/github.com/projectname/src/api/api_debug.log
stdout_logfile_maxbytes=0
startsecs=0
But seriously, what is wrong with this Mac M1? I have tried it with Rosetta and without, version 2.
If the title of my question is wrong, please correct me; I'm also not sure of my error.
I fixed the problem by adding #!/bin/sh and startsecs=0. No errors are showing anymore, but the next problem is that the API is not running.

Docker syntax error end of file unexpected

Hello, I'm new to Docker and I'm having a problem building this:
MySQL installs well.
phpMyAdmin installs well.
But in Apache I have this error:
: not foundbin/pete_install.sh: 2:
/usr/local/bin/pete_install.sh: 110: Syntax error: end of file unexpected (expecting "then")
pete_install.sh
Lines 1 to 10:
#!/bin/bash
FILE=/var/www/html/.installed
if [ ! -f "$FILE" ]; then
echo "#######################################"
echo "Starting WordPress Pete installation..."
echo "#######################################"
rm -rf /var/www/html/Pete4
Lines 99 to 110:
FILE=/var/www/.ssh/id_rsa.pub
if [ ! -f "$FILE" ]; then
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
fi
chmod 600 -R /var/www/.ssh/id_rsa
chmod 600 -R /var/www/.ssh/id_rsa.pub
apachectl -DFOREGROUND
#systemctl start
#/etc/init.d/apache2 reload
echo "Loading apache..."
Full file: https://pastebin.com/1f5a3pJY
Most of the time, this error occurs because the script was written on Windows: the line break on Windows is \r\n, while on Linux it is \n.
You can install a tool to convert the format, e.g.:
$ sudo apt-get update
$ sudo apt-get install -y dos2unix
$ dos2unix /usr/local/bin/pete_install.sh
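You can also bake the conversion into the image so it survives rebuilds. A sketch, assuming a Debian/Ubuntu base image (the script path is taken from the error above):
# In the Dockerfile, after COPYing the script:
RUN apt-get update && apt-get install -y dos2unix \
 && dos2unix /usr/local/bin/pete_install.sh
# Or, without installing anything extra, strip the carriage returns with sed:
# RUN sed -i 's/\r$//' /usr/local/bin/pete_install.sh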

Run dbus-daemon inside Docker container

I am trying to create a Docker container with a custom D-Bus bus running inside.
I configured my Dockerfile as follow:
FROM ubuntu:16.04
COPY myCustomDbus.conf /etc/dbus-1/
RUN apt-get update && apt-get install -y dbus
RUN dbus-daemon --config-file=/etc/dbus-1/myCustomDbus.conf
After building, the socket is created but it is flagged as "file", not as "socket", and I cannot use it as a bus:
-rwxrwxrwx 1 root root 0 Mar 20 07:25 myCustomDbus.sock
If I remove this file and run the dbus-daemon command again in a terminal, the socket is successfully created:
srwxrwxrwx 1 root root 0 Mar 20 07:35 myCustomDbus.sock
I am not sure if it is a D-Bus problem or a Docker one.
Instead of using the "RUN" command, you should use "ENTRYPOINT" to run a startup script.
The Dockerfile should look like this:
FROM ubuntu:14.04
COPY myCustomDbus.conf /etc/dbus-1/
COPY run.sh /etc/init/
RUN apt-get update && apt-get install -y dbus
ENTRYPOINT ["/etc/init/run.sh"]
And run.sh :
#!/bin/bash
dbus-daemon --config-file=/etc/dbus-1/myCustomDbus.conf --print-address
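Once the container is up, you can check that the bus is reachable by pointing DBUS_SESSION_BUS_ADDRESS at the socket and listing names. This is a sketch: the socket path below is an assumption, use whatever address your myCustomDbus.conf defines (or what --print-address printed):
# Adjust the path to match the address printed by dbus-daemon --print-address
export DBUS_SESSION_BUS_ADDRESS="unix:path=/tmp/myCustomDbus.sock"
dbus-send --session --print-reply --dest=org.freedesktop.DBus /org/freedesktop/DBus org.freedesktop.DBus.ListNames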
You should use a startup script. A "RUN" command is executed only while the image is being built; whatever it starts is gone once that build step finishes.
My run.sh:
if ! pgrep -x "dbus-daemon" > /dev/null
then
# export DBUS_SESSION_BUS_ADDRESS=$(dbus-daemon --config-file=/usr/share/dbus-1/system.conf --print-address | cut -d, -f1)
# or:
dbus-daemon --config-file=/usr/share/dbus-1/system.conf
# and put in Dockerfile:
# ENV DBUS_SESSION_BUS_ADDRESS="unix:path=/var/run/dbus/system_bus_socket"
else
echo "dbus-daemon already running"
fi
if ! pgrep -x "/usr/lib/upower/upowerd" > /dev/null
then
/usr/lib/upower/upowerd &
else
echo "upowerd already running"
fi
Then Chrome runs with --use-gl=swiftshader without errors.
