Run nvm with docker exec

I want to run nvm with docker exec, something like:
docker run -d <image>
docker exec <container> nvm use v6.13.0 && npm install
but I get this error:
OCI runtime exec failed: exec failed: container_linux.go:296: starting container process caused "exec: \"nvm\": executable file not found in $PATH": unknown
I know I can do something like this, which works:
docker exec <container> /bin/bash -c 'source "$NVM_DIR"/nvm.sh && nvm use v6.13.0'
But I don't want to. Why? Because the point is to create a Docker container usable for all my projects, with different versions of Python and Node, and to run nvm use <version> && npm install directly from GitLab CI using the .nvmrc file in my project.
My .gitlab-ci.yml runs a Makefile which basically runs nvm use and npm install:
image: cracky5457/nvm-pyenv-yarn
stages:
  - install
  - test
variables:
  GITLAB_CACHING: "true"
cache:
  paths:
    - pip-cache/
  key: "python_2.7"
installing:
  stage: install
  script:
    - make install
  artifacts:
    paths:
      - venv/
      - node_modules/
    expire_in: 1 hour
  tags:
    - docker-runner
and I don't want to put /bin/bash -c into my Makefile, because then the project becomes Docker-dependent locally.
This is my Docker image, with the instructions to build it (you have to create the files base_dependencies.txt, node-versions.txt and python-versions.txt), or you can just docker pull cracky5457/nvm-pyenv-yarn:
https://hub.docker.com/r/cracky5457/nvm-pyenv-yarn/
FROM phusion/baseimage:0.10.0
# Make sure bash is the standard shell
RUN rm /bin/sh && ln -sf /bin/bash /bin/sh
ENV ENV ~/.profile
ENV PYENV_ROOT /root/.pyenv
ENV PATH $PYENV_ROOT/shims:$PYENV_ROOT/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:$PATH
# Add yarn registry
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - && \
    echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
# Install base system libraries.
ENV DEBIAN_FRONTEND=noninteractive
COPY base_dependencies.txt /base_dependencies.txt
RUN apt-get update && \
    apt-get install -y $(cat /base_dependencies.txt)
# Install pyenv and default python version.
ENV PYTHONDONTWRITEBYTECODE true
RUN git clone https://github.com/yyuu/pyenv.git /root/.pyenv && \
    cd /root/.pyenv && \
    git checkout `git describe --abbrev=0 --tags` && \
    eval "$(pyenv init -)"
# Install nvm and default node version.
ENV NVM_DIR /usr/local/nvm
RUN curl https://raw.githubusercontent.com/creationix/nvm/v0.33.8/install.sh | bash && \
    echo 'source $NVM_DIR/nvm.sh' >> /etc/profile
# Clean up APT when done.
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Install python and node versions
COPY python-versions.txt /python-versions.txt
RUN for version in $(cat python-versions.txt); do pyenv install $version; pyenv global $version; pip install virtualenv; done
COPY node-versions.txt /node-versions.txt
RUN for version in $(cat node-versions.txt); do source $NVM_DIR/nvm.sh; nvm install $version; done
# Use baseimage-docker's init system.
CMD ["/sbin/my_init"]

I didn't find a proper way.
You can create a bash script at /usr/bin/nvm and make it executable with chmod +x /usr/bin/nvm:
#!/bin/bash
export NVM_DIR="/usr/local/nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion
nvm "$@"
And then:
docker exec <container> nvm use
But it's tricky, and I can't add another instruction to my exec; for example, I can't run docker exec <container> nvm use && npm install in one go, because the && is interpreted by the host shell rather than inside the container.
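One pitfall in such a wrapper: it must forward `"$@"` (all arguments), not `"$#"` (the argument count). A minimal sketch of the difference, using a stand-in function instead of the real nvm:

```shell
# stand-in for the real nvm: just prints what it receives
fake_nvm() { printf '%s\n' "$*"; }

wrapper_good() { fake_nvm "$@"; }   # forwards all arguments
wrapper_bad()  { fake_nvm "$#"; }   # forwards only the argument count

good=$(wrapper_good use v6.13.0)
bad=$(wrapper_bad use v6.13.0)
echo "$good / $bad"
```

With `"$#"` the wrapper would run `nvm 2` instead of `nvm use v6.13.0`.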
But I finally fixed my issue directly in .gitlab-ci.yml using
source "$NVM_DIR/nvm.sh" && nvm use && npm install
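For reference, a sketch of what that might look like in the .gitlab-ci.yml job (assuming the image sets NVM_DIR and a .nvmrc sits in the project root):

```yaml
installing:
  stage: install
  script:
    # GitLab CI script lines run through a shell, so nvm.sh can be sourced
    - source "$NVM_DIR/nvm.sh" && nvm use && npm install
```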

Related

Error during installation of Node.js, node -v outputs node not found

I run a given Dockerfile in order to build an image for my TeamCity Agent:
FROM jetbrains/teamcity-agent:2022.10.1-linux-sudo
RUN curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
RUN sudo sh -c 'echo deb https://apt.kubernetes.io/ kubernetes-xenial main > /etc/apt/sources.list.d/kubernetes.list'
RUN curl -sL https://deb.nodesource.com/setup_16.x | sudo -E bash -
# https://github.com/AdoptOpenJDK/openjdk-docker/blob/master/12/jdk/ubuntu/Dockerfile.hotspot.releases.full
RUN sudo apt-get update && \
    sudo apt-get install -y ffmpeg gnupg2 git sudo kubectl \
    binfmt-support qemu-user-static mc jq
#RUN wget -O - https://apt.kitware.com/keys/kitware-archive-latest.asc 2>/dev/null | gpg --dearmor - | sudo tee /etc/apt/trusted.gpg.d/kitware.gpg >/dev/null
#RUN sudo apt-add-repository 'deb https://apt.kitware.com/ubuntu/ focal main' && \
#    sudo apt-get update && \
RUN sudo apt install -y cmake build-essential wget
RUN sudo curl -L https://nodejs.org/dist/v14.17.3/node-v14.17.3-linux-x64.tar.gz --output node-v14.17.3-linux-x64.tar.gz
RUN sudo tar -xvf node-v14.17.3-linux-x64.tar.gz
RUN echo 'export PATH="$HOME/node-v14.17.3-linux-x64/bin:$PATH"' >> ~/.bashrc
RUN echo "The version of Node.js is $(node -v)"
All the code was right, but then I decided to add a Node.js installation to the Dockerfile, beginning with this line:
RUN sudo curl -L https://nodejs.org/dist/v14.17.3/node-v14.17.3-linux-x64.tar.gz --output node-v14.17.3-linux-x64.tar.gz
However, the problem now is that I get the following error during execution of the last line of the Dockerfile:
RUN echo "The version of Node.js is $(node -v)"
Output for this line is:
Step 10/22 : RUN echo "The version of Node.js is $(node -v)"
21:07:41 ---> Running in 863b0e75e45a
21:07:42 /bin/sh: 1: node: not found
You need to make the following 2 changes in your Dockerfile for your node installation to be included in your $PATH env var:
1. Remove the $HOME variable from the path you're concatenating, as you are currently downloading node to your root folder and not the $HOME folder:
RUN echo 'export PATH="/node-v14.17.3-linux-x64/bin:$PATH"' >> ~/.bashrc
2. Either source ~/.bashrc explicitly for the $PATH changes to take place, or run the export command as part of the Dockerfile.
Once you apply these 2 changes, the error should go away.
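Applied to the Dockerfile above, the second change could look like this (a sketch: /opt is an arbitrary choice of install location, and ENV, unlike an export appended to ~/.bashrc, persists into later build steps and the running container):

```dockerfile
RUN sudo curl -L https://nodejs.org/dist/v14.17.3/node-v14.17.3-linux-x64.tar.gz --output node-v14.17.3-linux-x64.tar.gz
RUN sudo tar -xvf node-v14.17.3-linux-x64.tar.gz -C /opt
# ENV is visible to every subsequent RUN step and to the final image
ENV PATH="/opt/node-v14.17.3-linux-x64/bin:$PATH"
RUN echo "The version of Node.js is $(node -v)"
```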

Why is my container when starting as root seem to be empty?

When I get into my container, nothing seems to have been installed:
docker pull brandojazz/iit-term-synthesis:test
then
docker run -u root -ti brandojazz/iit-term-synthesis:test_arm bash
see:
(base) root@897a4007076f:/home/bot# opam switch
[WARNING] Running as root is not recommended
[ERROR] Opam has not been initialised, please run `opam init'
It should have been initialized.
FROM continuumio/miniconda3
# FROM --platform=linux/amd64 continuumio/miniconda3
MAINTAINER Brando Miranda "me@gmail.com"
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
    ssh \
    git \
    m4 \
    libgmp-dev \
    opam \
    wget \
    ca-certificates \
    rsync \
    strace \
    gcc
# rlwrap \
# sudo
# https://github.com/giampaolo/psutil/pull/2103
RUN useradd -m bot
# format for chpasswd user_name:password
# RUN echo "bot:bot" | chpasswd
# RUN && adduser docker sudo
WORKDIR /home/bot
USER bot
ADD https://api.github.com/repos/IBM/pycoq/git/refs/heads/main version.json
# -- setup opam like VP's PyCoq
RUN opam init --disable-sandboxing
# compiler + '_' + coq_serapi + '.' + coq_serapi_pin
RUN opam switch create ocaml-variants.4.07.1+flambda_coq-serapi.8.11.0+0.11.1 ocaml-variants.4.07.1+flambda
RUN opam switch ocaml-variants.4.07.1+flambda_coq-serapi.8.11.0+0.11.1
RUN eval $(opam env)
RUN opam repo add coq-released https://coq.inria.fr/opam/released
# RUN opam pin add -y coq 8.11.0
# ['opam', 'repo', '--all-switches', 'add', '--set-default', 'coq-released', 'https://coq.inria.fr/opam/released']
RUN opam repo --all-switches add --set-default coq-released https://coq.inria.fr/opam/released
RUN opam update --all
RUN opam pin add -y coq 8.11.0
#RUN opam install -y --switch ocaml-variants.4.07.1+flambda_coq-serapi_coq-serapi_8.11.0+0.11.1 coq-serapi 8.11.0+0.11.1
RUN opam install -y coq-serapi
#RUN eval $(opam env)
#
## makes sure depedencies for pycoq are installed once already in the docker image
#RUN pip install https://github.com/ddelange/psutil/releases/download/release-5.9.1/psutil-5.9.1-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
#ENV WANDB_API_KEY="SECRET"
#RUN pip install wandb --upgrade
#
#RUN pip install ultimate-utils
## RUN pip install pycoq # do not uncomment on arm, unless serlib is removed from setup.py in the pypi pycoq version.
## RUN pip install ~/iit-term-synthesis # likely won't work cuz we don't have iit or have pused it to pypi
#
## then make sure editable mode is done to be able to use changing pycoq from system
#RUN echo "pip install -e /home/bot/ultimate-utils" >> ~/.bashrc
#RUN echo "pip install -e /home/bot/pycoq" >> ~/.bashrc
#RUN echo "pip install -e /home/bot/iit-term-synthesis" >> ~/.bashrc
#RUN echo "pip install wandb --upgrade" >> ~/.bashrc
#
#RUN echo "eval $(opam env)" >> ~/.bashrc
## - set env variable for bash terminal prompt p1 to be nicely colored
#ENV force_color_prompt=yes
#
#RUN mkdir -p /home/bot/data/
# RUN pytest --pyargs pycoq
#CMD /bin/bash
NB: This may not be your only problem (I have no idea what opam is or how it works), but one thing jumps out:
This...
RUN eval $(opam env)
...doesn't do anything. Each RUN invocation happens in a new shell; environment variables set in one RUN command aren't going to be visible in a subsequent RUN command.
Rather than a list of single-command RUN commands, chain everything together in a single command:
RUN eval $(opam env) && \
    opam repo add coq-released https://coq.inria.fr/opam/released && \
    opam repo --all-switches add --set-default coq-released https://coq.inria.fr/opam/released && \
    opam update --all && \
    opam pin add -y coq 8.11.0 && \
    opam install -y coq-serapi
Because the above runs in a single shell, the environment set by eval $(opam env) will be available to all the following commands.
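The same effect can be seen in plain shell, independent of opam (a sketch: each parenthesized subshell stands in for one RUN step, and OPAMSWITCH is just an illustrative variable name):

```shell
# each ( ... ) is a fresh shell, like each RUN line in a Dockerfile
( export OPAMSWITCH=my-switch )        # analogue of "RUN eval $(opam env)"
first="${OPAMSWITCH:-unset}"           # a later "RUN" sees nothing

# chaining in ONE shell keeps the environment visible to later commands
second=$(export OPAMSWITCH=my-switch && echo "$OPAMSWITCH")
echo "$first / $second"
```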

Change node version using nvm and docker-compose

I have a Dockerfile running centos/systemd that also installs nvm, and an entrypoint.sh that runs /usr/sbin/init (as required by the docs). It also accepts an argument from the docker-compose command to control the node version being used - BUT it seems the node version is not persistent/kept for some reason.
How can I control the node version from the docker-compose file?
Dockerfile:
FROM centos/systemd
# Install & enable httpd
RUN yum -y update
RUN yum -y install \
    httpd \
    autofs \
    gcc-c++ \
    make \
    git \
    fontconfig \
    bzip2 \
    libpng-devel \
    ruby \
    ruby-devel \
    zip \
    unzip
RUN yum clean all
RUN systemctl enable httpd.service
# Setting up virtual hosts
RUN echo "IncludeOptional apps/*.conf" >> /etc/httpd/conf/httpd.conf
# Install nvm to later use in compose
ENV NVM_DIR /root/.nvm
ENV NODE_VERSION 13.10.0
RUN curl --silent -o- https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash
# install node and npm
RUN source $NVM_DIR/nvm.sh \
    && nvm install $NODE_VERSION \
    && nvm install 12.16.1 \
    && nvm install 11.9.0 \
    && nvm install 10.9.0 \
    && nvm alias default $NODE_VERSION \
    && nvm use default
# add node and npm to path so the commands are available
ENV NODE_PATH $NVM_DIR/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
# Expose ports
EXPOSE 80
EXPOSE 443
COPY entrypoint.sh ./entrypoint.sh
RUN chmod +x ./entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]
entrypoint.sh:
#!/bin/bash
source root/.nvm/nvm.sh && nvm use "$@"
node --version
exec /usr/sbin/init
docker-compose:
version: '3'
services:
  httpd:
    build: '..\Web-Server\Apache'
    privileged: true
    ports:
      - 80:80
      - 443:443
    command: 11.9.0
docker-compose up (output):
httpd_1 | Now using node v11.9.0 (npm v6.5.0)
httpd_1 | v11.9.0
docker exec -it /bin/sh -lc "node --version":
v13.10.0
Thanks!
If you create a Dockerfile for each project and then combine them with docker-compose files for each deployment, that is your best option. If you want to facilitate code reuse, you can look at creating a generic base image that all your Dockerfiles use.
Answering my own question after endless searches across the web. 2 things to note/change:
1. We need to set the default node version as well (inside the shell script). Unfortunately, I don't know why it's necessary to set it as default to keep it persistent, but it just works (if anyone can explain that, please do). So entrypoint.sh looks like this:
#!/bin/bash
source root/.nvm/nvm.sh && nvm use "$@" && nvm alias default "$@"
node --version
exec /usr/sbin/init
2. When running bash with docker exec -it <container_id> /bin/sh -c "node --version", and not in interactive mode or as a login shell, it will not read startup scripts, so the node version set via source /root/.nvm/nvm.sh and nvm use XXX is not read, and that's why it's not "changed" for this specific bash session. The solution is to log in to the container and run node --version from within, OR to source nvm.sh as well before running node --version, e.g. docker exec -it <container_id> sh -c "source /root/.nvm/nvm.sh && node --version".
Hope that helps anyone who comes across the same issue.
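The rule behind point 2 can be sketched without docker at all: settings made by a script only survive if the script is sourced into the current shell, which is exactly what nvm.sh needs (the file here is a throwaway temp file standing in for nvm.sh, and NODE_FLAVOUR is just an illustrative variable name):

```shell
setup=$(mktemp)
echo 'NODE_FLAVOUR=v11.9.0' > "$setup"

sh "$setup"                      # run as a child shell: changes are lost
lost="${NODE_FLAVOUR:-unset}"

. "$setup"                       # source into the current shell: changes stick
kept="$NODE_FLAVOUR"
echo "$lost / $kept"
rm -f "$setup"
```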

Docker exec npm command

I have successfully built a docker container with node in it.
When I ssh'd into it, the npm and node commands work as expected, but when I try to execute a command remotely (docker exec vvs_workspace npm install), it prints rpc error: code = 2 desc = oci runtime error: exec failed: exec: "npm": executable file not found in $PATH
Dockerfile:
#####################################
# Node / NVM:
#####################################
ENV NVM_DIR=/home/dockuser/.nvm
ENV NODE_VERSION 6.3.1
RUN curl https://raw.githubusercontent.com/creationix/nvm/v0.31.3/install.sh | bash \
    && . ~/.nvm/nvm.sh \
    && nvm install $NODE_VERSION \
    && nvm alias default $NODE_VERSION \
    && nvm use default
ENV NODE_PATH $NVM_DIR/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/v$NODE_VERSION/bin:$PATH
RUN echo "" >> ~/.bashrc && \
    echo 'export NVM_DIR="$HOME/.nvm"' >> ~/.bashrc && \
    echo '[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh" # This loads nvm' >> ~/.bashrc
P.S. when executing docker exec vvs_workspace composer install everything is ok.
I found the solution: https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/issues/82 . Just add ENV PATH $PATH:/home/laradock/.nvm/versions/node/v6.8.0/bin to your Dockerfile, changing /home/laradock/.nvm/versions/node/v6.8.0/bin to your nvm path.
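In the Dockerfile above, that would look something like this (a sketch: the version directory assumes nvm's default layout and the NODE_VERSION 6.3.1 set in the fragment):

```dockerfile
# nvm installs binaries under $NVM_DIR/versions/node/v<version>/bin
ENV PATH $PATH:/home/dockuser/.nvm/versions/node/v6.3.1/bin
```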

Docker CMD doesn't see installed components

I am trying to build a docker image using the following Dockerfile:
FROM ubuntu:latest
# Replace shell with bash so we can source files
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
# Update packages
RUN apt-get -y update && apt-get install -y \
    curl \
    build-essential \
    libssl-dev \
    git \
    && rm -rf /var/lib/apt/lists/*
ENV APP_NAME testapp
ENV NODE_VERSION 5.10
ENV SERVE_PORT 8080
ENV LIVE_RELOAD_PORT 8888
# Install nvm, node, and angular
RUN (curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.31.1/install.sh | bash -) \
    && source /root/.nvm/nvm.sh \
    && nvm install $NODE_VERSION \
    && npm install -g angular-cli \
    && ng new $APP_NAME \
    && cd $APP_NAME \
    && npm run postinstall
EXPOSE $SERVE_PORT $LIVE_RELOAD_PORT
WORKDIR $APP_NAME
EXPOSE 8080
CMD ["node", "-v"]
But I keep getting an error when trying to run it:
docker: Error response from daemon: Container command 'node' not found or does not exist..
I know node is being properly installed, because if I rebuild the image with the CMD line commented out of the Dockerfile
#CMD ["node", "-v"]
And then start a shell session
docker run -it testimage
I can see that all my dependencies are there and return proper results
node -v
v5.10.1
.....
ng -v
angular-cli: 1.0.0-beta.5
node: 5.10.1
os: linux x64
So my question is. Why is the CMD in Dockerfile not able to run these and how can I fix it?
When using the shell to RUN node via nvm, you have sourced the nvm.sh file and it will have a $PATH variable set in its environment to search for executable files via nvm.
When you run commands via docker run, only a default PATH is injected:
docker run <your-ubuntu-image> echo $PATH
docker run <your-ubuntu-image> which node
docker run <your-ubuntu-image> nvm which node
Specifying a CMD with an array execs a binary directly, without a shell or a $PATH lookup.
Provide the full path to your node binary:
CMD ["/bin/node","-v"]
It's better to use the node binary directly rather than the nvm helper scripts, due to the way Docker's signal processing works. It might be easier to use the node apt packages in Docker rather than nvm.
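If you'd rather keep nvm, another sketch is to put the nvm-installed binary directory on PATH at build time, so the exec-form CMD can find node without a shell (the path below assumes nvm's default layout, and NODE_VERSION must match the exact installed version):

```dockerfile
ENV NODE_VERSION=5.10.1
# nvm installs binaries under $NVM_DIR/versions/node/v<version>/bin
ENV PATH=/root/.nvm/versions/node/v$NODE_VERSION/bin:$PATH
# the array-form CMD now resolves node via the baked-in PATH
CMD ["node", "-v"]
```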
