Docker install nvm

I'm trying to install nvm like this:
FROM maven:3-jdk-8
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
RUN curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
RUN source ~/.nvm/nvm.sh
RUN nvm install 16
RUN nvm use 16
However I keep getting this error:
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 253B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/maven:3-jdk-8 1.1s
=> [1/6] FROM docker.io/library/maven:3-jdk-8@sha256:ff18d86faefa15d1445d0fa4874408cc96dec068eb3487a0fc6d07f359a24607 0.0s
=> CACHED [2/6] RUN rm /bin/sh && ln -s /bin/bash /bin/sh 0.0s
=> CACHED [3/6] RUN curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash 0.0s
=> CACHED [4/6] RUN source ~/.nvm/nvm.sh 0.0s
=> ERROR [5/6] RUN nvm install 16 0.1s
------
> [5/6] RUN nvm install 16:
#7 0.128 /bin/sh: line 1: nvm: command not found
------
executor failed running [/bin/sh -c nvm install 16]: exit code: 127
I would expect nvm to be accessible because I run this line:
RUN source ~/.nvm/nvm.sh
What am I doing wrong here? When I run this manually inside the Docker container, it works.

Each RUN statement is executed in its own shell, so the source command has no effect on subsequent RUN steps.
To fix it, use a single RUN command:
FROM maven:3-jdk-8
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
RUN curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
RUN source ~/.nvm/nvm.sh && nvm install 16 && nvm use 16

The source command will have no effect on the next RUN command.
You need to have all the nvm commands in the same layer like this:
RUN source ~/.nvm/nvm.sh && nvm install 16 && nvm use 16
Alternatively, if you would like to set things up manually: sourcing nvm.sh works by exporting environment variables (you can inspect them with the env command).
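The point about each RUN getting its own shell can be demonstrated outside Docker: every RUN is roughly equivalent to launching a fresh `sh -c`, so anything exported or sourced in one invocation is invisible to the next. A minimal sketch:

```shell
# Each Dockerfile RUN spawns a fresh shell, so exported variables and
# sourced functions from one RUN are gone in the next. Simulate two
# RUN steps with two separate shells:
first=$(sh -c 'export FOO=bar; echo "$FOO"')
second=$(sh -c 'echo "${FOO:-unset}"')
echo "$first"    # bar: FOO exists inside the first shell
echo "$second"   # unset: the second shell never saw FOO
```

This is why chaining `source ~/.nvm/nvm.sh && nvm install 16` inside one RUN works while splitting them across RUN lines does not.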

Related

Building process of a Docker container for the Gauge framework won't stop running

I'm writing my bachelor thesis about E2E testing of a specific piece of software for my university. I work with the Gauge framework, which includes Taiko. My tests are fine and work on my local machine.
But now I have to build a Docker container, because my tests have to work autonomously regardless of which OS is in use (my mentor uses iOS and there are some problems if he just runs my code from GitLab).
And now the problem:
I read a bit about Docker and watched some tutorials on how to use it, so I understand to some extent what is happening in the following code. The Dockerfile is generated when I initialize a Gauge project. There is another Dockerfile example on the Gauge homepage, but that doesn't work either (it is from 2018 and may be outdated, but it hasn't been updated on the docs site).
# Building the image
# docker build -t gauge-taiko .
# Running the image
# docker run --rm -it -v ${PWD}/reports:/gauge/reports gauge-taiko
# This image uses the official node base image.
FROM node
# The Taiko installation downloads and installs the chromium required to run the tests.
# However, we need the chromium dependencies installed in the environment. These days, most
# Dockerfiles just install chrome to get the dependencies.
RUN apt-get update \
&& apt-get install -y wget gnupg ca-certificates \
&& wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list' \
&& apt-get update \
&& apt-get install -y google-chrome-stable
# Set a custom npm install location so that Gauge, Taiko and dependencies can be
# installed without root privileges
ENV NPM_CONFIG_PREFIX=/home/gauge/.npm-packages
ENV PATH=$PATH:/home/gauge/.npm-packages/bin
# ENV PATH=$PATH:/home/node/.npm-global/bin
# Add the Taiko browser arguments
ENV TAIKO_BROWSER_ARGS=--no-sandbox,--start-maximized,--disable-dev-shm-usage
ENV headless_chrome=true
ENV TAIKO_SKIP_DOCUMENTATION=true
# Uncomment the lines below to use chrome bundled with this image
#ENV TAIKO_SKIP_CHROMIUM_DOWNLOAD=true
#ENV TAIKO_BROWSER_PATH=/usr/bin/google-chrome
# Set working directory
WORKDIR /gauge
# Copy the local working folder
COPY . .
# Create an unprivileged user to run Taiko tests
RUN groupadd -r gauge && useradd -r -g gauge -G audio,video gauge && \
mkdir -p /home/gauge/.npm-packages/lib && \
chown -R gauge:gauge /home/gauge /gauge
USER gauge
# Install dependencies and plugins
RUN npm install -g @getgauge/cli && \
gauge install js && \
gauge install html-report && \
gauge install screenshot && \
gauge config check_updates false
# Default command on running the image
ENTRYPOINT ["npm", "test"]
and then the building process won't stop (see below):
=> [internal] load build definition from Dockerfile 0.2s
=> => transferring dockerfile: 2.03kB 0.0s
=> [internal] load .dockerignore 0.2s
=> => transferring context: 34B 0.0s
=> [internal] load metadata for docker.io/library/node:latest 0.6s
=> [internal] load build context 0.1s
=> => transferring context: 1.44kB 0.0s
=> [1/6] FROM docker.io/library/node@sha256:d5222e1ebd7dd7e9683f47a8861a4711cb4407a4830cbe04a582ca4986245700 0.0s
=> CACHED [2/6] RUN apt-get update && apt-get install -y wget gnupg ca-certificates && wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - && sh -c 'echo "deb [arch=amd6 0.0s
=> CACHED [3/6] WORKDIR /gauge 0.0s
=> CACHED [4/6] COPY . . 0.0s
=> CACHED [5/6] RUN groupadd -r gauge && useradd -r -g gauge -G audio,video gauge && mkdir -p /home/gauge/.npm-packages/lib && chown -R gauge:gauge /home/gauge /gauge 0.0s
=> CANCELED [6/6] RUN npm install -g @getgauge/cli && gauge install js && gauge install html-report && gauge install screenshot && gauge config check_updates false 31.3s
I let it run for about 10 minutes but nothing happened, so I canceled it myself.
After some tests and research, I think the problem is this line
npm install -g @getgauge/cli &&
I changed the order in which the commands are executed (e.g. if I run gauge install js first in a separate RUN command, it executes, but then it stops at the line above again).
Then I ran another test and tried to install a specific version of Gauge with the npm install -g @getgauge/cli@<version> command (in my test it was 1.1.1, because I had seen that in a GitHub example), and with that it worked. However, the current version is 1.4.4, which I use on my local machine and therefore want to use in the Docker container as well (plus there were some pretty useful bug fixes between those versions...). Do you have any ideas how to fix this problem, or a hint about where to look for more information?
Thank you guys and happy holidays!
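There is no confirmed fix here, but one hypothetical way to narrow the hang down is to pin the exact version the author wants (1.4.4) and run npm with verbose logging, so a stall reveals which phase (resolving, fetching, or a postinstall script) is stuck. A sketch, not a verified solution:

```dockerfile
# Hypothetical debugging variant (not a confirmed fix): pin the version
# and log verbosely so a hang reveals which npm phase is stuck.
RUN npm install -g @getgauge/cli@1.4.4 --verbose && \
    gauge install js && \
    gauge install html-report && \
    gauge install screenshot && \
    gauge config check_updates false
```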

Docker cross compile build context leads to `dockerfile.v0: unsupported frontend capability moby.buildkit.frontend.contexts`

I'm trying to cross-compile a Rust application for my Raspberry Pi (compiling on the Pi itself is very slow).
To do that, I'm trying to build a Dockerfile with a build context located somewhere else (because some certificates and other files needed in the Docker image live outside the Dockerfile's directory).
Dockerfile (./myapp/Dockerfile)
FROM rust
RUN apt-get update && apt-get install -y pkg-config libssl-dev build-essential cmake
WORKDIR /home/myapp
COPY --from=local ./myapp/. .
COPY --from=local ./mqtt-helper/ /home/mqtt-helper/
COPY --from=local ./mqtt-broker/config/certs/ca.crt ./certs/
COPY --from=local ./mqtt-broker/config/certs/mqtt-subscriber.key ./certs/
COPY --from=local ./mqtt-broker/config/certs/mqtt-subscriber.crt ./certs/
ENV TZ=Europe/Berlin
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN cargo install --path .
EXPOSE 8080
CMD ["myapp"]
Now I'm trying to run:
docker buildx build --platform linux/arm64 --build-context local=./ ./myapp/
But this call always leads into:
[+] Building 0.0s (2/2) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 32B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
ERROR: failed to solve: rpc error: code = Unknown desc = failed to solve with frontend dockerfile.v0: unsupported frontend capability moby.buildkit.frontend.contexts
thank you
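For context: named build contexts (the `--build-context` flag together with `COPY --from=<name>`) require the BuildKit Dockerfile frontend 1.4 or newer and Buildx 0.8 or newer; an older frontend rejects them with exactly this "unsupported frontend capability" error. A sketch of the likely fix, assuming the builder itself is recent enough, is to pin the frontend with a syntax directive at the top of the Dockerfile:

```dockerfile
# syntax=docker/dockerfile:1.4
# Pinning the frontend version opts this build in to named contexts,
# independent of the builder's default frontend.
FROM rust
WORKDIR /home/myapp
COPY --from=local ./myapp/. .
```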

Clone a private GitLab repo in a Dockerfile using SSH

I created a Dockerfile to build an image for a container. I need to clone a private repo in this Dockerfile.
I followed this tutorial:
https://vsupalov.com/better-docker-private-git-ssh/
I added these steps to my Dockerfile:
RUN --mount=type=ssh
WORKDIR /
RUN git clone git@gitlab.<my private repo>.git
My SSH key on the host is in ~/.ssh/:
ls ~/.ssh/
id_ed25519 id_ed25519.pub known_hosts
I'm trying to clone the private repo into the root of the container that will be built from this Docker image.
Update
Here is my final Dockerfile:
FROM python:3.8-bullseye
RUN apt-get update && \
apt-get install --yes --no-install-recommends \
openssh-client \
git \
&& apt-get clean
RUN mkdir -p -m 0600 ~/.ssh && \
ssh-keyscan -H gitlab.com ~/.ssh/known_hosts
RUN --mount=type=ssh \
git clone git@gitlab.<org>/<repo>.git
and I get this error
docker buildx build --ssh default .
[+] Building 0.6s (8/12)
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 32B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/python:3.8-bullseye 0.0s
=> [1/8] FROM docker.io/library/python:3.8-bullseye 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 6.56kB 0.0s
=> CACHED [2/8] RUN apt-get update && apt-get install --yes --no-install-recommends openssh-client git && apt-get clean 0.0s
=> CACHED [3/8] RUN mkdir -p -m 0600 ~/.ssh && ssh-keyscan -H gitlab.com ~/.ssh/known_hosts 0.0s
=> ERROR [4/8] RUN --mount=type=ssh git clone git@gitlab.<org>/<repo>.git 0.5s
------
> [4/8] RUN --mount=type=ssh git clone git@gitlab.<org>/<repo>.git:
#0 0.299 Cloning into 'the-dock'...
#0 0.432 Host key verification failed.
#0 0.433 fatal: Could not read from remote repository.
#0 0.433
#0 0.433 Please make sure you have the correct access rights
#0 0.433 and the repository exists.
------
ERROR: failed to solve: executor failed running [/bin/sh -c git clone git@gitlab.<org>/<repo>.git]: exit code: 128
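One likely cause, inferred from the "Host key verification failed" message rather than verified: the ssh-keyscan line as written never populates known_hosts. Without a redirect, ssh-keyscan treats `~/.ssh/known_hosts` as another hostname to scan and prints its results to stdout, so the file stays empty and SSH refuses the unknown host. A sketch with the redirect added:

```dockerfile
# ssh-keyscan prints to stdout; append (>>) its output into known_hosts,
# otherwise the file is never populated and Git's SSH host check fails.
# 0700 rather than 0600 so the directory itself stays traversable.
RUN mkdir -p -m 0700 ~/.ssh && \
    ssh-keyscan -H gitlab.com >> ~/.ssh/known_hosts
```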

dotnet not found in alpine docker

I have the following dockerfile:
FROM jenkins/jenkins:lts-alpine
USER root
RUN apk update
RUN apk add bash icu-libs krb5-libs libgcc libintl libssl1.1 libstdc++ zlib wget
RUN apk add libgdiplus --repository https://dl-3.alpinelinux.org/alpine/edge/testing/
USER jenkins
RUN wget https://dot.net/v1/dotnet-install.sh -O $HOME/dotnet-install.sh
RUN chmod +x $HOME/dotnet-install.sh
RUN $HOME/dotnet-install.sh -c 5.0
RUN dotnet --info
EXPOSE 2376 23676
But when I run docker-compose, it gives me:
Building jenkins
failed to get console mode for stdout: Invalid identifier.
[+] Building 64.6s (10/11)
[+] Building 64.7s (11/11) FINISHED
=> [internal] load build definition from jenkins.dockerfile 0.0s
=> => transferring dockerfile: 486B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/jenkins/jenkins:lts-alpine 1.6s
=> [1/8] FROM docker.io/jenkins/jenkins:lts-alpine@sha256:b2f3dd63864733 0.0s
=> CACHED [2/8] RUN apk update 0.0s
=> CACHED [3/8] RUN apk add bash icu-libs krb5-libs libgcc libintl libss 0.0s
=> [4/8] RUN apk add libgdiplus --repository https://dl-3.alpinelinux.or 7.8s
=> [5/8] RUN wget https://dot.net/v1/dotnet-install.sh -O $HOME/dotnet-i 2.2s
=> [6/8] RUN chmod +x $HOME/dotnet-install.sh 0.3s
=> [7/8] RUN $HOME/dotnet-install.sh -c 5.0 52.2s
=> ERROR [8/8] RUN dotnet --info 0.5s
------
> [8/8] RUN dotnet --info:
#11 0.447 /bin/sh: dotnet: not found
------
ERROR: Service 'jenkins' failed to build
I followed every step in the Microsoft documentation but I keep failing. What am I doing wrong here?
Using the jenkins/jnlp-slave:alpine image
FROM jenkins/jnlp-slave:alpine
USER root
RUN apk add bash icu-libs krb5-libs libgcc libintl libssl1.1 libstdc++ zlib wget
RUN apk add libgdiplus --repository https://dl-3.alpinelinux.org/alpine/edge/testing/
RUN mkdir -p /usr/share/dotnet \
&& ln -s /usr/share/dotnet/dotnet /usr/bin/dotnet
RUN wget https://dot.net/v1/dotnet-install.sh
RUN chmod +x dotnet-install.sh
RUN ./dotnet-install.sh -c 3.1 --install-dir /usr/share/dotnet
RUN ./dotnet-install.sh -c 5.0 --install-dir /usr/share/dotnet
RUN ./dotnet-install.sh -c 6.0 --install-dir /usr/share/dotnet
To install the .NET 5.0 SDK in a Jenkins container with volumes configured, I had to do the following:
FROM jenkins/jenkins:lts-alpine AS builder
# Switch to root user to install .NET SDK
USER root
# Prerequisites
RUN apk add bash icu-libs krb5-libs libgcc libintl libssl1.1 libstdc++ zlib wget
RUN apk update
FROM builder
# Download the script
RUN wget https://dot.net/v1/dotnet-install.sh -O $HOME/dotnet-install.sh
RUN chmod +x $HOME/dotnet-install.sh
RUN $HOME/dotnet-install.sh -c 5.0
EXPOSE 2376 2376
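Both answers work around the same underlying behavior: dotnet-install.sh installs into $HOME/.dotnet by default, and that directory is not on PATH in a fresh shell, which is why a later RUN reports `dotnet: not found`. A minimal sketch that keeps the default location, assuming the jenkins user's home is /var/jenkins_home as in the official image:

```dockerfile
RUN wget https://dot.net/v1/dotnet-install.sh -O $HOME/dotnet-install.sh && \
    chmod +x $HOME/dotnet-install.sh && \
    $HOME/dotnet-install.sh -c 5.0
# dotnet-install.sh defaults to $HOME/.dotnet; ENV cannot expand $HOME,
# so the path is written out explicitly (assumes the official image's
# jenkins home directory, /var/jenkins_home).
ENV DOTNET_ROOT=/var/jenkins_home/.dotnet
ENV PATH="$PATH:/var/jenkins_home/.dotnet"
RUN dotnet --info
```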

Install yarn global on Docker file

I'm trying to get a global package, installed with Yarn, recognized inside the Docker image.
FROM ruby:2.7.2
RUN apt-get update -qq && apt-get install -y nodejs libvips-tools yarn
# Install all gems first.
# This hits the warm cache if unchanged so bundling is faster.
COPY Gemfile* /tmp/
WORKDIR /tmp
RUN bundle install
WORKDIR /sapco
COPY . /sapco
# Get yarn and install global required packages
RUN yarn global add mjml
EXPOSE 3000
# Start the main process.
CMD ["rails", "server", "-b", "0.0.0.0"]
I build this with docker build -f Dockerfile.dev .
I get the following error:
=> [internal] load build definition from Dockerfile.dev 0.0s
=> => transferring dockerfile: 504B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 35B 0.0s
=> [internal] load metadata for docker.io/library/ruby:2.7.2 1.2s
=> CACHED [1/8] FROM docker.io/library/ruby:2.7.2@sha256:abe7034da4092958d306c37aded76a751ea9d35d5c90d1ad9e92290561bd5f3f 0.0s
=> [internal] load build context 0.4s
=> => transferring context: 220.47kB 0.4s
=> [2/8] RUN apt-get update -qq && apt-get install -y nodejs libvips-tools yarn 38.2s
=> [3/8] COPY Gemfile* /tmp/ 0.1s
=> [4/8] WORKDIR /tmp 0.0s
=> [5/8] RUN bundle install 292.6s
=> [6/8] WORKDIR /sapco 0.0s
=> [7/8] COPY . /sapco 0.5s
=> ERROR [8/8] RUN yarn global add mjml 0.7s
------
> [8/8] RUN yarn global add mjml:
#12 0.567 Parsing scenario file global
#12 0.568 ERROR: [Errno 2] No such file or directory: 'global'
The issue is that yarn is also the name of a different binary, provided by Debian's cmdtest package.
I eventually traced it down to https://github.com/yarnpkg/yarn/issues/2821 and resolved my issue by running the following commands in the Dockerfile:
apt remove -y cmdtest
apt remove -y yarn
curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
apt update
apt install yarn
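The same fix can be expressed as Dockerfile layers (a sketch of the answer's commands, not tested here): remove the cmdtest shim, then install the real Yarn from its own apt repository before the yarn global add step:

```dockerfile
# Remove the conflicting "yarn" binary shipped by cmdtest, then install
# the real Yarn from yarnpkg's Debian repository.
RUN apt-get remove -y cmdtest yarn || true && \
    curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - && \
    echo "deb https://dl.yarnpkg.com/debian/ stable main" > /etc/apt/sources.list.d/yarn.list && \
    apt-get update && apt-get install -y yarn
RUN yarn global add mjml
```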
