I have the following Dockerfile:
FROM jenkins/jenkins:lts-alpine
USER root
RUN apk update
RUN apk add bash icu-libs krb5-libs libgcc libintl libssl1.1 libstdc++ zlib wget
RUN apk add libgdiplus --repository https://dl-3.alpinelinux.org/alpine/edge/testing/
USER jenkins
RUN wget https://dot.net/v1/dotnet-install.sh -O $HOME/dotnet-install.sh
RUN chmod +x $HOME/dotnet-install.sh
RUN $HOME/dotnet-install.sh -c 5.0
RUN dotnet --info
EXPOSE 2376 23676
But when I run docker-compose, it gives me:
Building jenkins
failed to get console mode for stdout: Invalid identifier.
[+] Building 64.6s (10/11)
[+] Building 64.7s (11/11) FINISHED
=> [internal] load build definition from jenkins.dockerfile 0.0s
=> => transferring dockerfile: 486B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/jenkins/jenkins:lts-alpine 1.6s
=> [1/8] FROM docker.io/jenkins/jenkins:lts-alpine@sha256:b2f3dd63864733 0.0s
=> CACHED [2/8] RUN apk update 0.0s
=> CACHED [3/8] RUN apk add bash icu-libs krb5-libs libgcc libintl libss 0.0s
=> [4/8] RUN apk add libgdiplus --repository https://dl-3.alpinelinux.or 7.8s
=> [5/8] RUN wget https://dot.net/v1/dotnet-install.sh -O $HOME/dotnet-i 2.2s
=> [6/8] RUN chmod +x $HOME/dotnet-install.sh 0.3s
=> [7/8] RUN $HOME/dotnet-install.sh -c 5.0 52.2s
=> ERROR [8/8] RUN dotnet --info 0.5s
------
> [8/8] RUN dotnet --info:
#11 0.447 /bin/sh: dotnet: not found
------
ERROR: Service 'jenkins' failed to build
I followed every step in the Microsoft documentation, but I keep failing. What am I doing wrong here?
Using the jenkins/jnlp-slave:alpine image
FROM jenkins/jnlp-slave:alpine
USER root
RUN apk add bash icu-libs krb5-libs libgcc libintl libssl1.1 libstdc++ zlib wget
RUN apk add libgdiplus --repository https://dl-3.alpinelinux.org/alpine/edge/testing/
RUN mkdir -p /usr/share/dotnet \
&& ln -s /usr/share/dotnet/dotnet /usr/bin/dotnet
RUN wget https://dot.net/v1/dotnet-install.sh
RUN chmod +x dotnet-install.sh
RUN ./dotnet-install.sh -c 3.1 --install-dir /usr/share/dotnet
RUN ./dotnet-install.sh -c 5.0 --install-dir /usr/share/dotnet
RUN ./dotnet-install.sh -c 6.0 --install-dir /usr/share/dotnet
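The original build fails because dotnet-install.sh installs into $HOME/.dotnet by default, which is not on PATH in a fresh RUN shell. As an alternative to the symlink approach above, a minimal sketch that keeps the per-user install (paths assume the script's default $HOME/.dotnet location, which for the jenkins user is /var/jenkins_home/.dotnet):

```dockerfile
# Sketch: keep the per-user install, but put it on PATH explicitly.
# Assumes dotnet-install.sh's default target, $HOME/.dotnet
# (/var/jenkins_home/.dotnet for the jenkins user).
ENV DOTNET_ROOT=/var/jenkins_home/.dotnet
ENV PATH="${PATH}:/var/jenkins_home/.dotnet"
RUN dotnet --info
```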
To install the .NET 5.0 SDK in a Jenkins container with volumes configured, I had to do the following:
FROM jenkins/jenkins:lts-alpine AS builder
# Switch to root user to install .NET SDK
USER root
# Prerequisites
RUN apk add bash icu-libs krb5-libs libgcc libintl libssl1.1 libstdc++ zlib wget
RUN apk update
FROM builder
# Download the script
RUN wget https://dot.net/v1/dotnet-install.sh -O $HOME/dotnet-install.sh
RUN chmod +x $HOME/dotnet-install.sh
RUN $HOME/dotnet-install.sh -c 5.0
EXPOSE 2376 2376
Related
I'm trying to install nvm like this:
FROM maven:3-jdk-8
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
RUN curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
RUN source ~/.nvm/nvm.sh
RUN nvm install 16
RUN nvm use 16
However I keep getting this error:
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 253B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/maven:3-jdk-8 1.1s
=> [1/6] FROM docker.io/library/maven:3-jdk-8@sha256:ff18d86faefa15d1445d0fa4874408cc96dec068eb3487a0fc6d07f359a24607 0.0s
=> CACHED [2/6] RUN rm /bin/sh && ln -s /bin/bash /bin/sh 0.0s
=> CACHED [3/6] RUN curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash 0.0s
=> CACHED [4/6] RUN source ~/.nvm/nvm.sh 0.0s
=> ERROR [5/6] RUN nvm install 16 0.1s
------
> [5/6] RUN nvm install 16:
#7 0.128 /bin/sh: line 1: nvm: command not found
------
executor failed running [/bin/sh -c nvm install 16]: exit code: 127
I would expect NVM is accessible because I run this line:
RUN source ~/.nvm/nvm.sh
What am I doing wrong here? When I run this manually in my docker container it works.
Each RUN statement is executed in its own shell, therefore the source command does not affect the subsequent shells.
To fix it, use a single RUN command:
FROM maven:3-jdk-8
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
RUN curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
RUN source ~/.nvm/nvm.sh && nvm install 16 && nvm use 16
The source command will not have an effect on the next RUN command.
You need to have all the nvm commands in the same layer like this:
RUN source ~/.nvm/nvm.sh && nvm install 16 && nvm use 16
Or, if you would like to verify this manually: the source command only adds environment variables to the current shell (you can view them using the env command).
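The per-layer shell isolation can be demonstrated outside Docker, too. In this sketch, each sh -c invocation stands in for one RUN instruction: a variable set in the first child shell is simply gone in the second.

```shell
#!/bin/sh
# Each RUN instruction starts a brand-new shell process, so anything a
# previous RUN "source"d (functions, variables) is gone in the next layer.
# The two sh -c calls below mimic two consecutive RUN layers.
sh -c 'NVM_LOADED=yes'                              # "RUN source ~/.nvm/nvm.sh"
result=$(sh -c 'echo "${NVM_LOADED:-not found}"')   # next RUN: variable is gone
echo "$result"    # prints: not found
```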
I am writing my bachelor thesis about E2E testing of a specific piece of software for my university. I work with the Gauge framework, which includes Taiko. My tests are fine and working on my local machine.
But now I have to build a Docker container, because my tests have to work autonomously regardless of which OS is in use (my mentor uses iOS and there are some problems if he just runs my code from GitLab).
And now, the problem:
I read a bit about Docker and watched some tutorials on how to use it. Therefore, I understand what is happening in the following code to some extent. The Dockerfile is generated when I initialize a Gauge project. There is another Dockerfile example on the Gauge homepage, but that doesn't work either (it is from 2018 and may be outdated, but has not been changed on the doc site).
# Building the image
# docker build -t gauge-taiko .
# Running the image
# docker run --rm -it -v ${PWD}/reports:/gauge/reports gauge-taiko
# This image uses the official node base image.
FROM node
# The Taiko installation downloads and installs the chromium required to run the tests.
# However, we need the chromium dependencies installed in the environment. These days, most
# Dockerfiles just install chrome to get the dependencies.
RUN apt-get update \
&& apt-get install -y wget gnupg ca-certificates \
&& wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list' \
&& apt-get update \
&& apt-get install -y google-chrome-stable
# Set a custom npm install location so that Gauge, Taiko and dependencies can be
# installed without root privileges
ENV NPM_CONFIG_PREFIX=/home/gauge/.npm-packages
ENV PATH=$PATH:/home/gauge/.npm-packages/bin
# ENV PATH=$PATH:/home/node/.npm-global/bin
# Add the Taiko browser arguments
ENV TAIKO_BROWSER_ARGS=--no-sandbox,--start-maximized,--disable-dev-shm-usage
ENV headless_chrome=true
ENV TAIKO_SKIP_DOCUMENTATION=true
# Uncomment the lines below to use chrome bundled with this image
#ENV TAIKO_SKIP_CHROMIUM_DOWNLOAD=true
#ENV TAIKO_BROWSER_PATH=/usr/bin/google-chrome
# Set working directory
WORKDIR /gauge
# Copy the local working folder
COPY . .
# Create an unprivileged user to run Taiko tests
RUN groupadd -r gauge && useradd -r -g gauge -G audio,video gauge && \
mkdir -p /home/gauge/.npm-packages/lib && \
chown -R gauge:gauge /home/gauge /gauge
USER gauge
# Install dependencies and plugins
RUN npm install -g @getgauge/cli && \
gauge install js && \
gauge install html-report && \
gauge install screenshot && \
gauge config check_updates false
# Default command on running the image
ENTRYPOINT ["npm", "test"]
and then the building process won't stop (see below):
=> [internal] load build definition from Dockerfile 0.2s
=> => transferring dockerfile: 2.03kB 0.0s
=> [internal] load .dockerignore 0.2s
=> => transferring context: 34B 0.0s
=> [internal] load metadata for docker.io/library/node:latest 0.6s
=> [internal] load build context 0.1s
=> => transferring context: 1.44kB 0.0s
=> [1/6] FROM docker.io/library/node@sha256:d5222e1ebd7dd7e9683f47a8861a4711cb4407a4830cbe04a582ca4986245700 0.0s
=> CACHED [2/6] RUN apt-get update && apt-get install -y wget gnupg ca-certificates && wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - && sh -c 'echo "deb [arch=amd6 0.0s
=> CACHED [3/6] WORKDIR /gauge 0.0s
=> CACHED [4/6] COPY . . 0.0s
=> CACHED [5/6] RUN groupadd -r gauge && useradd -r -g gauge -G audio,video gauge && mkdir -p /home/gauge/.npm-packages/lib && chown -R gauge:gauge /home/gauge /gauge 0.0s
=> CANCELED [6/6] RUN npm install -g @getgauge/cli && gauge install js && gauge install html-report && gauge install screenshot && gauge config check_updates false 31.3s
I let it run for about 10 minutes, but nothing happened, so I canceled it myself.
After some tests and research, I think the problem is this line
npm install -g @getgauge/cli &&
I changed the order in which the commands are executed (e.g. if I execute gauge install js first with a particular RUN command, it executes; then it stops at the command line above again).
Then I ran another test and tried to install a specific version of Gauge with the npm install -g @getgauge/cli@<version> command (in my test it was 1.1.1, because I had seen that in a GitHub example), and with that it worked. However, the current version is 1.4.4; I use that version on my local machine and, therefore, want to use it within the Docker container (plus there were some pretty useful bug fixes between these versions ...). Do you have any ideas how to fix this problem, or can you give me a hint about where to look up some information?
Thank you guys and happy holidays!
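For reference, a sketch of the version-pinned install the question describes as working (the pinned version here is the 1.1.1 from the question's own test; substitute whichever pin installs cleanly for you):

```dockerfile
# Sketch: pin the CLI version explicitly, as in the test described above.
RUN npm install -g @getgauge/cli@1.1.1 && \
    gauge install js && \
    gauge install html-report && \
    gauge install screenshot && \
    gauge config check_updates false
```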
My Dockerfile
FROM continuumio/miniconda3
RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
&& apt-get -y install --no-install-recommends g++ unixodbc-dev
# Copy environment.yml (if found) to a temp location so we update the environment.
COPY environment.yml /tmp/conda-tmp/
RUN if [ -f "/tmp/conda-tmp/environment.yml" ]; then /opt/conda/bin/conda env update -n base -f /tmp/conda-tmp/environment.yml; fi \
&& rm -rf /tmp/conda-tmp
RUN apt install -y gnupg curl
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
RUN curl https://packages.microsoft.com/config/debian/10/prod.list > /etc/apt/sources.list.d/mssql-release.list
RUN apt-get update
RUN ACCEPT_EULA=Y apt-get install -y msodbcsql17
# optional: for bcp and sqlcmd
RUN ACCEPT_EULA=Y apt-get install -y mssql-tools
RUN echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc
RUN . ~/.bashrc
# optional: for unixODBC development headers
RUN apt-get install -y unixodbc-dev
WORKDIR /workspace
COPY . .
ENTRYPOINT ["/bin/bash"]
When I am trying to build the docker image using docker build -t my-simulator . I am getting the followings:
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 1.09kB 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 34B 0.0s
=> [internal] load metadata for docker.io/continuumio/miniconda3:latest 1.1s
=> [auth] continuumio/miniconda3:pull token for registry-1.docker.io 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 11.16kB 0.0s
=> [ 1/15] FROM docker.io/continuumio/miniconda3@sha256:977263e8d1e476972fddab1c75fe050dd3cd17626390e874448bd92721fd659b 0.0s
=> CACHED [ 2/15] RUN apt-get update && export DEBIAN_FRONTEND=noninteractive && apt-get -y install --no-install-recommends g++ unixodbc-dev 0.0s
=> CACHED [ 3/15] COPY environment.yml /tmp/conda-tmp/ 0.0s
=> CACHED [ 4/15] RUN if [ -f "/tmp/conda-tmp/environment.yml" ]; then /opt/conda/bin/conda env update -n base -f /tmp/conda-tmp/environment.yml; fi && rm -rf /tmp/conda- 0.0s
=> CACHED [ 5/15] RUN apt install -y gnupg curl 0.0s
=> CACHED [ 6/15] RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - 0.0s
=> CACHED [ 7/15] RUN curl https://packages.microsoft.com/config/debian/10/prod.list > /etc/apt/sources.list.d/mssql-release.list 0.0s
=> CACHED [ 8/15] RUN apt-get update 0.0s
=> ERROR [ 9/15] RUN ACCEPT_EULA=Y apt-get install -y msodbcsql17 0.8s
------
> [ 9/15] RUN ACCEPT_EULA=Y apt-get install -y msodbcsql17:
#14 0.313 Reading package lists...
#14 0.651 Building dependency tree...
#14 0.736 Reading state information...
#14 0.771 Some packages could not be installed. This may mean that you have
#14 0.771 requested an impossible situation or if you are using the unstable
#14 0.771 distribution that some required packages have not yet been created
#14 0.771 or been moved out of Incoming.
#14 0.771 The following information may help to resolve the situation:
#14 0.771
#14 0.771 The following packages have unmet dependencies:
#14 0.810 libodbc1 : PreDepends: multiarch-support but it is not installable
#14 0.810 odbcinst1debian2 : PreDepends: multiarch-support but it is not installable
#14 0.817 E: Unable to correct problems, you have held broken packages.
------
executor failed running [/bin/sh -c ACCEPT_EULA=Y apt-get install -y msodbcsql17]: exit code: 100
It seems the issue is multiarch-support being not installable. I have tried these solutions (#1 and #2) without success.
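One hedged possibility: multiarch-support only exists up to Debian 10 (buster), so if the continuumio/miniconda3 base image has since moved to Debian 11 (bullseye), the hard-coded debian/10 prod.list pulls ODBC packages built for the wrong release. A sketch that derives the release from the image itself (assumes a standard Debian base with /etc/os-release providing VERSION_ID):

```dockerfile
# Sketch: pick the Microsoft repo matching the image's actual Debian release
# instead of hard-coding debian/10. VERSION_ID (e.g. "11") comes from
# /etc/os-release.
RUN . /etc/os-release \
    && curl "https://packages.microsoft.com/config/debian/${VERSION_ID}/prod.list" \
       > /etc/apt/sources.list.d/mssql-release.list \
    && apt-get update \
    && ACCEPT_EULA=Y apt-get install -y msodbcsql17
```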
FROM ruby:2.6-alpine
ENV CHROME_BIN=/usr/bin/chromium-browser
ENV CHROME_PATH=/usr/lib/chromium/
ENV CHROME_NO_SANDBOX=true
ENV RUBY_VER=2.6.9
RUN apk add git && apk add --update --no-cache jq curl firefox-esr xvfb mesa-dev mesa-gl chromium-chromedriver chromium build-base postgresql-dev tzdata imagemagick6-dev imagemagick-libs imagemagick imagemagick-c++ ruby-rmagick nodejs
RUN gem install rails -v '5.2.4'
WORKDIR /app
ADD Gemfile Gemfile.lock /app/
RUN gem install bundler
RUN bundle install
ADD . .
CMD ["puma"]
=> [1/8] FROM docker.io/library/ruby:2.6-alpine@sha256:382ce92de42ef027bf1bfe382c3f3c3878042c41c07da8b8aa41855db0894762 0.0s
=> CANCELED [internal] load build context 15.9s
=> => transferring context: 429.40MB 15.9s
=> CACHED [2/8] RUN apk add git && apk add --update --no-cache jq curl firefox-esr xvfb mesa-dev mesa-gl chromium-chromedriver chromium 0.0s
=> ERROR [3/8] RUN gem install rails -v '5.2.4' 15.9s
> [3/8] RUN gem install rails -v '5.2.4':
#6 15.29 ERROR: While executing gem ... (Gem::FilePermissionError)
#6 15.29 You don't have write permissions for the /usr/local/bundle directory.
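Separately from the permission error, the log shows a 429 MB build context being transferred before the step was canceled; a .dockerignore keeps COPY/ADD contexts small. A sketch (the entries below are typical guesses for a Rails app; adjust to your tree):

```
.git
log
tmp
node_modules
public/assets
```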
I'm trying to get a global package, installed via yarn, recognized inside the Docker image.
FROM ruby:2.7.2
RUN apt-get update -qq && apt-get install -y nodejs libvips-tools yarn
# Install all gems first.
# This hits the warm cache if unchanged so bundling is faster.
COPY Gemfile* /tmp/
WORKDIR /tmp
RUN bundle install
WORKDIR /sapco
COPY . /sapco
# Get yarn and install global required packages
RUN yarn global add mjml
EXPOSE 3000
# Start the main process.
CMD ["rails", "server", "-b", "0.0.0.0"]
I build this with docker build -f Dockerfile.dev .
I get the following error:
=> [internal] load build definition from Dockerfile.dev 0.0s
=> => transferring dockerfile: 504B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 35B 0.0s
=> [internal] load metadata for docker.io/library/ruby:2.7.2 1.2s
=> CACHED [1/8] FROM docker.io/library/ruby:2.7.2@sha256:abe7034da4092958d306c37aded76a751ea9d35d5c90d1ad9e92290561bd5f3f 0.0s
=> [internal] load build context 0.4s
=> => transferring context: 220.47kB 0.4s
=> [2/8] RUN apt-get update -qq && apt-get install -y nodejs libvips-tools yarn 38.2s
=> [3/8] COPY Gemfile* /tmp/ 0.1s
=> [4/8] WORKDIR /tmp 0.0s
=> [5/8] RUN bundle install 292.6s
=> [6/8] WORKDIR /sapco 0.0s
=> [7/8] COPY . /sapco 0.5s
=> ERROR [8/8] RUN yarn global add mjml 0.7s
------
> [8/8] RUN yarn global add mjml:
#12 0.567 Parsing scenario file global
#12 0.568 ERROR: [Errno 2] No such file or directory: 'global'
The issue is that yarn is also the name of another binary, provided by the cmdtest package.
I eventually traced it down to https://github.com/yarnpkg/yarn/issues/2821 and resolved my issue with these commands in the Dockerfile:
apt remove -y cmdtest
apt remove -y yarn
curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
apt update
apt install yarn
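In a Dockerfile, the same steps can be collapsed into a single layer; a sketch using only the commands listed above:

```dockerfile
# Sketch: the fix above as one Dockerfile layer (remove the cmdtest-provided
# yarn, add the Yarn apt repo, then install the real yarn).
RUN apt-get remove -y cmdtest yarn \
    && curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - \
    && echo "deb https://dl.yarnpkg.com/debian/ stable main" \
       > /etc/apt/sources.list.d/yarn.list \
    && apt-get update \
    && apt-get install -y yarn
```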