Convert from multi-stage build to single stage - docker

As I'm limited to Docker 1.x instead of 17.x on my cluster, I need some help converting this multi-stage build to a valid build for the older Docker version (multi-stage builds require Docker 17.05 or later).
Could someone help me?
FROM node:9-alpine as deps
ENV NODE_ENV=development
RUN apk update && apk upgrade && \
    apk add --no-cache bash
WORKDIR /app
COPY . .
RUN npm set progress=false \
    && npm config set depth 0 \
    && npm install --only=production \
    && cp -R node_modules/ ./prod_node_modules \
    && npm install
FROM deps as test
RUN rm -r ./prod_node_modules \
    && npm run lint
FROM node:9-alpine
RUN apk add --update tzdata
ENV PORT=3000
ENV NODE_ENV=production
WORKDIR /root/
COPY --from=deps /app .
COPY --from=deps /app/prod_node_modules ./node_modules
EXPOSE 3000
CMD ["node", "index.js"]
Currently it gives me an error on "FROM node:9-alpine as deps".

"FROM node:9-alpine as deps" means you are defining an intermediate image that you will be able to COPY from COPY --from=deps.
Having a single image means you don't need to COPY --from anymore, and you don't need "as deps" since everything happens in the same image (which will be bigger as a result)
So:
FROM node:9-alpine
ENV NODE_ENV=development
RUN apk update && apk upgrade && \
    apk add --no-cache bash
WORKDIR /app
COPY . .
RUN npm set progress=false \
    && npm config set depth 0 \
    && npm install --only=production \
    && cp -R node_modules/ ./prod_node_modules \
    && npm install
# Lint first: in a single image, deleting prod_node_modules now would
# break the final copy below, so keep it around until then.
RUN npm run lint
RUN apk add --update tzdata
ENV PORT=3000
ENV NODE_ENV=production
WORKDIR /root/
# Copy the app's contents (not the /app directory itself) into /root/,
# then keep only the production modules as node_modules.
RUN cp -r /app/. . \
    && rm -rf ./node_modules \
    && mv ./prod_node_modules ./node_modules
EXPOSE 3000
CMD ["node", "index.js"]
Only one FROM here.
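As a quick sanity check, the single-stage image builds and runs the same way as before (a minimal sketch; the myapp tag is only a placeholder):
# Build the single-stage image, then run it with the port from EXPOSE published.
docker build -t myapp .
docker run -p 3000:3000 myapp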

Related

Connection refused when building Docker container

Here is my Dockerfile. Once I execute it, it throws the error connect ECONNREFUSED 172.17.0.1:27017. Kindly help me out in this regard. Thank you.
FROM node:18-alpine
RUN npm install --global nodemon
RUN apk upgrade --update-cache --available && \
    apk add openssl && \
    apk add git && \
    rm -rf /var/cache/apk/*
RUN openssl version
RUN node --version
ENV MONGO_URI=mongodb://172.17.0.1:27017/dbname
RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
WORKDIR /home/node/app
COPY ./package.json ./
USER node
RUN npm install;
USER root
RUN apk del git
USER node
COPY --chown=node:node . ./
EXPOSE 8080
CMD ["npm","start"]

How to extract coverage report in multistage build?

I want to extract the coverage report while building a docker image in a multistage build. Previously I was executing the tests via image.inside using the Jenkins Docker plugin, but now I am running them with the following command, and I cannot find a way to extract the coverage report:
docker build -t myapp:test --cache-from registry/myapp:test --target test --build-arg BUILDKIT_INLINE_CACHE=1 .
Is there any way to mount the Jenkins workspace, the way the function below does, without running the docker image? There is a --output flag, but I could not understand how to use it, if it applies here. Or could it be done via RUN --mount=type=...?
image.inside('-u root -v $WORKSPACE/coverage:/var/app/coverage') {
    stage("Running Tests") {
        timeout(10) {
            withEnv(["NODE_ENV=production"]) {
                sh(script: "cd /var/app; yarn run test:ci")
            }
        }
    }
}
Dockerfile
FROM node:16.15.0-alpine3.15 as base
WORKDIR /var/app
RUN --mount=type=cache,target=/var/cache/apk \
    apk add --update --virtual build-dependencies build-base \
    curl \
    python3 \
    make \
    g++ \
    bash
COPY package*.json ./
COPY yarn.lock ./
COPY .solidarity ./
RUN --mount=type=cache,target=/root/.yarn \
    YARN_CACHE_FOLDER=/root/.yarn \
    yarn install --no-progress --frozen-lockfile --check-files && \
    yarn cache clean
COPY . .
FROM base as test
ENV NODE_ENV=production
RUN ["yarn", "run", "format:ci"]
RUN ["yarn", "run", "lint:ci"]
RUN ["yarn", "run", "test:ci"]
FROM base as builder
RUN yarn build
FROM node:16.15.0-alpine3.15 as production
WORKDIR /var/app
COPY --from=builder /var/app /var/app
CMD ["yarn", "start:envconsul"]
You can make a stage with the output you want to extract:
FROM node:16.15.0-alpine3.15 as base
WORKDIR /var/app
RUN --mount=type=cache,target=/var/cache/apk \
    apk add --update --virtual build-dependencies build-base \
    curl \
    python3 \
    make \
    g++ \
    bash
COPY package*.json ./
COPY yarn.lock ./
COPY .solidarity ./
RUN --mount=type=cache,target=/root/.yarn \
    YARN_CACHE_FOLDER=/root/.yarn \
    yarn install --no-progress --frozen-lockfile --check-files && \
    yarn cache clean
COPY . .
FROM base as test
ENV NODE_ENV=production
RUN ["yarn", "run", "format:ci"]
RUN ["yarn", "run", "lint:ci"]
RUN ["yarn", "run", "test:ci"]
FROM scratch as test-out
COPY --from=test /var/app/coverage/ /
FROM base as builder
RUN yarn build
FROM node:16.15.0-alpine3.15 as production
WORKDIR /var/app
COPY --from=builder /var/app /var/app
CMD ["yarn", "start:envconsul"]
Then you can build with:
docker build \
  --output "type=local,dest=${WORKSPACE}/coverage" \
  --target test-out .
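Note that both RUN --mount and --output are BuildKit features, so the build must run with BuildKit enabled. A minimal sketch for the classic builder (recent Docker versions and buildx enable BuildKit by default):
# Enable BuildKit, then export the test-out stage to the workspace as plain files.
export DOCKER_BUILDKIT=1
docker build \
  --output "type=local,dest=${WORKSPACE}/coverage" \
  --target test-out .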

version `GLIBC_2.29' not found

I am basing my Dockerfile on the rust base image.
When deploying my image to an Azure container, I receive this log:
./bot: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by ./bot)
./bot is my application.
The error also occurs when I perform docker run on my Linux Mint desktop.
How can I get GLIBC into my container?
Dockerfile
FROM rust:1.50
WORKDIR /usr/vectorizer/
COPY ./Cargo.toml /usr/vectorizer/Cargo.toml
COPY ./target/release/trampoline /usr/vectorizer/trampoline
COPY ./target/release/bot /usr/vectorizer/bot
COPY ./target/release/template.svg /usr/vectorizer/template.svg
RUN apt-get update && \
    apt-get dist-upgrade -y && \
    apt-get install -y musl-tools && \
    rustup target add x86_64-unknown-linux-musl
CMD ["./trampoline"]
Now, I don't totally understand the dependencies of your particular project, but the Dockerfile below should get you started.
The error means your binary was compiled against a newer glibc than the one in the runtime image, so what you want to do is compile in an image that has all of your dev dependencies and then move the build artifacts to a much smaller (but compatible) image.
FROM rust:1.50 as builder
WORKDIR /bot
ADD . ./
RUN cargo clean && \
    cargo build -vv --release
FROM debian:buster-slim
ARG APP=/usr/src/app
ENV APP_USER=appuser
RUN groupadd $APP_USER \
    && useradd -g $APP_USER $APP_USER \
    && mkdir -p ${APP}
# Copy the compiled binaries into the new container.
COPY --from=builder /bot/target/release/bot ${APP}/bot
# The CMD below runs ./trampoline, so it must be copied as well.
COPY --from=builder /bot/target/release/trampoline ${APP}/trampoline
RUN chown -R $APP_USER:$APP_USER ${APP}
USER $APP_USER
WORKDIR ${APP}
CMD ["./trampoline"]

Docker build command fails on COPY

Hi, I have a Dockerfile which is failing on the COPY command. It was running fine initially, but then it suddenly crashed during the build process. The Dockerfile basically sets up the dev environment and authenticates with GCP.
FROM ubuntu:16.04
## ENV Variables
ENV PYTHON_VERSION="3.6.5"
ENV BUCKET_NAME='detection-sandbox'
ENV DIRECTORY='/usr/local/gcloud'
# Update and Install packages
RUN apt-get update -y \
    && apt-get install -y \
    curl \
    wget \
    tar \
    xz-utils \
    bc \
    build-essential \
    cmake \
    curl \
    zlib1g-dev \
    libssl-dev \
    libsqlite3-dev \
    python3-pip \
    python3-setuptools \
    unzip \
    g++ \
    git \
    python-tk
# Install Python 3.6.5
RUN wget https://www.python.org/ftp/python/${PYTHON_VERSION}/Python-${PYTHON_VERSION}.tar.xz \
    && tar -xvf Python-${PYTHON_VERSION}.tar.xz \
    && rm -rf Python-${PYTHON_VERSION}.tar.xz \
    && cd Python-${PYTHON_VERSION} \
    && ./configure \
    && make install \
    && cd / \
    && rm -rf Python-${PYTHON_VERSION}
# Install pip
RUN curl -O https://bootstrap.pypa.io/get-pip.py \
    && python3 get-pip.py \
    && rm get-pip.py
# Add SNI support to Python
RUN pip --no-cache-dir install \
    pyopenssl \
    ndg-httpsclient \
    pyasn1
## Download and Install Google Cloud SDK
RUN mkdir -p /usr/local/gcloud \
    && curl https://sdk.cloud.google.com > install.sh \
    && bash install.sh --disable-prompts --install-dir=${DIRECTORY}
# Adding the package path to directory
ENV PATH $PATH:${DIRECTORY}/google-cloud-sdk/bin
# working directory
WORKDIR /usr/src/app
COPY requirements.txt ./ \
    testproject-264512-9de8b1b35153.json ./
It fails at this step :
Step 13/21 : COPY requirements.txt ./ testproject-264512-9de8b1b35153.json ./
COPY failed: stat /var/lib/docker/tmp/docker-builder942576416/testproject-264512-9de8b1b35153.json: no such file or directory
Any leads on this would be helpful.
How are you running the docker build command?
In the Docker best practices I've read that the build fails if you try to build your image from stdin using -.
Attempting to build a Dockerfile that uses COPY or ADD will fail if this syntax is used. The following example illustrates this:
# create a directory to work in
mkdir example
cd example
# create an example file
touch somefile.txt
docker build -t myimage:latest -<<EOF
FROM busybox
COPY somefile.txt .
RUN cat /somefile.txt
EOF
# observe that the build fails
...
Step 2/3 : COPY somefile.txt .
COPY failed: stat /var/lib/docker/tmp/docker-builder249218248/somefile.txt: no such file or directory
I've reproduced the issue... Here is my Dockerfile:
FROM alpine:3.7
## ENV Variables
ENV PYTHON_VERSION="3.6.5"
ENV BUCKET_NAME='detection-sandbox'
ENV DIRECTORY='/usr/local/gcloud'
# working directory
WORKDIR /usr/src/app
COPY kk.txt ./ \
    kk.2.txt ./
If I create the image by running docker build -t testimage:1 [DOCKERFILE_FOLDER], docker creates the image and it works correctly (note that repository names must be lowercase, so testImage:1 would be rejected).
However, if I try the same command from stdin:
docker build -t test:2 - <<EOF
FROM alpine:3.7
ENV PYTHON_VERSION="3.6.5"
ENV BUCKET_NAME='detection-sandbox'
ENV DIRECTORY='/usr/local/gcloud'
WORKDIR /usr/src/app
COPY kk.txt ./ kk.2.txt ./
EOF
I get the following error:
Step 1/6 : FROM alpine:3.7
---> 6d1ef012b567
Step 2/6 : ENV PYTHON_VERSION="3.6.5"
---> Using cache
---> 734d2a106144
Step 3/6 : ENV BUCKET_NAME='detection-sandbox'
---> Using cache
---> 18fba29fffdc
Step 4/6 : ENV DIRECTORY='/usr/local/gcloud'
---> Using cache
---> d926a3b4bc85
Step 5/6 : WORKDIR /usr/src/app
---> Using cache
---> 57a1868f5f27
Step 6/6 : COPY kk.txt ./ kk.2.txt ./
COPY failed: stat /var/lib/docker/tmp/docker-builder518467298/kk.txt: no such file or directory
It seems that docker builds the image from /var/lib/docker/tmp/ when you build from stdin; no build context is sent along with the Dockerfile, so ADD and COPY commands don't work.
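If you do need to build from stdin, a workaround (a sketch, assuming the Dockerfile and the copied files sit in the current directory) is to stream a tar archive instead, since docker treats a tar archive on stdin as a complete build context:
# A tar archive on stdin carries a build context, so COPY and ADD work.
tar -czf - Dockerfile kk.txt kk.2.txt | docker build -t test:2 -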
An incorrect source path is another common cause of this error.
Use
COPY ./directory/testproject-264512-9de8b1b35153.json /dir/
instead of
COPY testproject-264512-9de8b1b35153.json /dir/

ModuleNotFoundError When Copying a Host Directory to Container in DockerFile

I'm going crazy trying to ADD a directory from my host machine to my docker container. When building the container with docker-compose up --build, it seems to ADD just fine, but when I try to access the module in my app.py file, I get a ModuleNotFoundError.
My DockerFile contains the following:
FROM python:3.7-alpine
RUN apk update && \
    apk add --virtual build-deps gcc musl-dev && \
    apk add --no-cache postgresql-dev && \
    apk add alsa-lib-dev && \
    apk add pulseaudio-dev && \
    apk add postgresql-dev && \
    apk add ffmpeg-dev && \
    apk add ffmpeg && \
    rm -rf /var/cache/apk/*
COPY /scraper/requirements.txt requirements.txt
RUN pip install -r requirements.txt
ADD /common/testmodel /scraper/testmodel
WORKDIR home/scraper/
ENTRYPOINT ["python3", "-u", "app.py"]
CMD gunicorn -b 0.0.0.0:5000 --access-logfile - "app:app"
Then when building the image, the log shows:
Step 6/9 : ADD /common/testmodel home/scraper/testmodel
---> a7b27854d751
My project structure looks like the following:
-common
  -testmodel
    -test.py
-scraper
  -DockerFile
  -requirements
-docker-compose.yml
But in my app.py file, when I run from testmodel.test import TestClass, I get ModuleNotFoundError: No module named 'testmodel'.
Any help with this problem is greatly appreciated, as it has now taken up a much larger chunk of my day than I ever thought it would. Thank you very much.
I may be missing some context, but I think you have several issues:
You COPY /scraper... and ADD /common... -- are these directories hanging from root on your local machine?
You set WORKDIR after COPY and ADD, but generally (although not required) you'd set it first as a default destination; then you could COPY something . and ADD something ., and those destinations (.) would refer to your WORKDIR.
You use /home/scraper as your WORKDIR but you don't copy or add your files into it, so it is still empty at this point.
Your ENTRYPOINT references app.py, but your file is called test.py.
One useful debugging tool is to shell into containers to e.g. examine the directory structure to confirm it's as expected. Assuming your image is called scraper, you could:
docker build \
  --tag=scraper \
  --file=scraper/Dockerfile \
  . # Don't forget the period ;-)
Then, since Alpine's shell is called ash:
docker run \
  --interactive \
  --tty \
  scraper:latest ash
Or, if your Dockerfile has an ENTRYPOINT, then override it using:
docker run \
  --interactive \
  --tty \
  --entrypoint=ash \
  scraper:latest
and then you could browse the container's directory structure:
You'll default to /home/scraper (WORKDIR):
/home/scraper # ls -l
total 0
You may examine /scraper using:
/home/scraper # apk add tree
/home/scraper # tree /scraper
/scraper
└── testmodel
└── test.py
1 directory, 1 file
I'm not entirely clear what the correct solution would be for you, but I hope this helps you make progress:
FROM python:3.7-alpine
RUN apk update && \
    apk add --virtual build-deps gcc musl-dev && \
    apk add --no-cache postgresql-dev && \
    apk add alsa-lib-dev && \
    apk add pulseaudio-dev && \
    apk add postgresql-dev && \
    apk add ffmpeg-dev && \
    apk add ffmpeg && \
    rm -rf /var/cache/apk/*
WORKDIR /home/scraper/
COPY scraper/requirements.txt .
RUN pip install -r requirements.txt
# ADD of a directory copies its contents, so name the destination
# after the package to keep "from testmodel.test import ..." working.
ADD common/testmodel ./testmodel
ENTRYPOINT ["python3", "-u", "test.py"]
CMD gunicorn -b 0.0.0.0:5000 --access-logfile - "test:app"
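To confirm the module resolves before wiring up the app, you can build the image and test the import on its own (a sketch; the scraper tag matches the build command shown earlier):
docker build --tag=scraper --file=scraper/Dockerfile .
# Override the entrypoint so only the import is exercised.
docker run --rm --entrypoint=python3 scraper:latest -c "from testmodel.test import TestClass"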
