File copied to Docker image disappears - docker

I have this multi-stage Dockerfile: I build a program in the builder image, tar up its contents, copy the tarball into the main image, and untar it there. The "ls" commands in the build log show that the file was copied over and extracted, yet once the container starts and I go into it, I can no longer find the file. I don't know if this has anything to do with the fact that I mount the application's root directory as a volume; I did that to speed up rebuilds after code changes.
docker-compose.yml
version: "3"
services:
  web:
    build: .
    ports:
      - "5000:5000"
      - "5432:5432"
    volumes:
      - ".:/code"
    environment:
      - PORT=5000
      # TODO: Should be set to 0 for production
      - PYTHONUNBUFFERED=1
Dockerfile
# Build lab-D
FROM gcc:8.2.0 as builder
RUN apt-get update && apt-get install -y libxerces-c-dev
WORKDIR /lab-d/
RUN git clone https://github.com/lab-d/lab-d.git
WORKDIR /lab-d/lab-d/
RUN autoreconf -if
RUN ./configure --enable-silent-rules 'CFLAGS=-g -O0 -w' 'CXXFLAGS=-g -O0 -w' 'LDFLAGS=-g -O0 -w'
RUN make
RUN make install
WORKDIR /lab-d/
RUN ls
RUN tar -czf labd.tar.gz lab-d
# Main Image
FROM library/python:3.7-stretch
RUN apt-get update && apt-get install -y python3 python3-pip \
postgresql-client \
# lab-D requires this library
libxerces-c-dev \
# For VIM
apt-file
RUN apt-file update && apt-get install -y vim
RUN pip install --upgrade pip
COPY requirements.txt /
RUN pip3 install --trusted-host pypi.org -r /requirements.txt
RUN pwd
RUN ls .
COPY --from=builder /lab-d/labd.tar.gz /code/labd.tar.gz
WORKDIR /code
RUN pwd
RUN ls .
RUN tar -xzf labd.tar.gz
RUN ls .
run pwd
RUN ls .
CMD ["bash", "start.sh"]
docker-compose build --no-cache
...
Step 19/29 : RUN pwd
---> Running in a856867bf69a
/
Removing intermediate container a856867bf69a
---> f1ee3dca8500
Step 20/29 : RUN ls .
---> Running in ee8da6874808
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
requirements.txt
root
run
sbin
srv
sys
tmp
usr
var
Removing intermediate container ee8da6874808
---> e8aec80955c9
Step 21/29 : COPY --from=builder /lab-d/labd.tar.gz /code/labd.tar.gz
---> 72d14ab4e01f
Step 22/29 : WORKDIR /code
---> Running in 17873e785c17
Removing intermediate container 17873e785c17
---> 57e8361767ca
Step 23/29 : RUN pwd
---> Running in abafd210abcb
/code
Removing intermediate container abafd210abcb
---> c6f430e1b362
Step 24/29 : RUN ls .
---> Running in 40b9e85261c2
labd.tar.gz
Removing intermediate container 40b9e85261c2
---> f9ee8e04d065
Step 25/29 : RUN tar -xzf labd.tar.gz
---> Running in 6e60ce7e1886
Removing intermediate container 6e60ce7e1886
---> 654d3c791798
Step 26/29 : RUN ls .
---> Running in 0f445b35f399
lab-d
labd.tar.gz
Removing intermediate container 0f445b35f399
---> 7863a15534b1
Step 27/29 : run pwd
---> Running in 9658c6170bde
/code
Removing intermediate container 9658c6170bde
---> 8d8e472a1b95
Step 28/29 : RUN ls .
---> Running in 19da5b77f5b6
lab-d
labd.tar.gz
Removing intermediate container 19da5b77f5b6
---> 140645efadbc
Step 29/29 : CMD ["bash", "start.sh"]
---> Running in 02b006bdf868
Removing intermediate container 02b006bdf868
---> 28d819321035
Successfully built 28d819321035
Successfully tagged -server_web:latest
start.sh
#!/bin/bash
# Start the SQL Proxy (Local-only)
pwd
ls .
./cloud_sql_proxy -instances=api-project-123456789:us-central1:sq=tcp:5432 \
-credential_file=./config/google_service_account.json &
ls .
# Override with CircleCI for other environments
cp .env.development .env
ls .
python3 -u ./server/server.py

In your Dockerfile, you
COPY --from=builder /lab-d/labd.tar.gz /code/labd.tar.gz
WORKDIR /code
RUN tar -xzf labd.tar.gz
But then your docker-compose.yml specifies
volumes:
  - ".:/code"
That causes the current directory on the host to be bind-mounted over /code in the container at run time, hiding every last bit of work your Dockerfile did there.
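If you want to keep the ".:/code" bind mount for development, one way out is to unpack the tarball somewhere the mount doesn't cover and reference it from start.sh if needed. A minimal sketch, assuming /opt is an acceptable install location (the path is an arbitrary choice, not something from your setup):
# Main image: extract outside the bind-mounted /code
COPY --from=builder /lab-d/labd.tar.gz /opt/labd.tar.gz
WORKDIR /opt
RUN tar -xzf labd.tar.gz && rm labd.tar.gz
# lab-d now lives at /opt/lab-d, untouched by the volume
WORKDIR /code
Alternatively, remove the ".:/code" volume from docker-compose.yml (at least outside local development) so the /code baked into the image is what the container actually sees.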

Related

Git clone not copying into WORKDIR [Dockerfile]

While building a Docker image, RUN git clone rep#rep.com works, but I'm not able to move app.py, which is in the git repo, into the WORKDIR set in the Dockerfile.
You can see my git repo and check it against my Dockerfile; please correct me if I am wrong (if my Dockerfile is wrong, please fix it).
What I need: only app.py should end up in the container's WORKDIR from the git repo.
Here is My Dockerfile
FROM python:3
RUN apt-get update && apt-get install -y \
unzip \
git \
curl
RUN git clone https://github.com/testgithub-trial/docktest.git
WORKDIR /usr/src/app
ADD /docktest /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD [ "python", "app.py" ]
Here is my folder structure. (Note: requirements.txt is only present locally, not in the git repo.)
Here is my Output Terminal.
Sending build context to Docker daemon 4.096kB
Step 1/9 : FROM python:3
---> e2e732b7951f
Step 2/9 : RUN apt-get update && apt-get install -y build-essential libpng-dev libjpeg62-turbo-dev libfreetype6-dev locales zip jpegoptim optipng pngquant gifsicle vim unzip git curl
---> Using cache
---> 93a0e5877ac6
Step 3/9 : RUN git clone https://github.com/testgithub-trial/docktest.git
---> Using cache
---> 36313099edf8
Step 4/9 : WORKDIR /usr/src/app
---> Using cache
---> 35c1e7a26f44
Step 5/9 : ADD /docktest /usr/src/app
ADD failed: file not found in build context or excluded by .dockerignore: stat docktest: file does not exist
You'll want to RUN mv /docktest/app.py /usr/src/app/ (or move the whole directory's contents). ADD is for files in the build context (your machine), not ones that already exist inside the image itself.
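A corrected Dockerfile might look like the sketch below (assuming app.py sits at the top level of the cloned repo):
FROM python:3
RUN apt-get update && apt-get install -y \
    unzip \
    git \
    curl
RUN git clone https://github.com/testgithub-trial/docktest.git
WORKDIR /usr/src/app
# ADD/COPY read from the build context, so use RUN to move files already inside the image
RUN mv /docktest/app.py .
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
CMD [ "python", "app.py" ]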

Dockerfile won't install cron

I am trying to install cron via my Dockerfile, so that docker-compose can create a dedicated cron container that uses a different entrypoint and regularly spins up another container to run a script, removing it afterwards. I'm trying to follow the Separating Cron From Your Application Services section of this guide: https://www.cloudsavvyit.com/9033/how-to-use-cron-with-your-docker-containers/
I know that order of operations matters, and I wonder if I have it misconfigured in my Dockerfile:
FROM swift:5.3-focal as build
RUN export DEBIAN_FRONTEND=noninteractive DEBCONF_NONINTERACTIVE_SEEN=true \
&& apt-get -q update \
&& apt-get -q dist-upgrade -y \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /build
RUN apt-get update && apt-get install -y cron
COPY example-crontab /etc/cron.d/example-crontab
RUN chmod 0644 /etc/cron.d/example-crontab &&\
crontab /etc/cron.d/example-crontab
COPY ./Package.* ./
RUN swift package resolve
COPY . .
RUN swift build --enable-test-discovery -c release
WORKDIR /staging
RUN cp "$(swift build --package-path /build -c release --show-bin-path)/Run" ./
RUN [ -d /build/Public ] && { mv /build/Public ./Public && chmod -R a-w ./Public; } || true
RUN [ -d /build/Resources ] && { mv /build/Resources ./Resources && chmod -R a-w ./Resources; } || true
# ================================
# Run image
# ================================
FROM swift:5.3-focal-slim
RUN export DEBIAN_FRONTEND=noninteractive DEBCONF_NONINTERACTIVE_SEEN=true && \
apt-get -q update && apt-get -q dist-upgrade -y && rm -r /var/lib/apt/lists/*
RUN useradd --user-group --create-home --system --skel /dev/null --home-dir /app vapor
WORKDIR /app
COPY --from=build --chown=vapor:vapor /staging /app
USER vapor:vapor
EXPOSE 8080
ENTRYPOINT ["./Run"]
CMD ["serve", "--env", "production", "--hostname", "0.0.0.0", "--port", "8080"]
This is the relevant portion of my docker-compose file:
services:
  app:
    image: prizmserver:latest
    build:
      context: .
    environment:
      <<: *shared_environment
    volumes:
      - $PWD/.env:/app/.env
    links:
      - db:db
    ports:
      - '8080:8080'
    # user: '0' # uncomment to run as root for testing purposes even though Dockerfile defines 'vapor' user.
    command: ["serve", "--env", "production", "--hostname", "0.0.0.0", "--port", "8080"]
  cron:
    image: prizmserver:latest
    entrypoint: /bin/bash
    command: ["cron", "-f"]
This is my example-scheduled-task.sh:
#!/bin/bash
timestamp=`date +%Y/%m/%d-%H:%M:%S`
echo "System path is $PATH at $timestamp"
And this is my crontab file:
*/5 * * * * /usr/bin/sh /example-scheduled-task.sh
My script example-scheduled-task.sh and my crontab example-crontab live inside my application folder where this Dockerfile and docker-compose.yml live.
Why won't my cron container launch?
In a multi-stage build, only the last FROM stage is used to generate the final image.
For example, with the following Dockerfile, a.txt can only be seen in the first stage; it is absent from the final image.
Dockerfile:
FROM python:3.9-slim-buster
WORKDIR /tmp
RUN touch a.txt
RUN ls /tmp
FROM ubuntu:16.04
RUN ls /tmp
Execution:
# docker build -t abc:1 . --no-cache
Sending build context to Docker daemon 2.048kB
Step 1/6 : FROM python:3.9-slim-buster
---> c2f204720fdd
Step 2/6 : WORKDIR /tmp
---> Running in 1e6ed4ef521d
Removing intermediate container 1e6ed4ef521d
---> 25282e6f7ed6
Step 3/6 : RUN touch a.txt
---> Running in b639fcecff7e
Removing intermediate container b639fcecff7e
---> 04985d00ed4c
Step 4/6 : RUN ls /tmp
---> Running in bfc2429d6570
a.txt
tmp6_uo5lcocacert.pem
Removing intermediate container bfc2429d6570
---> 3356850a7653
Step 5/6 : FROM ubuntu:16.04
---> 065cf14a189c
Step 6/6 : RUN ls /tmp
---> Running in 19755da110b8
Removing intermediate container 19755da110b8
---> 890f13e709dd
Successfully built 890f13e709dd
Successfully tagged abc:1
Back to your example: you copy the crontab into the swift:5.3-focal build stage, but the final stage is based on swift:5.3-focal-slim, so the resulting image won't contain that crontab (or the cron package you installed).
EDIT:
Your compose entry for cron also needs updating, as follows:
cron:
  image: prizmserver:latest
  entrypoint: cron
  command: ["-f"]
cron doesn't need to be started through /bin/bash; overriding the entrypoint with cron directly does the trick.
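For that to work, cron and the crontab also have to exist in the final image, not just the build stage. A minimal sketch of the additions to the run image, assuming the cron service is allowed to run as root (cron won't start as the vapor user; the commented-out user: '0' line in your compose file is one way to arrange that for the cron service):
FROM swift:5.3-focal-slim
RUN export DEBIAN_FRONTEND=noninteractive DEBCONF_NONINTERACTIVE_SEEN=true && \
    apt-get -q update && apt-get -q install -y cron && rm -rf /var/lib/apt/lists/*
# Install the schedule and the script it runs into the final image
COPY example-crontab /etc/cron.d/example-crontab
COPY example-scheduled-task.sh /example-scheduled-task.sh
RUN chmod 0644 /etc/cron.d/example-crontab && crontab /etc/cron.d/example-crontab
# ... remainder of the run stage (vapor user, WORKDIR, COPY --from=build, ENTRYPOINT) unchanged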

Why does Docker report the entrypoint file doesn't exist when ls reports it does exist?

I'm using Docker v 19. I have this at the end of my web/Dockerfile ...
FROM python:3.7-slim
RUN apt-get update && apt-get install
RUN apt-get install -y dos2unix
RUN apt-get install -y libmariadb-dev-compat libmariadb-dev
RUN apt-get update \
&& apt-get install -y --no-install-recommends gcc \
&& rm -rf /var/lib/apt/lists/*
RUN python -m pip install --upgrade pip
WORKDIR /app/
COPY requirements.txt requirements.txt
COPY entrypoint.sh entrypoint.sh
RUN tr -d '\r' < entrypoint.sh > /app/entrypoint2.sh
RUN ls /app/entrypoint2.sh
RUN ls /app/
RUN python -m pip install -r requirements.txt
RUN ls /app/entrypoint.sh
RUN dos2unix /app/entrypoint.sh
RUN ls /app/entrypoint.sh
RUN chmod +x /app/*.sh
RUN ls ./
ENTRYPOINT ["./entrypoint2.sh"]
However, when I run "docker-compose up" (which references the above), the entrypoint file can't be found, which is baffling because the preceding "ls ./" shows that it exists ...
Step 14/19 : RUN ls /app/entrypoint.sh
---> Running in db8c11ce3fad
/app/entrypoint.sh
Removing intermediate container db8c11ce3fad
---> c23e69de2a86
Step 15/19 : RUN dos2unix /app/entrypoint.sh
---> Running in 9e5bbd1c0b9a
dos2unix: converting file /app/entrypoint.sh to Unix format...
Removing intermediate container 9e5bbd1c0b9a
---> 32a069690845
Step 16/19 : RUN ls /app/entrypoint.sh
---> Running in 8a53e70f219b
/app/entrypoint.sh
Removing intermediate container 8a53e70f219b
---> 5444676f45fb
Step 17/19 : RUN chmod +x /app/*.sh
---> Running in 5a6b295217c8
Removing intermediate container 5a6b295217c8
---> 8b5bfa4fd75a
Step 18/19 : RUN ls ./
---> Running in 9df3acb7deb7
entrypoint.sh
entrypoint2.sh
requirements.txt
Removing intermediate container 9df3acb7deb7
---> 009f8bbe18c8
Step 19/19 : ENTRYPOINT ["./entrypoint2.sh"]
---> Running in 41a7e28641a7
Removing intermediate container 41a7e28641a7
---> 34a7d4fceb8b
Successfully built 34a7d4fceb8b
Successfully tagged maps_web:latest
WARNING: Image for service web was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
...
Creating maps_web_1 ... error
ERROR: for maps_web_1 Cannot start service web: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"./entrypoint2.sh\": stat ./entrypoint2.sh: no such file or directory": unknown
How do I tell Docker how to reference that entrypoint file? The docker-compose.yml file section including the above is below
web:
  restart: always
  build: ./web
  ports: # to access the container from outside
    - "8000:8000"
  env_file: .env
  environment:
    DEBUG: 'true'
  command: /usr/local/bin/gunicorn directory.wsgi:application --reload -w 2 -b :8000
  volumes:
    - ./web/:/app
  depends_on:
    - mysql
Based on the provided Dockerfile and docker-compose file, you are doing the following:
1. Copy the files (entrypoint + requirements) to /app.
2. Install the needed packages.
3. Start the container with a volume that overwrites the content of /app, which is what causes the issue.
To solve the issue, do one of the following:
The first option is to copy all the data from ./web into the image and remove the volume.
Dockerfile: add the following lines (with build: ./web, the build context root is the web folder, so COPY . copies its entire contents)
WORKDIR /app/
COPY . /app
docker-compose.yml: remove these lines
volumes:
  - ./web/:/app
The second option is to change the path of the entrypoint script so it does not conflict with the volume:
Dockerfile
RUN tr -d '\r' < entrypoint.sh > /entrypoint2.sh
RUN chmod +x /entrypoint2.sh
ENTRYPOINT ["/entrypoint2.sh"]
Note that ENTRYPOINT must now use the absolute path /entrypoint2.sh: the script lives at the image root, outside the mounted /app, and the relative ./entrypoint2.sh would still resolve inside the masked working directory.
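To convince yourself the bind mount is what hides the image's files, you can list both locations with the volume attached (a quick check, assuming the service is named web as in your compose file):
docker-compose run --entrypoint /bin/sh web -c "ls / /app"
/entrypoint2.sh shows up under / regardless of the mount, while /app reflects whatever is in ./web on the host.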

How to create and add a user with a password in an Alpine Dockerfile?

The following Dockerfile works fine for Ubuntu:
FROM ubuntu:20.04
SHELL ["/bin/bash", "-c"]
ARG user=hakond
ARG home=/home/$user
RUN useradd --create-home -s /bin/bash $user \
&& echo $user:ubuntu | chpasswd \
&& adduser $user sudo
WORKDIR $home
USER $user
COPY --chown=$user entrypoint.sh .
RUN chmod +x entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]
where entrypoint.sh is
#! /bin/bash
exec bash
How can I do the same in Alpine? I tried:
FROM alpine:3.12
SHELL ["/bin/sh", "-c"]
RUN apk add --no-cache bash
ARG user=hakond
ARG home=/home/$user
RUN addgroup -S docker
RUN adduser \
--disabled-password \
--gecos "" \
--home $home \
--ingroup docker \
$user
WORKDIR $home
USER $user
COPY chown=$user entrypoint.sh .
RUN chmod +x entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]
But this fails to build:
$ docker build -t alpine-user .
Sending build context to Docker daemon 5.12kB
Step 1/12 : FROM alpine:3.12
---> a24bb4013296
Step 2/12 : SHELL ["/bin/sh", "-c"]
---> Using cache
---> ce9a303c96c8
Step 3/12 : RUN apk add --no-cache bash
---> Running in e451a2481846
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/community/x86_64/APKINDEX.tar.gz
(1/4) Installing ncurses-terminfo-base (6.2_p20200523-r0)
(2/4) Installing ncurses-libs (6.2_p20200523-r0)
(3/4) Installing readline (8.0.4-r0)
(4/4) Installing bash (5.0.17-r0)
Executing bash-5.0.17-r0.post-install
Executing busybox-1.31.1-r16.trigger
OK: 8 MiB in 18 packages
Removing intermediate container e451a2481846
---> 7b5f7f87bdf6
Step 4/12 : ARG user=hakond
---> Running in 846b4b12856e
Removing intermediate container 846b4b12856e
---> a0453cb6706e
Step 5/12 : ARG home=/home/$user
---> Running in 06550ad3f550
Removing intermediate container 06550ad3f550
---> 994d71fb0281
Step 6/12 : RUN addgroup -S docker
---> Running in 70aaec6f40e0
Removing intermediate container 70aaec6f40e0
---> 5188ed7b234c
Step 7/12 : RUN adduser --disabled-password --gecos "" --home $home --ingroup docker $user
---> Running in ff36a7f7e99b
Removing intermediate container ff36a7f7e99b
---> 97f481916feb
Step 8/12 : WORKDIR $home
---> Running in 8d7f0411d6e3
Removing intermediate container 8d7f0411d6e3
---> 5de66f4b5d4e
Step 9/12 : USER $user
---> Running in ac4abac7c3a8
Removing intermediate container ac4abac7c3a8
---> dffd2185df1f
Step 10/12 : COPY chown=$user entrypoint.sh .
COPY failed: stat /var/snap/docker/common/var-lib-docker/tmp/docker-builder615220199/chown=hakond: no such file or directory
You created your new user successfully; you just wrote chown instead of --chown in your COPY command.
Your Dockerfile should look like:
FROM alpine:3.12
SHELL ["/bin/sh", "-c"]
RUN apk add --no-cache bash
ARG user=hakond
ARG home=/home/$user
RUN addgroup -S docker
RUN adduser \
--disabled-password \
--gecos "" \
--home $home \
--ingroup docker \
$user
WORKDIR $home
USER $user
COPY --chown=$user entrypoint.sh .
RUN chmod +x entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]
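As a quick smoke test (not part of the fix), you can build and run the image and check which user you land in:
docker build -t alpine-user .
docker run --rm -it alpine-user
# entrypoint.sh exec's bash, so you get a shell; whoami should print hakond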

Docker multi-stage build cannot find file in second stage

My environment is:
Ubuntu 18.04
Docker 19.03.11
I am working on a Flask app and am managing services with Docker. Here is my compose file:
version: '3.6'
x-build-args: &build_args
  INSTALL_PYTHON_VERSION: 3.8
  INSTALL_NODE_VERSION: 12
x-default-volumes: &default_volumes
  volumes:
    - ./:/app
    - node-modules:/app/node_modules
    - ./dev.db:/tmp/dev.db
services:
  flask-dev:
    build:
      context: .
      target: development
      args:
        <<: *build_args
        user: abc
    image: "testapp-development"
    ports:
      - "5000:5000"
      - "2992:2992"
    <<: *default_volumes
  flask-prod:
    build:
      context: .
      target: production
      args:
        <<: *build_args
    image: "testapp-production"
    ports:
      - "5000:5000"
    environment:
      FLASK_ENV: production
      FLASK_DEBUG: 0
      LOG_LEVEL: info
      GUNICORN_WORKERS: 4
    <<: *default_volumes
  manage:
    build:
      context: .
      target: manage
    environment:
      FLASK_ENV: production
      FLASK_DEBUG: 0
    image: "testapp-manage"
    stdin_open: true
    tty: true
    <<: *default_volumes
volumes:
  node-modules:
  static-build:
  dev-db:
My Dockerfile:
# ==================================== BASE ====================================
ARG INSTALL_PYTHON_VERSION=${INSTALL_PYTHON_VERSION:-3.7}
FROM python:${INSTALL_PYTHON_VERSION}-slim-buster AS base
RUN apt-get update
RUN apt-get install -y \
curl \
gcc
ARG INSTALL_NODE_VERSION=${INSTALL_NODE_VERSION:-12}
RUN curl -sL https://deb.nodesource.com/setup_${INSTALL_NODE_VERSION}.x | bash -
RUN apt-get install -y \
nodejs \
&& apt-get -y autoclean
ARG user=sid
# Add the user and their group
RUN groupadd -r ${user} && useradd -m -r -l -g ${user} ${user}
# the /app directory contains things that npm needs
# user won't have permissions to this, so copy it
# into their home dir
WORKDIR /app
COPY --chown=${user}:${user} . /home/${user}/
USER ${user}
WORKDIR /home/${user}/app
ENV PATH="/home/${user}/.local/bin:${PATH}"
RUN npm install
# ================================= DEVELOPMENT ================================
FROM base AS development
RUN pip install --user -r requirements/dev.txt
EXPOSE 2992
EXPOSE 5000
CMD [ "npm", "start" ]
# ================================= PRODUCTION =================================
FROM base AS production
RUN pip install --user -r requirements/prod.txt
COPY supervisord.conf /etc/supervisor/supervisord.conf
COPY supervisord_programs /etc/supervisor/conf.d
EXPOSE 5000
ENTRYPOINT ["/bin/bash", "shell_scripts/supervisord_entrypoint.sh"]
CMD ["-c", "/etc/supervisor/supervisord.conf"]
# =================================== MANAGE ===================================
FROM base AS manage
RUN pip install --user -r requirements/dev.txt
ENTRYPOINT [ "flask" ]
Both the development and production targets are failing.
Here is the output I'm getting from running docker-compose build flask-dev:
Building flask-dev
Step 1/20 : ARG INSTALL_PYTHON_VERSION=${INSTALL_PYTHON_VERSION:-3.7}
Step 2/20 : FROM python:${INSTALL_PYTHON_VERSION}-slim-buster AS base
---> 38cd21c9e1a8
Step 3/20 : RUN apt-get update
---> Using cache
---> 5d431961b77a
Step 4/20 : RUN apt-get install -y curl gcc
---> Using cache
---> caeafb0035dc
Step 5/20 : ARG INSTALL_NODE_VERSION=${INSTALL_NODE_VERSION:-12}
---> Using cache
---> 6e8a1eb59d3c
Step 6/20 : RUN curl -sL https://deb.nodesource.com/setup_${INSTALL_NODE_VERSION}.x | bash -
---> Using cache
---> 06fe96eb13e7
Step 7/20 : RUN apt-get install -y nodejs && apt-get -y autoclean
---> Using cache
---> 1e085132d325
Step 8/20 : ARG user=sid
---> Using cache
---> 3b3faf180389
Step 9/20 : RUN groupadd -r ${user} && useradd -m -r -l -g ${user} ${user}
---> Running in 9672d6f8b64d
Removing intermediate container 9672d6f8b64d
---> 02a5c2602513
Step 10/20 : WORKDIR /app
---> Running in bd48ac652908
Removing intermediate container bd48ac652908
---> 92c16bf347d4
Step 11/20 : COPY --chown=${user}:${user} . /home/${user}/
---> 2af997c5a255
Step 12/20 : USER ${user}
---> Running in 6409cf8784ee
Removing intermediate container 6409cf8784ee
---> 012d9cf92f31
Step 13/20 : WORKDIR /home/${user}/app
---> Running in a12b164c39dd
Removing intermediate container a12b164c39dd
---> db6fba37f948
Step 14/20 : ENV PATH="/home/${user}/.local/bin:${PATH}"
---> Running in eb13f4786b17
Removing intermediate container eb13f4786b17
---> a32249e3169b
Step 15/20 : RUN npm install
---> Running in 8aefdd56e8f2
npm WARN deprecated fsevents#1.2.13: fsevents 1 will break on node v14+ and could be using insecure binaries. Upgrade to fsevents 2.
npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents#~2.1.2 (node_modules/chokidar/node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents#2.1.3: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents#^1.2.7 (node_modules/watchpack-chokidar2/node_modules/chokidar/node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents#1.2.13: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
audited 712 packages in 2.873s
43 packages are looking for funding
run `npm fund` for details
found 0 vulnerabilities
Removing intermediate container 8aefdd56e8f2
---> 737faa9b974e
Step 16/20 : FROM base AS development
---> 737faa9b974e
Step 17/20 : RUN pip install --user -r requirements/dev.txt
---> Running in af5508643877
ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements/dev.txt'
ERROR: Service 'flask-dev' failed to build: The command '/bin/sh -c pip install --user -r requirements/dev.txt' returned a non-zero code: 1
I think my approach to this is wrong. I need the Flask app to work out of the app volume I'm using in the compose file.
What I think I'm misunderstanding is multi-stage builds. Do I need extra steps in the development, production, and manage stages to be able to reach the /home/${user}/app directory? From what I understand, ARGs defined in earlier stages are not available in later ones, but is the work done in a stage also unavailable to subsequent stages?
I have also tried using COPY --from=base in the second stage with the full path to where the app directory sits during the base build stage, but this isn't correct either.
The relevant parts of the project directory are:
testapp/
  docker-compose.yml
  Dockerfile
  testapp/
    <web app code>
  requirements/
    dev.txt
    prod.txt
You've copied your files into a different directory:
WORKDIR /app
COPY --chown=${user}:${user} . /home/${user}/
USER ${user}
WORKDIR /home/${user}/app
The requirements directory will be in /home/${user}/ while your relative run command will use the workdir to look in /home/${user}/app. You likely want to copy into the app subdirectory:
COPY --chown=${user}:${user} . /home/${user}/app
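Because each later stage begins with FROM base, it inherits base's entire filesystem, so once the COPY puts the code in the right place, no COPY --from=base is needed in the development, production, or manage stages. The fixed section of the base stage would read (a sketch; only the COPY destination changes):
WORKDIR /app
COPY --chown=${user}:${user} . /home/${user}/app
USER ${user}
WORKDIR /home/${user}/app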