Dockerfile Run Python Pipenv Build Fails - docker

Learning how to dockerize applications, I ran into a snag where my Docker build fails, outputting:
app_1 | Traceback (most recent call last):
app_1 | File "app.py", line 4, in <module>
app_1 | from flask import Flask, render_template, request, json
app_1 | ModuleNotFoundError: No module named 'flask'
bucketlist-flask-mysql_app_1 exited with code 1
The directory structure in the container is:
/app
/app -> app.py Pipfile.lock
The directory in the repository is
Dockerfile docker-compose.yml /src
/src -> Pipfile Pipfile.lock sql_scripts/ FlaskApp/app.py
The Dockerfile is:
# alpine with py3.8 - reqd version of python for pipenv
FROM python:3.8-alpine
#EXPOSE port, default is 5000, app uses 5000
EXPOSE 5000
# create a directory to run the app in
WORKDIR /app
# install pip system-wide
RUN pip install pipenv
#move the files into /app
COPY src/Pipfile.lock /app
# add the application files
COPY src/FlaskApp /app
# run the application at launch
RUN pipenv install --ignore-pipfile
CMD ["pipenv", "run", "python3", "app.py"]
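As an aside, a common alternative in containers is to skip the virtualenv entirely, since the container itself already provides isolation. A sketch of the same Dockerfile using pipenv's --system flag (untested against this particular app; paths follow the original file):

```dockerfile
FROM python:3.8-alpine
EXPOSE 5000
WORKDIR /app
RUN pip install pipenv
COPY src/Pipfile.lock /app
COPY src/FlaskApp /app
# --system installs the locked dependencies into the container's own
# Python instead of creating a virtualenv, so no "pipenv run" is needed.
RUN pipenv install --system --ignore-pipfile
CMD ["python3", "app.py"]
```

This also sidesteps any question of which directory pipenv keys its virtualenv to.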
and the docker-compose is:
version: "3"
services:
  app:
    build: .
    links:
      - db
    ports:
      - "5000:5000"
  db:
    image: mariadb
    restart: always
    ports:
      - "32000:3306"
    environment:
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - ./src/sql_scripts:/docker-entrypoint-initdb.d/:ro
I've done quite a bit of iteration and troubleshooting. If I start a python:3.8-alpine container and manually copy over Pipfile.lock and app.py, I can run in sh:
pip install pipenv
pipenv install --ignore-pipfile
pipenv run python3 app.py
When run from sh manually, the application builds and runs perfectly. My best guess at this point is that the processes may be running concurrently, not allowing the pipenv install to finish before the CMD executes.
For reference, I believe the Python itself is fine, but the first 4 lines of app.py are:
#!/bin/bash
"exec" "pipenv" "run" "python3" "$(pwd | sed 's?src.*?src/FlaskApp/app.py?g')"
#START: app.py code \/
from flask import Flask, render_template, request, json

Solution!
After a lot of troubleshooting I solved the problem by purging my Docker containers. The real lesson: after a change to your Docker files, you need to run docker-compose down before running docker-compose up again. I had assumed this happened automatically when shutting down containers and didn't know about the docker-compose down command.

Related

Docker container fails on Windows Powershell succeeds on WSL2 with identical Dockerfile and docker-compose

Problem Description
I have a docker image which I build and run using docker-compose. Normally I develop on WSL2, and when running docker-compose up --build the image builds and runs successfully. On another machine, using Windows powershell, with an identical clone of the code, executing the same command successfully builds the image, but gives an error when running.
Error
[+] Running 1/1
- Container fastapi-service Created 0.0s
Attaching to fastapi-service
fastapi-service | exec /start_reload.sh: no such file or directory
fastapi-service exited with code 1
I'm fairly experienced using Docker, but am a complete novice with PowerShell and developing on Windows more generally. Is there a difference in Dockerfile construction in this context, or a difference in the execution of COPY and RUN statements?
Code snippets
Included are all parts of the code required to replicate the error.
Dockerfile
FROM tiangolo/uvicorn-gunicorn:python3.7
COPY requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
COPY ./start.sh /start.sh
RUN chmod +x /start.sh
COPY ./start_reload.sh /start_reload.sh
RUN chmod +x /start_reload.sh
COPY ./data /data
COPY ./app /app
EXPOSE 8000
CMD ["/start.sh"]
docker-compose.yml
services:
  web:
    build: .
    container_name: "fastapi-service"
    ports:
      - "8000:8000"
    volumes:
      - ./app:/app
    command: /start_reload.sh
start_reload.sh
This is a small shell script which runs a prestart.sh if present, and then launches gunicorn/uvicorn in "reload mode":
#!/bin/sh
# If there's a prestart.sh script in the /app directory, run it before starting
PRE_START_PATH=/app/prestart.sh
HOST=${HOST:-0.0.0.0}
PORT=${PORT:-8000}
LOG_LEVEL=${LOG_LEVEL:-info}
echo "Checking for script in $PRE_START_PATH"
if [ -f $PRE_START_PATH ] ; then
    echo "Running script $PRE_START_PATH"
    . "$PRE_START_PATH"
else
    echo "There is no script $PRE_START_PATH"
fi
# Start Uvicorn with live reload
exec uvicorn --host $HOST --port $PORT --log-level $LOG_LEVEL main:app --reload
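The `${VAR:-default}` expansions near the top of the script are POSIX parameter defaults: the variable keeps its value if it is set, otherwise the fallback after `:-` is used. A minimal illustration:

```shell
# HOST unset: the fallback after :- is substituted.
unset HOST
echo "${HOST:-0.0.0.0}"    # prints 0.0.0.0

# HOST set: its value wins and the fallback is ignored.
HOST=127.0.0.1
echo "${HOST:-0.0.0.0}"    # prints 127.0.0.1
```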
The solution lies in a difference between UNIX and Windows systems in the way they end lines. A discussion on the topic can be found in "Difference between CR LF, LF and CR line break types?".
With CRLF line endings the shebang line of start_reload.sh ends in a carriage return, so the kernel looks for an interpreter literally named /bin/sh followed by CR, which does not exist. That is why exec reports no such file or directory even though /start_reload.sh itself is present; the Windows clone most likely checked the script out with CRLF endings while the WSL2 checkout used LF.
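The failure mode is easy to reproduce in any Linux or WSL2 shell (a sketch; the filenames are illustrative):

```shell
# Create a script whose lines end in CRLF, as a Windows editor or
# Git's autocrlf conversion might produce.
printf '#!/bin/sh\r\necho hello\r\n' > crlf.sh
chmod +x crlf.sh

# The kernel looks for an interpreter literally named "/bin/sh\r",
# which does not exist, so the exec fails.
./crlf.sh || echo "exec failed"

# Stripping the carriage returns fixes it.
tr -d '\r' < crlf.sh > fixed.sh
chmod +x fixed.sh
./fixed.sh    # prints hello
```

A .gitattributes rule such as `*.sh text eol=lf` prevents Git on Windows from checking shell scripts out with CRLF endings in the first place.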

Flask and React Docker containers not communicating via Docker-Compose

I have two containers - one containing a react app, another a flask app.
I can build both using the below docker-compose file and their respective Dockerfiles, and am able to access each via the browser on the ports specified. However, my React app's API calls to Flask are not being retrieved (they work without Docker in the picture).
Any suggestions are greatly appreciated!
Docker-compose
version: '3.7'
services:
  middleware:
    build: ./middleware
    command: python main.py run -h 0.0.0.0
    volumes:
      - ./middleware/:/usr/src/app/
    ports:
      - 5000:5000
    env_file:
      - ./middleware/.flaskenv
  frontend:
    build:
      context: ./frontend/app
      dockerfile: Dockerfile
    volumes:
      - './frontend/app:/usr/src/app'
      - '/usr/src/app/node_modules'
    ports:
      - '3001:3000'
    environment:
      - NODE_ENV=development
    links:
      - middleware
Dockerfile for flask app
# pull official base image
FROM python:3.8.0-alpine
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt /usr/src/app/requirements.txt
RUN pip install -r requirements.txt
# copy project
COPY . /usr/src/app/
Dockerfile React app
# base image
FROM node:12.2.0-alpine
# set working directory
WORKDIR /usr/src/app
# add `/usr/src/app/node_modules/.bin` to $PATH
ENV PATH /usr/src/app/node_modules/.bin:$PATH
# install and cache app dependencies
ADD package.json /usr/src/app/package.json
RUN npm install --silent
RUN npm install react-scripts@0.9.5 -g --silent
# start app
CMD ["npm", "start"]
I also have the below in my React app in package.json which enables me to make API calls to the flask app (again, this works fine without Docker)
"proxy": "http://127.0.0.1:5000",
Finally, project structure (in case useful)
website
|
|--middleware (Flask app)
|    - Dockerfile
|    - api
|--frontend (React app)
|    - Dockerfile
|    - app
|
|--docker-compose.yml
As LinPy and leopal pointed out in the comments, the 127.0.0.1 in package.json needed to be changed to reference the flask container by its service name.
"proxy": "http://middleware:5000",

Docker Image Contains files that Docker Container doesn't

I have a Dockerfile that contains steps that create a directory and runs an angular build script outputing to that directory. This all seems to run correctly. However when the container runs, the built files and directory are not there.
If I run a shell in the image:
docker run -it pnb_web sh
# cd /code/static
# ls
assets favicon.ico index.html main.js main.js.map polyfills.js polyfills.js.map runtime.js runtime.js.map styles.js styles.js.map vendor.js vendor.js.map
If I exec a shell in the container:
docker exec -it ea23c7d30333 sh
# cd /code/static
sh: 1: cd: can't cd to /code/static
# cd /code
# ls
Dockerfile api docker-compose.yml frontend manage.py mysite.log pnb profiles requirements.txt settings.ini web_variables.env
david@lightning:~/Projects/pnb$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ea23c7d30333 pnb_web "python3 manage.py r…" 13 seconds ago Up 13 seconds 0.0.0.0:8000->8000/tcp pnb_web_1_267d3a69ec52
This is my dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
RUN curl -sL https://deb.nodesource.com/setup_10.x | bash -
RUN apt-get install -y nodejs
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
RUN mkdir /code/static
WORKDIR /code/frontend
RUN npm install -g @angular/cli
RUN npm install
RUN ng build --outputPath=/code/static
and associated docker-compose:
version: '3'
services:
  db:
    image: postgres
  web:
    build:
      context: .
      dockerfile: Dockerfile
    working_dir: /code
    env_file:
      - web_variables.env
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
In the second example, the static directory has never been created or built into. I thought that a container is an instance of an image. How can the container be missing files from the image?
You're confusing build time and run time, along with playing with volumes.
Remember that a host mount takes priority over the filesystem provided by the image, so even though your built image contains the assets, they are hidden by .services.web.volumes: mounting the host directory over /code shadows the build result.
If you remove the volume mount, you'll notice that everything works as expected.
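If the bind mount is still wanted for development, one common workaround is to layer an anonymous volume over the build output so the image's files stay visible at that sub-path (a sketch against this compose file):

```yaml
services:
  web:
    volumes:
      - .:/code
      # Anonymous volume: shadows the bind mount at this sub-path, so the
      # /code/static produced at build time survives at run time.
      - /code/static
```

This is the same trick commonly used to preserve node_modules inside Node containers.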

Using a docker image as a dependency, I want to run multiple commands in docker compose but can't do it

I want to create a generalized docker-compose yaml that can take any parent image/docker file build it if it doesn't exist and then run the following commands to run it in flask.
version: '3.3'
services:
  web:
    build: .
    volumes:
      - ./app:/app
    ports:
      - "80:80"
    command: >
      RUN pip3 install flask
      && COPY ./app /app
      && WORKDIR /app
      && RUN python run.py
But I keep getting an error
starting container process caused "exec: \"RUN\": executable file not found in $PATH": unknown
Not sure why.
Anyways, any help would be much appreciated.
I would say create a generalised Dockerfile instead of putting this in compose.
When you use command, it runs your command inside the container, and RUN, COPY etc. are not Linux commands; they are Dockerfile instructions.
You can run pip install && python run.py in command, but copying from host to container is not possible using command.
Use below instructions in dockerfile
RUN pip install flask
COPY ./app /app
WORKDIR /app
CMD python run.py
And in docker-compose.yml you can override the CMD defined in the Dockerfile with something like this:
command:
python some.py or parameters
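If multiple shell commands really must run from compose, they can be chained through a single shell invocation rather than Dockerfile instructions (a sketch; run.py and the /app path come from the question above):

```yaml
command: sh -c "pip3 install flask && cd /app && python run.py"
```

There is no command-level equivalent of COPY, but the existing ./app:/app bind mount already places the code inside the container.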

Flask running in Docker. Need to rebuild every time

I'm trying to get my Flask Docker build to be able to switch from running uWSGI + Nginx for production to simply running flask run for local development.
I have Flask running in development mode:
Here's my Dockerfile:
FROM python:3
RUN apt-get -qq update
RUN apt-get install -y supervisor nginx
RUN pip install uwsgi
# Expose dev port
EXPOSE 5000
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY nginx.conf /etc/nginx/sites-enabled/default
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
RUN pip install --editable .
CMD [ "supervisord" ]
I'm exposing port 5000, which I run from docker-compose.yml:
version: '3'
services:
  web:
    build: ./
    environment:
      DATABASE_URL: postgresql://postgres@db/postgres
      FLASK_APP: app.main
      FLASK_ENV: development
    command: flask run --host=0.0.0.0
    ports:
      - "5000:5000"
    links:
      - db
    volumes:
      - .:/app
  db:
    image: postgres
    restart: always
    ports:
      - 5433:5432
    volumes:
      - data:/var/lib/postgresql/data
volumes:
  data:
    driver: local
I can build this and run it fine at http://0.0.0.0:5000/, but if I change any code in a view or a template, nothing gets refreshed/reloaded on the server and I'm forced to rebuild.
I thought that specifying FLASK_ENV: development in my docker-compose would enable auto-reloading of the Flask server. Any idea how to debug this?
Debug it on the host, outside Docker. If you need access to databases that are running in Docker, you can set environment variables like MYSQL_HOST or PGHOST to point at localhost. You don't need root privileges at all (any docker command implies unrestricted root access). Your IDE won't be upset at your code running "somewhere else" with a different filesystem layout. If you have a remote debugger, you won't need to traverse the Docker divide to get access to it.
Once you have it running reasonably on the host and your py.test tests pass, then run docker build.
(In the particular case of server-rendered views, you should be able to see almost exactly the same thing via the Flask debug server without having the nginx proxy in front of it, and you can control the library versions that get installed via requirements in your setup.py file. So the desktop environment will be much simpler, and you should be able to convince yourself it's very close to what would be running in the Docker world.)
When you run the flask app from docker-compose, it is running the version you installed with pip install --editable . in the Dockerfile. The problem is that this is NOT the version you're editing from outside Docker in the volume mounted to '/app' (rather it is a different version in /usr/src/app from the COPY . . command).
The Dockerfile requires COPY and ADD at build time, not run-time, so in order to run from the live version the flask run command can't be running from the pip installed package.
You have a few choices:
Move the pip install command to the docker-compose command (possibly by calling a bash script that does the pip install as well as the flask run).
Run the flask run command without pip installing the package. If you're in the /app folder, I believe flask will recognize the current folder as holding the code (given the FLASK_APP variable), so the pip install is unnecessary. This works for me for a simple app where I encountered the same problem, but I imagine if you have complicated imports or other things relying on the installed package this won't work.
I suppose if you reconcile the /usr/src/app and /app folders, so that you pip install --editable at build time and then mount the local volume over the same place Python is looking, you could trick it into working, but I haven't quite gotten this to work: following the Flask logs (docker logs -f web) indicates that it notices when a file changes, but the behavior doesn't actually change. I don't know why exactly, but I suspect pip is upset about the folder swap.
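Along the lines of that last point, a third option is to make the bind mount target the same path the image was built in, so that flask run and the mounted source agree (a sketch; the paths follow the Dockerfile above):

```yaml
services:
  web:
    volumes:
      # Mount over /usr/src/app (where COPY . . put the code) instead of /app,
      # so the files Flask watches are the ones being edited on the host.
      - .:/usr/src/app
```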
