I am working on a Flask app, and my docker-compose setup is not able to find my config folder. I have __init__.py files at every level, and I tried adding and removing the config path from my docker-compose.yml file, but I still get the same issue. This is my project structure:
.
├── ...
├── web_messaging
│   ├── app.py
│   ├── __init__.py
│   ├── ...
│   └── config
│       ├── settings.py
│       ├── gunicorn.py
│       └── __init__.py
│
├── __init__.py
├── .env
├── requirements.txt
├── docker-compose.yml
└── Dockerfile
When I run docker-compose up --build, I get this error:
website_1 | - 'config' not found.
website_1 |
website_1 | Original exception:
website_1 |
website_1 | ModuleNotFoundError: No module named 'config'
website_1 | [2021-04-04 15:23:26 +0000] [8] [INFO] Worker exiting (pid: 8)
website_1 | [2021-04-04 15:23:26 +0000] [1] [INFO] Shutting down: Master
website_1 | [2021-04-04 15:23:26 +0000] [1] [INFO] Reason: Worker failed to boot.
docker-compose.yml
version: '2'

services:
  website:
    build: .
    command: >
      gunicorn -c "python:web_messaging.config.gunicorn" --reload "web_messaging.app:create_app()"
    env_file:
      - '.env'
    volumes:
      - '.:/web_messaging'
    ports:
      - '8000:8000'
app.py
def create_app(settings_override=None):
    """
    Create a Flask application using the app factory pattern.

    :param settings_override: Override settings
    :return: Flask app
    """
    app = Flask(__name__, instance_relative_config=True)

    app.config.from_object('config.settings')
    app.config.from_pyfile('settings.py', silent=True)

    if settings_override:
        app.config.update(settings_override)

    error_templates(app)
    app.register_blueprint(user)
    app.register_blueprint(texting)
    extensions(app)
    configure_context_processors(app)

    return app
...
Dockerfile
FROM python:3.7.5-slim-buster
RUN apt-get update && apt-get install -qq -y \
build-essential libpq-dev --no-install-recommends
ENV INSTALL_PATH /web_messaging
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD gunicorn -c "python:web_messaging.config.gunicorn" "web_messaging.app:create_app()"
It seems I solved it: the config folder must be outside of the web_messaging folder, at the project root. Since app.config.from_object('config.settings') imports config as a top-level package, config/ has to sit directly on the import path (the /web_messaging working directory), not inside the web_messaging package.
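For reference, a minimal sketch of the two ways the import can resolve (assuming gunicorn keeps /web_messaging from the Dockerfile as its working directory; pick one depending on where config/ lives):

from flask import Flask

app = Flask(__name__, instance_relative_config=True)

# Option A: config/ moved to the project root (next to docker-compose.yml),
# so 'config' is a top-level package on the import path
app.config.from_object('config.settings')

# Option B: config/ kept inside the web_messaging package and
# referenced by its full dotted path instead
# app.config.from_object('web_messaging.config.settings')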
Related
I have a project that is running just fine using FastAPI (main.py). Now I want to share it as a Docker image. Here is what I am doing:
My project has this structure:
project
│   docker-compose.yaml
│   requirements.txt
│
└───<dir>services
    │
    └───<dir>api
    │   │   Dockerfile
    │   │   main.py
    │   │
    │   └───<dir>model
    │           file.model
    │           file.model
    └───<dir>statspy
            file.dev.toml
            file.prod.toml
My Dockerfile:
FROM python:3.10
RUN pip install fastapi uvicorn transformers
COPY ./api /api/api
ENV PYTHONPATH=/api
WORKDIR /api
EXPOSE 8000
ENTRYPOINT ["uvicorn"]
CMD ["api.main:app", "--host", "0.0.0.0"]
docker-compose.yaml
version: "3"
services:
docker_model:
build: ./services/api
ports:
- 8000:8000
labels:
- "statspy.enable=true"
- "statspy.http.routers.fastapi.rule=Host(`fastapi.localhost`)"
topic_v5:
image: statspy:v5.0
ports:
- "80:80"
- "8080:8080"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock:ro"
- "$PWD/services/statspy/statspy.dev.toml:/etc/statspy/statpys.toml"
When I run docker-compose build, it fails with this error message:
Step 3/8 : COPY ./api /api/api
COPY failed: file not found in build context or excluded by .dockerignore: stat api: file does not exist
What am I doing wrong here?
Your build context in the docker-compose file is build: ./services/api:
project
│   docker-compose.yaml
│   requirements.txt
│
└───<dir>services
    │
    └───<dir>api    <--- docker_model Dockerfile executes from here
    │   │   Dockerfile
    │   │   main.py
    │   │
    │   └───<dir>model
    │           file.model
    │           file.model
    └───<dir>statspy
            file.dev.toml
            file.prod.toml
You later try to do COPY ./api /api/api. There is no api directory inside ./services/api, so the COPY directive fails.
What you probably want to do instead is COPY . /api. The rest of the Dockerfile looks correct.
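For reference, a minimal sketch of the corrected Dockerfile (one knock-on effect worth noting as an assumption of mine, not part of the answer above: with COPY . /api, main.py lands directly in /api, so the uvicorn target becomes main:app rather than api.main:app):

FROM python:3.10
RUN pip install fastapi uvicorn transformers
# The build context is ./services/api, so "." already is the api directory
COPY . /api
ENV PYTHONPATH=/api
WORKDIR /api
EXPOSE 8000
ENTRYPOINT ["uvicorn"]
# main.py now sits at /api/main.py, hence main:app instead of api.main:app
CMD ["main:app", "--host", "0.0.0.0"]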
I have the following directory structure:
.
├── README.md
├── alice
├── docker
│   ├── compose-prod.yml
│   ├── compose-stage.yml
│   ├── compose.yml
│   └── dockerfiles
├── gauntlet
├── nexus
│   ├── Procfile
│   ├── README.md
│   ├── VERSION.txt
│   ├── alembic
│   ├── alembic.ini
│   ├── app
│   ├── poetry.lock
│   ├── pyproject.toml
│   └── scripts
nexus.Dockerfile
FROM python:3.10
RUN addgroup --system app && adduser --system --group app
WORKDIR /usr/src/pdn/nexus
COPY ../../nexus/pyproject.toml ../../nexus/poetry.lock* ./
ARG INSTALL_DEV=true
RUN bash -c "if [ $INSTALL_DEV == 'true' ] ; then poetry install --no-root ; else poetry install --no-root --no-dev ; fi"
COPY ../../nexus .
RUN chmod +x scripts/run.sh
ENV PYTHONPATH=/usr/src/pdn/nexus
RUN chown -R app:app $HOME
USER app
CMD ["./run.sh"]
The relevant service in compose.yml looks like this:
services:
  nexus:
    platform: linux/arm64
    build:
      context: ../
      dockerfile: ./docker/dockerfiles/nexus.Dockerfile
    container_name: nexus
    restart: on-failure
    ports:
      - "8000:8000"
    volumes:
      - ../nexus:/usr/src/pdn/nexus:ro
    environment:
      - DATABASE_HOSTNAME=${DATABASE_HOSTNAME?}
    env_file:
      - .env
When I run compose up, I get the following error:
Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "./scripts/run.sh": permission denied: unknown
The service starts OK without the volume definition. I think it might be because of the location of nexus relative to the Dockerfile or compose file, but the context is set to the parent.
I tried defining the volume as follows:
volumes:
  - ./nexus:/usr/src/pdn/nexus:ro
But I get a similar error; in this case run.sh is not found, and a directory named nexus gets created in the docker directory:
Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "./run.sh": stat ./run.sh: no such file or directory: unknown
Not sure what I'm missing.
I have two comments; I'm not sure whether they solve your issue.
First: although you are allowed to reference parent directories in your compose.yml, that is not the case in your Dockerfile. You cannot COPY from outside the build context specified in your compose.yml (.., which resolves to your app root). So you should change these lines:
COPY ../../nexus/pyproject.toml ../../nexus/poetry.lock* ./
COPY ../../nexus .
to
COPY ./nexus/pyproject.toml ./nexus/poetry.lock* ./
COPY ./nexus .
Second: the volume overrides whatever is in /usr/src/pdn/nexus with the content of ../nexus. This renders everything you copied into /usr/src/pdn/nexus useless. That may not be an issue if the contents are the same, but any permissions you set on those files (such as the chmod +x on run.sh) are gone. So if the contents are the same, the only remaining issue is your start script: put it in a separate directory outside /usr/src/pdn/nexus so it won't be overridden, and don't forget to reference it correctly in the CMD.
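A minimal sketch of that second suggestion (the /usr/local/bin location is an illustrative choice of mine, not something from the original setup):

# nexus.Dockerfile (excerpt): keep the start script outside the mounted path,
# so the read-only volume over /usr/src/pdn/nexus cannot hide it
COPY ./nexus/scripts/run.sh /usr/local/bin/run.sh
RUN chmod +x /usr/local/bin/run.sh
CMD ["/usr/local/bin/run.sh"]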
I am attempting to build a simple app with FastAPI and React. I have been advised by our engineering dept. that I should Dockerize it as one app instead of a separate front end and back end...
I have the app functioning as I need without any issues; my current directory structure is:
.
├── README.md
├── backend
│   ├── Dockerfile
│   ├── Pipfile
│   ├── Pipfile.lock
│   └── main.py
└── frontend
    ├── Dockerfile
    ├── index.html
    ├── package-lock.json
    ├── package.json
    ├── postcss.config.js
    ├── src
    │   ├── App.jsx
    │   ├── favicon.svg
    │   ├── index.css
    │   ├── logo.svg
    │   └── main.jsx
    ├── tailwind.config.js
    └── vite.config.js
I am a bit of a Docker noob and have only ever built images for projects that aren't split into a front end and back end.
I have a .env file in each, only simple things like URLs or hosts.
I currently run the app with the front end and back end started separately, for example:
> ./frontend
> npm run dev
> ./backend
> uvicorn ....
Can anyone give me tips /advice on how I can dockerize this as one?
As a good practice, one Docker image should contain one process. Therefore you should dockerize them separately (one Dockerfile per app).
Then you can add a docker-compose.yml file at the root of your project to link them together; it could look like this:
version: '3.3'

services:
  app:
    build:
      context: ./frontend/
      dockerfile: ./Dockerfile
    ports:
      - "127.0.0.1:80:80"
  backend:
    env_file:
      - backend/.env
    build:
      context: ./backend/
      dockerfile: ./Dockerfile
    ports:
      - "127.0.0.1:8000:80"
The backend would be running on http://localhost:8000 and the frontend on http://localhost:80
In order to start the docker-compose you can just type in your shell:
$> docker-compose up
This assumes that you already have a Dockerfile for each app. You can find many examples online of different Dockerfile implementations for the different technologies, for both ReactJS and FastAPI.
Following up on Vinalti's answer, I would also recommend using one Dockerfile for the backend, one for the frontend, and a docker-compose.yml file to link them together. Given the following project structure, this is what worked for me.
Project running FastAPI (backend) on port 8000 and ReactJS (frontend) on port 3006.
.
├── README.md
├── docker-compose.yml
├── backend
│   ├── .env
│   ├── Dockerfile
│   ├── app/
│   ├── venv/
│   ├── requirements.txt
│   └── main.py
└── frontend
    ├── .env
    ├── Dockerfile
    ├── package.json
    ├── package-lock.json
    ├── src/
    ├── ...
backend/Dockerfile
FROM python:3.10
WORKDIR /code
COPY ./requirements.txt /code/requirements.txt
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
COPY ./ /code/
CMD ["uvicorn", "app.api:app", "--host", "0.0.0.0", "--port", "8000"]
frontend/Dockerfile
# pull official base image
FROM node:latest as build
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY package.json ./
COPY package-lock.json ./
# Silent clean install of npm
RUN npm ci --silent
RUN npm install react-scripts@3.4.1 -g --silent
# add app
COPY . /app/
# Build production
RUN npm run build
RUN npm install -g serve
## Start the app on port 3006
CMD serve -s build -l 3006
docker-compose.yml
version: '3.8'

services:
  backend:
    env_file:
      - backend/.env
    build:
      context: ./backend/
      dockerfile: ./Dockerfile
    restart: always
    ports:
      - "127.0.0.1:8000:8000"
    expose:
      - 8000
  frontend:
    env_file:
      - frontend/.env
    build:
      context: ./frontend/
      dockerfile: ./Dockerfile
    restart: always
    ports:
      - "127.0.0.1:3006:3006"
    expose:
      - 3006
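With these three files in place, running docker-compose up --build from the project root builds both images and serves the backend on 127.0.0.1:8000 and the frontend on 127.0.0.1:3006, matching the port mappings above.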
I'm trying to run mlflow run by specifying an MLproject file and code which live in different locations.
I have the following directory structure:
/root/mlflow_test
.
├── conda
│   ├── conda.yaml
│   └── MLproject
├── docker
│   ├── Dockerfile
│   └── MLproject
├── README.md
├── requirements.txt
└── trainer
    ├── __init__.py
    ├── task.py
    └── utils.py
When I run, from /root/:
mlflow run mlflow_test/docker
I get:
/root/miniconda3/bin/python: Error while finding module specification for 'trainer.task' (ImportError: No module named 'trainer')
This is because my MLproject file can't find the Python code.
I moved MLproject to mlflow_test and this works fine.
This is my MLproject entry point:
name: mlflow_sample

docker_env:
  image: mlflow-docker-sample

entry_points:
  main:
    parameters:
      job_dir:
        type: string
        default: '/tmp/'
    command: |
      python -m trainer.task --job-dir {job_dir}
How can I run mlflow run, pass it the MLproject, and ask it to look for the code in a different folder?
I tried:
"cd .. && python -m trainer.task --job-dir {job_dir}"
and I get:
/entrypoint.sh: line 5: exec: cd: not found
Dockerfile
# docker build -t mlflow-gcp-example -f Dockerfile .
FROM gcr.io/deeplearning-platform-release/tf-cpu
RUN git clone https://github.com/GoogleCloudPlatform/ml-on-gcp.git
WORKDIR ml-on-gcp/tutorials/tensorflow/mlflow_gcp
RUN pip install -r requirements.txt
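A hedged workaround sketch for the exec: cd: not found error above (not verified against MLflow's entrypoint handling): cd is a shell builtin, so the image's entrypoint cannot exec it as a standalone program; wrapping the whole command in an explicit shell sidesteps that, assuming the image ships /bin/sh:

# MLproject (entry point excerpt)
command: |
  /bin/sh -c "cd .. && python -m trainer.task --job-dir {job_dir}"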
I tried to change the project layout in the Fig+Django tutorial to something like this:
.
├── docker
│   └── django
│       ├── Dockerfile
│       └── requirements.txt
├── fig.yml
└── project
    ├── figexample
    │   ├── __init__.py
    │   ├── __init__.pyc
    │   ├── settings.py
    │   ├── settings.pyc
    │   ├── urls.py
    │   ├── urls.pyc
    │   ├── wsgi.py
    │   └── wsgi.pyc
    └── manage.py
And my fig.yml looks like:
db:
  image: postgres
web:
  build: ./docker/django
  volumes:
    - "project/:/code"
  ports:
    - "8000:8000"
  links:
    - db
  command: "ls -a ."
But for some reason, instead of the project directory it mounts the current directory.
The result of fig logs in this case is:
#$ fig logs
Attaching to figdjango_web_1, figdjango_db_1
db_1 | LOG: database system was shut down at 2014-11-05 15:15:41 UTC
db_1 | LOG: database system is ready to accept connections
db_1 | LOG: autovacuum launcher started
web_1 | .
web_1 | ..
web_1 | .fig.yml.swp
web_1 | docker
web_1 | fig.yml
web_1 | project
figdjango_web_1 exited with code 0
And my Dockerfile:
FROM python:2.7
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
What am I doing wrong? How can I mount project/ to /code?
sample on GitHub
The sample you linked to on GitHub is a bit different from what you describe in your question.
In the GitHub sample, replace
command: python /project/manage.py runserver 0.0.0.0:8000
with
command: python /code/manage.py runserver 0.0.0.0:8000
and it works: the volume mounts the host's project/ directory at /code inside the container, so manage.py lives at /code/manage.py there, not at /project/manage.py.
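For reference, a minimal fig.yml sketch with that change applied (using the layout from the question; the GitHub sample may differ slightly):

db:
  image: postgres
web:
  build: ./docker/django
  volumes:
    - "project/:/code"
  ports:
    - "8000:8000"
  links:
    - db
  command: python /code/manage.py runserver 0.0.0.0:8000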