Sphinx Docker Deployment

I'm trying to create a documentation website with docker-compose. I followed this tutorial. Locally, I can run it successfully, but when I try to run the container on the server it returns this error:
docs_1 | 2022-09-06T14:34:44.430819779Z [sphinx-autobuild] > sphinx-build -b html /etc/Sphinx/source /etc/Sphinx/build
docs_1 | 2022-09-06T14:34:44.807214119Z
docs_1 | 2022-09-06T14:34:44.807251454Z Application error:
docs_1 | 2022-09-06T14:34:44.807257159Z Cannot find source directory (/etc/Sphinx/source)
docs_1 | 2022-09-06T14:34:44.867591050Z Command exited with exit code: 2
docs_1 | 2022-09-06T14:34:44.867628073Z The server will continue serving the build folder, but the contents being served are no longer in sync with the documentation sources. Please fix the cause of the error above or press Ctrl+C to stop the server.
Here is my Dockerfile
FROM alpine:latest
WORKDIR /etc/
RUN mkdir -p /etc/Sphinx/build
RUN apk add --no-cache python3 py3-pip make git
RUN pip install git+https://github.com/sphinx-doc/sphinx && \
pip install sphinx-autobuild
CMD sphinx-autobuild -b html --host 0.0.0.0 --port 80 /etc/Sphinx/source /etc/Sphinx/build
COPY /doc/ /etc/Sphinx/source
And my docker-compose.yml
version: "3.0"
services:
  docs:
    image: registry.digitalocean.com/my_username/${IMAGE}
    container_name: docs
    build: .docker
    volumes:
      - ./doc:/etc/Sphinx/source
    ports:
      - 8100:80
And this is the docker-compose.yml on the server:
docs:
  image: registry.digitalocean.com/my_username/docs
  restart: unless-stopped
  ports:
    - 8100:80
My folder structure is:
docs/
├─ .docker/
│ ├─ dev.env
│ ├─ Dockerfile
│ ├─ prod.env
├─ doc/
│ ├─ conf.py
│ ├─ index.rst
│ ├─ getting_started.rst
├─ docker-compose.yml
├─ README.md
I tried to copy the files in the doc folder to /etc/Sphinx/source, but it returns this error:
failed to compute cache key: "/doc" not found: not found
Any help will be appreciated, thanks.
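A note on the likely cause (my reading, following the build-context explanations in the related answers below): with build: .docker, the build context is the .docker directory, so COPY /doc/ /etc/Sphinx/source cannot see the doc folder one level up; on the server there is also no volume supplying /etc/Sphinx/source. A minimal sketch of one fix, assuming the compose file stays at the project root:

docs:
  build:
    context: .                      # project root, so doc/ is in the context
    dockerfile: .docker/Dockerfile  # resolved relative to the context

with the Dockerfile's copy line written as COPY doc/ /etc/Sphinx/source.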

Related

Why is my docker-compose build failing when copying files?

I have a project that is running just fine using FastAPI (main.py). Now I want to share it as a Docker image. Here is what I am doing:
My project has this structure:
project
│ docker-compose.yaml
| requirements.txt
│
└───<dir>services
│
└───<dir>api
| │ Dockerfile
| │ main.py
| |
| <dir>model
| file.model
| file.model
└───<dir>statspy
file.dev.toml
file.prod.toml
My Dockerfile:
FROM python:3.10
RUN pip install fastapi uvicorn transformers
COPY ./api /api/api
ENV PYTHONPATH=/api
WORKDIR /api
EXPOSE 8000
ENTRYPOINT ["uvicorn"]
CMD ["api.main:app", "--host", "0.0.0.0"]
docker_compose.yaml
version: "3"
services:
  docker_model:
    build: ./services/api
    ports:
      - 8000:8000
    labels:
      - "statspy.enable=true"
      - "statspy.http.routers.fastapi.rule=Host(`fastapi.localhost`)"
  topic_v5:
    image: statspy:v5.0
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
      - "$PWD/services/statspy/statspy.dev.toml:/etc/statspy/statpys.toml"
When I run docker-compose build, it fails with this error message:
Step 3/8 : COPY ./api /api/api
COPY failed: file not found in build context or excluded by .dockerignore: stat api: file does not exist
What am I doing wrong here?
Your build context in the docker-compose file is build: ./services/api
project
│ docker-compose.yaml
| requirements.txt
│
└───<dir>services
│
└───<dir>api <--- docker_model Dockerfile executes from here
| │ Dockerfile
| │ main.py
| |
| <dir>model
| file.model
| file.model
└───<dir>statspy
file.dev.toml
file.prod.toml
You later try to do COPY ./api /api/api. There is no api directory inside services/api, so the COPY directive fails.
What you probably want to do instead is COPY . /api. The rest of the Dockerfile looks correct.
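One nuance worth flagging (my note, not part of the original answer): with COPY . /api, main.py lands at /api/main.py, so the module path in CMD would be main:app rather than api.main:app; alternatively, COPY . /api/api keeps the original CMD working. A minimal sketch of the Dockerfile under the first option:

FROM python:3.10
RUN pip install fastapi uvicorn transformers
# copy the whole build context (services/api) into /api
COPY . /api
ENV PYTHONPATH=/api
WORKDIR /api
EXPOSE 8000
ENTRYPOINT ["uvicorn"]
CMD ["main:app", "--host", "0.0.0.0"]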

Dockerizing a FastAPI backend with React Frontend - tips

I am attempting to build a simple app with FastAPI and React. I have been advised by our engineering dept that I should Dockerize it as one app instead of a front end and back end...
I have the app functioning as I need without any issues; my current directory structure is:
.
├── README.md
├── backend
│ ├── Dockerfile
│ ├── Pipfile
│ ├── Pipfile.lock
│ └── main.py
└── frontend
├── Dockerfile
├── index.html
├── package-lock.json
├── package.json
├── postcss.config.js
├── src
│ ├── App.jsx
│ ├── favicon.svg
│ ├── index.css
│ ├── logo.svg
│ └── main.jsx
├── tailwind.config.js
└── vite.config.js
I am a bit of a Docker noob and have only ever built images for projects that aren't split into a front end and back end.
I have a .env file in each, containing only simple things like URLs or hosts.
I currently run the front end and back end separately, for example:
> ./frontend
> npm run dev
> ./backend
> uvicorn ....
Can anyone give me tips/advice on how I can dockerize this as one?
As a good practice, one Docker image should contain one process. Therefore you should dockerize them separately (one Dockerfile per app).
Then you can add a docker-compose.yml file at the root of your project to link them together; it could look like this:
version: '3.3'
services:
  app:
    build:
      context: ./frontend/
      dockerfile: ./Dockerfile
    ports:
      - "127.0.0.1:80:80"
  backend:
    env_file:
      - backend/.env
    build:
      context: ./backend/
      dockerfile: ./Dockerfile
    ports:
      - "127.0.0.1:8000:80"
The backend would be running on http://localhost:8000 and the frontend on http://localhost:80
In order to start the docker-compose you can just type in your shell:
$> docker-compose up
This implies that you already have your Dockerfile for both apps.
You can find many examples online of Dockerfile implementations for the different technologies, e.g. for React and for FastAPI.
Following up on Vinalti's answer: I would also recommend using one Dockerfile for the backend, one for the frontend, and a docker-compose.yml file to link them together. Given the following project structure, this is what worked for me.
The project runs FastAPI (backend) on port 8000 and React (frontend) on port 3006.
.
├── README.md
├── docker-compose.yml
├── backend
│ ├── .env
│ ├── Dockerfile
│ ├── app/
│ ├── venv/
│ ├── requirements.txt
│ └── main.py
└── frontend
├── .env
├── Dockerfile
├── package.json
├── package-lock.json
├── src/
├── ...
backend/Dockerfile
FROM python:3.10
WORKDIR /code
COPY ./requirements.txt /code/requirements.txt
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
COPY ./ /code/
CMD ["uvicorn", "app.api:app", "--host", "0.0.0.0", "--port", "8000"]
frontend/Dockerfile
# pull official base image
FROM node:latest as build
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY package.json ./
COPY package-lock.json ./
# Silent clean install of npm
RUN npm ci --silent
RUN npm install react-scripts@3.4.1 -g --silent
# add app
COPY . /app/
# Build production
RUN npm run build
RUN npm install -g serve
## Start the app on port 3006
CMD serve -s build -l 3006
docker-compose.yml
version: '3.8'
services:
  backend:
    env_file:
      - backend/.env
    build:
      context: ./backend/
      dockerfile: ./Dockerfile
    restart: always
    ports:
      - "127.0.0.1:8000:8000"
    expose:
      - 8000
  frontend:
    env_file:
      - frontend/.env
    build:
      context: ./frontend/
      dockerfile: ./Dockerfile
    restart: always
    ports:
      - "127.0.0.1:3006:3006"
    expose:
      - 3006

How should I define the Dockerfile & docker-compose if they are placed in a project subdirectory?

project-root/
├─ build/
│ ├─ Dockerfile
│ ├─ docker-compose.yml
├─ internal/
│ ├─ ...
├─ ...
Dockerfile:
...
WORKDIR /app
COPY go.mod go.mod
COPY go.sum go.sum
...
docker-compose.yml:
...
services:
  api:
    container_name: 'api'
    build: ./build/
    ports:
      ...
After running the command:
docker compose --project-directory . up
get error:
failed to solve: rpc error: code = Unknown desc = failed to compute cache key: "/go.mod" not found: not found
It is really a lot easier to keep your Docker files in the root, but you can do it the way you want; it is just somewhat more confusing, and therefore possibly harder to maintain.
First: in your docker-compose you have build: ./build/. Since the docker-compose file itself is located inside the build directory, it will look for build/build, which will not work.
Then it is important to understand the concept of the context/path in Docker.
Docker takes whatever is in that directory and sends it to the daemon; any files outside of it will not be found, even if their paths are correctly written in the Dockerfile.
The paths of files inside the Dockerfile, for instance in COPY commands, are relative to this context.
So build: ./build/ sends everything in the build directory to the daemon, but anything outside of it will fail to be found.
You can then say:
build:
  context: ..
  dockerfile: build/Dockerfile
(a relative dockerfile path is resolved from the build context, hence build/Dockerfile rather than Dockerfile).
That should then work, provided any paths in your Dockerfile are written relative to that context. So let's say you need to copy ../internal/somefile; relative to the new context that becomes: COPY ./internal/somefile ./somewhere
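Put together, a minimal sketch of the corrected build/docker-compose.yml (my illustration, reusing the service name from the question):

services:
  api:
    container_name: 'api'
    build:
      context: ..                    # the project root becomes the build context
      dockerfile: build/Dockerfile   # resolved relative to that context

The plain docker build equivalent would be running docker build -f build/Dockerfile . from the project root; either way the root is the context, so COPY go.mod go.mod finds the file.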

Docker-compose - module import issue

I am working on a Flask app, and my docker-compose setup is not able to find my config folder. I have __init__.py files at every level, and I tried adding/removing the config path in my docker-compose.yml file, but I still get the same issue. This is my project structure:
.
├── ...
├── web_messaging
│ ├── app.py
│ ├── __init__.py
│ ├── ...
│ └── config
│ ├── settings.py
│ ├── gunicorn.py
│ └── __init__.py
│
├── __init__.py
├── .env
├── requirements.txt
├── docker-compose.yml
└── Dockerfile
When I am running docker-compose up --build, I am getting this error:
website_1 | - 'config' not found.
website_1 |
website_1 | Original exception:
website_1 |
website_1 | ModuleNotFoundError: No module named 'config'
website_1 | [2021-04-04 15:23:26 +0000] [8] [INFO] Worker exiting (pid: 8)
website_1 | [2021-04-04 15:23:26 +0000] [1] [INFO] Shutting down: Master
website_1 | [2021-04-04 15:23:26 +0000] [1] [INFO] Reason: Worker failed to boot.
docker-compose.yml
version: '2'
services:
  website:
    build: .
    command: >
      gunicorn -c "python:web_messaging.config.gunicorn" --reload "web_messaging.app:create_app()"
    env_file:
      - '.env'
    volumes:
      - '.:/web_messaging'
    ports:
      - '8000:8000'
app.py
def create_app(settings_override=None):
    """
    Create a Flask application using the app factory pattern.

    :param settings_override: Override settings
    :return: Flask app
    """
    app = Flask(__name__, instance_relative_config=True)

    app.config.from_object('config.settings')
    app.config.from_pyfile('settings.py', silent=True)

    if settings_override:
        app.config.update(settings_override)

    error_templates(app)
    app.register_blueprint(user)
    app.register_blueprint(texting)
    extensions(app)
    configure_context_processors(app)

    return app
...
Dockerfile
FROM python:3.7.5-slim-buster
RUN apt-get update && apt-get install -qq -y \
build-essential libpq-dev --no-install-recommends
ENV INSTALL_PATH /web_messaging
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD gunicorn -c "python:web_messaging.config.gunicorn" "web_messaging.app:create_app()"
It seems I solved it: the config folder must be outside of the web_messaging folder.
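The likely reason (my reading, not spelled out above): app.config.from_object('config.settings') imports config as a top-level module, so the config package has to sit on the Python path, i.e. directly in the working directory /web_messaging rather than nested inside the web_messaging package. An alternative sketch that would keep the folder where it is would be to reference it by its full package path:

app.config.from_object('web_messaging.config.settings')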

Entrypoint #FAIL - no such file or directory

While getting a Django environment set up, I was working on how to containerize it. In doing so, I can't get the entrypoint to work on Docker for Windows/Linux.
Successfully built e9cb8e009d91
Successfully tagged avengervision_web:latest
avengervision_db_1 is up-to-date
Starting avengervision_web_1 ... done
CONTAINER ID IMAGE COMMAND CREATED
1da83169ba41 avengervision_web "sh /usr/src/app/ent…" 44 minutes
STATUS PORTS NAMES
Exited (2) 20 seconds ago avengervision_web_1
docker logs 1da83169ba41
sh: can't open '/usr/src/app/entrypoint.sh': No such file or directory
Have simplified the entrypoint.sh to just get it to execute.
Have tried both
ENTRYPOINT ["sh","/usr/src/app/entrypoint.sh"]
and
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]
Made sure the line endings in git and vscode are set to LF, and ran the code through dos2unix.
Ran the same Docker Compose on Windows and Linux and got the same exception on both.
Added a sed pass to the Dockerfile as an extra precaution to strip carriage returns, and made sure to chmod +x the script.
Commented out the ENTRYPOINT and ran docker run -tdi; I was able to docker attach and execute the script from within the container without any issue.
*****docker-compose.yml*****
version: '3.7'
services:
  web:
    build:
      context: .
      dockerfile: ./docker/Dockerfile
    #command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./main/:/usr/src/app/
    ports:
      - 8000:8000
    environment:
      - DEBUG=1
      - SECRET_KEY=foo
      - SQL_ENGINE=django.db.backends.postgresql
      - SQL_DATABASE=hello_django_dev
      - SQL_USER=hello_django
      - SQL_PASSWORD=hello_django
      - SQL_HOST=db
      - SQL_PORT=5432
      - DATABASE=postgres
    depends_on:
      - db
  db:
    image: postgres:11.2-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=hello_django
      - POSTGRES_PASSWORD=hello_django
      - POSTGRES_DB=hello_django_dev
volumes:
  postgres_data:
*****Dockerfile*****
# pull official base image
FROM python:3.7-alpine
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2
RUN apk update \
&& apk add --virtual build-deps gcc python3-dev musl-dev \
&& apk add postgresql-dev \
&& pip install psycopg2 \
&& apk del build-deps
# install dependencies
RUN pip install --upgrade pip
RUN pip install pipenv
COPY ./docker/Pipfile /usr/src/app/Pipfile
RUN pipenv install --skip-lock --system --dev
# copy entrypoint.sh
COPY ./docker/entrypoint.sh /usr/src/app/entrypoint.sh
#RUN chmod +x /usr/src/app/entrypoint.sh
# copy project
COPY main /usr/src/app/main
COPY manage.py /usr/src/app
#RUN /usr/src/app/entrypoint.sh
RUN sed -i 's/\r$//' /usr/src/app/entrypoint.sh && \
chmod +x /usr/src/app/entrypoint.sh
# run entrypoint.sh
ENTRYPOINT ["sh","/usr/src/app/entrypoint.sh"]
*****entrypoint.sh*****
#!/bin/sh
if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."
    while ! nc -z $SQL_HOST $SQL_PORT; do
        sleep 0.1
    done
    echo "PostgreSQL started"
fi

echo "Testing"
#python /usr/src/app/manage.py flush
#python /usr/src/app/manage.py migrate
#python /usr/src/app/manage.py collectstatic --no-input --clear

# hand off to the container's CMD, forwarding all arguments
exec "$@"
The goal in the end is that the container would be up and running with the django application created.
In leveraging the layout listed here - https://github.com/testdrivenio/django-on-docker - it worked. The difference in what I was doing is that I created a new docker directory at the root and then had docker compose build from that. Everything seemed to copy into the container as it was supposed to, but for some reason the ENTRYPOINT would not work (see the note after the layouts below). Without changing any of the code other than updating the references to the new file locations, everything worked. Below were the changes made:
web:
  build:
    context: .
    dockerfile: ./docker/Dockerfile

to

web:
  build: ./app
and then changing the directory structure from
Project Layout:
├───.vscode
├───docker
│ └───Dockerfile
│ └───entrypoint.sh
│ └───Pipfile
│ └───nginx
└───main
├───migrations
├───static
│ └───images
├───templates
├───Artwork
├───django-env
│ ├───Include
│ ├───Lib
│ └───Scripts
└───docker-compose.yml
└───manage.py
to
Project Layout:
├───.vscode
├───app
│ └───main
│ ├───migrations
│ ├───static
│ │ └───images
│ ├───templates
│ └───Dockerfile
│ └───entrypoint.sh
│ └───manage.py
│ └───Pipfile
├───Artwork
├───django-env
│ ├───Include
│ ├───Lib
│ └───Scripts
└───nginx
└───docker-compose.yml
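A note on the likely root cause (my reading, not stated above): the original compose file bind-mounted ./main/ over /usr/src/app/, and a bind mount hides whatever the image placed at that path, including the entrypoint.sh that was copied in at build time, which matches the "can't open ... No such file or directory" error. Under that assumption, a narrower mount would also have avoided the problem, e.g.:

volumes:
  - ./main/:/usr/src/app/main/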
