Why is my docker-compose build failing when copying files? - docker

I have a project that runs just fine using FastAPI (main.py). Now I want to share it as a Docker image. Here is what I am doing:
My project has this structure:
project
│   docker-compose.yaml
│   requirements.txt
│
└───services
    ├───api
    │   │   Dockerfile
    │   │   main.py
    │   │
    │   └───model
    │           file.model
    │           file.model
    └───statspy
            file.dev.toml
            file.prod.toml
My Dockerfile:
FROM python:3.10
RUN pip install fastapi uvicorn transformers
COPY ./api /api/api
ENV PYTHONPATH=/api
WORKDIR /api
EXPOSE 8000
ENTRYPOINT ["uvicorn"]
CMD ["api.main:app", "--host", "0.0.0.0"]
docker-compose.yaml:
version: "3"
services:
docker_model:
build: ./services/api
ports:
- 8000:8000
labels:
- "statspy.enable=true"
- "statspy.http.routers.fastapi.rule=Host(`fastapi.localhost`)"
topic_v5:
image: statspy:v5.0
ports:
- "80:80"
- "8080:8080"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock:ro"
- "$PWD/services/statspy/statspy.dev.toml:/etc/statspy/statpys.toml"
When I run docker-compose build, it fails with this error message:
Step 3/8 : COPY ./api /api/api
COPY failed: file not found in build context or excluded by .dockerignore: stat api: file does not exist
What am I doing wrong here?

Your build context in the docker-compose file is build: ./services/api
project
│   docker-compose.yaml
│   requirements.txt
│
└───services
    ├───api          <--- docker_model's Dockerfile executes from here
    │   │   Dockerfile
    │   │   main.py
    │   │
    │   └───model
    │           file.model
    │           file.model
    └───statspy
            file.dev.toml
            file.prod.toml
You later try to do COPY ./api /api/api. There is no api directory inside services/api, so the COPY instruction fails. What you probably want instead is COPY . /api/api: the build context is already services/api, so . refers to its contents, and main.py then lands at /api/api/main.py, which is where the CMD's api.main:app (with PYTHONPATH=/api) expects it. The rest of the Dockerfile looks correct.
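For reference, here is the full Dockerfile with only that line changed (a sketch, keeping everything else from your original):
FROM python:3.10
RUN pip install fastapi uvicorn transformers
# The build context is services/api, so "." is its contents;
# copying into /api/api makes the module path api.main importable
COPY . /api/api
ENV PYTHONPATH=/api
WORKDIR /api
EXPOSE 8000
ENTRYPOINT ["uvicorn"]
CMD ["api.main:app", "--host", "0.0.0.0"]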

Related

Sphinx Docker Deployment

I'm trying to create a documentation website with docker-compose. I followed this tutorial. Locally I can run it successfully, but when I try to run the container on the server it returns this error:
docs_1 | 2022-09-06T14:34:44.430819779Z [sphinx-autobuild] > sphinx-build -b html /etc/Sphinx/source /etc/Sphinx/build
docs_1 | 2022-09-06T14:34:44.807214119Z
docs_1 | 2022-09-06T14:34:44.807251454Z Application error:
docs_1 | 2022-09-06T14:34:44.807257159Z Cannot find source directory (/etc/Sphinx/source)
docs_1 | 2022-09-06T14:34:44.867591050Z Command exited with exit code: 2
docs_1 | 2022-09-06T14:34:44.867628073Z The server will continue serving the build folder, but the contents being served are no longer in sync with the documentation sources. Please fix the cause of the error above or press Ctrl+C to stop the server.
Here is my Dockerfile:
FROM alpine:latest
WORKDIR /etc/
RUN mkdir -p /etc/Sphinx/build
RUN apk add --no-cache python3 py3-pip make git
RUN pip install git+https://github.com/sphinx-doc/sphinx && \
pip install sphinx-autobuild
CMD sphinx-autobuild -b html --host 0.0.0.0 --port 80 /etc/Sphinx/source /etc/Sphinx/build
COPY /doc/ /etc/Sphinx/source
And my docker-compose.yml:
version: "3.0"
services:
  docs:
    image: registry.digitalocean.com/my_username/${IMAGE}
    container_name: docs
    build: .docker
    volumes:
      - ./doc:/etc/Sphinx/source
    ports:
      - 8100:80
And this is the docker-compose.yml on the server:
docs:
  image: registry.digitalocean.com/my_username/docs
  restart: unless-stopped
  ports:
    - 8100:80
My folder structure is:
docs/
├─ .docker/
│ ├─ dev.env
│ ├─ Dockerfile
│ ├─ prod.env
├─ doc/
│ ├─ conf.py
│ ├─ index.rst
│ ├─ getting_started.rst
├─ docker-compose.yml
├─ README.md
I tried to copy the files in the doc folder to /etc/Sphinx/source, but it returns this error:
failed to compute cache key: "/doc" not found: not found
Any help will be appreciated, thanks.
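One likely cause, as an untested guess: with build: .docker the build context is the .docker folder itself, so COPY /doc/ ... can never see the sibling doc/ directory (hence the cache-key error), and the server compose file mounts no ./doc volume at all. A sketch of one possible fix, widening the build context to the project root so the sources get baked into the image:
# .docker/Dockerfile (sketch)
FROM alpine:latest
RUN mkdir -p /etc/Sphinx/build
RUN apk add --no-cache python3 py3-pip make git
RUN pip install git+https://github.com/sphinx-doc/sphinx && \
    pip install sphinx-autobuild
# The context is now the project root, so doc/ is reachable
COPY doc/ /etc/Sphinx/source
CMD sphinx-autobuild -b html --host 0.0.0.0 --port 80 /etc/Sphinx/source /etc/Sphinx/build

# docker-compose.yml (sketch)
services:
  docs:
    image: registry.digitalocean.com/my_username/${IMAGE}
    container_name: docs
    build:
      context: .
      dockerfile: .docker/Dockerfile
    ports:
      - 8100:80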

Dockerizing a FastAPI backend with React Frontend - tips

I am attempting to build a simple app with FastAPI and React. I have been advised by our engineering dept. that I should Dockerize it as one app instead of a separate front end and back end...
I have the app functioning as I need without any issues; my current directory structure is:
.
├── README.md
├── backend
│ ├── Dockerfile
│ ├── Pipfile
│ ├── Pipfile.lock
│ └── main.py
└── frontend
├── Dockerfile
├── index.html
├── package-lock.json
├── package.json
├── postcss.config.js
├── src
│ ├── App.jsx
│ ├── favicon.svg
│ ├── index.css
│ ├── logo.svg
│ └── main.jsx
├── tailwind.config.js
└── vite.config.js
I am a bit of a Docker noob and have only ever built images for projects that aren't split into a front end and back end.
I have a .env file in each, with only simple things like URLs or hosts.
I currently run the front end and back end separately, for example:
> ./frontend
> npm run dev
> ./backend
> uvicorn ....
Can anyone give me tips /advice on how I can dockerize this as one?
As a good practice, one Docker image should contain one process, so you should dockerize them separately (one Dockerfile per app).
Then you can add a docker-compose.yml file at the root of your project to link them together; it could look like this:
version: '3.3'
services:
  app:
    build:
      context: ./frontend/
      dockerfile: ./Dockerfile
    ports:
      - "127.0.0.1:80:80"
  backend:
    env_file:
      - backend/.env
    build:
      context: ./backend/
      dockerfile: ./Dockerfile
    ports:
      - "127.0.0.1:8000:80"
The backend would be running on http://localhost:8000 and the frontend on http://localhost:80. To start the docker-compose stack you can just type in your shell:
$> docker-compose up
This assumes that you already have a Dockerfile for both apps. You can find many examples online of Dockerfile implementations for the different technologies, both for React and for FastAPI.
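Once the stack is up, a quick smoke test from another shell (assuming the FastAPI app keeps its default /docs route):
$> curl http://localhost:8000/docs
$> curl http://localhost:80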
Following up on Vinalti's answer, I would also recommend using one Dockerfile for the backend, one for the frontend, and a docker-compose.yml file to link them together. Given the following project structure, this is what worked for me.
The project runs FastAPI (backend) on port 8000 and React (frontend) on port 3006.
.
├── README.md
├── docker-compose.yml
├── backend
│ ├── .env
│ ├── Dockerfile
│ ├── app/
│ ├── venv/
│ ├── requirements.txt
│ └── main.py
└── frontend
├── .env
├── Dockerfile
├── package.json
├── package-lock.json
├── src/
├── ...
backend/Dockerfile
FROM python:3.10
WORKDIR /code
COPY ./requirements.txt /code/requirements.txt
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
COPY ./ /code/
CMD ["uvicorn", "app.api:app", "--host", "0.0.0.0", "--port", "8000"]
frontend/Dockerfile
# pull official base image
FROM node:latest as build
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY package.json ./
COPY package-lock.json ./
# Silent clean install of npm
RUN npm ci --silent
RUN npm install react-scripts@3.4.1 -g --silent
# add app
COPY . /app/
# Build production
RUN npm run build
RUN npm install -g serve
## Start the app on port 3006
CMD serve -s build -l 3006
docker-compose.yml
version: '3.8'
services:
  backend:
    env_file:
      - backend/.env
    build:
      context: ./backend/
      dockerfile: ./Dockerfile
    restart: always
    ports:
      - "127.0.0.1:8000:8000"
    expose:
      - 8000
  frontend:
    env_file:
      - frontend/.env
    build:
      context: ./frontend/
      dockerfile: ./Dockerfile
    restart: always
    ports:
      - "127.0.0.1:3006:3006"
    expose:
      - 3006
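With those files in place, build and start both services from the project root (note that env_file requires backend/.env and frontend/.env to exist, or compose will error out):
$> docker-compose up --build
The backend should then be reachable at http://127.0.0.1:8000 and the frontend at http://127.0.0.1:3006.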

How do I create a Dockerfile and docker-compose.yml from these commands?

I have a problem. I have the following commands:
docker pull tensorflow/serving
docker run -it -v \folder\model:/model -p 8601:8601 --entrypoint /bin/bash tensorflow/serving
tensorflow_model_server --rest_api_port=8601 --model_name=model --model_base_path=/model/
I would like to turn these into a Dockerfile and a docker-compose.yml. The problem is that the models live under the following folder structure, so I would have to go up one folder and into another. How exactly do I make it all work?
Folder structure:
folder
├── model          # should be copied to \model
│   ├── 1
│   │   └── ...
│   └── 2
│       └── ...
└── server
    ├── Dockerfile
    └── docker-compose.yml
This is what I have so far. Dockerfile:
FROM tensorflow/serving
docker-compose.yml:
services:
  tfserving:
    container_name: tfserving
    image: tensorflow/serving
    ports:
      - "8601:8601"
    volumes:
      - \folder\model
Dockerfile (name it with a capital D so docker-compose recognizes it with just . (dot), since it's in the same folder):
FROM tensorflow/serving
EXPOSE 8601
# CMD, not RUN: a RUN would start the server at build time and hang the build
CMD ["tensorflow_model_server", "--rest_api_port=8601", "--model_name=model", "--model_base_path=/models/model/"]
docker-compose.yml:
version: '3'
services:
  tfserving:
    container_name: tfserving
    build: .
    ports:
      - "8601:8601"
    volumes:
      - ../model:/models/model
    environment:
      - TENSORFLOW_SERVING_MODEL_NAME=model
    entrypoint: ["bash", "-c", "tensorflow_model_server --rest_api_port=8601 --model_name=model --model_base_path=/models/model/"]

docker invalid reference format

My file structure is shown below. I am building two containers: one is a MySQL database, the other is a Python application.
docker-compose.yml
version: '3'
services:
  mysql-dev:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: *****
      MYSQL_DATABASE: vlearn
    ports:
      - "3308:3308"
  app:
    image: ./app
    ports:
      - "5000:5000"
app Dockerfile:
FROM python:3.7
WORKDIR /usr/src/app
COPY . .
RUN pip install pipenv
RUN pipenv install --system --deploy --ignore-pipfile
CMD ["python","app.py"]
When I run docker-compose up I get the following error:
Pulling app (./app:)...
ERROR: invalid reference format
My directory structure:
├── app
│ ├── Dockerfile
│ ├── Pipfile
│ └── Pipfile.lock
└── docker-compose.yml
It must be build: ./app instead of image: ./app; image: expects an image reference (a name plus optional tag), not a filesystem path, which is why Docker reports invalid reference format:
app:
  build: ./app
  ports:
    - "5000:5000"
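Putting it together, the corrected docker-compose.yml in full (only the app service changes):
version: '3'
services:
  mysql-dev:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: *****
      MYSQL_DATABASE: vlearn
    ports:
      - "3308:3308"
  app:
    build: ./app
    ports:
      - "5000:5000"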

docker-compose ERROR: Cannot locate specified Dockerfile

I have a .NET Core project that I want to deploy to a production environment, but when I try to build it on my droplet I get the error "ERROR: Cannot locate specified Dockerfile". I couldn't figure out what's wrong with my configuration.
Project Structure
project
│   project.sln
│   docker-compose.dcproj
│   docker-compose.dev.yml
│   docker-compose.prod.yml
│   docker-compose.yml
│
├───project.Web
│       (mvc-files)
│       .dockerignore
│       Dockerfile
│       project.Web.csproj
│
├───project.Models
├───project.Services
└───project.Core
Dockerfile
FROM microsoft/dotnet:2.2-aspnetcore-runtime AS base
WORKDIR /app
FROM microsoft/dotnet:2.2-sdk AS build
WORKDIR /src
COPY project.Web/project.Web.csproj project.Web/
COPY project.Models/project.Models.csproj project.Models/
COPY project.Services/project.Services.csproj project.Services/
COPY project.Core/project.Core.csproj project.Core/
RUN dotnet restore project.Web/project.Web.csproj
COPY . .
WORKDIR /src/project.Web
RUN dotnet build project.Web.csproj -c Release -o /app
FROM build AS publish
RUN dotnet publish project.Web.csproj -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "project.Web.dll"]
docker-compose.yml
version: '3.7'
services:
  webapp:
    build:
      context: .
      dockerfile: project.Web/Dockerfile
I run these commands in the different environments:
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d --build
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d --build
This docker-compose setup works in my local environment, but on prod it can't find the Dockerfile.
I've checked whether .dockerignore excludes the Dockerfile, but it doesn't.
I've tried these configurations, but still no luck:
- context: .
  dockerfile: project.Web/Dockerfile
- context: .
  dockerfile: Dockerfile
- context: app/
  dockerfile: Dockerfile
- context: app/project.Web/
  dockerfile: Dockerfile
EDIT:
I didn't think the dev or prod docker-compose files were the problem, but I'm adding them anyway.
docker-compose.dev.yml
version: '3.7'
networks:
  network-dev:
    driver: bridge
services:
  webapp:
    image: project
    container_name: container-webapp
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=http://+:80
    networks:
      - "network-dev"
    ports:
      - "80"
    depends_on:
      - "db"
  db:
    image: postgres:latest
    container_name: container-db
    environment:
      - "POSTGRES_USER=username"
      - "POSTGRES_PASSWORD=password"
      - "POSTGRES_DB=projectdb"
    restart: always
    ports:
      - "13650:5432"
    networks:
      - "network-dev"
docker-compose.prod.yml
version: '3.7'
networks:
  network-prod:
    driver: bridge
services:
  webapp:
    image: alicoskun/project:latest
    container_name: container-webapp
    environment:
      - ASPNETCORE_ENVIRONMENT=Production
      - ASPNETCORE_URLS=http://+:80
    networks:
      - "network-prod"
    ports:
      - "80"
    depends_on:
      - "db"
  db:
    image: postgres:latest
    container_name: container-db
    environment:
      - "POSTGRES_USER=username"
      - "POSTGRES_PASSWORD=password"
      - "POSTGRES_DB=projectdb"
    restart: always
    ports:
      - "13650:5432"
    networks:
      - "network-prod"
Assuming you have already pushed the alicoskun/project:latest image into a repository where your production droplet can find it, you have no need to include the docker-compose.yml file as part of your docker-compose command. Instead, just run:
docker-compose -f docker-compose.prod.yml up -d --build
Including the docker-compose.yml in your docker-compose command-line will require that the Dockerfile be present, even though it will not be used to build the system.
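Since the prod file has no build: section, --build is effectively a no-op there as well; a sketch of the pull-then-run flow on the droplet (assuming the image has been pushed beforehand):
$> docker-compose -f docker-compose.prod.yml pull
$> docker-compose -f docker-compose.prod.yml up -d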
