Dockerizing a FastAPI backend with React Frontend - tips

I am attempting to build a simple app with FastAPI and React. I have been advised by our engineering department that I should Dockerize it as one app instead of a separate front end and back end...
I have the app functioning as I need without any issues; my current directory structure is:
.
├── README.md
├── backend
│   ├── Dockerfile
│   ├── Pipfile
│   ├── Pipfile.lock
│   └── main.py
└── frontend
    ├── Dockerfile
    ├── index.html
    ├── package-lock.json
    ├── package.json
    ├── postcss.config.js
    ├── src
    │   ├── App.jsx
    │   ├── favicon.svg
    │   ├── index.css
    │   ├── logo.svg
    │   └── main.jsx
    ├── tailwind.config.js
    └── vite.config.js
I am a bit of a Docker noob and have only ever built an image for projects that aren't split into a front end and back end.
I have a .env file in each; they contain only simple things like URLs or hosts.
I currently run the app with the front end and back end started separately, for example:
# in ./frontend
npm run dev
# in ./backend
uvicorn ...
Can anyone give me tips /advice on how I can dockerize this as one?

As a good practice, one Docker image should contain one process, so you should Dockerize them separately (one Dockerfile per app).
Then you can add a docker-compose.yml file at the root of your project to link them together; it could look like this:
version: '3.3'
services:
  app:
    build:
      context: ./frontend/
      dockerfile: ./Dockerfile
    ports:
      - "127.0.0.1:80:80"
  backend:
    env_file:
      - backend/.env
    build:
      context: ./backend/
      dockerfile: ./Dockerfile
    ports:
      - "127.0.0.1:8000:80"
The backend would be running on http://localhost:8000 and the frontend on http://localhost:80
To start everything, you can just type in your shell:
$> docker-compose up
This implies that you already have your Dockerfile for both apps.
You can find many examples online of Dockerfile implementations for the different technologies, e.g. for ReactJS and for FastAPI.
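As a rough sketch of what the backend Dockerfile could look like for the Pipenv-based structure above (the module path main:app and the internal port 80 are assumptions inferred from the compose mapping "127.0.0.1:8000:80"):

```dockerfile
# Hypothetical backend/Dockerfile for the Pipfile-based FastAPI app above.
# Assumes main.py exposes an `app` object and the container listens on
# port 80, matching the compose port mapping.
FROM python:3.11-slim
WORKDIR /app
COPY Pipfile Pipfile.lock ./
RUN pip install pipenv && pipenv install --system --deploy
COPY . .
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "80"]
```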

Following up on Vinalti's answer. I would also recommend using one Dockerfile for the backend, one for the frontend, and a docker-compose.yml file to link them together. Given the following project structure, this is what worked for me: a project running FastAPI (backend) on port 8000 and ReactJS (frontend) on port 3006.
.
├── README.md
├── docker-compose.yml
├── backend
│   ├── .env
│   ├── Dockerfile
│   ├── app/
│   ├── venv/
│   ├── requirements.txt
│   └── main.py
└── frontend
    ├── .env
    ├── Dockerfile
    ├── package.json
    ├── package-lock.json
    ├── src/
    └── ...
backend/Dockerfile
FROM python:3.10
WORKDIR /code
COPY ./requirements.txt /code/requirements.txt
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
COPY ./ /code/
CMD ["uvicorn", "app.api:app", "--host", "0.0.0.0", "--port", "8000"]
frontend/Dockerfile
# pull official base image
FROM node:latest as build
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY package.json ./
COPY package-lock.json ./
# Silent clean install of npm
RUN npm ci --silent
RUN npm install react-scripts@3.4.1 -g --silent
# add app
COPY . /app/
# Build production
RUN npm run build
RUN npm install -g serve
## Start the app on port 3006
CMD serve -s build -l 3006
docker-compose.yml
version: '3.8'
services:
  backend:
    env_file:
      - backend/.env
    build:
      context: ./backend/
      dockerfile: ./Dockerfile
    restart: always
    ports:
      - "127.0.0.1:8000:8000"
    expose:
      - 8000
  frontend:
    env_file:
      - frontend/.env
    build:
      context: ./frontend/
      dockerfile: ./Dockerfile
    restart: always
    ports:
      - "127.0.0.1:3006:3006"
    expose:
      - 3006

Use directory in docker-compose.yml's parent folder as volume

I have the following directory structure:
.
├── README.md
├── alice
├── docker
│   ├── compose-prod.yml
│   ├── compose-stage.yml
│   ├── compose.yml
│   └── dockerfiles
├── gauntlet
├── nexus
│   ├── Procfile
│   ├── README.md
│   ├── VERSION.txt
│   ├── alembic
│   ├── alembic.ini
│   ├── app
│   ├── poetry.lock
│   ├── pyproject.toml
│   └── scripts
nexus.Dockerfile
FROM python:3.10
RUN addgroup --system app && adduser --system --group app
WORKDIR /usr/src/pdn/nexus
COPY ../../nexus/pyproject.toml ../../nexus/poetry.lock* ./
ARG INSTALL_DEV=true
RUN bash -c "if [ $INSTALL_DEV == 'true' ] ; then poetry install --no-root ; else poetry install --no-root --no-dev ; fi"
COPY ../../nexus .
RUN chmod +x scripts/run.sh
ENV PYTHONPATH=/usr/src/pdn/nexus
RUN chown -R app:app $HOME
USER app
CMD ["./run.sh"]
The relevant service in compose.yml looks like this:
services:
  nexus:
    platform: linux/arm64
    build:
      context: ../
      dockerfile: ./docker/dockerfiles/nexus.Dockerfile
    container_name: nexus
    restart: on-failure
    ports:
      - "8000:8000"
    volumes:
      - ../nexus:/usr/src/pdn/nexus:ro
    environment:
      - DATABASE_HOSTNAME=${DATABASE_HOSTNAME?}
    env_file:
      - .env
When I run compose up, I get the following error:
Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "./scripts/run.sh": permission denied: unknown
The service starts OK without the volume definition. I think it might be because of the location of nexus in relation to the Dockerfile or compose file, but the context is set to the parent.
I tried defining the volume as follows:
volumes:
  - ./nexus:/usr/src/pdn/nexus:ro
But I get a similar error: in this case run.sh is not found, and a directory named nexus gets created in the docker directory:
Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "./run.sh": stat ./run.sh: no such file or directory: unknown
Not sure what I'm missing.
I have two comments; I'm not sure if they can solve your issue.
First, although you are allowed to reference parent directories in your compose.yml, that is not the case in your Dockerfile: you can't COPY from outside the build context specified in your compose.yml file (.., which resolves to your app root). So you should change these lines:
COPY ../../nexus/pyproject.toml ../../nexus/poetry.lock* ./
COPY ../../nexus .
to
COPY ./nexus/pyproject.toml ./nexus/poetry.lock* ./
COPY ./nexus .
Second, the volume overrides whatever is in /usr/src/pdn/nexus with the content of ../nexus. This makes everything you copied to /usr/src/pdn/nexus useless. That may not be an issue if the contents are the same, but any permissions you set on those files (such as the chmod +x on run.sh) are gone. So if the contents are the same, the only problem left is your start script: you can put it in a separate directory outside /usr/src/pdn/nexus so that it is not overridden, and don't forget to reference it correctly in CMD.
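A sketch of that approach, assuming the script can live under /usr/local/bin (the destination path is illustrative):

```dockerfile
# Copy the entrypoint outside the bind-mounted /usr/src/pdn/nexus so the
# read-only volume cannot shadow it or drop its execute bit
COPY ./nexus/scripts/run.sh /usr/local/bin/run.sh
RUN chmod +x /usr/local/bin/run.sh
CMD ["/usr/local/bin/run.sh"]
```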

Building a Docker container for Golang code: package PACKAGE_NAME is not in GOROOT

I built a small Golang application and I want to run it on a Docker container.
I wrote the following Dockerfile:
# syntax=docker/dockerfile:1
FROM golang:1.16-alpine
WORKDIR /app
COPY go.mod ./
COPY go.sum ./
RUN go mod download
COPY ./* .
RUN go env -w GO111MODULE=on
RUN go build -o /docker-gs-ping
EXPOSE 8080
CMD [ "/docker-gs-ping" ]
However, when I run the command:
docker build --tag docker-gs-ping .
I get the errors:
#16 0.560 found packages controllers (controller.go) and repositories (csv_file_repository.go) in /app
#16 0.560 main.go:4:2: package MyExercise/controllers is not in GOROOT (/usr/local/go/src/MyExercise/controllers)
I want to mention that the package controllers exists in my working directory and all files associated with this directory are placed in MyExercise/controllers folder.
Do you know how to resolve this error?
Edit:
This is the directory tree:
.
├── Dockerfile
├── REDAME
├── controllers
│   └── controller.go
├── go.mod
├── go.sum
├── logging
│   └── logger.go
├── main.go
├── models
│   └── location.go
├── output.log
├── repositories
│   ├── csv_file_repository.go
│   ├── csv_file_repository_builder.go
│   ├── csv_file_repository_builder_test.go
│   ├── csv_file_repository_test.go
│   ├── repository_builder_interface.go
│   ├── repository_interface.go
│   └── resources
│       └── ip_address_list.txt
└── services
    ├── ip_location_service.go
    ├── ip_location_service_test.go
    ├── rate_limiter_service.go
    ├── rate_limiter_service_interface.go
    ├── rate_limiter_service_test.go
    └── time_service.go
import section in main.go:
import (
    "MyExercise/controllers"
    "MyExercise/logging"
    "MyExercise/repositories"
    "MyExercise/services"
    "errors"
    "github.com/gin-gonic/gin"
    "os"
    "strconv"
    "sync"
)
Run go mod vendor in your app directory (see the vendoring documentation).
To build the container: docker build -t app:v1 .
Dockerfile
FROM golang:1.16-alpine
WORKDIR /app/
ADD . .
RUN go build -o /app/main
EXPOSE 5055
CMD [ "/app/main" ]
There is actually an issue with your Dockerfile.
COPY ./* .
does not actually do what you think. It will copy all files recursively in a flat structure to the /app directory.
Modify your Dockerfile to something like:
# syntax=docker/dockerfile:1
FROM golang:1.16-alpine
WORKDIR /app
ADD . /app
RUN go mod download
RUN go env -w GO111MODULE=on
RUN go build -o /docker-gs-ping
EXPOSE 8080
CMD [ "/docker-gs-ping" ]
Basically, remove all of the COPY directives and replace them with a single ADD directive.
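For local paths, ADD and COPY behave the same here (COPY is usually preferred for plain file copies); the important change is copying the context root instead of a glob:

```dockerfile
# Copying the build context root preserves the directory layout, so
# MyExercise/controllers ends up at /app/controllers as the imports expect.
# `COPY ./* .` would instead flatten each directory's contents into /app.
COPY . /app
```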

dockerignore with Docker compose not ignoring .env file

Below you can see a representation of my project folder structure. I have two microservices, called auth and profile, which are located inside the services directory. The docker-containers directory holds my docker-compose.yaml file, in which I list all the images of my application.
.
├── services
│   ├── auth
│   │   ├── src
│   │   ├── dist
│   │   ├── .env
│   │   ├── package.json
│   │   ├── Dockerfile
│   │   └── .dockerignore
│   └── profile
│       ├── src
│       ├── dist
│       ├── .env
│       ├── package.json
│       ├── Dockerfile
│       └── .dockerignore
└── docker-containers
    └── docker-compose.yaml
Below is my docker-compose.yaml file, in which I define the auth service (and the other images). I also want to override the local .env file with the values from the environment list, but when I run the Docker Compose project the values from my local .env file are still being used.
version: "3.8"
services:
  auth:
    build:
      context: ../services/auth
    container_name: auth-service
    depends_on:
      - redis
      - mongo
    ports:
      - 3000:3000
    volumes:
      - ../services/auth/:/app
      - /app/node_modules
    command: yarn dev
    env_file: ../services/auth/.env
    environment:
      FASTIFY_PORT: 3000
      REDIS_HOST: redis
      FASTIFY_ADDRESS: "0.0.0.0"
      TOKEN_SECRET: 1d037ffb614158a9032c02f479b36f42dd33ba325f76a7692498c33839afc5d547eae2b47f0f4926b76b08fc91d19352
      MONGO_URL: mongodb://root:example@mongo:27017
  mongo:
    image: mongo
    container_name: mongo
    restart: on-failure
    ports:
      - 2717:27017
    volumes:
      - ./mongo-data:/data
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
  redis:
    image: redis
    container_name: redis
    volumes:
      - ./redis-data:/data
    ports:
      - 6379:6379
Below are my Dockerfile and the .dockerignore file inside the auth service. Based on my understanding, the local .env file should not be copied into the Docker build context, because it is listed in the .dockerignore file. But when I log a value from the environment variables inside the Docker application, it still logs the old value from my local .env file.
auth/Dockerfile
FROM node:16-alpine
WORKDIR /app
COPY ["package.json", "yarn.lock", "./"]
RUN yarn
COPY dist .
EXPOSE 3000
CMD [ "yarn", "start" ]
auth/.dockerignore
node_modules
Dockerfile
.env*
.prettier*
.git
.vscode/
The weird part is that the node_modules folder of the auth service is being ignored, but for some reason the environment variables inside the Docker container are still based on the local .env file.

docker invalid reference format

My file structure is shown below. I am building two containers: one is a MySQL database and the other is a Python application.
docker-compose.yml
version: '3'
services:
  mysql-dev:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: *****
      MYSQL_DATABASE: vlearn
    ports:
      - "3308:3308"
  app:
    image: ./app
    ports:
      - "5000:5000"
app/Dockerfile
FROM python:3.7
WORKDIR /usr/src/app
COPY . .
RUN pip install pipenv
RUN pipenv install --system --deploy --ignore-pipfile
CMD ["python","app.py"]
When I run docker-compose up I get the following error:
Pulling app (./app:)...
ERROR: invalid reference format
My directory structure:
.
├── app
│   ├── Dockerfile
│   ├── Pipfile
│   └── Pipfile.lock
└── docker-compose.yml
It must be build: ./app instead of image: ./app :
app:
  build: ./app
  ports:
    - "5000:5000"

How to specify build options for docker-compose build stage?

I have a main docker-compose.yml:
version: '3'
services:
  recognizer:
    container_name: recognizer
    build: ./recognizer
    hostname: recognizer
    restart: always
    ports:
      - 8084:8084
    network_mode: "host"
I have a folder recognizer with Dockerfile and docker-compose.yml:
Dockerfile:
FROM openjdk:8
RUN mkdir -p /app/
COPY . /app
WORKDIR /app
RUN chmod 777 /app/gradlew
RUN apt-get update && apt-get install -y netcat-traditional
RUN nc -w 2 -v localhost 5432 </dev/null; status=$?; exit $status;
RUN ./gradlew build
CMD ["./gradlew", "run"]
EXPOSE 8084
docker-compose.yml:
recognizer:
  build:
    context: .
  ports:
    - "8084:8084"
My file structure is:
.
├── docker-compose.yml
├── recognizer
│   ├── build
│   ├── build.gradle
│   ├── docker-compose.yml
│   ├── Dockerfile
│   ├── gradle
│   ├── gradlew
│   ├── gradlew.bat
│   ├── settings.gradle
│   └── src
So, I need to connect to localhost during the build stage. It works if I build the Docker image with the --network=host option, like this:
docker build --network=host -t recognizer .
But I don't know how to specify this build option for docker-compose.
There is an option to set the network in docker-compose.yml like this:
build:
  context: ./
  dockerfile: Dockerfile
  network: host
Then docker-compose build your-service will use the host network for the build stage.
This solved the problem for me
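Applied to the main docker-compose.yml above, this could look like the following sketch (note that build.network requires compose file format 3.4 or newer, so the version line changes from '3'):

```yaml
version: '3.4'          # build.network needs file format >= 3.4
services:
  recognizer:
    container_name: recognizer
    build:
      context: ./recognizer
      dockerfile: Dockerfile
      network: host     # equivalent of `docker build --network=host`
    hostname: recognizer
    restart: always
    ports:
      - 8084:8084
    network_mode: "host"
```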
