Below is a representation of my project folder structure. I have two microservices, auth and profile, located inside the services directory. The docker-containers directory holds my docker-compose.yaml file, in which I list all the images of my application.
.
├── services
│ ├── auth
│ │ ├── src
│ │ ├── dist
│ │ ├── .env
│ │ ├── package.json
│ │ ├── Dockerfile
│ │ ├── .dockerignore
│ ├── profile
│ │ ├── src
│ │ ├── dist
│ │ ├── .env
│ │ ├── package.json
│ │ ├── Dockerfile
│ │ ├── .dockerignore
└── docker-containers
├── docker-compose.yaml
Below is my docker-compose.yaml file, in which I define the location of the auth service (and other images). I also want to override the local .env file with the values from the environment list. But when I run the Docker Compose project, the values from my local .env file are still being used.
version: "3.8"
services:
auth:
build:
context: ../services/auth
container_name: auth-service
depends_on:
- redis
- mongo
ports:
- 3000:3000
volumes:
- ../services/auth/:/app
- /app/node_modules
command: yarn dev
env_file: ../services/auth/.env
environment:
FASTIFY_PORT: 3000
REDIS_HOST: redis
FASTIFY_ADDRESS: "0.0.0.0"
TOKEN_SECRET: 1d037ffb614158a9032c02f479b36f42dd33ba325f76a7692498c33839afc5d547eae2b47f0f4926b76b08fc91d19352
MONGO_URL: mongodb://root:example#mongo:27017
mongo:
image: mongo
container_name: mongo
restart: on-failure
ports:
- 2717:27017
volumes:
- ./mongo-data:/data
environment:
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: example
redis:
image: redis
container_name: redis
volumes:
- ./redis-data:/data
ports:
- 6379:6379
Below are the Dockerfile and the .dockerignore file inside the auth service. Based on my understanding, the local .env file should not be copied into the Docker build context, because it is listed in the .dockerignore file.
But when I log a value from the environment variables inside the Docker application, it still logs the old value from my local .env file.
FROM node:16-alpine
WORKDIR /app
COPY ["package.json", "yarn.lock", "./"]
RUN yarn
COPY dist .
EXPOSE 3000
CMD [ "yarn", "start" ]
node_modules
Dockerfile
.env*
.prettier*
.git
.vscode/
The weird part is that the node_modules folder of the auth service is being ignored, but for some reason the environment variables inside the Docker container are still based on the local .env file.
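For reference, the Compose documentation states that values set under environment: take precedence over those loaded through env_file:, so the override should already apply at the Compose level. Two standard commands (assuming the service name auth from the file above) help narrow down where the old values come from:
docker compose config            # prints the service definitions as Compose resolves them
docker compose exec auth env     # prints the variables the running auth container actually sees
If both of these show the expected values, the overriding likely happens inside the application itself, for example a dotenv call reading the bind-mounted /app/.env at runtime.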
Related
I am attempting to build a simple app with FastAPI and React. I have been advised by our engineering dept. that I should Dockerize it as one app instead of separate front and back ends...
I have the app functioning as I need without any issues; my current directory structure is:
.
├── README.md
├── backend
│ ├── Dockerfile
│ ├── Pipfile
│ ├── Pipfile.lock
│ └── main.py
└── frontend
├── Dockerfile
├── index.html
├── package-lock.json
├── package.json
├── postcss.config.js
├── src
│ ├── App.jsx
│ ├── favicon.svg
│ ├── index.css
│ ├── logo.svg
│ └── main.jsx
├── tailwind.config.js
└── vite.config.js
I am a bit of a Docker noob and have only ever built images for projects that aren't split into a front end and back end.
I have a .env file in each, containing only simple things like URLs or hosts.
I currently run the app with the front end and back end started separately, for example:
> ./frontend
> npm run dev
> ./backend
> uvicorn ....
Can anyone give me tips/advice on how I can dockerize this as one?
As good practice, one Docker image should contain one process. Therefore you should dockerize them separately (have one Dockerfile per app).
Then, you can add a docker-compose.yml file at the root of your project in order to link them together; it could look like this:
version: '3.3'
services:
  app:
    build:
      context: ./frontend/
      dockerfile: ./Dockerfile
    ports:
      - "127.0.0.1:80:80"
  backend:
    env_file:
      - backend/.env
    build:
      context: ./backend/
      dockerfile: ./Dockerfile
    ports:
      - "127.0.0.1:8000:80"
The backend would be running on http://localhost:8000 and the frontend on http://localhost:80.
To start the Compose project, you can just type in your shell:
$> docker-compose up
This assumes that you already have a Dockerfile for both apps.
You can find many examples online of Dockerfile implementations for the different technologies, for both ReactJS and FastAPI.
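For instance, here is a minimal sketch of a frontend Dockerfile for a Vite project served by nginx on port 80 (matching the app service above); the image tags and the dist output folder are assumptions, not taken from the question:
# build stage: install dependencies and produce the production bundle
FROM node:18-alpine AS build
WORKDIR /app
# copy only the dependency manifests first so this layer is cached between builds
COPY package.json package-lock.json ./
RUN npm ci
# copy the sources and build; Vite writes the output to /app/dist by default
COPY . .
RUN npm run build
# final stage: serve the static files with nginx on port 80
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80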
Following up on Vinalti's answer, I would also recommend using one Dockerfile for the backend, one for the frontend, and a docker-compose.yml file to link them together. Given the following project structure, this is what worked for me.
The project runs FastAPI (backend) on port 8000 and ReactJS (frontend) on port 3006.
.
├── README.md
├── docker-compose.yml
├── backend
│ ├── .env
│ ├── Dockerfile
│ ├── app/
│ ├── venv/
│ ├── requirements.txt
│ └── main.py
└── frontend
├── .env
├── Dockerfile
├── package.json
├── package-lock.json
├── src/
├── ...
backend/Dockerfile
FROM python:3.10
WORKDIR /code
COPY ./requirements.txt /code/requirements.txt
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
COPY ./ /code/
CMD ["uvicorn", "app.api:app", "--host", "0.0.0.0", "--port", "8000"]
frontend/Dockerfile
# pull official base image
FROM node:latest as build
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY package.json ./
COPY package-lock.json ./
# Silent clean install of npm
RUN npm ci --silent
RUN npm install react-scripts@3.4.1 -g --silent
# add app
COPY . /app/
# Build production
RUN npm run build
RUN npm install -g serve
## Start the app on port 3006
CMD serve -s build -l 3006
docker-compose.yml
version: '3.8'
services:
  backend:
    env_file:
      - backend/.env
    build:
      context: ./backend/
      dockerfile: ./Dockerfile
    restart: always
    ports:
      - "127.0.0.1:8000:8000"
    expose:
      - 8000
  frontend:
    env_file:
      - frontend/.env
    build:
      context: ./frontend/
      dockerfile: ./Dockerfile
    restart: always
    ports:
      - "127.0.0.1:3006:3006"
    expose:
      - 3006
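With that layout in place, the whole stack can be built and started from the project root using the standard Compose workflow (nothing here is specific to this project):
docker-compose up --build      # build both images and start the containers
docker-compose up -d           # or start them in the background once built
The backend is then reachable on http://127.0.0.1:8000 and the frontend on http://127.0.0.1:3006, matching the port mappings above.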
I have a problem. I have the following commands.
docker pull tensorflow/serving
docker run -it -v \folder\model:/model -p 8601:8601 --entrypoint /bin/bash tensorflow/serving
tensorflow_model_server --rest_api_port=8601 --model_name=model --model_base_path=/model/
I would like to add these to a Dockerfile and a docker-compose.yml. The problem is that the models are under the following folder structure, so I would have to go back one folder and into another. How exactly do I make it all work?
folder structure
folder
├── model # should be copied to \model
│ └── 1
│ └── ...
│ └── 2
│ └── ...
├── server
│ └── Dockerfile
│ └── docker-compose.yml
FROM tensorflow/serving
services:
  tfserving:
    container_name: tfserving
    image: tensorflow/serving
    ports:
      - "8601:8601"
    volumes:
      - \folder\model
Dockerfile (name it with a capital D so it is recognized by docker-compose when the build context is just . (dot), since it is in the same folder):
FROM tensorflow/serving
EXPOSE 8601
CMD tensorflow_model_server --rest_api_port=8601 --model_name=model --model_base_path=/model/
docker-compose.yml:
version: '3'
services:
  tfserving:
    container_name: tfserving
    build: .
    ports:
      - "8601:8601"
    volumes:
      - ../model:/models/model
    environment:
      - TENSORFLOW_SERVING_MODEL_NAME=model
    entrypoint: [ "bash", "-c", "tensorflow_model_server --rest_api_port=8601 --model_name=model --model_base_path=/models/model/"]
I really need your help!
I'm encountering a problem with loading a plugin in a Dockerized Mosquitto.
I tried to load it on a local version of Mosquitto and it worked well.
The error returned in the Docker console is:
dev_instance_mosquitto_1 exited with code 13
The errors returned in Mosquitto's log file are:
1626352342: Loading plugin: /mosquitto/config/mosquitto_message_timestamp.so
1626352342: Error: Unable to load auth plugin "/mosquitto/config/mosquitto_message_timestamp.so".
1626352342: Load error: Error relocating /mosquitto/config/mosquitto_message_timestamp.so: __sprintf_chk: symbol not found
Here is a tree output of the project:
mosquitto/
├── Dockerfile
├── config
│ ├── acl
│ ├── ca_certificates
│ │ ├── README
│ │ ├── broker_CA.crt
│ │ ├── mqtt.test.perax.com.p12
│ │ ├── private_key.key
│ │ └── server_ca.crt
│ ├── certs
│ │ ├── CA_broker_mqtt.crt
│ │ ├── README
│ │ ├── serveur_broker.crt
│ │ └── serveur_broker.key
│ ├── conf.d
│ │ └── default.conf
│ ├── mosquitto.conf
│ ├── mosquitto_message_timestamp.so
│ └── pwfile
├── data
│ └── mosquitto.db
└── log
└── mosquitto.log
Here is the Dockerfile:
FROM eclipse-mosquitto
COPY config/ /mosquitto/config
COPY config/mosquitto_message_timestamp.so /usr/lib/mosquitto_message_timestamp.so
RUN install /usr/lib/mosquitto_message_timestamp.so /mosquitto/config/
Here is the docker-compose.yml:
mosquitto:
  restart: always
  build: ./mosquitto/
  image: "eclipse-mosquitto/latests"
  ports:
    - "1883:1883"
    - "9001:9001"
  volumes:
    - ./mosquitto/config/:/mosquitto/config/
    - ./mosquitto/data/:/mosquitto/data/
    - ./mosquitto/log/mosquitto.log:/mosquitto/log/mosquitto.log
  user: 1883:1883
  environment:
    - PUID=1883
    - PGID=1883
Here is the mosquitto.conf:
persistence true
persistence_location /mosquitto/data
log_dest file /mosquitto/log/mosquitto.log
include_dir /mosquitto/config/conf.d
plugin /mosquitto/config/mosquitto_message_timestamp.so
I'm using Mosquitto 2.0.10 on an Ubuntu server, version 18.04.5 LTS.
Thank you in advance for your help.
Your best bet here is probably to set up a multi-stage Docker build that uses an Alpine-based image to build the plugin and then copies it into the eclipse-mosquitto image.
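As a rough illustration of that idea (the source path, package names and build command below are assumptions, since the plugin's sources and build system are not shown in the question), a multi-stage Dockerfile could look something like this:
# build stage: compile the plugin against musl, the libc used by the Alpine-based eclipse-mosquitto image
FROM alpine:3.16 AS build
RUN apk add --no-cache build-base mosquitto-dev
# hypothetical location of the plugin sources; adjust to the real layout and build command
COPY plugin-src/ /src/
WORKDIR /src
RUN make
# final stage: reuse the official image and add the freshly built plugin
FROM eclipse-mosquitto
COPY config/ /mosquitto/config/
COPY --from=build /src/mosquitto_message_timestamp.so /mosquitto/config/mosquitto_message_timestamp.so
The point of building inside Alpine is that the resulting .so links against the same C library as the broker image, which avoids glibc-only symbols such as __sprintf_chk.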
On my Windows 10 Home computer with Docker Toolbox, Docker is having trouble mounting the drives. I've already run dos2unix on the entrypoint.sh file.
The full error is as such:
ERROR: for users Cannot start service users: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"/usr/src/app/entrypoint.sh\": stat /usr/src/app/entrypoint.sh: no such file or directory": unknown
My docker-compose.yml:
version: '3.7'
services:
  users:
    build:
      context: ./services/users
      dockerfile: Dockerfile
    entrypoint: ['/usr/src/app/entrypoint.sh']
    volumes:
      - './services/users:/usr/src/app'
    ports:
      - 5001:5000
    environment:
      - FLASK_ENV=development
      - APP_SETTINGS=project.config.DevelopmentConfig
      - DATABASE_URL=postgresql://postgres:postgres@users-db:5432/users_dev
      - DATABASE_TEST_URL=postgresql://postgres:postgres@users-db:5432/users_test
    depends_on:
      - users-db
Curiously, when I comment out the "volumes" section, it works! But I want to be able to mount volumes in the future.
Directory structure can be seen as such:
D:\flask-react-auth
│ .gitignore
│ .gitlab-ci.yml
│ docker-compose.yml
│ README.md
│ release.sh
│
└───services
│
└───users
│ .coveragerc
│ .dockerignore
│ Dockerfile
│ Dockerfile.prod
│ entrypoint.sh
│ manage.py
│ requirements-dev.txt
│ requirements.txt
│ setup.cfg
│ tree.txt
│
└───project
│ config.py
│ __init__.py
│
├───api
│ │ ping.py
│ │ __init__.py
│ │
│ └───users
│ admin.py
│ crud.py
│ models.py
│ views.py
│ __init__.py
│
├───db
│ create.sql
│ Dockerfile
│
└───tests
conftest.py
pytest.ini
test_admin.py
test_config.py
test_ping.py
test_users.py
test_users_unit.py
__init__.py
I have added D:\flask-react-auth\ to the 'Shared Folders' in VirtualBox as well.
The answer seems obvious to me:
When you run the code as is:
* It mounts the current working directory to '/usr/src/app'.
* The current working directory does not have a file 'entrypoint.sh'.
* It tries to run '/usr/src/app/entrypoint.sh', but it is not there, so it fails.
When you comment out that volume mount:
* I assume the image already has '/usr/src/app/entrypoint.sh', so it just works.
I think you probably should change the mounting code from
volumes:
  - '.:/usr/src/app'
to
volumes:
  - './services/users:/usr/src/app'
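A quick way to confirm that diagnosis (not part of the original answer) is to list the mounted directory from inside a throwaway container, bypassing the broken entrypoint:
docker-compose run --rm --entrypoint sh users -c "ls -l /usr/src/app"
If entrypoint.sh does not show up in that listing, the bind mount (rather than the image) is what's missing the file.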
I am working on an Express server written in TypeScript. The project has an npm build script in place that takes the files in the src folder and compiles them to the dist folder. Both of these folders live under the root directory. The project works, but while moving everything to Docker, although I am mounting the volume and the built files (the dist directory) are there in the container, the changes are not reflected on the host. (I am using Windows + VirtualBox for Docker.)
I have referred to this question and this one. I see that their scenario is the same, but their solutions don't seem to work for me. I made sure I am using the techniques mentioned in their answers, but they don't work in my case.
Directory Structure of the Project:
├── Backend
│ ├── src/
│ │ ├── controllers/
│ │ ├── models/
│ │ ├── routes/
│ │ ├── services/
│ │ ├── index.ts
│ │ └── server.ts
│ ├── dist/ (Created upon compilation)
│ │ ├── controllers/
│ │ ├── data/ (Created upon starting the server)
│ │ ├── models/
│ │ ├── routes/
│ │ ├── services/
│ │ ├── index.js
│ │ └── server.js
│ ├── Dockerfile
│ ├── package.json
│ ├── tsconfig.json
│ ├── tslint.json
│ └── .env
├── Frontend/ (This part is an independent application)
├── docker-compose.yml
└── README.md
When the server starts, it creates a directory named data in %proj_root%/Backend/dist, which is used to give input to the application via txt files. Compilation works fine, as is evident from the ls commands I have put in the Dockerfile, but the changes made inside the container (creation of the dist directory) aren't reflected on the host. On the host, the dist directory is empty, causing the server to crash because there is no server.js file.
Here is my docker-compose.yml:
version: "3"
services:
backend:
build:
context: ./Backend/
volumes:
- ./Backend/dist:/app/dist
- /app/node_modules
- ./Backend:/app
frontend:
build:
context: ./Frontend
ports:
- "3001:8080"
volumes:
- /app/node_modules
- ./Frontend:/app
Here's the Dockerfile for Backend service:
FROM node:8
WORKDIR /app
COPY ./package.json .
RUN npm install
# Copying everything to enable standalone usage
COPY . .
RUN ls # Logging before tsc build
RUN npm run build
RUN ls /app/dist # Logging after tsc build. All the built files are visible.
CMD ["npm", "run", "start"]
Upon running docker-compose up, a dist folder should be created in the container (/app/dist) and should be reflected on the host as %proj_root%/Backend/dist.
I understand I could create a script which compiles TS and then runs docker-compose, but that looks like a hacky approach to me. Is there a better solution?
The docker-compose.yml setup you show does two independent things. First, it builds a Docker image, using the Dockerfile you give it in isolation. Second, it takes that image, mounts volumes and applies other settings, and runs a container based on those settings. The Docker image build sequence ignores all of these other settings; nothing you do in the Dockerfile can ever change files on the host system.
When you run a container, whatever content is in the volumes: settings you pass in completely replaces what came out of that image. This is always a one-way "push into the container": the contents of your host's Backend directory replace /app in the container; the contents of an anonymous volume replace its node_modules, and the contents of the host's dist directory replace /app/dist.
There is one special-case exception to this. When you start a container, if the volume mount is empty, the contents from the image get copied to the volume. This only happens if there's absolutely nothing in the volume tree at all. If there's already content in the dist host directory, or the node_modules anonymous volume, this replaces whatever was in the image, even if it changed in the image (or changed in the volume, Docker has no way to tell).
As a one-off workaround, if you
rm -rf dist
then the next time you launch the container, Docker will notice that the dist directory is empty and repopulate it from the image.
I'd recommend just deleting these volumes: settings altogether. If you're actively developing the software, do it on the host: Node is very easy to install with typical OS package managers, your IDE won't be confused by your Node interpreter being hidden inside a container, and you won't hit this sort of problem. When you go to deploy, you can just use the Docker image as-is, without separately distributing the code that's also inside the image.
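To illustrate that recommendation (this is a sketch, not configuration taken from the question), the Compose file could drop the bind mounts and anonymous volumes entirely and run exactly what the images contain:
version: "3"
services:
  backend:
    build:
      context: ./Backend/
    # no volumes: the image's own /app/dist and node_modules are used as-is;
    # rebuild with `docker-compose up --build` whenever the TypeScript sources change
  frontend:
    build:
      context: ./Frontend
    ports:
      - "3001:8080"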