My Dockerfile for the ui image is as follows:
FROM node:alpine as prodnode
WORKDIR /app
COPY ./package.json ./
RUN npm i
COPY . .
CMD ["npm", "run", "build"]
and my docker-compose.yml looks like this:
version: "3"
services:
nginx:
depends_on:
- backend
- ui
restart: always
volumes:
- ./nginx/prod.conf:/etc/nginx/conf.d/default.conf
- static:/usr/share/nginx/html
build:
context: ./nginx/
dockerfile: Dockerfile
ports:
- "80:80"
backend:
build:
context: ./backend/
dockerfile: Dockerfile
volumes:
- /app/node_modules
- ./backend:/app
environment:
- PGUSER=postgres
- PGHOST=postgres
- PGDATABASE=postgres
- PGPASSWORD=postgres_password
- PGPORT=5432
ui:
tty: true
stdin_open: true
environment:
- CHOKIDAR_USEPOLLING=true
build:
context: ./ui/
dockerfile: Dockerfile
volumes:
- /app/node_modules
- ./ui:/app
- static:/app/build
postgres:
image: "postgres:latest"
environment:
- POSTGRES_PASSWORD=postgres_password
volumes:
static:
I am trying to build the static content in the ui container and share it with the nginx container through a shared volume. Everything works as expected on the first run. But when I change the contents of ui and build again, the changes are not reflected. I tried the following:
docker-compose down
docker-compose up --build
docker-compose up
None of them replaces the static content with the new build.
Only when I remove the static volume like below
docker volume rm skeleton_static
and then do
docker-compose up --build
does it pick up the new content. How do I automatically replace the static contents on every docker-compose up or docker-compose up --build? Thanks.
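A stopgap that matches what you observed is to fold the volume removal into the teardown; docker-compose down --volumes removes the named volumes declared in the file, so the next up --build repopulates static:
docker-compose down --volumes
docker-compose up --build
But the underlying behaviour is worth understanding, because there is a cleaner fix.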
Named volumes are presumed to hold user data in some format Docker can't understand; Docker never updates their content after they're originally created, and if you mount a volume over image content, the old content in the volume hides updated content in the image. As such, I'd avoid named volumes here.
It looks like in the setup you show, the ui container doesn't actually do anything: its main container process just builds the application and then exits immediately. A multi-stage build is a more appropriate approach here; it lets you compile the application during the image-build phase without declaring a do-nothing container or adding the complexity of named volumes.
# ui/Dockerfile
# First stage: build the application; note this is
# very similar to the existing Dockerfile
FROM node:alpine as prodnode
WORKDIR /app
COPY ./package.json ./
RUN npm i
COPY . .
RUN ["npm", "run", "build"] # not CMD
# Second stage: nginx server serving that application
FROM nginx:latest
COPY --from=prodnode /app/build /usr/share/nginx/html
# use default CMD from the base image
In your docker-compose.yml file, you no longer need separate "build" and "serve" containers; these are now combined together.
version: "3.8"
services:
backend:
build: ./backend
environment:
- PGUSER=postgres
- PGHOST=postgres
- PGDATABASE=postgres
- PGPASSWORD=postgres_password
- PGPORT=5432
depends_on:
- postgres
# no volumes:
ui:
build: ./ui
depends_on:
- backend
ports:
- '80:80'
# no volumes:
postgres:
image: "postgres:latest"
environment:
- POSTGRES_PASSWORD=postgres_password
volumes: # do persist database data
- pgdata:/var/lib/postgresql/data
volumes:
pgdata:
A similar problem applies to the anonymous volume you've used for the backend service's node_modules directory: it will ignore any changes to the package.json file. Since all of the application's code and library dependencies are already included in the image, I've deleted the volumes: blocks that would overwrite them.
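As an aside, if you do keep anonymous volumes like /app/node_modules for development elsewhere, docker-compose (since 1.23, if I recall correctly) can recreate them instead of reusing stale contents:
# recreate anonymous volumes instead of reusing data from previous containers
docker-compose up --build --renew-anon-volumes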
Related
In Docker Compose, we have two services (a Flask backend and a React frontend) running at the same time in different directories. What are best practices for automatically updating the frontend or backend service when a change is made to the respective code?
In our case, we have:
frontend/
  index.html
  docker-compose.yml
  Dockerfile
  src/
    App.js
    index.js
    ..
And our backend is:
backend/
  app.py
  Dockerfile
  docker-compose.yml
This is our docker-compose.yml file:
version: '3.8'
services:
  frontend:
    image: node:alpine
    build:
      context: ../frontend
      dockerfile: ../frontend/Dockerfile
    command: npm start
    depends_on:
      - database # don't start until the database is up
      - app
    ports:
      - 3000:3000
    volumes:
      - .:/frontend
  app:
    image: python:3.9
    build:
      context: .
      dockerfile: ./Dockerfile
    command: app.py
    depends_on:
      - database # don't start until the database is up
    ports:
      - 8080:8080
    environment:
      - PGPASSWORD=magical_password
      - POSTGRESQL_PASSWORD=magical_password
      - POSTGRESQL_HOST=backend-database-1
      - POSTGRESQL_USER_NAME=unicorn_user
      - LOCAL_ENVIRONMENT=True
      - FLASK_ENV=development
      - REPLICATE_API_TOKEN
    volumes:
      - .:/app
  database:
    image: "postgres" # use latest official postgres version
    env_file:
      - database.env # configure postgres
    volumes:
      - database-data:/var/lib/postgresql/data/ # persist data even if container shuts down
      - ./schema.sql:/docker-entrypoint-initdb.d/schema.sql
    ports:
      - "5432:5432"
volumes:
  database-data: # named volumes can be managed more easily using docker-compose
Typically, we reload the app almost instantly on change via the bind mounts in the volumes: sections. This approach correctly updates the backend service when the backend code is changed, but not the frontend service. Also, we have two docker-compose files, one in frontend and one in backend, which we hope to somehow learn how to consolidate.
Edit: These are the logs that work for the backend (app_1 is the backend) but do not work for the frontend:
app_1 | * Detected change in '/app/app.py', reloading
app_1 | environ({'PATH': '/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin', 'HOSTNAME': '***', 'PGPASSWORD': '***', 'POSTGRESQL_PASSWORD': 'magical_password', 'POSTGRESQL_HOST': 'backend-database-1', 'POSTGRESQL_USER_NAME': '***', 'LOCAL_ENVIRONMENT': 'True', 'FLASK_ENV': 'development', 'LANG': 'C.UTF-8', 'GPG_KEY': '***', 'PYTHON_VERSION': '3.9.13', 'PYTHON_PIP_VERSION': '22.0.4', 'PYTHON_SETUPTOOLS_VERSION': '58.1.0', 'PYTHON_GET_PIP_URL': 'https://github.com/pypa/get-pip/raw/6ce3639da143c5d79b44f94b04080abf2531fd6e/public/get-pip.py', 'PYTHON_GET_PIP_SHA256': '***', 'HOST': '0.0.0.0', 'PORT': '8080', 'HOME': '/root', 'KMP_INIT_AT_FORK': 'FALSE', 'KMP_DUPLICATE_LIB_OK': 'True', 'WERKZEUG_SERVER_FD': '3', 'WERKZEUG_RUN_MAIN': 'true'})
app_1 | * Restarting with stat
app_1 | * Tip: There are .env or .flaskenv files present. Do "pip install python-dotenv" to use them.
app_1 | * Debugger is active!
app_1 | * Debugger PIN: 203-417-897
Edit 2: We followed the link suggested in the comments. We attempted setting both WATCHPACK_POLLING and CHOKIDAR_USEPOLLING to "true", but no luck. And we refactored our docker-compose file to live outside the directories, like so:
docker-compose.yml
frontend/
  index.html
  Dockerfile
  src/
    App.js
    index.js
    ..
backend/
  app.py
  Dockerfile
Here is the new docker-compose.yml:
version: '3.8'
services:
  frontend:
    image: node:alpine
    build:
      context: ./frontend
      cache_from:
        - node:alpine
      dockerfile: ./Dockerfile
    command: npm start
    depends_on:
      - database # don't start until the database is up
      - app
    ports:
      - 3000:3000
    environment:
      - CHOKIDAR_USEPOLLING="true"
    volumes:
      - /app/node_modules
      - ./frontend:/app
  app:
    image: python:3.9
    build:
      context: ./backend
      cache_from:
        - python:3.9
      dockerfile: ./Dockerfile
    command: backend/app.py
    depends_on:
      - database # don't start until the database is up
    ports:
      - 8080:8080
    environment:
      - PGPASSWORD=magical_password
      - POSTGRESQL_PASSWORD=magical_password
      - POSTGRESQL_HOST=backend-database-1
      - POSTGRESQL_USER_NAME=unicorn_user
      - LOCAL_ENVIRONMENT=True
      - FLASK_ENV=development
      - REPLICATE_API_TOKEN
    volumes:
      - .:/app
  database:
    image: "postgres" # use latest official postgres version
    env_file:
      - backend/database.env # configure postgres
    volumes:
      - database-data:/var/lib/postgresql/data/ # persist data even if container shuts down
      - ./schema.sql:/backend/docker-entrypoint-initdb.d/schema.sql
    ports:
      - "5432:5432"
volumes:
  database-data: # named volumes can be managed more easily using docker-compose
  app:
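One detail worth double-checking in the file above: with list-style environment entries, quotes become part of the value, so CHOKIDAR_USEPOLLING="true" sets the literal string "true", quotes included, and depending on how the tool compares the value that can prevent polling from activating. The unquoted form avoids the ambiguity:
environment:
  - CHOKIDAR_USEPOLLING=true
  - WATCHPACK_POLLING=true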
And here is our Dockerfile for the frontend:
FROM node:alpine
RUN mkdir -p /frontend
WORKDIR /frontend
# We copy just the package.json first to leverage Docker cache
COPY package.json /frontend
RUN npm install --legacy-peer-deps
COPY . /frontend
# Bind to all network interfaces so that it can be mapped to the host OS
ENV HOST=0.0.0.0 PORT=3000
EXPOSE ${PORT}
CMD ["npm", "start"]
and for the backend:
FROM python:3.9
# We copy just the requirements.txt first to leverage Docker cache
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip3 install -r requirements.txt
COPY . /app
ENTRYPOINT [ "python" ]
# Bind to all network interfaces so that it can be mapped to the host OS
ENV HOST=0.0.0.0 PORT=8080
EXPOSE ${PORT}
# This runs the app in the container
CMD [ "app.py" ]
Still, the backend hot-reloads: every time we make a change, it is detected, picked up, and reflected immediately. But the frontend requires a restart with docker-compose down --volumes && docker-compose build --no-cache && docker-compose up, and the output we get from docker-compose shows no logs. It's like docker-compose can't see the changes.
Edit 3: Any help would be much appreciated!
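One mismatch visible in the files as posted may explain the frontend behaviour: the frontend Dockerfile sets WORKDIR /frontend and copies the code there, but the Compose file bind-mounts the source at /app, so npm start keeps serving the image's baked-in copy and never sees the mounted files. A sketch of the Dockerfile with the paths aligned to the /app mount point (assuming nothing else depends on /frontend):
FROM node:alpine
# match the ./frontend:/app bind mount from docker-compose.yml
WORKDIR /app
COPY package.json ./
RUN npm install --legacy-peer-deps
COPY . .
ENV HOST=0.0.0.0 PORT=3000
EXPOSE ${PORT}
CMD ["npm", "start"]
With WORKDIR and the bind mount pointing at the same path, the anonymous /app/node_modules volume and the ./frontend:/app mount behave the way the backend's /app mount already does.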
From my understanding,
a Dockerfile is like the config/recipe for creating an image, while docker-compose is used to easily create multiple containers that may have relationships with each other, and to avoid creating the containers with repeated docker commands.
There are two files.
Dockerfile
FROM node:lts-alpine
WORKDIR /server
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3030
CMD ["npm", "run", "dev"]
docker-compose.yml
version: '2.1'
services:
  test-db:
    image: mysql:5.7
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD=true
      - MYSQL_USER=admin
      - MYSQL_PASSWORD=12345
      - MYSQL_DATABASE=test-db
    volumes:
      - ./db-data:/var/lib/mysql
    ports:
      - 3306:3306
  test-web:
    environment:
      - NODE_ENV=local
      #- DEBUG=*
      - PORT=3030
    image: node:lts-alpine
    build: ./
    command: >
      npm run dev
    volumes:
      - ./:/server
    ports:
      - "3030:3030"
    depends_on:
      - test-db
Question 1
When I run docker-compose up --build
a. The image will be built based on Dockerfile
b. What happens then?
Question 2
test-db:
  image: mysql:5.7
test-web:
  environment:
    - NODE_ENV=local
    #- DEBUG=*
    - PORT=3030
  image: node:lts-alpine
I am downloading the images from Docker Hub with the above code, but why and when do I need the image created by --build?
Question 3
volumes:
  - ./db-data:/var/lib/mysql
Does this line mean that the data is stored at /var/lib/mysql inside the container, while a copy is kept in my working directory ./db-data?
Update
Question 4
build: ./
What is this line for?
It is recommended to go through the Getting Started guide; most of your questions would be answered there.
Let me try to highlight some of those to you.
The difference between Dockerfile and Compose file
Docker can build images automatically by reading the instructions from a Dockerfile
Compose is a tool for defining and running multi-container Docker applications
The main difference is that a Dockerfile is used to build an image, while Compose is used to build and run an application.
You have to build an image with a Dockerfile, then run it with Compose.
After you run docker-compose up --build, the image is built and cached on your system; Compose then starts the containers defined by docker-compose.yml.
If you specify image:, the image is downloaded from the registry, while if you specify build: ./, it is built locally from that directory.
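When both keys are present, Compose builds the image from the build: context and tags the result with the image: name instead of pulling it. A sketch (my-app:dev is a hypothetical tag):
test-web:
  build: ./          # built from the local Dockerfile...
  image: my-app:dev  # ...and the result is tagged with this name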
Volumes are the preferred mechanism for persisting data generated by and used by Docker containers. Images are read-only, and any edits made inside a container are destroyed when the container is deleted, so you have to use volumes if you want persistent data.
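On Question 3 specifically: nothing is held in memory. ./db-data is a directory on your host that is bind-mounted to /var/lib/mysql inside the container, so both paths refer to the same files. You can confirm where a container's mounts point with a standard inspect command (the container name here is a placeholder):
docker inspect <container-name> --format '{{ json .Mounts }}'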
Remember, the docs are always your friend.
I have two services in my docker-compose:
version: '3.9'
services:
  web:
    build:
      context: .
    ports:
      - 8080:8080
    links:
      - php
    volumes:
      - "html:/usr/share/nginx/html/"
  php:
    env_file:
      - ".env"
    image: php:7-fpm
    volumes:
      - "html:/usr/share/nginx/html/"
volumes:
  html:
and a Dockerfile:
FROM nginx:alpine
COPY nginx.conf /etc/nginx/conf.d/default.conf
COPY public_html/* /usr/share/nginx/html/
but when I run docker-compose up --build it does not update the files in the volume. I have to delete the volume for the files inside public_html to be updated in both services.
The volumes in your docker-compose.yml take precedence over the files you have added in the Dockerfile.
Those containers don't receive the content you are trying to add in your Dockerfile; they take the content from the html volume, which lives on your host machine.
Those are two different techniques: mounting a volume vs. adding files to an image in a Dockerfile.
One solution, without using volumes might be to build both images every time:
PhpDockerfile content:
FROM php:7-fpm
COPY public_html/* /usr/share/nginx/html/
and the docker-compose.yml:
version: '3.9'
services:
  web:
    build:
      context: .
    ports:
      - 8080:8080
    links:
      - php
  php:
    env_file:
      - ".env"
    build:
      context: .
      dockerfile: PhpDockerfile
EDIT:
The second approach uses bind mounts instead of adding the files in the Dockerfile (quicker, since you don't have to build each time; better for a development environment):
version: '3.9'
services:
  web:
    build:
      context: .
    ports:
      - 8080:8080
    links:
      - php
    volumes:
      - "./public_html/:/usr/share/nginx/html/"
  php:
    env_file:
      - ".env"
    image: php:7-fpm
    volumes:
      - "./public_html/:/usr/share/nginx/html/"
and then you can remove the
COPY public_html/* /usr/share/nginx/html/
from your Dockerfile.
Note that you might need to use the full path instead of a relative path in the docker-compose file.
I have a simple Laravel application with Nginx, PHP and MySQL, each in its own container. It works great in my development environment, but for production I need to remove the bind mounts and copy their contents into the image itself instead. But how do I do this?
Do I need a separate docker-compose-prod.yml file? How can I remove volumes for production? How can I copy my source code and configuration to the image when deploying for production?
Here is my docker-compose.yml file
version: '3'
networks:
  laranet:
services:
  nginx:
    image: nginx:stable-alpine
    container_name: nginxcontainer
    ports:
      - "80:80"
    volumes:
      - ./src:/var/www/html
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - php
      - mysql
    networks:
      - laranet
  mysql:
    image: mysql:5.7.22
    container_name: mysqlcontainer
    ports:
      - "3306:3306"
    volumes:
      - ./mysql:/var/lib/mysql
    networks:
      - laranet
  php:
    build:
      context: .
      dockerfile: php/Dockerfile
    container_name: phpcontainer
    volumes:
      - ./src:/var/www/html
    ports:
      - "9000:9000"
    networks:
      - laranet
and here is my php/Dockerfile
FROM php:7.2-fpm-alpine
RUN docker-php-ext-install pdo pdo_mysql
RUN chown -R www-data:www-data /var/www
RUN chmod 755 /var/www
1) copy data only for prod
You can use multistage builds to copy the contents only when you build with the target "prod".
FROM php:7.2-fpm-alpine as base
RUN docker-php-ext-install pdo pdo_mysql
RUN chown -R www-data:www-data /var/www
RUN chmod 755 /var/www
FROM base as dev
VOLUME /var/www/html
FROM base as prod
COPY data /var/www/html
VOLUME /var/www/html
Your docker-compose.yml gets a new line for prod:
php:
  build:
    context: .
    dockerfile: php/Dockerfile
    target: prod
  container_name: phpcontainer
  ports:
    - "9000:9000"
  networks:
    - laranet
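If you'd rather keep one base file and switch behavior per environment, Compose can also layer a second file over the first. A sketch, assuming you create a hypothetical docker-compose.prod.yml containing only the overrides:
version: '3'
services:
  php:
    build:
      target: prod
and then start production with both files:
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up --build -d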
2) No bind mounts in prod?
Would anonymous volumes for dev be a valid solution? E.g., through the definition of VOLUME /var/www/html you specify that the contents of the /var/www/html path should be put into a volume on container start. If no volume is specified in the docker-compose.yml, one will be created for you. Sweet, right?
Sidenote
I don't recommend splitting your behavior between dev and prod.
I recommend using volumes consistently throughout your stages. The only difference in prod could be that you copy the contents into the image before you define the VOLUME, since defining a VOLUME makes the folder unchangeable in the layers that follow.
david-maze pointed out (see comment)
Putting a VOLUME in your Dockerfile mostly only has confusing side effects, and I'd recommend doing it only if you're absolutely clear on what it means. It's definitely not needed for the OP's setup (and in fact has the likely side effect of leaking anonymous volumes on the production system)
Sources
multi-stage build in docker compose?
https://docs.docker.com/engine/reference/builder/#volume
This is my basic NGINX setup that works!
web:
  image: nginx
  volumes:
    - ./nginx:/etc/nginx/conf.d
  ....
I replaced the volume by copying ./nginx to /etc/nginx/conf.d with COPY ./nginx /etc/nginx/conf.d in my container. The issue was that, by using a volume, nginx.conf referred to a log file on my host instead of one in my container. So I thought that hard-copying the config file into the container would solve my problem.
However, nginx is not running at all on docker compose up. What is wrong?
EDIT:
Dockerfile
FROM python:3-onbuild
COPY ./ /app
COPY ./nginx /etc/nginx/conf.d
RUN chmod +x /app/start_celerybeat.sh
RUN chmod +x /app/start_celeryd.sh
RUN chmod +x /app/start_web.sh
RUN pip install -r /app/requirements.txt
RUN python /app/manage.py collectstatic --noinput
RUN /app/automation/rm.sh
docker-compose.yml
version: "3"
services:
nginx:
image: nginx:latest
container_name: nginx_airport
ports:
- "8080:8080"
rabbit:
image: rabbitmq:latest
environment:
- RABBITMQ_DEFAULT_USER=admin
- RABBITMQ_DEFAULT_PASS=asdasdasd
ports:
- "5672:5672"
- "15672:15672"
web:
build:
context: ./
dockerfile: Dockerfile
command: /app/start_web.sh
container_name: django_airport
expose:
- "8080"
links:
- rabbit
celerybeat:
build: ./
command: /app/start_celerybeat.sh
depends_on:
- web
links:
- rabbit
celeryd:
build: ./
command: /app/start_celeryd.sh
depends_on:
- web
links:
- rabbit
This is your initial setup that works:
web:
  image: nginx
  volumes:
    - ./nginx:/etc/nginx/conf.d
Here you have a bind mount that proxies, inside your container, all file-system requests at /etc/nginx/conf.d to your host's ./nginx folder. So there is no copy, just a bind.
This means that if you change a file in your ./nginx folder, your container sees the updated file in real time.
Load the configuration from the host
In your last setup, just add a volume to the nginx service, as shown in the sketch below.
You can also remove the COPY ./nginx /etc/nginx/conf.d line from your web service's Dockerfile: it copies the configuration into the web container, not the nginx one, so it does nothing useful there.
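Concretely, with the docker-compose.yml you posted, the nginx service would become (a sketch reusing your existing values):
nginx:
  image: nginx:latest
  container_name: nginx_airport
  ports:
    - "8080:8080"
  volumes:
    - ./nginx:/etc/nginx/conf.d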
Bundle configuration inside the image
Instead, if you want to bundle your nginx configuration inside an nginx image, you should build a custom nginx image. Create a Dockerfile.nginx file:
FROM nginx
COPY ./nginx /etc/nginx/conf.d
And then change your docker-compose:
version: "3"
services:
nginx:
build:
dockerfile: Dockerfile.nginx
container_name: nginx_airport
ports:
- "8080:8080"
# ...
Now your nginx container will have the configuration inside it and you don't need to use a volume.
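To verify that the bundled configuration is what nginx will actually load, you can run nginx's own config test inside the freshly built image (a sketch using standard Compose and nginx commands):
docker-compose build nginx
docker-compose run --rm nginx nginx -t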