How do docker-compose.yml and Dockerfile work together? - docker

From my understanding,
a Dockerfile is like the config/recipe for creating an image, while docker-compose is used to easily create multiple containers that may be related to each other, without having to run docker commands for each container repeatedly.
There are two files.
Dockerfile
FROM node:lts-alpine
WORKDIR /server
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3030
CMD ["npm", "run", "dev"]
docker-compose.yml
version: '2.1'
services:
  test-db:
    image: mysql:5.7
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD=true
      - MYSQL_USER=admin
      - MYSQL_PASSWORD=12345
      - MYSQL_DATABASE=test-db
    volumes:
      - ./db-data:/var/lib/mysql
    ports:
      - 3306:3306
  test-web:
    environment:
      - NODE_ENV=local
      #- DEBUG=*
      - PORT=3030
    image: node:lts-alpine
    build: ./
    command: >
      npm run dev
    volumes:
      - ./:/server
    ports:
      - "3030:3030"
    depends_on:
      - test-db
Question 1
When I run docker-compose up --build
a. The image will be built based on the Dockerfile
b. What happens next?
Question 2
test-db:
  image: mysql:5.7
test-web:
  environment:
    - NODE_ENV=local
    #- DEBUG=*
    - PORT=3030
  image: node:lts-alpine
With the code above I am downloading the images from Docker Hub, but why and when do I need the image created by --build?
Question 3
volumes:
  - ./db-data:/var/lib/mysql
Does this line mean that the data is stored inside the container at /var/lib/mysql, while a copy of that data is kept in my working directory at ./db-data?
Update
Question 4
build: ./
What is this line for?

It is recommended to go through the Getting Started guide; it would answer most of your questions.
Let me try to highlight some of the key points for you.
The difference between Dockerfile and Compose file
Docker can build images automatically by reading the instructions from a Dockerfile
Compose is a tool for defining and running multi-container Docker applications
The main difference is that a Dockerfile is used to build an image, while Compose is used to build and run an application.
You build an image with a Dockerfile, then run it with Compose.
After you run docker-compose up --build, the image is built and cached on your system; Compose then starts the containers defined in docker-compose.yml.
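In other words, that one command is roughly shorthand for two steps (a sketch of the equivalent CLI, assuming the compose file shown in the question):

```shell
# Step 1: build (or rebuild) images for every service that has a `build:` key
docker-compose build

# Step 2: create and start the containers defined in docker-compose.yml
docker-compose up
```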
If you only specify image, the image is downloaded from a registry; if you specify build: ./, it is built locally from the Dockerfile in that directory.
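When both keys are present, as in the test-web service above, Compose builds the image from the build: context and tags the result with the name given in image:, instead of pulling it. A minimal sketch (the my-test-web tag is made up for illustration):

```yaml
test-web:
  build: ./                  # build from the local Dockerfile...
  image: my-test-web:latest  # ...and tag the built image with this name
```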
Volumes are the preferred mechanism for persisting data generated by and used by Docker containers. Images are read-only, and any changes made inside a container are destroyed when the container is deleted, so you have to use volumes if you want to persist data.
Remember, the documentation is always your friend.

Related

Force update shared volume in docker compose

My Dockerfile for the ui image is as follows:
FROM node:alpine as prodnode
WORKDIR /app
COPY ./package.json ./
RUN npm i
COPY . .
CMD ["npm", "run", "build"]
and my docker-compose.yml looks like this:
version: "3"
services:
  nginx:
    depends_on:
      - backend
      - ui
    restart: always
    volumes:
      - ./nginx/prod.conf:/etc/nginx/conf.d/default.conf
      - static:/usr/share/nginx/html
    build:
      context: ./nginx/
      dockerfile: Dockerfile
    ports:
      - "80:80"
  backend:
    build:
      context: ./backend/
      dockerfile: Dockerfile
    volumes:
      - /app/node_modules
      - ./backend:/app
    environment:
      - PGUSER=postgres
      - PGHOST=postgres
      - PGDATABASE=postgres
      - PGPASSWORD=postgres_password
      - PGPORT=5432
  ui:
    tty: true
    stdin_open: true
    environment:
      - CHOKIDAR_USEPOLLING=true
    build:
      context: ./ui/
      dockerfile: Dockerfile
    volumes:
      - /app/node_modules
      - ./ui:/app
      - static:/app/build
  postgres:
    image: "postgres:latest"
    environment:
      - POSTGRES_PASSWORD=postgres_password
volumes:
  static:
I am trying to build static content and copy it from the ui container to the nginx container using a shared volume. Everything works as expected at first, but when I change the contents of ui and build again, the changes are not reflected. I tried the following:
docker-compose down
docker-compose up --build
docker-compose up
None of them replaces the static content with the new build.
Only when I remove the static volume, like below:
docker volume rm skeleton_static
and then run:
docker-compose up --build
does the content change. How do I automatically replace the static content on every docker-compose up or docker-compose up --build? Thanks.
Named volumes are presumed to hold user data in some format Docker can't understand; Docker never updates their content after they're originally created, and if you mount a volume over image content, the old content in the volume hides updated content in the image. As such, I'd avoid named volumes here.
It looks like in the setup you show, the ui container doesn't actually do anything: its main container process just builds the application and then exits immediately. A multi-stage build is a more appropriate approach here; it will let you compile the application during the image build phase without declaring a do-nothing container or adding the complexity of named volumes.
# ui/Dockerfile
# First stage: build the application; note this is
# very similar to the existing Dockerfile
FROM node:alpine as prodnode
WORKDIR /app
COPY ./package.json ./
RUN npm i
COPY . .
RUN ["npm", "run", "build"] # not CMD
# Second stage: nginx server serving that application
FROM nginx:latest
COPY --from=prodnode /app/build /usr/share/nginx/html
# use default CMD from the base image
In your docker-compose.yml file, you no longer need separate "build" and "serve" containers; these are now combined.
version: "3.8"
services:
  backend:
    build: ./backend
    environment:
      - PGUSER=postgres
      - PGHOST=postgres
      - PGDATABASE=postgres
      - PGPASSWORD=postgres_password
      - PGPORT=5432
    depends_on:
      - postgres
    # no volumes:
  ui:
    build: ./ui
    depends_on:
      - backend
    ports:
      - '80:80'
    # no volumes:
  postgres:
    image: "postgres:latest"
    environment:
      - POSTGRES_PASSWORD=postgres_password
    volumes: # do persist database data
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
A similar problem applies to the anonymous volume you've used for the backend service's node_modules directory, which will ignore any changes to the package.json file. Since all of the application's code and library dependencies are already included in the image, I've deleted the volumes: block that would overwrite them.

Why is docker-compose running the same command and using the wrong Dockerfile?

I've got a simple Node / React project. I'm trying to use Docker to create two containers, one for the server, and one for the client, each with their own Dockerfile in the appropriate directory.
docker-compose.yml
version: '3.9'
services:
client:
image: node:14.15-buster
build:
context: ./src
dockerfile: Dockerfile.client
ports:
- '3000:3000'
- '45799:45799'
volumes:
- .:/app
tty: true
server:
image: node:14.15-buster
build:
context: ./server
dockerfile: Dockerfile.server
ports:
- '3001:3001'
volumes:
- .:/app
depends_on:
- redis
links:
- redis
tty: true
redis:
container_name: redis
image: redis
ports:
- '6379'
src/Dockerfile.client
FROM node:14.15-buster
# also the directory you land in on ssh
WORKDIR /app
CMD cd /app && \
yarn && \
yarn start:client
server/Dockerfile.server
FROM node:14.15-buster
# also the directory you land in on ssh
WORKDIR /app
CMD cd /app && \
yarn && \
yarn start:server
After building and starting the containers, both containers run the same command, seemingly at random. Either both run yarn start:server or yarn start:client. The logs clearly detail duplicate startup commands and ports being used. Requests to either port 3000 (client) or 3001 (server) confirm that the same one is being used in both containers. If I change the command in both Dockerfiles to echo the respective filename (Dockerfile.server! or Dockerfile.client!), startup reveals only one Dockerfile being used for both containers. I am also running the latest version of Docker on Mac.
What is causing docker-compose to use the same Dockerfile for both containers?
After a lengthy and painful bout of troubleshooting, I narrowed the issue down to duplicate image references. image: node:14.15-buster for each service in docker-compose.yml and FROM node:14.15-buster in each Dockerfile.
Why this would cause this behavior is unclear, but after removing the image references in docker-compose.yml and rebuilding / restarting, everything works as expected.
When you run docker-compose build with both image and build properties set on a service, it will build an image according to the build property and then tag the image according to the image property.
In your case, you have two services building different images and tagging them with the same tag node:14.15-buster. One will overwrite the other.
This probably has the additional unintended consequence of causing your next image to be built on top of the previously built image instead of the true node:14.15-buster.
Then when you start the service, both containers will use the image tagged node:14.15-buster.
From the docs:
If you specify image as well as build, then Compose names the built image with the webapp and optional tag specified in image
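So besides dropping the image: keys entirely, another fix consistent with the docs quoted above is to give each service its own unique tag so the builds don't overwrite each other (the my-client/my-server names are made up for illustration):

```yaml
services:
  client:
    image: my-client:latest   # unique tag for the client build
    build:
      context: ./src
      dockerfile: Dockerfile.client
  server:
    image: my-server:latest   # unique tag for the server build
    build:
      context: ./server
      dockerfile: Dockerfile.server
```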

How to configure docker-compose.yml to invalidate Docker cache based on a certain file checksum

I need to configure docker-compose.yml in a way that will invalidate the local image's docker cache, based on a certain file's checksum.
If that's not possible, I'd like to be able to somehow version the docker-compose.yml or Dockerfile, so that it would rebuild the Docker image of a specific service. I'd want to avoid pushing images to DockerHub, unless it's absolutely the only solution.
At all costs, I want to avoid bash scripts and, in general, writing imperative logic. I'm also not interested in CLI solutions, like passing additional flags to the docker-compose up command.
Context:
We use docker-compose during the development of our application.
Our app also has a Dockerfile for building it locally. We don't push Docker images to DockerHub; we just have the Dockerfile locally, and in docker-compose.yml we declare the source code and package.json (the file Node.js applications use to declare dependencies) as volumes. Now sometimes we modify package.json, and docker-compose up throws an error, because the image is already built locally and the previous build doesn't contain the new dependencies. I'd want to be able to tell docker-compose.yml to automatically build a new image if there have been any changes to the package.json file, since we pull dependencies during the build stage.
docker-compose.yml
version: "3.8"
services:
  web:
    build:
      context: .
    ports:
      - "8000:8000"
    command: npx nodemon -L app.js
    volumes:
      - ./app:/usr/src/app
      - /usr/src/app/node_modules
    env_file:
      - .env
    depends_on:
      - mongo
  mongo:
    image: mongo:latest
    container_name: mongo_db
    volumes:
      - ./config/init.sh:/docker-entrypoint-initdb.d/init.sh
      - ./config/mongod.conf:/etc/mongod.conf
      - ./logs:/var/log/mongodb/
      - ./db:/data/db
    env_file:
      - .env
    ports:
      - "27017:27017"
    restart: on-failure:5
    command: ["mongod", "-f", "/etc/mongod.conf"]
volumes:
  db-data:
  mongo-config:
Dockerfile:
FROM node:14.15.1
RUN mkdir -p /usr/src/app
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app
RUN npm install
EXPOSE 8000
CMD ["node", "/app/app.js"]

creating a redis docker container with an existing rdb and loading a module at startup?

I am trying to start a docker container using a Redis db of which I have a persistent copy saved on a local machine.
I currently have a docker container loading Redis with a volume using the docker-compose.yml below, but it misses my redis.conf (which contains the loadmodule command), located in the volume with the rdb file.
version: '3'
services:
  redis:
    image: redis
    container_name: "redis"
    ports:
      - "6379:6379"
    volumes:
      - E:\redis_backup_conf:/data
This begins to load the RDB but crashes out because the data uses the time series module.
I can load a separate docker container with a fresh Redis db that has the time series module loaded, using the following Dockerfile. My issue is I can't figure out how to do both at the same time!
Is there some way of calling a Dockerfile from a docker-compose.yml, or declaring the volume in the Dockerfile?
That, or should I be creating my own image that I can call in the docker-compose.yml?
Any help would be appreciated; I'm honestly just going round in circles I think.
dockerfile
# BUILD redisfab/redistimeseries:${VERSION}-${ARCH}-${OSNICK}
ARG REDIS_VER=6.0.1
# stretch|bionic|buster
ARG OSNICK=buster
# ARCH=x64|arm64v8|arm32v7
ARG ARCH=x64
#----------------------------------------------------------------------------------------------
FROM redisfab/redis:${REDIS_VER}-${ARCH}-${OSNICK} AS builder
ARG REDIS_VER
ADD ./ /build
WORKDIR /build
RUN ./deps/readies/bin/getpy2
RUN ./system-setup.py
RUN make fetch
RUN make build
#----------------------------------------------------------------------------------------------
FROM redisfab/redis:${REDIS_VER}-${ARCH}-${OSNICK}
ARG REDIS_VER
ENV LIBDIR /usr/lib/redis/modules
WORKDIR /data
RUN mkdir -p "$LIBDIR"
COPY --from=builder /build/bin/redistimeseries.so "$LIBDIR"
EXPOSE 6379
CMD ["redis-server", "--loadmodule", "/usr/lib/redis/modules/redistimeseries.so"]
EDIT:
OK, slight improvement: I can call a redis-timeseries image in the docker-compose.yml.
services:
  redis:
    image: redislabs/redistimeseries
    container_name: "redis"
    ports:
      - "6379:6379"
    volumes:
      - E:\redis_backup_conf:/data
This is a start; however, I still need to increase the maximum number of databases. I have been using redis.conf to do this in the past.
You can just have docker-compose build your Dockerfile directly. Assume your docker-compose file is in a folder called myproject. Also assume your Dockerfile is in a folder called myredis, and that myredis is in the myproject folder. Then you can replace this line in your docker-compose file:
image: redis
With:
build: ./myredis
That will build and use your custom image
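Applied to the compose file from the question, that would look roughly like this (a sketch, assuming the folder layout described above):

```yaml
services:
  redis:
    build: ./myredis   # build the custom time-series image from ./myredis/Dockerfile
    container_name: "redis"
    ports:
      - "6379:6379"
    volumes:
      - E:\redis_backup_conf:/data
```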

Docker container not updating on code change

I have a Dockerfile to build my Node container; it looks as follows:
FROM node:12.14.0
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 4500
CMD ["npm", "start"]
Based on this Dockerfile, I am using docker-compose to run this container and link it to a mongo container such that it refers to mongo-service. The docker-compose.yml looks as follows:
version: '3'
services:
  backend:
    container_name: docker-node-mongo-container
    restart: always
    build: .
    ports:
      - '4700:4500'
    links:
      - mongo-service
  mongo-service:
    container_name: mongo-container
    image: mongo
    ports:
      - "27017:27017"
Expected behavior: every time I make a change to the project on my local computer, I want docker-compose to restart so that the new changes are reflected.
Current behavior: to make the new changes reflect in docker-compose, I have to run docker-compose down and then delete the images. I am guessing that it has to rebuild the images. How do I make it so that whenever I make a change, a new image is built?
I understand that I need to use volumes. I am just failing to understand how. Could somebody please help me here?
When you make a change, you need to run docker-compose up --build. That will rebuild your image and restart containers as needed.
Docker has no facility to detect code changes, and it is not intended as a live-reloading environment. Volumes are not intended to hold code, and there are a couple of problems people run into attempting it (Docker file sync can be slow or inconsistent; putting a node_modules tree into an anonymous volume actively ignores changes to package.json; it ports especially badly to clustered environments like Kubernetes). You can use a host Node pointed at your Docker MongoDB for day-to-day development, and still use this Docker-based setup for deployment.
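A day-to-day loop under that host-based approach might look like this (a sketch; the MONGO_URL variable name is an assumption about how the app reads its connection string):

```shell
# Start only the database in Docker, publishing its port to the host
docker-compose up -d mongo-service

# Run the app directly on the host, pointing it at the containerized MongoDB
MONGO_URL=mongodb://localhost:27017 npm start
```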
In order for you to 'restart' your docker application, you need to use docker volumes.
Add into your docker-compose.yml file something like:
version: '3'
services:
  backend:
    container_name: docker-node-mongo-container
    restart: always
    build: .
    ports:
      - '4700:4500'
    links:
      - mongo-service
    volumes:
      - .:/usr/src/app
  mongo-service:
    container_name: mongo-container
    image: mongo
    ports:
      - "27017:27017"
The volumes tag simply says: "Hey, map the current folder outside the container (the dot) to the working directory inside the container."
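One caveat worth noting: because the bind mount covers /usr/src/app, it also hides the node_modules directory installed during the image build. A common workaround is to add an anonymous volume for that path, though (as mentioned earlier in this thread) that pattern ignores later changes to package.json:

```yaml
volumes:
  - .:/usr/src/app             # live code from the host
  - /usr/src/app/node_modules  # anonymous volume so the host mount doesn't hide installed deps
```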