docker-compose and using local image with mounted volume

I have an image that I build with this Dockerfile:
FROM mhart/alpine-node:latest
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY src /app
Now in docker-compose.yml I build this image
version: '3.7'
services:
  enginetonic:
    build:
      context: .
    image: enginetonic:compose
  mongodb:
    image: mongo:latest
    container_name: 'mongodb'
    ports:
      - 27017:27017
    restart: always
  monitor-service:
    image: enginetonic:compose
    container_name: monitorService
    command: nodemon monitor/monitor.js
    restart: on-failure
  # common services
  access-token-service:
    image: enginetonic:compose
    container_name: accessTokenService
    command: nodemon service/access-token-service/access-token-service.js
    restart: on-failure
    depends_on:
      - mongodb
In all the documentation I found about bind mounts and volumes, they are used with other docker commands, for example:
$ docker service create \
  --mount 'type=volume,src=<VOLUME-NAME>,dst=<CONTAINER-PATH>,volume-driver=local,volume-opt=type=nfs,volume-opt=device=<nfs-server>:<nfs-path>,"volume-opt=o=addr=<nfs-address>,vers=4,soft,timeo=180,bg,tcp,rw"' \
  --name myservice \
  <IMAGE>
How do I use volumes so that they cover the whole src/ directory, and every service I start with nodemon reflects changes made anywhere in the source code?

I would do a volume map in docker-compose.yml like this:
volumes:
  - ./app/monitor:/path/to/your/workdir/monitor
And adjust the command to use a file monitor like nodemon, which restarts the service whenever a file changes:
command: ["nodemon", "/path/to/your/workdir/monitor/monitor.js"]
You may need to adjust the nodemon arguments or config based on what you need.
PS. you do not need to tag/push your image. Simply build it directly in docker-compose#build
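Putting that together for the compose file above, a minimal sketch (assuming the host source lives in ./src and the image's WORKDIR is /app, as in the Dockerfile) could look like:
version: '3.7'
services:
  monitor-service:
    image: enginetonic:compose
    container_name: monitorService
    command: nodemon monitor/monitor.js
    restart: on-failure
    volumes:
      # bind-mount the whole source tree over the code copied into the image,
      # so nodemon sees every edit made on the host
      - ./src:/app
Repeating the same volumes: entry on each service means every nodemon process watches the same host directory, so a single edit restarts all affected services.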

Related

How to create the directory in a Dockerfile

I'm struggling to create a directory in my Dockerfile below. After building the image and entering the container, I can't find the directory "models". The "ds" directory in the path "/usr/src/app/ds/models" is an application directory that was copied in. Could you please tell me what is wrong here?
FROM python:3.8
ENV PYTHONUNBUFFERED=1
ENV DISPLAY :0
WORKDIR /usr/src/app
COPY . .
RUN mkdir -p /usr/src/app/ds/models
My docker-compose.yaml file contains a volume:
version: '3.8'
services:
  app:
    build: .
    command:
      - /bin/bash
      - -c
      - python manage.py runserver 0.0.0.0:8000
    restart: always
    volumes:
      - .:/usr/src/app
    ports:
      - '8000:8000'
When your docker-compose.yml file says
volumes:
  - .:/usr/src/app
that host directory completely replaces the /usr/src/app directory from your image. This means almost nothing in your Dockerfile has any effect, and if you deploy this setup to another system, the code built into the image has never actually been run.
I'd recommend deleting this block, and also the command: override (make it the default CMD in the Dockerfile instead).
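For example, the Dockerfile could end with the runserver invocation that the command: block currently supplies (a sketch based on the Dockerfile above):
FROM python:3.8
ENV PYTHONUNBUFFERED=1
WORKDIR /usr/src/app
COPY . .
RUN mkdir -p /usr/src/app/ds/models
# default command, replacing the command: override in docker-compose.yml
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]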
I need to download models to this directory
Mount only the specific directory you need into your container; don't overwrite the entire application tree. Potentially consider keeping that data directory in a different part of the filesystem.
version: '3.8'
services:
  app:
    build: .
    # no command: override
    restart: always
    volumes:
      # only the models subdirectory, not the entire application
      - ./ds/models:/usr/src/app/ds/models
    ports:
      - '8000:8000'
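If you take the second suggestion and keep the model data outside the code tree entirely, a sketch with a named volume and a hypothetical /data/models path (the application would have to be pointed at it) might look like:
version: '3.8'
services:
  app:
    build: .
    restart: always
    volumes:
      # hypothetical location outside the application tree
      - models:/data/models
    ports:
      - '8000:8000'
volumes:
  models: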

How to configure docker-compose.yml to invalidate Docker cache based on a certain file checksum

I need to configure docker-compose.yml in a way that will invalidate the local image's docker cache, based on a certain file's checksum.
If that's not possible, I'd like to be able to somehow version the docker-compose.yml or Dockerfile so that the Docker image of a specific service gets rebuilt. I want to avoid pushing images to DockerHub, unless it's absolutely the only solution.
At all costs, I want to avoid bash scripts and, in general, writing imperative logic. I'm also not interested in CLI solutions, like passing additional flags to the docker-compose up command.
Context:
We use docker-compose during the development of our application.
Our app also has a Dockerfile for building it locally. We don't push Docker images to DockerHub; we just have the Dockerfile locally, and in docker-compose.yml we declare the source code and package.json (the file Node.js applications use to declare dependencies) as volumes. Now, sometimes we modify package.json, and docker-compose up throws an error because the image is already built locally and the previous build doesn't contain the new dependencies. I'd like to be able to tell docker-compose.yml to automatically build a new image if there have been any changes to the package.json file, since we pull dependencies during the build stage.
docker-compose.yml
version: "3.8"
services:
web:
build:
context: .
ports:
- "8000:8000"
command: npx nodemon -L app.js
volumes:
- ./app:/usr/src/app
- /usr/src/app/node_modules
env_file:
- .env
depends_on:
- mongo
mongo:
image: mongo:latest
container_name: mongo_db
volumes:
- ./config/init.sh:/docker-entrypoint-initdb.d/init.sh
- ./config/mongod.conf:/etc/mongod.conf
- ./logs:/var/log/mongodb/
- ./db:/data/db
env_file:
- .env
ports:
- "27017:27017"
restart: on-failure:5
command: ["mongod", "-f", "/etc/mongod.conf"]
volumes:
db-data:
mongo-config:
Dockerfile:
FROM node:14.15.1
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app
RUN npm install
EXPOSE 8000
# app.js is provided by the ./app bind mount at runtime
CMD ["node", "app.js"]

How to apply the same volumes to multiple docker compose services?

Let's suppose there are two services and they have several volumes defined. But most of those volumes are used on both services:
version: '3'
services:
  service1:
    image: node:lts-alpine
    working_dir: /
    volumes:
      - ./package.json:/package.json
      - ./tsconfig.json:/tsconfig.json
      - ./packages:/packages
      - ./node_modules:/node_modules
      - ./services/service1:/services/service1
    command: yarn service1:start
  service2:
    image: node:lts-alpine
    working_dir: /
    volumes:
      - ./package.json:/package.json
      - ./tsconfig.json:/tsconfig.json
      - ./packages:/packages
      - ./node_modules:/node_modules
      - ./services/service2:/services/service2
    command: yarn service2:start
Is there a way to prevent this duplication?
I would love to do something like this:
version: '3'
services:
  service1:
    image: node:lts-alpine
    working_dir: /
    volumes:
      - myVolumeList
      - ./services/service1:/services/service1
    command: yarn start
  service2:
    image: node:lts-alpine
    working_dir: /
    volumes:
      - myVolumeList
      - ./services/service2:/services/service2
    command: yarn start
myVolumeList:
  - ./package.json:/package.json
  - ./tsconfig.json:/tsconfig.json
  - ./packages:/packages
  - ./node_modules:/node_modules
Edit: I use docker compose for local development only. Volumes are great for me because changing source code files will automatically restart my services. Thus copying files once isn't enough
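One partial way to get close to that wished-for syntax (my suggestion, not from the answers below; it relies on plain YAML anchors, and Compose file format 3.4+ allows top-level x- extension fields) is to define the shared list once and reference it per service. The caveat is that YAML cannot append extra items to an anchored sequence, so the per-service mount has to be folded into the shared list, for example by mounting the whole ./services tree:
version: '3.4'
x-common-volumes: &common-volumes
  - ./package.json:/package.json
  - ./tsconfig.json:/tsconfig.json
  - ./packages:/packages
  - ./node_modules:/node_modules
  # mounts the whole tree, replacing the per-service entries
  - ./services:/services
services:
  service1:
    image: node:lts-alpine
    working_dir: /
    volumes: *common-volumes
    command: yarn service1:start
  service2:
    image: node:lts-alpine
    working_dir: /
    volumes: *common-volumes
    command: yarn service2:start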
The code for your application should generally be in a Docker image. You can launch multiple containers from the same image, possibly with different command:. For example, you might write a Dockerfile like:
FROM node:lts-alpine
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install
COPY ./ ./
CMD yarn start
Having described this image, you can reference it in the docker-compose.yml, overriding the command: for each service:
version: '3'
services:
  service1:
    build: .
    command: 'yarn service1:start'
  service2:
    build: .
    command: 'yarn service2:start'
(Compose will probably try to build a separate image for each service, but because of Docker layer caching, "building" the service2 image will run very quickly and wind up with a second tag on the same image.)
This setup needs no bind-mounts at all, and if you push the built images to a Docker registry, you can run them on a system without the application code or even Node available.
Natively, you can do the following; maybe this solves your problem:
version: "3"
services:
srv1:
image: someimage
volumes:
- data:/data
srv2:
image: someimage
volumes:
- data:/data
volumes:
data:
There's a plugin - https://github.com/MatchbookLab/local-persist (read it before use!) - that lets you change the volume mountpoint.
Basically, install it: curl -fsSL https://raw.githubusercontent.com/MatchbookLab/local-persist/master/scripts/install.sh | sudo bash
Then create a volume:
docker volume create -d local-persist -o mountpoint=/data/images --name=images
Then use as many containers as you want:
docker run -d -v images:/path/to/images/on/one/ one
docker run -d -v images:/path/to/images/on/two/ two
If you want to use docker-compose, here's an example:
version: '3'
services:
  one:
    image: alpine
    working_dir: /one/
    command: sleep 600
    volumes:
      - data:/one/
  two:
    image: alpine
    working_dir: /two/
    command: sleep 600
    volumes:
      - data:/two/
volumes:
  data:
    driver: local-persist
    driver_opts:
      mountpoint: /data/local-persist/data
Almost the same question here: docker volume custom mount point
This only works with docker-compose file version '2':
version: '2'
services:
  srv1:
    image: sometag
    volumes_from:
      - data
  srv2:
    image: sometag
    volumes_from:
      - data
  data:
    image: sometag
    volumes:
      - ./code-in-host:/code

Files inside Docker container not updating when I edit in host

I am using Docker which is running fine.
I can start a Docker image using docker-compose.
docker-compose rm nodejs; docker-compose rm db; docker-compose up --build
I attached a shell to the Docker container using
docker exec -it nodejs_nodejs_1 bash
I can view files inside the container
(inside container)
cat server.js
Now when I edit the server.js file on the host, I would like the file inside the container to change without having to restart Docker.
I have tried to add volumes to the docker-compose.yml file or to the Dockerfile, but somehow I cannot get it to work.
(Dockerfile, not working)
FROM node:10
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
VOLUME ["/usr/src/app"]
EXPOSE 8080
CMD [ "npm", "run", "watch" ]
or
(docker-compose.yml, not working)
version: "3.3"
services:
nodejs:
build: ./nodejs-server
ports:
- "8001:8080"
links:
- db:db
env_file:
- ./.env-example
volumes:
- src: /usr/src/app
db:
build: ./mysql-server
volumes:
- ./mysql-server/data:/docker-entrypoint-initdb.d #A folder /mysql-server/data with a .sql file needs to exist
env_file:
- ./.env-example
volumes:
src:
There is probably a simple guide somewhere, but I haven't found it yet.
If you want a copy of the files to be visible in the container, use a bind mount volume (aka host volume) instead of a named volume.
Assuming your docker-compose.yml file is in the root directory of the location that you want in /usr/src/app, then you can change your docker-compose.yml as follows:
version: "3.3"
services:
nodejs:
build: ./nodejs-server
ports:
- "8001:8080"
links:
- db:db
env_file:
- ./.env-example
volumes:
- .:/usr/src/app
db:
build: ./mysql-server
volumes:
- ./mysql-server/data:/docker-entrypoint-initdb.d #A folder /mysql-server/data with a .sql file needs to exist
env_file:
- ./.env-example
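With the bind mount in place, an edit to server.js on the host should be visible inside the running container immediately, without rebuilding (reusing the exec command from the question; the container name is whatever docker-compose assigned):
docker exec -it nodejs_nodejs_1 bash
(inside container)
cat server.js
The file shows the edit straight away, and npm run watch can pick it up without restarting Docker.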

Docker + NGINX, how can I copy the configuration file from host to container?

This is my basic NGINX setup that works!
web:
  image: nginx
  volumes:
    - ./nginx:/etc/nginx/conf.d
  ....
I replaced the volumes entry by copying ./nginx to /etc/nginx/conf.d with COPY ./nginx /etc/nginx/conf.d in my container. The issue was that, by using a volume, nginx.conf referred to log files on my host instead of inside my container. So I thought hard-copying the config file into the container would solve my problem.
However, NGINX is not running at all after docker-compose up. What is wrong?
EDIT:
Dockerfile
FROM python:3-onbuild
COPY ./ /app
COPY ./nginx /etc/nginx/conf.d
RUN chmod +x /app/start_celerybeat.sh
RUN chmod +x /app/start_celeryd.sh
RUN chmod +x /app/start_web.sh
RUN pip install -r /app/requirements.txt
RUN python /app/manage.py collectstatic --noinput
RUN /app/automation/rm.sh
docker-compose.yml
version: "3"
services:
nginx:
image: nginx:latest
container_name: nginx_airport
ports:
- "8080:8080"
rabbit:
image: rabbitmq:latest
environment:
- RABBITMQ_DEFAULT_USER=admin
- RABBITMQ_DEFAULT_PASS=asdasdasd
ports:
- "5672:5672"
- "15672:15672"
web:
build:
context: ./
dockerfile: Dockerfile
command: /app/start_web.sh
container_name: django_airport
expose:
- "8080"
links:
- rabbit
celerybeat:
build: ./
command: /app/start_celerybeat.sh
depends_on:
- web
links:
- rabbit
celeryd:
build: ./
command: /app/start_celeryd.sh
depends_on:
- web
links:
- rabbit
This is your initial setup that works:
web:
  image: nginx
  volumes:
    - ./nginx:/etc/nginx/conf.d
Here you have a bind mount that forwards, inside your container, all filesystem requests at /etc/nginx/conf.d to the host's ./nginx directory. So there is no copy, just a bind.
This means that if you change a file in your ./nginx folder, your container will see the updated file in real time.
Load the configuration from the host
In your last setup, just add a volume to the nginx service, as shown below.
You can also remove the COPY ./nginx /etc/nginx/conf.d line from your web service's Dockerfile, because it is useless there.
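With the volume added, the nginx service would look something like this (a sketch assembled from the compose file above):
nginx:
  image: nginx:latest
  container_name: nginx_airport
  ports:
    - "8080:8080"
  volumes:
    # bind the host config directory into the official image
    - ./nginx:/etc/nginx/conf.d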
Bundle configuration inside the image
Instead, if you want to bundle your nginx configuration inside a nginx image you should build a custom nginx image. Create a Dockerfile.nginx file:
FROM nginx
COPY ./nginx /etc/nginx/conf.d
And then change your docker-compose.yml:
version: "3"
services:
  nginx:
    build:
      context: .
      dockerfile: Dockerfile.nginx
    container_name: nginx_airport
    ports:
      - "8080:8080"
  # ...
Now your nginx container will have the configuration inside it and you don't need to use a volume.
