Docker compose behaviour affected by directory structure and user

Here I have a network of Docker containers:
docker-compose.yml:
version: "2"
services:
zookeeper:
image: zookeeper
container_name: zookeeper
environment:
ZOOKEEPER_CLIENT_PORT: 2181
ZOOKEEPER_TICK_TIME: 2000
broker:
image: confluentinc/cp-kafka:latest
container_name: broker
ports:
- '9092:9092'
depends_on:
- zookeeper
environment:
KAFKA_BROKER_ID: 1
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,HOST:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092
KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 100
KAFKA_HEAP_OPTS: "-Xmx512M -Xms256M"
kafkacat:
build: kafkacat
container_name: kafkacat
depends_on:
- broker
entrypoint:
- /bin/bash
- -c
- /scripts/get_data.sh
And the following directory structure
├── README.md
├── docker-compose.yml
├── kafka
│   ├── kafkacat
├── kafkacat
│   ├── Dockerfile
│   ├── get_data.sh
│   ├── print_data.sh
│   └── wait_for_it.sh
And kafkacat/Dockerfile:
FROM edenhill/kafkacat:1.6.0
COPY *.sh scripts/
WORKDIR scripts
RUN chmod +x .
RUN apk add --no-cache bash
RUN apk add jq;
RUN apk add curl;
When spinning this up with sudo docker-compose up kafkacat, the kafkacat container returns a Connection refused error:
kafkacat | %3|1667332626.747|FAIL|rdkafka#producer-1| [thrd:broker:29092/bootstrap]: broker:29092/bootstrap: Connect to ipv4#172.18.0.3:29092 failed: Connection refused (after 1ms in state CONNECT)
kafkacat | % ERROR: Local: Broker transport failure: broker:29092/bootstrap: Connect to ipv4#172.18.0.3:29092 failed: Connection refused (after 1ms in state CONNECT)
kafkacat | % ERROR: Local: All broker connections are down: 1/1 brokers are down : terminating
This error does not occur with docker-compose up kafkacat as non-superuser.
When I deleted the empty kafka directory and its contents, i.e.:
├── README.md
├── docker-compose.yml
├── kafkacat
│   ├── Dockerfile
│   ├── get_data.sh
│   ├── print_data.sh
│   └── wait_for_it.sh
The error ceased to occur with sudo docker-compose up kafkacat.
I think it's something to do with the mechanics of the Docker build, but I really can't figure it out. Does anyone have a good explanation for why this could occur?

depends_on only waits for the broker container to start, not for the Kafka broker inside it to be ready to accept connections.
Kafka takes some time to start, so you should use that wait_for_it script before you try to run anything against the broker,
e.g.
depends_on:
  - broker
entrypoint: ['bash', '-c']
command:
  - /scripts/wait_for_it.sh broker:29092 -- /scripts/get_data.sh
Also, ideally, you'd pass a shell argument / environment variable for the broker to your script here, but this should be enough for now
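For example, a sketch of how that could look, assuming get_data.sh is adapted to read a BROKER environment variable (that variable name is hypothetical, not something the original script defines):
kafkacat:
  build: kafkacat
  depends_on:
    - broker
  environment:
    BROKER: broker:29092
  entrypoint: ['bash', '-c']
  command:
    # $$ keeps Compose from interpolating; bash expands $BROKER inside the container
    - /scripts/wait_for_it.sh "$$BROKER" -- /scripts/get_data.sh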

Related

Dockerize nestjs microservices application

I am trying to dockerize a microservice-based application. The api is built with nestjs and MySQL. The following is the directory structure
.
├── docker-compose.yml
├── api
│   ├── src
│   ├── Dockerfile
│   ├── package.json
│   ├── package-lock.json
│   ├── ormconfig.js
│   └── .env
├── payment
│   ├── src
│   ├── Dockerfile
│   ├── package.json
│   └── package-lock.json
└── notifications
    ├── src
    ├── Dockerfile
    ├── package.json
    └── package-lock.json
The following is the Dockerfile inside the api directory
FROM node:12.22.3
WORKDIR /usr/src/app
COPY package*.json .
RUN npm install
CMD ["npm", "run", "start:dev"]
Below is the docker-compose.yml file. Please note that the details for payment & notifications are not yet added to the docker-compose file.
version: '3.7'
networks:
  server-network:
    driver: bridge
services:
  api:
    image: api
    build:
      context: .
      dockerfile: api/Dockerfile
    command: npm run start:dev
    volumes:
      - ".:/usr/src/app"
      - "/usr/src/app/node_modules"
    networks:
      - server-network
    ports:
      - '4000:4000'
    depends_on:
      - mysql
  mysql:
    image: mysql:5.7
    container_name: api_db
    restart: always
    environment:
      MYSQL_DATABASE: api
      MYSQL_ROOT_USER: root
      MYSQL_PASSWORD: 12345
      MYSQL_ROOT_PASSWORD: root
    ports:
      - "3307:3306"
    volumes:
      - api_db_db:/var/lib/mysql
    networks:
      - server-network
volumes:
  api_db:
Now, when I try to start the application using docker-compose up I'm getting the following error.
no such file or directory, open '/usr/src/app/package.json'
UPDATE
Tried removing the volumes and it didn't help either. I also tried to see what is in the api container by listing the contents of the directory, running
docker-compose run api ls /usr/src/app
and it shows the following contents in the folder
node_modules package-lock.json
Any help is much appreciated.
Your build: { context: } directory is set wrong.
The image build mechanism uses a build context to send files to the Docker daemon. The dockerfile: location is relative to this directory; within the Dockerfile, the left-hand side of any COPY (or ADD) directives is always interpreted as relative to this directory (even if it looks like an absolute path; and you can't step out of this directory with ..).
For the setup you show, where you have multiple self-contained applications, the easiest thing is to set context: to the directory containing the application.
build:
  context: api
  dockerfile: Dockerfile # the default value
Or, if you are using the default value for dockerfile, an equivalent shorthand
build: api
You need to set the build context to a parent directory if you need to share files between images (see How to include files outside of Docker's build context?). In this case, all of the COPY instructions need to be qualified with the subdirectory in the combined source tree.
# Dockerfile, when context: .
COPY api/package*.json ./
RUN npm ci
COPY api/ ./
You should not normally need the volumes: you show. These have the core effect of (1) replacing the application in the image with whatever's on the local system, which could be totally different, and then (2) replacing its node_modules directory with a Docker anonymous volume, which will never be updated to reflect changes in the package.json file. In this particular setup you also need to be very careful that the volume mappings match the filesystem layout. I would recommend removing the volumes: block here; use a local Node for day-to-day development, maybe configuring it to point at the Docker-hosted database.
If you also remove things that are set in the Dockerfile (command:) and things Compose can provide reasonable defaults for (image:, container_name:, networks:) you could reduce the docker-compose.yml file to:
version: '3.8'
services:
  api: # without volumes:, networks:, image:, command:
    build: api # shorthand, corrected directory-only form
    ports:
      - '4000:4000'
    depends_on:
      - mysql
  mysql: # without container_name:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_DATABASE: api
      MYSQL_ROOT_USER: root
      MYSQL_PASSWORD: 12345
      MYSQL_ROOT_PASSWORD: root
    ports:
      - "3307:3306"
    volumes:
      - api_db:/var/lib/mysql
volumes:
  api_db:

How can I properly setup basic traefik reverse proxy?

Assume my current public IP is 101.15.14.71. I have a domain called example.com which I configured using Cloudflare, and I created multiple DNS entries pointing to my public IP.
Eg:
1) new1.example.com - 101.15.14.71
2) new2.example.com - 101.15.14.71
3) new3.example.com - 101.15.14.71
Now, here's my example project structure:
├── myapp
│   ├── app
│   │   └── main.py
│   ├── docker-compose.yml
│   └── Dockerfile
├── myapp1
│   ├── app
│   │   └── main.py
│   ├── docker-compose.yml
│   └── Dockerfile
└── traefik
    ├── acme.json
    ├── docker-compose.yml
    ├── traefik_dynamic.toml
    └── traefik.toml
Here I have two FastAPI apps (i.e., myapp and myapp1).
Here's the example code I have in main.py in both myapp and myapp1. It's exactly the same in both, except that the return statement differs:
from fastapi import FastAPI
app = FastAPI()
@app.get("/")
def read_main():
    return {"message": "Hello world for my project myapp"}
Here's my Dockerfile for myapp and myapp1. Here too both are exactly the same; the only difference is that I deploy myapp on 7777 and myapp1 on 7778 in different containers.
FROM ubuntu:latest
ARG DEBIAN_FRONTEND=noninteractive
RUN apt update && apt upgrade -y
RUN apt install -y -q build-essential python3-pip python3-dev
# python dependencies
RUN pip3 install -U pip setuptools wheel
RUN pip3 install gunicorn fastapi uvloop httptools "uvicorn[standard]"
# copy required files
RUN bash -c 'mkdir -p /app'
COPY ./app /app
ENTRYPOINT /usr/local/bin/gunicorn \
-b 0.0.0.0:7777 \ # this line I use for myapp dockerfile
-b 0.0.0.0:7778 \ # this line I change for myapp1 dockerfile
-w 1 \
-k uvicorn.workers.UvicornWorker app.main:app \
--chdir /app
Here's my docker-compose.yml file for myapp and myapp1. Here too they are exactly the same, except that I change the port (and the related labels):
services:
  myapp: # I use this line for myapp docker-compose file
  myapp1: # I use this line for myapp1 docker-compose file
    build: .
    restart: always
    labels:
      - "traefik.enable=true"
      - "traefik.docker.network=traefik_public"
      - "traefik.backend=myapp" # I use this line for myapp docker-compose file
      - "traefik.backend=myapp1" # I use this line for myapp1 docker-compose file
      - "traefik.frontend.rule=Host:new2.example.com" # I use this for myapp compose file
      - "traefik.frontend.rule=Host:new3.example.com" # I use this for myapp1 compose file
      - "traefik.port=7777" # I use this line for myapp docker-compose file
      - "traefik.port=7778" # I use this line for myapp1 docker-compose file
    networks:
      - traefik_public
networks:
  traefik_public:
    external: true
Now coming to the traefik folder:
acme.json: I created it empty (using nano acme.json), but did chmod 600 acme.json for proper permissions.
traefik_dynamic.toml
[http]
  [http.routers]
    [http.routers.route0]
      entryPoints = ["web"]
      middlewares = ["my-basic-auth"]
      service = "api@internal"
      rule = "Host(`new1.example.com`)"
      [http.routers.route0.tls]
        certResolver = "myresolver"
  [http.middlewares.test-auth.basicAuth]
    users = [
      ["admin:your_encrypted_password"]
    ]
traefik.toml
[entryPoints]
  [entryPoints.web]
    address = ":80"
    [entryPoints.web.http]
      [entryPoints.web.http.redirections]
        [entryPoints.web.http.redirections.entryPoint]
          to = "websecure"
          scheme = "https"
  [entryPoints.websecure]
    address = ":443"
[api]
  dashboard = true
[certificatesResolvers.myresolver.acme]
  email = "reallygoodtraefik@gmail.com"
  storage = "acme.json"
  [certificatesResolvers.myresolver.acme.httpChallenge]
    entryPoint = "web"
[providers]
  [providers.docker]
    watch = true
    network = "web"
  [providers.file]
    filename = "traefik_dynamic.toml"
docker-compose.yml
services:
  traefik:
    image: traefik:latest
    ports:
      - 80:80
      - 443:443
      - 8080:8080
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./traefik.toml:/traefik.toml
      - ./acme.json:/acme.json
      - ./traefik_dynamic.toml:/traefik_dynamic.toml
    networks:
      - web
networks:
  web:
These are the details of my files. What I am trying to achieve here is:
I want to set up traefik and the traefik dashboard with basic authentication, and deploy two of my FastAPI services:
myapp 7777, I need to access this app via new2.example.com
myapp1 7778, I need to access this app via new3.example.com
traefik dashboard, I need to access this via new1.example.com
All of these should be served over HTTPS and also have certificate auto-renewal enabled.
I got all of this from online articles for the latest version of traefik, but the problem is that it is not working. I used docker-compose to build and deploy traefik and opened the api dashboard. It asks for a user and password (the basic auth I set up); I entered the user details I set up in traefik_dynamic.toml, but it's not working.
Where did I go wrong? Please help me correct the mistakes in my configuration. I am really interested in learning more about this.
Error Update:
traefik_1 | time="2021-06-16T01:51:16Z" level=error msg="Unable to obtain ACME certificate for domains \"new1.example.com\": unable to generate a certificate for the domains [new1.example.com]: error: one or more domains had a problem:\n[new1.example.com] acme: error: 403 :: urn:ietf:params:acme:error:unauthorized :: Invalid response from http://new1.example.com/.well-known/acme-challenge/mu85LkYEjlvnbDI-wM2xMaRFO1QsPDNjepTDb47dWF0 [2606:4700:3032::6815:55c4]: 404\n" rule="Host(`new1.example.com`)" routerName=api#docker providerName=myresolver.acme
traefik_1 | time="2021-06-16T01:51:19Z" level=error msg="Unable to obtain ACME certificate for domains \"new2.example.com\": unable to generate a certificate for the domains [new2.example.com]: error: one or more domains had a problem:\n[new2.example.com] acme: error: 403 :: urn:ietf:params:acme:error:unauthorized :: Invalid response from http://new2.example.com/.well-known/acme-challenge/ykiCAEpJeQ1qgVdeFtSRo3q-ATTwgKdRdGHUs2kgIsY [2606:4700:3031::ac43:d1e9]: 404\n" providerName=myresolver.acme routerName=myapp1#docker rule="Host(`new2.example.com`)"
traefik_1 | time="2021-06-16T01:51:20Z" level=error msg="Unable to obtain ACME certificate for domains \"new3.example.com\": unable to generate a certificate for the domains [new3.example.com]: error: one or more domains had a problem:\n[new3.example.com] acme: error: 403 :: urn:ietf:params:acme:error:unauthorized :: Invalid response from http://new3.example.com/.well-known/acme-challenge/BUZWuWdNd2XAXwXCwkeqe5-PHb8cGV8V6UtzeLaKryE [2606:4700:3031::ac43:d1e9]: 404\n" providerName=myresolver.acme routerName=myapp#docker rule="Host(`new3.example.com`)"
You only need one docker-compose file for all the services, and there is no need to define one for each container.
The project structure you should be using is something like:
├── docker-compose.yml
├── myapp
│   ├── .dockerignore
│   ├── Dockerfile
│   └── app
│       └── main.py
├── myapp1
│   ├── .dockerignore
│   ├── Dockerfile
│   └── app
│       └── main.py
└── traefik
    ├── acme.json
    └── traefik.yml
When creating containers, unless they are to be used for development purposes, it is recommended not to use a full-blown image like ubuntu. Specifically for your purposes I would recommend a python image, such as python:3.7-slim.
Not sure if you are using this for development or production purposes, but you could also use volumes to mount the app directories inside the containers (especially useful if you are using this for development), and only use one Dockerfile for both myapp and myapp1, customizing it via environment variables.
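For illustration, a rough sketch of that volume-based development setup (the mount paths are assumptions that match the Dockerfile shown below, where the code is copied under /app):
services:
  myapp:
    build: ./myapp
    volumes:
      - ./myapp/app:/app/app   # code edits on the host are visible inside the container
  myapp1:
    build: ./myapp1
    volumes:
      - ./myapp1/app:/app/app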
Since you are already using traefik's dynamic configuration, I will do most of the setup for the container configuration via docker labels in the docker-compose.yml file.
Your Dockerfile for myapp and myapp1 will be very similar at this point, but I've kept them as separate ones, since you may need to make changes to them depending on the requirements of your apps in the future. I've used an environment variable for the port, which allows you to change the port from your docker-compose.yml file (a short sketch of that override follows the Dockerfile below).
You can use the following Dockerfile (./myapp/Dockerfile and ./myapp1/Dockerfile):
FROM python:3.7-slim
ARG DEBIAN_FRONTEND=noninteractive
ENV PYTHONUNBUFFERED=1
RUN pip3 install -U pip setuptools wheel && \
    pip3 install gunicorn fastapi uvloop httptools "uvicorn[standard]"
COPY . /app
# use PORT=7778 for myapp1
ENV PORT=7777
ENTRYPOINT /usr/local/bin/gunicorn -b 0.0.0.0:$PORT -w 1 -k uvicorn.workers.UvicornWorker app.main:app --chdir /app
Note: you should really be using something like poetry or a requirements.txt file for your app dependencies.
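If you keep the baked-in ENV PORT, it can also be overridden per service straight from the compose file instead of editing each Dockerfile; roughly (a sketch, not part of the compose file shown further down):
  myapp1:
    build:
      context: ./myapp1
    environment:
      PORT: 7778   # overrides the ENV PORT set in the image when the container starts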
The .dockerignore file (./myapp/.dockerignore and ./myapp1/.dockerignore) should contain:
Dockerfile
Since the whole directory is being copied inside the container and you don't need the Dockerfile to be in there.
Your main traefik config (./traefik/traefik.yml) can be something like:
providers:
  docker:
    exposedByDefault: false
global:
  checkNewVersion: false
  sendAnonymousUsage: false
api: {}
accessLog: {}
entryPoints:
  web:
    address: ":80"
    http:
      redirections:
        entryPoint:
          to: "websecure"
          scheme: "https"
  websecure:
    address: ":443"
ping:
  entryPoint: "websecure"
certificatesResolvers:
  myresolver:
    acme:
      caServer: "https://acme-staging-v02.api.letsencrypt.org/directory"
      email: "example@example.com"
      storage: "/etc/traefik/acme.json"
      httpChallenge:
        entryPoint: "web"
Note: The above acme config will use the staging letsencrypt server. Make sure all the details are correct, and remove caServer after you've tested that everything works, in order to communicate with the letsencrypt production server.
Your ./docker-compose.yml file should be something like:
version: "3.9"
services:
myapp:
build:
context: ./myapp
dockerfile: ./Dockerfile
image: myapp
depends_on:
- traefik
expose:
- 7777
labels:
- "traefik.enable=true"
- "traefik.http.routers.myapp.tls=true"
- "traefik.http.routers.myapp.tls.certResolver=myresolver"
- "traefik.http.routers.myapp.entrypoints=websecure"
- "traefik.http.routers.myapp.rule=Host(`new2.example.com`)"
- "traefik.http.services.myapp.loadbalancer.server.port=7777"
myapp1:
build:
context: ./myapp1
dockerfile: ./Dockerfile
image: myapp1
depends_on:
- traefik
expose:
- 7778
labels:
- "traefik.enable=true"
- "traefik.http.routers.myapp1.tls=true"
- "traefik.http.routers.myapp1.tls.certResolver=myresolver"
- "traefik.http.routers.myapp1.entrypoints=websecure"
- "traefik.http.routers.myapp1.rule=Host(`new3.example.com`)"
- "traefik.http.services.myapp1.loadbalancer.server.port=7778"
traefik:
image: traefik:v2.4
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./traefik/traefik.yml:/etc/traefik/traefik.yml
- ./traefik/acme.json:/etc/traefik/acme.json
ports:
- 80:80
- 443:443
labels:
- "traefik.enable=true"
- "traefik.http.routers.api.tls=true"
- "traefik.http.routers.api.tls.certResolver=myresolver"
- "traefik.http.routers.api.entrypoints=websecure"
- "traefik.http.routers.api.rule=Host(`new1.example.com`)"
- "traefik.http.routers.api.service=api#internal"
- "traefik.http.routers.api.middlewares=myAuth"
- "traefik.http.middlewares.myAuth.basicAuth.users=admin:$$apr1$$4zjvsq3w$$fLCqJddLvrIZA.CCoGE2E." # generate with htpasswd. replace $ with $$
You can generate the password by using the command:
htpasswd -n admin | sed 's/\$/\$\$/g'
Note: If you need a literal dollar sign in the docker-compose file you need to use $$ as documented here.
Issuing docker-compose up in the directory should bring all the services up, and working as expected.
The above should work for you based on the details you have provided, but can be further improved at multiple points, depending on your needs.
Moreover, having the credentials for the traefik dashboard in the docker-compose.yml file is probably not the best, and you may want to use docker secrets for it. You can also add healthchecks and consider placing myapp and myapp1 into a separate internal network.
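For the docker secrets idea, one possible sketch (the file name and wiring here are assumptions, using the basicAuth usersFile option instead of an inline users label):
secrets:
  traefik_users:
    file: ./traefik/users.htpasswd   # created with: htpasswd -nB admin > traefik/users.htpasswd
services:
  traefik:
    secrets:
      - traefik_users
    labels:
      # point the middleware at the mounted secret; no $$ escaping needed since the hash is not in this file
      - "traefik.http.middlewares.myAuth.basicauth.usersfile=/run/secrets/traefik_users"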
If you want to get further into it, I propose that you start with Get started with Docker Compose and also read: Dockerfile reference and Compose file version 3 reference

Connection issue with golang microservice to database microservice

I have a Golang project I am working on, with multiple micro-services in the same code repository. My directory structure is roughly as follows:
├── pkg
├── cmd
│   ├── servicea
│   └── serviceb
├── internal
│   ├── servicea
│   └── serviceb
├── Makefile
├── scripts
│   └── protogen.sh
├── vendor
│   └── ...
├── go.mod
├── go.sum
└── readme.md
The main.go files for the respective services are in cmd/servicex/main.go
I've put the individual Dockerfiles for the services in cmd/servicex.
Roughly, this is what my Dockerfile looks like:
FROM golang:1.15.6
ARG version
COPY go.* <repo-path>
COPY pkg/ <repo-path>/pkg/
COPY internal/servicea internal/servicea
COPY vendor/ <repo-path>/vendor/
COPY cmd/servicea/ <repo-path>/cmd/servicea/
WORKDIR <repo-path>/cmd/servicea/
RUN GO111MODULE=on GOFLAGS=-mod=vendor CGO_ENABLED=0 GOOS=linux go build -v -ldflags "-X <repo-path>/cmd/servicea/main.version=$version" -a -installsuffix cgo -o servicea .
FROM alpine:3.12
RUN apk --no-cache add ca-certificates
WORKDIR /servicea/
COPY --from=0 <repo-path>/cmd/servicea .
EXPOSE 50051
ENTRYPOINT ["/servicea/servicea"]
I am using Scylla as my DB for this service and gRPC is the protocol for communication.
This is my docker-compose.yml for this service.
version: '3'
services:
  db:
    container_name: servicedb
    image: scylladb/scylla
    hostname: db
    environment:
      GET_HOST_FROM: dns
      SCYLLA_USER: <user>
      SCYLLA_PASS: <password>
    ports:
      - 9042:9042
    networks:
      - serviceanet
  servicea:
    container_name: servicea
    image: servicea-production:latest
    hostname: servicea
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      GET_HOSTS_FROM: dns
    networks:
      - serviceanet
    volumes:
      - .:<repo-path>
    ports:
      - 50051:50051
    depends_on:
      - db
    links:
      - db
    labels:
      kompose.service.type: LoadBalancer
networks:
  serviceanet:
    driver: bridge
I am using kompose to generate the corresponding kubernetes yaml files.
However, when I run the compose locally or try to deploy it on minikube/GKE, my service instance is not able to connect to my DB and I get an error like this:
failed to create scylla session, gocql: unable to create session: control: unable to connect to initial hosts: dial tcp 127.0.0.1:9042: connect: connection refused
Otherwise, if I run a local scylla docker instance with the following command:
docker run --name some-scylla -p 9042:9042 -d scylladb/scylla --broadcast-address 127.0.0.1 --listen-address 0.0.0.0 --broadcast-rpc-address 127.0.0.1
and then do a go run cmd/servicea/main.go, my service seems to be running and the API endpoints are working (verified with Evans).
127.0.0.1 (localhost) is the host/container on which your service is running. In the case of multiple containers (either using docker-compose or k8s), each container has its own IP address, so 127.0.0.1 corresponds to a different host depending on where you are connecting from. In your gocql initialization, provide the db address using a configuration/environment variable. docker-compose will automatically configure a hostname for the db container, and with k8s you can use its service discovery mechanism.
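For example, a sketch of the compose side of that (DB_HOST and DB_PORT are hypothetical variable names; use whatever your gocql setup reads and pass it to gocql.NewCluster instead of 127.0.0.1):
servicea:
  environment:
    GET_HOSTS_FROM: dns
    DB_HOST: db        # the compose service name; Docker's internal DNS resolves it to the Scylla container
    DB_PORT: "9042"
  depends_on:
    - db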

Addons in Docker Odoo 13.0 not recognized

I am trying to use the latest (13.0) Docker image for local development, and I'm using the docker-compose.yml from the Docker documentation for spinning up containers:
version: '2'
services:
  web:
    image: odoo:13.0
    depends_on:
      - db
    ports:
      - "8069:8069"
    volumes:
      - ./config:/etc/odoo
      - ./addons/my_module:/mnt/extra-addons
  db:
    image: postgres:10
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_PASSWORD=odoo
      - POSTGRES_USER=odoo
      - PGDATA=/var/lib/postgresql/data/pgdata
My odoo.conf:
[options]
addons_path = /mnt/extra-addons
data_dir = /var/lib/odoo
My file structure:
├── addons
│ └── my_module
│ ├──controllers
│ ├──demo
│ ├──models
│ ├──security
│ ├──views
│ ├──__init__.py
│ └──__manifest__.py
├── config
│ └── odoo.conf
├── docker-compose.yml
└── README.md
my_module is the default module structure from the Odoo website (with the code uncommented), so I'm assuming it has no errors.
When I start the containers with the command docker-compose up -d, it starts the database and Odoo without any errors (in Docker and in the browser console), but my_module is not visible inside the application. I turned on developer mode and updated the Apps list inside the Apps tab as suggested in other issues on GitHub and SO, but my_module is still not visible. Additionally, if I log in to the container with docker exec -u root -it odoo /bin/bash, I can cd to /mnt/extra-addons and see the contents of my_module mounted in the container, so it seems as if Odoo does not recognize it?
I scanned the internet and found many similar problems, but none of the solutions worked for me, so I'm assuming I'm doing something wrong.
After some research I ended up with this docker-compose.yml, which does load custom addons into my Docker setup. Note that it mounts the whole ./addons directory rather than ./addons/my_module, so each module sits in its own subdirectory under the /mnt/extra-addons addons path:
version: '2'
services:
  db:
    image: postgres:11
    environment:
      - POSTGRES_PASSWORD=odoo
      - POSTGRES_USER=odoo
      - POSTGRES_DB=postgres
    restart: always
  odoo:
    image: odoo:13
    depends_on:
      - db
    ports:
      - "8069:8069"
    tty: true
    command: -- --dev=reload
    volumes:
      - ./addons:/mnt/extra-addons
      - ./etc:/etc/odoo
    restart: always
odoo.conf:
[options]
addons_path = /mnt/extra-addons
logfile = /etc/odoo/odoo-server.log

Dockerfile volume: local changes are not reflected in docker

I am using docker-compose to create a multi-container environment where I have one mongodb instance and two python applications. I am having trouble: when I change my files locally, docker-compose up doesn't reflect the changes I made. What am I doing wrong?
My project structure:
.
├── docker-compose.yml
├── form
│   ├── app.py
│   ├── Dockerfile
│   ├── requirements.txt
│   ├── static
│   └── templates
│   ├── form_action.html
│   └── form_sumbit.html
├── notify
│   ├── app.py
│   ├── Dockerfile
│   ├── requirements.txt
└── README
The Dockerfiles are pretty similar for the two apps. One is given below:
FROM python:2.7
ADD . /notify
WORKDIR /notify
RUN pip install -r requirements.txt
Here is my docker-compose.yml file:
version: '3'
services:
  db:
    image: mongo:3.0.2
    container_name: mongo
    networks:
      db_net:
        ipv4_address: 172.16.1.1
  web:
    build: form
    command: python -u app.py
    ports:
      - "5000:5000"
    volumes:
      - form:/form
    environment:
      MONGODB_HOST: 172.16.1.1
    networks:
      db_net:
        ipv4_address: 172.16.1.2
  notification:
    build: notify
    command: python -u app.py
    volumes:
      - notify:/notify
    environment:
      MONGODB_HOST: 172.16.1.1
    networks:
      db_net:
        ipv4_address: 172.16.1.3
networks:
  db_net:
    external: true
volumes:
  form:
  notify:
Here is my output for docker volume ls:
local form
local healthcarereminder_form
local healthcarereminder_notify
local notify
[My understanding so far: you can see there are two instances each of form and notify, one with the project folder name appended. So Docker might be looking for changes in a different place. I am not sure.]
If you're trying to mount a host directory, do not declare notify as a named volume in the docker-compose file.
Instead, treat it like a local folder:
notification:
  build: notify
  command: python -u app.py
  volumes:
    # this points to a relative ./notify directory.
    - ./notify:/notify
  environment:
    ....
volumes:
  form:
  # do not declare the volume here.
  # notify:
When you declare a volume at the bottom of the docker-compose file, Docker makes a special internal directory meant to be shared between containers. Here are more details: https://docs.docker.com/engine/tutorials/dockervolumes/#add-a-data-volume
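Applied to both services, the relevant parts of the compose file would then look roughly like this (a sketch, assuming the form image also uses /form as its working directory, like the notify one):
web:
  build: form
  command: python -u app.py
  volumes:
    - ./form:/form       # bind mount: host edits are immediately visible in the container
notification:
  build: notify
  command: python -u app.py
  volumes:
    - ./notify:/notify
# no top-level volumes: block is needed for these bind mounts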
