I have the following docker-compose.yml:
version: '3.9'
services:
  rabbitmq:
    image: rabbitmq:3.8.9-management
    restart: unless-stopped
    container_name: rabbitmq-sandbox-dev
    networks:
      - traefik_web
    volumes:
      - /opt/rabbitmq/sandbox/var/lib:/var/lib/rabbitmq
      - /opt/rabbitmq/sandbox/config/logs.conf:/etc/rabbitmq/rabbitmq.conf
Directory structure.
(venv) toto@euler:.../rabbitmq/sandbox# tree
.
├── config
│ └── logs.conf
├── docker-compose.yml
└── var
logs.conf
default_user = admin
default_pass = rabbitmq
default_user_tags.administrator = true
default_permissions.configure = .*
default_permissions.read = .*
default_permissions.write = .*
log.console = true
log.console.level = info
log.console.formatter = json
log.file = false
However, every time I run docker-compose up, the container crashes with the following error:
- Conf file attempted to set unknown variable: log.console.formatter
even though this argument is clearly described in the RabbitMQ configuration docs: https://www.rabbitmq.com/logging.html#json. Does anyone have an idea? Am I missing something?
Thanks in advance,
JSON log formatting via log.console.formatter = json was only introduced in RabbitMQ 3.9. The compose file pins rabbitmq:3.8.9-management, which predates that release, so the 3.8 config schema rejects the key as an unknown variable. Upgrading the image to a 3.9 (or later) tag makes the setting valid; the documentation page you linked describes the 3.9+ behaviour.
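Since log.console.formatter = json is only supported from RabbitMQ 3.9 onward, the fix amounts to bumping the image tag. A minimal sketch (the 3.9-management tag is an assumption; pick the exact tag you need):

```yaml
services:
  rabbitmq:
    # log.console.formatter = json requires RabbitMQ 3.9+
    image: rabbitmq:3.9-management
```

The rest of the compose file and logs.conf can stay unchanged.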
System Information
docker compose version
Docker Compose version v2.5.0
Design
I am using a Makefile that builds a docker-compose.yml with Docker Compose's multiple-compose-file logic: several files are passed via the -f flag and merged with the config command. The Makefile can build the compose file, run it, bring it down, and remove the file.
Structure
├── conf
│   └── grafana
│       ├── config
│       │   └── grafana.ini
│       └── .env
├── docker-compose.base.yml
├── services
│   ├── docker-compose.grafana.yml
The conf directory holds the .env files where I pass the necessary admin credentials for Grafana.
conf/grafana/.env
## Grafana Admin Credentials
GF_SECURITY_ADMIN_USER=admin
GF_SECURITY_ADMIN_PASSWORD=supersecretpass
docker-compose.base.yml
networks:
  internal:
services:
  grafana:
    env_file:
      - ./conf/grafana/.env
    volumes:
      - grafana:/var/lib/grafana
      - ./conf/grafana/config:/usr/local/etc/grafana
services/docker-compose.grafana.yml
services:
  grafana:
    image: grafana/grafana:8.4.5
    container_name: my-grafana
    environment:
      - GF_SERVER_ROOT_URL=/grafana
      - GF_SERVER_SERVE_FROM_SUB_PATH=true
      - GF_PATHS_CONFIG=/usr/local/etc/grafana/grafana.ini
    logging:
      options:
        max-size: "1m"
    networks:
      - internal
    ports:
      - "3000:3000"
    security_opt:
      - "no-new-privileges:true"
    volumes:
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
Makefile
.PHONY: help build compose clean down run
.SILENT: help

SERVICES_DIR=services
COMPOSE_FILES:=-f docker-compose.base.yml

# STEP 1: Add the service name here
define OPTIONS
 - grafana -
endef
export OPTIONS

ARGS=$(wordlist 2, $(words $(MAKECMDGOALS)),$(MAKECMDGOALS))
$(eval $(ARGS):;@:)

ifeq (grafana, $(filter grafana,$(ARGS)))
COMPOSE_FILES:=$(COMPOSE_FILES) -f $(SERVICES_DIR)/docker-compose.grafana.yml
endif

SERVICES:=$(filter-out ${OPTIONS},$(ARGS))
.PHONY: $(OPTIONS)

help:
	echo "refer to the README.md file"

build:
	make compose grafana

compose:
	docker compose $(COMPOSE_FILES) config > docker-compose.yml

clean: down
	rm -rf ./docker-compose.yml

down:
	docker compose down

run: build
	docker compose up -d $(SERVICES)
Problem Reproduction
I run the following:
make run
which builds the docker-compose.yml file in the root via the docker compose config command, combining the base file with -f services/docker-compose.grafana.yml.
Once the container is up and reachable on localhost:3000, I verify the login with the password, and it works.
Now I change the password in conf/grafana/.env to supersecretpass2 and run make run again.
This rewrites the docker-compose.yml file with the updated environment variables for the grafana service and re-runs docker compose up, which should pick up the new configuration, i.e., the new password.
Problem
Even though the docker-compose.yml is updated and the CLI states that the container is recreated and restarted, upon entering the new password it is clear that the Grafana UI has not picked up the updated environment variable.
Inspection
Upon doing:
docker inspect my-grafana
I can clearly see the following:
"StdinOnce": false,
"Env": [
"GF_SECURITY_ADMIN_PASSWORD=supersecretpass2",
"GF_SERVER_SERVE_FROM_SUB_PATH=true",
"GF_SECURITY_ADMIN_USER=admin",
"GF_PATHS_CONFIG=/usr/local/etc/grafana/grafana.ini",
"GF_SERVER_ROOT_URL=/grafana",
"PATH=/usr/share/grafana/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"GF_PATHS_DATA=/var/lib/grafana",
"GF_PATHS_HOME=/usr/share/grafana",
"GF_PATHS_LOGS=/var/log/grafana",
"GF_PATHS_PLUGINS=/var/lib/grafana/plugins",
"GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
],
by executing:
docker compose exec -it grafana /bin/bash env
I can see that the updated password env var is passed into the container; however, Grafana does not pick up these changes.
The UI says the password is invalid, but accepts the original password.
Repo for Bug Reproduction
In order to reproduce this bug I have the following repo
Assume my current public IP is 101.15.14.71. I have a domain called example.com which I configured using Cloudflare, and I created multiple DNS entries pointing to my public IP, e.g.:
1) new1.example.com - 101.15.14.71
2) new2.example.com - 101.15.14.71
3) new3.example.com - 101.15.14.71
Now, here's my example project structure:
├── myapp
│ ├── app
│ │ └── main.py
│ ├── docker-compose.yml
│ └── Dockerfile
├── myapp1
│ ├── app
│ │ └── main.py
│ ├── docker-compose.yml
│ └── Dockerfile
└── traefik
├── acme.json
├── docker-compose.yml
├── traefik_dynamic.toml
└── traefik.toml
Here I have two FastAPI apps (myapp and myapp1).
Here's the example code in main.py for both myapp and myapp1; it is exactly the same except for the return statement:
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def read_main():
    return {"message": "Hello world for my project myapp"}
Here's my Dockerfile for myapp and myapp1; both are exactly the same too, the only difference being that myapp binds to 7777 and myapp1 to 7778 in their respective containers.
FROM ubuntu:latest
ARG DEBIAN_FRONTEND=noninteractive
RUN apt update && apt upgrade -y
RUN apt install -y -q build-essential python3-pip python3-dev

# python dependencies
RUN pip3 install -U pip setuptools wheel
RUN pip3 install gunicorn fastapi uvloop httptools "uvicorn[standard]"

# copy required files
RUN bash -c 'mkdir -p /app'
COPY ./app /app

# -b 0.0.0.0:7777 in the myapp Dockerfile, -b 0.0.0.0:7778 in the myapp1 one
ENTRYPOINT /usr/local/bin/gunicorn \
    -b 0.0.0.0:7777 \
    -w 1 \
    -k uvicorn.workers.UvicornWorker app.main:app \
    --chdir /app
Here's my docker-compose.yml file for myapp and myapp1; these are also exactly the same, the only differences being the service name, backend, host rule, and port:
services:
  myapp:  # "myapp1" in the myapp1 docker-compose file
    build: .
    restart: always
    labels:
      - "traefik.enable=true"
      - "traefik.docker.network=traefik_public"
      - "traefik.backend=myapp"  # "myapp1" in the myapp1 docker-compose file
      - "traefik.frontend.rule=Host:new2.example.com"  # "new3.example.com" for myapp1
      - "traefik.port=7777"  # "7778" for myapp1
    networks:
      - traefik_public

networks:
  traefik_public:
    external: true
Now coming to the traefik folder:
acme.json — I created it using nano acme.json with nothing in it, but ran chmod 600 acme.json for proper permissions.
traefik_dynamic.toml
[http]
  [http.routers]
    [http.routers.route0]
      entryPoints = ["web"]
      middlewares = ["my-basic-auth"]
      service = "api@internal"
      rule = "Host(`new1.example.com`)"
      [http.routers.route0.tls]
        certResolver = "myresolver"
  [http.middlewares.test-auth.basicAuth]
    users = [
      ["admin:your_encrypted_password"]
    ]
traefik.toml
[entryPoints]
  [entryPoints.web]
    address = ":80"
    [entryPoints.web.http]
      [entryPoints.web.http.redirections]
        [entryPoints.web.http.redirections.entryPoint]
          to = "websecure"
          scheme = "https"
  [entryPoints.websecure]
    address = ":443"

[api]
  dashboard = true

[certificatesResolvers.myresolver.acme]
  email = "reallygoodtraefik@gmail.com"
  storage = "acme.json"
  [certificatesResolvers.myresolver.acme.httpChallenge]
    entryPoint = "web"

[providers]
  [providers.docker]
    watch = true
    network = "web"
  [providers.file]
    filename = "traefik_dynamic.toml"
docker-compose.yml
services:
  traefik:
    image: traefik:latest
    ports:
      - 80:80
      - 443:443
      - 8080:8080
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./traefik.toml:/traefik.toml
      - ./acme.json:/acme.json
      - ./traefik_dynamic.toml:/traefik_dynamic.toml
    networks:
      - web

networks:
  web:
These are the details of my files. What I am trying to achieve here is:
I want to set up traefik and the traefik dashboard with basic authentication, and deploy my two FastAPI services:
myapp 7777, I need to access this app via new2.example.com
myapp1 7778, I need to access this app via new3.example.com
traefik dashboard, I need to access this via new1.example.com
All of these should be served over HTTPS, with certificate auto-renewal enabled.
I got all of this from online articles for the latest version of traefik, but it is not working. I used docker-compose to build and deploy traefik, and when I open the API dashboard it asks for the user and password (the basic auth I set up). I entered the user details I configured in traefik_dynamic.toml, but it does not work.
Where did I go wrong? Please help me correct the mistakes in my configuration; I am really interested in learning more about this.
Error Update:
traefik_1 | time="2021-06-16T01:51:16Z" level=error msg="Unable to obtain ACME certificate for domains \"new1.example.com\": unable to generate a certificate for the domains [new1.example.com]: error: one or more domains had a problem:\n[new1.example.com] acme: error: 403 :: urn:ietf:params:acme:error:unauthorized :: Invalid response from http://new1.example.com/.well-known/acme-challenge/mu85LkYEjlvnbDI-wM2xMaRFO1QsPDNjepTDb47dWF0 [2606:4700:3032::6815:55c4]: 404\n" rule="Host(`new1.example.com`)" routerName=api@docker providerName=myresolver.acme
traefik_1 | time="2021-06-16T01:51:19Z" level=error msg="Unable to obtain ACME certificate for domains \"new2.example.com\": unable to generate a certificate for the domains [new2.example.com]: error: one or more domains had a problem:\n[new2.example.com] acme: error: 403 :: urn:ietf:params:acme:error:unauthorized :: Invalid response from http://new2.example.com/.well-known/acme-challenge/ykiCAEpJeQ1qgVdeFtSRo3q-ATTwgKdRdGHUs2kgIsY [2606:4700:3031::ac43:d1e9]: 404\n" providerName=myresolver.acme routerName=myapp1@docker rule="Host(`new2.example.com`)"
traefik_1 | time="2021-06-16T01:51:20Z" level=error msg="Unable to obtain ACME certificate for domains \"new3.example.com\": unable to generate a certificate for the domains [new3.example.com]: error: one or more domains had a problem:\n[new3.example.com] acme: error: 403 :: urn:ietf:params:acme:error:unauthorized :: Invalid response from http://new3.example.com/.well-known/acme-challenge/BUZWuWdNd2XAXwXCwkeqe5-PHb8cGV8V6UtzeLaKryE [2606:4700:3031::ac43:d1e9]: 404\n" providerName=myresolver.acme routerName=myapp@docker rule="Host(`new3.example.com`)"
You only need one docker-compose file for all the services; there is no need to define one for each container.
The project structure you should be using should be something like:
├── docker-compose.yml
├── myapp
│ ├── .dockerignore
│ ├── Dockerfile
│ └── app
│ └── main.py
├── myapp1
│ ├── .dockerignore
│ ├── Dockerfile
│ └── app
│ └── main.py
└── traefik
├── acme.json
└── traefik.yml
When creating containers, unless they are to be used for development purposes, it is recommended not to use a full-blown image like ubuntu. Specifically for your purposes I would recommend a python image, such as python:3.7-slim.
Not sure if you are using this for development or production purposes, but you could also use volumes to mount the app directories inside the containers (especially useful for development), and use only one Dockerfile for both myapp and myapp1, customizing it via environment variables.
Since you are already using traefik's dynamic configuration, I will do most of the setup for the container configuration via docker labels in the docker-compose.yml file.
Your Dockerfiles for myapp and myapp1 will be very similar at this point, but I've kept them separate, since you may need to change them depending on the requirements of your apps in the future. I've used an environment variable for the port, which allows you to change the port from your docker-compose.yml file.
You can use the following Dockerfile (./myapp/Dockerfile and ./myapp1/Dockerfile):
FROM python:3.7-slim
ARG DEBIAN_FRONTEND=noninteractive
ENV PYTHONUNBUFFERED=1
RUN pip3 install -U pip setuptools wheel && \
    pip3 install gunicorn fastapi uvloop httptools "uvicorn[standard]"
COPY . /app
# use ENV PORT=7778 for myapp1
ENV PORT=7777
ENTRYPOINT /usr/local/bin/gunicorn -b 0.0.0.0:$PORT -w 1 -k uvicorn.workers.UvicornWorker app.main:app --chdir /app
Note: you should really be using something like poetry or a requirements.txt file for your app dependencies.
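For example, the dependencies above could move into a requirements.txt next to the Dockerfile (versions intentionally unpinned here; pin what you need):

```
gunicorn
fastapi
uvloop
httptools
uvicorn[standard]
```

The second pip3 install line in the Dockerfile then becomes RUN pip3 install -r requirements.txt, after a COPY requirements.txt step.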
The .dockerignore file (./myapp/.dockerignore and ./myapp1/.dockerignore) should contain:
Dockerfile
since the whole directory is copied into the container and you don't need the Dockerfile to be in there.
Your main traefik config (./traefik/traefik.yml) can be something like:
providers:
  docker:
    exposedByDefault: false

global:
  checkNewVersion: false
  sendAnonymousUsage: false

api: {}

accessLog: {}

entryPoints:
  web:
    address: ":80"
    http:
      redirections:
        entryPoint:
          to: "websecure"
          scheme: "https"
  websecure:
    address: ":443"

ping:
  entryPoint: "websecure"

certificatesResolvers:
  myresolver:
    acme:
      caServer: "https://acme-staging-v02.api.letsencrypt.org/directory"
      email: "example@example.com"
      storage: "/etc/traefik/acme.json"
      httpChallenge:
        entryPoint: "web"
Note: The above acme config will use the Let's Encrypt staging server. Make sure all the details are correct, and remove caServer after you've tested that everything works, in order to communicate with the Let's Encrypt production server.
Your ./docker-compose.yml file should be something like:
version: "3.9"
services:
myapp:
build:
context: ./myapp
dockerfile: ./Dockerfile
image: myapp
depends_on:
- traefik
expose:
- 7777
labels:
- "traefik.enable=true"
- "traefik.http.routers.myapp.tls=true"
- "traefik.http.routers.myapp.tls.certResolver=myresolver"
- "traefik.http.routers.myapp.entrypoints=websecure"
- "traefik.http.routers.myapp.rule=Host(`new2.example.com`)"
- "traefik.http.services.myapp.loadbalancer.server.port=7777"
myapp1:
build:
context: ./myapp1
dockerfile: ./Dockerfile
image: myapp1
depends_on:
- traefik
expose:
- 7778
labels:
- "traefik.enable=true"
- "traefik.http.routers.myapp1.tls=true"
- "traefik.http.routers.myapp1.tls.certResolver=myresolver"
- "traefik.http.routers.myapp1.entrypoints=websecure"
- "traefik.http.routers.myapp1.rule=Host(`new3.example.com`)"
- "traefik.http.services.myapp1.loadbalancer.server.port=7778"
traefik:
image: traefik:v2.4
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./traefik/traefik.yml:/etc/traefik/traefik.yml
- ./traefik/acme.json:/etc/traefik/acme.json
ports:
- 80:80
- 443:443
labels:
- "traefik.enable=true"
- "traefik.http.routers.api.tls=true"
- "traefik.http.routers.api.tls.certResolver=myresolver"
- "traefik.http.routers.api.entrypoints=websecure"
- "traefik.http.routers.api.rule=Host(`new1.example.com`)"
- "traefik.http.routers.api.service=api#internal"
- "traefik.http.routers.api.middlewares=myAuth"
- "traefik.http.middlewares.myAuth.basicAuth.users=admin:$$apr1$$4zjvsq3w$$fLCqJddLvrIZA.CCoGE2E." # generate with htpasswd. replace $ with $$
You can generate the password by using the command:
htpasswd -n admin | sed 's/\$/\$\$/g'
Note: If you need a literal dollar sign in the docker-compose file you need to use $$ as documented here.
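The escaping step can be sketched in a few lines of Python (the hash value below is a made-up placeholder, not real htpasswd output):

```python
# docker-compose interpolates "$VAR", so literal dollar signs in an
# htpasswd hash must be doubled before pasting it into a compose label.
def escape_for_compose(htpasswd_line: str) -> str:
    return htpasswd_line.replace("$", "$$")

print(escape_for_compose("admin:$apr1$placeholder$hash"))
# -> admin:$$apr1$$placeholder$$hash
```

This is exactly what the sed in the htpasswd command above does.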
Issuing docker-compose up in the directory should bring all the services up, and working as expected.
The above should work for you based on the details you have provided, but can be further improved at multiple points, depending on your needs.
Moreover, having the credentials for the traefik dashboard in the docker-compose.yml file is probably not ideal, and you may want to use docker secrets for it. You can also add healthchecks and consider placing myapp and myapp1 in a separate internal network.
If you want to get further into it, I suggest you start with Get started with Docker Compose, and also read the Dockerfile reference and the Compose file version 3 reference.
I'm using a VPS with Debian 10 on it. I also have one domain name.
My goal is to self-host a few services, like FreshRSS or Nextcloud. To deploy these services, I'm using Docker and Docker Compose. I have one folder per service.
Because I would like to get a reverse proxy and assign subdomains to services (for example, cloud.domainname.com to Nextcloud), I installed Traefik yesterday. However, I cannot get the service to work. This is probably bad configuration on my part, as I'm a total beginner at setting up reverse proxies. For example, I'm trying to get it to work with ArchiveBox, which runs on port 8000. I would like Traefik to map my subdomain archive.domainname.com to port 8000 of my VPS.
These are the steps I did yesterday:
Installed ArchiveBox on my VPS with Docker Compose and configured it. It's working successfully.
Created a new network for traefik: sudo docker network create --driver=bridge --subnet=192.168.0.0/16 traefik_lan
Installed Traefik with Docker Compose, added dynamic configuration by following tutorials.
Added the labels and the network to the Docker Compose file of ArchiveBox.
Started both. However, ArchiveBox creates a new network and does not seem to use the Traefik one. I can also still access ArchiveBox directly at domainname.com:8000.
Below are the config files.
ArchiveBox
ArchiveBox docker-compose.yml
# Usage:
# docker-compose up -d
# docker-compose run archivebox init
# echo "https://example.com" | docker-compose run archivebox archivebox add
# docker-compose run archivebox add --depth=1 https://example.com/some/feed.rss
# docker-compose run archivebox config --set PUBLIC_INDEX=True
# Documentation:
# https://github.com/ArchiveBox/ArchiveBox/wiki/Docker#docker-compose
version: '3.7'
services:
  archivebox:
    # build: .
    image: ${DOCKER_IMAGE:-archivebox/archivebox:latest}
    command: server 0.0.0.0:8000
    stdin_open: true
    tty: true
    ports:
      - 8000:8000
    environment:
      - USE_COLOR=True
      - SHOW_PROGRESS=False
      - SEARCH_BACKEND_ENGINE=sonic
      - SEARCH_BACKEND_HOST_NAME=sonic
      - SEARCH_BACKEND_PASSWORD=SecretPassword
    volumes:
      - ./data:/data
    depends_on:
      - sonic
    labels:
      - "traefik.enable=true"
      - "traefik.docker.network=traefik_lan"
      - "traefik.http.routers.archiveboxnotls.rule=Host(`${ARCHIVE_URL}`)"
      - "traefik.http.routers.archiveboxnotls.entrypoints=webinsecure"
      - "traefik.http.routers.archiveboxnotls.middlewares=tlsredir@file"
      - "traefik.http.routers.archivebox.rule=Host(`${ARCHIVE_URL}`)"
      - "traefik.http.routers.archivebox.entrypoints=websecure"
      - "traefik.http.routers.archivebox.tls=true"
      - "traefik.http.routers.archivebox.tls.certresolver=letsencrypt"
    networks:
      - traefik_lan

  # Run sonic search backend
  sonic:
    image: valeriansaliou/sonic:v1.3.0
    ports:
      - 1491:1491
    environment:
      - SEARCH_BACKEND_PASSWORD=SecretPassword
    volumes:
      - ./etc/sonic/config.cfg:/etc/sonic.cfg
      - ./data:/var/lib/sonic/store/

networks:
  traefik_lan:
    external: true
I'm then expected to run it like so:
sudo ARCHIVE_URL=archive.mydomain.com docker-compose up -d
Traefik
This is the structure of my /traefik folder in /home.
.
├── config
│ ├── acme.json
│ ├── dynamic-conf
│ │ ├── dashboard.toml
│ │ ├── tlsredir.toml
│ │ └── tls.toml
│ └── traefik.toml
└── docker-compose.yml
docker-compose.yml
version: '3'
services:
  reverse-proxy:
    container_name: traefik
    image: traefik:v2.4
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    networks:
      - traefik_lan
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./config:/etc/traefik:ro
      - ./config/acme.json:/acme.json

networks:
  traefik_lan:
    external: true
traefik.toml
[api]
  dashboard = true

[providers]
  [providers.docker]
    exposedByDefault = false
  [providers.file]
    directory = "/etc/traefik/dynamic-conf"
    watch = true

[entryPoints]
  [entryPoints.websecure]
    address = ":443"
  [entryPoints.webinsecure]
    address = ":80"
  [entryPoints.dot]
    address = ":853"

[certificatesResolvers.letsencrypt.acme]
  email = "myemail@gmail.com"
  caServer = "https://acme-staging-v02.api.letsencrypt.org/directory"
  #caServer = "https://acme-v02.api.letsencrypt.org/directory"
  storage = "acme.json"
  [certificatesResolvers.letsencrypt.acme.tlsChallenge]

[accessLog]
  format = "json"
  [accessLog.fields]
    defaultMode = "drop"
    [accessLog.fields.names]
      "ClientAddr" = "keep"
      "RequestAddr" = "keep"
      "RequestMethod" = "keep"
      "RequestPath" = "keep"
      "DownstreamStatus" = "keep"
dashboard.toml
[http.routers.api]
  rule = "Host(`traefik.domain.tld`)"
  entrypoints = ["webinsecure"]
  service = "api@internal"
  middlewares = ["tlsredir@file"]

[http.routers.api-secure]
  rule = "Host(`traefik.domain.tld`)"
  entrypoints = ["websecure"]
  service = "api@internal"
  middlewares = ["secured"]
  [http.routers.api-secure.tls]
    certResolver = "letsencrypt"
tlsredir.toml
[http.middlewares]
  [http.middlewares.tlsredir.redirectScheme]
    scheme = "https"
    permanent = true
tls.toml
[tls]
  [tls.options]
    [tls.options.default]
      minVersion = "VersionTLS12"
      sniStrict = true
      cipherSuites = [
        "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
        "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305",
      ]
      curvePreferences = ["CurveP521","CurveP384"]
Thank you in advance for your help.
I'm looking to use Docker to emulate a minimum of our current cloud environment. We have about 10 services (each with its own MySQL 8 database, redis, php-fpm and nginx). Currently each repository has its own docker-compose.yml, but the services can't talk to each other; if I want to test a feature where one service needs to talk to another, I'm out of luck.
My first approach was to create a Dockerfile per service (and run them all together using a new docker-compose.yml) based on Debian, but I didn't get very far: I was able to install nginx, php-fpm and their dependencies, but when I got to the databases things got weird, and I had a feeling this wasn't the right way of doing it.
Is there a way for one docker-compose.yml to "include" the docker-compose.yml of each of the services?
Is there a better approach to this? Or should I just keep the Dockerfiles and run them all on the same network using docker-compose?
TL;DR
You can configure docker-compose to use external networks to communicate with services from other projects, or (depending on your project) use the -f command-line option / COMPOSE_FILE environment variable to specify the paths of the compose file(s) and bring all of the services up inside the same network.
Using external networks
Given the below tree with project a and b:
.
├── a
│ └── docker-compose.yml
└── b
└── docker-compose.yml
Project a's docker-compose sets a name for the default network:
version: '3.7'
services:
  nginx:
    image: 'nginx'
    container_name: 'nginx_a'
    expose:
      - '80'
networks:
  default:
    name: 'net_a'
And project b is configured to use the named network net_b and the pre-existing net_a external network:
version: '3.7'
services:
  nginx:
    image: 'nginx'
    container_name: 'nginx_b'
    expose:
      - '80'
    networks:
      - 'net_a'
      - 'default'
networks:
  default:
    name: 'net_b'
  net_a:
    external: true
... exec'ing into the nginx_b container, we can reach the nginx_a container.
Note: this is a minimalist example. The external network must exist before trying to bring up an environment that is configured with the pre-existing network. Rather than modifying the existing projects docker-compose.yml I'd suggest using overrides.
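As a sketch of such an override (file name and network names assumed from the example above), a docker-compose.override.yml next to project b's compose file could attach the service to the external network without editing the original:

```yaml
# docker-compose.override.yml — merged automatically by docker-compose
services:
  nginx:
    networks:
      - 'default'
      - 'net_a'
networks:
  net_a:
    external: true
```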
The configuration gives the nginx_b container a foot inside both networks.
Using the -f command-line option
Using the -f command-line option acts as an override. It will not work with the above compose files as both specify an nginx service (docker-compose would merge the second nginx service over the first).
Using the modified docker-compose.yml for project a:
version: '3.7'
services:
  nginx_a:
    container_name: 'nginx_a'
    image: 'nginx'
    expose:
      - '80'
... and for project b:
version: '3.7'
services:
  nginx_b:
    image: 'nginx'
    container_name: 'nginx_b'
    expose:
      - '80'
... we can bring both of the environments up inside the same network with: docker-compose -f a/docker-compose.yml -f b/docker-compose.yml up -d
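The same pair of files can also be selected via the COMPOSE_FILE environment variable, where ':' is the path separator (';' on Windows):

```
COMPOSE_FILE=a/docker-compose.yml:b/docker-compose.yml docker-compose up -d
```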
I'm trying to use the latest (13.0) Odoo Docker image for local development, and I'm using the docker-compose.yml from the Docker documentation to spin up the containers:
version: '2'
services:
  web:
    image: odoo:13.0
    depends_on:
      - db
    ports:
      - "8069:8069"
    volumes:
      - ./config:/etc/odoo
      - ./addons/my_module:/mnt/extra-addons
  db:
    image: postgres:10
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_PASSWORD=odoo
      - POSTGRES_USER=odoo
      - PGDATA=/var/lib/postgresql/data/pgdata
My odoo.conf:
[options]
addons_path = /mnt/extra-addons
data_dir = /var/lib/odoo
My file structure:
├── addons
│   └── my_module
│       ├── controllers
│       ├── demo
│       ├── models
│       ├── security
│       ├── views
│       ├── __init__.py
│       └── __manifest__.py
├── config
│   └── odoo.conf
├── docker-compose.yml
└── README.md
my_module is the default module structure from the Odoo website (with the code uncommented), so I assume it has no errors.
When I start the containers with docker-compose up -d, the database and Odoo start without any errors (in Docker and in the browser console), but my_module is not visible inside the application. I turned on developer mode and updated the Apps list in the Apps tab, as suggested in other issues on GitHub and SO, but my_module is still not visible. Additionally, if I log in to the container with docker exec -u root -it odoo /bin/bash, I can cd to /mnt/extra-addons and see the contents of my_module mounted in the container, so it seems Odoo does not recognize it?
I scanned the internet and found many similar problems, but none of the solutions worked for me, so I assume I'm doing something wrong.
After some research I ended up with this docker-compose.yml, which does load custom addons into my Docker setup:
version: '2'
services:
  db:
    image: postgres:11
    environment:
      - POSTGRES_PASSWORD=odoo
      - POSTGRES_USER=odoo
      - POSTGRES_DB=postgres
    restart: always
  odoo:
    image: odoo:13
    depends_on:
      - db
    ports:
      - "8069:8069"
    tty: true
    command: -- --dev=reload
    volumes:
      - ./addons:/mnt/extra-addons
      - ./etc:/etc/odoo
    restart: always
odoo.conf:
[options]
addons_path = /mnt/extra-addons
logfile = /etc/odoo/odoo-server.log