I'm using jwilder/nginx-proxy to host multiple web apps from a single server. This works great, except that all services can communicate with each other, because they all sit on the same network, which the proxy requires.
Proxy docker-compose.yaml
version: "3"
services:
nginx-proxy:
image: jwilder/nginx-proxy:alpine
container_name: nginx-proxy
labels:
- "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy"
ports:
- "80:80"
- "443:443"
volumes:
- ./data/certs:/etc/nginx/certs:ro
- ./data/nginx/vhost.d:/etc/nginx/vhost.d
- ./data/share/nginx/html:/usr/share/nginx/html
- /var/run/docker.sock:/tmp/docker.sock:ro
restart: always
letsencrypt-proxy:
image: jrcs/letsencrypt-nginx-proxy-companion
container_name: letsencrypt-proxy
depends_on:
- nginx-proxy
volumes:
- ./data/nginx/vhost.d:/etc/nginx/vhost.d
- ./data/share/nginx/html:/usr/share/nginx/html
- ./data/certs:/etc/nginx/certs:rw
- /var/run/docker.sock:/var/run/docker.sock:ro
restart: always
networks:
default:
external:
name: nginx-proxy
App 1 docker-compose.yaml
version: "3"
services:
app:
image: nginx:latest
depends_on:
- db
- cache
expose:
- 80
volumes:
- ./application:/var/www/html
restart: always
working_dir: /var/www/html
environment:
VIRTUAL_HOST: app1.example.com
LETSENCRYPT_HOST: app1.example.com
      LETSENCRYPT_EMAIL: user@example.com
cache:
image: redis:alpine
restart: always
volumes:
- cachedata:/data
db:
image: mysql:5.7
restart: always
environment:
MYSQL_ROOT_PASSWORD: rootpasswd
MYSQL_DATABASE: database_name
MYSQL_USER: database_user
MYSQL_PASSWORD: database_passwd
volumes:
- dbdata:/var/lib/mysql
networks:
default:
external:
name: nginx-proxy
volumes:
dbdata:
driver: local
cachedata:
driver: local
App 2 docker-compose.yaml
version: "3"
services:
app:
image: nginx:latest
depends_on:
- db
- cache
expose:
- 80
volumes:
- ./application:/var/www/html
restart: always
working_dir: /var/www/html
environment:
VIRTUAL_HOST: app2.example.com
LETSENCRYPT_HOST: app2.example.com
      LETSENCRYPT_EMAIL: user@example.com
cache:
image: redis:alpine
restart: always
volumes:
- cachedata:/data
db:
image: mysql:5.7
restart: always
environment:
MYSQL_ROOT_PASSWORD: rootpasswd
MYSQL_DATABASE: database_name
MYSQL_USER: database_user
MYSQL_PASSWORD: database_passwd
volumes:
- dbdata:/var/lib/mysql
networks:
default:
external:
name: nginx-proxy
volumes:
dbdata:
driver: local
cachedata:
driver: local
With this setup, both applications will use the db and cache instances of App 1. The only way to solve that is to give those services unique names like app_1_db and app_2_db. But then App 1 is still able to connect to app_2_db, which I would like to prevent.
Is there a way to isolate all services within their docker-compose.yaml file and still use the nginx proxy?
Docker version 18.09.0, build 4d60db4
docker-compose version 1.21.2, build a133471
You can connect only the app (nginx) container from your apps to the nginx-proxy network. The only edit needed is in each app's docker-compose file:
version: '3'
services:
app:
networks:
- default
- nginx-proxy
networks:
nginx-proxy:
external: true
That way the app service is connected to both the nginx-proxy and default networks at the same time. (If you omit the networks key, a service is always connected to the default network.)
Resolving service names to container IPs then works as expected, as long as no container can see (across all networks it's connected to) two containers with the same service name.
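You can check the resulting name resolution from inside a container; the container names below are assumptions based on Compose's default <project>_<service>_<index> naming:
docker exec app1_app_1 getent hosts db          # resolves App 1's own db service
docker exec app1_app_1 getent hosts app2_db_1   # fails: no shared network with App 2's db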
If you want even more isolation, you can create nginx-proxy network for every app.
So in your nginx-proxy docker-compose you will have:
version: "3"
services:
nginx-proxy:
networks:
- default
- nginx-proxy_app1
- nginx-proxy_app2
# letsencrypt-proxy service doesn't have to have networks key
networks:
nginx-proxy_app1:
external: true
nginx-proxy_app2:
external: true
and in your apps:
version: '3'
services:
app:
networks:
- default
- nginx-proxy_app1
networks:
nginx-proxy_app1:
external: true
and
version: '3'
services:
app:
networks:
- default
- nginx-proxy_app2
networks:
nginx-proxy_app2:
external: true
That way, every "proxy" network contains only one app container (if you are not using docker-compose scaling) plus the nginx-proxy container.
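Since these networks are declared external, Compose will not create them for you; create them once up front (network names taken from the snippets above):
docker network create nginx-proxy_app1
docker network create nginx-proxy_app2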
More reading:
https://docs.docker.com/compose/networking/
https://docs.docker.com/network/overlay/#operations-for-standalone-containers-on-overlay-networks
Related
I'm having a problem persisting data with docker-compose.
I want my chatmysql service to persist the data I put in the database, but every time I run docker-compose down it all vanishes.
I checked the /var/lib/docker/volumes directory to see if the data is stored there while the containers are running, and the volume was completely empty.
I didn't have that issue when I was running containers with the docker run command, so I guess it's the fault of my docker-compose.yaml file. Can someone help me?
I'm running this on Ubuntu 20.04.
version: '3'
services:
chatmysql:
image: mysql/mysql-server
container_name: chatmysql
hostname: db
user: root
networks:
- chatnet
ports:
- 3307:3306
volumes:
- chatmysqlvolume:/lib/var/mysql
chatbackend:
depends_on:
- chatmysql
build:
context: backend/src
container_name: chatbackend
hostname: backend
networks:
- chatnet
ports:
- 8080:8080
environment:
- MYSQLUSERNAME=${MYSQLUSERNAME:-user}
- MYSQLPASSWORD=${MYSQLPASSWORD:?database password not set}
- MYSQLHOST=${MYSQLHOST:-db}
- MYSQLPORT=${MYSQLPORT:-3306}
- MYSQLDBNAME=${MYSQLDBNAME:-test}
restart: always
deploy:
restart_policy:
condition: on-failure
chatfrontend:
build: frontend
container_name: chatfrontend
hostname: front
networks:
- chatnet
ports:
- 3000:3000
volumes:
chatmysqlvolume:
networks:
chatnet:
driver: bridge
You need to change the volume's mount path. MySQL stores its data under /var/lib/mysql, but you mounted the volume at /lib/var/mysql, so the database never writes into the named volume. Try this:
version: '3.7'
services:
chatmysql:
image: mysql/mysql-server
container_name: chatmysql
hostname: db
user: root
networks:
- chatnet
ports:
- 3307:3306
volumes:
- chatmysqlvolume:/var/lib/mysql
chatbackend:
depends_on:
- chatmysql
build:
context: backend/src
container_name: chatbackend
hostname: backend
networks:
- chatnet
ports:
- 8080:8080
environment:
- MYSQLUSERNAME=${MYSQLUSERNAME:-user}
- MYSQLPASSWORD=${MYSQLPASSWORD:?database password not set}
- MYSQLHOST=${MYSQLHOST:-db}
- MYSQLPORT=${MYSQLPORT:-3306}
- MYSQLDBNAME=${MYSQLDBNAME:-test}
restart: always
deploy:
restart_policy:
condition: on-failure
chatfrontend:
build: frontend
container_name: chatfrontend
hostname: front
networks:
- chatnet
ports:
- 3000:3000
volumes:
chatmysqlvolume:
networks:
chatnet:
driver: bridge
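To confirm the data now lands in the named volume, you can check it between runs; the volume name below is an assumption based on Compose's default <project>_<volume> prefix:
docker volume ls
docker volume inspect myproject_chatmysqlvolume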
Please help me if this is possible.
I need to run two applications with a single database.
I have two applications: the first is domain.com, the second is api.domain.com. Each application has its own docker-compose.yaml file.
domain.com - CMS
version: "3.8"
services:
web:
container_name: domain_web
build:
context: ./docker/php
dockerfile: Dockerfile
working_dir: /var/www/html
#command: composer install
volumes:
- ./:/var/www/html
- ./docker/php/app.conf:/etc/apache2/sites-available/000-default.conf
- ./docker/php/hosts:/etc/hosts
networks:
domain:
ipv4_address: 10.9.0.5
networks:
domain:
driver: bridge
ipam:
config:
- subnet: 10.9.0.0/16
          gateway: 10.9.0.1
volumes:
bel_baza:
api.domain.com - Laravel 5.6
version: "3.8"
services:
web:
container_name: api_domain_web
build:
context: ./docker/php
dockerfile: Dockerfile
working_dir: /var/www/html
# command: composer install
volumes:
- ./:/var/www/html
- ./docker/php/app.conf:/etc/apache2/sites-available/000-default.conf
- ./docker/php/hosts:/etc/hosts
networks:
api_domain:
ipv4_address: 10.15.0.5
db:
image: mysql:5.7
command: --default-authentication-plugin=mysql_native_password
restart: always
container_name: api_domain_db
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_DATABASE: domain
MYSQL_USER: user
MYSQL_PASSWORD: user
volumes:
- api_domain_baza:/var/lib/mysql
- ./docker/db:/docker-entrypoint-initdb.d
networks:
api_domain:
ipv4_address: 10.15.0.6
phpmyadmin:
image: phpmyadmin
restart: always
container_name: api_domain_pma
networks:
api_domain:
ipv4_address: 10.15.0.7
redis:
image: redis:3.0
container_name: api_domain_redis
networks:
api_domain:
ipv4_address: 10.15.0.10
networks:
api_domain:
driver: bridge
ipam:
config:
- subnet: 10.15.0.0/16
gateway: 10.15.0.1
volumes:
api_domain_baza:
api_domain starts successfully.
I need to connect domain.com to the api_domain_db database. As the connection host, I used the IP address 10.15.0.6, but the first application cannot connect to the database of the second application.
What is my problem?
How can I connect domain.com to the database of the second application?
Your problem is that you are using a separate docker-compose project for each application, and by default those projects cannot access each other's internals:
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
doc is here - https://docs.docker.com/compose/networking/
So Compose creates a separate network for each docker-compose project.
If you want both projects to see each other's internals, you can create an external docker network like this:
docker network create --subnet 10.1.0.0/24 network_name
and then use that network in both docker compose like this:
networks:
default:
external:
name: network_name
services:
.....
If you need fixed IPs, you can define them as:
app:
image: ...
networks:
default:
ipv4_address: 10.1.0.10
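Once both projects are joined to network_name, the first application can also reach the database by its container name instead of a fixed IP; a minimal sketch for the domain.com compose file (DB_HOST is an assumed variable name your app reads; api_domain_db is the container_name from your api compose file):
web:
  environment:
    DB_HOST: api_domain_db   # container names resolve on user-defined networks
    DB_PORT: 3306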
I am trying to run a WordPress site inside of a docker container on Ubuntu VPS using Nginx-Proxy.
I created the following docker-compose.yml file
version: '3.4'
services:
nginx-proxy:
image: jwilder/nginx-proxy
container_name: nginx-proxy
ports:
- 80:80
- 443:443
restart: always
networks:
- nginx-proxy
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
- /etc/nginx/vhost.d:/etc/nginx/vhost.d:ro
- /etc/certificates:/etc/nginx/certs
wordpress:
image: wordpress
container_name: wordpress
restart: always
ports:
- 8080:80
environment:
- VIRTUAL_HOST=wordpress.domain.com
- VIRTUAL_PORT=5500
- WORDPRESS_DB_HOST=db
- WORDPRESS_DB_USER=db_username
- WORDPRESS_DB_PASSWORD=db_password
- WORDPRESS_DB_NAME=db_name
depends_on:
- nginx-proxy
- db
networks:
- nginx-proxy
volumes:
- wordpress:/var/www/html
ports:
- 5500:5500
expose:
- 5500
db:
image: mysql:latest
container_name: db
restart: always
environment:
MYSQL_DATABASE: db_name
MYSQL_USER: db_username
MYSQL_PASSWORD: db_password
      MYSQL_RANDOM_ROOT_PASSWORD: '1'
depends_on:
- nginx-proxy
networks:
- nginx-proxy
volumes:
- db:/var/lib/mysql
ports:
- 5600:5600
expose:
- 5600
volumes:
wordpress:
db:
Every time I run docker-compose up I get the following error
Service "nginx-proxy" uses an undefined network "nginx-proxy"
I created a network using the following command
docker network create nginx-proxy
The output of docker network ls confirms the nginx-proxy network exists.
Why do I get that error? How can I fix it?
Anything you name in a per-service networks: block needs to be declared in a top-level networks: block.
version: '3.4'
services:
nginx-proxy:
networks:
- nginx-proxy # <-- matches below
volumes: { ... }
networks:
nginx-proxy: # <-- matches above
# may be empty, but this block is required
If you don't declare any networks: at all, Compose creates a network named default and attaches containers to it. For almost all uses this is what you need. So it may be simpler to just delete the networks: blocks entirely.
version: '3.4'
services:
nginx-proxy:
image: jwilder/nginx-proxy
# No networks:; just use automatic [default]
(You similarly do not need to manually provide a container_name:, or to expose: ports at the Compose level.)
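Note that with both variants above, Compose creates its own project-scoped network. If you specifically want to use the nginx-proxy network you already created with docker network create, declare it as external so Compose attaches to it instead of creating a new one; a minimal sketch:
version: '3.4'
services:
  nginx-proxy:
    networks:
      - nginx-proxy
networks:
  nginx-proxy:
    external: true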
I have one docker-compose file that has nginx, mysql and phpmyadmin.
phpmyadmin is exposed on port 8080 and reachable at mydomain.com:8080.
How can I serve it at http://phpmyadmin.mydomain.com or http://pma.mydomain.com on my server?
I have used this docker-compose file:
version: '3'
services:
#PHP Service
app:
build:
context: .
dockerfile: ./Dockerfile/Dockerfile
container_name: app
restart: unless-stopped
tty: true
environment:
SERVICE_NAME: app
SERVICE_TAGS: dev
working_dir: /var/www
volumes:
- ./Beh:/var/www
- ./Config/php/local.ini:/usr/local/etc/php/conf.d/local.ini
networks:
- app-network
#Nginx Service
webserver:
image: nginx:alpine
container_name: webserver
restart: unless-stopped
tty: true
ports:
- "80:80"
# - "443:443"
volumes:
- ./Beh:/var/www
- ./Config/nginx/conf.d/:/etc/nginx/conf.d/
networks:
- app-network
database:
image: mariadb:latest
container_name: database
environment:
- "MYSQL_USERNAME=root"
- "MYSQL_ROOT_PASSWORD=secret"
ports:
- "3306:3306"
networks:
- app-network
phpmyadmin:
image: phpmyadmin/phpmyadmin
container_name: phpmyadmin
environment:
- "MYSQL_USERNAME=root"
- "MYSQL_ROOT_PASSWORD=secret"
- "PMA_HOST=database"
links:
- database
ports:
- "8080:80"
networks:
- app-network
#Docker Networks
networks:
app-network:
driver: bridge
#Volumes
volumes:
dbdata:
driver: local
I don't know how to configure the nginx configuration to achieve this.
You can use this repo -> https://github.com/jwilder/nginx-proxy
This is what I'm using to create a new local domain name for each project.
You have to add "VIRTUAL_HOST" to the environment, expose the ports ("expose: 80"), and also add the new domain to your /etc/hosts.
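Applied to your phpmyadmin service, that could look roughly like this (a sketch, assuming the nginx-proxy container runs on the same docker network and DNS for pma.mydomain.com points at your server; the ports: "8080:80" mapping is then no longer needed):
phpmyadmin:
  image: phpmyadmin/phpmyadmin
  environment:
    - PMA_HOST=database
    - VIRTUAL_HOST=pma.mydomain.com   # nginx-proxy routes this hostname to the container
  expose:
    - 80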
I'm trying to assign static IP addresses to containers. I understand that I have to create a custom network, so I create it, and the bridge interface comes up on the host machine (Ubuntu 16.x). The containers get IPs from this subnet, but not the static ones I provided.
Here is my docker-compose.yml:
version: '2'
services:
mysql:
container_name: mysql
image: mysql:latest
restart: always
environment:
- MYSQL_ROOT_PASSWORD=root
ports:
- "3306:3306"
networks:
- vpcbr
apigw-tomcat:
container_name: apigw-tomcat
build: tomcat/.
ports:
- "8080:8080"
- "8009:8009"
networks:
- vpcbr
depends_on:
- mysql
networks:
vpcbr:
driver: bridge
ipam:
config:
- subnet: 10.5.0.0/16
gateway: 10.5.0.1
aux_addresses:
mysql: 10.5.0.5
apigw-tomcat: 10.5.0.6
The containers get 10.5.0.2 and 10.5.0.3 instead of 10.5.0.5 and 10.5.0.6.
Note that I don't recommend a fixed IP for containers in Docker unless you're doing something that allows routing from outside to the inside of your container network (e.g. macvlan). DNS is already there for service discovery inside of the container network and supports container scaling. And outside the container network, you should use exposed ports on the host. The reason your file doesn't do what you expect is that aux_addresses only reserves addresses in the subnet for hosts outside of Docker; it does not assign them to your containers. To pin a container's address, set ipv4_address on the service's entry for that network. With that disclaimer, here's the compose file you want:
version: '2'
services:
mysql:
container_name: mysql
image: mysql:latest
restart: always
environment:
- MYSQL_ROOT_PASSWORD=root
ports:
- "3306:3306"
networks:
vpcbr:
ipv4_address: 10.5.0.5
apigw-tomcat:
container_name: apigw-tomcat
build: tomcat/.
ports:
- "8080:8080"
- "8009:8009"
networks:
vpcbr:
ipv4_address: 10.5.0.6
depends_on:
- mysql
networks:
vpcbr:
driver: bridge
ipam:
config:
- subnet: 10.5.0.0/16
gateway: 10.5.0.1
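After docker-compose up you can verify the assignments with docker network inspect; the network name below assumes Compose's default <project>_<network> prefix:
docker network inspect myproject_vpcbr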
I was facing some difficulties with environment variables that use a custom name (ones that don't follow the container-name/port convention), namely KAPACITOR_BASE_URL and KAPACITOR_ALERTS_ENDPOINT. If I used the service name there, as in
KAPACITOR_BASE_URL: http://kapacitor:9092
then http://kapacitor:9092 would not resolve to http://172.20.0.2:9092.
I resolved the static IP issue using a subnet configuration:
version: "3.3"
networks:
frontend:
ipam:
config:
- subnet: 172.20.0.0/24
services:
db:
image: postgres:9.4.4
networks:
frontend:
ipv4_address: 172.20.0.5
ports:
- "5432:5432"
volumes:
- postgres_data:/var/lib/postgresql/data
redis:
image: redis:latest
networks:
frontend:
ipv4_address: 172.20.0.6
ports:
- "6379"
influxdb:
image: influxdb:latest
ports:
- "8086:8086"
- "8083:8083"
volumes:
- ../influxdb/influxdb.conf:/etc/influxdb/influxdb.conf
- ../influxdb/inxdb:/var/lib/influxdb
networks:
frontend:
ipv4_address: 172.20.0.4
environment:
INFLUXDB_HTTP_AUTH_ENABLED: "false"
INFLUXDB_ADMIN_ENABLED: "true"
INFLUXDB_USERNAME: "db_username"
INFLUXDB_PASSWORD: "12345678"
INFLUXDB_DB: db_customers
kapacitor:
image: kapacitor:latest
ports:
- "9092:9092"
networks:
frontend:
ipv4_address: 172.20.0.2
depends_on:
- influxdb
volumes:
- ../kapacitor/kapacitor.conf:/etc/kapacitor/kapacitor.conf
- ../kapacitor/kapdb:/var/lib/kapacitor
environment:
KAPACITOR_INFLUXDB_0_URLS_0: http://influxdb:8086
web:
build: .
environment:
RAILS_ENV: $RAILS_ENV
command: bundle exec rails s -b 0.0.0.0
ports:
- "3000:3000"
networks:
frontend:
ipv4_address: 172.20.0.3
links:
- db
- kapacitor
depends_on:
- db
volumes:
- .:/var/app/current
environment:
      DATABASE_URL: postgres://postgres@db
DATABASE_USERNAME: postgres
DATABASE_PASSWORD: postgres
INFLUX_URL: http://influxdb:8086
INFLUX_USER: db_username
INFLUX_PWD: 12345678
KAPACITOR_BASE_URL: http://172.20.0.2:9092
KAPACITOR_ALERTS_ENDPOINT: http://172.20.0.3:3000
volumes:
postgres_data:
If you never see the static IP address assigned, it could be because you are running "docker compose up". Try using "docker-compose up" instead.
When I use "docker-compose up" (with the hyphen), I see the static IPs assigned.
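You can confirm which address a container actually received with docker inspect (the container name here is an assumption based on Compose's default naming):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' myproject_web_1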
networks:
hfnet:
driver: bridge
ipam:
driver: default
config:
- subnet: 192.168.55.0/24
gateway: 192.168.55.1
services:
web:
image: 'mycompany/webserver:latest'
hostname: www
domainname: mycompany.com
stdin_open: true # docker run -i
tty: true # docker run -t
networks:
hfnet:
ipv4_address: 192.168.55.10
ports:
- '80:80'
- '443:443'
volumes:
- '../honeyfund:/var/www/html'
I wasted a lot of time figuring that one out. :(
I realized that the more convenient and meaningful way is to give the container a container name.
You can then use that name as the host name inside the same docker network.
This helped me because the containers had changing IPs, and this way I can reach another container via a stable name that I can use in config files.
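A minimal sketch of that pattern (all names here are examples, not taken from the posts above):
services:
  db:
    image: mysql:5.7
    container_name: shop-db      # stable DNS name on the shared network
    networks:
      - appnet
  app:
    image: example/app:latest    # hypothetical application image
    environment:
      DB_HOST: shop-db           # config references the name, not an IP
    networks:
      - appnet
networks:
  appnet:
    driver: bridge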