Please help me if it's possible.
I need to start 2 applications with a single database.
I have 2 applications: the first is domain.com, the second is api.domain.com. Each application has its own docker-compose.yaml file.
domain.com - CMS
version: "3.8"
services:
web:
container_name: domain_web
build:
context: ./docker/php
dockerfile: Dockerfile
working_dir: /var/www/html
#command: composer install
volumes:
- ./:/var/www/html
- ./docker/php/app.conf:/etc/apache2/sites-available/000-default.conf
- ./docker/php/hosts:/etc/hosts
networks:
domain:
ipv4_address: 10.9.0.5
networks:
domain:
driver: bridge
ipam:
config:
- subnet: 10.9.0.0/16
          gateway: 10.9.0.1
volumes:
  bel_baza:
api.domain.com - Laravel 5.6
version: "3.8"
services:
web:
container_name: api_domain_web
build:
context: ./docker/php
dockerfile: Dockerfile
working_dir: /var/www/html
# command: composer install
volumes:
- ./:/var/www/html
- ./docker/php/app.conf:/etc/apache2/sites-available/000-default.conf
- ./docker/php/hosts:/etc/hosts
networks:
api_domain:
ipv4_address: 10.15.0.5
db:
image: mysql:5.7
command: --default-authentication-plugin=mysql_native_password
restart: always
container_name: api_domain_db
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_DATABASE: domain
MYSQL_USER: user
MYSQL_PASSWORD: user
volumes:
- api_domain_baza:/var/lib/mysql
- ./docker/db:/docker-entrypoint-initdb.d
networks:
api_domain:
ipv4_address: 10.15.0.6
phpmyadmin:
image: phpmyadmin
restart: always
container_name: api_domain_pma
networks:
api_domain:
ipv4_address: 10.15.0.7
redis:
image: redis:3.0
container_name: api_domain_redis
networks:
api_domain:
ipv4_address: 10.15.0.10
networks:
api_domain:
driver: bridge
ipam:
config:
- subnet: 10.15.0.0/16
gateway: 10.15.0.1
volumes:
api_domain_baza:
The api_domain application starts successfully.
I need to connect domain.com to the database api_domain_db. As the connection host I used the IP address 10.15.0.6, but the first application cannot connect to the database of the second application.
What is my problem?
How can I connect domain.com to the database of the second application?
Your problem is that you are using a separate docker-compose project for each application, and by default those projects cannot access each other's internal services:
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
The documentation is here: https://docs.docker.com/compose/networking/
So Compose creates a separate network for each docker-compose project.
If you want both projects to see each other's internals, you can create an external Docker network like this:
docker network create --subnet 10.1.0.0/24 network_name
and then use that network in both docker-compose files like this:
networks:
  default:
    external:
      name: network_name
services:
  .....
If you need fixed IPs, you can define them like this:
app:
  image: ...
  networks:
    default:
      ipv4_address: 10.1.0.10
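Applied to the setup in the question, a minimal sketch (the network name shared_net is an assumption) would be to create one external network, point both compose files at it, and let domain.com reach the database by its container name instead of a hard-coded IP:

docker network create shared_net

# in BOTH docker-compose.yaml files, replace the project-specific network with:
networks:
  default:
    external:
      name: shared_net
# and drop the per-service ipv4_address entries so each service simply joins this shared default network

Once both stacks run on the same network, the CMS can use api_domain_db as its database host (with database domain, user user, password user, as defined in the second file).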
Related
I have 2 containers that belong to the same network:
version: '3'
services:
  #PHP Service
  app:
    build:
      context: ./website
      dockerfile: Dockerfile
    image: travellist
    container_name: app
    restart: unless-stopped
    depends_on:
      - db
    tty: true
    ...
    networks:
      - app-network
  administration:
    build:
      dockerfile: Dockerfile
    image: travellist
    container_name: administration
    restart: unless-stopped
    depends_on:
      - db
    tty: true
    environment:
      ....
    networks:
      - app-network
  #Nginx Service
  webserver:
    container_name: webserver
    image: nginx:1.17-alpine
    restart: unless-stopped
    depends_on:
      - db
    ports:
      - 8000:80
      - 7999:81
    ...
    networks:
      - app-network
#Docker Networks
networks:
  app-network:
    driver: bridge
As you can see, the two applications run behind NGINX on 2 different ports... however, I'm unable to send a request from one application to the other. None of the following works (from administration, the one mapped to host port 7999, container port 81):
localhost:80
localhost:8000
app:80
app:8000
From the administration container you should send your request to the webserver on port 80.
From the administration container, you can first check that you can ping the webserver; if that succeeds, it means the two can reach each other on the network, and therefore you can execute your request.
Please note that port 8000 is only exposed to the host machine.
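For example, a quick check from inside the administration container could look like this (assuming ping and curl are available in that image, which is not guaranteed):

docker exec -it administration ping -c 3 webserver
docker exec -it administration curl -I http://webserver:80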
I am trying to bind the nginx container to IP 172.16.238.10, but for some reason Docker ignores the settings in docker-compose.
#my docker-compose file
version: "3.9"
services:
nginx:
build: nginx/
ports:
- 80:80/tcp
volumes:
./dokcer/nginx/conf/nginx.conf:/etc/nginx/nginx.conf
./dokcer/nginx/conf/hosts:/etc/hosts
./docker/project:/var/www/project
networks:
app_net:
ipv4_address: 172.16.238.10
php-fpm:
build: php-fpm/
ports:
- 9000:9000/tcp
volumes:
./dokcer/php-fpm/conf/www.conf:/usr/local/etc/php-fpm.d/www.conf
./dokcer/php-fpm/conf/hosts:/etc/hosts
./dokcer/project:/var/www/project
networks:
app_net:
ipv4_address: 172.16.238.11
networks:
app_net:
ipam:
driver: default
config:
- subnet: "172.16.238.0/24"
After build and launch, I look at the docker_app_net network and see that the nginx container has IP 172.16.238.2, although I expected it to be 172.16.238.10.
What could be the problem? I will be grateful for every answer because I am confused :c
I'm using jwilder/nginx-proxy to host multiple (web)apps from a single server. This is working great, except that all services can communicate with each other because they are all on the same network, which is required for the proxy to work.
Proxy docker-compose.yaml
version: "3"
services:
nginx-proxy:
image: jwilder/nginx-proxy:alpine
container_name: nginx-proxy
labels:
- "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy"
ports:
- "80:80"
- "443:443"
volumes:
- ./data/certs:/etc/nginx/certs:ro
- ./data/nginx/vhost.d:/etc/nginx/vhost.d
- ./data/share/nginx/html:/usr/share/nginx/html
- /var/run/docker.sock:/tmp/docker.sock:ro
restart: always
letsencrypt-proxy:
image: jrcs/letsencrypt-nginx-proxy-companion
container_name: letsencrypt-proxy
depends_on:
- nginx-proxy
volumes:
- ./data/nginx/vhost.d:/etc/nginx/vhost.d
- ./data/share/nginx/html:/usr/share/nginx/html
- ./data/certs:/etc/nginx/certs:rw
- /var/run/docker.sock:/var/run/docker.sock:ro
restart: always
networks:
default:
external:
name: nginx-proxy
App 1 docker-compose.yaml
version: "3"
services:
app:
image: nginx:latest
depends_on:
- db
- cache
expose:
- 80
volumes:
- ./application:/var/www/html
restart: always
working_dir: /var/www/html
environment:
VIRTUAL_HOST: app1.example.com
LETSENCRYPT_HOST: app1.example.com
LETSENCRYPT_EMAIL: user#example.com
cache:
image: redis:alpine
restart: always
volumes:
- cachedata:/data
db:
image: mysql:5.7
restart: always
environment:
MYSQL_ROOT_PASSWORD: rootpasswd
MYSQL_DATABASE: database_name
MYSQL_USER: database_user
MYSQL_PASSWORD: database_passwd
volumes:
- dbdata:/var/lib/mysql
networks:
default:
external:
name: nginx-proxy
volumes:
dbdata:
driver: local
cachedata:
driver: local
App 2 docker-compose.yaml
version: "3"
services:
app:
image: nginx:latest
depends_on:
- db
- cache
expose:
- 80
volumes:
- ./application:/var/www/html
restart: always
working_dir: /var/www/html
environment:
VIRTUAL_HOST: app2.example.com
LETSENCRYPT_HOST: app2.example.com
LETSENCRYPT_EMAIL: user#example.com
cache:
image: redis:alpine
restart: always
volumes:
- cachedata:/data
db:
image: mysql:5.7
restart: always
environment:
MYSQL_ROOT_PASSWORD: rootpasswd
MYSQL_DATABASE: database_name
MYSQL_USER: database_user
MYSQL_PASSWORD: database_passwd
volumes:
- dbdata:/var/lib/mysql
networks:
default:
external:
name: nginx-proxy
volumes:
dbdata:
driver: local
cachedata:
driver: local
With this setup both applications will use the db and cache instances of App 1. The only way to solve that is to give those services unique names like app_1_db and app_2_db. But then App 1 is still able to connect to app_2_db, which I would like to prevent.
Is there a way to isolate all services within their docker-compose.yaml file and still use the nginx proxy?
Docker version 18.09.0, build 4d60db4
docker-compose version 1.21.2, build a133471
You can connect only the app (nginx) container from your apps to the nginx-proxy network. The only edit needed should be in each app's docker-compose:
version: '3'
services:
  app:
    networks:
      - default
      - nginx-proxy
networks:
  nginx-proxy:
    external: true
That way the app service will be connected to the nginx-proxy and default networks at the same time. (If you omit the networks key, a service is always connected to the default network.)
Resolving service names to container IPs then works as expected, as long as no container can see (across all the networks it is connected to) two containers with the same service name.
If you want even more isolation, you can create a separate nginx-proxy network for every app.
So in your nginx-proxy docker-compose you will have:
version: "3"
services:
nginx-proxy:
networks:
- default
- nginx-proxy_app1
- nginx-proxy_app2
# letsencrypt-proxy service doesn't have to have networks key
networks:
nginx-proxy_app1:
external: true
nginx-proxy_app2:
external: true
and in your apps:
version: '3'
services:
  app:
    networks:
      - default
      - nginx-proxy_app1
networks:
  nginx-proxy_app1:
    external: true
and
version: '3'
services:
  app:
    networks:
      - default
      - nginx-proxy_app2
networks:
  nginx-proxy_app2:
    external: true
That way in every "proxy" network there is only one (if you are not using docker-compose scaling) app container and the nginx-proxy container.
More reading:
https://docs.docker.com/compose/networking/
https://docs.docker.com/network/overlay/#operations-for-standalone-containers-on-overlay-networks
I am trying to set a static IP in my docker-compose v3 file, but I can't.
Each time I set it, I can't connect to the webpage anymore.
I am getting ERR_ADDRESS_UNREACHABLE.
Here is my config:
# docker-compose.yml
version: '3'
services:
  web:
    build: ./etc/nginx
    ports:
      - "90:80"
    volumes:
      - "./etc/ssl:/etc/ssl"
    depends_on:
      - php
      - database
  php:
    build: ./etc/php
    ports:
      - 9000:9000
    links:
      - database:mysqldb
    volumes:
      - "./etc/php/php.ini:/usr/local/etc/php/conf.d/php.ini"
      - ${APP_PATH}:/var/www/symfony
  database:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    ports:
      - 3300:3306
    volumes:
      - "./data/db/mysql:/var/lib/mysql"
and
# docker-compose.override.yml
version: '3'
services:
  web:
    networks:
      test:
        ipv4_address: '10.1.0.100'
networks:
  test:
    ipam:
      driver: default
      config:
        - subnet: 10.1.0.0/24
It should be like this:
services:
  web:
    networks:
      test:
        ipv4_address: 10.1.0.100
networks:
  test:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 10.1.0.0/24
And in my case I placed the networks section before services.
UPDATE:
Eventually I ended up using an external network like this:
docker network create --subnet 10.5.0.0/24 local_network_dev
and then in any docker-compose file you can just use it like this:
version: '3.2'
networks:
  default:
    external:
      name: local_network_dev
and inside the service definition:
web:
  networks:
    default:
      ipv4_address: 10.5.0.11
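To verify that the container actually received the requested address, you can inspect the network afterwards, e.g.:

docker network inspect local_network_dev

The Containers section of the output lists every attached container together with its IPv4 address.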
# Use root/example as user/password credentials
version: '3.3'
networks:
  netBackEnd:
    ipam:
      driver: default
      config:
        - subnet: 192.168.0.0/24
services:
  mongo-db:
    image: mongo
    container_name: cnt_mongo
    restart: always
    environment:
      MONGO_INITDB_DATABASE: dbArland
      MONGO_INITDB_ROOT_USERNAME: maguilarac
      MONGO_INITDB_ROOT_PASSWORD: pwdmaguilarac
    ports:
      - 27017:27017
    volumes:
      - ./script1_creacion_usuario.js:/docker-entrypoint-initdb.d/script1_creacion_usuario.js:ro
      - ./script2_creacion_coleccion.js:/docker-entrypoint-initdb.d/script2_creacion_coleccion.js:ro
      - ./script4_carga_productos.js:/docker-entrypoint-initdb.d/script4_carga_productos.js:ro
      - ./productos_inicial.json:/docker-entrypoint-initdb.d/productos_inicial.json:ro
      - ./mongo-volume:/data/db
    networks:
      netBackEnd:
        ipv4_address: 192.168.0.4
  mongo-express:
    image: mongo-express
    container_name: cnt_mongo-express
    restart: always
    ports:
      - 9081:8081
    environment:
      ME_CONFIG_MONGODB_ADMINUSERNAME: maguilarac
      ME_CONFIG_MONGODB_ADMINPASSWORD: pwdmaguilarac
    networks:
      netBackEnd:
        ipv4_address: 192.168.0.6
Just a very important note that all future users should know.
If you are trying to edit an already existing network, you will most likely get the error
Cannot start service xxx: Invalid address xxx.xxx.xxx.xxx: It does not belong to any of this network's subnets
I have been struggling for about 2 hours with this problem. The solution is to set a different name for the network, or probably to use docker-compose down.
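A minimal sketch of that second option, assuming the network is one that Compose itself created for the project:

docker-compose down    # removes the project's containers and the networks Compose created
docker-compose up -d   # recreates the network with the updated subnet/address settings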
Network settings are not much different between versions 2 and 3, but here is a good link to the official doc (v3): https://docs.docker.com/compose/compose-file/compose-file-v3/#ipv4_address-ipv6_address
I'm trying to provide static IP addresses to containers. I understand that I have to create a custom network. I create it, and the bridge interface is up on the host machine (Ubuntu 16.x). The containers get IPs from this subnet, but not the static ones I provided.
Here is my docker-compose.yml:
version: '2'
services:
  mysql:
    container_name: mysql
    image: mysql:latest
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=root
    ports:
      - "3306:3306"
    networks:
      - vpcbr
  apigw-tomcat:
    container_name: apigw-tomcat
    build: tomcat/.
    ports:
      - "8080:8080"
      - "8009:8009"
    networks:
      - vpcbr
    depends_on:
      - mysql
networks:
  vpcbr:
    driver: bridge
    ipam:
      config:
        - subnet: 10.5.0.0/16
          gateway: 10.5.0.1
          aux_addresses:
            mysql: 10.5.0.5
            apigw-tomcat: 10.5.0.6
The containers get 10.5.0.2 and 10.5.0.3, instead of 5 and 6.
Note that I don't recommend a fixed IP for containers in Docker unless you're doing something that allows routing from outside to the inside of your container network (e.g. macvlan). DNS is already there for service discovery inside of the container network and supports container scaling. And outside the container network, you should use exposed ports on the host. With that disclaimer, here's the compose file you want:
version: '2'
services:
  mysql:
    container_name: mysql
    image: mysql:latest
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=root
    ports:
      - "3306:3306"
    networks:
      vpcbr:
        ipv4_address: 10.5.0.5
  apigw-tomcat:
    container_name: apigw-tomcat
    build: tomcat/.
    ports:
      - "8080:8080"
      - "8009:8009"
    networks:
      vpcbr:
        ipv4_address: 10.5.0.6
    depends_on:
      - mysql
networks:
  vpcbr:
    driver: bridge
    ipam:
      config:
        - subnet: 10.5.0.0/16
          gateway: 10.5.0.1
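Following that advice, the tomcat application could also reach the database by service name instead of 10.5.0.5; a sketch of such a connection string (the database name appdb is a placeholder, not taken from the question):

jdbc:mysql://mysql:3306/appdb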
I was facing some difficulties with environment variables that have custom names (ones that don't follow the container-name/port convention, namely KAPACITOR_BASE_URL and KAPACITOR_ALERTS_ENDPOINT). If we give the service name in this case, it doesn't resolve the IP, e.g.
KAPACITOR_BASE_URL: http://kapacitor:9092
In the above, http://kapacitor:9092 would not resolve to http://172.20.0.2:9092.
I resolved the static IP issues using a subnet configuration.
version: "3.3"
networks:
frontend:
ipam:
config:
- subnet: 172.20.0.0/24
services:
db:
image: postgres:9.4.4
networks:
frontend:
ipv4_address: 172.20.0.5
ports:
- "5432:5432"
volumes:
- postgres_data:/var/lib/postgresql/data
redis:
image: redis:latest
networks:
frontend:
ipv4_address: 172.20.0.6
ports:
- "6379"
influxdb:
image: influxdb:latest
ports:
- "8086:8086"
- "8083:8083"
volumes:
- ../influxdb/influxdb.conf:/etc/influxdb/influxdb.conf
- ../influxdb/inxdb:/var/lib/influxdb
networks:
frontend:
ipv4_address: 172.20.0.4
environment:
INFLUXDB_HTTP_AUTH_ENABLED: "false"
INFLUXDB_ADMIN_ENABLED: "true"
INFLUXDB_USERNAME: "db_username"
INFLUXDB_PASSWORD: "12345678"
INFLUXDB_DB: db_customers
kapacitor:
image: kapacitor:latest
ports:
- "9092:9092"
networks:
frontend:
ipv4_address: 172.20.0.2
depends_on:
- influxdb
volumes:
- ../kapacitor/kapacitor.conf:/etc/kapacitor/kapacitor.conf
- ../kapacitor/kapdb:/var/lib/kapacitor
environment:
KAPACITOR_INFLUXDB_0_URLS_0: http://influxdb:8086
web:
build: .
environment:
RAILS_ENV: $RAILS_ENV
command: bundle exec rails s -b 0.0.0.0
ports:
- "3000:3000"
networks:
frontend:
ipv4_address: 172.20.0.3
links:
- db
- kapacitor
depends_on:
- db
volumes:
- .:/var/app/current
environment:
DATABASE_URL: postgres://postgres#db
DATABASE_USERNAME: postgres
DATABASE_PASSWORD: postgres
INFLUX_URL: http://influxdb:8086
INFLUX_USER: db_username
INFLUX_PWD: 12345678
KAPACITOR_BASE_URL: http://172.20.0.2:9092
KAPACITOR_ALERTS_ENDPOINT: http://172.20.0.3:3000
volumes:
postgres_data:
If you never see the static IP address being set, it could be because you are using "docker compose up". Try using "docker-compose up".
When I use "docker-compose up" (with the hyphen) I now see the static IPs assigned.
networks:
  hfnet:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 192.168.55.0/24
          gateway: 192.168.55.1
services:
  web:
    image: 'mycompany/webserver:latest'
    hostname: www
    domainname: mycompany.com
    stdin_open: true # docker run -i
    tty: true # docker run -t
    networks:
      hfnet:
        ipv4_address: 192.168.55.10
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - '../honeyfund:/var/www/html'
I wasted a lot of time figuring that one out. :(
I realized that the more convenient and meaningful way is to give the container a container name.
You can then use that name as the host within the same Docker network.
This helped me because the Docker containers had changing IPs, and this way I can communicate with another container through a static name that I can use in config files.
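A minimal sketch of that approach (the names my_database and DB_HOST are placeholders, not taken from any of the files above):

services:
  db:
    image: mysql:5.7
    container_name: my_database   # reachable as "my_database" by containers on the same network
  app:
    image: nginx:latest
    environment:
      DB_HOST: my_database        # a fixed name to use in config files instead of a changing IP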