Multiple virtual hosts with different env files in docker compose

I'm using docker-compose, and I have a dev server with a lot of virtual hosts on Nginx+PHP-FPM. At the moment a single nginx container handles multiple virtual hosts:
version: '3'
services:
  nginx-proxy:
    image: nginx:1.17.4-alpine
    container_name: nginx-proxy
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - conf:/etc/nginx/conf.d
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - certs:/etc/nginx/certs:ro
    labels:
      - 'com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true'
    restart: always
  dockergen:
    [...]
  letsencrypt:
    [...]
  nginx:
    image: nginx:1.17.4-alpine
    restart: always
    container_name: nginx
    environment:
      VIRTUAL_HOST: proj1.site.com, proj2.example.com
      LETSENCRYPT_HOST: proj1.site.com, proj2.example.com
      LETSENCRYPT_EMAIL: tech@example.com
    volumes:
      - './proj1:/proj1'
      - './proj2:/proj2'
      - './site.conf:/etc/nginx/conf.d/site.conf'
  php:
    build:
      context: ./php
    container_name: php
    volumes:
      - './proj1:/proj1'
      - './proj2:/proj2'
    restart: always
volumes:
  conf:
  vhost:
  html:
  certs:
networks:
  default:
    external:
      name: nginx-proxy
Now I'd like to separate the virtual host containers, because I need to inject different env files. Should I replicate the nginx container (with a different name, of course) and the site.conf for each project? Am I doing this the right way? Could you please point me in the right direction? P.S. I've read that the extends keyword is deprecated in docker-compose v3, so I'd like to avoid it if possible.
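For clarity, the replication I'm considering would look something like this (an untested sketch; the per-project env and conf file names are just placeholders): one nginx service per project, each with its own env_file and vhost config, all discovered by the proxy via VIRTUAL_HOST.

version: '3'
services:
  nginx-proj1:
    image: nginx:1.17.4-alpine
    restart: always
    env_file: ./proj1.env            # hypothetical per-project env file
    environment:
      VIRTUAL_HOST: proj1.site.com
      LETSENCRYPT_HOST: proj1.site.com
    volumes:
      - './proj1:/proj1'
      - './proj1.conf:/etc/nginx/conf.d/default.conf'
  nginx-proj2:
    image: nginx:1.17.4-alpine
    restart: always
    env_file: ./proj2.env            # hypothetical per-project env file
    environment:
      VIRTUAL_HOST: proj2.example.com
      LETSENCRYPT_HOST: proj2.example.com
    volumes:
      - './proj2:/proj2'
      - './proj2.conf:/etc/nginx/conf.d/default.conf'

Each service then carries its own environment, at the cost of one duplicated service definition and one conf file per project.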

Related

Service "nginx-proxy" uses an undefined network "nginx-proxy"

I am trying to run a WordPress site inside a Docker container on an Ubuntu VPS using nginx-proxy.
I created the following docker-compose.yml file:
version: '3.4'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - 80:80
      - 443:443
    restart: always
    networks:
      - nginx-proxy
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - /etc/nginx/vhost.d:/etc/nginx/vhost.d:ro
      - /etc/certificates:/etc/nginx/certs
  wordpress:
    image: wordpress
    container_name: wordpress
    restart: always
    ports:
      - 8080:80
      - 5500:5500
    environment:
      - VIRTUAL_HOST=wordpress.domain.com
      - VIRTUAL_PORT=5500
      - WORDPRESS_DB_HOST=db
      - WORDPRESS_DB_USER=db_username
      - WORDPRESS_DB_PASSWORD=db_password
      - WORDPRESS_DB_NAME=db_name
    depends_on:
      - nginx-proxy
      - db
    networks:
      - nginx-proxy
    volumes:
      - wordpress:/var/www/html
    expose:
      - 5500
  db:
    image: mysql:latest
    container_name: db
    restart: always
    environment:
      MYSQL_DATABASE: db_name
      MYSQL_USER: db_username
      MYSQL_PASSWORD: db_password
      MYSQL_RANDOM_ROOT_PASSWORD: '1'
    depends_on:
      - nginx-proxy
    networks:
      - nginx-proxy
    volumes:
      - db:/var/lib/mysql
    ports:
      - 5600:5600
    expose:
      - 5600
volumes:
  wordpress:
  db:
Every time I run docker-compose up I get the following error:
Service "nginx-proxy" uses an undefined network "nginx-proxy"
I created the network beforehand using the following command:
docker network create nginx-proxy
and the output of docker network ls confirms it exists. Why do I get that error? How can I fix it?
Anything you name in a per-service networks: block needs to be declared in a top-level networks: block.
version: '3.4'
services:
  nginx-proxy:
    networks:
      - nginx-proxy   # <-- matches below
    volumes: { ... }
networks:
  nginx-proxy:        # <-- matches above
    # may be empty, but this block is required
If you don't declare any networks: at all, Compose creates a network named default and attaches containers to it. For almost all uses this is what you need. So it may be simpler to just delete the networks: blocks entirely.
version: '3.4'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    # No networks:; just use automatic [default]
(You similarly do not need to manually provide a container_name:, or to expose: ports at the Compose level.)
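Since the network here was already created manually with docker network create nginx-proxy, another option (a minimal sketch using standard Compose syntax, not part of the original answer) is to declare it as external, so Compose attaches to the existing network instead of creating its own:

version: '3.4'
services:
  nginx-proxy:
    networks:
      - nginx-proxy
networks:
  nginx-proxy:
    external: true   # reuses the network from "docker network create nginx-proxy"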

How to setup Nginx reverse proxy for multiple containers with each container having its own Nginx server

I have a VPS on which I want to deploy multiple web applications (I've already read posts about this, and they work well when the containers sit directly behind the proxy). I want each web application to have its own nginx that routes to its subdomains, plus some static websites related to it. I have two docker-compose files. My network looks like the following. [Image]
My NGINX reverse proxy should be responsible for routing the domains to the respective nginx containers. I'm not sure whether this is possible (that's why I'm asking for help), and I'm open to suggestions if someone can provide a better approach. Below is the configuration for my nginx proxy container and the web apps' docker-compose files. I've used jwilder/nginx-proxy.
NGINX PROXY Docker Compose File
version: "3.7"
services:
nginx-proxy:
image: jwilder/nginx-proxy:alpine
container_name: nginx-proxy
ports:
- "80:80"
- "443:443"
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
networks:
default:
external:
name: nginx-proxy
WEB APPLICATION 1 docker-compose.yml
version: "3.7"
networks:
default:
external:
name: nginx-proxy
yoda-network:
driver: bridge
services:
adminer:
container_name: ${APP_NAME}_adminer
image: adminer
depends_on:
- mysql
expose:
- 8080
environment:
VIRTUAL_HOST: yodaledger.com
VIRTUAL_PORT: 8080
networks:
- yoda-network
python:
container_name: ${APP_NAME}_python
image: python:3.6
command: bash -c "pip3 install -r requirements.txt && python manage.py runserver 0.0.0.0:8000"
volumes:
- ${APP_PATH}var/www/api.yodaledger.com/yodaledger_backend:/app
- ${APP_PATH}var/www/static.yodaledger.com:/app/static
depends_on:
- mysql
working_dir: /app
environment:
- PYTHONUNBUFFERED=1
- MYSQL_HOST=mysql
- MYSQL_USER=${MYSQL_USER}
- MYSQL_PASSWORD=${MYSQL_PASSWORD}
- MYSQL_DATABASE=${MYSQL_DATABASE}
networks:
- yoda-network
nginx:
container_name: ${APP_NAME}_nginx
image: nginx:alpine
expose:
- 443
- 80
volumes:
- ${APP_PATH}var/www:/var/www
- ${APP_PATH}var/log:/var/log/nginx
- ${APP_PATH}var/ssl:/var/ssl
- ${APP_PATH}etc/nginx:/etc/nginx
- /tmp/${APP_NAME}/nginx:/tmp
depends_on:
- python
environment:
VIRTUAL_PORT: 443
VIRTUAL_HOST: yodaledger.com,api.yodaledger.com,app.yodaledger.com,static.yodaledger.com
networks:
- yoda-network
- default
mysql:
container_name: ${APP_NAME}_mysql
image: mariadb:latest
#ports:
# - "3306:3306"
volumes:
- ${APP_PATH}var/mysql:/var/lib/mysql
- ${APP_PATH}etc/mysql:/etc/mysql
environment:
- MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
- MYSQL_DATABASE=${MYSQL_DATABASE}
- MYSQL_USER=${MYSQL_USER}
- MYSQL_PASSWORD=${MYSQL_PASSWORD}
- TZ=Europe/Paris
networks:
- yoda-network
WEB APPLICATION 2
It looks the same as web application 1.
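For reference, the core wiring in the files above reduces to the following (a condensed sketch, not a verified answer; whether the proxy can forward to the inner nginx on port 443 also depends on the proxy's TLS handling): the app's own nginx joins the shared external nginx-proxy network and advertises its domains through VIRTUAL_HOST, which is what the proxy generates routes from.

version: "3.7"
services:
  nginx:                      # the app's own nginx
    image: nginx:alpine
    environment:
      VIRTUAL_HOST: yodaledger.com,api.yodaledger.com
    networks:
      - yoda-network          # app-internal traffic
      - default               # the shared proxy network
networks:
  default:
    external:
      name: nginx-proxy
  yoda-network:
    driver: bridge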

Isolate containers on the jwilder/nginx-proxy network

I'm using jwilder/nginx-proxy to host multiple (web)apps from a single server. This works great, except that all services can communicate with each other, because they are all on the same network, which is required for the proxy to work.
Proxy docker-compose.yaml
version: "3"
services:
nginx-proxy:
image: jwilder/nginx-proxy:alpine
container_name: nginx-proxy
labels:
- "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy"
ports:
- "80:80"
- "443:443"
volumes:
- ./data/certs:/etc/nginx/certs:ro
- ./data/nginx/vhost.d:/etc/nginx/vhost.d
- ./data/share/nginx/html:/usr/share/nginx/html
- /var/run/docker.sock:/tmp/docker.sock:ro
restart: always
letsencrypt-proxy:
image: jrcs/letsencrypt-nginx-proxy-companion
container_name: letsencrypt-proxy
depends_on:
- nginx-proxy
volumes:
- ./data/nginx/vhost.d:/etc/nginx/vhost.d
- ./data/share/nginx/html:/usr/share/nginx/html
- ./data/certs:/etc/nginx/certs:rw
- /var/run/docker.sock:/var/run/docker.sock:ro
restart: always
networks:
default:
external:
name: nginx-proxy
App 1 docker-compose.yaml
version: "3"
services:
app:
image: nginx:latest
depends_on:
- db
- cache
expose:
- 80
volumes:
- ./application:/var/www/html
restart: always
working_dir: /var/www/html
environment:
VIRTUAL_HOST: app1.example.com
LETSENCRYPT_HOST: app1.example.com
LETSENCRYPT_EMAIL: user#example.com
cache:
image: redis:alpine
restart: always
volumes:
- cachedata:/data
db:
image: mysql:5.7
restart: always
environment:
MYSQL_ROOT_PASSWORD: rootpasswd
MYSQL_DATABASE: database_name
MYSQL_USER: database_user
MYSQL_PASSWORD: database_passwd
volumes:
- dbdata:/var/lib/mysql
networks:
default:
external:
name: nginx-proxy
volumes:
dbdata:
driver: local
cachedata:
driver: local
App 2 docker-compose.yaml
version: "3"
services:
app:
image: nginx:latest
depends_on:
- db
- cache
expose:
- 80
volumes:
- ./application:/var/www/html
restart: always
working_dir: /var/www/html
environment:
VIRTUAL_HOST: app2.example.com
LETSENCRYPT_HOST: app2.example.com
LETSENCRYPT_EMAIL: user#example.com
cache:
image: redis:alpine
restart: always
volumes:
- cachedata:/data
db:
image: mysql:5.7
restart: always
environment:
MYSQL_ROOT_PASSWORD: rootpasswd
MYSQL_DATABASE: database_name
MYSQL_USER: database_user
MYSQL_PASSWORD: database_passwd
volumes:
- dbdata:/var/lib/mysql
networks:
default:
external:
name: nginx-proxy
volumes:
dbdata:
driver: local
cachedata:
driver: local
With this setup both applications will use the db and cache instances of App 1. The only way to solve that is to give those services unique names like app_1_db and app_2_db. But then App 1 is still able to connect to app_2_db, which I would like to prevent.
Is there a way to isolate all services within their docker-compose.yaml file and still use the nginx proxy?
Docker version 18.09.0, build 4d60db4
docker-compose version 1.21.2, build a133471
You can connect only the app (nginx) container from each of your apps to the nginx-proxy network. The only edit needed should be in each app's docker-compose file:
version: '3'
services:
  app:
    networks:
      - default
      - nginx-proxy
networks:
  nginx-proxy:
    external: true
That way the app service will be connected to the nginx-proxy and default networks at the same time. (If you omit the networks key, a service is always connected to the default network.)
Resolving service names to container IPs then works as expected, as long as no container can see (across all the networks it's connected to) two containers with the same service name. (For example, if both apps above joined the shared network with a service named app, a lookup of app from the proxy could resolve to either one.)
If you want even more isolation, you can create a separate proxy network for every app.
So in your nginx-proxy docker-compose file you will have:
version: "3"
services:
nginx-proxy:
networks:
- default
- nginx-proxy_app1
- nginx-proxy_app2
# letsencrypt-proxy service doesn't have to have networks key
networks:
nginx-proxy_app1:
external: true
nginx-proxy_app2:
external: true
and in your apps:
version: '3'
services:
  app:
    networks:
      - default
      - nginx-proxy_app1
networks:
  nginx-proxy_app1:
    external: true
and
version: '3'
services:
  app:
    networks:
      - default
      - nginx-proxy_app2
networks:
  nginx-proxy_app2:
    external: true
That way every "proxy" network contains only one app container (assuming you are not using docker-compose scaling) plus the nginx-proxy container.
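One practical caveat (standard Docker behavior, not stated in the original answer): Compose does not create networks marked external, so with this layout each per-app proxy network must exist before docker-compose up:

docker network create nginx-proxy_app1
docker network create nginx-proxy_app2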
More reading:
https://docs.docker.com/compose/networking/
https://docs.docker.com/network/overlay/#operations-for-standalone-containers-on-overlay-networks

Docker Compose | Virtual Hosts

What's wrong with my code? Thanks in advance!
I'm trying to set up a virtual host for my docker container.
On localhost:8000 it works perfectly, but when I try to access it through http://borgesmelo.local/ I get ERR_NAME_NOT_RESOLVED. What could be missing?
This is my docker-compose.yml:
version: '3.3'
services:
  borgesmelo_db:
    image: mariadb:latest
    container_name: borgesmelo_db
    restart: always
    volumes:
      - ./mariadb/:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: My#159#Sql
      MYSQL_PASSWORD: My#159#Sql
  borgesmelo_ws:
    image: richarvey/nginx-php-fpm:latest
    container_name: borgesmelo_ws
    restart: always
    volumes:
      - ./public/:/var/www/html
    ports:
      - "8000:80"
  borgesmelo_wp:
    image: wordpress:latest
    container_name: borgesmelo_wp
    volumes:
      - ./public/:/var/www/html
    restart: always
    environment:
      VIRTUAL_HOST: borgesmelo.local
      WORDPRESS_DB_HOST: borgesmelo_db:3306
      WORDPRESS_DB_PASSWORD: My#159#Sql
    depends_on:
      - borgesmelo_db
      - borgesmelo_ws
  borgesmelo_phpmyadmin:
    image: phpmyadmin/phpmyadmin:latest
    container_name: borgesmelo_phpmyadmin
    links:
      - borgesmelo_db
    ports:
      - "8001:80"
    environment:
      - PMA_ARBITRARY=1
  borgesmelo_vh:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "8002:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
networks:
  default:
    external:
      name: nginx-proxy
This is my hosts file (/etc/hosts) [macOS]
#DOCKER
127.0.0.1:8000 borgesmelo.local
The hosts file doesn't support ports; it is for name lookup only. So you would have to set your hosts file to:
127.0.0.1 borgesmelo.local
Then access your application with http://borgesmelo.local:8000.
If you are listening on port 8000 because you already have something else on port 80, then consider using nginx as a reverse proxy, so you can route to different applications based on the server_name. That way, you can access multiple applications through port 80. If you're dealing with docker containers, consider looking into Traefik as a reverse proxy.
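To illustrate that suggestion, here is a minimal sketch of name-based routing in plain nginx (the hostnames and ports only mirror the compose file above; pma.borgesmelo.local is a made-up name):

server {
    listen 80;
    server_name borgesmelo.local;
    location / {
        proxy_pass http://127.0.0.1:8000;   # borgesmelo_ws, published on host port 8000
        proxy_set_header Host $host;
    }
}

server {
    listen 80;
    server_name pma.borgesmelo.local;       # hypothetical hostname for phpMyAdmin
    location / {
        proxy_pass http://127.0.0.1:8001;   # borgesmelo_phpmyadmin, published on host port 8001
        proxy_set_header Host $host;
    }
}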

Nginx reverse proxy: Set correct ports using jwilder/nginx-proxy for gitlab container

I need to use an nginx reverse proxy, so I'm using jwilder/nginx-proxy.
I'm also running GitLab as a docker container.
I came up with the following docker-compose file, but accessing ci.server.com gives me a 502 Bad Gateway error.
I need some help setting up the correct ports for this container.
version: '3.3'
services:
  nginx:
    container_name: 'nginx'
    image: jwilder/nginx-proxy:alpine
    restart: 'always'
    ports:
      - 80:80
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  gitlab:
    container_name: gitlab
    image: 'gitlab/gitlab-ce:10.0.2-ce.0'
    restart: always
    hostname: 'ci.server.com'
    ports:
      - '50022:22'
    volumes:
      - '/opt/gitlab/config:/etc/gitlab'
      - '/opt/gitlab/logs:/var/log/gitlab'
      - '/opt/gitlab/data:/var/opt/gitlab'
      - '/opt/gitlab/secret:/secret/gitlab/backups'
      - '/etc/letsencrypt:/etc/letsencrypt'
    environment:
      VIRTUAL_HOST: ci.server.com
      VIRTUAL_PORT: 50022
Before I switched to the nginx reverse proxy I used the docker-compose setup below, which was working, and I don't see the difference or the mistake I made in 'converting' it.
old:
version: '3.3'
services:
  nginx:
    container_name: 'nginx'
    image: 'nginx:1.13.5'
    restart: 'always'
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - '/opt/nginx/conf.d:/etc/nginx/conf.d:ro'
      - '/opt/nginx/conf/nginx.conf:/etc/nginx/nginx.conf:ro'
      - '/etc/letsencrypt:/etc/letsencrypt'
    links:
      - 'gitlab'
  gitlab:
    container_name: gitlab
    image: 'gitlab/gitlab-ce:10.0.2-ce.0'
    restart: always
    hostname: 'ci.server.com'
    ports:
      - '50022:22'
    volumes:
      - '/opt/gitlab/config:/etc/gitlab'
      - '/opt/gitlab/logs:/var/log/gitlab'
      - '/opt/gitlab/data:/var/opt/gitlab'
      - '/opt/gitlab/secret:/secret/gitlab/backups'
      - '/etc/letsencrypt:/etc/letsencrypt'
You should set VIRTUAL_PORT: 80 in your environment.
With VIRTUAL_PORT: 50022, the proxy is actually trying to forward HTTP traffic to the SSH port.
To use SSL with the jwilder proxy you can look here.
For example, I use this:
version: '3.3'
services:
  gitlab:
    container_name: gitlab
    image: 'gitlab/gitlab-ce:10.0.2-ce.0'
    restart: always
    hostname: 'ci.server.com'
    ports:
      - '50022:22'
    volumes:
      - '/opt/gitlab/config:/etc/gitlab'
      - '/opt/gitlab/logs:/var/log/gitlab'
      - '/opt/gitlab/data:/var/opt/gitlab'
      - '/opt/gitlab/secret:/secret/gitlab/backups'
      - '/etc/letsencrypt:/etc/letsencrypt'
    environment:
      - VIRTUAL_HOST=ci.server.com
      - VIRTUAL_PORT=80
      - LETSENCRYPT_HOST=ci.server.com
      - LETSENCRYPT_EMAIL=youremail
