I have two Apache containers connected to the same bridge network. The first Apache is 172.20.10.2, published on port 8080 (80 internally); the second is 172.20.10.6, published on port 9999 (80 internally).
The first Apache is configured with two virtual hosts on port 80.
The first vhost serves mydomain.com and works correctly.
The second vhost serves subdomain.mydomain.com and proxies to the second Apache server.
This proxying doesn't work, and in the logs I get these errors:
"GET /favicon.ico HTTP/1.1" 502 360
[proxy:error] [pid 43:tid 3028272160] (111)Connection refused: AH00957: http: attempt to connect to 172.20.10.6:9999 (172.20.10.6) failed
[proxy_http:error] [pid 43:tid 3028272160] [client Client_IP:PORT] AH01114: HTTP: failed to make connection to backend: 172.20.10.6
"GET / HTTP/1.1" 503 299
[proxy:error] [pid 8:tid 3011486752] [client Client_IP:PORT] AH00898: DNS lookup failure for: 172.20.10.6:9999favicon.ico returned by /favicon.ico, referer: http://subdomain.mydomain.com/
"GET /favicon.ico HTTP/1.1" 502 360
docker-compose.yml
version: "3.8"
volumes:
  httpd_all:
  httpd_all_2:
networks:
  frontend_web:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.20.10.0/29
services:
  httpd:
    container_name: httpd
    image: httpd:latest
    hostname: srv_www01
    ports:
      - 8080:80/tcp
      - 8043:443/tcp
    volumes:
      - httpd_all:/usr/local/apache2/
    networks:
      frontend_web:
        ipv4_address: 172.20.10.2
    restart: unless-stopped
  httpd_2:
    container_name: httpd_2
    image: httpd:latest
    hostname: srv_www02
    ports:
      - 9999:80/tcp
      - 9998:443/tcp
    volumes:
      - httpd_all_2:/usr/local/apache2/
    networks:
      frontend_web:
        ipv4_address: 172.20.10.6
    restart: unless-stopped
The vhosts on the first Apache (172.20.10.2):
<VirtualHost *:80>
    ServerName mydomain.com
    ServerAlias mydomain.com
    DocumentRoot /usr/local/apache2/htdocs
    Alias /jasno "/usr/local/apache2/htdocs"
</VirtualHost>

<VirtualHost *:80>
    ServerName subdomain.mydomain.com
    ServerAlias www.subdomain.mydomain.com
    ProxyRequests Off
    ProxyPreserveHost On
    ProxyVia Full
    <Proxy *>
        Require all granted
    </Proxy>
    ProxyPass "/" "http://172.20.10.6:9999"
    ProxyPassReverse "/" "http://172.20.10.4:9999"
</VirtualHost>
Connections between containers ignore ports:. If the process inside the second container listens on ports 80 and 443, connections from other containers will only ever use those ports, even if ports: publishes them under different numbers on the host. Since port 80 is the default HTTP port, you can leave it out of your configuration entirely:
ProxyPass "/" "http://172.20.10.6"
You can simplify this setup even further. As described in Networking in Compose in the Docker documentation, Docker provides an internal DNS system, and each container is reachable from other containers in the same Compose file by its Compose service name. Instead of specifying the IP address manually, you can use that host name in the Apache setup:
ProxyPass "/" "http://httpd_2"
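One more detail worth checking: in the original vhost, the ProxyPass and ProxyPassReverse targets have no trailing slash (and even point at different IPs, 172.20.10.6 vs 172.20.10.4). The AH00898 "DNS lookup failure for: 172.20.10.6:9999favicon.ico" log line is the typical symptom of a missing trailing slash, since Apache appends the request path directly onto the target. A corrected second vhost, assuming the Compose service name httpd_2, might look like:

```apache
<VirtualHost *:80>
    ServerName subdomain.mydomain.com
    ServerAlias www.subdomain.mydomain.com
    ProxyRequests Off
    ProxyPreserveHost On
    # Trailing slashes keep Apache from gluing the request path onto
    # the host name; both directives should point at the same target.
    ProxyPass "/" "http://httpd_2/"
    ProxyPassReverse "/" "http://httpd_2/"
</VirtualHost>
```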
Once you've done that, you can simplify the Compose setup considerably. It's almost always safe to let Docker pick IP addresses on its own rather than assigning them manually. Compose also assigns container names by itself, creates a network named default for you, and the hostname: setting usually has no visible effect. You should be able to trim this down to:
version: "3.8"
volumes:
  httpd_all:
  httpd_all_2:
services:
  httpd:
    image: httpd:latest
    ports:
      - 8080:80/tcp
      - 8043:443/tcp
    volumes:
      - httpd_all:/usr/local/apache2/
    restart: unless-stopped
  httpd_2:
    image: httpd:latest
    # ports: only if the service needs to be
    #   - 9999:80/tcp   accessed from outside Docker; not used
    #   - 9998:443/tcp  for connections between containers
    volumes:
      - httpd_all_2:/usr/local/apache2/
    restart: unless-stopped
I want to use Varnish between my SSL Apache2 server and openmaptiles, so I changed the docker-compose.yml like this:
version: "3"
volumes:
  pgdata:
networks:
  postgres:
    driver: bridge
services:
  postgres:
    image: "${POSTGIS_IMAGE:-openmaptiles/postgis}:${TOOLS_VERSION}"
    # Use "command: postgres -c jit=off" for PostgreSQL 11+ because of slow large MVT query processing
    # Use "shm_size: 512m" if you want to prevent a possible 'No space left on device' during 'make generate-tiles-pg'
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      - postgres
    ports:
      - "${PGPORT:-5432}:${PGPORT:-5432}"
    env_file: .env
    environment:
      # postgres container uses old variable names
      POSTGRES_DB: ${PGDATABASE:-openmaptiles}
      POSTGRES_USER: ${PGUSER:-openmaptiles}
      POSTGRES_PASSWORD: ${PGPASSWORD:-openmaptiles}
      PGPORT: ${PGPORT:-5432}
  import-data:
    image: "openmaptiles/import-data:${TOOLS_VERSION}"
    env_file: .env
    networks:
      - postgres
  openmaptiles-tools: &openmaptiles-tools
    image: "openmaptiles/openmaptiles-tools:${TOOLS_VERSION}"
    env_file: .env
    environment:
      # Must match the version of this file (first line)
      # download-osm will use it when generating a compose file
      MAKE_DC_VERSION: "3"
      # Allow DIFF_MODE, MIN_ZOOM, and MAX_ZOOM to be overwritten from shell
      DIFF_MODE: ${DIFF_MODE}
      MIN_ZOOM: ${MIN_ZOOM}
      MAX_ZOOM: ${MAX_ZOOM}
      # Provide BBOX from *.bbox file if it exists, else from .env
      BBOX: ${BBOX}
      # Imposm configuration file describes how to load updates when enabled
      IMPOSM_CONFIG_FILE: ${IMPOSM_CONFIG_FILE}
      # Control import-sql processes
      MAX_PARALLEL_PSQL: ${MAX_PARALLEL_PSQL}
      PGDATABASE: ${PGDATABASE:-openmaptiles}
      PGUSER: ${PGUSER:-openmaptiles}
      PGPASSWORD: ${PGPASSWORD:-openmaptiles}
      PGPORT: ${PGPORT:-5432}
      MBTILES_FILE: ${MBTILES_FILE}
    networks:
      - postgres
    volumes:
      - .:/tileset
      - ./data:/import
      - ./data:/export
      - ./build/sql:/sql
      - ./build:/mapping
      - ./cache:/cache
      - ./style:/style
  update-osm:
    <<: *openmaptiles-tools
    command: import-update
  generate-changed-vectortiles:
    image: "openmaptiles/generate-vectortiles:${TOOLS_VERSION}"
    command: ./export-list.sh
    volumes:
      - ./data:/export
      - ./build/openmaptiles.tm2source:/tm2source
    networks:
      - postgres
    env_file: .env
    environment:
      MBTILES_NAME: ${MBTILES_FILE}
      # Control tilelive-copy threads
      COPY_CONCURRENCY: ${COPY_CONCURRENCY}
      PGDATABASE: ${PGDATABASE:-openmaptiles}
      PGUSER: ${PGUSER:-openmaptiles}
      PGPASSWORD: ${PGPASSWORD:-openmaptiles}
      PGPORT: ${PGPORT:-5432}
  generate-vectortiles:
    image: "openmaptiles/generate-vectortiles:${TOOLS_VERSION}"
    volumes:
      - ./data:/export
      - ./build/openmaptiles.tm2source:/tm2source
    networks:
      - postgres
    env_file: .env
    environment:
      MBTILES_NAME: ${MBTILES_FILE}
      BBOX: ${BBOX}
      MIN_ZOOM: ${MIN_ZOOM}
      MAX_ZOOM: ${MAX_ZOOM}
      # Control tilelive-copy threads
      COPY_CONCURRENCY: ${COPY_CONCURRENCY}
      PGDATABASE: ${PGDATABASE:-openmaptiles}
      PGUSER: ${PGUSER:-openmaptiles}
      PGPASSWORD: ${PGPASSWORD:-openmaptiles}
      PGPORT: ${PGPORT:-5432}
  postserve:
    image: "openmaptiles/openmaptiles-tools:${TOOLS_VERSION}"
    command: "postserve ${TILESET_FILE} --verbose --serve=${OMT_HOST:-http://localhost}:${PPORT:-8090}"
    env_file: .env
    environment:
      TILESET_FILE: ${TILESET_FILE}
    networks:
      - postgres
    #ports:
    #  - "${PPORT:-8090}:${PPORT:-8090}"
    volumes:
      - .:/tileset
  varnish:
    image: eeacms/varnish
    ports:
      - "6081:6081"
    depends_on:
      - postserve
    networks:
      - postgres
    environment:
      BACKENDS: "postserve"
      BACKENDS_PORT: "8090"
      BACKENDS_PROBE_INTERVAL: "60s"
      BACKENDS_PROBE_TIMEOUT: "10s"
      BACKENDS_PROBE_URL: "/data/openmaptiles/0/0/0.pbf"
      #DNS_ENABLED: "true"
  maputnik_editor:
    image: "maputnik/editor"
    ports:
      - "8088:8888"
  tileserver-gl:
    image: "maptiler/tileserver-gl:latest"
    command:
      - --port
      - "${TPORT:-8080}"
      - --config
      - "/style/config.json"
    ports:
      - "${TPORT:-8080}:${TPORT:-8080}"
    depends_on:
      - varnish
    volumes:
      - ./data:/data
      - ./style:/style
      - ./build:/build
And I changed my Apache config to use the Varnish port in the ProxyPass and ProxyPassReverse directives:
<VirtualHost *:80>
    ServerName tiles.example.com
    Protocols h2 h2c http/1.1
    ErrorDocument 404 /404.html

    # disable proxy for the /font-family sub-directory
    # must be placed on top of the other ProxyPass directive
    ProxyPass /font-family !
    Alias "/font-family" "/var/www/font-family"

    # HTTP proxy
    ProxyPass / http://localhost:6081/
    ProxyPassReverse / http://localhost:6081/
    ProxyPreserveHost On

    ErrorLog ${APACHE_LOG_DIR}/tileserver-gl.error.log
    CustomLog ${APACHE_LOG_DIR}/tileserver-gl.access.log combined

    RewriteEngine on
    RewriteCond %{SERVER_NAME} =tiles.example.com
    RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]
</VirtualHost>

<IfModule mod_ssl.c>
    SSLStaplingCache shmcb:/var/run/apache2/stapling_cache(128000)
    <VirtualHost *:443>
        ServerName tiles.example.com
        Protocols h2 h2c http/1.1
        ErrorDocument 404 /404.html

        # disable proxy for the /font-family sub-directory
        # must be placed on top of the other ProxyPass directive
        ProxyPass /font-family !
        Alias "/font-family" "/var/www/font-family"

        # HTTP proxy
        ProxyPass / http://localhost:6081/
        ProxyPassReverse / http://localhost:6081/
        ProxyPreserveHost On

        ErrorLog ${APACHE_LOG_DIR}/tileserver-gl.error.log
        CustomLog ${APACHE_LOG_DIR}/tileserver-gl.access.log combined

        SSLCertificateFile /etc/letsencrypt/live/tiles.example.com/fullchain.pem
        SSLCertificateKeyFile /etc/letsencrypt/live/tiles.example.com/privkey.pem
        Include /etc/letsencrypt/options-ssl-apache.conf
        Header always set Strict-Transport-Security "max-age=31536000"
        SSLUseStapling on
        Header always set Content-Security-Policy upgrade-insecure-requests
        RequestHeader set X-Forwarded-Host "tiles.example.com"
        RequestHeader set X-Forwarded-Proto "https"
    </VirtualHost>
</IfModule>
Then I reran docker-compose up -d.
But when I access the tiles, I get a 503 error:
503 Backend fetch failed
Any idea where the error in the configuration is?
Thanks
Based on https://github.com/openmaptiles/openmaptiles-tools and https://hub.docker.com/r/openmaptiles/openmaptiles-tools it doesn't seem like your postserve container that runs the openmaptiles/openmaptiles-tools image actually exposes any network ports for Varnish to connect to.
And while you are specifying a --serve parameter to this container, the Dockerfile for this image doesn't have an EXPOSE definition that opens up any ports.
Maybe you should mount the generated tiles into a web server using volumes and then connect Varnish to that web server.
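A rough compose sketch of that idea; the tiles-web service name, the nginx image choice, and the mounted path are illustrative assumptions, not something taken from the question:

```yaml
services:
  tiles-web:
    image: nginx:stable
    volumes:
      - ./data:/usr/share/nginx/html:ro   # serve the generated tiles statically
    networks:
      - postgres
  varnish:
    image: eeacms/varnish
    ports:
      - "6081:6081"
    environment:
      BACKENDS: "tiles-web"   # point Varnish at the static web server
      BACKENDS_PORT: "80"
    networks:
      - postgres
```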
Please use the official Varnish Docker image that is available on Docker Hub. This image is supported by Varnish Software and receives regular updates. See https://www.varnish-software.com/developers/tutorials/running-varnish-docker for a tutorial on how to use it.
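With the official image, the backend is typically declared in a VCL file mounted into the container rather than through BACKENDS environment variables. A minimal sketch, assuming the postserve service name and port 8090 from the question's compose file:

```vcl
vcl 4.1;

# Backend host/port assume the Compose service name "postserve"
# listening on 8090, as configured in the question.
backend default {
    .host = "postserve";
    .port = "8090";
}
```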
I have an Nginx web server running inside a Docker container. The problem is that all client IP addresses in the Nginx access.log are 172.19.0.1, the container's gateway. I would like the log to show the clients' real IP addresses.
I've tried a few solutions that I found, including
Using Nginx's realip module
set_real_ip_from 172.19.0.1;
real_ip_header X-Real-IP;
real_ip_recursive on;
which still logged the gateway IP.
Setting the container to host networking mode, which had no effect because I'm on macOS and host networking is only supported on Linux.
The docker-compose file I'm using looks like this:
version: "3"
services:
  nginx:
    container_name: nginx
    ports:
      - target: 80
        published: 80
      - target: 443
        published: 443
    volumes:
      # [my volumes]
    restart: unless-stopped
    image: nginx
Summary: two separate applications, both using docker-compose; how can I have http://app-1.test and http://app-2.test available at the same time?
Description:
I feel like I've missed something super simple. I have two php-fpm (via nginx) applications, both run by similar docker-compose setups, somewhat like:
# docker-compose.yaml
version: '3'
services:
  app:
    build:
      context: .
      dockerfile: docker/Dockerfile
    container_name: app_1
    tty: true
    depends_on:
      - db
      - dbtest
    working_dir: /var/www
    volumes:
      - ./:/var/www
  webserver:
    image: nginx:stable
    container_name: app_1_webserver
    restart: always
    ports:
      - "80:80"
    depends_on:
      - app
    volumes:
      - ./:/var/www
      - ./docker/app.conf:/etc/nginx/conf.d/default.conf
    links:
      - app
  # ...
In my /etc/hosts, I can add something like:
127.0.0.1 app-1.test
Now I can reach the app in the browser by going to app-1.test.
The second application has a similar setup, but of course it won't come up, because port 80 is already taken. I can of course change the port, but then the URL would be something like app-2.test:81 instead of app-2.test. What can I do to run a second application under a different local hostname? Or is using a different port the best way to go?
You can't. What you can do is add a "router" in front of your applications (a third container) that does the routing (proxy passing) based on the host name.
Apache and Nginx are often used for this kind of thing.
e.g. with apache server
https://httpd.apache.org/docs/2.4/howto/reverse_proxy.html
<VirtualHost *:80>
    ServerName app-1.test
    ProxyRequests Off
    ProxyPreserveHost On
    ProxyPass / http://image1:80/
    ProxyPassReverse / http://image1:80/
    ErrorLog /var/log/apache2/error.log
    LogLevel info
    CustomLog /var/log/apache2/access.log combined
</VirtualHost>

<VirtualHost *:80>
    ServerName app-2.test
    ProxyRequests Off
    ProxyPreserveHost On
    ProxyPass / http://image2:80/
    ProxyPassReverse / http://image2:80/
    ErrorLog /var/log/apache2/error.log
    LogLevel info
    CustomLog /var/log/apache2/access.log combined
</VirtualHost>
Now you can add both names for the same IP in your /etc/hosts file, and the server routes internally based on the provided host name (ServerName).
The http://image1:80/ references (and the like) should be changed to the Docker-internal DNS names, i.e. the service names defined in the docker-compose.yml.
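A minimal compose sketch of this router pattern; the service names (router, app1, app2) and images are illustrative assumptions:

```yaml
version: '3'
services:
  router:
    image: httpd:2.4          # holds the two <VirtualHost> blocks above
    ports:
      - "80:80"               # only the router publishes a host port
  app1:
    image: nginx:stable       # first application's web server
  app2:
    image: nginx:stable       # second application's web server
```

With this layout, the proxy targets become http://app1:80/ and http://app2:80/, since Compose service names resolve through Docker's internal DNS.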
I’m trying to install Nextcloud on my server with Docker using a Caddy reverse proxy. Caddy is working for other services so I will just copy the Caddyfile here.
There are 3 ways I tried accessing it on the Docker host machine:
localhost:8080 - working
IP of host machine - it says it is not a trusted domain
domain - 502 Bad Gateway
Please help; I've already tried multiple configurations but cannot get it working.
Caddyfile:
{domain} {
    tls {email}
    tls {
        dns godaddy
    }

    # Enable basic compression
    gzip

    # Service discovery via well-known
    redir /.well-known/carddav /remote.php/carddav 301
    redir /.well-known/caldav /remote.php/caldav 301

    proxy / http://nextcloud:8080 {
        # X-Forwarded-For, etc...
        transparent

        # Nextcloud best practices and security
        header_downstream Strict-Transport-Security "max-age=15552000;"
        header_downstream Referrer-Policy "strict-origin-when-cross-origin"
        header_downstream X-XSS-Protection "1; mode=block"
        header_downstream X-Content-Type-Options "nosniff"
        header_downstream X-Frame-Options "SAMEORIGIN"
    }
}
docker-compose file:
version: '3.7'
services:
  db:
    container_name: nextcloud-db
    image: mariadb
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    restart: always
    volumes:
      - db:/var/lib/mysql
    env_file:
      - ./nextcloud/config/db.env
    environment:
      - MYSQL_ROOT_PASSWORD={pw}
    networks:
      - db
  app:
    container_name: nextcloud
    image: nextcloud
    ports:
      - 8080:80
    volumes:
      - nextcloud:/var/www/html
    env_file:
      - ./nextcloud/config/db.env
    environment:
      - MYSQL_HOST=db
      - NEXTCLOUD_TRUSTED_DOMAINS="localhost {host ip} {domain}"
    restart: always
    networks:
      - proxy
      - db
    depends_on:
      - db
volumes:
  db:
  nextcloud:
networks:
  db:
Figured it out.
In the Caddyfile, the Nextcloud port should be 80 instead of 8080, since that is the port inside the Docker network.
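In other words, keeping the Caddy v1 syntax from the question, the proxy directive targets the container's internal port:

```caddyfile
proxy / http://nextcloud:80 {
    transparent
}
```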
When using Traefik with docker-compose, I would like to get the client container's IP in order to perform IP-based filtering, but instead I get the Docker network gateway IP.
Here are the results of a curl request from the curl-client container:
docker-compose exec curl-client curl https://whoami.domain.name
Hostname: 608f3dcaf7d9
IP: 127.0.0.1
IP: 172.18.0.2
GET / HTTP/1.1
Host: whoami.domain.name
User-Agent: curl/7.58.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 172.18.0.1
X-Forwarded-Host: whoami.domain.name
X-Forwarded-Port: 443
X-Forwarded-Proto: https
X-Forwarded-Server: 88756553599b
X-Real-Ip: 172.18.0.1
Here, 172.18.0.1 is the gateway for the traefik_net network. Instead, I would expect to see 172.18.0.9 in the X-Forwarded-For field, as it is the IP of the curl-client container:
docker-compose exec curl-client cat /etc/hosts
172.18.0.9 34f7b6e5472f
I've also tried using the 'traefik.frontend.whiteList.useXForwardedFor=true' option without success.
traefik.toml
logLevel = "ERROR"
defaultEntryPoints = ["http", "https"]
[entryPoints]
[entryPoints.dashboard]
address = ":8080"
[entryPoints.http]
address = ":80"
[entryPoints.http.redirect]
entryPoint = "https"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[api]
entrypoint="dashboard"
[acme]
email = "something#aaa.com"
storage = "acme.json"
entryPoint = "https"
[acme.dnsChallenge]
provider = "ovh"
delayBeforeCheck = 0
[[acme.domains]]
main = "*.domain.name"
[docker]
domain = "domain.name"
watch = true
network = "traefik_net"
docker-compose.yml
version: '3'
services:
  traefik_proxy:
    image: traefik:alpine
    container_name: traefik
    networks:
      - traefik_net
    ports:
      - 80:80
      - 443:443
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./traefik.toml:/traefik.toml
      - ./acme.json:/acme.json
    restart: unless-stopped
    environment:
      - OVH_ENDPOINT=ovh-eu
      - OVH_APPLICATION_KEY=secretsecret
      - OVH_APPLICATION_SECRET=secretsecret
      - OVH_CONSUMER_KEY=secretsecret
    labels:
      - 'traefik.frontend.rule=Host:traefik.domain.name'
      - 'traefik.port=8080'
      - 'traefik.backend=traefik'
  whoami:
    image: containous/whoami
    container_name: whoami
    networks:
      - traefik_net
    labels:
      - 'traefik.frontend.rule=Host:whoami.domain.name'
  curl-client:
    image: ubuntu
    networks:
      - traefik_net
    command: sleep infinity
networks:
  traefik_net:
    external: true
Edit: The domain name is resolved using the following dnsmasq.conf:
domain-needed
expand-hosts
bogus-priv
interface=eno1
domain=domain.name
cache-size=1024
listen-address=127.0.0.1
bind-interfaces
dhcp-range=10.0.10.10,10.0.10.100,24h
dhcp-option=3,10.0.10.1
dhcp-authoritative
server=208.67.222.222
server=208.67.220.220
address=/domain.name/10.0.10.3
After some investigation, it seems that Traefik is not the problem here; the inability to see the container IP is due to the way Docker manages its internal network (see the following comments: https://github.com/containous/traefik/issues/4352 and https://github.com/docker/for-mac/issues/180).
I was able to achieve my goal of whitelisting internal connections by running my OpenVPN container with host networking (network_mode: host); this way the client is assigned an IP by the system directly.
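A minimal sketch of that workaround; the service name and image are placeholders, not taken from the question:

```yaml
services:
  openvpn:
    image: my/openvpn-image   # placeholder image name
    network_mode: host        # share the host's network stack, so VPN
                              # clients get system-assigned IPs directly
    restart: unless-stopped
```

Note that network_mode: host cannot be combined with networks: or ports: mappings for the same service.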
Setting the ports configuration of the docker-compose file as follows should work:
ports:
  - target: 80
    published: 80
    mode: host
  - target: 443
    published: 443
    mode: host
This ports capability is only available in docker-compose file format >= 3.2.