Docker: pointing public port 80 to the Docker container IP

I am trying to point a domain name at my Docker container. In our router we have set port 80 to forward to 192.168.1.101 (the server Docker is running on).
The container IP addresses, however, show up as
"nextjs-docker-pm2-nginx-master-nextjs-1:172.18.0.2/16"
"nextjs-docker-pm2-nginx-master-nginx-1:172.18.0.4/16"
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cabb3c8fe03c nextjs-docker-pm2-nginx-master_nginx "/docker-entrypoint.…" 16 minutes ago Up 16 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp nextjs-docker-pm2-nginx-master-nginx-1
ed27e0fd2f24 nextjs-docker-pm2-nginx-master_nextjs "docker-entrypoint.s…" 16 minutes ago Up 16 minutes 3000/tcp nextjs-docker-pm2-nginx-master-nextjs-1
and my default.conf is
# Cache zone
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=STATIC:10m inactive=7d use_temp_path=off;
upstream nextjs {
server nextjs:3000;
}
server {
listen 80;
server_name local.DOMAIN.com.au;
server_tokens off;
gzip on;
gzip_proxied any;
gzip_comp_level 4;
gzip_types text/css application/javascript image/svg+xml;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
location /.well-known/acme-challenge/ {
root /var/www/certbot;
}
# BUILT ASSETS (E.G. JS BUNDLES)
# Browser cache - max cache headers from Next.js as build id in url
# Server cache - valid forever (cleared after cache "inactive" period)
location /_next/static {
proxy_cache STATIC;
proxy_pass http://nextjs;
}
# STATIC ASSETS (E.G. IMAGES)
# Browser cache - "no-cache" headers from Next.js as no build id in url
# Server cache - refresh regularly in case of changes
location /static {
proxy_cache STATIC;
proxy_ignore_headers Cache-Control;
proxy_cache_valid 60m;
proxy_pass http://nextjs;
}
# DYNAMIC ASSETS - NO CACHE
location / {
proxy_pass http://nextjs;
proxy_buffering off;
proxy_set_header X-Real-IP $remote_addr;
}
}
server {
listen 443 default_server ssl http2;
server_name local.DOMAIN.com.au;
ssl_certificate /etc/nginx/ssl/live/local.domain.com.au/fullchain.pem;
ssl_certificate_key /etc/nginx/ssl/live/local.domain.com.au/privkey.pem;
location /.well-known/acme-challenge/ {
root /var/www/certbot;
}
# BUILT ASSETS (E.G. JS BUNDLES)
# Browser cache - max cache headers from Next.js as build id in url
# Server cache - valid forever (cleared after cache "inactive" period)
location /_next/static {
proxy_cache STATIC;
proxy_pass http://nextjs;
}
# STATIC ASSETS (E.G. IMAGES)
# Browser cache - "no-cache" headers from Next.js as no build id in url
# Server cache - refresh regularly in case of changes
location /static {
proxy_cache STATIC;
proxy_ignore_headers Cache-Control;
proxy_cache_valid 60m;
proxy_pass http://nextjs;
}
# DYNAMIC ASSETS - NO CACHE
location / {
proxy_pass http://nextjs;
proxy_buffering off;
proxy_set_header X-Real-IP $remote_addr;
}
}
Our docker-compose file is:
version: '3'
services:
  nextjs:
    build: ./DRN1Git
  nginx:
    user: $UID
    build: ./nginx
    restart: always
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./certbot/www:/var/www/certbot/:rw
      - ./certbot/conf/:/etc/nginx/ssl/:ro
  certbot:
    image: certbot/certbot:latest
    volumes:
      - ./certbot/www/:/var/www/certbot/:rw
      - ./certbot/conf/:/etc/letsencrypt/:rw

By default, Docker creates an internal network (172.18.0.0/16 in your case). You need to map the container's port to the outside of your Docker host (192.168.1.101). See ports in Compose for reference.
E.g.:
version: "3.9"
services:
web:
build: nginx
ports:
- "80:80"
If you provide your docker-compose.yml file, I will fit the example to suit your needs.
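For reference, a minimal sketch fitted to the compose file shown in the question (service names and build paths taken from there); the nginx service is the one that has to publish the ports on the Docker host:
version: '3'
services:
  nextjs:
    build: ./DRN1Git
    # stays internal; nginx reaches it over the compose network as nextjs:3000
  nginx:
    build: ./nginx
    restart: always
    ports:
      - "80:80"    # 192.168.1.101:80  -> nginx container port 80
      - "443:443"  # 192.168.1.101:443 -> nginx container port 443
With that in place, the router's forward to 192.168.1.101:80 should land on the nginx container.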

Related

Unable to get my frontend container to communicate with my backend container with nginx

I have done this many times, but for some reason I am not able to get my containers to communicate.
Even when I try to curl from the frontend container I cannot connect!
example:
/etc/nginx # curl http://localhost:8081/v1/test
curl: (7) Failed to connect to localhost port 8081 after 0 ms: Connection refused
/etc/nginx #
bb36725aa567 vue-golang-grpc-base-client "/docker-entrypoint.…" 3 minutes ago Up 3 minutes 80/tcp, 0.0.0.0:80->8080/tcp vue-golang-grpc-base-client-1
8d9fa96e0e67 vue-golang-grpc-base-api "./main" 15 minutes ago Up 15 minutes 0.0.0.0:8081-8082->8081-8082/tcp vue-golang-grpc-base-api-1
My nginx file looks like
upstream api {
server ${API_HOST}:8081;
}
server {
server_name localhost example.com www.example.com;
listen 8080;
root /usr/share/nginx/html;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
location ~* \.(js|css|png|jpg|jpeg|gif|svg|ico|woff|woff2|ttf)$ {
expires 1y;
add_header Cache-Control "public";
access_log off;
}
location / {
try_files $uri /index.html;
}
location ~ ^/(api)/ {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_redirect off;
proxy_pass http://api;
add_header Cache-Control "no-store, no-cache, must-revalidate";
expires off;
}
}
I am using Vue.js 3 with TypeScript and created a custom URL handler because I am not using webpack.
export class EnvironmentHelper {
  public static get isDevelopment(): boolean {
    return process.env.NODE_ENV === "development";
  }
  public static get isProduction(): boolean {
    return process.env.NODE_ENV === "production";
  }
  public get baseUrl(): string {
    if (EnvironmentHelper.isDevelopment) {
      return "http://localhost:8081"
    }
    if (EnvironmentHelper.isProduction) {
      return "http://localhost:8080/"
    }
    return "/";
  }
}
Then I have a .env file:
NODE_ENV=development
API_HOST=localhost
and I am using axios:
const url = new EnvironmentHelper()
export const api = {
  async getTest() {
    try {
      return await grpcClient.get<TestResponse>(url + "v1/test")
Inside the nginx container the environment variable is set:
/etc/nginx # echo $API_HOST
localhost
Then I build the images using Docker Compose:
version: '3.0'
services:
  api:
    build:
      context: ./api
    ports:
      - "8082:8082" # gRPC
      - "8081:8081" # gRPC Gateway
    networks:
      - personal
  client:
    build:
      context: ./client
    environment:
      - API_HOST=api
      - NODE_ENV=production
    env_file:
      - client/.env.local
    networks:
      - personal
    ports:
      - "80:8080"
networks:
  personal:
I have two .env files
.env
.env.production
and both are the same
NODE_ENV=production
API_HOST=api
Any advice would be greatly appreciated...
----- Update -----
In the end I tried many of the accepted answer's suggestions; however, the issue was the routing. I had it set to api and not v1, i.e. /v1/test.
I hope this post helps others with the correct configuration.
The devil is in the details #masseyb
localhost inside the client service is the container's localhost, not your PC's localhost where you have the ports mapped.
Given your API_HOST environment variable, nginx is trying to pass connections to upstream { server localhost:8081; }; nothing in the client service is listening on port 8081, hence "Connection refused".
You can update your API_HOST to e.g. api, resulting in upstream { server api:8081; }; api will then be resolved to the container's IP and connections will be passed to <container_ip>:8081 instead of localhost:8081 (preferred). Alternatively, you can run the client service on the host network and keep using localhost.
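As a minimal sketch, assuming the ${API_HOST} placeholder in the nginx config is substituted at container start (e.g. via envsubst), the rendered upstream with API_HOST=api would look like this:
# Rendered upstream after substitution; "api" is the compose service name
# and is resolved by Docker's internal DNS on the shared "personal" network.
upstream api {
    server api:8081;
}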

Docker nginx reverseProxy Connection Refused

I have two projects, one called defaultWebsite and the other one nginxProxy.
I am trying to set up the following:
In /etc/hosts I have added 127.0.0.1 default.local, and Docker containers are running for all of them. I did not add a php-fpm container for the reverseProxy (should I?).
nginxReverseProxy default.config:
#sample setup
upstream default_local {
server host.docker.internal:31443;
}
server {
listen 0.0.0.0:80;
return 301 https://$host$request_uri;
}
server {
listen 0.0.0.0:443 ssl;
server_name default.local;
ssl_certificate /etc/ssl/private/localhost/default_dev.crt;
ssl_certificate_key /etc/ssl/private/localhost/default_dev.key;
#ssl_verify_client off;
ssl_session_cache builtin:1000 shared:SSL:10m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
ssl_prefer_server_ciphers on;
index index.php index.html index.htm index.nginx-debian.html;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $proxy_host;
proxy_set_header X-NginX-Proxy true;
proxy_pass https://default_local;
}
}
defaultWebsite config:
server {
listen 0.0.0.0:80;
server_name default.local;
location / {
return 301 https://$host$request_uri;
}
}
server {
listen 0.0.0.0:443 ssl;
server_name default.local;
root /app/public;
#this is for local. on production this will be different.
ssl_certificate /etc/ssl/default.local/localhost.crt;
ssl_certificate_key /etc/ssl/default.local/localhost.key;
location / {
try_files $uri /index.php$is_args$args;
}
location ~ ^/index\.php(/|$) {
fastcgi_pass php-fpm:9000;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
fastcgi_param DOCUMENT_ROOT $realpath_root;
internal;
}
# return 404 for all other php files not matching the front controller
# this prevents access to other php files you don't want to be accessible.
location ~ \.php$ {
return 404;
}
error_log /var/log/nginx/default_error.log;
access_log /var/log/nginx/default_access.log;
}
docker-compose.yml for defaultWebsite:
services:
  nginx:
    build: DockerConfig/nginx
    working_dir: /app
    volumes:
      - .:/app
      - ./log:/log
      - ./data/nginx/htpasswd:/etc/nginx/.htpasswd
      - ./data/nginx/nginx_dev.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - php-fpm
      - mysql
    links:
      - php-fpm
      - mysql
    ports:
      - "31080:80"
      - "31443:443"
    expose:
      - "31080"
      - "31443"
    environment:
      VIRUAL_HOST: "default.local"
      APP_FRONT_CONTROLLER: "public/index.php"
    networks:
      default:
        aliases:
          - default
  php-fpm:
    build: DockerConfig/php-fpm
    working_dir: /app
    volumes:
      - .:/app
      - ./log:/log
      - ./data/php-fpm/php-ini-overrides.ini:/etc/php/7.3/fpm/conf.d/99-overrides.ini
    ports:
      - "30902:9000"
    expose:
      - "30902"
    extra_hosts:
      - "default.local:127.0.0.1"
    networks:
      - default
    environment:
      XDEBUG_CONFIG: "remote_host=172.29.0.1 remote_enable=1 remote_autostart=1 idekey=\"PHPSTORM\" remote_log=\"/var/log/xdebug.log\""
      PHP_IDE_CONFIG: "serverName=default.local"
docker-compose.yml for nginxReverseProxy:
services:
  reverse_proxy:
    build: DockerConfig/nginx
    hostname: reverseProxy
    ports:
      - 80:80
      - 443:443
    extra_hosts:
      - "host.docker.internal:127.0.0.1"
    volumes:
      - ./data/nginx/dev/default_dev.conf:/etc/nginx/conf.d/default.conf
      - ./data/certs:/etc/ssl/private/
docker ps output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6e9a8479e6f8 default_nginx "nginx -g 'daemon of…" 12 hours ago Up 12 hours 31080/tcp, 31443/tcp, 0.0.0.0:31080->80/tcp, 0.0.0.0:31443->443/tcp default_nginx_1
5e1df4d6f1f5 default_php-fpm "/usr/sbin/php-fpm7.…" 12 hours ago Up 12 hours 30902/tcp, 0.0.0.0:30902->9000/tcp default_php-fpm_1
f3ec76cd7148 default_mysql "/entrypoint.sh mysq…" 12 hours ago Up 12 hours (healthy) 33060/tcp, 0.0.0.0:31336->3306/tcp default_mysql_1
d633511bc6a8 proxy_reverse_proxy "/bin/sh -c 'exec ng…" 12 hours ago Up 12 hours 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp proxy_reverse_proxy_1
If I access default.local:31443 directly, I can see the page working.
When I try to access http://default.local it redirects me to https://default.local, but at the same time I get this error:
reverse_proxy_1 | 2020/04/14 15:22:43 [error] 6#6: *4 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.80.1, server: default.local, request: "GET / HTTP/1.1", upstream: "https://127.0.0.1:31443/", host: "default.local"
Not sure this is the answer, but it is too long for a comment.
In your nginx conf you have:
upstream default_local {
server host.docker.internal:31443;
}
and as I see it (I could be wrong here), you have a different container accessing it:
extra_hosts:
- "host.docker.internal:127.0.0.1"
but you set the hostname to 127.0.0.1; shouldn't it be the Docker host IP, since the connection comes from a different container?
In general, ensure the Docker host IP is used in all containers when they need to connect to another container or to the outside.
OK, so it seems that the Docker bridge IP should be used on Linux machines, because host.docker.internal does not exist there yet (it is to be added in a future version).
To get the Docker IP on Linux it should be enough to run ip addr | grep "docker".
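For example, a quick way to read the bridge address on the host (a sketch; the interface is usually docker0, but names and addresses vary per setup):
# Show only the IPv4 address assigned to the docker0 bridge on the host
ip -4 addr show docker0 | grep inet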
So the final config for the reverse_proxy default.conf should look something like this:
upstream default_name {
server 172.17.0.1:52443;
}
#redirect to https
server {
listen 80;
return 301 https://$host$request_uri;
}
server {
server_name default.localhost;
listen 443 ssl http2;
large_client_header_buffers 4 16k;
ssl_certificate /etc/ssl/private/localhost/whatever_dev.crt;
ssl_certificate_key /etc/ssl/private/localhost/whatever_dev.key;
ssl_session_cache builtin:1000 shared:SSL:10m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
ssl_prefer_server_ciphers on;
index index.php index.html index.htm index.nginx-debian.html;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $proxy_host;
proxy_set_header X-NginX-Proxy true;
proxy_pass https://default_name;
}
}

jwilder/nginx-proxy: no access to virtual host

I have a NAS behind a router. On this NAS I want to run Nextcloud and Seafile together for testing. Everything should be set up with Docker. The jwilder/nginx-proxy container does not work as expected and I cannot find helpful information. I feel I am missing something very basic.
What is working:
I have a noip.com DynDNS that points to my router's IP: blabla.ddns.net
The router forwards ports 22, 80 and 443 to my NAS at 192.168.1.11
A plain nginx server running on the NAS can be accessed via blabla.ddns.net; its docker-compose.yml is this:
version: '2'
services:
  nginxnextcloud:
    container_name: nginxnextcloud
    image: nginx
    restart: always
    ports:
      - "80:80"
    networks:
      - web
networks:
  web:
    external: true
What is not working:
The same nginx server as above, but behind the nginx-proxy. I cannot access this server. Calling blabla.ddns.net gives a 503 error, and calling nextcloud.blabla.ddns.net gives "page not found". Viewing the logs of the nginx-proxy via docker logs -f nginxproxy shows every test with blabla.ddns.net and its 503 answer, but when I try to access nextcloud.blabla.ddns.net, not even a log entry appears.
This is the docker-compose.yml for one nginx behind a nginx-proxy:
version: '2'
services:
  nginxnextcloud:
    container_name: nginxnextcloud
    image: nginx
    restart: always
    expose:
      - 80
    networks:
      - web
    environment:
      - VIRTUAL_HOST=nextcloud.blabla.ddns.net
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginxproxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock
    networks:
      - web
networks:
  web:
    external: true
The generated configuration file for nginx-proxy, /etc/nginx/conf.d/default.conf, contains entries for my test server:
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
default $http_x_forwarded_proto;
'' $scheme;
}
# If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
# server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
default $http_x_forwarded_port;
'' $server_port;
}
# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
default upgrade;
'' close;
}
# Apply fix for very long server names
server_names_hash_bucket_size 128;
# Default dhparam
ssl_dhparam /etc/nginx/dhparam/dhparam.pem;
# Set appropriate X-Forwarded-Ssl header
map $scheme $proxy_x_forwarded_ssl {
default off;
https on;
}
gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
log_format vhost '$host $remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent"';
access_log off;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384';
ssl_prefer_server_ciphers off;
resolver 127.0.0.11;
# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";
server {
server_name _; # This is just an invalid value which will never trigger on a real hostname.
listen 80;
access_log /var/log/nginx/access.log vhost;
return 503;
}
# nextcloud.blabla.ddns.net
upstream nextcloud.blabla.ddns.net {
## Can be connected with "web" network
# nginxnextcloud
server 172.22.0.2:80;
}
server {
server_name nextcloud.blabla.ddns.net;
listen 80 ;
access_log /var/log/nginx/access.log vhost;
location / {
proxy_pass http://nextcloud.blabla.ddns.net;
}
}
Why is this minimal example not working?

Location for passing Flask-SocketIO events to the uWSGI server via an nginx proxy in Docker

I am trying to use Socket.IO events (based on Flask-SocketIO) with my uWSGI and nginx setup on Docker. I am not sure how I should configure my nginx file to allow the socket connection between client and server. Here is my current nginx configuration:
server {
listen 80;
server_name _;
location / {
try_files $uri #app;
}
location #app {
include /etc/nginx/uwsgi_params;
uwsgi_pass myapp:8080;
}
location /socket.io {
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
uwsgi_pass myapp:8080/socket.io;
}
}
Docker Compose:
version: '3.5'
services:
  web_server:
    container_name: nginx
    external_links:
      - app
    build:
      context: .
      dockerfile: server/Dockerfile
    ports:
      - 80:80
    depends_on:
      - app
  app:
    container_name: myapp
    build:
      context: .
      dockerfile: application/Dockerfile
    expose:
      - 8080
Thank you in advance!
The Flask-SocketIO documentation shows an example nginx configuration. Here is the Socket.IO location block from it:
location /socket.io {
include proxy_params;
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_pass http://127.0.0.1:5000/socket.io;
}
The entire configuration can be found in the Flask-SocketIO deployment documentation.
You are using uwsgi_pass which, based on my understanding, does not support proxying WebSocket connections. Use HTTP instead, as this example shows.
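A minimal sketch of how that location block could look for the Docker setup in this question, assuming the app container is reachable as myapp and the Flask-SocketIO app is served over plain HTTP on port 8080 (for example via uWSGI's --http mode or gunicorn with eventlet) rather than over the uwsgi protocol:
location /socket.io {
    proxy_http_version 1.1;
    proxy_buffering off;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    # "myapp" is the container name from the compose file; port 8080 assumes
    # the app speaks plain HTTP there instead of the uwsgi protocol.
    proxy_pass http://myapp:8080/socket.io;
}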

How to setup nginx as reverse proxy with LetsEncrypt SSL encryption using Docker

I am trying to set up SSL for my homepage (www.myhomepage.com) using LetsEncrypt on an nginx reverse proxy. I have an additional host without SSL running to test proxying to multiple hosts (www.myotherhomepagewithoutssl.com).
The reverse proxy and the two hosts are running in three separate Docker containers.
I got both hosts to work without SSL, but the encrypted one does not work when SSL is enabled. The LetsEncrypt certificates appear to be set up/obtained correctly and are persisted in a Docker volume.
I am trying to follow and adapt this tutorial to set up the LetsEncrypt SSL encryption:
http://tom.busby.ninja/letsecnrypt-nginx-reverse-proxy-no-downtime/
When trying to connect to the SSL-encrypted host at www.myhomepage.com using Firefox, I get this error:
Unable to connect
The other, non-encrypted host at www.myotherhomepagewithoutssl.com works. And as I stated above, when I have www.myhomepage.com set up without SSL (in the same way as www.myotherhomepagewithoutssl.com), it is also reachable.
My complete setup is listed below and consists of:
* reverse_proxy_testing.sh: Bash script to clean-up, build and start the containers.
* compose_reverse_proxy.yaml: Docker-Compose file.
* reverse_proxy.docker: Dockerfile for setting up the reverse-proxy with nginx.
* nginx.conf: nginx config-file for the reverse-proxy.
I suspect that my error is located somewhere inside nginx.conf, but I cannot find it.
Any help is much appreciated!
nginx.conf:
worker_processes 1;
events { worker_connections 1024; }
http {
sendfile on;
server {
deny all;
}
upstream myhomepage {
server myhomepage_blog:80;
}
upstream docker-apache {
server apache:80;
}
server {
listen 80;
listen [::]:80;
server_name www.myhomepage.com myhomepage.com;
return 302 https://$server_name$request_uri;
}
server {
listen 443 ssl;
listen [::]:443;
server_name www.myhomepage.com myhomepage.com;
ssl_certificate /etc/letsencrypt/live/myhomepage.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/myhomepage.com/privkey.pem;
location /.well-known {
root /var/www/ssl-proof/myhomepage.com/;
}
location / {
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://myhomepage;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_read_timeout 900s;
}
}
server {
listen 80;
server_name www.myotherhomepagewithoutssl.com myotherhomepagewithoutssl.com;
location / {
proxy_pass http://docker-apache;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
}
}
}
reverse_proxy.docker:
FROM nginx:alpine
COPY nginx.conf /etc/nginx/nginx.conf
RUN mkdir -p /var/www/ssl-proof/myhomepage.com/.well-known
RUN apk update && apk add certbot
compose_reverse_proxy.yaml:
version: '3.3'
services:
  reverseproxy:
    image: reverseproxy
    ports:
      - 80:80
    restart: always
    volumes:
      - proxy_letsencrypt_ssl_proof:/var/www/ssl-proof
      - proxy_letsencrypte_certificates:/etc/letsencrypt
  apache:
    depends_on:
      - reverseproxy
    image: httpd:alpine
    restart: always
  myhomepage_blog:
    image: wordpress
    links:
      - myhomepage_db:mysql
    environment:
      - WORDPRESS_DB_PASSWORD=somepassword
      - VIRTUAL_HOST=myhomepage.com
    volumes:
      - myhomepage_code:/code
      - myhomepage_html:/var/www/html
    restart: always
  myhomepage_db:
    image: mariadb
    environment:
      - MYSQL_ROOT_PASSWORD=somepassword
      - MYSQL_DATABASE=wordpress
    volumes:
      - myhomepage_dbdata:/var/lib/mysql
    restart: always
volumes:
  myhomepage_dbdata:
  myhomepage_code:
  myhomepage_html:
  proxy_letsencrypt_ssl_proof:
  proxy_letsencrypte_certificates:
reverse_proxy_testing.sh:
#!/bin/bash
docker rm testreverseproxy_apache_1 testreverseproxy_myhomepage_blog_1 testreverseproxy_myhomepage_db_1 testreverseproxy_reverseproxy_1
docker build -t reverseproxy -f reverse_proxy.docker .
docker-compose -f reverse_proxy_compose.yml up
