Laravel Sail + Caddy - local dev site not trusted - docker
First time using Docker + Sail + Caddy. I copied the Caddy setup from here. I have a local Laravel dev project which uses two subdomains (sd1.project.local, sd2.project.local).
Everything looks fine on sail up. I can see the certs created in the expected local stores, one for each subdomain. I then import the two certs into Chrome.
I can get to the site, but Chrome does not trust it. I have restarted Chrome after importing the certs.
What have I missed or messed up? Thanks!
The output of curl -vvv against sd1.project.local is:
$ curl -vvv https://sd1.project.local
* Trying 127.0.0.1:443...
* TCP_NODELAY set
* Connected to sd1.project.local (127.0.0.1) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (OUT), TLS alert, unknown CA (560):
* SSL certificate problem: unable to get local issuer certificate
* Closing connection 0
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
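curl exits with error 60 because the host's CA bundle does not contain Caddy's locally generated root CA. As a sanity check, one can point curl at that root certificate explicitly (the root.crt path here is an assumption based on the authorities volume mount in the docker-compose.yml shown later):

```shell
# Hypothetical sanity check: verify the chain against Caddy's local root CA
# instead of the system bundle. Path assumed from the compose volume mount
# ./Docker/Caddy/authorities -> /data/caddy/pki/authorities/local.
ROOT_CA=./Docker/Caddy/authorities/root.crt
curl --cacert "$ROOT_CA" -v https://sd1.project.local
```

If this succeeds while plain curl fails, the per-subdomain leaf certificates are fine and only the root CA is missing from the host (and Chrome's) trust store.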
The output of the sail up command for caddy is:
project-caddy-1 | {"level":"info","ts":1651470513.5381901,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":"caddyfile"}
project-caddy-1 | {"level":"warn","ts":1651470513.5383825,"logger":"caddyfile","msg":"Unnecessary header_up X-Forwarded-Host: the reverse proxy's default behavior is to pass headers to the upstream"}
project-caddy-1 | {"level":"warn","ts":1651470513.5384026,"logger":"caddyfile","msg":"Unnecessary header_up X-Forwarded-For: the reverse proxy's default behavior is to pass headers to the upstream"}
project-caddy-1 | {"level":"warn","ts":1651470513.5386436,"logger":"caddyfile","msg":"Unnecessary header_up X-Forwarded-Host: the reverse proxy's default behavior is to pass headers to the upstream"}
project-caddy-1 | {"level":"warn","ts":1651470513.5386622,"logger":"caddyfile","msg":"Unnecessary header_up X-Forwarded-For: the reverse proxy's default behavior is to pass headers to the upstream"}
project-caddy-1 | {"level":"warn","ts":1651470513.5391574,"msg":"Caddyfile input is not formatted; run the 'caddy fmt' command to fix inconsistencies","adapter":"caddyfile","file":"/etc/caddy/Caddyfile","line":2}
project-caddy-1 | {"level":"warn","ts":1651470513.5395052,"logger":"admin","msg":"admin endpoint disabled"}
project-caddy-1 | {"level":"info","ts":1651470513.5397356,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc0001264d0"}
project-caddy-1 | {"level":"info","ts":1651470513.5441852,"logger":"http","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
project-caddy-1 | {"level":"warn","ts":1651470513.5442154,"logger":"http","msg":"server is listening only on the HTTP port, so no automatic HTTPS will be applied to this server","server_name":"srv1","http_port":80}
project-caddy-1 | {"level":"info","ts":1651470513.5448365,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/data/caddy"}
project-caddy-1 | {"level":"info","ts":1651470513.544853,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["sd1.project.local","sd2.project.local"]}
project-caddy-1 | {"level":"info","ts":1651470513.545812,"logger":"tls.obtain","msg":"acquiring lock","identifier":"sd1.project.local"}
project-caddy-1 | {"level":"info","ts":1651470513.5458567,"logger":"tls.obtain","msg":"acquiring lock","identifier":"sd2.project.local"}
project-caddy-1 | {"level":"info","ts":1651470513.5476136,"logger":"tls.obtain","msg":"lock acquired","identifier":"sd1.project.local"}
project-caddy-1 | {"level":"info","ts":1651470513.5479813,"logger":"tls","msg":"finished cleaning storage units"}
project-caddy-1 | {"level":"info","ts":1651470513.5485632,"logger":"tls.obtain","msg":"lock acquired","identifier":"sd2.project.local"}
project-caddy-1 | {"level":"info","ts":1651470513.5563135,"logger":"tls.obtain","msg":"certificate obtained successfully","identifier":"sd1.project.local"}
project-caddy-1 | {"level":"info","ts":1651470513.556336,"logger":"tls.obtain","msg":"releasing lock","identifier":"sd1.project.local"}
project-caddy-1 | {"level":"info","ts":1651470513.5574615,"logger":"tls.obtain","msg":"certificate obtained successfully","identifier":"sd2.project.local"}
project-caddy-1 | {"level":"info","ts":1651470513.557494,"logger":"tls.obtain","msg":"releasing lock","identifier":"sd2.project.local"}
project-caddy-1 | {"level":"warn","ts":1651470513.5588439,"logger":"pki.ca.local","msg":"installing root certificate (you might be prompted for password)","path":"storage:pki/authorities/local/root.crt"}
project-caddy-1 | 2022/05/02 05:48:33 define JAVA_HOME environment variable to use the Java trust
project-caddy-1 | 2022/05/02 05:48:33 not NSS security databases found
project-caddy-1 | {"level":"warn","ts":1651470513.561766,"logger":"tls","msg":"stapling OCSP","error":"no OCSP stapling for [sd1.project.local]: no OCSP server specified in certificate","identifiers":["sd1.project.local"]}
project-caddy-1 | {"level":"warn","ts":1651470513.5634398,"logger":"tls","msg":"stapling OCSP","error":"no OCSP stapling for [sd2.project.local]: no OCSP server specified in certificate","identifiers":["sd2.project.local"]}
project-caddy-1 | 2022/05/02 05:48:33 certificate installed properly in linux trusts
project-caddy-1 | {"level":"info","ts":1651470513.5809436,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
project-caddy-1 | {"level":"info","ts":1651470513.580976,"msg":"serving initial configuration"}
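Note that the "installing root certificate" and "certificate installed properly in linux trusts" lines above refer to the container's own trust store, not the host's. A hedged sketch for exporting the root CA to the host (container name taken from the log prefix; trust-store commands vary per OS):

```shell
# Copy Caddy's local root CA out of the running container so it can be
# imported into the host / browser trust store. It is this root CA that
# needs trusting, not the per-subdomain leaf certificates.
docker cp project-caddy-1:/data/caddy/pki/authorities/local/root.crt ./caddy-root.crt

# Example for a Debian/Ubuntu host:
# sudo cp caddy-root.crt /usr/local/share/ca-certificates/caddy-root.crt
# sudo update-ca-certificates
```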
My docker-compose is:
# For more information: https://laravel.com/docs/sail
version: '3'
services:
    project.local:
        build:
            context: ./vendor/laravel/sail/runtimes/8.0
            dockerfile: Dockerfile
            args:
                WWWGROUP: '${WWWGROUP}'
        image: sail-8.0/app
        extra_hosts:
            - 'host.docker.internal:host-gateway'
        # ports:
        #     - "${APP_PORT:-80}:80"
        environment:
            WWWUSER: '${WWWUSER}'
            LARAVEL_SAIL: 1
            XDEBUG_MODE: '${SAIL_XDEBUG_MODE:-off}'
            XDEBUG_CONFIG: '${SAIL_XDEBUG_CONFIG:-client_host=host.docker.internal}'
            GITHUB_TOKEN: '${GITHUB_TOKEN}'
            FONTAWESOME_NPM_AUTH_TOKEN: '${FONTAWESOME_NPM_AUTH_TOKEN}'
        volumes:
            - '.:/var/www/html'
        networks:
            - sail
        depends_on:
            - redis
            - mailhog
    redis:
        build:
            context: "./Docker/Redis"
            dockerfile: Dockerfile
        privileged: true
        command: sh -c "./init.sh"
        ports:
            - "${FORWARD_REDIS_PORT:-6379}:6379"
        volumes:
            - "sail-redis:/data"
        networks:
            - sail
        healthcheck:
            test: ["CMD", "redis-cli", "ping"]
            retries: 3
            timeout: 5s
    mailhog:
        image: "mailhog/mailhog:latest"
        ports:
            - "${FORWARD_MAILHOG_PORT:-1025}:1025"
            - "${FORWARD_MAILHOG_DASHBOARD_PORT:-8025}:8025"
        networks:
            - sail
    caddy:
        build:
            context: "./Docker/Caddy"
            dockerfile: Dockerfile
            args:
                WWWGROUP: "${WWWGROUP}"
        restart: unless-stopped
        ports:
            - "${APP_PORT:-80}:80"
            - "${APP_SSL_PORT:-443}:443"
        environment:
            LARAVEL_SAIL: 1
            HOST_DOMAIN: project.local
        volumes:
            - "./Docker/Caddy/file:/etc/caddy"
            - ".:/srv:cache"
            - "./Docker/Caddy/certificates:/data/caddy/certificates/local"
            - "./Docker/Caddy/authorities:/data/caddy/pki/authorities/local"
            - "sailcaddy:/data:cache"
            - "sailcaddyconfig:/config:cache"
        networks:
            - sail
        depends_on:
            - project.local
networks:
    sail:
        driver: bridge
volumes:
    sail-redis:
        driver: local
    sailcaddy:
        external: true
    sailcaddyconfig:
        driver: local
My Caddyfile is:
{
    admin off
    # debug
    on_demand_tls {
        ask http://project.local/caddy
    }
    local_certs
    default_sni project
}
:80 {
    reverse_proxy project.local {
        header_up Host {host}
        header_up X-Real-IP {remote}
        header_up X-Forwarded-Host {host}
        header_up X-Forwarded-For {remote}
        header_up X-Forwarded-Port 443
        # header_up X-Forwarded-Proto {scheme}
        health_timeout 5s
    }
}
:443 {
    tls internal {
        on_demand
    }
    reverse_proxy project.local {
        header_up Host {host}
        header_up X-Real-IP {remote}
        header_up X-Forwarded-Host {host}
        header_up X-Forwarded-For {remote}
        header_up X-Forwarded-Port 443
        # header_up X-Forwarded-Proto {scheme}
        health_timeout 5s
    }
}
sd1.project.local {
    reverse_proxy project.local
}
sd2.project.local {
    reverse_proxy project.local
}
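With on_demand_tls, Caddy issues a GET request to the ask endpoint with a domain query parameter and only mints a certificate if it receives HTTP 200. The authorization check can be simulated by hand (this assumes the Laravel app actually serves the /caddy route referenced in the global block):

```shell
# Caddy's on-demand "ask" check: a 200 response authorizes issuance,
# anything else denies it. Simulate the check manually:
curl -i "http://project.local/caddy?domain=sd1.project.local"
```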
My Dockerfile is:
FROM caddy:alpine
LABEL maintainer="Adrian Mejias"
ARG WWWGROUP
ENV DEBIAN_FRONTEND noninteractive
ENV TZ=UTC
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apk add --no-cache bash \
    && apk add --no-cache nss-tools \
    && rm -rf /var/cache/apk/*
RUN addgroup -S $WWWGROUP
RUN adduser -G $WWWGROUP -u 1337 -S sail
COPY start-container /usr/local/bin/start-container
RUN chmod +x /usr/local/bin/start-container
ENTRYPOINT ["start-container"]
My start-container is:
#!/usr/bin/env sh
if [ ! -z "$WWWUSER" ]; then
    addgroup $WWWUSER sail
fi
if [ $# -gt 0 ]; then
    # TODO: find the Alpine equivalent of the line below
    # exec gosu $WWWUSER "$@"
    : # placeholder so the branch is valid shell
else
    /usr/bin/caddy run --config /etc/caddy/Caddyfile --adapter caddyfile
fi
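For the TODO in the script above: the usual Alpine replacement for gosu is su-exec. A sketch of the full script with that branch filled in, assuming `apk add su-exec` is added to the Dockerfile:

```shell
#!/usr/bin/env sh
# Sketch: start-container with su-exec (the Alpine equivalent of gosu)
# filling in the TODO branch. Requires `apk add su-exec` in the Dockerfile.
if [ ! -z "$WWWUSER" ]; then
    addgroup $WWWUSER sail
fi
if [ $# -gt 0 ]; then
    # Run the given command as $WWWUSER, dropping root privileges.
    exec su-exec $WWWUSER "$@"
else
    /usr/bin/caddy run --config /etc/caddy/Caddyfile --adapter caddyfile
fi
```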
Related
Bad Gateway from nginx published with docker-compose
I am learning docker-compose and now I am trying to set up an app and nginx in one docker-compose script on my WSL Ubuntu. I am testing my endpoint with curl -v http://127.0.0.1/weatherforecast but I am receiving 502 Bad Gateway from nginx. If I change port exposing to port publishing in docker-compose, as below, requests bypass nginx and reach my app and I receive the expected response:

ports:
    - 5000:8080

My setup:

app's dockerfile

FROM mcr.microsoft.com/dotnet/aspnet:3.1 AS base
WORKDIR /app
ENV ASPNETCORE_URLS=http://+:8080
FROM mcr.microsoft.com/dotnet/sdk:3.1 AS build
WORKDIR /src
COPY ["WebApplication2.csproj", "."]
RUN dotnet restore "./WebApplication2.csproj"
COPY . .
WORKDIR "/src/."
RUN dotnet build "WebApplication2.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "WebApplication2.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "WebApplication2.dll"]

nginx.conf

events { worker_connections 1024; }
http {
    access_log /var/log/nginx/access.log;
    server {
        listen 80;
        location / {
            proxy_pass http://127.0.0.1:8080/;
        }
    }
}

docker-compose.yml

version: "3.9"
services:
    web:
        depends_on:
            - nginx
        build: ./WebApplication2
        expose:
            - "8080"
    nginx:
        image: "nginx"
        volumes:
            - ./nginx.conf:/etc/nginx/nginx.conf
            - ./logs:/var/log/nginx/
        ports:
            - 80:80

>docker-compose ps

Name                  Command                         State  Ports
-----------------------------------------------------------------------------------------------
composetest_nginx_1   /docker-entrypoint.sh ngin ...  Up     0.0.0.0:80->80/tcp,:::80->80/tcp
composetest_web_1     dotnet WebApplication2.dll      Up     8080/tcp

/var/log/nginx/error.log

[error] 31#31: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.26.0.1, server: , request: "GET /weatherforecast HTTP/1.1", upstream: "http://127.0.0.1:8080/weatherforecast", host: "127.0.0.1"

cURL output:

* Trying 127.0.0.1:80...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
> GET /weatherforecast HTTP/1.1
> Host: 127.0.0.1
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 502 Bad Gateway
< Server: nginx/1.21.1
< Date: Fri, 13 Aug 2021 17:50:56 GMT
< Content-Type: text/html
< Content-Length: 157
< Connection: keep-alive
<
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.21.1</center>
</body>
</html>
* Connection #0 to host 127.0.0.1 left intact
You should direct your request to your web container instead of 127.0.0.1. Each container runs as a separate part of the network (each has a different IP address) and 127.0.0.1 points to the local container, so in your case it points to nginx itself. Instead of the real IP address of the container, you can use its DNS name (it is equal to the service name in docker-compose). Use something like:

events { worker_connections 1024; }
http {
    access_log /var/log/nginx/access.log;
    server {
        listen 80;
        location / {
            proxy_pass http://web:8080/;
        }
    }
}

Also, you specified that your web container depends on nginx, but it should be vice versa:

version: "3.9"
services:
    web:
        build: .
    nginx:
        image: "nginx"
        depends_on:
            - web
        volumes:
            - ./nginx.conf:/etc/nginx/nginx.conf
        ports:
            - 80:80
nginx reverse proxy to other nginx 502 bad gateway
I would like to create two services, both of them having their own nginx. I would like to use a third nginx as a reverse proxy to these two, but I get 502 Bad Gateway when I request:

http://127.0.0.1:8080/
http://127.0.0.1:8080/one
http://127.0.0.1:8080/two

Accessing these works OK:

http://127.0.0.1:8081
http://127.0.0.1:8082

I have this docker-compose.yml:

version: "3.3"
services:
    nginx-one:
        image: nginx:1.17.8
        ports:
            - "8081:80"
        networks:
            - frontend
            - backend
        volumes:
            - ./nginx-one/html:/usr/share/nginx/html
    nginx-two:
        image: nginx:1.17.8
        ports:
            - "8082:80"
        networks:
            - frontend
            - backend
        volumes:
            - ./nginx-two/html:/usr/share/nginx/html
    nginx-reverse-proxy:
        image: nginx:1.17.8
        ports:
            - "8080:80"
        networks:
            - frontend
            - backend
        volumes:
            - ./nginx-reverse-proxy/html:/usr/share/nginx/html
            - ./nginx-reverse-proxy/conf.d:/etc/nginx/conf.d
    debian-network:
        image: cslev/debian_networking
        stdin_open: true # docker run -i
        tty: true        # docker run -t
        networks:
            - frontend
            - backend
networks:
    frontend:
        internal: false
    backend:
        internal: true

and dir structure:

.
├── docker-compose.yml
├── nginx-one
│   └── html
│       └── index.html
├── nginx-reverse-proxy
│   ├── conf.d
│   │   └── default.conf
│   └── html
│       └── index.html
└── nginx-two
    └── html
        └── index.html

nginx.conf content:

user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;
    include /etc/nginx/conf.d/*.conf;
}

and conf.d/default.conf:

server {
    listen 80;
    location /one {
        proxy_pass http://127.0.0.1:8081/;
    }
    location /two {
        proxy_pass http://127.0.0.1:8082/;
    }
}

When I comment out this line of docker-compose.yml:

# ./nginx-reverse-proxy/conf.d:/etc/nginx/conf.d

so conf.d/default.conf is not used, and I request http://127.0.0.1:8080/ from the host's browser, it gives a proper response from the nginx-reverse-proxy itself, but obviously http://127.0.0.1:8080/one and http://127.0.0.1:8080/two don't provide any response from http://127.0.0.1:8081 and http://127.0.0.1:8082, but 404 instead.

docker ps output:

IMAGE                     COMMAND                  CREATED          STATUS        PORTS                  NAMES
cslev/debian_networking   "bash"                   25 minutes ago   Up 3 minutes                         nginxproblem_debian-network_1
nginx:1.17.8              "nginx -g 'daemon of…"   47 minutes ago   Up 3 minutes  0.0.0.0:8080->80/tcp   nginxproblem_nginx-reverse-proxy_1
nginx:1.17.8              "nginx -g 'daemon of…"   14 hours ago     Up 3 minutes  0.0.0.0:8082->80/tcp   nginxproblem_nginx-two_1
nginx:1.17.8              "nginx -g 'daemon of…"   14 hours ago     Up 3 minutes  0.0.0.0:8081->80/tcp   nginxproblem_nginx-one_1

Running this script:

#!/bin/sh
docker exec nginxproblem_debian-network_1 curl -sS 127.0.0.1:8080
docker exec nginxproblem_debian-network_1 curl -sS nginxproblem_nginx-reverse-proxy_1:8080
docker exec nginxproblem_debian-network_1 curl -sS 127.0.0.1:8080/one
docker exec nginxproblem_debian-network_1 curl -sS nginxproblem_nginx-reverse-proxy_1:8080/one
docker exec nginxproblem_debian-network_1 curl -sS 127.0.0.1:8080/two
docker exec nginxproblem_debian-network_1 curl -sS nginxproblem_nginx-reverse-proxy_1:8080/two
docker exec nginxproblem_debian-network_1 curl -sS 127.0.0.1:8081
docker exec nginxproblem_debian-network_1 curl -sS nginxproblem_nginx-one_1:8080
docker exec nginxproblem_debian-network_1 curl -sS 127.0.0.1:8082
docker exec nginxproblem_debian-network_1 curl -sS nginxproblem_nginx-two_1:8082

gives:

curl: (7) Failed to connect to 127.0.0.1 port 8080: Connection refused
curl: (7) Failed to connect to nginxproblem_nginx-reverse-proxy_1 port 8080: Connection refused
curl: (7) Failed to connect to 127.0.0.1 port 8080: Connection refused
curl: (7) Failed to connect to nginxproblem_nginx-reverse-proxy_1 port 8080: Connection refused
curl: (7) Failed to connect to 127.0.0.1 port 8080: Connection refused
curl: (7) Failed to connect to nginxproblem_nginx-reverse-proxy_1 port 8080: Connection refused
curl: (7) Failed to connect to 127.0.0.1 port 8081: Connection refused
curl: (7) Failed to connect to nginxproblem_nginx-one_1 port 8080: Connection refused
curl: (7) Failed to connect to 127.0.0.1 port 8082: Connection refused
curl: (7) Failed to connect to nginxproblem_nginx-two_1 port 8082: Connection refused

when - ./nginx-reverse-proxy/conf.d:/etc/nginx/conf.d is not commented, and the same output when it is commented, in contrast to accessing the IP addresses from the browser.
Unless you have host networking enabled, 127.0.0.1 points to the container itself. You can refer to the other two containers from inside a container in the same network by the service name, e.g. nginx-one or nginx-two.
You're also mapping the container port 80 to port 8080/8081/8082 on the host machine. This however does nothing for communication between containers in the same network. Check the docs:

    By default, when you create or run a container using docker create or docker run, it does not publish any of its ports to the outside world. To make a port available to services outside of Docker, or to Docker containers which are not connected to the container's network, use the --publish or -p flag. This creates a firewall rule which maps a container port to a port on the Docker host to the outside world.

So, try changing http://127.0.0.1:8081/; to http://nginx-one/; and it should work.
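The service-name resolution described above can be verified from inside the compose network, e.g. by reusing the debian-network container (which has curl, per the question; container and service names are taken from the compose file above):

```shell
# From a container on the same network, the other services resolve by their
# compose service names on their *container* port (80), not 127.0.0.1:808x:
docker exec nginxproblem_debian-network_1 curl -sS http://nginx-one/
docker exec nginxproblem_debian-network_1 curl -sS http://nginx-two/
```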
How to Configure LetsEncrypt-Cerbot in a Standalone Container
I'm trying to find simple documentation on running certbot in a docker container, but all I can find are complicated guides with running certbot + webserver etc. The official page is kinda useless... https://hub.docker.com/r/certbot/certbot/ . I already have a webserver separate from my websites and I want to run certbot on its own as well.
Can anybody give me some guidance on how I could generate certificates for mysite.com with a webroot of /opt/mysite/html? As I already have services on ports 443 and 80, I was thinking of using the "host" network if needed for certbot, but I don't really understand why it needs access to 443 when my website is served over 443 already.
I have found something like the following to generate a certbot container, but I have no idea how to "use it" or tell it to generate a cert for my site. E.g.:

WD=/opt/certbot
mkdir -p $WD/{mnt,setup,conf,www}
cd $WD/setup
cat << 'EOF' >docker-compose.yaml
version: '3.7'
services:
    certbot:
        image: certbot/certbot
        volumes:
            - type: bind
              source: /opt/certbot/conf
              target: /etc/letsencrypt
            - type: bind
              source: /opt/certbot/www
              target: /var/www/certbot
        entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
EOF
chmod +x docker-compose.yaml

This link has something close to what I need (obviously somehow I need to give it my domain as an argument!): Letsencrypt + Docker + Nginx

docker run -it --rm \
    -v certs:/etc/letsencrypt \
    -v certs-data:/data/letsencrypt \
    deliverous/certbot \
    certonly \
    --webroot --webroot-path=/data/letsencrypt \
    -d api.mydomain.com

I like to keep everything pretty "isolated", so I'm looking to just have certbot run in its own container and configure nginx/webserver to use the certs separately, and not have certbot either autoconfigure nginx or run in the same stack as a webserver.
Well, I have been learning a lot about Docker recently and I recently learned how to look at the Dockerfile. The certbot Dockerfile gave me some more hints. Basically you can append the following to your docker-compose.yaml and it is as if appending to certbot on the CLI. I will update with my working configs, but I was blocked due to the "Rate Limit of 5 failed auths/hour" :(
See the entrypoint of the Dockerfile:

ENTRYPOINT [ "certbot" ]

docker-compose.yaml:

command: certonly --webroot -w /var/www/html -d www.example.com -d example.com --non-interactive --agree-tos -m example@example.com

I will update with my full config once I get it working and will be including variables to utilize a .env file.
Full config example:

WD=/opt/certbot
mkdir -p $WD/{setup,certbot_logs}
cd $WD/setup
cat << 'EOF' >docker-compose.yaml
version: '3.7'
services:
    certbot:
        container_name: certbot
        hostname: certbot
        image: certbot/certbot
        volumes:
            - type: bind
              source: /opt/certbot/certbot_logs
              target: /var/log/letsencrypt
            - type: bind
              source: /opt/nginx/ssl
              target: /etc/letsencrypt
            - type: bind
              source: ${WEBROOT}
              target: /var/www/html/
        environment:
            - 'TZ=${TZ}'
        command: certonly --webroot -w /var/www/html -d ${DOMAIN} -d www.${DOMAIN} --non-interactive --agree-tos --register-unsafely-without-email ${STAGING}
EOF
chmod +x docker-compose.yaml
cd $WD/setup

Variables:

cat << 'EOF'>.env
WEBROOT=/opt/example/example_html
DOMAIN=example.com
STAGING=--staging
TZ=America/Whitehorse
EOF
chmod +x .env

NGinx:

server {
    listen 80;
    listen [::]:80;
    server_name www.example.com example.com;
    location /.well-known/acme-challenge/ {
        proxy_pass http://localhost:8575/$request_uri;
        include /etc/nginx/conf.d/proxy.conf;
    }
    location / {
        return 301 https://$host$request_uri;
    }
}
server {
    listen 443 ssl;
    listen [::]:443;
    server_name www.example.com example.com;
    # ssl_certificate /etc/ssl/live/example.com/fullchain.pem;
    # ssl_certificate_key /etc/ssl/live/example.com/privkey.pem;
    ssl_certificate /etc/ssl/fake/fake.crt;
    ssl_certificate_key /etc/ssl/fake/fake.key;
    location / {
        proxy_pass http://localhost:8575/;
        include /etc/nginx/conf.d/proxy.conf;
    }
}

Updated Personal Blog --> https://www.freesoftwareservers.com/display/FREES/Use+CertBot+-+LetsEncrypt+-+In+StandAlone+Docker+Container
How to enable HTTPS on AWS EC2 running an NGINX Docker container?
I have an EC2 instance on AWS that runs Amazon Linux 2. On it, I installed Git, docker, and docker-compose. Once done, I cloned my repository and ran docker-compose up to get my production environment up. I go to the public DNS, and it works.
I now want to enable HTTPS on the site. My project has a frontend using React running on an nginx-alpine server. The backend is a NodeJS server.
This is my nginx.conf file:

server {
    listen 80;
    server_name localhost;
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri /index.html;
    }
    location /api/ {
        proxy_pass http://${PROJECT_NAME}_backend:${NODE_PORT}/;
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}

Here's my docker-compose.yml file:

version: "3.7"
services:
    ##############################
    # Back-End Container
    ##############################
    backend: # Node-Express backend that acts as an API.
        container_name: ${PROJECT_NAME}_backend
        init: true
        build:
            context: ./backend/
            target: production
        restart: always
        environment:
            - NODE_PATH=${EXPRESS_NODE_PATH}
            - AWS_REGION=${AWS_REGION}
            - NODE_ENV=production
            - DOCKER_BUILDKIT=1
            - PORT=${NODE_PORT}
        networks:
            - client
    ##############################
    # Front-End Container
    ##############################
    nginx:
        container_name: ${PROJECT_NAME}_frontend
        build:
            context: ./frontend/
            target: production
            args:
                - NODE_PATH=${REACT_NODE_PATH}
                - SASS_PATH=${SASS_PATH}
        restart: always
        environment:
            - PROJECT_NAME=${PROJECT_NAME}
            - NODE_PORT=${NODE_PORT}
            - DOCKER_BUILDKIT=1
        command: /bin/ash -c "envsubst '$$PROJECT_NAME $$NODE_PORT' < /etc/nginx/conf.d/nginx.template > /etc/nginx/conf.d/default.conf && exec nginx -g 'daemon off;'"
        expose:
            - "80"
        ports:
            - "80:80"
        depends_on:
            - backend
        networks:
            - client
##############################
# General Config
##############################
networks:
    client:

I know there's a Docker image for certbot, but I'm not sure how to use it. I'm also worried about the way I'm proxying requests to /api/ to the server over http. Will that also give me any problems?

Edit: Attempt #1: Traefik
I created a Traefik container to route all traffic through HTTPS:

version: '2'
services:
    traefik:
        image: traefik
        restart: always
        ports:
            - 80:80
            - 443:443
        networks:
            - web
        volumes:
            - /var/run/docker.sock:/var/run/docker.sock
            - /opt/traefik/traefik.toml:/traefik.toml
            - /opt/traefik/acme.json:/acme.json
        container_name: traefik
networks:
    web:
        external: true

For the toml file, I added the following:

debug = false
logLevel = "ERROR"
defaultEntryPoints = ["https","http"]
[entryPoints]
    [entryPoints.http]
    address = ":80"
        [entryPoints.http.redirect]
        entryPoint = "https"
    [entryPoints.https]
    address = ":443"
        [entryPoints.https.tls]
[retry]
[docker]
endpoint = "unix:///var/run/docker.sock"
domain = "ec2-00-000-000-00.eu-west-1.compute.amazonaws.com"
watch = true
exposedByDefault = false
[acme]
storage = "acme.json"
entryPoint = "https"
onHostRule = true
[acme.httpChallenge]
entryPoint = "http"

I added this to my docker-compose production file:

labels:
    - "traefik.docker.network=web"
    - "traefik.enable=true"
    - "traefik.basic.frontend.rule=Host:ec2-00-000-000-00.eu-west-1.compute.amazonaws.com"
    - "traefik.basic.port=80"
    - "traefik.basic.protocol=https"

I ran docker-compose up for the Traefik container, and then ran docker-compose up on my production image. I got the following error:

unable to obtain acme certificate

I'm reading the Traefik docs and apparently there's a way to configure the toml file specifically for Amazon ECS: https://docs.traefik.io/configuration/backends/ecs/
Am I on the right track?
The easiest way would be to set up an ALB and use it for HTTPS:
1. Create the ALB
2. Add a 443 listener to the ALB
3. Generate a certificate using AWS Certificate Manager
4. Set the certificate as the default cert for the load balancer
5. Create a target group
6. Add your EC2 instance to the target group
7. Point the ALB to the target group
Requests will then be served via the ALB over HTTPS.
Enabling SSL is done through following the tutorial on Nginx and Let's Encrypt with Docker in Less Than 5 Minutes. I ran into some issues while following it, so I will try to clarify some things here. The steps include adding the following to the docker-compose.yml: ############################## # Certbot Container ############################## certbot: image: certbot/certbot:latest volumes: - ./frontend/data/certbot/conf:/etc/letsencrypt - ./frontend/data/certbot/www:/var/www/certbot As for the Nginx Container section of the docker-compose.yml, it should be amended to include the same volumes added to the Certbot Container, as well as add the ports and expose configurations: service_name: container_name: container_name image: nginx:alpine command: /bin/ash -c "exec nginx -g 'daemon off;'" volumes: - ./data/certbot/conf:/etc/letsencrypt - ./data/certbot/www:/var/www/certbot expose: - "80" - "443" ports: - "80:80" - "443:443" networks: - default The data folder may be saved anywhere else, but make sure to know where it is and make sure to reference it properly when reused later. In this example, I am simply saving it in the same directory as the docker-compose.yml file. Once the above configurations are put into place, a couple of steps are to be taken in order to initialize the issuance of the certificates. 
Firstly, your Nginx configuration (default.conf) is to be changed to accommodate the domain verification request: server { listen 80; server_name example.com www.example.com; server_tokens off; location / { return 301 https://$server_name$request_uri; } location /.well-known/acme-challenge/ { root /var/www/certbot; } } server { listen 443 ssl; server_name example.com www.example.com; server_tokens off; ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; include /etc/letsencrypt/options-ssl-nginx.conf; ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; location / { root /usr/share/nginx/html; index index.html index.htm; try_files $uri /index.html; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } Once the Nginx configuration file is amended, a dummy certificate is created to allow for Let's Encrypt validation to take place. There is a script that does all of this automatically, which can be downloaded, into the root of the project, using CURL, before being amended to suit the environment. The script would also need to be made executable using the chmod command: curl -L https://raw.githubusercontent.com/wmnnd/nginx-certbot/master/init-letsencrypt.sh > init-letsencrypt.sh && chmod +x init-letsencrypt.sh Once the script is downloaded, it is to be amended as follows: #!/bin/bash if ! [ -x "$(command -v docker-compose)" ]; then echo 'Error: docker-compose is not installed.' 
>&2 exit 1 fi -domains=(example.org www.example.org) +domains=(example.com www.example.com) rsa_key_size=4096 -data_path="./data/certbot" +data_path="./data/certbot" -email="" # Adding a valid address is strongly recommended +email="admin#example.com" # Adding a valid address is strongly recommended staging=0 # Set to 1 when testing setup to avoid hitting request limits if [ -d "$data_path" ]; then read -p "Existing data found for $domains. Continue and replace existing certificate? (y/N) " decision if [ "$decision" != "Y" ] && [ "$decision" != "y" ]; then exit fi fi if [ ! -e "$data_path/conf/options-ssl-nginx.conf" ] || [ ! -e "$data_path/conf/ssl-dhparams.pem" ]; then echo "### Downloading recommended TLS parameters ..." mkdir -p "$data_path/conf" curl -s https://raw.githubusercontent.com/certbot/certbot/master/certbot-nginx/certbot_nginx/tls_configs/options-ssl-nginx.conf > "$data_path/conf/options-ssl-nginx.conf" curl -s https://raw.githubusercontent.com/certbot/certbot/master/certbot/ssl-dhparams.pem > "$data_path/conf/ssl-dhparams.pem" echo fi echo "### Creating dummy certificate for $domains ..." path="/etc/letsencrypt/live/$domains" mkdir -p "$data_path/conf/live/$domains" -docker-compose run --rm --entrypoint "\ +docker-compose -f docker-compose.yml run --rm --entrypoint "\ openssl req -x509 -nodes -newkey rsa:1024 -days 1\ -keyout '$path/privkey.pem' \ -out '$path/fullchain.pem' \ -subj '/CN=localhost'" certbot echo echo "### Starting nginx ..." -docker-compose up --force-recreate -d nginx +docker-compose -f docker-compose.yml up --force-recreate -d service_name echo echo "### Deleting dummy certificate for $domains ..." -docker-compose run --rm --entrypoint "\ +docker-compose -f docker-compose.yml run --rm --entrypoint "\ rm -Rf /etc/letsencrypt/live/$domains && \ rm -Rf /etc/letsencrypt/archive/$domains && \ rm -Rf /etc/letsencrypt/renewal/$domains.conf" certbot echo echo "### Requesting Let's Encrypt certificate for $domains ..." 
#Join $domains to -d args domain_args="" for domain in "${domains[#]}"; do domain_args="$domain_args -d $domain" done # Select appropriate email arg case "$email" in "") email_arg="--register-unsafely-without-email" ;; *) email_arg="--email $email" ;; esac # Enable staging mode if needed if [ $staging != "0" ]; then staging_arg="--staging"; fi -docker-compose run --rm --entrypoint "\ +docker-compose -f docker-compose.yml run --rm --entrypoint "\ certbot certonly --webroot -w /var/www/certbot \ $staging_arg \ $email_arg \ $domain_args \ --rsa-key-size $rsa_key_size \ --agree-tos \ --force-renewal" certbot echo echo "### Reloading nginx ..." -docker-compose exec nginx nginx -s reload +docker-compose exec service_name nginx -s reload I have made sure to always include the -f flag with the docker-compose command just in case someone doesn't know what to change if they had a custom named docker-compose.yml file. I have also made sure to set the service name as service_name to make sure to differentiate between the service name and the Nginx command, unlike the tutorial. Note: If unsure about the fact that the setup is working, make sure to set staging as 1 to avoid hitting request limits. It is important to remember to set it back to 0 once testing is done and redo all steps from amending the init-letsencrypt.sh file. Once testing is done and the staging is set to 0, it is important to stop previous running containers and delete the data folder for the proper initial certification to ensue: $ docker-compose -f docker-compose.yml down && yes | docker system prune -a --volumes && sudo rm -rf ./data Once the certificates are ready to be initialized, the script is to be run using sudo; it is very important to use sudo, as issues will occur with the permissions inside the containers if run without it. 
$ sudo ./init-letsencrypt.sh

After the certificate is issued, there is the matter of renewing it automatically; two things need to be done:

In the Nginx container, Nginx reloads the newly obtained certificates through the following amendment:

service_name:
  ...
  - command: /bin/ash -c "exec nginx -g 'daemon off;'"
  + command: /bin/ash -c "while :; do sleep 6h & wait $${!}; nginx -s reload; done & exec nginx -g 'daemon off;'"
  ...

In the Certbot container section, the following is added so that the certificate is checked for renewal every twelve hours, as recommended by Let's Encrypt:

certbot:
  ...
  + entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew --webroot -w /var/www/certbot; sleep 12h & wait $${!}; done;'"

Before running docker-compose -f docker-compose.yml up, change the ownership of the data folder to ec2-user; this avoids running into permission errors, or having to run the command with sudo:

sudo chown ec2-user:ec2-user -R /path/to/data/

Don't forget to add a CAA record for Let's Encrypt with your DNS provider. You may read here for more information on how to do so.

If you run into any issues with the Nginx container because you are substituting variables and $server_name and $request_uri are not appearing properly, you may refer to this issue.
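A note on the `sleep 6h & wait $${!}` idiom in both commands above: the doubled `$$` is docker-compose escaping for a literal `$`, so the container shell actually sees `${!}`, the PID of the backgrounded sleep. Backgrounding the sleep and waiting on it keeps the loop responsive to signals, unlike a plain foreground sleep. Here is a toy version with a one-second interval and an echo standing in for nginx -s reload, so it can run anywhere:

```shell
# Toy version of the reload loop used in the nginx and certbot services.
i=0
while [ "$i" -lt 3 ]; do
  sleep 1 & wait $!          # written as $${!} inside docker-compose.yml
  echo "reload pass $i"      # stands in for: nginx -s reload
  i=$((i + 1))
done
```

In the real services the loop never terminates; the counter here only bounds the demonstration.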
How to resolve a LetsEncrypt/Certbot 404 behind a Dockerized Reverse Proxy?
I have a couple of web domains behind a reverse proxy in Docker... As context, here's a snippet from the docker-compose.yml:

version: '2'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginxREVERSE
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  site1:
    container_name: 'nginxsite1'
    image: nginx:latest
    volumes:
      - ./sites-available/site1.com/index.html:/usr/share/nginx/html/index.html
      - ./sites-available/site1.com/nginx.conf:/etc/nginx/conf.d/default.conf
    ports:
      - 8080:80
    environment:
      - VIRTUAL_HOST=site1.com,www.site1.com
      - VIRTUAL_PORT:80
      - VIRTUAL_PORT:443
  site2:
    container_name: 'nginxsite2'
    image: nginx:latest
    volumes:
      - ./sites-available/site2.com/index.html:/usr/share/nginx/html/index.html
    ports:
      - 8082:80
    environment:
      - VIRTUAL_HOST=site2.com,www.site2.com
      - VIRTUAL_PORT:80

And this works perfectly in my browser. I can go to site1.com/www.site1.com or site2.com/www.site2.com and get proxied to the correct index.html page.

site1.com's nginx.conf file:

server {
    listen 80;
    listen [::]:80;
    server_name site1.com www.site1.com;

    location ~ /.well-known/acme-challenge {
        allow all;
        root /usr/share/nginx/html;
    }

    root /usr/share/nginx/html;
    index index.html;
}

I'm running Certbot in Docker using this command:

sudo docker run -it --rm \
  -v /docker-volumes/etc/letsencrypt:/etc/letsencrypt \
  -v /docker-volumes/var/lib/letsencrypt:/var/lib/letsencrypt \
  -v /docker/letsencrypt-docker-nginx/src/letsencrypt/letsencrypt-site:/data/letsencrypt \
  -v "/docker-volumes/var/log/letsencrypt:/var/log/letsencrypt" \
  certbot/certbot \
  certonly --webroot \
  --register-unsafely-without-email --agree-tos \
  --webroot-path=/data/letsencrypt \
  --staging \
  -d site1.com -d www.site1.com

When I port forward from the router to the site1.com container directly, the above works. When I port forward to the reverse proxy, I get this 404 error from Certbot: Failed authorization procedure.
site1.com (http-01): urn:ietf:params:acme:error:unauthorized :: The client lacks sufficient authorization :: Invalid response from http://site1.com/.well-known/acme-challenge/x05mYoqEiWlrRFH9ye6VZfEiX-mlwEffVt2kP3twoOU: "<html>\r\n<head><title>404 Not Found</title></head>\r\n<body>\r\n<center><h1>404 Not Found</h1></center>\r\n<hr><center>nginx/1.15.5</ce", www.site1.com (http-01): urn:ietf:params:acme:error:unauthorized :: The client lacks sufficient authorization :: Invalid response from http://www.site1.com/.well-known/acme-challenge/AIDgGYg1WiQRm4-dOVK6fV8-vKqR940nLPzT9poFUZA: "<html>\r\n<head><title>404 Not Found</title></head>\r\n<body>\r\n<center><h1>404 Not Found</h1></center>\r\n<hr><center>nginx/1.15.5</ce"

IMPORTANT NOTES:
 - The following errors were reported by the server:

   Domain: site1.com
   Type: unauthorized
   Detail: Invalid response from
   http://site1.com/.well-known/acme-challenge/x05mYoqEiWlrRFH9ye6VZfEiX-mlwEOU:
   "<html>\r\n<head><title>404 Not Found</title></head>\r\n<body>\r\n<center><h1>404 Not Found</h1></center>\r\n<hr><center>nginx/1.15.5</ce"

   Domain: www.site1.com
   Type: unauthorized
   Detail: Invalid response from
   http://www.site1.com/.well-known/acme-challenge/AIDgGYg1WiQRm4-dOVK6fV8-poFUZA:
   "<html>\r\n<head><title>404 Not Found</title></head>\r\n<body>\r\n<center><h1>404 Not Found</h1></center>\r\n<hr><center>nginx/1.15.5</ce"

   To fix these errors, please make sure that your domain name was entered
   correctly and the DNS A/AAAA record(s) for that domain contain(s) the
   right IP address.

What am I missing that allows me to access the sites behind the reverse proxy from my browser but won't allow Certbot?
The challenge location in your site1.com nginx.conf file doesn't match the certbot option --webroot-path; that is why you get a 404 error. Below is a possible correction.

site1.com's nginx.conf file:

server {
    listen 80;
    listen [::]:80;
    server_name site1.com www.site1.com;

    location ~ /.well-known/acme-challenge {
        alias /usr/share/nginx/html;
        try_files $uri =404;
    }

    root /usr/share/nginx/html;
    index index.html;
}

Certbot in Docker using this command:

sudo docker run -it --rm \
  -v /docker-volumes/etc/letsencrypt:/etc/letsencrypt \
  -v /docker-volumes/var/lib/letsencrypt:/var/lib/letsencrypt \
  -v /docker/letsencrypt-docker-nginx/src/letsencrypt/letsencrypt-site:/data/letsencrypt \
  -v "/docker-volumes/var/log/letsencrypt:/var/log/letsencrypt" \
  certbot/certbot \
  certonly --webroot \
  --register-unsafely-without-email --agree-tos \
  --webroot-path=/usr/share/nginx/html \
  --staging \
  -d site1.com -d www.site1.com
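The underlying requirement, whatever exact paths you pick, is that certbot's --webroot-path and the directory nginx serves for /.well-known/acme-challenge resolve to the same files (shared between the two containers by a volume). A minimal local sketch of the handshake, using a temporary directory as a hypothetical stand-in for that shared webroot:

```shell
# Simulate the webroot handshake: certbot drops a token file under
# <webroot>/.well-known/acme-challenge/, and the web server must return
# that exact file when Let's Encrypt fetches the corresponding URL.
webroot="$(mktemp -d)"                      # stand-in for the shared volume
mkdir -p "$webroot/.well-known/acme-challenge"
printf 'token-contents' > "$webroot/.well-known/acme-challenge/test-token"

# What nginx must serve for http://site1.com/.well-known/acme-challenge/test-token:
served="$(cat "$webroot/.well-known/acme-challenge/test-token")"
echo "$served"
```

If the two sides point at different directories, as in the question (/data/letsencrypt on the certbot side versus /usr/share/nginx/html on the nginx side), the server looks for the token in the wrong place and returns 404.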