Please help. I've been stuck on this for four days, testing dozens of configurations and parameter combinations (mainly in haproxy.cfg), and I'm out of strength and ideas about what I'm doing wrong. This is my first time with HAProxy, so I'm probably making some simple mistake.
My goal is to configure the environment so that the main web application (running natively on the virtual machine on port 8050) and application 1, application 2, etc. (web applications running in the Docker environment over plain HTTP) are available from the Internet at public domain addresses secured with HTTPS.
Below are my configuration files and an overview diagram of the environment. Certbot works, I have obtained the certificate, and all applications work over plain HTTP using their DNS names. Whenever I use https:// I get 503 Service Unavailable (but the connection is secured with the certificate).
Any help or hint would be appreciated.
haproxy: 2.6.2
server {
    listen 8050 default_server;
    listen [::]:8050 default_server;
    root /usr/share/nginx/example.com/html;
    index index.html index.htm;

    # Limit file upload size in nginx
    client_max_body_size 1024M;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
        try_files $uri $uri/ =404;
    }

    location ~ [^/]\.php(/|$) {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_index index.php;
        fastcgi_pass php-fpm;  # (or 127.0.0.1:9000)
        include fastcgi_params;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
HAProxy
global
    daemon
    # Enable HAProxy runtime API
    stats socket :9999 level admin expose-fd listeners

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
    timeout http-request 10000ms
    option http-keep-alive
    option log-health-checks
    log global

# HTTP
frontend http-in
    bind :80
    # Forward HTTPS
    # http-request redirect scheme https code 301 unless { ssl_fc }
    # http-request redirect scheme https unless { ssl_fc }
    # Certbot
    acl certbot path_beg /.well-known/acme-challenge/
    use_backend certbot_be if certbot
    # Website 1 example.com
    # redirect scheme http if { hdr(Host) -i example.com } { ssl_fc }
    acl host_example.com hdr(host) -i example.com
    use_backend web-example.com_be if host_example.com
    # Website 2 phppgadmin.example.com
    acl host_phppgadmin hdr(host) -i phppgadmin.example.com
    use_backend web-phppgadmin_be if host_phppgadmin
    # Website 3 portainer.example.com
    acl host_portainer hdr(host) -i portainer.example.com
    use_backend web-portainer_be if host_portainer
    default_backend web-example.com_be

# HTTPS
frontend https-in
    bind :443 ssl crt /usr/local/etc/haproxy/certificates/
    http-request add-header X-Forwarded-Proto https
    # Certbot
    acl certbot path_beg /.well-known/acme-challenge/
    use_backend certbot_be if certbot

# Docker resolver
resolvers docker
    nameserver dns1 127.0.0.11:53

# Certbot backend
backend certbot_be
    server xxx.lan.example.com certbot:380

backend web-example.com_be
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    server xxx.lan.example.com 10.10.10.114:8050 ssl verify none

backend web-phppgadmin_be
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    server xxx.lan.example.com phppgadmin:8001

backend web-portainer_be
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    server xxx.lan.example.com portainer:8002
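For context on the 503s: HAProxy returns 503 when a request matches no backend, and as written the https-in frontend routes only ACME challenges — it has no host ACLs, no use_backend lines for the three websites, and no default_backend, so every HTTPS request outside /.well-known/acme-challenge/ ends up with no server. A minimal sketch of the missing routing, assuming the same backend names defined above:

```
frontend https-in
    bind :443 ssl crt /usr/local/etc/haproxy/certificates/
    http-request add-header X-Forwarded-Proto https

    acl certbot path_beg /.well-known/acme-challenge/
    use_backend certbot_be if certbot

    # Mirror the Host-based routing from the http-in frontend
    acl host_example hdr(host) -i example.com
    use_backend web-example.com_be if host_example
    acl host_phppgadmin hdr(host) -i phppgadmin.example.com
    use_backend web-phppgadmin_be if host_phppgadmin
    acl host_portainer hdr(host) -i portainer.example.com
    use_backend web-portainer_be if host_portainer
    default_backend web-example.com_be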
Related
I want to deploy a Growthbook (A/B testing tool) container along with an Nginx reverse proxy that handles SSL/TLS encryption (i.e. SSL termination) on AWS ECS. I was trying to deploy using a Docker Compose file (i.e. the Docker ECS context). The problem is that it creates all the necessary resources — Network Load Balancers, target groups, ECS task definitions, etc. — and then abruptly fails while creating the ECS service and deletes all the resources it created when I run docker compose --project-name growthbook up. The reason it gives is "Nginx sidecar container exited".
Here is my docker-compose.yml file:
# docker-compose.yml
version: "3"
x-aws-vpc: "vpc-*************"
services:
  growthbook:
    image: "growthbook/growthbook:latest"
    ports:
      - 3000:3000
      - 3100:3100
    environment:
      - MONGODB_URI=<mongo_db_connection_string>
      - JWT_SECRET=<jwt_secret>
    volumes:
      - uploads:/usr/local/src/app/packages/back-end/uploads
  nginx-tls-sidecar:
    image: <nginx_sidecar_image>
    ports:
      - 443:443
    links:
      - growthbook
volumes:
  uploads:
And here is the Dockerfile used to build the nginx sidecar image:
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
COPY ssl.key /etc/nginx/ssl.key
COPY ssl.crt /etc/nginx/ssl.crt
In the above Dockerfile, the SSL key and certificate are self-signed, generated using openssl, and are in order.
And here is my nginx.conf file:
# nginx configuration file
# https://wiki.nginx.org/Configuration

# Run as a less privileged user for security reasons.
user nginx;
worker_processes auto;

events {
    worker_connections 1024;
}

pid /var/run/nginx.pid;

http {
    # Redirect to https, using 307 instead of 301 to preserve POST data
    server {
        listen [::]:443 ssl;
        listen 443 ssl;
        server_name localhost;

        # Protect against the BEAST attack by not using SSLv3 at all. If you need to support
        # older browsers (IE6) you may need to add SSLv3 to the list of protocols below.
        ssl_protocols TLSv1.2;

        # Ciphers set to best allow protection from BEAST while providing forward secrecy,
        # as defined by Mozilla - https://wiki.mozilla.org/Security/Server_Side_TLS#Nginx
        ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:ECDHE-RSA-RC4-SHA:ECDHE-ECDSA-RC4-SHA:AES128:AES256:RC4-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!3DES:!MD5:!PSK;
        ssl_prefer_server_ciphers on;

        # Optimize TLS/SSL by caching session parameters for 10 minutes. This cuts down on the
        # number of expensive TLS/SSL handshakes. The handshake is the most CPU-intensive
        # operation, and by default it is re-negotiated on every new/parallel connection.
        # By enabling a cache (of type "shared between all Nginx workers"), we tell the client
        # to re-use the already negotiated state.
        # Further optimization can be achieved by raising keepalive_timeout, but that
        # shouldn't be done unless you serve primarily HTTPS.
        ssl_session_cache shared:SSL:10m; # a 1mb cache can hold about 4000 sessions, so we can hold 40000 sessions
        ssl_session_timeout 24h;

        # Use a higher keepalive timeout to reduce the need for repeated handshakes
        keepalive_timeout 300; # up from 75 secs default

        # Remember the certificate for a year and automatically connect to HTTPS
        add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains';

        ssl_certificate /etc/nginx/ssl.crt;
        ssl_certificate_key /etc/nginx/ssl.key;

        location / {
            proxy_pass http://localhost:3000; # TODO: replace port if app listens on port other than 80
            proxy_set_header Connection "";
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $remote_addr;
        }

        location / {
            proxy_pass http://localhost:3100; # TODO: replace port if app listens on port other than 80
            proxy_set_header Connection "";
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
}
Basically, Growthbook exposes its services on http://localhost:3000 and http://localhost:3100, and the Nginx sidecar container only listens on port 443. I need to be able to proxy from Nginx's port 443 to both endpoints exposed by Growthbook.
Help is much appreciated if you find any mistakes in my configuration :)
By default, the Growthbook service does not provide TLS encryption, so I used Nginx as a sidecar to handle SSL termination. As an end result, I need to be able to run the Growthbook service with TLS encryption on AWS ECS.
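One concrete mistake worth noting: nginx refuses to start when a server block contains two identical location / blocks (it aborts with `duplicate location "/"`), which would make the sidecar container exit immediately — matching the error ECS reports. Two endpoints behind one port need distinguishable routes. A minimal sketch, where the /api/ prefix is purely illustrative (another option is a second server block on its own port or server_name):

```
http {
    server {
        listen 443 ssl;
        server_name localhost;
        ssl_certificate /etc/nginx/ssl.crt;
        ssl_certificate_key /etc/nginx/ssl.key;

        # Growthbook app on 3000
        location / {
            proxy_pass http://localhost:3000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }

        # Growthbook's second endpoint on 3100; the /api/ prefix is an
        # illustrative assumption, not something Growthbook mandates
        location /api/ {
            proxy_pass http://localhost:3100/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
```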
I'm running docker-compose with the vaultwarden image on internal port 8732. Nginx runs natively outside Docker as a reverse proxy, listening for requests on 80/443 and forwarding them into Docker on 8732.
The problem is that the static files (.css, .js, .png) are not served and return error 404, while the main page successfully gets status 200.
Here's my docker-compose file:
version: '3'
services:
  vaultwarden:
    image: vaultwarden/server:latest
    container_name: vaultwarden
    restart: always
    volumes:
      - ./vw-data:/data
    ports:
      - 8732:80
    environment:
      WEBSOCKET_ENABLED: "true" # Enable WebSocket notifications.
Here's the Nginx config file:
# HTTPS configuration
server {
    # SSL configuration
    listen [::]:443 ssl default_server ipv6only=on;
    listen 443 ssl default_server;

    ssl_certificate /etc/letsencrypt/live/bw.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/bw.example.com/privkey.pem;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    # include snippets/snakeoil.conf;

    server_name bw.example.com;
    root /opt/vault;
    # index index.html index.htm index.nginx-debian.html;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        proxy_pass http://localhost:8732;
        try_files $uri $uri/ =404;
    }

    #location ~ /\.ht {
    #    deny all;
    #}
}

# HTTP config
server {
    if ($host = bw.example.com) {
        return 301 https://$host$request_uri;
    }

    listen 80 default_server;
    listen [::]:80 default_server;
    server_name bw.example.com;
    return 404;
}
I've fixed this by commenting out (or deleting) the try_files line in the nginx configuration file:
#try_files $uri $uri/ =404;
try_files checks the existence of files in the specified order and uses the first found file for request processing; the processing is performed in the current context.
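In other words, try_files tests each path against the local filesystem (under root /opt/vault here), so any static asset that doesn't exist on the proxy host falls through to =404 before proxy_pass ever gets a chance. A minimal sketch of the proxied location with the directive removed:

```
location / {
    # Hand everything to vaultwarden; no local file lookup
    proxy_pass http://localhost:8732;
}
```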
I'm running into a frustrating issue I can't figure out. I have nginx running on an EC2 instance to receive requests and route them to a Docker container on the same EC2 instance running my Django app. Within the container, I have gunicorn and nginx (again) running to handle the web traffic.
All works well if I go to my domain name or IP over http, but with https it just hangs and eventually times out. I don't see anything in the logs that might indicate what's going on. Since everything works with http, I suspect it's an nginx config issue and nothing to do with my DNS configuration (but I'm not sure). For DNS, I've configured an A record that points to an Elastic IP and a CNAME for www.
Here is the nginx load balancer / reverse proxy config (running directly on the EC2 instance):
server {
    server_name mysite.com www.mysite.com;

    location / {
        proxy_pass http://172.17.0.1:8080;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/mysite.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/mysite.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    #if ($host = www.mysite.com) {
    #    return 301 https://$host$request_uri;
    #} # managed by Certbot

    #if ($host = mysite.com) {
    #    return 301 https://$host$request_uri;
    #} # managed by Certbot

    listen 80;
    server_name mysite.com www.mysite.com;
    #return 404; # managed by Certbot

    # to be deleted
    location / {
        proxy_pass http://172.17.0.1:8080;
    }
}
I have temporarily enabled traffic on the port just for testing but will disable it when everything is up and running.
Here is the nginx configuration within the Docker container (used for serving static files).
error_log /dev/stdout info;

server {
    listen [::]:8080;
    server_name _;

    location /static/ {
        alias /opt/www/mysite/static/;
        expires 30d;
    }

    location / {
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://localhost:10000;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_cache_valid 200 302 60m;
        proxy_cache_valid 404 1m;
        proxy_cache_bypass $http_authorization;
        add_header X-Cache-Status $upstream_cache_status;
    }
}
Conceptually, I don't see anything wrong with my setup, even though it's a little messy to have two instances of nginx running. I'm not locked into using nginx (it's just what I'm most familiar with), so I'm open to alternatives (thinking it might be better to use Traefik on the EC2 instance itself).
I'm facing the following problem. I'm running a VPS with Docker containers (the VPS provider doesn't provide a NAT module, so all containers that connect to the Internet must run in host network_mode).
Natively, there is an NGINX reverse proxy to distribute HTTP traffic across the application containers. Now I would like to run a Docker image with PrestaShop and set up NGINX to route traffic to it by domain.
NGINX is configured to accept SSL connections and pass requests to the container as follows:
server {
    listen 80;
    listen [::]:80;
    server_name test.my-domain.pl;
    return 301 https://test.my-domain.pl$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name test.my-domain.pl;

    ssl on;
    ssl_certificate_key /etc/letsencrypt/live/my-domain.pl/privkey.pem;
    ssl_certificate /etc/letsencrypt/live/my-domain.pl/fullchain.pem;

    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    add_header Content-Security-Policy upgrade-insecure-requests;
    add_header X-Frame-Options SAMEORIGIN;
    add_header X-Content-Type-Options nosniff;
    add_header Referrer-Policy "strict-origin" always;
    add_header X-XSS-Protection "1; mode=block";

    location / {
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Scheme https;
        proxy_pass_request_headers on;
        proxy_pass "http://127.0.0.1:8402";
    }
}
My docker-compose file looks like this:
version: '2'
services:
  my-presta:
    image: my-presta:1.7
    network_mode: host
    environment:
      - PS_DOMAIN=test.my-domain.pl
      - PS_HANDLE_DYNAMIC_DOMAIN=1
      - PS_INSTALL_AUTO=0
  my-mysql:
    image: mysql:8.0.27
    environment:
      MYSQL_DATABASE: ***
      MYSQL_USER: ***
      MYSQL_PASSWORD: ***
      MYSQL_ROOT_PASSWORD: ***
    volumes:
      - my-mysql:/var/lib/mysql
    ports:
      - '8403:3306'
    command: --default-authentication-plugin=mysql_native_password
volumes:
  my-mysql:
Here, I'm using my enhanced :) PrestaShop Docker image my-presta, which I build this way:
FROM prestashop/prestashop:1.7
COPY 000-default.conf /etc/apache2/sites-enabled/000-default.conf
COPY ports.conf /etc/apache2/ports.conf
COPY Link.php /var/www/html/classes/Link.php
The 000-default.conf and ports.conf files change the VirtualHost port from 80 to 8402 so that the Apache server can work alongside the natively installed NGINX (ports 80 and 443). 000-default.conf:
<VirtualHost *:8402>
...
</VirtualHost>
and ports.conf:
Listen 8402
The Link.php file contains a hack in the getBaseLink() method where I force it to return https. The relevant part of it:
public function getBaseLink($idShop = null, $ssl = null, $relativeProtocol = false)
{
    ...
    $base = 'https://' . $shop->domain
    ...
}
Now, when I run the stack, install PrestaShop, and connect it successfully to the database, I face an infinite redirect loop on the '/' base path. Everything works fine on other paths, like /contact, /item-1, etc.
To be more specific, the following curl gives me a 302 return code and a Location header pointing to https://test.my-domain.pl:
curl http://127.0.0.1:8402/ -v -H 'Host: test.my-domain.pl' -H 'X-Schema: https' -H 'X-Forwarded-Proto: https'
whereas the same curl with an extended path returns 200/404 and the HTML content:
curl http://127.0.0.1:8402/contact -v -H 'Host: test.my-domain.pl' -H 'X-Schema: https' -H 'X-Forwarded-Proto: https'
In a web browser it likewise fails on https://test.my-domain.pl but works on https://test.my-domain.pl/contact.
Any idea how to make it work?
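One avenue worth checking (an assumption about the image's features, not something verified against this setup): the prestashop/prestashop base image documents a PS_ENABLE_SSL environment variable that makes the shop generate https:// base URLs itself. If it works here, it could replace the Link.php override and stop PrestaShop from 302-ing '/' back to what it believes is its canonical URL:

```
version: '2'
services:
  my-presta:
    image: my-presta:1.7
    network_mode: host
    environment:
      - PS_DOMAIN=test.my-domain.pl
      - PS_HANDLE_DYNAMIC_DOMAIN=1
      - PS_INSTALL_AUTO=0
      # Assumption: supported by the prestashop/prestashop base image;
      # tells PrestaShop to emit https:// links without patching Link.php
      - PS_ENABLE_SSL=1
```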
I'm trying to configure nginx to cache my API POST results, but it doesn't work yet.
Below are the steps I did:
1 - I installed nginx
2 - I created the configuration file:
"worker_processes 1;
events {
worker_connections 1024;
}
http {
proxy_cache_path /var/cache/nginx/oracle-query levels=1:2 keys_zone=oracle-query:10m max_size=1g
inactive=310s use_temp_path=off;
server {
listen 80;
root /home/docker-nginx/html;
index index.html index.htm;
server_name 172.17.0.1;
location /oracle-query {
auth_basic off;
add_header Cache-Control "no-cache, must-revalidate, max-age=0";
add_header X-Proxy-Cache $upstream_cache_status;
proxy_cache oracle-query;
proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504 http_404;
proxy_cache_lock on;
proxy_cache_valid any 600s;
proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
proxy_pass http://172.17.0.1:8081/r/vivo-person/person;
}
location / {
proxy_pass http://172.17.0.1:8081;
}
}
}"
3 - I started the nginx cache container with this configuration:
sudo docker run --link fnserver:fnserver --name nginx-cache -p 80:80 -p 443:443 -v $PWD/nginx.conf:/home/vmfn/docker-nginx/nginx.conf:ro nginx
My completer flow setting is sudo fn apps config set vivo-person COMPLETER_BASE_URL "http://$DOCKER_LOCALHOST:8081".
Without nginx my flow works well, but when I point the completer listener at port 80, I have issues too.
I need some help or a tutorial on how to configure this for my Fn setup.
I think
proxy_pass http://172.17.0.1:8081/r/vivo-person/person;
should most likely be
proxy_pass http://172.17.0.1:8080/r/vivo-person/person;
8080 is the Fn server port (by default) and 8081 is Flow.
It looks like you are trying to cache the whole function call, so you should proxy to 8080 (the fn server).
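Separately from the port mix-up: by default nginx's proxy_cache only stores responses to GET and HEAD, so POST results are never cached unless you opt in and include the request body in the cache key. A sketch of the extra directives for the /oracle-query location, under the assumption that the endpoint is called with POST (the upstream port follows the 8080 suggestion above):

```
location /oracle-query {
    proxy_cache oracle-query;
    # Opt in to caching POST responses (GET and HEAD are the default)
    proxy_cache_methods GET HEAD POST;
    # Key on the body too, since POSTs to the same URI can differ
    proxy_cache_key "$request_uri|$request_body";
    proxy_cache_valid any 600s;
    proxy_pass http://172.17.0.1:8080/r/vivo-person/person;
}
```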