How to install an SSL certificate on HAProxy in Docker using Let's Encrypt

I am using HAProxy to route domains and subdomains, and it is deployed on port 80. I want all the domains to be served over HTTPS with an SSL certificate.
global
    log xx.xx.90.28 local0
    log xx.xx.90.28 local1 notice
    maxconn 2048

defaults
    log global
    mode http
    option httplog
    option dontlognull
    option redispatch
    option forwardfor
    option http-server-close
    retries 3
    timeout connect 5000
    timeout client 10000
    timeout server 10000

frontend balancer
    bind *:80
    mode http
    stats enable
    stats uri /stats
    stats refresh 15s
    stats show-node
    stats auth admin:admin
    acl domain hdr_dom(host) -i www.example.com
    acl subdomain hdr_dom(host) -i app.example.com
    acl subdomain1 hdr_dom(host) -i example.com
    use_backend go_app_1 if domain
    use_backend go_app_2 if subdomain
    use_backend go_app_3 if subdomain1
backend go_app_1
    balance roundrobin
    mode http
    option forwardfor
    server go xx.xx.90.28:8081 check

backend go_app_2
    balance roundrobin
    mode http
    option forwardfor
    server go xx.xx.90.28:8082 check

backend go_app_3
    balance roundrobin
    mode http
    option forwardfor
    server go xx.xx.90.28:8081 check
And here is the Dockerfile:
FROM haproxy:2.1
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
Now I want to use Let's Encrypt to secure these URLs. Please guide me on how I can do this.

I think you need to set up Let's Encrypt locally first, then inject the resulting certificates into the container via a volume mount.
Example command from Docker Hub:
docker run -d --name my-running-haproxy -v /path/to/etc/haproxy:/usr/local/etc/haproxy:ro haproxy:1.7
Something like below:
-v /my/letsencrypt/setting/:/etc/ssl/
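To make this concrete, here is a minimal sketch of the whole flow. It assumes certbot is installed on the host, the domains already point at this machine, and port 80 is temporarily free for the standalone challenge (stop HAProxy briefly, or use the webroot or DNS challenge instead); all paths are illustrative. HAProxy wants the full certificate chain and the private key concatenated into a single PEM file:

# 1. Obtain certificates on the host
sudo certbot certonly --standalone -d www.example.com -d app.example.com

# 2. Concatenate chain + key into the single PEM file HAProxy expects
sudo mkdir -p /etc/ssl/haproxy
sudo cat /etc/letsencrypt/live/www.example.com/fullchain.pem \
    /etc/letsencrypt/live/www.example.com/privkey.pem |
    sudo tee /etc/ssl/haproxy/www.example.com.pem > /dev/null

# 3. Mount the certificates into the container and publish port 443
docker run -d --name my-running-haproxy \
    -v /path/to/etc/haproxy:/usr/local/etc/haproxy:ro \
    -v /etc/ssl/haproxy:/etc/ssl/haproxy:ro \
    -p 80:80 -p 443:443 haproxy:2.1

Then, in haproxy.cfg, terminate TLS on the frontend and redirect plain HTTP:

frontend balancer
    bind *:80
    bind *:443 ssl crt /etc/ssl/haproxy/www.example.com.pem
    redirect scheme https code 301 if !{ ssl_fc }

Keep in mind that Let's Encrypt certificates expire after 90 days, so the concatenation step has to be repeated on renewal (for example from a certbot renewal hook) and HAProxy reloaded.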

Related

HAProxy config for TCP load balancing in docker container

I'm trying to put an HAProxy load balancer in front of my RabbitMQ cluster (which is set up with nodes in separate Docker containers). I cannot find many examples of an HAProxy config for this setup.
global
    debug

defaults
    log global
    mode tcp
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend main
    bind *:8089
    default_backend app

backend app
    balance roundrobin
    mode http
    server rabbit-1 172.18.0.2:8084
    server rabbit-2 172.18.0.3:8085
    server rabbit-3 172.18.0.4:8086
In this example, what should I put in place of the IP addresses of the Docker containers?
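One hedged option: if the HAProxy container and the RabbitMQ containers share a user-defined Docker network, Docker's embedded DNS resolves container names, so the server lines can use names instead of hard-coded IPs. A sketch, with made-up container and network names:

docker network create rabbitnet
docker run -d --name rabbit-1 --net rabbitnet rabbitmq
docker run -d --name rabbit-2 --net rabbitnet rabbitmq
docker run -d --name rabbit-3 --net rabbitnet rabbitmq
docker run -d --name haproxy --net rabbitnet -p 8089:8089 haproxy:1.7

backend app
    balance roundrobin
    server rabbit-1 rabbit-1:8084 check
    server rabbit-2 rabbit-2:8085 check
    server rabbit-3 rabbit-3:8086 check

The names must resolve when HAProxy starts, so start the RabbitMQ containers first (or use a resolvers section with default-server init-addr none, as in the dynamic DNS question below).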

HAProxy path manipulation

In Google Cloud Platform we have an HAProxy instance that serves as a proxy for our internal VPN; we do URL redirects from GCP DNS to proxy.mycompany.com (our HAProxy), where we do further manipulation.
Recently I installed nginx to serve static files. HAProxy is from image 2.0.8-alpine; nginx is based on image 1-alpine.
nginx is set up with autoindex on, is accessible at static-files.mycompany.com, and is configured to serve the root of a GCS bucket. Therefore, if you visit static-files.mycompany.com you will see the contents of the bucket:
- the directory static-files
- any files in the bucket
The static-files directory contains the static website files generated by Hugo; those files contain paths like /docu/file.md.
I've managed to configure HAProxy so that it forwards any request for website.mycompany.com to static-files.mycompany.com/static-files and shows the generated static website, by adding this to the backend section of the HAProxy config:
acl p_root path -i /
http-request set-path /static-files/\1 if p_root
Every time I visit website.mycompany.com, the path is redirected to website.mycompany.com/static-files (which is actually static-files.mycompany.com/static-files).
However, the site is broken:
- CSS is not loading: the file is located in /, and the request goes to website.mycompany.com/mycss.file, but the file can be found under website.mycompany.com/static-files/mycss.file due to the path manipulation above.
- Every link on the site is dead: the request goes to website.mycompany.com/my-link-to-file, but the file is under website.mycompany.com/static-files/my-link-to-file.
I have limited ability to configure HAProxy; most probably I can only add parameters to the existing backend and frontend sections. Here are the parameters I cannot change:
global
    log stdout format raw local0 info
    maxconn 30000
    tune.ssl.default-dh-param 2048

resolvers vpc
    parse-resolv-conf
    hold valid 120s

defaults
    mode http
    log global
    option dontlognull
    #option tcplog
    option httplog
    option forwardfor
    option redispatch
    maxconn 3000
    retries 3
    timeout connect 5s
    timeout client 50s
    timeout server 50s
    timeout tunnel 1h
    timeout client-fin 30s
    timeout http-keep-alive 4s
    balance roundrobin
    default-server check resolvers vpc
    default-server on-marked-down shutdown-sessions
    default-server max-reuse 100

frontend http
    bind *:80
    redirect scheme https code 302 if !{ ssl_fc }
    monitor-uri /_healtz
    use_backend %[hdr(host),lower]

frontend https
    bind *:443 ssl no-sslv3 crt /config/haproxy_certificate.pem alpn http/1.1,h2
    http-response add-header Via 1.1\ %[env(HOSTNAME)]
    http-request add-header Via 1.1\ %[env(HOSTNAME)]
    http-request add-header X-Forwarded-Proto https
    http-request capture req.hdr(Host) len 40
    http-request capture req.hdr(User-Agent) len 120
    use_backend %[hdr(host),lower]

backend proxy.mycompany.com
    stats enable
    stats uri /
I would like to achieve the following: by visiting website.mycompany.com, I am forwarded to static-files.mycompany.com/static-files in the background, but the URL remains website.mycompany.com, so the paths in the generated static website keep working, and with them the static site. Or am I mistaken?
I am open to any reasonable suggestion. Thank you.
Well, it was quite simple in the end:
http-request replace-uri ([^/:]*://[^/]*)?(.*) \1/static-files\2
Did the trick.
Edit: to avoid an infinite loop of /static-files/static-files/:
acl remove_static-files path_beg -i /architecture-diagrams
http-request replace-uri ([^/:]*://[^/]*)?(.*) \1/static-files\2 if !remove_static-files
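A gloss on that rule (my reading, not from the original answer): the first capture group optionally matches a scheme://host prefix, so absolute-form request URIs survive the rewrite; the second captures the path; the replacement reassembles them with /static-files prefixed onto the path. A guard keyed on the prefix itself would avoid the loop more directly (sketch; the acl name is made up):

acl already_prefixed path_beg -i /static-files
http-request replace-uri ([^/:]*://[^/]*)?(.*) \1/static-files\2 if !already_prefixed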

HAProxy forward proxy works on HTTP but gives 503 on HTTPS

I'm expecting the following config to receive HTTPS requests, do the SSL offloading, and send HTTP requests to my backends; however, with HTTPS I get "503 Service Unavailable".
- All ACLs work correctly on HTTP, and the stats page shows the backends as online.
- The stats page works correctly on HTTPS.
- These are all in a Docker Compose file, and Docker is resolving the names to internal IPs correctly.
Perhaps I'm missing something obvious? I'm quite new to this, so any help is appreciated.
global
    tune.ssl.default-dh-param 2048

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

backend certbot
    option httpchk GET /
    default-server init-addr libc,none
    server certbot_server certbot check port 80

backend client
    option httpchk HEAD /
    server client_server client check port 80

backend api
    option httpchk OPTIONS /api/healthcheck
    server api_server api check port 80

frontend app
    bind *:80
    bind *:443 ssl crt /certs/productpedia.co.uk.pem
    use_backend certbot if { path_beg -i /.well-known/acme-challenge/ }
    use_backend api if { path_beg /api }
    default_backend client
    stats enable
    stats uri /stats
    stats refresh 10s
    stats admin if LOCALHOST
EDIT:
I attached a Wireshark trace of the 503 request; it looks like the server is resetting the connection, but I'm not sure where to go from here or what would be causing it.
After a full day of debugging, it looks like simply specifying port 80 on the server lines did the trick. I would have expected the default port to be 80, but when no port is given on a server line, HAProxy seems to forward to the same port the client connected on, so HTTPS traffic arriving on 443 was being passed to the backends on 443. After this change I could also get rid of the "port 80" after "check", which was the original hint that something might be off there.
backend certbot
    option httpchk GET /
    default-server init-addr libc,none
    server certbot_server certbot:80 check

backend client
    option httpchk HEAD /
    server client_server client:80 check

backend api
    option httpchk OPTIONS /api/healthcheck
    server api_server api:80 check

Dynamic DNS Resolution with HAProxy and Docker

I'm trying to set up HAProxy inside a Docker host, using HAProxy 1.7 and Docker 1.12.
My haproxy.cfg looks like:
# Simple configuration for an HTTP proxy listening on port 81 on all
# interfaces and forwarding requests to a single backend "servers" with a
# single server "server1" listening on 127.0.0.1:8000

global
    daemon
    maxconn 256

resolvers docker
    # nameserver dnsmasq 127.0.0.1:53
    nameserver dns 127.0.0.1:53

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
    default-server init-addr none

frontend http-in
    bind *:80
    default_backend www_somedomain1_com
    # Define hosts
    acl host_www_somedomain1_com hdr(host) -i somedomain1.com
    acl host_www_somedomain1_com hdr(host) -i www.somedomain1.com
    acl host_www_somedomain2_com hdr(host) -i www.somedomain2.com
    ## figure out which one to use
    use_backend www_somedomain1_com if host_www_somedomain1_com
    use_backend www_somedomain2_com if host_www_somedomain2_com

backend www_somedomain1_com
    # Utilizing the Docker DNS to resolve below host
    # server server1 www-somedomain1-com maxconn 32 check port 80
    server server1 www-somedomain1-com resolvers docker check maxconn 32

backend www_somedomain2_com
    # Utilizing the Docker DNS to resolve below host
    # server server1 www-somedomain2-com maxconn 32 check resolvers docker resolve-prefer ipv4
    server server1 www-somedomain2-com maxconn 32 check port 80
I want to use Docker's embedded DNS system, which, to my understanding, is only enabled when using a user-defined network.
So I create a network (using the default bridge driver):
docker network create mynetwork
When I run my two named Docker containers (myhaproxy and www-somedomain1-com), I add them to that network with the --net flag.
Docker run commands:
docker run --name myhaproxy --net mynetwork -p 80:80 -d haproxy
docker run --name www-somedomain1-com --net mynetwork -d nginx
I know the Docker DNS is functional because the containers can resolve each other when I hop into them with a bash shell. But I can't get the right combination/config in HAProxy to enable dynamic DNS resolution.
The HAProxy stats page always shows the downstream backends as brown (a resolution issue).
Some things that have helped:
- the "default-server init-addr none" lets the HAProxy config check pass on startup.
Any guidance is greatly appreciated!
I think your issue is that you are using 127.0.0.1:53 for your resolver's nameserver, when it needs to be 127.0.0.11:53, the address of Docker's embedded DNS server on user-defined networks.
Here is my HAProxy setup for dev Docker stuff:
global
    quiet

defaults
    log global
    mode http
    option forwardfor
    timeout connect 60s
    timeout client 60s
    timeout server 60s
    default-server init-addr none

resolvers docker_resolver
    nameserver dns 127.0.0.11:53

frontend https-proxy
    bind 0.0.0.0:80
    bind 0.0.0.0:443 ssl crt /usr/local/etc/haproxy/dev_server.pem
    redirect scheme https if !{ ssl_fc }
    acl is_api_server hdr(host) -i mywebsite
    use_backend api_server if is_api_server

backend api_server
    server haproxyapi api-server-dev:80 check inter 10s resolvers docker_resolver resolve-prefer ipv4
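Applied to the config in the question, the minimal change would presumably be just the nameserver line (a sketch):

resolvers docker
    nameserver dns 127.0.0.11:53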

HAProxy Sticky Sessions Node.js iOS Socket.io

I am trying to implement sticky sessions with HAProxy.
I have an HAProxy instance that routes to two different Node.js servers, each running Socket.IO. I am connecting to these socket servers (via the HAProxy server) from an iOS app (https://github.com/pkyeck/socket.IO-objc).
Unlike with a web browser, the sticky sessions do not work; it is as if the client is not handling the cookie properly, so the HAProxy server just routes the request wherever it likes. Below you can see my HAProxy config (I have removed the IP addresses):
listen webfarm xxx.xxx.xxx.xxx:80
    mode http
    stats enable
    stats uri /haproxy?stats
    stats realm Haproxy\ Statistics
    stats auth haproxy:stats
    balance roundrobin
    #replace XXXX with customer site name
    cookie SERVERID insert indirect nocache
    option httpclose
    option forwardfor
    #replace with web node private ip
    server web01 yyy.yyy.yyy.yyy:8000 cookie server1 weight 1 maxconn 1024 check
    #replace with web node private ip
    server web02 zzz.zzz.zzz.zzz:8000 cookie server2 weight 1 maxconn 1024 check
This is causing a problem with the Socket.IO handshake: the initial handshake routes to server1, then subsequent heartbeats from the client go to server2. server2 rejects the client because the socket session ID is invalid as far as it is concerned, when really all requests from a given client should go to the same server.
Update the HAProxy config file /etc/haproxy/haproxy.cfg with the following:
global
    daemon
    maxconn 256

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http-in
    bind *:80
    default_backend servers
    option forwardfor

backend servers
    cookie SRVNAME insert
    balance leastconn
    option forwardfor
    server node1 127.0.0.1:3001 cookie node1 check
    server node2 127.0.0.1:3002 cookie node2 check
    server node3 127.0.0.1:3003 cookie node3 check
    server node4 127.0.0.1:3004 cookie node4 check
    server node5 127.0.0.1:3005 cookie node5 check
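Note that this still depends on the client sending the SRVNAME cookie back. If the iOS Socket.IO client does not persist cookies at all, a common fallback is stickiness by source address (a sketch, not from the original answer):

backend servers
    balance source
    hash-type consistent
    server node1 127.0.0.1:3001 check
    server node2 127.0.0.1:3002 check

balance source hashes the client's IP, so all requests from one client land on the same server as long as the set of servers is stable; hash-type consistent limits reshuffling when a server goes down.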
