For some reason, HAProxy is converting my POST request into a GET request. Below is the log output:
wc_haproxy | 172.29.0.1:35296 [03/Nov/2021:22:03:44.901] http http/<NOSRV> 0/-1/-1/-1/0 302 134 - - LR-- 1/1/0/0/0 0/0 "POST /register HTTP/1.1"
wc_haproxy | 172.29.0.1:58040 [03/Nov/2021:22:03:44.906] http~ app/app1 0/0/1/0/1 404 130 - - ---- 1/1/0/0/0 0/0 "GET /register HTTP/1.1"
Below is my configuration:
global
log stdout format raw local0
defaults
mode http
timeout client 10s
timeout connect 5s
timeout server 10s
timeout http-request 10s
log global
frontend http
bind *:80
bind *:443 ssl crt /usr/local/etc/haproxy/company.com.pem
redirect scheme https if !{ ssl_fc }
option httplog
acl sub1 hdr_sub(host) -i app.company.com
use_backend app if sub1
use_backend frontstore
backend frontstore
option forwardfor
option httpchk GET /
http-check send ver HTTP/1.1 hdr Host frontstore
server frontstore1 frontstore:8001 check
http-request set-header X-Forwarded-Port %[dst_port]
http-request add-header X-Forwarded-Proto https if { ssl_fc }
backend app
option forwardfor
option httpchk GET /check
http-check send ver HTTP/1.1 hdr Host app
server app1 app:8002 check
http-request set-header X-Forwarded-Port %[dst_port]
http-request add-header X-Forwarded-Proto https if { ssl_fc }
Does anyone know why it would do that?
FYI, I'm using the Docker image "haproxy:lts-alpine", if that makes any difference.
Because of your frontend configuration:
frontend http
...
redirect scheme https if !{ ssl_fc }
HAProxy is redirecting your application traffic from HTTP to HTTPS. You can see it in your logs, as the first request returns a 302 status code:
wc_haproxy | 172.29.0.1:35296 [03/Nov/2021:22:03:44.901] http http/<NOSRV> 0/-1/-1/-1/0 302 134 - - LR-- 1/1/0/0/0 0/0 "POST /register HTTP/1.1"
Please consider reviewing the relevant HAProxy documentation.
Converting a POST into a GET when following a 302 redirect is common browser behavior. Please consider reading this SO question; I think it is related to your problem.
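If you need the original method to survive the redirect, HAProxy can answer with a 307 instead of the default 302; clients are required to repeat a 307 with the same method and body. A minimal sketch against the frontend from the question (untested; the code option for redirects is available in reasonably recent HAProxy releases):
frontend http
    bind *:80
    bind *:443 ssl crt /usr/local/etc/haproxy/company.com.pem
    # 307 keeps POST as POST across the redirect; the default 302 is replayed as GET
    redirect scheme https code 307 if !{ ssl_fc }
The cleaner fix, of course, is to have the client send the POST to https:// in the first place so no redirect is involved.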
Related
I am new to nginx and trying to understand what is going on here. I have a docker compose file that starts up a nginx container like so:
proxy:
image: nginx:alpine
container_name: proxy
restart: unless-stopped
ports:
- "80:80"
- "443:443"
volumes:
- ./proxy/default.conf:/etc/nginx/conf.d/default.conf
Which mounts my default.conf into the nginx container; the file looks like this:
server {
listen 80;
listen [::]:80;
server_name localhost testthis;
return 301 https://www.google.com$request_uri;
}
So if I run curl -I http://localhost, I see the redirect to google.com as expected:
HTTP/1.1 301 Moved Permanently
Server: nginx/1.21.6
Date: Fri, 25 Feb 2022 06:39:39 GMT
Content-Type: text/html
Content-Length: 169
Connection: keep-alive
Location: https://www.google.com/
But if I run curl -I http://testthis, I get this response:
curl: (6) Could not resolve host: testthis
Why is this happening if the server names are in the same server block? Eventually I want to set up a custom domain and subdomains to forward requests to specific localhost ports per app, but I don't understand how this works very well.
curl -I http://localhost works because localhost, by default, resolves to an IP of your machine (127.0.0.1), not because it's listed in your nginx's default.conf. Docker then configures your machine to forward traffic on ports 80 and 443 (the ones used for HTTP and HTTPS) to that container.
The server_name directive in nginx's configuration makes it match requests that carry that name in the Host: header. It does not advertise that name on your network.
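A quick way to see that it is the Host header, not DNS, that drives the match (a hypothetical check; -H simply overrides the header curl would send on its own):
curl -sI -H 'Host: testthis' http://localhost
This hits the same server block and returns the 301 to google.com even though the name testthis does not resolve yet. For plain http://testthis to work, though, the name has to resolve.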
For that to work, you need to make your computer recognize testthis as a name for your computer. On Linux, edit /etc/hosts and add this line:
127.0.0.1 testthis
On Windows, I don't know, but you can certainly search for "windows hosts file" and get a similar method.
curl --connect-to testthis:80:127.0.0.1:80 http://testthis should also do the trick, without touching the hosts file.
I am using HAProxy with Keycloak. The welcome page shows fine, but each time I enter the Administration Console it shows me a blank page with no content and status code 200.
I am using a Let's Encrypt SSL certificate. Here are my HAProxy config and docker-compose file.
Screenshot of the page: link to screenshot
HAProxy config:
global
log stdout local0 debug
daemon
maxconn 4096
defaults
mode http
option httplog
option dontlognull
timeout connect 5000ms
timeout client 50000ms
timeout server 50000ms
log global
log-format {"type":"haproxy","timestamp":%Ts,"http_status":%ST,"http_request":"%r","remote_addr":"%ci","bytes_read":%B,"upstream_addr":"%si","backend_name":"%b","retries":%rc,"bytes_uploaded":%U,"upstream_response_time":"%Tr","upstream_connect_time":"%Tc","session_duration":"%Tt","termination_state":"%ts"}
frontend public
bind *:80
bind *:443 ssl crt /usr/local/etc/haproxy/haproxy.pem alpn h2,http/1.1
http-request redirect scheme https unless { ssl_fc }
default_backend web_servers
backend web_servers
option forwardfor
server auth1 auth:8080
docker-compose.yaml:
version: "3"
networks:
internal-network:
services:
reverse-proxy:
build: ./reverse-proxy/.
image: reverseproxy:latest
ports:
- "80:80"
- "443:443"
networks:
- internal-network
depends_on:
- auth
auth:
image: quay.io/keycloak/keycloak:latest
networks:
internal-network:
environment:
PROXY_ADDRESS_FORWARDING: "true"
KEYCLOAK_USER: ***
KEYCLOAK_PASSWORD: ***
# Uncomment the line below if you want to specify JDBC parameters. The parameter below is just an example, and it shouldn't be used in production without knowledge. It is highly recommended that you read the PostgreSQL JDBC driver documentation in order to use it.
#JDBC_PARAMS: "ssl=false"
The URL of the page I am trying to access is https:///auth/admin/master/console/
Note: when I remove SSL from HAProxy, Keycloak shows a page with the error "https required".
One obvious issue (there can be more, so fixing this one may still not fix everything):
https://www.keycloak.org/docs/latest/server_installation/index.html#_setting-up-a-load-balancer-or-proxy
Configure your reverse proxy or loadbalancer to properly set X-Forwarded-For and X-Forwarded-Proto HTTP headers.
You didn't configure this part in your HAProxy frontend section. You need something like this:
http-request set-header X-Forwarded-Port %[dst_port]
http-request set-header X-Forwarded-For %[src]
http-request set-header X-Forwarded-Proto https
HTTPS is required for OIDC, so "https required" is the correct response when you reach Keycloak over plain HTTP.
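A sketch of how those lines could slot into the frontend from the question (untested; the ssl_fc condition on the Proto header is a small tweak so it reflects the actual scheme, although plain HTTP is redirected anyway):
frontend public
    bind *:80
    bind *:443 ssl crt /usr/local/etc/haproxy/haproxy.pem alpn h2,http/1.1
    http-request redirect scheme https unless { ssl_fc }
    # tell Keycloak who the real client is and that the outer connection is HTTPS
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request set-header X-Forwarded-For %[src]
    http-request set-header X-Forwarded-Proto https if { ssl_fc }
    default_backend web_servers
PROXY_ADDRESS_FORWARDING=true, which the compose file already sets on the auth service, is the Keycloak-side half of the same requirement.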
I have a Docker stack for my mail server.
My docker-compose.yml contains
version: '3.7'
services:
postfix:
...
dovecot:
....
ports:
- "110:110"
- "995:995"
- "143:143"
- "993:993"
networks:
- mail
....
roundcube:
image: roundcube/roundcubemail
container_name: roundcube
environment:
- ROUNDCUBEMAIL_DEFAULT_HOST=dovecot
# - ROUNDCUBEMAIL_DEFAULT_PORT=993
networks:
- proxy
- mail
I also have an Nginx container running as a proxy for all my web applications. For Roundcube I have:
set $roundcube_upstream http://roundcube;
location /roundcube/ {
rewrite ^/roundcube/(.*) /$1 break;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_set_header Host $host;
proxy_pass $roundcube_upstream;
}
With that configuration it's working: I can go to https://www.mydomain.be/roundcube/ and log in. The default port is 143, so Roundcube is connecting to Dovecot over plain IMAP.
Now, I'd like to use port 993 and ssl/tls.
I tried uncommenting ROUNDCUBEMAIL_DEFAULT_PORT=993, and also using ssl://dovecot, tls://dovecot, ssl://mail.mydomain.be, ... but nothing works.
When I click on the login button, after a while I receive an nginx error page. In my proxy logs I can see:
2019/01/31 09:29:25 [error] 460#460: *82483 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 194.197.210.75, server: www.mydomain.be, request: "POST /roundcube/?_task=login HTTP/1.1", upstream: "http://172.18.0.9:80/?_task=login", host: "www.mydomain.be", referrer: "https://www.mydomain.be/roundcube/"
And I don't understand where the http://172.18.0.9:80/?_task=login is coming from.
With the Thunderbird client I can connect on that port.
What's the problem?
Edit
Using
- ROUNDCUBEMAIL_DEFAULT_HOST=ssl://dovecot
- ROUNDCUBEMAIL_DEFAULT_PORT=993
I now get a response: a connection error to the storage server.
In my Roundcube logs:
errors: <1db522a3> IMAP Error: Login failed for me#mydomain.be from 172.18.0.8(X-Real-IP: ...,X-Forwarded-For: ...). Could not connect to ssl://dovecot:993: Unknown reason in /var/www/html/program/lib/Roundcube/rcube_imap.php on line 196 (POST /?_task=login&_action=login)172.18.0.8 - - [31/Jan/2019:13:57:37 +0100] "POST /?_task=login HTTP/1.1" 200 3089 "https://www.mydomain.be/roundcube/?_task=login" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:64.0) Gecko/20100101 Firefox/64.0"
and in the Dovecot logs:
2019-01-31T13:57:38.002653+01:00 536ff3507263 dovecot: auth: Debug: auth client connected (pid=35),
2019-01-31T13:57:38.010096+01:00 536ff3507263 dovecot: imap-login: Disconnected (no auth attempts in 0 secs): user=<>, rip=192.168.240.3, lip=192.168.240.2, TLS, session=<nVssksCAT7LAqPAD>
So Dovecot is being contacted, but ...? I don't know what the problem is.
Your issue is that Roundcube verifies the mail server's TLS/SSL certificate by default. Either copy the certificate from the mail server so it is trusted, use Let's Encrypt to get a certificate that validates, or turn off peer verification in your Roundcube configuration.
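For the last option, Roundcube's imap_conn_options setting controls the TLS context used for the IMAP connection. A sketch of what that could look like (PHP, since Roundcube config files are PHP; the exact file name and mount point into the roundcube/roundcubemail image are assumptions, and disabling verification weakens security, so only do this for a trusted internal host like dovecot):
// extra Roundcube config snippet, e.g. mounted into the container's config directory
$config['imap_conn_options'] = array(
    'ssl' => array(
        'verify_peer'       => false,   // do not check the certificate chain
        'verify_peer_name'  => false,   // do not check that the CN matches "dovecot"
        'allow_self_signed' => true,
    ),
);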
Update
The details in this question are getting long, but I think it narrows down to this:
For some reason the host name matters to Nginx when it's trying to figure out whether to proxy the request. If the host name is set to git.example.com the request does not seem to go through, but if it's set to 203.0.113.2 then it goes through. Why does the host name matter?
Filed an issue with Nginx here, and one with Docker Compose.
Start of original question
When I type in the IP address of the reverse proxy directly into my browser bar, it does perform the redirect.
When using a URL that is resolved via the /etc/hosts entry 203.0.113.2 git.example.com, the "Welcome to nginx" page is shown instead. Any ideas? This is the configuration:
server {
listen 203.0.113.2:80 default_server;
server_name 203.0.113.2 git.example.com;
proxy_set_header X-Real-IP $remote_addr; # pass on real client IP
location / {
proxy_pass http://203.0.113.1:3000;
}
}
This is the docker-compose.yml file that is used to launch the whole thing:
version: '3'
services:
gogs-nginx:
build: ./proxy
ports:
- "80:80"
networks:
mk1net:
ipv4_address: 203.0.113.2
gogs:
image: gogs/gogs
ports:
- "3000:3000"
volumes:
- gogs-data:/data
networks:
mk1net:
ipv4_address: 203.0.113.3
volumes:
gogs-data:
external: true
networks:
mk1net:
ipam:
config:
- subnet: 203.0.113.0/24
One interesting thing is that I can navigate to for example:
http://203.0.113.2/issues
The log for the above URL is:
gogs-nginx_1 | 203.0.113.1 - - [07/Oct/2018:11:28:06 +0000] "GET / HTTP/1.1" 200 38825 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36" "-"
If I then replace 203.0.113.2 with git.example.com (so that the URL ends up being http://git.example.com/issues), I get Nginx's "404 Not Found" page, and the log says:
gogs-nginx_1 | 2018/10/07 11:31:34 [error] 8#8: *10 open() "/usr/share/nginx/html/issues" failed (2: No such file or directory), client: 203.0.113.1, server: localhost, request: "GET /issues HTTP/1.1", host: "git.example.com"
If I only use http://git.example.com as the URL I get the NGINX welcome page, and the following log:
gogs-nginx_1 | 203.0.113.1 - - [07/Oct/2018:11:34:39 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36" "-"
It looks like Nginx understands that the request is for the proxy, because it logs the IP of the proxy, but it does not pass the request on to the backend and returns a 304 ...
Using Curl to perform requests
Using curl with a host name parameter that targets the proxy like this:
curl -H 'Host: git.example.com' -si http://203.0.113.2
Results in the Nginx welcome page:
ole@mki:~/Gogs/.gogs/docker$ curl -H 'Host: git.example.com' -si http://203.0.113.2
HTTP/1.1 200 OK
Server: nginx/1.15.1
Date: Sun, 07 Oct 2018 17:09:11 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 03 Jul 2018 13:27:08 GMT
Connection: keep-alive
ETag: "5b3b79ac-264"
Accept-Ranges: bytes
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
nginx.org.<br/>
Commercial support is available at
nginx.com.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
But if I change the Host header parameter to the IP address like this:
curl -H 'Host: 203.0.113.2' -si http://203.0.113.2
Then the proxy works as it should:
ole@mki:~/Gogs/.gogs/docker$ curl -H 'Host: 203.0.113.2' -si http://203.0.113.2
HTTP/1.1 302 Found
Server: nginx/1.15.1
Date: Sun, 07 Oct 2018 17:14:46 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 34
Connection: keep-alive
Location: /user/login
Set-Cookie: lang=en-US; Path=/; Max-Age=2147483647
Set-Cookie: i_like_gogits=845bb09d69587b81; Path=/; HttpOnly
Set-Cookie: _csrf=neGgBfG4LdOcdrdeA0snHjVGz4s6MTUzODkzMjQ4NjE5MzEzNzI3OQ%3D%3D; Path=/; Expires=Mon, 08 Oct 2018 17:14:46 GMT; HttpOnly
Set-Cookie: redirect_to=%252F; Path=/
Found.
I am sorry, but I failed to work out what's happening on your side, because the information is sometimes confusing and sometimes incomplete. Stack Overflow has a great explanation of what is considered a good question (How to create a Minimal, Complete, and Verifiable example), so I have just implemented a minimal example of the system you are likely going to build.
Below I am providing all the files and will show you a test run as well.
File #1: docker-compose.yml
gogs:
image: gogs/gogs
web:
build: .
ports:
- 8000:80
links:
- gogs
I have an outdated Docker on my computer and I do not want to bother with Docker networking, so I have just linked both containers using Docker links. This is the most important part: the link ensures that (1) our web container depends on gogs, and (2) we can refer to the gogs container from inside web simply as gogs. Docker resolves that name to the IP assigned to the container.
Since I want a minimal example, I've skipped everything else as irrelevant, for example the volumes.
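If the name resolution between the two containers ever seems in doubt, a quick hypothetical check from inside the web container once the stack is up:
$ docker-compose exec web ping -c 1 gogs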
File #2: Dockerfile
Newer Compose versions support config options specified right in docker-compose.yml, but I need a custom Dockerfile instead. It's trivial:
FROM nginx:stable-alpine
COPY gogs.conf /etc/nginx/conf.d
File #3: gogs.conf
And finally we need Nginx configuration for proxy:
server {
listen 80 default_server;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
location / {
proxy_pass http://gogs:3000;
}
}
You may notice that we refer to the other container simply by the name gogs, and that we need to know which port number it exposes: 3000.
Running
$ docker-compose build
$ docker-compose up
It's up and running:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1f74293df630 g_web "nginx -g 'daemon off" 2 minutes ago Up 26 seconds 0.0.0.0:8000->80/tcp g_web_1
dfa2dbaa6074 gogs/gogs "/app/gogs/docker/sta" 2 minutes ago Up 26 seconds 22/tcp, 3000/tcp g_gogs_1
The web container is exposed to the world on port 8000.
Tests
by IP
Let's request it by IP:
$ curl -si http://192.168.99.100:8000/
HTTP/1.1 302 Found
Server: nginx/1.14.0
Date: Sun, 07 Oct 2018 15:13:55 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 31
Connection: keep-alive
Location: /install
Set-Cookie: lang=en-US; Path=/; Max-Age=2147483647
Set-Cookie: i_like_gogits=50411f542e2ae8f8; Path=/; HttpOnly
Set-Cookie: _csrf=ZJxRPqnqayIbpAYgZ22zrPIOaSo6MTUzODkyNTIzNTQ2NTg5MDE1NA%3D%3D; Path=/; Expires=Mon, 08 Oct 2018 15:13:55 GMT; HttpOnly
Found.
Corresponding log file:
web_1 | 192.168.99.1 - - [07/Oct/2018:15:14:24 +0000] "GET / HTTP/1.1" 302 31 "-" "curl/7.61.1" "-"
gogs_1 | [Macaron] 2018-10-07 15:14:24: Started GET / for 192.168.99.1
gogs_1 | [Macaron] 2018-10-07 15:14:24: Completed GET / 302 Found in 199.519µs
gogs_1 | 2018/10/07 15:14:24 [TRACE] Session ID: 38d06d393a9e9d21
gogs_1 | 2018/10/07 15:14:24 [TRACE] CSRF Token: Xth986dFWhhj8w8vBdIqRZu4SbI6MTUzODkyNTI2NDYxMDYzNzAyNA==
I can see from the log that (1) both containers work and they were used to process the request; (2) 192.168.99.1 is my host's IP address, which means "gogs" successfully gets a real request IP via X-Forwarded-For.
by domain name
OK, let's request using a domain name:
$ curl -H 'Host: g.example.com' -si http://192.168.99.100:8000/
Trust me, this is sufficient: Host is the HTTP header used to pass the domain name, and any browser does the same under the hood.
And the corresponding log output is:
gogs_1 | [Macaron] 2018-10-07 15:32:49: Started GET / for 192.168.99.1
gogs_1 | [Macaron] 2018-10-07 15:32:49: Completed GET / 302 Found in 618.701µs
gogs_1 | 2018/10/07 15:32:49 [TRACE] Session ID: 81f64d97e9c3dd1e
gogs_1 | 2018/10/07 15:32:49 [TRACE] CSRF Token: X5QyHM4LMIfn8OSJD1gwSSEyXV46MTUzODkyNjM2OTgyODQyMjExMA==
web_1 | 192.168.99.1 - - [07/Oct/2018:15:32:49 +0000] "GET / HTTP/1.1" 302 31 "-" "curl/7.61.1" "-"
No changes, everything works as expected.
I'm trying to build a stack with Traefik and Nginx based on Docker. Without HTTPS everything is fine, but I get an error as soon as I add the HTTPS configuration.
I'm getting this error from Nginx on example.com: 400 Bad Request / The plain HTTP request was sent to HTTPS port. In the address bar I can see the green lock saying the connection is secure.
Certbot works fine, so I have a real SSL certificate in the proper folder.
I can get to the Traefik dashboard when I visit traefik.example.com, but I have to accept the browser's SSL warning, and the dashboard also works without HTTPS.
docker-compose.yml
version: '3.4'
services:
traefik:
image: traefik:latest
ports:
- "80:80"
- "443:443"
- "8080:8080"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./traefik/traefik.toml:/etc/traefik/traefik.toml
- ../letsencrypt:/etc/letsencrypt
labels:
- traefik.backend=traefik
- traefik.frontend.rule=Host:traefik.example.com
- traefik.port=8080
networks:
- traefik
nginx:
image: nginx:latest
volumes:
- ../www:/var/www
- ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf
- ../letsencrypt:/etc/letsencrypt
labels:
- traefik.backend=nginx
- traefik.frontend.rule=Host:example.com
- traefik.port=80
- traefik.port=443
networks:
- traefik
networks:
traefik:
driver: overlay
external: true
attachable: true
traefik.toml
defaultEntryPoints = ["http", "https"]
[web]
address = ":8080"
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.http.redirect]
entryPoint = "https"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[[entryPoints.https.tls.certificates]]
certFile = "/etc/letsencrypt/live/example.com/fullchain.pem"
keyFile = "/etc/letsencrypt/live/example.com/privkey.pem"
[docker]
domain="example.com"
watch = true
exposedByDefault = true
swarmMode = false
nginx.conf
server {
listen 80;
server_name example.com www.example.com;
return 301 https://www.example.com$request_uri;
}
server {
listen 443 ssl http2;
server_name example.com;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
return 301 https://www.example.com$request_uri;
}
server {
listen 443 ssl http2;
server_name www.example.com;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
root /var/www/public;
index index.html;
}
Thanks for your help.
First, there is no need to have SSL redirection configured in both Traefik and Nginx. Also, the Traefik frontend rule matches only the non-www variant, while the backend app expects www. Finally, the Traefik web provider is deprecated, so the newer api provider should be used instead.
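A sketch of what that could look like, with Traefik v1 terminating TLS and Nginx serving plain HTTP only (host names and labels taken from the question; untested):
  nginx:
    image: nginx:latest
    labels:
      - traefik.backend=nginx
      # match both hosts, since the site redirects the bare domain to www
      - traefik.frontend.rule=Host:example.com,www.example.com
      # a single backend port: Traefik terminates TLS and forwards plain HTTP
      - traefik.port=80
    networks:
      - traefik
And in nginx.conf, no ssl/http2 listeners and no http-to-https redirect (Traefik's entryPoints.http.redirect already does that), only the www redirect and the site itself:
server {
    listen 80;
    server_name example.com;
    return 301 https://www.example.com$request_uri;
}
server {
    listen 80;
    server_name www.example.com;
    root /var/www/public;
    index index.html;
}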
As I just stumbled upon a similar problem with Traefik v2
400 Bad Request / The plain HTTP request was sent to HTTPS port
with an Nginx error log stating
400 client sent plain HTTP request to HTTPS port while reading client request headers
and scratching my head over it, I finally found the source of that error. It's not that the TLS certs were invalid or that something in the transport was broken, but that the wiring between routers, services and port mappings was off.
Previously I did not see that the Docker Compose stack had an Nginx container listening only on 80/tcp. I assumed everything was OK, as I had attached the ports to Traefik load balancers, with a separate service per HTTP/HTTPS endpoint and separate routers. This somehow did not work:
- "traefik.http.services.proxy.loadbalancer.server.port=80"
- "traefik.http.services.proxy-secure.loadbalancer.server.port=443"
As an intermediate workaround I opened the ports "8008:80" and "8443:443" and got it working. I am still investigating what's wrong with the Traefik port wiring, as those ports should be reachable through Traefik by default. This is not a solution, since those ports are now exposed to the outside world, but I am leaving this explanation here because I could not find anything on this topic that would point me in the right direction; hopefully it's helpful for someone else later on.
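For reference, the wiring that would normally be expected to work in Traefik v2 is to let Traefik terminate TLS and point both routers at one service that targets the container's plain-HTTP port. A hypothetical label sketch (the router/service names and the web/websecure entrypoint names are assumptions taken from a typical static configuration):
      labels:
        - "traefik.http.routers.proxy.rule=Host(`example.com`)"
        - "traefik.http.routers.proxy.entrypoints=web"
        - "traefik.http.routers.proxy.service=proxy"
        - "traefik.http.routers.proxy-secure.rule=Host(`example.com`)"
        - "traefik.http.routers.proxy-secure.entrypoints=websecure"
        - "traefik.http.routers.proxy-secure.tls=true"
        - "traefik.http.routers.proxy-secure.service=proxy"
        # TLS ends at Traefik, so the service targets the container's HTTP port
        - "traefik.http.services.proxy.loadbalancer.server.port=80"
With that in place, the separate load balancer on port 443 (and the extra published 8008/8443 ports) should not be needed.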