I've got this nginx configuration to redirect http to https:
# http redirects to https
server {
    listen 80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    ...
}
It works properly in Firefox. If I add an /etc/hosts entry like:
127.0.0.1 my-custom-domain.com
to make sure I'm testing a domain that has never been used before, then entering my-custom-domain.com in Firefox works as expected: it redirects to https.
But if I do the same in Chrome, the redirect does not happen. Chrome only opens the https version if I explicitly enter https://my-custom-domain.com. Not sure why it behaves differently in Chrome.
P.S. I've read that some people say server_name must not be _ and should have a specific name, but it behaves the same even if I use server_name my-custom-domain.com;.
P.P.S. I'm using the 1.23.0-alpine nginx Docker image.
Update
It looks like this issue is related to the nginx Docker image. I was not able to reproduce it with nginx installed locally, though the images tagged nginx:1.18.0, nginx:1.23.0 and nginx:1.23.0-alpine all had the same issue.
Related
Recently I've been trying to set up a little home server with a built-in DNS.
The DNS service is provided by lancacheDNS, set up in combination with a Monolithic cache (port 1234) in two Docker containers on 192.168.178.11 (the host machine) in my local network.
Since I want to serve a website (port 8080) along with some independent APIs (ports 8081, 8082 or whatever), I decided to use Nginx as a reverse proxy.
The DNS does the following:
getr.me --> 192.168.178.11
The routing works completely fine and getr.me:8080 gives me my website as expected.
Now the tricky part (for me):
Set up Nginx such that:
website.getr.me --> serving website
api1.getr.me --> serving the API1
api2.getr.me --> serving the API2
For that I created a network "default_dash_nginx".
I edited the nginx compose file to connect to it via:
networks:
  default:
    name: default_dash_nginx
    external: true
I also connected my website-serving container (dashboard) to the network via --network default_dash_nginx.
The website container gets the IP 172.20.0.4 (retrieved via docker inspect default_dash_nginx), and the nginx server is connected to the network as well.
Nginx works and I can edit the admin page.
But unfortunately, even though I set the proxy host to the IP + port of my website as reported by the network, the site is not available. Here is the output of my network inspection: https://pastebin.com/jsuPZpqQ
I hope you have another idea,
thanks in advance,
Maxi
Edit:
The nginx container is actually an NginxReverseProxyManager container (I don't know if that was unclear above or simply not important).
The Nginx container can actually ping the website container and also fetch the HTML files from port 80 on it.
So it seems like nginx itself isn't working like it should.
The first answer got no results (I tried saving it as each of the files mentioned here).
Have I missed something, or am I just not smart enough?
Try an nginx config like this and adapt it to your setup:
server {
    listen 80;
    server_name api1.getr.me;

    location / {
        proxy_pass http://localhost:8081;
    }
}

server {
    listen 80;
    server_name api2.getr.me;

    location / {
        proxy_pass http://localhost:8082;
    }
}

server {
    listen 80;
    server_name some.getr.me;

    location / {
        proxy_pass http://localhost:XXXX;
    }
}
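One caveat worth checking in this particular setup: since nginx itself runs in a container here, localhost in proxy_pass refers to the nginx container, not the host machine. If the backend containers are attached to the same Docker network, you would point proxy_pass at their container names instead — a sketch (the container name dashboard comes from the question above; its internal port is an assumption, adapt to your compose file):

```nginx
server {
    listen 80;
    server_name website.getr.me;

    location / {
        # "dashboard" is the container name on the shared network;
        # Docker's embedded DNS resolves it to the container's IP
        proxy_pass http://dashboard:8080;
    }
}
```

The same pattern applies to api1.getr.me and api2.getr.me with their respective container names and internal ports.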
I am using Nginx as a reverse proxy. It is running as a containerized service in a Swarm cluster.
A while ago I discovered this weird behavior and I'm trying to wrap my head around it.
On my host I have three subdomains set up:
one.domain.com
two.domain.com
three.domain.com
In my Nginx server config I am specifying that the server_name I am targeting is three.domain.com, so I am expecting Nginx to only respond to requests targeting that subdomain.
events { worker_connections 1024; }

http {
    upstream service {
        server node:3000;
    }

    server {
        listen 443 ssl http2;
        listen [::]:443 ssl http2;
        server_name three.domain.com;

        [...... ssl settings here.......]

        location / {
            proxy_pass http://service;
            proxy_set_header Host $host;
        }
    }
}
What happens instead: rather than only responding to requests sent to three.domain.com, it responds to one.domain.com and two.domain.com as well (it routes them to three.domain.com).
If I add multiple server blocks specifically targeting subdomains one and two, it works as expected, it routes the requests where they belong.
That being said, the ideal behavior would be to only respond to subdomains which are listed in the server_name section of a server block.
Nginx tests the request's header field "Host" (or the SNI hostname in the case of https) to determine which server the request should be routed to. If its value does not match any server name, or the request does not contain this header field at all, then nginx will route the request to the default server for this port. In your configuration above, the default server is the first (and only) one, which is nginx's standard default behaviour. If there are multiple server blocks, which one should be the default can also be set explicitly, with the default_server parameter in the listen directive.
So, you need to add another server block:
server {
    listen 443 ssl default_server;
    server_name default.example.net;
    ...
    return 444;
}
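Alternatively, as you observed, explicit server blocks per subdomain also work; a minimal sketch, assuming each subdomain has its own upstream (the upstream names here are placeholders, not from the original config):

```nginx
server {
    listen 443 ssl http2;
    server_name one.domain.com;
    # ...ssl settings...
    location / {
        proxy_pass http://service_one;  # placeholder upstream
    }
}

server {
    listen 443 ssl http2;
    server_name two.domain.com;
    # ...ssl settings...
    location / {
        proxy_pass http://service_two;  # placeholder upstream
    }
}
```

With named blocks for one and two plus a catch-all default_server block, every request lands where it belongs.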
I am using the dockerized Nextcloud as shown here: https://github.com/nextcloud/docker/tree/master/.examples/docker-compose/with-nginx-proxy-self-signed-ssl/mariadb/fpm
I set this up with port 80 mapped to 12345 and port 443 mapped to 12346. When I go to https://mycloud.example.com:12346, I get the self-signed certificate prompt, but otherwise everything is fine and I see the Nextcloud web UI. But when I go to http://mycloud.example.com:12345, nginx (the proxy container) gives the error "503 Service Temporarily Unavailable". The error also shows up in the proxy's logs.
How can I diagnose the issue? Why is HTTPS working but not HTTP?
Can you provide the docker command you use to start Nextcloud, or your docker-compose file?
Diagnosis is as usual with docker stuff: get the id of the currently running container
docker ps
Then check the logs
docker logs [id or name of your container]
docker-compose logs [name of your service]
Then connect into the container
docker exec -ti [id or name of your container] [bash, or ash if alpine-based container]
There, read the nginx conf files involved. In your case I'd check the redirect being made from http to https; most likely it's something like the block below, with no specific port given for https, hence port 443, hence not working:
server {
    listen 80;
    server_name my.domain.com;
    return 301 https://$server_name$request_uri; # <======== no port = 443
}

server {
    listen 443 ssl;
    server_name my.domain.com;
    # add Strict-Transport-Security to prevent man-in-the-middle attacks
    add_header Strict-Transport-Security "max-age=31536000" always;
    [....]
}
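In the Nextcloud setup above, port 443 is published on the host as 12346, so a redirect to a plain https://... URL sends the browser to port 443, where nothing is reachable from outside. The redirect has to carry the external port explicitly — a sketch (the 12346 mapping is taken from the question; adapt if yours differs):

```nginx
server {
    listen 80;
    server_name mycloud.example.com;
    # the host maps 12346 -> 443 into the container,
    # so the browser must be redirected to :12346 explicitly
    return 301 https://$host:12346$request_uri;
}
```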
I'm trying nginx for the first time, and I'm doing it with Docker.
Basically I want to achieve the following architecture
https://example.com (business website)
https://app.example.com (progressive web / single page app)
https://app.example.com/api (to avoid preflight requests, a proxy to https://api.example.com is needed)
https://api.example.com (restful api)
Every http request to be redirected to https
I'm generating the /etc/nginx/conf.d/default.conf file from some environment variables on start-up. That file is then included inside the http context of the main config, which brings some limitations to what I can configure. (related issue)
You can see my current nginx.conf file here (file is quite large to embed here).
And you can see the docker-compose.yml file here.
The problem:
400 Bad Request The plain HTTP request was sent to HTTPS port
I can't actually get any call to http://(app/api).example.com redirected to its https version. I've tried this without success (see the above linked file):
server {
    listen 80 ssl;
    listen 443 ssl;
    listen [::]:80 ssl;
    listen [::]:443 ssl;
    server_name api.dev.local;

    if ($http_x_forwarded_proto = "http") {
        return 301 https://$server_name$request_uri;
    }

    # more code...
}
Any recommendations regarding my actual configs are more than welcome in the comments section! I'm just starting to use nginx, so I'm reading tons of articles that provide code snippets which I simply copy and paste after reading what they are needed for.
The https protocol is an extension to http, so they are different protocols to an extent. At the moment your server does not expect http on :80, it rather expects https due to the setting listen 80 ssl. This causes the error.
You need to separate handling of http requests on :80, which should be redirected to https on :443, from handling https on :443, which should be handled normally.
This can be done by splitting out another server configuration block for http on :80:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
}
...and removing listening on :80 from the current block:
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    # more code...
}
The following blog article gives more details if needed https://bjornjohansen.no/redirect-to-https-with-nginx
I have a docker-compose file that right now runs two containers:
version: '3'
services:
  nginx-certbot-container:
    build: nginx-certbot
    restart: always
    links:
      - ghost-container:ghost-container
    ports:
      - 80:80
      - 443:443
    tty: true
  ghost-container:
    image: ghost
    restart: always
    ports:
      - 2368:2368
I have four websites, l.com, t1.l.com, t2.l.com, t3.l.com, all with SSL certificates issued by Let's Encrypt and working, so that on each URL I can see the green lock etc.
For t2.l.com, I would like that to be a Ghost blog, with the following nginx conf:
upstream ghost-container {
    server ghost-container:2368;
}

server {
    server_name t2.l.com;

    location / {
        proxy_pass https://ghost-container;
        proxy_ssl_certificate /etc/letsencrypt/live/l.com/fullchain.pem;
        proxy_ssl_certificate_key /etc/letsencrypt/live/l.com/privkey.pem;
        proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        proxy_ssl_ciphers "ECDHE-ECD ... BC3-SHA:!DSS";
        proxy_ssl_session_reuse on;
    }

    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/l.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/l.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
}

server {
    listen 80;
    listen [::]:80;
    server_name t2.l.com;
    include /etc/nginx/snippets/letsencrypt.conf;

    location / {
        return 301 https://t2.l.com$request_uri;
        #proxy_pass http://ghost-container;
    }
}
If I comment out the return 301 and just keep the proxy_pass, I get redirected to the Ghost blog no problem, except it's not via SSL. But if I comment out the proxy_pass, as above, and return the 301, the server returns a 502 Bad Gateway.
Is there something I'm missing? From other people's code it seems just having the proxy certs is enough...
Edit
Well, I just did something that I was sure would not work: I set the proxy_pass in the SSL part to http: instead of https:, and it all worked fine. If anyone can explain the mechanics or logic behind why this is so, I would be very interested; it doesn't make sense in my mind.
You have to distinguish the connection from a client to nginx (your reverse proxy here) and the connection from nginx to your ghost container.
The connection from a client to the nginx server can be encrypted (https, port 443) or unencrypted (http, 80). In your config file, there is one server block for each. If the client connects via https (after a redirect or directly), nginx will use the key at /etc/letsencrypt/live/l.com/* to encrypt the content of this connection. The content could be served from the file system inside the nginx-certbot-container container or from an upstream server (thus reverse proxy).
For t2.l.com you would like to use the upstream server. Nginx will open a connection to the upstream server. Whether it expects an http or https connection on port 2368 depends on the server running inside ghost-container. From the information you provided I deduce that it accepts http connections. Otherwise you would need SSL certificates for the ghost container as well, or you would have to create self-signed certificates and make nginx trust the self-signed upstream connection. This means your proxy_pass should use http. Since the packets of this connection never leave your computer, I think it is fairly safe to use http for the upstream server in this case.
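Concretely, only the proxy_pass scheme in the https server block needs to change, and the proxy_ssl_* directives can go since the upstream connection is no longer TLS — the relevant fragment would become:

```nginx
location / {
    # ghost listens for plain http on 2368 inside the Docker network;
    # nginx terminates TLS for the client and forwards http upstream
    proxy_pass http://ghost-container;
}
```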
(If this is not what you intended, you can also create the SSL endpoint in the ghost-container. In this case, nginx has to use SNI to determine the destination host because it only sees encrypted packages. Search for nginx reverse proxy ssl or so.)
Note: Please be careful with the ports property. The above docker-compose file publishes port 2368, so the ghost server can also be reached directly via http://t2.l.com:2368, bypassing the proxy. To avoid this, replace it with expose: [2368].
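In the compose file, that change would look roughly like this (only the relevant fragment of the ghost-container service):

```yaml
ghost-container:
  image: ghost
  restart: always
  # expose: the port is reachable from other containers on the
  # same network, but is not published on the host
  expose:
    - 2368
```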