I am using Nginx as a reverse proxy. It is running as a containerized service in a Swarm cluster.
A while ago I discovered this weird behavior and I'm trying to wrap my head around it.
On my host I have three subdomains set up:
one.domain.com
two.domain.com
three.domain.com
In my Nginx server config I specify three.domain.com as the server_name, so I expect Nginx to respond only to requests targeting that subdomain.
events { worker_connections 1024; }

http {
    upstream service {
        server node:3000;
    }

    server {
        listen 443 ssl http2;
        listen [::]:443 ssl http2;
        server_name three.domain.com;

        [...... ssl settings here.......]

        location / {
            proxy_pass http://service;
            proxy_set_header Host $host;
        }
    }
}
What happens instead is that it responds not only to requests sent to three.domain.com but to one.domain.com and two.domain.com as well (it routes them all to three.domain.com).
If I add multiple server blocks specifically targeting subdomains one and two, it works as expected and routes the requests where they belong.
That said, the ideal behavior would be to respond only to the subdomains listed in the server_name directive of a server block.
Nginx tests the request's "Host" header field (or the SNI host name in the case of HTTPS) to determine which server the request should be routed to. If its value does not match any server name, or the request does not contain this header field at all, nginx routes the request to the default server for this port. In your configuration above, the default server is the first (and only) one, which is nginx's standard default behaviour. If there are multiple server blocks, you can also set explicitly which server should be the default, using the default_server parameter of the listen directive.
So, you need to add another server block:
server {
    listen 443 ssl default_server;
    server_name default.example.net;
    ...
    return 444;
}
I have created a domain (domain.com) and a subdomain (abc.domain.com), and generated SSL certificates for both using Let's Encrypt. Both Django projects are hosted on AWS EC2, and I created a proxy server for them as follows:
server {
    listen 443 ssl;
    server_name example.com;

    location / {
        proxy_pass https://1.2.3.4:444;
        proxy_ssl_server_name on;
        proxy_ssl_verify on;
        proxy_ssl_certificate /home/domain/fullchain.pem;
        proxy_ssl_certificate_key /home/domain/privkey.pem;
    }
}

server {
    listen 443 ssl;
    server_name abc.example.com;

    location / {
        proxy_pass https://1.2.3.4:445;
        proxy_ssl_server_name on;
        proxy_ssl_verify on;
        proxy_ssl_certificate /home/subdomain/fullchain.pem;
        proxy_ssl_certificate_key /home/subdomain/privkey.pem;
    }
}
I start the proxy server and both projects, and startup gives no problems. The problem is that when I enter https://example.com in the browser it does not show the page, but when I request the domain with the port number, https://example.com:444, the page shows. I do not know what I am missing.
In order to make https://example.com work, you need to configure SSL in Nginx itself, which includes using the ssl_certificate and ssl_certificate_key directives; it does not seem that you are using them.
proxy_ssl_certificate is for the HTTPS connection between Nginx and the proxied server, which in your case is the Django application.
ssl_certificate is for the HTTPS connection between the user's browser and Nginx, which is what you need to make https://example.com work as expected.
For more details, check Configuring HTTPS servers.
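A minimal sketch of what the first server block might look like with both layers configured; the certificate paths and the backend address are taken from the question:

server {
    listen 443 ssl;
    server_name example.com;

    # TLS between the user's browser and Nginx: this is what https://example.com needs
    ssl_certificate /home/domain/fullchain.pem;
    ssl_certificate_key /home/domain/privkey.pem;

    location / {
        # TLS between Nginx and the proxied Django application
        proxy_pass https://1.2.3.4:444;
        proxy_ssl_server_name on;
        # the proxy_ssl_* directives from the question only belong here if the
        # backend actually requires a client certificate or verification
    }
}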
I use an nginx container with this config:
set $ui http://ui:9000/backend;
resolver 127.0.0.11 valid=5m;
proxy_pass $ui;
This is needed because the "ui" container won't necessarily be up when nginx starts; the variable avoids the "host not found in upstream..." error.
But now I get a 404 even when the ui container is up and running (they are both in the same network defined in the docker-compose.yml). When I proxy_pass without the variable and without the resolver, and start the ui container first, everything works.
Now I am looking for why docker is failing to resolve it. Could I maybe manually add a fake route to http://ui which gets replaced when the ui container starts? Where would that be? Or can I fix the resolver?
The answer is like in this post:
https://stackoverflow.com/a/52319161/3093499
The only change is putting the resolver and the set variable into the server body instead of the location block.
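A minimal sketch of that change, using the names from the question (the listen port is an assumption):

server {
    listen 80;

    # resolver and variable moved up into the server body instead of the location
    resolver 127.0.0.11 valid=5m;
    set $ui http://ui:9000/backend;

    location / {
        proxy_pass $ui;
    }
}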
First you need to make sure that you have the port in the ui backend Dockerfile with EXPOSE 9000. Then you're going to want to have this as your config:
http {
    upstream ui {
        server ui:9000;
    }

    server {
        # whatever port your nginx reverse proxy is listening on.
        listen 80;

        location / {
            proxy_pass http://ui/backend;
        }
    }
}
http {
    server {
        ssl_certificate /etc/tls/tls.crt;
        ssl_certificate_key /etc/tls/tls.key;

        resolver 127.0.0.11;
        resolver_timeout 10s;

        access_log /var/log/nginx/access_log.log;

        location / {
            set $upstream_app homer;
            set $upstream_port 8080;
            set $upstream_proto http;
            proxy_pass http://localhost:7001;
        }
    }
}
It worked for me too.
I'm trying nginx for the first time, and I'm doing it with Docker.
Basically I want to achieve the following architecture:
https://example.com (business website)
https://app.example.com (progressive web / single page app)
https://app.example.com/api (to avoid preflight requests, a proxy to https://api.example.com is needed)
https://api.example.com (restful api)
Every http request to be redirected to https
I'm generating the /etc/nginx/conf.d/default.conf file from some environment variables on startup. That file is then included inside the http context of the main nginx.conf, which brings some limitations to what I can configure there. (related issue)
You can see my current nginx.conf file here (file is quite large to embed here).
And you can see the docker-compose.yml file here.
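For context, a rough sketch of the include layout being described, assuming the stock nginx.conf structure; only directives that are valid in the http context can go into the generated file:

# /etc/nginx/nginx.conf (stock layout, trimmed)
events { worker_connections 1024; }

http {
    # the generated default.conf is pulled in here, so it may only
    # contain directives that are valid inside the http context
    include /etc/nginx/conf.d/*.conf;
}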
The problem:
400 Bad Request The plain HTTP request was sent to HTTPS port
I can't make any call to http://(app/api).example.com redirect to its https version. I've tried the following without success (see the above linked file):
server {
    listen 80 ssl;
    listen 443 ssl;
    listen [::]:80 ssl;
    listen [::]:443 ssl;
    server_name api.dev.local;

    if ($http_x_forwarded_proto = "http") {
        return 301 https://$server_name$request_uri;
    }

    # more code...
}
Any recommendations regarding my actual configs are more than welcome in the comments section! I'm just starting to use nginx and thus reading tons of articles that provide code snippets, which I simply copy and paste after reading what they are needed for.
The https protocol is an extension of http, so to an extent they are different protocols. At the moment your server does not expect plain http on :80; it expects https there because of the listen 80 ssl setting. This causes the error.
You need to separate the handling of http requests on :80, which should be redirected to https on :443, from the handling of https on :443, which should be served normally.
This can be done by splitting out another server configuration block for http on :80:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
}
...and removing listening on :80 from the current block:
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    # more code...
}
The following blog article gives more details if needed https://bjornjohansen.no/redirect-to-https-with-nginx
I have installed Gerrit 2.12.3 on my Ubuntu Server 16.04 system.
Gerrit is listening on http://127.0.0.1:8102, behind an nginx server which is listening on https://SERVER1:8102.
Some contents of the etc/gerrit.config file are as follows:
[gerrit]
    basePath = git
    canonicalWebUrl = https://SERVER1:8102/
[httpd]
    listenUrl = proxy-https://127.0.0.1:8102/
And some contents of my nginx settings are as follows:
server {
    listen 10.10.20.202:8102 ssl;
    ssl on;
    ssl_certificate /etc/nginx/ssl/server1.crt;
    ssl_certificate_key /etc/nginx/ssl/server1.key;

    location / {
        # Allow for large file uploads
        client_max_body_size 0;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://127.0.0.1:8102;
    }
}
Nearly all of Gerrit's functions work very well now, but there is one problem I cannot solve:
The URL generated in notification emails is https://SERVER1:8102/11, which seems right, but when I click the link it redirects to https://SERVER1/#/c/11/ instead of https://SERVER1:8102/#/c/11/.
Can anyone tell me how to solve it?
Thanks.
It makes no sense for the ports of gerrit.canonicalWebUrl and httpd.listenUrl to match.
Specify as gerrit.canonicalWebUrl the URL that is accessible to your users through the Nginx proxy, e.g., https://gerrit.example.com.
This vhost in Nginx (listening on port 443) is in turn configured to connect to the backend as specified in httpd.listenUrl, e.g. port 8102, which Gerrit would be listening on in your case.
The canonicalWebUrl is just used so that Gerrit knows its own host name, e.g. for sending email notifications, IIRC.
You might also just follow Gerrit Documentation and stick to the ports as described there.
EDIT: I only now noticed that you want the proxy AND Gerrit to both listen on port 8102, on a public interface and on 127.0.0.1 respectively. While this would work if you really make sure that Nginx is not binding to 0.0.0.0, I think it makes no sense at all. Don't you want your users to connect via HTTPS on port 443?
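A sketch of the setup suggested above, assuming a placeholder host name gerrit.example.com: gerrit.canonicalWebUrl would point to https://gerrit.example.com/, httpd.listenUrl stays proxy-https://127.0.0.1:8102/, and the Nginx vhost listens on 443 with the certificate paths from the question:

server {
    listen 443 ssl;
    server_name gerrit.example.com;    # placeholder, matching canonicalWebUrl

    ssl_certificate /etc/nginx/ssl/server1.crt;
    ssl_certificate_key /etc/nginx/ssl/server1.key;

    location / {
        # Allow for large file uploads
        client_max_body_size 0;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://127.0.0.1:8102;    # Gerrit's proxy-https listen address
    }
}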
My company tries very hard to keep a SSO for all third party services. I'd like to make Kibana work with our Google Apps accounts. Is that possible? How?
As of Elasticsearch/Kibana 5.0, the Shield plugin (security plugin) is embedded in X-Pack (a paid service). So from Kibana 5.0 you can:
use X-Pack
use Search Guard
Both of these plugins can be used with basic authentication, so you can put an OAuth2 proxy like this one in front. An additional proxy would then forward the request with the right Authorization header containing the digest base64(username:password).
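A minimal sketch of that forwarding step in nginx; the Kibana address and the encoded value (base64 of "user:password") are placeholders:

server {
    listen 8080;

    location / {
        # inject basic-auth credentials towards Kibana/Elasticsearch;
        # "dXNlcjpwYXNzd29yZA==" is base64("user:password"), a placeholder
        proxy_set_header Authorization "Basic dXNlcjpwYXNzd29yZA==";
        proxy_pass http://127.0.0.1:5601;    # assumed Kibana address
    }
}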
The procedure is described in this article for X-Pack.
I've set up a docker-compose configuration in this repo for using either Search Guard or X-Pack with Kibana/Elasticsearch 6.1.1:
docker-compose for searchguard
docker-compose for x-pack
Kibana leaves it up to you to implement security. I believe that Elastic's Shield product has support for security-as-a-plugin, but I haven't navigated the subscription model or looked much into it.
The way that I handle this is by using an OAuth2 proxy application and having nginx reverse proxy to Kibana.
server {
    listen 80;
    server_name kibana.example.org;
    # redirect http->https while we're at it
    rewrite ^ https://$server_name$request_uri? permanent;
}

server {
    # listen for traffic destined for kibana.example.org:443
    listen 443 default ssl;
    server_name kibana.example.org;

    ssl_certificate /etc/nginx/ssl/cert.crt;
    ssl_certificate_key /etc/nginx/ssl/cert.key.pem;
    add_header Strict-Transport-Security max-age=1209600;

    # for https://kibana.example.org/, send to our oauth2 proxy app
    location / {
        # the oauth2 proxy application i use listens on port :4180
        proxy_pass http://127.0.0.1:4180;

        # preserve our host and ip from the request in case we want to
        # dispatch the request to a named nginx directive
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;

        proxy_connect_timeout 15;
        proxy_send_timeout 30;
        proxy_read_timeout 30;
    }
}
The request comes in and triggers an nginx directive that sends the request to the oauth application, which in turn handles the SSO flow and redirects to a Kibana instance listening on the server's localhost. It's secure because connections cannot be made directly to Kibana.
Use the oauth2-proxy application together with Kibana configured for anonymous authentication, as in the config below:
xpack.security.authc.providers:
  anonymous.anonymous1:
    order: 0
    credentials:
      username: "username"
      password: "password"
The user whose credentials are specified in the config can be created either via the Kibana UI or via the Elasticsearch create-or-update-users API.
Note: the Kibana instance should not be publicly available, otherwise anybody will be able to access the Kibana UI.