Nginx Proxy: Allow IP from proxy only - docker

Here's my goal:
admin.domain.com is where we have a Magento 2 instance set up. It's locked down in Nginx to a whitelist of IPs.
api.domain.com has its own whitelist, and it ultimately goes to admin.domain.com/rest/..., preferably without the requester being able to see that.
The idea is to force all API integrations to go through the api subdomain, and to hide our admin domain entirely. Note - this is inside a Docker container, not directly on a server.
Currently, I am attempting to accomplish this using proxy_pass and setting the allow and deny blocks accordingly. Here is a snippet of our Nginx configs:
server {
    server_name admin.domain.com;
    # other stuff
    location ~ /(index.php/rest|rest) {
        allow $DOCKER_IP; # Seems to come from Docker Gateway IP as of now
        deny all;
        # other stuff
    }
    location / {
        # other stuff
    }
}
server {
    server_name api.domain.com;
    # other stuff
    location ~ /(index.php/rest|rest) {
        proxy_set_header Host admin.domain.com;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://admin.domain.com;
    }
    location / {
        return 403;
    }
}
In theory, this should work. From testing, I noticed that all requests to api.domain.com are forwarded to admin.domain.com, and admin sees the Docker container's Gateway IP as the source IP. So I can add the Gateway IP in the allow $DOCKER_IP line. The main problem here is finding a dependable way to get this IP, since it changes every time the container is recreated (on each release).
Alternatively, if there's a simpler way to do this, I would prefer that. I'm trying not to over-complicate this, but I'm a little over my head here with Nginx configurations.
So, my questions are:
Am I way over-complicating this, and is there a recommendation of a different approach to look into?
If not, is there a dependable way to get the Docker container's Gateway IP in Nginx, or maybe in the entrypoint script, so that I can set it as a variable and place it in the nginx config?

Since the Docker container is ephemeral and the IP can change every time (and it's very hard to pass the user's real IP address all the way through a proxy to the Docker container), it may be a lot simpler to control this with code.
I'd create a new module with a config value for the IP address, which would allow you to edit the IP address from the admin. This is architecturally more scalable as you don't need to rely on a hard-coded IP.
Within this module you'll want to create an event observer on something like the controller_action_predispatch event. You can detect an admin route, and check/prevent access to that route based on the value of the configuration object for the IP address. This way you aren't relying on Docker at all and you would have an admin-editable value to control the IP address/range.

This is how I have solved this for now. I'm still interested in better solutions if possible, but for now this is what I'm doing.
This is a snippet of the Nginx config for the API domain. It has its own whitelist for API access, and then reverse proxies to the real domain where M2 is hosted.
server {
    server_name api.domain.com;
    # other stuff
    location ~ /(index.php/rest|rest) {
        # specific whitelist for API access
        include /etc/nginx/conf.d/api.whitelist;
        proxy_set_header Host admin.domain.com;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://admin.domain.com;
    }
    location / {
        return 403;
    }
}
And then in the final domain (admin.domain.com) we use this location block to allow only traffic to the API (/rest) that comes from the proxy, so nobody can request our API directly at this domain.
server {
    server_name admin.domain.com;
    # other stuff
    location ~ /(index.php/rest|rest) {
        # contains the allow entries, including the Docker Gateway IP
        include /etc/nginx/conf.d/proxy.whitelist;
        deny all;
        # other stuff
    }
    location / {
        # other stuff
    }
}
So, in order to accomplish the restriction for the proxy traffic, the file /etc/nginx/conf.d/proxy.whitelist is generated in the entrypoint.sh of the Docker container. I'm using a template file, proxy.whitelist.template, that looks like:
# Docker IP
allow $DOCKER_IP;
I did this because there are a couple of other hard-coded IPs we already have in that file.
Then, in the entrypoint script, I use the following to find the Gateway IP of the Docker container:
export DOCKER_IP=$(route -n | awk '{if($4=="UG")print $2}')
envsubst < "/etc/nginx/conf.d/proxy.whitelist.template" > "/etc/nginx/conf.d/proxy.whitelist"
And so far that seems to be working for me.
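One caveat worth noting: the route utility comes from net-tools, which isn't installed in many slim container images. If that applies to your image, a rough equivalent using iproute2 (a sketch, assuming a single default route) would be:
# same idea, but reading the default gateway via iproute2 instead of net-tools
export DOCKER_IP=$(ip route | awk '/^default/ {print $3}')
envsubst < "/etc/nginx/conf.d/proxy.whitelist.template" > "/etc/nginx/conf.d/proxy.whitelist"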

Related

NGINX proxy_pass does not work; port is not handled in redirect

Similar questions appear on this site but I cannot figure this one out. I am running a dockerized config. I can hit my site at benweaver-VirtualBox:3000/dev/test/rm successfully. But I want to be able to hit the site without the port: benweaver-VirtualBox/dev/test/rm.
The port does not seem to be handled in my proxy_redirect. I tried commenting out the default nginx configuration to no effect. Because I am running a dockerized config, I thought the default config may not be relevant anyhow. It is true that netstat -tlpn | grep :80 does not find nginx. But the docker-compose config has nginx on port 80 both inside the container and as the exported port. The config:
server {
    listen 80;
    client_max_body_size 200M;
    location /dev/$NGINX_PREFIX/rm {
        proxy_pass http://$PUBLIC_IP:3000/dev/$NGINX_PREFIX/rm;
PUBLIC_IP is set to the hostname of the box: benweaver-VirtualBox. This hostname is defined in /etc/hosts:
127.0.0.1 benweaver-VirtualBox
I suspect the problem lies with my hostname.
What config of my hostname, benweaver-VirtualBox, is preventing a successful proxy_pass from a portless URL to benweaver-VirtualBox (127.0.0.1):3000, where my app is running?
I got things to work. Here are some takeaways:
(1) If you use an address that includes a port, such as my benweaver-VirtualBox:3000/dev/test/rm, you might not be hitting NGINX at all! Your first step is to make certain you are hitting NGINX.
(2) Know how your hostnames are associated with IP addresses in the /etc/hosts file. It is OK to associate two or more hostnames with the same numerical IP address.
(3) Learn about the use of trailing forward slashes in NGINX location and proxy_pass directives. There are two "styles" of writing a URL proxy. In one, the writer appends a trailing forward slash (a URI part) to the proxy_pass URL; the matched location prefix is then replaced, so anyone who wants the location path to remain in the proxied URL must replicate it themselves in the proxy_pass line. Omitting the trailing forward slash ensures that the location path is appended onto the proxied URL automatically.
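A minimal sketch of those two styles, reusing this question's path and port (the two location blocks are alternatives, not meant to live in the same server block):
# Style 1: proxy_pass carries a URI part ("/"), so the matched prefix is replaced;
# a request for /dev/test/rm/foo is forwarded upstream as /foo
location /dev/test/rm/ {
    proxy_pass http://127.0.0.1:3000/;
}

# Style 2: proxy_pass has no URI part, so the original request URI is kept;
# a request for /dev/test/rm/foo is forwarded upstream as /dev/test/rm/foo
location /dev/test/rm {
    proxy_pass http://127.0.0.1:3000;
}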

NGINX whitelist internal docker IP

I have a server that runs 2 docker containers, a Node.js API container, and an NGINX-RTMP container. The server itself also uses NGINX as a reverse proxy to sort traffic between these two containers based on port.
The NGINX-RTMP server accesses the API server via its network alias like so:
on_publish http://api-server:3000/authorize
Which works great for container-to-container communication. I can also go the other way by using URLs like
http://nginx-server:8080/some-endpoint
Now I have a route on the NGINX server that I would like to restrict to just local traffic (i.e. only the API server should be able to hit this location). Normally I can do this with a simple:
# nginx conf file
location /restricted {
    allow 127.0.0.1;
    deny all;
}
What I would like to do is something like this:
# nginx conf file
location /restricted {
    allow api-server;
    deny all;
}
But I need to use the actual IP of the container. I can get the IP of the container by inspecting it, and I see the IP is 172.17.0.1. However, when I look at other instances of this server I see some servers are 172.18.0.1 and 17.14.0.2, so it's not 100% consistent across servers. Now I could just write out all 256 variations of 172.*.0.0/24, but I imagine there must be a 'proper' way to wildcard this in nginx, or even a better way of specifying the container IP in my NGINX conf file. The only information I have found so far is to modify the type of network I'm using for my containers, but I don't want to do that.
How do I properly handle this?
# nginx conf file
location /restricted {
    allow 172.*.0.0/24;
    deny all;
}
I might have solved this one on my own actually.
Originally I thought I could use 172.0.0.1/8 in the allow block to cover all the IPs I thought possible for the local network, but this is wrong.
After reading this article: https://www.arin.net/reference/research/statistics/address_filters/ (archive mirror)
According to standards set forth in Internet Engineering Task Force (IETF) document RFC-1918, the following IPv4 address ranges are reserved by the IANA for private internets:
10.0.0.0/8 IP addresses: 10.0.0.0 – 10.255.255.255
172.16.0.0/12 IP addresses: 172.16.0.0 – 172.31.255.255
192.168.0.0/16 IP addresses: 192.168.0.0 – 192.168.255.255
Notice that the 172 net is a /12 and not /8.
Which is explained as
In August 2012, ARIN began allocating “172” address space to internet service, wireless, and content providers.
So I believe the correct method is:
# nginx conf file
location /restricted {
    allow 172.16.0.0/12;
    deny all;
}
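If you want to double-check which subnet a given Docker network actually uses before relying on the whole /12 range, something like this should print it (a sketch; "bridge" here is the default network name, so substitute whatever network your containers are attached to):
# print the subnet of a Docker network, e.g. 172.17.0.0/16
docker network inspect bridge --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'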

Do I need AWS Load balancer with nginx in AWS ECS?

I'm using Docker in AWS ECS. I have one EC2 machine with the Docker agent from AWS ECS, and the ECS task consists of 3 containers:
nginx container
application-nodejs container
staticfiles-nodejs-application container.
I want to support very huge traffic. Do I need to set up an AWS Load Balancer, or is my nginx upstream setting enough?
nginx conf example:
upstream appwww {
    server app-www:3000;
}
server {
    server_name my.home.net;
    location / {
        proxy_pass http://appwww;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
    listen 443 ssl http2; # managed by Certbot
    ssl_certificate......; # managed by Certbot
    ssl_certificate_key........ # managed by Certbot
    include /.......# managed by Certbot
    ssl_dhparam /.....pem; # managed by Certbot
}
server {
    if ($host = my.host.net) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    listen 80;
    server_name my.host.net;
    return 404; # managed by Certbot
}
Edit
I drew the current architecture and I want to add a Load Balancer; where should I put it? Does auto scaling fit into this drawing? Should I use one or more EC2 machines? Multiple containers? Multiple upstreams?
I suggest you start with using the load balancer, because:
you can configure SSL at the load balancer and terminate SSL at the load balancer
you can protect yourself from malicious attacks by configuring the load balancer to integrate with AWS WAF
you could easily add more targets in the future
the absence of load balancer requires you to configure SSL at the application level
it supports health checks.
you get free ACM certificate to use with load balancer
easy to renew SSL certs every year
Note: consider using AWS S3 and CloudFront to serve your static content
Introducing a load balancer to your existing architecture:
The Application Load Balancer now supports host-based routing, which makes it possible for multiple domains (or subdomains) to point to multiple websites. In addition to host-based routing, it also supports path-based routing; for example, while mydomain.com/web1 points to website1, mydomain.com/web2 can point to website2.
I can't think of a reason why you would need to use nginx (unless I am missing something).
So, answering your question, I would do it this way:
introduce an application load balancer
deploy multiple containers via ECS (Fargate)
for each service, I would have a dedicated target group to manage scaling and health checks.
finally, I would use host-based routing, with s1.mydomain.com and s2.mydomain.com each pointing to a different target group (one per service)
Reference:
https://aws.amazon.com/blogs/aws/new-host-based-routing-support-for-aws-application-load-balancers/
Hope this helps.
As you are saying that
I want to support very huge traffic.
I would expect that at some point you will need to scale your AWS ECS cluster horizontally to multiple instances and at that point, you will need an Elastic Load Balancer to balance your traffic between them.
https://docs.aws.amazon.com/AmazonECS/latest/developerguide///service-load-balancing.html
If this statement is true
I want to support very huge traffic
In addition to ECS tasks, you need to read about different concepts within AWS ECS:
Services
Application Load Balancer
Listeners
Target groups
AutoScaling (Because you're going to handle huge traffic)
In order to properly use AWS ECS you need to use those services together.

Gerrit redirects to wrong URL

I have installed Gerrit 2.12.3 on my Ubuntu Server 16.04 system.
Gerrit is listening on http://127.0.0.1:8102, behind an nginx server which is listening on https://SERVER1:8102.
Some contents of the etc/gerrit.config file are as follows:
[gerrit]
    basePath = git
    canonicalWebUrl = https://SERVER1:8102/
[httpd]
    listenUrl = proxy-https://127.0.0.1:8102/
And some contents of my nginx settings are as follows:
server {
    listen 10.10.20.202:8102 ssl;
    ssl on;
    ssl_certificate /etc/nginx/ssl/server1.crt;
    ssl_certificate_key /etc/nginx/ssl/server1.key;
    location / {
        # Allow for large file uploads
        client_max_body_size 0;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://127.0.0.1:8102;
    }
}
Nearly all the functions of Gerrit work very well now. But one problem I cannot solve is this:
The URL generated in the notification email is https://SERVER1:8102/11, which seems right, but when I click the link, it redirects to https://SERVER1/#/c/11/ instead of https://SERVER1:8102/#/c/11/.
Can anyone tell me how to solve it?
Thanks.
It makes no sense for the ports of gerrit.canonicalWebUrl and httpd.listenUrl to match.
Specify as gerrit.canonicalWebUrl the URL that is accessible to your users through the Nginx proxy, e.g., https://gerrit.example.com.
This vhost in Nginx (listening on port 443) is in turn configured to connect to the backend as specified in httpd.listenUrl, e.g. port 8102, on which Gerrit would be listening in your case.
The canonicalWebUrl is just used so that Gerrit knows its own host name, e.g., for sending email notifications, IIRC.
You might also just follow Gerrit Documentation and stick to the ports as described there.
EDIT: I just noticed that you want the proxy AND Gerrit to both listen on port 8102 - on a public interface and on 127.0.0.1, respectively. While this would work, provided you really make sure that Nginx is not binding to 0.0.0.0, I think it makes no sense at all. Don't you want your users to connect via HTTPS on port 443?
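As a rough sketch of the layout described above (gerrit.example.com, the certificate paths and the backend port are placeholder assumptions, not values from the question):
# etc/gerrit.config -- the canonical URL is the address users reach through the proxy
[gerrit]
    canonicalWebUrl = https://gerrit.example.com/
[httpd]
    listenUrl = proxy-https://127.0.0.1:8102/

# nginx vhost -- terminates HTTPS on 443 and forwards to Gerrit's backend port
server {
    listen 443 ssl;
    server_name gerrit.example.com;
    ssl_certificate /etc/nginx/ssl/gerrit.crt;
    ssl_certificate_key /etc/nginx/ssl/gerrit.key;
    location / {
        client_max_body_size 0;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://127.0.0.1:8102;
    }
}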

How to set up Kibana SSO (through OAuth)?

My company tries very hard to keep an SSO for all third-party services. I'd like to make Kibana work with our Google Apps accounts. Is that possible? How?
As of Elasticsearch/Kibana 5.0, the Shield plugin (security plugin) is embedded in X-Pack (a paid service). So from Kibana 5.0 you can:
use X-Pack
use Search Guard
Both of these plugins can be used with basic authentication, so you can apply an OAuth2 proxy like this one. The additional proxy would forward the request with the right Authorization header containing the digest base64(username:password); a sketch of this is shown below.
The procedure for X-Pack is depicted in this article.
I've set up a docker-compose configuration in this repo for using either Search Guard or X-Pack with Kibana/Elasticsearch 6.1.1:
docker-compose for searchguard
docker-compose for x-pack
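The basic-auth forwarding mentioned above could look roughly like this in nginx (a minimal sketch; the Kibana port, server name, and the base64 value of kibana_user:password are placeholder assumptions):
# after the OAuth2 proxy has authenticated the user, inject the
# basic-auth credentials that X-Pack / Search Guard expect upstream
location / {
    proxy_set_header Authorization "Basic a2liYW5hX3VzZXI6cGFzc3dvcmQ="; # base64 of kibana_user:password
    proxy_set_header Host $host;
    proxy_pass http://127.0.0.1:5601; # local Kibana instance
}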
Kibana leaves it up to you to implement security. I believe that Elastic's Shield product has support for security-as-a-plugin, but I haven't navigated the subscription model or looked much into it.
The way that I handle this is by using an oauth2 proxy application and use nginx to reverse proxy to Kibana.
server {
    listen 80;
    server_name kibana.example.org;
    # redirect http->https while we're at it
    rewrite ^ https://$server_name$request_uri? permanent;
}
server {
    # listen for traffic destined for kibana.example.org:443
    listen 443 default ssl;
    server_name kibana.example.org;
    ssl_certificate /etc/nginx/ssl/cert.crt;
    ssl_certificate_key /etc/nginx/ssl/cert.key.pem;
    add_header Strict-Transport-Security max-age=1209600;
    # for https://kibana.example.org/, send to our oauth2 proxy app
    location / {
        # the oauth2 proxy application i use listens on port :4180
        proxy_pass http://127.0.0.1:4180;
        # preserve our host and ip from the request in case we want to
        # dispatch the request to a named nginx directive
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_connect_timeout 15;
        proxy_send_timeout 30;
        proxy_read_timeout 30;
    }
}
The request comes in, triggers an nginx directive that sends the request to the oauth application, which in turn handles the SSO resource and redirects to a listening Kibana instance on the server's localhost. It's secure because connections cannot be made directly to Kibana.
Use the oauth2-proxy application and Kibana with anonymous authentication configured as in the config below:
xpack.security.authc.providers:
  anonymous.anonymous1:
    order: 0
    credentials:
      username: "username"
      password: "password"
The user whose credentials are specified in the config can be created either via the Kibana UI or via the Elasticsearch create or update users API.
Note! The Kibana instance should not be publicly available, otherwise anybody will be able to access the Kibana UI.
