NGINX whitelist internal docker IP

I have a server that runs 2 docker containers, a Node.js API container, and an NGINX-RTMP container. The server itself also uses NGINX as a reverse proxy to sort traffic between these two containers based on port.
The NGINX-RTMP server accesses the API server via its network alias, like so:
on_publish http://api-server:3000/authorize
This works great for container-to-container communication. I can also go the other way by using URLs like
http://nginx-server:8080/some-endpoint
Now I have a route on the NGINX server that I would like to restrict to local traffic only (i.e. only the API server should be able to hit this location). Normally I would do this with a simple
# nginx conf file
location /restricted {
    allow 127.0.0.1;
    deny all;
}
What I would like to do is something like this:
# nginx conf file
location /restricted {
    allow api-server;
    deny all;
}
But I need to use the actual IP of the container. I can get the IP of the container by inspecting it, and I see the IP is 172.17.0.1. However, when I look at other instances of this server I see some are 172.18.0.1 and 17.14.0.2, so it's not 100% consistent across servers. I could write out all 256 variations of 172.*.0.0/24, but I imagine there must be a 'proper' way to wildcard this in nginx, something like the config below, or an even better way of specifying the container IP in my NGINX conf file. The only information I have found so far is to modify the type of network I'm using for my containers, but I don't want to do that.
# nginx conf file
location /restricted {
    allow 172.*.0.0/24;
    deny all;
}
How do I properly handle this?

I might have solved this one on my own actually.
Originally I thought I could use 172.0.0.0/8 in the allow block to cover all the IPs I thought possible for the local network, but this is wrong.
After reading this article: https://www.arin.net/reference/research/statistics/address_filters/ (archive mirror)
According to standards set forth in Internet Engineering Task Force (IETF) document RFC-1918, the following IPv4 address ranges are reserved by the IANA for private internets:
10.0.0.0/8 IP addresses: 10.0.0.0 – 10.255.255.255
172.16.0.0/12 IP addresses: 172.16.0.0 – 172.31.255.255
192.168.0.0/16 IP addresses: 192.168.0.0 – 192.168.255.255
Notice that the 172 net is a /12 and not /8.
This is explained as follows:
In August 2012, ARIN began allocating “172” address space to internet service, wireless, and content providers.
So I believe the correct method is:
# nginx conf file
location /restricted {
    allow 172.16.0.0/12;
    deny all;
}
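Docker's default address pools typically fall within 172.16.0.0/12 (though user-defined networks can also be given 10.x or 192.168.x subnets), so the rule above covers the common case. As a sketch only, not part of the original answer, a more defensive variant would allow all three RFC 1918 ranges:
# nginx conf file
location /restricted {
    # RFC 1918 private ranges; trim this list to what your networks actually use
    allow 10.0.0.0/8;
    allow 172.16.0.0/12;
    allow 192.168.0.0/16;
    deny all;
}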

Related

Docker containers with Nginx share the same network but can't reach each other

Recently I have been trying to set up a little home server with a built-in DNS.
The DNS service is provided by lancacheDNS, set up in combination with a Monolithic cache (port 1234) in two Docker containers on 192.168.178.11 (the host machine) in my local network.
Since I want to serve a website (port 8080) along with some independent APIs (ports 8081, 8082 or whatever), I decided to use Nginx as a reverse proxy.
The DNS does the following:
getr.me --> 192.168.178.11
The routing works completely fine and getr.me:8080 gives me my website as expected.
Now the tricky part (for me):
Set up Nginx such that:
website.getr.me --> serving website
api1.getr.me --> serving the API1
api2.getr.me --> serving the API2
For that I created a network "default_dash_nginx".
I edited the nginx service to connect to that network via:
networks:
  default:
    name: default_dash_nginx
    external: true
I also connected my website-serving container (dashboard) to the network via --network default_dash_nginx.
The website container gets the IP 172.20.0.4 (found via docker inspect default_dash_nginx), and the nginx server is also connected to the network.
Nginx works and I can edit the admin page.
But unfortunately, even though I set the proxy host to the IP + port of my website received from the network, the site is not available. Here is the output of my network inspection: https://pastebin.com/jsuPZpqQ
I hope you have another Idea,
thanks in advance,
Maxi
Edit:
The nginx container is actually an NginxReverseProxyManager container (I don't know if that was unclear above or simply not important).
The Nginx container can actually ping the website container and also fetch the HTML files from port 80 on it.
So it seems like nginx itself isn't working like it should.
The first answer got no results (I tried saving it as each of the mentioned files here).
Have I missed something, or am I just not smart enough?
Try this nginx config and adapt it to your setup:
server {
    listen 80;
    server_name api1.getr.me;
    location / {
        proxy_pass http://localhost:8081;
    }
}
server {
    listen 80;
    server_name api2.getr.me;
    location / {
        proxy_pass http://localhost:8082;
    }
}
server {
    listen 80;
    server_name some.getr.me;
    location / {
        proxy_pass http://localhost:XXXX;
    }
}
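One caveat, not from the original answer: if nginx itself runs in a container (as with the Nginx Proxy Manager setup described above), localhost inside that container refers to the nginx container, not to the host or to the other containers. On a shared Docker network the upstreams are normally addressed by their service names instead. A sketch under that assumption, where dashboard and api1 are placeholder container names:
server {
    listen 80;
    server_name website.getr.me;
    location / {
        # "dashboard" is a placeholder for the website container's service name
        proxy_pass http://dashboard:80;
    }
}
server {
    listen 80;
    server_name api1.getr.me;
    location / {
        # placeholder service name and internal port of the first API container
        proxy_pass http://api1:8081;
    }
}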

NGINX proxy_pass does not work; port is not handled in redirect

Similar questions appear on this site but I cannot figure this one out. I am running a dockerized config. I can hit my site at benweaver-VirtualBox:3000/dev/test/rm successfully, but I want to be able to hit the site without the port: benweaver-VirtualBox/dev/test/rm.
The port does not seem to be handled in my proxy_pass/proxy_redirect. I tried commenting out the default nginx configuration to no effect. Because I am running a dockerized config, I thought the default config might not be relevant anyhow. It is true that netstat -tlpn | grep :80 does not find nginx, but the docker-compose config has nginx on port 80 both inside the container and as the published port. The config:
server {
    listen 80;
    client_max_body_size 200M;
    location /dev/$NGINX_PREFIX/rm {
        proxy_pass http://$PUBLIC_IP:3000/dev/$NGINX_PREFIX/rm;
    }
}
PUBLIC_IP is set to the hostname of the box: benweaver-VirtualBox. This hostname is defined in /etc/hosts:
127.0.0.1 benweaver-VirtualBox
I suspect the problem to lie with my hostname.
What config of my hostname, benweaver-VirtualBox, is preventing a successful proxy_pass from a portless URL to benweaver-VirtualBox (127.0.0.1) : 3000 where my app is running?
I got things to work. Here are some take-aways:
(1) If you use an address that includes a port, such as my benweaver-VirtualBox:3000/dev/test/rm, you might not be hitting NGINX at all! Your first step is to make certain you are hitting NGINX.
(2) Know how your hosts are associated with IP addresses in the /etc/hosts file. It is OK to associate two or more hostnames with the same numerical IP address.
(3) Learn about the use of trailing forward slashes in NGINX location and proxy_pass expressions. There are two "styles" of writing a URL proxy. In one, the writer appends a trailing forward slash; if they then wish to keep the location path in the proxied URL, they must replicate it themselves by appending the path elements in the proxy_pass line. Omitting the trailing forward slash means the location path is appended onto the proxied URL automatically.
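To illustrate that last point with the paths from this question (the behaviour described is standard proxy_pass handling; the blocks themselves are a sketch, not the poster's config): when proxy_pass includes a URI part, the matching location prefix is replaced by that URI, and when it has no URI part, the original request URI is passed through unchanged.
# Variant 1: proxy_pass carries a URI ("/"), so the location prefix is stripped.
# A request for /dev/test/rm/status is proxied to http://127.0.0.1:3000/status
location /dev/test/rm/ {
    proxy_pass http://127.0.0.1:3000/;
}
# Variant 2: proxy_pass has no URI, so the request URI is passed as-is.
# A request for /dev/test/rm/status is proxied to http://127.0.0.1:3000/dev/test/rm/status
location /dev/test/rm {
    proxy_pass http://127.0.0.1:3000;
}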

Nginx Proxy: Allow IP from proxy only

Here's my goal:
admin.domain.com is where we have a Magento 2 instance set up. It's locked down in Nginx to a whitelist of IPs.
api.domain.com has its own whitelist, and it ultimately goes to admin.domain.com/rest/..., preferably without the requester being able to see that.
The idea is to enforce all API integrations to go through the api subdomain, and to hide our admin domain entirely. Note - This is inside a Docker container, not directly on a server.
Currently, how I am attempting to accomplish this is using proxy_pass and setting the allow and deny blocks accordingly. Here is a snippet of our Nginx configs
server {
    server_name admin.domain.com;
    # other stuff
    location ~ /(index.php/rest|rest) {
        allow $DOCKER_IP; # Seems to come from Docker Gateway IP as of now
        deny all;
        # other stuff
    }
    location / {
        # other stuff
    }
}
server {
    server_name api.domain.com;
    # other stuff
    location ~ /(index.php/rest|rest) {
        proxy_set_header Host admin.domain.com;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://admin.domain.com;
    }
    location / {
        return 403;
    }
}
In theory, this should work. From testing this I noticed that all requests to api.domain.com are forwarded to admin.domain.com and admin sees the request from the Docker container's Gateway IP as the source IP. So, I can add the Gateway IP in the allow $DOCKER_IP line. The main problem here is finding a dependable way to get this IP since it changes every time the container is recreated (on each release).
Alternatively, if there's a more simple way to do this, I would prefer that. I'm trying not to over-complicate this, but I'm a little over my head here with Nginx configurations.
So, my Questions are this:
Am I way over-complicating this, and is there a recommendation of a different approach to look into?
If not, is there a dependable way to get the Docker container's Gateway IP in Nginx, or maybe in entrypoint so that I can set it as a variable and place it into the nginx config?
Since the Docker container is ephemeral and the IP can change every time (and it's very hard to pass the user's real IP address all the way through a proxy to the Docker container), it may be a lot simpler to control this with code.
I'd create a new module with a config value for the IP address, which would allow you to edit the IP address from the admin. This is architecturally more scalable as you don't need to rely on a hard-coded IP.
Within this module you'll want to create an event observer on something like the controller_action_predispatch event. You can detect an admin route, and check/prevent access to that route based on the value of the configuration object for the IP address. This way you aren't relying on Docker at all and you would have an admin-editable value to control the IP address/range.
This is how I have solved this for now. I'm still interested in better solutions if possible, but for now this is what I'm doing.
This is a snippet of the Nginx config for the API domain. It has its own whitelist for API access and then reverse proxies to the real domain where M2 is hosted.
server {
    server_name api.domain.com;
    # other stuff
    location ~ /(index.php/rest|rest) {
        # specific whitelist for API access
        include /etc/nginx/conf.d/api.whitelist;
        proxy_set_header Host admin.domain.com;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://admin.domain.com;
    }
    location / {
        return 403;
    }
}
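The included whitelist files are just plain allow directives. A hypothetical api.whitelist might look like the sketch below; the addresses are documentation-range placeholders, not the poster's values:
# /etc/nginx/conf.d/api.whitelist (illustrative addresses only)
allow 203.0.113.10;      # e.g. a single integration partner
allow 198.51.100.0/24;   # e.g. an office network
deny all;                # everyone else is rejected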
And then on the final domain (admin.domain.com) we use this location block to only allow traffic to the API (/rest) that comes from the proxy, so nobody can request our API directly at this domain.
server {
    server_name admin.domain.com;
    # other stuff
    location ~ /(index.php/rest|rest) {
        include /etc/nginx/conf.d/proxy.whitelist;
        allow $DOCKER_IP; # Seems to come from Docker Gateway IP as of now
        deny all;
        # other stuff
    }
    location / {
        # other stuff
    }
}
So, in order to accomplish the restriction for the proxy traffic, the file /etc/nginx/conf.d/proxy.whitelist is generated in entrypoint.sh of the docker container. I'm using a template file proxy.whitelist.template that looks like
# Docker IP
allow $DOCKER_IP;
I did this because there are a couple other hard-coded IPs we have in that file already.
Then, in the entrypoint I use the following to find the gateway IP of the Docker container:
export DOCKER_IP=$(route -n | awk '{if($4=="UG")print $2}')
envsubst < "/etc/nginx/conf.d/proxy.whitelist.template" > "/etc/nginx/conf.d/proxy.whitelist"
And so far that seems to be working for me.
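As a side note (an assumption on my part, not from the original post): route comes from net-tools, which is missing from many slim base images, so a variant using iproute2 may be more portable if the ip command is available in the container:
# same idea with iproute2; the "default via <gateway>" line yields the gateway in field 3
export DOCKER_IP=$(ip route | awk '/^default/ {print $3; exit}')
envsubst < "/etc/nginx/conf.d/proxy.whitelist.template" > "/etc/nginx/conf.d/proxy.whitelist"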

Multiple Sub-Domains on a single server. Docker + NGINX @ EC2

I have multiple NGINX-uWSGI based Django applications deployed using Docker and hosted on EC2 (currently at different ports like 81, 82, ...). Now I wish to add sub-domains to this such that sub1.domain.com and sub2.domain.com both work from the same EC2 instance.
I am fine with multiple ports, BUT DNS records alone cannot point a sub-domain at a port:
sub1.domain.com -> 1.2.3.4:81
sub2.domain.com -> 1.2.3.4:82
What I cannot do:
Multiple IPs ref: allocating a new IP for each deployed sub-domain is not possible.
NGINX Proxy ref: this looks like the ideal solution, BUT it is not maintained by an org like Docker or NGINX, so I am unsure of its security and reliability.
What I am considering:
I am considering writing my own NGINX reverse proxy, similar to Apache Multiple Sub Domains With One IP Address, BUT then the traffic will flow via multiple proxies, since there is already an NGINX-uWSGI proxy in the tech stack.
You can use an nginx upstream:
upstream backend {
    server backend1.example.com weight=5;
    server backend2.example.com:8080;
    server unix:/tmp/backend3;
    server backup1.example.com:8080 backup;
    server backup2.example.com:8080 backup;
}
server {
    server_name sub.test.com www.sub.test.com;
    location / {
        proxy_pass http://backend;
    }
}
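For the setup in the question (one Django container per published port on the same host), a more direct pattern is one server block per sub-domain, each proxying to the matching local port. This is a sketch using the ports and sub-domain names from the question; everything else is illustrative:
# nginx conf file
server {
    listen 80;
    server_name sub1.domain.com;
    location / {
        proxy_pass http://127.0.0.1:81;  # first NGINX-uWSGI container, published on port 81
    }
}
server {
    listen 80;
    server_name sub2.domain.com;
    location / {
        proxy_pass http://127.0.0.1:82;  # second container, published on port 82
    }
}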

Re-resolve of backend in nginx SNI docker swarm

I am using nginx to do TCP forwarding based on hostname as discussed here: Nginx TCP forwarding based on hostname
When the upstream containers are taken down for a short period of time (5 or so minutes) and then brought back up, nginx doesn't seem to re-resolve them (I continue to get a "111: connection refused" error).
I've attempted to put a resolver in the server block of the nginx config:
server {
    listen 443;
    resolver x.x.x.x valid=30s;
    proxy_pass $name;
    ssl_preread on;
}
I still get the same behaviour with this in place.
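For reference, a fuller sketch of that stream configuration, assuming (as in the linked question) that $name is derived from the SNI hostname via a map block and that Docker's embedded DNS at 127.0.0.11 is used as the resolver, so the service name is re-resolved while nginx runs; the hostnames and service names below are placeholders:
# stream context
map $ssl_preread_server_name $name {
    app1.example.com  app1_service:443;
    app2.example.com  app2_service:443;
    default           app1_service:443;
}
server {
    listen 443;
    ssl_preread on;
    # Docker's embedded DNS; valid=30s limits how long a looked-up address is cached
    resolver 127.0.0.11 valid=30s;
    proxy_pass $name;
}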
Like BMitch says, you can scale the service to 0 (rather than removing it) to ensure its DNS entry remains available to Nginx.
But really, if you're using nginx in Swarm, I recommend using a Swarm-aware proxy solution that dynamically updates the nginx/haproxy config based on Services that have the proper labels. In those cases, when the service is removed, its config is also removed from the proxy. Ones I've used include:
Traefik
Docker Flow Proxy
