NGINX proxy_pass does not work; port is not handled in redirect - docker

Similar questions appear on this site, but I cannot figure this one out. I am running a dockerized config. I can hit my site at benweaver-VirtualBox:3000/dev/test/rm successfully, but I want to be able to hit the site without the port: benweaver-VirtualBox/dev/test/rm.
The port does not seem to be handled in my proxy_redirect. I tried commenting out the default nginx configuration, to no effect; because I am running a dockerized config, I thought the default config might not be relevant anyhow. It is true that netstat -tlpn | grep :80 does not find nginx, but the docker-compose config has nginx on port 80 both inside the container and as the exported port. The config:
server {
    listen 80;
    client_max_body_size 200M;

    location /dev/$NGINX_PREFIX/rm {
        proxy_pass http://$PUBLIC_IP:3000/dev/$NGINX_PREFIX/rm;
    }
}
PUBLIC_IP is set to the hostname of the box: benweaver-VirtualBox. This hostname is defined in /etc/hosts:
127.0.0.1 benweaver-VirtualBox
I suspect the problem lies with my hostname.
What config of my hostname, benweaver-VirtualBox, is preventing a successful proxy_pass from a portless URL to benweaver-VirtualBox (127.0.0.1):3000, where my app is running?
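For reference, the docker-compose port mapping described above looks roughly like this (a sketch; the service names are assumptions):
services:
  nginx:
    ports:
      - "80:80"      # host port 80 -> container port 80
  app:
    ports:
      - "3000:3000"  # the app the proxy forwards to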

I got things to work. Here are some take-aways:
1. If you use an address that includes a port, such as my benweaver-VirtualBox:3000/dev/test/rm, you might not be hitting NGINX at all! Your first step is to make certain you are hitting NGINX.
2. Know how your hostnames are associated with IP addresses in the /etc/hosts file. It is fine to associate two or more hostnames with the same numerical IP address.
3. Learn how trailing forward slashes work in NGINX location and proxy_pass expressions. There are two "styles" of writing a URL proxy. In one, you append a trailing forward slash to the proxy_pass URL; nginx then replaces the matched location prefix with that URI, so if you want the location path to appear in the proxied URL you must append those path elements yourself in the proxy_pass line. If you omit the trailing slash (i.e. give proxy_pass no URI part), the location path is appended onto the proxied URL automatically.
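A minimal sketch of the two styles from take-away 3, assuming the app listens on 127.0.0.1:3000 and the paths are placeholders:
# Style 1: proxy_pass carries a URI (trailing slash) - the matched location prefix
# is replaced, so the path elements must be repeated by hand.
location /dev/test/rm/ {
    proxy_pass http://127.0.0.1:3000/dev/test/rm/;
}

# Style 2: proxy_pass has no URI - the original request path is passed through,
# so the location path is appended automatically.
location /dev/test/rm {
    proxy_pass http://127.0.0.1:3000;
}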

Related

Docker containers with Nginx share the same network but can't reach each other

Recently I have been trying to set up a little home server with a built-in DNS.
The DNS service is provided by lancacheDNS, set up in combination with a Monolithic cache (port 1234) in two Docker containers on 192.168.178.11 (the host machine) in my local network.
Since I want to serve a website (port 8080) along with some independent APIs (ports 8081, 8082 or whatever), I decided to use Nginx as a reverse proxy.
The DNS does the following:
getr.me --> 192.168.178.11
The routing works completely fine and getr.me:8080 gives me my website as expected.
Now the tricky part (for me):
Set up Nginx such that:
website.getr.me --> serving website
api1.getr.me --> serving the API1
api2.getr.me --> serving the API2
For that I created a network "default_dash_nginx".
I edited the nginx service to connect to that network via:
networks:
  default:
    name: default_dash_nginx
    external: true
I also connected my website-serving container (dashboard) to the network via --network default_dash_nginx.
The website container gets the IP 172.20.0.4 (received via docker inspect default_dash_nginx), and the nginx server is also connected to the network.
Nginx works and I can edit the admin page.
But unfortunately, even though I set the proxy host to the IP + port of my website received from the network, the site is not available. Here is the output of my network inspection: https://pastebin.com/jsuPZpqQ
I hope you have another idea.
Thanks in advance,
Maxi
Edit:
The nginx container is actually a NginxReverseProxyManager container (I don't know if that was unclear above or simply not important).
The Nginx container can actually ping the website container and also fetch the HTML files from it on port 80.
So it seems like nginx itself isn't working like it should.
The first answer got no results (I tried saving it as each of the files mentioned here).
Have I missed something, or am I just not smart enough?
The nginx config, to try and understand:
server {
    listen 80;
    server_name api1.getr.me;

    location / {
        proxy_pass http://localhost:8081;
    }
}

server {
    listen 80;
    server_name api2.getr.me;

    location / {
        proxy_pass http://localhost:8082;
    }
}

server {
    listen 80;
    server_name some.getr.me;

    location / {
        proxy_pass http://localhost:XXXX;
    }
}
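One thing worth noting (an assumption about the intended setup, not something confirmed above): when nginx itself runs in a container on default_dash_nginx, localhost refers to the nginx container rather than the host, so the upstreams would normally be addressed by their container names on the shared network. A sketch of one server block written that way:
server {
    listen 80;
    server_name api1.getr.me;

    location / {
        # "api1" is the container/service name on default_dash_nginx (assumed)
        proxy_pass http://api1:8081;
    }
}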

NGINX Server to Redirect to Docker Container

This is what I want to achieve:
On server A, Docker is installed. There are, let's say, 3 containers:
Container 1: App1, ip: 172.17.0.2, network: mynet, Simple HTML Welcome page, accessible by port 80
Container 2: App2, ip: 172.17.0.3, network: mynet, a Wiki System -> dokuwiki, accessible by port 8080
Container 3: App3, ip: 172.17.0.4, network: mynet, something else
As you can see, all the containers are in the same Docker network, and they are accessible on different ports.
The clients on the same network need to access all of the containers. I can't use DNS in this case (reverse proxy via vhost), because I do not control the DNS. My goal:
Container 1 : accessible via http://myserver.home.local/app1/
Container 2 : accessible via http://myserver.home.local/app2/
Container 3 : accessible via http://myserver.home.local/app3/
What I did to solve this is the following: add another container with nginx, and proxy_pass to the other containers. I use the official nginx image (docker pull nginx), then I mount my custom config into the /etc/nginx/conf.d directory. My config looks like the following:
server {
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }

    location /app1/ {
        proxy_pass http://app1/;
    }

    location /app2/ {
        proxy_pass http://app2:8080/;
    }

    location /app3/ {
        proxy_pass http://app3/;
    }
}
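For reference, the proxy container itself is started with that config mounted roughly like this (a sketch; the local file name and the published port are assumptions):
docker run -d --name proxy --network mynet -p 80:80 \
  -v "$(pwd)/proxy.conf:/etc/nginx/conf.d/default.conf:ro" nginx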
App1 does work. App2 does not: it prints some ugly HTML output, and in the browser web console I see a lot of 404s. I guess that has something to do with reverse proxying / rewriting in nginx, because app2 is DokuWiki. I also added the nginx equivalent of Apache's ProxyPassReverse, without success.
I just do not know what to do in this case, or where to start. How can I know what needs to be rewritten? I hope someone can help me.
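(For clarity, the nginx counterpart of Apache's ProxyPassReverse is proxy_redirect; what I tried looked roughly like this, a sketch whose exact values are assumptions:)
location /app2/ {
    proxy_pass http://app2:8080/;
    # rewrite Location headers coming back from app2 to the public /app2/ prefix
    proxy_redirect http://app2:8080/ /app2/;
}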
As mentioned in the comments:
As soon as I use the DokuWiki basedir / baseurl config, the proxy works as expected. To do so, edit the dokuwiki.php configuration file located in the conf folder:
conf/dokuwiki.php
Change the following settings to match your environment:
$conf['basedir'] = '/dokuwiki';
$conf['baseurl'] = '';

NGINX whitelist internal docker IP

I have a server that runs 2 docker containers, a Node.js API container, and an NGINX-RTMP container. The server itself also uses NGINX as a reverse proxy to sort traffic between these two containers based on port.
The NGINX-RTMP server accesses the API server via its network alias, like so:
on_publish http://api-server:3000/authorize
This works great for container-to-container communication. I can also go the other way by using URLs like
http://nginx-server:8080/some-endpoint
Now I have a route on the NGINX server that I would like to restrict to just local traffic (i.e. only the API server should be able to hit this location). Normally I could do this with a simple:
# nginx conf file
location /restricted {
    allow 127.0.0.1;
    deny all;
}
What I would like to do is something like this:
# nginx conf file
location /restricted {
    allow api-server;
    deny all;
}
But I need to use the actual IP of the container. I can get the IP of the container by inspecting it, and I see the IP is 172.17.0.1. However, when I look at other instances of this server, I see some are 172.18.0.1 and 17.14.0.2, so it's not 100% consistent across servers. I could just write out all 256 variations of 172.*.0.0/24, but I imagine there must be a 'proper' way to wildcard this in nginx, or an even better way of specifying the container IP in my NGINX conf file. The only information I have found so far is to modify the type of network I'm using for my containers, but I don't want to do that.
How do I properly handle this?
# nginx conf file
location /restricted {
    allow 172.*.0.0/24;
    deny all;
}
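(For reference, this is roughly how the container IP is being read; the container name is an assumption:)
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' api-server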
I might have solved this one on my own, actually.
Originally I thought I could use a 172.0.0.0/8 block to allow all the IPs I thought possible for the local network, but this is wrong.
After reading this article: https://www.arin.net/reference/research/statistics/address_filters/ (archive mirror)
According to standards set forth in Internet Engineering Task Force (IETF) document RFC 1918, the following IPv4 address ranges are reserved by the IANA for private internets:
10.0.0.0/8 IP addresses: 10.0.0.0 – 10.255.255.255
172.16.0.0/12 IP addresses: 172.16.0.0 – 172.31.255.255
192.168.0.0/16 IP addresses: 192.168.0.0 – 192.168.255.255
Notice that the 172 net is a /12 and not a /8, which is explained as:
In August 2012, ARIN began allocating “172” address space to internet service, wireless, and content providers.
So I believe the correct method is:
# nginx conf file
location /restricted {
    allow 172.16.0.0/12;
    deny all;
}
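If you want something narrower than the whole private /12, another option (not from the question; the subnet value below is an assumption) is to pin the compose network's subnet so the container range is known up front:
# docker-compose.yml (sketch)
networks:
  default:
    ipam:
      config:
        - subnet: 172.28.0.0/16
With a pinned subnet, the rule can be exactly that range, e.g. allow 172.28.0.0/16; instead of the full 172.16.0.0/12.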

Re-resolve of backend in nginx SNI docker swarm

I am using nginx to do TCP forwarding based on hostname as discussed here: Nginx TCP forwarding based on hostname
When the upstream containers are taken down for a short period of time (5 or so minutes) and then brought back up, nginx doesn't seem to re-resolve them (I continue to get a "111: Connection refused" error).
I've attempted to put a resolver in the server block of the nginx config:
# (inside the stream {} context)
server {
    listen 443;
    resolver x.x.x.x valid=30s;
    proxy_pass $name;
    ssl_preread on;
}
I still get the same behaviour with this in place.
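For context, the full pattern from the linked question looks roughly like this (a sketch; the hostnames, the map entries, and the Docker embedded DNS address 127.0.0.11 are assumptions):
stream {
    # Docker's embedded DNS (assumed); valid= keeps the names re-resolving
    resolver 127.0.0.11 valid=10s;

    # pick a backend from the TLS SNI name (hostnames/services are assumptions)
    map $ssl_preread_server_name $name {
        app1.example.com app1:443;
        app2.example.com app2:443;
    }

    server {
        listen 443;
        ssl_preread on;
        # a variable here makes nginx resolve the name at request time via the resolver
        proxy_pass $name;
    }
}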
Like BMitch says, you can scale to 0 to ensure DNS remains available to Nginx.
But really, if you're using nginx in Swarm, I recommend using a Swarm-aware proxy solution that dynamically updates the nginx/haproxy config based on services that have the proper labels. In those cases, when the service is removed, its config is also removed from the proxy. Ones I've used include:
Traefik
Docker Flow Proxy

Tricking a Rails App to think it's on a different port

I have a Rails app that is running on port 8080 that I need to trick to think it's running on port 80.
I am running Varnish on port 80 and forwarding requests to nginx on port 8080, but when the user tries to log in with OmniAuth and the Devise gem generates a URL to redirect back to the server, it thinks it's on port 8080, which the user will then see.
Is there any way to trick the Rails app into hard-coding the port as 80 (I would think that's bad practice), or to have nginx forward the request as if it were running on port 80?
Since I am not running an nginx proxy to the Rails app, I can't think of a way to trick the port.
Has anyone run into this issue before? If so, what sort of configuration is needed to fix it?
Thanks in advance!
EDIT:
Both nginx and Varnish are running on the same server.
I have the same setup with Varnish on port 80 and nginx on port 8080 and OmniAuth (no Devise) was doing exactly the same thing. I tried setting X-Forwarded-Port etc in Varnish and fastcgi_param SERVER_PORT 80; in nginx, both without success. The other piece in my setup is Passenger (which you didn't mention) but if you are indeed using Passenger then you can use:
passenger_set_cgi_param SERVER_PORT 80;
(The docs say you can set this in an http block but that didn't work for me and I had to add it to the server block.)
http://modrails.com/documentation/Users%20guide%20Nginx.html#passenger_set_cgi_param
Set up X-Forwarded-Port in Varnish. See this example and the other results from a Google search for "varnish x-forwarded-port".
You must also, of course, set up X-Forwarded-For and X-Forwarded-Proto.
The headers X-Forwarded-For, X-Forwarded-Proto, and X-Forwarded-Port are a way for HTTP reverse proxies such as Nginx, Squid, or Varnish to communicate to the "back-end" HTTP application server, your Rails application running in Thin or Unicorn, who the user actually is and how the user actually connected.
For example, suppose you have Nginx in front of your Rails application. Your Rails application was booted with Thin and is listening on 127.0.0.1:8080, while Nginx is listening on 0.0.0.0:80 for HTTP and 0.0.0.0:443 for HTTPS. Nginx is configured to proxy all connections to the Rails app. Then your Rails app will think that any user's IP address is 127.0.0.1, the port is 8080, and the scheme is http, even if the actual user connected from 1.2.3.4 and requested the page via https on port 443. The solution is to configure Nginx to set the headers:
X-Forwarded-For: 1.2.3.4
X-Forwarded-Scheme: https
X-Forwarded-Port: 443
and the Rails app should use these parameters instead of the default ones.
The same applies for whatever reverse proxy you use, such as Varnish in your case.
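In nginx terms, passing those headers to the app server looks roughly like this (a sketch; the upstream address and the hard-coded port are assumptions):
location / {
    proxy_pass http://127.0.0.1:8080;      # wherever the Rails app (Thin/Unicorn) listens
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Port 80;  # the public port the user actually hit
}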
You can make a proxy and serve it on whatever port you want.
Maybe with Apache on top and Passenger standalone...
<VirtualHost *:80>
    ServerName <name>
    DocumentRoot /home/deploy/<name>
    PassengerEnabled off
    ProxyPass / http://127.0.0.1:<port>/
    ProxyPassReverse / http://127.0.0.1:<port>/
</VirtualHost>
In shell:
passenger start -e staging -p 3003 -d
Your problem seems to be that you're getting redirects to port 8080. The best solution would be to configure Rails (or the OmniAuth/Devise gem) to treat the requests as if they were fired on port 80 (but I have no idea how, or whether it is possible).
Like ablemike said, Apache has a great module for this (mod_proxy); with ProxyPassReverse it rewrites the redirects back to port-80 redirects. Even better, with mod_proxy_html it will replace port-8080 links in HTML pages with port-80 links.
If you only need to rewrite redirects, you can rewrite redirects in Varnish VCL with something like:
sub vcl_fetch {
    ...
    # Rewrite redirect from port 8080 to port 80
    if (obj.http.Location ~ "^http://[^:]+:8080/.*") {
        set obj.http.Location = regsub(obj.http.Location, "^(http://[^:]+):8080(/.*)", "\1\2");
    }
}
(I think you have to replace obj with beresp if you use varnish >= 2.1)
If you have to rewrite HTML pages, this will be a lot harder to do completely correct with varnish.
