I am using nginx to do TCP forwarding based on hostname as discussed here: Nginx TCP forwarding based on hostname
When the upstream containers are taken down for a short period of time (5 or so minutes) and then brought back up, nginx doesn't seem to re-resolve them (I continue to get a "111: Connection refused" error).
I've attempted to put a resolver in the server block of the nginx config:
server {
    listen 443;
    resolver x.x.x.x valid=30s;
    proxy_pass $name;
    ssl_preread on;
}
I still get the same behaviour with this in place.
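For context, nginx only re-resolves a backend at runtime when proxy_pass is given a variable together with a resolver, which implies a full stream configuration along these lines (a minimal sketch; the map block and backend names are assumptions based on the linked question, and 127.0.0.11 is Docker's embedded DNS if nginx runs in a container):

stream {
    # Map the TLS SNI name to a backend host:port; because $name is a
    # variable, it is looked up through the resolver per-connection
    # rather than once at startup.
    map $ssl_preread_server_name $name {
        app1.example.com app1_backend:443;   # hypothetical backends
        app2.example.com app2_backend:443;
    }

    server {
        listen 443;
        resolver 127.0.0.11 valid=30s;   # Docker's embedded DNS
        proxy_pass $name;
        ssl_preread on;
    }
}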
Like BMitch says, you can scale the service to 0 instead of removing it, so that its DNS entry remains available to Nginx.
But really, if you're using nginx in Swarm, I recommend using a Swarm-aware proxy solution that dynamically updates the nginx/haproxy config based on services that have the proper labels. In those cases, when a service is removed, its config is also removed from the proxy. Ones I've used include:
Traefik
Docker Flow Proxy
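For a flavor of the label-driven approach, here is a minimal sketch of a Swarm service labeled for Traefik v2 (the service name, image, and hostname are all hypothetical):

services:
  whoami:
    image: traefik/whoami
    deploy:
      labels:
        # Traefik watches Swarm and builds its routing table from these
        - traefik.enable=true
        - traefik.http.routers.whoami.rule=Host(`whoami.example.com`)
        - traefik.http.services.whoami.loadbalancer.server.port=80

When the service is removed, its route disappears with it, so there is no stale DNS name left for the proxy to resolve.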
Recently I've been trying to set up a little home server with a built-in DNS.
The DNS service is provided by lancacheDNS, set up in combination with a Monolithic cache (port 1234) in two Docker containers on 192.168.178.11 (the host machine) in my local network.
Since I want to serve a website (port 8080) along with some independent APIs (ports 8081, 8082 or whatever), I decided to use Nginx as a reverse proxy.
The DNS does the following:
getr.me --> 192.168.178.11
The routing works completely fine and getr.me:8080 gives me my website as expected.
Now the tricky part (for me):
Set up Nginx such that:
website.getr.me --> serving website
api1.getr.me --> serving the API1
api2.getr.me --> serving the API2
For that I created a network "default_dash_nginx".
I edited the nginx compose file to connect to it via:
networks:
  default:
    name: default_dash_nginx
    external: true
Also, I connected my website-serving container (dashboard) to the network via --network default_dash_nginx.
The website container gets the IP 172.20.0.4 (found via docker inspect default_dash_nginx), and the nginx server is connected to the same network.
Nginx works and I can reach the admin page.
But unfortunately, even though I set the proxy host to the IP + port of my website as reported by the network, the site is not available. Here is the output of my network inspection: https://pastebin.com/jsuPZpqQ
I hope you have another idea.
thanks in advance,
Maxi
Edit:
The nginx container is actually an NginxReverseProxyManager container (I don't know if that was unclear above or simply not important).
The Nginx container can actually ping the website container and also fetch the HTML files from port 80 on it.
So it seems like nginx itself isn't working like it should.
The first answer got no results (I tried saving it as each of the mentioned files here).
Have I missed something, or am I just not smart enough?
Nginx config to try and understand:
server {
    listen 80;
    server_name api1.getr.me;
    location / {
        proxy_pass http://localhost:8081;
    }
}

server {
    listen 80;
    server_name api2.getr.me;
    location / {
        proxy_pass http://localhost:8082;
    }
}

server {
    listen 80;
    server_name some.getr.me;
    location / {
        proxy_pass http://localhost:XXXX;
    }
}
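Note that inside a container, localhost refers to the nginx container itself, not the host, so the proxy_pass targets above will fail there. On a shared Docker network the upstreams are reachable by container name instead. A hedged variant, assuming the website container is named dashboard and an API container is named api1 (both names are assumptions):

server {
    listen 80;
    server_name website.getr.me;
    location / {
        # "dashboard" resolves via Docker's embedded DNS on the
        # shared default_dash_nginx network
        proxy_pass http://dashboard:8080;
    }
}

server {
    listen 80;
    server_name api1.getr.me;
    location / {
        proxy_pass http://api1:8081;   # hypothetical container name
    }
}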
Similar questions appear on this site but I cannot figure this one out. I am running a dockerized config. I can hit my site at benweaver-VirtualBox:3000/dev/test/rm successfully. But I want to be able to hit the site without the port: benweaver-VirtualBox/dev/test/rm.
The port does not seem to be handled in my proxy_redirect. I tried commenting out the default nginx configuration to no effect; because I am running a dockerized config, I thought the default config might not be relevant anyhow. It is true that netstat -tlpn | grep :80 does not find nginx, but my docker-compose config has nginx on port 80 both inside the container and as the exported port. The config:
server {
    listen 80;
    client_max_body_size 200M;

    location /dev/$NGINX_PREFIX/rm {
        proxy_pass http://$PUBLIC_IP:3000/dev/$NGINX_PREFIX/rm;
    }
}
PUBLIC_IP is set to the hostname of the box: benweaver-VirtualBox. This hostname is defined in /etc/hosts:
127.0.0.1 benweaver-VirtualBox
I suspect the problem lies with my hostname.
What aspect of my hostname configuration for benweaver-VirtualBox is preventing a successful proxy_pass from a portless URL to benweaver-VirtualBox (127.0.0.1):3000, where my app is running?
I got things to work. Here are some take-aways:
(1) If you use an address that includes a port, such as my benweaver-VirtualBox:3000/dev/test/rm, you might not be hitting NGINX at all! Your first step is to make certain you are hitting NGINX.
(2) Know how your hostnames are associated with IP addresses in the /etc/hosts file. It is fine to associate two or more hostnames with the same numerical IP address.
(3) Learn about the use of trailing slashes in NGINX location and proxy_pass expressions. There are two "styles" of writing a URL proxy. If the proxy_pass URL ends with a URI part (such as a trailing slash), the matched location prefix is replaced by that URI, so if you want the location path in the proxied URL you must append it yourself in the proxy_pass line. If the proxy_pass URL has no URI part, the original request URI, including the location path, is appended automatically.
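A short sketch of the two styles (the address and paths are illustrative, borrowed from the question):

# Style 1: proxy_pass has a URI part ("/dev/test/rm/"), so the matched
# location prefix is replaced by it; the path must be written out.
location /dev/test/rm/ {
    proxy_pass http://127.0.0.1:3000/dev/test/rm/;
}

# Style 2: no URI part on proxy_pass, so the full original request
# URI (including /dev/test/rm) is passed through automatically.
location /dev/test/rm {
    proxy_pass http://127.0.0.1:3000;
}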
I have a server that runs 2 Docker containers: a Node.js API container and an NGINX-RTMP container. The server itself also uses NGINX as a reverse proxy to sort traffic between these two containers based on port.
The NGINX-RTMP server accesses the API server via its network alias, like so:
on_publish http://api-server:3000/authorize
Which works great to communicate container-to-container. I can also go the other way by using URLs like
http://nginx-server:8080/some-endpoint
Now I have a route on the NGINX server that I would like to restrict to just local traffic (i.e. only the API server should be able to hit this location). Normally I can do this with a simple
# nginx conf file
location /restricted {
    allow 127.0.0.1;
    deny all;
}
What I would like to do is something like this:
# nginx conf file
location /restricted {
    allow api-server;
    deny all;
}
But I need to use the actual IP of the container. I can get the IP of a container by inspecting it, and I see the IP is 172.17.0.1. However, when I look at other instances of this server I see some are 172.18.0.1 and 17.14.0.2, so it's not 100% consistent across servers. I could just write out all 256 variations of 172.*.0.0/24, but I imagine there must be a 'proper' way to wildcard this in nginx, or an even better way of specifying the container IP in my NGINX conf file. The only information I have found so far is to modify the type of network I'm using for my containers, but I don't want to do that.
How do I properly handle this?
# nginx conf file
location /restricted {
    allow 172.*.0.0/24;   # wildcards like this are not valid nginx syntax
    deny all;
}
I might have solved this one on my own actually.
Originally I thought I could allow the whole 172.0.0.0/8 block to cover all the IPs I thought possible for the local network, but this is wrong.
After reading this article: https://www.arin.net/reference/research/statistics/address_filters/ (archive mirror)
According to standards set forth in the Internet Engineering Task Force (IETF) document RFC 1918, the following IPv4 address ranges are reserved by the IANA for private internets:
10.0.0.0/8 IP addresses: 10.0.0.0 – 10.255.255.255
172.16.0.0/12 IP addresses: 172.16.0.0 – 172.31.255.255
192.168.0.0/16 IP addresses: 192.168.0.0 – 192.168.255.255
Notice that the 172 net is a /12 and not a /8: the prefix fixes only the first 12 bits, so the second octet runs from 16 (0001 0000) to 31 (0001 1111).
Which is explained as
In August 2012, ARIN began allocating “172” address space to internet service, wireless, and content providers.
So I believe the correct method is:
# nginx conf file
location /restricted {
    allow 172.16.0.0/12;
    deny all;
}
We use Docker Swarm with service discovery for a backend REST application. The services in the swarm are configured with endpoint_mode: vip and run in global mode. Nginx proxies to them via their service discovery aliases. When we update the backend services, nginx sometimes throws a 502 because service discovery may still point to the service being updated.
In such cases we want to retry the same endpoint. How can we achieve this?
According to this, we added an upstream with the host's private IP and used proxy_next_upstream error timeout http_502;, but the problem persists.
nginx.conf
upstream servers {
    server 192.168.1.2:443;        # private IP of host machine
    server 192.168.1.2:443 backup;
}

server {
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;

    proxy_next_upstream http_502;

    location /endpoint1 {
        proxy_pass http://docker.service1:8080/endpoint1;
    }
    location /endpoint2 {
        proxy_pass http://docker.service2:8080/endpoint2;
    }
    location /endpoint3 {
        proxy_pass http://docker.service3:8080/endpoint3;
    }
}
Here, if http://docker.service1:8080/endpoint1 throws a 502, we want to hit http://docker.service1:8080/endpoint1 again.
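Note that proxy_next_upstream only retries across the servers of the upstream group that proxy_pass actually references; as written, the servers block above is never used. A hedged sketch of wiring one endpoint through an upstream so a failed attempt is retried (the upstream name is made up; the duplicated server entry is the retry target):

upstream service1_retry {
    # listing the same target twice lets proxy_next_upstream
    # retry the identical endpoint after a 502
    server docker.service1:8080;
    server docker.service1:8080 backup;
}

location /endpoint1 {
    proxy_next_upstream error timeout http_502;
    proxy_pass http://service1_retry/endpoint1;
}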
Additional queries:
Is there any way in Docker Swarm to stop service discovery from pointing to a service that is being updated until that service is fully up?
Is the upstream necessary here, since we directly use Docker service discovery?
I suggest you add a health check directly at the container level (here).
By doing so, Docker periodically pings an endpoint you specify; if a container is found unhealthy, Docker will 1) stop routing traffic to it and 2) kill the container and start a new one. Your upstream will therefore resolve to one of the healthy containers. No need to retry.
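A minimal sketch of such a health check in a compose file (the image name and the /health endpoint are assumptions, and curl must exist in the image; your API needs to expose something equivalent):

services:
  service1:
    image: my-api:latest   # hypothetical image
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 15s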
As for your additional questions: for the first, Docker won't start routing to a task until it is healthy. For the second, nginx is still useful to distribute traffic according to the endpoint URL. But personally I think nginx + Swarm VIP mode is not a great choice: the Swarm load balancer is poorly documented, it doesn't support sticky sessions, and you can't have proxy-level health checks. I would use Traefik instead; it has its own load balancer.
I am trying to set up a Docker Nginx Proxy server to forward incoming requests to their corresponding Docker Container on 192.168.1.120 or to the Router's Web-Admin at 192.168.1.1
So right now I am in a bit of a pickle, but I need to set this up regardless. Here is my current setup:
Router 192.168.1.1 (Web Admin + Port Forwarding)
Server1 LAMP - (Router Forwards -> port 80 for LAMP Server)
Server2 Docker - (Router Forwards -> 20 SSH, 8080, 9000 Docker Admin)
I have to configure port forwarding through my Router's web interface, which is accessible on port 8080. The issue is that I recently moved to Florida, and I had stupidly added a port-forwarding rule on 8080 that forwards to the Shipyard Docker manager, where I eventually planned to install an Nginx-Proxy forwarding Docker container. I never got that forwarding container working, and I eventually switched to Portainer on port 9000, which only worked because it was the only other port I had forwarded before I lost access to my Router's web interface, and with it the ability to forward new ports.
The downside is that I cannot access my Router's web interface. The upside is that I still have to implement an Nginx-Proxy port-forwarding container anyway, to set up dynamic port-80 forwarding to different Docker containers based on the URL.
So I want to move my LAMP server into a new Docker container, and then I will also have a few other Rails Docker containers, but I need to configure a Docker container to forward requests to different servers based on the port. I assume I need to have 2 such containers running, one for port-80 forwarding and one for port-8080 forwarding; this is not a problem.
I have not been able to configure Nginx to forward an incoming request for the domain name I have pointed at my server (my.domain.com below) on to my router at 192.168.1.1. Any help or suggestions on how to configure my Nginx-Proxy Docker container to forward this correctly, or on what I should set up to forward incoming requests to a web server dynamically based on the URL, would be appreciated. I can install any Docker containers I need for this.
My current config, /etc/nginx/nginx.conf, running on an nginx-proxy Docker container on port 8080 (Google to find the Docker image for nginx-proxy):
# My Nginx Config to forward my.domain.com
http {
    resolver 127.0.0.1;
    access_log /var/logs/nginx/access.log;

    server {
        listen 8080;
        server_name my.domain.com;
        return 301 http://192.168.1.1:8080/$request_uri;
    }
}
I get these errors:
[error] 55#55: *2274 datacenter.URL.com could not be resolved (110: Operation timed out), client: 166.172.189.185, server: datacenter.URL.com, request: "GET / HTTP/1.1", host: "datacenter.URL.com:8080"
[error] 55#55: recv() failed (111: Connection refused) while resolving, resolver: 192.168.1.1:8080
EDIT: I just noticed that I can only have one Docker container listening on each port at a time. So I need to figure out how to forward requests to different servers + ports based on the domain name. Each URL forwarding rule entry needs to be able to go to a different server, each running on a different port.
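A hedged sketch of what that kind of name-based forwarding could look like: one server block per hostname, each proxying to a different backend host:port (the subdomains are hypothetical; the backend addresses come from the question). Using proxy_pass with literal IPs also sidesteps the resolver errors above, since nothing has to be resolved at request time, and unlike return 301 it keeps the client on my.domain.com:

events {}

http {
    access_log /var/log/nginx/access.log;

    server {
        listen 8080;
        server_name router.my.domain.com;   # hypothetical subdomain
        location / {
            proxy_pass http://192.168.1.1:8080;   # router web admin
            proxy_set_header Host $host;
        }
    }

    server {
        listen 8080;
        server_name docker.my.domain.com;   # hypothetical subdomain
        location / {
            proxy_pass http://192.168.1.120:9000;  # Portainer on the Docker host
            proxy_set_header Host $host;
        }
    }
}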