I have two nginx instances, which I will call application and main; there are several reasons for this setup.
The main instance running in front is reachable: if I replace its proxy_pass directive with return 200;, I get a 200 response. However, whether application simply does return 200; or does what it's actually supposed to do (proxy again to a third app), it never receives the request.
The request eventually fails with 504 Gateway Time-out.
There's nothing out of the ordinary in any of the logs. The docker logs show the exact same output as when this was working a few days ago, except that there is no output for my request, because the request is never received/registered; it basically stops at "startup complete; waiting for requests". So the app never receives the request in the first place when it goes through the reverse proxy, and there is nothing at all in the nginx logs in either of the nginx containers.
Here's a minimal, reproducible example: https://github.com/Audiopolis/MultiServerRepro
I am not able to get the example above working. It briefly and seemingly at random worked on Windows 10 (but not on Ubuntu 20.04), but it is not working any more. Can you see anything wrong with the configuration in the example?
The end goal is to make it easy to add several applications to the same server on non-standard ports, proxied to by one "main" nginx instance that selects the appropriate app based on the host name. Note that each application must also be able to run on its own with just its own nginx instance.
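To illustrate the goal, here is a rough sketch of the host-based routing I have in mind for main (the server names and container names are placeholders, not the ones from the linked repository):

# main nginx: pick the application by host name and proxy to that
# application's own nginx container.
server {
    listen 80;
    server_name app1.example.com;

    location / {
        proxy_pass http://application1:80;   # the app's own nginx container
        proxy_set_header Host $host;
    }
}

server {
    listen 80;
    server_name app2.example.com;

    location / {
        proxy_pass http://application2:80;
        proxy_set_header Host $host;
    }
}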
Thanks.
Your application and main nginx instances are running in different Docker networks. main maps ports 80:80 and application maps ports 10000:80. These mappings go to the host machine, so main can't proxy_pass to application through them.
There are many ways to solve this, but I will suggest two:
Run main with host networking (network_mode: host) and remove the port mapping from main (it is no longer needed). Note: host networking only works on Linux machines (not Windows).
Run both nginx servers on the same Docker network; main can then proxy_pass to application using its address on that network, which Docker's DNS also resolves from the container name (e.g. proxy_pass http://application_container_name:80;). A compose sketch of this option follows below.
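For the second option, a minimal compose sketch (the service and network names are examples, not taken from your repository) could look something like this:

# docker-compose.yml for main: joins a shared, externally created network
services:
  main:
    image: nginx
    ports:
      - "80:80"
    networks:
      - shared
networks:
  shared:
    external: true   # created once with: docker network create shared

# docker-compose.yml for application: same shared network; the 10000:80
# mapping is only needed for reaching it directly from the host
services:
  application:
    image: nginx
    ports:
      - "10000:80"
    networks:
      - shared
networks:
  shared:
    external: true

With both containers attached to the shared network, main can use proxy_pass http://application:80; because the service name is resolvable through Docker's embedded DNS.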
I have a docker container running haproxy inside, and while it seems to work well, I'm running into a problem where the client IP that reaches the frontend, and shows up on the haproxy logs, is always the same, just with a different port. This IP seems to be the same as the IPV4 IPAM Gateway of the network where the container is running inside (192.168.xx.xx).
The problem with this is that, since every request that reaches the proxy has the same client IP no matter which machine or network it came from, it's easy for someone with bad intentions to trigger the security restrictions: that IP gets banned and no request gets through until the proxy is reset, because every request appears to come from the same banned IP.
This is my current haproxy config. (I tried to reduce it to the bare minimum, without the restriction rules, timeouts, etc., for ease of understanding; I'm testing with this setup and the problem is still present.)
global
    log stdout format raw local0 info

defaults
    mode http
    log global
    option httplog
    option forwardfor

frontend fe
    bind :80
    default_backend be

backend be
    server foo_server $foo_server_IP_and_Port

backend be_abuse_table
    stick-table type ip size 1m expire 15m store conn_rate(3s),conn_cur,gpc0,http_req_rate(15s),http_err_rate(20s)
I have tried setting and adding headers, and I've also tried running the container on the host network, but then the request does not reach the backend server because it's in a different network. Furthermore, I would like to keep the container in the network where it currently is, alongside the other containers.
Also, does the backend server configuration influence this problem in any way? My understanding is that, since the problem is already present when the request reaches the frontend, the backend configuration doesn't matter here.
Any suggestions? This has been driving me crazy for 2 days now. Thank you so much!
Turns out the problem was that I was using Docker in rootless mode. You can either use normal Docker, or stay rootless, install the slirp4netns package, and change the port forwarder by following these steps (the section about changing the port forwarder): https://rootlesscontaine.rs/getting-started/docker/#changing-the-port-forwarder
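For reference, the port forwarder change on rootless Docker is roughly the following; the exact, current steps are in the linked guide, so treat this as a sketch:

# install slirp4netns, then add an override for the rootless docker service
systemctl --user edit docker

# contents of the override, telling RootlessKit to use the slirp4netns
# port driver so published ports keep the real client source IP:
#   [Service]
#   Environment="DOCKERD_ROOTLESS_ROOTLESSKIT_PORT_DRIVER=slirp4netns"

systemctl --user restart docker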
I've set up an app where a Node.js backend has to communicate with a Rasa chatbot backend through a React frontend. All services run through the same docker-compose. Being a Docker beginner, there are some things I'm not sure about:
communication between the host and a container is done using the container's IP:
browser opening the local React server running on localhost:3000 or 172.22.0.1:3000
browser sending a request to the express backend on localhost:4000 or 172.22.0.2:4000
however, communication between two Docker containers is done using the container's name:
rasa server communicating with the rasa custom action server through http://action_server:5055/webhooks
rasa custom action server communicating with the express backend through http://backend_name:4000/users/
My problem is that when I need to contact the Rasa backend from my React frontend, I have to use the Rasa docker container's IP, which (sometimes) changes when docker-compose is re-initialized. To work around this I run docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' app_rasa_1 to get the IP and manually put it into the React frontend.
Is there a way to avoid changing the IP altogether and use the container name (or an alias/link) instead? Or what would be a good way to automate updating the container's IP in the React frontend (are environment variables updated via a script an option)?
Completely ignore the container-private IP addresses. They're implementation details that have several practical problems, including (as you note) them changing when a container is recreated. (They're also unreachable on non-Linux hosts, or if the browser isn't on the same host as the containers.)
You show the correct patterns in your question. For calls between containers, use the container names as host names (this setup is described in more detail in Networking in Compose). For calls from outside containers, including from browser-based applications, use the host's DNS name or IP address and the first number from the ports: you publish.
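To make those two rules concrete with the services from your question (a sketch; the build contexts and published ports are assumptions):

services:
  backend:              # express backend
    build: ./backend
    ports:
      - "4000:4000"     # browser (outside Docker): http://localhost:4000/users/
  action_server:        # rasa custom action server
    build: ./actions    # other containers: http://action_server:5055/webhooks
  rasa:
    build: ./rasa
    ports:
      - "5005:5005"     # only needed if something outside Docker calls it directly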
If the browser application needs to contact a back-end server, it needs a path to do this. This could be via published ports:, or one of your other components could proxy the request to the service (maybe using the express-http-proxy middleware).
A dedicated container that only proxies to other backend services is also a useful pattern (Docker Nginx Proxy: how to route traffic to different container using path and not hostname includes some examples), particularly since this will let you use path-only URLs like /api or /rasa in your browser application. If the React application is served from http://localhost:8080/, and the main backend is http://localhost:8080/api, then you can just make HTTP requests to /api and they will be interpreted relative to the page's URL. This avoids the hostname problem completely, so long as your reverse proxy has path-based routes to every container you need to directly contact.
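A sketch of that last pattern (the container names and ports mirror the ones in your question where possible; the proxy's published port and the Rasa port are assumptions):

# nginx.conf fragment inside a dedicated proxy container published as 8080:80
server {
    listen 80;

    location / {
        proxy_pass http://frontend:3000;     # React app as the default route
    }

    # path-only URLs the browser can request relative to the page
    location /api/ {
        proxy_pass http://backend_name:4000/;
    }

    location /rasa/ {
        proxy_pass http://rasa:5005/;
    }
}

The browser then only ever talks to http://localhost:8080/, /api/..., and /rasa/..., and the container names are resolved by nginx inside the Compose network.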
I've successfully deployed dockerized versions of Camunda Operate, Zeebe broker, and Elasticsearch. I can access the web UI for Camunda Operate at localhost:8080. I need to put these behind an existing nginx reverse proxy. My first attempt was to do:
location /operate/ {
    proxy_pass http://operate:8080/;
}
Operate is the name of the docker container, and the nginx container is running within the same network. Navigating to https://localhost/operate shows me a blank page with only the Camunda Operate title. What seems to be happening is that the initial /operate call is proxied successfully, but subsequent calls to static resources go to https://localhost/static/some-script.js instead of https://localhost/operate/static/some-script.js.
I thought specifying /operate/ instead of /operate in the nginx location would do the trick, but no luck. Any clarification would be appreciated!
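For reference, this is how I understand the prefix handling in that block (a sketch of the behaviour I'm describing, not a fix):

location /operate/ {
    # the trailing "/" on proxy_pass makes nginx strip the matched prefix:
    #   https://localhost/operate/            -> http://operate:8080/
    #   https://localhost/operate/static/x.js -> http://operate:8080/static/x.js
    proxy_pass http://operate:8080/;
}

# but the returned HTML references absolute paths like /static/x.js, so the
# browser's follow-up requests never carry the /operate prefix at all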
I am running two services in AWS ECS Fargate. One runs nginx containers behind an application load balancer, and the other runs a Node.js application. The Node application uses service discovery, and the nginx containers proxy to the "service discovery endpoint" of the Node application containers.
My issue is:
After scaling the Node application containers up from 1 to 2, nginx is unable to send requests to the newly spawned container; it only sends requests to the old container. After a restart/redeploy of the nginx containers, they are able to send requests to the new containers.
I tried a DNS TTL of "0" for the service discovery endpoint, but I am facing the same issue.
Nginx does not re-resolve DNS at runtime if your server is specified as part of an upstream group, or in certain other situations; see this SF post for more details. This means that Nginx never becomes aware of new containers being registered for service discovery.
You haven't posted your Nginx config, so it's hard to say exactly what you can do there. For proxy_pass directives, some people suggest using variables to force runtime resolution, roughly as sketched below.
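A rough sketch of that variable trick (the resolver address and the service discovery name are placeholders; 10.0.0.2 is just the usual VPC-base-plus-two DNS address):

server {
    listen 80;

    # re-resolve instead of caching the address resolved at startup
    resolver 10.0.0.2 valid=10s;

    set $node_backend http://nodeapp.internal:3000;

    location / {
        # a variable in proxy_pass forces name resolution at request time
        proxy_pass $node_backend;
    }
}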
Another idea might be to expose an HTTP endpoint from the Nginx container that listens for connections and reloads the Nginx config. This endpoint can then be triggered by a Lambda when new containers are registered (the Lambda is in its turn triggered by CloudWatch events). Disclaimer: I haven't tried this in practice, but it might work.
I use docker-compose stacks to run things on my personal VPS. Right now, I have a stack that's composed of:
Nginx (exposed port 443)
Ghost (blogging)
MySQL (for Ghost)
Adminer (for MySQL, exposed port 8080)
I wanted to try out Matomo analytics software, but I didn't want to add that to my current stack until I was happy with it, so I decided to create a second docker-compose stack for just the Matomo analytics:
Nginx (exposed port 444)
Matomo
MariaDB (for Matomo)
Adminer (for MariaDB, exposed port 8081)
With both stacks running, I can access everything at its appropriate port, but only by IP address. If I try to use my domain, it can only connect to the first Nginx, the one exposing port 443. If I try https://www.example.com:444 in my browser, it isn't able to connect. If I try https://myip:444 in my browser, it connects to the second Nginx instance exposing port 444, warning me that the SSL certificate has issues (since I'm connecting to my IP, not my domain), and then lets me through.
I was wondering if anyone knew why this behavior is happening. I'm admittedly new to setting up VPSs, using Nginx to proxy to other hosted services, etc. If it turns out Nginx cannot be used this way, I'd love to hear recommendations on how else I could arrange this. (Do I have to run only one Nginx instance at a time and proxy through it to everything else? etc.)
Thank you!
I was able to fix this by troubleshooting my Cloudflare setup. I posted this question while waiting for my domain to switch to my VPS's name servers instead of Cloudflare's. When that finished, I tested again and the request did get through at https://example.com:444, which proved it was Cloudflare blocking me.
I found this page, which explained that the free Cloudflare plans only support a few ports, which do not include port 444. If I upgraded to a Pro plan, I would have that option.
Therefore, I can conclude that the solution to my problem is to either upgrade my Cloudflare plan or merge the two docker-compose stacks so that I can accept requests for everything on just port 443.