AWS Service discovery, nginx and node issue - docker

I am running two services in AWS ECS Fargate. One runs nginx containers behind an Application Load Balancer; the other runs a Node.js application. The Node application is registered with service discovery, and the nginx containers proxy to the service discovery endpoint of the Node application containers.
My issue is: after scaling the Node application containers from 1 to 2, nginx is unable to send requests to the newly spawned container. It only sends requests to the old container. After a restart/redeploy of the nginx containers, requests reach the new containers.
I tried a DNS TTL of 0 for the service discovery endpoint, but I am facing the same issue.

Nginx does not re-resolve DNS at runtime if your server is specified as part of an upstream group, or in certain other situations; see this SF post for more details. This means that Nginx never becomes aware of new containers being registered for service discovery.
You haven't posted your Nginx config, so it's hard to say what you can do there. For proxy_pass directives, some people suggest using variables to force runtime resolution.
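A minimal sketch of that idea (the resolver address, the service discovery name, and the port are assumptions; substitute your VPC DNS and Cloud Map name):

server {
    listen 80;

    # Re-resolve the name at runtime instead of only once at startup.
    # 169.254.169.253 is the Amazon-provided VPC DNS; valid= caps nginx's own cache.
    resolver 169.254.169.253 valid=10s;

    location / {
        # Using a variable forces nginx to resolve the name per request.
        set $backend "node-app.local";    # hypothetical service discovery name
        proxy_pass http://$backend:3000;  # hypothetical application port
    }
}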
Another idea might be to expose an HTTP endpoint from the Nginx container that listens for connections and reloads the Nginx config. This endpoint could then be triggered by a Lambda whenever new containers are registered (the Lambda itself being triggered by CloudWatch Events). Disclaimer: I haven't tried this in practice, but it might work.

Related

Routing all net traffic from a k8s container through another in the same pod

I'm using GKE for deployments.
Edit: I need to access a customer's API endpoint which is only accessible when using their VPN. So far I can run a container which connects to this VPN and I can cURL the endpoint successfully.
For the above, I have configured a Debian docker image which successfully connects to the VPN (specifically, Kerio Control VPN) when deployed. Whenever I make a network request from this container, it goes through the VPN connection, as expected.
I have another image which runs a .NET Core program that makes the necessary HTTP requests.
From this guide I know it is possible to route a container's traffic through another using pure docker, specifically using the --net=container:something option (example trimmed):
docker run \
  --name=jackett \
  --net=container:vpncontainer \
  linuxserver/jackett
However, I have to use Kubernetes for this deployment so I think it would be good to use a 2-container pod. I want to keep the VPN connection logic and the program separated.
How can I achieve this?
Containers in a pod share network resources. If you run the VPN client in one container, then all containers in the pod will have network access via the VPN.
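A minimal sketch of that two-container pod (the image names are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-vpn
spec:
  containers:
  - name: vpn
    image: my-kerio-vpn-client:latest   # hypothetical VPN client image
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]              # VPN clients typically need this to manage routes
  - name: app
    image: my-dotnet-app:latest         # hypothetical .NET Core app image
# Both containers share the pod's network namespace, so the app's outbound
# traffic follows whatever routes the vpn container establishes.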
Based on your comment, I can suggest two methods.
Private GKE cluster with Cloud NAT
In this setup, you would use a private GKE cluster with Cloud NAT for external communication, and you would need to use a manually assigned external IP.
This scenario uses a specific external IP for the VPN connection, but your customer would have to whitelist access for that IP.
Site-to-site VPN using Cloud VPN
You can configure your VPN to forward packets to your cluster. For details you should check other Stack Overflow threads:
Google Container Engine and VPN
Why can't I access my Kubernetes service via its IP?
I'm using a similar approach. I have a Django app whose static files need to be served by nginx, and I want the app to be accessible through a VPN, for which I'm using OpenVPN.
Both the nginx container and the Django container are in the same pod. My limited understanding is that it should be enough to run the VPN in the background in the nginx container, and requests should route to the backend via localhost because they're in the same pod.
But this doesn't seem to be working. I get a 504 Gateway Time-out in the browser, and the nginx logs confirm that the upstream timed out. Have you done anything extra to make this work in your case?

how to deal with changing IPs of docker compose containers?

I've set up an app where a Node.js backend has to communicate with a rasa chatbot backend through a React frontend. All services run through the same docker-compose. Being a docker beginner, there are some things I'm not sure about:
communication between host and container is done using the container's IP:
- the browser opens the local React server on localhost:3000 or 172.22.0.1:3000
- the browser sends requests to the express backend on localhost:4000 or 172.22.0.2:4000
however, communication between two docker containers is done using the container's name:
- the rasa server communicates with the rasa custom action server through http://action_server:5055/webhooks
- the rasa custom action server communicates with the express backend through http://backend_name:4000/users/
My problem is that when I need to contact the rasa backend from my React frontend, I have to use the rasa docker container's IP, which (sometimes) changes when docker-compose is reinitialized. To work around this I run docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' app_rasa_1 to get the IP and manually paste it into the React frontend.
Is there a way to avoid changing the IP altogether by using the container name (or an alias/link)? Otherwise, what would be a way to automate updating the container's IP in the React frontend (are environment variables updated via a script an option?)
Completely ignore the container-private IP addresses. They're implementation details that have several practical problems, including (as you note) them changing when a container is recreated. (They're also unreachable on non-Linux hosts, or if the browser isn't on the same host as the containers.)
You show the correct patterns in your question. For calls between containers, use the container names as host names (this setup is described more in Networking in Compose). For calls from outside containers, including from browser-based applications, use the host's DNS name or IP address and the first number from the ports: you publish.
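As a sketch in docker-compose terms (service names and ports assumed from the question; 5005 is rasa's usual HTTP port):

services:
  backend_name:
    build: ./backend
    ports:
      - "4000:4000"   # the browser calls localhost:4000
  rasa:
    image: rasa/rasa
    ports:
      - "5005:5005"   # publish so the browser can call localhost:5005 instead of a container IP
  action_server:
    image: rasa/rasa-sdk
    # no published port needed; rasa reaches it as http://action_server:5055

Here the React code would call http://localhost:5005 (or the host's DNS name), never 172.22.x.x.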
If the browser application needs to contact a back-end server, it needs a path to do this. This could be via published ports:, or one of your other components could proxy the request to the service (maybe using the express-http-proxy middleware).
A dedicated container that only proxies to other backend services is also a useful pattern (Docker Nginx Proxy: how to route traffic to different container using path and not hostname includes some examples), particularly since this will let you use path-only URLs like /api or /rasa in your browser application. If the React application is served from http://localhost:8080/, and the main backend is http://localhost:8080/api, then you can just make HTTP requests to /api and they will be interpreted relative to the page's URL. This avoids the hostname problem completely, so long as your reverse proxy has path-based routes to every container you need to directly contact.
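A minimal nginx sketch of such a dedicated proxy (container names and ports assumed from the question):

server {
    listen 8080;

    location / {
        proxy_pass http://frontend:3000;       # hypothetical React container name
    }
    location /api/ {
        proxy_pass http://backend_name:4000/;  # Express backend
    }
    location /rasa/ {
        proxy_pass http://rasa:5005/;          # rasa server; 5005 is its usual default
    }
}

With the trailing slashes on proxy_pass, a request to /api/users/ is forwarded to the backend as /users/.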

Nginx: Proxy to another nginx instance: Gateway time-out

I have two nginx instances that I will call application and main. I have many reasons to do this.
The main instance running in front is reachable. If I replace its proxy_pass directive with return 200;, I get a 200 response. However, whether I return 200; in application, or I try to make it do what it's actually supposed to do (proxy again to a third app), it never receives the request.
The request eventually fails with 504 Gateway Time-out.
There's nothing out of the ordinary in any logs. The docker logs show the exact same output as when it was working just a few days ago, except there is no output for when I made the request, because the request is never received/registered. It basically stops at "startup complete; waiting for requests". So the app never receives the request in the first place when going through the reverse proxy. There's nothing at all in the nginx logs in either of the nginx containers.
Here's a minimal, reproducible example: https://github.com/Audiopolis/MultiServerRepro
I am not able to get the above example working. It briefly worked on Windows 10 (but not on Ubuntu 20.04), but it isn't working any more. Can you see anything wrong with the configuration in the example?
The end goal is to easily add several applications on non-standard ports to the same server, proxied to by one "main" instance of nginx which selects the appropriate app based on the host name. Note that each application must also be capable of running on its own with its own nginx instance.
Thanks.
Your application and main nginx instances are running on different docker networks. main maps ports 80:80 and application maps ports 10000:80. The mapping is to the host machine, so main can't proxy_pass to application directly.
There are a lot of ways to solve it, but I will suggest two:
Run main with network: host and remove the port mapping from main (since it's no longer needed). NOTE: host networking works only on Linux machines (not Windows).
Run both nginx servers on the same docker network; then main can proxy_pass to application using the docker network address, which also resolves via the container name (e.g. proxy_pass http://application_container_name:80;). A compose sketch of this option follows.
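A minimal sketch of the second option (image and service names assumed):

services:
  main:
    image: nginx
    ports:
      - "80:80"
    # main's config can now use: proxy_pass http://application:80;
  application:
    image: nginx
    ports:
      - "10000:80"   # optional: keep application reachable from the host directly
# docker-compose puts both services on the same default network, so the
# hostname "application" resolves from inside the main container.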

Running an nginx container in a kubernetes pod which has a python app running with gunicorn

I have a container which runs a chatbot in Python, exposing port 5000 where the bot runs. Now when I deploy this container on Kubernetes, I have a few questions:
- Do I need to run an nginx container in the pod where my app container is running? If yes, why, given that Kubernetes does load balancing?
- If I run the nginx container on port 80, do I need to run my app container on 80 as well, or can I use a different port like 5000?
- What role does gunicorn play here?
I am a little confused because in most of the examples I see online, people run an nginx container in their pods alongside the app containers.
As you mentioned, Kubernetes can handle its own load balancing, so the answer to your first question is no, you don't need to run nginx, especially not in the pod where your application is.
Typically, services and pods have IPs that are only routable by the cluster network, and traffic that ends up at an edge router is dropped or forwarded elsewhere. So in Kubernetes there is a collection of rules that allows inbound connections to reach cluster services, which we call Ingress:
An API object that manages external access to the services in a cluster, typically HTTP.
The confusing part is that Ingress on its own does not do much. You also have to run an Ingress controller, which is a daemon deployed as a Pod; its job is to read the Ingress resource information and process it accordingly. In fact, any system capable of reverse proxying can be an ingress controller. You can read more about Ingress and Ingress controllers in a practical approach in this article. Also, since I don't know your environment, remember that you should use type LoadBalancer if you are in the cloud and type NodePort in a bare-metal environment.
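A minimal Ingress sketch (the host, Service name, and port are assumptions):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: chatbot-ingress
spec:
  rules:
  - host: chatbot.example.com        # hypothetical external host name
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: chatbot-service    # hypothetical Service in front of the bot pods
            port:
              number: 5000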
As for your second question: you can run your application on any port you want; just remember to adjust that port in all the other configuration files. In particular, a Service can expose a different port than the one your container listens on.
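For example, a sketch of a Service that exposes port 80 while the pods listen on 5000 (names assumed):

apiVersion: v1
kind: Service
metadata:
  name: chatbot-service
spec:
  selector:
    app: chatbot        # hypothetical pod label
  ports:
  - port: 80            # port the rest of the cluster uses
    targetPort: 5000    # port the bot container actually listens on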
About ports and how to expose services, you should check the documentation on how the Kubernetes model works in comparison to the container model. You can find an instructive article here.
Unfortunately I do not have experience with gunicorn, so I won't be able to tell you what role it plays here. Hope this helps.

Docker 1.12 Port Forwarding Services Across Nodes

So I've got a Plex server running on my Docker swarm!! If I kill a node magically it'll start Plex somewhere else. This is great! Now comes the fun part...
With old-school containers I would just forward port 32400 on my router to the server that was running Plex, and it would work fine. Now that Plex can run in multiple different places, I need to figure out how to forward the port to some static resource. I could use HAProxy to bind some bridge interface and run it on every node to provide failover... but I'd like to see if there's an easier way to accomplish this.
What's the best way to forward ports to services in Docker Swarm?
Port forwarding is built into the new swarm mode. There's a section on load balancing in the documentation:
The swarm manager uses ingress load balancing to expose the services you want to make available externally to the swarm. The swarm manager can automatically assign the service a PublishedPort or you can configure a PublishedPort for the service in the 30000-32767 range. External components, such as cloud load balancers, can access the service on the PublishedPort of any node in the cluster whether or not the node is currently running the task for the service. All nodes in the swarm cluster route ingress connections to a running task instance.
Swarm mode has an internal DNS component that automatically assigns each service in the swarm a DNS entry. The swarm manager uses internal load balancing to distribute requests among services within the cluster based upon the DNS name of the service.
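In practice, that means publishing the port when you create the service; a sketch (service name and image assumed):

docker service create \
  --name plex \
  --publish 32400:32400 \
  plexinc/pms-docker

The routing mesh then answers on port 32400 on every node, so your router can forward to any node's IP and the connection will reach whichever node is running the task.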
Update
The following article discusses how to integrate a proxy load balancer with Docker swarm mode:
https://technologyconversations.com/2016/08/01/integrating-proxy-with-docker-swarm-tour-around-docker-1-12-series/
