Docker Nginx: host not found in upstream

I have my Docker app running on an AWS EC2 instance, and I am currently trying to map the app to the external IP address using Nginx. I have two containers running: test-app and nginx-proxy.
My test-app is a fairly simple app that serves a static HTML website, and I deployed it using the following command:
docker run -d --name=test-app test-app
The nginx-proxy has the following proxy.conf
server {
    listen 80;

    location / {
        proxy_pass http://test-app;
    }
}
Here is the Dockerfile for the nginx proxy:
FROM nginx:alpine
RUN rm /etc/nginx/conf.d/*
COPY proxy.conf /etc/nginx/conf.d/
nginx-proxy is run using the following command:
docker run -d -p 80:80 --name=nginx-proxy nginx-proxy
However, the nginx container exits immediately, and here is the error log I get:
2020/03/27 15:55:19 [emerg] 1#1: host not found in upstream "test-app" in /etc/nginx/conf.d/proxy.conf:5
nginx: [emerg] host not found in upstream "test-app" in /etc/nginx/conf.d/proxy.conf:5

Your two containers are both running, and you have properly published the ports required for the NGINX container, but both containers sit on Docker's default bridge network, which offers no name-based discovery, so NGINX has no way of resolving test-app. Publishing test-app's ports to the host with docker run would likely defeat the point of using a reverse proxy in your situation. Instead, create a user-defined network and attach both of your Docker containers to it; they will then be able to communicate with one another over that bridge. For example:
docker network create example
docker run -d --network=example --name=test-app test-app
docker run -d -p 80:80 --network=example --name=nginx-proxy nginx-proxy
Now that both of your containers are on the same user-defined network, Docker's embedded DNS provides service discovery by container name, so each container can resolve the other. You can test connectivity like so: docker exec -it nginx-proxy ping test-app (provided ping is installed in that image).
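If you prefer a declarative setup, the same arrangement can be expressed with Compose. This is a minimal sketch, assuming you have already built the test-app and nginx-proxy images locally; Compose attaches both services to a shared default network, so the service name test-app resolves exactly as proxy.conf expects:
# docker-compose.yml
version: "3"
services:
  test-app:
    image: test-app
  nginx-proxy:
    image: nginx-proxy
    ports:
      - "80:80"
    depends_on:
      - test-app
Bring it up with docker-compose up -d.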

Related

use nginx as reverse proxy for a docker container

I am new to using Docker and nginx together, so I apologize in advance for my simple question, which I have been unable to answer despite going through many resources on YouTube.
I created an Ubuntu server and ran the following command:
sudo apt install nginx
Now I have a very simple Flask application Docker image (publicly available on Docker Hub and not developed by me), and I want to configure nginx to work as a reverse proxy to the container running that image.
My nginx configuration for the reverse proxy is as follows:
server {
    listen 80;

    location / {
        proxy_pass "http://192.168.0.20:8000";
    }
}
192.168.0.20 is my server IP and 8000 is the host port to which I am forwarding my Docker container, like so:
docker container run -p 8000:5000 <image>
But when I open http://192.168.0.20/ it serves the default nginx index.html, whereas I want it to forward to my app container, because when I open http://192.168.0.20:8000/ I get the desired output.
This might sound like a dumb question, but I have been struggling to get the hang of nginx.
Thanks in advance for the help!
To reach the host from inside the container, you can't use 192.168.0.20, since that address isn't known inside the container. Depending on your host OS, you can use 172.17.0.1 (Linux) or host.docker.internal (Windows and macOS). If your OS is Linux, you should change your config to
server {
    listen 80;

    location / {
        proxy_pass "http://172.17.0.1:8000/";
    }
}
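If you want to double-check which gateway address the default bridge uses on your host, you can inspect it (a quick check; assumes the default docker0 bridge network named bridge):
docker network inspect bridge --format '{{(index .IPAM.Config 0).Gateway}}'
This should print 172.17.0.1 on a stock Linux install.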
Rather than installing nginx yourself, you can use the nginx images that are available on Docker Hub. To get your config file into the image, copy it to /etc/nginx/conf.d/default.conf. Create a Dockerfile containing
FROM nginx
COPY nginx.conf /etc/nginx/conf.d/default.conf
Build and run it with
docker build -t mynginx .
docker run -d -p 80:80 mynginx
Now you should be able to go to http://192.168.0.20/ and get a response from the Flask app.
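A quick way to verify from the host (using the IP from the question):
curl -i http://192.168.0.20/
You should see the Flask app's response rather than the default nginx welcome page.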

Manage another container with an nginx container

How do I use nginx in a container to access another container through its config file?
I am a beginner with Docker.
I am trying to learn how to use nginx to manage my applications in Docker containers.
I will use "pgadmin" as an example application in a container.
Create & start containers. I try to use the [link] parameter to connect two containers.
sudo docker create -p 80:80 -p 443:443 --name Nginx nginx
sudo docker create -e PGADMIN_DEFAULT_EMAIL=houzeyu2683#gmail.com -e PGADMIN_DEFAULT_PASSWORD=20121006 -p 5001:80 --link Nginx:PSQLA --name PSQLA dpage/pgadmin4
sudo docker start Nginx
sudo docker start PSQLA
Go into the Nginx container's shell and install the nano editor.
sudo docker exec -it Nginx bash
apt update
apt install nano
Create and set up the nginx config file admin.conf:
nano /etc/nginx/conf.d/admin.conf
admin.conf contains the following:
server {
    listen 80;
    server_name admin.my-domain-name;

    location / {
        proxy_pass http://PSQLA:80;
    }
}
I get this error below:
2020/10/17 01:57:16 [emerg] 333#333: host not found in upstream "PSQLA" in /etc/nginx/conf.d/admin.conf:5
nginx: [emerg] host not found in upstream "PSQLA" in /etc/nginx/conf.d/admin.conf:5
Try the following commands (in the same order) to launch the containers:
sudo docker create -e PGADMIN_DEFAULT_EMAIL=houzeyu2683#gmail.com -e PGADMIN_DEFAULT_PASSWORD=20121006 -p 5001:80 --name PSQLA dpage/pgadmin4
sudo docker create -p 80:80 -p 443:443 --link PSQLA:PSQLA --name Nginx nginx
sudo docker start PSQLA
sudo docker start Nginx
Now edit the Nginx configurations and you should not encounter the error anymore.
TL;DR
As mentioned in the Docker documentation:
When you set up a link, you create a conduit between a source container and a recipient container. The recipient can then access select data about the source.
In order to access PSQLA from the Nginx container, we need to link PSQLA into the Nginx container (Nginx is the recipient, PSQLA is the source), not the other way around.
Now the question is: what difference does that even make?
For this we need to understand how the --link option works in Docker.
Docker adds a host entry for the source container to the recipient's /etc/hosts file.
We can verify this in the /etc/hosts file inside the Nginx container. It contains a new entry something like this (the ID and IP might differ in your case):
172.17.0.4 PSQLA 1117cf1e8a28
This entry lets the Nginx container reach the PSQLA container by its container name.
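You can confirm the entry yourself (a quick check, reusing the container name from above):
sudo docker exec Nginx cat /etc/hosts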
Refer to this for a better understanding:
https://docs.docker.com/network/links/#updating-the-etchosts-file
Important Note
As mentioned in the Docker documentation:
The --link flag is a legacy feature of Docker. It may eventually be removed. Unless you absolutely need to continue using it, we recommend that you use user-defined networks to facilitate communication between two containers instead of using --link.
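For reference, here is a minimal sketch of the user-defined network approach for these same two containers; the network name pgadmin_net is an arbitrary choice, and the rest reuses the commands from above:
sudo docker network create pgadmin_net
sudo docker create -e PGADMIN_DEFAULT_EMAIL=houzeyu2683#gmail.com -e PGADMIN_DEFAULT_PASSWORD=20121006 -p 5001:80 --network pgadmin_net --name PSQLA dpage/pgadmin4
sudo docker create -p 80:80 -p 443:443 --network pgadmin_net --name Nginx nginx
sudo docker start PSQLA Nginx
On a user-defined network, Docker's embedded DNS resolves container names directly, so no --link is needed.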

How to use name of container to resolve to its IP in nginx's upstream server?

I'm running 2 Docker containers on a host. In my first container, I started it this way:
docker run -d --name site-a -p 127.0.0.1:3000:80 nginx
This maps port 80 in the container to the host machine's port 3000. The container also has the name site-a, which I want to use in another container.
Then in my other container, which is the main reverse proxy container, I configured nginx with an upstream pointing to the first container (site-a):
upstream my-site-a {
    server site-a:80;
}
I then run the reverse proxy container this way:
docker run -d --name reverse-proxy -p 80:80 nginx
This way my reverse-proxy container will serve content from the site-a container.
However, there are two problems here:
1. The upstream in my nginx configuration doesn't work when I use server site-a:80;. How can I get nginx to resolve the name "site-a" to the IP of the site-a container?
2. When starting the site-a container, I followed another answer and bound it to the host machine's port 3000 with -p 127.0.0.1:3000:80. Is this necessary?
In order for your containers to be mutually reachable via their name, you need to add them to the same network.
First create a network with this command:
docker network create my-network
Then, when running your containers, add the --network flag like this:
docker run -d --name site-a -p 127.0.0.1:3000:80 --network my-network nginx
Of course you need to do the same thing to both containers.
As per your second question, there's no need to map the port on your host with the -p flag as long as you don't want to reach site-a's container directly from your host.
Of course you still need to use the -p flag on the reverse proxy container in order to make it reachable.
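Putting it together, the reverse proxy would then be started on the same network (a sketch reusing the names from the question):
docker run -d --name reverse-proxy -p 80:80 --network my-network nginx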
If you are combining multiple containers into more complex infrastructure, it's time to move to more capable tooling. Basically you have the choice between docker-compose and docker stack; Kubernetes could also be an option, but it's more complicated.
These technologies provide container discovery and internal name resolution out of the box.
I suggest using docker stack: unlike Compose, it has no additional requirements besides Docker itself.
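As a rough sketch of that route, assuming you have described both services in a compose-format docker-compose.yml (the stack name mystack is arbitrary):
docker swarm init
docker stack deploy -c docker-compose.yml mystack
docker stack deploy requires swarm mode, which is what docker swarm init enables.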

Container should communicate with the host network, but does not

I have a two HTTP servers on my host machine; one listening on 8080, the other listening on 8081. The 8080 is a webapp, and the 8081 is an API.
I also have a Docker container that should connect to the webapp on 8080 using an automated tool, and that webapp should make HTTP requests to the API that's on 8081.
Here is a visual representation of what I want:
Docker container -> webapp on host (port 8080) -> API on host (port 8081)
The problem I'm having is that the Docker container cannot connect to the webapp on the host machine's port 8080. I'm not sure why, because I set the --network=host flag, so shouldn't it be using the host machine's network?
This is my Docker image:
## Redacted irrelevant stuff...
EXPOSE 8080 8081
This is how I run the container:
docker run -d -p 8080:8080 -p 8081:8081 --network=host --name=app app
Any ideas what's wrong with my setup?
So you have two services running directly on the machine and you want to deploy a Docker container that should connect to one of those services.
In that case, you shouldn't map those ports to the container, and you shouldn't expose them in the Dockerfile, since those ports belong to the host services, not the container.
Remove the EXPOSE line from the Dockerfile.
Start the container using docker run -d --network=host --name=app app. The container should be able to access the services using localhost:8080.
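To verify from inside the container (a quick check, assuming curl is installed in the image):
docker exec app curl -s http://localhost:8080
With --network=host, localhost inside the container is the host itself, so this hits the webapp directly.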

Nginx docker container proxy pass to another port

I want to run Nginx in a Docker container; it listens on port 80, and I want it to proxy_pass to port 8080 when the URL path starts with /api. I have a web app listening on port 8080. This worked for me without Docker, but with Docker I couldn't get it to work.
My nginx.conf looks like this:
location / {
    # serve static page
}
location /api {
    proxy_pass http://0.0.0.0:8080;
}
I run my nginx container with docker run -d -p 80:80 -p 8080:8080 nginx
My problem is that now I can no longer run my web app, because nothing else can listen on port 8080 while the nginx container has it mapped.
docker run -d --net host nginx
Try it!
The Nginx container will share the host's network stack, including its IP and all ports.
First, you need to create a network to place both containers:
docker network create nginx_network
Then, you should point nginx at Docker's embedded DNS server in the configuration:
location /api {
    # Docker's embedded DNS
    resolver 127.0.0.11;
    # my_api is the name of the container running your API, see below
    proxy_pass http://my_api:8080;
}
Finally, run your containers:
docker run --network="nginx_network" -d --name my_api your_api_container
docker run --network="nginx_network" -d -p 80:80 nginx
Note:
- the --name parameter's value for the API container must match the domain name in the nginx config
- it's enough to publish only port 80 for your nginx container
- first run your API's container and then nginx's container (see below)
- both containers must be in the same network
This should work.
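To sanity-check name resolution from inside the proxy container, you can do something like the following (fill in your container ID; getent ships with the Debian-based nginx image):
docker exec -it <nginx-container-id> getent hosts my_api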
If you run the nginx container first, nginx will try to resolve the domain name my_api on startup and fail, because a container with that name doesn't exist yet. In this case there is the following workaround (not sure if it is a good solution). Modify the nginx config:
location /api {
    # Docker's embedded DNS
    resolver 127.0.0.11;
    # using a variable prevents nginx from resolving the name at startup
    set $docker_host "my_api";
    # my_api is the name of the container running your API, see below
    proxy_pass http://$docker_host:8080;
}
You can (or rather should) have only one process per Docker container, which means you will have nginx running in one container and your application in another. The old way is to create links between containers like this:
$ docker run --name my-app -d myself/myapp
$ docker run --name proxy --link my-app:my-app -d nginx
This will add a line to /etc/hosts in the nginx container so it will be able to reach the other container by its name.
And then in the nginx.conf file:
location /api {
    proxy_pass http://my-app:8080;
}
However, according to the official Docker docs, this method is deprecated and you should only use it if it's "absolutely needed". Instead you should use Docker networking. In theory, if both containers are on the same user-defined network and Docker's embedded DNS server is working, they should be able to see each other without the --link parameter. Unfortunately it didn't work for me for some reason: nginx didn't have the correct DNS configured in /etc/resolv.conf. But read the docs and play around with it; I'm sure it will work.
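If you hit the same DNS issue, check whether the container is actually using Docker's embedded DNS (a quick check, reusing the proxy container name from above; on a user-defined network, resolv.conf should point at 127.0.0.11):
docker exec proxy cat /etc/resolv.conf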
