I want to run Nginx in a Docker container. It listens on port 80, and I want it to proxy_pass to port 8080 when the URL starts with /api; I have a web app listening on port 8080. This worked for me without Docker, but with Docker I couldn't get it to work.
My nginx.conf looks like this:
location / {
    # serve static page
}
location /api {
    proxy_pass http://0.0.0.0:8080;
}
I run my nginx container with docker run -d -p 80:80 -p 8080:8080 nginx
My problem is that now I can no longer run my web app, because it can't listen on port 8080: the nginx container is already bound to that port.
docker run -d --net host nginx
Try it!
The Nginx container will then share the host's network stack, with its IP and all ports.
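With host networking there is no port mapping at all (the -p flags are ignored), so the proxy_pass target can stay on loopback. A minimal sketch of the relevant location block, assuming your web app listens on port 8080 on the host:
location /api {
    proxy_pass http://127.0.0.1:8080;
}
Your web app can then bind to port 8080 on the host as before, since the nginx container no longer claims that port.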
First, you need to create a network in which to place both containers:
docker network create nginx_network
Then, you should specify Docker's DNS server in the nginx configuration:
location /api {
    # Docker DNS
    resolver 127.0.0.11;
    # my_api - the name of the container with your API, see below
    proxy_pass http://my_api:8080;
}
Finally, run your containers:
docker run --network="nginx_network" -d --name my_api your_api_container
docker run --network="nginx_network" -d -p 80:80 nginx
Note:
- the --name parameter's value for the API container must match the domain name in the Nginx config
- it's enough to publish only port 80 for your nginx container
- first run the API container, then the Nginx container (see below)
- both containers must be in the same network
This should work.
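To check that name resolution works, you can query Docker's embedded DNS from inside the Nginx container (a quick sanity check, assuming the container names used above; getent is available in the Debian-based official nginx image):
docker exec -it <nginx_container> getent hosts my_api
If this prints an IP address from the nginx_network subnet, proxy_pass will be able to reach the API container.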
If you run the nginx container first, nginx will try to resolve the domain name my_api on startup and fail, because a container with that name doesn't exist yet. In that case there is the following workaround (not sure if it is a good solution): store the host name in a variable, since nginx resolves variables at request time rather than at startup. Modify the nginx config:
location /api {
    # Docker DNS
    resolver 127.0.0.11;
    # hack to prevent nginx from resolving the domain at startup
    set $docker_host "my_api";
    # my_api - the name of the container with your API, see below
    proxy_pass http://$docker_host:8080;
}
You can (or rather should) have only one process per Docker container, which means you will have nginx running in one container and your application in another. The old way is to create links between containers, like this:
$ docker run --name my-app -d myself/myapp
$ docker run --name proxy --link my-app:my-app -d nginx
This will add a line to /etc/hosts in the nginx container, so it will be able to reach the other container by its name.
And then in the nginx.conf file:
location /api {
    proxy_pass http://my-app:8080;
}
However, according to the official Docker docs, this method is deprecated and you should only use it if "absolutely needed". Instead you should use Docker networking. In theory, if both containers are on the same network and the local DNS server (embedded in Docker) is working, they should be able to see each other without the --link parameter. Unfortunately it didn't work for me for some reason: nginx didn't have the correct DNS server configured in /etc/resolv.conf. But read the docs and play around with it, I'm sure it will work.
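For reference, the networking approach would look roughly like this (a sketch, reusing the container names from above):
docker network create my-net
docker run --name my-app --network my-net -d myself/myapp
docker run --name proxy --network my-net -p 80:80 -d nginx
Note that Docker's embedded DNS only serves containers on user-defined networks like my-net; on the default bridge network, name resolution is not available, which may explain the /etc/resolv.conf issue.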
Related
I am new to using Docker and nginx together, so I apologize in advance for my simple question, which I have been unable to answer despite going through many resources on YouTube.
I created an ubuntu server and ran the following command
sudo apt install nginx
Now I have a very simple Flask application Docker image (publicly available on Docker Hub and not developed by me), and I want to configure nginx to work as a reverse proxy to the container running that image.
My code for the reverse proxy in nginx configuration is as follows:
server {
    listen 80;
    location / {
        proxy_pass "http://192.168.0.20:8000";
    }
}
192.168.0.20 is my server IP and 8000 is the host port to which I am forwarding my Docker container, like so:
docker container run -p 8000:5000 <image>
But when I open http://192.168.0.20/ it serves the default nginx index.html, whereas I want it to forward to my app container, because when I open http://192.168.0.20:8000/ I get the desired output.
This might sound like a dumb question, but I have been struggling to get the hang of nginx.
Thanks in advance for the help!
To reach the host from inside the container, you can't use 192.168.0.20, since that address isn't known inside the container. Depending on your host OS, you can use 172.17.0.1 (the default bridge gateway on Linux) or host.docker.internal (Windows and macOS). If your OS is Linux, you should change your config to
server {
    listen 80;
    location / {
        proxy_pass "http://172.17.0.1:8000/";
    }
}
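If you are unsure of the bridge gateway address on your machine, you can look it up rather than hardcoding 172.17.0.1:
docker network inspect bridge --format '{{(index .IPAM.Config 0).Gateway}}'
This prints the gateway of Docker's default bridge network, which is the address of the host as seen from inside containers on that network.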
Rather than installing nginx yourself, you can use the official nginx image available on Docker Hub. To get your config file into it, copy it to /etc/nginx/conf.d/default.conf. Create a Dockerfile containing
FROM nginx
COPY nginx.conf /etc/nginx/conf.d/default.conf
Build and run it with
docker build -t mynginx .
docker run -d -p 80:80 mynginx
Now you should be able to go to http://192.168.0.20/ and get a response from the Flask app.
I have Docker EE running on a host with IP 172.10.100.17. I have installed UCP using the default parameters, and I have also deployed an nginx container with host port 443 mapped to port 443 in the container.
docker run -it --rm --name ucp -v /var/run/docker.sock:/var/run/docker.sock docker/ucp install --host-address 172.10.100.17 --interactive
docker run -it -d --name ngx -p 80:80 -p 443:443 nginx
Can UCP and Nginx co-exist, with both serving at https://172.10.100.17?
What is the best practice for deploying UCP when my primary goal is to have nginx/apache serving on Host IP?
Would it be recommended to set a static IP to nginx container/service?
(Note: HTTPS is enabled on nginx.)
The key is in the -p parameter, which handles port mapping. The first port listed is on the host, and the second is in the container. So -p 80:80 means to map port 80 on the host to port 80 in the container.
Let’s expand this to Nginx. I’m going to assume you want to use HTTPS with both UCP and Nginx. Only one application can listen per port on a host. So, if two containers both expose port 443, you can have one use port 443 on the host (-p 443:443) and the other use a different port (-p 4443:443). Then you’ll access them at ports 443 and 4443 on the host, respectively, even though both containers expose port 443 - Docker is doing the port forwarding.
It may be that you’re asking how to run both containers on a single port using Nginx as a reverse proxy. That’s a possibility as well, though more complex.
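As a sketch of that idea: publish only Nginx on 443, publish UCP on an alternate port, and route by server name. The hostnames and the port 4443 below are placeholders, and the ssl_certificate directives are omitted:
server {
    listen 443 ssl;
    server_name ucp.example.com;
    location / {
        # UCP published on an alternate host port, e.g. -p 4443:443
        proxy_pass https://172.10.100.17:4443;
    }
}
server {
    listen 443 ssl;
    server_name app.example.com;
    location / {
        # your application container, published on its own port
        proxy_pass http://172.10.100.17:8080;
    }
}
You would still need to configure certificates for each server block, and be aware that UCP manages its own TLS, so putting a proxy in front of it needs testing.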
I have my Docker app running in an AWS EC2 instance, and I am currently trying to map the app to the external IP address using nginx. Here is a snapshot of the containers I have running:
My test-app is a fairly simple app that serves a static HTML website. I deployed it using the following command:
docker run -d --name=test-app test-app
The nginx-proxy container has the following proxy.conf:
server {
    listen 80;
    location / {
        proxy_pass http://test-app;
    }
}
Here is the Dockerfile for the nginx proxy:
FROM nginx:alpine
RUN rm /etc/nginx/conf.d/*
COPY proxy.conf /etc/nginx/conf.d/
nginx-proxy is run using the following command:
docker run -d -p 80:80 --name=nginx-proxy nginx-proxy
However, the nginx container never runs, and here is the error log I get:
2020/03/27 15:55:19 [emerg] 1#1: host not found in upstream "test-app" in /etc/nginx/conf.d/proxy.conf:5
nginx: [emerg] host not found in upstream "test-app" in /etc/nginx/conf.d/proxy.conf:5
While your two containers are both running and you have properly published the ports required for the NGINX container, you have not exposed any ports for the test-app container, so the NGINX container has no way of talking to it. Publishing the app's ports directly with docker run would likely defeat the point of using a reverse proxy in your situation. Instead, create a network for both of your Docker containers and add them to it; they will then be able to communicate with one another over a bridge. For example:
docker network create example
docker run -d --network=example --name=test-app test-app
docker run -d -p 80:80 --network=example --name=nginx-proxy nginx-proxy
Now that you have both of your containers on the same network, Docker enables DNS-based service discovery between them by container name, so they will be able to resolve each other. You can test connectivity like so: docker exec -it nginx-proxy ping test-app. Well, that is, provided ping is installed in that Docker container.
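If ping is not installed, the Alpine-based nginx image does ship busybox wget, so a similar check would be:
docker exec -it nginx-proxy wget -qO- http://test-app
which fetches the app's index page over the same bridge network path that Nginx itself uses.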
I have an application running on http://localhost:8000 using a docker image that was made by me.
Now I want to use NGINX as a reverse proxy listening on port 80, forwarding to localhost:8000.
Here is my nginx.conf file:
#user nobody;
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    server {
        listen 80;
        location / {
            proxy_pass http://localhost:8000;
        }
    }
}
Here is my Dockerfile:
FROM nginx
RUN rm /etc/nginx/conf.d/default.conf
COPY index.html /usr/share/nginx/html
COPY nginx.conf /etc/nginx
CMD nginx
To build the image I use the command
docker build --no-cache -t mynginx .
To run it, I use
docker run -p 80:80 -d mynginx
Now, if I test from my local computer with curl localhost:8000 everything works, but if I try with curl localhost, I get a Bad Gateway error.
Moreover, I tried serving static content and it works, but the reverse proxy setup does not.
The reason you are getting a bad gateway is that from within your nginx container, localhost resolves to the container itself and not to the host system.
If you want to access your application container from your reverse proxy container, you need to put both containers into a network.
docker network create my-network
docker network connect --alias application my-network <application container id/name>
docker network connect --alias reverse-proxy my-network <reverse proxy container id/name>
The network name (my-network) can be arbitrary, and the --alias values should be the hostnames you want to resolve to your containers.
In fact, you do not need to provide an alias if you have already assigned a (host)name to your containers (docker run --name ...).
You can then change your proxy_pass directive to the (alias) name of your application container:
proxy_pass http://application:8000;
Also refer to the documentation: Container networking
curl localhost:8000 works => your application is running
curl localhost returns Bad Gateway, which means proxy_pass http://localhost:8000; does not work. This is because "localhost" is relative to its caller, in this case the nginx container, which has nothing running on port 8000.
You need to point it at your application using proxy_pass http://your_app:8000;, where "your_app" is the service name (docker-compose), the container name, or its network alias. Make sure they are in the same network.
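For example, a minimal docker-compose.yml along these lines (a sketch; your_app and the image names are placeholders):
version: "3"
services:
  your_app:
    image: your-app-image
  proxy:
    image: mynginx
    ports:
      - "80:80"
Compose puts both services on a shared default network, so proxy_pass http://your_app:8000; resolves with no extra configuration.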
I'm running 2 Docker containers on a host. In my first container, I started it this way:
docker run -d --name site-a -p 127.0.0.1:3000:80 nginx
This maps port 80 in the container to port 3000 on the host machine. The container also has the name site-a, which I want to use in another container.
Then in my other container, which is the main reverse proxy container, I configured nginx with an upstream pointing to the first container (site-a):
upstream my-site-a {
    server site-a:80;
}
I then run the reverse proxy container this way:
docker run -d --name reverse-proxy -p 80:80 nginx
So that my reverse-proxy container will serve content from the site-a container.
However, there are 2 problems here:
1. The upstream in my nginx configuration doesn't work when I use server site-a:80;. How can I get nginx to resolve the alias "site-a" to the IP of the site-a container?
2. When starting the site-a container, I followed an answer here and bound it to the host machine's port 3000 with -p 127.0.0.1:3000:80. Is this necessary?
In order for your containers to be mutually reachable via their names, you need to add them to the same network.
First create a network with this command:
docker network create my-network
Then, when running your containers, add the --network flag like this:
docker run -d --name site-a -p 127.0.0.1:3000:80 --network my-network nginx
Of course you need to do the same thing to both containers.
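For the reverse proxy container, that would be, for example:
docker run -d --name reverse-proxy -p 80:80 --network my-network nginx
Once both containers are on my-network, Docker's embedded DNS resolves site-a in your upstream block to the container's IP.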
As per your second question, there's no need to map the port on your host with the -p flag as long as you don't want to reach site-a's container directly from your host.
Of course you still need to use the -p flag on the reverse proxy container in order to make it reachable.
If you are combining multiple containers into a more complex infrastructure, it's time to move to more capable tooling. Basically you have the choice between docker-compose and docker stack; Kubernetes could also be an option, but it's more complicated.
These technologies provide solutions for container discovery and internal name resolution.
I suggest using docker stack: unlike Compose, it has no additional requirements besides Docker itself.
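A minimal stack file looks much like a Compose file (a sketch using the service names from the question; the image names are placeholders):
version: "3"
services:
  site-a:
    image: site-a-image
  reverse-proxy:
    image: reverse-proxy-image
    ports:
      - "80:80"
Deploy it with:
docker swarm init
docker stack deploy -c docker-compose.yml mystack
Inside the stack, services resolve each other by service name, so server site-a:80; in the upstream block works without any extra networking flags.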