How can I connect the Nginx container to my React container? - docker

I have tried reading through the other stackoverflow questions here but I am either missing something or none of them are working for me.
Context
I have two docker containers setup on a DigitalOcean server running Ubuntu.
root_frontend_1 running on ports 0.0.0.0:3000->3000/tcp
root_nginxcustom_1 running on ports 0.0.0.0:80->80/tcp
If I connect to http://127.0.0.1, I get the default Nginx index.html homepage. If I visit http://127.0.0.1:3000, I get my react app.
What I am trying to accomplish is to get my react app when I visit http://127.0.0.1. Following the documentation and suggestions here on StackOverflow, I have the following:
docker-compose.yml in root of my DigitalOcean server.
version: "3"
services:
nginxcustom:
image: nginx
hostname: nginxcustom
ports:
- "80:80"
volumes:
- ./nginx.conf:/root/nginxcustom/conf/custom.conf
tty: true
backend:
build: https://github.com/Twitter-Clone/twitter-clone-api.git
ports:
- "8000:8000"
tty: true
frontend:
build: https://github.com/dougmellon/react-api.git
ports:
- "3000:3000"
stdin_open: true
tty: true
nginxcustom/conf/custom.conf:
server {
    listen 80;
    server_name http://127.0.0.1;

    location / {
        proxy_pass http://root_frontend_1:3000; # this one here
        proxy_redirect off;
    }
}
When I run docker-compose up, it builds, but when I visit the IP of my server it still shows the default nginx html file.
Question
What am I doing wrong here and how can I get it so the main URL points to my react container?
Thank you for your time, and if there is anything I can add for clarity, please don't hesitate to ask.

TL;DR;
The nginx service should proxy_pass to the service name (frontend), not the container name (root_frontend_1), and the nginx config should be mounted to the correct location inside the container.
Tip: the container name can be set for a service in the docker-compose.yml with container_name; however, beware that you cannot --scale services with a fixed container_name.
Tip: the container name (root_frontend_1) is generated from the compose project name, which defaults to the current directory name if not set.
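For example (a minimal sketch; myproject is a hypothetical project name), you can pin the project name so the generated container names stop depending on the directory:
# containers become myproject_frontend_1, myproject_nginxcustom_1, ...
docker-compose -p myproject up -d

# or equivalently via the environment
COMPOSE_PROJECT_NAME=myproject docker-compose up -d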
Tip: the nginx images are packaged with a default /etc/nginx/nginx.conf that will include the default server config from /etc/nginx/conf.d/default.conf. You can docker cp the default configuration files out of a container if you'd like to inspect them or use them as a base for your own configuration:
docker create --name nginx nginx
docker cp nginx:/etc/nginx/conf.d/default.conf default.conf
docker cp nginx:/etc/nginx/nginx.conf nginx.conf
docker container rm nginx
With nginx proxying connections for the frontend service, we don't need to bind the host's port to the container; the service's ports definition can be replaced with an expose definition to prevent direct connections to http://159.89.135.61:3000 (depending on the backend, you might want to prevent direct connections to it as well):
version: "3"
services:
...
frontend:
build: https://github.com/dougmellon/react-api.git
expose:
- "3000"
stdin_open: true
tty: true
Taking it a step further, we can configure an upstream for the frontend service, then configure the proxy_pass for the upstream:
upstream frontend {
    server frontend:3000 max_fails=3;
}

server {
    listen 80;
    server_name 159.89.135.61;

    location / {
        proxy_pass http://frontend/;
    }
}
... then bind-mount the custom default.conf on top of the default.conf inside the container:
version: "3"
services:
nginxcustom:
image: nginx
hostname: nginxcustom
ports:
- "80:80"
volumes:
- ./default.conf:/etc/nginx/conf.d/default.conf
tty: true
... and finally --scale our frontend service (bounce the services removing the containers to make sure changes to the config take effect):
docker-compose stop nginxcustom \
&& docker-compose rm -f \
&& docker-compose up -d --scale frontend=3
Docker will resolve the service name to the IPs of the 3 frontend containers, and nginx will proxy connections to them in a (by default) round-robin manner.
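You can check that resolution from inside the nginx container (a quick sanity check; getent ships in the Debian-based nginx image):
docker-compose exec nginxcustom getent ahosts frontend
# should print one address per frontend container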
Tip: you cannot --scale a service that has port mappings; only a single container can bind to a given host port.
Tip: if you've updated the config and can connect to your load-balanced service, then you're all set to create a DNS record resolving a hostname to your public IP address and update your default.conf's server_name.
Tip: for security, I maintain specs for building an nginx docker image with ModSecurity and ModSecurity-nginx pre-baked with the OWASP Core Rule Set.

In Docker, when multiple services need to communicate with each other, you can use the service name (as set in the docker-compose.yml) in the URL instead of the IP (which is assigned from the network's available pool; the default network by default). It will automatically be resolved to the right container IP thanks to docker's network management.
For you it would be http://frontend:3000
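In the config above, that means something like this (a minimal sketch of the corrected location block):
location / {
    proxy_pass http://frontend:3000;
    proxy_redirect off;
}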

Related

nginx docker does not redirect gogs docker container

I'm new to Docker networking and nginx, but I'm trying to "dockerize" everything on a local dev server. As a test, I have a docker image with nginx which should proxy another container (gogs) from port 3000 to a specific URL on port 80. And I want to keep the reverse proxy configs and the docker images "separated", with a dedicated docker-compose file for each "app".
So I should reach the Gogs installation at http://app.test.local.
BUT: at http://app.test.local I only get a Bad Gateway from nginx, while at http://app.test.local:3000 I do reach the Gogs installation...
I've tried many tutorials, but somewhere an error must slip in every time.
So, what I did:
$ docker network create testserver-network
created
docker-compose for nginx:
version: '3'
services:
  proxy:
    container_name: proxy
    hostname: proxy
    image: nginx
    ports:
      - 80:80
      - 443:443
    volumes:
      - /docker/proxy/config:/etc/nginx
      - /docker/proxy/certs:/etc/ssl/private
    networks:
      - testserver-network

networks:
  testserver-network:
and one for gogs:
version: '3'
services:
  gogs:
    container_name: gogs
    hostname: gogs
    image: gogs/gogs
    ports:
      - 3000:3000
      - "10022:22"
    volumes:
      - /docker/gogs/data:/var/gogs/data
    networks:
      - testserver-network

networks:
  testserver-network:
(mapped directories work)
configured default.conf of nginx:
# upstream gogs {
#     server 0.0.0.0:10880;
# }

server {
    listen 80;
    server_name app.test.local;

    location / {
        proxy_pass http://localhost:3000;
    }
}
and added to the hosts file on the client:
<server ip> app.test.local
docker exec proxy nginx -t and docker exec proxy nginx -s reload say everything is fine...
Answer
You should connect both containers to the same docker network and then proxy to http://gogs:3000 instead. You also shouldn't need to expose port 3000 on your localhost unless you want http://app.test.local:3000 to work. I think ideally you should remove that, so http://app.test.local should proxy to your gogs server, and http://app.test.local:3000 should error out.
Explanation
gogs is exposed on port 3000 inside its container, which is then further exposed on port 3000 on your host machine. The nginx container does not have access to port 3000 on your host, so when it tries to proxy to http://localhost:3000 it is proxying to port 3000 inside the nginx container (which is hosting nothing).
After you have joined the containers to the same network, you should be able to reference the gogs container from the nginx container by its hostname (which you've set to gogs). Now nginx will proxy through the docker network. So you should be able to perform the proxy without needing to expose 3000 on your local machine.
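Put together, a corrected default.conf would look something like this (a sketch, assuming both containers end up on the same shared network):
server {
    listen 80;
    server_name app.test.local;

    location / {
        # resolve the gogs container by its service/host name on the shared network
        proxy_pass http://gogs:3000;
    }
}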

Host multiple web apps on NGINX Docker

I want to host multiple Flask apps on my docker nginx image. I want each app to listen on a different port.
However, I am unable to do so.
nginx.conf
server {
    listen 80;

    location / {
        include uwsgi_params;
        uwsgi_pass flask1:8080;
    }
}

server {
    listen 81;

    location / {
        include uwsgi_params;
        uwsgi_pass flask2:8081;
    }
}
docker-compose.yml
version: "3.7"
services:
flask1:
build: ./flask1
container_name: flask1
restart: always
environment:
- APP_NAME=MyFlaskNginxDockerApp
expose:
- 8080
flask2:
build: ./flask2
container_name: flask2
restart: always
environment:
- APP_NAME=MyFlaskNginxDockerApp
expose:
- 8081
nginx:
build: ./nginx
container_name: nginx
restart: always
ports:
- "8080:80"
- "8081:81"
nginx - Dockerfile
# Use the Nginx image
FROM nginx
# Remove the default nginx.conf
RUN rm /etc/nginx/conf.d/default.conf
# Replace with our own nginx.conf
COPY nginx.conf /etc/nginx/conf.d/
When I build and run this docker-compose, my websites are not available.
I want flask1 to be accessible via localhost:8080 and flask2 to be accessible via localhost:8081.
Can someone please help point out what I did wrong?
You should not be using the service name; instead use host.docker.internal, which resolves requests to the host. Make this change in your nginx.conf.
I would suggest using docker networks instead.
What you've set up right now is that external clients can connect to your flask apps through nginx on :8080 and :8081. Other containers like nginx can connect on flask1:8080 and flask2:8081.
Nginx could also be set up with host network mode to go back out and connect through the published host ports, but that's probably not what you want. In fact, my guess would be that exposing external ports on the flask apps at all is probably not what you want to do in the long term, although it can be helpful for debugging because it gives you a way to bypass the proxy.
Oops, forgot to add: you need to set up Nginx to use docker's internal DNS to resolve the service names to IPs, as mentioned here:
https://stackoverflow.com/a/37656784/9194976
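Applied to the first server block above, that looks roughly like this (a sketch; using a variable in uwsgi_pass makes nginx resolve the name at request time through the configured resolver instead of once at startup):
server {
    listen 80;

    # Docker's embedded DNS server
    resolver 127.0.0.11;

    # variable indirection defers name resolution to request time
    set $flask1_upstream flask1:8080;

    location / {
        include uwsgi_params;
        uwsgi_pass $flask1_upstream;
    }
}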

Nginx reverse proxy can't reach docker container by host name

Nginx reverse proxy can't reach the docker host. Hosting on Amazon (EC2).
I want to load different apps depending on the location.
nginx.conf
server {
    listen 80;
    server_name localhost;

    location /web {
        proxy_pass http://web:4000/;
    }
}
The location works, which means the nginx image was built correctly.
docker-compose file
services:
  web:
    image: web
    container_name: web
    ports:
      - 4000:4000
    hostname: web
    networks:
      - default

  nginx:
    image: nginx
    container_name: nginx
    ports:
      - 80:80
    depends_on:
      - web
    networks:
      - default

networks:
  default:
    external:
      name: my-network
I expect:
- when I open the /web URL, it shows the app from the docker container
I've tried:
- Running a single container - works fine (web or nginx)
- Adding 127.0.0.1 web to /etc/hosts (I can curl web, but it shows the localhost response)
- Adding index index.html in the location section
- Adding a resolver in the location section
- Using links instead of networks
When "docker-compose up" I can inspect docker container (web) and see IP - 192.168.10.2 . Then curl 192.168.10.2 shows me index.html. But I can't make curl http://web:4000 seems that hostname in unreachable, but I think that using IP in proxy_pass is a bad decision.
I wasn't able to solve those issues, so I chose another approach:
Created the ipam network:
docker network create --gateway 172.20.0.1 --subnet 172.20.0.0/24 ipam
Assigned each service an ipv4_address in the docker-compose file:
networks:
  default:
    ipv4_address: 172.20.0.5 # for web

where

networks:
  default:
    external:
      name: ipam
Added chmod for the /var/www/html directory in my web docker image:
chmod -R 755 /var/www/html
(it seems this additional step is required if you build a Linux container under Docker on Windows)
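With the fixed addresses in place, the proxy can point straight at the assigned IP (a sketch based on the original config, using the 172.20.0.5 address pinned to web above):
server {
    listen 80;
    server_name localhost;

    location /web {
        # web is pinned to 172.20.0.5 on the ipam network
        proxy_pass http://172.20.0.5:4000/;
    }
}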

How to NGINX Reverse Proxy outside of Docker to proxy_pass to docker containers

I have an NGINX running on a CentOS server and would like to proxy_pass to docker containers running on the same host.
When using proxy_pass with the IP of the container it works; however, if the machine gets rebooted, the container's IP sometimes changes and I have to manually edit nginx.conf to point to the new IP. I know that NGINX can be set up inside its own docker container and linked to other running containers, but that would take a long time to set up and test.
Is there a way to use the container name, or another identifier that doesn't change, directly in the host's NGINX?
I know that NGINX can be set up inside its own docker container and linked to other running containers, but that would take a long time to set up and test.
Short answer
If you don't want to run nginx in its own container, you can create a docker network with a fixed IP range:
docker network create --driver=bridge --subnet=192.168.100.0/24 nginx.docker
And start your container with a fixed IP
docker run --net nginx.docker --ip 192.168.100.2 ...
See docker network create and Assign static IP to Docker container
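The host's nginx.conf can then target that fixed address (a minimal sketch; 192.168.100.2 matches the run command above, and port 8080 is a placeholder for whatever your container actually listens on):
server {
    listen 80;
    server_name your.server.name;

    location / {
        # fixed container IP on the nginx.docker network
        proxy_pass http://192.168.100.2:8080;
    }
}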
Long answer
But I would still suggest running nginx in a container as well: then docker takes care of the DNS resolution and routing. It's actually quickly done and pretty straightforward. You can either define all services in one docker-compose.yml and make sure they all share the same network, or:
Create a docker network with docker network create nginx.docker
Add the network to the docker-compose.yml files of your services
Adjust your nginx.conf
For example:
nginx
The docker-compose.yml of nginx
services:
  nginx:
    image: nginx:alpine
    container_name: nginx
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - type: bind
        source: ./nginx.conf
        target: /etc/nginx/nginx.conf
    networks:
      - nginx.docker

networks:
  nginx.docker:
    name: nginx.docker
    external: true
Note that "80:80" will bind to all interfaces, use the IP of an interface, e.g. "192.168.0.1:80:80" to bind to one specific interface only.
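For example, only the ports lines change (192.168.0.1 standing in for your interface's address):
    ports:
      - "192.168.0.1:80:80"
      - "192.168.0.1:443:443"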
Your service container
docker-compose.yml
services:
  my_service:
    image: image_name
    container_name: myservice
    networks:
      - nginx.docker

networks:
  nginx.docker:
    name: nginx.docker
    external: true
nginx config
And in the server section of your nginx.conf:
server {
    listen 443 ssl;
    server_name your.server.name;

    # Docker DNS
    resolver 127.0.0.11;

    set $upstream_server http://myservice:8080; # or myservice.nginx.docker

    location / {
        proxy_pass $upstream_server;
        # further proxy config ...
    }
}
Note the resolver 127.0.0.11, explicitly telling nginx to use the docker DNS. I'm not sure if it is still needed, but I had problems before when not using it.
There is no need to create a network; it is possible to use the default one. The default bridge network has its gateway on 172.17.0.1. You can use this IP address in your nginx.conf:
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://172.17.0.1:81;
    }
}
You can check your bridge gateway IP address by running the command docker network inspect bridge.
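Or extract just the gateway with a Go template (a quick sketch using docker's --format flag):
docker network inspect bridge --format '{{range .IPAM.Config}}{{.Gateway}}{{end}}'
# typically prints 172.17.0.1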

Why does my Nginx proxy manage to find my node webserver, when my docker-compose doesn't expose any webserver port on the network?

My node webserver uses express and listens on port 5500.
My docker-compose doesn't expose any port of my node webserver (named webserver), as follows:
version: "3"
services:
webserver:
build: ./server
form:
build: ./ui
ports:
- "6800:80"
networks:
- backend // i let the backend network just for legacy but in fact webserver isn't in this networks
command: [nginx-debug, '-g', 'daemon off;']
networks:
backend:
My Nginx reverse proxy is as follows:
location /request {
    proxy_pass http://webserver:5500/request;
}
Expectation: my request has to fail because of the absence of a shared network between the two services.
Result: the request succeeds.
I can't understand why. Maybe the default network between the containers does the job?
More info: the request fails when the reverse proxy redirects to a bad port, but succeeds if the domain name is wrong and the port is good, as follows:
proxy_pass http://webver:5500/request > succeeds
I can't understand the Nginx/Docker flow here. Could someone please explain what happens here?
More recent versions of Docker Compose create a Docker network automatically. Once that network exists, Docker provides its own DNS system so that containers can reach each other by name or network alias; Compose registers each service under its name in the YAML file, so within this set of containers, webserver and form would both be resolvable host names.
(The corollary to this is that you don't usually need to include a networks: block in the YAML file at all, and there's not much benefit to explicitly specifying a container_name: or manually setting container network settings.)
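You can verify the resolution from inside one of the containers (a quick check; this assumes the image ships getent, as Debian-based images do; swap in ping or nslookup otherwise):
# from the form container, resolve the webserver service by name
docker-compose exec form getent hosts webserver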
