Host multiple web apps on NGINX Docker

I want to host multiple Flask apps on my Docker nginx image, with each app listening on a different port. However, I am unable to do so.
nginx.conf
server {
    listen 80;

    location / {
        include uwsgi_params;
        uwsgi_pass flask1:8080;
    }
}

server {
    listen 81;

    location / {
        include uwsgi_params;
        uwsgi_pass flask2:8081;
    }
}
docker-compose.yml
version: "3.7"
services:
  flask1:
    build: ./flask1
    container_name: flask1
    restart: always
    environment:
      - APP_NAME=MyFlaskNginxDockerApp
    expose:
      - 8080
  flask2:
    build: ./flask2
    container_name: flask2
    restart: always
    environment:
      - APP_NAME=MyFlaskNginxDockerApp
    expose:
      - 8081
  nginx:
    build: ./nginx
    container_name: nginx
    restart: always
    ports:
      - "8080:80"
      - "8081:81"
nginx - Dockerfile
# Use the Nginx image
FROM nginx
# Remove the default nginx.conf
RUN rm /etc/nginx/conf.d/default.conf
# Replace with our own nginx.conf
COPY nginx.conf /etc/nginx/conf.d/
When I build and run this docker-compose setup, my websites are not available.
I want flask1 to be accessible via localhost:8080 and flask2 to be accessible via localhost:8081.
Can someone please point out what I did wrong?

You should not be using the service name; instead use host.docker.internal, which resolves to the Docker host. Make this change in your nginx.conf.
I would suggest using Docker networks instead, as sketched below.
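A minimal sketch of what "using Docker networks" could look like in the compose file (the network name appnet is my own placeholder, not from the original post); services attached to the same user-defined network can reach each other by service name:

version: "3.7"
services:
  flask1:
    build: ./flask1
    networks:
      - appnet
  flask2:
    build: ./flask2
    networks:
      - appnet
  nginx:
    build: ./nginx
    networks:
      - appnet
networks:
  appnet:

Note that Compose already creates a default network per project, so flask1 and flask2 should resolve from the nginx container even without an explicit network.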

What you've set up right now is that external clients can connect to nginx on :8080 and :8081, while other containers such as nginx can connect to flask1:8080 and flask2:8081 directly.
Nginx could also be set up with host network mode to go back out and connect through the published ports, but that's probably not what you want. In fact, my guess would be that publishing external ports on the flask apps at all is probably not what you want to do in the long term, although it can be helpful for debugging because it gives you a way to bypass the proxy.
Oops, forgot to add: you may need to set up Nginx to use Docker's internal DNS to resolve the service names to IPs, as mentioned here:
https://stackoverflow.com/a/37656784/9194976
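Following that linked answer, the usual approach is to point nginx at Docker's embedded DNS server (127.0.0.11) and pass the upstream address through a variable, so the name is re-resolved at request time instead of only once at startup. A sketch adapted to the first server block above (the variable name $upstream is my own choice):

server {
    listen 80;

    # Docker's embedded DNS server
    resolver 127.0.0.11 valid=30s;

    location / {
        include uwsgi_params;
        # using a variable forces nginx to resolve the name at runtime
        set $upstream flask1:8080;
        uwsgi_pass $upstream;
    }
}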

Related

Getting 502 Bad gateway when proxy pass to keycloak container from nginx container

I am very new to the Docker world, and I'm currently facing a "502 Bad Gateway" error when trying to proxy_pass to a Keycloak container. I can't seem to understand the cause of the error. Below is what I have written:
proxy.conf file
server {
    listen 80;

    location / {
        proxy_pass http://myapp;
    }
}
Dockerfile
FROM nginx:alpine
RUN rm /etc/nginx/conf.d/*
COPY proxy.conf /etc/nginx/conf.d/
docker-compose file
version: '3'
services:
  nginx_app:
    build: .
    container_name: nginxapp
    ports:
      - "9000:80"
    depends_on:
      - myapp
  myapp:
    image: jboss/keycloak:latest
    container_name: myapp
    ports:
      - "8443"
    environment:
      - KEYCLOAK_USER=admin
      - KEYCLOAK_PASSWORD=admin
What I am trying to do: when I hit host-ip:9000, it should take me to the Keycloak screen. But it looks like something's wrong. Grateful for any help. Thanks.
You need to make sure you've got the "wiring" of your ports set up correctly. JBoss (the application server Keycloak currently runs on) generally listens on 8080 for HTTP and 8443 for HTTPS.
In your configuration you have it routing to port 80 with proxy_pass http://myapp; because that is what HTTP uses by default.
I'd suggest just pointing it at the HTTPS endpoint (or you can use HTTP on port 8080 if you really want) like so:
server {
    listen 80;

    location / {
        proxy_pass https://myapp:8443;
    }
}
We will also need to add two additional environment variables that the Keycloak image uses to make things work more smoothly behind the proxy. See here for more details on these image env vars:
PROXY_ADDRESS_FORWARDING (as linked to by Jan Garaj)
KEYCLOAK_FRONTEND_URL
myapp:
  image: jboss/keycloak:latest
  container_name: myapp
  ports:
    - "8443"
  environment:
    - KEYCLOAK_FRONTEND_URL=http://localhost:9000/auth/
    - PROXY_ADDRESS_FORWARDING=true
    - KEYCLOAK_USER=admin
    - KEYCLOAK_PASSWORD=admin
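Since PROXY_ADDRESS_FORWARDING makes Keycloak trust the X-Forwarded-* headers, the proxy has to actually send them, and nginx does not add them on its own. A sketch of the location block with the usual forwarding headers added (the header set is standard nginx practice, not something from the original answer):

location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass https://myapp:8443;
}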
Once everything starts up, you should be able to access the Keycloak admin console via http://localhost:9000/auth/admin/
If this is intended for more than a development/testing setup, you should also work on configuring TLS in nginx, and maybe even getting real certs for the "backend" Keycloak server.
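For the TLS part, a minimal sketch of what terminating HTTPS in nginx could look like (the server name and certificate paths are placeholders of my own, not from the original answer):

server {
    listen 443 ssl;
    server_name example.com;

    # placeholder paths; point these at your real certificate files
    ssl_certificate     /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;

    location / {
        proxy_pass https://myapp:8443;
    }
}

If you do terminate TLS this way, KEYCLOAK_FRONTEND_URL should be updated to the https:// URL as well.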

502 Bad Gateway Error on Nextcloud Docker Container proxied through Subdomain on Nginx Webserver

I am running an Nginx web server on my Raspberry Pi 4 and am trying to configure a reverse proxy on a subdomain to a Nextcloud Docker container. However, I am getting a 502 Bad Gateway error when I try to visit this container in my browser. I have made sure to generate an SSL certificate for the subdomain I am trying to serve Nextcloud over.
This is what the server block for my subdomain looks like:
server {
    listen 443 ssl;
    server_name subdomain.domain.tld;

    ssl_certificate /pathtokey/subdomain.domain.tld/fullchain.pem;
    ssl_certificate_key /pathtokey/subdomain.domain.tld/privkey;

    location / {
        proxy_pass https://127.0.0.1:9000/;
        proxy_ssl_server_name on;
    }
}
And this is what my docker-compose.yml file for Nextcloud looks like:
version: '2'

volumes:
  nextcloud:
  db:

services:
  db:
    image: linuxserver/mariadb
    # command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    restart: always
    volumes:
      - db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=<rootPassword>
      - MYSQL_PASSWORD=<mysqlPassword>
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
  app:
    image: nextcloud:fpm
    ports:
      - 127.0.0.1:9000:9000
    links:
      - db
    volumes:
      - /mnt/hdd/nextcloud:/var/www/html
    restart: always
After changing the .yml file, I make sure to run docker-compose up -d.
After changing the nginx.conf file, I run sudo systemctl restart nginx. I have also run sudo nginx -t:
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
I am not sure where my mistake is in these configurations. I would appreciate any advice on how to fix this.
You are using the nextcloud:fpm image, which is only a PHP-FPM instance without a web server.
Your nginx proxy config is fine on its own, but it won't work here: you need nginx to proxy requests to the backend PHP instance via FastCGI (fastcgi_pass) rather than plain HTTP.
Here's a simple illustration:
nginx(fastcgi) <-> php-fpm(nextcloud) <-> db
1st solution:
Refer to the official Nextcloud documentation on how to configure nginx, or simply copy the config: nginx configuration. A heavily simplified sketch of the FastCGI handoff follows.
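The sketch below shows only the core handoff; the full Nextcloud config linked above adds many more rules around rewrites, caching, and security headers. The paths follow from the compose file: nginx runs on the host, where the files live under /mnt/hdd/nextcloud, while PHP-FPM inside the container sees the same files at /var/www/html.

server {
    listen 443 ssl;
    server_name subdomain.domain.tld;
    # ssl_certificate / ssl_certificate_key as in the original server block

    # nginx runs on the host, so static files are served from the host path
    root /mnt/hdd/nextcloud;
    index index.php;

    location ~ \.php$ {
        include fastcgi_params;
        # PHP-FPM runs inside the container, which sees the files at /var/www/html
        fastcgi_param SCRIPT_FILENAME /var/www/html$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9000;
    }
}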
2nd solution:
Use the nextcloud:apache image instead. That image already includes an Apache web server, and you can access it directly without needing another nginx instance.

docker-compose warning on scale containers

I have this docker-compose file:
version: "3.8"
services:
  web:
    image: apachephp:v1
    ports:
      - "80-85:80"
    volumes:
      - volume:/var/www/html
    network_mode: bridge
  ddbb:
    image: mariadb:v1
    ports:
      - "3306:3306"
    volumes:
      - volume2:/var/lib/mysql
    network_mode: bridge
    environment:
      - MYSQL_ROOT_PASSWORD=*********
      - MYSQL_DATABASE=*********
      - MYSQL_USER=*********
      - MYSQL_PASSWORD=*********
volumes:
  volume:
    name: volume-data
  volume2:
    name: volume2-data
When run this:
docker-compose up -d --scale web=2
It works, but I receive this warning:
WARNING: The "web" service specifies a port on the host. If multiple containers for this service are created on a single host, the port will clash.
Can somebody help me avoid this warning? Thank you in advance.
Best regards.
I suppose you want to access the web service without knowing the port of a specific container, and to have requests distributed across the containers. If I'm right, you need a load-balancing mechanism in front of the service. In the following example, I'll use NGINX as the load balancer.
version: "3.8"
services:
  web:
    image: apachephp:v1
    expose: # change 'ports' to 'expose'
      - "7483" # <- the port your web app listens on (change to your web port)
    ....
  ddbb:
    ....
  ## New Start ##
  nginx:
    image: nginx:latest
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - web # your web service name
    ports:
      - "4000:4000"
  ## New End ##
volumes:
  ...
So if you want to scale the service, you don't need to map ports 80-85:80 from the web service to host machine ports; I removed that port mapping from your Docker Compose file and only expose the port, as above.
In the nginx service, I added a port mapping to the host for that server. In the example, I configured NGINX to listen on port 4000, which is why we map that port.
nginx.conf file contents:
user nginx;

events {
    worker_connections 1000;
}

http {
    server {
        listen 4000;

        location / {
            # forward requests to the scaled web service on its listening port
            proxy_pass http://web:7483;
        }
    }
}
You will find more details in Use Docker Compose to Run Multiple Instances of a Service in Development.
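To illustrate (a sketch under the assumptions above): with only nginx binding a host port, scaling no longer triggers the warning, and requests to the single published port are spread across the replicas.

docker-compose up -d --scale web=5
curl -s http://localhost:4000/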

How can I connect the Nginx container to my React container?

I have tried reading through the other stackoverflow questions here but I am either missing something or none of them are working for me.
Context
I have two docker containers setup on a DigitalOcean server running Ubuntu.
root_frontend_1 running on ports 0.0.0.0:3000->3000/tcp
root_nginxcustom_1 running on ports 0.0.0.0:80->80/tcp
If I connect to http://127.0.0.1, I get the default Nginx index.html homepage. If I visit http://127.0.0.1:3000, I get my React app.
What I am trying to accomplish is to get my react app when I visit http://127.0.0.1. Following the documentation and suggestions here on StackOverflow, I have the following:
docker-compose.yml in root of my DigitalOcean server.
version: "3"
services:
  nginxcustom:
    image: nginx
    hostname: nginxcustom
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/root/nginxcustom/conf/custom.conf
    tty: true
  backend:
    build: https://github.com/Twitter-Clone/twitter-clone-api.git
    ports:
      - "8000:8000"
    tty: true
  frontend:
    build: https://github.com/dougmellon/react-api.git
    ports:
      - "3000:3000"
    stdin_open: true
    tty: true
nginxcustom/conf/custom.conf:
server {
    listen 80;
    server_name http://127.0.0.1;

    location / {
        proxy_pass http://root_frontend_1:3000; # this one here
        proxy_redirect off;
    }
}
When I run docker-compose up, it builds, but when I visit the IP of my server, it's still showing the default nginx HTML file.
Question
What am I doing wrong here and how can I get it so the main URL points to my react container?
Thank you for your time, and if there is anything I can add for clarity, please don't hesitate to ask.
TL;DR;
The nginx service should proxy_pass to the service name (frontend), not the container name (root_frontend_1), and the nginx config should be mounted to the correct location inside the container.
Tip: the container name can be set for a service in the docker-compose.yml with container_name, however beware that you cannot --scale services with a fixed container_name.
Tip: the container name (root_frontend_1) is generated using the compose project name, which defaults to the current directory name if not set.
Tip: the nginx images are packaged with a default /etc/nginx/nginx.conf that includes the default server config from /etc/nginx/conf.d/default.conf. You can docker cp the default configuration files out of a container if you'd like to inspect them or use them as a base for your own configuration:
docker create --name nginx nginx
docker cp nginx:/etc/nginx/conf.d/default.conf default.conf
docker cp nginx:/etc/nginx/nginx.conf nginx.conf
docker container rm nginx
With nginx proxying connections for the frontend service, we don't need to bind the host's port to the container; the service's ports definition can be replaced with an expose definition to prevent direct connections to http://159.89.135.61:3000 (depending on the backend, you might want to prevent direct connections to it as well):
version: "3"
services:
  ...
  frontend:
    build: https://github.com/dougmellon/react-api.git
    expose:
      - "3000"
    stdin_open: true
    tty: true
Taking it a step further, we can configure an upstream for the frontend service, then point the proxy_pass at the upstream:
upstream frontend {
    server frontend:3000 max_fails=3;
}

server {
    listen 80;
    server_name 159.89.135.61;

    location / {
        proxy_pass http://frontend/;
    }
}
... then bind-mount the custom default.conf on top of the default.conf inside the container:
version: "3"
services:
  nginxcustom:
    image: nginx
    hostname: nginxcustom
    ports:
      - "80:80"
    volumes:
      - ./default.conf:/etc/nginx/conf.d/default.conf
    tty: true
... and finally --scale our frontend service (bounce the services, removing the containers, to make sure the config changes take effect):
docker-compose stop nginxcustom \
&& docker-compose rm -f \
&& docker-compose up -d --scale frontend=3
Docker will resolve the service name to the IPs of the 3 frontend containers, and nginx will proxy connections to them in a (by default) round-robin manner.
Tip: you cannot --scale a service that has port mappings; only a single container can bind to the port.
Tip: if you've updated the config and can connect to your load-balanced service, then you're all set to create a DNS record resolving a hostname to your public IP address, and then update your default.conf's server_name.
Tip: for security, I maintain specs for building an nginx Docker image with ModSecurity and ModSecurity-nginx pre-baked with the OWASP Core Rule Set.
In Docker, when multiple services need to communicate with each other, you can use the service name (as set in the docker-compose.yml) in the URL instead of the IP (which is assigned from the network's available address pool); it is automatically resolved to the right container IP by Docker's network management.
For you it would be http://frontend:3000
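Putting that together with the mount fix from the accepted answer, a minimal corrected default.conf might look like this (a sketch; frontend is the service name from the compose file above, and the file is assumed to be mounted at /etc/nginx/conf.d/default.conf):

server {
    listen 80;

    location / {
        # service name from docker-compose.yml; Docker's DNS resolves it
        proxy_pass http://frontend:3000;
        proxy_redirect off;
    }
}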

How to proxy_pass to a node docker container on port 80 with nginx container

In short, I'm trying to set up an nginx container to proxy_pass to other containers on port 80.
I was following along with this tutorial: https://dev.to/domysee/setting-up-a-reverse-proxy-with-nginx-and-docker-compose-29jg
They describe having a docker compose file that looks something like:
version: '3'
services:
  nginx:
    image: nginx:latest
    container_name: production_nginx
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/error.log:/etc/nginx/error_log.log
      - ./nginx/cache/:/etc/nginx/cache
      - /etc/letsencrypt/:/etc/letsencrypt/
    ports:
      - 80:80
      - 443:443
  your_app_1:
    image: your_app_1_image:latest
    container_name: your_app_1
    expose:
      - "80"
  your_app_2:
    image: your_app_2_image:latest
    container_name: your_app_2
    expose:
      - "80"
  your_app_3:
    image: your_app_3_image:latest
    container_name: your_app_3
    expose:
      - "80"
Then in the nginx config they do a proxy_pass based on the path like this:
proxy_pass http://your_app_1:80;
This all makes sense to me. However, when I made a test Node server listening on port 80, I got the error: Error: listen EACCES: permission denied 0.0.0.0:80. In my Dockerfile for the Node server, I'm using a different user:
USER node
I know I'm getting this error because non-root users are not supposed to be able to bind ports below 1024. And I know it's bad practice to run as root in a container... so how in the world is something like this possible? I feel like I'm missing something here. It would be nice to not have to remember some custom high port your server is running on every time you do a proxy_pass in nginx... or is that just a fact of life?
I see zero issues in doing an expose on the port, as long as we don't publish it.
EXPOSE will not allow communication via the defined ports to containers outside of the same network or to the host machine. To allow that, you need to publish the ports.
But binding port 80 as a non-root user is doable, at the cost of adding security holes, by granting kernel capabilities using the --cap-add flag on the Docker client or cap_add in Docker Compose. NET_BIND_SERVICE is the capability we should be adding, as sketched below.
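A sketch of the Compose-side change (the service and image names reuse the example above). One caveat of my own: with a non-root USER, the added capability sits in the container's bounding set but is not automatically effective for the process, so the node binary may additionally need file capabilities (e.g. set with setcap) before it can actually bind port 80.

your_app_1:
  image: your_app_1_image:latest
  cap_add:
    - NET_BIND_SERVICE # allow binding ports below 1024
  expose:
    - "80"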
