nginx reverse-proxy docker applications

I am trying to set up nginx as a proxy with the following functionality:
If www.mydomain.com is called, serve static content.
If www.mydomain.com/wekan is called, proxy to my Wekan Docker container.
To keep it simple, everything runs on localhost and no extra network is required.
This is my nginx configuration:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    server {
        location / {
            proxy_pass http://localhost:9050;
        }
        location /wekan {
            proxy_pass http://localhost:3001;
        }
        location /pics {
            proxy_pass http://localhost/example.jpg;
        }
        location ~ \.(gif|jpg|png)$ {
            root /home/myUser/serverTest/data/images;
        }
    }

    server {
        listen 9050;
        root /home/myUser/serverTest/data/up1;
        location / {
        }
    }
}
And here is my docker-compose for the Wekan App:
version: '2'
services:
  wekandb:
    image: mongo:3.2.21
    container_name: wekan-db
    restart: always
    command: mongod --smallfiles --oplogSize 128
    networks:
      - wekan-tier
    expose:
      - 27017
    volumes:
      - /home/myUser/wekan/wekan-db:/data/db
      - /home/myUser/wekan/wekan-db-dump:/dump

  wekan:
    image: quay.io/wekan/wekan
    container_name: wekan-app
    restart: always
    networks:
      - wekan-tier
    ports:
      # Docker outsideport:insideport
      - 3001:8080
    environment:
      - MONGO_URL=mongodb://wekandb:27017/wekan
      - ROOT_URL=http://localhost
From my basic understanding of nginx, calling localhost:3001 and calling localhost/wekan should behave the same, since requests to localhost/wekan are proxied to localhost:3001.
This is not what happens: instead I end up on Wekan's "page not found" page.
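A likely culprit is the path prefix. With proxy_pass http://localhost:3001; (no URI part) nginx forwards the original /wekan/... path unchanged, while ROOT_URL=http://localhost tells Wekan it lives at the site root, so the links it generates do not match. A minimal sketch, assuming Wekan supports being hosted under a sub-path via ROOT_URL (the websocket headers are an assumption based on Wekan being a Meteor app):

# nginx: keep the /wekan prefix and forward websocket upgrades
location /wekan/ {
    proxy_pass http://localhost:3001;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}

# docker-compose: make the generated links include the prefix
#   - ROOT_URL=http://localhost/wekan

For reference, a trailing slash on proxy_pass (proxy_pass http://localhost:3001/;) would instead strip the matched /wekan/ prefix before the request reaches the container.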

Related

How do I configure/reconfigure an existing NGINX server to proxy to a docker container?

I have an existing NGINX server hosting two websites, one standard and one on a Node server. I want to run three Docker containers on the same machine as well.
All of the tutorials suggest running NGINX in a container; however, this would conflict with my existing setup.
nodejs server, ports 3030:3030
mysql, ports 3360:3360
phpmyadmin, ports 8080:80
They run fine on localhost on my local machine, but I can't get NGINX on the remote server to serve them.
I want to be able to access the node server at http://publicIP:3030
I have tried to follow this answer, but NGINX gives me a 404 error when I try to access them.
my nginx config is:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html index.php;

    server_name _;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }

    location /paragon/ {
        proxy_pass http://localhost:3030/;
        # proxy_set_header X-SRV paragon;
    }

    location /phpmyadmin {
        proxy_pass http://localhost:8080/;
        # proxy_set_header X-SRV phpmyadmin;
    }

    location /mysql {
        proxy_pass http://localhost:3360/;
        # proxy_set_header X-SRV mysql;
    }
}
I have tried it with the X-SRV headers uncommented as well.
My docker-compose.yml config is:
services:
  web:
    container_name: paragon_web
    build: .
    command: npm run
    depends_on:
      - db
    volumes:
      - ./:/app
      - /node_modules
    networks:
      - paragon_net
    ports:
      - "3030:3030"

  db:
    container_name: paragon_db
    image: mysql:8.0
    command:
      --default-authentication-plugin=mysql_native_password
      --init-file ./src/data/db_init.sql
    restart: unless-stopped
    volumes:
      - ./src/data/db_init.sql:/docker-entrypoint-initdb.d/
      - mysql-data:/var/lib/mysql
    ports:
      - "3360:3306"
    expose:
      - "3306"
    environment:
      MYSQL_DATABASE: paragon
      MYSQL_ROOT_PASSWORD: root
      MYSQL_USER: admin
      MYSQL_PASSWORD: paragon99
      SERVICE_TAG: dev
      SERVICE_NAME: paragon_db
    networks:
      - paragon_net
    # volumes:

  phpmyadmin:
    container_name: sql_admin
    image: phpmyadmin:5.2.0-apache
    restart: always
    depends_on:
      - db
    ports:
      - "8090:80"
    networks:
      - paragon_net

networks:
  paragon_net:
    driver: bridge
The new site's files on the server are at /var/www/newsite.
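One mismatch is visible in the files above: docker-compose publishes phpMyAdmin on host port 8090 (8090:80), but the nginx config proxies /phpmyadmin to 8080. Also, MySQL speaks its own binary protocol, so proxying port 3360 from an http location block cannot work; clients should connect to port 3360 directly (or through an nginx stream block). A rough sketch of the two HTTP locations, assuming the Node app can handle being served from under /paragon/:

location /paragon/ {
    # trailing slash strips the /paragon/ prefix before it reaches the Node app
    proxy_pass http://localhost:3030/;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

location /phpmyadmin/ {
    # must match the published host port, 8090:80 in the compose file
    proxy_pass http://localhost:8090/;
    proxy_set_header Host $host;
}

Reaching the node server at http://publicIP:3030 does not involve nginx at all; that only requires the published 3030:3030 port to be open in the server's firewall or security group.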

Configure NGINX, Docker Compose, AWS EC2

I have an EC2 AWS instance and I want to create a test environment for an application.
All the other containers are working properly, but the NGINX container is the only one that is not stable; it keeps restarting constantly.
I have tried several ways to make NGINX run: changing addresses, using a return 301, and removing certbot, but nothing worked.
I'm an intern and I need to create this test environment for an assessment. Can someone help me?
My docker-compose.yml looks like this:
version: "3.9"
services:
  backend:
    container_name: backend
    image: <my_image>
    restart: always
    ports:
      - "3030:3030"
    environment:
      - "DATABASE_URL=${DATABASE_URL}"
      - "SERVER_PORT=${SERVER_PORT}"
      - "JWT_SECRET=${JWT_SECRET}"

  sniffer:
    container_name: mqtt-sniffer
    image: <my_image>
    restart: always
    environment:
      - "POSTGRES_DB=${POSTGRES_DB}"
      - "POSTGRES_USER=${POSTGRES_USER}"
      - "POSTGRES_PASSWORD=${POSTGRES_PASSWORD}"
      - "POSTGRES_HOST=${POSTGRES_HOST}"
      - "MQTT_CLIENT_ID=${MQTT_CLIENT_ID}"
      - "MQTT_USER=${MQTT_USER}"
      - "MQTT_PASSWORD=${MQTT_PASSWORD}"
      - "MQTT_HOST=${MQTT_HOST}"
      - "MQTT_PORT=${MQTT_PORT}"

  web:
    container_name: web
    image: <my_image>
    restart: always

  nginx:
    container_name: nginx
    image: nginx:latest
    restart: always
    ports:
      - "80:80"
      - "443:443"
    environment:
      - API_SERVER_NAME=${API_SERVER_NAME}
    volumes:
      - /home/ubuntu/app/nginx/default.conf:/etc/nginx/nginx.conf:ro
      - /home/ubuntu/app/certbot/www:/etc/nginx/acme_challenge:ro
      - /home/ubuntu/app/certbot/certificate:/etc/nginx/certificate:ro
and my default.conf looks like this:
events {}

http {
    server {
        listen 80;
        listen [::]:80;
        server_name 18.231.90.250;

        location /.well-known/acme-challenge/ {
            root /etc/nginx/acme_challenge;
        }
        location / {
            proxy_pass http://18.231.90.250;
        }
    }

    server {
        listen 80;
        listen [::]:80;

        location /.well-known/acme-challenge/ {
            root /etc/nginx/acme_challenge;
        }
        location / {
            proxy_pass http://web:80;
        }
    }

    server {
        listen 443 default_server ssl;
        listen [::]:443 ssl;

        ssl_certificate /etc/nginx/certificate/live/teste.clientautoponia.com/fullchain.pem;
        ssl_certificate_key /etc/nginx/certificate/live/teste.clientautoponia.com/privkey.pem;

        location / {
            proxy_pass http://backend:3030;
        }
    }
}
I don't need the SSL certificate; I just want to be able to communicate with the instance and get a 200 when accessing the address.
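Two things in the config above can explain this behaviour: if the certificate files referenced by the 443 server block are missing, nginx exits at startup and restart: always turns that into a restart loop, and the first server block proxies to the instance's own public IP, i.e. back to nginx itself. A minimal HTTP-only sketch (the /api/ prefix for the backend is an assumption):

events {}

http {
    server {
        listen 80;
        listen [::]:80;

        location /.well-known/acme-challenge/ {
            root /etc/nginx/acme_challenge;
        }

        # hypothetical path for the API; adjust to however the frontend calls it
        location /api/ {
            proxy_pass http://backend:3030/;
        }

        location / {
            proxy_pass http://web:80;
        }
    }
}

With no 443 server block, nginx no longer needs the certificate files to start; the "443:443" mapping in the compose file can stay or be removed.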

NGINX container as a proxy for other docker containers

I have a bind9 DNS server running as a container, mapped to port 53003 on the Docker host:
version: "3"
services:
  DNS-SRV:
    container_name: DNS-SRV
    image: ubuntu/bind9
    ports:
      - "53003:53"
    environment:
      - TZ=UTC
    volumes:
      - ~/core/bind9/:/etc/bind/
and I wonder if I can use an NGINX server as a proxy for it and other containerized services. Here is my nginx.conf file:
events {
    worker_connections 1024;
}

stream {
    upstream dns_servers {
        server <docker_hostIP>:53003;
    }

    server {
        listen 53 udp;
        listen 53;  # tcp
        proxy_pass dns_servers;
        error_log /var/log/nginx/dns.log info;
        proxy_responses 1;
        proxy_timeout 1s;
    }
}
What I'm trying to do is map any DNS request that reaches the docker host on port 53 to port 53003; I'm not sure if there is another way:
Client -- DNS request ---> 53 (( [ NGINX ] --> 53003:53 ))
My setup isn't working. I'm testing it with nslookup like this:
# nslookup <domain> <docker_hostIP>
but I'm getting "connection timed out". What could be the issue?
You could try this: create a docker-compose file incorporating both services.
version: "3"
services:
  DNS-SRV:
    container_name: DNS-SRV
    image: ubuntu/bind9
    environment:
      - TZ=UTC
    volumes:
      - ~/core/bind9/:/etc/bind/

  NGINX:
    image: nginx:alpine
    ports:
      - "53:53/udp"
      - "53:53/tcp"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    depends_on:
      - "DNS-SRV"
Then in your nginx.conf do something like this:
events {
    worker_connections 1024;
}

stream {
    upstream dns_servers {
        server DNS-SRV:53;
    }

    server {
        listen 53 udp;
        listen 53;  # tcp
        proxy_pass dns_servers;
        error_log /var/log/nginx/dns.log info;
        proxy_responses 1;
        proxy_timeout 1s;
    }
}
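Once both containers share the compose network, a quick way to check the path through the stream proxy from the docker host is to query it directly (example.test is just a placeholder for a zone actually configured in bind):

docker compose up -d
# query through the nginx stream proxy published on the host's port 53
nslookup example.test 127.0.0.1
dig @127.0.0.1 example.test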

Nginx does not redirect to the correct port

I am using docker-compose to run my frontend application, backend application and nginx webserver. I would like to route requests to the correct port (backend or frontend), but for some reason I just get Internal Server Errors.
This is my docker-compose:
version: "3"
services:
  webserver:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    networks:
      - project
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/error.log:/etc/nginx/error_log.log
      - ./nginx/cache/:/etc/nginx/cache
      - /etc/letsencrypt/:/etc/letsencrypt/

  backend:
    build:
      context: ./project-backend
      dockerfile: stage.Dockerfile
    env_file:
      - ./project-backend/environments/stage.env
    volumes:
      - ./project-backend/src:/usr/src/app/src
    ports:
      - "3000:3000"
    networks:
      - project

  frontend:
    build:
      context: ./project-frontend
      dockerfile: stage.Dockerfile
    ports:
      - "4200:80"
    networks:
      - project

networks:
  project:
This works fine; I can access both the frontend and the backend directly.
This is my nginx.conf file:
events {}

http {
    client_max_body_size 20m;
    proxy_cache_path /etc/nginx/cache keys_zone=one:500m max_size=1000m;

    server {
        proxy_cache one;
        proxy_cache_key $request_method$request_uri;
        proxy_cache_min_uses 1;
        proxy_cache_methods GET;
        proxy_cache_valid 200 1y;

        listen 80;
        server_name localhost;

        location /api {
            proxy_pass http://localhost:3000/api;
            rewrite ^/api(.*)$ $1 break;
        }

        location / {
            proxy_pass http://localhost:4200;
            rewrite ^/(.*)$ $1 break;
        }
    }
}
Try to reach the other containers by their compose service names instead of localhost, and use their container-internal ports. For example:
Change proxy_pass http://localhost:3000/api; to proxy_pass http://backend:3000/api;
Change proxy_pass http://localhost:4200; to proxy_pass http://frontend:80; (inside the project network the frontend container listens on port 80; the 4200:80 mapping only applies on the host).
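A sketch of the same server block with those substitutions applied; the cache directives and rewrites are left out for brevity, and with no URI part on proxy_pass the original request path (including /api) is forwarded unchanged, which assumes the backend expects its routes under /api:

server {
    listen 80;
    server_name localhost;

    location /api {
        # "backend" resolves via Docker's embedded DNS on the project network
        proxy_pass http://backend:3000;
    }

    location / {
        # container-internal port 80, not the published host port 4200
        proxy_pass http://frontend:80;
    }
}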

Nginx and docker compose - not returning static files

This is my docker-compose.yml:
version: '2'
services:
  nginx:
    image: nginx:latest
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - ./conf.d:/etc/nginx/conf.d/
      - ./logs/nginx_access.log:/var/log/nginx_access.log
      - ./logs/nginx_error.log:/var/log/nginx_error.log
      - ./src/app/static:/flask-app/src/app/static
    depends_on:
      - web

  web:
    build: ./
    command: gunicorn manage:app --bind 0.0.0.0:8000 --access-logfile=logs/gunicorn_access_log.txt
    ports:
      - '8000:8000'
    volumes:
      - ./:/flask-app
    environment:
      DATABASE_URL: postgresql://postgres:pass@localhost/flask_deploy
      REDIS_HOST: redis
      SECRET_KEY: 'BbGd3qe$dsf1'
      CONFIG_NAME: 'prod'
    links:
      - postgres:postgres
      - redis:redis
    depends_on:
      - postgres
      - redis

  postgres:
    image: postgres:9.4
    volumes:
      - ./psql-data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: 'pass'
      POSTGRES_DB: 'flask_deploy'
    ports:
      - '5432:5432'

  redis:
    image: "redis:3.0-alpine"
    command: redis-server
    ports:
      - '6379:6379'
And this is my nginx config (web is the service name from the docker-compose file):
server {
    listen 80;
    server_name web;

    # access and error logs go to /var/log
    access_log /var/log/nginx_access.log;
    error_log /var/log/nginx_error.log;

    location / {
        # forward application requests to the gunicorn server
        proxy_pass http://web:8000;
    }

    location /static {
        # serve static files directly, without forwarding to the application
        autoindex on;
        alias /flask-app/src/app/static;
        expires 1d;
    }
}
My site is available at 127.0.0.1 (without a port). But I have trouble with static files. Flask's url_for generates URLs like:
http://web:8000/static/img/do.jpg
and this link is unreachable from the browser.
If I try this instead:
http://127.0.0.1:8000/static/img/do.jpg
I do see the picture, but it is returned by gunicorn, not nginx :(
I am a beginner with docker-compose and nginx. Any comments about my config? Thanks!
Solution:
proxy_set_header Host $host:8000;
Full config:
server {
    listen 80;
    server_name localhost;
    root /flask-app/src/app;

    access_log /var/log/nginx_access.log;
    error_log /var/log/nginx_error.log;

    location / {
        proxy_set_header Host $host:8000;
        proxy_pass http://web:8000;
    }

    location /static {
        autoindex on;
        expires 1d;
    }
}
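A quick way to confirm which server actually answers for a static file is to look at the Server response header (the image path is the example one from the question):

curl -sI http://127.0.0.1/static/img/do.jpg | grep -i '^server'
# "Server: nginx/..." means nginx served the file directly,
# "Server: gunicorn" means the request still went through the Flask app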
