I am trying to run a few containers on my Ubuntu server. These containers are:
a DNS server with bind9
an NTP server with cturra/ntp
NGINX as a reverse proxy for DNS and NTP
I have these containers in the same yaml file:
version: '3'
services:
  reverse-proxy-engine:
    image: nginx
    container_name: reverse-proxy-engine
    volumes:
      - ~/core/reverse-proxy/:/usr/share/nginx/
    ports:
      - "80:80"
      - "443:443"
      - "53:53"
      - "123:123/udp"
    depends_on:
      - "DNS-SRV"
      - "ntp"
  DNS-SRV:
    container_name: DNS-SRV
    image: ubuntu/bind9
    user: root
    environment:
      - TZ=UTC
    volumes:
      - ~/core/bind9/:/etc/bind/
  ntp:
    image: cturra/ntp
    container_name: ntp
    restart: always
    read_only: true
    tmpfs:
      - /etc/chrony:rw,mode=1750
      - /run/chrony:rw,mode=1750
      - /var/lib/chrony:rw,mode=1750
    environment:
      - NTP_SERVERS=time.cloudflare.com
      - LOG_LEVEL=0
After running docker-compose with this file, the containers are created and I see the ports mapped correctly:
admin@main-srv:~/core/yamls$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4720bae2a44c nginx "/docker-entrypoint.…" 5 seconds ago Up 4 seconds 0.0.0.0:53->53/tcp, 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:123->123/udp reverse-proxy-engine
1681814f651e cturra/ntp "/bin/sh /opt/startu…" 6 seconds ago Up 5 seconds (health: starting) 123/udp ntp
dde2f9094b45 ubuntu/bind9 "docker-entrypoint.sh" 6 seconds ago Up 5 seconds 53/tcp DNS-SRV
I am able to access the nginx webpage in the browser on port 80 with <UBUNTU_SERVER_IP:80>, but I'm unable to use this same IP to resolve DNS or NTP from elsewhere on the network, even though both work from within the containers' network.
So I think the NGINX ports are exposed on the Ubuntu server, but the DNS and NTP ports are not exposed to NGINX. Would that be correct? What am I missing?
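For reference, this is how I'm testing from another machine on the same network; <UBUNTU_SERVER_IP> is a placeholder for the server's address, and dig/ntpdate are assumed to be installed on the client:
# DNS lookup through the host's published port 53
dig @<UBUNTU_SERVER_IP> example.com
# NTP query (query only, no clock change) through the host's published port 123/udp
ntpdate -q <UBUNTU_SERVER_IP>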
Below is my NGINX configuration file:
events {
    worker_connections 1024;
}

stream {
    upstream dns_servers {
        server DNS-SRV:53;
    }
    upstream ntp_server {
        server ntp:123;
    }
    server {
        listen 53 udp;
        listen 53; # tcp
        proxy_pass dns_servers;
        error_log /var/log/nginx/dns.log info;
        proxy_responses 1;
        proxy_timeout 1s;
    }
    server {
        listen 123 udp;
        listen 123; # tcp
        proxy_pass ntp_server;
        error_log /var/log/nginx/ntp.log info;
        proxy_responses 1;
        proxy_timeout 1s;
    }
}
So far it seems logical to me. Any ideas?
I think that's because you don't set a hostname for the bind and ntp containers. I use the configuration below and it works for me:
version: '3'
services:
  reverse-proxy-engine:
    image: nginx
    container_name: reverse-proxy-engine
    volumes:
      - ~/core/reverse-proxy/:/usr/share/nginx/
      - $PWD/nginx.conf:/etc/nginx/nginx.conf
    ports:
      - "80:80"
      - "443:443"
      - "53:53"
      - "123:123/udp"
    depends_on:
      - "DNS-SRV"
      - "ntp"
  DNS-SRV:
    container_name: DNS-SRV
    hostname: DNS-SRV
    image: ubuntu/bind9
    user: root
    environment:
      - TZ=UTC
    volumes:
      - ~/core/bind9/:/etc/bind/
  ntp:
    image: cturra/ntp
    container_name: ntp
    hostname: ntp
    restart: always
    read_only: true
    tmpfs:
      - /etc/chrony:rw,mode=1750
      - /run/chrony:rw,mode=1750
      - /var/lib/chrony:rw,mode=1750
    environment:
      - NTP_SERVERS=time.cloudflare.com
      - LOG_LEVEL=0
In the configuration above I add a hostname for the bind and ntp containers, and I also mount my nginx configuration to replace the default one.
Below is the nginx.conf configuration:
user nginx;
worker_processes auto;

error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;
    include /etc/nginx/conf.d/*.conf;
}

stream {
    upstream dns_servers {
        server DNS-SRV:53;
    }
    upstream ntp_server {
        server ntp:123;
    }
    server {
        listen 53 udp;
        listen 53; # tcp
        proxy_pass dns_servers;
        error_log /var/log/nginx/dns.log info;
        proxy_responses 1;
        proxy_timeout 1s;
    }
    server {
        listen 123 udp;
        listen 123; # tcp
        proxy_pass ntp_server;
        error_log /var/log/nginx/ntp.log info;
        proxy_responses 1;
        proxy_timeout 1s;
    }
}
Note: Make sure the ports you bind (80, 443, 53, 123) are not already used by another application.
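To verify that the container names actually resolve from inside the proxy, a quick sanity check like this should work (getent ships with the Debian-based nginx image):
# resolve the upstream names from inside the reverse proxy container
docker exec reverse-proxy-engine getent hosts DNS-SRV
docker exec reverse-proxy-engine getent hosts ntp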
Related
I'm using nginx in a docker-compose file to handle my frontend and backend websites.
I had no problems for a long time, but at some point I started getting a "504 Gateway Time-out" error when I try to access my project through localhost and its port:
http://localhost:8080
When I use the Docker IP and its port instead:
http://172.18.0.1:8080
I can access the project and nginx works correctly.
I'm sure my config file is correct because it was working for six months, and I don't know what happened to it.
What should I check to find the problem?
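So far, the only things I've thought of checking are whether the containers are still attached to the external network and what nginx logs during the timeout (names taken from my files below):
# are both nginx and the backend attached to the external network?
docker network inspect backend_appx
# does nginx report upstream timeouts or DNS errors?
docker logs --tail 50 nginx
# can the nginx container still resolve and reach the upstream by name?
docker exec nginx ping -c 1 next_app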
docker-compose file:
.
.
.
nginx:
  container_name: nginx
  image: nginx:1.19-alpine
  restart: unless-stopped
  ports:
    - '8080:80'
  volumes:
    - ./frontend:/var/www/html/frontend
    - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf
  networks:
    - backend_appx
networks:
  backend_appx:
    external: true
.
.
nginx config file:
upstream nextjs_upstream {
    server next_app:3000;
}

server {
    listen 80 default_server;
    server_name _;
    server_tokens off;

    # set root
    root /var/www/html/frontend;

    # set log
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;

    location /_next/static {
        proxy_cache STATIC;
        proxy_pass http://nextjs_upstream;
        add_header X-Cache-Status $upstream_cache_status;
    }
}
I created a GitHub repo a few weeks ago with Docker Compose, Odoo, PostgreSQL, Certbot, Nginx as a proxy server, and a little bit of PHP stuff (Symfony) -> https://github.com/Inushin/dockerOdooSymfonySSL When I was trying out the config I found that NGINX worked as it was supposed to and you get the correct HTTP -> HTTPS redirect, BUT if you go to port 8069 directly, the browser stays on plain HTTP. One solution would be to configure another VPC, but I was thinking about using this repo for other "minimal VPS services" and would rather not need another VPC, so... how could I solve this? Maybe from the Odoo config? Is something missing in the NGINX conf?
NGINX
#FOR THE ODOO DOMAIN
server {
    listen 80;
    server_name DOMAIN_ODOO;
    server_tokens off;

    location / {
        return 301 https://$server_name$request_uri;
    }

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }
}

server {
    listen 443 ssl;
    server_name DOMAIN_ODOO;
    server_tokens off;

    location / {
        proxy_pass http://web:8069;
        proxy_set_header Host DOMAIN_ODOO;
        proxy_set_header X-Forwarded-For $remote_addr;
    }

    ssl_certificate /etc/letsencrypt/live/DOMAIN_ODOO/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/DOMAIN_ODOO/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
docker-compose.yml
nginx:
  image: nginx:1.15-alpine
  expose:
    - "80"
    - "443"
  ports:
    - "80:80"
    - "443:443"
  networks:
    - default
  volumes:
    - ./data/nginx:/etc/nginx/conf.d/:rw
    - ./data/certbot/conf:/etc/letsencrypt/:rw
    - ./data/certbotSymfony/conf:/etc/letsencrypt/symfony/:rw
    - ./data/certbotSymfony/www:/var/www/certbot/:rw
  command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
web:
  image: odoo:13.0
  depends_on:
    - db
  ports:
    - "8069:8069/tcp"
  volumes:
    - web-data:/var/lib/odoo
    - ./data/odoo/config:/etc/odoo
    - ./data/odoo/addons:/mnt/extra-addons
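One idea I had, though I'm not sure it's the right approach: the browser can reach Odoo on 8069 only because that port is published on all interfaces, so I could publish it on the loopback interface only, and external clients would have to go through nginx:
web:
  image: odoo:13.0
  depends_on:
    - db
  ports:
    # bind to the host's loopback only; nginx still reaches Odoo over the
    # compose network, but outside clients can no longer hit 8069 directly
    - "127.0.0.1:8069:8069/tcp"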
I am currently running an application with the following components, all in docker containers on an AWS server:
AngularJS
Node.js
nginx (using the jwilder/nginx-proxy image)
letsencrypt (using the jrcs/letsencrypt-nginx-proxy-companion:v1.12 image)
The application allows me to upload files from the frontend, which sends them to an API endpoint in the nodeJs backend. Multiple files can be uploaded together, and these are all base64 encoded and sent in the same request.
When the files are small (up to about 5Mb total) this works perfectly fine, but recently I've tried slightly larger files (still less than 10Mb total) and I am experiencing the following error in my Chrome browser:
{"message":"request entity too large","additionalMessage":null,"dictionaryKey":"server_errors.general_error","hiddenNotification":false,"handledError":false}
Inspecting the traffic, I realised that the request was never making it to my backend, so I assume this error is caused by nginx blocking the request, presumably via the client_max_body_size property in my nginx config. Checking the logs of the nginx docker container, it doesn't actually show any errors, but it does warn that a tmp file is used (note I have masked IPs and URLs):
nginx.1 | 2021/11/12 03:19:15 [warn] 389#389: *9 a client request body is buffered to a temporary file /var/cache/nginx/client_temp/0000000001, client: 100.00.00.000, server: my-host-url.com, request: "POST /api/docs/upload HTTP/2.0", host: "my-host-url.com", referrer: "https://my-host-url.com/"
After some googling, I found and followed this article https://learn.coderslang.com/0018-how-to-fix-error-413-request-entity-too-large-in-nginx/ which explains the issue pretty clearly and even references the same docker image that I use. Unfortunately this has not fixed the issue. I also read the nginx docs, which show that this property can be applied at the
http, server, location
levels, so I updated my nginx config accordingly and restarted nginx on its own, and also shut down and restarted the containers. Still no luck :(
My docker/nginx config is as follows, noting that I am now using client_max_body_size 0; to completely disable the check instead of just increasing the size.
Docker compose
version: '2.4'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    environment:
      # DEBUG: "true"
      DEFAULT_HOST: my-host-url.com
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - nginx-certs:/etc/nginx/certs:ro
      - nginx-vhost:/etc/nginx/vhost.d
      - nginx-html:/usr/share/nginx/html
      - ./nginx.conf:/etc/nginx/nginx.conf
    sysctls:
      - net.core.somaxconn=65536
    mem_limit: 200m
    labels:
      - com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy
    networks:
      - common
    restart: 'always'
    logging:
      options:
        max-size: 100m
        max-file: '5'
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion:v1.12
    depends_on:
      - nginx-proxy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - nginx-certs:/etc/nginx/certs:rw
      - nginx-vhost:/etc/nginx/vhost.d
      - nginx-html:/usr/share/nginx/html
    # environment:
    #   DEBUG: "true"
    networks:
      - common
    restart: 'always'
nginx.conf copied to container
user nginx;
worker_processes auto;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    gzip on;
    client_max_body_size 0;
    proxy_connect_timeout 301;
    proxy_send_timeout 301;
    proxy_read_timeout 301;
    send_timeout 301;
    include /etc/nginx/conf.d/*.conf;
    server {
        client_max_body_size 0;
        location / {
            client_max_body_size 0;
        }
    }
}

daemon off;
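Since jwilder/nginx-proxy generates its own server blocks, I also wondered whether the override belongs in the mounted vhost.d volume rather than nginx.conf; as far as I understand the nginx-proxy README, a file named after the virtual host (or named default) gets included inside the generated server block. Something like this, with the container name as a placeholder:
# write a per-vhost override inside the running proxy container
docker exec <nginx-proxy-container> sh -c \
  'echo "client_max_body_size 0;" > /etc/nginx/vhost.d/my-host-url.com'
# reload nginx so the generated config picks up the override
docker exec <nginx-proxy-container> nginx -s reload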
I used different docker-compose.yml files to bring up two sites:
nginx1 mapping 8080:80
nginx2 mapping 8081:80
nginx-proxy forwarding 8080 to 80 and 8081 to 81
The results at localhost:8080 and localhost:8081 are fine, but localhost:80 and localhost:81 don't work, and I don't know why. I also tried adding the containers to a common network, but that didn't work either.
I expect visiting localhost:80 and localhost:81 to get the right responses.
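What I've checked so far, without luck (container name from the compose file below):
# is the proxy container actually running, or stuck in a restart loop?
docker ps -a --filter name=test-nginx-proxy
# does nginx log an error on startup?
docker logs test-nginx-proxy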
nginx-proxy directory
#docker-compose.yml
version: "3"
services:
  test-nginx-proxy:
    image: nginx:stable-alpine
    container_name: test-nginx-proxy
    restart: always
    ports:
      - 80:80
      - 81:81
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    networks:
      - custom-nginx-net
networks:
  custom-nginx-net:
    external:
      name: nginx-net

# nginx.conf
events {}
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 300;

    server {
        listen 80;
        server_name localhost;
        location / {
            proxy_pass http://localhost:8080;
        }
    }

    server {
        listen 81;
        server_name localhost;
        location / {
            proxy_pass http://localhost:8081;
        }
    }
}
nginx1
#docker-compose.yml
version: "3"
services:
  test-nginx1:
    image: nginx:stable-alpine
    container_name: test-nginx1
    restart: always
    ports:
      - 8080:80
    volumes:
      - ./:/usr/share/nginx/html
      - ./nginx.conf:/etc/nginx/nginx.conf
    networks:
      - custom-nginx-net
networks:
  custom-nginx-net:
    external:
      name: nginx-net

# nginx.conf
events {}
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 300;

    server {
        listen 80;
        server_name localhost;
        location / {
            root /usr/share/nginx/html;
            index index.html;
        }
    }
}
nginx2
version: "3"
services:
test-nginx2:
image: nginx:stable-alpine
container_name: test-nginx2
restart: always
ports:
- 8081:80
volumes:
- ./:/usr/share/nginx/html
- ./nginx.conf:/etc/nginx/nginx.conf
networks:
- custom-nginx-net
networks:
custom-nginx-net:
external:
name: nginx-net
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 300;
server {
listen 80;
server_name localhost;
location / {
root /usr/share/nginx/html;
index index.html;
}
}
}
In your docker-compose file, in the ports section, you have added a mapping like this:
ports:
  - 8080:80
which means port 8080 will be mapped to port 80. Here 8080 is the host port, which you can access on your machine, whereas port 80 is the container port, which is not accessible outside the container.
To access port 80 from outside, you need to map it to a host port the same way you did for port 8080, as shown below:
ports:
  - 8080:80
  - 80:80
Do the same thing for port 81 as well.
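For example, in compose syntax (a sketch of the idea; adjust it to whichever container actually listens on these ports):
ports:
  - 8080:80   # host port 8080 -> container port 80
  - 80:80     # host port 80   -> container port 80
  - 81:81     # host port 81   -> container port 81, if the container listens on 81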
Below is my nginx.conf
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        access_log /var/log/nginx/access.log main;

        location /beta/ {
            proxy_pass http://localhost:9001;
        }
        location /qa/ {
            proxy_pass http://localhost:9002;
        }
        location /alpha/ {
            proxy_pass http://localhost:9003;
        }
        location / {
            proxy_pass http://www.google.com;
        }
    }
}
and below is my docker-compose.yml
version: '3'
services:
Reverse-proxy:
image: nginx
ports:
- 80:80
volumes:
- /nginx.conf:/etc/nginx/nginx.conf
restart: always
GQLbeta:
image: gql-beta
ports:
- 9001:80
restart: always
GQLqa:
image: gql-qa
ports:
- 9002:80
restart: always
GQLalpha:
image: gql-alpha
ports:
- 9003:80
restart: always
When I run docker-compose up -d, all the containers start fine.
Then I went to localhost:80 in my browser, and it shows an error page,
where I expected to see the Google page.
And when I go to localhost/beta, it shows
502 Bad Gateway
where I expected it to proxy to localhost:9001.
Why did this happen? Am I missing something in the setup?
localhost inside a docker container is the container itself, so you should give names to your app containers and refer to them as upstreams - that will fix your 502. For the default location, try this:
location / {
    return 301 http://google.com;
}
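For the other locations, a minimal sketch using the service names from your compose file, assuming each app listens on container port 80 (the trailing slash makes nginx strip the /beta/ etc. prefix before proxying):
location /beta/ {
    proxy_pass http://GQLbeta:80/;
}
location /qa/ {
    proxy_pass http://GQLqa:80/;
}
location /alpha/ {
    proxy_pass http://GQLalpha:80/;
}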