client_max_body_size not taking effect - docker

I am currently running an application with the following components, all in Docker containers on an AWS server:
angularJs
nodeJS
nginx (using jwilder/nginx-proxy image)
letsencrypt (using jrcs/letsencrypt-nginx-proxy-companion:v1.12 image)
The application allows me to upload files from the frontend, which sends them to an API endpoint in the nodeJs backend. Multiple files can be uploaded together, and these are all base64 encoded and sent in the same request.
When the files are small (up to about 5 MB total) this works perfectly fine, but recently I've tried slightly larger files (still less than 10 MB total) and I get the following error in my Chrome browser:
{"message":"request entity too large","additionalMessage":null,"dictionaryKey":"server_errors.general_error","hiddenNotification":false,"handledError":false}
Inspecting the traffic, I realised that the request was never making it to my backend, so I assume this error is caused by nginx blocking the request, presumably via the client_max_body_size property in my nginx config. The logs of the nginx Docker container don't actually show any errors, but they do warn that a temporary file is used (note that I have masked IPs and URLs):
nginx.1 | 2021/11/12 03:19:15 [warn] 389#389: *9 a client request body is buffered to a temporary file /var/cache/nginx/client_temp/0000000001, client: 100.00.00.000, server: my-host-url.com, request: "POST /api/docs/upload HTTP/2.0", host: "my-host-url.com", referrer: "https://my-host-url.com/"
After some googling, I found and followed this article https://learn.coderslang.com/0018-how-to-fix-error-413-request-entity-too-large-in-nginx/ which explains the issue pretty clearly and even references the same Docker image that I use. Unfortunately this has not fixed the issue. I also read the nginx docs, which show that this directive can be applied at the
http, server, location
levels, so I updated my nginx config accordingly, restarted nginx on its own, and also shut down and restarted the containers. Still no luck :(
My Docker and nginx config is as follows; note that I am now using client_max_body_size 0; to completely disable the check instead of just increasing the size (see also the diagnostic sketch after the nginx.conf below).
Docker compose
version: '2.4'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    environment:
      # DEBUG: "true"
      DEFAULT_HOST: my-host-url.com
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - nginx-certs:/etc/nginx/certs:ro
      - nginx-vhost:/etc/nginx/vhost.d
      - nginx-html:/usr/share/nginx/html
      - ./nginx.conf:/etc/nginx/nginx.conf
    sysctls:
      - net.core.somaxconn=65536
    mem_limit: 200m
    labels:
      - com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy
    networks:
      - common
    restart: 'always'
    logging:
      options:
        max-size: 100m
        max-file: '5'
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion:v1.12
    depends_on:
      - nginx-proxy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - nginx-certs:/etc/nginx/certs:rw
      - nginx-vhost:/etc/nginx/vhost.d
      - nginx-html:/usr/share/nginx/html
    # environment:
    #   DEBUG: "true"
    networks:
      - common
    restart: 'always'
nginx.conf copied to container
user nginx;
worker_processes auto;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    gzip on;

    client_max_body_size 0;

    proxy_connect_timeout 301;
    proxy_send_timeout 301;
    proxy_read_timeout 301;
    send_timeout 301;

    include /etc/nginx/conf.d/*.conf;

    server {
        client_max_body_size 0;

        location / {
            client_max_body_size 0;
        }
    }
}
daemon off;
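Since jwilder/nginx-proxy generates its own server blocks under /etc/nginx/conf.d/, it is worth confirming what configuration nginx is actually running, rather than what is mounted. A quick check (the container name is a placeholder, since the compose file above does not set container_name for the proxy):

docker exec <nginx-proxy-container> nginx -t                                # validate the loaded config
docker exec <nginx-proxy-container> nginx -T | grep client_max_body_size   # show every place the directive appears

The nginx-proxy image also supports per-virtual-host overrides via files in /etc/nginx/vhost.d named after the host, so an alternative sketch is to drop the directive there instead of editing nginx.conf (assuming my-host-url.com is the VIRTUAL_HOST and the nginx-vhost volume is replaced with a bind mount such as ./vhost.d:/etc/nginx/vhost.d):

echo 'client_max_body_size 0;' > ./vhost.d/my-host-url.com

If nginx -T already shows client_max_body_size 0; in the active server block and the error persists, the limit may be enforced by the Node.js backend's body parser rather than by nginx.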

Related

Deploying nginx with docker /api

How do you deploy static web with express api and mongodb?
I've tried all kinds of ways to configure nginx, but I can't get it to talk to the API at the location /api.
I've tested that I can access the API and MongoDB directly, but I can't reach the API through the nginx server: http://localhost:8082/api/ gives me a 404.
Here is the docker-compose for the stack.
version: "3.8"
services:
js-alist-api:
image: "js-alist-api:latest"
ports:
- "5005:5005"
restart: always
container_name: "js-alist-api"
env_file:
- ./server/.env
volumes:
- "./js-alist-data/public:/public"
- "./server/oldDb.json:/oldDb.json"
js-alist-client:
image: "js-alist-client:latest"
ports:
- "8082:80"
restart: always
container_name: "js-alist-client"
volumes:
#- ./nginx-api.conf:/etc/nginx/sites-available/default.conf
- ./nginx-api.conf:/etc/nginx/conf.d/default.conf
database:
container_name: mongodb
image: mongo:latest
restart: always
volumes:
- "./js-alist-data/mongodb:/data/db"
Here is js-alist-client.dockerfile:
FROM nginx:alpine
# here i copy my static web
COPY ./client-vue/vue/dist/ /usr/share/nginx/html/
EXPOSE 80/tcp
Next, here is the nginx-api.conf:
server {
    listen 80;

    location / {
        root /usr/share/nginx/html/;
        index index.html index.htm;
    }

    location /api/ {
        proxy_pass http://localhost:5005/;
    }
}
If I access http://localhost:5005 directly, it works.
If I run my API, it adds data to MongoDB.
If I go to http://localhost:8082/, I can see the static web.
If I go to http://localhost:8082/api or http://localhost:8082/api/, I get a 404.
Also, I've noticed that if I change:
location / {
    root /usr/share/nginx/html/;
    index index.html index.htm;
}
to
location / {
    root /usr/share/nginx/html2/;
    index index.html index.htm;
}
I can still access the static web, even though that path doesn't exist. That leads me to believe the conf file is not enabled.
But I checked in the js-alist-client container (/etc/nginx # cat nginx.conf):
user nginx;
worker_processes auto;

error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}
It shows that everything in /etc/nginx/conf.d/ is included.
Now I don't know what is going on; it seems my conf file is not loading. What am I doing wrong?
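One way to confirm whether the mounted conf is actually being loaded is to ask nginx itself; a quick check, assuming the container name js-alist-client from the compose file above:

docker exec js-alist-client nginx -t                                 # validate the configuration
docker exec js-alist-client nginx -T | grep -A3 'location /api/'    # dump the config nginx actually loaded

If the location /api/ block does not appear in the nginx -T output, the file is not being included (for example because the volume path is wrong), which would explain why changing root has no visible effect.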
EDIT:
After some trial and error (not sure exactly what I'm doing), I saw this line elsewhere on the internet:
listen [::]:80;
I added this line, changed the suggested proxy_pass to the service name of the container, and got it working, but only partially: it only works for the root subpath of /api. Every other subpath, such as /api/images/something/else, is not working.
New nginx conf file:
server {
    listen 80;
    listen [::]:80;

    location / {
        root /usr/share/nginx/html/;
        index index.html index.htm;
    }

    location /api/ {
        proxy_pass http://js-alist-api:5005/;
    }
}
How do I get all subpaths to work?
EDIT2:
The next day I came in and now even this .conf is not working (the one posted in the EDIT). I have no idea why it sometimes works and sometimes doesn't. What a load of carp.
In a container context, localhost means the container itself. So when you write proxy_pass http://localhost:5005/;, nginx passes the request on to port 5005 in the client container.
Docker Compose creates a Docker network where the containers can talk to each other using their service names as host names. So you need to change the proxy_pass statement to
proxy_pass http://js-alist-api:5005/;
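Regarding the subpath issue from the EDIT: with a trailing slash on proxy_pass, nginx replaces the matched /api/ prefix before forwarding, so /api/images/x reaches the backend as /images/x; without the trailing slash the original URI (including /api/) is passed through unchanged. A sketch of both variants, assuming the js-alist-api service name from the compose file above:

location /api/ {
    # strips the /api/ prefix: /api/images/x -> /images/x on the backend
    proxy_pass http://js-alist-api:5005/;
}

# or, if the Express routes are defined under /api themselves:
location /api/ {
    # keeps the URI as-is: /api/images/x -> /api/images/x on the backend
    proxy_pass http://js-alist-api:5005;
}

Pick whichever variant matches how the API routes are defined.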

NGINX container as a proxy for other containers

I am trying to run a few containers on my Ubuntu server. These containers are:
DNS servers with bind9.
NTP server with cturra/ntp.
NGINX as a reverse proxy for the DNS and NTP services.
I have these containers in the same yaml file:
version: '3'
services:
  reverse-proxy-engine:
    image: nginx
    container_name: reverse-proxy-engine
    volumes:
      - ~/core/reverse-proxy/:/usr/share/nginx/
    ports:
      - "80:80"
      - "443:443"
      - "53:53"
      - "123:123/udp"
    depends_on:
      - "DNS-SRV"
      - "ntp"
  DNS-SRV:
    container_name: DNS-SRV
    image: ubuntu/bind9
    user: root
    environment:
      - TZ=UTC
    volumes:
      - ~/core/bind9/:/etc/bind/
  ntp:
    image: cturra/ntp
    container_name: ntp
    restart: always
    read_only: true
    tmpfs:
      - /etc/chrony:rw,mode=1750
      - /run/chrony:rw,mode=1750
      - /var/lib/chrony:rw,mode=1750
    environment:
      - NTP_SERVERS=time.cloudflare.com
      - LOG_LEVEL=0
After running this yaml file, the containers are created and I see the ports mapped correctly:
admin#main-srv:~/core/yamls$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4720bae2a44c nginx "/docker-entrypoint.…" 5 seconds ago Up 4 seconds 0.0.0.0:53->53/tcp, 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:123->123/udp reverse-proxy-engine
1681814f651e cturra/ntp "/bin/sh /opt/startu…" 6 seconds ago Up 5 seconds (health: starting) 123/udp ntp
dde2f9094b45 ubuntu/bind9 "docker-entrypoint.sh" 6 seconds ago Up 5 seconds 53/tcp DNS-SRV
I am able to access the nginx web page in the browser on port 80 with <UBUNTU_SERVER_IP:80>, but I'm unable to use that same IP to resolve DNS or NTP from elsewhere on the network; from within the container network it works.
So I think the NGINX ports are exposed to the Ubuntu server, but the DNS and NTP ports are not reachable by NGINX. Would that be correct? What am I missing?
Below is my nginx configuration file:
events {
    worker_connections 1024;
}

stream {
    upstream dns_servers {
        server DNS-SRV:53;
    }
    upstream ntp_server {
        server ntp:123;
    }

    server {
        listen 53 udp;
        listen 53; #tcp
        proxy_pass dns_servers;
        error_log /var/log/nginx/dns.log info;
        proxy_responses 1;
        proxy_timeout 1s;
    }

    server {
        listen 123 udp;
        listen 123; #tcp
        proxy_pass ntp_server;
        error_log /var/log/nginx/ntp.log info;
        proxy_responses 1;
        proxy_timeout 1s;
    }
}
So far it seems logical to me. Any ideas?
I think that's because you don't set a hostname for the bind and ntp containers. I use the configuration below and it works:
version: '3'
services:
  reverse-proxy-engine:
    image: nginx
    container_name: reverse-proxy-engine
    volumes:
      - ~/core/reverse-proxy/:/usr/share/nginx/
      - $PWD/nginx.conf:/etc/nginx/nginx.conf
    ports:
      - "80:80"
      - "443:443"
      - "53:53"
      - "123:123/udp"
    depends_on:
      - "DNS-SRV"
      - "ntp"
  DNS-SRV:
    container_name: DNS-SRV
    hostname: DNS-SRV
    image: ubuntu/bind9
    user: root
    environment:
      - TZ=UTC
    volumes:
      - ~/core/bind9/:/etc/bind/
  ntp:
    image: cturra/ntp
    container_name: ntp
    hostname: ntp
    restart: always
    read_only: true
    tmpfs:
      - /etc/chrony:rw,mode=1750
      - /run/chrony:rw,mode=1750
      - /var/lib/chrony:rw,mode=1750
    environment:
      - NTP_SERVERS=time.cloudflare.com
      - LOG_LEVEL=0
In the configuration above I add a hostname for the bind and ntp containers; I also mount an nginx configuration and replace the default one.
Below is the nginx.conf configuration:
user nginx;
worker_processes auto;

error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}

stream {
    upstream dns_servers {
        server DNS-SRV:53;
    }
    upstream ntp_server {
        server ntp:123;
    }

    server {
        listen 53 udp;
        listen 53; #tcp
        proxy_pass dns_servers;
        error_log /var/log/nginx/dns.log info;
        proxy_responses 1;
        proxy_timeout 1s;
    }

    server {
        listen 123 udp;
        listen 123; #tcp
        proxy_pass ntp_server;
        error_log /var/log/nginx/ntp.log info;
        proxy_responses 1;
        proxy_timeout 1s;
    }
}
Note: make sure the ports you bind (80, 443, 53, 123) are not already used by another application on the host.
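A couple of quick checks, assuming standard tooling on the Ubuntu host (dig comes from the dnsutils package and may need to be installed):

# confirm nothing else on the host already holds the ports before starting the stack
sudo ss -tulpn | grep -E ':53|:80|:123|:443'

# test DNS resolution through the nginx stream proxy from another machine on the network
dig @<UBUNTU_SERVER_IP> example.com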

Connect two wordpress containers with same NGINX docker

I use nginx in a Docker container to serve my two WordPress websites, which are dockerized too.
I can set up one website with the following settings:
In docker-compose.yml
nginx:
  image: nginx:alpine
  volumes:
    - ./web_ndnb_prod/src:/var/www/html
    - ./nginx/conf.d:/etc/nginx/conf.d:ro
  depends_on:
    - web_ndnb_test
    - web_ndnb_prod
In my NGINX conf file located in /nginx/conf.d
server {
    [...]
    root /var/www/html/;
    [...]
}
However, to add a second website I tried to change the root, and the websites now return a 404.
In docker-compose.yml
nginx:
  image: nginx:alpine
  volumes:
    - ./web_ndnb_prod/src:/var/www/web_ndnb_prod
    - ./web_ndnb_test/src:/var/www/web_ndnb_test
    - ./nginx/conf.d:/etc/nginx/conf.d:ro
  depends_on:
    - web_ndnb_test
    - web_ndnb_prod
In one of the 2 NGINX conf files
server {
    [...]
    root /var/www/web_ndnb_prod/;
    [...]
}
If I execute
sudo docker exec -ti nginx ls /var/www/web_ndnb_prod
it outputs the WordPress files correctly.
Why does nginx not find them?
Edit 1
The main nginx.conf file is
worker_processes auto;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}
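One thing worth checking, since both site configs are loaded from the same conf.d directory: if the two server blocks listen on port 80 with the same (or no) server_name, nginx routes every request to the default server, which is the first one it finds, so edits to the second file appear to have no effect. A sketch of two distinct server blocks (the host names here are placeholders, not taken from the question):

server {
    listen 80;
    server_name prod.example.com;
    root /var/www/web_ndnb_prod/;
    [...]
}

server {
    listen 80;
    server_name test.example.com;
    root /var/www/web_ndnb_test/;
    [...]
}

Dumping the configuration nginx actually loaded (docker exec nginx nginx -T) shows which server blocks are active and in what order.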

Docker Compose Host Network Access and Reverse Proxy

I have a situation where I have created a microservice that can change the network settings of the cat5 ethernet ports on the running server. To do this it needs to run with network_mode: host. The microservice exposes an HTTP REST API that I would like to have behind my nginx reverse proxy, but since nginx uses a bridge network, I cannot seem to access the network_utilities service (see the docker-compose file below). Any suggestions on how to make this work?
Here is my condensed docker-compose file:
version: '3.3'
services:
  nginx:
    image: nginx:stable
    container_name: production_nginx
    restart: unless-stopped
    ports:
      - 80:80
    depends_on:
      - smart-mobile-server
      - network_utilities
    volumes:
      - ./config/front-end-gui:/var/www/html
      - ./config/nginx:/etc/nginx
    networks:
      - smart-mobile-loopback
  smart-mobile-server:
    container_name: smart-mobile-rest-api
    image: smartmobile_server:latest
    build:
      context: .
      dockerfile: main.Dockerfile
    environment:
      NODE_ENV: production
    command: sh -c "pm2 start --env production && pm2 logs all"
    depends_on:
      - 'postgres'
    restart: unless-stopped
    networks:
      - smart-mobile-loopback
    volumes:
      - ~/server:/usr/app/dist/express-server/uploads
      - ~/server/logs:/usr/app/logs
  network_utilities:
    image: smartgear/network-utilities-service:latest
    network_mode: host
    environment:
      NODE_ENV: production
      REST_API_PORT: '64000'
    privileged: true
networks:
  smart-mobile-loopback:
    driver: bridge
nginx.conf
worker_processes 2;

events {
    # Connections per worker process
    worker_connections 1024;
    # Turning epolling on is a handy tuning mechanism to use more efficient connection handling models.
    use epoll;
    # We turn off accept_mutex for speed, because we don’t mind the wasted resources at low connection request counts.
    accept_mutex off;
}

http {
    upstream main_server {
        # least_conn Specifies that a group should use a load balancing method where a request is
        # passed to the server with the least number of active connections,
        # taking into account weights of servers. If there are several such
        # servers, they are tried in turn using a weighted round-robin balancing method.
        ip_hash;
        # These are references to our backend containers, facilitated by
        # Compose, as defined in docker-compose.yml
        server smart-mobile-server:10010;
    }

    upstream network_utilities {
        least_conn;
        server 127.0.0.1:64000;
    }

    server {
        # GZIP SETTINGS FOR LARGE FILES
        gzip on;
        gzip_http_version 1.0;
        gzip_comp_level 6;
        gzip_min_length 0;
        gzip_buffers 16 8k;
        gzip_proxied any;
        gzip_types text/plain text/css text/xml text/javascript application/xml application/xml+rss application/javascript application/json;
        gzip_disable "MSIE [1-6]\.";
        gzip_vary on;

        include /etc/nginx/mime.types;

        ## SECURITY SETTINGS
        # don't send the nginx version number in error pages and Server header
        server_tokens off;
        # when serving user-supplied content, include a X-Content-Type-Options: nosniff header along with the Content-Type: header,
        # to disable content-type sniffing on some browsers.
        add_header X-Content-Type-Options nosniff;

        listen 80;

        location / {
            # This would be the directory where your React app's static files are stored at
            root /var/www/html/;
            index index.html;
            try_files $uri /index.html;
        }

        location /api/documentation/network-utilities {
            proxy_pass http://network_utilities/api/documentation/network-utilities;
            proxy_set_header Host $host;
        }

        location /api/v1/network-utilities/ {
            proxy_pass http://network_utilities/;
            proxy_set_header Host $host;
        }

        location /api/ {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_pass http://main_server/api/;
        }
    }
}
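Because network_utilities runs with network_mode: host, it is not attached to the smart-mobile-loopback bridge network, so the nginx container cannot reach it by service name, and 127.0.0.1:64000 inside the nginx container points at nginx itself. One possible sketch (assuming Docker Engine 20.10+ for the host-gateway mapping, and that the microservice listens on the host's port 64000):

# docker-compose.yml (nginx service only)
nginx:
  image: nginx:stable
  extra_hosts:
    - "host.docker.internal:host-gateway"

# nginx.conf
upstream network_utilities {
    server host.docker.internal:64000;
}

This routes the proxied requests out of the bridge network to the host, where the host-networked microservice is listening.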

How to setup nginx when using docker-compose

Below is my nginx.conf
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        access_log /var/log/nginx/access.log main;

        location /beta/ {
            proxy_pass http://localhost:9001;
        }
        location /qa/ {
            proxy_pass http://localhost:9002;
        }
        location /alpha/ {
            proxy_pass http://localhost:9003;
        }
        location / {
            proxy_pass http://www.google.com;
        }
    }
}
and below is my docker-compose.yml
version: '3'
services:
  Reverse-proxy:
    image: nginx
    ports:
      - 80:80
    volumes:
      - /nginx.conf:/etc/nginx/nginx.conf
    restart: always
  GQLbeta:
    image: gql-beta
    ports:
      - 9001:80
    restart: always
  GQLqa:
    image: gql-qa
    ports:
      - 9002:80
    restart: always
  GQLalpha:
    image: gql-alpha
    ports:
      - 9003:80
    restart: always
When I run docker-compose up -d, all containers run fine.
Then I went to localhost:80 in my browser, and it did not show the Google page I expected.
And when I went to localhost/beta, it showed
502 Bad Gateway
where I expected it to go to localhost:9001.
Why did this happen? Am I missing something in the setup?
localhost in the Docker container is the container itself, so you should give names to your app containers and reference them as upstreams; that will fix your 502. For the default location, try this:
location / {
    return 301 http://google.com;
}
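For the /beta, /qa, and /alpha locations, the same reasoning applies: point proxy_pass at the Compose service names instead of localhost, using the container port (80), not the published host port. A sketch, assuming the service names from the compose file above (the Reverse-proxy service must share a Compose network with the GQL services, which it does by default):

location /beta/ {
    proxy_pass http://GQLbeta:80/;
}
location /qa/ {
    proxy_pass http://GQLqa:80/;
}
location /alpha/ {
    proxy_pass http://GQLalpha:80/;
}

Note that with the trailing slash the /beta/ prefix is stripped before the request reaches the upstream; drop the trailing slash if the apps expect the prefix.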
