504 on long upload with Docker Nginx - docker

I spent so much time trying to sort this out, it's ridiculous...
I need to upload files of up to 15 GB.
nginx.conf:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
sendfile on;
keepalive_timeout 800s;
include /etc/nginx/conf.d/*.conf;
}
Custom nginx conf
server {
listen 80;
listen [::]:80;
server_name example.com;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
if ($http_x_forwarded_proto != "https") {
rewrite ^(.*)$ https://$server_name$request_uri permanent;
}
root /var/www/webroot;
index index.php;
client_body_timeout 800s;
client_header_timeout 800s;
client_max_body_size 15000m;
client_body_temp_path /store/nginx-tmp 1 2;
#fastcgi_buffers 8 1600k;
#fastcgi_buffer_size 3200k;
add_header X-Frame-Options sameorigin;
location / {
try_files $uri $uri/ /index.php?$args;
}
location ~ \.php$ {
try_files $uri =404;
include fastcgi_params;
fastcgi_pass web:9000;
fastcgi_index index.php;
fastcgi_intercept_errors on;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_connect_timeout 800s;
fastcgi_read_timeout 800s;
fastcgi_send_timeout 800s;
#proxy_connect_timeout 800s;
#proxy_send_timeout 800s;
#proxy_read_timeout 800s;
#send_timeout 800s;
}
}
The commented lines are additional configs I tried.
I added client_body_temp_path /store/nginx-tmp 1 2; so that Nginx uses a mounted volume to store temp files if needed, because the EC2 instance only has an 8 GB disk.
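As a point of reference, the directive's two trailing numbers define a subdirectory hierarchy for the temp files; a minimal sketch with the same path (the levels are optional):

```nginx
server {
    # Buffer large request bodies on the mounted volume instead of the
    # compiled-in default (/var/cache/nginx/client_temp on this image);
    # "1 2" creates one- and two-character subdirectory levels to
    # spread temp files across directories.
    client_body_temp_path /store/nginx-tmp 1 2;
}
```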
Relevant PHP config:
upload_max_filesize = 15000M
post_max_size = 15000M
max_execution_time = 0
request_terminate_timeout = 800s
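Worth noting: request_terminate_timeout is a PHP-FPM pool directive rather than a php.ini setting, so these four values normally live in two different files. A sketch assuming a stock FPM layout (file names and paths vary between images):

```ini
; php.ini
upload_max_filesize = 15000M
post_max_size = 15000M
max_execution_time = 0

; www.conf (PHP-FPM pool configuration)
request_terminate_timeout = 800s
```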
I am running a CakePHP application, in case that helps crack this case.
When I upload a large file, I get a 504 after 2 to 2.5 minutes. The upload is incomplete ($_FILES['file']['error'] = 3, UPLOAD_ERR_PARTIAL).
There is nothing in the PHP logs.
The Nginx logs only have this:
2020/04/17 06:38:04 [warn] 479#479: *943 a client request body is buffered to a temporary file /var/cache/nginx/client_temp/0000000007, client: 172.31.20.155, server: example.com, request: "POST /materials/add/38999 HTTP/1.1", host: "example.com", referrer: "https://example.com/jobs/view/38999"
Can anyone save my sanity here?
Thanks,

Related

Cannot upload a larger file nginx

I have configured Nginx with client_max_body_size 100M; but I am still facing an error while uploading a 25 MB file.
The nginx.conf is as follows:
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
The nginx.conf has an included conf file, which is as follows:
client_max_body_size 100M;
upstream third-party {
server cognitiveview-thirdparty;
}
server {
listen 443 ssl;
ssl_certificate <key1>;
ssl_certificate_key <key2>;
server_name <>;
server_tokens off;
location / {
proxy_pass http://;
proxy_set_header Host "host.com";
proxy_connect_timeout 1500s;
proxy_send_timeout 1500s;
proxy_read_timeout 1500s;
send_timeout 1500s;
proxy_ssl_server_name on;
client_max_body_size 100M;
}
}
server {
listen 80;
server_name <>;
return 301 https://$host$request_uri;
client_max_body_size 100M;
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
client_max_body_size 100M;
}
location /uploads {
client_max_body_size 100M;
}
}
I tried multiple answers which suggest setting client_max_body_size in the http, server, and location contexts. As per my understanding, I have updated the values accordingly, but I am still not able to upload a larger file.
The file is being uploaded through a React application, which goes via Nginx to a backend.
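For reference, client_max_body_size follows normal Nginx inheritance: a value set in http applies to every server and location below it unless a more specific context overrides it. A minimal sketch:

```nginx
http {
    client_max_body_size 100M;          # default for all servers

    server {
        listen 443 ssl;
        client_max_body_size 100M;      # overrides the http-level value

        location /uploads {
            client_max_body_size 100M;  # overrides the server-level value
        }
    }
}
```

Exceeding the effective limit produces a 413 (Request Entity Too Large) from Nginx, which can help distinguish this limit from an error thrown by the backend.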

Welcome to Nginx page displays instead of actual webpage

I am working on a Ruby on Rails project on an Ubuntu server. Whenever I try and access the app, I am always greeted with:
Welcome to nginx!
If you see this page, the nginx web server is successfully installed and working. Further configuration is required.
Here is my code from the "sites-enabled" directory:
upstream unicorn_site {
server unix:/tmp/unicorn.site.sock fail_timeout=0;
}
server {
listen 80;
client_max_body_size 4G;
keepalive_timeout 10;
error_page 500 502 504 /500.html;
error_page 503 @503;
server_name http://[ip_address];
root /data/site/current/public;
try_files $uri/index.html $uri @unicorn_site;
location @unicorn_site {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://unicorn_site;
# limit_req zone=one;
access_log /var/log/nginx/site.access.log;
error_log /var/log/nginx/site.error.log;
}
location ^~ /assets/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
location = /50x.html {
root html;
}
location = /404.html {
root html;
}
location @503 {
error_page 405 = /system/maintenance.html;
if (-f $document_root/system/maintenance.html) {
rewrite ^(.*)$ /system/maintenance.html break;
}
rewrite ^(.*)$ /503.html break;
}
if ($request_method !~ ^(GET|HEAD|PUT|PATCH|POST|DELETE|OPTIONS)){
return 405;
}
if (-f $document_root/system/maintenance.html) {
return 503;
}
}
I am not sure what the problem is, as the app seems to have everything it needs to be working correctly. I can provide any part of the code anyone needs. Help is much appreciated.
EDIT: I am also always getting this error when I attempt to load the page:
2018/06/15 15:39:10 [warn] 15280#0: server name "http://[ip_address]" has suspicious symbols in /etc/nginx/sites-enabled/site:14
EDIT 2: Here is nginx.conf
user www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
worker_connections 768;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
Move 'try_files $uri/index.html $uri @unicorn_site;' under location /:
location / {
try_files $uri/index.html $uri @unicorn_site;
}
The server_name http://[ip_address]; looks wrong. Try server_name example.com; instead.
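If the server should answer on a bare IP address rather than a hostname, the scheme prefix has to go as well; a sketch with an illustrative address:

```nginx
server {
    listen 80;
    # server_name takes a hostname or a bare address, never a URL;
    # "http://..." triggers the "suspicious symbols" warning.
    server_name 203.0.113.10;
}
```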
Have you configured the ruby_passenger line?
Follow this guide carefully and your deployment should be successful.
Choose your Ubuntu version there and proceed:
https://gorails.com/deploy/ubuntu/16.04

Successfully installed magento2 but the admin page throwing not found error

I created a docker container for magento2, and the container was created successfully.
I installed magento2 successfully, following these steps in the CLI:
1. ./magento setup:config:set --db-host=172.17.0.3 --db-name=mydb --db-user=admin --db-password=password
Database details from another linked mysql container
2. ./magento setup:install --admin-user='new-admin' --admin-password='!admin123!' --admin-email='info@domain.com' --admin-firstname='Jon' --admin-lastname='Doe' --use-rewrites=1
Initially I missed --use-rewrites but added it afterwards.
This successfully installs magento2 and displays the success message too. Then, opening the page in the browser, I had the following error, which I fixed by changing the permission:
Warning: file_put_contents(/usr/html/var/cache//mage-tags/mage---196_CONFIG): failed to open stream: Permission denied in /usr/html/vendor/colinmollenhour/cache-backend-file/File.php on line 663
Now when I open the admin URL, the link automatically gets changed and the error message appears.
This is the error log from docker logs containername:
nginx: [emerg] "location" directive is not allowed here in /etc/nginx/sites-enabled/magento.conf:191
So it must be an error in the nginx setup.
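For context, the [emerg] message means a location block ended up outside any server block, typically because a stray or missing brace terminated the server block earlier than intended. The required nesting, as a minimal sketch:

```nginx
http {
    server {
        listen 80;
        location / {             # valid: inside a server block
            try_files $uri $uri/ /index.php$is_args$args;
        }
    }
    # location / { ... }         # invalid here: outside any server block
}
```

Counting braces around the line number reported in the error (magento.conf:191) is usually the fastest way to find the mismatch.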
/etc/nginx/sites-enabled/magento.conf
upstream fastcgi_backend {
server unix:/run/php7-fpm.sock;
}
server {
listen 80;
server_name localhost;
set $MAGE_ROOT /usr/html;
root $MAGE_ROOT/pub;
index index.php index.html;
autoindex off;
charset UTF-8;
error_page 404 403 = /errors/404.php;
#add_header "X-UA-Compatible" "IE=Edge";
# PHP entry point for setup application
location ~* ^/setup($|/) {
root $MAGE_ROOT;
location ~ ^/setup/index.php {
fastcgi_split_path_info ^(.+?\.php)(/.*)$;
fastcgi_pass fastcgi_backend;
fastcgi_param PHP_FLAG "session.auto_start=off \n suhosin.session.cryptua=off";
fastcgi_param PHP_VALUE "memory_limit=756M \n max_execution_time=600";
fastcgi_read_timeout 600s;
fastcgi_connect_timeout 600s;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
location ~ ^/setup/(?!pub/). {
deny all;
}
location ~ ^/setup/pub/ {
add_header X-Frame-Options "SAMEORIGIN";
}
}
# PHP entry point for update application
location ~* ^/update($|/) {
root $MAGE_ROOT;
location ~ ^/update/index.php {
fastcgi_split_path_info ^(/update/index.php)(/.+)$;
fastcgi_pass fastcgi_backend;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
include fastcgi_params;
}
# Deny everything but index.php
location ~ ^/update/(?!pub/). {
deny all;
}
location ~ ^/update/pub/ {
add_header X-Frame-Options "SAMEORIGIN";
}
}
location / {
try_files $uri $uri/ /index.php$is_args$args;
}
location /pub/ {
location ~ ^/pub/media/(downloadable|customer|import|theme_customization/.*\.xml) {
deny all;
}
alias $MAGE_ROOT/pub/;
add_header X-Frame-Options "SAMEORIGIN";
}
location /static/ {
# Uncomment the following line in production mode
# expires max;
# Remove signature of the static files that is used to overcome the browser cache
location ~ ^/static/version {
rewrite ^/static/(version\d*/)?(.*)$ /static/$2 last;
}
location ~* \.(ico|jpg|jpeg|png|gif|svg|js|css|swf|eot|ttf|otf|woff|woff2)$ {
add_header Cache-Control "public";
add_header X-Frame-Options "SAMEORIGIN";
expires +1y;
if (!-f $request_filename) {
rewrite ^/static/?(.*)$ /static.php?resource=$1 last;
}
}
location ~* \.(zip|gz|gzip|bz2|csv|xml)$ {
add_header Cache-Control "no-store";
add_header X-Frame-Options "SAMEORIGIN";
expires off;
if (!-f $request_filename) {
rewrite ^/static/?(.*)$ /static.php?resource=$1 last;
}
}
if (!-f $request_filename) {
rewrite ^/static/?(.*)$ /static.php?resource=$1 last;
}
add_header X-Frame-Options "SAMEORIGIN";
}
location /media/ {
try_files $uri $uri/ /get.php$is_args$args;
location ~ ^/media/theme_customization/.*\.xml {
deny all;
}
location ~* \.(ico|jpg|jpeg|png|gif|svg|js|css|swf|eot|ttf|otf|woff|woff2)$ {
add_header Cache-Control "public";
add_header X-Frame-Options "SAMEORIGIN";
expires +1y;
try_files $uri $uri/ /get.php$is_args$args;
}
location ~* \.(zip|gz|gzip|bz2|csv|xml)$ {
add_header Cache-Control "no-store";
add_header X-Frame-Options "SAMEORIGIN";
expires off;
try_files $uri $uri/ /get.php$is_args$args;
}
add_header X-Frame-Options "SAMEORIGIN";
}
location /media/customer/ {
deny all;
}
location /media/downloadable/ {
deny all;
}
location /media/import/ {
deny all;
}
# PHP entry point for main application
location ~ (index|get|static|report|404|503|health_check)\.php$ {
try_files $uri =404;
fastcgi_pass fastcgi_backend;
fastcgi_buffers 1024 4k;
fastcgi_param PHP_FLAG "session.auto_start=off \n suhosin.session.cryptua=off";
fastcgi_param PHP_VALUE "memory_limit=756M \n max_execution_time=18000";
fastcgi_read_timeout 600s;
fastcgi_connect_timeout 600s;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
gzip on;
gzip_disable "msie6";
gzip_comp_level 6;
gzip_min_length 1100;
gzip_buffers 16 8k;
gzip_proxied any;
gzip_types
text/plain
text/css
text/js
text/xml
text/javascript
application/javascript
application/x-javascript
application/json
application/xml
application/xml+rss
image/svg+xml;
gzip_vary on;
# Banned locations (only reached if the earlier PHP entry point regexes don't match)
location ~* (\.php$|\.htaccess$|\.git) {
deny all;
}
}
This is the /etc/nginx/nginx.conf
user docker;
worker_processes 4;
pid /run/nginx.pid;
daemon off;
events {
worker_connections 768;
use epoll;
multi_accept on;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_requests 10;
keepalive_timeout 300;
types_hash_max_size 2048;
client_body_buffer_size 128K;
client_header_buffer_size 1k;
client_body_temp_path /tmp 1 2;
client_max_body_size 10m;
large_client_header_buffers 4 4k;
output_buffers 1 32k;
postpone_output 1460;
include /etc/nginx/mime.types;
default_type application/octet-stream;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
gzip on;
gzip_disable "msie6";
#therefore there should be default named config not default.conf
#so made a change
include /etc/nginx/sites-enabled/*.conf;
}
The main page is running
Give file permission for /var/html/{magento_folder/file giving issue} as:
sudo chown -R www-data:www-data /var/html/{magento_folder/file giving issue}

Unicorn failed adding listener unicorn.sock

I'm trying to run my Rails app on a VPS, but I get a fatal error when adding a listener to the Unicorn socket.
I'm using Nginx + ngx_pagespeed, foreman, Unicorn, and Capistrano.
In nginx.error.log
2014/10/06 22:18:23 [crit] 268#0: *5 connect()
to unix:/var/www/apps/APP_NAME/socket/.unicorn.sock failed
(2: No such file or directory) while connecting to upstream,
client: XXX.XXX.XXX.XXX,
server: _,
request: "GET / HTTP/1.1",
upstream: "http://unix:/var/www/apps/APP_NAME/socket/.unicorn.sock:/",
host: "DOMAIN_NAME"
And in unicorn.stderr.log
F, [2014-10-07T20:39:49.320008 #24012]
FATAL -- : error adding listener addr=/var/sockets/unicorn.APP_NAME.sock
My unicorn.rb
worker_processes 2
working_directory '/var/www/apps/APP_NAME/current' # available in 0.94.0+
listen '/var/www/apps/APP_NAME/socket/.unicorn.sock', :backlog => 64
listen 8080, :tcp_nopush => true
timeout 30
pid '/var/www/apps/APP_NAME/run/unicorn.pid'
stderr_path '/var/www/apps/APP_NAME/log/unicorn.stderr.log'
stdout_path '/var/www/apps/APP_NAME/log/unicorn.stdout.log'
preload_app true
GC.respond_to?(:copy_on_write_friendly=) and GC.copy_on_write_friendly = true
check_client_connection false
before_fork do |server, worker|
defined?(ActiveRecord::Base) and
ActiveRecord::Base.connection.disconnect!
old_pid = "#{server.config[:pid]}.oldbin"
if old_pid != server.pid
begin
sig = (worker.nr + 1) >= server.worker_processes ? :QUIT : :TTOU
Process.kill(sig, File.read(old_pid).to_i)
rescue Errno::ENOENT, Errno::ESRCH
end
end
end
after_fork do |server, worker|
defined?(ActiveRecord::Base) and
ActiveRecord::Base.establish_connection
end
And nginx.conf
user nginx web;
pid /var/run/nginx.pid;
error_log /var/www/log/nginx.error.log;
events {
worker_connections 1024;
accept_mutex off;
use epoll;
}
http {
include mime.types;
types_hash_max_size 2048;
server_names_hash_bucket_size 64;
default_type application/octet-stream;
access_log /var/www/log/nginx.access.log combined;
sendfile on;
tcp_nopush on;
tcp_nodelay off;
gzip on;
gzip_http_version 1.0;
gzip_proxied any;
gzip_min_length 0;
gzip_vary on;
gzip_disable "MSIE [1-6]\.";
gzip_proxied expired no-cache no-store private auth;
gzip_comp_level 9;
gzip_types text/plain text/xml text/css
text/comma-separated-values
text/javascript application/x-javascript
application/atom+xml;
upstream app_server {
server unix:/var/www/apps/APP_NAME/socket/.unicorn.sock fail_timeout=0;
}
server {
pagespeed on;
pagespeed FileCachePath /var/ngx_pagespeed_cache;
location ~ "\.pagespeed\.([a-z]\.)?[a-z]{2}\.[^.]{10}\.[^.]+" {
add_header "" "";
}
location ~ "^/ngx_pagespeed_static/" { }
location ~ "^/ngx_pagespeed_beacon$" { }
location /ngx_pagespeed_statistics {
allow 127.0.0.1; allow 5.228.169.73; deny all;
}
location /ngx_pagespeed_global_statistics {
allow 127.0.0.1; allow 5.228.169.73; deny all;
}
pagespeed MessageBufferSize 100000;
location /ngx_pagespeed_message {
allow 127.0.0.1; allow 5.228.169.73; deny all;
}
location /pagespeed_console {
allow 127.0.0.1; allow 5.228.169.73; deny all;
}
charset utf-8;
listen 80 default deferred; # for Linux
client_max_body_size 4G;
server_name _;
keepalive_timeout 5;
root /var/www/apps/APP_NAME/current/public;
try_files $uri/index.html $uri.html $uri @app;
location ~ ^/(assets)/ {
root /var/www/apps/APP_NAME/current/public;
expires max;
add_header Cache-Control public;
}
location @app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://app_server;
}
error_page 500 502 503 504 /500.html;
location = /500.html {
root /var/www/apps/APP_NAME/current/public;
}
}
}
Sorry if there is a lot of useless info in my question. This is my first deployment on a VPS, so I'm not good at it yet. Of course, I've changed APP_NAME to my actual application name.
Thanks for any help!

Missing Content-Length header when using Nginx + Gzip + Unicorn

I don't know why the HTTP response is missing the Content-Length header when I use gzip in nginx. I'm really stuck, so please, somebody help me. Thank you so much!
This is my config file:
nginx.conf
user nobody nobody ;
worker_processes 8;
events {
worker_connections 1024;
accept_mutex on; # "on" if nginx worker_processes > 1
use epoll; # enable for Linux 2.6+
}
http {
include mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
upstream backend-unicorn {
server unix:/tmp/unicorn_app.sock fail_timeout=0;
#server localhost:5000;
}
#access_log logs/access.log main;
sendfile on;
keepalive_timeout 100;
gzip on;
gzip_static on;
gzip_proxied any;
gzip_http_version 1.1;
gzip_comp_level 6;
gzip_min_length 1100;
gzip_buffers 16 8k;
gzip_types text/plain application/zip text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript image$
server {
listen 8080;
server_name misago;
access_log /var/log/nginx/unicorn.access.log main;
client_max_body_size 64M;
location /uploads/ {
root /usr/local/rails_apps/me_management_tool/current/public/uploads;
break;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
}
}
I don't know why the HTTP response is missing the Content-Length header when I use gzip in nginx
Because in this case the length of the content is unknown at the moment the headers are sent: Nginx cannot know in advance how well the content will compress.
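Concretely, when Nginx compresses on the fly it drops Content-Length and streams the body with Transfer-Encoding: chunked (on HTTP/1.1). Two common ways to get a Content-Length back, sketched below with illustrative location names: disable on-the-fly gzip where the header matters, or serve pre-compressed files via gzip_static, whose size is known up front:

```nginx
# Option 1: skip on-the-fly compression where Content-Length matters
location /uploads/ {
    gzip off;
}

# Option 2: serve pre-built .gz files; their size is known on disk,
# so Content-Length is sent (requires foo.js.gz next to foo.js)
location /assets/ {
    gzip_static on;
}
```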
Below is a performant Nginx solution that adds an X-File-Size header to gzipped responses. Full discussion here: https://github.com/AnthumChris/fetch-progress-indicators/issues/13
location / {
## Nginx Lua module must be installed https://docs.nginx.com/nginx/admin-guide/dynamic-modules/lua/
## https://github.com/openresty/lua-nginx-module#header_filter_by_lua
header_filter_by_lua_block {
function file_len(file_name)
local file = io.open(file_name, "r")
if (file == nil) then return -1 end
local size = file:seek("end")
file:close()
return size
end
ngx.header["X-File-Size"] = file_len(ngx.var.request_filename);
}
}