I am getting a 400 Bad Request (Request Header Or Cookie Too Large) error from nginx with my Rails app. Restarting the browser fixes the issue. I am only storing a string id in my cookie, so it should be tiny.
Where can I find the nginx error logs? I looked in /opt/nginx/logs/error.log (with nano), but it doesn't have anything related.
I tried setting the following, with no luck:
location / {
    large_client_header_buffers 4 32k;
    proxy_buffer_size 32k;
}
My nginx.conf:
#user nobody;
worker_processes 1;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;
events {
worker_connections 1024;
}
http {
passenger_root /home/app/.rvm/gems/ruby-1.9.3-p392/gems/passenger-3.0.19;
passenger_ruby /home/app/.rvm/wrappers/ruby-1.9.3-p392/ruby;
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
client_max_body_size 20M;
server {
listen 80;
server_name localhost;
root /home/app/myapp/current/public;
passenger_enabled on;
#charset koi8-r;
#access_log logs/host.access.log main;
# location / {
# large_client_header_buffers 4 32k;
# proxy_buffer_size 32k;
# }
# location / {
# root html;
# index index.html index.htm;
# client_max_body_size 4M;
# client_body_buffer_size 128k;
# }
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# include fastcgi_params;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
# another virtual host using mix of IP-, name-, and port-based configuration
#
#server {
# listen 8000;
# listen somename:8080;
# server_name somename alias another.alias;
# location / {
# root html;
# index index.html index.htm;
# }
#}
# HTTPS server
#
#server {
# listen 443;
# server_name localhost;
# ssl on;
# ssl_certificate cert.pem;
# ssl_certificate_key cert.key;
# ssl_session_timeout 5m;
# ssl_protocols SSLv2 SSLv3 TLSv1;
# ssl_ciphers HIGH:!aNULL:!MD5;
# ssl_prefer_server_ciphers on;
# location / {
# root html;
# index index.html index.htm;
# }
#}
}
Here's my code that stores the cookie, and a screenshot of the cookies in Firebug. When I checked the stored session with Firebug, I found that New Relic and jQuery are storing cookies too; could this be why the cookie size is exceeded?
def current_company
  return if current_user.nil?
  session[:current_company_id] = current_user.companies.first.id if session[:current_company_id].blank?
  @current_company ||= Company.find(session[:current_company_id])
end
It's just what the error says - Request Header Or Cookie Too Large. One of your headers is really big, and nginx is rejecting it.
You're on the right track with large_client_header_buffers. If you check the docs, you'll find it's only valid in http or server contexts. Bump it up to a server block and it will work.
server {
    # ...
    large_client_header_buffers 4 32k;
    # ...
}
By the way, the defaults are 4 buffers of 8k each, so the offending header must be the one that's over 8192 bytes. In your case, all those cookies (which are combined into a single Cookie header) are well over the limit. Mixpanel cookies in particular get quite large.
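As for the part about the error log: nginx logs this condition at the info level (a message along the lines of "client sent too long header line"), which the default error level filters out, so you won't see it until you turn the level up. A minimal sketch, reusing the log path from the question:

# in the main (top-level) context of nginx.conf; reload nginx after changing it
error_log /opt/nginx/logs/error.log info;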
Fixed by adding
server {
    # ...
    large_client_header_buffers 4 16k;
    # ...
}
With respect to the answers above, client_header_buffer_size also needs to be mentioned. It sets the buffer used for ordinary headers, while large_client_header_buffers only kicks in for a request line or header field that does not fit into it:
http {
    # ...
    client_body_buffer_size 32k;
    client_header_buffer_size 8k;
    large_client_header_buffers 8 64k;
    # ...
}
I got this error roughly once every 600 requests while web scraping. At first I assumed it was a limit on a proxy server or on the remote nginx, which is not under my control. I tried deleting all cookies and the other browser-side fixes that related posts usually suggest, but no luck.
In my case, the mistake was that I kept adding the same header to the httpClient object over and over, so the request headers grew with every request. After I defined a single global httpClient object and added the header only once, the problem went away. It was a small mistake, but instead of trying to understand the problem I jumped straight to Stack Overflow :) Sometimes we should try to understand the problem on our own first.
In my case (Cloud Foundry / nginx buildpack) the cause was the directive proxy_set_header Host ...; after removing this line, nginx became stable:
http {
    server {
        location /your-context/ {
            # remove it: # proxy_set_header Host myapp.mycfdomain.cloud;
        }
    }
}
This is a follow-up to "Turn off https in Docker", with some more information. I still haven't figured it out.
I asked in the Docker Slack group and they are convinced it's coming from the nginx or Traefik config.
In Firefox I get an SSL_ERROR_UNRECOGNIZED_NAME_ALERT error, and in Chrome it's the similar ERR_SSL_UNRECOGNIZED_NAME_ALERT. I'm not finding much about either of those by searching.
My nginx config:
user nginx;
daemon off;
worker_processes auto;
error_log /proc/self/fd/2 debug;
events {
worker_connections 1024;
multi_accept on;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
fastcgi_buffers 16 32k;
fastcgi_buffer_size 32k;
fastcgi_intercept_errors on;
fastcgi_read_timeout 900;
include fastcgi_params;
access_log /proc/self/fd/1;
port_in_redirect off;
send_timeout 600;
sendfile on;
client_body_timeout 600;
client_header_timeout 600;
client_max_body_size 256M;
client_body_buffer_size 16K;
client_header_buffer_size 4K;
large_client_header_buffers 8 16K;
keepalive_timeout 60;
keepalive_requests 100;
reset_timedout_connection off;
tcp_nodelay on;
tcp_nopush on;
server_tokens off;
upload_progress uploads 1m;
gzip on;
gzip_buffers 16 8k;
gzip_comp_level 2;
gzip_http_version 1.1;
gzip_min_length 20;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript image/x-icon application/vnd.ms-fonto
gzip_vary on;
gzip_proxied any;
gzip_disable msie6;
add_header X-XSS-Protection '1; mode=block';
add_header X-Frame-Options SAMEORIGIN;
add_header X-Content-Type-Options nosniff;
map $http_x_forwarded_proto $fastcgi_https {
default $https;
http '';
https on;
}
map $uri $no_slash_uri {
~^/(?<no_slash>.*)$ $no_slash;
}
upstream backend {
server php:9000;
}
include conf.d/*.conf;
}
My nginx.conf.default:
#user nobody;
worker_processes 1;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
#log_format main '$remote_addr - $remote_user [$time_local] "$request" '
# '$status $body_bytes_sent "$http_referer" '
# '"$http_user_agent" "$http_x_forwarded_for"';
#access_log logs/access.log main;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
#gzip on;
server {
listen 80;
server_name localhost;
#charset koi8-r;
#access_log logs/host.access.log main;
location / {
root html;
index index.html index.htm;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# include fastcgi_params;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
# another virtual host using mix of IP-, name-, and port-based configuration
#
#server {
# listen 8000;
# listen somename:8080;
# server_name somename alias another.alias;
# location / {
# root html;
# index index.html index.htm;
# }
#}
# HTTPS server
#
#server {
# listen 443 ssl;
# server_name localhost;
# ssl_certificate cert.pem;
# ssl_certificate_key cert.key;
# ssl_session_cache shared:SSL:1m;
# ssl_session_timeout 5m;
# ssl_ciphers HIGH:!aNULL:!MD5;
# ssl_prefer_server_ciphers on;
# location / {
# root html;
# index index.html index.htm;
# }
#}
}
My docker-compose.yml is unchanged from the previous question.
I've looked for anything resembling a Traefik config and can't find anything.
Things I've tried so far:
swapping things round inside the map $http_x_forwarded_proto $fastcgi_https i.e. default $http; http on; https '';
deleting that whole map block
removing the references to https in line 140 of docker-compose.yml
removing line 143 from docker-compose.yml
removing line 147 from docker-compose.yml
creating a self-signed certificate for localhost
sackcloth and ashes
I'm genuinely at a loss, any help appreciated.
After more tests from the OP, and other users' comments, it seemed that the redirection (HTTP to HTTPS) was occurring after nginx handled the request.
The OP also tested with a single index.html file and was not redirected to HTTPS, confirming that the redirection came from PHP (or at least not from nginx).
The next steps were to look into the Drupal configuration and/or the .htaccess configuration. The OP changed some Drupal settings (about redirections) and successfully got the Drupal setup page working over plain HTTP.
The best approach in such cases is always to pinpoint where the issue comes from (see the sketch after this list):
Make your nginx configuration minimal: serve a simple index.html
Clear the browser cache regularly: browsers sometimes cache the redirection
Check/remove the .htaccess file to see whether the behavior changes
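As a sketch of that first step (the paths here are placeholders, not taken from the poster's setup):

# minimal server block for isolating the redirect: no PHP, no maps, no proxying
server {
    listen 80;
    server_name localhost;

    location / {
        root /usr/share/nginx/html;   # any directory containing a plain index.html
        index index.html;
    }
}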
Finally, if nginx is "clean" and .htaccess doesn't seem to be the issue, the problem is most likely further downstream, i.e. in whatever nginx is forwarding the request to.
In "large" frameworks/CMSes like Drupal, WooCommerce or Laravel, redirection is usually handled easily through configuration files or DB settings.
When custom code handles the redirections, it will need debugging.
No matter what I do, I keep running into the problem where my site serves the default nginx page. I'm trying to dockerize my web server so that it can point to Home Assistant running in another container. I was able to get this to work when both were hosted on the same Raspberry Pi outside of containers, but not when both are running in containers.
I've attached the nginx.conf, Dockerfile and default.conf that I was using to start the environment up. I've spent the last two days looking for someone trying to do something similar, but I assume I'm making such a silly mistake that most people have been able to figure it out on their own.
nginx.conf:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
default.conf (/etc/nginx/conf.d/hass.conf)
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
# Update this line to be your domain
server_name nekohouse.ca;
# These shouldn't need to be changed
listen [::]:80 default_server ipv6only=off;
return 301 https://$host$request_uri;
}
server {
# Update this line to be your domain
server_name nekohouse.ca;
# Ensure these lines point to your SSL certificate and key
ssl_certificate fullchain.pem;
ssl_certificate_key privkey.pem;
# Use these lines instead if you created a self-signed certificate
# ssl_certificate /etc/nginx/ssl/cert.pem;
# ssl_certificate_key /etc/nginx/ssl/key.pem;
# Ensure this line points to your dhparams file
ssl_dhparam /etc/nginx/dhparams.pem;
# These shouldn't need to be changed
listen [::]:443 default_server ipv6only=off; # if your nginx version is >= 1.9.5 you can also add the "http2" flag here
add_header Strict-Transport-Security "max-age=31536000; includeSubdomains";
ssl on;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4";
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
proxy_buffering off;
location / {
proxy_pass http://localhost:8123;
proxy_set_header Host $host;
proxy_redirect http:// https://;
proxy_http_version 1.1;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
}
default.conf (/etc/nginx/conf.d/default.conf)
server {
listen 8100;
server_name localhost;
#charset koi8-r;
#access_log /var/log/nginx/host.access.log main;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# include fastcgi_params;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
The problem is this line:
proxy_pass http://localhost:8123;
When running in containers, localhost refers to the nginx container itself, not to the Docker host.
So you should either change localhost to the hostname of your Docker host, or use docker-compose so that you can refer to the other container by its service name.
If you are just running the containers separately, you could also use the container's IP address for now, but note that it can change every time the container is restarted.
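A minimal sketch of the proxied location under the docker-compose approach; "homeassistant" here is a hypothetical service name, not something taken from the poster's files:

location / {
    # "homeassistant" must match the service/container name on a shared Docker
    # network; Docker's embedded DNS resolves it to the container's IP
    proxy_pass http://homeassistant:8123;
    proxy_set_header Host $host;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
}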
I am following the steps in https://www.sitepoint.com/deploy-your-rails-app-to-aws/ to deploy my Ruby on Rails app to AWS (Amazon Linux).
I did everything successfully except for the nginx setup.
The article asks me to comment out the existing content and paste the following into /etc/nginx/sites-available/default:
upstream app {
# Path to Puma SOCK file, as defined previously
server unix:/home/deploy/contactbook/shared/tmp/sockets/puma.sock fail_timeout=0;
}
server {
listen 80;
server_name localhost;
root /home/deploy/contactbook/public;
try_files $uri/index.html $uri #app;
location / {
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Connection '';
proxy_pass http://app;
}
location ~ ^/(assets|fonts|system)/|favicon.ico|robots.txt {
gzip_static on;
expires max;
add_header Cache-Control public;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}
But after I installed nginx on Amazon Linux, there was no /etc/nginx/sites-available folder.
So I created that folder and the default file inside it.
But I got a 404 error when I tried to access my home page.
Then I found that I have /etc/nginx/nginx.conf, so I updated this file instead. But when I ran sudo service nginx restart, I got this error message:
nginx: [emerg] "upstream" directive is not allowed here in /etc/nginx/nginx.conf:1
nginx: configuration file /etc/nginx/nginx.conf test failed
Does anyone know how I should do this correctly?
FYI, the content of /etc/nginx/nginx.conf before I broke it:
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
index index.html index.htm;
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name localhost;
root /usr/share/nginx/html;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
location / {
}
# redirect server error pages to the static page /40x.html
#
error_page 404 /404.html;
location = /40x.html {
}
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# include fastcgi_params;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
# Settings for a TLS enabled server.
#
# server {
# listen 443 ssl;
# listen [::]:443 ssl;
# server_name localhost;
# root /usr/share/nginx/html;
#
# ssl_certificate "/etc/pki/nginx/server.crt";
# ssl_certificate_key "/etc/pki/nginx/private/server.key";
# # It is *strongly* recommended to generate unique DH parameters
# # Generate them with: openssl dhparam -out /etc/pki/nginx/dhparams.pem 2048
# #ssl_dhparam "/etc/pki/nginx/dhparams.pem";
# ssl_session_cache shared:SSL:1m;
# ssl_session_timeout 10m;
# ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
# ssl_ciphers HIGH:SEED:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!RSAPSK:!aDH:!aECDH:$
# ssl_prefer_server_ciphers on;
#
# # Load configuration files for the default server block.
# include /etc/nginx/default.d/*.conf;
#
# location / {
# }
# error_page 404 /404.html;
# location = /40x.html {
# }
#
# error_page 500 502 503 504 /50x.html;
# location = /50x.html {
# }
# }
}
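For reference, upstream and server are only valid inside the http context, which is why pasting the article's snippet at the top of /etc/nginx/nginx.conf fails at line 1. A minimal sketch, assuming the stock Amazon Linux layout above (which already includes /etc/nginx/conf.d/*.conf), would be to put the snippet into its own file, e.g. a hypothetical /etc/nginx/conf.d/contactbook.conf:

# /etc/nginx/conf.d/contactbook.conf -- picked up by the conf.d include inside the http block
upstream app {
    server unix:/home/deploy/contactbook/shared/tmp/sockets/puma.sock fail_timeout=0;
}

server {
    listen 80;
    server_name localhost;
    root /home/deploy/contactbook/public;
    # ... rest of the server block from the article ...
}

# Note: the default server block on port 80 in nginx.conf may also need to be
# removed or adjusted so that it does not answer these requests first.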
I have a client that wanted SSL on their site, so I got the certificate and set up the nginx conf with it (the config is below). If I don't point the root of the HTTPS server to the real server root, it works, but if I set the root to the site files, HTTPS gets redirected to HTTP. No error messages.
Any ideas?
user www-data;
worker_processes 4;
error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;
events {
worker_connections 1024;
}
http {
passenger_root /usr/local/rvm/gems/ruby-1.9.3-p448/gems/passenger-4.0.14;
passenger_ruby /usr/local/rvm/wrappers/ruby-1.9.3-p448/ruby;
include mime.types;
default_type application/octet-stream;
#log_format main '$remote_addr - $remote_user [$time_local] "$request" '
# '$status $body_bytes_sent "$http_referer" '
# '"$http_user_agent" "$http_x_forwarded_for"';
#access_log logs/access.log main;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
#gzip on;
server {
listen 80;
server_name www.nope.se;
passenger_enabled on;
root /var/www/current/public/;
#charset koi8-r;
#access_log logs/host.access.log main;
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
#error_page 500 502 503 504 /50x.html;
#location = /50x.html {
# root html;
#}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# include fastcgi_params;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
# another virtual host using mix of IP-, name-, and port-based configuration
#
#server {
# listen 8000;
# listen somename:8080;
# server_name somename alias another.alias;
# location / {
# root html;
# index index.html index.htm;
# }
#}
# HTTPS server
#
server {
listen 443;
server_name www.nope.se;
ssl on;
ssl_certificate /opt/nginx/cert/www.nope.se.crt;
ssl_certificate_key /opt/nginx/cert/www.nope.se.key;
ssl_session_timeout 10m;
#ssl_protocols SSLv2 SSLv3 TLSv1;
#ssl_ciphers HIGH:!aNULL:!MD5;
#ssl_prefer_server_ciphers on;
passenger_enabled on;
root /var/www/current/public/;
# location / {
# root html;
# index index.html index.htm;
# }
}
}
I honestly do not understand your question, but here is some general guidance on how a typical nginx HTTPS configuration is done. I hope you find it useful.
SSL is a protocol that works one layer below HTTP; think of it as a tunnel inside which the HTTP protocol travels. The TLS handshake, and therefore the certificate, is negotiated before any HTTP-level configuration comes into play. This is also why, historically (without SNI), you could serve only one certificate per listening IP address and port.
I recommend that you move your SSL-certificate-related logic into a separate server block, like this:
server {
    listen 443 ssl default_server;

    ssl_certificate ssl/website.pem;
    ssl_certificate_key ssl/website.key;
    ssl_trusted_certificate ssl/ca.all.pem;

    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_session_timeout 5m;

    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; # default on newer versions
    ssl_prefer_server_ciphers on;

    # The following is all one long line. We use an explicit list of ciphers to enable
    # forward secrecy without exposing ciphers vulnerable to the BEAST attack.
    ssl_ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-RC4-SHA:ECDHE-RSA-AES128-SHA:RC4-SHA:RC4-MD5:ECDHE-RSA-AES256-SHA:AES256-SHA:ECDHE-RSA-DES-CBC3-SHA:DES-CBC3-SHA:AES128-SHA;

    # The following is for reference. It needs to be specified again in each
    # virtual host, in both the HTTP and HTTPS versions. All this directive does
    # is tell the browser to use the HTTPS version of the site and remember this
    # for a month.
    add_header Strict-Transport-Security max-age=2592000;
}
I also recommend that you set up a 301 redirect in your non-HTTPS server block, as shown below.
Change this:
server {
    listen 80;
    server_name www.nope.se;
    # ...
}
to something like this:
server {
    listen 80;
    server_name www.nope.se;
    add_header Strict-Transport-Security max-age=7200;
    return 301 https://$host$request_uri;
}
With this in place, when a user visits http://www.nope.se, they will be automatically redirected to https://www.nope.se.
Basically I have the following URL:
http://website.com/details.php?id=10&id=1
And I want it to be rewritten as
http://website.com/details.customextension
I've tried adding the following:
location /details {
    rewrite ^/details\.customextension$ /details.php?id=$1&hit=$2;
}
But when I follow the details.php link, it redirects me to index.php.
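For reference, $1 and $2 in a rewrite replacement only exist if the pattern has capture groups, so the ids have to appear somewhere in the pretty URL. A minimal sketch under that assumption (the /details-10-1.customextension form is made up purely for illustration):

# hypothetical pretty URL: /details-10-1.customextension
location ~ ^/details-\d+-\d+\.customextension$ {
    rewrite ^/details-(\d+)-(\d+)\.customextension$ /details.php?id=$1&hit=$2 last;
}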
This is the vhost's conf file:
# You may add here your
# server {
# ...
# }
# statements for each of your virtual hosts to this file
##
# You should look at the following URL's in order to grasp a solid understanding
# of Nginx configuration files in order to fully unleash the power of Nginx.
# http://wiki.nginx.org/Pitfalls
# http://wiki.nginx.org/QuickStart
# http://wiki.nginx.org/Configuration
#
# Generally, you will want to move this file somewhere, and start with a clean
# file but keep this around for reference. Or just disable in sites-enabled.
#
# Please see /usr/share/doc/nginx-doc/examples/ for more detailed examples.
##
server {
listen 80 default; ## listen for ipv4; this line is default and implied
#listen [::]:80 default ipv6only=on; ## listen for ipv6
root /var/www/website/application;
index index.php index.html index.htm;
# Make site accessible from http://localhost/
server_name website.com www.website.com;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
# Logs
access_log /var/www/website/logs/website.access.log main;
error_log /var/www/website/logs/website.error.log ;
# File uploads
client_max_body_size 10M;
client_body_buffer_size 128k;
# Add www to all urls
if ($host ~* ^([a-z0-9\-]+\.(be|fr|nl|de))$) {
rewrite ^(.*)$ http://www.$host$1 permanent;
}
location / {
expires 1d;
# Note that nginx does not use .htaccess.
# This will allow for SEF URL’s
try_files $uri $uri/ /index.php?$args;
}
# Here we set up static files access and caching
location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js)$ {
access_log off;
expires max;
}
location ~* /(admin|cache|include|install)/ {
deny all;
}
error_page 404 /404.html;
location = /404.html {
root /usr/share/nginx/www;
}
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/www;
}
proxy_cache_key "$host$request_uri$cookie_sessioncookie";
# Send PHP scripts to FPM on 127.0.0.1:9000
location ~ \.php$ {
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME /var/www/public$fastcgi_script_name;
fastcgi_connect_timeout 60;
fastcgi_send_timeout 180;
fastcgi_read_timeout 180;
fastcgi_buffer_size 128k;
fastcgi_buffers 4 256k;
fastcgi_busy_buffers_size 256k;
fastcgi_temp_file_write_size 256k;
fastcgi_intercept_errors on;
include fastcgi_params;
expires 0d;
}
}
What am I doing wrong? Thank you!