I have the following configuration for Varnish, but when I access the application it doesn't ask for a login; it just logs me in.
What am I doing wrong?
default.vcl
backend default {
    .host = "127.0.0.1";
    .port = "80";
}

sub vcl_recv {
    if (req.url ~ "sign_in" || req.url ~ "sign_out" || req.request == "POST" || req.request == "PUT" || req.request == "DELETE") {
        return (pass);
    }
    return (lookup);
}
sub vcl_fetch {
    if (req.url ~ "logout" || req.url ~ "sign_out") {
        unset beresp.http.Set-Cookie;
    }
    if (req.request == "GET") {
        unset beresp.http.Set-Cookie;
        set beresp.ttl = 360m;
    }
    if (req.url ~ "images/" || req.url ~ "javascripts" || req.url ~ "stylesheets" || req.url ~ "assets") {
        set beresp.ttl = 360m;
    }
}
/etc/default/varnish
DAEMON_OPTS="-a 192.241.136.37:80 \
-T localhost:6082 \
-f /etc/varnish/default.vcl \
-S /etc/varnish/secret \
-s malloc,256m"
/etc/nginx/sites-enabled/default
upstream app {
    server unix:/tmp/unicorn.socket fail_timeout=0;
}

server {
    listen 80;
    client_max_body_size 2G;
    server_name localhost;
    keepalive_timeout 5;
    root /home/deploy/apps/wms/current/public;
    access_log off;
    error_log off;

    if ($request_method !~ ^(GET|HEAD|PUT|POST|DELETE|OPTIONS)$ ) {
        return 405;
    }

    location ~ ^/(assets)/ {
        gzip_static on;
        expires max;
        add_header Cache-Control public;
    }

    location / {
        try_files $uri/index.html $uri.html $uri @app;
        error_page 404 /404.html;
        error_page 422 /422.html;
        error_page 500 502 503 504 /500.html;
        error_page 403 /403.html;
    }

    location @app {
        proxy_pass http://app;
    }

    location = /favicon.ico {
        expires max;
        add_header Cache-Control public;
    }

    location ~ \.php$ {
        deny all;
    }
}
You are preventing your backend from deleting the session cookie, so you can't log out unless you explicitly delete your browser's cookies.
Looking at your fetch VCL (comments inline):
sub vcl_fetch {
    # This prevents the server from deleting the cookie in the browser when logging out
    if (req.url ~ "logout" || req.url ~ "sign_out") {
        unset beresp.http.Set-Cookie;
    }
    # This strips cookies from every GET response, so the backend can
    # never set or clear a session on a GET
    if (req.request == "GET") {
        unset beresp.http.Set-Cookie;
        set beresp.ttl = 360m;
    }
    if (req.url ~ "images/" || req.url ~ "javascripts" || req.url ~ "stylesheets" || req.url ~ "assets") {
        set beresp.ttl = 360m;
    }
}
So your backend can't delete the client's cookie except as the result of a POST request.
IMHO you shouldn't mess with the backend's Set-Cookie headers unless you know (and have tested well) the possible side effects.
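If the goal is aggressive caching without breaking authentication, a safer pattern is to strip Set-Cookie only on responses you know are session-free, such as static assets, and leave everything else untouched. A minimal sketch in the same Varnish 3 syntax as above (the URL patterns come from your config; the TTL is illustrative):
sub vcl_fetch {
    # Static assets carry no session state, so it is safe to cache
    # them for everyone and drop any stray Set-Cookie header.
    if (req.url ~ "images/" || req.url ~ "javascripts" || req.url ~ "stylesheets" || req.url ~ "assets") {
        unset beresp.http.Set-Cookie;
        set beresp.ttl = 360m;
    }
    # Every other response keeps its Set-Cookie header, so
    # sign-in and sign-out keep working.
}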
Related
When I try to run multiple React apps using Docker and an nginx reverse proxy, I get this error: upstream timed out while connecting to the upstream.
The error appears when I check the nginx logs.
server {
    listen 80;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;

    add_header 'Cache-Control' "public, max-age=31536000";
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options "nosniff";

    ssl_certificate /etc/nginx/conf.d/cert.crt;
    ssl_certificate_key /etc/nginx/conf.d/ssl.key;
    server_name <Domain-ip>;

    location / {
        proxy_pass http://domainname:3000;
        #try_files $uri /index.html;
    }

    location /elderly {
        proxy_pass http://domainname:3001;
        #try_files $uri /index.html;
    }

    location /carer {
        proxy_pass http://domainname:3002;
        #try_files $uri /index.html;
    }

    # For gzip text compression
    gzip on;
    gzip_comp_level 2;
    gzip_min_length 1000;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain application/x-javascript text/xml text/css application/xml application/javascript;

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    # For optimization
    location ~* \.(ico|css|js|webp|gif|jpeg|jpg|png|woff|ttf|otf|svg|woff2|eot)$ {
        expires 365d;
        add_header Cache-Control "public, max-age=31536000";
    }
}
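For what it's worth, "upstream timed out while connecting to upstream" usually means nginx cannot reach the proxied address at all (a wrong hostname or port, or containers not sharing a Docker network) rather than a slow response. If the backends really are just slow, the relevant timeouts can be raised; a hedged sketch reusing the first proxied location above (the 60s values are illustrative, not a recommendation):
location / {
    proxy_pass http://domainname:3000;
    # These three directives govern the "upstream timed out" errors.
    proxy_connect_timeout 60s;
    proxy_send_timeout 60s;
    proxy_read_timeout 60s;
}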
I am trying to add a CORS header to my app. When deploying it to the cloud via Docker, I get the error:
nginx: [emerg] "server" directive is not allowed here in /etc/nginx/conf.d/default.conf:1
My nginx file
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    server {
        root https://srm-master.nonprod.com;
        index index.html index.htm;

        set $cors "";
        if ($http_origin ~* (.*\.ini.com)) {
            set $cors "true";
        }

        server_name .ini.com;

        location / {
            if ($cors = "true") {
                add_header 'Access-Control-Allow-Origin' "$http_origin";
                add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, DELETE, PUT';
                add_header 'Access-Control-Allow-Credentials' 'true';
                add_header 'Access-Control-Allow-Headers' 'User-Agent,Keep-Alive,Content-Type';
            }
        }
    }

    #gzip on;
    include /etc/nginx/conf.d/*.conf;
}
Actually, the problem has nothing to do with Docker; the cause of the error is the nginx config. nginx allows only one http section, and it is already defined in /etc/nginx/nginx.conf. Remove the http section from your config, and it should work:
server {
    root https://srm-master.nonprod.com;
    index index.html index.htm;

    set $cors "";
    if ($http_origin ~* (.*\.ini.com)) {
        set $cors "true";
    }

    server_name .ini.com;

    location / {
        if ($cors = "true") {
            add_header 'Access-Control-Allow-Origin' "$http_origin";
            add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, DELETE, PUT';
            add_header 'Access-Control-Allow-Credentials' 'true';
            add_header 'Access-Control-Allow-Headers' 'User-Agent,Keep-Alive,Content-Type';
        }
    }
}
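Before restarting, the change can be verified with nginx's built-in config check (the container name below is hypothetical):
# Validate the configuration, then reload without downtime
nginx -t && nginx -s reload

# Or from the host when nginx runs in Docker (container name is illustrative):
docker exec my-nginx nginx -t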
I'm trying to set up SSL with Let's Encrypt using this article: https://medium.com/@pentacent/nginx-and-lets-encrypt-with-docker-in-less-than-5-minutes-b4b8a60d3a71
my nginx config
server {
    listen 80;
    server_name kcr.ttfr.ru;
    server_name www.kcr.ttfr.ru;

    root /var/www/k4fntr/public;
    index /frontend/index.html;
    client_max_body_size 128M;

    gzip on; # enable gzip
    gzip_disable "msie6";
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript;

    access_log /var/log/nginx/access.log main;
    error_log /var/log/nginx/error.log debug;

    location / {
        try_files /frontend/$uri $uri $uri/ /index.php?$args; # permalinks
        client_max_body_size 128M;
    }

    location ~ /\. {
        deny all; # deny hidden files
    }

    location ~* /(?:uploads|files)/.*\.php$ {
        deny all; # deny scripts
    }

    location ~* ^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
        access_log off;
        log_not_found off;
        expires max; # cache static files
        try_files /frontend/$uri $uri $uri/ /index.php?$args; # permalinks
    }

    location ~ \.php$ {
        proxy_set_header X-Real-IP $remote_addr;
        fastcgi_pass k4fntr_php-fpm:9000;
        fastcgi_index index.php;
        include /etc/nginx/fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_read_timeout 300;
    }

    location /socket.io {
        proxy_pass http://k4fntr_echo:6001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }

    location ~ /\.ht {
        deny all;
    }

    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/kcr.ttfr.ru/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/kcr.ttfr.ru/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location /.well-known/acme-challenge/ { root /var/www/certbot; }
}
but my challenges fail because the URL /.well-known/acme-challenge/ returns 403 Forbidden.
What's wrong with my nginx configuration?
Change your location to something like this:
location /.well-known/acme-challenge {
    root /var/www/certbot;
    default_type text/plain;
}
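One caveat worth adding: the location ~ /\. rule in the question denies every path containing a dot-segment, and in nginx a matching regex location beats a plain prefix location. Giving the challenge location the ^~ modifier stops the search at the prefix match:
# ^~ makes this prefix location take priority over regex locations,
# so the deny-hidden-files rule cannot shadow the ACME challenge path.
location ^~ /.well-known/acme-challenge/ {
    root /var/www/certbot;
    default_type text/plain;
}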
Another question: do you want to redirect all non-HTTPS traffic to HTTPS?
In that case I would create one server block listening on port 80 and another listening on 443.
server {
    listen 80;
    server_name domain.io;

    location / {
        return 301 https://$server_name$request_uri;
    }

    location /.well-known/acme-challenge {
        root /var/www/certbot;
        default_type text/plain;
    }
}
server {
    listen 443 ssl;
    server_name domain.io;
    add_header Strict-Transport-Security "max-age=31536000" always;
    ...
}
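To confirm the webroot is reachable before re-running certbot, drop a test file in place and fetch it (paths follow the config above; the file name is illustrative):
# Create a test token in the webroot certbot writes to
mkdir -p /var/www/certbot/.well-known/acme-challenge
echo ok > /var/www/certbot/.well-known/acme-challenge/test

# This should print "ok" rather than a 403 page
curl http://kcr.ttfr.ru/.well-known/acme-challenge/test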
I want to redirect HTTP to HTTPS automatically.
Below is my nginx conf.
upstream puma_tn {
    # Path to Puma SOCK file, as defined previously
    server unix:/home/deploy/tn/shared/tmp/sockets/tn-puma.sock fail_timeout=0;
}

server {
    listen 80;
    server_name www.tn.com.au;

    #return 301 https://$host$request_uri;
    return 301 https://$server_name$request_uri;
    #if ($scheme = http) {
    #    return 301 https://$server_name$request_uri;
    #}
}
server {
    listen 443 default_server ssl;
    server_name www.tn.com.au;
    root /home/deploy/tn/current/public;
    try_files $uri/index.html $uri @app;

    ssl_certificate /etc/ssl/certs/tn.crt;
    ssl_certificate_key /etc/ssl/private/tn.key;
    ssl on;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    # Security changes - start
    server_tokens off;
    more_set_headers 'Server: Eff_You_Script_Kiddies!';
    # Security changes - end

    # location / {
    location @app {
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        #proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_redirect off;
        proxy_http_version 1.1;
        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
        add_header X-Frame-Options "SAMEORIGIN";
        proxy_set_header Connection '';
        proxy_pass http://puma_tn;
    }

    location ~ ^/(assets|fonts|system)/|favicon.ico|robots.txt {
        gzip_static on;
        expires max;
        add_header Cache-Control public;
    }

    underscores_in_headers on;
    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 600;
    proxy_connect_timeout 600;
    proxy_send_timeout 600;
    proxy_read_timeout 600;
    send_timeout 600;
}
Assuming you're deploying to production, add the config below to production.rb:
config.force_ssl = true
Alternatively, force the HTTPS connection inside the server block:
if ($scheme != "https") {
    rewrite ^ https://$host$uri permanent;
}
Or, inside the location / block, write:
location / {
    return 301 https://$server_name$request_uri;
}
Also, I don't think we need config.force_ssl = true.
I have written the following in the server SSL block to make it work:
server {
    listen 443 default_server ssl;
    server_name www.tn.com.au;

    if ($http_x_forwarded_proto = 'http') {
        return 301 https://$server_name$request_uri;
    }

    ..... other configurations
}
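Note that $http_x_forwarded_proto is only set when something in front of nginx (a load balancer, say) adds the X-Forwarded-Proto header; when nginx terminates TLS itself, the separate port-80 and port-443 server blocks shown earlier are the more direct route. Either way, if Rails runs behind the proxy it should be told the original scheme; a minimal sketch reusing the named location from the config above:
location @app {
    # Tell Rails the request arrived over HTTPS, so that
    # config.force_ssl does not trigger a redirect loop.
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $host;
    proxy_pass http://puma_tn;
}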
I got this exception:
No route matches [GET] "/ng-template/manager/index.html"
Everything works fine on my local machine, but it fails after deploying to the staging server.
Any ideas?
Angular route
App.config(['$routeProvider', function($routeProvider) {
  $routeProvider.
    when('/', {
      templateUrl: '/ng-template/manager/index.html',
      controller: 'indexCtrl'
    }).
    when('/api_usage', {
      templateUrl: '/ng-template/manager/api_usage.html',
      controller: 'ApiUsageCtrl'
    });
}]);
nginx.conf
upstream myapp {
    server unix:/tmp/puma.myapp.sock fail_timeout=0;
}

server {
    listen 80;
    server_name www.myapp.com myapp.com;
    root /home/deploy/current/public;

    location /assets/ {
        gzip_static on;
        expires max;
        add_header Cache-control public;
        add_header ETag "";
        break;
    }

    location ~* ^.+\.(jpg|jpeg|png|gif|ico|css|js)$ {
        expires max;
        try_files $uri @myapp;
    }

    location ~* ^.+\.(zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|mp3|flv|mpeg|avi|woff2|ttf|woff)$ {
        expires max;
        try_files $uri @myapp;
    }

    location @myapp {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://myapp;
        proxy_connect_timeout 10s;
        proxy_read_timeout 30s;
        proxy_buffers 8 16k;
        proxy_buffer_size 32k;
    }

    try_files $uri/index.html $uri @myapp;

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}
production.rb
Rails.application.configure do
  config.eager_load = true
  config.serve_static_files = true
  config.assets.js_compressor = :uglifier
  config.assets.css_compressor = :sass
  config.assets.compile = true
  config.assets.digest = true
end
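For what it's worth, the "No route matches" error means the request for /ng-template/manager/index.html fell through try_files to the Rails router instead of being served as a static file, so it is worth checking that the templates were actually deployed under public/ on the staging server. A hedged sketch of an explicit location for them (whether the files live under public/ng-template is an assumption):
# Serve the Angular templates straight from public/, never via Rails.
# Assumes they are deployed to /home/deploy/current/public/ng-template.
location /ng-template/ {
    try_files $uri =404;
}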