403 Forbidden ~ nginx on Bluemix - ruby-on-rails

This is in continuation of the question I posted on SO yesterday. I am not sure if I should make another question or just link it to the older post.
Anyway, I have been trying to deploy a Ruby on Rails app onto Bluemix. After much trial and error, I finally managed to push the app and start it. But when I try to open the web app, Bluemix throws me an error:
403 Forbidden
nginx
If I understand correctly, this has something to do with permissions to access a certain folder in my RoR app. How do I resolve this? Do I have to change permissions on my local app before pushing it to Bluemix, or is there something to be done on Bluemix?
Here is the link
EDIT:
This is the error log from the nginx folder on Bluemix:
2015/07/23 10:16:39 [error] 37#0: *2 directory index of "/home/vcap/app/public/" is forbidden, client: 75.126.52.20, server: localhost, request: "GET / HTTP/1.1", host: "csw-events.mybluemix.net"
nginx.conf file on Bluemix:
worker_processes 1;
daemon off;
error_log /home/vcap/app/nginx/logs/error.log;
events { worker_connections 1024; }
http {
    log_format cloudfoundry '$http_x_forwarded_for - $http_referer - [$time_local] "$request" $status $body_bytes_sent';
    access_log /home/vcap/app/nginx/logs/access.log cloudfoundry;
    default_type application/octet-stream;
    include mime.types;
    sendfile on;
    gzip on;
    gzip_disable "msie6";
    gzip_comp_level 6;
    gzip_min_length 1100;
    gzip_buffers 16 8k;
    gzip_proxied any;
    gzip_types text/plain text/css text/js text/xml text/javascript application/javascript application/x-javascript application/json application/xml application/xml+rss;
    tcp_nopush on;
    keepalive_timeout 30;
    port_in_redirect off; # Ensure that redirects don't include the internal container PORT - 61596
    server_tokens off;
    server {
        listen 61596;
        server_name localhost;
        location / {
            root /home/vcap/app/public;
            index index.html index.htm Default.htm;
        }
    }
}
The nginx folder does not exist on my local system. It is created when I push my app to Bluemix (or am I missing something here?).

You cannot use the Nginx (static) buildpack to serve your Ruby on Rails app. You must use the Ruby buildpack for that.

The problem was caused by two things (at least in my scenario).
a) Not using the correct buildpack. What solved my problem was pushing with the Cloud Foundry Ruby buildpack:
cf push csw-events -b https://github.com/cloudfoundry/ruby-buildpack.git
b) Sensitivity of the manifest file. Bluemix started throwing the following error:
FAILED
Error reading manifest file:
yaml: [] control characters are not allowed at line 1, column 1
This was resolved by downloading the manifest file again and replacing only specific parts of it (like the command part).
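For reference, a minimal manifest.yml for this kind of deployment might look like the following sketch. The app name and buildpack URL come from the answer above; the memory and command values are assumptions, not taken from the original post:

```yaml
# Hypothetical manifest.yml; memory/command are illustrative placeholders.
applications:
- name: csw-events
  memory: 512M
  buildpack: https://github.com/cloudfoundry/ruby-buildpack.git
  command: bundle exec rails server -p $PORT
```

If the file got corrupted (the "control characters" YAML error above), re-typing it in a plain-text editor rather than copying from a rich-text source usually avoids the problem.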

Related

Nginx ssl_protocol setting doesn't work

I'm trying to change the settings for Nginx ssl_protocols, but the changes don't reflect on the server.
At first I thought it was because we were using Ubuntu 12.04, but now we're updated to 14.04.
Nginx version:
nginx version: nginx/1.10.1
built by gcc 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04.3)
built with OpenSSL 1.0.1f 6 Jan 2014
TLS SNI support enabled
configure arguments: --sbin-path=/usr/local/sbin/nginx --with-http_ssl_module --with-http_stub_status_module --with-http_gzip_static_module
Openssl version:
OpenSSL 1.0.1f 6 Jan 2014
nginx.conf:
http {
    include /usr/local/nginx/conf/mime.types;
    default_type application/octet-stream;
    tcp_nopush on;
    keepalive_timeout 65;
    tcp_nodelay off;
    log_format main '$remote_addr - $remote_user [$time_local] $status '
                    '"$request" $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    error_log /var/log/nginx/error.log debug;
    open_file_cache max=1000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;
    client_body_timeout 10;
    client_header_timeout 10;
    sendfile on;
    # output compression
    gzip on;
    gzip_min_length 1100;
    gzip_buffers 4 8k;
    gzip_proxied any;
    gzip_types text/plain text/html text/css text/js application/x-javascript application/javascript application/json;
    # include config for each site here
    include /etc/nginx/sites/*;
}
/etc/nginx/sites/site.conf:
server {
    listen 443 ssl;
    server_name server_name;
    root /home/deploy/server_name/current/public;
    access_log /var/log/nginx/server_name.access.log main;
    ssl_certificate /usr/local/nginx/conf/ssl/wildcard.server_name.com.crt;
    ssl_certificate_key /usr/local/nginx/conf/ssl/wildcard.server_name.com.key.unsecure;
    ssl_client_certificate /usr/local/nginx/conf/ssl/geotrust.crt;
    ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
    ssl_ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:DHE-RSA-AES256-SHA;
    ssl_prefer_server_ciphers on;
    location ~ ^/assets/ {
        expires max;
        add_header Cache-Control public;
        add_header ETag "";
        break;
    }
    location / {
        try_files $uri @server_name;
        proxy_set_header X-Forwarded-Proto https;
    }
    location @server_name {
        include proxy.conf;
        proxy_pass http://server_name;
        proxy_set_header X-Forwarded-Proto https;
    }
    # stats url
    location /nginx_stats {
        stub_status on;
        access_log off;
    }
}
The config files get loaded properly and are both being used as intended. If it has any relevance the server is running Ruby on Rails with Unicorn.
Does anyone have an idea what could be wrong?
Description
I had a similar problem. My changes would be applied (nginx -t would warn about duplicate and invalid values), but TLSv1.0 and TLSv1.1 would still be accepted. My line in my sites-enabled/ file reads
ssl_protocols TLSv1.2 TLSv1.3;.
I ran grep -R 'protocol' /etc/nginx/* to find other mentions of ssl_protocols, but I only found the main configuration file /etc/nginx/nginx.conf and my own site config.
Underlying problem
The problem was caused by a file included by certbot/letsencrypt, at /etc/letsencrypt/options-ssl-nginx.conf. In certbot 0.31.0 (certbot --version) the file includes this line:
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
This somewhat sneakily enabled these versions of TLS.
I was tipped off by Libre Software.
0.31.0 is the most up-to-date version I was able to get for Ubuntu 18.04 LTS
Solution
TLS versions <1.2 were disabled by default in the certbot nginx config starting from certbot v0.37.0 (thank you mnordhoff). I copied the file from there into the letsencrypt config (options-ssl-nginx.conf), added a note to myself and subsequent maintainers and everything was all right with the world again.
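For reference, the corrected directive in /etc/letsencrypt/options-ssl-nginx.conf would read something like the sketch below (matching the newer certbot default; check the file shipped with your certbot version rather than copying this verbatim):

```nginx
# Only allow modern TLS. Certbot <= 0.31.0 shipped
# "ssl_protocols TLSv1 TLSv1.1 TLSv1.2;" here instead.
ssl_protocols TLSv1.2 TLSv1.3;
```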
How to not get into this mess in the first place
grepping one level higher (/etc/* instead of /etc/nginx*) would have allowed me to find the culprit. But a more reliable and powerful tool is nginx -T, which prints out all the configuration files that are considered.
Other useful commands:
nginx -s reload after you change configs
nginx -v to find out your nginx version. To enable TLS version 1.3, you need version 1.13.0+.
openssl version: you need at least OpenSSL 1.1.1 "built with TLSv1.3 support"
curl -I -v --tlsv<major.minor> <your_site> for testing whether a certain version of TLS is in fact enabled
journalctl -u nginx --since "10 minutes ago" to make absolutely sure something else isn't going on.
I want to add another (somewhat obscure) possibility, since the certbot one didn't cover my case. Mine applies only to nginx installs with multiple domains. Basically, the settings you specify for your specific domain may be overridden by the server default, and that default comes from the first server name encountered when reading the config (basically alphabetical order). Details on this page:
http://nginx.org/en/docs/http/configuring_https_servers.html
A common issue arises when configuring two or more HTTPS servers listening on a single IP address:
server {
listen 443 ssl;
server_name www.example.com;
ssl_certificate www.example.com.crt;
...
}
server {
listen 443 ssl;
server_name www.example.org;
ssl_certificate www.example.org.crt;
...
}
With this configuration a browser receives the default server’s certificate, i.e. www.example.com regardless of the requested server name. This is caused by SSL protocol behaviour. The SSL connection is established before the browser sends an HTTP request and nginx does not know the name of the requested server. Therefore, it may only offer the default server’s certificate.
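One defensive pattern here (my sketch, not from the original answer) is to add an explicit default_server with a fallback certificate, so a name-based server block never becomes the accidental default. The certificate paths are placeholders:

```nginx
# Catch-all for clients that send no SNI or an unknown name.
server {
    listen 443 ssl default_server;
    server_name _;
    ssl_certificate     /etc/nginx/ssl/fallback.crt;   # placeholder path
    ssl_certificate_key /etc/nginx/ssl/fallback.key;   # placeholder path
    return 444;  # close the connection without a response
}
```

With that in place, each real server block's ssl_protocols and certificates apply only to its own server_name.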
In my case the issue wasn't in the server itself, but in the AWS Load Balancer having the wrong SSL ciphers selected.

Azure VM NGINX Plus + Web App leads to 404

I'm working on NGINX Plus setup as a reverse proxy for traffic management and routing on my Azure Cloud Solution.
I'm just getting started and everything works independently, but when I try to use the proxy_pass to route web traffic to a .NET Web App that rests in the cloud, I get 404 errors.
I've tried with an app I've had deployed for a while (a .NET MVC Web App) and also a node express app that is nothing more than the basic offering, as a test:
http://rpsexpressnodetest.azurewebsites.net/
Each of these runs as expected when I go directly to them, but when I enable the pass-through I get a 404 error.
I'm using the following config file for nginx:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    upstream web_rps {
        server rpsexpressnodetest.azurewebsites.net;
    }
    # ssl_certificate /etc/nginx/ssl/server.crt;
    # ssl_certificate_key /etc/nginx/ssl/server.key;
    # drop requests with no Host header
    # server {
    #     listen 80 default_server;
    #     server_name "";
    #     return 444;
    # }
    server {
        listen *:80;
        # listen *:443 ssl;
        root /usr/share/nginx/html;
        location / {
            proxy_pass http://web_rps;
        }
    }
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;
    include /etc/nginx/conf.d/*.conf;
}
In any case, if I navigate to http://rpsnginx.cloudapp.net/ (my nginx VM), I always get a 404 "web app not found" error:
Error 404 - Web app not found.
The web app you have attempted to reach is not available in this
Microsoft Azure App Service region. This could be due to one of
several reasons:
The web app owner has registered a custom domain to point to the Microsoft Azure App Service, but has not yet configured Azure to
recognize it. Click here to read more.
The web app owner has moved the web app to a different region, but the DNS cache is still directing to the old IP Address that was used
in the previous region. Click here to read more.
If I remove the pass-through proxy I get the standard "Welcome to NGINX" index.html file, so NGINX itself seems to work just fine too...
I sincerely hope my new(b)ness is causing the issue.
Any assistance would be a great help!
First off, big props to NGINX Support for getting back to me as quickly as I could transpose this post from an email I sent them...
More importantly, here is the answer provided by them that worked!
My guess is that this is the source of the problem.
Try adding the following directive to the "location /" block:
proxy_set_header Host rpsexpressnodetest.azurewebsites.net;
Worked like a champ!
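Put together, the working location block presumably looked something like this (the upstream name is the one defined earlier in the question's config):

```nginx
location / {
    # Azure App Service routes requests by Host header, so the upstream
    # must see its own hostname rather than the proxy VM's.
    proxy_set_header Host rpsexpressnodetest.azurewebsites.net;
    proxy_pass http://web_rps;
}
```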

uwsgi invalid request block size

I am running uwsgi in emperor mode
uwsgi --emperor /path/to/vassals/ --buffer-size=32768
and getting this error
invalid request block size: 21327 (max 4096)...skip
What to do? I also tried -b 32768.
I also ran into the same issue while following a tutorial.
The problem was that I set the option socket = 0.0.0.0:8000 instead of http = 0.0.0.0:8000.
The socket option is intended to be used with a third-party router (nginx, for instance), while with the http option set, uwsgi accepts incoming HTTP requests and routes them by itself.
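To illustrate the difference, a minimal ini sketch (the port and module name are placeholders, not from the original post):

```ini
[uwsgi]
# Speak plain HTTP: browsers and curl can connect directly.
http = 0.0.0.0:8000
module = myapp.wsgi:application

# Or speak the binary uwsgi protocol instead. Only a front end such as
# nginx (via uwsgi_pass) should connect to this:
# socket = 0.0.0.0:8000
```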
The correct solution is not to switch to HTTP protocol. You just need to increase the buffer size in uWSGI settings.
buffer-size=32768
or in commandline mode:
-b 32768
Quote from official documentation:
By default uWSGI allocates a very small buffer (4096 bytes) for the headers of each request. If you start receiving “invalid request block size” in your logs, it could mean you need a bigger buffer. Increase it (up to 65535) with the buffer-size option.
If you receive ‘21573’ as the request block size in your logs, it could mean you are using the HTTP protocol to speak with an instance speaking the uwsgi protocol. Don’t do this.
From here: https://uwsgi-docs.readthedocs.io/en/latest/ThingsToKnow.html
I fixed it by adding --protocol=http to the uwsgi command line.
I ran into the same issue trying to run it under nginx, following the docs here. It is important to note that once you switch to nginx, you must not try to access the app on the port specified by the --socket param, but rather the "listen" port in nginx.conf. Although your problem is described differently, the title matches exactly the issue I had.
This error is shown when the uWSGI server is speaking the uwsgi protocol and one tries to access it via the HTTP protocol, by curl or a web browser directly. If you can, try configuring your uWSGI server to use the http protocol, so you can access it via a web browser or curl.
In case you cannot (or do not want to) change it, you can use a reverse proxy (e.g. nginx) in front of local or remote uWSGI server, see https://uwsgi-docs.readthedocs.org/en/latest/Nginx.html
If that feels like too much work, give the uwsgi-tools Python package a try:
$ pip install uwsgi-tools
$ uwsgi_curl 10.0.0.1:3030
There is also a simple reverse proxy server, uwsgi_proxy, if you need to access your application(s) via a web browser etc. See the more detailed answer at https://stackoverflow.com/a/32893520/179581
As pointed out in another comment from the docs:
If you receive ‘21573’ as the request block size in your logs, it could mean you are using the HTTP protocol to speak with an instance speaking the uwsgi protocol. Don’t do this.
If you are using Nginx, this will occur if you have this configuration (or something similarly odd):
proxy_pass http://unix:/path/to/socket.sock
this is speaking HTTP to uWSGI (which makes it grumpy). Instead, use:
uwsgi_pass unix:/path/to/socket.sock;
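A minimal matching nginx fragment, as a sketch (the socket path is the placeholder from above; uwsgi_params is the parameter file shipped with nginx):

```nginx
location / {
    include uwsgi_params;                  # sets the uwsgi protocol vars
    uwsgi_pass unix:/path/to/socket.sock;  # binary uwsgi protocol, not HTTP
}
```

On the uWSGI side this pairs with a `socket =` (not `http =`) option pointing at the same path.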
I ran into the same issue. Here is how I set things up, using uWSGI + Django + nginx + React.
1 - nano /etc/uwsgi/sites/app_plataform.ini
[uwsgi]
DJANGO_SETTINGS_MODULE = app_plataform.settings
env = DJANGO_SETTINGS_MODULE
settings.configure()
chdir = /home/app_plataform
home = /root/app_plataform
module = prometheus_plataform.wsgi:application
master = true
processes = 33
buffer-size = 32768
socket = /home/app_plataform/app_plataform.sock
chmod-socket = 777
vacuum = true
2 - make a serious performance upgrade on nginx:
user www-data;
worker_processes auto;
worker_processes 4;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
    worker_connections 4092;
    multi_accept on;
}
http {
    ## UPGRADE CONFIGS
    client_body_buffer_size 16K;
    client_header_buffer_size 16k;
    client_max_body_size 32m;
    #large_client_header_buffers 2 1k;
    client_body_timeout 12;
    client_header_timeout 12;
    keepalive_timeout 15;
    send_timeout 10;
    access_log off;
    ## Basic Settings
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    #keepalive_timeout 65;
    types_hash_max_size 2048;
    server_tokens off;
    server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    ## SSL Settings
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;
    ## Logging Settings
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    ## Gzip Settings
    gzip on;
    gzip_comp_level 2;
    gzip_min_length 1000;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain application/x-javascript text/xml text/css application/xml;
    gzip_vary on;
    #gzip_proxied any;
    #gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    #gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
    ## Virtual Host Configs
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
3 - then restart the services or reboot the server:
systemctl restart uwsgi && systemctl restart nginx
For this particular error, invalid request block size: 21327 (max 4096)...skip, it depends on whether you're running your solution on a local machine or a remote server (AWS, ...).
This solution worked for me both on my local machine and within a Docker container:
1 - change socket = :8000 in your ini file to http = :8000
This works perfectly within Docker as well.
You can increase the buffer size in uWSGI settings.
A quick workaround is to remove the cookies for that URL from the browser:
open developer tools in the browser, go to the Application tab, and remove the cookies associated with the URL. This helps because oversized cookies can push the request headers past uWSGI's buffer limit.

running multiple rails websites using phusion passenger 3.0.17 with nginx

I searched Google for deploying multiple Rails websites using Phusion Passenger 3.0.17 with nginx, but I didn't get relevant results. Anyhow, I completed the Passenger nginx setup by running the passenger-install-nginx-module command.
Ques 1) I am looking for a proper beginner tutorial for running multiple Rails websites using Phusion Passenger 3.0.17 with nginx.
Ques 2) I am looking for commands to start, stop, and restart the whole Passenger nginx server (i.e. all websites) and also individual Rails websites.
Note: I am not looking for a Passenger Standalone solution. I am using REE 1.8.7 and Rails 2.3.14.
According to the documentation for Passenger, you create a new vhost for each app you want to deploy, point the site root at your app's public directory, and add the passenger_enabled directive. Exactly the same as deploying with Apache.
http {
    ...
    server {
        listen 80;
        server_name www.mycook.com;
        root /webapps/mycook/public;
        passenger_enabled on;
    }
    ...
}
More here: http://www.modrails.com/documentation/Users%20guide%20Nginx.html#deploying_a_ror_app
In regard to question 2: restarting depends on what you are trying to do. I'm going to assume you're using a distro that uses init.d.
These are 3 cases where you do a different kind of 'restart'.
You have an issue with some config on Nginx, or it's behaving strangely.
In that case you would restart the Nginx service like this: /etc/init.d/nginx restart
The next case is you have a rails or sinatra app deployed on Nginx with the passenger module.
And you want to make it reload some changes you just pushed to the server.
Passenger watches the tmp/restart.txt file in your application, so simply running touch tmp/restart.txt while cd'd into the app's folder will tell Passenger to reload the application.
And the last case for restarting/reloading is a reload of Nginx itself.
You use this when you add or change your vhosts:
/etc/init.d/nginx reload. This allows you to reload your vhosts and other config without dropping connections.
Have a gander at the Passenger Documentation, it is very thorough. nginx-passenger docs
Here is a step-by-step tutorial on Configuring Nginx for multiple virtual hosts: http://articles.slicehost.com/2007/12/7/ubuntu-gutsy-installing-nginx-via-aptitude
Note that:
You cannot restart an individual website/virtual host: if you change some configuration in the Nginx conf, as stuartc mentions, you have to restart Nginx for the changes to take effect. You can, however, do a $ touch current/tmp/restart.txt in the server app directory after pushing files, if you want to apply a production fix directly.
I have experienced problems with Nginx restart on Ubuntu; explicit stop and start seem to give more assured results. Use <NGINX_HOME>/bin/nginx -s stop to stop and then <NGINX_HOME>/bin/nginx to start.
To help you, here are my configuration files.
nginx.conf:
#user nobody;
worker_processes 4;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;
events {
    worker_connections 1024;
}
http {
    passenger_root /rails/common/ruby-1.9.2-p290/lib/ruby/gems/1.9.1/gems/passenger-3.0.17;
    passenger_ruby /rails/common/ruby-1.9.2-p290/bin/ruby_with_env;
    passenger_max_pool_size 30;
    include mime.types;
    default_type application/octet-stream;
    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log logs/access.log main;
    sendfile on;
    #tcp_nopush on;
    #keepalive_timeout 0;
    keepalive_timeout 65;
    gzip on;
    include /rails/common/nginx/conf/sites-enabled/*.conf;
}
A sample site.conf inside sites-enabled folder:
server {
    listen 80;
    server_name domainname1.com;
    root /rails/myapp1/current/public;
    passenger_enabled on;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
    if (-f $document_root/system/maintenance.html) {
        return 503;
    }
    error_page 503 @maintenance;
    location @maintenance {
        rewrite ^(.*)$ /system/maintenance.html break;
    }
}
A new file in sites-enabled is all it takes to add a new site.
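As a sketch, generating that new file can even be scripted. Everything below (SITES_DIR, the domain, the app root) is an illustrative placeholder, not from the original answer; the script only writes the vhost file, and you reload nginx afterwards:

```shell
#!/bin/sh
# Generate a new Passenger vhost into the sites-enabled folder.
# SITES_DIR defaults to a local directory for demonstration; point it at
# your real sites-enabled path (e.g. /rails/common/nginx/conf/sites-enabled).
SITES_DIR="${SITES_DIR:-./sites-enabled}"
DOMAIN="myapp2.com"
APP_ROOT="/rails/myapp2/current/public"

mkdir -p "$SITES_DIR"
cat > "${SITES_DIR}/${DOMAIN}.conf" <<EOF
server {
    listen 80;
    server_name ${DOMAIN};
    root ${APP_ROOT};
    passenger_enabled on;
}
EOF
echo "wrote ${SITES_DIR}/${DOMAIN}.conf"
# Afterwards, reload nginx so it picks up the new vhost, e.g.:
#   /etc/init.d/nginx reload
```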

Nginx: Couldn't forward the HTTP response back to the HTTP client

I have a server with the following components:
Ubuntu 12.04
Nginx 1.2.2
Passenger 3.0.15
I'm running a Ruby on Rails app on this server.
Now in my error.log of Nginx I found this error popping up regularly.
[ pid=12615 thr=3065355072 file=ext/nginx/HelperAgent.cpp:923 time=2012-10-22 09:31:03.929 ]: Couldn't forward the HTTP response back to the HTTP client: It seems the user clicked on the 'Stop' button in his browser.
Does anybody have an idea where this issue comes from?
This is my Nginx conf:
user deployer staff;
worker_processes 5;
error_log /var/log/nginx/error.log notice;
pid /var/log/nginx/nginx.pid;
events {
    worker_connections 2048;
    multi_accept on;
}
http {
    passenger_root /var/lib/gems/1.9.1/gems/passenger-3.0.15;
    passenger_ruby /usr/bin/ruby1.9.1;
    include mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    keepalive_timeout 65;
    server {
        passenger_enabled on;
        server_name xxx.com;
        listen 80;
        rails_env production;
        location ^~ /assets/ {
            gzip_static on;
            expires max;
            add_header Cache-Control public;
        }
        root /var/rails/alfa_paints/current/public;
        error_page 404 /404.html;
        error_page 500 502 503 504 /500.html;
    }
}
Your configuration looks fine. I think the error is exactly what it says: the end user clicked "stop" in their browser, closing the TCP connection to the server. Everything in your application stack is likely working as designed. Unless you have end users complaining about the app not working, that's the most likely explanation.
That said, if you're seeing this error a lot, the next question you might ask is "why are users hitting the stop button so much"? Maybe part of your application is taking too long to respond to users, and you need to either speed it up or add some sort of progress indicator. You might look back at your logs and see if you can correlate the errors with a particular kind of request.
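One low-tech way to do that correlation is to tally request paths from the 'main'-format access log and see which URLs dominate (and so which slow pages users may abandon). The sketch below uses made-up sample lines so it is self-contained; point the awk command at your real access.log instead:

```shell
#!/bin/sh
# Tally request paths from nginx 'main'-format access-log lines.
# The sample log lines are fabricated for illustration only.
cat > sample_access.log <<'EOF'
1.2.3.4 - - [22/Oct/2012:09:31:03 +0000] "GET /slow/report HTTP/1.1" 200 512 "-" "Mozilla" "-"
1.2.3.4 - - [22/Oct/2012:09:31:09 +0000] "GET /slow/report HTTP/1.1" 200 512 "-" "Mozilla" "-"
5.6.7.8 - - [22/Oct/2012:09:31:10 +0000] "GET /fast HTTP/1.1" 200 128 "-" "Mozilla" "-"
EOF

# Field 2 (between the first pair of double quotes) is "METHOD PATH PROTO";
# count occurrences of each path and print the busiest first.
awk -F'"' '{ split($2, r, " "); counts[r[2]]++ }
           END { for (p in counts) print counts[p], p }' sample_access.log |
  sort -rn
```

Run against the real log, the top entries point at the URLs worth profiling first.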
Another situation may be like this one: nginx Timeout serving large requests.
To resolve that problem, you can try this:
gem install passenger    # use the latest passenger
passenger-config --root  # get passenger_root_path
# download the latest nginx tarball and cd nginx-X.X.X
./configure --with-http_ssl_module --with-http_gzip_static_module --with-cc-opt=-Wno-error --add-module=$passenger_root_path/ext/nginx
make
make install
After this, configure nginx.conf and restart, and you will find everything is OK.
