Capistrano & X-Sendfile - ruby-on-rails

I'm trying to make X-Sendfile work for serving my heavy attachments from a Capistrano deployment. I found that X-Sendfile does not work with symlinks. How can I handle files inside a folder that Capistrano symlinks?
My web server is Apache 2 + Passenger.
in my production.rb:
config.action_dispatch.x_sendfile_header = "X-Sendfile"
in my controller action:
filename = File.join(Rails.root, "private/videos", @lesson.link_video1 + ".mp4")
response.headers["X-Sendfile"] = filename
send_file filename, :disposition => :inline, :stream => true, :x_sendfile => true
render nothing: true
my filesystem structure (where "->" stands for "symlink" and indentation means subfolder):
/var/www/myproject
    releases/
        ....
    current/ -> /var/www/myproject/releases/xxxxxxxxxxxx
        app/
        public/
        private/
            videos/ -> /home/ftp_user/videos
my apache config
XSendFile on
XSendFilePath / #also tried /home/ftp_user/videos
My application is able to serve small files, but with big ones it raises a NoMemoryError (failed to allocate memory).
I think it's not actually using X-Sendfile, because the behavior is the same when I don't use it.
Here are the response headers of the file I'm trying to serve:
Accept-Ranges:bytes
Cache-Control:private
Connection:Keep-Alive
Content-Disposition:inline
Content-Range:bytes 0-1265/980720989
Content-Transfer-Encoding:binary
Content-Type:video/mp4
Date:Sat, 01 Mar 2014 13:24:19 GMT
ETag:"70b7da582d090774f6e42d4e44ae3ba5"
Keep-Alive:timeout=5, max=97
Server:Apache/2.4.6 (Ubuntu)
Status:200 OK
Transfer-Encoding:chunked
X-Content-Type-Options:nosniff
X-Frame-Options:SAMEORIGIN
X-Powered-By:Phusion Passenger 4.0.37
X-Request-Id:22ff0a30-c2fa-43fe-87c6-b9a5e7da12f2
X-Runtime:0.008150
X-UA-Compatible:chrome=1
X-XSS-Protection:1; mode=block
I really don't know how to debug this: whether it's an X-Sendfile issue, or whether I'm attempting something impossible because of the symlinks.
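One way to see what the symlinks actually expand to, from a console, is File.realpath, which resolves every symlink in a path. If mod_xsendfile compares the resolved location against XSendFilePath, a file reached via the current/ and videos/ links can fall outside the allowed prefix. A minimal sketch with stand-in directory names (not the real deploy layout):

```ruby
require "fileutils"
require "tmpdir"

root = Dir.mktmpdir

# simulate the deploy layout: private/videos is a symlink to external storage
storage = File.join(root, "ftp_videos")
release = File.join(root, "releases", "20140301", "private")
FileUtils.mkdir_p(storage)
FileUtils.mkdir_p(release)
File.write(File.join(storage, "lesson1.mp4"), "stub")
FileUtils.ln_s(storage, File.join(release, "videos"))

# the path Rails builds (through the symlink)...
symlinked = File.join(release, "videos", "lesson1.mp4")
# ...and the fully resolved path, which has a different prefix
resolved = File.realpath(symlinked)

puts symlinked   # .../private/videos/lesson1.mp4
puts resolved    # .../ftp_videos/lesson1.mp4
```

If the resolved path does not start with the directory named in XSendFilePath, the module will refuse to serve the file even though Rails can read it.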
EDIT:
Following the suggestion in the accepted answer, it "magically" started working!
I created a Capistrano task this way:
task :storage_links do
  on roles(:web), in: :sequence, wait: 2 do
    # create the symbolic links to the resources
    within "/var/www/my_application/current/private" do
      execute :ln, "-nFs", "/home/ftp_user/videos"
    end
  end
end
I didn't manage to run it after finalize_update, so I ran it after the restart, by hand.
And I corrected my Apache configuration this way:
XSendFilePath /var/www/my_application
(before, I was pointing X-Sendfile at the FTP folder)
X-Sendfile no longer appears in my response headers, and I get a 206 Partial Content, but everything seems to work and Apache is serving files the right way (including very heavy files).
I know this can be a security issue, so I will try to point it at the last release of my application, since pointing it at the current symlink is not working.

Maybe I found a solution. How did you create your symlinks?
If you just did ln -s, that may not be enough.
Here they suggest using ln -nFs, so that ln recognizes that what you are linking is a directory.

Related

Cannot start ruby on rails application on server but works locally, wrong environment path

I've encountered an issue that is preventing me from starting my Ruby on Rails application on our Production server despite it running correctly on both Development and Staging environments. The error message is as follows:
[ E 2021-04-26 14:53:39.4216 14896/T11 age/Cor/App/Implementation.cpp:221 ]: Could not spawn process for application /path/to/application: The application encountered the following error: No such file to load -- /path/to/application/app/config/environment.rb (LoadError)
For some reason Passenger is attempting to find the config/environment.rb inside of an app folder when instead it should just be looking for:
/path/to/application/config/environment.rb
Passenger is being configured using Apache and the site config can be seen below:
<VirtualHost *:80>
    # PassengerFriendlyErrorPages on
    # PassengerStartTimeout 90
    ServerAdmin email@example.com

    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined

    DocumentRoot /path/to/application/public
    <Directory /path/to/application/public>
        AllowOverride None
        Options -Multiviews
        Require all granted
    </Directory>
</VirtualHost>

PassengerPreStart http://localhost
PassengerAppEnv production
PassengerLogFile ${APACHE_LOG_DIR}/passenger.log
The server is running Ubuntu 18.04. I've included what I think are the relevant versions below:
Ruby - 2.5.1
Ruby on Rails - 5.2.5
Passenger - 6.0.7
Does anyone know what config I may be missing that is causing Passenger to be looking in the wrong place?
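One directive worth checking is PassengerAppRoot. Passenger normally infers the application root as the parent directory of DocumentRoot; the error above suggests that inference landed on the app/ folder instead. If that is the case, pinning the root explicitly should fix it. A hedged sketch using the placeholder paths from the question (not a verified fix for this exact setup):

```apache
<VirtualHost *:80>
    DocumentRoot /path/to/application/public
    # Pin the app root so Passenger looks for config/environment.rb here,
    # not under /path/to/application/app
    PassengerAppRoot /path/to/application
    <Directory /path/to/application/public>
        AllowOverride None
        Require all granted
    </Directory>
</VirtualHost>
```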
I've read about Passenger, and I'm a beginner with Ruby ("beginner" meaning: whenever I come back to Ruby or Python I download them and try compiling scripts to .exe to see whether anything has changed). This isn't the answer to your question, but it may be a useful point of view for anyone who uses both PHP and Ruby.
Today's test was Ruby + Apache + PHP 8.1, checking the output from http://localhost/test/ without any compilation.
On Windows, with Apache, PHP, and Ruby installed, create a test folder in your htdocs and put these files into it:
index.php
<?php
// Highest priority
proc_nice(-20);
// run the Ruby script (assumes .rb files are associated with Ruby;
// otherwise use shell_exec('ruby index.rb'))
echo shell_exec('index.rb');
//system('index.rb');
?>
index.rb
#puts "Content-type: text/html\r\n"
puts "Hello World" + "<br>"
time1 = Time.new
puts "Current Time : " + time1.inspect + "<br>"
# Time.now is a synonym:
time2 = Time.now
puts "Current Time : " + time2.inspect + "<br>"
Now open the browser and go to http://localhost/test/.
If you then click refresh (or hit the F5 key) in your browser, the time should display the refreshed values.

Unable to access certain file types via Rails public folder on Heroku

We're hosting a Rails app on Heroku, and having some trouble serving some static assets out of the public folder. (I know, this isn't recommended. We actually use CloudFront with a custom origin of our main application so we're getting the benefit of a CDN still. But that's irrelevant for this issue.)
Some assets serve up fine and exactly as expected. Ex: /404.html, /favicon.png
Others return an HTTP 406 error: /video-js/swf/video-js-4.1.0.swf
Rails 4.1.1
Ruby 2.1.2
The rails_12factor gem is present.
production.rb file sets config.serve_static_assets = true
The file definitely exists. Using the console on Heroku:
File.size("/app/public/video-js/swf/video-js-4.1.0.swf")
=> 14059
But when I try to access the file, I get an HTTP 406 Not Acceptable response
curl -I http://my-rails-application.com/video-js/swf/video-js-4.1.0.swf
returns:
HTTP/1.1 406 Not Acceptable
Content-length: 0
Content-Type: text/html; charset=utf-8
Date: Wed, 04 Jun 2014 02:25:37 GMT
Status: 406 Not Acceptable
X-Request-Id: d184b097-7326-49cb-ad05-539459f7df08
X-Runtime: 0.532988
Connection: keep-alive
I tried adding a Mime Type for swf files (Mime::Type.register "application/x-shockwave-flash", :swf) and now I get a 404 instead of a 406, but that's still not very useful.
To recap, html, png, jpg, ico, and txt files all are served perfectly. swf, ttf, woff and a few others aren't being served out of the public folder.
What could cause some file types in the public folder to work properly, but not others?

Robots.txt file on Rails heroku app not updating

I've got a Rails app hosted on Heroku for which I'm trying to update the robots.txt file.
The local file, which is at /public/robots.txt, reads:
User-agent: *
Disallow: /admin/
However when I deploy the app to Heroku, the robots file is seemingly not updated. The remote version reads:
# See http://www.robotstxt.org/wc/norobots.html for documentation on how to use the robots.txt file
#
# To ban all spiders from the entire site uncomment the next two lines:
User-agent: *
Disallow: /
The live url is at http://www.cowboypicks.com/robots.txt.
Running curl -I http://www.cowboypicks.com/robots.txt yields:
HTTP/1.1 200 OK
Age: 2317078
Cache-Control: public, max-age=2592000
Content-length: 200
Content-Type: text/plain
Date: Wed, 30 Apr 2014 17:01:43 GMT
Last-Modified: Thu, 03 Apr 2014 14:21:08 GMT
Status: 200 OK
X-Content-Digest: 10c5b29b9aa0c6be63a410671662a29a796bc772
X-Rack-Cache: fresh
Connection: keep-alive
Indicating the file hasn't been updated since 3 April; however, it was updated today (30 April). Even stranger, when I run heroku run bash followed by cat public/robots.txt, I get:
User-agent: *
Disallow: /admin/
Indicating the file is being updated on Heroku but it's showing an older (I assume cached) version for some reason. There is some caching on the app using dalli/memcache, but I wouldn't have thought that would affect static files? Any ideas on how I could further debug the problem?
It turned out that Dalli had indeed been caching the robots.txt file in production. The expiry date was set in production.rb with the line:
config.static_cache_control = "public, max-age=2592000"
Running the following from the rails console flushed the cache and sorted the problem:
Rails.cache.dalli.flush_all
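For scale, the max-age value in that header is in seconds; converting it shows how long a stale robots.txt could have kept being served:

```ruby
max_age = 2592000        # from config.static_cache_control, in seconds
days = max_age / 86400   # 86400 seconds per day
puts days                # => 30
```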

updating the robots

OK, so I want to add this
User-Agent: *
Disallow: /
to the robots.txt in all environments other than production. Any ideas on the best way to do this? Should I remove it from the public folder and create a route/view instead?
I am using Rails 3.0.14, prior to the asset pipeline. Any suggestions?
Capistrano task for uploading a blocking robots.txt
I wrote this up again today, essentially taking the same path as Sergio's answer, but sharing the robots-specific result might save someone time :)
namespace :deploy do
  desc "Uploads a robots.txt that mandates the site as off-limits to crawlers"
  task :block_robots, :roles => :app do
    content = [
      '# This is a staging site. Do not index.',
      'User-agent: *',
      'Disallow: /'
    ].join($/)

    logger.info "Uploading blocking robots.txt"
    put content, "#{current_path}/public/robots.txt"
  end
end
Then trigger it from your staging recipe with something like
after "deploy:update_code", "deploy:block_robots"
Here's real working code from my project (it's an nginx config, not robots.txt, but the idea should be clear):
task :nginx_config do
  conf = <<-CONF
    server {
      listen 80;
      client_max_body_size 2M;
      server_name #{domain_name};
      -- snip --
    }
  CONF

  put conf, "/etc/nginx/sites-available/#{application}_#{rails_env}"
end
So, basically, you build the content of your file in a string and then put it to the desired path. This makes Capistrano upload the content over SFTP.

Thinking Sphinx min_infix_len and delta not working on production server

We have an issue with Thinking Sphinx's min_infix_len and delta indexes on our production servers.
Everything works in development mode on OS X, but when we deploy via Capistrano to our Ubuntu server running Passenger/Apache, both delta indexing and min_infix_len seem to stop working.
We're deploying as the ubuntu user, which also runs Apache. We had an issue yesterday with the production folder not being created, but we created it manually and I can now see a list of the delta files in there.
I've followed the docs through.
I can see the delta flag set to true on record creation, but searching doesn't find the record. Once we rebuild the index (as the ubuntu user) I can find the record, but only with the full string.
My sphinx.yml file is as follows:
production:
  enable_star: 1
  min_infix_len: 3
  bin_path: "/usr/local/bin"
  version: 2.0.5
  mem_limit: 128M
  searchd_log_file: "/var/log/searchd.log"

development:
  min_infix_len: 3
  bin_path: "/usr/local/bin"
  version: 2.0.5
  mem_limit: 128M
Rebuild, start, and conf all work fine, and my production.conf file contains this:
index company_core
{
  source = company_core_0
  path = /var/www/html/ordering-main/releases/20110831095808/db/sphinx/production/company_core
  charset_type = utf-8
  min_infix_len = 1
  enable_star = 1
}
I also have this in my production.rb env file:
ThinkingSphinx.deltas_enabled = true
ThinkingSphinx.updates_enabled = true
My searchd.log file only has this in:
[Wed Aug 31 09:40:04.437 2011] [ 5485] accepting connections
Nothing at all appears in apache error / access log
-- EDIT ---
define_index do
  indexes :name

  has created_at, updated_at

  set_property :delta => true
end
Not sure if it's the cause, but the version values in your sphinx.yml are for the version of Sphinx, not Thinking Sphinx - so you may want to run indexer to double-check what that value should be (likely one of 0.9.9, 1.10-beta or 2.0.1-beta).
Also: on the server, in script/console production, can you share the full output of the following (not interested in the value returned, hence why I'm forcing it to be an empty string - it'll just get in the way otherwise):
Company.define_indexes && Company.index_delta; ''
If delta indexing is not working on the production server for the Passenger user, you have to give your Passenger user write permission on the db/sphinx/production folder when the index is created, so it can write the delta indexes there.
Or you can set these two lines in your nginx/conf/nginx.conf:
passenger_user_switching off;
passenger_default_user root;
Example:
passenger_root /usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.0;
passenger_ruby /usr/local/bin/ruby;
passenger_user_switching off;
passenger_default_user root;
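The first option (granting write access, rather than running Passenger as root) can be scripted. Below is a rough sketch using Ruby's FileUtils, with a temporary directory standing in for the real deploy path; on a real server you would also chown the folder to a group shared with the Passenger user:

```ruby
require "fileutils"
require "tmpdir"

app_root   = Dir.mktmpdir   # stands in for the deployed release path
sphinx_dir = File.join(app_root, "db", "sphinx", "production")

FileUtils.mkdir_p(sphinx_dir)
# group-writable, so the user Passenger runs as can write the delta indexes
FileUtils.chmod(0o775, sphinx_dir)

puts format("%o", File.stat(sphinx_dir).mode & 0o777)   # 775
```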
