How to upload a large file in Ruby on Rails?

I am using Rails 3.0.9 and Ruby 1.9.3p0.
I am trying to upload a zip/tar file to the server.
When I upload a 2.5 MB file, everything works fine. But when I upload a 350 MB zip file, it fails (network error / connection timed out, or "aborted" in the Firebug response).
In the Apache config, I increased the timeout from 300 to 300000, but nothing happened.
Is this possibly an Apache setting? Or could it be something on their end? Any suggestions on where I should look would be greatly appreciated.

I have used both Apache and Nginx, and for big file uploads like this I prefer Nginx. With Nginx you can set client_max_body_size to something like 500 MB inside your nginx.conf. A related question where the same approach is preferred is here.
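For reference, a minimal sketch of what that looks like in nginx.conf (the values are illustrative; tune them to your uploads):

http {
    client_max_body_size 500M;    # allow request bodies up to ~500 MB
    client_body_timeout  300s;    # give slow uploads time to complete
}

Reload nginx afterwards (e.g. nginx -s reload) for the change to take effect.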

Related

Rails + thin: Not possible to download large files

I have a Rails app where users can manage large files (currently up to 15 GB). They also have the option to download the stored files.
Everything works perfectly for files < 510 MB. But for files > 510 MB, the download stops after 522,256 KB (510 MB).
I think thin causes this issue. When I start my dev server using thin, I cannot download the complete file. When I start the dev server using WEBrick, everything works.
I used top to compare the RAM/CPU behavior, but both servers, thin and WEBrick, behave the same way. In development, both servers read the complete file into RAM and then send it to the user/client.
I tried changing some options of send_file, like stream or buffer_size. I also set the length manually. But again, I was not able to download the complete file using thin.
I can reproduce this behavior using Firefox, Chrome, and curl.
The problem is that my production Rails app uses 4 thin servers behind an nginx proxy. Currently, I cannot switch to unicorn or passenger.
In development, I use thin 1.6.3, Rails 4.1.8, and Ruby 2.1.2.
def download
  file_path = '/tmp/big_file.tar.gz' # 5 GB
  send_file(file_path, buffer_size: 4096, stream: true)
end
If you are using send_file, it is ideal to use a front end proxy to pass off the responsibility of serving the file. You said you are using nginx in production, so:
In your production.rb file, uncomment config.action_dispatch.x_sendfile_header = 'X-Accel-Redirect'.
You will also have to change your nginx configuration to accommodate Rack::Sendfile. Its documentation is located here. The changes amount to adding:
proxy_set_header X-Sendfile-Type X-Accel-Redirect;
proxy_set_header X-Accel-Mapping /=/files/; # or something similar that doesn't interfere with your routes
to your existing location block and adding an additional location block that handles the X-Accel-Mapping that you added. That new location block might look like:
location ~ /files(.*) {
  internal;
  alias $1;
}
You will know it is working correctly when you ssh to your production server, curl -I the thin server (not nginx), and see an X-Accel-Redirect header. A plain curl (without -I) directly to the thin server should not send the file contents.
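A check along these lines, for example (the host, port, and path here are hypothetical placeholders for your own app):

# HEAD request straight to thin; expect the header, but no file body
curl -I http://127.0.0.1:3000/download/42

HTTP/1.1 200 OK
X-Accel-Redirect: /files/tmp/big_file.tar.gz
Content-Type: application/x-gzip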
You can see my recent struggle with nginx and send_file here.

Why don't nginx/Rails show me the full content?

On the production server I have nginx + Passenger and Rails 3.1.2.
Somehow it doesn't show me the full output. I have a page with ~150 images and links underneath, but when the page loads I can see only a small part of them. Is it a problem in the nginx config? On my local machine everything is OK (MRI).
Enabling gzip in nginx helped me; I just turned it on in nginx.conf.
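For reference, a minimal gzip block in nginx.conf might look like this (the MIME type list is illustrative):

http {
    gzip on;
    gzip_types text/css application/javascript application/json;  # text/html is always compressed when gzip is on
    gzip_min_length 1024;  # skip compressing tiny responses
}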

Rails 3, apache & passenger, send_file sends zero byte files

I'm struggling with send_file on Rails 3.0.9, running Ruby 1.9, Passenger 3.0.8 on Apache on Ubuntu Lucid.
The xsendfile module is installed and loaded into Apache:
root~# a2enmod xsendfile
Module xsendfile already enabled
It's symlinked correctly in mods-enabled:
lrwxrwxrwx 1 root root 32 Aug 8 11:20 xsendfile.load -> ../mods-available/xsendfile.load
config.action_dispatch.x_sendfile_header = "X-Sendfile" is set in my production.rb.
Using send_file results in zero-byte files being sent to the browser:
filepath = Rails.root.join('export', "#{filename}.csv")
if File.exists?(filepath)
  send_file filepath, :type => 'text/csv'
end
I believe the previous answer isn't the right way to go because, as far as I can tell, Apache isn't handling the downloads at all when that solution is applied; the Rails process is. That's why the nginx directive, which shouldn't work, appears to. You get the same result by commenting out the config directive.
Another drawback (aside from tying up a Rails process for too long) is that when the data streaming is handled by the Rails process, the response doesn't seem to include the Content-Length header. So a user doesn't know how large the file they're downloading is, nor how long it will take (a usability problem).
I was able to get it to work by ensuring that mod_xsendfile was properly included and loaded in my Apache config, like so (this will depend on your Apache install, etc.):
LoadModule xsendfile_module /usr/lib64/httpd/modules/mod_xsendfile.so
...
# enable mod_xsendfile for offloading zip file downloads from rails
XSendFile on
XSendFilePath /
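In a virtual host, those directives might sit alongside the rest of the app's configuration; a sketch with hypothetical paths (scoping XSendFilePath to the export directory is safer than XSendFilePath /):

<VirtualHost *:80>
    DocumentRoot /srv/app/current/public
    # allow mod_xsendfile to serve files from the export directory only
    XSendFile on
    XSendFilePath /srv/app/current/export
</VirtualHost>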

Rails sends 0 byte files using send_file

I can't get send_file(Model.attachment.path) to work.
It doesn't fail; instead, it sends a 0-byte file to the client, though the file names are correct.
This problem started happening after I did a big migration from Rails 2.3.8 to 3.
A lot of other things changed in this migration, and I will try my best to detail all of them.
Distribution/server change: Rackspace RHEL5 to Linode Ubuntu 10.04 LTS.
Ruby version change: 1.8.6 -> 1.9.2.
Rails version change: 2.3.8 -> 3.0.0.
httpd platform change: apache2 -> nginx (however, I tried on apache2 as well and it did not work).
I moved the attachments via FTP, as they were not part of my git repositories and so were not published via cap deploy; instead it was a manual FTP from remote (RHEL5) to local (Win7), then from local (Win7) to remote (Ubuntu10).
I do know that FTPing does not retain file permissions through the transfers, so I have also mimicked the chmods that were on my previous servers so they are almost identical (users/groups differ: root:root instead of olduser:olduser).
A snippet of the request to download an attachment, from my production log:
Started GET "/attachments/replies/1410?1277105698" for 218.102.140.205 at 2010-09-16 09:44:31 +0000
Processing by AttachmentsController#replies as HTML
Parameters: {"1277105698"=>nil, "id"=>"1410"}
Sent file /srv/app/releases/20100916094249/attachments/replies/UE0003-Requisition_For_Compensation_Leave.doc (0.2ms)
Completed 200 OK in 78ms
Everything looks okay. Let me also rule out local issues: I've tried downloading via Chrome on both Win7 and Ubuntu (in VBox).
Let me also assure you that the path is indeed correct:
root#li162-41:/srv/app/current# tail /srv/app/releases/20100916094249/attachments/replies/UE0003-Requisition_For_Compensation_Leave.doc
#
#
%17nw
HQ��+1ae����
%33333333333(��QR���HX�"%%��#9
��#�p4��#P#��Unknown������������G��z �Times New Roman5��Symbol3&�
�z �Arial5&�
So to sum up the question: how do I get send_file to actually send files instead of fake 0-byte junk?
send_file has an :x_sendfile param which defaults to true in Rails 3.
This feature offloads the streaming download to the front server (Apache with mod_xsendfile, or lighttpd) by returning an empty response with an X-Sendfile header containing the path.
Nginx uses the X-Accel-Redirect header for the same functionality, but you have to configure Rails accordingly in the proper environment file:
config.action_dispatch.x_sendfile_header = 'X-Accel-Redirect'
Rails 3 update: this line already exists in production.rb; just uncomment it.
Add sendfile on; to your nginx config to make use of the header sent by Rails.
Remember that the absolute path must be used and nginx must have read access to the file.
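For reference, that directive goes in the http (or server) block of your nginx config; a minimal sketch:

http {
    sendfile on;  # let nginx stream the file itself via the kernel's sendfile() syscall
}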
Another way, for aliased files:
For better security I use aliases in nginx instead of absolute paths; however, the send_file method checks for the existence of the file, which fails with an alias.
Thus I changed my action to:
head('X-Accel-Redirect'    => file_item.location,
     'Content-Type'        => file_item.content_type,
     'Content-Disposition' => "attachment; filename=\"#{file_item.name}\"")
# head already renders an empty response, so no additional render call is needed
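The matching nginx side of that alias approach might look like this (the location name and filesystem path are hypothetical; file_item.location would then be a URI such as /protected/report.pdf):

location /protected/ {
    internal;                     # only reachable via X-Accel-Redirect, not directly
    alias /srv/app/attachments/;  # /protected/report.pdf -> /srv/app/attachments/report.pdf
}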
In Rails 3, just uncomment the line config.action_dispatch.x_sendfile_header = 'X-Accel-Redirect' in production.rb inside the environments folder.
Yes, I had the same problem with X-Sendfile being enabled by default in Rails 3 too.
If you have a large volume of send_file calls, you can just comment out the following line in config/environments/production.rb:
#config.action_dispatch.x_sendfile_header = "X-Sendfile"
Then the send_file method starts working perfectly.
Because I can't install the x-sendfile extension for Apache, I just searched a little and found this.
I hope it helps.
I've had similar issues with send_file() in the past; using send_data() instead saved me back then (e.g. send_data File.read(filename), :disposition => 'inline', :type => 'some/mimetype').
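Spelled out as a controller action, that workaround might look like the following (the path and names are hypothetical). Note that File.read loads the entire file into memory, so it only makes sense for reasonably small files:

def download
  filepath = Rails.root.join('export', 'report.csv')
  # send_data ships the bytes in the response body itself,
  # bypassing the X-Sendfile / X-Accel-Redirect machinery entirely
  send_data File.read(filepath),
            :filename    => 'report.csv',
            :type        => 'text/csv',
            :disposition => 'attachment'
end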
On Rails 4, I realized my problem was that I had deleted the temporary file I generated before it was sent to the user.
If I don't delete the file, send_file works. I've not tested on thin, but it works great on Passenger 5 as a stand-alone server.

My Rails app is returning HTTP 500 for all its URLs, but nothing shows up in the log file. How can I diagnose the problem?

I have a Rails app that is running on a production server with Apache and Phusion Passenger. The app works fine locally when using Mongrel, but whenever I try to load a URL on the production server, it returns HTTP 500. I know the server is working properly, because I can get the static elements of the application (e.g., JavaScript files, stylesheets, images) just fine. I've also checked the Passenger status and it is loading the app (it must be, since the app's 500 Internal Server Error page is returned, not just the default Apache one). Also, when I load the app via script/console production and do something like app.get("/"), 500 is also returned.
The problem is that there is nothing in the log files to indicate the problem. production.log is empty. The Apache error logs show no problems with Apache, either. I'm stumped as to what's going on and I'm not sure how to diagnose the problem.
I know I may have been a bit vague, but can anyone give a suggestion on what the problem may be? Or at least a way I can go about diagnosing it?
The answer for this specific situation was a problem with my app. One of the model classes used a different database connection than the rest of the app. This database connection was not configured properly. I think the reason why nothing was written to the log files is that Rails bailed out without having any idea what to do.
Since it may be helpful for others to see how I diagnosed this problem, here was my thought process:
The issue couldn't be with Apache: no errors were written into the Apache log files.
The issue probably wasn't with Passenger: Passenger wasn't writing any errors to the Apache log file, and it seemed to be loading my app properly, since passenger-status showed it as loaded and it was displaying my app's 500 Internal Server Error page (not the default Apache one).
From there I surmised that it must be something broken in my app that happened very early on in the initialization phase, but wasn't something that caused the app to completely bail and throw an exception. I poked around in the Phusion Passenger Google Group, and ultimately stumbled upon this helpful post, which suggested that the error may be a database connectivity issue. Sure enough, removing this misconfigured database and all references to it made the app work!
Have you tried running the app locally using Passenger?
Try running the application locally on Mongrel in production mode to make sure there are no weird issues with that particular environment. If that works, then you know it's not an issue with your codebase. Since your static components are being served properly, Apache is working fine. The only gear left in the system is Passenger. At this point, I would say it's an improperly configured Passenger. You should post your Passenger config file and ask the question on ServerFault.
A couple of things to try:
Have you gone through the following from the docs?
6.3.7. My Rails application's log file is not being written to
There are a couple of things that you should be aware of:
By default, Phusion Passenger runs Rails applications in production mode, so please be sure to check production.log instead of development.log. See RailsEnv for configuration.
By default, Phusion Passenger runs Rails applications as the owner of environment.rb. So the log file can only be written to if that user has write permission to the log file. Please chmod or chown your log file accordingly. See User switching (security) for details.
If you're using a RedHat-derived Linux distribution (such as Fedora or CentOS) then it is possible that SELinux is interfering. RedHat's SELinux policy only allows Apache to read/write directories that have the httpd_sys_content_t security context. Please run the appropriate chcon command (see the Passenger users guide) to give your Rails application folder that context.
Have you checked your vhost or httpd.conf file? Do you have any logging directives?
Check the top-level Apache log file.
Try setting PassengerLogLevel to 1, 2, or 3, as shown here: http://www.modrails.com/documentation/Users%20guide.html#_passengerloglevel_lt_integer_gt
Do you have any Rack apps installed?
My suggestion would be to go right back to "Hello World" land and create the smallest possible Ruby example application and upload it to see if there is a problem with Passenger or Ruby on the server.
This may be a silly suggestion, but I suggest you start by increasing the logging level on production while you are testing. Do this in config/environments/production.rb and use:
config.log_level = :debug
This should at least get you some sort of backtrace so you can start to find the problem.
If you still get nothing, you may find that you have an issue with something as simple as a missing gem/plugin on your production server. That sort of thing may well manifest as a "500" error and just not be very verbose for you.
Can you run the test suite on your production server?
