I've got the following Rails code:
send_file '/test.pdf'
The file downloads, but it's 0 bytes. Has anyone got any ideas on how to fix this?
Thanks
I believe that send_file depends on support from your web server to work. Are you running your app using the Rails built-in server? If so, I think you'll see the behaviour you've got here.
Basically, the idea behind send_file is that it sets an HTTP header, X-Sendfile, pointing at the file you want to send. The web server sees this header and, rather than returning the body of your response, sends the specified file instead.
The benefit of this approach is that your average web server is highly optimised for serving static files very quickly, and will typically get the job done many times faster than Rails itself can.
So, the solutions to your problem are to either:
* Use a webserver that supports X-Sendfile (e.g. Apache), or
* As rubyprince commented, use send_data instead, which makes Ruby do the heavy lifting (see the sketch below).
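For reference, here's a minimal sketch of both options; the paths and filenames are placeholders for whatever your app actually uses:
# config/environments/production.rb
# Tell Rails which header your front-end server understands:
config.action_dispatch.x_sendfile_header = "X-Sendfile"          # Apache (mod_xsendfile)
# config.action_dispatch.x_sendfile_header = "X-Accel-Redirect"  # nginx

# Or, in the controller, have Ruby read and send the file itself:
def download
  path = Rails.root.join("public", "test.pdf") # placeholder path
  send_data File.read(path), filename: "test.pdf", type: "application/pdf"
end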
Just as an aside, you should be able to confirm that this is what's happening by looking at the response headers using either Firebug or Safari/Chrome's developer tools.
We are using Twilio's API (using TwiML), and are serving ".wav" files to them via our standard Rails/Passenger/nginx stack. However, I'm running into an issue -- according to Twilio, our server is sending "application/octet-stream" for .wav files, instead of the required "audio/wav".
I've made sure both Rails (in the mime_types initializer) and nginx (in the mime.types file) have the appropriate mime type. Yet, roughly half the time, Twilio reports that it cannot retrieve the file due to an "application/octet-stream" mime type. The other half of the time, it works fine.
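For reference, the Rails half of that registration is a one-liner in config/initializers/mime_types.rb (the nginx half lives in its mime.types file):
Mime::Type.register "audio/wav", :wav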
Has anyone experienced behavior like this?
I may have finally cleared this issue up... I modified our <Play> verbs to use full URLs instead of paths, and I used an explicit http:// scheme in those URLs.
# Old
<Play>/assets/path/to/file.wav</Play>
# New
<Play>http://server.com/assets/path/to/file.wav</Play>
I'm not sure whether it's the explicit protocol or the absolute URL, but my error rate immediately dropped from around 50% to 0%. I'll leave this open for other answers, in case someone knows why this might be the case or has another solution.
I'm using Heroku to host a web application whose primary focus is hosting videos. The videos are hosted through Vimeo Pro, and I'm using the vimeo gem by matthooks to help handle the upload process. Upload works for small files, but not for larger ones (~50MB, for example).
A look at heroku logs shows that I am getting HTTP error 413, which stands for "Request Entity Too Large." I believe this might have to do with a limit that Heroku places on file uploads (greater than 30MB, according to this webpage). The problem, though, is that any information I can find on the subject seems to be outdated and conflicting (like this page, which claims there is no size limit). I also couldn't find anything on Heroku's site about this.
I've searched Google and found a few somewhat relevant pages (one and two), but no solutions that worked for me. Most of the pages I found deal with uploading large files to Amazon S3, which is different from what I'm trying to do.
Here's the relevant output of the logs:
2012-07-18T05:13:31+00:00 heroku[nginx]: 152.3.68.6 - - [18/Jul/2012:05:13:31 +0000]
"POST /videos HTTP/1.1" 413 192 "http://neoteach.com/components/19" "Mozilla/5.0
(Macintosh; Intel Mac OS X 10.7; rv:13.0) Gecko/20100101 Firefox/13.0.1" neoteach.com
There are no other errors in the logs; this is the only output that appears when I try to upload a video that is too large, which means this is not a timeout error or a problem with exceeding the allotted memory per dyno.
Does Heroku really place a limit on upload sizes? If so, is there any way to change this limit? Note that the files themselves are not being stored on Heroku's servers at all; they are merely being passed on to Vimeo's servers.
If the problem is not a limit on upload sizes, does anyone have an idea of what else might be going wrong?
Much thanks!
Update:
OP here. I'm still not exactly sure why I was getting this particular 413 error, but I was able to come up with a solution that works using the s3_swf_upload gem. The implementation involves Flash, which is less than ideal, but it was the only solution (out of the 3 or 4 I tried) that I could get working.
As Neil pointed out (thanks, Neil!), the error I should have been getting is "H12 - Request timeout", and I did end up running into it after repeated trials. The problem occurs when you try to upload large files to the Heroku server from your controller (using a web dyno), because it takes too long for the server to respond to the POST request.
The proper approach is to send the file directly to S3 without passing through Heroku.
Here's a high-level overview of my approach:
1. Use the s3_swf_upload gem to supply a direct upload form to S3.
2. Detect when the file is done uploading with the JavaScript callback function provided by the gem.
3. Using JavaScript, send Rails a POST to let your server know the file is done uploading.
4. The controller that responds to the JavaScript POST does two things: (a) assigns an s3_key attribute to the video object (served up as a param in the form), and (b) kicks off a background task using the delayed_job gem.
5. The background task retrieves the file from S3. I used the aws-sdk gem to accomplish this, because it was already included in s3_swf_upload. Note that this is distinctly different from the aws-s3 gem (in fact, the two conflict with one another).
6. After the file has been retrieved from S3, I use the vimeo gem to upload it to Vimeo (still in the background).
The implementation above works, but it isn't perfect. For files close to 500MB in size, you'll still run into R14 errors in your worker dynos: Heroku only allots 512MB of memory per dyno, so you can't load the entire file into memory at once. The way around this is to implement some sort of chunking in the final step, where you retrieve the file from S3 and upload it to Vimeo piece by piece. I'm still working on this part, and I'd love to hear any suggestions you might have.
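For what it's worth, here's a rough sketch of what steps 5 and 6 might look like as a delayed_job task. The bucket name, model, and upload helper are all placeholders, the vimeo gem call is assumed rather than verified, and the S3 read is streamed to disk in chunks to keep memory down (the Vimeo side would still need its own chunking for very large files):
# Assumes aws-sdk v1, which is what s3_swf_upload bundled at the time.
class TransferToVimeoJob < Struct.new(:video_id)
  def perform
    video  = Video.find(video_id)
    s3     = AWS::S3.new # credentials come from the environment
    object = s3.buckets[ENV['S3_BUCKET']].objects[video.s3_key]

    file = Tempfile.new(['upload', File.extname(video.s3_key)])
    begin
      # Stream the S3 object to disk chunk by chunk instead of loading
      # it all into memory (this is the part that avoids R14s on the read side).
      object.read { |chunk| file.write(chunk) }
      file.flush
      upload_to_vimeo(video, file.path) # hypothetical wrapper around the vimeo gem
    ensure
      file.close
      file.unlink
    end
  end
end

# Enqueued from the controller in step 4:
# Delayed::Job.enqueue TransferToVimeoJob.new(@video.id)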
Hopefully this helps someone. Feel free to ask me any questions. Like I said, my solution isn't perfect, so don't hesitate to add your own answer if you think it could be better.
I think the best option here is indeed to upload directly to S3. It's much cheaper and much more secure than letting users upload files to your own server (or to Heroku, in this case). It's also a well-proven pattern used by lots of video hosting platforms (I know vzaar does this).
Check out the jQuery upload plugin, which allows direct uploads to S3: https://github.com/blueimp/jQuery-File-Upload
Also check out the Railscasts around this topic: #381 and #383.
Your biggest problem is not the size of the files here, but the fact that you are expecting the user to upload large files to Heroku, which then passes them on. The issue is that all requests on the Heroku platform must return their first byte within 30 seconds, which in your case is very unlikely to happen.
Therefore, you need to look at getting users to upload directly to S3/Vimeo/wherever, and then connect your application data to those uploaded assets.
If you're using Ruby, the carrierwave_direct gem might be worth a look for how it's done. Failing that, there are third-party services out there that let you do this via some code you drop into the page, but these come with an attached cost.
I have a Rails app that saves files in Mongo. This works great, and I have it set up to serve those files, but for some use cases I need to fetch a file and write it to disk (merging PDF files).
In IRB or from a simple Ruby file I can run the following code and get the file almost instantly, but when the same code is called from within Rails it times out.
require 'open-uri'

open('id1_front.pdf', 'wb') do |file|
  file << open('http://127.0.0.1:3000/files/uploads/id1_front.pdf').read
  p file
end
-
Timeout::Error (Timeout::Error):
app/controllers/design_controller.rb:38:in `block in save'
app/controllers/design_controller.rb:37:in `save'
Does anyone know why it would be timing out in Rails? Any alternate solutions for getting a file out of Mongo and writing it to disk?
Thanks!
When you're running your development server, you have only one thread on which to respond to requests, and that thread is blocked while a request is being served. So: you request design_controller#save, which then tries to make another request to the same web server for the uploaded file. That second request can never complete, because the web server is still busy with the first one.
You might be able to get around this problem by using Thin as your Rails server instead of WEBrick. Add gem 'thin' to your Gemfile and start your server with rails server thin. I'm not sure whether this will allow more than one request to be serviced simultaneously, but it's at least worth a shot.
--EDIT--
After some testing, I determined that Thin is unfortunately also single-threaded, so it will have this exact same problem.
After a bit of Googling, I discovered shotgun. It hasn't been active for a while, but it looks like it might fix your problem, since it spawns a new application instance per request in development. Give it a shot.
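As an alternative that side-steps HTTP entirely, you could read the file straight out of GridFS and write it to disk yourself. A rough sketch, assuming the classic mongo driver's GridFileSystem API; the database name and stored filename are placeholders for however your app stores uploads:
require 'mongo'

db   = Mongo::Connection.new.db('myapp_development') # placeholder DB name
grid = Mongo::GridFileSystem.new(db)

# Read the stored file from GridFS and write it straight to disk.
grid.open('uploads/id1_front.pdf', 'r') do |gridfile|
  File.open('id1_front.pdf', 'wb') do |file|
    file.write(gridfile.read)
  end
end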
I'm having trouble figuring out how to do this in Rails, though that's probably because I don't know the proper term for it.
I basically want to do this:
def my_action
  sleep 1
  # output something in the response, but keep it open
  print '{"progress":15}'
  sleep 3
  # output something else, keep it open
  print '{"progress":65}'
  sleep 1
  # append some more, and close the response
  print '{"success":true}'
end
However, I can't figure out how to do this. I basically want to replicate a slow internet connection.
I need to do this because I'm scraping websites, which takes time; that's where I'm 'sleeping' above.
Update
I'm reading this from iOS, so I don't think I want a WebSocket server.
Maybe this is exactly what you're looking for:
Infinite streaming JSON from Rails 3.1
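If I remember right, the trick in that answer is to hand Rails (3.1+) a response body object that responds to each, so the response is sent chunked as each piece is yielded. A rough sketch of that approach against your example; everything here is illustrative, and whether chunks actually reach the client unbuffered depends on your server:
class ProgressStream
  # Rails calls #each and flushes every yielded chunk to the client.
  def each
    sleep 1
    yield '{"progress":15}'
    sleep 3
    yield '{"progress":65}'
    sleep 1
    yield '{"success":true}'
  end
end

def my_action
  headers['Content-Type'] = 'application/json'
  self.response_body = ProgressStream.new
end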
You probably want to do some reading around HTML5 WebSockets (there are backwards-compatible hacks for older browsers), which let you push data to the client from the server.
Rails has a number of ways to implement a WebSocket server; this question covers some of the options: Best Ruby on Rails WebSocket tool
Even if that worked on the server side, how would you handle it on the client side?
An HTTP request normally gets just one response (which may be chunked when streaming, though I don't think that would work in your case).
I guess you would either have to look into WebSockets or make separate requests for each step (a sketch of the latter follows).
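Concretely, the separate-requests route might look like this: the scrape runs in the background and writes its progress somewhere the controller can read. Everything here, including the cache keys, is illustrative:
# The background job records how far along the scrape is.
def scrape_site(job_id)
  Rails.cache.write("scrape_#{job_id}_progress", 15)
  # ... do some scraping ...
  Rails.cache.write("scrape_#{job_id}_progress", 65)
  # ... finish up ...
  Rails.cache.write("scrape_#{job_id}_progress", 100)
end

# The client (your iOS app) polls this action until progress reaches 100.
def progress
  value = Rails.cache.read("scrape_#{params[:id]}_progress") || 0
  render json: { progress: value }
end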
I'm building a Rails app that interacts with a third-party API.
When a user uploads a file to Rails, it should be forwarded on to the third-party site via an HTTP POST.
In some cases, the upload can be several hundred MB.
At the moment, I've just been re-posting to the API using Net::HTTP, accessing the multipart form object like so:
@tempfile = params[:video][:file_upload].tempfile
This is hella slow, though, and feels kinda dirty.
Is there a better way to do this?
Is it possible to have the user post directly to the third-party service, or do you have to route the upload through your Rails stack? Ideally you would post directly, so you never have to load the file into your stack and then re-post it to the API. If you can't post directly, I would recommend seeing whether the API has a streaming interface, so that you can send the file in parts instead of all at once (see the sketch below). Either way, I think you'll start running into timeout errors on your side and on the API side with large files, so you'll have to increase your own timeouts or build a different kind of streaming file uploader.
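For what it's worth, here's a rough sketch of streaming the tempfile to the API with Net::HTTP's body_stream, so the file is never read into memory all at once. The endpoint, headers, and field names are placeholders for whatever the API actually expects:
require 'net/http'

# Inside your controller action:
uri  = URI('https://api.example.com/videos') # placeholder endpoint
file = params[:video][:file_upload].tempfile

request = Net::HTTP::Post.new(uri.request_uri)
request['Content-Type']      = 'application/octet-stream'
request['Transfer-Encoding'] = 'chunked' # required when no Content-Length is set
request.body_stream = file # Net::HTTP reads and sends this in chunks

Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == 'https') do |http|
  http.request(request)
end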
Spin up a background job using DelayedJob; in the delayed job, you could try Rails' redirect_to.
https://github.com/tobi/delayed_job
http://apidock.com/rails/ActionController/Base/redirect_to