How to post images directly to S3 on a Heroku app from a JSON request? - ruby-on-rails

I have a rails app hosted on heroku and a mobile app made with rhodes.
I'd like to send images from the mobile app to my rails app using an HTTP POST request. Since Heroku doesn't provide persistent file storage, I'm using Amazon S3.
I can't send the file from Heroku to S3 because it takes more than 30 seconds and causes a timeout. I've seen plenty of examples of uploading a file directly to S3 when the user has a form, but that obviously won't work in this case.
I tried using the suggestion here:
rails 3, heroku, aws-s3, simply trying to upload a file to S3 that is POSTed (http/multipart) to our app
but I still get a 503 request timeout.
I don't want to put my amazon s3 keys on the app.
Right now, I feel like my only option is to host my app on EC2 which I would rather not do as I like the simplicity of Heroku.
Also, it seems strange that these uploads would take so long regardless. I'm only posting images from a mobile phone camera, so they're not huge files.

I was getting the same error in a project at my job. Some people say the only way to solve this is to upload files directly to the S3 bucket. That's difficult in our case, because we use the Paperclip gem for Rails and generate several resized versions of each image.
Others say that "The Heroku timeout is a set-in-stone thing that you need to work around. Direct upload to S3 is the only option, with some sort of post-upload processing required", so I recommend the following.
This may not be a full solution, but it was very useful for me in a Rails app:
Worker Dynos, Background Jobs and Queueing
Perhaps you should move this heavy lifting into a background job which can run asynchronously from your web request.
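For instance, a rough sketch of that idea with delayed_job (the Photo model, its raw_data column, and upload_to_s3! are hypothetical, not from the question):

```ruby
# Sketch: accept the POST quickly, defer the slow S3 push to a worker dyno.
class PhotosController < ApplicationController
  def create
    # Stash the raw bytes somewhere fast (a DB column here, for illustration).
    photo = Photo.create!(raw_data: request.body.read)
    # delayed_job's .delay proxy enqueues the method call for a worker dyno.
    photo.delay.upload_to_s3!
    head :accepted # respond well within Heroku's 30-second request window
  end
end
```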
Regards!

So I finally figured out how to do this.
After lots of back and forth with AWS reps and Rackspace Cloud Files reps, and after pulling my hair out, I realized it would be a lot less work to just get another rails server that could write to the filesystem.
So, I started another rails app on openshift. It's just as easy as Heroku to get started (in fact, I might consider moving my rails app there, but it's too new for my taste right now and doesn't have the community around it that Heroku does).
Then, I just had to have communications between my two rails apps.
I know it's not the best/scalable/elegant fix, but it got the job done, and that's what matters in the end!

Related

Rails. Seahorse::Client::NetworkingError Amazon S3

This duplicates the thread "Seahorse::Client::NetworkingError Amazon S3 file upload with rails", but I don't understand what's actually going on there, so I'm looking for a better explanation here.
The issue: my app asks a user to upload a photo, then saves the photo to AWS S3 using the aws-sdk-rails gem. Everything works fine most of the time.
But yesterday I got an error notification:
Seahorse::Client::NetworkingError: Connection reset by peer
...
99 File "/app/app/controllers/customers_controller.rb" line 22 in create
...
This issue says it might be a network problem: https://github.com/aws/aws-sdk-ruby/issues/1572
But I still can't figure out how to handle this kind of situation.
Should I use some sort of background worker for this? I have no experience with background workers (if that's the right track at all).
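For what it's worth, one low-effort mitigation is to retry the transient failure with backoff. A sketch, assuming v2/v3-style aws-sdk clients (the helper name and bucket/key are illustrative):

```ruby
require 'aws-sdk-s3'

MAX_ATTEMPTS = 3

# Retries transient network failures a bounded number of times.
def upload_with_retries(io, bucket:, key:)
  s3 = Aws::S3::Client.new # region/credentials come from the environment
  attempts = 0
  begin
    attempts += 1
    s3.put_object(bucket: bucket, key: key, body: io)
  rescue Seahorse::Client::NetworkingError
    raise if attempts >= MAX_ATTEMPTS
    sleep(2**attempts)                   # simple exponential backoff
    io.rewind if io.respond_to?(:rewind) # re-read the body from the start
    retry
  end
end
```

A background worker is still the better home for this, since retries make the request even more likely to hit Heroku's 30-second timeout.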

How to speed up image uploading with CarrierWave and Rails 4

I am working with Rails 4 and CarrierWave, uploading images and files to S3, but it's taking a long time and is very slow. How can I speed this up?
How should I handle this using background jobs, and handle requests from lots of users?
Fetching images back into my application is also very slow.
Can you suggest how to make the Rails server fast while uploading files?
You might consider uploading directly from the client to S3 via Ajax. This would take your server almost completely out of the mix.
Uploading Image to Amazon s3 with HTML, javascript & jQuery with Ajax Request (No PHP)
This is a well documented concept elsewhere online.
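On the Rails side, one way to enable a direct browser-to-S3 upload without shipping your AWS keys to the client is a presigned POST. A sketch with the aws-sdk-s3 gem (the controller, bucket name, and key prefix are placeholders):

```ruby
require 'aws-sdk-s3'
require 'securerandom'

class DirectUploadsController < ApplicationController
  # Returns the URL and signed form fields the browser needs to POST a
  # file straight to S3; the AWS keys never leave the server.
  def new
    bucket = Aws::S3::Resource.new.bucket('my-uploads-bucket') # placeholder
    post = bucket.presigned_post(
      key: "uploads/#{SecureRandom.uuid}/${filename}", # S3 substitutes ${filename}
      success_action_status: '201',
      content_length_range: 0..(10 * 1024 * 1024)      # let S3 reject oversized files
    )
    render json: { url: post.url, fields: post.fields }
  end
end
```

The Ajax code then POSTs the file to `post.url` with `post.fields` as the form data.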
Amazon S3 now has notifications for newly created objects.
http://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html
You could drop the upload notifications into an Amazon SQS queue, then use a gem like Fog in a background worker that pulls events off the queue and creates or updates database records to reflect each newly completed upload.
https://github.com/fog/fog
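That worker, sketched with the aws-sdk-sqs gem rather than Fog (the queue URL env var and the Upload model are placeholders):

```ruby
require 'aws-sdk-sqs'
require 'json'

sqs = Aws::SQS::Client.new
queue_url = ENV.fetch('UPLOADS_QUEUE_URL') # placeholder

# Long-poll the queue that receives S3 ObjectCreated notifications.
loop do
  resp = sqs.receive_message(queue_url: queue_url, wait_time_seconds: 20)
  resp.messages.each do |msg|
    event = JSON.parse(msg.body)
    event.fetch('Records', []).each do |record|
      key = record.dig('s3', 'object', 'key')
      # Upload.create!(s3_key: key) # hypothetical model to track the upload
      puts "Upload completed: #{key}"
    end
    # Only delete after successful processing.
    sqs.delete_message(queue_url: queue_url, receipt_handle: msg.receipt_handle)
  end
end
```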
Regardless of the solution, if you're uploading big files, it's likely your local network's upload speed that is the bottleneck.

How to Upload Large Files on Heroku (Particularly Videos)

I'm using heroku to host a web application with the primary focus of hosting videos. The videos are hosted through vimeo pro, and I'm using the vimeo gem by matthooks to help handle the upload process. Upload works for small files, but not for larger ones (~50mb, for example).
A look at heroku logs shows that I am getting http error 413, which stands for "Request Entity Too Large." I believe this might have to do with a limit that heroku places on file uploads (greater than 30mb, according to this webpage). The problem though is that any information I can find on the subject seems to be outdated and conflicting (like this page that claims there is no size limit). I also couldn't find anything on heroku's site about this.
I've searched google and found a few somewhat relevant pages (one and two), but no solutions that worked for me. Most of the pages I found deal with uploading large files to amazon s3, which is different from what I'm trying to do.
Here's the relevant output of the logs:
2012-07-18T05:13:31+00:00 heroku[nginx]: 152.3.68.6 - - [18/Jul/2012:05:13:31 +0000]
"POST /videos HTTP/1.1" 413 192 "http://neoteach.com/components/19" "Mozilla/5.0
(Macintosh; Intel Mac OS X 10.7; rv:13.0) Gecko/20100101 Firefox/13.0.1" neoteach.com
There are no other errors in the logs. This is the only output that appears when I try to upload a video that is too large. Which means that this is not a timeout error or a problem with exceeding the allotted memory per dyno.
Does heroku really place a limit on upload sizes? If so, is there any way to change this limit? Note that the files themselves are not being stored on heroku's servers at all, they are merely being passed on to vimeo's servers.
If the problem is not limit on upload sizes, does anyone have an idea of what else might be going wrong?
Much thanks!
Update:
OP here. I'm still not exactly sure why I was getting this particular 413 error, but I was able to come up with a solution that works using the s3_swf_upload gem. The implementation involves flash, which is less than ideal, but it was the only solution (out of 3 or 4 that I tried) that I could get working.
As Neil pointed out (thanks Neil!), the error I should have been getting is "H12 - Request timeout". And I did end up running into this error after repeated trials. The problem occurs when you try to upload large files to the heroku server from your controller (using a web dyno), because it takes too long for the server to respond to the post request.
The proper approach is to send the file directly to s3 without passing through heroku.
Here's a high-level overview of my approach:
Use the s3_swf_upload gem to supply a direct upload form to s3.
Detect when the file is done uploading with the javascript callback function provided in the gem.
Using javascript, send rails a post message to let your server know the file is done uploading.
The controller that responds to the javascript post does two things: (a) assigns an s3_key attribute to the video object (served up as a param in the form). (b) initiates a background task using the delayed_job gem.
The background task retrieves the file from s3. I used the aws-sdk gem to accomplish this, because it was already included in s3_swf_upload. Note that this is distinctly different from the aws-s3 gem (in fact they conflict with one another).
After the file has been retrieved from s3, I used the vimeo gem to upload it to vimeo (still in the background).
The implementation above works, but it isn't perfect. For files that are close to 500MB in size, you'll still run into R14 errors in your worker dynos. This occurs because heroku only allots 512MB of memory per dyno, so you can't load the entire file into memory at once. The way around this problem is to implement some sort of chunking in the final step, where you retrieve the file from s3 and upload it to vimeo piece by piece. I'm still working on this part, and I'd love to hear any suggestions you might have.
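For that chunking, one possible direction (a sketch using today's aws-sdk-s3 API rather than the aws-sdk v1 gem mentioned above) is to stream the object to a tempfile so the whole file never sits in memory:

```ruby
require 'aws-sdk-s3'
require 'tempfile'

# Streams the object to disk in chunks, keeping the worker dyno's
# memory footprint small regardless of the video's size.
def fetch_video(key)
  file = Tempfile.new(['video', File.extname(key)])
  file.binmode
  Aws::S3::Client.new.get_object(
    bucket: 'my-videos-bucket', # placeholder
    key: key,
    response_target: file.path  # written incrementally as it downloads
  )
  file
end
```

The Vimeo upload could then read the tempfile in slices rather than as one giant string.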
Hopefully this might help someone. Feel free to ask me any questions. Like I said, my solution isn't perfect so feel free to add your own answer if you think it could be better.
I think the best option here is indeed to upload directly to S3. It's much cheaper and much more secure than allowing users to upload files to your own server (or Heroku in this case). It's also a well-proven pattern used by lots of video hosting platforms (I know vzaar do this).
Check out the jQuery upload plugin, which allows direct uploads to S3: https://github.com/blueimp/jQuery-File-Upload
Also check out the Railscasts around this topic: #381 and #383.
Your biggest problem is not the size of the files here, but the fact that you are expecting the user to upload large files to Heroku, and then pass them on. The issue here is that all requests on the Heroku platform must return the first byte within 30 seconds - which in your case is very unlikely.
Therefore, you need to look at getting users to upload directly to S3/Vimeo/wherever, and then connect your application data to these uploaded assets.
If you're using Ruby, then the carrierwave_direct gem might be worth a look for how it's done. Failing that, there are third-party services out there that let you do this via some code you can drop into the page, but these come with an attached cost.
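A taste of what carrierwave_direct looks like, going by its README (the Video model and mounted uploader are assumptions):

```ruby
# app/uploaders/video_uploader.rb
class VideoUploader < CarrierWave::Uploader::Base
  include CarrierWaveDirect::Uploader # the form will POST straight to S3
end

# app/controllers/videos_controller.rb
class VideosController < ApplicationController
  def new
    @uploader = Video.new.video                       # uploader mounted on Video#video
    @uploader.success_action_redirect = new_video_url # S3 redirects here when done
  end
end
```

The view then renders the form with the gem's `direct_upload_form_for @uploader` helper, so the file bypasses your dynos entirely.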

fb_graph upload photo from base64 encoded string

I'm receiving attachments from postmarkapp (described here: http://developer.postmarkapp.com/developer-inbound-parse.html#attachments).
I want to upload those photos to facebook using fb_graph (https://github.com/nov/fb_graph) using its photo! method (https://github.com/nov/fb_graph/wiki/Photo-and-Album).
This is easy and works fine in testing when specifying a :source from an actual file, as in the examples.
However, I'm trying not to write out to a file, but instead just convert the base64-encoded string to a StringIO and pass that as the :source argument. This doesn't work and I get this error:
FbGraph::InvalidRequest: OAuthException :: (#324) Requires upload file
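Concretely, the failing attempt looks roughly like this (a sketch; the attachment keys follow Postmark's documented inbound JSON, and the access token is assumed):

```ruby
require 'base64'
require 'stringio'

decoded = Base64.decode64(attachment['Content']) # Postmark inbound attachment body
me = FbGraph::User.me(ACCESS_TOKEN)

# Works when :source is an actual file:
#   me.photo!(source: File.new('photo.jpg'), message: '...')
# Fails with (#324) Requires upload file:
me.photo!(source: StringIO.new(decoded), message: 'From my inbound email')
```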
The reason I don't want to write out a file is that I'm using heroku and delayed_job, so I'm not sure a file I write out will still be around when the job is processed. Having it still be around would be nice, though, since my current plan is to store the images in the DB for the delayed job.
Thanks.
I couldn't find a way to make this work with Heroku without first uploading to MongoHQ with GridFS. You can't use the Cedar ephemeral filesystem, because files written during a controller action won't be visible to your delayed_job worker.
So even though it sucks, I do this:
Receive upload from Postmark (blocks one dyno)
Write to MongoHQ using GridFS (this involves one upload, which blocks one dyno)
Queue job using delayed_job
Read back from MongoHQ (blocks one worker while downloading)
Re-upload to FB when posting
So why not just post directly to Facebook if I'm going to incur that initial blocking cost of uploading to MongoHQ anyway? Because that upload is way faster than uploading to FB, for reasons unknown.
The right answer on Heroku is to have a node.js dyno handle these callbacks from Postmark, so the dyno isn't blocked during either the read from Postmark or the write to MongoHQ (or Facebook), and then to do some extra work to keep the node app and the Rails app in sync.
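Steps 2 and 4 of that list might look like this with GridFS (a sketch using the modern mongo driver's FSBucket API, which differs from the gems available when this was written):

```ruby
require 'mongo'
require 'stringio'

client = Mongo::Client.new(ENV.fetch('MONGOHQ_URL')) # addon-provided connection URL
fs = client.database.fs

# Step 2: stash the decoded attachment in GridFS from the controller.
file_id = fs.upload_from_stream(attachment['Name'], StringIO.new(decoded_bytes))

# Step 4: the delayed_job worker reads it back before re-uploading to FB.
io = StringIO.new
fs.download_to_stream(file_id, io)
io.rewind
```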

Why would you upload assets directly to S3?

I have seen quite a few code samples/plugins that promote uploading assets directly to S3. For example, if you have a user object with an avatar, the file upload field would load directly to S3.
The only way I see this being possible is if the user object is already created in the database and your S3 bucket + path is something like
user_avatars.domain.com/some/id/partition/medium.jpg
But then if you had an image tag that tried to access that URL when an avatar was not uploaded, it would yield a bad result. How would you handle checking for existence?
Also, it seems like this would not work well for most has_many associations. For example, if a user had many songs/MP3s, where would you store those, and how would you access them?
Also, your validations will be shot.
I am having trouble thinking of situations where direct upload to S3 (or any cloud) is a good idea and was hoping people could clarify either proper use cases, or tell me why my logic is incorrect.
Why pay for storage/bandwidth/backups/etc. when you can have somebody in the cloud handle it for you?
S3 (and other Cloud-based storage options) handle all the headaches for you. You get all the storage you need, a good distribution network (almost definitely better than you'd have on your own unless you're paying for a premium CDN), and backups.
Allowing users to upload directly to S3 takes even more of the bandwidth load off of you. I can see the tracking concerns, but S3 makes it pretty easy to handle that situation. If you look at the direct upload methods, you'll see that you can force a redirect on a successful upload.
Amazon will then pass the following to the redirect handler: bucket, key, etag
That should give you what you need to track the uploaded asset after success. Direct uploads give you the best of both worlds. You get your tracking information and it unloads your bandwidth.
Check this link for details: Amazon S3: Browser-Based Uploads using POST
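The Rails side of that redirect-based tracking can be small. A sketch (the Avatar model and route are hypothetical):

```ruby
# S3's success_action_redirect sends the browser here after a direct
# upload, appending bucket, key, and etag as query parameters.
class UploadsController < ApplicationController
  def s3_callback
    Avatar.create!(
      s3_bucket: params[:bucket],
      s3_key:    params[:key],
      etag:      params[:etag]
    )
    redirect_to root_path, notice: 'Upload recorded'
  end
end
```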
If you are hosting your Rails application on Heroku, the reason could very well be that Heroku doesn't allow file-uploads larger than 4MB:
http://docs.heroku.com/s3#direct-upload
So if you would like your users to be able to upload large files, this is the only way forward.
Remember how web servers work.
Unless you're using an async web setup like you could achieve with Node.js or Erlang (just two examples), every upload request your web application serves ties up an entire process or thread while the file is being uploaded.
Imagine that you're uploading a file that's several megabytes large. Most internet users don't have tremendously fast uplinks, so your web server spends a lot of time doing nothing. While it's doing all of that nothing, it can't service any other requests. Which means your users start to get long delays and/or error responses from the server. Which means they start using some other website to get the same thing done. You can always have more processes and threads running, but each of those costs additional memory which eventually means additional $.
By uploading straight to S3, in addition to the bandwidth savings that Justin Niessner mentioned and the Heroku workaround that Thomas Watson mentioned, you let Amazon worry about that problem. You can have a single-process webserver effectively handle very large uploads, since it punts that actual functionality over to Amazon.
So yeah, it's more complicated to set up, and you have to handle the callbacks to track things, but if you deal with anything other than really small files (and even in those cases), why cost yourself more money?
