S3 for Rails API endpoint - is it necessary?

A mobile iOS team wants an endpoint to which they can send a username, a user ranking, and either a single image related to that user or an array of images if the user has a gif or animation associated with them. They will then hit that endpoint in order to render these images on another page in their mobile app.
I decided to build this in Rails as a prototype. At first I thought I would need S3 in order to store the images, but now I'm not so sure. Can't I use the built-in database within my Rails app, since my Rails app won't be rendering any of the images? What are the advantages of using S3 in this case? Is S3 necessary?

For prototyping purposes, no, S3 isn't needed. You can store the images locally on the filesystem or even in the database, though I'd say you should start with Postgres right away, because it has an array type you can use for the images table.
However, if you are making this an actual feature of a product, yes, consider S3 very seriously. Something else you might want to look into is CloudFront, because it lets you build a CDN so that fetching images is faster on the user's side. Locality is a significant part of a fast UI, especially if images are a big part of it.
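As a minimal sketch of the Postgres-array idea, assuming what you store per user is a list of image URLs rather than binary blobs (the table and column names here are illustrative, not from the answer):

# Hypothetical migration using Postgres's native array type so a user
# row can hold either a single image URL or several (for a gif/animation).
class CreateUserImages < ActiveRecord::Migration[7.0]
  def change
    create_table :user_images do |t|
      t.string  :username, null: false
      t.integer :ranking
      t.string  :image_urls, array: true, default: [] # Postgres-only column type
      t.timestamps
    end
  end
end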

Related

How to upload and display images on Ruby on Rails without storing them?

I'm making a demo for a Ruby gem I created that processes images, and the demo application uses the gem. I want the demo to let the user upload an image and try it out, but I don't really want to store the image in a database.
I read about Redis, but I'm not sure it's the right solution, since I don't think it's intended to be used with images.
maybe you could simply ask the user to upload an image address as a string rather than a file?
Storing the images in Redis is not a bad idea; you could also set an expiry so keys older than x days or hours are deleted automatically. But keep in mind that Redis is still a database which you need to maintain.
Another approach is to just store them on the filesystem and delete the oldest file before storing a new one, so you only keep the latest x files.
http://qnimate.com/storing-binary-data-in-redis/
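Here's a minimal sketch of that Redis-with-expiry approach using the redis gem; the key naming and the one-hour TTL are my own choices, not from the answer:

require "redis"
require "securerandom"

redis = Redis.new

# Store the raw image bytes under a unique key with a TTL so stale demo
# uploads are evicted automatically (here: one hour).
key = "demo:image:#{SecureRandom.uuid}"
redis.set(key, File.binread("upload.png"), ex: 3600)

# Later, read the bytes back to serve or process them.
image_data = redis.get(key)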
I like this approach by Hassan; you could ask the user to point at an image hosted on Dropbox or Google Drive, for instance. Or ask them to enter their email address and use Gravatar. This is a lightweight approach, so if it's just a demo I would go for it.
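If you go the Gravatar route, the URL is derived from an MD5 of the trimmed, lowercased email address; a small sketch (the helper name and default size are mine):

require "digest/md5"

# Build a Gravatar URL for an email address; `size` is the image size in pixels.
def gravatar_url(email, size: 80)
  hash = Digest::MD5.hexdigest(email.strip.downcase)
  "https://www.gravatar.com/avatar/#{hash}?s=#{size}"
end

gravatar_url("user@example.com") # => "https://www.gravatar.com/avatar/<hash>?s=80"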

How to manage files uploaded by users?

I have a web app which uses nginx. Suppose it's in Rails, but it doesn't really matter.
I'm planning on around a hundred pictures/files uploaded every day by users. I don't want to use any ready-made solution for storing and uploading images; instead I want to manage that myself.
1) Is there an idiomatic place/path where I should store those images? Or will any path within reach of nginx work?
2) Should it be a separate folder from the one I use for storing images for CSS?
3) How would I organize the folder tree? That is, should it be something like /my_base_image_folder/{year}/{month}/{day}/{image_sequence_number}.jpg?
Or maybe /my_base_image_folder/{article_id}/{image_sequence_number}.jpg? Or should I put them all in the same folder, /my_base_image_folder/{img_guid}.jpg?
And why?
4) What's a recommended scheme for naming uploaded files? A GUID? Or a sequence number?
It all depends on your use case for consuming those images again. If you plan to retrieve images uploaded by users for those users in later sessions, i.e., if your users have to browse through their images in your application, it's better to store them in a_public_folder/user_id_hashed/. If you plan to retrieve images based on upload time or date, it's better to go with a_public_folder/year/month/day/.... Make sure the public folder is accessible to nginx with not-too-open permissions, just to be safe. As for naming the image files, a combination of a timestamp and a small random hex string should do; no big deal.
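A sketch of that timestamp-plus-random-hex naming (the helper and format are illustrative, not from the answer):

require "securerandom"

# Build a collision-resistant filename from the upload time plus a short
# random hex suffix, keeping the original file's extension.
def upload_filename(original_name)
  ext = File.extname(original_name)
  "#{Time.now.strftime('%Y%m%d%H%M%S')}-#{SecureRandom.hex(4)}#{ext}"
end

upload_filename("cat.jpg") # => e.g. "20240101123045-9f86d081.jpg"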

Storage of user data

When looking at how websites such as Facebook store profile images, the URLs seem to use randomly generated values. For example, Google's Facebook page's profile picture has the following URL:
https://scontent-lhr3-1.xx.fbcdn.net/hprofile-xft1/v/t1.0-1/p160x160/11990418_442606765926870_215300303224956260_n.png?oh=28cb5dd4717b7174eed44ca5279a2e37&oe=579938A8
However, why not just organise it like so:
https://scontent-lhr3-1.xx.fbcdn.net/{{ profile_id }}/50x50.png
Clearly this would be much easier in terms of storage and simplicity. Am I missing something? Thanks.
Companies like Facebook run fairly intense CDNs. The URLs may look randomly generated, but they aren't; each individual route is deliberate and programmed to be handled in that manner.
They aren't after simplicity of storage like you would be if you were just using FTP to connect to a basic marketing website server. While you might put all your images in an /images folder, Facebook is much too complex for this: dozens of different types of applications accessing hundreds if not thousands of CDNs and servers worldwide.
If you ever build a web app, such as a Ruby on Rails app, and you work with a service such as AWS (Amazon Web Services), you'll also encounter what seem like nonsensical URLs. But it's all part of the fast delivery network provided by the architecture. Every time you push your app up to the server, new URLs are generated for each unique resource automatically: CSS files, JavaScript files, image files, etc. are all named dynamically. You don't have to type each of these unique URLs individually every time you publish the app; the code simply knows where to look for them as part of the publishing process.
Example: you tell the web app to look for
//= require jquery
and it returns you http://example.com/assets/jquery-eb3e278249152b5b5d5170b73d9dbf52.js?body=1 in your header.
It doesn't matter that the URL is more complex than it seems it needs to be; the application recognizes it, and that's all that matters.
Simply put, I think it boils down to two main reasons: security and caching.
Security - adding these long, unpredictable hashes prevents others from guessing photo URLs and makes it pretty hard to download photos you aren't supposed to.
Consider what would happen if I could easily guess your profile photo URL and download it, even when you explicitly chose to share it only with friends.
Cache - by adding "random" query params to each photo, you make sure each photo instance gets its own URL. Thus you can store the photo in the browser's cache for a long time, knowing that whenever you replace it with a new one, the new photo will have a fresh URL and the browser won't keep showing you the old photo.
If you were to keep the same URL for each user's profile photo (e.g. https://scontent-lhr3-1.xx.fbcdn.net/{{ profile_id }}/50x50.png), and then upload a new photo, either one of these can happen:
If you stored the photo in the browser's cache for a long time, the browser will keep showing you the cached version (as long as the URL is the same and the cache hasn't expired, there's no need to re-download the image).
If, instead, you only keep the image in the cache for a short period of time, you end up hitting your server much more than actually needed, increasing the load and hurting performance.
I hope this clarifies it.
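This scheme is also why fingerprinted assets can be served with far-future cache headers. In Rails (5+) that's a single production setting; a sketch:

# config/environments/production.rb
Rails.application.configure do
  # Fingerprinted URLs change whenever content changes, so responses can
  # safely be cached by browsers and CDNs for a full year.
  config.public_file_server.headers = {
    "Cache-Control" => "public, max-age=31536000, immutable"
  }
end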
With your route scheme, how would you prevent strangers from accessing the pictures of a private account? The hash also prevents bots from downloading all the pictures.
I get your pain :-) Rather than dwell on how this problem arises, let me speak to a solution. It's normal for hashed or base64-encoded values to look like a mess in code, but with an identifier explaining what each one is, it doesn't stay that way.
I used to work at a company where we collated Facebook posts: using the Graph API we fetched each post's Insights object and extracted information from it for easy passing around within the UI and for sending back to our Redis cache store. Once we defined a data structure in TaffyDB describing how an object would be organized, everything made sense, thanks to its ability to query the useful, finite parts out of a long, junk-looking stream of minified JavaScript.
Refer: http://www.taffydb.com/
The extra values in the URL are useful to:
Track access. This is like when a newspaper appends "&homepage" vs. "&email" to an article URL, so their system knows how a reader found the page.
Avoid abuse and control access. Imagine a user setting a small, popular pornographic image as their profile image. They could then hijack the CDN into being a free web host for their porn site. The extra code in the URL is used internally by the CDN to limit the number of views.

profile pictures - how to store [duplicate]

This question already has answers here:
Storing Images in DB - Yea or Nay?
(56 answers)
Closed 10 years ago.
In my website, a user can upload a profile image.
I wanted to know the best way to store those images.
My thought is simply a dedicated directory, with the image name being the user_id.
Is that a good solution, or is there a smarter one?
You have two options: store the images yourself or use an external source (Gravatar).
If you're going to store the images, do you want them to be publicly available, or are they private? If they are publicly available, you can store them in your public folder.
You can use something like CarrierWave to handle the uploading, versioning and storing of the images.
For public stuff, I'll store the file in the public directory under the uploader/model name/field name/id location. This is more for organizational purposes on my part.
Check out http://railscasts.com/episodes/253-carrierwave-file-uploads for a good tutorial.
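A sketch of a CarrierWave uploader using that uploader/model name/field name/id layout; this mirrors CarrierWave's default store_dir, and the class name is illustrative:

class AvatarUploader < CarrierWave::Uploader::Base
  storage :file # files end up under public/ by default

  # Store under uploads/<model name>/<mounted field>/<record id>
  def store_dir
    "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
  end
end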
For private images, I'll set the store directory to something outside of the public folder and create a download action within the controller for the file. This way, the user cannot download the file except through the controller action. With authorization (CanCan) I can allow or disallow a user's access to the download action for that particular file (hence making it somewhat secure). If you are going to use a production server like Apache or nginx, make sure you set the appropriate handlers for sending the file (i.e. x_sendfile).
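A hypothetical sketch of that private download action, assuming CanCan for authorization and a CarrierWave-style file path; the model and route names are mine:

class DownloadsController < ApplicationController
  def show
    upload = Upload.find(params[:id])
    authorize! :read, upload # CanCan raises AccessDenied if not permitted

    # The file lives outside public/, so it can only be reached through
    # this action. With config.action_dispatch.x_sendfile_header set,
    # Apache/nginx takes over the actual file transfer.
    send_file upload.file.path, disposition: "inline"
  end
end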
It's very common to store images in a directory for small applications. However, there are a few things to take into consideration here:
Do you anticipate a lot of users? If you have a million users, storing everyone's photo in your directory will take up a lot of storage space for your application.
Are you deploying on Heroku? Many Rails apps are, and if you deploy on Heroku it will destroy any files you store locally whenever your app is moved to a different dyno (and you generally have no way of predicting when this will happen). You can read about the ephemeral filesystem here: https://devcenter.heroku.com/articles/dynos#isolation-and-security
In general I would advise against storing all your images locally, because rewriting the code as you scale will become painful. I recommend you upload to an Amazon S3 bucket and download the images as you need them (and cache them for when your user is logged in). It's helpful because you might have to deal with image processing (for example, resizing uploaded images or creating thumbnail versions), and it's easier to do this when you have background processes with persistent access to these files. I've used the 'aws' gem and S3 libraries for this, and they're really easy to use; you can read more here: http://amazon.rubyforge.org/
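The 'aws' gem linked above is quite dated; with the current aws-sdk-s3 gem an upload looks roughly like this (bucket name, region, and key are placeholders):

require "aws-sdk-s3"

# Credentials come from the environment (AWS_ACCESS_KEY_ID /
# AWS_SECRET_ACCESS_KEY) or an instance profile.
s3 = Aws::S3::Resource.new(region: "us-east-1")
object = s3.bucket("my-app-uploads").object("avatars/42.jpg")
object.upload_file("tmp/avatar.jpg")

object.public_url # URL you can hand out if the object is publicly readable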
However, if you intend for this to be a small app and are not deploying on Heroku, just saving to a local directory is a lot easier and pain-free.

Managing upload of images to create custom PDFs on Heroku - right tools

I'm designing an app which allows users to upload images (max 500k per image, roughly 20 images) from their hard drive to the site, so as to be able to make custom board games (e.g. snakes and ladders) in PDF format. These will be created instantly with Prawn and then made available for immediate download.
Neither the images uploaded nor the PDFs created need to be saved permanently on my app's side. The moment the user downloads the PDF, they are no longer needed.
Heroku doesn't support saving files to the filesystem (it does allow writing to the tmp directory, but says you shouldn't rely on it, which strikes it out for me). I'm wondering what tools/services I should look into to get around this. I've looked into Paperclip; I'm wondering if it's right for this type of job.
Paperclip is on the right track, but the key insight is that you need to use its S3 storage backend (Paperclip uses the filesystem by default, which as you've noticed is no good on Heroku). It's pretty handy: instead of flushing writes out to the filesystem, it uses the AWS::S3 gem to upload them to S3. You can read more about it in the rdoc here: http://github.com/thoughtbot/paperclip/blob/master/lib/paperclip/storage/s3.rb
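On the model, that looks roughly like this with the classic Paperclip API (the bucket and model names are placeholders):

class SourceImage < ActiveRecord::Base
  # Route Paperclip writes to S3 instead of the local filesystem.
  has_attached_file :image,
    storage: :s3,
    s3_credentials: {
      bucket: "my-boardgame-uploads",
      access_key_id: ENV["AWS_ACCESS_KEY_ID"],
      secret_access_key: ENV["AWS_SECRET_ACCESS_KEY"]
    }

  validates_attachment_content_type :image, content_type: /\Aimage\//
end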
Here's how the flow would work:
I'd let your users upload their multiple source images. Here's an article on allowing multiple attachments to one model with Paperclip: http://www.cordinc.com/blog/2009/04/multiple-attachments-with-vali.html.
Then, when you're ready to generate the PDF (probably in a background job, right?), download all the source images to somewhere in tmp/ (make sure the directory is based on your model id or something, so that if two people do this at once the files don't get stepped on). Once you've got all the images downloaded, you can generate your PDF. I know this is using the filesystem, but as long as you do all your filesystem interactions within one request or job cycle, it will work; your files will still be there. I use this method in a couple of production web apps. You can't count on tmp/ persisting between requests, but within one it's reliably there.
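A hypothetical sketch of that job cycle with Prawn; record_id and image_urls are assumed inputs, and the tmp layout is illustrative:

require "prawn"
require "open-uri"
require "fileutils"

# Per-record tmp directory so concurrent jobs don't step on each other.
dir = File.join("tmp", "pdf_jobs", record_id.to_s)
FileUtils.mkdir_p(dir)

# Download every source image within this one job cycle.
local_paths = image_urls.each_with_index.map do |url, i|
  path = File.join(dir, "img-#{i}.png")
  File.binwrite(path, URI.open(url).read)
  path
end

# Build the PDF from the local copies, one image per page.
Prawn::Document.generate(File.join(dir, "board.pdf")) do |pdf|
  local_paths.each_with_index do |path, i|
    pdf.start_new_page if i > 0
    pdf.image path, fit: [pdf.bounds.width, pdf.bounds.height]
  end
end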
Storing your generated PDF on S3 with Paperclip makes sense too, since then you can just hand your users the S3 URL. If you want, you can build something to clear the files off every so often if you don't want to pay the S3 costs, but they should be trivial.
Paperclip sounds like an ideal candidate. It saves images in RAILS_ROOT/public/system/ by default, which is both persistent and private (it shouldn't be possible to enumerate them on shared hosting).
You can configure it to produce thumbnails of your images if you wish.
And it can remove the images it manages when the associated model is destroyed - after your user downloads their PDF, you delete the record from the database.
Prawn might not be appropriate, depending on the complexity of the PDFs you need to generate. If you have $$$, go for PrinceXML and the princely gem. I've had some success with wkhtmltopdf, which generates PDFs from a WebKit render of HTML/CSS - but it doesn't support any of the advanced page manipulation that Prince does.
