Let's say I have a project with something like a media pool. Basically, I want users to be able to upload any kind of file (images, videos, PDFs, etc.).
I was thinking about going with Refile, since it supports on-the-fly processing of images. That's nice, because there is going to be an image API that should let users request an image in whatever size they need.
BUT, how would I handle PDF uploads or video uploads (even video processing)?
Is there maybe a better alternative to refile?
Thank you very much!
First of all, file attachment libraries can generally upload any type of file. The most popular are Paperclip and CarrierWave. They give you the ability to process files on upload, which is suitable for videos. However, they don't allow you to process on the fly.
Dragonfly and Refile, on the other hand, are designed for on-the-fly processing. An upside of Refile is its support for direct uploads. One downside of Refile is that you have to serve all files through its Rack app, so if you have videos uploaded to S3 that you won't process, you still pay a performance penalty on the first non-cached render. An upside of Dragonfly is that it has much more advanced on-the-fly processing support, and it also allows you to process on upload.
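To give a flavor of the on-the-fly style, here's a minimal Dragonfly sketch (assuming the dragonfly gem with its standard Rails setup; the model and attribute names are made up):

```ruby
class Photo < ActiveRecord::Base
  dragonfly_accessor :image # backed by an image_uid column in the table
end

# Nothing is processed at upload time; the resize happens the first time
# this URL is hit, and the result is cached for subsequent requests:
photo.image.thumb("300x300#").url
```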
Lastly, we come to Shrine. Shrine was designed for processing on upload, and it's the only library with native support for background jobs, which is especially useful for longer processing like video transcoding. Shrine also has a Transloadit integration, if you want to delegate processing to a 3rd-party service. But you can also get on-the-fly processing with Shrine, using services like Cloudinary, or even by hooking up Dragonfly (see this post). Shrine supports direct uploads, like Refile. Other notable features include metadata extraction, logging, flexible file validation, resumable uploads, better security, and more.
Since Shrine arguably has more features and flexibility than any other file attachment library, I would recommend going with it.
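For illustration, a minimal Shrine uploader doing processing on upload might look roughly like this (assuming the shrine and image_processing gems with ImageMagick installed; the uploader name, storage paths, and thumbnail size are made up, and the exact plugin API depends on your Shrine version):

```ruby
require "shrine"
require "shrine/storage/file_system"
require "image_processing/mini_magick"

Shrine.storages = {
  cache: Shrine::Storage::FileSystem.new("uploads", prefix: "cache"), # temporary
  store: Shrine::Storage::FileSystem.new("uploads"),                  # permanent
}

Shrine.plugin :derivatives # processed versions stored alongside the original

class MediaUploader < Shrine
  Attacher.derivatives do |original|
    magick = ImageProcessing::MiniMagick.source(original)
    { thumb: magick.resize_to_limit!(300, 300) } # created when you call
  end                                            # attacher.create_derivatives
end
```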
Related
My team and I are building an iOS application. We allow technicians in the field to upload images for certain issues they are resolving on technical equipment. It will be important to be able to zoom in on these images once they are uploaded to S3, so quality needs to stay relatively high.
Recently we decided to add thumbnails, because browsing the iOS app will be much faster than downloading a 1.5-2.5 MB image each time.
My co-worker decided the best way to handle this is to generate a 200-500 KB thumbnail on iOS, then upload both the image and the thumbnail to S3.
I voiced my concern that some of our technicians may be in parts of the world where internet is slow and data usage is limited, so doing all this additional work on the device and uploading both files makes no sense to me. However, the team considers this a good solution and will move forward. I've shown them easy examples of how to generate thumbnails automatically on the server with S3 and Lambda, allowing us to either upload higher-fidelity images with the extra bandwidth, or just speed up the app by uploading much less. Sometimes a user may upload as many as 100 images, meaning an additional 20-50 MB.
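For reference, the server-side approach I showed them looks roughly like this (a sketch, not production code: it assumes Lambda's Ruby runtime with an ImageMagick layer attached, the aws-sdk-s3 and mini_magick gems bundled, and an S3 upload event as the trigger; bucket layout and sizes are illustrative):

```ruby
require "aws-sdk-s3"
require "mini_magick"

def handler(event:, context:)
  s3     = Aws::S3::Client.new
  record = event["Records"].first["s3"]
  bucket = record["bucket"]["name"]
  key    = record["object"]["key"]

  # Download the original, resize it, and write the thumbnail alongside it.
  original = "/tmp/#{File.basename(key)}"
  s3.get_object(bucket: bucket, key: key, response_target: original)

  image = MiniMagick::Image.open(original)
  image.resize "400x400"
  image.write "/tmp/thumb.jpg"

  s3.put_object(bucket: bucket, key: "thumbs/#{key}",
                body: File.open("/tmp/thumb.jpg"))
end
```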
Anyway, I wanted to hear some opinions on the best way to handle this, mainly as a sanity check for myself.
I don't completely comprehend the intricacies of your project, but from experience, I have one word for you: Cloudinary. As opposed to S3, which is a general-purpose cloud storage solution, Cloudinary is designed specifically to handle images.
We have an online classifieds app with 200,000 hits a day that handles tens of thousands of photos daily, and Cloudinary provides a lean, mean solution for all our needs: uploads by users from their mobile and desktop devices, bookmarking of those images, CDN-based serving, and thumbnail generation.
Did I mention they have thumbnail generation built in? They have lots of other features as well, including
Resize and Crop
Optimized JPEG
Custom Crop
Face Thumbnail
Rotated Circular Thumbnail
Zoom Effects and Zoom Image Overlay
Watermark Image
Optimized WebP
Text Overlay, Border, Shadow, etc.
The admin console is pretty kickass too, with all of the above features available for you to configure over the cloud. And it fits well with pretty much any application (we use it in our internal Ruby, Go, NodeJS services, our Web Application and our iOS and Android apps as well).
I'm not paid to sell Cloudinary to you, but I can vouch that if it's image-based services I need, I would go with Cloudinary over S3 any day. Major players like eBay and TED use it for their image requirements.
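To give a sense of the API, uploading once and then deriving thumbnails on request looks roughly like this with the cloudinary gem (the public ID and dimensions here are placeholders):

```ruby
require "cloudinary"

# Upload once; Cloudinary keeps the original.
Cloudinary::Uploader.upload("local/photo.jpg", public_id: "issue_photo_123")

# Derive any size on request via the delivery URL; no pre-generated
# thumbnails to manage on the device or in S3:
url = Cloudinary::Utils.cloudinary_url(
  "issue_photo_123", width: 200, height: 200, crop: :thumb, gravity: :face
)
```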
Please bear with me, as I'm not trying to frustrate anyone with inane questions; I did Google this, but I couldn't really find anything recent or helpful.
I am a novice programmer working on a classic ASP web application. I just enabled users to upload and download images, but I'm quickly regretting it, as it's eating up all of the router bandwidth. I'm finding my solution inadequate, so I want to start over.
My desire is threefold with this functionality:
Compression. I understand that this is impossible to do BEFORE uploading without some kind of Java/Silverlight/Flash portion of the application to handle uploads, correct? What is the common way most places go about this? Just allow regular file uploads and compress once they are on the server?
Resizing. I want all images resized to a reasonable size before they are stored, instead of just telling users who try to upload huge camera images that they can't. I figure I want to let them upload and have the resizing done for them. Does this functionality exist already?
Changing filetype. I want to allow users to upload all image file types but make them .jpg on the server after the upload.
With these three requirements, how hard is it to implement something like this in just pure code and libraries? Would it be better to just use a 3rd party plugin, such as ASPjpeg or ASPupload? Have you encountered something similar, and what was your solution?
Thanks.
Take a look at ASPJpeg and ASPUpload from Persits. We use these components to upload a full size image (can be png even though the library is "ASPJpeg"), resize it to several different sizes we need on our site, then store the resized images on the server in a variety of folders. The ASPUpload component is a little tricky but if you follow their sample code you'll be fine.
I never found a good component for decompressing uploaded zip files and had to write my own, which I've since abandoned. In the end with upload speeds increasing and storage getting so cheap, it started to matter less and less that the files were compressed before being uploaded.
EDIT: Just noticed you mentioned these components in your question. Consider this an endorsement of your idea to use them. :-)
I am interested in building a Rails based system for handling the display and organization of large amounts of photos. This is sort of like Flickr but smaller. Each photo will have metadata associated with it. Photos will be shown in a selectable list and grid view. It would be nice to be able to load images as they are needed as well (as this would probably speed things up).
At the moment I have a test version of my database working, with images loading from the assets/images directory, but it is beginning to run slowly when displaying several hundred images (200-600). This is due to the way I have my view set up: I am using a straight loop to display the images in both list and grid layouts.
I also manually resized the thumbnails and a medium-sized image from a full-sized source image. I am investigating other resizing methods; any advice is appreciated here as well.
As I am new to handling the images this way, could someone point me in a direction based on experience designing and implementing something like Flickr?
I am investigating the following tools:
Paperclip
http://railscasts.com/episodes/134-paperclip
Requirements: ImageMagick
attachment_fu
http://clarkware.com/blog/2007/02/24/file-upload-fu#FileUploadFu
Requirement: one of the following: ImageScience, RMagick, MiniMagick, ImageMagick
CarrierWave
http://cloudinary.com/blog/ruby_on_rails_image_uploads_with_carrierwave_and_cloudinary
http://cloudinary.com/blog/advanced_image_transformations_in_the_cloud_with_carrierwave_cloudinary
I'd go with CarrierWave any day. It is very flexible and has a lot of useful strategies. It generates its own Uploader class and has nifty, self-explanatory features such as automatic thumbnail generation (as you specified), blacklisting, image formatting, size constraints, etc., all of which you can put to use.
This Railscast by Ryan Bates - http://railscasts.com/episodes/253-carrierwave-file-uploads is very useful, if you haven't seen it already.
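For a concrete starting point, an uploader along the lines you described might look roughly like this (assuming the carrierwave and mini_magick gems with ImageMagick installed; the class name and sizes are illustrative):

```ruby
class PhotoUploader < CarrierWave::Uploader::Base
  include CarrierWave::MiniMagick

  storage :file

  # Pre-generate the sizes the list and grid views need, at upload time.
  version :thumb do
    process resize_to_fill: [100, 100]
  end

  version :medium do
    process resize_to_limit: [600, 600]
  end
end

# In the model:
# mount_uploader :image, PhotoUploader
```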
Paperclip and CarrierWave are totally appropriate tools for the job, and the one you choose is going to be a matter of personal preference. They both have tons of users and active, ongoing development. The difference is whether you'd prefer to define your file upload rules in a separate class (CarrierWave), or if you'd rather define them inline in your model (Paperclip).
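For contrast, roughly the same rules expressed inline with Paperclip (a sketch; the model and attachment names are made up):

```ruby
class Photo < ActiveRecord::Base
  has_attached_file :image,
                    styles: { thumb: "100x100#", medium: "600x600>" }

  # With Paperclip, validations live in the model alongside the attachment.
  validates_attachment_content_type :image, content_type: %r{\Aimage/}
end
```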
I prefer CarrierWave, but based on usage it's clear plenty of people feel otherwise.
Note that neither gem is going to do anything for your slow view with 200-600 images. These gems are just for handling image uploads, and don't help you with anything beyond that.
Note also that Rails is really pretty bad at handling file uploads and file downloads, and you should avoid this where possible by letting other services (a CDN, your web server, S3, etc.) handle these for you. The central gotcha is that if you handle a file transfer with Rails, your entire web application process is busy for the duration of the transfer. (For related discussion on this topic, see: Best Ruby on Rails Architecture for Image-Heavy App.)
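When you do have to serve files through Rails, you can at least hand the byte transfer off to the web server; these lines ship (commented out) in a default Rails production config:

```ruby
# config/environments/production.rb — let the web server stream the file
# instead of tying up a Rails process for the whole transfer:
config.action_dispatch.x_sendfile_header = "X-Sendfile"         # Apache mod_xsendfile
# config.action_dispatch.x_sendfile_header = "X-Accel-Redirect" # NGINX
```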
I'm using Google's Custom Search API to dynamically provide web search results. I searched the API's docs intensively and could not find anything stating that it grants you access to Google's site image previews, which happen to be stored as base64-encoded data.
I want to be able to provide image previews for sites for each of the urls that the Google web search API returns. Keep in mind that I do not want these images to be thumbnails, but rather large images. My question is what is the best way to go about doing this, in terms of both efficiency and cost, in both the short and long term.
One option would be to crawl the web and generate and store the images myself. However, this is way beyond my technical ability, and storing all of those images would be too expensive anyway.
The other option would be to dynamically fetch the images right after Google's API returns the search results. However where/how I fetch the images is another question.
Would there be a low cost way of me generating the images myself? Or would the best solution be to use some sort of site thumbnailing service that does this for me? Would this be fast enough? Would it be too expensive? Would the service provide the image in the correct size for me? If not, how could I change the size of the image?
I'd really appreciate comprehensive answers, and for any code examples to be in Ruby using Rails.
So as you pointed out in your question, there are two approaches that I can see to your issue:
Use an external service to render and host the images.
Render and host the images yourself.
I'm no expert in this field, but my Googling has so far only turned up services that generate thumbnails, not full-size screenshots (like the few mentioned here). If there are hosted services out there that will do this for you, I wasn't able to find them easily.
So, that leaves #2. For this, my first instinct was to look for a Ruby library that could generate an image from a webpage, which quickly led me to IMGKit (there may be others, but this one looked clean and simple). With this library, you can easily pass in a URL and it will use the WebKit engine to generate a screenshot of the page for you.

From there, I would save it to wherever your assets are stored (like Amazon S3) using a file attachment gem like Paperclip or CarrierWave (railscast). Store the attachment with a field recording the original URL you passed to IMGKit from WSAPI (Web Search API), so that you can compare against it on subsequent searches and use the cached version instead of re-rendering the preview. You can also use the created_at field on your attachment model to throw in some "if older than x days, refresh the image" logic.

Lastly, I'd put this all in a background job using something like Resque (railscast) so that the user isn't blocked while waiting for screenshots to render: pass the array of URLs returned from WSAPI to background workers in Resque, which generate the images via IMGKit and save them to S3 via Paperclip/CarrierWave. All of these projects are well-documented, and the Railscasts will walk you through the basics of the resque and carrierwave gems.
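Pulling those pieces together, the background worker might look roughly like this (a sketch: it assumes the imgkit and resque gems with wkhtmltoimage installed, plus a hypothetical PagePreview model with a CarrierWave-mounted image column):

```ruby
require "imgkit"
require "tempfile"

class RenderPreviewJob
  @queue = :previews

  def self.perform(url)
    file = Tempfile.new(["preview", ".jpg"])
    IMGKit.new(url, quality: 85).to_file(file.path) # render page via wkhtmltoimage
    preview = PagePreview.find_or_initialize_by(source_url: url)
    preview.image = file # CarrierWave pushes the file up to S3 on save
    preview.save!
  ensure
    file.close! if file
  end
end

# Enqueue one job per result URL returned from the search API:
# urls.each { |url| Resque.enqueue(RenderPreviewJob, url) }
```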
I haven't crunched the numbers, but you can weigh the cost of hosting the images yourself on S3 against any of the external providers of web thumbnail generation. Of course, doing it yourself gives you full control over how the image looks (quality, format, etc.), whereas most of the services I've come across only offer a small thumbnail, so there's something to be said for that. If you don't cache the images from previous searches, your storage costs shrink even further, since you'll always be rendering the images on the fly; however, I suspect that won't scale very well, as you may end up paying a lot more for server power (for IMGKit and image processing) and bandwidth (for the external requests to fetch the source HTML for IMGKit). I'd be sure to include some metrics in your project, to attach exact numbers to the kinds of requests you're dealing with and help determine what the actual costs would be.
Anywho, that would be my high-level approach. I hope it helps some.
Screenshotting web pages reliably is extremely hard to pull off. The main problem is that all the current solutions (khtml2png, CutyCapt, PhantomJS, etc.) are based around Qt, which provides access to an embedded WebKit library. However, that WebKit build is quite old, and with HTML5 and CSS3, many effects either don't show or render incorrectly.
One of my colleagues has used most, if not all, of the current technologies for generating screenshots of web pages for one of his personal projects. He has written an informative post here about how he now uses a SaaS solution instead of trying to maintain a solution himself.
The TL;DR version: he now uses URL2PNG for all his thumbnail and full-size screenshots. It isn't free, but he says it does the job for him. If you don't want to use them, they have a list of their competitors here.
Google App Engine has a speedy image-resizing API that appears to perform significantly faster than the Rails Paperclip resizing alternative.
Does anyone know of any Rails/Heroku-friendly image-resizing APIs that could work with Paperclip as a faster resizing solution?
Thanks
We've used Transloadit and it works well:
http://transloadit.com/
When you handle images on Heroku, you're usually storing them on S3.
You can upload the file directly to S3, then use Delayed Job to process the file in the background. Your users will see a much zippier, faster processing time.
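If you're using Paperclip, one way to wire that up is the delayed_paperclip gem, which pushes style generation into a background worker (a sketch; the model and styles are illustrative):

```ruby
class Photo < ActiveRecord::Base
  has_attached_file :image, styles: { thumb: "100x100>", large: "1024x1024>" }

  # Provided by delayed_paperclip: style generation runs in a
  # delayed_job worker, so the upload request returns immediately.
  process_in_background :image
end
```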
Paperclip itself doesn't include image-resizing code; it simply plugs into tools like ImageMagick and MiniMagick. Have you tried some of these other engines?