I have a site where I load product images. My images are larger than needed, and I resize their display with CSS. I'm looking for a performance upgrade and thought about variants, but then I found this note in the documentation: "Note that to create a variant it's necessary to download the entire blob file from the service and load it into memory."
So the question is: does resizing images with variants increase website performance, and is a variant cacheable the same way as the original image?
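For context, this is roughly how I would use a variant; a minimal sketch, assuming Rails 6+ Active Storage with the image_processing gem (the model and attachment names are made up). My understanding is that the expensive download-and-process step happens only the first time a given variant is requested; the result is stored back on the service and served like any other blob after that.

    class Product < ApplicationRecord
      has_one_attached :photo
    end

    # In the view: the first request processes and stores the variant;
    # later requests serve the stored result directly.
    <%= image_tag product.photo.variant(resize_to_limit: [400, 400]) %>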
My team and I are building an iOS application. We allow technicians in the field to upload images for issues they are resolving on technical equipment. It's important that users can zoom in (so quality has to stay relatively high) once these images are uploaded to S3.
Recently we decided to add thumbnails, because browsing the iOS app will be much faster with those than with a 1.5-2.5 MB image download per photo.
My co-worker decided the best way to handle this is to generate a 200-500 KB thumbnail on the iOS device and then upload both the image and the thumbnail to S3.
I voiced my concern that some of our technicians work in parts of the world where the internet is slow and data usage is limited, so doing all this extra work on the device and uploading twice makes no sense to me. However, the team considers this a good solution and will move forward. I've shown them easy examples of generating thumbnails automatically on the server with S3 and Lambda (sketched below), which would let us either upload higher-fidelity images with the saved bandwidth or just speed up the app by uploading much less. Sometimes a user may upload as many as 100 images, meaning an additional 20-50 MB.
Anyway, I wanted to hear how you guys would handle this, mainly for my own sanity check.
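For reference, the server-side route I showed them is roughly this; a hedged sketch of an S3-triggered Lambda in Ruby, using aws-sdk-s3 and mini_magick (the thumbnail bucket name and size are assumptions):

    require 'aws-sdk-s3'
    require 'mini_magick'

    # Runs whenever an object lands in the uploads bucket; writes a
    # thumbnail to a separate bucket ("#{bucket}-thumbs" is made up).
    def handler(event:, context:)
      record = event['Records'].first
      bucket = record['s3']['bucket']['name']
      key    = record['s3']['object']['key']

      s3       = Aws::S3::Client.new
      original = s3.get_object(bucket: bucket, key: key).body.read

      thumb = MiniMagick::Image.read(original)
      thumb.resize '400x400'   # fits inside 400x400, keeps aspect ratio

      s3.put_object(bucket: "#{bucket}-thumbs", key: key, body: thumb.to_blob)
    end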
I don't completely comprehend the intricacies of your project, but from experience, I have one word for you: Cloudinary. As opposed to S3, which is a general-purpose cloud storage solution, Cloudinary is designed specifically to handle images.
We run an online classifieds app with 200,000 hits a day that handles tens of thousands of photos daily, and Cloudinary provides an extremely capable solution for all our needs: uploads by users from their mobile and desktop devices, bookmarking of those images, CDN-based serving, and thumbnail generation.
Did I mention they have thumbnail generation built in? They have lots of other features as well, including:
Resize and crop
Optimized JPEG
Custom crop
Face-detection thumbnails
Rotated circular thumbnails
Zoom effects and zoom image overlays
Watermark images
Optimized WebP
Image overlay, border, and shadow
Text overlay, border, and shadow
The admin console is pretty kickass too, with all of the above features available for you to configure in the cloud. And it fits well with pretty much any stack: we use it in our internal Ruby, Go, and NodeJS services, our web application, and our iOS and Android apps as well.
I'm not paid to sell Cloudinary to you, but I can vouch that if it's image-based services I need, I would go for Cloudinary over S3 any day. Major players like eBay and TED use it for their image requirements.
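To make that concrete, here's a minimal sketch with the cloudinary Ruby gem (the file name, public_id, and transformation values are made up): you upload once, then request any size or crop on the fly via the delivery URL.

    require 'cloudinary'

    Cloudinary::Uploader.upload('photo.jpg', public_id: 'sample')

    # Delivers a face-centered 200x200 thumbnail straight from the CDN.
    url = Cloudinary::Utils.cloudinary_url('sample',
            width: 200, height: 200, crop: :thumb, gravity: :face)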
Please bear with me, as I'm not trying to frustrate anyone with inane questions; I did Google this but couldn't really find anything recent or helpful.
I am a novice programmer maintaining a classic ASP web application. I just enabled users to upload and download images, but I'm quickly regretting it, as it's eating up all of the router bandwidth. My current solution is inadequate, so I want to start over.
My desire for this functionality is threefold:
Compression. I understand this is impossible to do before uploading without some kind of Java/Silverlight/Flash component handling the upload, correct? What is the common way most sites go about this? Just allow regular file uploads and compress once the files are on the server?
Resizing. I want to resize all images to a reasonable size before they are stored, instead of just telling users who try to upload huge camera images that they can't. I figure I should let them upload and have the server resize for them. Does this functionality exist already?
Changing file type. I want to allow users to upload all image file types but convert them to .jpg on the server after the upload.
With these three requirements, how hard would it be to implement something like this in pure code and libraries? Or would it be better to use a third-party component such as ASPjpeg or ASPupload? Have you encountered something similar, and what was your solution?
Thanks.
Take a look at ASPJpeg and ASPUpload from Persits. We use these components to upload a full-size image (it can even be a PNG, despite the library being called "ASPJpeg"), resize it to the several sizes we need on our site, and then store the resized images on the server in a variety of folders. The ASPUpload component is a little tricky, but if you follow their sample code you'll be fine.
I never found a good component for decompressing uploaded ZIP files and had to write my own, which I've since abandoned. In the end, with upload speeds increasing and storage getting so cheap, it mattered less and less whether the files were compressed before being uploaded.
EDIT: Just noticed you mentioned these components in your question. Consider this an endorsement of your idea to use them. :-)
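For what it's worth, the resize-and-convert-on-the-server flow is only a few lines in most image libraries. A hedged sketch of the idea in Ruby with MiniMagick, just to show the shape of the approach (file names and folder layout are made up; assumes the target folders exist):

    require 'mini_magick'

    # Resize one uploaded file to each size the site needs,
    # converting to JPEG along the way (the "make them .jpg" requirement).
    %w[1024x768 400x300 100x75].each do |size|
      img = MiniMagick::Image.open('uploads/photo.png')
      img.resize size          # fits within the box, keeps aspect ratio
      img.format 'jpg'
      img.write "images/#{size}/photo.jpg"
    end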
I am interested in building a Rails-based system for handling the display and organization of large numbers of photos, sort of like a smaller Flickr. Each photo will have metadata associated with it, and photos will be shown in selectable list and grid views. It would also be nice to load images only as they are needed, as this would probably speed things up.
At the moment I have a test version of my database working, with images loading from the assets/images directory, but it is beginning to run slowly when displaying several hundred images (200-600). This is due to the way my view is set up: I use a straight loop to render the images in both the list and grid layouts (see the sketch after the tool list below).
I also manually resized the thumbnails and a medium-sized image from the full-sized source image, and I am investigating other resizing methods. Any advice is appreciated here as well.
As I am new to handling images this way, could someone point me in a direction based on experience designing and implementing something like Flickr?
I am investigating the following tools:
Paperclip
http://railscasts.com/episodes/134-paperclip
Requirements: ImageMagick
attachment_fu
http://clarkware.com/blog/2007/02/24/file-upload-fu#FileUploadFu
Requirement: One of the following: ImageScience, RMagick, MiniMagick, ImageMagick?
CarrierWave
http://cloudinary.com/blog/ruby_on_rails_image_uploads_with_carrierwave_and_cloudinary
http://cloudinary.com/blog/advanced_image_transformations_in_the_cloud_with_carrierwave_cloudinary
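For reference, the direction I'm exploring for the slow view (separate from the uploader gems) is paginating plus lazy-loading, so the page doesn't render all 600 image tags at once. A rough sketch in ERB, assuming a Photo model with a hypothetical thumb_url:

    <%# Render a page of thumbnails instead of all 600 images at once %>
    <% @photos.limit(50).each do |photo| %>
      <%= image_tag photo.thumb_url, loading: "lazy" %>
    <% end %>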
I'd go with CarrierWave any day. It is very flexible and has lots of useful strategies. It generates its own Uploader class and has nifty, self-explanatory features such as automatic thumbnail generation (as you specified), extension blacklisting, image formatting, and size constraints, which you can put to use.
This Railscast by Ryan Bates is very useful, if you haven't seen it already: http://railscasts.com/episodes/253-carrierwave-file-uploads
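For a feel of what that looks like, a minimal sketch of a CarrierWave uploader (the class name and sizes are made up; resize_to_fill and resize_to_limit come from the MiniMagick module):

    class ImageUploader < CarrierWave::Uploader::Base
      include CarrierWave::MiniMagick

      version :thumb do
        process resize_to_fill: [100, 100]   # exact square crop
      end

      version :medium do
        process resize_to_limit: [600, 600]  # fit within box, keep ratio
      end
    end

    # Mounted in the model with:
    #   mount_uploader :image, ImageUploader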
Paperclip and CarrierWave are both totally appropriate tools for the job, and which one you choose is a matter of personal preference. They both have tons of users and active, ongoing development. The main difference is whether you'd prefer to define your file upload rules in a separate class (CarrierWave) or inline in your model (Paperclip).
I prefer CarrierWave, but based on usage it's clear plenty of people feel otherwise.
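For contrast with the uploader class above, the inline Paperclip style looks roughly like this (a sketch; the attachment name and styles are made up):

    class Photo < ActiveRecord::Base
      has_attached_file :image,
                        styles: { thumb: '100x100#', medium: '600x600>' }
      validates_attachment_content_type :image,
                                        content_type: /\Aimage\/.*\z/
    end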
Note that neither gem is going to do anything for your slow view with 200-600 images. These gems are just for handling image uploads, and don't help you with anything beyond that.
Note also that Rails is really pretty bad at handling file uploads and downloads itself, and you should avoid this where possible by letting other services (a CDN, your web server, S3, etc.) handle them for you. The central gotcha is that if you handle a file transfer with Rails, an entire web application process is busy for the duration of the transfer. (For related discussion on this topic, see: Best Ruby on Rails Architecture for Image Heavy App.)
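A hedged illustration of that gotcha (controller and attribute names are made up; the @photo lookup is omitted):

    class PhotosController < ApplicationController
      def download
        # Bad: ties up this Rails process for the whole transfer.
        # send_file @photo.image.path

        # Better: hand the transfer off to S3/CDN and return immediately
        # (assumes the uploader exposes a public or signed URL).
        redirect_to @photo.image.url
      end
    end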
I have a PDF reader app that renders PDF files. It works fine for normal PDFs, but for some big magazine files it's really slow to render a page. I tried loading my PDF in GoodReader; it's slightly better than my app, but still very slow. That suggests this kind of PDF really needs to be optimized before it's used on an iOS device.
I've tried Adobe Acrobat 10 to reduce the file size, but the result was not significant. I also have a similar magazine PDF that renders quite fast in my reader, and I can't tell what the difference is. There must be some key factors that affect PDF rendering, but unfortunately I have no idea what they are.
Can anybody advise how to optimize a PDF file? Is there any good software for that? Thanks.
If you have control over the generation of your files, I would suggest avoiding complex compression algorithms such as JBIG2 and reducing the resolution (not the compression quality) of your raster images. JBIG2 is only used for black-and-white images, so this may be why you are getting slow performance with some files and not with others.
Text should not be a problem in general; it is usually straightforward to render. But you could try avoiding fully embedded fonts, if possible, to keep the file size small.
If you will be using these files in a web scenario, I would also recommend using Linearized PDF files.
I'm working on a project which is composed of several compiled Delphi applications (over 20 EXEs and DLLs), and I'll need to share 60+ images (16x16, 24x24, 32x32, ...) between all of them.
I've thought of two different ways of sharing images between all the applications, but I'm not sure which is better:
Idea 1:
Create a resource-only DLL project which contains a resource link reference to the .res file with all my images. Each application will in turn load the DLL and read whichever images it needs into either a TImageList or a TImage, depending on its needs.
Pros: Allows me to keep the images in the repository in their native format.
Cons: I won't be able to see the images at design time, as they will only be loaded at run time. I'll also have to create as many constants as there are images (or a set with the same number of values) so that each image can be referenced independently of its name in the resource file.
Idea 2:
Create a Data Module which is compiled as a BPL and included as a run-time package in all the applications. I would add the images to several TImageLists (split by image size) or into a TPngImageList (which allows images of several sizes in a single component).
Pros: I'll be able to add this Data Module to all the applications that need it and see all the images at design time.
Cons: All the images will be loaded into memory even if I only need one. I need to make sure the order of the images never changes when adding/modifying images in the TImageList/TPngImageList. All images will be stored in a single .dfm.
Idea 3: (New)
After looking at other applications that also need to share images between compiled EXEs, I've had yet another idea.
Save all the shared images as plain PNG/ICO files in a sub-folder of the compiled files (Data, for example).
Pros: No need to load all the images into memory; I can fetch just the ones needed. This may be especially important if the total number of images is rather large (one application that uses this method has 1400 images in its Data sub-folder).
Cons: The images will be visible/available to anyone, and may use up a little more disk space on the user's machine.
I would like to ask for comments on these ideas, or any other suggestions on how to better accomplish this.
Thanks!
I have a strong preference for option 1. Doing it this way allows you to keep the images in your revision control repository in their native format. With option 2 you store images in .dfm files, which I find exceedingly unsatisfactory. The downside is that you lose design-time viewing of the images; I personally prefer to make that trade-off.
In my software I have a single global image list which I populate at runtime by loading from resources, and of course I also assign image indices at runtime. The other benefit this brings is the ability to choose image sizes appropriate to font scaling; otherwise you need separate image lists for 16px icons, 20px icons, 24px icons, 32px icons, etc.
Another option is to write your own TImage descendant with two extra properties:
property DllName: string read FDllName write SetDllName;
property ResName: string read FResName write SetResName;
In the setter procedures, you then load the image from the resource.
This way you will still be able to see the images at design time.
Make sure to override the mechanism for saving the image in the dfm file, so that your exe does not get bloated with images that are already in the dll.
Not 100% sure on how to do that, but if you want to follow that route, I'm sure someone has an easy answer to that question.