Displaying an interlaced (progressive) image in a UIImageView - iOS

I am trying to display a JPEG image as it downloads, using part of the data, similar to what many web browsers or the Facebook app do:
first a low-quality version of the image (built from just part of the data), then the full image in full quality.
This is best shown in the VIDEO HERE.
I followed this SO question:
How do I display a progressive JPEG in an UIImageView while it is being downloaded?
but all I got was an image view that re-renders as the data keeps coming in: no low-quality version first, no true progressive download and render.
Can anyone share a code snippet or point me to where I can find more info on how this can be implemented in an iOS app?
I tried this link, for example, which shows JPEG info; it identifies the image as progressive:
http://www.webpagetest.org/jpeginfo/jpeginfo.php?url=http://cetus.sakura.ne.jp/softlab/software/spibench/pic_22p.jpg
and I used the correct code sequence:
-(void)connection:(NSURLConnection*)connection didReceiveData:(NSData*)data
{
    /// Append the new bytes to the data we already have
    [_dataTemp appendData:data];

    /// Get the total bytes downloaded so far
    const NSUInteger totalSize = [_dataTemp length];

    /// Update the image source; we must pass ALL the data, not just the new bytes
    CGImageSourceUpdateData(_imageSource, (CFDataRef)_dataTemp, (totalSize == _expectedSize) ? true : false);

    /// We know the expected size of the image, so try to render what we have so far
    if (_fullHeight > 0 && _fullWidth > 0)
    {
        /// Create a CGImage from the data received so far (partial scans included)
        CGImageRef image = CGImageSourceCreateImageAtIndex(_imageSource, 0, NULL);
        if (image != NULL)
        {
            [_imageView setImage:[UIImage imageWithCGImage:image]];
            CGImageRelease(image);
        }
    }
}
but the code only shows the image once it has finished loading. With other images it renders as it downloads, but only from top to bottom; there is no low-quality version first that progressively gains detail the way browsers do.
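For reference, _fullWidth and _fullHeight (long ivars, initialised to -1) get filled in once enough of the JPEG header has arrived, roughly like this, using the ImageIO properties API (a sketch, not the exact code from the demo project):

    /// Read the pixel dimensions once the header has been parsed.
    /// Runs right after CGImageSourceUpdateData() while _fullWidth/_fullHeight are still -1.
    CFDictionaryRef properties = CGImageSourceCopyPropertiesAtIndex(_imageSource, 0, NULL);
    if (properties != NULL)
    {
        CFTypeRef val = CFDictionaryGetValue(properties, kCGImagePropertyPixelHeight);
        if (val != NULL)
            CFNumberGetValue((CFNumberRef)val, kCFNumberLongType, &_fullHeight);
        val = CFDictionaryGetValue(properties, kCGImagePropertyPixelWidth);
        if (val != NULL)
            CFNumberGetValue((CFNumberRef)val, kCFNumberLongType, &_fullWidth);
        CFRelease(properties);
    }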
DEMO PROJECT HERE

This is a topic I've had some interest in for a while: there appears to be no way to do what you want using Apple's APIs, but if you can invest time in this you can probably make it work.
First, you are going to need a JPEG decoding library: libjpeg or libjpeg-turbo. You will then need to integrate it into something you can use with Objective-C. There is an open source project that uses this library, PhotoScrollerNetwork, which leverages the turbo library to decode very large JPEGs "on the fly" as they download, so they can be panned and zoomed (PhotoScroller is an Apple project that does the panning and zooming, but it requires pre-tiled images).
While the above project is not exactly what you want, you should be able to lift much of the libjpeg-turbo interface to decode progressive images and return the low quality images as they are received. It would appear that your images are quite large, otherwise there would be little need for progressive images, so you may find the panning/zooming capability of the above project of use as well.
Some users of PhotoScrollerNetwork have requested support for progressive images, but it seems there is very little general use of them on the web.
EDIT: A second idea: if it's your own site that you would use to vend progressive images (and I assume this, since there are so few of them to be found normally), you could take a completely different tack.
In this case, you would construct a binary file of your own design - one that had, say, 4 images inside it. The first four bytes would give the length of the data following them (and each subsequent image would use the same 4-byte prefix). Then, on the iOS side, as the download starts, once you got the full bytes of the first image, you could use those to build a small low-res UIImage and show it while the next image was being received. When the next one fully arrives, you would replace the low-res image with the newer, higher-res one. It's possible you could use a zip container and do on-the-fly decompression - not 100% sure. In any case, the above is a standard solution to your problem, and would give near-identical performance to libjpeg with much, much less work.
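A minimal sketch of the client-side parsing, assuming big-endian 4-byte length prefixes and hypothetical _buffer (NSMutableData) and _imageView ivars; it would be fed from -connection:didReceiveData: as bytes arrive:

    // Peel complete "length prefix + JPEG" records off the front of the buffer.
    - (void)appendDownloadedData:(NSData *)data
    {
        [_buffer appendData:data];

        while (_buffer.length >= sizeof(uint32_t))
        {
            uint32_t prefix = 0;
            [_buffer getBytes:&prefix length:sizeof(prefix)];
            const NSUInteger imageLength = (NSUInteger)CFSwapInt32BigToHost(prefix);

            if (_buffer.length < sizeof(prefix) + imageLength)
                break; // the next image is not fully downloaded yet

            NSData *imageData = [_buffer subdataWithRange:NSMakeRange(sizeof(prefix), imageLength)];
            UIImage *image = [UIImage imageWithData:imageData];
            if (image != nil)
                _imageView.image = image; // each image replaces the previous, lower-res one

            // Drop the consumed bytes and look for the next prefix.
            [_buffer replaceBytesInRange:NSMakeRange(0, sizeof(prefix) + imageLength)
                               withBytes:NULL
                                  length:0];
        }
    }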

I have implemented a progressive loading solution for an app I am currently working on. It does not use progressive JPEG, as I needed more flexibility loading different-res versions, but I get the same result (and it works really well, definitely worth implementing).
It's a camera app working in tandem with a server. The images originate with the iPhone's camera and are stored remotely. When the server gets an image, it gets processed (using ImageMagick, but it could be any suitable library) and stored in 3 sizes - small thumb (~160x120), large thumb (~400x300) and full-size (~double the Retina screen size). Target devices are Retina iPhones.
I have an ImageStore class which is responsible for loading images asynchronously from wherever they happen to be, trying the fastest location first (live cache, local filesystem cache, asset library, network server).
typedef void (^RetrieveImage)(UIImage *image);
- (void)fullsizeImageFromPath:(NSString *)path
                   completion:(RetrieveImage)completionBlock;
- (void)largeThumbImageFromPath:(NSString *)path
                     completion:(RetrieveImage)completionBlock;
- (void)smallThumbImageFromPath:(NSString *)path
                     completion:(RetrieveImage)completionBlock;
Each of these methods will also attempt to load lower-res versions. The completion block actually loads the image into its image view.
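A call site might look something like this (a sketch; imageStore, photoPath and cell are hypothetical names):

    // The same completion block runs for every version that arrives,
    // so a lower-res image can appear first and be replaced later.
    [imageStore fullsizeImageFromPath:photoPath completion:^(UIImage *image) {
        cell.imageView.image = image;
    }];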
Thus:
fullsizeImageFromPath will get the full-sized version and also call largeThumbImageFromPath;
largeThumbImageFromPath will get the large thumb and also call smallThumbImageFromPath;
smallThumbImageFromPath will just get the small thumb.
These methods invoke calls that are wrapped in cancellable NSOperations. If a larger-res version arrives before any of its lower-res siblings, those lower-res calls are cancelled. The net result is that fullsizeImageFromPath may end up applying the small thumb, then the large thumb, and finally the full-res image to a single image view, depending on which arrives first. The result is really smooth.
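In outline it looks something like this (a sketch with hypothetical helper names, not the actual code from the gist below):

    // Sketch of the cascade. -enqueueLargeThumbAtPath:completion: and
    // -enqueueFullsizeAtPath:completion: are hypothetical private helpers that
    // return the cancellable NSOperation wrapping each request.
    - (void)fullsizeImageFromPath:(NSString *)path completion:(RetrieveImage)completionBlock
    {
        // Ask for the large thumb first; it usually lands sooner and is shown
        // by the same completion block, so the image view fills in progressively.
        NSOperation *thumbOp = [self enqueueLargeThumbAtPath:path completion:completionBlock];

        [self enqueueFullsizeAtPath:path completion:^(UIImage *fullImage) {
            // Full-size arrived: any still-pending lower-res load is pointless.
            [thumbOp cancel];
            dispatch_async(dispatch_get_main_queue(), ^{
                completionBlock(fullImage);
            });
        }];
    }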
Here is a gist showing the basic idea
This may not suit you as you may not be in control of the server side of the process. Before I had implemented this, I was pursuing the solution that David H describes. This would have been a lot more work, and less useful once I realised I also needed access to lower-res images in their own right.
Another approach which might be closer to your requirements is explained here
This has evolved into NYXProgressiveImageView, a subclass of UIImageView which is distributed as part of NYXImagesKit
Finally... for a really hacky solution, you could use a UIWebView to display progressive PNGs (progressive JPEGs do not appear to be supported).
update
After recommending NYXProgressiveImageView, I realised that this is what you have been using. Unfortunately you did not mention this in your original post, so I feel I have been on a bit of a runaround. In fact, reading your post again, I feel you have been a little dishonest. From the text of your post, it looks as if the "DEMO" is a project that you created. In fact you didn't create it, you copied it from here:
http://cocoaintheshell.com/2011/05/progressive-images-download-imageio/ProgressiveImageDownload.zip
which accompanies this blog entry from cocoaintheshell
The only changes you have made are one NSLog line and the JPG test URL.
The code snippet that you posted isn't yours; it is copied from this project without attribution. If you had mentioned this in your post, it would have saved me a whole heap of time.
Anyway, returning to the post... as you are using this code, you should probably be using the current version, which is on GitHub:
https://github.com/Nyx0uf/NYXImagesKit
see also this blog entry
To keep your life simple, you only need these files from the project (a minimal usage sketch follows the list):
NYXProgressiveImageView.h
NYXProgressiveImageView.m
NYXImagesHelper.h
NYXImagesHelper.m
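Using them looks roughly like this (a sketch; the loading method name is taken from the NYXProgressiveImageView header current at the time of writing, so double-check it against the version you pull from GitHub):

    #import "NYXProgressiveImageView.h"

    // Create the progressive image view and start a download; the view
    // renders partial data as it arrives.
    NYXProgressiveImageView *imageView =
        [[NYXProgressiveImageView alloc] initWithFrame:self.view.bounds];
    [self.view addSubview:imageView];
    [imageView loadImageAtURL:[NSURL URLWithString:@"http://www.libpng.org/pub/png/img_png/pnglogo-grr.png"]];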
Next, you need to be sure you are testing with GOOD images.
For example, this PNG works well:
http://www.libpng.org/pub/png/img_png/pnglogo-grr.png
You also need to pay attention to this cryptic comment:
/// Note: Progressive JPEG are not supported see #32
There seems to be an issue with JPEG tempImage rendering which I haven't been able to work out - maybe you can. That is the reason why your "Demo" is not working correctly, anyway.
update 2
added gist

I believe this is what you are looking for:
https://github.com/contentful-labs/Concorde
A framework for downloading and decoding progressive JPEGs on iOS and OS X, which uses libjpeg-turbo as the underlying JPEG implementation.

Try these; maybe they will be useful for you:
https://github.com/path/FastImageCache
https://github.com/rs/SDWebImage
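For example, SDWebImage's UIImageView category can show a placeholder (which could be a pre-fetched low-res thumbnail) while the full image downloads and is cached. A sketch, assuming an SDWebImage version whose category methods carry the sd_ prefix, with a hypothetical URL and thumbnail:

    #import <SDWebImage/UIImageView+WebCache.h>

    // Show the low-res thumbnail immediately; SDWebImage swaps in the
    // full-size image once the download completes (and caches it).
    [imageView sd_setImageWithURL:[NSURL URLWithString:@"http://example.com/photo_full.jpg"]
                 placeholderImage:lowResThumbnail];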

I had the same problem, then I found something tricky. It's not a proper solution, but it works.
You have to load a low-resolution/thumbnail version first, and once it has loaded, load the actual image.
This is an example for Android; I hope you can translate it into an iOS version.
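A rough iOS translation of the idea (a sketch; the URLs and imageView are hypothetical, and NSURLSession is used here rather than the NSURLConnection code from the question):

    // 1. Fetch and show the small thumbnail first.
    NSURL *thumbURL = [NSURL URLWithString:@"http://example.com/photo_thumb.jpg"];
    NSURL *fullURL  = [NSURL URLWithString:@"http://example.com/photo_full.jpg"];

    [[[NSURLSession sharedSession] dataTaskWithURL:thumbURL
            completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
        if (data != nil) {
            dispatch_async(dispatch_get_main_queue(), ^{
                imageView.image = [UIImage imageWithData:data];
            });
        }

        // 2. Only then fetch the full-resolution image and swap it in.
        [[[NSURLSession sharedSession] dataTaskWithURL:fullURL
                completionHandler:^(NSData *fullData, NSURLResponse *fullResponse, NSError *fullError) {
            if (fullData != nil) {
                dispatch_async(dispatch_get_main_queue(), ^{
                    imageView.image = [UIImage imageWithData:fullData];
                });
            }
        }] resume];
    }] resume];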

Related

Zoomify .zif format bad performance

The new .zif single-file format provided by Zoomify Pro seems to have some performance issues. Compared to the old file structure, it loads the page 3 to 4 times slower and sends over 50% more requests (tested with the same initial image in multiple file formats).
Using the old format is not feasible for our product, and we are stuck with over a minute of load time.
Has anyone encountered this issue, and are there any workarounds? Results on the internet and the official site don't seem to be of any help.
NOTE: Contacting the vendor hasn't led to anything yet.
Although the official site claims the ZIF format can handle very large images, I'm skeptical, because the viewer tries to do everything in JavaScript. The performance is entirely based on the client's machine. Try opening it on a faster machine and see if it improves.
Alternative solution: You could create Deep Zoom Image tiles by using VIPS library.
More information here:
https://libvips.github.io/libvips/API/current/Making-image-pyramids.md.html
Scroll further down in the article and you'll see this snippet:
With 7.40 and later, you can use --container to set the container
type. Normally dzsave will write a tree of directories, but with
--container zip you'll get a zip file instead. Use .zip as the directory suffix to turn on zip format automatically:
$ vips dzsave wtc.tif mypyr.zip
to write a zipfile containing the tiles.
Also, checkout this tutorial:
Serve deepzoom images from a zip archive with openseadragon
https://web.archive.org/web/20170310042401/https://literarymachin.es/deepzoom-osd-server/
The community (openseadragon and vips) is much stronger over there so you'll get help when you hit a wall.
If you want to take a break from all of this and just want the images zoomable, you could use a 3rd-party service such as zoomable.ca or zoomo.ca. They are free and user-friendly (upload your image and embed the viewer in your site like a Google Map).
ZIF format designer here... ZIF can easily handle monstrous images, up to hundreds of terabytes in size.
Without a server, of course the viewer tries to do everything; it's the only option. As a result, serving ZIF directly from a web server will not be as performant as using an image server. But... you can DO it. Using Zoomify tile folders, speed will be faster, but you may have hundreds of thousands or millions of tiles to deal with on the server side, and transfers will be horrendously slow and error-prone.
There are always trade-offs. See zif.photo for the specification.

Resampling large bitmaps for lists in Android using MVVM Cross

I have a long list of cells which each contain an image.
The images are large on disk, as they are used for other things in the app like wallpapers etc.
I am familiar with the normal Android process for resampling large bitmaps and disposing of them when no longer needed.
However, I feel that trying to resample the images on the fly in a list adapter would be inefficient without caching them once decoded; otherwise a fling would spawn many threads and I would have to manage cancelling unneeded images, etc.
The app is built making extensive use of the fantastic MVVMCross framework. I was thinking about using the MvxImageViews as these can load images from disk and cache them easily. The thing is, I need to resample them before they are cached.
My question is, does anybody know of an established pattern to do this in MVVMCross, or have any suggestions as to how I might go about achieving it? Do I need to customise the Download Cache plugin? Any suggestions would be great :)
OK, I think I have found my answer. I had been accidentally looking at the old MVVMCross 3.1 version of the DownloadCache Plugin / MvxLocalFileImageLoader.
After cloning the up to date (v3.5) repo I found that this functionality has been added. Local files are now cached and can be resampled on first load :)
The MvxImageView has a Max Height / Width setter method that propagates out to its MvxImageHelper, which in turn sends it to the MvxLocalFileImageLoader.
One thing to note is that the resampling only happens if you are loading from a file, not if you are using a resource id.
Source is here: https://github.com/MvvmCross/MvvmCross/blob/3.5/Plugins/Cirrious/DownloadCache/Cirrious.MvvmCross.Plugins.DownloadCache.Droid/MvxAndroidLocalFileImageLoader.cs
Once again MVVMCross saves my day ^_^
UPDATE:
Now I actually have it all working, here are some pointers:
As I noted in the comments, the local image caching is only currently available on the 3.5.2 alpha MVVMCross. This was incompatible with my project, so using 3.5.1 I created my own copies of the 3.5.2a MvxImageView, MvxImageHelper and MvxAndroidLocalFileImageLoader, along with their interfaces, and registered them in the Setup class.
I modified the MvxAndroidLocalFileImageLoader to also resample resources, not just files.
You have to bind to the MvxImageView's ImageUrl property using the "res:" prefix as documented here (Struggling to bind local images in an MvxImageView with MvvmCross); If you bind to 'DrawableId' this assigns the image directly to the underlying ImageView and no caching / resampling happens.
I needed to be able to set the customised MvxImageview's Max Height / Width for resampling after the layout was inflated/bound, but before the images were retrieved (I wanted to set them during 'OnMeasure', but the images had already been loaded by then). There is probably a better way but I hacked in a bool flag 'SizeSet'. The image url is temporarily stored if this is false (i.e. during the initial binding). Once this is set to true (after OnMeasure), the stored url is passed to the underlying ImageHelper to be loaded.
One section of the app uses full-screen images as the background of fragments in a pager adapter. The bitmaps are not getting garbage collected quickly enough, leading to eventual OOMs when trying to load the next large image. Manually calling GC.Collect() when the fragments are destroyed frees up the memory, but it causes a UI stutter and also wipes the cache, as it uses weak refs.
I was getting frequent SIGSEGV crashes on Lollipop when moving between fragments in the pager adapter (they never happened on KitKat). I managed to work around the issue by adding a SetImageBitmap(null) to the ImageView's Dispose method. I then call Dispose() on the ImageView in its containing fragment's OnDestroyView().
Hope this helps someone, as it took me a while!

Optimize image size in Parse.com

I have a strange issue with the resize methods in the Parse Image module:
First I get an image of 3.8 MB (1920x1080 px) in Parse Cloud Code, then I resize it to 384x216 px.
What I can't figure out is that when I download the resulting image in my front-end application, the file size is still 3.8 MB, while the image dimensions are reduced as expected.
I can't allow myself to force my users to download such heavy images.
Any suggestions on how to solve this?
I am ready to work with an external library as long as it can be executed on Parse Cloud Code.
Do not resize images in the cloud. Resize them on the client and send small files over the web. Resizing on the client takes less time than transferring the full-size file over the network.
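If the client is an iOS app, the resize before upload could look something like this (a sketch with hypothetical names; the 384x216 target comes from the question):

    // Sketch: downscale on the device, then upload the small JPEG instead of
    // the 3.8 MB original.
    - (UIImage *)resizedImage:(UIImage *)image toSize:(CGSize)targetSize
    {
        UIGraphicsBeginImageContextWithOptions(targetSize, NO, 1.0);
        [image drawInRect:CGRectMake(0.0, 0.0, targetSize.width, targetSize.height)];
        UIImage *resized = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return resized;
    }

    // Usage: shrink, encode as JPEG, and upload the small file.
    UIImage *small = [self resizedImage:originalImage toSize:CGSizeMake(384.0, 216.0)];
    NSData *jpegData = UIImageJPEGRepresentation(small, 0.8);
    PFFile *file = [PFFile fileWithName:@"thumb.jpg" data:jpegData];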
The reason you are seeing the old image in your front-end application is probably that Parse has an internal cache. Queries that have been sent before are loaded from the cache to lower the amount of transferred data. You can easily observe this by deleting the application and reinstalling; your images will probably work fine.
http://blog.parse.com/2011/09/30/easy-caching-with-parse/
https://www.parse.com/docs/ios_guide#queries-caching/iOS

iOS Upload Photo in Parts

Let's say I want to preserve the full resolution of a photo on the iPhone, and then upload it to a web service for storing. Quality is critical. Unfortunately, the size of a 3200x2400 photo taken with the iPhone camera is approximately 10-12MB for a PNG, and 1-3MB for a JPG (as of my latest tests).
Here we have a dilemma. On a 3G connection, a 12MB upload is an eternity (relatively speaking, of course). So I've explored a few options, including streams/chunking and background uploading. Still, it's not ideal. I'd like the upload to be as fast as possible. See edit.
So my question is this: would it be possible to split an image into separate data chunks, upload them all concurrently using multiple asynchronous connections, and then re-assemble them server side? Does an implementation exist for this?
EDIT: So speed is capped by bandwidth as has been discussed in the comments. But there are other uses for chunking/splitting that I would like to explore. So the question still stands.
What you can do is split the image into several pieces, upload each of them, and then reassemble them later.
I guess a benefit of that would be keeping the partial image if the connection fails, then continuing to upload the remaining pieces afterwards.
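A sketch of the client-side splitting (photo is a hypothetical UIImage, the chunk size is arbitrary, and the upload itself is omitted; the server would have to reassemble the parts by index):

    // Split the encoded image into fixed-size chunks for concurrent upload.
    static const NSUInteger kChunkSize = 512 * 1024; // 512 KB per part (arbitrary)

    NSData *imageData = UIImageJPEGRepresentation(photo, 0.9);
    NSUInteger chunkCount = (imageData.length + kChunkSize - 1) / kChunkSize;

    for (NSUInteger i = 0; i < chunkCount; i++)
    {
        NSUInteger offset = i * kChunkSize;
        NSUInteger length = MIN(kChunkSize, imageData.length - offset);
        NSData *chunk = [imageData subdataWithRange:NSMakeRange(offset, length)];

        // Each chunk would be uploaded on its own asynchronous request, tagged
        // with its index and the total count so the server can stitch the
        // parts back together.
        NSLog(@"Part %lu/%lu: %lu bytes", (unsigned long)(i + 1),
              (unsigned long)chunkCount, (unsigned long)chunk.length);
    }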

What's the best way of saving/displaying images? (not blob vs. txt)

I'm making a gallery on my site and don't know the best solution for it. Need advice.
In my opinion, there are two ways of handling the images:
1. User uploads an image. I save it on the server only once, at its original size. Then, whenever that image needs to be displayed, I resize it to the required size on the fly, for example as an avatar. So I store only ONE original-sized image and resize it to ANY proper size RIGHT BEFORE displaying it.
2. User uploads an image. I save it on the server at its original size and also make and save several thumbnail-sized copies, for example avatar-sized, etc. So when the image is displayed it is not resized every time; the proper-sized copy is simply served.
I think that the second way is better, because there's no need to spend server resources on resizing images every time. But what if I decide to change the design of my site and some of the image dimensions change too? I'll end up with lots of images on the server that don't fit the new design.
On different forums they explain how to make galleries, and every time they say that thumbnail-sized copies are also made and saved. But that doesn't seem to make sense if the design changes over time. Please advise. Language - PHP.
One solution that others have come up with is a mix between the two. So, the user uploads the photo and you save it in its original form on your server. Then, when an avatar is needed, you check to see if you have the avatar saved on disk (maybe user12345_50x50.jpg - where 50x50 is widthxheight). If it does exist, show that image. If not, then use the server to resize/crop whatever, then save that image to disk and serve that to the user. This will allow you to request any size file and serve it as-needed -- taking advantage of caching those that have already been requested [Note that this is a server-side cache, so would apply for all users].
You sort of get the best of both worlds. You don't need to handle all of the image manipulation up front, just as needed. The first time the image is processed, that user will have to wait, but any other request will get the processed file.
One implementation that uses this solution in PHP is phpthumb: http://phpthumb.sourceforge.net/
