Zoomify .zif format bad performance

The new .zif single-file format provided by Zoomify Pro seems to have some performance issues. Compared to the old folder-based file structure, it loads the page 3 to 4 times slower and sends over 50% more requests (tested with the same initial image in both formats).
Using the old format is not feasible for our product, so we are stuck with over a minute of load time.
Has anyone encountered this issue, and are there any workarounds? Search results on the internet and the official site don't seem to be of any help.
NOTE: Contacting the vendor hasn't led to anything yet.

Although the official site claims the ZIF format can handle very large images, I'm skeptical, because the viewer tries to do everything in JavaScript. The performance is entirely dependent on the client's machine. Try opening it on a faster machine and see if it improves.
Alternative solution: you could create Deep Zoom Image tiles using the VIPS library.
More information here:
https://libvips.github.io/libvips/API/current/Making-image-pyramids.md.html
Scroll further down in the article and you'll see this snippet:
With 7.40 and later, you can use --container to set the container
type. Normally dzsave will write a tree of directories, but with
--container zip you'll get a zip file instead. Use .zip as the directory suffix to turn on zip format automatically:
$ vips dzsave wtc.tif mypyr.zip
to write a zipfile containing the tiles.
Also, check out this tutorial:
Serve deepzoom images from a zip archive with openseadragon
https://web.archive.org/web/20170310042401/https://literarymachin.es/deepzoom-osd-server/
The communities over there (OpenSeadragon and VIPS) are much stronger, so you'll get help when you hit a wall.
If you want to take a break from all of this and just want the images zoomable, you could use a third-party service such as zoomable.ca or zoomo.ca. They're free and user-friendly (upload your image and embed the viewer in your site, much like a Google Map).

ZIF format designer here... ZIF can easily handle monstrous images, up to hundreds of terabytes in size.
Without a server, of course the viewer tries to do everything; it's the only option. As a result, serving ZIF directly from a web server will not be as performant as using an image server. But... you can DO it. Using Zoomify tile folders, speed will be faster, but you may have hundreds of thousands or millions of tiles to deal with on the server side, and transfers will be horrendously slow and error-prone.
There are always trade-offs. See zif.photo for the specification.

Related

Download all tiles for a country from Open Street Map

I am creating an offline map app for iPhone (using MKMapKit). It will have a list of countries. If the user selects a country, all tiles will be downloaded and stored on the iPhone. I will use Open Street Map as the map provider.
(I have read that bulk downloading is forbidden, but given that the tiles for a country are pretty small (≈200MB) and that they will only be downloaded once, I don't think it's a problem.)
I think I will be using the template URL @"http://c.tile.openstreetmap.org/{z}/{x}/{y}.png" to download the tiles and then store them. My problem is that I don't know how to determine which tiles belong to which country, and therefore which ones to download.
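For reference, converting a coordinate to tile numbers is standard (this is the slippy-map formula from the OSM wiki); the part I'm missing is a per-country bounding box or outline to feed into it. A sketch:

#include <math.h>

/// Standard slippy-map conversion: latitude/longitude -> tile x/y at a zoom level.
/// Enumerating a country's tiles would mean looping x and y between the tile
/// coordinates of its bounding-box corners, for every zoom level wanted.
static void tileForCoordinate(double lat, double lon, int zoom, int *tileX, int *tileY)
{
    const double n = pow(2.0, zoom);
    const double latRad = lat * M_PI / 180.0;
    *tileX = (int)floor((lon + 180.0) / 360.0 * n);
    *tileY = (int)floor((1.0 - log(tan(latRad) + 1.0 / cos(latRad)) / M_PI) / 2.0 * n);
}

The resulting x/y/zoom triples would substitute directly into the {z}/{x}/{y} template above.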
I found this link in another SO answer, but that only allows you to download .pbf files (which I have no idea what they are), and only per continent.
First: if you really want to grab all tiles (at all zoom levels), you should read the OSM Tile Usage Policy again with care. If you just want to download all the tiles once (for your dev environment), you can use existing downloaders that allow you to select the desired country. This will result in a directory with thousands of small images and might take some days.
The better way would be to set up your own (desktop- or server-based) tile rendering chain, which gives you full control over styling and doesn't stress the community's resources. Please consult www.switch2osm.org for a detailed tutorial on how to set up a server-based rendering stack.
This is a problem. Downloading 200 MB of tiles only once is already questionable, because it is not just about the traffic; these tiles have to be rendered first. And if I understood you correctly, with your application every user will download these 200 MB.
Instead, you should think about downloading the raw data (which is what PBF is) and either rendering your own raster tiles, as already suggested by MaM, or creating a vector map as done by other popular apps like OsmAnd and Navit.

Displaying interlaced (progressive) Image in UIImageView

I am trying to display a JPEG image as it downloads, using part of the data, similar to what many web browsers and the Facebook app do:
first a low-quality version of the image is displayed (from just part of the data), and then the full image is shown in full quality.
This is best shown in the VIDEO HERE.
I followed this SO question:
How do I display a progressive JPEG in an UIImageView while it is being downloaded?
but all I got was an image view that renders as the data keeps coming in: no low-quality version first, no true progressive download and render.
Can anyone share a code snippet, or point me to more info on how this can be implemented in an iOS app?
I tried this link, for example, which shows JPEG info; it identifies the image as progressive:
http://www.webpagetest.org/jpeginfo/jpeginfo.php?url=http://cetus.sakura.ne.jp/softlab/software/spibench/pic_22p.jpg
And I used the correct code sequence:
-(void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data
{
	/// Append the new data
	[_dataTemp appendData:data];
	/// Get the total bytes downloaded so far
	const NSUInteger totalSize = [_dataTemp length];
	/// Update the data source; we must pass ALL the data, not just the new bytes
	CGImageSourceUpdateData(_imageSource, (CFDataRef)_dataTemp, (totalSize == _expectedSize) ? true : false);
	/// Once we know the expected size of the image, render whatever has been decoded so far
	if (_fullHeight > 0 && _fullWidth > 0)
	{
		CGImageRef image = CGImageSourceCreateImageAtIndex(_imageSource, 0, NULL);
		if (image != NULL)
		{
			[_imageView setImage:[UIImage imageWithCGImage:image]];
			CGImageRelease(image);
		}
	}
}
But the code only shows the image when it has finished loading. With other images it will show the image as it downloads, but only top to bottom: there is no low-quality version first that progressively gains detail, the way browsers do.
DEMO PROJECT HERE
This is a topic I've had some interest in for a while: there appears to be no way to do what you want using Apple's APIs, but if you can invest time in this you can probably make it work.
First, you are going to need a JPEG decoding library: libjpeg or libjpeg-turbo. You will then need to integrate it into something you can use with Objective-C. There is an open source project, PhotoScrollerNetwork, that leverages the turbo library to decode very large JPEGs "on the fly" as they download, so they can be panned and zoomed (PhotoScroller is an Apple project that does the panning and zooming, but it requires pre-tiled images).
While the above project is not exactly what you want, you should be able to lift much of the libjpeg-turbo interface to decode progressive images and return the low quality images as they are received. It would appear that your images are quite large, otherwise there would be little need for progressive images, so you may find the panning/zooming capability of the above project of use as well.
Some users of PhotoScrollerNetwork have requested support for progressive images, but it seems there is very little general use of them on the web.
EDIT: A second idea: if it's your site that you would use to vend progressive images (and I assume this, since there are so few to be found normally), you could take a completely different tack.
In this case, you would construct a binary file of your own design, one that had, say, 4 images inside it. The first four bytes would provide the length of the data following it (and each subsequent image would use the same 4-byte prefix). Then, on the iOS side, as the download starts, once you got the full bytes of the first image, you could use those to build a small low-res UIImage and show it while the next image was being received. When the next one fully arrives, you would update the low-res image with the newer, higher-res image. It's possible you could use a zip container and do on-the-fly decompression; I'm not 100% sure. In any case, the above is a standard solution to your problem, and would provide near-identical performance to libjpeg with much, much less work.
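To make the idea concrete, here is a sketch of the client side of such a container. The 4-byte big-endian length prefix and the _buffer and _imageView ivars are assumptions of this hypothetical format, not an existing API:

/// Hypothetical parser for a stream of length-prefixed images:
/// [4-byte big-endian length][image bytes][4-byte length][image bytes]...
/// Call this each time new bytes arrive; _buffer is an NSMutableData ivar.
- (void)consumeReceivedData:(NSData *)data
{
	[_buffer appendData:data];
	while ([_buffer length] >= 4)
	{
		uint32_t beLength;
		[_buffer getBytes:&beLength length:4];
		const NSUInteger imageLength = (NSUInteger)CFSwapInt32BigToHost(beLength);
		if ([_buffer length] < 4 + imageLength)
			break; /// Wait for the rest of this image to arrive
		/// Each complete image replaces the previous, lower-res one
		NSData *imageData = [_buffer subdataWithRange:NSMakeRange(4, imageLength)];
		UIImage *image = [UIImage imageWithData:imageData];
		if (image != nil)
			[_imageView setImage:image];
		/// Drop the consumed prefix and payload from the front of the buffer
		[_buffer replaceBytesInRange:NSMakeRange(0, 4 + imageLength) withBytes:NULL length:0];
	}
}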
I have implemented a progressive loading solution for an app I am currently working on. It does not use progressive JPEG, as I needed more flexibility loading different-res versions, but I get the same result (and it works really well; definitely worth implementing).
It's a camera app working in tandem with a server, so the images originate with the iPhone's camera and are stored remotely. When the server gets an image, it is processed (using ImageMagick, but it could be any suitable library) and stored in 3 sizes: small thumb (~160x120), large thumb (~400x300) and full size (~double the retina screen size). Target devices are retina iPhones.
I have an ImageStore class which is responsible for loading images asynchronously from wherever they happen to be, trying the fastest location first (live cache, local filesystem cache, asset library, network server).
typedef void (^RetrieveImage)(UIImage *image);

- (void)fullsizeImageFromPath:(NSString *)path completion:(RetrieveImage)completionBlock;
- (void)largeThumbImageFromPath:(NSString *)path completion:(RetrieveImage)completionBlock;
- (void)smallThumbImageFromPath:(NSString *)path completion:(RetrieveImage)completionBlock;
Each of these methods will also attempt to load lower-res versions. The completion block actually loads the image into its image view.
Thus:
fullsizeImageFromPath will get the full-sized version, and also call largeThumbImageFromPath;
largeThumbImageFromPath will get the large thumb, and also call smallThumbImageFromPath;
smallThumbImageFromPath will just get the small thumb.
These methods invoke calls that are wrapped in cancellable NSOperations. If a larger-res version arrives before any of its lower-res siblings, those respective lower-res calls are cancelled. The net result is that fullsizeImageFromPath may end up applying the small thumb, then the large thumb, and finally the full-res image to a single image view, depending on which arrives first. The result is really smooth.
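In outline, the cascade looks something like this (a sketch only: the fetch helper, the cancellation call, and the _queue ivar are hypothetical stand-ins for the real implementation):

- (void)fullsizeImageFromPath:(NSString *)path completion:(RetrieveImage)completionBlock
{
	/// Kick off the lower-res request first so something appears quickly;
	/// it will in turn kick off the small-thumb request
	[self largeThumbImageFromPath:path completion:completionBlock];

	NSOperation *op = [NSBlockOperation blockOperationWithBlock:^{
		UIImage *image = [self fetchFullsizeImageAtPath:path]; /// hypothetical network/disk fetch
		dispatch_async(dispatch_get_main_queue(), ^{
			/// Full-res has arrived, so the still-pending lower-res results are obsolete
			[self cancelPendingLowerResOperationsForPath:path]; /// hypothetical
			completionBlock(image);
		});
	}];
	[_queue addOperation:op]; /// _queue: an NSOperationQueue ivar
}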
Here is a gist showing the basic idea
This may not suit you, as you may not be in control of the server side of the process. Before I implemented this, I was pursuing the solution that David H describes. That would have been a lot more work, and less useful once I realised I also needed access to the lower-res images in their own right.
Another approach which might be closer to your requirements is explained here
This has evolved into NYXProgressiveImageView, a subclass of UIImageView which is distributed as part of NYXImagesKit.
Finally... for a really hacky solution, you could use a UIWebView to display progressive PNGs (progressive JPEGs do not appear to be supported).
update
After recommending NYXProgressiveImageView, I realised that this is what you have been using. Unfortunately you did not mention this in your original post, so I feel I have been on a bit of a runaround. In fact, reading your post again, I feel you have been a little dishonest. From the text of your post, it looks as if the "DEMO" is a project that you created. In fact you didn't create it; you copied it from here:
http://cocoaintheshell.com/2011/05/progressive-images-download-imageio/ProgressiveImageDownload.zip
which accompanies this blog entry from cocoaintheshell
The only changes you have made are one NSLog line and the JPG test URL.
The code snippet that you posted isn't yours; it is copied from this project without attribution. If you had mentioned this in your post, it would have saved me a whole heap of time.
Anyway, returning to the post... as you are using this code, you should probably be using the current version, which is on GitHub:
https://github.com/Nyx0uf/NYXImagesKit
see also this blog entry
To keep your life simple, you only need these files from the project:
NYXProgressiveImageView.h
NYXProgressiveImageView.m
NYXImagesHelper.h
NYXImagesHelper.m
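Once those files are in your project, basic usage is a drop-in replacement for UIImageView. A sketch, assuming the loadImageAtURL: loader shown in the accompanying blog entry (check the header for the exact name):

#import "NYXProgressiveImageView.h"

/// Renders the image incrementally as it downloads
NYXProgressiveImageView *imageView =
	[[NYXProgressiveImageView alloc] initWithFrame:CGRectMake(0.0f, 0.0f, 320.0f, 480.0f)];
[self.view addSubview:imageView];
[imageView loadImageAtURL:[NSURL URLWithString:@"http://www.libpng.org/pub/png/img_png/pnglogo-grr.png"]];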
Next, you need to be sure you are testing with GOOD images.
For example, this PNG works well:
http://www.libpng.org/pub/png/img_png/pnglogo-grr.png
You also need to pay attention to this cryptic comment:
/// Note: Progressive JPEG are not supported see #32
There seems to be an issue with the JPEG tempImage rendering which I haven't been able to work out; maybe you can. That is the reason why your "DEMO" is not working correctly, anyway.
update 2
added gist
I believe this is what you are looking for:
https://github.com/contentful-labs/Concorde
A framework for downloading and decoding progressive JPEGs on iOS and OS X that uses libjpeg-turbo as the underlying JPEG implementation.
Try these; maybe they're useful for you:
https://github.com/path/FastImageCache
https://github.com/rs/SDWebImage
I had the same problem; then I found something tricky. It's not a proper solution, but it works:
you have to load a low-resolution/thumbnail image first, and once it has loaded, load the actual image.
This is an example for Android; I hope you can transform it into an iOS version.

Using CombineFileInputFormat for images (or BLOBs)?

I am planning an HDFS system that will host image files (a few MB to 200 MB) for a digital repository (Fedora Commons). I found from another Stack Overflow post that CombineFileInputFormat can be used to create input splits consisting of multiple input files. Can this approach be used for images or PDFs? Inside the map task, I want to process individual files in their entirety, i.e. process each image in the input split separately.
I'm aware of the small-files problem, and it will not be an issue for my case.
I want to use CombineFileInputFormat for the benefits of avoiding mapper task setup/cleanup overhead and of preserving data locality.
If you want to process images in Hadoop, I can only recommend using HIPI, which should allow you to do what you need.
Otherwise, when you say you want to process individual files in their entirety, I don't think you can do this with conventional input formats, because even with CombineFileInputFormat you would have no guarantee that what's in your split is exactly one image.
An approach you could also consider is to have as input a file containing the URLs/locations of your images (for example, you could put them in Amazon S3), make sure you have as many mappers as images, and then each map task would process an individual image. I did something similar not long ago and it worked OK.

iOS Upload Photo in Parts

Let's say I want to preserve the full resolution of a photo on the iPhone and then upload it to a web service for storage. Quality is critical. Unfortunately, the size of a 3200x2400 photo taken with the iPhone camera is approximately 10-12 MB for a PNG and 1-3 MB for a JPG (as of my latest tests).
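For reference, a quick harness for comparing payload sizes at different JPEG qualities; UIImagePNGRepresentation and UIImageJPEGRepresentation are the standard UIKit calls, and the method itself is just illustrative:

/// Log the sizes of the same photo encoded as PNG and as JPEG at two qualities
- (void)logPayloadSizesForPhoto:(UIImage *)photo
{
	NSData *png  = UIImagePNGRepresentation(photo);
	NSData *jpg9 = UIImageJPEGRepresentation(photo, 0.9f); /// near-lossless, far smaller than PNG
	NSData *jpg7 = UIImageJPEGRepresentation(photo, 0.7f); /// smaller again, mild artifacts
	NSLog(@"PNG: %lu  JPEG(0.9): %lu  JPEG(0.7): %lu",
	      (unsigned long)[png length], (unsigned long)[jpg9 length], (unsigned long)[jpg7 length]);
}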
Here we have a dilemma. On a 3G connection, a 12MB upload is an eternity (relatively speaking, of course). So I've explored a few options, including streams/chunking and background uploading. Still, it's not ideal. I'd like the upload to be as fast as possible. See edit.
So my question is this: would it be possible to split an image into separate data chunks, upload them all concurrently using multiple asynchronous connections, and then re-assemble them server side? Does an implementation exist for this?
EDIT: So speed is capped by bandwidth as has been discussed in the comments. But there are other uses for chunking/splitting that I would like to explore. So the question still stands.
What you can do is split the image into several pieces, upload each, and then reassemble them later.
I guess a benefit of that would be keeping the partial image on a failed connection, then continuing to upload the remaining pieces afterwards.
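A rough sketch of the splitting side. The endpoint URL, the part/total query parameters, and the chunk size are all made up here; the server is assumed to reassemble parts by index:

static const NSUInteger kChunkSize = 256 * 1024; /// arbitrary chunk size

/// Split imageData into fixed-size chunks and upload each one on its own connection
- (void)uploadImageData:(NSData *)imageData
{
	const NSUInteger total = ([imageData length] + kChunkSize - 1) / kChunkSize;
	for (NSUInteger i = 0; i < total; i++)
	{
		const NSUInteger offset = i * kChunkSize;
		const NSUInteger length = MIN(kChunkSize, [imageData length] - offset);
		NSData *chunk = [imageData subdataWithRange:NSMakeRange(offset, length)];

		/// Hypothetical endpoint that accepts a part index and a total count
		NSString *urlString = [NSString stringWithFormat:
			@"https://example.com/upload?part=%lu&total=%lu",
			(unsigned long)i, (unsigned long)total];
		NSMutableURLRequest *request =
			[NSMutableURLRequest requestWithURL:[NSURL URLWithString:urlString]];
		[request setHTTPMethod:@"PUT"];
		[request setHTTPBody:chunk];

		/// Each part goes out asynchronously, so the uploads run concurrently
		[NSURLConnection sendAsynchronousRequest:request
		                                   queue:[NSOperationQueue mainQueue]
		                       completionHandler:^(NSURLResponse *response, NSData *body, NSError *error) {
			if (error != nil)
				NSLog(@"part %lu failed: %@", (unsigned long)i, error);
		}];
	}
}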

Watch videos as they are being uploaded

Is it possible to implement a feature that allows users to watch videos as they are being uploaded to the server by others? Is HTML5 suitable for this task? What about Flash? Are there any ready-to-go solutions? I don't want to reinvent the wheel. The application will be hosted on a dedicated server.
Thanks.
Of course it is possible; the data is there, isn't it?
However, it will be very hard to implement.
Also, I am not so into Python and am not aware of a library or service suiting your requirements, but I can cover the basics of video streaming.
I assume you are talking about video files that are uploaded, not streams. Because for streams there are obviously thousands of solutions out there...
In the simplest case, the video being uploaded is already ready to be served to your clients and has a so-called "faststart atom". These are container-format-specific, and there are sometimes a bunch of them. The most common is the moov atom. It contains a lot of data and is very complex; however, in our use case, in a nutshell, it holds the data that enables the client to begin playing the video right away using the data available from the beginning.
You need that if you have progressive-download videos (YouTube...), meaning a file served from a web server: you obviously have not downloaded the full file, yet the player can already start playing.
If the faststart atom were not present, that would not be possible.
Sometimes playback still works, but the player, for example, cannot display a progress bar, because it doesn't know how long the file is.
Having that covered, the file can be uploaded. You will need an upload solution that writes the data directly to a buffer or a file (a file will be easier...).
This is almost always the case; for example, PHP creates a file in the tmp_dir. You can also specify that directory if you want to find the video while it's being uploaded.
Well, now you can start reading that file byte by byte and writing that data to a connection to another client. Just be sure not to go ahead of what has already been received and written. You would probably initiate your upload with some metadata kept in memory that holds the currently received byte position and the location of the file.
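To illustrate the read loop (in Objective-C only to match the rest of this page; the language does not matter, and uploadStillRunning() and sendToClient() are hypothetical stand-ins for your upload-state flag and the viewer's connection):

#import <Foundation/Foundation.h>
#include <unistd.h>

extern BOOL uploadStillRunning(void);    /// hypothetical: is the upload still in progress?
extern void sendToClient(NSData *chunk); /// hypothetical: write to the viewer's connection

/// Follow a file that is still being written, forwarding new bytes to the viewer
/// and never reading past what has been received so far.
static void followGrowingFile(NSString *path)
{
	NSFileHandle *fh = [NSFileHandle fileHandleForReadingAtPath:path];
	unsigned long long sent = 0;
	while (uploadStillRunning())
	{
		[fh seekToFileOffset:sent];
		NSData *chunk = [fh readDataOfLength:64 * 1024]; /// returns short or empty near EOF
		if ([chunk length] == 0)
		{
			usleep(200000); /// no new data yet; to the viewer this looks like a slow connection
			continue;
		}
		sendToClient(chunk);
		sent += [chunk length];
	}
}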
Anyone who requests the file after the upload has started can just receive the entire file, or, if the upload is not yet finished, get it from your application.
You will have to throttle the data delivery or pause it when the data runs short. To the client this will appear almost like a "slow connection". However, you will have to echo some data from time to time to prevent the connection from closing. But if your upload doesn't stall (and why should it?), that shouldn't be a problem.
Now, if you want to have something like on-the-fly transcoding of various input formats into your desired output format, things get interesting.
AFAIK, ffmpeg has neat APIs that let you deal directly with data streams.
HandBrake is also a very good tool; however, you would need to take the long road of using external executables.
I am not really aware of your requirements, but if your clients are already tuned in, for example to a Red5 streaming server, feeding data into a stream should also work fine.
Yes, take a look at Qik, http://qik.com/
"Instant Video Sharing ... Videos can be viewed live (right as they are being recorded) or anytime later."
Qik provides developer APIs, including ones like these:
qik.stream.subscribe_public_recent -- Subscribe to the videos (live and recorded)
qik.user.following -- Provides the list of people the user is following
qik.stream.public_info -- Get public information for a specific video
It is most certainly possible to do this, but it won't be trivial. And no, I don't think you will find an "out of the box" solution that will require little effort on your behalf.
You say you want to let:
users watch videos as they are uploaded to server by others
Well, this could be interpreted two different ways:
Do you mean that you don't want a user to have to refresh the page before seeing new videos that other users have just finished uploading?
Or do you mean that you want one user to be able to watch a partially uploaded video (aka another user is still in the process of uploading it and right now the server only contains a partial upload of the video)?
Implementing #1 wouldn't be hard at all. You would just need an AJAX script to check for newly uploaded videos, and those videos could then be served to the user however you choose. HTML5 vs. Flash isn't really a consideration here.
The second scenario, on the other hand, would require quite a bit of effort. I am guessing that HTML5 might not be mature enough to handle this type of situation. If you are not looking to reinvent the wheel and don't have a lot of time to dedicate to this feature, then I would say you are out of luck. You may be able to use ffmpeg to parse partial video files and feed them to a Flash player, but I would think of that as a large task.