Generating tiles from a large PNG - iOS

I'm looking to use CATiledLayer to display a huge PNG image on an iOS device. For this to work, I need to split the larger image into tiles (at 100%, 50%, 25% and 12.5%) on the client (creating tiles on the server side is not an option).
I can see that there are libraries such as libjpeg-turbo that may work, however these are for JPEGs and I need to work with PNGs.
Does anyone know of a way that I can take a large PNG (~20 MB) and generate tiles from it on the device?
Any pointers or suggestions would be appreciated!
Thank you!

You can use the built-in Core Graphics CGDataProviderCreateWithFilename and CGImageCreateWithPNGDataProvider APIs to open the image, then create each of the tiles by doing something like:
const CGSize tileSize = CGSizeMake(256, 256);
const CGPoint tileOrigin = ...; // Calculate using current column, row, and tile size.
// Offset the full image so that only the current tile falls inside the 256x256 context.
const CGRect tileFrame = CGRectMake(-tileOrigin.x, -tileOrigin.y, imageSize.width, imageSize.height);
UIGraphicsBeginImageContextWithOptions(tileSize, YES, 1);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextDrawImage(context, tileFrame, image.CGImage);
UIImage *tileImage = UIGraphicsGetImageFromCurrentImageContext();
[UIImagePNGRepresentation(tileImage) writeToFile:tilePath atomically:YES];
UIGraphicsEndImageContext();
You may also want to look at the related sample projects (Large Image Downsizing, and PhotoScroller) referenced under the UIScrollView Class Reference.
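For a rough idea of how the full loop might look, here is a minimal Swift sketch of the same approach; the 256-point tile size, the four scale levels, the output directory and the tile-naming scheme are assumptions, not part of the answer above:
import UIKit

// Hypothetical tiling helper: cut a large CGImage into 256x256 PNG tiles
// at 100%, 50%, 25% and 12.5%. The file-naming scheme is made up.
func writeTiles(from image: CGImage, to directory: URL) throws {
    let tileSize = CGSize(width: 256, height: 256)
    let scales: [CGFloat] = [1.0, 0.5, 0.25, 0.125]
    for scale in scales {
        let scaledSize = CGSize(width: CGFloat(image.width) * scale,
                                height: CGFloat(image.height) * scale)
        let cols = Int(ceil(scaledSize.width / tileSize.width))
        let rows = Int(ceil(scaledSize.height / tileSize.height))
        for row in 0..<rows {
            for col in 0..<cols {
                UIGraphicsBeginImageContextWithOptions(tileSize, true, 1)
                // Draw the whole scaled image, offset so only this tile lands in the context.
                let origin = CGPoint(x: -CGFloat(col) * tileSize.width,
                                     y: -CGFloat(row) * tileSize.height)
                UIImage(cgImage: image).draw(in: CGRect(origin: origin, size: scaledSize))
                let tile = UIGraphicsGetImageFromCurrentImageContext()
                UIGraphicsEndImageContext()
                if let data = tile?.pngData() {
                    try data.write(to: directory.appendingPathComponent("tile_\(scale)_\(col)_\(row).png"))
                }
            }
        }
    }
}
Note that this keeps the whole decoded image in memory while tiling; for very large PNGs you may want to produce the downscaled levels first (for example with ImageIO's thumbnail options) before cutting the tiles.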

Related

CIImage extent in pixels or points?

I'm working with a CIImage, and while I understand it's not a linear image, it does hold some data.
My question is whether a CIImage's extent property is returned in pixels or points. According to the documentation, which says very little, it's in working-space coordinates. Does this mean there's no way to get the pixels / points from a CIImage, and that I must convert to a UIImage and use its .size property to get the points?
I have a UIImage with a certain size, and when I create a CIImage using the UIImage, the extent is shown in points. But if I run a CIImage through a CIFilter that scales it, I sometimes get the extent returned in pixel values.
I'll answer the best I can.
If your source is a UIImage, its size will be the same as the extent. But note, this isn't a UIImageView (whose size is in points). And we're just talking about the source image.
Running something through a CIFilter means you are manipulating things. If all you are doing is manipulating color, its size/extent shouldn't change (the same as creating your own CIColorKernel - it works pixel-by-pixel).
But, depending on the CIFilter, you may well be changing the size/extent. Certain filters create a mask or a tile; these may actually have an extent that is infinite! Others (blurs are a great example) sample surrounding pixels, so their extent actually increases because they sample "pixels" beyond the source image's size. (In custom-kernel terms, these correspond to a CIWarpKernel.)
Confusing? Yes, quite a bit. Taking this to a bottom line:
What is the filter doing? Does it need to simply check a pixel's RGB and do something? Then the UIImage size should be the output CIImage extent.
Does the filter produce something that depends on the pixel's surrounding pixels? Then the output CIImage extent is slightly larger. How much may depend on the filter.
There are filters that produce something with no regard to an input. Most of these may have no true extent, as they can be infinite.
Points are what UIKit and CoreGraphics always work with. Pixels? At some point CoreImage does, but it's low-level enough that (unless you want to write your own kernel) you shouldn't need to care. Extents can usually - but keep in mind the above - be equated to a UIImage size.
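If it helps to see this concretely, here is a small Swift sketch (the colour filter, the blur radius and the function name are illustrative choices, not part of the answer) that prints the extents before and after filtering:
import UIKit
import CoreImage

// uiImage is assumed to come from somewhere in your app (asset, camera, etc.)
func inspectExtent(of uiImage: UIImage) {
    guard let ciImage = CIImage(image: uiImage) else { return }
    // For a @1x UIImage this matches uiImage.size; for @2x/@3x it is the pixel size.
    print("size:", uiImage.size, "extent:", ciImage.extent)

    // A colour-only filter keeps the extent unchanged...
    let mono = ciImage.applyingFilter("CIPhotoEffectMono")
    print("mono extent:", mono.extent)

    // ...while a blur samples neighbouring pixels, so its output extent grows.
    let blurred = ciImage.applyingFilter("CIGaussianBlur",
                                         parameters: [kCIInputRadiusKey: 10.0])
    print("blur extent:", blurred.extent)
}
A colour-only filter leaves the extent alone, while the blur's output extent comes back larger than the source, exactly as described above.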
EDIT
Many images (particularly RAW ones) can be so large that they affect performance. I have an extension on UIImage that resizes an image to fit a bounding square, which helps maintain consistent CoreImage performance.
extension UIImage {
    public func resizeToBoundingSquare(_ boundingSquareSideLength: CGFloat) -> UIImage {
        let imgScale = self.size.width > self.size.height
            ? boundingSquareSideLength / self.size.width
            : boundingSquareSideLength / self.size.height
        let newWidth = self.size.width * imgScale
        let newHeight = self.size.height * imgScale
        let newSize = CGSize(width: newWidth, height: newHeight)
        UIGraphicsBeginImageContext(newSize)
        self.draw(in: CGRect(x: 0, y: 0, width: newWidth, height: newHeight))
        let resizedImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return resizedImage!
    }
}
Usage:
image = image.resizeToBoundingSquare(640)
In this example, an image of 3200x2000 would be reduced to 640x400, and an image of 320x200 would be enlarged to 640x400. I do this to an image before rendering it and before creating a CIImage to use in a CIFilter.
I suggest you think of them as points. There is no scale and no screen (a CIImage is not something that is drawn), so there are no pixels.
A UIImage backed by a CGImage is the basis for drawing, and in addition to the CGImage it has a scale; together with the screen resolution, that gives us our translation from points to pixels.
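For what it's worth, a quick Swift check (the image source and the @2x example values in the comment are assumed) makes that relationship visible:
// size is in points; the backing CGImage is in pixels; scale links the two.
print(uiImage.size, uiImage.scale, uiImage.cgImage?.width ?? 0)
// e.g. (100.0, 100.0), 2.0, 200  -> 100 points * scale 2 = 200 pixels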

Is drawRect "wasteful" when cropping? Is there an alternative?

Let's say you have an original image that is
200 high, 100 wide
Let's say you want to draw only a square of it. Let's say, just the bottom square.
Let's say you want to draw it on to a new small image that is
20 high, 20 wide
Of course, you simply do this:
CGRect imageRect = CGRectMake(0, -20, 20, 40); // whole image scaled to 20x40, shifted up so only the bottom square lands on the 20x20 canvas
.. begin graphics context ..
[originalImage drawInRect:imageRect];
With drawInRect:, you supply a rectangle with the same proportions as the original image, but expressed in the coordinates of the new canvas. No problem.
BUT:
in the example, you are drawing THE WHOLE ORIGINAL IMAGE -- THE WHOLE 200 HEIGHT on to the new small square.
(Of course the "top half" misses the new canvas, and you only get the bottom half on the new canvas -- which is what you wanted.)
My impression is iOS renders or calculates the "whole" original image, and it only "puts on" the bottom half (in the example) on to the new canvas.
This seems very wasteful.
IS THERE A FASTER WAY TO DO THIS?
It seems like there should be a command, something like this:
drawThisPartOfTheOriginalImage: (0,100 to 100,200)
ontoThisPartOfTheNewCanvas: (0,20 to 20,20)
What's the situation? Is there a more efficient command than drawRect when you are only drawing a small part of the original image? Cheers
CGContextClipToRect approach...(doesn't work!)
I experimented with CGContextClipToRect as Peter suggested below.
CGContextClipToRect indeed sets the area you will draw to on your "result" canvas. I simply set it to the size of that result canvas (it would be 20x20 in the example above). To repeat, the aim here is to have iOS save time by avoiding pointlessly drawing the, err, not-drawn part of the original.
This example is for an original image of 2000x2000 drawn onto a 500x500 canvas (i.e., only drawing the top-left quarter of the original onto the result).
In fact, notice it is slightly slower when you include the CGContextClipToRect, again suggesting iOS "knows when to stop" anyway.
// no need to "overdraw"... quickener turned OFF
//CGContextRef c = UIGraphicsGetCurrentContext();
//CGContextClipToRect(c, CGRectMake(0, 0, resultSize.width,resultSize.height));
//Execution Time .................................. 0.443669
// no need to "overdraw"... quickener turned ON
CGContextRef c = UIGraphicsGetCurrentContext();
CGContextClipToRect(c, CGRectMake(0, 0, resultSize.width,resultSize.height));
//Execution Time .................................. 0.461845
As you can see it's a hair slower, actually, adding the CGContextClipToRect trick.
For the record, here is the exact routine used to crop an image:
-(UIImage *)simplishTopCrop:(UIImage *)fromImage
{
    // check for zero fromImage.size.width etc etc
    CGSize resultSize = CGSizeMake(640,640);
    CGFloat scale = MAX(
        resultSize.width/fromImage.size.width,
        resultSize.height/fromImage.size.height);
    CGFloat width = fromImage.size.width * scale;
    CGFloat height = fromImage.size.height * scale;
    CGRect imageRect = CGRectMake(0,0, width,height);

    UIGraphicsBeginImageContextWithOptions(resultSize, NO, 0);
    // INSERT 'CGContextClipToRect' TRICK ABOVE, RIGHT HERE
    [fromImage drawInRect:imageRect];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
This is where clipping comes in. Clip to your dirty rect, then draw the whole image into your bounds. The clipping path will keep the rest of the image at least from appearing, and hopefully from being composited or sampled at all.
If your profiling in Instruments finds that that is not efficient enough, you might try cropping the image itself, using CGImageCreateWithImageInRect, and then drawing that image into your dirty rect. You may want to keep your cropped image around and only throw it away when the rect changes. One way or the other, cropping the image may be more efficient—but don't forget to profile both before and after to prove that.
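To make that second suggestion concrete, here is a hedged Swift sketch of the crop-first approach, using the question's bottom-square example; the function name and the 20-point output side are assumptions:
import UIKit

// Sketch: crop the bottom square of the source image first, then draw only
// that crop into the small output context. Sizes follow the question's example.
func bottomSquare(of source: UIImage, outputSide: CGFloat = 20) -> UIImage? {
    guard let cg = source.cgImage else { return nil }
    let side = min(cg.width, cg.height)
    // Bottom square in the CGImage's pixel coordinates (origin at the top-left).
    let cropRect = CGRect(x: 0, y: cg.height - side, width: side, height: side)
    guard let cropped = cg.cropping(to: cropRect) else { return nil }

    let outputSize = CGSize(width: outputSide, height: outputSide)
    UIGraphicsBeginImageContextWithOptions(outputSize, false, 0)
    defer { UIGraphicsEndImageContext() }
    // Only the cropped pixels are handed to the context; nothing is drawn off-canvas.
    UIImage(cgImage: cropped).draw(in: CGRect(origin: .zero, size: outputSize))
    return UIGraphicsGetImageFromCurrentImageContext()
}
Apple documents that the cropped CGImage keeps a reference to the original rather than copying its pixels, so the crop itself is cheap; as the answer says, profile both variants before committing to either.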

Generating a 54 megapixel image on iPhone 4/4S and iPad 2

I'm currently working on a project that must generate a 9000x6000 pixel collage from 15 photos. The problem I'm facing is that when I finish drawing I get an empty image (those 15 images are not being drawn into the context).
This problem only appears on devices with 512MB of RAM, like the iPhone 4/4S or iPad 2, and I think it is caused by the system being unable to allocate enough memory for this app. When I run this line: UIGraphicsBeginImageContextWithOptions(outputSize, opaque, 1.0f); the app's memory usage rises by 216MB and the total memory usage reaches ~240MB of RAM.
What I cannot understand is why on Earth the images that I'm trying to draw within the for loop are not always being rendered into the currentContext. I emphasized the word always because only once in 30 tests were the images rendered (without changing any line of code).
Question nr. 2: If this is a problem caused by the system being unable to allocate enough memory, is there any other way to generate this image, like a CGContextRef backed by a file output stream, so that it doesn't keep the whole image in memory?
This is the code:
CGSize outputSize = CGSizeMake(9000, 6000);
BOOL opaque = YES;
UIGraphicsBeginImageContextWithOptions(outputSize, opaque, 1.0f);
CGContextRef currentContext = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(currentContext, [UIColor blackColor].CGColor);
CGContextFillRect(currentContext, CGRectMake(0, 0, outputSize.width, outputSize.height));
for (NSUInteger i = 0; i < strongSelf.images.count; i++)
{
    @autoreleasepool
    {
        AGAutoCollageImageData *imageData = (AGAutoCollageImageData *)strongSelf.layout.images[i];
        CGRect destinationRect = CGRectMake(floorf(imageData.destinationRectangle.origin.x * scaleXRatio),
                                            floorf(imageData.destinationRectangle.origin.y * scaleYRatio),
                                            floorf(imageData.destinationRectangle.size.width * scaleXRatio),
                                            floorf(imageData.destinationRectangle.size.height * scaleYRatio));
        CGRect sourceRect = imageData.sourceRectangle;
        // Draw the clipped image into its destination rectangle
        CGImageRef clippedImageRef = CGImageCreateWithImageInRect(((ALAsset *)strongSelf.images[i]).defaultRepresentation.fullScreenImage, sourceRect);
        CGContextDrawImage(currentContext, destinationRect, clippedImageRef);
        CGImageRelease(clippedImageRef);
    }
}
// Pull the image from our context
strongSelf.result = UIGraphicsGetImageFromCurrentImageContext();
// Pop the context
UIGraphicsEndImageContext();
P.S.: The console doesn't show anything but 'memory warnings', which are expected.
Sounds like a cool project.
Tactic: try also releasing imageData at the end of every loop (explicitly, after releasing the clippedImageRef)
Strategic:
If you do need to support such "low" RAM limits with such "high" input, maybe you should consider two alternative options:
Compress (obviously): even minimal JPEG compression, invisible to the naked eye, can go a long way.
Split: never "really" merge the image. Keep an array-backed data structure that represents a BigImage, and build utilities for the presentation logic (a minimal sketch of this idea follows below).
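To make the "Split" option concrete, a minimal Swift sketch along those lines might look like this; the type names, the render signature and the intersection test are all assumptions, not an existing API:
import UIKit

// Hypothetical "BigImage" kept as pieces; only the visible part is ever composited.
struct CollagePiece {
    let image: UIImage        // one of the 15 source photos
    let destination: CGRect   // its frame in the 9000x6000 collage space
}

struct BigImage {
    let pieces: [CollagePiece]
    let fullSize = CGSize(width: 9000, height: 6000)

    // Render only the part of the collage that intersects visibleRect, at the
    // requested scale, so the full 54-megapixel bitmap never exists in memory.
    func render(visibleRect: CGRect, scale: CGFloat) -> UIImage? {
        let outputSize = CGSize(width: visibleRect.width * scale,
                                height: visibleRect.height * scale)
        UIGraphicsBeginImageContextWithOptions(outputSize, true, 1)
        defer { UIGraphicsEndImageContext() }
        for piece in pieces where piece.destination.intersects(visibleRect) {
            let target = piece.destination
                .offsetBy(dx: -visibleRect.origin.x, dy: -visibleRect.origin.y)
                .applying(CGAffineTransform(scaleX: scale, y: scale))
            piece.image.draw(in: target)
        }
        return UIGraphicsGetImageFromCurrentImageContext()
    }
}
A view (for example one backed by CATiledLayer) can then request just the visible rectangle at the current zoom level, so the full 9000x6000 bitmap never has to be allocated.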

iOS Tesseract OCR: why is recognition so poor? Engine principle

I have a question about the Tesseract OCR principle. As far as I understand, after shape detection, symbols (their forms) are scaled (resized) to some specific font size.
Such a font size is based on the trained data. Basically, the training set defines the symbols (their geometry and shape), and maybe their representation.
I am using Tesseract 3.01 (the latest version) on the iOS platform.
I checked the Tesseract FAQ and looked at the forum, but I do not understand why I get low recognition quality for some images.
It is said that the font should be bigger than 12pt and the image should have more than 300 DPI. I did all the necessary preprocessing, such as blurring (if needed) and contrast enhancement.
I even used the other engine in Tesseract OCR, called CUBE.
But for some images (despite the fact that they are big, MIN(width, height) > 1000, and I rescale them for Tesseract), I get bad recognition results:
http://goo.gl/l9uJMe
However, on another set of images the results are better:
http://goo.gl/cwA9DC
Those images are smaller and I do not resize them (I just convert them to grayscale).
If what I wrote about the engine is correct, then suppose the training set is based on a 14pt font. Symbols from the pictures are resized to some specific size, and I do not see any reason why they would not be recognised in that case.
I also tried custom dictionaries to penalise non-dictionary words; this did not give much benefit to recognition.
tesseract = new tesseract::TessBaseAPI();

GenericVector<STRING> variables_name(1), variables_value(1);
variables_name.push_back("user_words_suffix");
variables_value.push_back("user-words");

int retVal = tesseract->Init([self.tesseractDataPath cStringUsingEncoding:NSUTF8StringEncoding], NULL, tesseract::OEM_TESSERACT_ONLY, NULL, 0, &variables_name, &variables_value, false);
ok |= retVal == 0;
ok |= tesseract->SetVariable("language_model_penalty_non_dict_word", "0.2");
ok |= tesseract->SetVariable("language_model_penalty_non_freq_dict_word", "0.2");
if (!ok)
{
    NSLog(@"Error initializing tesseract!");
}
So my question is: should I train Tesseract on another font?
And, honestly speaking, why should I have to train it? With the default trained data, text from the Internet or from a PC/Mac screen is recognised well.
I also checked the original Tesseract English trained data; it has 38 TIFF files that belong to the following font families:
1) Arial
2) verdana
3) trebuc
4) times
5) georgia
6) cour
It seems that the font in my images does not belong to this set.
In your case the size of the image is not the problem. As I can see from your attached images (and I'm surprised that nobody mentioned it before), the problem is that the text in the images that give bad results is not placed on straight lines.
One of the things Tesseract does at an early stage of the OCR process is detect the image layout and extract whole lines of text.
This image is the best example to illustrate this part of the process:
As you can see the engine is expecting the text to be perpendicular to the edge of the image.
If you are done with all the necessary image processing, then try this; it may be helpful for you:
CGSize size = [image size];
int width = size.width;
int height = size.height;

uint32_t *_pixels = (uint32_t *) malloc(width * height * sizeof(uint32_t));
if (!_pixels) {
    return; // Invalid image
}

// Clear the pixels so any transparency is preserved
memset(_pixels, 0, width * height * sizeof(uint32_t));

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

// Create a context with RGBA _pixels
CGContextRef context = CGBitmapContextCreate(_pixels, width, height, 8, width * sizeof(uint32_t), colorSpace,
                                             kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);

// Paint the bitmap into our context, which will fill in the _pixels array
CGContextDrawImage(context, CGRectMake(0, 0, width, height), [image CGImage]);

// We're done with the context and color space
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);

_tesseract->SetImage((const unsigned char *) _pixels, width, height, sizeof(uint32_t), width * sizeof(uint32_t));
_tesseract->SetVariable("tessedit_char_whitelist", ".#0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz/-!");
_tesseract->SetVariable("tessedit_consistent_reps", "0");

char *utf8Text = _tesseract->GetUTF8Text();
NSString *str = nil;
if (utf8Text) {
    str = [NSString stringWithUTF8String:utf8Text];
}

// GetUTF8Text allocates with new[]; release both buffers once recognition is done.
delete[] utf8Text;
free(_pixels);

Looking for a simple pixel drawing method in ios (iphone, ipad)

I have a simple drawing issue. I have prepared a two-dimensional array which holds an animated wave motion. The array is updated every 1/10th of a second (this can be changed by the user). After the array is updated I want to display it as a two-dimensional image, with each array value shown as a pixel with a colour range from 0 to 255.
Any pointers on how to do this most efficiently...
Appreciate any help on this...
KAS
If it's just a greyscale then the following (coded as I type, probably worth checking for errors) should work:
CGDataProviderRef dataProvider =
CGDataProviderCreateWithData(NULL, pointerToYourData, width*height, NULL);
CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceGray();
CGImageRef inputImage = CGImageCreate( width, height,
8, 8, width,
colourSpace,
kCGBitmapByteOrderDefault,
dataProvider,
NULL, NO,
kCGRenderingIntentDefault);
CGDataProviderRelease(dataProvider);
CGColorSpaceRelease(colourSpace);
UIImage *image = [UIImage imageWithCGImage:inputImage];
CGImageRelease(inputImage);
someImageView.image = image;
That'd be for a one-shot display, assuming you didn't want to write a custom UIView subclass (which is worth the effort only if performance is a problem, probably).
My understanding from the docs is that the data provider can be created just once for the lifetime of your C buffer. I don't think that's true of the image, but if you created a CGBitmapContext to wrap your buffer rather than a provider and an image, that would safely persist and you could use CGBitmapContextCreateImage to get a CGImageRef to be moving on with. It's probably worth benchmarking both ways around if it's an issue.
EDIT: so the alternative way around would be:
// get a context from your C buffer; this is now something
// CoreGraphics could draw to...
CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceGray();
CGContextRef context =
CGBitmapContextCreate(pointerToYourData,
width, height,
8, width,
colourSpace,
kCGBitmapByteOrderDefault);
CGColorSpaceRelease(colourSpace);
// get an image of the context, which is something
// CoreGraphics can draw from...
CGImageRef image = CGBitmapContextCreateImage(context);
/* wrap in a UIImage, push to a UIImageView, as before, remember
to clean up 'image' */
CoreGraphics copies things about very lazily, so neither of these solutions should be as costly as the multiple steps imply.
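As a concrete (hedged) Swift sketch of that second approach driving a UIImageView from a timer: the 256x256 buffer size, the 0.1-second interval and the class name are assumptions, and the wave-update step is left as a placeholder.
import UIKit

// Sketch of the refresh loop the answer describes: wrap the wave buffer in a
// greyscale bitmap context once, then re-snapshot it on every tick.
final class WaveView: UIImageView {
    private let width = 256, height = 256
    private var buffer: UnsafeMutablePointer<UInt8>
    private var context: CGContext?
    private var timer: Timer?

    init() {
        buffer = UnsafeMutablePointer<UInt8>.allocate(capacity: width * height)
        super.init(frame: CGRect(x: 0, y: 0, width: width, height: height))
        let grey = CGColorSpaceCreateDeviceGray()
        // One 8-bit grey channel per pixel, no alpha, one row = width bytes.
        context = CGContext(data: buffer, width: width, height: height,
                            bitsPerComponent: 8, bytesPerRow: width,
                            space: grey, bitmapInfo: CGImageAlphaInfo.none.rawValue)
        timer = Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { [weak self] _ in
            self?.refresh()
        }
    }

    required init?(coder: NSCoder) { fatalError("not supported in this sketch") }

    private func refresh() {
        // ...update `buffer` with the next frame of the wave here...
        if let cgImage = context?.makeImage() {
            image = UIImage(cgImage: cgImage)
        }
    }

    deinit {
        timer?.invalidate()
        buffer.deallocate()
    }
}
The bitmap context wraps the buffer once, so each tick only pays for makeImage() and the image-view update, which matches the lazy-copy point made above.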
