I added a new iOS 8 Photo Editing extension to my existing photo editing app. The app has quite a complex filter pipeline and needs to keep multiple textures in memory at a time. Still, on devices with 1 GB of RAM I can easily process 8 MP images.
In the extension, however, the memory constraints are much tighter. I had to scale the image down to under 2 MP to get it processed without crashing the extension. I also noticed that the memory problems only occur when no debugger is attached to the extension; with one attached, everything works fine.
I did some experiments. I modified a memory budget test app to work within an extension and came up with the following results (showing the amount of RAM in MB that can be allocated before crashing):
╔═══════════════════════╦═════╦═══════════╦══════════════════╗
║ Device ║ App ║ Extension ║ Ext. (+Debugger) ║
╠═══════════════════════╬═════╬═══════════╬══════════════════╣
║ iPhone 6 Plus (8.0.2) ║ 646 ║ 115 ║ 645 ║
║ iPhone 5 (8.1 beta 2) ║ 647 ║ 97 ║ 646 ║
║ iPhone 4s (8.0.2) ║ 305 ║ 97 ║ 246 ║
╚═══════════════════════╩═════╩═══════════╩══════════════════╝
A few observations:
With the debugger attached, the extension behaves like the "normal" app.
Even though the 4s has only half the total memory (512 MB) of the other devices, it gets the same ~100 MB from the system for the extension.
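For reference, the allocation loop in that test app is essentially the following (a minimal sketch of the technique rather than the exact code I ran; the 1 MB step size is arbitrary):

static const NSUInteger kChunkSize = 1024 * 1024; // allocate in 1 MB steps
NSMutableArray *chunks = [NSMutableArray array];
NSUInteger allocatedMB = 0;
while (true) {
    // dataWithLength: zero-fills the buffer, so the pages are actually dirtied
    // and count against the process until the system kills it.
    NSMutableData *chunk = [NSMutableData dataWithLength:kChunkSize];
    if (chunk == nil) break;
    [chunks addObject:chunk];
    allocatedMB++;
    NSLog(@"Allocated %lu MB", (unsigned long)allocatedMB);
}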
Now my question: how am I supposed to work with this small amount of memory in a Photo Editing extension? One texture containing an 8 MP (camera resolution) RGBA image eats ~31 MB alone (3264 × 2448 pixels × 4 bytes per pixel ≈ 31 MB). What is the point of this extension mechanism if I have to tell the user that full-size editing is only possible in the main app?
Did one of you also reach that barrier? Did you find a solution to circumvent this constraint?
I am developing a Photo Editing extension for my company, and we are facing the same issue. Our internal image-processing engine needs more than 150 MB to apply certain effects to an image, and that is not even counting panorama images, which take around 100 MB of memory per copy.
We found only two workarounds, but not an actual solution.
Scale the image down, then apply the filter (see the sketch after this list). This requires far less memory, but the resulting image quality is poor. At least the extension will not crash.
or
Use Core Image or Metal for image processing. When we analyzed Apple's sample Photo Editing extension, which uses Core Image, we found it can handle very large images and even panoramas without quality or resolution loss. In fact, we were not able to crash the extension by loading very large images; the sample code handles panoramas with a memory peak of about 40 MB, which is pretty impressive.
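For workaround 1, the downscaling itself can be done with ImageIO so the full-resolution bitmap never has to be decoded first (a rough sketch; imageURL and the 2048-pixel limit are just example values):

CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)imageURL, NULL);
NSDictionary *options = @{ (id)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
                           (id)kCGImageSourceCreateThumbnailWithTransform : @YES,   // honor EXIF orientation
                           (id)kCGImageSourceThumbnailMaxPixelSize : @2048 };       // longest edge, in pixels
CGImageRef scaled = CGImageSourceCreateThumbnailAtIndex(source, 0, (__bridge CFDictionaryRef)options);
// ... run the filter pipeline on `scaled` instead of the full-resolution image ...
CGImageRelease(scaled);
CFRelease(source);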
According to Apple's App Extension Programming Guide (page 55, chapter "Handling Memory Constraints"), the answer to memory pressure in extensions is to review your image-processing code. We are currently porting our image-processing engine to Core Image, and the results are far better than with our previous engine.
I hope I could help a bit.
Marco Paiva
If you're using a Core Image "recipe," you needn't worry about memory at all, just as Marco said. No image on which Core Image filters are applied is rendered until the image object is returned to the view.
That means you could apply a million filters to a highway billboard-sized photo and memory would not be an issue. The filter specifications are simply compiled into a convolution or kernel, which always comes down to the same size, no matter what.
Misunderstandings about memory management and overflow and the like can be easily remedied by orienting yourself with the core concepts of your chosen programming language, development environment and hardware platform.
Apple's documentation introducing Core Image filter programming is sufficient for this; if you'd like specific references to portions of the documentation that I believe pertain specifically to your concerns, just ask.
Here is how you apply two consecutive convolution kernels in Core Image, with the "intermediary result" between them:
- (CIImage *)outputImage
{
    const double g = self.inputIntensity.doubleValue;

    // First pass: vertical edge kernel, scaled by the intensity parameter.
    const CGFloat weights_v[] = { -1*g, 0*g, 1*g,
                                  -1*g, 0*g, 1*g,
                                  -1*g, 0*g, 1*g };
    CIImage *result = [CIFilter filterWithName:@"CIConvolution3X3" keysAndValues:
                           @"inputImage", self.inputImage,
                           @"inputWeights", [CIVector vectorWithValues:weights_v count:9],
                           @"inputBias", [NSNumber numberWithFloat:1.0],
                           nil].outputImage;

    // Crop the intermediary result back to the original extent; the
    // convolution otherwise grows the image's extent at the borders.
    CGRect rect = [self.inputImage extent];
    rect.origin = CGPointZero;
    CGRect cropRectLeft = CGRectMake(0, 0, rect.size.width, rect.size.height);
    CIVector *cropRect = [CIVector vectorWithX:rect.origin.x Y:rect.origin.y Z:rect.size.width W:rect.size.height];
    result = [result imageByCroppingToRect:cropRectLeft];
    result = [CIFilter filterWithName:@"CICrop" keysAndValues:@"inputImage", result, @"inputRectangle", cropRect, nil].outputImage;

    // Second pass: horizontal edge kernel.
    const CGFloat weights_h[] = { -1*g, -1*g, -1*g,
                                   0*g,  0*g,  0*g,
                                   1*g,  1*g,  1*g };
    result = [CIFilter filterWithName:@"CIConvolution3X3" keysAndValues:
                  @"inputImage", result,
                  @"inputWeights", [CIVector vectorWithValues:weights_h count:9],
                  @"inputBias", [NSNumber numberWithFloat:1.0],
                  nil].outputImage;

    result = [result imageByCroppingToRect:cropRectLeft];
    result = [CIFilter filterWithName:@"CICrop" keysAndValues:@"inputImage", result, @"inputRectangle", cropRect, nil].outputImage;

    result = [CIFilter filterWithName:@"CIColorInvert" keysAndValues:kCIInputImageKey, result, nil].outputImage;
    return result;
}
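The memory cost only shows up when the recipe is finally rendered, for example like this (a minimal sketch; how you create and reuse the CIContext is up to you):

CIContext *context = [CIContext contextWithOptions:nil];
CIImage *recipe = [self outputImage];
// This is the point where pixels are actually produced and memory is consumed.
CGImageRef rendered = [context createCGImage:recipe fromRect:recipe.extent];
UIImage *finalImage = [UIImage imageWithCGImage:rendered];
CGImageRelease(rendered);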
Related
So I recently updated iOS to 9.0.2.
I've been using RosyWriter, Apple's example to capture and filter video frames using CIFilter and CIContext.
And it worked great in iOS 7 and 8.
It all broke down in iOS 9.
Now the memory report for both RosyWriter and my app climbs steadily (the Xcode memory-gauge screenshot is not included here), and eventually the app crashes.
I call [_ciContext render:toCVPixelBuffer:bounds:colorSpace:] and [CIImage imageWithCVPixelBuffer:]. It looks like CIContext has an internal memory leak when I call these two methods.
After spending about four days on this, I found that creating a new CIContext instance for every buffer I render, and releasing it afterwards, keeps the memory down. But that is not a solution, because creating a context is far too expensive to do per frame.
Anyone else has this problem? Is there a way around this?
Thanks.
I can confirm that this memory leak still exists on iOS 9.2. (I've also posted on the Apple Developer Forum.)
I get the same memory leak on iOS 9.2. I've tried dropping the EAGLContext in favour of MetalKit and MTLDevice, and I've tried different CIContext methods such as drawImage, createCGImage and render, but nothing seems to help.
It is very clear that this is a bug as of iOS 9. Try it yourself: download the example app from Apple (see below), run the same project on a device with iOS 8.4 and then on a device with iOS 9.2, and watch the memory gauge in Xcode.
Download
https://developer.apple.com/library/ios/samplecode/AVBasicVideoOutput/Introduction/Intro.html#//apple_ref/doc/uid/DTS40013109
Add this at APLEAGLView.h:20
#property (strong, nonatomic) CIContext* ciContext;
Replace APLEAGLView.m:118 with this
[EAGLContext setCurrentContext:_context];
_ciContext = [CIContext contextWithEAGLContext:_context];
And finally replace APLEAGLView.m:341-343 with this
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
@autoreleasepool
{
    CIImage* sourceImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    CIFilter* filter = [CIFilter filterWithName:@"CIGaussianBlur" keysAndValues:kCIInputImageKey, sourceImage, nil];
    CIImage* filteredImage = filter.outputImage;
    [_ciContext render:filteredImage toCVPixelBuffer:pixelBuffer];
}
glBindRenderbuffer(GL_RENDERBUFFER, _colorBufferHandle);
Just use the code below after you are done with the context:
context = [CIContext contextWithOptions:nil];
and also release every CGImageRef object you create:
CGImageRelease(<CGImageRef IMAGE OBJECT>);
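Put together, the per-frame pattern looks roughly like this (a sketch; renderFrame:usingContext: is a made-up method name):

- (void)renderFrame:(CVPixelBufferRef)pixelBuffer usingContext:(CIContext *)context
{
    @autoreleasepool {
        CIImage *input = [CIImage imageWithCVPixelBuffer:pixelBuffer];
        CGImageRef cgImage = [context createCGImage:input fromRect:input.extent];
        // ... use cgImage ...
        CGImageRelease(cgImage);   // release every CGImageRef you create
    } // intermediate CIImage/CIFilter objects are drained here, every frame
}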
Krafter,
Are you writing custom filters? I'm finding that the DOD (domain of definition) works differently in iOS 9.
It looks like if dod.extent.origin.x and dod.extent.origin.y are not close to whole numbers stored as doubles (e.g. 31.0, 333.0), then the extent.size of the output image will be (dod.extent.size.width + 1.0, dod.extent.size.height + 1.0). Before iOS 9.0 the extent.size of the output image was always dod.extent.size. So if you are cycling the same image through a custom CIFilter over and over, and your dod.extent does not sit on nice, whole numbers, you get an image whose dimensions grow by 1.0 each time the filter runs, and that might produce a memory profile like the one you are seeing.
I'm assuming this is a bug in iOS 9, because the size of the output image should always match the size of dod.
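If that is what is happening to you, a workaround (just a sketch, not an official fix; dodRect stands for the extent you expect) is to crop the output back to an integral rect before feeding it into the next pass:

CIImage *output = filter.outputImage;
CGRect integralExtent = CGRectIntegral(dodRect);        // snap to whole-number coordinates
output = [output imageByCroppingToRect:integralExtent]; // stops the extent growing by 1 px per pass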
My setup: iOS v9.2, iPhone 5C and iPad 2, Xcode 7.2, Obj-C
I am developing an iOS application which takes images from the iPhone camera using AVCaptureDevice.
The captured images seem to have a pixel density (ppi) of 72 ppi.
1. I need to send these images for further processing to a backend cloud server, which expects the images to have a minimum pixel density of 300 ppi.
2. I also see that images taken with the native iPhone 5 camera have a pixel density of 72 ppi.
3. I need to know whether there are any settings in AVFoundation's capture APIs to set the pixel density of the captured images, or any way to increase it from 72 to 300 ppi.
Any help would be appreciated.
What's the difference between a 3264 by 2448 pixel image at 72 ppi and a 3264 by 2448 pixel image at 300 ppi? Hardly any, except for a minor difference in the metadata. And I don't understand why your backend service insists on a minimum pixel density.
The pixel density (or ppi) becomes relevant when you print or display an image at a specific size or place it in a document using a specific size.
Anyway, there is no good reason to set a specific ppi at capture time. That's probably why Apple uses the default density of 72 ppi. And I don't know of any way to change it at capture time.
However, you can change it at a later time by modifying the EXIF data of a JPEG file, e.g. using libexif.
As @Codo has pointed out, the pixel density is irrelevant until the image is being output (to a display, a printer, a RIP, or whatever). It's metadata, not image data. However, if you're dealing with a third-party service that doesn't have the wit to understand this, you need to edit the image metadata after you have captured the image and before you save it.
This is how:
// stillImageOutput is assumed to be your configured AVCaptureStillImageOutput instance.
[stillImageOutput captureStillImageAsynchronouslyFromConnection:stillImageConnection
                                               completionHandler:^(CMSampleBufferRef imageDataSampleBuffer,
                                                                   NSError *error) {
    // Copy the metadata already attached to the sample buffer.
    CFDictionaryRef metadataDict = CMCopyDictionaryOfAttachments(kCFAllocatorDefault,
                                                                 imageDataSampleBuffer,
                                                                 kCMAttachmentMode_ShouldPropagate);
    NSMutableDictionary *metadata = [[NSMutableDictionary alloc]
                                     initWithDictionary:(__bridge NSDictionary *)metadataDict];
    CFRelease(metadataDict);

    // Override the resolution entries in the TIFF dictionary with 300 dpi.
    NSMutableDictionary *tiffMetadata = [[NSMutableDictionary alloc] init];
    [tiffMetadata setObject:[NSNumber numberWithInt:300]
                     forKey:(NSString *)kCGImagePropertyTIFFXResolution];
    [tiffMetadata setObject:[NSNumber numberWithInt:300]
                     forKey:(NSString *)kCGImagePropertyTIFFYResolution];
    [metadata setObject:tiffMetadata forKey:(NSString *)kCGImagePropertyTIFFDictionary];
    .
    .
    .
}];
Then feed metadata into writeImageToSavedPhotosAlbum:metadata:completionBlock:, writeImageDataToSavedPhotosAlbum:metadata:completionBlock:, or save into your private app folder, depending on your requirements.
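With ALAssetsLibrary, for example, the save looks roughly like this (a sketch; jpegData stands for whatever NSData you produced from the sample buffer):

ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
[library writeImageDataToSavedPhotosAlbum:jpegData
                                 metadata:metadata
                          completionBlock:^(NSURL *assetURL, NSError *error) {
    if (error) NSLog(@"Saving failed: %@", error);
}];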
I'm generating a bunch of tiles for CATiledLayer. It takes about 11 seconds to generate 120 tiles at 256 x 256 with 4 levels of detail on an iPhone 4S. The image itself fits within 2048 x 2048.
My bottleneck is UIImagePNGRepresentation. It takes about 0.10-0.15 seconds to generate every 256 x 256 image.
I've tried generating multiple tiles on different background queues, but this only cuts it down to about 9-10 seconds.
I've also tried using the ImageIO framework with code like this:
- (void)writeCGImage:(CGImageRef)image toURL:(NSURL *)url andOptions:(CFDictionaryRef)options
{
    CGImageDestinationRef myImageDest = CGImageDestinationCreateWithURL((__bridge CFURLRef)url, (__bridge CFStringRef)@"public.png", 1, nil);
    CGImageDestinationAddImage(myImageDest, image, options);
    CGImageDestinationFinalize(myImageDest);
    CFRelease(myImageDest);
}
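I call it like this (the tile path and image variables are just examples):

NSURL *tileURL = [NSURL fileURLWithPath:[tilesDirectory stringByAppendingPathComponent:@"tile_0_0.png"]];
[self writeCGImage:tileImage toURL:tileURL andOptions:NULL];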
While this produces smaller PNG files (win!), it takes about 13 seconds, 2 seconds more than before.
Is there any way to encode a PNG image from CGImage faster? Perhaps a library that makes use of NEON ARM extension (iPhone 3GS+) like libjpeg-turbo does?
Is there perhaps a better format than PNG for saving tiles that doesn't take up a lot of space?
The only viable option I've been able to come up with is to increase the tile size to 512 x 512. This cuts the encoding time by half. Not sure what that will do to my scroll view though. The app is for iPad 2+, and only supports iOS 6 (using iPhone 4S as a baseline).
It turns out the reason UIImagePNGRepresentation was performing so poorly is that it was decompressing the original image every time, even though I thought I was creating a new image with CGImageCreateWithImageInRect.
You can see the results from Instruments (the trace screenshot is not included here): notice _cg_jpeg_read_scanlines and decompress_onepass.
I was force-decompressing the image with this:
UIImage *image = [UIImage imageWithContentsOfFile:path];
UIGraphicsBeginImageContext(CGSizeMake(1, 1));
[image drawAtPoint:CGPointZero];
UIGraphicsEndImageContext();
The timing of this was about 0.10 seconds, almost equivalent to the time taken by each UIImagePNGRepresentation call.
There are numerous articles around the internet that recommend drawing as a way to force an image to decompress.
There's an article on Cocoanetics, "Avoiding Image Decompression Sickness", which provides an alternative way of loading the image:
NSDictionary *dict = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES]
forKey:(id)kCGImageSourceShouldCache];
CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)[[NSURL alloc] initFileURLWithPath:path], NULL);
CGImageRef cgImage = CGImageSourceCreateImageAtIndex(source, 0, (__bridge CFDictionaryRef)dict);
UIImage *image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CFRelease(source);
And now the same process takes about 3 seconds! Generating tiles in parallel with GCD reduces the time even further.
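The parallel part is nothing fancy, roughly this (a sketch; renderAndSaveTileAtIndex: is a placeholder for your own per-tile code):

dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_apply(tileCount, queue, ^(size_t i) {
    @autoreleasepool {
        [self renderAndSaveTileAtIndex:i]; // each tile loads, crops and encodes independently
    }
});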
The writeCGImage function above takes about 5 seconds. Since the file sizes are smaller, I suspect the zlib compression is at a higher level.
From what I've read elsewhere, Apple recommends multiple versions of every graphical asset, so quality will be retained between pre-iPhone 4, iPhone 4 (with the retina display), and the iPad. But I'm using a technique that only requires one asset for all three cases.
I make each graphic the size I need for the iPhone 4 and the iPad, say a cat at 500x500 pixels. I name it myCat@2x.png. When I read it in for the iPhone:
CGRect catFrame = CGRectMake(0.0f, 0.0f, 250.0f, 250.0f);
UIImageView *theCat = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"myCat"]];
theCat.frame = catFrame;
[self.view addSubview:theCat];
[theCat release];
for the iPad, I do exactly the same thing, except for:
CGRect catFrame = CGRectMake(0.0f, 0.0f, 500.0f, 500.0f);
This seems to work fine in all three cases, and greatly reduces the number (and size) of graphic files. Is there anything wrong with this technique?
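For reference, the only device-specific part is choosing the frame size, roughly like this (a sketch of my setup, using the values from the example above):

BOOL isPad = (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad);
CGFloat side = isPad ? 500.0f : 250.0f;   // the @2x asset itself is 500x500 pixels
CGRect catFrame = CGRectMake(0.0f, 0.0f, side, side);
UIImageView *theCat = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"myCat"]];
theCat.frame = catFrame;
[self.view addSubview:theCat];
[theCat release];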
Check this: http://vimeo.com/30667638
We are releasing it soon. If you are interested in beta testing it, drop me a line.
This question has been a long time in circulation, so I will "answer" it based on my experience with the last couple of apps I've worked on.
I see no reason to have a separate image asset for retina display and non-retina display iPhones. The technique I outlined above seems to work just fine.
Probably, one will want a separate asset ("resource file") for the iPad, mostly for the change in aspect ratio of the screen. In my case (sprites for children's games) I was able to use many of the iPhone images on the iPad. The "look" was a little different, but I saved a lot of file space.
Whether this works for you will, of course, depend on the unique properties of your project.
Not all images scale well, even at 50%; dithering or fine patterns can get distorted. In general, scaling by factors of 1/2, 1/4, etc. (halving each time) gives the cleanest results, while scaling down with an advanced resampling algorithm like the one Photoshop uses produces better results than a naive resize.
So, in most cases, this can produce acceptable results.
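If you do scale down in code rather than preparing the asset in Photoshop, you can at least ask Core Graphics for its best resampling, roughly like this (a sketch; sourceImage and targetSize are placeholders):

UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0);   // 0.0 = use the device's scale
CGContextSetInterpolationQuality(UIGraphicsGetCurrentContext(), kCGInterpolationHigh);
[sourceImage drawInRect:CGRectMake(0.0f, 0.0f, targetSize.width, targetSize.height)];
UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();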