CIImage to CMSampleBufferRef conversion - iOS

I am using captureStillImageAsynchronouslyFromConnection: to get an image sample buffer from the camera. After that I run the image through OpenGL (on the GPU) to apply filters, but unfortunately the iPhone 4 camera produces output larger than the GPU's maximum texture size.
Brad Larson's explanation: The iPhone 4 is a special case, in that it can take photos large enough (2592x1936) that they just exceed the maximum texture size of the GPU on those devices (2048x2048). This causes the processing to fail, currently. All other devices either don't take photos that large, or support larger texture sizes (the iPad 2, iPad 3, and iPhone 4S support these larger sizes).
So the code I have scales down the image, but on the iPhone 4 I then need to create a CMSampleBufferRef from the resized result to feed back into the capture process ... does anyone know how to get a CMSampleBufferRef from a CIImage?
Objective-C
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:CMSampleBufferGetImageBuffer(sampleBuffer)
options:[NSDictionary dictionaryWithObjectsAndKeys:[NSNull null], kCIImageColorSpace, nil]];
ciImage = [[ciImage imageByApplyingTransform:myScaleTransform] imageByCroppingToRect:myRect];
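One possible approach (a sketch only, not verified against this exact capture pipeline): render the scaled CIImage into a freshly allocated CVPixelBuffer, then wrap that pixel buffer in a new CMSampleBuffer, copying the timing information from the original buffer. The target size and pixel format below are assumptions and need to match what your downstream code expects.
// Sketch: render the scaled CIImage into a new CVPixelBuffer, then wrap it in a CMSampleBuffer.
// The 1936x1936 target size and BGRA pixel format are assumptions for illustration only.
CVPixelBufferRef outputBuffer = NULL;
NSDictionary *attributes = @{ (id)kCVPixelBufferIOSurfacePropertiesKey : @{} };
CVPixelBufferCreate(kCFAllocatorDefault, 1936, 1936, kCVPixelFormatType_32BGRA,
                    (__bridge CFDictionaryRef)attributes, &outputBuffer);

CIContext *context = [CIContext contextWithOptions:nil];
[context render:ciImage toCVPixelBuffer:outputBuffer];

// Reuse the timing information from the original sample buffer.
CMSampleTimingInfo timing = kCMTimingInfoInvalid;
CMSampleBufferGetSampleTimingInfo(sampleBuffer, 0, &timing);

CMVideoFormatDescriptionRef videoInfo = NULL;
CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, outputBuffer, &videoInfo);

CMSampleBufferRef scaledSampleBuffer = NULL;
CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, outputBuffer, true, NULL, NULL,
                                   videoInfo, &timing, &scaledSampleBuffer);

CFRelease(videoInfo);
CVPixelBufferRelease(outputBuffer);
// Use scaledSampleBuffer, then CFRelease() it when finished.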

Related

CIContext, iOS 9 and memory issues

So I recently updated iOS to 9.0.2.
I've been using RosyWriter, Apple's example to capture and filter video frames using CIFilter and CIContext.
And it worked great in iOS 7 and 8.
It all broke down in iOS 9.
Now the memory report for both RosyWriter and my app climbs steadily, and eventually the app crashes.
I call [_ciContext render:toCVPixelBuffer:bounds:colorSpace:] and [CIImage imageWithCVPixelBuffer:]. It looks like CIContext has an internal memory leak when I call these two methods.
After spending about four days on it, I found that creating a new CIContext instance every time I want to render a buffer, and releasing it afterwards, keeps the memory down. But this is not a real solution because it is far too expensive.
Does anyone else have this problem? Is there a way around it?
Thanks.
I can confirm that this memory leak still exists on iOS 9.2. (I've also posted on the Apple Developer Forum.)
I get the same memory leak on iOS 9.2. I've tested dropping the EAGLContext by using MetalKit and MTLDevice, and I've tested different CIContext methods like drawImage, createCGImage and render, but nothing seems to work.
It is very clear that this is a bug as of iOS 9. Try it out yourself by downloading the example app from Apple (see below), run the same project on a device with iOS 8.4 and then on a device with iOS 9.2, and pay attention to the memory gauge in Xcode.
Download
https://developer.apple.com/library/ios/samplecode/AVBasicVideoOutput/Introduction/Intro.html#//apple_ref/doc/uid/DTS40013109
Add this to APLEAGLView.h at line 20:
#property (strong, nonatomic) CIContext* ciContext;
Replace APLEAGLView.m line 118 with this:
[EAGLContext setCurrentContext:_context];
_ciContext = [CIContext contextWithEAGLContext:_context];
And finally replace APLEAGLView.m lines 341-343 with this:
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
@autoreleasepool
{
    // Filter the current pixel buffer in place with a Gaussian blur.
    CIImage *sourceImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"
                                  keysAndValues:kCIInputImageKey, sourceImage, nil];
    CIImage *filteredImage = filter.outputImage;
    [_ciContext render:filteredImage toCVPixelBuffer:pixelBuffer];
}
glBindRenderbuffer(GL_RENDERBUFFER, _colorBufferHandle);
Just use the code below when you use the context:
context = [CIContext contextWithOptions:nil];
and also release the CGImageRef object:
CGImageRelease(<CGImageRef IMAGE OBJECT>);
Krafter,
Are you writing custom filters? I'm finding that the dod (domain of definition) works differently in iOS 9.
It looks like if dod.extent.origin.x and dod.extent.origin.y are not close to whole numbers stored as doubles (e.g. 31.0, 333.0), then the extent.size of the output image will be (dod.extent.size.width + 1.0, dod.extent.size.height + 1.0). Before iOS 9.0 the extent.size of the output image was always dod.extent.size. So if you are cycling the same image through a custom CIFilter over and over, and your dod.extent isn't close to nice, even whole numbers, you get an image whose dimensions increase by 1.0 each time the filter runs, and that might produce a memory profile like yours.
I'm assuming this is a bug in iOS 9, because the size of the output image should always match the size of dod.
My setup: iOS v9.2, iPhone 5C and iPad 2, Xcode 7.2, Obj-C
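If that is what is happening, one possible workaround (my own sketch, not part of the original answer) is to crop the filter output back to an integral rect before feeding it into the next pass, so the extent cannot creep by a pixel each time:
// Sketch: keep a repeatedly-filtered CIImage from growing by a pixel per pass.
// filteredImage is assumed to be the output of the custom CIFilter.
CGRect integralExtent = CGRectIntegral(filteredImage.extent);
CIImage *stableImage = [filteredImage imageByCroppingToRect:integralExtent];
// Feed stableImage (not filteredImage) into the next filter pass.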

glError 0x0501 when loading a large texture with OpenGL ES on the iPhone 4

I get this error when I try to load a PVR image on the device. It works fine on the iPhone 5s, 5, 4s and on the iPad, but not on the iPhone 4. My PVR image is 4096 wide by 2048 high.
Cocos2d: cocos2d: TexturePVR: Error uploading compressed texture level: 0 . glError: 0x0501
Cocos2d: cocos2d: Couldn't load PVR image /var/mobile/Applications/7CF6C347-8B63-4C1E-857A-41F48C8ACBEF/Race.app/Images/BackGround/bg1.pvr.ccz
Cocos2d: cocos2d: Couldn't add PVRImage:Images/BackGround/bg1.pvr.ccz in CCTextureCache
I got this from this link:
Supported sizes are:
iPhone 3GS / iPhone 4 / iPad 1 / iPod touch 3 / 4: 2048x2048
iPad 2 / 3 / 4 / Mini / iPhone 4S / 5 / iPod touch 5: 4096x4096
By the way, you can import the texture at 4096x4096 and turn mipmaps on; this would automatically use a smaller-resolution texture on older devices.
But how do I turn on mipmaps, and what does that do?
// support mipmap filtering
sprite->getTexture()->generateMipmap();
ccTexParams texParams = { GL_LINEAR_MIPMAP_LINEAR, GL_LINEAR, GL_CLAMP_TO_EDGE, GL_CLAMP_TO_EDGE };
sprite->getTexture()->setTexParameters(&texParams);
Well, you are trying to create a texture bigger than GL_MAX_TEXTURE_SIZE, so it will fail. There is no way around that.
Btw, you can import texture at 4096x4096 and turn mip maps on, this would automatically use smaller resolution texture on older devices.
No, you cannot. That is not what mipmaps are for. They are used when sampling the texture to avoid artifacts caused by sparse sampling when the texture is displayed at a smaller size. You are still trying to create a 4096x4096 texture, which simply will not work on that device.
To solve this, you should either limit the texture size to the minimum supported size of all the target devices you wish to support, or dynamically query the GL_MAX_TEXTURE_SIZE limit (see the sketch after this answer) and downscale the data before you upload it to GL, or provide different resolutions to choose from.
If you absolutely need that many pixels, you could also use a texture tiling approach and split the image into multiple tiles, but that will probably require major changes to your rendering code, and it might also lead to performance issues.
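For completeness, a minimal sketch of the runtime query (plain OpenGL ES; it assumes a current EAGLContext):
// Sketch: query the maximum texture dimension supported by the current GL context.
GLint maxTextureSize = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize);
// On an iPhone 4 this reports 2048; on an iPhone 4S / iPad 2 and later it reports 4096.
// If the source image is larger than maxTextureSize, downscale it (or pick a smaller
// pre-built asset) before creating the texture.
NSLog(@"GL_MAX_TEXTURE_SIZE = %d", maxTextureSize);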

What is maximum resolution of an image that can be loaded in UIImageView in iOS Devices?

What is the maximum resolution of a UIImage that can be set on a UIImageView using the setImage: method on iOS devices?
You can load an image of any size into a UIImageView as long as there is free memory.
But it is not the UIImageView that takes up the memory, it is the UIImage.
As Apple states in the UIImage documentation, you should not load too big an image:
You should avoid creating UIImage objects that are greater than 1024 x 1024 in size. Besides the large amount of memory such an image would consume, you may run into problems when using the image as a texture in OpenGL ES or when drawing the image to a view or layer. This size restriction does not apply if you are performing code-based manipulations, such as resizing an image larger than 1024 x 1024 pixels by drawing it to a bitmap-backed graphics context. In fact, you may need to resize an image in this manner (or break it into several smaller images) in order to draw it to one of your views.
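A minimal sketch of that kind of code-based downscaling, drawing the large image into a bitmap-backed context at a smaller size (the 1024x1024 target and the path and imageView variables are just placeholders for illustration):
// Sketch: downscale a large UIImage by drawing it into a bitmap-backed context.
// Aspect-ratio handling is omitted for brevity.
UIImage *largeImage = [UIImage imageWithContentsOfFile:path];
CGSize targetSize = CGSizeMake(1024, 1024);

UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0);
[largeImage drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

imageView.image = scaledImage;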

Correct image size for iPhone development

I am working on an iPhone game in SpriteKit. For a while I have been using placeholder graphics, but as my gameplay begins to shape up, I think it's time to look into creating my own graphics.
My level is designed in Tiled, so I would like to know whether it is OK to make each tile 16x16 and the HD version (for the Retina iPhone) 32x32.
Also, what is the difference when drawing images for the iPhone versus the Retina iPhone?
Thank you
That's correct. Retina graphics are twice the resolution of non-Retina graphics. The easiest way to support both Retina and non-Retina displays is to make two copies of each graphic you use. Use the name imagename.png for the non-Retina image and imagename@2x.png for the Retina image. When you want to load the image you can use this line:
UIImage *image = [UIImage imageNamed:@"imagename"];
iOS will automatically select the correct image.

Any way to encode a PNG faster than UIImagePNGRepresentation?

I'm generating a bunch of tiles for CATiledLayer. It takes about 11 seconds to generate 120 tiles at 256 x 256 with 4 levels of detail on an iPhone 4S. The image itself fits within 2048 x 2048.
My bottleneck is UIImagePNGRepresentation. It takes about 0.10-0.15 seconds to generate every 256 x 256 image.
I've tried generating multiple tiles on different background queues, but this only cuts it down to about 9-10 seconds.
I've also tried using the ImageIO framework with code like this:
- (void)writeCGImage:(CGImageRef)image toURL:(NSURL *)url andOptions:(CFDictionaryRef)options
{
    CGImageDestinationRef myImageDest = CGImageDestinationCreateWithURL((__bridge CFURLRef)url, (__bridge CFStringRef)@"public.png", 1, nil);
    CGImageDestinationAddImage(myImageDest, image, options);
    CGImageDestinationFinalize(myImageDest);
    CFRelease(myImageDest);
}
While this produces smaller PNG files (win!), it takes about 13 seconds, 2 seconds more than before.
Is there any way to encode a PNG image from CGImage faster? Perhaps a library that makes use of NEON ARM extension (iPhone 3GS+) like libjpeg-turbo does?
Is there perhaps a better format than PNG for saving tiles that doesn't take up a lot of space?
The only viable option I've been able to come up with is to increase the tile size to 512 x 512. This cuts the encoding time by half. Not sure what that will do to my scroll view though. The app is for iPad 2+, and only supports iOS 6 (using iPhone 4S as a baseline).
It turns out the reason UIImagePNGRepresentation was performing so poorly is that it was decompressing the original image every time, even though I thought I was creating a new image with CGImageCreateWithImageInRect.
Profiling in Instruments showed the time going into _cg_jpeg_read_scanlines and decompress_onepass.
I was force-decompressing the image with this:
UIImage *image = [UIImage imageWithContentsOfFile:path];
UIGraphicsBeginImageContext(CGSizeMake(1, 1));
[image drawAtPoint:CGPointZero];
UIGraphicsEndImageContext();
The timing of this was about 0.10 seconds, almost equivalent to the time taken by each UIImagePNGRepresentation call.
There are numerous articles on the internet that recommend drawing as a way of force-decompressing an image.
There's an article on Cocoanetics, "Avoiding Image Decompression Sickness", which provides an alternate way of loading the image:
NSDictionary *dict = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES]
forKey:(id)kCGImageSourceShouldCache];
CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)[[NSURL alloc] initFileURLWithPath:path], NULL);
CGImageRef cgImage = CGImageSourceCreateImageAtIndex(source, 0, (__bridge CFDictionaryRef)dict);
UIImage *image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CFRelease(source);
And now the same process takes about 3 seconds! Using GCD to generate tiles in parallel reduces the time more significantly.
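For what it is worth, a rough sketch of that parallel approach using GCD (the tileCount, renderTileAtIndex, and urlForTileAtIndex names are hypothetical placeholders, not from the original project):
// Sketch: encode tiles in parallel with dispatch_apply on a concurrent queue.
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_apply(tileCount, queue, ^(size_t i) {
    @autoreleasepool {
        UIImage *tile = renderTileAtIndex(i);             // draw one 256 x 256 tile
        NSData *pngData = UIImagePNGRepresentation(tile); // encode it to PNG
        [pngData writeToURL:urlForTileAtIndex(i) atomically:YES];
    }
});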
The writeCGImage function above takes about 5 seconds. Since the file sizes are smaller, I suspect the zlib compression is at a higher level.
