Fastest way on iOS 7+ to get CVPixelBufferRef from BGRA bytes

What is the fastest way on iOS 7+ to convert raw bytes of BGRA / UIImage data to a CVPixelBufferRef? The bytes are 4 bytes per pixel in BGRA order.
Is there any chance of a direct cast here vs. copying data into a secondary storage?
I've considered CVPixelBufferCreateWithBytes but I have a hunch it is copying memory...

You have to use CVPixelBufferCreate because CVPixelBufferCreateWithBytes will not allow fast conversion to an OpenGL texture using the Core Video texture cache. I'm not sure why this is the case, but that's the way things are at least as of iOS 8. I tested this with the profiler, and CVPixelBufferCreateWithBytes causes a texSubImage2D call to be made every time a Core Video texture is accessed from the cache.
CVPixelBufferCreate will do funny things if the width is not a multiple of 16, so if you plan on doing CPU operations on CVPixelBufferGetBaseAddress and you want the memory laid out like a CGImage or CGBitmapContext, you will need to pad your width up to the next multiple of 16, or make sure you use CVPixelBufferGetBytesPerRow and pass that to any CGBitmapContext you create.
I tested all combinations of dimensions of width and height from 16 to 2048, and as long as they were padded to the next highest multiple of 16, the memory was laid out properly.
+ (NSInteger) alignmentForPixelBufferDimension:(NSInteger)dim
{
static const NSInteger modValue = 16;
NSInteger mod = dim % modValue;
return (mod == 0 ? dim : (dim + (modValue - mod)));
}
+ (NSDictionary*) pixelBufferSurfaceAttributesOfSize:(CGSize)size
{
return @{ (NSString*)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA),
(NSString*)kCVPixelBufferWidthKey: @(size.width),
(NSString*)kCVPixelBufferHeightKey: @(size.height),
(NSString*)kCVPixelBufferBytesPerRowAlignmentKey: @(size.width * 4),
(NSString*)kCVPixelBufferExtendedPixelsLeftKey: @(0),
(NSString*)kCVPixelBufferExtendedPixelsRightKey: @(0),
(NSString*)kCVPixelBufferExtendedPixelsTopKey: @(0),
(NSString*)kCVPixelBufferExtendedPixelsBottomKey: @(0),
(NSString*)kCVPixelBufferPlaneAlignmentKey: @(0),
(NSString*)kCVPixelBufferCGImageCompatibilityKey: @(YES),
(NSString*)kCVPixelBufferCGBitmapContextCompatibilityKey: @(YES),
(NSString*)kCVPixelBufferOpenGLESCompatibilityKey: @(YES),
(NSString*)kCVPixelBufferIOSurfacePropertiesKey: @{ @"IOSurfaceCGBitmapContextCompatibility": @(YES), @"IOSurfaceOpenGLESFBOCompatibility": @(YES), @"IOSurfaceOpenGLESTextureCompatibility": @(YES) } };
}
Interestingly enough, if you ask for a texture from the Core Video cache with dimensions smaller than the padded dimensions, it will return a texture immediately. Somehow underneath it is able to reference the original texture, but with a smaller width and height.
To sum up, you cannot wrap existing memory with a CVPixelBufferRef using CVPixelBufferCreateWithBytes and still use the Core Video texture cache efficiently. You must use CVPixelBufferCreate and copy your bytes into the memory returned by CVPixelBufferGetBaseAddress, as sketched below.
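To make that concrete, here is a rough sketch of the whole round trip, assuming tightly packed BGRA input bytes and a texture cache created earlier with CVOpenGLESTextureCacheCreate(); the helper name and parameters are only illustrative, not a definitive implementation:
#import <CoreVideo/CoreVideo.h>
#import <OpenGLES/ES2/gl.h>
#import <OpenGLES/ES2/glext.h>

// Rough sketch: create a BGRA pixel buffer, copy tightly packed BGRA bytes into it
// row by row, then fetch a GL texture through the Core Video texture cache.
// srcBytes, width, height and textureCache are placeholders for your own data.
static CVPixelBufferRef CreateBGRAPixelBuffer(const uint8_t *srcBytes,
                                              size_t width, size_t height,
                                              CVOpenGLESTextureCacheRef textureCache,
                                              CVOpenGLESTextureRef *textureOut)
{
    // The attributes from pixelBufferSurfaceAttributesOfSize: above could be used
    // here instead of this minimal dictionary.
    NSDictionary *attrs = @{ (NSString *)kCVPixelBufferIOSurfacePropertiesKey: @{},
                             (NSString *)kCVPixelBufferCGBitmapContextCompatibilityKey: @(YES) };
    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn err = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                       kCVPixelFormatType_32BGRA,
                                       (__bridge CFDictionaryRef)attrs, &pixelBuffer);
    if (err != kCVReturnSuccess) return NULL;

    // Copy row by row: the buffer's bytes-per-row may be larger than width * 4.
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    uint8_t *dst = CVPixelBufferGetBaseAddress(pixelBuffer);
    size_t dstBytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    for (size_t row = 0; row < height; row++) {
        memcpy(dst + row * dstBytesPerRow, srcBytes + row * width * 4, width * 4);
    }
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

    // Fetch a GL texture backed by the pixel buffer through the texture cache.
    err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                       textureCache,
                                                       pixelBuffer,
                                                       NULL,              // texture attributes
                                                       GL_TEXTURE_2D,
                                                       GL_RGBA,           // internal format
                                                       (GLsizei)width, (GLsizei)height,
                                                       GL_BGRA,           // source pixel format
                                                       GL_UNSIGNED_BYTE,
                                                       0,                 // plane index
                                                       textureOut);
    if (err != kCVReturnSuccess) { CVPixelBufferRelease(pixelBuffer); return NULL; }
    return pixelBuffer; // caller releases with CVPixelBufferRelease()
}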

Related

Is there a method that uses less memory than NSKeyedArchiver to archive an object?

I have created a ReplayKit Broadcast Extension, so the maximum amount of memory I can use is 50 MB.
I am taking samples of the broadcasted stream to send those images with a CFMessagePortSendRequest call. As that function accepts only CFData type, I need to convert my multi-plane image to Data.
NSKeyedArchiver.archivedObject() seems to exceed this 50 MB. Breaking on the line before the call, I can see a memory consumption of ~6 MB. Then, executing the archivedObject call, my extension crashes because it exceeds the memory limit.
Is there a less memory-eating way to convert the CIImage of a CVPixelBuffer to Data? And then back, of course.
I was able to convert a CMSampleBufferRef to NSData in the following way. This method uses roughly 1-5 MB of RAM. I hope this solves your problem.
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBufferType);
// Lock before reading the plane's base address; this grabs only plane 0
// (the luma plane for a bi-planar format).
CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
UInt8 *bap0 = (UInt8 *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
size_t byteperrow = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
size_t height = CVPixelBufferGetHeightOfPlane(imageBuffer, 0);
NSData *data = [NSData dataWithBytes:bap0 length:byteperrow * height];
CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
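For the reverse direction the question asks about, something along these lines could work. This is only a sketch: it assumes bi-planar 4:2:0 video-range frames, serializes every plane, and assumes a buffer recreated with the same width, height and pixel format ends up with the same per-plane bytes-per-row values.
#import <CoreVideo/CoreVideo.h>

// Sketch: serialize all planes of a pixel buffer into one NSData blob.
static NSData *DataFromPixelBuffer(CVPixelBufferRef pixelBuffer)
{
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    NSMutableData *data = [NSMutableData data];
    for (size_t plane = 0; plane < CVPixelBufferGetPlaneCount(pixelBuffer); plane++) {
        const void *base = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, plane);
        size_t bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, plane);
        size_t planeHeight = CVPixelBufferGetHeightOfPlane(pixelBuffer, plane);
        [data appendBytes:base length:bytesPerRow * planeHeight];
    }
    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    return data;
}

// Sketch: rebuild a pixel buffer from that blob, assuming the same dimensions,
// format and per-plane bytes-per-row as on the sending side.
static CVPixelBufferRef PixelBufferFromData(NSData *data, size_t width, size_t height)
{
    CVPixelBufferRef pixelBuffer = NULL;
    NSDictionary *attrs = @{ (NSString *)kCVPixelBufferIOSurfacePropertiesKey: @{} };
    CVReturn err = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                       kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                                       (__bridge CFDictionaryRef)attrs, &pixelBuffer);
    if (err != kCVReturnSuccess) return NULL;
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    const uint8_t *src = data.bytes;
    for (size_t plane = 0; plane < CVPixelBufferGetPlaneCount(pixelBuffer); plane++) {
        void *dst = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, plane);
        size_t planeSize = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, plane)
                         * CVPixelBufferGetHeightOfPlane(pixelBuffer, plane);
        memcpy(dst, src, planeSize);
        src += planeSize;
    }
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    return pixelBuffer; // caller releases with CVPixelBufferRelease()
}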

How do I draw onto a CVPixelBufferRef that is planar/ycbcr/420f/yuv/NV12/not rgb?

I have received a CMSampleBufferRef from a system API that contains CVPixelBufferRefs that are not RGBA (linear pixels). The buffer contains planar pixels (such as 420f aka kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange aka yCbCr aka YUV).
I would like to do some manipulation of this video data before sending it off to VideoToolbox to be encoded to h264 (drawing some text, overlaying a logo, rotating the image, etc.), but I'd like it to be efficient and real-time. But planar image data looks messy to work with: there's the chroma plane and the luma plane, they're different sizes, and working with this at the byte level seems like a lot of work.
I could probably use a CGContextRef and just paint right on top of the pixels, but from what I can gather it only supports RGBA pixels. Any advice on how I can do this with as little data copying as possible, yet as few lines of code as possible?
CGBitmapContextRef can only paint into something like 32ARGB, correct. This means that you will want to create ARGB (or RGBA) buffers, and then find a way to very quickly transfer YUV pixels onto this ARGB surface. This recipe includes using CoreImage, a home-made CVPixelBufferRef through a pool, a CGBitmapContextRef referencing your home made pixel buffer, and then recreating a CMSampleBufferRef resembling your input buffer, but referencing your output pixels. In other words,
1. Fetch the incoming pixels into a CIImage.
2. Create a CVPixelBufferPool with the pixel format and output dimensions you are creating. You don't want to create CVPixelBuffers without a pool in real time: you will run out of memory if your producer is too fast, you'll fragment your RAM as you won't be reusing buffers, and it's a waste of cycles.
3. Create a CIContext with the default constructor that you'll share between buffers. It contains no external state, but the documentation says that recreating it on every frame is very expensive.
4. On each incoming frame, create a new pixel buffer. Make sure to use an allocation threshold so you don't get runaway RAM usage.
5. Lock the pixel buffer.
6. Create a bitmap context referencing the bytes in the pixel buffer.
7. Use the CIContext to render the planar image data into the linear buffer.
8. Perform your app-specific drawing in the CGContext!
9. Unlock the pixel buffer.
10. Fetch the timing info of the original sample buffer.
11. Create a CMVideoFormatDescriptionRef by asking the pixel buffer for its exact format.
12. Create a sample buffer for the pixel buffer. Done!
Here's a sample implementation, where I have chosen 32ARGB as the image format to work with, as that's something that both CGBitmapContext and Core Video enjoy working with on iOS:
{
CVPixelBufferPoolRef _pool;
CIContext *_imageContext; // shared CIContext (step 3), created once, e.g. with [CIContext contextWithOptions:nil]
CGSize _poolBufferDimensions;
}
- (void)_processSampleBuffer:(CMSampleBufferRef)inputBuffer
{
// 1. Input data
CVPixelBufferRef inputPixels = CMSampleBufferGetImageBuffer(inputBuffer);
CIImage *inputImage = [CIImage imageWithCVPixelBuffer:inputPixels];
// 2. Create a new pool if the old pool doesn't have the right format.
CGSize bufferDimensions = {CVPixelBufferGetWidth(inputPixels), CVPixelBufferGetHeight(inputPixels)};
if(!_pool || !CGSizeEqualToSize(bufferDimensions, _poolBufferDimensions)) {
if(_pool) {
CFRelease(_pool);
}
OSStatus ok0 = CVPixelBufferPoolCreate(NULL,
NULL, // pool attrs
(__bridge CFDictionaryRef)(@{
(id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32ARGB),
(id)kCVPixelBufferWidthKey: @(bufferDimensions.width),
(id)kCVPixelBufferHeightKey: @(bufferDimensions.height),
}), // buffer attrs
&_pool
);
_poolBufferDimensions = bufferDimensions;
assert(ok0 == noErr);
}
// 4. Create pixel buffer
CVPixelBufferRef outputPixels;
OSStatus ok1 = CVPixelBufferPoolCreatePixelBufferWithAuxAttributes(NULL,
_pool,
(__bridge CFDictionaryRef)@{
// Opt to fail buffer creation in case of slow buffer consumption
// rather than to exhaust all memory.
(__bridge id)kCVPixelBufferPoolAllocationThresholdKey: @20
}, // aux attributes
&outputPixels
);
if(ok1 == kCVReturnWouldExceedAllocationThreshold) {
// Dropping frame because consumer is too slow
return;
}
assert(ok1 == noErr);
// 5, 6. Graphics context to draw in
CGColorSpaceRef deviceColors = CGColorSpaceCreateDeviceRGB();
OSStatus ok2 = CVPixelBufferLockBaseAddress(outputPixels, 0);
assert(ok2 == noErr);
CGContextRef cg = CGBitmapContextCreate(
CVPixelBufferGetBaseAddress(outputPixels), // bytes
CVPixelBufferGetWidth(inputPixels), CVPixelBufferGetHeight(inputPixels), // dimensions
8, // bits per component
CVPixelBufferGetBytesPerRow(outputPixels), // bytes per row
deviceColors, // color space
kCGImageAlphaPremultipliedFirst // bitmap info
);
CFRelease(deviceColors);
assert(cg != NULL);
// 7
[_imageContext render:inputImage toCVPixelBuffer:outputPixels];
// 8. DRAW
CGContextSetRGBFillColor(cg, 0.5, 0, 0, 1);
CGContextSetTextDrawingMode(cg, kCGTextFill);
NSAttributedString *text = [[NSAttributedString alloc] initWithString:@"Hello world" attributes:nil];
CTLineRef line = CTLineCreateWithAttributedString((__bridge CFAttributedStringRef)text);
CTLineDraw(line, cg);
CFRelease(line);
// 9. Unlock and stop drawing
CFRelease(cg);
CVPixelBufferUnlockBaseAddress(outputPixels, 0);
// 10. Timings
CMSampleTimingInfo timingInfo;
OSStatus ok4 = CMSampleBufferGetSampleTimingInfo(inputBuffer, 0, &timingInfo);
assert(ok4 == noErr);
// 11. Video format
CMVideoFormatDescriptionRef videoFormat;
OSStatus ok5 = CMVideoFormatDescriptionCreateForImageBuffer(NULL, outputPixels, &videoFormat);
assert(ok5 == noErr);
// 12. Output sample buffer
CMSampleBufferRef outputBuffer;
OSStatus ok3 = CMSampleBufferCreateForImageBuffer(NULL, // allocator
outputPixels, // image buffer
YES, // data ready
NULL, // make ready callback
NULL, // make ready refcon
videoFormat,
&timingInfo, // timing info
&outputBuffer // out
);
assert(ok3 == noErr);
[_consumer consumeSampleBuffer:outputBuffer];
CFRelease(outputPixels);
CFRelease(videoFormat);
CFRelease(outputBuffer);
}

Loading animations with RGBA8888 and RGBA4444 shows no difference in memory usage, platform cocos2d & iOS

Platform: cocos2d, iOS
Step 1: Loading animations from FileName.pvr.ccz (TexturePacker) with ImageFormat="RGBA8888"
shows a memory usage of 10.0 MB in Xcode Instruments.
Step 2: Loading animations from FileName.pvr.ccz (TexturePacker) with ImageFormat="RGBA4444"
also shows a memory usage of 10.0 MB in Xcode Instruments.
Question: why is there no difference in memory usage when using the lower ImageFormat="RGBA4444" instead of the higher ImageFormat="RGBA8888"?
TexturePacker texture size = 2047 * 1348
The default texture format is RGBA8888 so if you have a RGBA4444 texture you need to change the format before loading the texture (and perhaps change it back afterwards).
The method to change texture format for newly created textures is a class method of CCTexture2D:
+ (void) setDefaultAlphaPixelFormat:(CCTexture2DPixelFormat)format;
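For example, a minimal sketch (assuming the cocos2d-iphone 2.x enum names and that TexturePacker exported a FileName.plist alongside the .pvr.ccz):
// Sketch: switch to 16-bit before loading the RGBA4444 sheet, then restore the default.
[CCTexture2D setDefaultAlphaPixelFormat:kCCTexture2DPixelFormat_RGBA4444];
[[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:@"FileName.plist"]; // plist exported with the pvr.ccz
[CCTexture2D setDefaultAlphaPixelFormat:kCCTexture2DPixelFormat_RGBA8888];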
I found the warning that explains why the memory size is the same in both formats: http://www.cocos2d-iphone.org/forum/topic/31092
In CCTexturePVR.m:
// Not word aligned ?
if( mod != 0 ) {
NSUInteger neededBytes = (4 - mod ) / (bpp/8);
printf("\n");
NSLog(@"cocos2d: WARNING. Current texture size=(%tu,%tu). Convert it to size=(%tu,%tu) in order to save memory", _width, _height, _width + neededBytes, _height );
NSLog(@"cocos2d: WARNING: File: %@", [path lastPathComponent] );
NSLog(@"cocos2d: WARNING: For further info visit: http://www.cocos2d-iphone.org/forum/topic/31092");
printf("\n");
}
It's a cocos2d/iOS quirk that can be handled by adjusting the pvr.ccz size: the dimensions should be divisible by 4 (they do not need to be a power of two). That resolves the issue and gives the expected memory difference between the two formats.

iOS 6: How to use the YUV to RGB conversion feature from CVPixelBufferRef to CIImage

Since iOS 6, Apple has provided a way to create a CIImage directly from native YUV data through this call:
initWithCVPixelBuffer:options:
The Core Image Programming Guide mentions this feature:
Take advantage of the support for YUV image in iOS 6.0 and later.
Camera pixel buffers are natively YUV but most image processing
algorithms expect RGBA data. There is a cost to converting between the
two. Core Image supports reading YUV from CVPixelBuffer objects and
applying the appropriate color transform.
options = @{ (id)kCVPixelBufferPixelFormatTypeKey :
@(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) };
But I am unable to use it properly. I have raw YUV data, so this is what I did:
void *YUV[3] = {data[0], data[1], data[2]};
size_t planeWidth[3] = {width, width/2, width/2};
size_t planeHeight[3] = {height, height/2, height/2};
size_t planeBytesPerRow[3] = {stride, stride/2, stride/2};
CVPixelBufferRef pixelBuffer = NULL;
CVReturn ret = CVPixelBufferCreateWithPlanarBytes(kCFAllocatorDefault,
width,
height,
kCVPixelFormatType_420YpCbCr8PlanarFullRange,
nil,
width*height*1.5,
3,
YUV,
planeWidth,
planeHeight,
planeBytesPerRow,
nil,
nil, nil, &pixelBuffer);
NSDictionary *opt = @{ (id)kCVPixelBufferPixelFormatTypeKey :
@(kCVPixelFormatType_420YpCbCr8PlanarFullRange) };
CIImage *image = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:opt];
I am getting nil for image. Any idea what I am missing?
EDIT:
I added a lock and unlock of the base address around the call. I also dumped the data of the pixel buffer to make sure it properly holds the data. It looks like something is wrong with the init call only; the CIImage object still comes back nil.
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
CIImage *image = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:opt];
CVPixelBufferUnlockBaseAddress(pixelBuffer,0);
There should be an error message in the console: initWithCVPixelBuffer failed because the CVPixelBufferRef is not IOSurface backed. See Apple's Technical Q&A QA1781 for how to create an IOSurface-backed CVPixelBuffer.
Calling CVPixelBufferCreateWithBytes() or CVPixelBufferCreateWithPlanarBytes() will result in CVPixelBuffers that are not IOSurface-backed...
...To do that, you must specify kCVPixelBufferIOSurfacePropertiesKey in the pixelBufferAttributes dictionary when creating the pixel buffer using CVPixelBufferCreate().
NSDictionary *pixelBufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
[NSDictionary dictionary], (id)kCVPixelBufferIOSurfacePropertiesKey,
nil];
// you may add other keys as appropriate, e.g. kCVPixelBufferPixelFormatTypeKey, kCVPixelBufferWidthKey, kCVPixelBufferHeightKey, etc.
CVPixelBufferRef pixelBuffer;
CVPixelBufferCreate(... (CFDictionaryRef)pixelBufferAttributes, &pixelBuffer);
Alternatively, you can make IOSurface-backed CVPixelBuffers using CVPixelBufferPoolCreatePixelBuffer() from an existing pixel buffer pool, if the pixelBufferAttributes dictionary provided to CVPixelBufferPoolCreate() includes kCVPixelBufferIOSurfacePropertiesKey.
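Applied to the code in the question, a rough sketch would be to replace the CVPixelBufferCreateWithPlanarBytes call with CVPixelBufferCreate plus a manual copy of the three planes (reusing the question's data, stride, width and height variables; the copy logic is my own assumption, not from QA1781):
// Sketch: create an IOSurface-backed planar buffer and copy the raw
// Y/Cb/Cr planes into it.
NSDictionary *attrs = @{ (id)kCVPixelBufferIOSurfacePropertiesKey: @{} };
CVPixelBufferRef pixelBuffer = NULL;
CVReturn ret = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                   kCVPixelFormatType_420YpCbCr8PlanarFullRange,
                                   (__bridge CFDictionaryRef)attrs, &pixelBuffer);
if (ret != kCVReturnSuccess || pixelBuffer == NULL) {
    return; // handle the error as appropriate
}

CVPixelBufferLockBaseAddress(pixelBuffer, 0);
size_t srcBytesPerRow[3] = {stride, stride / 2, stride / 2};
size_t planeHeights[3]   = {height, height / 2, height / 2};
for (size_t plane = 0; plane < 3; plane++) {
    uint8_t *dst = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, plane);
    size_t dstBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, plane);
    const uint8_t *src = data[plane];
    for (size_t row = 0; row < planeHeights[plane]; row++) {
        // Copy row by row; Core Video may pad each destination row.
        memcpy(dst + row * dstBytesPerRow, src + row * srcBytesPerRow[plane],
               MIN(dstBytesPerRow, srcBytesPerRow[plane]));
    }
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

CIImage *image = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer];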
I am working on a similar problem and kept finding that same quote from Apple without any further information on how to work in a YUV color space. I came upon the following:
By default, Core Image assumes that processing nodes are 128 bits-per-pixel, linear light, premultiplied RGBA floating-point values that use the GenericRGB color space. You can specify a different working color space by providing a Quartz 2D CGColorSpace object. Note that the working color space must be RGB-based. If you have YUV data as input (or other data that is not RGB-based), you can use ColorSync functions to convert to the working color space. (See Quartz 2D Programming Guide for information on creating and using CGColorspace objects.)
With 8-bit YUV 4:2:2 sources, Core Image can process 240 HD layers per gigabyte. Eight-bit YUV is the native color format for video source such as DV, MPEG, uncompressed D1, and JPEG. You need to convert YUV color spaces to an RGB color space for Core Image.
I note that there are no YUV color spaces, only Gray and RGB; and their calibrated cousins. I'm not sure how to convert the color space yet, but will certainly report here if I find out.

iOS lossless image editing

I'm working on a photo app for iPhone/iPod.
I'd like to get the raw data from a large image in an iPhone app and perform some pixel manipulation on it and write it back to the disk/gallery.
So far I've been converting the UIImage obtained from image picker to unsigned char pointers using the following technique:
CGImageRef imageBuff = [imageBuffer CGImage];//imageBuffer is an UIImage *
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(imageBuff));
unsigned char *input_image = (unsigned char *)CFDataGetBytePtr(pixelData);
//height & width represents the dimensions of the input image
unsigned char *resultant = (unsigned char *)malloc(height*4*width);
for (int i=0; i<height;i++)
{
for (int j=0; j<4*width; j+=4)
{
resultant[i*4*width+4*(j/4)+0] = input_image[i*4*width+4*(j/4)];
resultant[i*4*width+4*(j/4)+1] = input_image[i*4*width+4*(j/4)+1];
resultant[i*4*width+4*(j/4)+2] = input_image[i*4*width+4*(j/4)+2];
resultant[i*4*width+4*(j/4)+3] = 255;
}
}
CFRelease(pixelData);
I'm doing all operations on resultant and writing it back to disk in the original resolution using:
NSData* data = UIImagePNGRepresentation(image);
[data writeToFile:path atomically:YES];
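For the write-back step, a minimal sketch of wrapping the modified resultant bytes into a UIImage before the PNG call, assuming 8-bit-per-channel RGBA with the alpha already forced to 255 as in the loop above:
// Sketch: wrap the modified bytes back into a UIImage.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(resultant,
                                         width, height,
                                         8,            // bits per component
                                         4 * width,    // bytes per row (resultant is tightly packed)
                                         colorSpace,
                                         kCGImageAlphaPremultipliedLast);
CGImageRef outputCGImage = CGBitmapContextCreateImage(ctx);
UIImage *image = [UIImage imageWithCGImage:outputCGImage];
NSData *data = UIImagePNGRepresentation(image);
[data writeToFile:path atomically:YES];
CGImageRelease(outputCGImage);
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);
free(resultant);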
I'd like to know:
is the transformation actually lossless?
if there's a 20-22 MP image at hand... is it wise to do this operation in a background thread? (chances of crashing etc... I'd like to know the best practice for doing this).
is there a better method for implementing this (getting the pixel data is a necessity here)?
Yes, the method is lossless, but I am not sure about 20-22 MP images. I don't think the iPhone is a suitable choice at all if you want to edit images that big!
I have been successful in capturing and editing images up to 22 MP using this technique.
Tested this on an iPhone 4S and it worked fine. However, some of the effects I'm using require Core Image filters, and it seems CIFilters do not support images larger than 16 MP; the filters return a blank image when used on anything over 16 MP.
I'd still like people to comment on lossless large image editing strategies in iOS.
