Duplicate / Copy CVPixelBufferRef with CVPixelBufferCreate - ios

I need to create a copy of a CVPixelBufferRef in order to be able to manipulate the original pixel buffer in a bit-wise fashion using the values from the copy. I cannot seem to achieve this with CVPixelBufferCreate, or with CVPixelBufferCreateWithBytes.
According to this question, it could possibly also be done with memcpy(). However, there is no explanation of how this would be achieved, nor of which Core Video library calls would be needed in any case.

This seems to work:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Get pixel buffer info
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    int bufferWidth = (int)CVPixelBufferGetWidth(pixelBuffer);
    int bufferHeight = (int)CVPixelBufferGetHeight(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    uint8_t *baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer);
    // Copy the pixel buffer
    CVPixelBufferRef pixelBufferCopy = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, bufferWidth, bufferHeight, kCVPixelFormatType_32BGRA, NULL, &pixelBufferCopy);
    CVPixelBufferLockBaseAddress(pixelBufferCopy, 0);
    uint8_t *copyBaseAddress = CVPixelBufferGetBaseAddress(pixelBufferCopy);
    memcpy(copyBaseAddress, baseAddress, bufferHeight * bytesPerRow);
    // Do what needs to be done with the 2 pixel buffers
    // ...then unlock both buffers (and release the copy when finished)
    CVPixelBufferUnlockBaseAddress(pixelBufferCopy, 0);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}
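One caveat with the flat memcpy: Core Video may pad each row, so the copy's bytesPerRow is not guaranteed to match the source's, and bufferHeight * bytesPerRow can then overrun one of the buffers. A minimal defensive sketch, reusing the variable names above, copies row by row instead:

size_t copyBytesPerRow = CVPixelBufferGetBytesPerRow(pixelBufferCopy);
size_t bytesToCopy = MIN(bytesPerRow, copyBytesPerRow);
for (int row = 0; row < bufferHeight; row++) {
    // Advance each pointer by its own stride so row padding never misaligns the copy
    memcpy(copyBaseAddress + row * copyBytesPerRow,
           baseAddress + row * bytesPerRow,
           bytesToCopy);
}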

ooOlly's code was not working for me with YUV pixel buffers in all cases (a green line at the bottom of the frame and a SIGTRAP in memcpy), so the following works in Swift for YUV pixel buffers coming from the camera:
var copyOut: CVPixelBuffer?
let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                 CVPixelBufferGetWidth(pixelBuffer),
                                 CVPixelBufferGetHeight(pixelBuffer),
                                 CVPixelBufferGetPixelFormatType(pixelBuffer),
                                 nil,
                                 &copyOut)
let copy = copyOut!
CVPixelBufferLockBaseAddress(copy, [])
CVPixelBufferLockBaseAddress(pixelBuffer, [])
let ydestPlane = CVPixelBufferGetBaseAddressOfPlane(copy, 0)
let ysrcPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0)
memcpy(ydestPlane, ysrcPlane,
       CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0) * CVPixelBufferGetHeightOfPlane(pixelBuffer, 0))
let uvdestPlane = CVPixelBufferGetBaseAddressOfPlane(copy, 1)
let uvsrcPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1)
memcpy(uvdestPlane, uvsrcPlane,
       CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1) * CVPixelBufferGetHeightOfPlane(pixelBuffer, 1))
CVPixelBufferUnlockBaseAddress(copy, [])
CVPixelBufferUnlockBaseAddress(pixelBuffer, [])
Better error handling than the force unwrap is strongly suggested, of course.

Maxi Mus's code only deals with RGB/BGR buffers, so for YUV buffers the code below should work.
// Copy the pixel buffer
CVPixelBufferRef pixelBufferCopy = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, bufferWidth, bufferHeight, pixelFormat, NULL, &pixelBufferCopy);
CVPixelBufferLockBaseAddress(pixelBufferCopy, 0);
// BGR
// uint8_t *copyBaseAddress = CVPixelBufferGetBaseAddress(pixelBufferCopy);
// memcpy(copyBaseAddress, baseAddress, bufferHeight * bytesPerRow);
// YUV (bi-planar): copy the Y plane, then the interleaved CbCr plane.
// Note: these memcpy sizes assume bytesPerRow == bufferWidth for each plane
// (no row padding); otherwise copy each plane using its own bytes-per-row.
uint8_t *yDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBufferCopy, 0);
uint8_t *yPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
memcpy(yDestPlane, yPlane, bufferWidth * bufferHeight);
uint8_t *uvDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBufferCopy, 1);
uint8_t *uvPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
memcpy(uvDestPlane, uvPlane, bufferWidth * bufferHeight / 2);
CVPixelBufferUnlockBaseAddress(pixelBufferCopy, 0);
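For a version that does not hard-code the plane layout, a sketch along these lines works for both planar and non-planar buffers, assuming both buffers are already locked and were created with the same dimensions and format (so the strides match; otherwise copy row by row as shown earlier):

size_t planeCount = CVPixelBufferGetPlaneCount(pixelBuffer);
if (planeCount == 0) {
    // Non-planar (e.g. BGRA): one flat copy using the source's stride
    memcpy(CVPixelBufferGetBaseAddress(pixelBufferCopy),
           CVPixelBufferGetBaseAddress(pixelBuffer),
           CVPixelBufferGetBytesPerRow(pixelBuffer) * CVPixelBufferGetHeight(pixelBuffer));
} else {
    for (size_t plane = 0; plane < planeCount; plane++) {
        // Each plane has its own height and stride (the CbCr plane is half-height for 420)
        memcpy(CVPixelBufferGetBaseAddressOfPlane(pixelBufferCopy, plane),
               CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, plane),
               CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, plane) *
                   CVPixelBufferGetHeightOfPlane(pixelBuffer, plane));
    }
}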

Memory leak from Objective-C code in iOS application

My code is eating memory. I added this function, and it seems to be the cause of all the problems: when I don't call it, I don't run out of memory.
It's an Objective-C function to crop an image. How do I release the memory that was used in the function, so that everything is cleaned up before exiting?
- (void)crop:(CVImageBufferRef)sampleBuffer
{
    int cropX0, cropY0, cropHeight, cropWidth, outWidth, outHeight;
    cropHeight = 720;
    cropWidth = 1280;
    cropX0 = 0;
    cropY0 = 0;
    outWidth = 1280;
    outHeight = 720;
    CVPixelBufferLockBaseAddress(sampleBuffer, 0);
    void *baseAddress = CVPixelBufferGetBaseAddress(sampleBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(sampleBuffer);
    vImage_Buffer inBuff;
    inBuff.height = cropHeight;
    inBuff.width = cropWidth;
    inBuff.rowBytes = bytesPerRow;
    int startpos = cropY0 * bytesPerRow + 4 * cropX0;
    inBuff.data = (unsigned char *)baseAddress + startpos;
    unsigned char *outImg = (unsigned char *)malloc(4 * outWidth * outHeight);
    vImage_Buffer outBuff = { outImg, outHeight, outWidth, 4 * outWidth };
    vImage_Error err = vImageScale_ARGB8888(&inBuff, &outBuff, NULL, 0);
    if (err != kvImageNoError)
    {
        NSLog(@"error %ld", err);
    }
    else
    {
        NSLog(@"Success");
    }
    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn result = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                                   inBuff.width,
                                                   inBuff.height,
                                                   kCVPixelFormatType_32BGRA,
                                                   outImg,
                                                   bytesPerRow,
                                                   NULL,
                                                   NULL,
                                                   NULL,
                                                   &pixelBuffer);
    CVPixelBufferUnlockBaseAddress(sampleBuffer, 0);
}
free(outImg); is missing at the end, since you are not freeing the memory you allocated. (The pixelBuffer created with CVPixelBufferCreateWithBytes is also never released or returned here, so it leaks as well.)
It is good practice in embedded programming, and here too since you have constant-size pixel dimensions, to use a fixed-size array that you declare at the top of the function and initialize to zero.
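If the scaled buffer is meant to outlive the function, one option (a sketch, not the only fix) is to hand ownership of outImg to Core Video through the release callback parameter of CVPixelBufferCreateWithBytes, so the malloc'd block is freed exactly when the pixel buffer is released. Note also that the stride argument should describe outImg (4 * outWidth), not the source's bytesPerRow:

static void releaseMallocedBytes(void *releaseRefCon, const void *baseAddress)
{
    // Invoked by Core Video when the pixel buffer is finally released
    free((void *)baseAddress);
}

// ...at the creation site, instead of passing NULL callbacks:
CVReturn result = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                               outWidth,
                                               outHeight,
                                               kCVPixelFormatType_32BGRA,
                                               outImg,
                                               4 * outWidth, // outImg's own row stride
                                               releaseMallocedBytes,
                                               NULL,   // releaseRefCon
                                               NULL,   // pixel buffer attributes
                                               &pixelBuffer);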

CVPixelBufferCreate does not care about planar format

I am trying to rotate a Core Video '420f' image without converting it to RGBA.
The incoming CMSampleBuffer's Y-plane bytesPerRow is width + 32. That means the Y-plane row size is 8 bits * width + sizeof(CVPlanarComponentInfo).
But if I call CVPixelBufferCreate(,,,'420f',,), bytesPerRow == width. CVPixelBufferCreate() does not care about the planar format and does not add the 32 bytes.
I tried
    vImage_Buffer myYBuffer = {buf, height, width, bytesPerRow};
but there is no parameter for bitsPerPixel, so I cannot use it for the UV buffer. I also tried
    vImageBuffer_Init(buf, height, width, bitsPerPixel, flags);
but there is no parameter for bytesPerRow.
I would like to know how to create a vImageBuffer or CVPixelBuffer with the '420f' planar format.
This is my under-construction code for the rotation:
NS_INLINE void dumpData(NSString *tag, unsigned char *p, size_t w) {
    NSMutableString *str = [tag mutableCopy];
    for (int i = 0; i < w + 100; ++i) {
        [str appendString:[NSString stringWithFormat:@"%02x ", *(p + i)]];
    }
    NSLog(@"%@", str);
}

- (CVPixelBufferRef)RotateBuffer:(CMSampleBufferRef)sampleBuffer withConstant:(uint8_t)rotationConstant
{
    vImage_Error err = kvImageNoError;
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    size_t outHeight = width;
    size_t outWidth = height;
    assert(CVPixelBufferGetPixelFormatType(imageBuffer) == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange);
    assert(CVPixelBufferGetPlaneCount(imageBuffer) == 2);
    NSLog(@"YBuffer %ld %ld %ld", CVPixelBufferGetWidthOfPlane(imageBuffer, 0), CVPixelBufferGetHeightOfPlane(imageBuffer, 0),
          CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0)); // BytesPerRow = width + 32
    dumpData(@"Base=", CVPixelBufferGetBaseAddress(imageBuffer), width);
    dumpData(@"Plane0=", CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0), width);
    CVPixelBufferRef rotatedBuffer = NULL;
    CVReturn ret = CVPixelBufferCreate(kCFAllocatorDefault, outWidth, outHeight, kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, NULL, &rotatedBuffer);
    NSLog(@"CVPixelBufferCreate err=%d", ret);
    CVPixelBufferLockBaseAddress(rotatedBuffer, 0);
    NSLog(@"CVPixelBufferCreate init %ld %ld %ld p=%p", CVPixelBufferGetWidthOfPlane(rotatedBuffer, 0), CVPixelBufferGetHeightOfPlane(rotatedBuffer, 0),
          CVPixelBufferGetBytesPerRowOfPlane(rotatedBuffer, 0), CVPixelBufferGetBaseAddressOfPlane(rotatedBuffer, 0));
    // BytesPerRow = width ??? should be width + 32
    // rotate Y plane
    // (under construction: rotationConstant is not wired up yet; 1 = 90 degrees is hard-coded below)
    vImage_Buffer originalYBuffer = { CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0), CVPixelBufferGetHeightOfPlane(imageBuffer, 0),
                                      CVPixelBufferGetWidthOfPlane(imageBuffer, 0), CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0) };
    vImage_Buffer rotatedYBuffer = { CVPixelBufferGetBaseAddressOfPlane(rotatedBuffer, 0), CVPixelBufferGetHeightOfPlane(rotatedBuffer, 0),
                                     CVPixelBufferGetWidthOfPlane(rotatedBuffer, 0), CVPixelBufferGetBytesPerRowOfPlane(rotatedBuffer, 0) };
    err = vImageRotate90_Planar8(&originalYBuffer, &rotatedYBuffer, 1, 0.0, kvImageNoFlags);
    NSLog(@"rotatedYBuffer rotated %ld %ld %ld p=%p", rotatedYBuffer.width, rotatedYBuffer.height, rotatedYBuffer.rowBytes, rotatedYBuffer.data);
    NSLog(@"RotateY err=%ld", err);
    dumpData(@"Rotated Plane0=", rotatedYBuffer.data, outWidth);
    // rotate UV plane (each interleaved CbCr pair is treated as one 16-bit pixel)
    vImage_Buffer originalUVBuffer = { CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1), CVPixelBufferGetHeightOfPlane(imageBuffer, 1),
                                       CVPixelBufferGetWidthOfPlane(imageBuffer, 1), CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 1) };
    vImage_Buffer rotatedUVBuffer = { CVPixelBufferGetBaseAddressOfPlane(rotatedBuffer, 1), CVPixelBufferGetHeightOfPlane(rotatedBuffer, 1),
                                      CVPixelBufferGetWidthOfPlane(rotatedBuffer, 1), CVPixelBufferGetBytesPerRowOfPlane(rotatedBuffer, 1) };
    err = vImageRotate90_Planar16U(&originalUVBuffer, &rotatedUVBuffer, 1, 0.0, kvImageNoFlags);
    NSLog(@"RotateUV err=%ld", err);
    dumpData(@"Rotated Plane1=", rotatedUVBuffer.data, outWidth);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    CVPixelBufferUnlockBaseAddress(rotatedBuffer, 0);
    return rotatedBuffer;
}
I found that the extra 32 bytes per row is optional: some Apple APIs add 32 bytes to each row, and some do not.
Actually, the code in the question works fine. CVPixelBufferCreate() creates a buffer without the extra 32 bytes per row, and vImageRotate90_Planar8() supports both layouts, with and without the extra 32 bytes, because it honors each vImage_Buffer's rowBytes.
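If you need the new buffer's stride to match a particular alignment (for instance, to mirror the camera's width + 32 layout), you can hint it through the creation attributes. Here is a short sketch using kCVPixelBufferBytesPerRowAlignmentKey; keep in mind Core Video treats this as a request, so always read the actual stride back:

NSDictionary *attrs = @{ (__bridge NSString *)kCVPixelBufferBytesPerRowAlignmentKey : @32 };
CVPixelBufferRef buffer = NULL;
CVReturn ret = CVPixelBufferCreate(kCFAllocatorDefault, outWidth, outHeight,
                                   kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
                                   (__bridge CFDictionaryRef)attrs, &buffer);
// Never assume the stride you asked for; query what was actually allocated
size_t yBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(buffer, 0);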

How to convert opencv cv::Mat to CVPixelBuffer

I'm an undergraduate student building a HumanSeg iPhone app using CoreML. Since my model needs resizing and black padding on the original video frames, I can't rely on Vision (which only provides resizing, not black padding) and have to do the conversion myself.
I have CVPixelBuffer frames, and I have converted them into cv::Mat using the following code:
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
int bufferWidth = (int)CVPixelBufferGetWidth(pixelBuffer);
int bufferHeight = (int)CVPixelBufferGetHeight(pixelBuffer);
int bytePerRow = (int)CVPixelBufferGetBytesPerRow(pixelBuffer);
unsigned char *pixel = (unsigned char *)CVPixelBufferGetBaseAddress(pixelBuffer);
Mat image = Mat(bufferHeight, bufferWidth, CV_8UC4, pixel, bytePerRow);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
/* I'll do my resizing and padding here */
// How can I implement this function?
convertToCVPixelBuffer(image);
But now, after I've done my preprocessing work, I have to convert the cv::Mat back to a CVPixelBuffer to feed it to the CoreML model. How can I achieve this? (Or can Vision achieve black padding using some special techniques?)
Any help will be appreciated.
Please see the code below. Checking whether width and height are divisible by 64 is necessary, or else we get weird results due to the bytes-per-row mismatch between cv::Mat and CVPixelBuffer.
CVPixelBufferRef getImageBufferFromMat(cv::Mat matimg) {
    cv::cvtColor(matimg, matimg, CV_BGR2BGRA);
    /* Very much required, see https://stackoverflow.com/questions/66434552/objective-c-cvmat-to-cvpixelbuffer
       height & width have to be multiples of 64 for better caching
       (and so the CVPixelBuffer's bytesPerRow matches the Mat's step) */
    int widthRemainder = matimg.cols % 64, heightRemainder = matimg.rows % 64;
    if (widthRemainder != 0 || heightRemainder != 0) {
        cv::resize(matimg, matimg, cv::Size(matimg.cols + (64 - widthRemainder), matimg.rows + (64 - heightRemainder)));
    }
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             [NSNumber numberWithInt:matimg.cols], kCVPixelBufferWidthKey,
                             [NSNumber numberWithInt:matimg.rows], kCVPixelBufferHeightKey,
                             [NSNumber numberWithInt:matimg.step[0]], kCVPixelBufferBytesPerRowAlignmentKey,
                             nil];
    CVPixelBufferRef imageBuffer;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorMalloc, matimg.cols, matimg.rows, kCVPixelFormatType_32BGRA, (__bridge CFDictionaryRef)options, &imageBuffer);
    NSParameterAssert(status == kCVReturnSuccess && imageBuffer != NULL);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    void *base = CVPixelBufferGetBaseAddress(imageBuffer);
    // Flat copy assumes the buffer's bytesPerRow equals the Mat's step (hence the multiple-of-64 resize above)
    memcpy(base, matimg.data, matimg.total() * matimg.elemSize());
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    return imageBuffer;
}
First, convert the mat to a UIImage (or any other class from the iOS APIs); check this question. Then, convert the resulting image to a CVPixelBuffer like this.
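For reference, here is a minimal sketch of that second step (UIImage to CVPixelBuffer) via Core Graphics; the helper name pixelBufferFromImage is made up for illustration, and error handling is reduced to an assert:

CVPixelBufferRef pixelBufferFromImage(UIImage *image) {
    CGImageRef cgImage = image.CGImage;
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    CVPixelBufferRef buffer = NULL;
    NSDictionary *attrs = @{ (__bridge NSString *)kCVPixelBufferCGImageCompatibilityKey : @YES,
                             (__bridge NSString *)kCVPixelBufferCGBitmapContextCompatibilityKey : @YES };
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                          kCVPixelFormatType_32BGRA,
                                          (__bridge CFDictionaryRef)attrs, &buffer);
    NSCParameterAssert(status == kCVReturnSuccess);
    CVPixelBufferLockBaseAddress(buffer, 0);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Draw the image straight into the pixel buffer's backing memory as BGRA
    CGContextRef ctx = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(buffer),
                                             width, height, 8,
                                             CVPixelBufferGetBytesPerRow(buffer),
                                             colorSpace,
                                             kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);
    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(buffer, 0);
    return buffer; // caller releases with CVPixelBufferRelease
}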
For people who will be using the new OpenCV Swift wrapper, here is @Abhinava's code translated to Swift:
func matToCVPixelBuffer(mat: Mat) -> CVPixelBuffer? {
    let matrix = Mat()
    Imgproc.cvtColor(src: mat, dst: matrix, code: ColorConversionCodes.COLOR_BGR2BGRA)
    let widthRemainder = matrix.cols() % 64
    let heightRemainder = matrix.rows() % 64
    if widthRemainder != 0 || heightRemainder != 0 {
        Imgproc.resize(src: matrix, dst: matrix,
                       dsize: Size(width: matrix.cols() + (64 - widthRemainder),
                                   height: matrix.rows() + (64 - heightRemainder)))
    }
    let attributes = [
        kCVPixelBufferMetalCompatibilityKey: kCFBooleanTrue!,
        kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue!,
        kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue!,
        kCVPixelBufferWidthKey: matrix.cols(),
        kCVPixelBufferHeightKey: matrix.rows(),
        kCVPixelBufferBytesPerRowAlignmentKey: matrix.step1(0)
    ] as CFDictionary
    var pixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                     Int(matrix.cols()),
                                     Int(matrix.rows()),
                                     kCVPixelFormatType_32BGRA,
                                     attributes,
                                     &pixelBuffer)
    guard let pixelBuffer = pixelBuffer, status == kCVReturnSuccess else {
        return nil
    }
    CVPixelBufferLockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
    let base = CVPixelBufferGetBaseAddress(pixelBuffer)
    memcpy(base, matrix.dataPointer(), matrix.total() * matrix.elemSize())
    CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
    return pixelBuffer
}

ios9 - Issues with cropping CVImageBuffer

I am facing a few issues related to cropping with the iOS 9 SDK.
I have the following code to resize an image (converting from 4:3 to 16:9 by cropping in the middle). It used to work fine up to the iOS 8 SDK. With iOS 9, the bottom area is blank.
- (CMSampleBufferRef)resizeImage:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    int target_width = (int)CVPixelBufferGetWidth(imageBuffer);
    int target_height = (int)CVPixelBufferGetHeight(imageBuffer);
    int height = (int)CVPixelBufferGetHeight(imageBuffer);
    int width = (int)CVPixelBufferGetWidth(imageBuffer);
    int x = 0, y = 0;
    // Convert 4:3 to 16:9 by cropping in the middle
    if (((target_width * 3) / target_height) == 4)
    {
        target_height = ((target_width * 9) / 16);
        target_height = ((target_height + 15) / 16) * 16;
        y = (height - target_height) / 2;
    }
    else if ((target_width == 352) && (target_height == 288))
    {
        target_height = ((target_width * 9) / 16);
        target_height = ((target_height + 15) / 16) * 16;
        y = (height - target_height) / 2;
    }
    else if (((target_height * 3) / target_width) == 4)
    {
        target_width = ((target_height * 9) / 16);
        target_width = ((target_width + 15) / 16) * 16;
        x = ((width - target_width) / 2);
    }
    else if ((target_width == 288) && (target_height == 352))
    {
        target_width = ((target_height * 9) / 16);
        target_width = ((target_width + 15) / 16) * 16;
        x = ((width - target_width) / 2);
    }
    CGRect cropRect;
    NSLog(@"resizeImage x %d, y %d, target_width %d, target_height %d", x, y, target_width, target_height);
    cropRect = CGRectMake(x, y, target_width, target_height);
    CFDictionaryRef empty; // empty value for attr value
    CFMutableDictionaryRef attrs;
    empty = CFDictionaryCreate(kCFAllocatorDefault, // our empty IOSurface properties dictionary
                               NULL,
                               NULL,
                               0,
                               &kCFTypeDictionaryKeyCallBacks,
                               &kCFTypeDictionaryValueCallBacks);
    attrs = CFDictionaryCreateMutable(kCFAllocatorDefault,
                                      1,
                                      &kCFTypeDictionaryKeyCallBacks,
                                      &kCFTypeDictionaryValueCallBacks);
    CFDictionarySetValue(attrs,
                         kCVPixelBufferIOSurfacePropertiesKey,
                         empty);
    OSStatus status;
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:imageBuffer]; // options: [NSDictionary dictionaryWithObjectsAndKeys:[NSNull null], kCIImageColorSpace, nil]
    CVPixelBufferRef pixelBuffer;
    status = CVPixelBufferCreate(kCFAllocatorSystemDefault, target_width, target_height, kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, attrs, &pixelBuffer);
    if (status != 0)
    {
        NSLog(@"CVPixelBufferCreate error %d", (int)status);
    }
    [ciContext render:ciImage toCVPixelBuffer:pixelBuffer bounds:cropRect colorSpace:nil];
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    CMSampleTimingInfo sampleTime = {
        .duration = CMSampleBufferGetDuration(sampleBuffer),
        .presentationTimeStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer),
        .decodeTimeStamp = CMSampleBufferGetDecodeTimeStamp(sampleBuffer)
    };
    CMVideoFormatDescriptionRef videoInfo = NULL;
    status = CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, pixelBuffer, &videoInfo);
    if (status != 0)
    {
        NSLog(@"CMVideoFormatDescriptionCreateForImageBuffer error %d", (int)status);
    }
    CMSampleBufferRef oBuf;
    status = CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, pixelBuffer, true, NULL, NULL, videoInfo, &sampleTime, &oBuf);
    if (status != 0)
    {
        NSLog(@"CMSampleBufferCreateForImageBuffer error %d", (int)status);
    }
    CFRelease(pixelBuffer);
    ciImage = nil;
    pixelBuffer = nil;
    return oBuf;
}
Any ideas or suggestions regarding this? I tried changing the crop rectangle but with no effect.
Thanks
Are you aware that the doc comment on -[CIContext render:toCVPixelBuffer:bounds:colorSpace:] describes different behavior for iOS 8 and earlier versus iOS 9 and later? (I could not find an online resource to link, though.)
/* Render 'image' to the given CVPixelBufferRef.
 * The 'bounds' parameter has the following behavior:
 * In OS X and iOS 9 and later: The 'image' is rendered into 'buffer' so that
 *   point (0,0) of 'image' aligns to the lower left corner of 'buffer'.
 *   The 'bounds' acts like a clip rect to limit what region of 'buffer' is modified.
 * In iOS 8 and earlier: The 'bounds' parameter acts to specify the region of 'image' to render.
 *   This region (regardless of its origin) is rendered at the upper-left corner of 'buffer'.
 */
Taking this into account, I solved my problem, which looks the same as yours.
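Concretely, one way to adapt to the iOS 9+ behavior (a sketch using CIImage's standard transform API; note Core Image's lower-left origin, so you may need to flip the y coordinate of your cropRect first) is to translate the image so the desired region lands at the buffer's origin, then render with a zero-origin bounds:

// iOS 9+: move the desired crop region to (0,0), then render with zero-origin bounds
CIImage *shifted = [ciImage imageByApplyingTransform:
                    CGAffineTransformMakeTranslation(-cropRect.origin.x, -cropRect.origin.y)];
[ciContext render:shifted
  toCVPixelBuffer:pixelBuffer
           bounds:CGRectMake(0, 0, cropRect.size.width, cropRect.size.height)
       colorSpace:nil];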

How to compile vImage emboss effect sample code?

Here is the code found in the documentation:
int myEmboss(void *inData,
             unsigned int inRowBytes,
             void *outData,
             unsigned int outRowBytes,
             unsigned int height,
             unsigned int width,
             void *kernel,
             unsigned int kernel_height,
             unsigned int kernel_width,
             int divisor,
             vImage_Flags flags) {
    uint_8 kernel = {-2, -2, 0, -2, 6, 0, 0, 0, 0}; // 1
    vImage_Buffer src = { inData, height, width, inRowBytes }; // 2
    vImage_Buffer dest = { outData, height, width, outRowBytes }; // 3
    unsigned char bgColor[4] = { 0, 0, 0, 0 }; // 4
    vImage_Error err; // 5
    err = vImageConvolve_ARGB8888(&src, //const vImage_Buffer *src
                                  &dest, //const vImage_Buffer *dest,
                                  NULL,
                                  0, //unsigned int srcOffsetToROI_X,
                                  0, //unsigned int srcOffsetToROI_Y,
                                  kernel, //const signed int *kernel,
                                  kernel_height, //unsigned int
                                  kernel_width, //unsigned int
                                  divisor, //int
                                  bgColor,
                                  flags | kvImageBackgroundColorFill
                                  //vImage_Flags flags
                                  );
    return err;
}
Here is the problem: the kernel variable seems to refer to three different types:
void *kernel in the formal parameter list
uint_8 kernel (uint_8 is not a defined type) as a new local variable, which would presumably shadow the formal parameter
a const signed int *kernel when calling vImageConvolve_ARGB8888
Is this actual code? How can I compile this function?
You are correct that that function is pretty messed up. I recommend using the Provide Feedback widget to let Apple know.
I think you should remove the kernel, kernel_width, and kernel_height parameters from the function signature. Those seem to be holdovers from a function that applies a caller-supplied kernel, but this example is about applying an internally-defined kernel.
Fix the declaration of the kernel local variable to make it an array; since vImageConvolve_ARGB8888() takes a const int16_t *kernel and the values are signed, int16_t (not uint8_t) is the type that actually holds them:
    int16_t kernel[] = {-2, -2, 0, -2, 6, 0, 0, 0, 0}; // 1
Then, at the call to vImageConvolve_ARGB8888(), replace kernel_width and kernel_height by 3. Since the kernel is hard-coded, the dimensions can be as well.
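Putting those fixes together, a version that should compile (a sketch under the assumptions above, with the caller-supplied kernel parameters dropped) looks like this:

#include <Accelerate/Accelerate.h>

int myEmboss(void *inData,
             unsigned int inRowBytes,
             void *outData,
             unsigned int outRowBytes,
             unsigned int height,
             unsigned int width,
             int divisor,
             vImage_Flags flags) {
    // Hard-coded 3x3 emboss kernel; vImageConvolve_ARGB8888 expects int16_t values
    int16_t kernel[9] = {-2, -2, 0, -2, 6, 0, 0, 0, 0};
    vImage_Buffer src = { inData, height, width, inRowBytes };
    vImage_Buffer dest = { outData, height, width, outRowBytes };
    Pixel_8888 bgColor = { 0, 0, 0, 0 };
    vImage_Error err = vImageConvolve_ARGB8888(&src, &dest, NULL, 0, 0,
                                               kernel, 3, 3, divisor, bgColor,
                                               flags | kvImageBackgroundColorFill);
    return (int)err;
}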
The kernel is just the kernel used in the convolution: in mathematical terms, it is the matrix that is convolved with your image to achieve blur, sharpen, emboss, or other effects. The function you provided is just a thin wrapper around the vImage convolution function. To actually perform the convolution, you can follow the code below. The code is all hand-typed, so it is not necessarily 100% correct, but it should point you in the right direction.
To use this function, you first need pixel access to your image. Assuming you have a UIImage, you do this:
//image is a UIImage
CGImageRef img = image.CGImage;
CGDataProviderRef dataProvider = CGImageGetDataProvider(img);
CFDataRef cfData = CGDataProviderCopyData(dataProvider);
void * dataPtr = (void*)CFDataGetBytePtr(cfData);
Next, you construct the vImage_Buffer that you will pass to the function
vImage_Buffer inBuffer, outBuffer;
inBuffer.data = dataPtr;
inBuffer.width = CGImageGetWidth(img);
inBuffer.height = CGImageGetHeight(img);
inBuffer.rowBytes = CGImageGetBytesPerRow(img);
Allocate the outBuffer as well
outBuffer.data = malloc(inBuffer.height * inBuffer.rowBytes);
// Set outBuffer's width, height, and rowBytes equal to inBuffer's here
Now we create the kernel, the same one in your example, which is a 3x3 matrix.
Multiply the values by a divisor if they are floats (they need to be int):
int divisor = 1000;
CGSize kernelSize = CGSizeMake(3, 3);
int16_t *kernel = (int16_t *)malloc(sizeof(int16_t) * 3 * 3);
// Assign the emboss kernel values here:
// {-2, -2, 0, -2, 6, 0, 0, 0, 0}, each multiplied by the divisor (1000)
Now perform the convolution on the image!
// Use transparent black as the background color
Pixel_8888 temp = { 0, 0, 0, 0 };
vImageConvolve_ARGB8888(&inBuffer, &outBuffer, NULL, 0, 0, kernel, kernelSize.width, kernelSize.height, divisor, temp, kvImageBackgroundColorFill);
Now construct a new UIImage out of outBuffer and you're done!
Remember to free the kernel and the outBuffer data.
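For completeness, a sketch of that last step, building the UIImage from outBuffer with Core Graphics; the bitmap info here assumes 8-bit premultiplied RGBA matching the source image, so adjust it to whatever CGImageGetBitmapInfo(img) reports:

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(outBuffer.data,
                                         outBuffer.width, outBuffer.height, 8,
                                         outBuffer.rowBytes, colorSpace,
                                         kCGImageAlphaPremultipliedLast); // assumption: match the source's layout
CGImageRef resultRef = CGBitmapContextCreateImage(ctx);
UIImage *result = [UIImage imageWithCGImage:resultRef];
CGImageRelease(resultRef);
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);
free(outBuffer.data);
free(kernel);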
This is the way I am using it to process frames read from a video with AVAssetReader. This is a blur, but you can change the kernel to suit your needs. 'imageData' can of course be obtained by other means, e.g. from a UIImage.
CMSampleBufferRef sampleBuffer = [asset_reader_output copyNextSampleBuffer];
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
void *imageData = CVPixelBufferGetBaseAddress(imageBuffer);
int16_t kernel[9];
for (int i = 0; i < 9; i++) {
    kernel[i] = 1;
}
kernel[4] = 2;
// currSize is assumed to be width * height of the frame here
unsigned char *newData = (unsigned char *)malloc(4 * currSize);
vImage_Buffer inBuff = { imageData, height, width, 4 * width };
vImage_Buffer outBuff = { newData, height, width, 4 * width };
vImage_Error err = vImageConvolve_ARGB8888(&inBuff, &outBuff, NULL, 0, 0, kernel, 3, 3, 10, nil, kvImageEdgeExtend);
if (err != kvImageNoError) NSLog(@"convolve error %ld", err);
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
// newData holds the processed image; free it (and CFRelease the sample buffer) when done
