Compute the histogram of an image using vImageHistogramCalculation - ios

I'm trying to compute the histogram of an image using vImage's vImageHistogramCalculation_ARGBFFFF, but I'm getting a vImage_Error of type kvImageNullPointerArgument (error code -21772).
Here's my code:
- (void)histogramForImage:(UIImage *)image {
    // Set up inBuffer
    vImage_Buffer inBuffer;

    // Get the CGImage from the UIImage
    CGImageRef img = image.CGImage;

    // Create a vImage_Buffer with the data from the CGImageRef
    CGDataProviderRef inProvider = CGImageGetDataProvider(img);
    CFDataRef inBitmapData = CGDataProviderCopyData(inProvider);

    // The next three lines set up the inBuffer object
    inBuffer.width = CGImageGetWidth(img);
    inBuffer.height = CGImageGetHeight(img);
    inBuffer.rowBytes = CGImageGetBytesPerRow(img);

    // This sets the pointer to the data for the inBuffer object
    inBuffer.data = (void *)CFDataGetBytePtr(inBitmapData);

    // Prepare the parameters to pass to vImageHistogramCalculation_ARGBFFFF
    vImagePixelCount *histogram[4] = {0};
    unsigned int histogram_entries = 4;
    Pixel_F minVal = 0;
    Pixel_F maxVal = 255;
    vImage_Flags flags = kvImageNoFlags;

    vImage_Error error = vImageHistogramCalculation_ARGBFFFF(&inBuffer,
                                                             histogram,
                                                             histogram_entries,
                                                             minVal,
                                                             maxVal,
                                                             flags);
    if (error) {
        NSLog(@"error %ld", error);
    }

    // Clean up
    CGDataProviderRelease(inProvider);
}
I suspect it has something to do with my histogram parameter, which, according to the docs, is supposed to be "a pointer to an array of four histograms". Am I declaring it correctly?
Thanks.

The trouble is that you’re not allocating space to hold the computed histograms. If you are only using the histograms locally, you can put them on the stack like so [note that I’m using eight bins instead of four, to make the example more clear]:
// Create an array of four histograms with eight entries each.
vImagePixelCount histogram[4][8] = {{0}};
// vImageHistogramCalculation requires an array of pointers to the histograms.
vImagePixelCount *histogramPointers[4] = {
    &histogram[0][0], &histogram[1][0], &histogram[2][0], &histogram[3][0]
};
vImage_Error error = vImageHistogramCalculation_ARGBFFFF(&inBuffer, histogramPointers, 8, 0, 255, kvImageNoFlags);
// You can now access bin j of the histogram for channel i as histogram[i][j].
// The storage for the histogram will be cleaned up when execution leaves the
// current lexical block.
If you need the histograms to stick around outside the scope of your function, you’ll need to allocate space for them on the heap instead:
vImagePixelCount *histogram[4];
unsigned int histogramEntries = 8;
histogram[0] = malloc(4 * histogramEntries * sizeof histogram[0][0]);
if (!histogram[0]) { /* handle the error however is appropriate */ }
for (int i = 1; i < 4; ++i) { histogram[i] = &histogram[0][i*histogramEntries]; }
vImage_Error error = vImageHistogramCalculation_ARGBFFFF(&inBuffer, histogram, 8, 0, 255, kvImageNoFlags);
// You can now access bin j of the histogram for channel i as histogram[i][j].
// Eventually you will need to free(histogram[0]) to release the storage.
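For example, once the call succeeds you can walk the bins and then release the storage; a minimal usage sketch for the heap-allocated variant above (the four histograms follow the buffer's channel order, A, R, G, B for an ARGB buffer):
const char *channelNames[4] = { "A", "R", "G", "B" };
for (int i = 0; i < 4; ++i) {
    for (unsigned int j = 0; j < histogramEntries; ++j) {
        printf("%s bin %u: %lu\n", channelNames[i], j, (unsigned long)histogram[i][j]);
    }
}
free(histogram[0]);   // one free releases all four histograms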
Hope this helps.

Related

Edit an RGB colorspace image with HSL conversion failed

I'm making an app to edit an image's HSL colorspace via opencv2 and some conversion code from the Internet.
I assume the original image's color space is RGB, so here is my plan:
Convert the UIImage to a cv::Mat.
Convert the colorspace from BGR to HLS.
Loop through all the pixel points to get the corresponding HLS values.
Apply custom algorithms.
Write the changed HLS values back to the cv::Mat.
Convert the cv::Mat to UIImage.
Here is my code:
Conversion between UIImage and cvMat
Reference: https://stackoverflow.com/a/10254561/1677041
#import <UIKit/UIKit.h>
#import <opencv2/core/core.hpp>
UIImage *UIImageFromCVMat(cv::Mat cvMat)
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];
    CGColorSpaceRef colorSpace;
    CGBitmapInfo bitmapInfo;

    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
        bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrderDefault;
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
#if 0
        // OpenCV defaults to either BGR or ABGR. In CoreGraphics land,
        // this means using the "32Little" byte order, and potentially
        // skipping the first pixel. These may need to be adjusted if the
        // input matrix uses a different pixel format.
        bitmapInfo = kCGBitmapByteOrder32Little | (
            cvMat.elemSize() == 3 ? kCGImageAlphaNone : kCGImageAlphaNoneSkipFirst
        );
#else
        bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrderDefault;
#endif
    }

    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);

    // Creating CGImage from cv::Mat
    CGImageRef imageRef = CGImageCreate(
        cvMat.cols,                 // width
        cvMat.rows,                 // height
        8,                          // bits per component
        8 * cvMat.elemSize(),       // bits per pixel
        cvMat.step[0],              // bytesPerRow
        colorSpace,                 // colorspace
        bitmapInfo,                 // bitmap info
        provider,                   // CGDataProviderRef
        NULL,                       // decode
        false,                      // should interpolate
        kCGRenderingIntentDefault   // intent
    );

    // Getting UIImage from CGImage
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);

    return finalImage;
}
cv::Mat cvMatWithImage(UIImage *image)
{
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
    size_t numberOfComponents = CGColorSpaceGetNumberOfComponents(colorSpace);
    CGFloat cols = image.size.width;
    CGFloat rows = image.size.height;

    cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels
    CGBitmapInfo bitmapInfo = kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault;

    // Check whether the UIImage is greyscale already
    if (numberOfComponents == 1) {
        cvMat = cv::Mat(rows, cols, CV_8UC1); // 8 bits per component, 1 channel
        bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrderDefault;
    }

    CGContextRef contextRef = CGBitmapContextCreate(
        cvMat.data,     // Pointer to backing data
        cols,           // Width of bitmap
        rows,           // Height of bitmap
        8,              // Bits per component
        cvMat.step[0],  // Bytes per row
        colorSpace,     // Colorspace
        bitmapInfo      // Bitmap info flags
    );

    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
    CGContextRelease(contextRef);

    return cvMat;
}
I tested these two functions alone and confirmed that they work.
The core operations for the conversion:
/// Generate a new image based on specified HSL value changes.
/// @param h_delta h value in [-360, 360]
/// @param s_delta s value in [-100, 100]
/// @param l_delta l value in [-100, 100]
- (void)adjustImageWithH:(CGFloat)h_delta S:(CGFloat)s_delta L:(CGFloat)l_delta completion:(void (^)(UIImage *resultImage))completion
{
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        Mat original = cvMatWithImage(self.originalImage);
        Mat image;

        cvtColor(original, image, COLOR_BGR2HLS);

        // https://docs.opencv.org/2.4/doc/tutorials/core/how_to_scan_images/how_to_scan_images.html#the-efficient-way
        // accept only char type matrices
        CV_Assert(image.depth() == CV_8U);

        int channels = image.channels();
        int nRows = image.rows;
        int nCols = image.cols * channels;

        int y, x;
        for (y = 0; y < nRows; ++y) {
            for (x = 0; x < nCols; ++x) {
                // https://answers.opencv.org/question/30547/need-to-know-the-hsv-value/
                // https://docs.opencv.org/2.4/modules/imgproc/doc/miscellaneous_transformations.html?#cvtcolor
                Vec3b hls = original.at<Vec3b>(y, x);
                uchar h = hls.val[0], l = hls.val[1], s = hls.val[2];

                // h = MAX(0, MIN(360, h + h_delta));
                // s = MAX(0, MIN(100, s + s_delta));
                // l = MAX(0, MIN(100, l + l_delta));

                printf("(%02d, %02d):\tHSL(%d, %d, %d)\n", x, y, h, s, l); // <= Label 1

                original.at<Vec3b>(y, x)[0] = h;
                original.at<Vec3b>(y, x)[1] = l;
                original.at<Vec3b>(y, x)[2] = s;
            }
        }

        cvtColor(image, image, COLOR_HLS2BGR);

        UIImage *resultImage = UIImageFromCVMat(image);
        dispatch_async(dispatch_get_main_queue(), ^{
            if (completion) {
                completion(resultImage);
            }
        });
    });
}
My questions are:
Why are the HLS values outside my expected ranges? They show up in [0, 255] like the RGB range; is that a wrong usage of cvtColor?
Should I use Vec3b within the two for loops, or Vec3i instead?
Is there something wrong with my approach above?
Update:
Vec3b hls = original.at<Vec3b>(y, x);
uchar h = hls.val[0], l = hls.val[1], s = hls.val[2];
// Remap the hls value range to human-readable range (0~360, 0~1.0, 0~1.0).
// https://docs.opencv.org/master/de/d25/imgproc_color_conversions.html
float fh, fl, fs;
fh = h * 2.0;
fl = l / 255.0;
fs = s / 255.0;
fh = MAX(0, MIN(360, fh + h_delta));
fl = MAX(0, MIN(1, fl + l_delta / 100));
fs = MAX(0, MIN(1, fs + s_delta / 100));
// Convert them back
fh /= 2.0;
fl *= 255.0;
fs *= 255.0;
printf("(%02d, %02d):\tHSL(%d, %d, %d)\tHSL2(%.4f, %.4f, %.4f)\n", x, y, h, s, l, fh, fs, fl);
original.at<Vec3b>(y, x)[0] = short(fh);
original.at<Vec3b>(y, x)[1] = short(fl);
original.at<Vec3b>(y, x)[2] = short(fs);
1) Take a look at this, specifically the part about RGB → HLS. When the source image is 8-bit, the values will go from 0-255, but if you use a float image they may have different ranges.
For 8-bit images: V ← 255⋅V, S ← 255⋅S, H ← H/2 (to fit into 0 to 255)
(V should be L; there is a typo in the documentation.)
You can convert the RGB/BGR image to a floating-point image and then you will have the full-range values, i.e. S and L from 0 to 1 and H from 0 to 360.
But you have to be careful when converting it back.
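As a minimal sketch of that floating-point route (assuming original is the 3-channel 8-bit BGR Mat from the question, and y, x are the loop indices as before):
cv::Mat floatImage;
original.convertTo(floatImage, CV_32FC3, 1.0 / 255.0); // rescale 0-255 into 0-1
cv::cvtColor(floatImage, floatImage, cv::COLOR_BGR2HLS);

// Pixels are now Vec3f: H is in [0, 360], L and S are in [0, 1].
cv::Vec3f &hls = floatImage.at<cv::Vec3f>(y, x);
hls[0] = MAX(0, MIN(360, hls[0] + h_delta));
hls[1] = MAX(0, MIN(1, hls[1] + l_delta / 100));
hls[2] = MAX(0, MIN(1, hls[2] + s_delta / 100));

// Convert back: HLS -> BGR, then rescale to 8-bit for display.
cv::cvtColor(floatImage, floatImage, cv::COLOR_HLS2BGR);
floatImage.convertTo(original, CV_8UC3, 255.0);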
2) Vec3b is for unsigned 8-bit images (CV_8U) and Vec3i is for integer images (CV_32S). Knowing this, it depends on what type your image is. Since, as you said, it goes from 0-255, it should be unsigned 8-bit, so you should use Vec3b. If you use the other one, it will read 32 bits per pixel and use that size to calculate positions in the array of pixels, so it may produce out-of-bounds accesses, segmentation faults, or random-looking problems.
If you have any questions, feel free to comment.

Get RGB "CVPixelBuffer" from ARKit

I'm trying to get a CVPixelBuffer in RGB color space from Apple's ARKit. In the func session(_ session: ARSession, didUpdate frame: ARFrame) method of ARSessionDelegate I get an instance of ARFrame. On the page Displaying an AR Experience with Metal I found that this pixel buffer is in the YCbCr (YUV) color space.
I need to convert this to RGB color space (I actually need a CVPixelBuffer, not a UIImage). I've found something about color conversion on iOS, but I was not able to get it working in Swift 3.
There are several ways to do this, depending on what you're after. The best way to do it in realtime (say, to render the buffer to a view) is to use a custom shader to convert the YCbCr CVPixelBuffer to RGB.
Using Metal:
If you make a new project, select "Augmented Reality App," and select "Metal" for the content technology, the project generated will contain the code and shaders necessary to make this conversion.
Using OpenGL:
The GLCameraRipple example from Apple uses an AVCaptureSession to capture the camera, and shows how to map the resulting CVPixelBuffer to GL textures, which are then converted to RGB in shaders (again, provided in the example).
Non Realtime:
The answer to this stackoverflow question addresses converting the buffer to a UIImage, and offers a pretty simple way to do it.
I was also stuck on this question for several days. All of the code snippets I could find on the Internet for converting a CVPixelBuffer to a UIImage were written in Objective-C rather than Swift.
Finally, the following code snippet works perfectly for me, converting a YUV image to either JPG or PNG file format, after which you can write it to a local file in your application:
func pixelBufferToUIImage(pixelBuffer: CVPixelBuffer) -> UIImage {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    let context = CIContext(options: nil)
    let cgImage = context.createCGImage(ciImage, from: ciImage.extent)
    let uiImage = UIImage(cgImage: cgImage!)
    return uiImage
}
The docs explicitly say that you need to access the luma and chroma planes:
ARKit captures pixel buffers in a planar YCbCr format (also known as YUV) format. To render these images on a device display, you'll need to access the luma and chroma planes of the pixel buffer and convert pixel values to an RGB format.
So there's no way to directly get the RGB planes, and you'll have to handle this in your shaders, either in Metal or OpenGL as described by @joshue.
You may want the Accelerate framework's image conversion functions, perhaps a combination of vImageConvert_420Yp8_Cb8_Cr8ToARGB8888 and vImageConvert_ARGB8888toRGB888 (if you don't want the alpha channel). In my experience these work in real time.
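For reference, a rough sketch of that route. Two caveats: ARKit's capturedImage is biplanar full-range 4:2:0, so this sketch uses the biplanar sibling vImageConvert_420Yp8_CbCr8ToARGB8888 rather than the three-plane variant named above, and it assumes pixelBuffer is the frame's capturedImage:
#import <Accelerate/Accelerate.h>

// One-time setup: build the YpCbCr -> ARGB conversion info.
vImage_YpCbCrToARGB info;
vImage_YpCbCrPixelRange pixelRange = { 0, 128, 255, 255, 255, 1, 255, 0 }; // full-range 8-bit
vImageConvert_YpCbCrToARGB_GenerateConversion(kvImage_YpCbCrToARGBMatrix_ITU_R_601_4,
                                              &pixelRange, &info,
                                              kvImage420Yp8_CbCr8, kvImageARGB8888,
                                              kvImageNoFlags);

// Per frame: wrap the two planes and a destination in vImage_Buffers.
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
vImage_Buffer ypPlane = {
    CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0),
    CVPixelBufferGetHeightOfPlane(pixelBuffer, 0),
    CVPixelBufferGetWidthOfPlane(pixelBuffer, 0),
    CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0)
};
vImage_Buffer cbCrPlane = {
    CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1),
    CVPixelBufferGetHeightOfPlane(pixelBuffer, 1),
    CVPixelBufferGetWidthOfPlane(pixelBuffer, 1),
    CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1)
};
vImage_Buffer argb;
vImageBuffer_Init(&argb, ypPlane.height, ypPlane.width, 32, kvImageNoFlags);

uint8_t permuteMap[4] = { 0, 1, 2, 3 }; // identity: keep ARGB order
vImageConvert_420Yp8_CbCr8ToARGB8888(&ypPlane, &cbCrPlane, &argb, &info,
                                     permuteMap, 255, kvImageNoFlags);
CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
// argb.data now holds ARGB8888 pixels; free(argb.data) when finished.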
I struggled a long while with this as well, and ended up writing the following code, which works for me:
// Helper macro to ensure pixel values are bounded between 0 and 255
#define clamp(a) (a > 255 ? 255 : (a < 0 ? 0 : a))

- (void)processImageBuffer:(CVImageBufferRef)imageBuffer
{
    OSType type = CVPixelBufferGetPixelFormatType(imageBuffer);
    if (type == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)
    {
        CVPixelBufferLockBaseAddress(imageBuffer, 0);

        // We know the return format of the base address based on the YpCbCr8BiPlanarFullRange format (as per doc)
        StandardBuffer baseAddress = (StandardBuffer)CVPixelBufferGetBaseAddress(imageBuffer);

        // Get the number of bytes per row for the pixel buffer, plus width and height
        size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
        size_t width = CVPixelBufferGetWidth(imageBuffer);
        size_t height = CVPixelBufferGetHeight(imageBuffer);

        // Get buffer info and planar pixel data
        CVPlanarPixelBufferInfo_YCbCrBiPlanar *bufferInfo = (CVPlanarPixelBufferInfo_YCbCrBiPlanar *)baseAddress;
        uint8_t *cbrBuff = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1);
        // This just moves the pointer past the offset
        baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);

        int bytesPerPixel = 4;
        uint8_t *rgbData = rgbFromYCrCbBiPlanarFullRangeBuffer(baseAddress,
                                                              cbrBuff,
                                                              bufferInfo,
                                                              width,
                                                              height,
                                                              bytesPerRow);
        [self doStuffOnRGBBuffer:rgbData width:width height:height bitsPerComponent:8 bytesPerPixel:bytesPerPixel bytesPerRow:bytesPerRow];
        free(rgbData);

        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    }
    else
    {
        NSLog(@"Unsupported image buffer type");
    }
}
uint8_t *rgbFromYCrCbBiPlanarFullRangeBuffer(uint8_t *inBaseAddress,
                                             uint8_t *cbCrBuffer,
                                             CVPlanarPixelBufferInfo_YCbCrBiPlanar *inBufferInfo,
                                             size_t inputBufferWidth,
                                             size_t inputBufferHeight,
                                             size_t inputBufferBytesPerRow)
{
    int bytesPerPixel = 4;
    NSUInteger yPitch = EndianU32_BtoN(inBufferInfo->componentInfoY.rowBytes);
    uint8_t *rgbBuffer = (uint8_t *)malloc(inputBufferWidth * inputBufferHeight * bytesPerPixel);
    NSUInteger cbCrPitch = EndianU32_BtoN(inBufferInfo->componentInfoCbCr.rowBytes);
    uint8_t *yBuffer = (uint8_t *)inBaseAddress;

    for (int y = 0; y < inputBufferHeight; y++)
    {
        uint8_t *rgbBufferLine = &rgbBuffer[y * inputBufferWidth * bytesPerPixel];
        uint8_t *yBufferLine = &yBuffer[y * yPitch];
        uint8_t *cbCrBufferLine = &cbCrBuffer[(y >> 1) * cbCrPitch];

        for (int x = 0; x < inputBufferWidth; x++)
        {
            int16_t y = yBufferLine[x];
            int16_t cb = cbCrBufferLine[x & ~1] - 128;
            int16_t cr = cbCrBufferLine[x | 1] - 128;

            uint8_t *rgbOutput = &rgbBufferLine[x * bytesPerPixel];

            int16_t r = (int16_t)roundf(y + cr * 1.4);
            int16_t g = (int16_t)roundf(y + cb * -0.343 + cr * -0.711);
            int16_t b = (int16_t)roundf(y + cb * 1.765);

            // ABGR image representation
            rgbOutput[0] = 0xff;
            rgbOutput[1] = clamp(b);
            rgbOutput[2] = clamp(g);
            rgbOutput[3] = clamp(r);
        }
    }

    return rgbBuffer;
}

Xcode CVPixelBuffer shows negative values

I am using Xcode and am currently trying to extract pixel values from the pixel buffer using the following code. However, when I print out the pixel values, some of them are negative. Has anyone encountered this problem before?
Part of the code is below:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef Buffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(Buffer, 0);
    uint8_t *BaseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(Buffer, 0);
    size_t Width = CVPixelBufferGetWidth(Buffer);
    size_t Height = CVPixelBufferGetHeight(Buffer);
    if (BaseAddress)
    {
        IplImage *Temporary = cvCreateImage(cvSize(Width, Height), IPL_DEPTH_8U, 4);
        Temporary->imageData = (char *)BaseAddress;
        for (int i = 0; i < Temporary->width * Temporary->height; ++i) {
            // where I try to print the pixels
            NSLog(@"Pixel value: %d", Temporary->imageData[i]);
        }
    }
}
The issue is that imageData of IplImage is a signed char. Thus, anything greater than 127 will appear as a negative number.
You can simply assign it to an unsigned char and then print that, and you'll see values in the range between 0 and 255, as you probably anticipated:
for (int i = 0; i < Temporary->width * Temporary->height; ++i) {
    unsigned char c = Temporary->imageData[i];
    NSLog(@"Pixel value: %u", c);
}
Or you can print that in hex:
NSLog(@"Pixel value: %02x", c);

How can I obtain all the image pixels from a UIImage object [duplicate]

This question already has answers here:
Get underlying NSData from UIImage
(7 answers)
Closed 8 years ago.
My task is to obtain all the image pixels from a UIImage object and store them in a variable. It is not difficult for me to do that for a colour image:
CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
size_t ele = CGColorSpaceGetNumberOfComponents(colorSpace);
CGFloat cols = image.size.width;
CGFloat rows = image.size.height;

// Create memory for the input image
unsigned char *img_mem;
img_mem = (unsigned char *)malloc(rows * cols * 4);
unsigned char *my_img;
my_img = (unsigned char *)malloc(rows * cols * 3);

CGContextRef contextRef = CGBitmapContextCreate(img_mem,
                                                cols,       // Width of bitmap
                                                rows,       // Height of bitmap
                                                8,          // Bits per component
                                                cols * 4,   // Bytes per row
                                                colorSpace, // Colorspace
                                                kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault); // Bitmap info flags
CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
CGContextRelease(contextRef);
CGColorSpaceRelease(colorSpace);

unsigned int pos_new;
unsigned int pos_old;
for (int i = 0; i < rows; i++)
{
    pos_new = i * cols * 3;
    pos_old = i * cols * 4;
    for (int j = 0; j < cols; j++)
    {
        my_img[j*3 + pos_new]     = img_mem[pos_old + j*4];
        my_img[j*3 + pos_new + 1] = img_mem[pos_old + j*4 + 1];
        my_img[j*3 + pos_new + 2] = img_mem[pos_old + j*4 + 2];
    }
}
free(img_mem);

// All the pixels are now in my_img
free(my_img);
My problem is that the above code works for colour images, but I do not know how to do it for a grayscale image. Any ideas?
The trouble is you've got hard-coded numbers in your code that make assumptions about your input and output image formats. Doing it this way completely depends on the exact format of your greyscale source image, and equally on what format you want the resultant image to be in.
If you are sure the images will always be, say, 8-bit single-channel greyscale, then you could get away with simply removing all occurrences of *4 and *3 in your code and reducing the final inner loop to handle a single channel:
for (int j = 0; j < cols; j++)
{
    my_img[j + pos_new] = img_mem[pos_old + j];
}
But if the output image is going to be 24-bit (as your code seems to imply) then you'll have to leave in all the occurrences of *3, and your inner loop would read:
for (int j = 0; j < cols; j++)
{
    my_img[j*3 + pos_new]     = img_mem[pos_old + j];
    my_img[j*3 + pos_new + 1] = img_mem[pos_old + j];
    my_img[j*3 + pos_new + 2] = img_mem[pos_old + j];
}
This would create greyscale values in 24 bits.
To make it truly flexible you should look at the components of your colorSpace and dynamically code your pixel processing loops based on that, or at least throw some kind of exception or error if the image format is not what your code expects.
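For the greyscale source itself, the missing piece is rendering into a single-channel bitmap context rather than an RGBA one. A minimal sketch, assuming an 8-bit greyscale source and the same image, rows, and cols as in your code:
// Render the UIImage into a one-byte-per-pixel greyscale buffer.
CGColorSpaceRef graySpace = CGColorSpaceCreateDeviceGray();
unsigned char *gray_mem = (unsigned char *)malloc(rows * cols);

CGContextRef grayCtx = CGBitmapContextCreate(gray_mem,
                                             cols, rows,
                                             8,        // bits per component
                                             cols,     // bytes per row: 1 byte per pixel
                                             graySpace,
                                             kCGImageAlphaNone);
CGContextDrawImage(grayCtx, CGRectMake(0, 0, cols, rows), image.CGImage);
CGContextRelease(grayCtx);
CGColorSpaceRelease(graySpace);

// gray_mem[i*cols + j] is now the grey value of pixel (i, j).
free(gray_mem);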
Please refer to the category (UIImage+Pixels) on the link : http://b2cloud.com.au/tutorial/obtaining-pixel-data-from-a-uiimage

How to compile vImage emboss effect sample code?

Here is the code found in the documentation:
int myEmboss(void *inData,
             unsigned int inRowBytes,
             void *outData,
             unsigned int outRowBytes,
             unsigned int height,
             unsigned int width,
             void *kernel,
             unsigned int kernel_height,
             unsigned int kernel_width,
             int divisor,
             vImage_Flags flags) {
    uint_8 kernel = {-2, -2, 0, -2, 6, 0, 0, 0, 0}; // 1
    vImage_Buffer src = { inData, height, width, inRowBytes }; // 2
    vImage_Buffer dest = { outData, height, width, outRowBytes }; // 3
    unsigned char bgColor[4] = { 0, 0, 0, 0 }; // 4
    vImage_Error err; // 5
    err = vImageConvolve_ARGB8888(&src,          // const vImage_Buffer *src
                                  &dest,         // const vImage_Buffer *dest,
                                  NULL,
                                  0,             // unsigned int srcOffsetToROI_X,
                                  0,             // unsigned int srcOffsetToROI_Y,
                                  kernel,        // const signed int *kernel,
                                  kernel_height, // unsigned int
                                  kernel_width,  // unsigned int
                                  divisor,       // int
                                  bgColor,
                                  flags | kvImageBackgroundColorFill
                                  // vImage_Flags flags
                                  );
    return err;
}
Here is the problem: the kernel variable seems to refer to three different types:
void *kernel in the formal parameter list;
an undefined unsigned int type, uint_8 kernel, as a new variable which presumably would shadow the formal parameter;
a const signed int *kernel when calling vImageConvolve_ARGB8888.
Is this actual code? How can I compile this function?
You are correct that that function is pretty messed up. I recommend using the Provide Feedback widget to let Apple know.
I think you should remove the kernel, kernel_width, and kernel_height parameters from the function signature. Those seem to be holdovers from a function that applies a caller-supplied kernel, but this example is about applying an internally-defined kernel.
Fix the declaration of the kernel local variable to make it an array of int16_t (the element type vImageConvolve_ARGB8888 actually takes; note that an unsigned type could not even hold the negative entries), like so:
int16_t kernel[] = {-2, -2, 0, -2, 6, 0, 0, 0, 0}; // 1
Then, at the call to vImageConvolve_ARGB8888(), replace kernel_width and kernel_height by 3. Since the kernel is hard-coded, the dimensions can be as well.
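Putting those fixes together, a compilable version of the sample might look like this; this is a sketch of the repaired function, not Apple's official correction (it assumes ARGB8888 input and keeps the divisor and flags parameters):
#include <Accelerate/Accelerate.h>

int myEmboss(void *inData, unsigned int inRowBytes,
             void *outData, unsigned int outRowBytes,
             unsigned int height, unsigned int width,
             int divisor, vImage_Flags flags)
{
    // Hard-coded 3x3 emboss kernel; int16_t is what vImageConvolve_ARGB8888 expects.
    static const int16_t kernel[9] = { -2, -2, 0, -2, 6, 0, 0, 0, 0 };
    vImage_Buffer src  = { inData,  height, width, inRowBytes };
    vImage_Buffer dest = { outData, height, width, outRowBytes };
    Pixel_8888 bgColor = { 0, 0, 0, 0 };

    vImage_Error err = vImageConvolve_ARGB8888(&src, &dest, NULL, 0, 0,
                                               kernel, 3, 3, divisor, bgColor,
                                               flags | kvImageBackgroundColorFill);
    return (int)err;
}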
The kernel is just the matrix used in the convolution; in mathematical terms, it is the matrix that is convolved with your image to achieve blur, sharpen, emboss, or other effects. This function is just a thin wrapper around the vImage convolution function. To actually perform the convolution you can follow the code below. The code is all hand-typed, so it is not necessarily 100% correct, but it should point you in the right direction.
To use this function, you first need pixel access to your image. Assuming you have a UIImage, you do this:
//image is a UIImage
CGImageRef img = image.CGImage;
CGDataProviderRef dataProvider = CGImageGetDataProvider(img);
CFDataRef cfData = CGDataProviderCopyData(dataProvider);
void * dataPtr = (void*)CFDataGetBytePtr(cfData);
Next, you construct the vImage_Buffer that you will pass to the function
vImage_Buffer inBuffer, outBuffer;
inBuffer.data = dataPtr;
inBuffer.width = CGImageGetWidth(img);
inBuffer.height = CGImageGetHeight(img);
inBuffer.rowBytes = CGImageGetBytesPerRow(img);
Allocate the outBuffer as well:
outBuffer.data = malloc(inBuffer.height * inBuffer.rowBytes);
// Set width, height, and rowBytes equal to inBuffer's
outBuffer.width = inBuffer.width;
outBuffer.height = inBuffer.height;
outBuffer.rowBytes = inBuffer.rowBytes;
Now we create the kernel, the same one in your example, which is a 3x3 matrix. Multiply the values by the divisor if they are float (they need to be int):
int divisor = 1000;
CGSize kernelSize = CGSizeMake(3, 3);
int16_t *kernel = (int16_t *)malloc(sizeof(int16_t) * 3 * 3);
// Assign the emboss kernel values {-2, -2, 0, -2, 6, 0, 0, 0, 0}, scaled by the divisor
const int16_t embossValues[9] = { -2, -2, 0, -2, 6, 0, 0, 0, 0 };
for (int i = 0; i < 9; i++) { kernel[i] = embossValues[i] * 1000; }
Now perform the convolution on the image!
// Use a background of transparent black as temp
Pixel_8888 temp = { 0, 0, 0, 0 };
vImageConvolve_ARGB8888(&inBuffer, &outBuffer, NULL, 0, 0, kernel, kernelSize.height, kernelSize.width, divisor, temp, kvImageBackgroundColorFill);
Now construct a new UIImage out of outBuffer and you're done!
Remember to free the kernel and the outBuffer data.
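That final step might look like this (a sketch; the alpha-first bitmap flag assumes the ARGB8888 layout used above, and cfData is from the pixel-access snippet earlier):
// Wrap outBuffer's pixels in a CGImage, then a UIImage.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(outBuffer.data,
                                         outBuffer.width,
                                         outBuffer.height,
                                         8,
                                         outBuffer.rowBytes,
                                         colorSpace,
                                         kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrderDefault);
CGImageRef embossedImg = CGBitmapContextCreateImage(ctx);
UIImage *result = [UIImage imageWithCGImage:embossedImg];

// Clean up everything allocated along the way.
CGImageRelease(embossedImg);
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);
free(kernel);
free(outBuffer.data);
CFRelease(cfData);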
This is the way I am using it to process frames read from a video with AVAssetReader. This one is a blur, but you can change the kernel to suit your needs. imageData can of course be obtained by other means, e.g. from a UIImage.
CMSampleBufferRef sampleBuffer = [asset_reader_output copyNextSampleBuffer];
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
void *imageData = CVPixelBufferGetBaseAddress(imageBuffer);

int16_t kernel[9];
for (int i = 0; i < 9; i++) {
    kernel[i] = 1;
}
kernel[4] = 2;

unsigned char *newData = (unsigned char *)malloc(4 * currSize);
vImage_Buffer inBuff = { imageData, height, width, 4 * width };
vImage_Buffer outBuff = { newData, height, width, 4 * width };

vImage_Error err = vImageConvolve_ARGB8888(&inBuff, &outBuff, NULL, 0, 0, kernel, 3, 3, 10, nil, kvImageEdgeExtend);
if (err != kvImageNoError) NSLog(@"convolve error %ld", err);

CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
// newData holds the processed image
