Face detection using OpenCV: application crashes in CGContextDrawImage (EXC_BAD_ACCESS) - ios

I'm using OpenCV in my Swift project to detect faces in pictures from the iPhone's gallery.
When I try to process a UIImage picked from the gallery, I first do this:
+ (UIImage *)processImageWithOpenCV:(UIImage *)inputImage {
    ...
    cv::Mat img = [inputImage CVGrayscaleMat];
    ...
}
Here is my CVGrayscaleMat method :
- (cv::Mat)CVGrayscaleMat
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGFloat cols = self.size.width;
    CGFloat rows = self.size.height;
    cv::Mat cvMat((int)rows, (int)cols, CV_8UC1);               // 8 bits per component, 1 channel
    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data, // Pointer to data
                                                    (int)cols,  // Width of bitmap
                                                    (int)rows,  // Height of bitmap
                                                    8,          // Bits per component
                                                    4896,       // Bytes per row
                                                    colorSpace, // Colorspace
                                                    kCGImageAlphaNone |
                                                    kCGBitmapByteOrderDefault); // Bitmap info flags
    CGContextDrawImage(contextRef, CGRectMake(0, 0, (int)cols, (int)rows), self.CGImage);
    CGContextRelease(contextRef);
    CGColorSpaceRelease(colorSpace);
    return cvMat;
}
When I try to process a picture, I get an EXC_BAD_ACCESS on this line:
CGContextDrawImage(contextRef, CGRectMake(0, 0, (int)cols, (int)rows), self.CGImage);
I'm still not very familiar with OpenCV... Any idea?
Thanks in advance!
SOLVED:
Using the standard function - (cv::Mat)cvMatFromUIImage:(UIImage *)image
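For reference, a minimal grayscale sketch of that helper; the key difference from the code above is that the bytes-per-row value comes from the Mat itself (cvMat.step[0]) rather than the hard-coded 4896, which is most likely what made CGContextDrawImage write past the Mat's buffer:

- (cv::Mat)cvMatGrayFromUIImage:(UIImage *)image
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGFloat cols = image.size.width;
    CGFloat rows = image.size.height;
    cv::Mat cvMat((int)rows, (int)cols, CV_8UC1);               // 8 bits per component, 1 channel
    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data, // Pointer to data
                                                    (int)cols,  // Width of bitmap
                                                    (int)rows,  // Height of bitmap
                                                    8,          // Bits per component
                                                    cvMat.step[0], // Bytes per row, taken from the Mat
                                                    colorSpace, // Colorspace
                                                    kCGImageAlphaNone |
                                                    kCGBitmapByteOrderDefault); // Bitmap info flags
    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
    CGContextRelease(contextRef);
    CGColorSpaceRelease(colorSpace);
    return cvMat;
}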

Related

Editing an RGB colorspace image via HSL conversion fails

I'm making an app to edit an image's HSL colorspace via opencv2 and some conversion code from the Internet.
I assume the original image's color space is RGB, so here is my plan:
1. Convert the UIImage to a cv::Mat.
2. Convert the color space from BGR to HLS.
3. Loop through all the pixels to get the corresponding HLS values.
4. Apply my custom algorithms.
5. Write the changed HLS values back to the cv::Mat.
6. Convert the cv::Mat back to a UIImage.
Here is my code:
Conversion between UIImage and cvMat
Reference: https://stackoverflow.com/a/10254561/1677041
#import <UIKit/UIKit.h>
#import <opencv2/core/core.hpp>
UIImage *UIImageFromCVMat(cv::Mat cvMat)
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];
    CGColorSpaceRef colorSpace;
    CGBitmapInfo bitmapInfo;
    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
        bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrderDefault;
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
#if 0
        // OpenCV defaults to either BGR or ABGR. In CoreGraphics land,
        // this means using the "32Little" byte order, and potentially
        // skipping the first pixel. These may need to be adjusted if the
        // input matrix uses a different pixel format.
        bitmapInfo = kCGBitmapByteOrder32Little | (
            cvMat.elemSize() == 3 ? kCGImageAlphaNone : kCGImageAlphaNoneSkipFirst
        );
#else
        bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrderDefault;
#endif
    }
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    // Creating CGImage from cv::Mat
    CGImageRef imageRef = CGImageCreate(
        cvMat.cols,               // width
        cvMat.rows,               // height
        8,                        // bits per component
        8 * cvMat.elemSize(),     // bits per pixel
        cvMat.step[0],            // bytesPerRow
        colorSpace,               // colorspace
        bitmapInfo,               // bitmap info
        provider,                 // CGDataProviderRef
        NULL,                     // decode
        false,                    // should interpolate
        kCGRenderingIntentDefault // intent
    );
    // Getting UIImage from CGImage
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return finalImage;
}
cv::Mat cvMatWithImage(UIImage *image)
{
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
    size_t numberOfComponents = CGColorSpaceGetNumberOfComponents(colorSpace);
    CGFloat cols = image.size.width;
    CGFloat rows = image.size.height;
    cv::Mat cvMat(rows, cols, CV_8UC4);   // 8 bits per component, 4 channels
    CGBitmapInfo bitmapInfo = kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault;
    // check whether the UIImage is greyscale already
    if (numberOfComponents == 1) {
        cvMat = cv::Mat(rows, cols, CV_8UC1);   // 8 bits per component, 1 channel
        bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrderDefault;
    }
    CGContextRef contextRef = CGBitmapContextCreate(
        cvMat.data,      // Pointer to backing data
        cols,            // Width of bitmap
        rows,            // Height of bitmap
        8,               // Bits per component
        cvMat.step[0],   // Bytes per row
        colorSpace,      // Colorspace
        bitmapInfo       // Bitmap info flags
    );
    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
    CGContextRelease(contextRef);
    return cvMat;
}
I tested these two functions on their own and confirmed that they work.
Core conversion code:
/// Generate a new image based on specified HSL value changes.
/// @param h_delta h value in [-360, 360]
/// @param s_delta s value in [-100, 100]
/// @param l_delta l value in [-100, 100]
- (void)adjustImageWithH:(CGFloat)h_delta S:(CGFloat)s_delta L:(CGFloat)l_delta completion:(void (^)(UIImage *resultImage))completion
{
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        Mat original = cvMatWithImage(self.originalImage);
        Mat image;
        cvtColor(original, image, COLOR_BGR2HLS);
        // https://docs.opencv.org/2.4/doc/tutorials/core/how_to_scan_images/how_to_scan_images.html#the-efficient-way
        // accept only char type matrices
        CV_Assert(image.depth() == CV_8U);
        int channels = image.channels();
        int nRows = image.rows;
        int nCols = image.cols * channels;
        int y, x;
        for (y = 0; y < nRows; ++y) {
            for (x = 0; x < nCols; ++x) {
                // https://answers.opencv.org/question/30547/need-to-know-the-hsv-value/
                // https://docs.opencv.org/2.4/modules/imgproc/doc/miscellaneous_transformations.html?#cvtcolor
                Vec3b hls = original.at<Vec3b>(y, x);
                uchar h = hls.val[0], l = hls.val[1], s = hls.val[2];
                // h = MAX(0, MIN(360, h + h_delta));
                // s = MAX(0, MIN(100, s + s_delta));
                // l = MAX(0, MIN(100, l + l_delta));
                printf("(%02d, %02d):\tHSL(%d, %d, %d)\n", x, y, h, s, l); // <= Label 1
                original.at<Vec3b>(y, x)[0] = h;
                original.at<Vec3b>(y, x)[1] = l;
                original.at<Vec3b>(y, x)[2] = s;
            }
        }
        cvtColor(image, image, COLOR_HLS2BGR);
        UIImage *resultImage = UIImageFromCVMat(image);
        dispatch_async(dispatch_get_main_queue(), ^{
            if (completion) {
                completion(resultImage);
            }
        });
    });
}
The questions are:
Why are the HLS values outside my expected ranges? They show up as [0, 255], like an RGB range; am I using cvtColor incorrectly?
Should I use Vec3b inside the two for loops, or Vec3i instead?
Is there something wrong with my approach above?
Update:
Vec3b hls = original.at<Vec3b>(y, x);
uchar h = hls.val[0], l = hls.val[1], s = hls.val[2];
// Remap the hls value range to human-readable range (0~360, 0~1.0, 0~1.0).
// https://docs.opencv.org/master/de/d25/imgproc_color_conversions.html
float fh, fl, fs;
fh = h * 2.0;
fl = l / 255.0;
fs = s / 255.0;
fh = MAX(0, MIN(360, fh + h_delta));
fl = MAX(0, MIN(1, fl + l_delta / 100));
fs = MAX(0, MIN(1, fs + s_delta / 100));
// Convert them back
fh /= 2.0;
fl *= 255.0;
fs *= 255.0;
printf("(%02d, %02d):\tHSL(%d, %d, %d)\tHSL2(%.4f, %.4f, %.4f)\n", x, y, h, s, l, fh, fs, fl);
original.at<Vec3b>(y, x)[0] = short(fh);
original.at<Vec3b>(y, x)[1] = short(fl);
original.at<Vec3b>(y, x)[2] = short(fs);
1) Take a look at this, specifically the RGB->HLS part. When the source image is 8-bit, the values go from 0-255, but if you use a float image they have different ranges:
8-bit images: V←255⋅V, S←255⋅S, H←H/2 (to fit into 0 to 255)
(V should be L; there is a typo in the documentation.)
You can convert the RGB/BGR image to a floating-point image and then you get the full ranges, i.e. S and L from 0 to 1 and H from 0 to 360.
But you have to be careful converting it back.
2) Vec3b is for unsigned 8-bit images (CV_8U) and Vec3i for integer images (CV_32S). Which one to use depends on the type of your image. Since, as you said, the values go from 0-255, your image is unsigned 8-bit, so you should use Vec3b. If you use the other one, the access assumes 32 bits per element and uses that size to compute the position in the pixel array, so you may get out-of-bounds reads, segmentation faults, or random garbage.
If you have any questions, feel free to comment.
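For example, a minimal sketch of that floating-point route, reusing the deltas from the question (bgr here stands in for an 8-bit, 3-channel BGR Mat):

// Work in floating point so HLS keeps its natural ranges.
cv::Mat bgrFloat;
bgr.convertTo(bgrFloat, CV_32FC3, 1.0 / 255.0);      // scale to [0, 1]
cv::Mat hls;
cv::cvtColor(bgrFloat, hls, cv::COLOR_BGR2HLS);      // H in [0, 360], L and S in [0, 1]
for (int y = 0; y < hls.rows; ++y) {
    for (int x = 0; x < hls.cols; ++x) {             // note: cols, not cols * channels
        cv::Vec3f &px = hls.at<cv::Vec3f>(y, x);
        px[0] = MAX(0, MIN(360, px[0] + h_delta));       // hue in degrees
        px[1] = MAX(0, MIN(1, px[1] + l_delta / 100));   // lightness
        px[2] = MAX(0, MIN(1, px[2] + s_delta / 100));   // saturation
    }
}
cv::cvtColor(hls, bgrFloat, cv::COLOR_HLS2BGR);      // back to BGR, still float
bgrFloat.convertTo(bgr, CV_8UC3, 255.0);             // and back to 8-bit for display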

iOS GPUImage - how to replace all pixels in an image which have R, G and B values over a certain value?

Please bear with me as I am pretty new to iOS and GPUImage too. I am trying to figure out how to replace all pixels in an image which have R, G and B values over a certain value with another R,G,B value.
So let's say I need to replace all pixels which have an:
R value of over 20 with 100,
G value of over 100 with 200,
B value of over 40 with 0
I am able to do this the old school way but it is very slow and I need to do it as fast as possible:
- (CGImageRef)processImageOldSchoolWay:(CGImageRef)image
{
    uint8_t *bufptr;
    int width = (int)CGImageGetWidth( image );
    int height = (int)CGImageGetHeight( image );
    // allocate memory for pixels
    uint32_t *pixels = calloc( width * height, sizeof(uint32_t) );
    // create a context with RGBA pixels
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate( pixels, width, height, 8, width * sizeof(uint32_t), colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast );
    // draw the image into the context
    CGContextDrawImage( context, CGRectMake( 0, 0, width, height ), image );
    // manipulate the pixels
    bufptr = (uint8_t *)pixels;
    for (NSInteger y = 0; y < height; y++) {
        for (NSInteger x = 0; x < width; x++)
        {
            if (bufptr[3] >= (int)self.redslider.value) {
                bufptr[3] = 100;
            }
            if (bufptr[2] >= (int)self.greenslider.value) {
                bufptr[2] = 200;
            }
            if (bufptr[1] >= (int)self.blueslider.value) {
                bufptr[1] = 0;
            }
            bufptr += 4;
        }
    }
    // create a new CGImage from the context with modified pixels
    CGImageRef resultImage = CGBitmapContextCreateImage(context);
    // release resources to free up memory
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(pixels);
    return resultImage;
}
So I came across GPUImage, which seems to have VERY good performance. I think I need to use GPUImageColorMatrixFilter, but I can't seem to understand how exactly the colorMatrix is formed.
For example, for sepia, I was able to get this to work by looking at examples:
GPUImagePicture *gpuImage = [[GPUImagePicture alloc] initWithImage:image];
GPUImageColorMatrixFilter *conversionFilter = [[GPUImageColorMatrixFilter alloc] init];
conversionFilter.colorMatrix = (GPUMatrix4x4){
    {0.3588, 0.7044, 0.1368, 0.0},
    {0.2990, 0.5870, 0.1140, 0.0},
    {0.2392, 0.4696, 0.0912, 0.0},
    {0.0,    0.0,    0.0,    1.0},
};
[gpuImage addTarget:conversionFilter];
[conversionFilter useNextFrameForImageCapture];
[gpuImage processImage];
UIImage *toReturn = [conversionFilter imageFromCurrentFramebufferWithOrientation:0];
return toReturn;
I can't seem to understand how the conversionFilter.colorMatrix line is formed. Why were those values used in that matrix? How can I get the values for my case, where I need to replace values over a threshold with other values?
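For what it's worth, a color matrix is a per-pixel linear transform: each output channel is a weighted sum of the input R, G, B and A values (the rows of the sepia matrix above are the usual sepia luminance weights, so every output channel becomes a slightly different weighted average of the input). A threshold-style replacement ("R over 20 becomes 100") is a conditional rather than a weighted sum, so it likely cannot be expressed with GPUImageColorMatrixFilter alone and would need a custom fragment shader instead. A minimal CPU-side illustration of what a 4x4 color matrix computes (the helper type and function below are hypothetical, not GPUImage API):

// Illustration only: apply a 4x4 color matrix to a single RGBA pixel.
// Each output channel is a dot product of one matrix row with the input color.
typedef struct { CGFloat r, g, b, a; } RGBAColor;

static RGBAColor applyColorMatrix(const CGFloat m[4][4], RGBAColor in)
{
    RGBAColor out;
    out.r = m[0][0] * in.r + m[0][1] * in.g + m[0][2] * in.b + m[0][3] * in.a;
    out.g = m[1][0] * in.r + m[1][1] * in.g + m[1][2] * in.b + m[1][3] * in.a;
    out.b = m[2][0] * in.r + m[2][1] * in.g + m[2][2] * in.b + m[2][3] * in.a;
    out.a = m[3][0] * in.r + m[3][1] * in.g + m[3][2] * in.b + m[3][3] * in.a;
    return out;
}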

How do you find a region of a certain color within an image on iOS?

I'm doing an image processing iOS app, where we have a large image (e.g. 2000x2000). Assume the image is completely black, except that one part of it is a different color (let's say the size of that region is 200x200).
I want to calculate the start and end position of that differently coloured region. How can I achieve this?
Here's a simple way to let the CPU read pixel values from a UIImage. The steps are:
allocate a buffer for the pixels
create a bitmap memory context using the buffer as the backing store
draw the image into the context (writes the pixels into the buffer)
examine the pixels in the buffer
free the buffer and associated resources
- (void)processImage:(UIImage *)input
{
    int width = input.size.width;
    int height = input.size.height;
    // allocate the pixel buffer
    uint32_t *pixelBuffer = calloc( width * height, sizeof(uint32_t) );
    // create a context with RGBA pixels
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate( pixelBuffer, width, height, 8, width * sizeof(uint32_t), colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast );
    // invert the y-axis, so that increasing y is down
    CGContextScaleCTM( context, 1.0, -1.0 );
    CGContextTranslateCTM( context, 0, -height );
    // draw the image into the pixel buffer
    UIGraphicsPushContext( context );
    [input drawAtPoint:CGPointZero];
    UIGraphicsPopContext();
    // scan the image
    int x, y;
    uint8_t r, g, b, a;
    uint8_t *pixel = (uint8_t *)pixelBuffer;
    for ( y = 0; y < height; y++ )
        for ( x = 0; x < width; x++ )
        {
            // with kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast
            // the bytes in memory are ordered A, B, G, R
            a = pixel[0];
            b = pixel[1];
            g = pixel[2];
            r = pixel[3];
            // do something with the pixel value here
            pixel += 4;
        }
    // release the resources
    CGContextRelease( context );
    CGColorSpaceRelease( colorSpace );
    free( pixelBuffer );
}
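For the original question (finding the start and end of the coloured region), the "do something" step could track a bounding box of every non-black pixel, for example:

// Declare these before the scan loop:
int minX = width, minY = height, maxX = -1, maxY = -1;
// ...then, inside the loop, in place of "do something with the pixel value here":
if ( r || g || b )              // any color component set => not black
{
    if ( x < minX ) minX = x;
    if ( x > maxX ) maxX = x;
    if ( y < minY ) minY = y;
    if ( y > maxY ) maxY = y;
}
// After both loops: if maxX >= 0, the coloured region spans
// (minX, minY) to (maxX, maxY).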

iOS: CGBitmapContextCreate error when creating UIImage

I want to get a UIImage from Pix without doing any read/write to file operations. I am getting a CGBitmapContextCreate error.
Here's the Pix structure:
struct Pix
{
    l_uint32            w;          /* width in pixels */
    l_uint32            h;          /* height in pixels */
    l_uint32            d;          /* depth in bits */
    l_uint32            wpl;        /* 32-bit words/line */
    l_uint32            refcount;   /* reference count (1 if no clones) */
    l_int32             xres;       /* image res (ppi) in x direction */
                                    /* (use 0 if unknown) */
    l_int32             yres;       /* image res (ppi) in y direction */
                                    /* (use 0 if unknown) */
    l_int32             informat;   /* input file format, IFF_* */
    char               *text;       /* text string associated with pix */
    struct PixColormap *colormap;   /* colormap (may be null) */
    l_uint32           *data;       /* the image data */
};
typedef struct Pix PIX;
The full documentation can be found here: http://tpgit.github.com/Leptonica/pix_8h_source.html
Here's my attempt at it:
- (UIImage *)getImageFromPix:(PIX *)thePix
{
    CGColorSpaceRef colorSpace;
    uint32_t *bitmapData;
    size_t bitsPerComponent = thePix->d;
    size_t width = thePix->w;
    size_t height = thePix->h;
    size_t bytesPerRow = thePix->wpl * 4;
    colorSpace = CGColorSpaceCreateDeviceGray();
    if (!colorSpace) {
        NSLog(@"Error allocating color space Gray\n");
        return NULL;
    }
    // Allocate memory for image data
    bitmapData = thePix->data;
    if (!bitmapData) {
        NSLog(@"Error allocating memory for bitmap\n");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }
    // Create bitmap context
    CGContextRef context = CGBitmapContextCreate(bitmapData,
                                                 width,
                                                 height,
                                                 bitsPerComponent,
                                                 bytesPerRow,
                                                 colorSpace,
                                                 kCGImageAlphaNone);
    if (!context) {
        free(bitmapData);
        NSLog(@"Bitmap context not created");
    }
    CGColorSpaceRelease(colorSpace);
    CGImageRef imgRef = CGBitmapContextCreateImage(context);
    UIImage *result = [UIImage imageWithCGImage:imgRef];
    CGImageRelease(imgRef);
    CGContextRelease(context);
    return result;
}
Here's the error I am getting:
<Error>: CGBitmapContextCreate: unsupported parameter combination: 1 integer bits/component; 1 bits/pixel; 1-component color space; kCGImageAlphaNone; 16 bytes/row.
CGBitmapContextCreate() does not support every combination of pixel formats, and there may be many Pix formats it does not support. A full list of supported formats can be found in the Quartz 2D Programming Guide; the formats supported on iOS are fairly limited. You may need to convert what you get from Pix into one of the supported formats. That is hard to do in general, but for your specific case, if you always get 1 bit/pixel, 1 bit/component and no alpha channel, it should not be too hard to convert to the gray format of 8 bits/pixel, 8 bits/component, no alpha channel that is supported on iOS.
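A minimal sketch of that 1-bit to 8-bit conversion (the method name is made up, and it assumes the usual Leptonica conventions: 1 bpp pixels packed MSB-first into 32-bit words, wpl words per row, and 1 meaning black):

- (UIImage *)imageFromOneBitPix:(PIX *)thePix
{
    size_t width  = thePix->w;
    size_t height = thePix->h;
    size_t bytesPerRow = width;                         // 8 bpp gray output, one byte per pixel
    uint8_t *gray = malloc(bytesPerRow * height);
    for (size_t y = 0; y < height; y++) {
        l_uint32 *line = thePix->data + y * thePix->wpl;
        for (size_t x = 0; x < width; x++) {
            l_uint32 word = line[x / 32];
            int bit = (word >> (31 - (x % 32))) & 1;    // pixels are packed MSB-first
            gray[y * bytesPerRow + x] = bit ? 0 : 255;  // 1 = black -> 0, 0 = white -> 255
        }
    }
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGContextRef context = CGBitmapContextCreate(gray, width, height, 8,
                                                 bytesPerRow, colorSpace,
                                                 kCGImageAlphaNone);
    CGImageRef imgRef = CGBitmapContextCreateImage(context);
    UIImage *result = [UIImage imageWithCGImage:imgRef];
    CGImageRelease(imgRef);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(gray);
    return result;
}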

iOS: How to retrieve image dimensions X, Y from CGContextRef

As it says on the can; here is an example of why I need it:
Let's say I create a bitmap context:
size_t pixelCount = dest_W * dest_H;
typedef struct {
    uint8_t r, g, b, a;
} RGBA;
// backing bitmap store
RGBA *pixels = calloc( pixelCount, sizeof( RGBA ) );
// create context using above store
CGContextRef X_RGBA;
{
    size_t bitsPerComponent = 8;
    size_t bytesPerRow = dest_W * sizeof( RGBA );
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // create a context with RGBA pixels
    X_RGBA = CGBitmapContextCreate( (void *)pixels, dest_W, dest_H,
                                    bitsPerComponent, bytesPerRow,
                                    colorSpace, kCGImageAlphaNoneSkipLast );
    assert(X_RGBA);
    CGColorSpaceRelease(colorSpace);
}
Now I want to throw this context to a drawing function that will, e.g., draw a circle touching the edges:
Do I really need to throw in the width and height as well? I am 99% sure I have seen some way to extract the width and height from the context, but I can't find it anywhere.
CGBitmapContextGetWidth and CGBitmapContextGetHeight
NB: width and height probably don't make sense for a non-bitmap CGContextRef.
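With the bitmap context created above, that is simply:

size_t w = CGBitmapContextGetWidth(X_RGBA);
size_t h = CGBitmapContextGetHeight(X_RGBA);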
Thanks @wiliz on #iphonedev
