iOS: CGBitmapContextCreate error when creating UIImage

I want to get a UIImage from Pix without doing any read/write to file operations. I am getting a CGBitmapContextCreate error.
Here's the Pix structure:
struct Pix
{
    l_uint32            w;         /* width in pixels                  */
    l_uint32            h;         /* height in pixels                 */
    l_uint32            d;         /* depth in bits                    */
    l_uint32            wpl;       /* 32-bit words/line                */
    l_uint32            refcount;  /* reference count (1 if no clones) */
    l_int32             xres;      /* image res (ppi) in x direction   */
                                   /* (use 0 if unknown)               */
    l_int32             yres;      /* image res (ppi) in y direction   */
                                   /* (use 0 if unknown)               */
    l_int32             informat;  /* input file format, IFF_*         */
    char               *text;      /* text string associated with pix  */
    struct PixColormap *colormap;  /* colormap (may be null)           */
    l_uint32           *data;      /* the image data                   */
};
typedef struct Pix PIX;
The full documentation can be found here: http://tpgit.github.com/Leptonica/pix_8h_source.html
Here's my attempt at it:
- (UIImage *) getImageFromPix:(PIX *) thePix
{
    CGColorSpaceRef colorSpace;
    uint32_t *bitmapData;
    size_t bitsPerComponent = thePix->d;
    size_t width = thePix->w;
    size_t height = thePix->h;
    size_t bytesPerRow = thePix->wpl * 4;
    colorSpace = CGColorSpaceCreateDeviceGray();
    if (!colorSpace) {
        NSLog(@"Error allocating color space Gray\n");
        return NULL;
    }
    // Point at the Pix image data directly (no copy is made)
    bitmapData = thePix->data;
    if (!bitmapData) {
        NSLog(@"Error: no bitmap data\n");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }
    // Create bitmap context
    CGContextRef context = CGBitmapContextCreate(bitmapData,
                                                 width,
                                                 height,
                                                 bitsPerComponent,
                                                 bytesPerRow,
                                                 colorSpace,
                                                 kCGImageAlphaNone);
    if (!context) {
        NSLog(@"Bitmap context not created");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }
    CGColorSpaceRelease(colorSpace);
    CGImageRef imgRef = CGBitmapContextCreateImage(context);
    UIImage * result = [UIImage imageWithCGImage:imgRef];
    CGImageRelease(imgRef);
    CGContextRelease(context);
    return result;
}
Here's the error I am getting:
<Error>: CGBitmapContextCreate: unsupported parameter combination: 1 integer bits/component; 1 bits/pixel; 1-component color space; kCGImageAlphaNone; 16 bytes/row.

CGBitmapContextCreate() does not support every combination of pixel formats, and there may be many Pix formats it does not support. The full list of supported formats is in the Quartz 2D Programming Guide; the formats supported on iOS are fairly limited, so you may need to convert what you get from Pix into one of them. This is hard to do in general, but for your specific case, if you always get 1 bit/pixel, 1 bit/component, no alpha channel, it should not be too hard to convert to the gray format of 8 bits/pixel, 8 bits/component, no alpha channel, which is supported on iOS.
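A minimal sketch of that expansion (untested; it assumes the Pix really is 1 bpp with no colormap, and that Leptonica's usual packing applies: pixels stored MSB-first within each 32-bit word, with a set bit meaning black):
- (UIImage *) getImageFromBinaryPix:(PIX *) thePix
{
    size_t width = thePix->w;
    size_t height = thePix->h;
    // Expand each 1-bit pixel into an 8-bit gray value
    uint8_t *gray = malloc(width * height);
    if (!gray) return NULL;
    for (size_t y = 0; y < height; y++) {
        l_uint32 *line = thePix->data + y * thePix->wpl;
        for (size_t x = 0; x < width; x++) {
            // Leptonica packs pixels MSB-first within 32-bit words;
            // a set bit is black (0), a clear bit is white (255)
            l_uint32 word = line[x >> 5];
            gray[y * width + x] = ((word >> (31 - (x & 31))) & 1) ? 0 : 255;
        }
    }
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGContextRef context = CGBitmapContextCreate(gray, width, height,
                                                 8,      // bits per component
                                                 width,  // bytes per row
                                                 colorSpace,
                                                 kCGImageAlphaNone);
    CGColorSpaceRelease(colorSpace);
    if (!context) {
        free(gray);
        return NULL;
    }
    CGImageRef imgRef = CGBitmapContextCreateImage(context);
    UIImage *result = [UIImage imageWithCGImage:imgRef];
    CGImageRelease(imgRef);
    CGContextRelease(context);
    free(gray);
    return result;
}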

Related

Edit an RGB colorspace image with HSL conversion failed

I'm making an app to edit an image's HSL colorspace via opencv2 and some conversion code from the Internet.
I assume the original image's color space is RGB, so here is my plan:
Convert the UIImage to cvMat.
Convert the colorspace from BGR to HLS.
Loop through all the pixel points to get the corresponding HLS values.
Apply my custom algorithms.
Write the changed HLS values back to the cvMat.
Convert the cvMat back to UIImage.
Here is my code:
Conversion between UIImage and cvMat
Reference: https://stackoverflow.com/a/10254561/1677041
#import <UIKit/UIKit.h>
#import <opencv2/core/core.hpp>

UIImage *UIImageFromCVMat(cv::Mat cvMat)
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];
    CGColorSpaceRef colorSpace;
    CGBitmapInfo bitmapInfo;
    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
        bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrderDefault;
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
#if 0
        // OpenCV defaults to either BGR or ABGR. In CoreGraphics land,
        // this means using the "32Little" byte order, and potentially
        // skipping the first pixel. These may need to be adjusted if the
        // input matrix uses a different pixel format.
        bitmapInfo = kCGBitmapByteOrder32Little | (
            cvMat.elemSize() == 3 ? kCGImageAlphaNone : kCGImageAlphaNoneSkipFirst
        );
#else
        bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrderDefault;
#endif
    }
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    // Creating CGImage from cv::Mat
    CGImageRef imageRef = CGImageCreate(
        cvMat.cols,                 // width
        cvMat.rows,                 // height
        8,                          // bits per component
        8 * cvMat.elemSize(),       // bits per pixel
        cvMat.step[0],              // bytesPerRow
        colorSpace,                 // colorspace
        bitmapInfo,                 // bitmap info
        provider,                   // CGDataProviderRef
        NULL,                       // decode
        false,                      // should interpolate
        kCGRenderingIntentDefault   // intent
    );
    // Getting UIImage from CGImage
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return finalImage;
}
cv::Mat cvMatWithImage(UIImage *image)
{
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
    size_t numberOfComponents = CGColorSpaceGetNumberOfComponents(colorSpace);
    CGFloat cols = image.size.width;
    CGFloat rows = image.size.height;
    cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels
    CGBitmapInfo bitmapInfo = kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault;
    // Check whether the UIImage is greyscale already
    if (numberOfComponents == 1) {
        cvMat = cv::Mat(rows, cols, CV_8UC1); // 8 bits per component, 1 channel
        bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrderDefault;
    }
    CGContextRef contextRef = CGBitmapContextCreate(
        cvMat.data,     // Pointer to backing data
        cols,           // Width of bitmap
        rows,           // Height of bitmap
        8,              // Bits per component
        cvMat.step[0],  // Bytes per row
        colorSpace,     // Colorspace
        bitmapInfo      // Bitmap info flags
    );
    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
    CGContextRelease(contextRef);
    return cvMat;
}
I tested these two functions on their own and confirmed that they work.
The core conversion code:
/// Generate a new image based on specified HSL value changes.
/// @param h_delta h value in [-360, 360]
/// @param s_delta s value in [-100, 100]
/// @param l_delta l value in [-100, 100]
- (void)adjustImageWithH:(CGFloat)h_delta S:(CGFloat)s_delta L:(CGFloat)l_delta completion:(void (^)(UIImage *resultImage))completion
{
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        Mat original = cvMatWithImage(self.originalImage);
        Mat image;
        cvtColor(original, image, COLOR_BGR2HLS);
        // https://docs.opencv.org/2.4/doc/tutorials/core/how_to_scan_images/how_to_scan_images.html#the-efficient-way
        // accept only char type matrices
        CV_Assert(image.depth() == CV_8U);
        int channels = image.channels();
        int nRows = image.rows;
        int nCols = image.cols * channels;
        int y, x;
        for (y = 0; y < nRows; ++y) {
            for (x = 0; x < nCols; ++x) {
                // https://answers.opencv.org/question/30547/need-to-know-the-hsv-value/
                // https://docs.opencv.org/2.4/modules/imgproc/doc/miscellaneous_transformations.html?#cvtcolor
                Vec3b hls = original.at<Vec3b>(y, x);
                uchar h = hls.val[0], l = hls.val[1], s = hls.val[2];
                // h = MAX(0, MIN(360, h + h_delta));
                // s = MAX(0, MIN(100, s + s_delta));
                // l = MAX(0, MIN(100, l + l_delta));
                printf("(%02d, %02d):\tHSL(%d, %d, %d)\n", x, y, h, s, l); // <= Label 1
                original.at<Vec3b>(y, x)[0] = h;
                original.at<Vec3b>(y, x)[1] = l;
                original.at<Vec3b>(y, x)[2] = s;
            }
        }
        cvtColor(image, image, COLOR_HLS2BGR);
        UIImage *resultImage = UIImageFromCVMat(image);
        dispatch_async(dispatch_get_main_queue(), ^{
            if (completion) {
                completion(resultImage);
            }
        });
    });
}
My questions are:
Why are the HLS values outside my expected ranges? They show up in [0, 255] like the RGB ranges; is that a wrong usage of cvtColor?
Should I use Vec3b within the two for loops, or Vec3i instead?
Is something wrong with my approach above?
Update:
Vec3b hls = original.at<Vec3b>(y, x);
uchar h = hls.val[0], l = hls.val[1], s = hls.val[2];
// Remap the hls value range to human-readable range (0~360, 0~1.0, 0~1.0).
// https://docs.opencv.org/master/de/d25/imgproc_color_conversions.html
float fh, fl, fs;
fh = h * 2.0;
fl = l / 255.0;
fs = s / 255.0;
fh = MAX(0, MIN(360, fh + h_delta));
fl = MAX(0, MIN(1, fl + l_delta / 100));
fs = MAX(0, MIN(1, fs + s_delta / 100));
// Convert them back
fh /= 2.0;
fl *= 255.0;
fs *= 255.0;
printf("(%02d, %02d):\tHSL(%d, %d, %d)\tHSL2(%.4f, %.4f, %.4f)\n", x, y, h, s, l, fh, fs, fl);
original.at<Vec3b>(y, x)[0] = short(fh);
original.at<Vec3b>(y, x)[1] = short(fl);
original.at<Vec3b>(y, x)[2] = short(fs);
1) Take a look at this, specifically the RGB→HLS part. When the source image is 8-bit the values go from 0 to 255, but if you use a float image they may have different ranges.
8-bit images: V ← 255⋅V, S ← 255⋅S, H ← H/2 (to fit into 0 to 255)
(V should be L; there is a typo in the documentation.)
You can convert the RGB/BGR image to a floating-point image and then you will have the full ranges, i.e. S and L go from 0 to 1 and H from 0 to 360. But you have to be careful converting it back.
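A minimal sketch of that float route (untested; it assumes an 8-bit BGR input in a cv::Mat named bgr, with y and x as in the loops above):
cv::Mat bgrFloat, hls;
bgr.convertTo(bgrFloat, CV_32FC3, 1.0 / 255.0);  // scale to [0, 1]
cv::cvtColor(bgrFloat, hls, cv::COLOR_BGR2HLS);  // now H is in [0, 360], L/S in [0, 1]
cv::Vec3f pix = hls.at<cv::Vec3f>(y, x);
float h = pix[0], l = pix[1], s = pix[2];
// ... adjust h / l / s in their natural ranges here ...
hls.at<cv::Vec3f>(y, x) = cv::Vec3f(h, l, s);
cv::cvtColor(hls, bgrFloat, cv::COLOR_HLS2BGR);
bgrFloat.convertTo(bgr, CV_8UC3, 255.0);         // back to 8-bit for display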
2) Vec3b is for unsigned 8-bit images (CV_8U) and Vec3i is for integer images (CV_32S). Knowing this, it depends on what type your image is. Since, as you said, the values go from 0 to 255, it should be unsigned 8-bit, so you should use Vec3b. If you use the other one, it will read 32 bits per pixel and use that size to compute positions in the pixel array, so it may give you out-of-bounds accesses, segmentation faults, or random-looking problems.
If you have any questions, feel free to comment.

Get RGB "CVPixelBuffer" from ARKit

I'm trying to get a CVPixelBuffer in RGB color space from Apple's ARKit. In the func session(_ session: ARSession, didUpdate frame: ARFrame) method of ARSessionDelegate I get an instance of ARFrame. On the page Displaying an AR Experience with Metal I found that this pixel buffer is in the YCbCr (YUV) color space.
I need to convert this to RGB color space (I actually need a CVPixelBuffer and not a UIImage). I've found something about color conversion on iOS but I was not able to get it working in Swift 3.
There are several ways to do this, depending on what you're after. The best way to do it in realtime (say, to render the buffer to a view) is to use a custom shader to convert the YCbCr CVPixelBuffer to RGB.
Using Metal:
If you make a new project, select "Augmented Reality App," and select "Metal" for the content technology, the project generated will contain the code and shaders necessary to make this conversion.
Using OpenGL:
The GLCameraRipple example from Apple uses an AVCaptureSession to capture the camera, and shows how to map the resulting CVPixelBuffer to GL textures, which are then converted to RGB in shaders (again, provided in the example).
Non Realtime:
The answer to this stackoverflow question addresses converting the buffer to a UIImage, and offers a pretty simple way to do it.
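Since the asker actually needs a CVPixelBuffer rather than a UIImage, one more non-realtime option is to let Core Image render straight into a BGRA pixel buffer. A minimal sketch (untested; attribute requirements for the destination buffer may vary):
#import <CoreImage/CoreImage.h>
#import <CoreVideo/CoreVideo.h>

// Sketch: convert a YCbCr pixel buffer to a new BGRA CVPixelBuffer via Core Image.
CVPixelBufferRef BGRAPixelBufferFromYCbCr(CVPixelBufferRef src, CIContext *context)
{
    CIImage *image = [CIImage imageWithCVPixelBuffer:src];
    CVPixelBufferRef dst = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault,
                        CVPixelBufferGetWidth(src),
                        CVPixelBufferGetHeight(src),
                        kCVPixelFormatType_32BGRA,
                        NULL, &dst);
    if (dst) {
        [context render:image toCVPixelBuffer:dst];
    }
    return dst; // caller releases with CVPixelBufferRelease
}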
I was also stuck on this question for several days. All of the code snippets I could find on the Internet for converting a CVPixelBuffer to a UIImage were written in Objective-C rather than Swift.
Finally, the following code snippet works perfectly for me, converting a YUV image so it can be saved as a JPG or PNG file and then written to a local file in your application.
func pixelBufferToUIImage(pixelBuffer: CVPixelBuffer) -> UIImage {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    let context = CIContext(options: nil)
    let cgImage = context.createCGImage(ciImage, from: ciImage.extent)
    let uiImage = UIImage(cgImage: cgImage!)
    return uiImage
}
The docs explicitly say that you need to access the luma and chroma planes:
ARKit captures pixel buffers in a planar YCbCr (also known as YUV) format. To render these images on a device display, you'll need to access the luma and chroma planes of the pixel buffer and convert pixel values to an RGB format.
So there's no way to directly get the RGB planes; you'll have to handle this in your shaders, either in Metal or OpenGL, as described by @joshue.
You may want the Accelerate framework's image conversion functions, perhaps a combination of vImageConvert_420Yp8_Cb8_Cr8ToARGB8888 and vImageConvert_ARGB8888toRGB888 (if you don't want the alpha channel). In my experience these work in real time.
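A sketch of that route (untested; note that ARKit's buffers are biplanar with interleaved CbCr, so the vImageConvert_420Yp8_CbCr8ToARGB8888 variant applies rather than the three-plane function named above, and the pixel-range values below assume the full-range biplanar format):
#import <Accelerate/Accelerate.h>
#import <CoreVideo/CoreVideo.h>

// Hypothetical helper: converts a biplanar full-range YCbCr pixel buffer
// to a malloc'ed ARGB8888 buffer that the caller must free.
static uint8_t *ARGBBufferFromYpCbCrPixelBuffer(CVPixelBufferRef pixelBuffer)
{
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

    vImage_Buffer srcYp = {
        .data     = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0),
        .width    = CVPixelBufferGetWidthOfPlane(pixelBuffer, 0),
        .height   = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0),
        .rowBytes = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0)
    };
    vImage_Buffer srcCbCr = {
        .data     = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1),
        .width    = CVPixelBufferGetWidthOfPlane(pixelBuffer, 1),
        .height   = CVPixelBufferGetHeightOfPlane(pixelBuffer, 1),
        .rowBytes = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1)
    };
    vImage_Buffer dest = {
        .data     = malloc(srcYp.width * srcYp.height * 4),
        .width    = srcYp.width,
        .height   = srcYp.height,
        .rowBytes = srcYp.width * 4
    };

    // Levels for kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
    vImage_YpCbCrPixelRange pixelRange = {0, 128, 255, 255, 255, 1, 255, 0};
    vImage_YpCbCrToARGB info;
    vImageConvert_YpCbCrToARGB_GenerateConversion(kvImage_YpCbCrToARGBMatrix_ITU_R_601_4,
                                                  &pixelRange, &info,
                                                  kvImage420Yp8_CbCr8, kvImageARGB8888,
                                                  kvImageNoFlags);

    uint8_t permuteMap[4] = {0, 1, 2, 3}; // keep ARGB channel order
    vImageConvert_420Yp8_CbCr8ToARGB8888(&srcYp, &srcCbCr, &dest, &info,
                                         permuteMap, 255, kvImageNoFlags);

    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    return dest.data;
}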
I struggled a long while with this as well, and ended up writing the following code, which works for me:
// Helper macro to ensure pixel values are bounded between 0 and 255
#define clamp(a) (a > 255 ? 255 : (a < 0 ? 0 : a))
- (void)processImageBuffer:(CVImageBufferRef)imageBuffer
{
    OSType type = CVPixelBufferGetPixelFormatType(imageBuffer);
    if (type == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)
    {
        CVPixelBufferLockBaseAddress(imageBuffer, 0);
        // We know the layout at the base address based on the YpCbCr8BiPlanarFullRange format (as per doc)
        uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
        // Get the number of bytes per row for the pixel buffer, width and height
        size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
        size_t width = CVPixelBufferGetWidth(imageBuffer);
        size_t height = CVPixelBufferGetHeight(imageBuffer);
        // Get buffer info and planar pixel data
        CVPlanarPixelBufferInfo_YCbCrBiPlanar *bufferInfo = (CVPlanarPixelBufferInfo_YCbCrBiPlanar *)baseAddress;
        uint8_t *cbrBuff = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1);
        // This just moves the pointer past the offset
        baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
        int bytesPerPixel = 4;
        uint8_t *rgbData = rgbFromYCrCbBiPlanarFullRangeBuffer(baseAddress,
                                                               cbrBuff,
                                                               bufferInfo,
                                                               width,
                                                               height,
                                                               bytesPerRow);
        [self doStuffOnRGBBuffer:rgbData width:width height:height bitsPerComponent:8 bytesPerPixel:bytesPerPixel bytesPerRow:bytesPerRow];
        free(rgbData);
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    }
    else
    {
        NSLog(@"Unsupported image buffer type");
    }
}
uint8_t * rgbFromYCrCbBiPlanarFullRangeBuffer(uint8_t *inBaseAddress,
                                              uint8_t *cbCrBuffer,
                                              CVPlanarPixelBufferInfo_YCbCrBiPlanar * inBufferInfo,
                                              size_t inputBufferWidth,
                                              size_t inputBufferHeight,
                                              size_t inputBufferBytesPerRow)
{
    int bytesPerPixel = 4;
    NSUInteger yPitch = EndianU32_BtoN(inBufferInfo->componentInfoY.rowBytes);
    uint8_t *rgbBuffer = (uint8_t *)malloc(inputBufferWidth * inputBufferHeight * bytesPerPixel);
    NSUInteger cbCrPitch = EndianU32_BtoN(inBufferInfo->componentInfoCbCr.rowBytes);
    uint8_t *yBuffer = (uint8_t *)inBaseAddress;
    for (int y = 0; y < inputBufferHeight; y++)
    {
        uint8_t *rgbBufferLine = &rgbBuffer[y * inputBufferWidth * bytesPerPixel];
        uint8_t *yBufferLine = &yBuffer[y * yPitch];
        uint8_t *cbCrBufferLine = &cbCrBuffer[(y >> 1) * cbCrPitch];
        for (int x = 0; x < inputBufferWidth; x++)
        {
            int16_t luma = yBufferLine[x];
            int16_t cb = cbCrBufferLine[x & ~1] - 128;
            int16_t cr = cbCrBufferLine[x | 1] - 128;
            uint8_t *rgbOutput = &rgbBufferLine[x * bytesPerPixel];
            int16_t r = (int16_t)roundf( luma + cr * 1.4 );
            int16_t g = (int16_t)roundf( luma + cb * -0.343 + cr * -0.711 );
            int16_t b = (int16_t)roundf( luma + cb * 1.765 );
            // ABGR image representation
            rgbOutput[0] = 0xff;
            rgbOutput[1] = clamp(b);
            rgbOutput[2] = clamp(g);
            rgbOutput[3] = clamp(r);
        }
    }
    return rgbBuffer;
}

Face detection using OpenCV: application crashes using CGContextDrawImage (EXC_BAD_ACCESS)

I'm using OpenCV in my Swift project to detect faces in pictures from an iPhone's gallery.
When I try to process a UIImage picked from the gallery, I first do this:
+ (UIImage *)processImageWithOpenCV:(UIImage*)inputImage {
...
cv::Mat img = [inputImage CVGrayscaleMat];
...
}
Here is my CVGrayscaleMat method:
-(cv::Mat)CVGrayscaleMat
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGFloat cols = self.size.width;
    CGFloat rows = self.size.height;
    cv::Mat cvMat((int)rows, (int)cols, CV_8UC1); // 8 bits per component, 1 channel
    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data, // Pointer to data
                                                    (int)cols,  // Width of bitmap
                                                    (int)rows,  // Height of bitmap
                                                    8,          // Bits per component
                                                    4896,       // Bytes per row
                                                    colorSpace, // Colorspace
                                                    kCGImageAlphaNone |
                                                    kCGBitmapByteOrderDefault); // Bitmap info flags
    CGContextDrawImage(contextRef, CGRectMake(0, 0, (int)cols, (int)rows), self.CGImage);
    CGContextRelease(contextRef);
    CGColorSpaceRelease(colorSpace);
    return cvMat;
}
When I try to process a picture, I get an EXC_BAD_ACCESS on this line:
CGContextDrawImage(contextRef, CGRectMake(0, 0, (int)cols, (int)rows), self.CGImage);
I'm still not super familiar with OpenCV... Any ideas?
Thanks in advance !
SOLVED: using the standard function - (cv::Mat)cvMatFromUIImage:(UIImage *)image
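For what it's worth, the likely culprit is the hardcoded 4896 bytes per row: the context believes each row is 4896 bytes wide, while the Mat only allocated cols bytes per row, so drawing writes far past the end of the buffer for any image narrower than that. A sketch of the same method with the stride taken from the Mat itself (my guess at the fix; untested):
-(cv::Mat)CVGrayscaleMat
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    int cols = (int)self.size.width;
    int rows = (int)self.size.height;
    cv::Mat cvMat(rows, cols, CV_8UC1); // 8 bits per component, 1 channel
    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,    // Pointer to data
                                                    cols,          // Width of bitmap
                                                    rows,          // Height of bitmap
                                                    8,             // Bits per component
                                                    cvMat.step[0], // Bytes per row, from the Mat itself
                                                    colorSpace,    // Colorspace
                                                    kCGImageAlphaNone |
                                                    kCGBitmapByteOrderDefault);
    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), self.CGImage);
    CGContextRelease(contextRef);
    CGColorSpaceRelease(colorSpace);
    return cvMat;
}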

How do you find a region of a certain color within an image on iOS?

I'm making an image processing iOS app, where we have a large image (e.g. 2000x2000). Assume the image is completely black, except for one region that is a different color (let's say that region is 200x200).
I want to calculate the start and end position of that differently coloured region. How can I achieve this?
Here's a simple way to allow the CPU to get pixel values from a UIImage. The steps are
allocate a buffer for the pixels
create a bitmap memory context using the buffer as the backing store
draw the image into the context (writes the pixels into the buffer)
examine the pixels in the buffer
free the buffer and associated resources
- (void)processImage:(UIImage *)input
{
    int width = input.size.width;
    int height = input.size.height;
    // allocate the pixel buffer
    uint32_t *pixelBuffer = calloc( width * height, sizeof(uint32_t) );
    // create a context with RGBA pixels
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate( pixelBuffer, width, height, 8, width * sizeof(uint32_t), colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast );
    // invert the y-axis, so that increasing y is down
    CGContextScaleCTM( context, 1.0, -1.0 );
    CGContextTranslateCTM( context, 0, -height );
    // draw the image into the pixel buffer
    UIGraphicsPushContext( context );
    [input drawAtPoint:CGPointZero];
    UIGraphicsPopContext();
    // scan the image
    int x, y;
    uint8_t r, g, b, a;
    uint8_t *pixel = (uint8_t *)pixelBuffer;
    for ( y = 0; y < height; y++ )
        for ( x = 0; x < width; x++ )
        {
            // note: with the 32Little byte order and alpha-last flags above,
            // the bytes within each pixel are laid out as A,B,G,R
            a = pixel[0];
            b = pixel[1];
            g = pixel[2];
            r = pixel[3];
            // do something with the pixel value here
            pixel += 4;
        }
    // release the resources
    CGContextRelease( context );
    CGColorSpaceRelease( colorSpace );
    free( pixelBuffer );
}
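For the original question, the "do something" step can track a bounding box of non-black pixels. A sketch of a scan loop that could replace the one above (assuming a pure-black background, and the A,B,G,R byte layout noted in the comments):
// track the bounding box of every non-black pixel
int minX = width, minY = height, maxX = -1, maxY = -1;
uint8_t *p = (uint8_t *)pixelBuffer;
for ( y = 0; y < height; y++ )
    for ( x = 0; x < width; x++, p += 4 )
    {
        // bytes are A,B,G,R with the bitmap flags used above
        uint8_t bb = p[1], gg = p[2], rr = p[3];
        if ( rr | gg | bb )
        {
            if (x < minX) minX = x;
            if (x > maxX) maxX = x;
            if (y < minY) minY = y;
            if (y > maxY) maxY = y;
        }
    }
if ( maxX >= 0 )
    NSLog(@"coloured region spans (%d, %d) to (%d, %d)", minX, minY, maxX, maxY);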

iOS: How to retrieve image dimensions X, Y from CGContextRef

As it says on the can; here is an example of why I need it:
Let's say I create a bitmap context:
size_t pixelCount = dest_W * dest_H;
typedef struct {
    uint8_t r, g, b, a;
} RGBA;
// backing bitmap store
RGBA *pixels = calloc( pixelCount, sizeof( RGBA ) );
// create context using above store
CGContextRef X_RGBA;
{
    size_t bitsPerComponent = 8;
    size_t bytesPerRow = dest_W * sizeof( RGBA );
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // create a context with RGBA pixels
    X_RGBA = CGBitmapContextCreate( (void *)pixels, dest_W, dest_H,
                                    bitsPerComponent, bytesPerRow,
                                    colorSpace, kCGImageAlphaNoneSkipLast
                                    );
    assert(X_RGBA);
    CGColorSpaceRelease(colorSpace);
}
Now I want to pass this context to a drawing function that will, e.g., draw a circle touching the edges.
Do I really need to pass in the width and height as well? I am 99% sure I have seen some way to extract the width and height from the context, but I can't find it anywhere.
CGBitmapContextGetWidth // and CGBitmapContextGetHeight
NB: width and height probably don't make sense for a non-bitmap CGContextRef.
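For example, using the context from the question (both getters are real Core Graphics calls; the ellipse line is just an illustration):
// recover the pixel dimensions from the bitmap context
size_t w = CGBitmapContextGetWidth(X_RGBA);   // dest_W
size_t h = CGBitmapContextGetHeight(X_RGBA);  // dest_H
// e.g. draw a circle touching the edges
CGContextFillEllipseInRect(X_RGBA, CGRectMake(0, 0, w, h));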
Thanks @wiliz on #iphonedev
