Why does CGBitmapContextCreate expect double the number of bytesPerRow? - ios

I'm trying to generate a pixel buffer using a bitmap context on iOS (in ObjC). The abridged code (to remove null checks etc) is below.
CGFloat width = 1;
CGFloat height = 1;
CVPixelBufferRef buffer;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                      width,
                                      height,
                                      kCVPixelFormatType_OneComponent8,
                                      nil,
                                      &buffer);
CVPixelBufferLockBaseAddress(buffer, 0);
void *data = CVPixelBufferGetBaseAddress(buffer);
CGColorSpaceRef space = CGColorSpaceCreateDeviceGray();
CGContextRef ctx = CGBitmapContextCreate(data,
                                         width,
                                         height,
                                         8,
                                         0,
                                         space,
                                         (CGBitmapInfo) kCGImageAlphaNoneSkipLast);
// ... draw into context
CVPixelBufferUnlockBaseAddress(buffer, 0);
This is trying to create a bitmap context for a single pixel, where both the input and the output pixels are 8-bit grayscale.
I get the following error output:
CGBitmapContextCreate: invalid data bytes/row: should be at least 2 for 8 integer bits/component, 1 components, kCGImageAlphaNoneSkipLast.
Why does it double the expected bytes per row? This is consistent for the width / height combinations I've tried, and 'works' if I halve the width parameter in CGBitmapContextCreate. Note also that if I pass in a value for bytesPerRow then it still fails this check and gives the same output.
Am I missing something obvious?

kCGImageAlphaNoneSkipLast was wrong; I needed to use kCGImageAlphaNone. With kCGImageAlphaNoneSkipLast, Core Graphics reserves a trailing padding byte per pixel, so a single 8-bit grey component becomes two bytes per pixel, which is why the minimum bytes per row was doubled. With kCGImageAlphaNone each pixel is just the one grey byte, matching kCVPixelFormatType_OneComponent8.
The bitmap create call now looks like this (passing the pixel buffer's own bytes per row, since CoreVideo may pad rows):
CGContextRef ctx = CGBitmapContextCreate(data,
                                         width,
                                         height,
                                         8,
                                         CVPixelBufferGetBytesPerRow(buffer),
                                         space,
                                         kCGImageAlphaNone);

Related

How to get raw ARGB data from bitmap (iOS)?

Android:
import android.graphics.Bitmap;
public void getPixels (int[] pixels, int offset, int stride, int x, int y, int width, int height);
Bitmap bmap = source.renderCroppedGreyscaleBitmap();
int w=bmap.getWidth(),h=bmap.getHeight();
int[] pix = new int[w * h];
bmap.getPixels(pix, 0, w, 0, 0, w, h);
Returns in pixels[] a copy of the data in the bitmap.
Each value is a packed int representing a Color.
The stride parameter allows the caller to allow for gaps in the returned pixels array between rows.
For normal packed results, just pass width for the stride value.
The returned colors are non-premultiplied ARGB values.
iOS:
@implementation UIImage (Pixels)
-(unsigned char*) rgbaPixels
{
// The amount of bits per pixel, in this case we are doing RGBA so 4 byte = 32 bits
#define BITS_PER_PIXEL 32
// The amount of bits per component, in this it is the same as the bitsPerPixel divided by 4 because each component (such as Red) is only 8 bits
#define BITS_PER_COMPONENT (BITS_PER_PIXEL/4)
// The amount of bytes per pixel, in this case a pixel is made up of Red, Green, Blue and Alpha so it will be 4
#define BYTES_PER_PIXEL (BITS_PER_PIXEL/BITS_PER_COMPONENT)
// Define the colour space (in this case it's device RGB)
CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
// Find out the number of bytes per row (it's just the width times the number of bytes per pixel)
size_t bytesPerRow = self.size.width * BYTES_PER_PIXEL;
// Allocate the appropriate amount of memory to hold the bitmap context
unsigned char* bitmapData = (unsigned char*) malloc(bytesPerRow*self.size.height);
// Create the bitmap context, we set the alpha to none here to tell the bitmap we don't care about alpha values
CGContextRef context = CGBitmapContextCreate(bitmapData,self.size.width,self.size.height,BITS_PER_COMPONENT,bytesPerRow,colourSpace,kCGImageAlphaFirst);//It returns null
/* We are done with the colour space now so no point in keeping it around*/
CGColorSpaceRelease(colourSpace);
// Create a CGRect to define the amount of pixels we want
CGRect rect = CGRectMake(0.0,0.0,self.size.width,self.size.height);
// Draw the bitmap context using the rectangle we just created as a bounds and the Core Graphics Image as the image source
CGContextDrawImage(context,rect,self.CGImage);
// Obtain the pixel data from the bitmap context
unsigned char* pixelData = (unsigned char*)CGBitmapContextGetData(context);
// Release the bitmap context because we are done using it
CGContextRelease(context);
//CGColorSpaceRelease(colourSpace);
return pixelData;
#undef BITS_PER_PIXEL
#undef BITS_PER_COMPONENT
}
But it doesn't work.
CGBitmapContextCreate(bitmapData,self.size.width,self.size.height,BITS_PER_COMPONENT,bytesPerRow,colourSpace,kCGImageAlphaFirst);
It returns NULL.
I need the same array as pix[] above. How can I make it?
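A likely culprit, offered as a guess rather than something stated above: kCGImageAlphaFirst (non-premultiplied alpha) is not among the pixel formats CGBitmapContextCreate supports for an 8-bit-per-component RGB context, which would explain the NULL return. Below is a minimal sketch with a supported alpha setting; it is illustrative, not the poster's code, and note that bitmap contexts store premultiplied alpha, so exact non-premultiplied ARGB values like Android's getPixels would still need un-premultiplying afterwards.
// Sketch: same idea as above, but with a bitmapInfo value CGBitmapContextCreate accepts.
CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
size_t bytesPerRow = self.size.width * 4;
unsigned char *bitmapData = (unsigned char *)malloc(bytesPerRow * self.size.height);
CGContextRef context = CGBitmapContextCreate(bitmapData,
                                             self.size.width,
                                             self.size.height,
                                             8,
                                             bytesPerRow,
                                             colourSpace,
                                             kCGImageAlphaPremultipliedLast); // supported for 8-bit RGB
CGColorSpaceRelease(colourSpace);
CGContextDrawImage(context, CGRectMake(0.0, 0.0, self.size.width, self.size.height), self.CGImage);
CGContextRelease(context);
return bitmapData; // RGBA, 4 bytes per pixel; caller must free()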

CGContext color changes when alpha is really small

I'm making a painting app. At each point along a stroke the brush can change width, and the alpha decreases as if the brush were running out of paint. The problem is that when the alpha gets really small, the color changes.
Once the alpha goes below .01 I start to see color changes in the brush strokes. I need such a low alpha because the brush layer is composited into the context at every pixel along the line, so to get the overall transparency of a brush running out of paint, the per-stamp alpha ends up being very small.
Here is the code where I am drawing my layer into the context:
CGContextSaveGState(_cacheContext);
CGContextSetAlpha(_cacheContext, _brushAlpha); // below .01 color starts to change
CGContextDrawLayerAtPoint (_cacheContext, bottomLeft, _brushShapeLayer);
CGContextRestoreGState(_cacheContext);
If the RGB color is all one component, such as (1, 0, 0), then it works great. But when the color is something like (.4, .6, .2), that's when I see the color changes at low alphas.
Thanks for any help!
Update:
I tried to use kCGBitmapFloatComponents but I am getting an error:
Unsupported pixel description - 3 components, 32 bits-per-component, 128 bits-per-pixel
I assumed this meant that I couldn't use it in iOS but maybe I'm not setting it up correctly. Here is what I have for creating the context:
bitmapBytesPerRow = (self.frame.size.width * 8 * sizeof(float));
bitmapByteCount = (bitmapBytesPerRow * self.frame.size.height);
void* bitmap = malloc( bitmapByteCount );
CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast | kCGBitmapFloatComponents;
self.cacheContext = CGBitmapContextCreate (bitmap, self.frame.size.width, self.frame.size.height, 32, bitmapBytesPerRow, CGColorSpaceCreateDeviceRGB(), bitmapInfo);
My Deployment Target is set at 8.2
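One plausible explanation, offered as an assumption rather than something stated in the question: in an 8-bit premultiplied context each channel is stored as roughly round(component × alpha × 255), and once alpha drops below about 0.01 the rounded values can no longer preserve the ratios between channels, which shows up as a hue shift. A rough illustration with hypothetical numbers, not the app's code:
// Assume alpha = 0.008 and an intended color of (0.4, 0.6, 0.2).
float alpha = 0.008f;
float color[3] = { 0.4f, 0.6f, 0.2f };
int storedAlpha = (int)lroundf(alpha * 255.0f);                          // 2
for (int i = 0; i < 3; i++) {
    int stored = (int)lroundf(color[i] * alpha * 255.0f);                // 1, 1, 0
    float recovered = storedAlpha ? (float)stored / storedAlpha : 0.0f;  // 0.5, 0.5, 0.0
    NSLog(@"channel %d: intended %.2f, recovered %.2f", i, color[i], recovered);
}
// (0.4, 0.6, 0.2) comes back as roughly (0.5, 0.5, 0.0), a visible hue shift,
// whereas a single-component color like (1, 0, 0) keeps its ratios however coarse
// the rounding gets - which would match the behaviour described above.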

How can I manipulate the pixel values in a CGImageRef in Xcode

I have some
CGImageRef cgImage = "something"
Is there a way to manipulate the pixel values of this cgImage? For example, this image contains values between 0.0001 and 3000, so when I try to view or release the image this way in an NSImageView (see "How can I show an image in a NSView using an CGImageRef image"),
I get a black image: all pixels are black. I think it has to do with needing to map the pixel values into a different range or color map (I don't know).
I want to be able to manipulate or change the pixel values or just be able to see the image by manipulating the color map range.
I have tried this but obviously it doesn't work:
CGContextDrawImage(ctx, CGRectMake(0,0, CGBitmapContextGetWidth(ctx),CGBitmapContextGetHeight(ctx)),cgImage);
UInt8 *data = CGBitmapContextGetData(ctx);
for (**all pixel values and i++ **) {
data[i] = **change to another value I want depending on the value in data[i]**;
}
Thank you,
In order to manipulate individual pixels in an image:
- allocate a buffer to hold the pixels
- create a memory bitmap context using that buffer
- draw the image into the context, which puts the pixels into the buffer
- change the pixels as desired
- create a new image from the context
- free up resources (note: be sure to check for leaks using Instruments)
Here's some sample code to get you started. This code will swap the blue and red components of each pixel.
- (CGImageRef)swapBlueAndRedInImage:(CGImageRef)image
{
int x, y;
uint8_t red, green, blue, alpha;
uint8_t *bufptr;
int width = CGImageGetWidth( image );
int height = CGImageGetHeight( image );
// allocate memory for pixels
uint32_t *pixels = calloc( width * height, sizeof(uint32_t) );
// create a context with RGBA pixels
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate( pixels, width, height, 8, width * sizeof(uint32_t), colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast );
// draw the image into the context
CGContextDrawImage( context, CGRectMake( 0, 0, width, height ), image );
// manipulate the pixels
bufptr = (uint8_t *)pixels;
for ( y = 0; y < height; y++ )
    for ( x = 0; x < width; x++ )
    {
        red = bufptr[3];
        green = bufptr[2];
        blue = bufptr[1];
        alpha = bufptr[0];
        bufptr[1] = red;  // swaps the red and blue
        bufptr[3] = blue; // components of each pixel
        bufptr += 4;
    }
// create a new CGImage from the context with modified pixels
CGImageRef resultImage = CGBitmapContextCreateImage( context );
// release resources to free up memory
CGContextRelease( context );
CGColorSpaceRelease( colorSpace );
free( pixels );
return( resultImage );
}
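A short usage sketch (my addition, not part of the answer; sourceImage and imageView are hypothetical names). The CGImageRef coming back from CGBitmapContextCreateImage is a +1 reference, so the caller should release it once it has been wrapped:
CGImageRef swapped = [self swapBlueAndRedInImage:sourceImage.CGImage];
UIImage *result = [UIImage imageWithCGImage:swapped];
CGImageRelease(swapped); // we own the image returned by the method above
imageView.image = result;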

Bad argument (image must have CV_8UC3 type) in grabCut

I am using grabCut algorithm using the following code:
cv::Mat img=[self cvMatFromUIImage:image];
cv::Rect rectangle(10,10,300,150);
cv::Mat result; // segmentation (4 possible values)
cv::Mat bgModel,fgModel; // the models (internally used)
// GrabCut segmentation
cv::grabCut(img, // input image
result, // segmentation result
rectangle, // rectangle containing foreground
bgModel,fgModel, // models
3, // number of iterations
cv::GC_INIT_WITH_RECT); // use rectangle
// Get the pixels marked as likely foreground
cv::compare(result,cv::GC_PR_FGD,result,cv::CMP_EQ);
// Generate output image
cv::Mat foreground(img.size(),CV_8UC3,
cv::Scalar(255,255,255));
result=result&1;
img.copyTo(foreground, result);
image=[self UIImageFromCVMat:foreground];
ImgView.image=image;
The code to convert UIImage to Mat image looks like this
- (cv::Mat)cvMatFromUIImage:(UIImage *)imge
{
CGColorSpaceRef colorSpace = CGImageGetColorSpace(imge.CGImage);
CGFloat cols = imge.size.width;
CGFloat rows = imge.size.height;
cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels
CGContextRef contextRef = CGBitmapContextCreate(
cvMat.data, // Pointer to data
cols, // Width of bitmap
rows, // Height of bitmap
8, // Bits per component
cvMat.step[0], // Bytes per row
colorSpace, // Colorspace
kCGImageAlphaNoneSkipLast |
kCGBitmapByteOrderDefault); // Bitmap info flags
CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), imge.CGImage);
CGContextRelease(contextRef);
CGColorSpaceRelease(colorSpace);
return cvMat;
}
But I got the error
OpenCV Error: Bad argument (image must have CV_8UC3 type) in grabCut.
If I change the line cv::Mat cvMat(rows, cols, CV_8UC4); to cv::Mat cvMat(rows, cols, CV_8UC3);
then I get <Error>: CGBitmapContextCreate: unsupported parameter combination: 8 integer bits/component; 32 bits/pixel; 3-component color space; kCGImageAlphaNoneSkipLast; 342 bytes/row..
I am confused about what to do here.
Any help please?
The problem seems to be that the image you get has an alpha channel, while grabCut expects an RGB image without an alpha channel, so you need to get rid of the additional channel.
You can do this for example with this function:
cv::cvtColor(img , img , CV_RGBA2RGB);
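A minimal sketch of where that conversion would slot into the code above (my arrangement, reusing the question's variables; CV_RGBA2RGB assumes the fourth channel from the kCGImageAlphaNoneSkipLast context is the one to drop):
cv::Mat img = [self cvMatFromUIImage:image];   // CV_8UC4 straight from the bitmap context
cv::cvtColor(img, img, CV_RGBA2RGB);           // drop the alpha/skip channel -> CV_8UC3
cv::grabCut(img, result, rectangle, bgModel, fgModel, 3, cv::GC_INIT_WITH_RECT);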

CGImageCreate Test Pattern is not working (iOS)

I'm trying to create an UIImage test pattern for an iOS 5.1 device. The target UIImageView is 320x240 in size, but I was trying to create a 160x120 UIImage test pattern (future, non-test pattern images will be this size). I wanted the top half of the box to be blue and the bottom half to be red, but I get what looks like uninitialized memory corrupting the bottom of the image. The code is as follows:
int width = 160;
int height = 120;
unsigned int testData[width * height];
for(int k = 0; k < (width * height) / 2; k++)
testData[k] = 0xFF0000FF; // BGRA (Blue)
for(int k = (width * height) / 2; k < width * height; k++)
testData[k] = 0x0000FFFF; // BGRA (Red)
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * width;
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, &testData, (width * height * 4), NULL);
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGImageAlphaNoneSkipFirst;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGImageRef imageRef = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow,
colorSpaceRef, bitmapInfo, provider, NULL, NO,renderingIntent);
UIImage *myTestImage = [UIImage imageWithCGImage:imageRef];
This should look like another example on Stack Overflow. Anyway, I found that as I decrease the size of the test pattern the "corrupt" portion of the image increases. What is also strange is that I see lines of red in the "corrupt" portion, so it doesn't appear that I'm just messing up the sizes of components. What am I missing? It feels like something in the provider, but I don't see it.
Thanks!
Added screenshots. Here is what it looks like with kCGImageAlphaNoneSkipFirst set, and here is what it looks like with kCGImageAlphaFirst.
Your pixel data is in an automatic variable, so it's stored on the stack:
unsigned int testData[width * height];
You must be returning from the function where this data is declared. That makes the function's stack frame get popped and reused by other functions, which overwrites the data.
Your image, however, still refers to that pixel data at the same address on the stack. (CGDataProviderCreateWithData doesn't copy the data, it just refers to it.)
To fix: use malloc or CFMutableData or NSMutableData to allocate space for your pixel data on the heap.
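A minimal sketch of the malloc route (my code, not the answerer's; the release callback is one way to make sure the heap buffer is freed when the image goes away):
// Frees the pixel buffer once the data provider no longer needs it.
static void releaseTestPattern(void *info, const void *data, size_t size)
{
    free((void *)data);
}

// ... in place of the stack array:
size_t byteCount = width * height * 4;
unsigned int *testData = malloc(byteCount);
for (int k = 0; k < (width * height) / 2; k++)
    testData[k] = 0xFF0000FF; // BGRA (Blue)
for (int k = (width * height) / 2; k < width * height; k++)
    testData[k] = 0x0000FFFF; // BGRA (Red)
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, testData, byteCount, releaseTestPattern);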
Your image includes alpha which you then tell the system to ignore by skipping the most significant bits (i.e. the "B" portion of your image). Try setting it to kCGImageAlphaPremultipliedLast instead.
EDIT:
Now that I remember endianness, I realize that the program is probably reading your values backwards, so what you might actually want is kCGImageAlphaPremultipliedFirst.
