I have some
CGImageRef cgImage = "something"
Is there a way to manipulate the pixel values of this cgImage? For example, this image contains values between 0.0001 and 3000, so when I try to view the image in an NSImageView (as in How can I show an image in a NSView using an CGImageRef image)
I get a black image: all pixels are black. I think it has to do with mapping the pixel value range onto a different color map (I'm not sure).
I want to be able to manipulate or change the pixel values, or just be able to see the image by adjusting the color map range.
I have tried this, but obviously it doesn't work:
CGContextDrawImage(ctx, CGRectMake(0,0, CGBitmapContextGetWidth(ctx),CGBitmapContextGetHeight(ctx)),cgImage);
UInt8 *data = CGBitmapContextGetData(ctx);
for ( /* all pixel values, i++ */ ) {
    data[i] = /* change to another value I want, depending on the value in data[i] */;
}
Thank you,
In order to manipulate individual pixels in an image:
1. allocate a buffer to hold the pixels
2. create a memory bitmap context using that buffer
3. draw the image into the context, which puts the pixels into the buffer
4. change the pixels as desired
5. create a new image from the context
6. free up resources (be sure to check for leaks using Instruments)
Here's some sample code to get you started. This code will swap the blue and red components of each pixel.
- (CGImageRef)swapBlueAndRedInImage:(CGImageRef)image
{
    int x, y;
    uint8_t red, green, blue, alpha;
    uint8_t *bufptr;
    int width = CGImageGetWidth( image );
    int height = CGImageGetHeight( image );

    // allocate memory for pixels
    uint32_t *pixels = calloc( width * height, sizeof(uint32_t) );

    // create a context with RGBA pixels
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate( pixels, width, height, 8, width * sizeof(uint32_t), colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast );

    // draw the image into the context
    CGContextDrawImage( context, CGRectMake( 0, 0, width, height ), image );

    // manipulate the pixels
    bufptr = (uint8_t *)pixels;
    for ( y = 0; y < height; y++ )
        for ( x = 0; x < width; x++ )
        {
            red   = bufptr[3];
            green = bufptr[2];
            blue  = bufptr[1];
            alpha = bufptr[0];
            bufptr[1] = red;    // swaps the red and blue
            bufptr[3] = blue;   // components of each pixel
            bufptr += 4;
        }

    // create a new CGImage from the context with modified pixels
    CGImageRef resultImage = CGBitmapContextCreateImage( context );

    // release resources to free up memory
    CGContextRelease( context );
    CGColorSpaceRelease( colorSpace );
    free( pixels );

    return( resultImage );
}
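For completeness, here is a minimal call-site sketch (assuming ARC; imageView is a hypothetical NSImageView outlet, matching the NSImageView mentioned in the question). Because the result comes from CGBitmapContextCreateImage, the caller owns it and must release it:

CGImageRef swapped = [self swapBlueAndRedInImage:cgImage];
imageView.image = [[NSImage alloc] initWithCGImage:swapped size:NSZeroSize]; // wrap for display in an NSImageView
CGImageRelease(swapped); // balance the CGBitmapContextCreateImage inside swapBlueAndRedInImage: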
Related
I'm trying to generate a pixel buffer using a bitmap context on iOS (in ObjC). The abridged code (to remove null checks etc) is below.
CGFloat width = 1;
CGFloat height = 1;
CVPixelBufferRef buffer;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                      width,
                                      height,
                                      kCVPixelFormatType_OneComponent8,
                                      nil,
                                      &buffer);
CVPixelBufferLockBaseAddress(buffer, 0);
void *data = CVPixelBufferGetBaseAddress(buffer);
CGColorSpaceRef space = CGColorSpaceCreateDeviceGray();
CGContextRef ctx = CGBitmapContextCreate(data,
                                         width,
                                         height,
                                         8,
                                         0,
                                         space,
                                         (CGBitmapInfo) kCGImageAlphaNoneSkipLast);
// ... draw into context
CVPixelBufferUnlockBaseAddress(buffer, 0);
This is trying to create a bitmap context for a single pixel, where both the input and the output pixels are 8-bit grayscale.
I get the following output:
CGBitmapContextCreate: invalid data bytes/row: should be at least 2 for 8 integer bits/component, 1 components, kCGImageAlphaNoneSkipLast.
Why does it double the expected bytes per row? This is consistent for the width / height combinations I've tried, and 'works' if I halve the width parameter in CGBitmapContextCreate. Note also that if I pass in a value for bytesPerRow then it still fails this check and gives the same output.
Am I missing something obvious?
kCGImageAlphaNoneSkipLast was wrong; I needed to use kCGImageAlphaNone. With the skip-last flag, Core Graphics treats each grayscale pixel as two components (the gray value plus a skipped padding byte), which is why it demanded at least 2 bytes per row for a 1-pixel-wide context.
The bitmap create call now looks like:
CGContextRef ctx = CGBitmapContextCreate(data,
                                         width,
                                         height,
                                         8,
                                         CVPixelBufferGetBytesPerRow(buffer),
                                         space,
                                         kCGImageAlphaNone);
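One related caveat (my note, not part of the original fix): CVPixelBuffer rows can be padded for alignment, so CVPixelBufferGetBytesPerRow may report more than width bytes per row; passing it through, as above, keeps the context and the buffer in agreement. A quick sanity check, assuming <assert.h> is available:

size_t stride = CVPixelBufferGetBytesPerRow(buffer); // may exceed width because rows can be padded
assert(stride >= (size_t)width);                     // one byte per 8-bit grayscale pixel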
Predefined: my A4 sheet will always be white.
I need to detect an A4 sheet in an image. I am able to detect rectangles; the problem is that I get multiple rectangles from my image, so I extract a sub-image for each set of detected rectangle points.
Now I want to match the sub-image's color against white.
I am using the method below to extract the image from the detected contours:
- (cv::Mat)getPaperAreaFromImage:(std::vector<cv::Point>)square image:(cv::Mat)image
{
    // declare used vars
    int paperWidth  = 210; // in mm, because scale factor is taken into account
    int paperHeight = 297; // in mm, because scale factor is taken into account
    cv::Point2f imageVertices[4];
    float distanceP1P2;
    float distanceP1P3;
    BOOL isLandscape = true;
    int scaleFactor;
    cv::Mat paperImage;
    cv::Mat paperImageCorrected;
    cv::Point2f paperVertices[4];

    // sort square corners for further operations
    square = sortSquarePointsClockwise( square );

    // rearrange to get proper order for getPerspectiveTransform()
    imageVertices[0] = square[0];
    imageVertices[1] = square[1];
    imageVertices[2] = square[3];
    imageVertices[3] = square[2];

    // get distance between corner points for further operations
    distanceP1P2 = distanceBetweenPoints( imageVertices[0], imageVertices[1] );
    distanceP1P3 = distanceBetweenPoints( imageVertices[0], imageVertices[2] );

    // calc paper, paperVertices; take orientation into account
    if ( distanceP1P2 > distanceP1P3 ) {
        scaleFactor = ceil( lroundf(distanceP1P2/paperHeight) ); // we always want to scale the image down to maintain the best quality possible
        paperImage = cv::Mat( paperWidth*scaleFactor, paperHeight*scaleFactor, CV_8UC3 );
        paperVertices[0] = cv::Point( 0, 0 );
        paperVertices[1] = cv::Point( paperHeight*scaleFactor, 0 );
        paperVertices[2] = cv::Point( 0, paperWidth*scaleFactor );
        paperVertices[3] = cv::Point( paperHeight*scaleFactor, paperWidth*scaleFactor );
    }
    else {
        isLandscape = false;
        scaleFactor = ceil( lroundf(distanceP1P3/paperHeight) ); // we always want to scale the image down to maintain the best quality possible
        paperImage = cv::Mat( paperHeight*scaleFactor, paperWidth*scaleFactor, CV_8UC3 );
        paperVertices[0] = cv::Point( 0, 0 );
        paperVertices[1] = cv::Point( paperWidth*scaleFactor, 0 );
        paperVertices[2] = cv::Point( 0, paperHeight*scaleFactor );
        paperVertices[3] = cv::Point( paperWidth*scaleFactor, paperHeight*scaleFactor );
    }

    cv::Mat warpMatrix = getPerspectiveTransform( imageVertices, paperVertices );
    cv::warpPerspective( image, paperImage, warpMatrix, paperImage.size(), cv::INTER_LINEAR, cv::BORDER_CONSTANT );

    if (true) {
        cv::Rect rect = boundingRect( cv::Mat(square) );
        cv::rectangle( image, rect.tl(), rect.br(), cv::Scalar(0,255,0), 5, 8, 0 );
        UIImage *object = [self UIImageFromCVMat:paperImage];
    }

    // we want portrait output
    if ( isLandscape ) {
        cv::transpose( paperImage, paperImageCorrected );
        cv::flip( paperImageCorrected, paperImageCorrected, 1 );
        return paperImageCorrected;
    }
    return paperImage;
}
EDITED: I used the method below to get the color from the image. But now my problem is that after converting my original image to cv::Mat, the cropped region already has a transparent grey cast over it, so I always end up with the same color.
Is there any direct way to get the original color from the cv::Mat image?
- (UIColor *)averageColor: (UIImage *) image {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char rgba[4];
    CGContextRef context = CGBitmapContextCreate(rgba, 1, 1, 8, 4, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);

    CGContextDrawImage(context, CGRectMake(0, 0, 1, 1), image.CGImage);
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);

    if (rgba[3] > 0) {
        CGFloat alpha = ((CGFloat)rgba[3]) / 255.0;
        CGFloat multiplier = alpha / 255.0;
        return [UIColor colorWithRed:((CGFloat)rgba[0]) * multiplier
                               green:((CGFloat)rgba[1]) * multiplier
                                blue:((CGFloat)rgba[2]) * multiplier
                               alpha:alpha];
    }
    else {
        return [UIColor colorWithRed:((CGFloat)rgba[0]) / 255.0
                               green:((CGFloat)rgba[1]) / 255.0
                                blue:((CGFloat)rgba[2]) / 255.0
                               alpha:((CGFloat)rgba[3]) / 255.0];
    }
}
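As an alternative (my suggestion, not from the original post): since the cropped paper is already a cv::Mat, its average colour can be read directly with cv::mean, avoiding the UIImage round-trip and the 1x1 bitmap context entirely:

cv::Scalar avg = cv::mean(paperImage);                         // per-channel mean; channel order matches how the Mat was filled
bool isWhitish = avg[0] > 200 && avg[1] > 200 && avg[2] > 200; // 200 is an arbitrary "close to white" threshold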
EDIT 2:
Input image: (screenshot)
Output I am currently getting: (screenshot)
I need to detect only the white A4 sheet.
I resolved it using the Google Vision API.
My objective was to measure cracks (for a builder) from an image. In my case the user places an A4 sheet as a reference next to the crack; I detect the A4 sheet and work out the real-world size covered by each pixel. The builder then taps two points on the crack, and I calculate the distance between them.
With Google Vision I used the document text detection API: I printed my app name across the A4 sheet so that it covers it vertically or horizontally, and the API detects that text and gives me its coordinates.
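To make the scale arithmetic concrete, here is a small sketch with made-up names (sheetWidthInPixels is whatever the detected A4 edge measures; p1 and p2 are the two tapped points). An A4 sheet is 210 mm across its short side, so each pixel covers 210 / sheetWidthInPixels millimetres:

double mmPerPixel = 210.0 / sheetWidthInPixels;                        // A4 short side is 210 mm
double crackLengthMM = hypot(p2.x - p1.x, p2.y - p1.y) * mmPerPixel;   // straight-line distance between the taps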
Android:
import android.graphics.Bitmap;
public void getPixels (int[] pixels, int offset, int stride, int x, int y, int width, int height);
Bitmap bmap = source.renderCroppedGreyscaleBitmap();
int w=bmap.getWidth(),h=bmap.getHeight();
int[] pix = new int[w * h];
bmap.getPixels(pix, 0, w, 0, 0, w, h);
Returns in pixels[] a copy of the data in the bitmap.
Each value is a packed int representing a Color.
The stride parameter allows the caller to allow for gaps in the returned pixels array between rows.
For normal packed results, just pass width for the stride value.
The returned colors are non-premultiplied ARGB values.
iOS:
@implementation UIImage (Pixels)

- (unsigned char *)rgbaPixels
{
    // The amount of bits per pixel, in this case we are doing RGBA so 4 bytes = 32 bits
    #define BITS_PER_PIXEL 32
    // The amount of bits per component, here it is bitsPerPixel divided by 4 because each component (such as red) is only 8 bits
    #define BITS_PER_COMPONENT (BITS_PER_PIXEL/4)
    // The amount of bytes per pixel, in this case a pixel is made up of red, green, blue and alpha so it will be 4
    #define BYTES_PER_PIXEL (BITS_PER_PIXEL/BITS_PER_COMPONENT)

    // Define the colour space (in this case it's RGB)
    CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
    // Find out the number of bytes per row (it's just the width times the number of bytes per pixel)
    size_t bytesPerRow = self.size.width * BYTES_PER_PIXEL;
    // Allocate the appropriate amount of memory to hold the bitmap context
    unsigned char *bitmapData = (unsigned char *)malloc(bytesPerRow * self.size.height);

    // Create the bitmap context
    CGContextRef context = CGBitmapContextCreate(bitmapData, self.size.width, self.size.height, BITS_PER_COMPONENT, bytesPerRow, colourSpace, kCGImageAlphaFirst); // It returns NULL
    /* We are done with the colour space now so no point in keeping it around */
    CGColorSpaceRelease(colourSpace);

    // Create a CGRect to define the amount of pixels we want
    CGRect rect = CGRectMake(0.0, 0.0, self.size.width, self.size.height);
    // Draw the Core Graphics image into the bitmap context, using the rectangle as the bounds
    CGContextDrawImage(context, rect, self.CGImage);

    // Obtain the pixel data from the bitmap context
    unsigned char *pixelData = (unsigned char *)CGBitmapContextGetData(context);
    // Release the bitmap context because we are done using it
    CGContextRelease(context);
    //CGColorSpaceRelease(colourSpace);

    return pixelData;

    #undef BITS_PER_PIXEL
    #undef BITS_PER_COMPONENT
}
But it doesn't work:
CGBitmapContextCreate(bitmapData, self.size.width, self.size.height, BITS_PER_COMPONENT, bytesPerRow, colourSpace, kCGImageAlphaFirst);
returns NULL.
I need the same kind of array as pix[] above. How can I get it?
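For what it's worth, here is a sketch of the same call with a bitmap-info value that CGBitmapContextCreate does accept for an 8-bit-per-component RGB colour space (non-premultiplied kCGImageAlphaFirst is not among the supported pixel formats for bitmap contexts, which is the usual reason for the NULL return); everything else stays as in the category above:

CGContextRef context = CGBitmapContextCreate(bitmapData,
                                             self.size.width,
                                             self.size.height,
                                             BITS_PER_COMPONENT,
                                             bytesPerRow,
                                             colourSpace,
                                             kCGImageAlphaPremultipliedLast); // bytes come back as R,G,B,A

Remember that the returned pointer is the malloc'd bitmapData, so the caller has to free() it when done.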
I am using the grabCut algorithm with the following code:
cv::Mat img=[self cvMatFromUIImage:image];
cv::Rect rectangle(10,10,300,150);
cv::Mat result; // segmentation (4 possible values)
cv::Mat bgModel,fgModel; // the models (internally used)
// GrabCut segmentation
cv::grabCut(img, // input image
result, // segmentation result
rectangle, // rectangle containing foreground
bgModel,fgModel, // models
3, // number of iterations
cv::GC_INIT_WITH_RECT); // use rectangle
// Get the pixels marked as likely foreground
cv::compare(result,cv::GC_PR_FGD,result,cv::CMP_EQ);
// Generate output image
cv::Mat foreground(img.size(),CV_8UC3,
cv::Scalar(255,255,255));
result=result&1;
img.copyTo(foreground, result);
image=[self UIImageFromCVMat:foreground];
ImgView.image=image;
The code to convert a UIImage to a cv::Mat looks like this:
- (cv::Mat)cvMatFromUIImage:(UIImage *)imge
{
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(imge.CGImage);
    CGFloat cols = imge.size.width;
    CGFloat rows = imge.size.height;

    cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels

    CGContextRef contextRef = CGBitmapContextCreate(
        cvMat.data,                 // Pointer to data
        cols,                       // Width of bitmap
        rows,                       // Height of bitmap
        8,                          // Bits per component
        cvMat.step[0],              // Bytes per row
        colorSpace,                 // Colorspace
        kCGImageAlphaNoneSkipLast |
        kCGBitmapByteOrderDefault); // Bitmap info flags

    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), imge.CGImage);
    CGContextRelease(contextRef);
    CGColorSpaceRelease(colorSpace);

    return cvMat;
}
But I get the error:
OpenCV Error: Bad argument (image must have CV_8UC3 type) in grabCut.
If I change the line
cv::Mat cvMat(rows, cols, CV_8UC4);
to
cv::Mat cvMat(rows, cols, CV_8UC3);
then I get:
<Error>: CGBitmapContextCreate: unsupported parameter combination: 8 integer bits/component; 32 bits/pixel; 3-component color space; kCGImageAlphaNoneSkipLast; 342 bytes/row.
I am confused about what to do here. Any help, please?
The problem seems to be that the image you get has an alpha channel, while grabCut expects an RGB image without one. So you need to get rid of the extra channel.
You can do this, for example, with this call:
cv::cvtColor(img, img, CV_RGBA2RGB);
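In the context of the code above, that means dropping the alpha channel right after the UIImage conversion and before grabCut runs; a sketch (cvMatFromUIImage can keep returning CV_8UC4):

cv::Mat img = [self cvMatFromUIImage:image]; // still CV_8UC4 (RGBA)
cv::cvtColor(img, img, CV_RGBA2RGB);         // now CV_8UC3, which is what grabCut expects
cv::grabCut(img, result, rectangle, bgModel, fgModel, 3, cv::GC_INIT_WITH_RECT);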
I'm trying to create a UIImage test pattern for an iOS 5.1 device. The target UIImageView is 320x240 in size, but I was trying to create a 160x120 UIImage test pattern (future, non-test-pattern images will be this size). I want the top half of the box to be blue and the bottom half to be red, but I get what looks like uninitialized memory corrupting the bottom of the image. The code is as follows:
int width = 160;
int height = 120;
unsigned int testData[width * height];
for(int k = 0; k < (width * height) / 2; k++)
testData[k] = 0xFF0000FF; // BGRA (Blue)
for(int k = (width * height) / 2; k < width * height; k++)
testData[k] = 0x0000FFFF; // BGRA (Red)
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * width;
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, &testData, (width * height * 4), NULL);
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGImageAlphaNoneSkipFirst;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGImageRef imageRef = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow,
colorSpaceRef, bitmapInfo, provider, NULL, NO,renderingIntent);
UIImage *myTestImage = [UIImage imageWithCGImage:imageRef];
This should look like another example on Stack Overflow. Anyway, I found that as I decrease the size of the test pattern the "corrupt" portion of the image increases. What is also strange is that I see lines of red in the "corrupt" portion, so it doesn't appear that I'm just messing up the sizes of components. What am I missing? It feels like something in the provider, but I don't see it.
Thanks!
Added screenshots. Here is what it looks like with kCGImageAlphaNoneSkipFirst set:
And here is what it looks like with kCGImageAlphaFirst:
Your pixel data is in an automatic variable, so it's stored on the stack:
unsigned int testData[width * height];
You must be returning from the function where this data is declared. That makes the function's stack frame get popped and reused by other functions, which overwrites the data.
Your image, however, still refers to that pixel data at the same address on the stack. (CGDataProviderCreateWithData doesn't copy the data, it just refers to it.)
To fix: use malloc or CFMutableData or NSMutableData to allocate space for your pixel data on the heap.
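For example, a minimal sketch of the malloc route (my adaptation of the code in the question): the release callback lets the data provider free the buffer once the image no longer needs it.

static void releasePixels(void *info, const void *data, size_t size)
{
    free((void *)data); // called when the CGImage is done with the buffer
}

// ...
unsigned int *testData = malloc(width * height * sizeof(unsigned int));
// fill testData with the blue/red halves exactly as before ...
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, testData,
                                                          width * height * 4,
                                                          releasePixels);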
Your image includes alpha which you then tell the system to ignore by skipping the most significant bits (i.e. the "B" portion of your image). Try setting it to kCGImageAlphaPremultipliedLast instead.
EDIT:
Now that I remember endianness, I realize that the program is probably reading your values backwards, so what you might actually want is kCGImageAlphaPremultipliedFirst.
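Another option that sidesteps the endianness question entirely (my own suggestion, not from either answer): fill the buffer byte by byte instead of with packed 32-bit literals, so the memory layout matches kCGImageAlphaNoneSkipFirst regardless of the host byte order.

uint8_t *p = (uint8_t *)testData;
for (int k = 0; k < width * height; k++) {
    BOOL isTopHalf = (k < (width * height) / 2);
    p[4*k + 0] = 0xFF;                     // skipped byte (kCGImageAlphaNoneSkipFirst)
    p[4*k + 1] = isTopHalf ? 0x00 : 0xFF;  // red
    p[4*k + 2] = 0x00;                     // green
    p[4*k + 3] = isTopHalf ? 0xFF : 0x00;  // blue
}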