I'm using Xcode and would like to load a grayscale image into my program, but I'm having a problem with it.
Previously I converted a grayscale IplImage(size, 8, 1) to a UIImage and stored it as a JPEG. Now I would like to reverse the process to get back the IplImage.
I obtain the UIImage by doing
UIImage *uiimage1 = [UIImage imageNamed:@"IMG_1.JPG"];
Then I use the following function.
- (IplImage *)CreateIplImageFromUIImage:(UIImage *)image {
    // Getting CGImage from UIImage
    CGImageRef imageRef = image.CGImage;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Creating temporary IplImage for drawing
    IplImage *iplimage = cvCreateImage(
        cvSize(image.size.width, image.size.height), IPL_DEPTH_8U, 4
    );
    // Creating CGContext for temporary IplImage
    CGContextRef contextRef = CGBitmapContextCreate(
        iplimage->imageData, iplimage->width, iplimage->height,
        iplimage->depth, iplimage->widthStep,
        colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrderDefault
    );
    // Drawing CGImage to CGContext
    CGContextDrawImage(
        contextRef,
        CGRectMake(0, 0, image.size.width, image.size.height),
        imageRef
    );
    CGContextRelease(contextRef);
    CGColorSpaceRelease(colorSpace);
    // Creating result IplImage
    IplImage *ret = cvCreateImage(cvGetSize(iplimage), IPL_DEPTH_8U, 3);
    cvCvtColor(iplimage, ret, CV_RGBA2BGR);
    cvReleaseImage(&iplimage);
    return ret;
}
This works fine for loading a color image with the standard RGBA channels, but I have a problem when I want to load a grayscale image with only one channel and no alpha channel.
I have tried changing colorSpace to CGColorSpaceCreateDeviceGray(), changing the number of channels from 4 to 1, commenting out the cvCvtColor call, and returning IplImage *iplimage directly.
However, I still get the error: CGContextDrawImage: invalid context 0x0
I think there might be a problem with CGBitmapContextCreate, and likely something wrong with the bitmapInfo.
I tried a few combinations such as kCGBitmapByteOrderDefault|kCGImageAlphaNone, but none of them work.
Any idea what I should do? Thanks in advance!
Look into CGBitmapContextCreate's documentation (in Xcode, Cmd + click on the symbol should show it, I think).
Did you set size_t bytesPerRow correctly for that case? An RGBA pixel is 8+8+8+8 bits, but a single grayscale channel is only 8 bits, so bytesPerRow shrinks accordingly.
CGContextRef CGBitmapContextCreate ( void *data, size_t width, size_t height, size_t bitsPerComponent, size_t bytesPerRow, CGColorSpaceRef space, CGBitmapInfo bitmapInfo );
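For what it's worth, here is a minimal sketch of a grayscale variant (untested; the method name is mine). The key points are passing 8 for bitsPerComponent, the IplImage's widthStep for bytesPerRow, and kCGImageAlphaNone with a gray color space. If any of these disagree, CGBitmapContextCreate returns NULL, which is exactly what the "invalid context 0x0" error means:
- (IplImage *)createGrayIplImageFromUIImage:(UIImage *)image {
    CGImageRef imageRef = image.CGImage;
    CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
    // One 8-bit channel per pixel
    IplImage *ipl = cvCreateImage(cvSize(image.size.width, image.size.height),
                                  IPL_DEPTH_8U, 1);
    // bitsPerComponent is 8, bytesPerRow is the IplImage's row stride,
    // and a gray color space takes kCGImageAlphaNone
    CGContextRef ctx = CGBitmapContextCreate(ipl->imageData,
                                             ipl->width, ipl->height,
                                             8, ipl->widthStep,
                                             gray, kCGImageAlphaNone);
    CGContextDrawImage(ctx, CGRectMake(0, 0, ipl->width, ipl->height), imageRef);
    CGContextRelease(ctx);
    CGColorSpaceRelease(gray);
    return ipl;
}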
I have gone through many similar questions here on SO, but none gives the specific output I need. I have tried converting the image to black and white, but for some reason some of the text does not come out clearly; it gets distorted. Below is the code that I have tried so far...
+ (UIImage *)grayImage:(UIImage *)processedImage {
    cv::Mat grayImage = [MMOpenCVHelper cvMatGrayFromAdjustedUIImage:processedImage];
    cv::adaptiveThreshold(grayImage, grayImage, 255, cv::ADAPTIVE_THRESH_GAUSSIAN_C, cv::THRESH_BINARY, 11, 2);
    cv::GaussianBlur(grayImage, grayImage, cv::Size(1,1), 50.0);
    UIImage *grayeditImage = [MMOpenCVHelper UIImageFromCVMat:grayImage];
    grayImage.release();
    return grayeditImage;
}
+ (cv::Mat)cvMatGrayFromAdjustedUIImage:(UIImage *)image {
    cv::Mat cvMat = [self cvMatFromAdjustedUIImage:image];
    cv::Mat grayMat;
    if (cvMat.channels() == 1) {
        grayMat = cvMat;
    } else {
        grayMat = cv::Mat(cvMat.rows, cvMat.cols, CV_8UC1);
        cv::cvtColor(cvMat, grayMat, cv::COLOR_BGR2GRAY);
    }
    return grayMat;
}
+ (cv::Mat)cvMatFromAdjustedUIImage:(UIImage *)image {
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
    CGFloat cols = image.size.width;
    CGFloat rows = image.size.height;
    cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels
    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,     // Pointer to backing data
                                                    cols,           // Width of bitmap
                                                    rows,           // Height of bitmap
                                                    8,              // Bits per component
                                                    cvMat.step[0],  // Bytes per row
                                                    colorSpace,     // Colorspace
                                                    kCGImageAlphaNoneSkipLast |
                                                    kCGBitmapByteOrderDefault);
    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
    CGContextRelease(contextRef);
    return cvMat;
}
+ (UIImage *)UIImageFromCVMat:(cv::Mat)cvMat {
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];
    CGColorSpaceRef colorSpace;
    CGBitmapInfo bitmapInfo;
    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
        bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrderDefault;
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
        bitmapInfo = kCGBitmapByteOrder32Little | (
            cvMat.elemSize() == 3 ? kCGImageAlphaNone : kCGImageAlphaNoneSkipFirst
        );
    }
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    // Creating CGImage from cv::Mat
    CGImageRef imageRef = CGImageCreate(cvMat.cols,               // width
                                        cvMat.rows,               // height
                                        8,                        // bits per component
                                        8 * cvMat.elemSize(),     // bits per pixel
                                        cvMat.step[0],            // bytesPerRow
                                        colorSpace,               // colorspace
                                        bitmapInfo,               // bitmap info
                                        provider,                 // CGDataProviderRef
                                        NULL,                     // decode
                                        false,                    // should interpolate
                                        kCGRenderingIntentDefault // intent
                                        );
    // Getting UIImage from CGImage
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return finalImage;
}
The output that I got from the above code is here, and the result that I want is here. Any help would be great! Thank you.
EDITED: Original image here
I would like to answer my own question, as it might be helpful to others. I got the B/W output by switching the adaptive threshold method to mean-based and raising the constant subtracted from the mean; below is the code that I used:
+ (UIImage *)grayImage:(UIImage *)processedImage { // B/W
    cv::Mat grayImage = [MMOpenCVHelper cvMatGrayFromAdjustedUIImage:processedImage];
    cv::adaptiveThreshold(grayImage, grayImage, 255, cv::ADAPTIVE_THRESH_MEAN_C, cv::THRESH_BINARY, 11, 7);
    cv::GaussianBlur(grayImage, grayImage, cv::Size(1,1), 50.0);
    UIImage *grayeditImage = [MMOpenCVHelper UIImageFromCVMat:grayImage];
    grayImage.release();
    return grayeditImage;
}
I'm using the following code to convert UIImage* and cv::Mat to each other:
- (cv::Mat)cvMatFromUIImage:(UIImage *)image
{
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
    CGFloat cols = image.size.width;
    CGFloat rows = image.size.height;
    cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels (color channels + alpha)
    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,     // Pointer to data
                                                    cols,           // Width of bitmap
                                                    rows,           // Height of bitmap
                                                    8,              // Bits per component
                                                    cvMat.step[0],  // Bytes per row
                                                    colorSpace,     // Colorspace
                                                    kCGImageAlphaNoneSkipLast |
                                                    kCGBitmapByteOrderDefault); // Bitmap info flags
    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
    CGContextRelease(contextRef);
    return cvMat;
}
and
- (UIImage *)UIImageFromCVMat:(cv::Mat)cvMat
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];
    CGColorSpaceRef colorSpace;
    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    // Creating CGImage from cv::Mat
    CGImageRef imageRef = CGImageCreate(cvMat.cols,                // width
                                        cvMat.rows,                // height
                                        8,                         // bits per component
                                        8 * cvMat.elemSize(),      // bits per pixel
                                        cvMat.step[0],             // bytesPerRow
                                        colorSpace,                // colorspace
                                        kCGImageAlphaNone | kCGBitmapByteOrderDefault, // bitmap info
                                        provider,                  // CGDataProviderRef
                                        NULL,                      // decode
                                        false,                     // should interpolate
                                        kCGRenderingIntentDefault  // intent
                                        );
    // Getting UIImage from CGImage
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return finalImage;
}
I took these from the OpenCV documentation. I use them as follows:
UIImage *img = [UIImage imageNamed:@"transparent.png"];
UIImage *img2 = [self UIImageFromCVMat:[self cvMatFromUIImage:img]];
However, these functions lose the alpha channel information. I know it is because of the flags kCGImageAlphaNone and kCGImageAlphaNoneSkipLast; unfortunately, I couldn't find a way to keep the alpha information by changing these flags.
So, how do I convert between these two types without losing alpha information?
Here is the image that I use:
We should use these functions from opencv v2.4.6:
UIImage* MatToUIImage(const cv::Mat& image);
void UIImageToMat(const UIImage* image, cv::Mat& m, bool alphaExist = false);
And don't forget to include:
opencv2/imgcodecs/ios.h
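For reference, a minimal round trip with these helpers might look like this (a sketch, assuming you want the alpha channel preserved, so alphaExist is set to true):
#import <opencv2/imgcodecs/ios.h> // in older 2.4.x releases this header lived under opencv2/highgui/ios.h

UIImage *source = [UIImage imageNamed:@"transparent.png"];
cv::Mat mat;
UIImageToMat(source, mat, true);        // alphaExist = true keeps the 4th channel
UIImage *roundTrip = MatToUIImage(mat); // back to UIImage, alpha intact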
You need to not pass kCGImageAlphaNoneSkipLast, and instead pass (kCGBitmapByteOrder32Host | kCGImageAlphaPremultipliedFirst) to get premultiplied alpha in BGRA format. Core Graphics only supports premultiplied alpha for bitmap contexts. But you will need to check how OpenCV represents alpha in pixels, to determine how to tell OpenCV that the pixels are already premultiplied. The code I have used assumes straight (non-premultiplied) alpha on the OpenCV side, so be careful about that.
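A sketch of what that change might look like, based on the question's cvMatFromUIImage (only the bitmapInfo flags differ; the method name is mine):
- (cv::Mat)cvMatWithAlphaFromUIImage:(UIImage *)image {
    CGFloat cols = image.size.width;
    CGFloat rows = image.size.height;
    cv::Mat cvMat(rows, cols, CV_8UC4); // will hold premultiplied BGRA
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data, cols, rows,
                                                    8,             // bits per component
                                                    cvMat.step[0], // bytes per row
                                                    colorSpace,
                                                    kCGBitmapByteOrder32Host |
                                                    kCGImageAlphaPremultipliedFirst);
    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
    CGContextRelease(contextRef);
    CGColorSpaceRelease(colorSpace);
    // The pixels are premultiplied; if the rest of the pipeline expects straight
    // alpha, un-premultiply first, e.g. cv::cvtColor(cvMat, cvMat, cv::COLOR_mRGBA2RGBA);
    return cvMat;
}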
Hi, I am resizing my image using the code from
http://vocaro.com/trevor/blog/2009/10/12/resize-a-uiimage-the-right-way/
- (UIImage *)resizedImage:(CGSize)newSize
                transform:(CGAffineTransform)transform
           drawTransposed:(BOOL)transpose
     interpolationQuality:(CGInterpolationQuality)quality {
    CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
    CGRect transposedRect = CGRectMake(0, 0, newRect.size.height, newRect.size.width);
    CGImageRef imageRef = self.CGImage;
    CGBitmapInfo bitMapInfo = CGImageGetBitmapInfo(imageRef);
    // Build a context that's the same dimensions as the new size
    CGContextRef bitmap = CGBitmapContextCreate(NULL,
                                                newRect.size.width,
                                                newRect.size.height,
                                                CGImageGetBitsPerComponent(imageRef),
                                                0,
                                                CGImageGetColorSpace(imageRef),
                                                bitMapInfo);
    // Rotate and/or flip the image if required by its orientation
    CGContextConcatCTM(bitmap, transform);
    // Set the quality level to use when rescaling
    CGContextSetInterpolationQuality(bitmap, quality);
    // Draw into the context; this scales the image
    CGContextDrawImage(bitmap, transpose ? transposedRect : newRect, imageRef);
    // Get the resized image from the context as a UIImage
    CGImageRef newImageRef = CGBitmapContextCreateImage(bitmap);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
    // Clean up
    CGContextRelease(bitmap);
    CGImageRelease(newImageRef);
    return newImage;
}
It works as expected for normal images. However, it fails when I give it a PNG-8 image. I know it is a PNG-8 image from running file image.png on the command line.
The output is
image.png: PNG image data, 800 x 264, 8-bit colormap, non-interlaced
The error message in the console is "colorspace not supported".
After some googling, I learned that indexed color spaces are not supported for bitmap graphics contexts.
Following some advice, instead of using the original colorspace, I changed it to
colorSpace = CGColorSpaceCreateDeviceRGB();
Now I am getting this new error:
CGBitmapContextCreate: unsupported parameter combination: 8 integer bits/component; 24 bits/pixel; 3-component color space; kCGImageAlphaNone; 2400 bytes/row.
FYI, my image is 800 px wide, which explains the 2400 bytes/row (800 pixels × 3 bytes).
How can I resolve this issue? Thanks a lot!
I realized that the list of supported pixel formats is here:
https://developer.apple.com/library/ios/documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_context/dq_context.html#//apple_ref/doc/uid/TP30001066-CH203-BCIBHHBB
and none of them is 24 bits/pixel.
So I ended up using the accepted solution here:
iPhone: Changing CGImageAlphaInfo of CGImage
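The gist of that approach, as I understand it (a sketch, not the linked answer verbatim), is to redraw the image into a supported 32-bits-per-pixel RGB context, so the 24-bpp combination never comes up, and then resize that copy instead:
CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL,
                                         CGImageGetWidth(imageRef),
                                         CGImageGetHeight(imageRef),
                                         8,  // bits per component
                                         0,  // let CG compute bytes per row
                                         rgb,
                                         kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault);
CGContextDrawImage(ctx,
                   CGRectMake(0, 0, CGImageGetWidth(imageRef), CGImageGetHeight(imageRef)),
                   imageRef);
CGImageRef rgbImageRef = CGBitmapContextCreateImage(ctx); // 32 bpp, alpha byte skipped
CGContextRelease(ctx);
CGColorSpaceRelease(rgb);
// resize rgbImageRef as before, then CGImageRelease(rgbImageRef)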
I was wondering if there is a way to create a CGImage corresponding to a rectangle inside a context?
What I am doing right now: I use CGBitmapContextCreateImage to create a CGImage from the context, then CGImageCreateWithImageInRect to extract the sub-image.
Anil
Try this:
static CGImageRef createImageWithSectionOfBitmapContext(CGContextRef bigContext,
    size_t x, size_t y, size_t width, size_t height)
{
    uint8_t *data = CGBitmapContextGetData(bigContext);
    size_t bytesPerRow = CGBitmapContextGetBytesPerRow(bigContext);
    size_t bytesPerPixel = CGBitmapContextGetBitsPerPixel(bigContext) / 8;
    data += x * bytesPerPixel + y * bytesPerRow;
    CGContextRef smallContext = CGBitmapContextCreate(data,
        width, height,
        CGBitmapContextGetBitsPerComponent(bigContext), bytesPerRow,
        CGBitmapContextGetColorSpace(bigContext),
        CGBitmapContextGetBitmapInfo(bigContext));
    CGImageRef image = CGBitmapContextCreateImage(smallContext);
    CGContextRelease(smallContext);
    return image;
}
or this:
static CGImageRef createImageWithSectionOfBitmapContext(CGContextRef bigContext,
    size_t x, size_t y, size_t width, size_t height)
{
    uint8_t *data = CGBitmapContextGetData(bigContext);
    size_t bytesPerRow = CGBitmapContextGetBytesPerRow(bigContext);
    size_t bytesPerPixel = CGBitmapContextGetBitsPerPixel(bigContext) / 8;
    data += x * bytesPerPixel + y * bytesPerRow;
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, data,
        height * bytesPerRow, NULL);
    CGImageRef image = CGImageCreate(width, height,
        CGBitmapContextGetBitsPerComponent(bigContext),
        CGBitmapContextGetBitsPerPixel(bigContext),
        CGBitmapContextGetBytesPerRow(bigContext),
        CGBitmapContextGetColorSpace(bigContext),
        CGBitmapContextGetBitmapInfo(bigContext),
        provider, NULL, NO, kCGRenderingIntentDefault);
    CGDataProviderRelease(provider);
    return image;
}
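Hypothetical usage, assuming bigContext is an existing bitmap context: grab a 100x100 tile whose top-left corner is at (10, 20):
CGImageRef tile = createImageWithSectionOfBitmapContext(bigContext, 10, 20, 100, 100);
// ... draw or encode the tile ...
CGImageRelease(tile);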
You can create a cropped image as follows, as mentioned here.
For example:
UIImage *image = //original image
CGRect rect = //cropped rect
CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], rect);
UIImage *img = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
You need to get the CGImage from the context to use the above code to crop it. You can use CGBitmapContextCreateImage as mentioned in the question. Here is the documentation.
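Putting the two steps together (a sketch; context and rect are assumed to exist already):
CGImageRef full = CGBitmapContextCreateImage(context);  // snapshot the whole context
CGImageRef cropped = CGImageCreateWithImageInRect(full, rect);
UIImage *img = [UIImage imageWithCGImage:cropped];
CGImageRelease(cropped);
CGImageRelease(full);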
You could create your CGBitmapContext with a buffer that you allocated, and create a CGImage from scratch using the same buffer. With the context and the image sharing a buffer, you can draw into the context and then create a CGImage with that section of the master image.
Note that if you draw into the same context afterward, the cropped image may actually pick up the changes (depending on just how much shared-referencing-instead-of-copying is going on internally). Depending on what you're doing, you may or may not find this desirable.
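A minimal sketch of that shared-buffer idea, assuming a 32-bit RGBA context (all names and sizes here are illustrative):
size_t width = 400, height = 300, bytesPerRow = width * 4;
void *buffer = calloc(height, bytesPerRow); // shared backing store
CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(buffer, width, height, 8, bytesPerRow,
                                         rgb, kCGImageAlphaPremultipliedLast);
// ... draw into ctx ...
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer,
                                                          height * bytesPerRow, NULL);
CGImageRef master = CGImageCreate(width, height, 8, 32, bytesPerRow, rgb,
                                  kCGImageAlphaPremultipliedLast, provider,
                                  NULL, false, kCGRenderingIntentDefault);
CGImageRef section = CGImageCreateWithImageInRect(master, CGRectMake(10, 10, 50, 50));
CGDataProviderRelease(provider);
CGColorSpaceRelease(rgb);
// section may reflect later drawing into ctx, since both share buffer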
I have a freehand drawing view (users can draw lines with their finger). I only use a few colors, so I wrote a compression algorithm for the drawing (I want to send it over a local network to another iPad), but I can't seem to get the data out of the graphics context accurately, even with this simple round-trip test:
// Get the data
UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0f);
CGContextRef c = UIGraphicsGetCurrentContext();
[self.layer renderInContext:c];
baseImageView.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGImageRef imageRef = baseImageView.image.CGImage;
NSData *dataToUse = (NSData *)CGDataProviderCopyData(CGImageGetDataProvider(imageRef));

// Reuse the data
CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)dataToUse);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef test = CGImageCreate(width, height, 8, 32, 4 * width, colorSpace,
                                kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                provider, NULL, false,
                                kCGRenderingIntentDefault); // width and height come from another part of the program
imageView.image = [UIImage imageWithCGImage:test];
I simply copied the data out of one CGImage and tried to insert it into another, but the result is garbage. Not only that: for some reason the copied data comes out as BGRA, while CGImageCreate wants RGBA. Where am I going wrong with this round-trip test?
It looks like the answer is that it's not enough to just grab the bytes from the image's data provider; the layout of a CGImage's backing store is whatever the system chose when it decoded or snapshotted the image. You need to actually render the image into a bitmap context with a known format and take the data from there. Revised version:
// Get the data
CGImageRef imageRef = baseImageView.image.CGImage;
size_t height = CGImageGetHeight(imageRef);
size_t width = CGImageGetWidth(imageRef);
size_t bufferLength = width * height * 4;
unsigned char *rawData = (unsigned char *)malloc(bufferLength);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(rawData, width, height, 8,
                                             4 * width, colorSpace,
                                             kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
NSData *dataToUse = [NSData dataWithBytes:rawData length:bufferLength];
// Later: free(rawData);
Using the data is still the same.
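That is, the reuse half of the question works unchanged (a sketch, assuming width and height match the render step above), because the buffer is now guaranteed to be big-endian RGBA with premultiplied alpha, which is exactly what the CGImageCreate flags claim:
CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)dataToUse);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef test = CGImageCreate(width, height, 8, 32, 4 * width, colorSpace,
                                kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                provider, NULL, false, kCGRenderingIntentDefault);
imageView.image = [UIImage imageWithCGImage:test];
CGImageRelease(test);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace);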