I need to convert a matrix representing a b/w image to UIImage.
For example:
A matrix like this (just a representation); the image would be the symbol '+':
1 0 1
0 0 0
1 0 1
This matrix represents a black-and-white image, where black is 0 and white is 1. I need to convert this matrix to a UIImage. In this case the width would be 3 and the height would be 3.
I use this method to create an image for my Game Of Life app. The advantage over drawing into a graphics context is that this is ridiculously fast.
This was all written a long time ago so it's a bit messier than what I might do now, but the method would stay the same. For some reason I defined these outside the method (as instance variables)...
{
unsigned int length_in_bytes;
unsigned char *cells;
unsigned char *temp_cells;
unsigned char *changes;
unsigned char *temp_changes;
GLubyte *buffer;
CGImageRef imageRef;
CGDataProviderRef provider;
int ar, ag, ab, dr, dg, db;
CGFloat arf, agf, abf, drf, dgf, dbf, blah; // CGFloat, since -getRed:green:blue:alpha: takes CGFloat pointers
}
You won't need all of these for the image.
The method itself...
- (UIImage*)imageOfMapWithDeadColor:(UIColor *)deadColor aliveColor:(UIColor *)aliveColor
{
//translate colours into rgb components
if ([deadColor isEqual:[UIColor whiteColor]]) {
dr = dg = db = 255;
} else if ([deadColor isEqual:[UIColor blackColor]]) {
dr = dg = db = 0;
} else {
[deadColor getRed:&drf green:&dgf blue:&dbf alpha:&blah];
dr = drf * 255;
dg = dgf * 255;
db = dbf * 255;
}
if ([aliveColor isEqual:[UIColor whiteColor]]) {
ar = ag = ab = 255;
} else if ([aliveColor isEqual:[UIColor blackColor]]) {
ar = ag = ab = 0;
} else {
[aliveColor getRed:&arf green:&agf blue:&abf alpha:&blah];
ar = arf * 255;
ag = agf * 255;
ab = abf * 255;
}
// dr = 255, dg = 255, db = 255;
// ar = 0, ag = 0, ab = 0;
//create bytes of image from the cell map
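// note: 'buffer' (self.width * self.height * 4 bytes) and 'provider' (a CGDataProviderRef created
// over that buffer, e.g. with CGDataProviderCreateWithData) are assumed to be allocated once
// elsewhere and reused between frames; this method only refills the existing buffer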
int yRef, cellRef;
unsigned char *cell_ptr = cells;
for (int y=0; y<self.height; y++)
{
yRef = y * (self.width * 4);
int x = 0;
do
{
cellRef = yRef + 4 * x;
if (*cell_ptr & 0x01) {
//alive colour
buffer[cellRef] = ar;
buffer[cellRef + 1] = ag;
buffer[cellRef + 2] = ab;
buffer[cellRef + 3] = 255;
} else {
//dead colour
buffer[cellRef] = dr;
buffer[cellRef + 1] = dg;
buffer[cellRef + 2] = db;
buffer[cellRef + 3] = 255;
}
cell_ptr++;
} while (++x < self.width);
}
//create image
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// render the byte array into an image ref
imageRef = CGImageCreate(self.width, self.height, 8, 32, 4 * self.width, colorSpace, kCGBitmapByteOrderDefault, provider, NULL, NO, kCGRenderingIntentDefault);
// convert image ref to UIImage
UIImage *image = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
CGColorSpaceRelease(colorSpace);
//return image
return image;
}
You should be able to adapt this to create an image from your matrix.
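As a rough, untested sketch of that adaptation (the method name and the matrix/width/height parameters are mine, not from the question): allocate an RGBA buffer, map each matrix entry to white (1) or black (0), wrap the buffer in a data provider, and build the image from it.
static void releaseMatrixBuffer(void *info, const void *data, size_t size)
{
    free((void *)data); // called by CoreGraphics once the image no longer needs the bytes
}

- (UIImage *)imageFromMatrix:(const unsigned char *)matrix width:(int)width height:(int)height
{
    size_t length = (size_t)width * height * 4;
    unsigned char *buffer = malloc(length);                       // RGBA, one pixel per matrix entry
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int offset = (y * width + x) * 4;
            unsigned char value = matrix[y * width + x] ? 255 : 0; // 1 -> white, 0 -> black
            buffer[offset]     = value;                            // R
            buffer[offset + 1] = value;                            // G
            buffer[offset + 2] = value;                            // B
            buffer[offset + 3] = 255;                              // A (opaque)
        }
    }
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, length, releaseMatrixBuffer);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef imageRef = CGImageCreate(width, height, 8, 32, 4 * width, colorSpace,
                                        (CGBitmapInfo)kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault,
                                        provider, NULL, NO, kCGRenderingIntentDefault);
    UIImage *image = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(provider);
    return image;
}
Calling it with the 3x3 '+' matrix from the question (width 3, height 3) should produce the expected image.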
In order to convert a matrix to a UIImage:
CGSize size = CGSizeMake(columns, lines); // width = columns, height = lines
UIGraphicsBeginImageContextWithOptions(size, YES, 0);
for (int i = 0; i < lines; i++)
{
for (int j = 0; j < columns; j++)
{
// Choose color to draw
if ( matrixDraw[i*columns + j] == 1 ) {
[[UIColor whiteColor] setFill];
} else {
// Draw black pixel
[[UIColor blackColor] setFill];
}
// Draw just one pixel at row i, column j
UIRectFill(CGRectMake(j, i, 1, 1));
}
}
// Create UIImage with the current context that we have just created
UIImage *imageFinal = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Basically what we are doing is:
Create a context with the size of our image
Loop over each pixel to read its value (black is 0 and white is 1) and, depending on the value, set the fill color.
The most important function:
UIRectFill(CGRectMake(j, i, 1, 1));
This function lets us fill the pixel at position (j, i) with a given width and height (both 1 here, to fill a single pixel).
Finally we create a UIImage from the current context and end the image context.
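For the 3x3 '+' matrix from the question, the inputs to the snippet above would look like this (just an illustration):
int lines = 3, columns = 3;
int matrixDraw[9] = { 1, 0, 1,
                      0, 0, 0,
                      1, 0, 1 };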
Hope it helps someone!
I'm using the following method from the Advanced Video Example on Github to capture the raw video data:
- (AgoraVideoRawData *)mediaDataPlugin:(AgoraMediaDataPlugin *)mediaDataPlugin didCapturedVideoRawData:(AgoraVideoRawData *)videoRawData
I have already been able to convert the Y, U and V buffers to a CVPixelBuffer > CIImage and apply the blur, but I'm having trouble translating the CIImage data back into YUV buffers.
I have already succeeded in setting random values to the YUV buffers, which results in a grey video frame being sent to the other user.
memset(videoRawData.yBuffer, 128, videoRawData.yStride * videoRawData.height);
memset(videoRawData.uBuffer, 128, videoRawData.uStride * videoRawData.height / 2);
memset(videoRawData.vBuffer, 128, videoRawData.vStride * videoRawData.height / 2);
Could someone point me in the right direction on how to translate CIImage data back into YUV buffers? Or if there is a more efficient way to blur a YUV video data stream, I'm willing to try that.
I have found a solution that works for me. I will try to post a complete answer so others might find a solution that works for them. See the comments in the code for more explanation.
Put these helpers somewhere in your file. They will be used later to extract the RGB components of each pixel:
#define Mask8(x) ( (x) & 0xFF )
#define R(x) ( Mask8(x) )
#define G(x) ( Mask8(x >> 8 ) )
#define B(x) ( Mask8(x >> 16) )
All code posted here is inside the - (AgoraVideoRawData *)mediaDataPlugin:(AgoraMediaDataPlugin *)mediaDataPlugin didCapturedVideoRawData:(AgoraVideoRawData *)videoRawData method, for the sake of keeping the answer to this question simple.
- (AgoraVideoRawData *)mediaDataPlugin:(AgoraMediaDataPlugin *)mediaDataPlugin didCapturedVideoRawData:(AgoraVideoRawData *)videoRawData
{
// create pixelbuffer from raw video data
NSDictionary *pixelAttributes = @{(NSString *)kCVPixelBufferIOSurfacePropertiesKey: @{}};
CVPixelBufferRef pixelBuffer = NULL;
CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
videoRawData.width,
videoRawData.height,
kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, // NV12
(__bridge CFDictionaryRef)(pixelAttributes),
&pixelBuffer);
if (result != kCVReturnSuccess) {
NSLog(#"Unable to create cvpixelbuffer %d", result);
}
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
unsigned char *yDestPlane = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
for (int i = 0, k = 0; i < videoRawData.height; i ++) {
for (int j = 0; j < videoRawData.width; j ++) {
yDestPlane[k++] = videoRawData.yBuffer[j + i * videoRawData.yStride];
}
}
unsigned char *uvDestPlane = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
for (int i = 0, k = 0; i < videoRawData.height / 2; i ++) {
for (int j = 0; j < videoRawData.width / 2; j ++) {
uvDestPlane[k++] = videoRawData.uBuffer[j + i * videoRawData.uStride];
uvDestPlane[k++] = videoRawData.vBuffer[j + i * videoRawData.vStride];
}
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
// create CIImage from pixel buffer
CIImage *coreImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
// apply pixel filter to image
CIFilter *pixelFilter = [CIFilter filterWithName:@"CIPixellate"];
[pixelFilter setDefaults];
[pixelFilter setValue:coreImage forKey:kCIInputImageKey];
[pixelFilter setValue:@40 forKey:@"inputScale"];
CIVector *vector = [[CIVector alloc] initWithX:160 Y:160]; // x & y should be multiple of 'inputScale' parameter
[pixelFilter setValue:vector forKey:@"inputCenter"];
CIImage *outputBlurredImage = [pixelFilter outputImage];
CIContext *blurImageContext = [CIContext contextWithOptions:nil];
CGImageRef inputCGImage = [blurImageContext createCGImage:outputBlurredImage fromRect:[coreImage extent]];
// write blurred image data to YUV buffers
NSUInteger blurredWidth = CGImageGetWidth(inputCGImage);
NSUInteger blurredHeight = CGImageGetHeight(inputCGImage);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * blurredWidth;
NSUInteger bitsPerComponent = 8;
UInt32 * pixels = (UInt32 *) calloc(blurredHeight * blurredWidth, sizeof(UInt32));
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pixels, blurredWidth, blurredHeight, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextDrawImage(context, CGRectMake(0, 0, blurredWidth, blurredHeight), inputCGImage);
int frameSize = videoRawData.width * videoRawData.height;
int yIndex = 0; // Y start index
int uIndex = frameSize; // U start index
int vIndex = frameSize * 5 / 4; // V start index: w*h*5/4
// allocate buffers to store YUV data
UInt32 *currentPixel = pixels;
char *yBuffer = malloc( sizeof(char) * ( frameSize + 1 ) );
char *uBuffer = malloc( sizeof(char) * ( uIndex + frameSize + 1 ) );
char *vBuffer = malloc( sizeof(char) * ( vIndex + frameSize + 1 ) );
// loop through each RGB pixel and translate to YUV
for (int j = 0; j < blurredHeight; j++) {
for (int i = 0; i < blurredWidth; i++) {
UInt32 color = *currentPixel;
UInt32 R = R(color);
UInt32 G = G(color);
UInt32 B = B(color);
UInt32 Y = ((66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
UInt32 U = ((-38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
UInt32 V = ((112 * R - 94 * G - 18 * B + 128) >> 8) + 128;
yBuffer[yIndex++] = Y;
if (j % 2 == 0 && i % 2 == 0) {
uBuffer[uIndex++] = U;
vBuffer[vIndex++] = V;
}
currentPixel++;
}
}
// copy new YUV values to given videoRawData object buffers
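// note: strlen() stops at the first zero byte, so it is not a reliable length for raw pixel data;
// explicit sizes (e.g. frameSize for Y and frameSize / 4 for U and V) would be safer here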
memcpy((void*)videoRawData.yBuffer, yBuffer, strlen(yBuffer));
memcpy((void*)videoRawData.uBuffer, uBuffer, strlen(uBuffer));
memcpy((void*)videoRawData.vBuffer, vBuffer, strlen(vBuffer));
// cleanup
CVPixelBufferRelease(pixelBuffer);
CGImageRelease(inputCGImage);
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
free(pixels);
free(yBuffer);
free(uBuffer);
free(vBuffer);
return videoRawData;
}
I'm making an app to edit an image's HSL color space via opencv2 and some conversion code from the Internet.
I suppose the original image's color space is RGB, so here is my thought:
Convert the UIImage to cvMat
Convert the colorspace from BGR to HLS.
Loop through all the pixel points to get the corresponding HLS values.
Custom algorithms.
Write the changed HLS values back to the cvMat.
Convert the cvMat to UIImage
Here is my code:
Conversion between UIImage and cvMat
Reference: https://stackoverflow.com/a/10254561/1677041
#import <UIKit/UIKit.h>
#import <opencv2/core/core.hpp>
UIImage *UIImageFromCVMat(cv::Mat cvMat)
{
NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];
CGColorSpaceRef colorSpace;
CGBitmapInfo bitmapInfo;
if (cvMat.elemSize() == 1) {
colorSpace = CGColorSpaceCreateDeviceGray();
bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrderDefault;
} else {
colorSpace = CGColorSpaceCreateDeviceRGB();
#if 0
// OpenCV defaults to either BGR or ABGR. In CoreGraphics land,
// this means using the "32Little" byte order, and potentially
// skipping the first pixel. These may need to be adjusted if the
// input matrix uses a different pixel format.
bitmapInfo = kCGBitmapByteOrder32Little | (
cvMat.elemSize() == 3? kCGImageAlphaNone : kCGImageAlphaNoneSkipFirst
);
#else
bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrderDefault;
#endif
}
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
// Creating CGImage from cv::Mat
CGImageRef imageRef = CGImageCreate(
cvMat.cols, // width
cvMat.rows, // height
8, // bits per component
8 * cvMat.elemSize(), // bits per pixel
cvMat.step[0], // bytesPerRow
colorSpace, // colorspace
bitmapInfo, // bitmap info
provider, // CGDataProviderRef
NULL, // decode
false, // should interpolate
kCGRenderingIntentDefault // intent
);
// Getting UIImage from CGImage
UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace);
return finalImage;
}
cv::Mat cvMatWithImage(UIImage *image)
{
CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
size_t numberOfComponents = CGColorSpaceGetNumberOfComponents(colorSpace);
CGFloat cols = image.size.width;
CGFloat rows = image.size.height;
cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels
CGBitmapInfo bitmapInfo = kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault;
// check whether the UIImage is greyscale already
if (numberOfComponents == 1) {
cvMat = cv::Mat(rows, cols, CV_8UC1); // 8 bits per component, 1 channels
bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrderDefault;
}
CGContextRef contextRef = CGBitmapContextCreate(
cvMat.data, // Pointer to backing data
cols, // Width of bitmap
rows, // Height of bitmap
8, // Bits per component
cvMat.step[0], // Bytes per row
colorSpace, // Colorspace
bitmapInfo // Bitmap info flags
);
CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
CGContextRelease(contextRef);
return cvMat;
}
I tested these two functions alone and confirm that they work.
Core operations about conversion:
/// Generate a new image based on specified HSL value changes.
/// @param h_delta h value in [-360, 360]
/// @param s_delta s value in [-100, 100]
/// @param l_delta l value in [-100, 100]
- (void)adjustImageWithH:(CGFloat)h_delta S:(CGFloat)s_delta L:(CGFloat)l_delta completion:(void (^)(UIImage *resultImage))completion
{
dispatch_async(dispatch_get_global_queue(0, 0), ^{
Mat original = cvMatWithImage(self.originalImage);
Mat image;
cvtColor(original, image, COLOR_BGR2HLS);
// https://docs.opencv.org/2.4/doc/tutorials/core/how_to_scan_images/how_to_scan_images.html#the-efficient-way
// accept only char type matrices
CV_Assert(image.depth() == CV_8U);
int channels = image.channels();
int nRows = image.rows;
int nCols = image.cols * channels;
int y, x;
for (y = 0; y < nRows; ++y) {
for (x = 0; x < nCols; ++x) {
// https://answers.opencv.org/question/30547/need-to-know-the-hsv-value/
// https://docs.opencv.org/2.4/modules/imgproc/doc/miscellaneous_transformations.html?#cvtcolor
Vec3b hls = original.at<Vec3b>(y, x);
uchar h = hls.val[0], l = hls.val[1], s = hls.val[2];
// h = MAX(0, MIN(360, h + h_delta));
// s = MAX(0, MIN(100, s + s_delta));
// l = MAX(0, MIN(100, l + l_delta));
printf("(%02d, %02d):\tHSL(%d, %d, %d)\n", x, y, h, s, l); // <= Label 1
original.at<Vec3b>(y, x)[0] = h;
original.at<Vec3b>(y, x)[1] = l;
original.at<Vec3b>(y, x)[2] = s;
}
}
cvtColor(image, image, COLOR_HLS2BGR);
UIImage *resultImage = UIImageFromCVMat(image);
dispatch_async(dispatch_get_main_queue(), ^ {
if (completion) {
completion(resultImage);
}
});
});
}
My questions are:
Why are the HLS values outside my expected range? They show up as [0, 255], like the RGB range. Is that a wrong usage of cvtColor?
Should I use Vec3b within the two for loops, or Vec3i instead?
Is there something wrong with my approach above?
Update:
Vec3b hls = original.at<Vec3b>(y, x);
uchar h = hls.val[0], l = hls.val[1], s = hls.val[2];
// Remap the hls value range to human-readable range (0~360, 0~1.0, 0~1.0).
// https://docs.opencv.org/master/de/d25/imgproc_color_conversions.html
float fh, fl, fs;
fh = h * 2.0;
fl = l / 255.0;
fs = s / 255.0;
fh = MAX(0, MIN(360, fh + h_delta));
fl = MAX(0, MIN(1, fl + l_delta / 100));
fs = MAX(0, MIN(1, fs + s_delta / 100));
// Convert them back
fh /= 2.0;
fl *= 255.0;
fs *= 255.0;
printf("(%02d, %02d):\tHSL(%d, %d, %d)\tHSL2(%.4f, %.4f, %.4f)\n", x, y, h, s, l, fh, fs, fl);
original.at<Vec3b>(y, x)[0] = short(fh);
original.at<Vec3b>(y, x)[1] = short(fl);
original.at<Vec3b>(y, x)[2] = short(fs);
1) Take a look at this, specifically the RGB -> HLS part. When the source image is 8-bit the values go from 0-255, but if you use a float image they may have different ranges.
8-bit images: V ← 255⋅V, S ← 255⋅S, H ← H/2 (to fit into 0 to 255)
(V should be L; there is a typo in the documentation.)
You can convert the RGB/BGR image to a floating-point image and then you will have the full ranges, i.e. S and L from 0 to 1 and H from 0 to 360.
But you have to be careful converting it back.
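For example, a minimal sketch of that idea (my own, untested; yourUIImage is a placeholder, and cvMatWithImage is the helper from the question):
cv::Mat bgra = cvMatWithImage(yourUIImage);        // CV_8UC4
cv::Mat bgr, bgrFloat, hls;
cv::cvtColor(bgra, bgr, cv::COLOR_BGRA2BGR);       // drop the alpha channel
bgr.convertTo(bgrFloat, CV_32F, 1.0 / 255.0);      // scale to 0-1 floats
cv::cvtColor(bgrFloat, hls, cv::COLOR_BGR2HLS);    // now H is 0-360, L and S are 0-1
// ... adjust H, L and S here ...
cv::cvtColor(hls, bgrFloat, cv::COLOR_HLS2BGR);    // back to BGR floats
bgrFloat.convertTo(bgr, CV_8U, 255.0);             // and back to 8-bit before converting to UIImage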
2) Vec3b is for unsigned 8-bit images (CV_8U) and Vec3i is for integer images (CV_32S). Knowing this, it depends on what type your image is. Since, as you said, it goes from 0-255, it should be unsigned 8-bit, so you should use Vec3b. If you use the other one, it will read 32 bits per pixel and use that size to calculate the position in the array of pixels, so it may give out-of-bounds, segmentation-fault, or other random problems.
If you have any questions, feel free to comment.
I wrote this code that is supposed to NSLog all non-white pixels as a test before going further.
This is my code:
UIImage *image = [UIImage imageNamed:@"image"];
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
if(!pixelData) {
return;
}
const UInt8 *buffer = CFDataGetBytePtr(pixelData);
CFRelease(pixelData);
for(int y = 0; y < image.size.height; y++) {
for(int x = 0; x < image.size.width; x++) {
int pixelInfo = ((image.size.width * y) + x) * 4;
UInt8 red = buffer[pixelInfo];
UInt8 green = buffer[(pixelInfo + 1)];
UInt8 blue = buffer[pixelInfo + 2];
UInt8 alpha = buffer[pixelInfo + 3];
if(red != 0xff && green != 0xff && blue != 0xff){
NSLog(#"R: %hhu, G: %hhu, B: %hhu, A: %hhu", red, green, blue, alpha);
}
}
}
For some reason, when I build the app, it iterates for a moment and then throws a BAD_ACCESS error on the line UInt8 red = buffer[pixelInfo];. What could be the issue?
Is this the fastest method to iterate through pixels?
I think the problem is a buffer size error.
buffer has the size of width x height, and pixelInfo has a multiplier of 4.
I think you need to create an array 4 times bigger and save each pixel color from buffer in this new array. But you have to be careful not to read past the end of the buffer.
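As a rough sketch of that idea (my own, untested), you can derive the sizes from the CGImage itself and bound-check every index before reading:
CGImageRef cgImage = image.CGImage;
size_t width = CGImageGetWidth(cgImage);
size_t height = CGImageGetHeight(cgImage);
size_t bytesPerRow = CGImageGetBytesPerRow(cgImage);       // may be larger than width * 4
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
const UInt8 *buffer = CFDataGetBytePtr(pixelData);
size_t length = CFDataGetLength(pixelData);
for (size_t y = 0; y < height; y++) {
    for (size_t x = 0; x < width; x++) {
        size_t pixelInfo = y * bytesPerRow + x * 4;        // assumes 4 bytes per pixel (RGBA)
        if (pixelInfo + 3 >= length) { continue; }         // never read past the end of the data
        UInt8 red = buffer[pixelInfo];
        // ... read green, blue and alpha as in the question ...
    }
}
CFRelease(pixelData);                                       // release only after all reads are done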
I'm writing an image editing programme, and I need functionality to dither any arbitrary 24-bit RGB image (I've taken care of loading it with CoreGraphics and such) to an image with 3-bit colour channels, then display it. I've set up my matrices and such, but I don't get anything from the code below besides a simple pattern applied to the image:
- (CGImageRef) ditherImageTo16Colours:(CGImageRef)image withDitheringMatrixType:(SQUBayerDitheringMatrix) matrix {
if(image == NULL) {
NSLog(#"Image is NULL!");
return NULL;
}
unsigned int imageWidth = CGImageGetWidth(image);
unsigned int imageHeight = CGImageGetHeight(image);
NSLog(#"Image size: %u x %u", imageWidth, imageHeight);
CGContextRef context = CGBitmapContextCreate(NULL,
imageWidth,
imageHeight,
8,
4 * (imageWidth),
CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB),
kCGImageAlphaNoneSkipLast);
CGContextDrawImage(context, CGRectMake(0, 0, imageWidth, imageHeight), image); // draw it
CGImageRelease(image); // get rid of the image, we don't want it anymore.
unsigned char *imageData = CGBitmapContextGetData(context);
unsigned char ditheringModulusType[0x04] = {0x02, 0x03, 0x04, 0x08};
unsigned char ditheringModulus = ditheringModulusType[matrix];
unsigned int red;
unsigned int green;
unsigned int blue;
uint32_t *memoryBuffer;
memoryBuffer = (uint32_t *) malloc((imageHeight * imageWidth) * 4);
unsigned int thresholds[0x03] = {256/8, 256/8, 256/8};
for(int y = 0; y < imageHeight; y++) {
for(int x = 0; x < imageWidth; x++) {
// fetch the colour components, add the dither value to them
red = (imageData[((y * imageWidth) * 4) + (x << 0x02)]);
green = (imageData[((y * imageWidth) * 4) + (x << 0x02) + 1]);
blue = (imageData[((y * imageWidth) * 4) + (x << 0x02) + 2]);
if(red > 36 && red < 238) {
red += SQUBayer117_matrix[x % ditheringModulus][y % ditheringModulus];
} if(green > 36 && green < 238) {
green += SQUBayer117_matrix[x % ditheringModulus][y % ditheringModulus];
} if(blue > 36 && blue < 238) {
blue += SQUBayer117_matrix[x % ditheringModulus][y % ditheringModulus];
}
// memoryBuffer[(y * imageWidth) + x] = (0xFF0000 + ((x >> 0x1) << 0x08) + (y >> 2));
memoryBuffer[(y * imageWidth) + x] = find_closest_palette_colour(((red & 0xFF) << 0x10) | ((green & 0xFF) << 0x08) | (blue & 0xFF));
}
}
//CGContextRelease(context);
context = CGBitmapContextCreate(memoryBuffer,
imageWidth,
imageHeight,
8,
4 * (imageWidth),
CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB),
kCGImageAlphaNoneSkipLast);
NSLog(#"Created context from buffer: %#", context);
CGImageRef result = CGBitmapContextCreateImage(context);
return result;
}
Note that find_closest_palette_colour doesn't do anything besides returning the original colour right now for testing.
I'm trying to implement the example pseudocode from Wikipedia, and I don't really get anything out of that right now.
Anyone got a clue on how to fix this up?
Use the code that I have provided here: https://stackoverflow.com/a/17900812/342646
This code converts the image to a single-channel gray-scale first. If you want the dithering to be done on a three-channel image, you can just split your image into three channels and call the function three times (once per channel).
Which is the best way to check whether a UIImage is blank?
I have this painting editor which returns a UIImage; I don't want to save this image if there's nothing on it.
Try this code:
BOOL isImageFlag = [self checkIfImage:image];
And checkIfImage method:
- (BOOL) checkIfImage:(UIImage *)someImage {
CGImageRef image = someImage.CGImage;
size_t width = CGImageGetWidth(image);
size_t height = CGImageGetHeight(image);
GLubyte * imageData = malloc(width * height * 4);
int bytesPerPixel = 4;
int bytesPerRow = bytesPerPixel * width;
int bitsPerComponent = 8;
CGContextRef imageContext =
CGBitmapContextCreate(
imageData, width, height, bitsPerComponent, bytesPerRow, CGImageGetColorSpace(image),
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big
);
CGContextSetBlendMode(imageContext, kCGBlendModeCopy);
CGContextDrawImage(imageContext, CGRectMake(0, 0, width, height), image);
CGContextRelease(imageContext);
int byteIndex = 0;
BOOL imageExist = NO;
for ( ; byteIndex < width*height*4; byteIndex += 4) {
CGFloat red = ((GLubyte *)imageData)[byteIndex]/255.0f;
CGFloat green = ((GLubyte *)imageData)[byteIndex + 1]/255.0f;
CGFloat blue = ((GLubyte *)imageData)[byteIndex + 2]/255.0f;
CGFloat alpha = ((GLubyte *)imageData)[byteIndex + 3]/255.0f;
if( red != 1 || green != 1 || blue != 1 || alpha != 1 ){
imageExist = YES;
break;
}
}
free(imageData);
return imageExist;
}
You will have to add OpenGLES framework and import this in the .m file:
#import <OpenGLES/ES1/gl.h>
One idea would be to call UIImagePNGRepresentation to get an NSData object, then compare it with a pre-defined 'empty' version, i.e. call:
- (BOOL)isEqualToData:(NSData *)otherData
to test?
I haven't tried this on large data; you might want to check performance if your image data is quite large. If it's small, it's probably just like calling memcmp() in C.
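For illustration, a minimal sketch of that idea (paintedImage and blankImage are placeholder names; blankImage would be your pre-rendered empty canvas of the same size):
NSData *currentData = UIImagePNGRepresentation(paintedImage);
NSData *blankData = UIImagePNGRepresentation(blankImage);   // the saved 'empty' version
BOOL isBlank = [currentData isEqualToData:blankData];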
Something along these lines:
Create a 1 px square CGContext
Draw the image so it fills the context
Test the one pixel of the context to see if it contains any data. If it's completely transparent, consider the picture blank
Others may be able to add more details to this answer.
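A rough sketch of those steps (my own, untested; someImage is a placeholder for the image you want to test):
unsigned char pixel[4] = {0, 0, 0, 0};
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 4, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
// draw the whole image into the single pixel; it ends up as an averaged sample
CGContextDrawImage(context, CGRectMake(0, 0, 1, 1), someImage.CGImage);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
BOOL isBlank = (pixel[3] == 0);   // a completely transparent result suggests a blank picture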
Here's a solution in Swift that does not require any additional frameworks.
Thanks to answers in a related question here:
Get Pixel Data of ImageView from coordinates of touch screen on xcode?
func imageIsEmpty(_ image: UIImage) -> Bool {
guard let cgImage = image.cgImage,
let dataProvider = cgImage.dataProvider else
{
return true
}
let pixelData = dataProvider.data
let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
let imageWidth = Int(image.size.width)
let imageHeight = Int(image.size.height)
for x in 0..<imageWidth {
for y in 0..<imageHeight {
let pixelIndex = ((imageWidth * y) + x) * 4
let r = data[pixelIndex]
let g = data[pixelIndex + 1]
let b = data[pixelIndex + 2]
let a = data[pixelIndex + 3]
if a != 0 {
if r != 0 || g != 0 || b != 0 {
return false
}
}
}
}
return true
}
I'm not at my Mac, so I can't test this (and there are probably compile errors). But one method might be:
//The pixel format depends on what sort of image you're expecting. If it's RGBA, this should work
typedef struct
{
uint8_t red;
uint8_t green;
uint8_t blue;
uint8_t alpha;
} MyPixel_T;
UIImage *myImage = [self doTheThingToGetTheImage];
CGImageRef myCGImage = [myImage CGImage];
//Get a bitmap context for the image
CGContextRef bitmapContext =
CGBitmapContextCreate(NULL, CGImageGetWidth(myCGImage), CGImageGetHeight(myCGImage),
CGImageGetBitsPerComponent(myCGImage), CGImageGetBytesPerRow(myCGImage),
CGImageGetColorSpace(myCGImage), CGImageGetBitmapInfo(myCGImage));
//Draw the image into the context
CGContextDrawImage(bitmapContext, CGRectMake(0, 0, CGImageGetWidth(myCGImage), CGImageGetHeight(myCGImage)), myCGImage);
//Get pixel data for the image
MyPixel_T *pixels = CGBitmapContextGetData(bitmapContext);
size_t pixelCount = CGImageGetWidth(myCGImage) * CGImageGetHeight(myCGImage);
for(size_t i = 0; i < pixelCount; i++)
{
MyPixel_T p = pixels[i];
//Your definition of what's blank may differ from mine
if(p.red > 0 && p.green > 0 && p.blue > 0 && p.alpha > 0)
return NO;
}
return YES;
I just encountered the same problem. Solved it by checking the dimensions:
Swift example:
let image = UIImage()
let height = image.size.height
let width = image.size.width
if (height > 0 && width > 0) {
// We have an image
} else {
// ...and we don't
}