Best performant way to check if an image is all white? - ios

I'm trying to determine whether a drawing is currently all white. The solution I came up with is to scale the image down, then check pixel by pixel whether it's white, returning NO as soon as a non-white pixel is found.
It works, but I have a gut feeling it could be done in a more performant way. Here's the code:
- (BOOL)imageIsAllWhite:(UIImage *)image {
    CGSize size = CGSizeMake(100.0f, 100.0f);
    UIImageView *imageView = [[UIImageView alloc] initWithImage:[image scaledImageWithSize:size]];

    unsigned char pixel[4 * (int)size.width * (int)size.height];
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef cgContext = CGBitmapContextCreate(
        pixel,
        (size_t)size.width,
        (size_t)size.height,
        8,
        (size_t)(size.width * 4),
        colorSpace,
        kCGBitmapAlphaInfoMask & kCGImageAlphaPremultipliedLast);
    CGContextTranslateCTM(cgContext, 0, 0);
    [imageView.layer renderInContext:cgContext];
    CGContextRelease(cgContext);
    CGColorSpaceRelease(colorSpace);

    for (int i = 0; i < sizeof(pixel); i = i + 4) {
        if (!(pixel[i] == 255 && pixel[i + 1] == 255 && pixel[i + 2] == 255)) {
            return NO;
        }
    }
    return YES;
}
Any ideas for improvement?

Please use the following code to check whether a UIImage is all white:
- (BOOL)checkIfImage:(UIImage *)someImage {
    CGImageRef image = someImage.CGImage;
    size_t width = CGImageGetWidth(image);
    size_t height = CGImageGetHeight(image);
    GLubyte *imageData = malloc(width * height * 4);
    int bytesPerPixel = 4;
    int bytesPerRow = bytesPerPixel * width;
    int bitsPerComponent = 8;
    CGContextRef imageContext = CGBitmapContextCreate(
        imageData, width, height, bitsPerComponent, bytesPerRow, CGImageGetColorSpace(image),
        kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big
    );
    CGContextSetBlendMode(imageContext, kCGBlendModeCopy);
    CGContextDrawImage(imageContext, CGRectMake(0, 0, width, height), image);
    CGContextRelease(imageContext);

    int byteIndex = 0;
    BOOL isAllWhite = YES;
    for ( ; byteIndex < width * height * 4; byteIndex += 4) {
        CGFloat red = ((GLubyte *)imageData)[byteIndex] / 255.0f;
        CGFloat green = ((GLubyte *)imageData)[byteIndex + 1] / 255.0f;
        CGFloat blue = ((GLubyte *)imageData)[byteIndex + 2] / 255.0f;
        CGFloat alpha = ((GLubyte *)imageData)[byteIndex + 3] / 255.0f;
        if (red != 1 || green != 1 || blue != 1 || alpha != 1) {
            isAllWhite = NO;
            break;
        }
    }
    free(imageData); // the original snippet leaked this buffer
    return isAllWhite;
}
Calling the function:
UIImage *image = [UIImage imageNamed:@"demo1.png"];
BOOL isImageFlag = [self checkIfImage:image];
if (isImageFlag == YES) {
    NSLog(@"YES, it's totally white");
} else {
    NSLog(@"Nope, it's not white");
}

It feels like there's no speedy route that would go to the GPU and back again so the answer is really no more interesting than taking a statistical approach and using GCD to ensure multicore utilisation.
In most images, colours are more likely to be close to other similar colours. So if one pixel is white, it's more likely that its neighbouring pixel is also white. Therefore a strict linear progression through the pixels is less likely to find a white pixel quickly than is sampling points a distance apart, then sampling closer points, etc. Ideally there'd be some f(x) that took the relevant range of integers as input and returned each of them only once, such that the distance between f(x) and f(x+1) is greatest for x = 0 and then decreases monotonically.
If the image is reasonably large, and more so if you can afford to return the result asynchronously, then the cost of dispatching the task to multiple cores is likely to be outweighed by the gain of having multiple cores work on it at once.
You're fixing your image size at 100x100 pixels. I'm going to take a liberty and assume you can move up to 128x128 because it makes the f(x) easy — in that case you can just do a bit reversal.
E.g.
static inline int convolution(int input) {
    // bit reverse a 14-bit number
    return ((input & 0x0001) << 13) |
           ((input & 0x0002) << 11) |
           ((input & 0x0004) << 9) |
           ((input & 0x0008) << 7) |
           ((input & 0x0010) << 5) |
           ((input & 0x0020) << 3) |
           ((input & 0x0040) << 1) |
           ((input & 0x0080) >> 1) |
           ((input & 0x0100) >> 3) |
           ((input & 0x0200) >> 5) |
           ((input & 0x0400) >> 7) |
           ((input & 0x0800) >> 9) |
           ((input & 0x1000) >> 11) |
           ((input & 0x2000) >> 13);
}
... elsewhere ...
__block BOOL hasFoundNonWhite = NO;
const int numberOfPixels = 128 * 128;
const int pixelsPerBatch = 128;
const int numberOfBatches = numberOfPixels / pixelsPerBatch;

dispatch_apply(numberOfBatches,
               dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0),
               ^(size_t index) {
    if (hasFoundNonWhite) {
        return;
    }

    index *= pixelsPerBatch;
    for (int i = index; i < index + pixelsPerBatch; i++) {
        int indexToCheck = convolution(i);
        int arrayIndex = indexToCheck << 2;
        if (!(pixel[arrayIndex] == 255 && pixel[arrayIndex + 1] == 255 && pixel[arrayIndex + 2] == 255)) {
            hasFoundNonWhite = YES;
            return;
        }
    }
});

return !hasFoundNonWhite;
Addendum: the other knee-jerk thing you'd do when dealing with a vector processing task like this is check the Accelerate framework, likely vDSP. That ends up compiling down to use the vector unit on your CPU. In this case you might reformulate the test as "sum of vector must equal size of vector * 255" (if you can make an assumption about alpha). However, there is no integer sum for byte data, and converting to float probably isn't worth the cost.
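If you did want to try it, here is a minimal sketch of that vDSP route, assuming the same 100x100 RGBA pixel buffer as in the question and that alpha is always 255 (benchmark it against the plain loop before preferring it):

#import <Accelerate/Accelerate.h>

// Hedged sketch: widen the bytes to float with vDSP, sum them, and compare the sum
// against the all-white total. For a 100x100 RGBA buffer the sum stays well below
// 2^24, so the float comparison is exact.
static BOOL bufferIsAllWhite(const unsigned char *pixel, vDSP_Length byteCount) {
    float *floats = malloc(byteCount * sizeof(float));
    if (floats == NULL) return NO;

    vDSP_vfltu8(pixel, 1, floats, 1, byteCount);   // unsigned bytes -> floats
    float sum = 0.0f;
    vDSP_sve(floats, 1, &sum, byteCount);          // sum the whole vector
    free(floats);

    return sum == 255.0f * byteCount;              // every byte must have been 255
}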

Related

Edit a RGB colorspace image with HSL conversion failed

I'm making an app to edit an image's HSL color space via opencv2 and some conversion code from the Internet.
I suppose the original image's color space is RGB, so here is my approach:
Convert the UIImage to a cv::Mat.
Convert the color space from BGR to HLS.
Loop through all the pixels to get the corresponding HLS values.
Apply my custom adjustments.
Write the changed HLS values back to the cv::Mat.
Convert the cv::Mat back to a UIImage.
Here is my code:
Conversion between UIImage and cvMat
Reference: https://stackoverflow.com/a/10254561/1677041
#import <UIKit/UIKit.h>
#import <opencv2/core/core.hpp>
UIImage *UIImageFromCVMat(cv::Mat cvMat)
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];

    CGColorSpaceRef colorSpace;
    CGBitmapInfo bitmapInfo;

    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
        bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrderDefault;
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
#if 0
        // OpenCV defaults to either BGR or ABGR. In CoreGraphics land,
        // this means using the "32Little" byte order, and potentially
        // skipping the first pixel. These may need to be adjusted if the
        // input matrix uses a different pixel format.
        bitmapInfo = kCGBitmapByteOrder32Little | (
            cvMat.elemSize() == 3 ? kCGImageAlphaNone : kCGImageAlphaNoneSkipFirst
        );
#else
        bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrderDefault;
#endif
    }

    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);

    // Creating CGImage from cv::Mat
    CGImageRef imageRef = CGImageCreate(
        cvMat.cols,                 // width
        cvMat.rows,                 // height
        8,                          // bits per component
        8 * cvMat.elemSize(),       // bits per pixel
        cvMat.step[0],              // bytesPerRow
        colorSpace,                 // colorspace
        bitmapInfo,                 // bitmap info
        provider,                   // CGDataProviderRef
        NULL,                       // decode
        false,                      // should interpolate
        kCGRenderingIntentDefault   // intent
    );

    // Getting UIImage from CGImage
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);

    return finalImage;
}
cv::Mat cvMatWithImage(UIImage *image)
{
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
    size_t numberOfComponents = CGColorSpaceGetNumberOfComponents(colorSpace);
    CGFloat cols = image.size.width;
    CGFloat rows = image.size.height;

    cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels
    CGBitmapInfo bitmapInfo = kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault;

    // check whether the UIImage is greyscale already
    if (numberOfComponents == 1) {
        cvMat = cv::Mat(rows, cols, CV_8UC1); // 8 bits per component, 1 channel
        bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrderDefault;
    }

    CGContextRef contextRef = CGBitmapContextCreate(
        cvMat.data,     // Pointer to backing data
        cols,           // Width of bitmap
        rows,           // Height of bitmap
        8,              // Bits per component
        cvMat.step[0],  // Bytes per row
        colorSpace,     // Colorspace
        bitmapInfo      // Bitmap info flags
    );

    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
    CGContextRelease(contextRef);

    return cvMat;
}
I tested these two functions on their own and confirmed that they work.
The core conversion code:
/// Generate a new image based on specified HSL value changes.
/// @param h_delta h value in [-360, 360]
/// @param s_delta s value in [-100, 100]
/// @param l_delta l value in [-100, 100]
- (void)adjustImageWithH:(CGFloat)h_delta S:(CGFloat)s_delta L:(CGFloat)l_delta completion:(void (^)(UIImage *resultImage))completion
{
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        Mat original = cvMatWithImage(self.originalImage);
        Mat image;

        cvtColor(original, image, COLOR_BGR2HLS);

        // https://docs.opencv.org/2.4/doc/tutorials/core/how_to_scan_images/how_to_scan_images.html#the-efficient-way
        // accept only char type matrices
        CV_Assert(image.depth() == CV_8U);

        int channels = image.channels();
        int nRows = image.rows;
        int nCols = image.cols * channels;

        int y, x;
        for (y = 0; y < nRows; ++y) {
            for (x = 0; x < nCols; ++x) {
                // https://answers.opencv.org/question/30547/need-to-know-the-hsv-value/
                // https://docs.opencv.org/2.4/modules/imgproc/doc/miscellaneous_transformations.html?#cvtcolor
                Vec3b hls = original.at<Vec3b>(y, x);
                uchar h = hls.val[0], l = hls.val[1], s = hls.val[2];

                // h = MAX(0, MIN(360, h + h_delta));
                // s = MAX(0, MIN(100, s + s_delta));
                // l = MAX(0, MIN(100, l + l_delta));

                printf("(%02d, %02d):\tHSL(%d, %d, %d)\n", x, y, h, s, l); // <= Label 1

                original.at<Vec3b>(y, x)[0] = h;
                original.at<Vec3b>(y, x)[1] = l;
                original.at<Vec3b>(y, x)[2] = s;
            }
        }

        cvtColor(image, image, COLOR_HLS2BGR);

        UIImage *resultImage = UIImageFromCVMat(image);
        dispatch_async(dispatch_get_main_queue(), ^{
            if (completion) {
                completion(resultImage);
            }
        });
    });
}
The questions are:
Why are the HLS values outside my expected ranges? They show up in [0, 255] like the RGB range; am I using cvtColor incorrectly?
Should I use Vec3b inside the two for loops, or Vec3i instead?
Is something wrong with my approach above?
Update:
Vec3b hls = original.at<Vec3b>(y, x);
uchar h = hls.val[0], l = hls.val[1], s = hls.val[2];
// Remap the hls value range to human-readable range (0~360, 0~1.0, 0~1.0).
// https://docs.opencv.org/master/de/d25/imgproc_color_conversions.html
float fh, fl, fs;
fh = h * 2.0;
fl = l / 255.0;
fs = s / 255.0;
fh = MAX(0, MIN(360, fh + h_delta));
fl = MAX(0, MIN(1, fl + l_delta / 100));
fs = MAX(0, MIN(1, fs + s_delta / 100));
// Convert them back
fh /= 2.0;
fl *= 255.0;
fs *= 255.0;
printf("(%02d, %02d):\tHSL(%d, %d, %d)\tHSL2(%.4f, %.4f, %.4f)\n", x, y, h, s, l, fh, fs, fl);
original.at<Vec3b>(y, x)[0] = short(fh);
original.at<Vec3b>(y, x)[1] = short(fl);
original.at<Vec3b>(y, x)[2] = short(fs);
1) Take a look at this, specifically the part about RGB->HLS. When the source image is 8-bit the values will go from 0-255, but if you use a float image they may have different ranges.
8-bit images: V ← 255⋅V, S ← 255⋅S, H ← H/2 (to fit into 0 to 255)
(V should be L; there is a typo in the documentation.)
You can convert the RGB/BGR image to a floating-point image and then you will have the full ranges, i.e. S and L go from 0 to 1 and H from 0 to 360.
But you have to be careful converting it back.
2) Vec3b is for unsigned 8-bit images (CV_8U) and Vec3i is for integer images (CV_32S). Which one to use depends on the type of your image. Since, as you said, the values go from 0-255, it is an unsigned 8-bit image, so you should use Vec3b. If you use the other one, it will read 32 bits per pixel and use that size to calculate the position in the pixel array, so it may give out-of-bounds access, segmentation faults, or random-looking problems.
If you have a question, feel free to comment
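To make (1) concrete, here is a minimal sketch of the float-image route in the same OpenCV/C++ style as the question; the function and parameter names are placeholders, and only the lightness channel is adjusted for brevity:

#include <algorithm>
#include <opencv2/opencv.hpp>

// Hedged sketch: do the HLS work in 32-bit float, so cvtColor gives
// H in [0, 360) and L, S in [0, 1] instead of the 8-bit 0-255 encoding.
cv::Mat adjustLightness(const cv::Mat &bgr /* CV_8UC3 */, float l_delta /* -100..100 */)
{
    cv::Mat bgrFloat, hls;
    bgr.convertTo(bgrFloat, CV_32FC3, 1.0 / 255.0);     // scale 0-255 bytes to 0-1 floats
    cv::cvtColor(bgrFloat, hls, cv::COLOR_BGR2HLS);     // float path keeps the full ranges

    for (int y = 0; y < hls.rows; ++y) {
        for (int x = 0; x < hls.cols; ++x) {
            cv::Vec3f &p = hls.at<cv::Vec3f>(y, x);     // p[0]=H, p[1]=L, p[2]=S
            p[1] = std::min(1.0f, std::max(0.0f, p[1] + l_delta / 100.0f));
        }
    }

    cv::cvtColor(hls, bgrFloat, cv::COLOR_HLS2BGR);     // back to float BGR in [0, 1]
    cv::Mat result;
    bgrFloat.convertTo(result, CV_8UC3, 255.0);         // and back to 0-255 bytes
    return result;
}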

Convert matrix to UIImage

I need to convert a matrix representing a b/w image to UIImage.
For example:
A matrix like this (just a representation); this image would be the symbol '+':
1 0 1
0 0 0
1 0 1
This matrix represents an image in black and white, where black is 0 and white is 1. I need to convert this matrix to a UIImage. In this case the width would be 3 and the height would be 3.
I use this method to create an image for my Game Of Life app. The advantage over drawing to a graphics context is that this is ridiculously fast.
This was all written a long time ago so it's a bit messier than what I might do now, but the method would stay the same. For some reason I defined these outside the method...
{
    unsigned int length_in_bytes;
    unsigned char *cells;
    unsigned char *temp_cells;
    unsigned char *changes;
    unsigned char *temp_changes;
    GLubyte *buffer;
    CGImageRef imageRef;
    CGDataProviderRef provider;
    int ar, ag, ab, dr, dg, db;
    float arf, agf, abf, drf, dgf, dbf, blah;
}
You won't need all of these for the image.
The method itself...
- (UIImage*)imageOfMapWithDeadColor:(UIColor *)deadColor aliveColor:(UIColor *)aliveColor
{
    //translate colours into rgb components
    if ([deadColor isEqual:[UIColor whiteColor]]) {
        dr = dg = db = 255;
    } else if ([deadColor isEqual:[UIColor blackColor]]) {
        dr = dg = db = 0;
    } else {
        [deadColor getRed:&drf green:&dgf blue:&dbf alpha:&blah];
        dr = drf * 255;
        dg = dgf * 255;
        db = dbf * 255;
    }

    if ([aliveColor isEqual:[UIColor whiteColor]]) {
        ar = ag = ab = 255;
    } else if ([aliveColor isEqual:[UIColor blackColor]]) {
        ar = ag = ab = 0;
    } else {
        [aliveColor getRed:&arf green:&agf blue:&abf alpha:&blah];
        ar = arf * 255;
        ag = agf * 255;
        ab = abf * 255;
    }
    // dr = 255, dg = 255, db = 255;
    // ar = 0, ag = 0, ab = 0;

    //create bytes of image from the cell map
    //(`cells`, `buffer` and `provider` are set up elsewhere, outside this excerpt)
    int yRef, cellRef;
    unsigned char *cell_ptr = cells;
    for (int y = 0; y < self.height; y++)
    {
        yRef = y * (self.width * 4);
        int x = 0;
        do
        {
            cellRef = yRef + 4 * x;
            if (*cell_ptr & 0x01) {
                //alive colour
                buffer[cellRef] = ar;
                buffer[cellRef + 1] = ag;
                buffer[cellRef + 2] = ab;
                buffer[cellRef + 3] = 255;
            } else {
                //dead colour
                buffer[cellRef] = dr;
                buffer[cellRef + 1] = dg;
                buffer[cellRef + 2] = db;
                buffer[cellRef + 3] = 255;
            }
            cell_ptr++;
        } while (++x < self.width);
    }

    //create image
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // render the byte array into an image ref
    imageRef = CGImageCreate(self.width, self.height, 8, 32, 4 * self.width, colorSpace, kCGBitmapByteOrderDefault, provider, NULL, NO, kCGRenderingIntentDefault);
    // convert image ref to UIImage
    UIImage *image = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGColorSpaceRelease(colorSpace);
    //return image
    return image;
}
You should be able to adapt this to create an image from your matrix.
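For the 0/1 matrix in the question, a minimal adaptation of that idea might look like this; the matrix/width/height names, the RGBA layout and the device RGB colour space are my assumptions rather than part of the answer above:

// Hedged sketch: build an RGBA byte buffer from a 0/1 matrix (1 = white, 0 = black)
// and wrap it in a CGImage, which is then turned into a UIImage.
static void releaseMatrixBuffer(void *info, const void *data, size_t size) {
    free((void *)data);
}

UIImage *imageFromMatrix(const unsigned char *matrix, size_t width, size_t height) {
    size_t bytesPerRow = width * 4;
    unsigned char *buffer = malloc(height * bytesPerRow);

    for (size_t y = 0; y < height; y++) {
        for (size_t x = 0; x < width; x++) {
            unsigned char value = matrix[y * width + x] ? 255 : 0;
            size_t offset = y * bytesPerRow + x * 4;
            buffer[offset]     = value; // R
            buffer[offset + 1] = value; // G
            buffer[offset + 2] = value; // B
            buffer[offset + 3] = 255;   // opaque
        }
    }

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, height * bytesPerRow, releaseMatrixBuffer);
    CGImageRef imageRef = CGImageCreate(width, height, 8, 32, bytesPerRow, colorSpace,
                                        (CGBitmapInfo)kCGImageAlphaNoneSkipLast | kCGBitmapByteOrder32Big,
                                        provider, NULL, NO, kCGRenderingIntentDefault);
    UIImage *image = [UIImage imageWithCGImage:imageRef];

    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return image;
}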
In order to convert a matrix to a UIImage:
CGSize size = CGSizeMake(columns, lines); // width = columns, height = lines
UIGraphicsBeginImageContextWithOptions(size, YES, 0);

for (int i = 0; i < lines; i++)
{
    for (int j = 0; j < columns; j++)
    {
        // Choose color to draw (row-major matrix: element (i, j) is at i * columns + j)
        if (matrixDraw[i * columns + j] == 1) {
            [[UIColor whiteColor] setFill];
        } else {
            // Draw black pixel
            [[UIColor blackColor] setFill];
        }
        // Draw just one pixel at column j, row i
        UIRectFill(CGRectMake(j, i, 1, 1));
    }
}

// Create UIImage with the current context that we have just created
UIImage *imageFinal = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Basically what we are doing is:
Create a context with the size of our image.
Loop over each pixel to see its value. Black is 0 and white is 1, so depending on the value, we set the fill color.
The most important function:
UIRectFill(CGRectMake(j, i, 1, 1));
This function lets us fill one pixel at position (j, i) with a width and height of 1 (both 1, so a single pixel is filled).
Finally we create a UIImage from the current context and close the image context.
Hope it helps someone!

Implementing Ordered Dithering (24 bit RGB to 3 bit per channel RGB)

I'm writing an image editing programme, and I need functionality to dither any arbitrary 24-bit RGB image (I've taken care of loading it with CoreGraphics and such) to an image with 3-bit colour channels, then display it. I've set up my matrices and such, but all I've got from the code below is a simple pattern applied over the image:
- (CGImageRef)ditherImageTo16Colours:(CGImageRef)image withDitheringMatrixType:(SQUBayerDitheringMatrix)matrix {
    if (image == NULL) {
        NSLog(@"Image is NULL!");
        return NULL;
    }

    unsigned int imageWidth = CGImageGetWidth(image);
    unsigned int imageHeight = CGImageGetHeight(image);

    NSLog(@"Image size: %u x %u", imageWidth, imageHeight);

    CGContextRef context = CGBitmapContextCreate(NULL,
                                                 imageWidth,
                                                 imageHeight,
                                                 8,
                                                 4 * (imageWidth),
                                                 CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB),
                                                 kCGImageAlphaNoneSkipLast);

    CGContextDrawImage(context, CGRectMake(0, 0, imageWidth, imageHeight), image); // draw it
    CGImageRelease(image); // get rid of the image, we don't want it anymore.

    unsigned char *imageData = CGBitmapContextGetData(context);

    unsigned char ditheringModulusType[0x04] = {0x02, 0x03, 0x04, 0x08};
    unsigned char ditheringModulus = ditheringModulusType[matrix];

    unsigned int red;
    unsigned int green;
    unsigned int blue;

    uint32_t *memoryBuffer;
    memoryBuffer = (uint32_t *) malloc((imageHeight * imageWidth) * 4);

    unsigned int thresholds[0x03] = {256/8, 256/8, 256/8};

    for (int y = 0; y < imageHeight; y++) {
        for (int x = 0; x < imageWidth; x++) {
            // fetch the colour components, add the dither value to them
            red = (imageData[((y * imageWidth) * 4) + (x << 0x02)]);
            green = (imageData[((y * imageWidth) * 4) + (x << 0x02) + 1]);
            blue = (imageData[((y * imageWidth) * 4) + (x << 0x02) + 2]);

            if (red > 36 && red < 238) {
                red += SQUBayer117_matrix[x % ditheringModulus][y % ditheringModulus];
            } if (green > 36 && green < 238) {
                green += SQUBayer117_matrix[x % ditheringModulus][y % ditheringModulus];
            } if (blue > 36 && blue < 238) {
                blue += SQUBayer117_matrix[x % ditheringModulus][y % ditheringModulus];
            }

            // memoryBuffer[(y * imageWidth) + x] = (0xFF0000 + ((x >> 0x1) << 0x08) + (y >> 2));

            memoryBuffer[(y * imageWidth) + x] = find_closest_palette_colour(((red & 0xFF) << 0x10) | ((green & 0xFF) << 0x08) | (blue & 0xFF));
        }
    }

    // CGContextRelease(context);

    context = CGBitmapContextCreate(memoryBuffer,
                                    imageWidth,
                                    imageHeight,
                                    8,
                                    4 * (imageWidth),
                                    CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB),
                                    kCGImageAlphaNoneSkipLast);

    NSLog(@"Created context from buffer: %@", context);

    CGImageRef result = CGBitmapContextCreateImage(context);
    return result;
}
Note that find_closest_palette_colour doesn't do anything besides returning the original colour right now for testing.
I'm trying to implement the example pseudocode from Wikipedia, and I don't really get anything out of that right now.
Anyone got a clue on how to fix this up?
Use the code that I have provided here: https://stackoverflow.com/a/17900812/342646
This code converts the image to a single-channel gray-scale first. If you want the dithering to be done on a three-channel image, you can just split your image into three channels and call the function three times (once per channel).
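If you'd rather keep it per-channel and self-contained, here is a rough sketch of ordered dithering a single 8-bit channel down to 3 bits with a 4x4 Bayer matrix; this is a generic illustration of the technique, not the linked answer's code:

// Hedged sketch: ordered (Bayer) dithering of one 8-bit channel to 3 bits (8 levels).
// `value` is the original channel byte, (x, y) the pixel position.
static const uint8_t kBayer4x4[4][4] = {
    { 0,  8,  2, 10},
    {12,  4, 14,  6},
    { 3, 11,  1,  9},
    {15,  7, 13,  5}
};

static inline uint8_t ditherChannelTo3Bits(uint8_t value, int x, int y) {
    // One quantisation step is 256 / 8 = 32; the 16 matrix entries spread
    // thresholds of roughly -15..+15 across that step.
    int threshold = kBayer4x4[y & 3][x & 3] * 2 - 15;
    int adjusted = value + threshold;
    if (adjusted < 0)   adjusted = 0;
    if (adjusted > 255) adjusted = 255;

    int level = adjusted / 32;              // quantise to one of 8 levels (0..7)
    return (uint8_t)(level * 255 / 7);      // expand back to 0..255 for display
}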

How image pixel data "scans" the image pixels?

The Goal:
Finding the first black pixel on the left side of an image that contains black and transparent pixels only.
What I have:
I know how to get the pixel data and have an array of black and transparent pixels (found it here: https://stackoverflow.com/a/1262893/358480):
+ (NSArray *)getRGBAsFromImage:(UIImage *)image atX:(int)xx andY:(int)yy count:(int)count
{
    NSMutableArray *result = [NSMutableArray arrayWithCapacity:count];

    // First get the image into your data buffer
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = malloc(height * width * 4);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);

    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    // Now your rawData contains the image data in the RGBA8888 pixel format.
    int byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
    for (int ii = 0; ii < count; ++ii)
    {
        // Note: this needs to be a CGFloat; storing it in an NSUInteger
        // (as in the original snippet) truncates every alpha below 1.0 to 0.
        CGFloat alpha = rawData[byteIndex + 3] / 255.0;
        byteIndex += 4;
        [result addObject:[NSNumber numberWithFloat:alpha]];
    }

    free(rawData);
    return result;
}
What is the problem?
I cannot understand the order in which the function "scans" the image.
What I want is to go over the columns of the image and locate the first column that has at least 1 non-transparent pixel. This way I will know how to crop off the left, transparent side of the image.
How can I get the pixels by columns?
Thanks
Shani
The bytes are ordered left-to-right, top-to-bottom. So to do what you want, I think you want to loop over the rawData like this:
int x = 0;
int y = 0;
BOOL found = NO;
for (x = 0; x < width; x++) {
    for (y = 0; y < height; y++) {
        unsigned char alphaByte = rawData[(y * bytesPerRow) + (x * bytesPerPixel) + 3];
        if (alphaByte > 0) {
            found = YES;
            break;
        }
    }
    if (found) break;
}

NSLog(@"First non-transparent pixel at %i, %i", x, y);
Then your first column that contains a non-transparent pixel will be column x.
Normally one would iterate over the image array from top to bottom over rows, and within each row from left to right over the columns. In this case you want the reverse: we want to iterate over each column, beginning at the left, and within the column we go over all rows and check if a black pixel is present.
This will give you the left-most black pixel:
size_t maxIndex = height * bytesPerRow;
size_t x; // declared outside the loop so it is still in scope after exitLoop:
for (x = 0; x < bytesPerRow; x += bytesPerPixel)
{
    for (size_t index = x; index < maxIndex; index += bytesPerRow)
    {
        if (rawData[index + 3] > 0)
        {
            goto exitLoop;
        }
    }
}

exitLoop:

if (x < bytesPerRow)
{
    x /= bytesPerPixel;
    // left most column is `x`
}
Well, this is equivalent to mattjgalloway's answer, just slightly optimized, and neater too :O
Although a goto is usually permitted to abandon two loops from within the inner loop, it's still ugly. Makes me really miss those nifty flow control statements D has...
The function you provided in the example code does something different, though. It starts at a certain position in the image (defined by xx and yy) and goes over count pixels, moving right from the starting position and continuing on to the next rows. It adds those alpha values to an array, I suspect.
When passed xx = yy = 0, this will find the top-most pixel with certain conditions, not the left-most. The transformation is given by the code above. Remember that a 2D image is simply a 1D array in memory, starting with the top row from left to right and proceeding with the following rows. With simple index math, one can iterate over rows or over columns.
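As a tiny illustration of that index math (the names are mine, not from the answer):

// Byte offset of pixel (x, y) in an RGBA8888 buffer; the same formula serves
// row-by-row scans (inner loop over x) and column-by-column scans (inner loop over y).
static inline size_t pixelOffset(size_t x, size_t y, size_t bytesPerRow, size_t bytesPerPixel) {
    return y * bytesPerRow + x * bytesPerPixel;
}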

How to check if a uiimage is blank? (empty, transparent)

Which is the best way to check whether a UIImage is blank?
I have this painting editor which returns a UIImage; I don't want to save this image if there's nothing on it.
Try this code:
BOOL isImageFlag=[self checkIfImage:image];
And checkIfImage method:
- (BOOL) checkIfImage:(UIImage *)someImage {
    CGImageRef image = someImage.CGImage;
    size_t width = CGImageGetWidth(image);
    size_t height = CGImageGetHeight(image);
    GLubyte *imageData = malloc(width * height * 4);
    int bytesPerPixel = 4;
    int bytesPerRow = bytesPerPixel * width;
    int bitsPerComponent = 8;
    CGContextRef imageContext =
        CGBitmapContextCreate(
            imageData, width, height, bitsPerComponent, bytesPerRow, CGImageGetColorSpace(image),
            kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big
        );
    CGContextSetBlendMode(imageContext, kCGBlendModeCopy);
    CGContextDrawImage(imageContext, CGRectMake(0, 0, width, height), image);
    CGContextRelease(imageContext);

    int byteIndex = 0;
    BOOL imageExist = NO;
    for ( ; byteIndex < width * height * 4; byteIndex += 4) {
        CGFloat red = ((GLubyte *)imageData)[byteIndex] / 255.0f;
        CGFloat green = ((GLubyte *)imageData)[byteIndex + 1] / 255.0f;
        CGFloat blue = ((GLubyte *)imageData)[byteIndex + 2] / 255.0f;
        CGFloat alpha = ((GLubyte *)imageData)[byteIndex + 3] / 255.0f;
        if (red != 1 || green != 1 || blue != 1 || alpha != 1) {
            imageExist = YES;
            break;
        }
    }
    free(imageData);
    return imageExist;
}
You will have to add OpenGLES framework and import this in the .m file:
#import <OpenGLES/ES1/gl.h>
One idea would be to call UIImagePNGRepresentation to get an NSData object and then compare it with a pre-defined 'empty' version - i.e. call:
- (BOOL)isEqualToData:(NSData *)otherData
to test.
I've not tried this on large data; you might want to check performance if your image data is quite large. If it's small, it is probably just like calling memcmp() in C.
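A minimal sketch of that idea; the blankImageData baseline is an assumption here, generated once from a known-empty canvas of the same size and scale:

// Hedged sketch: compare the PNG encoding of the editor's output against a cached
// "known blank" encoding. Assumes both images share the same size, scale and opacity.
BOOL imageMatchesBlankBaseline(UIImage *editorImage, NSData *blankImageData) {
    NSData *imageData = UIImagePNGRepresentation(editorImage);
    return [imageData isEqualToData:blankImageData];
}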
Something along these lines:
Create a 1 px square CGContext
Draw the image so it fills the context
Test the one pixel of the context to see if it contains any data. If it's completely transparent, consider the picture blank
Others may be able to add more details to this answer.
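A rough sketch of that approach (untested, assuming an RGBA context and that "blank" means fully transparent):

// Hedged sketch: scale the whole image into a single RGBA pixel and check its alpha.
static BOOL imageLooksBlank(UIImage *image) {
    unsigned char pixel[4] = {0};
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 4, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);

    // Drawing into a 1x1 rect averages the image's content into one pixel.
    CGContextDrawImage(context, CGRectMake(0, 0, 1, 1), image.CGImage);
    CGContextRelease(context);

    return pixel[3] == 0; // no alpha at all => treat as blank
}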
Here's a solution in Swift that does not require any additional frameworks.
Thanks to answers in a related question here:
Get Pixel Data of ImageView from coordinates of touch screen on xcode?
func imageIsEmpty(_ image: UIImage) -> Bool {
    guard let cgImage = image.cgImage,
          let dataProvider = cgImage.dataProvider,
          let pixelData = dataProvider.data,
          let data = CFDataGetBytePtr(pixelData) else
    {
        return true
    }

    // Use the CGImage's pixel dimensions and row stride rather than UIImage.size
    // (which is in points); this assumes a 4-byte-per-pixel, alpha-last backing (e.g. RGBA).
    let imageWidth = cgImage.width
    let imageHeight = cgImage.height
    let bytesPerRow = cgImage.bytesPerRow

    for y in 0..<imageHeight {
        for x in 0..<imageWidth {
            let pixelIndex = (y * bytesPerRow) + (x * 4)
            let r = data[pixelIndex]
            let g = data[pixelIndex + 1]
            let b = data[pixelIndex + 2]
            let a = data[pixelIndex + 3]
            if a != 0 {
                if r != 0 || g != 0 || b != 0 {
                    return false
                }
            }
        }
    }
    return true
}
I'm not at my Mac, so I can't test this (and there are probably compile errors). But one method might be:
//The pixel format depends on what sort of image you're expecting. If it's RGBA, this should work
typedef struct
{
    uint8_t red;
    uint8_t green;
    uint8_t blue;
    uint8_t alpha;
} MyPixel_T;

UIImage *myImage = [self doTheThingToGetTheImage];
CGImageRef myCGImage = [myImage CGImage];

//Get a bitmap context for the image
CGContextRef bitmapContext =
    CGBitmapContextCreate(NULL, CGImageGetWidth(myCGImage), CGImageGetHeight(myCGImage),
                          CGImageGetBitsPerComponent(myCGImage), CGImageGetBytesPerRow(myCGImage),
                          CGImageGetColorSpace(myCGImage), CGImageGetBitmapInfo(myCGImage));

//Draw the image into the context
CGContextDrawImage(bitmapContext, CGRectMake(0, 0, CGImageGetWidth(myCGImage), CGImageGetHeight(myCGImage)), myCGImage);

//Get pixel data for the image
MyPixel_T *pixels = CGBitmapContextGetData(bitmapContext);
size_t pixelCount = CGImageGetWidth(myCGImage) * CGImageGetHeight(myCGImage);

for (size_t i = 0; i < pixelCount; i++)
{
    MyPixel_T p = pixels[i];
    //Your definition of what's blank may differ from mine
    if (p.red > 0 && p.green > 0 && p.blue > 0 && p.alpha > 0)
        return NO;
}

return YES;
I just encountered the same problem. Solved it by checking the dimensions:
Swift example:
let image = UIImage()
let height = image.size.height
let width = image.size.width
if (height > 0 && width > 0) {
    // We have an image
} else {
    // ...and we don't
}
