Which is the best way to check whether a UIImage is blank?
I have a painting editor that returns a UIImage; I don't want to save the image if there's nothing drawn on it.
Try this code:

BOOL isImageFlag = [self checkIfImage:image];

And the checkIfImage: method:

- (BOOL)checkIfImage:(UIImage *)someImage {
    CGImageRef image = someImage.CGImage;
    size_t width = CGImageGetWidth(image);
    size_t height = CGImageGetHeight(image);
    GLubyte *imageData = malloc(width * height * 4);

    int bytesPerPixel = 4;
    size_t bytesPerRow = bytesPerPixel * width;
    int bitsPerComponent = 8;

    CGContextRef imageContext = CGBitmapContextCreate(
        imageData, width, height, bitsPerComponent, bytesPerRow,
        CGImageGetColorSpace(image),
        kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextSetBlendMode(imageContext, kCGBlendModeCopy);
    CGContextDrawImage(imageContext, CGRectMake(0, 0, width, height), image);
    CGContextRelease(imageContext);

    BOOL imageExist = NO;
    for (size_t byteIndex = 0; byteIndex < width * height * 4; byteIndex += 4) {
        CGFloat red   = imageData[byteIndex] / 255.0f;
        CGFloat green = imageData[byteIndex + 1] / 255.0f;
        CGFloat blue  = imageData[byteIndex + 2] / 255.0f;
        CGFloat alpha = imageData[byteIndex + 3] / 255.0f;
        // Any pixel that isn't opaque white counts as content.
        if (red != 1 || green != 1 || blue != 1 || alpha != 1) {
            imageExist = YES;
            break;
        }
    }
    free(imageData);
    return imageExist;
}
You will have to link the OpenGLES framework and add this import to the .m file (it's only needed for the GLubyte typedef):

#import <OpenGLES/ES1/gl.h>
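If you'd rather not link a framework just for one typedef, an equivalent sketch is to use the standard uint8_t instead (this substitution is not part of the original answer):

#include <stdint.h>

// Drop-in replacement for the GLubyte buffer above
uint8_t *imageData = malloc(width * height * 4);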
One idea would be to call UIImagePNGRepresentation to get an NSData object, then compare it against a pre-defined 'empty' version, i.e. by calling:

- (BOOL)isEqualToData:(NSData *)otherData

I haven't tried this on large images, so you may want to check performance if your image data is big; for small images it should be roughly as cheap as calling memcmp() in C.
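A minimal sketch of that idea, assuming the blank canvas can be re-rendered at the same size (emptyImageOfSize: is a hypothetical helper, not an API):

- (BOOL)imageIsBlank:(UIImage *)image {
    // Compare the PNG bytes of the drawing against a known-empty canvas
    // of the same size, re-rendered by the hypothetical emptyImageOfSize:.
    NSData *candidate = UIImagePNGRepresentation(image);
    NSData *blank = UIImagePNGRepresentation([self emptyImageOfSize:image.size]);
    return [candidate isEqualToData:blank];
}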
Something along these lines (see the sketch below):

1. Create a 1 px square CGContext
2. Draw the image so that it fills the context
3. Test the single pixel of the context to see whether it contains any data; if it's completely transparent, consider the picture blank

Others may be able to add more details to this answer.
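A minimal sketch of those steps, assuming the drawing is transparent wherever nothing has been painted:

- (BOOL)imageIsEmptyByDownscaling:(UIImage *)image {
    // Downscale the whole image into a single RGBA pixel; with interpolation
    // this approximates an average of the image's pixels.
    uint8_t pixel[4] = {0, 0, 0, 0};
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 4, colorSpace,
        kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(context, CGRectMake(0, 0, 1, 1), image.CGImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    // Fully transparent after averaging means nothing was ever drawn.
    return pixel[3] == 0;
}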
Here's a solution in Swift that does not require any additional frameworks.
Thanks to answers in a related question here:
Get Pixel Data of ImageView from coordinates of touch screen on xcode?
func imageIsEmpty(_ image: UIImage) -> Bool {
    guard let cgImage = image.cgImage,
          let dataProvider = cgImage.dataProvider else {
        return true
    }

    let pixelData = dataProvider.data
    let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
    // Note: size is in points; for @2x/@3x images the underlying CGImage has
    // more pixels, so cgImage.width/height (and bytesPerRow) would be more
    // accurate here.
    let imageWidth = Int(image.size.width)
    let imageHeight = Int(image.size.height)
    for x in 0..<imageWidth {
        for y in 0..<imageHeight {
            let pixelIndex = ((imageWidth * y) + x) * 4
            let r = data[pixelIndex]
            let g = data[pixelIndex + 1]
            let b = data[pixelIndex + 2]
            let a = data[pixelIndex + 3]
            if a != 0 {
                if r != 0 || g != 0 || b != 0 {
                    return false
                }
            }
        }
    }
    return true
}
I'm not at my Mac, so I can't test this (and there are probably compile errors). But one method might be:

//The pixel format depends on what sort of image you're expecting. If it's RGBA, this should work
typedef struct
{
    uint8_t red;
    uint8_t green;
    uint8_t blue;
    uint8_t alpha;
} MyPixel_T;

UIImage *myImage = [self doTheThingToGetTheImage];
CGImageRef myCGImage = [myImage CGImage];

//Get a bitmap context for the image; passing NULL lets CG allocate the pixel buffer
CGContextRef bitmapContext =
    CGBitmapContextCreate(NULL, CGImageGetWidth(myCGImage), CGImageGetHeight(myCGImage),
                          CGImageGetBitsPerComponent(myCGImage), CGImageGetBytesPerRow(myCGImage),
                          CGImageGetColorSpace(myCGImage), CGImageGetBitmapInfo(myCGImage));

//Draw the image into the context
CGContextDrawImage(bitmapContext, CGRectMake(0, 0, CGImageGetWidth(myCGImage), CGImageGetHeight(myCGImage)), myCGImage);

//Get pixel data for the image
MyPixel_T *pixels = (MyPixel_T *)CGBitmapContextGetData(bitmapContext);
size_t pixelCount = CGImageGetWidth(myCGImage) * CGImageGetHeight(myCGImage);
for (size_t i = 0; i < pixelCount; i++)
{
    MyPixel_T p = pixels[i];
    //Your definition of what's blank may differ from mine
    if (p.red > 0 && p.green > 0 && p.blue > 0 && p.alpha > 0)
        return NO;
}
//Remember to CGContextRelease(bitmapContext) before returning in real code
return YES;
I just encountered the same problem. Solved it by checking the dimensions:

Swift example:

let image = UIImage()
let height = image.size.height
let width = image.size.width

if height > 0 && width > 0 {
    // We have an image
} else {
    // ...and we don't
}
I'm using a CGBitmapContext to convert colour spaces to ARGB and get the pixel data values. I malloc space for the bitmap context and free it after I'm done, but I'm still seeing a memory leak in Instruments. I'm probably doing something wrong, so any help would be appreciated.
Here is the ARGBBitmapContext function
func createARGBBitmapContext(width: Int, height: Int) -> CGContext {
    var bitmapByteCount = 0
    var bitmapBytesPerRow = 0

    //Get image width, height
    let pixelsWide = width
    let pixelsHigh = height

    bitmapBytesPerRow = Int(pixelsWide) * 4
    bitmapByteCount = bitmapBytesPerRow * Int(pixelsHigh)

    let colorSpace = CGColorSpaceCreateDeviceRGB()

    // Here is the malloc call that Instruments complains of
    let bitmapData = malloc(bitmapByteCount)

    let context = CGContext(data: bitmapData, width: pixelsWide, height: pixelsHigh, bitsPerComponent: 8, bytesPerRow: bitmapBytesPerRow, space: colorSpace, bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue)

    // Do I need to free something here first?
    return context!
}
Here is where I use the context to retrieve all the pixel values as a list of UInt8s (and where the memory leak shows up):

extension UIImage {
    func ARGBPixelValues() -> [UInt8] {
        let width = Int(self.size.width)
        let height = Int(self.size.height)
        var pixels = [UInt8](repeatElement(0, count: width * height * 3))

        let rect = CGRect(x: 0, y: 0, width: width, height: height)
        let context = createARGBBitmapContext(width: width, height: height)
        context.clear(rect)
        context.draw(self.cgImage!, in: rect)

        var location = 0
        if let data = context.data {
            while location < (width * height) {
                let arrOffset = 3 * location
                let offset = 4 * location

                let R = data.load(fromByteOffset: offset + 1, as: UInt8.self)
                let G = data.load(fromByteOffset: offset + 2, as: UInt8.self)
                let B = data.load(fromByteOffset: offset + 3, as: UInt8.self)

                pixels[arrOffset] = R
                pixels[arrOffset + 1] = G
                pixels[arrOffset + 2] = B

                location += 1
            }
            free(context.data) // Free the data consumed, perhaps this isn't right?
        }
        return pixels
    }
}
Instruments reports a leak from that malloc of 1.48 MiB, which is right for my image size (540 × 720). I free the data, but apparently that is not right.
I should mention that I know you can pass nil to the CGContext initializer (and it will manage the memory), but I'm more curious why using malloc creates an issue. Is there something more I should know? (I'm more familiar with Obj-C.)
Because Core Graphics is not handled by ARC (like all other C libraries), you need to wrap your code in an autoreleasepool, even in Swift. Particularly if you are not on the main thread (which you should not be, if Core Graphics is involved; .userInitiated or lower is appropriate).
func myFunc() {
    for _ in 0 ..< makeMoneyFast {
        autoreleasepool {
            // Create CGImageRef etc...
            // Do Stuff... whir... whiz... PROFIT!
        }
    }
}
For those that care, your Objective-C should also be wrapped like:

BOOL result = NO;
NSMutableData *data = [[NSMutableData alloc] init];
@autoreleasepool {
    CGImageRef image = [self CGImageWithResolution:dpi
                                          hasAlpha:hasAlpha
                                     relativeScale:scale];

    NSAssert(image != nil, @"could not create image for TIFF export");
    if (image == nil)
        return nil;

    CGImageDestinationRef destRef = CGImageDestinationCreateWithData((CFMutableDataRef)data, kUTTypeTIFF, 1, NULL);
    CGImageDestinationAddImage(destRef, image, (CFDictionaryRef)options);
    result = CGImageDestinationFinalize(destRef);
    CFRelease(destRef);
}

if (result) {
    return [data copy];
} else {
    return nil;
}
See this answer for details.
I wrote this code that is supposed to NSLog all non-white pixels as a test before going further.

This is my code:

UIImage *image = [UIImage imageNamed:@"image"];
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
if (!pixelData) {
    return;
}
const UInt8 *buffer = CFDataGetBytePtr(pixelData);
CFRelease(pixelData);

for (int y = 0; y < image.size.height; y++) {
    for (int x = 0; x < image.size.width; x++) {
        int pixelInfo = ((image.size.width * y) + x) * 4;

        UInt8 red = buffer[pixelInfo];
        UInt8 green = buffer[pixelInfo + 1];
        UInt8 blue = buffer[pixelInfo + 2];
        UInt8 alpha = buffer[pixelInfo + 3];

        if (red != 0xff && green != 0xff && blue != 0xff) {
            NSLog(@"R: %hhu, G: %hhu, B: %hhu, A: %hhu", red, green, blue, alpha);
        }
    }
}

For some reason, when I build the app, it iterates for a moment and then throws a BAD_ACCESS error on the line:

UInt8 red = buffer[pixelInfo];

What could be the issue?
Is this the fastest method to iterate through pixels?
I think the problem is a buffer size error: pixelInfo is scaled by 4 bytes per pixel, so the data needs to hold width × height × 4 bytes, and you have to be careful not to read past the end of it (the UIImage point size may not match the CGImage pixel size). Note also that you call CFRelease(pixelData) before reading from buffer; the pointer returned by CFDataGetBytePtr is only valid while pixelData is alive, so release it after the loop. A corrected sketch follows.
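A corrected sketch along those lines, which keeps pixelData alive until the loop has finished and sizes the loop from the CGImage itself (this still assumes tightly packed RGBA data; checking CGImageGetBytesPerRow would be more robust):

CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
if (!pixelData) {
    return;
}
const UInt8 *buffer = CFDataGetBytePtr(pixelData);
// Use the CGImage's pixel dimensions, not UIImage point sizes.
size_t width = CGImageGetWidth(image.CGImage);
size_t height = CGImageGetHeight(image.CGImage);

for (size_t y = 0; y < height; y++) {
    for (size_t x = 0; x < width; x++) {
        size_t pixelInfo = ((width * y) + x) * 4;
        UInt8 red = buffer[pixelInfo];
        UInt8 green = buffer[pixelInfo + 1];
        UInt8 blue = buffer[pixelInfo + 2];
        UInt8 alpha = buffer[pixelInfo + 3];
        if (red != 0xff && green != 0xff && blue != 0xff) {
            NSLog(@"R: %hhu, G: %hhu, B: %hhu, A: %hhu", red, green, blue, alpha);
        }
    }
}

// Only release the data once we're done reading from buffer.
CFRelease(pixelData);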
I'm trying to determine whether a drawing is currently all white. The solution I could come up with was to scale down the image, then check pixel by pixel whether it's white, returning NO as soon as a non-white pixel is found.

It works, but I have a gut feeling it could be done in a more performant way. Here's the code:

- (BOOL)imageIsAllWhite:(UIImage *)image {
    CGSize size = CGSizeMake(100.0f, 100.0f);
    UIImageView *imageView = [[UIImageView alloc] initWithImage:[image scaledImageWithSize:size]];

    unsigned char pixel[4 * (int)size.width * (int)size.height];

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef cgContext = CGBitmapContextCreate(
        pixel,
        (size_t)size.width,
        (size_t)size.height,
        8,
        (size_t)(size.width * 4),
        colorSpace,
        kCGBitmapAlphaInfoMask & kCGImageAlphaPremultipliedLast);

    CGContextTranslateCTM(cgContext, 0, 0);
    [imageView.layer renderInContext:cgContext];
    CGContextRelease(cgContext);
    CGColorSpaceRelease(colorSpace);

    for (int i = 0; i < sizeof(pixel); i = i + 4) {
        if (!(pixel[i] == 255 && pixel[i + 1] == 255 && pixel[i + 2] == 255)) {
            return NO;
        }
    }

    return YES;
}
Any ideas for improvement?
Try the following code to check whether a UIImage is all white:

- (BOOL)checkIfImage:(UIImage *)someImage {
    CGImageRef image = someImage.CGImage;
    size_t width = CGImageGetWidth(image);
    size_t height = CGImageGetHeight(image);
    GLubyte *imageData = malloc(width * height * 4);

    int bytesPerPixel = 4;
    size_t bytesPerRow = bytesPerPixel * width;
    int bitsPerComponent = 8;

    CGContextRef imageContext = CGBitmapContextCreate(
        imageData, width, height, bitsPerComponent, bytesPerRow,
        CGImageGetColorSpace(image),
        kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextSetBlendMode(imageContext, kCGBlendModeCopy);
    CGContextDrawImage(imageContext, CGRectMake(0, 0, width, height), image);
    CGContextRelease(imageContext);

    BOOL isWhite = YES;
    for (size_t byteIndex = 0; byteIndex < width * height * 4; byteIndex += 4) {
        CGFloat red   = imageData[byteIndex] / 255.0f;
        CGFloat green = imageData[byteIndex + 1] / 255.0f;
        CGFloat blue  = imageData[byteIndex + 2] / 255.0f;
        CGFloat alpha = imageData[byteIndex + 3] / 255.0f;
        if (red != 1 || green != 1 || blue != 1 || alpha != 1) {
            isWhite = NO;
            break;
        }
    }
    free(imageData);
    return isWhite;
}
Calling the function:

UIImage *image = [UIImage imageNamed:@"demo1.png"];
BOOL isImageFlag = [self checkIfImage:image];
if (isImageFlag == YES) {
    NSLog(@"YES it's totally white");
} else {
    NSLog(@"Nope it's not white");
}
It feels like there's no speedy route that would go to the GPU and back again, so the answer is really no more interesting than taking a statistical approach and using GCD to ensure multicore utilisation.

In most images, colours are likely to be close to other similar colours. So if one pixel is white, it's more likely that its neighbouring pixel is also white. A strict linear progression through the pixels is therefore less likely to find a non-white pixel quickly than sampling points a distance apart, then sampling closer points, and so on. Ideally there'd be some f(x) that took the relevant range of integers as input and returned each of them only once, such that the distance between f(x) and f(x+1) is greatest for x = 0 and then decreases monotonically.

If the image is reasonably large, and more so if you can afford to return the result asynchronously, then the cost of dispatching the task to multiple cores is likely to be outweighed by the gain of having multiple cores work on it at once.
You're fixing your image size at 100x100 pixels. I'm going to take a liberty and assume you can move up to 128x128 because it makes the f(x) easy — in that case you can just do a bit reversal.
E.g.
static inline int convolution(int input) {
    // bit reverse a 14-bit number
    return ((input & 0x0001) << 13) |
           ((input & 0x0002) << 11) |
           ((input & 0x0004) << 9) |
           ((input & 0x0008) << 7) |
           ((input & 0x0010) << 5) |
           ((input & 0x0020) << 3) |
           ((input & 0x0040) << 1) |
           ((input & 0x0080) >> 1) |
           ((input & 0x0100) >> 3) |
           ((input & 0x0200) >> 5) |
           ((input & 0x0400) >> 7) |
           ((input & 0x0800) >> 9) |
           ((input & 0x1000) >> 11) |
           ((input & 0x2000) >> 13);
}
... elsewhere ...

__block BOOL hasFoundNonWhite = NO;

const int numberOfPixels = 128 * 128;
const int pixelsPerBatch = 128;
const int numberOfBatches = numberOfPixels / pixelsPerBatch;

dispatch_apply(numberOfBatches,
               dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0),
               ^(size_t index) {
    if (hasFoundNonWhite) {
        return;
    }

    index *= pixelsPerBatch;
    for (int i = index; i < index + pixelsPerBatch; i++) {
        int indexToCheck = convolution(i);
        int arrayIndex = indexToCheck << 2;
        if (!(pixel[arrayIndex] == 255 && pixel[arrayIndex + 1] == 255 && pixel[arrayIndex + 2] == 255)) {
            hasFoundNonWhite = YES;
            return;
        }
    }
});

return !hasFoundNonWhite;
Addendum: the other knee-jerk thing you'd do when dealing with a vector processing task like this is check the Accelerate framework, likely vDSP. That ends up compiling down to use the vector unit on your CPU. In this case you might reformulate the test as "sum of vector must equal size of vector * 255" (if you can make an assumption about alpha). However there is no integral sum, and converting to float probably isn't worth the cost.
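For completeness, here is a sketch of that vDSP reformulation, including the float conversion whose cost is in doubt (it assumes a tightly packed RGBA buffer like the pixel array above):

#import <Accelerate/Accelerate.h>

static BOOL bufferIsAllWhite(const uint8_t *pixel, vDSP_Length byteCount) {
    // Promote the bytes to floats so the vector unit can sum them.
    float *floats = malloc(byteCount * sizeof(float));
    vDSP_vfltu8(pixel, 1, floats, 1, byteCount);

    float sum = 0.0f;
    vDSP_sve(floats, 1, &sum, byteCount);
    free(floats);

    // Every byte of an opaque all-white RGBA buffer is 255.
    return sum == 255.0f * byteCount;
}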
I need to convert a matrix representing a b/w image to a UIImage.
For example, a matrix like this would be the symbol '+':

1 0 1
0 0 0
1 0 1

The matrix represents an image in black and white, where black is 0 and white is 1. I need to convert this matrix to a UIImage; in this case the width would be 3 and the height would be 3.
I use this method to create an image for my Game Of Life app. The advantage over drawing into a graphics context is that this is ridiculously fast.

This was all written a long time ago, so it's a bit messier than what I might do now, but the method would stay the same. For some reason I defined these outside the method...

{
    unsigned int length_in_bytes;
    unsigned char *cells;
    unsigned char *temp_cells;
    unsigned char *changes;
    unsigned char *temp_changes;
    GLubyte *buffer;
    CGImageRef imageRef;
    CGDataProviderRef provider;
    int ar, ag, ab, dr, dg, db;
    float arf, agf, abf, drf, dgf, dbf, blah;
}
You won't need all of these for the image.
The method itself...

- (UIImage *)imageOfMapWithDeadColor:(UIColor *)deadColor aliveColor:(UIColor *)aliveColor
{
    //translate colours into rgb components
    if ([deadColor isEqual:[UIColor whiteColor]]) {
        dr = dg = db = 255;
    } else if ([deadColor isEqual:[UIColor blackColor]]) {
        dr = dg = db = 0;
    } else {
        [deadColor getRed:&drf green:&dgf blue:&dbf alpha:&blah];
        dr = drf * 255;
        dg = dgf * 255;
        db = dbf * 255;
    }

    if ([aliveColor isEqual:[UIColor whiteColor]]) {
        ar = ag = ab = 255;
    } else if ([aliveColor isEqual:[UIColor blackColor]]) {
        ar = ag = ab = 0;
    } else {
        [aliveColor getRed:&arf green:&agf blue:&abf alpha:&blah];
        ar = arf * 255;
        ag = agf * 255;
        ab = abf * 255;
    }

    //create bytes of image from the cell map
    int yRef, cellRef;
    unsigned char *cell_ptr = cells;
    for (int y = 0; y < self.height; y++)
    {
        yRef = y * (self.width * 4);
        int x = 0;
        do
        {
            cellRef = yRef + 4 * x;
            if (*cell_ptr & 0x01) {
                //alive colour
                buffer[cellRef] = ar;
                buffer[cellRef + 1] = ag;
                buffer[cellRef + 2] = ab;
                buffer[cellRef + 3] = 255;
            } else {
                //dead colour
                buffer[cellRef] = dr;
                buffer[cellRef + 1] = dg;
                buffer[cellRef + 2] = db;
                buffer[cellRef + 3] = 255;
            }
            cell_ptr++;
        } while (++x < self.width);
    }

    //create image
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // render the byte array into an image ref
    imageRef = CGImageCreate(self.width, self.height, 8, 32, 4 * self.width, colorSpace, kCGBitmapByteOrderDefault, provider, NULL, NO, kCGRenderingIntentDefault);

    // convert image ref to UIImage
    UIImage *image = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGColorSpaceRelease(colorSpace);

    //return image
    return image;
}
You should be able to adapt this to create an image from your matrix. Note that buffer and provider are created elsewhere; a sketch of that setup follows.
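The answer doesn't show how buffer and provider are allocated; a minimal sketch of that setup, assuming the provider reads the RGBA byte array directly, might be:

// Allocate one RGBA byte quad per cell.
length_in_bytes = self.width * self.height * 4;
buffer = (GLubyte *)malloc(length_in_bytes);

// Wrap the buffer in a data provider that CGImageCreate can pull from.
// (NULL release callback here; real code should free the buffer when done.)
provider = CGDataProviderCreateWithData(NULL, buffer, length_in_bytes, NULL);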
In order to convert a matrix to a UIImage:

CGSize size = CGSizeMake(columns, lines);
// Scale 1 so one point in the context is exactly one pixel in the image
UIGraphicsBeginImageContextWithOptions(size, YES, 1);

for (int i = 0; i < lines; i++)
{
    for (int j = 0; j < columns; j++)
    {
        // Choose the color to draw
        if (matrixDraw[i * columns + j] == 1) {
            [[UIColor whiteColor] setFill];
        } else {
            // Draw a black pixel
            [[UIColor blackColor] setFill];
        }
        // Draw just one pixel at column j, line i
        UIRectFill(CGRectMake(j, i, 1, 1));
    }
}

// Create a UIImage from the context we have just drawn into
UIImage *imageFinal = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Basically what we are doing is:

1. Create a context with the size of our image
2. Loop over each pixel to read its value: black is 0 and white is 1, so we set the fill color depending on the value

The most important function is:

UIRectFill(CGRectMake(j, i, 1, 1));

It fills a single pixel at position (j, i) with a width and height of 1. Finally we create a UIImage from the current context and end the image context.

Hope it helps someone!
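For the '+' example from the question, a hypothetical setup for matrixDraw (flattened row by row) could look like:

// 0 = black, 1 = white; 3 lines x 3 columns for the '+' symbol
int lines = 3;
int columns = 3;
int matrixDraw[9] = {
    1, 0, 1,
    0, 0, 0,
    1, 0, 1
};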
How can I replace a specific color (RGB value) in a CGBitmapContext that has already been drawn into?
Is there an easy way?
Thanks in advance.
You'll want to get a pointer to the pixels and information about their format by doing something like this:

// This assumes the data is RGBA format, 8-bits per channel.
// You'll need to verify that by calling CGBitmapContextGetBitsPerPixel(), etc.
typedef struct RGBA8 {
    UInt8 red;
    UInt8 green;
    UInt8 blue;
    UInt8 alpha;
} RGBA8;

RGBA8 *pixels = CGBitmapContextGetData(context);
UInt32 height = CGBitmapContextGetHeight(context);
UInt32 width = CGBitmapContextGetWidth(context);
UInt32 rowBytes = CGBitmapContextGetBytesPerRow(context);
UInt32 x, y;

for (y = 0; y < height; y++)
{
    RGBA8 *currentRow = (RGBA8 *)((UInt8 *)pixels + y * rowBytes);
    for (x = 0; x < width; x++)
    {
        if ((currentRow->red == replaceRed) && (currentRow->green == replaceGreen) &&
            (currentRow->blue == replaceBlue) && (currentRow->alpha == replaceAlpha))
        {
            currentRow->red = newRed;
            currentRow->green = newGreen;
            currentRow->blue = newBlue;
            currentRow->alpha = newAlpha;
        }
        currentRow++;
    }
}
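If it helps, the same loop can be packaged as a helper; here is a sketch assuming the RGBA8 struct above (note that if your context uses premultiplied alpha, the components you match against must be premultiplied as well):

// Replace every occurrence of one RGBA value with another, in place.
// Assumes the context was created as 8-bit-per-channel RGBA.
static void ReplaceColor(CGContextRef context, RGBA8 find, RGBA8 replace)
{
    RGBA8 *pixels = CGBitmapContextGetData(context);
    size_t height = CGBitmapContextGetHeight(context);
    size_t width = CGBitmapContextGetWidth(context);
    size_t rowBytes = CGBitmapContextGetBytesPerRow(context);

    for (size_t y = 0; y < height; y++) {
        RGBA8 *row = (RGBA8 *)((UInt8 *)pixels + y * rowBytes);
        for (size_t x = 0; x < width; x++) {
            if (row[x].red == find.red && row[x].green == find.green &&
                row[x].blue == find.blue && row[x].alpha == find.alpha) {
                row[x] = replace;
            }
        }
    }
}

// e.g. turn fully opaque pure red into pure green:
RGBA8 red = {255, 0, 0, 255}, green = {0, 255, 0, 255};
ReplaceColor(context, red, green);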