iOS: using UIGraphicsGetImageFromCurrentImageContext to detect certain pixels on screen

I have a UIView where the user can draw various UIBezierPaths.
I need to analyze the drawn Bézier paths to detect certain patterns, but I have not found any way to convert a UIBezierPath into a list of coordinates/points. Is this really not possible? It seems strange, since this data must be stored and used somehow to draw the actual paths.
So to work around this problem I decided to draw the BezierPath with a width of 1px:
[path setLineWidth:1];
And convert my UIView to a UIImage:
UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0);
[self.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
Then I can identify pixels by getting the color at a certain pixel position in the image:
- (UIColor *)colorAtPixel:(CGPoint)point {
    CGImageRef imageRef = [self CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    unsigned char *rawData = (unsigned char *)calloc(height * width * 4, sizeof(unsigned char));
    CGContextRef context = CGBitmapContextCreate(rawData,
                                                 width,
                                                 height,
                                                 bitsPerComponent,
                                                 bytesPerRow,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);
    // Now rawData contains the image data in the RGBA8888 pixel format.
    NSUInteger byteIndex = (bytesPerRow * (NSUInteger)point.y) + (NSUInteger)point.x * bytesPerPixel;
    // Convert byte values [0..255] to floats [0.0..1.0]
    CGFloat red   = rawData[byteIndex]     / 255.0;
    CGFloat green = rawData[byteIndex + 1] / 255.0;
    CGFloat blue  = rawData[byteIndex + 2] / 255.0;
    CGFloat alpha = rawData[byteIndex + 3] / 255.0;
    UIColor *color = [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
    free(rawData);
    return color;
}
Now my issue is that the generated image is blurry: if I draw a straight 1px BezierPath line and convert it to a UIImage, the line ends up about 3 px wide because of the blur.
How can I solve this? Is there really no way to convert a BezierPath to a list of coordinates?

This is due to anti-aliased rendering.
See this question on SO for information on how to turn this off.
I believe you are better off rendering the Bézier curve yourself. It will be faster, with a very small memory footprint.
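For reference, a minimal sketch of turning anti-aliasing off before rendering the layer into the bitmap context, assuming the same snapshot code as in the question:
UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
// Disable anti-aliasing so a 1px path stays 1px instead of being smoothed
CGContextSetShouldAntialias(ctx, NO);
CGContextSetAllowsAntialiasing(ctx, NO);
[self.layer renderInContext:ctx];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Also note that passing 0.0 as the scale renders at the screen scale, so a line that is 1 point wide covers 2 pixels on Retina displays; passing a scale of 1.0 gives you a one-point-to-one-pixel bitmap.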

You may take a look at this answer from Moritz about using CGPathApply and CGPathApplierFunction.
For each element in the specified path, Quartz calls the applier function, which can examine (but not modify) the element.
That could give you access to the points stored in a UIBezierPath.
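As an illustration (my own sketch, not code from the linked answer), collecting the points of a UIBezierPath via CGPathApply might look like this:
// Applier function: called once per path element; collects its points
// into the NSMutableArray passed through the info pointer.
static void collectPoints(void *info, const CGPathElement *element) {
    NSMutableArray *points = (__bridge NSMutableArray *)info;
    // Curve elements carry up to 3 points; move/line elements carry 1
    int count = 0;
    switch (element->type) {
        case kCGPathElementMoveToPoint:
        case kCGPathElementAddLineToPoint:      count = 1; break;
        case kCGPathElementAddQuadCurveToPoint: count = 2; break;
        case kCGPathElementAddCurveToPoint:     count = 3; break;
        case kCGPathElementCloseSubpath:        count = 0; break;
    }
    for (int i = 0; i < count; i++) {
        [points addObject:[NSValue valueWithCGPoint:element->points[i]]];
    }
}

// Usage:
NSMutableArray *points = [NSMutableArray array];
CGPathApply(path.CGPath, (__bridge void *)points, collectPoints);
Note that for curve segments these are control points, not sampled points along the curve.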

Related

inconsistencies in colors when drawing

I have a UIImageView and I draw into it using [UIColor orangeColor]. Now, I have a function that is supposed to detect the pixel color of a tapped pixel.
R: 1.000000 G: 0.501961 B: 0.000000
That's the RGB value I receive when attempting to detect the pixel color for orange ([UIColor orangeColor]).
It should be:
R: 1.000000 G: 0.5 B: 0.000000
Here's my function:
- (UIColor *)colorAtPixel:(CGPoint)point {
    // Cancel if point is outside image coordinates
    if (!CGRectContainsPoint(CGRectMake(0.0f, 0.0f, _overlay_imageView.frame.size.width, _overlay_imageView.frame.size.height), point)) {
        return nil;
    }
    // Create a 1x1 pixel byte array and bitmap context to draw the pixel into.
    // Reference: http://stackoverflow.com/questions/1042830/retrieving-a-pixel-alpha-value-for-a-uiimage
    NSInteger pointX = trunc(point.x);
    NSInteger pointY = trunc(point.y);
    CGImageRef cgImage = _overlay_imageView.image.CGImage;
    NSUInteger width = CGImageGetWidth(cgImage);
    NSUInteger height = CGImageGetHeight(cgImage);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    int bytesPerPixel = 4;
    int bytesPerRow = bytesPerPixel * 1;
    NSUInteger bitsPerComponent = 8;
    unsigned char pixelData[4] = { 0, 0, 0, 0 };
    CGContextRef context = CGBitmapContextCreate(pixelData,
                                                 1,
                                                 1,
                                                 bitsPerComponent,
                                                 bytesPerRow,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextSetBlendMode(context, kCGBlendModeCopy);
    // Draw the pixel we are interested in onto the bitmap context
    CGContextTranslateCTM(context, -pointX, -pointY);
    CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, (CGFloat)width, (CGFloat)height), cgImage);
    CGContextRelease(context);
    // Convert color values [0..255] to floats [0.0..1.0]
    CGFloat red = (CGFloat)pixelData[0] / 255.0f;
    CGFloat green = (CGFloat)pixelData[1] / 255.0f;
    CGFloat blue = (CGFloat)pixelData[2] / 255.0f;
    CGFloat alpha = (CGFloat)pixelData[3] / 255.0f;
    return [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
}
Any ideas?
I should mention that my UIImageView has a clear background, and it's on top of a black canvas. Could that be the issue?
There's nothing wrong with your function. This is a result of floating-point math. Half of 255 (the max value of an unsigned byte) is either 127/255.0 or 128/255.0, depending on how you round. Neither of those is 0.5; they are 0.498039215686275 and 0.501960784313725 respectively.
EDIT: I should add that the colors in the CGImage are stored as bytes, not floats. So when you create your orange with floats in UIColor, it gets quantized to R: 255, G: 128, B: 0, A: 255. When you read this back as floats you get R: 1.0, G: 0.501961, B: 0.0, A: 1.0.
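A quick sketch of that round trip, to make the quantization concrete:
// 0.5 cannot be represented exactly in an 8-bit channel:
CGFloat component = 0.5;
uint8_t stored = (uint8_t)lround(component * 255.0); // 0.5 * 255 = 127.5 -> 128
CGFloat recovered = stored / 255.0;                  // 128 / 255 ≈ 0.501961
NSLog(@"recovered = %f", recovered);                 // prints 0.501961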

Fastest and most efficient way to find a non-transparent pixel of a UIImage on iOS

I want to ask about an image-processing mechanism. I am developing an iOS app which uses OpenGL ES for hand-writing on a view. I have a save function that converts the view, with all its drawing, to an image and saves it to the Photo Library.
I can convert the content of the view to an image easily using the code below.
(Note: the following code is not the problem. Its purpose is just to convert the content of the view to an image, and it works fine; I show it here for reference.)
// Get the size of the backing CAEAGLLayer
NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
NSInteger dataLength = width * height * 4;
GLubyte *data = (GLubyte *)malloc(dataLength * sizeof(GLubyte));
// Read pixel data from the framebuffer
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
// Create a CGImage with the pixel data
// If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel;
// otherwise, use kCGImageAlphaPremultipliedLast
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace,
                                kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                ref, NULL, true, kCGRenderingIntentDefault);
// OpenGL ES measures data in PIXELS
// Create a graphics context with the target size measured in POINTS
NSInteger widthInPoints, heightInPoints;
if (NULL != &UIGraphicsBeginImageContextWithOptions) {
    // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
    // Set the scale parameter to your OpenGL ES view's contentScaleFactor
    // so that you get a high-resolution snapshot when its value is greater than 1.0
    CGFloat scale = self.contentScaleFactor;
    widthInPoints = width / scale;
    heightInPoints = height / scale;
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
} else {
    // On iOS prior to 4, fall back to UIGraphicsBeginImageContext (1 point == 1 pixel)
    widthInPoints = width;
    heightInPoints = height;
    UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
}
CGContextRef cgcontext = UIGraphicsGetCurrentContext();
// UIKit's coordinate system is upside down relative to the GL/Quartz coordinate system
// Flip the CGImage by rendering it to the flipped bitmap context
// The size of the destination area is measured in POINTS
CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);
// Retrieve the UIImage from the current context
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Clean up
free(data);
CFRelease(ref);
CFRelease(colorspace);
CGImageRelease(iref);
return image;
The problem is that I want to determine whether the view has any drawing on it or not. If there is no drawing, I can't save, because saving a blank image is useless. So my idea is to check whether the image has any non-transparent pixel.
My solution:
1. Convert my drawing view to an image (its pixels have an alpha channel)
2. Check whether the image has any pixel with a non-zero alpha channel
3. If yes, the user has drawn something -> save
4. If no, the user has not drawn anything, or has erased everything -> don't save
I know the brute-force algorithm that goes through all pixels, but it seems like the worst approach, to be used only if there is no more efficient way.
So, is there any efficient way to check this?
I found that the brute-force algorithm is not as slow as I thought. It takes less than about 200 milliseconds to go through all the pixel data of an image the size of an iPad Pro screen, as well as an iPad mini 2.
So I think using brute force is acceptable.
Here is the code to check:
CGImageRef imageRef = [selfImage CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
NSUInteger total = width * height * bytesPerPixel;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = (unsigned char *)calloc(total, sizeof(unsigned char));
CGContextRef tempContext = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(tempContext, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(tempContext);
// Now rawData contains the image data in the RGBA8888 pixel format
BOOL empty = YES;
for (NSUInteger i = 0; i < total; i += bytesPerPixel) {
    // The alpha byte is the 4th component of each RGBA pixel
    if (rawData[i + 3] != 0) {
        empty = NO;
        break;
    }
}
free(rawData);
if (empty) {
    // Do something
} else {
    // Do other thing
}
If there is any improvement or a more efficient algorithm, please post it here; I'd really appreciate it.
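One possible improvement (a sketch under the same assumptions as the code above, with selfImage being the image to check): render into an alpha-only bitmap context, which stores one byte per pixel, so there is a quarter of the data to scan.
CGImageRef imageRef = [selfImage CGImage];
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
size_t total = width * height;
// Alpha-only context: 8 bits per pixel, no color space needed
unsigned char *alphaData = (unsigned char *)calloc(total, sizeof(unsigned char));
CGContextRef ctx = CGBitmapContextCreate(alphaData, width, height,
                                         8, width, NULL, kCGImageAlphaOnly);
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(ctx);
BOOL empty = YES;
for (size_t i = 0; i < total; i++) {
    if (alphaData[i] != 0) {
        empty = NO;
        break;
    }
}
free(alphaData);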

How to control a memory overflow correctly in Objective-C?

I'm trying to add all the pixel values of a picture into an array. If the picture is too big, my app loses its connection with assets and crashes.
How can I check whether memory is growing too much, so I can stop and show an alert? I don't know the maximum image size that can be handled, so I can't prevent this before loading a picture.
My code is:
- (NSArray *)getRGBAFromImage:(UIImage *)image atx:(int)xp atY:(int)yp
{
    NSMutableArray *resultColor = [NSMutableArray array];
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = (unsigned char *)calloc(height * width * 4, sizeof(unsigned char));
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);
    // Now rawData contains the image data in the RGBA8888 pixel format.
    NSUInteger byteIndex = (bytesPerRow * yp) + xp * bytesPerPixel;
    // EDIT: THIS IS THE LOOP
    // (count, redTotal, greenTotal, and blueTotal are declared elsewhere in my code)
    for (int i = 0; i < count; ++i)
    {
        // Scale byte values [0..255] down to [0.0..1.0] for UIColor
        CGFloat red = rawData[byteIndex] / 255.0;
        CGFloat green = rawData[byteIndex + 1] / 255.0;
        CGFloat blue = rawData[byteIndex + 2] / 255.0;
        CGFloat alpha = rawData[byteIndex + 3] / 255.0;
        byteIndex += bytesPerPixel;
        redTotal = redTotal + red;
        greenTotal = greenTotal + green;
        blueTotal = blueTotal + blue;
        UIColor *acolor = [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
        [resultColor addObject:acolor];
    }
    NSLog(@"width:%lu height:%lu", (unsigned long)width, (unsigned long)height);
    free(rawData);
    return resultColor;
}
Or should I run this on a background queue?
Thanks!
While the image is rather large (32 MB), the array will be much larger, since it is an array of objects: there will be 8M UIColor objects. Just the array of pointers to the UIColor objects on a 64-bit device will take 64 MB, and then add in 8M times the size of a UIColor object.
You need to find another way of handling this.
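One possible alternative (a sketch, not from the original answer): keep the raw RGBA bytes in a single NSData instead of materializing one UIColor per pixel, and decode a color only on demand; 8M pixels then cost roughly 32 MB total. The variable names reuse those from the question's code, and x/y stand for whatever pixel you later want to read:
// Wrap the raw buffer; NSData takes ownership and will free() it,
// so do not also call free(rawData) yourself in this variant
NSData *pixels = [NSData dataWithBytesNoCopy:rawData
                                      length:height * width * 4
                                freeWhenDone:YES];

// Later, decode the color at (x, y) on demand:
const unsigned char *p = pixels.bytes;
NSUInteger byteIndex = (bytesPerRow * y) + x * bytesPerPixel;
UIColor *color = [UIColor colorWithRed:p[byteIndex]     / 255.0
                                 green:p[byteIndex + 1] / 255.0
                                  blue:p[byteIndex + 2] / 255.0
                                 alpha:p[byteIndex + 3] / 255.0];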

Finding non transparent filled CGRect in images on iOS

I have some images in my iOS application which are partially transparent (PNG format).
Can I find the CGRect areas of the non-transparent parts of an image?
I am not aware of any function that would give you this out of the box.
But you can write your own function: all you need to do is get the colors of the pixels one by one and figure out whether they make up a rect.
To get the pixel data you can use the following code.
CGImageRef image = [myUIImage CGImage];
NSUInteger width = CGImageGetWidth(image);
NSUInteger height = CGImageGetHeight(image);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = malloc(height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
CGContextRelease(context);
// Now rawData contains the image data in the RGBA8888 pixel format.
// xx and yy are the coordinates of the pixel you want to inspect.
NSUInteger byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
unsigned char red = rawData[byteIndex];
unsigned char green = rawData[byteIndex + 1];
unsigned char blue = rawData[byteIndex + 2];
unsigned char alpha = rawData[byteIndex + 3];
This was originally posted at this question.
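Building on that, a sketch (my addition, not part of the original answer) of computing the bounding CGRect of all non-transparent pixels from the rawData buffer above:
NSUInteger minX = width, minY = height, maxX = 0, maxY = 0;
BOOL found = NO;
for (NSUInteger yy = 0; yy < height; yy++) {
    for (NSUInteger xx = 0; xx < width; xx++) {
        NSUInteger byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
        if (rawData[byteIndex + 3] != 0) { // non-transparent pixel
            found = YES;
            if (xx < minX) minX = xx;
            if (xx > maxX) maxX = xx;
            if (yy < minY) minY = yy;
            if (yy > maxY) maxY = yy;
        }
    }
}
CGRect bounds = found
    ? CGRectMake(minX, minY, maxX - minX + 1, maxY - minY + 1)
    : CGRectNull;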

iOS find subimage in a larger image

What is the best way to find the coordinates of subimages in a larger image? The subimage is very simple and always the same. For example, how do I find the coordinates of all black squares in the image below:
I think the best way here might be to write a function/UIImage category to check the color at a pixel in the image. Then (if you know for a fact the images are squares), you can check the color of each pixel, moving down diagonally, until one is a different color (then you have the location and size of your square).
One working implementation I found for checking the color of a pixel is in the open source component OBShapedButton.
It is a UIImage category.
Code:
- (UIColor *)colorAtPixel:(CGPoint)point {
    // Cancel if point is outside image coordinates
    if (!CGRectContainsPoint(CGRectMake(0.0f, 0.0f, self.size.width, self.size.height), point)) {
        return nil;
    }
    // Create a 1x1 pixel byte array and bitmap context to draw the pixel into.
    // Reference: http://stackoverflow.com/questions/1042830/retrieving-a-pixel-alpha-value-for-a-uiimage
    NSInteger pointX = trunc(point.x);
    NSInteger pointY = trunc(point.y);
    CGImageRef cgImage = self.CGImage;
    NSUInteger width = self.size.width;
    NSUInteger height = self.size.height;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    int bytesPerPixel = 4;
    int bytesPerRow = bytesPerPixel * 1;
    NSUInteger bitsPerComponent = 8;
    unsigned char pixelData[4] = { 0, 0, 0, 0 };
    CGContextRef context = CGBitmapContextCreate(pixelData,
                                                 1,
                                                 1,
                                                 bitsPerComponent,
                                                 bytesPerRow,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextSetBlendMode(context, kCGBlendModeCopy);
    // Draw the pixel we are interested in onto the bitmap context
    CGContextTranslateCTM(context, -pointX, pointY-(CGFloat)height);
    CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, (CGFloat)width, (CGFloat)height), cgImage);
    CGContextRelease(context);
    // Convert color values [0..255] to floats [0.0..1.0]
    CGFloat red = (CGFloat)pixelData[0] / 255.0f;
    CGFloat green = (CGFloat)pixelData[1] / 255.0f;
    CGFloat blue = (CGFloat)pixelData[2] / 255.0f;
    CGFloat alpha = (CGFloat)pixelData[3] / 255.0f;
    return [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
}
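Building on that category, here is a sketch of the diagonal-scan idea described above (my own illustration; firstBlackSquareInImage: is a hypothetical helper, not part of OBShapedButton):
// Scan row by row for a black pixel, then walk diagonally until the
// color changes to measure the square's size.
- (CGRect)firstBlackSquareInImage:(UIImage *)image {
    for (NSUInteger y = 0; y < (NSUInteger)image.size.height; y++) {
        for (NSUInteger x = 0; x < (NSUInteger)image.size.width; x++) {
            UIColor *color = [image colorAtPixel:CGPointMake(x, y)];
            CGFloat r, g, b, a;
            if (color && [color getRed:&r green:&g blue:&b alpha:&a]
                && r == 0 && g == 0 && b == 0 && a == 1) {
                NSUInteger size = 1;
                while ([[image colorAtPixel:CGPointMake(x + size, y + size)]
                           isEqual:color]) {
                    size++;
                }
                return CGRectMake(x, y, size, size);
            }
        }
    }
    return CGRectNull;
}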
