Inconsistencies in colors when drawing - iOS

I have a UIImageView and I draw to it using [UIColor orangeColor]. Now I have a function that is supposed to detect the color of the pixel that was tapped.
R: 1.000000 G: 0.501961 B: 0.000000
That's the RGB value I receive when attempting to detect the pixel color for orangeColor.
It should be:
R: 1.000000 G: 0.5 B: 0.000000
Here's my function:
- (UIColor *)colorAtPixel:(CGPoint)point {
    // Cancel if point is outside image coordinates
    if (!CGRectContainsPoint(CGRectMake(0.0f, 0.0f, _overlay_imageView.frame.size.width, _overlay_imageView.frame.size.height), point)) {
        return nil;
    }

    // Create a 1x1 pixel byte array and bitmap context to draw the pixel into.
    // Reference: http://stackoverflow.com/questions/1042830/retrieving-a-pixel-alpha-value-for-a-uiimage
    NSInteger pointX = trunc(point.x);
    NSInteger pointY = trunc(point.y);
    CGImageRef cgImage = _overlay_imageView.image.CGImage;
    NSUInteger width = CGImageGetWidth(cgImage);
    NSUInteger height = CGImageGetHeight(cgImage);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    int bytesPerPixel = 4;
    int bytesPerRow = bytesPerPixel * 1;
    NSUInteger bitsPerComponent = 8;
    unsigned char pixelData[4] = { 0, 0, 0, 0 };
    CGContextRef context = CGBitmapContextCreate(pixelData,
                                                 1,
                                                 1,
                                                 bitsPerComponent,
                                                 bytesPerRow,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextSetBlendMode(context, kCGBlendModeCopy);

    // Draw the pixel we are interested in onto the bitmap context
    CGContextTranslateCTM(context, -pointX, -pointY);
    CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, (CGFloat)width, (CGFloat)height), cgImage);
    CGContextRelease(context);

    // Convert color values [0..255] to floats [0.0..1.0]
    CGFloat red   = (CGFloat)pixelData[0] / 255.0f;
    CGFloat green = (CGFloat)pixelData[1] / 255.0f;
    CGFloat blue  = (CGFloat)pixelData[2] / 255.0f;
    CGFloat alpha = (CGFloat)pixelData[3] / 255.0f;
    return [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
}
Any ideas?
I should mention that my UIImageView has a clear background and sits on top of a black canvas. Could that be the issue?

There's nothing wrong with your function. This is a result of floating-point math. Half of 255 (the maximum value of an unsigned byte) is either 127/255.0 or 128/255.0, depending on how you round. Neither of those is 0.5; they are 0.498039215686275 and 0.501960784313725, respectively.
EDIT: I should add that the colors in the CGImage are stored as bytes, not floats. So when you create your orange with a float in UIColor, it gets quantized to R: 255, G: 128, B: 0, A: 255. When you read this back as a float you get R: 1.0, G: 0.501961, B: 0.0, A: 1.0.
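To see the quantization concretely, you can round-trip a component by hand. This is just a sketch of the math above (whether 0.5 lands on byte 127 or 128 depends on how the rasterizer rounds; lround here assumes round-half-up):

CGFloat requested = 0.5;
uint8_t stored = (uint8_t)lround(requested * 255.0); // 128
CGFloat readBack = stored / 255.0;                   // 0.501961...
NSLog(@"requested %f -> stored %u -> read back %f", requested, (unsigned)stored, readBack);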

Related

iOS using UIGraphicsGetImageFromCurrentImageContext to detect certain pixels on screen

I have a UIView where the user can draw various UIBezierPaths.
I need to analyze the drawn BezierPaths to detect certain patterns. I have not found any solution for converting UIBezierPaths to a list of coordinates/points; is this really not doable? That seems strange, as this data must be stored and used somehow to draw the actual paths.
So to bypass this problem I decided to draw the BezierPath with a width of 1px:
[path setLineWidth:1];
And convert my UIView to a UIImage:
UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0);
[self.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext(); // balance the Begin call above
Then I can identify pixels by getting the color at a certain pixel position in the image:
- (UIColor *)colorAtPixel:(CGPoint)point {
    CGImageRef imageRef = [self CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    int bytesPerPixel = 4;
    long bytesPerRow = bytesPerPixel * width;
    int bitsPerComponent = 8;
    unsigned char *rawData = (unsigned char *)calloc(height * width * bytesPerPixel, sizeof(unsigned char));
    CGContextRef context = CGBitmapContextCreate(rawData,
                                                 width,
                                                 height,
                                                 bitsPerComponent,
                                                 bytesPerRow,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    // Now rawData contains the image data in the RGBA8888 pixel format.
    long byteIndex = (bytesPerRow * (long)point.y) + (long)point.x * bytesPerPixel;
    CGFloat red   = (rawData[byteIndex]     * 1.0) / 255.0;
    CGFloat green = (rawData[byteIndex + 1] * 1.0) / 255.0;
    CGFloat blue  = (rawData[byteIndex + 2] * 1.0) / 255.0;
    CGFloat alpha = (rawData[byteIndex + 3] * 1.0) / 255.0;
    UIColor *color = [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
    free(rawData);
    return color;
}
Now my issue is that the generated image is blurry: if I draw a straight 1px BezierPath line and convert it to a UIImage, the line ends up about 3px wide because of the blur.
How can I solve this? Is there really no way to convert BezierPaths to a list of coordinates?
This is due to anti-aliased rendering.
See this question on SO for information on how to turn it off.
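In short, Core Graphics lets you disable anti-aliasing on the context you draw into. A minimal sketch, assuming you stroke the path into your own image context rather than going through renderInContext::

CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSetShouldAntialias(ctx, NO);
CGContextSetAllowsAntialiasing(ctx, NO);
[path stroke]; // a 1px line now rasterizes without soft edges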
I believe you are better off rendering the Bézier curve yourself, though. It will be faster, with a very small memory footprint.
You may take a look at this answer from Moritz about using CGPathApply and CGPathApplierFunction.
For each element in the specified path, Quartz calls the applier function, which can examine (but not modify) the element.
That could give you access to the points stored in a UIBezierPath, along the lines of the sketch below.
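Here is a minimal sketch of such an applier. The function and array names are illustrative, and curve elements are reduced to their end points, so this recovers the anchor points you added, not every rasterized coordinate:

static void pathApplier(void *info, const CGPathElement *element) {
    NSMutableArray *points = (__bridge NSMutableArray *)info;
    switch (element->type) {
        case kCGPathElementMoveToPoint:
        case kCGPathElementAddLineToPoint:
            [points addObject:[NSValue valueWithCGPoint:element->points[0]]];
            break;
        case kCGPathElementAddQuadCurveToPoint:
            // points[0] is the control point, points[1] the end point
            [points addObject:[NSValue valueWithCGPoint:element->points[1]]];
            break;
        case kCGPathElementAddCurveToPoint:
            // points[0] and points[1] are control points, points[2] the end point
            [points addObject:[NSValue valueWithCGPoint:element->points[2]]];
            break;
        case kCGPathElementCloseSubpath:
            break;
    }
}

// Usage: collect the element end points of a UIBezierPath
NSMutableArray *points = [NSMutableArray array];
CGPathApply(path.CGPath, (__bridge void *)points, pathApplier);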

How can I check the color of a single pixel in a CGContext?

I want to check the color of a single pixel in a CGContext. I have tried using the guide at http://www.markj.net/iphone-uiimage-pixel-color/, but with no success. I don't want to get a context from an image, because I already have a context that is not visible on screen. Is it possible to find a single pixel's color in a CGContext, and if so, how? Google has given me no answer. Below you can see how I have declared my context.
- (BOOL)initContext:(CGSize)size {
    float scaleFactor = [[UIScreen mainScreen] scale];
    // scaleFactor = 1; non-retina
    // scaleFactor = 2; retina
    int bitmapByteCount;
    int bitmapBytesPerRow;

    // Declare the number of bytes per row. Each pixel in the bitmap in this
    // example is represented by 4 bytes; 8 bits each of red, green, blue, and
    // alpha.
    bitmapBytesPerRow = (size.width * 4);
    bitmapByteCount = (bitmapBytesPerRow * size.height) * scaleFactor * scaleFactor;

    // Allocate memory for image data. This is the destination in memory
    // where any drawing to the bitmap context will be rendered.
    cacheBitmap = malloc(bitmapByteCount);
    if (cacheBitmap == NULL) {
        return NO;
    }

    CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrderDefault;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    cacheContext = CGBitmapContextCreate(cacheBitmap, size.width * scaleFactor, size.height * scaleFactor, 8, bitmapBytesPerRow * scaleFactor, colorSpace, bitmapInfo);
    CGContextScaleCTM(cacheContext, scaleFactor, scaleFactor);
    CGColorSpaceRelease(colorSpace);
    CGContextSetRGBFillColor(cacheContext, 0, 0, 0, 0.0);
    CGContextFillRect(cacheContext, (CGRect){CGPointZero, CGSizeMake(size.width * scaleFactor, size.height * scaleFactor)});
    return YES;
}
Basically I want a method where I can input a point and get the color at that point from cacheContext.
If cacheBitmap is the pointer to your pixels, then a pixel at (x,y) can be found using the following:
uint8_t *pixel = (uint8_t *)cacheBitmap + (y * bitmapRowBytes) + (x * 4);
uint8_t alpha = pixel[0];
uint8_t red   = pixel[1];
uint8_t green = pixel[2];
uint8_t blue  = pixel[3];
However, your bitmapBytesPerRow is calculated incorrectly. If you're allocating a bitmap that is width * scaleFactor by height * scaleFactor, then your bitmapRowBytes needs to account for that. It should be:
bitmapRowBytes = size.width * 4 * scaleFactor;
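Putting the two pieces together, a minimal sketch of the method the question asks for might look like this. It assumes the cacheBitmap/cacheContext ivars from initContext: above; the method name and the point-to-pixel scaling are illustrative:

- (UIColor *)colorAtPoint:(CGPoint)point {
    float scaleFactor = [[UIScreen mainScreen] scale];
    size_t bitmapRowBytes = CGBitmapContextGetBytesPerRow(cacheContext);
    NSInteger x = point.x * scaleFactor; // convert from points to bitmap pixels
    NSInteger y = point.y * scaleFactor;
    uint8_t *pixel = (uint8_t *)cacheBitmap + (y * bitmapRowBytes) + (x * 4);
    // The context stores premultiplied ARGB, so for translucent pixels the
    // RGB components would need to be divided by alpha to unpremultiply.
    return [UIColor colorWithRed:pixel[1] / 255.0
                           green:pixel[2] / 255.0
                            blue:pixel[3] / 255.0
                           alpha:pixel[0] / 255.0];
}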

Black border in bitmap image in iOS

I am using the following code to create a bitmap image.
UIGraphicsBeginImageContextWithOptions(CGSizeMake(targetWidth, targetHeight), NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextClearRect(context, CGRectMake(0, 0, targetWidth, targetHeight));
CGFloat red, green, blue, alpha;
for (int Row = 1; Row <= targetHeight; Row++)
{
    if (Row <= originalHeight) {
        for (int Col = 0; Col < targetWidth; Col++)
        {
            if (Col < originalWidth) {
                UIColor *color = [originalImage colorAtPixel:CGPointMake(Col, Row) :originalImage];
                [color getRed:&red green:&green blue:&blue alpha:&alpha];
                if (red == 0.0 && green == 0.0 && blue == 0.0 && alpha == 0.0) {
                    CGContextSetRGBFillColor(context, 0, 0, 0, 0); // set transparent pixels
                }
                else {
                    CGContextSetRGBFillColor(context, red, green, blue, 1);
                }
                CGContextFillRect(context, CGRectMake(Col, Row, 1, 1));
            }
        }
    }
}
finalImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// The colorAtPixel: method below is adapted from another answer
- (UIColor *)colorAtPixel:(CGPoint)point :(UIImage *)image {
    // Cancel if point is outside image coordinates
    if (!CGRectContainsPoint(CGRectMake(0.0f, 0.0f, image.size.width, image.size.height), point)) {
        return nil;
    }

    NSInteger pointX = trunc(point.x);
    NSInteger pointY = trunc(point.y);
    CGImageRef cgImage = image.CGImage;
    NSUInteger width = image.size.width;
    NSUInteger height = image.size.height;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    int bytesPerPixel = 4;
    int bytesPerRow = bytesPerPixel * 1;
    NSUInteger bitsPerComponent = 8;
    unsigned char pixelData[4] = { 0, 0, 0, 0 };
    CGContextRef context = CGBitmapContextCreate(pixelData,
                                                 1,
                                                 1,
                                                 bitsPerComponent,
                                                 bytesPerRow,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextSetBlendMode(context, kCGBlendModeCopy);

    // Draw the pixel we are interested in onto the bitmap context
    CGContextTranslateCTM(context, -pointX, pointY - (CGFloat)height);
    CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, (CGFloat)width, (CGFloat)height), cgImage);
    CGContextRelease(context);

    // Convert color values [0..255] to floats [0.0..1.0]
    CGFloat red   = (CGFloat)pixelData[0] / 255.0f;
    CGFloat green = (CGFloat)pixelData[1] / 255.0f;
    CGFloat blue  = (CGFloat)pixelData[2] / 255.0f;
    CGFloat alpha = (CGFloat)pixelData[3] / 255.0f;
    return [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
}
I want the image to look transparent. I change the black background of the context to transparent using CGContextSetRGBFillColor(context, 0, 0, 0, 0), which works fine. But it still leaves a black border on the image, and I want to remove it.
How can this be achieved? Any pointers?
One way would be to use a soft threshold instead of a hard limit: check whether the luminance of a given pixel is below some small but non-zero amount, and if so, set the pixel's alpha to an interpolated value. For example:
const double epsilon = 0.1; // or some other small value
double luminance = (red * 0.2126) + (green * 0.7152) + (blue * 0.0722);
if (luminance < epsilon)
{
    CGContextSetRGBFillColor(context, red, green, blue, luminance / epsilon);
}
else
{
    CGContextSetRGBFillColor(context, red, green, blue, 1.0);
}
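(For reference, the weights 0.2126, 0.7152, and 0.0722 are the standard Rec. 709 luma coefficients, so luminance here approximates perceived brightness.)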

What is the most efficient way to grab a rectangle of pixels in ios?

I have the following code that is attempting to test each pixel in a rectangle of pixels
px, py = Location of touch
dx, dy = Size of rectangle to be tested
UIImage *myImage; // image from which pixels are read
int ux = px + dx;
int uy = py + dy;
for (int x = (px - dx); x <= ux; ++x)
{
    for (int y = (py - dy); y <= uy; ++y)
    {
        unsigned char pixelData[] = { 0, 0, 0, 0 };
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(pixelData, 1, 1, 8, 4, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
        CGImageRef cgimage = myImage.CGImage;
        int imageWidth = CGImageGetWidth(cgimage);
        int imageHeight = CGImageGetHeight(cgimage);
        CGContextDrawImage(context, CGRectMake(-x, y - imageHeight, imageWidth, imageHeight), cgimage);
        CGColorSpaceRelease(colorSpace);
        CGContextRelease(context);
        // Here I have the latest pixel, with RGB values stored in pixelData, that I can test
    }
}
The code in the inner loop is grabbing the pixel at location x, y. Is there a more efficient way to grab the entire rectangle of pixels from (px-dx, py-dy) to (px+dx, py+dy) from the UIImage (myImage)?
The answer here is simple. The problem is that you're creating a whole new bitmap context for every single pixel you scan. Creating a bitmap context is expensive, and doing it thousands of times in succession is really bad for performance.
Just create a single bitmap context (backed by your own data buffer) up front, and then scan through that data. It'll be way faster.
UIImage *myImage; // image from which pixels are read
CGSize imageSize = [myImage size];
// Note: [myImage size] is in points; for Retina images the backing CGImage
// has more pixels, so CGImageGetWidth/CGImageGetHeight are the safer dimensions.
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * imageSize.width;
NSUInteger bitsPerComponent = 8;
unsigned char *pixelData = (unsigned char *)calloc(imageSize.height * imageSize.width * bytesPerPixel, sizeof(unsigned char));
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pixelData, imageSize.width, imageSize.height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
CGContextDrawImage(context, CGRectMake(0, 0, imageSize.width, imageSize.height), myImage.CGImage);

// Tweak these two for-loops to read whichever section of pixels you want.
for (NSInteger x = 0; x < imageSize.width; x++) {
    for (NSInteger y = 0; y < imageSize.height; y++) {
        NSUInteger pixelIndex = (bytesPerRow * y) + x * bytesPerPixel;
        CGFloat red   = (pixelData[pixelIndex]     * 1.0) / 255.0;
        CGFloat green = (pixelData[pixelIndex + 1] * 1.0) / 255.0;
        CGFloat blue  = (pixelData[pixelIndex + 2] * 1.0) / 255.0;
        CGFloat alpha = (pixelData[pixelIndex + 3] * 1.0) / 255.0;
        UIColor *yourPixelColor = [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
    }
}

// be a good memory citizen
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
free(pixelData);

iOS find subimage in a larger image

What is the best way to find the coordinates of subimages in a larger image? The subimage is very simple and always the same. For example, how do I find the coordinates of all black squares in the image below:
I think the best way here might be to write a function/UIImage category to check the color at a pixel in the image. Then (if you know for a fact the images are squares) you can check the color of each pixel, moving down diagonally, until you hit one that is a different color; at that point you have the location and size of your square. There's a rough sketch of this scan after the code below.
One working implementation I found for checking the color of a pixel is in the open source component OBShapedButton.
It is a UIImage category.
Code:
- (UIColor *)colorAtPixel:(CGPoint)point {
    // Cancel if point is outside image coordinates
    if (!CGRectContainsPoint(CGRectMake(0.0f, 0.0f, self.size.width, self.size.height), point)) {
        return nil;
    }

    // Create a 1x1 pixel byte array and bitmap context to draw the pixel into.
    // Reference: http://stackoverflow.com/questions/1042830/retrieving-a-pixel-alpha-value-for-a-uiimage
    NSInteger pointX = trunc(point.x);
    NSInteger pointY = trunc(point.y);
    CGImageRef cgImage = self.CGImage;
    NSUInteger width = self.size.width;
    NSUInteger height = self.size.height;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    int bytesPerPixel = 4;
    int bytesPerRow = bytesPerPixel * 1;
    NSUInteger bitsPerComponent = 8;
    unsigned char pixelData[4] = { 0, 0, 0, 0 };
    CGContextRef context = CGBitmapContextCreate(pixelData,
                                                 1,
                                                 1,
                                                 bitsPerComponent,
                                                 bytesPerRow,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextSetBlendMode(context, kCGBlendModeCopy);

    // Draw the pixel we are interested in onto the bitmap context
    CGContextTranslateCTM(context, -pointX, pointY - (CGFloat)height);
    CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, (CGFloat)width, (CGFloat)height), cgImage);
    CGContextRelease(context);

    // Convert color values [0..255] to floats [0.0..1.0]
    CGFloat red   = (CGFloat)pixelData[0] / 255.0f;
    CGFloat green = (CGFloat)pixelData[1] / 255.0f;
    CGFloat blue  = (CGFloat)pixelData[2] / 255.0f;
    CGFloat alpha = (CGFloat)pixelData[3] / 255.0f;
    return [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
}
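Building on that category, here is a rough sketch of the diagonal scan described above. The helper names are illustrative, the black test tolerates slight noise, and a real implementation should skip pixels belonging to squares it has already found:

static BOOL isBlack(UIColor *color) {
    CGFloat r = 0, g = 0, b = 0, a = 0;
    [color getRed:&r green:&g blue:&b alpha:&a]; // color may be nil out of bounds
    return a > 0.9 && r < 0.1 && g < 0.1 && b < 0.1;
}

// Scan row-major; the first black pixel of a square is its top-left corner
for (NSUInteger y = 0; y < (NSUInteger)image.size.height; y++) {
    for (NSUInteger x = 0; x < (NSUInteger)image.size.width; x++) {
        if (!isBlack([image colorAtPixel:CGPointMake(x, y)])) continue;
        // Walk down the diagonal until the color changes to size the square
        NSUInteger side = 1;
        while (isBlack([image colorAtPixel:CGPointMake(x + side, y + side)])) {
            side++;
        }
        NSLog(@"square at (%lu, %lu), side %lu", (unsigned long)x, (unsigned long)y, (unsigned long)side);
    }
}

Note that colorAtPixel: re-renders the image on every call, so for anything but small images you would want to read all the pixels into one buffer first, as the answer to the previous question demonstrates.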
