iOS -- detect the color of a pixel?

For example, suppose I want to detect the color of the pixel with screen coordinates (100, 200). Is there a way to do this?
EDIT -- I'm not worried about retina display issues for now.

This may not be the most direct route, but you could:
Use UIGraphicsBeginImageContextWithOptions to grab the screen (see the Apple Q&A QA1703 - "Screen Capture in UIKit Applications").
Then use CGImageCreateWithImageInRect to grab the portion of the resultant image you require.
Finally analyse the resultant image. It gets complicated at this point, but thankfully there's an existing question that should show you the way: How to get the RGB values for a pixel on an image on the iphone
Alternatively, there's the following blog article that has accompanying code: What Color is My Pixel? Image based color picker on iPhone

Here is how to do it:
CGContextRef ctx;
CGImageRef imageRef = [image CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = malloc(height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                             bitsPerComponent, bytesPerRow, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);

// Get the pixel at (xCoor, yCoor)
NSUInteger index = 4 * ((width * (NSUInteger)round(yCoor)) + (NSUInteger)round(xCoor));
int R = rawData[index];
int G = rawData[index + 1];
int B = rawData[index + 2];
NSLog(@"%d %d %d", R, G, B);

// If you want to alter the pixels
NSUInteger byteIndex = 0;
for (NSUInteger ii = 0; ii < width * height; ++ii)
{
    rawData[byteIndex]     = (unsigned char)newPixelValue;
    rawData[byteIndex + 1] = (unsigned char)newPixelValue;
    rawData[byteIndex + 2] = (unsigned char)newPixelValue;
    byteIndex += 4;
}

// Rebuild an image from the modified buffer. Reuse our own geometry and
// color space rather than CGImageGetBytesPerRow()/CGImageGetColorSpace(),
// which may not match the buffer we actually filled.
ctx = CGBitmapContextCreate(rawData, width, height,
                            bitsPerComponent, bytesPerRow, colorSpace,
                            kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
imageRef = CGBitmapContextCreateImage(ctx);
UIImage *rawImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
CGContextRelease(ctx);
image = rawImage;
free(rawData);
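The indexing arithmetic in that snippet is the part that most often goes wrong. For a tightly packed RGBA8888 buffer, the byte offset of pixel (x, y) is 4 * (y * width + x), with red at that offset and green, blue, alpha at the next three bytes. A minimal C sketch of that formula (the function name is mine, for illustration):

```c
#include <stddef.h>

/* Byte offset of pixel (x, y) in a tightly packed RGBA8888 buffer
   whose rows are `width` pixels wide (bytesPerRow == 4 * width). */
size_t rgba_index(size_t x, size_t y, size_t width) {
    return 4 * (y * width + x);
}
```

Red is then buffer[i], green buffer[i+1], blue buffer[i+2], and alpha buffer[i+3]. If the context was created with a row padding (bytesPerRow larger than 4 * width), substitute bytesPerRow * y + 4 * x instead.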

Try this one, where "self.m_imgvwSource" is the UIView/UIImageView you want to sample:
- (UIColor *)GetCurrentPixelColorAtPoint:(CGPoint)point
{
    unsigned char pixel[4] = {0};
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 4, colorSpace,
                                                 kCGBitmapAlphaInfoMask & kCGImageAlphaPremultipliedLast);
    // Shift the layer so the requested point lands on our 1x1 context
    CGContextTranslateCTM(context, -point.x, -point.y);
    [self.m_imgvwSource.layer renderInContext:context];
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    NSLog(@"pixel: %d %d %d %d", pixel[0], pixel[1], pixel[2], pixel[3]);
    UIColor *color = [UIColor colorWithRed:pixel[0] / 255.0
                                     green:pixel[1] / 255.0
                                      blue:pixel[2] / 255.0
                                     alpha:pixel[3] / 255.0];
    return color;
}
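One caveat with both snippets above: the bitmap contexts use premultiplied alpha, so the R, G, B bytes they produce are already multiplied by the alpha value. For a partially transparent pixel, dividing each channel by the alpha recovers the straight color. A rough C sketch of that conversion (my own helper, not part of either answer's code):

```c
/* Convert one premultiplied 8-bit channel back to its straight value.
   `alpha` is the pixel's 8-bit alpha byte. */
unsigned char unpremultiply(unsigned char channel, unsigned char alpha) {
    if (alpha == 0) return 0;  /* fully transparent: color is undefined */
    unsigned int v = (channel * 255u + alpha / 2u) / alpha;  /* rounded divide */
    return v > 255u ? 255 : (unsigned char)v;
}
```

For fully opaque pixels (alpha 255) this is a no-op, which is why the issue is easy to miss until a translucent image comes along.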

Related

how to change color of part of UIImage?

I have an image which is in grayscale, and I have managed to apply its original color to some parts of that image. Now I want to change the color of the parts to which I applied the original color.
I have this:
Original Image
I want to convert it to this:
Result Image
CGImageRef imageRef = [image CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
NSUInteger bytesCount = height * width * bytesPerPixel;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = (unsigned char *)calloc(bytesCount, sizeof(unsigned char));
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                             bitsPerComponent, bytesPerRow, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
unsigned char *outputData = (unsigned char *)calloc(bytesCount, sizeof(unsigned char));
NSUInteger byteIndex = 0;
for (NSUInteger i = 0; i < bytesCount / bytesPerPixel; ++i) {
    CGFloat red = (CGFloat)rawData[byteIndex];
    CGFloat green = (CGFloat)rawData[byteIndex + 1];
    CGFloat blue = (CGFloat)rawData[byteIndex + 2];
    CGFloat alpha = (CGFloat)rawData[byteIndex + 3];
    // Chained comparison doesn't do what it looks like in C; compare pairwise
    BOOL grayscale = (red == green && green == blue);
    if (!grayscale) {
        // test for near values
        CGFloat diff = MAX(ABS(red - green), MAX(ABS(red - blue), ABS(green - blue)));
        static CGFloat allowedDifference = 100; // in range of 0-255
        if (diff > allowedDifference) {
            // CGFloat redTemp = 236;
            // red = green;
            // green = redTemp;
            red = 236.0;
            green = 17.0;
            blue = 17.0;
        }
    }
    outputData[byteIndex] = red;
    outputData[byteIndex + 1] = green;
    outputData[byteIndex + 2] = blue;
    outputData[byteIndex + 3] = alpha;
    byteIndex += bytesPerPixel;
}
free(rawData);
// Copy the bytes into a CFData so outputData can be freed safely;
// CGDataProviderCreateWithData does not copy, so freeing the raw buffer
// before the image is drawn is a use-after-free.
CFDataRef outputCFData = CFDataCreate(NULL, outputData, bytesCount);
CGDataProviderRef outputDataProvider = CGDataProviderCreateWithCFData(outputCFData);
CFRelease(outputCFData);
free(outputData);
CGImageRef outputImageRef = CGImageCreate(width,
                                          height,
                                          bitsPerComponent,
                                          bytesPerPixel * 8,
                                          bytesPerRow,
                                          colorSpace,
                                          kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big,
                                          outputDataProvider,
                                          NULL, NO,
                                          kCGRenderingIntentDefault);
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(outputDataProvider);
UIImage *outputImage = [UIImage imageWithCGImage:outputImageRef];
CGImageRelease(outputImageRef);
I tried a bitmap context and everything else, but I am not getting the desired result.
Does anyone have an idea?
You can try grabbing the pixel data from the image by using CGBitmapContextCreate to make a bitmap context, then drawing the image into it via CGContextDrawImage.
You will end up with a one-dimensional array of bytes.
It is laid out like this: [r1, g1, b1, a1, r2, g2, b2, a2, ...], where r, g, b, a are the color components and 1, 2, ... are pixel numbers.
You can then iterate over the array and compare each pixel's color components. Grayscale pixels should be skipped: their R, G, and B values are theoretically equal, but in practice you should tolerate small differences of a few units either way.
If a pixel is not grayscale, just swap its red and green bytes.
That should be the way to go.
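The "near values" test described above reduces to the maximum pairwise difference between the three channels. A minimal C sketch of that predicate (function name and threshold are illustrative):

```c
/* Returns 1 if the pixel's channels differ enough to count as "colored",
   0 if it is (near-)grayscale within the given tolerance. */
int is_colored(int r, int g, int b, int tolerance) {
    int rg = r > g ? r - g : g - r;
    int rb = r > b ? r - b : b - r;
    int gb = g > b ? g - b : b - g;
    int maxdiff = rg > rb ? rg : rb;
    if (gb > maxdiff) maxdiff = gb;
    return maxdiff > tolerance;
}
```

With tolerance 50, a pure gray pixel (100, 100, 100) is skipped while a strong red (236, 17, 17) is flagged for the swap.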
Updated with example:
UIImage *image = [UIImage imageNamed:@"qfjsc.png"];
CGImageRef imageRef = [image CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
NSUInteger bytesCount = height * width * bytesPerPixel;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = (unsigned char *)calloc(bytesCount, sizeof(unsigned char));
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                             bitsPerComponent, bytesPerRow, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
unsigned char *outputData = (unsigned char *)calloc(bytesCount, sizeof(unsigned char));
NSUInteger byteIndex = 0;
for (NSUInteger i = 0; i < bytesCount / bytesPerPixel; ++i) {
    CGFloat red = (CGFloat)rawData[byteIndex];
    CGFloat green = (CGFloat)rawData[byteIndex + 1];
    CGFloat blue = (CGFloat)rawData[byteIndex + 2];
    CGFloat alpha = (CGFloat)rawData[byteIndex + 3];
    // Chained comparison doesn't do what it looks like in C; compare pairwise
    BOOL grayscale = (red == green && green == blue);
    if (!grayscale) {
        // test for near values
        CGFloat diff = MAX(ABS(red - green), MAX(ABS(red - blue), ABS(green - blue)));
        static CGFloat allowedDifference = 50.0; // in range of 0-255
        if (diff > allowedDifference) {
            // Swap the red and green components
            CGFloat redTemp = red;
            red = green;
            green = redTemp;
        }
    }
    outputData[byteIndex] = red;
    outputData[byteIndex + 1] = green;
    outputData[byteIndex + 2] = blue;
    outputData[byteIndex + 3] = alpha;
    byteIndex += bytesPerPixel;
}
free(rawData);
// Copy the bytes into a CFData so outputData can be freed safely;
// CGDataProviderCreateWithData does not copy the buffer it is given.
CFDataRef outputCFData = CFDataCreate(NULL, outputData, bytesCount);
CGDataProviderRef outputDataProvider = CGDataProviderCreateWithCFData(outputCFData);
CFRelease(outputCFData);
free(outputData);
CGImageRef outputImageRef = CGImageCreate(width,
                                          height,
                                          bitsPerComponent,
                                          bytesPerPixel * 8,
                                          bytesPerRow,
                                          colorSpace,
                                          kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big,
                                          outputDataProvider,
                                          NULL, NO,
                                          kCGRenderingIntentDefault);
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(outputDataProvider);
UIImage *outputImage = [UIImage imageWithCGImage:outputImageRef];
CGImageRelease(outputImageRef);
Note the static allowedDifference variable. It lets you skip pixels that are stored in an RGB color space and are not exactly gray, but are close enough to grayscale by nature.
Here are examples:
Allowed difference = 0
Allowed difference = 50

Change color of some specific pixels in UIImage

I have a simple UIImageView with some image of person. Now I want to change color of some of the pixels based on their location or some frame value. How this can be done?
Any help...
For a long-term implementation you should take a look at the Core Image framework tutorial.
For a one-time case you can refer to the already existing answer at iPhone : How to change color of particular pixel of a UIImage?
I've found a nice non-ARC solution that changes a picture's color across the entire frame; you can try to adapt it so it applies only to certain pixels:
- (void)grayscale:(UIImage *)image {
    CGContextRef ctx;
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = malloc(height * width * 4);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);
    // Now rawData contains the image data in the RGBA8888 pixel format.
    NSUInteger byteIndex = 0;
    for (NSUInteger ii = 0; ii < width * height; ++ii)
    {
        // Replace each pixel with its luminance (Rec. 601 weights);
        // values stay in the 0-255 byte range, alpha is left untouched.
        unsigned char gray = (unsigned char)(0.299 * rawData[byteIndex] +
                                             0.587 * rawData[byteIndex + 1] +
                                             0.114 * rawData[byteIndex + 2]);
        rawData[byteIndex]     = gray;
        rawData[byteIndex + 1] = gray;
        rawData[byteIndex + 2] = gray;
        byteIndex += 4;
    }
    // Rebuild the image from the modified buffer, reusing our own geometry
    // and color space so they match the bytes we actually wrote.
    ctx = CGBitmapContextCreate(rawData, width, height,
                                bitsPerComponent, bytesPerRow, colorSpace,
                                kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    imageRef = CGBitmapContextCreateImage(ctx);
    UIImage *rawImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGContextRelease(ctx);
    self.workingImage = rawImage;
    [self.imageView setImage:self.workingImage];
    free(rawData);
}
Source: http://brandontreb.com/image-manipulation-retrieving-and-updating-pixel-values-for-a-uiimage
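The per-pixel step above collapses each RGB triple to a single gray value using a weighted luminance sum rather than a plain average, since the eye is more sensitive to green than to red or blue. An integer-only C sketch of the same formula (the Rec. 601 weights are a common choice; the linked post may use different ones):

```c
/* Rec. 601 luma: perceptual grayscale value of an 8-bit RGB pixel.
   Integer form of round(0.299*R + 0.587*G + 0.114*B). */
unsigned char luma601(unsigned char r, unsigned char g, unsigned char b) {
    return (unsigned char)((299u * r + 587u * g + 114u * b + 500u) / 1000u);
}
```

Pure white maps to 255, pure black to 0, and a saturated red to 76, noticeably darker than the 85 a naive (R+G+B)/3 average would give.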

How to get pixels of an image which return rgb,hex value? ios

I am new to iOS, so please be gentle...
I have an image view which displays an image selected from the photo library.
I want to pick the color of the pixel I touch on the image, along with its RGB and hex values.
I think you should try this method:
- (UIColor *)colorAtPosition:(CGPoint)position
{
    CGRect sourceRect = CGRectMake(position.x, position.y, 1.f, 1.f);
    CGImageRef imageRef = CGImageCreateWithImageInRect(self.CGImage, sourceRect);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *buffer = malloc(4);
    CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big;
    CGContextRef context = CGBitmapContextCreate(buffer, 1, 1, 8, 4, colorSpace, bitmapInfo);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0.f, 0.f, 1.f, 1.f), imageRef);
    CGImageRelease(imageRef);
    CGContextRelease(context);
    CGFloat r = buffer[0] / 255.f;
    CGFloat g = buffer[1] / 255.f;
    CGFloat b = buffer[2] / 255.f;
    CGFloat a = buffer[3] / 255.f;
    free(buffer);
    return [UIColor colorWithRed:r green:g blue:b alpha:a];
}
or try this one:
http://www.markj.net/iphone-uiimage-pixel-color/
It's a pretty cool approach.

Painting a Gradient inside an Image by tapping in Objective-C?

I have an animal image with a white background; the shape of the animal is a black outline. It is fixed on an image view in my .xib.
Now I would like to paint on the image, but only within a particular closed region.
Suppose the user touches the hand; then only the hand should be filled with the gradient, and the rest of the image should remain the same.
- (UIImage *)imageFromRawData:(unsigned char *)rawData
{
    NSUInteger bitsPerComponent = 8;
    NSUInteger bytesPerPixel = 4;
    // Note: size.width/height are in points; for a Retina-aware version
    // use the CGImage pixel dimensions instead.
    NSUInteger bytesPerRow = bytesPerPixel * self.imageDoodle.image.size.width;
    CGImageRef imageRef = [self.imageDoodle.image CGImage];
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(imageRef);
    CGContextRef context = CGBitmapContextCreate(rawData, self.imageDoodle.image.size.width,
        self.imageDoodle.image.size.height, bitsPerComponent, bytesPerRow, colorSpace,
        kCGImageAlphaPremultipliedLast);
    imageRef = CGBitmapContextCreateImage(context);
    UIImage *rawImage = [UIImage imageWithCGImage:imageRef];
    CGContextRelease(context);
    CGImageRelease(imageRef);
    return rawImage;
}

- (unsigned char *)rawDataFromImage:(UIImage *)image
{
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    NSLog(@"w=%lu,h=%lu", (unsigned long)width, (unsigned long)height);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = malloc(height * width * 4);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent,
        bytesPerRow, colorSpace,
        kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);
    return rawData; // caller owns this buffer and must free() it
}
Where would I need to change my code to support this?
I believe this is possible with UIBezierPath, but I don't know how to implement it in this case.
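One common way to get the "fill only the closed part the user touched" behavior is a flood fill seeded at the tap point, stopping at the dark outline pixels. This is a simplified C sketch over a one-channel mask (0 = fillable, 1 = outline), using an explicit stack instead of recursion; mapping it onto the RGBA bytes returned by rawDataFromImage (e.g. treating "dark" pixels as outline) is the adaptation step, and the function name is mine:

```c
#include <stdlib.h>

/* Flood-fill the 4-connected region of `mask` containing (sx, sy):
   every reachable 0 cell is set to 2. Cells equal to 1 act as the outline. */
void flood_fill(unsigned char *mask, int w, int h, int sx, int sy) {
    if (sx < 0 || sy < 0 || sx >= w || sy >= h || mask[sy * w + sx] != 0) return;
    int *stack = malloc(sizeof(int) * w * h);  /* each cell pushed at most once */
    int top = 0;
    stack[top++] = sy * w + sx;
    mask[sy * w + sx] = 2;
    while (top > 0) {
        int p = stack[--top], x = p % w, y = p / w;
        int nbrs[4][2] = {{x - 1, y}, {x + 1, y}, {x, y - 1}, {x, y + 1}};
        for (int i = 0; i < 4; i++) {
            int nx = nbrs[i][0], ny = nbrs[i][1];
            if (nx >= 0 && ny >= 0 && nx < w && ny < h && mask[ny * w + nx] == 0) {
                mask[ny * w + nx] = 2;  /* mark before pushing to avoid repushes */
                stack[top++] = ny * w + nx;
            }
        }
    }
    free(stack);
}
```

Once the region is marked, you can overwrite only the marked pixels with the gradient colors before rebuilding the image.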

how to change the alpha of a groundoverlay in google maps ios sdk?

I added a ground overlay to a map view, and I found these ways to change the alpha of groundOverlay.icon:
How to set the opacity/alpha of a UIImage?
But it seems to have no effect in the app; I still cannot see the map or other ground overlays behind the image.
Is there a solution to handle this?
+ (UIImage *)setImage:(UIImage *)image withAlpha:(CGFloat)alpha
{
    // Create a pixel buffer in an easy to use format
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    UInt8 *m_PixelBuf = malloc(sizeof(UInt8) * height * width * 4);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(m_PixelBuf, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);
    // Alter the alpha. The buffer is premultiplied, so the R, G, B bytes
    // must be scaled together with the alpha byte; writing only the alpha
    // byte leaves the (already multiplied) color contribution at full
    // strength and the blend looks wrong.
    int length = (int)(height * width * 4);
    for (int i = 0; i < length; i += 4)
    {
        m_PixelBuf[i]     = (UInt8)(m_PixelBuf[i]     * alpha);
        m_PixelBuf[i + 1] = (UInt8)(m_PixelBuf[i + 1] * alpha);
        m_PixelBuf[i + 2] = (UInt8)(m_PixelBuf[i + 2] * alpha);
        m_PixelBuf[i + 3] = (UInt8)(m_PixelBuf[i + 3] * alpha);
    }
    // Create a new image
    CGContextRef ctx = CGBitmapContextCreate(m_PixelBuf, width, height,
                                             bitsPerComponent, bytesPerRow, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGImageRef newImgRef = CGBitmapContextCreateImage(ctx);
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(ctx);
    free(m_PixelBuf);
    UIImage *finalImage = [UIImage imageWithCGImage:newImgRef];
    CGImageRelease(newImgRef);
    return finalImage;
}
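The per-pixel arithmetic in that loop can be isolated into a tiny standalone sketch. In premultiplied storage, each color byte already equals straightColor * alpha, so reducing a pixel's opacity means scaling all four bytes by the same factor (the helper name is mine, for illustration):

```c
/* Scale one premultiplied RGBA pixel (4 bytes, in place) by `alpha` in [0, 1].
   In premultiplied storage color = straightColor * pixelAlpha, so reducing
   opacity means scaling every byte, not just the alpha byte. */
void scale_premultiplied_pixel(unsigned char px[4], float alpha) {
    for (int c = 0; c < 4; c++)
        px[c] = (unsigned char)(px[c] * alpha + 0.5f);  /* rounded */
}
```

For an opaque red-ish pixel (200, 100, 50, 255) scaled by 0.5, all four bytes halve, which is exactly what keeps the color/alpha ratio (the straight color) unchanged while the overlay becomes translucent.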
