I am developing a color photo frame app and I am stuck on one part. I have lots of frames in my app, and I want to find the transparent area of each frame and place a UIImageView on that part programmatically. I have tried a number of approaches, including reading the image pixel by pixel, but nothing has worked.
Frame
Here is the code I use to find the transparent area:
CGImageRef imageRef = [image CGImage];
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Use the CGImage's pixel dimensions; UIImage.size is in points and
// differs from the pixel size on Retina screens.
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
unsigned char *rawData = malloc(width * height * bytesPerPixel);
CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);

// Copy only the fully transparent pixels into rawData2; everything else is zeroed.
unsigned char *rawData2 = malloc(width * height * bytesPerPixel);
BOOL foundTransparent = NO;
for (NSUInteger index = 0; index < width * height * bytesPerPixel; index += 4)
{
    if (rawData[index + 3] == 0)   // alpha == 0 means the pixel is transparent
    {
        rawData2[index]     = rawData[index];
        rawData2[index + 1] = rawData[index + 1];
        rawData2[index + 2] = rawData[index + 2];
        rawData2[index + 3] = rawData[index + 3];
        foundTransparent = YES;
    }
    else
    {
        rawData2[index]     = 0;
        rawData2[index + 1] = 0;
        rawData2[index + 2] = 0;
        rawData2[index + 3] = 0;
    }
}
free(rawData);   // rawData2 must also be freed once it is no longer needed
How can I find the CGRect of the transparent area in the image view?
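In outline, the CGRect can be recovered by tracking the minimum and maximum x/y of the transparent pixels while scanning the buffer. Here is a minimal sketch, assuming rawData was filled as above (RGBA8888, width and height in pixels); transparentBounds is a made-up helper name:

// Hypothetical helper: returns the bounding box (in pixel coordinates) of all
// fully transparent pixels in an RGBA8888 buffer, or CGRectNull if none exist.
// Assumes bytesPerRow == width * 4, as in the context created above.
CGRect transparentBounds(unsigned char *rawData, NSUInteger width, NSUInteger height)
{
    NSUInteger minX = width, minY = height, maxX = 0, maxY = 0;
    BOOL found = NO;
    for (NSUInteger y = 0; y < height; y++) {
        for (NSUInteger x = 0; x < width; x++) {
            NSUInteger byteIndex = (y * width + x) * 4;
            if (rawData[byteIndex + 3] == 0) {   // transparent pixel
                if (x < minX) minX = x;
                if (y < minY) minY = y;
                if (x > maxX) maxX = x;
                if (y > maxY) maxY = y;
                found = YES;
            }
        }
    }
    if (!found) return CGRectNull;
    return CGRectMake(minX, minY, maxX - minX + 1, maxY - minY + 1);
}

The resulting rect is in pixel coordinates; to position a UIImageView you would still need to divide by the image's scale factor and map the rect into the image view's coordinate space.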
Related
I have a grayscale image, and I have managed to restore the original color in one part of it. Now I want to change the color of that part where the original color was applied.
I have this:
Original Image
I want to convert it to this:
Result Image
// Release callback so the pixel buffer is freed only once the data provider
// is finished with it; freeing the buffer by hand while the CGImage is still
// alive is a use-after-free.
static void releaseOutputData(void *info, const void *data, size_t size) {
    free((void *)data);
}

CGImageRef imageRef = [image CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
NSUInteger bytesCount = height * width * bytesPerPixel;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = (unsigned char *)calloc(bytesCount, sizeof(unsigned char));
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                             bitsPerComponent, bytesPerRow, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);

unsigned char *outputData = (unsigned char *)calloc(bytesCount, sizeof(unsigned char));
NSUInteger byteIndex = 0;
for (NSUInteger i = 0; i < bytesCount / bytesPerPixel; ++i) {
    CGFloat red = (CGFloat)rawData[byteIndex];
    CGFloat green = (CGFloat)rawData[byteIndex+1];
    CGFloat blue = (CGFloat)rawData[byteIndex+2];
    CGFloat alpha = (CGFloat)rawData[byteIndex+3];
    // A pixel is gray when all three components match; compare pairwise,
    // because a chained `red == green == blue` does not do this in C.
    BOOL grayscale = (red == green) && (green == blue);
    if (!grayscale) {
        // test for near values
        CGFloat diff = MAX(ABS(red-green), MAX(ABS(red-blue), ABS(green-blue)));
        static CGFloat allowedDifference = 100; // in range of 0-255
        if (diff > allowedDifference) {
            red = 236.0;
            green = 17.0;
            blue = 17.0;
        }
    }
    outputData[byteIndex] = red;
    outputData[byteIndex+1] = green;
    outputData[byteIndex+2] = blue;
    outputData[byteIndex+3] = alpha;
    byteIndex += bytesPerPixel;
}
free(rawData);

CGDataProviderRef outputDataProvider = CGDataProviderCreateWithData(NULL,
                                                                    outputData,
                                                                    bytesCount,
                                                                    releaseOutputData);
CGImageRef outputImageRef = CGImageCreate(width,
                                          height,
                                          bitsPerComponent,
                                          bytesPerPixel * 8,
                                          bytesPerRow,
                                          colorSpace,
                                          // must match the layout of outputData
                                          kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big,
                                          outputDataProvider,
                                          NULL, NO,
                                          kCGRenderingIntentDefault);
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(outputDataProvider);
UIImage *outputImage = [UIImage imageWithCGImage:outputImageRef];
CGImageRelease(outputImageRef);
I tried a bitmap context and everything else I could think of, but I am not getting the desired result.
Does anyone have an idea?
You can grab the pixel data from an image by using CGBitmapContextCreate to create a bitmap context, then drawing the image into it via CGContextDrawImage.
You will then have a one-dimensional array of bytes, laid out like this: [r1, g1, b1, a1, r2, g2, b2, a2, ...] where r, g, b, a are the color components and 1, 2, ... are pixel numbers.
After that, you can iterate over the array and compare each pixel's color components. Since you want to skip grayscale pixels, compare the RGB components: in theory they should be exactly equal for gray, but in practice you should tolerate small differences.
For each pixel that is not grayscale, just swap the red and green bytes.
That should be the way to go.
Updated with example:
UIImage *image = [UIImage imageNamed:@"qfjsc.png"];
CGImageRef imageRef = [image CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
NSUInteger bytesCount = height * width * bytesPerPixel;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = (unsigned char *)calloc(bytesCount, sizeof(unsigned char));
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                             bitsPerComponent, bytesPerRow, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);

unsigned char *outputData = (unsigned char *)calloc(bytesCount, sizeof(unsigned char));
NSUInteger byteIndex = 0;
for (NSUInteger i = 0; i < bytesCount / bytesPerPixel; ++i) {
    CGFloat red = (CGFloat)rawData[byteIndex];
    CGFloat green = (CGFloat)rawData[byteIndex+1];
    CGFloat blue = (CGFloat)rawData[byteIndex+2];
    CGFloat alpha = (CGFloat)rawData[byteIndex+3];
    // Compare pairwise; chained `==` does not test three-way equality in C.
    BOOL grayscale = (red == green) && (green == blue);
    if (!grayscale) {
        // test for near values
        CGFloat diff = MAX(ABS(red-green), MAX(ABS(red-blue), ABS(green-blue)));
        static CGFloat allowedDifference = 50.0; // in range of 0-255
        if (diff > allowedDifference) {
            // swap the red and green components
            CGFloat redTemp = red;
            red = green;
            green = redTemp;
        }
    }
    outputData[byteIndex] = red;
    outputData[byteIndex+1] = green;
    outputData[byteIndex+2] = blue;
    outputData[byteIndex+3] = alpha;
    byteIndex += bytesPerPixel;
}
free(rawData);

// releaseOutputData is the same release callback shown earlier: the provider
// frees the buffer once it no longer needs it.
CGDataProviderRef outputDataProvider = CGDataProviderCreateWithData(NULL,
                                                                    outputData,
                                                                    bytesCount,
                                                                    releaseOutputData);
CGImageRef outputImageRef = CGImageCreate(width,
                                          height,
                                          bitsPerComponent,
                                          bytesPerPixel * 8,
                                          bytesPerRow,
                                          colorSpace,
                                          // must match the layout of outputData
                                          kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big,
                                          outputDataProvider,
                                          NULL, NO,
                                          kCGRenderingIntentDefault);
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(outputDataProvider);
UIImage *outputImage = [UIImage imageWithCGImage:outputImageRef];
CGImageRelease(outputImageRef);
Note the static allowed-difference variable. It lets you skip pixels that are not strictly grayscale (their RGB components differ slightly) but are close enough to gray to be left alone.
Here are examples:
Allowed difference = 0
Allowed difference = 50
I have some images in my iOS application that are partially transparent (PNG format).
Can I find the CGRect areas of the non-transparent parts of an image?
I am not aware of any function that gives you this out of the box.
But you can write your own: get the color of the pixels one by one and work out whether they form a rect.
To get the pixel data you can use the following code.
CGImageRef image = [myUIImage CGImage];
NSUInteger width = CGImageGetWidth(image);
NSUInteger height = CGImageGetHeight(image);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = malloc(height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
CGContextRelease(context);

// Now rawData contains the image data in the RGBA8888 pixel format.
// Reading the pixel at column xx, row yy:
int byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
UInt8 red   = rawData[byteIndex];
UInt8 green = rawData[byteIndex + 1];
UInt8 blue  = rawData[byteIndex + 2];
UInt8 alpha = rawData[byteIndex + 3];
// free(rawData) when you are done with the buffer.
This was originally posted at this question.
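To turn that pixel data into an actual rect, one simple approach is to compute the bounding box of every pixel whose alpha is non-zero, as sketched below; the helper name is made up, and it assumes rawData was filled as above:

// Hypothetical helper: bounding box of all non-transparent pixels, i.e. the
// same min/max scan as a transparent-area search but with the alpha test inverted.
CGRect opaqueBounds(unsigned char *rawData, NSUInteger width, NSUInteger height,
                    NSUInteger bytesPerRow)
{
    NSUInteger minX = width, minY = height, maxX = 0, maxY = 0;
    BOOL found = NO;
    for (NSUInteger yy = 0; yy < height; yy++) {
        for (NSUInteger xx = 0; xx < width; xx++) {
            NSUInteger byteIndex = bytesPerRow * yy + xx * 4;
            if (rawData[byteIndex + 3] != 0) {   // any visible pixel
                if (xx < minX) minX = xx;
                if (yy < minY) minY = yy;
                if (xx > maxX) maxX = xx;
                if (yy > maxY) maxY = yy;
                found = YES;
            }
        }
    }
    return found ? CGRectMake(minX, minY, maxX - minX + 1, maxY - minY + 1)
                 : CGRectNull;
}

If the image contains several disjoint opaque regions and you need one rect per region, you would run a connected-component pass instead, but for a single region the bounding box is enough.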
I have an animal image with a white background and the animal drawn as a black outline. It is displayed in an image view in my .xib.
Now I would like to paint on the image, but only inside a particular closed region.
Suppose the user touches the hand; then only the hand should be filled with the gradient. The rest of the image should remain the same.
- (UIImage *)imageFromRawData:(unsigned char *)rawData
{
    NSUInteger bitsPerComponent = 8;
    NSUInteger bytesPerPixel = 4;
    CGImageRef imageRef = [self.imageDoodle.image CGImage];
    // Use the CGImage's pixel dimensions; UIImage.size is in points and
    // differs from the pixel size on Retina screens.
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    NSUInteger bytesPerRow = bytesPerPixel * width;
    // Use device RGB so the context matches the RGBA8888 layout of rawData.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGImageRef outputRef = CGBitmapContextCreateImage(context);
    UIImage *rawImage = [UIImage imageWithCGImage:outputRef];
    CGContextRelease(context);
    CGImageRelease(outputRef);
    return rawImage;
}
- (unsigned char *)rawDataFromImage:(UIImage *)image
{
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    NSLog(@"w=%lu,h=%lu", (unsigned long)width, (unsigned long)height);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = malloc(height * width * 4);   // caller must free()
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);
    return rawData;
}
Where would I need to change my code to support this?
This might be possible with UIBezierPath, but I don't know how to apply it in this case.
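For the "fill only the touched closed region" behavior, the usual technique on raw pixel data is a flood fill seeded at the touch point. Below is a minimal, hedged sketch of a stack-based flood fill that works on the RGBA buffer returned by rawDataFromImage: above. The helper name, the fill color, and the darkness threshold are illustrative, and mapping the touch point from view coordinates to pixel coordinates is left out.

// Hypothetical helper: flood-fills the closed region around (startX, startY)
// with a solid color. Pixels darker than `threshold` (the black outline) act
// as boundaries. rawData is RGBA8888, tightly packed (bytesPerRow = width * 4).
static void floodFill(unsigned char *rawData, NSUInteger width, NSUInteger height,
                      NSUInteger startX, NSUInteger startY,
                      UInt8 fillR, UInt8 fillG, UInt8 fillB)
{
    const UInt8 threshold = 128;   // assumption: outline pixels are darker than this
    NSUInteger count = width * height;
    NSUInteger *stack = malloc(count * sizeof(NSUInteger));
    BOOL *visited = calloc(count, sizeof(BOOL));
    NSUInteger top = 0;
    stack[top++] = startY * width + startX;
    visited[startY * width + startX] = YES;

    while (top > 0) {
        NSUInteger pixel = stack[--top];
        NSUInteger byteIndex = pixel * 4;
        // Stop at the outline: dark pixels act as the region boundary.
        if (rawData[byteIndex] < threshold) continue;

        rawData[byteIndex]     = fillR;
        rawData[byteIndex + 1] = fillG;
        rawData[byteIndex + 2] = fillB;
        rawData[byteIndex + 3] = 255;

        // Push the four neighbors that exist and have not been seen yet.
        NSUInteger x = pixel % width, y = pixel / width;
        NSUInteger neighbors[4];
        NSUInteger n = 0;
        if (x + 1 < width)  neighbors[n++] = pixel + 1;
        if (x > 0)          neighbors[n++] = pixel - 1;
        if (y + 1 < height) neighbors[n++] = pixel + width;
        if (y > 0)          neighbors[n++] = pixel - width;
        for (NSUInteger i = 0; i < n; i++) {
            if (!visited[neighbors[i]]) {
                visited[neighbors[i]] = YES;
                stack[top++] = neighbors[i];
            }
        }
    }
    free(stack);
    free(visited);
}

Usage would look roughly like: get the buffer with rawDataFromImage:, run floodFill at the touched pixel, rebuild the UIImage with imageFromRawData:, and free the buffer. A true gradient fill would be a second pass over the filled pixels, but finding the closed region is the part that UIBezierPath cannot easily do for a raster outline.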
I added a ground overlay to a map view, and I found these ways to change the alpha of groundoverlay.icon:
How to set the opacity/alpha of a UIImage?
But it seems to have no effect in the app; I still cannot see the map or the other ground overlays behind the image.
Is there a solution for this?
+ (UIImage *)setImage:(UIImage *)image withAlpha:(CGFloat)alpha
{
    // Create a pixel buffer in an easy to use format
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    UInt8 *m_PixelBuf = malloc(sizeof(UInt8) * height * width * 4);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(m_PixelBuf, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    // Alter the alpha. The buffer is premultiplied, so the color components
    // must be scaled along with the alpha byte; setting only the alpha byte
    // renders with wrong (over-bright) colors.
    NSUInteger length = height * width * 4;
    for (NSUInteger i = 0; i < length; i += 4)
    {
        m_PixelBuf[i]   = (UInt8)(m_PixelBuf[i]   * alpha);
        m_PixelBuf[i+1] = (UInt8)(m_PixelBuf[i+1] * alpha);
        m_PixelBuf[i+2] = (UInt8)(m_PixelBuf[i+2] * alpha);
        m_PixelBuf[i+3] = (UInt8)(m_PixelBuf[i+3] * alpha);
    }

    // Create a new image from the modified buffer
    CGContextRef ctx = CGBitmapContextCreate(m_PixelBuf, width, height,
                                             bitsPerComponent, bytesPerRow, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGImageRef newImgRef = CGBitmapContextCreateImage(ctx);
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(ctx);
    free(m_PixelBuf);
    UIImage *finalImage = [UIImage imageWithCGImage:newImgRef];
    CGImageRelease(newImgRef);
    return finalImage;
}
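A sketch of how this might be applied to the overlay, assuming the overlay exposes the icon property mentioned in the question; ImageUtils is a hypothetical class name for wherever the method above lives, and 0.5 is an illustrative value:

// Rebuild the overlay icon at 50% opacity; the original image is not mutated,
// so the new image must be assigned back to the overlay.
groundOverlay.icon = [ImageUtils setImage:groundOverlay.icon withAlpha:0.5];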
I'm trying to work with the raw pixels of an image and I'm running into some problems.
First, calling .CGImage on a C4Image doesn't work, so I have to use a UIImage to load the file.
Second, the byte array seems to be the wrong length, and the image doesn't seem to have the right dimensions or colors.
I'm borrowing some code from the discussion here.
UIImage *image = [UIImage imageNamed:@"C4Table.png"];
CGImageRef imageRef = image.CGImage;
// __bridge_transfer hands ownership of the copied CFData to ARC;
// a plain __bridge here would leak the copy.
NSData *data = (__bridge_transfer NSData *)CGDataProviderCopyData(CGImageGetDataProvider(imageRef));
unsigned char *pixels = (unsigned char *)[data bytes];
// Note: the provider's data is in whatever format the image was encoded with;
// rows may be padded, so [data length] is not necessarily width * height * 4.
for (NSUInteger i = 0; i < [data length]; i += 4) {
    pixels[i] = 0;               // red
    pixels[i+1] = pixels[i+1];   // green (left unchanged)
    pixels[i+2] = pixels[i+2];   // blue (left unchanged)
    pixels[i+3] = pixels[i+3];   // alpha (left unchanged)
}
size_t imageWidth = CGImageGetWidth(imageRef);
size_t imageHeight = CGImageGetHeight(imageRef);
NSLog(@"width: %zu height: %zu datalength: %lu", imageWidth, imageHeight, (unsigned long)[data length]);
C4Image *imgimgimg = [[C4Image alloc] initWithRawData:pixels width:imageWidth height:imageHeight];
[self.canvas addImage:imgimgimg];
Is there a better way to do this or am I missing a step?
Close. There is a loadPixelData method on C4Image, and if you check the main C4 repo (C4iOS) you'll be able to see how the image class loads pixels... It can be tricky.
C4Image loadPixelData:
-(void)loadPixelData {
    const char *queueName = [@"pixelDataQueue" UTF8String];
    __block dispatch_queue_t pixelDataQueue = dispatch_queue_create(queueName, DISPATCH_QUEUE_CONCURRENT);
    dispatch_async(pixelDataQueue, ^{
        NSUInteger width = CGImageGetWidth(self.CGImage);
        NSUInteger height = CGImageGetHeight(self.CGImage);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        bytesPerPixel = 4;
        bytesPerRow = bytesPerPixel * width;
        free(rawData);
        rawData = malloc(height * bytesPerRow);
        NSUInteger bitsPerComponent = 8;
        CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
        CGColorSpaceRelease(colorSpace);
        CGContextDrawImage(context, CGRectMake(0, 0, width, height), self.CGImage);
        CGContextRelease(context);
        _pixelDataLoaded = YES;
        [self postNotification:@"pixelDataWasLoaded"];
        pixelDataQueue = nil;
    });
}
To modify this for your question, I have done the following:
-(void)getRawPixelsAndCreateImages {
    C4Image *image = [C4Image imageNamed:@"C4Table.png"];
    NSUInteger width = CGImageGetWidth(image.CGImage);
    NSUInteger height = CGImageGetHeight(image.CGImage);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    unsigned char *rawData = malloc(height * bytesPerRow);
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    // Copy the image's pixels into rawData in a known RGBA8888 layout.
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image.CGImage);
    CGContextRelease(context);

    C4Image *imgimgimg = [[C4Image alloc] initWithRawData:rawData width:width height:height];
    [self.canvas addImage:imgimgimg];

    // Set every red component to 255 to tint the copy.
    for (NSUInteger i = 0; i < height * bytesPerRow; i += 4) {
        rawData[i] = 255;
    }

    C4Image *redImgimgimg = [[C4Image alloc] initWithRawData:rawData width:width height:height];
    redImgimgimg.origin = CGPointMake(0, 320);
    [self.canvas addImage:redImgimgimg];
}
It can be quite confusing to learn how to work with pixel data, because you need to know how to work with Core Foundation (which is pretty much a C API). The main line of code that populates rawData is the call to CGContextDrawImage, which essentially copies the pixels from an image into the data array you're going to play with.
I have created a gist that you can download to play around with in C4.
Working with Raw Pixels
In this gist you'll see that I actually grab the CGImage from a C4Image object, use that to populate an array of raw data, and then use that array to create a copy of the original image.
Then, I modify the red component of the pixel data by changing all values to 255, and then use the modified pixel array to create a tinted version of the original image.
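One detail worth keeping in mind with this approach, and a likely cause of the "wrong length" confusion in the question: when you create the bitmap context yourself you control bytesPerRow, but data copied straight from an image's data provider may have padded rows. A small sketch of the indexing math, assuming the RGBA8888 layout used throughout this answer (the helper name is made up):

// Index of the first byte of the pixel at (x, y) in an RGBA8888 buffer.
// bytesPerRow may be larger than width * 4 if rows are padded, so always
// compute the row offset from bytesPerRow, never from width alone.
static inline NSUInteger pixelByteIndex(NSUInteger x, NSUInteger y,
                                        NSUInteger bytesPerRow) {
    return y * bytesPerRow + x * 4;
}

// Example: read the green component of pixel (10, 20).
// unsigned char green = rawData[pixelByteIndex(10, 20, bytesPerRow) + 1];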