I have some images in my iOS application which are partially transparent (PNG format).
Can I find the CGRect areas of the non-transparent regions in an image?
I am not aware of any function that gives you this out of the box.
But you can write your own: all you need to do is read the pixels one by one and figure out whether they form a rect.
To get the pixel data you can use the following code:
CGImageRef image = [myUIImage CGImage];
NSUInteger width = CGImageGetWidth(image);
NSUInteger height = CGImageGetHeight(image);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// One byte per channel, four channels (RGBA) per pixel.
unsigned char *rawData = malloc(height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
CGContextRelease(context);
// Now your rawData contains the image data in the RGBA8888 pixel format.
// (xx, yy) are the coordinates of the pixel you want to inspect.
NSUInteger byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
unsigned char red   = rawData[byteIndex];
unsigned char green = rawData[byteIndex + 1];
unsigned char blue  = rawData[byteIndex + 2];
unsigned char alpha = rawData[byteIndex + 3];
This was originally posted at this question.
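As a rough sketch of the "figure out if they make a rect" step: the loop below scans the buffer from the code above and computes the bounding CGRect of all non-transparent pixels. This is my own illustration, not from the original answer; it assumes rawData, width, height, bytesPerRow and bytesPerPixel as defined above, and it collapses everything into a single bounding box, so disjoint opaque areas would need a connected-components pass instead.
// Sketch: bounding box of all pixels whose alpha exceeds a threshold.
NSUInteger minX = width, minY = height, maxX = 0, maxY = 0;
const unsigned char alphaThreshold = 0; // pixels with alpha > 0 count as opaque; tune as needed
for (NSUInteger y = 0; y < height; y++) {
    for (NSUInteger x = 0; x < width; x++) {
        unsigned char a = rawData[(bytesPerRow * y) + x * bytesPerPixel + 3];
        if (a > alphaThreshold) {
            if (x < minX) minX = x;
            if (x > maxX) maxX = x;
            if (y < minY) minY = y;
            if (y > maxY) maxY = y;
        }
    }
}
CGRect nonTransparentRect = CGRectZero;
if (maxX >= minX && maxY >= minY) {
    nonTransparentRect = CGRectMake(minX, minY, maxX - minX + 1, maxY - minY + 1);
}
free(rawData); // done with the pixel buffer
Note that the rect is in pixel coordinates; divide by the image's scale if you need points.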
Related
I am developing a color Photo Frame app, but I am stuck on one part. I have lots of frames in my app, so I want to find the transparent area of a frame and put a UIImageView on that part programmatically. I have tried a number of code snippets that read the image pixel by pixel, and much more, but nothing works.
Frame
Here is the code I use to find the transparent area:
CGImageRef imageRef = [image CGImage];
// Use the bitmap's pixel dimensions, not image.size (which is in points).
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = malloc(width * height * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
// Copy only the fully transparent pixels into a second buffer.
// calloc zeroes the buffer, so non-transparent positions stay 0.
unsigned char *rawData2 = calloc(width * height * 4, 1);
BOOL isBlank = YES;
for (NSUInteger index = 0; index < width * height * 4; index += 4) {
    if (rawData[index + 3] == 0) {
        rawData2[index]     = rawData[index];
        rawData2[index + 1] = rawData[index + 1];
        rawData2[index + 2] = rawData[index + 2];
        rawData2[index + 3] = rawData[index + 3];
        isBlank = NO;
    }
}
How can I find the frame (CGRect) of the transparent area in the image view?
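One possible sketch, not a tested solution: reuse the loop above, but track the min/max coordinates of the fully transparent pixels to get their bounding CGRect. This assumes the transparent hole is a single rectangle; photoView and frameImageView are hypothetical names, and rawData, width, height, bytesPerRow and bytesPerPixel come from the code above.
// Sketch: bounding box of the fully transparent pixels (alpha == 0).
NSUInteger minX = width, minY = height, maxX = 0, maxY = 0;
for (NSUInteger y = 0; y < height; y++) {
    for (NSUInteger x = 0; x < width; x++) {
        if (rawData[(bytesPerRow * y) + x * bytesPerPixel + 3] == 0) {
            if (x < minX) minX = x;
            if (x > maxX) maxX = x;
            if (y < minY) minY = y;
            if (y > maxY) maxY = y;
        }
    }
}
if (maxX >= minX && maxY >= minY) {
    // Pixel coordinates; divide by the image scale if the view shows points.
    CGRect holeRect = CGRectMake(minX, minY, maxX - minX + 1, maxY - minY + 1);
    UIImageView *photoView = [[UIImageView alloc] initWithFrame:holeRect]; // hypothetical
    [self.frameImageView addSubview:photoView]; // hypothetical parent view
}
free(rawData);
free(rawData2);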
I have an image in grayscale, and I have managed to apply its original color to some parts of that image. Now I want to change the color of the parts of the image where I applied the original color.
I have this:
Original Image
I want to convert it to this:
Result Image
CGImageRef imageRef = [image CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
NSUInteger bytesCount = height * width * bytesPerPixel;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = (unsigned char *)calloc(bytesCount, sizeof(unsigned char));
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
unsigned char *outputData = (unsigned char *)calloc(bytesCount, sizeof(unsigned char));
NSUInteger byteIndex = 0;
for (NSUInteger i = 0; i < bytesCount / bytesPerPixel; ++i) {
    CGFloat red = (CGFloat)rawData[byteIndex];
    CGFloat green = (CGFloat)rawData[byteIndex + 1];
    CGFloat blue = (CGFloat)rawData[byteIndex + 2];
    CGFloat alpha = (CGFloat)rawData[byteIndex + 3];
    // Compare pairwise: "red == green == blue" does not do what you expect in C.
    BOOL grayscale = (red == green && green == blue);
    if (!grayscale) {
        // test for near values
        CGFloat diff = MAX(ABS(red - green), MAX(ABS(red - blue), ABS(green - blue)));
        static CGFloat allowedDifference = 100; // in range of 0-255
        if (diff > allowedDifference) {
            // CGFloat redTemp = 236;
            // red = green;
            // green = redTemp;
            red = 236.0;
            green = 17.0;
            blue = 17.0;
        }
    }
    outputData[byteIndex] = red;
    outputData[byteIndex + 1] = green;
    outputData[byteIndex + 2] = blue;
    outputData[byteIndex + 3] = alpha;
    byteIndex += bytesPerPixel;
}
free(rawData);
CGDataProviderRef outputDataProvider = CGDataProviderCreateWithData(NULL,
                                                                    outputData,
                                                                    bytesCount,
                                                                    NULL);
// Do not free(outputData) here: with a NULL release callback the provider keeps
// referencing this buffer for as long as the image lives. Free it once the image
// is no longer needed, or pass a release callback above instead.
CGImageRef outputImageRef = CGImageCreate(width,
                                          height,
                                          bitsPerComponent,
                                          bytesPerPixel * 8,
                                          bytesPerRow,
                                          colorSpace,
                                          // must match outputData's RGBA, premultiplied layout
                                          kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big,
                                          outputDataProvider,
                                          NULL, NO,
                                          kCGRenderingIntentDefault);
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(outputDataProvider);
UIImage *outputImage = [UIImage imageWithCGImage:outputImageRef];
CGImageRelease(outputImageRef);
I tried a bitmap context and everything, but I am not getting the desired result.
Does anyone have an idea?
You can grab the pixel data from an image by creating a bitmap context with CGBitmapContextCreate and drawing the image into it via CGContextDrawImage.
You will then have a one-dimensional array of bytes.
Like this: [r1, g1, b1, a1, r2, g2, b2, a2, ...], where r, g, b, a are the color components and 1, 2, ... are the pixel numbers.
After this, you can iterate over the array and compare each pixel's color components. Since you want to skip grayscale pixels, compare the RGB components: in theory they must be equal for gray, but you should also tolerate small differences of a few values either way.
If a given pixel is not grayscale, just swap its red and green bytes.
That should be the way to go.
Updated with example:
UIImage *image = [UIImage imageNamed:@"qfjsc.png"];
CGImageRef imageRef = [image CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
NSUInteger bytesCount = height * width * bytesPerPixel;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = (unsigned char *)calloc(bytesCount, sizeof(unsigned char));
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
unsigned char *outputData = (unsigned char *)calloc(bytesCount, sizeof(unsigned char));
NSUInteger byteIndex = 0;
for (NSUInteger i = 0; i < bytesCount / bytesPerPixel; ++i) {
    CGFloat red = (CGFloat)rawData[byteIndex];
    CGFloat green = (CGFloat)rawData[byteIndex + 1];
    CGFloat blue = (CGFloat)rawData[byteIndex + 2];
    CGFloat alpha = (CGFloat)rawData[byteIndex + 3];
    // Compare pairwise: "red == green == blue" does not do what you expect in C.
    BOOL grayscale = (red == green && green == blue);
    if (!grayscale) {
        // test for near values
        CGFloat diff = MAX(ABS(red - green), MAX(ABS(red - blue), ABS(green - blue)));
        static CGFloat allowedDifference = 50.0; // in range of 0-255
        if (diff > allowedDifference) {
            // Swap the red and green channels.
            CGFloat redTemp = red;
            red = green;
            green = redTemp;
        }
    }
    outputData[byteIndex] = red;
    outputData[byteIndex + 1] = green;
    outputData[byteIndex + 2] = blue;
    outputData[byteIndex + 3] = alpha;
    byteIndex += bytesPerPixel;
}
free(rawData);
CGDataProviderRef outputDataProvider = CGDataProviderCreateWithData(NULL,
                                                                    outputData,
                                                                    bytesCount,
                                                                    NULL);
// Do not free(outputData) here: with a NULL release callback the provider keeps
// referencing this buffer for as long as the image lives. Free it once the image
// is no longer needed, or pass a release callback above instead.
CGImageRef outputImageRef = CGImageCreate(width,
                                          height,
                                          bitsPerComponent,
                                          bytesPerPixel * 8,
                                          bytesPerRow,
                                          colorSpace,
                                          // must match outputData's RGBA, premultiplied layout
                                          kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big,
                                          outputDataProvider,
                                          NULL, NO,
                                          kCGRenderingIntentDefault);
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(outputDataProvider);
UIImage *outputImage = [UIImage imageWithCGImage:outputImageRef];
CGImageRelease(outputImageRef);
Note the static allowed-difference variable. It lets you skip pixels that are not strictly grayscale (their RGB components differ slightly) but are visually gray by nature.
Here are examples:
Allowed difference = 0
Allowed difference = 50
I'm trying to add all the pixel values of a picture into an array. If the picture is too big, my app loses the connection with assets and crashes.
How can I check whether memory is growing, so that I can stop and show an alert? I don't know the maximum image size that can be loaded safely, so I can't prevent this up front.
My code is:
- (NSArray *)getRGBAFromImage:(UIImage *)image atX:(int)xp atY:(int)yp
{
    NSMutableArray *resultColor = [NSMutableArray array];
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = (unsigned char *)calloc(height * width * 4, sizeof(unsigned char));
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);
    // Now rawData contains the image data in the RGBA8888 pixel format.
    NSUInteger byteIndex = (bytesPerRow * yp) + xp * bytesPerPixel;
    // EDIT: THIS IS THE LOOP - iterate from (xp, yp) to the end of the buffer.
    NSUInteger count = (width * height) - ((NSUInteger)yp * width + (NSUInteger)xp);
    CGFloat redTotal = 0, greenTotal = 0, blueTotal = 0;
    for (NSUInteger i = 0; i < count; ++i)
    {
        // Normalize the components to 0..1, as UIColor expects.
        CGFloat red   = rawData[byteIndex]     / 255.0;
        CGFloat green = rawData[byteIndex + 1] / 255.0;
        CGFloat blue  = rawData[byteIndex + 2] / 255.0;
        CGFloat alpha = rawData[byteIndex + 3] / 255.0;
        byteIndex += bytesPerPixel;
        redTotal += red;
        greenTotal += green;
        blueTotal += blue;
        UIColor *acolor = [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
        [resultColor addObject:acolor];
    }
    NSLog(@"width:%lu height:%lu", (unsigned long)width, (unsigned long)height);
    free(rawData);
    return resultColor;
}
Or should I add this to a queue?
Thanks!
While the image is rather large (32 MB), the array will be much larger since it is an array of objects: there will be 8M UIColor objects. Just the array of pointers to the UIColor objects on a 64-bit device will take 64 MB, and then add in 8M times the size of a UIColor instance.
You need to find another way of handling this; one option is sketched below.
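For example, one possible approach (a sketch, assuming you only need the component values later rather than UIColor objects up front): keep the raw RGBA bytes in a single NSData and build a UIColor on demand. 8M pixels then cost 32 MB once instead of hundreds of MB of objects. rawData, height, bytesPerRow, bytesPerPixel, xp and yp are the variables from the question's code.
// Sketch: wrap the raw RGBA buffer instead of boxing every pixel as UIColor.
// NSData takes ownership of the malloc'd buffer, so do not free(rawData) yourself.
NSData *pixelData = [NSData dataWithBytesNoCopy:rawData
                                         length:height * bytesPerRow
                                   freeWhenDone:YES];
// Later, reconstruct a single pixel's color only when you actually need it:
const unsigned char *bytes = pixelData.bytes;
NSUInteger i = (yp * bytesPerRow) + xp * bytesPerPixel;
UIColor *acolor = [UIColor colorWithRed:bytes[i]     / 255.0
                                  green:bytes[i + 1] / 255.0
                                   blue:bytes[i + 2] / 255.0
                                  alpha:bytes[i + 3] / 255.0];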
I have a simple UIImageView with an image of a person. Now I want to change the color of some of the pixels based on their location or some frame value. How can this be done?
Any help...
For a long-term implementation you should take a look at a Core Image framework tutorial.
For a one-time case you can refer to the already existing answer at iPhone: How to change color of particular pixel of a UIImage?
I've found a nice non-ARC solution that works for changing the picture's color within the entire frame, but you can try to adapt it to apply only to certain pixels:
- (void)grayscale:(UIImage *)image {
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = malloc(height * width * 4);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);
    // Now rawData contains the image data in the RGBA8888 pixel format.
    NSUInteger byteIndex = 0;
    for (NSUInteger ii = 0; ii < width * height; ++ii)
    {
        CGFloat red = rawData[byteIndex] / 255.0;
        CGFloat green = rawData[byteIndex + 1] / 255.0;
        CGFloat blue = rawData[byteIndex + 2] / 255.0;
        // Standard luminance weights; replace this with your own per-pixel logic.
        CGFloat gray = 0.299 * red + 0.587 * green + 0.114 * blue;
        rawData[byteIndex]     = (unsigned char)(gray * 255.0);
        rawData[byteIndex + 1] = (unsigned char)(gray * 255.0);
        rawData[byteIndex + 2] = (unsigned char)(gray * 255.0);
        // Alpha (byteIndex + 3) is left untouched.
        byteIndex += bytesPerPixel;
    }
    // Rebuild an image from the modified buffer, using the same layout as above.
    CGContextRef ctx = CGBitmapContextCreate(rawData, width, height,
                                             bitsPerComponent, bytesPerRow, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGImageRef newImageRef = CGBitmapContextCreateImage(ctx);
    UIImage *rawImage = [UIImage imageWithCGImage:newImageRef];
    CGImageRelease(newImageRef);
    CGContextRelease(ctx);
    self.workingImage = rawImage;
    [self.imageView setImage:self.workingImage];
    free(rawData);
}
Source: http://brandontreb.com/image-manipulation-retrieving-and-updating-pixel-values-for-a-uiimage
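Since the question asks about changing pixels based on their location, here is a rough sketch of how the loop in the method above could be restricted to a region instead of the whole frame; targetRect is a hypothetical rect in pixel coordinates, and rawData, width, height, bytesPerRow and bytesPerPixel are the variables from the method.
// Sketch: only touch pixels inside targetRect (pixel coordinates, hypothetical).
CGRect targetRect = CGRectMake(50, 50, 100, 100);
for (NSUInteger yy = 0; yy < height; yy++) {
    for (NSUInteger xx = 0; xx < width; xx++) {
        if (!CGRectContainsPoint(targetRect, CGPointMake(xx, yy))) continue;
        NSUInteger byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
        rawData[byteIndex] = 255; // e.g. push the red channel to maximum
    }
}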
I have a PNG (complete with alpha channel) that I'm looking to composite onto a CGContextRef using CGContextDrawImage. I'd like the RGB channels to be composited, but I'd also like the source image's alpha channel to be copied over as well.
Ultimately I'll be passing the final CGContextRef (in the form of a CGImageRef) to GLKit, where I'm hoping to manipulate the alpha channel for colour tinting purposes using a fragment shader.
Unfortunately I'm running into issues when it comes to creating my texture atlas using Core Graphics. The final CGImageRef fails to copy over the alpha channel from my source image and is non-transparent. I've attached my current compositing code and a copy of my test image below:
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
UInt8 * m_PixelBuf = malloc(sizeof(UInt8) * atlasSize.height * atlasSize.width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * atlasSize.width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(m_PixelBuf,
atlasSize.width,
atlasSize.height,
bitsPerComponent,
bytesPerRow,
colorSpace,
kCGImageAlphaPremultipliedFirst);
CGContextDrawImage(context, CGRectMake(x, y, image.size.width, image.size.height), image.CGImage);
CGImageRef imgRef = CGBitmapContextCreateImage(context);
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
Where do people find these procedures for using CGBitmapContextCreate? This is one of the most common issues: kCGImageAlphaPremultipliedFirst PREMULTIPLIES the RGB channels with the alpha value.
If you are using Xcode, please Command-click kCGImageAlphaPremultipliedFirst and find an appropriate replacement, such as kCGImageAlphaLast.
Here is an example of using alpha as the last channel:
+ (UIImage *)generateRadialGradient {
    int size = 256;
    uint8_t *buffer = malloc(size * size * 4);
    memset(buffer, 255, size * size * 4); // start fully white and fully opaque
    for (int i = 0; i < size; i++) {
        for (int j = 0; j < size; j++) {
            float x = ((float)i / (float)size) * 2.0f - 1.0f;
            float y = ((float)j / (float)size) * 2.0f - 1.0f;
            float relativeRadius = x * x + y * y;
            if (relativeRadius < 1.0f) {
                // Fade alpha out towards the edge of the unit circle.
                buffer[(i * size + j) * 4 + 3] = (uint8_t)((1.0f - sqrtf(relativeRadius)) * 255.0f);
            } else {
                buffer[(i * size + j) * 4 + 3] = 0;
            }
        }
    }
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL,
                                                              buffer,
                                                              size * size * 4,
                                                              NULL);
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * size;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrder32Big | kCGImageAlphaLast;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    CGImageRef imageRef = CGImageCreate(size,
                                        size,
                                        bitsPerComponent,
                                        bitsPerPixel,
                                        bytesPerRow,
                                        colorSpaceRef,
                                        bitmapInfo,
                                        provider,
                                        NULL,
                                        NO,
                                        renderingIntent);
    UIImage *image = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpaceRef);
    // Note: buffer is not freed here; pass a release callback to
    // CGDataProviderCreateWithData if you need to reclaim it.
    return image;
}
So this code creates a radial gradient in code. The inner part is fully opaque, and it becomes more transparent the further it gets from the center.
We could also use kCGImageAlphaFirst, which results in a yellowish gradient. With alpha first the in-memory layout is ARGB, so alpha stays at its initial 255 everywhere and the byte the loop writes (offset +3) is now the blue channel. The result is white in the middle, and as the blue channel is decreased the yellow color starts showing.
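For reference, the kCGImageAlphaFirst variant mentioned above would differ from the method only in the bitmapInfo line; everything else stays the same:
// With alpha first the in-memory layout becomes A,R,G,B, so the byte the loop
// writes at offset +3 is the blue channel rather than alpha.
CGBitmapInfo bitmapInfo = kCGBitmapByteOrder32Big | kCGImageAlphaFirst;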