How to remove opacity but keep the alpha channel of UIImage? - ios

I have a layer where I want the user to draw a 'mask' for cutting out images. It is semi-opaque so that they can see beneath what they are selecting.
How can I process this so that the drawing data has an alpha of 1.0, but retain the alpha channel (for masking)?
TL;DR: I'd like the black area to be a solid, single colour.
Here is the desired before and after (the white background should be transparent in both). In pseudocode, I want something like this:
for (pixel in image) {
    if (pixel.alpha != 0.0) {
        fill solid black
    }
}

The following should do what you're after. The majority of the code is from How to set the opacity/alpha of a UIImage?; I only added a test on the alpha value before converting the colour of the pixel to black.
// Create a pixel buffer in an easy-to-use format
CGImageRef imageRef = [[UIImage imageNamed:@"testImage"] CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
UInt8 *m_PixelBuf = malloc(sizeof(UInt8) * height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(m_PixelBuf, width, height,
                                             bitsPerComponent, bytesPerRow, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
// Make every pixel whose alpha isn't 0 solid black with full alpha
int length = height * width * 4;
for (int i = 0; i < length; i += 4) {
    if (m_PixelBuf[i+3] != 0) {
        m_PixelBuf[i]   = 0;   // R
        m_PixelBuf[i+1] = 0;   // G
        m_PixelBuf[i+2] = 0;   // B
        m_PixelBuf[i+3] = 255; // A
    }
}
// Create a new image from the modified pixel buffer
CGContextRef ctx = CGBitmapContextCreate(m_PixelBuf, width, height,
                                         bitsPerComponent, bytesPerRow, colorSpace,
                                         kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGImageRef newImgRef = CGBitmapContextCreateImage(ctx);
CGColorSpaceRelease(colorSpace);
CGContextRelease(ctx);
free(m_PixelBuf);
UIImage *finalImage = [UIImage imageWithCGImage:newImgRef];
CGImageRelease(newImgRef);
finalImage will now contain an image in which every pixel that didn't have an alpha of 0.0 is solid black with an alpha of 1.0.

The underlying model for this app should not be images. This is not a question of "how do I create one rendition of the image from the other."
Instead, the underlying object model should be an array of paths. Then, when you want to create the image with translucent paths vs opaque paths, it's just a question of how you render this array of paths. Once you tackle it that way, the problem is not a complex image manipulation question but a simple rendering question.
By the way, I really like this array-of-paths model, because then it becomes quite trivial to do things like "gee, let me provide an undo function, letting the user remove one stroke at a time." It opens you up to all sorts of nice functional enhancements.
In terms of the specifics of how to render these paths, it can be implemented in a variety of ways. You could use a custom drawRect: implementation in a UIView subclass that renders the paths with the appropriate alpha. Or you could do it with CAShapeLayer objects. Or you could do some hybrid (creating new image snapshots as you finish adding each path, saving you from having to re-render all of the paths each time). There are tons of ways of tackling this.
But the key insight is to employ an underlying model of an array of paths; the rendering of your two types of images then becomes a fairly trivial exercise:
The first image is a rendering of a bunch of paths as CAShapeLayer objects with alpha of 0.5. The second is the same rendering, but with an alpha of 1.0. Again, it doesn't matter if you use shape layers or low level Core Graphics calls, but the underlying idea is the same. Either render your paths with translucency or not.
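For concreteness, here is a minimal sketch of that idea (the method, layer setup and black stroke colour are illustrative, not taken from the answer above): keep the strokes as UIBezierPath objects and render them as CAShapeLayers, with opacity as the only difference between the on-screen version and the mask version.
- (CALayer *)layerForPaths:(NSArray *)paths opaque:(BOOL)opaque
{
    // One CAShapeLayer per stroke; pass opaque == NO for the semi-transparent
    // on-screen rendering and opaque == YES for the solid mask rendering.
    CALayer *container = [CALayer layer];
    for (UIBezierPath *path in paths) {
        CAShapeLayer *stroke = [CAShapeLayer layer];
        stroke.path = path.CGPath;
        stroke.fillColor = nil;                           // stroke only, no fill
        stroke.strokeColor = [UIColor blackColor].CGColor;
        stroke.lineWidth = path.lineWidth;
        stroke.lineCap = kCALineCapRound;
        stroke.opacity = opaque ? 1.0f : 0.5f;            // the only difference between the two renderings
        [container addSublayer:stroke];
    }
    return container;
}
Producing the mask image is then just a matter of rendering the opaque variant of this layer into a bitmap context (for example with renderInContext:).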

Related

iOS drawing pixels in a UIView

I have a 2D array of RGB values (or any such data container) that I need to write to a UIView that is currently displayed to the user. For example, while using the capture output from the camera, I run some algorithms to identify objects and then highlight them using custom-defined RGB pixels.
What is the best way to do this, given that the whole thing happens in real time, at, say, 10 frames per second?
Use the method below to create a UIImage from your 2D array. You can then display this image using a UIImageView.
-(UIImage *)imageFromArray:(void *)array width:(unsigned int)width height:(unsigned int)height {
    /*
     Assuming pixel color values are 8-bit unsigned.
     You need to create an array that is in the format BGRA (blue, green, red, alpha).
     You can achieve this by implementing a for-loop that sets the values at each index.
     I have not included a for-loop in this example because it depends on how the values
     are stored in your input 2D array.
     You can set the alpha value to 255.
     */
    unsigned char pixelData[width * height * 4];
    // This is where the for-loop would be

    void *baseAddress = pixelData;
    size_t bytesPerRow = width * 4;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    return image;
}
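Hypothetical usage (rgbValues, frameWidth, frameHeight and self.overlayView are placeholders for your own data and views): build the image for the current frame and hand it to a UIImageView on the main thread.
// Hypothetical usage: convert the current frame's buffer and display it.
UIImage *frame = [self imageFromArray:rgbValues width:frameWidth height:frameHeight];
dispatch_async(dispatch_get_main_queue(), ^{
    self.overlayView.image = frame;   // UIKit updates must happen on the main thread
});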

Remove UIBezierPath lines drawn on UIImage

I am trying to erase lines drawn on a UIImage. I have successfully erased lines drawn on an empty canvas.
What would be the trick for erasing lines drawn on a UIImage? Below are some things I have tried, without getting the correct eraser effect:
Taking the touch point, getting the RGB of the image at that point, and stroking with that colour.
colorWithPatternImage, which is too slow.
Kindly suggest a better solution.
What I usually do is draw the image to an offscreen buffer (say a CGBitmapContext, for example), draw the Bezier curves over it, and copy the result to the screen.
To remove one of the Beziers, I draw the image to the offscreen buffer, draw all the Bezier curves, except the one (or ones) I don't want, and then copy the result to the screen.
This also has the advantage that it avoids flicker that can be caused by erasing an element that's already onscreen. And it works properly if the curves overlap, whereas drawing with the image as a pattern would likely erase any overlap points.
EDIT: Here's some pseudo-code (never compiled - just from memory) to demonstrate what I mean:
-(UIImage*)drawImageToOffscreenBuffer:(UIImage*)inputImage
{
    // Create an offscreen bitmap context the same size as the image
    CGContextRef offscreen = CGBitmapContextCreate(...inputImage.size.width, inputImage.size.height...);
    CGImageRef cgImage = [inputImage CGImage];
    CGRect bounds = CGRectMake(0, 0, inputImage.size.width, inputImage.size.height);
    CGContextDrawImage(offscreen, bounds, cgImage);

    // Now iterate through the Beziers you want to draw (skip the erased ones)
    for (i = 0; i < numBeziers; i++)
    {
        if (drawBezier(i))
        {
            CGContextMoveToPoint(offscreen, ...);
            CGContextAddCurveToPoint(offscreen, ...); // fill in your bezier info here
        }
    }

    // Put the result into a CGImage
    size_t rowBytes = CGBitmapContextGetBytesPerRow(offscreen);
    CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, CGBitmapContextGetData(offscreen),
                                                                  rowBytes * inputImage.size.height, NULL);
    CGColorSpaceRef colorSpace = CGBitmapContextGetColorSpace(offscreen); // "Get" function: not owned, don't release
    CGImageRef cgResult = CGImageCreate(inputImage.size.width, inputImage.size.height, ...,
                                        dataProvider, NULL, false, kCGRenderingIntentDefault);
    CGDataProviderRelease(dataProvider);

    // Make a UIImage out of that CGImage
    UIImage* result = [UIImage imageWithCGImage:cgResult];

    // cgResult was created with CGImageCreate, so we own it and must release it
    CGImageRelease(cgResult);
    return result;
}
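If UIKit-level drawing is acceptable, the same offscreen redraw can be written with less boilerplate. This is an alternative sketch rather than the answer's code, and it assumes the strokes are kept around as an array of UIBezierPath objects:
// Alternative sketch: redraw the image plus every path except the erased ones,
// using UIKit's image context instead of a raw CGBitmapContext.
- (UIImage *)imageByRedrawing:(UIImage *)baseImage keepingPaths:(NSArray *)pathsToKeep
{
    UIGraphicsBeginImageContextWithOptions(baseImage.size, NO, baseImage.scale);
    [baseImage drawInRect:CGRectMake(0, 0, baseImage.size.width, baseImage.size.height)];

    [[UIColor blackColor] setStroke];      // assumed stroke colour
    for (UIBezierPath *path in pathsToKeep) {
        [path stroke];                     // each path carries its own line width
    }

    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}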

Color of all the pixels in the screen

I want to know the color of every pixel on the screen and return an array of those colors. This is how I am doing it so far:
- (NSMutableArray *)colorOfPointinArray {
    NSMutableArray *array_of_colors = [[NSMutableArray alloc] init];
    unsigned char pixel[4] = {0};
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 4, colorSpace,
                                                 kCGBitmapAlphaInfoMask & kCGImageAlphaPremultipliedLast);
    for (int x_axis = 0; x_axis < screenWidth; x_axis++)
    {
        for (int y_axis = 0; y_axis < screenHeight; y_axis++)
        {
            CGContextTranslateCTM(context, -x_axis, -y_axis);
            [self.layer renderInContext:context];
            UIColor *color = [UIColor colorWithRed:pixel[0]/255.0 green:pixel[1]/255.0
                                              blue:pixel[2]/255.0 alpha:pixel[3]/255.0];
            [array_of_colors addObject:color];
        }
    }
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    return array_of_colors;
}
Now, this takes so much time that it freezes the app. I think it's because of the two nested for-loops. How can I improve this?
You're creating a 1x1-pixel context and then rendering the layer into that one pixel width × height times. No wonder it's taking forever! Instead, create a context that's the same size as the layer and render into it just once. Then loop through the resulting pixels and keep the color values. This may still not be instantaneous; depending on the size of the layer, turning every pixel into a UIColor could still take a while (and some nontrivial memory), but that's about as quick as you can get in the general case if you really want output in that form.
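A minimal sketch of that approach (assuming this lives in the same UIView subclass; method and variable names are illustrative):
// Render the layer once into a full-size RGBA buffer, then read back every pixel.
- (NSMutableArray *)colorsOfAllPixels
{
    size_t width  = (size_t)self.bounds.size.width;
    size_t height = (size_t)self.bounds.size.height;
    size_t bytesPerRow = width * 4;
    unsigned char *pixels = calloc(height * bytesPerRow, 1);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    [self.layer renderInContext:context];   // one render instead of width * height renders

    NSMutableArray *colors = [NSMutableArray arrayWithCapacity:width * height];
    for (size_t i = 0; i < height * bytesPerRow; i += 4) {
        [colors addObject:[UIColor colorWithRed:pixels[i]     / 255.0
                                          green:pixels[i + 1] / 255.0
                                           blue:pixels[i + 2] / 255.0
                                          alpha:pixels[i + 3] / 255.0]];
    }

    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(pixels);
    return colors;
}
Note that the colors come back in row-major order (left to right, top to bottom) and, as in the original code, the RGB components are premultiplied by alpha.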
This is similar to the problem of sampling a pixel color value from an image. There are tons of posts about that. This one has some nice examples: How to get the RGB values for a pixel on an image on the iphone

Images being rotated when converted from Matrix

I am working on an app that makes use of the OpenCV library.
The problem happens only with certain images (usually those taken with the phone's camera), and I have pinpointed it as a conversion problem: when I convert a (problematic) image to a cv::Mat object and then back, it is rotated 90 degrees.
Here is the call that causes the problem:
cv::Mat tmpMat = [sentImage CVMat];
UIImage * tmpImage = [[UIImage alloc] initWithCVMat:tmpMat];
[imageHolder setImage: tmpImage];
And here are the functions that do the conversion from image to matrix and vice-versa.
-(cv::Mat)CVMat
{
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(self.CGImage);
    CGFloat cols = self.size.width;
    CGFloat rows = self.size.height;
    cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels

    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,      // Pointer to backing data
                                                    cols,            // Width of bitmap
                                                    rows,            // Height of bitmap
                                                    8,               // Bits per component
                                                    cvMat.step[0],   // Bytes per row
                                                    colorSpace,      // Colorspace
                                                    kCGImageAlphaNoneSkipLast |
                                                    kCGBitmapByteOrderDefault); // Bitmap info flags
    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), self.CGImage);
    CGContextRelease(contextRef);
    return cvMat;
}

- (id)initWithCVMat:(const cv::Mat&)cvMat
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];
    CGColorSpaceRef colorSpace;
    if (cvMat.elemSize() == 1)
    {
        colorSpace = CGColorSpaceCreateDeviceGray();
    }
    else
    {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }

    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    CGImageRef imageRef = CGImageCreate(cvMat.cols,                                    // Width
                                        cvMat.rows,                                    // Height
                                        8,                                             // Bits per component
                                        8 * cvMat.elemSize(),                          // Bits per pixel
                                        cvMat.step[0],                                 // Bytes per row
                                        colorSpace,                                    // Colorspace
                                        kCGImageAlphaNone | kCGBitmapByteOrderDefault, // Bitmap info flags
                                        provider,                                      // CGDataProviderRef
                                        NULL,                                          // Decode
                                        false,                                         // Should interpolate
                                        kCGRenderingIntentDefault);                    // Intent

    self = [self initWithCGImage:imageRef];

    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return self;
}
I am using "Aspect Fill" for my imageHolder (a UIImageView) and tried changing it without success. I also looked into whether the matrix was being transposed during the conversion, again without success; that would not be logical anyway, since it does not rotate every picture.
I do not understand why it works with some pictures but not with others (none of the photos taken with the phone's camera work).
If anyone can shed some light on the matter, I would appreciate it.
Images from the camera that are taken with different orientations (Portrait / Landscape) are saved in the same resolution (same number of rows and columns) by the iPhone camera. The difference is that the JPEG contains a flag (to be precise, the Exif.Image.Orientation flag) to tell the displaying software how the image needs to be rotated to be displayed correctly.
My guess is that OpenCV loses that information (which is stored in the UIImage's imageOrientation property) during the conversion, so when the image is converted back to a UIImage this piece of information reverts to the default (UIImageOrientationUp), which explains why certain images appear rotated.
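One common workaround (a sketch of the general idea, not the linked answer's exact code) is to "bake in" the orientation before converting, by re-rendering the UIImage so its pixel data matches what is displayed:
// Re-render the image so that imageOrientation becomes UIImageOrientationUp
// and the pixel data handed to OpenCV is already rotated correctly.
- (UIImage *)normalizedImage:(UIImage *)image
{
    if (image.imageOrientation == UIImageOrientationUp) {
        return image;   // nothing to do
    }
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    UIImage *normalized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return normalized;
}
Call this on the image before [sentImage CVMat] and the round trip should no longer appear rotated.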
I was having the same issue. This solves the problem of the image rotating when converting from UIImage to cv::Mat. Add the method at the bottom and call it after you dismiss the picker controller. It is the 'second answer' located here: Rotating a CGImage.
Also, OpenCV's highgui/ios.h header already provides two functions for converting between UIImage and cv::Mat that you can just include. Then add the rotation method and you are good to go.

iOS Performance Tuning: fastest way to get pixel color for large images

There are a number of questions/answers regarding how to get the pixel color of an image for a given point. However, all of these answers are really slow (100-500ms) for large images (even as small as 1000 x 1300, for example).
Most of the code samples out there draw to an image context. All of them take time when the actual draw takes place:
CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, (CGFloat)width, (CGFloat)height), cgImage);
Examining this in Instruments reveals that the draw spends its time copying the data from the source image.
I have even tried a different means of getting at the data, hoping that getting to the bytes themselves would prove much more efficient:
NSInteger pointX = trunc(point.x);
NSInteger pointY = trunc(point.y);
CGImageRef cgImage = CGImageCreateWithImageInRect(self.CGImage,
                                                  CGRectMake(pointX * self.scale,
                                                             pointY * self.scale,
                                                             1.0f,
                                                             1.0f));
CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
CFDataRef data = CGDataProviderCopyData(provider);
CGImageRelease(cgImage);
UInt8 *buffer = (UInt8 *)CFDataGetBytePtr(data);

CGFloat red   = (float)buffer[0] / 255.0f;
CGFloat green = (float)buffer[1] / 255.0f;
CGFloat blue  = (float)buffer[2] / 255.0f;
CGFloat alpha = (float)buffer[3] / 255.0f;
CFRelease(data);

UIColor *pixelColor = [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
return pixelColor;
This method spends its time on the data copy:
CFDataRef data = CGDataProviderCopyData(provider);
It would appear that it, too, is reading the data from disk rather than from the CGImage instance I am creating.
Now, this method does perform better in some informal testing, but it is still not as fast as I want it to be. Does anyone know of an even faster way of getting at the underlying pixel data?
If it's possible for you to draw this image to the screen via OpenGL ES, you can get extremely fast random access to the underlying pixels in iOS 5.0 via the texture caches introduced in that version. They allow for direct memory access to the underlying BGRA pixel data stored in an OpenGL ES texture (where your image would be residing), and you could pick out any pixel from that texture almost instantaneously.
I use this to read back the raw pixel data of even large (2048x2048) images, and the read times are at worst in the range of 10-20 ms to pull down all of those pixels. Again, random access to a single pixel there takes almost no time, because you're just reading from a location in a byte array.
Of course, this means that you'll have to parse and upload your particular image to OpenGL ES, which will involve the same reading from disk and interactions with Core Graphics (if going through a UIImage) that you'd see if you tried to read pixel data from a random PNG on disk, but it sounds like you just need to render once and sample from it multiple times. If so, OpenGL ES and the texture caches on iOS 5.0 would be the absolute fastest way to read back this pixel data for something also displayed onscreen.
I encapsulate these processes in the GPUImagePicture (image upload) and GPUImageRawData (fast raw data access) classes within my open source GPUImage framework, if you want to see how something like that might work.
I have yet to find a way to get access to the drawn (in frame buffer) pixels. The fastest method I've measured is:
1. Indicate that you want the image to be cached by specifying kCGImageSourceShouldCache when creating it.
2. (Optional) Precache the image by forcing it to render.
3. Draw the image into a 1x1 bitmap context.
The cost of this method is the cached bitmap, which may have a lifetime as long as the CGImage it is associated with. The code ends up looking something like this:
Create image w/ ShouldCache flag
NSDictionary *options = @{ (id)kCGImageSourceShouldCache: @(YES) };
CGImageSourceRef imageSource = CGImageSourceCreateWithData((__bridge CFDataRef)imageData, NULL);
CGImageRef cgimage = CGImageSourceCreateImageAtIndex(imageSource, 0, (__bridge CFDictionaryRef)options);
UIImage *image = [UIImage imageWithCGImage:cgimage];
CGImageRelease(cgimage);
Precache image
UIGraphicsBeginImageContext(CGSizeMake(1, 1));
[image drawAtPoint:CGPointZero];
UIGraphicsEndImageContext();
Draw image to a 1x1 bitmap context
unsigned char pixelData[] = { 0, 0, 0, 0 };
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pixelData, 1, 1, 8, 4, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGImageRef cgimage = image.CGImage;
int imageWidth = CGImageGetWidth(cgimage);
int imageHeight = CGImageGetHeight(cgimage);
CGContextDrawImage(context, CGRectMake(-testPoint.x, testPoint.y - imageHeight, imageWidth, imageHeight), cgimage);
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
pixelData has the R, G, B, and A values of the pixel at testPoint.
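For example, the sampled bytes can then be wrapped in a UIColor; note that because the context uses kCGImageAlphaPremultipliedLast, the RGB components come back premultiplied by alpha:
// Wrap the sampled pixel in a UIColor (components are premultiplied by alpha).
UIColor *pixelColor = [UIColor colorWithRed:pixelData[0] / 255.0
                                      green:pixelData[1] / 255.0
                                       blue:pixelData[2] / 255.0
                                      alpha:pixelData[3] / 255.0];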
A CGImage may be nearly empty and contain no actual pixel data until you try to read the first pixel or draw it, so trying to speed up getting pixels from an image might not get you anywhere; there's nothing to get yet.
Are you trying to read pixels from a PNG file? You could try going directly after the file, mmap'ing it, and decoding the PNG format yourself. It will still take a while to pull the data from storage.
- (BOOL)isWallPixel:(UIImage *)image x:(int)x y:(int)y {
    CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
    const UInt8 *data = CFDataGetBytePtr(pixelData);

    int pixelInfo = ((image.size.width * y) + x) * 4; // 4 bytes per pixel (RGBA PNG)

    //UInt8 red   = data[pixelInfo];     // If you need this info, enable it
    //UInt8 green = data[pixelInfo + 1]; // If you need this info, enable it
    //UInt8 blue  = data[pixelInfo + 2]; // If you need this info, enable it
    UInt8 alpha = data[pixelInfo + 3];   // I need only this info for my maze game
    CFRelease(pixelData);

    //UIColor *color = [UIColor colorWithRed:red/255.0f green:green/255.0f blue:blue/255.0f alpha:alpha/255.0f]; // The pixel color info

    if (alpha) return YES;
    else return NO;
}
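One caveat: CGDataProviderCopyData copies the entire image's pixel data on every call, so if you need many lookups it is cheaper to copy the bytes once and index into them repeatedly. A sketch (the mazeImage property name is illustrative):
// Copy the pixel data once, then perform as many lookups as needed against it.
CGImageRef cgImage = self.mazeImage.CGImage;
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
const UInt8 *bytes = CFDataGetBytePtr(pixelData);
size_t width = CGImageGetWidth(cgImage);

BOOL blocked = bytes[((width * 34) + 12) * 4 + 3] != 0;   // alpha of pixel (x = 12, y = 34)
// ... more lookups against 'bytes' here ...
CFRelease(pixelData);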
