UIImage face detection - iOS

I am trying to write a routine that takes a UIImage and returns a new UIImage that contains just the face. This would seem to be very straightforward, but my brain is having problems getting around the CoreImage vs. UIImage spaces.
Here's the basics:
- (UIImage *)imageFromImage:(UIImage *)image inRect:(CGRect)rect {
    CGImageRef sourceImageRef = [image CGImage];
    CGImageRef newImageRef = CGImageCreateWithImageInRect(sourceImageRef, rect);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
    CGImageRelease(newImageRef);
    return newImage;
}
-(UIImage *)getFaceImage:(UIImage *)picture {
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh forKey:CIDetectorAccuracy]];
    CIImage *ciImage = [CIImage imageWithCGImage:[picture CGImage]];
    NSArray *features = [detector featuresInImage:ciImage];

    // For simplicity, I'm grabbing the first one in this code sample,
    // and we can all pretend that the photo has one face for sure. :-)
    CIFaceFeature *faceFeature = [features objectAtIndex:0];
    return [self imageFromImage:picture inRect:faceFeature.bounds];
}
The image that is returned is cropped from the flipped image. I've tried adjusting faceFeature.bounds with something like this:
CGAffineTransform t = CGAffineTransformMakeScale(1.0f,-1.0f);
CGRect newRect = CGRectApplyAffineTransform(faceFeature.bounds,t);
... but that gives me results outside the image.
I'm sure there's something simple to fix this, but short of calculating the bottom-down and then creating a new rect using that as the X, is there a "proper" way to do this?
Thanks!

It's much easier and less messy to just use CIContext to crop your face from the image. Something like this:
CGImageRef cgImage = [_ciContext createCGImage:[CIImage imageWithCGImage:inputImage.CGImage] fromRect:faceFeature.bounds];
UIImage *croppedFace = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage); // createCGImage: returns a retained CGImage
Here inputImage is your UIImage object, and faceFeature is the CIFaceFeature you get back from CIDetector's featuresInImage: method.

Since there doesn't seem to be a simple way to do this, I just wrote some code to do it:
CGRect newBounds = CGRectMake(faceFeature.bounds.origin.x,
                              _picture.size.height - faceFeature.bounds.origin.y - faceFeature.bounds.size.height,
                              faceFeature.bounds.size.width,
                              faceFeature.bounds.size.height);
This worked like a charm.

There is no simple way to achieve this. The problem is that images from the iPhone camera are always in portrait orientation, and metadata settings are used to display them correctly. The face detection call will also give you better accuracy if you tell it the rotation of the image beforehand. Just to make things complicated, you have to pass it the image orientation in EXIF format.
Fortunately, there is an Apple sample project that covers all of this, called SquareCam; I suggest you check it for the details.

Related

iOS crash "could not execute support code to read Objective-C" on iOS 12.4.2 but not on 12.0.1

This method returns a QR code image for a string. It works correctly on iOS 12.0.1 (iPhone SE) but crashes on 12.4.2 (iPhone 6). The crash happens when I try to assign the resulting UIImage to a UIImageView; the resulting UIImage is not nil.
-(UIImage*)get_QR_image:(NSString*)qrString :(UIColor*)ForeGroundCol :(UIColor*)BackGroundCol {
    NSData *stringData = [qrString dataUsingEncoding:NSUTF8StringEncoding];

    CIFilter *qrFilter = [CIFilter filterWithName:@"CIQRCodeGenerator"];
    [qrFilter setValue:stringData forKey:@"inputMessage"];
    [qrFilter setValue:@"H" forKey:@"inputCorrectionLevel"];
    CIImage *qrImage = qrFilter.outputImage;

    float scaleX = 320;
    float scaleY = 320;

    CIColor *iForegroundColor = [CIColor colorWithCGColor:[ForeGroundCol CGColor]];
    CIColor *iBackgroundColor = [CIColor colorWithCGColor:[BackGroundCol CGColor]];
    CIFilter *filterColor = [CIFilter filterWithName:@"CIFalseColor"
                                       keysAndValues:@"inputImage", qrImage,
                                                     @"inputColor0", iForegroundColor,
                                                     @"inputColor1", iBackgroundColor, nil];
    CIImage *filtered_image = [filterColor valueForKey:@"outputImage"];

    filtered_image = [filtered_image imageByApplyingTransform:CGAffineTransformMakeScale(scaleX, scaleY)];
    UIImage *result_image = [UIImage imageWithCIImage:filtered_image
                                                scale:[UIScreen mainScreen].scale
                                          orientation:UIImageOrientationUp];
    return result_image;
}
the line involved in crash is:
filtered_image = [filtered_image imageByApplyingTransform:CGAffineTransformMakeScale(scaleX, scaleY)];
it generates this log:
warning: could not execute support code to read Objective-C class data in the process. This may reduce the quality of type information available.
Is there something in my method that only works on 12.0.1? Or is something wrong? How can I investigate further to solve this crash?
EDIT
In red I have:
MyQrCodeImageViewBig.image = qrimage;
with the message:
Thread 1: EXC_BREAKPOINT (code=1, subcode=0x1a83e146c)
I see a lot of problems resulting from the [UIImage imageWithCIImage:] initializer. The main problem is that a CIImage does not actually contain any bitmap data; it needs to be rendered by a CIContext first. So the target you assign the UIImage to needs to know that it is backed by a CIImage that still needs rendering. Usually UIImageView handles this well, but I wouldn't trust it too much.
What you can do instead is to render the image yourself into a bitmap (CGImage) and initialize the UIImage with that instead. You need a CIContext for that, which I recommend you create somewhere outside this method once and re-use it every time you need to render an image (it's an expensive object):
self.context = [CIContext context];
Then in your method, you render the image like this:
CGImageRef cgImage = [self.context createCGImage:filtered_image fromRect:[filtered_image extent]];
UIImage *result_image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage); // createCGImage: returns a retained CGImage

inputImage = nil : Filtered Image not displaying?

I have been developing an image filtering app with the help of online tutorials on bitFountain. The user should be able to select an image they have added to an album and then either add a filter to the photo or delete the image.
The delete functionality runs fine, but adding a filter is not working.
I have logged three of the filter instances to the console as they are returned by the method, but each comes back with inputImage = nil.
2015-10-19 10:41:53.634 FilterApp[78451:28732768] <CISepiaTone: inputImage=nil inputIntensity=1>
2015-10-19 10:41:53.634 FilterApp[78451:28732768] <CIGaussianBlur: inputImage=nil inputRadius=1>
2015-10-19 10:41:53.635 FilterApp[78451:28732768] <CIColorClamp: inputImage=nil inputMinComponents=[0.2 0.2 0.2 0.2] inputMaxComponents=[0.9 0.9 0.9 0.9]>
What exactly does inputImage=nil mean?
I'm not sure where the code could be going wrong, or if this problem is even external to the code.
(Using Xcode 7 and the iPhone 4s simulator.)
Edit: This was the code used to convert to UIImage.
- (UIImage *)filteredImageFromImage:(UIImage *)image andFilter:(CIFilter *)filter
{
    CIImage *unfilteredImage = [[CIImage alloc] initWithCGImage:image.CGImage];
    [filter setValue:unfilteredImage forKey:kCIInputImageKey];
    CIImage *filteredImage = [filter outputImage];

    CGRect extent = [filteredImage extent];
    CGImageRef cgImage = [self.context createCGImage:filteredImage fromRect:extent];
    UIImage *finalImage = [UIImage imageWithCGImage:cgImage];
    return finalImage;
}
This problem occurs when you work with a CIImage directly: when you try to display it, it contains no rendered bitmap data. So convert your CIImage to a UIImage via a CGImage, e.g.:
CGImageRef image = [generator copyCGImageAtTime:time actualTime:&actualTime error:&err];
UIImage *imageDisplay = [UIImage imageWithCGImage:image];
It may be helpful to you.

How to render an image with effect faster with UIKit

I'm making an iOS app in which there's a process that switches a lot of pictures across several UIImageViews (a loop that sets the image property of a UIImageView from a bunch of images). Sometimes some of the images need a graphic effect, say multiplication.
The easiest way is to use a CIFilter, but the problem is that CALayer on iOS doesn't support the "filters" property, so you need to apply the effect to the images before you set the "image" property. This is far too slow when you refresh the screen very frequently.
Next I tried to use Core Graphics directly, doing the multiplication with a UIGraphics context and kCGBlendModeMultiply. This is much faster than using a CIFilter, but since you still have to apply the multiplication before rendering the image, rendering images with the multiplication effect still feels noticeably slower than rendering normal images.
My guess is that the fundamental problem with both approaches is the round trip: the effect is processed on the GPU, the result image is read back by the CPU, and the result is finally rendered by the GPU again, so the data transfer between CPU and GPU wastes a lot of time. I then tried changing the superclass from UIImageView to UIView, moving the Core Graphics context code into drawRect:, and calling setNeedsDisplay from the image property's didSet. But this doesn't work well either: every call to setNeedsDisplay makes the program much slower, even slower than using a CIFilter, probably because several views are displaying at once.
I guess that probably I can fix this problem with OpenGL but I'm wondering if I can solve this problem with UIKit only?
As far as I understand it, you have to make the same changes to many different images. So the time spent on initial setup is not critical, but each image should be processed as quickly as possible. First of all, it is critical to generate the new images on a background queue/thread.
There are two good ways to quickly process/generate images:
Use CIFilter from CoreImage
Use GPUImage library
If you use Core Image, check that you use CIFilter and CIContext properly. Creating a CIContext takes quite a lot of time, but it can be SHARED between different CIFilters and images, so you should create the CIContext only once! A CIFilter can also be SHARED between different images, but since it is not thread-safe you should have a separate CIFilter for each thread.
In my code I have the following:
+ (UIImage*)roundShadowImageForImage:(UIImage*)image {
    static CIContext *_context;
    static CIFilter *_filter;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        NSLog(@"CIContext and CIFilter generating...");
        _context = [CIContext contextWithOptions:@{ kCIContextUseSoftwareRenderer : @NO,
                                                    kCIContextWorkingColorSpace  : [NSNull null] }];
        CIImage *roundShadowImage = [CIImage imageWithCGImage:[[self class] roundShadowImage].CGImage];
        CIImage *maskImage = [CIImage imageWithCGImage:[[self class] roundWhiteImage].CGImage];
        _filter = [CIFilter filterWithName:@"CIBlendWithAlphaMask"
                             keysAndValues:kCIInputBackgroundImageKey, roundShadowImage,
                                           kCIInputMaskImageKey, maskImage, nil];
        NSLog(@"CIContext and CIFilter are generated");
    });

    if (image == nil) {
        return nil;
    }

    NSAssert(_filter, @"Error: CIFilter for cover images is not generated");

    // coverSize, coverSide and extraBorder are app-specific values defined elsewhere.
    CGSize imageSize = CGSizeMake(image.size.width * image.scale, image.size.height * image.scale);

    // CIContext and CIImage objects are immutable, which means each can be shared safely among threads
    CIFilter *filterForThread = [_filter copy]; // CIFilter can NOT be shared between different threads.

    CGAffineTransform imageTransform = CGAffineTransformIdentity;
    if (!CGSizeEqualToSize(imageSize, coverSize)) {
        NSLog(@"Cover image. Resizing image %@ to required size %@", NSStringFromCGSize(imageSize), NSStringFromCGSize(coverSize));
        CGFloat scaleFactor = MAX(coverSide / imageSize.width, coverSide / imageSize.height);
        imageTransform = CGAffineTransformMakeScale(scaleFactor, scaleFactor);
    }
    imageTransform = CGAffineTransformTranslate(imageTransform, extraBorder, extraBorder);

    CIImage *ciImage = [CIImage imageWithCGImage:image.CGImage];
    ciImage = [ciImage imageByApplyingTransform:imageTransform];

    if (image.hasAlpha) {
        CIImage *ciWhiteImage = [CIImage imageWithCGImage:[self whiteImage].CGImage];
        CIFilter *filter = [CIFilter filterWithName:@"CISourceOverCompositing"
                                      keysAndValues:kCIInputBackgroundImageKey, ciWhiteImage,
                                                    kCIInputImageKey, ciImage, nil];
        [filterForThread setValue:filter.outputImage forKey:kCIInputImageKey];
    } else {
        [filterForThread setValue:ciImage forKey:kCIInputImageKey];
    }

    CIImage *outputCIImage = [filterForThread outputImage];
    CGImageRef cgimg = [_context createCGImage:outputCIImage fromRect:[outputCIImage extent]];
    UIImage *newImage = [UIImage imageWithCGImage:cgimg];
    CGImageRelease(cgimg);
    return newImage;
}
If you are still not satisfied with the speed, try GPUImage. It is a very good library, and it is very fast because it uses OpenGL for image generation.

UIImagePNGRepresentation returns nil after CIFilter

I'm running into a problem getting a PNG representation for a UIImage after rotating it with CIAffineTransform. First, I have a category on UIImage that rotates an image 90 degrees clockwise. It seems to work correctly when I display the rotated image in a UIImageView.
-(UIImage *)cwRotatedRepresentation
{
    // Not very precise, stop yelling at me.
    CGAffineTransform xfrm = CGAffineTransformMakeRotation(-(6.28 / 4.0));
    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *inputImage = [CIImage imageWithCGImage:self.CGImage];
    CIFilter *filter = [CIFilter filterWithName:@"CIAffineTransform"];
    [filter setValue:inputImage forKey:@"inputImage"];
    [filter setValue:[NSValue valueWithBytes:&xfrm objCType:@encode(CGAffineTransform)] forKey:@"inputTransform"];
    CIImage *result = [filter valueForKey:@"outputImage"];
    CGImageRef cgImage = [context createCGImage:result fromRect:[inputImage extent]];
    return [[UIImage alloc] initWithCIImage:result];
}
However, when I try to actually get a PNG for the newly rotated image, UIImagePNGRepresentation returns nil.
-(NSData *)getPNG
{
    UIImage *myImg = [UIImage imageNamed:@"canada"];
    myImg = [myImg cwRotatedRepresentation];
    NSData *d = UIImagePNGRepresentation(myImg);
    // d == nil :(
    return d;
}
Is core image overwriting the PNG headers or something? Is there a way around this behavior, or a better means of achieving the desired result of a PNG representation of a UIImage rotated 90 degrees clockwise?
Not yelling, but -M_PI_2 will give you the constant you want with maximum precision :)
The only other thing I see is that you probably want to use [result extent] instead of [inputImage extent], unless your image is known to be square.
I'm not sure how that would cause UIImagePNGRepresentation to fail, though. One other thought: you create a CGImage but then build the UIImage from the CIImage; perhaps using initWithCGImage: would give better results.

convert jpg UIImage to bitmap UIImage?

I'm trying to let a user pan/zoom through a static image with a selection rectangle on the main image, and a separate UIView for the "magnified" image.
The "magnified" UIView implements drawRect:
// rotate selectionRect if image isn't portrait internally
CGRect tmpRect = selectionRect;
if (image.imageOrientation == UIImageOrientationLeft ||
    image.imageOrientation == UIImageOrientationLeftMirrored ||
    image.imageOrientation == UIImageOrientationRight ||
    image.imageOrientation == UIImageOrientationRightMirrored) {
    tmpRect = CGRectMake(selectionRect.origin.y,
                         image.size.width - selectionRect.origin.x - selectionRect.size.width,
                         selectionRect.size.height,
                         selectionRect.size.width);
}

// crop and draw
CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], tmpRect);
[[UIImage imageWithCGImage:imageRef scale:image.scale orientation:image.imageOrientation] drawInRect:rect];
CGImageRelease(imageRef);
The performance of this is atrocious: it spends 92% of its time in [UIImage drawInRect].
Digging deeper, that's 84.5% in ripc_AcquireImage and 7.5% in ripc_RenderImage.
ripc_AcquireImage is 51% decoding the JPG and 30% upsampling.
So... I guess my question is: what's the best way to avoid this? One option is to not take in a JPG to start with, and that is a real solution for some cases [à la captureStillImageAsynchronouslyFromConnection without the JPG intermediary]. But if I'm getting a UIImage off the camera roll, say... is there a clean way to convert the JPG-backed UIImage? Is converting it even the right approach? (Is there a "cacheAsBitmap" flag somewhere that would essentially do that?)
Apparently this isn't just a JPG issue; it's anything that isn't backed by the "ideal native representation", I think.
I'm having good success (significant performance improvements) with just the following:
UIGraphicsBeginImageContext(image.size);
[image drawAtPoint:CGPointZero];
image = [UIGraphicsGetImageFromCurrentImageContext() retain];
UIGraphicsEndImageContext();
This works both for images taken from the camera roll and for those straight from the camera.
