iOS - Issues cropping an image to a detected face

I am trying to crop a UIImage to a face that has been detected using the built-in CoreImage face detection functionality. I seem to be able to detect the face properly, but when I attempt to crop my UIImage to the bounds of the face, it is nowhere near correct. My face detection code looks like this:
-(NSArray *)facesForImage:(UIImage *)image {
    CIImage *ciImage = [CIImage imageWithCGImage:image.CGImage];
    CIContext *context = [CIContext contextWithOptions:nil];
    NSDictionary *opts = @{CIDetectorAccuracy : CIDetectorAccuracyHigh};
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:context options:opts];
    NSArray *features = [detector featuresInImage:ciImage];
    return features;
}
...and the code to crop the image looks like this:
-(UIImage *)imageCroppedToFaceAtIndex:(NSInteger)index forImage:(UIImage *)image {
    NSArray *faces = [self facesForImage:image];
    if ((index < 0) || (index >= faces.count)) {
        DDLogError(@"Invalid face index provided");
        return nil;
    }
    CIFaceFeature *face = [faces objectAtIndex:index];
    CGRect faceBounds = face.bounds;
    CGImageRef imageRef = CGImageCreateWithImageInRect(image.CGImage, faceBounds);
    UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
    return croppedImage;
}
I have an image with only one face in it that I'm using for testing, and the detector finds it with no problem. But the crop is way off. Any idea what could be the problem with this code?

For anyone else having a similar issue -- converting Core Image face coordinates to UIImage (UIKit) coordinates -- I found this great article explaining how to use CGAffineTransform to accomplish exactly what I was looking for.
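For reference, here is a minimal sketch of that fix applied to the cropping method from the question, assuming the source UIImage has the default up orientation and a scale of 1 (the flip is needed because Core Image uses a bottom-left origin, while CGImageCreateWithImageInRect expects a top-left origin):

-(UIImage *)imageCroppedToFaceAtIndex:(NSInteger)index forImage:(UIImage *)image {
    NSArray *faces = [self facesForImage:image];
    if ((index < 0) || (index >= faces.count)) {
        return nil;
    }
    CIFaceFeature *face = [faces objectAtIndex:index];
    // Core Image reports bounds with the origin at the bottom-left of the image;
    // flip vertically so the rect is expressed in top-left (UIKit/CGImage crop) space.
    CGAffineTransform transform = CGAffineTransformMakeScale(1, -1);
    transform = CGAffineTransformTranslate(transform, 0, -image.size.height);
    CGRect faceBounds = CGRectApplyAffineTransform(face.bounds, transform);
    CGImageRef imageRef = CGImageCreateWithImageInRect(image.CGImage, faceBounds);
    UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return croppedImage;
}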

The code to convert the face geometry from Core Image to UIImage coordinates is fussy. I haven't messed with it in quite a while, but I remember it giving me fits, especially when dealing with images that are rotated.
I suggest looking at the demo app "SquareCam", which you can find with a search of the Xcode docs. It draws red squares around faces, which is a good start.
Note that the rectangle you get from Core Image is always a square, and sometimes crops a little too closely. You may have to make your cropping rectangles taller and wider.
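If you need a looser crop, one simple option is to pad the converted face rect outward before cropping. The sketch below is illustrative only: the 20% inset is an arbitrary value, the result is clamped to the image bounds, and it reuses the faceBounds and image names from the question's code:

// Pad the detected (already flipped) face rect by 20% on each side, then clamp to the image.
CGRect paddedBounds = CGRectInset(faceBounds,
                                  -faceBounds.size.width  * 0.2,
                                  -faceBounds.size.height * 0.2);
paddedBounds = CGRectIntersection(paddedBounds,
                                  CGRectMake(0, 0, image.size.width, image.size.height));
CGImageRef paddedRef = CGImageCreateWithImageInRect(image.CGImage, paddedBounds);
UIImage *paddedCrop = [UIImage imageWithCGImage:paddedRef];
CGImageRelease(paddedRef);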

This class does the trick! A quite flexible and handy override of UIImage. https://github.com/kylestew/KSMagicalCrop

Try this code; it worked for me.
CIImage *image = [CIImage imageWithCGImage:facePicture.image.CGImage];
// Container for the face attributes
UIView *faceContainer = [[UIView alloc] initWithFrame:facePicture.frame];
// Flip faceContainer on the y-axis to match the coordinate system used by Core Image
[faceContainer setTransform:CGAffineTransformMakeScale(1, -1)];
// Create a face detector - since speed is not an issue we'll use a high-accuracy detector
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil
                                          options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
                                                                              forKey:CIDetectorAccuracy]];
// Create an array containing all the detected faces from the detector
NSArray *features = [detector featuresInImage:image];
for (CIFaceFeature *faceFeature in features)
{
    // Get the width of the face
    CGFloat faceWidth = faceFeature.bounds.size.width;
    // Create a UIView using the bounds of the face
    UIView *faceView = [[UIView alloc] initWithFrame:faceFeature.bounds];
    if (faceFeature.hasLeftEyePosition)
    {
        // Create a UIView with a size based on the width of the face
        leftEyeView = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.leftEyePosition.x - faceWidth*0.15,
                                                               faceFeature.leftEyePosition.y - faceWidth*0.15,
                                                               faceWidth*0.3,
                                                               faceWidth*0.3)];
    }
    if (faceFeature.hasRightEyePosition)
    {
        // Create a UIView with a size based on the width of the face
        RightEyeView = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.rightEyePosition.x - faceWidth*0.15,
                                                                faceFeature.rightEyePosition.y - faceWidth*0.15,
                                                                faceWidth*0.3,
                                                                faceWidth*0.3)];
    }
    if (faceFeature.hasMouthPosition)
    {
        // Create a UIView with a size based on the width of the face
        mouth = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.mouthPosition.x - faceWidth*0.2,
                                                         faceFeature.mouthPosition.y - faceWidth*0.2,
                                                         faceWidth*0.4,
                                                         faceWidth*0.4)];
    }
    [view addSubview:faceContainer];
    // Convert the face rect from Core Image (bottom-left origin) to UIKit (top-left origin)
    CGFloat y = view.frame.size.height - (faceView.frame.origin.y + faceView.frame.size.height);
    CGRect rect = CGRectMake(faceView.frame.origin.x, y, faceView.frame.size.width, faceView.frame.size.height);
    CGImageRef imageRef = CGImageCreateWithImageInRect([<Original Image> CGImage], rect);
    croppedImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    //---- cropped image ----//
    UIImageView *img = [[UIImageView alloc] initWithFrame:CGRectMake(faceView.frame.origin.x, y,
                                                                     faceView.frame.size.width,
                                                                     faceView.frame.size.height)];
    img.image = croppedImage;
}

Related

Problem clipping background to coreimage aztec generation with CIBlendWithMask

I am trying to add an image background to a generated Aztec code. So far I can get the Aztec code to generate, but I am having trouble with the CIBlendWithMask filter, and I'm not sure exactly what I'm doing wrong. I believe the user-selected background as kCIInputBackgroundImageKey is correct, and the Aztec output image as the kCIInputImageKey is correct. I think where I'm going wrong is the kCIInputMaskImageKey, but I'm not sure what I need to do -- I thought the Aztec output would be a sufficient mask image. Do I need to select the color or something to get the background clipping to the Aztec image?
CIFilter *aztecFilter = [CIFilter filterWithName:@"CIAztecCodeGenerator"];
CIFilter *colorFilter = [CIFilter filterWithName:@"CIFalseColor"];
[aztecFilter setValue:stringData forKey:@"inputMessage"];
[colorFilter setValue:aztecFilter.outputImage forKey:@"background"];
NSData *imageData = [[NSUserDefaults standardUserDefaults] objectForKey:@"usertheme"];
CIImage *image = [UIImage imageWithData:imageData].CIImage;
[colorFilter setValue:[CIColor colorWithCGColor:[[UIColor blackColor] CGColor]] forKey:@"inputColor0"];
[colorFilter setValue:[CIColor colorWithRed:1 green:1 blue:1 alpha:0] forKey:@"inputColor1"];
CIFilter *blendFilter = [CIFilter filterWithName:@"CIBlendWithMask"];
[blendFilter setValue:colorFilter.outputImage forKey:kCIInputImageKey];
[blendFilter setValue:image forKey:kCIInputBackgroundImageKey];
[blendFilter setValue:colorFilter.outputImage forKey:kCIInputMaskImageKey];
Trying to create something like this but for aztec codes instead of QR
You may be over-thinking this a bit...
You can use the CIBlendWithMask filter with only a background image and a mask image.
So, if we start with a gradient image (you may be generating it on-the-fly):
Then generate the Aztec Code image (yellow outline is just to show the frame):
We can use that code image as the mask.
Here is sample code:
- (void)viewDidLoad {
    [super viewDidLoad];
    self.view.backgroundColor = [UIColor systemYellowColor];

    // create a vertical stack view
    UIStackView *sv = [UIStackView new];
    sv.axis = UILayoutConstraintAxisVertical;
    sv.spacing = 8;
    sv.translatesAutoresizingMaskIntoConstraints = NO;
    [self.view addSubview:sv];

    // add 3 image views to the stack view
    for (int i = 0; i < 3; ++i) {
        UIImageView *imgView = [UIImageView new];
        [sv addArrangedSubview:imgView];
        [imgView.widthAnchor constraintEqualToConstant:240].active = YES;
        [imgView.heightAnchor constraintEqualToAnchor:imgView.widthAnchor].active = YES;
    }

    [sv.centerXAnchor constraintEqualToAnchor:self.view.safeAreaLayoutGuide.centerXAnchor].active = YES;
    [sv.centerYAnchor constraintEqualToAnchor:self.view.safeAreaLayoutGuide.centerYAnchor].active = YES;

    // load a gradient image for the background
    UIImage *gradientImage = [UIImage imageNamed:@"bkgGradient"];
    // put it in the first image view
    ((UIImageView *)sv.arrangedSubviews[0]).image = gradientImage;

    // create aztec filter
    CIFilter *aztecFilter = [CIFilter filterWithName:@"CIAztecCodeGenerator"];
    // give it some string data
    NSString *qrString = @"My string to encode";
    NSData *stringData = [qrString dataUsingEncoding:NSUTF8StringEncoding];
    [aztecFilter setValue:stringData forKey:@"inputMessage"];

    // get the generated aztec image
    CIImage *aztecCodeImage = aztecFilter.outputImage;

    // scale it to match the background gradient image
    float scaleX = gradientImage.size.width / aztecCodeImage.extent.size.width;
    float scaleY = gradientImage.size.height / aztecCodeImage.extent.size.height;
    aztecCodeImage = [[aztecCodeImage imageBySamplingNearest] imageByApplyingTransform:CGAffineTransformMakeScale(scaleX, scaleY)];

    // convert to UIImage and set the middle image view
    UIImage *scaledCodeImage = [UIImage imageWithCIImage:aztecCodeImage scale:[UIScreen mainScreen].scale orientation:UIImageOrientationUp];
    ((UIImageView *)sv.arrangedSubviews[1]).image = scaledCodeImage;

    // create a blend with mask filter
    CIFilter *blendFilter = [CIFilter filterWithName:@"CIBlendWithMask"];
    // set the background image
    CIImage *bkgInput = [CIImage imageWithCGImage:[gradientImage CGImage]];
    [blendFilter setValue:bkgInput forKey:kCIInputBackgroundImageKey];
    // set the mask image
    [blendFilter setValue:aztecCodeImage forKey:kCIInputMaskImageKey];

    // get the blended CIImage
    CIImage *output = [blendFilter outputImage];
    // convert to UIImage
    UIImage *blendedImage = [UIImage imageWithCIImage:output scale:[UIScreen mainScreen].scale orientation:UIImageOrientationUp];
    // set the bottom image view to the result
    ((UIImageView *)sv.arrangedSubviews[2]).image = blendedImage;
}
which produces:

Teeth whitening iOS

I want to detect the teeth in an image and whiten them with a slider.
I found the following code for mouth detection, but how should I detect the exact teeth location so that I can whiten them? Is there any third-party library for this?
// draw a CI image with the previously loaded face detection picture
CIImage *image = [CIImage imageWithCGImage:imageView.image.CGImage];
// create a face detector - since speed is not an issue now we'll use a high accuracy detector
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil
                                          options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
                                                                              forKey:CIDetectorAccuracy]];
// create an array containing all the detected faces from the detector
NSArray *features = [detector featuresInImage:image];
// transform that converts Core Image coordinates (bottom-left origin) to UIKit coordinates (top-left origin)
CGAffineTransform transform = CGAffineTransformMakeScale(1, -1);
transform = CGAffineTransformTranslate(transform, 0, -imageView.bounds.size.height);
for (CIFaceFeature *faceFeature in features)
{
    // Get the face rect: translate Core Image coordinates to UIKit coordinates
    const CGRect faceRect = CGRectApplyAffineTransform(faceFeature.bounds, transform);
    // create a UIView using the bounds of the face
    UIView *faceView = [[UIView alloc] initWithFrame:faceRect];
    faceView.layer.borderWidth = 1;
    faceView.layer.borderColor = [[UIColor redColor] CGColor];
    // get the width of the face
    CGFloat faceWidth = faceFeature.bounds.size.width;
    // add the new view to create a box around the face
    [imageView addSubview:faceView];
    if (faceFeature.hasMouthPosition)
    {
        // Get the mouth position translated to imageView UIKit coordinates
        const CGPoint mouthPos = CGPointApplyAffineTransform(faceFeature.mouthPosition, transform);
        // Create a UIView to represent the mouth; its size depends on the width of the face.
        UIView *mouth = [[UIView alloc] initWithFrame:CGRectMake(mouthPos.x - faceWidth*MOUTH_SIZE_RATE*0.5,
                                                                 mouthPos.y - faceWidth*MOUTH_SIZE_RATE*0.5,
                                                                 faceWidth*MOUTH_SIZE_RATE,
                                                                 faceWidth*MOUTH_SIZE_RATE)];
        // make the mouth look nice and add it to the view
        mouth.backgroundColor = [[UIColor greenColor] colorWithAlphaComponent:0.3];
        //mouth.layer.cornerRadius = faceWidth*MOUTH_SIZE_RATE*0.5;
        [imageView addSubview:mouth];
        NSLog(@"Mouth %g %g", faceFeature.mouthPosition.x, faceFeature.mouthPosition.y);
    }
}
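For the whitening itself, Core Image has no built-in teeth detector, so one possible approach is to treat the mouth rect from the detector as the region of interest and brighten/desaturate only that area, driving the strength from the slider. The sketch below is illustrative only: the method name, rect handling, and adjustment factors are assumptions, and a real implementation would also need to mask out the lips. The region is expected in Core Image coordinates, i.e. before the UIKit flip above.

// Whiten a region of interest (e.g. the mouth rect, in Core Image coordinates).
// `amount` would come from the slider, roughly in the range 0...1.
- (UIImage *)whitenRegion:(CGRect)region ofImage:(UIImage *)sourceImage amount:(CGFloat)amount {
    CIImage *base = [CIImage imageWithCGImage:sourceImage.CGImage];
    // Crop out the region we want to adjust.
    CIImage *mouthRegion = [base imageByCroppingToRect:region];
    // Brighten and desaturate the region; the factors are illustrative.
    CIFilter *adjust = [CIFilter filterWithName:@"CIColorControls"];
    [adjust setValue:mouthRegion forKey:kCIInputImageKey];
    [adjust setValue:@(1.0 - 0.5 * amount) forKey:kCIInputSaturationKey];
    [adjust setValue:@(0.2 * amount) forKey:kCIInputBrightnessKey];
    // Composite the adjusted region back over the original image.
    CIFilter *composite = [CIFilter filterWithName:@"CISourceOverCompositing"];
    [composite setValue:adjust.outputImage forKey:kCIInputImageKey];
    [composite setValue:base forKey:kCIInputBackgroundImageKey];
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgResult = [context createCGImage:composite.outputImage fromRect:base.extent];
    UIImage *result = [UIImage imageWithCGImage:cgResult];
    CGImageRelease(cgResult);
    return result;
}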

GPUImage: strange image deformation (but only with some photo) [duplicate]

I'm trying to prepare an image for OCR and I'm using GPUImage to do it. The code works fine until I crop the image; after cropping I get a bad result...
Crop area:
https://www.dropbox.com/s/e3mlp25sl6m55yk/IMG_0709.PNG
Bad result:
https://www.dropbox.com/s/wtxw7li6paltx21/IMG_0710.PNG
+ (UIImage *)doBinarize:(UIImage *)sourceImage
{
    // first off, try to grayscale the image using the iOS Core Image routine
    UIImage *grayScaledImg = [self grayImage:sourceImage];
    GPUImagePicture *imageSource = [[GPUImagePicture alloc] initWithImage:grayScaledImg];
    GPUImageAdaptiveThresholdFilter *stillImageFilter = [[GPUImageAdaptiveThresholdFilter alloc] init];
    stillImageFilter.blurRadiusInPixels = 8.0;
    [stillImageFilter prepareForImageCapture];
    [imageSource addTarget:stillImageFilter];
    [imageSource processImage];
    UIImage *retImage = [stillImageFilter imageFromCurrentlyProcessedOutput];
    [imageSource removeAllTargets];
    return retImage;
}

+ (UIImage *)grayImage:(UIImage *)inputImage
{
    // Create a graphics context.
    UIGraphicsBeginImageContextWithOptions(inputImage.size, NO, 1.0);
    CGRect imageRect = CGRectMake(0, 0, inputImage.size.width, inputImage.size.height);
    // Draw the image with the luminosity blend mode.
    // On top of a white background, this will give a black and white image.
    [inputImage drawInRect:imageRect blendMode:kCGBlendModeLuminosity alpha:1.0];
    // Get the resulting image.
    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImage;
}
UPDATE:
"In the meantime, when you crop your images, do so to the nearest multiple of 8 pixels in width and you should see the correct result."
Thank you @Brad Larson! I resized the image width to the nearest multiple of 8 and got what I wanted:
- (UIImage *)imageWithMultiple8ImageWidth:(UIImage *)image
{
    float fixSize = next8(image.size.width);
    CGSize newSize = CGSizeMake(fixSize, image.size.height);
    UIGraphicsBeginImageContext(newSize);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}

float next8(float n) {
    int bits = (int)n & 7; // distance past the previous multiple of 8
    if (bits == 0)
        return (float)n;
    return (float)n + (8 - bits);
}
Before I even get to the core issue here, I should point out that the GPUImageAdaptiveThresholdFilter already does a conversion to grayscale as a first step, so your -grayImage: code in the above is unnecessary and will only slow things down. You can remove all that code and just pass your input image directly to the adaptive threshold filter.
What I believe is the problem here is a recent set of changes to the way that GPUImagePicture pulls in image data. It appears that images which aren't a multiple of 8 pixels wide end up looking like the above when imported. Some fixes were proposed about this, but if the latest code from the repository (not CocoaPods, which is often out of date relative to the GitHub repository) is still doing this, some more work may need to be done.
In the meantime, when you crop your images, do so to the nearest multiple of 8 pixels in width and you should see the correct result.
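Putting both suggestions together, a trimmed-down version of the method might look like the sketch below. It only reuses the GPUImage calls already present in the question, drops the separate grayscale pass, and assumes the input has already been padded to a multiple-of-8 width (e.g. with imageWithMultiple8ImageWidth: above):

+ (UIImage *)doBinarize:(UIImage *)sourceImage
{
    // Assumes sourceImage has already been padded to a multiple-of-8 width.
    GPUImagePicture *imageSource = [[GPUImagePicture alloc] initWithImage:sourceImage];
    // The adaptive threshold filter converts to grayscale internally,
    // so no separate grayscale pass is needed.
    GPUImageAdaptiveThresholdFilter *stillImageFilter = [[GPUImageAdaptiveThresholdFilter alloc] init];
    stillImageFilter.blurRadiusInPixels = 8.0;
    [stillImageFilter prepareForImageCapture];
    [imageSource addTarget:stillImageFilter];
    [imageSource processImage];
    UIImage *retImage = [stillImageFilter imageFromCurrentlyProcessedOutput];
    [imageSource removeAllTargets];
    return retImage;
}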

CIGaussianBlur image size

I want to blur my view, and I use this code:
// Get a UIImage from the UIView
NSLog(@"blur capture");
UIGraphicsBeginImageContext(BlurContrainerView.frame.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// Blur the UIImage
CIImage *imageToBlur = [CIImage imageWithCGImage:viewImage.CGImage];
CIFilter *gaussianBlurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[gaussianBlurFilter setValue:imageToBlur forKey:@"inputImage"];
[gaussianBlurFilter setValue:[NSNumber numberWithFloat:5] forKey:@"inputRadius"]; // change number to increase/decrease blur
CIImage *resultImage = [gaussianBlurFilter valueForKey:@"outputImage"];

// create UIImage from filtered image
blurredImage = [[UIImage alloc] initWithCIImage:resultImage];

// Place the UIImage in a UIImageView
UIImageView *newView = [[UIImageView alloc] initWithFrame:self.view.bounds];
newView.image = blurredImage;
NSLog(@"%f,%f", newView.frame.size.width, newView.frame.size.height);

// insert the blurred UIImageView below the transparent view inside the blur image container
[BlurContrainerView insertSubview:newView belowSubview:transparentView];
And it blurs the view, but not all of it. How can I blur all of the View?
The issue isn't that it's not blurring all of the image, but rather that the blur is extending the boundary of the image, making the image larger, and it's not lining up properly as a result.
To keep the image the same size, after the line:
CIImage *resultImage = [gaussianBlurFilter valueForKey:@"outputImage"];
You can grab the CGRect for a rectangle the size of the original image in the center of this resultImage:
// note, adjust rect because blur changed size of image
CGRect rect = [resultImage extent];
rect.origin.x += (rect.size.width - viewImage.size.width ) / 2;
rect.origin.y += (rect.size.height - viewImage.size.height) / 2;
rect.size = viewImage.size;
And then use CIContext to grab that portion of the image:
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgimg = [context createCGImage:resultImage fromRect:rect];
UIImage *blurredImage = [UIImage imageWithCGImage:cgimg];
CGImageRelease(cgimg);
Alternatively, for iOS 7, if you go to the iOS UIImageEffects sample code and download iOS_UIImageEffects.zip, you can grab the UIImage+ImageEffects category. That provides a few new methods:
- (UIImage *)applyLightEffect;
- (UIImage *)applyExtraLightEffect;
- (UIImage *)applyDarkEffect;
- (UIImage *)applyTintEffectWithColor:(UIColor *)tintColor;
- (UIImage *)applyBlurWithRadius:(CGFloat)blurRadius tintColor:(UIColor *)tintColor saturationDeltaFactor:(CGFloat)saturationDeltaFactor maskImage:(UIImage *)maskImage;
So, to blur an image and lighten it (giving that "frosted glass" effect), you can then do:
UIImage *newImage = [image applyLightEffect];
Interestingly, Apple's code does not employ CIFilter, but rather calls vImageBoxConvolve_ARGB8888 of the vImage high-performance image processing framework. This technique is illustrated in WWDC 2013 video Implementing Engaging UI on iOS.
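If the preset effects aren't enough, the last method in that category takes explicit parameters; for example (the radius, tint, and saturation values here are just illustrative):

UIImage *customBlur = [image applyBlurWithRadius:10.0
                                       tintColor:[[UIColor whiteColor] colorWithAlphaComponent:0.3]
                           saturationDeltaFactor:1.8
                                       maskImage:nil];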
A faster solution is to avoid CGImageRef altogether and perform all transformations lazily at the CIImage level.
So, instead of the size-mismatched:
// create UIImage from filtered image (but size is wrong)
blurredImage = [[UIImage alloc] initWithCIImage:resultImage];
A nice solution is to write:
Objective-C
// cropping rect because blur changed size of image
CIImage *croppedImage = [resultImage imageByCroppingToRect:imageToBlur.extent];
// create UIImage from filtered cropped image
blurredImage = [[UIImage alloc] initWithCIImage:croppedImage];
Swift 3
// cropping rect because blur changed size of image
let croppedImage = resultImage.cropping(to: imageToBlur.extent)
// create UIImage from filtered cropped image
let blurredImage = UIImage(ciImage: croppedImage)
Swift 4
// cropping rect because blur changed size of image
let croppedImage = resultImage.cropped(to: imageToBlur.extent)
// create UIImage from filtered cropped image
let blurredImage = UIImage(ciImage: croppedImage)
Looks like the blur filter is giving you back an image that’s bigger than the one you started with, which makes sense since pixels at the edges are getting blurred out past them. The easiest solution would probably be to make newView use a contentMode of UIViewContentModeCenter so it doesn’t try to squash the blurred image down; you could also crop blurredImage by drawing it in the center of a new context of the appropriate size, but you don’t really need to.
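In code, that suggestion amounts to something like this (assuming the newView image view from the question; clipsToBounds keeps the oversized blurred image from spilling outside the view):

// Center the oversized blurred image instead of squashing it to fit
newView.contentMode = UIViewContentModeCenter;
newView.clipsToBounds = YES;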
