Face recognition using AVFoundation with an image as the source file - iOS

I want to use AVFoundation to perform face recognition on an image taken from the camera roll.
Is this possible in the first place?
I've found many tutorials on how to do this using a live camera stream as input. I studied them and found no way to bind an AVCaptureDeviceInput to an image.
P.S.
I don't want to use CoreImage. In fact, I want to ditch CoreImage and use AVFoundation's face recognition instead.
Thanks a lot.

What's the problem with CoreImage? It uses the same detector:
NSURL *url = [NSURL URLWithString:@"..."];
CIImage *img = [CIImage imageWithContentsOfURL:url]; // or: uiImageObj.CIImage
CIContext *context = [CIContext contextWithOptions:nil];
NSDictionary *detectorOptions = [[NSDictionary alloc] initWithObjectsAndKeys:CIDetectorAccuracyLow, CIDetectorAccuracy, nil];
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:context options:detectorOptions];
NSArray *features = [detector featuresInImage:img];
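For completeness, a small sketch of consuming those results: the concrete feature type for face detection is CIFaceFeature, which exposes the face bounds plus eye and mouth positions when they were found. Keep in mind Core Image uses a bottom-left origin, so the bounds need flipping before use in UIKit coordinates.
// Sketch: inspect the detected faces (CIFaceFeature is the concrete type).
for (CIFaceFeature *face in features) {
    NSLog(@"Face bounds: %@", NSStringFromCGRect(face.bounds));
    if (face.hasLeftEyePosition) {
        NSLog(@"Left eye: %@", NSStringFromCGPoint(face.leftEyePosition));
    }
    if (face.hasMouthPosition) {
        NSLog(@"Mouth: %@", NSStringFromCGPoint(face.mouthPosition));
    }
}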

Related

Unable to draw CIImage on GLKView after a few frames since updating to iOS 10.2?

I was using the following code in my application to draw a CIImage on a GLKView, over and over as received from AVCaptureOutput's -didOutputSampleBuffer, and it performed quite well up to iOS 10.1.*.
After updating the device to iOS 10.2.1 it has stopped working: after calling it for a few frames, the app crashes with a low memory warning. On iOS 10.1.1 and below the app runs smoothly, even on older devices like the iPhone 5S.
[_glkView bindDrawable];
if (self.eaglContext != [EAGLContext currentContext]) {
    [EAGLContext setCurrentContext:self.eaglContext];
}
glClearColor(0.0, 0.0, 0.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
if (ciImage) {
    [_ciContext drawImage:ciImage inRect:gvRect fromRect:dRect];
}
[_glkView display];
This is how I am making the CIImage:
- (CIImage *)ciImageFromPixelBuffer:(CVPixelBufferRef)pixelBuffer ofSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    CIImage *croppedImage = nil;
    CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, sampleBuffer, kCMAttachmentMode_ShouldPropagate);
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer options:(NSDictionary *)attachments];
    if (attachments)
        CFRelease(attachments);
    croppedImage = ciImage;

    // Downscale the frame with a Lanczos resample.
    CIFilter *scaleFilter = [CIFilter filterWithName:@"CILanczosScaleTransform"];
    [scaleFilter setValue:croppedImage forKey:@"inputImage"];
    [scaleFilter setValue:[NSNumber numberWithFloat:self.zoom_Resize_Factor == 1 ? 0.25 : 0.5] forKey:@"inputScale"];
    [scaleFilter setValue:[NSNumber numberWithFloat:1.0] forKey:@"inputAspectRatio"];
    croppedImage = [scaleFilter valueForKey:@"outputImage"];

    // Apply Core Image's auto-adjustment filter chain (red-eye disabled).
    NSDictionary *options = @{(id)kCIImageAutoAdjustRedEye : @NO};
    NSArray *adjustments = [ciImage autoAdjustmentFiltersWithOptions:options];
    for (CIFilter *filter in adjustments) {
        [filter setValue:croppedImage forKey:kCIInputImageKey];
        croppedImage = filter.outputImage;
    }

    CIFilter *selectedFilter = [VideoFilterFactory getFilterWithType:self.selectedFilterType]; // This line needs to be removed from here
    croppedImage = [VideoFilterFactory applyFilter:selectedFilter OnImage:croppedImage];
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    return croppedImage;
}
Here is an imgur link (http://imgur.com/a/u6Vyo) to the VM Tracker and OpenGL ES instruments results, in case it helps. Thanks.
Your GLKView rendering implementation looks fine; the issue seems to come from the amount of processing you're doing on the pixel buffer after converting it into a CIImage.
The Imgur link you shared also shows that GLKView is unable to prepare the VideoTexture object correctly, most probably due to the memory pressure created in each iteration. You need to optimise this CIFilter processing.
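One common optimisation, sketched below under the assumption that the filter parameters don't need to change per frame: build the CILanczosScaleTransform filter once and reuse it, rather than recreating it for every sample buffer (the _scaleFilter ivar here is hypothetical).
// Sketch (assumed ivar): create the scale filter once, reuse it per frame.
if (!_scaleFilter) {
    _scaleFilter = [CIFilter filterWithName:@"CILanczosScaleTransform"];
    [_scaleFilter setValue:@1.0f forKey:@"inputAspectRatio"];
}
[_scaleFilter setValue:ciImage forKey:@"inputImage"];
[_scaleFilter setValue:@(self.zoom_Resize_Factor == 1 ? 0.25f : 0.5f) forKey:@"inputScale"];
CIImage *scaled = [_scaleFilter valueForKey:@"outputImage"];
The autoAdjustmentFiltersWithOptions: call is likely the bigger cost, since it analyzes every frame; computing those filters once and reusing them (or dropping them entirely) should relieve the per-iteration memory spike.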

iOS generated QR code not recognized on other platforms

I am generating a QR code image using the CIQRCodeGenerator filter available through CIFilter. The image is generated fine, and when it's displayed I can read it using an AVCaptureSession. However, when I try to scan the QR code on a different platform (Android, BlackBerry, iOS 6), it isn't recognized. According to Apple's documentation the generated image is compliant with the ISO/IEC 18004:2006 standard. Is the problem that I need something compliant with ISO/IEC 18004:2000?
Here is the code I'm using to generate the image:
NSData *stringData = [stringToEncode dataUsingEncoding:NSISOLatin1StringEncoding];
CIFilter *qrFilter = [CIFilter filterWithName:@"CIQRCodeGenerator"];
[qrFilter setValue:stringData forKey:@"inputMessage"];
[qrFilter setValue:@"M" forKey:@"inputCorrectionLevel"];
CIImage *qrImage = qrFilter.outputImage;
return [UIImage squareUIImageFromCIImage:qrImage withSize:size];
Here is a sample QR code: [image]
Does anybody know if there is a way to generate a more universally recognized QR code image using CIFilter? I'd really prefer not to go back to using ZXing.
Thank you!
I'm not sure if the slight change makes a difference, but here's a snippet from my recent project that generates a QR code an iPad camera scans successfully in under a second:
CIFilter *filter = [CIFilter filterWithName:@"CIQRCodeGenerator"];
[filter setDefaults];
NSData *data = [accountNumber dataUsingEncoding:NSUTF8StringEncoding];
[filter setValue:data forKey:@"inputMessage"];
CIImage *outputImage = [filter outputImage];
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [context createCGImage:outputImage
                                   fromRect:[outputImage extent]];
UIImage *barcode = [UIImage imageWithCGImage:cgImage
                                       scale:1.
                                 orientation:UIImageOrientationUp];
// EDIT:
CFRelease(cgImage);
You are using the ISO-8859-1 character set, but different QR code readers make different assumptions about the character encoding depending on which version of the standard they follow. UTF-8 seems to be more common than ISO-8859-1.
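If that is the cause, the fix in the first snippet is a one-line change, sketched here against the names used above:
// Encode the payload as UTF-8 instead of ISO Latin-1 before handing it
// to CIQRCodeGenerator.
NSData *stringData = [stringToEncode dataUsingEncoding:NSUTF8StringEncoding];
[qrFilter setValue:stringData forKey:@"inputMessage"];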

Better performance for ffmpeg decoding when editing image contrast for every frame

I'm developing an RTSP player using the ffmpeg library, and I must adjust the image contrast for every frame of video. Searching the web, I found this code for editing contrast:
- (UIImage *)contrast
{
    CIImage *beginImage = [CIImage imageWithCGImage:[self CGImage]];
    CIContext *context = [CIContext contextWithOptions:nil];
    CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"
                                  keysAndValues:kCIInputImageKey, beginImage,
                                                @"inputIntensity", [NSNumber numberWithFloat:0.8], nil];
    CIImage *outputImage = [filter outputImage];
    CGImageRef cgimg = [context createCGImage:outputImage fromRect:[outputImage extent]];
    UIImage *newImg = [UIImage imageWithCGImage:cgimg];
    CGImageRelease(cgimg);
    return newImg;
}
It works perfectly, but on the iPad I lose performance, and while decoding video a lot of noise shows up on screen. Is there a better-performing way to modify the contrast of an image?
Yes, there is a better way: use OpenGL ES 2.0 shaders, so that the heavy work is done on the GPU.
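If you'd rather stay within Core Image, note that CISepiaTone is a tone filter, not a contrast adjustment; Core Image's CIColorControls filter exposes an inputContrast parameter, and reusing one CIContext across frames avoids paying its setup cost every time. A minimal sketch, where sharedContext is an assumed long-lived CIContext (e.g. an ivar or static):
// Sketch: contrast via CIColorControls, reusing a single CIContext.
CIImage *beginImage = [CIImage imageWithCGImage:[self CGImage]];
CIFilter *filter = [CIFilter filterWithName:@"CIColorControls"];
[filter setValue:beginImage forKey:kCIInputImageKey];
[filter setValue:@1.2f forKey:@"inputContrast"]; // 1.0 = unchanged
CIImage *outputImage = filter.outputImage;
CGImageRef cgimg = [sharedContext createCGImage:outputImage fromRect:[outputImage extent]];
UIImage *newImg = [UIImage imageWithCGImage:cgimg];
CGImageRelease(cgimg);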

iOS face detection works differently in the simulator and on the device

I am trying to use iOS 5 face detection, and I'm finding that when I run the code in the simulator, it detects a face with the correct frame. However, if I run the exact same code on the same image on a device, it returns incorrect dimensions.
Here's my code:
CIImage *image = [CIImage imageWithCGImage:someImage.CGImage];
NSDictionary *detectorOptions =
    [NSDictionary dictionaryWithObjectsAndKeys:
        CIDetectorAccuracyHigh, CIDetectorAccuracy,
        nil];
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil
                                          options:detectorOptions];
NSArray *faceFeatures = [detector featuresInImage:image];
for (CIFeature *f in faceFeatures) {
    NSLog(@"Feature: %@", NSStringFromCGRect(f.bounds));
}
The output from the simulator (correct):
Feature: {{78, 153}, {200, 200}}
The output from the device (incorrect):
Feature: {{104, 199}, {272, 272}}
Is this a bug? Or am I using the face detection code improperly? I've also tried using featuresInImage:options: and passing in a dictionary with the device orientation.
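For reference, that orientation hint is passed via the CIDetectorImageOrientation option, which takes an EXIF/TIFF orientation value (1 through 8). A minimal sketch:
// Tell the detector which way is up; 1 means the default top-left origin.
NSDictionary *imageOptions = @{ CIDetectorImageOrientation : @1 };
NSArray *faceFeatures = [detector featuresInImage:image options:imageOptions];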

Updating an in-memory iOS UIImage with EXIF/geotag information

I am trying to update a UIImage with geotag information. I looked at Saving Geotag info with photo on iOS4.1, which is where I found a reference to the NSMutableDictionary+ImageMetadata category. However, I don't want to save to the photo album; I want a UIImage to pass on.
The following code seemed like it was making too many copies of the image, and it required linking all of these frameworks: CoreImage, AssetsLibrary, CoreMedia, ImageIO.
Is there something more efficient than UIImage -> CIImage -> CGImage -> UIImage that can take the properties NSDictionary needed for setting the EXIF data?
- (UIImage *)updateImage:(UIImage *)image location:(CLLocation *)location dateOriginal:(NSDate *)dateOriginal
{
    NSMutableDictionary *properties = [NSMutableDictionary dictionaryWithDictionary:[image.CIImage properties]];
    // uses https://github.com/gpambrozio/GusUtils
    [properties setLocation:location];
    [properties setDateOriginal:dateOriginal];
    CIImage *tempImage = [[CIImage alloc] initWithImage:image options:properties];
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgImage = [context createCGImage:tempImage fromRect:tempImage.extent];
    UIImage *updatedImage = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage); // createCGImage returns a +1 reference
    return updatedImage;
}
Have a look at libexif: http://libexif.sourceforge.net/
You'll probably need to pass the image's byte data to the library using
[UIImageJPEGRepresentation(image, quality) bytes]
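Alternatively, ImageIO alone (one of the frameworks already being linked) can rewrite metadata without round-tripping the pixels through Core Image. A sketch under the assumption that a GPS dictionary is all that's needed; gpsDict stands in for a dictionary built for kCGImagePropertyGPSDictionary:
// Sketch: copy the encoded image through ImageIO, merging in new properties.
#import <ImageIO/ImageIO.h>
#import <MobileCoreServices/MobileCoreServices.h> // kUTTypeJPEG

NSData *jpegData = UIImageJPEGRepresentation(image, 0.9);
CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)jpegData, NULL);
NSMutableData *outData = [NSMutableData data];
CGImageDestinationRef dest = CGImageDestinationCreateWithData((__bridge CFMutableDataRef)outData,
                                                              kUTTypeJPEG, 1, NULL);
NSDictionary *metadata = @{ (id)kCGImagePropertyGPSDictionary : gpsDict };
CGImageDestinationAddImageFromSource(dest, source, 0, (__bridge CFDictionaryRef)metadata);
CGImageDestinationFinalize(dest);
CFRelease(dest);
CFRelease(source);
// outData now holds the JPEG with the merged EXIF/GPS properties.
Note this re-encodes the JPEG once, but it avoids the UIImage -> CIImage -> CGImage -> UIImage chain the question mentions.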
