I am using the Google API for face detection, integrated via Firebase; I have also installed the Firebase framework.
Face detection works fine on an iPhone X when the device is in landscape mode.
But when the device is in portrait mode it does not work.
I debugged and found that FirebaseMLVision.framework has a processImage method to which I pass the image, but the result is always empty when the device is in portrait.
The method in FirebaseMLVision.framework:
- (void)processImage:(FIRVisionImage *)image
          completion:(FIRVisionFaceDetectionCallback)completion
    NS_SWIFT_NAME(process(_:completion:));
I call it as follows:
[_faceRecognizer processImage:image
                   completion:^(NSArray<FIRVisionFace *> *faces, NSError *error) {
                     if (error != nil || faces == nil) {
                       completed(emptyResult);
                     } else {
                       completed([self processFaces:faces]);
                     }
                   }];
Please help me figure out what is wrong.
Thanks.
Have you tried the QuickStart mlvision sample app? Its face detection should work fine on an iPhone X in portrait mode.
I had the same problem and solved it.
The image passed to ML Kit seems to fail detection if its vertical dimension exceeds 1280 pixels.
If you are using AVCaptureSession, try changing the value of sessionPreset.
let captureSession = AVCaptureSession()
captureSession.sessionPreset = .hd1280x720
With the resolution of the output image fixed at 720x1280, faces are detected normally.
If you are not using AVCaptureSession, try changing the image resolution.
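Not every device supports every preset, so a slightly more defensive version of the snippet above checks support first. This is a minimal sketch; the fallback preset is just an illustrative choice:

```swift
import AVFoundation

let captureSession = AVCaptureSession()

// Prefer 1280x720 so the longer side stays within ML Kit's apparent 1280 limit;
// fall back to a smaller preset on devices that don't support it.
if captureSession.canSetSessionPreset(.hd1280x720) {
    captureSession.sessionPreset = .hd1280x720
} else if captureSession.canSetSessionPreset(.vga640x480) {
    captureSession.sessionPreset = .vga640x480
}
```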
I have the zoom feature working (1x onwards) for a custom camera implemented using AVFoundation. This works fine up to the iPhone X models, but I want to support 0.5x zoom on the iPhone 11.
Any help would be appreciated.
I'm not sure I understood the question correctly, but I don't think you can achieve a 0.5x zoom using only the wide-angle camera.
If you are referring to the 0.5x zoom as in the native iOS camera app, you achieve that by switching from the wide-angle camera to the ultra-wide-angle camera.
I think you have two options:
You can switch directly to the ultra-wide camera via a button or similar (below is a short snippet for selecting the camera and assigning it to your active device):
if let device = AVCaptureDevice.default(.builtInUltraWideCamera, for: .video, position: .back) {
    videoDevice = device
} else {
    fatalError("no back camera")
}
On supported devices like the iPhone 11 you can use .builtInDualWideCamera (the wide plus ultra-wide pair) as your active device. In this case, the system will automatically switch between the cameras depending on the zoom factor. On the iPhone 11 Pro and 12 Pro series, you can even use .builtInTripleCamera to go up to the 2x telephoto. You can also check when the switch happens; please refer to the link below for more information.
https://developer.apple.com/documentation/avfoundation/avcapturedevice/3153003-virtualdeviceswitchovervideozoom
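A minimal sketch of this second option, assuming the session input is configured elsewhere; `.builtInDualWideCamera` requires iOS 13 and a device with an ultra-wide camera:

```swift
import AVFoundation

// On a dual-wide virtual device, videoZoomFactor 1.0 is the ultra-wide camera;
// the wide camera starts at the first switch-over factor (typically 2.0).
if let device = AVCaptureDevice.default(.builtInDualWideCamera, for: .video, position: .back) {
    let switchOverFactors = device.virtualDeviceSwitchOverVideoZoomFactors
    do {
        try device.lockForConfiguration()
        // Start at the wide camera's equivalent of 1x; the system switches
        // to the ultra-wide automatically when the factor drops below this.
        device.videoZoomFactor = CGFloat(switchOverFactors.first?.doubleValue ?? 1.0)
        device.unlockForConfiguration()
    } catch {
        print("Could not lock device for configuration: \(error)")
    }
}
```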
When checking ARFaceTrackingConfiguration.supportedNumberOfTrackedFaces on an iPhone X running iOS 13 it returns 1. But looking at the ARKit promo page it says:
ARKit Face Tracking tracks up to three faces at once, using the TrueDepth camera on iPhone X, iPhone XS, iPhone XS Max, iPhone XR, and iPad Pro to power front-facing camera experiences like Memoji and Snapchat.
Is there any documentation that specifies what each device supports?
First, I should say that only devices with a TrueDepth sensor support ARFaceTrackingConfiguration, so the oldest supported device is the iPhone X.
Second, to track three faces at a time you need iOS 13+, as you already said.
Third, the default is 1 tracked face, so you have to set the following instance property if you want to simultaneously track up to three faces:
var maximumNumberOfTrackedFaces: Int { get set }
or:
guard ARFaceTrackingConfiguration.isSupported else {
    print("You can't track faces on this device.")
    return
}

let config = ARFaceTrackingConfiguration()
config.maximumNumberOfTrackedFaces = 3
sceneView.session.run(config, options: [.resetTracking, .removeExistingAnchors])
P.S. I haven't seen any documentation that specifies what each device supports except this one.
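Since the question shows that supportedNumberOfTrackedFaces varies by device, a defensive variant of the snippet above clamps the request to what the hardware actually reports (supportedNumberOfTrackedFaces is available from iOS 13):

```swift
import ARKit

guard ARFaceTrackingConfiguration.isSupported else {
    print("You can't track faces on this device.")
    return
}

let config = ARFaceTrackingConfiguration()
// Ask for up to three faces, but never more than this device supports
// (1 on an iPhone X; 3 requires the A12 chip or later).
config.maximumNumberOfTrackedFaces = min(3, ARFaceTrackingConfiguration.supportedNumberOfTrackedFaces)
sceneView.session.run(config, options: [.resetTracking, .removeExistingAnchors])
```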
How can I add a Portrait mode effect while capturing an image using a custom camera?
Anyone, please help me.
I am using AVPhotoCapture, where I enabled the image depth property, but there is no portrait effect while capturing.
Try installing Google Camera if your device has a dual camera; you can find the APK on the APK Mirror website. Google Camera has an HDR mode and other focus modes. I hope it helps.
Basic task: update the EXIF orientation property in the metadata associated with a UIImage. My problem is that I don't know where the orientation property is within all the EXIF info.
Convoluted Background: I am changing the orientation of the image returned by imagePickerController:didFinishPickingMediaWithInfo: so I am thinking that I also need to update the metaData before saving the image with writeImageToSavedPhotosAlbum:(CGImageRef)imageRef metadata:(NSDictionary *)metadata.
In other words, unless I change it, the metadata will contain the old/initial orientation and thus be wrong. The reason I am changing the orientation is that it keeps tripping me up when I run the Core Image face detection routine. Taking a photo with the iPhone (device) in portrait mode using the front camera, the orientation is UIImageOrientationRight (3). If I rewrite the image so the orientation is UIImageOrientationUp (0), I get good face detection results. For reference, the routine to rewrite the image is below.
The whole camera orientation thing I find very confusing, and I seem to be digging myself deeper into a code hole with all of this. I have looked at the posts (here, here, and here). And according to this post (https://stackoverflow.com/a/3781192/840992):
"The camera is actually landscape native, so you get up or down when
you take a picture in landscape and left or right when you take a
picture in portrait (depending on how you hold the device)."
...which is totally confusing. If the above is true, I would think I should be getting an orientation of UIImageOrientationLeftMirrored or UIImageOrientationRightMirrored with the front camera. And none of this explains why the CIDetector fails on the virgin image returned by the picker.
I am approaching this ass-backwards but I can't seem to get oriented...
- (UIImage *)normalizedImage:(UIImage *)thisImage
{
    if (thisImage.imageOrientation == UIImageOrientationUp) return thisImage;

    UIGraphicsBeginImageContextWithOptions(thisImage.size, NO, thisImage.scale);
    [thisImage drawInRect:(CGRect){0, 0, thisImage.size}];
    UIImage *normalizedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return normalizedImage;
}
Take a look at my answer here:
Force UIImagePickerController to take photo in portrait orientation/dimensions iOS
and associated project on github (you won't need to run the project, just look at the readme).
It's more concerned with reading rather than writing metadata - but it includes a few notes on Apple's imageOrientation and the corresponding orientation 'Exif' metadata.
This might be worth a read also
Captured photo automatically rotated during upload in IOS 6.0 or iPhone
There are two different constant numbering conventions in play to indicate image orientation.
kCGImagePropertyOrientation constants as used in TIFF/IPTC image metadata tags
UIImageOrientation constants as used by UIImage imageOrientation property.
The iPhone's native camera orientation is landscape left (with the home button to the right). Native pixel dimensions always reflect this; rotation flags are used to orient the image correctly with the display orientation.
Rotation          Apple UIImage.imageOrientation       TIFF/IPTC kCGImagePropertyOrientation
iPhone native     UIImageOrientationUp    = 0          Landscape left  = 1
rotate 180 deg    UIImageOrientationDown  = 1          Landscape right = 3
rotate 90 CCW     UIImageOrientationLeft  = 2          Portrait down   = 8
rotate 90 CW      UIImageOrientationRight = 3          Portrait up     = 6
UIImageOrientation 4-7 map to kCGImagePropertyOrientation 2,4,5,7 - these are the mirrored counterparts.
UIImage derives its imageOrientation property from the underlying kCGImagePropertyOrientation flags; that's why it is a read-only property. This means that as long as you get the metadata flags right, the imageOrientation will follow correctly. But if you are reading the numbers in order to apply a transform, you need to be aware of which numbers you are looking at.
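The table can be written down as a small conversion helper. A minimal sketch; the function name is just for illustration, and the mirrored cases follow the 4-7 to 2,4,5,7 mapping noted above:

```swift
import UIKit

// Convert a UIImage.Orientation into the corresponding EXIF/TIFF
// kCGImagePropertyOrientation value, per the table above.
func exifOrientation(for orientation: UIImage.Orientation) -> Int {
    switch orientation {
    case .up:            return 1
    case .down:          return 3
    case .left:          return 8
    case .right:         return 6
    case .upMirrored:    return 2
    case .downMirrored:  return 4
    case .leftMirrored:  return 5
    case .rightMirrored: return 7
    @unknown default:    return 1
    }
}
```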
A few gleanings from my world o' pain in looking into this:
Background: Core Image face detection was failing, and it seemed to be related to using featuresInImage:options: with the UIImage.imageOrientation property as an argument. With an image adjusted to have no rotation and no mirroring, detection worked fine; but when passing in an image directly from the camera, detection failed.
Well... UIImage.imageOrientation is DIFFERENT from the actual orientation of the image.
In other words...
UIImage* tmpImage = [self.imageInfo objectForKey:UIImagePickerControllerOriginalImage];
printf("tmpImage.imageOrientation: %d\n", tmpImage.imageOrientation);
Reports a value of 3 or UIImageOrientationRight whereas using the metaData returned by the UIImagePickerControllerDelegate method...
NSMutableDictionary *metaData = [[tmpInfo objectForKey:@"UIImagePickerControllerMediaMetadata"] mutableCopy];
printf("metaData orientation %d\n", [[metaData objectForKey:@"Orientation"] integerValue]);
Reports a value of 6 or UIImageOrientationLeftMirrored.
I suppose it seems obvious now that UIImage.imageOrientation is a display orientation, which appears to be determined by the source image orientation and device rotation (I may be wrong here). Since the display orientation is different from the actual image data, using it will cause the CIDetector to fail. Ugh.
I'm sure all of that serves very good and important purposes for fluid GUIs etc., but it is too much to deal with for me, since all the CIDetector coordinates will also be in the original image orientation, which makes CALayer drawing painful. So, for posterity, here is how to change the Orientation property of the metadata AND the image contained therein. This metadata can then be used to save the image to the camera roll.
Solution
// NORMALIZE IMAGE
UIImage *tmpImage = [self normalizedImage:[self.imageInfo objectForKey:UIImagePickerControllerOriginalImage]];

NSMutableDictionary *tmpInfo = [self.imageInfo mutableCopy];
NSMutableDictionary *metaData = [[tmpInfo objectForKey:@"UIImagePickerControllerMediaMetadata"] mutableCopy];
[metaData setObject:[NSNumber numberWithInt:0] forKey:@"Orientation"];

[tmpInfo setObject:tmpImage forKey:@"UIImagePickerControllerOriginalImage"];
[tmpInfo setObject:metaData forKey:@"UIImagePickerControllerMediaMetadata"];
self.imageInfo = tmpInfo;
I am developing a realtime video-processing app for iOS 5. The video stream dimensions need to match the screen size of the device. I currently only have an iPhone 4 to develop against. For the iPhone 4 I set the AVCaptureSession preset to AVCaptureSessionPresetMedium:
AVCaptureSession *session = [AVCaptureSession new];
[session setSessionPreset:AVCaptureSessionPresetMedium];
The captured images (via CMSampleBufferRef) have the size of the screen.
My question: is the assumption correct that images captured with a session preset of AVCaptureSessionPresetMedium have the full-screen device dimensions on the iPhone 4S and iPad 2 as well? I unfortunately cannot verify that myself.
I looked at the apple documentation:
http://developer.apple.com/library/mac/#documentation/AVFoundation/Reference/AVCaptureSession_Class/Reference/Reference.html#//apple_ref/doc/constant_group/Video_Input_Presets
but I cannot find an iPad 2 preset of 1024x768 and would like to avoid the performance penalty of resizing images in real time.
What's the recommended path to take?
The resolution of the camera and the resolution of the screen aren't really related anymore. You say
The captured images (via CMSampleBufferRef) have the size of the
screen
but I don't think this is actually true (and it may vary by device). A medium capture on an iPad 2 or an iPhone 4S is 480x360. Note this isn't even the same aspect ratio as the screen on a phone or iPod: the camera is 4:3 but the screen is 3:2.
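The mismatch is easy to see with the raw numbers (the iPhone 4 screen is 960x640 in landscape; the medium-preset buffer is 480x360). A quick sketch of the arithmetic:

```swift
// Aspect ratios: camera buffer vs. screen.
let cameraAspect = 480.0 / 360.0   // 1.333... (4:3)
let screenAspect = 960.0 / 640.0   // 1.5     (3:2)

// Scaling the 4:3 buffer up to fill the 3:2 screen means cropping
// roughly 11% of the buffer's height (1 - (4/3)/(3/2) = 1/9).
let cropFraction = 1.0 - cameraAspect / screenAspect
```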