I am using AVFoundation in iOS for real-time face detection from camera frames.
When I set the camera quality (AVCaptureSession.sessionPreset) to .photo, faces were not detected on some models (iPhone 8).
When I set AVCaptureSession.sessionPreset to .high, faces were detected.
However, I would like to use .photo, which has higher image quality, because I also want the app to take still photos.
On the iPhone 12 mini and 13 mini, .photo detected faces just fine.
Is there a solution to this problem?
var captureSession = AVCaptureSession()
captureSession.sessionPreset = AVCaptureSession.Preset.photo
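For context, here is a minimal sketch of the kind of pipeline described (the AVCaptureVideoDataOutput wiring, the FacePipeline class, and the queue label are assumptions for illustration, not the reporter's actual code):

import AVFoundation
import MLKitFaceDetection
import MLKitVision

// Sketch: camera frames fed to the ML Kit face detector.
final class FacePipeline: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let captureSession = AVCaptureSession()
    private let detector = FaceDetector.faceDetector(options: FaceDetectorOptions())

    func configure() {
        captureSession.sessionPreset = .photo   // no detections on iPhone 8 per the report; .high works
        guard let camera = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: camera) else { return }
        captureSession.addInput(input)

        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "video"))
        captureSession.addOutput(output)
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        let image = VisionImage(buffer: sampleBuffer)
        image.orientation = .up   // real code must map the actual device/camera orientation here
        detector.process(image) { faces, error in
            // faces comes back empty on iPhone 8 with .photo, per the report above
        }
    }
}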
Expected behavior
Faces are detected on all models when using .photo.
SDK Info:
pod 'GoogleMLKit/FaceDetection', '3.2.0'
Smartphone:
Failure: iPhone 8 (iOS 15)
Success: iPhone 12 mini (iOS 14.2), iPhone 13 mini (iOS 16)
Development Environment:
IDE version: Xcode 14.2
Laptop/Desktop: MacBook Pro (2021)
Laptop/Desktop OS/version: macOS 12.6.2
Related
I have the zoom feature working (1x onwards) for a custom camera implemented using AVFoundation. This works fine up to the iPhone X models, but I want to offer 0.5x zoom on the iPhone 11.
Any help would be appreciated.
I'm not sure I understood the question correctly. I don't think you can achieve 0.5x zoom using only the wide-angle camera.
If you mean the 0.5x zoom as in the native iOS Camera app, you achieve that by switching from the wide-angle camera to the ultra-wide camera.
I think you have two options:
You can switch directly to the ultra-wide camera via a button or so (below is a short snippet for selecting the camera and assigning it to your active device):
if let device = AVCaptureDevice.default(.builtInUltraWideCamera, for: .video, position: .back) {
    videoDevice = device
} else {
    fatalError("no back camera")
}
On supported devices like the iPhone 11 you can use .builtInDualCamera as your active device. In this case, the system will automatically switch between the different cameras depending on the zoomFactor. Of course, on the iPhone 11 Pro and 12 Pro series, you can even use .builtInTripleCamera to go up to 2x zoom. You can even check when the switch happens; please refer to the link below for more information.
https://developer.apple.com/documentation/avfoundation/avcapturedevice/3153003-virtualdeviceswitchovervideozoom
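As a sketch of that second option (assuming iOS 13+; .builtInDualWideCamera is used here, the ultra-wide + wide pair available on the iPhone 11):

import AVFoundation

// Sketch, assuming iOS 13+: a virtual device pairing the ultra-wide and
// wide cameras; the system switches lenses as videoZoomFactor changes.
if let device = AVCaptureDevice.default(.builtInDualWideCamera, for: .video, position: .back) {
    // Zoom factors at which the system hands off to the next longer lens,
    // e.g. [2.0] when factor 1.0 maps to the ultra-wide camera.
    print(device.virtualDeviceSwitchOverVideoZoomFactors)

    do {
        try device.lockForConfiguration()
        device.videoZoomFactor = 1.0   // widest field of view ("0.5x" in UI terms)
        device.unlockForConfiguration()
    } catch {
        print("Could not lock device: \(error)")
    }
}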
I have used the Google API for face detection. I integrated it via Firebase and also installed the framework from Firebase.
Face detection works fine on iPhone X when the device is in landscape mode, but when the device is in portrait mode it does not work.
I debugged and found that FirebaseMLVision.framework has a processImage method to which the image is passed, but the result is always empty when the device is in portrait.
Method in FirebaseMLVision.framework:
- (void)processImage:(FIRVisionImage *)image
          completion:(FIRVisionFaceDetectionCallback)completion
    NS_SWIFT_NAME(process(_:completion:));
I call it as below:
[_faceRecognizer processImage:image
                   completion:^(NSArray<FIRVisionFace *> *faces, NSError *error) {
                     if (error != nil || faces == nil) {
                       completed(emptyResult);
                     } else {
                       completed([self processFaces:faces]);
                     }
                   }];
Please help me figure out what is wrong.
Thanks.
Have you tried out the QuickStart mlvision sample app? Its face detection should work fine in iPhone X portrait mode.
I had the same problem and solved it.
Detection seems to fail if the vertical dimension of the image passed to ML Kit exceeds 1280 pixels.
If you are using AVCaptureSession, try changing the value of sessionPreset.
let captureSession = AVCaptureSession()
captureSession.sessionPreset = .hd1280x720
With the output image resolution fixed at 720x1280, faces are detected normally.
If you are not using AVCaptureSession, try changing the image resolution.
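For the non-AVCaptureSession case, here is a minimal sketch of downscaling a UIImage so that its longer side stays within the 1280-pixel limit observed above (the helper name is hypothetical):

import UIKit

// Hypothetical helper: downscale an image so its longer side stays within
// the 1280-pixel limit observed above before passing it to ML Kit.
func downscaleForDetection(_ image: UIImage, maxSide: CGFloat = 1280) -> UIImage {
    let longest = max(image.size.width, image.size.height)
    guard longest > maxSide else { return image }
    let scale = maxSide / longest
    let newSize = CGSize(width: image.size.width * scale,
                         height: image.size.height * scale)
    return UIGraphicsImageRenderer(size: newSize).image { _ in
        image.draw(in: CGRect(origin: .zero, size: newSize))
    }
}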
I want to apply a CIFilter to the camera preview like the native Camera app. I know about the GPUImage framework, but I get memory issues at some points. Is there any other way to use CIFilter with a live camera preview...
I need to retrieve the resolution in pixels of a movie captured by the iPhone camera...
Is there a library like UIDevice that checks which type of device I'm using, and also provides camera information?
Everything depends on which mode you start capturing video in.
According to this link: https://developer.apple.com/library/mac/documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/04_MediaCapture.html#//apple_ref/doc/uid/TP40010188-CH5-SW30
You can try capturing video with the AVCaptureSessionPresetHigh preset, and then check the size of the captured image.
This should give you the highest video resolution in recording mode.
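As a sketch of that approach (the session setup here is an assumption; the preset takes effect once the session is configured and running):

import AVFoundation
import CoreMedia

// Sketch: configure the session with the high preset, then read the
// dimensions the device actually settled on from its active format.
let session = AVCaptureSession()
session.sessionPreset = .high

if let device = AVCaptureDevice.default(for: .video),
   let input = try? AVCaptureDeviceInput(device: device) {
    session.addInput(input)
    session.startRunning()   // the preset is applied once the session runs

    let dims = CMVideoFormatDescriptionGetDimensions(device.activeFormat.formatDescription)
    print("Capturing at \(dims.width)x\(dims.height)")
}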
I am developing a real-time video processing app for iOS 5. The video stream dimensions need to match the screen size of the device. I currently only have an iPhone 4 to develop against. For the iPhone 4 I set the AVCaptureSession preset to AVCaptureSessionPresetMedium:
AVCaptureSession *session = [AVCaptureSession new];
[session setSessionPreset:AVCaptureSessionPresetMedium];
The captured images (via CMSampleBufferRef) have the size of the screen.
My question: is it correct to assume that images captured with the AVCaptureSessionPresetMedium preset also have full-screen device dimensions on the iPhone 4s and iPad 2? Unfortunately, I cannot verify that myself.
I looked at the Apple documentation:
http://developer.apple.com/library/mac/#documentation/AVFoundation/Reference/AVCaptureSession_Class/Reference/Reference.html#//apple_ref/doc/constant_group/Video_Input_Presets
but I cannot find an iPad 2 preset of 1024x768, and I would like to avoid the performance penalty of resizing images in real time.
What's the recommended path to take?
The resolution of the camera and the resolution of the screen aren't really related anymore. You say
The captured images (via CMSampleBufferRef) have the size of the screen
but I don't think this is actually true (and it may vary by device). A medium capture on an iPad 2 and an iPhone 4s is 480x360. Note this isn't even the same aspect ratio as the screen on a phone or iPod: the camera is 4:3 but the screen is 3:2.
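Rather than relying on an assumption, you can verify what the session actually delivers by reading the dimensions of each buffer in the sample-buffer delegate (a sketch in modern Swift, so the names differ from the iOS 5-era Objective-C above):

import AVFoundation

// Sketch: check what the session actually delivers instead of assuming it
// matches the screen (AVCaptureVideoDataOutputSampleBufferDelegate method).
func captureOutput(_ output: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    let width = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)
    print("Delivered frame: \(width)x\(height)")   // e.g. 480x360 for medium, per the answer above
}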