ML Kit face recognition not working on iOS

I'm working on an app that does facial recognition. One of the steps includes detecting the user's smile, for which I am currently using Google's ML Kit. The application works fine on Android, but when I run it on iOS (an iPhone XR, among others) it does not recognize any faces in any image. I have already followed every step on how to integrate iOS and Firebase, and the app itself runs fine.
Here's my code. It always falls into the length == 0 branch, as if the image contained no faces. The image passed as a parameter comes from the image_picker plugin.
Future<Face> verifyFace(File thisImage) async {
  var beforeTime = new DateTime.now();
  final image = FirebaseVisionImage.fromFile(thisImage);
  final faceDetector = FirebaseVision.instance.faceDetector(
    FaceDetectorOptions(
      mode: FaceDetectorMode.accurate,
      enableClassification: true,
    ),
  );
  var processedImages = await faceDetector.processImage(image);
  print('Processing time: ' +
      DateTime.now().difference(beforeTime).inMilliseconds.toString());
  if (processedImages.length == 0) {
    throw new NoFacesDetectedException();
  } else if (processedImages.length == 1) {
    Face face = processedImages.first;
    if (face.smilingProbability == null) {
      throw new LipsNotFoundException();
    } else {
      return face;
    }
  } else if (processedImages.length > 1) {
    throw new TooManyFacesDetectedException();
  }
}
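For context, this is roughly how the image is obtained before being passed in; a hedged sketch assuming the image_picker API of that era (the variable names are mine):
// Hypothetical call site: pick a photo and run verifyFace on it.
final picked = await ImagePicker().getImage(source: ImageSource.camera);
if (picked != null) {
  final face = await verifyFace(File(picked.path));
  print('Smiling probability: ${face.smilingProbability}');
}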
If someone has any tips or can tell me what I'm doing wrong, I would be very grateful.

I know this is an old issue, but I was having the same problem, and it turns out I had just forgotten to add the pod 'Firebase/MLVisionFaceModel' to the Podfile.
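For reference, the relevant Podfile lines would look something like this (a sketch assuming the standard Firebase ML Vision pods; 'Firebase/MLVision' is the base pod the face model plugs into):
# ios/Podfile
pod 'Firebase/MLVision'
pod 'Firebase/MLVisionFaceModel'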

There is configuration in so many places that I'd better leave you this video (although maybe you have already seen it) so you can see some code and how Matt Sullivan builds the same thing you are trying to do.
Let me know if you have already seen it, and please add an example repo I could work with so I can see your exact code.

From what I can tell, ML Kit face detection does work on iOS, but very poorly; it hardly seems worth using the SDK.
The docs do say that the face itself must be at least 100x100 px. In my testing, though, the face needed to be at least 700 px across for the SDK to detect it.
On Android the SDK works very well even on small images (200x200 px in total).
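If image size is a factor, the detector's minimum-face threshold is worth checking too; a hedged sketch using the firebase_ml_vision plugin's FaceDetectorOptions (minFaceSize is a documented option; the value shown is just the default):
final faceDetector = FirebaseVision.instance.faceDetector(
  FaceDetectorOptions(
    mode: FaceDetectorMode.accurate,
    enableClassification: true,
    // Faces smaller than this fraction of the image are ignored.
    minFaceSize: 0.1,
  ),
);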

Related

React Native HTML to PDF not displaying local images

In case you don't know, there was previously a problem with this library not rendering local images on Android as well, but apparently it was solved. Now I'm facing the exact same issue on iOS, with the difference that I can use static images like assets/src/assets/images/logo.png. But when an image path starts with something like file:///, storage://, or ph://, it simply does not get rendered.
What I'm doing is trying to generate a PDF report file, which must be generated whether or not the user has an internet connection. That is why I have to use local images.
The static image is the company's logo, and the local image that is not getting rendered is an image saved to the phone's storage through Image Picker or Camera Roll. The React Native Image component displays the image perfectly, so I don't think I'm using the wrong path.
What I have tried so far:
Removing the file:///, storage://, or ph:// from the beginning of the path string;
In some cases, when I save an image to the phone's library with Camera Roll, it returns a path that starts with ph:// but has no extension, such as .jpg or .png. I tried adding the extension manually, and it still made no difference;
Converting the image to base64 using rn-fetch-blob (with RNFetchBlob.fs.base64.encode(path)), still with no success (see the sketch after this list for how that approach would be wired in).
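For reference, a minimal sketch of the base64 route that usually works with WebView-based PDF generation: read the file with rn-fetch-blob and inline it as a data URI (the helper name and the JPEG assumption are mine, not from the original post):
import RNFetchBlob from 'rn-fetch-blob';

// Hypothetical helper: read a local image and wrap it in an <img> data URI,
// so the WebView that renders the PDF never has to resolve a file:// path.
async function imgTagFromLocalFile(path) {
  // readFile with 'base64' returns the file contents base64-encoded.
  const base64Data = await RNFetchBlob.fs.readFile(path.replace(/^file:\/\//, ''), 'base64');
  return `<img src="data:image/jpeg;base64,${base64Data}" width="200" />`; // assumes JPEG
}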
Devices:
iPhone SE with iOS 14 (also simulator iPhone 11 with iOS 15)
MacBook Air 2017 Core i5 1.8GHz and 8gb RAM (macOS Big Sur 11.5.2)
Environment
node: 12.22.7
npm: 6.14.15
react: 16.9.0
react-native: 0.61.5
react-native-html-to-pdf: ^0.11.0 (updating it to 0.12.0 also got me the same result)
Code:
sharePDF = async () => {
  try {
    this.changeVisibilityOptions(false);
    this.changeVisibilityLoading('Gerando PDF...');
    let htmlTemplate = '';
    htmlTemplate = await getPDFDespesa(this.state);
    const pdfOptions = {
      html: htmlTemplate,
      fileName: 'RelatorioDespesas',
      directory: 'Relatorios'
    };
    let pdfFile = await RNHTMLtoPDF.convert(pdfOptions);
    this.changeVisibilityLoading(false);
    const shareOptions = {
      title: 'Compartilhar com:',
      url: `file://${pdfFile.filePath}`,
      type: 'application/pdf',
      failOnCancel: false
    };
    const ShareResponse = await Share.open(shareOptions);
  } catch (error) {
    this.setState({ visibilityLoadingScreen: false });
    console.log('Error =>', error);
  }
}
Final thoughts:
Well, since the code is stored in a private repository, I can't show the whole thing here for ethical reasons, but I'm doing my best to give you as many details as possible.
The code produces an almost complete PDF; the only problem is that I see broken-image icons where the images were supposed to be. On Android it now works perfectly.
I think this might be an issue related to WebView, since react-native-html-to-pdf uses WebView to generate the PDF from HTML. I reached this conclusion after another developer at my job, who was trying to build a screen that previews the PDF before it is shared, hit the very same problem on both Android and iOS. The library he used was react-native-webview.
Update with solution
Alright, after a long time of research, a colleague and I arrived at a solution which may not be the best, but it does what we expected.
First of all, we discovered that we had to split the problem in two, because we actually had two problems.
Images from react-native-image-picker: after a long time trying to find what was preventing the local images from being rendered, I tried updating the library to version 4.7.3 (the latest version at that time) and made a number of required changes to the code, as the version we were using was considerably old. To my surprise, it worked, even though the format of the response's uri did not change;
Images from #react-native-community/cameraroll: this one was a bit more complicated. It took me some time to realize that iOS PHAsset URIs are not supported in WebView or react-native-html-to-pdf (which uses WebView under the hood). After some research, my colleague and I found a workaround that led us to a relatively easy solution: we used react-native-fs to copy the PHAsset media file to a temporary directory, which returns a uri that starts with file:// and can be rendered by WebView. This is the code we used:
import RNFS from 'react-native-fs';

// Extracts the file name from the end of a URL.
function getImageNameFromUrl(imageUrl = "") {
  if (imageUrl) {
    const splittedImageUrl = imageUrl.split('/');
    return splittedImageUrl.pop();
  }
  return null;
}

// Copies a PHAsset (ph://) media file to a temporary directory and
// returns a file:// URI that WebView can render.
export default async function copyAssetsFileIOSAndReturnURI(remoteURL = '', localURI = '') {
  try {
    if (remoteURL && localURI) {
      const imageName = getImageNameFromUrl(remoteURL);
      const imgPath = await RNFS.copyAssetsFileIOS(
        localURI,
        RNFS.TemporaryDirectoryPath + imageName,
        0,
        0
      );
      return imgPath;
    }
    return null;
  } catch (err) {
    console.log(err);
    return null;
  }
}
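For context, a hedged usage sketch (the example values are mine; remoteURL is only used to derive a file name, and localURI is the ph:// asset URI returned by CameraRoll):
// Inside an async function:
const tempUri = await copyAssetsFileIOSAndReturnURI(
  'photos/report-photo.jpg', // any path ending in the desired file name
  'ph://ED7AC36B-A150-4C38-BB8C-B6D696F4F2ED' // hypothetical PHAsset URI
);
// tempUri starts with file:// and can be used in the HTML passed to RNHTMLtoPDF.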

Inaccurate face detection using ML Kit Face detection, doesn't work with selfies

I am creating an iOS app that uses Firebase ML Kit face detection, and I am trying to let users take a photo with their camera and check whether there is a face in it. I have followed the documentation and some YouTube videos, but it just doesn't seem to work properly or accurately for me. I did some testing using a photo library, not just pictures that I take, and I found that it works well with selfies from Google, but when I take my own selfies it never seems to work. I noticed that when I take a selfie the camera does a "mirror" kind of thing where it flips the image, but I even took a picture of my friend using the front-facing camera and it still didn't work. So I am not sure if I implemented this wrong or what is going on. I have attached the relevant code to show how it was implemented. Thanks to anyone who takes the time to help out; I am a novice at iOS development, so hopefully this isn't a waste of your time.
func photoVerification() {
    let options = VisionFaceDetectorOptions()
    let vision = Vision.vision()
    let faceDetector = vision.faceDetector(options: options)
    let image = VisionImage(image: image_one.image!)
    faceDetector.process(image) { (faces, error) in
        guard error == nil, let faces = faces, !faces.isEmpty else {
            // No face detected, provide error on image
            print("No face detected!")
            self.markImage(isVerified: false)
            return
        }
        // Face has been detected, offer verified tag to user
        print("Face detected!")
        self.markImage(isVerified: true)
    }
}
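No fix was posted in this thread, but a frequent culprit with camera captures is the photo's orientation metadata: ML Kit can miss faces when the UIImage is not oriented .up. A hedged workaround of my own, redrawing the image upright before detection:
// Hedged sketch (not from the thread): redraw the photo with an .up
// orientation so the detector sees the pixels the way they are displayed.
func normalizedImage(_ image: UIImage) -> UIImage {
    guard image.imageOrientation != .up else { return image }
    UIGraphicsBeginImageContextWithOptions(image.size, false, image.scale)
    defer { UIGraphicsEndImageContext() }
    image.draw(in: CGRect(origin: .zero, size: image.size))
    return UIGraphicsGetImageFromCurrentImageContext() ?? image
}
// Then: let image = VisionImage(image: normalizedImage(image_one.image!))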

Detecting ARKit compatible device from user agent

We would like to enable a feature that allows a model to be viewed in AR via a deep link from a web page to our ARKit app.
Has anyone found a way to determine whether a device is ARKit compatible, using the user-agent string or any other browser-based mechanism?
Thanks!
Apple seems to use the following code to show/hide the "Visit this page on iOS 12 to try AR Quick Look" on https://developer.apple.com/arkit/gallery/
(function () {
    var isRelAR = false;
    var a = document.createElement('a');
    if (a.relList.supports('ar')) {
        isRelAR = true;
    }
    document.documentElement.classList.add(isRelAR ? 'relar' : 'no-relar');
})();
The interesting part, of course, being
var isRelAR = false;
var a = document.createElement('a');
if (a.relList.supports('ar')) {
    isRelAR = true;
}
Act accordingly based on the value of isRelAR.
Safari doesn’t expose any of the required hardware information for that.
If you already have a companion iOS app for your website, another option might be to still provide some non-AR experience for your content, so that the website has something to link to in all cases.
For example, AR furniture catalogs seem to be a thing now. But if the device isn’t ARKit capable, you could still provide a 3D model of each furniture piece linked from your website, letting the user spin it around and zoom in on it with touch gestures instead of placing it in AR.
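As a sketch of that fallback idea (the element id and URLs here are hypothetical), the detection flag can route users to either the AR Quick Look link or a plain 3D viewer:
// Hedged sketch: gate the link on the rel="ar" support flag from above.
var arLink = document.getElementById('view-in-ar'); // hypothetical element
if (isRelAR) {
    arLink.setAttribute('rel', 'ar');    // AR Quick Look link
    arLink.href = 'models/chair.usdz';   // hypothetical .usdz model
} else {
    arLink.href = '/viewer?model=chair'; // hypothetical non-AR 3D viewer
}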

Capture pictures in a BlackBerry application using VideoControl only works on emulator, not device

This is only some of the code, because other parts of it are spread out, but on the simulator for the BlackBerry Curve this adds a VideoControl to the manager and it shows up fine, along with another button that actually captures the picture. However, when I run this on an actual BlackBerry Curve (OS 6, I think) it doesn't display on the screen.
try
{
    _p = javax.microedition.media.Manager.createPlayer("capture://video?encoding=jpeg&width=1024&height=768");
    _p.realize();
    _videoControl = (VideoControl) _p.getControl("VideoControl");
    if (_videoControl != null)
    {
        videoField = (Field) _videoControl.initDisplayMode(VideoControl.USE_GUI_PRIMITIVE, "net.rim.device.api.ui.Field");
        // _videoControl.setDisplayFullScreen(true);
        _videoControl.setVisible(true);
        // EnhancedFocusControl efc = (EnhancedFocusControl)p.getControl("net.rim.device.api.amms.control.camera.EnhancedFocusControl");
        // efc.startAutoFocus();
        _p.start();
        if (videoField != null)
        {
            add(videoField);
        }
    }
}
catch (Exception e)
{
    Dialog.alert(e.toString());
}
In my experience, this way of taking images has proven very unreliable (it worked fine only on a limited number of devices), so I stopped using it. Use the native Camera app instead; it works fine on all devices.
A lot of the time, when things work on the emulator but not on the device, it's permissions related. Have you checked ApplicationPermissionsManager?
A word of warning: from OS 4.5 to 6 a lot of stuff has been deprecated, so be sure you have the right permissions for the models you are working with.
For example, ApplicationPermissions.PERMISSION_SCREEN_CAPTURE was deprecated in 4.6, I think.
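For illustration, requesting the permission up front would look roughly like this (a hedged sketch; PERMISSION_RECORDING is my assumption for the camera-related permission on OS 4.6+, and the constants vary across OS versions):
// Hedged sketch: ask for the recording permission before creating
// the capture player.
private boolean ensureCameraPermission() {
    ApplicationPermissionsManager apm = ApplicationPermissionsManager.getInstance();
    ApplicationPermissions permissions = new ApplicationPermissions();
    permissions.addPermission(ApplicationPermissions.PERMISSION_RECORDING);
    // Prompts the user if the permission is not already granted.
    return apm.invokePermissionsRequest(permissions);
}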

Camera programming in BlackBerry

The following code returns null:
byte[] image1 = _videoControl.getSnapshot(null);
Any suggestions, please?
A few important points about the VideoControl.getSnapshot method:
some manufacturers may not implement the getSnapshot() method;
the viewfinder must actually be visible on the screen prior to calling getSnapshot();
if you attempt to take pictures too quickly, getSnapshot() may return null; the camera requires time to clear out its buffer and prepare for the next shot;
you may check the MMAPI system property "video.snapshot.encodings" before capturing:
if (System.getProperty("video.snapshot.encodings") == null) {
    // getSnapshot() is not supported
}
You may read this chapter from the book "Advanced BlackBerry Development":
http://books.google.com/books?id=F4Qu-lpoVncC&pg=PA53&lpg=PA53#v=onepage&q&f=false
Since the VideoControl.getSnapshot method is not supported on all devices, I'd recommend another approach. You can start the native BlackBerry Camera app with this line of code:
Invoke.invokeApplication(Invoke.APP_TYPE_CAMERA, new CameraArguments());
and then catch the taken image using a FileSystemJournalListener.
The BB SDK on your PC contains samples; search for the 'fileexplorerdemo' sample to see the rest of the details.
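A rough sketch of that journal-listener pattern (my own, based on the standard FileSystemJournal API; the .jpg filter is an assumption):
import net.rim.device.api.io.file.FileSystemJournal;
import net.rim.device.api.io.file.FileSystemJournalEntry;
import net.rim.device.api.io.file.FileSystemJournalListener;

// Hedged sketch: watch the file-system journal for the photo the native
// Camera app writes, then react to its path.
public class CameraFileListener implements FileSystemJournalListener {
    private long lastUSN; // last journal update-sequence-number we processed

    public void fileJournalChanged() {
        long usn = FileSystemJournal.getNextUSN();
        for (long i = usn - 1; i >= lastUSN; i--) {
            FileSystemJournalEntry entry = FileSystemJournal.getEntry(i);
            if (entry != null
                    && entry.getEvent() == FileSystemJournalEntry.FILE_ADDED
                    && entry.getPath().endsWith(".jpg")) {
                // entry.getPath() is the newly captured photo
                break;
            }
        }
        lastUSN = usn;
    }
}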
