Detect an object and take photo - iOS

You might have seen the option on some Samsung phones where the camera takes the photo automatically when a person smiles: it somehow detects the smile and then triggers the shutter. I'm trying to create something similar on iOS, so that, say, if the camera detects a chair it takes the photo. I've searched around and found a library called OpenCV, but I'm not sure whether it works with iOS. There is also the concept of Core Image in iOS, which has something to do with deeper understanding of images. Any ideas about this?

OpenCV for iOS
For detection you can use the OpenCV framework on iOS, or the native detection methods. In my application I use OpenCV rectangle detection; the flow is: after a picture is taken, OpenCV detects a rectangle in the image and draws lines along the detected shape. It can also crop the image and apply perspective correction.
Options: face detection, shape detection
Native way:
iOS provides real-time detection natively; there are many tutorials on how to use it, which I will link at the end of this answer. The native way also gives you face detection, shape detection, and perspective correction.
Conclusion:
The choice is up to you, but I prefer the native way (a sketch follows the tutorial links below). Remember that OpenCV is written in C++; if you are using Swift you can import OpenCV into your project and then bridge Swift to Objective-C to call OpenCV, using bridging headers.
Tutorials:
Medium Link 1
Medium Link 2
Toptal Tutorial
How to use OpenCV in iOS
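To tie this back to the original question (detect a chair, then shoot): here is a minimal sketch of the native route, assuming you've added a classification Core ML model such as MobileNetV2 from Apple's model gallery. The class name and the 0.8 threshold are illustrative, not part of any API:

```swift
import Vision
import CoreML

/// Classifies one camera frame and fires `capture()` when a chair is seen.
/// MobileNetV2 is an assumption: any classification .mlmodel added to the
/// project works (Xcode generates the Swift wrapper class automatically).
final class ChairTrigger {
    private let model: VNCoreMLModel

    init() throws {
        model = try VNCoreMLModel(for: MobileNetV2(configuration: .init()).model)
    }

    func analyze(_ pixelBuffer: CVPixelBuffer, capture: @escaping () -> Void) {
        let request = VNCoreMLRequest(model: model) { request, _ in
            guard let top = (request.results as? [VNClassificationObservation])?.first
            else { return }
            // 0.8 is an arbitrary confidence threshold; tune it as needed.
            if top.identifier.contains("chair"), top.confidence > 0.8 {
                DispatchQueue.main.async(execute: capture)
            }
        }
        try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer).perform([request])
    }
}
```

You would call analyze(_:capture:) from AVCaptureVideoDataOutput's captureOutput(_:didOutput:from:) callback and trigger AVCapturePhotoOutput inside the closure.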

Related

Edit the contours of faces in images using ML Kit Swift

I was able to build an iOS app that identifies facial features in images using Google's ML Kit Face Contour API with Firebase ML Kit, following this codelab: https://codelabs.developers.google.com/codelabs/mlkit-ios/#0
My next step is to edit the contours of faces, such as changing the lip colour and brightening the eyes. Any idea where I can find a reference on how to do this?
My end goal is a selfie editor app similar to what the Facetune app does.
I'm afraid ML Kit currently doesn't support editing the contours; it only returns the contour points, so the editing itself has to be done with other tools such as Core Graphics or Core Image.
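That said, since ML Kit gives you the contour points, you can do the editing yourself on top of them. A minimal sketch with Core Graphics, where the tint function and its parameters are my own illustration (not an ML Kit API) and `points` are assumed to be ML Kit contour points already converted to CGPoint in image coordinates:

```swift
import UIKit

/// Tints a closed contour region (e.g. the lips) on top of a source image.
func tint(_ image: UIImage, contour points: [CGPoint],
          color: UIColor, alpha: CGFloat = 0.35) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { ctx in
        image.draw(at: .zero)
        guard points.count > 2 else { return }
        let path = UIBezierPath()
        path.move(to: points[0])
        points.dropFirst().forEach { path.addLine(to: $0) }
        path.close()
        color.withAlphaComponent(alpha).setFill()
        // .multiply keeps the lip texture while shifting its colour.
        ctx.cgContext.setBlendMode(.multiply)
        path.fill()
    }
}
```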

Apple Vision Framework Identify face

Is it possible with the Apple Vision framework to compare faces and recognise whether a person appears in a picture, given a reference image of that person?
Something like Facebook's face recognition.
Thomas
From the Vision Framework Documentation:
The Vision framework performs face and face landmark detection, text detection, barcode recognition, image registration, and general feature tracking. Vision also allows the use of custom Core ML models for tasks like classification or object detection.
So, no, the Vision Framework does not provide face recognition, only face detection.
There are approaches out there to recognize faces. Here is an example of face recognition in an AR app:
https://github.com/NovatecConsulting/FaceRecognition-in-ARKit
They trained a model that can recognize around 100 people, but you have to retrain it for every new person you want to recognize. Unfortunately, you cannot just pass in two images and have the faces compared.
According to the Face Detection vs Face Recognition article:
Face detection just means that a system is able to identify that there is a human face present in an image or video. For example, face detection can be used for the auto-focus functionality of cameras.
Face recognition describes a biometric technology that goes far beyond merely detecting that a human face is present; it actually attempts to establish whose face it is.
But...
In case you need an Augmented Reality app like FaceApp, the answer is:
Yes, you can create an app similar to FaceApp using ARKit.
That's because you only need a simple form of face recognition, which is accessible via the ARKit or RealityKit framework. You do not even need to create an .mlmodel as you do when using the Vision and Core ML frameworks.
All you need is a device with a front camera, which lets you detect up to three faces at a time using ARKit 3.0 or RealityKit 1.0. Look at the following Swift code for how to get an ARFaceAnchor when a face has been detected.
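Here is a minimal sketch of that (standard ARKit face-tracking setup; the SceneKit view outlet is assumed to exist in the storyboard):

```swift
import ARKit

class FaceDetectionViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!   // assumed to be wired up in the storyboard

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        guard ARFaceTrackingConfiguration.isSupported else { return }
        sceneView.delegate = self
        let config = ARFaceTrackingConfiguration()
        config.maximumNumberOfTrackedFaces = 3   // ARKit 3.0+
        sceneView.session.run(config)
    }

    // ARKit calls this when it adds an anchor; check whether it's a face.
    func renderer(_ renderer: SCNSceneRenderer,
                  didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor else { return }
        print("Face detected: \(faceAnchor.transform)")
    }
}
```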
Additionally, if you want to use reference images for simple detection, put several reference images into an AR Resource Group (.arresourcegroup) in Xcode's asset catalog and use the following Swift code as an additional condition to get an ARImageAnchor (placed at the center of a detected image).
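A sketch for that case (the group name "AR Resources" is Xcode's default and just an example):

```swift
import ARKit

func runImageDetection(on sceneView: ARSCNView) {
    // "AR Resources" is the default resource group name in the asset catalog.
    guard let refImages = ARReferenceImage.referenceImages(
        inGroupNamed: "AR Resources", bundle: nil) else { return }
    let config = ARWorldTrackingConfiguration()
    config.detectionImages = refImages
    sceneView.session.run(config)
}

// In the same ARSCNViewDelegate as above:
func renderer(_ renderer: SCNSceneRenderer,
              didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    // The anchor sits at the center of the detected reference image.
    print("Detected image: \(imageAnchor.referenceImage.name ?? "unnamed")")
}
```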

Reproduce the new scanning feature in iOS 11 Notes

Does anyone know how to reproduce the new scanning feature in Notes in iOS 11?
Is AVFoundation used for the camera?
How is the camera detecting the shape of the paper/document/card?
How do they place the overlay over in real time?
How does the camera know when to take the photo?
What's that animated overlay and how can we achieve this?
Does anyone know how to reproduce this?
Not exactly :P
Is AVFoundation used for the camera? Yes
How is the camera detecting the shape of the paper/document/card?
They are using the Vision Framework to do rectangle detection.
This is stated in this WWDC session by one of the presenters.
How do they place the overlay over in real time?
You should check out the above video for this, as the presenter talks about doing something similar in one of the demos.
How does the camera know when to take the photo?
I'm not familiar with this app, but it's surely triggered in the capture session, no?
What's that animated overlay and how can we achieve this?
Not sure about this, but I'd imagine it's some kind of CALayer with animation.
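If it is, a CAShapeLayer whose path you animate between detections would be one way to get that effect. A small sketch, where the corner points are assumed to come from the rectangle detection:

```swift
import UIKit

/// Animates a translucent CAShapeLayer to a newly detected quadrilateral.
/// `corners` are assumed to be in the overlay layer's coordinate space.
func move(_ overlay: CAShapeLayer, to corners: [CGPoint]) {
    guard corners.count == 4 else { return }
    let path = UIBezierPath()
    path.move(to: corners[0])
    corners.dropFirst().forEach { path.addLine(to: $0) }
    path.close()
    // Animating `path` produces the sliding/morphing highlight effect.
    let anim = CABasicAnimation(keyPath: "path")
    anim.fromValue = overlay.path
    anim.toValue = path.cgPath
    anim.duration = 0.15
    overlay.add(anim, forKey: "path")
    overlay.path = path.cgPath
}
```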
Is Tesseract framework used for the image afterwards?
Isn't Tesseract OCR for text?
If you're looking for handwriting recognition, you might want to look for a MNIST model
Use Apple’s rectangle detection SDK, which provides an easy-to-use API that can identify rectangles in still images or video sequences in near-realtime. The algorithm works very well in simple scenes with a single prominent rectangle in a clean background, but is less accurate in more complicated scenes, such as capturing small receipts or business cards in cluttered backgrounds, which are essential use-cases for our scanning feature.
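The Vision call itself is small. A minimal sketch, with illustrative thresholds:

```swift
import Vision

/// Runs rectangle detection on one camera frame and returns the best match.
func detectRectangle(in pixelBuffer: CVPixelBuffer,
                     completion: @escaping (VNRectangleObservation?) -> Void) {
    let request = VNDetectRectanglesRequest { request, _ in
        // We asked for at most one observation, so take the first.
        completion(request.results?.first as? VNRectangleObservation)
    }
    request.minimumConfidence = 0.8   // illustrative threshold
    request.maximumObservations = 1
    try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer).perform([request])
}
```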
An image processor that identifies notable features (such as faces and barcodes) in a still image or video.
https://developer.apple.com/documentation/coreimage/cidetector
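CIDetector predates Vision and is even simpler to call. A minimal sketch for rectangles:

```swift
import CoreImage

let detector = CIDetector(ofType: CIDetectorTypeRectangle,
                          context: nil,
                          options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])

/// Returns the most prominent rectangle, if any; the feature exposes
/// topLeft/topRight/bottomLeft/bottomRight corner points.
func rectangle(in image: CIImage) -> CIRectangleFeature? {
    detector?.features(in: image).first as? CIRectangleFeature
}
```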

Motion Sensing by Camera in iOS

I am working on an iOS app that should trigger an event when the camera detects changes in the image, i.e. motion. I am not asking about face recognition or tracking a particular colored object; all the results I found when searching were about OpenCV, and I also found that this can be achieved using the gyroscope and accelerometer together, but how?
I am a beginner in iOS. So my question is: is there any framework or easy way to detect motion (motion sensing) with the camera, and how do I achieve it?
For example, if I move my hand in front of the camera, the app should show some message or alert.
Please give me some useful and easy-to-understand links about this.
Thanks
If all you want is some kind of crude motion detection, my open source GPUImage framework has a GPUImageMotionDetector within it.
This admittedly simple motion detector does frame-to-frame comparisons, based on a low-pass filter, and can identify the number of pixels that have changed between frames and the centroid of the changed area. It operates on live video and I know some people who've used it for motion activation of functions in their iOS applications.
Because it relies on pixel differences and not optical flow or feature matching, it can be prone to false positives and can't track discrete objects as they move in a frame. However, if all you need is basic motion sensing, this is pretty easy to drop into your application. Look at the FilterShowcase example to see how it works in practice.
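For reference, a sketch of how that looks from Swift, assuming GPUImage is exposed through a bridging header (the 0.01 intensity threshold is illustrative):

```swift
// GPUImage is Objective-C; expose it via a bridging header.
let camera = GPUImageVideoCamera(
    sessionPreset: AVCaptureSession.Preset.vga640x480.rawValue,
    cameraPosition: .back)
let motionDetector = GPUImageMotionDetector()

func startMotionSensing() {
    camera?.addTarget(motionDetector)
    motionDetector.motionDetectionBlock = { centroid, intensity, frameTime in
        // Intensity reflects the fraction of changed pixels; 0.01 is illustrative.
        if intensity > 0.01 {
            print("Motion at \(centroid), intensity \(intensity)")
        }
    }
    camera?.startCapture()
}
```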
I don't exactly understand what you mean here:
I am not asking about face recognition or tracking a particular colored object; all the results I found when searching were about OpenCV
But I would suggest going with OpenCV, as you can use OpenCV on iOS. Here is a good link that helps you set up OpenCV on iOS.
There are lots of OpenCV motion detection examples online, and here is one of them, which you can make use of.
You need to convert the UIImage (the image type in iOS) to cv::Mat or IplImage and pass it to the OpenCV algorithms. You can convert using this link or this one.

Facedetection in iOS

I'm currently working on a project where I need to detect a face and then take a photo with the camera (after the camera has focused everything correctly).
Is something like this possible in iOS?
Are there any good tutorials on this?
I would suggest using OpenCV for this, as it has proven algorithms and is fast enough to work on images as well as video:
https://github.com/aptogo/FaceTracker
https://github.com/mjp/FaceRecognition
This solution will also work on Android, using the OpenCV port for Android.
Use GPUImage for face detection.
A face detection example is also available in GPUImage; see the last point in the FilterShowcase example project.
iOS 10 and Swift 3
You can check Apple's example of how to detect a face:
https://developer.apple.com/library/content/samplecode/AVCamBarcode/Introduction/Intro.html
You can select the face metadata to make the camera track the face and show a yellow box on it; it has better performance than this example:
https://github.com/wayn/SquareCam-Swift
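The core of the metadata approach is small. A minimal sketch (capture-session input and preview setup omitted):

```swift
import AVFoundation

final class FaceCaptureController: NSObject,
    AVCaptureMetadataObjectsDelegate, AVCapturePhotoCaptureDelegate {

    let session = AVCaptureSession()          // input/preview setup omitted
    let photoOutput = AVCapturePhotoOutput()
    private var hasCaptured = false           // naive debounce

    func configureFaceDetection() {
        let metadataOutput = AVCaptureMetadataOutput()
        guard session.canAddOutput(metadataOutput),
              session.canAddOutput(photoOutput) else { return }
        session.addOutput(metadataOutput)
        session.addOutput(photoOutput)
        metadataOutput.setMetadataObjectsDelegate(self, queue: .main)
        // Types must be set after the output has been added to the session.
        metadataOutput.metadataObjectTypes = [.face]
    }

    // Fires whenever face metadata is present in the frame.
    func metadataOutput(_ output: AVCaptureMetadataOutput,
                        didOutput metadataObjects: [AVMetadataObject],
                        from connection: AVCaptureConnection) {
        guard !hasCaptured,
              metadataObjects.contains(where: { $0.type == .face }) else { return }
        hasCaptured = true
        photoOutput.capturePhoto(with: AVCapturePhotoSettings(), delegate: self)
    }

    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        // Save or display photo.fileDataRepresentation() here.
    }
}
```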
