I am trying to apply a filter with GPUImageVideoCamera, and on top of the GPUImageVideoCamera I am trying to add ARKit. But when the ARKit session starts, GPUImageVideoCamera stops working; it seems to pause.
I have also tried keeping the GPUImageVideoCamera-related part in view controller A and presenting a view controller B with ARKit, but it has the same issue.
Any hint or help will be appreciated.
Thanks in advance.
You want ARKit to share the same camera with a GPUImageVideoCamera instance? It's probably better to make use of the ARKit session's own features. According to the documentation: "An ARSession object coordinates the major processes that ARKit performs on your behalf to create an augmented reality experience. These processes include reading data from the device's motion sensing hardware, controlling the device's built-in camera, and performing image analysis on captured camera images..."
What about using the "currentFrame" property instead?
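For example, something along these lines (a minimal sketch; `session` stands for whatever ARSession you're running, and the sepia filter is just a stand-in for your GPUImage filter):

```swift
import ARKit
import CoreImage

// A minimal sketch: grab the frame ARKit has already captured and filter it
// with Core Image, instead of running a second GPUImageVideoCamera on the
// same physical camera.
func filteredImage(from session: ARSession) -> CIImage? {
    guard let frame = session.currentFrame else { return nil }

    // capturedImage is the raw CVPixelBuffer for the current camera frame.
    let cameraImage = CIImage(cvPixelBuffer: frame.capturedImage)

    // Any CIFilter works here; sepia is just an example.
    let filter = CIFilter(name: "CISepiaTone")
    filter?.setValue(cameraImage, forKey: kCIInputImageKey)
    filter?.setValue(0.8, forKey: kCIInputIntensityKey)
    return filter?.outputImage
}
```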
Related
I'm creating a game that adds filters and 3D objects in real time, but I can't configure the back camera for face tracking. For example, I need to add a 3D object to a person using the back camera, and it has to track their rotation and movement. Thanks a lot.
The AugmentedFaces API works only with the front camera.
You can check out the shared camera access documentation and see if you can combine ARCore with Face Detection from Firebase or OpenCV.
Does anyone know how to reproduce the new document scanning feature in Notes in iOS 11?
Is AVFoundation used for the camera?
How is the camera detecting the shape of the paper/document/card?
How do they place the overlay over it in real time?
How does the camera know when to take the photo?
What's that animated overlay and how can we achieve this?
Does anyone know how to reproduce this?
Not exactly :P
Is AVFoundation used for the camera? Yes
How is the camera detecting the shape of the paper/document/card?
They are using the Vision Framework to do rectangle detection.
It's stated in this WWDC session by one of the demonstrators.
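A rough sketch of what that looks like with Vision (the pixel buffer would come from your capture output delegate; the 0.8 confidence threshold is just an example value):

```swift
import Vision
import CoreVideo

// A sketch of Vision rectangle detection on a single camera frame.
func detectRectangle(in pixelBuffer: CVPixelBuffer) {
    let request = VNDetectRectanglesRequest { request, error in
        guard let rect = request.results?.first as? VNRectangleObservation else { return }
        // Corner points are normalized (0...1); convert them to view
        // coordinates before drawing an overlay.
        print(rect.topLeft, rect.topRight, rect.bottomLeft, rect.bottomRight)
    }
    request.minimumConfidence = 0.8

    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])
}
```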
How do they place the overlay over it in real time?
You should check out the above video for this, as he talks about doing something similar in one of the demos.
How does the camera know when to take the photo?
I'm not familiar with this app but it's surely triggered in the capture session, no?
What's that animated overlay and how can we achieve this?
Not sure about this but I'd imagine it's some kind of CALayer with animation
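If you wanted to try something similar yourself, here's a sketch (purely my guess at the technique; `corners` would be the detected rectangle's corner points already converted to view coordinates):

```swift
import UIKit

// Guess at the technique: draw the detected quad as a CAShapeLayer and
// animate its path whenever a new rectangle observation arrives.
func updateOverlay(_ overlay: CAShapeLayer, with corners: [CGPoint]) {
    guard let first = corners.first else { return }
    let path = UIBezierPath()
    path.move(to: first)
    corners.dropFirst().forEach { path.addLine(to: $0) }
    path.close()

    // Animate from the previous path to the new one so the highlight
    // appears to "snap" onto the document in real time.
    let animation = CABasicAnimation(keyPath: "path")
    animation.fromValue = overlay.path
    animation.toValue = path.cgPath
    animation.duration = 0.15
    overlay.add(animation, forKey: "path")
    overlay.path = path.cgPath
}
```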
Is Tesseract framework used for the image afterwards?
Isn't Tesseract OCR for text?
If you're looking for handwriting recognition, you might want to look for a MNIST model
Use Apple's rectangle detection SDK, which provides an easy-to-use API that can identify rectangles in still images or video sequences in near-realtime. The algorithm works very well in simple scenes with a single prominent rectangle against a clean background, but is less accurate in more complicated scenes, such as capturing small receipts or business cards against cluttered backgrounds, which are essential use cases for a scanning feature.
CIDetector is an image processor that identifies notable features (such as faces and barcodes) in a still image or video:
https://developer.apple.com/documentation/coreimage/cidetector
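For example, a minimal CIDetector sketch for rectangle detection (assuming you start from a UIImage):

```swift
import CoreImage
import UIKit

// A small sketch using CIDetector to find the most prominent rectangle
// in a still image.
func detectRectangle(in image: UIImage) -> CIRectangleFeature? {
    guard let ciImage = CIImage(image: image) else { return nil }
    let detector = CIDetector(ofType: CIDetectorTypeRectangle,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    return detector?.features(in: ciImage).first as? CIRectangleFeature
}
```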
I'm trying to utilize the benefits of ARKit's camera tracking for a game, but without actually having to position the player at the position of the camera.
Does ARKit provide a way to do so, or am I forced to use a different way of camera/device tracking which is as good as the one provided by ARKit?
How can we access Front Facing Camera Images with ARCamera or ARSCNView and is it possible to record ARSCNView just like Camera Recording?
Regarding the front-facing camera: in short, no.
ARKit offers two basic kinds of AR experience:
World Tracking (ARWorldTrackingConfiguration), using the back-facing camera, where a user looks "through" the device at an augmented view of the world around them. (There's also AROrientationTrackingConfiguration, which is a reduced quality version of world tracking, so it still uses only the back-facing camera.)
Face Tracking (ARFaceTrackingConfiguration), supported only with the front-facing TrueDepth camera on iPhone X, where the user sees an augmented view of themselves in the front-facing camera view. (As @TawaNicolas notes, Apple has sample code here... which, until iPhone X actually becomes available, you can read but not run.)
In addition to the hardware requirement, face tracking and world tracking are mostly orthogonal feature sets. So even though there's a way to use the front-facing camera (on iPhone X only), it doesn't give you an experience equivalent to what you get with the back-facing camera in ARKit.
Regarding video recording in the AR experience: you can use ReplayKit in an ARKit app the same as in any other app.
If you want to record just the camera feed, there isn't a high level API for that, but in theory you might have some success feeding the pixel buffers you get in each ARFrame to AVAssetWriter.
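Something along these lines might work (an untested sketch; error handling and finishing the file with finishWriting are omitted):

```swift
import ARKit
import AVFoundation

// An untested sketch of the AVAssetWriter idea: append each ARFrame's
// capturedImage pixel buffer to a video file as it arrives.
final class FrameRecorder {
    private let writer: AVAssetWriter
    private let input: AVAssetWriterInput
    private let adaptor: AVAssetWriterInputPixelBufferAdaptor
    private var sessionStarted = false

    init(outputURL: URL, width: Int, height: Int) throws {
        writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)
        input = AVAssetWriterInput(mediaType: .video, outputSettings: [
            AVVideoCodecKey: AVVideoCodecType.h264,
            AVVideoWidthKey: width,
            AVVideoHeightKey: height
        ])
        input.expectsMediaDataInRealTime = true
        adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: input,
                                                       sourcePixelBufferAttributes: nil)
        writer.add(input)
        writer.startWriting()
    }

    // Call this from ARSessionDelegate's session(_:didUpdate:) with each ARFrame.
    func append(_ frame: ARFrame) {
        guard input.isReadyForMoreMediaData else { return }
        let time = CMTime(seconds: frame.timestamp, preferredTimescale: 600)
        if !sessionStarted {
            // Start the movie timeline at the first frame's timestamp.
            writer.startSession(atSourceTime: time)
            sessionStarted = true
        }
        adaptor.append(frame.capturedImage, withPresentationTime: time)
    }
}
```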
As far as I know, ARKit with Front Facing Camera is only supported for iPhone X.
Here's Apple's sample code regarding this topic.
If you want to access the UIKit or AVFoundation cameras, you still can, but separately from ARSCNView. E.g., I'm loading UIKit's UIImagePickerController from an IBAction and it is a little awkward to do so, but it works for my purposes (loading/creating image and video assets).
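Roughly what I mean (a sketch; the delegate methods for handling the picked media are omitted):

```swift
import UIKit

// A view controller that presents the stock camera picker from an IBAction,
// entirely independent of any ARSCNView.
class PickerViewController: UIViewController,
                            UIImagePickerControllerDelegate,
                            UINavigationControllerDelegate {

    @IBAction func pickImage(_ sender: UIButton) {
        guard UIImagePickerController.isSourceTypeAvailable(.camera) else { return }
        let picker = UIImagePickerController()
        picker.sourceType = .camera
        picker.delegate = self
        present(picker, animated: true)
    }
}
```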
Can anyone help me to detect realtime objects in iPhone camera using OpenCV?
My actual objective is to give an alert to users when an object interferes with a specific location of my application's camera view.
My current thinking is to capture an image with respect to my camera overlay view, which represents a specific location of my camera view, and then process that image using OpenCV to detect objects by color. If I can identify an object in that image, I will give an alert to the user in the camera overlay itself. However, I couldn't figure out how to detect an object from a UIImage.
Please point me in the right direction if anyone knows some other good way to achieve my goal. Thanks in advance.
I solved my issue in the following way (a capture-side sketch follows the list):
Created an image capture module with AVFoundation classes (AVCaptureSession)
Capturing image buffers continuously through a timer working along with the camera module
Processing the captured frames to find objects through OpenCV
(cropping, grayscale, thresholding, feature detection, etc.)
Referral link: http://docs.opencv.org/doc/tutorials/tutorials.html
Alerting the user through an animated camera overlay view
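Here is a sketch of the capture side only (I've used a sample-buffer delegate instead of the timer, which achieves the same continuous frame capture; the OpenCV processing itself lives in an Objective-C++ wrapper and isn't shown):

```swift
import AVFoundation

// A sketch: frames arrive as CVPixelBuffers in the delegate callback,
// which you can then convert and hand to OpenCV for cropping,
// thresholding, feature detection, etc.
final class FrameGrabber: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    private let output = AVCaptureVideoDataOutput()
    private let queue = DispatchQueue(label: "camera.frames")

    func start() {
        guard let camera = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: camera) else { return }
        session.addInput(input)
        output.setSampleBufferDelegate(self, queue: queue)
        session.addOutput(output)
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        // Pass pixelBuffer to the OpenCV wrapper here.
        _ = pixelBuffer
    }
}
```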
Anyway, object detection through image processing alone is not very accurate. We would need an object sensor (like the depth sensor in a Kinect camera or similar) to detect objects reliably in a live stream, or perhaps build an AI model to make it work well.