Avoid scene monitoring in AVFoundation? - ios

In my current project I need very specific control over the AVCaptureDevice and its lighting settings (ISO, exposure, flashMode and even torchMode). I am not trying to get "high quality" photos in the sense of typical photography, but it is really important for me to be able to control the camera settings precisely in order to get usable photos.
This poses no problem, as long as I am keeping the flash turned off.
But when I set the flashMode to .on, scene monitoring is enabled in the capturePhoto() method, which determines the flash intensity and auto-focuses while using the torch, even though torchMode is set to .off.
https://developer.apple.com/documentation/avfoundation/avcapturephotooutput/1778634-photosettingsforscenemonitoring
The flash is required in every photo taken, regardless of lighting conditions. So my question is: is there a way to avoid scene monitoring and have the capturePhoto() method always fire the flash without using the torch?
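For reference, this is roughly the setup in question; a minimal sketch with placeholder names for the device, output and delegate (not actual code from my project):

```swift
import AVFoundation

// Sketch: lock ISO and exposure duration manually, keep the torch off,
// then request the flash for every capture. All names here are placeholders.
func configureAndCapture(device: AVCaptureDevice,
                         photoOutput: AVCapturePhotoOutput,
                         delegate: AVCapturePhotoCaptureDelegate) {
    do {
        try device.lockForConfiguration()
        // Fix ISO and exposure duration instead of letting auto-exposure decide.
        // The values must lie within the device's supported ranges.
        device.setExposureModeCustom(duration: CMTime(value: 1, timescale: 100),
                                     iso: 400,
                                     completionHandler: nil)
        if device.isTorchModeSupported(.off) {
            device.torchMode = .off
        }
        device.unlockForConfiguration()
    } catch {
        print("Could not lock device for configuration: \(error)")
    }

    let settings = AVCapturePhotoSettings()
    settings.flashMode = .on   // this is what enables scene monitoring
    photoOutput.capturePhoto(with: settings, delegate: delegate)
}
```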
Thanks for your help!

Related

Can ARKit (or ARSession) keep running in the background?

My question is - can ARSession run in the background? Basically I need ARKit running and all the per-frame information and camera intrinsics, but I DON'T want to render the camera feed on the screen, as happens in ARSCNView.
I don't specifically mean a background thread or process.
Basically I just want to use the tracking information (images + camera position + camera Euler angles, etc.) from ARKit and don't want to render anything in AR per se, or the camera feed.
Before everyone jumps on me - I know that Apple restricts GPU work in the background - case in point:
Execution of the command buffer was aborted due to an error during
execution. Insufficient Permission (to submit GPU work from
background) (IOAF code 6)
But there should be a way to use the ARKit or ARSession without the camera feed and only with the tracking information, right?
There is no way to do what you want to do; as soon as the ARSCNView or ARSKView is fully covered, AR tracking stops. There are a couple of reasons for this.
First, ARKit is for Augmented Reality, not tracking operations. If you aren't displaying reality (the camera feed) on the screen then you cannot augment reality.
Second, and most important, Apple cares about user privacy and if you could access the camera without showing the feed onscreen the user would have no way of knowing when the camera was in use. This would leave open the possibility of apps spying on users without their knowledge. Apple will never allow this. Ever.
Also, AR hits the battery hard, so it only runs when it's actually being used.
Based on your comments it seems as though you don't actually want to create an AR experience anyway so there may be a better way to get what you are after through Heading information and the Accelerometer.
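If plain motion and heading data are enough for your purposes, here is a minimal sketch of the Core Motion / Core Location route (the class name and update interval are placeholders; note this gives device attitude, acceleration and magnetic heading, not ARKit's visual-inertial tracking):

```swift
import CoreMotion
import CoreLocation

// Sketch: device attitude, acceleration and magnetic heading without ARKit.
// No camera feed is involved, so none of the AR restrictions apply.
final class MotionTracker: NSObject, CLLocationManagerDelegate {
    private let motionManager = CMMotionManager()
    private let locationManager = CLLocationManager()

    func start() {
        locationManager.delegate = self
        locationManager.requestWhenInUseAuthorization()
        locationManager.startUpdatingHeading()

        motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
        motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
            guard let motion = motion else { return }
            // Roll / pitch / yaw in radians, plus user acceleration in g.
            print(motion.attitude.roll, motion.attitude.pitch, motion.attitude.yaw,
                  motion.userAcceleration)
        }
    }

    func locationManager(_ manager: CLLocationManager, didUpdateHeading newHeading: CLHeading) {
        print("Magnetic heading: \(newHeading.magneticHeading)")
    }
}
```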

Is it possible to create a custom flashMode in AVFoundation?

Today I found out that settings like a custom exposure mode or a fixed lens position for focus, which I set on the AVCaptureDevice in my AVCaptureSession, conflict with the flashMode = .on option of AVCapturePhotoSettings, as it alters the previously specified ISO and exposure duration values, etc.
So my question is: Is it possible to specify a custom flashMode that does not analyze the viewed scene and uses strictly pre-defined settings?
For me it is more valuable to be in control of these settings, even though it might lead to a loss of photo quality.

Placing objects automatically when a ground plane is detected with Vuforia

I'm working on an application where the concept is that you can 'select' objects before actually placing them. So what I wanted to do was have some low-quality objects on a shelf or something like it. When the user selects an object, he can then tap to place the high-quality version of it in his area for further viewing.
I was wondering if this is possible with Vuforia. I wanted to use this platform since it works well from what I could tell and it's cross-platform (the application needs to run on Android and the HoloLens).
I have set up the basic application where you can place a capsule in the area. Now I want to automatically place the capsule once Vuforia has detected a ground plane. From what I could see, the plane finder has events that fire when an input is detected, but I couldn't find an event that fires when the ground plane itself is detected. Is this possible with Vuforia? I know it's doable on the HoloLens, but I would like to know if it's possible on Android or other mobile devices. I really don't know where to start looking, so I hope someone can point me in the right direction.
Let me know if I need to include more information!
The Vuforia PlaneFinderBehaviour (see the Vuforia documentation) has the event OnAutomaticHitTest, which fires every frame in which a ground plane is detected.
So you can use it to automatically spawn an object.
You have to add your method to the On Automatic Hit Test list instead of the On Interactive Hit Test list of the "Plane Finder".
I've heard that Vuforia Fusion does not yet support ARCore (it supports ARKit), so it uses an internal implementation to simulate ARCore functionality, and they are waiting for a final release of ARCore to support it. Many users have reported that their objects move even when they use an ARCore-supported device.

Do ARKit anchors persist after pause and run again?

I am considering developing an ARKit app, but before deciding to buy an iPhone I would like to ask two questions that are crucial for me. Please let me know if this has already been asked; I could not find it anywhere else online.
The questions:
1. Let's say motion tracking gets lost (e.g., when pointing at a white wall) and then recovers. Does it localize in the same frame of reference, or does it start from scratch? Also, are the anchors preserved?
2. Let's say I pause the session and then run it again (e.g., by leaving the app and then coming back). Does it localize back to the frame of reference from before the pause? Also, are the anchors preserved?
I am asking this because I know that localization does not work in ARCore yet and I was wondering about its state in ARKit.
Thanks!
ARKit has two or three ways to lose tracking (depending on how you think of them); each has a different effect on anchors.
1. TemporARy tracking quality issues
(I honestly fumbled caps lock in the middle of that word. My keyboard is making the puns for me!)
In the first situation you mention, and similar cases (pointing at a blank wall, giving the phone a sudden jostle, moving from a darkened area into bright light or vice versa), your app will get notified of changes to ARKit's tracking state that affect the quality of camera pose tracking.
When the tracking state is limited, ARKit’s idea of where the world is might be out of sync with the real world, but it still has enough information to be able to relocalize when the situation passes. That includes anchors. (Try for yourself; run one of Apple’s ARKit sample code projects, and cover the camera lens for a bit while moving the phone.)
If whatever situation is affecting the tracking state persists for a long time, relocalization is unlikely to succeed. It can help to track how long you’ve been in limited tracking and offer the user a way to restart the session if things get too out of whack.
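If you want to react to those notifications, here is a minimal sketch of an ARSessionObserver implementation (the class name and printed messages are placeholders):

```swift
import ARKit

// Sketch: reacting to tracking state changes. Class name and messages are placeholders.
final class TrackingStateWatcher: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
        switch camera.trackingState {
        case .normal:
            print("Tracking normal; camera pose and anchors are reliable")
        case .limited(let reason):
            // Reasons include .initializing, .excessiveMotion, .insufficientFeatures
            // (e.g. a blank wall) and, on iOS 11.3+, .relocalizing.
            print("Tracking limited: \(reason)")
        case .notAvailable:
            print("Tracking not available")
        }
    }
}
```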
2 and 3. Session interruption and resume or restart
If something happens that interrupts ARKit's ability to receive camera and motion data (like the incoming phone call screen on iPhone, or the user responding to an interactive notification), your app gets a sessionWasInterrupted message. There's nothing you can do in this case (as far as session management is concerned) other than wait for the corresponding sessionInterruptionEnded message.
If the interruption was brief and the device hasn't moved much since, there's a chance of automatic relocalization. Of course, you can't tell how much the device has moved because motion tracking was off, so you can only make an educated guess based on the duration of the interruption and how sensitive your AR experience is to tracking precision, and decide accordingly whether to restart the session. (For example, a game that has space invaders floating in the air is less affected than an app that lets the user trace out a floor plan by marking walls.)
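A minimal sketch of measuring that interruption duration (the five-second threshold is an arbitrary placeholder; what counts as "too long" depends on your app):

```swift
import ARKit

// Sketch: measuring how long an interruption lasted via ARSessionObserver callbacks.
// The restart itself is shown further down; here we only record the duration.
final class InterruptionWatcher: NSObject, ARSessionDelegate {
    private var interruptionStart: Date?

    func sessionWasInterrupted(_ session: ARSession) {
        interruptionStart = Date()   // nothing to do but wait for the interruption to end
    }

    func sessionInterruptionEnded(_ session: ARSession) {
        let duration = interruptionStart.map { Date().timeIntervalSince($0) } ?? 0
        interruptionStart = nil
        // Arbitrary placeholder threshold: treat anything longer than five seconds
        // as "probably moved too far to relocalize".
        if duration > 5 {
            print("Interruption lasted \(duration) s; consider restarting the session")
        }
    }
}
```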
Aside: Traditional iOS UI patterns like modal view controllers, tab views, and navigation controllers can push the view hosting an AR session away, interrupting the session and losing tracking. Like Apple’s Human Interface Guidelines for AR suggest, it can be good to use things like popover views instead, so that you keep the AR experience onscreen and the session running.
When/if you do restart your AR session, you have a choice of whether to keep anchors or reset tracking. If you've already lost localization, what this really means is whether you keep the anchors in the arbitrary coordinate space they're defined in (even though that space no longer lines up with the real world), or just lose all the anchors.
Short of restarting the session, though, there's nothing that will cause anchors to be removed. And if you lose tracking briefly enough to get relocalization, anchors that track real-world objects (that is, plane anchors, as opposed to the ones you create manually) should adjust back to realistic positions even if the coordinate system doesn't quite line up the way it used to.
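To make the restart choice above concrete, here is a sketch of the run options involved, assuming a world-tracking configuration (the wrapper function is a placeholder, not an ARKit API):

```swift
import ARKit

// Sketch: the two restart flavors. The wrapper function is a placeholder, not ARKit API.
func restartSession(_ session: ARSession, keepingAnchors: Bool) {
    let configuration = ARWorldTrackingConfiguration()
    if keepingAnchors {
        // New coordinate system, but existing anchors survive (they may no longer
        // line up with the real world unless relocalization succeeds).
        session.run(configuration, options: [.resetTracking])
    } else {
        // Start over completely: new coordinate system, all anchors removed.
        session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
    }
}
```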

Camera setup logic iOS

Although I've searched SO and read the documentation multiple times on AVCaptureConnection, AVCaptureSession, AVCaptureVideoPreviewLayer, AVCaptureDevice, AVCaptureInput/Output … I'm still confused about all this AV stuff. When it comes to this, it's one big pile of abstract words to me that don't make much sense. I'm asking someone to shed some light on the subject for me here.
So, can anyone explain coherently in plain English the logic of proper setup and use of the media devices? What is AVCaptureVideoPreviewLayer? What is AVCaptureConnection? Input/Output?
I want to catch the basic idea the people who made this stuff had while making it.
Thanks
I wish I had more time to write a more thorough reply. Here are some simplified basics:
In order to work with audio and video coming from the hardware, destined for the screen or for files, you need to set up an AVCaptureSession that helps coordinate the sources and the destinations, using AVCaptureConnections. You use the session instance to start and stop the process, along with setting some output properties like bitrate and quality. You use the AVCaptureConnection instance(s) to control the connection between an AVCaptureInputPort and an AVCaptureOutput (or an AVCaptureVideoPreviewLayer), such as monitoring audio input levels or setting the orientation of the video.
An AVCaptureInputPort represents one stream of data coming from an AVCaptureDevice (via an AVCaptureDeviceInput), which is where your video or audio originates, such as the camera or the microphone. You will normally look through all available devices and choose those that have the properties you are looking for, such as whether they provide audio, or whether they are the front-facing camera.
AVCaptureOutput is where the AV is sent - it might be a file or a routine that allows you to process the data in real-time, etc.
AVCaptureVideoPreviewLayer is a Core Animation layer optimized for very fast rendering of the output of the selected video input device (front or back camera). You typically use this to show your user what input you are working with - sort of like a camera viewfinder.
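To tie those pieces together, here is a minimal sketch of the wiring described above, device to input to session to output, plus a preview layer (names are placeholders and error handling is reduced to the bare minimum):

```swift
import AVFoundation

// Sketch: device -> input -> session -> output, plus a preview layer as the "viewfinder".
func makeCaptureSession() -> (session: AVCaptureSession, preview: AVCaptureVideoPreviewLayer)? {
    let session = AVCaptureSession()
    session.sessionPreset = .high

    // Pick a device (the back camera here) and wrap it in an input.
    guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back),
          let input = try? AVCaptureDeviceInput(device: camera),
          session.canAddInput(input) else { return nil }
    session.addInput(input)

    // Add an output: AVCapturePhotoOutput for stills, AVCaptureMovieFileOutput for movie files,
    // AVCaptureVideoDataOutput for per-frame processing, and so on.
    let photoOutput = AVCapturePhotoOutput()
    guard session.canAddOutput(photoOutput) else { return nil }
    session.addOutput(photoOutput)

    // The preview layer shows the user what the selected camera sees.
    let preview = AVCaptureVideoPreviewLayer(session: session)
    preview.videoGravity = .resizeAspectFill

    // In a real app, call startRunning() off the main thread, since it blocks.
    session.startRunning()
    return (session, preview)
}
```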
If you are going to use this stuff, then you must read Apple's AV Foundation Programming Guide
The AV Foundation Programming Guide also contains diagrams of this capture architecture (an overview plus a more detailed view) that may help you some more.
