Technique World tracking performance is being affected by resource constraints - iOS

While I'm running an ARKit session with world tracking enabled, the Xcode console shows log messages about (I presume reduced) tracking performance, even though:
the AR session is in the normal tracking state,
I am using the device in a well-lit office room with plenty of "features" to detect, and
the device moves only subtly.
TL;DR: I want to understand what could be causing these messages, what impact they could have, and how to prevent them or (re)act to them when they do occur. NB: not simply hide these errors.
[Technique] World tracking performance is being affected by resource constraints [0]
[Technique] World tracking performance is being affected by resource constraints [1]
The Console app shows that these are coming from the library ARKit and fall in the logging category Technique.
Although they sound like warnings, the Console app shows their type as errors.
As expected with world tracking, the Console app also shows a lot of CoreMotion logs around the time of the errors, but those lines do not seem to contain any errors, warnings, or other information that would help me diagnose what's going on.
The moment the errors show up in the log there are no delegate callbacks, but eventually (anywhere between 5 and 50 seconds later) the screen freezes and the session(_:didFailWithError:) callback fires with:
Error Domain=com.apple.arkit.error Code=200 "World tracking failed." UserInfo={NSLocalizedDescription=World tracking failed., NSLocalizedFailureReason=World tracking cannot determine the device's position.}
The ARKit source/documentation doesn't provide any hints as to what "resource constraints" might cause the inability to determine the device's position; it just reads:
World tracking has encountered a fatal error.
I've tried (without success) to stop the warnings from appearing by the following; the sketch after this list shows how the resets were issued:
Resetting the session tracking: still errors,
Resetting the session with removal of all ARAnchors: still errors,
Disabling plane detection (once no longer needed): still errors.
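For context, a minimal sketch of how those resets might be issued. ARWorldTrackingConfiguration and the run options are standard (shipping) ARKit API; the wrapper function itself is an assumption:

    import ARKit

    // A sketch of the resets tried above.
    func resetTracking(of session: ARSession, removingAnchors: Bool) {
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = []          // plane detection disabled (third attempt)
        var options: ARSession.RunOptions = [.resetTracking]
        if removingAnchors {
            options.insert(.removeExistingAnchors) // second attempt: also drop all ARAnchors
        }
        session.run(configuration, options: options)
    }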
Pausing the AR session silences the warnings (which makes sense, since the device stops tracking its movement while paused), but when the session resumes, the warnings return.
When closing the session and re-creating it (i.e. dismissing the view controller and recreating it) without moving the camera or changing the lighting, the problem doesn't always re-occur.
My best guess is that flickering fluorescent (TL) lights are causing the tracking performance warnings, given Apple's explanation of how world tracking works:
… visual-inertial odometry. This process combines information from the iOS device’s motion sensing hardware with computer vision analysis of the scene visible to the device’s camera. ARKit recognizes notable features in the scene image, tracks differences in the positions of those features across video frames, and compares that information with motion sensing data.
(iPhone 6S, iOS 11 beta 4, no other apps running in the background)
Questions:
What is the difference between [0] and [1]?
What can cause these errors?
What impact do they have while the ARSession hasn't failed (yet)? I'm assuming we'll see "jumpy" models, as the world coordinates supplied by the frame updates aren't accurate. (NB: the errors still occurred after resetting/stopping ARAnchor tracking.)
Will we know if the AR session is about to fail, or will it be unexpected? (Again, the ARSession doesn't fail immediately after the tracking state changes to "limited".)
Is there some way to deal with this, e.g. freeing up some of the mentioned "resources", or other ways to avoid running into these constraints? (One way to at least detect trouble is sketched below.)
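One concrete way to partially address the last two questions is to watch the camera's tracking state and the failure callback. These are standard ARSessionObserver/ARSessionDelegate methods; the class name is hypothetical, and whether a limited state reliably precedes the failure is an open question per the NB above:

    import ARKit

    final class TrackingWatcher: NSObject, ARSessionDelegate {  // hypothetical class
        func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
            switch camera.trackingState {
            case .normal:
                break
            case .notAvailable:
                print("Tracking not available")
            case .limited(let reason):
                // Reasons include .initializing, .excessiveMotion and
                // .insufficientFeatures; this is the place to react (e.g.
                // coach the user) before the session fails outright.
                print("Tracking limited: \(reason)")
            }
        }

        func session(_ session: ARSession, didFailWithError error: Error) {
            // The fatal case quoted above lands here.
            print("Session failed: \(error.localizedDescription)")
        }
    }

    // Usage: sceneView.session.delegate = trackingWatcher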

Only allowing Portrait Device Orientation resolved this error for me.
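For anyone trying this: one code-level way to restrict the AR screen to portrait (as an alternative to the project-wide Info.plist setting) is per view controller. This is standard UIKit; the class name is assumed:

    import UIKit

    class ARViewController: UIViewController {  // hypothetical name
        // Only allow portrait while this screen (the AR session) is visible.
        override var supportedInterfaceOrientations: UIInterfaceOrientationMask {
            return .portrait
        }
    }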

I think this is an Xcode 9.1 bug. If you are using SceneKit: find where you set .backgroundColor = "your color" on your ARSCNView (or subclass), comment out that line, and clean the project.

This error happened to me because I manually added a view controller in the storyboard and added an ARSKView to it. That nests it as a child of the UIView that a UIViewController has by default, so to support both landscape and portrait orientation I had to add Auto Layout constraints, which for some reason triggered this error.
When I looked at the stock AR project, the ARSKView is an immediate child of the view controller (its root view), and it supports all view sizes and orientations out of the box without Auto Layout constraints. When I set it up this way, the errors disappeared.
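A code-only equivalent of the template's layout, in case you want to keep the storyboard minimal, is to make the ARSKView the controller's root view. Standard UIKit/ARKit; the class name is assumed:

    import ARKit
    import SpriteKit

    class ARSpriteViewController: UIViewController {  // hypothetical name
        override func loadView() {
            // Make the ARSKView the root view, as in the stock AR template,
            // so it tracks the controller's size without extra constraints.
            view = ARSKView(frame: UIScreen.main.bounds)
        }

        var arView: ARSKView { return view as! ARSKView }
    }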

I think this is some kind of bug, because this issue happened for me only when I was using constraints and tried to po something in the debugger.
If I disable the constraint in question, for example, my code works as expected.

Related

ARKit Resuming Session

I have an ARKit app which allows the user to add a cube to the scene. This works fine and I can see the cube. But when I push the app to the background and then move the device to another location (by walking to a different room), the ARKit session is unable to determine the correct position of my old nodes.
Is there any way to work around this problem, so that when the app returns to the foreground from the background it still remembers the positions of the nodes?
UPDATE: I am looking into saving the lat and long for the user and then somehow converting that lat and long to an SCNVector3 to place the node.
You probably won't be able to keep the AR running in the background. Apple does not recommend pausing the session or interrupting it and trying to resume:
Avoid interrupting the AR experience. If the user transitions to another fullscreen UI in your app, the AR view might not be an expected state when coming back.
Use the popover presentation (even on iPhone) for auxiliary view controllers to keep the user in the AR experience while adjusting settings or making a modal selection. In this example, the SettingsViewController and VirtualObjectSelectionViewController classes use popover presentation.
The issue is that once the session gets interrupted, the device stops using the mechanisms that keep track of AR nodes as well as your location. You might have to set up a mechanism that keeps the app running constantly in the background and runs the ARSession through that; you might be able to find projects on GitHub that allow running in the background. Another issue you might face is Apple's limitation on running apps in the background, which is apparently 3 minutes.
If you're at all interested in restarting the AR session once you come back in, you can see my answer on this thread.
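For the restart itself, a minimal sketch of the usual pattern: detect the interruption via the ARSessionObserver callbacks and re-run the session with a tracking reset once it ends. Note this rebuilds the world map from scratch; it does not restore the old node positions:

    import ARKit

    final class SessionHandler: NSObject, ARSessionDelegate {  // hypothetical class
        func sessionWasInterrupted(_ session: ARSession) {
            // e.g. the app went to the background; tracking data is now stale.
        }

        func sessionInterruptionEnded(_ session: ARSession) {
            // Re-run with a fresh world origin. Nodes placed before a long
            // interruption will no longer match their real-world locations.
            let configuration = ARWorldTrackingConfiguration()
            session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
        }
    }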

AVCaptureSession bug persists between installs

I have an app that opens straight to a camera and is based on this WWDC sample: https://developer.apple.com/library/ios/samplecode/AVCam/Introduction/Intro.html
A few users have been experiencing a bug where the camera does not turn on and does not allow them to capture content.
I just ran into the same issue last night and this is what I observed:
I was debugging a separate issue and the camera was working 100% ok, then all of a sudden, it stopped working.
Every time I would open the app or navigate back to the camera, it would show a dark view of whatever it was currently pointing at, but the image was frozen. It's like it worked for one second, then the capture preview would freeze.
I tried force closing and reopening, same problem.
I tried uninstalling and reinstalling, same problem.
I then restarted my phone and the issue was solved.
How is it possible that this bug persists between separate installs?
Does anyone know what might be causing the camera to fail?
How should I go about debugging it if it only occurred once after months of using it and I have no idea what triggers it?
Are you running iOS 9?
That might be an internal iOS 9 software bug.
I had exactly the same non-reproducible issue, but with MapKit's map view rendering.
In my case the map view was showing just a rectangular grid without any map objects: no streets, lakes, rivers, etc.
I tried googling for a potential cause of such a weird issue, but without any luck. Then I restarted the device and that helped, the same as in your case.
Sure, my info is not a full answer; I just want to share my experience.
Your process does not have direct access to the camera hardware; it goes through a device manager, and it is the state of that manager that determines whether things will work.
To confirm: when your app is experiencing the issue, kill it and then open the default Camera app. If that also shows a blank or still-frame preview, then you know the problem isn't necessarily in your app.
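To gather evidence when this happens in the field, one option is to observe the capture session's error and interruption notifications. The notification names and key below are real AVFoundation API; the logging itself is just a sketch:

    import AVFoundation

    // Keep the returned tokens alive for as long as you want the logging.
    var captureObservers: [NSObjectProtocol] = []

    func observeCaptureSessionProblems(for session: AVCaptureSession) {
        let center = NotificationCenter.default
        captureObservers.append(center.addObserver(
            forName: .AVCaptureSessionRuntimeError, object: session, queue: .main) { note in
                // An AVError describing why the session stopped.
                print("Capture session runtime error: \(note.userInfo?[AVCaptureSessionErrorKey] ?? "unknown")")
        })
        captureObservers.append(center.addObserver(
            forName: .AVCaptureSessionWasInterrupted, object: session, queue: .main) { note in
                print("Capture session interrupted: \(note.userInfo ?? [:])")
        })
    }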

UIImagePickerController crash on canceling with zoom present on iOS 8.x only

In our app we show the Camera modally on top of another UIViewController. On iOS 8.x only, about 1/10 of the time if you zoom you wind up with a crash:
-[PLImagePickerCameraView didHideZoomSlider:]: message sent to deallocated instance
There is an existing SO post with a supposed workaround - How-to find out what causes a didHideZoomSlider error on IOS 8? - but every variation I have tried fails to solve the crash. The suggestion involves putting a delay before dismissViewControllerAnimated:completion: (sketched below). No matter what delay I try, I can still reproduce the crash.
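For reference, the delay-based workaround from that post looks roughly like the following; the one-second delay is arbitrary, and as noted above it did not prevent the crash here:

    import UIKit

    // UIImagePickerControllerDelegate callback for the Cancel button.
    func imagePickerControllerDidCancel(_ picker: UIImagePickerController) {
        // Attempted workaround: wait for the zoom slider animation to
        // finish before dismissing. (Did not stop the crash in practice.)
        DispatchQueue.main.asyncAfter(deadline: .now() + 1.0) {
            picker.dismiss(animated: true, completion: nil)
        }
    }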
It only occurs if you (1) zoom the camera view and (2) either choose Cancel or take a photo and exit the camera, all shortly before the zoom indicator animation fades away. It appears to be a problem in iOS 8.x which Apple hasn't fixed; it crashes in Apple's code, with no involvement from anything of ours.
I do see "Snapshotting a view that has not been rendered results in an empty snapshot. Ensure your view has been rendered at least once before snapshotting or snapshot after screen updates." but this seems to be unrelated, and it also refers to code inside the UIImagePickerController, not anything we are doing.
I am about to deal with Apple DTS to see if we can find some way to avoid this or what in the environment might be causing this to happen. I thought to ask here in case anyone has another idea.
This crash did not occur under iOS 7.X. Happens on any model iPhone or iPad.
The answer from Apple is "it's a bug, file it in Radar", which of course says nothing about when it will be fixed. There is no workaround other than telling users to wait a little until the slider fades (which our support people tell the users). There is nothing you can do, short of implementing your own camera and zoom support and doing it correctly yourself.
Perhaps Apple will fix it in 8.2.

Problems going back and forth between scenes (Sprite Kit)

I am new to the iOS platform and have written an app using Sprite Kit and Xamarin. To transition between scenes I am using NavigationController.PushViewController with a "Back" button on the new scene to return. This works fine.
My problem occurs when I go back and forth between the original scene and the next scene about 3 times. The frame rate (and the overall application) slows to a crawl and, depending on which scenes are involved, the app will hard-crash. This happens only on device, not in the simulator. My gut instinct says I'm leaking memory somewhere / not cleaning up properly, so I have just started getting up to speed with Instruments. Note that each scene has about 5-8 images on it.
Questions:
What is the proper way to clean up after transitioning out of a scene back to the previous one? I am explicitly calling RemoveAllActions() and RemoveAllChildren() to ensure clean-up (the equivalent pattern is sketched after these questions).
What is my best strategy when using Instruments? I've only just begun to experiment with an Allocations trace, but there are a lot of allocations and knowing what to look for is challenging.
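For reference, the Swift equivalent of that cleanup (the Xamarin calls map directly onto these SpriteKit APIs); putting it in willMove(from:) is one common pattern, though whether it fixes the leak here is an open question:

    import SpriteKit

    class GameScene: SKScene {  // hypothetical scene
        // Called when the scene is about to be removed from its SKView;
        // a natural place to tear down actions and nodes explicitly.
        override func willMove(from view: SKView) {
            removeAllActions()
            removeAllChildren()
            super.willMove(from: view)
        }
    }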
Any help is appreciated. I am considering dumping Sprite Kit altogether, since my app doesn't really need it (I wanted to experiment) and because building the app has been a challenge, with all kinds of deep crashes (e.g. SKShapeNode, which is very problematic).
Thanks.

touchesMoved called at irregular intervals

I'm making a game for iOS where you mostly drag big objects across the screen. When I run the game on an actual iPad/iPhone for a while (continuously dragging the object in circles across the screen) every 5 minutes or so the dragged object goes all stuttery for about 10-30 seconds. Then, it goes back to moving silky-smooth.
Visually, it looks like the game's frame rate has dropped to 15 fps for a while, but in fact it's running at a rock-solid 60 fps the whole time. Notably, the dragged object is the only thing that doesn't move smoothly; the rest of the game runs perfectly smoothly.
This led me to believe that the stuttering is related to the touch input in iOS. So I started looking at touchesMoved, and saw that it's normally called every 16 milliseconds (so touch input runs at 60 fps). So far so good.
Then I noticed that when the object starts stuttering, touchesMoved starts being called at weird time intervals, fluctuating wildly between 8 milliseconds and 50 milliseconds.
So while the touchscreen is in this weird state, sometimes touchesMoved will get called just 8 milliseconds after the previous call, and sometimes it won't get called until 50 ms after the previous call. Of course, this makes the dragged object look all choppy because its position is updated at irregular intervals.
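For anyone who wants to reproduce the measurement, a minimal sketch of timing the callbacks the way the question describes (CFAbsoluteTimeGetCurrent, outside any engine); the view class is assumed:

    import UIKit

    class TouchTimingView: UIView {  // hypothetical test view
        private var lastMove: CFAbsoluteTime = 0

        override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
            let now = CFAbsoluteTimeGetCurrent()
            if lastMove != 0 {
                // Normally ~16 ms at 60 Hz; in the bad state this is
                // observed to fluctuate between roughly 8 ms and 50 ms.
                print(String(format: "touchesMoved interval: %.1f ms",
                             (now - lastMove) * 1000))
            }
            lastMove = now
        }
    }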
Do you have any idea what could be causing touchesMoved to stop being called at regular intervals, as it normally does?
Bonus:
-Whenever I tilt the screen to force the screen orientation to change, roughly 70% of the time the touchscreen goes into the aforementioned state where touchesMoved starts being called irregularly. Then after 10-20 seconds it goes back to normal and everything looks smooth again.
-I've tried this on two iPads and two iPhones, with iOS 6 and 7, and the issue appears in all of these devices.
-An OpenGLES view is used to display the graphics. It syncs to the display refresh rate using CADisplayLink.
-The Xcode project I'm using to test this was generated by the Unity3D game development tool, but I've found several non-Unity games where the same issue appears, so this seems to be a system-wide problem. Note that I'm measuring the timings in Objective-C using CFAbsoluteTimeGetCurrent, completely outside Unity.
This is not a bug in Unity.
Something inside the OS is getting into a bad state and the touch-drag messages stop flowing smoothly. Sometimes you'll get multiple updates in a frame and sometimes you'll get none.
The issue does not happen on iPhone4 or below, or if the game is running at 30Hz frame rate.
I experienced this bug while using an in-house engine I'd written at a previous company. It first manifested itself after upgrading the UI system of a Scrabble-genre game where you drag the tiles around the screen. It was utterly bizarre, and I was never able to pin down exact reproduction steps, but it seems to be timing-related somehow.
It can also be seen on Angry Birds (unless they've fixed it by now), and a variety of other games on the market, basically anything with touch-drag scrolling or movement. For Angry Birds, just go into a level and start scrolling sideways. Most of the time it'll be silky smooth, but maybe 1 in 10 times, it'll be clunky. Re-start the app and try again.
A workaround is to drop the input update frequency to 30Hz for a few frames. This jolts the internals of the OS somehow and gives it time to straighten itself out, so when you crank it back up to 60Hz, it runs smoothly again.
Do that whenever you detect that the bad state has been entered.
I've also run into this. I can verify that it's a bug in Unity 4.3.x.
My 4.2.x builds process touches on device at 60Hz with TouchPhase.Moved on every frame.
My 4.3.x builds get 20-40Hz with TouchPhase.Stationary being emitted on dropped frames.
TestFlight history saved my sanity here.
Don't forget to file a bug with UT.
It's a real disaster. Sometimes it lags and sometimes it's really smooth. It lags even when GPU and CPU utilization are below 10%. It's not a Unity bug; I'm using cocos2d v3.
If someone found a cure, please post it!
I've been running into this as well. My current hypothesis, which I still have to verify, is that if you ask for a given frame rate (e.g. 60 fps) but your actually achieved frame rate is less than that (e.g. 45 fps), this causes a timing/race condition between Unity requesting inputs from iOS at a higher rate than it is actually running at. However, if you set it to 30 fps (at least in my Unity 5.2 tests with iOS 9.1), then you get a solid 30Hz of input. When I disabled a chunk of my game so that it ran at a very solid 60 fps, I would consistently get 60Hz of input from the touch screen. This is what I have for now, but I still have to prove it in a simple project, which I haven't had time to do yet. Hope this helps someone else.
I found a potential solution to this problem here: https://forums.developer.apple.com/thread/62315 (posting here because it took me a lot of research to stumble across that link whereas this StackOverflow question was the first Google result).
To follow up on this, I got a response to my bug report to Apple. This
is the response:
"As long as you don't cause any display updates the screen stays in
low power and therefore 30hz mode, which in turn also keeps the event
input stream down at 30 hz. A workaround is to actually cause a
display update on each received move, even if it is just a one pixel
move of a view if input is needed while no explicit screen update will
be triggered."
In my application, using a GLKView, I set its
preferredFramesPerSecond to 60. Occasionally, my input rate would
randomly drop to 30hz. The response from Apple doesn't fully explain
why this would happen, but apparently the expected method of handling
this is to call display() directly from touchesMoved() while dragging.
I've subclassed GLKView and I set preferredFramesPerSecond to 60. On
touchesBegan(), I set isPaused=true, and start calling display()
within touchesMoved(). On touchesEnd(), I set isPaused=false. Doing
this, I'm no longer having any issues - the app is more performant
than it's ever been.
Apple's example TouchCanvas.xcodeproj does all drawing from within
touchesMoved() as well, so I guess this is the intended way to handle
this.
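A minimal Swift sketch of the pattern that answer describes. One caveat: preferredFramesPerSecond and isPaused actually live on GLKViewController, not GLKView, so this version assumes the controller is subclassed instead:

    import GLKit

    class DragViewController: GLKViewController {  // hypothetical name
        override func viewDidLoad() {
            super.viewDidLoad()
            preferredFramesPerSecond = 60
        }

        override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
            isPaused = true                    // stop the automatic render loop
        }

        override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
            (view as? GLKView)?.display()      // draw once per received move
        }

        override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
            isPaused = false                   // resume the render loop
        }

        override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent?) {
            isPaused = false
        }
    }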
As far as I can tell, your best bet for a smooth look may be to interpolate between touch events rather than immediately snapping your object to the touch position (sketched below).
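A sketch of that interpolation idea: each display-link tick, move the object a fraction of the way toward the most recent touch position. The smoothing factor is arbitrary and trades latency for smoothness:

    import UIKit

    final class DragSmoother {
        var target = CGPoint.zero              // update this from touchesMoved
        private(set) var position = CGPoint.zero

        // Call once per rendered frame (e.g. from a CADisplayLink callback),
        // then draw the dragged object at `position`.
        func step(smoothing: CGFloat = 0.35) {
            position.x += (target.x - position.x) * smoothing
            position.y += (target.y - position.y) * smoothing
        }
    }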
tl;dr: It seems that merely having a CADisplayLink drive any OpenGL context or Metal device at 60 fps can cause this.
I was repro'ing this on my iPhone 7 Plus w/ iOS 10.2.1.
I made a small, simple sample app with a CADisplayLink with preferredFramesPerSecond = 60, and tried the following rendering approaches:
GLKView w/ display()
CAEAGLLayer, used as prescribed by Apple at WWDC (opaque, sole layer, fullscreen, nothing drawn above it)
MTLDevice
In each case, the render method would only clear the screen, not try drawing anything else.
In each case, I saw the input rate problem.
The following "tricks" also seemed to do nothing, when called inside touchesMoved:
Calling glkView.setNeedsDisplay() (with enableSetNeedsDisplay set to true)
Moving some other view
Calling glkView.display() (actually, seems like it can raise your input rate to 40 events per second. But it doesn't look any better, as far as I can tell, and seems wrong to do, so I wouldn't recommend it.)
I gave up after running all these tests, and I now interpolate my object between the touch positions instead. So that's what I'd recommend, too.
But I figured I'd include my research here in case somebody else takes a stab at it and finds a better solution.
