iOS 16 breaks VNDetectFaceRectanglesRequest face tracking

First of all, let me say that I have confirmed this behavior only happens on iOS 16; it does not happen on previous iOS versions.
In short, the app uses the Apple Vision framework to track face rectangles in the camera input and shows them in the camera output, with view overlays drawn over the detected faces.
The correct behavior of the app is as follows:
the camera output is shown on the screen
frames from the camera input are constantly checked for faces
if a face is found, it is overlaid with a circular view and tracked (if you move the camera to the left, the overlay moves to the left as well; the app knows it is the same face and won't draw another circle)
The breaking behavior is as follows:
the same
the same
if a face is found, it is tracked correctly at first, but at some point, if you bug it out (move the face close to the edges), it stops recognizing that it is the same face and constantly registers it (even the same face) as a new one popping into the screen. That results in a constant, jittery overlap of the circular overlay view, and ultimately in this error:
Exceeded maximum allowed number of Trackers for a tracker type:
VNObjectTrackerRevision2Type
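For context, here is a minimal sketch of the usual tracker lifecycle that produces this error (not my exact code; the confidence threshold and the newFaces parameter are illustrative). Each VNTrackObjectRequest reserves one of a limited pool of native trackers, and the pool entry is freed only after the request has been performed with isLastFrame = true (or the sequence handler is discarded):

import Vision

let sequenceHandler = VNSequenceRequestHandler()
var trackingRequests: [VNTrackObjectRequest] = []

func track(newFaces: [VNFaceObservation], in pixelBuffer: CVPixelBuffer) throws {
    // Run all live requests for this frame; requests marked isLastFrame
    // on the previous pass release their tracker during this call.
    try sequenceHandler.perform(trackingRequests, on: pixelBuffer)
    trackingRequests.removeAll { $0.isLastFrame }

    // Retire lost tracks so the next pass frees their trackers.
    for request in trackingRequests {
        let observation = request.results?.first as? VNDetectedObjectObservation
        if (observation?.confidence ?? 0) < 0.3 {
            request.isLastFrame = true
        }
    }

    // If the matching step wrongly treats a known face as "new" on every
    // frame, this array keeps growing until Vision throws the error above.
    for face in newFaces {
        trackingRequests.append(VNTrackObjectRequest(detectedObjectObservation: face))
    }
}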
Also, there's one more thing I noticed after updating my Xcode:
Thread Performance Checker: -[AVCaptureSession startRunning] should be called from background thread. Calling it on the main thread can lead to UI unresponsiveness
I did move those calls to a background thread, but to no avail; the issue only got worse from that point on.
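For reference, the pattern the thread checker asks for is simply this (the queue label is illustrative); whether it interacts badly with the Vision callbacks is part of my question:

import AVFoundation

// Inside the camera controller; the queue label is illustrative.
private let sessionQueue = DispatchQueue(label: "com.example.capture-session")

func startCamera() {
    sessionQueue.async { [weak self] in
        // startRunning() blocks until the session is live, so keep it off
        // the main thread, then hop back to the main thread for UI work.
        self?.captureSession.startRunning()
        DispatchQueue.main.async {
            // update the preview UI here
        }
    }
}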
Does anyone know what could have changed in iOS 16 that would impact this behavior? Is it more likely a threading issue, a camera-input issue, a Vision framework issue, or something else?
How would you go about pinpointing the problem? (I spent two days logging the face tracking but found no logical issue.)

Related

SceneKit objects not showing up in ARSCNView about 1 out of 100 times

I can't share much code because it's proprietary, but this is a bug that's been haunting me for a while. We have SceneKit geometry added to the ARKit face node and displayed inside an ARSCNView. It works perfectly almost all of the time, but about 1 in 100 times nothing shows up at all. The ARSession is running, and none of the parent nodes are set to hidden. Furthermore, when I use the Debug Memory Graph feature in Xcode, the geometry appears to be fully present there (and doesn't seem to be hidden). I can see all the nodes attached to the face node perfectly well within the ARSCNView in the memory graph, but on the screen nothing shows up. This has been an issue across multiple iOS versions, so it didn't just appear with a recent update.
Has anybody run into a similar problem, or does anybody have any ideas of what to look into? Is it an Apple bug, or is there a timing issue I might not be aware of? It's been really hard to debug because of how infrequent it is, and I haven't found it discussed on any other forums (but point me in the right direction if there is a previous discussion). Thanks!
This is a pretty common occurrence when AR tracking is poor for some reason.
I ran into a similar problem too. I think it's definitely a tracking error that arises through the fault of the AR app's user. Sometimes, if you're using a world-tracking configuration in ARKit and you scan the surrounding environment carelessly, or you track under inappropriate conditions, you get sloppy tracking data. That can result in your world grid/axes being unpredictably shifted aside, so your model may fly away somewhere. If such a situation arises, look for your model somewhere nearby; maybe it's behind you.
If you're using a device with LiDAR, the aforementioned situation is almost impossible, but on a device without LiDAR you need to scan your room thoroughly. There must also be good lighting conditions and high-contrast real-world objects with distinguishable, non-repetitive textures.
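One way to confirm that poor tracking is the culprit (a minimal sketch, not from the answer above; ViewController is hypothetical) is to watch the session's tracking state and log when it degrades:

import ARKit

// Assumes ViewController is already the ARSCNView's delegate
// (ARSCNViewDelegate inherits this ARSessionObserver callback).
extension ViewController {
    func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
        switch camera.trackingState {
        case .normal:
            print("tracking normal")
        case .limited(let reason):
            // reason: .initializing, .excessiveMotion, .insufficientFeatures, .relocalizing
            print("tracking limited: \(reason)")
        case .notAvailable:
            print("tracking not available")
        }
    }
}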

Fixing or avoiding memory leak in default third party library

I developed an app that includes the ability to preview the subdivision results of a 3D model on the fly. I have my own Catmull-Clark subdivision functions to permanently modify the geometry, but I use the .subdivisionLevel property of SCNGeometry to temporarily subdivide the model as a preview. In most cases a preview does not automatically mean the user will go for the permanent option.
.subdivisionLevel uses Pixar's OpenSubdiv (just like MDLMesh's subdivision, which I tried as a workaround) to do the actual subdivision and smoothing. It works faster than my own implementation, but more importantly it doesn't permanently modify the vertex data I provide through an SCNGeometrySource.
The problem is, I can't get it to stop leaking memory. I first noticed this a long time ago and figured it was something in my code. It doesn't seem to be specific to one iOS version, and it happens in both Swift and Objective-C. Eventually I set up a small example, adding just one line to the SceneKit game template in Xcode to set the ship's subdivisionLevel to 1. Instruments shows that this immediately results in memory leaks.
I submitted a bug report to Apple a week ago, but I'm not sure I can expect a reply or a fix anytime soon, or at all. The screenshot is from a test with a very small model, but even with small models (hundreds to a couple of thousand vertices) it leaks a lot, fast, and will eventually crash the app.
To reproduce, create a new project in Xcode based on the SceneKit game template and add the following lines to handleTap(_:):
if result.node.geometry!.subdivisionLevel == 3 {
    result.node.geometry!.subdivisionLevel = 0
} else {
    result.node.geometry!.subdivisionLevel = 3
}
(Remove the ! for Objective-C.)
Tap the ship to leak megabytes; tap it some more and it quickly adds up.
OpenSubdiv is obviously used in 3D Studio Max and other packages as well, and it appears to be what Apple's implementation is built on. So my question is: is there a way to fix or avoid this problem without giving up on SceneKit's subdivision features entirely, or is a response from Apple my only chance?
Going through the WWDC videos to get an idea of how committed Apple is to OpenSubdiv (and thus the chance of them fixing the leaks), I found that since the latest SceneKit update the subdivision can be performed on the GPU by Metal.
Here are the required two lines (Swift) if you want to use subdivision in SceneKit or Model I/O:
let tess = SCNGeometryTessellator()
geometry.tessellator = tess
(from WWDC 2017, "What's New in SceneKit", 23:45 into the video)
This will cause the subdivision to be performed on the GPU (thus faster, especially at higher levels), use less memory, and, most importantly, release the memory when the subdivision level is set lower or back to zero.
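Put together with the game template above, the workaround looks roughly like this (the node lookup is illustrative):

import SceneKit

// Hypothetical lookup of the game template's ship node.
if let geometry = scene.rootNode.childNode(withName: "ship", recursively: true)?.geometry {
    geometry.tessellator = SCNGeometryTessellator()  // route subdivision through Metal tessellation
    geometry.subdivisionLevel = 3                    // now evaluated on the GPU
}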

iOS MapKit Performance Issue Using Satellite Tiles

I have an iOS 8 app that uses MapKit. I recently discovered a performance problem with the app when running a video decompression in addition to displaying a map. The app was unable to keep up with the flow of incoming data when using the satellite view tile set. However, this problem vanished the moment I swapped to the default MapKit tile set. The app is not CPU bottlenecked when the problem is occurring. It makes sense to me that the default (vector) map tile set is easier to display, but I am confused about why the issue is happening in the first place.
The problem seems strange to me because there is no movement or manipulation of the map when the problem is occurring. I would understand the issue better if it happened when manipulating the map in addition to rendering video to the screen, but the problem exists even with no user input. I am constrained in analyzing the system because we use a hardware accessory, so some Instruments are not available over wireless performance analysis. I am not using a high number of annotations, overlays, or other objects. We have a few custom annotations and overlays in use. There are existing apps that do this exact combination of decoding and maps, without the performance problem, so I suspect it's a configuration issue.
Are there certain attributes on the MKMapView that I can set to improve performance? I am at a loss as to what to investigate further since I cannot make the problem happen with the GPU Instrument active and the CPU doesn't appear to be the constraint.
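No answer is recorded here, but for concreteness, these are the kinds of MKMapView attributes the question is asking about; a sketch (whether any of them helps with satellite tiles is exactly the open question):

import MapKit

// A sketch of attributes that reduce per-frame map rendering work.
func reduceMapRenderingWork(_ mapView: MKMapView) {
    mapView.showsBuildings = false          // don't render 3D building extrusions
    mapView.showsPointsOfInterest = false   // hide POI labels (iOS 8-era API)
    mapView.isPitchEnabled = false          // disable camera pitch changes
    mapView.isRotateEnabled = false         // disable camera rotation gestures
}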

iOS: how to detect that an iPad has been put in a special frame

I have programmed an iPad application whose behavior I would like to change when the iPad is placed in a wooden frame (other materials could be added later). To simplify things, the background should change whenever the iPad is inside this frame, and there must be no tap or touch interaction; just putting the iPad inside the frame should be enough.
Of course, we could program a specific gesture on the screen, like double tapping or swiping, but that is not the solution we are looking for.
Another thought was to detect the lack of movement for a certain amount of time, but that would not guarantee the iPad is inside the frame.
I have thought about interacting with magnets (as Smart Covers do) and the sleep sensor on the right side of the iPad, but I don't know how to do that.
I cannot see any other useful sensor.
Any suggestions?
A combination of the accelerometer and the camera seems like an idea worth trying out; a sketch of the first step follows this list:
Scan the accelerometer data to detect a spike followed by a flat line (= putting the iPad into the frame, then resting).
After detecting the motion event, use the back camera (maybe combined with the flash) to detect a pattern image fixed inside the frame for this purpose. It might be necessary to put the pattern into a little hole to create at least a blurry image.
The second step is there to distinguish the frame from any other surface the iPad might be placed upon.
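Here is a minimal sketch of the accelerometer step, assuming CMMotionManager (the window length and thresholds are illustrative):

import CoreMotion

let motionManager = CMMotionManager()
var recentMagnitudes: [Double] = []

func startWatchingForDocking(onDocked: @escaping () -> Void) {
    motionManager.accelerometerUpdateInterval = 0.05   // 20 Hz
    motionManager.startAccelerometerUpdates(to: .main) { data, _ in
        guard let a = data?.acceleration else { return }
        recentMagnitudes.append(sqrt(a.x * a.x + a.y * a.y + a.z * a.z))
        if recentMagnitudes.count > 40 { recentMagnitudes.removeFirst() }
        guard recentMagnitudes.count == 40 else { return }

        // Spike followed by a flat line: a bump well above 1 g in the first
        // half of the two-second window, then a near-constant 1 g at rest.
        let half = recentMagnitudes.count / 2
        let spike = recentMagnitudes[..<half].contains { abs($0 - 1.0) > 0.3 }
        let flat = recentMagnitudes[half...].allSatisfy { abs($0 - 1.0) < 0.05 }
        if spike && flat {
            onDocked()   // likely just placed; proceed to the camera check
        }
    }
}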

iOS weight scale pointer rotates abruptly

I have been asked to build an application that displays data on a virtual scale face with animation. The data is obtained from a BLE hardware scale over Bluetooth 4.0. The app's scale face is designed just like a real dial, with a pointer rotating to indicate the value. As a note, the data from the hardware seems to be discontinuous; I am sure that's caused by the irregular data sampling of the hardware. Here is my design: queue the incoming data, then consume it in a timer and dispatch the animation. That's the primary background knowledge. Currently, I am stuck on the problem of the pointer rotating abruptly instead of smoothly like a real one. I have tried several solutions, but none of them is acceptable to the customers. I suppose it is closely related to the poor user experience of the pointer animation. Please help me with the following question.
I tried invoking CATransform3DMakeRotation in the timer (50 ms) to animate the rotation, but the pointer always trembles slightly while it rotates. I suppose that is caused by a new animation being dispatched while the previous animation has not yet completed. This conclusion is supported by the fact that increasing the timer frequency makes the trembling worse, while decreasing it makes it recover. I then tried invoking the next rotation in the completion callback of the previous one. Now the trembling really does disappear, but the rotation is not visually continuous. Suppose the values are 0, 450, 500, 600: the rotation then looks like it consists of three sub-animations (0–450, 450–500, and 500–600). That also makes for a poor experience.
So, how can I deal with this?
Please help to enlighten me. Thanks!
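One common approach (a sketch, not from the question; the class name and easing factor are illustrative) is to drop the per-sample animations entirely and have a CADisplayLink interpolate toward the latest value each frame, so a new sample merely moves the target of one continuous motion:

import UIKit

final class PointerView: UIView {
    var targetAngle: CGFloat = 0            // set whenever a new BLE sample arrives
    private var currentAngle: CGFloat = 0
    private var displayLink: CADisplayLink?

    func startSmoothing() {
        displayLink = CADisplayLink(target: self, selector: #selector(step))
        displayLink?.add(to: .main, forMode: .common)
    }

    @objc private func step() {
        // Ease toward the target each frame; new samples only move the
        // goal, so the pointer stays in one continuous sweep with no restarts.
        currentAngle += (targetAngle - currentAngle) * 0.15

        // Disable implicit animations: the display link itself is the animator.
        CATransaction.begin()
        CATransaction.setDisableActions(true)
        layer.transform = CATransform3DMakeRotation(currentAngle, 0, 0, 1)
        CATransaction.commit()
    }
}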
