ARCore – Max number of Anchors

I am wondering what the maximum number of anchors is that we can create in ARCore before the AR session crashes. Let's say we place an anchor every 5 meters.

The maximum number of anchors in an ARCore app depends on several factors (chipset, ARCore version, etc.). There are local anchors (attached to a Trackable or to the Session), Cloud Anchors, and Geospatial anchors, and each type has a different processing cost. In the context of local anchors, around 50 trackable anchors is already quite a heavy burden for ARCore to process.
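If you need to stay under a budget, one common pattern is to detach anchors you no longer need so ARCore stops updating them. Here is a minimal Kotlin sketch for the Android SDK (MAX_ANCHORS, liveAnchors and placeAnchor() are illustrative names, and 50 is just the rough figure mentioned above, not an official limit):

    import com.google.ar.core.Anchor
    import com.google.ar.core.Pose
    import com.google.ar.core.Session

    const val MAX_ANCHORS = 50                      // illustrative budget, not an official ARCore limit
    val liveAnchors = ArrayDeque<Anchor>()

    fun placeAnchor(session: Session, pose: Pose) {
        if (liveAnchors.size >= MAX_ANCHORS) {
            liveAnchors.removeFirst().detach()      // detach the oldest anchor so ARCore stops tracking it
        }
        liveAnchors.addLast(session.createAnchor(pose))
    }

With an anchor every 5 meters, detaching anchors that are far behind the user keeps the live set small without losing the ones that still matter.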

Related

How is the camera heading determined in the ARCore GeoSpatial API?

I'm in a somewhat unusual situation where I need to be able to calculate, without using ARCore anchors, the transform from the ARKit coordinate system (ARWorldTracking) to the geospatial coordinate system (see the note below on why I need to do this).
Each frame, I am getting the heading of the camera using the appropriate API. I would now like to use this heading to figure out how to map between ARKit coordinates and geospatial coordinates. When the phone is held upright in portrait orientation, it is pretty easy to figure out how the heading is determined (it appears to be based on the negative z-axis of the ARFrame's ARCamera object). When the phone is held flat with the screen up, the heading seems to follow the negative x-axis of the camera object.
What I am unable to determine is how the heading is determined when the phone has yaw, pitch, and roll (in this situation it is unclear which axis the heading is in reference to). I've tried a bunch of different test cases and so far I am unable to achieve the accuracy I am expecting.
Note: I cannot use the ARCore anchors since I am using the ARWorldMap in conjunction with ARWorldTrackingSession. This means that when the system is localizing to the ARWorldMap, no ARCore anchors can be created (due to the fact that the tracking state is .limited).
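For reference, here is a rough Kotlin sketch of the projection described above (an assumption for illustration only, not the documented behaviour of the Geospatial API): take the camera's -Z axis, flatten it onto the horizontal plane, and fall back to the -X axis when -Z is nearly vertical (phone lying flat). The axis conventions (-Z = north, +X = east) and the fallback choice are assumptions based on the observations in the question.

    import kotlin.math.atan2
    import kotlin.math.hypot

    data class Vec3(val x: Double, val y: Double, val z: Double)

    // cameraX / cameraZ are the camera's X and Z basis vectors expressed in a gravity-aligned world frame
    fun headingDegrees(cameraX: Vec3, cameraZ: Vec3): Double {
        var fx = -cameraZ.x                        // forward (-Z) projected onto the horizontal plane
        var fz = -cameraZ.z
        if (hypot(fx, fz) < 1e-3) {                // camera looks straight up or down (phone lying flat)
            fx = -cameraX.x                        // fall back to -X, matching the flat-phone observation
            fz = -cameraX.z
        }
        return Math.toDegrees(atan2(fx, -fz))      // 0° = north (-Z), 90° = east (+X)
    }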

What sensors does ARCore use?

What sensors does ARCore use: single camera, dual-camera, IMU, etc. in a compatible phone?
Also, is ARCore dynamic enough to still work if a sensor is not available by switching to a less accurate version of itself?
Updated: May 10, 2022.
About ARCore and ARKit sensors
Google's ARCore and Apple's ARKit use a similar set of sensors to track the real-world environment. ARCore can use a single RGB camera along with the IMU, which is a combination of an accelerometer, a magnetometer and a gyroscope. Your phone runs world tracking at 60 fps, while the Inertial Measurement Unit operates at 1000 Hz. There is also one more sensor ARCore can use: an iToF camera for scene reconstruction (Apple's counterpart is LiDAR). ARCore 1.25 supports both the Raw Depth API and the Full Depth API.
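As a minimal illustration of that graceful degradation, depth can simply be switched off when the device can't provide it; tracking then continues on camera + IMU alone. A Kotlin sketch for the Android SDK (configuration details will vary per app):

    import com.google.ar.core.Config
    import com.google.ar.core.Session

    fun configureDepth(session: Session) {
        val config = session.config
        config.depthMode = if (session.isDepthModeSupported(Config.DepthMode.AUTOMATIC)) {
            Config.DepthMode.AUTOMATIC             // Depth API available (works best with an iToF camera)
        } else {
            Config.DepthMode.DISABLED              // no depth data: tracking still runs on camera + IMU
        }
        session.configure(config)
    }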
Read what Google says about its COM method, built on Camera + IMU:
Concurrent Odometry and Mapping – An electronic device tracks its motion in an environment while building a three-dimensional visual representation of the environment that is used for fixing a drift in the tracked motion.
Here's Google's patent US15595617: System and method for concurrent odometry and mapping.
In 2014–2017 Google tended towards a MultiCam + DepthCam config (the Tango project).
In 2018–2020 Google tended towards a SingleCam + IMU config.
In 2021 Google returned to a MultiCam + DepthCam config.
We all know that the biggest problem for Android devices is calibration. iOS devices don't have this issue, because Apple controls its own hardware and software. Low-quality calibration leads to errors in 3D tracking, so all your virtual 3D objects may "float" in a poorly tracked scene. If you use a phone without an iToF sensor, there's no miraculous button against bad tracking (and you can't switch to a less accurate version of tracking). The only solution in such a situation is to re-track your scene from scratch. However, tracking quality is much higher when your device is equipped with a ToF camera.
Here are the main rules for good tracking results (if you have no ToF camera):
Track your scene not too fast, not too slow
Track appropriate surfaces and objects
Use a well-lit environment when tracking
Don't track reflected or refracted objects
Horizontal planes are more reliable than vertical ones
SingleCam config vs MultiCam config
One of the biggest problems of ARCore (and of ARKit too) is Energy Impact. We understand that the higher the frame rate, the better the tracking results. But the Energy Impact at 30 fps is HIGH, and at 60 fps it's VERY HIGH. Such an energy impact quickly drains your smartphone's battery (due to the enormous burden on the CPU/GPU). So just imagine using two cameras for ARCore: your phone must process two image sequences at 60 fps in parallel, as well as process and store feature points and AR anchors, and at the same time it must render animated 3D graphics with hi-res textures at 60 fps. That's too much for your CPU/GPU. In such a case the battery will be dead in 30 minutes and as hot as a boiler. Users don't like that, because it makes for a poor AR experience.

Can ARCore track moving surfaces?

According to its documentation, ARCore can track static surfaces, but the documentation doesn't mention anything about moving surfaces, so I'm wondering whether ARCore can track flat surfaces (with enough feature points, of course) that move around.
Yes, you definitely can track moving surfaces and moving objects in ARCore.
If you track a static surface using ARCore, the resulting features are mainly suitable for so-called Camera Tracking. If you track a moving object or surface, the resulting features are mostly suitable for Object Tracking.
You can also mask the moving/non-moving parts of the image and, of course, invert the six-degrees-of-freedom (translate XYZ and rotate XYZ) camera transform.
Watch this video to find out how they succeeded.
Yes, ARCore tracks feature points, estimates surfaces, and also allows access to the image data from the camera, so custom computer vision algorithms can be written as well.
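As a rough illustration of that last point, you can grab each camera frame from ARCore and feed it to your own tracker. A Kotlin sketch for the Android SDK (trackMovingObject() is a placeholder for whatever computer-vision code you plug in):

    import android.media.Image
    import com.google.ar.core.Frame
    import com.google.ar.core.exceptions.NotYetAvailableException

    fun processFrame(frame: Frame) {
        val image: Image = try {
            frame.acquireCameraImage()             // current camera frame as a YUV_420_888 Image
        } catch (e: NotYetAvailableException) {
            return                                 // image not ready yet; skip this frame
        }
        try {
            trackMovingObject(image)               // hypothetical custom CV / object-tracking routine
        } finally {
            image.close()                          // always release the image back to ARCore
        }
    }

    fun trackMovingObject(image: Image) { /* your own moving-object tracking goes here */ }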
I guess it should be possible theoretically.
However, I've tested it with some stuff in my house (running an S8 and an app built with Unity and ARCore), and the problem is more or less that it refuses to even start tracking movable things like books, plates, etc.: due to the feature points of the surrounding floor, it always picks up on those first.
Edit: I did some more testing and managed to get it to track a bed sheet; however, it does not adjust to any movement. Meaning that, as of now, the plane stays fixed, although I saw some wobbling, but I guess that was because it tried to adjust the positioning of the plane once its original feature points were moved.

Multiple Plotspaces or shifted axes/plots

I'm looking to incorporate 4 real-time scatter plots into a graph, and it has been requested that they be separated (at least in pairs) to make it easier to pick out signals. Would it be less resource-intensive to have multiple plot spaces on my graph, or to shift a new set of axes and plots within the same plot space? Is this still the case if I add 2-4 more scatter plots (for 6-8 total)?
FYI, I'm currently using CorePlot 1.6 (haven't had time to make the jump to 2.0).
If all of the plots are in the same graph, use multiple plot spaces. A plot space just defines a coordinate mapping between the data and the screen, so it doesn't use any video memory or other system resources (just a small amount of memory for the plot space object itself). Each plot and axis is a CALayer object, so those will be the primary drivers of resource usage.

How to implement semi-randomized level in iPhone game?

I want to create a game with a level structure similar to iCopter or Canabalt, where each level has a randomized floor (and roof), but the height of the floor is never impossible to reach from the previous one. I am also unsure how to continually increase the difficulty. I have searched far and wide for a tutorial or something like that, but I couldn't find anything. Can anyone help?
It sounds like far too specific a need to be the subject of a tutorial, to be honest. I've played Canabalt but not iCopter so I'll talk about a game like the former.
There are all sorts of calculus equations you could use to calculate acceleration and gravity to work out precisely where a platform would have to be in order to be reachable, but I suspect you will do just as well with a simpler approximation. If all your platforms are of a minimum length, then you can make an assumption on the speed that it's reasonable to expect a player to be able to reach by the time they get to the end. That, in combination with however long your jump algorithm keeps someone in the air, dictates the maximum distance that another platform of the same height could possibly be and still be reachable.
The highest platform you can reach is usually dictated by your jump algorithm - that could be a constant height, or it could be proportional to the speed, but either way you can easily estimate the highest reasonable jump you can make at the end of any given platform. This gives you a maximum relative height that you can reach from there.
Assuming your physics are fairly realistic and you apply a constant downwards force while the player is in the air, the apex of the jump will be at around the half-way point. So a platform that is the maximum attainable height relative to the player needs to be half as distant as one on the same level would be. And to find reasonable relative height and distance combinations in between, you can linearly interpolate.
Platforms below you are obviously more lenient - they can be further away than one on the same level, again by a distance roughly proportional to the speed you're travelling.
A simple algorithm then would be to pick, at each stage, either a higher or lower platform, pick a relative height from within the attainable bounds, then find the distance it needs to be at.
To adjust difficulty, you can start with these relative height and distance values above, which represent the extremes of what is possible, and reduce them by a proportion to make the jumps easier to complete. I might start with 50% reductions, +/-10% (randomised) to provide a few tougher jumps. Then as the game progresses, I'd slowly ramp that 50% down towards 0, so the player has less and less margin for error.
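Here is an illustrative Kotlin sketch of that approach (the constants and the interpolation are assumptions, not a tuned implementation): bound the reachable height and distance from the runner's speed and jump physics, shrink the bounds by a difficulty margin, and pick a platform inside them.

    import kotlin.random.Random

    data class Platform(val dx: Double, val dy: Double)         // offset from the end of the current platform

    fun nextPlatform(speed: Double, jumpVelocity: Double, gravity: Double, difficulty: Double): Platform {
        val airTime = 2 * jumpVelocity / gravity                // time to land back at take-off height
        val maxDx = speed * airTime                             // farthest reachable same-height platform
        val maxDy = jumpVelocity * jumpVelocity / (2 * gravity) // apex of the jump (max relative height)

        val margin = 0.5 * (1.0 - difficulty)                   // difficulty in [0,1]: 0 = generous margins, 1 = extremes
        val dy = Random.nextDouble(-maxDy, maxDy) * (1.0 - margin)
        // a platform at apex height must be about half as far as a same-height one;
        // lower platforms can be somewhat farther; interpolate linearly in between
        val reach = if (dy >= 0) maxDx * (1.0 - 0.5 * (dy / maxDy))
                    else maxDx * (1.0 + 0.3 * (-dy / maxDy))
        return Platform(reach * (1.0 - margin), dy)
    }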
EDIT: Since I posted this answer, I found another interesting source which may be of use: A Probabilistic Multi-Pass Level Generator. Although the game in question is different (one of the Mario games I don't recognise) many of the principles are similar, in terms of placing platforms at reasonable heights and distances. Java code is provided.
I'm not sure how comfortable you are with math, physics, etc. but, in my opinion, this is a pretty simple solution:
Using a formula to determine if a launched ball will clear a fence in the distance is a reasonable way to find an arc defining the possible farthest points the next platform could be. It's a standard formula you learn in physics when studying projectile motion. There's a fairly interactive example here that includes the equation.
I'd recommend determining the position for your next platform like this:
Randomly choose a horizontal distance X from the end of the current platform to the beginning of the next platform (determine a reasonable range for X however you want).
Use the fence problem to find a maximum value for height Y that keeps the platform reachable (a small sketch of this calculation follows the list).
You may need to subtract a small amount from the maximum height to ensure the platform can be reached depending on how you have things implemented.
Choose a height Y that is no higher than the maximum (remember that you can and should allow negative values for Y).
Place the next platform past the current one at distance X and height Y.
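A small sketch of step 2 (the "fence" check), assuming a constant horizontal run speed vx, an initial vertical jump speed vy, and gravity g (the values in the example are made up):

    fun maxReachableHeight(x: Double, vx: Double, vy: Double, g: Double): Double {
        val t = x / vx                          // time to cross the horizontal gap of width x
        return vy * t - 0.5 * g * t * t         // height of the jump arc at that moment (may be negative)
    }

    // Example: vx = 6 m/s, vy = 8 m/s, g = 9.8 m/s², gap x = 4 m  →  roughly 3.2 m of climbable height

Subtract a small safety margin from the result, as noted above, and clamp your chosen Y below it.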
