ARKit delete infinite planes without restarting session? - ios

Does anyone know how to make ARKit delete infinite planes without restarting the session? I'm currently trying to make my app detect one plane at a time and make it infinite. If a new plane is detected, it should ideally delete the last plane and focus on the new one at whatever height that plane is at. Does anyone know how to do this without restarting the ARKit session? (Note that restarting the session causes all ARKit-placed objects to lose their last position.)
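For what it's worth, anchors can be removed from a running session with ARSession.remove(anchor:), so one hedged approach is to remember the most recent plane anchor and drop it whenever a new plane shows up. A minimal sketch (the lastPlaneAnchor property and delegate wiring are my own, and ARKit may re-detect a removed plane while plane detection stays enabled):

import ARKit

class PlaneTracker: NSObject, ARSCNViewDelegate {
    weak var sceneView: ARSCNView?
    private var lastPlaneAnchor: ARPlaneAnchor?

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
        // Drop the previously tracked plane without restarting the session;
        // other anchors (and any placed objects) keep their positions.
        if let previous = lastPlaneAnchor {
            sceneView?.session.remove(anchor: previous)
        }
        lastPlaneAnchor = planeAnchor
    }
}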

Related

ARKit Tracking - offsetting phone movement

Currently I am trying to record the movements and rotations of my SCNNode. I am recording the movement data to a CSV and examining it afterwards. Everything is working fine except that when you move the phone, the data changes because of the SCNNode changing in world space. To elaborate, the node isn't moving or rotating, but the movement of the phone is messing with the data in a way that makes it look like it's moving.
I have read Apple's documentation about ARSessionConfiguration.worldAlignment and I think it could be possible to cancel out the movement of the phone using the gravity alignment (the default worldAlignment).
Does anyone have any advice as to how I could achieve this?
Update:
As mentioned above, my original approach to solving this was to change the ARSessionConfiguration. When you change that, the only thing that really changes is where the SCNNode starts in world space. Therefore, the change did not affect how the movement is represented.
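For reference, this is the knob being discussed, using the current class name ARWorldTrackingConfiguration (ARSessionConfiguration is the older name); sceneView is an assumed ARSCNView:

import ARKit

let configuration = ARWorldTrackingConfiguration()
// .gravity is the default: the y-axis points opposite gravity, and the
// origin is wherever the device was when the session started.
configuration.worldAlignment = .gravity
sceneView.session.run(configuration)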

ARKit save object position and see it in any next session

I am working on a project using ARKit. I need to save an object's position and see it in my next application launch, wherever it was. For example, in my office I attach some text to a door, go back home, and the next day I want to see that text in the same place on that door. Is this possible in ARKit?
In iOS 12: Yes!
"ARKit 2", aka ARKit for iOS 12, adds a set of features Apple calls "world map persistence and sharing". You can take everything ARKit knows about its local environment, including any ARAnchors you're using to track the real-world positions of virtual content, and save it in an ARWorldMap object.
Then you can serialize that object to a file, and load the file later to effectively resume the earlier AR session (if the user is in the same local environment). Upon successfully "relocalizing" to the world map, your session has all the same ARAnchors it did before saving, so you can use that to re-create your virtual content (e.g. use the name of a saved/restored anchor to decide which 3D model to show).
For more info, see the WWDC18 talk on ARKit 2 or Apple's ARKit docs and sample code.
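A minimal save/restore sketch along those lines (the file URL, function names, and error handling are my own):

import ARKit

// Hypothetical storage location for the serialized map.
let mapURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    .appendingPathComponent("worldMap")

func saveWorldMap(from session: ARSession) {
    session.getCurrentWorldMap { worldMap, _ in
        guard let map = worldMap else { return }  // mapping may not be available yet
        if let data = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                                        requiringSecureCoding: true) {
            try? data.write(to: mapURL)
        }
    }
}

func restoreWorldMap(into session: ARSession) {
    guard let data = try? Data(contentsOf: mapURL),
          let map = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self, from: data)
    else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = map  // session will try to relocalize to this map
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}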
Otherwise, probably not.
Before iOS 12, ARKit doesn’t provide a way to make any results of its local-world mapping persistent. Everything you do, every point you locate, within an AR session is defined only in the context of that session. If you place some virtual content based on plane detection, hit testing, and/or user input, the frame of reference for that position is relative to where your device was at the beginning of the session.
With no frame of reference that can persist across sessions, there’s no way to position virtual content that’ll have it appear to stay in the same real-world position/orientation after (fully) quitting/restarting the app.
But maybe...
One of the additions from “ARKit 1.5” in iOS 11.3 is sort of an escape valve for this problem: image detection. If your app’s use case involves a known/controlled environment (for example, using virtual overlays to guide visitors in an art museum), and there are some easily recognizable 2D features in that environment (like notable paintings), ARKit can detect their positions.
Once you’ve detected an image anchor that you know is a fixed feature of the environment, you can tell your AR Session to redefine its world coordinate system around that anchor (see setWorldOrigin). After doing that, you effectively have a coordinate system that’s the same across multiple sessions (assuming you detect the same image and set the world origin in each session).
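A sketch of that approach (the asset catalog group name "AR Resources" and the delegate wiring are assumptions):

import ARKit

let configuration = ARWorldTrackingConfiguration()
configuration.detectionImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                                 bundle: nil)
sceneView.session.run(configuration)

// ARSessionDelegate: re-origin the world coordinate system at the detected image.
func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    for case let imageAnchor as ARImageAnchor in anchors {
        session.setWorldOrigin(relativeTransform: imageAnchor.transform)
    }
}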

Do ARKit anchors persist after pause and run again?

I am considering developing an ARKit app, but before deciding to buy an iPhone I would like to ask two questions that are crucial for me. Please let me know if this has already been asked, as I could not find it anywhere else online.
The questions:
1. Let's say the motion tracking gets lost (e.g., when pointing at a white wall) and then recovers. Does it localize in the same frame of reference, or does it start from scratch? Also, are the anchors preserved?
2. Let's say I pause the session and then run it again (e.g., by leaving the app and coming back). Does it localize back to the frame of reference from before the pause? Also, are the anchors preserved?
I am asking this because I know that localization does not work in ARCore yet and I was wondering about its state in ARKit.
Thanks!
ARKit has two or three ways to lose tracking (depending on how you think of them); each has a different effect on anchors.
1. TemporARy tracking quality issues
(I honestly fumbled caps lock in the middle of that word. My keyboard is making the puns for me!)
In the first situation you mention, and in similar cases (pointing at a blank wall, giving the phone a sudden jostle, moving from a darkened area into bright light or vice versa), your app will get notified of changes to ARKit's tracking state that affect the quality of camera pose tracking.
When the tracking state is limited, ARKit’s idea of where the world is might be out of sync with the real world, but it still has enough information to be able to relocalize when the situation passes. That includes anchors. (Try for yourself; run one of Apple’s ARKit sample code projects, and cover the camera lens for a bit while moving the phone.)
If whatever situation is affecting the tracking state persists for a long time, relocalization is unlikely to succeed. It can help to track how long you’ve been in limited tracking and offer the user a way to restart the session if things get too out of whack.
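One way to do that bookkeeping is in the ARSessionObserver callback (the limitedSince property and the 10-second threshold are my guesses, not anything ARKit prescribes):

import ARKit

var limitedSince: Date?

func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
    switch camera.trackingState {
    case .normal:
        limitedSince = nil
    case .limited(let reason):
        if limitedSince == nil { limitedSince = Date() }
        if Date().timeIntervalSince(limitedSince!) > 10 {
            // Tracking has been limited for a while; offer the user a restart.
            print("Tracking limited (\(reason)); consider restarting the session")
        }
    case .notAvailable:
        break
    }
}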
2 and 3. Session interruption and resume or restart
If something happens that interrupts ARKit's ability to receive camera and motion data (like the incoming phone call screen on iPhone, or the user responding to an interactive notification), your app gets a sessionWasInterrupted message. There's nothing you can do in this case (as far as session management is concerned) other than wait for a corresponding sessionInterruptionEnded message.
If the interruption was brief and the device hasn’t moved much since, there’s a chance of automatic relocalization. Of course, you can’t tell how much the device has been moved because motion tracking was off... you can make an educated guess based on the duration of interruption and how sensitive your AR experience is to tracking precision, and decide accordingly whether to restart the session. (For example, a game that has space invaders floating in the air is less affected than an app that lets the user trace out a floor plan by marking walls.)
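The corresponding delegate hooks look like this (the duration bookkeeping is my own suggestion):

import ARKit

var interruptionStart: Date?

func sessionWasInterrupted(_ session: ARSession) {
    // Camera and motion input have stopped; all we can do is note when.
    interruptionStart = Date()
}

func sessionInterruptionEnded(_ session: ARSession) {
    // Use the interruption length as a rough proxy for how far the
    // device might have moved, then decide whether to restart.
    let duration = interruptionStart.map { Date().timeIntervalSince($0) } ?? 0
    print("Session was interrupted for \(duration) seconds")
}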
Aside: Traditional iOS UI patterns like modal view controllers, tab views, and navigation controllers can push the view hosting an AR session away, interrupting the session and losing tracking. Like Apple’s Human Interface Guidelines for AR suggest, it can be good to use things like popover views instead, so that you keep the AR experience onscreen and the session running.
When/if you do restart your AR session, you have a choice of whether to keep anchors or reset tracking. If you've already lost localization, what this really means is whether you keep the anchors in the arbitrary coordinate space they're defined in (even though that space doesn't line up with the real world anymore), or just lose all the anchors.
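Concretely, that choice is the set of run options you pass when re-running the session:

let configuration = ARWorldTrackingConfiguration()

// Reset tracking but keep anchors (in their now-arbitrary coordinate space):
session.run(configuration, options: [.resetTracking])

// ...or reset tracking and drop all anchors too:
session.run(configuration, options: [.resetTracking, .removeExistingAnchors])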
Short of restarting the session, though, there's nothing that'll cause anchors to be removed. And if tracking is lost briefly enough for relocalization to succeed, anchors that track real-world objects (that is, plane anchors, as opposed to the ones you manually create) should adjust back to realistic positions even if the coordinate system doesn't quite line up the way it used to.

Slow Scenekit Performance

I am creating a sort of first-person shooter game for the iPhone 6 Plus, but when I introduce any lights to the scene, the frame rate goes from an already barely acceptable 12 fps to an absolutely unplayable 2 fps. Also, introducing a particle system with more than ten particles takes the frame rate to 9 fps. I have already made it so that all the walls and doors are added to a map node, which is then flattened using flattenedClone and added to the scene. I am unsure what else I can do without switching to Metal. But I am also wondering: if SceneKit were so slow, why would it even exist?
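For reference, the flattening step described in the question looks roughly like this (mapNode, wallAndDoorNodes, and scene are placeholder names); flattenedClone() merges static child geometry so it renders in fewer draw calls:

import SceneKit

let mapNode = SCNNode()
for piece in wallAndDoorNodes {   // wallAndDoorNodes: your wall and door nodes
    mapNode.addChildNode(piece)
}
// Merge the static geometry into a single node to cut per-node draw calls.
scene.rootNode.addChildNode(mapNode.flattenedClone())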
Problem solved: get a developer licence!

Cocos2dx 2.1.4 Game, Continuous FPS drop and never recovers

I am creating a game using cocos2dx 2.1.4. Its FPS drops continuously and never recovers.
The details are as follows.
Background on the way I am doing things:
It's a game about scrolling some shapes down the screen; each shape is made up of square blocks, and there are 7 kinds of blocks, all loaded in a sprite sheet. Using the blocks from this sprite sheet I create a shape.
A level file consists of these shapes. I load two levels at the same time, one on screen and one off screen, to make the scrolling seamless. For loading two levels at the same time I use two different CCSpriteBatchNode objects:
// Load the shared sprite frames once, then create two batch nodes
// backed by the same texture atlas (one per level):
CCSpriteFrameCache::sharedSpriteFrameCache()->addSpriteFramesWithFile("56blackglow.plist");

_gameBatchNode1 = CCSpriteBatchNode::create("56blackglow.png", 200);
_gameBatchNode1->retain();
this->addChild(_gameBatchNode1, kForeground);

_gameBatchNode2 = CCSpriteBatchNode::create("56blackglow.png", 200);
_gameBatchNode2->retain();
this->addChild(_gameBatchNode2, kForeground);
The problem I am facing is that as I keep playing, the frame rate drops continuously, from 60 fps down to 10 fps, and never recovers (or might recover eventually, but I watched for 20 minutes and that is too long to wait).
My observations:
1. I used the Time Profiler; it shows the maximum time is spent in draw() calls. Also, if I play the game very fast, the peak time in the track increases, which should be fine since I am giving it more work to do. But once a peak is reached it stays at approximately that height, even if I leave the game idle. Is that normal? I would expect it to drop back to its normal level once the current work is done.
2. At one point I suspected this was happening because I use two batch nodes and remove their children immediately on a user touch, which might be slow; but even then, after the children are removed it should run normally again. To give an idea: is it OK to remove 10 children from a batch node at once? Some people say it is a very slow process. Just to check whether this was the problem, instead of removing the children I set their visibility to false. The FPS still drops and never recovers.
Please share your thoughts on this.
Though sprite batch nodes are generally quite good for drawing a lot of elements efficiently, I think they are best used for static or not-so-dynamic elements. In your case, if you have a lot of elements that have gone off screen but are still alive, the draw() function still has to make checks for them, thus hogging your performance (even if you set isVisible(false) explicitly, visibility still needs to be checked).
In your case I think it would be better to simply add new shapes off screen via some time-based function, and once they scroll out of view, remove them from the scene entirely, without using batch nodes.
Just found the problem: with every touch I was adding 8 new sprites to the layer, and they kept piling up with each touch, so over time I was giving it more and more work to do.
What I actually wanted was to replace the sprite at 8 places on each touch, but the way I was doing it every time was:
// This creates and adds a brand-new sprite on every touch:
_colorBottom1 = CCSprite::createWithSpriteFrameName(png[0]);
this->addChild(_colorBottom1, kForeground);
_colorBottom1->setPosition(ccp(_colorPanelLeftPad * _blockWidth, _blockWidth));
That caused a new sprite to be added with every touch.
It should have been this instead (replace the displayed frame rather than creating the sprite again):
// Swap the sprite's frame in place instead of allocating a new sprite:
CCSpriteFrame *frame1 = CCSpriteFrameCache::sharedSpriteFrameCache()->spriteFrameByName(png[0]);
_colorBottom1->setDisplayFrame(frame1);
