Use an object as an anchor and show a GIF around it

How can I detect objects and use them as anchors in an AR application? And how can I show a GIF around a particular detected object?
Platform: ARKit / RealityKit
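For reference, one possible approach in RealityKit is to anchor an entity to a scanned object and play a looping video beside it. RealityKit has no native GIF material, so this sketch assumes the GIF has been converted to a short video first; the resource group "Objects", the object name "shelf", and the file "animation.mp4" are all placeholder names:

```swift
import ARKit
import RealityKit
import AVFoundation

// Sketch only: anchor a looping video (a GIF converted to .mp4) next to
// a scanned object. "Objects", "shelf" and "animation.mp4" are placeholders.
func makeGifAnchor() -> AnchorEntity {
    // Anchors to the scanned reference object; RealityKit enables object
    // detection automatically for this target.
    let anchor = AnchorEntity(.object(group: "Objects", name: "shelf"))

    // RealityKit has no GIF material, so play a short video instead.
    let url = Bundle.main.url(forResource: "animation", withExtension: "mp4")!
    let player = AVPlayer(url: url)
    let material = VideoMaterial(avPlayer: player)   // iOS 14+

    // A small vertical plane floating beside the detected object.
    let plane = ModelEntity(mesh: .generatePlane(width: 0.2, height: 0.2),
                            materials: [material])
    plane.position = [0.3, 0.1, 0]   // offset in metres, relative to the object
    anchor.addChild(plane)

    // Restart the video when it ends, to mimic a GIF loop.
    NotificationCenter.default.addObserver(forName: .AVPlayerItemDidPlayToEndTime,
                                           object: player.currentItem,
                                           queue: .main) { _ in
        player.seek(to: .zero)
        player.play()
    }
    player.play()
    return anchor
}
```

You would then add it to the scene with arView.scene.addAnchor(makeGifAnchor()).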

Related

Detecting a real world object using ARKit with iOS

I am currently playing a bit with ARKit. My goal is to detect a shelf and draw stuff onto it.
I have already found ARReferenceImage, and that basically works for a very, very simple prototype, but it seems the image needs to be quite complex? Xcode always complains if I try to use something a lot simpler (like a QR-code-like image). With that marker I would know the position of an edge, and then I'd know the physical size of my shelf and how to place stuff into it. So that would be OK, but I think small and simple markers will not work, right?
But ideally I would not need a marker at all.
I know that I can detect planes, for example, but I want to detect the shelf itself. And as my shelf is open, it's not really a plane. Are there other possibilities to find an object using ARKit?
I know that my question is very vague, but maybe somebody could point me in the right direction. Or tell me if that's even possible with ARKit or if I need other tools? Like Unity?
There are several different possibilities for positioning content in augmented reality. They are called content anchors, and they are all subclasses of the ARAnchor class.
Image anchor
Using an image anchor, you would stick your reference image on a pre-determined spot on the shelf and position your 3D content relative to it.
it seems the image needs to be quite complex? Xcode always complains if I try to use something a lot simpler (like a QR-code-like image)
That's correct. The image needs to have enough visual detail for ARKit to track it. Something like a simple black and white checkerboard pattern doesn't work very well. A complex image does.
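As a rough sketch, enabling image detection in plain ARKit with SceneKit might look like this; the asset catalog group name "AR Resources" is an assumption:

```swift
import UIKit
import SceneKit
import ARKit

// Sketch of ARKit image detection with SceneKit, assuming the reference
// image lives in an asset catalog group named "AR Resources".
class ImageDetectionViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self

        let configuration = ARWorldTrackingConfiguration()
        configuration.detectionImages =
            ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                             bundle: nil) ?? []
        sceneView.session.run(configuration)
    }

    // ARKit adds a node at the image's real-world position; children of
    // that node are positioned relative to the image.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode,
                  for anchor: ARAnchor) {
        guard let imageAnchor = anchor as? ARImageAnchor else { return }
        let size = imageAnchor.referenceImage.physicalSize
        let overlay = SCNNode(geometry: SCNPlane(width: size.width,
                                                 height: size.height))
        overlay.eulerAngles.x = -.pi / 2   // lay the plane flat over the image
        node.addChildNode(overlay)
    }
}
```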
Object anchor
Using object anchors, you scan the shape of a 3D object ahead of time and bundle the resulting data file with your app. At runtime, ARKit will try to recognise this object, and if it does, you can position your 3D content relative to it. Apple has some sample code for this if you want to try it out quickly.
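A minimal sketch of enabling object detection, assuming the scanned .arobject files live in an asset catalog group named "AR Objects":

```swift
import ARKit

// Sketch: enable detection of objects scanned ahead of time.
// "AR Objects" is an assumed asset catalog group containing the
// .arobject files produced by Apple's scanning sample app.
func runObjectDetection(on session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.detectionObjects =
        ARReferenceObject.referenceObjects(inGroupNamed: "AR Objects",
                                           bundle: nil) ?? []
    session.run(configuration)
}
```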
Manually creating an anchor
Another option would be to enable ARKit plane detection, and have the user tap a point on the horizontal shelf. Then you perform a raycast to get the 3D coordinate of this point.
You can create an ARAnchor object using this coordinate, and add it to the ARSession.
Then you can again position your content relative to the anchor.
You could also implement a drag gesture to let the user fine-tune the position along the shelf's plane.
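A sketch of the tap-to-raycast-to-anchor flow described above; the anchor name "shelfContent" is a placeholder:

```swift
import UIKit
import ARKit

// Sketch: create an ARAnchor where the user taps on a detected plane.
// Assumes plane detection is enabled on the running configuration.
func addAnchor(at point: CGPoint, in sceneView: ARSCNView) {
    // Raycast from the tapped screen point onto detected plane geometry.
    guard let query = sceneView.raycastQuery(from: point,
                                             allowing: .existingPlaneGeometry,
                                             alignment: .horizontal),
          let result = sceneView.session.raycast(query).first else { return }

    // The anchor's transform is the tapped point in world space; position
    // your content relative to it in renderer(_:didAdd:for:).
    let anchor = ARAnchor(name: "shelfContent", transform: result.worldTransform)
    sceneView.session.add(anchor: anchor)
}
```

You would call this from a UITapGestureRecognizer handler, passing gesture.location(in: sceneView).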
Conclusion
Which one of these placement options is best for you depends on the use case of your app. I hope this answer was useful :)
References
There are a lot of informative WWDC videos about ARKit. You could start off by watching this one: https://developer.apple.com/videos/play/wwdc2018/610
It is absolutely possible. Whether you do this in Swift or Unity depends entirely on what you are comfortable working in.
ARKit calls them object anchors (https://developer.apple.com/documentation/arkit/arobjectanchor). In other frameworks they are often called mesh or model targets.
This YouTube video shows what you want to do in Swift.
But objects like a shelf might be hard to recognize, since their contents often change.
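For reference, a recognised object arrives in the session delegate as an ARObjectAnchor. This sketch assumes detectionObjects has already been configured on the session, and the class name is made up:

```swift
import ARKit

// Sketch: react when ARKit recognises a scanned object.
class ObjectDetectionDelegate: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let objectAnchor as ARObjectAnchor in anchors {
            print("Recognised:", objectAnchor.referenceObject.name ?? "unnamed")
            // objectAnchor.transform is the object's pose in world space;
            // attach your content relative to it here.
        }
    }
}
```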

Resizing shape of collision object in Game Object Script

I have a game object.
I am increasing the size of the game object at runtime by setting its scale; however, I am not able to change the size of the collision object's box. Is there an API reference document or a better way to achieve this?
Support for physics scaling was added today in Defold 1.2.170. Read more in the release notes: https://forum.defold.com/t/defold-1-2-170-has-been-released/65631
You need to check the Allow Dynamic Transforms option in game.project to enable this feature.

Placing objects below the ground in AR Quick Look on iOS

I am working on a project that will display objects below the ground using AR Quick Look. However, the AR mode seems to bring everything above the ground based on the bounding box of the objects in the scene.
I have tried using the USDZ directly, and composing a simple scene in Reality Composer with the object or with a simple cube, with the exact same result. The AR preview mode in Reality Composer shows the object below the ground, or below an image anchor, correctly. However, if I export the scene as a .reality file and open it using AR Quick Look, it brings the object above the ground as well.
Is there a way to achieve showing an object below the detected horizontal plane or image (horizontal) using AR Quick Look?
This is still an issue a year later. I have submitted feedback to Apple; I suggest you do too. I have suggested adding a checkbox to keep the Y-axis position as authored. My assumption is that it behaves this way to prevent the object from colliding with the ground, but I don't think that's necessary. It's just a limitation right now.

ARcore Augmented images with 3D object interaction

I want to build a digital catalogue application
where I detect an image in a catalogue and place a 3D object on it.
This can be achieved with ARCore Augmented Images.
What I need is: when I click/touch the 3D object, I need to show some information and videos.
For this particular task I need some SDK options:
without Vuforia, can this be achieved using ARCore + Unity, Android OpenCV, or anything else?
This requires a lot of work, from creating animations and layers to defining colliders and wiring them up in code.
First you create the animations and animation controllers, then add colliders to the hotspots where you want to click on the object (e.g. touch the door to open it), then map each collider's click event to fire a specific animation.
It is actually better to follow a tutorial that covers the animation basics first; then it will be easy to combine with an AR project:
https://unity3d.com/learn/tutorials/s/animation

Using three.js, can I place a viewport in a scene between the near field and far field

I am unsure if what I am trying to achieve is possible using three.js
What I aim to do is place the viewport within the frustum.
I want to create a 3D environment in which objects can appear between the viewport and the camera.
So if I move the camera, it appears that the object is rendered in 3D in front of the screen, i.e. the generated parallax gives the impression that it is projected out of the screen when things move.
I just can't find a way to do it. I am certain I did this a number of years ago in Flash, so I thought I would try three.js and WebGL.
