I'm working on a web app for a museum, using A-Frame and aframe-ar.
The behavior is:
You point your phone at a printed marker.
On the phone you see an image rendered as the texture of a plane.
You can move the object along the x and y axes, and you can also rotate it.
It all works fine: the marker is detected consistently and the positioning is really good, but the plane is very jittery, as if the tracker thinks I'm moving my hand very fast.
I would really appreciate any help on how to resolve this issue.
Thank you guys
I think this is just the nature of marker-based AR done from the phone camera entirely in JavaScript, without native tracking libraries like ARCore or ARKit. Have you tried an ARCore- or ARKit-backed browser?
I am looking for ways of cropping the head and upper body contour from a live camera feed and putting it in front of a virtual background. For example, how does Zoom achieve exactly this with its Virtual Background feature?
I know OpenCV is out there, but I don't know whether it only does face detection or whether it can help with cropping the whole head and body, including hair, shoulders, arms, etc.
I am not sure how apps like Instagram do it, but I know they have the functionality to replace the complete background of the camera feed with virtual content. I'm not sure whether they use ARKit or ARCore, but even these platforms only support detecting different positions on the face, nothing for detecting the boundary of the body itself.
Appreciate any help.
Thanks,
Amit.
Apps like Instagram and Snapchat use their own custom tools to achieve that: Spark AR in Instagram's case and Lens Studio for Snapchat. I really believe they don't use ARKit or ARCore, for stability reasons.
Now, if you are building your own program to detect the face or background, you would ideally start with OpenCV. Then, on top of it, you would use MATLAB for calculating the boundary, the head, or whatever else you want to achieve.
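Not what the answer above suggests, but worth mentioning as an alternative: if you end up targeting iOS 15 or later, Apple's Vision framework ships a ready-made person-segmentation request that returns a mask covering hair, shoulders, and arms, which you can then composite over a virtual background. A minimal sketch (the function name is mine):

```swift
import Vision
import CoreVideo

// Returns a grayscale person mask for one camera frame, or nil on failure.
// Composite the original frame over your virtual background using this mask.
func personMask(from pixelBuffer: CVPixelBuffer) -> CVPixelBuffer? {
    let request = VNGeneratePersonSegmentationRequest()
    request.qualityLevel = .balanced                       // .accurate is slower but cleaner
    request.outputPixelFormat = kCVPixelFormatType_OneComponent8

    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])
    return request.results?.first?.pixelBuffer
}
```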
Thanks in advance for reading my question. I am really new to ARKit and have followed several tutorials that showed me how to use plane detection and apply different textures to the planes. The feature is really amazing, but here is my question: would it be possible for the player to place the plane over the desired area first and then interact with the new ground? For example, could I use plane detection to cover an area with a grass texture and then drive a real RC car over it, just like driving it on real grass?
I have tried plane detection on my iPhone 6s, and what I found is that when I put a real-world object on top of the plane surface, it simply gets covered by the plane. Could you please give me a clue as to whether it is possible to make the plane stay on the ground without covering the real-world object?
I think this is what you are searching for:
ARKit hide objects behind walls
Another way, I think, is to track the position of the real-world object, for example with Apple's Turi Create or Core ML (or both), and then avoid drawing your content at the affected position.
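As I recall, the trick in that linked answer boils down to an "invisible" occlusion material: geometry that writes only to the depth buffer, so the camera image still shows through but virtual content behind it is hidden. A minimal SceneKit sketch, with the plane size and floor orientation as assumptions on my part:

```swift
import SceneKit

// Builds a plane that occludes virtual content behind it without being visible itself.
func makeOccluderPlane(width: CGFloat, height: CGFloat) -> SCNNode {
    let plane = SCNPlane(width: width, height: height)

    let material = SCNMaterial()
    material.colorBufferWriteMask = []     // draw no color at all...
    material.writesToDepthBuffer = true    // ...but still write depth, so it occludes
    plane.materials = [material]

    let node = SCNNode(geometry: plane)
    node.renderingOrder = -1               // render before the content it should hide
    node.eulerAngles.x = -Float.pi / 2     // lay it flat, like a floor plane
    return node
}
```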
Tracking moving objects is not supported, and that is exactly what would be needed to make a real object interact with a virtual one.
That said, I would recommend using 2D image recognition and "reading" every camera frame to detect the object as it moves through the camera's view. Look up the AVCaptureVideoDataOutputSampleBufferDelegate protocol on Apple's developer site.
Share your code and I could help with some ideas
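In case it helps as a starting point, here is a bare-bones sketch of that delegate; the class name and queue label are placeholders, and the camera input/preview wiring is omitted. (Note that if you already have an ARKit session running, the same frames are available from each ARFrame's capturedImage instead, since the camera can't be shared with a separate AVCaptureSession.)

```swift
import AVFoundation

// Receives every camera frame as a CVPixelBuffer for custom object detection.
final class FrameReader: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    private let output = AVCaptureVideoDataOutput()
    private let queue = DispatchQueue(label: "frame-reader")

    override init() {
        super.init()
        output.setSampleBufferDelegate(self, queue: queue)
        if session.canAddOutput(output) {
            session.addOutput(output)
        }
        // Add an AVCaptureDeviceInput for the camera and call session.startRunning()
        // once permissions are in place.
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        // Run your 2D image recognition on `pixelBuffer` here.
        _ = pixelBuffer
    }
}
```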
Suppose someone in real life waved their hand and hit a 3D object in AR: how would I detect that? I basically want to know when something crosses over the AR object so I can tell that something "hit" it and react.
Another example would be placing a virtual bottle on the table, then waving your hand through the air where the bottle is so that it gets knocked over.
Can this be done? If so, how? I would prefer a Unity solution, but if this can only be done natively via Xcode and ARKit, I would be open to that as well.
ARKit does solve a ton of AR problems and makes them a breeze to work with. Your issue just isn't one of them.
As @Draco18s notes (and emphasizes well with the xkcd link 👍), you've perhaps unwittingly stepped into the domain of hairy computer vision problems. You have some building blocks to work with, though: ARKit provides pixel buffers for each video frame, and the projection matrix you need to work out what portion of the 2D image is overlaid by your virtual water bottle.
Deciding when to knock over the water bottle is then a problem of analyzing frame-to-frame differences over time in that region of the image. (And tracking that region's movement relative to the whole camera image, since the user probably isn't holding the device perfectly still.) The amount of analysis required varies with the sophistication of the effect you want... a simple pixel diff might work (for some value of "work"), or there might be existing machine learning models that you could put together with Vision and Core ML...
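To make the "what portion of the image is under the bottle" step concrete, here is a rough sketch using ARKit's projection API; the function name and the 10% region size are arbitrary choices of mine:

```swift
import ARKit
import UIKit

// Projects the virtual object's world position into the rendered view and
// returns a small square region of interest around it, suitable for a
// frame-to-frame pixel diff or a Vision/Core ML request.
func regionOfInterest(around objectPosition: simd_float3,
                      in frame: ARFrame,
                      viewportSize: CGSize,
                      orientation: UIInterfaceOrientation = .portrait) -> CGRect {
    let center = frame.camera.projectPoint(objectPosition,
                                           orientation: orientation,
                                           viewportSize: viewportSize)
    let side = min(viewportSize.width, viewportSize.height) * 0.1
    return CGRect(x: center.x - side / 2,
                  y: center.y - side / 2,
                  width: side,
                  height: side)
}
```

Comparing the pixels in that region across consecutive frames gives the crude "did something move in front of the bottle" signal described above (note that frame.capturedImage is in camera-image space, so you may need ARFrame's displayTransform to map between image and viewport coordinates).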
You should take a look at ManoMotion: https://www.manomotion.com/
They're working on this problem and are supposed to release a solution in the form of a library soon.
I am new to ARKit and I'm using Unity along with it.
So I just got one of my custom models to display, and I can anchor it to the ground by tapping on a detected plane. However, my model is pretty big: it's a life-sized shack.
The problem is that when I move around too much, the model loses its anchor point, becomes unstable, and starts drifting all over the place. This wasn't a problem when it was a smaller model, only when I scaled it up.
Has anyone else had this problem? Have you gotten it to work?
Thanks!
It's kind of inherent to the plane detection. If you look up at the ceiling, for instance, the device no longer sees the floor and is basically relying only on the phone's gyroscope and accelerometer to know how it has moved. As far as I know there is no real solution to this, since the object is only anchored to the detected plane.
I have an iOS application with a MapView containing several MKPolygons positioned to represent buildings on the map. Using the compass and GPS, I want to be able to work out which of the polygons the handset is being aimed at.
I am already getting the GPS location and using the magnetometer to get the heading, so I just need to work out how to project a ray from this point and determine which polygon it hits first.
Any suggestions?
You probably want to look at something like collision detection in 3D games to solve this; it sounds like an identical problem to me. There is a good overview of the different algorithms in this Stack Overflow question: When to use Binary Space Partitioning, Quadtree, Octree?
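If the number of building polygons is small, you may not even need a spatial partition: marching along the heading in small steps and doing a point-in-polygon test at each step can be enough. A rough sketch under those assumptions (the function name, 1 km range, and 5 m step are arbitrary):

```swift
import MapKit
import CoreLocation

// Rough sketch: walk along the device heading and return the first polygon
// that contains a sampled point.
func firstPolygonHit(from origin: CLLocationCoordinate2D,
                     heading: CLLocationDirection,
                     polygons: [MKPolygon],
                     maxDistance: Double = 1_000,
                     step: Double = 5) -> MKPolygon? {
    let bearing = heading * .pi / 180
    let earthRadius = 6_371_000.0
    let paths = polygons.map { cgPath(for: $0) }

    var distance = step
    while distance <= maxDistance {
        // Destination coordinate `distance` metres along the bearing (spherical formula).
        let lat1 = origin.latitude * .pi / 180
        let lon1 = origin.longitude * .pi / 180
        let angular = distance / earthRadius
        let lat2 = asin(sin(lat1) * cos(angular) + cos(lat1) * sin(angular) * cos(bearing))
        let lon2 = lon1 + atan2(sin(bearing) * sin(angular) * cos(lat1),
                                cos(angular) - sin(lat1) * sin(lat2))
        let sample = MKMapPoint(CLLocationCoordinate2D(latitude: lat2 * 180 / .pi,
                                                       longitude: lon2 * 180 / .pi))

        for (index, path) in paths.enumerated() {
            if path.contains(CGPoint(x: sample.x, y: sample.y)) {
                return polygons[index]
            }
        }
        distance += step
    }
    return nil
}

// Builds a CGPath in MKMapPoint space from the polygon's vertices.
private func cgPath(for polygon: MKPolygon) -> CGPath {
    let path = CGMutablePath()
    let points = polygon.points()
    for i in 0..<polygon.pointCount {
        let p = CGPoint(x: points[i].x, y: points[i].y)
        if i == 0 { path.move(to: p) } else { path.addLine(to: p) }
    }
    path.closeSubpath()
    return path
}
```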