How to place a 3D model in ARCore at its accurate real-life size - augmented-reality

I want to place a chair in ARCore at the chair's actual real-life size. The user should not be able to zoom the chair in or out (scale it) in ARCore.

The coordinate system in world space uses meters as its unit, so if you create your model in Blender with a height of 1 unit, it will be 1 meter tall in ARCore.
You can disable the scaleController on the TransformableNode to prevent scaling:
node.scaleController.isEnabled = false

Related

How to reduce the view of camera in ARKit?

I placed 180 planes around the camera, one every 2 degrees. The number of planes shown in the view is fixed because the angular spacing of the planes is fixed.
How can I change the camera's view in ARKit so that fewer planes than normal are visible?
You cannot change the view of the camera as it is tightly coupled to the physical camera of your device. What you could do is create a shader which adds a section plane.
To reduce the RGB view produced by the ARCamera, use SKCropNode from the SpriteKit framework:
class SKCropNode: SKNode
Apple's documentation says the following about it:
SKCropNode masks pixels drawn by its children so that only some pixels are seen
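As a minimal sketch of that idea, assuming the content you want to restrict is drawn in a SpriteKit scene (for example the overlaySKScene of your AR view), only the pixels covered by the mask's opaque area remain visible; the mask shape and sizes below are purely illustrative:
import SpriteKit

// Minimal sketch: SKCropNode only shows child pixels covered by its mask.
func makeCroppedOverlay(sceneSize: CGSize) -> SKCropNode {
    let cropNode = SKCropNode()

    // Illustrative mask: show only the left half of the scene.
    let mask = SKShapeNode(rect: CGRect(x: 0, y: 0, width: sceneSize.width / 2, height: sceneSize.height))
    mask.fillColor = .white
    mask.lineWidth = 0
    cropNode.maskNode = mask

    // Anything added as a child of the crop node is clipped to the mask.
    let content = SKSpriteNode(color: .red, size: sceneSize)
    content.position = CGPoint(x: sceneSize.width / 2, y: sceneSize.height / 2)
    cropNode.addChild(content)

    return cropNode
}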

Notice when a node's position enters a certain bounding box in SceneKit?

Is there a way to trigger a function when a node (e.g. a tossed ball) enters a certain bounding box (e.g. a basket) in SceneKit / ARKit? If not, how would you implement this?
This is how I would probably approach it, after obtaining a basketball hoop 3D .dae model:
https://3dwarehouse.sketchup.com/model/fa42320b3def2c6b4741187cebbc52b3/Basketball-Hoop
I would first set up the physics shapes for the different elements: backboard, pole, ring/hoop, floor and ball. I would then work out a cylinder size that fits inside the ring.
Using:
let cylinder = SCNCylinder(radius: 1.0, height: 0.1)
This becomes the physicsBody for the hoop:
hoopNode.physicsBody = SCNPhysicsBody(type: .static, shape: SCNPhysicsShape(geometry: cylinder))
I would then work out the collisions the ball has with each different basketball element/node.
Don't forget to use the debug feature:
sceneView.debugOptions = .showPhysicsShapes
This will help you visually debug the collisions with the physics shapes and make sure the physics shapes are accurately scaled and in the correct positions.
How to set up collisions has already been answered in a previous post:
how to setup SceneKit collision detection
You probably want the ball to bounce off the backboard, pole and ring, but when it hits the hoop physics shape you might want to make the ball disappear, play a nice spark/flame animation or something similar, and register a score. (See the sketch below.)
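As a rough sketch of that setup (the node names ballNode and hoopNode and the category values are illustrative, not taken from the linked post), the hoop cylinder can be made a pure contact trigger and the scoring handled in the physics contact delegate:
import SceneKit

// Illustrative collision categories (bit masks).
struct CollisionCategory {
    static let ball = 1 << 0
    static let hoop = 1 << 1
}

func configureContacts(ballNode: SCNNode, hoopNode: SCNNode) {
    ballNode.physicsBody?.categoryBitMask = CollisionCategory.ball
    ballNode.physicsBody?.contactTestBitMask = CollisionCategory.hoop

    hoopNode.physicsBody?.categoryBitMask = CollisionCategory.hoop
    // The invisible cylinder should only report contacts, not deflect the ball.
    hoopNode.physicsBody?.collisionBitMask = 0
}

// Assign an instance of this as sceneView.scene.physicsWorld.contactDelegate.
class ScoreContactHandler: NSObject, SCNPhysicsContactDelegate {
    func physicsWorld(_ world: SCNPhysicsWorld, didBegin contact: SCNPhysicsContact) {
        let categories = (contact.nodeA.physicsBody?.categoryBitMask ?? 0) |
                         (contact.nodeB.physicsBody?.categoryBitMask ?? 0)
        if categories == CollisionCategory.ball | CollisionCategory.hoop {
            // Ball passed through the hoop cylinder: hide the ball, play an
            // effect and increment the score here.
        }
    }
}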
This ARKit game on GitHub will give you a nice template for handling all of those things: collisions, animations and scoring.
https://github.com/farice/ARShooter
You have to implement it using the ball's world position and the basket's world position plus its bounding box.
Calculate the size of the basket using its bounding box:
let (basketBoundingBoxMin, basketBoundingBoxMax) = basket!.presentation.boundingBox
// SCNVector3 has no built-in '-' operator, so compute the size component-wise:
let size = SCNVector3(basketBoundingBoxMax.x - basketBoundingBoxMin.x, basketBoundingBoxMax.y - basketBoundingBoxMin.y, basketBoundingBoxMax.z - basketBoundingBoxMin.z)
Use this size and the basket's world position to calculate:
Basket's minimum x,y,z
Basket's maximum x,y,z
Then check whether the ball's world position lies inside the basket's minimum and maximum positions; a sketch of this check follows.
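Here is a small sketch of that check. It uses a slight variation: instead of building world-space minimum and maximum values, it converts the ball's world position into the basket node's local space and compares it against the bounding box directly; ballNode and basketNode are hypothetical names:
import SceneKit

// Minimal sketch: test whether the ball is inside the basket's bounding box.
func ballIsInsideBasket(ballNode: SCNNode, basketNode: SCNNode) -> Bool {
    // Convert the ball's world position into the basket's local space so it
    // can be compared directly against the basket's bounding box.
    let ballWorld = ballNode.presentation.worldPosition
    let local = basketNode.presentation.convertPosition(ballWorld, from: nil)
    let (minBox, maxBox) = basketNode.presentation.boundingBox

    return local.x >= minBox.x && local.x <= maxBox.x &&
           local.y >= minBox.y && local.y <= maxBox.y &&
           local.z >= minBox.z && local.z <= maxBox.z
}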

How to zoom to fit 3D points in the scene to screen?

I store my 3D points (many points) in a TGLPoints object. There is no other object in the scene than the points. When drawing the points, I would like to fit them to the screen so they do not appear too far away or too close. I tried TGLCamera.ZoomAll, but with no success, and also the solution given here, which adjusts the camera location, depth of view and scene scale:
objSize := YourCamera.TargetObject.BoundingSphereRadius;
if objSize > 0 then begin
  if objSize < 1 then begin
    GLCamera.SceneScale := 1 / objSize;
    objSize := 1;
  end
  else
    GLCamera.SceneScale := 1;
  GLCamera.AdjustDistanceToTarget(objSize * 0.27);
  GLCamera.DepthOfView := 1.5 * GLCamera.DistanceToTarget + 2 * objSize;
end;
The points did not appear on the screen this time.
What should I do to fit the 3D points to screen?
For each point, build a scale factor by taking the length of the vector from the point's position to the camera position. Then, using this scale, build the transformation matrix that you apply to the camera matrix. If the scale is large, the point is farther away, and you apply a reverse translation to bring that point into close proximity. To compute the translation vector, use the following formula:
translation vector = translation vector ± (abs(scale) / 2)
Whether you add or subtract is decided by the scale magnitude: if the point is too far from the camera, choose the minus sign in the equation above.

How do I get more reliable Y position tracking for the Google Tango in Unity?

We have a Unity scene that uses area learning, which has been extremely reliable and consistent about the XZ position. However, we are noticing that sometimes the Tango delta camera's Y position will "jump up" very high in the scene. When we force the Tango to relocalize (by covering the sensors for a few seconds), the Y position remains very off. At other times, the Y position varies by 0.5 - 1.5 Unity units when we first start up our Unity app on the Tango while holding it in the exact same position in the exact same room and using the same ADF file. Is there a way to get more reliable Y position tracking and/or correct for these jumps?
(All XYZ coordinates in this context use the Unity convention: x is right, y is up, z is forward.)
The Y position should work the same way as the XZ coordinates; it relocalizes to the height based on the ADF origin.
But note that the ADF's origin is where you started learning (recording) the ADF. Say you started the learning session holding the device normally; the ADF's origin might then be a little higher than ground level. When you construct a virtual world to relocalize against, you should take that height difference into consideration.
Another thing to check is that there is no offset or initial location set on the DeltaPoseController prefab. DeltaPoseController takes its initial starting transformation as an offset and adds the pose on top of it. For example, if the DeltaPoseController's starting position is (0,1,0) and the pose from the device is (0,1,1), then the actual position of the DeltaPoseController in Unity will be (0,2,1). This applies to both translation and rotation.
Another, more advanced (and preferred) way of defining ground level is to use the depth sensor to find the ground height. The Unity Augmented Reality example shows how to detect a plane and place a marker on it. You can easily apply a similar method to the ground plane: run PlaneFinding and place the ground at the right height in Unity world space.

OpenCV continuous speed measurement using a camera

I am new to OpenCV, so bear with me if there are simple things I am missing here.
I am trying to work out a camera-based system that can continuously output the speed of a vehicle, with the following assumptions:
1. The camera is placed horizontally and the vehicle passes within 3 to 5 feet of the camera lens.
2. The speed will not be more than 30 km/h.
I was hoping to start with the concept of an optical mouse, which detects displacement in the surface pattern. However, I am unclear as to how to handle the background when the vehicle starts to enter the frame.
There are two methods I am interested in experimenting with, but I am looking for further input:
Detect the vehicle as it enters the frame and separate it from the background.
Use cvGoodFeaturesToTrack to find points on the vehicle.
Track the points into the next frame and calculate the horizontal velocity using the pyramidal Lucas-Kanade optical flow function.
Repeat.
Please suggest corrections and amendments.
I would also ask more experienced members to help me code this procedure efficiently, since I don't know which functions are the most appropriate to use here.
Thanks in advance.
Presumably you are using a simple camera at 20 to 30 fps, placed perpendicular to the road but some distance away from it, and the objects, i.e. your cars, have a maximum velocity of about 8 m/s (30 km/h) in the object plane. Calculate the speed of the cars in the image plane with the help of the lens you are using:
(speed in object plane / distance of camera from road) = (speed in image plane / focal length)
You can convert this to pixels per second if you know how much each pixel on the sensor measures (the pixel pitch).
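For example, with purely illustrative numbers: a 4 mm focal length, the camera 1.2 m from the car and a real-world speed of 8 m/s gives an image-plane speed of 8 × 0.004 / 1.2 ≈ 0.027 m/s on the sensor. With a 2 µm pixel pitch that is roughly 13,000 pixels per second, or around 440 pixels per frame at 30 fps, which is why the magnitude of the per-frame displacement matters for the optical flow method described below.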
Steps...
You can use frame differencing: subtract the current frame from the previous frame and take the absolute difference, then threshold the difference. This segments your moving car from the background. Remember that this segments all moving objects, so if you want a car and not a moving person, you can use a shape characteristic such as the height-to-width ratio. Fit a rectangle to the segmented region and repeat the same steps in each frame, keeping a record of the coordinate of the leading edge of the bounding box in every frame. That way, from the moment a car enters the view until it passes out of it, you know how long the car has persisted. Use the number of frames, the frame rate and the coordinates of the leading edge of the bounding box to calculate the speed.
You can use goodFeaturesToTrack and OpenCV's optical flow. That way you can distinguish between fast-moving and slow-moving objects, but keep refreshing the points that goodFeaturesToTrack gives you, or else any new car coming into the camera view will not be picked up. Record the displacement of the set of points picked by goodFeaturesToTrack in each frame; that is the displacement of the moving object. Calculate the speed in the same way. The basic idea for calculating speed is to record the number of frames for which the object has persisted in the camera's field of view: since your camera is fixed, so is your field of view, hence what matters is in how many frames you are able to catch the object.
Remember: OpenCV's optical flow works for tracking slow-moving objects; more precisely, the displacement of the feature points (determined by goodFeaturesToTrack) must be small between two consecutive frames for the algorithm to work. Large displacements will produce erroneous predictions, which is why the speed in the image plane is important; you should get at least a qualitative idea of it.
NOTE: both methods are for single-object tracking; for multiple-object tracking you will need some modifications. However, you can start with either method, and I think it will work.
