Draw line instead of rendering Anchor in arcore - augmented-reality

I am new to AR and I am working on an app using ARCore, based on this project: AR-REMOTE-SUPPORT.
When I draw on the screen it places the default Android anchor object; I want to draw a line instead of the default Android anchor.
How can I achieve this?
Here is the function that places the anchors on the screen:
public void onDrawFrame(GL10 gl) {
    // Clear screen to notify driver it should not load any pixels from previous frame.
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);

    if (mSession == null) {
        return;
    }
    // Notify ARCore session that the view size changed so that the perspective matrix and
    // the video background can be properly adjusted.
    mDisplayRotationHelper.updateSessionIfNeeded(mSession);

    try {
        // Obtain the current frame from ARSession. When the configuration is set to
        // UpdateMode.BLOCKING (it is by default), this will throttle the rendering to the
        // camera framerate.
        Frame frame = mSession.update();
        Camera camera = frame.getCamera();

        // Handle taps. Handling only one tap per frame, as taps are usually low frequency
        // compared to frame rate.
        MotionEvent tap = queuedSingleTaps.poll();
        if (tap != null && camera.getTrackingState() == TrackingState.TRACKING) {
            for (HitResult hit : frame.hitTest(tap)) {
                // Check if any plane was hit, and if it was hit inside the plane polygon
                Trackable trackable = hit.getTrackable();
                // Creates an anchor if a plane or an oriented point was hit.
                if ((trackable instanceof Plane && ((Plane) trackable).isPoseInPolygon(hit.getHitPose()))
                        || (trackable instanceof Point
                        && ((Point) trackable).getOrientationMode()
                        == Point.OrientationMode.ESTIMATED_SURFACE_NORMAL)) {
                    // Hits are sorted by depth. Consider only closest hit on a plane or oriented point.
                    // Cap the number of objects created. This avoids overloading both the
                    // rendering system and ARCore.
                    if (anchors.size() >= 250) {
                        anchors.get(0).detach();
                        anchors.remove(0);
                    }
                    // Adding an Anchor tells ARCore that it should track this position in
                    // space. This anchor is created on the Plane to place the 3D model
                    // in the correct position relative both to the world and to the plane.
                    anchors.add(hit.createAnchor());
                    break;
                }
            }
        }

        // Draw background.
        mBackgroundRenderer.draw(frame);

        // If not tracking, don't draw 3d objects.
        if (camera.getTrackingState() == TrackingState.PAUSED) {
            return;
        }

        // Get projection matrix.
        float[] projmtx = new float[16];
        camera.getProjectionMatrix(projmtx, 0, 0.1f, 100.0f);

        // Get camera matrix and draw.
        float[] viewmtx = new float[16];
        camera.getViewMatrix(viewmtx, 0);

        // Compute lighting from average intensity of the image.
        final float lightIntensity = frame.getLightEstimate().getPixelIntensity();

        if (isShowPointCloud()) {
            // Visualize tracked points.
            PointCloud pointCloud = frame.acquirePointCloud();
            mPointCloud.update(pointCloud);
            mPointCloud.draw(viewmtx, projmtx);

            // Application is responsible for releasing the point cloud resources after
            // using it.
            pointCloud.release();
        }

        // Check if we detected at least one plane. If so, hide the loading message.
        if (mMessageSnackbar != null) {
            for (Plane plane : mSession.getAllTrackables(Plane.class)) {
                if (plane.getType() == Plane.Type.HORIZONTAL_UPWARD_FACING
                        && plane.getTrackingState() == TrackingState.TRACKING) {
                    hideLoadingMessage();
                    break;
                }
            }
        }

        if (isShowPlane()) {
            // Visualize planes.
            mPlaneRenderer.drawPlanes(
                    mSession.getAllTrackables(Plane.class), camera.getDisplayOrientedPose(), projmtx);
        }

        // Visualize anchors created by touch.
        float scaleFactor = 1.0f;
        for (Anchor anchor : anchors) {
            if (anchor.getTrackingState() != TrackingState.TRACKING) {
                continue;
            }
            // Get the current pose of an Anchor in world space. The Anchor pose is updated
            // during calls to session.update() as ARCore refines its estimate of the world.
            anchor.getPose().toMatrix(mAnchorMatrix, 0);

            // Update and draw the model and its shadow.
            mVirtualObject.updateModelMatrix(mAnchorMatrix, mScaleFactor);
            //mVirtualObjectShadow.updateModelMatrix(mAnchorMatrix, scaleFactor);
            mVirtualObject.draw(viewmtx, projmtx, lightIntensity);
            mVirtualObjectShadow.draw(viewmtx, projmtx, lightIntensity);
        }

        sendARViewMessage();
    } catch (Throwable t) {
        // Avoid crashing the application due to unhandled exceptions.
        Log.e(TAG, "Exception on the OpenGL thread", t);
    }
}
Any help would be appreciated
TIA

One simple way to draw a line in ARCore is to create it between two anchor points.
The line itself is generally a 3D object as well.
Here is a tested, working example based on the nice approach in this answer: https://stackoverflow.com/a/52816504/334402
private void drawLine(AnchorNode node1, AnchorNode node2) {
    //Draw a line between two AnchorNodes (adapted from https://stackoverflow.com/a/52816504/334402)
    Log.d(TAG, "drawLine");
    Vector3 point1, point2;
    point1 = node1.getWorldPosition();
    point2 = node2.getWorldPosition();

    //First, find the vector extending between the two points and define a look rotation
    //in terms of this Vector.
    final Vector3 difference = Vector3.subtract(point1, point2);
    final Vector3 directionFromTopToBottom = difference.normalized();
    final Quaternion rotationFromAToB =
            Quaternion.lookRotation(directionFromTopToBottom, Vector3.up());
    MaterialFactory.makeOpaqueWithColor(getApplicationContext(), new Color(0, 255, 244))
            .thenAccept(
                    material -> {
                        /* Then, create a rectangular prism, using ShapeFactory.makeCube() and use the difference vector
                           to extend to the necessary length. */
                        Log.d(TAG, "drawLine inside .thenAccept");
                        ModelRenderable model = ShapeFactory.makeCube(
                                new Vector3(.01f, .01f, difference.length()),
                                Vector3.zero(), material);
                        /* Last, set the world rotation of the node to the rotation calculated earlier and set the world position to
                           the midpoint between the given points. */
                        Anchor lineAnchor = node2.getAnchor();
                        nodeForLine = new Node();
                        nodeForLine.setParent(node1);
                        nodeForLine.setRenderable(model);
                        nodeForLine.setWorldPosition(Vector3.add(point1, point2).scaled(.5f));
                        nodeForLine.setWorldRotation(rotationFromAToB);
                    }
            );
}
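For illustration, assuming the two anchors come from ARCore hit tests and the activity has access to a Sceneform ArSceneView (the names arSceneView, firstAnchor and secondAnchor are assumptions for this sketch, not part of the original code), the method above could be called roughly like this:
// Hypothetical usage sketch: wrap two existing ARCore Anchors in AnchorNodes,
// attach them to the Sceneform scene, then draw the line between them.
AnchorNode node1 = new AnchorNode(firstAnchor);   // firstAnchor: an ARCore Anchor from a hit test
AnchorNode node2 = new AnchorNode(secondAnchor);  // secondAnchor: another ARCore Anchor
node1.setParent(arSceneView.getScene());
node2.setParent(arSceneView.getScene());
drawLine(node1, node2);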
You can see the full source here: https://github.com/mickod/LineView

Related

How to place an object at a specific location in ARCore?

I am trying to place an object at a specific location without tapping. Is there any direct method to place an object at a real-world location?
Certainly you can do this:
1. Search for a flat surface.
2. Override the onUpdate method.
3. Loop through the trackables, and your model will be placed without tapping.
Just to give you an idea of what the onUpdate method looks like, this is pseudo code:
frame = arFragment.getArSceneView().getArFrame();
if (frame != null) {
    //get the trackables
    Iterator<Plane> var3 = frame.getUpdatedTrackables(Plane.class).iterator();
    while (var3.hasNext()) {
        Plane plane = var3.next();
        //If a plane has been detected & is being tracked by ARCore
        if (plane.getTrackingState() == TrackingState.TRACKING) {
            //Hide the plane discovery helper animation
            arFragment.getPlaneDiscoveryController().hide();
            //Get all added anchors to the frame
            Iterator<Anchor> iterableAnchor = frame.getUpdatedAnchors().iterator();
            //place the first object only if no previous anchors were added
            if (!iterableAnchor.hasNext()) {
                //Perform a hit test at the center of the screen to place an object without tapping
                List<HitResult> hitTest = frame.hitTest(getScreenVector3().x, getScreenVector3().y);
                //iterate through all hits
                Iterator<HitResult> hitTestIterator = hitTest.iterator();
                while (hitTestIterator.hasNext()) {
                    HitResult hitResult = hitTestIterator.next();
                    //Create an anchor at the plane hit
                    Anchor modelAnchor = plane.createAnchor(hitResult.getHitPose());
                    //Attach a node to this anchor with the scene as the parent
                    anchorNode = new AnchorNode(modelAnchor);
                    anchorNode.setParent(arFragment.getArSceneView().getScene());
                    //Create a new TransformableNode that will carry our model (not shown here)
                }
            }
        }
    }
}
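The getScreenVector3() helper used in the hit test above is not part of the snippet; it only has to return the centre of the screen. A minimal sketch of what it could look like, assuming the code lives in an activity or fragment that holds the ArSceneView:
// Hypothetical helper: returns the centre of the ArSceneView as a Vector3
// (x = half the view width, y = half the view height, z unused).
private Vector3 getScreenVector3() {
    ArSceneView view = arFragment.getArSceneView();
    return new Vector3(view.getWidth() / 2f, view.getHeight() / 2f, 0f);
}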
You can add the node directly to the scene:
node.localPosition = Vector3(x, y, z)
arSceneView.scene.addChild(node)

ARCore Unity: How do I Start and Stop Plane Detection on Command?

I am creating an app with ARCore, but I don't want ARCore to look for planes as soon as the app starts. Instead, I want the plane detection to begin when I hit a button in my app. It would also be great if I could stop the plane detection on command as well.
Does anyone know how I could start and stop the ARCore plane detection on command?
I am building the app in Unity.
Thanks so much in advance!
In ARPlaneVisualizer.cs, there is this code:
void OnEnable()
{
    m_PlaneLayer = LayerMask.NameToLayer("ARGameObject");
    ARInterface.planeAdded += PlaneAddedHandler;
    ARInterface.planeUpdated += PlaneUpdatedHandler;
    ARInterface.planeRemoved += PlaneRemovedHandler;
    HidePlane(true);
}

void OnDisable()
{
    ARInterface.planeAdded -= PlaneAddedHandler;
    ARInterface.planeUpdated -= PlaneUpdatedHandler;
    ARInterface.planeRemoved -= PlaneRemovedHandler;
    HidePlane(false);
}
You can use the OnEnable() code to start tracking and the OnDisable() code to stop tracking.
Initially, create a bool to gate the surface detection code, and set the bool to true:
bool isSurfaceDetected = true;

if (isSurfaceDetected) {
    Session.GetTrackables<TrackedPlane>(_newPlanes, TrackableQueryFilter.New);
    // Iterate over planes found in this frame and instantiate corresponding GameObjects to visualize them.
    foreach (var curPlane in _newPlanes) {
        // Instantiate a plane visualization prefab and set it to track the new plane. The transform is set to
        // the origin with an identity rotation since the mesh for our prefab is updated in Unity World
        // coordinates.
        var planeObject = Instantiate(plane, Vector3.zero, Quaternion.identity, transform);
        planeObject.GetComponent<DetectedPlaneVisualizer>().Initialize(curPlane);

        // Debug.Log ("test....");
        // Apply a random color and grid rotation.
        // planeObject.GetComponent<Renderer>().material.SetColor("_GridColor", new Color(Random.Range(0.0f, 1.0f), Random.Range(0.0f, 1.0f), Random.Range(0.0f, 1.0f)));
        // planeObject.GetComponent<Renderer>().material.SetFloat("_UvRotation", Random.Range(0.0f, 360.0f));
    }
}
Create a stop button in the canvas and attach the method below:
public void StopTrack()
{
    // Set isSurfaceDetected to false to disable the plane detection code
    isSurfaceDetected = false;

    // Tag the DetectedPlaneVisualizer prefab as "Plane" (or anything else)
    GameObject[] anyName = GameObject.FindGameObjectsWithTag("Plane");

    // DetectedPlaneVisualizer handles multiple polygons, so loop and disable the
    // DetectedPlaneVisualizer script attached to each prefab instance.
    for (int i = 0; i < anyName.Length; i++)
    {
        anyName[i].GetComponent<DetectedPlaneVisualizer>().enabled = false;
    }
}
Make sure the stop button method is in the ARController.

2D augmented reality with sensor issue

I'm making a "geolocational AR app" in which I use Paint() to draw a bitmap on my screen, using the sensors to apply a translation so that my image is only shown at a specific point.
I've got everything working; however, the image shakes while I display it.
I really want the effect that the following video presents:
https://www.youtube.com/watch?v=8U3vWETmk2U
I've implemented a low-pass filter to ease the situation, but it is still not as steady as the footage in the video.
This is how I achieve the AR movement:
float dx = (float) ( (canvas.getWidth()/ horizontalFOV) * (Math.toDegrees(orientation[0])-curBearingTo));
float dy = (float) ( (canvas.getHeight()/ verticalFOV) * Math.toDegrees(orientation[1]));
canvas.translate(0.0f, 0.0f-dy);
canvas.translate(0.0f-dx, 0.0f);
The curBearingTo value is the result of the Android location API call bearingTo(location, destination).
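In other words, roughly (a sketch; the variable names are assumptions):
// bearingTo() returns the initial bearing, in degrees east of true north,
// from the current location to the destination.
float curBearingTo = currentLocation.bearingTo(destinationLocation);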
And this is how I get the orientation matrix:
if (lastAccelerometer != null && lastCompass != null) {
    boolean gotRotation = SensorManager.getRotationMatrix(rotation,
            identity, lastAccelerometer, lastCompass);
    if (gotRotation) {
        // remap such that the camera is pointing straight down the Y axis
        SensorManager.remapCoordinateSystem(rotation,
                SensorManager.AXIS_X, SensorManager.AXIS_Z,
                cameraRotation);
        // orientation vector
        SensorManager.getOrientation(cameraRotation, orientation);
    }
}
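For reference, the low-pass filter mentioned above is a simple exponential smoothing applied to the raw accelerometer and compass arrays before they are passed to getRotationMatrix(). A simplified sketch of that kind of filter (the ALPHA constant and array names are only illustrative):
// Simple exponential low-pass filter: blends each new sensor reading with the
// previous smoothed value. A higher ALPHA keeps more of the old value and
// therefore smooths (and lags) more.
private static final float ALPHA = 0.9f;

private float[] lowPass(float[] input, float[] output) {
    if (output == null) {
        return input.clone();
    }
    for (int i = 0; i < input.length; i++) {
        output[i] = ALPHA * output[i] + (1f - ALPHA) * input[i];
    }
    return output;
}

// Typically called from onSensorChanged(), e.g.:
// lastAccelerometer = lowPass(event.values, lastAccelerometer);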
Any suggestion?

XNA model with bones

I created a model in Blender and then used an armature to create a pose. How can I display this pose in XNA/MonoGame? I can't find any solution. I use the model exported to an FBX file.
My only code is here, and it draws the model without the armature effect:
protected override void Draw(GameTime gameTime)
{
    graphics.GraphicsDevice.Clear(Color.CornflowerBlue);

    // Copy any parent transforms.
    Matrix[] transforms = new Matrix[myModel.Bones.Count];
    myModel.CopyAbsoluteBoneTransformsTo(transforms);

    // Draw the model. A model can have multiple meshes, so loop.
    foreach (ModelMesh mesh in myModel.Meshes)
    {
        // This is where the mesh orientation is set, as well as our camera and projection.
        foreach (BasicEffect effect in mesh.Effects)
        {
            effect.EnableDefaultLighting();
            effect.World = transforms[mesh.ParentBone.Index] * Matrix.CreateRotationY(modelRotation)
                * Matrix.CreateTranslation(modelPosition);
            effect.View = Matrix.CreateLookAt(cameraPosition, Vector3.Zero, Vector3.Up);
            effect.Projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.ToRadians(45.0f),
                aspectRatio, 1.0f, 10000.0f);
        }
        // Draw the mesh, using the effects set above.
        mesh.Draw();
    }

    base.Draw(gameTime);
}

Calculate the new camera coordinates after a 90 degree rotation in an isometric 2D projection

I made a 2D isometric renderer. It works fine, but now I want to show my scene from 4 different points of view (NE, NW, SE, SW); after a 90° rotation, however, my camera cannot keep the center of my scene on screen.
What's working:
I calculate a new projection of the scene to match the new viewpoint (x, y, z in my world).
I reorganise the parts of my scene (chunks) to draw them in the correct order.
I reorganise the 'tiles' of the 'chunks' to draw them in the correct order.
I can keep the correct center with a 180 degree rotation.
What's not working:
I cannot find the correct translation to apply to my camera after a 90 degree rotation.
What I know:
To keep the same center after a 180° rotation of my camera I have to do this:
camera.Position -= new Vector2(2 * camera.Position.X + camera.Width, 2 * camera.Position.Y + camera.Height);
Illustration
If the center of your map is the origin (0,0,0), this gets easy:
First you store your default camera position in a Vector3 CameraOffset, then you calculate the position using a rotation matrix. 90° in radians is half of Pi, so we will use PiOverTwo. We will also use an enum to decide which direction to point, so you can say
Camera.Orientation = Orientation.East;
and the camera should fix itself :)
public enum Orientation
{
    North, East, South, West
}
In the camera:
public Vector3 Position { get; protected set; }

Vector3 _CameraOffset = new Vector3(0, 20, 20);
public Vector3 CameraOffset
{
    get
    {
        return _CameraOffset;
    }
    set
    {
        _CameraOffset = value;
        UpdateOrientation();
    }
}

Orientation _Orientation = Orientation.North;
public Orientation Orientation
{
    get
    {
        return _Orientation;
    }
    set
    {
        _Orientation = value;
        UpdateOrientation();
    }
}

private void UpdateOrientation()
{
    Position = Vector3.Transform(CameraOffset, Matrix.CreateRotationY(MathHelper.PiOverTwo * (int)Orientation));
}
If you want a gliding movement between positions, I think I can help too ;)
If your camera does not focus on Vector3.Zero and should not rotate around it, you just need to change:
Position = Vector3.Transform(CameraOffset, Matrix.CreateRotationY(MathHelper.PiOverTwo * (int)Orientation));
to:
Position = Vector3.Transform(CameraOffset, Matrix.CreateRotationY(MathHelper.PiOverTwo * (int)Orientation) * Matrix.CreateTranslation(FocusPoint));
Here, FocusPoint is the point in 3D that you rotate around (your world's center). And now you also know how to let your camera move around, if you call UpdateOrientation() in your Camera.Update() ;)
EDIT: So sorry, I totally missed the point that you are using 2D. I'll be back later to see if I can help :P
