The object gets blocked by the background in Spark AR

I have a Spark AR effect with a custom background (which is masked out where a person is detected).
I also have a 3D object attached in front of the user's forehead.
The problem is that the object gets hidden when the user moves slightly farther from the camera, presumably because the view gets blocked by the custom background, which ends up closer to the camera than the object.
Is there a way to keep the object fully visible, no matter how far the user moves from the camera?
The only workaround I can come up with is to prevent the z coordinate from going below zero, but that's far from ideal, because I need to keep the object at a fixed distance from the forehead.

You need to uncheck "Use Depth Test" and "Write to Depth Test" in the material for the object you would like to remain visible.
In the Scene hierarchy, move the object above your canvas/background.
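Those two checkboxes live in the material's Advanced Render Options in Spark AR Studio, so there is nothing to type there. For comparison only, here is the same depth-test idea expressed in SceneKit (which comes up in the related questions below); the sphere standing in for the forehead object is hypothetical:

    import SceneKit

    // Comparison sketch, not Spark AR code: the equivalent depth flags in SceneKit.
    let material = SCNMaterial()
    material.readsFromDepthBuffer = false   // ~ unchecking "Use Depth Test"
    material.writesToDepthBuffer = false    // ~ unchecking the depth-write option

    let foreheadObject = SCNNode(geometry: SCNSphere(radius: 0.05)) // hypothetical object
    foreheadObject.geometry?.materials = [material]
    // A higher rendering order draws the node later, much like moving it above
    // the background in the scene hierarchy.
    foreheadObject.renderingOrder = 100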

In the material's properties you will see Advanced Render Options; expand them and uncheck "Use Depth Test". Also make sure your rectangle comes before the background rectangle in the hierarchy.

Related

Swift SceneKit - I am trying to figure out if a Node object goes off the screen

Using SceneKit
I want to make the gray transparent box disappear and show only the colored boxes when the user zooms in.
So I want to detect when that box's edges start to fall off the screen as I zoom, so I can hide the gray box accordingly.
First thoughts, but there may be better solutions:
You could do a projectPoint on the node and check against screen coordinates, do the +/- math on the object's size, and skip Z. I "think" that would work (see the sketch after this list).
You could do some physics-based collision detection against invisible box or plane geometries that act as your screen edges. This has some complexity if your view is changing, but testing would be easy: just leave the geometries visible until you get the behavior you want, then set isHidden = true.
isNode(_:insideFrustumOf:) returns a Boolean for whether the node "might" be visible. I'm assuming "might" means obscured by other geometry, which in your case shouldn't matter. (Edit: on second thought, that doesn't solve your problem, but I'll leave it here for reference.)
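A minimal sketch of the first option, assuming an SCNView named sceneView (the helper function is mine, not part of SceneKit): project every corner of the node's bounding box to screen space and check that each lands inside the view's bounds.

    import SceneKit

    // Returns true only if the node's whole bounding box projects inside the
    // view's bounds, i.e. the node is fully on screen.
    func isFullyOnScreen(_ node: SCNNode, in sceneView: SCNView) -> Bool {
        let (lo, hi) = node.boundingBox
        let corners = [
            SCNVector3(lo.x, lo.y, lo.z), SCNVector3(hi.x, lo.y, lo.z),
            SCNVector3(lo.x, hi.y, lo.z), SCNVector3(hi.x, hi.y, lo.z),
            SCNVector3(lo.x, lo.y, hi.z), SCNVector3(hi.x, lo.y, hi.z),
            SCNVector3(lo.x, hi.y, hi.z), SCNVector3(hi.x, hi.y, hi.z)
        ]
        return corners.allSatisfy { corner in
            // Local space -> world space -> screen space.
            let world = node.convertPosition(corner, to: nil)
            let screen = sceneView.projectPoint(world)
            // z in (0, 1) means the point lies between the near and far planes.
            return screen.z > 0 && screen.z < 1
                && sceneView.bounds.contains(CGPoint(x: CGFloat(screen.x),
                                                     y: CGFloat(screen.y)))
        }
    }

Hide the gray box whenever this returns false, e.g. from the render-loop delegate.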

ARCore ObjectRenderer detect object is in Camera frame

I am looking to show some helper text on the GLSurfaceView, but I want to show it only once the object is actually in my camera's frame, not before. How can I detect whether the 3D object is visible in the camera's frame?
Rather than having to check every frame which objects are in the camera's view, it might be easier, depending on your particular application, to simply attach the helper text below or above the object it applies to, on the same anchor.
This would also have the advantage that the text would be centred only when the object is centred - i.e. you would not have the text suddenly fully appear when just a corner of the object is in the camera view.
ViewRenderables allow you to render 'a 2D Android view in 3D space by attaching it to a Node with setRenderable(Renderable)'.
(https://developers.google.com/ar/reference/java/sceneform/reference/com/google/ar/sceneform/rendering/ViewRenderable)

DragAlongSurface Script moves object back to initial position after drag is finished

Currently, I have a Plane on the "Surface" layer with the DragAlongSurface script attached. I have the table GameObject from the example, and it also has the surface controller attached. When I try to move the object, it moves to the desired location but jumps back to its initial position once the drag is over. Please suggest a way to make the object stay at its final position.
It seems like you have things swapped. You'll want XRSurfaceController attached to the Plane (which should be on the "Surface" layer). DragAlongSurface should be attached to the object you wish to drag around (the table, which should NOT be on the "Surface" layer).

iOS: draw an interactive map

I need to draw an interactive map for an iOS application. For example, it could be a map of the US showing the states. It needs to show all the states in different colors (I'll get these from a delegate method, colorForStateNo:). It needs to let the user select a state by touching it, at which point the color changes and a "stick out" effect is shown, maybe even an animated symbol appearing over the selected state. Also, the color of some states will need to change in response to external events. This color change should be animated: a circle starting at the middle of the state and growing toward the edges, changing the color from the current one to the new one inside the circle.
Can this be done easily in Core Graphics, or is it only possible with OpenGL ES? What is the easiest way to do this? I have worked with Core Graphics and it doesn't seem to handle animation very well; I just redrew the entire screen whenever something needed to move... Also, how could I use an external image to draw the map? Setting up lots of drawLineToPoint calls seems like a lot of work to draw just one state, let alone the whole map...
You could create the map using vector graphics and then have that converted to OpenGL calls.
Displaying SVG in OpenGL without intermediate raster
EDIT: The link applies to C++, but you may be able to find a similar solution.
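If you stay on the Core Graphics/Core Animation side instead of going through OpenGL, vector paths also give you cheap hit-testing and animated color changes. A minimal sketch of that idea (the statePaths data source and the highlight color are hypothetical, and this is an alternative to the SVG-to-OpenGL route above, not the same thing):

    import UIKit

    final class StateMapView: UIView {
        // One vector outline per state, keyed by name (hypothetical data source).
        var statePaths: [String: UIBezierPath] = [:]
        private var stateLayers: [String: CAShapeLayer] = [:]

        // One CAShapeLayer per state, so each fill can change and animate
        // individually instead of redrawing the whole screen.
        func buildLayers(colorForState: (String) -> UIColor) {
            for (name, path) in statePaths {
                let shape = CAShapeLayer()
                shape.path = path.cgPath
                shape.fillColor = colorForState(name).cgColor
                layer.addSublayer(shape)
                stateLayers[name] = shape
            }
        }

        override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
            guard let point = touches.first?.location(in: self) else { return }
            // Hit-test the vector paths to find the touched state.
            for (name, path) in statePaths where path.contains(point) {
                // fillColor is animatable; setting it on a standalone layer
                // animates implicitly.
                stateLayers[name]?.fillColor = UIColor.systemYellow.cgColor
            }
        }
    }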

Xcode custom overlay capture

I am working on an OCR recognition app, and I want to give the user the option to manually select the area (during camera capture) on which to perform the OCR. The issue I face is that I draw a rectangle on the camera screen by simply overriding the - (void)drawRect:(CGRect)rect method; however, despite the rectangle being there, the capture covers the entire area rather than just the part within the rectangle.
In other words, I do not want the entire picture to be sent for processing, but rather only the part of the captured image inside the rectangle. I have managed to draw the rectangle, but it has no functionality yet.
I hope this makes sense, since I have tried my best to explain it.
Thanks, and let me know.
Stream the camera's image to a UIScrollView using an AVCaptureOutput, then allow the user to pinch/pull/pan the preview into the proper place. Now use a UIGraphics image context to take a "screenshot" of this area and send that UIImage's cgImage in for processing.
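A minimal sketch of the final cropping step (the function is mine, not a UIKit API). It assumes the selection rectangle has already been converted from overlay coordinates into image-pixel coordinates; that conversion (scale factor, scroll offset) is omitted:

    import UIKit

    // Keep only the region of interest from the full capture before OCR.
    func cropForOCR(_ image: UIImage, to rectInImage: CGRect) -> UIImage? {
        guard let cropped = image.cgImage?.cropping(to: rectInImage) else { return nil }
        return UIImage(cgImage: cropped,
                       scale: image.scale,
                       orientation: image.imageOrientation)
    }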
