ARCore ObjectRenderer: detect whether an object is in the camera frame

I am looking to show some helper text on the GLSurfaceView, but only once the object is actually visible in the camera's frame, not before. How can I detect whether the 3D object is visible in the camera's frame or not?

Rather than checking on every frame which objects are in the camera's view, it might be easier, depending on your particular application, to simply attach the helper text below or above the object it applies to, using the same anchor.
This also has the advantage that the text only appears centred when the object is centred - i.e. the text does not suddenly appear in full when just a corner of the object enters the camera view.
ViewRenderables allow you to render 'a 2D Android view in 3D space by attaching it to a Node with setRenderable(Renderable)'.
(https://developers.google.com/ar/reference/java/sceneform/reference/com/google/ar/sceneform/rendering/ViewRenderable)

Related

Swift SceneKit - I am trying to figure out if a Node object goes off the screen

Using SceneKit, I want the gray transparent box to disappear and show only the colored boxes when the user zooms in.
So I want to detect when the box's edges start to fall off the screen as I zoom, so that I can hide the gray box accordingly.
First thoughts, but there may be better solutions:
You could call projectPoint on the node's position and check against screen coordinates, doing the +/- math on the object's size and skipping Z. I "think" that would work.
You could do some physics-based collision detection against invisible box or plane geometries that act as your screen edges. That has some complexity if your view is changing, but testing would be easy - just leave the geometries visible until you get the behaviour you want, then hide them (isHidden = true).
isNode(_:insideFrustumOf:) - returns a Boolean for whether the node "might" be visible. I'm assuming "might" means it could still be obscured by other geometry, which in your case shouldn't matter. (Edit) On second thought, that doesn't solve your problem by itself, but I'll leave it in here for reference - see the sketch below.
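A minimal sketch of the first approach, assuming scnView is your SCNView; the function and node names are illustrative:

import SceneKit

// Project each corner of the node's bounding box into view coordinates and
// check that all of them land inside the view's bounds.
func isNodeFullyOnScreen(_ node: SCNNode, in scnView: SCNView) -> Bool {
    let (minB, maxB) = node.boundingBox
    let corners = [
        SCNVector3(minB.x, minB.y, minB.z), SCNVector3(maxB.x, minB.y, minB.z),
        SCNVector3(minB.x, maxB.y, minB.z), SCNVector3(maxB.x, maxB.y, minB.z),
        SCNVector3(minB.x, minB.y, maxB.z), SCNVector3(maxB.x, minB.y, maxB.z),
        SCNVector3(minB.x, maxB.y, maxB.z), SCNVector3(maxB.x, maxB.y, maxB.z),
    ]
    for corner in corners {
        // Local -> world space, then world -> 2D view coordinates (z is depth).
        let world = node.convertPosition(corner, to: nil)
        let projected = scnView.projectPoint(world)
        if projected.z < 0 || projected.z > 1 {
            return false // corner is behind the camera or beyond the far plane
        }
        let point = CGPoint(x: CGFloat(projected.x), y: CGFloat(projected.y))
        if !scnView.bounds.contains(point) {
            return false // at least one corner is off screen
        }
    }
    return true
}

As a cheap pre-check, the third suggestion's scnView.isNode(yourNode, insideFrustumOf: scnView.pointOfView!) can rule out nodes that are entirely off screen before running the corner test.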

How to place a 3D object on a horizontal plane automatically (without tapping) in iOS 12?

I'm working on an app with an AR feature. I want to be able to place a 3D model that I have on a horizontal plane that has been detected. So inside the renderer(_:didAdd:for:) delegate function, I added a node for my 3D model and set its position to the center of the plane anchor. However, when I run the app to test it, my model floats above the plane instead of standing directly on it. My guess is that some translation needs to be done on the coordinates, but I don't know the details. Can somebody give me some pointers?
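For reference, here is a minimal sketch of the setup described above, with one assumed fix: if the model's origin is not at its base, shift it down so the bottom of its bounding box rests on the plane. The SCNBox is a placeholder for the real model:

import ARKit
import SceneKit

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }

    // Placeholder for your loaded 3D model.
    let modelNode = SCNNode(geometry: SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0))

    // Position the model at the center of the detected plane
    // (coordinates are relative to the anchor's node).
    modelNode.position = SCNVector3(planeAnchor.center.x, 0, planeAnchor.center.z)

    // If the model's origin is not at its base it will appear to float:
    // shift it so the bottom of its bounding box sits at y = 0 on the plane.
    let (minBounds, _) = modelNode.boundingBox
    modelNode.position.y = -minBounds.y * modelNode.scale.y

    node.addChildNode(modelNode)
}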

Get frame of SceneKit node in app window?

I'm hoping to overlay a UIView (specifically a highlight box with some text, etc.) over an object rendered in SceneKit, but I'm encountering an issue: I don't know exactly where the object will be onscreen at the time.
Is there a way to get the CGRect frame of a SCNNode's current position within the SCNView? Even just a center point for the node would be helpful, but ideally it would give the whole frame, indicating how much vertical and horizontal space the geometry was taking up onscreen as well.
I've searched in the documentation and online for various references to "frames" and "bounds" relative to an SCNNode, but all I'm finding is stuff about the coordinate system within the scene.
Is there no way of translating a SCNNode's position in a scene into the frame coordinates of the view, or the app window?
Check the post about calculating the projected size of an object.
Performance-wise, it is much better to use SpriteKit to overlay 2D content in SceneKit. Check the overlaySKScene property of SCNSceneRenderer.
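A minimal sketch of the overlaySKScene approach, assuming scnView is your SCNView (note that SpriteKit's origin is the bottom-left corner of the view):

import SceneKit
import SpriteKit

// Build a SpriteKit scene the same size as the SceneKit view and install it
// as a 2D overlay on top of the 3D content.
func installOverlay(on scnView: SCNView) {
    let overlay = SKScene(size: scnView.bounds.size)
    overlay.isUserInteractionEnabled = false // let touches pass through to the SCNView

    let label = SKLabelNode(text: "Highlighted object")
    label.fontSize = 18
    label.position = CGPoint(x: overlay.size.width / 2, y: 40) // bottom-left origin
    overlay.addChild(label)

    scnView.overlaySKScene = overlay
}

To place overlay elements over a specific node, you can compute the node's screen rectangle by projecting its bounding-box corners with projectPoint, as in the earlier sketch.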

Get SCNMaterial's frame

I have an iPhone 3D model in my SceneKit application, and it has a material that I retrieve. It is called iScreen (the image that "is on" the screen of the iPhone):
var iScreen: SCNMaterial!
iScreen = iphone.geometry?.material(named: "Screen")
I decided to somehow project a UIWebView there instead of an image.
Therefore I need the frame / position / size of the screen area that iScreen "draws" on, to set the UIWebView's frame. Is that possible?
Note
Of course I tried position, frame, size, etc., but none of those are available on SCNMaterial. :/
You will first have to know which SCNGeometryElement the material applies to (an SCNGeometry is made of one or several SCNGeometryElements).
That geometry element is essentially a list of indices used to retrieve vertex data contained in the geometry's SCNGeometrySources. Here you are interested in the position source (it gives you the 3D coordinates of the vertices).
By iterating over these positions you'll be able to find the element's width and height, as sketched below.
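A minimal sketch of reading the position source, assuming vertex positions are stored as three Floats each (the common layout); for brevity it scans every vertex rather than only the indices of the element the "Screen" material applies to:

import SceneKit

// Compute the width and height spanned by a geometry's vertex positions by
// walking the raw bytes of its position source.
func boundingSize(of geometry: SCNGeometry) -> (width: Float, height: Float)? {
    guard let source = geometry.sources(for: .vertex).first else { return nil }

    let stride = source.dataStride   // bytes between consecutive vertices
    let offset = source.dataOffset   // bytes to the first component of a vertex
    let count = source.vectorCount

    var minX = Float.greatestFiniteMagnitude, maxX = -Float.greatestFiniteMagnitude
    var minY = Float.greatestFiniteMagnitude, maxY = -Float.greatestFiniteMagnitude

    source.data.withUnsafeBytes { (raw: UnsafeRawBufferPointer) in
        for i in 0..<count {
            let base = i * stride + offset
            let x = raw.load(fromByteOffset: base, as: Float.self)
            let y = raw.load(fromByteOffset: base + MemoryLayout<Float>.size, as: Float.self)
            minX = min(minX, x); maxX = max(maxX, x)
            minY = min(minY, y); maxY = max(maxY, y)
        }
    }
    return (maxX - minX, maxY - minY)
}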

Xcode custom overlay capture

I am working on an OCR recognition app and I want to give the user the option to manually select the area (while the camera is showing) on which to perform the OCR. The issue I face is that I draw a rectangle on the camera screen by simply overriding the - (void)drawRect:(CGRect)rect method; however, despite the rectangle being there, the entire captured area is processed rather than just the part inside the rectangle.
In other words, I do not want the entire picture to be sent for processing, but only the part of the captured image inside the rectangle. I have managed to draw the rectangle, but it has no functionality yet.
I hope this makes sense, since I have tried my best to explain it.
Thanks, and let me know.
Stream the camera's image to a UIScrollView using an AVCaptureOutput, then allow the user to pinch/pull/pan the camera into the proper place. Now use a UIGraphics image context to take a "screenshot" of this area and send that UIImage's cgImage in for processing.
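A minimal sketch of the cropping step, assuming you already have the captured UIImage and the selection rectangle converted into the image's coordinate space:

import UIKit

// Crop a UIImage to the user's selection rectangle. CGImage cropping works
// in pixel coordinates, so the rect is scaled by the image's scale factor.
func croppedImage(_ image: UIImage, to rect: CGRect) -> UIImage? {
    let pixelRect = CGRect(x: rect.origin.x * image.scale,
                           y: rect.origin.y * image.scale,
                           width: rect.size.width * image.scale,
                           height: rect.size.height * image.scale)
    guard let cgCrop = image.cgImage?.cropping(to: pixelRect) else { return nil }
    return UIImage(cgImage: cgCrop, scale: image.scale, orientation: image.imageOrientation)
}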
