How to place 3D object on horizontal plane automatically (without tapping) in iOS 12?

I'm working on an app with an AR feature. I want to place a 3D model on a horizontal plane once one has been detected. So inside the renderer(_:didAdd:for:) delegate method, I add a node for my 3D model and set its position to the center of the plane anchor. However, when I run the app to test it, my model floats above the plane instead of standing directly on it. My guess is that some coordinate translation is needed, but I don't know the details. Can somebody give me some pointers?
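One common cause is that the model's pivot sits at its center, so placing that pivot on the plane leaves half the model above the surface. A minimal sketch of a fix under that assumption follows; the asset name "model.scn" is hypothetical, and the method belongs in your ARSCNViewDelegate:

// Inside your ARSCNViewDelegate (import ARKit).
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor,
          let modelScene = SCNScene(named: "model.scn"),          // hypothetical asset
          let modelNode = modelScene.rootNode.childNodes.first else { return }

    // Raise the model by the distance from its pivot to the bottom of its
    // bounding box, so the base (not the center) rests on the plane.
    let (minBound, _) = modelNode.boundingBox
    modelNode.position = SCNVector3(planeAnchor.center.x,
                                    -minBound.y * modelNode.scale.y,
                                    planeAnchor.center.z)
    node.addChildNode(modelNode)
}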

Related

ARKit + Core Location - points are not fixed in place

I'm working on developing an iOS AR application using ARKit + Core Location. The points, which are displayed on the map using coordinates, drift from place to place as I move, but I need them to stay in the same place.
Here you can see an example of what I mean:
https://drive.google.com/file/d/1DQkTJFc9aChtGrgPJSziZVMgJYXyH9Da/view?usp=sharing
Could you help me handle this issue? How can I keep points fixed in place when positioning them by coordinates? Any ideas?
Thanks.
It looks like you attach objects to planes. However, as you move, ARKit extends the existing planes, so if you put points at, say, the center of a plane, that center keeps moving. You need to recalculate the point's coordinates and reposition the objects accordingly.
The alternative is not to attach objects to planes (or position them relative to planes). If you need to "put" an object on a plane, the best way is to wait until the plane's orientation has stabilized (i.e., it no longer changes significantly as you move), select a point on the plane where you want to put your object, convert that point to world coordinates (so if the plane later changes size, your coordinate won't change at all), and finally add the object to the root node (or to another node that is not related to the plane).
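A minimal sketch of that conversion, assuming planeNode is the node ARKit created for the plane anchor, planeAnchor is that anchor, and sceneView is your ARSCNView:

// Pick a point on the plane (here its current center) and freeze it in world space.
let localPoint = SCNVector3(planeAnchor.center.x, 0, planeAnchor.center.z)
let worldPoint = planeNode.convertPosition(localPoint, to: nil)   // nil = world space

let marker = SCNNode(geometry: SCNSphere(radius: 0.02))
marker.position = worldPoint
sceneView.scene.rootNode.addChildNode(marker)   // parented to root, not to the plane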

ARCore ObjectRenderer detect object is in Camera frame

I am looking to show some helper text on the GLSurfaceView, but only when the object is visible in my camera's frame, not before. How can I detect whether the 3D object is visible in the camera's frame or not?
Rather than having to check every frame which objects are in the camera frame, it might be easier, depending on your particular application, to simply attach the helper text below or above the object it applies to, on the same anchor.
This would also have the advantage that the text would be centred only when the object is centred - i.e. you would not have the text suddenly fully appear when just a corner of the object is in the camera view.
ViewRenderables allow you to render "a 2D Android view in 3D space by attaching it to a Node with setRenderable(Renderable)" (https://developers.google.com/ar/reference/java/sceneform/reference/com/google/ar/sceneform/rendering/ViewRenderable).

ARKit: How to detect only Horizontal floor excluding obstacles

I am developing a horizontal plane detection application using ARKit, and it seems to be working fine. Once the floor is detected, I place an SCNPlane, 2 meters high and 2 meters wide, horizontally at the center point of the detected floor. That also works fine when the floor is empty. But if the floor has some objects on it (obstacles like furniture), the SCNPlane is placed on top of the object instead of on the floor (under the object). How can I detect only the horizontal floor, excluding the objects? Please guide me, thanks.
When you are scanning and the floor has been found, ARKit puts out a grid; normally people use some kind of grid image to display this, but some don't want to show it. Once the grid is placed, you place an SCNPlane, which I assume has a physics body, since you say it falls toward the floor / furniture?
You can do this in 3 ways (a code sketch of the first two follows the list):
1. Stop plane detection in the ARWorldTrackingConfiguration once the floor has been found.
2. Once the floor has been found, record its Y-position and constrain every object to fall toward that Y-position.
3. Check whether the Y-position of a new detection overlaps with the floor detection; if it does, it's the floor, otherwise ignore it. (I have not tested this one.)
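A minimal sketch of options 1 and 2, assuming sceneView is your ARSCNView and floorY is a stored Float property on your view controller:

// Inside your ARSCNViewDelegate (import ARKit).
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard anchor is ARPlaneAnchor else { return }

    // Option 2: remember the floor height so objects can be pinned to it.
    floorY = anchor.transform.columns.3.y

    // Option 1: rerun the session without plane detection so ARKit stops
    // adding or extending planes (existing anchors are kept by default).
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = []
    sceneView.session.run(configuration)
}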

Built in way to convert from screen coordinates to image coordinates?

I have an app where users can scale and position images in a number of ways. They can drag an entire layer of images around, scale that layer, drag around individual images inside the layer, and scale those individual images.
For some unrelated functionality, I need to generate the image coordinates that a user is pointing to on a given image (i.e. (0,0) for the top left and (width,height) for the bottom right), independent of how much it has been moved around and scaled. Is there a built-in method for transforming an absolute mouse position to its relative position on an image (and vice versa) that takes any scaling/panning into account? I have started building my own methods for this transformation, but before I got too deep I wanted to see if it was already built in somewhere and I'm just not seeing it.
Konva doesn't have such methods yet. You have to implement them manually.
You can subscribe to this related issue: https://github.com/konvajs/konva/issues/303
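The underlying math is just inverting the composed pan/scale transforms. Here is a minimal sketch of that idea, illustrated with Core Graphics affine transforms and made-up numbers (the same math ports directly to Konva's layer and node transforms):

import CoreGraphics

// Compose the layer transform with the image's own transform, then invert
// the result to map an absolute pointer position back into image space.
let layerTransform = CGAffineTransform(translationX: 120, y: 40).scaledBy(x: 2, y: 2)
let imageTransform = CGAffineTransform(translationX: 30, y: 10).scaledBy(x: 0.5, y: 0.5)
let screenFromImage = imageTransform.concatenating(layerTransform)

let pointerOnScreen = CGPoint(x: 200, y: 100)
let pointOnImage = pointerOnScreen.applying(screenFromImage.inverted())
// pointOnImage is now in image coordinates: (0,0) at the top left and
// (width,height) at the bottom right, regardless of panning or scaling.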

Interact with complex figure in iOS

I need to be able to interact with a representation of a cylinder that has many different parts in it. When the user taps one of the small rectangles, I need to display a popover related to that specific piece (form).
The next image demonstrates a realistic 3D approach. But, I repeat, I need to solve the problem; the 3D is NOT required (it would be really cool, though). A representation that meets the functional needs will suffice.
The info about the parts needed to make the drawing (size, position, etc.) comes from an API.
I don't really need it to be realistic. The simplest approximation would be to show the cylinder in a 2D representation, like a rectangle made out of interactable small rectangles.
So, as I see it, there are two opposite approaches: Realistic or Simplified.
Is there a way to achieve a nice solution in the middle? What libraries, components, or frameworks should I look into?
My research has led me to SceneKit, but I still don't know if I will be able to interact with it. Interaction is a very important part, as I need to display a popover when the user taps any small rectangle on the cylinder.
Thanks
You don't need any special frameworks to achieve an interaction like this. The effect can be achieved with standard UIKit, UIView, and a little trigonometry. You can actually draw exactly your example image using 2D math and drawing. My answer is not an exact formula; it involves thinking about how the shapes are defined and breaking the problem down into manageable steps.
A cylinder can be defined by two offset circles representing the end pieces, connected at their radii. I will use an orthographic projection meaning the cylinder doesn't appear smaller as the depth extends into the background (but you could adapt to perspective if needed). You could draw this with CoreGraphics in a UIView drawRect.
A square slice represents an angle piece of the circle, offset by an amount smaller than the length of the cylinder, but in the same direction, as in the following diagram (sorry for imprecise drawing).
This square slice you are interested in is the area outlined in solid red, outside the radius of the first circle, and inside the radius of the imaginary second circle (which is just offset from the first circle by whatever length you want the slice).
To draw this area you simply need to draw a path of the outline of each arc and connect the endpoints.
To check if a touch is inside one of these square slices (a code sketch follows this list):
Check if the angle of the touch point, measured from the circles' shared origin, lies between the two edge angles of the slice.
Check if the touch point is outside the radius of the inside circle.
Check if the touch point is inside the radius of the outside circle. (Note what this means if the circles are more than a radius apart.)
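In code, those three checks might look like this minimal sketch (all names are placeholders; it assumes startAngle and endAngle are already normalized to [0, 2π) with startAngle <= endAngle):

import UIKit

// Hit test for an annular sector ("square slice") centered at `origin`,
// between `innerRadius` and `outerRadius`, spanning startAngle...endAngle.
func sliceContains(_ point: CGPoint, origin: CGPoint,
                   innerRadius: CGFloat, outerRadius: CGFloat,
                   startAngle: CGFloat, endAngle: CGFloat) -> Bool {
    let dx = point.x - origin.x
    let dy = point.y - origin.y
    let distance = hypot(dx, dy)

    // Checks 2 and 3: outside the inner circle, inside the outer circle.
    guard distance >= innerRadius, distance <= outerRadius else { return false }

    // Check 1: the touch angle lies between the slice's two edge angles.
    var angle = atan2(dy, dx)
    if angle < 0 { angle += 2 * .pi }
    return angle >= startAngle && angle <= endAngle
}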
To find a point to display the popover you could average the end points on the slice or find the middle angle between the two edges and offset by half the distance.
Theoretically, doing this in SceneKit with either SpriteKit or UIKit popovers is ideal.
However, SceneKit (and SpriteKit) seem to be in a state of flux, wherein nobody from Apple is communicating with users about the raft of issues folks are currently having with both. Going from a relatively stable and performant SpriteKit in iOS 8.4 to a lot of lost performance in iOS 9 seems to be a common experience. SceneKit simply doesn't seem finished, and the documentation and community are both nearly non-existent as a result.
That being said... the theory is this:
Material IDs are what's used in traditional 3D apps to define areas of an object that have different materials. Somehow these Material IDs are called "elements" in SceneKit. I haven't been able to find much more about this.
It should be possible to detect the "element" that's underneath a touch on an object and respond accordingly. You should even be able to change the state/nature of the material on that element to indicate that it's the currently selected one.
When wanting a smooth, well rounded cylinder as per your example, start with a cylinder that's made of only enough segments to describe/define the material IDs you need for your "rectangular" sections to be touched.
Later you can add a smoothing operation to the cylinder to make it round, and all the extra smoothing geometry in each quadrant of unique material ID should be responsive, regardless of how you add this extra detail to smooth the presentation of the cylinder.
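A minimal sketch of what that element-based picking might look like, assuming sceneView is an SCNView showing the segmented cylinder and a tap gesture recognizer is wired to handleTap:

// SCNHitTestResult.geometryIndex is the index of the SCNGeometryElement
// (the "element" / material ID) that the search ray intersected.
@objc func handleTap(_ gesture: UITapGestureRecognizer) {
    let location = gesture.location(in: sceneView)
    guard let hit = sceneView.hitTest(location, options: nil).first else { return }

    let elementIndex = hit.geometryIndex
    // Indicate selection, e.g. by swapping that element's material
    // (assumes one material per element, in element order)...
    hit.node.geometry?.materials[elementIndex].diffuse.contents = UIColor.red
    // ...and present the popover for the corresponding part here.
}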
Idea for the "Simplified" version:
If this representation is okay, you can use a UICollectionView.
Each cell can be given a defined size via collectionView:layout:sizeForItemAtIndexPath:, so each cell of the collection is a small rectangle representing a touchable part of the cylinder.
Then use collectionView:didSelectItemAtIndexPath: to get the touch.
This will help you display the popover in the right place:
CGRect rect = [collectionView layoutAttributesForItemAtIndexPath:indexPath].frame;
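In Swift, the selection handler and popover anchoring might look like this sketch (PartDetailViewController is a hypothetical content controller):

func collectionView(_ collectionView: UICollectionView, didSelectItemAt indexPath: IndexPath) {
    guard let attributes = collectionView.layoutAttributesForItem(at: indexPath) else { return }

    let detail = PartDetailViewController()                      // hypothetical
    detail.modalPresentationStyle = .popover
    detail.popoverPresentationController?.sourceView = collectionView
    detail.popoverPresentationController?.sourceRect = attributes.frame   // anchor on the tapped cell
    present(detail, animated: true)
}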
Finally, you can choose the appropriate popover (if the app has to work on iPhone) here:
https://www.cocoacontrols.com/search?q=popover
Not perfect, but I think this is efficient!
Yes, SceneKit.
When the user performs a touch event, you already know the 2D coordinate on screen, so your only decision is whether to pop over a view or not; you could make that decision even if no 3D model existed.
First, we can logically split the requirement into two pieces: determining which segment is touched, and showing the right "color" on each segment.
I think the use of the 3D model, if I understand you correctly, is to determine which piece of data to show. In that case, SCNView's hit-test method will do most of the work for you. Perform a hit test, take the hit node and the hit's local 3D coordinate on that node, and you can then calculate which segment was hit by the touch and make the decision.
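A minimal sketch of that idea, assuming the cylinder is split into segmentCount equal angular segments around its vertical Y axis and sceneView is your SCNView:

let segmentCount = 12   // assumption: number of touchable rectangles per ring

@objc func handleTap(_ gesture: UITapGestureRecognizer) {
    let location = gesture.location(in: sceneView)
    guard let hit = sceneView.hitTest(location, options: nil).first else { return }

    // The hit's local coordinates are in the cylinder's own space, so the
    // angle around the Y axis tells us which segment was touched.
    let local = hit.localCoordinates
    var angle = atan2(local.z, local.x)
    if angle < 0 { angle += 2 * .pi }

    let segment = Int(angle / (2 * .pi) * Float(segmentCount))
    // Show the popover for `segment` here.
}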
Now, how to draw the surface of the cylinder would be the only question left, right? There are various ways to do it; for example, paint each image you need programmatically and attach it to the cylinder's material, or keep your image files on disk and use them as the cylinder's material ...
I think that basically solves the problem.
