Is there any property to get vertical plane detection? [duplicate] - ios

How is it possible to implement vertical plane detection (i.e. for walls)?
let configuration = ARWorldTrackingSessionConfiguration()
configuration.planeDetection = .horizontal //TODO

Edit: This is now supported as of ARKit 1.5 (iOS 11.3). Simply use .vertical. I have kept the previous post below for historical purposes.
TL;DR
Vertical plane detection is not (yet) a feature that exists in ARKit. The fact that .horizontal is an option value suggests that this feature is being worked on and might be added in the future; if planeDetection were just a Boolean, that would suggest the API was final.
Confirmation
This suspicion was confirmed by a conversation that I had with an Apple engineer at WWDC17.
Explanation
You could argue that implementing this would be difficult, since there are infinitely many more orientations for a vertical plane than for a horizontal one, but as rodamn said, this is probably not the case.
From rodamn’s comment:
At its simplest, a plane is defined by three coplanar points. You have a surface candidate once sufficient coplanar features are detected along a surface (vertical, horizontal, or at any arbitrary angle). It's just that the normals of horizontal planes will be along the up/down axis, while a vertical plane's normals will be parallel to the ground plane. The challenge is that unadorned drywall tends to generate few visual features, so plain walls may often go undetected. I strongly suspect that this is why the .vertical feature is not yet released.
However, there is a counter argument to this. See comments from rickster for more information.
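To make that geometric point concrete, here is a minimal sketch (my own illustration, not code from the thread) that classifies a plane defined by three points as horizontal or vertical by comparing its normal with ARKit's world up axis:
import simd

enum PlaneOrientation { case horizontal, vertical, other }

// Illustrative only: ARKit's world +Y axis is aligned with gravity.
func orientation(of p0: simd_float3, _ p1: simd_float3, _ p2: simd_float3,
                 tolerance: Float = 0.1) -> PlaneOrientation {
    let normal = simd_normalize(simd_cross(p1 - p0, p2 - p0))
    let alignmentWithUp = abs(simd_dot(normal, simd_float3(0, 1, 0)))
    if alignmentWithUp > 1 - tolerance { return .horizontal } // normal points up/down
    if alignmentWithUp < tolerance { return .vertical }       // normal parallel to the ground
    return .other
}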

Support for this is coming with iOS 11.3:
static var vertical: ARWorldTrackingConfiguration.PlaneDetection
The session detects surfaces that are parallel to gravity (regardless of other orientation).
https://developer.apple.com/documentation/arkit/arworldtrackingconfiguration.planedetection
https://developer.apple.com/documentation/arkit/arworldtrackingconfiguration.planedetection/2867271-vertical

Apple has released iOS 11.3, which features various updates for AR, including ARKit 1.5. In this update, ARKit gains the ability to recognize and place virtual objects on vertical surfaces like walls and doors.
Vertical plane detection is now supported in ARWorldTrackingConfiguration:
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = [.horizontal, .vertical]
sceneView.session.run(configuration)

As the iPhone X features a front-facing depth camera, my suspicion is that a rear-facing one will come with the next version, and perhaps the .vertical capability will be deferred until then.

In ARKit 1.0 there was just the .horizontal enum case for detecting horizontal surfaces like a table or a floor. In ARKit 1.5 and higher there are .horizontal and .vertical type properties of the PlaneDetection struct, which conforms to the OptionSet protocol.
To implement vertical plane detection in ARKit 2.0+ use the following code:
configuration.planeDetection = .vertical
Or you can use values for both types of detected planes:
private func configureSceneView(_ sceneView: ARSCNView) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = [.horizontal, .vertical] // BOTH TYPES
    configuration.isLightEstimationEnabled = true
    sceneView.session.run(configuration)
}
You can also add an extension to your class to handle the delegate calls:
extension ARSceneManager: ARSCNViewDelegate {
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
        print("Found plane: \(planeAnchor)")
    }
}
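If you need to tell the two plane types apart in that callback, ARPlaneAnchor exposes an alignment property (iOS 11.3+). A small sketch building on the delegate method above:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
    switch planeAnchor.alignment {
    case .horizontal:
        print("Found a horizontal plane (floor/table): \(planeAnchor)")
    case .vertical:
        print("Found a vertical plane (wall/door): \(planeAnchor)")
    default:
        break
    }
}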

I did it with Unity, but I needed to do the math myself.
I use Random Sample Consensus (RANSAC) to detect vertical planes from the point cloud returned by ARKit. It's essentially a loop that randomly picks 3 points to create a plane, counts the points that match it, and keeps the best candidate.
It works, but because ARKit can't return many feature points when the wall is a plain color, it fails in many situations.
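For illustration, here is a minimal sketch of that RANSAC idea in Swift (my own reconstruction, not the answerer's Unity code; the iteration count and inlier threshold are arbitrary assumptions). In an ARKit session the input points could come from ARFrame.rawFeaturePoints?.points:
import simd

struct FittedPlane {
    let normal: simd_float3   // unit normal
    let point: simd_float3    // a point on the plane
}

// Randomly pick 3 points, build a candidate plane, count inliers, keep the best fit.
func ransacPlane(points: [simd_float3],
                 iterations: Int = 200,
                 inlierThreshold: Float = 0.02) -> FittedPlane? {
    guard points.count >= 3 else { return nil }
    var best: (plane: FittedPlane, inliers: Int)?

    for _ in 0..<iterations {
        let sample = Array(points.shuffled().prefix(3))
        let n = simd_cross(sample[1] - sample[0], sample[2] - sample[0])
        guard simd_length(n) > 1e-6 else { continue } // skip collinear samples
        let plane = FittedPlane(normal: simd_normalize(n), point: sample[0])

        // Count points within the distance threshold of the candidate plane.
        let inliers = points.filter {
            abs(simd_dot($0 - plane.point, plane.normal)) < inlierThreshold
        }.count
        if inliers > (best?.inliers ?? 0) { best = (plane, inliers) }
    }
    return best?.plane
}

// A plane is "vertical" when its normal is roughly perpendicular to gravity
// (ARKit's world +Y axis).
func isVertical(_ plane: FittedPlane, tolerance: Float = 0.1) -> Bool {
    return abs(simd_dot(plane.normal, simd_float3(0, 1, 0))) < tolerance
}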

Apple is said to be working on extra AR capabilities for the new iPhone, i.e. extra sensors for the camera. Maybe this will become a feature once those device capabilities are known. Some speculation here: http://uk.businessinsider.com/apple-iphone-8-rumors-3d-laser-camera-augmented-reality-2017-7 and another source: https://www.fastcompany.com/40440342/apple-is-working-hard-on-an-iphone-8-rear-facing-3d-laser-for-ar-and-autofocus-source

Related

Using a measure device with apple xcode and overlay with object detecting to measure distance between two points

I'm trying to get some thoughts on how this might be possible with Apple's ARKit framework in Xcode. I would like to use the measuring approach with the touchesBegan and addDot functions with the AR camera. Then I would like to automate touchesBegan with an object-detection model from a Core ML model I have set up: when the model detects the image, it uses the bounding box as the end points. Is this possible? If it is, I would like some information on where to look for this type of code, or if it's simple, just leave it here. Thanks!
Here is some starter code:
// Get a measurement by touching the screen
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {}
func addDot() {}

// Getting the measurement between two points
// (the dots are SCNNodes positioned with SCNVector3)
func calculate() {
    let start = dotNodes[0]
    let end = dotNodes[1]
    print(start.position)
    print(end.position)
}

// Adding in the vision component from a Core ML model
// (VNCoreMLModel, VNCoreMLRequest)
// Get the bounding box from the picture
How would I automate this and tell Swift that the boxes are what I want the touches to be for my measurement, and then report the output? Thanks!
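One possible direction (a sketch under assumptions, not a tested solution): hit-test the center of each Vision bounding box against the AR scene, the same way a touch location would be, and feed the resulting world position into the measuring code:
import ARKit
import Vision

// Hypothetical helper: convert a Vision observation's normalized bounding box
// (origin at bottom-left) into a view point, hit-test it against the AR scene,
// and return a world-space position that could stand in for a manual tap.
// NOTE: this simple conversion ignores the aspect-fill cropping of the camera
// image, so treat it only as a starting point.
func worldPosition(for observation: VNRecognizedObjectObservation,
                   in sceneView: ARSCNView) -> SCNVector3? {
    let viewSize = sceneView.bounds.size
    let box = observation.boundingBox
    let center = CGPoint(x: box.midX * viewSize.width,
                         y: (1 - box.midY) * viewSize.height) // flip Y for UIKit

    // Hit-test against feature points, just as the touch-based addDot would.
    guard let result = sceneView.hitTest(center, types: .featurePoint).first else { return nil }
    let t = result.worldTransform.columns.3
    return SCNVector3(t.x, t.y, t.z)
}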

How to get projection, rotation and translation matrix from ARKit?

Hi, I need to take a set of photos with my iPhone and read the corresponding projection matrix and the rotation and translation matrices for post-processing. I have never used ARKit or programmed in Swift/Objective-C before. What is the best way to get started?
If you create a default AR project in Xcode then you'll be able to study the basics of getting an AR app running in Swift.
To read the camera parameters you need to implement this function:
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    let transform = frame.camera.transform   // not optional, so no guard needed
    let position = SCNVector3Make(transform.columns.3.x, transform.columns.3.y, transform.columns.3.z)
    let rotation = frame.camera.eulerAngles
    let projection = frame.camera.projectionMatrix
    print(position)
    print(rotation)
    print(projection)
    print("=================")
}
(I.e. copy/paste that code as another function in your ViewController class in the default app.) The rotation is given in Euler angles.
Given you've not used Swift before this may not be that helpful. But in theory it's all you need.
ARKit grabs frames at 60 frames per second (you can change this) - this function is called automatically for every new frame.
If you're using it to take photos you'll have to add more code to get the timing right, store the image data, and so on.
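If it helps, here is a rough sketch (my own, under the assumption that you only need the pixel buffer and the matrices; saving to disk is out of scope) of collecting everything from a frame for later post-processing:
import ARKit

// Everything needed for post-processing a single frame.
struct CapturedSample {
    let pixelBuffer: CVPixelBuffer        // raw camera image
    let projection: simd_float4x4         // projection matrix
    let cameraTransform: simd_float4x4    // camera pose (rotation + translation)
    let timestamp: TimeInterval
}

func sample(from frame: ARFrame) -> CapturedSample {
    return CapturedSample(pixelBuffer: frame.capturedImage,
                          projection: frame.camera.projectionMatrix,
                          cameraTransform: frame.camera.transform,
                          timestamp: frame.timestamp)
}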

Possibility to track multiple entries of same ARReferenceImage on iOS 11.3 beta, xCode 9.3 beta, ARKit 1.5

In the iOS 11.3 beta SDK (Xcode 9.3 beta),
ARKit 1.5 gives us the possibility to track reference images via the camera in the same way we did with ARToolKit or Vuforia.
The question is, can I track the count of entries of the exact same reference image and put some shape on the top of each one, as if they are separate items? The documentation states:
When you run a world-tracking AR session and specify ARReferenceImage objects for the session configuration's detectionImages property, ARKit searches for those images in the real-world environment. When the session recognizes an image, it automatically adds to its list of anchors an ARImageAnchor for each detected image.
I was able to feed my ARWorldTrackingConfiguration with three copies of exactly the same image (but rotated differently), yet it only found the first matching image (they are printed on a piece of paper in a matrix-like layout). Does this mean that I will only be able to track the first hit for each unique reference image?
If we have the list of anchors, could we possibly calculate whether a detection is not in the same exact spot and maybe force the session to search further?
Any help will be appreciated.
I'm pretty sure that it isn't possible to track multiple occurrences of the same image straight out of the box, since when an image is detected it is given an ARImageAnchor and this only happens once:
If your AR experience adds virtual content to the scene when an image is detected, that action will by default happen only once. To allow the user to experience that content again without restarting your app, call the session’s remove(anchor:) method to remove the corresponding ARImageAnchor:
After the anchor is removed, ARKit will add a new anchor the next time it detects the image.
Having said this, you could potentially track the number of times an image is shown by manually removing its ARImageAnchor after a certain period of time, using this built-in function:
func remove(anchor: ARAnchor)
I don't think this would work, however, if you had the same image within the frustum of the camera at the same time.
All things aside, hopefully this example may help you on your way...
Create two variables (one to store the detection count and one to store the anchors):
var anchors = [ARImageAnchor]()
var countOfDetectedImages = 0
Then:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    //1. If Our Target Image Has Been Detected Then Get The Corresponding Anchor
    guard let currentImageAnchor = anchor as? ARImageAnchor else { return }

    //2. Store The ARImageAnchor
    anchors.append(currentImageAnchor)

    //3. Get The Target's Name
    let name = currentImageAnchor.referenceImage.name!
    print("Image Name = \(name)")

    //4. Increase The Count If The Reference Image Is Called "target"
    if name == "target" {
        countOfDetectedImages += 1
        print("\(name) Has Been Detected \(countOfDetectedImages)")

        //5. Remove The Anchor After A Delay So It Can Be Detected Again
        DispatchQueue.main.asyncAfter(deadline: .now() + 3) {
            self.augmentedRealitySession.remove(anchor: anchor)
        }
    }
}
And for a total reset of the variables:
/// Removes All The ARImageAnchors & The Detected Count
func removeAllAnchorsAndResetCount() {
    countOfDetectedImages = 0
    anchors.forEach { augmentedRealitySession.remove(anchor: $0) }
    anchors.removeAll()
}
Possible Workaround:
FYI, the Apple documentation notes that ARReferenceImage has init methods for:
init(CGImage, orientation: CGImagePropertyOrientation, physicalWidth: CGFloat)
Creates a new reference image from a Core Graphics image object.
init(CVPixelBuffer, orientation: CGImagePropertyOrientation, physicalWidth: CGFloat)
Creates a new reference image from a Core Video pixel buffer.
So, perhaps (I haven't looked into this) you may be able to work with the orientation that way?
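A minimal sketch of that idea (untested; the image parameter, the 0.1 m physical width, and the names are placeholder assumptions): register the same artwork several times with different orientations so each rotated copy could match a differently named reference image:
import ARKit

// Hypothetical: build one ARReferenceImage per expected rotation of the artwork.
func detectionImages(for cgImage: CGImage, physicalWidth: CGFloat = 0.1) -> Set<ARReferenceImage> {
    let orientations: [CGImagePropertyOrientation] = [.up, .right, .down, .left]
    var images = Set<ARReferenceImage>()
    for (index, orientation) in orientations.enumerated() {
        let reference = ARReferenceImage(cgImage, orientation: orientation, physicalWidth: physicalWidth)
        reference.name = "target-\(index)"   // names are assumptions
        images.insert(reference)
    }
    return images
}

// Usage (cgImage and the 0.1 m width are placeholders):
// let configuration = ARWorldTrackingConfiguration()
// configuration.detectionImages = detectionImages(for: cgImage)
// sceneView.session.run(configuration)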

How to keep ARKit SCNNode in place

Hey, I'm trying to figure out how to keep a simple node in place as I walk around it in ARKit.
Code:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    if let planeAnchor = anchor as? ARPlaneAnchor {
        if planeDetected == false { // Bool only allows 1 plane to be added
            planeDetected = true
            self.addPlane(node: node, anchor: planeAnchor)
        }
    }
}
This adds the SCNNode
func addPlane(node: SCNNode, anchor: ARPlaneAnchor) {
    // We add the anchor plane here
    let showDebugVisuals = Bool()
    let plane = Plane(anchor, showDebugVisuals)
    planes[anchor] = plane
    node.addChildNode(plane)

    // We add our custom SCNNode here
    let scene = SCNScene(named: "art.scnassets/PlayerModel.scn")!
    let Body = scene.rootNode.childNode(withName: "Body", recursively: true)!
    Body.position = SCNVector3.positionFromTransform(anchor.transform)
    Body.movabilityHint = .movable
    wrapperNode.position = SCNVector3.positionFromTransform(anchor.transform)
    wrapperNode.addChildNode(Body)
    scnView.scene.rootNode.addChildNode(wrapperNode)
}
I've tried adding a Plane/Anchor node and putting the "Body" node in that, but it still moves. I thought maybe it has something to do with the update function:
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
}
Or most likely the position setting
wrapperNode.position = SCNVector3.positionFromTransform(anchor.transform)
I've looked through every source, project file, and video on the internet and nobody has a simple solution to this simple problem.
There are two kinds of "moving around" that could be happening here.
One is that ARKit is continuously refining its estimate of how the device's position in the real world maps to the abstract coordinate space you're placing virtual content in. For example, suppose you put a virtual object at (0, 0, -0.5), and then move your device to the left by exactly 10 cm. The virtual object will appear to be anchored in physical space only if ARKit tracks the move precisely. But visual-inertial odometry isn't an exact science, so it's possible that ARKit thinks you moved to the left by 10.5 cm — in that case, your virtual object will appear to "slip" to the right by 5 mm, even though its position in the ARKit/SceneKit coordinate space remains constant.
You can't really do much about this, other than hope Apple makes devices with better sensors, better cameras, or better CPUs/GPUs and improves the science of world tracking. (In the fullness of time, that's probably a safe bet, though that probably doesn't help with your current project.)
Since you're also dealing with plane detection, there's another wrinkle. ARKit is continuously refining its estimates of where a detected plane is. So, even though the real-world position of the plane isn't changing, its position in ARKit/SceneKit coordinate space is.
This kind of movement is generally a good thing: if you want your virtual object to appear anchored to the real-world surface, you want to be sure of where that surface is. You'll see some movement as plane detection gets more sure of the surface's position, but after a short time you should see less "slip" for plane-anchored virtual objects than for those that are just floating in world space.
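(Those ongoing refinements arrive through the renderer(_:didUpdate:for:) callback shown in the question. A minimal sketch of reacting to them, assuming the question's planes dictionary and a hypothetical update(with:) method on its Plane class:)
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor,
          let plane = planes[planeAnchor] else { return }
    // ARKit has refined its estimate; update the visualization to match.
    plane.update(with: planeAnchor)   // hypothetical method, not from the question
}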
In your code, though, you're not taking advantage of plane detection to make your custom content (from "PlayerModel.scn") stick to the plane anchor:
wrapperNode.position = SCNVector3.positionFromTransform(anchor.transform)
wrapperNode.addChildNode(Body)
scnView.scene.rootNode.addChildNode(wrapperNode)
This code uses the initial position of the plane anchor to position wrapperNode in world space (because you're making it a child of the root node). If you instead make wrapperNode a child of the plane anchor's node (the one you received in renderer(_:didAdd:for:)), it'll stay attached to the plane as ARKit refines its estimate of the plane's position. You'll get a little bit more movement initially, but as plane detection "settles", your virtual object will "slip" less.
(When you make the node a child of the plane, you don't need to set its position — a position of zero means it's right where the plane is. If anything, you need to set its position only relative to the plane — i.e. how far above/below/along it.)
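A minimal sketch of that change, reusing the names from the question's code (Plane, planes, and wrapperNode are the question's own; treat this as illustrative rather than a drop-in fix):
func addPlane(node: SCNNode, anchor: ARPlaneAnchor) {
    let plane = Plane(anchor, false)
    planes[anchor] = plane
    node.addChildNode(plane)

    let scene = SCNScene(named: "art.scnassets/PlayerModel.scn")!
    let body = scene.rootNode.childNode(withName: "Body", recursively: true)!
    wrapperNode.addChildNode(body)

    // Parent the wrapper to the anchor's node, not the scene root. Position
    // zero means "at the plane anchor", so no world-space transform needs to
    // be copied; ARKit keeps the anchor's node updated as its estimate improves.
    node.addChildNode(wrapperNode)
}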
To keep an SCNNode in place you can disable the scene view's plane detection once you get the result you want:
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = []
self.sceneView.session.run(configuration)
The reason for this is that ARKit constantly re-estimates the position of the detected plane, resulting in your SCNNode moving around.

Back face culling in SceneKit

I am currently trying to set up a rotating ball in SceneKit. I have created the ball and applied a texture to it:
ballMaterial.diffuse.contents = UIImage(named: ballTexture)
ballMaterial.doubleSided = true
ballGeometry.materials = [ballMaterial]
The current ballTexture is a semi-transparent texture, as I am hoping to see the back face roll around.
However, I get some strange culling where only half of the back-facing polygons are shown, even though the doubleSided property is set to true.
Any help would be appreciated, thanks.
This happens because the effects of transparency are draw-order dependent. SceneKit doesn't know to draw the back-facing polygons of the sphere before the front-facing ones. (In fact, it can't really do that without reorganizing the vertex buffers on the GPU for every frame, which would be a huge drag on render performance.)
The vertex layout for an SCNSphere has it set up like the lat/long grid on a globe: the triangles render in order along the meridians from 0° to 360°, so depending on how the sphere is oriented with respect to the camera, some of the faces on the far side of the sphere will render before the nearer ones.
To fix this, you need to force the rendering order — either directly, or through the depth buffer. Here's one way to do that, using a separate material for the inside surface to illustrate the difference.
// add two balls, one a child of the other
let node = SCNNode(geometry: SCNSphere(radius: 1))
let node2 = SCNNode(geometry: SCNSphere(radius: 1))
scene.rootNode.addChildNode(node)
node.addChildNode(node2)

// cull back-facing polygons on the first ball
// so we only see the outside
let mat1 = node.geometry!.firstMaterial!
mat1.cullMode = .back
mat1.transparent.contents = bwCheckers
// my "bwCheckers" uses black for transparent, white for opaque
mat1.transparencyMode = .rgbZero

// cull front-facing polygons on the second ball
// so we only see the inside
let mat2 = node2.geometry!.firstMaterial!
mat2.cullMode = .front
mat2.diffuse.contents = rgCheckers

// sphere normals face outward, so to make the inside respond
// to lighting, we need to invert them
let shader = "_geometry.normal *= -1.0;"
mat2.shaderModifiers = [.geometry: shader]
(The shader modifier bit at the end isn't required — it just makes the inside material get diffuse shading. You could just as well use a material property that doesn't involve normals or lighting, like emission, depending on the look you want.)
You can also do this using a single node with a double-sided material by disabling writesToDepthBuffer, but that could also lead to undesirable interactions with the rest of your scene content — you might also need to mess with renderingOrder in that case.
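A short sketch of that alternative, reusing the question's ballMaterial and ballGeometry plus a hypothetical ballNode that holds the sphere:
// Single double-sided material; skip depth writes so the far side is not
// discarded by the depth test, and pin the draw order explicitly.
ballMaterial.isDoubleSided = true
ballMaterial.writesToDepthBuffer = false
ballGeometry.materials = [ballMaterial]
ballNode.renderingOrder = 100   // draw after the rest of the scene (value is arbitrary)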
macOS 10.13 and iOS 11 added SCNTransparencyMode.dualLayer which as far as I can tell doesn't even require setting isDoubleSided to true (the documentation doesn't provide any information at all). So a simple solution that's working for me would be:
ballMaterial.diffuse.contents = UIImage(named: ballTexture)
ballMaterial.transparencyMode = .dualLayer
ballGeometry.materials = [ballMaterial]
