iOS ARKit 4 - how to fix white balance for the ARCamera?

I have an app that generates a point cloud from multiple ARFrames. It appears that the camera used to capture the images has dynamic (auto) white balance and can change it in the middle of a capture session.
How do I configure ARView, ARSession, or ARCamera to lock the white balance for the duration of the session?
I have access to the following parameters, but I do not see anything related to white balance.
var arView: ARView!
let session: ARSession = arView.session
var sampleFrame: ARFrame = session.currentFrame!
let camera = sampleFrame.camera
func configureSessionAndRun() {
arView.automaticallyConfigureSession = false
let configuration = ARWorldTrackingConfiguration()
configuration.sceneReconstruction = .meshWithClassification
configuration.frameSemantics = .smoothedSceneDepth
configuration.planeDetection = [.horizontal, .vertical]
configuration.environmentTexturing = .automatic
arView.session.run(configuration)
}

The only two related properties I can find are on the frame's ARCamera, and they are get-only, not settable:
let frame = arView.session.currentFrame
frame?.camera.exposureDuration // { get }
frame?.camera.exposureOffset // { get }
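As far as I can tell, neither ARView, ARSession, nor ARCamera exposes a white-balance control in ARKit 4. If a newer deployment target is an option, iOS 16 added ARConfiguration.configurableCaptureDeviceForPrimaryCamera, which hands back the AVCaptureDevice that ARKit is using, so it can be locked like any other capture device. A minimal sketch, assuming iOS 16+ is available (this goes beyond the ARKit 4 APIs shown in the question):
func configureSessionAndRun() {
    arView.automaticallyConfigureSession = false
    let configuration = ARWorldTrackingConfiguration()
    configuration.sceneReconstruction = .meshWithClassification
    configuration.frameSemantics = .smoothedSceneDepth
    configuration.planeDetection = [.horizontal, .vertical]
    configuration.environmentTexturing = .automatic
    arView.session.run(configuration)

    // iOS 16+ only: grab the capture device ARKit is using and lock its white balance.
    if let device = ARWorldTrackingConfiguration.configurableCaptureDeviceForPrimaryCamera {
        do {
            try device.lockForConfiguration()
            if device.isWhiteBalanceModeSupported(.locked) {
                device.whiteBalanceMode = .locked
            }
            device.unlockForConfiguration()
        } catch {
            print("Could not lock the capture device for configuration: \(error)")
        }
    }
}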

Related

ARKit does not recognize reference images

I'm trying to place a 3D model on top of a recognized image with ARKit and RealityKit - all programmatically. Before I start the ARView I'm downloading the model I want to show when the reference image is detected.
This is my current setup:
override func viewDidLoad() {
super.viewDidLoad()
arView.session.delegate = self
// Check if the device supports the AR experience
if (!ARConfiguration.isSupported) {
TLogger.shared.error_objc("Device does not support Augmented Reality")
return
}
guard let qrCodeReferenceImage = UIImage(named: "QRCode") else { return }
let detectionImages: Set<ARReferenceImage> = convertToReferenceImages([qrCodeReferenceImage])
let configuration = ARWorldTrackingConfiguration()
configuration.detectionImages = detectionImages
arView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
I use the ARSessionDelegate to get notified when a new image anchor is added, which means the reference image was detected:
func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
print("Hello")
for anchor in anchors {
guard let imageAnchor = anchor as? ARImageAnchor else { return }
let referenceImage = imageAnchor.referenceImage
addEntity(self.localModelPath!)
}
}
However, the delegate method never gets called, while other delegate functions like func session(ARSession, didUpdate: ARFrame) are called, so I assume the session just doesn't detect the image. The image resolution is good and the printed image is big, so it should definitely be recognized by the ARSession. I also checked that the image was found before adding it to the configuration.
Can anyone lead me in the right direction here?
It looks like you have your configuration set up correctly. Your delegate function should be called when the reference image is recognized. Make sure your configuration isn't overwritten anywhere else in your code.
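Another thing worth double-checking is how the ARReferenceImage set is built. convertToReferenceImages isn't shown in the question, but ARReferenceImage needs a physical width in meters, and a badly wrong value (or a missing name) is a common reason detection never fires. A hypothetical sketch of such a helper, where the 10 cm printed width is an assumption:
func convertToReferenceImages(_ images: [UIImage]) -> Set<ARReferenceImage> {
    var referenceImages = Set<ARReferenceImage>()
    for image in images {
        guard let cgImage = image.cgImage else { continue }
        // physicalWidth is the printed size in meters - 0.1 m is an assumed value.
        let reference = ARReferenceImage(cgImage, orientation: .up, physicalWidth: 0.1)
        reference.name = "QRCode"
        referenceImages.insert(reference)
    }
    return referenceImages
}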

ARKit switch between ARWorldTrackingConfiguration and ARFaceTrackingConfiguration - Rear and Front Camera

In my project I want to switch between ARWorldTrackingConfiguration and ARFaceTrackingConfiguration.
I use two different types of view: an ARSCNView for the rear camera and an ARView for the face tracking.
First I start the ARSCNView, and afterwards, if the user wants, he can switch to face tracking.
I set up my view controller like this:
sceneView.delegate = self
sceneView.session.delegate = self
// Set up scene content.
setupCamera()
sceneView.scene.rootNode.addChildNode(focusSquare)
let configurationBack = ARWorldTrackingConfiguration()
configurationBack.isAutoFocusEnabled = true
configurationBack.planeDetection = [.horizontal, .vertical]
sceneView.session.run(configurationBack, options: [.resetTracking, .removeExistingAnchors])
And I load my Object (.scn)
When I want to switch to the front camera and move to an ARView, I do this:
let configurationFront = ARFaceTrackingConfiguration()
// here I stop my ARSCNView session
self.sceneView.session.pause()
self.myArView = ARView.init(frame: self.sceneView.frame)
self.myArView!.session.run(configurationFront)
self.myArView!.session.delegate = self
self.view.insertSubview(self.myArView!, aboveSubview: self.sceneView)
And then I load my .rcproject.
My problem begins here, when I try to return to the back camera and switch to ARWorldTracking again.
This is my method:
// remove my ARView with face tracking
self.myArView?.session.pause()
UIView.animate(withDuration: 0.2, animations: {
self.myArView?.alpha = 0
}) { _ in
self.myArView?.removeFromSuperview()
self.myArView = nil
}
// here I restart the initial ARSCNView
let configurationBack = ARWorldTrackingConfiguration()
configurationBack.isAutoFocusEnabled = true
configurationBack.planeDetection = [.horizontal, .vertical]
session.run(configurationBack, options: [.resetTracking, .removeExistingAnchors])
When I switch back to the rear camera, the sensor doesn't track the planes correctly.
How can I fix that? In other words, how can I switch correctly between ARWorldTrackingConfiguration and ARFaceTrackingConfiguration?
Thanks in advance
Make sure to also remove all of the nodes added to the scene when you pause the session. Add the code below right after you pause the session with self.sceneView.session.pause():
self.sceneView.scene.rootNode.enumerateChildNodes { (childNode, _) in
childNode.removeFromParentNode()
}
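Putting it together, a minimal sketch of the switch to face tracking with that cleanup applied, assuming the sceneView and myArView properties from the question:
func switchToFaceTracking() {
    // Pause the world-tracking session...
    sceneView.session.pause()

    // ...and remove everything that was added to its scene, as suggested above.
    sceneView.scene.rootNode.enumerateChildNodes { childNode, _ in
        childNode.removeFromParentNode()
    }

    // Bring up a fresh ARView for face tracking.
    let configurationFront = ARFaceTrackingConfiguration()
    myArView = ARView(frame: sceneView.frame)
    myArView!.session.delegate = self
    myArView!.session.run(configurationFront)
    view.insertSubview(myArView!, aboveSubview: sceneView)
}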

Show bounding box while detecting object using ARKit 2

I have scanned and trained multiple real-world objects. I do have the ARReferenceObjects, and the app detects them fine.
The issue I'm facing is that when an object does not have distinct, vibrant features, it takes a few seconds to return a detection result, which I can understand. Now, I want the app to show a bounding box and an activity indicator on top of the object while it is trying to detect it.
I do not see any information regarding this. Also, is there any way to get the time when detection starts, or the confidence percentage of the detected object?
Any help is appreciated.
It is possible to show a boundingBox for the ARReferenceObject prior to it being detected, although I am not sure why you would want to do that (in advance, anyway).
For example, assuming your referenceObject is on a horizontal surface, you would first need to place your estimated bounding box on the plane (or use some other method to place it in advance), and in the time it takes to detect the ARPlaneAnchor and place the boundingBox, your model would most likely already have been detected.
Possible Approach:
As you are no doubt aware, an ARReferenceObject has center, extent and scale properties, as well as a set of rawFeaturePoints associated with the object.
As such, based on some of the sample code from Apple's Scanning & Detecting 3D Objects project, we can create our own SCNNode subclass which displays a bounding box of the approximate size of the locally stored ARReferenceObject prior to it being detected.
Note that you will need to copy the 'wireframe_shader' from the Apple sample code for the boundingBox to render as a transparent wireframe:
import Foundation
import ARKit
import SceneKit
class BlackMirrorzBoundingBox: SCNNode {
//-----------------------
// MARK: - Initialization
//-----------------------
/// Creates A WireFrame Bounding Box From The Data Retrieved From The ARReferenceObject
///
/// - Parameters:
/// - points: [float3]
/// - scale: CGFloat
/// - color: UIColor
init(points: [float3], scale: CGFloat, color: UIColor = .cyan) {
super.init()
var localMin = float3(Float.greatestFiniteMagnitude)
var localMax = float3(-Float.greatestFiniteMagnitude)
for point in points {
localMin = min(localMin, point)
localMax = max(localMax, point)
}
self.simdPosition += (localMax + localMin) / 2
let extent = localMax - localMin
let wireFrame = SCNNode()
let box = SCNBox(width: CGFloat(extent.x), height: CGFloat(extent.y), length: CGFloat(extent.z), chamferRadius: 0)
box.firstMaterial?.diffuse.contents = color
box.firstMaterial?.isDoubleSided = true
wireFrame.geometry = box
setupShaderOnGeometry(box)
self.addChildNode(wireFrame)
}
required init?(coder aDecoder: NSCoder) { fatalError("init(coder:) Has Not Been Implemented") }
//----------------
// MARK: - Shaders
//----------------
/// Sets A Shader To Render The Cube As A Wireframe
///
/// - Parameter geometry: SCNBox
func setupShaderOnGeometry(_ geometry: SCNBox) {
guard let path = Bundle.main.path(forResource: "wireframe_shader", ofType: "metal", inDirectory: "art.scnassets"),
let shader = try? String(contentsOfFile: path, encoding: .utf8) else {
return
}
geometry.firstMaterial?.shaderModifiers = [.surface: shader]
}
}
To display the bounding box you would then do something like the following, noting that in my example I have the following variables:
@IBOutlet var augmentedRealityView: ARSCNView!
let configuration = ARWorldTrackingConfiguration()
let augmentedRealitySession = ARSession()
To display the boundingBox prior to detection of the actual object itself, you would call the func loadBoundingBox in viewDidLoad, e.g.:
/// Creates A Bounding Box From The Data Available From The ARObject In The Local Bundle
func loadBoundingBox(){
//1. Run Our Session
augmentedRealityView.session = augmentedRealitySession
augmentedRealityView.delegate = self
//2. Load A Single ARReferenceObject From The Main Bundle
if let objectURL = Bundle.main.url(forResource: "fox", withExtension: "arobject"){
do{
var referenceObjects = [ARReferenceObject]()
let object = try ARReferenceObject(archiveURL: objectURL)
//3. Log Its Properties
print("""
Object Center = \(object.center)
Object Extent = \(object.extent)
Object Scale = \(object.scale)
""")
//4. Get Its Scale
let scale = CGFloat(object.scale.x)
//5. Create A Bounding Box
let boundingBoxNode = BlackMirrorzBoundingBox(points: object.rawFeaturePoints.points, scale: scale)
//6. Add It To The ARSCNView
self.augmentedRealityView.scene.rootNode.addChildNode(boundingBoxNode)
//7. Position It 0.5m Away From The Camera
boundingBoxNode.position = SCNVector3(0, -0.5, -0.5)
//8. Add It To The Configuration
referenceObjects.append(object)
configuration.detectionObjects = Set(referenceObjects)
}catch{
print(error)
}
}
//9. Run The Session
augmentedRealitySession.run(configuration, options: [.resetTracking, .removeExistingAnchors])
augmentedRealityView.automaticallyUpdatesLighting = true
}
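For completeness, the call site is just viewDidLoad, e.g.:
override func viewDidLoad() {
    super.viewDidLoad()
    loadBoundingBox()
}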
The loadBoundingBox example above simply creates a boundingBox from the not-yet-detected ARReferenceObject and places it 0.5m down and 0.5m away from the camera, which yields something like this:
You would of course need to handle the initial position of the boundingBox, as well as how to handle the removal of the boundingBox 'indicator'.
The method below simply shows a boundingBox when the actual object is detected, e.g.:
//--------------------------
// MARK: - ARSCNViewDelegate
//--------------------------
extension ViewController: ARSCNViewDelegate{
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
//1. Check We Have A Valid ARObject Anchor
guard let objectAnchor = anchor as? ARObjectAnchor else { return }
//2. Create A Bounding Box Around Our Object
let scale = CGFloat(objectAnchor.referenceObject.scale.x)
let boundingBoxNode = BlackMirrorzBoundingBox(points: objectAnchor.referenceObject.rawFeaturePoints.points, scale: scale)
node.addChildNode(boundingBoxNode)
}
}
Which yields something like this:
In regard to the detection timer, there is an example in the Apple Sample Code, which displays how long it takes to detect the model.
In its crudest form (not accounting for milliseconds) you can do something like this:
First, create a Timer and a var to store the detection time, e.g.:
var detectionTimer = Timer()
var detectionTime: Int = 0
Then, when you run your ARSession configuration, initialise the timer, e.g.:
/// Starts The Detection Timer
func startDetectionTimer(){
detectionTimer = Timer.scheduledTimer(timeInterval: 1.0, target: self, selector: #selector(logDetectionTime), userInfo: nil, repeats: true)
}
/// Increments The Total Detection Time Before The ARReference Object Is Detected
@objc func logDetectionTime(){
detectionTime += 1
}
Then, when an ARReferenceObject has been detected, invalidate the timer and log the time, e.g.:
//--------------------------
// MARK: - ARSCNViewDelegate
//--------------------------
extension ViewController: ARSCNViewDelegate{
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
//1. Check We Have A Valid ARObject Anchor
guard let _ = anchor as? ARObjectAnchor else { return }
//2. Stop The Timer
detectionTimer.invalidate()
//3. Log The Detection Time
print("Total Detection Time = \(detectionTime) Seconds")
//4. Reset The Detection Time
detectionTime = 0
}
}
This should be more than enough to get you started...
And please note that this example doesn't provide a boundingBox while scanning an object (look at the Apple sample code for that); it provides one based on an existing ARReferenceObject, which is what your question implies (assuming I interpreted it correctly).

How to query Mapbox MGLMapView features without visible map?

For an AR experience I'm working on, I want to have a "camera view" that shows annotations based on the user's location. If the user is in a certain area, show the annotation.
I'm able to do something like this using the code below:
extension ViewController: AnnotationManagerDelegate {
func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
print("camera did change tracking state: \(camera.trackingState)")
let annotationLocation = CLLocation()
let point = CGPoint(x: annotationLocation.coordinate.longitude, y: annotationLocation.coordinate.latitude)
let features = mapView.visibleFeatures(at: point);
if let score = features.first(where: { $0.attributes["score"] as! Int >= 5 }) {
// ...
But in my AR view I want to hide the map, not show it. When I try setting mapView.isHidden = true, the query always fails.
This makes sense, because the query is for visible features. How can I instead hide the map but still query tiles for features?
Go into Mapbox Studio (https://www.mapbox.com/studio/), create a new map style, and remove all of the layers. You can style a Mapbox map to be a single solid color (land, water, roads, etc.), which is what you need. If you need to toggle between a real map and a blank map, simply toggle between styles.
let basicMap = URL(string: "mapbox://styles/mapbox/outdoors-v9")
let blankMap = URL(string: "yourCustomURLFromMapboxStudio")
let mapView = MGLMapView(frame: view.bounds, styleURL: blankMap)
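Toggling at runtime is then just a matter of swapping styleURL. A small sketch, assuming mapView, basicMap and blankMap are stored as properties and isARMode is a hypothetical flag of your own:
// Use the blank style while the AR "camera view" is active, the normal style otherwise.
func updateMapStyle(isARMode: Bool) {
    mapView.styleURL = isARMode ? blankMap : basicMap
}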

AVCaptureMetadataOutput Region of Interest for Barcodes

I am looking to display a UIView subclass within a UIStackView. The subclass is called PLBarcodeScannerView and uses AVCaptureMetadataOutput to detect barcodes within the camera's field of view. Because this view does not cover the entire screen, I need to set the region of interest to match the frame of the PLBarcodeScannerView: the user only sees a portion of the camera view, and we want to be sure that the barcode visible in that view is the one being scanned.
Issue
I cannot seem to set the metadataOutputRectOfInterest properly, nor does the "zoom level" of the preview layer on this view seem correct, although the aspect ratio is correct. The system does receive barcodes successfully, but they are not always visible within the preview window. Codes are still scanned when they reside outside the visible preview window.
Screenshot:
The colorful photo is the PLBarcodeScannerView. Only codes which are fully visible inside this view should be considered.
Below is the code that initializes the view. It is called within the init methods of PLBarcodeScannerView (a UIView subclass):
func setupView() {
session = AVCaptureSession()
let tap = UITapGestureRecognizer(target: self, action: #selector(self.resume))
addGestureRecognizer(tap)
// Set the captureDevice.
let videoCaptureDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)
// Create input object.
let videoInput: AVCaptureDeviceInput?
do {
videoInput = try AVCaptureDeviceInput(device: videoCaptureDevice)
} catch {
return
}
// Add input to the session.
if (session!.canAddInput(videoInput)) {
session!.addInput(videoInput)
} else {
scanningNotPossible()
}
// Create output object.
let metadataOutput = AVCaptureMetadataOutput()
// Add output to the session.
if (session!.canAddOutput(metadataOutput)) {
session!.addOutput(metadataOutput)
// Send captured data to the delegate object via a serial queue.
metadataOutput.setMetadataObjectsDelegate(self, queue: DispatchQueue.main)
// Set barcode type for which to scan: EAN-13.
metadataOutput.metadataObjectTypes = [
AVMetadataObjectTypeCode128Code
]
} else {
scanningNotPossible()
}
// Determine the size of the region of interest
let x = self.frame.origin.x/UIScreen.main.bounds.width
let y = self.frame.origin.y/UIScreen.main.bounds.height
let width = self.frame.width/UIScreen.main.bounds.height
let height = self.frame.height/UIScreen.main.bounds.height
let scanRectTransformed = CGRect(x: x, y: y, width: 1, height: height)
metadataOutput.metadataOutputRectOfInterest(for: scanRectTransformed)
// Add previewLayer and have it show the video data.
previewLayer = AVCaptureVideoPreviewLayer(session: session)
previewLayer.frame = self.bounds
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
layer.addSublayer(previewLayer)
// Begin the capture session.
session!.startRunning()
}
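Not an answer as such, but for comparison, a sketch of the conversion that is usually needed, using the same Swift 3-era AVFoundation naming as the snippet above (in later SDKs the preview-layer method is metadataOutputRectConverted(fromLayerRect:)). The rect has to be converted through the preview layer into the output's normalized coordinate space and assigned to the output's rectOfInterest, and only once the session is running, e.g. at the end of setupView() after session!.startRunning():
// Sketch only: convert the preview layer's visible bounds into the metadata
// output's normalized coordinate space, then restrict scanning to that region.
DispatchQueue.main.async {
    let visibleRect = self.previewLayer.metadataOutputRectOfInterest(for: self.previewLayer.bounds)
    metadataOutput.rectOfInterest = visibleRect
}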
