In brief: how do I pass every frame from an ARSession to a function?
I'm making an app:
It displays the image of the rear camera in real time in an ARView. (✅)
When it detects a QR code, it should read the text content of the QR code. (This is the part I don't know how to do in real time, i.e. how to get frame data out of the ARSession.)
import Vision

func getQRCodeContent(_ pixel: CVPixelBuffer) -> String {
    let requestHandler = VNImageRequestHandler(cvPixelBuffer: pixel, options: [:])
    let request = VNDetectBarcodesRequest()
    request.symbologies = [.qr]                    // only look for QR codes
    do {
        try requestHandler.perform([request])
    } catch {
        print("Barcode request failed: \(error)")
        return "none"
    }
    // Return the payload of the first detected QR code, if any.
    return request.results?.first?.payloadStringValue ?? "none"
}
Then I do some logic with the content and display the corresponding AR model in the ARView.
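For the "display the corresponding AR model" part, in RealityKit that usually comes down to appending an AnchorEntity to the ARView's scene once the payload matches. A rough, purely illustrative sketch; the payload string, box size and position are placeholders:

import RealityKit

// Hypothetical helper: place a small box when a known QR payload is seen.
func placeModel(for payload: String, in arView: ARView) {
    guard payload == "my-expected-payload" else { return }   // placeholder check
    let box = ModelEntity(mesh: .generateBox(size: 0.05),
                          materials: [SimpleMaterial(color: .blue, isMetallic: false)])
    let anchor = AnchorEntity(world: [0, 0, -0.3])            // 30 cm in front of the world origin
    anchor.addChild(box)
    arView.scene.addAnchor(anchor)
}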
I know I have to feed images into the Vision framework. I started with AVFoundation, but I found that when the ARView is loaded, the AVCaptureSession is paused.
So I want to feed the ARSession's frames into the Vision framework. However, all the tutorials I can find use storyboards and UIKit; I have no idea how to do this in SwiftUI.
I tried to extend ARView:
extension ARView: ARSessionDelegate {
    func renderer(_ renderer: SKRenderer, willRenderScene scene: SCNScene, atTime time: TimeInterval) {
        let capturedImage = session.currentFrame?.capturedImage
        print(capturedImage)
    }
}

struct ARViewCustom: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        arView.session.delegate = arView
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}
}
No error, but it doesn't work.
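For reference, the pattern that usually works here is to give the UIViewRepresentable a Coordinator object, make that coordinator the ARSessionDelegate, and forward each frame's capturedImage from session(_:didUpdate:). A minimal sketch, assuming the getQRCodeContent(_:) function from above; the type names (ARQRView, Coordinator) are illustrative:

import SwiftUI
import RealityKit
import ARKit

struct ARQRView: UIViewRepresentable {
    func makeCoordinator() -> Coordinator { Coordinator() }

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        // The coordinator, not the view itself, acts as the session delegate.
        arView.session.delegate = context.coordinator
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}

    class Coordinator: NSObject, ARSessionDelegate {
        // Called for every frame the session produces.
        func session(_ session: ARSession, didUpdate frame: ARFrame) {
            let payload = getQRCodeContent(frame.capturedImage)
            if payload != "none" {
                print("QR payload: \(payload)")
            }
        }
    }
}

In practice you would throttle this (e.g. skip frames while a Vision request is still in flight, or hop to a background queue), since running Vision synchronously in the delegate callback on every frame can stall rendering.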
Related
I am trying to create a 3D scene for an iOS app using an .rcproject file exported from Reality Composer.
I tried to display a scene loaded from the .rcproject file in an ARView in .nonAR mode. The code is as follows.
I have confirmed that the Simulator displays the scene properly.
But the scene is not displayed on my actual iPhone 8 and iPhone 12 mini (both on iOS 16.0.3).
I am puzzled as to why a scene loaded from a .rcproject file does not show up on my actual devices. If anyone has had a similar experience or has an idea of the cause, I would appreciate it.
Thank you for taking the time to read this post.
struct ARViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        // If cameraMode is ".ar", this works properly.
        let arview = ARView(frame: .zero, cameraMode: .nonAR)

        Sample.loadMySceneAsync { (result) in
            do {
                let myScene = try result.get()
                arview.scene.anchors.append(myScene)
            } catch {
                print("Failed to load myScene")
            }
        }

        // In .nonAR mode the scene needs its own camera.
        let camera = PerspectiveCamera()
        let cameraAnchor = AnchorEntity(world: [0, 0.2, 0.5])
        cameraAnchor.addChild(camera)
        arview.scene.addAnchor(cameraAnchor)

        return arview
    }

    func updateUIView(_ uiView: ARView, context: Context) {}
}
Right now my app can detect images and place some models. I am using ARKit and RealityKit. This is my setup:
import ARKit
import RealityKit

class ViewController: UIViewController, ARSessionDelegate {

    // arView and model are assumed to be created elsewhere in the project.
    @IBOutlet var arView: ARView!
    var model: Entity!

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        guard let imageAnchor = anchors.first as? ARImageAnchor,
              let _ = imageAnchor.referenceImage.name
        else { return }

        // Add Model Entity to the anchor of the detected image.
        let anchor = AnchorEntity(anchor: imageAnchor)
        anchor.addChild(model)
        arView.scene.anchors.append(anchor)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        arView.session.delegate = self
        resetTrackingConfig()
    }

    func resetTrackingConfig() {
        guard let refImg = ARReferenceImage.referenceImages(inGroupNamed: "Sub",
                                                            bundle: nil)
        else { return }

        let config = ARWorldTrackingConfiguration()
        config.detectionImages = refImg
        config.maximumNumberOfTrackedImages = 1

        let options = [ARSession.RunOptions.removeExistingAnchors,
                       ARSession.RunOptions.resetTracking]
        arView.session.run(config, options: ARSession.RunOptions(options))
    }
}
Now the problem is that I am not completely satisfied with how the image detection works. It has trouble detecting images if, for example, the lighting is slightly different.
This is my image:
But it should also be able to detect these images:
(2nd one has no white background)
For that I thought I could use machine learning. But how can I combine it with my ARKit setup? Right now the setup just takes the images from my asset catalog. I tried searching for this topic but couldn't find anything. Is this kind of project even possible the way I described it? Any help is appreciated! Let me know if you need any more information.
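One direction that is often suggested (and that comes up again further down in this thread) is to run your own Core ML classifier on the camera frames the ARSession already delivers, instead of relying solely on ARKit's reference-image matching. A rough sketch, assuming you have trained and bundled a classifier; the MyImageClassifier name, the FrameClassifier class and the confidence threshold are placeholders:

import ARKit
import Vision
import CoreML

final class FrameClassifier {

    // Build the Vision request once. MyImageClassifier is a placeholder for
    // your own Core ML image classifier (the class Xcode generates for the model).
    private lazy var request: VNCoreMLRequest? = {
        guard let coreMLModel = try? MyImageClassifier(configuration: MLModelConfiguration()),
              let visionModel = try? VNCoreMLModel(for: coreMLModel.model) else { return nil }
        return VNCoreMLRequest(model: visionModel) { request, _ in
            guard let best = (request.results as? [VNClassificationObservation])?.first,
                  best.confidence > 0.9 else { return }          // arbitrary threshold
            print("Recognized \(best.identifier)")
            // Place or update your RealityKit content for this label here.
        }
    }()

    // Call this from ARSessionDelegate's session(_:didUpdate:) with frame.capturedImage,
    // ideally throttled so it does not run on every single frame.
    func classify(_ pixelBuffer: CVPixelBuffer) {
        guard let request = request else { return }
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
        try? handler.perform([request])
    }
}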
I'm trying to place a 3D model on top of a recognized image with ARKit and RealityKit - all programmatically. Before I start the ARView I'm downloading the model I want to show when the reference image is detected.
This is my current setup:
override func viewDidLoad() {
    super.viewDidLoad()
    arView.session.delegate = self

    // Check if the device supports the AR experience
    if !ARConfiguration.isSupported {
        TLogger.shared.error_objc("Device does not support Augmented Reality")
        return
    }

    guard let qrCodeReferenceImage = UIImage(named: "QRCode") else { return }
    let detectionImages: Set<ARReferenceImage> = convertToReferenceImages([qrCodeReferenceImage])

    let configuration = ARWorldTrackingConfiguration()
    configuration.detectionImages = detectionImages
    arView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
I use the ARSessionDelegate to get notified when a new image anchor is added, which means the reference image has been detected:
func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    print("Hello")
    for anchor in anchors {
        guard let imageAnchor = anchor as? ARImageAnchor else { return }
        let referenceImage = imageAnchor.referenceImage
        addEntity(self.localModelPath!)
    }
}
However, the delegate method never gets called, while other delegate functions like session(_:didUpdate:) with an ARFrame are being called, so I assume the session just doesn't detect the image. The image resolution is good and the printed image is big, so it should definitely be recognized by the ARSession. I also checked that the image had been found before adding it to the configuration.
Can anyone lead me in the right direction here?
It looks like you have your configuration set up correctly. Your delegate function should be called when the reference image is recognized. Make sure your configuration isn't overwritten at any point in your code.
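One thing worth double-checking is how convertToReferenceImages builds the ARReferenceImage set. When reference images are created in code rather than in an asset catalog, each one needs a sensible physicalWidth (in meters), and a missing or wildly wrong size is a common reason detection silently never fires. A sketch of what such a helper might look like; the 0.1 m width and the naming are assumptions:

import ARKit
import UIKit

func convertToReferenceImages(_ images: [UIImage]) -> Set<ARReferenceImage> {
    var referenceImages = Set<ARReferenceImage>()
    for (index, image) in images.enumerated() {
        guard let cgImage = image.cgImage else { continue }
        // physicalWidth is the real-world width of the printed image, in meters.
        let reference = ARReferenceImage(cgImage,
                                         orientation: .up,
                                         physicalWidth: 0.1)
        reference.name = "reference-\(index)"
        referenceImages.insert(reference)
    }
    return referenceImages
}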
My app runs Vision with a Core ML model. The camera frames the machine learning model runs on come from an ARKit sceneView (basically, the camera). I have a method called loopCoreMLUpdate() that continuously runs CoreML so that we keep running the model on new camera frames. The code looks like this:
import UIKit
import SceneKit
import ARKit
import Vision

class MyViewController: UIViewController {

    // sceneView is assumed to be set up elsewhere (e.g. in the storyboard).
    @IBOutlet var sceneView: ARSCNView!

    var visionRequests = [VNRequest]()
    let dispatchQueueML = DispatchQueue(label: "com.hw.dispatchqueueml") // A Serial Queue

    override func viewDidLoad() {
        super.viewDidLoad()
        // Setup ARKit sceneView
        // ...
        // Begin Loop to Update CoreML
        loopCoreMLUpdate()
    }

    // This is the problematic part.
    // In fact - once it's run there's no way to stop it, is there?
    func loopCoreMLUpdate() {
        // Continuously run CoreML whenever it's ready. (Preventing 'hiccups' in Frame Rate)
        dispatchQueueML.async {
            // 1. Run Update.
            self.updateCoreML()
            // 2. Loop this function.
            self.loopCoreMLUpdate()
        }
    }

    func updateCoreML() {
        ///////////////////////////
        // Get Camera Image as RGB
        guard let pixbuff = sceneView.session.currentFrame?.capturedImage else { return }
        let ciImage = CIImage(cvPixelBuffer: pixbuff)
        // Note: Not entirely sure if the ciImage is being interpreted as RGB, but for now it works with the Inception model.
        // Note 2: Also uncertain if the pixelBuffer should be rotated before handing it off to Vision (VNImageRequestHandler) - regardless, for now, it still works well with the Inception model.

        ///////////////////////////
        // Prepare CoreML/Vision Request
        let imageRequestHandler = VNImageRequestHandler(ciImage: ciImage, options: [:])
        // let imageRequestHandler = VNImageRequestHandler(cgImage: cgImage!, orientation: myOrientation, options: [:]) // Alternatively, we can convert the above to an RGB CGImage and use that. UIInterfaceOrientation can also inform the orientation value.

        ///////////////////////////
        // Run Image Request
        do {
            try imageRequestHandler.perform(self.visionRequests)
        } catch {
            print(error)
        }
    }
}
As you can see, the loop effect is created by a DispatchQueue with the label com.hw.dispatchqueueml that keeps calling loopCoreMLUpdate(). Is there any way to stop the queue once CoreML is not needed anymore? Full code is here.
I suggest that instead of running the Core ML model in a loop started from viewDidLoad, you use an ARSessionDelegate method for this.
Use session(_:didUpdate:) to get each frame, and set a flag there to control when you want the model to run and when you don't.
Like this:
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    // This is where we analyse our frame.
    // Return early unless the isMLFlow flag says Core ML is currently wanted.
    guard isMLFlow else {
        return
    }

    currentBuffer = frame.capturedImage
    // UIImage(pixelBuffer:) is a custom convenience initializer, not part of UIKit.
    guard let buffer = currentBuffer, let image = UIImage(pixelBuffer: buffer) else { return }

    // Load / run the model here, e.g. via your own manager class:
    CoreMLManager.manager.updateClassifications(for: image)
}
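With this in place there is no recursive dispatch loop to tear down; stopping Core ML is just a matter of flipping the flag, for example from a button action (the action method below is illustrative; isMLFlow and currentBuffer are the properties used above):

// Properties on the view controller:
var isMLFlow = false               // Core ML runs only while this is true
var currentBuffer: CVPixelBuffer?

@objc func toggleMLButtonTapped(_ sender: UIButton) {
    isMLFlow.toggle()
    if !isMLFlow {
        currentBuffer = nil        // drop the last captured frame when turning Core ML off
    }
}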
I'm trying to persist a model in ARKit using the ARWorldMap. I can save and load the models, but the orientation I apply to the objects before I save is not persisted with the object.
What I'm currently doing
Objects are saved and loaded:
/// - Tag: GetWorldMap
@objc func saveExperience(_ button: UIButton) {
    sceneView.session.getCurrentWorldMap { worldMap, error in
        guard let map = worldMap
        else { self.showAlert(title: "Can't get current world map", message: error!.localizedDescription); return }

        // Add a snapshot image indicating where the map was captured.
        guard let snapshotAnchor = SnapshotAnchor(capturing: self.sceneView) else {
            fatalError("Can't take snapshot")
        }
        map.anchors.append(snapshotAnchor)

        do {
            let data = try NSKeyedArchiver.archivedData(withRootObject: map, requiringSecureCoding: true)
            try data.write(to: self.mapSaveURL, options: [.atomic])
            DispatchQueue.main.async {
                self.loadExperienceButton.isHidden = false
                self.loadExperienceButton.isEnabled = true
            }
        } catch {
            fatalError("Can't save map: \(error.localizedDescription)")
        }
    }
}

/// - Tag: RunWithWorldMap
@objc func loadExperience(_ button: UIButton) {

    /// - Tag: ReadWorldMap
    let worldMap: ARWorldMap = {
        guard let data = mapDataFromFile
        else { fatalError("Map data should already be verified to exist before Load button is enabled.") }
        do {
            guard let worldMap = try NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self, from: data)
            else { fatalError("No ARWorldMap in archive.") }
            return worldMap
        } catch {
            fatalError("Can't unarchive ARWorldMap from file data: \(error)")
        }
    }()

    // Display the snapshot image stored in the world map to aid the user in relocalizing.
    if let snapshotData = worldMap.snapshotAnchor?.imageData,
       let snapshot = UIImage(data: snapshotData) {
        self.snapshotThumbnail.image = snapshot
    } else {
        print("No snapshot image in world map")
    }

    // Remove the snapshot anchor from the world map since we do not need it in the scene.
    worldMap.anchors.removeAll(where: { $0 is SnapshotAnchor })

    let configuration = self.defaultConfiguration // this app's standard world tracking settings
    configuration.initialWorldMap = worldMap
    sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])

    isRelocalizingMap = true
    virtualObjectAnchor = nil
}
Rotation:
@objc func didRotate(_ gesture: UIRotationGestureRecognizer) {
    sceneView.scene.rootNode.eulerAngles.y = objectRotation
    gesture.rotation = 0
}
And then it's rendered:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard anchor.name == virtualObjectAnchorName else {
        return
    }

    // Save the reference to the virtual object anchor when the anchor is added from relocalizing.
    if virtualObjectAnchor == nil {
        virtualObjectAnchor = anchor
    }

    node.addChildNode(virtualObject)
}
How can I go about doing this? I have tried multiple solutions, but the orientation is never kept. The object loads at the correct position, but its rotation and scale are never kept, even if I apply them to the root node. The only option I can see is to also store the transform as a separate data object, then load and apply it, but it seems like it should be possible to store this data with the object.
The Apple documentation for ARWorldMap shows that the properties of the ARWorldMap class are anchors, center, extent, and rawFeaturePoints.
When you archive a world map, this is the only information that gets saved. Any information about the nodes added to the anchors during the session (e.g. changes to a node's scale and orientation) is not saved along with the world map during archiving.
I remember watching a WWDC session where they demoed a multiplayer AR game called SwiftShot where players hit different objects with balls. They provided the source code and I noticed they used a custom ARAnchor subclass called BoardAnchor which they used to store additional information in the anchor class such as the size of the game board.
See: SwiftShot: Creating a Game for Augmented Reality.
You can use the same approach to store, for example, the scale and orientation of a node, so that when you unarchive the world map and it gets relocalized, you can use ARSCNViewDelegate's renderer(_:didAdd:for:) to resize and rotate the node based on the information stored in your custom ARAnchor.
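A minimal sketch of what such an anchor subclass could look like if you want it to carry, say, a Y rotation and a scale (the class name, property names and coding keys here are illustrative, not taken from the SwiftShot sample):

import ARKit

// Custom anchor that carries extra state so it survives ARWorldMap archiving.
class VirtualObjectAnchor: ARAnchor {
    let rotationY: Float
    let scale: Float

    init(name: String, transform: simd_float4x4, rotationY: Float, scale: Float) {
        self.rotationY = rotationY
        self.scale = scale
        super.init(name: name, transform: transform)
    }

    // Required because ARKit copies anchors internally.
    required init(anchor: ARAnchor) {
        let other = anchor as! VirtualObjectAnchor
        self.rotationY = other.rotationY
        self.scale = other.scale
        super.init(anchor: anchor)
    }

    // MARK: NSSecureCoding, so the extra state is archived with the world map.
    override class var supportsSecureCoding: Bool { true }

    required init?(coder: NSCoder) {
        self.rotationY = coder.decodeFloat(forKey: "rotationY")
        self.scale = coder.decodeFloat(forKey: "scale")
        super.init(coder: coder)
    }

    override func encode(with coder: NSCoder) {
        super.encode(with: coder)
        coder.encode(rotationY, forKey: "rotationY")
        coder.encode(scale, forKey: "scale")
    }
}

Then in renderer(_:didAdd:for:) you can cast the incoming anchor to this type and apply the stored rotation and scale to the node you attach.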