Learning and LOVING ARKit! I'm going through an online class where the project is to create a Harry Potter-like newspaper that plays a video when the camera looks at an image on the paper. I'm taking the project a step further and trying to do two things:
Have the video pause when the camera moves off the image connected to the video, and
Play multiple videos based on multiple images.
I got the play/pause feature to work using this delegate method:
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    if imageAnchor.isTracked {
        videoNode.play()
    } else {
        videoNode.pause()
    }
}
And I got multiple videos to play using two global properties:
var movieName = "Friends.mp4"
var videoNode = SKVideoNode(fileNamed: "Friends.mp4")
And then within the ARSCNViewDelegate method, I added the following code:
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    let node = SCNNode()
    if let imageAnchor = anchor as? ARImageAnchor {
        if imageAnchor.referenceImage.name == "HarryAndFriends" {
            movieName = "Friends.mp4"
        } else if imageAnchor.referenceImage.name == "Train" {
            movieName = "TrainStation.mp4"
        }
        videoNode = SKVideoNode(fileNamed: movieName)
        videoNode.play()
        let videoScene = SKScene(size: CGSize(width: 1280, height: 720))
        videoNode.position = CGPoint(x: videoScene.size.width / 2, y: videoScene.size.height / 2)
        videoNode.yScale = -1.0
        videoScene.addChild(videoNode)
        let plane = SCNPlane(width: imageAnchor.referenceImage.physicalSize.width, height: imageAnchor.referenceImage.physicalSize.height)
        plane.firstMaterial?.diffuse.contents = videoScene
        let planeNode = SCNNode(geometry: plane)
        planeNode.eulerAngles.x = -.pi / 2
        node.addChildNode(planeNode)
    }
    return node
}
This mostly works!
I'm able to track Video1, move the camera off of Image1, and Video1 pauses. When I point the camera back on Image1, Video1 resumes. Perfect! Then moving to Image2, Video2 starts playing, as it should. I can move the camera off of Image2, and Video2 pauses, as it should.
However, when I position the camera back on Image1, Video1 appears, as and where it should, but Video2 resumes playing (I can hear the audio from Video2). Video1 does not play and is still paused where I left off.
Video2 is somehow stored in memory, and I can't figure out how to get it to switch back.
I pulled info from these posts:
Tracking multiple images and play their videos in ARkit2
Multiple image tracking ARKit
ARKit How to Pause video when out of tracking
I just can't seem to combine the two concepts to resume the correct video. Any help?
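For what it's worth, a minimal sketch of the kind of bookkeeping that would avoid the shared videoNode property being overwritten: one SKVideoNode per reference image, stored in a dictionary and looked up by the anchor's image name. The videoNodes dictionary and the videoNode(for:) helper are illustrative names, not part of the course project:
var videoNodes = [String: SKVideoNode]()

func videoNode(for imageName: String) -> SKVideoNode {
    if let existing = videoNodes[imageName] { return existing }
    // Illustrative mapping from reference image name to video file,
    // matching the names used above.
    let movieName = (imageName == "HarryAndFriends") ? "Friends.mp4" : "TrainStation.mp4"
    let newNode = SKVideoNode(fileNamed: movieName)
    videoNodes[imageName] = newNode
    return newNode
}

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor,
          let imageName = imageAnchor.referenceImage.name else { return }
    // Play or pause only the video that belongs to this particular anchor.
    let videoNode = self.videoNode(for: imageName)
    if imageAnchor.isTracked {
        videoNode.play()
    } else {
        videoNode.pause()
    }
}
In renderer(_:nodeFor:), the same lookup would replace the videoNode = SKVideoNode(fileNamed: movieName) reassignment, so each plane keeps its own video node.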
I'm fairly new to programming and I'm building an AR app in Xcode.
It basically identifies a target and plays the corresponding video. I would like to start the AR experience by placing the video on the target plane, and then be able to switch to a full-screen view (which is no longer linked to the target) through a button or a switch while the video is already playing.
This is the code for the AR part, which is working flawlessly; we have 4 separate targets and each one is linked to a different video.
public func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    let node = SCNNode()
    // Show video overlaid
    if let imageAnchor = anchor as? ARImageAnchor {
        // Plane creation
        let plane = SCNPlane(width: imageAnchor.referenceImage.physicalSize.width, height: imageAnchor.referenceImage.physicalSize.height)
        if imageAnchor.referenceImage.name == "01ADAMOTARGET" {
            // Set AVPlayer as the plane's texture and play
            plane.firstMaterial?.diffuse.contents = self.adamoquadVideoPlayer
            self.adamoquadVideoPlayer.play()
        } else {
            self.adamoquadVideoPlayer.pause()
        }
        if imageAnchor.referenceImage.name == "02GIOSUETARGET" {
            plane.firstMaterial?.diffuse.contents = self.noeVideoPlayer
            self.noeVideoPlayer.play()
        } else {
            self.noeVideoPlayer.pause()
        }
        if imageAnchor.referenceImage.name == "04CAINOTARGET" {
            plane.firstMaterial?.diffuse.contents = self.moseVideoPlayer
            self.moseVideoPlayer.play()
        } else {
            self.moseVideoPlayer.pause()
        }
        if imageAnchor.referenceImage.name == "03GIUSEPPETARGET" {
            plane.firstMaterial?.diffuse.contents = self.giuseppeVideoPlayer
            self.giuseppeVideoPlayer.play()
        } else {
            self.giuseppeVideoPlayer.pause()
        }
        let planeNode = SCNNode(geometry: plane)
        // Plane rotation
        planeNode.eulerAngles.x = -.pi / 2
        // Plane node
        node.addChildNode(planeNode)
    }
    return node
}

// Note: this must be a separate delegate method, not nested inside nodeFor,
// or ARKit will never call it.
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    if imageAnchor.isTracked {
        self.adamoquadVideoPlayer.play()
    } else {
        self.adamoquadVideoPlayer.pause()
    }
}
I've also written the part for the full-screen view and it's working, but I don't really know how to put the two functions together. Here's example code for a single video. It pauses when it encounters another target and another video starts playing.
if imageAnchor.referenceImage.name == "01ADAMOTARGET" {
    let layer = AVPlayerLayer(player: adamoVideoPlayer)
    layer.frame = view.bounds
    view.layer.addSublayer(layer)
    adamoVideoPlayer.play()
} else {
    self.adamoVideoPlayer.pause()
}
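A minimal sketch of one way to wire the two parts together, assuming everything lives in the same view controller: remember whichever AVPlayer is currently playing, and let a button add or remove a full-screen AVPlayerLayer driven by that same player. The currentPlayer and fullScreenLayer properties are illustrative names:
var currentPlayer: AVPlayer?   // set from renderer(_:nodeFor:) whenever a video starts
var fullScreenLayer: AVPlayerLayer?

@objc func toggleFullScreen(_ sender: UIButton) {
    if let layer = fullScreenLayer {
        // Leave full screen: drop the layer; the SCNPlane keeps rendering the player.
        layer.removeFromSuperlayer()
        fullScreenLayer = nil
    } else if let player = currentPlayer {
        // Enter full screen: an AVPlayerLayer backed by the same AVPlayer keeps
        // playing seamlessly and is no longer tied to the target.
        let layer = AVPlayerLayer(player: player)
        layer.frame = view.bounds
        layer.videoGravity = .resizeAspect
        view.layer.addSublayer(layer)
        fullScreenLayer = layer
    }
}
Inside each branch of renderer(_:nodeFor:), setting self.currentPlayer = self.adamoquadVideoPlayer (and so on for the other three players) would then be the only extra bookkeeping needed.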
On image marker detection, I want to play an animation of a walking character within that marker's boundary only, using ARKit. For that, I want to find the position of the 3D object while it is walking on the marker. The animation was created in an external 3D authoring tool and saved in .scnassets as a .dae file. I added the node and started the animation using the code below:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    if let imageAnchor = anchor as? ARImageAnchor {
        DispatchQueue.main.async {
            //let translation = imageAnchor.transform.columns.3
            let idleScene = SCNScene(named: "art.scnassets/WalkAround/WalkAround.dae")!
            // This node will be the parent of all the animation models
            let node1 = SCNNode()
            // Add all the child nodes to the parent node
            for child in idleScene.rootNode.childNodes {
                node1.addChildNode(child)
            }
            node1.scale = SCNVector3(0.2, 0.2, 0.2)
            let physicalSize = imageAnchor.referenceImage.physicalSize
            let size = CGSize(width: 500, height: 500)
            let skScene = SKScene(size: size)
            skScene.backgroundColor = .white
            let plane = SCNPlane(width: physicalSize.width, height: physicalSize.height)
            let material = SCNMaterial()
            material.lightingModel = SCNMaterial.LightingModel.constant
            material.isDoubleSided = true
            material.diffuse.contents = skScene
            plane.materials = [material]
            let rectNode = SCNNode(geometry: plane)
            rectNode.eulerAngles.x = -.pi / 2
            node.addChildNode(rectNode)
            node.addChildNode(node1)
            self.loadAnimation(withKey: "walking", sceneName: "art.scnassets/WalkAround/SambaArmtr", animationIdentifier: "SambaArmtr-1")
        }
    }
}
func loadAnimation(withKey: String, sceneName: String, animationIdentifier: String) {
    let sceneURL = Bundle.main.url(forResource: sceneName, withExtension: "dae")
    let sceneSource = SCNSceneSource(url: sceneURL!, options: nil)
    if let animationObject = sceneSource?.entryWithIdentifier(animationIdentifier, withClass: CAAnimation.self) {
        // The animation will only play once
        animationObject.repeatCount = 1
        // Attach the animation so it actually runs (assumed here to be added via the
        // ARSCNView's root node; the original snippet omitted this step)
        sceneView.scene.rootNode.addAnimation(animationObject, forKey: withKey)
    }
}
I tried using node.presentation.position in both of the methods below to get the current position of the object.
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval)
// Or
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor)
If I don't move the device once the animation has started, those methods are no longer called, and until then I keep getting the same position for the node. That's why I can't tell where I'm going wrong. Is there any way to get the current position of the object while the animation is running in ARKit?
I don't know of any way to get the current frame within an embedded animation. That said, the animation embedded within a model uses Core Animation to run. You could use CAAnimationDelegate to listen for the start/end events of your animation and run a timer; the timer would give you a best estimate of which frame the animation is on (see the sketch after the references below).
References:
SceneKit Animating Content Documentation: https://developer.apple.com/documentation/scenekit/animation/animating_scenekit_content
CAAnimationDelegate Documentation: https://developer.apple.com/documentation/quartzcore/caanimationdelegate
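A minimal sketch of that idea; the tracker class and the 30 fps frame-rate assumption are illustrative, not from the documentation above:
import QuartzCore

final class AnimationProgressTracker: NSObject, CAAnimationDelegate {
    private var startTime: CFTimeInterval = 0
    private(set) var isRunning = false

    // Assign this object as the CAAnimation's delegate before adding the animation to a node.
    func animationDidStart(_ anim: CAAnimation) {
        startTime = CACurrentMediaTime()
        isRunning = true
    }

    func animationDidStop(_ anim: CAAnimation, finished flag: Bool) {
        isRunning = false
    }

    /// Best-effort estimate of the current frame, given the clip's duration and frame rate.
    func estimatedFrame(clipDuration: CFTimeInterval, framesPerSecond: Double = 30) -> Int? {
        guard isRunning else { return nil }
        let elapsed = CACurrentMediaTime() - startTime
        return Int(elapsed.truncatingRemainder(dividingBy: clipDuration) * framesPerSecond)
    }
}
Combined with reading node.presentation.position from renderer(_:updateAtTime:), this gives an approximation of where the model is mid-animation.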
With ARKit 2 a new configuration was added: ARImageTrackingConfiguration, which according to the SDK can have better performance and enables some new use cases.
Experimenting with it on Xcode 10b2 (see https://forums.developer.apple.com/thread/103894 for how to fix the asset loading), my code now correctly gets the delegate callback that an image was tracked, after which a node is added. However, I could not find any documentation on where the coordinate system is located, so does anybody know how to place the node in the scene so that it overlays the detected image?
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    DispatchQueue.main.async {
        if let imageAnchor = anchor as? ARImageAnchor {
            let imageNode = SCNNode.createImage(size: imageAnchor.referenceImage.physicalSize)
            imageNode.transform = // ... ???
            node.addChildNode(imageNode)
        }
    }
}
PS: in contrast to ARWorldTrackingConfiguration, the origin seems to move around constantly (most likely the camera is placed at (0, 0, 0)).
PPS: SCNNode.createImage is a helper function without any coordinate calculations.
Assuming that I have read your question correctly, you can do something like the following:
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    let nodeToReturn = SCNNode()
    //1. Check We Have Detected Our Image
    if let validImageAnchor = anchor as? ARImageAnchor {
        //2. Log The Information About The Anchor & Our Reference Image
        print("""
        ARImageAnchor Transform = \(validImageAnchor.transform)
        Name Of Detected Image = \(validImageAnchor.referenceImage.name)
        Width Of Detected Image = \(validImageAnchor.referenceImage.physicalSize.width)
        Height Of Detected Image = \(validImageAnchor.referenceImage.physicalSize.height)
        """)
        //3. Create An SCNPlane To Cover The Detected Image
        let planeNode = SCNNode()
        let planeGeometry = SCNPlane(width: validImageAnchor.referenceImage.physicalSize.width,
                                     height: validImageAnchor.referenceImage.physicalSize.height)
        planeGeometry.firstMaterial?.diffuse.contents = UIColor.white
        planeNode.geometry = planeGeometry
        //a. Set The Opacity To Less Than 1 So We Can See The Real World Image
        planeNode.opacity = 0.5
        //b. Rotate The PlaneNode So It Matches The Rotation Of The Anchor
        planeNode.eulerAngles.x = -.pi / 2
        //4. Add It To The Node
        nodeToReturn.addChildNode(planeNode)
        //5. Add Something Such As An SCNScene To The Plane
        if let modelScene = SCNScene(named: "art.scnassets/model.scn"), let modelNode = modelScene.rootNode.childNodes.first {
            //a. Set The Model At The Center Of The Plane & Move It Forward A Tad
            modelNode.position = SCNVector3Zero
            modelNode.position.z = 0.15
            //b. Add It To The PlaneNode
            planeNode.addChildNode(modelNode)
        }
    }
    return nodeToReturn
}
Hopefully this will point you in the right direction...
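The same layout also works in renderer(_:didAdd:for:), which the question used: the node ARKit passes in is already centered on the detected image, so a child plane needs only the plane-to-image rotation and no translation. A minimal sketch, with an SCNPlane standing in for the poster's SCNNode.createImage helper:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    // The anchor's node sits at the center of the detected image, with its
    // Y axis pointing out of the image, so only the rotation is needed.
    let plane = SCNPlane(width: imageAnchor.referenceImage.physicalSize.width,
                         height: imageAnchor.referenceImage.physicalSize.height)
    let imageNode = SCNNode(geometry: plane)
    imageNode.eulerAngles.x = -.pi / 2
    node.addChildNode(imageNode)
}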
I want to create an app that detects reference images and then shows a 3D object (an SCNScene) for each one (multiple images/objects in one camera view are possible). This is already running.
Now, when the user taps the object, I need to know the file name of the reference image, because that image should be shown.
import UIKit
import SceneKit
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate {

    @IBOutlet var sceneView: ARSCNView!
    private var planeNode: SCNNode?
    private var imageNode: SCNNode?
    private var animationInfo: AnimationInfo?
    private var currentMediaName: String?

    override func viewDidLoad() {
        super.viewDidLoad()
        let scene = SCNScene()
        sceneView.scene = scene
        sceneView.delegate = self
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // Load reference images to look for from "AR Resources" folder
        guard let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil) else {
            fatalError("Missing expected asset catalog resources.")
        }
        // Create a session configuration
        let configuration = ARWorldTrackingConfiguration()
        // Add previously loaded images to ARScene configuration as detectionImages
        configuration.detectionImages = referenceImages
        // Run the view's session
        sceneView.session.run(configuration)
        let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap(rec:)))
        // Add recognizer to sceneview
        sceneView.addGestureRecognizer(tap)
    }

    // Method called on tap
    @objc func handleTap(rec: UITapGestureRecognizer) {
        // GET Reference-Image Name
        loadReferenceImage()
        if rec.state == .ended {
            let location: CGPoint = rec.location(in: sceneView)
            let hits = self.sceneView.hitTest(location, options: nil)
            if !hits.isEmpty {
                let tappedNode = hits.first?.node
            }
        }
    }

    func loadReferenceImage() {
        print("CLICK")
    }

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let imageAnchor = anchor as? ARImageAnchor else {
            return
        }
        currentMediaName = imageAnchor.referenceImage.name
        // 1. Load plane's scene.
        let planeScene = SCNScene(named: "art.scnassets/plane.scn")!
        let planeNode = planeScene.rootNode.childNode(withName: "planeRootNode", recursively: true)!
        // 2. Calculate size based on planeNode's bounding box.
        let (min, max) = planeNode.boundingBox
        let size = SCNVector3Make(max.x - min.x, max.y - min.y, max.z - min.z)
        // 3. Calculate the ratio of difference between real image and object size.
        // Ignore Y axis because it will be pointed out of the image.
        let widthRatio = Float(imageAnchor.referenceImage.physicalSize.width) / size.x
        let heightRatio = Float(imageAnchor.referenceImage.physicalSize.height) / size.z
        // Pick smallest value to be sure that object fits into the image.
        let finalRatio = [widthRatio, heightRatio].min()!
        // 4. Set transform from imageAnchor data.
        planeNode.transform = SCNMatrix4(imageAnchor.transform)
        // 5. Animate appearance by scaling model from 0 to previously calculated value.
        let appearanceAction = SCNAction.scale(to: CGFloat(finalRatio), duration: 0.4)
        appearanceAction.timingMode = .easeOut
        // Set initial scale to 0.
        planeNode.scale = SCNVector3Make(0.0, 0.0, 0.0)
        // Add to root node.
        sceneView.scene.rootNode.addChildNode(planeNode)
        // Run the appearance animation.
        planeNode.runAction(appearanceAction)
        self.planeNode = planeNode
        self.imageNode = node
    }

    // Note: this is not a real ARSCNViewDelegate callback; it mixes
    // renderer(_:didUpdate:for:) with renderer(_:updateAtTime:), so it will never be called.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor, updateAtTime time: TimeInterval) {
        guard let imageNode = imageNode, let planeNode = planeNode else {
            return
        }
        // 1. Unwrap animationInfo. Calculate animationInfo if it is nil.
        guard let animationInfo = animationInfo else {
            refreshAnimationVariables(startTime: time,
                                      initialPosition: planeNode.simdWorldPosition,
                                      finalPosition: imageNode.simdWorldPosition,
                                      initialOrientation: planeNode.simdWorldOrientation,
                                      finalOrientation: imageNode.simdWorldOrientation)
            return
        }
        // 2. Calculate new animationInfo if image position or orientation changed.
        if !simd_equal(animationInfo.finalModelPosition, imageNode.simdWorldPosition) || animationInfo.finalModelOrientation != imageNode.simdWorldOrientation {
            refreshAnimationVariables(startTime: time,
                                      initialPosition: planeNode.simdWorldPosition,
                                      finalPosition: imageNode.simdWorldPosition,
                                      initialOrientation: planeNode.simdWorldOrientation,
                                      finalOrientation: imageNode.simdWorldOrientation)
        }
        // 3. Calculate interpolation based on passedTime/totalTime ratio.
        let passedTime = time - animationInfo.startTime
        var t = min(Float(passedTime / animationInfo.duration), 1)
        // Applying curve function to time parameter to achieve "ease out" timing
        t = sin(t * .pi * 0.5)
        // 4. Calculate and set new model position and orientation.
        let f3t = simd_make_float3(t, t, t)
        planeNode.simdWorldPosition = simd_mix(animationInfo.initialModelPosition, animationInfo.finalModelPosition, f3t)
        planeNode.simdWorldOrientation = simd_slerp(animationInfo.initialModelOrientation, animationInfo.finalModelOrientation, t)
        //planeNode.simdWorldOrientation = imageNode.simdWorldOrientation
        guard let currentImageAnchor = anchor as? ARImageAnchor else { return }
        let name = currentImageAnchor.referenceImage.name!
        print("TEST")
        print(name)
    }

    func refreshAnimationVariables(startTime: TimeInterval, initialPosition: float3, finalPosition: float3, initialOrientation: simd_quatf, finalOrientation: simd_quatf) {
        let distance = simd_distance(initialPosition, finalPosition)
        // Average speed of movement is 0.15 m/s.
        let speed = Float(0.15)
        // Total time is calculated as distance/speed. Min time is set to 0.1s and max is set to 2s.
        let animationDuration = Double(min(max(0.1, distance / speed), 2))
        // Store animation information for later usage.
        animationInfo = AnimationInfo(startTime: startTime,
                                      duration: animationDuration,
                                      initialModelPosition: initialPosition,
                                      finalModelPosition: finalPosition,
                                      initialModelOrientation: initialOrientation,
                                      finalModelOrientation: finalOrientation)
    }
}
Since your ARReferenceImage is stored within the Assets.xcassets catalogue, you can simply load your image using the following UIImage initializer:
init?(named name: String)
For your information: if this is the first time the image is being loaded, the method looks for an image with the specified name in the application's main bundle. For PNG images, you may omit the filename extension. For all other file formats, always include the filename extension.
In my example I have an ARReferenceImage named TargetCard.
So to load it as a UIImage and then apply it to an SCNNode, or display it in screen space, you could do something like this:
//1. Load The Image Onto An SCNPlaneGeometry
if let image = UIImage(named: "TargetCard") {
    let planeNode = SCNNode()
    let planeGeometry = SCNPlane(width: 1, height: 1)
    planeGeometry.firstMaterial?.diffuse.contents = image
    planeNode.geometry = planeGeometry
    planeNode.position = SCNVector3(0, 0, -1.5)
    self.augmentedRealityView.scene.rootNode.addChildNode(planeNode)
}

//2. Load The Image Into A UIImageView
if let image = UIImage(named: "TargetCard") {
    let imageView = UIImageView(frame: CGRect(x: 10, y: 10, width: 300, height: 150))
    imageView.image = image
    imageView.contentMode = .scaleAspectFill
    self.view.addSubview(imageView)
}
In your context:
Each SCNNode has a name property:
var name: String? { get set }
As such, I suggest that when you create content for your ARImageAnchor, you give it the name of your ARReferenceImage, e.g.:
//---------------------------
// MARK: - ARSCNViewDelegate
//---------------------------

extension ViewController: ARSCNViewDelegate {

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        //1. Check We Have Detected An ARImageAnchor & Check It's The One We Want
        guard let validImageAnchor = anchor as? ARImageAnchor,
              let targetName = validImageAnchor.referenceImage.name else { return }
        //2. Create An SCNNode With An SCNPlaneGeometry
        let nodeToAdd = SCNNode()
        let planeGeometry = SCNPlane(width: 1, height: 1)
        planeGeometry.firstMaterial?.diffuse.contents = UIColor.cyan
        nodeToAdd.geometry = planeGeometry
        //3. Set Its Name To That Of Our ARReferenceImage
        nodeToAdd.name = targetName
        //4. Add It To The Hierarchy
        node.addChildNode(nodeToAdd)
    }
}
Then it is easy to get a reference to the image later, e.g.:
/// Checks To See If We Have Hit A Named SCNNode
///
/// - Parameter gesture: UITapGestureRecognizer
@objc func handleTap(_ gesture: UITapGestureRecognizer) {
    //1. Get The Current Touch Location
    let currentTouchLocation = gesture.location(in: self.augmentedRealityView)
    //2. Perform An SCNHitTest To See If We Have Tapped A Valid SCNNode & See If It Is Named
    guard let hitTestForNode = self.augmentedRealityView.hitTest(currentTouchLocation, options: nil).first?.node,
          let nodeName = hitTestForNode.name else { return }
    //3. Load The Reference Image
    self.loadReferenceImage(nodeName, inAR: true)
}

/// Loads A Matching Image For The Identified ARReferenceImage Name
///
/// - Parameters:
///   - fileName: String
///   - inAR: Bool
func loadReferenceImage(_ fileName: String, inAR: Bool) {
    if inAR {
        //1. Load The Image Onto An SCNPlaneGeometry
        if let image = UIImage(named: fileName) {
            let planeNode = SCNNode()
            let planeGeometry = SCNPlane(width: 1, height: 1)
            planeGeometry.firstMaterial?.diffuse.contents = image
            planeNode.geometry = planeGeometry
            planeNode.position = SCNVector3(0, 0, -1.5)
            self.augmentedRealityView.scene.rootNode.addChildNode(planeNode)
        }
    } else {
        //2. Load The Image Into A UIImageView
        if let image = UIImage(named: fileName) {
            let imageView = UIImageView(frame: CGRect(x: 10, y: 10, width: 300, height: 150))
            imageView.image = image
            imageView.contentMode = .scaleAspectFill
            self.view.addSubview(imageView)
        }
    }
}
Important:
One thing I have just discovered is that if we load the ARReferenceImage directly, e.g.:
let image = UIImage(named: "TargetCard")
then the image is displayed in grayscale, which is probably not what you want!
As such, what you probably need to do is copy the ARReferenceImage into the Assets catalogue and give it a prefix, e.g. ColourTargetCard...
Then you would need to change the function slightly by naming your nodes using the prefix, e.g.:
nodeToAdd.name = "Colour\(targetName)"
Hope it helps...
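With that naming in place, the tap handler needs no further changes: the hit test returns a node named, for example, ColourTargetCard, and loadReferenceImage passes that name straight to UIImage(named:), which then resolves to the full-colour copy in the asset catalogue rather than the grayscale ARReferenceImage.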