ARKit Xcode, switching from target to full-screen view - iOS

I'm fairly new to programming and I'm building an AR app in Xcode.
It basically identifies a target and plays the corresponding video. I would like to start the AR experience with the video placed on the target plane, and then be able to switch to a full-screen view (no longer linked to the target) through a button or a switch while the video is already playing.
This is the code for the AR part, which is working flawlessly; we have 4 separate targets and each one is linked to a different video.
public func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    let node = SCNNode()
    // Show the video overlaid on the detected image
    if let imageAnchor = anchor as? ARImageAnchor {
        // Create a plane matching the physical size of the reference image
        let plane = SCNPlane(width: imageAnchor.referenceImage.physicalSize.width, height: imageAnchor.referenceImage.physicalSize.height)
        if imageAnchor.referenceImage.name == "01ADAMOTARGET" {
            // Set the AVPlayer as the plane's texture and play
            plane.firstMaterial?.diffuse.contents = self.adamoquadVideoPlayer
            self.adamoquadVideoPlayer.play()
        } else {
            self.adamoquadVideoPlayer.pause()
        }
        if imageAnchor.referenceImage.name == "02GIOSUETARGET" {
            plane.firstMaterial?.diffuse.contents = self.noeVideoPlayer
            self.noeVideoPlayer.play()
        } else {
            self.noeVideoPlayer.pause()
        }
        if imageAnchor.referenceImage.name == "04CAINOTARGET" {
            plane.firstMaterial?.diffuse.contents = self.moseVideoPlayer
            self.moseVideoPlayer.play()
        } else {
            self.moseVideoPlayer.pause()
        }
        if imageAnchor.referenceImage.name == "03GIUSEPPETARGET" {
            plane.firstMaterial?.diffuse.contents = self.giuseppeVideoPlayer
            self.giuseppeVideoPlayer.play()
        } else {
            self.giuseppeVideoPlayer.pause()
        }
        let planeNode = SCNNode(geometry: plane)
        // Rotate the plane so it lies flat on the target
        planeNode.eulerAngles.x = -.pi / 2
        // Attach the plane node to the anchor's node
        node.addChildNode(planeNode)
    }
    return node
}

public func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = (anchor as? ARImageAnchor) else { return }
    if imageAnchor.isTracked {
        self.adamoquadVideoPlayer.play()
    } else {
        self.adamoquadVideoPlayer.pause()
    }
}
I've also written the part for the full-screen view and it's working, but I don't really know how to put the two functions together. Here's example code for a single video; it pauses when another target is encountered and another video starts playing.
if imageAnchor.referenceImage.name == "01ADAMOTARGET" {
    let layer = AVPlayerLayer(player: adamoVideoPlayer)
    layer.frame = view.bounds
    view.layer.addSublayer(layer)
    adamoVideoPlayer.play()
} else {
    self.adamoVideoPlayer.pause()
}
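One possible way to put the two together (just a sketch under assumptions, not code from the project: currentPlayer and fullScreenLayer are hypothetical properties on the view controller, and toggleFullScreen would be wired to the button):

// Hypothetical properties on the view controller:
var currentPlayer: AVPlayer?          // set to e.g. adamoquadVideoPlayer when its target is detected in nodeFor
var fullScreenLayer: AVPlayerLayer?   // non-nil while the video is shown full screen

@IBAction func toggleFullScreen(_ sender: Any) {
    if let layer = fullScreenLayer {
        // Already full screen: remove the overlay and go back to the video on the AR plane.
        layer.removeFromSuperlayer()
        fullScreenLayer = nil
    } else if let player = currentPlayer {
        // Still on the AR plane: cover the whole view with an AVPlayerLayer.
        // The same AVPlayer keeps playing, so the playback position is preserved.
        let layer = AVPlayerLayer(player: player)
        layer.frame = view.bounds
        layer.videoGravity = .resizeAspect
        view.layer.addSublayer(layer)
        fullScreenLayer = layer
    }
}

With this, setting currentPlayer to the matching player (adamoquadVideoPlayer, noeVideoPlayer, and so on) inside the nodeFor renderer would let the button act on whichever video is currently playing on a target.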

Related

ARKit: Multiple videos PLUS play/pause functionality that resumes the correct video

Learning and LOVING ARKit, I'm going through an online class where the project is to create the Harry Potter-like newspaper that plays a video when the camera looks at an image on the paper. I'm taking the project a step further in trying to do two things:
Have the video pause when the camera moves off the image connected to the video and also
Play multiple videos based on multiple images.
I got the Play/Pause feature to work using this method:
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    if imageAnchor.isTracked {
        videoNode.play()
    } else {
        videoNode.pause()
    }
}
And I got multiple videos to play using two global properties:
var movieName = "Friends.mp4"
var videoNode = SKVideoNode(fileNamed: "Friends.mp4")
And then within the ARSCNViewDelegate method, I added the following code...
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    let node = SCNNode()
    if let imageAnchor = anchor as? ARImageAnchor {
        if imageAnchor.referenceImage.name == "HarryAndFriends" {
            movieName = "Friends.mp4"
        } else if imageAnchor.referenceImage.name == "Train" {
            movieName = "TrainStation.mp4"
        }
        videoNode = SKVideoNode(fileNamed: movieName)
        videoNode.play()
        let videoScene = SKScene(size: CGSize(width: 1280, height: 720))
        videoNode.position = CGPoint(x: videoScene.size.width / 2, y: videoScene.size.height / 2)
        videoNode.yScale = -1.0
        videoScene.addChild(videoNode)
        let plane = SCNPlane(width: imageAnchor.referenceImage.physicalSize.width, height: imageAnchor.referenceImage.physicalSize.height)
        plane.firstMaterial?.diffuse.contents = videoScene
        let planeNode = SCNNode(geometry: plane)
        planeNode.eulerAngles.x = -.pi / 2
        node.addChildNode(planeNode)
    }
    return node
}
This mostly works!
I'm able to track Video1, move the camera off of Image1, and Video1 pauses. When I point the camera back on Image1, Video1 resumes. Perfect! Then moving to Image2, Video2 starts playing, as it should. I can move the camera off of Image2, and Video2 pauses, as it should.
However, when I position the camera back on Image1, Video1 appears, as and where it should, but Video2 resumes playing (I can hear the audio from Video2). Video1 does not play and is still paused where I left off.
Video2 is somehow stored in memory, and I can't figure out how to get it to switch back.
I pulled info from these posts:
Tracking multiple images and play their videos in ARkit2
Multiple image tracking ARKit
ARKit How to Pause video when out of tracking
I just can't seem to combine the two concepts to resume the correct video. Any help?
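One possible fix (a sketch only; videoNodes and movieNames are hypothetical properties, not from the original code) is to keep one SKVideoNode per reference-image name, so each video retains its own playback position, and to play or pause only the node belonging to the anchor being updated:

// Hypothetical: one SKVideoNode per reference image, created once and reused.
var videoNodes: [String: SKVideoNode] = [:]
let movieNames = ["HarryAndFriends": "Friends.mp4", "Train": "TrainStation.mp4"]

func videoNode(for imageName: String) -> SKVideoNode? {
    if let existing = videoNodes[imageName] { return existing }
    guard let movieName = movieNames[imageName] else { return nil }
    let node = SKVideoNode(fileNamed: movieName)
    videoNodes[imageName] = node
    return node
}

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor,
          let name = imageAnchor.referenceImage.name,
          let videoNode = videoNodes[name] else { return }
    // Only the video belonging to this anchor is played or paused,
    // so the other video keeps its own paused position instead of resuming.
    if imageAnchor.isTracked {
        videoNode.play()
    } else {
        videoNode.pause()
    }
}

In the nodeFor renderer, the idea would be to call videoNode(for:) instead of creating a fresh SKVideoNode each time, so a previously paused video is reused rather than replaced.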

How can I reduce 3 blocks of code to 1 in iOS

I created a fun little AR app that allows me to point my phone at index cards that contain 2D images of Christmas items drawn by my niece, and have 3D versions of the images pop up.
However, I have limited my app to just 3 images for now, as each picture/3D combo has its own block of code (shown below), but I would like to somehow consolidate it into one block (even if I need to rename the image files to a numbered format) and would appreciate any advice.
Screenshots of the app, using the "3 blocks of code method" are included.
// MARK: - ARSCNViewDelegate
// The anchor will be the image detected and the node will be a 3D object
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    // The 3D object
    let node = SCNNode()
    // If this detects a plane, a point, or anything other than an image, this code will not run
    if let imageAnchor = anchor as? ARImageAnchor {
        // This plane is created from the detected image (the card). We want its width and height in the
        // physical world to match the card, so effectively: "look at the anchor image found and use its size properties"
        let plane = SCNPlane(width: imageAnchor.referenceImage.physicalSize.width, height: imageAnchor.referenceImage.physicalSize.height)
        // Once the plane has its dimensions, use it to create a plane node: a 3D object rendered on top of the card
        let planeNode = SCNNode(geometry: plane)
        // Make the plane translucent
        plane.firstMaterial?.diffuse.contents = UIColor(white: 1.0, alpha: 0.5)
        // By default the plane is vertical, so flip it to lie flat on the detected card
        planeNode.eulerAngles.x = -.pi / 2
        // Add the plane node as a child of the node created above
        node.addChildNode(planeNode)

        // ------------ 3 BLOCKS OF CODE BELOW THAT NEED TO BE SIMPLIFIED TO 1 ----------------

        // First block, for the ChristmasTree.png image / ChristmasTree.scn 3D object. (Note that the 2D image
        // is not detected if ".png" is included at the end of the image anchor name, but the ".scn" extension
        // appears to be required for the 3D object.)
        if imageAnchor.referenceImage.name == "ChristmasTree" {
            // Create the 3D model on top of the card
            if let cardScene = SCNScene(named: "art.scnassets/ChristmasTree.scn") {
                // Create a node that will represent the 3D object
                if let cardNode = cardScene.rootNode.childNodes.first {
                    // Since the 3D model is rotated the other way, bring it forward (same as above but positive)
                    cardNode.eulerAngles.x = .pi / 2
                    planeNode.addChildNode(cardNode)
                }
            }
        }
        // Second block, for the Gift.png image / Gift.scn 3D object
        if imageAnchor.referenceImage.name == "Gift" {
            if let cardScene = SCNScene(named: "art.scnassets/Gift.scn") {
                if let cardNode = cardScene.rootNode.childNodes.first {
                    cardNode.eulerAngles.x = .pi / 2
                    planeNode.addChildNode(cardNode)
                }
            }
        }
        // Third block, for the GingerbreadMan.png image / GingerbreadMan.scn 3D object
        if imageAnchor.referenceImage.name == "GingerbreadMan" {
            if let cardScene = SCNScene(named: "art.scnassets/GingerbreadMan.scn") {
                if let cardNode = cardScene.rootNode.childNodes.first {
                    cardNode.eulerAngles.x = .pi / 2
                    planeNode.addChildNode(cardNode)
                }
            }
        }
    }
    // This method expects an SCNNode to be returned so the scene can render the 3D object
    return node
}
EDIT
Using Vadian's recommendation, I have revised the code as follows; however, it has resulted in only the translucent plane appearing when pointing the camera at the index card:
// MARK: - ARSCNViewDelegate
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    let node = SCNNode()
    if let imageAnchor = anchor as? ARImageAnchor {
        let plane = SCNPlane(width: imageAnchor.referenceImage.physicalSize.width, height: imageAnchor.referenceImage.physicalSize.height)
        let planeNode = SCNNode(geometry: plane)
        plane.firstMaterial?.diffuse.contents = UIColor(white: 1.0, alpha: 0.5)
        planeNode.eulerAngles.x = -.pi / 2
        node.addChildNode(planeNode)
        // ------------ single block of code ------------
        let name = imageAnchor.referenceImage.name
        if ["ChristmasTree", "Gift", "GingerbreadMan"].contains(name) {
            if let cardScene = SCNScene(named: "art.scnassets/\(name).scn") {
                if let cardNode = cardScene.rootNode.childNodes.first {
                    cardNode.eulerAngles.x = .pi / 2
                    planeNode.addChildNode(cardNode)
                }
            }
        }
        // ------------ single block of code ------------
    }
    return node
}
As the only difference is the name, the three branches can be reduced to:
let name = imageAnchor.referenceImage.name
if ["ChristmasTree", "Gift", "GingerbreadMan"].contains(name) {
    // Create the 3D model on top of the card
    if let cardScene = SCNScene(named: "art.scnassets/\(name).scn") {
        // Create a node that will represent the 3D object
        if let cardNode = cardScene.rootNode.childNodes.first {
            // Since the 3D model is rotated the other way, bring it forward (same as above but positive)
            cardNode.eulerAngles.x = .pi / 2
            planeNode.addChildNode(cardNode)
        }
    }
}
Or, avoiding the pyramid of doom:
let name = imageAnchor.referenceImage.name
if ["ChristmasTree", "Gift", "GingerbreadMan"].contains(name),
   let cardScene = SCNScene(named: "art.scnassets/\(name).scn"),
   let cardNode = cardScene.rootNode.childNodes.first {
    cardNode.eulerAngles.x = .pi / 2
    planeNode.addChildNode(cardNode)
}
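A possible reason the edited code in the question shows only the translucent plane (a guess, not something confirmed in the thread): referenceImage.name is an optional String, so interpolating it directly produces a path like art.scnassets/Optional("Gift").scn, which fails to load. A sketch that unwraps the name first:

// Unwrap the optional name before building the scene path (sketch only).
if let name = imageAnchor.referenceImage.name,
   ["ChristmasTree", "Gift", "GingerbreadMan"].contains(name),
   let cardScene = SCNScene(named: "art.scnassets/\(name).scn"),
   let cardNode = cardScene.rootNode.childNodes.first {
    cardNode.eulerAngles.x = .pi / 2
    planeNode.addChildNode(cardNode)
}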

How to get current position of 3D object while animation is going on in ARKit?

On image marker detection, I want to play an animation of a walking guy within that marker's boundary only, using ARKit. For that I want to find the position of the 3D object while it is walking on the marker. The animation was created in an external 3D authoring tool and saved in .scnassets as a .dae file. I have added the node and started the animation using the code below:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    if let imageAnchor = anchor as? ARImageAnchor {
        DispatchQueue.main.async {
            //let translation = imageAnchor.transform.columns.3
            let idleScene = SCNScene(named: "art.scnassets/WalkAround/WalkAround.dae")!
            // This node will be the parent of all the animation models
            let node1 = SCNNode()
            // Add all the child nodes to the parent node
            for child in idleScene.rootNode.childNodes {
                node1.addChildNode(child)
            }
            node1.scale = SCNVector3(0.2, 0.2, 0.2)
            let physicalSize = imageAnchor.referenceImage.physicalSize
            let size = CGSize(width: 500, height: 500)
            let skScene = SKScene(size: size)
            skScene.backgroundColor = .white
            let plane = SCNPlane(width: self.referenceImage!.physicalSize.width, height: self.referenceImage!.physicalSize.height)
            let material = SCNMaterial()
            material.lightingModel = SCNMaterial.LightingModel.constant
            material.isDoubleSided = true
            material.diffuse.contents = skScene
            plane.materials = [material]
            let rectNode = SCNNode(geometry: plane)
            rectNode.eulerAngles.x = -.pi / 2
            node.addChildNode(rectNode)
            node.addChildNode(node1)
            self.loadAnimation(withKey: "walking", sceneName: "art.scnassets/WalkAround/SambaArmtr", animationIdentifier: "SambaArmtr-1")
        }
    }
}

func loadAnimation(withKey: String, sceneName: String, animationIdentifier: String) {
    let sceneURL = Bundle.main.url(forResource: sceneName, withExtension: "dae")
    let sceneSource = SCNSceneSource(url: sceneURL!, options: nil)
    if let animationObject = sceneSource?.entryWithIdentifier(animationIdentifier, withClass: CAAnimation.self) {
        // The animation will only play once
        animationObject.repeatCount = 1
    }
}
I tried using node.presentation.position in both of the methods below to get the current position of the object:
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval)
// Or
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor)
If I don't move the device once the animation has started, those methods don't get called, and until then I keep getting the same position for the node. That's why I can't figure out where I'm going wrong. Is there any way to get the current position of the object while the animation is running in ARKit?
I don't know of any way to get the current frame within an embedded animation. With that said, the animation embedded within a model uses Core Animation to run. You could use CAAnimationDelegate to listen for the start/end events of your animation and run a timer. The timer would give you the best estimate of which frame the animation is on.
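A minimal sketch of that idea (assumptions: AnimationTracker is a made-up helper, and the CAAnimation retrieved in loadAnimation is the one being tracked):

import QuartzCore

// Made-up helper: records when the embedded animation starts so its elapsed time
// (and therefore an approximate frame/progress) can be estimated on demand.
class AnimationTracker: NSObject, CAAnimationDelegate {
    private var startTime: CFTimeInterval?

    func animationDidStart(_ anim: CAAnimation) {
        startTime = CACurrentMediaTime()
    }

    func animationDidStop(_ anim: CAAnimation, finished flag: Bool) {
        startTime = nil
    }

    /// Estimated progress (0...1) of `anim` since it started, if it is running.
    func estimatedProgress(of anim: CAAnimation) -> Double? {
        guard let start = startTime, anim.duration > 0 else { return nil }
        return min((CACurrentMediaTime() - start) / anim.duration, 1)
    }
}

Setting animationObject.delegate to an instance of this helper before adding the animation to the node would let you query estimatedProgress whenever you need an approximate position along the walk.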
References:
SceneKit Animating Content Documentation: https://developer.apple.com/documentation/scenekit/animation/animating_scenekit_content
CAAnimationDelegate Documentation: https://developer.apple.com/documentation/quartzcore/caanimationdelegate

How do I detect multiple images in AR?

I am creating a live newspaper app with ARKit which transforms images in a newspaper into videos. I am able to detect one image and play a video on it, but when I try doing it with two images and playing a corresponding video on each of them, I get an error like:
Attempted to add a SKNode which already has a parent
I tried checking the tracked images and comparing them to the reference images, but I think something is wrong with my logic.
This is my viewWillAppear() method:
override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    // Create a session configuration
    let configuration = ARImageTrackingConfiguration()
    if let trackedImages = ARReferenceImage.referenceImages(inGroupNamed: "NewsPaperImages", bundle: Bundle.main) {
        configuration.trackingImages = trackedImages
        configuration.maximumNumberOfTrackedImages = 20
    }
    // Run the view's session
    sceneView.session.run(configuration)
}

func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    let node = SCNNode()
    if let imageAnchor = anchor as? ARImageAnchor {
        if imageAnchor.referenceImage == UIImage(named: "Image2_River") {
            self.videoNode = SKVideoNode(fileNamed: "riverBeauty.mp4")
        }
        print("Yes it is an image")
        self.videoNode.play()
        let videoScene = SKScene(size: CGSize(width: 480, height: 360))
        videoNode.position = CGPoint(x: videoScene.size.width / 2, y: videoScene.size.height / 2)
        videoNode.yScale = -1.0
        videoScene.addChild(videoNode)
        let plane = SCNPlane(width: imageAnchor.referenceImage.physicalSize.width, height: imageAnchor.referenceImage.physicalSize.height)
        plane.firstMaterial?.diffuse.contents = videoScene
        let planeNode = SCNNode(geometry: plane)
        planeNode.eulerAngles.x = -.pi / 2
        node.addChildNode(planeNode)
    }
    return node
}
It should play a video on every detected image, but instead it crashes.
Looking at your code, and from what you have said, I think you need to change your logic to create a video node using the name property of your ARReferenceImage instead.
When you create an ARReferenceImage, either statically (within the AR Resource bundle) or dynamically, you can assign it a name.
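For example, creating one dynamically and naming it might look roughly like this (a sketch; the image name and the 0.1 m physical width are assumed values):

// Sketch only: build a reference image from a bundled UIImage and give it a name.
if let cgImage = UIImage(named: "Image2_River")?.cgImage {
    let riverImage = ARReferenceImage(cgImage, orientation: .up, physicalWidth: 0.1)
    riverImage.name = "Image2_River"
    configuration.trackingImages = [riverImage]
}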
Then you can use the names to add logic to assign a different video to each referenceImage detected.
And in order to keep your code DRY you could create a reusable function to create your video node.
As such a simple example might look something like so:
//-------------------------
// MARK: - ARSCNViewDelegate
//-------------------------
extension ViewController: ARSCNViewDelegate {

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        //1. Check We Have Detected An ARImageAnchor
        guard let validAnchor = anchor as? ARImageAnchor else { return }
        //2. Create A Video Player Node For Each Detected Target
        node.addChildNode(createdVideoPlayerNodeFor(validAnchor.referenceImage))
    }

    /// Creates An SCNNode With An AVPlayer Rendered Onto An SCNPlane
    ///
    /// - Parameter target: ARReferenceImage
    /// - Returns: SCNNode
    func createdVideoPlayerNodeFor(_ target: ARReferenceImage) -> SCNNode {
        //1. Create An SCNNode To Hold Our VideoPlayer
        let videoPlayerNode = SCNNode()
        //2. Create An SCNPlane & An AVPlayer
        let videoPlayerGeometry = SCNPlane(width: target.physicalSize.width, height: target.physicalSize.height)
        var videoPlayer = AVPlayer()
        //3. If We Have A Valid Name & A Valid Video URL Then Instantiate The AVPlayer
        if let targetName = target.name,
           let validURL = Bundle.main.url(forResource: targetName, withExtension: "mp4", subdirectory: "/art.scnassets") {
            videoPlayer = AVPlayer(url: validURL)
            videoPlayer.play()
        }
        //4. Assign The AVPlayer & The Geometry To The Video Player Node
        videoPlayerGeometry.firstMaterial?.diffuse.contents = videoPlayer
        videoPlayerNode.geometry = videoPlayerGeometry
        //5. Rotate It
        videoPlayerNode.eulerAngles.x = -.pi / 2
        return videoPlayerNode
    }
}
As you can see I have opted to use an AVPlayer as my video content, although you can continue to use your videoScene should you so desire.
Hope it points you in the right direction...

Physics object falls to infinity in SceneKit

I'm making an AR app that's a ball toss game using Swift's ARKit.
Click here for my repo
The point of the game is to toss the ball and make it land in the hat. However, whenever I try to toss the ball, it always appears to fall to infinity instead of landing in the hat or on the floor plane that I've created.
Here's the code for tossing the ball:
@IBAction func throwBall(_ sender: Any) {
    // Create ball
    let ball = SCNSphere(radius: 0.02)
    currentBallNode = SCNNode(geometry: ball)
    currentBallNode?.physicsBody = .dynamic()
    currentBallNode?.physicsBody?.allowsResting = true
    currentBallNode?.physicsBody?.isAffectedByGravity = true
    // Apply transformation
    let camera = sceneView.session.currentFrame?.camera
    let cameraTransform = camera?.transform
    currentBallNode?.simdTransform = cameraTransform!
    // Add current ball node to balls array
    balls.append(currentBallNode!)
    // Add ball node to root node
    sceneView.scene.rootNode.addChildNode(currentBallNode!)
    // Set force to be applied
    let force = simd_make_float4(0, 0, -3, 0)
    let rotatedForce = simd_mul(cameraTransform!, force)
    let vectorForce = SCNVector3(x: rotatedForce.x, y: rotatedForce.y, z: rotatedForce.z)
    // Apply force to ball
    currentBallNode?.physicsBody?.applyForce(vectorForce, asImpulse: true)
}
And here's the physics body setting for the floor:
Look at the screenshot below to get more of an idea.
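As an assumption about what the inspector screenshot shows, the equivalent static physics body for the floor node, configured in code, might look roughly like:

// Hypothetical equivalent of the floor's Physics Body inspector settings:
// a static body (it never moves) that the dynamic ball can collide with and rest on.
if let floor = sceneView.scene.rootNode.childNode(withName: "floor", recursively: true) {
    floor.physicsBody = SCNPhysicsBody(type: .static, shape: nil)
}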
Never mind, I managed to resolve this by adding the following function:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor,
          planeAnchor.center == self.planeAnchor?.center || self.planeAnchor == nil else { return }
    // Set the floor's geometry to be the detected plane
    let floor = sceneView.scene.rootNode.childNode(withName: "floor", recursively: true)
    let plane = SCNPlane(width: CGFloat(planeAnchor.extent.x), height: CGFloat(planeAnchor.extent.y))
    floor?.geometry = plane
}
