How do I render a jpg image using ARKit ARSKViewDelegate? - ios

I'd like to render a JPG image as a 2D rectangle floating in space. Using the SpriteKit example, how do I return a JPG image from the ARSKViewDelegate?
The demo returns an SKLabelNode. Is there a node class that would be appropriate for a JPG image fetched from the network, maybe via UIImage?
func view(_ view: ARSKView, nodeFor anchor: ARAnchor) -> SKNode? {
    // Create and configure a node for the anchor added to the view's session.
    let labelNode = SKLabelNode(text: "👾")
    labelNode.horizontalAlignmentMode = .center
    labelNode.verticalAlignmentMode = .center
    return labelNode
}

You can use an SCNPlane and assign a UIImage as the contents of that plane:
let imagePlane = SCNPlane(width: sceneView.bounds.width/6000, height: sceneView.bounds.height/6000)
imagePlane.firstMaterial?.diffuse.contents = //<-- UIImage here
imagePlane.firstMaterial?.lightingModel = .constant
let planeNode = SCNNode(geometry: imagePlane)
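For context, a plane like this is typically attached to the node SceneKit vends for the detected anchor; a minimal sketch, assuming an ARSCNViewDelegate (the size and asset name are placeholders):
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    let imagePlane = SCNPlane(width: 0.2, height: 0.15) // physical size in metres (placeholder)
    imagePlane.firstMaterial?.diffuse.contents = UIImage(named: "myPhoto") // hypothetical asset
    imagePlane.firstMaterial?.lightingModel = .constant
    let planeNode = SCNNode(geometry: imagePlane)
    node.addChildNode(planeNode) // the anchor's node places the plane in the world
}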
UPDATE: I just noticed that you're using SpriteKit; the code I shared uses SceneKit.

Looks like I can use an SKSpriteNode built from an SKTexture. The only issue with that is that I see warnings in the logs about degraded AR performance: [Technique] World tracking performance is being affected by resource constraints [1]
guard let url = URL(string: imageURL),
      let data = try? Data(contentsOf: url), // synchronous fetch; make sure the image at this URL exists
      let image = UIImage(data: data) else { return nil }
let texture = SKTexture(image: image)
return SKSpriteNode(texture: texture)
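A likely contributor to that warning is that Data(contentsOf:) downloads the image synchronously, blocking the thread the delegate is called on. A minimal sketch of loading the texture asynchronously instead (the placeholder size is arbitrary; imageURL is the same string as above):
func view(_ view: ARSKView, nodeFor anchor: ARAnchor) -> SKNode? {
    // Return a placeholder node immediately so the delegate isn't blocked.
    let spriteNode = SKSpriteNode(color: .gray, size: CGSize(width: 100, height: 100))
    if let url = URL(string: imageURL) {
        URLSession.shared.dataTask(with: url) { data, _, _ in
            guard let data = data, let image = UIImage(data: data) else { return }
            DispatchQueue.main.async {
                // Swap in the real texture once the download finishes.
                spriteNode.texture = SKTexture(image: image)
                spriteNode.size = image.size
            }
        }.resume()
    }
    return spriteNode
}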

Related

Can I use an SKScene as a material on an .obj file loaded in SceneKit?

My goal is to be able to tap on a specific model and color the surface. I have managed to do this with generic SCNSphere, SCNBox, etc.
I set it up like this, and it basically works for SCNSphere, etc:
let node = SCNNode(geometry: geometry)
let material = SCNMaterial()
material.specular.contents = SKColor(white: 0.1, alpha: 1)
material.shininess = 2.0
material.normal.contents = "wood-normal.png"
let skScene = SKScene(size: CGSize(width: SPRITE_SIZE, height: SPRITE_SIZE))
skScene.backgroundColor = SKColor.orange
skScene.scaleMode = .aspectFill
material.diffuse.contents = skScene
geometry.firstMaterial = material
scnScene.rootNode.addChildNode(node)
However, when I load a .obj file and try to set the material with an SKScene, I don't get anything. Here is how I set up the .obj:
let bundle = Bundle.main
let path = bundle.path(forResource: name, ofType: "obj")
let url = URL(fileURLWithPath: path!)
let asset = MDLAsset(url: url)
guard let object = asset.object(at: 0) as? MDLMesh else {
    print("Failed to get mesh from obj asset")
    return
}
let material = SCNMaterial()
material.specular.contents = SKColor(white: 0.1, alpha: 1)
let skScene = SKScene(size: CGSize(width: SPRITE_SIZE, height: SPRITE_SIZE))
skScene.backgroundColor = SKColor.orange
skScene.scaleMode = .aspectFill
material.diffuse.contents = skScene
let geometry = SCNGeometry(mdlMesh: object)
let node = SCNNode(geometry: geometry)
node.geometry?.materials = [material]
scnScene.rootNode.addChildNode(node)
But when I run it, the color of the teapot is not orange.
My question is: am I doing something wrong in terms of using an SKScene as a material for an .obj file, or is there some limitation that I'm not aware of? I've spent a few hours on this and am open to suggestions if anyone has any ideas.
Also, in this sort of situation, how do I decide what size the SKScene should be, since I want to arbitrarily wrap the whole mesh with an SKScene?
Update 1
I sort of figured something out. My .obj file doesn't have texture coordinates, only vertices. When I tried another .obj file that does have texture coordinates, the skScene loaded correctly. So I guess my teapot has no UV coordinates, and without them the texture can't be mapped. Is there a way to add UV coordinates if the .obj file doesn't have them already?
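Regarding the update's question: Model I/O can generate a UV unwrap for a mesh that lacks texture coordinates. A minimal sketch, using the same MDLAsset loading as the question (the helper name geometryForOBJ is hypothetical):
import ModelIO
import SceneKit
import SceneKit.ModelIO

func geometryForOBJ(at objURL: URL) -> SCNGeometry? {
    let asset = MDLAsset(url: objURL)
    guard let mesh = asset.object(at: 0) as? MDLMesh else { return nil }
    // Generate texture coordinates so SceneKit has UVs to map the SKScene onto.
    mesh.addUnwrappedTextureCoordinates(forAttributeNamed: MDLVertexAttributeTextureCoordinate)
    return SCNGeometry(mdlMesh: mesh)
}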

Show image on SCNPlane in SceneKit & ARKit in Swift iOS

I followed this raywenderlich tutorial to detect an image using ARKit and SceneKit. In that example they show how to display a video on top of the detected image.
https://www.raywenderlich.com/6957-building-a-museum-app-with-arkit-2
I want to display an image on top of the detected image. How can I add a child node to the SCNNode so that the image is displayed on top of it?
Here is the code I tried; it's not working and shows a blank white screen.
DispatchQueue.main.async {
    let size = imageAnchor.referenceImage.physicalSize
    let imageView = UIImageView(image: UIImage(named: "BG.png")!)
    let imgMaterial = SCNMaterial()
    imgMaterial.diffuse.contents = imageView
    let imgPlane = SCNPlane(width: size.width, height: size.height)
    imgPlane.materials = [imgMaterial]
    let imgNode = SCNNode(geometry: imgPlane)
    imgNode.eulerAngles.x = -.pi / 2
    node.addChildNode(imgNode)
    node.opacity = 1
}
Please help me show an image on top of the detected image. Thanks in advance.
I'm now able to show an image on top of the detected image, but it takes a lot of memory. What is wrong with that code?
In the same way as the image view, I'm displaying a playing GIF on top of the detected image with the code below.
let size = imageAnchor.referenceImage.physicalSize
let imgNameFromDoc = person["actionToTake"] as! String
let documentsPathURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first
let imgpath = documentsPathURL?.appendingPathComponent("/Packages/\(self.selectedDatasetName)/3DScenes/\(imgNameFromDoc)")
let imageData = try! Data(contentsOf: URL(fileURLWithPath: imgpath!.path))
let gifImage = UIImage.gifImageWithData(imageData)
self.gifImgView = UIImageView(image: gifImage)
let imgMaterial = SCNMaterial()
imgMaterial.diffuse.contents = self.gifImgView
let planeSize = scaleActionSize(wid: Float(size.width), hei: Float(size.height))
let imgPlane = SCNPlane(width: CGFloat(planeSize[0]), height: CGFloat(planeSize[1]))
imgPlane.materials = [imgMaterial]
let imgNode = SCNNode(geometry: imgPlane)
imgNode.eulerAngles.x = -.pi / 2
node.addChildNode(imgNode)
node.opacity = 1
With the help of an image extension I'm showing the GIF in an image view on top of the detected image.
https://github.com/kiritmodi2702/GIF-Swift/blob/master/GIF-Swift/iOSDevCenters%2BGIF.swift
How to load GIF image in Swift?
I dragged the iOSDevCenters+GIF.swift class into the app and displayed the GIF.
When I use this GIF image view to display the GIF, and the other image view to display the image, the app takes more than 800 MB of memory in Xcode when running on an iPad.
Can somebody help me find the memory leaks (I tried with Instruments but wasn't able to fix it) or anything else wrong in this code?
I guess you have image tracking working.
The following method is called when a new anchor is added, in this case when your tracked image is detected. The node parameter is the node for the tracked image, so you can add a child to it as in the tutorial. You can try the following (using ARSCNViewDelegate):
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    DispatchQueue.main.async {
        let imageView = UIImageView(image: UIImage())
        let imgMaterial = SCNMaterial()
        imgMaterial.diffuse.contents = imageView
        let imgPlane = SCNPlane(width: 0.1, height: 0.1)
        imgPlane.materials = [imgMaterial]
        let imgNode = SCNNode(geometry: imgPlane)
        imgNode.eulerAngles.x = -.pi / 2
        node.addChildNode(imgNode)
        node.opacity = 1
    }
}
The imgNode is added to the node, and the node is already added to the scene. But what happens if the node is not added to the scene? Then you have to add it yourself. In case you are not using the renderer method, it looks like this:
self.imgNode?.position = SCNVector3(0, 0, 0)
sceneView.scene.rootNode.addChildNode(self.imgNode)
Displaying UIKit views as material property contents is not supported.
Instead of a UIImageView, you should directly use the UIImage, or even better just a String or URL that points to the image on disk. This way the image won't be loaded into memory twice (once by SceneKit and once by UIKit).
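Applied to the question's snippet, that advice would look something like this minimal sketch (same delegate context as the question, with node and imageAnchor in scope):
let size = imageAnchor.referenceImage.physicalSize
let imgMaterial = SCNMaterial()
imgMaterial.diffuse.contents = UIImage(named: "BG.png") // a UIImage, not a UIImageView
let imgPlane = SCNPlane(width: size.width, height: size.height)
imgPlane.materials = [imgMaterial]
let imgNode = SCNNode(geometry: imgPlane)
imgNode.eulerAngles.x = -.pi / 2
node.addChildNode(imgNode)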

How do I detect multiple images in AR?

I am creating a live newspaper app with ARKit which transforms images in a newspaper into videos. I am able to detect one image and play a video on it, but when I try doing it with two images, each playing its corresponding video, I get an error like:
Attempted to add a SKNode which already has a parent
I tried checking the tracked images and comparing them to the reference images, but I think something is wrong with my logic.
This is my viewWillAppear() method:
override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    // Create a session configuration
    let configuration = ARImageTrackingConfiguration()
    if let trackedImages = ARReferenceImage.referenceImages(inGroupNamed: "NewsPaperImages", bundle: Bundle.main) {
        configuration.trackingImages = trackedImages
        configuration.maximumNumberOfTrackedImages = 20
    }
    // Run the view's session
    sceneView.session.run(configuration)
}
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    let node = SCNNode()
    if let imageAnchor = anchor as? ARImageAnchor {
        if imageAnchor.referenceImage == UIImage(named: "Image2_River") {
            self.videoNode = SKVideoNode(fileNamed: "riverBeauty.mp4")
        }
        print("Yes it is an image")
        self.videoNode.play()
        let videoScene = SKScene(size: CGSize(width: 480, height: 360))
        videoNode.position = CGPoint(x: videoScene.size.width / 2, y: videoScene.size.height / 2)
        videoNode.yScale = -1.0
        videoScene.addChild(videoNode)
        let plane = SCNPlane(width: imageAnchor.referenceImage.physicalSize.width,
                             height: imageAnchor.referenceImage.physicalSize.height)
        plane.firstMaterial?.diffuse.contents = videoScene
        let planeNode = SCNNode(geometry: plane)
        planeNode.eulerAngles.x = -.pi / 2
        node.addChildNode(planeNode)
    }
    return node
}
It should play a video on every detected image, but instead it crashes.
Looking at your code, and from what you have said, I think you need to change your logic and create a video node based on the name property of your ARReferenceImage instead.
When you create an ARReferenceImage, either statically (within the AR resource bundle) or dynamically, you can assign it a name, as in the sketch below.
You can then use those names to add logic that assigns a different video to each detected referenceImage.
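For a statically bundled image, the name is set in the AR resource group's inspector in Xcode; for a dynamically created one it is just a property. A minimal sketch (the cgImage and physical width are placeholders):
let referenceImage = ARReferenceImage(cgImage, orientation: .up, physicalWidth: 0.1)
referenceImage.name = "riverBeauty" // match this to the video file it should trigger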
And in order to keep your code DRY, you could create a reusable function to build your video node.
As such, a simple example might look something like this:
//-------------------------
//MARK: - ARSCNViewDelegate
//-------------------------

extension ViewController: ARSCNViewDelegate {

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        //1. Check We Have Detected An ARImageAnchor
        guard let validAnchor = anchor as? ARImageAnchor else { return }
        //2. Create A Video Player Node For Each Detected Target
        node.addChildNode(createdVideoPlayerNodeFor(validAnchor.referenceImage))
    }

    /// Creates An SCNNode With An AVPlayer Rendered Onto An SCNPlane
    ///
    /// - Parameter target: ARReferenceImage
    /// - Returns: SCNNode
    func createdVideoPlayerNodeFor(_ target: ARReferenceImage) -> SCNNode {
        //1. Create An SCNNode To Hold Our VideoPlayer
        let videoPlayerNode = SCNNode()
        //2. Create An SCNPlane & An AVPlayer
        let videoPlayerGeometry = SCNPlane(width: target.physicalSize.width, height: target.physicalSize.height)
        var videoPlayer = AVPlayer()
        //3. If We Have A Valid Name & A Valid Video URL Then Instantiate The AVPlayer
        if let targetName = target.name,
           let validURL = Bundle.main.url(forResource: targetName, withExtension: "mp4", subdirectory: "/art.scnassets") {
            videoPlayer = AVPlayer(url: validURL)
            videoPlayer.play()
        }
        //4. Assign The AVPlayer & The Geometry To The Video Player
        videoPlayerGeometry.firstMaterial?.diffuse.contents = videoPlayer
        videoPlayerNode.geometry = videoPlayerGeometry
        //5. Rotate It
        videoPlayerNode.eulerAngles.x = -.pi / 2
        return videoPlayerNode
    }
}
As you can see, I have opted to use an AVPlayer for my video content, although you can continue to use your videoScene if you prefer.
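One note on that design choice: an AVPlayer stops when its item finishes, so if each newspaper video should loop you could observe AVPlayerItemDidPlayToEndTime. A minimal sketch, assuming the videoPlayer created in the function above:
// Rewind and replay whenever the video reaches its end.
NotificationCenter.default.addObserver(forName: .AVPlayerItemDidPlayToEndTime,
                                       object: videoPlayer.currentItem,
                                       queue: .main) { _ in
    videoPlayer.seek(to: .zero)
    videoPlayer.play()
}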
Hope it points you in the right direction...

Convert coordinates in ARImageTrackingConfiguration

With ARKit 2 a new configuration was added: ARImageTrackingConfiguration, which according to the SDK can have better performance and enables some new use cases.
Experimenting with it on Xcode 10b2 (see https://forums.developer.apple.com/thread/103894 for how to fix the asset loading), my code now correctly gets the delegate call that an image was tracked and a node was added. However, I could not find any documentation on where the coordinate system is located. Does anybody know how to place the node into the scene so that it overlays the detected image?
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    DispatchQueue.main.async {
        if let imageAnchor = anchor as? ARImageAnchor {
            let imageNode = SCNNode.createImage(size: imageAnchor.referenceImage.physicalSize)
            imageNode.transform = // ... ???
            node.addChildNode(imageNode)
        }
    }
}
ps: in contrast to ARWorldTrackingConfiguration, the origin seems to move around constantly (most likely the camera is placed at (0, 0, 0)).
pps: SCNNode.createImage is a helper function without any coordinate calculations.
Assuming that I have read your question correctly, you can do something like the following:
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    let nodeToReturn = SCNNode()
    //1. Check We Have Detected Our Image
    if let validImageAnchor = anchor as? ARImageAnchor {
        //2. Log The Information About The Anchor & Our Reference Image
        print("""
        ARImageAnchor Transform = \(validImageAnchor.transform)
        Name Of Detected Image = \(validImageAnchor.referenceImage.name)
        Width Of Detected Image = \(validImageAnchor.referenceImage.physicalSize.width)
        Height Of Detected Image = \(validImageAnchor.referenceImage.physicalSize.height)
        """)
        //3. Create An SCNPlane To Cover The Detected Image
        let planeNode = SCNNode()
        let planeGeometry = SCNPlane(width: validImageAnchor.referenceImage.physicalSize.width,
                                     height: validImageAnchor.referenceImage.physicalSize.height)
        planeGeometry.firstMaterial?.diffuse.contents = UIColor.white
        planeNode.geometry = planeGeometry
        //a. Set The Opacity To Less Than 1 So We Can See The Real-World Image
        planeNode.opacity = 0.5
        //b. Rotate The PlaneNode So It Matches The Rotation Of The Anchor
        planeNode.eulerAngles.x = -.pi / 2
        //4. Add It To The Node
        nodeToReturn.addChildNode(planeNode)
        //5. Add Something Such As An SCNScene To The Plane
        if let modelScene = SCNScene(named: "art.scnassets/model.scn"),
           let modelNode = modelScene.rootNode.childNodes.first {
            //a. Set The Model At The Center Of The Plane & Move It Forward A Tad
            modelNode.position = SCNVector3Zero
            modelNode.position.z = 0.15
            //b. Add It To The PlaneNode
            planeNode.addChildNode(modelNode)
        }
    }
    return nodeToReturn
}
Hopefully this will point you in the right direction...
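On the coordinate question itself: the node vended for an ARImageAnchor is centered on the detected image, so a child placed at (0, 0, 0) and rotated flat, as above, overlays the image. If you instead add a node to the root node yourself, you can copy the anchor's transform onto it. A minimal sketch, reusing the question's imageNode and imageAnchor and assuming an ARSCNView named sceneView:
// The anchor's transform carries the detected image's position and orientation.
imageNode.simdTransform = imageAnchor.transform
sceneView.scene.rootNode.addChildNode(imageNode)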

ARKit - how many tracking images can it track?

So I understand that in order to track images, we need to create an AR Resource folder, place all the images we intend to track there, and configure their real-world size properties through the inspector.
Then we set the array of ARReferenceImages on the session's world configuration.
All good with that.
But HOW MANY can we track? 10? 100? 1,000,000? And would it be possible to download those images and create the ARReferenceImages on the fly, instead of having them in the bundle from the very beginning?
Having a look at the Apple docs, they don't seem to specify a limit, so it is reasonable to assume it depends on memory management, etc.
Regarding creating images on the fly, this is definitely possible.
According to the docs this can be done in one of two ways:
Creating a new reference image from a Core Graphics image object:
init(CGImage, orientation: CGImagePropertyOrientation, physicalWidth: CGFloat)
Creating a new reference image from a Core Video pixel buffer:
init(CVPixelBuffer, orientation: CGImagePropertyOrientation, physicalWidth: CGFloat)
Here is an example of creating a referenceImage on the fly using an image from the standard assets bundle, although this can easily be adapted to parse an image from a URL, etc.:
// Create ARReference Images From Somewhere Other Than The Default Folder
func loadDynamicImageReferences() {
    //1. Get The Image From The Folder
    guard let imageFromBundle = UIImage(named: "moonTarget"),
          //2. Convert It To A CIImage
          let imageToCIImage = CIImage(image: imageFromBundle),
          //3. Then Convert The CIImage To A CGImage
          let cgImage = convertCIImageToCGImage(inputImage: imageToCIImage) else { return }
    //4. Create An ARReference Image (Remembering Physical Width Is In Metres)
    let arImage = ARReferenceImage(cgImage, orientation: CGImagePropertyOrientation.up, physicalWidth: 0.2)
    //5. Name The Image
    arImage.name = "CGImage Test"
    //6. Set The ARWorldTrackingConfiguration Detection Images Assuming A Configuration Is Running
    configuration.detectionImages = [arImage]
}

/// Converts A CIImage To A CGImage
///
/// - Parameter inputImage: CIImage
/// - Returns: CGImage
func convertCIImageToCGImage(inputImage: CIImage) -> CGImage? {
    let context = CIContext(options: nil)
    if let cgImage = context.createCGImage(inputImage, from: inputImage.extent) {
        return cgImage
    }
    return nil
}
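Adapting that to an image fetched from the network might look like the following minimal sketch (the helper name is hypothetical, error handling is elided, and configuration/sceneView are assumed to be the same properties used above):
func loadReferenceImage(from url: URL, physicalWidth: CGFloat) {
    URLSession.shared.dataTask(with: url) { data, _, _ in
        guard let data = data,
              let image = UIImage(data: data),
              let cgImage = image.cgImage else { return }
        //1. Create The Reference Image From The Downloaded CGImage
        let arImage = ARReferenceImage(cgImage, orientation: .up, physicalWidth: physicalWidth)
        arImage.name = url.lastPathComponent
        //2. Update The Detection Images & Re-Run The Session
        DispatchQueue.main.async {
            self.configuration.detectionImages = [arImage]
            self.sceneView.session.run(self.configuration)
        }
    }.resume()
}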
We can then test this within the ARSCNViewDelegate, e.g.:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    //1. If Our Target Image Has Been Detected Then Get The Corresponding Anchor
    guard let currentImageAnchor = anchor as? ARImageAnchor else { return }
    let x = currentImageAnchor.transform
    print(x.columns.3.x, x.columns.3.y, x.columns.3.z)
    //2. Get The Target's Name
    let name = currentImageAnchor.referenceImage.name!
    //3. Get The Target's Width & Height In Meters
    let width = currentImageAnchor.referenceImage.physicalSize.width
    let height = currentImageAnchor.referenceImage.physicalSize.height
    print("""
    Image Name = \(name)
    Image Width = \(width)
    Image Height = \(height)
    """)
    //4. Create A Plane Geometry To Cover The ARImageAnchor
    let planeNode = SCNNode()
    let planeGeometry = SCNPlane(width: width, height: height)
    planeGeometry.firstMaterial?.diffuse.contents = UIColor.white
    planeNode.opacity = 0.25
    planeNode.geometry = planeGeometry
    //5. Rotate The PlaneNode To Horizontal
    planeNode.eulerAngles.x = -.pi / 2
    //The Node Is Centered In The Anchor (0,0,0)
    node.addChildNode(planeNode)
    //6. Create An SCNBox
    let boxNode = SCNNode()
    let boxGeometry = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0)
    //7. Create A Different Colour For Each Face
    let faceColours = [UIColor.red, UIColor.green, UIColor.blue, UIColor.cyan, UIColor.yellow, UIColor.gray]
    var faceMaterials = [SCNMaterial]()
    //8. Apply One To Each Face (0 ..< 6 So All Six Faces Get A Material)
    for face in 0 ..< 6 {
        let material = SCNMaterial()
        material.diffuse.contents = faceColours[face]
        faceMaterials.append(material)
    }
    boxGeometry.materials = faceMaterials
    boxNode.geometry = boxGeometry
    //9. Set The Box's Position So It Sits On The Plane (plane.y + box.height / 2)
    boxNode.position = SCNVector3(0, 0.05, 0)
    //10. Add The Box To The Node
    node.addChildNode(boxNode)
}
As you can see, the process is fairly easy. So in your case you are probably more interested in the conversion function above, which uses this method to create the dynamic images:
init(CGImage, orientation: CGImagePropertyOrientation, physicalWidth: CGFloat)
Paraphrasing the Human Interface Guidelines for AR: image detection performance and accuracy deteriorate as the number of images increases. So there's no hard limit in the API, but if you try to put more than around 25 images in the current detection set, it'll start getting too slow or inaccurate to be useful.
There are lots of other factors affecting performance and accuracy too, so consider that a guideline rather than a hard limit. Depending on the scene conditions where you're running the app, how much you're stressing the CPU with other tasks, how distinct your reference images are from one another, etc., you might manage a few more than 25, or start having detection problems with a few less.
