I've created an AR app that detects an image; upon detection I want to play a GIF on top of it.
I followed this tutorial to detect the image: https://www.raywenderlich.com/6957-building-a-museum-app-with-arkit-2
In the view controller I added the image view like this:
var imageView = GIFImageView(frame: CGRect(x: 0, y: 0, width: 600, height: 600))
Here is my code in the ARSCNViewDelegate renderer(_:didAdd:for:) method.
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    DispatchQueue.main.async { self.instructionLabel.isHidden = true }
    if let imageAnchor = anchor as? ARImageAnchor {
        // handleFoundImage(imageAnchor, node)
        let size = imageAnchor.referenceImage.physicalSize
        DispatchQueue.main.async { // Without this dispatch, the "UIView setAnimationsEnabled being called from a background thread" error appears.
            self.imageView.animate(withGIFNamed: "tenor.gif") // In the real app I load the GIF from the Documents folder as Data.
        }
        let imgMaterial = SCNMaterial()
        imgMaterial.diffuse.contents = imageView
        let imgPlane = SCNPlane(width: size.width, height: size.height)
        imgPlane.materials = [imgMaterial]
        let imgNode = SCNNode(geometry: imgPlane)
        imgNode.eulerAngles.x = -.pi / 2
        node.addChildNode(imgNode)
        node.opacity = 1
    }
}
After the GIF plays, when I go back to the previous/next/same VC I can't tap any UI elements (buttons, etc.).
In the console I see the following warning, but I haven't found a solution for it. The view animation is in the UIImage+gif Swift file.
UIView setAnimationsEnabled being called from a background thread. Performing any operation from a background thread on UIView or a subclass is not supported and may result in unexpected and insidious behavior
Just run this on a device.
https://drive.google.com/file/d/1FKHPO6SkdOEZ-w_GFnrU5CeeeMQrNT-h/view?usp=sharing
Run this project on a device and scan the dinosaur.png image (added in Xcode); you will see the GIF playing on top of it. Once you go back to the first VC, that's it: the app is frozen, you can't tap any button in the first VC, and you can't start the AR scene again.
I can't figure out why this issue happens after playing the GIF. Can you please check and let me know?
If anything is required, please let me know. Thanks in advance.
GIFs are a bit annoying to handle in iOS, and because of that I use mp4 resources instead, which can be loaded into an AVPlayer directly.
In your case I see that you have the resources in the bundle, so you can convert them to mp4 and use that instead.
To play a video in SceneKit you can use this link: How do I create a looping video material in SceneKit on iOS in Swift 3?
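For example, something along these lines (a sketch only: the mp4 file name, the SpriteKit scene size, and the helper name are placeholders, and the looping observer is just one way to restart the clip):

import ARKit
import SceneKit
import SpriteKit
import AVFoundation

// Sketch: build a video-textured plane sized to the detected image.
func makeVideoNode(for imageAnchor: ARImageAnchor) -> SCNNode? {
    guard let url = Bundle.main.url(forResource: "tenor", withExtension: "mp4") else { return nil }
    let player = AVPlayer(url: url)

    // Render the video into an SKScene so it can be used as SceneKit material contents.
    let videoScene = SKScene(size: CGSize(width: 640, height: 360)) // match the video's resolution
    let videoNode = SKVideoNode(avPlayer: player)
    videoNode.position = CGPoint(x: videoScene.size.width / 2, y: videoScene.size.height / 2)
    videoNode.size = videoScene.size
    videoNode.yScale = -1 // SpriteKit's y-axis is flipped relative to the material
    videoScene.addChild(videoNode)

    // Size the plane to the detected image and lay it flat on the anchor.
    let size = imageAnchor.referenceImage.physicalSize
    let plane = SCNPlane(width: size.width, height: size.height)
    plane.firstMaterial?.diffuse.contents = videoScene
    let planeNode = SCNNode(geometry: plane)
    planeNode.eulerAngles.x = -.pi / 2

    // Loop the clip; in real code keep the observer token so you can remove it later.
    _ = NotificationCenter.default.addObserver(forName: .AVPlayerItemDidPlayToEndTime,
                                               object: player.currentItem,
                                               queue: .main) { _ in
        player.seek(to: .zero)
        player.play()
    }
    player.play()
    return planeNode
}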
This is a long question so I wanted to put a TL;DR on top:
I want to track QR codes via one of two methods: image tracking by cropping them upon detection, or placing anchors with raycasting. Both of these methods fail when the phone is in portrait mode. The camera source is an ARSession; SceneKit and RealityKit are not used, only ARKit. What should I do?
I am currently working on an application in Swift in which I try to render some stuff on a server, transmit the video to the iPhone, and display it on screen using an MTKView. I only needed a custom Metal shader to apply some complex calculations to the received frames, so I did not use SceneKit or RealityKit. I only have an ARSession from ARKit and a Metal view here, and up to this point everything works fine.
I am able to do image tracking at this point. However, I want to apply this behaviour to QR codes. What I want is to detect a QR code (multiple if possible) and then track it just like an image. Since I don't have the QR code as an ARReferenceImage beforehand, as in normal image tracking, I was left with two options:
Option 1: Using raycast(_:) on ARSession
This is probably the right way to do it. However, for this I need to activate both plane detection options on the session, which then creates many anchors, and managing them alongside image tracking becomes harder. This is not the actual problem, though. The actual problem is that when the phone is in landscape mode, raycasting works as intended. When the phone goes into portrait mode, even if I pass the frame in the correct orientation, it misses everything and the raycast results come back empty. I am not using hitTest(_:) because it is deprecated.
I want to explain the "correct orientation" thing here before going into the second option. The ARSession is capturing frames and I am able to check each frame through the session's didUpdate delegate function. When I read the pixel buffer out of the frame using frame.capturedImage and turn it into a CIImage, the image is always in landscape mode (width > height), regardless of whether the phone is in portrait mode or not. So whenever I want to pass this image, I use oriented(.right) for portrait and oriented(.up) for landscape. I got that idea from another question about QR bounding boxes, and so far it is the best option (but not good enough). I just want to note that when I tried raycasting, I tried it with the image size, not the screen size (screen size = my Metal view size, because it is fullscreen), since the image is actually larger than the screen. I am able to see this if I put a breakpoint and Quick Look the CIImage created from the current camera frame.
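For reference, the kind of raycast I mean is roughly this (a sketch, not my actual code; as far as I understand, frame.raycastQuery(from:allowing:alignment:) expects the point in the captured image's normalized, landscape-oriented space, which is exactly where my orientation confusion comes in):

import ARKit

// Sketch only: `normalizedPoint` is assumed to already be in the captured image's
// normalized coordinate space ((0,0) top-left, (1,1) bottom-right of the landscape buffer).
func raycastResult(from normalizedPoint: CGPoint,
                   in frame: ARFrame,
                   session: ARSession) -> ARRaycastResult? {
    let query = frame.raycastQuery(from: normalizedPoint,
                                   allowing: .estimatedPlane,
                                   alignment: .any)
    return session.raycast(query).first
}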
Option 2: Cropping the QR and treating it as image tracking
This is another approach, which I am currently working on. The algorithm is simple: check every frame with Vision. If QR codes are detected, read their data first. If that data matches an existing QR, re-read it only if the cropped QR is larger than the existing one; otherwise do nothing. Then use this cropped QR image to track the QR as an image. At this point we would already have the data, so no problems there.
However, I have tried many times to do the proper transformation explained here in the answer. Again, I think I am able to transform the normalized bounding box into a real rect which can correctly crop the image. Yet, as with raycasting, it works perfectly only when the phone is in landscape orientation. In portrait it works well enough ONLY IF the phone is really close to the QR code and the code is centered on the screen.
For related code, I have this in my view controller:
private var ciContext: CIContext = CIContext.init(options: nil)
private var sequenceHandler: VNImageRequestHandler?
And then I have this code to extract QR codes from a CIImage:
func extractQrCode(image: CIImage) -> [VNBarcodeObservation]? {
    self.sequenceHandler = VNImageRequestHandler(ciImage: image)
    let barcodeRequest = VNDetectBarcodesRequest()
    barcodeRequest.symbologies = [.QR]
    try? self.sequenceHandler?.perform([barcodeRequest])
    guard let results = barcodeRequest.results else {
        return nil
    }
    return results
}
And this is the delegate method that checks and operates on every frame (the code is currently for Option 2):
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    let rotImg = self.renderer?.getInterfaceOrientation() == .portrait
        ? CIImage(cvPixelBuffer: frame.capturedImage).oriented(.right)
        : CIImage(cvPixelBuffer: frame.capturedImage)
    if let barcodes = self.extractQrCode(image: rotImg) {
        for barcode in barcodes {
            guard let payload = barcode.payloadStringValue else { continue }
            var rect = CGRect()
            rect = VNImageRectForNormalizedRect(barcode.boundingBox.botToTop(), Int(rotImg.extent.width), Int(rotImg.extent.height))
            let existingQR = TrackedImagesManager.imagesToTrack.filter { $0.isQR && $0.QRData == payload }.first
            if ((rect.size.width < 800 || rect.size.height < 800 || abs(rect.size.height - rect.size.width) > 32) && existingQR == nil) {
                DispatchQueue.main.async {
                    self.showToastMessage(message: "Please get closer to the QR code and try centering it on your screen.", font: UIFont.systemFont(ofSize: 18), duration: 3)
                }
                continue
            } else if (existingQR != nil) {
                if (rect.width > existingQR?.originalImage?.size.width ?? 999) {
                    let croppedImg = rotImg.cropped(to: rect)
                    let croppedCgImage = self.ciContext.createCGImage(croppedImg, from: croppedImg.extent)!
                    let trackImg = UIImage(cgImage: croppedCgImage)
                    existingQR?.originalImage = trackImg
                    existingQR?.image = ARReferenceImage(croppedCgImage, orientation: .up, physicalWidth: 0.1)
                } else {
                    continue
                }
            } else if rect.width != 0 {
                let croppedImg = rotImg.cropped(to: rect)
                let croppedCgImage = self.ciContext.createCGImage(croppedImg, from: croppedImg.extent)!
                let trackImg = UIImage(cgImage: croppedCgImage)
                TrackedImagesManager.imagesToTrack.append(TrackedImage(id: 9, type: 1, image: ARReferenceImage(croppedCgImage, orientation: .up, physicalWidth: 0.1), originalImage: trackImg, isQR: true, QRData: payload))
                print("qr norm rect: \(barcode.boundingBox) \n qr rect: \(rect) \nqr data: \(payload) \nqr hittestres: ")
            }
        }
    }
}
Finally, for the transformation, I have this extension (tried various ways, this is the best so far):
extension CGRect {
    func botToTop() -> CGRect {
        let transform = CGAffineTransform(scaleX: 1, y: -1).translatedBy(x: 0, y: -1)
        return self.applying(transform)
    }
}
So for both options I need some advice to make things work correctly. The Android side of the same feature is implemented as in Option 2, but Android returns a nicely cropped QR code upon detection; we don't have that on iOS. What do I do now?
I'm adding an anchor to my sceneView in the world origin position:
let configuration = ARWorldTrackingConfiguration()
sceneView.session.run(configuration)
sceneView.session.add(anchor: ARAnchor(name: "world origin", transform: matrix_identity_float4x4))
sceneView.presentScene(SKScene())
I then choose to show an SKLabelNode at my anchor position:
func view(_ view: ARSKView, nodeFor anchor: ARAnchor) -> SKNode? {
    // Create and configure a node for the anchor added to the view's session.
    let labelNode = SKLabelNode(text: "👾")
    labelNode.horizontalAlignmentMode = .center
    labelNode.verticalAlignmentMode = .center
    return labelNode
}
When I run the app and move the camera around, I notice that I either don't see the node at all (I just see the normal camera feed), or the entire screen is purple (as though the label node's font size is huge).
I tried adding labelNode.fontSize = 5, but that still has the same problem.
How do I get nodes to display in my SpriteKit scene properly?
When I compare my code with Apple's default SpriteKit + ARKit sample app, the main difference is the line:
sceneView.presentScene(SKScene())
In their sample code, they use:
// Load the SKScene from 'Scene.sks'
if let scene = SKScene(fileNamed: "Scene") {
    sceneView.presentScene(scene)
}
It appears that Scene.sks has a size of 750 x 1334 and is therefore equivalent to:
sceneView.presentScene(SKScene(size: CGSize(width: 750, height: 1334)))
When I update the code to the last line (i.e. give the SKScene a size), everything displays properly. However, I don't think hardcoding the width and height is the right way to do things. I'll ask a separate question about that.
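One thing I might try instead (just a guess on my part, not something from Apple's sample) is deriving the size from the view rather than hardcoding it:

// Size the scene from the ARSKView itself rather than hardcoding 750 x 1334;
// .resizeFill keeps the scene matched to the view if its bounds change.
let scene = SKScene(size: sceneView.bounds.size)
scene.scaleMode = .resizeFill
sceneView.presentScene(scene)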
Is there a way to display camera images without using AVCaptureVideoPreviewLayer?
I want to do a screen capture, but I cannot do it with the preview layer.
session = AVCaptureSession()
camera = AVCaptureDevice.default(
    AVCaptureDevice.DeviceType.builtInWideAngleCamera,
    for: AVMediaType.video,
    position: .front) // position: .front
do {
    input = try AVCaptureDeviceInput(device: camera)
} catch let error as NSError {
    print(error)
}
if (session.canAddInput(input)) {
    session.addInput(input)
}
let previewLayer = AVCaptureVideoPreviewLayer(session: session)
cameraView.backgroundColor = UIColor.red
previewLayer.frame = cameraView.bounds
previewLayer.videoGravity = AVLayerVideoGravity.resizeAspect
cameraView.layer.addSublayer(previewLayer)
session.startRunning()
I am currently trying to broadcast a screen capture that composites the camera image with some UIViews. However, if I use AVCaptureVideoPreviewLayer, the preview layer does not appear in the screen capture and the camera image is not shown. Therefore, I want to display the camera image in a way that can be captured.
Generally, views that are rendered directly on the GPU may not be redrawn on the CPU. This includes things like OpenGL content or these preview layers.
The "screen capture" redraws the screen in a new context on the CPU, which obviously ignores the GPU part.
You should try playing around with adding some outputs to the session, which will give you images, or rather CMSampleBuffer shots, which may be used to generate the image.
There are plenty of ways of doing this, but you will most likely need to go a step lower. You can add an output to your session to receive samples directly. Doing this takes a bit of code, so please refer to some other posts like this one. The point is that you will have a didOutputSampleBuffer method which will feed you CMSampleBuffer objects that may be used to construct pretty much anything in terms of images.
Now, in your case I assume you will be aiming to get a UIImage from the sample buffer. To do so you may again need a bit of code, so refer to some other post like this one.
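As a rough illustration of that approach (a sketch rather than a drop-in solution; names like FrameGrabber and latestImage are mine, not from your code):

import AVFoundation
import UIKit

// Sketch: receive sample buffers from the capture session and convert them to UIImages.
final class FrameGrabber: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {

    private(set) var latestImage: UIImage?
    private let ciContext = CIContext()

    func attach(to session: AVCaptureSession) {
        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "video.frames"))
        if session.canAddOutput(output) {
            session.addOutput(output)
        }
    }

    // Called for every captured frame, on the background queue above;
    // dispatch to the main queue before touching any UI with the result.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else { return }
        latestImage = UIImage(cgImage: cgImage)
    }
}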
To put it all together, you could as well simply use an image view and drop the preview layer. As you get each sample buffer you can create an image and update your image view. I am not sure what the performance of this would be, but I discourage you from doing it. If the image itself is enough for your case, then you don't need a view snapshot at all.
But IF you do:
At snapshot time, create this image. Then overlay your preview layer with an image view showing the generated image (add it as a subview). Then create the snapshot and remove the image view, all in a single chunk:
func snapshot() -> UIImage? {
    let imageView = UIImageView(frame: self.previewPanelView.bounds)
    imageView.image = self.imageFromLatestSampleBuffer()
    imageView.contentMode = .scaleAspectFill // Not sure
    self.previewPanelView.addSubview(imageView)
    let image = createSnapshot()
    imageView.removeFromSuperview()
    return image
}
Let us know how things turn out, what you tried, and what did or did not work.
I am using a very simple method to set up an SKVideoNode and place it inside an SCNNode via the geometry's diffuse contents. When I do this, the only time the texture updates and shows the video properly is when the camera or the node is moving. When both are stationary, the texture never updates (as if the video isn't even playing), but the sound does play.
Obviously it's still playing the video, but not rendering properly. I have no idea why.
func setupAndPlay() {
    // create the asset & player and grab the dimensions
    let path = NSBundle.mainBundle().pathForResource("indycar", ofType: "m4v")!
    let asset = AVAsset(URL: NSURL(fileURLWithPath: path))
    let size = asset.tracksWithMediaType(AVMediaTypeVideo)[0].naturalSize
    let player = AVPlayer(playerItem: AVPlayerItem(asset: asset))
    // set up the SKVideoNode
    let videoNode = SKVideoNode(AVPlayer: player)
    videoNode.size = size
    videoNode.position = CGPoint(x: size.width * 0.5, y: size.height * 0.5)
    // set up the SKScene that will house the video node
    let videoScene = SKScene(size: size)
    videoScene.addChild(videoNode)
    // create a wrapper ** note that the geometry doesn't matter; it happens with spheres and planes
    let videoWrapperNode = SCNNode(geometry: SCNSphere(radius: 10))
    videoWrapperNode.position = SCNVector3(x: 0, y: 0, z: 0)
    // set the material's diffuse contents to be the video scene we created above
    videoWrapperNode.geometry?.firstMaterial?.diffuse.contents = videoScene
    videoWrapperNode.geometry?.firstMaterial?.doubleSided = true
    // reorient the video properly
    videoWrapperNode.scale.y = -1
    videoWrapperNode.scale.z = -1
    // add it to our scene
    scene.rootNode.addChildNode(videoWrapperNode)
    // if I uncomment this, the video plays correctly; if I comment it out, the texture on videoWrapperNode only
    // gets updated when I'm moving the camera around. The sound always plays properly.
    videoWrapperNode.runAction( SCNAction.repeatActionForever( SCNAction.rotateByAngle(CGFloat(M_PI * 2.0), aroundAxis: SCNVector3(x: 0, y: 1, z: 0), duration: 15.0 )))
    videoNode.play()
}
Has anyone come across anything similar? Any help would be appreciated.
Sounds like you need to set .playing = true on your SCNView.
From the docs:
If the value of this property is NO (the default), SceneKit does not increment the scene time, so animations associated with the scene do not play. Change this property's value to YES to start animating the scene.
I also found that setting rendersContinuously to true on the renderer (e.g. scnView) will make the video play.
If you log the SCNSceneRendererDelegate's update calls, you can see when frames are drawn.
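In current Swift, those two suggestions look roughly like this (property names as I recall them; older Swift used .playing):

// Either (or both) of these should keep the SceneKit view drawing,
// so the SKScene-backed video texture actually refreshes:
scnView.isPlaying = true            // advance scene time so animations/video play
scnView.rendersContinuously = true  // redraw every frame even when nothing moves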
I have been unsuccessful in getting an SKVideoNode to display video in Xcode 7.0 or 7.1. If I run my code or other samples on a hardware device like an iPad or iPhone, the video displays fine, but in the simulator only the audio plays. The same code works fine in Xcode 6.4's simulator.
I have the CatNap example from Ray Wenderlich's iOS & tvOS Games by Tutorials (iOS 9) and it does NOT run in the Simulator 9.1 that comes with Xcode 7.1. I believe the simulator is broken and have filed a bug with Apple, but have had no response in a month.
Does anyone have sample code for an SKVideoNode that works in the simulator in Xcode 7.1?
I'm new to SceneKit... trying to get some basic stuff working without much success so far. For some reason, when I try to apply a PNG texture to an SCNBox I end up with nothing but blackness. Here is the simple code snippet I have in viewDidLoad:
let sceneView = (view as SCNView)
let scene = SCNScene()
let boxGeometry = SCNBox(width: 10.0, height: 10.0, length: 10.0, chamferRadius: 1.0)
let mat = SCNMaterial()
mat.locksAmbientWithDiffuse = true
mat.diffuse.contents = ["sofb.png","sofb.png","sofb.png","sofb.png","sofb.png", "sofb.png"]
mat.specular.contents = UIColor.whiteColor()
boxGeometry.firstMaterial = mat
let boxNode = SCNNode(geometry: boxGeometry)
scene.rootNode.addChildNode(boxNode)
sceneView.scene = scene
sceneView.autoenablesDefaultLighting = true
sceneView.allowsCameraControl = true
What it ends up looking like is a white light source reflecting off of a black cube against a black background. What am I missing? I appreciate all responses.
If you had different images, you would build a different SCNMaterial object from each like so:
let material_L = SCNMaterial()
material_L.diffuse.contents = UIImage(named: "CapL")
Here, CapL refers to a .png file stored in the project's Assets.xcassets folder. After building 6 such objects, you hand them to the box geometry as follows:
boxGeometry.materials = [material_L, material_green_r, material_K, material_purple_r, material_g, material_j]
Note that "boxGeometry" would be better named "box" or "cube". Also, it would be a good idea to do that work in a new class in your project, constructed like:
class BoxScene: SCNScene {
You would then use it with modern Swift in your view controller's viewDidLoad method like this:
let scnView = self.view as! SCNView
scnView.scene = BoxScene()
For that let statement to work, go to Main.storyboard -> View Controller Scene -> View Controller -> View -> Identity inspector, then under Custom Class change it from UIView to SCNView. Otherwise, you receive an error message like:
Could not cast value of type 'UIView' to 'SCNView'
Passing an array of images (to create a cube map) is only supported by the reflective material property and the scene's background.
In your case, all the images are the same, so you would only have to assign the image (not an array) to the contents to have it appear on all sides of the box.
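For example, something like this (a sketch, assuming sofb.png is in the app bundle or asset catalog):

// Assign a single UIImage; SceneKit applies it to every face of the box.
mat.diffuse.contents = UIImage(named: "sofb.png")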