ARKit takes 30 seconds to start tracking?

OK, I'm exploring ARKit with Swift on iOS 11, and I have made a very simple app that just adds nodes at the point where the user taps:
override func viewDidLoad() {
    super.viewDidLoad()
    // Set the view's delegate
    sceneView.delegate = self
    // Show statistics such as fps and timing information
    sceneView.showsStatistics = true
    // Create a new scene and attach it to the view
    let actualScene = SCNScene(named: "art.scnassets/ship.scn")!
    sceneView.scene = actualScene
}

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let touch = touches.first else { return }
    let results = sceneView.hitTest(touch.location(in: sceneView), types: [ARHitTestResult.ResultType.featurePoint])
    guard let hitFeature = results.last else { return }
    let hitTransform = SCNMatrix4(hitFeature.worldTransform) // on betas after beta 1, SCNMatrix4 can be built directly from hitFeature.worldTransform
    let hitPosition = SCNVector3Make(hitTransform.m41,
                                     hitTransform.m42,
                                     hitTransform.m43)
    createBall(hitPosition: hitPosition)
}

func createBall(hitPosition: SCNVector3) {
    let newBall = SCNSphere(radius: 0.01)
    let newBallNode = SCNNode(geometry: newBall)
    newBallNode.position = hitPosition
    self.sceneView.scene.rootNode.addChildNode(newBallNode)
}
And this works. My issue is that when the app first runs, there are 30-60 seconds of just panning the camera around during which tapping does nothing.
It seems like ARKit is "loading": when I tap during that first minute, no nodes appear at the tapped position. Nothing happens for that first minute.
Why is this? Is there a way to expedite this loading process? What is happening here?

In the overridden viewWillAppear, call this function:
func setUpSceneView() {
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = .horizontal
    sceneView.session.run(configuration)
    sceneView.delegate = self
}
and delete everything in viewDidLoad, adding what you need in viewWillAppear instead.
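As for the delay itself: world tracking has to initialize. ARKit needs a stream of camera frames and some device motion before it has accumulated enough feature points for a featurePoint hit test to return anything, so taps do nothing until tracking reaches its normal state. You can watch the transition by implementing the session-observer callback that ARSCNView forwards to its delegate; a minimal sketch:
func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
    switch camera.trackingState {
    case .notAvailable:
        print("Tracking not available")
    case .limited(let reason):
        // .initializing is the reason you see during the first seconds;
        // featurePoint hit tests usually return nothing in this state
        print("Tracking limited: \(reason)")
    case .normal:
        print("Tracking normal - hit tests should start succeeding")
    }
}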

Related

CAMetalLayer.nextDrawable takes much time, even more than 8ms

CAMetalLayer.nextDrawable() should not be a very time-consuming method, but sometimes it takes a long time, even more than 8 ms. To reproduce:
1. Copy the code below.
2. Follow the comment guide in viewDidLoad.
3. Watch the log output.
import UIKit

class TestVC: UIViewController {
    let metalLayer: CAMetalLayer = CAMetalLayer()
    var displayLink: CADisplayLink!
    var thread: Thread!

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        self.displayLink.isPaused = true
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        self.displayLink.isPaused = false
    }

    override func viewDidLayoutSubviews() {
        super.viewDidLayoutSubviews()
        print("ahhhaa")
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        self.view.backgroundColor = .red
        let scale = UIScreen.main.scale
        metalLayer.frame = self.view.bounds
        metalLayer.drawableSize = CGSize(width: self.view.frame.width * scale, height: self.view.frame.height * scale)
        self.view.layer.addSublayer(metalLayer)
        self.addDisplayLinkInUITaskRunner()
        // Wait for the screen to finish loading, then you can:
        // 1. Swipe down Control Center: you will see a lot of log output in the
        //    console, and it will persist.
        // 2. Move the app to the background and then back to the foreground: you
        //    will also see a lot of log output, persisting for a long time.
        // 3. Do nothing at all: you may still see the log print constantly.
        // When you tap the screen it may stop, and after a while the log may appear again.
    }

    func addDisplayLinkInUITaskRunner() {
        self.thread = Thread(block: {
            // Keep the run loop alive so the display link keeps firing on this thread
            RunLoop.current.add(NSMachPort(), forMode: .common)
            RunLoop.current.run()
        })
        self.thread.name = ""
        self.thread.start()
        self.perform(#selector(addDisplayLink), on: thread, with: nil, waitUntilDone: false)
    }

    @objc func addDisplayLink() {
        self.displayLink = CADisplayLink(target: self, selector: #selector(onDisplayLink))
        if #available(iOS 15.0, *) {
            self.displayLink.preferredFrameRateRange = .init(minimum: 60, maximum: 120, preferred: 120)
        } else {
            self.displayLink.preferredFramesPerSecond = 120
        }
        self.displayLink.add(to: .current, forMode: .common)
    }

    @objc private func onDisplayLink() {
        let startTime = CACurrentMediaTime()
        let frameDrawable = metalLayer.nextDrawable()!
        let timeUsed = CACurrentMediaTime() - startTime
        // If getting the next drawable took more than 5 ms,
        // print it to show that this method is taking a long time.
        if (timeUsed > 0.005) {
            print("CAMetalLayer.nextDrawable took much time!! -> \(String(format: "%.2f", timeUsed * 1000)) ms")
        }
        frameDrawable.present()
    }
}
If you have displaySyncEnabled set to true (the default), the layer will wait for the next vsync to display the drawable. This means you can very quickly run out of drawables, so nextDrawable will wait until one becomes available (or up to 1 second).
In other words, since you already have drawables queued to be presented, you can't get another one until one actually becomes available.
If you indeed want to present them as fast as possible, set displaySyncEnabled to false. However, the current behavior may be just what you want, since you rarely gain anything from displaying frames faster than the display's refresh rate.
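CAMetalLayer also exposes the drawable pool directly. A minimal sketch of the related knobs, reusing the metalLayer and per-frame callback from the snippet above (the values shown are the documented defaults, restated for illustration):
// Knobs on the layer:
metalLayer.maximumDrawableCount = 3         // pool size; only 2 or 3 are allowed
metalLayer.allowsNextDrawableTimeout = true // nextDrawable() returns nil after ~1 s
                                            // instead of blocking forever

// In the per-frame callback, release the drawable promptly so it can
// return to the pool instead of waiting for an autorelease-pool drain:
autoreleasepool {
    guard let drawable = metalLayer.nextDrawable() else { return }
    // ... encode and commit rendering work targeting drawable.texture ...
    drawable.present()
}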

How to track the time of a finger on the screen

I'm here because, after weeks of trying different solutions without arriving at one that works in my app, I am exhausted.
I need to track how long a finger stays on the screen, and if a finger is on the screen longer than 1 second, I need to call a function. But if the user is performing a gesture like a pan or a pinch, the function must not be called.
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    super.touchesBegan(touches, with: event)
    let touch = touches.first
    guard let touchLocation = touch?.location(in: self) else { return }
    let tile: Tile?
    switch atPoint(touchLocation) {
    case let targetNode as Tile:
        tile = targetNode
    case let targetNode as SKLabelNode:
        tile = targetNode.parent as? Tile
    case let targetNode as SKSpriteNode:
        tile = targetNode.parent as? Tile
    default:
        return
    }
    guard let tile = tile else { return }
    paint(tile: tile)
}

override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
    super.touchesEnded(touches, with: event)
}

private func paint(tile: Tile) {
    let col = tile.column
    let row = tile.row
    let colorPixel = gridAT(column: col, row: row)
    guard tile.isTouchable, !selectedColor.isEmpty else { return }
    if tile.mainColor == selectedColor {
        tile.background.alpha = 1
        tile.background.color = UIColor(hexString: selectedColor)
        tile.text.text = ""
        tile.backgroundStroke.color = UIColor(hexString: selectedColor)
        uniqueColorsCount[selectedColor]?.currentNumber += 1
        didPaintCorrect(uniqueColorsCount[selectedColor]?.progress ?? 0)
        colorPixel?.currentState = .filledCorrectly
        tile.isTouchable = false
    } else {
        tile.background.color = UIColor(hexString: selectedColor)
        tile.background.alpha = 0.5
        colorPixel?.currentState = .filledIncorrectly
    }
}
Here is a small example I created with my suggestion of using UILongPressGestureRecognizer, as it seems easier to manage for your situation than processing touchesBegan and touchesEnded yourself.
You can give it the minimum time the user needs to press, so it seems perfect for your requirement.
You can read more about it in Apple's documentation for UILongPressGestureRecognizer.
First I set up a basic UIView inside my UIViewController and add a long-press gesture recognizer:
var longTapView: UIView!

override func viewDidLoad() {
    super.viewDidLoad()
    // Create a basic UIView
    longTapView = UIView(frame: CGRect(x: 15, y: 30, width: 300, height: 300))
    longTapView.backgroundColor = .blue
    view.addSubview(longTapView)
    // Initialize UILongPressGestureRecognizer
    let longTapGestureRecognizer = UILongPressGestureRecognizer(target: self,
                                                                action: #selector(self.handleLongTap(_:)))
    // Configure gesture recognizer to trigger its action after 2 seconds
    longTapGestureRecognizer.minimumPressDuration = 2
    // Add gesture recognizer to the view controller's view
    view.addGestureRecognizer(longTapGestureRecognizer)
}
This gives me a blue view near the top-left of the screen.
Next, to get the location of the tap, the main question to ask yourself is: where did the user tap, and in relation to which view?
For example, let's say the user taps near the top-left corner of the blue view.
Now we can ask what the location of the tap is in relation to:
The blue UIView - it is approx x = 0, y = 0
The ViewController's view - it is approx x = 15, y = 30
The window - it is approx x = 15, y = 120
So based on your application, you need to decide in relation to which view you want the touch.
So here is how you can get the touch based on the view:
@objc
private func handleLongTap(_ sender: UILongPressGestureRecognizer) {
    let tapLocationInLongTapView = sender.location(in: longTapView)
    let tapLocationInViewController = sender.location(in: view)
    let tapLocationInWindow = sender.location(in: view.window)
    print("Tap point in blue view: \(tapLocationInLongTapView)")
    print("Tap point in view controller: \(tapLocationInViewController)")
    print("Tap point in window: \(tapLocationInWindow)")
    // do your work and call your function here
}
For the same touch as above, I get the following output:
Tap point in blue view: (6.5, 4.5)
Tap point in view controller: (21.5, 34.5)
Tap point in window: (21.5, 98.5)
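To match the original requirement (fire after 1 second, but not during a pan or pinch), the recognizer's own configuration usually suffices. A sketch, where the property values are assumptions to tune:
let longPress = UILongPressGestureRecognizer(target: self,
                                             action: #selector(handleLongTap(_:)))
longPress.minimumPressDuration = 1    // the 1-second requirement from the question
longPress.allowableMovement = 10      // points; if the finger moves farther before
                                      // recognition, the recognizer fails, so a pan
                                      // won't trigger it
longPress.numberOfTouchesRequired = 1 // a pinch needs two fingers, so it won't match
view.addGestureRecognizer(longPress)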

React to the press of a button on the Apple Watch in a SpriteKit iOS app

Here's my question: I would like an iOS game to react to a button pressed on the Apple Watch side. But I'm totally new at this and lost.
I think the connectivity between the Apple Watch and the iOS app is OK, but I don't know how to catch the event in the game controller, and even less how to pass it on to the game scene.
Everything my research has turned up shows characters moving by themselves or on a tap event, but nothing about how to bind movement to a button.
So I'm asking for some help with this; if you have any examples, they are welcome.
Here is my viewDidLoad method, which loads the GameScene:
override func viewDidLoad() {
    super.viewDidLoad()
    guard WCSession.isSupported() else {
        return
    }
    let session = WCSession.default
    session.delegate = self
    session.activate()
    if let view = self.view as! SKView? {
        // Load the SKScene from 'GameScene.sks'
        if let scene = SKScene(fileNamed: "GameScene") {
            // Set the scale mode to scale to fit the window
            scene.scaleMode = .aspectFill
            // Present the scene
            view.presentScene(scene)
        }
        view.ignoresSiblingOrder = true
        view.showsFPS = true
        view.showsNodeCount = true
    }
}
And my GameViewController delegate extension, which I think is supposed to link the watch's button action to the movement logic in GameScene (but maybe I'm wrong on that point):
extension GameViewController: WCSessionDelegate {
    func session(_ session: WCSession,
                 didReceiveApplicationContext applicationContext: [String: Any]) {
        // Truly don't know what to put in here
    }
}
And finally my GameScene 'touchesBegan' method, which contains the movement logic:
var locx = CGFloat()
var locy = CGFloat()
var character = SKSpriteNode()

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    if locx > 0 {
        print(locx)
        character.position.x += 1
    } else if locx < 0 {
        character.position.x -= 1
    }
    if locy > 0 {
        print(locy)
        character.position.y += 1
    } else if locy < 0 {
        character.position.y -= 1
    }
}
I'm pretty sure there are some misunderstandings on my side, but I can't see where.
Thanks for any help you can provide.
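No answer was posted for this one, but a common pattern is to keep a reference to the presented scene and forward the received context to it. A minimal sketch of how the empty delegate stub above could be filled in, assuming the watch app sends "locx"/"locy" values in its application context; the key names, the gameScene property, and moveCharacter(locx:locy:) are all hypothetical:
// In GameViewController, store the scene when presenting it in viewDidLoad:
//     self.gameScene = scene
func session(_ session: WCSession,
             didReceiveApplicationContext applicationContext: [String: Any]) {
    // Hypothetical keys - use whatever the watch app actually sends.
    guard let locx = applicationContext["locx"] as? CGFloat,
          let locy = applicationContext["locy"] as? CGFloat else { return }
    // WCSession calls its delegate on a background queue; hop to the main
    // thread before touching SpriteKit state.
    DispatchQueue.main.async {
        self.gameScene?.moveCharacter(locx: locx, locy: locy)
    }
}

// In GameScene, reuse the movement logic without waiting for touchesBegan:
func moveCharacter(locx: CGFloat, locy: CGFloat) {
    if locx > 0 { character.position.x += 1 } else if locx < 0 { character.position.x -= 1 }
    if locy > 0 { character.position.y += 1 } else if locy < 0 { character.position.y -= 1 }
}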

How to change camera position by dragging finger on the screen

I'm trying to learn ARKit and make a small demo app to draw in 3D.
The following is the code I wrote, and so far there are no problems:
import UIKit
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet weak var sceneView: ARSCNView!
    @IBOutlet weak var DRAW: UIButton!
    @IBOutlet weak var DEL: UIButton!
    let config = ARWorldTrackingConfiguration()

    override func viewDidLoad() {
        super.viewDidLoad()
        self.sceneView.session.run(config)
        self.sceneView.delegate = self
    }

    func renderer(_ renderer: SCNSceneRenderer, willRenderScene scene: SCNScene, atTime time: TimeInterval) {
        guard let pointOfView = sceneView.pointOfView else { return }
        let transform = pointOfView.transform
        let cameraOrientation = SCNVector3(-transform.m31, -transform.m32, -transform.m33)
        let cameraLocation = SCNVector3(transform.m41, transform.m42, transform.m43)
        let cameraCurrentPosition = cameraOrientation + cameraLocation
        DispatchQueue.main.async {
            if self.DRAW.isTouchInside {
                let sphereNode = SCNNode(geometry: SCNSphere(radius: 0.02))
                sphereNode.position = cameraCurrentPosition
                self.sceneView.scene.rootNode.addChildNode(sphereNode)
                sphereNode.geometry?.firstMaterial?.diffuse.contents = UIColor.red
                print("RED Button is Pressed")
            } else if self.DEL.isTouchInside {
                self.sceneView.scene.rootNode.enumerateChildNodes { (node, stop) in
                    node.removeFromParentNode()
                }
            } else {
                let pointer = SCNNode(geometry: SCNSphere(radius: 0.01))
                pointer.name = "pointer"
                pointer.position = cameraCurrentPosition
                self.sceneView.scene.rootNode.enumerateChildNodes { (node, _) in
                    if node.name == "pointer" {
                        node.removeFromParentNode()
                    }
                }
                self.sceneView.scene.rootNode.addChildNode(pointer)
                pointer.geometry?.firstMaterial?.diffuse.contents = UIColor.purple
            }
        }
    }
}

func +(left: SCNVector3, right: SCNVector3) -> SCNVector3 {
    return SCNVector3Make(left.x + right.x, left.y + right.y, left.z + right.z)
}
As you can see, I set up the scene and configure it:
I create a button to draw when pressed, a pointer (or viewfinder) that sits at the center of the scene, and a button to delete the inserted nodes.
Now I would like to be able to move cameraCurrentPosition to a point other than the center: if possible, I would like to move it with a touch on the screen, taking the position of the finger.
If possible, could someone help me with the code?
Generally speaking, you can't programmatically move the camera within an ARSCNView; the camera transform is the physical position of the device relative to the virtual scene.
That being said, one way you can draw the user's touches to the screen is by using the touchesMoved method within your view controller.
var touchRoots: [SCNNode] = [] // list of root nodes for each set of touches drawn

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    // get the initial touch event
    if let touch = touches.first {
        guard let pointOfView = self.sceneView.pointOfView else { return }
        let transform = pointOfView.transform // transformation matrix
        let orientation = SCNVector3(-transform.m31, -transform.m32, -transform.m33) // camera rotation
        let location = SCNVector3(transform.m41, transform.m42, transform.m43) // location of camera frustum
        let currentPositionOfCamera = orientation + location // center of frustum in world space
        DispatchQueue.main.async {
            let touchRootNode = SCNNode() // create an empty node to serve as our root for the incoming points
            touchRootNode.position = currentPositionOfCamera // place the root node at the center of the camera's frustum
            touchRootNode.scale = SCNVector3(1.25, 1.25, 1.25) // touches projected in Z will appear smaller than expected - increase scale of root node to compensate
            guard let sceneView = self.sceneView else { return }
            sceneView.scene.rootNode.addChildNode(touchRootNode) // add the root node to the scene
            let constraint = SCNLookAtConstraint(target: self.sceneView.pointOfView) // force root node to always face the camera
            constraint.isGimbalLockEnabled = true // enable gimbal locking to avoid issues with rotations from LookAtConstraint
            touchRootNode.constraints = [constraint] // apply LookAtConstraint
            self.touchRoots.append(touchRootNode)
        }
    }
}

override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let touch = touches.first {
        let translation = touch.location(in: self.view)
        let translationFromCenter = CGPoint(x: translation.x - (0.5 * self.view.frame.width), y: translation.y - (0.5 * self.view.frame.height))
        // add nodes using the main thread
        DispatchQueue.main.async {
            guard let touchRootNode = self.touchRoots.last else { return }
            let sphereNode = SCNNode(geometry: SCNSphere(radius: 0.015))
            sphereNode.position = SCNVector3(-1 * Float(translationFromCenter.x / 1000), -1 * Float(translationFromCenter.y / 1000), 0)
            sphereNode.geometry?.firstMaterial?.diffuse.contents = UIColor.white
            touchRootNode.addChildNode(sphereNode) // add point to the active root
        }
    }
}
Note: this solution only handles a single touch, but it is simple enough to extend to multi-touch support, as sketched below.
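One way to do that (a sketch; it relies on UIKit keeping each UITouch instance stable across its began/moved/ended callbacks, so the touch can serve as a dictionary key) is to track one root node per finger instead of a single array, inside the same view controller:
var touchRootsByTouch: [UITouch: SCNNode] = [:] // one root per active finger

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    for touch in touches {
        let root = SCNNode()
        // ... position, scale, and constrain `root` exactly as in touchesBegan above ...
        touchRootsByTouch[touch] = root
    }
}

override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
    for touch in touches {
        guard let root = touchRootsByTouch[touch] else { continue }
        // ... compute translationFromCenter for this touch and attach the sphere to `root` ...
    }
}

override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
    // the stroke is finished; forget the mapping so the dictionary doesn't grow
    for touch in touches { touchRootsByTouch.removeValue(forKey: touch) }
}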

Detect which `View` was clicked on a `Cell`

I have three UIImageViews on a single Cell. When I tap any of the UIImageViews on the cell, I want to detect which one was tapped in onCellSelection, without placing a UITapGestureRecognizer on each UIImageView.
func SocialViewRow(address: SocialMedia) -> ViewRow<SocialMediaViewFile> {
    let viewRow = ViewRow<SocialMediaViewFile>() { (row) in
        row.tag = UUID.init().uuidString
    }
    .cellSetup { (cell, row) in
        // Construct the view
        let bundle = Bundle.main
        let nib = UINib(nibName: "SocialMediaView", bundle: bundle)
        cell.view = nib.instantiate(withOwner: self, options: nil)[0] as? SocialMediaViewFile
        cell.view?.backgroundColor = cell.backgroundColor
        cell.height = { 50 }
        print("LINK \(address.facebook?[0] ?? "")")
        cell.view?.iconOne.tag = 90090
        //self.itemDetails.activeURL = address
        let openFace = UITapGestureRecognizer(target: self, action: #selector(QuickItemDetailVC.openFace))
        let openT = UITapGestureRecognizer(target: self, action: #selector(QuickItemDetailVC.openTwit))
        let you = UITapGestureRecognizer(target: self, action: #selector(QuickItemDetailVC.openYouYub))
        cell.view?.iconOne.addGestureRecognizer(openFace)
        cell.view?.iconTwo.addGestureRecognizer(openT)
        cell.view?.iconThree.addGestureRecognizer(you)
        cell.frame.insetBy(dx: 5.0, dy: 5.0)
        cell.selectionStyle = .none
    }.onCellSelection() { cell, row in
        // example:
        // print("iconTwo was clicked")
    }
    return viewRow
}
Using UITapGestureRecognizer (or UIButton) would be a better approach; these classes are intended for tasks like this.
If you still want a different approach, add this method to your cell subclass (replace imageView1, imageView2, and imageView3 with your own properties):
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let touch = touches.first else { return }
    // frames are in the superview's coordinate space, so measure the touch there
    let point = touch.location(in: imageView1.superview)
    if imageView1.frame.contains(point) {
        // code for 1st image view
    } else if imageView2.frame.contains(point) {
        // code for 2nd image view
    } else if imageView3.frame.contains(point) {
        // code for 3rd image view
    }
}
Docs:
location(in:)
contains(_:)
Override the touchesBegan function. This method is called every time the user touches the screen. Every time it is called, check whether the touches began in the same location as an image.
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    let touch = touches.first!
    let location = touch.location(in: self)
    // check here: include code that compares the location to the images' frames
}
The location will be a CGPoint. You should be able to get the frames of your images and then determine whether the touch began inside them. If you want to track the entire path the user touched, there are ways to do that too, but the beginning touch should be sufficient for what you want.
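A minimal sketch of that comparison, assuming the cell exposes the same imageView1/imageView2/imageView3 properties as the first answer. This variant converts the touch into each image view's own coordinate space, so it works even if the three views sit at different depths of the cell's hierarchy:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let touch = touches.first else { return }
    for (index, imageView) in [imageView1, imageView2, imageView3].enumerated() {
        // location(in:) does the coordinate conversion for us
        let pointInImageView = touch.location(in: imageView)
        if imageView.bounds.contains(pointInImageView) {
            print("image view \(index + 1) was tapped")
            break
        }
    }
}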
