Remove touch gestures (rotation/translation) for iOS AR object - ios

Summary: How do I remove touch gestures on an AR object after adding it?
ARView code (just the relevant bit), where arObject is a model entity with a mesh, a material, and a collision shape:
func updateUIView(_ uiView: ARView, context: Context) {
    arObject = CreateCustomModelEntity()
    uiView.installGestures([.translation, .rotation], for: arObject)
}
The above code adds touch gestures to my arObject and allows it to be rotated and moved across the anchored plane.
However, I want to remove the touch gestures after adding it.
User Flow:
The user would tap a model, move it around, place it where they'd like, and touch the Confirm button. After the Confirm button is touched, the arObject can no longer be moved around.
Looking at the Apple docs, there's an installGestures, but no equivalent removeGestures. Is this even possible?
A few ideas:
1. Completely remove the anchor and recreate it (but then the placement of the object is lost, so this is bad).
2. Override the existing child AR object with a new one that doesn't have the touch gestures installed. I believe this would retain the object placement, but double-creating AR objects isn't ideal unless there's no other solution.
3. Create a temporary AR object with gestures installed and then override it with a new arObject (without touch gestures) after placement has been confirmed. Similar to solution 2.

The installGestures method returns an array of EntityGestureRecognizers and also appends these recognizers to the arView.gestureRecognizers array. If you want to remove the gestures for a given entity, you need to find the corresponding gesture recognizers and remove them from that array.
The code below assigns the translation and rotation gesture recognizers separately to properties, which you then use to find the corresponding index in the arView.gestureRecognizers array.
func updateUIView(_ uiView: ARView, context: Context) {
    arObject = CreateCustomModelEntity()
    // Keep a reference to each recognizer so it can be removed later.
    translationGestureRecognizer = uiView.installGestures([.translation], for: arObject).first
    rotationGestureRecognizer = uiView.installGestures([.rotation], for: arObject).first
}

func removeGesture() {
    // Find the stored recognizer in the view's array and remove it.
    if let recognizer = translationGestureRecognizer,
       let index = arView.gestureRecognizers?.firstIndex(of: recognizer) {
        arView.gestureRecognizers?.remove(at: index)
    }
}
I found that removing the gesture recognizers from the arView is also mandatory when removing an object from the scene; otherwise the object isn't released from memory, which increases the app's footprint.
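Building on that, a minimal cleanup sketch (assuming the arObject and the two recognizer properties from the snippet above; removeFromParent() detaches the entity from its anchor):

func removeObjectAndGestures() {
    // Remove both stored recognizers from the view's array.
    for case let recognizer? in [translationGestureRecognizer, rotationGestureRecognizer] {
        if let index = arView.gestureRecognizers?.firstIndex(of: recognizer) {
            arView.gestureRecognizers?.remove(at: index)
        }
    }
    translationGestureRecognizer = nil
    rotationGestureRecognizer = nil
    // Then detach the entity so it can be deallocated.
    arObject?.removeFromParent()
}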

Related

Disable DoubleTapGesture Zoom out on MKMapView

I have observed that MKMapView zooms out on a double-tap gesture, and I couldn't find any way to disable it. I tried adding my own double-tap gesture recognizer to catch the double-tap action, but the map still zooms out. Any thoughts?
There is no API to disable or change double-tap behavior with MKMapView.
But, if you really want to do so, one approach would be to find and remove the double-tap gesture recognizer from the MKMapView object.
In the project you shared, you could do that in makeUIView in your UIMapView class:
func makeUIView(context: UIViewRepresentableContext<UIMapView>) -> UIViewType {
    self.configureView(mapView, context: context)
    setRegion(to: palce)
    // Find the map's built-in double-tap recognizers and remove them.
    if let v = mapView.subviews.first,
       let ga1 = v.gestureRecognizers {
        let ga2: [UITapGestureRecognizer] = ga1
            .compactMap { $0 as? UITapGestureRecognizer }
            .filter { $0.numberOfTapsRequired == 2 }
        for g in ga2 {
            v.removeGestureRecognizer(g)
        }
    }
    return mapView
}
I wouldn't necessarily suggest that you do so, however.
Apple may change the MKMapView object in the future, which could then break this.
Users tend to prefer that common UI elements behave in expected ways.
Personally, I get rather annoyed when using an app and the developer has changed standard functionality of UI elements. For example, if I see a table view row with a disclosure indicator (the right-arrow / chevron), I expect that tapping the row will "push" to another screen related to that row. I've seen apps that do not follow that pattern, and it just gets confusing.

Detecting when a SKNode is tapped on Apple Watch

I'm writing an app for Apple Watch using SpriteKit, so I don't have access to functions like touchesBegan and I have to use a WKTapGestureRecognizer to detect taps. No big deal, but I'm having issues detecting taps on a node.
In my InterfaceController I have:
@IBAction func handleTap(tapGestureRecognizer: WKTapGestureRecognizer) {
    scene?.didTap(tapGesture: tapGestureRecognizer)
}
And in my Scene file I have:
func didTap(tapGesture: WKTapGestureRecognizer) {
    let position = tapGesture.locationInObject()
    let hitNodes = self.nodes(at: position)
    if hitNodes.contains(labelNode) {
        labelNode.text = "tapped!"
    }
}
Problem is the tap gesture recognizer gives me the absolute coordinates of the touch point (for example (11.0, 5.0)), while my node is positioned relative to the center of the screen (so its position is (-0.99, -11.29) even though it is at the center of the screen); therefore the tap hits the node not when I actually tap it, but when I tap the top left of the screen. I searched everywhere and it looks like this is the way to do it, yet I don't find people having the same issues. The node has been added via the editor. What am I doing wrong?
So you have the right idea. You are getting this wrong because hitNodes is an array of SKNodes. Those are newly created. So when you use hitNodes.contains the addresses of the labelNode and the address of the newly created SKNode that is being compared would be completely different. Therefore it would never be tapped.
Here's what I would do. This would be in my Scene File. Your InterfaceController class is correct.
func didTap(tapGesture: WKTapGestureRecognizer) {
    let position = tapGesture.locationInObject()
    if labelNode.contains(position) {
        labelNode.text = "tapped!"
    }
}
OR another way would be this. I like this way because you only have one function, which would be in the WKInterfaceController, and you would need no functions in your Scene file.
@IBAction func tapOnScreenAct(_ sender: WKGestureRecognizer) {
    if scene.labelNode.contains(sender.locationInObject()) {
        scene.labelNode.text = "tapped!"
    }
}
Either way, both should work. Let me know if you have any more questions or clarifications.
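One more note, in case the coordinate mismatch described in the question persists: locationInObject() is in interface-object coordinates with a top-left origin, while the scene in the question is centered. A hedged conversion sketch for the Scene file, assuming the scene fills the interface object and uses an anchorPoint of (0.5, 0.5):

func didTap(tapGesture: WKTapGestureRecognizer) {
    let bounds = tapGesture.objectBounds()   // interface-object bounds, top-left origin
    let tap = tapGesture.locationInObject()
    // Normalize, flip the y-axis, and map into centered scene coordinates.
    let scenePoint = CGPoint(
        x: (tap.x / bounds.width - 0.5) * size.width,
        y: (0.5 - tap.y / bounds.height) * size.height
    )
    if labelNode.contains(scenePoint) {
        labelNode.text = "tapped!"
    }
}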

Detecting touch events on SKSpriteNodes

So there are some similar questions on here, but they don't really answer the question/problem I am having.
The following explanation of my project may or may not help, I am adding it here just in case...
I am trying to build my first iOS Game, and I have built a scene using the scene editor. The scene contains a SKNode named "World". There is then a backgroundNode and a foregroundNode that are children of the world node. Everything in the scene right now, all SKSpriteNodes, are children of the backgroundNode.
In my GameScene.swift file I have variables attached to the background and foreground nodes so that I can add children to these nodes as the game progresses.
Hopefully this is clear so far...
Now I have added 5 other *.sks files to my project. These files contain scenes that I have made that will be added as children of the foreground node through code in my GameScene.swift file. The SKSpriteNodes in these scene files are placed in the foreground, but their z-position is less than the z-position of one of the background child nodes. This is because I want a box to appear behind a light beam (the light beam is part of the background and the box is added to the foreground).
My problem is this...
I want to tap on the screen using gesture recognizers so that when I tap on the box I can then do some stuff. Trouble is that since the light beam has a greater z-position (because of the effect I want), every time I use the atPoint(_ p: CGPoint) -> SKNode method to determine which node I tapped on, I get back the light beam node and not the box node.
How do I tap on just the box? I have already tried setting the isUserInteractionEnabled property of the lights to false, and I have tried using touchesBegan as shown in many of the other responses to similar questions. I have also tried reading the Swift developer documents provided by Apple, but I can't figure this out.
The code for my gesture Recognizers is below:
//variables for gesture recognition
let tapGesture = UITapGestureRecognizer()
let swipeUpGesture = UISwipeGestureRecognizer()
//set up the tap gesture recognizer
tapGesture.addTarget(self, action: #selector(GameScene.tappedBox(_:)))
self.view!.addGestureRecognizer(tapGesture)
tapGesture.numberOfTapsRequired = 1
tapGesture.numberOfTouchesRequired = 1
//handles the tap event on the screen
@objc func tappedBox(_ recognizer: UITapGestureRecognizer) {
    //gets the location of the touch
    let touchLocation = recognizer.location(in: recognizer.view)
    if TESTING_TAP {
        print("The touched location is: \(touchLocation)")
    }
    //enumerates through the scene's children to see if the box I am trying to tap on can be detected
    //(it can be detected this way, I just don't know how to detect it by tapping on the box)
    enumerateChildNodes(withName: "//Box1", using: { node, _ in
        print("We have found a single box node")
    })
    //tells me what node is returned at the tapped location
    let touchedNode = atPoint(touchLocation)
    print("The node touched was: \(String(describing: touchedNode.name))")
    //chooses which animation to run based on the game and player states
    if gameState == .waitingForTap && playerState == .idle {
        startRunning() //starts the running animation
        unPauseAnimation()
    } else if gameState == .waitingForTap && playerState == .running {
        standStill() //makes the player stand still
        unPauseAnimation()
    }
}
Hopefully this is enough for you guys... If you need some more code from me, or need me to clarify anything please let me know
Please read the notes in the code. You can get what you want easily in SpriteKit. Hope this gets you the answer.
@objc func tappedBox(_ recognizer: UITapGestureRecognizer) {
    let touchLocation = recognizer.location(in: recognizer.view)
    // Before you check the position, you need to convert the location from the view to the scene. That's the right location.
    let point = (recognizer.view as! SKView).convert(touchLocation, to: self)
    print(getNodesatPoint(point, withName: "whatever Node name"))
}

/// This function gives you all nodes with the name you assign. If your node has a unique name, you've got it.
/// You can change name to other properties and find out.
private func getNodesatPoint(_ point: CGPoint, withName name: String) -> [SKNode] {
    return self.nodes(at: point).filter { $0.name == name }
}
You want to use nodes(at:) to do a hit test and get all of the nodes associated with the point. Then filter the list to find the box you are looking for.
If you have a lot of layers going on, you also may want to just add an invisible layer on top of your graphics that handles nodes being touched
Your other option is to turn off isUserInteractionEnabled on everything except the boxes and then override the touch events for your boxes, but that would mean you can't use gestures like you are doing now.
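If you go that last route, a minimal sketch (BoxNode is a hypothetical subclass name, not from the question's code):

import SpriteKit

class BoxNode: SKSpriteNode {
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        // Runs when a touch is delivered to this box.
        print("Box \(name ?? "unnamed") was tapped")
    }
}

// When setting up the scene:
// box.isUserInteractionEnabled = true    // the box itself must accept touches
// lightBeam.isUserInteractionEnabled = false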

ARKit API by example

I'm trying to wrap my head around Apple's ARKit API and I have pushed their example ARKitExample project up to GitHub.
In this demo/sample project, you move your phone camera around your environment, and it appears to automatically detect flat surfaces and place a set of "focus squares" over where your camera is centered over that surface. If you then press a "+" UI button and select from one of several objects (lamp, cups, vase, etc.) and it will render that virtual object in place of the focus squares. You can see all of this in action right here which is probably better than me trying to explain it!
I'm trying to find the place in the code where the virtual object is actually invoked and rendered onscreen. This would be just after it is selected, which I think takes place here:
@IBAction func chooseObject(_ button: UIButton) {
    // Abort if we are about to load another object to avoid concurrent modifications of the scene.
    if isLoadingObject { return }

    textManager.cancelScheduledMessage(forType: .contentPlacement)
    performSegue(withIdentifier: SegueIdentifier.showObjects.rawValue, sender: button)
}
But essentially, the user selects a virtual object and then it gets rendered wherever the focus square is currently located -- I'm looking for where this happens. Any ideas?
It adds the virtualObject instance (which is a subclass of SCNNode) as a child of the SCNScene's root node:
func virtualObjectSelectionViewController(_: VirtualObjectSelectionViewController, didSelectObjectAt index: Int) {
    guard let cameraTransform = session.currentFrame?.camera.transform else {
        return
    }

    let definition = VirtualObjectManager.availableObjects[index]
    let object = VirtualObject(definition: definition)
    let position = focusSquare?.lastPosition ?? float3(0)

    virtualObjectManager.loadVirtualObject(object, to: position, cameraTransform: cameraTransform)
    if object.parent == nil {
        serialQueue.async {
            self.sceneView.scene.rootNode.addChildNode(object)
        }
    }
}
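For reference, the placement itself boils down to giving the loaded node the focus square's world position before parenting it. A minimal sketch with hypothetical names, not code from the sample project:

let node: SCNNode = loadedObjectNode              // the model loaded from disk
node.simdPosition = focusSquareWorldPosition      // float3 world coordinates
sceneView.scene.rootNode.addChildNode(node)       // renders it in the AR scene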

Why doesn't my View respond to a gesture using a gestureRecognizer?

Having just spent a day beating my head against the keyboard, I thought I'd share my diagnosis and solution.
Situation: You add a custom View of custom class CardView to an enclosing view myCards in your app and want each card to respond to a tap gesture (for example to indicate you want to discard the card). Typical code you start with:
In your ViewController:
class MyVC : UIViewController, UIGestureRecognizerDelegate {
    ...
    func discardedCard(sender: UITapGestureRecognizer) {
        let cv: CardView = sender.view! as! CardView
        ...
    }
In your myCards construction:
cv = CardView(card: card)
myCards.addSubview(cv)
cv.userInteractionEnabled = true
...
let cvTap = UITapGestureRecognizer(target: self, action: Selector("discardedCard:"))
cvTap.delegate = self
cv.addGestureRecognizer(cvTap)
I found the arguments here very confusing and the documentation not at all helpful. It isn't clear that the target: argument refers to the object that implements discardedCard(sender: UITapGestureRecognizer). If you're constructing the recognizer and cards in your ViewController, that's going to be self. If you want to move discardedCard into your custom View class (for example), then replace self with the CardView instance in my example, including on the delegate line.
Testing the above code, I found that the discardedCard function was never called. What was going on?
So a day later, here's what I had to fix. I hope this checklist is useful to somebody else. I'm new to iOS (coming from Android), so it may be obvious to you veterans:
1. Make sure the touched view (cv in the example) has userInteractionEnabled = true. Note that if you use your own constructor it will be set false by default.
2. Make sure all enclosing views also have userInteractionEnabled = true.
3. Others have posted that the order of the delegate statement and the addGestureRecognizer statement made a difference; I didn't find that using Xcode 7.2 and iOS 9.2.
4. Most important: make sure your touched view is fully within the bounds of the enclosing views. In my example, I was building a myCards container that didn't have its width set correctly and was cutting off the right-most cards (and since clipping is disabled by default, this wasn't visually obvious until I looked at the view hierarchy in the debugger).
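Pulling the checklist together, here is a hedged sketch of the in-CardView variant mentioned above (Swift 2-era syntax to match the post; a sketch, not the original poster's final code):

class CardView : UIView, UIGestureRecognizerDelegate {
    // Call after the card has been added to myCards.
    func enableDiscardTap() {
        userInteractionEnabled = true   // item 1: a custom view may start with this false
        let tap = UITapGestureRecognizer(target: self, action: Selector("discardedCard:"))
        tap.delegate = self             // CardView is the delegate as well as the target
        addGestureRecognizer(tap)
    }

    func discardedCard(sender: UITapGestureRecognizer) {
        // sender.view is this card; handle the discard here.
    }
}

Remember items 2 and 4: every enclosing view also needs userInteractionEnabled = true, and the card must sit fully inside its container's bounds or the tap will never reach it.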