UIPanGestureRecognizer not working on iOS 13.0

I am setting a UIPanGestureRecognizer on my view. My code worked on iOS versions before 13.0, but on 13.0 it does not, and although debugging shows everything runs fine, nothing is translated.
This is how I set the UIPanGestureRecognizer on my view, named viewContentBG:
let recognizer = UIPanGestureRecognizer(target: self, action: #selector(self.handlePan(recognizer:)))
recognizer.delegate = self
viewContentBG.isUserInteractionEnabled = true
viewContentBG.addGestureRecognizer(recognizer)
After setting the delegate, my recognizer does get called, but the following code does not work on iOS 13, although it works fine on all versions before 13.0:
if recognizer.state == .changed {
    let velocity = recognizer.velocity(in: self.viewContentBG)
    translation = recognizer.translation(in: self.viewContentBG)
    // Swiping right while the view is less than a third of the way across
    if velocity.x > 0 && self.viewContentBG.frame.origin.x < self.viewContentBG.frame.size.width / 3.0 {
        let movedPoint = CGPoint(x: originalCenter.x + translation.x, y: originalCenter.y)
        self.viewContentBG.center = movedPoint
        self.btnLeftAction1.frame.origin.x = 0
        self.btnLeftAction1.frame.size.width = 0
        self.btnLeftAction2.frame.origin.x = 0
        self.btnLeftAction2.frame.size.width = self.viewContentBG.frame.origin.x
    }
}
The following lines from the code above are the most important, but they do not work on iOS 13 and later: the view (viewContentBG) sometimes moves a little in the wrong direction and snaps back to its original position, and sometimes it hardly moves at all.
let movedPoint = CGPoint(x: originalCenter.x + translation.x, y: originalCenter.y)
self.viewContentBG.center = movedPoint
I suspect there is some sort of new restriction on using UIPanGestureRecognizer in iOS 13 and later, but I really do not know the actual reason it stopped working correctly. Please help me; I am stuck, and it's kind of the main feature of my app. :(

Answering my own question; maybe it will be useful for someone.
Among the many changes in iOS 13, the handling of Auto Layout also changed a little. After some research and debugging, I found that the gesture was working and was detected by the view, but the view was making the wrong movements (not what my code said).
The main problem was that I had a view constrained to the outer view, and I was applying the pan gesture to the inner view, which was the one I wanted to translate with the user's gesture.
So I set one property to true on my inner view, and everything worked as expected. This is the property I set to true:
myView.translatesAutoresizingMaskIntoConstraints = true
So it seems that in iOS 13, Auto Layout constraints are enforced more strictly: if a view's position is determined by constraints, frame and center changes made in code get overwritten on the next layout pass, and setting translatesAutoresizingMaskIntoConstraints to true opts the view out of that.
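An alternative that keeps Auto Layout in charge is to drive the pan through a constraint constant instead of the frame. A minimal sketch, assuming a hypothetical IBOutlet named contentLeadingConstraint that pins viewContentBG's leading edge to its superview:
@objc func handlePan(recognizer: UIPanGestureRecognizer) {
    let translation = recognizer.translation(in: view)
    switch recognizer.state {
    case .changed:
        // Pan by moving the constraint, so Auto Layout and the gesture agree.
        contentLeadingConstraint.constant = translation.x
        view.layoutIfNeeded()
    case .ended, .cancelled:
        // Animate the constant home to snap the view back.
        contentLeadingConstraint.constant = 0
        UIView.animate(withDuration: 0.25) { self.view.layoutIfNeeded() }
    default:
        break
    }
}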

Related

ARKit - Object stuck to camera after tap on screen

I started out with the template project that you get when you choose an ARKit project. When you run the app, you can see the ship and view it from any angle.
However, once I allow camera control and tap on the screen or zoom into the ship by panning, the ship gets stuck to the camera. Now wherever I move the camera, the ship stays stuck to the screen.
I went through the Apple guide, and it seems they don't really consider this unexpected behavior, since there is nothing about it there.
How to keep the position of the ship fixed after I zoom it or touch the screen?
Well, it looks like allowsCameraControl is not the answer at all. It's good for SceneKit but not for ARKit (maybe it's useful for something in AR, but I'm not aware of it yet).
In order to zoom into the view a UIPinchGestureRecognizer is required.
// 1. Find the touch location
// 2. Perform a hit test
// 3. From the results take the first result
// 4. Take the node from that first result and change the scale
@objc private func handlePinch(recognizer: UIPinchGestureRecognizer) {
    if recognizer.state == .changed {
        // 1.
        let location = recognizer.location(in: sceneView)
        // 2.
        let hitTestResults = sceneView.hitTest(location, options: nil)
        // 3.
        if let hitTest = hitTestResults.first {
            let shipNode = hitTest.node
            let newScaleX = Float(recognizer.scale) * shipNode.scale.x
            let newScaleY = Float(recognizer.scale) * shipNode.scale.y
            let newScaleZ = Float(recognizer.scale) * shipNode.scale.z
            // 4.
            shipNode.scale = SCNVector3(newScaleX, newScaleY, newScaleZ)
            // Reset so each .changed event applies an incremental scale
            recognizer.scale = 1
        }
    }
}
Regarding #2: I got a little confused by another hit-test method, hitTest(_:types:).
A note from the documentation:
This method searches for AR anchors and real-world objects detected by
the AR session, not SceneKit content displayed in the view. To search
for SceneKit objects, use the view's hitTest(_:options:) method
instead.
So that method cannot be used if you want to scale a node that is SceneKit content.
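For completeness, the recognizer still has to be attached to the scene view; a minimal sketch, assuming the handler above lives in the hosting view controller and is registered in viewDidLoad:
override func viewDidLoad() {
    super.viewDidLoad()
    // Attach the pinch recognizer that drives the scaling handler above.
    let pinch = UIPinchGestureRecognizer(target: self, action: #selector(handlePinch(recognizer:)))
    sceneView.addGestureRecognizer(pinch)
}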

Xcode 9 Swift 4 - Reposition views by dragging

I'm quite new to iOS development, but I have years of programming experience.
Anyway, I'm having a hard time finding a solution for my problem.
In my app I render rows of colored circles based on data from the server.
Each of these circles has different properties set on the server.
One of these is the "offset" property.
It should be used to render the circle at a distance from its left sibling, or from the start of the parent view if it is the first.
Each circle should then also be movable by dragging it to the right or left, but never less than 0 from its left sibling.
In Android this was very easy: just set the left margin on drag, and all was good.
But in Xcode I'm having a very hard time figuring out how to get this done.
I'm sure it's just me being way too inexperienced, so I hope someone with a bit more knowledge of Swift can help me with this.
Here are some images to make clear what I'm trying to achieve.
First render, where one circle has an offset
The gesture, where the third-to-last circle is dragged to the right
The result of the gesture
I need this to move seamlessly, so not repositioning after the gesture ends, but moving along with the finger.
As you can see, the circles to the right of the one being dragged keep their relative position to the moved one.
Thank you.
There are a couple of ways to do this. The first possible solution is to use swipe gestures to move the objects:
override func viewDidLoad() {
    super.viewDidLoad()
    let swipeGesture = UISwipeGestureRecognizer(target: self, action: #selector(handleSwipe(sender:)))
    swipeGesture.direction = [.down, .up]
    self.view.addGestureRecognizer(swipeGesture)
}

@objc func handleSwipe(sender: UISwipeGestureRecognizer) {
    print(sender.direction)
}
Use these gestures to move the objects along with your fingers; alternatively, you can use the .left and .right directions, depending on your needs.
The second solution for dragging components is a pan gesture:
@objc func detectPan(recognizer: UIPanGestureRecognizer) {
    let translation = recognizer.translation(in: self.superview!)
    self.center = CGPoint(x: lastLocation.x + translation.x, y: lastLocation.y + translation.y)
}
The translation variable reports how far the finger has moved since the pan began; the view's center is adjusted by that displacement from its last location.
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    // Promote the touched view
    self.superview?.bringSubviewToFront(self)
    // Remember original location
    lastLocation = self.center
}
When the view is touched, it is brought in front of the other views, and its current center is stored in the lastLocation variable.
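Applied to the original question, the same pan approach can enforce the "never less than 0 from its left sibling" rule by clamping the proposed position. A rough sketch inside the circle view subclass, where leftSibling is a hypothetical reference to the previous circle (nil for the first one):
@objc func detectPan(recognizer: UIPanGestureRecognizer) {
    let translation = recognizer.translation(in: superview!)
    // Minimum x origin: the right edge of the left sibling, or 0 for the first circle.
    let minX = leftSibling.map { $0.frame.maxX } ?? 0
    let proposedOriginX = max(minX, lastLocation.x - bounds.width / 2 + translation.x)
    center = CGPoint(x: proposedOriginX + bounds.width / 2, y: lastLocation.y)
    // To keep the circles on the right moving with this one, apply the same
    // x-delta to each right-hand sibling here as well.
}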
Hope this helps.

Move objects around, with gesture recognizer for multiple Objects

I am trying to make an app where you can use stickers like on Snapchat and Instagram. I found a technique that adds the images and it fully works, but now I want the object to change its position when you swipe it around (I also want to add the scale/rotate functionality).
My code looks like this:
@objc func StickerLaden() {
    for i in 0 ..< alleSticker.count {
        let imageView = UIImageView(image: alleSticker[i])
        imageView.frame = CGRect(x: StickerXScale[i], y: StickerYScale[i], width: StickerScale[i], height: StickerScale[i])
        ImageViewsSticker.append(imageView)
        ImageView.addSubview(imageView)
        imageView.isUserInteractionEnabled = true
        let slideGesture = UISwipeGestureRecognizer(target: self, action: #selector(SlideFunc(_:)))
        imageView.addGestureRecognizer(slideGesture)
    }
}

@objc func SlideFunc(_ recognizer: UISwipeGestureRecognizer) {
}
Here are the high-level steps you need to take:
Add one UIPanGestureRecognizer to the parent view that has the images on it.
Implement UIGestureRecognizerDelegate methods to keep track of the user touching and releasing the screen.
On first touch, loop through all your images and call image.frame.contains(touchPoint). Add all images that are under the touch point to an array.
Loop through the list of touched images and calculate the distance from the touch point to the center of each image. Choose the image whose center is closest to the touched point.
Move the chosen image to the top of the view stack. [You now have selected an image and made it visible.]
Next, when you receive pan events, change the frame of the chosen image accordingly.
Once the user releases the screen, reset any state variables you may have, so that you can start again when the next touch is done.
The above will give you a nicely working pan solution. It's a fair number of things to sort out, but it's not very difficult; a condensed sketch follows.
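A condensed sketch of the steps above, reusing the names from the question (ImageView is the parent view, ImageViewsSticker holds the stickers); the selection logic is illustrative:
var chosenImage: UIImageView?

@objc func handlePan(_ recognizer: UIPanGestureRecognizer) {
    let point = recognizer.location(in: ImageView)
    switch recognizer.state {
    case .began:
        // Collect the images under the touch, pick the one whose center is closest.
        let touched = ImageViewsSticker.filter { $0.frame.contains(point) }
        chosenImage = touched.min(by: {
            hypot($0.center.x - point.x, $0.center.y - point.y) <
            hypot($1.center.x - point.x, $1.center.y - point.y)
        })
        // Move the chosen image to the top of the view stack.
        if let chosen = chosenImage { ImageView.bringSubviewToFront(chosen) }
    case .changed:
        // Move the chosen image along with the finger.
        chosenImage?.center = point
    default:
        // Reset state once the touch ends, ready for the next one.
        chosenImage = nil
    }
}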
As I said in my comment, scale and rotate are very tricky. I advise you to forget that for a bit and first implement other parts of your app.

App crashes when adding a subview to a dynamic item in Interface Builder

So I am using UIKit Dynamics to create a slider that can be swiped out of the way to reveal what is behind it. I am using two UIFieldBehaviors, an attachment behaviour driven by a UIPanGestureRecognizer, and a UICollisionBehavior to achieve this. The structure in Interface Builder looks like this:
Here is an image of my view hierarchy.
The TrayContainerView has constraints applied to it, and it is also the reference view for my dynamic animator. trayView is the dynamic item the animator acts on.
This works fine until I add a subview to trayView from within Interface Builder, at which point I get this message whenever my app switches to that particular view controller:
Terminating app due to uncaught exception 'Invalid Association', reason: 'UIDynamicAnimator supports a maximum of 32 distinct fields'
The strange thing is, if I add the subview programmatically it works fine save for some issues with the layout.
Here's the setup for my dynamics behaviours:
container = UICollisionBehavior(items: [self])
let boundaryWidth = UIScreen.main.bounds.size.width
container.addBoundary(withIdentifier: "upper" as NSString, from: CGPoint(x: 0, y: offset), to: CGPoint(x: boundaryWidth, y: offset))
let boundaryHeight = self.superview!.frame.size.height + self.frame.size.height - self.layoutMargins.top
container.addBoundary(withIdentifier: "lower" as NSString, from: CGPoint(x: 0, y: boundaryHeight), to: CGPoint(x: boundaryWidth, y: boundaryHeight))

tractorField = UIFieldBehavior.linearGravityField(direction: CGVector(dx: 0, dy: 30))
tractorField.direction = CGVector(dx: 0, dy: -10)
let referenceRegion = (superview?.frame)!
let fieldCoefficient: CGFloat = 1
tractorField.region = UIRegion(size: CGSize(width: referenceRegion.width, height: referenceRegion.height * fieldCoefficient))
tractorField.position = CGPoint(x: (self.superview?.center.x)!, y: referenceRegion.height * fieldCoefficient / 2)
tractorField.addItem(self)

gravityField = UIFieldBehavior.linearGravityField(direction: CGVector(dx: 0, dy: 30))
gravityField.direction = CGVector(dx: 0, dy: 10)
gravityField.region = UIRegion(size: CGSize(width: referenceRegion.width, height: referenceRegion.height * fieldCoefficient))
gravityField.position = CGPoint(x: (self.superview?.center.x)!, y: referenceRegion.height + referenceRegion.height * fieldCoefficient / 2)
gravityField.addItem(self)

animator.addBehavior(gravityField)
animator.addBehavior(tractorField)

dynamicItem = UIDynamicItemBehavior(items: [self])
animator.addBehavior(container)
animator.addBehavior(dynamicItem)
Any ideas would be greatly appreciated.
Figured it out. The problem was that I tried to subclass UIView and do all of the dynamics work in there. Unfortunately that does not work; I suspect the dynamic animator was created too early. I instantiated it in awakeFromNib, which is what caused the crash. Implementing the whole thing in the viewDidLoad method of a view controller made it all work as it should.
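A minimal sketch of the working arrangement, with the animator owned by the view controller rather than the view (the class and outlet names are illustrative):
class TrayViewController: UIViewController {
    @IBOutlet weak var trayContainerView: UIView!
    @IBOutlet weak var trayView: UIView!
    var animator: UIDynamicAnimator!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Create the animator here, not in awakeFromNib: by viewDidLoad the
        // view hierarchy is fully loaded and the reference view is valid.
        animator = UIDynamicAnimator(referenceView: trayContainerView)
        // ...add the collision, field, and dynamic item behaviours from above...
    }
}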

More than 5 UILongPressGestureRecognizers in an App

In my Swift iOS app, I use the code below to create a circle-shaped view on my screen. I then add a UILongPressGestureRecognizer to it, keeping the view itself as well as the gesture recognizer in their respective arrays. Every time the user touches and holds this view, the code runs again, creating another view and adding another UILongPressGestureRecognizer to it.
let x = CGFloat(arc4random_uniform(UInt32(sW - w)))
let y = CGFloat(arc4random_uniform(UInt32(sH - h - 10))) + 15
circle.append(UIButton())
touch.append(UILongPressGestureRecognizer())
let l = circle.count - 1
circle[l].frame = CGRect(x: x, y: y, width: 70, height: 70)
circle[l].backgroundColor = UIColor.red
circle[l].layer.cornerRadius = w / 2
circle[l].clipsToBounds = true
touch[l].addTarget(self, action: #selector(touched(_:)))
touch[l].minimumPressDuration = 0
touch[l].numberOfTouchesRequired = 1
touch[l].numberOfTapsRequired = 0
touch[l].allowableMovement = 0
touch[l].delegate = self
circle[l].addGestureRecognizer(touch[l])
self.view.addSubview(circle[l])
Now when I run the app, I can tap and hold on the circle view and its state changes to .began, and it also fires the .changed and .ended states, for up to 5 views. BUT when the 6th view is added, the gesture recognizer does not work on it.
There is nothing in Apple documentation about any maximum number of gestureRecognizers that would work simultaneously. What else could be causing this behaviour?
It's a bad approach to have more than one long-press recognizer working simultaneously. Recognizers react in a chain, each one firing after the previous one in the chain fails, unless you specify through their delegate that they should recognize touches simultaneously.
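If simultaneous recognition is what you actually want, the delegate can opt in. A minimal sketch, assuming the view controller is already the recognizers' delegate (as in the code above):
func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer,
                       shouldRecognizeSimultaneouslyWith otherGestureRecognizer: UIGestureRecognizer) -> Bool {
    // Let each long press recognize alongside the others.
    return true
}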
But you can also implement a delegate callback for all your recognizers:
gestureRecognizer(_:shouldReceive:)
and check whether the touch belongs to the corresponding recognizer. If it does, you can disable all the other recognizers in this delegate callback, then re-enable them once the suitable recognizer has handled the touch.
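A sketch of that gating idea (the hit test shown is illustrative):
func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer,
                       shouldReceive touch: UITouch) -> Bool {
    // Only accept the touch if it lands on this recognizer's own circle view.
    guard let view = gestureRecognizer.view else { return false }
    return view.bounds.contains(touch.location(in: view))
}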
