More than 5 UILongPressGestureRecognizers in an App - ios

In my Swift iOS app I use the code below to create a circle-shaped UIView on the screen, then add a UILongPressGestureRecognizer to it, keeping the view and the gesture recognizer in their respective arrays. Every time the user touches and holds one of these views, the code runs again, creating another view and adding another UILongPressGestureRecognizer to it.
let x = CGFloat(arc4random_uniform(UInt32(sW - w)))
let y = CGFloat(arc4random_uniform(UInt32(sH - h - 10))) + 15
circle.append(UIButton())
touch.append(UILongPressGestureRecognizer())
let l = circle.count - 1
circle[l].frame = CGRect(x: x, y: y, width: 70, height: 70)
circle[l].backgroundColor = UIColor.red
circle[l].layer.cornerRadius = w / 2
circle[l].clipsToBounds = true
touch[l].addTarget(self, action: #selector(touched(_:)))
touch[l].minimumPressDuration = 0
touch[l].numberOfTouchesRequired = 1
touch[l].numberOfTapsRequired = 0
touch[l].allowableMovement = 0
touch[l].delegate = self
circle[l].addGestureRecognizer(touch[l])
self.view.addSubview(circle[l])
Now when I run the app, I can tap and hold on a circle view and its state changes to .began, and it also fires .changed and .ended, for up to 5 views. BUT when the 6th view is added, the gesture recognizer does not work on it.
There is nothing in Apple's documentation about a maximum number of gesture recognizers that can work simultaneously. What else could be causing this behaviour?

It's a bad approach to have more than one long-press recognizer working simultaneously. Recognizers react in a chain, each firing after the previous one in the chain fails, unless you specify via their delegate that they should recognize touches simultaneously.
But you can also implement a delegate callback for all your recognizers:
gestureRecognizer(_:shouldReceive:)
and check whether the touch fits the corresponding recognizer. If it does, you can disable all the other recognizers in this delegate callback, and re-enable them after the suitable recognizer has handled the touch.
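A minimal sketch of that delegate approach (assuming the circle views from the question and a view controller as delegate; the bounds check is one way to decide whether a touch "fits"):

func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer,
                       shouldReceive touch: UITouch) -> Bool {
    // Only let a recognizer receive the touch if it lands inside its own circle.
    guard let circleView = gestureRecognizer.view else { return false }
    return circleView.bounds.contains(touch.location(in: circleView))
}

// Or simply allow all the long presses to recognize together:
func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer,
                       shouldRecognizeSimultaneouslyWith otherGestureRecognizer: UIGestureRecognizer) -> Bool {
    return true
}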

Related

UIPanGestureRecognizer not working on iOS 13.0

I am setting a UIPanGestureRecognizer on my view. My code works on iOS versions before 13.0, but on 13.0 it does not work; I debugged my code and it all runs fine, yet nothing gets translated.
This is how I set the UIPanGestureRecognizer on my view, named viewContentBG:
let recognizer = UIPanGestureRecognizer(target: self, action: #selector(self.handlePan(recognizer:)))
recognizer.delegate = self
viewContentBG.isUserInteractionEnabled = true
viewContentBG.addGestureRecognizer(recognizer)
After setting the delegate my recognizer gets called, but the following code does not work on iOS 13, although it works fine on all versions before 13.0:
if recognizer.state == .changed {
    let velocity = recognizer.velocity(in: self.viewContentBG)
    translation = recognizer.translation(in: self.viewContentBG)
    if velocity.x > 0 && self.viewContentBG.frame.origin.x < self.viewContentBG.frame.size.width / 3.0 {
        let movedPoint = CGPoint(x: originalCenter.x + translation.x, y: originalCenter.y)
        self.viewContentBG.center = movedPoint
        self.btnLeftAction1.frame.origin.x = 0
        self.btnLeftAction1.frame.size.width = 0
        self.btnLeftAction2.frame.origin.x = 0
        self.btnLeftAction2.frame.size.width = self.viewContentBG.frame.origin.x
    }
}
The following lines from the code above are the most important ones, but they do not work on iOS 13 and higher: the view (viewContentBG) sometimes moves a little in the wrong direction and snaps back to its original position, and sometimes it does not move at all.
let movedPoint = CGPoint(x: originalCenter.x+translation.x, y: originalCenter.y)
self.viewContentBG.center = movedPoint
I suspect there is some sort of new restriction on UIPanGestureRecognizer in iOS 13 and higher, but I really do not know the main reason why it is not working correctly. Please help me, I am stuck on it, and it's kind of the main feature of my app. :(
Answering my own question; maybe it will be useful for someone.
Among the many changes in iOS 13, the handling of Auto Layout also changed a little. After some research and debugging I found that the gesture was working and being detected by the view, but the view was making the wrong movements (not what my code said).
The main problem was that I had an inner view constrained to the outer view, and I was applying the pan gesture to the inner view, which was the one I wanted to translate with the user's gesture.
So I set one property to true on my inner view and everything worked as expected. This is the property I set to true:
myView.translatesAutoresizingMaskIntoConstraints = true
So in iOS 13, Auto Layout constraints appear to be enforced more strictly: a constrained view moved by setting its frame or center gets snapped back by the layout engine.
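A minimal sketch of the fix in context, following the answer's approach (viewContentBG and the recognizer setup are from the question):

override func viewDidLoad() {
    super.viewDidLoad()

    // Opt the inner view out of Auto Layout so that moving its
    // center/frame from the pan handler is not undone by the next layout pass.
    viewContentBG.translatesAutoresizingMaskIntoConstraints = true

    let recognizer = UIPanGestureRecognizer(target: self, action: #selector(self.handlePan(recognizer:)))
    recognizer.delegate = self
    viewContentBG.isUserInteractionEnabled = true
    viewContentBG.addGestureRecognizer(recognizer)
}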

Adding animation to 3D models in ARKit

In this video an object is given an animation so that it hovers around the room when placed, and when tapped it is dropped down with another animation.
How can I add this kind of animation in my project?
Is adding an already-animated object the only way?
Thank you
https://youtu.be/OS_kScr0XkQ
I think the hovering up and down is most likely an action sequence. The following sequence, run from the function where the node is first placed, will produce the hovering effect:
// Bob the node up, pause, then back down, and loop forever.
let moveDown = SCNAction.move(by: SCNVector3(0, -0.1, 0), duration: 1)
let moveUp = SCNAction.move(by: SCNVector3(0, 0.1, 0), duration: 1)
let waitAction = SCNAction.wait(duration: 0.25)
let hoverSequence = SCNAction.sequence([moveUp, waitAction, moveDown])
let loopSequence = SCNAction.repeatForever(hoverSequence)
node2Animate.runAction(loopSequence)
self.sceneView.scene.rootNode.addChildNode(node2Animate)
For the second part, stopping the animation when you tap the node, put this inside the tap gesture function:
node2Animate.removeAllActions()
For the last part, dropping to the floor, node2Animate needs a physicsBody, and before the tap you turn its gravity off:
node2Animate.physicsBody?.isAffectedByGravity = false
After the tap you set it to true:
node2Animate.physicsBody?.isAffectedByGravity = true
There is other stuff going on as well; collision with the floor has to be set up, etc.
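A minimal sketch of a tap handler tying those pieces together (finding the node via hit-testing is an assumption; the project in the video may identify the tapped node differently):

@objc func handleTap(_ gesture: UITapGestureRecognizer) {
    let location = gesture.location(in: sceneView)
    // Hit-test the tap against the scene to find the tapped node.
    guard let hit = sceneView.hitTest(location, options: nil).first else { return }
    let node = hit.node
    // Stop the hover loop and let physics pull the node down.
    node.removeAllActions()
    node.physicsBody?.isAffectedByGravity = true
}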

Move objects around, with gesture recognizer for multiple Objects

I am trying to make an app where you can use stickers like on Snapchat and Instagram. I have found a technique that adds the images, but now I want the object to change its position when you swipe it around (I also want to add scale / rotate functionality).
My code looks like this:
@objc func StickerLaden() {
    for i in 0 ..< alleSticker.count {
        let imageView = UIImageView(image: alleSticker[i])
        imageView.frame = CGRect(x: StickerXScale[i], y: StickerYScale[i], width: StickerScale[i], height: StickerScale[i])
        ImageViewsSticker.append(imageView)
        ImageView.addSubview(imageView)
        imageView.isUserInteractionEnabled = true
        let slideGesture = UISwipeGestureRecognizer(target: self, action: #selector(SlideFunc(_:)))
        imageView.addGestureRecognizer(slideGesture)
    }
}
@objc func SlideFunc(_ gesture: UISwipeGestureRecognizer) {
}
Here are the high-level steps you need to take (a sketch follows at the end of the answer):
1. Add one UIPanGestureRecognizer to the parent view that has the images on it.
2. Implement UIGestureRecognizerDelegate methods to keep track of the user touching and releasing the screen.
3. On first touch, loop through all your images and call image.frame.contains(touchPoint). Add all images that are under the touch point to an array.
4. Loop through the list of touched images and calculate the distance from the touch point to the center of each image. Choose the image whose center is closest to the touch point.
5. Move the chosen image to the top of the view stack. You now have selected an image and made it visible.
6. Next, when you receive pan events, change the frame of the chosen image accordingly.
7. Once the user releases the screen, reset any state variables you may have, so that you can start again on the next touch.
The above will give you a nicely working pan solution. It's a good number of things to sort out, but it's not very difficult.
As I said in my comment, scale and rotate are very tricky. I advise you to set those aside for a bit and first implement the other parts of your app.
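A minimal sketch of those steps, assuming the ImageViewsSticker array and parent ImageView from the question (the pan recognizer is attached to the parent, per step 1):

var chosenImageView: UIImageView?

@objc func handlePan(_ gesture: UIPanGestureRecognizer) {
    let point = gesture.location(in: ImageView)
    switch gesture.state {
    case .began:
        // Steps 3 and 4: of all stickers under the touch, pick the one
        // whose center is closest to the touch point.
        let hits = ImageViewsSticker.filter { $0.frame.contains(point) }
        chosenImageView = hits.min(by: {
            hypot($0.center.x - point.x, $0.center.y - point.y) <
                hypot($1.center.x - point.x, $1.center.y - point.y)
        })
        if let chosen = chosenImageView {
            ImageView.bringSubviewToFront(chosen) // step 5
        }
    case .changed:
        // Step 6: move the chosen sticker by the pan translation.
        let delta = gesture.translation(in: ImageView)
        chosenImageView?.center.x += delta.x
        chosenImageView?.center.y += delta.y
        gesture.setTranslation(.zero, in: ImageView)
    default:
        chosenImageView = nil // step 7: reset for the next touch
    }
}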

How do I alter the touch location when handling a pinch and a pan gesture recognizer at the same time?

I'm trying to recreate an interaction similar to the Photos app, where you can pinch and pan a photo at the same time and adding or removing a touch mid-pan works perfectly.
In my code I'm using the location of the touch to move the view. When I drag with two fingers, the pan gesture recognizer reports the point between the two fingers (as it should), but when I lift a finger it changes the point to that of the remaining finger, causing the view to jerk to a new position.
Setting maximumNumberOfTouches to 1 does not solve my problem, since you can touch with finger 1, pan, touch with finger 2, pan, lift finger 1, and the view will jerk to the position of finger 2. Plus, I want to allow 2-finger panning, since users can pinch to zoom and rotate the image as well.
I also cannot use UIScrollView for this, for other reasons, but I know it doesn't have this problem.
The only solution I can think of is to record the initial touch location, then every time a finger is added or removed, offset the new location based on the old one. But I'm not sure how to get that information.
Is there an API for this? Is the above way the only way, and if so, how do I do it?
As I understand it, the issue is that your code for responding to a pan (drag) doesn't work if the user changes the number of fingers in mid-drag, because the gesture recognizer's location(in:) jumps.
The problem is that the entire basic assumption underlying your code is wrong. To make a view draggable, you do not check the location(in:). You check the translation(in:). That's what it's for.
This is the standard pattern for making a view draggable with a pan gesture recognizer:
@objc func dragging(_ p: UIPanGestureRecognizer) {
    let v = p.view!
    switch p.state {
    case .began, .changed:
        // Apply the accumulated translation, then zero it out so the
        // next callback delivers only the delta since this one.
        let delta = p.translation(in: v.superview)
        var c = v.center
        c.x += delta.x; c.y += delta.y
        v.center = c
        p.setTranslation(.zero, in: v.superview)
    default: break
    }
}
That works fine even if the user starts with multiple fingers and lifts some during the drag.
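For completeness, here's one way to attach it (draggableView is a hypothetical name for the view being dragged):

let pan = UIPanGestureRecognizer(target: self, action: #selector(dragging(_:)))
draggableView.addGestureRecognizer(pan)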
OK, so here's how I solved it.
Inside the gesture function I have global variables holding the touch location:
self.touchInView.x = sender.location(in: superview).x - frame.origin.x
self.touchInView.y = sender.location(in: superview).y - frame.origin.y
self.touchInParent = sender.location(in: superview)
In state == .began I set a variable called originalTouch to the location of the touch:
if sender.state == .began {
    originalTouch = self.touchInView
}
Then in state == .changed I detect whether the number of touches changed, and re-anchor the offset:
// Reset the original touch position if the number of touches changes,
// so the view remains in the same position instead of jumping.
if sender.numberOfTouches != lastNumberOfTouches {
    originalTouch = touchInView
}
lastNumberOfTouches = sender.numberOfTouches
Now I can set the view's position based on originalTouch:
self.frame.origin = CGPoint(x: touchInParent.x - originalTouch.x,
                            y: touchInParent.y - originalTouch.y)
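Pieced together, the whole handler might look like this (a sketch; names follow the answer, and the view is assumed to manage its own pan):

class DraggableView: UIView {
    private var originalTouch = CGPoint.zero
    private var touchInView = CGPoint.zero
    private var touchInParent = CGPoint.zero
    private var lastNumberOfTouches = 0

    @objc func handlePan(_ sender: UIPanGestureRecognizer) {
        guard let superview = superview else { return }
        touchInParent = sender.location(in: superview)
        touchInView = CGPoint(x: touchInParent.x - frame.origin.x,
                              y: touchInParent.y - frame.origin.y)

        // (Re)anchor on the current touch centroid at the start and whenever
        // a finger is added or removed, so the view doesn't jump.
        if sender.state == .began || sender.numberOfTouches != lastNumberOfTouches {
            originalTouch = touchInView
        }
        lastNumberOfTouches = sender.numberOfTouches

        frame.origin = CGPoint(x: touchInParent.x - originalTouch.x,
                               y: touchInParent.y - originalTouch.y)
    }
}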

How do I find out which direction a user is panning with UIPanGestureRecognizer?

So I am using a UIPanGestureRecognizer in my project, added to a view. I would like to know when the user pans up, down, left, or right. I am using the left and right directions to scrub through video; what up and down will do is still to be determined. I have used the following code, but I can't seem to figure it out. Thanks for the help!
@IBAction func panVideo(_ recognizer: UIPanGestureRecognizer) {
    let vel = recognizer.velocity(in: self.videoView)
    if vel.x > 0 {
        // user dragged towards the right
        print("right")
    } else {
        // user dragged towards the left
        print("left")
    }
}
EDIT: Using Slider
if let duration = avPlayer?.currentItem?.duration {
    let totalSeconds = CMTimeGetSeconds(duration)
    let value = Float64(scrubberSlider.value) * totalSeconds
    let seekTime = CMTime(value: Int64(value), timescale: 1)
    avPlayer?.seek(to: seekTime, completionHandler: { (completedSeek) in
        // perhaps do something later here
    })
}
Joe's answer is close, but it won't account for purely vertical or horizontal pans. (I'd comment on his answer, except the formatting won't take.) Try this:
let vel = recognizer.velocity(in: self.videoView)
if vel.x > 0 {
    // user dragged towards the right
    print("right")
} else if vel.x < 0 {
    // user dragged towards the left
    print("left")
}
if vel.y > 0 {
    // user dragged downwards
    print("down")
} else if vel.y < 0 {
    // user dragged upwards
    print("up")
}
In essence, you are getting the CGPoint of the gesture (x, y) and determining the velocity of the movement. You have an alternative to this: taking the starting and ending points.
var startingPoint = CGPoint.zero

@IBAction func panVideo(_ recognizer: UIPanGestureRecognizer) {
    if recognizer.state == .began {
        startingPoint = recognizer.location(in: self.videoView)
    }
    if recognizer.state == .ended {
        let endingPoint = recognizer.location(in: self.videoView)
        // do the same comparing as above
    }
}
The advantage of the second option is you aren't doing unnecessary calculations during the pan. The disadvantage is that there are certain scenarios (like animating view movements) that are not conducive to it.
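A sketch of that comparison with the start and end points (the axis with the larger displacement wins):

let dx = endingPoint.x - startingPoint.x
let dy = endingPoint.y - startingPoint.y
if abs(dx) > abs(dy) {
    print(dx > 0 ? "right" : "left")
} else {
    print(dy > 0 ? "down" : "up")
}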
EDIT: I'm adding a bit more verbiage after reading your comment. It sounds to me like you may not be fully understanding what a pan gesture really is.
Like most (all?) gestures, it has a beginning, an in-between, and an end.
It is a two-dimensional drag with two components, x and y.
There are actually SEVEN possible states, but FOUR of them (cancelled, failed, possible, recognized) do not normally fire for a pan gesture, leaving THREE states (began, changed, ended) that trigger.
I gave one example earlier, moving a view with a pan gesture. Now I'll try a second one: tracing the outline of, say, the Statue of Liberty in an image.
Here you want all THREE states, in order to know when to begin tracing, when the path changes, and when it ends. Even restricting this to the changed state, I think you can see where both the X and the Y coordinates change.
So yes, a logging of "left, up, left, up, left" is quite possible. If you traced a perfectly vertical line across the entire screen you might expect all "up" or "down" values in your log, but the odds of any human panning that perfectly are slim, so sure, a few "lefts" or "rights" may happen.
My tweak to Joe's code was to eliminate those moments of perfection: in his code, vel.x == 0 would print "left" and vel.y == 0 would print "up".
Again, if you simply want to know the "result" of the pan, use .began and .ended and ignore .changed; rely on recognizer.state rather than recognizer.velocity.
The "if" statements both of us gave you are really frameworks. If you understand both the states and the two-dimensional nature of things, and you need to use .changed, then adapt those "if" statements: maybe compare the X velocity to the Y velocity and take the greater, or ignore changes where the change in X or Y is under some threshold.
Try this code (tested in Swift 3).
Updated answer: the code below reports the pan direction when the touch begins and when it ends.
if recognizer.state == .began {
    let vel = recognizer.velocity(in: view) // view is your UIView
    if vel.x > 0 {
        print("right")
    } else {
        print("left")
    }
}
if recognizer.state == .ended {
    let vel = recognizer.velocity(in: view)
    if vel.y > 0 {
        print("down")
    } else {
        print("up")
    }
}
Note: your answer is actually hidden in your own code:
@IBAction func panVideo(_ recognizer: UIPanGestureRecognizer) {
    let vel = recognizer.velocity(in: self.videoView)
    if vel.x > 0 {
        // user dragged towards the right
        print("right")
    } else {
        // user dragged towards the left
        print("left")
    }
    if vel.y > 0 {
        // user dragged downwards
        print("down")
    } else {
        // user dragged upwards
        print("up")
    }
}
hope this helps...
Okay, now I'm getting the correct mental picture. You want scrub control. This is something very different, and I would recommend a UISlider over working with gestures (highly recommend it). For starters, it has the pan gesture already built in! Here's what I think apps like YouTube, QuickTime, etc. do.
(1) Let's take the specific example of a video that is 1:53:22 in length, or (1*60*60)+(53*60)+22 = 6802 seconds.
(2) Add a "scrubber" subview to your main screen. You'll probably want a UISlider, two UILabels (one on each side of the slider), and anything else you think makes for a polished look.
(3) The UISlider will have a minimumValue of 0 seconds and a maximumValue of 6802 seconds. Of course, you'll want that max value recalculated on each change of source.
(4) A question you'll want to answer for your app is whether to go the route of iTunes (where this scrubber view is always visible) or YouTube (where it is only visible when the user or mouse cursor hovers over an area). For the former, you just position the scrub view somewhere on the screen. For the latter, though, you may wish to use a pan gesture, but only for visibility. Hold that thought.
(5a) You need two, maybe three, more things on your UISlider. First is an automatic value update. Again, it depends on the visibility of the entire scrub view: if it's always visible, you want to update both the left-hand UILabel and the UISlider value once a second; for a disappearing one, you can probably get away with updating it only while it's visible.
(5b) The second thing you need to do with the UISlider is track the changes the user makes to it (the "scrubbing") while it's visible. The event you are looking for is valueChanged. It triggers any time the user works the slider, giving you the new seconds value to "scrub" the video to.
(5c) The third thing you might want to do with the UISlider is customize it in a few ways: change the thumb image and the slider track. My app changes the thumb image. These can only be done in code; there are no IB properties available.
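A sketch of steps (3) and (5a)-(5c), assuming the avPlayer and scrubberSlider from the question's edit (the one-second interval, the scrubbed(_:) selector, and the thumb image name are illustrative):

// (3) Configure the range from the current item's duration.
scrubberSlider.minimumValue = 0
if let duration = avPlayer?.currentItem?.duration {
    scrubberSlider.maximumValue = Float(CMTimeGetSeconds(duration)) // e.g. 6802
}

// (5a) Update the slider once a second during playback; keep the token
// so the observer can be removed later.
let token = avPlayer?.addPeriodicTimeObserver(forInterval: CMTime(value: 1, timescale: 1),
                                              queue: .main) { [weak self] time in
    self?.scrubberSlider.value = Float(CMTimeGetSeconds(time))
}

// (5b) Track the user's scrubbing; scrubbed(_:) would hold the seek(to:)
// code from the question's edit.
scrubberSlider.addTarget(self, action: #selector(scrubbed(_:)), for: .valueChanged)

// (5c) Customize the thumb in code.
scrubberSlider.setThumbImage(UIImage(named: "thumb"), for: .normal)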
Back to #4. None of the above needs a pan gesture, unless you want the scrub view to appear only when needed.
If you have a mental picture of what I've described above, all you want to know is whether a pan gesture has happened, with no regard for direction. You might want some regard for screen area, though: do you want the scrub view to appear when the user pans over an area where the scrub view will not be shown?
Wire up a CALayer (or the entire video view) with the pan gesture, then code for a state of UIGestureRecognizer.State.began. Make the scrub view visible by changing its alpha from 0 to 1, or "slide" it into view by changing its origin or height. Add a UIView.animate(withDuration:) for a good effect.
Now all that's left is setting the scrub view back to its natural state. You'll need to code the reverse of whatever you did, attached to a timer set for however many seconds you want the view to stay visible.
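A sketch of that show-then-hide dance (the durations and the 3-second timeout are illustrative, and hideTimer is an assumed stored property):

func showScrubView() {
    UIView.animate(withDuration: 0.3) { self.scrubView.alpha = 1 }
    hideTimer?.invalidate()
    hideTimer = Timer.scheduledTimer(withTimeInterval: 3, repeats: false) { [weak self] _ in
        UIView.animate(withDuration: 0.3) { self?.scrubView.alpha = 0 }
    }
}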
TL;DR;
My app uses 4 UISliders that change various things (height, width, saturation, grill thickness) of a photo effect that uses CoreImage. Performance is very tight: about 5/100 of a second to grab the new values of all 4 sliders and update the image.
These sliders are always visible today, but my next update (about 2 weeks away) will feature a "sliding control board": think of a keyboard with sliders and other controls on it. (There are limitations on the alpha value of a custom keyboard that forced me to write my own, but that's a separate discussion.)
So I know a "sliding scrub view" is possible. What I don't know is whether a view whose alpha is zero will still detect pan gestures; if not, a CALayer may be needed.
