How can I build custom quick links on 3d touch just like native apps in iPhone 6s and 6s Plus?
It's as simple as you'd expect:
override func touchesBegan(touches: Set<UITouch>, withEvent event: UIEvent?) {
    guard let touch = touches.first else { return }
    if traitCollection.forceTouchCapability == .Available {
        print("Pressure is \(touch.force) and maximum force is \(touch.maximumPossibleForce)")
    }
}
The other answers are close, but no cigar. You need to add the touchesMoved:withEvent: method (not touchesBegan:withEvent:):
override func touchesMoved(touches: Set<UITouch>, withEvent event: UIEvent?) {
    guard let touch = touches.first else { return }
    if traitCollection.forceTouchCapability == .Available {
        print("Touch pressure is \(touch.force), maximum possible force is \(touch.maximumPossibleForce)")
    }
}
As referenced from realm.io: at the fundamental level, 3D Touch is exposed simply as a new property on UITouch:
override func touchesBegan(touches: Set<UITouch>, withEvent event: UIEvent?) {
    guard let touch = touches.first else { return }
    if traitCollection.forceTouchCapability == .Available {
        print("Touch pressure is \(touch.force), maximum possible force is \(touch.maximumPossibleForce)")
    }
}
In addition to the UITouch APIs, Apple has also provided two new sets of classes adding 3D Touch functionality to apps: UIPreviewAction and UIApplicationShortcutItem.
UIPreviewAction allows developers to quickly present content in a new ‘preview’ overlay when you 3D Touch a UI element. This is a fantastic way to allow a quick glance at app-specific content, such as email messages, photos, or even websites without needing to commit to a full view controller transition.
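For a flavor of how that looks in code, here is a minimal sketch in current Swift; the MessagePreviewViewController name and the actions are illustrative assumptions, not from the answer above. A view controller used as the peek preview can override previewActionItems to vend its actions:

import UIKit

// Hypothetical preview controller shown when the user peeks an item with 3D Touch.
class MessagePreviewViewController: UIViewController {
    override var previewActionItems: [UIPreviewActionItem] {
        let share = UIPreviewAction(title: "Share", style: .default) { _, _ in
            // handle sharing of the previewed item here
        }
        let delete = UIPreviewAction(title: "Delete", style: .destructive) { _, _ in
            // handle deletion of the previewed item here
        }
        return [share, delete]
    }
}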
UIApplicationShortcutItem objects enable an amazing new feature right on the iOS Home screen. When users 3D Touch an app icon, a sheet of options is presented, allowing the user to quickly jump to a specific section of the app, or perform an in-app action.
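And here is a minimal sketch of registering a dynamic Home screen quick action and handling it, in current Swift; the shortcut type string, titles, and navigation are placeholder assumptions:

import UIKit

// Hypothetical app delegate wiring for a dynamic Home screen quick action.
class AppDelegate: UIResponder, UIApplicationDelegate {
    var window: UIWindow?
    private let composeType = "com.example.app.newMessage"   // placeholder identifier

    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // Register a dynamic quick action; static ones can also be declared in Info.plist.
        let compose = UIApplicationShortcutItem(
            type: composeType,
            localizedTitle: "New Message",
            localizedSubtitle: nil,
            icon: UIApplicationShortcutIcon(type: .compose),
            userInfo: nil)
        application.shortcutItems = [compose]
        return true
    }

    // Called when the user picks the shortcut from the Home screen.
    func application(_ application: UIApplication,
                     performActionFor shortcutItem: UIApplicationShortcutItem,
                     completionHandler: @escaping (Bool) -> Void) {
        let handled = (shortcutItem.type == composeType)
        if handled {
            // navigate to the compose screen here
        }
        completionHandler(handled)
    }
}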
Also, don't forget Apple's documentation for 3D Touch.
I have a custom UIGestureRecognizer for a two-finger gesture that works perfectly, except that it is very picky about how simultaneously the fingers have to touch the iOS device for touchesBegan to be called with two touches. touchesBegan is often called with only one touch even though I am trying to use two fingers.
Is there any way to make recognition of the number of touches more forgiving with regard to how simultaneously you have to place your fingers on the screen?
I've noticed that a two-finger tap is recognized even when you place one finger first and the other much later, while still holding the first finger down.
Here is the code for my touchesBegan function:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent) {
    if touches.count != 2 {
        state = .failed
        return
    }
    // Capture the first touch and store some information about it.
    if trackedTouch == nil {
        trackedTouch = touches.min { $0.location(in: self.view?.window).x < $1.location(in: self.view?.window).x }
        strokePhase = .topSwipeStarted
        topSwipeStartPoint = (trackedTouch?.location(in: view?.window))!
        // Ignore the other touch that had a larger x-value
        for touch in touches {
            if touch != trackedTouch {
                ignore(touch, for: event)
            }
        }
    }
}
For two-finger gestures, touchesBegan is most likely going to be called twice: once when you put the first finger on the screen, and once when the second finger lands.
In the state you keep, track both touches (or, for that matter, all current touches), and only start the gesture once both touches have been received and the gesture's start condition has been met.
import UIKit
// Needed to override the touches* methods and set `state` in a UIGestureRecognizer subclass.
import UIKit.UIGestureRecognizerSubclass

public class TwoFingerGestureRecognizer: UIGestureRecognizer {
    private var trackedTouches: Set<UITouch> = []

    public override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent) {
        for touch in touches {
            if self.trackedTouches.count < 2 {
                self.trackedTouches.insert(touch)
            } else {
                self.ignore(touch, for: event)
            }
        }
        if self.trackedTouches.count == 2 {
            // put your current logic here
        }
    }

    public override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent) {
        // movement handling goes here if needed
    }

    public override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent) {
        self.trackedTouches.subtract(touches)
    }

    public override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent) {
        self.trackedTouches.subtract(touches)
    }

    public override func reset() {
        super.reset()
        self.trackedTouches = []
    }
}
Don't worry about touchesBegan:withEvent:; instead use touchesEnded:withEvent: or touchesMoved:withEvent:. If the end state does not contain both fingers, set the state to .failed; otherwise set it to .ended.
Tapping the screen with more than one finger at exactly the same instant is practically impossible, so by the time touchesMoved:withEvent: fires you will find both fingers. I'm not sure about touchesEnded:withEvent:; it probably won't work, since lifting two fingers simultaneously is just as hard as placing two fingers simultaneously, but it's worth a try to see how it reacts.
I'd recommend making your code a little more forgiving. Although touchesBegan/Moved/Ended/Cancelled respond to events of "one or more touches" (as stated in the Apple docs), relying on the user's precision in touching the screen with two fingers simultaneously is not ideal. This assumes you have multi-touch enabled, which it sounds like you do.
Try tracking the touches yourself and executing your logic when your collection of touches reaches two. Obviously you'll also have to track when the touch count grows or shrinks and handle things accordingly, but I'd guess you're already handling this if your gesture is meant to be two-finger only (aside from the extra logic you'd have to add in touchesBegan).
Not sure why the other answers using touchesMoved:withEvent: didn't answer your question, but maybe you need a GitHub example.
Double touch move example.
Is touchesMoved an option for achieving the desired outcome? You could also implement a counter before setting the state to failed.
Don't forget to set isMultipleTouchEnabled = true
// `counter` is assumed to be a stored property on the recognizer, e.g. var counter = 0
override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent) {
    if touches.count != 2 {
        print("we do NOT have two touches")
        if counter > 100 { // a larger threshold here means "more forgiving"
            state = .failed
        }
        counter += 1
        return
    } else {
        print("we have two touches")
    }
}
I am trying to make my interface feel more responsive. A UIView changes color on user touch, and I want it to do so as soon as the view is touched.
I could implement a UITapGestureRecognizer, but a tap is not what I am looking for, since it requires the touch to end before being recognized.
I imagine this to be quite simple. Or am I wrong?
Do I create a custom UIGestureRecognizer class?
Have you tried touchesBegan?
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let touch = touches.first {
        // ...
    }
    super.touchesBegan(touches, with: event)
}
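If it helps, here is a rough, self-contained sketch of that idea; the ResponsiveView name, the colors, and the restore-on-end behavior are my own assumptions, not part of the answer above:

import UIKit

// Hypothetical view that highlights the moment a finger lands
// and restores its color when the touch ends or is cancelled.
class ResponsiveView: UIView {
    var normalColor: UIColor = .systemGray
    var highlightColor: UIColor = .systemBlue

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        super.touchesBegan(touches, with: event)
        backgroundColor = highlightColor
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        super.touchesEnded(touches, with: event)
        backgroundColor = normalColor
    }

    override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent?) {
        super.touchesCancelled(touches, with: event)
        backgroundColor = normalColor
    }
}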
I have a number of UIButton instances in a UIViewController, and I want to perform a couple of actions when any of these buttons is pressed with pressure (all the way down); I don't know the exact term here (force touch, maybe?).
So when a UIButton is pressed with force, I want to give haptic feedback via vibration, change the button's image, and do some other stuff. Then, when the pressure is released, I want to restore the button's image to its normal state and do some more stuff.
What is the easiest way to do this?
Should I make my own custom UIButton like the one below, or are there methods that can be overridden for 3D Touch "pressed" and "released"?
This is my custom UIButton code so far. Should I determine, by trial and error, what the maximum force should be? Also, how do I change the image for each button in the easiest way possible?
import AudioToolbox
import UIKit

class customButton: UIButton {
    override func touchesMoved(touches: Set<UITouch>, withEvent event: UIEvent?) {
        for touch in touches {
            print("% Touch pressure: \(touch.force / touch.maximumPossibleForce)")
            if touch.force > valueThatIMustFindOut {
                AudioServicesPlayAlertSound(SystemSoundID(kSystemSoundID_Vibrate))
                // change image source
                // call external function
            }
        }
    }

    override func touchesEnded(touches: Set<UITouch>, withEvent event: UIEvent?) {
        print("Touches End")
        // restore image source
        // call external function
    }
}
Please note that I am new to Swift, so I would like to use Interface Builder in Xcode to create the user interface as much as possible and avoid building the UI in code.
As for force touch, you need to detect whether it's available first:
func is3dTouchAvailable(traitCollection: UITraitCollection) -> Bool {
    return traitCollection.forceTouchCapability == UIForceTouchCapability.available
}

if is3dTouchAvailable(traitCollection: self.view!.traitCollection) {
    //...
}
and then, in e.g. touchesMoved, the pressure will be available as touch.force and touch.maximumPossibleForce:
func touchMoved(touch: UITouch, toPoint pos: CGPoint) {
    let location = touch.location(in: self)
    let node = self.atPoint(location)
    //...
    if is3dTouchEnabled {
        bubble.setPressure(pressurePercent: touch.force / touch.maximumPossibleForce)
    } else {
        // ...
    }
}
Here's a more detailed example with code samples:
http://www.mikitamanko.com/blog/2017/02/01/swift-how-to-use-3d-touch-introduction/
It's also good practice to respond to such "force touches" with haptic/taptic feedback, so the user feels the touch:
let generator = UIImpactFeedbackGenerator(style: .heavy)
generator.prepare()
generator.impactOccurred()
you might want to take a look at this post for details:
http://www.mikitamanko.com/blog/2017/01/29/haptic-feedback-with-uifeedbackgenerator/
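Putting the pieces above together, here is a rough sketch (my own, not taken from the linked posts) of a button subclass that fires haptic feedback and swaps its image once the touch force crosses a threshold; the 0.75 threshold and the "pressed"/"normal" image names are placeholders you would choose yourself:

import UIKit

// Hypothetical button that reacts to a deep press with haptics and an image swap.
class ForceButton: UIButton {
    private let triggerRatio: CGFloat = 0.75            // placeholder threshold
    private var isTriggered = false
    private let generator = UIImpactFeedbackGenerator(style: .heavy)

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        super.touchesMoved(touches, with: event)
        guard traitCollection.forceTouchCapability == .available,
              let touch = touches.first, touch.maximumPossibleForce > 0 else { return }
        let ratio = touch.force / touch.maximumPossibleForce
        if ratio >= triggerRatio && !isTriggered {
            isTriggered = true
            generator.impactOccurred()
            setImage(UIImage(named: "pressed"), for: .normal)   // placeholder asset name
        }
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        super.touchesEnded(touches, with: event)
        isTriggered = false
        setImage(UIImage(named: "normal"), for: .normal)         // placeholder asset name
    }
}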
My question: is there a way to adjust the "sensitivity" of UIPanGestureRecognizer so that it activates sooner, i.e. after moving fewer pixels?
I have a simple app with a UIImageView, and pinch and pan gesture recognizers tied to this so that the user can zoom in and draw on the image by hand. Works fine.
However, I notice that the stock UIPanGestureRecognizer doesn't report UIGestureRecognizerState.Changed until the user's finger has moved about 10 pixels.
Example: here's a screenshot showing several lines that I attempted to draw progressively shorter; there is a noticeable finite length below which no line gets drawn, because the pan gesture recognizer never changes state.
(Screenshot: IllustrationOfProgressivelyShorterLines.png)
...i.e., to the right of the yellow line, I was still trying to draw, and my touches were being recognized as touchesMoved events, but the UIPanGestureRecognizer wasn't firing its own "Moved" event and thus nothing was getting drawn.
(Note/clarification: That image takes up the entirety of my iPad's screen, so my finger is physically moving more than an inch even in the cases where no state change occurs in the recognizer. It's just that we're 'zoomed in' in terms of the transformation generated by the pinch gesture recognizer, so a few 'pixels' of the image take up a significant amount of the screen.)
This is not what I want. Any ideas on how to fix it?
Maybe some 'internal' parameter of UIPanGestureRecognizer I could get at if I sub-classed it or some such? I thought I'd try to sub-class the recognizer in a manner such as...
class BetterPanGestureRecognizer: UIPanGestureRecognizer {
    var initialTouchLocation: CGPoint!

    override func touchesBegan(touches: Set<UITouch>, withEvent event: UIEvent) {
        super.touchesBegan(touches, withEvent: event)
        initialTouchLocation = touches.first!.locationInView(view)
        print("pan: touch begin detected")
        print(self.state.hashValue) // this lets me check the state
    }

    override func touchesMoved(touches: Set<UITouch>, withEvent event: UIEvent) {
        super.touchesMoved(touches, withEvent: event)
        print("pan: touch move detected")
        print(self.state.hashValue) // this remains at the "began" value until you get beyond about 10 pixels
        let some_criterion = (touches.first!.isEqual(something) && event.isEqual(somethingElse))
        if some_criterion {
            self.state = UIGestureRecognizerState.Changed
        }
    }
}
...but I'm not sure what to use for some_criterion, etc.
Any suggestions?
Other alternatives that could work, but that I'd rather not have to do:
I could simply attach my UIPanGestureRecognizer to some parent, non-zoomed view, and then use affine transforms and such to remap the points of the pan touches onto the respective parts of the image. So why am I not doing that? Because the code is written so that lots of other objects hang off the image view and they all get the same gesture recognizers, and everything works just great without my having to keep track of anything (e.g. affine transformations); the problem only shows up if you're really, really zoomed in.
I could abandon UIPanGestureRecognizer and effectively write my own using touchesBegan and touchesMoved (which is kind of what I'm doing), but I like how UIPanGestureRecognizer differentiates itself from, say, pinch events in a way that I don't have to code up myself.
I could just specify some maximum zoom beyond which the user can't go. This fails to implement what I'm going for; I want to allow for fine-detail manipulation.
Thanks.
[Will choose your answer over mine (i.e., the following) if merited, so I won't 'accept' this answer just yet.]
Got it. The basic idea of the solution is to change the state whenever touches are moved, but use the delegate method for simultaneous gesture recognizers so as not to "lock out" any pinch (or rotation) gesture. This allows for one- and/or multi-finger panning, as you like, with no conflicts.
This, then, is my code:
import UIKit
// Needed to override the touches* methods and set `state` in a gesture recognizer subclass.
import UIKit.UIGestureRecognizerSubclass

class BetterPanGestureRecognizer: UIPanGestureRecognizer, UIGestureRecognizerDelegate {
    var initialTouchLocation: CGPoint!

    override init(target: AnyObject?, action: Selector) {
        super.init(target: target, action: action)
        self.delegate = self
    }

    override func touchesBegan(touches: Set<UITouch>, withEvent event: UIEvent) {
        super.touchesBegan(touches, withEvent: event)
        initialTouchLocation = touches.first!.locationInView(view)
    }

    override func touchesMoved(touches: Set<UITouch>, withEvent event: UIEvent) {
        super.touchesMoved(touches, withEvent: event)
        if UIGestureRecognizerState.Possible == self.state {
            self.state = UIGestureRecognizerState.Changed
        }
    }

    func gestureRecognizer(_: UIGestureRecognizer,
                           shouldRecognizeSimultaneouslyWithGestureRecognizer otherGestureRecognizer: UIGestureRecognizer) -> Bool {
        // Allow pinch/rotation to keep working, but don't recognize alongside another pan.
        return !(otherGestureRecognizer is UIPanGestureRecognizer)
    }
}
Generally, having that shouldRecognizeSimultaneouslyWithGestureRecognizer delegate method always return true is what many people will want. I make it return false when the other recognizer is another pan gesture, because without that logic (i.e., returning true no matter what), pan gestures were "passing through" to underlying views, and I didn't want that. You may just want to return true no matter what. Cheers.
Swift 5 + small improvement
I had a case where the accepted solution conflicted with basic taps on a toolbar that also had this BetterPanGestureRecognizer attached, so I added a minimum horizontal offset before changing the state to .changed:
import UIKit
// Needed to override the touches* methods and set `state` in a gesture recognizer subclass.
import UIKit.UIGestureRecognizerSubclass

class BetterPanGestureRecognizer: UIPanGestureRecognizer {
    private var initialTouchLocation: CGPoint?
    private let minHorizontalOffset: CGFloat = 5

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent) {
        super.touchesBegan(touches, with: event)
        self.initialTouchLocation = touches.first?.location(in: self.view)
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent) {
        super.touchesMoved(touches, with: event)
        if self.state == .possible,
           abs((touches.first?.location(in: self.view).x ?? 0) - (self.initialTouchLocation?.x ?? 0)) >= self.minHorizontalOffset {
            self.state = .changed
        }
    }
}
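For completeness, a hypothetical usage example in current Swift; the DrawingViewController, the imageView outlet, and handlePan(_:) are placeholders, not part of the answers above:

import UIKit

// Attach the custom recognizer to the zoomed image view like any other pan gesture.
class DrawingViewController: UIViewController {
    @IBOutlet var imageView: UIImageView!   // placeholder outlet

    override func viewDidLoad() {
        super.viewDidLoad()
        let pan = BetterPanGestureRecognizer(target: self, action: #selector(handlePan(_:)))
        imageView.addGestureRecognizer(pan)
    }

    @objc func handlePan(_ recognizer: BetterPanGestureRecognizer) {
        let point = recognizer.location(in: imageView)
        // start/continue drawing at `point`; fires as soon as the state becomes .changed
    }
}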
I am making a SpriteKit game where, in order to begin the game, you need to hold two separate spots (SKShapeNodes) for 3 seconds. If you let go of either finger, or move either finger off its node, the game will not start. I have it working fine with one spot, but when I try to do two spots, I'm stuck. What is the simplest way to detect the two correct touches on the correct nodes?
This doesn't seem like a very uncommon situation, so if anyone knows the best way to handle this, I would appreciate the help.
Swift preferred, also.
Set multipleTouchEnabled to YES and use the touchesForView: method.
You can get more specific information on multi touch from the Apple Docs Multitouch Events.
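A small sketch of that advice in current Swift (assuming the scene is presented in an SKView; the GameScene name is a placeholder):

import SpriteKit

class GameScene: SKScene {
    override func didMove(to view: SKView) {
        // Without this, the view only delivers one touch at a time.
        view.isMultipleTouchEnabled = true
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        // All touches currently on the view, not just the new ones in `touches`.
        guard let view = self.view, let allTouches = event?.touches(for: view) else { return }
        print("Currently \(allTouches.count) finger(s) on screen")
    }
}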
The main idea is to get all of the current touches whenever the user does anything, and process them together at the same time.
So, add 3 handlers to your Scene class:
override func touchesBegan(touches: Set<UITouch>, withEvent event: UIEvent?) {
    checkTouches((event?.allTouches())!)
}

override func touchesMoved(touches: Set<UITouch>, withEvent event: UIEvent?) {
    checkTouches((event?.allTouches())!)
}

override func touchesEnded(touches: Set<UITouch>, withEvent event: UIEvent?) {
    checkTouches((event?.allTouches())!)
}
In the checkTouches function you will see all of the touches with their updated properties (positions, etc.).
Simple example:
func checkTouches(touches: Set<UITouch>) {
    // iterate over all touches
    for t in touches {
        let touch = t as UITouch
        let touchLocation = touch.locationInNode(self)
        // YOUR CODE HERE: check which node contains touchLocation and how long it has been held
    }
}
Using this approach you will be able to handle any changes simultaneously.
For example, the user may touch your node, move a finger off of it, and then move back onto the node.
Enjoy!
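Building on that idea, here is a rough sketch of the specific "hold two spots for 3 seconds" requirement, written in current Swift; the node names "spotA"/"spotB", the "holdTimer" action key, and startGame() are placeholder assumptions of mine:

import SpriteKit

class HoldToStartScene: SKScene {
    private var isCountingDown = false

    override func didMove(to view: SKView) {
        view.isMultipleTouchEnabled = true
    }

    private func checkTouches(_ touches: Set<UITouch>) {
        // Ignore touches that are lifting off or cancelled.
        let active = touches.filter { $0.phase != .ended && $0.phase != .cancelled }
        // Collect the names of the topmost nodes under each active finger.
        let touchedNames = Set(active.compactMap { nodes(at: $0.location(in: self)).first?.name })
        let bothHeld = touchedNames.contains("spotA") && touchedNames.contains("spotB")

        if bothHeld && !isCountingDown {
            isCountingDown = true
            let wait = SKAction.wait(forDuration: 3)
            let start = SKAction.run { [weak self] in self?.startGame() }
            run(SKAction.sequence([wait, start]), withKey: "holdTimer")
        } else if !bothHeld && isCountingDown {
            // A finger lifted or slid off a spot: abort the countdown.
            isCountingDown = false
            removeAction(forKey: "holdTimer")
        }
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        checkTouches(event?.allTouches ?? [])
    }
    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        checkTouches(event?.allTouches ?? [])
    }
    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        checkTouches(event?.allTouches ?? [])
    }
    override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent?) {
        checkTouches(event?.allTouches ?? [])
    }

    private func startGame() {
        isCountingDown = false
        // game start logic goes here
    }
}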