I've never used gestures before, and wrote this bit of code (Note: I am using Xamarin and iOS).
When I use the code below, the only value ever received is "Right", despite what I do in the simulator or on my actual iPhone. I'm stumped as to why. I figure I should either get nothing, or everything should work.
// Swipe gesture
this.View.AddGestureRecognizer(new UISwipeGestureRecognizer(sw =>
{
    if (sw.Direction == UISwipeGestureRecognizerDirection.Left) {
        txtTestLabel.Text = "Left";
    }
    else if (sw.Direction == UISwipeGestureRecognizerDirection.Right) {
        txtTestLabel.Text = "Right";
    }
    else {
        txtTestLabel.Text = sw.Direction.ToString();
    }
}));
The default Direction of a UISwipeGestureRecognizer is Right (see the documentation), so your recognizer only ever fires for right swipes, and reading sw.Direction in the handler simply returns that configured value. That is why you only ever see "Right". Set the Direction property to Left to track left swipes, and use one recognizer per direction you care about.
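As a sketch (reusing the txtTestLabel from the question), handling both directions means adding one recognizer per direction, since each recognizer only fires for its own configured Direction:

```csharp
// One recognizer per direction: each only fires for its own Direction.
var left = new UISwipeGestureRecognizer(_ => txtTestLabel.Text = "Left")
{
    Direction = UISwipeGestureRecognizerDirection.Left
};
var right = new UISwipeGestureRecognizer(_ => txtTestLabel.Text = "Right")
{
    Direction = UISwipeGestureRecognizerDirection.Right
};
this.View.AddGestureRecognizer(left);
this.View.AddGestureRecognizer(right);
```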
I have a view controller, ThirdViewControllerPassenger, which has multiple subviews on it, including a UICollectionView called collectionView with horizontally scrolling cards. So far, so good. I have written code to be executed on a tap inside the UICollectionViewCells. The tap action does work and prints to the console. However, when one of these cards is pressed I want to hide the whole UICollectionView. I have set up an onTap function as shown here:
@objc func onTap(_ gesture: UIGestureRecognizer) {
    if gesture.state == .ended {
        /* action */
        if favCoordinate.latitude == 1.0 && favCoordinate.longitude == 1.0 {
            // There has been an error OR the user has pressed the new Address button
            // do
        } else {
            ThirdViewControllerPassenger().collectionView.isHidden = true
            if ThirdViewControllerPassenger().collectionView.isHidden == true {
                print("done!")
            }
        }
    }
}
As you can see, I have already done some troubleshooting. I have tested ThirdViewControllerPassenger().collectionView.isHidden = true from ThirdViewControllerPassenger directly, and it worked. It does not work, however, from a cell: "done!" never gets printed to the console, so the call never arrives. I wonder why, or what I am doing wrong.
Don't mind the first if statement, that function is not written yet. That should not matter. I am guessing that the rest of my code would not lead to any more clues.
Every ThirdViewControllerPassenger() here

ThirdViewControllerPassenger().collectionView.isHidden = true
if ThirdViewControllerPassenger().collectionView.isHidden == true {
    print("done!")
}

is a new instance, not the one actually on screen. You need a reference to the real, presented instance, for example via a delegate:

delegate.collectionView.isHidden = true
if delegate.collectionView.isHidden == true {
    print("done!")
}
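One common way to get a reference to the on-screen instance from a cell is delegation. A minimal sketch, noting that the protocol and method names here are illustrative and not from the question:

```swift
import UIKit

protocol CardCellDelegate: AnyObject {
    func cardCellDidRequestHide(_ cell: UICollectionViewCell)
}

class CardCell: UICollectionViewCell {
    // weak to avoid a retain cycle between cell and controller
    weak var delegate: CardCellDelegate?

    @objc func onTap(_ gesture: UIGestureRecognizer) {
        if gesture.state == .ended {
            delegate?.cardCellDidRequestHide(self)
        }
    }
}

// In ThirdViewControllerPassenger, set cell.delegate = self in
// collectionView(_:cellForItemAt:) and implement the protocol:
extension ThirdViewControllerPassenger: CardCellDelegate {
    func cardCellDidRequestHide(_ cell: UICollectionViewCell) {
        collectionView.isHidden = true  // the existing, visible collection view
    }
}
```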
In NativeScript Angular, I can swipe from the left edge to the right to go back on iOS. Is there any way I can detect or capture this swipe back?
The reason is that I have a button on screen that executes "this.routerExtensions.back()" when tapped. I would like to identify whether a "back" action came from this button or from the iOS swipe gesture.
Thanks
NativeScript has this property on pages.
https://docs.nativescript.org/api-reference/classes/ui_page.page.html#enableswipebacknavigation
If you set that to false, you should then be able to create your own gesture for going back. Even on native iOS, I am not sure you can directly tap into the back gesture without instead replacing it with your own.
This is how I was able to solve it for Android and iOS:
Adjust the constructor to intercept the back-press behavior on Android, and disable the default swipe back on iOS:
constructor(...) {
    if (isAndroid) {
        Application.android.on(
            AndroidApplication.activityBackPressedEvent,
            (data: AndroidActivityBackPressedEventData) => {
                data.cancel = true;
                ...yourCustomFunction()
            }
        );
    } else {
        page.enableSwipeBackNavigation = false;
    }
}
For Android this already works; for iOS you then need to write a function you can use in a template (this is also how you recognize the swipe action):
public onSwipe = (args: SwipeGestureEventData) => {
    if (!isAndroid) {
        if (args.direction === SwipeDirection.right) {
            ...yourCustomFunction()
        }
    }
};
Disable the default pop gesture recognizer and add the swipe handler on top of your template/element, and it should work on iOS as well:
<StackLayout [interactivePopGestureRecognizer]="false" (swipe)='onSwipe($event)'>
...
</StackLayout>
I want to do something when the focused VoiceOver element is tapped again.
The function accessibilityElementDidBecomeFocused() only gets called when the element is focused for the first time.
When we single-tap again on the same focused element, this function won't get called. Can anybody suggest some solution?
Can anybody suggest some solution?
Here are some quick ideas to detect a single tap on the same focused element:
Create a variable nbSelections that will count the number of single taps.
Create a tap gesture recognizer on your element to increment the number of taps, for instance:
let tap = UITapGestureRecognizer(target: self,
                                 action: #selector(addTapCounter(info:)))
tap.numberOfTapsRequired = 1
self.addGestureRecognizer(tap)
Add the trait that allows catching the single tap directly on the element:
override var accessibilityTraits: UIAccessibilityTraits {
    get { return .allowsDirectInteraction }
    set { }
}
Set nbSelections = 0 when the element loses the focus:
override open func accessibilityElementDidLoseFocus() { nbSelections = 0 }
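Putting the pieces together, the addTapCounter(info:) action referenced in the selector above could look something like this (a sketch; what you do past the first tap is up to you):

```swift
var nbSelections = 0

@objc func addTapCounter(info: UITapGestureRecognizer) {
    nbSelections += 1
    if nbSelections > 1 {
        // the already-focused element was single-tapped again: react here
    }
}
```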
Combining these ideas with the UIAccessibilityFocus informal protocol could be a good line of research toward your goal.
However, this solution assumes that the single tap is made directly on the element itself (hence the trait) and not anywhere else; I don't see how to catch that event off the top of my head.
Imagine an iOS screen where you can move your finger up/down to do something (imagine say, "scale"),
Or,
as a separate function you can move your finger left/right to do something (imagine say, "change color" or "rotate").
They are separate functions; you can only do one at a time.
So, if the gesture "begins as" a horizontal one, it remains only horizontal. Conversely, if it "begins as" vertical, it remains only vertical.
It is a little bit tricky to do this; I present exactly how to do it below. The fundamental pattern is:
if r.state == .began || panway == .wasZeros {
    prev = tr
    if tr.x == 0 && tr.y == 0 {
        panway = .wasZeros
        return
    }
    if abs(tr.x) > abs(tr.y) { ... set panway }
}
This works very well and here's exactly how to do it in Swift.
In the storyboard, drag a UIPanGestureRecognizer onto the view in question. Connect its delegate to the view controller and connect its action to this method, ultraPan:
enum Panway {
    case vertical
    case horizontal
    case wasZeros
}

var panway: Panway = .vertical
var prev: CGPoint!

@IBAction func ultraPan(_ r: UIPanGestureRecognizer) {
    let tr = r.translation(in: r.view)
    if r.state == .began || panway == .wasZeros {
        prev = tr
        if tr.x == 0 && tr.y == 0 {
            panway = .wasZeros
            return
        }
        if abs(tr.x) > abs(tr.y) {
            panway = .horizontal
        } else {
            panway = .vertical
        }
    }
    if panway == .horizontal { // your left-right function
        var h = tr.x - prev.x
        let sensitivity: CGFloat = 50.0
        h = h / sensitivity
        // adjust your left-right function, for example:
        someProperty = someProperty + h
    }
    if panway == .vertical { // bigger/smaller
        var v = tr.y - prev.y
        let sensitivity: CGFloat = 2200.0
        v = v / sensitivity
        // adjust your up-down function, for example:
        someOtherProperty = someOtherProperty + v
    }
    prev = tr
}
That's fine.
But it would surely be better to make a new subclass (or something) of UIPanGestureRecognizer, so that there are two new concepts......
UIHorizontalPanGestureRecognizer
UIVerticalPanGestureRecognizer
Those would be basically one-dimensional panners.
I have absolutely no clue whether you would ... subclass the delegates? or the class? (what class?), or perhaps use some sort of extension ... indeed, I am basically completely clueless on this :)
The goal is in one's code, you can have something like this ...
@IBAction func horizontalPanDelta( ..? ) {
    someProperty = someProperty + delta
}
@IBAction func verticalPanDelta( ..? ) {
    otherProperty = otherProperty + delta
}
How to inherit/extend UIPanGestureRecognizer in this way??
But it would surely be better to make a new subclass (or something) of UIPanGestureRecognizer, so that there are two new concepts......
UIHorizontalPanGestureRecognizer
UIVerticalPanGestureRecognizer
Those would be basically one-dimensional panners
Correct. That's exactly how to do it, and is the normal approach. Indeed, that is exactly what gesture recognizers are for: each g.r. recognizes only its own gesture, and when it does, it causes the competing gesture recognizers to back off. That is the whole point of gesture recognizers! Otherwise, we'd still be back in the pre-g.r. days of pure touchesBegan and so forth (oh, the horror).
My online book discusses, in fact, the very example you are giving here:
http://www.apeth.com/iOSBook/ch18.html#_subclassing_gesture_recognizers
And here is an actual downloadable example that implements it in Swift:
https://github.com/mattneub/Programming-iOS-Book-Examples/blob/master/bk2ch05p203gestureRecognizers/ch18p541gestureRecognizers/HorizVertPanGestureRecognizers.swift
Observe the strategy used here. We make UIPanGestureRecognizer subclasses. We override touchesBegan and touchesMoved: the moment recognition starts, we fail as soon as it appears that the next touch is along the wrong axis. (You should watch Apple's video on this topic; as they say, when you subclass a gesture recognizer, you should "fail early, fail often".) We also override translationInView so that only movement directly along the axis is possible. The result (as you can see if you download the project itself) is a view that can be dragged either horizontally or vertically but in no other manner.
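For reference, the "fail early" strategy can be sketched like this (a simplified sketch; the linked file is the authoritative version):

```swift
import UIKit.UIGestureRecognizerSubclass  // required to set `state` in a subclass

class HorizontalPanGestureRecognizer: UIPanGestureRecognizer {
    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent) {
        super.touchesMoved(touches, with: event)
        // Fail early: if the touch starts moving mostly vertically,
        // bow out so a competing vertical recognizer can take over.
        if state == .possible {
            let t = translation(in: view)
            if abs(t.y) > abs(t.x) {
                state = .failed
            }
        }
    }
}
```

A vertical counterpart is the same with the axis test flipped.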
@Joe, this is something I scrambled together quickly for the sake of this question. For ease of writing, I simply gave it a callback rather than implementing a target-action system. Feel free to change it.
enum PanDirection {
    case horizontal
    case vertical
}

class OneDimensionalPan: NSObject {
    var handler: ((CGFloat) -> Void)?
    var direction: PanDirection

    @IBOutlet weak var panRecognizer: UIPanGestureRecognizer! {
        didSet {
            panRecognizer.addTarget(self, action: #selector(panned))
        }
    }
    @IBOutlet weak var panView: UIView!

    override init() {
        direction = .horizontal
        super.init()
    }

    @objc private func panned(_ recognizer: UIPanGestureRecognizer) {
        let translation = panRecognizer.translation(in: panView)
        let delta: CGFloat
        switch direction {
        case .horizontal:
            delta = translation.x
        case .vertical:
            delta = translation.y
        }
        handler?(delta)
    }
}
In storyboard, drag a UIPanGestureRecognizer onto your view controller, then drag an Object onto the top bar of the view controller, set its class, and link its IBOutlets. After that you should be good to go. In your view controller code you can set its callback and pan direction.
UPDATE FOR EXPLANATION:
I want to clarify why I made the class a subclass of NSObject: the driving idea behind the class is to keep any unnecessary code out of the UIViewController. By subclassing NSObject, I am able to drag an Object onto the view controller's top bar inside the storyboard and set its class to OneDimensionalPan. From there, I am able to connect @IBOutlets to it. I could have made it a base class, but then it would have had to be instantiated programmatically; this is far cleaner. The only code inside the view controller is for accessing the object itself (through an @IBOutlet), setting its direction, and setting its callback.
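Usage from the view controller is then just a few lines (names here are illustrative):

```swift
@IBOutlet var pan: OneDimensionalPan!

override func viewDidLoad() {
    super.viewDidLoad()
    pan.direction = .horizontal
    pan.handler = { delta in
        print("panned by \(delta)")
    }
}
```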
I'm developing a game on Unity for iOS devices. I've implemented the following code for touch:
void Update () {
    ApplyForce ();
}

void ApplyForce() {
    if (Input.touchCount > 0 && Input.GetTouch (0).phase == TouchPhase.Began) {
        Debug.Log("Touch Occured");
    }
}
I've dragged this script onto my game object which is a sphere. But the log message appears no matter where I touch. I want to detect touch only when the users touches the object.
Stick the code below in your camera, or something else that will persist in the level you're designing. It can be anything, even an empty GameObject. The only important thing is that it exists only ONCE in your level; otherwise you'll have multiple touch checks running at the same time, which will put a seriously heavy load on the system.
void Update () {
    if (Input.touchCount > 0 && Input.GetTouch (0).phase == TouchPhase.Began)
    {
        Ray ray = Camera.main.ScreenPointToRay( Input.GetTouch(0).position );
        RaycastHit hit;
        if ( Physics.Raycast(ray, out hit) && hit.transform.gameObject.name == "myGameObjectName")
        {
            hit.transform.GetComponent<TouchObjectScript>().ApplyForce();
        }
    }
}
In the above script, the check hit.transform.gameObject.name == "myGameObjectName" can be replaced by any other way of verifying that the object hit by the raycast is the one you want to apply a force to. One easy alternative is comparing tags, for example.
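A tag-based check could look like this (assuming you have tagged the object, say, "Touchable" in the Inspector; the tag name is illustrative):

```csharp
// Same raycast as above, but matching by tag instead of by name.
if (Physics.Raycast(ray, out hit) && hit.transform.CompareTag("Touchable"))
{
    hit.transform.GetComponent<TouchObjectScript>().ApplyForce();
}
```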
Another thing to keep in mind is that raycast will originate at your camera (at the position relative to your finger's touch) and hit the first object with a collider. So if your object is hidden behind something else, you won't be able to touch it.
Also, put the script below in your touch object (in this example, called TouchObjectScript.cs):
void Update () {
}

public void ApplyForce() {
    Debug.Log("Touch Occured");
}
If something wasn't clear, or you need any further help, leave a comment and I'll get back to you.
The best way to do this is just adding:

void OnMouseDown () {}

to a script on the GameObject you want to be clicked.
The problem with this is that you cannot press two things at the same time.
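A minimal sketch of that approach (the script goes on the GameObject itself, which needs a collider; the class name is illustrative):

```csharp
using UnityEngine;

public class ClickableSphere : MonoBehaviour
{
    // Called by Unity when this object's collider is clicked or tapped.
    void OnMouseDown()
    {
        Debug.Log("Touch Occured");
    }
}
```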