So I have a basic game I'm developing for iOS.
There's a 2 player mode and it involves hitting buttons. Once the button is hit (or pressed) something happens in the game.
So, each player has half of the screen, with their own buttons inside their half that act independently.
Do I need to worry about player 1 hitting a button on the lower part of the screen at the exact moment player 2 hits a button on the top part of the screen?
If I do, do I need to handle it like the answer in THIS QUESTION?
I figure the chances are extremely slim, but it should nevertheless be handled.
You don't need to worry about them being tapped at the same time. Since your buttons are independent, each one should have its own action method that is called when tapped, and it's okay if those methods are called at the same time since they each do their own thing.
The reason the question you linked to is different is that they wanted to do special logic IF both buttons were tapped. Since you don't care whether they were tapped at the same time, you don't need to implement that code.
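To illustrate: because each button has its own action, simultaneous taps just invoke two unrelated methods, and UIKit delivers touch events serially on the main thread anyway, so the two methods can never literally run at the same time. A minimal sketch (the player objects and handleTap method are hypothetical):

- (IBAction)playerOneButtonTapped:(UIButton *)sender {
    [self.playerOne handleTap]; // touches only player 1's state (hypothetical)
}

- (IBAction)playerTwoButtonTapped:(UIButton *)sender {
    [self.playerTwo handleTap]; // touches only player 2's state (hypothetical)
}

Just make sure exclusiveTouch is left at its default of NO on the buttons; otherwise the first finger down would block touches on the other button.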
In my React Native app I'm trying to have a button that the user can long press and then, without lifting their finger, interact with another view. Here is roughly what I want:
Think of it like how 3D Touch/long press worked prior to iOS 13/14 (depending on where in the system and which device): the user either 3D touched or long pressed a button, for example an app icon, and a contextual menu popped up. Then, without lifting the finger, they could hover over one of the menu's buttons and release, triggering that button's tap.
I have complete control over my buttons, touchables, and views (even the tab bar is custom, as opposed to the illustrations I made above).
How can I achieve this? (I'm on React Native 0.63)
There may be a better solution to this, but off the top of my head I would use the Gesture Responder System:
https://reactnative.dev/docs/gesture-responder-system
You can have one container view that wraps the tab bar and the buttons. Then listen to the onResponderMove event to decide when the buttons should appear; this may happen, for example, when locationY exceeds some value.
You can also use the onResponderRelease event (again with the help of the locationX and locationY parameters) to determine whether the finger was released over a button.
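A rough sketch of that idea (the threshold and the button's frame are placeholders you would compute from your real layout, e.g. via onLayout):

import React, { useState } from 'react';
import { View } from 'react-native';

const SHOW_MENU_Y = 120; // placeholder: reveal the buttons past this point

export default function LongPressMenu() {
  const [menuVisible, setMenuVisible] = useState(false);

  // Placeholder hit test: compare against the button's measured frame.
  const releasedOverButton = (x, y) => x > 40 && x < 140 && y > 40 && y < 90;

  return (
    <View
      style={{ flex: 1 }}
      onStartShouldSetResponder={() => true}
      onResponderMove={(evt) => {
        // Reveal the buttons once the finger has moved far enough.
        if (evt.nativeEvent.locationY > SHOW_MENU_Y) setMenuVisible(true);
      }}
      onResponderRelease={(evt) => {
        const { locationX, locationY } = evt.nativeEvent;
        if (menuVisible && releasedOverButton(locationX, locationY)) {
          // Trigger the "tap" of whichever button the finger ended on.
        }
        setMenuVisible(false);
      }}
    >
      {/* custom tab bar, buttons, etc. */}
    </View>
  );
}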
I want to build some analytics into my app, and I would like to send some data when the user leaves the current screen, though there are multiple ways they can do so (back button, another button, the sidebar menu, etc.). Is there any efficient way to do this? I really don't feel like adding it to every possible button that can take the user to a different screen.
You should call your function inside viewWillDisappear, which is called every time the current view controller is about to disappear from the screen. See the documentation of viewWillDisappear.
Also see the view controller life cycle diagram (thanks @Paolo for the tip) in the documentation.
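For example (the analytics call here is a placeholder for whatever reporting API you actually use):

- (void)viewWillDisappear:(BOOL)animated {
    [super viewWillDisappear:animated]; // always call super

    // Runs no matter how the user leaves: back button, another button,
    // the sidebar menu, and so on.
    [MyAnalytics trackScreenExit:NSStringFromClass([self class])]; // placeholder API
}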
Suppose there is a button on the first screen of an app which is connected to the second screen (i.e. tapping that button brings up the second screen) and also to a function in the first screen's class.
Now my question is: what is the flow? Does the function execute first, or the second screen, or will both execute simultaneously?
You have to know that iOS applications usually use the MVC pattern.
So for your example, you have the View (V), containing the design, your button, etc.
And the Controller (C), implementing the logic.
The answer is: you can decide which one comes first; it just depends how you implement it.
But I don't really understand the logic in your example; can you be more specific?
You should try to make one interface element serve one purpose.
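For instance, one way to make the order explicit is to wire the button only to an action method and trigger the transition from there (a sketch; the segue identifier and doSomeWork are hypothetical):

- (IBAction)buttonTapped:(UIButton *)sender {
    [self doSomeWork]; // the function runs first... (hypothetical method)

    // ...and only then is the second screen presented.
    [self performSegueWithIdentifier:@"ShowSecondScreen" sender:sender];
}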
I've created a simple xib file with three buttons and a UISlider.
When I start the app on my phone I'm able to tap the buttons no problem, but the UISlider is locked and doesn't move.
However, if I keep trying to move it with my finger it eventually scrolls left and right fine, but only after about a minute of furious screen swiping.
I thought it might be because it was hidden behind something on screen (not that there's anything for it to hide behind), so I tried 'arrange -> send to front', but this doesn't help.
Does anyone know why it might be doing this?
In my iOS application, when the user pushes a button in a view, an NSTimer is triggered in the controller.
On the third tick, I would like to make the background of the view blink.
I've written the blinking function in the view (it shouldn't be written in the controller, should it?).
I can trigger this blinking function in the controller with:
LostView *lostView = (LostView *)self.view;
[lostView blinkBackground];
But that's bad, isn't it? The controller shouldn't know the view, nor the name of the function, should it?
I would like to apply the MVC pattern.
Is the observer/observable pattern applicable in this situation?
Thanks
No, it's not bad at all. It looks like you implemented the method that makes the view blink in the view itself. That's fine, because it's directly related to the visual representation (i.e. the view part of MVC). You could reuse that view in any other app that requires a blinking view.
Since that blinking is triggered by an NSTimer, I assume it's somehow dependent on the logic of this specific app. The view can't (shouldn't) know when it's supposed to blink. That would only be the case if the blinking were a direct reaction to an interaction with it or with another related part of the UI, or if it were part of a more complex element, for example a countdown timer that always starts to blink when it reaches the last 10 seconds or so. (UIButton, for example, provides the ability to highlight itself when it's touched.)
But if the blinking is a reaction to some state transition in your app, say some new data becomes available or a countdown is about to expire, the controller is a perfectly reasonable place to trigger it.
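As a concrete sketch, keeping the "how" in the view and the "when" in the controller (the tick-counting property is hypothetical):

// LostView.m - the view only knows HOW to blink.
- (void)blinkBackground {
    UIColor *originalColor = self.backgroundColor;
    [UIView animateWithDuration:0.2 animations:^{
        self.backgroundColor = [UIColor redColor];
    } completion:^(BOOL finished) {
        [UIView animateWithDuration:0.2 animations:^{
            self.backgroundColor = originalColor;
        }];
    }];
}

// Controller - owns the timer and decides WHEN to blink.
- (void)timerTicked:(NSTimer *)timer {
    self.tickCount += 1; // hypothetical NSInteger property
    if (self.tickCount == 3) {
        [(LostView *)self.view blinkBackground];
    }
}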