iOS Swift 4 screen game: swipe boxes

I am creating an app where the user has to swipe all the boxes off the screen. The goal is to keep swiping until every box has been swiped, as in the example below.
So my questions are:
Is it better to create the boxes using a stack view, or to draw them manually by coordinates on the screen?
How do I detect that the user has swiped through a box (using UIGestureRecognizer)?
Note: when the user swipes through a box, the swiped box turns a different color.

Both a stack view and manual layout should work nicely. I would go with manual layout in this case, but that is just a preference because it gives you more control. The downside is that you need to reposition the views yourself when the screen size changes. A third option is a collection view.
The gesture recognizer should be pretty straightforward. You just add it to the superview of these boxes and check the location when the gesture starts and when it moves. A pan gesture seems the most appropriate, but it will not fire if the user just taps the screen. That may be a feature, but if you want to handle all touches you should either use a long press gesture with a zero press duration (it makes little sense, I know, but it works; see the setup sketch after the handler below) or simply override the touch methods:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let touch = touches.first {
        handleDrag(at: touch.location(in: viewWhereAllMiniViewsAre))
    }
}

override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let touch = touches.first {
        handleDrag(at: touch.location(in: viewWhereAllMiniViewsAre))
    }
}

func handleDrag(at location: CGPoint) {
    // TODO: handle the nodes
}
The gesture recognizer handler would look something like:
@objc func onDrag(_ sender: UIGestureRecognizer) {
    switch sender.state {
    case .began, .changed, .ended, .cancelled:
        handleDrag(at: sender.location(in: viewWhereAllMiniViewsAre))
    case .possible, .failed:
        break
    }
}
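Since a pan gesture misses plain taps, the zero-duration long press mentioned above is one option. A minimal sketch of wiring it up (assuming the onDrag(_:) handler and viewWhereAllMiniViewsAre from the snippets above; add this e.g. in viewDidLoad):
// A zero-duration long press behaves like "touch down + drag" and also fires for simple taps.
let recognizer = UILongPressGestureRecognizer(target: self, action: #selector(onDrag(_:)))
recognizer.minimumPressDuration = 0   // fire .began immediately on touch down
viewWhereAllMiniViewsAre.addGestureRecognizer(recognizer)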
Now all you need is your data source. An array of all of your items should be enough. Like:
static let rows: Int = 10
static let columns: Int = 10

var nodes: [Node] = {
    // Note: Array(repeating:count:) reuses the same instance if Node is a class;
    // build the array in a loop if every node needs to be its own object.
    return Array<Node>(repeating: Node(), count: LoginViewController.rows * LoginViewController.columns)
}()
And a list of all of your mini views:
var nodeViews: [UIView] = { ... position them or get them from stack view or from collection view }
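For the manual approach, a minimal sketch of laying the views out in a grid might look like this (assuming the rows/columns constants above and a container view named viewWhereAllMiniViewsAre; the names are only for illustration):
lazy var nodeViews: [UIView] = {
    var views: [UIView] = []
    let side = viewWhereAllMiniViewsAre.bounds.width / CGFloat(LoginViewController.columns)
    for index in 0..<(LoginViewController.rows * LoginViewController.columns) {
        let row = index / LoginViewController.columns
        let column = index % LoginViewController.columns
        let frame = CGRect(x: CGFloat(column) * side, y: CGFloat(row) * side, width: side, height: side)
        let view = UIView(frame: frame.insetBy(dx: 1, dy: 1))   // small gap between boxes
        view.backgroundColor = .lightGray
        viewWhereAllMiniViewsAre.addSubview(view)
        views.append(view)
    }
    return views
}()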
Now the implementation of the touch handler:
func handleDrag(at location: CGPoint) {
    nodeViews.enumerated().forEach { index, view in
        if view.frame.contains(location) {
            view.backgroundColor = UIColor.green
            nodes[index].selected = true
        }
    }
}
This is just an example, an easy one and rather a bad one from a maintenance perspective. In general I would rather have a node view as a custom UIView subclass holding a reference to its node, and have it act as the delegate of the Node instance so that the node reports when its selection state changes.
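A hedged sketch of that design (the NodeDelegate protocol and the NodeView class are assumptions for illustration, not code from the question):
import UIKit

protocol NodeDelegate: AnyObject {
    func nodeSelectionChanged(_ node: Node)
}

class Node {
    weak var delegate: NodeDelegate?
    var selected: Bool = false {
        didSet { delegate?.nodeSelectionChanged(self) }
    }
}

class NodeView: UIView, NodeDelegate {
    let node: Node

    init(node: Node, frame: CGRect) {
        self.node = node
        super.init(frame: frame)
        node.delegate = self
        backgroundColor = .lightGray
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    func nodeSelectionChanged(_ node: Node) {
        // the node reports its state change and the view recolors itself
        backgroundColor = node.selected ? .green : .lightGray
    }
}
With this, nodeViews becomes an array of NodeView instead of plain UIView, which is what the handler below relies on.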
This way you have a much cleaner solution when handling touches:
func handleDrag(at location: CGPoint) {
    nodeViews.first(where: { $0.frame.contains(location) })?.node.selected = true
}
Checking whether all boxes are green is then just:
var allGreen: Bool {
    return !nodes.contains(where: { $0.selected == false })
}
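A small hedged usage sketch: check the flag after each selection update, for example at the end of handleDrag(at:) (gameFinished() is a hypothetical completion method):
if allGreen {
    gameFinished()   // hypothetical: end the round once every box has been swiped
}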

Related

Swift iOS: Detect when a user drags/slides finger over a UILabel?

I have a project where I'm adding three UILabels to the view controller's view. When the user begins moving their finger around the screen, I want to be able to determine when their finger is moving over any of these UILabels.
I’m assuming a UIPanGestureRecognizer is what I need (for when the user is moving their finger around the screen) but I’m not sure where to add the gesture. (I can add a tap gesture to a UILabel, but this isn’t what I need)
Assuming I add the UIPanGestureRecognizer to the main view, how would I go about accomplishing this?
if gesture.state == .changed {
    // if finger moving over UILabelA…
    // …do this
    // else if finger moving over UILabelB…
    // …do something else
}
You can do this with either a UIPanGestureRecognizer or by implementing touchesMoved(...) - which to use depends on what else you might be doing.
For pan gesture, add the recognizer to the view (NOT to the labels):
@objc func handlePan(_ g: UIPanGestureRecognizer) {
    if g.state == .changed {
        // get the location of the gesture
        let loc = g.location(in: view)
        // loop through each label to see if its frame contains the gesture point
        theLabels.forEach { v in
            if v.frame.contains(loc) {
                print("Pan Gesture - we're panning over label:", v.text ?? "")
            }
        }
    }
}
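A short sketch of attaching the recognizer, e.g. in viewDidLoad (assuming the handlePan(_:) method above and that theLabels holds your three UILabels):
override func viewDidLoad() {
    super.viewDidLoad()
    // add the pan recognizer to the view controller's view, not to the labels
    let pan = UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:)))
    view.addGestureRecognizer(pan)
}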
For using touches, no need to add a gesture recognizer:
override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let t = touches.first {
        // get the location of the touch
        let loc = t.location(in: view)
        // loop through each label to see if its frame contains the touch point
        theLabels.forEach { v in
            if v.frame.contains(loc) {
                print("Touch - we're dragging the touch over label:", v.text ?? "")
            }
        }
    }
}

How to stop button action if user moves their finger off the button in sprite kit?

override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
    let touch = touches.first
    let positionInScene = touch!.location(in: self)
    let touchedNode = self.atPoint(positionInScene)
    if let name = touchedNode.name {
        if name == "leftbutton" {
            print("left button stopped")
            touchedNode.run(buttonStoppedPressAction)
            player?.removeAllActions()
        }
        if name == "rightbutton" {
            print("right button stopped")
            touchedNode.run(buttonStoppedPressAction)
            player?.removeAllActions()
        }
    }
}
Here I have code that stops the action when the user lifts their finger off a button, but only if they lift it inside the button. So if they press a button and then move their finger somewhere else on the screen while still pressing down, the button will not stop executing its code. Thank you for any help.
Essentially you should check the touch location at touch down and compare it to the location at touch up. If the touch is no longer inside the area of your button, you cancel all effects.
First, though, a point. It seems like you are handling button logic at the SKScene level, which is what tutorials often tell you to do. However, this may not be the best approach. The risks here, in addition to a cluttered SKScene, come from handling multiple objects and how they react to touch events, plus the additional complexity of multitouch (if allowed).
Years ago when I started with SpriteKit, I felt like this was a huge pain. So I made a button class that handles all the touch logic independently (and sends signals back to the parent when something needs to happen). Benefits: No needless clutter, no trouble distinguishing between objects, the ability to determine multitouch allowances per-node.
What I do in my class to check that the touch hasn't left the button before touch up is store the size of the button area (as a property of the object) and the touch position within it. Simple.
In fact, it has baffled me forever that Apple didn't just provide a rudimentary SKButton class by default. Anyhow, I think you might want to think about it. At least for me it saves sooo much time every day. And I've shipped multiple successful apps with the same custom button class.
EDIT: Below is my barebones Button class, followed by a sketch of the location check described above.
import SpriteKit

class Button: SKNode {

    private var background: SKSpriteNode?
    private var icon: SKNode?
    private var tapAction: () -> Void = {}

    override init() {
        super.init()
        isUserInteractionEnabled = true
    }

    required init?(coder aDecoder: NSCoder) {
        super.init(coder: aDecoder)
        isUserInteractionEnabled = true
    }

    // MARK: Switches

    public func switchButtonBackground(buttonBackgroundSize: CGSize, buttonBackgroundColor: SKColor) {
        background = SKSpriteNode(color: buttonBackgroundColor, size: buttonBackgroundSize)
        addChild(background!)
    }

    public func switchButtonIcon(_ buttonIcon: SKNode) {
        // remove the previous icon (if any) before attaching the new one
        icon?.removeFromParent()
        icon = buttonIcon
        addChild(icon!)
    }

    public func switchButtonTapAction(_ buttonTapAction: @escaping () -> Void) {
        tapAction = buttonTapAction
    }

    // MARK: Touch events

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        tapAction()
    }
}
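As noted above, the barebones class fires on any touch up. A hedged sketch of the location check described earlier (assuming the background sprite defines the tappable area) could replace touchesEnded in the class like this:
override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let touch = touches.first, let background = background else { return }
    // only fire the tap action if the finger is lifted inside the button's background
    let location = touch.location(in: self)
    if background.frame.contains(location) {
        tapAction()
    }
}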
You then create the Button object by first initializing it, assigning it a background using a size and color, then assigning it an icon and a function to run when tapped, and finally adding it as a child of the scene.
let icon = SKNode()
let size = CGSize(width: 20.0, height: 20.0)
let button = Button()
button.switchButtonBackground(buttonBackgroundSize: size, buttonBackgroundColor: .clear)
button.switchButtonIcon(icon)
button.switchButtonTapAction(buttonPressed)
addChild(button)
The background defines the touch area for the button, and you can either give it a color or make it .clear. The icon is meant to hold any text or images you want on top of the button; just package them into an SKNode and you're good to go. If you want to run a function with a parameter as the tap action, you can just pass a closure.
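For example, wrap a parameterized call in a closure (buttonPressed(named:) here is just a hypothetical method for illustration):
button.switchButtonTapAction { [weak self] in
    self?.buttonPressed(named: "leftbutton")   // hypothetical method taking a parameter
}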
Hope that helps! Let me know if you need any further help :).

UIButton touchDragEnter and touchDragExit called too often

How can I keep a UIButton's .touchDragEnter and .touchDragExit events from rapid firing? This question demonstrates the issue perfectly, but the only answer does not describe how to work around it. I'm trying to animate a button when the user's finger is on the button, and animate it again when their finger slides off. Are there any better ways to do this? If not, how should I stop my animation code from firing multiple times when the user's finger is right between an .enter and an .exit state?
You could instead track the location of the touch point itself and determine when the touch point moves in and out of the button
override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let touch = touches.first {
        let point = touch.location(in: self)
        // moving in to the button
        if button.frame.contains(point) && !wasInButton {
            // trigger animation
            wasInButton = true
        }
        // moving out of the button
        if !button.frame.contains(point) && wasInButton {
            // trigger animation
            wasInButton = false
        }
    }
}
wasInButton could be a boolean variable set to true when there is a touch down in the button's frame:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let touch = touches.first {
        let point = touch.location(in: self)
        if button.frame.contains(point) {
            wasInButton = true
            // trigger animation
        } else {
            wasInButton = false
        }
    }
}
This would require you to subclass the button's superview. And since you might not want to animate as soon as the point leaves the button's frame (because the user's finger or thumb would still be covering most of the button), you could instead do the hit test in a larger frame that encapsulates your button.
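A minimal sketch of that larger hit area (the 20-point padding is just an assumed value):
// Hypothetical: pad the button's frame so enter/exit only toggles once the finger
// has clearly crossed the padded boundary.
let hitFrame = button.frame.insetBy(dx: -20, dy: -20)
if hitFrame.contains(point) && !wasInButton {
    // trigger the "enter" animation
    wasInButton = true
}
if !hitFrame.contains(point) && wasInButton {
    // trigger the "exit" animation
    wasInButton = false
}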

Swipe to delete entire section in UITableView (iOS)

I've had a lot of success in the past using MGSwipeTableCell to swipe to dismiss cells, but my current task calls for swiping an entire section with the same behavior.
I currently have a swipe gesture recognizer on the UITableView. When the swipe gesture is triggered, I calculate the section in which the touch was received and delete the objects that populate that section (in Core Data), then call the delete animation:
// Delete the objects that populate the table data source
for notification in notifications {
    notification.deleted = true
}
DataBaseManager.sharedInstance.save()

let array = indexPathsToDelete
let indexSet = NSMutableIndexSet()
array.forEach(indexSet.add)

// Delete the section with animation
self.notificationsTableView.deleteSections(indexSet as IndexSet, with: .left)
This works, but is not ideal. Ideally we would like the whole section to drag with your finger (and, when released past a certain point, slide off screen), similar to MGSwipeTableCell. What is the best way to approach this? Is there another library which allows swiping to delete sections (I can't find any)? Or is this something I will have to create myself?
I haven't tested this but the idea is below. Take a view (self.header) and use the touchesBegan... method to detect the user placing their finger on the screen. Then follow the finger with the touchesMoved... method and calculate the difference between the last offset and the next; it should grow by 1 (or more) depending on how fast the user is moving their finger. Use this value to subtract from the origin.x of each cell's contentView.
var header: UIView!
var tableView: UITableView!
var offset: CGFloat = 0

override public func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    // Touches began. Disable user activity on the UITableView if needed.
    if let touch = touches.first {
        // Get the point where the touch started
        let point = touch.location(in: self.header)
        offset = point.x
    }
}

override public func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let touch = touches.first {
        // Get the point where the touch is moving in the header
        let point = touch.location(in: self.header)
        // Calculate the movement of the finger
        let x: CGFloat = offset - point.x
        if x > 0 {
            // Move cells by the offset
            moveCellsBy(x: x)
        }
        // Set the new offset
        offset = point.x
    }
}

override public func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
    // Reset the offset when the user lifts their finger
    offset = 0
}

func moveCellsBy(x: CGFloat) {
    // Move each visible cell by the offset
    for cell in self.tableView.visibleCells {
        // Place in an animation block for smoothness
        UIView.animate(withDuration: 0.05, animations: {
            cell.contentView.frame = CGRect(x: cell.contentView.frame.origin.x - x,
                                            y: cell.contentView.frame.origin.y,
                                            width: cell.contentView.frame.size.width,
                                            height: cell.contentView.frame.size.height)
        })
    }
}
Brandon's answer is correct; however, the INSPullToRefresh library has issues when you use touchesBegan and the other touch delegate methods.
What I had to do was implement a UIPanGestureRecognizer and track the touch when that gesture recognizer fires.
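A hedged sketch of that pan-gesture variant, reusing the moveCellsBy(x:) idea from the answer above (the header property and the recognizer wiring are assumptions):
// Assumed setup, e.g. in viewDidLoad():
//   let pan = UIPanGestureRecognizer(target: self, action: #selector(handleSectionPan(_:)))
//   header.addGestureRecognizer(pan)

@objc func handleSectionPan(_ gesture: UIPanGestureRecognizer) {
    switch gesture.state {
    case .changed:
        let translation = gesture.translation(in: header)
        if translation.x < 0 {
            // drag the cells left by the amount moved since the last callback
            moveCellsBy(x: -translation.x)
        }
        gesture.setTranslation(.zero, in: header)
    case .ended, .cancelled:
        // decide here whether to snap back or delete the section,
        // e.g. based on how far the cells have been dragged
        break
    default:
        break
    }
}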

UIButtons in specific zone of the screen make delay on Touch Down event

I'm creating a custom keyboard for iOS. I have 4 rows of keys, and each key has two actions: Touch Down to highlight the button, and Touch Up Inside to unhighlight it after 0.4 seconds.
But at the left edge of the screen there is a zone where the Touch Down event of any button is delayed by about a quarter of a second before the highlight shows.
See the image
So to see the highlighted version I have to hold the button or swipe right from it. The buttons are identical, no difference at all. When I switch from letters to symbols, this left edge zone has the same delay. I tried moving all the keys about 20px to the right and they worked fine, without delay; move them back to the edge and the delay came back. Then I noticed that pressing a button on its right edge, about 1-2 pixels in, produced no delay at all. So it seems the problem is specifically in this left edge zone of the screen.
By the way, I am running this app on my 5S; I tried it on my friend's 5C and got the same problem. But when I run it in the simulator, there is no such delay.
Use the new iOS 11 feature to solve this problem definitively:
var preferredScreenEdgesDeferringSystemGestures: UIRectEdge { get }
Documentation
I'm also creating a custom keyboard, and as far as I understand, this happens because preferredScreenEdgesDeferringSystemGestures does not work properly when overridden inside UIInputViewController, at least on iOS 13.
When you override this property in a regular view controller, it works as expected:
override var preferredScreenEdgesDeferringSystemGestures: UIRectEdge {
    return [.left, .bottom, .right]
}
That's however not the case for UIInputViewController.
UPD: It appears gesture recognizers still get the .began state update without the delay. So, instead of following the rather messy solution below, you can add a custom gesture recognizer to handle touch events.
You can quickly test this by adding a UILongPressGestureRecognizer with minimumPressDuration = 0 to your control view.
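A quick hedged sketch of that test (keyView and handleTouch(_:) are placeholder names):
// Hypothetical test setup, e.g. in viewDidLoad() of the key's controller:
//   let recognizer = UILongPressGestureRecognizer(target: self, action: #selector(handleTouch(_:)))
//   recognizer.minimumPressDuration = 0   // report .began immediately on touch down
//   keyView.addGestureRecognizer(recognizer)

@objc func handleTouch(_ gesture: UILongPressGestureRecognizer) {
    if gesture.state == .began {
        // apply the touch-down highlight here; .began arrives without the edge delay
    }
}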
Another solution:
My original workaround was to trigger the touch down effects inside hitTest(_ point: CGPoint, with event: UIEvent?) -> UIView?, which is called even when touches are delayed for the view.
You have to ignore the "real" touch down event when it fires about 0.4s later, or simultaneously with the touch up inside event. It is also probably better to apply this hack only when the tested point is inside the ~20pt lateral margins.
So, for example, for a view whose width equals the screen width, the implementation may look like:
let edgeProtectedZoneWidth: CGFloat = 20

override func hitTest(_ point: CGPoint, with event: UIEvent?) -> UIView? {
    let result = super.hitTest(point, with: event)

    guard result == self else {
        return result
    }

    if point.x < edgeProtectedZoneWidth || point.x > bounds.width - edgeProtectedZoneWidth {
        if !alreadyTriggeredFocus {
            isHighlighted = true
        }
        triggerFocus()
    }

    return result
}

private var alreadyTriggeredFocus: Bool = false

@objc override func triggerFocus() {
    guard !alreadyTriggeredFocus else { return }
    super.triggerFocus()
    alreadyTriggeredFocus = true
}

override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent?) {
    super.touchesCancelled(touches, with: event)
    alreadyTriggeredFocus = false
}

override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
    super.touchesEnded(touches, with: event)
    alreadyTriggeredFocus = false
}
...where triggerFocus() is the method you call on the touch down event. Alternatively, you may override touchesBegan(_:with:).
