Custom UIButton .touchDragEnter and .touchDragExit area/size? - ios

Is it possible to customize the area around a button at which a touch is considered .touchDragExit (or .touchDragEnter), i.e. outside its selectable area?
To be more specific: I tap the UIButton and .touchDown is called; then I start dragging my finger away from the button, and at some distance it is no longer selected (and of course I can drag back in to select again). I would like to modify that distance.
Is this even possible?

You need to override the UIButton functions continueTracking(_:with:) and touchesEnded(_:with:).
Adapting @Dean's link, the implementation would be as follows (Swift 4.2):
class ViewController: UIViewController {
    @IBOutlet weak var button: DragButton!

    override func viewDidLoad() {
        super.viewDidLoad()
    }
}
class DragButton: UIButton {
    private let _boundsExtension: CGFloat = 0 // Adjust this as needed

    override open func continueTracking(_ touch: UITouch, with event: UIEvent?) -> Bool {
        let outerBounds: CGRect = bounds.insetBy(dx: -_boundsExtension, dy: -_boundsExtension)
        let currentLocation: CGPoint = touch.location(in: self)
        let previousLocation: CGPoint = touch.previousLocation(in: self)

        let touchOutside: Bool = !outerBounds.contains(currentLocation)
        if touchOutside {
            let previousTouchInside: Bool = outerBounds.contains(previousLocation)
            if previousTouchInside {
                print("touchDragExit")
                sendActions(for: .touchDragExit)
            } else {
                print("touchDragOutside")
                sendActions(for: .touchDragOutside)
            }
        } else {
            let previousTouchOutside: Bool = !outerBounds.contains(previousLocation)
            if previousTouchOutside {
                print("touchDragEnter")
                sendActions(for: .touchDragEnter)
            } else {
                print("touchDragInside")
                sendActions(for: .touchDragInside)
            }
        }
        return true
    }

    override open func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }
        let outerBounds: CGRect = bounds.insetBy(dx: -_boundsExtension, dy: -_boundsExtension)
        let currentLocation: CGPoint = touch.location(in: self)

        let touchInside: Bool = outerBounds.contains(currentLocation)
        if touchInside {
            print("touchUpInside action")
            sendActions(for: .touchUpInside)
        } else {
            print("touchUpOutside action")
            sendActions(for: .touchUpOutside)
        }
    }
}
Try changing the _boundsExtension value.
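The key trick in the code above is that a negative inset grows bounds, and touches are then compared against that larger rectangle. A UIKit-free sketch of the hit-area math (the function name is mine, not from the answer):

```swift
import Foundation // CGPoint/CGRect also live here on non-Apple platforms

// A negative inset grows `bounds`, so the region that still counts as
// "inside" for drag events becomes larger than the button itself.
func isInsideExtendedBounds(_ point: CGPoint,
                            bounds: CGRect,
                            boundsExtension: CGFloat) -> Bool {
    let outerBounds = bounds.insetBy(dx: -boundsExtension, dy: -boundsExtension)
    return outerBounds.contains(point)
}
```

For a 100×40 button with a boundsExtension of 20, a touch at (110, 20) is still treated as inside, while one at (130, 20) is not.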

The drag area is exactly equal to the area defined by bounds.
So if you want to customize the drag area, simply customize the bounds of your button.

Related

Swift -How to pass touch events to multiple buttons that are part of a second UIWindow but ignore everything else

I followed this answer to create a button that is part of a second window sitting on top of all other windows. It works fine with one button because it allows only that button to receive touch events and ignores everything else inside the second window, but the main window underneath it still receives all of its touch events.
// This comment is from @robmayoff's answer
// As I mentioned, I need to override pointInside(_:withEvent:) so that the window ignores touches outside the button:
var button: UIButton?

private override func pointInside(point: CGPoint, withEvent event: UIEvent?) -> Bool {
    guard let button = button else { return false }
    let buttonPoint = convertPoint(point, toView: button)
    return button.pointInside(buttonPoint, withEvent: event)
}
The way I have it now inside the SecondWindow, I can receive touch events for the cancelButton but the other buttons get ignored along with everything else. The question is how do I receive touch events for the other buttons inside the second window but still ignore everything else?
The Second UIWindow. This is what I tried but it didn't work:
class SecondWindow: UIWindow {
    var cancelButton: UIButton?
    var postButton: UIButton? // I need this to also receive touch events
    var reloadButton: UIButton? // I need this to also receive touch events

    init() {
        super.init(frame: UIScreen.main.bounds)
        backgroundColor = nil
    }

    override func point(inside point: CGPoint, with event: UIEvent?) -> Bool {
        if let cancelButton = cancelButton {
            let cancelButtonPoint = convert(point, to: cancelButton)
            return cancelButton.point(inside: cancelButtonPoint, with: event)
        }
        if let postButton = postButton {
            let postButtonPoint = convert(point, to: postButton)
            return postButton.point(inside: postButtonPoint, with: event)
        }
        if let reloadButton = reloadButton {
            let reloadButtonPoint = convert(point, to: reloadButton)
            return reloadButton.point(inside: reloadButtonPoint, with: event)
        }
        return false
    }

    override func hitTest(_ point: CGPoint, with event: UIEvent?) -> UIView? {
        let hitView = super.hitTest(point, with: event)
        guard let safeHitView = hitView else { return nil }
        if safeHitView.isKind(of: SecondController.self) { return nil }
        return safeHitView
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}
The vc with the buttons:
class SecondController: UIViewController {
    lazy var cancelButton: UIButton = { ... }()
    lazy var postButton: UIButton = { ... }()
    lazy var reloadButton: UIButton = { ... }()
    let window = SecondWindow()

    init() {
        super.init(nibName: nil, bundle: nil)
        window.frame = CGRect(x: 0, y: 0, width: UIScreen.main.bounds.width, height: UIScreen.main.bounds.height)
        window.cancelButton = cancelButton
        window.postButton = postButton
        window.reloadButton = reloadButton
        window.isHidden = false
        window.backgroundColor = .clear
        window.rootViewController = self
        window.windowLevel = UIWindow.Level.normal
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .clear
        // Anchors for buttons
    }
}
I check whether the touched point is inside the receiver (whichever button is touched). If it is, it returns true, the touch is acknowledged, and the button responds; if not, it does nothing and moves on to the next button, repeating the check until the last button.
override func point(inside point: CGPoint, with event: UIEvent?) -> Bool {
    guard let cancelButton = cancelButton, let postButton = postButton, let reloadButton = reloadButton else {
        return false
    }
    let cancelButtonPoint = convert(point, to: cancelButton)
    if cancelButton.point(inside: cancelButtonPoint, with: event) {
        return true
    }
    let postButtonPoint = convert(point, to: postButton)
    if postButton.point(inside: postButtonPoint, with: event) {
        return true
    }
    let reloadButtonPoint = convert(point, to: reloadButton)
    return reloadButton.point(inside: reloadButtonPoint, with: event)
}
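The chained checks above reduce to "return true for the first button whose frame contains the point". A UIKit-free sketch of that logic (names are mine):

```swift
import Foundation

// True if any of the given frames (e.g. the buttons' frames, already
// converted to a common coordinate space) contains the point.
func anyFrameContains(_ point: CGPoint, frames: [CGRect]) -> Bool {
    return frames.contains { $0.contains(point) }
}
```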

Changing UIImageView transform scale broke movement system

I am trying to create an image view that I can move and scale on screen. The problem is that when I change the scale of the image, the movement system seems to be broken.
I wrote some code to drag the object from an anchor point that may differ from the center of the UIImageView, but the scaling broke the process.
/*
See LICENSE folder for this sample’s licensing information.
Abstract:
Main view controller for the AR experience.
*/
import ARKit
import SceneKit
import UIKit
import ModelIO
class ViewController: UIViewController, ARSessionDelegate , UIGestureRecognizerDelegate{
// MARK: Outlets
@IBOutlet var sceneView: ARSCNView!
@IBOutlet weak var blurView: UIVisualEffectView!
@IBOutlet weak var dropdown: UIPickerView!
@IBOutlet weak var AddStickerButton: UIButton!
@IBOutlet weak var deleteStickerButton: UIImageView!
var offset : CGPoint = CGPoint.zero
var isDeleteVisible : Bool = false
let array:[String] = ["HappyHeart_Lisa", "Logo_bucato", "Sweety_2_Lisa", "Sweety_Lisa", "Tonglue_Lisa"]
lazy var statusViewController: StatusViewController = {
return childViewControllers.lazy.flatMap({ $0 as? StatusViewController }).first!
}()
var stickers = [Sticker]()
// MARK: Properties
var myScene : SCNScene!
/// Convenience accessor for the session owned by ARSCNView.
var session: ARSession {
return sceneView.session
}
var nodeForContentType = [VirtualContentType: VirtualFaceNode]() //Tiene sotto controllo la selezione(Tipo maschera)
let contentUpdater = VirtualContentUpdater() //Chiama la VirtualContentUpdater.swift
var selectedVirtualContent: VirtualContentType = .faceGeometry {
didSet {
// Set the selected content based on the content type.
contentUpdater.virtualFaceNode = nodeForContentType[selectedVirtualContent]
}
}
// MARK: - View Controller Life Cycle
override func viewDidLoad() {
super.viewDidLoad()
sceneView.delegate = contentUpdater
sceneView.session.delegate = self
sceneView.automaticallyUpdatesLighting = true
createFaceGeometry()
// Set the initial face content, if any.
contentUpdater.virtualFaceNode = nodeForContentType[selectedVirtualContent]
// Hook up status view controller callback(s).
statusViewController.restartExperienceHandler = { [unowned self] in
self.restartExperience()
}
let pinchGesture = UIPinchGestureRecognizer(target: self, action: #selector(scale))
let rotationGesture = UIRotationGestureRecognizer(target: self, action: #selector(rotate))
pinchGesture.delegate = self
rotationGesture.delegate = self
view.addGestureRecognizer(pinchGesture)
view.addGestureRecognizer(rotationGesture)
}
override func viewDidAppear(_ animated: Bool) {
super.viewDidAppear(animated)
/*
AR experiences typically involve moving the device without
touch input for some time, so prevent auto screen dimming.
*/
UIApplication.shared.isIdleTimerDisabled = true
resetTracking()
}
override func viewWillDisappear(_ animated: Bool) {
super.viewWillDisappear(animated)
session.pause()
}
// MARK: - Setup
/// - Tag: CreateARSCNFaceGeometry
func createFaceGeometry() {
// This relies on the earlier check of `ARFaceTrackingConfiguration.isSupported`.
let device = sceneView.device!
let maskGeometry = ARSCNFaceGeometry(device: device)!
let glassesGeometry = ARSCNFaceGeometry(device: device)!
nodeForContentType = [
.faceGeometry: Mask(geometry: maskGeometry),
.overlayModel: GlassesOverlay(geometry: glassesGeometry),
.blendShapeModel: RobotHead(),
.sfere: RobotHead()
]
}
// MARK: - ARSessionDelegate
func session(_ session: ARSession, didFailWithError error: Error) {
guard error is ARError else { return }
let errorWithInfo = error as NSError
let messages = [
errorWithInfo.localizedDescription,
errorWithInfo.localizedFailureReason,
errorWithInfo.localizedRecoverySuggestion
]
let errorMessage = messages.flatMap({ $0 }).joined(separator: "\n")
DispatchQueue.main.async {
self.displayErrorMessage(title: "The AR session failed.", message: errorMessage)
}
}
func sessionWasInterrupted(_ session: ARSession) {
blurView.isHidden = false
statusViewController.showMessage("""
SESSION INTERRUPTED
The session will be reset after the interruption has ended.
""", autoHide: false)
}
func sessionInterruptionEnded(_ session: ARSession) {
blurView.isHidden = true
DispatchQueue.main.async {
self.resetTracking()
}
}
/// - Tag: ARFaceTrackingSetup
func resetTracking() {
statusViewController.showMessage("STARTING A NEW SESSION")
guard ARFaceTrackingConfiguration.isSupported else { return }
let configuration = ARFaceTrackingConfiguration()
configuration.isLightEstimationEnabled = true
session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
// MARK: - Interface Actions
/// - Tag: restartExperience
func restartExperience() {
// Disable Restart button for a while in order to give the session enough time to restart.
statusViewController.isRestartExperienceButtonEnabled = false
DispatchQueue.main.asyncAfter(deadline: .now() + 5.0) {
self.statusViewController.isRestartExperienceButtonEnabled = true
}
resetTracking()
}
// MARK: - Error handling
func displayErrorMessage(title: String, message: String) {
// Blur the background.
blurView.isHidden = false
// Present an alert informing about the error that has occurred.
let alertController = UIAlertController(title: title, message: message, preferredStyle: .alert)
let restartAction = UIAlertAction(title: "Restart Session", style: .default) { _ in
alertController.dismiss(animated: true, completion: nil)
self.blurView.isHidden = true
self.resetTracking()
}
alertController.addAction(restartAction)
present(alertController, animated: true, completion: nil)
}
//Create a new Sticker
func createNewSticker(){
stickers.append(Sticker(view : self.view, viewCtrl : self))
}
@IBAction func addNewSticker(_ sender: Any) {
createNewSticker()
}
//Function To Move the Stickers, all the Touch Events Listener
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
for touch in touches {
var location = touch.location(in: self.view)
for sticker in stickers {
if(sticker.imageView.frame.contains(location) && !isSomeOneMoving()){
//sticker.imageView.center = location
offset = touch.location(in: sticker.imageView)
let offsetPercentage = CGPoint(x: offset.x / sticker.imageView.bounds.width, y: offset.y / sticker.imageView.bounds.height)
let offsetScaled = CGPoint(x: sticker.imageView.frame.width * offsetPercentage.x, y: sticker.imageView.frame.height * offsetPercentage.y)
offset.x = (sticker.imageView.frame.width / 2) - offsetScaled.x
offset.y = (sticker.imageView.frame.height / 2) - offsetScaled.y
location = touch.location(in: self.view)
location.x = (location.x + offset.x)
location.y = (location.y + offset.y)
sticker.imageView.center = location
disableAllStickersMovements()
isDeleteVisible = true
sticker.isStickerMoving = true;
deleteStickerButton.isHidden = false
}
}
}
}
func disableAllStickersMovements(){
for sticker in stickers {
sticker.isStickerMoving = false;
}
}
func isSomeOneMoving() -> Bool{
for sticker in stickers {
if(sticker.isStickerMoving){
return true
}
}
return false
}
var lastLocationTouched : CGPoint = CGPoint.zero
var lastStickerTouched : Sticker = Sticker()
override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
for touch in touches {
var location = touch.location(in: self.view)
for sticker in stickers {
if(sticker.imageView.frame.contains(location) && sticker.isStickerMoving){
lastLocationTouched = location
location = touch.location(in: self.view)
location.x = (location.x + offset.x)
location.y = (location.y + offset.y)
sticker.imageView.center = location
//sticker.imageView.center = location
}
if(deleteStickerButton.frame.contains(lastLocationTouched) && isDeleteVisible && sticker.isStickerMoving){
sticker.imageView.alpha = CGFloat(0.5)
}else{
sticker.imageView.alpha = CGFloat(1)
}
}
}
}
override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
for sticker in stickers {
if(deleteStickerButton.frame.contains(lastLocationTouched) && isDeleteVisible && sticker.isStickerMoving){
removeASticker(sticker : sticker)
disableAllStickersMovements()
}
}
disableAllStickersMovements()
isDeleteVisible = false
deleteStickerButton.isHidden = true
}
func removeASticker(sticker : Sticker ){
sticker.imageView.removeFromSuperview()
let stickerPosition = stickers.index(of: sticker)!
stickers.remove(at: stickerPosition)
for sticker in stickers {
sticker.isStickerMoving = false;
}
}
var identity = CGAffineTransform.identity
@objc func scale(_ gesture: UIPinchGestureRecognizer) {
for sticker in stickers {
if(sticker.isStickerMoving){
switch gesture.state {
case .began:
identity = sticker.imageView.transform
case .changed,.ended:
sticker.imageView.transform = identity.scaledBy(x: gesture.scale, y: gesture.scale)
case .cancelled:
break
default:
break
}
}
}
}
func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer, shouldRecognizeSimultaneouslyWith otherGestureRecognizer: UIGestureRecognizer) -> Bool {
return true
}
@objc func rotate(_ gesture: UIRotationGestureRecognizer) {
for sticker in stickers {
if(sticker.isStickerMoving){
sticker.imageView.transform = sticker.imageView.transform.rotated(by: gesture.rotation)
}
}
}
}
And then the Sticker class:
import UIKit
import Foundation
class Sticker : NSObject, UIGestureRecognizerDelegate{
var location = CGPoint(x: 0 , y: 0);
var sticker_isMoving = false;
let imageView = UIImageView()
var isStickerMoving : Bool = false;
init(view : UIView, viewCtrl : ViewController ) {
super.init()
imageView.image = UIImage(named: "BroccolFace_Lisa.png")
imageView.isUserInteractionEnabled = true
imageView.contentMode = UIViewContentMode.scaleAspectFit
imageView.frame = CGRect(x: view.center.x, y: view.center.y, width: 200, height: 200)
view.addSubview(imageView)
}
override init(){
}
}
This is because the imageView.bounds and the touch.location(in: imageView) are in unscaled values. This will overcome the problem:
offset = touch.location(in: imageView)
let offsetPercentage = CGPoint(x: offset.x / imageView.bounds.width, y: offset.y / imageView.bounds.height)
let offsetScaled = CGPoint(x: imageView.frame.width * offsetPercentage.x, y: imageView.frame.height * offsetPercentage.y)
offset.x = (imageView.frame.width / 2) - offsetScaled.x
offset.y = (imageView.frame.height / 2) - offsetScaled.y
Basically it converts the offset into a percentage based on the unscaled values and then converts that into scaled values based on the imageView frame (which is modified by the scale). It then uses that to calculate the offset.
EDIT (NUMBER TWO)
This is a more complete way to do it, and it should solve any issues that arise from scaling or rotation.
Add this structure to hold the details of the dragging for images:
struct DragInfo {
    let imageView: UIImageView
    let startPoint: CGPoint
}
Add these instance variables (you can also remove offset if you want):
var dragStartPoint: CGPoint = CGPoint.zero
var currentDragItems: [DragInfo] = []
var dragTouch: UITouch?
Change touchesBegan to this:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard self.dragTouch == nil, let touch = touches.first else { return }
    self.dragTouch = touch
    let location = touch.location(in: self.view)
    self.dragStartPoint = location
    for imageView in self.imageList {
        if imageView.frame.contains(location) {
            self.currentDragItems.append(DragInfo(imageView: imageView, startPoint: imageView.center))
        }
    }
}
Change touchesMoved to this:
override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let dragTouch = self.dragTouch else { return }
    for touch in touches {
        if touch == dragTouch {
            let location = touch.location(in: self.view)
            let offset = CGPoint(x: location.x - self.dragStartPoint.x, y: location.y - self.dragStartPoint.y)
            for dragInfo in self.currentDragItems {
                let imageOffSet = CGPoint(x: dragInfo.startPoint.x + offset.x, y: dragInfo.startPoint.y + offset.y)
                dragInfo.imageView.center = imageOffSet
            }
        }
    }
}
Change touchesEnded to this:
override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let dragTouch = self.dragTouch, touches.contains(dragTouch) else { return }
    self.currentDragItems.removeAll()
    self.dragTouch = nil
}
Set the following properties on the gesture recognisers used:
scaleGesture.delaysTouchesEnded = false
scaleGesture.cancelsTouchesInView = false
rotationGesture.delaysTouchesEnded = false
rotationGesture.cancelsTouchesInView = false
Some explanation of how it works:
With all the touch events, it only considers the first touch, because dragging from multiple touches doesn't make much sense (what if two touches were over the same image view and moved differently?). It records this touch and then only considers that touch for dragging things around.
When touchesBegan is called, it checks that no drag touch already exists (which would indicate a drag in progress), finds all image views under the touch, and for each one records the view and its starting centre position in a DragInfo structure, stored in the currentDragItems array. It also records the position in the main view where the touch started, and the touch that initiated the drag.
When touchesMoved is called, it only considers the touch that started the drag. It calculates the offset from the original touch position in the main view, then walks the list of image views involved in the drag, calculates each view's new centre from its original starting position plus that offset, and sets it as the new centre.
When touchesEnded is called, assuming it is the dragging touch that ended, it clears the array of DragInfo objects, ready for the next drag.
You need to set the delaysTouchesEnded and cancelsTouchesInView properties on all gesture recognisers so that all touches are passed through to the view; otherwise the touchesEnded method in particular is not called.
Doing the calculations this way removes the problems of scale and rotation, because you are only concerned with offsets from initial positions. It also works if multiple image views are dragged at the same time, since their details are kept separately.
Now there are some things to be aware of:
You will need to put in all the other code your app requires; this is just a basic example to show the idea.
This assumes that you only want to drag image views that you pick up at the start. If you want to collect image views as you drag around, you would need a much more complicated system.
As stated, only one drag operation can be in progress at a time, and the first touch registered is taken as the source touch, which is then used to filter out any other touches. This keeps things simple; otherwise you would have to account for all kinds of strange situations, such as multiple touches on the same image view.
I hope this all makes sense and you can adapt it to solve your problem.
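The centre arithmetic in touchesMoved boils down to "new centre = starting centre + (current touch − drag start)", which is why scale and rotation never enter into it. A UIKit-free sketch (the function name is mine):

```swift
import Foundation

// startCenter: the view's centre when the drag began (DragInfo.startPoint)
// dragStart:   where the touch began, in the superview's coordinates
// current:     the touch's current location in the same coordinates
func newCenter(startCenter: CGPoint, dragStart: CGPoint, current: CGPoint) -> CGPoint {
    return CGPoint(x: startCenter.x + (current.x - dragStart.x),
                   y: startCenter.y + (current.y - dragStart.y))
}
```

For a view centred at (100, 100), a drag that started at (10, 10) and is now at (25, 30) moves the centre to (115, 120).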
Here is an extension that I use to pan, pinch, and rotate an image with UIPanGestureRecognizer, UIPinchGestureRecognizer, and UIRotationGestureRecognizer:
extension ViewController: UIGestureRecognizerDelegate {
    func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer, shouldRecognizeSimultaneouslyWith otherGestureRecognizer: UIGestureRecognizer) -> Bool {
        return true
    }

    @objc func panGesture(gesture: UIPanGestureRecognizer) {
        switch gesture.state {
        case .changed, .ended:
            let translation = gesture.translation(in: gesture.view)
            if let view = gesture.view {
                var finalPoint = CGPoint(x: view.center.x + translation.x, y: view.center.y + translation.y)
                finalPoint.x = min(max(finalPoint.x, 0), self.myImageView.bounds.size.width)
                finalPoint.y = min(max(finalPoint.y, 0), self.myImageView.bounds.size.height)
                view.center = finalPoint
                gesture.setTranslation(CGPoint.zero, in: gesture.view)
            }
        default: break
        }
    }

    @objc func pinchGesture(gesture: UIPinchGestureRecognizer) {
        switch gesture.state {
        case .changed:
            let scale = gesture.scale
            gesture.view?.transform = gesture.view!.transform.scaledBy(x: scale, y: scale)
            gesture.scale = 1
        default: break
        }
    }

    @objc func rotateGesture(gesture: UIRotationGestureRecognizer) {
        switch gesture.state {
        case .changed:
            let rotation = gesture.rotation
            gesture.view?.transform = gesture.view!.transform.rotated(by: rotation)
            gesture.rotation = 0
        default: break
        }
    }
}
Setting the UIGestureRecognizerDelegate (and returning true from shouldRecognizeSimultaneouslyWith) lets all three gestures work at the same time.
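Note that the pinch handler resets gesture.scale to 1 after every .changed event, so the per-event factors multiply together into the same total scale you would get by scaling once from the transform captured at .began (the approach used earlier on this page). A sketch of that equivalence with plain numbers:

```swift
// Each .changed event applies its incremental factor and then resets
// gesture.scale to 1, so the running scale is the product of all factors.
func applyIncrementalScales(_ factors: [Double], to initial: Double) -> Double {
    return factors.reduce(initial) { $0 * $1 }
}
```

Applying factors 1.1 and then 1.2 to an initial scale of 1.0 gives 1.32, exactly as if a single cumulative factor of 1.32 had been applied once.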

cancelTracking called unexpectedly when dragging in customized segmented control

I'm using the customized segmented control from this tutorial. In addition, I would like the selected segment to change on a swipe/drag, so I added these functions:
override func beginTracking(_ touch: UITouch, with event: UIEvent?) -> Bool {
    super.beginTracking(touch, with: event)
    let location = touch.location(in: self)
    lastTouchLocation = location
    return true
}

override func continueTracking(_ touch: UITouch, with event: UIEvent?) -> Bool {
    super.continueTracking(touch, with: event)
    let location = touch.location(in: self)
    print(location.x - lastTouchLocation!.x)
    let newX = thumbView.frame.origin.x + (location.x - lastTouchLocation!.x)
    if frame.minX <= newX && newX + thumbView.frame.width <= frame.maxX {
        thumbView.frame.origin.x = newX
    }
    lastTouchLocation = location
    return true
}

override func endTracking(_ touch: UITouch?, with event: UIEvent?) {
    super.endTracking(touch, with: event)
    let location = touch != nil ? touch!.location(in: self) : lastTouchLocation!
    var calculatedIndex: Int?
    for (index, item) in labels.enumerated() {
        if item.frame.contains(location) {
            calculatedIndex = index
        }
    }
    if calculatedIndex != nil && calculatedIndex != selectedIndex {
        selectedIndex = calculatedIndex!
        sendActions(for: .valueChanged)
    } else {
        displayNewSelectedIndex()
    }
}
I've embedded the control in a UIView container; somehow the touch gets canceled when I drag the thumb view a short distance.
Could this be a problem with the view container, and how can I fix it?
Thank you if you've read the whole thing.
I ran into this recently when my custom UIControl was on a form sheet. It worked fine in a popover, but on a form sheet it would abruptly abort sideways dragging for seemingly no reason; cancelTracking was called, but the event didn't tell me why. It turned out to be related to iOS 13's new way of dismissing form sheets by dragging down. To fix it, I added this code to my UIControl subclass:
override func gestureRecognizerShouldBegin(_ gestureRecognizer: UIGestureRecognizer) -> Bool {
    if gestureRecognizer is UIPanGestureRecognizer {
        return false
    }
    return true
}

How to drag images

I can move a single UIView using the code below but how do I move multiple UIView's individually using IBOutletCollection and tag values?
class TeamSelection: UIViewController {
    var location = CGPoint(x: 0, y: 0)

    @IBOutlet weak var ViewTest: UIView! // move a single image
    @IBOutlet var Player: [UIView]! // collection to enable different images with only one outlet

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        let touch: UITouch = touches.first! as UITouch
        location = touch.location(in: self.view)
        ViewTest.center = location
    }
}
There are two basic approaches:
You could iterate through your subviews, figure out which one the touch intersects, and move it. But neither this approach nor the use of cryptic numeric tag values to identify views is generally preferred.
Personally, I'd put the drag logic in the subview itself:
class CustomView: UIView { // or subclass `UIImageView`, as needed
    private var originalCenter: CGPoint?
    private var dragStart: CGPoint?

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        originalCenter = center
        dragStart = touches.first!.location(in: superview)
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }
        var location = touch.location(in: superview)
        if let predicted = event?.predictedTouches(for: touch)?.last {
            location = predicted.location(in: superview)
        }
        center = originalCenter! + location - dragStart!
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }
        let location = touch.location(in: superview)
        center = originalCenter! + location - dragStart!
    }
}

extension CGPoint {
    static func + (lhs: CGPoint, rhs: CGPoint) -> CGPoint {
        return CGPoint(x: lhs.x + rhs.x, y: lhs.y + rhs.y)
    }
    static func - (lhs: CGPoint, rhs: CGPoint) -> CGPoint {
        return CGPoint(x: lhs.x - rhs.x, y: lhs.y - rhs.y)
    }
}
Remember to set "user interaction enabled" on the subviews if you use this approach.
By the way, if you're dragging views like this, make sure you don't have constraints on those views, or else when the auto-layout engine next applies itself everything will move back to its original location. If using auto layout, you'd generally modify the constant of the constraint instead.
A couple of observations on that dragging logic:
You might want to use predictive touches, as above, to reduce the lagginess of the drag.
Rather than moving the center to the location(in:) of the touch, I'd keep track of how much the view was dragged and either move the center accordingly or apply a corresponding translation. It's a nicer UX, IMHO: if you grab the corner, it lets you drag by the corner, rather than jumping the center of the view to where the touch is on screen.
I'd create a subclass of UIView which supports dragging. Something like:
class DraggableView: UIView {
    func setDragGesture() {
        let panRecognizer = UIPanGestureRecognizer(target: self, action: #selector(DraggableView.handlePanGesture(_:)))
        addGestureRecognizer(panRecognizer)
    }

    @objc func handlePanGesture(_ recognizer: UIPanGestureRecognizer) {
        guard let parentView = self.superview else { return }
        let translation = recognizer.translation(in: parentView)
        recognizer.view?.center = CGPoint(x: recognizer.view!.center.x + translation.x, y: recognizer.view!.center.y + translation.y)
        recognizer.setTranslation(CGPoint.zero, in: self)
    }

    func getLocation() -> CGPoint {
        return UIView().convert(center, to: self.superview)
    }
}
So then you can add an array of draggable views and ask each for its location when you need to finish displaying that view controller.

Instance member cannot be used

I have this splash view and I'm having problems with use3DTouch: I get an error telling me that VPSplashView has no instance member 'use3DTouch'. Here is the code.
import Foundation
import UIKit
class VPSplashView : UIView {
var vp = VPSplashView()
private lazy var __once: () = {
if VPSplashView.traitCollection.forceTouchCapability == UIForceTouchCapability.unavailable
{
let longPressRecognizer = UILongPressGestureRecognizer(target: self, action: #selector(VPSplashView.longPressed(_:)))
self.addGestureRecognizer(longPressRecognizer)
VPSplashView.use3DTouch = false
} else {
VPSplashView.use3DTouch = true
}
}()
static func addSplashTo(_ view : UIView, menuDelegate: MenuDelegate) -> VPSplashView{
let splashView = VPSplashView(view: view)
splashView.backgroundColor = UIColor.clear
splashView.isExclusiveTouch = true
if (view.isKind(of: UIScrollView.classForCoder())){
(view as! UIScrollView).canCancelContentTouches = false
}
splashView.menu?.delegate = menuDelegate
return splashView
}
// MARK: Initialization
var menu : VPSplashMenu?
fileprivate var use3DTouch : Bool = true
var onceToken: Int = 0
required init?(coder aDecoder: NSCoder) {
super.init(coder: aDecoder)
}
init(view: UIView){
super.init(frame:view.bounds)
view.addSubview(self)
self.menu = VPSplashMenu.init(center: self.center)
}
func setDataSource(_ source: MenuDataSource!){
self.menu?.dataSource = source
}
override func layoutSubviews() {
super.layoutSubviews()
self.superview?.bringSubview(toFront: self)
if (self.superview != nil){
self.setup()
}
}
fileprivate func setup(){
_ = self.__once;
}
// MARK: Long Press Handling
func longPressed(_ sender: UILongPressGestureRecognizer)
{
switch sender.state {
case .began:
let centerPoint = sender.location(in: self)
menu?.movedTo(centerPoint)
menu?.showAt(self)
menu?.squash()
case .ended:
menu?.cancelTap()
menu?.removeFromSuperview()
case .changed:
let centerPoint = sender.location(in: self)
menu?.handleTap((menu?.convert(centerPoint, from: self))!)
default:
menu?.removeFromSuperview()
}
}
// MARK: Touch Handling
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
if (use3DTouch == true){
var centerPoint : CGPoint = CGPoint.zero
for touch in touches {
centerPoint = touch.location(in: self)
menu?.movedTo(centerPoint)
menu?.showAt(self)
}
}
}
override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
if (use3DTouch == true){
for touch in touches {
let centerPoint = touch.location(in: self)
if (menu?.shown == false){
menu?.movedTo(centerPoint)
if (touch.force > minimalForceToSquash){
menu?.squash()
}
} else {
menu?.handleTap((menu?.convert(centerPoint, from: self))!)
}
}
}
}
override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
if (use3DTouch == true){
menu?.hide()
}
}
override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent?) {
if (use3DTouch){
menu?.hide()
}
}
}
Your added code has clarified some important things needed to write an answer:
traitCollection is an instance property, declared in UITraitEnvironment, to which UIView conforms.
use3DTouch is an instance property, declared explicitly in VPSplashView.
longPressed(_:) is an instance method, declared explicitly in VPSplashView.
The third is not critical in this case, but the first two are clearly related to the error message you got: Instance member cannot be used.
When you access instance members (in this case, instance properties), you need to prefix .memberName with an instance reference, not a class name.
In your case, self seems to be the appropriate instance:
private lazy var __once: () = {
    //###
    if self.traitCollection.forceTouchCapability == UIForceTouchCapability.unavailable {
        let longPressRecognizer = UILongPressGestureRecognizer(target: self, action: #selector(self.longPressed(_:)))
        self.addGestureRecognizer(longPressRecognizer)
        //###
        self.use3DTouch = false
    } else {
        //###
        self.use3DTouch = true
    }
}()
But you are doing things in a far more complex way than needed, so you may need some more fixes.
One critical thing is this line:
var vp = VPSplashView()
This may cause runtime issues (even once the compile-time issue is solved), and vp is never used. Better to remove it.
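For reference, the lazy-var trick runs its initializer closure exactly once, on first access, with self available; that is what makes it a substitute for the old dispatch_once token. A minimal UIKit-free sketch of the pattern (class and member names are mine):

```swift
final class OnceDemo {
    private(set) var setupCount = 0

    // The closure runs the first time `setupOnce` is read, and never again.
    private lazy var setupOnce: () = {
        self.setupCount += 1
    }()

    func setup() {
        _ = setupOnce // safe to call repeatedly
    }
}
```

Calling setup() any number of times leaves setupCount at 1.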
