How do I focus a UIView? - ios

I want to implement radio-button-like behavior: the ability to select only one UIView from among several UIViews. This is similar to the Focus Engine on tvOS.
While searching for relevant material, I noticed that UIKit supports Focus-based Navigation, but I am not sure whether it supports exactly what I want, and there is a lack of relevant examples.
I would appreciate help and advice on the related features. Is Focus-based Navigation suitable for the purpose I am pursuing? And are there other good ways to implement the functionality I want?

@Paulw Thank you for your kind help.
The following steps solved the problem!
I used a simple approach that applies an effect to the selected UIView among multiple UIViews.
import UIKit

class ViewController: UIViewController {
    var selectView: UIView?

    override func viewDidLoad() {
        super.viewDidLoad()
        selectView = self.view

        let viw = UIView(frame: CGRect(x: 100, y: 100, width: 150, height: 150))
        viw.backgroundColor = UIColor.white
        viw.layer.cornerRadius = 10
        self.view.addSubview(viw)

        let objectView = ObjectView()
        objectView.frame.size = CGSize(width: 150, height: 150)
        objectView.backgroundColor = UIColor.clear
        self.view.addSubview(objectView)

        let tapObject = UITapGestureRecognizer(target: self, action: #selector(handleTap(sender:)))
        objectView.addGestureRecognizer(tapObject)

        let tapObjects = UITapGestureRecognizer(target: self, action: #selector(handleTap(sender:)))
        viw.addGestureRecognizer(tapObjects)

        let tapRootView = UITapGestureRecognizer(target: self, action: #selector(handleTap(sender:)))
        self.view.addGestureRecognizer(tapRootView)
    }

    @objc func handleTap(sender: UITapGestureRecognizer) {
        if sender.state == .ended {
            // Clear the highlight on the previously selected view.
            if selectView != self.view {
                selectView?.layer.shadowColor = UIColor.clear.cgColor
            }
            selectView = sender.view
            // Highlight the newly selected view (tapping the root view deselects everything).
            if selectView != self.view {
                sender.view?.layer.shadowOffset = .zero
                sender.view?.layer.shadowOpacity = 0.5
                sender.view?.layer.shadowColor = UIColor.black.cgColor
            }
        }
    }
}

Related

Press a button and thereafter be able to touch the screen and drag a hollow rectangle from a start point until you let go (end point)

When using a drawing program or Photoshop, there is usually a rectangle tool in the buttons panel. You can press the button and afterwards drag out a rectangle on the screen, defined by your chosen start point and end point.
Class
@IBDesignable class RectView: UIView {
    @IBInspectable var startPoint: CGPoint = CGPoint.zero {
        didSet {
            self.setNeedsDisplay()
        }
    }
    @IBInspectable var endPoint: CGPoint = CGPoint.zero {
        didSet {
            self.setNeedsDisplay()
        }
    }

    override func draw(_ rect: CGRect) {
        if (startPoint != nil && endPoint != nil) {
            let path: UIBezierPath = UIBezierPath(rect: CGRect(x: min(startPoint.x, endPoint.x),
                                                               y: min(startPoint.y, endPoint.y),
                                                               width: abs(startPoint.x - endPoint.x),
                                                               height: abs(startPoint.y - endPoint.y)))
            UIColor.black.setStroke()
            path.lineCapStyle = .round
            path.lineWidth = 10
            path.stroke()
        }
    }
}
Top ViewController
+Added class RectView to View (Custom Class)
class ViewController: UIViewController, UIGestureRecognizerDelegate {
    let rectView = RectView()
    @IBOutlet var myView: UIView!

    override func viewDidLoad() {
        super.viewDidLoad()
        let tap = UITapGestureRecognizer(target: self, action: #selector(panGestureMoveAround(sender:)))
        tap.delegate = self
        myView.addGestureRecognizer(tap)
    }

    @objc func panGestureMoveAround(sender: UITapGestureRecognizer) {
        var locationOfBeganTap: CGPoint
        var locationOfEndTap: CGPoint
        if sender.state == UIGestureRecognizer.State.began {
            locationOfBeganTap = sender.location(in: rectView)
            rectView.startPoint = locationOfBeganTap
            rectView.endPoint = locationOfBeganTap
        } else if sender.state == UIGestureRecognizer.State.ended {
            locationOfEndTap = sender.location(in: rectView)
            rectView.endPoint = locationOfEndTap
        } else {
            rectView.endPoint = sender.location(in: rectView)
        }
    }
}
The code gives no particular errors; however, nothing is happening on screen. Any advice would be helpful.
You should try to focus on one issue at a time, however...
1) @IBDesignable and @IBInspectable are for use at design time, that is, when laying out views in Storyboard / Interface Builder. That's not what you're trying to do here, so remove them.
2) CGRect() needs x, y, width and height:
let path: UIBezierPath = UIBezierPath(rect: CGRect(x: min(startPoint!.x, endPoint!.x),
                                                   y: min(startPoint!.y, endPoint!.y),
                                                   width: fabs(startPoint!.x - endPoint!.x),
                                                   height: fabs(startPoint!.y - endPoint!.y)))
3) Wrong way to instantiate your view:
// wrong
let rectView = RectView.self
// right
let rectView = RectView()
Correct those issues and see where you get. If you're still running into problems, first search for the specific issue. If you can't find the answer by searching, then come back and post a specific question.
Probably worth reviewing How to Ask
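One more pitfall worth flagging: the gesture handler above checks for .began / .changed / .ended, but a UITapGestureRecognizer is a discrete recognizer and fires its action only once, on recognition (.ended); it never delivers .began or .changed while the finger moves, so a UIPanGestureRecognizer is needed to track the drag. The rectangle math itself can be factored into a small pure helper, sketched here (the helper name is illustrative, not from the original code):

```swift
import Foundation

// Build a normalized CGRect from any two corner points, regardless of
// drag direction (the user may drag up-left as well as down-right).
func rectBetween(_ a: CGPoint, _ b: CGPoint) -> CGRect {
    return CGRect(x: min(a.x, b.x),
                  y: min(a.y, b.y),
                  width: abs(a.x - b.x),
                  height: abs(a.y - b.y))
}

// Dragging from (120, 40) up-left to (20, 10) still yields a valid rect.
let r = rectBetween(CGPoint(x: 120, y: 40), CGPoint(x: 20, y: 10))
// r == CGRect(x: 20, y: 10, width: 100, height: 30)
```

In RectView's draw(_:) you would then stroke rectBetween(startPoint, endPoint), and in the view controller you would replace the UITapGestureRecognizer with a UIPanGestureRecognizer wired to the same action.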

Efficiently load many views in a UIScrollView

I have a UIScrollView that I am trying to use as an Image Viewer. For this I have paging enabled and I add "Slides" to the view for each Image, including a UIImageView and multiple labels and buttons. This works perfectly while I only have a few Slides to show, but I will need to have more than 100 of them, and I am running into really bad performance issues.
When I present the ViewController, and therefore set up the ScrollView, I get a good 10-15s of delay. Apparently loading this many views is a little much.
So I was wondering if any of you had an idea how I could make this more efficient.
I have tried making the array of Slides in the previous VC and passing it in, instead of creating it on the spot. That helped a bit, but not enough to feel acceptable, especially since changing device orientation requires me to set the ScrollView up again (because the Slides' height/width will be off).
Here are the functions to set up the Slides, and to present them on the ScrollView:
func createSlides() -> [Slide] {
    print("creating Slides")
    let Essence = EssenceModel.Essence
    var ImageArray = [Slide]()
    var slide: Slide
    var count = 0
    for img in Essence {
        count += 1
        slide = Bundle.main.loadNibNamed("Slide", owner: self, options: nil)?.first as! Slide
        slide.imageView.image = UIImage(named: img.imageUrl)
        slide.isUserInteractionEnabled = true
        slide.textLabel.text = img.description
        slide.likeButton.imageView?.contentMode = .scaleAspectFit
        slide.hero.id = img.heroID
        slide.tag = count
        let tapGesture = UITapGestureRecognizer(target: self, action: #selector(showOrHide))
        slide.imageView.addGestureRecognizer(tapGesture)
        let dismissGesture = UITapGestureRecognizer(target: self, action: #selector(dismissVC))
        slide.backButton.addGestureRecognizer(dismissGesture)
        slide.backButton.isUserInteractionEnabled = true
        let swipeUp = UISwipeGestureRecognizer(target: self, action: #selector(swipedUp))
        swipeUp.direction = .up
        slide.addGestureRecognizer(swipeUp)
        let swipeDown = UISwipeGestureRecognizer(target: self, action: #selector(swipedDown))
        swipeDown.direction = .down
        slide.addGestureRecognizer(swipeDown)
        let slideRecognizer = UITapGestureRecognizer(target: self, action: #selector(startSlideshow))
        slide.slideButton.addGestureRecognizer(slideRecognizer)
        slide.likeButton.imageView?.contentMode = .scaleAspectFit
        slide.setupZoom()
        ImageArray.append(slide)
    }
    count = 0
    print(ImageArray.count)
    return ImageArray
}

func setupSlideScrollView(slides: [Slide]) {
    scrollView.subviews.forEach { $0.removeFromSuperview() }
    scrollView.frame = CGRect(x: 0, y: 0, width: view.frame.width, height: view.frame.height)
    scrollView.contentSize = CGSize(width: view.frame.width * CGFloat(slides.count), height: view.frame.height)
    scrollView.isPagingEnabled = true
    for i in 0 ..< slides.count {
        slides[i].frame = CGRect(x: view.frame.width * CGFloat(i), y: 0, width: view.frame.width, height: view.frame.height)
        scrollView.addSubview(slides[i])
    }
}
As I said, I am looking for ways to make this more efficient so I can actually use it. Preferably I would load only the Slide I am on, plus the next and previous one, but I have no clue how to go about doing that.
Here is also a Screenshot, so you can see what it looks like.
It would be better to do something like this:
1) Add a UICollectionView
2) Add a UICollectionViewCell with a reuse identifier
3) Add the relations to your controller/view
4) Set the horizontal scroll direction: https://www.screencast.com/t/wmPiwVdY
5) After that, create logic something like this:
class ImageCollectionViewVC: UIViewController, UICollectionViewDelegate, UICollectionViewDataSource {
    @IBOutlet weak var collectionView: UICollectionView!
    private var images = [Slide]()

    override func viewDidLoad() {
        super.viewDidLoad()
        collectionView.delegate = self
        collectionView.dataSource = self
    }

    func collectionView(_ collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int {
        return images.count
    }

    func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
        guard let imageCell = collectionView.dequeueReusableCell(withReuseIdentifier: "ImageSlideCellRestorationID", for: indexPath) as? ImageSlideCell else {
            return ImageSlideCell()
        }
        imageCell.image.image = UIImage(named: images[indexPath.row].imageUrl)
        return imageCell
    }
}

class ImageSlideCell: UICollectionViewCell {
    @IBOutlet weak var image: UIImageView!
}
Instead of using a UIScrollView, which is causing these performance problems, use a UICollectionView with horizontal scrolling and a custom cell containing one image plus the buttons.
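If you do stay with a plain UIScrollView, the idea from the question of keeping only the current, previous, and next slide loaded comes down to simple index arithmetic, sketched here independently of UIKit (the function name is illustrative, not part of any API):

```swift
// Given the current page and the total page count, return the indices of
// the pages that should stay loaded: the current page plus its neighbors.
// A UICollectionView does the equivalent automatically via cell reuse.
func pagesToKeepLoaded(current: Int, count: Int) -> ClosedRange<Int> {
    let lower = max(0, current - 1)
    let upper = min(count - 1, current + 1)
    return lower...upper
}
```

On each page change (e.g. in scrollViewDidEndDecelerating) you would remove slides whose index falls outside this range and lazily create the ones inside it.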

How to add a UIPanGestureRecognizer to a shape drawn on a CAShapeLayer - swift?

I have an imageView in whose layer I have drawn a blue circle. I would like the user to be able to tap, hold, and drag this blue circle anywhere within the UIImageView. I am unsure how to attach this shape to a UIPanGestureRecognizer. My effort so far is below:
class DrawCircleViewController: UIViewController {
    @IBOutlet weak var imgView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // DRAW A FILLED IN BLUE CIRCLE
        drawBlueCircle()
        // ADD GESTURE RECOGNIZER
        let panRecgonizer = UIPanGestureRecognizer.init(target: ???, action: <#T##Selector?#>)
    }

    func drawBlueCircle() {
        let fourDotLayer = CAShapeLayer()
        fourDotLayer.path = UIBezierPath.init(roundedRect: CGRect.init(x: 60, y: 60, width: 30, height: 30), cornerRadius: 50).cgPath
        fourDotLayer.fillColor = UIColor.blue.cgColor
        self.imgView.layer.addSublayer(fourDotLayer)
    }
}
Use this code to move the view:
@objc func handlePanRecgonizer(_ gestureRecognizer: UIPanGestureRecognizer) {
    if gestureRecognizer.state == .began || gestureRecognizer.state == .changed {
        let translation = gestureRecognizer.translation(in: self.view)
        gestureRecognizer.view!.center = CGPoint(x: gestureRecognizer.view!.center.x + translation.x,
                                                 y: gestureRecognizer.view!.center.y + translation.y)
        gestureRecognizer.setTranslation(CGPoint.zero, in: self.view)
    }
}
If you want to add the UIPanGestureRecognizer programmatically:
let gestureRecognizer = UIPanGestureRecognizer(target: self, action: #selector(handlePanRecgonizer))
self.someDraggableView.addGestureRecognizer(gestureRecognizer)
As you might have noticed, you can't add gesture recognizers to CALayers; they can only be added to UIView types.
The solution is to add a subview to your image view and draw the blue circle in it.
var drawingView: UIView!

override func viewDidLoad() {
    super.viewDidLoad()
    drawingView = UIView(frame: imgView.bounds)
    imgView.isUserInteractionEnabled = true  // UIImageView disables user interaction by default
    imgView.addSubview(drawingView)
    // DRAW A FILLED IN BLUE CIRCLE
    drawBlueCircle()
    // ADD GESTURE RECOGNIZER
    let panRecognizer = UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:)))
    drawingView.addGestureRecognizer(panRecognizer)
}

func drawBlueCircle() {
    let fourDotLayer = CAShapeLayer()
    fourDotLayer.path = UIBezierPath(roundedRect: CGRect(x: 60, y: 60, width: 30, height: 30), cornerRadius: 50).cgPath
    fourDotLayer.fillColor = UIColor.blue.cgColor
    drawingView.layer.addSublayer(fourDotLayer)
}

@objc func handlePan(_ recognizer: UIPanGestureRecognizer) {
    // Drag the drawing view (and with it the circle) around the image view.
    let translation = recognizer.translation(in: imgView)
    drawingView.center = CGPoint(x: drawingView.center.x + translation.x,
                                 y: drawingView.center.y + translation.y)
    recognizer.setTranslation(.zero, in: imgView)
}

How to handle selection when using GMSAutocompleteFetcher in iOS?

I have Google Places autocomplete using a GMSAutocompleteFetcher. The code comes from the Google developer website, but the problem is I can't find a way to handle selection once the user sees the predictions appear in the textView, so that I can display the selected place ID. So far this is my code:
import UIKit
import GoogleMaps

class FetcherSampleViewController: UIViewController {
    var textField: UITextField?
    var resultText: UITextView?
    var fetcher: GMSAutocompleteFetcher?

    override func viewDidLoad() {
        super.viewDidLoad()
        let tap = UITapGestureRecognizer(target: self, action: "handletap:")
        self.view.backgroundColor = UIColor.whiteColor()
        self.edgesForExtendedLayout = .None
        // Set bounds to inner-west Sydney Australia.
        let neBoundsCorner = CLLocationCoordinate2D(latitude: -33.843366,
                                                    longitude: 151.134002)
        let swBoundsCorner = CLLocationCoordinate2D(latitude: -33.875725,
                                                    longitude: 151.200349)
        let bounds = GMSCoordinateBounds(coordinate: neBoundsCorner,
                                         coordinate: swBoundsCorner)
        // Set up the autocomplete filter.
        let filter = GMSAutocompleteFilter()
        filter.type = .Establishment
        // Create the fetcher.
        fetcher = GMSAutocompleteFetcher(bounds: bounds, filter: filter)
        fetcher?.delegate = self
        textField = UITextField(frame: CGRect(x: 5.0, y: 0,
                                              width: self.view.bounds.size.width - 5.0, height: 44.0))
        textField?.autoresizingMask = .FlexibleWidth
        textField?.addTarget(self, action: "textFieldDidChange:",
                             forControlEvents: .EditingChanged)
        resultText = UITextView(frame: CGRect(x: 0, y: 45.0,
                                              width: self.view.bounds.size.width,
                                              height: self.view.bounds.size.height - 45.0))
        resultText?.backgroundColor = UIColor(white: 0.95, alpha: 1.0)
        resultText?.text = "No Results"
        resultText?.editable = false
        resultText?.addGestureRecognizer(tap)
        self.view.addSubview(textField!)
        self.view.addSubview(resultText!)
    }

    func textFieldDidChange(textField: UITextField) {
        fetcher?.sourceTextHasChanged(textField.text!)
    }

    func handletap(sender: UITapGestureRecognizer) {
        print("I dont know what to do here")
    }
}

extension FetcherSampleViewController: GMSAutocompleteFetcherDelegate {
    func didAutocompleteWithPredictions(predictions: [GMSAutocompletePrediction]) {
        let resultsStr = NSMutableAttributedString()
        for prediction in predictions {
            resultsStr.appendAttributedString(prediction.attributedPrimaryText)
            resultsStr.appendAttributedString(NSAttributedString(string: "\n"))
        }
        resultText?.attributedText = resultsStr
    }

    func didFailAutocompleteWithError(error: NSError) {
        resultText?.text = error.localizedDescription
    }
}
I used a UITapGestureRecognizer, but I don't know what I should do in the handler. If you can help me I would appreciate it :)
Since all the predictions are just newline-delimited rows in a text view, it's going to be difficult to tell which of them the user tapped on.
How about using a UITableView instead of the UITextView, with one row per prediction? That will make it easy to detect which prediction was selected.

How can I get UIGestureRecognizer to work with manually added subviews

I programmatically made a subview and it shows up in my main view. It also has a separate view controller.
Though my gestureRecognizers work on UIImageViews in the main view, they do not work in my sub view.
Here's what I have in the main view controller:
var hVC: HandViewController = HandViewController()

override func viewDidLoad() {
    super.viewDidLoad()
    createHandImageView()
}

func createHandImageView() {
    addChildViewController(hVC)
    let w: CGFloat = cardWidth + ((hVC.maxHandCards - 1) * hVC.handCardSep)
    let h: CGFloat = cardHeight
    let screenWidth = view.frame.size.width
    let screenHeight = view.frame.size.height
    let x: CGFloat = (screenWidth - w) / 2
    let frame = CGRectMake(x, screenHeight - cardHeight - 20, w, h)
    hVC.view = UIImageView(frame: frame)
    hVC.view.backgroundColor = UIColor(white: 0, alpha: 0.3)
    // This is where I add the Hand View that eventually holds the card views
    view.addSubview(hVC.view)
    hVC.didMoveToParentViewController(self)
}
And the sub view controller:
init() {
    super.init(nibName: nil, bundle: nil)
}

required init?(coder aDecoder: NSCoder) {
    super.init(coder: aDecoder)
}

override func viewDidLoad() {
    super.viewDidLoad() // This NEVER fires
    NSLog("did load")
}

func updateHandCardsView(cards: [Int]) {
    handCardViews = []
    for card in cards {
        addNewHandCardImage(card)
    }
}

func addNewHandCardImage(card: Int) {
    let imageView = UIImageView(frame: CGRectMake(0, 0, cardWidth, cardHeight))
    imageView.image = UIImage(named: Deck.getCardName(card))
    // This is where I add each Card View to the Hand View
    self.view.addSubview(imageView)
    handCardViews.append(imageView)
    addEventRecognizers(imageView)
}

func addEventRecognizers(view: UIImageView) {
    let singleTap = UITapGestureRecognizer(target: self, action: "highlightCard:")
    singleTap.numberOfTapsRequired = 1
    singleTap.numberOfTouchesRequired = 1
    view.userInteractionEnabled = true
    view.addGestureRecognizer(singleTap)
    let doubleTap = UITapGestureRecognizer(target: self, action: "playCard:")
    doubleTap.numberOfTapsRequired = 2
    doubleTap.numberOfTouchesRequired = 1
    view.userInteractionEnabled = true
    view.addGestureRecognizer(doubleTap)
}
All the card views show up in the hand view. All programmatically created.
When I copy and paste the gesture code into the main view and use it on the cards on the table, the action gets called, but not in the sub view (HandView).
What am I missing?
Gesture recognizers only work on the views they are attached to; UIView provides a method for adding them. Your addEventRecognizers only adds recognizers to whichever UIImageView is passed in. You could change the function to accept a UIView instead; since UIImageView is a subclass of UIView, it will still work on your images. Then call
addEventRecognizers(HandView) // Pass in the view that should get the gesture recognizers.
Alternatively, if you just want to add one gesture recognizer, call HandView.addGestureRecognizer(gesture).
