How to make multiple UIImages fall with gravity at the same time? - ios

As the user taps on the superview, I detect it via a UITapGestureRecognizer and call the function below to create a UIImageView, add it to the superview, apply gravity to it, and let it fall out of the superview.
The problem is that if the user taps again while the first image is still on screen, the first image halts in place and the second one starts falling. What is the best way to let multiple UIImageViews fall independently on the screen?
Secondly, I feel I should be removing these UIImageViews from the superview to conserve memory (there could be hundreds of taps), but I can't figure out how to automatically remove a UIImageView as soon as it is off the screen.
Help in Swift, please.
func dropBall(tapLocation: CGPoint) {
    let ball = UIImageView(image: UIImage(named: "White50.png"))
    ball.frame = CGRect(origin: CGPoint(x: tapLocation.x - 25, y: tapLocation.y - 25), size: CGSize(width: 50, height: 50))
    view.addSubview(ball)
    animator = UIDynamicAnimator(referenceView: self.view)
    gravity = UIGravityBehavior(items: [ball])
    animator.addBehavior(gravity)
}

The reason the first animation stops is that when you call this again, you instantiate a new animator and release the old one. You should instantiate the animator only once, and likewise instantiate the gravity behavior only once.
In terms of stopping the behavior once a view is off screen, one technique is to add an action block to the behavior that iterates through the behavior's items, checks whether each is still on screen, and removes from the behavior any item that has fallen off screen.
var animator: UIDynamicAnimator!
var gravity: UIGravityBehavior!

override func viewDidLoad() {
    super.viewDidLoad()
    animator = UIDynamicAnimator(referenceView: view)
    gravity = UIGravityBehavior()
    gravity.action = { [unowned self] in
        // Remove any item whose frame no longer intersects the visible bounds
        let itemsToRemove = self.gravity.items
            .compactMap { $0 as? UIView }
            .filter { !self.view.bounds.intersects($0.frame) }
        for item in itemsToRemove {
            self.gravity.removeItem(item)
            item.removeFromSuperview()
        }
    }
    animator.addBehavior(gravity)
}
Now, to give an item gravity, just add the item to the gravity behavior and it will automatically be removed when it falls out of view:
gravity.addItem(ball)
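With that in place, dropBall only needs to create the view and hand it to the existing behavior. A minimal sketch, assuming animator and gravity are the stored properties set up in viewDidLoad above:
func dropBall(tapLocation: CGPoint) {
    let ball = UIImageView(image: UIImage(named: "White50.png"))
    ball.frame = CGRect(origin: CGPoint(x: tapLocation.x - 25, y: tapLocation.y - 25),
                        size: CGSize(width: 50, height: 50))
    view.addSubview(ball)
    // Reuse the shared behavior instead of creating a new animator per tap
    gravity.addItem(ball)
}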

Related

view.setNeedsDisplay() and view.setNeedsDisplay(view.frame) are not equivalent

I was trying to solve this problem (TL;DR: an overlaid SKScene, set via the overlaySKScene property of SCNView, wasn't being redrawn when children were added to or removed from it) using view.setNeedsDisplay() to force a redraw since the SCNView wasn't doing it automatically.
The problem with using view.setNeedsDisplay() was that the CPU usage was spiking to 50% and I assumed it was because the entire SCNView was having to redraw its contents, which included a 3D SCNScene as well. My solution was to use view.setNeedsDisplay(_: CGRect) to minimise the region that needs to be redrawn. However, to my surprise, no matter what I put as the CGRect value the SCNView refused to render the SKScene contents that had been overlaid on it.
Steps to reproduce the issue
Open the SceneKit template
From the Main (Base) storyboard, set the "Scene" attribute on the SCNView to "art.scnassets/ship.scn" or whatever the path is
Delete all the boilerplate code and leave just:
class CustomSKScene: SKScene {
    override func didMove(to view: SKView) {
        let tapGestureRecognizer = UITapGestureRecognizer(target: self, action: #selector(userTapped(_:)))
        view.addGestureRecognizer(tapGestureRecognizer)
    }

    @objc func userTapped(_ sender: UITapGestureRecognizer) {
        let finger = convertPoint(fromView: sender.location(in: view))
        let circle = SKShapeNode(circleOfRadius: 25)
        circle.position = finger
        addChild(circle)
    }
}
class GameViewController: UIViewController {
    private var gameView: SCNView { view as! SCNView }

    override func viewDidLoad() {
        super.viewDidLoad()
        gameView.overlaySKScene = CustomSKScene(size: gameView.bounds.size)
    }
}
(This should still allow the ship scene to render when you run the app.)
When you tap the screen, no circles show up (even though they should). Fix this issue by adding view!.setNeedsDisplay() after the addChild(circle) call (see the snippet after these steps). Notice how CPU usage rises to around 40-50% if you tap repeatedly after adding this fix.
Replace view!.setNeedsDisplay() with view!.setNeedsDisplay(view!.frame) (which should be equivalent).
At this point we are back to square one: the circles no longer show up on screen, and confusion ensues. view.setNeedsDisplay() and view.setNeedsDisplay(view.frame) should be equivalent, yet nothing is redrawn.
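For reference, this is roughly what the tap handler looks like with the forced redraw added (the commented-out line is the CGRect variant that fails to redraw):
@objc func userTapped(_ sender: UITapGestureRecognizer) {
    let finger = convertPoint(fromView: sender.location(in: view))
    let circle = SKShapeNode(circleOfRadius: 25)
    circle.position = finger
    addChild(circle)
    view!.setNeedsDisplay()               // forces the SCNView to redraw the overlay (CPU spikes to ~40-50%)
    // view!.setNeedsDisplay(view!.frame) // supposedly equivalent, but nothing is redrawn
}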
Does anyone know how to fix this problem? I feel it only happens when using the overlaySKScene property so maybe there is some caveat with its implementation that I am unaware of.
Some observations:
When you debug the view hierarchy, the overlaid SKScene doesn't show up anywhere, which is strange
sender.view === view returns true
(sender.view as! SCNView).overlaySKScene === self also returns true

Constraints are reapplied when a timer is running and changes the label text

I'm working on a Swift camera app and trying to solve the following problem.
My camera app, which allows taking a video, can change the camera focus and exposure when the user taps somewhere on the screen. Like the default iPhone camera app, I display a yellow square to show where the user tapped. I can see exactly where the user tapped, unless the timer function fires and updates the total video length on the screen.
Here is an image of the camera app.
As you can see, there is a yellow square at the point where I tapped on the screen.
However, when the timer counts up (in this case, 19 sec to 20 sec) and updates the text of the total video length, the yellow square moves back to the center of the screen. Since I placed the yellow square at the center of the screen in my storyboard, I guess that when the timer counts up and updates the label text, it also refreshes the yellow square UIView, so it is displayed at the center of the screen (probably?).
So, if I'm correct, how can I display the yellow square at the tapped location regardless of the timer function, which updates the UILabel every second?
Here is the code.
final class ViewController: UIViewController {
    private var pointOfInterestHalfCompletedWorkItem: DispatchWorkItem?
    @IBOutlet weak var pointOfInterestView: UIView!
    @IBOutlet weak var time: UILabel!
    var counter = 0
    var timer = Timer()

    override func viewDidLayoutSubviews() {
        pointOfInterestView.layer.borderWidth = 1
        pointOfInterestView.layer.borderColor = UIColor.systemYellow.cgColor
    }

    @objc func startTimer() {
        timer = Timer.scheduledTimer(timeInterval: 1.0, target: self, selector: #selector(timerAction), userInfo: nil, repeats: true)
    }

    @objc func timerAction() {
        counter += 1
        secondsToHoursMinutesSeconds(seconds: counter)
    }

    func secondsToHoursMinutesSeconds(seconds: Int) {
        // format seconds
        time.text = "\(strHour):\(strMin):\(strSec)"
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        let focusPoint = touches.first!.location(in: lfView)
        showPointOfInterestViewAtPoint(point: focusPoint)
    }

    func showPointOfInterestViewAtPoint(point: CGPoint) {
        print("point is here \(point)")
        pointOfInterestHalfCompletedWorkItem = nil
        pointOfInterestComplatedWorkItem = nil
        pointOfInterestView.center = point
        pointOfInterestView.transform = CGAffineTransform(scaleX: 1.5, y: 1.5)
        let animation = UIViewPropertyAnimator(duration: 0.3, curve: .easeInOut) {
            self.pointOfInterestView.transform = .identity
            self.pointOfInterestView.alpha = 1
        }
        animation.startAnimation()
        let pointOfInterestHalfCompletedWorkItem = DispatchWorkItem { [weak self] in
            guard let self = self else { return }
            let animation = UIViewPropertyAnimator(duration: 0.3, curve: .easeInOut) {
                self.pointOfInterestView.alpha = 0.5
            }
            animation.startAnimation()
        }
        DispatchQueue.main.asyncAfter(deadline: .now() + 2, execute: pointOfInterestHalfCompletedWorkItem)
        self.pointOfInterestHalfCompletedWorkItem = pointOfInterestHalfCompletedWorkItem
    }
}
Since I thought it was a threading issue, I tried changing the label text and showing the yellow square on the main thread using DispatchQueue.main.asyncAfter, but it didn't work. Also, I was not sure whether it becomes a serial or a concurrent queue when I perform both of these:
loop the timer function and constantly update the label text
detect the UI touch event and show the yellow square
Since UI updates are performed on the main thread, I guess I need to figure out a way to share the main thread between the timer function and the user touch event...
If someone knows a clue to solving this problem, please let me know.
It isn't a threading issue. It is an auto layout issue.
Presumably you have positioned the yellow square view in your storyboard using constraints.
You are then modifying the yellow square's frame directly by modifying the center property; this has no effect on the constraints that are applied to the view. As soon as the next auto layout pass runs (triggered by the text changing, for example) the constraints are reapplied and the yellow square jumps back to where your constraints say it should be.
You have a couple of options:
Compute the destination point's offset from the center of the view, and apply those offsets to the constant property of your two centering constraints (a sketch follows below).
Add the yellow view programmatically and position it by setting its frame directly. You can then adjust the frame by modifying center as you do now.
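A minimal sketch of the first option, assuming the square is centered in the main view with centerX/centerY constraints and that the tap point is in that same view's coordinate space. The constraint outlet names here are hypothetical; wire them to the two centering constraints in the storyboard:
@IBOutlet weak var pointOfInterestCenterXConstraint: NSLayoutConstraint!
@IBOutlet weak var pointOfInterestCenterYConstraint: NSLayoutConstraint!

func showPointOfInterestViewAtPoint(point: CGPoint) {
    // Express the tapped point as an offset from the view's center and store it
    // in the constraints, so the next layout pass keeps the square where the user tapped
    pointOfInterestCenterXConstraint.constant = point.x - view.bounds.midX
    pointOfInterestCenterYConstraint.constant = point.y - view.bounds.midY
    view.layoutIfNeeded()
    // ... existing scale/alpha animation code ...
}
Because the tapped position now lives in the constraints themselves, updating the label text no longer snaps the square back to its storyboard position.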

How to tap and move scene nodes in ARKit

I'm currently trying to build an AR Chess app and I'm having trouble getting the movement of the pieces working.
I would like to be able to tap on a chess piece, have the legal moves it can make on the chess board highlighted, and have it move to whichever square the user taps on.
Pic of the chess board design and nodes:
https://gyazo.com/2a88f9cda3f127301ed9b4a44f8be047
What I would like to implement:
https://imgur.com/a/IGhUDBW
Would greatly appreciate any suggestions on how to get this working.
Thanks!
ViewController Code:
import UIKit
import SceneKit
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Set the view's delegate
        sceneView.delegate = self
        // Show statistics such as fps and timing information
        sceneView.showsStatistics = true
        // Add lighting to the scene
        sceneView.autoenablesDefaultLighting = true
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // Create a session configuration to track an external image
        let configuration = ARImageTrackingConfiguration()
        // Image detection
        // Reference the asset catalog group that contains the image to detect, e.g. "Detection Card"
        if let imageDetect = ARReferenceImage.referenceImages(inGroupNamed: "Detection Card", bundle: Bundle.main) {
            // Set the image tracking properties to the images in the referenced group
            configuration.trackingImages = imageDetect
            // Number of images to be tracked
            configuration.maximumNumberOfTrackedImages = 1
        }
        // Run the view's session
        sceneView.session.run(configuration)
    }

    // Runs when an anchor is detected and returns the 3D content to attach to it
    // ARAnchor - marks a point in world space that is relevant to your app, making virtual content appear "attached" to a real-world point of interest
    func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode {
        // Creates the container node
        let obj = SCNNode()
        // Check whether the detected anchor is an ARImageAnchor - which contains position and orientation data about the image detected in the session
        if let imageAnchor = anchor as? ARImageAnchor {
            // Set the dimensions of the plane to match the physical size of the detected image
            let plane = SCNPlane(width: imageAnchor.referenceImage.physicalSize.width, height: imageAnchor.referenceImage.physicalSize.height)
            // Display a mildly transparent layer on the detected image
            // This is to confirm image detection works by showing a faint layer over the image
            plane.firstMaterial?.diffuse.contents = UIColor(white: 1.0, alpha: 0.2)
            // Set the geometry of the plane node
            let planeNode = SCNNode(geometry: plane)
            // Rotate the vertical plane to lie flat on the image
            planeNode.eulerAngles.x = -Float.pi / 2
            obj.addChildNode(planeNode)
            // Initialise the chess scene
            if let chessBoardSCN = SCNScene(named: "art.scnassets/chess.scn") {
                // If there is a first node in the scene file
                if let chessNodes = chessBoardSCN.rootNode.childNodes.first {
                    // Display the chessboard upright
                    chessNodes.eulerAngles.x = Float.pi / 2
                    // Add the chessboard to the overall 3D scene
                    obj.addChildNode(chessNodes)
                }
            }
        }
        return obj
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        // Pause the view's session
        sceneView.session.pause()
    }
}
You will need to add gestures to your view and use ARSCNView's hitTest method to detect what the gesture is touching in your scene. You can then update positions based on the movement from the gestures; a rough sketch follows after the link below.
Here is a question that deals with roughly the same requirement of dragging nodes around:
Placing, Dragging and Removing SCNNodes in ARKit
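A minimal sketch of that idea, assuming sceneView is the ARSCNView outlet from the question (the selectedNode property and handlePan method are illustrative names, not from the original post):
var selectedNode: SCNNode?

// Call this once, e.g. from viewDidLoad
func setUpPanGesture() {
    let pan = UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:)))
    sceneView.addGestureRecognizer(pan)
}

@objc func handlePan(_ gesture: UIPanGestureRecognizer) {
    let location = gesture.location(in: sceneView)
    switch gesture.state {
    case .began:
        // Pick up whichever node is under the finger
        selectedNode = sceneView.hitTest(location, options: [:]).first?.node
    case .changed:
        guard let node = selectedNode else { return }
        // Project the node's current position to get its screen-space depth,
        // then unproject the new touch location at that depth to keep the node under the finger
        let projected = sceneView.projectPoint(node.worldPosition)
        let unprojected = sceneView.unprojectPoint(SCNVector3(x: Float(location.x), y: Float(location.y), z: projected.z))
        node.worldPosition = unprojected
    default:
        selectedNode = nil
    }
}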
First, you need to add a tap gesture recognizer in your viewDidLoad, like this:
let tapGesture = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
myScnView.addGestureRecognizer(tapGesture)
Then implement the handler function:
@objc
func handleTap(_ gestureRecognize: UIGestureRecognizer) {
    // HERE YOU NEED TO DETECT THE TAP
    // check what nodes are tapped
    let location = gestureRecognize.location(in: myScnView)
    let hitResults = myScnView.hitTest(location, options: [:])
    // check that we clicked on at least one object
    if hitResults.count > 0 {
        // retrieve the first clicked object
        let tappedPiece = hitResults[0].node
        // HERE YOU CAN SHOW POSSIBLE MOVES
        // e.g. showPossibleMoves(for: tappedPiece)
    }
}
Now, to show the possible moves, you need to identify all the quadrants and your node's position on the chessboard.
To do this, you can assign each quadrant a name, a number, a combination of letter and number, or a combination of numbers (I suggest a combination of numbers, like row 1 column 1, as in a matrix).
Let's take that suggestion, so you name the quadrants 1.1, 1.2, ... 2.1, 2.2 and so on.
Now, to detect which quadrant your piece is on, you can check contacts with the SCNPhysicsContactDelegate.
Now that you have the tappedPiece and the quadrant it sits on, you have to define the movement rules for the pieces, for example:
let rules = ["tower": "cross"] // add the others
N.B. You can choose whatever scheme you want for defining the rules.
Taking that suggestion as given, you should now create the function to highlight a quadrant:
func highlight(quadrant: SCNNode) {
    quadrant.geometry?.firstMaterial?.emission.contents = UIColor.yellow
}
Finally, showPossibleMoves(for: tappedPiece) could be something like this:
func showPossibleMoves(for piece: SCNNode) {
    // You have to give each piece the name you used in your rules dictionary:
    // e.g. if rules is ["tower": "cross"], set every tower node's name to "tower"
    guard let pieceType = piece.name, let rule = rules[pieceType] else { return }
    // startQuadrant is the quadrant node the piece currently occupies,
    // assuming you named your quadrants like 1.1, 1.2, etc.
    guard let quadrantName = startQuadrant.name,
          let startRow = Int(String(quadrantName.first!)),
          let startColumn = Int(String(quadrantName.last!)) else { return }
    switch rule {
    case "cross":
        // You have to highlight all quadrants to the right, left, above and below,
        // which you can achieve by taking the start row/column and increasing them.
        // Loop the highlight to the right
        for column in (startColumn + 1)...(MAX_COLUMN - 1) {
            if let quadrant = myScnView.scene.rootNode.childNode(withName: "\(startRow).\(column)", recursively: true) {
                highlight(quadrant: quadrant)
            }
        }
        // Loop over the quadrants above
        for row in (startRow + 1)...(MAX_ROW - 1) {
            if let quadrant = myScnView.scene.rootNode.childNode(withName: "\(row).\(startColumn)", recursively: true) {
                highlight(quadrant: quadrant)
            }
        }
        // DO THE SAME FOR ALL DIRECTIONS
    default:
        break
    }
    // ADD ALL CASES, like bishop movements ("diagonals") and so on
}
NOTE: In the handleTap function you have to check what you're tapping. For example, to check whether you're tapping on a quadrant after selecting a piece (i.e. you want to move your piece), you can check a boolean value and the name of the selected node:
// assuming you have set the boolean value after selecting a piece
if pieceSelected && node.name != "tower" {
    // HERE YOU CAN MOVE YOUR PIECE
}
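If the tap did land on a quadrant, the actual move can be as simple as the sketch below (assuming the piece and quadrant nodes share the same parent, so their positions are in the same coordinate space; move(_:to:) is an illustrative helper, not from the original answer):
func move(_ piece: SCNNode, to quadrant: SCNNode) {
    // Keep the piece's current height, but slide it over the tapped quadrant
    let target = SCNVector3(x: quadrant.position.x, y: piece.position.y, z: quadrant.position.z)
    piece.runAction(SCNAction.move(to: target, duration: 0.3))
}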

Detecting taps on an animating UIImageView

I am using a custom path animation on UIImageView items for a Swift 3 project. The code outline is as follows:
// parentView and other parameters are configured externally
let imageView = UIImageView(image: image)
imageView.isUserInteractionEnabled = true
let gr = UITapGestureRecognizer(target: self, action: #selector(onTap(gesture:)))
parentView.addGestureRecognizer(gr)
parentView.addSubview(imageView)
// Then I set up animation, including:
let animation = CAKeyframeAnimation(keyPath: "position")
// .... eventually ....
imageView.layer.add(animation, forKey: nil)
The onTap method is declared in a standard way:
func onTap(gesture: UITapGestureRecognizer) {
    print("ImageView frame is \(self.imageView.layer.visibleRect)")
    print("Gesture occurred at \(gesture.location(in: FloatingImageHandler.parentView))")
}
The problem is that each time I call addGestureRecognizer, the previous gesture recognizer gets overwritten, so any detected tap always points to the LAST added image, and the location is not detected accurately (anywhere the user taps on the parentView still triggers the onTap method).
How can I detect a tap accurately on a per-imageView basis? I cannot use UIView.animate or other methods because of the custom path animation requirement, and I also cannot cover the parent view with a transparent overlay UIView, as I need these "floaters" not to swallow the events.
It is not very clear what you are trying to achieve, but I think you should add the gesture recognizer to the imageView and not to the parentView.
So this:
parentView.addGestureRecognizer(gr)
Should be replaced by this:
imageView.addGestureRecognizer(gr)
And in your onTap function you probably should do something like this:
print("ImageView frame is \(gesture.view.layer.visibleRect)")
print("Gesture occurred at \(gesture.location(in: gesture.view))")
I think you can check whether the tap location belongs to the imageView in the onTap function.
Like this:
func onTap(gesture: UITapGestureRecognizer) {
    let point = gesture.location(in: parentView)
    if imageView.layer.frame.contains(point) {
        print("ImageView frame is \(self.imageView.layer.visibleRect)")
        print("Gesture occurred at \(point)")
    }
}
As the model layer's frame/position isn't updated during the animation (only the presentation layer reflects it), I needed to add the following to the image view subclass I wrote (FloatingImageView):
override func hitTest(_ point: CGPoint, with event: UIEvent?) -> UIView? {
    // Hit-test against the presentation layer, which reflects the in-flight animation
    let pres = self.layer.presentation()!
    let suppt = self.convert(point, to: self.superview!)
    let prespt = self.superview!.layer.convert(suppt, to: pres)
    return super.hitTest(prespt, with: event)
}
I also moved the gesture recognizer to the parent view so there was only one GR at any time, and created a unique tag for each of the subviews being added. The handler looks like the following:
func onTap(gesture: UITapGestureRecognizer) {
    let p = gesture.location(in: gesture.view)
    let v = gesture.view?.hitTest(p, with: nil)
    if let v = v as? FloatingImageView {
        print("The tapped view was \(v.tag)")
    }
}
where FloatingImageView is the UIImageView subclass.
This method was described in an iOS 10 book (as well as in WWDC), and works for iOS 9 as well. I am still evaluating UIViewPropertyAnimator based tap detection, so if you can give me an example of how to use UIViewPropertyAnimator to do the above, I will mark your answer as the correct one.

Scene Transition Fading But Not Switching Scenes

I am trying to transition from the default, root scene to a new scene with SpriteKit. However, whenever I press the Start button, it grays out the old scene (although it remains visible) and the Drawing Board label shows up. The scene remains grayed out, and all the buttons from the old scene can still be pressed but no longer perform their associated actions. A UIButton triggers this func:
startButton.addTarget(self, action: "goToDrawingBoard:", forControlEvents: UIControlEvents.TouchUpInside)
The func:
@objc func goToDrawingBoard(sender: UIButton) {
    let drawingBoardScene = DrawingBoardScene(size: self.size)
    self.scene?.view?.presentScene(drawingBoardScene, transition: SKTransition.crossFadeWithDuration(1.0))
}
And the DrawingBoardScene.swift file:
import Foundation
import SpriteKit
import UIKit

class DrawingBoardScene: SKScene {
    let titleLabel = SKLabelNode(text: "DRAWING BOARD")

    override func didMoveToView(view: SKView) {
        /* LABEL: Displays title */
        titleLabel.fontColor = UIColor.blackColor()
        titleLabel.fontSize = 60
        titleLabel.position = CGPoint(x: CGRectGetMidX(self.frame), y: CGRectGetMidY(self.frame))
        self.addChild(titleLabel)
    }
}
Looks like you are presenting the scene incorrectly, try the following:
@objc func goToDrawingBoard(sender: UIButton) {
    let drawingBoardScene = DrawingBoardScene(size: self.size)
    self.view?.presentScene(drawingBoardScene, transition: SKTransition.crossFadeWithDuration(1.0))
}
There is no reason to add the new scene as a child to the old scene, and who knows why your scene has a scene object.
As a personal note, presenting your scene in this manner is not a good way to present scenes. It is the view's job to present scenes, so when it comes time for the scene to be removed, send a notification in some way to the view that the scene is done working and is waiting to be removed, and have the view then present the next scene. This will allow the view to properly remove the old scene without any retainers holding it back. One method to do this is through delegation, as sketched below.
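A minimal sketch of that delegation idea, in the same Swift 2-era style as the question (the protocol, property, and type names here are illustrative, not from the original code):
import SpriteKit
import UIKit

protocol SceneExitDelegate: class {
    func sceneDidFinish(scene: SKScene)
}

class MenuScene: SKScene {
    weak var exitDelegate: SceneExitDelegate?

    // Called by the Start button; instead of presenting the next scene itself,
    // the scene tells its delegate that it is done.
    @objc func goToDrawingBoard(sender: UIButton) {
        exitDelegate?.sceneDidFinish(self)
    }
}

class GameViewController: UIViewController, SceneExitDelegate {
    var skView: SKView { return self.view as! SKView }

    func sceneDidFinish(scene: SKScene) {
        // Remove any UIButtons laid over the SKView, then let the view present the next scene
        for v in skView.subviews where v is UIButton {
            v.removeFromSuperview()
        }
        skView.presentScene(DrawingBoardScene(size: skView.bounds.size),
                            transition: SKTransition.crossFadeWithDuration(1.0))
    }
}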
My friend and I spent a couple of nights working on various solutions, and the one we finally came up with is this:
override func willMoveFromView(view: SKView) {
    self.removeAllChildren()
    delete(startButton)
}

override func delete(sender: AnyObject?) {
    let subviews = (self.view?.subviews)! as [UIView]
    for v in subviews {
        if let button = v as? UIButton {
            button.removeFromSuperview()
        }
    }
}
The only problem with this is that when the scene shifts, the buttons can take a split second longer to disappear, giving it a kind of glitchy feel. It does work, though, so as a short-term solution it is great.
