I've got a UITapGestureRecognizer that calls the function below when the screen is tapped.
The device is in portrait mode and the cameraView takes up a variable height between about 50% and 90% of the screen, but that shouldn't matter because I'm getting the tap location within that view.
The Apple docs state the focal point is a normalized coordinate (x and y are both between 0 and 1) with the top left in landscape mode (with the home button to the right) at (0, 0) and the bottom right at (1, 1).
Setting the focus mode to AutoFocus rather than ContinuousAutoFocus made things worse. What am I missing? Other apps have this working just fine.
func focusPhoto(recognizer: UITapGestureRecognizer) {
    if photoCaptureDevice.lockForConfiguration(nil) && photoCaptureDevice.focusPointOfInterestSupported {
        let tapLocation = recognizer.locationInView(cameraView)
        let focalPoint = CGPoint(x: tapLocation.x / cameraView.frame.width, y: tapLocation.y / cameraView.frame.height)
        photoCaptureDevice.focusPointOfInterest = focalPoint
        //photoCaptureDevice.focusMode = AVCaptureFocusMode.AutoFocus
        photoCaptureDevice.unlockForConfiguration()
    }
}
Attempted this as well:
func focusPhoto(recognizer: UITapGestureRecognizer) {
    if photoCaptureDevice.lockForConfiguration(nil) && photoCaptureDevice.focusPointOfInterestSupported {
        let tapLocation = recognizer.locationInView(cameraView)
        photoCaptureDevice.focusPointOfInterest = previewLayer!.captureDevicePointOfInterestForPoint(tapLocation)
        photoCaptureDevice.unlockForConfiguration()
    }
}
Are you using an AVCaptureVideoPreviewLayer?
There is a method you can use to convert a tapped point into the correct coordinate space for focusing.
- captureDevicePointOfInterestForPoint:
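If you ever need to do the conversion by hand instead (say, without a preview layer available), the key fact is that in portrait with the home button at the bottom the camera's normalized space is rotated 90° relative to the view: the view's y axis becomes the camera's x, and the view's x axis maps to 1 − y. A minimal sketch of that mapping in plain Swift, with hypothetical names not taken from the question:

```swift
// Convert a tap in a portrait-oriented view into the camera's normalized
// point-of-interest space, where (0, 0) is the top-left in landscape
// (home button on the right) and (1, 1) is the bottom-right.
// Helper name and tuple types are illustrative, not from the question.
func cameraPointOfInterest(tapX: Double, tapY: Double,
                           viewWidth: Double, viewHeight: Double) -> (x: Double, y: Double) {
    // In portrait, the camera's x axis runs down the screen (our y),
    // and its y axis runs from right to left (1 - our x).
    return (x: tapY / viewHeight, y: 1.0 - tapX / viewWidth)
}
```

With a 320 × 568 view, a tap at the portrait top-left (0, 0) comes out as (0, 1) and the centre maps to (0.5, 0.5), consistent with the landscape-oriented space described above.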
I took Paul Hudson's (Hacking with Swift) animation tutorial (His Project 15) and tried to extend it to see the effect of multiple animations layered on one another.
There are four distinct animations: scale, translate, rotate and color. Instead of a single button that cycles through all of them, as in his tutorial, I used four buttons to allow selection of each animation individually. I also modified his "undo" animation by reversing the original animation rather than using CGAffineTransform.identity, since the identity transform would undo all the animations to that point.
My problem is that when I click my scale button, it does the appropriate scaling on the first click but, rather than scaling the penguin back to its original size on the second click, it rotates the view. Subsequent clicks of the scale button continue to rotate the view as if I were clicking the rotate button.
There are other anomalies, such as the first click of the move button moves the penguin appropriately but following that with a click of the rotate button, both rotates the penguin and moves it back to the original position. I'm not sure why it moves back but I accept that that might be my own ignorance of the animation system.
I've added print statements and put in breakpoints to debug the scale problem. Everything in the code seems to be working exactly as coded but the animations defy logic! Any help would be appreciated.
The complete program is relatively simple:
import UIKit

class ViewController: UIViewController {
    var imageView: UIImageView!
    var scaled = false
    var moved = false
    var rotated = false
    var colored = false

    @IBOutlet var scaleButton: UIButton!
    @IBOutlet var moveButton: UIButton!
    @IBOutlet var rotateButton: UIButton!
    @IBOutlet var colorButton: UIButton!

    override func viewDidLoad() {
        super.viewDidLoad()
        imageView = UIImageView(image: UIImage(named: "penguin"))
        imageView.center = CGPoint(x: 200, y: 332)
        view.addSubview(imageView)
    }

    @IBAction func tapped(_ sender: UIButton) {
        sender.isHidden = true
        var theAnimation: () -> Void
        switch sender {
        case scaleButton:
            theAnimation = scaleIt
        case moveButton:
            theAnimation = moveIt
        case rotateButton:
            theAnimation = rotateIt
        case colorButton:
            theAnimation = colorIt
        default:
            theAnimation = { self.imageView.transform = .identity }
        }
        print("scaled = \(scaled), moved = \(moved), rotated = \(rotated), colored = \(colored)")
        UIView.animate(withDuration: 1, delay: 0, options: [], animations: theAnimation) { finished in
            sender.isHidden = false
        }
    }

    func scaleIt() {
        print("scaleIt()")
        let newScale: CGFloat = self.scaled ? -1.5 : 1.5
        self.imageView.transform = CGAffineTransform(scaleX: newScale, y: newScale)
        self.scaled.toggle()
    }

    func moveIt() {
        print("moveIt()")
        let newX: CGFloat = self.moved ? 0 : -50
        let newY: CGFloat = self.moved ? 0 : -150
        self.imageView.transform = CGAffineTransform(translationX: newX, y: newY)
        self.moved.toggle()
    }

    func rotateIt() {
        print("rotateIt()")
        let newAngle = self.rotated ? 0.0 : CGFloat.pi
        self.imageView.transform = CGAffineTransform(rotationAngle: newAngle)
        self.rotated.toggle()
    }

    func colorIt() {
        print("colorIt()")
        let newAlpha: CGFloat = self.colored ? 1 : 0.1
        let newColor = self.colored ? UIColor.clear : UIColor.green
        self.imageView.alpha = newAlpha
        self.imageView.backgroundColor = newColor
        self.colored.toggle()
    }
}
By clicking the buttons in any combination, I can arrive at only 10 configurations of the penguin (five with CGAffineTransforms, and the same five modified by the color button).
I'll use these images to answer some of your questions. I've tried to label these images "Configuration1A, Configuration2A, ... Configuration1B", where the B configurations are identical to the A configurations but with the color button's effect. Perhaps the labels will show up in my post (I'm very new to posting on StackOverflow).
Here are the five configurations with the first one repeated using the color button:
I first tried repeated button pressings. For each series of pressing a given button, I first returned the penguin to its original position, size, angle and color. For these repeated button pressings, the behavior I observed was as follows:
Button Observed behavior
scale Penguin scales from Configuration1A to Configuration2A (as expected).
scale Penguin rotates by pi from Configuration2A to Configuration4A (WHAT???).
scale Penguin rotates by pi from Configuration4A to Configuration2A (WHAT???).
scale ... steps 2 and 3, above, repeat indefinitely (WHAT???).
move Penguin moves (-50, -150), Configuration1A to Configuration3A (as expected).
move Penguin moves (50, 150) Configuration3A to Configuration1A (as expected).
move ... behavior above repeats indefinitely (as expected).
rotate Penguin rotates by pi, Configuration1A to Configuration5A (as expected).
rotate Penguin rotates by pi, Configuration5A to Configuration1A (as expected).
rotate ... behavior above repeats indefinitely (as expected).
color Penguin's color changes, Configuration1A to Configuration1B (as expected).
color Penguin's color changes, Configuration1B to Configuration1A (as expected).
color ... behavior above repeats indefinitely (as expected).
The opposite of scale 1.5 (50% bigger) is not -1.5, it's 0.5 (50% of its original size).
Actually, wouldn't you want it to alternate between a scale of 1.5 (50% bigger) and 1.0 (normal size?)
Assuming you do want to alternate between 50% bigger and 50% smaller, change your scaleIt function to:
func scaleIt() {
    print("scaleIt()")
    let newScale: CGFloat = self.scaled ? 0.5 : 1.5
    self.imageView.transform = CGAffineTransform(scaleX: newScale, y: newScale)
    self.scaled.toggle()
}
When you set the X or Y scale to a negative number, it inverts the coordinates in that dimension. Inverting in both dimensions will appear like a rotation.
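You can verify that equivalence with plain coordinate math, no UIKit needed. This standalone sketch (helper names are mine) shows that scaling a point by (−1.5, −1.5) lands exactly where scaling by (1.5, 1.5) and then rotating by π does:

```swift
import Foundation

// Scaling by a negative factor in both axes is indistinguishable from
// scaling by the positive factor and then rotating 180 degrees.
// Helper names are illustrative, not from the original post.
func scaled(_ p: (x: Double, y: Double), by s: Double) -> (x: Double, y: Double) {
    (x: p.x * s, y: p.y * s)
}

func rotated(_ p: (x: Double, y: Double), by angle: Double) -> (x: Double, y: Double) {
    (x: p.x * cos(angle) - p.y * sin(angle),
     y: p.x * sin(angle) + p.y * cos(angle))
}

let p = (x: 10.0, y: 20.0)
let negativeScale = scaled(p, by: -1.5)                    // (-15, -30)
let scaleThenRotate = rotated(scaled(p, by: 1.5), by: .pi) // also (-15, -30), up to floating-point error
```

That is why the penguin appears to rotate on the second tap: the view flips in both axes at once.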
As somebody else mentioned, the way you've written your functions, you won't be able to combine the different transforms. You might want to rewrite your code to apply the changes to the view's existing transform:
func scaleIt() {
    print("scaleIt()")
    // Edited to correct the math if you are applying a scale to an existing transform
    let scaleAdjustment: CGFloat = self.scaled ? 1 / 1.5 : 1.5
    self.imageView.transform = self.imageView.transform.scaledBy(x: scaleAdjustment, y: scaleAdjustment)
    self.scaled.toggle()
}
Edit:
Note that changes to transforms are not "commutative", which is a fancy way of saying that the order in which you apply them matters. Applying a shift, then a rotate, will give different results than applying a rotate, then a shift, for example.
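A standalone illustration of that point (helper names are mine, using a 90° rotation so no trigonometry is needed): the same shift and rotate applied in opposite orders land a point in two different places.

```swift
// Transforms are order-sensitive: a shift followed by a rotate is not
// the same as a rotate followed by a shift. Helper names are mine.
func rotate90(_ p: (x: Double, y: Double)) -> (x: Double, y: Double) {
    (x: -p.y, y: p.x) // 90-degree counterclockwise rotation about the origin
}

func shift(_ p: (x: Double, y: Double), dx: Double, dy: Double) -> (x: Double, y: Double) {
    (x: p.x + dx, y: p.y + dy)
}

let start = (x: 10.0, y: 0.0)
let rotateThenShift = shift(rotate90(start), dx: 50, dy: 0) // (50, 10)
let shiftThenRotate = rotate90(shift(start, dx: 50, dy: 0)) // (0, 60)
```

The same point, the same two operations, two different results.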
Edit #2:
Another thing:
The way you've written your functions, they set the view to its current state and then invert that state for next time (you toggle scaled after deciding what to do).
That means that for a function like moveIt(), nothing will happen the first time you tap the button. Toggle the Bool first and use relative deltas instead:
func moveIt() {
    print("I like to moveIt moveIt!")
    self.moved.toggle() // Toggle the Bool first
    let newX: CGFloat = self.moved ? -50 : 50
    let newY: CGFloat = self.moved ? -150 : 150
    self.imageView.transform = self.imageView.transform.translatedBy(x: newX, y: newY)
}
I'm having a hard time setting boundaries and positioning camera properly inside my view after panning. So here's my scenario.
I have a node that is bigger than the screen and I want to let user pan around to see the full map. My node is 1000 by 1400 when the view is 640 by 1136. Sprites inside the map node have the default anchor point.
Then I've added a camera to the map node and set its position to (0.5, 0.5).
Now I'm wondering whether I should be changing the position of the camera or of the map node when the user pans the screen. The first approach seems problematic, since I can't simply add the translation to the camera position: the position is defined as (0.5, 0.5) and the translation values are way bigger than that. I tried multiplying/dividing by the screen size, but that doesn't seem to work. Is the second approach better?
var map = Map(size: CGSize(width: 1000, height: 1400))

override func didMove(to view: SKView) {
    (...)
    let pan = UIPanGestureRecognizer(target: self, action: #selector(panned(sender:)))
    view.addGestureRecognizer(pan)
    self.anchorPoint = CGPoint.zero
    self.cam = SKCameraNode()
    self.cam.name = "camera"
    self.camera = cam
    self.addChild(map)
    self.map.addChild(self.cam!)
    cam.position = CGPoint(x: 0.5, y: 0.5)
}
var previousTranslateX: CGFloat = 0.0

func panned(sender: UIPanGestureRecognizer) {
    let currentTranslateX = sender.translation(in: view!).x
    // calculate translation since last measurement
    let translateX = currentTranslateX - previousTranslateX
    let xMargin = (map.nodeSize.width - self.frame.width) / 2
    var newCamPosition = CGPoint(x: cam.position.x, y: cam.position.y)
    let newPositionX = cam.position.x * self.frame.width + translateX
    // since the camera x is 320, our limits are 140 and 460?
    if newPositionX > self.frame.width/2 - xMargin && newPositionX < self.frame.width - xMargin {
        newCamPosition.x = newPositionX / self.frame.width
    }
    centerCameraOnPoint(point: newCamPosition)
    // (re-)set previous measurement
    if sender.state == .ended {
        previousTranslateX = 0
    } else {
        previousTranslateX = currentTranslateX
    }
}

func centerCameraOnPoint(point: CGPoint) {
    if cam != nil {
        cam.position = point
    }
}
Your camera is actually positioned 0.5 points to the right of, and 0.5 points up from, the centre. At (0, 0) your camera is dead centre of the screen.
I think the mistake you've made is a conceptual one, thinking that anchor point of the scene (0.5, 0.5) is the same as the centre coordinates of the scene.
If you're working in pixels, which it seems you are, then a camera position of (500, 700) will be at the top right of your map, ( -500, -700 ) will be at the bottom left.
This assumes you're using the midpoint anchor that comes default with the Xcode SpriteKit template.
Which means the answer to your question is: literally move the camera around your map as you please, confident in the knowledge that its position is in literal point coordinates.
With one caveat...
A lot of games use constraints to stop the camera a little before it reaches the edge of a map, so the map is never half on and half off the screen. The map's edge still shows, but the camera travels only far enough to reveal it. This becomes a constraint-based effort when you have a player/character that can walk to the edge while the camera stays behind.
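For a centre-anchored scene like the one in the question, that clamping is simple arithmetic: the camera's x can travel at most (mapWidth − viewWidth) / 2 in either direction before the map edge crosses the screen edge. A minimal sketch, assuming the question's 1000-point-wide map and 640-point-wide view (the function name is mine):

```swift
// Clamp a camera x coordinate so the view never shows past the map edge.
// With a mid-centre anchor, the camera may travel half the leftover map
// width in each direction. Names are illustrative, not from the question.
func clampedCameraX(_ x: Double, mapWidth: Double, viewWidth: Double) -> Double {
    let limit = max(0, (mapWidth - viewWidth) / 2)
    return min(max(x, -limit), limit)
}
```

For the question's numbers the limit comes out at ±180, so a camera x of 500 would be pulled back to 180. SKConstraint.positionX(_:) can express the same bound declaratively if you prefer constraints.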
In my app, as shown in the figures, I need to rotate the view in a circle. How can I set that up?
Green is nButton and red is nView. My code is:
func handleRotate(recognizer: UIRotationGestureRecognizer) {
    nView.transform = CGAffineTransformRotate(imageViews[i].transform, recognizer.rotation)
    nButton.center = CGPointMake(nView.frame.origin.x, nView.frame.origin.y)
    recognizer.rotation = 0
}
Actually, I am unable to get the frame of the imageView here. Even bounds is not working for me.
So I am developing an iPad app that allows the user to solve a jigsaw puzzle. I've worked out the panning motion for each piece, but getting the pieces where I want them has not worked properly. I am trying to make a piece snap into its final destination when it's within a small range, accompanied by a clicking sound.
Here is a bit of code for a single puzzle piece. When my new game button is pressed, an Image View gets set to the corresponding picture, and randomly placed on the canvas.
@IBAction func NewGameTapped(sender: UIButton) {
    let bounds = UIScreen.mainScreen().bounds
    let height = bounds.size.height
    let width = bounds.size.width
    image1.image = UIImage(named: "puzzleImage1.png")
    image1.center.x = CGFloat(100 + arc4random_uniform(UInt32(width) - 300))
    image1.center.y = CGFloat(100 + arc4random_uniform(UInt32(height) - 300))
    // Create Panning (Dragging) Gesture Recognizer for Image View 1
    let panRecognizer1 = UIPanGestureRecognizer(target: self, action: "handlePanning1:")
    // Add Panning (Dragging) Gesture Recognizer to Image View 1
    image1.addGestureRecognizer(panRecognizer1)
}
This is where I am having some issues.
func handlePanning1(recognizer: UIPanGestureRecognizer) {
    let center = dict1_image_coordinates["puzzleImage1"] as! [Int]
    let newTranslation: CGPoint = recognizer.translationInView(image1)
    recognizer.view?.transform = CGAffineTransformMakeTranslation(lastTranslation1.x + newTranslation.x, lastTranslation1.y + newTranslation.y)
    if recognizer.state == UIGestureRecognizerState.Ended {
        lastTranslation1.x += newTranslation.x
        lastTranslation1.y += newTranslation.y
    }
    checkPosition(image1, center: center)
}

func checkPosition(image: UIImageView, center: [Int]) {
    let distance: Double = sqrt(pow(Double(image.center.x) - Double(center[0]), 2) + pow(Double(image.center.y) - Double(center[1]), 2))
    // if the distance is within range, set image to new location
    if distance <= 20 {
        image.center.x = CGFloat(center[0])
        image.center.y = CGFloat(center[1])
        AudioServicesPlaySystemSound(clickSoundID)
    }
}
For whatever reason, the puzzle piece only wants to snap to its spot when the piece begins the game within the acceptable snap distance. I have tried checking the object's position in various parts of my program, but nothing has worked so far. Any help or other tips are greatly appreciated.
The issue is likely caused by this line
image1.addGestureRecognizer(panRecognizer1)
Usually people add the gesture recognizer to the parent view, or the root view of the view controller, instead of image1 itself. The benefit is that the parent view never moves, whereas image1 is constantly being transformed, which may or may not affect the return value of recognizer.translationInView(_:).
do this instead:
self.view.addGestureRecognizer(panRecognizer1)
and change to this line in handlePanning1 function:
image1.transform = CGAffineTransformMakeTranslation(lastTranslation1.x + newTranslation.x, lastTranslation1.y + newTranslation.y)
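Separately, the snap check itself is easy to sanity-check in isolation. This is a plain-Swift restatement of the question's checkPosition distance logic (the function name and tuple types are mine, not from the question):

```swift
// Return the snapped position when a piece's centre is within
// snapRadius of its target, or nil when it should stay where it is.
// Mirrors the distance test in the question's checkPosition, sans UIKit.
func snappedPosition(center: (x: Double, y: Double),
                     target: (x: Double, y: Double),
                     snapRadius: Double = 20) -> (x: Double, y: Double)? {
    let dx = center.x - target.x
    let dy = center.y - target.y
    let distance = (dx * dx + dy * dy).squareRoot()
    return distance <= snapRadius ? target : nil
}
```

A piece centred at (110, 105) with target (100, 100) is about 11.2 points away, so it snaps; one at (200, 200) does not. If this behaves as expected in isolation, the remaining bug is in the coordinates being compared, not the distance math.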
I'm trying to write a piece of code in iOS using Swift that creates a square where the user touches and lets them drag it around. The catch is that I want the area it can move around in to be confined to the UIView it was created in.
The code below almost works. You can only create the square by pressing within the box, but then you can drag it wherever you want. I'm not picky about whether the box stays in the "fence" and tracks with your finger, or just disappears until you move your finger back in, but I can't have it all over the screen.
I'm pretty new to this, so if there's a better way to go about it, I'm happy to be corrected.
class ViewController: UIViewController {
    var dragableSquare = UIView() // a square that will appear on press and be dragged around
    var fence = UIView() // a view that the square should be confined within

    override func viewDidLoad() {
        super.viewDidLoad()
        // define the fence UIView and add it to the view
        fence.frame = CGRectMake(view.frame.width/2 - 100, view.frame.height/2 - 100, 200, 200)
        fence.backgroundColor = UIColor.grayColor()
        view.addSubview(fence)
        // give the fence a gesture recognizer
        let pressRecog = UILongPressGestureRecognizer(target: self, action: "longPress:")
        pressRecog.minimumPressDuration = 0.001
        fence.addGestureRecognizer(pressRecog)
    }

    func longPress(gesture: UILongPressGestureRecognizer) {
        print("press!")
        // get location of the press
        let touchPoint = gesture.locationInView(fence)
        // When the touch begins, place the square at that point
        if gesture.state == UIGestureRecognizerState.Began {
            print("began")
            // create and add square to fence view
            dragableSquare.frame = CGRectMake(touchPoint.x - 5, touchPoint.y - 5, 10, 10)
            dragableSquare.backgroundColor = UIColor.blueColor()
            self.fence.addSubview(dragableSquare)
        // While the press continues, update the square's location to the current touch point
        } else {
            print("moving")
            dragableSquare.center = touchPoint
        }
    }
}
I just joined stack overflow and I've been really impressed with how generous and helpful the community is. I hope I'll get enough experience to start helping others out soon too.
You can use CGRectIntersection to get the size of the intersection rectangle between two views. In your case, you want to keep moving the square as long as the intersection rectangle between the square and the fence is the same size as the square (meaning the square is still wholly within the fence). So your else clause should look like this:
} else {
    print("moving")
    if CGRectIntersection(dragableSquare.frame, fence.bounds).size == dragableSquare.frame.size {
        dragableSquare.center = touchPoint
    }
}
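The same "wholly inside" test can be checked without UIKit if you want to convince yourself of the logic: a square of side s centred at (cx, cy) fits inside a w × h fence exactly when its centre stays at least s/2 from every edge. A standalone sketch (names are mine):

```swift
// True when a square of side `side` centred at (cx, cy) lies entirely
// inside a fence whose bounds run from (0, 0) to (width, height).
// Equivalent to the intersection-size comparison above, stated as bounds.
func squareFitsInFence(cx: Double, cy: Double, side: Double,
                       width: Double, height: Double) -> Bool {
    let half = side / 2
    return cx - half >= 0 && cy - half >= 0
        && cx + half <= width && cy + half <= height
}
```

You could also use those same bounds to clamp the centre into [s/2, w − s/2] × [s/2, h − s/2], which keeps the square pinned to the fence edge instead of freezing it when the finger strays outside.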