I'm trying to make an app with Swift, and I want to use the front-facing camera.
I used AVFoundation and tried some code, but I couldn't set the zoom factor for the front-facing camera. Is it possible? For the back camera, everything worked successfully.
I don't want to use an affine transform, because it can decrease image quality. So how can I set this parameter programmatically?
Thanks.
You'll need to add a zoomFactor variable to your camera.
var zoomFactor: CGFloat = 1.0
Next, define a zoom function to be used in conjunction with a pinch recognizer. I assume you have already created a front capture device and its input; frontDevice is the optional capture device for my camera. Here's how I zoom that device.
@objc public func zoom(_ pinch: UIPinchGestureRecognizer) {
    guard let device = frontDevice else { return }

    func minMaxZoom(_ factor: CGFloat) -> CGFloat {
        return min(max(factor, 1.0), device.activeFormat.videoMaxZoomFactor)
    }

    func update(scale factor: CGFloat) {
        do {
            try device.lockForConfiguration()
            defer { device.unlockForConfiguration() }
            device.videoZoomFactor = factor
        } catch {
            debugPrint(error)
        }
    }

    let newScaleFactor = minMaxZoom(pinch.scale * zoomFactor)
    switch pinch.state {
    case .began: fallthrough
    case .changed: update(scale: newScaleFactor)
    case .ended:
        zoomFactor = minMaxZoom(newScaleFactor)
        update(scale: zoomFactor)
    default: break
    }
}
Finally, add a pinch recognizer to some view.
let pgr = UIPinchGestureRecognizer(target: self, action: #selector(zoom(_:)))
view.addGestureRecognizer(pgr)
The previous answer can be written without the nested functions, making it more straightforward and easier to understand.
To fully explain the code:
The zoom variable keeps track of what zoom you were at after the last gesture. Before any gesture happens there is no zoom, so you're at 1.0.
During a gesture the scale property of pinch holds the ratio of the pinch during the active gesture. This is 1.0 when your fingers haven't moved from their initial position and grows and shrinks with pinching. By multiplying this with the previously held zoom you get what scale to be at in the moment while the gesture is occurring. It's important to keep this scale in the range of [1, device.activeFormat.videoMaxZoomFactor] or you'll get a SIGABRT.
When the gesture finishes (pinch.state == .ended), you need to update zoom so that the next gesture starts at the current zoom level.
It's important to lock when modifying a camera property to avoid concurrent modification. defer will release the lock after the block of code no matter what, similar to a finally block.
var zoom: CGFloat = 1.0

@objc func pinch(_ pinch: UIPinchGestureRecognizer) {
    guard let device = frontDevice else { return }

    let scaleFactor = min(max(pinch.scale * zoom, 1.0), device.activeFormat.videoMaxZoomFactor)
    if pinch.state == .ended {
        zoom = scaleFactor
    }

    do {
        try device.lockForConfiguration()
        defer { device.unlockForConfiguration() }
        device.videoZoomFactor = scaleFactor
    } catch {
        print(error)
    }
}
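For completeness, this version is wired up the same way as in the first answer; only the selector changes to match the function name:
let pgr = UIPinchGestureRecognizer(target: self, action: #selector(pinch(_:)))
view.addGestureRecognizer(pgr)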
I have an object in an SCNScene and I want the user to zoom in/out on specific parts of it using a double tap.
I thought of two options:
Make the camera itself move to that part, similar to this question:
scenekit - zoom in/out to selected node of scene
But when I took this approach it didn't zoom out, or even zoom in accurately.
Add a camera node in front of each part, so when the user taps on a part it repositions the scene's default camera to the configured camera I added. But I was worried this would hurt performance because of all the nodes I keep adding. Should I try this? (A sketch of this option follows the code below.)
This is the code I tried for the first approach.
@objc
internal func handleTapGesture(_ gestureRecognizer: UIGestureRecognizer) {
    let hitPoint = gestureRecognizer.location(in: sceneViewVehicle)
    let hitResults = sceneViewVehicle.hitTest(hitPoint, options: nil)
    if hitResults.count > 0 {
        let result = hitResults.first!
        let scale = CGFloat(result.node.simdScale.y)
        switch gestureRecognizer.state {
        case .changed: fallthrough
        case .ended:
            cameraNode.camera?.multiplyFOV(by: scale)
        default: break
        }
    }
}
Adding the Gesture
let tapGesture = UITapGestureRecognizer(target: self, action: #selector(handleTapGesture(_:)))
tapGesture.numberOfTapsRequired = 2
sceneViewVehicle.addGestureRecognizer(tapGesture)
Zooming the camera
extension SCNCamera {
    public func setFOV(_ value: CGFloat) {
        fieldOfView = value
    }

    public func multiplyFOV(by multiplier: CGFloat) {
        fieldOfView *= multiplier
    }
}
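On the second option: you don't necessarily need a camera node per part. A single camera node can be created (or moved) on demand and assigned to the view's pointOfView; SceneKit animates the switch when the assignment happens inside an SCNTransaction. A rough sketch along those lines, reusing the question's sceneViewVehicle (zoomCamera(to:) is a hypothetical helper, and the 1.5-unit offset is a placeholder to tune per model scale):

// Sketch: move one on-demand camera in front of the tapped node and
// hand it to the view's pointOfView. SceneKit animates the transition
// when the assignment happens inside an SCNTransaction.
func zoomCamera(to node: SCNNode) {
    let zoomCameraNode = SCNNode()
    zoomCameraNode.camera = SCNCamera()
    // Place the camera 1.5 units in front of the node, along the node's +Z axis.
    zoomCameraNode.position = node.convertPosition(SCNVector3(0, 0, 1.5), to: nil)
    zoomCameraNode.look(at: node.worldPosition)
    sceneViewVehicle.scene?.rootNode.addChildNode(zoomCameraNode)

    SCNTransaction.begin()
    SCNTransaction.animationDuration = 0.5
    sceneViewVehicle.pointOfView = zoomCameraNode
    SCNTransaction.commit()
}

Because the camera is created only when needed (and can be removed after the transition), this avoids keeping one camera node per part in the scene.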
I'm trying to detect when the camera is facing my object that I've placed in ARSKView. Here's the code:
override func update(_ currentTime: TimeInterval) {
    // Called before each frame is rendered
    guard let sceneView = self.view as? ARSKView else {
        return
    }
    if let currentFrame = sceneView.session.currentFrame {
        //let cameraZ = currentFrame.camera.transform.columns.3.z
        for anchor in currentFrame.anchors {
            if let spriteNode = sceneView.node(for: anchor), spriteNode.name == "token", intersects(spriteNode) {
                // token is within the camera view
                let distance = simd_distance(anchor.transform.columns.3,
                                             currentFrame.camera.transform.columns.3)
                //print("DISTANCE BETWEEN CAMERA AND TOKEN: \(distance)")
                if distance <= captureDistance {
                    // token is within the camera view and within capture distance
                    print("token is within the camera view and within capture distance")
                }
            }
        }
    }
}
The problem is that the intersects method returns true both when the object is directly in front of the camera and when it is directly behind it. How can I update this code so it only detects when the spriteNode is in the current camera viewfinder? I'm using SpriteKit, by the way, not SceneKit.
Here's the code I'm using to actually create the anchor:
self.captureDistance = captureDistance
guard let sceneView = self.view as? ARSKView else {
    return
}
// Create anchor using the camera's current position
if sceneView.session.currentFrame != nil {
    print("token dropped at \(distance) meters and bearing: \(bearing)")
    // Add a new anchor to the session
    let transform = getTransformGiven(bearing: bearing, distance: distance)
    let anchor = ARAnchor(transform: transform)
    sceneView.session.add(anchor: anchor)
}

func getTransformGiven(bearing: Float, distance: Float) -> matrix_float4x4 {
    let origin = MatrixHelper.translate(x: 0, y: 0, z: Float(distance * -1))
    let bearingTransform = MatrixHelper.rotateMatrixAroundY(degrees: bearing * -1, matrix: origin)
    return bearingTransform
}
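MatrixHelper is the question's own helper and isn't shown. For illustration only, here is a minimal sketch of what those two functions might look like with simd, matching how they are called above (the originals may differ):

import simd

enum MatrixHelper {
    // Build a translation matrix (column-major, as ARKit expects).
    static func translate(x: Float, y: Float, z: Float) -> matrix_float4x4 {
        var matrix = matrix_identity_float4x4
        matrix.columns.3 = simd_float4(x, y, z, 1)
        return matrix
    }

    // Rotate the given matrix around the world Y axis by `degrees`.
    static func rotateMatrixAroundY(degrees: Float, matrix: matrix_float4x4) -> matrix_float4x4 {
        let radians = degrees * .pi / 180
        let rotation = simd_float4x4(simd_quatf(angle: radians, axis: simd_float3(0, 1, 0)))
        return simd_mul(rotation, matrix)
    }
}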
I have spent a while looking at this, and have come to the conclusion that trying to get the distance between currentFrame.camera and the anchor doesn't work, simply because it returns similar values regardless of whether the anchor is in front of or behind the camera. By this I mean that if we assume our anchor is at point x, and we move forwards 1 meter or backwards 1 meter, the distance between the camera and the anchor is still 1 meter.
As such, after some experimenting, I believe we need to look at the following variables and functions to help us detect whether our SKNode is in front of the camera:
(a) The zPosition of the SpriteNode which refers to:
The z-order of the node (used for ordering). Negative z is "into" the screen, Positive z is "out" of the screen
(b) open func intersects(_ node: SKNode) -> Bool which:
Returns true if the bounds of this node intersects with the
transformed bounds of the other node, otherwise false.
As such the following seems to do exactly what you need:
override func update(_ currentTime: TimeInterval) {
    //1. Get the current ARSKView & current frame
    guard let sceneView = self.view as? ARSKView, let currentFrame = sceneView.session.currentFrame else { return }

    //2. Iterate through our anchors & check for our token node
    for anchor in currentFrame.anchors {
        if let spriteNode = sceneView.node(for: anchor), spriteNode.name == "token" {
            /*
            If the zPosition of the spriteNode is negative, it can be seen as into the screen, whereas positive is out of the screen.
            However, we also need to know whether the actual frustum (SKScene) intersects our object.
            If our zPosition is negative & the SKScene doesn't intersect our node, then we can assume it isn't visible.
            */
            if spriteNode.zPosition <= 0 && intersects(spriteNode) {
                print("In front of camera")
            } else {
                print("Not in front of camera")
            }
        }
    }
}
Hope it helps...
You can also use this function to check the camera's position:
- (void)session:(ARSession *)session didUpdateFrame:(ARFrame *)frame {
    simd_float4x4 transform = session.currentFrame.camera.transform;
    SCNVector3 position = SCNVector3Make(transform.columns[3].x,
                                         transform.columns[3].y,
                                         transform.columns[3].z);
    // Call any function to check the position.
}
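Since the rest of this thread is in Swift, here is a rough Swift equivalent using ARSessionDelegate's session(_:didUpdate:):

// Swift sketch of the same check via the ARSessionDelegate callback.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    let transform = frame.camera.transform
    let position = SCNVector3Make(transform.columns.3.x,
                                  transform.columns.3.y,
                                  transform.columns.3.z)
    // Call any function to check the position.
    print(position)
}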
Here's a clue: check the zPosition like this.
if let spriteNode = sceneView.node(for: anchor),
   spriteNode.name == "token",
   intersects(spriteNode) && spriteNode.zPosition < 0 { ... }
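A more direct alternative, as a sketch (isAnchor(_:inFrontOf:) is a helper name I'm introducing, not an ARKit API): transform the anchor into camera space and test the sign of z. In ARKit's camera space the camera looks down -Z, so a negative z means the anchor is in front of the camera.

// Transform the anchor into the camera's coordinate space and test z.
func isAnchor(_ anchor: ARAnchor, inFrontOf camera: ARCamera) -> Bool {
    let anchorInCameraSpace = simd_mul(simd_inverse(camera.transform), anchor.transform)
    return anchorInCameraSpace.columns.3.z < 0
}

You could then call isAnchor(anchor, inFrontOf: currentFrame.camera) from the update loop above.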
I am new to Swift and SpriteKit and am trying to understand the controls in the game "Fish & Trip". The sprite node is always at the center of the view, and it rotates according to how you move your touch; no matter where you touch and move (hold), it rotates correspondingly.
The difficulty here is that this is different from both the pan gesture and the simple touch location, as I noted in pictures 1 and 2.
For the 1st pic, the touch location is processed by atan2f and then sent to SKAction.rotate, and that's it; I can make this work.
For the 2nd pic, I can get this by setting up a UIPanGestureRecognizer and it works, but you can only rotate the node by moving your finger around the initial touch point (touchesBegan).
My question is about the 3rd pic, which matches the Fish & Trip game: you can touch anywhere on the screen and then move (hold) anywhere, and the node still rotates as you move; you don't have to move your finger around the initial point, and the rotation is smooth and accurate.
My code is as follows. It doesn't work very well and shows some jittering. How can I implement this in a better way, and how can I make the rotation smooth?
Is there a way to filter previousLocation in the touchesMoved function? I always encounter jittering when I use this property; I think it reports too fast. I didn't have any issue when I used UIPanGestureRecognizer, which was very smooth, so I guess I must have done something wrong with previousLocation.
func mtoRad(x: CGFloat, y: CGFloat) -> CGFloat {
    let radian = atan2f(Float(y), Float(x))
    return CGFloat(radian)
}

func moveplayer(radian: CGFloat) {
    let rotateAction = SKAction.rotate(toAngle: radian, duration: 0.1, shortestUnitArc: true)
    thePlayer.run(rotateAction)
}

var touchpoint = CGPoint.zero
var R2: CGFloat? = 0.0

override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
    for t in touches {
        let previousPointOfTouch = t.previousLocation(in: self)
        touchpoint = t.location(in: self)
        if touchpoint.x != previousPointOfTouch.x && touchpoint.y != previousPointOfTouch.y {
            let delta_y = touchpoint.y - previousPointOfTouch.y
            let delta_x = touchpoint.x - previousPointOfTouch.x
            let R1 = mtoRad(x: delta_x, y: delta_y)
            if R2! != R1 {
                moveplayer(radian: R1)
            }
            R2 = R1
        }
    }
}
This is not an answer (yet - hoping to post one or edit this into one later), but you can make your code a bit more 'Swifty' by changing the definition of moveplayer() from:
func moveplayer(radian: CGFloat)
to
func rotatePlayerTo(angle targetAngle: CGFloat) {
    let rotateAction = SKAction.rotate(toAngle: targetAngle, duration: 0.1, shortestUnitArc: true)
    thePlayer.run(rotateAction)
}
then, to call it, instead of:
moveplayer(radian: R1)
use
rotatePlayerTo(angle: R1)
which is more readable, as it better describes what you are doing.
Also, your rotation to the new angle always takes a constant 0.1 s, so if the player has to rotate further, it will rotate faster. It would be better to keep the rotational speed constant (in radians per second). We can do this as follows:
Add the following property:
let playerRotationSpeed = CGFloat((2 * Double.pi) / 2.0) // radians per second; 2 seconds for a full rotation
Change your rotatePlayerTo(angle:) to:
func rotatePlayerTo(angle targetAngle: CGFloat) {
    let angleToRotateBy = abs(targetAngle - thePlayer.zRotation)
    let rotationTime = TimeInterval(angleToRotateBy / playerRotationSpeed)
    let rotateAction = SKAction.rotate(toAngle: targetAngle, duration: rotationTime, shortestUnitArc: true)
    thePlayer.run(rotateAction)
}
This may help smooth the rotation too.
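On the question's jitter with previousLocation(in:), another option (an untested sketch on my part, with a tunable smoothing factor) is to low-pass filter the target angle before running the rotate action, so single-frame spikes are damped:

var smoothedAngle: CGFloat = 0.0

// Blend each new angle into a running value, taking the shortest arc
// so the filter doesn't spin the long way around at the ±π boundary.
func smoothed(_ newAngle: CGFloat) -> CGFloat {
    let alpha: CGFloat = 0.3 // 0...1; lower = smoother but laggier (tunable)
    var delta = newAngle - smoothedAngle
    while delta > .pi { delta -= 2 * .pi }
    while delta < -.pi { delta += 2 * .pi }
    smoothedAngle += alpha * delta
    return smoothedAngle
}

In touchesMoved you would then call moveplayer(radian: smoothed(R1)) instead of passing R1 straight through.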
What is the best way to create accurate auto focus and exposure for an AVFoundation custom-layer camera? For example, my camera preview layer is currently square, and I would like the camera focus and exposure to be specific to that frame's bounds. I need this in Swift 2 if possible; if not, please write your answer anyway and I'll convert it myself.
Current auto focus and exposure code; as you can see, this evaluates the entire view when focusing:
override func touchesBegan(touches: Set<UITouch>, withEvent event: UIEvent?) {
    //Get touch point
    let point = touches.first!.locationInView(self.capture)
    //Assign auto focus and auto exposure
    if let device = currentCameraInput {
        do {
            try! device.lockForConfiguration()
            if device.focusPointOfInterestSupported {
                //Add focus on point
                device.focusPointOfInterest = point
                device.focusMode = AVCaptureFocusMode.AutoFocus
            }
            if device.exposurePointOfInterestSupported {
                //Add exposure on point
                device.exposurePointOfInterest = point
                device.exposureMode = AVCaptureExposureMode.AutoExpose
            }
            device.unlockForConfiguration()
        }
    }
}
Camera layer: anything within the 1:1 frame should be treated as the focus and exposure point, and anything outside those bounds shouldn't even be considered as a touch event for camera focus.
public func captureDevicePointOfInterestForPoint(pointInLayer: CGPoint) -> CGPoint
will give you the point for the device to focus on based on the settings of your AVCaptureVideoPreviewLayer. See the docs.
Thanks to JLW, here is how you do it in Swift 2. First we need to set up a tap gesture; you can do this programmatically or in the Storyboard.
//Add UITap Gesture Capture Frame for Focus and Exposure
let captureTapGesture: UITapGestureRecognizer = UITapGestureRecognizer(target: self, action: "AutoFocusGesture:")
captureTapGesture.numberOfTapsRequired = 1
captureTapGesture.numberOfTouchesRequired = 1
self.captureFrame.addGestureRecognizer(captureTapGesture)
Create a function based on the selector in captureTapGesture.
/*=========================================
 * FOCUS & EXPOSURE
 ==========================================*/
var animateActivity: Bool!
internal func AutoFocusGesture(RecognizeGesture: UITapGestureRecognizer) {
    let touchPoint: CGPoint = RecognizeGesture.locationInView(self.captureFrame)
    //Get preview layer point
    let convertedPoint = self.previewLayer.captureDevicePointOfInterestForPoint(touchPoint)
    //Assign auto focus and auto exposure
    if let device = currentCameraInput {
        do {
            try! device.lockForConfiguration()
            if device.focusPointOfInterestSupported {
                //Add focus on point
                device.focusPointOfInterest = convertedPoint
                device.focusMode = AVCaptureFocusMode.AutoFocus
            }
            if device.exposurePointOfInterestSupported {
                //Add exposure on point
                device.exposurePointOfInterest = convertedPoint
                device.exposureMode = AVCaptureExposureMode.AutoExpose
            }
            device.unlockForConfiguration()
        }
    }
}
Also, if you'd like to show an animated indicator, use touchPoint from the touch event and assign its position to your animated layer.
//Assign Indicator Position
touchIndicatorOutside.frame.origin.x = touchPoint.x - 10
touchIndicatorOutside.frame.origin.y = touchPoint.y - 10
First off, I have already seen and tried to implement the other answers to similar questions here, here and here. The problem is I started programming for iOS last year with Swift and (regrettably) I did not learn ObjC first (yes, it's now on my to-do list). ;-)
So please take a look and see if you might help me see my way thru this.
I can easily pinch to zoom the whole SKScene. I can also scale an SKSpriteNode up/down by using other UI gestures (i.e. swipes) and SKActions.
Based on this post I have applied the SKAction to the UIPinchGestureRecognizer and it works perfectly to zoom IN, but I cannot get it to zoom back OUT.
What am I missing?
Here is my code on a sample project:
class GameScene: SKScene {
    var board = SKSpriteNode(color: SKColor.yellowColor(), size: CGSizeMake(200, 200))

    func pinched(sender: UIPinchGestureRecognizer) {
        println("pinched \(sender)")
        // the line below scales the entire scene
        //sender.view!.transform = CGAffineTransformScale(sender.view!.transform, sender.scale, sender.scale)
        sender.scale = 1.01
        // the line below scales just the SKSpriteNode,
        // but it has no effect unless I increase the scaling to >1
        var zoomBoard = SKAction.scaleBy(sender.scale, duration: 0)
        board.runAction(zoomBoard)
    }

    // the line below scales just the SKSpriteNode
    func swipedUp(sender: UISwipeGestureRecognizer) {
        println("swiped up")
        var zoomBoard = SKAction.scaleBy(1.1, duration: 0)
        board.runAction(zoomBoard)
    }

    // I thought perhaps the line below would scale down the SKSpriteNode,
    // but it has no effect at all
    func swipedDown(sender: UISwipeGestureRecognizer) {
        println("swiped down")
        var zoomBoard = SKAction.scaleBy(0.9, duration: 0)
        board.runAction(zoomBoard)
    }

    override func didMoveToView(view: SKView) {
        self.addChild(board)
        let pinch: UIPinchGestureRecognizer = UIPinchGestureRecognizer(target: self, action: Selector("pinched:"))
        view.addGestureRecognizer(pinch)
        let swipeUp: UISwipeGestureRecognizer = UISwipeGestureRecognizer(target: self, action: Selector("swipedUp:"))
        swipeUp.direction = .Up
        view.addGestureRecognizer(swipeUp)
        let swipeDown: UISwipeGestureRecognizer = UISwipeGestureRecognizer(target: self, action: Selector("swipedDown:"))
        swipeDown.direction = .Down
        view.addGestureRecognizer(swipeDown)
    }

    override func touchesBegan(touches: Set<NSObject>, withEvent event: UIEvent) {
        // should I be using this function instead?
    }
}
Thanks to help from @sangony I have finally gotten this working. I thought I'd post the working code in case anyone else would like to see it in Swift.
var board = SKSpriteNode(color: SKColor.yellowColor(), size: CGSizeMake(200, 200))
var previousScale = CGFloat(1.0)

func pinched(sender: UIPinchGestureRecognizer) {
    if sender.scale > previousScale {
        previousScale = sender.scale
        if board.size.height < 800 {
            var zoomIn = SKAction.scaleBy(1.05, duration: 0)
            board.runAction(zoomIn)
        }
    }
    if sender.scale < previousScale {
        previousScale = sender.scale
        if board.size.height > 200 {
            var zoomOut = SKAction.scaleBy(0.95, duration: 0)
            board.runAction(zoomOut)
        }
    }
}
I tried your code (in Objective-C) and got it to zoom in and out using pinch. I don't think there's anything wrong with your code, but you are probably not taking into account that the scale factor is applied to the ever-changing sprite size.
You can easily zoom so far out or in that it takes multiple pinch gestures to get the node back to a manageable size. Instead of using the scale property directly as your zoom factor, I suggest you use a step process. You should also have max/min limits for your scale size.
To use the step process, create a CGFloat ivar previousScale to store the last scale value, which lets you determine whether the current pinch is zooming in or out. You then compare the newly reported sender.scale against the ivar and zoom in or out based on the comparison.
Apply min and max scale limits to stop scaling once they are reached.
The code below is in Obj-C but I'm sure you can get the gist of it:
First, declare your ivar: float previousScale;
- (void)handlePinch:(UIPinchGestureRecognizer *)sender {
    NSLog(@"pinchScale:%f", sender.scale);

    if (sender.scale > previousScale) {
        previousScale = sender.scale;
        // only scale up if the node height is less than 200
        if (node0.size.height < 200) {
            // step up the scale factor by 0.05
            [node0 runAction:[SKAction scaleBy:1.05 duration:0]];
        }
    }

    if (sender.scale < previousScale) {
        previousScale = sender.scale;
        // only scale down if the node height is greater than 20
        if (node0.size.height > 20) {
            // step down the scale factor by 0.05
            [node0 runAction:[SKAction scaleBy:0.95 duration:0]];
        }
    }
}