Auto Focus and Auto Exposure in AVFoundation on a Custom Camera Layer - iOS

What is the best way to create accurate auto focus and auto exposure for a custom AVFoundation camera layer? For example, my camera preview layer is currently square, and I would like focus and exposure to be limited to that frame's bounds. I need this in Swift 2 if possible; if not, please post your answer anyway and I can convert it myself.
Current auto focus and exposure code (but as you can see, this evaluates the entire view when focusing):
override func touchesBegan(touches: Set<UITouch>, withEvent event: UIEvent?) {
    // Get the touch point
    let point = touches.first!.locationInView(self.capture)
    // Assign auto focus and auto exposure
    if let device = currentCameraInput {
        do {
            try device.lockForConfiguration()
            if device.focusPointOfInterestSupported {
                // Focus on the touch point
                device.focusPointOfInterest = point
                device.focusMode = AVCaptureFocusMode.AutoFocus
            }
            if device.exposurePointOfInterestSupported {
                // Expose for the touch point
                device.exposurePointOfInterest = point
                device.exposureMode = AVCaptureExposureMode.AutoExpose
            }
            device.unlockForConfiguration()
        } catch {
            print("Could not lock device for configuration: \(error)")
        }
    }
}
Camera layer: anything inside the 1:1 frame should be usable as a focus and exposure point, and touches outside that bound should not be treated as camera focus events at all.

public func captureDevicePointOfInterestForPoint(pointInLayer: CGPoint) -> CGPoint
will give you the point for the device to focus on based on the settings of your AVCaptureVideoPreviewLayer. See the docs.
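In other words, the preview layer does the coordinate conversion for you. A minimal sketch of the call (previewLayer and touchPoint are assumed names; in later Swift versions this method was renamed captureDevicePointConverted(fromLayerPoint:)):

// `previewLayer` is assumed to be your AVCaptureVideoPreviewLayer and
// `touchPoint` a point in that layer's coordinate space.
let devicePoint = previewLayer.captureDevicePointOfInterestForPoint(touchPoint)
// The result is normalized to (0,0)-(1,1) in the capture device's space,
// which is exactly what focusPointOfInterest / exposurePointOfInterest expect.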

Thanks to JLW, here is how you do it in Swift 2. First, set up a tap gesture recognizer; you can do this programmatically or in the Storyboard.
// Add a tap gesture to the capture frame for focus and exposure
let captureTapGesture = UITapGestureRecognizer(target: self, action: "AutoFocusGesture:")
captureTapGesture.numberOfTapsRequired = 1
captureTapGesture.numberOfTouchesRequired = 1
self.captureFrame.addGestureRecognizer(captureTapGesture)
Create a function matching the selector we passed to captureTapGesture.
/*=========================================
 * FOCUS & EXPOSURE
 ==========================================*/
var animateActivity: Bool!
internal func AutoFocusGesture(recognizeGesture: UITapGestureRecognizer) {
    let touchPoint: CGPoint = recognizeGesture.locationInView(self.captureFrame)
    // Convert the layer point into a device point of interest
    let convertedPoint = self.previewLayer.captureDevicePointOfInterestForPoint(touchPoint)
    // Assign auto focus and auto exposure
    if let device = currentCameraInput {
        do {
            try device.lockForConfiguration()
            if device.focusPointOfInterestSupported {
                // Focus on the converted point
                device.focusPointOfInterest = convertedPoint
                device.focusMode = AVCaptureFocusMode.AutoFocus
            }
            if device.exposurePointOfInterestSupported {
                // Expose for the converted point
                device.exposurePointOfInterest = convertedPoint
                device.exposureMode = AVCaptureExposureMode.AutoExpose
            }
            device.unlockForConfiguration()
        } catch {
            print("Could not lock device for configuration: \(error)")
        }
    }
}
Also, if you'd like to show an animated focus indicator, use touchPoint from the tap event as the position of your animated layer.
//Assign Indicator Position
touchIndicatorOutside.frame.origin.x = touchPoint.x - 10
touchIndicatorOutside.frame.origin.y = touchPoint.y - 10
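If you also want the indicator to animate, a simple pulse might look like this (a sketch in Swift 2 syntax; touchIndicatorOutside is assumed to be a small UIView layered over the capture frame):

// Scale up instantly, settle back to normal size, then fade out
touchIndicatorOutside.transform = CGAffineTransformMakeScale(1.5, 1.5)
touchIndicatorOutside.alpha = 1.0
UIView.animateWithDuration(0.3, animations: {
    self.touchIndicatorOutside.transform = CGAffineTransformIdentity
}, completion: { _ in
    UIView.animateWithDuration(0.5, delay: 0.5, options: [], animations: {
        self.touchIndicatorOutside.alpha = 0.0
    }, completion: nil)
})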

Related

How can I pick an item from a collectionView and add it to an SCNScene?

I am working with SceneKit and ARKit. I have made a collectionView with an array of emojis. Now I want the user to be able to select an emoji from the collectionView, and when he/she touches the screen, that selected emoji will be placed in 3D.
How can I do that? I think I have to create a function for the node, but my idea is still blurry and I'm not very clear on it.
Since emoji are 2D elements, it's better to use the SpriteKit framework to display them rather than SceneKit. But, of course, you can choose SceneKit as well. So there are two ways you can work with emojis in ARKit:
Using SpriteKit. In that case all the 2D sprites you spawn in ARSKView always face the camera. So if the camera moves around a fixed point in the real scene, all the sprites rotate about their pivot points to keep facing the camera.
Using SceneKit. In ARSCNView you can use any sprite as a texture for 3D geometry. The texture could go on a plane, cube, sphere, or any custom model; it's up to you. To make, for example, a plane (with an emoji texture on it) face the camera, use an SCNBillboardConstraint, as in the sketch below.
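A minimal sketch of the SceneKit variant, assuming you render the emoji string into an image yourself (the helper name and the sizes here are assumptions, not part of the original answer):

// Assumes `import UIKit` and `import SceneKit`
func emojiNode(for emoji: String) -> SCNNode {
    // Render the emoji string into a UIImage (sizes are arbitrary)
    let label = UILabel(frame: CGRect(x: 0, y: 0, width: 100, height: 100))
    label.text = emoji
    label.font = UIFont.systemFont(ofSize: 80)
    label.textAlignment = .center
    let renderer = UIGraphicsImageRenderer(size: label.bounds.size)
    let image = renderer.image { _ in
        label.drawHierarchy(in: label.bounds, afterScreenUpdates: true)
    }

    // Put the image on a small plane (10 cm square)
    let plane = SCNPlane(width: 0.1, height: 0.1)
    plane.firstMaterial?.diffuse.contents = image

    // SCNBillboardConstraint keeps the plane facing the camera
    let node = SCNNode(geometry: plane)
    node.constraints = [SCNBillboardConstraint()]
    return node
}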
Here's how your code in the ViewController might look for the SpriteKit case:
// Element's index coming from `collectionView`
var i: Int = 0

func view(_ view: ARSKView, nodeFor anchor: ARAnchor) -> SKNode? {
    let emojiArray = ["🐶","🦊","🐸","🐼","🐹"]
    let emojiNode = SKLabelNode(text: emojiArray[i])
    emojiNode.horizontalAlignmentMode = .center
    emojiNode.verticalAlignmentMode = .center
    return emojiNode
}
...and in the Scene.swift file:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let sceneView = self.view as? ARSKView else { return }
    if let currentFrame = sceneView.session.currentFrame {
        var translation = matrix_identity_float4x4
        translation.columns.3.z = -0.75   // 75 cm in front of the camera
        let transform = simd_mul(currentFrame.camera.transform, translation)
        let anchor = ARAnchor(transform: transform)
        sceneView.session.add(anchor: anchor)
    }
}
Or, if you use hit-testing, your code might look like this:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let sceneView = self.view as? ARSKView else { return }
    if let touchLocation = touches.first?.location(in: sceneView) {
        if let hit = sceneView.hitTest(touchLocation, types: .featurePoint).first {
            sceneView.session.add(anchor: ARAnchor(transform: hit.worldTransform))
        }
    }
}
If you'd like to create a UICollectionView overlay containing emojis to choose from, read the following post.
If you'd like to create an SKView overlay containing emojis to choose from, read the following post.

How to keep track of animation in Sprite Kit

I need to keep track of a texture animation. I am animating a power bar, and when the user taps the screen the animation should stop and the power should be saved. I cannot figure out how to save the power. So far, on the first touch the power bar animates, but on the second touch it only stops and does not save the power.
This is how I create the animation:
textureAtlas = SKTextureAtlas(named: "images")
for i in 1...textureAtlas.textureNames.count {
    let name = "\(i).png"
    textureArray.append(SKTexture(imageNamed: name))
}
let animateForward = SKAction.animate(with: textureArray, timePerFrame: 0.1)
let animateBackward = animateForward.reversed()
let sequence = SKAction.sequence([animateForward, animateBackward])
This is how I detect touches:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let firstTouchStorage = touches.first {   // after first touch
        let animateForward = SKAction.animate(with: textureArray, timePerFrame: 0.1)
        let animateBackward = animateForward.reversed()
        let sequence = SKAction.sequence([animateForward, animateBackward])
        arrow.removeAllActions()
        arrow.run(SKAction.repeatForever(sequence))
        firstTouch = firstTouchStorage
    }
    for touch in touches {
        if touch == firstTouch {
            touchesArray.append(touch)
            let angle = arrow.zRotation
            if touchesArray.count == 2 {
                arrow.removeAllActions()
                arrow.removeFromParent()
            }
        }
    }
}
I have been trying to solve this problem for too long and cannot figure it out. I hope you can help.
As @KnightOfDragon suggested: animate(with:timePerFrame:resize:restore:) has a restore: parameter. If you set it to false, the last texture stays on the sprite when the animation stops. If you then read that texture's description, you will know which texture it stopped on and can plan accordingly.
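A minimal sketch of that suggestion, reusing the asker's arrow and textureArray names (assuming arrow is an SKSpriteNode; the mapping from texture name back to a power value is an assumption):

// Animate without restoring, so the sprite keeps its last frame when stopped
let animateForward = SKAction.animate(with: textureArray,
                                      timePerFrame: 0.1,
                                      resize: false,
                                      restore: false)
let sequence = SKAction.sequence([animateForward, animateForward.reversed()])
arrow.run(SKAction.repeatForever(sequence))

// On the stopping touch:
arrow.removeAllActions()
if let texture = arrow.texture {
    // The description contains the image name, e.g. "<SKTexture> '7.png' ...",
    // which you can parse back into a power value
    print("stopped on \(texture.description)")
}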

How to set front-facing camera zoom with Swift AVFoundation

I'm trying to make an app with Swift, and I want to use the front-facing camera. I used AVFoundation and tried some code, but I couldn't set the front-facing zoom parameter. Is it possible? For the back camera, everything worked.
I don't want to use an affine transform, because it can decrease image quality. So how can I set this parameter programmatically?
Thanks.
You'll need to add a zoomFactor variable to your camera.
var zoomFactor: CGFloat = 1.0
Next, define a zoom function to be used in conjunction with a pinch recognizer. I assume you have already created a front capture device and input; frontDevice is an optional capture device on my camera. Here's how I zoom that device.
@objc public func zoom(pinch: UIPinchGestureRecognizer) {
    guard let device = frontDevice else { return }

    // Clamp the zoom factor to the device's supported range
    func minMaxZoom(_ factor: CGFloat) -> CGFloat {
        return min(max(factor, 1.0), device.activeFormat.videoMaxZoomFactor)
    }

    // Lock the device, apply the zoom, and always unlock
    func update(scale factor: CGFloat) {
        do {
            try device.lockForConfiguration()
            defer { device.unlockForConfiguration() }
            device.videoZoomFactor = factor
        } catch {
            debugPrint(error)
        }
    }

    let newScaleFactor = minMaxZoom(pinch.scale * zoomFactor)
    switch pinch.state {
    case .began: fallthrough
    case .changed:
        update(scale: newScaleFactor)
    case .ended:
        // Remember the zoom so the next gesture starts from here
        zoomFactor = minMaxZoom(newScaleFactor)
        update(scale: zoomFactor)
    default:
        break
    }
}
Finally, add a pinch recognizer to some view.
let pgr = UIPinchGestureRecognizer(target: self, action: #selector(zoom))
view.addGestureRecognizer(pgr)
The previous answer can be written without the nested helper functions, which makes it more straightforward and understandable.
To fully explain the code:
The zoom variable keeps track of what zoom you were at after the last gesture. Before any gesture happens there is no zoom, so you're at 1.0.
During a gesture, the scale property of the pinch holds the ratio of the pinch: it is 1.0 when your fingers haven't moved from their initial position, and it grows and shrinks as you pinch. Multiplying it by the previously stored zoom gives the scale to apply while the gesture is occurring. It's important to keep this scale in the range [1, device.activeFormat.videoMaxZoomFactor] or you'll get a SIGABRT.
When the gesture finishes (pinch.state == .ended), you need to update zoom so that the next gesture starts at the current zoom level.
It's important to lock when modifying a camera property to avoid concurrent modification. defer will release the lock after the block of code no matter what, similar to a finally block.
var zoom: CGFloat = 1.0

@objc func pinch(_ pinch: UIPinchGestureRecognizer) {
    guard let device = frontDevice else { return }

    // Clamp to [1, videoMaxZoomFactor] to avoid a SIGABRT
    let scaleFactor = min(max(pinch.scale * zoom, 1.0), device.activeFormat.videoMaxZoomFactor)
    if pinch.state == .ended {
        zoom = scaleFactor   // the next gesture starts from this zoom level
    }
    do {
        try device.lockForConfiguration()
        defer { device.unlockForConfiguration() }
        device.videoZoomFactor = scaleFactor
    } catch {
        print(error)
    }
}

How to allow dragging a UIView (pan gesture) only within a limited area

I need to drag a UIView via a UIPanGestureRecognizer (I know how to do that), but I can't figure out how to limit it. I need some padding from the top, and if the view collides with any of the four device edges (left, right, top with the padding, and bottom), the drag should stop so you can't go past them. :)
I tried this one: https://github.com/andreamazz/UIView-draggable but if I set the limited area via cagingArea, the iPad (Air) lags. The movement is also not smooth; I think the native UIPanGestureRecognizer is best, and I just need the area limitation. Do you know how I can do that? :)
I'm writing in Swift. I also found some related topics, like this one -> Use UIPanGestureRecognizer to drag UIView inside limited area, but I don't understand what insideDraggableArea does.
Thank you so much, programmers!
I faced the same problem in my project. Try this:
1) Initialize the pan gesture:
let panRec = UIPanGestureRecognizer()
2) Add the pan gesture to your UIView:
override func viewDidLoad() {
    ....
    panRec.addTarget(self, action: "draggedView:")
    yourview.addGestureRecognizer(panRec)
    yourview.userInteractionEnabled = true
    ....
}
3) Set your limits in the draggedView function:
func draggedView(sender: UIPanGestureRecognizer) {
    println("panning")
    let translation = sender.translationInView(self.view)
    println("the translation x:\(translation.x) & y:\(translation.y)")
    let tmp = sender.view?.center.x    // current center x
    let tmp1 = sender.view?.center.y   // current center y
    // Set the limits for the x and y translation
    if translation.x <= 100 && translation.y <= 50 {
        sender.view?.center = CGPointMake(tmp! + translation.x, tmp1! + translation.y)
        sender.setTranslation(CGPointZero, inView: self.view)
    }
}
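The translation check above only limits how far one gesture can move the view; a more general approach clamps the view's center against the superview's bounds so it can never leave the screen. A sketch in the same Swift 2-era syntax (topPadding is an assumed constant for the extra top inset):

func draggedView(sender: UIPanGestureRecognizer) {
    guard let view = sender.view, superview = view.superview else { return }
    let translation = sender.translationInView(superview)
    let topPadding: CGFloat = 20
    let halfWidth = view.bounds.width / 2
    let halfHeight = view.bounds.height / 2
    // Proposed new center, clamped to the allowed rectangle
    var center = CGPoint(x: view.center.x + translation.x,
                         y: view.center.y + translation.y)
    center.x = min(max(center.x, halfWidth), superview.bounds.width - halfWidth)
    center.y = min(max(center.y, halfHeight + topPadding), superview.bounds.height - halfHeight)
    view.center = center
    sender.setTranslation(CGPointZero, inView: superview)
}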
Do you have to use UIPanGestureRecognizer? Add the view to a view controller and enable user interaction. I think you could tag the view and use the touchesMoved method instead:
override func touchesMoved(touches: Set<NSObject>, withEvent event: UIEvent) {
    if let touch = touches.first as? UITouch {
        if touch.view.tag == 2 {
            if self.yourView.center.y <= CGFloat(230) {   // drag limit
                let touchLocation = touch.locationInView(self.view)
                self.yourView.center.y = touchLocation.y
                self.yourView.center.x = touchLocation.x
            }
        }
    }
}
This works for me:
guard let view = UIApplication.shared.windows.last else { return }
view.addSubview(customView)

See if an SKShapeNode was touched: how to identify the node?

I am doing a small for-fun project in Swift with Xcode 6. The function thecircle() is called at a certain rate by a timer in didMoveToView(). My question is: how do I detect whether any one of the multiple circle nodes on the display is tapped? I currently do not see a way to access a single node in this function.
func thecircle() {
    let circlenode = SKShapeNode(circleOfRadius: 25)
    circlenode.strokeColor = UIColor.whiteColor()
    circlenode.fillColor = UIColor.redColor()

    let initialx = CGFloat(20)
    let initialy = CGFloat(1015)
    let initialposition = CGPoint(x: initialx, y: initialy)
    circlenode.position = initialposition
    self.addChild(circlenode)

    let action1 = SKAction.moveTo(CGPoint(x: initialx, y: -20), duration: NSTimeInterval(5))
    let action2 = SKAction.removeFromParent()
    circlenode.runAction(SKAction.sequence([action1, action2]))
}
There are a few problems with this.
You shouldn't be creating any looping timers in your games. A scene comes with an update method that is called on every frame of the game; most of the time, this is where you should check for changes in your scene.
You also have no way of accessing circlenode from outside of your thecircle method. If you want to access it from somewhere else, you need to make circlenode a property of your scene.
For example:
class GameScene: BaseScene {
let circlenode = SKShapeNode(circleOfRadius: 25)
You then need to use the touchesBegan method; it should have come with your SpriteKit project template. You can detect a touch on your node the following way:
override func touchesBegan(touches: NSSet, withEvent event: UIEvent) {
    for touch: AnyObject in touches {
        // Location of the touch in the scene
        let location = touch.locationInNode(self)
        // Check whether circlenode has been touched
        if self.circlenode.containsPoint(location) {
            // your code here
        }
    }
}
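Since the question mentions multiple circles, a single circlenode property won't scale. An alternative sketch (the node name "circle" is an assumption) is to name each node in thecircle() and then identify touched nodes by name:

// In thecircle(), after creating the node:
circlenode.name = "circle"

// In touchesBegan, find whichever circle was tapped:
override func touchesBegan(touches: NSSet, withEvent event: UIEvent) {
    for touch: AnyObject in touches {
        let location = touch.locationInNode(self)
        let touchedNode = self.nodeAtPoint(location)
        if touchedNode.name == "circle" {
            // This particular circle was tapped
            touchedNode.removeFromParent()
        }
    }
}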
