AVCaptureDevice's exposurePointOfInterest does not work - iOS

I'm trying to change the exposure in my camera app according to a certain point of the image.
I'm using the following code, which is triggered when the user taps the screen. For now I simply try to expose for the center.
@IBAction func didTap() {
    if captureDevice.isExposurePointOfInterestSupported {
        try! captureDevice.lockForConfiguration()
        captureDevice.exposurePointOfInterest = CGPoint(x: 0.5, y: 0.5)
        captureDevice.exposureMode = .continuousAutoExposure
        captureDevice.unlockForConfiguration()
    }
}
But nothing happens.
captureDevice.isExposurePointOfInterestSupported is true. The captureDevice currently is .builtInDualCamera.
This code is in a simple camera test app based on sample code. It shows the live camera image on screen.
Has anyone got exposurePointOfInterest working on iOS 14.4?
What could I be missing?

I actually ran into this issue yesterday. Turns out there's a problem with using exactly (0.5, 0.5). When I use (0.51, 0.51) it works every time 🤷
extension AVCaptureDevice {
    func change(_ block: (AVCaptureDevice) -> ()) {
        try! self.lockForConfiguration()
        block(self)
        self.unlockForConfiguration()
    }
}
@objc func handleTap() {
    device.change {
        $0.exposurePointOfInterest = CGPoint(x: 0.51, y: 0.51)
        $0.exposureMode = .autoExpose
    }
}
Update
It may also be worth noting that, although it's a point-specified exposure, the region around that point still has to be large enough to trigger an exposure adjustment. Let's call this the trigger region.
From what I understand from my tests, the point (0.5, 0.5) has a special effect on the trigger region's size. Whenever this point is used as the exposurePointOfInterest, the trigger region is rather large, regardless of whether exposureMode is .continuousAutoExposure or .autoExpose.
You can get an idea of the size of this region by using the following code, pointing your phone at a bright area (like a lamp), and seeing how close you have to get until a tap adjusts the exposure. You'll find that the exposure does adjust, but you have to get rather close.
@objc func handleTap() {
    device.change {
        $0.exposurePointOfInterest = CGPoint(x: 0.5, y: 0.5)
        $0.exposureMode = .autoExpose
    }
}
Or, you could not use a tap at all, and just keep the properties exposureMode and exposurePointOfInterest at their default values of .continuousAutoExposure and (0.5, 0.5). Or you could use the native camera app and see when it automatically adjusts the exposure. The results are the same.
Now, if you were to set the exposurePointOfInterest to a value close to but not equal to the midpoint, say (0.51, 0.51), you'll find that the trigger region becomes much, much smaller.
You could also use .continuousAutoExposure and call this only once, and you'll find that the automatic exposure adjustments are a lot more sensitive, as the trigger region is a lot smaller:
override func viewDidLoad() {
    super.viewDidLoad()
    device.change {
        $0.exposurePointOfInterest = CGPoint(x: 0.51, y: 0.51)
        $0.exposureMode = .continuousAutoExposure
    }
}
To get an idea of the size of this smaller region, open the native camera app and tap somewhere to focus/expose at that point. You'll see a small bounding box. That's pretty much the size of the trigger region.
Say you have a tap like so:
@objc func handleTap() {
    device.change {
        $0.exposurePointOfInterest = CGPoint(x: 0.51, y: 0.51)
        $0.exposureMode = .autoExpose
    }
}
If nothing happens, the region is not large enough, and you should be able to reproduce the same no-effect in the native camera app when you try to tap to expose at that point.
Side Note
Your didTap() method is setting the default values, so it's essentially useless.
If you want to adjust exposure on a tap, use .autoExpose if the point is always the same. Don't use .continuousAutoExposure, because that will keep adjusting the exposure all the time, not just on the tap. Continuous auto exposure only makes sense if the tap will change the point.
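If the tap does move the point, the tap location has to be converted from the preview layer's coordinate space into the device's normalized point-of-interest space first. Here's a minimal sketch of such a handler, reusing the change(_:) helper above; the previewLayer (an AVCaptureVideoPreviewLayer) and device properties are assumptions for the sake of the example:
@objc func handleTap(_ gesture: UITapGestureRecognizer) {
    // Convert from the preview layer's coordinates to the device's normalized
    // space, where {0,0} is the top-left and {1,1} is the bottom-right.
    let layerPoint = gesture.location(in: gesture.view)
    let devicePoint = previewLayer.captureDevicePointConverted(fromLayerPoint: layerPoint)

    device.change {
        guard $0.isExposurePointOfInterestSupported else { return }
        $0.exposurePointOfInterest = devicePoint
        // .autoExpose adjusts once for this tap; use .continuousAutoExposure
        // only if you want ongoing adjustment around the new point.
        $0.exposureMode = .autoExpose
    }
}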

Related

How to handle a video overexposure in Swift

I'm working on a camera app, and the way my app handles overexposure seems very different from the default iPhone camera app.
Like the image below, the default camera app adjusts the overexposure when it's detected. (The whole screen seems to get slightly yellowish to compensate for the overexposed bright area, so I can see the white keyboard even when dark objects cover most of the screen.)
Here is my app. I set the exposure mode to continuous auto exposure, but it won't adjust the overexposed area.
I want to adjust the brightness, but I don't want to display the overexposed part of the image (in other words, I just want my app to show the scene the way the default camera does).
This is the code for adjusting the focus and exposure.
func setFocus(with focusMode: AVCaptureDevice.FocusMode, with exposureMode: AVCaptureDevice.ExposureMode, at point: CGPoint, monitorSubjectAreaChange: Bool, completion: @escaping (Bool) -> Void) {
    guard let captureDevice = captureDevice else { return }
    do {
        try captureDevice.lockForConfiguration()
    } catch {
        completion(false)
        return
    }
    if captureDevice.isSmoothAutoFocusSupported, !captureDevice.isSmoothAutoFocusEnabled {
        captureDevice.isSmoothAutoFocusEnabled = true
    }
    if captureDevice.isFocusPointOfInterestSupported, captureDevice.isFocusModeSupported(focusMode) {
        captureDevice.focusPointOfInterest = point
        captureDevice.focusMode = focusMode
    }
    if captureDevice.isExposurePointOfInterestSupported, captureDevice.isExposureModeSupported(exposureMode) {
        captureDevice.exposurePointOfInterest = point
        captureDevice.exposureMode = exposureMode
    }
    captureDevice.isSubjectAreaChangeMonitoringEnabled = monitorSubjectAreaChange
    captureDevice.unlockForConfiguration()
    completion(true)
}
And this is how I call the function:
func setFocusToCenter() {
    let center: CGPoint = CGPoint(x: cameraView.bounds.width / 2, y: cameraView.bounds.height / 2)
    let pointInCamera = cameraView.layer.captureDevicePointConverted(fromLayerPoint: center)
    setFocus(with: .continuousAutoFocus, with: .continuousAutoExposure, at: pointInCamera, monitorSubjectAreaChange: false, completion: { [weak self] success in
        guard let self = self, success else { return }
        // do some animation
    })
}
If I need to work on the camera exposure, and even if I set the exposure mode to .continuousAutoExposure, do I still need to handle overexposure in code?
Also, if you have experience adjusting overexposure, how did you achieve it?
Added this part later...
I took screenshots to compare my app's camera and the native iPhone camera app.
Here is my camera app with .continuousAutoExposure and the exposurePointOfInterest set to the center of the screen.
However, the native iPhone camera app doesn't overexpose when I shoot a dark subject from a similar distance...
I think the native iPhone app also stays in .continuousAutoExposure mode until I touch the screen and adjust the focus to a point.
I dropped the image quality in order to paste the screenshots into this post, but I don't really see the blur on the originals. I configured the fps to 30 (the native iPhone camera is also 30).
So what could be the reason for this overexposure?

iPhone back camera cannot focus correctly

I've been making an iOS camera app and have been trying to solve this problem for two days (without success).
What I'm working on now is changing the focus and exposure automatically depending on where the user taps. Sometimes it works fine (maybe 20% of the time), but mostly it fails, especially when I try to focus on a far object (5+ metres away) or when there are two objects and I try to switch the focus from one to the other. The image below is an example.
The yellow square marks where the user tapped. Even though I tapped the black cup in the first picture, the camera still focuses on the red cup.
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    let touchPoint = touches.first! as UITouch
    let focusPoint = touchPoint.location(in: lfView)
    print("focusPoint \(focusPoint)")
    showPointOfInterestViewAtPoint(point: focusPoint)
    setFocus(focusMode: .autoFocus, exposureMode: .autoExpose, atPoint: focusPoint, shouldMonitorSujectAreaChange: true)
}
func setFocus(focusMode: AVCaptureDevice.FocusMode, exposureMode: AVCaptureDevice.ExposureMode, atPoint devicePoint: CGPoint, shouldMonitorSujectAreaChange: Bool) {
    guard let captureDevice = captureDevice else { return }
    do {
        try captureDevice.lockForConfiguration()
    } catch {
        return
    }
    if captureDevice.isFocusPointOfInterestSupported, captureDevice.isFocusModeSupported(focusMode) {
        captureDevice.focusPointOfInterest = devicePoint
        captureDevice.focusMode = focusMode
        print("devicePoint: \(devicePoint)")
    }
    // other code in here...
    captureDevice.isSubjectAreaChangeMonitoringEnabled = shouldMonitorSujectAreaChange
    captureDevice.unlockForConfiguration()
}
I call the setFocus function from touchesBegan, and both the focusPoint and devicePoint print statements show the same coordinate, like (297.5, 88.0).
When I tap the black cup in the picture, I can see the camera zooming in and out a little bit, just like when I use the default iPhone camera app and try to focus on an object. So I guess my camera app is trying to focus on the black cup but failing.
Since this is not an error, I'm not sure which code to change. Is there any clue as to what is going on here and what causes this problem?
ADDED THIS PART LATER
I also read this document and it says
This property’s CGPoint value uses a coordinate system where {0,0} is the top-left of the picture area and {1,1} is the bottom-right.
As I wrote before, the value of devicePoint is much larger than 1, like (297.5, 88.0). Does this cause the problem?
Thanks to @Artem I was able to solve the problem. All I needed to do was convert the absolute coordinates to the values used by focusPointOfInterest (from a minimum of (0,0) to a maximum of (1,1)).
Thank you, Artem!!
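For anyone hitting the same issue, here's roughly what that conversion looks like. A minimal sketch, assuming the preview is displayed through an AVCaptureVideoPreviewLayer named previewLayer that fills lfView (that layer name is not from the original code):
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let touchPoint = touches.first else { return }
    // Point in the view's coordinate space, e.g. (297.5, 88.0)
    let layerPoint = touchPoint.location(in: lfView)
    // Convert into the device's normalized space: {0,0} top-left to {1,1} bottom-right
    let devicePoint = previewLayer.captureDevicePointConverted(fromLayerPoint: layerPoint)
    showPointOfInterestViewAtPoint(point: layerPoint)
    setFocus(focusMode: .autoFocus,
             exposureMode: .autoExpose,
             atPoint: devicePoint,
             shouldMonitorSujectAreaChange: true)
}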

Maintain correct SCNNode position in ARKit while walking, without calling run and .resetTracking on each CLLocation update

I'm building a simple navigation app with ARKit. The app shows a pin at a destination, which can be far away or nearby. The user is able to walk toward the pin to navigate.
In my ARSCNView I have an SCNNode called waypointNode, which represents the pin at the destination.
To determine where to place the waypointNode, I calculate the distance to the destination in meters, and the bearing (degrees away from North) to the destination. Then, I create and multiply some transformations and apply them to the node to put it in the proper position.
There's also some logic to establish a maximum distance for the waypointNode so it doesn't end up too small for the user to see.
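The distance and bearing calculation itself isn't shown here; a minimal sketch of how they might be derived from two CLLocations (standard forward-azimuth formula; the function and tuple names are illustrative, not from the original code):
import CoreLocation

/// Distance in meters and bearing in degrees from North, from `origin` to `destination`.
func navigationInfo(from origin: CLLocation, to destination: CLLocation) -> (distance: Float, bearing: Double) {
    // CoreLocation already provides the great-circle distance in meters.
    let distance = Float(origin.distance(from: destination))

    // Initial bearing (forward azimuth) between the two coordinates.
    let lat1 = origin.coordinate.latitude * .pi / 180
    let lon1 = origin.coordinate.longitude * .pi / 180
    let lat2 = destination.coordinate.latitude * .pi / 180
    let lon2 = destination.coordinate.longitude * .pi / 180

    let dLon = lon2 - lon1
    let y = sin(dLon) * cos(lat2)
    let x = cos(lat1) * sin(lat2) - sin(lat1) * cos(lat2) * cos(dLon)
    let bearing = (atan2(y, x) * 180 / .pi).truncatingRemainder(dividingBy: 360)

    // Normalize to 0...360 degrees from North.
    return (distance, bearing < 0 ? bearing + 360 : bearing)
}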
This is how I configure the ARSCNView, so the axes line up with the real-world compass directions:
func setUpSceneView() {
    let configuration = ARWorldTrackingConfiguration()
    configuration.worldAlignment = .gravityAndHeading
    configuration.planeDetection = .horizontal
    session.run(configuration, options: [.resetTracking])
}
Every time the device gets a new CLLocation from CoreLocation, I update the distance and bearing then call this function to update the position of the waypointNode:
func updateWaypointNode() {
    // limit the max distance so the node doesn't become invisible
    let distanceLimit: Float = 80
    let translationDistance: Float
    if navigationInfo.distance > distanceLimit {
        translationDistance = distanceLimit
    } else {
        translationDistance = navigationInfo.distance
    }

    // transform matrix to adjust node distance
    let distanceTranslation = SCNMatrix4MakeTranslation(0, 0, -translationDistance)

    // transform matrix to rotate node around y-axis
    let rotation = SCNMatrix4MakeRotation(-1 * GLKMathDegreesToRadians(Float(navigationInfo.bearing)), 0, 1, 0)

    // multiply the rotation and distance translation matrices
    let distanceTimesRotation = SCNMatrix4Mult(distanceTranslation, rotation)

    // grab the current camera transform
    guard let cameraTransform = session.currentFrame?.camera.transform else { return }

    // multiply the rotation and distance translation transform by the camera transform
    let finalTransform = SCNMatrix4Mult(SCNMatrix4(cameraTransform), distanceTimesRotation)

    // update the waypoint node with this updated transform
    waypointNode.transform = finalTransform
}
This works fine when the user first starts the session, and when the user moves less than about 100m.
Once the user covers a significant distance, like over 100m of walking or driving, just calling updateWaypointNode() is not enough to maintain the proper position of the node at the destination. When walking toward the node, for example, it's possible for the user to eventually reach the node even though the user has not reached the destination. Note: this incorrect positioning happens while the session has been open the whole time; it's not caused by the session being interrupted.
As a workaround, I'm also calling setUpSceneView() every time the device gets a location update.
Even though this works OK, it feels wrong to me. It doesn't seem like I should have to call run with the .resetTracking option every time. I think I might just be overlooking something in my translations. I also see some jumpiness in the camera that seems to happen every time run is called when the session is running, so that's less desirable than simply updating translations.
Is there something different I could do to avoid calling run on the session and resetting the tracking every time the device gets a location update?

Move SKPhysicsBody with device tilt swift

I have an image on the screen that is an SKSpriteNode with an SKPhysicsBody attached to it. I'm trying to move it around the screen by tilting the device. I have it moving along the x-axis, but I can't seem to get it to move along the y-axis.
I used parts of this tutorial to get to the point I'm at. I used the part about the user ship to get my image to move along the x-axis.
http://www.raywenderlich.com/76740/make-game-like-space-invaders-sprite-kit-and-swift-tutorial-part-1
func processUserMotionForUpdate(currentTime: CFTimeInterval) {
    let ship = childNodeWithName(kShipName)
    if let data = motionManager.accelerometerData {
        if (fabs(data.acceleration.x) > 0.2) {
            ship!.physicsBody!.applyForce(CGVectorMake(40.0 * CGFloat(data.acceleration.x), 0))
        }
        if (fabs(data.acceleration.y) > 0.2) {
            ship!.physicsBody!.applyForce(CGVectorMake(40.0 * CGFloat(data.acceleration.y), 0))
        }
    }
}
The if statement with the y is the part I added to try to move the image up and down; however, it isn't working.
Is there a mistake in the logic here? Or am I just going about this the wrong way?
When you create the vector to apply the force for the y-axis, you pass the y value as the x component.
To solve this, you have to swap the parameters in the second applyForce statement:
if (fabs(data.acceleration.y) > 0.2) {
    ship!.physicsBody!.applyForce(CGVectorMake(0, 40.0 * CGFloat(data.acceleration.y)))
}
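Equivalently, both axes can be handled in a single force application. A small sketch in current Swift spelling, assuming the motionManager and kShipName from the question:
func processUserMotionForUpdate(currentTime: CFTimeInterval) {
    guard let ship = childNode(withName: kShipName),
          let data = motionManager.accelerometerData else { return }

    // Ignore small tilts on each axis so the ship doesn't drift when the device is nearly flat.
    let dx = abs(data.acceleration.x) > 0.2 ? 40.0 * CGFloat(data.acceleration.x) : 0
    let dy = abs(data.acceleration.y) > 0.2 ? 40.0 * CGFloat(data.acceleration.y) : 0

    ship.physicsBody?.applyForce(CGVector(dx: dx, dy: dy))
}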

How to simulate a world larger than the screen in SpriteKit?

I'm looking for the proper SpriteKit way to handle something of a scrollable world. Consider the following image:
In this contrived example, the world boundary is the dashed line and the blue dot can move anywhere within these boundaries. However, at any given point, a portion of this world can exist off-screen as indicated by the image. I would like to know how I can move the blue dot anywhere around the "world" while keeping the camera stationary on the blue dot.
This is Adventure, a SpriteKit game by Apple that demonstrates the point I make below. Read through the docs; they explain everything.
There's a good answer to this that I can't find at the moment. The basic idea is this:
Add a 'world' node to your scene. You can give it a width/height that is larger than the screen size.
When you 'move' the character around (or the blue dot), you actually move your world node instead, but in the opposite direction, and that gives the impression that you're moving.
This way the screen is always centered on the blue dot, yet the world around you moves.
Below is an example from when I was experimenting a while ago:
override func didMoveToView(view: SKView) {
    self.anchorPoint = CGPointMake(0.5, 0.5)
    //self.size = CGSizeMake(600, 600)

    // Add world
    world = SKShapeNode(rectOfSize: CGSize(width: 500, height: 500))
    world.fillColor = SKColor.whiteColor()
    world.position = CGPoint(x: size.width * 0.5, y: size.height * 0.5)
    world.physicsBody?.usesPreciseCollisionDetection = true
    self.addChild(world)
}

override func update(currentTime: CFTimeInterval) {
    world.position.x = -player.position.x
    world.position.y = -player.position.y
}
override func didSimulatePhysics() {
    self.centerOnNode(self.camera)
}

func centerOnNode(node: SKNode) {
    if let parent = node.parent {
        let nodePositionInScene: CGPoint = node.scene!.convertPoint(node.position, fromNode: parent)
        parent.position = CGPoint(
            x: parent.position.x - nodePositionInScene.x,
            y: parent.position.y - nodePositionInScene.y)
    }
}
If you create a "camera" node which you add to your "world" node, a couple of simple functions (above) allow you to "follow" this camera node as it travels through the world, though actually you are moving the world around similar to Abdul Ahmad's answer.
This method allows you to use SpriteKit functionality on the camera. You can apply physics to it, run actions on it, put constraints on it, allowing effects like:
camera shaking (an action),
collision (a physics body, or matching the position of another node with a physics body),
a lagging follow (place a constraint on the camera that keeps it a certain distance from a character, for example)
The constraint especially adds a nice touch to a moving world as it allows the "main character" to move around freely somewhat while only moving the world when close to the edges of the screen.
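For newer SpriteKit versions, SKCameraNode (available since iOS 9) does the centering for you, so the manual centerOnNode step above isn't needed. A minimal sketch of that variant; the player node and the 100-point range are illustrative, not from the answers above:
override func didMove(to view: SKView) {
    // A world node larger than the screen; the player and everything else live inside it.
    let world = SKNode()
    addChild(world)

    // "player" stands in for the blue dot / main character node.
    let player = SKSpriteNode(color: .blue, size: CGSize(width: 20, height: 20))
    world.addChild(player)

    // Assign a camera to the scene; SpriteKit then renders centered on the camera,
    // so there is no need to move the world manually in update(_:).
    let cameraNode = SKCameraNode()
    world.addChild(cameraNode)
    camera = cameraNode

    // Lagging follow: the camera may drift up to 100 points from the player
    // before the constraint pulls it along, so small movements don't scroll the world.
    cameraNode.constraints = [SKConstraint.distance(SKRange(upperLimit: 100), to: player)]
}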
