Manually set exposure for iOS camera in Swift

I understand that the camera in iOS automatically adjusts exposure continuously when capturing video and photos.
Questions:
How can I turn off the camera's automatic exposure?
In Swift code, how can I set the exposure for the camera to "zero" so that exposure is completely neutral to the surroundings and not compensating for light?

You can set the exposure mode through the device's exposureMode property, of type AVCaptureDevice.ExposureMode. Documentation here.
var exposureMode: AVCaptureDevice.ExposureMode { get set }
Three things you have to take into consideration:
1) Check whether the device actually supports this with isExposureModeSupported(_:).
2) You have to lock the device for configuration (lockForConfiguration()) before adjusting the exposure. Documentation here.
3) The exposure is adjusted by setting an ISO and a duration; you can't just set it to "0".
ISO:
This property returns the sensor's sensitivity to light by means of a gain value applied to the signal. Only ISO values between minISO and maxISO are supported. Higher values will result in noisier images. The property value can be read at any time, regardless of exposure mode, but can only be set using the setExposureModeCustom(duration:iso:completionHandler:) method.

If you need only min, current and max exposure values, then you can use the following:
Swift 5
import AVFoundation

enum Exposure {
    case min, normal, max

    func value(device: AVCaptureDevice) -> Float {
        switch self {
        case .min:
            return device.activeFormat.minISO
        case .normal:
            return AVCaptureDevice.currentISO
        case .max:
            return device.activeFormat.maxISO
        }
    }
}

func set(exposure: Exposure) {
    guard let device = AVCaptureDevice.default(for: AVMediaType.video) else { return }
    if device.isExposureModeSupported(.custom) {
        do {
            try device.lockForConfiguration()
            device.setExposureModeCustom(duration: AVCaptureDevice.currentExposureDuration,
                                         iso: exposure.value(device: device)) { _ in
                print("Done setting exposure")
            }
            device.unlockForConfiguration()
        } catch {
            print("ERROR: \(error.localizedDescription)")
        }
    }
}
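A one-line usage sketch of the helper above: passing .min pins the sensor at its lowest sensitivity, while .normal passes the AVCaptureDevice.currentISO sentinel, which leaves the ISO unchanged and only switches the mode to .custom.
set(exposure: .min) // lock sensor at minimum ISO, keep current duration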

Related

iOS fast image difference comparison

I'm looking for a fast way to compare two frames of video and decide if a lot has changed between them. This will be used to decide whether I should send a request to an image recognition service over REST, so I don't want to keep sending frames until the results might actually differ. The Vuforia SDK does something similar. I'm starting with a frame buffer from ARKit, which I have scaled to 640x480 and converted to an RGB888 vImage buffer. It could compare just a few points, but it needs to determine reliably whether the difference is significant.
I started by calculating the difference between a few points using vDSP functions, but this has a disadvantage: if I move the camera even very slightly to the left or right, the same points cover different portions of the image, and the calculated difference is high even if nothing has really changed.
I was thinking about using histograms, but I haven't tested this approach yet (a sketch of it appears at the end of this question).
What would be the best solution for this? It needs to be fast; it can compare a smaller version of the image, etc.
I have tested another approach using VNFeaturePrintObservation from Vision. This works a lot better, but I'm afraid it might be more CPU-demanding. I need to test it on some older devices. Anyway, this is the part of the code that works nicely. If someone could suggest a better approach to test, please let me know:
private var lastScanningImageFingerprint: VNFeaturePrintObservation?

// Returns true if these are different enough
private func compareScanningImages(current: VNFeaturePrintObservation, last: VNFeaturePrintObservation?) -> Bool {
    guard let last = last else { return true }
    var distance = Float(0)
    try! last.computeDistance(&distance, to: current)
    print(distance)
    return distance > 10
}

// After scanning is done, subclass should prepare suggestedTargets array.
private func performScanningIfNeeded(_ sender: Timer) {
    guard !scanningInProgress else { return } // Wait for previous scanning to finish
    guard let vImageBuffer = delegate?.currentFrameScaledImage else { return }
    guard let image = CGImage.create(from: vImageBuffer) else { return }

    func featureprintObservationForImage(image: CGImage) -> VNFeaturePrintObservation? {
        let requestHandler = VNImageRequestHandler(cgImage: image, options: [:])
        let request = VNGenerateImageFeaturePrintRequest()
        do {
            try requestHandler.perform([request])
            return request.results?.first as? VNFeaturePrintObservation
        } catch {
            print("Vision error: \(error)")
            return nil
        }
    }

    guard let imageFingerprint = featureprintObservationForImage(image: image) else { return }
    guard compareScanningImages(current: imageFingerprint, last: lastScanningImageFingerprint) else { return }
    print("SCAN \(Date())")
    lastScanningImageFingerprint = imageFingerprint // reuse instead of recomputing the feature print
    executeScanning(on: image) { [weak self] in
        self?.scanningInProgress = false
    }
}
Tested on an older iPhone: as expected, this causes some frame drops on the camera preview, so I need a faster algorithm.
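Since the histogram idea mentioned above went untested, here is a minimal sketch of what it could look like with Accelerate's vImage histogram calculation, assuming two equal-sized Planar8 (grayscale) vImage_Buffers; the helper name and the normalization choice are assumptions, not from the original post:
import Accelerate

// L1 distance between normalized brightness histograms.
// 0 means identical distributions; values near 2 mean completely disjoint.
// A histogram ignores small translations, which addresses the
// camera-shake problem described above.
func histogramDistance(_ a: inout vImage_Buffer, _ b: inout vImage_Buffer) -> Float {
    var histA = [vImagePixelCount](repeating: 0, count: 256)
    var histB = [vImagePixelCount](repeating: 0, count: 256)
    _ = vImageHistogramCalculation_Planar8(&a, &histA, vImage_Flags(kvImageNoFlags))
    _ = vImageHistogramCalculation_Planar8(&b, &histB, vImage_Flags(kvImageNoFlags))
    let totalA = Float(a.width * a.height)
    let totalB = Float(b.width * b.height)
    var distance: Float = 0
    for i in 0..<256 {
        distance += abs(Float(histA[i]) / totalA - Float(histB[i]) / totalB)
    }
    return distance
}
The trade-off: a global histogram is insensitive to translation (good here) but also blind to changes that preserve the overall brightness distribution, so per-tile histograms or a tuned threshold may be needed.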

Accessing state information in Swift - DJI Mobile SDK iOS

I can't get my head around how to read out simple status data, like the current gimbal pitch for example.
I have not found a solid connection between the DJI SDK documentation and what actually works in Xcode. The SDK gives me hints, and together with Xcode autocompletion I make my way forward, slowly.
The class GimbalState has the member getAttitudeInDegrees() with the description:
"The current gimbal attitude in degrees. Roll, pitch and yaw are 0 if the gimbal is level with the aircraft and points in the forward direction of North Pole." - Great!
However, it does not autocomplete in Xcode, nor does it compile.
Other approaches tested:
var gimbalStateInformation = DJIGimbalState()
print(gimbalStateInformation.attitudeInDegrees.pitch.description)
--> All pitch, roll and yaw values come out as 0.0
var gimbalAttitudeInformation = DJIGimbalAttitude()
print(gimbalAttitudeInformation.pitch.description)
--> All pitch, roll and yaw values come out as 0.0
I've tried to reach the information via keys, but my app crashes when I run the code.
func getGimbalAttitude() {
    // Get the key
    guard let gimbalAttitudeKey = DJIGimbalKey(param: DJIGimbalParamAttitudeInDegrees) else {
        print("Could not create DJIGimbalParamAttitudeInDegrees key")
        return
    }
    // Get the key manager
    guard let keyManager = DJISDKManager.keyManager() else {
        print("Could not get the key manager, make sure you are registered")
        return
    }
    // Test if the key is available
    let testing = keyManager.isKeySupported(gimbalAttitudeKey)
    self.statusLabel.text = String(testing) // This comes out true
    // Use the key to retrieve info
    let gimbalAttitudeValue = keyManager.getValueFor(gimbalAttitudeKey)
    let gimbalAttitude = gimbalAttitudeValue?.value as! DJIGimbalState
    _ = gimbalAttitude.attitudeInDegrees.pitch
    // --> Application crashes on the line above
}
I'm working with a Mavic Mini.
Please advise in general terms how to connect the DJI Mobile SDK to Swift and specifically how I can read out the current gimbal pitch value.
You can access the current gimbal pitch through the gimbal's didUpdate state delegate method:
import DJISDK

class GimbalController: NSObject, DJIGimbalDelegate {
    let gimbal: DJIGimbal

    init(gimbal: DJIGimbal) {
        self.gimbal = gimbal
        super.init()
        gimbal.delegate = self
    }

    func gimbal(_ gimbal: DJIGimbal, didUpdate state: DJIGimbalState) {
        print(state.attitudeInDegrees.pitch)
    }
}

// Create an instance of the custom gimbal controller in some other class
// and pass it the gimbal instance.
if let aircraft = DJISDKManager.product() as? DJIAircraft {
    if let gimbal = aircraft.gimbal {
        let gimbalController = GimbalController(gimbal: gimbal)
    }
}
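One usage note, as an assumption worth verifying against the DJI SDK headers: delegate properties are typically weak references, so store the GimbalController instance in a property rather than a local constant like the one above, otherwise it can be deallocated and the didUpdate callbacks will stop arriving.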

AVCaptureDevice's exposureDuration and iso not honored by AVCapturePhotoOutput

Problem: you use AVCaptureDevice.setExposureModeCustom to set a fast "shutter speed" (exposureDuration) and a high ISO, call AVCapturePhotoOutput to take a photo, and see in the resulting image that the exposureDuration/ISO were not used (even though the live video feed shows that they are in effect, by brightening/darkening as expected).
It turns out that AVCapturePhotoSettings.isAutoStillImageStabilizationEnabled is to blame: by default this is true, and when true your exposure duration and ISO settings can be ignored/reset.
The solution is to set it to false when you're using a custom exposure setting, like this:
// self.customDuration is nil if we're on auto-exposure, non-nil if we are on
// manual exposure, i.e. we called AVCaptureDevice.setExposureModeCustom
let photoSettings: AVCapturePhotoSettings
if self.photoOutput.availablePhotoCodecTypes.contains(.hevc), heicSupported {
    photoSettings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])
} else {
    photoSettings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])
}
// Auto still image stabilization destroys our settings for custom exposure
// (ISO, duration), so turn it off if we have any.
photoSettings.isAutoStillImageStabilizationEnabled =
    self.customDuration == nil ? self.photoOutput.isStillImageStabilizationSupported : false
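For context, here is a minimal sketch of the surrounding flow, assuming device, photoOutput, customDuration, and a photoCaptureDelegate already exist (the names follow the snippet above or are hypothetical):
// Apply a custom exposure, then capture with auto stabilization disabled.
// Clamp ISO and duration to the device's supported ranges in real code.
let duration = CMTime(value: 1, timescale: 1000) // 1 ms "shutter speed"
try device.lockForConfiguration()
device.setExposureModeCustom(duration: duration, iso: 800) { _ in
    // The custom exposure has taken effect; capturing now is safe.
}
device.unlockForConfiguration()
self.customDuration = duration
// ... later, build photoSettings exactly as above, then:
self.photoOutput.capturePhoto(with: photoSettings, delegate: photoCaptureDelegate)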

Why is the Vision framework unable to align two images?

I'm trying to take two images using the camera, and align them using the iOS Vision framework:
func align(firstImage: CIImage, secondImage: CIImage) {
    let request = VNTranslationalImageRegistrationRequest(targetedCIImage: firstImage) { request, error in
        if error != nil {
            fatalError()
        }
        let observation = request.results!.first as! VNImageTranslationAlignmentObservation
        let alignedSecondImage = secondImage.transformed(by: observation.alignmentTransform)
        let compositedImage = firstImage.applyingFilter("CIAdditionCompositing",
                                                        parameters: ["inputBackgroundImage": alignedSecondImage])
        // Save the compositedImage to the photo library.
    }
    try! visionHandler.perform([request], on: secondImage)
}

let visionHandler = VNSequenceRequestHandler()
But this produces grossly mis-aligned images:
You can see that I've tried three different types of scenes — a close-up subject, an indoor scene, and an outdoor scene. I tried more outdoor scenes, and the result is the same in almost every one of them.
I was expecting a slight misalignment at worst, but not such a complete misalignment. What is going wrong?
I'm not passing the orientation of the images into the Vision framework, but that shouldn't be a problem for aligning images. It's a problem only for things like face detection, where a rotated face isn't detected as a face. In any case, the output images have the correct orientation, so orientation is not the problem.
My compositing code is working correctly; it's only the Vision framework that's a problem. If I remove the calls to the Vision framework and put the phone on a tripod, the composition works perfectly. There's no misalignment. So the problem is the Vision framework.
This is on iPhone X.
How do I get the Vision framework to work correctly? Can I tell it to use gyroscope, accelerometer and compass data to improve the alignment?
You should set secondImage as the target image, and perform the handler with firstImage (the two are swapped in your code).
I used your compositing approach.
Check out this example from MLBoy:
let request = VNTranslationalImageRegistrationRequest(targetedCIImage: image2, options: [:])
let handler = VNImageRequestHandler(ciImage: image1, options: [:])
do {
    try handler.perform([request])
} catch let error {
    print(error)
}
guard let observation = request.results?.first as? VNImageTranslationAlignmentObservation else { return }
let alignmentTransform = observation.alignmentTransform
image2 = image2.transformed(by: alignmentTransform)
let compositedImage = image1.applyingFilter("CIAdditionCompositing", parameters: ["inputBackgroundImage": image2])

Knowing resolution of AVCaptureSession's session presets

I'm accessing the camera in iOS and using session presets as so:
captureSession.sessionPreset = AVCaptureSessionPresetMedium;
Pretty standard stuff. However, I'd like to know ahead of time the resolution of the video I'll be getting from this preset (especially because it differs from device to device). I know there are tables online where you can look this up (such as here: http://cmgresearch.blogspot.com/2010/10/augmented-reality-on-iphone-with-ios40.html ). But I'd like to be able to get this programmatically so that I'm not just relying on magic numbers.
So, something like this (theoretically):
[captureSession resolutionForPreset:AVCaptureSessionPresetMedium];
which might return a CGSize of { width: 360, height: 480 }. I have not been able to find any such API; so far I've had to resort to waiting until I get my first captured image and querying it then (which, for other reasons in my program flow, is not good).
I am no AVFoundation pro, but I think the way to go is:
captureSession.sessionPreset = AVCaptureSessionPresetMedium;
AVCaptureInput *input = [captureSession.inputs objectAtIndex:0]; // maybe search the input in array
AVCaptureInputPort *port = [input.ports objectAtIndex:0];
CMFormatDescriptionRef formatDescription = port.formatDescription;
CMVideoDimensions dimensions = CMVideoFormatDescriptionGetDimensions(formatDescription);
I'm not sure about the last step and I didn't try it myself. Just found that in the documentation and think it should work.
Searching for CMVideoDimensions in Xcode you'll find the RosyWriter example project. Have a look at that code (I don't have time to do that now).
You can programmatically get the resolution from activeFormat before capture begins, though not before adding inputs and outputs: https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVCaptureDevice_Class/index.html#//apple_ref/occ/instp/AVCaptureDevice/activeFormat
private func getCaptureResolution() -> CGSize {
    // Define default resolution
    var resolution = CGSize(width: 0, height: 0)

    // Get current video device
    let curVideoDevice = useBackCamera ? backCameraDevice : frontCameraDevice

    // Determine if the video is in portrait orientation
    let portraitOrientation = orientation == .Portrait || orientation == .PortraitUpsideDown

    // Get video dimensions
    if let formatDescription = curVideoDevice?.activeFormat.formatDescription {
        let dimensions = CMVideoFormatDescriptionGetDimensions(formatDescription)
        resolution = CGSize(width: CGFloat(dimensions.width), height: CGFloat(dimensions.height))
        if portraitOrientation {
            resolution = CGSize(width: resolution.height, height: resolution.width)
        }
    }

    // Return resolution
    return resolution
}
FYI, I attach here an official reply from Apple.
This is a follow-up to Bug ID# 13201137.
Engineering has determined that this issue behaves as intended based on the following information:
There are several problems with the included code:
1) The AVCaptureSession has no inputs.
2) The AVCaptureSession has no outputs.
Without at least one input (added to the session using [AVCaptureSession addInput:]) and a compatible output (added using [AVCaptureSession addOutput:]), there will be no active connections, therefore, the session won't actually run in the input device. It doesn't need to -- there are no outputs to which to deliver any camera data.
3) The JAViewController class assumes that the video port's -formatDescription property will be non-nil as soon as [AVCaptureSession startRunning] returns.
There is no guarantee that the format description will be updated with the new camera format as soon as startRunning returns. -startRunning starts up the camera and returns when it is completely up and running, but doesn't wait for video frames to be actively flowing through the capture pipeline, which is when the format description would be updated.
You're just querying too fast. If you waited a few milliseconds more, it would be there. But the right way to do this is to listen for the AVCaptureInputPortFormatDescriptionDidChangeNotification.
4) Your JAViewController class creates a PVCameraInfo object in retrieveCameraInfo: and asks it a question, then lets it fall out of scope, where it is released and dealloc'ed.
Therefore, the session doesn't have long enough to run to satisfy your dimensions request. You stop the camera too quickly.
We consider this issue closed. If you have any questions or concern regarding this issue, please update your report directly (http://bugreport.apple.com).
Thank you for taking the time to notify us of this issue.
Best Regards,
Developer Bug Reporting Team
Apple Worldwide Developer Relations
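Building on the notification Apple recommends in point 3 above, a minimal Swift sketch might look like this (assuming a capture session that has inputs and outputs added and has been started; remove the observer when you're done):
import AVFoundation

// Observe the port's format description becoming available,
// instead of polling right after startRunning() returns.
let observer = NotificationCenter.default.addObserver(
    forName: .AVCaptureInputPortFormatDescriptionDidChange,
    object: nil,
    queue: .main
) { notification in
    guard let port = notification.object as? AVCaptureInput.Port,
          let formatDescription = port.formatDescription else { return }
    let dimensions = CMVideoFormatDescriptionGetDimensions(formatDescription)
    print("Capture resolution: \(dimensions.width)x\(dimensions.height)")
}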
According to Apple, there's no API for that. It stinks; I've had the same problem.
Maybe you can provide a list of all possible preset resolutions for every iPhone model and check which device model the app is running on, using something like this...
[[UIDevice currentDevice] platformType]   // ex: UIDevice4GiPhone
[[UIDevice currentDevice] platformString] // ex: @"iPhone 4G"
However, you have to update the list for each newer device model. Hope this helps :)
If the preset is .photo, the returned size is the still photo size, not the preview video size.
If the preset is not .photo, the returned size is the video size, not the captured photo size.
if self.session.sessionPreset != .photo {
    // Return video size, not captured photo size
    let format = videoDevice.activeFormat
    let formatDescription = format.formatDescription
    let dimensions = CMVideoFormatDescriptionGetDimensions(formatDescription)
} else {
    // Other way to get video size
}
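For the .photo branch, one possibility (an assumption on my part; this property is available up to iOS 15 and deprecated in iOS 16) is to read the format's still image dimensions instead:
let format = videoDevice.activeFormat
// Dimensions of the photo that would be captured with this format
let photoDimensions = format.highResolutionStillImageDimensions
print("Photo size: \(photoDimensions.width)x\(photoDimensions.height)")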
Christian Beer's answer is a good way to get the size for a specified preset. My way works for the active preset.
The best way to do what you want (get a known video or image format) is to set the format of the capture device.
First, find the capture device you want to use:
// (From the answerer's setup method; autoLevelWindowCenter, ALCWindow,
// currentUser and beginSession() are app-specific.)
if #available(iOS 10.0, *) {
    captureDevice = defaultCamera()
} else {
    let devices = AVCaptureDevice.devices()
    // Loop through all the capture devices on this phone
    for device in devices {
        // Make sure this particular device supports video
        if (device as AnyObject).hasMediaType(AVMediaType.video) {
            // Finally check the position and confirm we've got the back camera
            if (device as AnyObject).position == AVCaptureDevice.Position.back {
                captureDevice = device as AVCaptureDevice
            }
        }
    }
}
self.autoLevelWindowCenter = ALCWindow.frame
if captureDevice != nil && currentUser != nil {
    beginSession()
}
func defaultCamera() -> AVCaptureDevice? {
    if #available(iOS 10.0, *) { // only use the wide angle camera, never the dual camera
        if let device = AVCaptureDevice.default(AVCaptureDevice.DeviceType.builtInWideAngleCamera,
                                                for: AVMediaType.video,
                                                position: .back) {
            return device
        } else {
            return nil
        }
    } else {
        return nil
    }
}
Then find the formats that the device can use:
let options = captureDevice!.formats
var supportable = options.first as! AVCaptureDevice.Format
for format in options {
    let testFormat = format
    let description = testFormat.description
    if description.contains("60 fps") && description.contains("1280x 720") {
        supportable = testFormat
    }
}
You can do more complex parsing of the formats, but you might not care.
Then just set the device to that format:
do {
    try captureDevice?.lockForConfiguration()
    captureDevice!.activeFormat = supportable
    // Set up other capture device settings like autofocus, frame rate, ISO, shutter speed, etc.
    captureDevice?.unlockForConfiguration()
    // Add the device to an active capture session
    try captureSession.addInput(AVCaptureDeviceInput(device: captureDevice!))
} catch {
    print("Could not configure the capture device: \(error)")
}
You may want to look at the AVFoundation docs and tutorials on AVCaptureSession, as there are lots of things you can do with the output as well. For example, you can convert the result to .mp4 using AVAssetExportSession so that you can post it on YouTube, etc.
Hope this helps.
Apple uses a 4:3 ratio for the iPhone camera.
You can use this ratio to get the frame size of the captured video by fixing either the width or the height constraint of the AVCaptureVideoPreviewLayer and setting the aspect ratio constraint to 4:3.
In the left image, the width was fixed to 300px and the height was derived from the 4:3 ratio, giving 400px.
In the right image, the height was fixed to 300px and the width was derived from the 3:4 ratio, giving 225px.
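A minimal sketch of that constraint setup (previewView is a hypothetical UIView hosting the AVCaptureVideoPreviewLayer):
import UIKit

previewView.translatesAutoresizingMaskIntoConstraints = false
NSLayoutConstraint.activate([
    previewView.widthAnchor.constraint(equalToConstant: 300),
    // 4:3 portrait aspect: height = width * 4/3
    previewView.heightAnchor.constraint(equalTo: previewView.widthAnchor, multiplier: 4.0 / 3.0)
])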
