Understanding AVCaptureDevice exposureDuration, exposureTargetOffset, exposureTargetBias - ios

Coming from Android, I'm trying to understand the AVCaptureDevice API and find a mapping between the iOS and Android parameters.
I'm working with continuous auto-exposure mode.
I'm having trouble with the exposure parameters above:
To my understanding:
exposureDuration - This is the length of time over which the exposure actually happens. It can be converted to seconds by dividing this CMTime property's value by its timescale.
exposureTargetOffset, exposureTargetBias - I'm not sure what these values represent - are they some kind of correction applied to reach the desired exposure level? What is this exposure target value?

You aren't alone. I'm not a professional photographer either, so it's pretty confusing. I think your gut is leading you in the right direction.
If you set exposureDuration (via setExposureModeCustom), you're out of "auto-exposure mode": it freezes that exposure duration and the current or specified ISO setting. If the light changes, you're stuck with that setting.
If you set the exposureTargetBias, it will mimic a fancy camera and move the automatically calculated exposure settings up or down an exposure value (combination of f-number and exposure duration). There's a standard value for exposure of an image, but sometimes you want to over-expose or under-expose for style or shutter-speed priority. Changing the bias tells the automatic exposure system to aim for a value over or under the "correct" standard value.
Here's a great article explaining it in iOS: https://www.imore.com/camera-api-ios-8-explained
Exposure compensation is expressed in f-stops. +1 f-stop doubles the brightness, -1 f-stop halves the brightness.
Developers can currently set exposure target biases between -8 and +8 on all existing iOS devices (the actual range is reported by minExposureTargetBias and maxExposureTargetBias). However, Apple warns that this could change in the future.
If you have a new iPhone (11 or newer), you can even change the bias in real time.
Exposure Bias is explained here: https://digital-photography-school.com/using-exposure-bias-to-improve-picture-detail/
exposureTargetOffset tells you how well the camera is hitting your requested bias value. Sometimes it just can't adjust enough to darken the image (aiming at the sun, the camera tries to shorten the exposure time and drops the ISO very low) or lighten it (pitch-black closet, the camera tries to expose the image sensor for a long time and bumps up the ISO a ton to gather all the light, resulting in a dark and grainy image). If the camera can't hit the target or is in the process of adjusting to it, the offset tells you how far off it currently is. For video, the exposure is obviously limited by framerate.
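For reference, here's a minimal Swift sketch of the bias side of this, assuming you already have a configured AVCaptureDevice; the helper name applyExposureBias is made up:

```swift
import AVFoundation

// Hypothetical helper: nudge the auto-exposure system toward a brighter or
// darker image while staying in continuous auto-exposure mode.
func applyExposureBias(_ bias: Float, to device: AVCaptureDevice) {
    do {
        try device.lockForConfiguration()

        // Stay in (or switch to) continuous auto-exposure; the bias only
        // shifts the target the auto algorithm aims for.
        if device.isExposureModeSupported(.continuousAutoExposure) {
            device.exposureMode = .continuousAutoExposure
        }

        // Clamp to the device-reported range instead of hard-coding -8...+8.
        let clamped = max(device.minExposureTargetBias,
                          min(bias, device.maxExposureTargetBias))

        device.setExposureTargetBias(clamped) { _ in
            // exposureTargetOffset reports how far (in EV) the current image
            // still is from the requested target at this moment.
            print("Offset after adjusting bias: \(device.exposureTargetOffset) EV")
        }

        device.unlockForConfiguration()
    } catch {
        print("Could not lock device for configuration: \(error)")
    }
}
```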

Related

Is it possible to create a custom flashMode in AVFoundation?

Today I found out that settings like a custom exposure mode or a fixed lens position in the focus mode, which I set for the AVCaptureDevice in my AVCaptureSession, conflict with the flashMode.on option of the AVCapturePhotoSettings, as it alters the previously specified ISO and exposure duration values, etc.
So my question is: Is it possible to specify a custom flashMode that does not analyze the viewed scene and uses strictly pre-defined settings?
For me it is more valuable to be in control of these settings, even though it might lead to a loss of photo quality.
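To make the conflict concrete, here is a rough Swift sketch of the kind of setup described above; the function name and the specific duration/ISO/lens-position values are purely illustrative:

```swift
import AVFoundation

// Fixed exposure and lens position, followed by a flash capture that
// re-meters the scene and overrides those values (the conflict described above).
func captureWithFixedSettings(device: AVCaptureDevice,
                              photoOutput: AVCapturePhotoOutput,
                              delegate: AVCapturePhotoCaptureDelegate) throws {
    try device.lockForConfiguration()

    // Fix exposure: 1/60 s at ISO 400 (arbitrary example values).
    device.setExposureModeCustom(duration: CMTime(value: 1, timescale: 60),
                                 iso: 400,
                                 completionHandler: nil)

    // Fix the lens position (0.0 = nearest, 1.0 = furthest).
    device.setFocusModeLocked(lensPosition: 0.5, completionHandler: nil)

    device.unlockForConfiguration()

    // Requesting flash here makes the system meter the scene again,
    // which overrides the custom ISO and exposure duration.
    let settings = AVCapturePhotoSettings()
    settings.flashMode = .on
    photoOutput.capturePhoto(with: settings, delegate: delegate)
}
```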

Increasing the brightness level for .continuousAutoExposure Mode

I implemented a custom camera with the AVCaptureDevice.Preset.High preset and I'm using .continuousAutoExposure. Everything works as expected, however, the brightness of the picture is sometimes quite low.
I have researched the official documentation and found out that I am able to set a custom ISO with setExposureModeCustomWithDuration. Unfortunately, doing it this way means losing the automatic exposure I want to keep.
My question now is: is there a way to increase the overall brightness of the .continuousAutoExposure mode? I only need to increase the exposure by around 5%, but I also need to stick with the .continuousAutoExposure mode.
The trick is to use the exposureTargetBias property of the AVCaptureDevice instance. exposureTargetOffset itself is read-only; you use KVO to observe changes in captureDevice.exposureTargetOffset and adjust the bias (via setExposureTargetBias) until you reach your required exposure level. For more details, check this answer.
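Here is a rough Swift sketch of that idea, assuming a configured AVCaptureDevice; the BrightnessBooster class name and the 0.3 EV default are just illustrative values:

```swift
import AVFoundation

// Keep .continuousAutoExposure, apply a small positive exposureTargetBias to
// brighten the image, and observe exposureTargetOffset via KVO to see how
// close the auto-exposure gets to the new target.
final class BrightnessBooster {
    private var observation: NSKeyValueObservation?

    func boost(device: AVCaptureDevice, by bias: Float = 0.3) {
        do {
            try device.lockForConfiguration()
            if device.isExposureModeSupported(.continuousAutoExposure) {
                device.exposureMode = .continuousAutoExposure
            }
            // Never exceed what the device reports as its maximum bias.
            let clamped = min(bias, device.maxExposureTargetBias)
            device.setExposureTargetBias(clamped, completionHandler: nil)
            device.unlockForConfiguration()
        } catch {
            print("Could not lock device: \(error)")
            return
        }

        // exposureTargetOffset is read-only and key-value observable; it tells
        // you how far (in EV) the current image still is from the biased target.
        observation = device.observe(\.exposureTargetOffset, options: [.new]) { device, _ in
            print("Current offset from target: \(device.exposureTargetOffset) EV")
        }
    }

    func stop() {
        observation?.invalidate()
        observation = nil
    }
}
```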

iOS dynamically slow down the playback of a video, with continuous value

I have a problem with the iOS SDK. I can't find an API to slow down a video with continuous values.
I have made an app with a slider and an AVPlayer, and I would like to change the speed of the video from 50% to 150% according to the slider value.
So far I have only managed to change the speed of the video with discrete values, and by re-exporting the video (to do that, I used the AVMutableComposition APIs).
Do you know if it is possible to change the speed continuously, and without re-exporting?
Thank you very much!
Jery
The AVPlayer's rate property allows playback speed changes if the associated AVPlayerItem is capable of it (responds YES to canPlaySlowForward or canPlayFastForward). The rate is 1.0 for normal playback, 0 for stopped, and can be set to other values but will probably round to the nearest discrete value it is capable of, such as 2:1, 3:2, 5:4 for faster speeds, and 1:2, 2:3 and 4:5 for slower speeds.
With the older MPMoviePlayerController, and its similar currentPlaybackRate property, I found that it would take any setting and report it back, but would still round it to one of the discrete values above. For example, set it to 1.05 and you would get normal speed (1:1) even though currentPlaybackRate would say 1.05 if you read it. Set it to 1.2 and it would play at 1.25X (5:4). And it was limited to 2:1 (double speed), beyond which it would hang or jump.
For some reason, the iOS API Reference doesn't mention these discrete speeds. They were found by experimentation. They make some sense. Since the hardware displays video frames at a fixed rate (e.g.- 30 or 60 frames per second), some multiples are easier than others. Half speed can be achieved by showing each frame twice, and double speed by dropping every other frame. Dropping 1 out of every 3 frames gives you 150% (3:2) speed. But to do 105% is harder, dropping 1 out of every 21 frames. Especially if this is done in hardware, you can see why they might have limited it to only certain multiples.
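For what it's worth, a minimal Swift sketch of driving AVPlayer.rate from a slider could look like this; the class and method names are illustrative, and the actual granularity still depends on the item as described above:

```swift
import AVFoundation
import UIKit

// Wire a UISlider (configured for 0.5...1.5) to AVPlayer.rate.
final class PlaybackSpeedController {
    let player: AVPlayer

    init(player: AVPlayer) {
        self.player = player
    }

    @objc func sliderChanged(_ slider: UISlider) {
        guard let item = player.currentItem else { return }

        let requested = slider.value   // slider minimumValue = 0.5, maximumValue = 1.5

        // Only apply rates the current item claims to support.
        if requested < 1.0 && !item.canPlaySlowForward { return }
        if requested > 1.0 && !item.canPlayFastForward { return }

        player.rate = requested        // 1.0 = normal speed, 0.0 = paused
    }
}

// Hook it up, e.g. in viewDidLoad:
// slider.addTarget(speedController,
//                  action: #selector(PlaybackSpeedController.sliderChanged(_:)),
//                  for: .valueChanged)
```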

cvCaptureFromCAM() / cvQueryFrame(): disable automatic image correction?

I'm using the two OpenCV functions mentioned above to retrieve frames from my webcam. No additional properties are set, just running with default parameters.
While reading frames in a loop I can see that the image changes; brightness and contrast seem to be adjusted automatically. It definitely seems to be an operation performed by OpenCV, because the scene the camera is looking at is not changing and is lit constantly.
So how can I disable this automated correction? I could not find a property that seems to be able to do that job.
You should try to play around with these three parameters:
CV_CAP_PROP_BRIGHTNESS Brightness of the image (only for cameras)
CV_CAP_PROP_CONTRAST Contrast of the image (only for cameras)
CV_CAP_PROP_SATURATION Saturation of the image (only for cameras)
Try setting them all to 50. If that doesn't help, also try changing the other camera capture parameters listed in the documentation.
To answer this myself: OpenCV seems to be buggy or outdated here.
It seems to be impossible to get images at the camera's native resolution; they are always 640x480, and forcing another value by setting the width and height properties does not change anything.
It seems to be impossible to disable the automatic image correction; the properties mentioned above do not appear to work.
The brightness/contrast properties don't seem to work either - or at least I could not find any good values for them, or the automatic image correction always overrides them.
To sum it up: I would not recommend OpenCV for more advanced image capturing.

openCV: is it possible to time cvQueryFrame to synchronize with a projector?

When I capture camera images of projected patterns using openCV via 'cvQueryFrame', I often end up with an unintended artifact: the projector's scan line. That is, since I'm unable to precisely time when 'cvQueryFrame' captures an image, the image taken does not respect the constant 30Hz refresh of the projector. The result is the typical horizontal band familiar to anyone who has pointed a video camera at a TV screen.
Short of resorting to hardware sync, has anyone had some success with approximate (e.g., 'good enough') informal projector-camera sync in openCV?
Below are two solutions I'm considering, but was hoping this is a common enough problem that an elegant solution might exist. My less-than-elegant thoughts are:
Add a slider control in the cvWindow displaying the video for the user to control a timing offset from 0 to 1/30th second, then set up a queue timer at this interval. Whenever a frame is needed, rather than calling 'cvQueryFrame' directly, I would request a callback to execute 'cvQueryFrame' at the next firing of the timer. In this way, theoretically the user would be able to use the slider to reduce the scan line artifact, provided that the timer resolution is sufficient.
After receiving a frame via 'cvQueryFrame', examine the frame for the tell-tale horizontal band by looking for a delta in HSV values for a vertical column of pixels. Naturally this would only work when the subject being photographed contains a fiducial strip of uniform color under smoothly varying lighting.
I've used several cameras with OpenCV, most recently a Canon SLR (7D).
I don't think that your proposed solution will work. cvQueryFrame basically copies the next available frame from the camera driver's buffer (or advances a pointer in a memory mapped region, or blah according to your driver implementation).
In any case, the timing of the cvQueryFrame call has no effect on when the image was captured.
So as you suggested, hardware sync is really the only route, unless you have a special camera, like a point grey camera, which gives you explicit software control of the frame integration start trigger.
I know this has nothing to do with synchronizing, but have you tried extending the exposure time? Or intentionally "blending" two or more images into one?
