How can I trigger auto focus and still keep current ISO and speed in camera2 API?

I gave it a try, but when I touch to focus the preview flashes. I did it by triggering the focus and then reapplying the current ISO and shutter speed in onCaptureCompleted. What is wrong with that? Thanks

Why are you reapplying ISO and exposure time? If you're staying with auto-exposure, it'll keep working automatically during and after the AF trigger.
So generally, just set CONTROL_AF_TRIGGER to START for a single capture request (not a repeating request!), keeping your other preview parameters the same.

Related

Gtkmm 3.0 draw blinking shapes and use of timeouts

In a Gtk::DrawingArea I have a pixbuf showing the layout of my house, on which I draw the measured room temperatures. I would also like to draw the state of my shutters with some lines. When, and only when, a shutter changes its state, I would like to make these lines blink at one-second intervals. I assume I would have to use a timeout triggered every second to redraw the shutter lines. I am already using a timeout every 2 minutes to fetch new data from the internet to show on my screen. I could set that timeout to fire every second and then keep track of when the last 2-minute fetch happened so I can trigger the next one on time. But since my shutters don't change state 99.9 percent of the time, I don't need the blinking most of the time. Calling a method every second just to make a line blink feels over-engineered. Is there a smarter way to do this?
I could post a lot of code here, but I don't think it would help anybody understand my question. I would be grateful for any hint.
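As a rough illustration of the timeout idea described above, here is a minimal sketch in Python with PyGObject (GTK 3) rather than gtkmm; both sit on the same GLib timeout mechanism. The names fetch_data, shutter_changed and blinking_finished are placeholders for the asker's own logic. The point of the sketch is that the 1-second timeout is only installed while blinking is actually needed, so nothing fires every second the rest of the time.

```python
# Minimal PyGObject (GTK 3) analog of the gtkmm setup: two independent
# timeouts, where the 1-second blink timeout only exists while a shutter
# is changing state. fetch_data / shutter_changed / blinking_finished are
# placeholders for the real logic.
import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk, GLib


class HousePlan(Gtk.DrawingArea):
    def __init__(self):
        super().__init__()
        self.blink_on = False
        self.blink_source = None
        self.connect("draw", self.on_draw)
        # The existing 2-minute fetch timeout stays untouched.
        GLib.timeout_add_seconds(120, self.fetch_data)

    def on_draw(self, _area, cr):
        # Draw the pixbuf and temperatures here; draw the shutter lines
        # only when blink_on is True so they appear to blink.
        if self.blink_on:
            cr.set_source_rgb(1, 0, 0)
            cr.rectangle(10, 10, 100, 4)
            cr.fill()
        return False

    def fetch_data(self):
        # Placeholder: fetch new values, then decide whether a shutter moved.
        if self.shutter_changed() and self.blink_source is None:
            # Install the 1-second timer only while blinking is needed.
            self.blink_source = GLib.timeout_add(1000, self.toggle_blink)
        return True  # keep the 2-minute timeout running

    def toggle_blink(self):
        self.blink_on = not self.blink_on
        self.queue_draw()
        if self.blinking_finished():
            self.blink_source = None
            return False  # returning False removes the per-second timeout
        return True

    def shutter_changed(self):
        return False  # placeholder

    def blinking_finished(self):
        return False  # placeholder


win = Gtk.Window()
win.add(HousePlan())
win.connect("destroy", Gtk.main_quit)
win.show_all()
Gtk.main()
```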

How to control display speed in manim?

I found that in manim there are two ways to display an animation. One way is to use self.play() and, by adding run_time=X to the kwargs, set the speed so that the animation finishes in X seconds.
But I found another way: first add an object with an updater using self.add(), then let it run by calling self.wait().
How can I control the speed when using self.add() and self.wait()?
There is a demo to show the case.
BTW, I'm using manimgl instead of the community version in order to achieve real-time rendering.
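For reference, here is a minimal sketch (written in manim community edition syntax; manimgl's dt-based updaters work the same way) of one common way to control speed with self.add() and self.wait(): the updater receives dt, the time since the last frame, so multiplying by a speed factor sets how far things move per second of scene time. The class name and the speed value are just examples.

```python
# A minimal sketch: control motion speed during self.wait() by scaling
# the updater's dt argument with a speed factor (units per second).
from manim import *


class UpdaterSpeed(Scene):
    def construct(self):
        dot = Dot().to_edge(LEFT)
        speed = 2  # scene units per second; change this to change the pace

        def slide(mob, dt):
            # dt is the time since the last rendered frame, so the motion
            # rate stays `speed` regardless of the frame rate.
            mob.shift(RIGHT * speed * dt)

        dot.add_updater(slide)
        self.add(dot)
        self.wait(3)  # the updater runs for 3 seconds of scene time
        dot.remove_updater(slide)
```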

Best way to dynamically play notes in AudioKit?

Say I want to trigger a random note and velocity on an instrument every quarter note. What is the best way to achieve this in AudioKit V5?
The examples seem to use the sequencer to schedule sounds with proper timing, but then you have to add the notes to the track in advance.
One solution is to pre-generate a bar of random quarter notes with looping enabled - when the bar of random notes is complete, clear the bar and replace with new random notes.
I'm wondering if there's a lower level way of doing this? Some kind of callback that is called with precise timing where I can generate the values as they're needed? Or another approach?
Nothing forces you to obey the incoming note or velocity data from the sequencer. Just make your instrument respond to any note-on with a random note and velocity. That way you get the timing without worrying about anything else.

How can I get an accurate duration timer for an image that is displayed on the screen?

I am trying to get an accurate time interval for an image that is displayed on the screen for only a short duration.
Let's say I want an image to be displayed for around 150 ms. I know that most iOS devices have a variable refresh rate (usually between 20-60 Hz), so it is impossible to hit that 150 ms perfectly on the mark. What I would like to know is: is there a way to measure the exact time interval for which the image was displayed? Ideally, I'd like this to be accurate to within a few milliseconds.
Thanks in advance for any help I can get!
If you use Metal, you can add a "presented handler" block to be called when the drawable has been presented (shown on screen). Use the -addPresentedHandler: method of MTLDrawable to do that. In that block, you can query the presentedTime property of the drawable.
If you use that to first show an image and then clear the image (display black or white or whatever), then you can compare the two presented time values to determine how long the image was displayed.
In addition to that, you can schedule presentation of a drawable for a specific time, using the -presentDrawable:atTime: (or, depending on your needs, -presentDrawable:afterMinimumDuration:) method of MTLCommandBuffer.
You should look at using a CADisplayLink. It's a timer that's synced to the screen refresh cycle.

openCV: is it possible to time cvQueryFrame to synchronize with a projector?

When I capture camera images of projected patterns using openCV via 'cvQueryFrame', I often end up with an unintended artifact: the projector's scan line. That is, since I'm unable to precisely time when 'cvQueryFrame' captures an image, the image taken does not respect the constant 30Hz refresh of the projector. The result is that typical horizontal band familiar to those who have turned a video camera onto a TV screen.
Short of resorting to hardware sync, has anyone had some success with approximate (e.g., 'good enough') informal projector-camera sync in openCV?
Below are two solutions I'm considering, but I was hoping this is a common enough problem that an elegant solution might exist. My less-than-elegant thoughts are:
Add a slider control in the cvWindow displaying the video for the user to control a timing offset from 0 to 1/30th second, then set up a queue timer at this interval. Whenever a frame is needed, rather than calling 'cvQueryFrame' directly, I would request a callback to execute 'cvQueryFrame' at the next firing of the timer. In this way, theoretically the user would be able to use the slider to reduce the scan line artifact, provided that the timer resolution is sufficient.
After receiving a frame via 'cvQueryFrame', examine the frame for the tell-tale horizontal band by looking for a delta in HSV values for a vertical column of pixels. Naturally this would only work when the subject being photographed contains a fiducial strip of uniform color under smoothly varying lighting.
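As a rough sketch of that second idea, here is what the band detection could look like in Python with the cv2 interface (rather than the old cvQueryFrame C API). The fiducial column position and the threshold are assumptions that would need tuning for a real setup; frames that show a sharp row-to-row jump in the value channel are simply dropped.

```python
# Rough sketch: detect the projector band by checking one image column that
# crosses a uniform fiducial strip, and drop frames where the value channel
# jumps sharply between neighbouring rows. Column index and threshold are
# assumptions to be tuned.
import cv2
import numpy as np

FIDUCIAL_COLUMN = 320   # x position of the uniform strip (assumed)
DELTA_THRESHOLD = 40    # max allowed row-to-row change in V (assumed)

cap = cv2.VideoCapture(0)


def has_band(frame):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    column = hsv[:, FIDUCIAL_COLUMN, 2].astype(np.int16)  # V channel only
    deltas = np.abs(np.diff(column))
    return deltas.max() > DELTA_THRESHOLD


while True:
    ok, frame = cap.read()
    if not ok:
        break
    if has_band(frame):
        continue  # skip frames that caught the projector mid-refresh
    cv2.imshow("clean frames", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```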
I've used several cameras with OpenCV, most recently a Canon SLR (7D).
I don't think that your proposed solution will work. cvQueryFrame basically copies the next available frame from the camera driver's buffer (or advances a pointer in a memory-mapped region, or whatever your driver implementation does).
In any case, the timing of the cvQueryFrame call has no effect on when the image was captured.
So as you suggested, hardware sync is really the only route, unless you have a special camera, like a Point Grey camera, which gives you explicit software control of the frame integration start trigger.
I know this has nothing to do with synchronizing, but have you tried extending the exposure time? Or intentionally "blending" two or more images into one?
