Still pin image capture response time - OpenCV

I have a problem with the response time when capturing an image from a camera's still pin.
My system runs a video stream at 320*480 from the capture pin, and pushing a button snaps a 720p image directly from the still pin at the same time. But the response time is too long, around 6 seconds, which is not acceptable.
I have tried other video capture software that supports snapping a picture while streaming video, and the response time is similar.
I am wondering whether this is a hardware problem or a software problem, and how still pin capture actually works. Does the still image come from interpolation, or does the hardware change resolution?
For example, the camera starts at one resolution setting, keeps sensing, and pushes the data to the buffer through USB. Is it possible for it to immediately change to another resolution setting and snap an image? Is this why the system takes the picture so slowly?
Or is there a way to keep the video streaming at a high frame rate and snap a high-resolution image immediately, with no interpolation?

I am working on a project that includes a function to snap an image from a video stream. The technology I use is DirectShow, and the response time is not as long as yours. In my experience, the response time has nothing to do with the streaming frame rate.
A camera usually has its own default resolution, and it cannot immediately change to another resolution setting to snap an image, so that is not the reason.
Could you please show us some code, and tell us your camera's model?

Related

Objective-C iPhone: take photos with both cameras simultaneously

I need to take one picture with the front camera and another with the back camera. I have read that this isn't possible at the same time, but do you know if it is possible to switch between the cameras in a minimal amount of time and take one front and one back picture?
EDIT:
As I said before, I want to capture from both cameras at the same time. I know that it is not possible on iPhone devices, but I tried switching cameras very quickly, and the iPhone wastes a lot of time switching between them. Ideally, I would show the back camera in the preview and record frames from it, while at the same time recording frames from the front camera without previewing it and without losing the front camera's frames.
Thanks in advance.
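For the switching part, here is a minimal sketch of the usual approach: swap the AVCaptureDeviceInput on the running session inside a beginConfiguration()/commitConfiguration() pair so the session is never torn down. This is the fastest switch AVFoundation offers, though it still takes noticeable time; the helper name and device-lookup call below are illustrative, not from the original question:

```swift
import AVFoundation

// Hypothetical helper: swap the video input of a running AVCaptureSession
// to the camera at `position` without stopping the session. True
// simultaneous front+back capture is not supported on this hardware.
func switchCamera(on session: AVCaptureSession,
                  to position: AVCaptureDevice.Position) {
    session.beginConfiguration()
    defer { session.commitConfiguration() }

    // Remove only the current video input; leave any audio input alone.
    for case let input as AVCaptureDeviceInput in session.inputs
    where input.device.hasMediaType(.video) {
        session.removeInput(input)
    }

    // Attach the camera at the requested position.
    guard let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                               for: .video,
                                               position: position),
          let newInput = try? AVCaptureDeviceInput(device: device),
          session.canAddInput(newInput)
    else { return }

    session.addInput(newInput)
}
```

Reconfiguring in place avoids the cost of stopping and restarting the session, but the sensor pipeline still has to spin up, which is the delay the question describes.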

AVCaptureSession captures black/dark frames after changing preset

I'm developing an app that supports both still image and video capture with AVFoundation. Capturing them requires different AVCaptureSession presets. I check with canSetSessionPreset(), begin the change with beginConfiguration(), set the required preset with sessionPreset, and end with commitConfiguration().
I found that if I capture a still image with AVCaptureStillImageOutput immediately after changing the preset, it returns no errors, but the resulting image is sometimes black or very dark.
If I start capturing video with AVCaptureMovieFileOutput immediately after changing the preset, the first several frames of the resulting file are also black or very dark at times.
Right after changing the preset the screen flickers, likely because the camera is adjusting its exposure. So it looks like immediately after a preset change the camera starts metering from a very fast shutter speed, which results in black/dark frames.
Both problems go away if I insert a 0.1-second delay between changing the preset and starting the capture, but that's ugly, and no one can guarantee it will work every time on every device.
Is there a clean solution to this problem?
This is for future users: it was happening for me when I set the sessionPreset to high and then, as soon as recording started, changed the video output connection and set the focus. Moving those changes into the initial camera setup fixed it!
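Another option worth trying, sketched below under the assumption that the dark frames really do come from the exposure sweep: instead of a fixed delay, key-value observe the device's isAdjustingExposure property (which AVCaptureDevice exposes as KVO-observable) and fire the capture once it settles. The class and closure names are illustrative:

```swift
import AVFoundation

// A minimal sketch: wait for the exposure sweep to finish instead of
// sleeping for a fixed 0.1 s. Assumes the adjustment has already begun
// by the time this is called.
final class ExposureSettledCapture {
    private var observation: NSKeyValueObservation?

    func capture(with device: AVCaptureDevice,
                 whenSettled capture: @escaping () -> Void) {
        guard device.isAdjustingExposure else {
            capture() // exposure already stable; capture right away
            return
        }
        // isAdjustingExposure is KVO-observable; fire once it flips to false.
        observation = device.observe(\.isAdjustingExposure,
                                     options: [.new]) { [weak self] _, change in
            if change.newValue == false {
                self?.observation?.invalidate()
                self?.observation = nil
                capture()
            }
        }
    }
}
```

This ties the capture to the condition the delay was papering over, rather than to a timer that may be too short on slower devices.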

Kurento - Blurriness in images stored from the remote stream

What I did:
I am using Kurento Media Server to store video streaming frames on the server. I can store the frames by using the opencv-plugin sample.
I am storing the video frames in the two scenarios below.
1) I need to take the images when the user shows their face in front of the camera. (Note: no movement)
Issues: none. I get good-quality images.
2) I need to take the images when the user walks around a room. (Note: the user is moving)
Issues: most of the images stored on the server are blurred while the user is moving (walking).
What I want:
i) Is this the default behavior of KMS (GStreamer)?
Note: I can see the local stream video clearly in the browser while moving, but only the remote stream video gets blurred while moving.
ii) Has anyone faced this issue before? If yes, how do I solve it?
iii) Do I need to change any GStreamer configuration?
iv) Can anyone give me a suggestion to overcome this issue?
The problem you are having is that the exposure time of your camera is long. It's like taking a picture of a moving car in low light.
When there is movement in the image, grabbing a single frame will end up producing this kind of image, especially if the camera's exposure time is long (due to low-light conditions or low camera quality).
In continuous video you don't notice this blurriness because there is a sequence of images and your brain fills in the gaps.
Edit
You can try to improve the quality you are sending to the server by changing the constraints on the WebRtcEndpoint with the setMaxVideoSendBandwidth and setMaxVideoRecvBandwidth properties. As long as there is available bandwidth, you'll get better quality.

Detect motion with iPad camera while doing other things

I have to play back a video in a loop until I detect some motion (activity) with the front camera of an iPad.
The video does not need to be recorded or played back later, and I do not have to show the live camera feed on the iPad.
Currently users have to tap the screen to stop the video, but the customer wants a 'cool' motion-detection feature. I am not interested in face detection, just motion.
Are there any examples of this?
Thanks,
EDIT
Well, currently the only workaround I've found is detecting luminance: take an image of every frame (or every nth frame), measure its luminosity, then do the same for a later frame and compare; if the difference is large enough, something has changed :-)
Just find a good threshold for the difference and you're ready to go...
Of course, I would prefer a more robust approach...
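For what it's worth, here is a rough sketch of that luminance heuristic as an AVCaptureVideoDataOutput delegate, assuming the session delivers frames in a biplanar YCbCr ('420v'/'420f') pixel format; the class name, sampling step, and threshold are all illustrative and would need tuning:

```swift
import AVFoundation

// Illustrative sketch: average the Y (luma) plane of each frame and
// treat a large jump between consecutive averages as "motion".
final class LuminanceMotionDetector: NSObject,
        AVCaptureVideoDataOutputSampleBufferDelegate {
    private var lastMeanLuma: Double?
    var onMotion: (() -> Void)?  // fired when the luma delta exceeds the threshold

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let buffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        CVPixelBufferLockBaseAddress(buffer, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(buffer, .readOnly) }

        // Plane 0 is the luma plane in 420YpCbCr formats.
        guard let base = CVPixelBufferGetBaseAddressOfPlane(buffer, 0) else { return }
        let height = CVPixelBufferGetHeightOfPlane(buffer, 0)
        let bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(buffer, 0)
        let pixels = base.assumingMemoryBound(to: UInt8.self)

        // Sample every 8th row and column to keep the per-frame cost low.
        var total = 0, count = 0
        for y in stride(from: 0, to: height, by: 8) {
            for x in stride(from: 0, to: bytesPerRow, by: 8) {
                total += Int(pixels[y * bytesPerRow + x])
                count += 1
            }
        }
        let mean = Double(total) / Double(max(count, 1))

        // Arbitrary threshold; tune per device and lighting conditions.
        if let last = lastMeanLuma, abs(mean - last) > 10 {
            onMotion?()
        }
        lastMeanLuma = mean
    }
}
```

Comparing per-block differences over the downsampled pixels, rather than one global mean, would be more robust against gradual lighting changes at a modest extra cost per frame.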

AV Foundation camera preview layer gets zoomed in, how to zoom out?

The application I am currently working on has as its main functionality continuous scanning of QR/bar codes using the ZXing library (http://code.google.com/p/zxing/). For continuous frame capture I initialize the AVCaptureSession, AVCaptureVideoOutput, and AVCaptureVideoPreviewLayer as described in Apple's Q&A http://developer.apple.com/iphone/library/qa/qa2010/qa1702.html.
My problem is that when I run the camera preview, the image I see through the video device is much larger (about 1.5x) than the image seen through the iPhone's still camera. Our customer needs to hold the iPhone around 5 cm from the bar code when scanning, but at that distance the whole QR code isn't visible and decoding fails.
Why does the video camera on the iPhone 4 enlarge the image (as seen through the AVCaptureVideoPreviewLayer)?
This is a function of the AVCaptureSession video preset, accessible by using the .sessionPreset property. For example, after configuring your captureSession, but before starting it, you would add
captureSession.sessionPreset = AVCaptureSessionPresetPhoto;
See the documentation here:
iOS Reference Document
The default preset for video is 1280x720 (I think), which is a lower resolution than the maximum the camera supports. By using the "Photo" preset, you're getting the raw camera data.
You see the same behaviour with the built-in iPhone Camera app. Switch between still and video capture modes and you'll notice that the default zoom level changes. You see a wider view in still mode, whereas video mode zooms in a bit.
My guess is that continuous video capture needs to use a smaller area of the camera sensor to work optimally. If it used the whole sensor perhaps the system couldn't sustain 30 fps. Using a smaller area of the sensor gives the effect of "zooming in" to the scene.
I am answering my own question again. This was not answered even in the Apple Developer forum, so I filed a technical support request with Apple directly, and they replied that this is a known issue that will be fixed in a future release. So there is nothing we can do but wait and see.
