Increase camera fps? - opencv

I need an eye tracker for an experiment; it is not strictly required, but it would greatly reduce my workload, so I wanted to build a DIY eye tracker. I'm having a problem with the frame rate: a decent eye tracker would run at 500 Hz, or at least 200 Hz.
The highest-fps camera I can find on the market does 200 fps, but I want to push it to 500.
Any ideas?

The maximum fps of a camera is determined by its internal hardware, so there is no way to override it in software. However, the quoted maximum fps is usually for the camera's maximum resolution; higher frame rates are often achievable if you are willing to lower the resolution.
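As an illustration, with OpenCV you can request a lower resolution together with a higher frame rate and then read back what the driver actually granted; whether the request is honored depends entirely on the camera (a minimal sketch, with the 320x240 / 120 fps values chosen arbitrarily):

    import cv2

    cap = cv2.VideoCapture(0)

    # Request a lower resolution and a higher frame rate; the driver
    # is free to ignore values the hardware cannot deliver.
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)
    cap.set(cv2.CAP_PROP_FPS, 120)

    # Read back what was actually granted.
    print("fps granted:", cap.get(cv2.CAP_PROP_FPS))
    cap.release()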

Related

Reduce motion blur for fast-moving ball

I am trying to create a simple ball-tracking system. It does not have to be perfect, as it is not for commercial or production use.
On the hardware side I am using a Raspberry Pi with the RPi Camera V2; on the software side, OpenCV.
When there is natural sunlight (even with some clouds), the ball is totally visible. But when the sun is gone and there is only artificial light, there is significant motion blur on the ball.
In the picture below, the top object is the ball under artificial light and the bottom one under natural light.
The cause is rather obvious: less light means a longer exposure, which, combined with the rolling shutter, gives motion blur.
I tried all the settings on this camera (like the sports/night exposure modes), but I think it is just a hardware limitation. I would like to reduce the motion blur, and I would probably need a different camera that handles this better, but I have very poor knowledge of camera sensors, parameters, etc. I cannot afford to buy many cameras, compare them and then select the best one. So my question is: which camera model (compatible with the RPi) should I pick, or which parameters should I look for, to get better results (less motion blur)?
Thanks in advance!
EDIT: e.g. would a global shutter reduce the problem? (a camera like the ArduCam OV2311 2 Mpx Global Shutter)
EDIT2: Maybe a faster shutter speed would change something, but I also need a good frame rate (20-30 fps); do the two "collide"?
EDIT3: I have read that a night (NoIR) camera might help, since it is more light sensitive.
Regards
In order to reduce the motion blur, you have to use a faster shutter speed, i.e. a shorter exposure time, sometimes combined with extra illuminators.
On the Raspberry Pi you have to disable auto-exposure and set the shutter speed manually.
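As a sketch of what that looks like with the legacy picamera Python library (the shutter-speed, ISO and framerate values here are illustrative, not recommendations):

    import time
    import picamera

    with picamera.PiCamera(resolution=(640, 480), framerate=30) as camera:
        camera.iso = 800               # raise gain to compensate for the short exposure
        time.sleep(2)                  # let the auto-gain settle before locking it
        camera.shutter_speed = 2000    # exposure time in microseconds (2 ms here)
        camera.exposure_mode = 'off'   # disable auto-exposure so the value sticks
        camera.capture('frame.jpg')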
Hint:
A global-shutter camera doesn't help with motion blur; it only removes rolling-shutter artifacts. You still need a very fast shutter speed to avoid motion blur.
Fps has nothing to do with the shutter speed (a fast shutter fits easily inside the frame interval); it is limited by the read-out speed of the sensor, so a fast shutter and 20-30 fps do not collide.
A NoIR camera might not help either, because it still needs strong illumination to allow a faster shutter speed.

Taking Frame from Video vs Taking a Photo

My specific question is: What are the drawbacks to using a snipped frame from a video vs taking a photo?
Details:
I want to use frames from live video streams to replace taking pictures because it is faster. I have already researched and considered:
Video needs a faster shutter speed, leading to a higher possibility of blurring
A faster shutter speed also means less exposure to light, leading to potentially darker images
A frame snipped from a video will probably be lower resolution (although perhaps the resolution can be turned up to compensate?)
Video might take up more memory -- I am still exploring the details in another post (What is being stored and where when you use cv2.VideoCapture()?)
Anything else?
I will reword my question to make it (possibly) easier to answer: What changes must I make to a "snip frame from video" process to make the result equivalent to taking a photo? Are these changes worth it?
The maximum resolution of the Pi camera is 2592x1944 for still photos but only 1920x1080 for video recording. Another issue to take into account is that you cannot receive all formats from VideoCapture, so converting the YUV frame to JPG becomes your responsibility. OpenCV can handle this, but it takes considerable CPU time and memory.
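For illustration, the grab-and-encode step looks roughly like this in OpenCV (a sketch; the JPEG quality value is an arbitrary choice, and the encoding call is where the extra CPU time and memory go):

    import cv2

    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()      # one decoded frame (BGR by default)
    cap.release()

    if ok:
        # Encoding the frame to JPEG is the costly part mentioned above.
        ok, jpeg = cv2.imencode('.jpg', frame, [cv2.IMWRITE_JPEG_QUALITY, 90])
        if ok:
            with open('snapshot.jpg', 'wb') as f:
                f.write(jpeg.tobytes())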

Can the frequency of a flashing light be counted using a video camera

Is there a formula to determine the maximum flash rate countable by a video camera? I am thinking that any flash rate greater than the number of fps is not practical. I get hung up on the fact that the shutter is open for only a fraction of the time needed to produce a frame: 30 fps means roughly 33.33 ms per frame, and a shutter set to, say, 1/125 s is about 8 ms, or roughly 25% of the frame time. Does the shutter speed matter? I am thinking that unless they are synchronized, the shutter could open at any point in the lamp's flash, ultimately making counting very difficult.
The application is just a general one. With today's higher-speed cameras (60 fps or 120 fps), can one reliably determine the flash rate of a lamp? Think of alarm panels, breathing monitors, heart-rate monitors, or the case of trying to determine a duty cycle by visual means.
What you describe is a sampling problem, and you can refer to the Nyquist-Shannon sampling theorem.
Given a certain acquisition frequency (the number of FPS), you can be sure of your count (in every case, regardless of synchronization) if
FPS >= 2 x flashing-light frequency (in Hz)
Of course this is a general theoretical rule; things can work quite differently in practice (I am answering only with regard to the number of FPS in the general case).
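A quick way to see the theorem in action is to simulate sampling a blinking lamp at a given frame rate and count the on/off transitions visible across frames (a sketch; the helper name and the 50% duty cycle are assumptions made for illustration):

    import numpy as np

    def apparent_flash_rate(flash_hz, fps, duration_s=10.0):
        """Sample a 50% duty-cycle blinking lamp at `fps` frames per
        second and estimate the flash rate from on/off transitions."""
        t = np.arange(0, duration_s, 1.0 / fps)           # frame timestamps
        lamp_on = (np.floor(t * flash_hz * 2) % 2) == 0   # square wave
        transitions = np.count_nonzero(np.diff(lamp_on.astype(int)))
        return transitions / (2.0 * duration_s)           # two transitions per cycle

    print(apparent_flash_rate(10, 60))   # ~10 -- below Nyquist, counted correctly
    print(apparent_flash_rate(50, 60))   # ~10 -- above Nyquist, aliased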

Take photo during video input

I'm currently trying to take an image in the best quality while capturing video at a lower quality. The problem is that I'm using the video stream to check whether faces are in front of the camera, which needs lots of resources, so I'm using a lower-quality video stream; when any faces are detected, I want to take a photo in high quality.
Best regards and thanks for your help!
You cannot have multiple capture sessions, so at some point you will need to swap to the higher resolution. First, you are saying that face detection takes too many resources when using high-res snapshots, so why not simply down-sample the image and keep using the high resolution all the time (send the down-sampled frame to the face detection, display the high-res one)?
I would start with Apple's most common graphics context and try to down-scale the image there. If that takes too much CPU you could do the same on the GPU (find a library that does it, or write a simple program), or you could even simply drop odd lines and columns of the raw image data. In any of these cases, note that you probably do not need the face detection on the same thread as the display, and you most likely don't even need a high frame rate for the detection (display the camera at full FPS but update the face recognition at 10 FPS, for instance).
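The same down-sample-for-detection idea, sketched in Python with OpenCV rather than Apple's APIs (the scale factor, cascade file and snapshot name are illustrative choices):

    import cv2

    # Detect on a small copy of each frame, but keep the full-resolution
    # frame around for the actual high-quality snapshot.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)
    SCALE = 4  # detection runs on a frame 1/4 the size in each dimension

    while True:
        ok, frame = cap.read()              # full-resolution frame
        if not ok:
            break
        small = cv2.resize(frame, None, fx=1 / SCALE, fy=1 / SCALE)
        gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
        if len(cascade.detectMultiScale(gray, 1.1, 5)) > 0:
            cv2.imwrite("snapshot.jpg", frame)  # save the high-res frame
            break
    cap.release()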
Another thing you can do is run the whole thing in low res; then, when you need the image, stop the session, start a high-res session, take the photo, and swap back to low res for the face detection.

Is Direct3D feasible for zoom and pan of large images?

I want to make an image viewer for large images (up to 2000 x 8000 pixels) with very responsive zooming and panning, say 30 FPS or more. One option I came up with is to create a 3D scene with the image as a sort of fixed billboard, then move the camera forward/back to zoom and up/down/left/right to pan.
That basic idea seems reasonable to me, but I don't have experience with 3D graphics. Is there some fundamental thing I'm missing that will make my idea difficult or impossible? What might cause problems or be challenging to implement well? What part of this design will limit the maximum image size? Any guesses as to what framerate I might achieve?
I also welcome any guidance or suggestions on how to approach this task for someone brand new to Direct3D.
That seems pretty doable to me; 30 fps even seems quite low, as you can certainly achieve a solid 60 (at minimum).
One image at 8k x 2k resolution is about 100 MB of VRAM (with mipmaps), so with today's graphics cards it's not much of an issue; you'll of course face challenges if you need to load several at the same time.
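The VRAM figure is easy to sanity-check (a back-of-the-envelope calculation, assuming an uncompressed RGBA8 texture):

    # Rough VRAM cost of an 8000 x 2000 uncompressed RGBA8 texture.
    width, height, bytes_per_pixel = 8000, 2000, 4
    base = width * height * bytes_per_pixel   # 64,000,000 bytes
    with_mips = base * 4 / 3                  # a full mip chain adds about 1/3
    print(with_mips / 2**20)                  # ~81 MiB, i.e. roughly "100 megs"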
DirectX 11 supports textures up to 16k x 16k, so for maximum size you should be sorted.
If you just want to show your image flat, you should not even need any 3D transformations; 2D scaling and translation will do just fine.
