I am currently doing a project with a FLIR Blackfly camera (BFS-U3-120S4C to be exact), and have not been getting a low enough reprojection error in my calibration experiments.
I know that manufacturers often make the intrinsic parameters and distortion coefficients available. FLIR provides the Spinnaker Python API to operate their cameras and advertises them as machine vision cameras, so I strongly suspected this would be the case for them as well.
However, my Google searches have not turned anything up. Does anyone know whether FLIR has published the values for these parameters somewhere?
Thanks for the help!
I am new to hand-eye calibration and I wanted to know whether, for a research application, you would use a pre-developed tool like easy_handeye or MoveIt calibration for doing the hand-eye calibration?
I have mounted an ArUco board on a table and the camera on the robot arm, then used the MoveIt easy_handeye calibration to record different poses of the robot and an AX=XB solver to perform the hand-eye calibration.
When I do this there is a large error (an offset of a few centimetres) between the commanded position sent to the robot and the actual position reached, even though the reprojection error is quite small (<0.01). In my final application I would like to command poses derived from image-processing data, so I suspect the error compounds and becomes larger... Any ideas how I can reduce this?
Any papers would be helpful! I also want to know whether it would normally be better to write the code from scratch and then use ML/statistics/other optimisation methods to reduce the error?
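For concreteness, here is a minimal sketch of the kind of AX=XB solve involved, using OpenCV's cv2.calibrateHandEye on synthetic poses (the names and the synthetic setup are purely illustrative; a real run would fill the lists from the robot controller and the ArUco detections instead):

```python
# Minimal eye-in-hand sketch with OpenCV's AX=XB solver, checked on synthetic poses.
import cv2
import numpy as np

def to_T(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix and translation."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def random_T(rng):
    """Random small rotation (via Rodrigues) and translation, for synthetic poses only."""
    R, _ = cv2.Rodrigues(rng.uniform(-0.5, 0.5, 3))
    return to_T(R, rng.uniform(-0.3, 0.3, 3))

rng = np.random.default_rng(0)
X_true = random_T(rng)          # unknown camera-to-gripper transform we want to recover
T_target2base = random_T(rng)   # ArUco board fixed relative to the robot base

R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
for _ in range(10):             # ten recorded robot poses
    T_g2b = random_T(rng)       # gripper pose in the base frame (from the robot controller)
    # Consistency: T_target2cam = X^-1 * T_gripper2base^-1 * T_target2base
    T_t2c = np.linalg.inv(X_true) @ np.linalg.inv(T_g2b) @ T_target2base
    R_g2b.append(T_g2b[:3, :3])
    t_g2b.append(T_g2b[:3, 3].reshape(3, 1))
    R_t2c.append(T_t2c[:3, :3])
    t_t2c.append(T_t2c[:3, 3].reshape(3, 1))

R_x, t_x = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c,
                                method=cv2.CALIB_HAND_EYE_TSAI)
print("recovered cam->gripper translation:", t_x.ravel())
print("ground truth:                      ", X_true[:3, 3])
```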
I am doing my final-year project on lane tracking using a camera. The most challenging task now is how to measure the distance between the camera (actually, the car that carries it) and the lane.
While the lane itself is easily recognized (Hough line transform), I have found no way to measure the distance to it.
There is a known way to measure the distance to an object in front of the camera based on the object's pixel width, but it does not work here because the nearest point of the line lies in the camera's blind spot.
What you want is to directly infer the depth map with a monocular camera.
You can refer to my answer here:
https://stackoverflow.com/a/64687551/11530294
Usually, we need photometric measurements from different positions in the world to form a geometric understanding of it (a.k.a. a depth map). From a single image it is not possible to measure the geometry directly, but it is possible to infer depth from prior knowledge.
One way to make a single image work is to use a deep-learning-based method to directly infer depth. These approaches are usually implemented in Python, so if you are only familiar with Python, this is the approach you should go for. If the image is small enough, I think real-time performance is possible. There are many works of this kind using Caffe, TensorFlow, PyTorch, etc.; you can search GitHub for more options. The one I reference here is what I used recently:
Reference: Godard, Clément, et al. "Digging into Self-Supervised Monocular Depth Estimation." Proceedings of the IEEE International Conference on Computer Vision, 2019.
Source code: https://github.com/nianticlabs/monodepth2
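As a quick, hedged illustration of this route (not the monodepth2 code itself, which ships its own test_simple.py script), the MiDaS model on torch.hub can be loaded in a couple of lines and gives a relative depth map for a single frame:

```python
# Single-image depth inference sketch using the MiDaS small model from torch.hub.
# The output is relative (unscaled) inverse depth, not metric distance.
import cv2
import torch

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

img = cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2RGB)  # any camera frame
with torch.no_grad():
    pred = midas(transform(img))
    depth = torch.nn.functional.interpolate(
        pred.unsqueeze(1), size=img.shape[:2], mode="bicubic", align_corners=False
    ).squeeze().cpu().numpy()
print(depth.shape, depth.min(), depth.max())
```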
The other way is to use a large-FOV video with single-camera SLAM. This approach has various constraints, such as needing good features, a large FOV, slow motion, etc. You can find many works of this kind, such as DTAM, LSD-SLAM, DSO, etc. There are also a couple of packages from HKUST and ETH that do the mapping given the pose (e.g. if you have GPS/compass); some of the well-known names are REMODE+SVO and open_quadtree_mapping.
One typical example of single-camera SLAM would be LSD-SLAM, which runs in real time.
It is implemented in C++ on ROS, and I remember it publishes the depth image. You can write a Python node to subscribe to the depth directly, or to the globally optimized point cloud, and project it into a depth map from any viewing angle.
Reference: Engel, Jakob, Thomas Schöps, and Daniel Cremers. "LSD-SLAM: Large-Scale Direct Monocular SLAM." European Conference on Computer Vision, Springer, Cham, 2014.
Source code: https://github.com/tum-vision/lsd_slam
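To sketch the Python-node idea mentioned just above (the topic name and encoding are assumptions; check with `rostopic list` what your SLAM package actually publishes):

```python
# Hypothetical ROS node: subscribe to a depth image published by a SLAM system
# and convert it to a NumPy array for further processing.
import numpy as np
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def on_depth(msg):
    # "passthrough" keeps the original encoding (e.g. 32FC1 depth or inverse depth)
    depth = bridge.imgmsg_to_cv2(msg, desired_encoding="passthrough")
    rospy.loginfo("depth frame %dx%d, median %.3f",
                  depth.shape[1], depth.shape[0], float(np.nanmedian(depth)))

rospy.init_node("depth_listener")
rospy.Subscriber("/lsd_slam/depth", Image, on_depth, queue_size=1)  # topic name is an assumption
rospy.spin()
```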
I'm a Software Engineering student in the last year of a 4-year bachelor's degree program, and I'm required to work on a graduation project of my own choice.
We are trying to find a way to notify the user of anything that gets in his/her way while walking. This will be implemented as an Android application, so we have the ability to use the camera. We thought of image processing and computer vision, but neither I nor any of my group members has any image-processing background. We searched a little and found out about OpenCV.
So my question is: do I need any special background to work with OpenCV? And is computer vision a good choice for the objective of my project? If not, what alternatives would you advise me to use?
I appreciate your help.. thanks in advance!
At first glance I would use two standard cameras to compute a depth image - stereo vision (similar to the MS Kinect depth sensor).
From that it would be easy to set a threshold at some distance.
Those algorithms are very CPU-hungry, so I do not think it will work on Android (although I have zero experience with it).
If you must use Android, I would look for a dedicated depth sensor (to avoid extracting depth data from two images).
For prototyping I would use MATLAB (or Octave), then switch to OpenCV (pointers, memory allocations, blah...).
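To make the stereo idea concrete, here is a minimal OpenCV sketch (the focal length, baseline, filenames, and the 1 m threshold are placeholder values to replace with your own setup):

```python
# Depth from a rectified stereo pair with OpenCV block matching, plus a distance threshold.
import cv2
import numpy as np

f, B = 700.0, 0.06      # focal length [px] and baseline [m] - placeholders from calibration
near_limit = 1.0        # warn if anything comes closer than 1 m

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # StereoBM outputs fixed-point x16

depth = np.full(disparity.shape, np.inf, dtype=np.float32)
valid = disparity > 0
depth[valid] = f * B / disparity[valid]   # Z = f * B / d

if (depth < near_limit).any():
    print("obstacle closer than", near_limit, "m")
```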
I am building a website that does cool things using computer vision techniques, with videos live recorded and uploaded by users using their webcam. For this, I need camera intrinsic and distortion parameters. I am trying to figure out what would be the best way to compute these given the user uploaded videos. We can make no assumptions about what videos user might upload - but a reasonable assumption is that a human might be present in the video. I am still in the initial stages of this, but I am interested in knowing how others have solved this problem.
To be specific, below are the questions that I would appreciate someone experienced in the group might comment upon:
What algorithms, libraries and techniques are available to extract intrinsic and distortion parameters of any generic webcam available in the market? [I say "extract" and not "calibrate" to include cases where intrinsic parameters are just a method call away with no calibration necessary].
In general, how much variance have you observed in the intrinsic and distortion parameters of the webcams available on the market? Did you approximate them with a single set of intrinsic and distortion parameters, or what approach did you follow?
What camera self-calibration methods, if any, could be employed in these scenarios? Are there any opensource or commercial libraries available which might be of some help?
If we aim to calibrate the webcams using the videos users record and upload, which assumptions on the parameters [like fx == fy, or no distortion parameters] make sense and sound reasonable to you?
Would a reasonable approximation of intrinsic and distortion params for all the cameras make sense? What would be a reasonable approach to validate how good particular intrinsic and distortion parameters are for a specific webcam?
Are there any other issues that need to be considered?
Sometimes I am the one who brings the bad news :) and this is one of those times.
For almost all of your points, the clear answer is no, none, not really, and so on. Only for the last point, about the other issues, is the answer not a no but a long list :).
Actually, camera calibration without a chessboard and some specific constraints is almost impossible.
The closest implementation to a no-assumptions calibration is found in the stitching module in OpenCV. However, it is not perfect, and it does not work on arbitrary videos. Give it a try.
There is the famous Camera Calibration Toolbox, a good MATLAB implementation for extracting intrinsic and extrinsic parameters.
There is variance not only amongst webcams, but also across:
Different modules
Different zoom levels (zoom affects the optics)
I think this is a really hard problem if you restrict yourself to making no assumptions about the video. Both the calibration and the evaluation are hard if you don't use something known, such as the checkerboard in the Camera Calibration Toolbox.
Many algorithms, including the one currently used in OpenCV, require that known points can be detected (e.g. the corners of a chessboard). You would have to require your users to take pictures of these known patterns, which ruins the concept of random videos. I don't have a solution to this, but you might want to consider requiring users to record videos of structured scenes (no specific patterns or objects) and use the algorithm described in:
"Camera calibration with lens distortion from low-rank textures"
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5995548&tag=1
Haven't tried it myself though.
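For comparison, this is roughly what the standard known-pattern route looks like in OpenCV - a minimal chessboard-calibration sketch (the 9x6 board size and the file names are assumptions):

```python
# Minimal OpenCV chessboard calibration: detect corners in ~20 images, then solve for
# the intrinsic matrix and distortion coefficients.
import glob
import cv2
import numpy as np

pattern = (9, 6)  # inner corners per row and column (assumption)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)  # board coords in square units

obj_points, img_points = [], []
for path in glob.glob("calib_*.png"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
        obj_points.append(objp)
        img_points.append(corners)

rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
print("intrinsics:\n", K)
print("distortion:", dist.ravel())
```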
I'm trying to analyse audio and visual features in tandem. My audio speech features are mel-frequency cepstral coefficients sampled at 100 fps using the Hidden Markov Model Toolkit. My visual features come from a lip-tracking programme I built and are sampled at 29.97 fps.
I know that I need to interpolate my visual features so that their sample rate is also 100 fps, but I can't find a good explanation or tutorial on how to do this online. Most of the help I have found comes from the speech recognition community, which assumes knowledge of interpolation on the part of the reader, i.e. most cover the step with a simple "interpolate the visual features so that the sample rate equals 100 fps".
Can anyone point me in the right direction?
Thanks a million
Since face movement is not low-pass filtered prior to video capture, most of the classic DSP interpolation methods may not apply. You might as well try linear interpolation of your feature vectors to get from one set of time points to another. Just pick the two closest video frames and interpolate between them to get the extra data points in between. You could also try spline interpolation if your facial-tracking algorithm measures accelerations in face motion.
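A minimal sketch of that linear interpolation with NumPy (the array shapes and names are illustrative):

```python
# Resample visual feature vectors from 29.97 fps to 100 fps by linear interpolation,
# so they line up with the 100 fps audio features.
import numpy as np

fps_video, fps_audio = 29.97, 100.0
visual = np.random.rand(300, 12)                        # placeholder lip-tracking features (frames x dims)

t_video = np.arange(visual.shape[0]) / fps_video        # timestamps of the video frames
t_audio = np.arange(0.0, t_video[-1], 1.0 / fps_audio)  # target timestamps at 100 fps

# Interpolate each feature dimension between its two nearest video frames.
visual_100fps = np.stack(
    [np.interp(t_audio, t_video, visual[:, k]) for k in range(visual.shape[1])],
    axis=1,
)
print(visual_100fps.shape)  # (len(t_audio), n_features)
```

If you want the spline variant instead, scipy.interpolate.CubicSpline drops in the same way, taking t_video and the feature columns and being evaluated at t_audio.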