Calculate Intrinsics for a Thermal Camera? - opencv

I"m using a Thermal camera for a project and I'm a little stumped so as to how to think about calculating intrinsics for it. The usual camera's would determine different points on a chessboard or something similar, but the thermal camera won't really be able to differentiate between those points. Does anyone have any insight on what the intrinsics for thermal cameras would really look like?
Cheers!
EDIT - In addition to the great suggestions I already have, I'm also considering putting aluminum foil on the white squares to create a thermal difference. Let me know what you think of this idea as well.

This might or might not work, depending on the accuracy you need:
Use a chessboard pattern and shine a really strong light at it. The black squares will likely get hotter than the white squares, so you might be able to see the pattern in the thermal image.
Put small light bulbs on the edges of a chessboard pattern, turn them on, wait until they become hot, and then use your thermal camera on the board.

This problem is addressed in A Mask-Based Approach for the Geometric Calibration of Thermal Infrared Cameras, which basically advocates placing an opaque mask with checkerboard squares cut out of it in front of a radiating source such as a computer monitor.
Related code can be found in mm-calibrator.

If you have a camera that is also sensitive to the visible end of the spectrum (i.e. most IR cameras, which is what most thermography is based on after all), then simply get an IR cut-off filter and fit it to the front of the camera's lens (you can get some good C-mount ones). Calibrate as normal with the fixed optics, then remove the filter. The intrinsics should be the same, since the optical properties are the same (for most purposes).
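For reference, here is a minimal sketch of that "calibrate as normal" step using the OpenCV C++ API; the board dimensions, square size, and file names are placeholder assumptions:

    // Minimal chessboard intrinsic calibration sketch (OpenCV C++ API).
    // Board dimensions, square size and file names are placeholders.
    #include <opencv2/opencv.hpp>
    #include <iostream>
    #include <vector>

    int main() {
        const cv::Size boardSize(9, 6);    // inner corners per row / column
        const float squareSize = 0.025f;   // square edge length in metres

        std::vector<std::vector<cv::Point3f>> objectPoints;
        std::vector<std::vector<cv::Point2f>> imagePoints;

        // One set of 3D board coordinates (z = 0 plane), reused for every view.
        std::vector<cv::Point3f> board;
        for (int y = 0; y < boardSize.height; ++y)
            for (int x = 0; x < boardSize.width; ++x)
                board.emplace_back(x * squareSize, y * squareSize, 0.f);

        cv::Size imageSize;
        for (int i = 0; i < 20; ++i) {     // ~20 views of the board
            cv::Mat img = cv::imread(cv::format("view%02d.png", i), cv::IMREAD_GRAYSCALE);
            if (img.empty()) continue;
            imageSize = img.size();

            std::vector<cv::Point2f> corners;
            if (!cv::findChessboardCorners(img, boardSize, corners)) continue;
            cv::cornerSubPix(img, corners, cv::Size(11, 11), cv::Size(-1, -1),
                             cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.01));
            imagePoints.push_back(corners);
            objectPoints.push_back(board);
        }

        cv::Mat K, dist;
        std::vector<cv::Mat> rvecs, tvecs;
        double rms = cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                                         K, dist, rvecs, tvecs);
        std::cout << "RMS reprojection error: " << rms << "\nK = " << K << std::endl;
    }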

Quoting from Section 5, "Image fusion", of GADE, Rikke; MOESLUND, Thomas B. Thermal cameras and applications: a survey. Machine Vision and Applications, 2014, 25.1: 245-262 (freely downloadable as of June 2014):
The standard chessboard method for geometric calibration, correction
of lens distortion, and alignment of the cameras relies on colour
difference, and cannot be used for thermal cameras without some
changes. Cheng et al. [30] and Prakash et al. [146] reported that when
heating the board with a flood lamp, the difference in emissivity of
the colours will result in an intensity difference in the thermal
image. However, a more crisp chessboard pattern can be obtained by
constructing a chessboard of two different materials, with large
difference in thermal emissivity and/or temperature [180]. This
approach is also applied in [68] using a copper plate with milled
checker patterns in front of a different base material, and in [195]
with a metal wire in front of a plastic board. When these special
chessboards are heated by a heat gun, hairdryer or similar, a clear
chessboard pattern will be visible in the thermal image, due to the
different emissivity of the materials. At the same time, it is also
visible in the visual image, due to colour difference. Figure 12 shows
thermal and RGB pictures from a calibration test. The chessboard
consists of two cardboard sheets, where the white base sheet has been
heated right before assembling the board.
[30] CHENG, Shinko Y.; PARK, Sangho; TRIVEDI, Mohan M. Multiperspective thermal ir and video arrays for 3d body tracking and driver activity analysis. In: Computer Vision and Pattern Recognition-Workshops, 2005. CVPR Workshops. IEEE Computer Society Conference on. IEEE, 2005. p. 3-3.
[146] PRAKASH, Surya, et al. Robust thermal camera calibration and 3D mapping of object surface temperatures. In: Defense and Security Symposium. International Society for Optics and Photonics, 2006. p. 62050J-62050J-8.
[180] VIDAS, Stephen, et al. A mask-based approach for the geometric calibration of thermal-infrared cameras. Instrumentation and Measurement, IEEE Transactions on, 2012, 61.6: 1625-1635.
[68] HILSENSTEIN, V. Surface reconstruction of water waves using thermographic stereo imaging. In: Image and Vision Computing New Zealand. 2005. p. 102-107.
[195] NG, Harry, et al. Acquisition of 3D surface temperature distribution of a car body. In: Information Acquisition, 2005 IEEE International Conference on. IEEE, 2005. p. 5 pp.

You may want to consider running a resistance (heating) wire along the lines of the pattern (you will also need a power source).

You can drill holes in a metal plate and then heat the plate; hopefully the holes will be colder than the plate and will appear as circles in the thermal image.
Then you can use OpenCV (>2.0) to find the circle centers: http://docs.opencv.org/doc/tutorials/calib3d/camera_calibration/camera_calibration.html#cameracalibrationopencv
See also the function findCirclesGrid.
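A minimal sketch of that hole-grid detection, assuming OpenCV 3+, a 4x11 asymmetric circle grid of holes, and a heated plate that appears bright so the colder holes read as dark blobs (the default polarity for SimpleBlobDetector):

    // Detect a grid of drilled holes in a thermal image with findCirclesGrid.
    #include <opencv2/opencv.hpp>
    #include <vector>

    int main() {
        cv::Mat thermal = cv::imread("thermal_view.png", cv::IMREAD_GRAYSCALE);

        // Loosen the blob detector a little; thermal images are low-contrast.
        cv::SimpleBlobDetector::Params params;
        params.minArea = 20;
        params.maxArea = 5000;
        cv::Ptr<cv::FeatureDetector> blobs = cv::SimpleBlobDetector::create(params);

        std::vector<cv::Point2f> centers;
        bool found = cv::findCirclesGrid(thermal, cv::Size(4, 11), centers,
                                         cv::CALIB_CB_ASYMMETRIC_GRID, blobs);

        // Visualise the detection on a colour copy.
        cv::Mat vis;
        cv::cvtColor(thermal, vis, cv::COLOR_GRAY2BGR);
        cv::drawChessboardCorners(vis, cv::Size(4, 11), centers, found);
        cv::imwrite("detected.png", vis);
    }

The detected centers then go into calibrateCamera exactly like chessboard corners.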

Related

Camera calibration: difference between AprilTag and chessboard pattern calibration?

Why are AprilTags better than a chessboard pattern for camera calibration? And why do they give a different camera matrix?
I have already calibrated via a chessboard pattern and done it another time with AprilTags, and the two gave me different camera matrices. The difference wasn't big, but still, why does it exist at all?
The intuition is as follows:
Camera calibration uses all the corners identified in the raw images. In order to maximize the calibrated region, we want to capture corners in the peripheral areas of the image (where the distortion is most extreme).
In the image periphery the MTF (modulation transfer function) is low, which makes corner detection less reliable; ArUco markers (or AprilTags) can be used to improve corner detection in those regions.
As a rule of thumb: with a fisheye lens, it is helpful to use AprilTags.
If you use a regular checkerboard calibration with a fisheye lens, the sub-pixel corner detection will not be as accurate in the periphery, and the reprojection error will be higher.
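As a concrete illustration, here is a minimal marker-detection sketch using the opencv_contrib aruco module (pre-4.7 API) and its AprilTag dictionary; the dictionary choice and file names are assumptions:

    // Detect AprilTag markers; each tag carries its own ID, so partially visible
    // or blurry views (e.g. the periphery of a fisheye image) still contribute
    // correctly identified corners.
    #include <opencv2/opencv.hpp>
    #include <opencv2/aruco.hpp>   // opencv_contrib module
    #include <vector>

    int main() {
        cv::Mat img = cv::imread("calib_view.png");

        cv::Ptr<cv::aruco::Dictionary> dict =
            cv::aruco::getPredefinedDictionary(cv::aruco::DICT_APRILTAG_36h11);

        std::vector<int> ids;
        std::vector<std::vector<cv::Point2f>> corners;
        cv::aruco::detectMarkers(img, dict, corners, ids);

        cv::aruco::drawDetectedMarkers(img, corners, ids);
        cv::imwrite("detected.png", img);
    }

The per-tag IDs are what make the peripheral detections reliable: a corner is only reported if its tag was actually decoded.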

How to calibrate a set of 4 cameras arranged around a circle?

Four cameras are arranged in a ring. How do I calibrate the relative poses of the four cameras, i.e. the poses of the other three cameras relative to camera 0? The difficulties are:
When using a calibration board, the four cameras cannot all see it at the same time; only two cameras can see it at once. So I calibrate cam1 relative to cam0, then cam2 relative to cam1, and cam2's pose relative to cam0 can only be computed indirectly, which introduces error.
When calibrating only two cameras, such as cam0 and cam1, the board is seen at a steep tilt by both cameras and the range of board angles is small, which also causes errors.
Is there a better way to calibrate? Thank you.
There are many ways, and many papers address this.
The simplest way is to calibrate two cameras at a time; each pair should have the largest possible common FOV (see the pose-chaining sketch at the end of this answer). But there are other methods as well:
You can use a structure-from-motion-based method: move the camera rig around and jointly optimize the camera poses. It was first published at CVPR sometime between 2010 and 2016 (I forget the exact year), but it is about camera calibration with minimal or zero overlap.
You can add an IMU and use kalibr to calibrate the cameras, anchoring all images to the IMU: https://github.com/ethz-asl/kalibr/wiki/camera-imu-calibration.
An alternative that I frequently use is the robotics hand-eye calibration approach used in VINS-Mono: https://github.com/HKUST-Aerial-Robotics/VINS-Mono. The VINS-Mono one requires no complicated pattern, just moving around.
For my paper, we used the sea-level vanishing line and vanishing point to calibrate cameras that cannot see the same chessboard pattern in the same view.
Han Wang, Wei Mou, Xiaozheng Mou, Shenghai Yuan, Soner Ulun, Shuai Yang, Bok-Suk Shin, “An Automatic Self-Calibration Approach for Wide Baseline Stereo Cameras Using Sea Surface Images”, Unmanned Systems, Vol. 3, No. 4. pp. 277-290. 2015
There are others as well, such as using a Vicon or other motion-tracking system. Just find one that you think is suitable for you and try it out.
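To make the pairwise option concrete: OpenCV's stereoCalibrate returns (R, T) that map points from the first camera's frame into the second's, so chained poses compose as in the sketch below. The error that accumulates through this chaining is exactly why the jointly optimized methods above can do better.

    #include <opencv2/opencv.hpp>

    // stereoCalibrate convention: x_b = R_ab * x_a + t_ab (a point expressed in
    // cam a's frame, mapped into cam b's frame). Chaining cam0->cam1 with
    // cam1->cam2 yields cam0->cam2.
    void chainPose(const cv::Mat& R01, const cv::Mat& t01,
                   const cv::Mat& R12, const cv::Mat& t12,
                   cv::Mat& R02, cv::Mat& t02) {
        R02 = R12 * R01;         // rotations compose right-to-left
        t02 = R12 * t01 + t12;   // rotate the first translation into cam2's frame
    }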

Detect elliptical patterns with OpenCV [duplicate]

I am trying to detect elliptical kernels in OpenCV using C++. I have tried obtaining Canny edges and then using the fitEllipse() function on the edges. Though this finds ellipses, the accuracy is horrid when the image is noisy or there are many edges.
I have realised that the way to go is detecting ellipses rather than fitting them. Maybe something like Hough circles, but for ellipses? I also do not know what size the ellipses might be, as it varies between images.
Can anybody help me get started on that? All related answers are very vague, and I just want pointers on where to start.
As you have already realised, you don't need ellipse fitting, but ellipse detection.
You can find two papers with C++ code available in my other answer. I'll repeat them here for completeness:
L. Libuda, I. Grothues, K.-F. Kraiss, Ellipse detection in digital image
data using geometric features, in: J. Braz, A. Ranchordas, H. Arajo,
J. Jorge (Eds.), Advances in Computer Graphics and Computer Vision,
volume 4 of Communications in Computer and Information Science,
Springer Berlin Heidelberg, 2007, pp. 229-239. link, code
M. Fornaciari, A. Prati, R. Cucchiara,
"A fast and effective ellipse detector for embedded vision applications", Pattern Recognition, 2014 link, code
It's also fairly easy to port to OpenCV this MATLAB script, which implements the following two papers:
"A New Efficient Ellipse Detection Method" (Yonghong Xie, Qiang Ji / 2002)
"Randomized Hough Transform for Ellipse Detection with Result Clustering" (C. A. Basca, M. Talos, R. Brad / 2005)
Another very interesting algorithm is:
Dilip K. Prasad, Maylor K.H. Leung and Siu-Yeung Cho, “Edge curvature and convexity based ellipse detection method,” Pattern Recognition, 2012.
Matlab code can be found here
Directly applying the Hough transform is not feasible for an ellipse, since you would be working in a 5-dimensional parameter space. That would be really slow and not accurate, so a lot of algorithms have been proposed. Among the ones mentioned here:
Xie and Ji's approach is quite famous and very easy (but still slow).
Basca et al.'s approach is faster, but may not be accurate.
Prasad et al.'s approach is fast and very accurate.
Libuda et al.'s approach is very fast and accurate.
Fornaciari et al.'s approach is the fastest, but may be inaccurate in particular cases.
For the up-to-date bibliography for ellipse detection, you can refer to this page.
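If you just want a baseline to compare those detectors against, a contour-based candidate generator with a crude fit-quality check is easy to write. A sketch; the file name and thresholds are placeholder assumptions:

    // Contour-based ellipse candidates, filtered by how well the contour points
    // actually lie on the fitted ellipse.
    #include <opencv2/opencv.hpp>
    #include <vector>

    int main() {
        cv::Mat img = cv::imread("kernels.png", cv::IMREAD_GRAYSCALE);
        cv::Mat edges;
        cv::Canny(img, edges, 50, 150);

        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(edges, contours, cv::RETR_LIST, cv::CHAIN_APPROX_NONE);

        std::vector<cv::RotatedRect> ellipses;
        for (const auto& c : contours) {
            if (c.size() < 20) continue;          // fitEllipse needs >= 5 points
            cv::RotatedRect e = cv::fitEllipse(c);

            // Render the fitted ellipse as a thin band and count how many contour
            // points land on it; keep the fit only if most of them agree.
            cv::Mat mask = cv::Mat::zeros(img.size(), CV_8U);
            cv::ellipse(mask, e, cv::Scalar(255), 3);
            int onEllipse = 0;
            for (const auto& p : c)
                if (mask.at<uchar>(p)) ++onEllipse;
            if (onEllipse > 0.9 * static_cast<double>(c.size()))
                ellipses.push_back(e);
        }
    }

This will still struggle with overlapping kernels and broken edges, which is precisely what the detectors cited above are designed to handle.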

Camera Calibration - Rational Distortion Model

I was looking into the OpenCV 2.2 function calibrateCamera(...) and I noticed a flag, CV_CALIB_RATIONAL_MODEL, that enables a new radial distortion model that is supposed to work better with wide-angle lenses.
Where exactly does this model come from? I read some papers that seemed to be somehow related, but the model they employ seems to be quite different from the one implemented by OpenCV:
A Rational Function Lens Distortion Model for General Cameras
Simultaneous linear estimation of multiple view geometry and lens distortion
Could anyone give me more information about the model OpenCV uses, and why?
http://opencv-users.1802565.n2.nabble.com/OpenCV-2-2-New-Rational-Distortion-Model-td5807334.html
Claus, D. and Fitzgibbon, A.W.
A Rational Function Lens Distortion Model for General Cameras
Computer Vision and Pattern Recognition (June 2005)
http://www.robots.ox.ac.uk/~dclaus/publications/claus05rf_model.pdf
Simultaneous Linear Estimation of Multiple View Geometry and Lens Distortion
A. W. Fitzgibbon
IEEE Conference on Computer Vision and Pattern Recognition, 2001
http://marcade.robots.ox.ac.uk:8080/~vgg/publications/2001/Fitzgibbon01b/fitzgibbon01b.pdf
Well, basically, if you don't need great precision (by great I mean ~0.003 pixel re-projection error), you can omit that model. It is mainly useful for fisheye cameras, where the distortion is huge. A colleague at my university is doing his PhD on camera calibration, and he says that for normal cameras the extra coefficients do not increase the precision much (they can even decrease it, a "curse of dimensionality" effect, when the images are not good enough or too few images are used).
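For completeness, enabling the model is just a flag. A sketch, assuming objectPoints/imagePoints were gathered by any pattern detector as usual:

    #include <opencv2/opencv.hpp>
    #include <vector>

    // With CALIB_RATIONAL_MODEL, distCoeffs grows from (k1, k2, p1, p2, k3) to
    // (k1, k2, p1, p2, k3, k4, k5, k6): the radial factor becomes
    // (1 + k1*r^2 + k2*r^4 + k3*r^6) / (1 + k4*r^2 + k5*r^4 + k6*r^6).
    double calibrateRational(const std::vector<std::vector<cv::Point3f>>& objectPoints,
                             const std::vector<std::vector<cv::Point2f>>& imagePoints,
                             cv::Size imageSize, cv::Mat& K, cv::Mat& dist) {
        std::vector<cv::Mat> rvecs, tvecs;
        return cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                                   K, dist, rvecs, tvecs,
                                   cv::CALIB_RATIONAL_MODEL);
    }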

Finding path obstacles in a 2D image

What approach would you recommend for finding obstacles in a 2D image?
Here are some key points I have come up with so far:
I doubt I can use object recognition based on searching a "database of obstacles", since I don't know what the obstruction might look like.
I assume color recognition might be problematic if the path does not differ much from the object itself.
Possibly, adding one more camera and computing a 3D image (like a Kinect does) would work, but that would not run as smoothly as I require.
To illustrate the problem: the robot can ride on either the left or the right side of the pavement. In the following picture, the left side is the correct choice:
If you know what the path looks like, this is largely a classification problem. Acquire a bunch of images of the path at different distances, illuminations, etc., and manually label the ground in each image. Use this labeled data to train a classifier that classifies each pixel as either "road" or "not road". Depending upon the texture of the road, this could be as simple as classifying each pixel's RGB (or HSV) values or using OpenCV's built-in histogram back-projection (i.e. cv::CalcBackProjectPatch()).
I suggest beginning with manual thresholds, moving to histogram-based matching, and only using a full-fledged machine learning classifier (such as a naive Bayes classifier or an SVM) if the simpler techniques fail. Once the entire image is classified, all pixels identified as "not road" are obstacles. By classifying the road instead of the obstacles, we completely avoid building a "database of objects".
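As an example of the histogram route, here is a minimal back-projection sketch using the modern C++ API (cv::calcBackProject rather than the old patch-based function); the file names and threshold are placeholder assumptions:

    // Classify pixels as road / not-road by back-projecting a hue-saturation
    // histogram built from a sample patch of known road.
    #include <opencv2/opencv.hpp>

    int main() {
        cv::Mat frame  = cv::imread("frame.png");
        cv::Mat sample = cv::imread("road_sample.png");

        cv::Mat frameHsv, sampleHsv;
        cv::cvtColor(frame, frameHsv, cv::COLOR_BGR2HSV);
        cv::cvtColor(sample, sampleHsv, cv::COLOR_BGR2HSV);

        // Hue-saturation histogram of the road sample.
        int histSize[] = {30, 32};
        float hRange[] = {0, 180}, sRange[] = {0, 256};
        const float* ranges[] = {hRange, sRange};
        int channels[] = {0, 1};
        cv::Mat hist;
        cv::calcHist(&sampleHsv, 1, channels, cv::Mat(), hist, 2, histSize, ranges);
        cv::normalize(hist, hist, 0, 255, cv::NORM_MINMAX);

        // Back-project: each pixel gets the likelihood of matching the road model.
        cv::Mat backProj;
        cv::calcBackProject(&frameHsv, 1, channels, hist, backProj, ranges);

        // Low-likelihood pixels are "not road", i.e. potential obstacles.
        cv::Mat obstacles;
        cv::threshold(backProj, obstacles, 32, 255, cv::THRESH_BINARY_INV);
        cv::imwrite("obstacles.png", obstacles);
    }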
Somewhat out of the scope of the question, the easiest solution is to add additional sensors ("throw more hardware at the problem!") and directly measure the three-dimensional position of obstacles. In order of preference:
Microsoft Kinect: Cheap, easy, and effective. Due to ambient IR light, it only works indoors.
Scanning Laser Rangefinder: Extremely accurate, easy to set up, and works outside. Also very expensive (~$1200-10,000 depending upon maximum range and sample rate).
Stereo Camera: Not as good as a Kinect, but it works outside. If you cannot afford a pre-made stereo camera (~$1800), you can make a decent custom stereo camera using USB webcams.
Note that professional stereo vision cameras can be very fast by using custom hardware (Stereo On-Chip, STOC). Software-based stereo is also reasonably fast (10-20 Hz) on a modern computer.
