I have recently come across a paper entitled, "Portable Multispectral Imaging System Based on Raspberry Pi".
In this work, the authors present a low-cost spectral imaging system built around a Raspberry Pi. Illumination is multiplexed through an LED panel with 8 selected wavelengths covering the spectrum from UV to NIR. The camera is the Raspberry Pi NoIR, an RGB camera add-on for the Raspberry Pi that has no IR cut filter installed.
When the sample is placed in its position inside the prototype, the system sequentially takes a photograph at each available wavelength.
Here's an example of an image captured at different wavelengths:
But I do not understand how they were able to compute the reflectance spectra shown in the plot.
Or did I understand correctly that the reflectance spectrum is simply the computed intensity, using the equation below?
Why would that be? Can anyone please direct me to the proper readings or source materials in order to understand this concept? Please enlighten me. I am new to this. Thank you!
I am attempting to correct some imagery.
The image is a composite of different aerial images that were collected under less-than-ideal lighting conditions, so when they are mosaicked there is a noticeable difference between them, i.e. a dark stripe. To resolve this I have simulated how the imagery should look, but this is just a simulation and all the interesting information is still in the original imagery.
(not the best example - but trust me it needs correcting!)
My question is: how can I correct the original imagery using the simulated imagery? I was thinking that a data assimilation technique may be the way to go, e.g. a 2D ensemble Kalman filter, but I have little experience with this.
I would ideally be able to do this in R or Python.
---- EDIT ----
Here is a larger scene that highlights the issue more clearly. I haven't generated the simulation for this area yet.
I see this as a problem of shading correction. The image has been "corrupted" by an uneven light field and should be "flattened".
But you don't know the illumination field, so you need to reconstruct it somehow. You can essentially achieve that by low-pass filtering the image (Gaussian, median, bilateral, ...).
Then apply a multiplicative correction. The images below illustrate the process.
Source image
Smoothed illumination field
Corrected
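Here is a minimal sketch of this multiplicative (flat-field) correction in Python with OpenCV; the file name and smoothing scale are placeholders you would tune for your data:

```python
import cv2
import numpy as np

# Load the mosaic as a single-band float image.
img = cv2.imread("mosaic.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Estimate the illumination field with a heavy low-pass filter
# (a large Gaussian here; median or bilateral filtering also works).
illum = cv2.GaussianBlur(img, (0, 0), sigmaX=51)

# Multiplicative correction: divide out the field, then rescale to the
# mean brightness of the field so the overall exposure stays comparable.
corrected = img / (illum + 1e-6) * illum.mean()

cv2.imwrite("corrected.png", np.clip(corrected, 0, 255).astype(np.uint8))
```

The sigma controls how smooth the estimated field is; it should be much larger than any real scene feature so that only the slow illumination gradient is removed.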
The paper
BROWN, Matthew; LOWE, David G. Automatic panoramic image stitching using invariant features. International journal of computer vision, 2007, 74.1: 59-73.
has section 6 about "Gain Compensation" and section 7 about "Multi-Band Blending"; maybe they can be applied to your problem?
See Figure 5 of the above paper for an illustration.
OpenCV 3.1 has some support for Exposure Compensation and Image Blenders.
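If you just want to try the combined pipeline, OpenCV's high-level Stitcher runs gain compensation and multi-band blending internally. A minimal sketch in Python with OpenCV 3.4/4.x (older 3.x versions use cv2.createStitcher; the file names are hypothetical):

```python
import cv2

# Overlapping aerial tiles (placeholder file names).
imgs = [cv2.imread(f) for f in ("tile_a.png", "tile_b.png")]

# SCANS mode assumes an affine model, which suits flat aerial mosaics;
# exposure compensation and multi-band blending happen internally.
stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
status, mosaic = stitcher.stitch(imgs)

if status == cv2.Stitcher_OK:
    cv2.imwrite("blended.png", mosaic)
else:
    print("Stitching failed, status code:", status)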
I want to get a 3D model of some real-world object.
I have two webcams, and using OpenCV and SBM for stereo correspondence I get a point cloud of the scene; by filtering on z I can get a point cloud of just the object.
I know that ICP is good for this purpose, but it needs the point clouds to be initially well aligned, so it is combined with SAC to achieve better results.
But my SAC fitness score is too big, something like 70 or 40, and ICP doesn't give good results either.
My questions are:
Is it OK for ICP if I just rotate the object in front of the cameras to obtain the point clouds? What angle of rotation is needed to achieve good results? Or is there a better way of taking pictures of the object for building a 3D model? Is it OK if my point clouds have some holes? What is the maximum acceptable SAC fitness score for good ICP, and what is the maximum fitness score of a good ICP?
Example of my point cloud files:
https://drive.google.com/file/d/0B1VdSoFbwNShcmo4ZUhPWjZHWG8/view?usp=sharing
My advice, from experience: you already have RGB (or grayscale) images. ICP is good for optimising (refining) the point-cloud alignment, but it has trouble aligning the clouds from scratch.
First start with RGB odometry (aligning the point clouds, which are rotated relative to each other, through feature points), then use and learn how ICP works with the already mentioned Point Cloud Library. Let the RGB features give you a prediction, and then use ICP to optimize it where possible.
Once this works, think about a good fitness-score calculation. If that all works, use the trunk version of ICP and tune the parameters. After all this is done you will have code that is not only fast, but also has a low chance of going wrong.
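If it helps to see the coarse-then-fine idea end to end, here is a rough sketch using Open3D's pipelines API in Python rather than PCL (feature-based RANSAC standing in for the SAC step, then ICP refinement); the file names and voxel size are assumptions to adapt to your clouds, and this is an illustration of the approach, not the questioner's pipeline:

```python
import open3d as o3d

def preprocess(pcd, voxel):
    # Downsample and compute FPFH features for the coarse matching step.
    down = pcd.voxel_down_sample(voxel)
    down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
    return down, fpfh

source = o3d.io.read_point_cloud("cloud_0.pcd")   # placeholder file names
target = o3d.io.read_point_cloud("cloud_1.pcd")
voxel = 0.01                                      # tune to your scene scale

src_down, src_fpfh = preprocess(source, voxel)
tgt_down, tgt_fpfh = preprocess(target, voxel)

# Coarse alignment: RANSAC on feature correspondences (the "SAC" step).
coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    src_down, tgt_down, src_fpfh, tgt_fpfh, True, voxel * 1.5,
    o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
    [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(voxel * 1.5)],
    o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

# Fine alignment: ICP seeded with the coarse transform.
fine = o3d.pipelines.registration.registration_icp(
    src_down, tgt_down, voxel * 0.4, coarse.transformation,
    o3d.pipelines.registration.TransformationEstimationPointToPlane())

print("fitness:", fine.fitness, "inlier RMSE:", fine.inlier_rmse)
```

Note that Open3D reports fitness as the inlier (overlap) fraction between 0 and 1, which is a different convention from PCL's fitness score.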
The following post explains what can go wrong:
Using ICP, we refine this transformation using only geometric information. However, here ICP decreases the precision. What happens is that ICP tries to match as many corresponding points as it can. Here the background behind the screen has more points than the screen itself on the two scans. ICP will then align the clouds to maximize the correspondences on the background. The screen is then misaligned.
https://github.com/introlab/rtabmap/wiki/ICP
Short introduction: project of augmented reality
Goal: load 3D hairstyles templates on the head of someone.
So I am using OpenCV to track the face of the person, and then I have to track the cap (we assume the user is wearing a cap, and we can decide on a landmark or anything else we need on the cap to detect it). Once I have detected the landmark, I have to get its coordinates and send them to the 3D engine to load/update the 3D object.
So, to precisely detect the landmark(s) on the cap, I first tested a few methods:
cvFindChessBoardCorner ... very efficient on a planar surface, but not on a cap (http://dsynflo.blogspot.com/2010/06/simplar-augmented-reality-for-opencv.html)
color detection (http://www.aishack.in/2010/07/tracking-colored-objects-in-opencv/) ... not really efficient: if the luminosity changes, the colors change ...
I came to you today to think it through with you. Do I need a special landmark on the cap? (If yes, which one? If not, how should I proceed?)
Is it a good idea to combine color detection and shape detection?
... Am I on the right track? ^^
I appreciate any advice regarding the use of a cap to target the head of the user, and the different functions of the OpenCV library I should use.
Sorry for my English if it's not perfect.
Thank you a lot !
A quick method, off the top of my head, is to combine both methods.
Color tracking using histograms and mean shift
Here is an alternative color detection method using histograms:
Robust Hand Detection via Computer Vision
The idea is this:
For a cap of known color, say bright green/blue (like the kind of colors you see on a matting/chroma-key screen), you can pre-compute a histogram using just the hue and saturation color channels. We deliberately exclude the lightness channel to make it more robust to lighting variations. Now, with the histogram, you can create a back projection map, i.e. a mask with a probability value at each pixel in the image indicating the probability that the color there is the color of the cap.
Now, after obtaining the probability map, you can run the meanshift or camshift algorithms (available in OpenCV) on this probability map (NOT the image), with the initial window placed somewhere above the face you detected using OpenCV's algorithm. This window will eventually end up at the mode of the probability distribution i.e. the cap.
Details are in the link on Robust Hand Detection I gave above. For more details, you should consider getting the official OpenCV book or borrowing it from your local library. There is a very nice chapter on using meanshift and camshift for tracking objects. Alternatively, just search the web for queries along the lines of meanshift/camshift for object tracking.
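A minimal sketch of that histogram/back-projection/CamShift chain in Python; the file names and the initial window above the face are hypothetical:

```python
import cv2

# Hue/saturation histogram from a sample crop of the cap's color
# (placeholder file; in practice, crop a patch of the cap itself).
sample = cv2.cvtColor(cv2.imread("cap_sample.png"), cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([sample], [0, 1], None, [30, 32], [0, 180, 0, 256])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

frame = cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2HSV)

# Back projection: per-pixel likelihood that the color belongs to the cap.
backproj = cv2.calcBackProject([frame], [0, 1], hist, [0, 180, 0, 256], 1)

# Seed the search window above the detected face (hypothetical box), then
# let CamShift converge on the mode of the probability map.
init_window = (200, 80, 120, 80)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
rot_rect, track_window = cv2.CamShift(backproj, init_window, criteria)
print("Cap window:", track_window)
```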
Detect squares/circles to get orientation of head
If, in addition, you wish to further confirm this final location, you can add 4 small squares/circles on the front of the cap and use OpenCV's built-in algorithms to detect them only in this region of interest (ROI). It is sort of like detecting the squares in QR codes. This step also gives you information on the orientation of the cap, and hence the head, which might be useful when you render the hair. E.g. after locating 2 adjacent squares/circles, you can compute the angle between them and the horizontal/vertical line.
You can detect squares/corners using the standard corner detectors etc in OpenCV.
For circles, you can try using the HoughCircle algorithm: http://docs.opencv.org/modules/imgproc/doc/feature_detection.html#houghcircles
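For instance, a sketch assuming circular markers inside a hypothetical cap ROI (all parameters would need tuning for your camera and marker size):

```python
import cv2
import numpy as np

# Search for the circular markers only inside the cap ROI (placeholder box).
gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
roi = cv2.medianBlur(gray[80:160, 200:320], 5)

circles = cv2.HoughCircles(roi, cv2.HOUGH_GRADIENT, dp=1, minDist=15,
                           param1=100, param2=20, minRadius=3, maxRadius=12)

if circles is not None and circles.shape[1] >= 2:
    (x0, y0, _), (x1, y1, _) = circles[0][:2]
    # The angle between two adjacent markers approximates the cap's roll.
    roll = np.degrees(np.arctan2(y1 - y0, x1 - x0))
    print("Estimated cap roll angle:", roll)
```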
Speeding this up
Make extensive use of Region of Interests (ROIs)
To speed things up, you should, as often as possible, run your algorithms on small regions of the image (ROIs), and likewise on small regions of the probability map. You can extract an ROI from an OpenCV image; ROIs are themselves images, and you can run OpenCV's algorithms on them the same way you would run them on whole images. For example, you could compute the probability map only for an ROI around the detected face. Similarly, the meanshift/camshift algorithm should only be run on this smaller map, and likewise for the additional step to detect squares or circles. Details can be found in the OpenCV book as well as through a quick search online.
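In OpenCV's Python interface an ROI is just a NumPy slice, so the same functions run on it unchanged; for example (hypothetical box around the detected face):

```python
import cv2

frame = cv2.imread("frame.png")

# Hypothetical face box from the detector; extend it upward to cover the cap.
x, y, w, h = 200, 120, 120, 120
cap_roi = frame[max(y - h, 0):y, x:x + w]

# Any OpenCV call works on the slice exactly as it would on the full image.
cap_roi_hsv = cv2.cvtColor(cap_roi, cv2.COLOR_BGR2HSV)
print("ROI shape:", cap_roi_hsv.shape)
```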
Compile OpenCV with TBB and CUDA
A number of OpenCV's algorithms can achieve significant speed-ups, WITHOUT the programmer needing to do any additional work, simply by compiling your OpenCV library with TBB (Threading Building Blocks) and CUDA support switched on. In particular, the face detection algorithm in OpenCV (Viola-Jones) will run a couple of times faster.
You can switch on these options only after you have installed the packages for TBB and CUDA.
TBB: http://threadingbuildingblocks.org/download
CUDA: https://developer.nvidia.com/cuda-downloads
And then compile OpenCV from source: http://docs.opencv.org/doc/tutorials/introduction/windows_install/windows_install.html#windows-installation
Lastly, I'm not sure whether you are using the "C version" of OpenCV. Unless strictly necessary (for compatibility issues etc.), I recommend using the C++ interface of OpenCV simply because it is more convenient (from my personal experience at least). Now let me state upfront that I don't intend this statement to start a flame war on the merits of C vs C++.
Hope this helps.
I'm trying to use OpenCV to detect an IR point using the built-in camera. My camera can see infrared light. However, I don't have a clue how to distinguish between visible light and IR light.
After the transformation to RGB we can't distinguish them, but maybe OpenCV has some methods to do it.
Does anybody know about such OpenCV functions? Or how to do it in other way?
--edit
Is it possible to recognise, for example, the wavelength of light using a laptop's built-in camera? Or is it just impossible to distinguish between visible and infrared light without a special camera?
You wouldn't be able to do anything in OpenCV, because by the time OpenCV goes to work on it, it will just be another RGB image, like the visible light (you sort of mention this).
You say your camera can see infrared... Does this mean it has a filter which separates IR light from visible light? In that case, by the time you have your image inside OpenCV you would already be looking only at IR. Then look at intensities, etc.?
In your setting, assuming you have an RGB + IR camera, your camera will probably deliver these three channels:
R + IR
G + IR
B + IR
So it would be difficult to identify IR pixels directly from the image. But nothing is impossible: R, G, B and IR are broad bands, so information from all wavelengths is mixed into the channels.
One thing you can do is to train a classification model to separate non-IR and IR pixels, using lots of image data with pre-determined classes. With that model trained, you could identify the IR pixels of a new image.
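As a very rough sketch of that idea (the label files are hypothetical; a real setup would need carefully collected training data with known IR/non-IR pixels):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training set: per-pixel RGB values and manual labels
# (1 = pixel dominated by the IR source, 0 = ordinary visible light).
X_train = np.load("pixels_rgb.npy")       # shape (N, 3)
y_train = np.load("pixels_labels.npy")    # shape (N,)

clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

# Classify every pixel of a new image (H x W x 3 array) as IR / non-IR.
img = np.load("new_image_rgb.npy").astype(np.float32)
ir_mask = clf.predict(img.reshape(-1, 3)).reshape(img.shape[:2])
```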
There is no way to separate IR from visible light in software, because your camera in effect "transforms" IR light into light that is visible to your eyes.
I assume the only way to solve this would be to use two cameras: one IR camera with an IR-transmitting filter and one normal camera with an IR-blocking filter. Then you can match the images and pull out the information you need.
I"m using a Thermal camera for a project and I'm a little stumped so as to how to think about calculating intrinsics for it. The usual camera's would determine different points on a chessboard or something similar, but the thermal camera won't really be able to differentiate between those points. Does anyone have any insight on what the intrinsics for thermal cameras would really look like?
Cheers!
EDIT - In addition to the great suggestions I currently have, I'm also considering putting aluminum foil on the white squares to create a thermal difference. Let me know what you think of this idea as well.
This might or might not work, depending on the accuracy you need:
Use a chessboard pattern and shine a really strong light at it. The black squares will likely get hotter than the white squares, so you might be able to see the pattern in the thermal image.
Put small lightbulbs on the edges of a chessboard pattern, light them, wait until they become hot, use your thermal camera on it.
This problem is addressed in A Mask-Based Approach for the Geometric Calibration of Thermal Infrared Cameras, which basically advocates placing an opaque mask with checkerboard squares cut out of it in front of a radiating source such as a computer monitor.
Related code can be found in mm-calibrator.
If you have a camera that is also sensitive to the visible end of the spectrum (i.e. most IR cameras, which is what most thermography is based on after all), then simply get an IR cut-off filter and fit it to the front of the camera's lens (you can get some good C-mount ones). Calibrate as normal with the fixed optics, then remove the filter. The intrinsics should be the same, since the optical properties are the same (for most purposes).
Quoting from Section 5, "Image fusion", in GADE, Rikke; MOESLUND, Thomas B. Thermal cameras and applications: a survey. Machine Vision and Applications, 2014, 25.1: 245-262 (freely downloadable as of June 2014):
The standard chessboard method for geometric calibration, correction of lens distortion, and alignment of the cameras relies on colour difference, and cannot be used for thermal cameras without some changes. Cheng et al. [30] and Prakash et al. [146] reported that when heating the board with a flood lamp, the difference in emissivity of the colours will result in an intensity difference in the thermal image. However, a more crisp chessboard pattern can be obtained by constructing a chessboard of two different materials, with large difference in thermal emissivity and/or temperature [180]. This approach is also applied in [68] using a copper plate with milled checker patterns in front of a different base material, and in [195] with a metal wire in front of a plastic board. When these special chessboards are heated by a heat gun, hairdryer or similar, a clear chessboard pattern will be visible in the thermal image, due to the different emissivity of the materials. At the same time, it is also visible in the visual image, due to colour difference. Figure 12 shows thermal and RGB pictures from a calibration test. The chessboard consists of two cardboard sheets, where the white base sheet has been heated right before assembling the board.
[30] CHENG, Shinko Y.; PARK, Sangho; TRIVEDI, Mohan M. Multiperspective thermal ir and video arrays for 3d body tracking and driver activity analysis. In: Computer Vision and Pattern Recognition-Workshops, 2005. CVPR Workshops. IEEE Computer Society Conference on. IEEE, 2005. p. 3-3.
[146] PRAKASH, Surya, et al. Robust thermal camera calibration and 3D mapping of object surface temperatures. In: Defense and Security Symposium. International Society for Optics and Photonics, 2006. p. 62050J-62050J-8.
[180] VIDAS, Stephen, et al. A mask-based approach for the geometric calibration of thermal-infrared cameras. Instrumentation and Measurement, IEEE Transactions on, 2012, 61.6: 1625-1635.
[68] HILSENSTEIN, V. Surface reconstruction of water waves using thermographic stereo imaging. In: Image and Vision Computing New Zealand. 2005. p. 102-107.
[195] NG, Harry, et al. Acquisition of 3D surface temperature distribution of a car body. In: Information Acquisition, 2005 IEEE International Conference on. IEEE, 2005. p. 5 pp.
You may want to consider running resistive heating wire along the lines of the pattern (you also need a power source).
You can drill holes in a metal plate and then heat the plate; hopefully the holes will be colder than the plate and will appear as circles in the image.
Then you can use OpenCV (>2.0) to find circle centers http://docs.opencv.org/doc/tutorials/calib3d/camera_calibration/camera_calibration.html#cameracalibrationopencv
See also the function findCirclesGrid.
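A minimal sketch of that calibration with Python/OpenCV, assuming the drilled holes form a regular (symmetric) grid and show up as dark blobs on the hot plate; the grid size, spacing and file names are placeholders:

```python
import cv2
import numpy as np

pattern_size = (7, 5)                      # hypothetical grid of drilled holes
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in ("thermal_0.png", "thermal_1.png", "thermal_2.png"):
    img = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    # The cold holes appear dark on the hot plate; the default blob detector
    # looks for dark blobs, otherwise invert the image or tune the detector.
    found, centers = cv2.findCirclesGrid(img, pattern_size,
                                         flags=cv2.CALIB_CB_SYMMETRIC_GRID)
    if found:
        obj_points.append(objp)
        img_points.append(centers)

rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points,
                                         img.shape[::-1], None, None)
print("RMS reprojection error:", rms)
print("Intrinsic matrix:\n", K)
```

The hole spacing only sets the scale of the extrinsics; the intrinsic matrix and distortion coefficients do not depend on it.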