How to detect infrared light with OpenCV

I'm trying to use OpenCV to detect an IR point with my laptop's built-in camera. The camera can see infrared light, but I don't have a clue how to distinguish between visible light and IR light.
After the conversion to RGB the two are indistinguishable, but maybe OpenCV has some method to do it.
Does anybody know of such OpenCV functions? Or how to do it another way?
Edit:
Is it possible to recognise, for example, the wavelength of light using a laptop's built-in camera? Or is it simply impossible to distinguish between visible and infrared light without a special camera?

You wouldn't be able to do anything in OpenCV, because by the time OpenCV gets to work on the image it is just another RGB value, like the visible light (you sort of mention this).
You say your camera can see infrared... does this mean it has a filter which separates IR light from visible light? In that case, by the time you have your image inside OpenCV you would only be looking at IR, and you could then work with intensities, etc.
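If that is the case, here is a minimal sketch of the intensity idea, assuming the IR point shows up as the brightest blob in a mostly-IR frame (the camera index and threshold are illustrative):

```python
import cv2

# Grab one frame from the built-in camera (device index 0 is an assumption).
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()

if ok:
    # An IR point source tends to saturate the sensor, so it appears as the
    # brightest region; work on intensity only and smooth away hot pixels.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (11, 11), 0)

    _, max_val, _, max_loc = cv2.minMaxLoc(gray)
    if max_val > 240:  # illustrative threshold, tune for your camera
        cv2.circle(frame, max_loc, 15, (0, 0, 255), 2)  # mark the candidate IR point
```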

In your setup, assuming you have an RGB + IR camera, it will probably deliver these three channels:
R + IR
G + IR
B + IR
So it would be difficult to identify IR pixels directly from the image. But nothing is impossible: R, G, B and IR are broad bands, so information about all wavelengths is present in the channels.
One thing you can do is train a classification model to separate IR and non-IR pixels, using lots of image data with pre-determined classes. With that model trained, you could identify the IR pixels of a new image.
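As a rough illustration of that idea, here is a per-pixel classifier sketch in Python; the training arrays, file names and labelling scheme are all hypothetical placeholders for whatever labelled data you collect:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: N pixels as (R, G, B) rows with a 0/1 label each
# (1 = "contains an IR contribution"), e.g. gathered from frames captured with
# and without the IR source switched on.
X_train = np.load("pixels_rgb.npy")     # shape (N, 3), illustrative file name
y_train = np.load("pixels_labels.npy")  # shape (N,)

clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

def ir_mask(image_rgb):
    """Classify every pixel of an H x W x 3 image as IR (1) or non-IR (0)."""
    h, w, _ = image_rgb.shape
    return clf.predict(image_rgb.reshape(-1, 3)).reshape(h, w).astype(np.uint8)
```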

There is no way to separate IR from visible light in software, because your camera in effect "transforms" IR light into light that is visible to your eyes.
I assume the only way to solve this would be to use two cameras: one IR camera with an IR-transmitting filter and one normal camera with an IR-blocking filter. Then you can match the images and pull out the information you need.

Related

Homography under camera translation (for stitching)

I have a camera with which I take two captures. I want to reconstruct a single image by combining the two images.
I only translate the camera and take images of a flat TV screen. I heard that a homography only works when the camera rotates.
What should I do when I only have a translation?
Because you are imaging a planar surface (in your case a TV screen), all images of it taken with a perspective camera will be related by homographies. This holds whether your camera is translating and/or rotating. Therefore, to stitch different images of the surface, you don't need to do any 3D geometry processing (essential matrix computation, triangulation, etc.).
To solve your problem you need to do the following:
You determine the homographies between your images. Because you only have two images you can select the first one as the 'source' and the second one as the 'target', and compute the homography from target to source. This is classically done with feature detection and robust homography fitting. Let's denote this homography by the 3x3 matrix H.
You warp your target image to your source using H. You can do this in OpenCV with the warpPerspective method.
Merge your source and warped target using a blending function (a minimal sketch of these three steps is given below).
An open source project for doing exactly these steps is here.
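For reference, here is a minimal sketch of those three steps in Python/OpenCV, assuming ORB features and a crude overwrite in place of a proper blending function (file names are illustrative):

```python
import cv2
import numpy as np

source = cv2.imread("source.jpg")  # illustrative file names
target = cv2.imread("target.jpg")
gray_s = cv2.cvtColor(source, cv2.COLOR_BGR2GRAY)
gray_t = cv2.cvtColor(target, cv2.COLOR_BGR2GRAY)

# 1. Feature detection + matching, then robust homography fitting with RANSAC.
orb = cv2.ORB_create(4000)
kp_s, des_s = orb.detectAndCompute(gray_s, None)
kp_t, des_t = orb.detectAndCompute(gray_t, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_t, des_s)
pts_t = np.float32([kp_t[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
pts_s = np.float32([kp_s[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(pts_t, pts_s, cv2.RANSAC, 3.0)

# 2. Warp the target into the source frame.
h, w = source.shape[:2]
warped = cv2.warpPerspective(target, H, (w, h))

# 3. "Blend": simply keep source pixels where they exist (replace this with a
#    real blending function, e.g. feathering, for better results).
stitched = np.where(source > 0, source, warped)
```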
If your TV lacks distinct features or there is a lot of background clutter, the method for estimating H might not be very robust. If that is the case, you could manually click four or more correspondences on the TV in the target and source images and compute H using OpenCV's findHomography method. Note that your correspondences cannot be completely arbitrary. Specifically, no three correspondences may be collinear (otherwise H cannot be computed). They should also be clicked as accurately as possible, because errors will affect the final stitch and cause ghosting artefacts.
An important caveat applies if your camera has significant lens distortion: in that case your images will not be exactly related by homographies. You can deal with this by calibrating your camera with OpenCV and then pre-processing your images to undo the lens distortion (using OpenCV's undistort method).
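A short sketch of that calibration and undistortion step, assuming a set of chessboard calibration shots; the 9x6 corner pattern and file names are assumptions:

```python
import cv2
import glob
import numpy as np

# Planar chessboard model: the 3D corner grid (Z = 0), repeated per shot.
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

objpoints, imgpoints, size = [], [], None
for path in glob.glob("calib_*.jpg"):  # illustrative file pattern
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        objpoints.append(objp)
        imgpoints.append(corners)
        size = gray.shape[::-1]

# Intrinsics and distortion coefficients, then undo the lens distortion
# before estimating the homography.
_, K, dist, _, _ = cv2.calibrateCamera(objpoints, imgpoints, size, None, None)
undistorted = cv2.undistort(cv2.imread("target.jpg"), K, dist)
```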

Screen detection using opencv / Emgucv

I'm working on computer screen detection using Emgu CV (a C# OpenCV wrapper).
I want to detect my computer screen and draw a rectangle on it.
To help with this, I placed 3 infrared LEDs on the screen of the computer, which I detect first; after that detection, I can find the screen area below those 3 LEDs.
Here are the results after the detection of the 3 LEDs.
The 3 red boxes are the detected LEDs.
And in general I have something like this.
Does anyone have an idea of how I can proceed to detect the whole screen area?
This is just a suggestion, but if you know for a fact that your computer screen is below your LEDs, you could try the OpenCV GrabCut algorithm. Draw a rectangle below the LEDs, large enough to contain the screen (maybe you can guess the size from the spacing of the LEDs), and use it to initialize GrabCut.
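Here is a minimal GrabCut sketch in Python/OpenCV (the Emgu CV calls are analogous); the file name and rectangle values are illustrative and would come from your LED detections:

```python
import cv2
import numpy as np

img = cv2.imread("screen.jpg")  # illustrative file name

# Rectangle below the detected LEDs, sized from the LED spacing
# (x, y, width, height here are placeholder values).
rect = (80, 120, 400, 300)

mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)

# Initialize GrabCut from the rectangle; 5 iterations as a starting point.
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Definite + probable foreground pixels form the screen candidate region.
screen_mask = np.where(
    (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
```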
Let me know what kind of results you get.
You can try using a camera with no IR filter (most night-vision cameras are like this) so that you get a more intense signal from the LEDs, making them stand out against the display; then it's simple blob detection to get their positions.
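A blob-detection sketch along those lines, assuming the LEDs appear as small saturated spots in a grayscale frame (file name and area limits are illustrative):

```python
import cv2

img = cv2.imread("ir_frame.jpg", cv2.IMREAD_GRAYSCALE)  # illustrative file name

# Look for bright, roughly round blobs and take their centers as LED positions.
params = cv2.SimpleBlobDetector_Params()
params.filterByColor = True
params.blobColor = 255   # bright blobs (the detector defaults to dark ones)
params.filterByArea = True
params.minArea = 20      # tune for the LED size in pixels
params.maxArea = 2000

detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(img)
led_positions = [kp.pt for kp in keypoints]
```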
Another solution would be to put ArUco markers on the display. If the viewing angles you intend to use are not very large, this is a compelling option, and the relative pose of the camera with respect to the display can also be estimated if that is what you want. From the ArUco detections you get the orientation of the display plane, which lets you estimate the display area.
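A short ArUco detection sketch, assuming markers placed at the display corners and opencv-contrib-python 4.7 or newer (older versions expose cv2.aruco.detectMarkers as a free function instead):

```python
import cv2

img = cv2.imread("display.jpg")  # illustrative file name

# Markers from a predefined dictionary, shown or stuck at the display corners.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

corners, ids, rejected = detector.detectMarkers(img)
# 'corners' holds the four image corners of each detected marker; with markers
# at the display corners these directly bound the screen area.
```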

OpenCV Stereo Calibration of IR-Cameras

I have 2 webcams with the IR-blocking filters removed and visible-light-blocking filters applied. Thus, both cameras can only see IR light, so I cannot calibrate the stereo pair by observing points on a chessboard (because I can't see the chessboard). Instead, I had the idea of using a number of IR LEDs as a tracking pattern; I could attach the LEDs to a chessboard, for instance. AFAIK, the OpenCV stereoCalibrate function expects the objectPoints as well as imagePoints1 and imagePoints2, and returns both camera matrices and distortion coefficients as well as the fundamental matrix.
How many points do I need to detect in my images for the function to work properly? For the fundamental matrix I know the eight-point algorithm, so are 8 points enough? The problem is, I don't want to use a huge number of IR LEDs as a tracking pattern.
Are there better ways to do this?
Why not remove the filters, calibrate, and then replace them?
(For pure curiosity's sake, what are you working on?)

Detect UV light with iOS Camera

Is there a way to detect UV rays with the iPhone camera without any additional hardware? I don't need to find the exact UV index of the light.
I tried to take a photo under a UV light, but when I analyse that photo I only get RGB pixels. How can I tell that the photo was taken under UV light? How do photos taken under UV light differ from photos taken in normal light?
I found an app called GammaPix in the App Store which claims it can detect radioactivity in an area using only the camera. I want something like that to detect UV rays.
Fundamentally, most CCD digital sensors can see UV light, but they contain filters to block light outside of the visible range from reaching the detector. There is a good discussion of this here: https://photo.stackexchange.com/questions/2262/are-digital-sensors-sensitive-to-uv
The spectral response of the iPhone drops to essentially zero for wavelengths shorter than 400 nm (edge of the visible) due to these filters. [See page 35 of http://epublications.uef.fi/pub/urn_nbn_fi_uef-20141156/urn_nbn_fi_uef-20141156.pdf]
Really high-energy photons, like gamma rays, can pass through most materials and reach the sensor even when visible light is completely blocked; this is likely how the GammaPix app works. It doesn't work for UV photons, since they are blocked by the filters on the sensor.
So, unless you can somehow crack open the iPhone and remove the filters from the sensor (probably not possible without destroying the sensor), then it will not be able to detect UV photons.
To detect UV, a "windowless CCD" (or similar) is required: http://www.mightexsystems.com/index.php?cPath=1_88
Another option might be to use a material that can absorb UV photons and emit visible photons. For example, this card will glow when illuminated with UV light.
It is unlikely you can detect UV light via a standard smartphone camera, at least not without a lot of work.
The GammaPix app does not detect gamma rays directly, but instead "looks for a particular 'signature' left behind by gamma rays" on the camera. The technology behind this signature detection is patented, and was developed as part of a $679,000 DARPA grant.
Relevant patents: US7391028, US7737410
I can't say for sure, but I do not believe the iPhone camera can "see" UV light; most digital cameras can't. Note that white luminescence is not UV light, it is just caused by it.
Slightly off-topic, but infrared is another story: some of the older iPhone cameras could see infrared (before Apple added the IR filter). The iPhone 6 front camera can still see IR light, as it doesn't have an IR filter.
Don't believe me? Point your TV's IR remote at your front camera and you will see the LED light up.
A standard smartphone camera cannot detect UV or IR, as only the visible spectrum is let through. You can check out this device from Nurugo, but it is not compatible with the iPhone yet.
You could also use a weather API to get the UV index online, for example OpenUV - Global Real-time UV Index API.

Is it possible to extract the foreground in a video where the biggest part of the background is a huge screen (playing a video)?

I am working on a multi-view telepresence project using an array of Kinect cameras.
To improve the visual quality we want to extract the foreground, e.g. the person standing in the middle, using the color image rather than the depth image, because we want to use the more reliable color image to repair some artefacts in the depth image.
The problem is that the foreground objects (usually 1-2 persons) are standing in front of a huge screen showing the other party of the telepresence system, which is also moving all the time, and this screen is visible to some of the Kinects. Is it still possible to extract the foreground for these Kinects, and if so, could you point me in the right direction?
Some more information regarding the existing system:
We already have a system running that merges the depth maps of all the Kinects, but that only gets us so far. There are a lot of issues with the Kinect depth sensor, e.g. interference and the distance to the sensor.
Also, the color and depth sensors are slightly offset, so when you map the color (like a texture) onto a mesh reconstructed from the depth data, you sometimes map the floor onto the person.
All these issues reduce the overall quality of the depth data, but not of the color data, so one could view the color-image silhouette as the "real" one and the depth one as "broken". Nevertheless, the mesh is constructed from the depth data, so improving the depth data improves the quality of the whole system.
Now, if you had the silhouette, you could try to remove/modify incorrect depth values outside of the silhouette and/or add missing depth values inside it.
Thanks for every hint you can provide.
In my experience with this kind of problem, the strategy you propose is not the best way to go.
Because you have a non-constant background, the problem you want to solve is actually 2D segmentation. This is a hard problem, and people typically use depth to make segmentation easier, not the other way round. I would try to combine/merge the multiple depth maps from your Kinects in order to improve your depth images, maybe in a KinectFusion kind of way, or using classic sensor-fusion techniques.
If you are absolutely determined to follow your strategy, you could try to use your imperfect depth maps to combine the RGB camera images of the Kinects in order to reconstruct a complete view of the background (without occlusion by the people in front of it). However, due to the changing background image on the screen, this would require your Kinects' RGB cameras to be synchronized, which I think is not possible.
Edit, in light of the comments/updates:
I think exploiting your knowledge of the image on the screen is your only chance of doing background subtraction for silhouette enhancement. I see that this is a tough problem, as the screen is a stereoscopic display, if I understand you correctly.
You could try to compute a model that describes what your Kinect RGB cameras see (given the stereoscopic display, their placement, the type of sensor, etc.) when you display a certain image on your screen; essentially a function telling you: Kinect K sees (r, g, b) at pixel (x, y) when I show (r', g', b') at pixel (x', y') on my display. To do this you will have to create a sequence of calibration images, show them on the display without a person standing in front of it, and film them with the Kinects. This would allow you to predict the appearance of your screen in the Kinect cameras and thus compute the background subtraction. This is a pretty challenging task (but it would make a good research paper if it works).
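Once such a predicted background exists, the subtraction itself is straightforward. A rough sketch, where predicted_bg and frame are placeholders for the predicted screen appearance and the live Kinect RGB image (threshold and kernel size are illustrative):

```python
import cv2

# predicted_bg: predicted appearance of the screen in this Kinect's RGB camera
# frame: the live RGB image from the same Kinect (same size and type assumed)
diff = cv2.absdiff(frame, predicted_bg)
gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
_, foreground_mask = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)

# Clean up small speckles in the mask.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
foreground_mask = cv2.morphologyEx(foreground_mask, cv2.MORPH_OPEN, kernel)
```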
A side note: you can easily compute the geometric relation of a Kinect's depth camera to its color camera in order to avoid mapping the floor onto the person. Some Kinect APIs allow you to retrieve the raw image of the depth (IR) camera. If you cover the IR projector, you can film a calibration pattern with both the depth and RGB cameras and compute an extrinsic calibration.
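A sketch of that extrinsic step, assuming you have already calibrated each camera's intrinsics individually and collected matching chessboard detections from the (projector-covered) IR stream and the RGB stream; all variable names are placeholders:

```python
import cv2

# objpoints: list of (N, 3) chessboard corner coordinates, one entry per shot
# ir_corners / rgb_corners: corresponding 2D detections in each camera
# K_ir, d_ir, K_rgb, d_rgb: per-camera intrinsics from cv2.calibrateCamera
_, _, _, _, _, R, T, E, F = cv2.stereoCalibrate(
    objpoints, ir_corners, rgb_corners,
    K_ir, d_ir, K_rgb, d_rgb, image_size,
    flags=cv2.CALIB_FIX_INTRINSIC)

# R and T map points from the depth/IR camera frame into the RGB camera frame,
# which is what you need to texture the mesh without the floor-on-person offset.
```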
