Background subtraction with reflective material - opencv

I am using a background subtraction method to detect moving objects. The objects in my experiment are made of reflective material, which makes them difficult to detect. How can I resolve this?
Thank you!
EDIT: I'm using the MOG2 background subtractor (in OpenCV). The OpenCV version is 3.10.
EDIT 1: Updated the results after applying the method in HSV colour space:
Step 1: Convert to HSV colour space
Step 2: Apply MoG2

I'm assuming your camera is static, you know the background model, and you are using something like the MOG detector. The simplest approach is to use a color space that separates luminance from hue and saturation; one such example is the HSV color space. OpenCV provides the cvtColor function to convert, e.g., from BGR (the default) to HSV. You can then use just the hue and saturation channels to avoid the influence of value (lightness) variations. This won't work, however, for extremely shiny objects, such as plastic or polished metal lit by sunlight, which appear white to the camera.
Another way to deal with this problem is motion tracking, e.g. optical flow. If you are really interested and want more detail, I can refer you to some specific papers.

Related

OpenCV Meanshift Tracking HSV

I was wondering why, in OpenCV's meanshift tracking examples, only the Hue channel is used.
The following line of code from https://docs.opencv.org/4.x/d7/d00/tutorial_meanshift.html illustrates this:
roi_hist = cv.calcHist([hsv_roi],[0],mask,[180],[0,180])
I understand the main idea of converting from the RGB color space to HSV, but I do not get why selecting only Hue is enough. I know that roi_hist is later used to create a back projection, but I also know that it is possible to create a 2-D roi_hist by including Saturation as well.
What does it depend on? Should I expect that adding Saturation will improve my tracking results? I want to perform face tracking, so I am looking for skin color.
Thanks in advance for help.
The OpenCV tutorial you linked references the paper that introduced CAMSHIFT. The CAMSHIFT algorithm was designed to track human faces. On page three, the paper states:
Except for albinos, humans are all the same color (hue). Dark-skinned people simply have greater flesh color saturation than light-skinned people, and this is separated out in the HSV color system and ignored in our flesh-tracking color model.
Using the H channel of HSV therefore allows a single-channel tracker that works for most human faces.

Visualization of Specular Reflection of Optical Coatings on transparent substrate

I was trying to convert the spectrum of the specular reflection of an optical coating to RGB values. The problem is that because most of the un-reflected light is simply transmitted through the underlying transparent substrate, the intensity of the specular reflection is very weak.
This is represented by a spectrum that shows very low R% across the visible wavelength region. When I convert this spectrum to RGB, the color patch shows a very dark color/reflection (which, I realize now, makes sense, since the R% spectrum basically describes a situation where very little light is reflected, i.e. a dark color).
I have tried converting the patch to HSV and adjusting the saturation value, but the resulting color just doesn't add up. I was wondering if anyone has encountered the same situation before?
Edit:
Sorry I wasn't clear in my previous question. I am trying to create an RGB color patch from the Fresnel reflection of light reflected off an optical coating on a transparent substrate; below is a simple diagram of what I am trying to do:
Reflection off optical coating of transparent substrate
From the diagram, you can see that the target R% spectrum gives a dark red RGB color patch, which would be correct if the substrate were opaque, i.e. if the substrate reflected its color and absorbed the remaining wavelengths.
However, as my substrate is transparent, most of the light (a D65 light source) passes through the substrate while the optical coating reflects 7.7% of the incoming light (which should give a very light red color patch of another hue). This gives a very low-intensity reflection, as most of the light is transmitted away.
I calculated the CIE XYZ tristimulus values from the spectrum and converted them to sRGB. This works for transmission, but it doesn't work for Fresnel reflection (as stated above). I was wondering if there are other formulas that can give an accurate color patch for such reflections? I have tried some image processing algorithms to render the Fresnel reflection; they work to a certain extent, but I need the formulas/equations, as I would like to include them in a program I am writing to analyse the colorimetric data of measured spectra from optical coatings.
Thanks,
Johan
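For reference, the standard XYZ → sRGB (D65) conversion used above can be sketched as follows; whether the weak reflection looks plausible then depends mostly on how Y is normalized against the reference white, not on the matrix itself. This is a generic colorimetry sketch, not the asker's code:

```python
import numpy as np

# IEC 61966-2-1 sRGB matrix (XYZ relative to D65, Y of white = 1).
M_XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def xyz_to_srgb(xyz):
    """Convert CIE XYZ (Y in [0, 1]) to gamma-encoded sRGB in [0, 1]."""
    rgb = M_XYZ_TO_SRGB @ np.asarray(xyz, dtype=float)
    rgb = np.clip(rgb, 0.0, 1.0)
    # Piecewise sRGB gamma encoding.
    return np.where(rgb <= 0.0031308,
                    12.92 * rgb,
                    1.055 * np.power(rgb, 1 / 2.4) - 0.055)
```

With this formulation, scaling Y by the 7.7% reflectance before conversion will necessarily give the dark patch the asker describes; rendering how the reflection would *appear* next to a bright transmitted background is a perceptual/rendering decision (e.g. choosing a different normalization white) rather than a different colorimetric formula.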

Robust color tracking vs exposure and white balance changes

Does anyone have experience with color matching and frame-to-frame color tracking in video when the exposure settings and white balance are constantly changing?
I'm working on a color tracking app that uses the iPad 2 front camera to capture video. I need to detect colored objects (of a predefined color captured earlier) in each frame. My problem is that the camera software likes to adjust WB and exposure on every frame. So if we remember a color at frame N, by frame N+10 the WB will be different, and this can lead to a big difference in color.
For calculating color distance I'm using the LAB color space and the CIE76 formula.
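The CIE76 difference referred to here is simply the Euclidean distance in L*a*b* (a sketch, which explains why it is cheap enough for the ARM constraint mentioned below):

```python
import math

def cie76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two L*a*b* triples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))
```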
Yes, I know there is a much better distance function, CIEDE2000, but I'm working on an ARM processor and I'm afraid that formula will be too heavy even for the manually optimized ARM NEON assembly code I already use.
CIE76 gives good results in general, but in poorly or very brightly lit scenes the camera either generates too much noise or over-saturates the image, so the colors end up too far from their originals. In addition to simple thresholding on color distance, I implemented per-component thresholding of LAB pixel values based on the standard deviation of the calibrated color. This also increased the correctness of the detection; however, it doesn't solve the main issue.
The camera itself provides frames in the RGB color space, but the API doesn't provide functions to get the white point or color temperature of the current frame. Currently I assume a D50 illuminant to perform the RGB -> LAB conversion.
And this is my main doubt. My idea is to compute the white point of the given RGB image, then convert the image to the XYZ color space, and then convert XYZ to LAB using the calculated white point. Is this possible?
From Wikipedia: White Point
Expressing color as tristimulus coordinates in the LMS color space, one can "translate" the object's color according to the von Kries transform simply by scaling the LMS coordinates by the ratio of the maximum of the tristimulus values at both white points. This provides a simple, but rough estimate.
http://en.wikipedia.org/wiki/White_point
Is this going to work? Or is there a better way to calculate the white point (even roughly)? By the way, I came across the Retinex algorithm, which demonstrates good color enhancement in shadows; has anyone used it? What are its pros and cons?
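The diagonal von Kries scaling quoted from Wikipedia can be sketched as below. The Bradford matrix used to move between XYZ and an LMS-like cone space is one common choice, not something prescribed by the question; the gray-world white-point estimate it would be paired with is likewise an assumption:

```python
import numpy as np

# Bradford matrix: XYZ -> LMS-like cone response space.
M_BRADFORD = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])

def von_kries_adapt(xyz, white_src, white_dst):
    """Adapt an XYZ color from a source white point to a destination
    white point by diagonal scaling in the cone (LMS) space."""
    lms = M_BRADFORD @ np.asarray(xyz, dtype=float)
    # Per-channel gain: ratio of the two white points in cone space.
    scale = (M_BRADFORD @ np.asarray(white_dst, dtype=float)) / \
            (M_BRADFORD @ np.asarray(white_src, dtype=float))
    # Back to XYZ by inverting the Bradford transform.
    return np.linalg.solve(M_BRADFORD, lms * scale)
```

A rough per-frame white point for `white_src` can come from a gray-world assumption (the frame's average RGB converted to XYZ), which is crude, as the Wikipedia excerpt warns, but often adequate for tracking.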

Detect red color in different illumination or background

I can't find the best way to detect red color under different illumination or backgrounds.
I found that there's the YCbCr color space, which is good for red or blue color detection (I actually need to detect blue too). The problem is that I can't figure out which threshold to use under different lighting. For example, in sunny weather the threshold is 210 (out of 255), while in cloudy weather it is 130.
I use OpenCV library to implement this.
Thanks for any help or advice.
Yes, HSV is usually used for this purpose. In HSV you can say that, whatever the brightness, red is still red. I also recommend looking into two places: a simple tutorial at http://aishack.in/tutorials/tracking-colored-objects-in-opencv/, and the book Learning OpenCV, whose histogram examples do exactly what you need. Using HSV together with histograms makes your solution solid.
The HSV color space should be more robust to changes in illumination.

OpenCV floor detection by segmentation

I'm working on a way to detect the floor in an image. I'm trying to accomplish this by reducing the image to areas of color and then assuming that the largest area is the floor. (We get to make some pretty extensive assumptions about the environment the robot will operate in)
What I'm looking for is some recommendations on algorithms that would be suited to this problem. Any help would be greatly appreciated.
Edit: specifically, I am looking for an image segmentation algorithm that can reliably extract one area. Everything I've tried (mainly PyrSegmentation) seems to work by reducing the image to N colors, which causes false positives when the camera is looking at an empty area.
Since floor detection is the main aim, I'd suggest that instead of segmenting by color you try separating by texture.
The Eigen transform paper describes a single-value descriptor of texture "roughness" using the average of the eigenvalues over a grayscale window in the image/video frame. On p. 78 of the paper, they apply mean-shift segmentation to the eigen-transform output image, effectively separating it into different textures.
Since your images come from a video feed, there can be a lot of variation in lighting, so color segmentation might pose problems (unless you work in HSV or another such color space, as mentioned above). The calculation of the eigenvalues is very simple and fast in OpenCV with the cvSVD() function.
If you can make the assumption of colour constancy, your main issue is going to be changes in lighting that throw off your colour detection.
To that end, convert your input image to HSV, HSL, CIE Lab, YUV, or some other luminance-separated colourspace and segment the image based on just the colour components (leave out the luminance channel: V, L, L and Y respectively in the examples above). This will help you overcome shadows and variations in lighting.
