OpenCV Haar training with images that have transparency

I'll be using OpenCV's cascade training functions, but before that I need to prepare training data.
Can OpenCV handle positive samples that have transparency? For example, if I want the classifier to learn what a vehicle looks like, can I supply positive sample images of vehicles standing on a transparent background?

As mentioned in the comments above, Haar features are computed only on the grayscale image. As you noted, this can pose a problem: transparent pixels default to 0 (black), which can make dark parts such as the wheels lose contrast. You can instead "standardize" the transparent color rather than letting it default to 0.
First, load all four channels (including the alpha channel), then use the alpha channel to set the transparent parts to a chosen value.
Python version
import cv2

# The file must actually carry an alpha channel (e.g. a PNG; JPEG has none).
I = cv2.imread("image.png", cv2.CV_LOAD_IMAGE_UNCHANGED)  # cv2.IMREAD_UNCHANGED in OpenCV 3+
alpha = I[:, :, 3]
G = cv2.cvtColor(I, cv2.COLOR_BGRA2GRAY)
G[alpha == 0] = 125  # Set fully transparent regions to 125. Change to suit your needs.
C++
// Assumes I was loaded with its alpha channel intact,
// e.g. cv::Mat I = cv::imread("image.png", -1);
std::vector<cv::Mat> channels;
cv::split(I, channels);
cv::Mat alpha = channels[3];
alpha = 255 - alpha; // Invert mask so we select the transparent regions.
cv::Mat G;
cv::cvtColor(I, G, cv::COLOR_BGRA2GRAY); // cvtColor fills an output Mat; it has no return value.
G.setTo(cv::Scalar(125), alpha);
As a note of caution, be careful with some of the operations above, e.g., loading an image with alpha and "alpha = 255 - alpha;". I believe they are only available in later versions of OpenCV. I'm using OpenCV 2.4.7 and it works (for the Python version; I haven't tried the C++, but it should be the same). So if things don't work, check whether these operations are supported by your version of OpenCV. If not, there are ways to work around them.

Related

How to assess image quality using image comparison

I would like to compare videos for quality (non-blurriness) by coding a C program. Someone told me to learn about the DFT (Discrete Fourier Transform) for image analysis and to use an FFT/DFT tool to learn the difference between blurred and detailed (non-blurry) copies of the same image.
(copied from the other question):
Let's say we have different files with different video quality: one is extremely clear, another is blurred, one has rough colors. Compare all files basically frame by frame and report to the user which has better quality.
Can anyone help me with this?
Let's say we have various files with different video quality:
one is extremely clear, another is blurred, one has rough colors.
Compare all files basically frame by frame and report to the user which has better quality.
(1) Color Quality detection...
To check which has better color, analyze the histograms of the test images. A histogram is a count of how many pixels have intensity X, where X ranges from 0 to 255 (each of the red, green and blue channels holds one of those 256 possible intensities).
There are many tutorials online about how to create a histogram since it's a basic task in computer graphics.
Generally it goes like:
First make three arrays (e.g., hist_Red, hist_Grn, hist_Blu) to hold the counts for the red, green and blue channels.
Break up (using a FOR loop) each pixel into its individual R/G/B channel components:
example:
temp_Red = this_pixel >> 16 & 0x0ff;
temp_Grn = this_pixel >> 8 & 0x0ff;
temp_Blu = this_pixel >> 0 & 0x0ff;
Then add +1 to that specific red/green/blue intensity bin in the relevant histogram.
example:
hist_Red[ temp_Red ] += 1;
hist_Grn[ temp_Grn ] += 1;
hist_Blu[ temp_Blu ] += 1;
By totaling the red, green and blue counts you get per-channel intensity arrays that could be plotted as histogram charts. Check which image's arrays hold the most high-intensity values to find the image with better color quality.
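If you're working in Python, a minimal sketch of those per-channel histograms with OpenCV (the file name is a placeholder) might look like:
import cv2

# Placeholder input; any 8-bit BGR image works.
img = cv2.imread("frame.png")

# One 256-bin histogram per channel (OpenCV orders channels B, G, R).
hist_Blu = cv2.calcHist([img], [0], None, [256], [0, 256])
hist_Grn = cv2.calcHist([img], [1], None, [256], [0, 256])
hist_Red = cv2.calcHist([img], [2], None, [256], [0, 256])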
(2) Detailed vs Blurred detection...
You can try using a convolution filter to detect blur in an image. The filter takes a kernel (a small matrix). A standard 3x3 edge-detect kernel is:
-1 -1 -1
-1  8 -1
-1 -1 -1
Blurred images produce fewer edges under such a filter, and therefore more black pixels in the output.
Use the heuristic that more black pixels equals a more blurred image (less detail).
You can read about convolutions here:
Lode's Computer Graphics Tutorial: Image Filtering
Image Convolution with C/C++ code
PDF: Image Manipulation: Filters and Convolutions
PDF: Convolution filters (read from page 10 onwards)
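If you'd rather not hand-roll the convolution, a common shortcut (a standard OpenCV idiom, not something from the links above) is the variance of the Laplacian: a blurred frame has a flatter edge response and therefore a lower variance. A hedged Python sketch, with placeholder file names:
import cv2

def blur_score(path):
    # Lower variance of the Laplacian = fewer/weaker edges = more blur.
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

# The sharper copy of the same frame should score higher.
print(blur_score("sharp.png"), blur_score("blurred.png"))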

OpenCV C2 types of images?

Does the second channel of a C2 image represent the alpha channel, or does C2 just fill the gap between C1 and C3/C4?
You are confusing color spaces with channels. For example, you have a grayscale color space, which is represented with 1 channel. Then you have BGR with 3 channels, and BGRA with 4, where the 4th channel is the alpha value. OpenCV supports several types of color spaces.
OpenCV is open to your needs: in some cases you have a Mat with 2 values per pixel, for example dense optical flow results, which store a motion vector for each pixel (an x,y vector). You may even create a grayscale image with an alpha value for whatever reason or algorithm you have; in that case it will be a CV_8UC2. However, this is not a standard color space in OpenCV, and many algorithms have hard constraints on the color space, so they may not work with this Mat type.
A cv::Mat can even have more than 4 channels (up to 512 the last time I checked; for more info see the constant CV_CN_MAX), but beware that this may not work with all OpenCV functions; it will be more like a container for your custom algorithms.
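To make the 2-channel case concrete, here is a hedged Python sketch using Farnebäck dense optical flow, whose output is exactly a 2-channel array (an x,y motion vector per pixel); the frame file names are placeholders:
import cv2

prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# Result is H x W x 2 (CV_32FC2): one (x, y) displacement per pixel.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
print(flow.shape, flow.dtype)  # (H, W, 2) float32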

Convert Kinect's depth to RGB

I'm using OpenNI and OpenCV (but without the latest code with OpenNI support). If I just send the depth channel to the screen, it looks dark and is difficult to interpret. So I want to show the depth channel to the user in color, but I cannot find how to do that without losing accuracy. Right now I do it like this:
xn::DepthMetaData xDepthMap;
depthGen.GetMetaData(xDepthMap);
XnDepthPixel* depthData = const_cast<XnDepthPixel*>(xDepthMap.Data());
cv::Mat depth(frame_height, frame_width, CV_16U, reinterpret_cast<void*>(depthData));
cv::Mat depthMat8UC1;
depth.convertTo(depthMat8UC1, CV_8UC1);
cv::Mat falseColorsMap;
cv::applyColorMap(depthMat8UC1, falseColorsMap, cv::COLORMAP_AUTUMN);
depthWriter << falseColorsMap;
But in this case I get worse output (losing details) than, for instance, Kinect's software for Windows shows me. So I'm looking for a function in OpenNI or OpenCV with a better transformation.
https://github.com/OpenNI/OpenNI2/blob/master/Samples/Common/OniSampleUtilities.h
The link is the code for histogram equalization. In short, it makes the probability of each level equal and optimizes the mapping between 10,000 depth levels and 255 display levels. That is why the Kinect's yellowish map looks better than the naive I = 255*z/z_range.
NOTE: don't use color for visualization, since the human eye is more sensitive to luminance change than to color variation. So with 255 levels of luminance you will get better contrast than with 255*255*255 levels of color. If you still decide to go down the color-mapping avenue, use the HSV color space, where you can manipulate hue 0..360 deg and value 1..0, and preferably set saturation to max. Map depth to hue and value, convert to RGB and display. Then go back to histogram equalization ;)
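For illustration, a minimal NumPy sketch of that equalization for a 16-bit depth map, assuming depth 0 means "no reading" (cv2.equalizeHist only accepts 8-bit input, so the CDF is built by hand):
import numpy as np

def equalize_depth(depth16):
    # Histogram over valid (non-zero) depths; 16-bit range = 65536 bins.
    hist, _ = np.histogram(depth16[depth16 > 0],
                           bins=65536, range=(0, 65536))
    cdf = hist.cumsum().astype(np.float64)
    if cdf[-1] == 0:
        return np.zeros(depth16.shape, np.uint8)
    cdf /= cdf[-1]  # normalize to 0..1 so every level is equally likely
    out = (cdf[depth16] * 255).astype(np.uint8)
    out[depth16 == 0] = 0  # keep "no reading" black
    return out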
Try this:
const float scaleFactor = 0.05f;
depth.convertTo(depthMat8UC1, CV_8UC1, scaleFactor);
imshow("depth gray",depthMat8UC1);
Play with the value to get a result you're happy with.

Estimate Brightness of an image in OpenCV

I have been trying to obtain the image brightness in OpenCV, and so far I have used calcHist and taken the average of the histogram values. However, I feel this is not accurate, as it does not actually determine the brightness of an image. I computed calcHist over a grayscale version of the image and tried to differentiate between the average values obtained from bright images and those from moderate ones. I have not been successful so far. Could you please suggest a method or algorithm, realizable in OpenCV, to estimate the brightness of an image? Thanks in advance.
I suppose the HSV color model will be useful for your problem, where channel V is Value:
"Value is the brightness of the color and varies with color saturation. It ranges from 0 to 100%. When the value is '0' the color space will be totally black. As the value increases, the color space brightens up and shows various colors."
So use the OpenCV method cvCvtColor(const CvArr* src, CvArr* dst, int code), which converts an image from one color space to another; in your case, code = CV_BGR2HSV. Then calculate the histogram of the third channel, V.
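A hedged sketch of the same idea in the modern Python API (cvCvtColor is the legacy C interface; the file name below is a placeholder):
import cv2

img = cv2.imread("photo.png")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

v = hsv[:, :, 2]  # the V (Value/brightness) channel
hist = cv2.calcHist([hsv], [2], None, [256], [0, 256])
print("mean brightness:", v.mean())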
I was about to ask the same, but then found that similar questions gave no satisfactory answers. All the answers I've found on SO deal with human observation of a single pixel's RGB vs HSV.
From my observations, the subjective brightness of an image also depends strongly on the pattern. A star in a dark sky may look brighter than a cloudy sky by day, while the average pixel value of the first image will be much smaller.
The images I use are grayscale cell images produced by a microscope. The forms vary considerably. Sometimes they are small bright dots on a very black background, sometimes larger, less bright areas on a not-so-dark background.
My approach is:
Find the histogram maximum (HMax), using a threshold to remove hot pixels.
Calculate the mean value of all pixels between HMax * 2/3 and HMax.
The ratio 2/3 could also be increased to 3/4 (which reduces the range of pixels considered bright).
The approach works quite well, as different cell patterns with the same titration produce similar brightness values.
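There is no single built-in call for this, but a NumPy sketch of the approach under my stated assumptions (grayscale input; the 0.999 quantile as the hot-pixel cutoff is my own placeholder) could look like:
import numpy as np

def bright_region_mean(gray, lower_frac=2/3):
    # HMax: histogram maximum, ignoring the brightest 0.1% as hot pixels.
    hmax = np.quantile(gray, 0.999)
    # Mean of all pixels between HMax * lower_frac and HMax.
    mask = (gray >= hmax * lower_frac) & (gray <= hmax)
    return gray[mask].mean() if mask.any() else 0.0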
P.S.: What I actually wanted to ask is, whether there is a similar function for such a calculation in OpenCV or SimpleCV. Many thanks for any comments!
I prefer Valentin's answer, but for 'yet another' way of determining average per-pixel brightness, you can use NumPy and a per-pixel Euclidean norm instead of an arithmetic mean over the channels. To me it has better results.
import numpy as np
from numpy.linalg import norm

def brightness(img):
    if len(img.shape) == 3:
        # Colored RGB or BGR (*Do Not* use HSV images with this function)
        # create brightness with euclidean norm
        return np.average(norm(img, axis=2)) / np.sqrt(3)
    else:
        # Grayscale
        return np.average(img)
A bit of OpenCV C++ source code for a trivial check to differentiate between light and dark images, inspired by the answer @ann-orlova provided years ago:
const int darkness_threshold = 128; // you need to determine what threshold to use
cv::Mat mat = get_image_from_device();

cv::Mat hsv;
cv::cvtColor(mat, hsv, CV_BGR2HSV); // cv::COLOR_BGR2HSV in newer OpenCV

const auto result = cv::mean(hsv);
// cv::mean() will return 3 numbers, one for each channel:
// 0=hue
// 1=saturation
// 2=value (brightness)

if (result[2] < darkness_threshold)
{
    process_dark_image(mat);
}
else
{
    process_light_image(mat);
}

Opencv Motion detection with tracking

I need robust motion detection and tracking in webcam video frames. The background is always the same. The aim is to identify the position of the object, if possible without the shadows, though removing shadows is not urgent. I've tried the OpenCV approach of background subtraction and thresholding, but this relies on a single image as the background. What if the background changes a little in brightness (or the camera auto-focuses)? I need the algorithm to be robust to small changes such as brightness or some shadows.
Robust methods for tracking are part of broad research interests being developed all around the world...
Here are maybe some keys to solving your problem, which is very interesting but wide open.
First, a lot of these methods assume brightness constancy (therefore what you ask is difficult to achieve). For instance:
Lucas-Kanade
Horn-Schunck
Block matching
These are widely used for tracking but assume brightness constancy.
Other interesting options are meanshift and camshift tracking, but you need a projection to follow... However, you can use a back-projection computed according to a certain threshold to fit your needs for robustness; see the sketch below.
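Meanwhile, as a concrete starting point (my suggestion, not one of the methods above): OpenCV's Gaussian-mixture background subtractor keeps a per-pixel adaptive model, so it tolerates gradual brightness changes and can flag shadows separately. A minimal sketch, assuming OpenCV 3+ and a webcam at index 0:
import cv2

cap = cv2.VideoCapture(0)
bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                        detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)   # 255 = foreground, 127 = shadow
    mask[mask == 127] = 0    # drop the shadow pixels
    cv2.imshow("motion mask", mask)
    if cv2.waitKey(30) & 0xFF == 27:  # Esc quits
        break
cap.release()
cv2.destroyAllWindows()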
I'll post later about that,
Julien,
When you try the thresholding in OpenCV, are you doing it with RGB (red, green, blue) or HSV (hue, saturation, value) color formats? From personal experience, I find HSV encoding to be far superior for tracking colored objects in video footage when used in conjunction with OpenCV for thresholding and cvBlobsLib for identifying blob locations.
HSV is easier since it only needs a single number to detect the color ("hue"), in spite of the very real probability of there being several shades of that color, ranging from lighter to darker shades. (The amount of color and the brightness of the color are handled by the "saturation" and "value" parameters respectively.)
I threshold the HSV reference image ('imgHSV') to obtain a binary (black and white) image using a call to the cvInRangeS() OpenCV API:
cvInRangeS( imgHSV,
cvScalar( 104, 178, 70 ),
cvScalar( 130, 240, 124 ),
imgThresh );
In the above example, the two cvScalar parameters are the lower and upper bounds of HSV values representing hues that are blueish in color. In my own experiments I was able to obtain suitable max/min values by grabbing screenshots of the object(s) I was interested in tracking and observing the kinds of hue/saturation/value numbers that occur.
More detailed descriptions with a code sample can be found on this blog posting.
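For reference, the same thresholding in the modern Python API (the bounds are the blueish HSV range from the snippet above; the file name is a placeholder) might look like:
import cv2
import numpy as np

img = cv2.imread("frame.png")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Keep only pixels whose H/S/V fall inside the blueish range.
lower = np.array([104, 178, 70])
upper = np.array([130, 240, 124])
imgThresh = cv2.inRange(hsv, lower, upper)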
Adrian has a cool tutorial: http://www.pyimagesearch.com/2015/05/25/basic-motion-detection-and-tracking-with-python-and-opencv/
I followed it and ran a good test experiment:
https://youtu.be/HJBOOZVefXA
I use a static image as the background as well:
frameDelta = cv2.absdiff(firstFrame, gray)
thresh = cv2.threshold(frameDelta, 25, 255, cv2.THRESH_BINARY)[1]
thresh = cv2.dilate(thresh, None, iterations=2)
(cnts, _) = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
                             cv2.CHAIN_APPROX_SIMPLE)
These four lines of code detect motion well.
Good luck!
