OpenCV Mean/SD filter

I'm throwing this out there in the hope that someone has attempted something this ridiculous before. My goal is to take an input image and segment it based upon the standard deviation of a small window around each pixel. Basically, this should mathematically resemble a Gaussian or box filter, in that it will be applied over a compile-time (or even run-time) user-specified window size around each pixel, and the destination array will contain the SD information at each pixel, in an image the same size as the original.
The idea is to do this on an image in HSV space, so that I can easily find regions of homogeneous color (i.e. those with small local SDs in the Hue and Sat planes) and extract them from the image for more in-depth processing.
So the question is, has anyone ever built a custom filter like this before? I don't know how to do the SD in a simple box-type filter kernel like the ones used for Gaussian and box blurs, so I'm guessing I'll have to use the FilterEngine construct. Also, I forgot to mention I'm doing this in C++.
Your advice and musings are much appreciated.

Wikipedia has a nice explanation of standard deviation, which you can use to build a standard deviation filter.
Basically, it boils down to blurring the image with a box filter, blurring the square of the image with a box filter, and taking the square root of their difference.
UPDATE: This is probably better shown with the equation from Wikipedia: Var(X) = E[X^2] - (E[X])^2, so the standard deviation is sqrt(E[X^2] - (E[X])^2).
You can think of the OpenCV blur function as representing the expected value (i.e., E[X], a.k.a. the sample mean) of the neighborhood of interest. The random samples X in this case are represented by the image pixels in the local neighborhood. Therefore, by the above equivalence, we have something like sqrt(blur(img^2) - blur(img)^2) in OpenCV. Doing it this way allows you to compute the local means and standard deviations.
Also, just in case you are curious about the mathematical proof: this equivalence is known as the computational formula for variance.
Here is how you can do this in OpenCV:
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
using namespace std;
using namespace cv;
Mat mat2gray(const Mat& src)
{
    // normalize to [0,1] for display
    Mat dst;
    normalize(src, dst, 0.0, 1.0, NORM_MINMAX);
    return dst;
}

int main()
{
    Mat image = imread("coke-can.jpg", 0);

    Mat image32f;
    image.convertTo(image32f, CV_32F);

    // local mean: E[X]
    Mat mu;
    blur(image32f, mu, Size(3, 3));

    // local mean of the squared image: E[X^2]
    Mat mu2;
    blur(image32f.mul(image32f), mu2, Size(3, 3));

    // standard deviation: sqrt(E[X^2] - E[X]^2)
    Mat sigma;
    cv::sqrt(mu2 - mu.mul(mu), sigma);

    imshow("coke", mat2gray(image32f));
    imshow("mu", mat2gray(mu));
    imshow("sigma", mat2gray(sigma));
    waitKey();
    return 0;
}
This produces three images: the original, the local mean, and the local standard deviation.
Hope that helps!

In case you want to use this in a more general way, note that it can produce NaN values: because of floating-point round-off, variance values close to zero can sometimes come out slightly negative.
Mat sigma;
cv::sqrt(mu2 - mu.mul(mu), sigma);
The correct way should be:
Mat sigma;
cv::sqrt(cv::abs(mu2 - mu.mul(mu)), sigma);
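An alternative (a sketch, not from the original answer) is to clamp the variance at zero rather than take its absolute value, since a true variance is never negative:
Mat variance = mu2 - mu.mul(mu);
variance = cv::max(variance, 0.0); // clamp tiny negative values caused by round-off
Mat sigma;
cv::sqrt(variance, sigma);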

Related

opencv reverse perspective transformation?

I start with the following image:
Using OpenCV I rotate it 45° about the Y axis to get the following:
If I tried a little harder I could get it not to be cropped in the foreground.
Now my question: does opencv have the tools to do the reverse transformation? Could I take the second image and produce the first? (Not concerned about blurred pixels.) Please suggest a method.
Yes.
You already made a homography matrix to produce this picture, right?
Just invert it (H.inv()) or pass the WARP_INVERSE_MAP flag.
No need for all that other stuff.
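A minimal sketch of that idea (hypothetical names: H is the homography used for the forward warp, warped is the rotated image, originalSize is the size of the source image):
#include <opencv2/imgproc/imgproc.hpp>

cv::Mat undoWarp(const cv::Mat& warped, const cv::Mat& H, cv::Size originalSize)
{
    cv::Mat restored;
    // WARP_INVERSE_MAP makes warpPerspective treat H as the map from
    // destination to source, so there is no need to call H.inv() yourself.
    cv::warpPerspective(warped, restored, H, originalSize,
                        cv::INTER_LINEAR | cv::WARP_INVERSE_MAP);
    return restored;
}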
Yes, it's possible. After the 45° rotation, some regions at the top and bottom are missing (not seen); you simply cannot get those parts back.
By using warpPerspective() and getPerspectiveTransform() together, you can easily get back to the first image. The only thing you need to consider is that you have to find the corner points of the rotated image: left_up, right_up, left_down, right_down respectively. Since you didn't specify the language, I used C++ to implement the functions. Here is the output and code:
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>
#include <fstream>
int main()
{
    cv::Mat begin = cv::imread("/ur/img/dir/input.jpg");
    cv::Mat output;

    // Corner points of the rotated image (I got these by looking in Paint):
    cv::Point2f Poly2[4] = {
        cv::Point2f(31, 9),    // left_up
        cv::Point2f(342, 51),  // right_up
        cv::Point2f(28, 571),  // left_down
        cv::Point2f(345, 525)  // right_down
    };

    // Corners of the picture I want to transform back to:
    cv::Point2f Points[4] = {
        cv::Point2f(0, 0),
        cv::Point2f(432, 0),
        cv::Point2f(0, 576),
        cv::Point2f(432, 576)
    };

    cv::Mat Matrix = cv::getPerspectiveTransform(Poly2, Points);
    cv::warpPerspective(begin, output, Matrix, cv::Size(432, 576));

    cv::imshow("Input", begin);
    cv::imshow("Output", output);
    cv::imwrite("/home/yns/Downloads/tt2.jpg", output);
    cv::waitKey(0);
    return 0;
}

best way to segment a tree in plantation aerial image using opencv

So I want to segment a tree from an aerial image.
Sample image (original image):
And I expect a result like this (or better):
The first thing I did was to use the threshold function in OpenCV, and I didn't get the expected result (it can't segment the tree crown). I then used the black-and-white filter in Photoshop with some adjusted parameters (the result is shown below), followed by thresholding and a morphological filter, which gave the result shown above.
My question: is there a way to do the segmentation without using Photoshop first and produce a segmented image like the second one (or better)? Or maybe a way to produce an image like the third one?
PS: you can read about the Photoshop B&W filter here: https://dsp.stackexchange.com/questions/688/whats-the-algorithm-behind-photoshops-black-and-white-adjustment-layer
You can do it in OpenCV. The code below will basically do the same operations you did in Photoshop. You may need to tune some of the parameters to get exactly what you want.
#include "opencv2\opencv.hpp"
using namespace cv;
int main(int, char**)
{
Mat3b img = imread("path_to_image");
// Use HSV color to threshold the image
Mat3b hsv;
cvtColor(img, hsv, COLOR_BGR2HSV);
// Apply a treshold
// HSV values in OpenCV are not in [0,100], but:
// H in [0,180]
// S,V in [0,255]
Mat1b res;
inRange(hsv, Scalar(100, 80, 100), Scalar(120, 255, 255), res);
// Negate the image
res = ~res;
// Apply morphology
Mat element = getStructuringElement( MORPH_ELLIPSE, Size(5,5));
morphologyEx(res, res, MORPH_ERODE, element, Point(-1,-1), 2);
morphologyEx(res, res, MORPH_OPEN, element);
// Blending
Mat3b green(res.size(), Vec3b(0,0,0));
for(int r=0; r<res.rows; ++r) {
for(int c=0; c<res.cols; ++c) {
if(res(r,c)) { green(r,c)[1] = uchar(255); }
}
}
Mat3b blend;
addWeighted(img, 0.7, green, 0.3, 0.0, blend);
imshow("result", res);
imshow("blend", blend);
waitKey();
return 0;
}
The resulting image is:
The blended image is:
This has been an interesting topic of research in the past - mainly in the remote sensing literature.
While the morphological methods proposed using OpenCV will work in certain cases, you might want to consider more sophisticated approaches (depending on how variable your data is and how robust a detector you want to build).
For example, this paper, and those who cite it, give you a flavour of what has been attempted.
Pragmatically speaking, I think a neat solution would be one founded more on statistical texture analysis. There are many ways to classify (and then count) regions of an image as belonging to a texture (co-occurrence matrices, filter banks, textons, wavelets, etc.).
Sadly, this is an area where OpenCV is rather deficient; it only provides a subset of the useful algorithms out there... However, here are a few quick ideas (none of which I have tried directly, just ones I'm aware of that are based on underlying OpenCV):
Use OpenCV's Gabor filter support and cluster the responses (a minimal sketch follows this list).
You could also possibly train an OpenCV SVM with Local Binary Patterns.
A new library - but probably not so relevant for static images - LIBDT
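For the first idea, here is a minimal sketch of a Gabor-filter-bank-plus-clustering pipeline (the file name, kernel parameters, and the choice of two clusters are all assumptions you would tune):
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <vector>

int main()
{
    // "aerial.jpg" is a placeholder file name
    cv::Mat img = cv::imread("aerial.jpg", 0);
    cv::Mat img32f;
    img.convertTo(img32f, CV_32F, 1.0 / 255.0);

    // Build a small Gabor filter bank over several orientations and collect
    // the per-pixel responses as feature vectors.
    std::vector<cv::Mat> responses;
    for (int i = 0; i < 4; ++i)
    {
        double theta = i * CV_PI / 4.0;
        cv::Mat kernel = cv::getGaborKernel(cv::Size(21, 21), 4.0, theta, 10.0, 0.5);
        cv::Mat response;
        cv::filter2D(img32f, response, CV_32F, kernel);
        responses.push_back(response);
    }

    // Stack the responses into an N x 4 sample matrix and cluster with k-means.
    cv::Mat samples(img.rows * img.cols, (int)responses.size(), CV_32F);
    for (int k = 0; k < (int)responses.size(); ++k)
        responses[k].reshape(1, img.rows * img.cols).copyTo(samples.col(k));

    cv::Mat labels, centers;
    cv::kmeans(samples, 2, labels,
               cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 10, 1.0),
               3, cv::KMEANS_PP_CENTERS, centers);

    // Reshape the labels back into an image: one cluster should roughly
    // correspond to the tree-crown texture.
    cv::Mat segmentation = labels.reshape(1, img.rows);
    segmentation.convertTo(segmentation, CV_8U, 255);
    cv::imshow("segmentation", segmentation);
    cv::waitKey();
    return 0;
}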
Anyways, I hope you get something that just works for your purposes!

Recognizing a topological graph in a noisy image

I am not at all experienced with machine learning or image processing, so I'm hoping someone can give some pointers or first thoughts on this problem:
The image below is an example of a photograph of a tomato plant leaf. We have thousands of these. We need to trace the veins and output a graph. We have already had undergraduates trace the veins by hand for a few hundred, so I presume this can be a training set for a machine learning approach.
So my question: what types of filters/classifiers immediately come to mind? Is there anything you recommend I read or take a look at?
Our first thought was to look at directional derivatives. Each pixel can be classified as being in an edge or not at a given angle, and if a pixel is in an edge for a lot of different angles, then it's probably a blotch and not a vein. The gradient threshold and the allowed angle variation could then be adjusted by the learning, but this is probably not the best way...
Thank you for any help!
Two methods immediately come to mind:
1) a sliding-window neural network classifier;
2) identifying a threshold that sets apart dark/light pixels in the image (this could be done using machine learning or perhaps a simple computation) and then doing a flood fill to identify regions in the image.
The second method should be simpler and quicker, so I'd perhaps prototype it first to see if it gives good enough answers.
In any case, my intuition is that it's going to be easier to solve the dual problem - not trying to find edges and nodes of the graph, but finding its faces. From that, you get the graph itself easily.
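A minimal sketch of the second method along those lines (Otsu's method picks the dark/light threshold automatically; the file name and the label budget are placeholders):
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    cv::Mat gray = cv::imread("leaf.png", 0);
    cv::Mat bw;
    // Otsu automatically separates dark (vein) from light (face) pixels
    cv::threshold(gray, bw, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

    // Flood-fill each unvisited light region with a distinct gray label;
    // the labeled regions are candidate faces of the vein graph.
    cv::Mat labels = bw.clone();
    int label = 0;
    for (int r = 0; r < labels.rows; ++r)
        for (int c = 0; c < labels.cols; ++c)
            if (labels.at<uchar>(r, c) == 255 && label < 254)
                cv::floodFill(labels, cv::Point(c, r), cv::Scalar(++label));

    cv::imshow("faces", labels);
    cv::waitKey();
    return 0;
}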
I did this very simple program to filter the vein regions using OpenCV. I've added comments to explain the operations. Resulting images for the intermediate steps are saved. Hope it helps.
#include "stdafx.h"
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
using namespace cv;
using namespace std;
#define INPUT_FILE "wMTjH3L.png"
#define OUTPUT_FOLDER_PATH string("")
#define CONTOUR_AREA_THRESHOLD 30.0
int _tmain(int argc, _TCHAR* argv[])
{
    // read image as grayscale
    Mat im = imread(INPUT_FILE, CV_LOAD_IMAGE_GRAYSCALE);
    imwrite(OUTPUT_FOLDER_PATH + string("gray.jpg"), im);

    // smooth the image with a Gaussian filter
    Mat blurred;
    GaussianBlur(im, blurred, Size(3, 3), 1.5);
    imwrite(OUTPUT_FOLDER_PATH + string("blurred.jpg"), blurred);

    // flatten lighter regions while retaining the darker vein regions using morphological opening
    Mat morph;
    Mat morphKernel = getStructuringElement(MORPH_ELLIPSE, Size(5, 5));
    morphologyEx(blurred, morph, MORPH_OPEN, morphKernel);
    imwrite(OUTPUT_FOLDER_PATH + string("morph.jpg"), morph);

    // apply adaptive thresholding
    Mat adaptTh;
    adaptiveThreshold(morph, adaptTh, 255.0, ADAPTIVE_THRESH_GAUSSIAN_C, CV_THRESH_BINARY_INV, 7, 2.0);
    imwrite(OUTPUT_FOLDER_PATH + string("adaptth.jpg"), adaptTh);

    // morphological closing to merge disjoint regions
    Mat morphBin;
    Mat morphKernelBin = getStructuringElement(MORPH_ELLIPSE, Size(3, 3));
    morphologyEx(adaptTh, morphBin, MORPH_CLOSE, morphKernelBin);
    imwrite(OUTPUT_FOLDER_PATH + string("adptmorph.jpg"), morphBin);

    // find contours
    vector<vector<Point>> contours;
    vector<Vec4i> hierarchy;
    findContours(morphBin, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));

    // filter the top-level contours by region area and draw them in random colors
    RNG rng(12345);
    Mat drawing = Mat::zeros(morphBin.size(), CV_8UC3);
    for (int idx = 0; idx >= 0 && !contours.empty(); idx = hierarchy[idx][0])
    {
        if (contourArea(contours[idx]) > CONTOUR_AREA_THRESHOLD)
        {
            Scalar color(rng.uniform(0, 256), rng.uniform(0, 256), rng.uniform(0, 256));
            drawContours(drawing, contours, idx, color, CV_FILLED, 8, hierarchy);
        }
    }
    imwrite(OUTPUT_FOLDER_PATH + string("cont.jpg"), drawing);
    return 0;
}
The output looks like this for the provided sample image:

How to estimate 2D similarity transformation (linear conformal, nonreflective similarity) in OpenCV?

I'm trying to search for a specific object in input images by matching SIFT descriptors and finding the transformation matrix with RANSAC. The object can only be modified in the scene by a similarity transform in 2D space (scaled, rotated, translated), so I need to estimate a four-parameter similarity transform (a 2x3 matrix) instead of the full 3x3 homography. How can I achieve this in OpenCV?
You can use estimateRigidTransform (I do not know for certain whether it uses RANSAC, but the code at http://code.opencv.org/projects/opencv/repository/revisions/2.4.4/entry/modules/video/src/lkpyramid.cpp mentions RANSAC in a comment). The third parameter is set to false in order to get just scale + rotation + translation:
#include <vector>
#include <cmath>
#include <iostream>
#include "opencv2/video/tracking.hpp"

int main(int argc, char** argv)
{
    std::vector<cv::Point2f> p1s, p2s;

    // a unit diamond, rotated 45 degrees and translated by (1,1)
    p1s.push_back(cv::Point2f( 1,  0));
    p1s.push_back(cv::Point2f( 0,  1));
    p1s.push_back(cv::Point2f(-1,  0));
    p1s.push_back(cv::Point2f( 0, -1));

    p2s.push_back(cv::Point2f(1 + sqrt(2.0) / 2, 1 + sqrt(2.0) / 2));
    p2s.push_back(cv::Point2f(1 - sqrt(2.0) / 2, 1 + sqrt(2.0) / 2));
    p2s.push_back(cv::Point2f(1 - sqrt(2.0) / 2, 1 - sqrt(2.0) / 2));
    p2s.push_back(cv::Point2f(1 + sqrt(2.0) / 2, 1 - sqrt(2.0) / 2));

    cv::Mat t = cv::estimateRigidTransform(p1s, p2s, false);
    std::cout << t << "\n";
    return 0;
}
Compiled and tested with OpenCV 2.4.4.
The output is:
[0.7071067988872528, -0.7071067988872528, 1.000000029802322;
0.7071067988872528, 0.7071067988872528, 1.000000029802322]
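The resulting 2x3 matrix can then be applied directly, for example (a sketch; img stands for any input image):
// warp an image with the estimated similarity transform
cv::Mat warped;
cv::warpAffine(img, warped, t, img.size());
// or map the original points with the same 2x3 matrix
std::vector<cv::Point2f> mapped;
cv::transform(p1s, mapped, t);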
You can also find an affine transformation between the point sets using OpenCV; this is slightly more general than the case you are describing (known as a similarity transform), as it allows shearing of the shapes as well.
It can be performed with the function getAffineTransform(InputArray src, InputArray dst). This takes two sets of three points and calculates the affine transform between them.
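A minimal sketch (the three correspondences below just re-create the 45° rotation plus (1,1) translation from the example above):
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>

int main()
{
    // getAffineTransform takes exactly three point pairs and solves for the
    // 2x3 affine matrix mapping src onto dst exactly (no outlier rejection).
    cv::Point2f src[3] = { cv::Point2f(0, 0), cv::Point2f(1, 0), cv::Point2f(0, 1) };
    cv::Point2f dst[3] = { cv::Point2f(1, 1),
                           cv::Point2f(1 + 0.7071f, 1 + 0.7071f),
                           cv::Point2f(1 - 0.7071f, 1 + 0.7071f) };
    cv::Mat A = cv::getAffineTransform(src, dst);
    std::cout << A << "\n"; // approximately [0.7071, -0.7071, 1; 0.7071, 0.7071, 1]
    return 0;
}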

Detect RGB color interval with OpenCV and C++

I would like to detect a red colored object in a video or image, with OpenCV and C++. What algorithms are available to do this?
My idea is to compare the ratios between color channels; when the brightness varies, these ratios remain roughly constant. So I want to determine the interval of acceptable values for the colors of the zone of interest.
For the red case I look at R(x,y), then at G(x,y)/R(x,y) and B(x,y)/R(x,y).
I would then find the ranges of acceptable values: to get a first idea, extract the maximum and minimum of each ratio from a palette image of reds.
I would like to end up with something like this:
if minR <= R(x,y) <= maxR and minG <= G(x,y) <= maxG and minB <= B(x,y) <= maxB
then color(x,y) = WHITE else color(x,y) = BLACK
Preprocess the image using cv::inRange() with the necessary color bounds to isolate red. You may want to transform to a color-space like HSV or YCbCr for more stable color bounds because chrominance and luminance are better separated. You can use cvtColor() for this. Check out my answer here for a good example of using inRange() with createTrackbar().
So, the basic template would be:
Mat redColorOnly;
inRange(src, Scalar(lowBlue, lowGreen, lowRed), Scalar(highBlue, highGreen, highRed), redColorOnly);
detectSquares(redColorOnly);
EDIT: Just use the trackbars to determine the color range you want to isolate, and then use the color intervals that you find work. You don't have to keep using the trackbars.
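For example, a minimal sketch of interactive tuning (only the red-channel bounds get sliders here; extend to the other channels as needed, and the file name is a placeholder):
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

int main()
{
    cv::Mat src = cv::imread("input.png");
    cv::Mat mask;
    int lowR = 0, highR = 255;
    cv::namedWindow("controls");
    cv::createTrackbar("lowR", "controls", &lowR, 255);
    cv::createTrackbar("highR", "controls", &highR, 255);
    while (cv::waitKey(30) != 27) // press Esc to quit
    {
        cv::inRange(src, cv::Scalar(0, 0, lowR), cv::Scalar(255, 255, highR), mask);
        cv::imshow("mask", mask);
    }
    return 0;
}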
EXAMPLE:
So, for a complete example of the template, here you go.
I created a simple (and ideal) image in GIMP, shown below:
Then I created this program to filter all but the red squares:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <cassert>
#include <iostream>

using namespace std;
using namespace cv;

Mat redFilter(const Mat& src)
{
    assert(src.type() == CV_8UC3);

    // keep only pixels whose B and G are exactly 0 (fine for this ideal image)
    Mat redOnly;
    inRange(src, Scalar(0, 0, 0), Scalar(0, 0, 255), redOnly);
    return redOnly;
}

int main(int argc, char** argv)
{
    Mat input = imread("colored_squares.png");
    imshow("input", input);
    waitKey();

    Mat redOnly = redFilter(input);
    imshow("redOnly", redOnly);
    waitKey();

    // detect squares after filtering...
    return 0;
}
NOTE: You will not be able to use these exact same filter intervals for your real imagery; I just suggest you tune the intervals with trackbars to see what is acceptable.
The output looks like this:
Voila! Only the red square remains :)
Enjoy :)
In that case, try to find some unique feature of your required square which distinguishes it from the other squares.
For example:
1) Color of the square: if its color is different from all the other squares, you can check inside each square and select the one with the required color, as explained by mevatron.
2) Size of the square: if you know the size of the square, compare the size of each square and select the best match.
You can convert your image from RGB to HSV using the built-in function. Every color then falls in some HSV value range, so you can find that range, use it as a threshold, and separate those pixels from the others.
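For example, a minimal sketch for red in HSV (the threshold values are rough guesses you would tune; note that red wraps around hue 0 in OpenCV's [0,180] hue range, so two intervals are combined):
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

cv::Mat detectRed(const cv::Mat& bgr)
{
    cv::Mat hsv, lowRed, highRed;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
    // red occupies both ends of the hue axis, so threshold twice and combine
    cv::inRange(hsv, cv::Scalar(0, 70, 50), cv::Scalar(10, 255, 255), lowRed);
    cv::inRange(hsv, cv::Scalar(170, 70, 50), cv::Scalar(180, 255, 255), highRed);
    return lowRed | highRed;
}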
