OpenCV polar transform selective region - image-processing

I want to restrict the operating region of the polar transform in OpenCV's cvLogPolar function, and I would consider rewriting the function from scratch. I am unwrapping a fisheye lens image to yield a panorama, and I want to make it as efficient as possible. Much of the image is cropped away after the transform, giving a donut-shaped region of interest in the input image.
This means much of the processing is wasted on black pixels.
This should be pretty simple, right? The function should take two additional arguments for the clipping extents, radius1 and radius2. Here is the relevant polar-to-Cartesian portion of the cvLogPolar function from imgwarp.cpp:
cvLogPolar( const CvArr* srcarr, CvArr* dstarr,
            CvPoint2D32f center, double M, int flags )
{
    cv::Ptr<CvMat> mapx, mapy;
    CvMat srcstub, *src = cvGetMat(srcarr, &srcstub);
    CvMat dststub, *dst = cvGetMat(dstarr, &dststub);
    CvSize ssize, dsize;

    if( !CV_ARE_TYPES_EQ( src, dst ))
        CV_Error( CV_StsUnmatchedFormats, "" );

    if( M <= 0 )
        CV_Error( CV_StsOutOfRange, "M should be >0" );

    ssize = cvGetMatSize(src);
    dsize = cvGetMatSize(dst);

    mapx = cvCreateMat( dsize.height, dsize.width, CV_32F );
    mapy = cvCreateMat( dsize.height, dsize.width, CV_32F );

    if( !(flags & CV_WARP_INVERSE_MAP) )
        //---snip---
    else
    {
        int x, y;
        CvMat bufx, bufy, bufp, bufa;
        double ascale = ssize.height/(2*CV_PI);
        cv::AutoBuffer<float> _buf(4*dsize.width);
        float* buf = _buf;

        bufx = cvMat( 1, dsize.width, CV_32F, buf );
        bufy = cvMat( 1, dsize.width, CV_32F, buf + dsize.width );
        bufp = cvMat( 1, dsize.width, CV_32F, buf + dsize.width*2 );
        bufa = cvMat( 1, dsize.width, CV_32F, buf + dsize.width*3 );

        for( x = 0; x < dsize.width; x++ )
            bufx.data.fl[x] = (float)x - center.x;

        for( y = 0; y < dsize.height; y++ )
        {
            float* mx = (float*)(mapx->data.ptr + y*mapx->step);
            float* my = (float*)(mapy->data.ptr + y*mapy->step);

            for( x = 0; x < dsize.width; x++ )
                bufy.data.fl[x] = (float)y - center.y;

#if 1
            cvCartToPolar( &bufx, &bufy, &bufp, &bufa );

            for( x = 0; x < dsize.width; x++ )
                bufp.data.fl[x] += 1.f;

            cvLog( &bufp, &bufp );

            for( x = 0; x < dsize.width; x++ )
            {
                double rho = bufp.data.fl[x]*M;
                double phi = bufa.data.fl[x]*ascale;

                mx[x] = (float)rho;
                my[x] = (float)phi;
            }
#else
            //---snip---
#endif
        }
    }

    cvRemap( src, dst, mapx, mapy, flags, cvScalarAll(0) );
}
Since the routine works by iterating through the pixels of the destination image, the r1 and r2 clipping radii would just need to be translated into a y1..y2 row range. Then we just change the outer loop: for( y = 0; y < dsize.height; y++ ) becomes for( y = y1; y < y2; y++ ).
Correct?
What about constraining cvRemap? I am hoping it either ignores unmapped pixels or that the extra cost is negligible.

I ended up doing a different optimization: I store the result of the polar transform in persistent remapping matrices. This helps a LOT. If you're doing a polar unwrap on full-motion video using the same polar transform mapping every time, you don't want to recalculate the transform with a million sin/cos operations every single frame. So this just required a small modification to the logPolar/linearPolar operations in the OpenCV source so that the remap matrices are saved outside the call. A rough sketch of the idea follows.
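This is not the actual patch, just a sketch of the caching idea (assuming <opencv2/opencv.hpp> and <cmath>, and assuming the forward log-polar layout, column = M*log(rho + 1) and row = angle, is what gets cached); cv::remap then does all the per-frame work:

struct CachedLogPolar   // sketch only, not from the OpenCV sources
{
    cv::Mat mapx, mapy;   // persistent remap tables

    // dsize: panorama size; center, M: the parameters you would pass to cvLogPolar
    void build(cv::Size dsize, cv::Point2f center, double M)
    {
        mapx.create(dsize, CV_32F);
        mapy.create(dsize, CV_32F);
        for (int y = 0; y < dsize.height; y++)
        {
            double phi = y * 2.0 * CV_PI / dsize.height;      // row -> angle
            for (int x = 0; x < dsize.width; x++)
            {
                double rho = std::exp(x / M) - 1.0;           // column -> radius, inverting log(p + 1)*M
                mapx.at<float>(y, x) = (float)(center.x + rho * std::cos(phi));
                mapy.at<float>(y, x) = (float)(center.y + rho * std::sin(phi));
            }
        }
    }

    // per-frame cost is just the remap: no sin/cos/log at all
    void unwrap(const cv::Mat& fisheye, cv::Mat& panorama) const
    {
        cv::remap(fisheye, panorama, mapx, mapy, cv::INTER_LINEAR);
    }
};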

Related

Comparing openCv PnP with openGv PnP

I am trying to build a test project to compare the OpenCV solvePnP implementation with the OpenGV one.
The OpenCV one is documented here:
https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#solvepnp
and the OpenGV one here:
https://laurentkneip.github.io/opengv/page_how_to_use.html
Using the OpenCV example code, I find a chessboard in an image and construct the matching 3D points. I run the OpenCV PnP, then set up the OpenGV solver. The OpenCV PnP runs fine and prints these values:
//rotation
-0.003040771263293328, 0.9797142824436152, -0.2003763421317906;
0.0623096853748876, 0.2001735322445355, 0.977777101438374]
//translation
[-12.06549797067309;
-9.533070368412945;
37.6825295047483]
I test by reprojecting the 3D points, and it looks good.
The OpenGV PnP, however, prints nan for all values. I have tried to follow the example code, but I must be making a mistake somewhere. The code is:
int main(int argc, char **argv) {
cv::Mat matImg = cv::imread("chess.jpg");
cv::Size boardSize(8, 6);
//Construct the chessboard model
double squareSize = 2.80;
std::vector<cv::Point3f> objectPoints;
for (int i = 0; i < boardSize.height; i++) {
for (int j = 0; j < boardSize.width; j++) {
objectPoints.push_back(
cv::Point3f(float(j * squareSize), float(i * squareSize), 0.f));
}
}
cv::Mat rvec, tvec;
cv::Mat cameraMatrix, distCoeffs;
cv::FileStorage fs("CalibrationData.xml", cv::FileStorage::READ);
fs["cameraMatrix"] >> cameraMatrix;
fs["dist_coeffs"] >> distCoeffs;
//Found chessboard corners
std::vector<cv::Point2f> imagePoints;
bool found = cv::findChessboardCorners(matImg, boardSize, imagePoints, cv::CALIB_CB_FAST_CHECK);
if (found) {
cv::drawChessboardCorners(matImg, boardSize, cv::Mat(imagePoints), found);
//SolvePnP
cv::solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);
drawAxis(matImg, cameraMatrix, distCoeffs, rvec, tvec, squareSize);
}
//cv to matrix
cv::Mat R;
cv::Rodrigues(rvec, R);
std::cout << "results from cv:" << R << tvec << std::endl;
//START OPEN GV
//vars
bearingVectors_t bearingVectors;
points_t points;
rotation_t rotation;
//add points to the gv type
for (int i = 0; i < objectPoints.size(); ++i)
{
point_t pnt;
pnt.x() = objectPoints[i].x;
pnt.y() = objectPoints[i].y;
pnt.z() = objectPoints[i].z;
points.push_back(pnt);
}
/*
K is the common 3x3 camera matrix that you can compose with cx, cy, fx, and fy.
You put the image point into homogeneous form (append a 1),
multiply it with the inverse of K from the left, which gives you a normalized image point (a spatial direction vector).
You normalize that to norm 1.
*/
//to homogeneous
std::vector<cv::Point3f> imagePointsH;
convertPointsToHomogeneous(imagePoints, imagePointsH);
//multiply by K.Inv
for (int i = 0; i < imagePointsH.size(); i++)
{
cv::Point3f pt = imagePointsH[i];
cv::Mat ptMat(3, 1, cameraMatrix.type());
ptMat.at<double>(0, 0) = pt.x;
ptMat.at<double>(1, 0) = pt.y;
ptMat.at<double>(2, 0) = pt.z;
cv::Mat dstMat = cameraMatrix.inv() * ptMat;
//store as bearing vector
bearingVector_t bvec;
bvec.x() = dstMat.at<double>(0, 0);
bvec.y() = dstMat.at<double>(1, 0);
bvec.z() = dstMat.at<double>(2, 0);
bvec.normalize();
bearingVectors.push_back(bvec);
}
//create a central absolute adapter
absolute_pose::CentralAbsoluteAdapter adapter(
bearingVectors,
points,
rotation);
size_t iterations = 50;
std::cout << "running epnp (all correspondences)" << std::endl;
transformation_t epnp_transformation;
for (size_t i = 0; i < iterations; i++)
epnp_transformation = absolute_pose::epnp(adapter);
std::cout << "results from epnp algorithm:" << std::endl;
std::cout << epnp_transformation << std::endl << std::endl;
return 0;
}
Where am i going wrong in setting up the openGv Pnp solver?
Years later, I had this same issue and solved it. To convert OpenCV points to OpenGV bearing vectors, you can do this:
bearingVectors_t bearingVectors;
std::vector<cv::Point2f> dd2;
const int N1 = static_cast<int>(dd2.size());
cv::Mat points1_mat = cv::Mat(dd2).reshape(1);
// first rectify points and construct homogeneous points
// construct homogeneous points
cv::Mat ones_col1 = cv::Mat::ones(N1, 1, CV_32F);
cv::hconcat(points1_mat, ones_col1, points1_mat);
// undistort points
cv::Mat points1_rect = points1_mat * cameraMatrix.inv();
// compute bearings
points2bearings3(points1_rect, &bearingVectors);
using this function for the final conversion:
// Convert a set of points to bearing
// points Matrix of size Nx3 with the set of points.
// bearings Vector of bearings.
void points2bearings3(const cv::Mat& points,
                      opengv::bearingVectors_t* bearings) {
    double l;
    cv::Vec3f p;
    opengv::bearingVector_t bearing;
    for (int i = 0; i < points.rows; ++i) {
        p = cv::Vec3f(points.row(i));
        l = std::sqrt(p[0] * p[0] + p[1] * p[1] + p[2] * p[2]);
        for (int j = 0; j < 3; ++j) bearing[j] = p[j] / l;
        bearings->push_back(bearing);
    }
}
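For a single point, the same "append a 1, multiply by the inverse of K from the left, normalize" recipe quoted in the question can be written without the matrix reshaping. A minimal sketch of my own (not from the original answer), assuming a CV_64F 3x3 camera matrix and the Eigen-based OpenGV types:

// Hedged sketch: convert one pixel to an opengv bearing vector
// (bearingVector_t is an Eigen::Vector3d).
opengv::bearingVector_t pixelToBearing(const cv::Point2f& px, const cv::Mat& K)
{
    cv::Mat p = (cv::Mat_<double>(3, 1) << px.x, px.y, 1.0);  // homogeneous pixel
    cv::Mat r = K.inv() * p;                                  // normalized image point
    opengv::bearingVector_t b(r.at<double>(0), r.at<double>(1), r.at<double>(2));
    return b.normalized();                                    // unit-length direction
}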

Un-Distort raw images received from the Leap motion cameras

I've been working with the Leap for a long time now. The 2.1+ SDK allows us to access the cameras and get raw images. I want to use those images with OpenCV for square/circle detection and such... the problem is I can't get those images undistorted. I read the docs, but don't quite get what they mean. Here's one thing I need to understand properly before going forward:
distortion_data_ = image.distortion();
for (int d = 0; d < image.distortionWidth() * image.distortionHeight(); d += 2)
{
    float dX = distortion_data_[d];
    float dY = distortion_data_[d + 1];
    if(!((dX < 0) || (dX > 1)) && !((dY < 0) || (dY > 1)))
    {
        //what do i do now to undistort the image?
    }
}
data = image.data();
mat.put(0, 0, data);
//Imgproc.Canny(mat, mat, 100, 200);
//mat = findSquare(mat);
ok.showImage(mat);
In the docs it says something like this:
"The calibration map can be used to correct image distortion due to lens curvature and other imperfections. The map is a 64x64 grid of points. Each point consists of two 32-bit values...." (the rest is on the dev website)
Can someone explain this in detail please, or just post the Java code to undistort the images and give me an output Mat image so I may continue processing it (I'd still prefer a good explanation if possible)?
OK, I have no Leap camera to test all this, but this is how I understand the documentation:
The calibration map does not hold offsets but full point positions. An entry says where the pixel has to be placed instead. Those values are mapped between 0 and 1, which means that you have to multiply them by your real image width and height.
What isn't explained explicitly is how your pixel positions are mapped to the 64 x 64 positions of the calibration map. I assume it's done the same way: 640 pixels of width are mapped to 64, and 240 pixels of height are mapped to 64.
So in general, to move from one of your 640 x 240 pixel positions (pX, pY) to the undistorted position you will:
compute the corresponding position in the calibration map: float cX = pX/640.0f * 64.0f; float cY = pY/240.0f * 64.0f;
(cX, cY) is now the location of that pixel in the calibration map. You will have to interpolate between pixel locations, but for now I will only explain how to go on for a discrete location in the calibration map, (cX', cY') = rounded location of (cX, cY).
read the x and y values out of the calibration map: dX, dY as in the documentation. The location in the array is computed from the grid position: d = cY'*calibrationMapWidth*2 + cX'*2;
dX and dY are values between 0 and 1 (if not, don't undistort this point because no undistortion is available). To find the pixel location in your real image, multiply by the image size: uX = dX*640; uY = dY*240;
set your pixel to the undistorted value: undistortedImage(pX,pY) = distortedImage(uX,uY);
But you don't have discrete point positions in your calibration map, so you have to interpolate. I'll give you an example:
let (cX,cY) = (13.7, 10.4)
so you read from your calibration map four values:
calibMap(13,10) = (dX1, dY1)
calibMap(14,10) = (dX2, dY2)
calibMap(13,11) = (dX3, dY3)
calibMap(14,11) = (dX4, dY4)
Now your undistorted pixel position for (13.7, 10.4) is (multiply each dX/dY by 640 or 240 to get uX1, uY1, uX2, etc.):
// interpolate in x direction first:
float tmpUX1 = uX1*0.3 + uX2*0.7
float tmpUY1 = uY1*0.3 + uY2*0.7
float tmpUX2 = uX3*0.3 + uX4*0.7
float tmpUY2 = uY3*0.3 + uY4*0.7
// now interpolate in y direction
float combinedX = tmpUX1*0.6 + tmpUX2*0.4
float combinedY = tmpUY1*0.6 + tmpUY2*0.4
and your undistorted point is:
undistortedImage(pX,pY) = distortedImage(floor(combinedX+0.5),floor(combinedY+0.5)); or interpolate pixel values there too.
Hope this helps for a basic understanding. I'll try to add OpenCV remap code soon! The only point that's unclear to me is whether the mapping between pX/pY and cX/cY is correct, because that's not explicitly explained in the documentation.
Here is some code. You can skip the first part, where I am faking a distortion and creating the map, which is your initial state.
With OpenCV it is simple: just resize the calibration map to your image size and multiply all the values by your resolution. The nice thing is that OpenCV performs the interpolation "automatically" while resizing.
int main()
{
cv::Mat input = cv::imread("../Data/Lenna.png");
cv::Mat distortedImage = input.clone();
// now i fake some distortion:
cv::Mat transformation = cv::Mat::eye(3,3,CV_64FC1);
transformation.at<double>(0,0) = 2.0;
cv::warpPerspective(input,distortedImage,transformation,input.size());
cv::imshow("distortedImage", distortedImage);
//cv::imwrite("../Data/LenaFakeDistorted.png", distortedImage);
// now fake a calibration map corresponding to my faked distortion:
const unsigned int cmWidth = 64;
const unsigned int cmHeight = 64;
// compute the calibration map by transforming image locations to values between 0 and 1 for legal positions.
float calibMap[cmWidth*cmHeight*2];
for(unsigned int y = 0; y < cmHeight; ++y)
for(unsigned int x = 0; x < cmWidth; ++x)
{
float xx = (float)x/(float)cmWidth;
xx = xx*2.0f; // this if from my fake distortion... this gives some values bigger than 1
float yy = (float)y/(float)cmHeight;
calibMap[y*cmWidth*2+ 2*x] = xx;
calibMap[y*cmWidth*2+ 2*x+1] = yy;
}
// NOW you have the initial situation of your scenario: calibration map and distorted image...
// compute the image locations of calibration map values:
cv::Mat cMapMatX = cv::Mat(cmHeight, cmWidth, CV_32FC1);
cv::Mat cMapMatY = cv::Mat(cmHeight, cmWidth, CV_32FC1);
for(int j=0; j<cmHeight; ++j)
for(int i=0; i<cmWidth; ++i)
{
cMapMatX.at<float>(j,i) = calibMap[j*cmWidth*2 +2*i];
cMapMatY.at<float>(j,i) = calibMap[j*cmWidth*2 +2*i+1];
}
//cv::imshow("mapX",cMapMatX);
//cv::imshow("mapY",cMapMatY);
// interpolate those values for each of your original images pixel:
// here I use linear interpolation, you could use cubic or other interpolation too.
cv::resize(cMapMatX, cMapMatX, distortedImage.size(), 0,0, CV_INTER_LINEAR);
cv::resize(cMapMatY, cMapMatY, distortedImage.size(), 0,0, CV_INTER_LINEAR);
// now the calibration map has the size of your original image, but its values are still between 0 and 1 (for legal positions)
// so scale to image size:
cMapMatX = distortedImage.cols * cMapMatX;
cMapMatY = distortedImage.rows * cMapMatY;
// now create undistorted image:
cv::Mat undistortedImage = cv::Mat(distortedImage.rows, distortedImage.cols, CV_8UC3);
undistortedImage.setTo(cv::Vec3b(0,0,0)); // initialize black
//cv::imshow("undistorted", undistortedImage);
for(int j=0; j<undistortedImage.rows; ++j)
for(int i=0; i<undistortedImage.cols; ++i)
{
cv::Point undistPosition;
undistPosition.x =(cMapMatX.at<float>(j,i)); // this will round the position, maybe you want interpolation instead
undistPosition.y =(cMapMatY.at<float>(j,i));
if(undistPosition.x >= 0 && undistPosition.x < distortedImage.cols
&& undistPosition.y >= 0 && undistPosition.y < distortedImage.rows)
{
undistortedImage.at<cv::Vec3b>(j,i) = distortedImage.at<cv::Vec3b>(undistPosition);
}
}
cv::imshow("undistorted", undistortedImage);
cv::waitKey(0);
//cv::imwrite("../Data/LenaFakeUndistorted.png", undistortedImage);
}
cv::Mat SelfDescriptorDistances(cv::Mat descr)
{
cv::Mat selfDistances = cv::Mat::zeros(descr.rows,descr.rows, CV_64FC1);
for(int keyptNr = 0; keyptNr < descr.rows; ++keyptNr)
{
for(int keyptNr2 = 0; keyptNr2 < descr.rows; ++keyptNr2)
{
double euclideanDistance = 0;
for(int descrDim = 0; descrDim < descr.cols; ++descrDim)
{
double tmp = descr.at<float>(keyptNr,descrDim) - descr.at<float>(keyptNr2, descrDim);
euclideanDistance += tmp*tmp;
}
euclideanDistance = sqrt(euclideanDistance);
selfDistances.at<double>(keyptNr, keyptNr2) = euclideanDistance;
}
}
return selfDistances;
}
I use this as input and fake a remap/distortion from which I compute my calib mat:
input:
faked distortion:
used the map to undistort the image:
TODO: after those computations, use an OpenCV remap with those values to perform faster remapping (a hedged sketch is below).
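A sketch of that TODO, reusing the cMapMatX/cMapMatY matrices built above (after the resize and scaling steps); cv::remap performs the same per-pixel lookup as the manual double loop, including the interpolation, in one call:

cv::Mat undistortedFast;
cv::remap(distortedImage, undistortedFast, cMapMatX, cMapMatY,
          CV_INTER_LINEAR, cv::BORDER_CONSTANT, cv::Scalar(0, 0, 0));
cv::imshow("undistorted via remap", undistortedFast);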
Here's an example of how to do it without using OpenCV. The following seems to be faster than using the Leap::Image::warp() method (probably due to the additional function call overhead when using warp()):
float destinationWidth = 320;
float destinationHeight = 120;
unsigned char destination[(int)destinationWidth][(int)destinationHeight];
//define needed variables outside the inner loop
float calX, calY, weightX, weightY, dX1, dX2, dX3, dX4, dY1, dY2, dY3, dY4, dX, dY;
int x1, x2, y1, y2, denormalizedX, denormalizedY;
int x, y;
const unsigned char* raw = image.data();
const float* distortion_buffer = image.distortion();
//Local variables for values needed in loop
const int distortionWidth = image.distortionWidth();
const int width = image.width();
const int height = image.height();
for (x = 0; x < destinationWidth; x++) {
for (y = 0; y < destinationHeight; y++) {
//Calculate the position in the calibration map (still with a fractional part)
calX = 63 * x/destinationWidth;
calY = 63 * y/destinationHeight;
//Save the fractional part to use as the weight for interpolation
weightX = calX - truncf(calX);
weightY = calY - truncf(calY);
//Get the x,y coordinates of the closest calibration map points to the target pixel
x1 = calX; //Note truncation to int
y1 = calY;
x2 = x1 + 1;
y2 = y1 + 1;
//Look up the x and y values for the 4 calibration map points around the target
// (x1, y1) .. .. .. (x2, y1)
// .. ..
// .. (x, y) ..
// .. ..
// (x1, y2) .. .. .. (x2, y2)
dX1 = distortion_buffer[x1 * 2 + y1 * distortionWidth];
dX2 = distortion_buffer[x2 * 2 + y1 * distortionWidth];
dX3 = distortion_buffer[x1 * 2 + y2 * distortionWidth];
dX4 = distortion_buffer[x2 * 2 + y2 * distortionWidth];
dY1 = distortion_buffer[x1 * 2 + y1 * distortionWidth + 1];
dY2 = distortion_buffer[x2 * 2 + y1 * distortionWidth + 1];
dY3 = distortion_buffer[x1 * 2 + y2 * distortionWidth + 1];
dY4 = distortion_buffer[x2 * 2 + y2 * distortionWidth + 1];
//Bilinear interpolation of the looked-up values:
// X value
dX = dX1 * (1 - weightX) * (1- weightY) + dX2 * weightX * (1 - weightY) + dX3 * (1 - weightX) * weightY + dX4 * weightX * weightY;
// Y value
dY = dY1 * (1 - weightX) * (1- weightY) + dY2 * weightX * (1 - weightY) + dY3 * (1 - weightX) * weightY + dY4 * weightX * weightY;
// Reject points outside the range [0..1]
if((dX >= 0) && (dX <= 1) && (dY >= 0) && (dY <= 1)) {
//Denormalize from [0..1] to [0..width] or [0..height]
denormalizedX = dX * width;
denormalizedY = dY * height;
//look up the brightness value for the target pixel
destination[x][y] = raw[denormalizedX + denormalizedY * width];
} else {
destination[x][y] = -1;
}
}
}

Approximate photo of a simple drawing using lines

As an input I have a photo of a simple symbol, e.g.: https://www.dropbox.com/s/nrmsvfd0le0bkke/symbol.jpg
I would like to detect the straight lines in it, i.e. the start and end points of each line. In this case, assuming the top left of the symbol is (0,0), the lines would be defined like this:
start end (coordinates of beginning and end of a line)
1. (0,0); (0,10) (vertical line)
2. (0,10); (15, 15)
3. (15,15); (0, 20)
4. (0,20); (0,30)
How can I do it (preferably using OpenCV)? I thought about Hough lines, but they seem to be good for perfect thin straight lines, which is not the case in a drawing. I'll probably work on a binarized image, too.
Give this a try:
Apply a thinning algorithm to the thresholded image.
Find contours.
Run approxPolyDP on the found contours.
(A hedged sketch of this pipeline follows after the references below.)
See some reference:
approxpolydp-for-edge-maps
Creating Bounding boxes and circles for contours
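A minimal sketch of that pipeline (the thinning step is omitted; OpenCV only ships thinning in the contrib ximgproc module, so it is left as a placeholder here):

// Sketch only: binarize, find contours, simplify each contour with approxPolyDP.
std::vector<std::vector<cv::Point> > approximateSymbol(const cv::Mat& gray)
{
    cv::Mat bin;
    cv::threshold(gray, bin, 0, 255, cv::THRESH_BINARY_INV | cv::THRESH_OTSU);
    // (optional) thinning of 'bin' would go here
    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    for (size_t i = 0; i < contours.size(); i++)
    {
        std::vector<cv::Point> poly;
        // epsilon as a fraction of the arc length controls how aggressively the
        // wobbly drawn stroke is reduced to straight segments
        cv::approxPolyDP(contours[i], poly, 0.01 * cv::arcLength(contours[i], true), true);
        contours[i] = poly;
    }
    return contours;
}

Consecutive vertices of each simplified polygon are the start/end points of the straight segments the question asks for.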
Maybe you can work with this one.
Assume a perfect binarization:
run HoughLinesP
(not implemented) try to group the detected lines
I used this code:
int main()
{
    cv::Mat image = cv::imread("HoughLinesP_perfect.png");
    cv::Mat gray;
    cv::cvtColor(image,gray,CV_BGR2GRAY);
    cv::Mat output; image.copyTo(output);
    cv::Mat g_thres = gray == 0;
    std::vector<cv::Vec4i> lines;
    //cv::HoughLinesP( binary, lines, 1, 2*CV_PI/180, 100, 100, 50 );
    // cv::HoughLinesP( h_thres, lines, 1, CV_PI/180, 100, image.cols/2, 10 );
    cv::HoughLinesP( g_thres, lines, 1, CV_PI/(4*180.0), 50, image.cols/20, 10 );
    for( size_t i = 0; i < lines.size(); i++ )
    {
        cv::line( output, cv::Point(lines[i][0], lines[i][1]),
                  cv::Point(lines[i][2], lines[i][3]), cv::Scalar(155,255,155), 1, 8 );
    }
    cv::imshow("g thres", g_thres);
    cv::imwrite("HoughLinesP_out.png", output);
    cv::resize(output, output, cv::Size(), 0.5,0.5);
    cv::namedWindow("output"); cv::imshow("output", output);
    cv::waitKey(-1);
    std::cout << "finished" << std::endl;
    return 0;
}
EDIT:
updated code with simple line clustering (the minimum_distance function is taken from SO):
giving this result:
float minimum_distance(cv::Point2f v, cv::Point2f w, cv::Point2f p) {
// Return minimum distance between line segment vw and point p
const float l2 = cv::norm(w-v) * cv::norm(w-v); // i.e. |w-v|^2 - avoid a sqrt
if (l2 == 0.0) return cv::norm(p-v); // v == w case
// Consider the line extending the segment, parameterized as v + t (w - v).
// We find projection of point p onto the line.
// It falls where t = [(p-v) . (w-v)] / |w-v|^2
//const float t = dot(p - v, w - v) / l2;
float t = ((p-v).x * (w-v).x + (p-v).y * (w-v).y)/l2;
if (t < 0.0) return cv::norm(p-v); // Beyond the 'v' end of the segment
else if (t > 1.0) return cv::norm(p-w); // Beyond the 'w' end of the segment
const cv::Point2f projection = v + t * (w - v); // Projection falls on the segment
return cv::norm(p - projection);
}
int main()
{
cv::Mat image = cv::imread("HoughLinesP_perfect.png");
cv::Mat gray;
cv::cvtColor(image,gray,CV_BGR2GRAY);
cv::Mat output; image.copyTo(output);
cv::Mat g_thres = gray == 0;
std::vector<cv::Vec4i> lines;
cv::HoughLinesP( g_thres, lines, 1, CV_PI/(4*180.0), 50, image.cols/20, 10 );
float minDist = 100;
std::vector<cv::Vec4i> lines_filtered;
for( size_t i = 0; i < lines.size(); i++ )
{
bool keep = true;
int overwrite = -1;
cv::Point2f a(lines[i][0], lines[i][1]);
cv::Point2f b(lines[i][2], lines[i][3]);
float lengthAB = cv::norm(a-b);
for( size_t j = 0; j < lines_filtered.size(); j++ )
{
cv::Point2f c(lines_filtered[j][0], lines_filtered[j][1]);
cv::Point2f d(lines_filtered[j][2], lines_filtered[j][3]);
float distCDA = minimum_distance(c,d,a);
float distCDB = minimum_distance(c,d,b);
float lengthCD = cv::norm(c-d);
if((distCDA < minDist) && (distCDB < minDist))
{
if(lengthCD >= lengthAB)
{
keep = false;
}
else
{
overwrite = j;
}
}
}
if(keep)
{
if(overwrite >= 0)
{
lines_filtered[overwrite] = lines[i];
}
else
{
lines_filtered.push_back(lines[i]);
}
}
}
for( size_t i = 0; i < lines_filtered.size(); i++ )
{
cv::line( output, cv::Point(lines_filtered[i][0], lines_filtered[i][1]),
cv::Point(lines_filtered[i][2], lines_filtered[i][3]), cv::Scalar(155,255,155), 2, 8 );
}
cv::imshow("g thres", g_thres);
cv::imwrite("HoughLinesP_out.png", output);
cv::resize(output, output, cv::Size(), 0.5,0.5);
cv::namedWindow("output"); cv::imshow("output", output);
cv::waitKey(-1);
std::cout << "finished" << std::endl;
return 0;
}
You should try the Hough Line Transform. And here is an example from this website
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
using namespace cv;
using namespace std;
int main()
{
Mat src = imread("building.jpg", 0);
Mat dst, cdst;
Canny(src, dst, 50, 200, 3);
cvtColor(dst, cdst, CV_GRAY2BGR);
vector<Vec2f> lines;
// detect lines
HoughLines(dst, lines, 1, CV_PI/180, 150, 0, 0 );
// draw lines
for( size_t i = 0; i < lines.size(); i++ )
{
float rho = lines[i][0], theta = lines[i][1];
Point pt1, pt2;
double a = cos(theta), b = sin(theta);
double x0 = a*rho, y0 = b*rho;
pt1.x = cvRound(x0 + 1000*(-b));
pt1.y = cvRound(y0 + 1000*(a));
pt2.x = cvRound(x0 - 1000*(-b));
pt2.y = cvRound(y0 - 1000*(a));
line( cdst, pt1, pt2, Scalar(0,0,255), 3, CV_AA);
}
imshow("source", src);
imshow("detected lines", cdst);
waitKey();
return 0;
}
With this you should be able to tweak things and get the properties you are looking for (vertices); a hedged sketch of intersecting two detected lines to recover such vertices follows.
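As an additional sketch (not from the linked example): once you have the (rho, theta) lines, candidate vertices can be obtained by intersecting pairs of them, i.e. solving the 2x2 system rho = x*cos(theta) + y*sin(theta) for the two lines:

// Sketch: intersection of two Hough lines in (rho, theta) form; returns false
// for (nearly) parallel lines.
static bool intersectHoughLines(const cv::Vec2f& l1, const cv::Vec2f& l2, cv::Point2f& pt)
{
    float r1 = l1[0], t1 = l1[1], r2 = l2[0], t2 = l2[1];
    double det = cos(t1) * sin(t2) - sin(t1) * cos(t2);   // = sin(t2 - t1)
    if (fabs(det) < 1e-6) return false;
    pt.x = (float)((sin(t2) * r1 - sin(t1) * r2) / det);
    pt.y = (float)((cos(t1) * r2 - cos(t2) * r1) / det);
    return true;
}

In practice you would only intersect lines whose angles differ enough, and keep intersections that fall inside the image.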

OpenCV: Fundamental matrix accuracy

I am trying to calculate the fundamental matrix of 2 images (different photos of a static scene taken by the same camera).
I calculated it using findFundamentalMat and I used the result to calculate other matrices (Essential, Rotation, ...). The results were obviously wrong. So, I tried to be sure of the accuracy of the calculated fundamental matrix.
Using the epipolar constraint equation, I computed the fundamental matrix error. The error is very high (a few hundred). I do not know what is wrong with my code and would really appreciate any help. In particular: is there anything I am missing in the fundamental matrix calculation, and is the way I calculate the error right?
Also, I ran the code with very different numbers of matches. There are usually lots of outliers; e.g. in a case with more than 80 matches, there were only 10 inliers.
Mat img_1 = imread( "imgl.jpg", CV_LOAD_IMAGE_GRAYSCALE );
Mat img_2 = imread( "imgr.jpg", CV_LOAD_IMAGE_GRAYSCALE );
if( !img_1.data || !img_2.data )
{ return -1; }
//-- Step 1: Detect the keypoints using SURF Detector
int minHessian = 1000;
SurfFeatureDetector detector( minHessian );
std::vector<KeyPoint> keypoints_1, keypoints_2;
detector.detect( img_1, keypoints_1 );
detector.detect( img_2, keypoints_2 );
//-- Step 2: Calculate descriptors (feature vectors)
SurfDescriptorExtractor extractor;
Mat descriptors_1, descriptors_2;
extractor.compute( img_1, keypoints_1, descriptors_1 );
extractor.compute( img_2, keypoints_2, descriptors_2 );
//-- Step 3: Matching descriptor vectors with a brute force matcher
BFMatcher matcher(NORM_L1, true);
std::vector< DMatch > matches;
matcher.match( descriptors_1, descriptors_2, matches );
vector<Point2f>imgpts1,imgpts2;
for( unsigned int i = 0; i<matches.size(); i++ )
{
// queryIdx is the "left" image
imgpts1.push_back(keypoints_1[matches[i].queryIdx].pt);
// trainIdx is the "right" image
imgpts2.push_back(keypoints_2[matches[i].trainIdx].pt);
}
//-- Step 4: Calculate Fundamental matrix
Mat f_mask;
Mat F = findFundamentalMat (imgpts1, imgpts2, FM_RANSAC, 0.5, 0.99, f_mask);
//-- Step 5: Calculate Fundamental matrix error
//Camera intrinsics
double data[] = {1189.46 , 0.0, 805.49,
0.0, 1191.78, 597.44,
0.0, 0.0, 1.0};
Mat K(3, 3, CV_64F, data);
//Camera distortion parameters
double dist[] = { -0.03432, 0.05332, -0.00347, 0.00106, 0.00000};
Mat D(1, 5, CV_64F, dist);
//working with undistorted points
vector<Point2f> undistorted_1,undistorted_2;
vector<Point3f> line_1, line_2;
undistortPoints(imgpts1,undistorted_1,K,D);
undistortPoints(imgpts2,undistorted_2,K,D);
computeCorrespondEpilines(undistorted_1,1,F,line_1);
computeCorrespondEpilines(undistorted_2,2,F,line_2);
double f_err=0.0;
double fx,fy,cx,cy;
fx=K.at<double>(0,0);fy=K.at<double>(1,1);cx=K.at<double>(0,2);cy=K.at<double>(1,2);
Point2f pt1, pt2;
int inliers=0;
//calculation of fundamental matrix error for inliers
for (int i=0; i<f_mask.size().height; i++)
if (f_mask.at<char>(i)==1)
{
inliers++;
//calculate non-normalized values
pt1.x = undistorted_1[i].x * fx + cx;
pt1.y = undistorted_1[i].y * fy + cy;
pt2.x = undistorted_2[i].x * fx + cx;
pt2.y = undistorted_2[i].y * fy + cy;
f_err += fabs(pt1.x*line_2[i].x +
pt1.y*line_2[i].y + line_2[i].z)
+ fabs(pt2.x*line_1[i].x +
pt2.y*line_1[i].y + line_1[i].z);
}
double AvrErr = f_err/inliers;
I believe the problem is that you calculated the fundamental matrix from the brute-force matches alone; you should filter those correspondences further, for example with a ratio test and a symmetry test (a sketch of the ratio test follows below).
I recommend you read page 233 of the book "OpenCV 2 Computer Vision Application Programming Cookbook", Chapter 9.
It's explained very well!
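For reference, a hedged sketch of the ratio test applied to the descriptors from the question (descriptors_1/descriptors_2; the 0.8 threshold is a common choice, not something prescribed by the book):

// Sketch: keep a match only if its best distance is clearly smaller than the
// second-best distance (Lowe's ratio test).
cv::BFMatcher matcher(cv::NORM_L1);
std::vector<std::vector<cv::DMatch> > knnMatches;
matcher.knnMatch(descriptors_1, descriptors_2, knnMatches, 2);
std::vector<cv::DMatch> goodMatches;
for (size_t i = 0; i < knnMatches.size(); i++)
{
    if (knnMatches[i].size() == 2 &&
        knnMatches[i][0].distance < 0.8f * knnMatches[i][1].distance)
        goodMatches.push_back(knnMatches[i][0]);
}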
Given that we are supplied with the intrinsic matrix K and the distortion matrix D, we should undistort the image points before feeding them to findFundamentalMat and should work on undistorted image coordinates henceforth (i.e. for computing the error). I found that this simple change reduced the maximum error of any image point pair from 176.0 to 0.2, and the number of inliers increased from 18 to 77.
I also toyed with normalizing the undistorted image points before passing them to findFundamentalMat, which reduced the maximum error of any image point pair to almost zero, though it does not increase the number of inliers any further.
const float kEpsilon = 1.0e-6f;
float sampsonError(const Mat &dblFMat, const Point2f &pt1, const Point2f &pt2)
{
Mat m_pt1(3, 1 , CV_64FC1 );//m_pt1(pt1);
Mat m_pt2(3, 1 , CV_64FC1 );
m_pt1.at<double>(0,0) = pt1.x; m_pt1.at<double>(1,0) = pt1.y; m_pt1.at<double>(2,0) = 1.0f;
m_pt2.at<double>(0,0) = pt2.x; m_pt2.at<double>(1,0) = pt2.y; m_pt2.at<double>(2,0) = 1.0f;
assert(dblFMat.rows==3 && dblFMat.cols==3);
assert(m_pt1.rows==3 && m_pt1.cols==1);
assert(m_pt2.rows==3 && m_pt2.cols==1);
Mat dblFMatT(dblFMat.t());
Mat dblFMatp1=(dblFMat * m_pt1);
Mat dblFMatTp2=(dblFMatT * m_pt2);
assert(dblFMatp1.rows==3 && dblFMatp1.cols==1);
assert(dblFMatTp2.rows==3 && dblFMatTp2.cols==1);
Mat numerMat=m_pt2.t() * dblFMatp1;
double numer=numerMat.at<double>(0,0);
if (fabs(numer) < kEpsilon)
{
return 0;
} else {
// Sampson denominator: squared first two components of F*x1 and F^T*x2
double denom = dblFMatp1.at<double>(0,0)*dblFMatp1.at<double>(0,0)
+ dblFMatp1.at<double>(1,0)*dblFMatp1.at<double>(1,0)
+ dblFMatTp2.at<double>(0,0)*dblFMatTp2.at<double>(0,0)
+ dblFMatTp2.at<double>(1,0)*dblFMatTp2.at<double>(1,0);
return (numer*numer)/denom;
}
}
#define UNDISTORT_IMG_PTS 1
#define NORMALIZE_IMG_PTS 1
int filter_imgpts_pairs_with_epipolar_constraint(
const vector<Point2f> &raw_imgpts_1,
const vector<Point2f> &raw_imgpts_2,
int imgW,
int imgH
)
{
#if UNDISTORT_IMG_PTS
//Camera intrinsics
double data[] = {1189.46 , 0.0, 805.49,
0.0, 1191.78, 597.44,
0.0, 0.0, 1.0};
Mat K(3, 3, CV_64F, data);
//Camera distortion parameters
double dist[] = { -0.03432, 0.05332, -0.00347, 0.00106, 0.00000};
Mat D(1, 5, CV_64F, dist);
//working with undistorted points
vector<Point2f> unnormalized_imgpts_1,unnormalized_imgpts_2;
undistortPoints(raw_imgpts_1,unnormalized_imgpts_1,K,D);
undistortPoints(raw_imgpts_2,unnormalized_imgpts_2,K,D);
#else
vector<Point2f> unnormalized_imgpts_1(raw_imgpts_1);
vector<Point2f> unnormalized_imgpts_2(raw_imgpts_2);
#endif
#if NORMALIZE_IMG_PTS
float c_col=imgW/2.0f;
float c_row=imgH/2.0f;
float multiply_factor= 2.0f/(imgW+imgH);
vector<Point2f> final_imgpts_1(unnormalized_imgpts_1);
vector<Point2f> final_imgpts_2(unnormalized_imgpts_2);
for( auto iit=final_imgpts_1.begin(); iit != final_imgpts_1.end(); ++ iit)
{
Point2f &imgpt(*iit);
imgpt.x=(imgpt.x - c_col)*multiply_factor;
imgpt.y=(imgpt.y - c_row)*multiply_factor;
}
for( auto iit=final_imgpts_2.begin(); iit != final_imgpts_2.end(); ++ iit)
{
Point2f &imgpt(*iit);
imgpt.x=(imgpt.x - c_col)*multiply_factor;
imgpt.y=(imgpt.y - c_row)*multiply_factor;
}
#else
vector<Point2f> final_imgpts_1(unnormalized_imgpts_1);
vector<Point2f> final_imgpts_2(unnormalized_imgpts_2);
#endif
int algorithm=FM_RANSAC;
//int algorithm=FM_LMEDS;
vector<uchar>status;
Mat F = findFundamentalMat (final_imgpts_1, final_imgpts_2, algorithm, 0.5, 0.99, status);
int n_inliners=std::accumulate(status.begin(), status.end(), 0);
assert(final_imgpts_1.size() == final_imgpts_2.size());
vector<float> serr;
for( unsigned int i = 0; i< final_imgpts_1.size(); i++ )
{
const Point2f &p_1(final_imgpts_1[i]);
const Point2f &p_2(final_imgpts_2[i]);
float err= sampsonError(F, p_1, p_2);
serr.push_back(err);
}
float max_serr=*max_element(serr.begin(), serr.end());
cout << "found " << raw_imgpts_1.size() << "matches " << endl;
cout << " and " << n_inliners << " inliners" << endl;
cout << " max sampson err" << max_serr << endl;
return 0;
}
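A short usage sketch, wiring the helper into the matching code from the question (imgpts1, imgpts2 and img_1 as defined there):

// Hedged usage example: report inlier count and maximum Sampson error for the
// matched point pairs gathered in Step 3 of the question.
filter_imgpts_pairs_with_epipolar_constraint(imgpts1, imgpts2,
                                             img_1.cols, img_1.rows);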

How to draw Optical flow images from ocl::PyrLKOpticalFlow::dense()

How do I draw optical flow images from ocl::PyrLKOpticalFlow::dense(), which calculates both the horizontal and vertical components of the optical flow? I don't know how to draw them. I'm new to OpenCV. Can anyone help me?
Syntax :
ocl::PyrLKOpticalFlow::dense(oclMat &prevImg, oclMat& nextImg, oclMat& u, oclMat &v,oclMat &err)
A well-established method in the optical flow community is to display a motion vector field as a color-coded image, as you can see in the various datasets, e.g. the MPI dataset or the Middlebury dataset.
To do that, you estimate the length and the angle of each motion vector and use an HSV-to-RGB colorspace transformation (see the OpenCV cvtColor function) to create your color-coded image. Use the angle of the motion vector as the H (Hue) channel and the normalized length as the S (Saturation) channel, and set V (Value) to 1. The color of the image will then show you the direction of the motion and the saturation its length (speed).
The code should look like this (note: if use_value == true, the Saturation channel is set to 1 and the Value channel encodes the motion vector length):
void FlowToRGB(const cv::Mat & inpFlow,
cv::Mat & rgbFlow,
const float & max_size ,
bool use_value)
{
if(inpFlow.empty()) return;
if( inpFlow.depth() != CV_32F)
throw(std::runtime_error("FlowToRGB: error, inpFlow has the wrong data type (has to be CV_32FC2)"));
const float grad2deg = (float)(90/3.141);
cv::Mat pol(inpFlow.size(), CV_32FC2);
float mean_val = 0, min_val = 1000, max_val = 0;
float _dx, _dy;
for(int r = 0; r < inpFlow.rows; r++)
{
for(int c = 0; c < inpFlow.cols; c++)
{
cv::Point2f polar = cvmath::toPolar(inpFlow.at<cv::Point2f>(r,c));
polar.y *= grad2deg;
mean_val +=polar.x;
max_val = MAX(max_val, polar.x);
min_val = MIN(min_val, polar.x);
pol.at<cv::Point2f>(r,c) = cv::Point2f(polar.y,polar.x);
}
}
mean_val /= inpFlow.size().area();
float scale = max_val - min_val;
float shift = -min_val;//-mean_val + scale;
scale = 255.f/scale;
if( max_size > 0)
{
scale = 255.f/max_size;
shift = 0;
}
//calculate the angle, motion value
cv::Mat hsv(inpFlow.size(), CV_8UC3);
uchar * ptrHSV = hsv.ptr<uchar>();
int idx_val = (use_value) ? 2:1;
int idx_sat = (use_value) ? 1:2;
for(int r = 0; r < inpFlow.rows; r++, ptrHSV += hsv.step1())
{
uchar * _ptrHSV = ptrHSV;
for(int c = 0; c < inpFlow.cols; c++, _ptrHSV+=3)
{
cv::Point2f vpol = pol.at<cv::Point2f>(r,c);
_ptrHSV[0] = cv::saturate_cast<uchar>(vpol.x);
_ptrHSV[idx_val] = cv::saturate_cast<uchar>( (vpol.y + shift) * scale);
_ptrHSV[idx_sat] = 255;
}
}
cv::Mat rgbFlow32F;
cv::cvtColor(hsv, rgbFlow32F, CV_HSV2BGR);
rgbFlow32F.convertTo(rgbFlow, CV_8UC3);
}
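Note that cvmath::toPolar above is the answerer's own helper, not an OpenCV function. A hedged alternative using plain OpenCV would compute magnitude and angle for the whole flow field at once:

// Sketch: split the CV_32FC2 flow into x/y channels and let cv::cartToPolar
// produce per-pixel magnitude and angle (in degrees) in one call.
std::vector<cv::Mat> ch;
cv::split(inpFlow, ch);                                 // ch[0] = dx, ch[1] = dy
cv::Mat magnitude, angle;
cv::cartToPolar(ch[0], ch[1], magnitude, angle, true);  // true -> angle in degrees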
Python
Please refer to opt_flow.py#draw_flow
def draw_flow(img, flow, step=16):
    h, w = img.shape[:2]
    y, x = np.mgrid[step/2:h:step, step/2:w:step].reshape(2,-1).astype(int)
    fx, fy = flow[y,x].T
    lines = np.vstack([x, y, x+fx, y+fy]).T.reshape(-1, 2, 2)
    lines = np.int32(lines + 0.5)
    vis = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
    cv2.polylines(vis, lines, 0, (0, 255, 0))
    for (x1, y1), (x2, y2) in lines:
        cv2.circle(vis, (x1, y1), 1, (0, 255, 0), -1)
    return vis
C++
Please refer to tvl1_optical_flow.cpp#drawOpticalFlow
static void drawOpticalFlow(const Mat_<Point2f>& flow, Mat& dst, float maxmotion = -1)
{
dst.create(flow.size(), CV_8UC3);
dst.setTo(Scalar::all(0));
// determine motion range:
float maxrad = maxmotion;
if (maxmotion <= 0)
{
maxrad = 1;
for (int y = 0; y < flow.rows; ++y)
{
for (int x = 0; x < flow.cols; ++x)
{
Point2f u = flow(y, x);
if (!isFlowCorrect(u))
continue;
maxrad = max(maxrad, sqrt(u.x * u.x + u.y * u.y));
}
}
}
for (int y = 0; y < flow.rows; ++y)
{
for (int x = 0; x < flow.cols; ++x)
{
Point2f u = flow(y, x);
if (isFlowCorrect(u))
dst.at<Vec3b>(y, x) = computeColor(u.x / maxrad, u.y / maxrad);
}
}
}
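To feed this with the output of ocl::PyrLKOpticalFlow::dense(), something like the following hedged sketch should work (it assumes u and v are the oclMat results and that oclMat::download is available, as with the other 2.4 GPU containers):

// Sketch: download u/v, merge them into one CV_32FC2 flow image, draw it.
cv::Mat u_cpu, v_cpu;
u.download(u_cpu);                 // u, v: oclMat outputs of dense()
v.download(v_cpu);
std::vector<cv::Mat> uv;
uv.push_back(u_cpu);
uv.push_back(v_cpu);
cv::Mat flow;                      // 2 channels: (dx, dy) per pixel
cv::merge(uv, flow);
cv::Mat flowImage;
drawOpticalFlow(flow, flowImage);
cv::imshow("flow", flowImage);
cv::waitKey(1);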
I did something like this in my code, a while ago:
calcOpticalFlowPyrLK(frame_prec, frame_cur, v_corners_prec[i], corners_cur, status, err);
for(int j=0; j<corners_cur.size(); j++){
    if(status[j]){
        line(frame_cur, v_corners_prec[i][j], corners_cur[j], colors[i]);
    }
}
Basically, I draw a line between each point tracked by the optical flow in this iteration and its position in the previous one; this draws the optical flow lines, which represent the flow on the image.
Hope this helps..
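A slightly more self-contained, hedged variant of the same idea (the snippet above relies on v_corners_prec and colors from the answerer's surrounding code):

// Sketch: track corners from the previous grayscale frame and draw one line
// per successfully tracked point.
std::vector<cv::Point2f> prevPts, curPts;
std::vector<uchar> status;
std::vector<float> err;
cv::goodFeaturesToTrack(frame_prec, prevPts, 500, 0.01, 10);  // frame_prec: 8-bit, single channel
cv::calcOpticalFlowPyrLK(frame_prec, frame_cur, prevPts, curPts, status, err);
for (size_t j = 0; j < curPts.size(); j++)
    if (status[j])
        cv::line(frame_cur, prevPts[j], curPts[j], cv::Scalar(0, 255, 0));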

Resources