Clarification on conversion between RGB and CIELAB - opencv

I'm working with point clouds in PCL. I recently had to convert the color information of the points from RGB to CIELAB.
I saw that this can be done with OpenCV, so I used the following code:
pcl::PointCloud<pcl::PointXYZLAB>::Ptr convert_rgb_to_lab_opencv(pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud) {
    pcl::PointCloud<pcl::PointXYZLAB>::Ptr cloud_lab(new pcl::PointCloud<pcl::PointXYZLAB>);
    cloud_lab->height = cloud->height;
    cloud_lab->width = cloud->width;
    for (pcl::PointCloud<pcl::PointXYZRGB>::iterator it = cloud->begin(); it != cloud->end(); it++) {
        // Color conversion
        cv::Mat pixel(1, 1, CV_8UC3, cv::Scalar(it->r, it->g, it->b));
        cv::Mat temp;
        cv::cvtColor(pixel, temp, CV_BGR2Lab);
        pcl::PointXYZLAB point;
        point.x = it->x;
        point.y = it->y;
        point.z = it->z;
        point.L = temp.at<uchar>(0, 0);
        point.a = temp.at<uchar>(0, 1);
        point.b = temp.at<uchar>(0, 2);
        cloud_lab->push_back(point);
    }
    return cloud_lab;
}
My question is: are the values I got correct? Shouldn't LAB values be floating point and able to take negative values?
So I tried to do the conversion "manually" with the code available here.
When I visualized the two clouds in CloudCompare, they produced very similar views, even in the histograms.
Can someone explain to me why?
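One likely explanation, offered here as a note rather than as part of the original thread: when cvtColor is given an 8-bit image, OpenCV rescales the Lab result to fit unsigned bytes (L is mapped to 0..255 and a, b are shifted by +128), which is why the values come out as positive integers. Converting a floating-point image instead gives the usual Lab ranges. A minimal C++ sketch with an arbitrary example color:

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::Mat rgb8(1, 1, CV_8UC3, cv::Scalar(30, 120, 200));  // arbitrary example color

    // 8-bit path: values come back as unsigned bytes (L*255/100, a+128, b+128)
    cv::Mat lab8;
    cv::cvtColor(rgb8, lab8, cv::COLOR_RGB2Lab);
    std::cout << "8-bit Lab:  " << lab8 << std::endl;

    // Float path: scale RGB to [0,1] first, then convert to get unscaled Lab
    // (L in 0..100, a and b roughly -127..127)
    cv::Mat rgb32, lab32;
    rgb8.convertTo(rgb32, CV_32FC3, 1.0 / 255.0);
    cv::cvtColor(rgb32, lab32, cv::COLOR_RGB2Lab);
    std::cout << "float Lab:  " << lab32 << std::endl;
    return 0;
}

Note also that the conversion flag should match the channel order packed into the cv::Scalar (RGB vs. BGR); otherwise the red and blue channels are swapped before the Lab conversion.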

Related

Transform a point from map A to map B

I'm trying to transform a point from one map to another. I've tried some OpenCV sample code for getAffineTransform(), getPerspectiveTransform(), warpAffine() and findHomography(), but there are always gaps in my transformation mesh. The feature points are usually detected at very different positions, so I think I need a good interpolation method.
About the maps:
Both maps are images containing human body parts and human skin. I'm using the OpenCV feature detection/matching algorithms to get a set of corresponding points in both maps. The tricky thing is that they contain arms and feet, too. Feature points on arms/feet can have much bigger offsets than the points on the torso.
The goal:
I want to transform any point on map A as accurately as possible to the equivalent position on map B.
My current approach is to find the three closest feature points to my original point on map A and construct a triangle. Then I transform this triangle to the same three feature points on map B. That works nicely if I have many feature points surrounding my original point, but in larger areas without feature points the interpolation becomes a problem.
Is this a good way to do so? Or is there a much better solution?
My favorite would be the construction of a complete transformation map for both images, but I'm not sure how to do this. Is it possible at all?
Thanks a lot for any advice!
Simple sketch of the transformation (I'm trying to find the points X1 to X3 from the left image in the right image):
Sketch of a sample transformation
Sample for homography (OpenCVSharp):
Mat imgA = new Mat(@"d:\Mesh\Left2.jpg", ImreadModes.Color);
Mat imgB = new Mat(@"d:\Mesh\Right2.jpg", ImreadModes.Color);
Cv2.Resize(imgA, imgA, new Size(512, 341));
Cv2.Resize(imgB, imgB, new Size(512, 341));
SURF detector = SURF.Create(500.0);
KeyPoint[] keypointsA = detector.Detect(imgA);
KeyPoint[] keypointsB = detector.Detect(imgB);
SIFT extractor = SIFT.Create();
Mat descriptorsA = new Mat();
Mat descriptorsB = new Mat();
extractor.Compute(imgA, ref keypointsA, descriptorsA);
extractor.Compute(imgB, ref keypointsB, descriptorsB);
BFMatcher matcher = new BFMatcher(NormTypes.L2, true);
DMatch[] matches = matcher.Match(descriptorsA, descriptorsB);
double minDistance = 10000.0;
double maxDistance = 0.0;
for (int i = 0; i < matches.Length; ++i)
{
    double distance = matches[i].Distance;
    if (distance < minDistance)
    {
        minDistance = distance;
    }
    if (distance > maxDistance)
    {
        maxDistance = distance;
    }
}
List<DMatch> goodMatches = new List<DMatch>();
for (int i = 0; i < matches.Length; ++i)
{
    if (matches[i].Distance <= 3.0 * minDistance &&
        Math.Abs(keypointsA[matches[i].QueryIdx].Pt.Y - keypointsB[matches[i].TrainIdx].Pt.Y) < 30)
    {
        goodMatches.Add(matches[i]);
    }
}
Mat output = new Mat();
Cv2.DrawMatches(imgA, keypointsA, imgB, keypointsB, goodMatches.ToArray(), output);
List<Point2f> goodA = new List<Point2f>();
List<Point2f> goodB = new List<Point2f>();
for (int i = 0; i < goodMatches.Count; i++)
{
    goodA.Add(keypointsA[goodMatches[i].QueryIdx].Pt);
    goodB.Add(keypointsB[goodMatches[i].TrainIdx].Pt);
}
InputArray goodInputA = InputArray.Create<Point2f>(goodA);
InputArray goodInputB = InputArray.Create<Point2f>(goodB);
Mat h = Cv2.FindHomography(goodInputA, goodInputB);
Point2f centerA = new Point2f(imgA.Cols / 2.0f, imgA.Rows / 2.0f);
output.DrawMarker((int)centerA.X, (int)centerA.Y, Scalar.Red, MarkerStyle.Cross, 50, LineTypes.Link8, 5);
Point2f[] transformedPoints = Cv2.PerspectiveTransform(new Point2f[] { centerA }, h);
output.DrawMarker((int)transformedPoints[0].X + imgA.Cols, (int)transformedPoints[0].Y, Scalar.Red, MarkerStyle.Cross, 50, LineTypes.Link8, 5);
Code snippet for perspective transform (different approach, OpenCVSharp):
pointsA[0] = new Point(trisA[i].Item0, trisA[i].Item1);
pointsA[1] = new Point(trisA[i].Item2, trisA[i].Item3);
pointsA[2] = new Point(trisA[i].Item4, trisA[i].Item5);
pointsB[0] = new Point(trisB[i].Item0, trisB[i].Item1);
pointsB[1] = new Point(trisB[i].Item2, trisB[i].Item3);
pointsB[2] = new Point(trisB[i].Item4, trisB[i].Item5);
Mat transformation = Cv2.GetAffineTransform(pointsA, pointsB);
InputArray inputSource = InputArray.Create<Point2f>(new Point2f[] { new Point2f(10f, 50f) });
Mat outputMat = new Mat();
Cv2.PerspectiveTransform(inputSource, outputMat, transformation);
Mat.Indexer<Point2f> indexer = outputMat.GetGenericIndexer<Point2f>();
var target = indexer[0, 0];
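One detail worth flagging in the snippet above: getAffineTransform returns a 2x3 matrix, while perspectiveTransform expects a 3x3 (or 4x4) homography, so an affine result is normally applied to points with the plain transform function (cv::transform in C++; OpenCvSharp should have a corresponding Cv2.Transform). A minimal sketch of the triangle-to-triangle mapping in OpenCV C++, with made-up coordinates for illustration:

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // Three matched feature points in map A and their counterparts in map B
    // (hypothetical coordinates, just to illustrate the call sequence).
    std::vector<cv::Point2f> triA = { {100, 100}, {200, 120}, {150, 220} };
    std::vector<cv::Point2f> triB = { {110, 105}, {215, 130}, {160, 235} };

    // 2x3 affine matrix mapping triangle A onto triangle B
    cv::Mat affine = cv::getAffineTransform(triA, triB);

    // Apply the affine to a point inside triangle A.
    // cv::transform accepts a 2x3 matrix; cv::perspectiveTransform would
    // require a 3x3 homography instead.
    std::vector<cv::Point2f> src = { {150, 150} }, dst;
    cv::transform(src, dst, affine);

    std::cout << "mapped point: " << dst[0] << std::endl;
    return 0;
}

The same pattern could be applied per triangle of a triangulation of the matched points, which is essentially the "complete transformation map" the question asks about.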

Error when subtracting two same-size matrices in OpenCV, maybe an error in conversion

I am computing the difference between two similar grayscale images as a Euclidean distance. The images are in grayscale format.
int dis = 0;
for (int i = 0; i < mat1.rows; i++)
    for (int j = 0; j < mat1.cols; j++)
    {
        cout << mat1.at<unsigned char>(i, j) << endl;
        int a = (mat1.at<unsigned char>(i, j) - mat2.at<unsigned char>(i, j));
        dis += (a * a);
    }
dis = sqrt(dis);
But the program gives an error, and it doesn't say exactly what the error is. I think the error is due to the conversion: int a = (mat1.at(i,j) - mat2.at(i,j));
I have tried int a = (mat1.at(i,j) - mat2.at(i,j)); and it still doesn't work.
mat2[i] looks weird. What's the purpose of the index there?
Also, you might just use the built-in norm function, which already does what you're trying to do.
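For reference, a minimal sketch of the built-in approach suggested in that comment, assuming both inputs are single-channel 8-bit images of the same size (the file names are placeholders):

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::Mat mat1 = cv::imread("image1.png", cv::IMREAD_GRAYSCALE);  // hypothetical file names
    cv::Mat mat2 = cv::imread("image2.png", cv::IMREAD_GRAYSCALE);

    // Euclidean (L2) distance between the two images in one call;
    // cv::norm computes the per-pixel differences and accumulation internally.
    double dis = cv::norm(mat1, mat2, cv::NORM_L2);
    std::cout << "Euclidean distance: " << dis << std::endl;
    return 0;
}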

matchTemplate with OpenCV in Java

I have code like this:
Mat img = Highgui.imread(inFile);
Mat templ = Highgui.imread(templateFile);
int result_cols = img.cols() - templ.cols() + 1;
int result_rows = img.rows() - templ.rows() + 1;
Mat result = new Mat(result_rows, result_cols, CvType.CV_32FC1);
Imgproc.matchTemplate(img, templ, result, Imgproc.TM_CCOEFF);
/////Core.normalize(result, result, 0, 1, Core.NORM_MINMAX, -1, new Mat());
for (int i = 0; i < result_rows; i++)
    for (int j = 0; j < result_cols; j++)
        if (result.get(i, j)[0] > ?)
            //match!
I need to parse the input image to find multiple occurrences of the template image. I want a result like this:
result[0][0]= 15%
result[0][1]= 17%
result[x][y]= 47%
If I use TM_CCOEFF, all results are in [-xxxxxxxx.xxx, +xxxxxxxx.xxx].
If I use TM_SQDIFF, all results are xxxxxxxx.xxx.
If I use TM_CCORR, all results are xxxxxxxx.xxx.
How can I detect a match or a mismatch? What is the right condition in the if?
If I normalize the matrix, the application always maps some value to 1 and I can't detect when the template isn't present in the image at all (all mismatches).
Thanks in advance
You can append "_NORMED" to the method names (for instance, CV_TM_CCOEFF_NORMED in C++; it could be slightly different in Java) to get a sensible value for your purpose.
By 'sensible', I mean that you will get values in the range of 0 to 1, which can be multiplied by 100 for your purpose.
Note: for CV_TM_SQDIFF_NORMED the result is also in the range 0 to 1, but you will have to subtract the value from 1 to make sense of it, because the lowest value indicates the best match with this method.
Tip: you can use the Java equivalent of minMaxLoc() to get the minimum and maximum values. It's very useful when used in conjunction with matchTemplate.
I believe minMaxLoc is located in the Core class.
Here's a C++ implementation:
matchTemplate(input_mat, template_mat, result_mat, method_NORMED);
double minVal, maxVal;
double percentage;
Point minLoc, maxLoc;
minMaxLoc(result_mat, &minVal, &maxVal, &minLoc, &maxLoc, Mat());
if (method_NORMED == CV_TM_SQDIFF_NORMED)
{
    percentage = 1 - minVal;
}
else
{
    percentage = maxVal;
}
Useful C++ docs:
Match template description along with available methods: http://docs.opencv.org/modules/imgproc/doc/object_detection.html
MinMaxLoc documentation:
http://docs.opencv.org/modules/core/doc/operations_on_arrays.html?highlight=minmaxloc#minmaxloc
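Building on that, a minimal C++ sketch of scanning the normalized result for multiple occurrences, as the question asks; the 0.8 threshold and the file names are assumptions to be tuned for your data:

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::Mat img  = cv::imread("scene.jpg");      // hypothetical file names
    cv::Mat tmpl = cv::imread("template.jpg");

    cv::Mat result;
    cv::matchTemplate(img, tmpl, result, cv::TM_CCOEFF_NORMED);

    const float threshold = 0.8f;  // assumed cut-off; tune per application
    for (int y = 0; y < result.rows; ++y) {
        for (int x = 0; x < result.cols; ++x) {
            float score = result.at<float>(y, x);
            if (score > threshold) {
                // (x, y) is the top-left corner of a candidate match;
                // score * 100 gives the percentage the question asks for.
                std::cout << "match at (" << x << ", " << y << "): "
                          << score * 100.0f << "%" << std::endl;
            }
        }
    }
    return 0;
}

In practice, pixels neighbouring a strong match will also exceed the threshold, so some non-maximum suppression (or masking out each detection) is usually needed to count distinct occurrences.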
Another approach would be background differencing. You can then measure the distortion.
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.highgui.Highgui;
import org.opencv.imgproc.Imgproc;

public class BackgroundDifference {
    public static void main(String[] arg) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat model = Highgui.imread("e:\\answers\\template.jpg", Highgui.CV_LOAD_IMAGE_GRAYSCALE);
        Mat scene = Highgui.imread("e:\\answers\\front7.jpg", Highgui.CV_LOAD_IMAGE_GRAYSCALE);
        Mat diff = new Mat();
        Core.absdiff(model, scene, diff);
        Imgproc.threshold(diff, diff, 15, 255, Imgproc.THRESH_BINARY);
        int distortion = Core.countNonZero(diff);
        System.out.println("distortion:" + distortion);
        Highgui.imwrite("e:\\answers\\diff.jpg", diff);
    }
}

Standard Hough Lines in EMGU CV

I need to use the standard Hough transform (instead of the HoughLinesBinary method, which implements the probabilistic Hough transform) and have attempted to do so by creating a custom version of the HoughLinesBinary method:
using (MemStorage stor = new MemStorage())
{
    IntPtr lines = CvInvoke.cvHoughLines2(canny.Ptr, stor.Ptr, Emgu.CV.CvEnum.HOUGH_TYPE.CV_HOUGH_STANDARD, rhoResolution, (thetaResolution * Math.PI) / 180, threshold, 0, 0);
    Seq<MCvMat> segments = new Seq<MCvMat>(lines, stor);
    List<MCvMat> lineslist = segments.ToList();
    foreach (MCvMat line in lineslist)
    {
        // Process lines: (rho, theta)
    }
}
My problem is that I am unsure what type the returned sequence contains. I believe it should be MCvMat, based on the documentation, which says that CvMat* is used in OpenCV and that for STANDARD "the matrix must be (the created sequence will be) of CV_32FC2 type".
I am unclear about what I would need to do to retrieve and process the correct output data from the STANDARD Hough lines (i.e. the 2x1 vector for each line giving the rho and theta information).
Any help would be greatly appreciated. Thank you
-Sal
I had the same problem myself a couple of days ago. This is how I solved it using marshalling. Please let me know if you find a simpler solution.
using (MemStorage stor = new MemStorage())
{
    IntPtr lines = CvInvoke.cvHoughLines2(canny.Ptr, stor.Ptr, Emgu.CV.CvEnum.HOUGH_TYPE.CV_HOUGH_STANDARD, rhoResolution, (thetaResolution * Math.PI) / 180, threshold, 0, 0);
    int maxLines = 100;
    for (int i = 0; i < maxLines; i++)
    {
        IntPtr line = CvInvoke.cvGetSeqElem(lines, i);
        if (line == IntPtr.Zero)
        {
            // No more lines
            break;
        }
        PolarCoordinates coords = (PolarCoordinates)System.Runtime.InteropServices.Marshal.PtrToStructure(line, typeof(PolarCoordinates));
        // Do something with your Hough lines
    }
}
with a struct defined as follows:
public struct PolarCoordinates
{
    public float Rho;
    public float Theta;
}
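Once the (rho, theta) pairs are available, turning them into something drawable is the usual next step: take the point on the line closest to the origin and extend along the line direction. A minimal OpenCV C++ sketch of that conversion (the 1000-pixel extent and the example rho/theta values are arbitrary choices):

#include <opencv2/opencv.hpp>
#include <cmath>

// Convert a line in polar form (rho, theta) to two endpoints far enough
// apart to span a typical image, suitable for cv::line().
static void polarToSegment(float rho, float theta, cv::Point& p1, cv::Point& p2) {
    double a = std::cos(theta), b = std::sin(theta);
    double x0 = a * rho, y0 = b * rho;  // closest point on the line to the origin
    p1 = cv::Point(cvRound(x0 + 1000 * (-b)), cvRound(y0 + 1000 * a));
    p2 = cv::Point(cvRound(x0 - 1000 * (-b)), cvRound(y0 - 1000 * a));
}

int main() {
    cv::Mat canvas = cv::Mat::zeros(480, 640, CV_8UC3);
    cv::Point p1, p2;
    polarToSegment(100.0f, (float)(CV_PI / 4), p1, p2);  // example rho and theta
    cv::line(canvas, p1, p2, cv::Scalar(0, 0, 255), 2);
    cv::imwrite("hough_line.png", canvas);
    return 0;
}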

Find rectangles without corners using opencv

I have an image where I want to find contours, but the "contours" in my image don't have corners. Are there any tricks I can use to help find the rectangles that are implied by the lines in this image? I thought about extending all the lines to form the corners, but I worry about lines intersecting from other contours and how to determine which intersections I'm interested in. I'm very new to OpenCV and I don't know much about image processing. Thank you for any help you can give.
Fit lines in your binary image with the Hough transform and fit rectangles to the orthogonally intersecting lines.
I ended up implementing my own solution. It isn't very graceful, but it gets the job done. I would be interested in hearing about improvements. HoughLines2 didn't always give me good results for finding line segments, and I had to tweak the threshold value a lot for different scenarios. Instead I opted for FindContours; by taking only contours with two elements, I should be guaranteed 1-pixel-wide lines. After finding the lines I iterated through them and traced them out to find the rectangles.
Where points is a CvSeq* of the line endpoints:
while (points->total > 0) {
    if (p1.x == -1 && p1.y == -1) {
        cvSeqPopFront(points, &p1);
        cvSeqPopFront(points, &p2);
    }
    if ((pos = findClosestPoint(&p1, &p2, points, maxDist)) >= 0) {
        p3 = (CvPoint*)cvGetSeqElem(points, pos);
        pos2 = (pos % 2 == 0) ? pos + 1 : pos - 1; // lines are in pairs of points
        p4 = (CvPoint*)cvGetSeqElem(points, pos2);
        if (isVertical(&p1, &p2) && isHorizontal(p3, p4)) {
            printf("found Corner %d %d\n", p2.x, p3->y);
        } else if (isHorizontal(&p1, &p2) && isVertical(p3, p4)) {
            printf("found Corner %d %d\n", p3->x, p2.y);
        }
        memcpy(&p1, p3, sizeof(CvPoint));
        memcpy(&p2, p4, sizeof(CvPoint));
        cvSeqRemove(points, (pos > pos2) ? pos : pos2);
        cvSeqRemove(points, (pos > pos2) ? pos2 : pos);
    } else {
        p1.x = -1;
        p1.y = -1;
    }
}

int findClosestPoint(CvPoint *p1, CvPoint *p2, CvSeq *points, int maxDist) {
    int ret = -1, i;
    float dist, minDist = maxDist;
    CvPoint *test;
    int (*dirTest)(CvPoint *, CvPoint *);
    if (isVertical(p1, p2)) {        // vertical line
        if (p2->y > p1->y) {         // going down
            dirTest = isBelow;
        } else {                     // going up
            dirTest = isAbove;
        }
    } else if (isHorizontal(p1, p2)) { // horizontal line
        if (p2->x > p1->x) {         // going right
            dirTest = isRight;
        } else {                     // going left
            dirTest = isLeft;
        }
    }
    for (i = 0; i < points->total; i++) {
        test = (CvPoint*)cvGetSeqElem(points, i);
        if (dirTest(p2, test)) {     // only test points in the region we care about
            dist = sqrt(pow(test->x - p2->x, 2) + pow(test->y - p2->y, 2));
            if (dist < minDist) {
                minDist = dist;
                ret = i;
            }
        }
    }
    return ret;
}

int isVertical(CvPoint *p1, CvPoint *p2) {
    return p1->x == p2->x;
}

int isHorizontal(CvPoint *p1, CvPoint *p2) {
    return p1->y == p2->y;
}

int isRight(CvPoint *pt1, CvPoint *pt2) {
    return pt2->x > pt1->x;
}

int isLeft(CvPoint *pt1, CvPoint *pt2) {
    return pt2->x < pt1->x;
}

int isBelow(CvPoint *pt1, CvPoint *pt2) {
    return pt2->y > pt1->y;
}

int isAbove(CvPoint *pt1, CvPoint *pt2) {
    return pt2->y < pt1->y;
}
You could also try posing it as an optimization problem. A rectangle is defined as a 4D state vector (x, y, width, height), or a 5D vector if you include rotation (x, y, width, height, rotation). For your current state you could do a gradient descent towards the result of the Hough lines to converge to the optimal state. Another option is using linear least squares: http://people.inf.ethz.ch/arbenz/MatlabKurs/node88.html
Using the Hough transform you will be able to extract lines. Then you can calculate the intersections of these lines to estimate the positions of the rectangles.
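A minimal sketch of that intersection step, assuming the lines come from cv::HoughLines in (rho, theta) form: each line satisfies x*cos(theta) + y*sin(theta) = rho, so any two non-parallel lines give a 2x2 linear system whose solution is the corner candidate. The example lines below are made up for illustration.

#include <opencv2/opencv.hpp>
#include <cmath>
#include <iostream>

// Intersect two lines given in Hough polar form (rho, theta).
// Returns false if the lines are (nearly) parallel.
static bool intersectPolarLines(const cv::Vec2f& l1, const cv::Vec2f& l2, cv::Point2f& out) {
    // Each line: x*cos(theta) + y*sin(theta) = rho
    cv::Matx22f A(std::cos(l1[1]), std::sin(l1[1]),
                  std::cos(l2[1]), std::sin(l2[1]));
    cv::Vec2f b(l1[0], l2[0]);
    if (std::abs(cv::determinant(A)) < 1e-6)
        return false;                      // parallel lines: no single intersection
    cv::Vec2f xy = A.solve(b, cv::DECOMP_LU);
    out = cv::Point2f(xy[0], xy[1]);
    return true;
}

int main() {
    // Two example lines: one vertical (x = 100) and one horizontal (y = 50).
    cv::Vec2f lineA(100.0f, 0.0f);
    cv::Vec2f lineB(50.0f, (float)(CV_PI / 2));
    cv::Point2f corner;
    if (intersectPolarLines(lineA, lineB, corner))
        std::cout << "estimated corner: " << corner << std::endl;
    return 0;
}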
