Extract centroids of connected components in EMGU - emgucv

I have the following piece of code in EMGU to extract connected components:
Mat connected_array = new Mat();
Mat stats = new Mat();
Mat centroids = new Mat();
Mat ImgMat = new Mat();
CvInvoke.ConnectedComponentsWithStats(ImgThresh, connected_array, stats, centroids, LineType.EightConnected, DepthType.Cv32S);
I could not find any way to access the centroids.

EMGU wraps most arrays in Mat objects, which you then need to convert to arrays to access their contents (using mat.CopyTo(array)). This is not straightforward from the documentation - I've had to use trial and error to find out how it works:
Mat labels = new Mat();
Mat stats = new Mat();
Mat centroids = new Mat();
MCvPoint2D64f[] centroidPoints;
double x, y;
int n;

// the return value is the number of labels, including the background label 0
n = CvInvoke.ConnectedComponentsWithStats(image, labels, stats, centroids, LineType.EightConnected, DepthType.Cv32S);

// copy the centroids Mat (n rows of CV_64F x/y pairs) into a typed array;
// index 0 is the centroid of the background component
centroidPoints = new MCvPoint2D64f[n];
centroids.CopyTo(centroidPoints);

foreach (MCvPoint2D64f point in centroidPoints)
{
    x = point.X;
    y = point.Y;
}
Another common method is to use contours, which EMGU also provides and which offers similar functionality. I have used this for better performance. I include an example as well:
VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();
MCvMoments moments;
double area;
MCvPoint2D64f center;
int n;

CvInvoke.FindContours(image, contours, null, RetrType.External, ChainApproxMethod.ChainApproxSimple);
n = contours.Size;

for (int i = 0; i < n; i++)
{
    area = CvInvoke.ContourArea(contours[i], false);
    moments = CvInvoke.Moments(contours[i]);
    center = moments.GravityCenter; // centroid of the contour (m10/m00, m01/m00)
}

Related

Having problems rendering 3d models on aruco markers using tvec/rvec

My graduation project team and I are working on a marker-based AR application in which one of the tasks is to draw 3D models on markers. We are using OpenCV to detect the markers and Rajawali to draw the 3D models.
The problem is that the tvec/rvec we get from Aruco.estimatePoseSingleMarkers(...) don't map correctly to the location and rotation of the markers, although we can draw the axes accurately.
[image of the 3D model on the marker]
So we wanted to ask:
Is there any processing needed on the tvec/rvec before using them to get the position and rotation?
Are there marker-detection alternatives that are more convenient to use than OpenCV with Rajawali?
What could cause them to be inaccurate?
Code:
marker-detection
public void markerDetection(Mat frame)
{
    ids = new Mat();
    corners = new ArrayList<>();
    Aruco.detectMarkers(frame, markerDictionary, corners, ids);
    List<MarkerData> newDetectedMarkers = new ArrayList<>();
    if (ids.size().height > 0)
    {
        rvecs = new Mat();
        tvecs = new Mat();
        Aruco.estimatePoseSingleMarkers(corners, markerLength, cameraMatrix, distortionCoef, rvecs, tvecs);
        for (int i = 0; i < ids.size().height; i++)
        {
            double[] rvecArray = rvecs.get(i, 0), tvecArray = tvecs.get(i, 0);
            Mat rvec = new Mat(3, 1, CvType.CV_64FC1), tvec = new Mat(3, 1, CvType.CV_64FC1);
            for (int j = 0; j < 3; ++j) {
                rvec.put(j, 0, rvecArray[j]);
                tvec.put(j, 0, tvecArray[j]);
            }
            multiply(rvec, new Scalar(180.0 / Math.PI), rvec); // transform them to degree
            MarkerData newMarker = new MarkerData(rvec, tvec, corners.get(i), (int) ids.get(i, 0)[0]);
            newDetectedMarkers.add(newMarker);
        }
    }
    updateDetectedMarkers(newDetectedMarkers); // update the detected markers
}
Rendering
@Override
protected void onRender(long elapsedRealtime, double deltaTime) {
    super.onRender(elapsedRealtime, deltaTime);
    getCurrentScene().clearChildren();
    List<MarkerData> markerData = markerDetector.getDetectedMarkers();
    for (MarkerData marker : markerData) {
        try {
            int id = R.raw.monkey;
            LoaderOBJ parser = new LoaderOBJ(mContext.getResources(), mTextureManager, id);
            parser.parse();
            Object3D object = parser.getParsedObject();
            object.setMaterial(defaultMaterial);
            object.setScale(0.3);
            Mat rvec = marker.getRvec(); // 3x1 Mat
            Mat tvec = marker.getTvec(); // 3x1 Mat
            object.setRotation(new Vector3(rvec.get(0, 0)[0], rvec.get(1, 0)[0], rvec.get(2, 0)[0]));
            object.setPosition(new Vector3(tvec.get(0, 0)[0], tvec.get(1, 0)[0], tvec.get(2, 0)[0]));
            getCurrentScene().addChild(object);
        } catch (ParsingException e) {
            e.printStackTrace();
        }
    }
}

Is there a built-in function to split a 3-channel Mat into three 3-channel Mat rather than into three 1-channel Mat?

As far as I know, the built-in split will split one 3-channel Mat into three 1-channel Mat. As a result, those three Mat are just grayscale images with different intensities.
My intent is to get three 3-channel Mat as follows.
void splitTo8UC3(const Mat& input, vector<Mat>& output)
{
    Mat blue = input.clone();
    Mat green = input.clone();
    Mat red = input.clone();

    const uint N = input.rows * input.step;
    for (uint i = 0; i < N; i += 3)
    {
        // blue.data[i]
        green.data[i] = 0;
        red.data[i] = 0;

        blue.data[i + 1] = 0;
        // green.data[i+1]
        red.data[i + 1] = 0;

        blue.data[i + 2] = 0;
        green.data[i + 2] = 0;
        // red.data[i+2]
    }

    output.push_back(blue);
    output.push_back(green);
    output.push_back(red);
}
It works, but instead of reinventing the wheel I am looking for a built-in function, if there is one.
Edit
The proposed solution must be faster than mine.
EDIT: I incorporated Dan's suggested improvements from his comment.
I can't think of a built-in function that does exactly this, and I also couldn't find one. But while doing some research, I came across the mixChannels function, which might improve your solution. At least, it avoids an explicit loop.
Here are my modifications to your code:
void splitTo8UC3(const cv::Mat& input, std::vector<cv::Mat>& output)
{
    // Allocate outputs
    cv::Mat b(cv::Mat::zeros(input.size(), input.type()));
    cv::Mat g(cv::Mat::zeros(input.size(), input.type()));
    cv::Mat r(cv::Mat::zeros(input.size(), input.type()));

    // Collect outputs
    cv::Mat out[] = { b, g, r };

    // Set up index pairs: destination channels are numbered consecutively across
    // all output Mats (b: 0-2, g: 3-5, r: 6-8), so source channel 0 goes to b's
    // channel 0, source 1 to g's channel 1, and source 2 to r's channel 2.
    int from_to[] = { 0,0, 1,4, 2,8 };
    cv::mixChannels(&input, 1, out, 3, from_to, 3);

    output.assign(std::begin(out), std::end(out));
}
Let's have this test image colors.png:
And, let's have this test code:
cv::Mat img = cv::imread("images/colors.png");
std::vector<cv::Mat> bgr;
splitTo8UC3(img, bgr);
cv::imwrite("images/b.png", bgr[0]);
cv::imwrite("images/g.png", bgr[1]);
cv::imwrite("images/r.png", bgr[2]);
Then, we get the following outputs b.png, g.png, and r.png, which hopefully are the same as for your initial solution:
Hope that helps!
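As a side note on the speed requirement from your edit: a minimal timing harness could be used to compare the two approaches. This is only a sketch, and it assumes that both variants are available in the same file under the hypothetical names splitTo8UC3_loop (your original) and splitTo8UC3_mix (the mixChannels version above):
#include <iostream>
#include <vector>
#include <opencv2/opencv.hpp>

// Average the runtime of one split function over several calls (in milliseconds).
double timePerCall(void (*split)(const cv::Mat&, std::vector<cv::Mat>&), const cv::Mat& img, int runs)
{
    std::vector<cv::Mat> out;
    int64 t0 = cv::getTickCount();
    for (int i = 0; i < runs; i++)
    {
        out.clear();
        split(img, out);
    }
    return (cv::getTickCount() - t0) * 1000.0 / cv::getTickFrequency() / runs;
}

int main()
{
    cv::Mat img = cv::imread("images/colors.png");
    std::cout << "loop-based:  " << timePerCall(splitTo8UC3_loop, img, 100) << " ms" << std::endl;
    std::cout << "mixChannels: " << timePerCall(splitTo8UC3_mix, img, 100) << " ms" << std::endl;
    return 0;
}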

partition a set of images into k clusters with opencv

I have an image data set that I would like to partition into k clusters, and I am trying to use the OpenCV implementation of k-means clustering.
Firstly, I store my Mat images in a vector of Mat and then try to use the kmeans function. However, I am getting an assertion error.
Should the images be stored in a different kind of structure? I have read the k-means documentation and I don't seem to understand what I am doing wrong. Thank you in advance. This is my code:
vector<Mat> images;
string folder = "D:\\football\\positive_clustering\\";
string mask = "*.bmp";
vector<string> files = getFileList(folder + mask);
for (int i = 0; i < files.size(); i++)
{
    Mat img = imread(folder + files[i]);
    images.push_back(img);
}
cout << "Vector of positive samples created" << endl;

int k = 10;
cv::Mat bestLabels;
cv::kmeans(images, k, bestLabels, TermCriteria(), 3, KMEANS_PP_CENTERS);

//have a look
vector<cv::Mat> clusterViz(bestLabels.rows);
for (int i = 0; i < bestLabels.rows; i++)
{
    clusterViz[bestLabels.at<int>(i)].push_back(cv::Mat(images[bestLabels.at<int>(i)]));
}
namedWindow("clusters", WINDOW_NORMAL);
for (int i = 0; i < clusterViz.size(); i++)
{
    cv::imshow("clusters", clusterViz[i]);
    cv::waitKey();
}

How to detect whole rectangle in a frame

I am using OpenCV4Android version 2.4.11 and I am trying to detect rectangles in frames retrieved from the camera. I referred to some questions on this website and they were very helpful, but the issue I am currently facing is this:
when I try to detect an object with a light color in the middle, as shown in the original image below, the detection algorithm does not detect the object as a whole; rather, it detects the dark parts of it, as shown in the image in the section titled "processed" below.
The code posted below shows the steps I followed and the threshold values I used to detect the objects in the frames.
Please let me know why the object as a whole is not being detected and what I can do to detect the whole object, not only parts of it.
Code:
//step 1
this.mMatGray = new Mat();
Imgproc.cvtColor(this.mMatInputFrame, this.mMatGray, Imgproc.COLOR_BGR2GRAY);

//step 2
this.mMatEdges = new Mat();
Imgproc.blur(this.mMatGray, this.mMatEdges, new Size(7, 7));//7,7

//step 3
Imgproc.Canny(this.mMatEdges, this.mMatEdges, 128, 128 * 2, 5, true);//..,..,2,900,7,true

//step 4
dilated = new Mat();
Mat dilateElement = Imgproc.getStructuringElement(Imgproc.MORPH_DILATE, new Size(3, 3));
Imgproc.dilate(mMatEdges, dilated, dilateElement);

ArrayList<MatOfPoint> contours = new ArrayList<>();
hierachy = new Mat();
Imgproc.findContours(dilated, contours, hierachy, Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);

MatOfPoint2f approxCurve = new MatOfPoint2f();
if (contours.size() > 0) {
    for (int i = 0; i < contours.size(); i++) {
        MatOfPoint2f contour2f = new MatOfPoint2f(contours.get(i).toArray());
        double approxDistance = Imgproc.arcLength(contour2f, true) * .02;//.02
        Imgproc.approxPolyDP(contour2f, approxCurve, approxDistance, true);
        MatOfPoint points = new MatOfPoint(approxCurve.toArray());

        if (points.total() >= 4 && Imgproc.isContourConvex(points) && Math.abs(Imgproc.contourArea(points)) >= 40000 && Math.abs(Imgproc.contourArea(points)) <= 150000) {
            Rect boundingRect = Imgproc.boundingRect(points);
            RotatedRect minAreaRect = Imgproc.minAreaRect(contour2f);
            Point[] rectPoints = new Point[4];
            minAreaRect.points(rectPoints);
            Rect minAreaAsRect = minAreaRect.boundingRect();

            //to draw the minAreaRect
            for (int j = 0; j < 4; j++) {
                Core.line(mMatInputFrame, rectPoints[j], rectPoints[(j + 1) % 4], new Scalar(255, 0, 0));
            }
            Core.putText(mMatInputFrame, "MinAreaRect", new Point(10, 30), 1, 1, new Scalar(255, 0, 0), 2);
            Core.putText(mMatInputFrame, "Width: " + minAreaAsRect.width, new Point(minAreaAsRect.tl().x, minAreaAsRect.tl().y - 100), 1, 1, new Scalar(255, 0, 0), 2);
            Core.putText(mMatInputFrame, "Height: " + minAreaAsRect.height, new Point(minAreaAsRect.tl().x, minAreaAsRect.tl().y - 80), 1, 1, new Scalar(255, 0, 0), 2);
            Core.putText(mMatInputFrame, "Area: " + minAreaAsRect.area(), new Point(minAreaAsRect.tl().x, minAreaAsRect.tl().y - 60), 1, 1, new Scalar(255, 0, 0), 2);

            //drawing the contour
            Imgproc.drawContours(mMatInputFrame, contours, i, new Scalar(0, 0, 0), 2);

            //drawing the boundingRect
            Core.rectangle(mMatInputFrame, boundingRect.tl(), boundingRect.br(), new Scalar(0, 255, 0), 1, 1, 0);
            Core.putText(mMatInputFrame, "BoundingRect", new Point(10, 60), 1, 1, new Scalar(0, 255, 0), 2);
            Core.putText(mMatInputFrame, "Width: " + boundingRect.width, new Point(boundingRect.br().x - 100, boundingRect.tl().y - 100), 1, 1, new Scalar(0, 255, 0), 2);
            Core.putText(mMatInputFrame, "Height: " + boundingRect.height, new Point(boundingRect.br().x - 100, boundingRect.tl().y - 80), 1, 1, new Scalar(0, 255, 0), 2);
            Core.putText(mMatInputFrame, "Area: " + Imgproc.contourArea(points), new Point(boundingRect.br().x - 100, boundingRect.tl().y - 60), 1, 1, new Scalar(0, 255, 0), 2);
        }
    }
}
Original image:
Processed image:
I have implemented this in C++. The APIs are the same, so you can easily port it to Android. I used OpenCV 2.4.8. Please check the implementation; I hope the code makes clear what is being done:
#include <iostream>
#include <string>
#include "opencv/highgui.h"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/objdetect/objdetect.hpp"

using namespace std;
using namespace cv;

Mat GetKernel(int erosion_size)
{
    Mat element = getStructuringElement(cv::MORPH_CROSS,
        cv::Size(2 * erosion_size + 1, 2 * erosion_size + 1),
        cv::Point(erosion_size, erosion_size));
    return element;
}

int main()
{
    Mat img = imread("C:/Users/dell2/Desktop/j6B3A.png", 0); // loading gray scale image
    Mat imgC = imread("C:/Users/dell2/Desktop/j6B3A.png", 1);

    GaussianBlur(img, img, Size(7, 7), 1.5, 1.5);
    Mat dimg;
    adaptiveThreshold(img, dimg, 255, ADAPTIVE_THRESH_GAUSSIAN_C, THRESH_BINARY, 17, 1);
    dilate(dimg, img, GetKernel(2));
    erode(img, dimg, GetKernel(2));
    erode(dimg, img, GetKernel(1));
    dimg = img;

    //*
    vector<vector<Point>> contours; // Vector for storing contours
    vector<Vec4i> hierarchy;
    findContours(dimg, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_NONE); // Find the contours in the image

    double largest_area = 0;
    int largest_contour_index = 0;
    Rect bounding_rect;
    for (int i = 0; i < contours.size(); i++) // iterate through each contour
    {
        double a = contourArea(contours[i], false); // Find the area of the contour
        if (a > largest_area) {
            largest_area = a;
            largest_contour_index = i;                 // Store the index of the largest contour
            bounding_rect = boundingRect(contours[i]); // Find the bounding rectangle for the biggest contour
        }
    }
    drawContours(imgC, contours, largest_contour_index, Scalar(255, 0, 0), 2, 8, hierarchy, 0, Point());
    rectangle(imgC, bounding_rect, Scalar(0, 255, 0), 2, 8, 0);
    /**/

    //imshow("display", dimg);
    imshow("display2", imgC);
    waitKey(0);
    return 0;
}
Output produced:
You can fine-tune the threshold (the blockSize and C arguments of adaptiveThreshold, 17 and 1 above) if necessary.

Input matrix to opencv kmeans clustering

This question is specific to OpenCV:
The kmeans example given in the OpenCV documentation has a 2-channel matrix - one channel for each dimension of the feature vector. But some of the other examples seem to say that it should be a one-channel matrix with features along the columns and one row for each sample. Which of these is right?
If I have a 5-dimensional feature vector, what should the input matrix be:
This one:
cv::Mat inputSamples(numSamples, 1, CV32FC(numFeatures))
or this one:
cv::Mat inputSamples(numSamples, numFeatures, CV_32F)
The correct answer is cv::Mat inputSamples(numSamples, numFeatures, CV_32F).
The OpenCV Documentation about kmeans says:
samples – Floating-point matrix of input samples, one row per sample
So it is not a floating-point vector of n-dimensional floats as in the other option. Which examples suggested such behaviour?
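For the 5-dimensional case from your question, a minimal sketch of filling such a matrix could look like this (it assumes the features are already available as plain floats, here in a hypothetical std::vector<std::array<float, 5>>):
#include <array>
#include <vector>
#include <opencv2/core/core.hpp>

// Build the (numSamples x 5) CV_32F sample matrix, one row per sample, and run kmeans on it.
cv::Mat clusterFeatures(const std::vector<std::array<float, 5>>& features, int k, cv::Mat& labels)
{
    cv::Mat inputSamples((int)features.size(), 5, CV_32F);
    for (int i = 0; i < inputSamples.rows; i++)
        for (int j = 0; j < 5; j++)
            inputSamples.at<float>(i, j) = features[i][j];

    cv::Mat centers;
    cv::kmeans(inputSamples, k, labels,
               cv::TermCriteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS, 100, 1e-4),
               3, cv::KMEANS_PP_CENTERS, centers);
    return centers; // one row per cluster center, 5 columns
}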
Here is also a small example by me that shows how kmeans can be used. It clusters the pixels of an image and displays the result:
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
using namespace cv;
int main( int argc, char** argv )
{
Mat src = imread( argv[1], 1 );
Mat samples(src.rows * src.cols, 3, CV_32F);
for( int y = 0; y < src.rows; y++ )
for( int x = 0; x < src.cols; x++ )
for( int z = 0; z < 3; z++)
samples.at<float>(y + x*src.rows, z) = src.at<Vec3b>(y,x)[z];
int clusterCount = 15;
Mat labels;
int attempts = 5;
Mat centers;
kmeans(samples, clusterCount, labels, TermCriteria(CV_TERMCRIT_ITER|CV_TERMCRIT_EPS, 10000, 0.0001), attempts, KMEANS_PP_CENTERS, centers );
Mat new_image( src.size(), src.type() );
for( int y = 0; y < src.rows; y++ )
for( int x = 0; x < src.cols; x++ )
{
int cluster_idx = labels.at<int>(y + x*src.rows,0);
new_image.at<Vec3b>(y,x)[0] = centers.at<float>(cluster_idx, 0);
new_image.at<Vec3b>(y,x)[1] = centers.at<float>(cluster_idx, 1);
new_image.at<Vec3b>(y,x)[2] = centers.at<float>(cluster_idx, 2);
}
imshow( "clustered image", new_image );
waitKey( 0 );
}
As an alternative to reshaping the input matrix manually, you can use the OpenCV reshape function to achieve a similar result with less code. Here is my working implementation of reducing the color count with the k-means method (in Java):
private final static int MAX_ITER = 10;
private final static int CLUSTERS = 16;

public static Mat colorMapKMeans(Mat img, int K, int maxIterations) {
    Mat m = img.reshape(1, img.rows() * img.cols());
    m.convertTo(m, CvType.CV_32F);

    Mat bestLabels = new Mat(m.rows(), 1, CvType.CV_8U);
    Mat centroids = new Mat(K, 1, CvType.CV_32F);
    Core.kmeans(m, K, bestLabels,
            new TermCriteria(TermCriteria.COUNT | TermCriteria.EPS, maxIterations, 1E-5),
            1, Core.KMEANS_RANDOM_CENTERS, centroids);

    List<Integer> idx = new ArrayList<>(m.rows());
    Converters.Mat_to_vector_int(bestLabels, idx);

    Mat imgMapped = new Mat(m.size(), m.type());
    for (int i = 0; i < idx.size(); i++) {
        Mat row = imgMapped.row(i);
        centroids.row(idx.get(i)).copyTo(row);
    }

    return imgMapped.reshape(3, img.rows());
}

public static void main(String[] args) {
    System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
    Highgui.imwrite("result.png",
            colorMapKMeans(Highgui.imread(args[0], Highgui.CV_LOAD_IMAGE_COLOR),
                    CLUSTERS, MAX_ITER));
}
OpenCV reads the image into a 2-dimensional, 3-channel matrix. The first call to reshape - img.reshape(1, img.rows() * img.cols()); - essentially unrolls the 3 channels into columns. In the resulting matrix, one row corresponds to one pixel of the input image, and the 3 columns correspond to the RGB components.
After the k-means algorithm has finished its work and the color mapping has been applied, we call reshape again - imgMapped.reshape(3, img.rows()) - but now rolling the columns back into channels and reducing the number of rows to the original image row count, thus getting back the original matrix format, but with reduced colors.
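For completeness, the same reshape round trip looks like this in the C++ API. This is only a sketch; the k-means call and the per-row copy of cluster centers shown in the Java code above are elided:
#include <opencv2/core/core.hpp>

cv::Mat quantizeColors(const cv::Mat& img /* CV_8UC3 input */)
{
    // unroll the HxW 3-channel image into an (H*W) x 3 single-channel float matrix
    // (one row per pixel - the layout cv::kmeans expects)
    cv::Mat samples = img.reshape(1, img.rows * img.cols);
    samples.convertTo(samples, CV_32F);

    // ... run cv::kmeans on "samples" and overwrite each row with its cluster center ...

    // roll the 3 columns back into 3 channels and restore the original row count
    cv::Mat quantized;
    samples.convertTo(quantized, CV_8U);
    return quantized.reshape(3, img.rows);
}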
