emguCV getting the largest blob - opencv

I'm working on color tracking, specifically tracking an orange ball (a basketball), along with Kinect skeleton tracking for the body; I'm making a free-throw shooting guide.
Here's my case:
I have already thresholded my image, eroded it to remove noise and other insignificant (non-ball) objects, and then dilated a few times to emphasize the ball.
So I've arrived at a final binary image where I've successfully isolated the ball, but there are still other, smaller blobs that aren't the ball. How do I get the largest blob (the ball) and put a bounding box around it?
I've tried Hough circles, by the way, but that is very slow. Thanks! Some code would be useful.

This is the code I used to get the largest blob in the image:
public static Blob FindLargestObject(Image<Gray, byte> block, Rectangle rectangle)
{
    Image<Gray, byte> mask = block.CopyBlank();
    Contour<Point> largestContour = null;
    double largestArea = 0;

    // Walk the top-level contours (HNext links siblings) and keep the biggest one.
    for (var contours = block.FindContours(CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE,
             RETR_TYPE.CV_RETR_EXTERNAL); contours != null; contours = contours.HNext)
    {
        if (contours.Area > largestArea)
        {
            largestArea = contours.Area;
            largestContour = contours;
        }
    }

    // Fill the largest contour on the mask (guard against an image with no contours).
    if (largestContour != null)
    {
        mask.Draw(largestContour, new Gray(255), -1);
    }
    return new Blob(mask, largestContour, rectangle);
}
For Blob:
public class Blob
{
    public Image<Gray, byte> Mask { get; set; }
    public Contour<Point> Contour { get; set; }
    public Rectangle Rectangle { get; set; }
    // Constructor used by FindLargestObject above.
    public Blob(Image<Gray, byte> mask, Contour<Point> contour, Rectangle rectangle)
    {
        Mask = mask; Contour = contour; Rectangle = rectangle;
    }
}
The blob will contain all the information that you want to get.
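For the bounding-box part of the question, here is a minimal usage sketch (hedged: the file name is illustrative and FindLargestObject is assumed to be in scope; in EmguCV 2.x a Contour<Point> exposes BoundingRectangle directly):
// Sketch: load the thresholded binary frame, find the ball, draw its bounding box.
Image<Gray, byte> binary = new Image<Gray, byte>("thresholded.png"); // your binary image
Blob ball = FindLargestObject(binary, binary.ROI);
Image<Bgr, byte> display = binary.Convert<Bgr, byte>();
if (ball.Contour != null)
{
    // Bounding box of the largest blob, drawn in red, 2 px thick.
    display.Draw(ball.Contour.BoundingRectangle, new Bgr(0, 0, 255), 2);
}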

Related

Text Similarity Percentage in Document Files using ML.Net

I want to group PDF documents that are 80% or more similar using the K-Means algorithm and ML.Net.
I am reading the text from the PDF files.
My requirement is that the files should be grouped according to whatever similarity percentage the user enters; for example, if the user enters 70%, the documents in a group should be at least 70% similar.
Also, how can I get the Euclidean distance of each document from its centroid?
I am new to ML.Net and the algorithm. Please help and guide. Thanks!
public class Prediction
{
    [ColumnName("PredictedLabel")]
    public uint Cluster { get; set; }

    [ColumnName("Score")]
    public float[] Distances { get; set; }
}

public void Train(IEnumerable<TextData> data, int numberOfClusters)
{
    var mlContext = new MLContext();
    var textDataView = mlContext.Data.LoadFromEnumerable(data);
    var textEstimator = mlContext.Transforms.Text.NormalizeText("Text")
        .Append(mlContext.Transforms.Text.TokenizeIntoWords("Text"))
        .Append(mlContext.Transforms.Text.RemoveDefaultStopWords("Text"))
        .Append(mlContext.Transforms.Conversion.MapValueToKey("Text"))
        .Append(mlContext.Transforms.Text.ProduceNgrams("Text"))
        .Append(mlContext.Transforms.NormalizeLpNorm("Text"))
        .Append(mlContext.Transforms.NormalizeMinMax("Text"))
        .Append(mlContext.Clustering.Trainers.KMeans("Text",
            numberOfClusters: numberOfClusters));
    var model = textEstimator.Fit(textDataView);
    _predictionEngine = mlContext.Model.CreatePredictionEngine<TextData, Prediction>(model);
    var cluster = _predictionEngine.Predict(new TextData()
    {
        Text = "19D000XXX Susan Porter 821442289 Information required"
    }).Cluster;
}
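On the Euclidean-distance part of the question: in ML.NET's K-Means the Score column (mapped to Distances in the Prediction class above) holds the squared distance of the example to each cluster centroid, so a sketch of the distance to the assigned centroid would be (documentText is a placeholder; requires System and System.Linq):
// Sketch: distance of a document from its assigned centroid.
// The smallest entry in Distances belongs to the predicted cluster.
var prediction = _predictionEngine.Predict(new TextData() { Text = documentText });
double distanceToCentroid = Math.Sqrt(prediction.Distances.Min());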

How to save interest point in SURF feature?

I'm using Emgu CV's SURF feature to recognize similar objects in images.
The image is drawn showing all the key points found in both images, and the matched points are visible in it.
But how can I save those match points in a database?
First of all, create a class SURF.cs, then write the following code in it:
public Matrix<float> FindSURF(Image<Gray, Byte> modelImage)
{
    SURFDetector surfCPU = new SURFDetector(500, false);

    // Extract key points and descriptors from the object image,
    // and return the descriptors so they can be stored.
    VectorOfKeyPoint modelKeyPoints = new VectorOfKeyPoint();
    Matrix<float> modelDescriptors = surfCPU.DetectAndCompute(modelImage, null, modelKeyPoints);
    return modelDescriptors;
}
Then, in Program.cs, write the following code:
SURF FindImageSURF = new SURF();
string[] filePaths = Directory.GetFiles(@"E:\folderimages\");
for (int i = 0; i < filePaths.Length; ++i)
{
    string path = filePaths[i];
    using (Image<Gray, Byte> modelImage = new Image<Gray, byte>(path))
    {
        Matrix<float> descriptors = FindImageSURF.FindSURF(modelImage);
        // descriptors can now be persisted; see the sketch below.
    }
}
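To actually persist the descriptors (the question's goal), one hedged approach is to flatten the Matrix<float> into bytes and store them as a BLOB; the helper below is illustrative, not part of EmguCV:
// Sketch: serialize SURF descriptors for storage in a file or database BLOB.
// Store Rows and Cols alongside the bytes so the matrix can be rebuilt later.
public static byte[] DescriptorsToBytes(Matrix<float> descriptors)
{
    byte[] bytes = new byte[descriptors.Rows * descriptors.Cols * sizeof(float)];
    Buffer.BlockCopy(descriptors.Data, 0, bytes, 0, bytes.Length);
    return bytes;
}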

Emgu CV 3 findContours and hierarchy parameter of type Vec4i equivalent?

I'm attempting to translate the following OpenCV C++ code into Emgu CV 3:
std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i> v4iHierarchy;
cv::findContours(imgThreshCopy, contours, v4iHierarchy, cv::RETR_TREE, cv::CHAIN_APPROX_SIMPLE);
I can find some Emgu CV 3 examples that pass null for the 3rd parameter to FindContours; for example, done that way, here is a Visual Basic translation:
Dim contours As New VectorOfVectorOfPoint()
CvInvoke.FindContours(imgThreshCopy, contours, Nothing, RetrType.Tree, ChainApproxMethod.ChainApproxSimple)
This works if the hierarchy parameter is not needed, but what if it is? I can't seem to figure out the Emgu CV 3 equivalent of the C++ line
std::vector<cv::Vec4i> v4iHierarchy;
Anybody else gotten this to work? Any help would be appreciated.
Pass a default-constructed Mat to get the hierarchy.
VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();
Mat hierarchy = new Mat();
CvInvoke.FindContours(
    image,
    contours,
    hierarchy,
    RetrType.Ccomp,
    ChainApproxMethod.ChainApproxSimple
);
Console.WriteLine("contours.Size: " + contours.Size);
Console.WriteLine("hierarchy.Rows: " + hierarchy.Rows);
Console.WriteLine("hierarchy.Cols: " + hierarchy.Cols);
Console.WriteLine("hierarchy.Depth: " + hierarchy.Depth);
Console.WriteLine("hierarchy.NumberOfChannels: " + hierarchy.NumberOfChannels);
// Example Output:
// contours.Size: 4391
// hierarchy.Rows: 1
// hierarchy.Cols: 4391
// hierarchy.Depth: Cv32S
// hierarchy.NumberOfChannels: 4
You can access the hierarchy data using the Mat DataPointer:
/// <summary>
/// Get a neighbor index in the hierarchy tree.
/// </summary>
/// <returns>
/// A neighbor index, or -1 if the given neighbor does not exist.
/// </returns>
public int Get(HierarchyIndex component, int index)
{
    if (Hierarchy.Depth != Emgu.CV.CvEnum.DepthType.Cv32S)
    {
        throw new ArgumentOutOfRangeException("ContourData must have Cv32S hierarchy element type.");
    }
    if (Hierarchy.Rows != 1)
    {
        throw new ArgumentOutOfRangeException("ContourData must have one hierarchy row.");
    }
    if (Hierarchy.NumberOfChannels != 4)
    {
        throw new ArgumentOutOfRangeException("ContourData must have four hierarchy channels.");
    }
    if (Hierarchy.Dims != 2)
    {
        throw new ArgumentOutOfRangeException("ContourData must have a two-dimensional hierarchy.");
    }

    long elementStride = Hierarchy.ElementSize / sizeof(Int32);
    var offset = (long)component + index * elementStride;
    if (0 <= offset && offset < Hierarchy.Total.ToInt64() * elementStride)
    {
        unsafe
        {
            return *((Int32*)Hierarchy.DataPointer.ToPointer() + offset);
        }
    }
    else
    {
        return -1;
    }
}
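HierarchyIndex is not defined in the snippet above; judging from OpenCV's [next, previous, first child, parent] hierarchy layout, it is presumably an enum along these lines:
// Assumed definition, matching OpenCV's per-contour hierarchy element
// [next sibling, previous sibling, first child, parent].
public enum HierarchyIndex
{
    Next = 0,
    Previous = 1,
    FirstChild = 2,
    Parent = 3
}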
https://gist.github.com/joshuanapoli/8c3f282cece8340a1dd43aa5e80d170b
EmguCV started using VectorOfVectorOfPoint for FindContours, but didn't really update their code to work correctly with it. See below for a working example:
/// <summary>
/// Find contours in a grayscale image.
/// </summary>
/// <param name="method">The type of approximation method</param>
/// <param name="type">The retrieval type</param>
/// <returns>
/// The contours, if any are found; an empty vector otherwise.
/// </returns>
public static VectorOfVectorOfPoint FindContours(this Image<Gray, byte> image,
    ChainApproxMethod method = ChainApproxMethod.ChainApproxSimple,
    Emgu.CV.CvEnum.RetrType type = RetrType.List)
{
    // Chain-code output needs a different return type, so it is not supported here.
    if (method == Emgu.CV.CvEnum.ChainApproxMethod.ChainCode)
    {
        throw new NotImplementedException("Chain Code not implemented, sorry try again later");
    }
    VectorOfVectorOfPoint result = new VectorOfVectorOfPoint();
    CvInvoke.FindContours(image, result, null, type, method);
    return result;
}
This returns a VectorOfVectorOfPoint, which implements IInputOutputArray, IOutputArray, IInputArrayOfArrays, and IInputArray. I'm not sure what you need to do with the contours, but here is an example of how to get the bounding boxes for each one. We do some other things with them as well, so let me know what you need and I can help you.
VectorOfVectorOfPoint contours = canvass2.FindContours(ChainApproxMethod.ChainApproxSimple, RetrType.Tree);
int contCount = contours.Size;
for (int i = 0; i < contCount; i++)
{
    using (VectorOfPoint contour = contours[i])
    {
        segmentRectangles.Add(CvInvoke.BoundingRectangle(contour));
        if (debug)
        {
            finalCopy.Draw(CvInvoke.BoundingRectangle(contour), new Rgb(255, 0, 0), 5);
        }
    }
}
You can simply create a Matrix<int> and then copy your Mat data into it. See the example below:
Mat hierarchy = new Mat();
CvInvoke.FindContours(imgThreshCopy, contours, hierarchy, RetrType.Tree, ChainApproxMethod.ChainApproxSimple);
Matrix<int> matrix = new Matrix<int>(hierarchy.Rows, hierarchy.Cols, hierarchy.NumberOfChannels);
hierarchy.CopyTo(matrix);
The data can then be accessed through matrix.Data. Good luck.
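As a hedged sketch of reading it back (this assumes the four hierarchy channels end up interleaved along the columns of matrix.Data, which is worth verifying on your own build):
// Sketch: read the [next, previous, first child, parent] entries of each contour.
for (int i = 0; i < matrix.Cols; i++)
{
    int next       = matrix.Data[0, 4 * i + 0]; // next sibling
    int previous   = matrix.Data[0, 4 * i + 1]; // previous sibling
    int firstChild = matrix.Data[0, 4 * i + 2]; // first child
    int parent     = matrix.Data[0, 4 * i + 3]; // parent
    Console.WriteLine($"contour {i}: next={next}, prev={previous}, child={firstChild}, parent={parent}");
}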

HoughLinesBinary doesn't find lines

I run Canny on an image of a house and then HoughLinesBinary on the result.
Original image:
Image after Canny:
Lines found by HoughLinesBinary:
As you can see, it produced many artifacts but didn't mark obvious straight lines, like the left side of the door.
Source:
public static Image<Bgr, byte> split_to_patterns(Image<Bgr, byte> original)
{
    Image<Bgr, byte> res = original.Copy();
    LineSegment2D[] lines = original
        .Convert<Gray, byte>()
        .Canny(16, 16)
        .HoughLinesBinary(1, Math.PI / 16, 1, 10, 1)[0];
    foreach (LineSegment2D line in lines)
    {
        res.Draw(line, new Bgr(Color.Red), 2);
    }
    return res;
}
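For what it's worth, the arguments to HoughLinesBinary are (rhoResolution, thetaResolution, threshold, minLineWidth, gapBetweenLines): Math.PI/16 is an angular step of roughly 11 degrees, and a threshold of 1 accepts almost any pixel as evidence for a line, which fits the noisy result described. A hedged sketch with more conventional values (illustrative numbers that would still need tuning per image):
// Sketch: the same pipeline with a finer angular step and a real vote threshold.
LineSegment2D[] lines = original
    .Convert<Gray, byte>()
    .Canny(100, 200)            // wider hysteresis band than (16, 16)
    .HoughLinesBinary(
        1.0,                    // rho resolution: 1 pixel
        Math.PI / 180.0,        // theta resolution: 1 degree
        20,                     // accumulator threshold: require ~20 votes
        30.0,                   // minimum line length in pixels
        10.0)[0];               // maximum gap allowed within one line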

EmguCV and Unity3D Interoperation

I managed to integrate EmguCV into Unity3D and wrote a little converter, which has some little problems.
Step 1 Converting Unity3D Texture to OpenCV Image
public static Image<Bgr, byte> UnityTextureToOpenCVImage(Texture2D tex)
{
    return UnityTextureToOpenCVImage(tex.GetPixels32(), tex.width, tex.height);
}

public static Image<Bgr, byte> UnityTextureToOpenCVImage(Color32[] data, int width, int height)
{
    byte[,,] imageData = new byte[width, height, 3];
    int index = 0;
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            imageData[x, y, 0] = data[index].b;
            imageData[x, y, 1] = data[index].g;
            imageData[x, y, 2] = data[index].r;
            index++;
        }
    }
    Image<Bgr, byte> image = new Image<Bgr, byte>(imageData);
    return image;
}
Step 2 Converting OpenCV Image back to Unity3D Texture
public static Texture2D OpenCVImageToUnityTexture(Image<Bgr, byte> openCVImage, GameObject check)
{
    return OpenCVImageToUnityTexture(openCVImage.Data, openCVImage.Width, openCVImage.Height, check);
}

public static Texture2D OpenCVImageToUnityTexture(byte[,,] data, int width, int height, GameObject check)
{
    Color32[] imageData = new Color32[width * height];
    int index = 0;
    byte alpha = 255;
    for (int y = 0; y < width; y++)
    {
        for (int x = 0; x < height; x++)
        {
            imageData[index] = new Color32(data[x, y, 2],
                                           data[x, y, 1],
                                           data[x, y, 0],
                                           alpha);
            check.SetActive(true);
            index++;
        }
    }
    Texture2D toReturn = new Texture2D(width, height, TextureFormat.RGBA32, false);
    toReturn.SetPixels32(imageData);
    toReturn.Apply();
    toReturn.wrapMode = TextureWrapMode.Clamp;
    return toReturn;
}
The compiler throws no errors, but something goes wrong every time. See for yourself: cats.
On the left side is the original image; on the right side is the converted one. As you can see, there are more cats than there should be...
Has anyone any clues?
It is also slow as hell because of iterating twice through all the pixels. Is there any better solution?
EDIT
This is the code where I draw my GUITextures:
public GameObject catGO;
GUITexture guitex;
Texture catTex;

void Start()
{
    guitex = GetComponent<GUITexture>();
    catTex = catGO.GetComponent<GUITexture>().texture;
    Image<Bgr, byte> cvImage = EmguCVUnityInterop.UnityTextureToOpenCVImage((Texture2D)catTex);
    Texture2D converted = EmguCVUnityInterop.OpenCVImageToUnityTexture(cvImage);
    guitex.texture = converted;
}
First of all, you call check.SetActive(true) inside the two for loops, so it runs width*height times. Enabling a GameObject is a costly operation.
Second, iterating through every pixel is another costly operation. For example, with a 2560x1600 image you have over 4 million iterations.
Try changing the TextureWrapMode to Repeat (I know it sounds silly :])
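As for the extra cats, a further hedged guess beyond the answer above: EmguCV's Image<,>.Data is indexed [row, column, channel], i.e. [y, x, c], while both converters allocate and index the array as [x, y, c], so every frame is written with its axes transposed. A sketch of the first converter with only the dimensions swapped (note also that Unity's GetPixels32 starts at the bottom-left, so the result may still be vertically flipped):
public static Image<Bgr, byte> UnityTextureToOpenCVImageFixed(Color32[] data, int width, int height)
{
    // EmguCV expects [rows, cols, channels] = [height, width, 3].
    byte[,,] imageData = new byte[height, width, 3];
    int index = 0;
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            imageData[y, x, 0] = data[index].b;
            imageData[y, x, 1] = data[index].g;
            imageData[y, x, 2] = data[index].r;
            index++;
        }
    }
    return new Image<Bgr, byte>(imageData);
}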
