I'm using Emgu CV's SURF feature to recognize similar objects in images.
The result image is drawn showing all the key points found in both images, and the matching points between them are visible.
How can I save those match points in a database?
First of all, create a class file SURF.cs, then write the following code in it:
public Matrix<float> FindSURF(Image<Gray, Byte> modelImage)
{
    // SURF detector with a hessian threshold of 500; extended = false gives 64-element descriptors
    SURFDetector surfCPU = new SURFDetector(500, false);

    // extract key points and their descriptors from the object image
    VectorOfKeyPoint modelKeyPoints = new VectorOfKeyPoint();
    Matrix<float> modelDescriptors = surfCPU.DetectAndCompute(modelImage, null, modelKeyPoints);

    // return the descriptors so they can be stored
    return modelDescriptors;
}
Then, in Program.cs, write the following code:
SURF FindImageSURF = new SURF();
string[] filePaths = Directory.GetFiles(@"E:\folderimages\");
for (int i = 0; i < filePaths.Length; ++i)
{
    string path = filePaths[i];
    using (Image<Gray, Byte> modelImage = new Image<Gray, byte>(path))
    {
        Matrix<float> descriptors = FindImageSURF.FindSURF(modelImage);
        // the descriptors can now be serialized and stored, as shown below
    }
}
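To actually save the match points, you can flatten each descriptor matrix to a byte array and store it in a BLOB column. The following is a minimal sketch, not part of Emgu CV: it assumes a SQL Server table named SurfDescriptors with columns Path and Descriptors (varbinary(max)) and a connection string of your own; adapt it to whatever database you use.
public static byte[] ToBytes(Matrix<float> descriptors)
{
    // Flatten the rows x 64 float matrix into raw bytes for a BLOB column.
    float[,] data = descriptors.Data;
    byte[] bytes = new byte[data.Length * sizeof(float)];
    Buffer.BlockCopy(data, 0, bytes, 0, bytes.Length);
    return bytes;
}

public static void SaveDescriptors(string connectionString, string path, Matrix<float> descriptors)
{
    // Hypothetical schema:
    // CREATE TABLE SurfDescriptors (Path nvarchar(260), Descriptors varbinary(max))
    using (var connection = new System.Data.SqlClient.SqlConnection(connectionString))
    using (var command = new System.Data.SqlClient.SqlCommand(
        "INSERT INTO SurfDescriptors (Path, Descriptors) VALUES (@path, @desc)", connection))
    {
        command.Parameters.AddWithValue("@path", path);
        command.Parameters.AddWithValue("@desc", ToBytes(descriptors));
        connection.Open();
        command.ExecuteNonQuery();
    }
}
To load a descriptor back, read the byte array, allocate a Matrix<float> with 64 columns (the SURF descriptor length when extended is false), and Buffer.BlockCopy the bytes into its Data array.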
I am new to using MOA and I am having a hard time figuring out how the clustering algorithms are meant to be used. The documentation lacks sample code for common usages, the implementation is not well explained with comments, and I have not found any tutorial either.
So, here is my code:
import com.yahoo.labs.samoa.instances.DenseInstance;
import moa.cluster.Clustering;
import moa.clusterers.denstream.WithDBSCAN;

public class TestingDenstream {

    static DenseInstance randomInstance(int size) {
        DenseInstance instance = new DenseInstance(size);
        for (int idx = 0; idx < size; idx++) {
            instance.setValue(idx, Math.random());
        }
        return instance;
    }

    public static void main(String[] args) {
        WithDBSCAN withDBSCAN = new WithDBSCAN();
        withDBSCAN.resetLearningImpl();

        for (int i = 0; i < 10; i++) {
            DenseInstance d = randomInstance(2);
            withDBSCAN.trainOnInstanceImpl(d);
        }

        Clustering clusteringResult = withDBSCAN.getClusteringResult();
        Clustering microClusteringResult = withDBSCAN.getMicroClusteringResult();
        System.out.println(clusteringResult);
    }
}
And here is the error I get:
Any insights into how the algorithm has to be used will be appreciated. Thanks!
I have updated the code, and it now works. As mentioned in the GitHub discussion, you have to assign a header to your instance.
Here is the updated code:
// uses java.util.ArrayList, java.util.Random, and the Attribute,
// Instances, and InstancesHeader classes that ship with MOA
static DenseInstance randomInstance(int size) {
    // generate the feature names, which make up the InstancesHeader
    ArrayList<Attribute> attributes = new ArrayList<Attribute>();
    for (int i = 0; i < size; i++) {
        attributes.add(new Attribute("feature_" + i));
    }

    // create an instance header from the generated feature names
    InstancesHeader streamHeader = new InstancesHeader(
            new Instances("Mustafa Çelik Instance", attributes, size));

    // generate random data matching the number of attributes
    double[] data = new double[size];
    Random random = new Random();
    for (int i = 0; i < size; i++) {
        data[i] = random.nextDouble();
    }

    // create an instance with weight 1.0 and assign the data
    DenseInstance inst = new DenseInstance(1.0, data);

    // assign the instance header (feature names) to the instance
    inst.setDataset(streamHeader);
    return inst;
}

public static void main(String[] args) {
    WithDBSCAN withDBSCAN = new WithDBSCAN();
    withDBSCAN.resetLearningImpl();
    withDBSCAN.initialDBScan();

    for (int i = 0; i < 1500; i++) {
        DenseInstance d = randomInstance(5);
        withDBSCAN.trainOnInstanceImpl(d);
    }

    Clustering clusteringResult = withDBSCAN.getClusteringResult();
    Clustering microClusteringResult = withDBSCAN.getMicroClusteringResult();
    System.out.println(clusteringResult);
}
Here is a screenshot of the debug process; as you can see, the clustering result is generated.
(The image link is broken; you can find it in the GitHub entry.)
I'm attempting to translate the following OpenCV C++ code into Emgu CV 3:
std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i> v4iHierarchy;
cv::findContours(imgThreshCopy, contours, v4iHierarchy, cv::RETR_TREE, cv::CHAIN_APPROX_SIMPLE);
I can find some Emgu CV 3 examples that pass null for the 3rd parameter to FindContours; for example, done that way, here is a Visual Basic translation:
Dim contours As New VectorOfVectorOfPoint()
CvInvoke.FindContours(imgThreshCopy, contours, Nothing, RetrType.Tree, ChainApproxMethod.ChainApproxSimple)
Which works if the hierarchy parameter is not needed, but what if it is? I can't seem to figure out the Emgu CV 3 equivalent of the C++ line
std::vector<cv::Vec4i> v4iHierarchy;
Anybody else gotten this to work? Any help would be appreciated.
Pass a default-constructed Mat to get the hierarchy.
var contours = new VectorOfVectorOfPoint();
var hierarchy = new Mat();
CvInvoke.FindContours(
image,
contours,
hierarchy,
RetrType.Ccomp,
ChainApproxMethod.ChainApproxSimple
);
Console.WriteLine("contours.Size: " + contours.Size);
Console.WriteLine("hierarchy.Rows: " + hierarchy.Rows);
Console.WriteLine("hierarchy.Cols: " + hierarchy.Cols);
Console.WriteLine("hierarchy.Depth: " + hierarchy.Depth);
Console.WriteLine("hierarchy.NumberOfChannels: " + hierarchy.NumberOfChannels);
// Example Output:
// contours.Size: 4391
// hierarchy.Rows: 1
// hierarchy.Cols: 4391
// hierarchy.Depth: Cv32S
// hierarchy.NumberOfChannels: 4
You can access the hierarchy data using the Mat DataPointer:
/// <summary>
/// Get a neighbor index in the hierarchy tree.
/// </summary>
/// <returns>
/// A neighbor index or -1 if the given neighbor does not exist.
/// </returns>
public int Get(HierarchyIndex component, int index)
{
if (Hierarchy.Depth != Emgu.CV.CvEnum.DepthType.Cv32S)
{
throw new ArgumentOutOfRangeException("ContourData must have Cv32S hierarchy element type.");
}
if (Hierarchy.Rows != 1)
{
throw new ArgumentOutOfRangeException("ContourData must have one hierarchy hierarchy row.");
}
if (Hierarchy.NumberOfChannels != 4)
{
throw new ArgumentOutOfRangeException("ContourData must have four hierarchy channels.");
}
if (Hierarchy.Dims != 2)
{
throw new ArgumentOutOfRangeException("ContourData must have two dimensional hierarchy.");
}
long elementStride = Hierarchy.ElementSize / sizeof(Int32);
var offset = (long)component + index * elementStride;
if (0 <= offset && offset < Hierarchy.Total.ToInt64() * elementStride)
{
unsafe
{
return *((Int32*)Hierarchy.DataPointer.ToPointer() + offset);
}
}
else
{
return -1;
}
}
https://gist.github.com/joshuanapoli/8c3f282cece8340a1dd43aa5e80d170b
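Note that Hierarchy and HierarchyIndex in the snippet above come from the linked gist rather than from Emgu CV itself. A minimal sketch of the enum, assuming it simply mirrors OpenCV's hierarchy packing of [next, previous, first child, parent]:
// Channel offsets within one 4-channel hierarchy element,
// matching OpenCV's [next, previous, first child, parent] layout.
public enum HierarchyIndex
{
    Next = 0,
    Previous = 1,
    FirstChild = 2,
    Parent = 3
}
With that in place, Get(HierarchyIndex.Parent, i) returns the parent index of contour i, or -1 if contour i has no parent.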
EmguCV started using VectorOfVectorOfPoint for FindContours, but didn't really update their code to work correctly with it. See below for a working example:
/// <summary>
/// Find contours using the specific memory storage
/// </summary>
/// <param name="method">The type of approximation method</param>
/// <param name="type">The retrieval type</param>
/// <param name="stor">The storage used by the sequences</param>
/// <returns>
/// Contour if there is any;
/// null if no contour is found
/// </returns>
public static VectorOfVectorOfPoint FindContours(this Image<Gray, byte> image, ChainApproxMethod method = ChainApproxMethod.ChainApproxSimple,
Emgu.CV.CvEnum.RetrType type = RetrType.List) {
//Check that all parameters are valid.
VectorOfVectorOfPoint result = new VectorOfVectorOfPoint();
if (method == Emgu.CV.CvEnum.ChainApproxMethod.ChainCode) {
throw new ColsaNotImplementedException("Chain Code not implemented, sorry try again later");
}
CvInvoke.FindContours(image, result, null, type, method);
return result;
}
This returns a VectorOfVectorOfPoint, which implements IInputOutputArray, IOutputArray, IInputArrayOfArrays, and IInputArray. I'm not sure what you need to do with the contours, but here is an example of how to get the bounding boxes for each one. We do some other things, so let me know what you need and I can help you.
VectorOfVectorOfPoint contours = canvass2.FindContours(ChainApproxMethod.ChainApproxSimple, RetrType.Tree);
int contCount = contours.Size;
for (int i = 0; i < contCount; i++) {
using (VectorOfPoint contour = contours[i]) {
segmentRectangles.Add(CvInvoke.BoundingRectangle(contour));
if (debug) {
finalCopy.Draw(CvInvoke.BoundingRectangle(contour), new Rgb(255, 0, 0), 5);
}
}
}
You can simply create a Matrix<int> and copy your Mat object data into that Matrix. See the example below:
VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();
Mat hierarchy = new Mat();
CvInvoke.FindContours(imgThreshCopy, contours, hierarchy, RetrType.Tree, ChainApproxMethod.ChainApproxSimple);
Matrix<int> matrix = new Matrix<int>(hierarchy.Rows, hierarchy.Cols, hierarchy.NumberOfChannels);
hierarchy.CopyTo(matrix);
The data can then be accessed through matrix.Data. Good luck. H
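For reference, here is a rough sketch of pulling the four hierarchy links of contour i out of that matrix; it assumes Emgu lays the channels of a multi-channel Matrix<int> out interleaved along the columns of Data:
// hierarchy element layout per contour: [next, previous, first child, parent]
int next       = matrix.Data[0, i * 4 + 0];
int previous   = matrix.Data[0, i * 4 + 1];
int firstChild = matrix.Data[0, i * 4 + 2];
int parent     = matrix.Data[0, i * 4 + 3];
// a value of -1 means the corresponding neighbor does not exist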
I have an EmguCV.Image<Bgr, Byte>() instance and another 3rd-party API that requires a Stream representing the image data.
I was able to convert the EmguCV image to a stream and pass it to the 3rd-party API using System.Drawing.Bitmap.Save, but that is not very efficient.
How do I get the stream as efficiently as possible?
This code works:
var image = new Image<Rgb, byte>("photo.jpg");
var bitmap = image.Bitmap; // System.Drawing - this takes 108ms!
using (var ms = new MemoryStream())
{
bitmap.Save(ms, ImageFormat.Bmp); //not very efficient either
ms.Position = 0;
return ImageUtils.load(ms); //the 3rd party API
}
I have tried to create UnmanagedMemoryStream directly from the image:
byte* pointer = (byte*)image.Ptr.ToPointer();
int length = image.Height*image.Width*3;
var unmanagedMemoryStream = new UnmanagedMemoryStream(pointer, length);
but when I try to read from it, it throws an AccessViolationException: Attempted to read or write protected memory.
for (int i = 0; i < length; i++)
{
// throws AccessViolationException at a random iteration, e.g. i == 82240, 79936, etc.
unmanagedMemoryStream.ReadByte();
}
The length is 90419328 in this case, and it should be correct because it has the same value as image.ManagedArray.Length.
How to get the stream without copying the data?
OK, there is a Bytes property of type byte[] on the Image, so the answer is very easy:
new MemoryStream(image.Bytes)
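With that, the earlier example collapses to something like this (ImageUtils.load is the third-party API from the question):
var image = new Image<Rgb, byte>("photo.jpg");
using (var ms = new MemoryStream(image.Bytes))
{
    return ImageUtils.load(ms); // the 3rd party API
}
Keep in mind that Bytes exposes the raw pixel buffer rather than an encoded file such as BMP, so this only works if the consuming API accepts raw data in that layout.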
I need to see some example code in Java so that I can figure out the proper functioning of the various methods defined in the library, and how to pass the various necessary parameters.
Some of them are:
svm_predict
svm_node
svm_problem etc.
I have done a lot of googling and I still haven't found anything substantial. The documentation for Java is another major disappointment. Please help me out!
Here is some code that I have written so far:
import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import libsvm.*;
import libsvm.svm_node;

public class trial {

    public static void main(String[] args) throws IOException {
        svm temp = new svm();
        svm_model model;
        model = svm.svm_load_model("C:\\Users\\sidharth\\Desktop\\libsvm-3.18\\windows\\svm- m.model");

        svm_problem prob = new svm_problem();
        prob.l = trial.countLines("C:\\Users\\sidharth\\Desktop\\libsvm-3.18\\windows\\svm-ml.test");
        prob.y = new double[prob.l];
        int i;
        for (i = 0; i < prob.l; i++) {
            prob.y[i] = 0.0;
        }
        prob.x = new svm_node[prob.l][];

        temp.svm_predict(model, /*what to put here*/);
    }

    public static int countLines(String filename) throws IOException {
        InputStream is = new BufferedInputStream(new FileInputStream(filename));
        try {
            byte[] c = new byte[1024];
            int count = 0;
            int readChars = 0;
            boolean empty = true;
            while ((readChars = is.read(c)) != -1) {
                empty = false;
                for (int i = 0; i < readChars; ++i) {
                    if (c[i] == '\n') {
                        ++count;
                    }
                }
            }
            return (count == 0 && !empty) ? 1 : count;
        } finally {
            is.close();
        }
    }
}
I already have a model file created, and I want to predict sample data using this model file. I have given prob.y[] a label of 0.
Any example code that you have written would be of great help.
P.S. I am supposed to make an SVM-based POS tagger; that is why I have tagged nlp.
It's pretty straightforward.
For training you could write something of the form:
svm_train training = new svm_train();
String[] options = new String[7];
options [0] = "-c";
options [1] = "1";
options [2] = "-t";
options [3] = "0"; //linear kernel
options [4] = "-v";
options [5] = "10"; //10 fold cross-validation
options [6] = your_training_filename;
training.run(options);
If you chose to save the model, you can then retrieve it by:
libsvm.svm_model model = training.getModel();
If you wish to test the model on test data, you could write something of the form:
BufferedReader input = new BufferedReader(new FileReader(test_file));
DataOutputStream output = new DataOutputStream(new BufferedOutputStream(new FileOutputStream(prediction_output_file)));
svm_predict.predict(input, output, model, 0);
Hope this helps!
I'm working on color tracking, specifically tracking an orange ball (a basketball, perhaps), along with Kinect for the body; I'm making a free-throw shooting guide.
Here's my case:
I have already thresholded my image, eroded it to remove noise and other insignificant (non-ball) objects, and then dilated a few times to emphasize the ball.
So I've arrived at a final binary image where I've successfully isolated the ball, but there are other, smaller blobs that aren't the ball. How do I get the largest blob (the ball) and put a bounding box around it?
I've tried Hough circles, by the way, but that is very slow. Some code would be useful. Thanks!
This is the code I used to get the largest blob in the image:
public static Blob FindLargestObject(Image<Gray, byte> block, Rectangle rectangle)
{
    Image<Gray, byte> mask = block.CopyBlank();
    Contour<Point> largestContour = null;
    double largestarea = 0;
    for (var contours = block.FindContours(CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE,
        RETR_TYPE.CV_RETR_EXTERNAL); contours != null; contours = contours.HNext)
    {
        if (contours.Area > largestarea)
        {
            largestarea = contours.Area;
            largestContour = contours;
        }
    }
    // fill the largest contour, if one was found
    if (largestContour != null)
    {
        mask.Draw(largestContour, new Gray(255), -1);
    }
    return new Blob(mask, largestContour, rectangle);
}
For Blob:
public class Blob
{
    public Image<Gray, byte> Mask { get; private set; }
    public Contour<Point> Contour { get; private set; }
    public Rectangle Rectangle { get; private set; }
    public Blob(Image<Gray, byte> mask, Contour<Point> contour, Rectangle rectangle)
    {
        Mask = mask;
        Contour = contour;
        Rectangle = rectangle;
    }
}
The blob will contain all the information that you want to get.
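That answer uses the Emgu CV 2.x Contour<Point> API. Under Emgu CV 3, where FindContours works with VectorOfVectorOfPoint as in the earlier answers, a rough equivalent sketch looks like the following; imgThresh (the final binary image) and imgOriginal (the image to draw on) are assumed names:
// find the external contours of the binary image
VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();
CvInvoke.FindContours(imgThresh, contours, null, RetrType.External, ChainApproxMethod.ChainApproxSimple);

// pick the contour with the largest area
int largestIndex = -1;
double largestArea = 0;
for (int i = 0; i < contours.Size; i++)
{
    double area = CvInvoke.ContourArea(contours[i]);
    if (area > largestArea)
    {
        largestArea = area;
        largestIndex = i;
    }
}

// draw the bounding box of the largest blob (the ball)
if (largestIndex >= 0)
{
    Rectangle box = CvInvoke.BoundingRectangle(contours[largestIndex]);
    CvInvoke.Rectangle(imgOriginal, box, new MCvScalar(0, 0, 255), 2);
}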