Accord.NET Comparing two images to determine similarity - asp.net-mvc

I would like your advice as to why the code might be becoming unresponsive and how to fix it.
I am using Accord.NET to compare images. The first stage of my project is to compare two images, an observed image and a model image, and determine how similar they are; the second is to compare an observed image against my whole database to determine what the observed image most likely is, based on how the models have been categorized. Right now I am focusing on the first. I initially tried using ExhaustiveTemplateMatching.ProcessImage() but it didn't fit my needs. Now I am using SURF. Here is my code as it stands:
public class ProcessImage
{
public static void Similarity(System.IO.Stream model, System.IO.Stream observed,
out float similPercent)
{
Bitmap bitModel = new Bitmap(model);
Bitmap bitObserved = new Bitmap(observed);
// For method Difference, see http://www.aforgenet.com/framework/docs/html/673023f7-799a-2ef6-7933-31ef09974dde.htm
// Inspiration for this process: https://www.youtube.com/watch?v=YHT46f2244E
// Greyscale class http://www.aforgenet.com/framework/docs/html/d7196dc6-8176-4344-a505-e7ade35c1741.htm
// Convert model and observed to greyscale
Grayscale filter = new Grayscale(0.2125, 0.7154, 0.0721);
// apply the filter to the model
Bitmap greyModel = filter.Apply(bitModel);
// Apply the filter to the observed image
Bitmap greyObserved = filter.Apply(bitObserved);
int modelPoints = 0, matchingPoints = 0;
/*
* This doesn't work. Images can have different sizes
// For an example, https://thecsharper.com/?p=94
// Match
var tm = new ExhaustiveTemplateMatching(similarityThreshold);
// Process the images
var results = tm.ProcessImage(greyModel, greyObserved);
*/
using (SpeededUpRobustFeaturesDetector detector = new SpeededUpRobustFeaturesDetector())
{
List<SpeededUpRobustFeaturePoint> surfModel = detector.ProcessImage(greyModel);
modelPoints = surfModel.Count();
List<SpeededUpRobustFeaturePoint> surfObserved = detector.ProcessImage(greyObserved);
KNearestNeighborMatching matcher = new KNearestNeighborMatching(5);
var results = matcher.Match(surfModel, surfObserved);
matchingPoints = results.Length;
}
// Determine if they represent the same points
// Obtain the pairs of associated points, we determine the homography matching all these pairs
// Compare the results, 0 indicates no match so return false
if (matchingPoints <= 0 || modelPoints <= 0)
{
similPercent = 0.0f;
return; // no match, and avoid dividing by zero below
}
// Use float arithmetic so the percentage is not truncated by integer division
similPercent = (matchingPoints * 100f) / modelPoints;
}
}
So far I obtain the list of points, but the code then seems to become unresponsive during matching.
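As a side note, computing the final percentage needs a guard and floating-point arithmetic; here is a minimal sketch of that step in isolation (plain Java, not Accord.NET):

```java
public class SimilarityPercent {
    // Returns the share of model points that found a match, as a percentage.
    // Guards against division by zero when detection produced no points.
    public static float similarityPercent(int matchingPoints, int modelPoints) {
        if (matchingPoints <= 0 || modelPoints <= 0) {
            return 0.0f;
        }
        // 100f forces float arithmetic, so 50 of 200 points yields 25.0, not a truncated int
        return (matchingPoints * 100f) / modelPoints;
    }

    public static void main(String[] args) {
        System.out.println(similarityPercent(50, 200)); // 25.0
    }
}
```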
I am calling the above code from an ASP.NET web page after the user posts a bitmap. Here is the code if it may help:
public ActionResult Compare(int id)
{
ViewData["SampleID"] = id;
return View();
}
[HttpPost]
public ActionResult Compare(int id, HttpPostedFileBase uploadFile)
{
Sample model = _db.Sample_Read(id);
System.IO.Stream modelStream = null;
float result = 0;
_db.Sample_Stream(model.FileId, out modelStream);
ImgProc.ProcessImage.Similarity(modelStream, uploadFile.InputStream,
out result);
ViewData["SampleID"] = id;
ViewData["match"] = result;
return View();
}
The page itself is rather simple: a hidden field, a file input and a submit button.

The problem was my PC. After some time processing, the calculation finishes.
Thanks,

For KNearestNeighborMatching to resolve, it is necessary to reference
Accord.Imaging and Accord.Vision.


Save Jena property table to TDB

When I try to save my propertyTable graph, I get an
org.apache.jena.shared.AddDeniedException.
This exception is thrown by the method performAdd in the GraphBase class:
/**
Add a triple to the triple store. The default implementation throws an
AddDeniedException; subclasses must override if they want to be able to add triples.
*/
@Override
public void performAdd( Triple t )
{ throw new AddDeniedException( "GraphBase::performAdd" ); }
This method is called because I create a GraphPropertyTable, which inherits
from GraphBase; however, there is no override for the method performAdd as I expected there to be.
I am unsure how to proceed now.
I suspect that I am doing something wrong; please help me find out what!
Here is a minimal example that recreates the error:
PropertyTable propertytable = new PropertyTableArrayImpl(2, 2);
Column alpha = propertytable.createColumn(NodeFactory.createLiteral("alpha"));
Column beta = propertytable.createColumn(NodeFactory.createLiteral("beta"));
Row one = propertytable.createRow(NodeFactory.createLiteral("one"));
Row two = propertytable.createRow(NodeFactory.createLiteral("two"));
propertytable.getRow(one.getRowKey()).setValue(alpha,NodeFactory.createLiteral("alpha-one"));
propertytable.getRow(one.getRowKey()).setValue(beta,NodeFactory.createLiteral("beta-two"));
propertytable.getRow(two.getRowKey()).setValue(alpha,NodeFactory.createLiteral("alpha-one"));
propertytable.getRow(two.getRowKey()).setValue(beta,NodeFactory.createLiteral("beta-two"));
GraphPropertyTable graph = new GraphPropertyTable(propertytable);
Model model = ModelFactory.createModelForGraph(graph);
Dataset dataset = TDBFactory.createDataset("tdb/");
dataset.begin(ReadWrite.WRITE);
try {
dataset.addNamedModel("www.example.org/model", model);
dataset.commit();
} finally {
dataset.end();
}
How do I proceed in order to persist my property table on disk?
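The Javadoc contract quoted above can be illustrated with toy stand-ins (these are not Jena classes; GraphPropertyTable itself would need a similar override to accept additions):

```java
import java.util.ArrayList;
import java.util.List;

// Toy stand-in for GraphBase: additions are rejected by default,
// exactly as GraphBase.performAdd does.
class ToyGraphBase {
    public void performAdd(String triple) {
        throw new UnsupportedOperationException("ToyGraphBase::performAdd");
    }
}

// A subclass only accepts additions if it overrides performAdd.
class ToyWritableGraph extends ToyGraphBase {
    final List<String> triples = new ArrayList<>();

    @Override
    public void performAdd(String triple) {
        triples.add(triple);
    }
}

public class OverrideDemo {
    public static void main(String[] args) {
        try {
            new ToyGraphBase().performAdd("s p o");
        } catch (UnsupportedOperationException e) {
            System.out.println("base rejects: " + e.getMessage());
        }
        ToyWritableGraph writable = new ToyWritableGraph();
        writable.performAdd("s p o");
        System.out.println("subclass stored: " + writable.triples.size());
    }
}
```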

Mahout recommendation engine recommending products, with quantity, to customers

I am working on a Mahout recommendation engine use case. I precomputed recommendations and stored them in a database. Now I am planning to expose them as Taste REST services to .NET. I have a limited number of customers and products; it is a distributor-level recommendation use case. My questions: if a new distributor comes in, how would I suggest recommendations to him? And how would I suggest the quantity of each recommended product to each distributor? Could you give me some guidance? Am I going to face performance issues?
One way is, when a new user comes in, to precompute the recommendations from scratch, either for all users or only for this user. You should know that this user might change the recommendations for the other users too. How frequently you redo the pre-computation is up to your needs.
However, if you have a limited number of users and items, another way is to have an online recommender that computes the recommendations in real time. If you use the FileDataModel, there is a way to pick up the data from new users periodically (see the book Mahout in Action). If you use an in-memory data model, which is faster, you can override the methods setPreference(long userID, long itemID, float value) and removePreference(long userID, long itemID), and whenever a new user comes in and likes or removes some items, you should call these methods on your data model.
EDIT: Basically you can take the GenericDataModel and add the code below to its setPreference and removePreference methods. This will be your lower-level data model. You can wrap it afterwards with ReloadFromJDBCDataModel by setting your data model in the reload() method like this:
DataModel newDelegateInMemory =
delegate.hasPreferenceValues()
? new MutableDataModel(delegate.exportWithPrefs())
: new MutableBooleanPrefDataModel(delegate.exportWithIDsOnly());
The overridden methods:
@Override
public void setPreference(long userID, long itemID, float value) {
userIDs.add(userID);
itemIDs.add(itemID);
setMinPreference(Math.min(getMinPreference(), value));
setMaxPreference(Math.max(getMaxPreference(), value));
Preference p = new GenericPreference(userID, itemID, value);
// User preferences
GenericUserPreferenceArray newUPref;
int existingPosition = -1;
if (preferenceFromUsers.containsKey(userID)) {
PreferenceArray oldPref = preferenceFromUsers.get(userID);
newUPref = new GenericUserPreferenceArray(oldPref.length() + 1);
for (int i = 0; i < oldPref.length(); i++) {
//If the item does not exist in the liked user items, add it!
if(oldPref.get(i).getItemID()!=itemID){
newUPref.set(i, oldPref.get(i));
}else{
//Otherwise remember the position
existingPosition = i;
}
}
if(existingPosition>-1){
//And change the preference value
oldPref.set(existingPosition, p);
}else{
newUPref.set(oldPref.length(), p);
}
} else {
newUPref = new GenericUserPreferenceArray(1);
newUPref.set(0, p);
}
if(existingPosition == -1){
preferenceFromUsers.put(userID, newUPref);
}
// Item preferences
GenericItemPreferenceArray newIPref;
existingPosition = -1;
if (preferenceForItems.containsKey(itemID)) {
PreferenceArray oldPref = preferenceForItems.get(itemID);
newIPref = new GenericItemPreferenceArray(oldPref.length() + 1);
for (int i = 0; i < oldPref.length(); i++) {
if(oldPref.get(i).getUserID()!=userID){
newIPref.set(i, oldPref.get(i));
}else{
existingPosition = i;
}
}
if(existingPosition>-1){
oldPref.set(existingPosition, p);
}else{
newIPref.set(oldPref.length(), p);
}
} else {
newIPref = new GenericItemPreferenceArray(1);
newIPref.set(0, p);
}
if(existingPosition == -1){
preferenceForItems.put(itemID, newIPref);
}
}
@Override
public void removePreference(long userID, long itemID) {
// User preferences
if (preferenceFromUsers.containsKey(userID)) {
List<Preference> newPu = new ArrayList<Preference>();
for (Preference p : preferenceFromUsers.get(userID)) {
if(p.getItemID()!=itemID){
newPu.add(p);
}
}
preferenceFromUsers.remove(userID);
preferenceFromUsers.put(userID, new GenericUserPreferenceArray(newPu));
}
// Guard with containsKey to avoid an NPE when the user was never in the map
if(preferenceFromUsers.containsKey(userID) && preferenceFromUsers.get(userID).length()==0){
preferenceFromUsers.remove(userID);
userIDs.remove(userID);
}
if (preferenceForItems.containsKey(itemID)) {
List<Preference> newPi = new ArrayList<Preference>();
for (Preference p : preferenceForItems.get(itemID)) {
if(p.getUserID() != userID){
newPi.add(p);
}
}
preferenceForItems.remove(itemID);
preferenceForItems.put(itemID, new GenericItemPreferenceArray(newPi));
}
if(preferenceForItems.containsKey(itemID) && preferenceForItems.get(itemID).length()==0){
//Not sure if this is needed, but it works without removing the item
//preferenceForItems.remove(itemID);
//itemIDs.remove(itemID);
}
}
If by "new distributor" you mean one for whom you have no historical data, then you cannot make recommendations using Mahout's recommenders.
You can suggest other items once they choose one. Use Mahout's "itemsimilarity" driver to calculate similar items for everything in your catalog. Then, if they choose something, you can suggest similar items.
The items that come from the itemsimilarity driver can be stored in your DB as a column value containing the ids of similar items for every item. Then you can index that column with a search engine and use the user's first order as the query. This will return realtime personalized recommendations and is the most up-to-date method suggested by the Mahout people.
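That lookup can be sketched with plain collections (illustrative only: the item ids and similarity lists below are invented stand-ins for real itemsimilarity output):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;

public class SimilarItemLookup {
    // itemId -> ids of similar items, as precomputed offline (e.g. by Mahout's
    // itemsimilarity driver) and stored as a column in the DB.
    static final Map<String, List<String>> SIMILAR = new HashMap<>();
    static {
        SIMILAR.put("itemA", Arrays.asList("itemB", "itemC"));
        SIMILAR.put("itemB", Arrays.asList("itemA"));
    }

    // Given the items in a new distributor's first order, suggest similar items
    // they have not already ordered, in first-seen order.
    public static List<String> recommend(List<String> firstOrder) {
        LinkedHashSet<String> suggestions = new LinkedHashSet<>();
        for (String item : firstOrder) {
            suggestions.addAll(SIMILAR.getOrDefault(item, Collections.emptyList()));
        }
        suggestions.removeAll(firstOrder); // don't re-suggest what they bought
        return new ArrayList<>(suggestions);
    }

    public static void main(String[] args) {
        System.out.println(recommend(Arrays.asList("itemA")));
    }
}
```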
See a description of how to do this in this book by Ted Dunning, one of the leading Mahout Data Scientists. http://www.mapr.com/practical-machine-learning

Draw a JUNG graph having more than two vertices between a pair of nodes

OK, so here is a clearer explanation:
I have now understood that I need to use a sparse or SparseMultigraph type to be able to have bidirectional edges, so I have changed my GraphML class as follows:
class GraphML
{
public GraphML(String filename) throws ParserConfigurationException, SAXException, IOException
{
//Step 1 we make a new GraphML Reader. We want a directed Graph of type node and edge.
GraphMLReader<SparseMultigraph<node, edge>, node, edge> gmlr =
new GraphMLReader<SparseMultigraph<node, edge>, node, edge>(new VertexFactory(), new EdgeFactory());
//Next we need a Graph to store the data that we are reading in from GraphML. This is also a directed Graph
// because it needs to match to the type of graph we are reading in.
final SparseMultigraph<node, edge> graph = new SparseMultigraph<node, edge>();
gmlr.load(filename, graph);
// gmlr.load(filename, graph); //Here we read in our graph. filename is our .graphml file, and graph is where we
// will store our graph.
BidiMap<node, String> vertex_ids = gmlr.getVertexIDs(); //The vertexIDs are stored in a BidiMap.
Map<String, GraphMLMetadata<node>> vertex_color = gmlr.getVertexMetadata(); //Our vertex Metadata is stored in a map.
Map<String, GraphMLMetadata<edge>> edge_meta = gmlr.getEdgeMetadata(); // Our edge Metadata is stored in a map.
// Here we iterate through our vertices, n, and we set the value and the color of our nodes from the data we have
// in the vertex_ids map and vertex_color map.
for (node n : graph.getVertices())
{
n.setValue(vertex_ids.get(n)); //Set the value of the node to the vertex_id which was read in from the GraphML Reader.
n.setColor(vertex_color.get("d0").transformer.transform(n)); // Set the color, which we get from the Map, vertex_color.
//Let's print out the data so we can get a good understanding of what we've got going on.
System.out.println("ID: "+n.getID()+", Value: "+n.getValue()+", Color: "+n.getColor());
}
// Just as we added the vertices to the graph, we add the edges as well.
for (edge e : graph.getEdges())
{
e.setValue(edge_meta.get("d1").transformer.transform(e)); //Set the edge's value.
System.out.println("Edge ID: "+e.getID());
}
TreeBuilder treeBuilder = new TreeBuilder(graph);
// create a simple graph for the demo:
//First we make a VisualizationViewer, of type node, edge. We give it our Layout, and the Layout takes a graph in its constructor.
//VisualizationViewer<node, edge> vv = new VisualizationViewer<node, edge>(new FRLayout<node, edge>(graph));
VisualizationViewer<node, edge> vv = new VisualizationViewer<node, edge>(new TreeLayout<node, edge>(treeBuilder.getTree()));
//Next we set some rendering properties. First we want to color the vertices, so we provide our own vertexPainter.
vv.getRenderContext().setVertexFillPaintTransformer(new vertexPainter());
//Then we want to provide labels to our nodes, Jung provides a nice function which makes the graph use a vertex's ToString function
//as its way of labelling. We do the same for the edge. Look at the edge and node classes for their ToString function.
vv.getRenderContext().setVertexLabelTransformer(new ToStringLabeller<node>());
vv.getRenderContext().setEdgeLabelTransformer(new ToStringLabeller<edge>());
// Next we do some Java stuff, we create a frame to hold the graph
final JFrame frame = new JFrame();
frame.setTitle("GraphMLReader for Trees - Reading in Attributes"); //Set the title of our window.
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); //Give a close operation.
//Here we get the contentPane of our frame and add our a VisualizationViewer, vv.
frame.getContentPane().add(vv);
//Finally, we pack it to make sure it is pretty, and set the frame visible. Voila.
frame.pack();
frame.setVisible(true);
}
}
And then I changed my tree builder class constructor to take a SparseMultigraph:
public class TreeBuilder
{
DelegateForest<node,edge> mTree;
TreeBuilder(SparseMultigraph<node, edge> graph)
{
mTree = new DelegateForest<node, edge>();
for (node n : graph.getVertices())
{
mTree.addVertex(n);
}
for (edge e : graph.getEdges())
{
mTree.addEdge(e, graph.getSource(e),graph.getDest(e));
}
}
public DelegateForest<node, edge> getTree()
{
return mTree;
}
}
When I run my Main class:
public class Main
{
public static void main(String[] args) throws ParserConfigurationException, SAXException, IOException
{
String filename = "attributes.graphml";
if(args.length > 0)
filename = args[0];
new GraphML(filename);
}
}
I don't get an error, but the edges are not present (the nodes are there in the graph but not properly displayed).
Thanks
Zied

Lucene does not index some words?

I use Lucene.NET for my site, and it indexes some of the words fine and correctly, but it doesn't index some words like "الله"!
I have inspected the index with Luke, and it shows that "الله" is not indexed.
I have used ArabicAnalyzer for indexing.
You can see my site at www.qoranic.com; if you search "مریم" it works fine, but if you search "الله" it shows nothing.
Any idea is appreciated in advance.
The ArabicAnalyzer does some transformation to that input; it will transform the input الله to له. This is due to the usage of the ArabicStemFilter (and ArabicStemmer) which is documented with ...
Stemming is defined as:
Removal of attached definite article, conjunction, and prepositions.
Stemming of common suffixes.
This shouldn't be an issue, since you should be parsing the user-provided query through the same analyzer when searching, producing the same tokens.
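As a toy illustration of that symmetry (plain Java, not Lucene; the single rewrite rule below just mimics the documented الله → له transformation, while a real analyzer also tokenizes and normalizes):

```java
public class AnalyzerSymmetry {
    // Toy "analyzer": applies one stemming rule, standing in for ArabicStemFilter.
    static String analyze(String term) {
        return term.equals("الله") ? "له" : term;
    }

    // As long as the index side and the query side run the SAME analysis,
    // the transformed terms still line up and the search matches.
    static boolean matches(String indexedWord, String queryWord) {
        return analyze(indexedWord).equals(analyze(queryWord));
    }

    public static void main(String[] args) {
        System.out.println("indexed term: " + analyze("الله"));
        System.out.println("query matches: " + matches("الله", "الله"));
    }
}
```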
Here's the sample code I used to see what terms an analyzer produced from a given input.
using System;
using Lucene.Net.Analysis.AR;
using Lucene.Net.Analysis.Tokenattributes;
using System.IO;
namespace ConsoleApplication {
public static class Program {
public static void Main() {
var luceneVersion = Lucene.Net.Util.Version.LUCENE_30;
var input = "الله";
var analyzer = new ArabicAnalyzer(luceneVersion);
var inputReader = new StringReader(input);
var stream = analyzer.TokenStream("fieldName", inputReader);
var termAttribute = stream.GetAttribute<ITermAttribute>();
while(stream.IncrementToken()) {
Console.WriteLine("Term: {0}", termAttribute.Term);
}
Console.WriteLine("Done.");
Console.ReadLine();
}
}
}
You can overcome this behavior (remove the stemming) by writing a custom Analyzer which uses the ArabicNormalizationFilter, just as ArabicAnalyzer does, but without the call to ArabicStemFilter.
public class CustomAnalyzer : Analyzer {
public override TokenStream TokenStream(String fieldName, TextReader reader) {
TokenStream result = new ArabicLetterTokenizer(reader);
result = new LowerCaseFilter(result);
result = new ArabicNormalizationFilter(result);
return result;
}
}

Compare Byte Arrays Before Saving To Database

What is the best way of checking that the image being saved to the database isn't different from what is already there, thus saving I/O?
Scenario:
I'm writing an ASP.NET application in MVC3 using Entity Framework. I have an Edit action method on my UserProfile controller. Now I want to check whether the image posted back to the method is different; if it is, I want to call the ObjectContext's .SaveChanges(); if it is the same image, move on.
Here is a cut-down version of my code:
[HttpPost, ActionName("Edit")]
public ActionResult Edit(UserProfile userprofile, HttpPostedFileBase imageLoad2)
{
Medium profileImage = new Medium();
if (ModelState.IsValid)
{
try
{
if (imageLoad2 != null)
{
if ((db.Media.Count(i => i.Unique_Key == userprofile.Unique_Key)) > 0)
{
profileImage = db.Media.SingleOrDefault(i => i.Unique_Key == userprofile.Unique_Key);
profileImage.Amend_Date = DateTime.Now;
profileImage.Source = Images.ImageToBinary(imageLoad2.InputStream);
profileImage.File_Size = imageLoad2.ContentLength;
profileImage.File_Name = imageLoad2.FileName;
profileImage.Content_Type = imageLoad2.ContentType;
profileImage.Height = Images.FromStreamHeight(imageLoad2.InputStream);
profileImage.Width = Images.FromStreamWidth(imageLoad2.InputStream);
db.ObjectStateManager.ChangeObjectState(profileImage, EntityState.Modified);
db.SaveChanges();
}
}
}
catch (Exception)
{
// Handle or log the failure as appropriate
throw;
}
}
return View(userprofile);
}
So I save my image as varbinary(max) into a SQL Server Express DB, which is referenced as a byte array in my entities.
Is it just a case of looping over the byte array from the post and comparing it to the byte array pulled back into the ObjectContext?
Rather than directly comparing the byte arrays, I would compare the hashes of the images. Perhaps something like the following could be extracted into a comparison method:
private static bool ImageHashesMatch(byte[] imgBytes1, byte[] imgBytes2)
{
SHA256Managed sha = new SHA256Managed();
byte[] imgHash1 = sha.ComputeHash(imgBytes1);
byte[] imgHash2 = sha.ComputeHash(imgBytes2);
// Compare the hashes byte by byte (both are 32 bytes for SHA-256)
for (int i = 0; i < imgHash1.Length && i < imgHash2.Length; i++)
{
// Found a non-match, exit early
if (imgHash1[i] != imgHash2[i])
return false;
}
return true;
}
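For comparison, an equivalent sketch in Java (the answer above is C#); here MessageDigest.isEqual replaces the manual loop:

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ImageHashCompare {
    // Compares two byte arrays by their SHA-256 digests, so large images
    // reduce to a 32-byte comparison once hashed.
    public static boolean sameImage(byte[] a, byte[] b) {
        try {
            MessageDigest sha = MessageDigest.getInstance("SHA-256");
            byte[] hashA = sha.digest(a);
            byte[] hashB = sha.digest(b);
            // isEqual does the element-wise comparison for us
            return MessageDigest.isEqual(hashA, hashB);
        } catch (NoSuchAlgorithmException e) {
            // SHA-256 is required on every JVM, so this cannot happen in practice
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(sameImage("abc".getBytes(), "abc".getBytes()));
    }
}
```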
