Synonym chains - Efficient routing algorithm for iOS/SQLite

A synonym chain is a series of closely related words that span two anchors. For example, the English words "black" and "white" can be connected as:
black-dark-obscure-hidden-concealed-snug-comfortable-easy-simple-pure-white
Or, here's "true" and "false":
true-just-fair-beautiful-pretty-artful-artificial-sham-false
I'm working on a thesaurus iOS app, and I would like to display synonym chains as well. The goal is to return a chain from within a weighted graph of word relations. My source is a very large thesaurus with weighted data, where the weights measure similarity between words (e.g., "outlaw" is closely related to "bandit", but more distantly related to "rogue"). Our actual values range from 0.001 to ~50, but you can assume any weight range.
What optimization strategies do you recommend to make this realistic, e.g., within 5 seconds of processing on a typical iOS device? Assume the thesaurus has half a million terms, each with 20 associations. I'm sure there's a ton of prior research on these kinds of problems, and I'd appreciate pointers on what might be applied to this.
My current algorithm involves recursively descending a few levels from the start and end words, and then looking for intersecting words, but that becomes too slow with thousands of SQLite (or Realm) selects.

Since you said your source is a large thesaurus with weighted data, I'm assuming that if you pick any word, you have the weight to each of its successors in the similarity graph. I will always use the sequence below when giving examples:
black-dark-obscure-hidden-concealed-snug-comfortable-easy-simple-pure-white
Let's think of the words as nodes on a graph; each similarity relationship a word has with another is an edge on that graph. Each edge is weighted with a cost, which is the weight you have in the source data. So the best solution for finding a path from one word to another is A* (A-star) pathfinding.
I'm using a minimum "cost" of 1 to travel from a word to its successor; you can adjust it accordingly. First you will need a good heuristic function, since this is a greedy algorithm. The heuristic function returns an estimated "distance" between any two words, and you must respect the fact that the distance it returns can never be bigger than the real distance between the two words. Since I don't know anything about the relationship between arbitrary words in a thesaurus, my heuristic function will always return the minimum cost, 1. In other words, it always assumes a word is the most similar word to any other; for example, my heuristic function tells me that 'black' is the best synonym for 'white'.
You should tune the heuristic function if you can, so it responds with more accurate distances and makes the algorithm run faster. That's the tricky part, I guess.
You can see the pseudo-code for the algorithm in the Wikipedia article on A*, but here it is for a quicker explanation:
function A*(start, goal)
    closedset := the empty set    -- The set of nodes already evaluated.
    openset := {start}            -- The set of tentative nodes to be evaluated, initially containing the start node.
    came_from := the empty map    -- The map of navigated nodes.
    g_score[start] := 0           -- Cost from start along best known path.
    -- Estimated total cost from start to goal through y.
    f_score[start] := g_score[start] + heuristic_cost_estimate(start, goal)

    while openset is not empty
        current := the node in openset having the lowest f_score[] value
        if current = goal
            return reconstruct_path(came_from, goal)
        remove current from openset
        add current to closedset
        for each neighbor in neighbor_nodes(current)
            if neighbor in closedset
                continue
            tentative_g_score := g_score[current] + dist_between(current, neighbor)
            if neighbor not in openset or tentative_g_score < g_score[neighbor]
                came_from[neighbor] := current
                g_score[neighbor] := tentative_g_score
                f_score[neighbor] := g_score[neighbor] + heuristic_cost_estimate(neighbor, goal)
                if neighbor not in openset
                    add neighbor to openset
    return failure

function reconstruct_path(came_from, current)
    total_path := [current]
    while current in came_from:
        current := came_from[current]
        total_path.append(current)
    return total_path
Now, for the algorithm you'll have two sets of nodes: the ones you still have to visit (open) and the ones you have already visited (closed). You will also keep two score maps, one value per node, which you fill in as you travel through the graph.
One map (g_score) tells you the lowest real cost traveled from the starting node to the given node. For example, g_score["hidden"] returns the lowest weighted cost to travel from 'black' to 'hidden'.
The other map (f_score) gives the estimated total cost of a path from the start to the goal that passes through the given node: its g_score plus the heuristic estimate of the remaining cost. For example, f_score["snug"] is the cost already paid to travel from "black" to "snug" plus the estimated cost from "snug" to "white". Remember, the heuristic part will always be less than or equal to the real remaining cost, since our heuristic function has to respect the rule above.
As the algorithm runs, you travel from node to node starting at the start word, saving the nodes you have traveled through and the costs you paid to reach them. You replace an entry in g_score whenever you find a cheaper way to reach that node. You use f_score to decide which of the 'unvisited' nodes is best to visit next; it's best to keep the open set in a minimum heap keyed by f_score.
The algorithm ends when you reach the goal node; you then reconstruct the minimum path from the came_from map you kept updating at each iteration. The other way the algorithm stops is when it has visited every reachable node without finding the goal, in which case you can say there is no path from the start node to the goal.
This is the algorithm most commonly used in games to find the best path between two objects in a 3D world. To improve it, you just need a better heuristic function, one that steers the algorithm toward the most promising nodes first and leads it to the goal faster.
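To make that concrete, here is a minimal Python sketch of the approach, not your actual implementation: it assumes the word graph has already been loaded into an in-memory adjacency dictionary (standing in for the SQLite/Realm store), uses the constant minimum edge weight as the admissible heuristic described above, and keeps the open set in a min-heap.

import heapq

def a_star(neighbors, start, goal, min_cost=0.001):
    # neighbors maps a word to a dict {related_word: weight};
    # min_cost is the smallest edge weight in the data.
    def heuristic(word):
        # Constant admissible estimate: never larger than the real remaining cost.
        return 0.0 if word == goal else min_cost

    g_score = {start: 0.0}                   # best known cost from start
    came_from = {}                           # predecessor on the best path found so far
    open_heap = [(heuristic(start), start)]  # min-heap ordered by f_score
    closed = set()

    while open_heap:
        _, current = heapq.heappop(open_heap)
        if current == goal:
            path = [current]
            while current in came_from:      # walk predecessors back to the start word
                current = came_from[current]
                path.append(current)
            return list(reversed(path))
        if current in closed:
            continue
        closed.add(current)
        for neighbor, weight in neighbors.get(current, {}).items():
            tentative = g_score[current] + weight
            if tentative < g_score.get(neighbor, float("inf")):
                came_from[neighbor] = current
                g_score[neighbor] = tentative
                heapq.heappush(open_heap, (tentative + heuristic(neighbor), neighbor))
    return None                              # no chain between the two words

A tighter heuristic (for example, precomputed distances to a handful of landmark words) would let the same code expand far fewer nodes, which is where the tuning mentioned above pays off.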
-- 7f

Here's a closely related question and answer: Algorithm to find multiple short paths
There you can see comments about Dijkstra's algorithm, A*, and Dinic's algorithm, and more broadly the ideas of maximum flow and minimum-cost flow.

Related

In the LambdaRank algorithm (in Learning to Rank), what does |ΔNDCG| mean?

This article describes the LambdaRank algorithm for information retrieval. In formula 8 on page 6, the authors propose to multiply the gradient (lambda) by a term called |ΔNDCG|.
I do understand that this term is the difference of two NDCGs when swapping two elements in the list:
the size of the change in NDCG (|∆NDCG|) given by swapping the rank positions of U1 and U2
(while leaving the rank positions of all other urls unchanged)
However, I do not understand which ordered list is considered when swapping U1 and U2. Is it the list ordered by the predictions from the model at the current iteration? Or the list ordered by the ground-truth labels of the documents? Or maybe the list of predictions from the model at the previous iteration, as suggested by Tie-Yan Liu in his book Learning to Rank for Information Retrieval?
Short answer: It's the list ordered by the predictions from the model at the current iteration.
Let's see why it makes sense.
At each training iteration, we perform the following steps (these steps are standard for all machine learning algorithms, whether for classification, regression, or ranking tasks):
Calculate scores s[i] = f(x[i]) returned by our model for each document i.
Calculate the gradients of model's weights ∂C/∂w, back-propagated from RankNet's cost C. This gradient is the sum of all pairwise gradients ∂C[i, j]/∂w, calculated for each document's pair (i, j).
Perform a gradient ascent step (i.e. w := w + u * ∂C/∂w where u is step size).
In the "Speeding up RankNet" paragraph, the notion of λ[i] is introduced as the contribution of each document's computed score (using the model weights at the current iteration) to the overall gradient ∂C/∂w (at the current iteration). If we order our list of documents by the scores from the model at the current iteration, each λ[i] can be thought of as an "arrow" attached to each document, whose sign tells us in which direction, up or down, that document should be moved to increase NDCG. Again, NDCG is computed from the order predicted by our model.
Now, the problem is that the lambdas λ[i, j] for each pair (i, j) contribute equally to the overall gradient. That means the ranking of documents below, let's say, the 100th position is given equal importance to the ranking of the top documents. This is not what we want: we should prioritize having relevant documents at the very top much more than having the correct ranking below the 100th position.
That's why we multiply each of those "arrows" by |ΔNDCG|, to emphasize the top of the ranking more than the bottom of our list.
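As a rough illustration (my own sketch, not the paper's code), here is how |ΔNDCG| for a swap of two documents can be computed from the list ordered by the current model scores, using the usual 2^rel - 1 gain and 1/log2(rank + 1) discount:

import math

def dcg(relevances):
    # DCG with gain 2^rel - 1 and discount 1 / log2(rank + 1), ranks starting at 1.
    return sum((2 ** rel - 1) / math.log2(rank + 2)
               for rank, rel in enumerate(relevances))

def delta_ndcg(scores, labels, i, j):
    # |change in NDCG| from swapping documents i and j in the ranking
    # induced by the *current* model scores.
    order = sorted(range(len(scores)), key=lambda k: scores[k], reverse=True)
    ranked_labels = [labels[k] for k in order]
    ideal = dcg(sorted(labels, reverse=True))   # ideal DCG, used for normalization
    before = dcg(ranked_labels)
    pos = {doc: rank for rank, doc in enumerate(order)}
    # Swap the two documents' positions in the current ranking.
    ranked_labels[pos[i]], ranked_labels[pos[j]] = ranked_labels[pos[j]], ranked_labels[pos[i]]
    after = dcg(ranked_labels)
    return abs(after - before) / ideal if ideal else 0.0

The value is large when one of the two documents sits near the top of the current ranking and small when both are buried far down the list, which is exactly the prioritization described above.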

How to squish a continuous cosine-theta score to a discrete (0/1) output?

I implemented a cosine-theta function, which calculates the relation between two articles. If two articles are very similar, their words should have a fair amount of overlap. However, a cosine-theta score of 0.54 does not mean "related" or "not related". I should end up with a definitive answer, which is either 0 for 'not related' or 1 for 'related'.
I know that there are sigmoid and softmax functions, yet I would have to find the optimal parameters to give to such functions, and I do not know whether these functions are satisfactory solutions. I was thinking: I have the cosine-theta score, I can calculate the percentage of overlap between the two articles (e.g., the number of overlapping words divided by the number of words in the article), and maybe some more interesting features. Then, with that data, I could maybe write a function (what type of function I do not know, and that is part of the question!) and minimize the error via the SciPy library. This means that I would be doing some sort of supervised learning, and I am willing to label article pairs with labels (0/1) in order to train a network. Is this worth the effort?
import math
from collections import Counter

def cosine_similarity(s1, s2):
    # Count words of the two strings.
    v1, v2 = Counter(s1.split()), Counter(s2.split())
    # The intersection of the words in both strings.
    v3 = set(v1.keys()) & set(v2.keys())
    # Some sort of ratio between the overlap and the article length
    # (since 1 overlapping word on 2 words is more important
    # than 4 overlapping words on articles of 492 words).
    p = len(v3) / min(len(v1), len(v2)) if v3 else 0.0
    numerator = sum(v1[w] * v2[w] for w in v3)
    w1 = sum(c ** 2 for c in v1.values())
    w2 = sum(c ** 2 for c in v2.values())
    denominator = math.sqrt(w1) * math.sqrt(w2)
    # Return the cosine similarity, plus the overlap ratio p for later use.
    if not denominator:
        return 0.0, p
    return float(numerator) / denominator, p
As said, I would like to use variables such as p and the cosine-theta score in order to produce an accurate discrete binary label, either 0 or 1.
Here it really comes down to what you mean by accuracy. Unless you have a labelled data set, it is up to you to choose how the overlap affects whether or not two strings "match". If you do have a labelled data set (i.e., a set of pairs of strings along with a 0/1 label), then you can train a binary classification algorithm and optimise against that. I would recommend something like a neural net or an SVM due to the potentially high-dimensional, categorical nature of your problem.
Even the optimisation, however, is a subjective measure. For example, suppose a model that, out of 100 samples, only makes 1 prediction (giving 99 unknowns). Technically, if that one answer is correct, that is a model with 100% precision but very low recall. Generally in machine learning you will find a trade-off between precision and recall.
Some people like metrics which combine the two (the most famous being the F1 score), but honestly it depends on the application. If I have a marketing campaign with a fixed budget, then I care more about precision: I only want to target consumers who are likely to buy my product. If, however, we are testing for a deadly disease or markers for bank fraud, then it's acceptable for that test to be right only 10% of the time, as long as its recall of true positives is somewhere close to 100%.
Finally, if you have no labelled data, then your best bet is just to define some cut-off value which you believe indicates a good match. This would then be more analogous to a binary clustering problem, and you could use some more abstract measure, such as distance to a centroid, to test which cluster (either the "related" or the "unrelated" cluster) a point belongs to. Note, however, that here your features feel like they would be incredibly hard to define.
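If you do label some pairs, a minimal sketch along the lines suggested above might look like the following; it assumes scikit-learn, uses only two features (the cosine score and the overlap ratio p), and the numbers are invented purely for illustration:

from sklearn.svm import SVC

# Each row is [cosine_score, overlap_ratio_p] for one hand-labelled article pair.
X = [[0.54, 0.10], [0.91, 0.45], [0.12, 0.02], [0.77, 0.30], [0.20, 0.05]]
y = [0, 1, 0, 1, 0]          # 1 = related, 0 = not related

clf = SVC(kernel="rbf").fit(X, y)

# Discrete 0/1 prediction for a new pair of articles.
print(clf.predict([[0.60, 0.15]]))

With no labels, the same two features could instead feed a simple threshold or a two-cluster method, as described above.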

The Hamiltonian Cycle

What is the worst case time complexity of the Hamiltonian cycle problem using backtracking?
Is it O(n!) or O(n^n)? I tried to work out the complexity myself and it comes to O(n × n!), which looks more like O(n^n) than O(n!).
The brute-force solution for finding a Hamiltonian cycle requires O(n!) work (which is indeed O(n^n), but O(n^n) wouldn't be a tight upper bound).
A Hamiltonian cycle in a graph G with n nodes has the form
H = v_1,v_2,v_3,...,v_n,v_1.
Since H includes every node in G, we may start our search from any arbitrarily chosen node, say v_1. Subsequently, there are n-1 candidate nodes for the second node v_2 (i.e., all nodes but v_1 itself); there are n-2 choices for the third node v_3 (i.e., all nodes except those already chosen for v_1 and v_2), and so on; at the end, with the candidates for v_1 to v_{n-1} fixed, there is exactly one remaining candidate for v_n.
(i) This results in a maximum of (n-1)(n-2)...(2)(1) = (n-1)! combinations.
(ii) In a naive implementation, checking each combination requires O(n) work; i.e., to check whether or not a given combination is a Hamiltonian cycle, we go through the whole sequence and make sure it has the required properties of a Hamiltonian cycle.
Hence, the overall complexity is O(n) × (n-1)! = O(n!).
Of course, we can reduce the required work using a variety of techniques, e.g., branch-and-bound approaches.
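A small Python sketch of the naive procedure being counted here (fix v_1, enumerate the (n-1)! orderings of the remaining nodes, and spend O(n) per ordering on the adjacency check); the graph representation is just an assumption for the example:

from itertools import permutations

def has_hamiltonian_cycle(adj):
    # adj maps each node to the set of its neighbours.
    nodes = list(adj)
    if not nodes:
        return False
    v1, rest = nodes[0], nodes[1:]
    for ordering in permutations(rest):          # (n-1)! candidate cycles
        cycle = (v1,) + ordering + (v1,)
        # O(n) check that every consecutive pair of nodes is an edge.
        if all(b in adj[a] for a, b in zip(cycle, cycle[1:])):
            return True
    return False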

Time series clustering of activity of machines

I have an N×M matrix where N is the number of time intervals and M is the number of nodes in a graph.
Each cell indicates whether the corresponding node was active in that time interval.
Now I need to find groups of nodes that always appear together across the time series. Is there some approach I can use to cluster these nodes based on their time-series activity?
In R you could do this:
# hierarchical clustering
library(dendextend) # contains color_branches()
dist_ts <- dist(mydata) # calculate distances between rows; mydata should have one row per node, columns = time intervals
hc_dist <- hclust(dist_ts)
dend_ts <- as.dendrogram(hc_dist)
# set some value for h (height within the dendrogram) here that makes sense for you
dend_100 <- color_branches(dend_ts, h = 100)
plot(dend_100)
This creates a dendrogram with colored branches.
You could do much better visualizations, but your post is pretty generic (it's somewhat unclear what you're asking), and you didn't indicate whether you like R at all.
As the sets may overlap, most clustering methods will not produce optimal results.
Instead, treat each time point as a transaction, containing all active nodes as items. Then run frequent itemset mining to find frequently active sets of machines.
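A minimal pure-Python sketch of that transactional view, counting only pairs for brevity (a real frequent-itemset miner such as Apriori or FP-growth would extend this to larger sets); the toy activity matrix and the support threshold are made up for the example:

from itertools import combinations
from collections import Counter

# activity[t][m] == 1 means node m was active in time interval t (toy data).
activity = [
    [1, 1, 0, 1],
    [1, 1, 0, 0],
    [0, 1, 1, 1],
    [1, 1, 0, 1],
]

# Treat each time interval as a transaction of active node indices.
transactions = [{m for m, active in enumerate(row) if active} for row in activity]

# Count how often each pair of nodes is active together.
pair_counts = Counter()
for items in transactions:
    pair_counts.update(combinations(sorted(items), 2))

min_support = 3  # appear together in at least 3 intervals
frequent_pairs = {pair: c for pair, c in pair_counts.items() if c >= min_support}
print(frequent_pairs)   # {(0, 1): 3, (1, 3): 3} for the toy data above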

Changing Kademlia Metric - Unidirectional Property Importance

Kademlia uses the XOR metric. Among other things, this metric has the so-called "unidirectional" property (for any given point x and distance e > 0, there is exactly one point y such that d(x,y) = e).
The first question is a general one: is this property of the metric critical for the functionality of Kademlia, or does it merely help relieve pressure on certain nodes (as the original paper suggests)? In other words, if we want to change the metric, how important is it to come up with a metric that is unidirectional as well?
The second question is about a concrete change of metric: assuming we have node identifiers (addresses) as X-bit numbers, would either of the following metrics work with Kademlia?
d(x,y) = abs(x-y)
d(x,y) = abs(x-y) + 1/(x xor y)
The first metric simply gives the difference between the numbers, so for node ID 100 the nodes with IDs 90 and 110 are equally distant; this is not a unidirectional metric. In the second case we fix that by adding 1/(x xor y): we know that (x xor y) is unidirectional, so adding 1/(x xor y) should preserve this property.
Thus for node ID 100, the distance to node ID 90 is d(100,90) = 10 + 1/62, while the distance to node ID 110 is d(100,110) = 10 + 1/10.
You wouldn't be dealing with Kademlia anymore. There are many other routing algorithms which use different distance metrics, some even non-uniform ones, but they do not rely on Kademlia-specific assumptions and sometimes incorporate other features to compensate for some undesirable aspect of those metrics.
Since there can be ties in such a metric (two candidate points at each distance), lookups could no longer converge on a precise set of closest nodes.
Bucket splitting and other routing-table maintenance algorithms would need to be changed, since they assume that identical distances can only occur when nodes are identical.
I'm not sure whether it would affect the big-O properties or other guarantees of Kademlia.
Anyway, this seems like an X-Y problem. You want to modify the metric to serve a particular goal; maybe you should instead look for routing overlays designed with that goal in mind.
d(x,y) = abs(x-y) + 1/(x xor y)
This seems impractical: division on integers suffers from rounding, and in reality you would not be dealing with such small numbers but with much larger (e.g. 160-bit) numbers, making the division more expensive too.
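A quick Python illustration of the rounding point (small numbers only; a real implementation would be working with 160-bit IDs, as noted above):

def d_float(a, b):
    return abs(a - b) + 1 / (a ^ b)

def d_int(a, b):
    # With integer (floor) division the 1/(a xor b) term rounds down to 0
    # whenever a xor b > 1, so it no longer breaks the tie.
    return abs(a - b) + 1 // (a ^ b)

print(d_float(100, 90), d_float(100, 110))   # 10.016... vs 10.1 -- distinguishable
print(d_int(100, 90), d_int(100, 110))       # 10 vs 10 -- a tie again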
