In Sollin's algorithm, how do the other nodes know when to check for their least-weight neighbor when they are located in different geographical locations?
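For context, Sollin's algorithm is the same as Borůvka's: in each round, every component (fragment) selects the minimum-weight edge leaving it, those edges are added to the tree, and fragments merge. Below is a minimal sequential sketch of one round, just to make precise what "check for the least-weight neighbor" means; the distributed part the question asks about, i.e. when each node performs this check, is usually handled by synchronized message rounds (as in the GHS algorithm), which this sketch does not cover. All names are illustrative.

import java.util.*;

// One sequential Sollin/Boruvka round: each fragment finds its cheapest outgoing edge.
// In a distributed setting the same step is driven by coordinated message rounds.
class SollinRoundSketch {
    static class Edge {
        final int u, v;
        final double weight;
        Edge(int u, int v, double weight) { this.u = u; this.v = v; this.weight = weight; }
    }

    // component[i] = id of the fragment that vertex i currently belongs to
    static Map<Integer, Edge> cheapestOutgoingEdges(int[] component, List<Edge> edges) {
        Map<Integer, Edge> cheapest = new HashMap<>();
        for (Edge e : edges) {
            int cu = component[e.u], cv = component[e.v];
            if (cu == cv) continue;                    // internal to a fragment; ignore
            for (int fragment : new int[] {cu, cv}) {  // candidate for either endpoint's fragment
                Edge best = cheapest.get(fragment);
                if (best == null || e.weight < best.weight) cheapest.put(fragment, e);
            }
        }
        return cheapest; // these edges join the MST, then the fragments are merged
    }
}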
Are there in the literature any standard scalability measures for distributed systems? I have searched Google (and also Google Scholar) but came up with only a few papers (e.g., https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=862209).
In particular, I was wondering if there are scalability measures for the three axes of the AKF cube, or scale cube (http://microservices.io/articles/scalecube.html), which is described in the book The Art of Scalability by Abbott and Fisher.
There is no standard unit for scalability. However, it is often illustrated by a chart with the amount of resources on the X-axis and throughput or latency on the Y-axis.
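If you want something more quantitative than a chart, one common (if informal) proxy, assuming you can measure the throughput X(n) achieved with n units of a resource, is speedup and efficiency:

S(n) = X(n) / X(1), E(n) = S(n) / n

Near-linear scalability corresponds to E(n) staying close to 1 as n grows; models such as Amdahl's law or the Universal Scalability Law fit curves of this shape.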
There are several components and techniques used in learning programs. Machine learning components include ANNs, Bayesian networks, SVMs, PCA, and other probability-based methods. What role do Bayesian-network-based techniques play in machine learning?
It would also be helpful to know how integrating one or more of these components into applications leads to real solutions, and how software deals with limited knowledge yet still produces sufficiently reliable results.
Probability and Learning
Probability plays a role in all learning. If we apply Shannon's information theory, the movement of a probability toward one of the extremes, 0.0 or 1.0, is information. In Shannon's terms, the information gained about a hypothesis is the log_2 of the ratio of its probability after the evidence to its probability before it, measured in bits. Given the probability of the hypothesis and of its logical inversion, if neither probability increases, no bits of information have been learned.
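As a worked example with invented numbers: if evidence moves the probability of a hypothesis from 0.5 to 0.8, the information gained is log_2(0.8 / 0.5) ≈ 0.68 bits; if the probability does not change, log_2(1) = 0 bits have been learned.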
Bayesian Approaches
Bayesian networks are directed graphs that represent causality hypotheses. They are generally drawn as nodes, each holding a condition, connected by arrows that represent the hypothesized causes and their corresponding effects. Algorithms based on Bayes' Theorem have been developed that attempt to statistically analyze causality from data that has been or is being collected.
MINOR SIDE NOTE: There are often usage constraints on the analytic tools. Most Bayesian algorithms require that the directed graph be acyclic, meaning that no sequence of arrows can be followed head-to-tail back to its starting node (no directed cycle). This avoids endless loops; however, there may now or in the future be algorithms that work with cycles and handle them seamlessly, from both a mathematical-theory and a software-usability perspective.
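As a toy illustration of how such a network is queried with Bayes' Theorem (numbers invented for the example): take the two-node graph Rain -> WetGrass with P(Rain) = 0.2, P(WetGrass | Rain) = 0.9, and P(WetGrass | not Rain) = 0.1. Then

P(Rain | WetGrass) = P(WetGrass | Rain) P(Rain) / [P(WetGrass | Rain) P(Rain) + P(WetGrass | not Rain) P(not Rain)] = (0.9 × 0.2) / (0.9 × 0.2 + 0.1 × 0.8) ≈ 0.69,

so observing the effect raises the probability of the hypothesized cause from 0.2 to about 0.69.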
Application to Learning
The application to learning is that the calculated probabilities can be used to predict potential control mechanisms. The litmus test for learning is the ability to reliably alter the future through controls. An important application is sorting mail by reading handwritten addresses. Both neural nets and Naive Bayes classifiers can be useful for general pattern recognition integrated into routing or manipulation robotics.
Keep in mind here that the term network has a very wide meaning. Neural Nets are not at all the same approach as Bayesian Networks, although they may be applied to similar problem-solution topologies.
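To make that distinction concrete, below is a minimal sketch of a Naive Bayes classifier of the kind that could back such pattern recognition. The binary features and class labels are placeholders, not part of any real mail-sorting system.

import java.util.*;

// Minimal Naive Bayes sketch over binary features (e.g. presence/absence of stroke
// patterns). Placeholder example, not a production recognizer.
class NaiveBayesSketch {
    private final Map<String, Double> classPrior = new HashMap<>();          // P(class)
    private final Map<String, double[]> featureGivenClass = new HashMap<>(); // P(feature_i = 1 | class)

    void train(List<boolean[]> samples, List<String> labels, int numFeatures) {
        Map<String, Integer> counts = new HashMap<>();
        Map<String, double[]> featureCounts = new HashMap<>();
        for (int i = 0; i < samples.size(); i++) {
            String c = labels.get(i);
            counts.merge(c, 1, Integer::sum);
            double[] fc = featureCounts.computeIfAbsent(c, k -> new double[numFeatures]);
            for (int f = 0; f < numFeatures; f++) if (samples.get(i)[f]) fc[f]++;
        }
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            classPrior.put(e.getKey(), (double) e.getValue() / samples.size());
            double[] probs = new double[numFeatures];
            double[] fc = featureCounts.get(e.getKey());
            for (int f = 0; f < numFeatures; f++)
                probs[f] = (fc[f] + 1.0) / (e.getValue() + 2.0); // Laplace smoothing
            featureGivenClass.put(e.getKey(), probs);
        }
    }

    String classify(boolean[] x) {
        String best = null;
        double bestLogP = Double.NEGATIVE_INFINITY;
        for (Map.Entry<String, Double> e : classPrior.entrySet()) {
            double logP = Math.log(e.getValue());                // log P(class)
            double[] probs = featureGivenClass.get(e.getKey());
            for (int f = 0; f < x.length; f++)                   // + sum of log P(feature | class)
                logP += Math.log(x[f] ? probs[f] : 1.0 - probs[f]);
            if (logP > bestLogP) { bestLogP = logP; best = e.getKey(); }
        }
        return best;
    }
}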
Relation to Other Approaches and Mechanisms
How a system designer uses support vector machines, principal component analysis, neural nets, and Bayesian networks in multivariate time series analysis (MTSA) varies from author to author. How they tie together also depends on the problem domain and the statistical qualities of the data set, including size, skew, sparseness, and the number of dimensions.
The list given includes only four of a much larger set of machine learning tools. For instance, fuzzy logic combines weighting with production-system (rule-based) approaches.
The year is also a factor. An answer given now might be stale next year. If I were to write software given the same predictive or control goals as I was given ten years ago, I might combine various techniques entirely differently. I would certainly have a plethora of additional libraries and comparative studies to read and analyse before drawing my system topology.
The field is quite active.
I've read a lot of papers about the nearest neighbor problem, and it seems that indexing techniques like randomized kd-trees or LSH have been successfully used for Content-Based Image Retrieval (CBIR), which operates in a high-dimensional space. One really common experiment is: given a SIFT query vector, find the most similar SIFT descriptor in the dataset. If we repeat the process for all the detected SIFT descriptors, we can find the most similar image.
However, another popular approach is to use Bag of Visual Words: convert all the detected SIFT descriptors into a huge sparse vector, which can be indexed with the same techniques used for text (e.g. an inverted index).
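To make that second approach concrete, this is roughly the step I mean (a minimal sketch; it assumes a codebook of visual words has already been learned, e.g. by k-means over training descriptors, and the names are just placeholders):

// Each SIFT descriptor of an image is assigned to its nearest codeword, and the image
// becomes a (sparse) histogram over the codebook, which can then go into an inverted index.
class BagOfVisualWordsSketch {
    static int[] toHistogram(float[][] siftDescriptors, float[][] codebook) {
        int[] histogram = new int[codebook.length];
        for (float[] d : siftDescriptors) {
            int nearest = 0;
            double best = Double.MAX_VALUE;
            for (int w = 0; w < codebook.length; w++) {
                double dist = 0;
                for (int i = 0; i < d.length; i++) {
                    double diff = d[i] - codebook[w][i];
                    dist += diff * diff;        // squared Euclidean distance to codeword w
                }
                if (dist < best) { best = dist; nearest = w; }
            }
            histogram[nearest]++;               // count of this "visual word" in the image
        }
        return histogram;
    }
}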
My question is: these two approaches (matching SIFT descriptors via nearest-neighbor search vs. Bag of Features over SIFT descriptors plus an inverted index) are quite different, and I don't understand which one is better.
If the second approach is better, what is the application of Nearest Neighbor in Computer Vision / Image Processing?
Oh boy, you are asking a question that even the papers can't answer, I think. In order to compare, one should take the state-of-the-art technologies of both approaches and compare them, measure speed, accuracy and recall. The one with the best characteristics is better than the other.
Personally, I hadn't heard much about Bag of Visual Words; I had used the bag-of-words model only in text-related projects, not image-related ones. Moreover, I am pretty sure I have seen many people use the first approach (including in my own research).
That's the best I've got, so if I were you I would search for a paper that compares these two approaches, and if I couldn't find one, I would find the best representative of each approach (the link you posted has a paper from 2009, which is old, I guess) and check their experiments.
But be careful! In order to compare the approaches by their best representatives, you need to make sure that the experiments in each paper are truly comparable: the machines used are of similar power, the data used are of the same nature and size, and so on.
I am trying to identify high-hitting IPs over a period of time.
I have performed clustering on certain features and got 12 clusters, of which 8 were bots and 4 were humans, judging by the centroid values of each cluster.
Now, what technique can I use to analyze the data within a cluster, so as to know whether the data points within it are in the right cluster?
In other words, are there any statistical methods to check the quality of the clusters?
What I can think of is: if I take a data point at the boundary of a cluster and measure its distance from the other centroids as well as from its own centroid, can I tell how close the two clusters are to my point, and perhaps how well my data are divided into clusters?
Kindly advise how to measure the quality of my clusters with respect to the data points, and what the standard techniques are for doing so.
With k-means, chances are that you already have a big heap of garbage. It is an incredibly crude heuristic, and unless you were extremely careful in designing your features (at which point you would already know how to check the quality of a cluster assignment), the result is barely better than choosing a few centroids at random. K-means is also very sensitive to the scale of your features; the results are very unreliable if you have features of different types and scales (e.g. height, shoe size, body mass, BMI: running k-means on such variables is statistical nonsense).
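(At the very least, you would rescale each feature, e.g. to zero mean and unit variance, before clustering. A minimal sketch of that step only; it does not address the deeper problems discussed below:)

// Z-score standardization: rescale every feature to zero mean and unit variance so that
// no feature dominates the k-means distance purely because of its unit of measurement.
// This is a bare-minimum step, not a fix for badly chosen features.
class StandardizeSketch {
    static void standardizeInPlace(double[][] data) {
        int n = data.length, dims = data[0].length;
        for (int f = 0; f < dims; f++) {
            double mean = 0, variance = 0;
            for (double[] row : data) mean += row[f];
            mean /= n;
            for (double[] row : data) variance += (row[f] - mean) * (row[f] - mean);
            double std = Math.sqrt(variance / n);
            for (double[] row : data) row[f] = std > 0 ? (row[f] - mean) / std : 0;
        }
    }
}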
Do not dump your data into a clustering algorithm and expect to get something useful. Clustering follows the GIGO principle: garbage-in-garbage-out. Instead, you need to proceed as follows:
identify what a good cluster is in your domain. This is very data- and problem-dependent.
choose a clustering algorithm with a very similar objective.
find a data transformation, distance function, or modification of the clustering algorithm that aligns with your objective.
carefully double-check the result for trivial, unwanted, biased and random solutions.
For example, if you blindly throw customer data into a clustering algorithm, chances are it will decide the best answer is 2 clusters, corresponding to the attributes "gender=m" and "gender=f", simply because that is the most extreme factor in your data. But because this is a known attribute, the result is entirely useless.
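If, after all that, you still want a numeric sanity check along the lines you describe (distance to the point's own centroid vs. the other centroids), a simplified, centroid-based silhouette-style score is a common starting point. A minimal sketch, not a substitute for domain validation:

// Simplified silhouette-style score using centroids only: for each point, compare the
// distance to its own centroid (a) with the distance to the nearest other centroid (b).
// (b - a) / max(a, b) is close to 1 for well-separated assignments, near 0 on cluster
// boundaries, and negative for likely misassignments.
class SilhouetteSketch {
    static double score(double[][] points, int[] assignment, double[][] centroids) {
        double total = 0;
        for (int i = 0; i < points.length; i++) {
            double a = dist(points[i], centroids[assignment[i]]);
            double b = Double.MAX_VALUE;
            for (int c = 0; c < centroids.length; c++)
                if (c != assignment[i]) b = Math.min(b, dist(points[i], centroids[c]));
            double m = Math.max(a, b);
            total += m > 0 ? (b - a) / m : 0;
        }
        return total / points.length;   // average over all points; higher is better
    }

    private static double dist(double[] x, double[] y) {
        double sum = 0;
        for (int i = 0; i < x.length; i++) sum += (x[i] - y[i]) * (x[i] - y[i]);
        return Math.sqrt(sum);
    }
}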
I have my own Java-based implementation of clustering (k-NN). However, I am facing scalability issues. I do not plan to use Mahout because my requirements are very simple and Mahout requires a lot of work. I am looking for a Java-based Canopy clustering implementation which I can plug into my algorithm to do parallel processing.
Mahout's Canopy libraries are coupled with Vectors and indexes and do not work on plain strings. If you know of a way I can use canopy clustering on strings with a simple library, it would fix my issue.
My requirement is to pass a list of strings (say 10K) to the Canopy clustering algorithm, and it should return sublists based on T1 and T2.
Canopy clustering is mostly useful as a preprocessing step for parallelization. I'm not sure how much it will gain you on a single node. I figure you might as well run the actual algorithm right away, or build an index such as an M-tree.
The strength of Canopy clustering is that you can run it independently on a number of nodes and then just overlap their results.
Also check whether it is actually compatible with your approach. I figure that canopy clustering might need metric properties to be correct. Is your string distance a proper metric (i.e. does it satisfy the triangle inequality)?
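For what it's worth, the core canopy procedure is simple enough that you may not need a library at all. A minimal sketch over strings, with the string distance left as a placeholder for whichever measure you actually use:

import java.util.*;
import java.util.function.ToDoubleBiFunction;

// Minimal canopy clustering sketch over strings, with T1 > T2: points within T1 of a
// chosen center join its canopy; points within T2 are removed from further consideration
// as centers. Canopies may overlap, which is intended.
class CanopySketch {
    static List<List<String>> canopies(List<String> items, double t1, double t2,
                                       ToDoubleBiFunction<String, String> distance) {
        List<List<String>> result = new ArrayList<>();
        List<String> remaining = new LinkedList<>(items);
        while (!remaining.isEmpty()) {
            String center = remaining.remove(0);        // pick an arbitrary remaining point
            List<String> canopy = new ArrayList<>();
            canopy.add(center);
            for (String s : new ArrayList<>(remaining)) {
                double d = distance.applyAsDouble(center, s);
                if (d < t1) canopy.add(s);              // loosely within this canopy
                if (d < t2) remaining.remove(s);        // tightly bound: never a center again
            }
            result.add(canopy);
        }
        return result;
    }
}

Parallelizing it is then a matter of running this on partitions of the string list and overlapping the resulting canopies, as described above.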
10,000 data points, if that's all you're concerned with, should be no problem with standard k-means. I'd look at optimising that before you consider canopy clustering (which is really designed for millions or even billions of examples). Some things you may have missed:
pre-compute the feature vectors for each string. Don't do it every time you want to compare s_1 to s_2 or s_1 to a cluster centroid.
you only need to keep the summary statistics in memory: the sum of all points assigned to a cluster and the number of points assigned to it. When you're done with an iteration, divide the sums by the counts and you have your new centroids (see the sketch after this list).
what's the dimensionality of your feature space? Be aware that you should use a distance measure in which dimensions where both vectors are zero have no impact, so you only need to compute over the non-zero dimensions. Store your points as sparse vectors to facilitate this.
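A minimal sketch of the summary-statistics point above (sums and counts only; the rest of the k-means loop is omitted, and the class name is just illustrative):

// Per-cluster summary statistics for k-means: keep only the vector sum and the count of
// assigned points; the new centroid is sum / count at the end of the pass.
class ClusterSummary {
    final double[] sum;
    long count;

    ClusterSummary(int dims) { this.sum = new double[dims]; }

    void add(double[] point) {                  // called once per point assigned this iteration
        for (int i = 0; i < point.length; i++) sum[i] += point[i];
        count++;
    }

    double[] centroid() {                       // new centroid after the iteration
        double[] c = new double[sum.length];
        for (int i = 0; i < sum.length; i++) c[i] = count > 0 ? sum[i] / count : 0;
        return c;
    }
}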
Can you do some analysis and determine where the bottleneck in your implementation is? I'm a little perplexed by your comment about Mahout not working with plain strings.
You should give the clustering algorithms in ELKI a try. Sorry for so shamelessly promoting a project I'm closely affiliated with. But it is the largest collection of clustering and outlier detection algorithms that are implemented in a comparable fashion. (If you'd take all the clustering algorithms available in some R package, you might end up with more algorithms, but they won't be really comparable because of implementation differences)
And benchmarking showed enormous speed differences between different implementations of the same algorithm. See our benchmarking web site for how much performance can vary, even on simple algorithms such as k-means.
We do not yet have Canopy clustering. The reason is that it's more of a preprocessing index than an actual clustering algorithm, kind of like a primitive variant of the M-tree, or of DBSCAN clustering. However, we would like to see canopy clustering contributed as such a preprocessing step.
ELKI's abilities to process strings are also a bit limited so far. You can load typical TF-IDF vectors just fine, and we have somewhat optimized sparse vector classes and similarity functions. They don't fully exploit sparsity for k-means yet, though, and there is no spherical k-means yet either. But there are various reasons why k-means results on sparse vectors cannot be expected to be very meaningful; it's more of a heuristic.
But it would be interesting if you could give it a try for your problem and report back your experiences. Was the performance somewhat competitive with your implementation? And we would love to see contributed modules for text processing, such as further optimized similarity functions or a spherical k-means variant.
Update: ELKI now actually includes canopy clustering as CanopyPreClustering (it will be part of 0.6.0). But as of now, it's just another clustering algorithm and not yet used to accelerate other algorithms such as k-means. I need to check how best to use it as a kind of index to accelerate algorithms. I can imagine it also helps for speeding up DBSCAN if you set T1=epsilon and T2=0.5*T1. The big issue with canopy clustering, IMHO, is how to choose a good radius.