Why is the concurrency performance of the R-tree bad?

Many papers mention that the concurrency performance of R-trees is bad, but they don't explain why.
Can anyone give me a hint?

Querying is not problematic at all.
Only changes to the tree are somewhat expensive and require locking. They are worse than, e.g., B-trees, because you may need to update the bounding boxes all the way up to the root. The R*-tree also does a special kind of reinsertion to balance the tree. But overall, it's comparable to the B-tree and any other page-oriented tree: you need locking for the pages you write to.
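To make that concrete, here is a deliberately simplified sketch (a made-up Node type in Python, ignoring node splits and reinsertion entirely) of how a single insert can end up write-locking every ancestor page just to enlarge bounding boxes:

    import threading

    class Node:
        def __init__(self, mbr, parent=None):
            self.mbr = mbr                    # (xmin, ymin, xmax, ymax)
            self.parent = parent
            self.children = []
            self.lock = threading.Lock()

    def enlarge(mbr, rect):
        return (min(mbr[0], rect[0]), min(mbr[1], rect[1]),
                max(mbr[2], rect[2]), max(mbr[3], rect[3]))

    def insert(leaf, rect):
        with leaf.lock:                       # write lock on the leaf page
            leaf.children.append(rect)
            leaf.mbr = enlarge(leaf.mbr, rect)
        node = leaf.parent
        while node is not None:               # propagate the enlargement upward
            with node.lock:                   # another write lock per ancestor page
                node.mbr = enlarge(node.mbr, rect)
            node = node.parent                # possibly all the way to the root

A point query, by contrast, only reads MBRs and needs no such exclusive locks.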

When you look at the insertion/deletion strategies of R-trees (or the R*-tree, R+-tree, ...), you can see that an overflow or underflow of a node can easily cascade through large parts of the tree (this is the rebalancing step). That requires locking a lot of nodes, which is obviously bad for concurrency.
Instead of locking you can attempt a copy-on-write strategy, but that is also expensive, because you copy a lot of nodes and there is a significant probability of conflicts with other writing threads.
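A rough sketch of the copy-on-write variant, for comparison (same toy MBR representation as above; everything here is illustrative rather than taken from any real implementation):

    import copy

    class Node:
        def __init__(self, mbr, children=None):
            self.mbr = mbr                                # (xmin, ymin, xmax, ymax)
            self.children = children or []

    def enlarge(mbr, rect):
        return (min(mbr[0], rect[0]), min(mbr[1], rect[1]),
                max(mbr[2], rect[2]), max(mbr[3], rect[3]))

    def cow_insert(path, rect):
        """path: nodes from the root down to the target leaf."""
        fresh = [copy.copy(n) for n in path]              # shallow copy of every page on the path
        for i, node in enumerate(fresh):
            node.mbr = enlarge(node.mbr, rect)
            if i + 1 < len(fresh):
                # repoint the copied parent at the copied child, share everything else
                node.children = [fresh[i + 1] if c is path[i + 1] else c
                                 for c in node.children]
            else:
                node.children = node.children + [rect]    # new entry in the copied leaf
        return fresh[0]

The caller then has to publish fresh[0] as the new root atomically (e.g. with a compare-and-swap); if another writer published first, the whole copied path is wasted work and the insert must retry, which is exactly the conflict cost mentioned above.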

Related

How do I specify that a subset of nodes needs to be fully connected?

Consider a number of nodes with some connections between them.
My model's task is to color the nodes. One of the conditions is that the black nodes form a fully-connected set.
How do I code that?
NB, in case it matters: the connections between the nodes are given as a precondition.
What have you tried? Stack Overflow works best if you show what you tried and where you got stuck. Depending on how you model your graph, there could be many different approaches.
Here's a hint to get you started: in programming with z3, you usually write the code that "checks" that the nodes are fully connected. Then, through the magic of constraint solving, that causes the solver to provide models that satisfy that criterion. So, start by modeling your graph and how you can check that the same-colored nodes are connected.
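For concreteness, here is a minimal z3py sketch that reads "fully connected" as "the black nodes form a clique"; the 5-node graph and the "at least 3 black nodes" constraint are made up purely for illustration (if you instead mean "the black nodes induce a connected subgraph", you would need a reachability-style encoding instead):

    from z3 import Bool, Solver, Not, And, AtLeast, is_true, sat

    nodes = list(range(5))
    edges = {(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)}   # hypothetical graph

    def adjacent(i, j):
        return (i, j) in edges or (j, i) in edges

    black = [Bool("black_%d" % i) for i in nodes]

    s = Solver()
    # Clique reading: any two non-adjacent nodes must not both be black.
    for i in nodes:
        for j in nodes:
            if i < j and not adjacent(i, j):
                s.add(Not(And(black[i], black[j])))

    s.add(AtLeast(*black, 3))                          # make the model non-trivial

    if s.check() == sat:
        m = s.model()
        print([i for i in nodes if is_true(m.evaluate(black[i]))])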
Note that hard problems like graph coloring, clique finding, isomorphisms etc remain hard in this realm too. They’re easier to code perhaps, but you shouldn’t expect better performance than exhaustive search for large instances on average; unless your graphs have special structure that the solver can exploit. But in that case, you’re better off using a custom algorithm anyhow, instead of relying on a general purpose SMT solver. Of course, this all depends on what your main goal is. It’s best to try multiple approaches and pick the one that performs the best.

Scalability of z3

I would like to improve the scalability of SMT solving. I have already implemented incremental solving, but I would like to improve it further. Are there any other general methods to improve scalability without knowledge of the problem itself?
There's no single "trick" that can make z3 scale better for an arbitrary problem. It really depends on what the actual problem is and what sort of constraints you have. Of course, this goes for any general computing problem, but it really applies in the context of an SMT solver.
Having said that, here're some general ideas based on my experience, roughly in the order of ease of use:
Read the Programming Z3 book: This is a very nice write-up and will teach you a ton of things about how z3 is architected and what the best idioms are. You might hit something in there that directly applies to your problem: https://theory.stanford.edu/~nikolaj/programmingz3.html
Keep booleans as booleans, not integers: Never use integers to represent booleans (i.e., 1 for true, 0 for false, multiplication for and, etc.). That encoding is a terrible idea that kills the powerful SAT engine underneath. Explicitly convert if necessary. Most problems where people tend to deploy such tricks involve counting how many booleans are true; such problems should be solved using the pseudo-boolean constraints that are built into the solver. (Look up PbEq, PbLe, etc.)
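As a small sketch (the variable names and counts are only for illustration), the pseudo-boolean route looks like this in z3py:

    from z3 import Bools, PbEq, PbLe, Solver

    a, b, c, d = Bools('a b c d')
    s = Solver()
    s.add(PbEq([(a, 1), (b, 1), (c, 1), (d, 1)], 2))   # exactly two of the four are true
    s.add(PbLe([(c, 1), (d, 1)], 1))                   # at most one of c, d is true
    print(s.check())
    print(s.model())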
Don't optimize unless absolutely necessary: The optimization engine is not incremental, nor is it well optimized (pun intended). It works rather slowly compared to all the other engines, and for good reason: optimization modulo theories is a very tricky thing to do. Avoid it unless you really have an optimization problem to tackle. You might also try to optimize "outside" the solver: make a satisfiability call, get the result, and make subsequent calls asking for "smaller" cost values. You may not hit the optimum using this trick, but the values might be good enough after a couple of iterations. Of course, how good the results are depends entirely on your problem.
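Here is a hedged sketch of that "optimize outside the solver" loop; the constraints and the cost expression are placeholders:

    from z3 import Ints, Solver, sat

    x, y = Ints('x y')
    cost = x + y
    s = Solver()
    s.add(x >= 0, y >= 0, 3 * x + 2 * y >= 12)   # hypothetical constraints

    best = None
    while s.check() == sat:
        best = s.model().eval(cost).as_long()
        s.add(cost < best)                        # demand a strictly smaller cost next round
    print("best cost found:", best)

In practice you would cap the number of iterations (or loosen the improvement step) instead of insisting on the true optimum.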
Case split: Try reducing your constraints by case-splitting on key variables. Example: if you're dealing with floating-point constraints, do a case split on normal, denormal, infinity, and NaN values separately. Depending on your particular domain, you might have such semantic categories where the underlying algorithms take different paths, and mixing-and-matching them will always give the solver a hard time. Case splitting based on context can speed things up.
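One way to drive such a split yourself is with assumption literals, checking one case at a time; the constraint below is just a toy example:

    from z3 import Real, Bool, Solver, Implies, sat

    x = Real('x')
    neg, nonneg = Bool('neg'), Bool('nonneg')

    s = Solver()
    s.add(x * x == 4)                                  # placeholder constraint
    s.add(Implies(neg, x < 0), Implies(nonneg, x >= 0))

    for case in (neg, nonneg):
        if s.check(case) == sat:                       # solve under one case at a time
            print(case, "->", s.model()[x])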
Use a faster machine and more memory: This goes without saying, but having plenty of memory can really speed up certain problems, especially if you have a lot of variables. Get the biggest machine you can!
Make use of your cores: You probably have a machine with many cores, and your operating system most likely provides fine-grained multitasking. Make use of this: start many instances of z3 working on the same problem but with different tactics, random seeds, etc., and take the result of the first one that completes. Random seeds can play a significant role if you have a huge constraint set, so running more instances with different seed values can get you "lucky" on average.
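A rough portfolio sketch along those lines (the constraint set, the seeds, and the pool size are all placeholders):

    from multiprocessing import Pool
    from z3 import Ints, Solver, set_param, sat

    def solve_with_seed(seed):
        set_param("smt.random_seed", seed)            # per-process global setting
        x, y = Ints('x y')
        s = Solver()
        s.add(x > 0, y > 0, x * x + y * y == 125)     # placeholder constraints
        if s.check() == sat:
            m = s.model()
            return seed, m[x].as_long(), m[y].as_long()
        return seed, None, None

    if __name__ == "__main__":
        with Pool(4) as pool:
            # imap_unordered yields results as workers finish; keep the first one
            print(next(pool.imap_unordered(solve_with_seed, [1, 7, 42, 1234])))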
Try to use parallel solving: Most SAT/SMT solver algorithms are sequential in nature. There have been a number of papers on how to parallelize some of them, but most engines do not have parallel counterparts. z3 has an interface for parallel solving, though it's less advertised and rather finicky. Give it a try and see if it helps. Details are here: https://theory.stanford.edu/~nikolaj/programmingz3.html#sec-parallel-z3
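If I remember the knobs correctly, enabling it is a global setting along these lines (verify the parameter names against your z3 version):

    from z3 import set_param

    set_param("parallel.enable", True)     # turn on the parallel solver
    set_param("parallel.threads.max", 4)   # assumed knob for the thread count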
Profile: Profile the z3 source code itself as it runs on your problem, and see where the hot spots are. See if you can recommend code improvements to the developers to address these. (Better yet, submit a pull request!) Needless to say, this requires a deep study of z3 itself and is probably not suitable for end users.
Bottom line: There's no free lunch. No single method will make z3 run better for your problems, but the ideas above might help improve run times. If you describe the particular problem you're working on in some detail, you'll most likely get better advice that applies to your constraints.

How to treat outliers if you have a data set with ~2,000 features and can't look at each feature individually

I'm wondering how one goes about treating outliers at scale. Based on my experience, I usually need to understand why there are outliers in the first place: what causes them, whether there are any patterns, or whether they just happen randomly. I know that, theoretically, we usually define outliers as data points more than 3 standard deviations from the mean. But when the data is so big that you can't treat each feature one by one, and you don't know whether the 3-standard-deviation rule is still applicable because of sparsity, how do we treat outliers most effectively?
My intuition about high-dimensional data is that it is sparse, so the definition of "outlier" is harder to pin down. Do you think we could just get away with using ML algorithms that are more robust to outliers (tree-based models, robust SVM, etc.) instead of trying to treat outliers during the preprocessing step? And if we really do want to treat them, what is the best way to do it?
I would first propose a framework for understanding the data. Imagine you are handed a dataset with no explanation of what it is. Analytics can actually be used to help us gain that understanding. Usually rows are observations and columns are parameters of some sort regarding the observations. You first want a framework for what you are trying to achieve. No matter what is going on, all data centers around the interests of people; that is why we decided to record it in some format. Given that, we are mostly interested in:
1.) Object
2.) Attributes of object
3.) Behaviors of object
4.) Preferences of object
5.) Behaviors and preferences of object over time
6.) Relationships of object to other objects
7.) Effects of attributes, behaviors, preferences and other objects on the object
So you want to identify these items. You open a data set, and maybe you instantly recognize a timestamp. You then see some categorical variables and start doing relationship analysis: what is one-to-one, one-to-many, many-to-many. You then identify continuous variables. These all come together to give you a foundation for identifying what an outlier is.
If we are evaluating objects over time, is the rare event indicative of something that happens rarely but that we want to know about? Forest fires are outlier events, but they are events of great concern. If I am analyzing machine data and seeing rare events, and those rare events are tied to machine failure, then they matter. Basically: does the rare event or parameter show evidence that it correlates with something you care about?
Now if you have so many dimensions that the above approach is not feasible in your judgement, then you are looking for dimensionality-reduction alternatives. I am currently employing Singular Value Decomposition as a technique, and I am already seeing situations where I achieve the same level of predictive ability with 25% of the data. Which segues into my final thought: find a benchmark to decide whether the outliers matter or not.
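A minimal sketch of that kind of reduction (the random matrix, the component count, and the choice of scikit-learn's TruncatedSVD are assumptions for illustration):

    import numpy as np
    from sklearn.decomposition import TruncatedSVD

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 2000))            # placeholder for your feature matrix

    svd = TruncatedSVD(n_components=50, random_state=0)
    X_reduced = svd.fit_transform(X)             # 1000 x 50
    print(svd.explained_variance_ratio_.sum())   # how much structure the 50 components keep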
Begin by leaving them in, run your analysis, and then run the work again with the outliers removed. What were the effects? I believe that when you are in doubt, you should simply do both and see how different the results are. If there is little difference, then maybe you are good to go. If there is a significant difference of concern, then you want to take an evidence-based approach to the outliers that occur. Simply because something is rare in your data does not mean it is rare in reality. Think of certain types of crimes that are under-reported (per arrest records): a lack of data showing politicians being arrested for insider trading does not mean that politicians are not doing insider trading en masse.
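A hedged sketch of that with/without comparison (the random placeholder data, the 3-sigma row filter, and the Ridge/cross_val_score pairing are all just for illustration):

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 50))                   # placeholder data
    y = 2 * X[:, 0] + rng.normal(size=500)

    z = np.abs((X - X.mean(axis=0)) / X.std(axis=0))
    keep = (z < 3).all(axis=1)                       # rows with no feature beyond 3 sigma

    for name, Xs, ys in [("with outliers", X, y), ("without", X[keep], y[keep])]:
        print(name, cross_val_score(Ridge(), Xs, ys, cv=5).mean().round(3))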

TML (Tractable Markov Logic) is a wonderful model! Why haven't I seen it used across a wide range of AI application scenarios?

I have been reading papers about Markov models, and then a great extension, TML (Tractable Markov Logic), came along.
It is a subset of Markov logic, and uses probabilistic class and part hierarchies to control complexity.
This model has both complex logical structure and uncertainty.
It can represent objects, classes, and relations between objects, subject to certain restrictions which ensure that inference in any model built in TML can be queried efficiently.
I am just wondering why such a good idea is not spreading widely into application areas like activity analysis.
My understanding is that inference in TML is polynomial in the size of the model, but the model has to be compiled for a given problem, and the compiled form may become exponentially large. So, in the end, it's still not really tractable.
However, it may be advantageous to use it in the case that the compiled form will be used multiple times, because then the compilation is done only once for multiple queries. Also, once you obtain the compiled form, you know what to expect in terms of run-time.
However, I think the main reason you don't see TML being used more broadly is that it is just an academic idea. There is no robust, general-purpose system based on it. If you try to work on a real problem with it, you will probably find out that it lacks certain practical features. For example, there is no way to represent a normal distribution with it, and lots of problems involve normal distributions. In such cases, one may still use the ideas behind the TML paper but would have to create their own implementation that includes further features needed for the problem at hand. This is a general problem that applies to lots and lots of academic ideas. Only a few become really useful and the basis of practical systems. Most of them exert influence at the level of ideas only.

Neo4j partition

Is there a way to physically separate Neo4j partitions?
Meaning the following query will go to node1:
Match (a:User:Facebook)
While this query will go to another node (maybe hosted in Docker):
Match (b:User:Google)
This is the use case:
I want to store data for several clients under Neo4j, hopefully lots of them.
Now, I'm not sure what the best design for that is, but it has to fulfill a few conditions:
No mixed data should be returned from a Cypher query (it's really hard to make sure that no developer will forget the ":Partition1" label, for example, in a Cypher query).
The performance of one client shouldn't affect another client. For example, if one client has lots of data and another client has a small amount of data, or if a "heavy" query of one client is currently running, I don't want the "light" queries of another client to suffer from slow performance.
In other words, storing everything under one node will, I think, run into scalability problems at some point in the future when I have more clients.
BTW, is it common to have a few clusters?
Also, what's the advantage of partitioning over creating a different label for each client? For example: Users_client_1, Users_client_2, etc.
Short answer: no, there isn't.
Neo4j has high availability (HA) clusters where you can make a copy of your entire graph on many machines, and then serve many requests against that copy quickly, but they don't partition a really huge graph so some of it is stored here, some other parts there, and then connected by one query mechanism.
More detailed answer: graph partitioning is a hard problem, subject to ongoing research. You can read more about it over at Wikipedia, but the gist is that when you create partitions, you're splitting your graph up across multiple different locations, and then you need to deal with the complication of relationships that cross partitions. Crossing partitions is an expensive operation, so the real question when partitioning is: how do you partition such that the need to cross partitions in a query comes up as infrequently as possible?
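As a toy illustration of what that question measures (the node names, the shard assignment, and the relationships below are all made up):

    # Given an assignment of nodes to partitions, count the relationships that cross them.
    edges = [("alice", "bob"), ("bob", "carol"), ("carol", "dave"), ("alice", "dave")]
    partition = {"alice": "shard-1", "bob": "shard-1",
                 "carol": "shard-2", "dave": "shard-2"}

    cut = [(a, b) for a, b in edges if partition[a] != partition[b]]
    print(len(cut), "of", len(edges), "relationships cross partitions:", cut)

Every traversal along one of those cut relationships has to hop between machines, which is exactly the cost a good partitioning strategy tries to minimize.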
That's a really hard question, since it depends not only on the data model but on the access patterns, which may change.
Here's how bad the situation is (quote stolen):
Typically, graph partition problems fall under the category of NP-hard
problems. Solutions to these problems are generally derived using
heuristics and approximation algorithms.[3] However, uniform graph
partitioning or a balanced graph partition problem can be shown to be
NP-complete to approximate within any finite factor.[1] Even for
special graph classes such as trees and grids, no reasonable
approximation algorithms exist,[4] unless P=NP. Grids are a
particularly interesting case since they model the graphs resulting
from Finite Element Model (FEM) simulations. When not only the number
of edges between the components is approximated, but also the sizes of
the components, it can be shown that no reasonable fully polynomial
algorithms exist for these graphs.
Not to leave you with too much doom and gloom: plenty of people have partitioned big graphs. Facebook and Twitter do it every day, so you can read about FlockDB on the Twitter side or avail yourself of the relevant Facebook research. But to summarize and cut to the chase: it depends on your data, and most people who partition design a custom partitioning strategy; it's not something the software does for them.
Finally, other architectures (such as Apache Giraph) can auto-partition in some sense: if you store a graph on top of Hadoop, and Hadoop already automagically scales across a cluster, then technically this is partitioning your graph for you, automagically. Cool, right? Well... cool until you realize that you still have to execute graph traversal operations all over the place, which may perform very poorly because all of those partitions have to be traversed, which is exactly the performance situation you're usually trying to avoid by partitioning wisely in the first place.

Resources