Should a node (vertex) know about its neighbours in a graph? - dart

I am trying to implement a graph in dart.
I thought of creating the classes Node (vertex), Edge and Graph.
The main idea is that the Graph has a List of Nodes and a List of Edges.
Later I will implement some search algorithms on the graph.
I am also thinking of adding a list of neighbours to each Node (List<Node> neighbours), so that each Node knows its neighbours (successor nodes, to be precise). My thought here is that getting the successor nodes of a node is quicker when the node holds this information than when the algorithm has to scan the edge list each time. I know that changes (deleting edges or nodes, adding new edges or nodes) would also cost more, because I'd have to update them in two places. But at the moment I don't plan to make the graph very dynamic after its creation.
Do you think this approach makes sense, or does my way have some major flaw?

Even if you're not changing the graph once it's created, by denormalizing the graph you're creating technical debt, making it more complicated and difficult to work on. You could get some weird bugs that would be hard to track down. When you come back to this piece of code in a month or two, it'll be a little more confusing, since it's not fresh in your memory and it's not intuitive.
You would need an absurd number of nodes to realize any performance gain, and if you had an absurd number of nodes you'd also be doubling the number of references to the edges, increasing the memory footprint. Also, if you're compiling to JavaScript, be nice to the garbage collector by not holding more references to an object than you need.
If you want to improve the performance of the graph, I'd look into what could run concurrently with isolates. Just keep in mind that graphs can get stupidly complex, so if you can keep anything simple, keep it simple.
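To make the trade-off concrete, here is a rough sketch of the two ways of getting a node's successors. It is written in Python rather than Dart for brevity, and the class and field names are made up for illustration, not taken from the question:

class Node:
    def __init__(self, name):
        self.name = name
        self.neighbours = []  # cached successor nodes (the denormalized second copy)

class Edge:
    def __init__(self, source, target):
        self.source, self.target = source, target

class Graph:
    def __init__(self):
        self.nodes, self.edges = [], []

    def add_edge(self, source, target):
        self.edges.append(Edge(source, target))
        source.neighbours.append(target)  # second place that must be kept in sync

    # Option A: derive successors from the edge list each time (O(|E|) per call).
    def successors_from_edges(self, node):
        return [e.target for e in self.edges if e.source is node]

    # Option B: read the cached list (cheap to read, but duplicated state).
    def successors_cached(self, node):
        return node.neighbours

Option B is exactly the neighbour list the question proposes; the cost it adds is the extra bookkeeping in add_edge (and in any future delete), which is the denormalization discussed above.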

Related

How do I specify that a subset of nodes needs to be fully connected?

Consider a number of nodes with some connections between them.
My model's task is to color the nodes. One of the conditions is that the black nodes form a fully-connected set.
How do I code that?
NB, in case it matters: the connections between the nodes are given as a precondition.
What have you tried? Stack Overflow works best if you show what you tried and where you got stuck. Depending on how you model your graph, there could be many different ways.
Here's a hint to get you started: when programming with z3, you usually write the code that "checks" that the nodes are fully connected. Then, through the magic of constraint solving, that causes the solver to provide models that satisfy that criterion. So, start by modeling your graph and working out how you can check that the same-colored nodes are connected.
Note that hard problems like graph coloring, clique finding, isomorphisms, etc. remain hard in this realm too. They're perhaps easier to code, but you shouldn't expect better performance than exhaustive search for large instances on average, unless your graphs have special structure that the solver can exploit. But in that case, you're better off using a custom algorithm anyhow, instead of relying on a general-purpose SMT solver. Of course, this all depends on what your main goal is. It's best to try multiple approaches and pick the one that performs best.
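As an illustration of the hint above, here is a small z3py sketch. The example graph, the variable names, and the "at least three black nodes" constraint are all made-up assumptions; it reads "fully-connected set" as "every pair of black nodes is adjacent":

from z3 import Bool, Solver, Not, And, If, Sum, is_true, sat

# Hypothetical undirected example graph on 4 nodes.
n = 4
edges = {(0, 1), (1, 2), (2, 3), (0, 2)}

def adjacent(i, j):
    return (i, j) in edges or (j, i) in edges

# One Boolean per node: True means "colored black".
black = [Bool(f"black_{i}") for i in range(n)]

s = Solver()

# The "check": two nodes that are not adjacent may never both be black.
for i in range(n):
    for j in range(i + 1, n):
        if not adjacent(i, j):
            s.add(Not(And(black[i], black[j])))

# Made-up extra constraint so the model is non-trivial: at least 3 black nodes.
s.add(Sum([If(b, 1, 0) for b in black]) >= 3)

if s.check() == sat:
    m = s.model()
    print("black nodes:", [i for i in range(n) if is_true(m.evaluate(black[i]))])

The solver then searches for a coloring that passes the check, which is the "magic of constraint solving" mentioned above.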

How to realize a multi-client capability in Neo4j?

Initial situation
I have several independent and disconnected graphs, each of which has a hierarchy-like structure with a local root element. Each of these graphs consists of approximately 8 million nodes and 40 million relationships. I have successfully created a three-digit number of Cypher queries, which should now be applied to a single graph only and not to the entirety of all graphs. The graph the queries have to apply to is specified by its root node.
Challenge to be solved
How can I realize a kind of pseudo multi-client capability for a graph, if all graphs have to remain in a common Neo4j database for reasons of reporting and pattern matching?
Approach to the problem / preliminary result
Implement a shortest-path match to the given root element, for selection purposes, at the beginning of literally every query? Cons:
huge performance losses expected
high development costs
Expand each graph with a separate, additional label? Cons:
complex queries, high development effort
For these cases, adding a specific label per tenant/client to all nodes in the subgraph tends to be the approach taken. It requires you to ensure that whenever you match the relevant nodes in a query, you also check that the nodes you're working with carry that client's label.
As a note for the future, native multi-tenancy support is one of the key features we're working on for the next year.
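A minimal sketch of the label-per-tenant idea, using the official Neo4j Python driver. The connection details, the :Root label, the id property, the ClientAcme tenant label, and the :Product query are all assumptions for illustration, not from this thread; note that labels cannot be parameterized in Cypher, which is why the tenant label is concatenated into the query string:

from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
TENANT_LABEL = "ClientAcme"  # hypothetical per-tenant label

with driver.session() as session:
    # One-off: tag every node reachable from the tenant's root with the tenant label.
    session.run(
        "MATCH (root:Root {id: $rootId})-[*0..]->(n) SET n:" + TENANT_LABEL,
        rootId="acme-root",
    )

    # Every later query must also filter on the tenant label.
    result = session.run("MATCH (n:" + TENANT_LABEL + ":Product) RETURN n.name AS name")
    for record in result:
        print(record["name"])

driver.close()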

Neo4j partition

Is there a way to physically separate Neo4j partitions?
Meaning the following query will go to node1:
Match (a:User:Facebook)
While this query will go to another node (maybe hosted in Docker):
Match (b:User:Google)
This is the case:
I want to store the data of several clients in Neo4j, hopefully lots of them.
Now, I'm not sure what the best design for that is, but it has to fulfill a few conditions:
No mixed data should be returned from a Cypher query (it's really hard to make sure that no developer will forget the ":Partition1", for example, in a Cypher query).
The performance of one client shouldn't affect another client. For example, if one client has lots of data and another client has a small amount of data, or if a "heavy" query of one client is currently running, I don't want the "light" queries of another client to suffer from slow performance.
In other words, storing everything under one node will, I think, run into scalability problems at some point in the future when I have more clients.
By the way, is it common to have a few clusters?
Also, what's the advantage of partitioning over creating a different label for each client? For example: Users_client_1, Users_client_2, etc.
Short answer: no, there isn't.
Neo4j has high-availability (HA) clusters where you can make a copy of your entire graph on many machines and then serve many requests against those copies quickly, but they don't partition a really huge graph so that some of it is stored here and other parts there, all connected by one query mechanism.
More detailed answer: graph partitioning is a hard problem, subject to ongoing research. You can read more about it over at wikipedia, but the gist is that when you create partitions, you're splitting your graph up into multiple different locations, and then needing to deal with the complication of relationships that cross partitions. Crossing partitions is an expensive operation, so the real question when partitioning is, how do you partition such that the need to cross partitions in a query comes up as infrequently as possible?
That's a really hard question, since it depends not only on the data model but on the access patterns, which may change.
Here's how bad the situation is (quote borrowed from the Wikipedia article on graph partitioning):
Typically, graph partition problems fall under the category of NP-hard problems. Solutions to these problems are generally derived using heuristics and approximation algorithms.[3] However, uniform graph partitioning or a balanced graph partition problem can be shown to be NP-complete to approximate within any finite factor.[1] Even for special graph classes such as trees and grids, no reasonable approximation algorithms exist,[4] unless P=NP. Grids are a particularly interesting case since they model the graphs resulting from Finite Element Model (FEM) simulations. When not only the number of edges between the components is approximated, but also the sizes of the components, it can be shown that no reasonable fully polynomial algorithms exist for these graphs.
Not to leave you with too much doom and gloom: plenty of people have partitioned big graphs. Facebook and Twitter do it every day, so you can read about FlockDB on the Twitter side or avail yourself of the relevant Facebook research. But to summarize and cut to the chase: it depends on your data, and most people who partition design a custom partitioning strategy; it's not something the software does for them.
Finally, other architectures (such as Apache Giraph) can auto-partition in some senses; if you store a graph on top of hadoop, and hadoop already automagically scales across a cluster, then technically this is partitioning your graph for you, automagically. Cool, right? Well...cool until you realize that you still have to execute graph traversal operations all over the place, which may perform very poorly owing to the fact that all of those partitions have to be traversed, the performance situation you're usually trying to avoid by partitioning wisely in the first place.

Neo4J Performance Benchmarking

I have created a basic implementation of a high-level client over Neo4j (https://github.com/impetus-opensource/Kundera/tree/trunk/kundera-neo4j) and want to compare its performance with the native Neo4j driver (and maybe Spring Data too). This way I would be able to determine the overhead my library adds over the native driver.
I plan to create an extension of YCSB for Neo4J.
My question is: what should be considered the basic unit of object to be written into Neo4j (should it be a single node, or a couple of nodes joined by an edge)?
What's the current practice in the Neo4j world? How do people who benchmark Neo4j performance do it?
There's already been some work for benchmarking Neo4J with Gatling: http://maxdemarzi.com/2013/02/14/neo4j-and-gatling-sitting-in-a-tree-performance-t-e-s-t-ing/
You could maybe adapt it.
See graphdb-benchmarks
The project graphdb-benchmarks is a benchmark between popular graph databases. Currently the framework supports Titan, OrientDB, Neo4j and Sparksee. The purpose of this benchmark is to examine the performance of each graph database in terms of execution time. The benchmark is composed of four workloads: Clustering, Massive Insertion, Single Insertion and Query Workload. Every workload has been designed to simulate common operations in graph database systems.
Clustering Workload (CW): CW consists of a well-known community detection algorithm for modularity optimization, the Louvain Method. We adapt the algorithm on top of the benchmarked graph databases and employ cache techniques to take advantage of both graph database capabilities and in-memory execution speed. We measure the time the algorithm needs to converge.
Massive Insertion Workload (MIW): Create the graph database and configure it for massive loading, then populate it with a particular dataset. We measure the time for the creation of the whole graph.
Single Insertion Workload (SIW): Create the graph database and load it with a particular dataset. Every object insertion (node or edge) is committed directly and the graph is constructed incrementally. We measure the insertion time per block, which consists of one thousand edges and the nodes that appear during the insertion of these edges (a rough sketch of this style of measurement follows after this list).
Query Workload (QW): Execute three common queries:
FindNeighbours (FN): finds the neighbours of all nodes.
FindAdjacentNodes (FA): finds the adjacent nodes of all edges.
FindShortestPath (FS): finds the shortest path between the first node and 100 randomly picked nodes.
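Not taken from graphdb-benchmarks or YCSB, but as a rough sketch of what an SIW-style measurement can look like with the official Neo4j Python driver; the connection details, the Person/KNOWS schema, the edge stream and the block size are all assumptions:

import time
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
BLOCK_SIZE = 1000  # edges per measured block, as in the SIW description

# Hypothetical edge stream: (source_id, target_id) pairs.
edge_stream = [(i, i + 1) for i in range(10_000)]

with driver.session() as session:
    block_start = time.perf_counter()
    for count, (src, dst) in enumerate(edge_stream, start=1):
        # Each insertion is committed directly; MERGE creates missing nodes on the fly.
        session.run(
            "MERGE (a:Person {id: $src}) "
            "MERGE (b:Person {id: $dst}) "
            "MERGE (a)-[:KNOWS]->(b)",
            src=src, dst=dst,
        )
        if count % BLOCK_SIZE == 0:
            elapsed = time.perf_counter() - block_start
            print(f"block {count // BLOCK_SIZE}: {elapsed:.2f}s for {BLOCK_SIZE} edges")
            block_start = time.perf_counter()

driver.close()

Whether the "basic unit" is a single node or a node pair joined by an edge mostly changes what goes inside the loop; the measurement structure stays the same.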
One way to performance-test is to use e.g. http://gatling-tool.org/. There is work underway to create benchmark frameworks at http://ldbc.eu. Otherwise, benchmarking is highly dependent on your domain, dataset and the queries you are trying to run. Maybe you could start at https://github.com/neo4j/performance-benchmark and improve on it?

People counting using OpenCV

I'm starting research to implement a system that must count the flow of people through some place.
The final idea is to have something like http://www.youtube.com/watch?v=u7N1MCBRdl0 . I'm working with OpenCV to start creating it, and I'm reading and studying about it. But I'd like to know if someone can give me some hints on source code, examples, articles and anything else that can help me move faster.
I started with the blobtrack.exe sample to study, but I didn't get good results.
Thanks in advance.
Blob detection is the correct way to do this, as long as you choose good threshold values and your lighting is even and consistent; but the real problem here is writing a tracking algorithm that can keep track of multiple blobs, being resistant to dropped frames. Basically you want to be able to assign persistent IDs to each blob over multiple frames, keeping in mind that due to changing lighting conditions and due to people walking very close together and/or crossing paths, the blobs may drop out for several frames, split, and/or merge.
To do this 'properly' you'd want a fuzzy ID-assignment algorithm that is resistant to dropped frames (i.e. the blob's ID remains, and its motion is ideally predicted, if the blob drops out for a frame or two). You'd probably also want to keep a history of ID merges and splits, so that if two IDs merge into one and then that one splits back into two, you can re-assign the individual merged IDs to the resulting two blobs.
In my experience the openFrameworks openCv basic example is a good starting point.
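A very rough Python/OpenCV sketch of the idea described above: background-subtraction blobs plus a greedy nearest-neighbour tracker that keeps IDs alive for a few dropped frames. The video file name, the thresholds and the distance cutoff are assumptions to be tuned per scene, and merge/split handling is left out:

import cv2
import numpy as np

MIN_AREA = 500      # ignore small blobs (noise); tune per scene
MAX_DIST = 50.0     # max centroid movement between frames to keep an ID
MAX_MISSED = 5      # frames a blob may drop out before its ID is retired

backsub = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
tracks = {}         # id -> {"pos": (x, y), "missed": int}
next_id = 0

cap = cv2.VideoCapture("people.mp4")  # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break

    # 1. Foreground mask plus simple cleanup.
    mask = backsub.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    # 2. Blob detection via contours (OpenCV 4 return signature).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    detections = []
    for c in contours:
        if cv2.contourArea(c) < MIN_AREA:
            continue
        x, y, w, h = cv2.boundingRect(c)
        detections.append((x + w / 2.0, y + h / 2.0))

    # 3. Greedy nearest-neighbour assignment of detections to existing IDs.
    unmatched = list(detections)
    for tid, t in list(tracks.items()):
        if unmatched:
            dists = [np.hypot(cx - t["pos"][0], cy - t["pos"][1]) for cx, cy in unmatched]
            j = int(np.argmin(dists))
            if dists[j] < MAX_DIST:
                t["pos"] = unmatched.pop(j)
                t["missed"] = 0
                continue
        t["missed"] += 1
        if t["missed"] > MAX_MISSED:
            del tracks[tid]

    # 4. Remaining detections become new IDs (people entering the scene).
    for pos in unmatched:
        tracks[next_id] = {"pos": pos, "missed": 0}
        next_id += 1

cap.release()

Counting then becomes a matter of watching each persistent ID cross a virtual line, which is the part this sketch does not attempt.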
I'll not put this as the right answer.
It is just an option for those who are able to read Portuguese or can use a translator. It's my graduation project, and it contains an explanation of one option for counting people.
Limitations:
It does not behave well in environments where the background light changes a lot.
It must be configured for each location where you will use it.
Advantages:
It's fast!
I used OpenCV for the basic features, such as capturing frames, going through the pixels, etc. But the algorithm to count people was done by myself.
You can check it in this paper.
Final opinion about this project: it's not ready to go live and become a product, but it works very well as a basis for study.

Resources