I need to compute PageRank scores for a large graph that cannot be loaded into memory. I need a simple toolkit that can be easily modified, since I will have to change its code for my research. Are you aware of any useful, simple toolkit that computes PageRank for large graphs (the graph is around 40 GB)?
Thanks
Two packages you might want to evaluate are
Apache TinkerPop
http://tinkerpop.incubator.apache.org/docs/3.0.1-incubating/#pagerankvertexprogram
Apache Spark - GraphX
http://spark.apache.org/docs/latest/graphx-programming-guide.html#pagerank
Both are open source with Apache license, so the source code is available for you to modify or extend.
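To give a rough idea of how little code the GraphX route involves, here is a minimal sketch of loading an edge list from disk and running PageRank on it. The HDFS paths and the tolerance value are placeholders, not anything taken from the question:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.graphx.GraphLoader

object PageRankSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("pagerank-sketch"))

    // Load the graph from a plain edge list ("srcId dstId" per line).
    // A ~40 GB graph is partitioned across the cluster instead of being
    // held in a single machine's memory.
    val graph = GraphLoader.edgeListFile(sc, "hdfs:///graphs/edges.txt")

    // Iterate until the ranks change by less than the given tolerance.
    val ranks = graph.pageRank(0.0001).vertices

    ranks.saveAsTextFile("hdfs:///graphs/pagerank-scores")
    sc.stop()
  }
}
```

Because the source is Apache-licensed Scala, the PageRank implementation itself (org.apache.spark.graphx.lib.PageRank) is a self-contained object you can copy and modify for your research.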
Related
I want to start developing a recommendation system for big data, say 2 GB of log data per day. For this purpose, between RHadoop and Apache Mahout, which one is preferred?
Please answer this question from different aspects, such as code availability, speed, etc.
If you know R and your data is not that big, try SparkR; be aware, though, that most of the massive R package collection does not integrate well with Spark's distributed data.
If you have big data and are OK with an R-like Scala API, then Mahout is the better fit. You can get your math working on sample data, and the same code will automatically scale to production size.
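To give a feel for the "R-like Scala API" point, here is a small sketch in the style of Mahout's Samsara DSL. The import paths and the mahoutSparkContext helper are written from memory of the Samsara documentation, so treat them as assumptions and verify against your Mahout version; the point is only that the same matrix expression works on a toy in-core sample and on a distributed row matrix (DRM):

```scala
// Sketch of Mahout Samsara's R-like DSL (API names assumed from the
// Samsara docs; verify against your Mahout version).
import org.apache.mahout.math.scalabindings._
import org.apache.mahout.math.scalabindings.RLikeOps._
import org.apache.mahout.math.drm._
import org.apache.mahout.math.drm.RLikeDrmOps._
import org.apache.mahout.sparkbindings._

object SamsaraSketch extends App {
  // Distributed context; "local[*]" is just for trying things out.
  implicit val ctx = mahoutSparkContext(masterUrl = "local[*]", appName = "samsara-sketch")

  // Tiny in-core sample matrix; in production this would be a DRM loaded
  // from HDFS, with the math below unchanged.
  val inCoreA = dense((1.0, 2.0), (3.0, 4.0), (5.0, 6.0))
  val drmA = drmParallelize(inCoreA)

  // Co-occurrence-style computation, written like R/Matlab math: A' * A.
  val drmAtA = drmA.t %*% drmA
  println(drmAtA.collect)
}
```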
Just started my excursion into graph processing methods and tools. What we basically do is count some standard metrics like PageRank, clustering coefficient, triangle count, diameter, connectivity, etc. In the past I was happy with Octave, but when we started to work with graphs having, say, 10^9 nodes/edges, we got stuck.
So the possible solutions could be a distributed setup built with Hadoop/Giraph, Spark/GraphX, Neo4j on top of them, etc.
But since I am a beginner, can someone advise what to actually choose? I did not get the difference between when to use Spark/GraphX and when to use Neo4j. Right now I am leaning towards Spark/GraphX, since it has a more Python-like syntax, while Neo4j has its own Cypher. Visualization in Neo4j is cool but not useful at such a large scale. I do not understand whether there is a reason to use an additional layer of software (Neo4j) or to just use Spark/GraphX, since, as I understand it, Neo4j will not save as much time as switching from pure Hadoop to Giraph, GraphX, or Hive does.
Thank you.
Neo4j: It is a graph database that identifies relationships and entities, usually reading the data from disk. Its popularity and adoption are covered in this link. But when it needs to process very large data sets and do real-time processing to produce graph results/representations, it needs to scale horizontally. In this case, combining Neo4j with Apache Spark gives significant performance benefits, with Spark serving as an external graph compute solution.
Mazerunner is a distributed graph processing platform that extends Neo4j. It uses a message broker to distribute graph processing jobs to Apache Spark's GraphX module.
GraphX: GraphX is a new component in Spark for graphs and graph-parallel computation. At a high level, GraphX extends the Spark RDD by introducing a new Graph abstraction: a directed multigraph with properties attached to each vertex and edge. It supports multiple Graph algorithms.
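To make the "directed multigraph with properties attached to each vertex and edge" point concrete, here is a minimal sketch of building such a graph from two RDDs; the user/relationship data are invented purely for illustration:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.graphx.{Edge, Graph}

object PropertyGraphSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("property-graph"))

    // Vertices carry a user name as their property.
    val users = sc.parallelize(Seq(
      (1L, "alice"), (2L, "bob"), (3L, "carol")))

    // Edges carry a relationship label as their property.
    val relations = sc.parallelize(Seq(
      Edge(1L, 2L, "follows"), Edge(2L, 3L, "follows"), Edge(3L, 1L, "likes")))

    // The Graph abstraction ties the two RDDs together into one multigraph.
    val graph = Graph(users, relations)

    println(s"vertices = ${graph.numVertices}, edges = ${graph.numEdges}")
    sc.stop()
  }
}
```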
Conclusion:
It is generally recommended to use the hybrid combination of Neo4j with GraphX, as the two are easy to integrate.
For real-time processing and for processing large data sets, use Neo4j with GraphX.
For simple persistence and for showing entity relationships in a simple graphical display, use standalone Neo4j.
Neo4j: I have not used it, but I think it does all of its graph computation (like PageRank) on a single machine. Would that be able to handle your data set? It may depend on whether your entire graph fits into memory, and if not, on how efficiently it processes data from disk. It may hit the same problems you encountered with Octave.
Spark GraphX: GraphX partitions graph data (vertices and edges) across a cluster of machines. This gives you horizontal scalability and parallelism in computation. Some things you may want to consider: it only has a Scala API right now (no Python yet). It does PageRank, triangle count, and connected components, but you may have to implement clustering coefficient and diameter yourself, using the provided graph API (Pregel, for example). The programming guide has a list of supported algorithms: https://spark.apache.org/docs/latest/graphx-programming-guide.html
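As an illustration of implementing one of the missing metrics yourself, here is a rough sketch of a local clustering coefficient built from the primitives GraphX does provide (triangle counts joined with degrees). The edge-list path is a placeholder and the partitioning strategy is just one reasonable choice:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.graphx.{GraphLoader, PartitionStrategy}

object ClusteringCoefficientSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("clustering-coefficient"))

    // triangleCount expects a canonically oriented, partitioned graph.
    val graph = GraphLoader
      .edgeListFile(sc, "hdfs:///graphs/edges.txt", canonicalOrientation = true)
      .partitionBy(PartitionStrategy.RandomVertexCut)

    val triangles = graph.triangleCount().vertices   // (vertexId, #triangles)
    val degrees   = graph.degrees                    // (vertexId, degree)

    // Local clustering coefficient = triangles / possible wedges at the vertex.
    val localCC = triangles.innerJoin(degrees) { (id, tri, deg) =>
      if (deg < 2) 0.0 else 2.0 * tri / (deg * (deg - 1))
    }

    localCC.take(10).foreach(println)
    sc.stop()
  }
}
```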
GraphX is more of a real-time processing framework for data that can be (and is better when) represented in graph form. With GraphX you can use various algorithms that require large amounts of processing power (both RAM and CPU), and with Neo4j you can (reliably) persist and update that data. This is what I'd suggest.
I know for sure that @kennybastani has made some pretty interesting advances in that area; you can take a look at his Mazerunner solution. It's also shipped as a Docker image, so you can poke at it with a stick and find out for yourself whether you like it or not.
This image deploys a container with Apache Spark and uses GraphX to perform ETL graph analysis on subgraphs exported from Neo4j. The results of the analysis are applied back to the data in the Neo4j database.
How is it possible to perform calculations on a matrix that is 6 GB in size when there is only 4 GB of RAM? What techniques are used in this case? Is there any open-source solution or tool that uses files during vector operations?
Yes, the famous Hadoop is an open source computing platform, which can be used for operations on pretty big matrices (and not only for that).
For examples, please read this page.
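Independent of the specific tool, the underlying technique is out-of-core (block-wise or streaming) processing: the matrix is read from disk in pieces that fit in RAM, so the full 6 GB never has to be resident at once. Below is a minimal plain-Scala sketch of that idea for a matrix-vector product; the file names and the "one whitespace-separated row per line" format are assumptions for illustration, not part of any particular tool:

```scala
import scala.io.Source

// Stream the matrix from disk row by row so that only one row plus the
// vector ever sits in memory at a time.
object StreamingMatVec {
  def main(args: Array[String]): Unit = {
    // The vector is small enough to keep in RAM (placeholder file name).
    val vector: Array[Double] =
      Source.fromFile("vector.txt").getLines().map(_.toDouble).toArray

    val matrixSource = Source.fromFile("matrix.txt")
    try {
      // Each row is parsed, used for one dot product, then discarded.
      val result = matrixSource.getLines().map { line =>
        val row = line.trim.split("\\s+").map(_.toDouble)
        row.zip(vector).map { case (a, x) => a * x }.sum
      }.toArray

      result.take(5).foreach(println)
    } finally {
      matrixSource.close()
    }
  }
}
```

Hadoop applies the same idea, but splits the blocks across many machines and disks instead of streaming them through a single process.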
I've heard of a max-flow min-cut method for sharding or segmenting a graph database. Does someone have a sample Cypher query that can do that, say against the MovieLens dataset? Basically, I want to segment users into different shards/clusters based on what they like, so maybe the min cuts can naturally find clusters of users around genres such as Horror or Drama, or maybe it will create non-intuitive clusters/segments like hipster/romantic and conservative/comedy/horror groups.
My short answer is no; sorry, I don't know how you would express that.
My longer answer is that even if this were possible (which it very well may be), I would advise against it.
Multiple algorithms 'do' min-cut/max-flow. These all have different performance characteristics, and because clustering is computationally expensive, I'd guess you want control over the specific algorithm implementation used.
Cypher is a declarative language: you specify what you're looking for, but not how to compute it, and it will be difficult to specify such a complex problem in a way that the Cypher engine can figure out what you're trying to do. That will make it hard for Cypher (or any declarative language engine) to produce an efficient query plan.
My suggestion is to find the specific algorithm you wish to use and implement it using the Neo4j Java API.
If you're running Neo4j in embedded mode, you're done at that point. If you're running Neo4j server, you'll then just have to run that code as an Unmanaged Server Extension.
AFAIK you're after 'Community Detection' algorithms. There are non-overlapping (communities do not overlap) and overlapping variants, where non-overlapping is generally easier to implement and understand. Common algorithms are:
Non-overlapping: Louvain
Overlapping: extensions of the Label Propagation Algorithm (LPA); basic LPA is non-overlapping, but there are extensions that make it overlapping (a sketch of the basic, non-overlapping version follows after the links below)
Here are a few C++ code examples for these algorithms: Louvain, OSLOM (overlapping), LPA (non-overlapping), and Infomap.
And if you want something bleeding-edge, I was recommended the SCD algorithm:
Academic paper: "High Quality, Scalable and Parallel Community Detection for Large Real Graphs"
C++ implementation
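Since GraphX comes up elsewhere in this thread, here is a minimal sketch using its built-in label propagation implementation (the basic, non-overlapping variant). The edge-list path and the number of iterations are placeholders:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.graphx.GraphLoader
import org.apache.spark.graphx.lib.LabelPropagation

object CommunityDetectionSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("lpa-communities"))

    val graph = GraphLoader.edgeListFile(sc, "hdfs:///graphs/edges.txt")

    // After convergence each vertex carries a community id (non-overlapping).
    val communities = LabelPropagation.run(graph, 10).vertices

    // Community sizes, largest first.
    communities.map { case (_, label) => (label, 1L) }
      .reduceByKey(_ + _)
      .sortBy(_._2, ascending = false)
      .take(10)
      .foreach(println)

    sc.stop()
  }
}
```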
I have 10 grids (currently stored as ASCII grids from a GIS), each of them about 4.5 GB uncompressed. In addition, I have about 100,000 locations with x and y coordinates. I need to extract the grid value at each of these locations. I am currently doing it with GRASS GIS, which works but is very slow. Can anyone recommend a library or a programming language most suitable for such a task?
Thanks in advance!
Sounds like the classic use-case for Hadoop MapReduce.
Hadoop MapReduce is a programming model and software framework for writing applications that rapidly process vast amounts of data in parallel on large clusters of compute nodes.
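As an illustration of the same map-side pattern, here is a Spark sketch (Spark being the other framework discussed in this thread) rather than raw MapReduce. It assumes the ASCII grid headers have already been stripped and the 100,000 x/y locations have already been converted to (row, column) cell indices; all file names and values are placeholders:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object GridValueExtract {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("grid-extract"))

    // Placeholder path to a headerless ASCII grid: one row of cell values per line.
    val gridPath = "hdfs:///grids/grid01_noheader.asc"

    // The target locations, pre-converted to (row, column) indices.
    // 100,000 of them are only a few MB, so broadcast them to every worker.
    val targets: Map[Long, Seq[Int]] = Map(0L -> Seq(10, 250), 9999L -> Seq(3))
    val targetsBc = sc.broadcast(targets)

    // "Map" phase: every grid row is scanned exactly once; only rows that
    // contain a requested cell emit anything.
    val values = sc.textFile(gridPath)
      .zipWithIndex()                                  // attach the row number
      .flatMap { case (line, rowIdx) =>
        val cols = targetsBc.value.getOrElse(rowIdx, Seq.empty[Int])
        if (cols.isEmpty) Seq.empty
        else {
          val cells = line.trim.split("\\s+")
          cols.map(col => ((rowIdx, col), cells(col).toDouble))
        }
      }

    values.collect().foreach { case ((row, col), v) =>
      println(s"row=$row col=$col value=$v")
    }
    sc.stop()
  }
}
```

The same scan works one grid at a time, so none of the 4.5 GB files ever needs to fit in a single machine's memory.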