In my application I want to keep each customer's data separate, so I wanted to know how I can achieve multi-tenancy in Neo4j.
As Neo4j does not support schemas within a database the way Oracle does, is there any way to run multiple instances, say on different ports, from a single installation of Neo4j?
I don't believe it's possible to run multiple instances off a single installation. However, the Neo4j software is self-contained, so you can keep two copies of Neo4j in different directories on the same server, each listening on a different port. In each copy you configure the port separately in the conf/neo4j-server.properties file. This also has the nice property that if the two tenants have different usage patterns, you can configure them independently.
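For example, the second copy might set its ports like this (property names as in the 2.x-era conf/neo4j-server.properties; check the file shipped with your version for the exact keys):

```properties
# conf/neo4j-server.properties in the second copy
# (the first copy keeps the defaults, 7474/7473)
org.neo4j.server.webserver.port=7475
org.neo4j.server.webserver.https.port=7476
```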
Take, for example, an organization with multiple divisions, each maintained in its own database system. If I want to create a Neo4j knowledge graph spanning all of those databases, how can I do that without affecting the existing systems, while keeping the knowledge graph up and running?
Neo4j Enterprise Edition 4.0 introduced support for Fabric, which is:
a way to store and retrieve data in multiple databases, whether they are on the same Neo4j DBMS or in multiple DBMSs, using a single Cypher query.
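As a sketch of what this looks like in practice (the Fabric database name `divisions` and graph name `sales` are hypothetical; you define these in the Fabric configuration):

```cypher
// Query one constituent graph from the Fabric virtual database,
// then aggregate the results in a single Cypher statement.
CALL {
  USE divisions.sales
  MATCH (c:Customer)
  RETURN c.name AS name
}
RETURN name
ORDER BY name
```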
I'm in the process of designing a web service hosted on Google App Engine, composed of three parts: a client website (or more than one), a simple CMS I designed to edit and view the content of that website, and lastly a server component to communicate between these two and the database. I am new to Docker and currently doing research to figure out how exactly to set up my containers, along with the structure of my project.
I would like each of these to be a separate service, and therefore put them in different containers. From my research it seems perfectly possible to put them in separate containers and still have them communicate, but is this the optimal solution? Also, in the future I might want to scale up so that my backend can supply multiple different frontends, all managed from the same CMS.
tldr:
How should I best structure my web service with Docker, given that my backend may eventually supply more than one frontend, all managed from the same CMS?
Any suggestions for tools or design patterns that make my life easier are welcome!
Personally, I don't like to design an application in terms of containers. Containers exist to serve the deployment process; that is their main goal.
If you keep your logic in separate components/services you'll be able to combine them within containers in many different ways.
Once you have criteria for what suits your product requirements (performance, price, security, etc.), you'll configure your Docker images the way you prefer.
So my advice is: focus on the design of your application first. Start from the components you have, provide a Dockerfile for each one, and then see what you need to change.
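For the three components described above, one container per service might be wired together roughly like this (service and directory names are placeholders; adapt them to your project layout):

```yaml
# Hypothetical docker-compose.yml: one Dockerfile per component,
# each built into its own container, communicating over the
# default Compose network by service name.
version: "3"
services:
  site:      # client website
    build: ./site
    ports: ["8000:8000"]
  cms:       # content management interface
    build: ./cms
    ports: ["8080:8080"]
  api:       # server component talking to the database
    build: ./api
    ports: ["3000:3000"]
```

Adding a second frontend later is then just another `build:` entry pointing at the same `api` service.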
I created several Neo4j databases for several demo projects on my laptop.
When I open any of my projects, I can see ALL the queries I created for ALL projects.
How can I arrange things so that each query is visible only from the database it belongs to?
Thanks Tal
If you have a single DB, then keeping your "logical subgraphs" apart when doing queries requires crafting your data model so that each subgraph can be queried independently of the others.
A typical approach would be to use specific node labels for each subgraph, and to not share those labels between subgraphs. If you do that, then your queries can specify that you only care about the nodes with those labels.
A more exact answer would depend on your actual use cases.
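As a minimal sketch of the label-per-subgraph approach (the tenant label names here are made up for illustration):

```cypher
// Each tenant's nodes carry a dedicated label alongside the domain label.
CREATE (:TenantA:Customer {name: 'Acme'});
CREATE (:TenantB:Customer {name: 'Globex'});

// A query scoped to one tenant only matches that tenant's label,
// so TenantB's data is never touched.
MATCH (c:TenantA:Customer)
RETURN c.name;
```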
Thanks for your answers. I figured it out.
For each database, I should configure a separate port.
Since all my databases were using the one port, 7474, the queries from all the databases were mixed together.
Tal
I am working in a research domain called knowledge management, and I am using Neo4j.
I want to link my Neo4j database with other databases that use physical data storage (PostgreSQL, MySQL, ...). Is this possible?
In general, sure; it depends on how you want to set up the linking.
Perhaps you can detail your use-case more?
Normally people sync data between other datastores and Neo4j e.g. by triggering updates or polling.
For Postgres there is also a foreign data wrapper.
You can also use an event-sourced system, where data is written to your relational databases and the relationships are written to Neo4j as well.
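For the polling approach, one option is APOC's JDBC loader, sketched here under the assumption that the APOC plugin and a PostgreSQL JDBC driver are installed on the Neo4j server, and with a made-up connection string and table:

```cypher
// Pull rows from Postgres and upsert them as nodes in Neo4j.
CALL apoc.load.jdbc(
  'jdbc:postgresql://localhost/mydb?user=me',
  'SELECT id, name FROM customers')
YIELD row
MERGE (c:Customer {id: row.id})
SET c.name = row.name;
```

Run on a schedule, this keeps the graph loosely in sync with the relational source.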
I have a RoR app that is installed multiple times on the same machine. The app is the same; it's just installed under different names (i.e. app1, app2, ...).
The app uses Thinking Sphinx for searching. It has one index for the model Element. Each installation of the app will have its own database with its own Elements.
So my question is:
Should I have multiple Sphinx instances running, one per app, each on a different port? (I tried this option with 2 installations and it works well, but I suspect there may be issues regarding server load.)
Should I have only one Sphinx instance? In that case, where should I configure Sphinx? How can I configure it to access different databases? How can I tell it to differentiate between instances from different apps?
Should I go with another solution?
Thank you in advance
Running separate Sphinx instances on different ports is definitely the way to go.
Sphinx requires every document to have a unique ID, even across different index files, so managing that with a standard Thinking Sphinx-generated configuration is painful with multiple applications - you'd really need to manage the single configuration file yourself, plus adapt Thinking Sphinx to search only the relevant data set for each app. It could be interesting on some level, but my gut feeling is that it's not worth the effort or time. Use different ports and different daemons; it's much easier.
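Per-app ports can be set in each installation's Thinking Sphinx config; a sketch assuming Thinking Sphinx v3 (where the searchd port is the `mysql41` setting - older versions use config/sphinx.yml with a `port` key instead):

```yaml
# app1/config/thinking_sphinx.yml
production:
  mysql41: 9307

# app2/config/thinking_sphinx.yml
production:
  mysql41: 9308
```

Each app then indexes and searches against its own daemon, so document IDs never need to be unique across apps.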