Give users read-only access to Neo4j while doing Batch Update - neo4j

This is just a general question, not too technical. We have a use case where we need to load hundreds of thousands of records into an existing Neo4j database. We cannot afford to take the database offline because of the users who are accessing it. I know that Neo4j requires an exclusive lock on the database while it's performing batch updates. Is there a way around my problem? I don't want to lock my database while doing updates. I still want my users to access it - even for just read-only access. Thanks.

Neo4j never requires an exclusive lock on the whole database. It selectively locks the portions of the graph that are affected by mutating operations, so there are several things you can do to achieve your goal. Are you a Neo4j Enterprise customer?
Option 1: If so, you can run your batch insert on the master node and route users to slaves for reading.
Option 2: Alternatively, you could do a "blue-green" style deployment where you:
take a backup (B) of your existing database (A), then mark database A read-only
apply your batch inserts onto B, either by starting a separate instance or, even better, by using BatchInserters. That way, you'll insert your hundreds of thousands of records in a few seconds
start the new database B
flip a switch on a load balancer so that users start to be routed to B instead of A
take A down
(Please let me know if you need some tips on how to make a read-only DB.)
Option 3: If you can only afford to run one instance at any one time, then there are techniques you can employ to let your users access the database as usual and still insert large volumes of data. One of them is using a single-threaded "writer" with a queue that batches write operations. Because only one thread ever writes to the database, you never run into deadlock scenarios, and people can happily read from the database. For option 3, I suggest using GraphAware Writer.
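If you'd rather roll your own, the pattern in a nutshell looks something like this sketch (not the GraphAware Writer API itself, just an illustration, assuming the Neo4j 3.x embedded API):

    import org.neo4j.graphdb.GraphDatabaseService;
    import org.neo4j.graphdb.Transaction;

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class SingleThreadedWriter {

        private final GraphDatabaseService db;
        private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();

        public SingleThreadedWriter(GraphDatabaseService db) {
            this.db = db;
            Thread writer = new Thread(this::drainLoop, "graph-writer");
            writer.setDaemon(true);
            writer.start();
        }

        /** Called from any thread; the unit of work runs later on the writer thread. */
        public void submit(Runnable writeOperation) {
            queue.add(writeOperation);
        }

        private void drainLoop() {
            List<Runnable> batch = new ArrayList<>();
            while (true) {
                try {
                    batch.add(queue.take());          // block until there is work
                    queue.drainTo(batch, 999);        // then batch up to 1000 operations
                    try (Transaction tx = db.beginTx()) {
                        for (Runnable op : batch) {
                            op.run();                 // all writes happen on this one thread
                        }
                        tx.success();                 // commit the whole batch at once
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;                           // note: a failing batch also ends this sketch's loop
                } finally {
                    batch.clear();
                }
            }
        }
    }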
I've assumed you are not trying to insert hundreds of thousands of nodes to a running Neo4j database using Cypher. If you are, I would start there and change it to use Java APIs or the BatchInserter API.
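For reference, a BatchInserter run looks roughly like this (a minimal sketch assuming the Neo4j 3.x API; the path, labels, and properties are placeholders, and the target store must not be open in any running instance):

    import org.neo4j.graphdb.Label;
    import org.neo4j.graphdb.RelationshipType;
    import org.neo4j.unsafe.batchinsert.BatchInserter;
    import org.neo4j.unsafe.batchinsert.BatchInserters;

    import java.io.File;
    import java.util.Collections;
    import java.util.Map;

    public class BulkLoad {
        public static void main(String[] args) throws Exception {
            // Opens the store files directly, bypassing transactions -- only valid
            // while no other instance has the same store open.
            BatchInserter inserter = BatchInserters.inserter(new File("/data/neo4j-copy-B"));
            try {
                Label person = Label.label("Person");
                RelationshipType knows = RelationshipType.withName("KNOWS");

                long alice = inserter.createNode(Map.of("name", "Alice"), person);
                long bob   = inserter.createNode(Map.of("name", "Bob"), person);
                inserter.createRelationship(alice, bob, knows, Collections.emptyMap());
            } finally {
                inserter.shutdown();   // flushes everything to the store files
            }
        }
    }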

Related

How can I view only the context I'm working on?

In Neo4j, I have built up the database through the various exercises I'm doing.
When I run a query, for example MATCH (n) RETURN (n), even the data that was created back in "Christmas of 1914" appears on the screen, making my interface ugly, polluted, and loaded with objects that are unnecessary for what I'm working on at that moment.
If I work with Northwind, I want to see only Northwind; if I work with Facebook, I just want to see Social; and so on. I do not want to see all the databases on the planet on my screen each time I run a query like MATCH (n) RETURN (n).
Neo4j doesn't really have a direct equivalent to multiple databases stored within the same server instance. There are three options for achieving this:
1) The closest match would be to run an additional instance of Neo4j on the same server (see the conf sketch after this list). You will need to edit the neo4j.conf file to give the new instance a new port number and a new data directory. This will give you isolation between the data and user accounts in the two databases. The downside is that you will need to divide up the RAM on the box before running, effectively limiting both instances to half the RAM.
2) You can attach labels to your nodes to identify which bucket of data (a database in the RDBMS world) each node belongs to. You can operate as if the two are isolated even though they really live in the same database instance. Neo4j won't do much to help you enforce this; you will need to do the work at the application level. There is a mechanism to restrict users to only being able to interact with a subset of your graph, but you have to write custom procedures and restrict the users to only using those. I haven't tried it, but it sounds tedious.
https://neo4j.com/docs/operations-manual/current/security/authentication-authorization/subgraph-access-control/
3) If you are running on VMs or in the cloud, you might as well just create a new instance for your second database. It achieves the same effect as number 1, but with better isolation of resources.
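For option 1, the overrides in the second instance's neo4j.conf would look roughly like this (a sketch assuming Neo4j 3.x setting names; the paths, ports, and sizes are placeholders you would adapt):

    # second instance: its own data directory and ports
    dbms.directories.data=/var/lib/neo4j-instance2/data
    dbms.connector.bolt.listen_address=:7688
    dbms.connector.http.listen_address=:7475
    # split the box's RAM explicitly so the two instances don't fight over it
    dbms.memory.heap.initial_size=2g
    dbms.memory.heap.max_size=2g
    dbms.memory.pagecache.size=2g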

Client-Server Data Synchronisation Algorithm

Client DB - CoreData (iOS)
Server DB - MySQL
I am trying to achieve data synchronisation between client and server but the complicated part is that the schema is highly relational. I was going through couple of synchronisation patterns already in use and looks like most of them are based on a NOSQL or schemaless DB. Wondering if there are any patterns of synch for a highly relational data. I have already gone through couchbase, dropbox sync api, wasabi synch etc. Following are the concerns
1) By highly relational data I mean there are several tables which are related to each other, and Create/Update happens on all of them. Right now I am planning to do separate CRUD requests for each table. Is that a good approach? The problem is that there has to be a strict ordering of the requests, because the changes in table 3 cannot be processed before the table 2 data is received. This relationship is what makes the sync hard.
2) Change tracking on the client: what would be the best way to identify the changes in a particular table (Core Data entity)? I am planning a delta approach where only the changes in objects of the same kind are uploaded at a time. Any insights/links on this?
3) Data merging/conflict resolution: I am stuck on this part. One way would be to keep a modified timestamp in each object, but what if the devices' dates are out of sync or manually changed?
I wanted to know the implications/challenges of such a sync pattern with an RDBMS-backed server, or any alternative approaches.
Problem #1 Explained
Assume there are 10 tables, and the APIs expose CRUD requests for these 10 tables. One request will only C/R/U/D a single table. So my question was: is it a good approach to design APIs like this when it comes to offline syncing of data? For example, consider relational data like
Organization->Employee->Department->Project
Assume some objects in these 4 tables got created offline. Now we need to sync the data to the server when the network is back. So it will be: Create/Update Organisations first, and once that is over, Create/Update Employees so that they can be linked to Organisations. Basically, every C/U/D has to be issued from the top-level objects down to the bottom-level ones. So my question is whether this is a good approach for a sync problem, because if the data were not relational we could have uploaded the changes to all the tables in a single C/U/D API call.
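For illustration, the upload loop I have in mind is something like this sketch (the entity names and the pushChanges() helper are just placeholders):

    import java.util.List;

    public class OfflineSyncRunner {

        // Dependency order: a child type is only synced after every type it references.
        private static final List<String> SYNC_ORDER =
                List.of("Organization", "Employee", "Department", "Project");

        public void syncPendingChanges(SyncClient client) {
            for (String entityType : SYNC_ORDER) {
                // Each call issues the C/U/D requests for one table and must complete
                // before the next type is attempted, so server-side references resolve.
                client.pushChanges(entityType);
            }
        }

        interface SyncClient {
            void pushChanges(String entityType);
        }
    }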
It seems that you might not be aware of typical relational DBMS facilities and protocols that support simultaneous write access by multiple sessions, making them suitable for multi-user, highly concurrent, and OLTP applications.
1) Your API to access MySQL allows you to make your changes atomically (all or nothing) via a transaction; see the sketch after point 3. Within that transaction you should update as many tables as possible together, but you can sequence the changes as necessary. By locking tables in a consistent order as you use them and unlocking in the reverse order, you avoid deadlocks. You can request that only the parts of tables a transaction knows it could possibly change are locked, so that non-overlapping clients can proceed concurrently.
2) Your schema can explicitly record redundant delta information that you get the DBMS to calculate on updates, or it can record sufficient past changes to calculate deltas on request. Your client can give the DBMS its transaction data, and the DBMS can return relevant info based on it and on the past. You probably do not need to, and should not, keep any persistent state on your client. That is what the server database is for; the client database is a buffer for it and for user info.
3) You can use an explicit client serial transaction id so that client plus id indicates what order the client thinks its transactions were sent regardless of its clock.
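For point 1 (and the transaction id from point 3), a minimal sketch of such a multi-table transaction over JDBC; the table and column names are invented for illustration:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class SyncTransaction {

        public void applyClientChange(long clientId, long clientTxId,
                                      String orgName, String empName) throws SQLException {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/app", "app", "secret")) {
                conn.setAutoCommit(false);          // explicit transaction
                try (PreparedStatement org = conn.prepareStatement(
                             "INSERT INTO organization (name) VALUES (?)");
                     PreparedStatement emp = conn.prepareStatement(
                             "INSERT INTO employee (org_name, name) VALUES (?, ?)");
                     PreparedStatement log = conn.prepareStatement(
                             "INSERT INTO client_tx (client_id, client_tx_id) VALUES (?, ?)")) {

                    org.setString(1, orgName);
                    org.executeUpdate();

                    emp.setString(1, orgName);
                    emp.setString(2, empName);
                    emp.executeUpdate();

                    // Record the client's own serial transaction id in the same
                    // transaction, so ordering survives a wrong device clock.
                    log.setLong(1, clientId);
                    log.setLong(2, clientTxId);
                    log.executeUpdate();

                    conn.commit();                  // all or nothing
                } catch (SQLException e) {
                    conn.rollback();                // nothing is applied on failure
                    throw e;
                }
            }
        }
    }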
I wonder how much you have googled.

Migrate Data from Neo4j to SQL

Hi, I am using Neo4j in my application and my structure is as follows:
I am using Embedded Graph API
I have several databases that I point to using a pool that I maintain in my application, e.g. db1, db2, db3, ..., db100
When I want to access a particular database I point to it using new EmbeddedGraphDatabase("Path to db(n)")
The problem is that as the pool count increases, the RAM consumed by the application keeps increasing and eventually brings the application down once it hits a limit.
So I am thinking of migrating from Neo4j to some other database.
Additionally, only a small part of my data actually uses the graph structure.
One way to migrate is to write a script for it. Is there a better option?
Another question: what would be the best database for maintaining my structure?
Another option I am considering is keeping part of my data in Neo4j and moving the rest to some other database.
If anything is unclear I can clarify.
Thanks in advance.
An EmbeddedGraphDatabase instance is not the equivalent of a "connection" in SQL. It's designed to run a long time (days, months). Hence starting/stopping is costly.
What is the use case for having hundreds of separate databases in the same JVM?
Lots of small databases will perform poorly, as the graph database is designed to hold the whole data model in one store on a single host.
Do you run a single JVM per database?
You can control the amount of memory used by Neo4j by providing the correct memory-mapping properties; you can also use the GCR cache from neo4j-enterprise and control its cache-size properties.
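For an embedded setup, that configuration can be passed in when the database is created. A rough sketch, assuming a Neo4j 1.9-era embedded API (setting names and sizes vary by version, so treat them as placeholders):

    import org.neo4j.graphdb.GraphDatabaseService;
    import org.neo4j.graphdb.factory.GraphDatabaseFactory;

    import java.util.HashMap;
    import java.util.Map;

    public class TunedEmbeddedDb {

        public static GraphDatabaseService open(String storeDir) {
            Map<String, String> config = new HashMap<String, String>();
            // Memory-mapped store files: size these to roughly fit the store files on disk.
            config.put("neostore.nodestore.db.mapped_memory", "50M");
            config.put("neostore.relationshipstore.db.mapped_memory", "200M");
            config.put("neostore.propertystore.db.mapped_memory", "100M");
            // GC-resistant cache (neo4j-enterprise only) with bounded sizes.
            config.put("cache_type", "gcr");
            config.put("node_cache_size", "256M");
            config.put("relationship_cache_size", "256M");

            return new GraphDatabaseFactory()
                    .newEmbeddedDatabaseBuilder(storeDir)
                    .setConfig(config)
                    .newGraphDatabase();
        }
    }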
I think it still makes sense to keep the graph part in Neo4j and only move the non-graphy part.

RavenDB - Planning for scalability

I have been learning RavenDB recently and would like to put it to use.
I was wondering what advice or suggestions people had around building the system in a way that is ready to scale, specifically sharding the data across servers, but that can start on a single server and only grow as needed.
Is it advisable, or even possible, to create multiple databases on a single instance and implement sharding across them? Then scaling would simply be a matter of spreading these databases across machines.
My first impression is that this approach would work, but I would be interested to hear the opinions and experiences of others.
Update 1:
I have been thinking more on this topic. I think my problem with the "sort it out later" approach is that it seems difficult to spread data evenly across servers in that situation. I will not have a string key which I can range on (A-E, F-M, ...); it will be done with numbers.
This leaves two options I can see. Either break it at boundaries, so 1-50000 is on shard 1 and 50001-100000 is on shard 2, but then with a site that ages, say like this one, your original shards will be doing a lot less work. Alternatively, a strategy that round-robins the shards and puts the shard id into the key will suffer if you need to move a document to a new shard: it would change the key and break URLs that have used the key.
So my new idea, and again I am putting it out there for comment, would be to create a bucketing system from day one. It works like stuffing the shard id into the key, but you start with a large number of buckets, say 1000, which you distribute evenly across. Then when it comes time to split the load onto a new shard, you can, say, move buckets 501-1000 to the new server and write your shard logic so that 1-500 goes to shard 1 and 501-1000 goes to shard 2. When a third server comes online you pick another range of buckets and adjust.
To my eye this gives you the ability to split into as many shards as you originally created buckets, spreading the load evenly both in terms of quantity and age, without having to change keys.
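To make that concrete, here's a rough sketch of the routing logic I have in mind (just an illustration; all the names are invented):

    import java.util.Arrays;

    public class BucketRouter {

        private static final int BUCKETS = 1000;

        // bucket -> shard id; on day one, every bucket lives on the single server.
        private final String[] bucketToShard = new String[BUCKETS];

        public BucketRouter() {
            Arrays.fill(bucketToShard, "shard-1");
        }

        /** Rebalance by reassigning a contiguous range of buckets to a new server. */
        public void moveBuckets(int fromBucket, int toBucket, String newShard) {
            for (int b = fromBucket; b <= toBucket; b++) {
                bucketToShard[b - 1] = newShard;
            }
        }

        /** Derived from the numeric id and baked into the key, so it never changes. */
        public int bucketFor(long documentId) {
            return (int) (documentId % BUCKETS) + 1;   // 1..1000
        }

        public String shardFor(long documentId) {
            return bucketToShard[bucketFor(documentId) - 1];
        }
    }

So splitting onto a second server is just moveBuckets(501, 1000, "shard-2"), and no existing key has to change.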
Thoughts?
It is possible, but really unnecessary. You can start using one instance, and then scale when necessary by setting up sharding later.
Also see:
http://ravendb.net/documentation/docs-sharding
http://ayende.com/blog/4830/ravendb-auto-sharding-bundle-design-early-thoughts
http://ravendb.net/documentation/replication/sharding
I think a good solution is to use virtual shards. You can start with one server and point all virtual shards to that single server. Use modulo on the incremental id to evenly distribute the rows across the virtual shards. With Amazon RDS you have the option to turn a slave into a master, so before you change the sharding configuration (pointing more virtual shards to the new server), you should promote a slave to master, then update your configuration file, and then delete from the new master all records whose modulo doesn't fall within the shard range you use for the new instance.
You also need to delete rows from the original server, but by then all new data whose IDs map, modulo the new virtual shard ranges, to the new server will be written there. So you don't actually need to move the data; you take advantage of the Amazon RDS server promotion feature.
You can then make a replica of the original server. You create a unique ID as: Shard ID + Table Type ID + Incremental number. So when you query the database, you know which shard to go to and fetch the data from.
I don't know how it's possible to do this with RavenDB, but it can work pretty well with Amazon RDS, because Amazon already provides you with replication and the server promotion feature.
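To illustrate the composite id, a small sketch; the bit layout here is my own invention, not anything RavenDB or RDS gives you:

    public final class CompositeId {

        // 12 bits shard id, 10 bits table type id, 41 bits incremental counter.
        public static long encode(int shardId, int tableTypeId, long increment) {
            return ((long) shardId << 51) | ((long) tableTypeId << 41) | increment;
        }

        public static int shardOf(long id)      { return (int) (id >>> 51); }
        public static int tableTypeOf(long id)  { return (int) ((id >>> 41) & 0x3FF); }
        public static long incrementOf(long id) { return id & ((1L << 41) - 1); }
    }

When a query comes in, shardOf(id) tells you which server to fetch from without any lookup table.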
I agree that there should be a solution that offers seamless scalability right from the start instead of telling the developer to sort the problems out when they occur. Furthermore, I've found that many NoSQL solutions that evenly distribute data across shards need to run within a low-latency cluster, so you have to take that into consideration. I've tried using Couchbase on two different EC2 machines (not in a dedicated Amazon cluster) and data rebalancing was very, very slow. That adds to the overall cost too.
I also want to add what Pinterest did to solve their scalability issues: they used 4096 virtual shards.
You also need to look into paging issues with many NoSQL databases. With this approach you can page data quite easily, but maybe not in the most efficient way, because you might need to query several databases. Another problem is changing the schema. Pinterest solved this by putting all the data in a JSON blob in MySQL. When you want to add a new column, you create a new table with the new column's data + key, and you can put an index on that column. If you need to query the data by, for example, email, you can create another table with the emails + IDs and put an index on the email column. Counters, I mean atomic counters, are another problem, so it's better to take those counters out of the JSON and put them in a column so you can increment the counter value.
There are great solutions out there, but at the end of the day you find out that they can be very expensive. I preferred spending the time building my own sharding solution and saving myself the headache later on. If you choose the other path, there are plenty of companies waiting for you to get into trouble and then asking for quite a lot of money to solve your problems, because at the moment you need them they know you will pay anything to make your project work again. That's from my own experience; that's why I am breaking my head building my own sharding solution using your approach, which will also be much cheaper.
Another option is to use middleware solutions for MySQL like ScaleBase or DBshards. You can continue working with MySQL, and when you need to scale they have a well-proven solution, and the costs might be much lower than the alternatives.
Another tip: when you create your config for shards, put a write_lock attribute that accepts true or false. When it is true, data won't be written to that shard, so when you fetch the list of shards for a specific table type (i.e. users), writes will go only to the other shards of that same type. This is also good for backups: you can show a friendly error to visitors when you want to lock all the shards while backing up all the data to get point-in-time snapshots of all of them. Although I think with Amazon RDS you can send a global request to snapshot all the databases and use point-in-time backup.
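A sketch of how that flag could be honored when picking a shard for a new row (the field names are invented):

    import java.util.List;
    import java.util.stream.Collectors;

    class ShardConfig {
        final String shardId;
        final boolean writeLock;   // true => no new writes routed here (e.g. during backup)

        ShardConfig(String shardId, boolean writeLock) {
            this.shardId = shardId;
            this.writeLock = writeLock;
        }
    }

    class ShardSelector {
        /** Only shards that are not write-locked are candidates for new rows. */
        static List<ShardConfig> writable(List<ShardConfig> shardsForTableType) {
            return shardsForTableType.stream()
                    .filter(s -> !s.writeLock)
                    .collect(Collectors.toList());
        }
    }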
The thing is that most companies won't spend time working on a DIY sharding solution; they will prefer paying for ScaleBase. DIY solutions come from individual developers who can't afford to pay for a scalable solution from the start but want to rest assured that, when they reach the point they need it, they have a solution. Just look at the prices out there and you can figure out that it will cost you A LOT. I will gladly share my code with you once I'm done. You are going down the best path in my opinion; it all depends on your application logic. I model my database to be simple: no joins, no complicated aggregation queries - this solves many of my problems. In the future you can use MapReduce to handle those big-data query needs.

multiple db connections vs. centralized/redundant db

I have a project to create a dashboard that will connect to existing systems as well as create new features based on combining data from the existing systems. For example, the dashboard will be able to generate "orders" containing data merged from "members" (MS Access DB), "employees" (MySQL DB) and "products" (flat file), and there will also be new attributes particular to "orders."
At first I thought it would be most efficient to have my application connect to each of the systems separately and perform cross-vendor joins between the different databases. But then I thought that creating a centralized/redundant db (built with scripts pushing and pulling data between the systems) might also be useful because it would empower some semi-technical staff to use products like OOBase, which can only make a single connection.
Are there any other advantages to creating a centralized/redundant DB like the one I'm talking about? Or are multiple direct connections the best approach?
Thanks in advance for any tips.
To give you a short answer: yes, you want central data storage.
You don't want to run complex reports on your live database. As your live database grows, you will want to do some housekeeping and clean it up, but keep the data for analysis.
You will also want the data to be aggregated so you can perform historical analysis.
For data which comes from different sources, some clean-up will be required. You will also need to know how to link your data together, and there are quite a lot of things like that you will have to be aware of to do the job properly.
You might consider reading on data warehousing (wikipedia) and business intelligence (wikipedia).
If you want to have 'new features' added to this system you could also look up orchestration (wikipedia). It will allow you to link your heterogeneous business processes together.
All of these are quite specialized and complex disciplines on their own so you might want to have a specialist to consult you.
Be very, very careful about copying lots of data around. If you do, here are some important guidelines:
Make sure that one system is defined as the master and no other system may tamper with the data.
Always copy data from the master to the slaves.
When you copy the data, use a checksum of some kind to make sure all data has been copied (see the sketch after this list). Make sure you can handle "yesterday, the copy failed".
If a slave must make a change, push the change to the master and then use the standard "update" path to merge it back to the slave. Avoid "save change on slave and update the master some time in the future".
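For the checksum guideline, one cheap way is to compare an aggregate on both ends after each copy. A sketch over JDBC, assuming both ends are reachable as MySQL; the table and column names, and the CRC32 aggregate, are illustrative and MySQL-specific:

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class CopyVerifier {

        /** Row count plus a CRC over a stable column: cheap, and catches most partial copies. */
        private static final String CHECKSUM_SQL =
                "SELECT COUNT(*) AS row_count, COALESCE(SUM(CRC32(id)), 0) AS crc FROM orders";

        public boolean copyLooksComplete(Connection master, Connection slave) throws SQLException {
            return checksum(master).equals(checksum(slave));
        }

        private String checksum(Connection conn) throws SQLException {
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(CHECKSUM_SQL)) {
                rs.next();
                return rs.getLong("row_count") + ":" + rs.getLong("crc");
            }
        }
    }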
