With the new version of Spring Data Neo4j I can't use Neo4jHelper.cleanDb(db);
So, what is the most effective way to completely clear an embedded Neo4j database in my application?
I have implemented my own util method for this purpose, but this method is slow:
public static void cleanDb(Neo4jTemplate template) {
    template.query("MATCH (n) OPTIONAL MATCH (n)-[r]-() DELETE n, r", null);
}
How do I properly clear/delete the database?
UPDATED
There is a similar question, How to reset / clear / delete neo4j database?, but I don't know how to programmatically shut down Embedded Neo4j, or how to start it again after deleting.
I use Spring Data Neo4j, and based on a user request I'd like to clear/delete the existing database and recreate it with new data. How do I start up a new embedded database after the suggested invocation of the shutdown method?
USE CASE:
In the working application I have configured an embedded database:
GraphDatabaseService graphDb = new GraphDatabaseFactory()
.newEmbeddedDatabaseBuilder(environment.getProperty(NEO4J_EMBEDDED_DATABASE_PATH_PROPERTY))
.setConfig(GraphDatabaseSettings.node_keys_indexable, "name,description")
.setConfig(GraphDatabaseSettings.node_auto_indexing, "true")
.newGraphDatabase();
Also, I pre-populate this database with 1,000,000 nodes. On a user request I need to clear this database and populate it with new data. How do I clear the existing database correctly and quickly?
Can I call the Neo4j database API to create new nodes after database.shutdown(), or do I need to initialize a new database before that?
See the other answer on the related question. Inside of Java, you can shut down an embedded database with the GraphDatabaseService#shutdown() method.
From there, there are a number of different ways you can delete the underlying directory; see this other answer.
So the general answer can still be the same:
Shut down the database using the Neo4j Java API
Delete the database contents from disk
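Putting those two steps together, a minimal sketch might look like this (assuming the 2.x-style embedded API used in the question; dbPath is a hypothetical variable holding the store directory, and Apache Commons IO does the recursive delete):

import java.io.File;
import org.apache.commons.io.FileUtils;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;

public static GraphDatabaseService resetDatabase(GraphDatabaseService graphDb, String dbPath) throws Exception {
    graphDb.shutdown();                          // 1. stop the embedded instance
    FileUtils.deleteDirectory(new File(dbPath)); // 2. wipe the store directory on disk
    return new GraphDatabaseFactory()            // 3. start a fresh, empty database
            .newEmbeddedDatabase(dbPath);
}

Note that anything still holding a reference to the old GraphDatabaseService (templates, repositories) must be pointed at the instance returned here.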
Related
I am trying to implement a solution using SDN that creates a dynamic Cypher query, where the label varies with the input type (n types), irrespective of the properties of the node.
I am hoping a solution similar to the one mentioned in this link would help me:
Is it possible to dynamically construct a neo4j cypher query using the GraphRepository pattern
I found the following information in the release notes.
Deprecation of Neo4jTemplate
It is highly recommended for users starting new SDN projects to use the OGM Session directly. Neo4jTemplate has been kept to give upgrading users a better experience.
The Neo4jTemplate has been slimmed-down significantly for SDN 4. It contains the exact same methods as Session. In fact Neo4jTemplate is just a very thin wrapper with an ability to support SDN Exception Translation. Many of the operations are no longer needed or can be expressed with a straightforward Cypher query.
If you do use Neo4jTemplate, then you should code against its Neo4jOperations interface instead of the template class.
The following table shows the Neo4jTemplate functions that have been retained for version 4 of Spring Data Neo4j. In some cases the method names have changed but the same functionality is offered under the new version.
To achieve the old template.fetch(entity) equivalent behaviour, you should call one of the load methods specifying the fetch depth as a parameter.
It’s also worth noting that exec(GraphCallback) and the create…() methods have been made obsolete by Cypher. Instead, you should now issue a Cypher query to the new execute method to create the nodes or relationships that you need.
Dynamic labels, properties and relationship types are not supported as of this version, server extensions should be considered instead.
From this link: https://docs.spring.io/spring-data/neo4j/docs/5.0.0.RELEASE/reference/html/
Could anyone help me achieve the equivalent solution in SDN 5.x? Thanks!
I took the advice to use the session directly in place of the Neo4jOperations mechanism.
@Autowired
SessionFactory sessionFactory;

public void doCustomQuery() {
    Session session = sessionFactory.openSession();
    // this query takes no parameters, so an empty map is enough
    Map<String, Object> params = Collections.emptyMap();
    Iterable<NodeEntity> nodes = session.query(NodeEntity.class, "MATCH (n) RETURN n", params);
}
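Since dynamic labels are not supported by the mapping annotations, one way to vary the label per request is to build the Cypher string before handing it to the Session. A minimal sketch (findByDynamicLabel, ALLOWED_LABELS, and the label values are hypothetical names):

import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import org.neo4j.ogm.session.Session;

// Labels cannot be passed as Cypher parameters, so the label is concatenated
// into the query string; validate it against a whitelist to avoid injection.
private static final Set<String> ALLOWED_LABELS =
        new HashSet<>(Arrays.asList("TypeA", "TypeB"));

public Iterable<NodeEntity> findByDynamicLabel(Session session, String label) {
    if (!ALLOWED_LABELS.contains(label)) {
        throw new IllegalArgumentException("Unknown label: " + label);
    }
    String cypher = "MATCH (n:`" + label + "`) RETURN n";
    return session.query(NodeEntity.class, cypher, Collections.emptyMap());
}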
In my Spring Data Neo4j 5 project I have the following Neo4j Java configuration:
@Bean
public org.neo4j.ogm.config.Configuration configuration() {
// #formatter:off
return new org.neo4j.ogm.config.Configuration.Builder()
.autoIndex("assert")
.credentials(username, password)
.uri(serverDatabaseUri)
.build();
// #formatter:on
}
Right now, as the data inside my Neo4j database grows, I'm experiencing a significant slowdown during application startup.
I think one possible reason for this issue is the following property:
autoIndex("assert")
How can I check this, and if I'm right, how can I improve the application startup time without losing the functionality provided by autoIndex("assert")?
It’s very likely that you are right: creating and validating indexes takes time proportional to the size of your data, so the more data you have, the longer it takes for the indexes to be created or validated each time the application starts up.
Index creation is a convenient feature of SDN. That said, adding or removing indexes is a fairly infrequent event, typically happening only when you add or remove a domain entity or start from an empty database. Another option is therefore to remove the @Index annotations, keep the index definitions in a Cypher script, and execute that script only when it is actually needed. This lets the application start as quickly as possible, with the trade-off of having to run the script manually when the schema changes, which most find to be a reasonable balance.
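Under that approach you would drop .autoIndex("assert") from the configuration shown above and apply the indexes yourself, for example with a one-off method like this (a minimal sketch; the label, property, and method names are hypothetical):

import java.util.Collections;
import org.neo4j.ogm.session.Session;
import org.neo4j.ogm.session.SessionFactory;

// One-off index setup, run manually (e.g. from an admin endpoint or a
// deployment step) instead of being re-validated on every startup.
public void createIndexes(SessionFactory sessionFactory) {
    Session session = sessionFactory.openSession();
    session.query("CREATE INDEX ON :User(name)", Collections.emptyMap());
    session.query("CREATE CONSTRAINT ON (u:User) ASSERT u.login IS UNIQUE", Collections.emptyMap());
}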
I am working on a project in which I am assigned to implement the database-first approach. Here is what I want to know: when we initiate the database-first approach we map it to an existing DB, but what if I have another DB with the same structure but different data? Can I use that DB by just changing the connection string, or will it have some impact?
It will work when you change the connection string. I recommend selecting 'Code First from database' when creating a new 'ADO.NET Entity Data Model' via Visual Studio's Add New Item.
I have a Grails application that has been running for a while. Now I want to change the GORM mapping, and I wonder whether there is a simple way to do so, i.e. one where I don't need to drop existing tables and modifying my application is enough.
To be specific, I used to have a HashSet field that is mapped to varbinary in the DB. There are some existing rows in this User table.
class User {
    // irrelevant attributes omitted
    HashSet<String> friends = new HashSet<>()

    static mapping = {
        friends sqlType: 'VARBINARY(10000)'
    }
}
Now I've changed the field friends to a HashMap<String,Integer>. Although I still map the field to varbinary, Grails throws an exception every time I save a User object:
java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.String
I first suspected that Grails had kept the old conversion rule transforming the HashSet to varbinary and had not updated it. So I tried changing the mapping from varbinary to blob and to text, but neither worked.
I'm wondering whether there is a way to keep this column as varbinary in the DB while letting Grails know that the attribute is now a HashMap, so that it generates new rules to convert it.
I'd appreciate your advice!
Edit: I'm using Grails 2.4.4.
There is one way I know of doing this. Log into the database server so you have access to the database in a terminal window, first on your development machine. Look at the relevant columns and see exactly which data types they use. Then, on your development machine, drop those columns and deploy the changed project; the new columns will be created automatically if dbCreate is set to 'update'. Again inspect the relevant columns and see whether there is any way of converting the old columns (ALTER TABLE ...) in your production database to the new definitions. You'll have to stop your production server, make the changes, deploy the new project, and restart it. If you can't simply alter the columns, you may have to create the new ones, move the data over, and delete the old ones, all with the application server stopped.
I have an MVC web application with code-first Entity Framework. We install this application on various computers as a local application. I created a migration to upgrade the database (in this case I added a new table), and after running the migration on upgrade I want to insert initial data into the database, so that users can add/edit/delete records but the table isn't empty the first time.
Is there a way to do this automatically on upgrade, without running a SQL script manually?
The migration class has an Up method; you can override it and insert/update records using SQL:
public override void Up() {
    AddColumn("dbo.Posts", "Abstract", c => c.String());
    // back-fill the new column for existing rows using raw SQL
    Sql("UPDATE dbo.Posts SET Abstract = LEFT(Content, 100) WHERE Abstract IS NULL");
}
(Source)
Yes there is. You essentially write a class that conditionally checks for and inserts values, and then link that class to your Entity Framework database initializer. It runs each time there is a migration to be performed, but I think you can change exactly when it runs (e.g. at application startup).
This link will give you the rough idea:
Entity Framework Inserting Initial Data On Rebuild
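In outline, the approach looks something like this (a sketch assuming EF 6 code-first; BlogContext, Post, and the seeded values are hypothetical names):

using System.Data.Entity;
using System.Data.Entity.Migrations;

internal sealed class Configuration : DbMigrationsConfiguration<BlogContext>
{
    // Seed runs after the database has been migrated to the latest version.
    protected override void Seed(BlogContext context)
    {
        // AddOrUpdate keys on Title, so re-running the seed is idempotent:
        // existing rows are updated rather than duplicated.
        context.Posts.AddOrUpdate(
            p => p.Title,
            new Post { Title = "Welcome", Content = "Initial content" });
    }
}

// Wire it up once at application startup so every install upgrades itself:
// Database.SetInitializer(
//     new MigrateDatabaseToLatestVersion<BlogContext, Configuration>());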
I have an exact code sample on my PC but I won't be on it until tomorrow. If this link doesn't quite do what you want, I can send you some code tomorrow which definitely will.