I've got a sample project which uses Spring Data Neo4j's advanced (AspectJ-based) mapping. It fails in a test with a NullPointerException because the entityState attribute is null at the time the entity is persisted.
I can also replicate this when the server itself is spun up and I execute:
curl -i -X POST -H "Content-Type:application/json" -d '{ "firstName" : "Test", "lastName" : "Person" }' http://localhost:8080/people
The project is at https://github.com/dhallam/spring-data-neo4j-demo and the Travis build with the logs is at https://travis-ci.org/dhallam/spring-data-neo4j-demo/builds/22538972.
I'm not sure whether I'm missing something or whether there is a bug that would require me to raise a JIRA issue.
Any ideas? Many thanks in advance.
Edit 1:
Versions: Java v1.7; SDNeo4j v3.1.0.M1; SDRest v2.1.0.M1; Neo4j v2.0.1; Jetty v9.0.5.v20130815; AspectJ v1.7.4
Tried adding @JsonIgnoreProperties to the Person (@NodeEntity) from Michael's comment below, and it's still failing. The code that fails is:
Person p = new Person();
// setters ...
p.persist();
The persist() method calls:
public <T extends NodeBacked> T NodeBacked.persist() {
return (T)this.entityState.persist();
}
but entityState is null.
Figured it out. The Neo4jConfig class previously extended the Neo4jConfiguration class. With the introduction of aspects, that superclass needed to be changed to Neo4jAspectConfiguration, which provides the additional neo4jRelationshipBacking and neo4jNodeBacking beans.
I've updated the github reference project at https://github.com/dhallam/spring-data-neo4j-demo and the build is passing.
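For reference, a minimal sketch of the corrected configuration (the base package and the embedded database here are illustrative, not the demo project's exact code):
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.neo4j.aspects.config.Neo4jAspectConfiguration;

@Configuration
public class Neo4jConfig extends Neo4jAspectConfiguration { // was: extends Neo4jConfiguration

    public Neo4jConfig() {
        // "demo" is a placeholder for the base package holding the @NodeEntity classes
        setBasePackage("demo");
    }

    @Bean(destroyMethod = "shutdown")
    public GraphDatabaseService graphDatabaseService() {
        // Embedded database purely for illustration
        return new GraphDatabaseFactory().newEmbeddedDatabase("target/graph.db");
    }
}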
Which versions do you use?
You can use @JsonIgnore for the entityState attribute.
See: http://forum.spring.io/forum/spring-projects/data/nosql/110063-unable-to-convert-nodeentity-object-to-json-with-jackson
@JsonIgnoreProperties({"entityState", "nodeId", "persistentState", "relationshipTo", "template"})
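Applied to the entity, that looks roughly like this (a sketch assuming Jackson 2 imports; for Jackson 1 use org.codehaus.jackson instead):
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import org.springframework.data.neo4j.annotation.NodeEntity;

// Hides the AspectJ-introduced infrastructure fields from Jackson serialization
@JsonIgnoreProperties({"entityState", "nodeId", "persistentState", "relationshipTo", "template"})
@NodeEntity
public class Person {
    String firstName;
    String lastName;
}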
I need a little help understanding why my many-to-many relationship is not functioning. I have spent the last half day going down rabbit holes: trying to change the php.ini file so I would not get an Xdebug error (you will see it below), trying to install vim in the Docker container so I could edit the php.ini, and digging into why migrations were not working...
I installed Laravel 8 using Docker. I am not sure, but I wonder if my problems are due to a message I get when I try to run anything with artisan tinker:
Xdebug: [Step Debug] Could not connect to debugging client. Tried: localhost:9003 (fallback through xdebug.client_host/xdebug.client_port) :-(
I am able to run most artisan commands, but it fails to make a migration. If I type sail php artisan make:model [tablename] -m to try to make a migration at the same time, the system successfully makes a model, but it does not make a migration. sail php artisan make:migration and sail php artisan migrate also return this Xdebug error along with the ERROR: 255 code.
Nonetheless (whatever the heck that means), I created the tables manually and used artisan make:model to make the models.
I have installed Laravel Breeze, so my User model looks like the following. I have cut some of the interior of the class out to save space.
Thank you in advance for your help.
By the way, you will notice a return type declaration on each of the relationship methods shown below. Those are there because PhpStorm warned me with "missing function's return type declaration", so I chose to add them. I tried this same code without the return type declarations and received the same non-results.
This is the User model:
<?php
namespace App\Models;
use Illuminate\Contracts\Auth\MustVerifyEmail;
use Illuminate\Database\Eloquent\SoftDeletes;
use Illuminate\Foundation\Auth\User as Authenticatable;
use Illuminate\Notifications\Notifiable;
class User extends Authenticatable implements MustVerifyEmail
{
use Notifiable;
use SoftDeletes;
/* I have deleted some methods, etc. here */
public function people(): \Illuminate\Database\Eloquent\Relations\BelongsToMany
{
return $this->belongsToMany(Person::class, 'user_person', 'user_id', 'person_id')->withTimestamps();
}
public function family(): \Illuminate\Database\Eloquent\Relations\HasMany
{
return $this->hasMany(UserPerson::class);
}
}
This is the Person model:
<?php
namespace App\Models;
use Illuminate\Database\Eloquent\Model;
class Person extends Model
{
/**
* The table associated with the model.
*
* @var string
*/
protected $table = 'people';
public function users(): \Illuminate\Database\Eloquent\Relations\BelongsToMany
{
return $this->belongsToMany(User::class, 'user_person', 'person_id', 'user_id')->withTimestamps();
}
}
This is the UserPerson model:
<?php
namespace App\Models;
use Illuminate\Database\Eloquent\Model;
class UserPerson extends Model
{
public function user(): \Illuminate\Database\Eloquent\Relations\BelongsTo
{
return $this->belongsTo(User::class);
}
}
This shows the errors in php artisan tinker:
robertbryandavis@Roberts-iMac ~/D/s/rec4life (main)> sail php artisan tinker
Xdebug: [Step Debug] Could not connect to debugging client. Tried: localhost:9003 (fallback through xdebug.client_host/xdebug.client_port) :-(
Psy Shell v0.10.6 (PHP 8.0.2 — cli) by Justin Hileman
>>> $user = App\Models\User::where('email','bdavis@xxxxx.com')->get();
=> Illuminate\Database\Eloquent\Collection {#4381
all: [
App\Models\User {#4370
id: 7655,
name: "bob davis",
firstname: "bob",
lastname: "davis",
fields: taken out here for brevity
deleted_at: null,
created_at: "2019-02-07 13:43:05",
updated_at: "2021-03-18 23:56:44",
},
],
}
>>> $user->people()->get();
BadMethodCallException with message 'Method Illuminate\Database\Eloquent\Collection::people does not exist.'
>>> $user->people;
Exception with message 'Property [people] does not exist on this collection instance.'
>>> $user->family();
BadMethodCallException with message 'Method Illuminate\Database\Eloquent\Collection::family does not exist.'
>>> $user->family()->get();
BadMethodCallException with message 'Method Illuminate\Database\Eloquent\Collection::family does not exist.'
>>>
I am not sure if this is your case, but I'll tell you how I solved this issue and got rid of that warning.
I solved it by pointing PHP's error_log to a valid location, adding this line to my php.ini (in /etc/php/7.4/cli/php.ini):
error_log = /var/www/log/php_error.log
Change that location to any valid directory you want and try again. The warning should disappear from your output and go to the log file instead.
I'm running into a problem with my Elasticsearch document index creation failing on startup with "java.lang.IllegalArgumentException: can't add a _parent field that points to an already existing type, that isn't already a parent". I'm not sure if this is due to a version upgrade or because I am starting with a brand-new Elasticsearch server install.
Contrived example that shows what I'm seeing:
// UserSearchResult.java
@Document(indexName = "hr_index", type = "user")
public class UserSearchResult implements Serializable {
...
@Field(type = FieldType.Keyword)
@Parent(type = "department")
private String departmentCode;
...
}
// DepartmentSearchResult.java
@Document(indexName = "hr_index", type = "department")
public class DepartmentSearchResult implements Serializable {
...
}
When I start my application I get that exception. If I check the Elasticsearch server, I see the "hr_index" index and the "department" mapping, but the "user" mapping is not created.
If I understand the error, "department" is created first, and when Spring then tries to create "user" with "department" as its parent, Elasticsearch rejects it, since "department" was not marked as a parent when it was created.
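(For illustration: at the Elasticsearch level the error does not occur when both mappings are supplied together in a single index-creation call; a rough curl sketch, with the mapping bodies mostly elided:)
curl -XPUT "localhost:9200/hr_index" -H "Content-Type: application/json" -d '
{
  "mappings": {
    "department": {},
    "user": {
      "_parent": { "type": "department" }
    }
  }
}'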
Is there some way (via annotation?) to denote DepartmentSearchResult as being a parent when it's created somehow?
Or, is it possible to give Spring Data Elasticsearch a hint about the order in which it should create the indices/mappings? I have seen some other posts (Spring Data Elasticsearch Parent/Child Document Repositories / Test execution error), but disabling auto-creation and then creating the mappings myself (either as part of my Spring codebase or external to the app) seems kind of "un-Spring-y" to me.
Or, is there some other approach I should be taking?
(This is a working Spring application that had been using Spring 4.2.1 and Spring Data Release Train Gosling, which I'm attempting to upgrade to Spring 5.0.0 and Spring Data Release Train Kay. As part of this I am starting with a fresh Elasticsearch install, so I'm not sure whether this error comes from the upgrade or just because the install is clean.)
In Spring Data Elasticsearch, support for the parent-child relationship is currently rather poorly developed.
The problem is most likely due to the fact that you are using a clean installation of Elasticsearch. Before the upgrade the problem did not arise, because the mappings had already been created. As a solution, you can use the ElasticsearchTemplate, which is part of Spring Data Elasticsearch, together with an ApplicationListener. It takes just three steps.
Drop the index in Elasticsearch (this only needs to be done once):
curl -XDELETE [ES_IP]:9200/hr_index
Tell Spring Data Elasticsearch not to create indices and mappings automatically:
// UserSearchResult.java
@Document(indexName = "hr_index", type = "user", createIndex = false)
public class UserSearchResult implements Serializable {
...
@Field(type = FieldType.Keyword)
@Parent(type = "department")
private String departmentCode;
...
}
// DepartmentSearchResult.java
@Document(indexName = "hr_index", type = "department", createIndex = false)
public class DepartmentSearchResult implements Serializable {
...
}
Add an ApplicationListener:
@Component
public class ApplicationStartupListener implements ApplicationListener<ContextRefreshedEvent> {
@Autowired
private ElasticsearchTemplate elasticsearchTemplate;
// Create the mappings only if the parent (department) mapping doesn't exist yet;
// the child mapping (the one declaring _parent) must be put first
@Override
public void onApplicationEvent(ContextRefreshedEvent event) {
elasticsearchTemplate.createIndex(DepartmentSearchResult.class);
try {
elasticsearchTemplate.getMapping(DepartmentSearchResult.class);
} catch (ElasticsearchException e) {
elasticsearchTemplate.putMapping(UserSearchResult.class);
elasticsearchTemplate.putMapping(DepartmentSearchResult.class);
}
}
}
P.S. Among other things, it is worth noting that with the release of ES 5.6, the process of removing mapping types began. This inevitably entails the removal of the parent-child relationship. In one of the next releases of Spring Data Elasticsearch we will provide the ability to work with joins; support for parent-child relationships is unlikely to be improved.
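(For reference, the single-type replacement models the relation as a join field in the mapping; a rough curl sketch against ES 6.x, with hypothetical type and field names:)
curl -XPUT "localhost:9200/hr_index" -H "Content-Type: application/json" -d '
{
  "mappings": {
    "doc": {
      "properties": {
        "hr_relation": {
          "type": "join",
          "relations": { "department": "user" }
        }
      }
    }
  }
}'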
After upgrading to Spring Boot 1.3 (via Grails 3.1), the JSON output is rendered incorrectly. I believe it is because of the new auto-configured WebSocket JSON converter.
For example, with previous versions of Spring Boot (via Grails 3.0), using the following code:
@MessageMapping("/chat")
@SendTo("/sub/chat")
protected String chatMessage() {
def builder = new groovy.json.JsonBuilder()
builder {
type("message")
text("foobar")
}
builder.toString()
}
This would produce:
{"type": "message", "text": "foobar"}
With Spring Boot 1.3 (via Grails 3.1), that web socket produces the following:
"{\"type\":\"message\",\"text\":\"foobar\"}"
That is the JSON rendered as an escaped string literal, not the JSON object I had before. How can I get rid of this new behavior and have the JSON rendered as it was before? Please let me know if you have any suggestions.
I tried overriding the new configureMessageConverters method, but it did not have any effect.
Looks like you are right. The referenced commit shows questionable auto-configuration, IMHO.
Especially because, in the past, the converter ordering was intentionally changed so that StringMessageConverter takes precedence over MappingJackson2MessageConverter: https://github.com/spring-projects/spring-framework/commit/670c216d3838807fef46cd28cc82165f9abaeb45
For now, you can either disable that auto-configuration:
@EnableAutoConfiguration(exclude = [WebSocketMessagingAutoConfiguration])
class Application extends GrailsAutoConfiguration { ... }
Or you can add yet another StringMessageConverter to the top of the configured converters (perhaps because you do want the Boot auto-configuration behavior, since it uses the Jackson ObjectMapper bean instead of creating a new one):
@Configuration
@EnableWebSocketMessageBroker
class WebSocketConfig extends AbstractWebSocketMessageBrokerConfigurer {
@Override
boolean configureMessageConverters(List<MessageConverter> messageConverters) {
messageConverters.add 0, new StringMessageConverter()
return true
}
...
}
Hope that helps.
I don't know how to do it in Grails, but in Java you now have to return the object itself instead of a String. I believe the old behavior was actually incorrect: it returned the String as the raw message body, so there was no way to return a String that had JSON inside it and keep it a String. So create an object with the structure you want, return it, and you should be fine. I went through the same issue when upgrading from 1.2.x to 1.3.x. I am not exactly sure which change caused this, but I think in the long run it is the correct thing to do.
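A minimal Java sketch of that approach (the controller class name is made up; the mapping and destination match the question):
import java.util.LinkedHashMap;
import java.util.Map;
import org.springframework.messaging.handler.annotation.MessageMapping;
import org.springframework.messaging.handler.annotation.SendTo;
import org.springframework.stereotype.Controller;

@Controller
public class ChatController {

    @MessageMapping("/chat")
    @SendTo("/sub/chat")
    public Map<String, String> chatMessage() {
        // Return the object itself; the Jackson message converter serializes it
        // to {"type":"message","text":"foobar"} once, instead of the String
        // being JSON-encoded a second time.
        Map<String, String> message = new LinkedHashMap<>();
        message.put("type", "message");
        message.put("text", "foobar");
        return message;
    }
}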
I am building an application using spring-boot (1.1.8.RELEASE) and spring-data-neo4j (3.2.0.RELEASE) in order to connect to a standalone Neo4j server via the REST API. Using spring-test, I have implemented a unit test that creates a node and retrieves it. It works well, but the new node remains in the database after the test completes, whereas I expect the transaction to be rolled back and the node deleted.
However, in the console I can see the following statement:
"Rolled back transaction after test execution for test context...
I don't understand why, based on the console, the rollback seems to have occurred, yet the transaction has been committed to the database.
It would be really appreciated if somebody could help me to figure out where the issue is coming from.
Find below my Spring configuration:
@Configuration
@ComponentScan
@EnableTransactionManagement
@EnableAutoConfiguration
public class AppConfig extends Neo4jConfiguration {
public AppConfig() {
setBasePackage("demo");
}
@Bean
public GraphDatabaseService graphDatabaseService(Environment environment) {
return new SpringRestGraphDatabase("http://localhost:7474/db/data");
}
}
Find below my test class:
@SuppressWarnings("deprecation")
@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = AppConfig.class)
@Transactional
public class AppTests {
@Autowired
private Neo4jTemplate template;
@Test
public void templateTest() {
Person person = new Person();
person.setName("Benoit");
person.setBorn(1986);
Person newPerson = template.save(person);
Person retrievedPerson = template.findOne(newPerson.getNodeId(), Person.class);
Assert.assertEquals("Benoit", retrievedPerson.getName());
}
}
I tried to add the following annotation in my unit test class but it did not change anything:
@TransactionConfiguration(transactionManager="transactionManager", defaultRollback=true)
I also tried to add the following in my unit test based on what I have seen in other posts
implements ApplicationContextAware
Thank you for your help
Regards
The behavior you are experiencing is to be expected: there is nothing wrong with transaction support in the Spring TestContext Framework (TCF) in this regard.
The TCF manages transactions via the configured transactionManager.
So when you switch to an embedded database and configure the transaction manager against that embedded database, everything works perfectly. The issue is that the transaction support in Neo4j-REST does not tie in with Spring's transaction management facilities. As Michael Hunger stated in the other thread you referenced, an upcoming version of the Neo4j-REST API should address this issue.
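If you need rollback semantics in your tests in the meantime, one option is to run them against an embedded database so that the configured transaction manager actually controls the transactions. A rough sketch (TestGraphDatabaseFactory comes from Neo4j's test utilities; the details are illustrative, not your exact setup):
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.test.TestGraphDatabaseFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.neo4j.config.Neo4jConfiguration;

@Configuration
public class TestAppConfig extends Neo4jConfiguration {

    public TestAppConfig() {
        setBasePackage("demo");
    }

    @Bean(destroyMethod = "shutdown")
    public GraphDatabaseService graphDatabaseService() {
        // In-memory, impermanent database: transactions are managed locally,
        // so @Transactional tests roll back as expected.
        return new TestGraphDatabaseFactory().newImpermanentDatabase();
    }
}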
Note that annotating your test class with @TransactionConfiguration has zero effect, since you are merely overriding the defaults with the defaults, which achieves nothing. Furthermore, implementing ApplicationContextAware in a test class has no effect on transaction management.
Regards,
Sam (spring-test component lead)
Updated post:
In a Controller if I do this:
def obj = new Test(name:"lol")
obj.save(flush:true)
obj.name = "lol2"
//a singleton service with nothing to do with obj
testService.dostuff()
/*
"obj" gets persisted to the database right here
even before the next println
*/
println "done"
Can anyone please explain why this happens with Grails 1.3.7 and not with Grails 2? What is the reason?
I know I could use discard() and basically restructure the code, but I am interested in what is happening behind the scenes and why. Thanks!
Old post:
I have a test Grails application. I have one domain class test.Test:
package test
class Test {
String name
static constraints = {}
}
Also I have a service test.TestService:
package test
class TestService {
static scope = "singleton"
static transactional = true
def dostuff() {
println "test service was called"
}
}
And one controller test.TestController:
package test
class TestController {
def testService
def index = {
def obj = new Test(name:"lol")
obj.save(flush:true)
obj.name = "lol2"
testService.dostuff()
println "done"
}
}
So what I do:
Create a domain object
Change one of its properties
Call a singleton service method
What I would expect:
Nothing gets persisted to the db unless I call obj.save()
What happens instead:
Right after the service call Grails will do an update query to the database.
I have tried the following configuration from this url: http://grails.1312388.n4.nabble.com/Turn-off-autosave-in-gorm-td1378113.html
hibernate.flush.mode="manual"
But it didn't help.
I have tested it with Grails 1.3.7, Grails 2.0.3 does not have this issue.
Could anyone please give me a bit more information on what exactly is going on? It seems the current session has to be flushed because of the service call, and because the object is dirty it gets automatically persisted to the database after the service call. What I don't understand is why even the manual flush mode configuration in Hibernate does not help.
Thanks in advance!
I'm not sure what in the thread you linked to made you think it would work. Everyone there said it wouldn't work, and the ticket that was created has been closed as "won't fix". The solution here is to use discard(), as the thread stated.