preface note: I'm just starting to learn Grails, so I'm sure there are many other problems and room for optimization.
I've got two domains, a parent (Collection) and child (Event), in a one-to-many mapping. I'm trying to code an integration test for the deletion of children. Prior to the code in question, I've successfully created a parent and three children. The point where I'm having problems is getting a single child in preparation to delete it. The first line of my sample code is only there because of my rudimentary attempt to troubleshoot.
// lines 95-100 of my EventIntegrationTests.groovy file
// delete a single event
assertEquals("2nd Event", event2.title) // passes
def foundEvent = Event.get(event2.id) // no apparent problems
assertEquals("2nd Event", foundEvent.title) // FAILS (line #98)
foundEvent.delete()
assertFalse Event.exists(foundEvent.id)
The error message I'm getting is:
Cannot get property 'title' on null object
java.lang.NullPointerException: Cannot get property 'title' on null object
at edu.learninggrails.EventIntegrationTests.testEventsDelete(EventIntegrationTests.groovy:98)
What should my next troubleshooting steps be? (Since the first assertEquals passes, event2 is clearly not null, so at this point I have no idea how to troubleshoot the failure of the second assertEquals.)
This is not evident from the code: did you persist event2 by calling save()? get() tries to retrieve the instance from persistent storage (the in-memory database, for example), and if the event was never saved, it returns null.
If you did save it, did the save go through OK? Calling event.save() returns null if something went wrong while saving the item (such as a validation error), so the result is falsy. Lastly, you might try calling event.save(flush: true) in case the Hibernate session doesn't handle this case as you might expect (I'm not entirely sure about this one, but it can't hurt to try).
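For example, a quick way to make a failed save explicit in the code that creates event2 (a sketch only; the flush and the property names are assumptions, not your actual code):
// save() returns null when validation fails, so assert on the result to fail fast
assert event2.save(flush: true)
// or let GORM throw a ValidationException instead of failing silently:
event2.save(flush: true, failOnError: true)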
Try printing or inspecting event2.id before line 97 and check whether it actually has an id; if it does, check whether Event.get() on line 97 actually returns an Event object.
I don't think you saved the parent and its children successfully. After you save, your test should make sure that every persisted object has a non-null id.
What you are seeing is that you created event2 with a title but didn't save it. The first assertion passes because the in-memory object exists; the get() then returns null because the save failed (or never happened).
In general, for DAO integration tests I do the following (see the sketch after this list):
Setup -- create all the objects I'll use in the test.
Save -- assert that all ids on saved objects are NOT null.
Clear the Hibernate session -- this is important because otherwise objects can still be sitting in the session from the previous operations. In the real world the code under test will usually start with a find, i.e. an empty session. If it doesn't, adjust this rule so that the session state at the start of the actual test matches the session state of the code in the wild.
Load the objects on which you want to operate and do what you need to do.
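A rough sketch of that flow applied to your test (a sketch only: declaring def sessionFactory as a property of the integration test class gets Grails to inject it, and the property names are guesses based on your question):
// assumes the parent and its three events were created earlier in this test method
assert parent.save(flush: true, failOnError: true)   // Save: fail loudly and flush to the DB
assert parent.id != null && event2.id != null        // every persisted object should have an id

sessionFactory.currentSession.clear()                // Clear: the real test starts with an empty session

def foundEvent = Event.get(event2.id)                // Load: fetch fresh from the database
assertEquals "2nd Event", foundEvent.title
foundEvent.delete(flush: true)
assertFalse Event.exists(event2.id)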
private void doSomething(SomeProcessModel process) {
    CustomerModel customer = process.getCustomerModel();
    customer.getFoos().stream()
            .filter(foo -> foo.getCountryCode().equals(process.getCountryCode()))
            .findFirst()
            .ifPresent(foo -> {
                if (foo.getSomeNumber() == null) {
                    foo.setSomeNumber("1234567");
                    modelService.save(foo);
                }
            });
}
As seen in the code snippet above, I have a CustomerModel that has an attribute Foos; it's a one-to-many relationship. I do some filtering and, in the end, I want to update the value of the someNumber attribute of Foo if it is null. While debugging I've confirmed that the someNumber value is updated in memory, but it doesn't get saved at all: I checked in the HMC and the change isn't there. I have also verified that the interceptor doesn't have any condition that would throw an error, and nothing shows up in the log either.
Is it a legal approach to call modelService.save() inside ifPresent()? What could the issue be here?
I have found the root cause now, having faced the same issue again.
Context to my original question
To give more context to my original question: the #doSomething method resides in a Hybris business process action class, and while debugging I ended the action prematurely (by stopping the debugger) once the #doSomething method had run.
Root cause
The problem happened when I was debugging the action class. I assumed that ModelService#save would persist the current state of the business process as soon as it had run. However, the Hybris OOTB business process rolls back if there is any error (and I believe the error was caused by me stopping the debugger halfway through).
SAP Commerce Documentation:
All actions are performed inside their own transaction. This means that changes made inside the action bean run method are rolled back in case of an error.
Solution
Let the action run to completion!
Good to know
Based on the SAP documentation and this blog post, there are times when we need to bypass a business process rollback even though an exception is thrown, and there are ways to achieve that. More info can be found in the SAP Commerce documentation and the blog post mentioned above.
However, in some circumstances it may be required to let a business exception reach the outside but also commit the transaction and deal with the exception outside. Therefore, it is possible to make the task engine not roll back the changes made during a task which failed.
You have to be careful with collections on models: they are immutable, so you have to set a whole new list. Also, you called save() only on a particular model, which changes its Jalo reference; that's why your list is not updated. Streaming the list and collecting the results at the end creates a new list, which is why you can stream over the list taken directly from the model.
private void doSomething(SomeProcessModel process) {
    CustomerModel customer = process.getCustomerModel();
    List<FooModel> foos = doSomethingOnFoos(customer.getFoos(), process);
    customer.setFoos(foos);      // collections on models are immutable, so set a whole new list
    modelService.saveAll(foos);  // save the changed children...
    modelService.save(customer); // ...and the customer that now references the new list
}

private List<FooModel> doSomethingOnFoos(Collection<FooModel> fooList, SomeProcessModel process) {
    return fooList.stream()
            .filter(Objects::nonNull)
            // compare the value you know exists with something that might be null,
            // as equals() can handle that, but not the other way around
            .filter(foo -> process.getCountryCode().equals(foo.getCountryCode()))
            .filter(foo -> Objects.isNull(foo.getSomeNumber()))
            .map(foo -> {
                foo.setSomeNumber("1234567"); // the setter returns void, so return foo explicitly
                return foo;
            })
            .collect(Collectors.toList());
}
In my Grails App, I have bootstrapped an object of a domain class as:
def user1 = new Users(userID: 1, name: "John Doe")
user1.save()
In my dashboard controller I retrieve the object and modify its name property:
Users userObj = Users.get((Long) 1)
println(userObj as JSON); // This gives me: {"class":"com.prabin.domains.Users","id":1,"name":"John Doe"}
userObj.name = "anonymous"
Now I fetch the object with the same id 1 again, into a new variable:
Users otherUserObj = Users.get((Long) 1) // Line 2: is this retrieving from the database or from the GORM session?
print(otherUserObj as JSON)// This gives me: {"class":"com.prabin.domains.Users","id":1,"name":"anonymous"}
But the value of the object in the database is not changed. And even when I retrieve the Users object with the same id 1 in another controller, it gives me the initial object rather than the modified one:
Users userObjAtDifferentController = Users.get(1);
print(userObjAtDifferentController) //This gives me: {"class":"com.prabin.domains.Users","id":1,"name":"John Doe"}
My question is: if the value is not changed in the database, why do I get the modified object at Line 2, even though I retrieved it with a GORM query (which I guessed would read from the database)?
I have also tried calling save() after the modification, but the result is the same.
userObj.save() // doesn't show the changes in the database
My guess is that the object is not being saved to the database because some constraint(s) are invalid. You can determine whether this is the case by replacing your calls to save() with save(failOnError: true). If my guess is correct, an exception will be thrown if saving to the database fails.
When you call the save() method on a domain object, it may not persist in the database immediately. In order to persist the changed value to the database, you would need to do the following.
userObj.save(flush: true)
By using flush, you are telling GORM to persist immediately in the database.
In some cases when validation fails, the data will still not persist in the database. The save() method will fail silently. To catch validation errors as well as save to the database immediately, you would want to do the following
userObj.save(flush:true, failOnError:true)
If validation errors exist, then GORM will throw a ValidationException (http://docs.grails.org/latest/api/grails/validation/ValidationException.html)
You need to consider two things:
If you call save(), the change is only retained in the Hibernate session; until you flush, it does not persist to the database. So it's better to call save(flush: true, failOnError: true), wrap it in a try/catch block, and print any exception.
Another important thing is that your controller method needs the @Transactional annotation; without it your changes will not persist to the database, and you may encounter a "Connection is read-only. Queries leading to data modification are not allowed." exception.
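Putting both points together, a minimal sketch of what the controller action could look like (the action name and the rendering are illustrative, not taken from your code; in newer Grails versions the annotation lives in grails.gorm.transactions.Transactional instead):
import grails.converters.JSON
import grails.transaction.Transactional

class DashboardController {

    @Transactional
    def updateUser() {
        Users userObj = Users.get(1L)
        userObj.name = "anonymous"
        try {
            // flush pushes the UPDATE now; failOnError turns silent validation failures into exceptions
            userObj.save(flush: true, failOnError: true)
        } catch (e) {
            log.error "Could not save user ${userObj.id}", e
        }
        render userObj as JSON
    }
}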
Hope this helps, let me know if your issue is fixed. Happy coding. :)
Confused here about domain objects in intermediate states.
So here's a class:
class Foo {
    Double price
    String name
    Date creationDate = new Date()

    static constraints = {
        price min: 50D
        creationDate nullable: false
    }
}
And this test case:
@TestFor(Foo)
class FooSpec extends Specification {

    void "test creation and constraints"() {
        when: "I create a Foo within constraints"
        Foo f = new Foo(price: 60D, name: "FooBar")

        then: "It validates and saves"
        f.validate()
        f.save()

        when: "I make that foo invalid"
        f.price = 49D

        then: "It no longer validates"
        !f.validate()

        when: "I discard the changes since the last save"
        f.discard()

        then: "it validates again"
        f.validate() //TEST FAILS HERE
    }
}
So this test fails on the last validate, because f.discard() doesn't seem to revert f to its "saved" state. Notes in the Grails docs appear to indicate this is intended (although, to me at least, very confusing) behavior: all discard() does is mark the object as not to be persisted (which it wouldn't be anyway if it fails the constraints, so in a case like this I guess it does nothing, right?). I thought refresh() might get me the saved state, or even Foo.get(id), but none of that works: Foo.get() returns the corrupted version, and f.refresh() throws a NullPointerException, apparently because creationDate is null and it shouldn't be.
So all that is confusing, and it seems there's no way to get back to the saved state of the object if it hasn't been flushed. Is that really true? Which sort of raises the question: in what respect is the object "saved"?
That would seem to mean I'd want to make sure I'm definitely reading from the DB, not the cache, if I want to be sure I'm getting a valid persisted state of the object and not some possibly corrupted intermediate state.
A related question -- I'm not sure of the scope of the local cache either -- is it restricted to the session? So if I start another session and retrieve that id, will I get null (because it was never saved) rather than the corrupt version with the invalid price?
Would appreciate someone explaining the underlying design principle here.
Thanks.
Hibernate doesn't have a process for reverting state, although it seems like it could, since it keeps a clean copy of the data for dirty checking. Calling the GORM discard() method goes through a few layers along the way, but the real work is done in Hibernate's SessionImpl.evict(Object) method, and that just removes state from the session caches and disassociates the instance from the session; it does nothing to the persistent properties of the instance. If you want to undo changes, you should evict the instance with discard() and then reload it, e.g. f = Foo.get(f.id).
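Applied to the test above, that would look roughly like this (a sketch only; it assumes the earlier save actually reached the underlying store so there is a clean state to reload):
when: "I make that foo invalid, discard the change and reload"
f.price = 49D
f.discard()        // detach the modified instance from the session
f = Foo.get(f.id)  // fetch a fresh instance with the persisted state

then: "the reloaded instance validates again"
f.validate()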
We have a breeze client solution in which we show parent entities with lists of their children. We do hard deletes on some child entities. Now when the user is the one doing the deletes, there is no problem, but when someone else does, there seems to be no way to invalidate the children already loaded in cache. We do a new query with the parent and expanding to children, but breeze attaches all the other children it has already heard of, even if the database did not return them.
My question: shouldn't breeze realize we are loading through expand and thus completely remove all children from cache before loading back the results from the db? How else can we accomplish this if that is not the case?
Thank you
Yup, that's a really good point.
Deletion is simply a horrible complication to every data management effort. This is true no matter whether you use Breeze or not. It just causes heartache up and down the line. Which is why I recommend soft deletes instead of hard deletes.
But you don't care what I think ... so I will continue.
Let me be straight about this. There is no easy way for you to implement a cache cleanup scheme properly. I'm going to describe how we might do it (with some details neglected I'm sure) and you'll see why it is difficult and, in perverse cases, fruitless.
Of course the most efficient, brute force approach is to blow away the cache before querying. You might as well not have caching if you do that but I thought I'd mention it.
The "Detached" entity problem
Before I continue, remember the technique I just mentioned and indeed all possible solutions are useless if your UI (or anything else) is holding references to the entities that you want to remove.
Oh, you'll remove them from cache alright. But whatever is holding references to them now will continue to have a reference to an entity object which is in a "Detached" state - a ghost. Making sure that doesn't happen is your responsibility; Breeze can't know and couldn't do anything about it if it did know.
Second attempt
A second, less blunt approach (suggested by Jay) is to
apply the query to the cache first
iterate over the results and for each one
detach every child entity along the "expand" paths.
detach that top level entity
Now when the query succeeds, you have a clear road for it to fill the cache.
Here is a simple example of the code as it relates to a query of TodoLists and their TodoItems:
var query = breeze.EntityQuery.from('TodoLists').expand('TodoItems');
var inCache = manager.executeQueryLocally(query);

inCache.slice().forEach(function(e) {
    inCache = inCache.concat(e.TodoItems);
});

inCache.slice().forEach(function(e) {
    manager.detachEntity(e);
});
There are at least four problems with this approach:
Every queried entity is a ghost. If your UI is displaying any of the queried entities, it will be displaying ghosts. This is true even when the entity was not touched on the server at all (99% of the time). Too bad. You have to repaint the entire page.
You may be able to do that. But in many respects this technique is almost as impractical as the first. It means that every view is in a potentially invalid state after any query takes place anywhere.
Detaching an entity has side-effects. All other entities that depend on the one you detached are instantly (a) changed and (b) orphaned. There is no easy recovery from this, as explained in the "orphans" section below.
This technique wipes out all pending changes among the entities that you are querying. We'll see how to deal with that shortly.
If the query fails for some reason (lost connection?), you've got nothing to show. Unless you remember what you removed ... in which case you could put those entities back in cache if the query fails.
Why mention a technique that may have limited practical value? Because it is a step along the way to approach #3, which could actually work.
Attempt #3 - this might actually work
The approach I'm about to describe is often referred to as "Mark and Sweep".
Run the query locally and calculate the inCache list of entities as just described. This time, do not remove those entities from cache. We WILL remove the entities that remain in this list after the query succeeds ... but not just yet.
If the query's MergeOption is "PreserveChanges" (which it is by default), remove every entity from the inCache list (not from the manager's cache!) that has pending changes. We do this because such entities must stay in cache no matter what the state of the entity on the server. That's what "PreserveChanges" means.
We could have done this in our second approach to avoid removing entities with unsaved changes.
Subscribe to the EntityManager.entityChanged event. In your handler, remove the "entity that changed" from the inCache list because the fact that this entity was returned by the query and merged into the cache tells you it still exists on the server. Here is some code for that:
var handlerId = manager.entityChanged.subscribe(trackQueryResults);

function trackQueryResults(changeArgs) {
    var action = changeArgs.entityAction;
    if (action === breeze.EntityAction.AttachOnQuery ||
        action === breeze.EntityAction.MergeOnQuery) {
        var ix = inCache.indexOf(changeArgs.entity);
        if (ix > -1) {
            inCache.splice(ix, 1);
        }
    }
}
If the query fails, forget all of this
If the query succeeds
unsubscribe: manager.entityChanged.unsubscribe(handlerId);
subscribe with orphan detection handler
var handlerId = manager.entityChanged.subscribe(orphanDetector);

function orphanDetector(changeArgs) {
    var action = changeArgs.entityAction;
    if (action === breeze.EntityAction.PropertyChange) {
        var orphan = changeArgs.entity;
        // do something about this orphan
    }
}
detach every entity that remains in the inCache list.
inCache.slice().forEach(function(e) {
    manager.detachEntity(e);
});
unsubscribe the orphan detection handler
Orphan Detector?
Detaching an entity can have side-effects. Suppose we have Products and every product has a Color. Some other user hates "red". She deletes some of the red products and changes the rest to "blue". Then she deletes the "red" Color.
You know nothing about this and innocently re-query the Colors. The "red" color is gone and your cleanup process detaches it from cache. Instantly every Product in cache is modified. Breeze doesn't know what the new Color should be so it sets the FK, Product.colorId, to zero for every formerly "red" product.
There is no Color with id=0 so all of these products are in an invalid state (violating referential integrity constraint). They have no Color parent. They are orphans.
Two questions: how do you know this happened to you, and what do you do?
Detection
Breeze updates the affected products when you detach the "red" color.
You could listen for a PropertyChanged event raised during the detach process. That's what I did in my code sample. In theory (and I think "in fact"), the only thing that could trigger the PropertyChanged event during the detach process is the "orphan" side-effect.
What do you do?
leave the orphan in an invalid, modified state?
revert to the equally invalid former colorId for the deleted "red" color?
refresh the orphan to get its new color state (or discover that it was deleted)?
There is no good answer. You have your pick of evils with the first two options. I'd probably go with the second as it seems least disruptive. This would leave the products in "Unchanged" state, pointing to a non-existent Color.
It's not much worse than when you query for the latest products and one of them refers to a new Color ("banana") that you don't have in cache.
The "refresh" option seems technically the best, but it is unwieldy: it could easily cascade into a long chain of asynchronous queries that take a long time to finish.
The perfect solution escapes our grasp.
What about the ghosts?
Oh right ... your UI could still be displaying the (fewer) entities that you detached because you believe they were deleted on the server. You've got to remove these "ghosts" from the UI.
I'm sure you can figure out how to remove them. But you have to learn what they are first.
You could iterate over every entity that you are displaying and see if it is in a "Detached" state. YUCK!
Better I think if the cleanup mechanism published a (custom?) event with the list of entities you detached during cleanup ... and that list is inCache. Your subscriber(s) then know which entities have to be removed from the display ... and can respond appropriately.
Whew! I'm sure I've forgotten something. But now you understand the dimensions of the problem.
What about server notification?
That has real possibilities. If you can arrange for the server to notify the client when any entity has been deleted, that information can be shared across your UI and you can take steps to remove the deadwood.
It's a valid point, but for now we don't ever remove entities from the local cache as a result of a query. Still, this is a reasonable request, so please add it to the Breeze User Voice: https://breezejs.uservoice.com/forums/173093-breeze-feature-suggestions
In the meantime, you can always create a method that removes the related entities from the cache before the query executes and have the query (with expand) add them back.
In my app, I have a code like this:
// 1
Foo.get(123).example = "my example" // as expected, this doesn't change the value in the DB

// 2
Foo.get(123).bars.each { bar ->
    bar.value *= -1 // this IS changing the "value" column in the database!! WHY?
}
Note: Foo and Bar are domain classes mapped to tables in my DB.
Why is GORM saving to the database in the second case?
I don't call save() anywhere in the code.
Tks
SOLVED:
I need to use read() so the instance is loaded read-only.
(Calling discard() on the instance also works.)
Doc: http://grails.org/doc/latest/guide/5.%20Object%20Relational%20Mapping%20%28GORM%29.html#5.1.1%20Basic%20CRUD
(In the first case, I guess my test was wrong.)
Both should save, so the first example appears to be a bug. Grails requests run in the context of an OpenSessionInView interceptor. This opens a Hibernate session at the beginning of each request and binds it to the thread, and flushes and closes it at the end of the request. This helps a lot with lazy loading, but can have unexpected consequences like you're seeing.
Although you're not explicitly saving, the logic in the Hibernate flush involves finding all attached instances that have been modified and pushing the updates to the database. This is a performance optimization: pushing each change individually would slow things down, so everything that can wait is queued up until a flush.
So the only time you need to explicitly save is for new instances, and when you want to check validation errors.
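To illustrate the two work-arounds from the "SOLVED" note above, here is a minimal sketch using the question's Foo/Bar classes (the property names come from the question; the rest is an assumption):
// Option 1 (the asker's solution): load via read(), which marks the instance read-only,
// so dirty checking will not push changes to it at the end of the request
def readOnlyFoo = Foo.read(123)
readOnlyFoo.example = "my example" // stays in memory only

// Option 2: load normally, but detach each modified instance before the request ends,
// so the end-of-request flush skips it
Foo.get(123).bars.each { bar ->
    bar.value *= -1
    bar.discard() // evicts this Bar from the Hibernate session; the change is not persisted
}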