With Grails there are several ways to do the same thing.
Finding all instances of a domain class:
Book.findAll()
Book.getAll()
Book.list()
Retrieving an instance of the domain class by id:
Book.findById(1)
Book.get(1)
When do you use each one? Are there significant differences in performance?
getAll is an enhanced version of get that takes multiple ids and returns a List of instances. The list size will be the same as the number of provided ids; any misses will result in a null at that slot. See http://grails.org/doc/latest/ref/Domain%20Classes/getAll.html
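A quick sketch of that behavior (hypothetical ids; assume Books 1 and 3 exist but 2 doesn't):

def books = Book.getAll(1, 2, 3)
assert books.size() == 3   // one slot per requested id
assert books[1] == null    // no Book with id 2, so that slot is null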
findAll lets you use HQL queries and supports pagination, but HQL queries aren't limited to instances of the calling class, so I use executeQuery instead. See http://grails.org/doc/latest/ref/Domain%20Classes/findAll.html
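For example (assuming Book has a title property; the third map argument provides pagination):

// findAll returns Book instances for an HQL query
def books = Book.findAll("from Book b where b.title like :t", [t: 'Grails%'], [max: 10, offset: 0])
// executeQuery can return arbitrary projections, not just Book instances
def titles = Book.executeQuery("select b.title from Book b order by b.title")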
list finds all instances and supports pagination. See http://grails.org/doc/latest/ref/Domain%20Classes/list.html
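For example:

def books = Book.list(max: 10, offset: 20, sort: 'title', order: 'desc')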
get retrieves a single instance by id. It uses the instance cache, so multiple calls within the same Hibernate session will result in at most one database call, and possibly none if the instance is in the 2nd-level cache and you've enabled it.
findById is a dynamic finder, like findByName, findByFoo, etc. As such it does not use the instance cache, but it can be cached if you have query caching enabled (typically not a good idea). get should be preferred since its caching is a lot smarter: cached query results (even for a single instance like this) are pessimistically cleared more often than you would expect, whereas the instance cache doesn't need to be so pessimistic.
The one use case I would have for findById is as a security-related check, combined with another property. For example instead of retrieving a CreditCard instance using CreditCard.get(cardId), I'd find the currently logged-in user and use CreditCard.findByIdAndUser(cardId, user). This assumes that CreditCard has a User user property. That way both properties have to match, and this would block a hacker from accessing the card instance since the card id might match, but the user wouldn't.
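A minimal sketch of that check in a controller action (assuming the Spring Security plugin's springSecurityService; the names are illustrative):

def show() {
    def user = springSecurityService.currentUser
    def card = CreditCard.findByIdAndUser(params.long('id'), user)
    if (!card) {
        // same response whether the card doesn't exist or belongs to someone else
        response.sendError(404)
        return
    }
    [card: card]
}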
Another difference between Domain.findById(id) and Domain.get(id) is that if you're using a Hibernate filter, you need to use Domain.findById(id); Domain.get(id) bypasses the filter.
AFAIK, these are all identical
Book.findAll()
Book.getAll()
Book.list()
These will return the same results
Book.findById(1)
Book.get(1)
but get(id) will use the cache (if enabled), so it should be preferred over findById(id)
I've got a domain called Planning that has a hasMany relationship to another domain called Employee. I'm trying to do a findAll of these plannings where the planning contains a particular employee, and I can't get it to work.
I'm trying to do it like so; my log statements do print the contains check as true:
plannings = plannings.findAll { planning ->
    if (employee) {
        log.info("find plannings with employee ${employee} ${planning.employees.contains(employee)}")
        planning.employees.contains(employee)
    }
}
I'm not doing this as a Hibernate query because that broke the application in another weird way. This code is executed in a for-each loop, and for whatever reason that causes some weird behavior with Hibernate.
The closure passed to findAll must return a value that Groovy can coerce to a boolean - see http://docs.groovy-lang.org/latest/html/groovy-jdk/java/util/Collection.html#findAll(groovy.lang.Closure). With your if and no else branch, the closure returns null whenever employee is falsy, so nothing matches in that case.
This should work (not tested):
plannings = plannings.findAll{planning-> planning.employees?.contains(employee)}
BTW: I wouldn't assign the filtered list back to the original plannings variable. Extract a new, expressive variable like planningsOfEmployee or something similar.
Without more relevant details about your problem (what's the weird behavior? log traces? Hibernate mappings? etc.) all we can do is speculate; and if I have to, I would say that most likely:
The employee object you are using for comparison is a detached one.
The employee object does not meaningfully override equals and hashCode.
You are using this detached employee to do comparisons against persisted employees (via planning.employees.contains(employee)) found inside planning.
Under these circumstances the comparisons will never be true, even when the two objects represent the same employee. If this is your case, you must either:
Use a persisted employee object to do the comparisons.
Or implement semantically meaningful equals and hashCode methods for Employee, as in the sketch below.
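A minimal sketch of the second option, assuming Employee has a stable business key (employeeNumber here is purely illustrative):

import groovy.transform.EqualsAndHashCode

@EqualsAndHashCode(includes = 'employeeNumber')
class Employee {
    String employeeNumber
    String name
}

// contains() now matches a detached instance whose business key agrees
assert [new Employee(employeeNumber: 'E1')].contains(new Employee(employeeNumber: 'E1'))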
Hope this helps.
I'm using (and loving) Siesta to communicate with a REST web service in my Swift App. I have implemented a series of ResponseTransformers to map the API call responses to model classes so that the Siesta Resources are automatically parsed into object instances. This all works great.
I now want to implement a Siesta PersistentCache object to support an offline mode by having Siesta cache these objects to disk (rather than in memory) by storing them in Realm. I am not sure how to do this, because the documentation says (about the EntityCache.writeEntity function):
This method can — and should — examine the entity’s content and/or headers and ignore it if it is not encodable. While they can apply type-based rules, however, cache implementations should not apply resource-based or url-based rules; use Resource.configure(...) to select which resources are cached and by whom.
In an attempt to conform to this guideline, I have created a specific PersistentCache object for each Resource type based on URL Pattern matching during Service Configuration:
class _GFSFAPI: Service {
    private init() {
        configure("/Challenge/*") { $0.config.persistentCache = SiestaRealmChallengeCache() }
    }
}
However, since the EntityCache protocol methods only include a reference to the Entity (which exposes raw content but not the typed objects), I don't see how I can call the realm write methods during the call to EntityCache.writeEntity or how to pull the objects out of Realm during EntityCache.readEntity.
Any suggestions about how to approach this would be greatly appreciated.
Excellent question. Having a separate EntityCache implementation for each model could certainly work, though it seems like it might be burdensome to create all those little glue classes.
Models in the Cache
Your writeEntity() is called with whatever comes out at the end of all your response transformers. If your transformers are configured to spit out model classes, then writeEntity() sees models. If those models are Realm-friendly models, well, I don’t see any reason why you shouldn’t be able to just call realm.add(entity.content). (If that’s giving you problems, let me know with an update to the question.)
Conversely, when reading from the cache, what readEntity() returns does not go through the transformer pipeline again, so it should return exactly the same thing your transformers would have produced, i.e. models.
Cache Lookup Keys
The particular paragraph you quote from the docs is ill-written and perhaps a bit misleading. When it says you “should not apply resource-based or url-based rules,” it’s really just trying to dissuade you from parsing the forKey: parameter — which is secretly just a URL, but should remain opaque to cache implementations. However, any information you can gather from the given entity is fair game, including the type of entity.content.
The one wrinkle under the current API — and it is a serious wrinkle — is that you need to keep a mapping from Siesta’s key (which you should treat as opaque) to Realm objects of different types. You might do this by:
keeping a Realm model dedicated to maintaining a polymorphic mapping from Siesta cache keys to Realm objects of various types,
adding a siestaKey attribute and doing some kind of union query across models, or
keeping a (cache key) → (model type, model ID) mapping outside of Realm.
I’d probably pursue the options in that order, but I believe you are in relatively unexplored (though perfectly reasonable) territory here using Realm as the backing for EntityCache. Once you’ve sussed out the options, I’d encourage you to file a GitHub issue for any suggested API improvements.
I have a variable in the create method of a controller. Is there any way to reuse that variable, with the same value, in the update method? How can I pass it, or how can I maintain its lifetime across multiple requests?
Example variable:
@m = Issue.where( :project_id => @project.id ).where( :issue => "xyz" )
As I understand it, your requirement is to re-use data that was accessed during one call to your API (for creation of an API entity), during a separate call (an update). The data is fetched from the database in the first case.
Just fetch the data again, using the same query.
The database is the only data source easily accessible in both events, that will reliably hold an up-to-date value.
As this is for a RESTful API, there should be no other state information - everything should be in either the current request or the database.
If you want, you can cache data for performance, but Ruby variables are not a reliable or efficient way to do that (because there will be several Ruby processes running independently on the web server, and you don't get to manage them from the controller code) - instead you might want to consider something like memcached if the query is slow and its results are needed in many API events. However, you should normally avoid caching data except where you have a real performance issue - because you will probably need to handle cache invalidation, too.
I'm having trouble understanding how one would access the sub-entities of an aggregate root. From answers to my previous question, I now understand that I need to identify the aggregate roots of my model and then only set up repositories that handle these root objects.
So say I have an Order object that contains Items. Items must exist within an Order, so the Order is the aggregate root. But what if I want to include as part of my site an OrderItem details page? The URL to this page may be something like /Order/ItemDetails/1234, where 1234 is the ID of the OrderItem. Yet this would require that I retrieve an Item directly by ID, and because it is not an aggregate root I should not have an OrderItemRepository that can retrieve an OrderItem by ID.
Since I want to work with OrderItems independently of an Order, does that imply that OrderItem is not actually part of the Order aggregate but another aggregate root?
I don't know your business rules, of course, but I can't think of a case where you would have an OrderItem that doesn't have an Order. Not saying you wouldn't want to "work with one" by itself, but it still has to have an order, imo, and the order is sort of in charge of the relationship; e.g. you would represent all this by adding or deleting items from an order.
In situations like this, I usually will still require access to the items through the order. It's pretty easy to set up; in URLs I would just do /order/123/item/456. Or, if item ordering is stored / important (and it is normally stored at least indirectly via the order of entry), you could do /order/123/item/1 to retrieve the first item on the order.
In the controller, then, I just retrieve the order from the OrderRepository and then access the appropriate item from there.
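In Groovy-flavored pseudocode (orderRepository and the property names are illustrative, not a specific framework API), that looks roughly like:

class OrderController {
    def orderRepository   // hypothetical aggregate-root repository

    def itemDetails() {
        def order = orderRepository.findById(params.long('orderId'))
        def item = order?.items?.find { it.id == params.long('itemId') }
        if (!item) {
            response.sendError(404)
            return
        }
        [order: order, item: item]
    }
}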
All that said, I do agree with Arnis that you don't always have to follow this pattern. It's a case-by-case thing, and you should evaluate the tradeoffs before doing it.
In your case, I would retrieve the OrderItem directly by the URL /OrderItem/1234.
I personally don't try to abstract persistence (I don't use the repository pattern). Also, I don't follow the repository-per-aggregate-root principle. But I do isolate the domain model from persistence.
The main reason for that is that it's near-impossible to abstract persistence mechanisms completely. It's a leaky abstraction (e.g. try specifying eager/lazy loading for the ORM that lives underneath without polluting the repository API).
Another reason: it does not matter that much how you report data. The reporting part is boring and relatively unimportant. The real value of an application is what it can do: automation of processes. So it's much more important how your application behaves, how it manages to stay consistent, how objects interact, etc.
When thinking about this problem, it's good to remember the Law of Demeter. The point is that it should be applied only if we explicitly want to hide internals. In your case, we don't want to hide order items.
So, exploiting the fact that entity ids are globally unique (as opposed to unique only in the Order context), retrieving order items directly is just a shortcut, and there is nothing wrong with it.
Interestingly enough, this can be pushed further: even behavior encapsulation can and should be loosened up too.
E.g. it makes more sense to have orderItem.EditComments("asdf") than order.EditOrderItemComments(order.OrderItems[0], "asdf"), as the sketch below shows.
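A Groovy-flavored sketch of the difference (all names are illustrative):

class OrderItem {
    String comments

    void editComments(String text) {
        // invariants that concern only this item live here
        this.comments = text
    }
}

class Order {
    List<OrderItem> orderItems = []
}

def order = new Order(orderItems: [new OrderItem()])
// the behavior lives on the entity itself:
order.orderItems[0].editComments("asdf")
// rather than a pass-through like order.editOrderItemComments(item, "asdf")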
I'm wondering what the most efficient way is to update a single field on a domain class instance in the database. Let's say the domain class has 20+ fields:
def itemId = 1
def item = Item.get(itemId)
item.itemUsage += 1
Item.executeUpdate("update Item set itemUsage = :itemUsage where id = :itemId", [itemUsage: item.itemUsage, itemId: item.id])
vs
def item = Item.get(itemId)
item.itemUsage += 1
item.save(flush: true)
executeUpdate is more efficient if the size and number of the un-updated fields are large (though this is subjective). It's how I often delete instances too, running 'delete from Foo where id=123', since it seems wasteful to me to load the instance fully just to call delete() on it.
If you have large strings in your domain class and use the get() and save() approach then you serialize all of that data from the database to the web server twice unnecessarily when all you need to change is one field.
The effect on the 2nd-level cache needs to be considered if you're using it (and if you edit instances a lot you probably shouldn't). executeUpdate will flush all instances previously loaded with get(), but updating with get + save flushes just that one instance. This gets worse if you're clustered, since after executeUpdate you'd clear all of the various cluster node caches, vs flushing the one instance on all nodes.
Your best bet is to benchmark both approaches. If you're not overloading the database then you may be prematurely optimizing and using the standard approach might be best to keep things simple while you solve other problems.
If you use get/save, you'll get the maximum advantage of the Hibernate cache. executeUpdate might force more selects and updates.
The way executeUpdate interacts with the Hibernate cache makes a difference here. The cache gets invalidated on executeUpdate, so the next access of that Item after the executeUpdate would have to go to the database (and possibly more; I think Hibernate might invalidate all Items in the cache).
Your best bet is to turn on debug logging for 'org.hibernate' in your Config.groovy and examine the SQL calls.
I think they are equal; they both issue two SQL calls.
More efficient would be just a single update statement:
Item.executeUpdate("update Item set itemUsage = itemUsage + 1 where id = :itemId", [itemId: itemId])
You can use the dynamicUpdate mapping attribute in your Item class:
http://grails.org/doc/latest/ref/Database%20Mapping/dynamicUpdate.html
With this option enabled, your second way of updating a single field with GORM will be as efficient as the first one, because Hibernate will include only the changed columns in the UPDATE it generates.
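For example (a sketch; the mapping block is the relevant part):

class Item {
    Integer itemUsage
    // ... the other 20+ fields ...

    static mapping = {
        dynamicUpdate true
    }
}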