What's the difference between these two controller actions:
@Transactional
def save(SomeDomain someDomain) {
    someDomain.someProperty = firstService.createAndSaveSomething(params) //transactional
    someDomain.anotherProperty = secondService.createAndSaveSomething(params) //transactional
    someDomain.save(flush: true)
}
and
def save(SomeDomain someDomain) {
    combinedService.createAndSave(someDomain, params) //transactional, encapsulating first and second service calls
}
My purpose is to roll back the whole save() action if a transaction fails, but I'm not sure which one I should use.
You can use both approaches.
Your listing #1 will roll back the controller transaction when firstService or secondService throws an exception.
In listing #2 (I expect the createAndSave method of combinedService to be annotated with @Transactional), the transaction will be rolled back if createAndSave throws an exception. The big plus of this approach is that the service method is theoretically reusable in other controllers.
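For illustration, listing #2's service could look something like the following sketch (the names are taken from the question; this is an assumption about its shape, not your actual code):

```groovy
import grails.transaction.Transactional

class CombinedService {

    def firstService
    def secondService

    // Any uncaught exception thrown here (or by either nested service call)
    // rolls back everything done inside createAndSave.
    @Transactional
    def createAndSave(SomeDomain someDomain, Map params) {
        someDomain.someProperty = firstService.createAndSaveSomething(params)
        someDomain.anotherProperty = secondService.createAndSaveSomething(params)
        someDomain.save(flush: true)
    }
}
```

The controller then shrinks to a single call, and the whole unit of work succeeds or fails together.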
One of the key points about @Transactional is that there are two separate concepts to consider, each with its own scope and life cycle:
the persistence context
the database transaction
The @Transactional annotation itself defines the scope of a single database transaction. The database transaction happens inside the scope of a persistence context. Your code:
@Transactional
def save(SomeDomain someDomain) {
    someDomain.someProperty = firstService.createAndSaveSomething(params) //transactional
    someDomain.anotherProperty = secondService.createAndSaveSomething(params) //transactional
    someDomain.save(flush: true)
}
In JPA, the persistence context is the EntityManager, implemented internally using a Hibernate Session (when using Hibernate as the persistence provider). Your code:
def save(SomeDomain someDomain) {
    combinedService.createAndSave(someDomain, params) //transactional, encapsulating first and second service calls
}
Note: The persistence context is just a synchronizer object that tracks the state of a limited set of Java objects and makes sure that changes on those objects are eventually persisted back into the database.
Conclusion: The declarative transaction management mechanism (@Transactional) is very powerful, but it can easily be misused or wrongly configured.
Understanding how it works internally is helpful when troubleshooting situations when the mechanism is not at all working or is working in an unexpected way.
I have the following test (which is probably more of a functional test than integration but...):
@Integration(applicationClass = Application)
@Rollback
class ConventionControllerIntegrationSpec extends Specification {

    RestBuilder rest = new RestBuilder()
    String url

    def setup() {
        url = "http://localhost:${serverPort}/api/admin/organizations/${Organization.first().id}/conventions"
    }

    def cleanup() {
    }

    void "test update convention"() {
        given:
        Convention convention = Convention.first()

        when:
        RestResponse response = rest.put("${url}/${convention.id}") {
            contentType "application/json"
            json {
                name = "New Name"
            }
        }

        then:
        response.status == HttpStatus.OK.value()
        Convention.findByName("New Name").id == convention.id
        Convention.findByName("New Name").name == "New Name"
    }
}
The data is being loaded via BootStrap (which admittedly might be the issue), but the problem is in the then block: it finds the Convention by the new name and the id matches, but the test on the name field fails because it still has the old name.
While reading the documentation on testing, I think the problem lies in which session the data gets created in. Since @Rollback has a session that is separate from BootStrap, the data isn't really gelling. For example, if I load the data via the test's given block, then that data doesn't exist when my controller is called by the RestBuilder.
It is entirely possible that I shouldn't be doing this kind of test this way, so suggestions are appreciated.
This is definitely a functional test - you're making HTTP requests against your server, not making method calls and then making assertions about the effects of those calls.
You can't get automatic rollbacks with functional tests because the calls are made in one thread and they're handled in another, whether the test runs in the same JVM as the server or not. The code in BootStrap runs once before all of the tests run and gets committed (either because you made the changes in a transaction or via autocommit), and then the 'client' test code runs in its own new Hibernate session and in a transaction that the testing infrastructure starts (and will roll back at the end of the test method), and the server-side code runs in its own Hibernate session (because of OSIV) and depending on whether your controller(s) and service(s) are transactional or not, may run in a different transaction, or may just autocommit.
One thing that is probably not a factor here but should always be considered with Hibernate persistence tests is session caching. The Hibernate session is the first-level cache, and a dynamic finder like findByName will probably trigger a flush, which you want, but should be explicit about in tests. But it won't clear any cached elements and you run the risk of false positives with code like this because you might not actually be loading a new instance - Hibernate might be returning a cached instance. It definitely will when calling get(). I always add a flushAndClear() method to integration and functional base classes and I'd put a call to that after the put call in the when block to be sure everything is flushed from Hibernate to the database (not committed, just flushed) and clear the session to force real reloading. Just pick a random domain class and use withSession, e.g.
protected void flushAndClear() {
    Foo.withSession { session ->
        session.flush()
        session.clear()
    }
}
Since the put happens in one thread/session/tx and the finders run in their own, this shouldn't have an effect but should be the pattern used in general.
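Applied to the test in the question, the call would go right after the put in the when block, e.g. (a sketch):

```groovy
when:
RestResponse response = rest.put("${url}/${convention.id}") {
    contentType "application/json"
    json {
        name = "New Name"
    }
}
// Flush pending changes to the DB (not committed, just flushed) and clear
// the session so the finders in the then block really reload from the database.
flushAndClear()
```

That way the findByName calls in the then block can't be satisfied from the first-level cache.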
In the Grails framework I saw the command object pattern, but its use is not very clear to me. In addition, most of the examples given by the Grails documentation are about domain classes, not command objects (maybe to simplify the code examples).
1 - Is a command object something used between the view and controller layers that must stay there?
2 - Or is it good practice to pass a command object to the service layer?
To illustrate point 2:
class MyController {
    def updateUserPassword(UserPasswordCommand cmd) {
        ...
        myService.updatePassword(cmd)
        ...
    }
}
If point 2 is bad practice, then how do you pass submitted data to the service layer? Via a domain class?
EDIT: Point 2 seems OK.
If I use a command object and not a domain class, what should I do in this case:
def signup(UserCreateCommand cmd) {
    if (!cmd.hasErrors()) {
        def userInstance = userService.signup(cmd)
    }
    if (cmd.hasErrors()) {
        /* Stay on the form in order to display errors */
        render(view: "/app/authentication/_signupForm", model: [userCreateCommand: cmd])
        return
    }
    ...
}
What happens if, when the user service transaction ends, an exception is raised by the database (because the flushed data does not respect schema constraints)?
The problem, from my point of view, is that there are two queries:
First - when cmd.hasErrors() is called, there is a persistence call for the unique constraint on email, for example.
Second - when the service transaction ends, there is a flush to the DB (which results in one SQL insert in my case), which may raise an exception on the email column's unique constraint.
Testing cmd.hasErrors() doesn't prevent the case where the DB raises a unique constraint violation exception, or am I wrong?
That's the best way to pass request params to the service layer. I have seen people passing params to services, which is really a bad practice. Our controllers should be dumb; a maximum of 5-8 LOC per controller method is a guideline at my company.
Command objects give you a lot of power out of the box, like validation methods, etc.
Constraints like unique, which need to be validated against the database, cannot be applied to a command object. In this case you can use a custom validator: http://grails.github.io/grails-doc/2.5.1/ref/Constraints/validator.html.
You can also use the importFrom constraint to copy all the constraints from the User domain class to the command object: http://grails.github.io/grails-doc/2.5.1/guide/validation.html.
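A hypothetical UserCreateCommand combining both suggestions might look like this (the class name, property names, and message key are assumptions):

```groovy
@Validateable
class UserCreateCommand {

    String email
    String password

    static constraints = {
        // copy the matching constraints declared on the User domain class
        importFrom User
        // database-backed checks (like unique) go in a custom validator
        email validator: { value, cmd ->
            if (User.countByEmail(value) > 0) {
                return 'userCreateCommand.email.unique'
            }
        }
    }
}
```

Note that even this validator only narrows the race window; the database constraint is still the final arbiter, so the service should be prepared to catch a constraint violation at flush time.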
I'm using Grails 2.5.1, and I have a controller calling a service method which occasionally results in a StaleObjectStateException. The code in the service method has a try catch around the obj.save() call which just ignores the exception. However, whenever one of these conflicts occurs there's still an error printed in the log, and an error is returned to the client.
My GameController code:
def finish(String gameId) {
    def model = [:]
    Game game = gameService.findById(gameId)
    // some other work
    // this line is where the exception points to - NOT a line in GameService:
    model.game = GameSummaryView.fromGame(gameService.scoreGame(game))
    withFormat {
        json {
            render(model as JSON)
        }
    }
}
My GameService code:
Game scoreGame(Game game) {
    game.rounds.each { Round round ->
        // some other work
        try {
            scoreRound(round)
            if (round.save()) {
                updated = true
            }
        } catch (StaleObjectStateException ignore) {
            // ignore and retry
        }
    }
}
The stack-trace says the exception generates from my GameController.finish method, it doesn't point to any code within my GameService.scoreGame method. This implies to me that Grails checks for staleness when a transaction is started, NOT when an object save/update is attempted?
I've come across this exception many times, and generally I fix it by not traversing the Object graph.
For example, in this case, I'd remove the game.rounds reference and replace it with:
def rounds = Round.findAllByGameId(game.id)
rounds.each {
    // ....
}
But that would mean that staleness isn't checked when the transaction is created; it isn't always practical, and in my opinion it kind of defeats the purpose of Grails lazy collections. If I wanted to manage all the associations myself, I would.
I've read the documentation regarding Pessimistic and Optimistic Locking, but my code follows the examples there.
I'd like to understand more about how/when Grails (GORM) checks for staleness and where to handle it?
You don't show or discuss any transaction configuration, but that's probably what's causing the confusion. Based on what you're seeing, I'm guessing that you have #Transactional annotations in your controller. I say that because if that's the case, a transaction starts there, and (assuming your service is transactional) the service method joins the current transaction.
In the service you call save() but you don't flush the session. That's better for performance, especially if there were another part of the workflow where you make other changes - you wouldn't want to push two or more sets of updates to each object when you can push all the changes at once. Since you don't flush, and since the transaction doesn't commit at the end of the method as it would if the controller hadn't started the transaction, the updates are only pushed when the controller method finishes and the transaction commits.
You'd be better off moving all of your transactional (and business) logic to the service and remove every trace of transactions from your controllers. Avoid "fixing" this by eagerly flushing unless you're willing to take the performance hit.
As for the staleness check - it's fairly simple. When Hibernate generates the SQL to make the changes, it's of the form UPDATE tablename SET col1=?, col2=?, ..., colN=? where id=? and version=?. The id will obviously match, but if the version has incremented, then the version part of the where clause won't match and the JDBC update count will be 0, not 1, and this is interpreted to mean that someone else made a change between your reading and updating the data.
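Schematically, for a domain class mapped with a version column, the generated update and the staleness check look something like this (illustrative SQL; the table and column names are placeholders, not the exact statement Hibernate emits):

```sql
-- the object was loaded with version = 3
UPDATE round
SET score = ?, version = 4        -- Hibernate increments the version itself
WHERE id = ? AND version = 3;     -- matches only if nobody else updated the row

-- JDBC update count 1: success
-- JDBC update count 0: another session already bumped the version,
--                      so Hibernate throws StaleObjectStateException
```

This is why the exception surfaces wherever the flush actually happens (here, at transaction commit in the controller), not at the save() call itself.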
Have a SharePoint "remote web" application that will be managing data for multiple tenant databases (and thus, ultimately, multiple tenant database connections). In essence, each operation will deal with 2 databases.
The first is our tenancy database, where we store information that is specific for each tenant. This can be the SharePoint OAuth Client ID and secret, as well as information about how to connect to the tenant's specific database, which is the second database. This means that connecting to the first database will be required before we can connect to the second database.
I believe I know how to do this using Simple Injector for HTTP requests. I could register the first connection type (whether that be an IDbConnection wrapper using ADO.NET or a TenancyDbContext from entity framework) with per web request lifetime.
I could then register an abstract factory to resolve the connections to the tenant-specific databases. This factory would depend on the first database type, as well as the Simple Injector Container. Queries & commands that need to access the tenant database will depend on this abstract factory and use it to obtain the connection to the tenant database by passing an argument to a factory method.
My question mainly has to do with how to handle this in the context of an operation that may or may not have a non-null HttpContext.Current. When a SharePoint app is installed, we are sometimes running a WCF .svc service to perform certain operations. When SharePoint invokes this, sometimes HttpContext is null. I need a solution that will work in both cases, for both database connections, and that will make sure the connections are disposed when they are no longer needed.
I have some older example code that uses the LifetimeScope, but I see now that there is an Execution Context Scoping package available for Simple Injector on NuGet. I am wondering if I should use that to create hybrid scoping for these 2 database connections (with / without HTTP context), and if so, how is it different from lifetime scoping using Container.GetCurrentLifetimeScope and Container.BeginLifetimeScope?
Update
I read up on the execution scope lifestyle, and ended up with the following 3-way hybrid:
var hybridDataAccessLifestyle = Lifestyle.CreateHybrid( // create a hybrid lifestyle
    lifestyleSelector: () => HttpContext.Current != null, // when the object is needed by a web request
    trueLifestyle: new WebRequestLifestyle(), // create one instance for all code invoked by the web request
    falseLifestyle: Lifestyle.CreateHybrid( // otherwise, create another hybrid lifestyle
        lifestyleSelector: () => OperationContext.Current != null, // when the object is needed by a WCF op
        trueLifestyle: new WcfOperationLifestyle(), // create one instance for all code invoked by the op
        falseLifestyle: new ExecutionContextScopeLifestyle()) // in all other cases, create per execution scope
);
However my question really has to do with how to create a dependency which will get its connection string sometime after the root is already composed. Here is some pseudo code I came up with that implements an idea I have for how to implement this:
public class DatabaseConnectionContainerImpl : IDatabaseConnectionContainer, IDisposable
{
    private readonly AllTenantsDbContext _allTenantsDbContext;
    private TenantSpecificDbContext _tenantSpecificDbContext;
    private Uri _tenantUri = null;

    public DatabaseConnectionContainerImpl(AllTenantsDbContext allTenantsDbContext)
    {
        _allTenantsDbContext = allTenantsDbContext;
    }

    public TenantSpecificDbContext GetInstance(Uri tenantUri)
    {
        if (tenantUri == null) throw new ArgumentNullException("tenantUri");
        if (_tenantUri != null && _tenantUri.Authority != tenantUri.Authority)
            throw new InvalidOperationException(
                "You can only connect to one tenant database within this scope.");
        if (_tenantSpecificDbContext == null) {
            var tenancy = _allTenantsDbContext.Set<Tenancy>()
                .SingleOrDefault(x => x.Authority == tenantUri.Authority);
            if (tenancy == null)
                throw new InvalidOperationException(string.Format(
                    "Tenant with URI Authority {0} does not exist.", tenantUri.Authority));
            _tenantSpecificDbContext = new TenantSpecificDbContext(tenancy.ConnectionString);
            _tenantUri = tenantUri;
        }
        return _tenantSpecificDbContext;
    }

    void IDisposable.Dispose()
    {
        if (_tenantSpecificDbContext != null) _tenantSpecificDbContext.Dispose();
    }
}
The bottom line is that there is a runtime Uri variable that will be used to determine what the connection string will be to the TenantSpecificDbContext instance. This Uri variable is passed into all WCF operations and HTTP web requests. Since this variable is not known until runtime after the root is composed, I don't think there is any way to inject it into the constructor.
Any better ideas than the one above, or will the one above be problematic?
Since you want to run operations in two different contexts (one with the availability of a web request, and one without) within the same AppDomain, you need to use a hybrid lifestyle. Hybrid lifestyles switch automatically from one lifestyle to the other. The example given in the Simple Injector documentation is the following:
ScopedLifestyle hybridLifestyle = Lifestyle.CreateHybrid(
    lifestyleSelector: () => container.GetCurrentLifetimeScope() != null,
    trueLifestyle: new LifetimeScopeLifestyle(),
    falseLifestyle: new WebRequestLifestyle());

// The created lifestyle can be reused for many registrations.
container.Register<IUserRepository, SqlUserRepository>(hybridLifestyle);
container.Register<ICustomerRepository, SqlCustomerRepository>(hybridLifestyle);
Using this custom hybrid lifestyle, instances are stored for the duration of an active lifetime scope, but we fall back to caching instances per web request, in case there is no active lifetime scope. In case there is both no active lifetime scope and no web request, an exception will be thrown.
With Simple Injector, a scope for a web request will implicitly be created for you under the covers. For the lifetime scope however this is not possible. This means that you have to begin such scope yourself explicitly (as shown here). This will be trivial for you since you use command handlers.
Now your question is about the difference between the lifetime scope and the execution context scope. The difference between the two is that a lifetime scope is thread-specific. It can't flow over asynchronous operations that might jump from thread to thread; it uses a ThreadLocal under the covers.
The execution context scope, however, can be used when you use async/await and return Task&lt;T&gt; from your methods. In this case the scope can be disposed on a different thread, since it stores all cached instances in the CallContext class.
In most cases you will be able to use the execution context scope in places where you would use a lifetime scope, but certainly not the other way around. But if your code doesn't flow asynchronously, the lifetime scope gives better performance (although probably not a significant difference compared to the execution context scope).
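For completeness, beginning and ending an execution context scope explicitly looks roughly like this (a sketch assuming the SimpleInjector.Extensions.ExecutionContextScoping package; ISomeCommandHandler is a hypothetical abstraction):

```csharp
using (Scope scope = container.BeginExecutionContextScope())
{
    // Resolutions inside the scope share the scoped registrations,
    // e.g. the per-scope AllTenantsDbContext from the question.
    var handler = container.GetInstance<ISomeCommandHandler>();
    await handler.HandleAsync(command); // the scope flows across the await
} // scoped instances are disposed here, possibly on another thread
```

This is what makes it a safe fallback for the WCF/background cases where neither HttpContext.Current nor OperationContext.Current is available.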
If I have two flushes on the same thread using GORM, is it possible for the first to pass and the second to fail in separate transactions?
Or even if I just have one flush halfway through the thread and then a second implicit flush after the request finishes, is it possible for the second to fail but the changes from the explicit flush to pass and thus be persisted in the DB?
Thanks
If I have two flushes on the same thread using GORM, is it possible for the first to pass and the second to fail in separate transactions?
It's the transactions that succeed/fail, not the flushes. There's an implicit flush at the end of every transaction, and also at the end of every session (request). It's absolutely possible to have several transactions in the same thread, some of which fail, and some of which succeed. For example, given a simple domain class
class Book {
    String title
}
The first transaction in someAction will succeed and the second will be rolled back.
class MyController {
    def someAction() {
        Book.withTransaction {
            new Book(title: 'successful').save(failOnError: true)
        }
        Book.withTransaction {
            new Book(title: 'failed').save(failOnError: true)
            throw new RuntimeException('transaction rollback')
        }
    }
}