I have the following test (which is probably more of a functional test than integration but...):
@Integration(applicationClass = Application)
@Rollback
class ConventionControllerIntegrationSpec extends Specification {

    RestBuilder rest = new RestBuilder()
    String url

    def setup() {
        url = "http://localhost:${serverPort}/api/admin/organizations/${Organization.first().id}/conventions"
    }

    def cleanup() {
    }

    void "test update convention"() {
        given:
        Convention convention = Convention.first()

        when:
        RestResponse response = rest.put("${url}/${convention.id}") {
            contentType "application/json"
            json {
                name = "New Name"
            }
        }

        then:
        response.status == HttpStatus.OK.value()
        Convention.findByName("New Name").id == convention.id
        Convention.findByName("New Name").name == "New Name"
    }
}
The data is being loaded via BootStrap (which admittedly might be the issue), but the problem shows up in the then block: it finds the Convention by the new name and the id matches, yet the assertion on the name field fails because the instance still has the old name.
From reading the testing documentation, I think the problem lies in which session the data gets created in. Since @Rollback uses a session that is separate from BootStrap's, the data isn't really gelling. For example, if I load the data in the test's given block, that data doesn't exist when my controller is called by the RestBuilder.
It is entirely possible that I shouldn't be doing this kind of test this way, so suggestions are appreciated.
This is definitely a functional test - you're making HTTP requests against your server, not making method calls and then asserting on the effects of those calls.
You can't get automatic rollbacks with functional tests because the calls are made in one thread and handled in another, whether or not the test runs in the same JVM as the server. The code in BootStrap runs once before all of the tests and gets committed (either because you made the changes in a transaction or via autocommit). The 'client' test code then runs in its own new Hibernate session, inside a transaction that the testing infrastructure starts (and will roll back at the end of the test method). The server-side code runs in its own Hibernate session (because of OSIV) and, depending on whether your controller(s) and service(s) are transactional, may run in a different transaction or may just autocommit.
One thing that is probably not a factor here, but should always be considered with Hibernate persistence tests, is session caching. The Hibernate session is the first-level cache, and a dynamic finder like findByName will probably trigger a flush, which you want, but should be explicit about in tests. It won't clear any cached elements, though, so you run the risk of false positives with code like this: you might not actually be loading a new instance - Hibernate might be returning a cached one. It definitely will when calling get(). I always add a flushAndClear() method to integration and functional base classes, and here I'd call it after the put call in the when block to be sure everything is flushed from Hibernate to the database (not committed, just flushed) and the session is cleared to force real reloading. Just pick a random domain class and use withSession, e.g.
protected void flushAndClear() {
    Foo.withSession { session ->
        session.flush()
        session.clear()
    }
}
Since the put happens in one thread/session/tx and the finders run in their own, this shouldn't have an effect here, but it's the pattern to use in general.
Related
I'm performing the following logic in a single service in Grails 2.4.4.
class SampleService {
    void process(params1, params2) {
        SampleDomain1 sd1 = new SampleDomain1()
        sd1.setProperties(params1)
        sd1.save()

        SampleDomain2 sd2 = new SampleDomain2()
        sd2.setProperties(params2)
        sd2.save()
    }
}
What I understand is that services are transactional by default. If sd1.save() succeeds but sd2.save() fails, the changes are rolled back and an error is thrown, while if both succeed, both are committed when the service method exits.
If my understanding is correct, both should already be persisted to the database. However, the problem is that they are not, unless I explicitly use the flush: true parameter, based on my tests with the same set of params1 and params2:
    sd1.save(flush: true)

    SampleDomain2 sd2 = new SampleDomain2()
    sd2.setProperties(params2)
    sd2.save(flush: true)
}
Which, by the way, is exactly what I am trying to avoid (what would be the point of it being @Transactional?). If that's the catch with Hibernate 4 / Grails 2.4, what do I need to do to make my services commit at the end of every service call again? Do I need to change some global Grails configuration? I really need my domain classes flushed automatically at the end of every service method.
Note
I've already verified that the data is correct, including calling .validate() and other checks; the fact that .save(flush: true) succeeds proves that. The problem I found relates to the change in Grails 2.4's default FlushMode. Maybe what I really need is a global setting to override this.
If your data is not being flushed to the database layer there are some possibilities that come to mind.
There's some kind of error when trying to save to the database; you can try passing the failOnError: true parameter to the .save() calls to see it clearly. (Actually, setting this globally via grails.gorm.failOnError = true in Config.groovy is a good idea, since silently failing db calls are a migraine.)
You are calling this service method from within the same service object. This will not allow the underlying Spring declarative transactions to work, due to the use of proxies.
You might have annotated some other method in the same service, in which case the default transactional support is no longer available for the remaining un-annotated (is this even a word?) methods.
You might have created the service somewhere outside the grails-app/services folder; not quite sure if this can cause an issue since I've never tried it.
You have failed to sacrifice a goat to the Groovy and Grails Gods and they are messing with your head.
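The self-invocation pitfall in point 2 can be illustrated outside Grails with a plain Java dynamic proxy. This is only a sketch with hypothetical names (`Service`, `outer`, `inner`), not Spring's actual machinery, but it shows why a call made from inside the same object never passes through the proxy that would start the transaction:

```java
import java.lang.reflect.Proxy;

public class SelfInvocationDemo {
    interface Service {
        void outer();
        void inner();
    }

    static class ServiceImpl implements Service {
        public void outer() {
            // Internal call: goes straight to this.inner(),
            // bypassing the proxy and its "transaction".
            inner();
        }
        public void inner() { }
    }

    // Counts how many calls actually passed through the proxy.
    static int proxiedCalls = 0;

    public static void main(String[] args) {
        Service target = new ServiceImpl();
        Service proxy = (Service) Proxy.newProxyInstance(
            Service.class.getClassLoader(),
            new Class<?>[] { Service.class },
            (p, method, methodArgs) -> {
                proxiedCalls++; // stand-in for "begin/commit transaction"
                return method.invoke(target, methodArgs);
            });

        proxy.outer(); // only outer() passes through the proxy
        System.out.println(proxiedCalls); // prints 1, not 2
    }
}
```

Spring's declarative transactions work the same way: the interceptor only wraps calls that arrive through the proxy, so `inner()` runs without its own transactional advice.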
Edit :
I'm going to try to answer the points in your new edit.
Have you tried failOnError? It might be an issue that occurs when both objects are flushed to the DB at once, instead of committing them one at a time.
By figuring out a way to auto-flush on save, you are going to be bypassing the transactions altogether, AFAIK. Now if I'm wrong, then by all means go for it - but do test it out first before assuming.
Somewhere on my DataSource.groovy configuration, there is this line:
hibernate {
    ...
    singleSession = true  // configure OSIV singleSession mode
    flush.mode = 'manual' // OSIV session flush mode outside of transactional context
                 ^^^^^^^^
}
which explicitly states that every save should be flushed manually. As a solution, I commented out this line. After that, every database transaction now commits each time it exits a service.
I'm using Grails 2.5.1, and I have a controller calling a service method which occasionally results in a StaleObjectStateException. The code in the service method has a try catch around the obj.save() call which just ignores the exception. However, whenever one of these conflicts occurs there's still an error printed in the log, and an error is returned to the client.
My GameController code:
def finish(String gameId) {
    def model = [:]
    Game game = gameService.findById(gameId)

    // some other work

    // this line is where the exception points to - NOT a line in GameService:
    model.game = GameSummaryView.fromGame(gameService.scoreGame(game))

    withFormat {
        json {
            render(model as JSON)
        }
    }
}
My GameService code:
Game scoreGame(Game game) {
    game.rounds.each { Round round ->
        // some other work
        try {
            scoreRound(round)
            if (round.save()) {
                updated = true
            }
        } catch (StaleObjectStateException ignore) {
            // ignore and retry
        }
    }
}
The stack-trace says the exception generates from my GameController.finish method, it doesn't point to any code within my GameService.scoreGame method. This implies to me that Grails checks for staleness when a transaction is started, NOT when an object save/update is attempted?
I've come across this exception many times, and generally I fix it by not traversing the Object graph.
For example, in this case, I'd remove the game.rounds reference and replace it with:
def rounds = Round.findAllByGameId(game.id)
rounds.each {
    // ....
}
But that would mean staleness isn't checked when the transaction is created, and it isn't always practical; in my opinion it kind of defeats the purpose of Grails lazy collections. If I wanted to manage all the associations myself, I would.
I've read the documentation regarding Pessimistic and Optimistic Locking, but my code follows the examples there.
I'd like to understand more about how/when Grails (GORM) checks for staleness and where to handle it?
You don't show or discuss any transaction configuration, but that's probably what's causing the confusion. Based on what you're seeing, I'm guessing that you have @Transactional annotations in your controller. I say that because if that's the case, a transaction starts there, and (assuming your service is transactional) the service method joins the current transaction.
In the service you call save() but you don't flush the session. That's better for performance, especially if there were another part of the workflow where you make other changes - you wouldn't want to push two or more sets of updates to each object when you can push all the changes at once. Since you don't flush, and since the transaction doesn't commit at the end of the method as it would if the controller hadn't started the transaction, the updates are only pushed when the controller method finishes and the transaction commits.
You'd be better off moving all of your transactional (and business) logic to the service and remove every trace of transactions from your controllers. Avoid "fixing" this by eagerly flushing unless you're willing to take the performance hit.
As for the staleness check - it's fairly simple. When Hibernate generates the SQL to make the changes, it's of the form UPDATE tablename SET col1=?, col2=?, ..., colN=? WHERE id=? AND version=?. The id will obviously match, but if the version has incremented, then the version part of the WHERE clause won't match, the JDBC update count will be 0 instead of 1, and this is interpreted to mean that someone else made a change between your reading and updating the data.
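That version check can be mimicked in miniature. The sketch below is not Hibernate's actual code (the `row`/`update` names are made up); it just models the `... WHERE id=? AND version=?` behaviour: an update only "succeeds" when the expected version still matches, and a mismatch yields an update count of 0, which Hibernate surfaces as StaleObjectStateException:

```java
import java.util.HashMap;
import java.util.Map;

public class OptimisticLockDemo {
    // One "row": name plus a version column.
    static Map<String, Object> row = new HashMap<>();

    static {
        row.put("name", "initial");
        row.put("version", 0L);
    }

    // Returns the JDBC-style update count: 1 if the version matched, else 0.
    static int update(String newName, long expectedVersion) {
        long current = (Long) row.get("version");
        if (current != expectedVersion) {
            return 0; // another writer got there first -> "stale object"
        }
        row.put("name", newName);
        row.put("version", current + 1);
        return 1;
    }

    public static void main(String[] args) {
        long myVersion = (Long) row.get("version");  // I read the row at version 0
        update("someone else's change", myVersion);  // a concurrent writer commits, version -> 1
        int count = update("my change", myVersion);  // my write still carries version 0
        System.out.println(count); // prints 0: the stale update matched no rows
    }
}
```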
What's the difference between these two controller actions:
@Transactional
def save(SomeDomain someDomain) {
    someDomain.someProperty = firstService.createAndSaveSomething(params)    // transactional
    someDomain.anotherProperty = secondService.createAndSaveSomething(params) // transactional
    someDomain.save(flush: true)
}
and
def save(SomeDomain someDomain) {
    combinedService.createAndSave(someDomain, params) // transactional, encapsulating first and second service calls
}
My purpose is to roll back the whole save() action if a transaction fails, but I'm not sure which one I should use.
You can use both approaches.
Your listing #1 will roll back the controller transaction when firstService or secondService throws an exception.
Listing #2 (I expect the createAndSave method of combinedService to be annotated with @Transactional) will roll back the transaction if createAndSave throws an exception. The big plus of this approach is that the service method is theoretically reusable in other controllers.
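The rollback behaviour both listings rely on can be sketched with a toy transaction that buffers writes and discards them when any step throws. This is a deliberately simplified model (the `inTransaction`, `steps`, and `buffer` names are invented; real Grails delegates to Spring's transaction manager), but it shows why a failure in any service call leaves the database untouched:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RollbackDemo {
    static Map<String, String> database = new HashMap<>();

    // Run all steps in one "transaction": buffer writes, commit only if every step succeeds.
    static void inTransaction(List<Runnable> steps, Map<String, String> buffer) {
        try {
            for (Runnable step : steps) {
                step.run();
            }
            database.putAll(buffer); // commit: all writes become visible at once
        } catch (RuntimeException e) {
            buffer.clear(); // rollback: nothing reaches the database
        }
    }

    public static void main(String[] args) {
        Map<String, String> buffer = new HashMap<>();
        List<Runnable> steps = new ArrayList<>();
        steps.add(() -> buffer.put("someProperty", "first"));     // firstService succeeds
        steps.add(() -> { throw new RuntimeException("boom"); }); // secondService fails
        steps.add(() -> buffer.put("anotherProperty", "second")); // never reached

        inTransaction(steps, buffer);
        System.out.println(database.isEmpty()); // prints true: everything rolled back
    }
}
```

The key design point, which holds for both listings, is that rollback requires the failing call and the other writes to share one transaction; that is exactly what joining the controller's transaction (listing #1) or wrapping everything in combinedService (listing #2) achieves.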
One of the key points about @Transactional is that there are two separate concepts to consider, each with its own scope and life cycle:
the persistence context
the database transaction
The transactional annotation itself defines the scope of a single database transaction. The database transaction happens inside the scope of a persistence context. Your code:
@Transactional
def save(SomeDomain someDomain) {
    someDomain.someProperty = firstService.createAndSaveSomething(params)    // transactional
    someDomain.anotherProperty = secondService.createAndSaveSomething(params) // transactional
    someDomain.save(flush: true)
}
In JPA, the persistence context is the EntityManager, implemented internally using a Hibernate Session (when using Hibernate as the persistence provider). Your code:
def save(SomeDomain someDomain) {
    combinedService.createAndSave(someDomain, params) // transactional, encapsulating first and second service calls
}
Note : The persistence context is just a synchronizer object that tracks the state of a limited set of Java objects and makes sure that changes on those objects are eventually persisted back into the database.
Conclusion: the declarative transaction management mechanism (@Transactional) is very powerful, but it can easily be misused or misconfigured.
Understanding how it works internally is helpful when troubleshooting situations when the mechanism is not at all working or is working in an unexpected way.
I have a SharePoint "remote web" application that will be managing data for multiple tenant databases (and thus, ultimately, multiple tenant database connections). In essence, each operation will deal with two databases.
The first is our tenancy database, where we store information that is specific for each tenant. This can be the SharePoint OAuth Client ID and secret, as well as information about how to connect to the tenant's specific database, which is the second database. This means that connecting to the first database will be required before we can connect to the second database.
I believe I know how to do this using Simple Injector for HTTP requests. I could register the first connection type (whether that be an IDbConnection wrapper using ADO.NET or a TenancyDbContext from entity framework) with per web request lifetime.
I could then register an abstract factory to resolve the connections to the tenant-specific databases. This factory would depend on the first database type, as well as the Simple Injector Container. Queries & commands that need to access the tenant database will depend on this abstract factory and use it to obtain the connection to the tenant database by passing an argument to a factory method.
My question mainly has to do with how to handle this in the context of an operation that may or may not have a non-null HttpContext.Current. When a SharePoint app is installed, we are sometimes running a WCF .svc service to perform certain operations. When SharePoint invokes this, sometimes HttpContext is null. I need a solution that will work in both cases, for both database connections, and that will make sure the connections are disposed when they are no longer needed.
I have some older example code that uses the LifetimeScope, but I see now that there is an Execution Context Scoping package available for Simple Injector on NuGet. I am wondering if I should use that to create hybrid scoping for these 2 database connections (with / without HTTP context), and if so, how is it different from lifetime scoping using Container.GetCurrentLifetimeScope and Container.BeginLifetimeScope?
Update
I read up on the execution scope lifestyle, and ended up with the following 3-way hybrid:
var hybridDataAccessLifestyle = Lifestyle.CreateHybrid(        // create a hybrid lifestyle
    lifestyleSelector: () => HttpContext.Current != null,      // when the object is needed by a web request
    trueLifestyle: new WebRequestLifestyle(),                  // create one instance for all code invoked by the web request
    falseLifestyle: Lifestyle.CreateHybrid(                    // otherwise, create another hybrid lifestyle
        lifestyleSelector: () => OperationContext.Current != null, // when the object is needed by a WCF op,
        trueLifestyle: new WcfOperationLifestyle(),            // create one instance for all code invoked by the op
        falseLifestyle: new ExecutionContextScopeLifestyle())  // in all other cases, create per execution scope
);
However my question really has to do with how to create a dependency which will get its connection string sometime after the root is already composed. Here is some pseudo code I came up with that implements an idea I have for how to implement this:
public class DatabaseConnectionContainerImpl : IDatabaseConnectionContainer, IDisposable
{
    private readonly AllTenantsDbContext _allTenantsDbContext;
    private TenantSpecificDbContext _tenantSpecificDbContext;
    private Uri _tenantUri = null;

    public DatabaseConnectionContainerImpl(AllTenantsDbContext allTenantsDbContext)
    {
        _allTenantsDbContext = allTenantsDbContext;
    }

    public TenantSpecificDbContext GetInstance(Uri tenantUri)
    {
        if (tenantUri == null) throw new ArgumentNullException("tenantUri");
        if (_tenantUri != null && _tenantUri.Authority != tenantUri.Authority)
            throw new InvalidOperationException(
                "You can only connect to one tenant database within this scope.");

        if (_tenantSpecificDbContext == null)
        {
            var tenancy = _allTenantsDbContext.Set<Tenancy>()
                .SingleOrDefault(x => x.Authority == tenantUri.Authority);
            if (tenancy == null)
                throw new InvalidOperationException(string.Format(
                    "Tenant with URI Authority {0} does not exist.", tenantUri.Authority));

            _tenantSpecificDbContext = new TenantSpecificDbContext(tenancy.ConnectionString);
            _tenantUri = tenantUri;
        }
        return _tenantSpecificDbContext;
    }

    void IDisposable.Dispose()
    {
        if (_tenantSpecificDbContext != null) _tenantSpecificDbContext.Dispose();
    }
}
The bottom line is that there is a runtime Uri variable that will be used to determine what the connection string will be to the TenantSpecificDbContext instance. This Uri variable is passed into all WCF operations and HTTP web requests. Since this variable is not known until runtime after the root is composed, I don't think there is any way to inject it into the constructor.
Any better ideas than the one above, or will the one above be problematic?
Since you want to run operations in two different contexts (one where a web request is available and one where it isn't) within the same AppDomain, you need to use a hybrid lifestyle. Hybrid lifestyles switch automatically from one lifestyle to the other. The example given in the Simple Injector documentation is the following:
ScopedLifestyle hybridLifestyle = Lifestyle.CreateHybrid(
    lifestyleSelector: () => container.GetCurrentLifetimeScope() != null,
    trueLifestyle: new LifetimeScopeLifestyle(),
    falseLifestyle: new WebRequestLifestyle());

// The created lifestyle can be reused for many registrations.
container.Register<IUserRepository, SqlUserRepository>(hybridLifestyle);
container.Register<ICustomerRepository, SqlCustomerRepository>(hybridLifestyle);
Using this custom hybrid lifestyle, instances are stored for the duration of an active lifetime scope, but we fall back to caching instances per web request when there is no active lifetime scope. If there is neither an active lifetime scope nor a web request, an exception will be thrown.
With Simple Injector, a scope for a web request is implicitly created for you under the covers. For the lifetime scope, however, this is not possible; you have to begin such a scope explicitly yourself (as shown here). This should be trivial for you since you use command handlers.
Now your question is about the difference between the lifetime scope and the execution context scope. The difference is that a lifetime scope is thread-specific: it can't flow over asynchronous operations that might jump from thread to thread. It uses a ThreadLocal under the covers.
The execution context scope, however, can be used when you use async/await and return Task&lt;T&gt; from your methods. In this case the scope can be disposed on a different thread, since it stores all cached instances in the CallContext class.
In most cases you will be able to use the execution context scope in places where you would use the lifetime scope, but certainly not the other way around. If your code doesn't flow asynchronously, though, the lifetime scope gives better performance (although probably not a significant difference from the execution context scope).
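The thread-affinity point can be seen with a plain ThreadLocal, the same mechanism the answer says the lifetime scope uses under the covers. This is a sketch in Java rather than Simple Injector's C# code, and the names (`scope`, `worker`) are invented; it only demonstrates that a thread-local "scope" does not follow work onto another thread:

```java
public class ThreadLocalDemo {
    // Stand-in for a thread-bound scope: each thread sees its own value.
    static final ThreadLocal<String> scope = new ThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        scope.set("scope-A"); // "begin scope" on the main thread

        final String[] seenOnOtherThread = new String[1];
        Thread worker = new Thread(() -> {
            // The ThreadLocal value does not flow to a new thread:
            seenOnOtherThread[0] = scope.get();
        });
        worker.start();
        worker.join();

        System.out.println(scope.get());          // prints scope-A
        System.out.println(seenOnOtherThread[0]); // prints null
    }
}
```

An execution-context-style scope solves this by storing its cache in something that flows with the logical call (CallContext in .NET), rather than with the physical thread.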
In my app, I have a code like this:
// 1
Foo.get(123).example = "my example" // as expected, doesn't change the value in the db

// 2
Foo.get(123).bars.each { bar ->
    bar.value *= -1 // it's changing the "value" field in the database!! WHY?
}
Note: Foo and Bar are tables in my DB.
Why is GORM saving to the database in the second case?
I don't have any save() call in the code.
Thanks
SOLVED:
I needed to use read() to get a read-only instance.
(Foo.discard() also works.)
Doc: http://grails.org/doc/latest/guide/5.%20Object%20Relational%20Mapping%20%28GORM%29.html#5.1.1%20Basic%20CRUD
(In the first case, I guess I made a mistake in my test.)
Both should save, so the first example appears to be a bug. Grails requests run in the context of an OpenSessionInView interceptor. This opens a Hibernate session at the beginning of each request and binds it to the thread, and flushes and closes it at the end of the request. This helps a lot with lazy loading, but can have unexpected consequences like you're seeing.
Although you're not explicitly saving, the logic in the Hibernate flush involves finding all attached instances that have been modified and pushing the updates to the database. This is a performance optimization: if each change were pushed immediately it would slow things down, so everything that can wait until a flush is queued up.
So the only time you need to explicitly save is for new instances, and when you want to check validation errors.
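A toy version of that dirty-checking flush, as a sketch rather than Hibernate's internals (the `loadedSnapshot`/`attached`/`flush` names are invented): the session remembers the state each instance had when it was loaded, and flush() compares current state against that snapshot to decide which updates to push.

```java
import java.util.HashMap;
import java.util.Map;

public class DirtyCheckingDemo {
    static Map<String, String> database = new HashMap<>();

    // "Session": snapshot taken at load time, plus the live attached instances.
    static Map<String, String> loadedSnapshot = new HashMap<>();
    static Map<String, String> attached = new HashMap<>();

    static void load(String id) {
        String value = database.get(id);
        loadedSnapshot.put(id, value);
        attached.put(id, value);
    }

    // Flush: push only the attached instances whose state changed since load.
    static int flush() {
        int updates = 0;
        for (Map.Entry<String, String> e : attached.entrySet()) {
            if (!e.getValue().equals(loadedSnapshot.get(e.getKey()))) {
                database.put(e.getKey(), e.getValue());
                updates++;
            }
        }
        return updates;
    }

    public static void main(String[] args) {
        database.put("bar-1", "1");
        load("bar-1");
        attached.put("bar-1", "-1"); // modify the attached instance; no save() call
        int pushed = flush();        // the end-of-request flush finds the dirty instance
        System.out.println(pushed);                // prints 1
        System.out.println(database.get("bar-1")); // prints -1
    }
}
```

This is why read() and discard() fix the problem above: a read-only instance is excluded from the dirty check, and a discarded one is no longer attached, so the flush has nothing to push.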