I am having a problem with transactions in Grails. I want to save a list of objects to the database, checking a condition for each object. The whole process should run in a single transaction, meaning that if the k-th object does not satisfy the condition, all previously saved objects (the first through the (k-1)-th) are rolled back. Here is my example:
static transactional = true

public void saveManyPeople() {
    // ...
    List<People> peoples = new ArrayList<People>()
    for (int i = 0; i < n; i++) {
        People newPeople = createPeopleFromRawData() // returns a People object in memory
        if (<checking-condition>) {
            newPeople.save(flush: false)
        } else {
            throw new MyCustomizedException() // MyCustomizedException extends RuntimeException
        }
    }
    // ...
}
As you can see, I set the transactional variable to true, and I have tried both flush: true and flush: false, but it didn't work the way I wanted. I've read the article Rolling back a transaction in a Grails Service.
The author recommends that the service method throw a RuntimeException so that the process will be rolled back. But if I want to throw a different exception, what do I have to do?
Could you please give me some suggestions on this problem?
Thank you so much!
You can throw any exception that extends from RuntimeException to roll back the transaction. Or you can use programmatic transactions, using withTransaction, to have more control over the transaction.
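For example, a minimal sketch of the first approach (the PeopleService class and the isValid() check are illustrative names only; createPeopleFromRawData() is assumed to exist as in the question):

// Any RuntimeException subclass will mark the transaction for rollback
class MyCustomizedException extends RuntimeException {
    MyCustomizedException(String message) { super(message) }
}

class PeopleService {
    static transactional = true

    void saveManyPeople(int n) {
        for (int i = 0; i < n; i++) {
            People newPeople = createPeopleFromRawData() // as in the question
            if (isValid(newPeople)) {                    // stands in for the checking condition
                newPeople.save()
            } else {
                // rolls back every save() made earlier in this method
                throw new MyCustomizedException("Person ${i} failed the check")
            }
        }
    }
}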
Could you verify that saveManyPeople() is within a Service and not a Controller?
The static transactional = true isn't respected in a controller; I suspect that this is the issue.
If you need transactional support in the controller, you can always use DomainClass.withTransaction (see the Reference Documentation).
Example:
Account.withTransaction { status ->
    def source = Account.get(params.from)
    def dest = Account.get(params.to)
    def amount = params.amount.toInteger()
    if (source.active) {
        source.balance -= amount
        if (dest.active) {
            dest.amount += amount
        }
        else {
            status.setRollbackOnly()
        }
    }
}
Related
I am using grails-2.1.1. When I load the edit page, I am assigning some values in the edit action of the controller. But it is updating my table, even though I am not saving. How can I stop it?
Here is my code. My edit action in the controller:
def edit() {
    def accTxnMstInstance = AccTxnMst.get(params.id)
    if (!accTxnMstInstance) {
        flash.message = message(code: 'default.not.found.message', args: [message(code: 'accTxnMst.label', default: 'AccTxnMst'), params.id])
        redirect(action: "list")
        return
    }
    accTxnMstInstance?.accTxnDtls?.each {
        if (it?.debitCoa != null && it?.debitCoa != "") {
            String debitCoaVal = ""
            List<String> items = Arrays.asList(it?.debitCoa?.split("\\s*~\\s*"))
            items.each {
                List itemList = new ArrayList()
                List<String> subItems = Arrays.asList(it.split("\\^"))
                subItems.each {
                    itemList.add(it)
                }
                itemList.add("false")
                itemList.add("0")
                itemList.each {
                    debitCoaVal += it.toString() + "^"
                }
                debitCoaVal += "~"
            }
            it?.debitCoa = debitCoaVal
            debitCoaVal = ""
        }
        if (it?.creditCoa != null && it?.creditCoa != "") {
            String creditCoaVal = ""
            List<String> items = Arrays.asList(it?.creditCoa?.split("\\s*~\\s*"))
            items.each {
                List itemList = new ArrayList()
                List<String> subItems = Arrays.asList(it.split("\\^"))
                subItems.each {
                    itemList.add(it)
                }
                itemList.add("false")
                itemList.add("0")
                itemList.each {
                    creditCoaVal += it.toString() + "^"
                }
                creditCoaVal += "~"
            }
            it?.creditCoa = creditCoaVal
            creditCoaVal = ""
        }
    }
    [accTxnMstInstance: accTxnMstInstance]
}
You can see that I am not saving after assigning the values, just passing them to the view.
Grails uses the Open Session In View (OSIV) pattern, where at the beginning of the web request a Hibernate session is opened (and stored in a thread-local to make it easily accessible) and at the end of the request as long as there wasn't an exception, the Hibernate session is flushed and closed. During any flush, Hibernate looks at all "active" object instances and loops through each persistent property to see if it is "dirty". If so, even though you didn't explicitly call save(), your changes will be pushed to the database for you. This is possible because when Hibernate creates an instance from a database row it caches the original data to compare later to the potentially-changed instance properties.
A lot of the time this is helpful behavior, but in cases like this it gets in the way. There are lots of fixes though. One drastic one is to disable OSIV, but this is generally a bad idea unless you know what you're doing. In this case there are two things you can try that should work.
One is to change AccTxnMst.get(params.id) to AccTxnMst.read(params.id). This will not cause the instance to be strictly "read-only" because you can still explicitly call save() and if something was modified, all of the instance changes will be persisted. But the caching of the original data isn't done for instances retrieved using read(), and there's no dirty checking during flush for these instances (which isn't possible anyway since there's no cached data to compare with).
Using read() is a good idea in general when retrieving instances that are not going to be updated (whether you make property changes or not), and makes the code more self-documenting.
Another option is to call discard() on the instance before the controller action finishes. This "detaches" the instance from the Hibernate session, so when the OSIV filter runs at the end of the request and flushes the Hibernate session, your instance won't be considered dirty since Hibernate won't have access to it.
read() only makes sense for individual instances retrieved by id, whereas discard() is useful for any instance, e.g. if they're in a mapped collection or were retrieved by a non-id query (e.g. a dynamic finder, criteria query, etc.)
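For instance, a minimal sketch of the discard() approach for the edit action above (the read() fix is just a one-line change, AccTxnMst.read(params.id) instead of AccTxnMst.get(params.id)); whether the master, the accTxnDtl records, or both need detaching depends on which instances you actually modified:

// at the end of edit(), before returning the model:
accTxnMstInstance.accTxnDtls*.discard()   // detach the modified detail records from the session
accTxnMstInstance.discard()               // detach the master instance as well
[accTxnMstInstance: accTxnMstInstance]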
There are some database operations I need to execute before the end of the final attempt of my Hangfire background job (I need to delete the database record related to the job).
My current job is set with the following attribute:
[AutomaticRetry(Attempts = 5, OnAttemptsExceeded = AttemptsExceededAction.Delete)]
With that in mind, I need to determine what the current attempt number is, but I am struggling to find any documentation in that regard in a Google search or the Hangfire.io documentation.
Simply add PerformContext to your job method; you'll also be able to access your JobId from this object. For attempt number, this still relies on magic strings, but it's a little less flaky than the current/only answer:
public void SendEmail(PerformContext context, string emailAddress)
{
    string jobId = context.BackgroundJob.Id;
    int retryCount = context.GetJobParameter<int>("RetryCount");
    // send an email
}
(NB! This is a solution to the OP's problem. It does not answer the question "How to get the current attempt number". If that is what you want, see the accepted answer for instance)
Use a job filter and the OnStateApplied callback:
public class CleanupAfterFailureFilter : JobFilterAttribute, IApplyStateFilter
{
    public void OnStateApplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
    {
        try
        {
            var failedState = context.NewState as FailedState;
            if (failedState != null)
            {
                // Job has finally failed (retry attempts exceeded)
                // *** DO YOUR CLEANUP HERE ***
            }
        }
        catch (Exception)
        {
            // Unhandled exceptions can cause an endless loop.
            // Therefore, catch and ignore them all.
            // See notes below.
        }
    }

    public void OnStateUnapplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
    {
        // Must be implemented, but can be empty.
    }
}
Add the filter directly to the job function:
[CleanupAfterFailureFilter]
public static void MyJob()
or add it globally:
GlobalJobFilters.Filters.Add(new CleanupAfterFailureFilter ());
or like this:
var options = new BackgroundJobServerOptions
{
    FilterProvider = new JobFilterCollection { new CleanupAfterFailureFilter() }
};
app.UseHangfireServer(options, storage);
See http://docs.hangfire.io/en/latest/extensibility/using-job-filters.html for more information about job filters.
NOTE: This is based on the accepted answer: https://stackoverflow.com/a/38387512/2279059
The difference is that OnStateApplied is used instead of OnStateElection, so the filter callback is invoked only after the maximum number of retries. A downside to this method is that the state transition to "failed" cannot be interrupted, but this is not needed in this case and in most scenarios where you just want to do some cleanup after a job has failed.
NOTE: Empty catch handlers are bad, because they can hide bugs and make them hard to debug in production. One is necessary here, though, so the callback doesn't get called again and again forever. You may want to log exceptions for debugging purposes. It is also advisable to reduce the risk of exceptions inside a job filter. One possibility is, instead of doing the cleanup work in place, to schedule a new background job that runs if the original job failed. Be careful not to apply CleanupAfterFailureFilter to that cleanup job, though: either don't register the filter globally, or add some extra logic to it so it skips the cleanup job.
You can use the OnPerforming or OnPerformed method of IServerFilter if you want to check the attempts, or you can just wait on OnStateElection of IElectStateFilter. I don't know exactly what requirement you have, so it's up to you. Here's the code you want :)
public class JobStateFilter : JobFilterAttribute, IElectStateFilter, IServerFilter
{
    public void OnStateElection(ElectStateContext context)
    {
        // all jobs that have failed after their retry attempts end up here
        var failedState = context.CandidateState as FailedState;
        if (failedState == null) return;
    }

    public void OnPerforming(PerformingContext filterContext)
    {
        // do nothing
    }

    public void OnPerformed(PerformedContext filterContext)
    {
        // you also have the option to move all of this code to OnPerforming if you want.
        var api = JobStorage.Current.GetMonitoringApi();
        var job = api.JobDetails(filterContext.BackgroundJob.Id);
        foreach (var history in job.History)
        {
            // check the Reason property; you will find a string like
            // "Retry attempt 3 of 3: The method or operation is not implemented."
        }
    }
}
How to add your filter
GlobalJobFilters.Filters.Add(new JobStateFilter());
or

var options = new BackgroundJobServerOptions
{
    FilterProvider = new JobFilterCollection { new JobStateFilter() }
};
app.UseHangfireServer(options, storage);
1. Do I need to pass a single class object to the Model method and process one record at a time?
Eg.
public async Task<int> SaveCollectionValues(Foo foo)
{
    ....
    //Parameters
    MySqlParameter prmID = new MySqlParameter("pID", MySqlDbType.Int32);
    prmID.Value = foo.ID;
    sqlCommand.Parameters.Add(prmID);
    ....
}
(OR)
2. Shall I pass the collection to the Model method and use a foreach loop to iterate through it?
public async Task<int> SaveCollectionValues(FooCollection foo)
{
    ....
    //Parameters
    foreach (Foo obj in foo)
    {
        MySqlParameter prmID = new MySqlParameter("pID", MySqlDbType.Int32);
        prmID.Value = obj.ID;
        sqlCommand.Parameters.Add(prmID);
        ....
    }
    ....
}
I just need to know which of the above-mentioned methods would be more efficient to use.
Efficiency is a bit relative here, since you didn't specify which database you're targeting. Bulk insert differs from one DB to another: SQL Server, for instance, uses BCP, while MySQL has ways to disable some internal checks while sending many insert/update commands.
Apart from that, if you're submitting a single collection at once and it should be handled as a single transaction, then the best option, from both a code organization and a SQL optimization point of view, is to use both connection sharing and a single transaction object, as follows:
public void DoSomething(FooCollection collection)
{
    using (var db = GetMyDatabase())
    {
        db.Open();
        var transaction = db.BeginTransaction();
        var success = true;

        foreach (var foo in collection)
        {
            if (!DoSomething(foo, db, transaction))
            {
                transaction.Rollback();
                success = false;
                break;
            }
        }

        if (success)
            transaction.Commit();   // commit only if every item succeeded
    }
}
public bool DoSomething(Foo foo, IDbConnection db, IDbTransaction transaction)
{
    try
    {
        // create your command (use a helper?)
        // set your command connection to db
        // execute your command (don't forget to pass the transaction object)
        // return true if it's ok (eg: ExecuteNonQuery > 0)
        // return false if it's not ok
    }
    catch
    {
        return false;
        // This might not work 100% fine for you.
        // I'm not logging nor re-throwing the exception, I'm just getting rid of it.
        // The idea is to return false because it was not ok.
        // You can also return the exception through "out" parameters.
    }
}
This way you have clean code: one method handles the entire collection and one handles each value.
Also, although you're submitting each value separately, you're using a single transaction. Besides giving you a single commit (better performance), if one value fails, the entire collection fails, leaving no garbage behind.
If you don't really need the transaction, just don't create it and remove it from the second method. Keep the single connection, though, since that avoids resource overuse and connection overhead.
Also, as a general rule, I like to say: never open too many connections at once, especially when you can open a single one, and never forget to close and dispose of a connection unless you're using connection pooling and know exactly how it works.
I have a domain class that modifies one of its properties in the afterInsert event.
A small example:
class Transaction {
    Long transactionId

    static constraints = {
        transactionId nullable: true
    }

    def afterInsert() {
        // copy the record id to transactionId
        transactionId = id
    }
}
Whenever I save the domain object (transaction.save(flush: true)) in
my unit tests, all is well, and the transactionId is updated. But when I try to find the saved record using Transaction.findByTransactionId(), I get no results:
// do something
transaction.save(flush: true)
Transaction transaction = Transaction.findByTransactionId(1)
// !! no results; transaction == null
And I have to do a double save() before I can find the record using findByTransactionId():
// do something
transaction.save(flush: true)
transaction.save(flush: true)
Transaction transaction = Transaction.findByTransactionId(1)
// !! it works....
The double save() seems awkward. Any suggestions on how to eliminate the need for it?
The call to save() will return the persisted entity if validation passes, so there isn't any reason to look it up separately afterwards. I think that your problem is that you're re-instantiating the transaction variable (using that same name). If you must look it up (I don't suggest doing so), call it something else. Also, the id 1 that you're looking up may not exist if the column is AUTO_INCREMENT.
def a = transaction.save(flush: true)
a?.refresh() // for afterInsert()
Transaction b = (a == null) ? null : Transaction.findByTransactionId(a.id)
// (Why look it up? You already have it.)
Update:
Because you’re using afterInsert(), Hibernate may not realize that it needs to refresh the object. Try using the refresh() method after you call save().
This small piece of code makes it work:
def afterInsert() {
    transactionId = id
    save() // we need to call save to persist the changes made to the object
}
So calling save in the afterInsert is needed to persist the changes made in afterInsert!
I have a custom validator like this:
validator: { userEmail, userAccount ->
    if (userAccount.authenticationChannel == "ABC") {
        boolean valid = true;
        UserAccount.withNewSession {
            if (UserAccount.findByEmail(userEmail)) {
                valid = false;
            }
            else if (UserAccount.findByName(userEmail)) {
                valid = false;
            }
            ...
So basically, I need some validation based on a condition, and inside the validation I need to execute a query.
But now if I do:
def admin = new UserAccount(firstname: 'Admin', email: 'admin@example.com')
admin.save(flush:true)
admin.addToAuthorities("ADMIN").save(flush:true)
It fails.
Grails runs the validation even on update, and since the email already exists, the validation fails. How is this different from doing
email {unique:true}
Is Grails saying that I cannot write a custom validator which checks uniqueness?
Not sure if this is your issue or not, but when I tried to create a validation like this (ie. one that does queries against the database), I would get a StackOverflowError. The reason is that, when you run a query (like findByEmail), Hibernate will try to flush the session, which will cause it to validate all transient objects, which in turn calls your custom validator again, resulting in infinite recursion.
The trick to prevent this is to set the flush mode of the session to "manual" for a short time while running the queries. This prevents Hibernate from trying to flush the session before running the queries. The side-effect is that your query won't return entities you created in the current session but haven't been persisted (flushed) back to the database yet.
UserAccount.withNewSession { session ->
    session.flushMode = FlushMode.MANUAL
    try {
        if (UserAccount.findByEmail(userEmail)) {
            valid = false;
        }
        else if (UserAccount.findByName(userEmail)) {
            valid = false;
        }
    }
    finally {
        session.setFlushMode(FlushMode.AUTO);
    }
}
See UniqueConstraint for an example of how this is done.
An alternative might be to do the checks in the save method.
def save = {
    ..
    if (some_checks_succeed(userEmail, userAccount)) {
        admin.save(flush: true)
    }
    ..
}

def some_checks_succeed = { String userEmail, UserAccount userAccount ->
    boolean valid = true;
    if (userAccount.authenticationChannel == "ABC") {
        UserAccount.withNewSession {
            if (UserAccount.findByEmail(userEmail)) {
                valid = false;
            } else if (UserAccount.findByName(userEmail)) {
                valid = false;
            }
            ..
        }
    }
    return valid
}
Some modifications might be necessary, but the above code gives you an example
Thanks, I could get this to work. The admin.save() call runs validation on both insert and update. I handled both cases (insert and update) and was able to get it working.