How do I get the current attempt number on a background job in Hangfire?

There are some database operations I need to execute before the end of the final attempt of my Hangfire background job (I need to delete the database record related to the job).
My current job is set with the following attribute:
[AutomaticRetry(Attempts = 5, OnAttemptsExceeded = AttemptsExceededAction.Delete)]
With that in mind, I need to determine what the current attempt number is, but I am struggling to find anything in that regard in the Hangfire.io documentation or via a Google search.

Simply add PerformContext to your job method; you'll also be able to access your JobId from this object. For attempt number, this still relies on magic strings, but it's a little less flaky than the current/only answer:
public void SendEmail(PerformContext context, string emailAddress)
{
string jobId = context.BackgroundJob.Id;
int retryCount = context.GetJobParameter<int>("RetryCount");
// send an email
}
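If the goal is cleanup on the final attempt, you can compare that retry count against the configured attempt limit. A minimal sketch, assuming Attempts = 5 as in the question; note that "RetryCount" is absent (and thus reads as 0) on the very first run, and the exact counter semantics are worth verifying against your Hangfire version:
[AutomaticRetry(Attempts = 5, OnAttemptsExceeded = AttemptsExceededAction.Delete)]
public void SendEmail(PerformContext context, string emailAddress)
{
    // Assumption: 0 on the first run, N on the Nth retry.
    int retryCount = context.GetJobParameter<int>("RetryCount");
    bool isFinalAttempt = retryCount >= 5; // 5 == Attempts above

    try
    {
        // send an email
    }
    catch
    {
        if (isFinalAttempt)
        {
            // delete the database record related to the job here
        }
        throw; // rethrow so Hangfire still records the failure
    }
}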

(NB! This is a solution to the OP's problem. It does not answer the question "How to get the current attempt number". If that is what you want, see the accepted answer for instance)
Use a job filter and the OnStateApplied callback:
public class CleanupAfterFailureFilter : JobFilterAttribute, IServerFilter, IApplyStateFilter
{
public void OnStateApplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
{
try
{
var failedState = context.NewState as FailedState;
if (failedState != null)
{
// Job has finally failed (retry attempts exceeded)
// *** DO YOUR CLEANUP HERE ***
}
}
catch (Exception)
{
// Unhandled exceptions can cause an endless loop.
// Therefore, catch and ignore them all.
// See notes below.
}
}
public void OnStateUnapplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
{
// Must be implemented, but can be empty.
}
}
Add the filter directly to the job function:
[CleanupAfterFailureFilter]
public static void MyJob()
or add it globally:
GlobalJobFilters.Filters.Add(new CleanupAfterFailureFilter());
or like this:
var options = new BackgroundJobServerOptions
{
FilterProvider = new JobFilterCollection { new CleanupAfterFailureFilter() }
};
app.UseHangfireServer(options, storage);
See http://docs.hangfire.io/en/latest/extensibility/using-job-filters.html for more information about job filters.
NOTE: This is based on the accepted answer: https://stackoverflow.com/a/38387512/2279059
The difference is that OnStateApplied is used instead of OnStateElection, so the filter callback is invoked only after the maximum number of retries. A downside to this method is that the state transition to "failed" cannot be interrupted, but this is not needed in this case and in most scenarios where you just want to do some cleanup after a job has failed.
NOTE: Empty catch handlers are bad, because they can hide bugs and make them hard to debug in production. It is necessary here, so the callback doesn't get called repeatedly forever. You may want to log exceptions for debugging purposes. It is also advisable to reduce the risk of exceptions in a job filter. One possibility is, instead of doing the cleanup work in-place, to schedule a new background job which runs if the original job failed. Be careful to not apply the filter CleanupAfterFailureFilter to it, though. Don't register it globally, or add some extra logic to it...
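For illustration, here is a hedged sketch of that last idea, enqueuing the cleanup as its own job from the filter; CleanupService and DeleteRecordFor are hypothetical placeholders, not Hangfire types:
public void OnStateApplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
{
    try
    {
        if (context.NewState is FailedState)
        {
            // Keep the filter cheap: enqueue a separate job to do the cleanup.
            // CleanupService / DeleteRecordFor are placeholders for your own types.
            BackgroundJob.Enqueue<CleanupService>(s => s.DeleteRecordFor(context.BackgroundJob.Id));
        }
    }
    catch (Exception)
    {
        // Swallowed for the endless-loop reason described above; consider logging.
    }
}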

You can use the OnPerforming or OnPerformed method of IServerFilter if you want to check the attempts, or you can just hook OnStateElection of IElectStateFilter. I don't know exactly what requirement you have, so it's up to you. Here's the code you want :)
public class JobStateFilter : JobFilterAttribute, IElectStateFilter, IServerFilter
{
public void OnStateElection(ElectStateContext context)
{
// all failed jobs come here after their retry attempts are exhausted
var failedState = context.CandidateState as FailedState;
if (failedState == null) return;
}
public void OnPerforming(PerformingContext filterContext)
{
// do nothing
}
public void OnPerformed(PerformedContext filterContext)
{
// you can move all of this code to OnPerforming if you prefer.
var api = JobStorage.Current.GetMonitoringApi();
var job = api.JobDetails(filterContext.BackgroundJob.Id);
foreach(var history in job.History)
{
// check reason property and you will find a string with
// Retry attempt 3 of 3: The method or operation is not implemented.
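// Hypothetical sketch (not in the original answer): extract the attempt
// number from the Reason text. The "Retry attempt X of Y" wording is a
// Hangfire implementation detail and may change between versions.
var match = System.Text.RegularExpressions.Regex.Match(history.Reason ?? string.Empty, @"Retry attempt (\d+) of (\d+)");
if (match.Success)
{
int currentAttempt = int.Parse(match.Groups[1].Value);
}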
}
}
}
How to add your filter
GlobalJobFilters.Filters.Add(new JobStateFilter());
or
var options = new BackgroundJobServerOptions
{
FilterProvider = new JobFilterCollection { new JobStateFilter() }
};
app.UseHangfireServer(options, storage);


How are Firebase offline capabilities supposed to detect when the cache is outdated? [duplicate]

Whenever I use addListenerForSingleValueEvent with setPersistenceEnabled(true), I only manage to get a local offline copy of DataSnapshot and NOT the updated DataSnapshot from the server.
However, if I use addValueEventListener with setPersistenceEnabled(true), I can get the latest copy of DataSnapshot from the server.
Is this normal for addListenerForSingleValueEvent as it only searches DataSnapshot locally (offline) and removes its listener after successfully retrieving DataSnapshot ONCE (either offline or online)?
Update (2021): There is a new method call (get on Android and getData on iOS) that implements the behavior you'll likely want: it first tries to get the latest value from the server, and only falls back to the cache when it can't reach the server. The recommendation to use persistent listeners still applies, but at least there's a cleaner option for getting data once even when you have local caching enabled.
How persistence works
The Firebase client keeps a copy of all data you're actively listening to in memory. Once the last listener disconnects, the data is flushed from memory.
If you enable disk persistence in a Firebase Android application with:
Firebase.getDefaultConfig().setPersistenceEnabled(true);
The Firebase client will keep a local copy (on disk) of all data that the app has recently listened to.
What happens when you attach a listener
Say you have the following ValueEventListener:
ValueEventListener listener = new ValueEventListener() {
@Override
public void onDataChange(DataSnapshot snapshot) {
System.out.println(snapshot.getValue());
}
@Override
public void onCancelled(FirebaseError firebaseError) {
// No-op
}
};
When you add a ValueEventListener to a location:
ref.addValueEventListener(listener);
// OR
ref.addListenerForSingleValueEvent(listener);
If the value of the location is in the local disk cache, the Firebase client will invoke onDataChange() immediately for that value from the local cache. It will then also initiate a check with the server, to ask for any updates to the value. It may subsequently invoke onDataChange() again if there has been a change to the data on the server since it was last added to the cache.
What happens when you use addListenerForSingleValueEvent
When you add a single value event listener to the same location:
ref.addListenerForSingleValueEvent(listener);
The Firebase client will (as in the previous situation) immediately invoke onDataChange() for the value from the local disk cache. It will not invoke onDataChange() again, even if the value on the server turns out to be different. Do note that updated data will still be requested and returned on subsequent requests.
This was covered previously in How does Firebase sync work, with shared data?
Solution and workaround
The best solution is to use addValueEventListener(), instead of a single-value event listener. A regular value listener will get both the immediate local event and the potential update from the server.
A second solution is to use the new get method (introduced in early 2021), which doesn't have this problematic behavior. Note that this method always tries to first fetch the value from the server, so it will take longer to complete. If your value never changes, it might still be better to use addListenerForSingleValueEvent (but you probably wouldn't have ended up on this page in that case).
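A minimal sketch of that newer API (assuming ref points at the location you are reading, and a Firebase Android SDK recent enough to have get()):
ref.get().addOnCompleteListener(task -> {
    if (task.isSuccessful()) {
        // Latest server value; falls back to the cache only if the server is unreachable.
        System.out.println(task.getResult().getValue());
    } else {
        System.err.println("Error getting data: " + task.getException());
    }
});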
As a workaround you can also call keepSynced(true) on the locations where you use a single-value event listener. This ensures that the data is updated whenever it changes, which drastically improves the chance that your single-value event listener will see the current value.
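For example (a sketch; the "serverTime" path is just an assumed placeholder):
// Keep this location synced so the single-value listener is far more likely
// to observe the current value rather than a stale cache entry.
DatabaseReference ref = FirebaseDatabase.getInstance().getReference("serverTime");
ref.keepSynced(true);
ref.addListenerForSingleValueEvent(listener);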
So I have a working solution for this. All you have to do is use a ValueEventListener and remove the listener after 0.5 seconds, to make sure you've grabbed the updated data by then if needed. The Realtime Database has very good latency, so this is safe. See the code example below:
public class FirebaseController {
private static FirebaseController sInstance;
private DatabaseReference mRootRef;
private Handler mHandler = new Handler();
private FirebaseController() {
FirebaseDatabase.getInstance().setPersistenceEnabled(true);
mRootRef = FirebaseDatabase.getInstance().getReference();
}
public static FirebaseController getInstance() {
if (sInstance == null) {
sInstance = new FirebaseController();
}
return sInstance;
}
Then, in some method where you would have liked to use addListenerForSingleValueEvent:
public void getTime(final OnTimeRetrievedListener listener) {
DatabaseReference ref = mRootRef.child("serverTime");
ref.addValueEventListener(new ValueEventListener() {
@Override
public void onDataChange(DataSnapshot dataSnapshot) {
if (listener != null) {
// This can be called twice if data changed on server - SO DEAL WITH IT!
listener.onTimeRetrieved(dataSnapshot.getValue(Long.class));
}
removeListenerAfter2(ref, this);
}
@Override
public void onCancelled(DatabaseError databaseError) {
removeListenerAfter2(ref, this);
}
});
}
// ValueEventListener version workaround for addListenerForSingleValueEvent not working.
private void removeListenerAfter2(DatabaseReference ref, ValueEventListener listener) {
mHandler.postDelayed(new Runnable() {
@Override
public void run() {
HelperUtil.logE("removing listener", FirebaseController.class);
ref.removeEventListener(listener);
}
}, 500);
}
// ChildEventListener version workaround for addListenerForSingleValueEvent not working.
private void removeListenerAfter2(DatabaseReference ref, ChildEventListener listener) {
mHandler.postDelayed(new Runnable() {
@Override
public void run() {
HelperUtil.logE("removing listener", FirebaseController.class);
ref.removeEventListener(listener);
}
}, 500);
}
Even if they close the app before the handler is executed, the listener will be removed anyway.
Edit: this can be abstracted to keep track of added and removed listeners in a HashMap using reference path as key and datasnapshot as value. You can even wrap a fetchData method that has a boolean flag for "once" if this is true it would do this workaround to get data once, else it would continue as normal.
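A hedged sketch of that abstraction (fetchData is a hypothetical name; it reuses removeListenerAfter2 from above):
// If "once" is true, emulate addListenerForSingleValueEvent with the
// timed-removal workaround; otherwise attach a normal long-lived listener.
public void fetchData(DatabaseReference ref, boolean once, ValueEventListener listener) {
    ref.addValueEventListener(listener);
    if (once) {
        removeListenerAfter2(ref, listener);
    }
}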
You're Welcome!
You can create a transaction and abort it; then onComplete will be called whether you are online (online data) or offline (cached data).
I previously created a function which worked only if the database connection lasted long enough to sync. I fixed the issue by adding a timeout. I will keep working on this and test whether it works. Maybe in the future, when I get free time, I will create an Android lib and publish it, but for now here is the code in Kotlin:
/**
* @param databaseReference reference to parent database node
* @param callback callback with a mutable list of objects and a boolean telling whether the data came from the cache
* @param timeOutInMillis if not set, it waits indefinitely for online data; if set, when the timeout occurs it sends data from the cache, if any
*/
fun readChildrenOnlineElseLocal(databaseReference: DatabaseReference, callback: ((mutableList: MutableList<@kotlin.UnsafeVariance T>, isDataFromCache: Boolean) -> Unit), timeOutInMillis: Long? = null) {
var countDownTimer: CountDownTimer? = null
val transactionHandlerAbort = object : Transaction.Handler { //for cache load
override fun onComplete(p0: DatabaseError?, p1: Boolean, data: DataSnapshot?) {
val listOfObjects = ArrayList<T>()
data?.let {
data.children.forEach {
val child = it.getValue(aClass)
child?.let {
listOfObjects.add(child)
}
}
}
callback.invoke(listOfObjects, true)
}
override fun doTransaction(p0: MutableData?): Transaction.Result {
return Transaction.abort()
}
}
val transactionHandlerSuccess = object : Transaction.Handler { //for online load
override fun onComplete(p0: DatabaseError?, p1: Boolean, data: DataSnapshot?) {
countDownTimer?.cancel()
val listOfObjects = ArrayList<T>()
data?.let {
data.children.forEach {
val child = it.getValue(aClass)
child?.let {
listOfObjects.add(child)
}
}
}
callback.invoke(listOfObjects, false)
}
override fun doTransaction(p0: MutableData?): Transaction.Result {
return Transaction.success(p0)
}
}
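// Hedged completion (the posted snippet ends before this wiring-up step;
// reconstructed from the description below): if a timeout is set, schedule
// the abort-transaction as a fallback that returns cached data, then start
// the success-transaction, which completes only with a server response.
timeOutInMillis?.let { timeout ->
countDownTimer = object : CountDownTimer(timeout, timeout) {
override fun onTick(millisUntilFinished: Long) {}
override fun onFinish() {
databaseReference.runTransaction(transactionHandlerAbort)
}
}.start()
}
databaseReference.runTransaction(transactionHandlerSuccess)
}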
In the code, if a timeout is set, I set up a timer which will call the transaction with abort. This transaction will be called even when offline and will provide online or cached data (in this function there is a really high chance that the data is the cached one).
Then I call the transaction with success. onComplete will be called ONLY if we get a response from the Firebase database. We can then cancel the timer (if not null) and send the data to the callback.
This implementation makes the developer 99% sure whether the data is from the cache or from the server.
If you want to make it faster when offline (so you don't needlessly wait for the timeout when the database is obviously not connected), check whether the database is connected before using the function above:
DatabaseReference connectedRef = FirebaseDatabase.getInstance().getReference(".info/connected");
connectedRef.addValueEventListener(new ValueEventListener() {
@Override
public void onDataChange(DataSnapshot snapshot) {
boolean connected = snapshot.getValue(Boolean.class);
if (connected) {
System.out.println("connected");
} else {
System.out.println("not connected");
}
}
@Override
public void onCancelled(DatabaseError error) {
System.err.println("Listener was cancelled");
}
});
When working with persistence enabled, I counted the number of times the listener received a call to onDataChange() and stopped listening after 2 calls. It worked for me; maybe it helps:
private int timesRead;
private ValueEventListener listener;
private DatabaseReference ref;
private void readFB() {
timesRead = 0;
if (ref == null) {
ref = mFBDatabase.child("URL");
}
if (listener == null) {
listener = new ValueEventListener() {
@Override
public void onDataChange(DataSnapshot dataSnapshot) {
//process dataSnapshot
timesRead++;
if (timesRead == 2) {
ref.removeEventListener(listener);
}
}
@Override
public void onCancelled(DatabaseError databaseError) {
}
};
}
ref.removeEventListener(listener);
ref.addValueEventListener(listener);
}

I need to add a collection of values to the database. Which is the efficient way to do it?

1. Do I need to pass a class object to the Model method and process one object at a time?
E.g.
public async Task<int> SaveCollectionValues(Foo foo)
{
....
//Parameters
MySqlParameter prmID = new MySqlParameter("pID", MySqlDbType.Int32);
prmID.Value = foo.ID;
sqlCommand.Parameters.Add(prmID);
....
}
(OR)
2. Or shall I pass the whole collection to the Model method and use foreach to iterate through it?
public async Task<int> SaveCollectionValues(FooCollection foo)
{
....
//Parameters
foreach(Foo obj in foo)
{
MySqlParameter prmID = new MySqlParameter("pID", MySqlDbType.Int32);
prmID.Value = obj.ID;
sqlCommand.Parameters.Add(prmID);
....
}
....
}
I just need to know which of the above methods would be more efficient.
"Efficient" is a bit relative here, since you didn't specify which database; bulk-insert techniques differ from one DB to another. SQL Server, for instance, uses BCP, while MySQL has ways to disable some internals while sending many insert/update commands.
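For MySQL specifically, one common approach is a single multi-row INSERT instead of many single-row commands. A rough sketch (hypothetical table and column names; assumes an open MySqlConnection db, MySql.Data, and System.Linq):
// Builds: INSERT INTO foo (id) VALUES (@pID0), (@pID1), ...
var values = string.Join(", ", collection.Select((f, i) => $"(@pID{i})"));
using (var cmd = new MySqlCommand($"INSERT INTO foo (id) VALUES {values}", db))
{
    int i = 0;
    foreach (var foo in collection)
        cmd.Parameters.AddWithValue($"@pID{i++}", foo.ID);
    cmd.ExecuteNonQuery();
}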
Apart from that, if you're submitting a single collection at once and it should be handled as a single transaction, then the best option, for both code organization and SQL optimization, is to share the connection and use a single transaction object, as follows:
public void DoSomething(FooCollection collection)
{
using(var db = GetMyDatabase())
{
db.Open();
var transaction = db.BeginTransaction();
var success = true;
foreach(var foo in collection)
{
if (!DoSomething(foo, db, transaction))
{ transaction.Rollback(); success = false; break; }
}
// commit only when every item succeeded
if (success) { transaction.Commit(); }
}
}
public bool DoSomething(Foo foo, IDbConnection db, IDbTransaction transaction)
{
try
{
// create your command (use a helper?)
// set your command connection to db
// execute your command (don't forget to pass the transaction object)
// return true if it's ok (eg: ExecuteNonQuery > 0)
// return false it it's not ok
}
catch
{
return false;
// this might not work 100% fine for you.
// I'm not logging nor re-throwing the exception, I'm just getting rid of it.
// The idea is to return false because it was not ok.
// You can also return the exception through "out" parameters.
}
}
This way you have a clean code: one method that handles the entire collection and one that handles each value.
Also, although you're submitting each value separately, you're using a single transaction. Besides giving you a single commit (better performance), it means that if one item fails, the entire collection fails, leaving no garbage behind.
If you don't really need all that transaction stuff, just don't create the transaction and remove it from the second method. Keep the single connection, since that avoids resource overuse and connection overhead.
Also, as a general rule, I like to say: "Never open too many connections at once, especially when you can open a single one. Never forget to close and dispose of a connection unless you're using connection pooling and know exactly how that works".

How to remove item from ConcurrentDictionary after final ContinueWith finishes

First, could someone with 1500+ "reputation" please create a tag for "ContinueWith" (and tag this question with it)? Thanks!
Sorry for the length of this post but I don't want to waste the time of anyone trying to help me because I left out relevant details. That said, it may still happen. :)
Now the details. I am working on a service that subscribes to a couple of ActiveMQ queue topics. Two of the topics are somewhat related: one is a "company update" and one is a "product update". The "ID" for both is the CompanyID. The company topic includes the data in the product topic; this is required because other subscribers need the product data but don't want/need to subscribe to the product topic. Since my service is multi-threaded (a requirement beyond our discretion), as the messages arrive I add a Task to process each one in a ConcurrentDictionary using AddOrUpdate, where the update param is simply a ContinueWith (see below). This is done to prevent simultaneous updates, which could happen because these topics and subscribers are "durable": if my listener service goes offline (for whatever reason), we could end up with multiple messages (company and/or product) for the same CompanyID.
Now, my actual question (finally!) After the Task (whether just one task, or the last in a chain of ContinueWith tasks) is finished, I want to remove it from the ConcurrentDictionary (obviously). How? I have thought of and gotten some ideas from coworkers but I don't really like any of them. I am not going to list the ideas because your answer might be one of those ideas I have but don't like but it may end up being the best one.
I have tried to compress the code snippet to prevent you from having to scroll up and down too much, unlike my description. :)
nrtq = Not Relevant To Question
public interface IMessage
{
long CompanyId { get; set; }
void Process();
}
public class CompanyMessage : IMessage
{ //implementation, nrtq }
public class ProductMessage : IMessage
{ //implementation, nrtq }
public class Controller
{
private static ConcurrentDictionary<long, Task> _workers = new ConcurrentDictionary<long, Task>();
//other needed declarations, nrtq
public Controller(){//constructor stuff, nrtq }
public void StartSubscribers()
{
//other code, nrtq
_companySubscriber.OnMessageReceived += HandleCompanyMsg;
_productSubscriber.OnMessageReceived += HandleProductMsg;
}
private void HandleCompanyMsg(string msg)
{
try {
//other code, nrtq
QueueItUp(new CompanyMessage(message));
} catch (Exception ex) { //other code, nrtq }
}
private void HandleProductMsg(string msg)
{
try {
//other code, nrtq
QueueItUp(new ProductMessage(message));
} catch (Exception ex) { //other code, nrtq }
}
private static void QueueItUp(IMessage message)
{
_workers.AddOrUpdate(message.CompanyId,
x => {
var task = new Task(message.Process);
task.Start();
return task;
},
(x, y) => y.ContinueWith((z) => message.Process())
);
}
}
Thanks!
I won't "Accept" this answer for a while because I am eager to see if anyone else can come up with a better solution.
A coworker came up with a solution which I tweaked a little bit. Yes, I am aware of the irony (?) of using the lock statement with a ConcurrentDictionary. I don't really have the time right now to see if there would be a better collection type to use. Basically, instead of just doing a ContinueWith() for existing tasks, we replace the task with itself plus another task tacked on the end using ContinueWith().
What difference does that make? Glad you asked! :) If we had just done a ContinueWith(), then worker.Value.IsCompleted would return true as soon as the first task in the chain completed. However, by replacing the task with two (or more) chained tasks, as far as the collection is concerned there is only one task, and worker.Value.IsCompleted won't return true until all tasks in the chain are complete.
I admit I was a little concerned about replacing a task with itself+(new task), because what if the task happened to be running while it was being replaced? Well, I tested the living daylights out of this and did not run into any problems. I believe what is happening is that since the task is running in its own thread and the collection just holds a reference to it, the running task is unaffected. By replacing it with itself+(new task), we maintain the reference to the executing task and get the "notification" when it is complete, so that the next task can "continue" or IsCompleted returns true.
Also, the way the "clean up" loop works, and where it is located, means that we will have "completed" tasks hanging around in the collection, but only until the next time the "clean up" runs, which is the next time a message is received. Again, I did a lot of testing to see if I could cause a memory problem due to this, but my service never used more than 20 MB of RAM, even while processing hundreds of messages per second. We would have to receive some pretty big messages and have a lot of long-running tasks for this to ever cause a problem, but it is something to keep in mind as your situation may differ.
As above, in the code below, nrtq = not relevant to question.
public interface IMessage
{
long CompanyId { get; set; }
void Process();
}
public class CompanyMessage : IMessage
{ //implementation, nrtq }
public class ProductMessage : IMessage
{ //implementation, nrtq }
public class Controller
{
private static ConcurrentDictionary<long, Task> _workers = new ConcurrentDictionary<long, Task>();
//other needed declarations, nrtq
public Controller(){//constructor stuff, nrtq }
public void StartSubscribers()
{
//other code, nrtq
_companySubscriber.OnMessageReceived += HandleCompanyMsg;
_productSubscriber.OnMessageReceived += HandleProductMsg;
}
private void HandleCompanyMsg(string msg)
{
//other code, nrtq
QueueItUp(new CompanyMessage(message));
}
private void HandleProductMsg(string msg)
{
//other code, nrtq
QueueItUp(new ProductMessage(message));
}
private static void QueueItUp(IMessage message)
{
lock(_workers)
{
foreach (var worker in _workers)
{
if (!worker.Value.IsCompleted) continue;
Task task;
_workers.TryRemove(worker.Key, out task);
}
var id = message.CompanyId;
if (_workers.ContainsKey(id))
_workers[id] = _workers[id].ContinueWith(x => message.Process());
else
{
var task = new Task(y => message.Process(), id);
_workers.TryAdd(id, task);
task.Start();
}
}
}
}
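For comparison, an illustrative sketch (not from the original thread) of letting the final continuation remove its own entry: the ICollection cast performs an atomic remove that only succeeds when the key still maps to the task that just finished, so an older completion can never remove a newer chain:
var stored = _workers.AddOrUpdate(message.CompanyId,
    _ => { var t = new Task(message.Process); t.Start(); return t; },
    (_, existing) => existing.ContinueWith(x => message.Process()));
stored.ContinueWith(t =>
    // remove only if the entry still maps to the task that just completed
    ((ICollection<KeyValuePair<long, Task>>)_workers)
        .Remove(new KeyValuePair<long, Task>(message.CompanyId, t)));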

java.lang.IllegalStateException: trying to requery an already closed cursor android.database.sqlite.SQLiteCursor#

I've read several related posts and even posted an answer here, but it seems like I was not able to solve the problem.
I have 3 Activities:
Act1 (main)
Act2
Act3
When going back and forth Act1->Act2 and Act2->Act1 I get no issues
When going Act2->Act3 I get no issues
When going Act3->Act2 I get occasional crashes with the following error: java.lang.IllegalStateException: trying to requery an already closed cursor android.database.sqlite.SQLiteCursor#.... This is a ListView cursor.
What I tried:
1. Adding stopManagingCursor(currentCursor); to the onPause() of Act2, so I stop managing the cursor when leaving Act2 for Act3:
protected void onPause()
{
Log.i(getClass().getName() + ".onPause", "Hi!");
super.onPause();
saveState();
//Make sure you get rid of the cursor when leaving to another Activity
//Prevents: ...Unable to resume activity... trying to requery an already closed cursor
Cursor currentCursor = ((SimpleCursorAdapter)getListAdapter()).getCursor();
stopManagingCursor(currentCursor);
}
When returning back from Act3 to Act2 I do the following:
private void populateCompetitorsListView()
{
ListAdapter currentListAdapter = getListAdapter();
Cursor currentCursor = null;
Cursor tournamentStocksCursor = null;
if(currentListAdapter != null)
{
currentCursor = ((SimpleCursorAdapter)currentListAdapter).getCursor();
if(currentCursor != null)
{
//might be redundant, not sure
stopManagingCursor(currentCursor);
// Get all of the stocks from the database and create the item list
tournamentStocksCursor = mDbHelper.retrieveTrounamentStocks(mTournamentRowId);
((SimpleCursorAdapter)currentListAdapter).changeCursor(tournamentStocksCursor);
}
else
{
tournamentStocksCursor = mDbHelper.retrieveTrounamentStocks(mTournamentRowId);
}
}
else
{
tournamentStocksCursor = mDbHelper.retrieveTrounamentStocks(mTournamentRowId);
}
startManagingCursor(tournamentStocksCursor);
//Create an array to specify the fields we want to display in the list (only name)
String[] from = new String[] {StournamentConstants.TblStocks.COLUMN_NAME, StournamentConstants.TblTournamentsStocks.COLUMN_SCORE};
// and an array of the fields we want to bind those fields to (in this case just name)
int[] to = new int[]{R.id.competitor_name, R.id.competitor_score};
// Now create an array adapter and set it to display using our row
SimpleCursorAdapter tournamentStocks = new SimpleCursorAdapter(this, R.layout.competitor_row, tournamentStocksCursor, from, to);
//tournamentStocks.convertToString(tournamentStocksCursor);
setListAdapter(tournamentStocks);
}
So I make sure I invalidate the cursor and use a different one. I found out that when I go Act3->Act2, the system will sometimes use the same cursor for the ListView and sometimes a different one.
This is hard to debug, and I was never able to catch a crashing system while debugging. I suspect this has to do with the time it takes to debug (long) versus the time it takes to run the app (much shorter, no pauses due to breakpoints).
In Act2 I use the following Intent and expect no result:
protected void onListItemClick(ListView l, View v, int position, long id)
{
super.onListItemClick(l, v, position, id);
Intent intent = new Intent(this, ActivityCompetitorDetails.class);
intent.putExtra(StournamentConstants.App.competitorId, id);
intent.putExtra(StournamentConstants.App.tournamentId, mTournamentRowId);
startActivity(intent);
}
Moving Act1->Act2 and Act2->Act1 never gives me trouble. There I use startActivityForResult(intent, ACTIVITY_EDIT); and I am not sure: could this be the source of my trouble?
I would be grateful if anyone could shed some light on this subject. I am interested in learning some more about this subject.
Thanks, D.
I call this a two-dimensional problem: two things were responsible for this crash:
1. I used startManagingCursor(mItemCursor); where I shouldn't have.
2. I forgot to call initCursorAdapter() (for autocomplete) in onResume().
//@SuppressWarnings("deprecation")
private void initCursorAdapter()
{
mItemCursor = mDbHelper.getCompetitorsCursor("");
startManagingCursor(mItemCursor); //<= this is bad!
mCursorAdapter = new CompetitorAdapter(getApplicationContext(), mItemCursor);
initItemFilter();
}
Now it seems to work fine. I hope so...
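For clarity, a sketch of the corrected version implied by those two fixes:
private void initCursorAdapter() {
    mItemCursor = mDbHelper.getCompetitorsCursor("");
    // fix 1: no startManagingCursor() here
    mCursorAdapter = new CompetitorAdapter(getApplicationContext(), mItemCursor);
    initItemFilter();
}

@Override
protected void onResume() {
    super.onResume();
    initCursorAdapter(); // fix 2: re-initialize the adapter on resume
}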
Try this; it may work for you:
@Override
protected void onRestart() {
// TODO Auto-generated method stub
super.onRestart();
orderCursor.requery();
}
This also works:
if (Build.VERSION.SDK_INT < Build.VERSION_CODES.HONEYCOMB) {
startManagingCursor(cursor); // your cursor instance
}

Transaction when saving many object in Grails service

I am having a problem with transactions in Grails. I want to save a list of objects to the DB, checking a condition on each object. I want the whole process in one transaction: if the k-th object does not satisfy the checking condition, all previous objects (from the first to the (k-1)-th) will be rolled back from the DB. Here is my example:
static transactional = true
public void saveManyPeople() {
// ...
List<People> peoples = new ArrayList<People>();
for (int i = 0; i < n; i++) {
People newPeople = createPeopleFromRawData(); // return a people object in memory
if(<checking-condition>) {
newPeople.save(flush : false)
} else {
throw new MyCustomizedException() // MyCustomizedException extends RuntimeException
}
}
// ...
}
As you may see, I set the transactional variable to true, and I've tried using flush : true and flush : false, but it didn't work as I wanted. I've read this article: Rolling back a transaction in a Grails Service
The author recommends that the service method throw a RuntimeException; then the process will be rolled back. But if I want to throw another exception, what do I have to do?
Could you please give me some suggestions on this problem?
Thank you so much!
You can throw any exception that extends RuntimeException to roll back the transaction. Or you can use programmatic transactions, via withTransaction, to have more control over the transaction.
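For example, a minimal sketch of a custom exception that still triggers the rollback (any RuntimeException subclass works):
class MyCustomizedException extends RuntimeException {
    MyCustomizedException(String message) {
        super(message)
    }
}
Throwing this from the transactional service method rolls the transaction back exactly like a plain RuntimeException would.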
Could you verify that saveManyPeople() is within a Service and not a Controller?
The static transactional = true isn't respected in a Controller. I suspect that this is the issue.
If you need to have transactional support with the controller, you could always use DomainClass.withTransaction. Reference Documentation
Example:
Account.withTransaction { status ->
def source = Account.get(params.from)
def dest = Account.get(params.to)
def amount = params.amount.toInteger()
if(source.active) {
source.balance -= amount
if(dest.active) {
dest.balance += amount
}
else {
status.setRollbackOnly()
}
}
}
