Is it ever necessary to flush in a service? - grails

I know if a Grails service is transactional a call to save(flush: true) can be rolled back. My question is if there is ever a need to call flush whilst in a service.

It depends on the scenario. Ordinarily you won't need to flush every time you save something in a service class, because the session is flushed automatically when the transactional service method returns (i.e., when the transaction commits).
But consider a scenario where two separate Hibernate sessions are working at the same time and data in one depends on the other; then you would need to flush.
For example, if Session 2 needs to read data that Session 1 is concurrently updating, Session 1's changes have to be flushed to the underlying database before they become visible to Session 2.
You can control how transactions are handled at a finer granularity by annotating the service class (or individual methods) with @Transactional explicitly and specifying a propagation/isolation strategy where required.
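As an illustration (not from the original answer), here is a minimal Grails sketch of forcing changes out in their own transaction; the service and domain class names are hypothetical, and the annotation is the GORM one used in recent Grails versions:

    import grails.gorm.transactions.Transactional
    import org.springframework.transaction.annotation.Propagation

    @Transactional
    class AuditService {

        // REQUIRES_NEW runs this method in its own transaction, so the
        // audit row is committed (and visible to other sessions) even if
        // the caller's transaction later rolls back.
        @Transactional(propagation = Propagation.REQUIRES_NEW)
        void recordEvent(String event) {
            new AuditEntry(event: event).save(flush: true)
        }
    }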

If you are doing bulk inserts with Hibernate, you will want to flush (and clear) the session periodically to prevent an OutOfMemoryError, because the session's first-level cache keeps growing until it is flushed and cleared. Flushing writes the objects queued in the session cache to the database (in other words, it executes the SQL INSERTs), but those inserts still happen within the scope of the DB transaction, so they can be rolled back.
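A sketch of that batching pattern in Grails (assuming a hypothetical Customer domain class and a records list; the batch size of 50 is just a common choice):

    // Flush and clear every batchSize inserts so the session's
    // first-level cache stays small; everything still runs inside
    // one transaction and can be rolled back.
    int batchSize = 50
    Customer.withSession { session ->
        records.eachWithIndex { rec, i ->
            new Customer(name: rec.name).save()
            if (i > 0 && i % batchSize == 0) {
                session.flush()   // execute the queued INSERTs
                session.clear()   // evict persisted objects from memory
            }
        }
    }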
The Hibernate docs have further discussion on this topic.

Related

Rails ActiveRecord/Postgres single query timeout?

I have a logging query (a simple INSERT) that happens on every single request.
For this request only (the one that happens on every page load), I want to set the limit to 500 ms so that if the database is locked, slow, or down, the site isn't affected, i.e. it doesn't hang while it waits to connect/write.
Is there a way I can specify a timeout somehow on a per-query basis that I can abort the LoggedRequest.create! if it's taking too long?
I don't want to set it in my config because I have many other queries that shouldn't have timeouts that low.
I'm using Postgres 11.7.
I also don't know how I feel about setting a timeout for the entire session because I don't want that connection to be shared from the pool with other queries that can't have that timeout.
Rails 6 introduces event-based triggers for notifications, logging, etc. that come in very handy, provided you are using Rails 6 or can afford to migrate to it. Here's a useful post that demonstrates creating event-based triggers for notifications/logging: https://pramodbshinde.wordpress.com/2020/03/20/custom-events-tracking-with-activesupportnotifications-and-audited/
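For flavour, a minimal sketch of the ActiveSupport::Notifications subscription pattern that post builds on (the event name and payload keys are the standard ones ActiveRecord publishes; the threshold and log message are illustrative):

    # Subscribe to every SQL query ActiveRecord runs and warn on slow ones.
    ActiveSupport::Notifications.subscribe("sql.active_record") do |*args|
      event = ActiveSupport::Notifications::Event.new(*args)
      if event.duration > 500 # milliseconds
        Rails.logger.warn("Slow query #{event.payload[:name]}: #{event.duration.round(1)}ms")
      end
    end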
If, for some reason, you cannot use Rails 6, perhaps this article might help you find some answers: https://evilmartians.com/chronicles/the-silence-of-the-ruby-exceptions-a-rails-postgresql-database-transaction-thriller
If I were you, I would also consider using AJAX to make a fire-and-forget API request to the server for logging or anything else that is not critical to the normal functioning of the application.

Grails synchronize execution of service method

I am trying to add Files to a (Blog)Post Domain Object.
The file uploader sends multiple AJAX requests in parallel, one for each file.
How can I best take care of the synchronization on the server side?
To avoid:
Could not synchronize database state with session org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect):
I tried doing this in a Grails service with:

    static scope = "session"

This didn't do what I needed.

Can I Use Async to Perform Fire-and-Forget NHibernate Saves in an ASP.NET MVC Application?

I have an ASP.NET MVC application that uses NHibernate to persist data into a SQL Server database.
There are cases where I want to save an entry into a database (initially triggered by a call into an action method on a controller) but there's no need to block the caller.
Is it "safe" to try to implement a fire-and-forget mechanism into the database that will put the database call into a Task and then invoke it on the background so control can return immediately to the caller? (OR accomplish the same thing with BackgroundWorker or the "async/await" keywords) I need a solution where NHibernate will not get tripped up by ASP.NET trying to clean up its ISession, which is per-request. I'm using Autofac for lifetime management on the session. I assume that the database operation would have a slightly longer lifetime than the web request itself, and I'm not sure how smoothly that would work.
It is not safe to do this; I have a blog post on the subject. The problem is that when you have no requests in progress, it is possible for your entire AppDomain to be torn down. Also, consider what happens if the database insert fails for some reason: if you have already returned early, there is no way to notify the client of the error.
A reliable solution must store the data in some kind of persistent place before returning success to the caller. This can be directly in the database, or in a queue of some kind (to be later processed by an independent worker).
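A minimal sketch of that "persist first, process later" pattern in an ASP.NET MVC action, where _session is the Autofac-injected per-request ISession and QueueItem is a hypothetical NHibernate-mapped entity (the independent worker that drains the queue is elided):

    // The request handler only makes the work durable; a separate
    // worker process picks QueueItem rows up later.
    public ActionResult Log(string message)
    {
        using (var tx = _session.BeginTransaction())
        {
            _session.Save(new QueueItem { Payload = message, CreatedAt = DateTime.UtcNow });
            tx.Commit();   // durable before we return success
        }
        return new HttpStatusCodeResult(202);   // 202 Accepted: queued, not done
    }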

HTTP requests in transactions?

I have a model which sends an HTTP request to an external web service on creation, in order to look up some information to add before it is saved.
Currently I'm doing this in a before_create callback. I recently learned that before/after callbacks happen within database transactions.
Am I opening myself up to any issues such as limiting DB throughput by doing this? Would it be better to commit the record before sending the http request and then update the record when it returns?
As long as you keep a transaction open, all the locks it has acquired remain active. If you make a call to an external source that may stall you for a long time, be sure not to hold any unrelated locks in the same transaction.
In other words: don't put anything else into the same transaction.
If you don't mind the new row being visible before you look up the additional information, you might just commit and later update the row.
Or you fetch the information from the external web service before you even start the transaction. That would be the cleanest/fastest solution for the database.
PostgreSQL lock types.
How to view locks.
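A sketch of that fetch-before-the-transaction approach in Rails (ExternalService and create_enriched! are illustrative names, not from the original post):

    class Widget < ApplicationRecord
      # Do the slow HTTP call first, with no transaction open, then save;
      # the INSERT's transaction stays short and its locks are brief.
      def self.create_enriched!(attrs)
        extra = ExternalService.lookup(attrs[:code])
        create!(attrs.merge(extra))
      end
    end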

Design for long running ASP.NET MVC web request

I'm aware of the model that involves a scheduled task running in the background which runs jobs registered by a web request, but how about this for an idea that keeps everything within ASP.NET...
User uploads a CSV file with, perhaps, several thousand rows. The rows are persisted to the database. I think this would take maybe a minute or so which would be an acceptable wait.
Request returns to the browser and then an automatic Ajax request would go back to the server and request, say, ten rows at a time and process them. (Each row requires a number of web service requests.)
Ajax call returns, display is updated and then another automatic Ajax request goes back for more rows. This repeats until all rows are completed.
If user leaves the web page, then they could return and restart the job.
Any thoughts?
Cheers, Ian.
If I understand you correctly, you don't actually need any "interaction" between the background jobs and the long-running request; you just want to launch background jobs from incoming requests? Not such a good idea. Take a look at the Quartz.NET project: it is a scheduler embeddable into an ASP.NET application, and it will handle this for you without needing requests. Of course, if the app pool shuts down, your scheduler goes down too, but you can't guarantee that won't happen with your long-running-request solution either, since it depends on a browser waiting on the other side.
Also take a look at this interesting article from Phil Haack on the topic, with his own little scheduler library specific to ASP.NET:
http://haacked.com/archive/2011/10/16/the-dangers-of-implementing-recurring-background-tasks-in-asp-net.aspx
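For orientation, a minimal Quartz.NET sketch (this follows the async API of Quartz.NET 3.x; the job class and the one-minute interval are illustrative):

    using System.Threading.Tasks;
    using Quartz;
    using Quartz.Impl;

    public class ImportJob : IJob
    {
        public Task Execute(IJobExecutionContext context)
        {
            // process pending CSV rows here
            return Task.CompletedTask;
        }
    }

    public static class SchedulerBootstrap
    {
        public static async Task StartAsync()
        {
            IScheduler scheduler = await StdSchedulerFactory.GetDefaultScheduler();
            await scheduler.Start();

            IJobDetail job = JobBuilder.Create<ImportJob>()
                .WithIdentity("csv-import")
                .Build();
            ITrigger trigger = TriggerBuilder.Create()
                .StartNow()
                .WithSimpleSchedule(s => s.WithIntervalInMinutes(1).RepeatForever())
                .Build();

            await scheduler.ScheduleJob(job, trigger);
        }
    }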
A server-side program (or, ideally, a service) could still be quick and dirty and would be more reliable. You could still do step 1 as you have proposed: upload the file and insert the data (don't forget to increase the maxRequestLength and executionTimeout values in web.config). Then have a program running on the server that checks for new records and processes them.
If the user needs status, you could store an entry in the database for each file and update that record when the import is complete.
Maybe I'm reading the question and interpreting it in a weird way, but why couldn't you read the file into a database and track in a table the last line of the file you've completed? You could then follow your progress via the db and just send small JSON objects telling the user how far along you are. That way, if their connection drops you can keep processing their request, and if they return later you can notify them of how far along the job is. Also, if multiple clients are connecting, you can use the db to queue and throttle (by serializing) the workload; and if the user connects mid-job with another file, their new request will be queued up after their current job.
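A sketch of that progress-tracking idea in ASP.NET MVC (all names are illustrative, not from the original post): each upload gets a status row, the worker advances it as it processes, and a lightweight endpoint reports progress as JSON.

    using System;
    using System.Web.Mvc;

    public class ImportStatus
    {
        public int Id { get; set; }
        public int LastProcessedRow { get; set; }   // worker updates this as it goes
        public int TotalRows { get; set; }
    }

    public class ImportController : Controller
    {
        // Polled by the page to show done/total progress for one upload.
        public ActionResult Progress(int id)
        {
            ImportStatus status = LoadStatus(id);
            return Json(new { done = status.LastProcessedRow, total = status.TotalRows },
                        JsonRequestBehavior.AllowGet);
        }

        private ImportStatus LoadStatus(int id)
        {
            throw new NotImplementedException();   // data access elided
        }
    }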
