Grails: What makes sure that version check and update is atomic? - grails

The update action in Grails first checks for the version of the object to be updated and then updates it.
What part of Grails ensures that the object isn't updated by another request during checking the version and updating the object?
Update:
Yes, Hibernate will check the version when saving the object and will throw an exception if optimistic locking fails. And I guess Hibernate will make sure that the check+update is atomic, but...
if you take a look at the Grails-generated update method, you'll find that Grails first double-checks the version itself and then (from my point of view) isn't prepared to handle the exception. The chances that Hibernate will throw an exception after the update method has already checked for the right version are small, but it seems possible to me.
So wouldn't it be enough to try a save and catch the exception (if there is one)?

It's managed by the Hibernate layer. It's called 'optimistic locking', and basically it updates an object only if it still has the last known version. Like:
UPDATE %table% SET
  %... fields ...%,
  version = version + 1                 -- update version to a new value
WHERE
  id = %obj id%                         -- current object
  AND version = %previous obj version%  -- last known version
Hibernate throws an exception when this update fails to match a row (by the way, at that moment it's hard to recover from the error; in most cases you just lose your update).
If you want to be sure that the data is saved, try to force the save (and check for saving/validation errors):
try {
    if (!obj.save(flush: true)) {
        // validation error
    }
} catch (OptimisticLockingFailureException e) {
    // not saved
}
Or even lock the data before the update (pessimistic locking). It's useful only when you have a lot of concurrent updates:
MyDomain obj = MyDomain.lock(params.id) //now it's locked for update
// update fields
obj.save()
See more details about GORM locking at http://grails.org/doc/latest/guide/GORM.html#locking
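The conditional UPDATE above is the whole trick: the version check and the write happen in one statement, so the database itself guarantees the check+update is atomic. Here is a minimal, hypothetical Java sketch of the same compare-and-bump semantics (plain Java, not Hibernate; all names are illustrative, and `synchronized` stands in for the atomicity the database gives a single statement):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical in-memory model of Hibernate's versioned UPDATE.
class VersionedStore {
    static class Row {
        long version;
        String data;
        Row(long version, String data) { this.version = version; this.data = data; }
    }

    private final Map<Long, Row> rows = new HashMap<>();

    void insert(long id, String data) { rows.put(id, new Row(0, data)); }

    long versionOf(long id) { return rows.get(id).version; }

    String dataOf(long id) { return rows.get(id).data; }

    // Returns true only if the caller still holds the latest version;
    // otherwise the update is rejected, which is where Hibernate would
    // throw its optimistic-locking exception.
    synchronized boolean update(long id, long expectedVersion, String newData) {
        Row row = rows.get(id);
        if (row == null || row.version != expectedVersion) {
            return false; // someone else committed first
        }
        row.data = newData;
        row.version++; // version = version + 1
        return true;
    }
}
```

A second writer that read version 0 before the first writer committed gets false back here; re-reading the row and retrying the change is then up to the application.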

Related

ambiguous for type lookup in this context when -com.apple.CoreData.ConcurrencyDebug 1 is active

The code below runs perfectly fine when I have the above argument turned off in my scheme. When it is turned on, I get a 'Group' is ambiguous for type lookup in this context crash on the line "let currentGroup = context.object(with: groupID) as? Group".
I've checked my project and there is no duplicate reference to Group NSManagedObject.
let context = CoreDataStack.shared.newPrivateContext()
if reset {
    AppDefault.current_ListGroup = nil
}
if let groupID = AppDefault.current_ListGroup,
   let currentGroup = context.object(with: groupID) as? Group {
    return currentGroup.objectID
} else {
Can someone help me figure out why it works with the .ConcurrencyDebug 1 off but crashes when it is on?
Thanks in Advance
When concurrency debugging is on, the app will crash any time you break the concurrency rules. Breaking the rules on its own doesn't always crash the app, but with debugging enabled you're saying that you want to crash as soon as you break the rules, even if the app would work normally without debugging. This is a good thing, because breaking the rules will probably make the app crash eventually, even if it doesn't happen right now.
How you're breaking the rules here is:
You're creating a new private queue context with newPrivateContext.
You're using that context without calling perform or performAndWait.
With a private queue context, you must use one of those functions whenever you use the context. Really, the only time you don't have to use one of them is when you're using main queue concurrency and you know your code is running on the main queue. You can sometimes get away without them if everything happens to line up just right, but concurrency debugging will stop you immediately. That's what you're seeing.

Recover from trigger ERROR state after Job constructor threw an exception?

When using Quartz.NET to schedule jobs, I occasionally receive an exception when instantiating a job. This, in turn, causes Quartz to set the job's trigger to an error state. When this occurs, the trigger will cease firing until some manual intervention occurs (restarting the service, since I'm using in-memory job scheduling).
How can I prevent the error state from being set, or at the very least, tell Quartz to retry triggers that are in the error state?
The reason for the exception is due to flaky network calls that are required to get configuration data that is passed in to the job's constructor. I'm using a custom IJobFactory to do this.
I've seen other references to this without resolutions:
https://groups.google.com/forum/#!topic/quartznet/8qaT70jfJPw
http://forums.terracotta.org/forums/posts/list/2881.page
For the record, I consider this a design flaw in Quartz. If a job can't be constructed once, that doesn't mean it can never be constructed. This is a transient error and should be treated as such. Stopping all future scheduled jobs violates the principle of least astonishment.
Anyway, my hack solution is to catch any errors that result from my job's construction and, instead of throwing or returning null, return a custom IJob that simply logs an error. This isn't perfect, but at least it doesn't prevent future triggering of the job.
public IJob NewJob(TriggerFiredBundle bundle, IScheduler scheduler)
{
    try
    {
        var job = this.container.Resolve(bundle.JobDetail.JobType) as IJob;
        return job;
    }
    catch (Exception ex)
    {
        this.logger.Error(ex, "Exception creating job. Giving up and returning a do-nothing logging job.");
        return new LoggingJob(this.logger);
    }
}
When an exception occurs while the trigger is instantiating the IJob class, the trigger changes its TRIGGER_STATE to ERROR, and a trigger in this state will no longer fire. To re-enable the trigger you need to change its state to WAITING, after which it can fire again.
Here is an example of how you can re-enable your misfired trigger:
var triggerKey = new TriggerKey("triggerKey", "triggerGroup");
if (scheduler.GetTriggerState(triggerKey) == TriggerState.Error)
{
    scheduler.ResumeTrigger(triggerKey);
}
Actually, the best way to reset a trigger from the ERROR state is:
private final SchedulerFactoryBean schedulerFactoryBean;

Scheduler scheduler = schedulerFactoryBean.getScheduler();
TriggerKey triggerKey = TriggerKey.triggerKey(triggerName, triggerGroup);
if (scheduler.getTriggerState(triggerKey).equals(Trigger.TriggerState.ERROR)) {
    scheduler.resetTriggerFromErrorState(triggerKey);
}
Note:
You should never manually modify records in tables owned by a third-party library or piece of software. All changes should be made through that library's API, if the functionality exists.
JobStoreSupport.resetTriggerFromErrorState
How can I prevent the error state from being set, or at the very least, tell Quartz to retry triggers that are in the error state?
Unfortunately, in the current version, you cannot retry those triggers. As per the Quartz documentation:
It should be extremely rare for this method to throw an exception - basically only the case where there is no way at all to instantiate and prepare the Job for execution. When the exception is thrown, the Scheduler will move all triggers associated with the Job into the ERROR state, which will require human intervention (e.g. an application restart after fixing whatever configuration problem led to the issue with instantiating the Job).
Simply put, you should follow good object-oriented practice: constructors should not throw exceptions. Try to move pulling the configuration data into the job's execution phase (the Execute method), where retries will be handled correctly. This might mean passing a service/func via the constructor that allows pulling the data.
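One way to sketch that advice: the constructor only stores a supplier, and the flaky network call happens during execution, where a failure fails only that run instead of flagging the trigger. This is plain Java with hypothetical names, not the Quartz API (a real job would implement the library's job interface and do this inside its execute method):

```java
import java.util.function.Supplier;

// Hypothetical sketch: the constructor must not throw, so it only
// stores a supplier for the flaky configuration lookup.
class ConfigFetchingJob {
    private final Supplier<String> configSupplier; // stands in for the flaky config service

    ConfigFetchingJob(Supplier<String> configSupplier) {
        this.configSupplier = configSupplier; // cheap, cannot fail
    }

    // Analogous to the job's execution phase: the network call and its
    // retries live here, where a failure affects only this firing.
    String execute(int maxAttempts) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return configSupplier.get();
            } catch (RuntimeException ex) {
                last = ex; // transient failure: retry
            }
        }
        throw new IllegalStateException(
            "config unavailable after " + maxAttempts + " attempts", last);
    }
}
```

In real Quartz(.NET) code the final failure could be rethrown as a job-execution exception and picked up again at the next firing, since per-execution failures don't move the trigger to the ERROR state the way instantiation failures do.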
To change the trigger state to WAITING, the author also suggests that one way could be to update the database manually:
[...] You might need to update database manually, but yeah - if jobs cannot be instantiated it's considered quite bad thing and Quartz will flag them as broken.
I created another job, scheduled at app startup, that updates the triggers in the error state to recover them:
UPDATE QRTZ_TRIGGERS SET [TRIGGER_STATE] = 'WAITING' WHERE [TRIGGER_STATE] = 'ERROR'
More information in this github discussion.

How to download Azure blob asynchronously only if it exists - in one step?

I want to asynchronously download a block blob from Azure storage, but only if the blob exists.
var blob = documentsContainer.GetBlockBlobReference(blobName);
if (await blob.ExistsAsync())
await blob.DownloadToStreamAsync(stream);
But this makes two HTTP calls, right? The common path in my app is that the blob will exist, so most of the time I don't want the overhead of the existence check. But I need to gracefully handle the case where the blob doesn't exist also.
I tried leaving the existence check out and just using a try/catch block. That works if I am using DownloadTextAsync, but when using DownloadToStreamAsync, if the blob isn't there, it just hangs.
Is there a way to download a binary blob to a stream asynchronously, only if it exists, without making two calls?
It turns out that it does properly throw the exception:
try
{
    var blob = documentsContainer.GetBlockBlobReference(blobName);
    await blob.DownloadToStreamAsync(stream);
    ...
}
catch (StorageException ex)
{
    if ((HttpStatusCode)ex.RequestInformation.HttpStatusCode == HttpStatusCode.NotFound)
    {
        return null; // exit the calling function
    }
    throw;
}
When I tried this originally, it hung at the DownloadToStreamAsync call. After the comments on the original question, I started checking versions and found a mismatch in Microsoft.Data.Services.Client.dll: I was using 5.6.1, but my test project somehow had 5.6.0. (I'm not sure where it pulled that from, as it's not in my solution at all.) After manually referencing Microsoft.Data.Services.Client 5.6.1 from the test project, it no longer hangs.

SMO Server connection not closed

I'm writing a C# application that upgrades client machines from one application version to another. The first step is to create a backup of a SQL database. I'm doing this using SMO, and it works fine. Next I uninstall a Windows service. Then I try to rename the database that I backed up, again using SMO. This fails because it says it can't gain exclusive access to the database.
When I look at the activity monitor, I can see that there are two connections to the database I'm trying to rename. One connection is the one I'm using to try to rename the database; the other is the one I used to back up the database. Its status is sleeping, but I'm assuming it is why I can't get exclusive access for the rename. I was kind of surprised to find the SMO objects don't implement IDisposable. I tried setting my Server object reference to null in case garbage collection might help, but that didn't work. The connections stay there until I quit the application.
So I have a couple of questions
How do I get rid of the first connection? I know it's possible, because it happens when my application shuts down.
Can I put the database in single-user mode, or force the rename some other way, using SMO?
Thanks
I got it to work by turning off pooling in my connection string (adding Pooling=false) and then calling Disconnect on the ServerConnection:
ServerConnection svrConn = null;
try
{
    string connString = Cryptographer.Decrypt(ConfigurationManager.ConnectionStrings["CS"].ConnectionString);
    svrConn = new Microsoft.SqlServer.Management.Common.ServerConnection(new System.Data.SqlClient.SqlConnection(connString));
    Server server = new Microsoft.SqlServer.Management.Smo.Server(svrConn);
    Backup backup = new Microsoft.SqlServer.Management.Smo.Backup();
    ...
    backup.SqlBackup(server);
}
catch (Exception ex)
{
    ...
}
finally
{
    if (svrConn != null)
        svrConn.Disconnect();
}
I think server.ConnectionContext.Disconnect would also work, but haven't tried it.

node.js process out of memory error

FATAL ERROR: CALL_AND_RETRY_2 Allocation Failed - process out of memory
I'm seeing this error and not quite sure where it's coming from. The project I'm working on has this basic workflow:
Receive XML post from another source
Parse the XML using xml2js
Extract the required information from the newly created JSON object and create a new object.
Send that object to connected clients (using socket.io)
Node Modules in use are:
xml2js
socket.io
choreographer
mysql
When I receive an XML packet, the first thing I do is write it to a log.txt file in case something needs to be reviewed later. I first use fs.readFile to get the current contents, then write the new contents plus the old. The log.txt file was probably around 2400 KB at the last crash, but upon restarting the server it works fine again, so I don't believe this to be the issue.
I don't see a packet in the log right before the crash happened, so I'm not sure what's causing the crash... No new clients connected, no messages were being sent... nothing was being parsed.
Edit
Seeing as Node runs constantly, should I be using delete <object> after every object I'm using has served its purpose, such as var now = new Date(), which I use to compare against things that happened in the past? Or the result object from step 3, after I've passed it to the callback?
Edit 2
I am keeping a master object so that if a new client connects, they can see past messages. Objects are deleted, though; they don't stay for the life of the server, just until they're completed on the client side. Currently, I'm doing something like this:
function parsingFunction(callback) {
    // Construct Object
    callback(theConstructedObject);
}

parsingFunction(function (data) {
    masterObject[someIdentifier] = data;
});
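A master object like this is a common leak source: if an entry is only removed when a client reports completion, any client that disconnects mid-stream leaves its entry behind forever, and the object grows without bound. One hedged sketch (all names and the cap are assumptions, not from the code above) is to evict the oldest entries past a cap in addition to deleting on completion:

```javascript
const masterObject = {};
const history = [];       // identifiers in insertion order, oldest first
const MAX_HISTORY = 1000; // assumed cap; tune for your traffic

function store(id, data) {
  if (!(id in masterObject)) history.push(id);
  masterObject[id] = data;
  // Evict oldest entries so masterObject can't grow without bound,
  // even if a client never reports completion.
  while (history.length > MAX_HISTORY) {
    delete masterObject[history.shift()];
  }
}

function complete(id) {
  // Client finished with this message: release it immediately.
  delete masterObject[id];
}
```

With a bounded history, a crashed or disconnected client costs at most MAX_HISTORY retained entries rather than an ever-growing heap.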
Edit 3
As another troubleshooting step, I dumped process.memoryUsage().heapUsed right before the parser starts and in the parser.on('end', function() {..}); handler, and parsed several XML packets. The highest heap used was around 10-12 MB throughout the test, although under normal conditions the program rests at about 4-5 MB. I don't think this is a deal breaker in itself, but it may help in finding the issue.
Perhaps you are accidentally closing over objects recursively. A contrived example:
function f() {
    var shouldBeDeleted = function(x) { return x }
    return function g() { return shouldBeDeleted(shouldBeDeleted) }
}
To find out what is happening, fire up node-inspector and set a breakpoint just before the suspected out-of-memory error. Then click on "Closure" (below Scope Variables near the right border). Perhaps if you click around, something will click and you'll realize what's happening.