Concurrency control - Delphi

Hello
I would like to know the best way to implement concurrency control in a 3-tier application.
My first thought is:
A client wants to edit a record from a dataset.
It sends a request to the server asking for a lock on that record.
The server accepts or denies the edit request based on a lock table.
In this scenario each lock should hold a reference to both the locked record and the client using that record.
The client has to send periodic keep-alive messages to the server. The keep-alive is used to free locked records in case we lose the client in the middle of an editing operation.
I will be using Delphi with DataSnap. Maybe this is a novice question but I have to ask!!

I'm building on jachguate's Optimistic Concurrency Control answer to answer a question posed in comments.
I prefer to use OCC wherever I can because the implementation is easier. I'm going to talk about a three-tier app using an object persistence framework. There are three levels to my preferred scheme:
row or object level control, where a unique version ID is stored on each object. Every successful update changes the version ID automatically; if the version ID you submit doesn't match what's already there, your update fails (a minimal sketch follows this list).
field or column level locking. You send a complete copy of the original object as well as the updated one. Each field in your update has the actual and old values compared before the new value is applied. It's possible to ask the user to resolve the conflicts rather than discarding them, but this becomes messy as the amount of data in the commit increases.
pessimistic locking. Each object has a lock owner which is usually null (the object is not locked). When you want to edit the object you first lock it. The problem here is that locks need to be tidied up and the business rules around that can be ugly (what timeout is desirable).
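To make the row-level variant concrete, here is a minimal sketch under assumed names (a PRODUCT table with NAME and VERSION columns, a dbExpress TSQLQuery), not the actual framework code: the update only succeeds when the version we originally read is still current.
// Hypothetical row-level OCC check: VERSION is bumped on every update and the
// WHERE clause rejects the update if someone else changed the row first.
function TryUpdateProductName(Query: TSQLQuery; AId, AOldVersion: Integer;
  const ANewName: string): Boolean;
begin
  Query.SQL.Text :=
    'UPDATE PRODUCT SET NAME = :NAME, VERSION = VERSION + 1 ' +
    'WHERE ID = :ID AND VERSION = :OLD_VERSION';
  Query.ParamByName('NAME').AsString := ANewName;
  Query.ParamByName('ID').AsInteger := AId;
  Query.ParamByName('OLD_VERSION').AsInteger := AOldVersion;
  // ExecSQL returns the number of affected rows; 0 means the version moved on.
  Result := Query.ExecSQL > 0;
end;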
The advantage of this is that most of the time the low-cost OCC path is taken. For things that happen a lot but with low contention the benefits are significant. Think of product tracking in a warehouse - products move all the time, but very rarely do identical items move at the same time, and when they do resolving is easy (quantity left = original less my removal and your removal). For the complex case where (say) a product is relocated it probably makes sense to lock the product while it's in transit (because that mirrors the physical situation).
When you do have to fall back to locking, it's often useful to be able to notify both users and have a communication channel. At least notify the user who wants the lock when it becomes available; preferably allow them to send a message to the lock holder, and possibly even allow them to force the lock. Then notify the lock loser that "Jo Smith has taken your lock, you lose your changes". Let office politics sort that one out :)
I usually drive the fallback process by user complaints rather than bug reports. If users are complaining that they lose their edits too often in a particular process, change it. If users complain that records are locked too often, you will have to refactor your object mappings to increase lock granularity or make business process changes.

I design my applications with optimistic concurrency control in mind: I neither lock any record when a user wants to edit it nor try to control concurrency up front.
Important calculations and updates are done server side (application server or database), relying on the database's proper built-in locking while the updates applied by the client are processed. DataSnap's automatic transaction rollback prevents these locks from blocking other concurrent users in case of failure.
With DataSnap you have total control to prevent data loss when two users' edits collide, by using the ProviderFlags of your fields appropriately. Include pfInWhere in the flags of any field you want checked automatically: the update or delete only succeeds if the field still has the same value it had when the record was read.
Additionally, when a collision occurs you can react programmatically at the application server (the provider's OnUpdateError event), at the client (the TClientDataSet OnReconcileError event), or even ask the user for proper conflict resolution (take a look at the Reconcile Error Dialog in the New Items repository).
IMHO, by avoiding the complexity required to maintain lock lists, client lists, locks-per-client lists, keep-alive messages, robust application-server failure recovery and all the possible glitches, you'll end up with a cleaner and better solution.
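A minimal sketch of that setup on the application server (TServerModule, qryOrders and the field names are assumptions, not from the original post):
// Mark which fields take part in the generated WHERE clause. With the
// provider's UpdateMode left at the default upWhereAll, every field whose
// ProviderFlags include pfInWhere is compared against its original value.
procedure TServerModule.ConfigureConcurrencyChecks;
begin
  // key field: identifies the row and is checked
  qryOrders.FieldByName('ORDER_ID').ProviderFlags := [pfInKey, pfInWhere, pfInUpdate];
  // a field whose read-time value must still match for the update to apply
  qryOrders.FieldByName('STATUS').ProviderFlags := [pfInWhere, pfInUpdate];
end;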
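For example, on the client you can hand the collision to the standard reconcile dialog. This is only a sketch: it assumes you have added the Reconcile Error Dialog unit (RecError in most Delphi versions) from the repository, and TClientModule/cdsOrders are made-up names.
uses
  RecError; // unit generated by the "Reconcile Error Dialog" repository item

procedure TClientModule.cdsOrdersReconcileError(DataSet: TCustomClientDataSet;
  E: EReconcileError; UpdateKind: TUpdateKind; var Action: TReconcileAction);
begin
  // Show the old, new and conflicting values and let the user decide whether
  // to skip, abort, merge, correct or refresh the offending record.
  Action := HandleReconcileError(DataSet, UpdateKind, E);
end;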

The approach given by jachguate is great, and probably better, but in case you do want to implement this, you will need a TThreadList on the server that is created when the service is started. Use a TThreadList because it's thread-safe. You can have one TThreadList per table so that you minimize the performance hit of navigating the lists.
To control what's locked, you'll need an object that's created and added to the list:
TLockedItem = class(TObject)
public
  iPK: Integer;       // primary key of the locked record
  iClientID: Integer; // client currently holding the lock
end;
To do the actual locking, you'd need something like this:
function LockItem(pPK, pClientID: Integer): Boolean;
var
  oLockedItem: TLockedItem;
  oInternalList: TList;
  iCont: Integer;
  bExists: Boolean;
begin
  // oLockedList is the per-table TThreadList created at service start-up.
  bExists := False;
  if Assigned(oLockedList) then
  begin
    // Do the lookup and the insert inside a single LockList/UnlockList pair,
    // so no other thread can grab the same record between the check and the add.
    oInternalList := oLockedList.LockList;
    try
      iCont := 0;
      while (not bExists) and (iCont < oInternalList.Count) do
      begin
        oLockedItem := TLockedItem(oInternalList[iCont]);
        if oLockedItem.iPK = pPK then
          bExists := True
        else
          Inc(iCont);
      end;
      if not bExists then
      begin
        oLockedItem := TLockedItem.Create;
        oLockedItem.iPK := pPK;
        oLockedItem.iClientID := pClientID;
        oInternalList.Add(oLockedItem);
      end;
    finally
      oLockedList.UnlockList;
    end;
  end;
  // True means the record was already locked by someone else (lock denied).
  Result := bExists;
end;
That's just an idea of what you'd need. You would have to write an unlock method with similar logic (a rough sketch follows). You'd probably also need a list of clients, keeping a pointer to each TLockedItem held by each client, in case of a lost connection. This is not a definitive answer, just a push in the right direction, in case you want to implement this approach.
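Here is a rough sketch of such an unlock method, under the same assumptions (oLockedList is the per-table TThreadList); only the client that took the lock can release it.
function UnlockItem(pPK, pClientID: Integer): Boolean;
var
  oLockedItem: TLockedItem;
  oInternalList: TList;
  iCont: Integer;
begin
  Result := False;
  if Assigned(oLockedList) then
  begin
    oInternalList := oLockedList.LockList;
    try
      for iCont := 0 to oInternalList.Count - 1 do
      begin
        oLockedItem := TLockedItem(oInternalList[iCont]);
        if (oLockedItem.iPK = pPK) and (oLockedItem.iClientID = pClientID) then
        begin
          oInternalList.Delete(iCont); // remove the lock entry and free it
          oLockedItem.Free;
          Result := True;
          Break;
        end;
      end;
    finally
      oLockedList.UnlockList;
    end;
  end;
end;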
Good luck

Related

'cursor is in BOF position' error

In the BeforeDelete event of the table I have:
procedure TData_Module.MyTableBeforeDelete(DataSet: TDataSet);
begin
  if MessageDlg('Are you sure you want to delete the record written by ' +
       QuotedStr(MyTable.FieldByName('written_by_who').AsString),
       mtConfirmation, [mbYes, mbNo], 0) = mrYes then
    MyTable.Delete;
end;
However, on trying I get the error :
...cursor is in BOF position
What's wrong? The database is Accuracer.
Edit: the code above is the entire code :)
As I said in comments, it sounded from your q as if you were calling Delete within the dataset's BeforeDelete event. Don't do that, because the BeforeDelete event occurs while the dataset is already in the process of deleting the record, so deleting it yourself pulls the rug out from under the dataset's built-in code.
So, if you want the user's confirmation, get it before calling Delete, not inside the BeforeDelete event. Btw, the standard TDBNavigator has a ConfirmDelete property associated with its integrated DeleteRecord button which will do exactly that, i.e. pop up a confirmation prompt to the user.
More generally, people frequently create problems for themselves by trying to perform actions within a dataset event which are inappropriate to the state the dataset is in when the event code is called. TDataSet has a very carefully designed flow of logic to its operations, and it is rather too easy for the inexperienced or unwary programmer to subvert that logic by doing something in a TDataSet event that shouldn't be there. One example is calling Delete inside an event. Another, frequently encountered one is executing code which moves the dataset's cursor inside an event that's called as a result of moving the cursor, like the AfterScroll event, f.i.
There's no easy rule to say what you should and shouldn't do inside a dataset event, but generally speaking, any action you take which attempts to change the dataset's State property (see TDataSetState in the OLH) should be your prime suspect when you find that something unexpected is happening.
I suppose another general rule, with exceptions that are usually clear from the descriptions of events in the OLH, is that dataset events should generally be used for reacting to the event by doing something to some other object, like updating a status panel, rather than doing something to the dataset. F.i. the AfterScroll event is very useful for reacting to changes when the dataset's cursor is moved. The exceptions are events like BeforePost which are intended to allow you the opportunity to do things (like validate changes to the dataset's fields).
Btw, you can call Abort from inside the BeforeDelete event and it will prevent the deletion. However, imo, it's cleaner and tidier to check whether a deletion should go ahead, and to plan and code for its consequences, before starting it rather than having to back out part way through. So, with respect, I disagree with the other answer. The time to decide whether to cross a bridge is before you start, not when you're already part way across it. Ymmv, of course.
The question is in the right place. Asking before the delete is wrong because it forces you to ask the question every time you call Delete. A more correct answer is to abort the delete here, in the BeforeDelete event, if the user doesn't want to delete (see the sketches below).
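For reference, hedged sketches of both approaches discussed above, using the names from the question (DeleteCurrentRecord is a made-up helper): either ask for confirmation before calling Delete, or keep the prompt in BeforeDelete and veto the deletion with Abort. In neither case should BeforeDelete call Delete itself.
// Approach 1: confirm first, then call Delete (e.g. from a button handler).
procedure TData_Module.DeleteCurrentRecord;
begin
  if MessageDlg('Are you sure you want to delete the record written by ' +
       QuotedStr(MyTable.FieldByName('written_by_who').AsString),
       mtConfirmation, [mbYes, mbNo], 0) = mrYes then
    MyTable.Delete;
end;

// Approach 2: confirm inside BeforeDelete and call Abort to cancel the
// deletion that is already in progress. Do not call Delete here.
procedure TData_Module.MyTableBeforeDelete(DataSet: TDataSet);
begin
  if MessageDlg('Delete this record?', mtConfirmation, [mbYes, mbNo], 0) <> mrYes then
    Abort;
end;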

List all queries connected through ado connection

I have an application with an ADO connection on its main form and several plugins whose ADO queries I connect to this main connection. One problem is that I can't properly design those plugins without their own connection, which becomes messy when I connect the plugins to the main app. One plugin has plenty of queries.
I can use ConnectionObject to pass a plugin's queries through the main connection, but this is inconvenient for me because, when the main connection needs to reconnect, I can't automatically reconnect all the queries. So I have to reassign those plugins' Connection property to the main connection after plugin creation.
I know that one can list all active queries using ADOConnection's DataSets property. But what property should I use if I want to list both active and inactive DataSets? The IDE lists them automatically in designer, so I think there should be a generic way to do this.
Perhaps the documentation regarding TADOConnection.DataSets, which can be found here, has confused you.
It says:
Use DataSets to access active datasets associated with a connection component.
This might lead you to think that DataSets holds only active datasets, which is not the case. To test this, just put one TADOConnection and one TADOQuery component on a form and set TADOQuery.Connection to the instance of your connection, for example ADOConnection1.
To verify that the DataSets property also keeps inactive datasets, you might use this code:
procedure TForm1.FormCreate(Sender: TObject);
var
  i: Integer;
begin
  for i := 0 to ADOConnection1.DataSetCount - 1 do
  begin
    if not ADOConnection1.DataSets[i].Active then
      ShowMessage('Inactive dataset!');
  end;
end;

Do I need to free dynamically created forms?

If I dynamically create a TForm using TForm.CreateNew(Application) to make a free-floating window do I have to keep track of these objects and free them upon Application close?
Or will delphi automatically free all forms on Application close?
Also, what happens if I have a free-floating dynamically created form and the user hits the close button? Do I need to call some code somewhere to free those?
If so, how? I assume I can't place it in any of the form's events.
Just to touch on an important point not covered in the other answers...
Yes, when you create a form (or any other component) with an owner, it will be destroyed when the owner is destroyed.
But, very NB: This does not mean you won't have a leak. To clarify:
If every time you create a form you set Application as the owner, then the forms will be destroyed by the Application object when your app closes.
But (unless you code something extra), those forms will be destroyed only when the application closes.
In other words, if every time your user selects a particular menu item you create a particular form owned by the application, then more memory will be consumed over time. Depending how much memory is used each time you create the form, your application may run out of memory.
So, provided you don't keep re-creating the objects, the model is perfectly acceptable. However, this means that you do want to keep track of these objects. But not so you can free them yourself, rather so you can re-Show them instead of re-creating them.
Also to cover some of the other points in your question:
What happens if I have a free-floating, dynamically created form and the user hits the close button? Do I need to call some code somewhere to free those?
You don't need to free them if next time your user shows the form you reuse the existing instance. If you're going to create a new instance, then you should free the form when it's closed. Otherwise all the old instances only get destroyed when your application closes.
How? I assume I can't place it in any of the form's events.
It just so happens that Delphi provides an ideal event: OnClose. If you hook that event, then you can set the var Action: TCloseAction to indicate what should happen when the form closes. By default:
An MDI form will be minimised (caMinimize).
And an SDI form will be hidden (caHide).
You can change this to destroy the form (caFree).
NOTE: If you do decide to destroy the form, be careful to not try reusing it after it has been destroyed. Any variable you had that pointed to the form will be pointing to the same location in memory, but the form is no longer there.
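A minimal sketch of that caveat (FFloatingForm is a hypothetical field on the main form that points at the floating window): free the form when it closes, and clear the reference so it is never reused after destruction.
procedure TMainForm.FloatingFormClose(Sender: TObject; var Action: TCloseAction);
begin
  Action := caFree; // destroy the form instead of merely hiding it
end;

procedure TMainForm.FloatingFormDestroy(Sender: TObject);
begin
  if Sender = FFloatingForm then
    FFloatingForm := nil; // callers must test Assigned(FFloatingForm) before reuse
end;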
Do I need to free dynamically created forms?
No you don't, except if you create the form with TForm.CreateNew(nil), that is passing no owner to the constructor.
The parameter for CreateNew is the owner; if the owner (Application/Form/whatever you like) gets destroyed, all owned objects will be freed. Give it a try:
procedure TMainfom.Button1Click(Sender: TObject);
begin
  with TForm.CreateNew(Application) do
  begin
    OnClose := MyAllFormClose;
    OnDestroy := AllMyFormDestroy;
    Show;
  end;
end;

procedure TMainfom.AllMyFormDestroy(Sender: TObject);
begin
  ShowMessage(':( I''m going to be destroyed');
end;

procedure TMainfom.MyAllFormClose(Sender: TObject; var Action: TCloseAction);
begin
  // uncomment if you want the form freed already on close
  // Action := caFree;
end;
The parameter you passed to CreateNew is the owner of the component. When a component's owner is destroyed, it destroys all the components that it owns. So, the application object is the owner of your form. When the application closes, the application object is destroyed. And so it destroys all of its owned components. Including your dynamically created form.
I don't know why it is that people seem to think forms are some kind of magical entity that have their own unique behavior and rules.
Forms are regular objects derived from TForm and TCustomForm, which are ultimately derived from TObject, just like every other class in Delphi.
If you create anything derived from TObject dynamically, it gets destroyed when the application terminates. However, if you fail to destroy it yourself and leave it to the system, that's generally regarded as a "memory leak". It's not a big deal for programs that run once and terminate quickly. But for things that users will leave open for hours or days on end, memory leaks can become quite an annoyance.
As mentioned earlier, TForms have an OnClose event that has a parameter Action. You can set Action to caFree and the form will be destroyed upon return to the Show or ShowModal call that displayed it. But if you use it, you'll need to create the form object yourself rather than use the default auto-create mechanism.
Other types of objects don't have this, like TStringList. You just need to practice "safe object management" and ensure that objects that get created also get destroyed in a timely manner, including forms. This can get into a whole rat's nest of a discussion about stuff related to garbage collection, Interfaces, and a whole lot of related stuff. Suffice it to say, you need to be aware of the options and manage your object lifetimes appropriately, rather than just leaving them to get destroyed when the application terminates.
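As an illustration of that "safe object management" for non-form objects, here is the usual create/try/finally/free pattern (SaveReport is just a made-up example):
// TStringList lives in the Classes unit.
procedure SaveReport(const AFileName: string);
var
  Lines: TStringList;
begin
  Lines := TStringList.Create;
  try
    Lines.Add('report contents...');
    Lines.SaveToFile(AFileName);
  finally
    Lines.Free; // freed right here, not left for application shutdown
  end;
end;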

Is it possible to modify data in a client dataset without changing current record?

I have a TClientDataSet which stores data coming from a medical instrument. This client dataset is linked to a grid to display data in real time. My problem is that when the user is editing the data and the instrument sends a new packet, the data which the user has modified but not yet posted is lost, because all I can do is get a TBookmark on the current record, append the new record, and then go to the saved bookmark (which is sometimes not the correct record, apparently because of the new record). I could check the dataset's State, Post if necessary, and then restore the State afterwards, but I'm looking for a way to update data in the client dataset without affecting its State. Is this even possible?
Clone the dataset and modify the data on the clone.
A document on it by Cary Jensen is here: http://edn.embarcadero.com/article/29416
Basically you need something like
var
  lEdDataset: TClientDataSet;
begin
  lEdDataset := TClientDataSet.Create(nil);
  try
    lEdDataset.CloneCursor(SourceDataSet, True); // see the note below about the second parameter
    StoreMedDeviceRecord(lEdDataset);
  finally
    lEdDataset.Free;
  end;
end;
Note: you'll need to read the documentation on the True/False settings of CloneCursor and decide what you actually need (I can't remember off-hand).
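For illustration only, StoreMedDeviceRecord might look something like this (the field name is made up). Because the clone has its own cursor and state, posting through it shouldn't disturb the record the user is editing in the grid-linked dataset:
procedure StoreMedDeviceRecord(AClone: TClientDataSet);
begin
  AClone.Append;
  AClone.FieldByName('MEASUREMENT').AsFloat := 1.23; // value taken from the instrument packet
  AClone.Post; // the new row becomes visible in the original dataset too
end;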

How long does a TDataset bookmark remain valid?

I have code like the below in a project I'm working on.
procedure TForm.EditBtnClick(Sender: TObject);
begin
  // Mark is a form variable. It's private.
  Mark := cdsMain.GetBookmark;
  // blabalbal
  .
  .
  .
end;

procedure TForm.OkBtnClick(Sender: TObject);
var
  mistakes: Integer;
begin
  // Validation stuff and transaction control
  // removed to not clutter the code
  if cdsMain.ChangeCount <> 0 then
    mistakes := cdsMain.ApplyUpdates(-1);
  cdsMain.Refresh;
  try
    cdsMain.GotoBookmark(Mark);
    // Yes, I know I would have to call FreeBookmark
    // but I'm just reproducing
  except
    cdsMain.First;
  end;
end;
Personally, I do not use bookmarks much, except to reposition a dataset where I have only moved the cursor position (to create a listing, fill a string list, etc.). If I Refresh, update (especially when a filter can make the record invisible), refetch (Close/Open) or do anything else that modifies the data in the dataset, I don't use bookmarks. I prefer to Locate on the primary key (using a TClientDataSet, of course) or to requery with modified parameters.
Until when is a bookmark valid? Until a Refresh? Until a Close/Open is done to refetch data? Where does the safe zone end?
Consider in your answer that I'm using a TClientDataSet with a TSQLQuery (dbExpress).
Like both c0rwin and skamradt already mention: the bookmark behaviour depends on the TDataSet descendant you use.
In general, bookmarks become invalid during:
close/open
refresh (on datasets that support it)
data changes (sometimes only deletions)
I know 1. and 2. can invalidate your bookmarks in TClientDataSets. I am almost sure that for TClientDataSets it does not matter which underlying provider is used (TSQLQuery, TIBQuery, etc).
The only way to make sure what works and what not is testing it.
Which means you are totally right in not using them: bookmarks have an intrinsic chance of being unreliable.
To be on the safe side, always call BookmarkValid before going to a bookmark.
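Using the names from the question, that safety check might look like this (with the bookmark freed afterwards, as the question already notes it should be):
// restore the position saved in EditBtnClick, falling back to First
if (Mark <> nil) and cdsMain.BookmarkValid(Mark) then
  cdsMain.GotoBookmark(Mark)
else
  cdsMain.First;
if Mark <> nil then
begin
  cdsMain.FreeBookmark(Mark);
  Mark := nil;
end;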
Personally I rarely ever use bookmarks. I instead use the id of the record I am viewing and perform a locate on it once the refresh is complete. If I need to iterate over all of the records in the set, I do that using a clone of the tClientDataset (which gets its own cursor).
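A sketch of that id-based alternative (the ID field name and the LastID variable are assumptions): remember the key, refresh, then Locate instead of relying on a bookmark.
var
  LastID: Integer;
begin
  LastID := cdsMain.FieldByName('ID').AsInteger;
  cdsMain.Refresh;
  // reposition on the same record; fall back to First if it disappeared
  if not cdsMain.Locate('ID', LastID, []) then
    cdsMain.First;
end;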
It is my understanding that the implementation of bookmarks is up to the vendor of the TDataSet descendant and can vary between implementations. In my very simple dataset (tBinData), I implemented bookmarks as the physical record number, so they persist between refreshes as long as the record is not deleted. I cannot say this holds true for all implementations.
TDataSet implements virtual bookmark methods. While these methods ensure that any dataset object derived from TDataSet returns a value if a bookmark method is called, the return values are merely defaults that do not keep track of the current location. Descendants of TDataSet, such as TBDEDataSet, reimplement the bookmark methods to return meaningful values as described in the following list:
BookmarkValid, for determining if a specified bookmark is in use.
CompareBookmarks, to test two bookmarks to see if they are the same.
GetBookmark, to allocate a bookmark for your current position in the dataset.
GotoBookmark, to return to a bookmark previously created by GetBookmark
FreeBookmark, to free a bookmark previously allocated by GetBookmark.
Get it from here