I was confused when I read this article by Zarko Gajic today:
"Multithreaded Delphi Database Queries"
Article URL: http://delphi.about.com/od/kbthread/a/query_threading.htm
Source code: http://delphi.about.com/library/weekly/code/adothreading.zip
In the code of the "TCalcThread.Execute" procedure, why do the following lines not need to be placed inside a Synchronize() call in order to run?
Line 173: ListBox.Clear;
Line 179: ListBox.Items.Insert(......);
Line 188: ListBox.Items.Add('*---------*');
Line 195: TicksLabel.Caption := 'Ticks: ' + IntToStr(ticks);
These lines operate on VCL components and perform UI updates. To my knowledge, such operations should be synchronized and executed by the main thread. Is my understanding flawed?
This is a rare case where you're benefiting from the fact that Windows is doing the thread synchronization for you. The reason is that for a listbox, the items are manipulated using SendMessage with control-specific messages. Because of this, each SendMessage call makes sure the message is processed by the thread on which the control was created, namely the main thread.
Like I said, this is a rare case. It is also causing a thread switch for each of those three calls, which will degrade performance. You're still better off using Synchronize to force that block of code to run in the main thread where it belongs. It also ensures that if you begin working with a control that doesn't internally use SendMessage, you won't get bitten.
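For illustration, a minimal sketch of what that could look like using the anonymous-method overload of Synchronize (available since Delphi 2009; in older versions, as in the article, the code would be moved into a parameterless method of the thread class). ListBox, TicksLabel and ticks are taken from the question; the surrounding calculation is elided:

procedure TCalcThread.Execute;
var
  ticks: Cardinal;
begin
  ticks := 0;
  // ... calculation that updates ticks ...
  Synchronize(
    procedure
    begin
      // All VCL access now happens in the context of the main thread.
      ListBox.Clear;
      ListBox.Items.Add('*---------*');
      TicksLabel.Caption := 'Ticks: ' + IntToStr(ticks);
    end);
end;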
Indeed. Maybe the sample isn't problematic because there are no other UI changes while the thread is executing. But UI work always has to happen in the UI thread.
The only differences I see between the sync'ed and the not sync'ed instructions are:
the not-sync'ed instructions are not parameterless methods, so wrapping them in Synchronize would make the program more difficult to write :)
the sync'ed method is updating a TLabel, which is not a windowed control (if I remember my Delphi days correctly), so it draws on a canvas directly rather than going through window messages...
But anyway: the UI is touched by a single thread. Always. Once I wanted to update a TTreeView from inside a thread (no parallelism nor cross-updates, simply a separate thread) and it was a very bad idea (random errors)...
The wikibook on F# has a small section that says:
What does let! do?
let! runs an async<'a> object on its own thread, then it immediately
releases the current thread back to the threadpool. When let! returns,
execution of the workflow will continue on the new thread, which may
or may not be the same thread that the workflow started out on.
I have not found this fact stated anywhere else, in books or on the web.
Is this true for all let!/do! regardless of what the async object contains (e.g. Thread.Sleep()) and how it is started (e.g. Async.Start)?
Looking in the F# source code on github, I wasn't able to find the place where a call to bind executes on a new (TP) thread. Where in the code is the magic happening?
Which part of that statement do you find surprising? That parts of a single async can execute on different threadpool threads, or that a threadpool thread is necessarily being released and obtained on each bind?
If it's the latter, then I agree - it sounds wrong. Looking at the code, there are only a few places where a new work item is being queued on the threadpool (namely, the few Async module functions that use queueAsync internally), and Async.SwitchToNewThread spawns a non-threadpool thread and runs the continuation there. A bind alone doesn't seem to be enough to switch threads.
The spirit of the statement however seems to be about the former - no guarantees are made that parts of an async block will run on the same thread. The exact thread that you run on should be treated as an implementation detail, and when you yield control and await some result, you can be pretty sure that you'll land on a different thread at least some of the time.
No. An async operation might execute synchronously on the current thread, or it might wind up completing on a different thread. It depends entirely on how the async API in question is implemented.
See Do the new C# 5.0 'async' and 'await' keywords use multiple cores? for a decent explanation. The implementation details of F# and C# async are different, but the overall principles are the same.
The builder that implements the F# async computation expression is here.
Do I need to call CoInitialize in the main/VCL thread in Delphi
before using ShellExecuteEx?
For a worker thread, yes, but what about the main/VCL thread?
No need to call CoInitialize in a VCL forms application.
This is done for you in the main thread.
More specifically, in TApplication.Create in Forms.pas:
...
if not IsLibrary then
FNeedToUninitialize := Succeeded(OleInitialize(nil));
...
If in doubt, do it. In either case, CoInitialize() returns an HRESULT which you should check, because you need to call CoUninitialize() when SUCCEEDED(hr), but not when FAILED(hr). A failed result usually indicates that COM has already been initialized on that thread with a different threading model.
Cited from your MSDN ref:
Nonetheless, it is good practice to always initialize COM before using
this function.
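A minimal sketch of that pattern, for instance at the top of a worker thread's Execute method (the thread class name is illustrative; CoInitialize/CoUninitialize live in the ActiveX unit, Winapi.ActiveX in newer versions):

procedure TWorkerThread.Execute;
var
  hr: HRESULT;
begin
  hr := CoInitialize(nil);   // or CoInitializeEx(nil, COINIT_APARTMENTTHREADED)
  try
    // ... use ShellExecuteEx or other COM-dependent APIs here ...
  finally
    // S_OK and S_FALSE both need to be balanced by CoUninitialize;
    // a failed result (e.g. RPC_E_CHANGED_MODE) must not be.
    if Succeeded(hr) then
      CoUninitialize;
  end;
end;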
In the RTL/VCL source, COM is initialized in the following ways:
1. By a call to OleInitialize made from Forms.TApplication.Create. So this call will be made for all VCL forms applications, but not, for example, for service applications.
2. By a call to CoInitialize or CoInitializeEx in ComObj.InitComObj. This is registered as an InitProc in the initialization section of the ComObj unit. In turn, the call to Application.Initialize in your project .dpr file's code will invoke ComObj.InitComObj.
3. In many and various other locations around the RTL/VCL, including, but not limited to, Datasnap, ComServ, Soap, System.Win.Sensors, Winapi.DirectShow9. Some of these areas of code are more recent than Delphi 7.
Now, of these various COM initializations, the ones that count are 1 and 2. In any standard VCL forms application, both of these will run at startup in the main thread. Item 1 runs first and so gets to initialize COM first. That's the initialization that counts. Item 2 runs after and returns S_FALSE meaning that COM was already initialized.
So, to your question:
Do I need to call Coinitialize in the main/VCL thread?
No you do not. You can be sure that COM has already been initialized in a VCL application's main thread.
I have recently been playing around with an open-source demo project covering the basic functionality of the Indy10 TCP/IP server, and I stumbled upon the problem of Indy's internal multithreading and its interaction with VCL components. Since there are many different topics on SO on the subject, I decided to make a simple client-server application and test some of the solutions and approaches suggested, at least the ones that I understood correctly. Below I would like to summarize and review an approach that was previously suggested on SO, and if possible hear your expert opinion on the subject.
Problem: Encapsulating the VCL for thread-safe usage inside an Indy10-based client/server application.
Description of the Development Env.:
Delphi Version: Delphi® XE2 Version 16.0
INDY Version 10.5.8.0
O.S. Windows 7 (32Bit)
As mentioned in the article "Is the VCL Thread-safe?" (sorry, I do not have enough reputation to post the link), special care should be taken when one wishes to use any kind of VCL component inside a multithreaded application. The VCL is not thread-safe, but it can be used in a thread-safe way!
The how and the why usually depend on the application at hand, but one can attempt to generalize a bit and suggest some kind of general approach to this problem. First of all, as in the case of Indy10, one does not need to parallelize one's code explicitly, i.e. create and execute multiple threads, in order to expose the VCL to deadlocks and data interdependencies.
In every client-server application, the server has to be able to handle multiple requests simultaneously, so naturally Indy10 implements this functionality internally. This means that the Indy10 classes are responsible for managing the program's thread creation, execution and destruction internally.
The most obvious place where our code is exposed to the inner workings of Indy10, and hence to possible thread conflicts, is the IdTCPServerExecute method (the TIdTCPServer.OnExecute event handler).
Naturally, Indy10 provides classes (wrappers) that ensure thread-safe program flow, but since I did not manage to find enough explanation of their application and usage, I prefer a custom-made approach.
Below I summarize a method (the suggested technique is based on a previous comment I found on SO: How to use TIdThreadSafe class from Indy10) that attempts (and presumably succeeds) in dealing with this problem.
The question I tackle below is: how to make a specific class "MyClass" thread-safe?
The main idea is to create a kind of wrapper class that encapsulates "MyClass" and queues the threads that try to access it on a first-in-first-out principle. The underlying objects used for synchronization are Windows critical section objects.
In the context of a client-server application, "MyClass" will contain all the thread-unsafe functionality of our server, so we will try to ensure that those procedures and functions are not executed by more than one worker thread simultaneously. This naturally means losing parallelism in our code, but since the approach is simple, it may still be useful in some cases.
Wrapper class implementation (the class declaration itself was not part of the original summary; it is assumed to look roughly like this, with TCriticalSection coming from the SyncObjs unit):

type
  TThreadSafeObject<T: class> = class
  private
    tsObject: T;                          // the wrapped, thread-unsafe instance
    tsCriticalSection: TCriticalSection;  // serializes access to tsObject
  public
    constructor Create(originalObject: T);
    destructor Destroy; override;
    function Lock: T;
    procedure Unlock;
    procedure FreeOwnership;
  end;

constructor TThreadSafeObject<T>.Create(originalObject: T);
begin
  inherited Create;
  tsObject := originalObject;                   // pass an already-instantiated instance of MyClass
  tsCriticalSection := TCriticalSection.Create; // critical section object
end;
destructor TThreadSafeObject<T>.Destroy();
begin
FreeAndNil(tsObject);
FreeAndNil(tsCriticalSection);
inherited Destroy;
end;
function TThreadSafeObject<T>.Lock(): T;
begin
tsCriticalSection.Enter;
result:=tsObject;
end;
procedure TThreadSafeObject<T>.Unlock();
begin
tsCriticalSection.Leave;
end;
procedure TThreadSafeObject<T>.FreeOwnership();
begin
FreeAndNil(tsObject);
FreeAndNil(tsCriticalSection);
end;
MyClass Definition:
MyClass = class
public
procedure drawRandomBitmap(abitmap: TBitmap); //Draw Random Lines on TCanvas
function decToBin(i: LongInt): String; //convert decimal number to Bin.
procedure addLineToMemo(aLine: String; MemoFld: TMemo); // output message to TMemo
function randomColor(): TColor;
end;
Setup:
Since threads execute in turn, waiting for the thread that currently owns the critical section to finish (tsCriticalSection.Enter and tsCriticalSection.Leave), it is logical that if you want to manage that ownership relay, you need one unique instance of TThreadSafeObject (you might consider using the singleton pattern). So include:
tsMyclass := TThreadSafeObject<MyClass>.Create(MyClass.Create);
in Form.Create and
tsMyclass.Free;
in Form.Close. Here tsMyclass is a global variable of type TThreadSafeObject<MyClass>.
Usage:
Regarding the usage of MyClass, try the following:
with tsMyclass.Lock do
  try
    addLineToMemo('MemoLine1', Memo1);
    addLineToMemo('MemoLine2', Memo1);
    addLineToMemo('MemoLine3', Memo1);
  finally
    // release ownership
    tsMyclass.Unlock;
  end;
, where Memo1 is an instance of a TMemo component on the form.
With this, we are supposed to ensure that anything that happens while tsMyClass is locked will be executed by only one thread at a time. An obvious drawback of this approach, however, is that since I have only one instance of tsMyclass, even if one thread is trying to draw on, say, the Canvas while another is writing to the Memo, the first thread will have to wait for the second to finish before it can carry out its job.
My questions here are:
1. Is the above suggested method correct? Am I still free of race conditions, or do I have some "loopholes" in the code from which data conflicts could occur?
2. How can one, in general, test his/her application for thread-unsafety?
I would like to stress that the above approach is in no way my own doing. It is basically a summary of the solution found in 2. Nevertheless, I have decided to post it again in an attempt to get some kind of closure on the topic, or a kind of proof of validity for the suggested solution. Besides, repetition is the mother of all knowledge, as they say.
With this, we are supposed to ensure that anything that happens while tsMyClass is locked will be executed by only one thread at a time. An obvious drawback of this approach, however, is that since I have only one instance of tsMyclass, even if one thread is trying to draw on, say, the Canvas while another is writing to the Memo, the first thread will have to wait for the second to finish before it can carry out its job.
I see one big problem here: the VCL (forms, drawing, etc.) lives on the main thread. Even if you block concurrent thread access, the updates need to be done in the context of the main thread. This is the part where you need to use Synchronize(); the big difference from a lock (critical section) is that synchronized code is run in the context of the main thread. The end result is basically the same: your threaded code is serialized, and you lose the advantage of using threads in the first place.
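For illustration, a minimal sketch of what that looks like inside the OnExecute handler, assuming a Memo1 on the form (TThread.Synchronize with an anonymous method; TThread.Queue would be the non-blocking alternative):

// uses IdContext (TIdContext) and System.Classes (TThread)
procedure TForm1.IdTCPServerExecute(AContext: TIdContext);
var
  Line: string;
begin
  // This runs in an Indy worker thread, one per client connection.
  Line := AContext.Connection.IOHandler.ReadLn;

  // Hand the UI update over to the main thread.
  TThread.Synchronize(nil,
    procedure
    begin
      Memo1.Lines.Add(Line);
    end);
end;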
Locking on the whole object can be much too coarse.
Imagine cases where some properties or methods are independent of others. If the lock works on a "global" level, many operations will be blocked needlessly.
From Reduce lock granularity – Concurrency optimization
So, how can we reduce lock granularity? With a short answer, by asking
for locks as less as possible. The basic idea is to use separate locks
to guard multiple independent state variables of a class, instead of
having only one lock in class scope.
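Applied to the MyClass example from the question, a sketch of what finer granularity could look like, with one lock per independent piece of state (the split into a drawing lock and a logging lock is purely illustrative):

// uses SyncObjs (TCriticalSection), Classes (TStringList), Graphics (TBitmap)
type
  TMyClass = class
  private
    FDrawLock: TCriticalSection;  // guards only the bitmap/drawing state
    FLogLock: TCriticalSection;   // guards only the logging state
    FLog: TStringList;
    // (the locks and FLog are created in the constructor and freed in the destructor, omitted here)
  public
    procedure DrawRandomBitmap(ABitmap: TBitmap);
    procedure AddLineToLog(const ALine: string);
  end;

procedure TMyClass.DrawRandomBitmap(ABitmap: TBitmap);
begin
  FDrawLock.Enter;  // drawing no longer blocks logging, and vice versa
  try
    // ... draw on ABitmap.Canvas ...
  finally
    FDrawLock.Leave;
  end;
end;

procedure TMyClass.AddLineToLog(const ALine: string);
begin
  FLogLock.Enter;
  try
    FLog.Add(ALine);  // a TStringList, not a TMemo; the UI is updated separately, in the main thread
  finally
    FLogLock.Leave;
  end;
end;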
First things first: You don't need to implement a LOCK for each of your objects, Delphi's done that for you with the TMonitor class:
TMonitor.Enter(WhateverObject);
try
  // Your code goes here.
finally
  TMonitor.Leave(WhateverObject);
end;
just make sure you free the WhateverObject when your application shuts down, or else you'll run into a bug that I've opened on QC: http://qc.embarcadero.com/wc/qcmain.aspx?d=111795
Secondly, making an application multi-threaded is a bit more involved. You can't just wrap each call between Enter/Leave calls: your locking needs to take into account what the object does and what the access pattern is. Wrapping calls within Enter/Leave simply makes sure that only one thread runs that method at any time, but race conditions are much more complex and might arise from successive calls to your locked methods. Even though each method is locked, and only one thread ever calls those methods at any given time, the state of the locked object might change in between as a consequence of other threads' activity.
This kind of code would be just fine in a single-threaded application, but locking at method level is not enough when switching to multi-threading:
if List.IndexOf(Something) = -1 then
List.Add(Something);
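To make that check-then-act sequence safe, the lock has to cover the whole sequence rather than the individual calls; a minimal sketch using the TMonitor approach from above:

TMonitor.Enter(List);
try
  // The test and the insertion are now atomic with respect to
  // other threads that lock the same List object.
  if List.IndexOf(Something) = -1 then
    List.Add(Something);
finally
  TMonitor.Leave(List);
end;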
I have code that logs execution times of routines by accessing QueryPerformanceCounter. Roughly:
var
  FStart, FStop: Int64;
...
QueryPerformanceCounter(FStart);
... <code to be measured>
QueryPerformanceCounter(FStop);
<calculate FStop - FStart, update minimum and maximum execution times, etc.>
Some of this logging code is inside threads, but on the other hand, there is a display UI that accesses the derived results. I figure the possibility exists of the VCL thread accessing the same variables that the logging code is also accessing. The VCL will only ever read the data (and a mangled read would not be too serious) but the logging code will read and write the data, sometimes from another thread.
I assume QueryPerformanceCounter itself is thread-safe.
The code has run happily without any sign of a problem, but I'm wondering if I need to wrap my accesses to the Int64 counters in a critical section?
I'm also wondering what the speed penalty of the critical section access is?
Any time you access multi-byte, non-atomic data across threads where both reads and writes are involved, you need to serialize the access. Whether you use a critical section, mutex, semaphore, SRW lock, etc. is up to you.
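A minimal sketch of what that serialization could look like for the timing data in the question, using a shared TCriticalSection (from SyncObjs); all names besides the original FStart/FStop counters are illustrative:

var
  StatsLock: TCriticalSection;  // created once at startup, freed at shutdown
  FMinTicks, FMaxTicks: Int64;  // derived results, also read by the VCL thread

procedure RecordSample(const Elapsed: Int64);
begin
  StatsLock.Enter;
  try
    if Elapsed < FMinTicks then FMinTicks := Elapsed;
    if Elapsed > FMaxTicks then FMaxTicks := Elapsed;
  finally
    StatsLock.Leave;
  end;
end;

// The VCL thread takes the same lock around its reads of FMinTicks/FMaxTicks.

As for the cost: an uncontended critical section is cheap (roughly an interlocked operation plus a little bookkeeping), so next to code that already calls QueryPerformanceCounter it is unlikely to be measurable.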
The Delphi documentation says that the WaitFor function of TMutex and other synchronization objects waits until the object's handle is signaled. But does this function also guarantee ownership of the object for the caller?
Yes: once WaitFor returns, the calling thread owns the mutex; the class is just a wrapper for the OS mutex object. See for yourself by inspecting SyncObjs.pas.
The same is not true for other synchronization objects, such as TCriticalSection. Any thread may call the Release method on such an object, not just the thread that called Acquire.
TMutex.Acquire is a wrapper around THandleObjects.WaitFor, which will call WaitForSingleObject or CoWaitForMultipleHandles depending on the UseCOMWait constructor argument.
This may be very important, if you use STA COM objects in your application (you may do so without knowing, dbGO/ADO is COM, for instance) and you don't want to deadlock.
It is still a dangerous idea to enter a long or infinite wait in the main thread, because the only wait method which correctly handles calls made via TThread.Synchronize is TThread.WaitFor, and you may stall (or deadlock) your worker threads if you use the SyncObjs objects or the WinAPI wait functions.
In commercial projects, I use a custom wait method, built upon the ideas from both THandleObjects.WaitFor AND TThread.WaitFor with optional alertable waiting (good for asynchronous IO but irreplaceable for the possibility to abort long waits).
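A much-simplified sketch of the TThread.WaitFor side of that idea, assuming a Windows VCL application (alertable and COM waits omitted): wait on the target handle and on System.Classes.SyncEvent at the same time, and service Synchronize requests whenever SyncEvent fires. All names are illustrative, and the timeout handling is simplified (it restarts after every Synchronize wake-up):

uses
  Winapi.Windows, System.Classes, System.SyncObjs;

function MainThreadFriendlyWait(AHandle: THandle; ATimeout: DWORD): TWaitResult;
var
  Handles: array[0..1] of THandle;
  Ret: DWORD;
begin
  Handles[0] := AHandle;
  Handles[1] := SyncEvent;  // signaled whenever a TThread.Synchronize request is queued
  repeat
    Ret := WaitForMultipleObjects(2, @Handles, False, ATimeout);
    if Ret = WAIT_OBJECT_0 + 1 then
      CheckSynchronize;     // run the pending Synchronize calls, then wait again
  until Ret <> WAIT_OBJECT_0 + 1;
  case Ret of
    WAIT_OBJECT_0: Result := wrSignaled;
    WAIT_TIMEOUT:  Result := wrTimeout;
  else
    Result := wrError;
  end;
end;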
Edit: further clarification regarding COM/OLE:
COM/OLE model (e.g. ADO) can use different threading models: STA (single-threaded) and MTA (multi or free-threaded).
By definition, the main GUI thread is initialized as STA, which means the COM objects can use window messages for their asynchronous messaging (particularly when invoked from other threads, to synchronize safely). AFAIK, they may also use APC procedures.
There is a good reason for the CoWaitForMultipleHandles function to exist - see its use in SyncObjs.pas THandleObject.WaitFor - depending on the threading model, it can process internal COM messages while blocking on the wait handle.