In a Delphi forms app, how can I get processing code to execute without user input, and how do I get the UI to update with a given frame rate?
The code in question is a test frame for testing/measuring the concurrent operation of components under heavy load, with multiple processes on the same or different machines. The focus is mostly on database operations (peer-to-peer or server-based) and filesystem reliability/performance with regard to file and byte range locking, especially over the network with heterogeneous client OSes.
The frame waits for external events (IPC, file system, network) that signal start and stop of a test run; after the start signal it calls the provided test function in a tight loop until the stop signal is received. Then it waits for the next start signal or the signal to quit.
I've been doing similar things in FoxPro for ages. There it is easy because the Fox doesn't have to sit on a message pump like Delphi's Application.Run(); so I just put up a non-modal form, arrange for it to be refreshed every couple hundred milliseconds and then dive into the procedural code. In raw Win16/Win32 it was slightly less easy but still fairly straightforward.
In Delphi I wouldn't even begin to know where to look, and the structure of the documentation (D7+XE2) has successfully defied me so far. What's the simplest way to do this in Delphi? I guess I could always spin up a new thread for the actual processing, and use raw Win32 calls like RedrawWindow() and PostQuitMessage() to bend the app to my will. But that looks rather klunky. Surely there must be 'delphier' ways of doing this?
Create a background thread to do the processing task. That leaves the main UI thread free to service its message loop as required.
Any information that the task needs to present to the user must be synchronized or queued to the main UI thread. Of course, there's plenty more detail required to write the complete application, but threading is the solution. You can use a high level library to shield yourself from the raw threads, but that doesn't change the basic fact that you need to offload the processing to a thread other than the main UI thread.
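A minimal sketch of that shape, in case it is useful. Everything named here is hypothetical scaffolding: RunOneTest stands in for the provided test function, the two button handlers stand in for the external start/stop signals, FWorker is a TTestWorker field on the form, and Timer1 (Interval = 100, say) is what gives the UI its fixed refresh rate:

type
  TTestWorker = class(TThread)
  private
    FIterations: Integer;
  protected
    procedure Execute; override;
  public
    property Iterations: Integer read FIterations;
  end;

procedure TTestWorker.Execute;
begin
  while not Terminated do
  begin
    RunOneTest;         // hypothetical: the test function supplied to the frame
    Inc(FIterations);   // progress counter; an aligned Integer read is fine for display
  end;
end;

procedure TForm1.btnStartClick(Sender: TObject);
begin
  FWorker := TTestWorker.Create(False);   // stand-in for the external start signal
end;

procedure TForm1.Timer1Timer(Sender: TObject);
begin
  if Assigned(FWorker) then               // refresh the UI at the timer's frame rate
    lblStatus.Caption := Format('%d iterations', [FWorker.Iterations]);
end;

procedure TForm1.btnStopClick(Sender: TObject);
begin
  FWorker.Terminate;                      // Execute sees Terminated and exits its loop
  FWorker.WaitFor;
  FreeAndNil(FWorker);
end;

Here the timer merely polls a counter; anything more elaborate that the worker wants to show should be handed to the UI thread with Synchronize (or Queue in newer Delphi versions), as described above.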
Related
I have seen a lot of argument that having epoll accept a new fd and then spawning a new thread to do the reads and writes on it doesn't scale well. But why doesn't it scale well? What if every connection involves heavy processing, like:
doing database transactions
doing heavy algorithmic work
waiting for other things to complete.
If all I really want is to do that work inside the program (no fancy routing to other connections), and I don't spawn a new thread for the read/write I/O, the program might hang forever just because one function is waiting on something, right? If that is the case, how does epoll scale well without spawning new threads?
epoll_wait(...);
// a fd is available to read now
recv(...);
// From here, if I don't spawn a thread, the program will hang. What should I do?
do_heavy_processing(...);   // the algorithm work: at least 3 secs per job
continue;
AFAIU, epoll(7) does not spawn new threads by itself (see also pthreads(7)...). You need some other call (using pthread_create(3) or the underlying clone(2) system call used by pthread_create...) to create threads.
Read more about the C10K problem (which today should be called C100K) and some pthread tutorial. But it looks like your program could be compute-intensive rather than IO-bound, so the bottleneck might be compute power (in which case you cannot get scalability with just multi-threading on a single compute node; you need distributed computing).
Threads are quite heavy resources, so you want a thread pool with only a few dozen active (i.e. runnable) threads.
Also be aware of other multiplexing system calls (such as poll(2)), of non-blocking IO (fcntl(2) with O_NONBLOCK), and of asynchronous IO (see aio(7)).
I recommend using some existing event-loop based library (look into libev, libevent, Glib, Poco, Qt, ... or for HTTP mostly: libonion on the server side, libcurl on the client side). Look also into 0mq.
The concepts of callbacks, continuations, and continuation-passing style (CPS) could be useful and improve your thinking.
Languages like Go and its Goroutines could be helpful.
It might hang forever ...
That should not happen if you design your program carefully (of course, use event loops with something like poll or epoll_wait, with a limited delay of less than a second, and probably prefer non-blocking IO).
Spending a few weeks learning more about operating-system concepts would probably be worthwhile, as would understanding most of the system calls (listed in syscalls(2)) after reading more about Linux programming (e.g. the old ALP book, or something newer). Perhaps you don't need something as sophisticated as epoll, because plain poll might be enough.
One of the reasons async was praised for ASP.NET was that it followed the Node.js async model, which led to more scalability by freeing up threads to handle subsequent requests, etc.
However, I've since read that wrapping CPU-bound code in Task.Run has the opposite effect, i.e. it adds even more overhead on the server and uses more threads.
Apparently, only true async operations benefit, e.g. making a web request or a call to a database.
So, my question is as follows. Is there any clear guidance out there as to when action methods should be async?
Mr Cleary is the one who opined about the fruitlessness of wrapping CPU-bound operations in async code.
Not exactly. There is a difference between wrapping CPU-bound code in async calls in an ASP.NET app and doing the same in, for example, a WPF desktop app. Let me use this statement of yours as the basis of my answer.
You should categorize the async operations in your mind (in the simplest terms) as follows:
ASP.NET async methods, and among those:
CPU-bound operations,
Blocking operations, such as IO-operations
Async methods in a directly user-facing application, among those, again:
CPU-bound operations,
and blocking operations, such as IO-operations.
I assume that from reading Stephen Cleary's posts you already understand how async operations work: when you run a CPU-bound operation asynchronously, it is passed to a thread-pool thread, and upon completion control returns to the thread it was started from (unless you call .ConfigureAwait(false)). Of course, this only happens if there actually is an asynchronous operation to run, as I once wondered myself in this question.
When it comes to blocking operations, on the other hand, things are a bit different. When the thread running your asynchronous code gets blocked, the runtime notices it and "suspends" the work being done by that thread: it saves all the state so the work can continue later, and the thread is then used to perform other operations. When the blocking operation is ready (for example, the answer to a network call has arrived), the runtime knows that the operation you initiated is ready to continue (I don't know exactly how the runtime notices this, but this is only a high-level explanation, so it isn't essential), the state is restored, and your code continues to run.
With that said, there is an important difference between the two:
In the CPU-bound case, even if you start the operation asynchronously, there is work to do; your code does not have to wait for anything.
In the IO-bound or blocking case, however, there may be periods during which your code simply cannot do anything but wait, so it is useful to release the thread that has done the processing up to that point and let it do other work (perhaps process another request) in the meantime.
When it comes to a directly user-facing application, for example a WPF app, if you are performing a long-running CPU-bound operation on the main (GUI) thread, then the GUI thread is obviously busy and the application appears unresponsive to the user, because any interaction that would normally be handled by the GUI thread just gets queued up in the message queue and isn't processed until the CPU-bound operation finishes.
In the case of an ASP.NET app, however, this is not an issue, because the application does not directly face the user, so the user never sees that it is unresponsive. The reason you don't gain anything by delegating the work to another thread is that doing so would still occupy a thread that could otherwise do other work; whatever needs to be done must be done, it cannot magically be done for you.
Think of it this way: you are in a kitchen with a friend (you and your friend are one thread each). The two of you are preparing the food that has been ordered. You can tell your friend to dice the onions, and even though you free yourself from dicing onions and can get on with seasoning the meat, your friend is now busy dicing onions and cannot do the seasoning in the meantime. If you hadn't delegated the dicing (which you had already started) to him but had let him do the seasoning, the same work would have been done, except that you would have saved a bit of time because you wouldn't have had to swap the working context (the cutting boards and knives in this example). So, simply put, you are just adding a bit of overhead by swapping contexts, while the issue of unresponsiveness is invisible to the user. (The customer neither sees nor cares which of you does which work, as long as he or she receives the result.)
With that said, the categorization I've outlined at the top could be refined by replacing ASP.NET applications with "whatever application has no directly visible interface towards the user and therefore cannot appear unresponsive towards them".
ASP.NET requests are handled in thread pool threads. So are CPU-bound async operations (Task.Run).
Dispatching async calls to a thread-pool thread in ASP.NET results in returning the current thread-pool thread to the pool and getting another one to run the code, and then, at the end, returning that thread to the pool and getting yet another one to get back to the ASP.NET context. That means a lot of thread switching, thread-pool management and ASP.NET context management, all of which makes that request (and the whole application) slower.
Most of the time, when someone comes up with a reason to do this in a web application, it turns out the web application is doing something it shouldn't.
I have an application that can run for quite a long time scanning a database.
During this process I keep my program responsive by calling ProcessMessages.
This ProcessMessages call is triggered whenever my progress bar is updated and incremented.
This works fine in most cases, but when the databases get larger it takes longer for the progress bar to advance 1%, and the program becomes unresponsive until then.
Is there another way to keep my program alive besides ProcessMessages?
Multithreading is the answer. A standard Delphi application is basically a single-threaded application that can do one thing at a time. Hence the GUI lockup: it can't remain responsive while it's doing something else.
If you want to have a responsive GUI and do heavy lifting at the same time, you need to put the heavy lifting in a separate thread or threads. That way your main thread can keep the program responsive while the worker threads do the heavy lifting.
This works nicely for heavy database work, but also for, for instance, downloading files, or situations where the answer from a remote server can take a long time.
But this answer will probably give you more questions than answers, because explaining HOW to use multithreading is too big a topic for this question.
One other thing though: have a long, hard look at your database code. How are you retrieving records from the database? Are there good indexes on the database? And so on. You can get insane speed improvements by optimizing this code before you ever have to start thinking about multithreading.
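To give one common, concrete example of that kind of tuning: if the scan loop iterates a dataset that is wired to data-aware controls (a grid, say), every record change repaints the UI. A minimal sketch, assuming the loop runs over a TDataSet descendant called qryScan and a hypothetical ProcessRecord routine:

qryScan.DisableControls;     // stop data-aware controls repainting on every record
try
  qryScan.First;
  while not qryScan.Eof do
  begin
    ProcessRecord(qryScan);  // hypothetical: whatever the scan does per record
    qryScan.Next;
  end;
finally
  qryScan.EnableControls;    // reattach the controls exactly once, at the end
end;

This kind of change alone often speeds a scan up dramatically, and it costs nothing to try before reaching for threads.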
I've found the following resource, http://thaddy.co.uk/threads/ (which you can also download with pictures at http://cc.embarcadero.com/item/14809), to be a very useful threading tutorial.
If you want to make your GUI program appear responsive, you must service the message queue in a timely fashion. There is no alternative.
When it comes to running database queries, the way to do that without freezing your UI is to move the query to a different thread.
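A rough sketch of what that can look like, assuming Delphi XE or later for TThread.CreateAnonymousThread and anonymous methods (on older versions a TThread descendant with Synchronize does the same job). CountRecordsToScan and ScanOneRecord are hypothetical stand-ins for your own scanning code, and the worker must use its own database connection rather than the main form's datasets:

procedure TForm1.btnScanClick(Sender: TObject);
begin
  btnScan.Enabled := False;
  TThread.CreateAnonymousThread(          // frees itself when done (FreeOnTerminate)
    procedure
    var
      Done, Total, Pct: Integer;
    begin
      Total := CountRecordsToScan;        // hypothetical: however you size the job
      Done := 0;
      while Done < Total do
      begin
        ScanOneRecord(Done);              // hypothetical: the real per-record work
        Inc(Done);
        if Done mod 100 = 0 then          // don't flood the main thread with updates
        begin
          Pct := (Done * 100) div Total;
          TThread.Queue(nil,
            procedure
            begin
              // Runs in the main (GUI) thread. Pct is captured by reference,
              // so a queued update may show a slightly newer value; fine here.
              ProgressBar1.Position := Pct;
            end);
        end;
      end;
      TThread.Queue(nil,
        procedure
        begin
          btnScan.Enabled := True;        // scan finished; re-enable the button
        end);
    end).Start;
end;

The main thread never blocks, so the progress bar repaints smoothly and the rest of the UI stays live without a single ProcessMessages call.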
I was going over the RunLoop iOS documentation and it discusses the idea illustrated here:
[illustration omitted: client threads add commands to a run-loop source's buffer and then signal the source to fire them; source: apple.com]
In the RunLoopSource, it provides the following interface for client threads (i.e. the main thread in the original illustration) to fill the audio buffer with commands and data, and to subsequently fire all commands available in that buffer:
// Client interface for registering commands to process
- (void)addCommand:(NSInteger)command withData:(id)data;
- (void)fireAllCommandsOnRunLoop:(CFRunLoopRef)runloop;
In the add command method we're simply adding commands to an NSMutableArray data structure.
My question is: how can we encapsulate those commands in variables such that they are methods? The data variable in the addCommand: method is of type id; can we put a block in there, for example? Are there any best practices or sample code for this? Thanks.
This technique pre-dates blocks. The beauty of using blocks with concurrency is that you can throw as much work as you want at the system, which, given its device-wide scope, can schedule that work across multiple cores and threads as it sees fit. You could also use a concurrent NSOperation and have it implement a FIFO to accept and process work, but in that case there will only be the one secondary thread, and it will again be given run time as the system sees fit, so there is no advantage over blocks.
I'd like to delay the handling for some captured events in ActionScript until a certain time. Right now, I stick them in an Array when captured and go through it when needed, but this seems inefficient. Is there a better way to do this?
Well, to me this seems like a clean and efficient way of doing it.
What do you mean by delaying? Do you mean simply processing them later, or processing them after a given time?
You can always set a timeout for the actual processing function in your event handler (using flash.utils.setTimeout) to process the event at a precise moment in time. But that can become inefficient, since you may have many timeouts dangling about that need to be handled by the runtime.
Maybe you could specify your needs a little more.
edit:
OK, basically, the Flash Player is single-threaded, that is, bytecode execution is single-threaded. Any event that is dispatched is processed immediately, i.e. dispatchEvent(someEvent) will directly call all registered handlers (and thus AS bytecode).
Now there are events that are actually generated in the background. These come either from I/O (network, user input) or from timers (TimerEvents). It may happen that some of these events occur while bytecode is being executed. This usually happens in a background thread, which passes the event (in the abstract sense of the term) to the main thread through a queue.
If the main thread is busy executing bytecode, it will ignore these messages until it is done (note: nearly all bytecode execution is the implicit consequence of an event, be it enter frame, input, a timer, a load operation, or whatever). When it is idle, it looks in all the queues until it finds an available message, wraps the information into an ActionScript Event object, and dispatches it as previously described.
Thus this queueing is a very low-level mechanism that comes from thread-to-thread communication (and appears in many multi-threading scenarios), and it is inaccessible to you.
But as I said before, your approach is both valid and sensible.
Store them in a Vector instead of an Array :p
I think it's all about how you structure your program. Maybe you can attach the captured event to the related instance, so that it's natural to process the captured event with that instance instead of querying a global vector.