About GLib.Idle in Vala - glib

Valadoc is not very well documented in places: the Idle namespace in GLib has no description of what it does; there are only a few functions that take a priority level for idle events!
Does anyone know what this does?
Functions:
public uint add (owned SourceFunc function, int priority = DEFAULT_IDLE)
public uint add_full (int priority, owned SourceFunc function)
public bool remove_by_data (void* data)

When in doubt refer to the C documentation:
https://developer.gnome.org/glib/stable/glib-The-Main-Event-Loop.html#g-idle-add
Adds a function to be called whenever there are no higher priority
events pending to the default main loop. The function is given the
default idle priority, G_PRIORITY_DEFAULT_IDLE. If the function
returns FALSE it is automatically removed from the list of event
sources and will not be called again.
See memory management of sources for details on how to handle the
return value and memory management of data.
This internally creates a main loop source using g_idle_source_new()
and attaches it to the global GMainContext using g_source_attach(), so
the callback will be invoked in whichever thread is running that main
context. You can do these steps manually if you need greater control
or to use a custom main context.
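Roughly, the common pattern in C looks like the following minimal sketch (the callback name and message are made up for illustration):

static gboolean
on_idle (gpointer user_data)
{
    g_print ("running once the main loop is otherwise idle\n");
    return FALSE;   /* FALSE (G_SOURCE_REMOVE): run once, then remove the source */
}

guint id = g_idle_add (on_idle, NULL);   /* scheduled at G_PRIORITY_DEFAULT_IDLE */

Returning TRUE instead would keep the callback installed, so it would run again on every idle iteration of the loop.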
In general you may want to read about the main loop:
https://developer.gnome.org/glib/stable/glib-The-Main-Event-Loop.html#glib-The-Main-Event-Loop.description
The main event loop manages all the available sources of events for
GLib and GTK+ applications. These events can come from any number of
different types of sources such as file descriptors (plain files,
pipes or sockets) and timeouts. New types of event sources can also be
added using g_source_attach().
To allow multiple independent sets of sources to be handled in
different threads, each source is associated with a GMainContext. A
GMainContext can only be running in a single thread, but sources can
be added to it and removed from it from other threads.
Each event source is assigned a priority. The default priority,
G_PRIORITY_DEFAULT, is 0. Values less than 0 denote higher priorities.
Values greater than 0 denote lower priorities. Events from high
priority sources are always processed before events from lower
priority sources.
Idle functions can also be added, and assigned a priority. These will
be run whenever no events with a higher priority are ready to be
processed.
[...]
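As the quoted text says, you can also create and attach the idle source manually, which is how you pick a non-default priority or a non-default context. A minimal C sketch, reusing the on_idle callback from the previous example, is the machinery behind the two Vala functions quoted above:

GSource *source = g_idle_source_new ();
g_source_set_priority (source, G_PRIORITY_LOW);   /* even lower than G_PRIORITY_DEFAULT_IDLE */
g_source_set_callback (source, on_idle, NULL, NULL);
g_source_attach (source, NULL);                   /* NULL = the global default GMainContext */
g_source_unref (source);                          /* the context now holds its own reference */

g_idle_add_full (G_PRIORITY_LOW, on_idle, NULL, NULL) is the one-line shorthand for the same thing.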

Related

Virtual TListView Item->SubItems->Assign() during OnData triggers refresh and hence never ending updates

Maintaining an older project using C++Builder 2009.
In a TListView instance (ViewStyle = vsReport), set up to operate virtually (OwnerData = true), I wanted to try and improve speed as much as possible. Every tiny bit helps. In the OnData event I noticed that Item->SubItems->Capacity is 0 to start with, and that it increases by 4 as sub items are added. I read in the docs that Capacity is read-only, yet I want to avoid TStrings' internal reallocating as much as possible. Since I also need to do caching, I figured I'd use a TStringList as a cache that has already grown to the required capacity. I assume TStrings::Assign() will then immediately allocate an array big enough to store the required number of strings?
Item->SubItems->Assign(Cache.SubItems);
While this works, I noticed it triggers the ListView to call OnData again, and again, and again, so it never stops.
Easily fixed by doing this instead:
for (int x = 0; x < Cache.SubItems->Count; ++x)
    Item->SubItems->Add(Cache.SubItems->Strings[x]);
But of course the whole point was to be able to tell SubItems the amount of strings from the start.
I realize I may be running into an old VCL issue that has long since been resolved? Or is there a point to this behavior that I don't understand right now?
Is there a way to 'enable' Capacity to accept input, so that it allocates just enough space for the strings that will be added?
The TListView::OnData event is triggered whenever the ListView needs data for a given list item, such as (but not limited to) drawing operations.
Note that when the OnData event is triggered, the TListItem::SubItems has already been Clear()'ed beforehand. TStringList::Clear() sets the Capacity to 0, freeing its current string array. That is why the Count and Capacity are always 0 when entering your OnData handler.
The SubItems property is implemented as a TSubItems object, which derives from TStringList. The TStrings::Capacity property setter is implemented in TStringList, and does what you expect to pre-allocate the string array. But that is all it does - allocate VCL memory for the array, nothing more. There is still the aspect of updating the ListView subitems themselves at the Win32 API layer, and that has to be done individually as each string is added to the SubItems.
When your OnData handler calls SubItems->Assign(), you are calling TStrings::Assign() (as TStringList and TSubItems do not override it). However, TStrings::Assign() DOES NOT pre-allocate the string array to the size of the source TStrings object, as one would expect (at least, not in CB2009, I don't know if modern versions do or not). Internally, Assign() merely calls Clear() and then TStrings::AddStrings() (which neither TStringList nor TSubItems override to pre-allocate the array). AddStrings() merely calls TStrings::AddObject() in a loop (which both TStringList and TSubItems do override).
All of that clearing and adding logic is wrapped in a pair of TStrings::(Begin|End)Update() calls. This is important to note, because TSubItems reacts to the update counter. When the counter falls to 0, TSubItems triggers TListView to make some internal updates, which includes calling Invalidate() on itself, which triggers a whole repaint, and thus triggering a new series of OnData events for list items that need re-drawing.
On the other hand, when you call SubItems->Add() in your own manual loop, and omit the (Begin|End)Update() calls, you are skipping the repaint of the whole ListView. TSubItems overrides TStrings::Add/Object() to (among other things) update only the specific ListView item that it is linked to. It does not repaint the whole ListView.
So, you should be able to set the Capacity before entering your manual loop, if you really want to:
Item->SubItems->Capacity = Cache.SubItems->Count;
for (int x = 0; x < Cache.SubItems->Count; ++x)
    Item->SubItems->Add(Cache.SubItems->Strings[x]);
In which case, you can use AddStrings() instead of a manual loop:
Item->SubItems->Capacity = Cache.SubItems->Count;
Item->SubItems->AddStrings(Cache.SubItems);

How does g_main_loop_unref(GMainLoop* loop) work?

The Question
Excerpt from the documentation:
Decreases the reference count on a GMainLoop object by one.
If the result is zero, free the loop and free all associated memory.
I could not find information regarding this reference counter. What is it initially set to and how is it used?
Details
In particular, I'm confused about this piece of example code (in the main method; note that set_cancel is a static method):
void (*old_sigint_handler)(int);

/* Create a new glib main loop */
data.main_loop = g_main_loop_new (NULL, FALSE);

/* Install the SIGINT handler, remembering the previous one */
old_sigint_handler = signal (SIGINT, set_cancel);

/* Run the main loop */
g_main_loop_run (data.main_loop);

/* Restore the previous SIGINT handler and drop our reference to the loop */
signal (SIGINT, old_sigint_handler);
g_main_loop_unref (data.main_loop);
If g_main_loop_run is blocking, how is it ever going to stop? I could not find information on this signal function either, but that might be native to the library (although I do not know).
Note: I reduced the code above to what I thought was the essential part. It is from a camera interface library called aravis under docs/reference/aravis/html/ArvCamera.html
I could not find information regarding this reference counter. What is it initially set to and how is it used?
It is initially set to 1. Whenever you store a reference to the object you increment the reference counter, and whenever you remove a reference you decrement the reference counter. It's a form of manual garbage collection. Just google "reference counting" and you'll get lots of information.
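Concretely (a tiny illustrative sketch, not taken from the library docs):

GMainLoop *loop = g_main_loop_new (NULL, FALSE);  /* reference count starts at 1 */
g_main_loop_ref (loop);                           /* now 2: e.g. a second owner stores it */
g_main_loop_unref (loop);                         /* back to 1 */
g_main_loop_unref (loop);                         /* 0: the loop and its memory are freed */

In your excerpt there is only ever the one reference created by g_main_loop_new, so the single g_main_loop_unref at the end frees the loop.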
If g_main_loop_run is blocking, how is it ever going to stop?
Somewhere someone will call g_main_loop_quit. Judging by the question I'm guessing you're not very familiar with the concept of an event loop—GLib's manual isn't a very gentle introduction to the basic concept, you may want to try the Wikipedia article or just search for "event loop".
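For example, hanging a one-shot timeout on the loop is enough to see the mechanics (illustrative C; the callback name is made up):

static gboolean
quit_cb (gpointer data)
{
    g_main_loop_quit ((GMainLoop *) data);  /* makes g_main_loop_run return */
    return FALSE;                           /* one-shot: remove this source */
}

GMainLoop *loop = g_main_loop_new (NULL, FALSE);
g_timeout_add (1000, quit_cb, loop);  /* fire once, after ~1 second */
g_main_loop_run (loop);               /* blocks here until quit_cb runs */
g_main_loop_unref (loop);

In the code you quoted, set_cancel plays that role: it is installed as the SIGINT handler, so pressing Ctrl+C is (presumably) what ends up stopping the loop.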
I could not find information on this signal method either. But that might be native to the library (although I do not know).
signal is a standard function (both C and POSIX). Again, there is lots of information out there, including good old man pages (man 2 signal).

Do I need to wrap accesses to Int64's with a critical section?

I have code that logs execution times of routines by accessing QueryPerformanceCounter. Roughly:
var
  FStart, FStop : Int64;
...
QueryPerformanceCounter(FStart);
... <code to be measured>
QueryPerformanceCounter(FStop);
<calculate FStop - FStart, update minimum and maximum execution times, etc.>
Some of this logging code is inside threads, but on the other hand, there is a display UI that accesses the derived results. I figure the possibility exists of the VCL thread accessing the same variables that the logging code is also accessing. The VCL will only ever read the data (and a mangled read would not be too serious) but the logging code will read and write the data, sometimes from another thread.
I assume QueryPerformanceCounter itself is thread-safe.
The code has run happily without any sign of a problem, but I'm wondering if I need to wrap my accesses to the Int64 counters in a critical section?
I'm also wondering what the speed penalty of the critical section access is?
Any time you access multi-byte non-atomic data across threads, and both reads and writes are involved, you need to serialize the access. Whether you use a critical section, mutex, semaphore, SRW lock, etc. is up to you.
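As a sketch of the pattern in C against the Win32 API (the function names and stats layout are illustrative; in Delphi the equivalent would be TCriticalSection or TRTLCriticalSection). As for the speed penalty: an uncontended EnterCriticalSection costs roughly one interlocked instruction, so it only gets expensive when threads actually contend.

#include <windows.h>

static CRITICAL_SECTION g_statsLock;
static LONGLONG g_minTicks = MAXLONGLONG;
static LONGLONG g_maxTicks = 0;

void InitStats(void)
{
    InitializeCriticalSection(&g_statsLock);  /* call once, before any thread uses the lock */
}

/* Logging side (any thread): the read-modify-write must happen under the lock */
void RecordSample(LONGLONG elapsedTicks)
{
    EnterCriticalSection(&g_statsLock);
    if (elapsedTicks < g_minTicks) g_minTicks = elapsedTicks;
    if (elapsedTicks > g_maxTicks) g_maxTicks = elapsedTicks;
    LeaveCriticalSection(&g_statsLock);
}

/* UI side: take the lock too, so the pair of values is a consistent snapshot */
void ReadSnapshot(LONGLONG *minOut, LONGLONG *maxOut)
{
    EnterCriticalSection(&g_statsLock);
    *minOut = g_minTicks;
    *maxOut = g_maxTicks;
    LeaveCriticalSection(&g_statsLock);
}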

State in OTP event manager process (not handler!)

Can an OTP event manager process (e.g. a logger) have some state of its own (e.g. logging level) and filter/transform events based on it?
I also have a need to put some state into the gen_event itself, and my best idea at the moment is to use the process dictionary (get/put). Handlers are invoked in the context of the gen_event process, so the same process dictionary will be there for all handler calls.
Yes, process dictionaries are evil, but in this case they seem less evil than alternatives (ets table, state server).
The gen_event implementation contained in OTP does not provide a means of adding state to the manager itself.
You could extend the implementation to achieve this and use your version instead of gen_event. However, I would advise against it.
The kind of state you want to add to the event manager belongs really in the event handler for several reasons:
You might want to use different levels in different handlers, e.g. only show errors on the console but write everything to the disk.
If the event level were filtered in the manager, event handlers that depend on receiving all unfiltered events might cease to function (events have more uses than just logging). This might lead to hard-to-debug problems.
If you want an event manager where multiple handlers all receive only filtered events, you can easily achieve this with two managers: one for unfiltered messages and one for, e.g., level-filtered messages. Install a handler on the unfiltered manager, filter by level there (easy), and pass the filtered events on to the second manager. All handlers that only want filtered messages can then be registered with the second manager.
The handlers can have their own state that gets passed on every callback like:
Module:handle_event(Event, State) -> Result
Filtering might look like this (assuming e.g. {level, Lvl, Content} events):
handle_event({level, Lvl, Content}, State = #state{max_level = Max}) when Lvl >= Max ->
    gen_event:notify(filtered_man, Content),
    {ok, State};
The state can be changed either by special events, by gen_event:call/3,4 (preferably), or by messages handled by handle_info.
For details see Gen_Event Behaviour and gen_event(3).
When you start_link a gen_event process (something you should always do via a supervisor), you can merely specify a name for the new process, if you need or want it to be registered.
As far as I can see, there's no way to initialize state of some sort using that behaviour.
Of course, you can write your own behaviour on top of a gen_event or of a simple gen_server.
As an alternative, you might use a separate gen_event process for each debugging level.
Or you can just filter the messages in the handlers.

Does a lock-free queue "multiple producers-single consumer" exist for Delphi?

I've found several implementations for single producer-single consumer, but none for multiple producer-single consumer.
Does a lock-free queue for "multiple producers-single consumer" exist for Delphi?
The lock-free queue from the OmniThreadLibrary supports multiple producers. You can use it separately from the threading library (i.e. you can use the OtlContainers unit in any other framework).
As Daniele pointed out below, there are two queues in the OmniThreadLibrary. The one in OtlContainers supports multiple producers and multiple consumers, while the "smarter" version in OtlComm (which is just a wrapper for the simpler version) is only single producer/single consumer.
Documentation is still a big problem of the OmniThreadLibrary project :(. Some information on the queue can be found here.
Maybe this could be helpful: Interlocked SList functions.
http://svn.berlios.de/svnroot/repos/dzchart/utilities/dzLib/trunk/lockfree/
@Daniele Teti:
The reader must wait for all writers that still have access to the old queue to exit the Enqueue method. Since the first thing the reader does in the Dequeue method is to provide a new queue for writers entering Enqueue, it should not take long for all writers holding a reference to the old queue to exit Enqueue. But you are right: it is lock-free only for the writers, and might still require the reader thread to wait for some writers to exit Enqueue.
For a multiple-producer / single-consumer queue/FIFO, you can easily make one lock-free using an SLIST or a trivial lock-free LIFO stack. What you do is keep a second "private" stack for the consumer (which can also be an SLIST for simplicity, or any other stack model you choose). The consumer pops items off the private stack. Whenever the private LIFO is exhausted, instead of popping you do a Flush of the shared concurrent SLIST (grabbing the entire SLIST chain at once) and then walk the flushed list in order, pushing the items onto the private stack (a sketch follows below).
That works for single-producer / single-consumer and for multiple-producer / single-consumer.
However, it does not work for multiple-producer / multiple-consumer cases.
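A minimal sketch of that flush-and-reverse technique in C, using the Win32 SList API (the node layout and function names are illustrative, not from any particular library):

#include <windows.h>
#include <malloc.h>

/* Node for the interlocked SList; SLIST_ENTRY must be the first member
   and nodes must be MEMORY_ALLOCATION_ALIGNMENT-aligned. */
typedef struct ITEM {
    SLIST_ENTRY entry;
    int value;
} ITEM;

static SLIST_HEADER g_shared;   /* producers push here (lock-free LIFO) */
static PSLIST_ENTRY g_private;  /* consumer-only stack, touched by one thread */

void InitQueue(void)
{
    InitializeSListHead(&g_shared);
    g_private = NULL;
}

/* Any number of producer threads may call this concurrently. */
void Enqueue(int value)
{
    ITEM *it = (ITEM *)_aligned_malloc(sizeof(ITEM), MEMORY_ALLOCATION_ALIGNMENT);
    if (it == NULL) return;
    it->value = value;
    InterlockedPushEntrySList(&g_shared, &it->entry);
}

/* Only the single consumer thread may call this.
   Returns NULL when the queue is empty; caller frees the item with _aligned_free. */
ITEM *Dequeue(void)
{
    PSLIST_ENTRY e;
    if (g_private == NULL) {
        /* Private stack exhausted: grab the whole shared chain in one shot
           (it comes back in LIFO order), then reverse it onto the private
           stack so the subsequent pops come out in FIFO order. */
        PSLIST_ENTRY chain = InterlockedFlushSList(&g_shared);
        while (chain != NULL) {
            PSLIST_ENTRY next = chain->Next;
            chain->Next = g_private;
            g_private = chain;
            chain = next;
        }
    }
    if (g_private == NULL)
        return NULL;
    e = g_private;
    g_private = e->Next;
    return (ITEM *)e;
}

The producers are fully lock-free; the consumer side needs no synchronization at all because only one thread ever touches the private stack, which is exactly why the scheme breaks down for multiple consumers.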
