How does Vala handle reference counting with multithreading?

As far as I understand, in a multithreaded environment reference counting should be performed with locking to ensure all threads see the same snapshot of memory. But locking slows down performance. How does Vala solve this problem?

Reference counting is mostly handled in GObject (for GLib.Object-derived types), which in turn uses the Atomic Operations in GLib. Atomics are a tricky subject; if you want to get into details a good place to start is Herb Sutter's Atomic Weapons talk from a few years ago. I would recommend watching the videos even if you're never going to put them to use (and 99.9% of programmers should never put them to use) because it will give you a much better understanding of how computers really work.
The name "atomics" can be a bit misleading; it's not really about atomicity, though that's part of it. The operations are atomic in the sense that the change is either made in its entirety or not at all, which is vital, but the more interesting part is that atomics act as barriers which prevent the compiler from re-ordering operations across the barrier. Herb Sutter's talk goes into a lot of detail about this which I'm not going to repeat here.
For example, think about a simple unprotected reference counter:
typedef struct {
  int reference_count;
} Foo;

Foo* foo_create(void) {
  Foo* foo = malloc(sizeof(Foo));
  foo->reference_count = 1;
  return foo;
}

void ref(Foo* foo) {
  ++(foo->reference_count);
}

void unref(Foo* foo) {
  if (--(foo->reference_count) == 0) {
    free(foo);
  }
}
I'm going to assume you can see the problems with leaving this unprotected because I'm writing a SO post not a book.
The specific atomic operation we're interested in is compare-and-swap (CAS), which basically provides the ability to perform this operation safely:
bool cas(int* value, int* expected, int desired) {
  if (*value == *expected) {
    *value = desired;
    return true;
  } else {
    return false;
  }
}
Using this, we would change our refcounting implementation above to something like:
typedef struct {
  int reference_count;
} Foo;

Foo* foo_create(void) {
  Foo* foo = malloc(sizeof(Foo));
  /* No atomics needed, we haven't made the value public yet */
  foo->reference_count = 1;
  return foo;
}
void ref(Foo* foo) {
  int old_refcount;
  int new_refcount;
  do {
    old_refcount = foo->reference_count;
    new_refcount = old_refcount + 1;
  } while (!cas (&(foo->reference_count), &old_refcount, new_refcount));
}
void unref(Foo* foo) {
  int old_refcount;
  int new_refcount;
  do {
    old_refcount = foo->reference_count;
    new_refcount = old_refcount - 1;
  } while (!cas (&(foo->reference_count), &old_refcount, new_refcount));
  if (new_refcount == 0) {
    free(foo);
  } else if (new_refcount < 0) {
    // Double-free bug, this code should never be reached!
  }
}
But locking slows down performance.
So do atomics. A lot. But also a lot less than a higher-level lock would. For one thing, if you were working with a mutex, what you would be doing is basically:
Acquire the lock.
Perform the operation.
Release the lock.
With atomics, we're basically begging forgiveness instead of asking permission:
Attempt to perform the operation.
Then we just look to see whether the operation was successful (i.e., if cas() returned true).
The operation is also a lot smaller and faster; with a mutex, you would probably acquire the lock, then read the current value, increment or decrement it, then release the lock. With atomics, the whole CAS operation is wrapped up in a single CPU instruction.
The CPU still has to deal with cache coherency by making sure that the next time any other core (a bit oversimplified, since even within a core there are multiple caches) asks to read the data, it is presented with the new data. In other words, atomic reference counting is bad for performance, but it's a lot less bad than a mutex. Frankly, if you want reference counting instead of tracing garbage collection, atomics are pretty much your least-bad option.

Related

Constant variable initialization based on IO operation with a condition

I've been programming some opencv app with kotlin and stumbled on a matter that I'm curious about based on the code below:
val image =
if (!Imgcodecs.imread(filename).empty())
Imgcodecs.imread(filename)
else
Mat.eye(512, 512, CvType.CV_8U).mul(Mat(512, 512, CvType.CV_8U, Scalar(255.0)))
Does the compiler (in general) optimize IO operations like these consecutive calls (imread)?
What are proven and/or elegant ways to deal with such a problem?
I don't think the compiler has any way to know that an arbitrary method is side-effect free. And in fact this one isn't (I assume) - there's potential for a race condition here.
One way to avoid this is with something like this:
val image = with(Imgcodecs.imread(filename)) {
    if (!empty()) {
        this
    } else {
        Mat.eye(...)
    }
}
Or something a bit more explicit, thus avoiding the magic of the with idiom:
val image = run {
    val mtx = Imgcodecs.imread(filename)
    if (!mtx.empty()) {
        mtx
    } else {
        Mat.eye(...)
    }
}

How to use lazy initialization with getter/setter method?

How can I use lazy initialization with a get and set() closure?
Here is the lazy initialization code:
lazy var pi: Double = {
    // Calculations...
    return resultOfCalculation
}()
and here is the getter/setter code:
var pi: Double {
    get {
        //code to execute
        return someValue
    }
    set(newValue) {
        //code to execute
    }
}
I assume what you're trying to do is lazily generate the default for a writable property. I often find that people jump to laziness when it isn't needed. Make sure this is really worth the trouble. This would only be worth it if the default value is rarely used, but fairly expensive to create. But if that's your situation, this is one way to do it.
lazy implements one very specific and fairly limited pattern that often is not what you want. (It's not clear at all that lazy was a valuable addition to the language given how it works, and there is active work in replacing it with a much more powerful and useful system of attributes.) When lazy isn't the tool you want, you just build your own. In your example, it would look like this:
private var _pi: Double?
var pi: Double {
    get {
        if let pi = _pi { return pi }
        let result = // calculations....
        _pi = result
        return result
    }
    set { _pi = newValue }
}
This said, in most of the cases I've seen this come up, it's better to use a default value in init:
func computePi() -> Double {
    // compute and return value
}

// This is global. Globals are lazy (in a thread-safe way) automatically.
let computedPi = computePi()

struct X {
    let pi: Double // I'm assuming it was var only because it might be overridden
    init(pi: Double = computedPi) {
        self.pi = pi
    }
}
Doing it this way only computes pi once in the whole program (rather than once per instance). And it lets us make pi "write-exactly-once" rather than mutable state. (That may or may not match your needs; if it really needs to be writable, then var.)
A similar default value approach can be used for objects that are expensive to construct (rather than static things that are expensive to compute) without needing a global.
struct X {
    let obj: ExpensiveObject
    init(obj: ExpensiveObject = ExpensiveObject()) {
        self.obj = obj
    }
}
But sometimes getters and setters are a better fit.
The point of a lazy variable is that it is not initialized until it is fetched, thus preventing its (possibly expensive) initializer from running until and unless the value of the variable is accessed.
Well, that's exactly what a getter for a calculated variable does too! It doesn't run until and unless it is called. Therefore, a getter for a calculated variable is lazy.
The question, on the whole, is thus meaningless. (The phrase "How i can use lazy initialization" reveals the flaw, since a calculated variable is never initialized — it is calculated!)

How do I declare variables, compare them, and then use them inside a function

I am developing an EA that requires me to compare the highs of the previous 2 bars and use whichever one is higher as the stop-loss value.
Same for trades on the opposite side: I need to compare the previous 2 lows and use the lower one as the stop-loss value.
What I am doing is this:
void onTick()
{
   static int ticket = 0;
   double ab = (//calculation for ab);
   double de = (//calculation for de);
   if(Low[1] < Low[2])
      double sll = Low[1];
   if(Low[1] > Low[2])
      double sll = Low[2];
   if(buy logic comes here)
   {
      double entryPrice = ////////;
      double stoploss = sll - xyz;
      double takeprofit = entryPrice + ((entryPrice - stoploss) * 3);
      ticket = OrderSend(Symbol(),...entryPrice,stoploss,takeprofit,.....);
   }
   if(ticket == false)
   {
      Alert("Order Sending Failed");
   }
}
The problem is I am not able to reference the values of sll, and I get an error message saying "sll undeclared identifier".
I am fairly new to programming and would appreciate it if someone could help me out with this.
I have added most of the code for you to understand the logic.
You would have to declare them outside the scope of the if statements if you want to use the variables anywhere else. So instead of what you have, take a look at this:
double sll; // declare sll outside the if statements
if(Low[1] < Low[2])
   sll = Low[1];
if(Low[1] > Low[2])
   sll = Low[2];
if(buy logic comes here)
{
   bool res = OrderSend(..........);
}
Judging by what you wrote, it looks like you may be using res somewhere else too, which you would then need to define outside of the if statement as well, because of scoping.

Game Programming: Critical Area protection

How would you go about protecting a shared resource in cocos2d so that only one class or method is allowed to access or change it at a time? My initial thought was to set up a class that handles lock/unlock coordination as follows:
- (BOOL)requestLock {
    if (self.lockAvailable == YES) {
        self.lockAvailable = NO;
        return YES;
    }
    return NO;
}

- (void)returnLock:(CGFloat)time {
    self.timer = 0;
    self.timeToUnlock = time;
}

- (void)update:(CGFloat)dt {
    self.timer += dt;
    if (self.timer > self.timeToUnlock) {
        self.lockAvailable = YES;
    }
}
@end
But it just doesn't seem to be working as expected. After one of my classes grabs a lock, it performs some action, then calls returnLock with however long that action is expected to take. The results are unexpected, however: it seems like any other class trying to request a lock can do so, no matter the time provided before an unlock. Do I have some sort of flaw here?
On another note: is this going to end up being horribly inefficient at some point? I have about 3 classes trying to access the same resource every update. Every single time they are calling 'requestLock' over and over and over.
If indeed this is an 'update' scheduled by cocos2d, even though you have multiple accessors in the same update cycle, they are never going to access simultaneously - AFAIK cocos2d runs on a single thread.

Difference between DART Isolate and Thread (Java,C#)

To me, the Dart Isolate looks like a Thread (Java/C#) with different terminology. In which aspects does an Isolate differ from a Thread?
Threads use shared memory, isolates don't.
For example, the following pseudocode in Java/C#
class MyClass {
    static int count = 0;
}

// Thread 1:
MyClass.count++;
print(MyClass.count); // 1
// Thread 2:
MyClass.count++;
print(MyClass.count); // 2
This also runs the risk of the shared memory being modified simultaneously by both threads.
Whereas in Dart,
class MyClass {
    static int count = 0;
}

// Isolate 1:
MyClass.count++;
print(MyClass.count); // 1
// Isolate 2:
MyClass.count++;
print(MyClass.count); // 1
Isolates are isolated from each other. The only way to communicate between them is to pass messages. One isolate can listen for callbacks from the other.
Check out the docs here including the "isolate concepts" section.
