What is the difference in lifecycle between an 'entity' object and a function object in JavaScript? - closures

Why does this code snippet output 5 (as expected due to the scope chain)?
let arr = []
var firstFunc;
for (var i = 0; i < 5; i++) {
  var iterFunc = function () {
    return function () {
      return i
    }
  }
  arr.push(iterFunc())
}
console.log(arr[0]())
but this outputs {a: 0}:
let arr = []
var firstFunc;
for (var i = 0; i < 5; i++) {
  var iterFunc = function () {
    return {
      a: i
    }
  }
  arr.push(iterFunc())
}
console.log(arr[0])
What memory allocation logic occurs under the hood? Why does the 'entity' object persist the current value, in contrast to the closure?

Returning i or {a: i} here doesn't matter.
The important thing is that, in the first example, iterFunc() returns a function, and it is inside that (not yet invoked) function that i or {a: i} is evaluated.
Since i always holds a scalar (immutable) value, that value is what you get in place. (If i had been an object, a reference to that object would be returned, and if its contents were mutated, you would see those mutations.)
Being an immutable value, you get it as it is at the moment it is read. But, as you know, the value of i changes over time, so the key thing here is WHEN that value is read.
If you look at your first example, in the console.log(...) statement you are intentionally invoking it as the function you know it is (the anonymous function returned by iterFunc()), and by that time i holds the value 5.
If, in your first example, you just change the following line:
arr.push(iterFunc())
to
arr.push(iterFunc()())
...and, of course:
console.log(arr[0]())
to
console.log(arr[0]) // Same as your second example.
...you will see that the output is the same in both cases (0).
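To make the timing concrete, here is a minimal sketch (my own illustration, not from the original post) contrasting a deferred read with an immediate read, plus the common fix of declaring the loop variable with let so each iteration gets its own binding:

// Deferred read: i is read only when the stored function is finally called,
// and by then the loop has finished, so i === 5.
var deferred = [];
for (var i = 0; i < 5; i++) {
  deferred.push(function () { return i; });
}
console.log(deferred[0]()); // 5

// Immediate read: j is read while the loop is still running.
var immediate = [];
for (var j = 0; j < 5; j++) {
  immediate.push((function () { return j; })());
}
console.log(immediate[0]); // 0

// With let, each iteration gets its own binding, so even the deferred read works.
let perIteration = [];
for (let k = 0; k < 5; k++) {
  perIteration.push(function () { return k; });
}
console.log(perIteration[0]()); // 0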

Related

c++ std::multimap iterating equal_range problem

I'm currently debugging some code and am confused as to how the following is possible:
void DoSomething(int cell, const std::multimap<int, const Foo*>& map) {
  auto range = map.equal_range(cell);
  if (range.first != map.end()) {
    int iterated = 0;
    for (auto iter = range.first; iter != range.second; ++iter) {
      iterated++;
    }
    assert(iterated > 0);
  }
}
Based on my understanding of std::multimap, this assertion should always pass, yet it sometimes fails with iterated == 0.
Under what circumstances can this be possible?
Ok I figured it out.
I was under the wrong assumption that equal_range() would return end() as the first iterator if the multimap does not contain the requested key, but that's not correct.
If the multimap does not contain any elements for a certain key, it does not return map.end() for the first iterator; instead it returns an iterator to the first element with a key not less than the requested key. So, if the multimap doesn't contain the requested key, if (range.first != map.end()) will still pass, since both the first and the second iterator will point to the element with the next larger key, but then there will be no iteration.
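A minimal sketch of the usual fix (my own illustration, not from the original code): test whether the range itself is empty by comparing the two iterators against each other instead of against end().

#include <cassert>
#include <map>

struct Foo {};

void DoSomething(int cell, const std::multimap<int, const Foo*>& map) {
  auto range = map.equal_range(cell);
  // equal_range returns an empty range (first == second) when the key is absent,
  // so this is the correct emptiness check.
  if (range.first != range.second) {
    int iterated = 0;
    for (auto iter = range.first; iter != range.second; ++iter) {
      iterated++;
    }
    assert(iterated > 0); // now guaranteed to hold
  }
}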

How do closures capture primitive values in local scope [duplicate]

I know there are several related questions, and moreover I can find many posts on the Internet.
However, I can't understand the fact that closures can hold references. In the case of a reference type, that is totally usual and very reasonable, but what about a value type, such as a struct or enum?
See this code.
let counter: () -> Int
var count = 0
do {
    counter = {
        count += 1
        return count
    }
}
count += 1 // 1
counter() // 2
counter() // 3
We can access the value type count in two ways: one is by using count directly, and the other is through the closure counter.
However, if we write
let a = 0
let b = a
then b of course occupies a different memory area than a, because they are value types. This behavior is a distinct feature of value types, different from reference types.
Coming back to the closure topic, a closure holds a reference to a value type's variable or constant.
So, can I say that the value type's feature, that we can't have any references to a value type, changes in the case of a closure capturing values?
To me, capturing references to a value type is very surprising, and at the same time the experiment I showed above indicates that this is what happens.
Could you explain this?
I think the confusion comes from thinking too hard about value types vs. reference types. This has very little to do with that. Let's make the number a reference type:
class RefInt: CustomStringConvertible {
    let value: Int
    init(value: Int) { self.value = value }
    var description: String { return "\(value)" }
}

let counter: () -> RefInt
var count = RefInt(value: 0)
do {
    counter = {
        count = RefInt(value: count.value + 1)
        return count
    }
}
count = RefInt(value: count.value + 1) // 1
counter() // 2
counter() // 3
Does this feel different in any way? I hope not. It's the same thing, just in references. This isn't a value/reference thing.
The point is that, as you note, the closure captures the variable. Not the value of the variable, or the value of the reference the variable points to, but the variable itself. So changes to the variable inside the closure are seen in all other places that have captured that variable (including the caller). This is discussed a bit more fully in Capturing Values.
A bit deeper, if you're interested (now I'm getting into technicalities that may be beyond what you care about right now):
Closures actually have a reference to the variable, and changes they make immediately occur, including calling didSet, etc. This is not the same as inout parameters, which assign the value to their original context only when they return. You can see that this way:
let counter: () -> Int
var count = 0 {
    didSet { print("set count") }
}
do {
    counter = {
        count += 1
        print("incremented count")
        return count
    }
}

func increaseCount(count: inout Int) {
    count += 1
    print("increased Count")
}

print("1")
count += 1 // 1
print("2")
counter() // 2
print("3")
counter() // 3
increaseCount(count: &count)
This prints:
1
set count
2
set count
incremented count
3
set count
incremented count
increased Count
set count
Note how "set count" is always before "incremented count" but is after "increased count." This drives home that closures really are referring to the same variable (not value or reference; variable) that they captured, and why we call it "capturing" for closures, as opposed to "passing" to functions. (You can also "pass" to closures of course, in which case they behave exactly like functions on those parameters.)

Using an 'is' expression when the right-hand operand is a variable?

I am trying to write a function that takes two arguments: givenType and targetType. If these two arguments match, I want givenType to be returned, otherwise null.
For this objective, I am trying to utilize Dart's is expression (maybe there is a better way to go about it, I am open to suggestions). Initially, I thought it would be as simple as writing this:
matchesTarget(givenType, targetType) {
  if (givenType is targetType) {
    return givenType;
  }
  return null;
}
But this produces an error:
The name 'targetType' isn't a type and can't be used in an 'is'
expression. Try correcting the name to match an existing
type.dart(type_test_with_non_type)
I tried looking up what satisfies an is expression but cannot seem to find it in the documentation. It seems like it needs its right-hand operand to be known at compile-time (hoping this is wrong, but it does not seem like I can use a variable), but if so, how else can I achieve the desired effect?
I can't guess the purpose of the function (or the scenario where it would be used, so if you can clarify, that would be great). First of all, I don't know whether you are passing "types" as arguments. And yes, you need to specify the right-hand operand of the is expression at compile time.
Meanwhile, if you are passing types, with one change you can check whether the types passed to your function match at runtime.
matchesTarget(Type givenType, Type targetType) {
  print('${givenType.runtimeType} ${targetType.runtimeType}');
  if (givenType == targetType) {
    return givenType;
  }
  return null;
}

main() {
  var a = int; //this is a Type
  var b = String; //this is also a Type
  print(matchesTarget(a, b)); //You are passing different Types, so it will return null
  var c = int; //this is also a Type
  print(matchesTarget(a, c)); //You are passing same Types, so it will return int
}
But if you are passing variables, the solution is pretty similar:
matchesTarget(givenVar, targetVar) {
  print('${givenVar.runtimeType} ${targetVar.runtimeType}');
  if (givenVar.runtimeType == targetVar.runtimeType) {
    return givenVar.runtimeType;
  }
  return null;
}

main() {
  var a = 10; //this is a variable (int)
  var b = "hello"; //this is also a variable (String)
  print(matchesTarget(a, b)); //this will return null
  var c = 12; //this is also a variable (int)
  print(matchesTarget(a, c)); //this will return int
}
The Final Answer
matchesTarget(givenVar, targetType) {
  print('${givenVar.runtimeType} ${targetType}');
  if (givenVar.runtimeType == targetType) {
    return givenVar;
  }
  return null;
}

main() {
  var a = 10; //this is a variable (int)
  var b = String; //this is a type (String)
  print(matchesTarget(a, b)); //this will return null because 'a' isn't a String
  var c = int; //this is also a type (int)
  print(matchesTarget(a, c)); //this will return the value of 'a' (10)
}
The as, is, and is! operators are handy for checking types at runtime.
The is operator in Dart can only be used for type checking, not for checking whether two values are equal.
The result of obj is T is true if obj implements the interface specified by T. For example, obj is Object is always true.
See the code below for an example of how to use the is operator:
if (emp is Person) {
  // Type check
  emp.firstName = 'Bob';
}
Even the error message that you're getting says that
The name 'targetType' isn't a type and can't be used in an 'is'
expression.
So the bottom line is that you can use is only for checking whether a variable or value belongs to a particular data type.
For checking equality, you can use the == operator if comparing primitive types, or write your own method for comparing the values. Hope this helps!
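If the goal really is an is-style check against a caller-chosen type, one workaround (my own sketch, not from either answer, and assuming a null-safety-enabled Dart SDK and that the caller can supply the target type as a generic type argument rather than a Type value) is to make the function generic, because the right-hand operand of is is then a type the compiler knows:

// Hypothetical helper: T is the target type, supplied as a type argument.
T? matchesTarget<T>(Object? given) {
  if (given is T) {
    return given; // 'given' is promoted to T inside this branch
  }
  return null;
}

main() {
  print(matchesTarget<String>(10)); // null: 10 is not a String
  print(matchesTarget<int>(10));    // 10: the value is returned as an int
}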

Understanding saving a closure as a variable

I'm testing this code in a playground (I'm using UnsafeMutablePointers to simulate deinitialization):
class TestClassA {
    func returnFive() -> Int {
        return 5
    }
    deinit {
        println("Object TestClassA is destroyed!") //this way deinit is not called
    }
}

class TestClassB {
    let closure: () -> Int
    init(closure: () -> Int) {
        self.closure = closure
    }
    deinit {
        println("Object TestClassB is destroyed!")
    }
}

let p1 = UnsafeMutablePointer<TestClassA>.alloc(1)
p1.initialize(TestClassA())
let p2 = UnsafeMutablePointer<TestClassB>.alloc(1)
p2.initialize(TestClassB(closure: p1.memory.returnFive))
p2.memory.closure()
p1.memory.returnFive()
p1.destroy()
However, when I change the initialization of TestClassB to:
p2.initialize(TestClassB(closure: {p1.memory.returnFive()}))
now TestClassA can be deinitialized.
So can someone tell me, what is the difference between
TestClassB(closure: p1.memory.returnFive)
and
TestClassB(closure: {p1.memory.returnFive()})
and why in the second case is there no strong reference to TestClassA, so that it can be deinitialized?
The problem here is the use of UnsafeMutablePointer<SomeStruct>.memory. It's important not to fall into the trap of thinking that memory is like a stored property containing the pointed-to object, one that will be kept alive as long as the pointer is. Even though it feels like one, it isn't; it's just raw memory.
Here’s a simplified example that just uses one class:
class C {
    var x: Int
    func f() { println(x) }
    init(_ x: Int) { self.x = x; println("Created") }
    deinit { println("Destroyed") }
}

let p = UnsafeMutablePointer<C>.alloc(1)
p.initialize(C(42))
p.memory.f()
p.destroy() // “Destroyed” printed here
p.dealloc(1)
// using p.memory at this point is, of course, undefined and crashy...
p.memory.f()
However, suppose you took a copy of the value of memory and assigned it to another variable. Doing this would increment the reference count of the object memory pointed to (the same as if you took a copy of any other regular class-reference variable):
let p = UnsafeMutablePointer<C>.alloc(1)
p.initialize(C(42))
var c = p.memory
p.destroy() // Nothing will be printed here
p.dealloc(1)
// c has a reference
c.f()
// reassigning c decrements the last reference to the original
// c so the next line prints “Destroyed” (and “Created” for the new one)
c = C(123)
Now, imagine you created a closure that captured p, and used its memory after p.destroy() was called:
let p = UnsafeMutablePointer<C>.alloc(1)
p.initialize(C(42))
let f = { p.memory.f() }
p.destroy() // “Destroyed” printed here
p.dealloc(1)
// this amounts to calling p.memory.f() after it's destroyed,
// and so is accessing invalid memory and will crash...
f()
But, as in your case, if you instead just assign p.memory.f to f, it’s perfectly fine:
let p = UnsafeMutablePointer<C>.alloc(1)
p.initialize(C(42))
var f = p.memory.f
p.destroy() // Nothing will print, because
// f also has a reference to what p’s reference
// pointed to, so the object stays alive
p.dealloc(1)
// this is perfectly fine
f()
// This next line will print “Destroyed” - reassigning f means
// the reference f has to the object is decremented, hits zero,
// and the object is destroyed
f = { println("blah") }
So how come f captures the value?
As @rintaro pointed out, member methods in Swift are curried functions. Imagine there were no member methods. Instead, there were only regular functions, and structs that had member variables. How could you write the equivalent of methods? You might do something like this:
// a C.f method equivalent. Using this
// because self is a Swift keyword...
func C_f(this: C) {
    println(this.x)
}

let c = C(42)
// call c.f()
C_f(c) // prints 42
Swift takes this one step further though, and “curries” the first argument, so that you can write c.f and get a function that binds f to a specific instance of C:
// C_f is a function that takes a C, and returns
// a function ()->() that captures the this argument:
func C_f(this: C) -> ()->() {
    // here, because this is captured, its reference
    // count will be incremented
    return { println(this.x) }
}

let p = UnsafeMutablePointer<C>.alloc(1)
p.initialize(C(42))
var f = C_f(p.memory) // The equivalent of c.f
p.destroy() // Nothing will be destroyed
p.dealloc(1)
f = { println("blah") } // Here the C will be destroyed
This is equivalent to the capture in your original question code and should show why you aren’t seeing your original A object being destroyed.
By the way, if you really wanted to use a closure expression to call your method (say you wanted to do more work before or after), you could use a capture list:
let p = UnsafeMutablePointer<C>.alloc(1)
p.initialize(C(42))
// use a capture list to capture p.memory
let f = { [c = p.memory] in c.f() }
p.destroy() // Nothing destroyed
p.dealloc(1)
f() // f has its own reference to the object
p1.memory.returnFive in TestClassB(closure: p1.memory.returnFive) is a curried function, func returnFive() -> Int, bound to the instance of TestClassA. It owns a reference to that instance.
On the other hand, {p1.memory.returnFive()} is just a closure that captures the p1 variable. This closure does not hold a reference to the TestClassA instance itself.
So, in the second case, p1.memory is the only owner of the reference to the TestClassA instance. That's why p1.destroy() deallocates it.

Why does Array.zeroCreate still fill in null for a non-nullable type?

Does it imply that whenever I am passed an array of a non-nullable type, I should still check whether its elements are null? Actually, it is not even possible to check <> null; I have to use the Unchecked operators. How is this better than C#?
type test =
    { value: int }

let solution = Array.zeroCreate 10
solution.[0] <- {value = 1}
solution.[1].value // System.NullReferenceException: Object reference not set to an instance of an object
type test =
  {value: int;}
val solution : test [] =
  [|{value = 1;}; null; null; null; null; null; null; null; null; null|]
val it : unit = ()
It depends where the array is being passed from.
If the array is created and used only within F#, then no, you don't need to check for null; in fact, you shouldn't check for null (using Unchecked.defaultof) because the F# compiler optimizes some special values like [] (and None, in certain cases) by representing them as null in the compiled IL.
If you're consuming an array being passed in by code written in another language (such as C#), then yes, you should still check for null. If the calling code just creates the array and doesn't mutate it any further, then you'll only need to perform the null checks once.
EDIT : Here's a previous discussion about how the F# compiler optimizes the representation of certain values using null: Why is None represented as null?
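If you do end up needing such a check (for example, on an array handed over from C# code), here is a minimal sketch of one way to do it, since the compiler rejects a direct = null comparison on a non-nullable F# type. This is my own illustration; isMissing is a hypothetical helper, and the record type is assumed to match the one in the question.

type test = { value: int }

let isMissing (x: test) =
    // compare by reference, which avoids the
    // "type 'test' does not have 'null' as a proper value" compile error
    obj.ReferenceEquals(x, null)

let solution : test[] = Array.zeroCreate 10
solution.[0] <- { value = 1 }

isMissing solution.[0] // false
isMissing solution.[1] // true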
As the documentation for Array.zeroCreate indicates, it initializes the elements to Unchecked.defaultof<_>. This therefore carries with it all of the same caveats that direct use of Unchecked.defaultof does. Generally, my advice would be to use Array.create/Array.init whenever possible, and to treat Array.zeroCreate as a possible performance optimization (requiring care whenever dealing with non-nullable types).
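For comparison, a minimal sketch (mine, reusing the same record type as above) of the Array.create/Array.init alternatives, which never produce null elements:

type test = { value: int }

// every element is the same record instance
let zero = { value = 0 }
let filled = Array.create 10 zero

// each element is produced from its index
let initialised = Array.init 10 (fun i -> { value = i })

filled.[1].value      // 0, no NullReferenceException
initialised.[3].value // 3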
You're creating a record type, which is implemented as a class, which is indeed nullable. If you intended to create a struct, your code should look something like this:
type test =
    struct
        val value: int
        new(v) = { value = v }
        override x.ToString() = x.value.ToString()
    end

let solution = Array.zeroCreate 10
solution.[0] <- test(1)
This outputs: val solution : test [] = [|1; 0; 0; 0; 0; 0; 0; 0; 0; 0|]
You could also write the type using the Struct attribute, saving you a level of indentation.
[<Struct>]
type test =
    val value: int
    new(v) = { value = v }
    override x.ToString() = x.value.ToString()
