Disposing an IRandomAccessStream

Is it safe to assume that when the first then clause finishes, bitmapStream will be disposed because it goes out of scope at that point (thereby making its ref count go to 0)?
BitmapImage^ bmp = ref new BitmapImage();
create_task(StorageFile::GetFileFromApplicationUriAsync(uri)).then([](StorageFile^ file)
{
    return file->OpenReadAsync();
}).then([bmp](IRandomAccessStream^ bitmapStream)
{
    return bmp->SetSourceAsync(bitmapStream);
}).then([bmp]()
{
    // Do some stuff with bmp here
});

Not really. StorageFile::OpenReadAsync() returns an IAsyncOperation<IRandomAccessStreamWithContentType>^, which is an asynchronous operation.
This operation creates and holds a reference to the IRandomAccessStream; when the operation is done, the PPL tasks get a reference to this stream through IAsyncOperation<TResult>::GetResults()
and they hold that reference at least until the second lambda function is done (the lambda function in the second then()).
After that, if the BitmapImage holds another reference to the stream, the stream won't be disposed for a "long" time.
If you would like to dig more into this topic, you could create your own implementation of the IRandomAccessStream interface and put a breakpoint in its Dispose method.

Related

Pausing a stream in dart null safety

I'm converting Dart code to NNBD (null safety).
I have the following code:
var subscription = response.listen(
  (newBytes) async {
    /// if we don't pause we get overlapping calls from listen
    /// which causes the [writeFrom] to fail as you can't
    /// do overlapping io.
    subscription.pause();
    /// we have new data to save.
    await raf.writeFrom(newBytes);
    subscription.resume();
  });
The problem is I get the following error:
The non-nullable local variable 'subscription' must be assigned before it can be used.
Try giving it an initializer expression, or ensure that it's assigned on every execution path.
I've had a similar problem solved here:
dart - correct coding pattern for subscription when using null saftey?
which was answered by @lrn.
However, the solution pattern doesn't seem to work in this case:
raf.writeFrom is an async operation, so I must use an async method, which means I can't use the forEach solution, as again I don't have access to the subscription object.
If you really want to use listen, I'd do it as:
var subscription = response.listen(null);
subscription.onData((newBytes) async {
  subscription.pause();
  await raf.writeFrom(newBytes);
  subscription.resume();
});
or, without the async:
var subscription = response.listen(null);
subscription.onData((newBytes) {
  subscription.pause(raf.writeFrom(newBytes));
});
which will pause the subscription until the future returned by raf.writeFrom completes (it shouldn't complete with an error, though).
If using listen is not a priority, I'd prefer to use an asynchronous for-in like:
await for (var newBytes in response) {
  await raf.writeFrom(newBytes);
}
which automatically pauses the implicit subscription at the await and resumes it when you get back to the loop.
Both with stream.listen and the StreamController constructor, null safety has made it nicer to create them first without callbacks, and then add the callbacks later, if the callback needs to refer to the subscription/controller.
(That's basically the same answer as in the linked question, only applied to onData instead of onDone. You have to pass a default onData argument to listen, but it can be null precisely to support this approach.)
I don't think your code, as written, was legal before null-safety either; you can't reference a variable (subscription) before it's declared, and the declaration isn't complete until after the expression you initialize it with (response.listen(...)) is evaluated. You will need to separate the declaration from the initialization to break the circular dependency:
StreamSubscription<List<int>> subscription;
subscription = response.listen(...);
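Under null safety the separated declaration still needs a little help, because the analyzer cannot prove the variable is assigned before the callback uses it, so marking it late is the usual fix. A minimal sketch, not part of the original answer; response and raf are taken from the question, and the saveResponse wrapper is only there for illustration:
import 'dart:async';
import 'dart:io';

Future<void> saveResponse(Stream<List<int>> response, RandomAccessFile raf) async {
  // `late` promises the analyzer the variable is assigned before use,
  // which lets the callback refer to it.
  late StreamSubscription<List<int>> subscription;
  subscription = response.listen((newBytes) async {
    subscription.pause();
    await raf.writeFrom(newBytes);
    subscription.resume();
  });
  // Optionally wait for the stream to finish before returning.
  await subscription.asFuture<void>();
}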

Caching streams in Functional Reactive Programming

I have an application which is written entirely using the FRP paradigm, and I think I am having performance issues due to the way I am creating the streams. It is written in Haxe, but the problem is not language-specific.
For example, I have this function which returns a stream that resolves every time a config file is updated for that specific section, like the following:
function getConfigSection(section:String) : Stream<Map<String, String>> {
    return configFileUpdated()
        .then(filterForSectionChanged(section))
        .then(readFile)
        .then(parseYaml);
}
In the reactive programming library I am using, promhx, each step of the chain should remember its last resolved value, but I think every time I call this function I am recreating the stream and reprocessing each step. This is a problem with the way I am using it rather than with the library itself.
Since this function is called everywhere, parsing the YAML every time it is needed is killing performance and taking up over 50% of the CPU time according to profiling.
As a fix I have done something like the following using a Map stored as an instance variable that caches the streams:
function getConfigSection(section:String) : Stream<Map<String, String>> {
    var cachedStream = this._streamCache.get(section);
    if (cachedStream != null) {
        return cachedStream;
    }
    var stream = configFileUpdated()
        .filter(sectionFilter(section))
        .then(readFile)
        .then(parseYaml);
    this._streamCache.set(section, stream);
    return stream;
}
This might be a good solution to the problem but it doesn't feel right to me. I am wondering if anyone can think of a cleaner solution that maybe uses a more functional approach (closures etc.) or even an extension I can add to the stream like a cache function.
Another way I could do it is to create the streams beforehand and store them in fields that can be accessed by consumers. I don't like this approach because I don't want to make a field for every config section; I like being able to call a function with a specific section and get a stream back.
I'd love any ideas that could give me a fresh perspective!
Well, I think one answer is to just abstract away the caching like so:
class Test {
    static function main() {
        var sideeffects = 0;
        var cached = memoize(function (x) return x + sideeffects++);
        cached(1);
        trace(sideeffects); // 1
        cached(1);
        trace(sideeffects); // 1
        cached(3);
        trace(sideeffects); // 2
        cached(3);
        trace(sideeffects); // 2
    }

    @:generic static function memoize<In, Out>(f:In->Out):In->Out {
        var m = new Map<In, Out>();
        return
            function (input:In)
                return switch m[input] {
                    case null: m[input] = f(input);
                    case output: output;
                }
    }
}
You may be able to find a more "functional" implementation for memoize down the road. But the important thing is that it is a separate thing now and you can use it at will.
You may choose to memoize(parseYaml) so that toggling two states in the file actually becomes very cheap after both have been parsed once. You can also tweak memoize to manage the cache size according to whatever strategy proves the most valuable.

Future functions in a loop not working

I want to execute the same Future function with different values. The order is not important, but I want to execute some other actions after all of these Future calls have completed. My idea is:
addrMapList.forEach((addrMap) { // length is 3
  exeQuery(sql).then((result) {
    print(result);
  });
});
print('All finished');
// other actions

Future exeQuery(String sql) {
  var c = new Completer();
  Random rnd = new Random();
  c.complete(rnd.nextInt(100));
  return c.future;
}
But the result is
All finished
72
90
74
But I need a result like
72
90
74
All finished
How can I implement this in Dart? Please help.
Here is a modified version of your sample that works as you expected it to.
First of all, you should understand how asynchronous code works, and why yours did not:
When you write constructions like <some future>.then( (){...} ); you are not immediately running the code defined inside .then(). You are just defining a callback to be called later. So, in your code, you defined three callbacks and then immediately printed "All finished", at a time when none of your futures had even started to work. At that moment they were just sitting in Dart's event loop, waiting for a chance to be executed. And they will only get that chance once you finish executing the current code, and not a moment earlier, because an Isolate runs as a single thread.
I used Future.wait() to wait for multiple futures because you said order is not important. This is more efficient than waiting for the Futures one by one. But if order is important, you have to use Future.forEach(), which will not start execution of the second Future until the first one is completed.
One more thing about your code is that your function returning a Future is actually synchronous, because it always returns an already completed Future. This is also changed in the DartPad sample to better visualize how asynchronous code works.
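The DartPad sample itself is not reproduced here, but a minimal sketch of the Future.wait approach described above could look like the following. addrMapList, sql and exeQuery are taken from the question; the runQueries wrapper, the element type of addrMapList, and the artificial delay are assumptions made only to keep the example self-contained and make the asynchrony visible:
import 'dart:math';

Future<int> exeQuery(String sql) async {
  // Simulate real asynchronous work instead of returning an
  // already-completed Future.
  await Future.delayed(Duration(milliseconds: 10));
  return Random().nextInt(100);
}

Future<void> runQueries(List<Map<String, String>> addrMapList, String sql) async {
  // Start all the queries, then wait until every one of them has finished.
  var results = await Future.wait(addrMapList.map((addrMap) => exeQuery(sql)));
  results.forEach(print);
  print('All finished');
  // other actions
}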
forEach can't be used this way. Use a plain for-in loop with await instead (the enclosing function needs to be async):
Future someFunc() async {
  for (var addrMap in addrMapList) {
    var result = await exeQuery(sql);
    print(result);
  }
  // other action
}

why does dart create closures when referencing a method?

void main() {
  A one = new A(1);
  A two = new A(2);
  var fnRef = one.getMyId;        // A closure created here
  var anotherFnRef = two.getMyId; // Another closure created here
}

class A {
  int _id;
  A(this._id);
  int getMyId() {
    return _id;
  }
}
According to the Dart language tour, referencing methods like this creates a new closure each time. Does anyone know why it does this? I can understand creating closures when defining a method body, since the body can use variables from an outer scope, but when just referencing a method like above, why create a closure? The method body isn't changing, so it can't use any of the variables available in that scope, can it?
I noticed in a previous question I asked that referencing methods like this effectively binds them to the object they were referenced from. So in the above example, if we call fnRef() it will behave like one.getMyId(). Is the closure used just for binding the calling context? ... I'm confused :S
UPDATE
In response to Ladicek. So does that mean that:
void main() {
  var fnRef = useLotsOfMemory();
  // Did the closure created in the return statement close over just 'aVeryLargeObj',
  // or did it close over all of the 'veryLarge' objects, thus keeping them all in
  // memory at this point where they aren't needed?
}

useLotsOfMemory() {
  // create lots of 'veryLarge' objects
  return aVeryLargeObj.doStuff;
}
Ladicek is right: accessing a method as a getter will automatically bind the method.
In response to the updated question:
No. It shouldn't keep the scope alive. Binding closures are normally implemented as if you invoked a getter of the same name:
class A {
  int _id;
  A(this._id);
  int getMyId() => _id;

  // The implicit getter for getMyId. This is not valid
  // code but explains how dart2js implements it. The VM
  // probably has a similar mechanism.
  Function get getMyId { return () => this.getMyId(); }
}
When implemented this way you will not capture any variable that is alive in your useLotsOfMemory function.
Even if it really was allocating the closure inside the useLotsOfMemory function, it wouldn't be clear if it kept lots of memory alive.
Dart does not specify how much (or how little) is captured when a closure is created. Clearly it needs to capture at least the free variables of itself. This is the minimum. The question is thus: "how much more does it capture"?
The general consensus seems to be to capture every variable that is free in some closure. All local variables that are captured by some closure are moved into a context object and every closure that is created will just store a link to that object.
Example:
foo() {
  var x = new List(1000);
  var y = new List(100);
  var z = new List(10);
  var f = () => y; // y is free here.
  // The variables y and z are free in some closure.
  // The returned closure will keep both alive.
  // The local x will be garbage collected.
  return () => z; // z is free here.
}
I have seen Scheme implementations that only captured their own free variables (splitting the context object into independent pieces), so less is possible. However in Dart this is not a requirement and I wouldn't rely on it. For safety I would always assume that all captured variables (independent of who captures them) are kept alive. I would also make the assumption that bound closures are implemented similar to what I showed above and that they keep a strict minimum of memory alive.
That's exactly right -- the closure captures the object on which the method will be invoked.
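To make the binding behaviour concrete, here is a small sketch (not from the original answers) showing that the tear-off keeps referring to the instance it was taken from, even if the local variable is later reassigned:
class A {
  final int _id;
  A(this._id);
  int getMyId() => _id;
}

void main() {
  var one = A(1);
  var fnRef = one.getMyId; // the closure is bound to the instance currently held by `one`
  one = A(2);              // reassigning the variable does not rebind the closure
  print(fnRef());          // prints 1: the captured receiver is the original object
}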

How am I meant to use Filepath.Walk in Go?

The filepath.Walk function takes a function callback. This is a straight function with no context pointer. Surely a major use case for Walk is to walk a directory and take some action based on it, with reference to a wider context (e.g. entering each file into a table).
If I were writing this in C# I would use an object (with fields that could point back to the objects in the context) as a callback (with a given callback method) on it so the object can encapsulate the context that Walk is called from.
(EDIT: user "usr" suggests that the closure method occurs in C# too)
If I were writing this in C I'd ask for a function and a context pointer as a void * so the function has a context pointer that it can pass into the Walk function and get that passed through to the callback function.
But Go only has the function argument and no obvious context pointer argument.
(If I'd designed this function I would have taken an object as a callback rather than a function, conforming to the interface FileWalkerCallback or whatever, and put a callback(...) method on that interface. The consumer could then attach whatever context to the object before passing it to Walk.)
The only way I can think of doing it is by capturing the closure of the outer function in the callback function. Here is how I am using it:
func ScanAllFiles(location string, myStorageThing *StorageThing) (err error) {
	numScanned := 0
	// Wrap this up in this function's closure to capture the numScanned
	// and myStorageThing bindings.
	var scan = func(path string, fileInfo os.FileInfo, inpErr error) (err error) {
		numScanned++
		myStorageThing.DoSomething(path)
		return nil
	}
	fmt.Println("Scan All")
	err = filepath.Walk(location, scan)
	fmt.Println("Total scanned", numScanned)
	return
}
In this example I create the callback function so its closure contains the variables numScanned and myStorageThing.
This feels wrong to me. Am I right to think it feels weird, or am I just getting used to writing Go? How is it intended for the filepath.Walk method to be used in such a way that the callback has a reference to a wider context?
You're doing it about right. There are two little variations you might consider. One is that you can replace the name of an unused parameter with an underbar. So, in your example where you only used the path, the signature could read
func(path string, _ os.FileInfo, _ error) error
It saves a little typing, cleans up the code a little, and makes it clear that you are not using the parameter. Also, for small functions especially, it's common to skip assigning the function literal to a variable and just use it directly as the argument. Your code ends up reading:
err = filepath.Walk(location, func(path string, _ os.FileInfo, _ error) error {
	numScanned++
	myStorageThing.DoSomething(path)
	return nil
})
This cleans up scoping a little, making it clear that you are using the closure just once.
As a C# programmer I can say that this is exactly how such an API in .NET would be meant to be used. You would be encouraged to use closures and discouraged from creating an explicit class with fields because it just wastes your time.
As Go supports closures I'd say this is the right way to use this API. I don't see anything wrong with it.
