A few questions from a beginner to Dart and RxDart. The Dart and RxDart versions are the latest as of yesterday.
In the following example Dart program, saved in the file 't.dart', only one of the two options, A or B, is uncommented at a time. Before executing it, a fifo is created by running 'mkfifo fifo'. The results of the execution are below.
Questions:
Why does a Stream opened using File show only one byte received, whereas the stdin Stream, fed from the same fifo, sees all the input?
Why does the RxDart take operator emit only one value?
Option-A: Executed as 'dart t.dart' in one window, and '(for i in A B C D; do echo -n $i; sleep 1; done) > fifo' in another window in the same directory. The output is:
byte count: 1, bytes: A
File is now closed.
Option-B: Executed as 'cat fifo | dart t.dart' in one window, and '(for i in A B C D; do echo -n $i; sleep 1; done) > fifo' in another window. The output is:
byte count: 1, bytes: A
byte count: 1, bytes: B
byte count: 1, bytes: C
byte count: 1, bytes: D
File is now closed.
import 'dart:io';
import 'dart:convert';
main(List<String> args) {
  // Option-A
  // Stream<List<int>> inputStream = File("fifo").openRead();
  // Option-B
  // Stream<List<int>> inputStream = stdin;
  inputStream
      .transform(utf8.decoder)
      .take(16)
      .listen((bytes) => print('byte count: ${bytes.length}, bytes: ${bytes}'),
          onDone: () { print('File is now closed.'); },
          onError: (e) { print(e.toString()); });
}
(I'm not knowledgeable enough in the internals of how Dart I/O works to give a firm answer, so this is my best guess as to what is happening.)
What seems to be going on is that in Option A, you are creating a stream to a yet-to-exist file. Dart sees that the file doesn't exist yet, so publishing to the stream is delayed. Then when you run the echo script, it creates the file and appends the first value, "A", after which you tell it to sleep for 1 second.
During that second, Dart sees that the file now exists and begins streaming data from it. It reads "A", and then it reaches the end of the file. As far as Dart is concerned, that's the end of the story, so it closes the stream. By the time the script adds the "B", "C", and "D" to the file, Dart has already finished executing the program and exited the process.
In Option B, rather than telling Dart to stream from a file, you are tapping into the process's input stream, which (as far as I am aware) is going to remain open for as long as there is stuff being written to it. I have a feeling that understanding exactly what is happening requires better knowledge of cat and how piping works in the terminal than I possess, but I believe the long story short is that the cat program knows that the file is being written to, which prevents it from terminating early. As such, whenever cat gets new data, it pipes that data to the Dart process's input stream.
Back to the Dart code: you are listening to the input stream, which is still expecting data since cat is still executing, and as such hasn't closed. Only when the file-writing process is complete does cat recognize that it has reached the true end of the file and shut down, at which point Dart recognizes that it isn't going to get more data and closes the input stream.
(As I said, this is merely my best guess, but I suspect that an easy way to tell would be to look at the times your Dart program and the other script finish. If in Option A the Dart program finishes long before the script does, and in Option B they finish at roughly the same time, that would be sufficient evidence to me that the above is indeed what is happening.)
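For example, a minimal way to check that (purely illustrative, not part of the original program) is to timestamp each event, so the stream's close time can be compared with when the shell loop finishes:

import 'dart:io';
import 'dart:convert';

void main() {
  // Same Option-A setup as the question, but every event is stamped with
  // the current wall-clock time.
  File('fifo')
      .openRead()
      .transform(utf8.decoder)
      .listen((chunk) => print('${DateTime.now()} received: $chunk'),
          onDone: () => print('${DateTime.now()} stream closed'));
}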
I'm working through how to read sound files into a Buffer and then loop them. When I run the script to create a Buffer and read a sound file into it, it succeeds, but when I create a SynthDef using that buffer (the second line of code here), it gives me the error Buffer UGen: no buffer data. It's drawing on the same bufnum, so I'm not sure what's going on.
b = Buffer.read(s, Platform.resourceDir +/+ "sounds/testing.wav");
c= SynthDef(\loopbuffer, {arg start=0, end=10000; Out.ar(0,Pan2.ar(BufRd.ar(1, 0, Phasor.ar(0, BufRateScale.kr(b.bufnum), start, end),0.0)))}).play(s);
Platform.resourceDir ++ "/sounds/testing.wav"
The ++ here means no space is inserted when concatenating.
BufRd.ar(b.numChannels, b.bufnum)
The missing b.bufnum is causing your error. The channels 0 through 3 are reserved for hardware in/outs.
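Putting both fixes together, the SynthDef line might look something like this (a sketch, assuming b has been loaded with Buffer.read as in the question):

c = SynthDef(\loopbuffer, { arg start = 0, end = 10000;
    // Read from the buffer that was actually loaded, using its real
    // channel count and buffer number.
    Out.ar(0, Pan2.ar(
        BufRd.ar(b.numChannels, b.bufnum,
            Phasor.ar(0, BufRateScale.kr(b.bufnum), start, end),
            0.0)))
}).play(s);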
I have recently been using DTrace to analyze my iOS app.
Everything goes well except when I try to use the built-in variable stackdepth.
I read the documentation here, which introduces the built-in variable stackdepth.
So I wrote some D code:
pid$target:::entry
{
    self->entry_times[probefunc] = timestamp;
}

pid$target:::return
{
    printf("-----------------------------------\n");
    this->delta_time = timestamp - self->entry_times[probefunc];
    printf("%s\n", probefunc);
    printf("stackDepth %d\n", stackdepth);
    printf("%d---%d\n", this->delta_time, epid);
    ustack();
    printf("-----------------------------------\n");
}
And I run it with sudo dtrace -s temp.d -c ./simple.out. The ustack() function works very well, but stackdepth always appears as 0.
I tried it both on my iOS app and on a simple C program.
So does anybody know what's going on?
And how do I get the stack depth when the probe fires?
You want to use ustackdepth -- the user-land stack depth.
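In the return probe of your script, that is a one-line change:

printf("stackDepth %d\n", ustackdepth);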
The stackdepth variable refers to the kernel thread stack depth; the ustackdepth variable refers to the user-land thread stack depth. When the traced program is executing in user-land, stackdepth will (should!) always be 0.
ustackdepth is calculated using the same logic as is used to walk the user-land stack as with ustack() (just as stackdepth and stack() use similar logic for the kernel stack).
This seems like a bug in the Mac / iOS implementation of DTrace to me.
However, since you're already probing every function entry and return, you could just keep a new variable self->depth and do ++ in the :::entry probe and -- in the :::return probe. This doesn't work quite right if you run it against optimized code, because any tail-call-optimized functions may look like they enter but never return. To solve that, you can turn off optimizations.
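A minimal sketch of that counting approach (the name self->depth is just illustrative):

pid$target:::entry
{
    self->depth++;
}

pid$target:::return
{
    /* depth of the function we are returning from */
    printf("%s depth %d\n", probefunc, self->depth);
    self->depth--;
}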
Also, because what you're doing looks a lot like this, I thought maybe you would be interested in the -F option:
Coalesce trace output by identifying function entry and return.
Function entry probe reports are indented and their output is prefixed
with ->. Function return probe reports are unindented and their output
is prefixed with <-.
The normal script to use with -F is something like:
pid$target::some_function:entry { self->trace = 1 }
pid$target:::entry /self->trace/ {}
pid$target:::return /self->trace/ {}
pid$target::some_function:return { self->trace = 0 }
Where some_function is the function whose execution you want to be printed. The output shows a textual call graph for that execution:
-> some_function
  -> another_function
    -> malloc
    <- malloc
  <- another_function
  -> yet_another_function
    -> strcmp
    <- strcmp
    -> malloc
    <- malloc
  <- yet_another_function
<- some_function
There's a variable in my module, and there's a receive clause to renew the variable's value. Multiple processes are calling this method simultaneously. I need to lock this variable while one process is modifying it. A sample is below.
mytest.erl
%%%-------------------------------------------------------------------
-module(mytest).
%% API
-export([start_link/0,display/1,callDisplay/2]).
start_link() ->
    Pid = spawn(mytest, display, ["Hello"]),
    Pid.

display(Val) ->
    io:format("It started: ~p", [Val]),
    NextVal =
        receive
            {call, Msg} ->
                NewVal = Val ++ " " ++ Msg ++ " ",
                NewVal;
            stop ->
                true
        end,
    display(NextVal).

callDisplay(Pid, Val) ->
    Pid ! {call, Val}.
Start it
Pid=mytest:start_link().
Two processes call it at the same time:
P1=spawn(mytest,callDisplay,[Pid,"Walter"]),
P2=spawn(mytest,callDisplay,[Pid,"Dave"]).
I hoped it would add "Walter" and "Dave" one by one, like "Hello Walter Dave"; however, when there are many of them running together, some names (Walter, Dave, etc.) get overridden.
Because when P1 and P2 start at the same time, Val is "Hello" for both. P1 adds "Walter" to make "Hello Walter", and P2 adds "Dave" to make "Hello Dave". P1 saves it first to NextVal as "Hello Walter", then P2 saves it to NextVal as "Hello Dave", so the result will be "Hello Dave". "Hello Walter" is replaced by "Hello Dave", and "Walter" is lost forever.
Is there any way I can lock "Val", so that when we add "Walter", "Dave" will wait until the value setting is done?
Even though it's an old question, it's worth explaining.
From what you said, and if I'm correct, you expect to see "Hello Walter" and "Hello Dave". However, you're seeing successive names being appended to the former, as in "Hello Walter Dave...".
This behavior is normal, and to see why, let's look briefly at the Erlang memory model. Erlang process memory is divided into three main parts:
Process Control Block (PCB): This holds the process pid, registered name, table, state, and pointers to messages in its queue.
Stack: This holds function parameters, local variables, and function return addresses.
Private Heap: This holds incoming messages and compound data like tuples, lists, and binaries (not larger than 64 bytes).
All data in these memory areas belongs to, and is private to, the owning process.
Stage 1:
When Pid = spawn(mytest, display, ["Hello"]) is called, the server process is created, and then the display function is called with "Hello" passed as the argument. Since display/1 is executed in the server process, the "Hello" argument lives in the server process's stack. Execution of display/1 continues until it reaches the receive clause, where it blocks and awaits a message matching your format.
Stage 2:
Now P1 starts; it executes Pid ! {call, "Walter"}, then P2 executes Pid ! {call, "Dave"}. In both cases, Erlang makes a copy of the message and sends it to the server process's mailbox (private heap). This copied message in the mailbox belongs to the server process, not the client.
Now, when {call, "Walter"} is matched, Msg gets bound to "Walter".
From Stage 1, we know Val is bound to "Hello", so NewVal gets bound to Val ++ " " ++ Msg = "Hello Walter".
At this point, P2's message, {call, "Dave"}, is still in the server's mailbox awaiting the next receive, which will happen in the next recursive call to display/1. NextVal gets bound to NewVal, and the recursive call to display/1 is made with "Hello Walter" passed as the argument. This gives the first print, "Hello Walter", which now also lives in the server process's stack.
Now when the receive clause is reached again, P2's message {call, "Dave"} is matched.
NewVal and NextVal get bound to "Hello Walter" ++ " " ++ "Dave" = "Hello Walter Dave". This gets passed to display/1 as the new Val, printing Hello Walter Dave. In a nutshell, this variable is updated on every server loop. It serves the same purpose as the State term in the gen_server behavior. In your case, successive client calls just append the message to this server state variable. Now to your question:
Is there any way I can lock Val, so that when we add "Walter", "Dave" will wait until the value setting is done?
No. Not by locking. Erlang does not work this way.
There are no process-locking constructs, because none are needed.
Data (variables) is always immutable and private to the process that created it (except large binaries, which live in the shared heap).
Also, it's not the actual message you used in the Pid ! Msg construct that is processed by the receiving process; it's a copy. The Val parameter in your display/1 function is private and belongs to the server process, because it lives in its stack memory, as every call to display/1 is made by the server process itself. So there is no way any other process can lock, or even see, that variable.
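For illustration, variables in Erlang are single-assignment, so there is nothing that could be locked and mutated in place; in the shell:

1> Val = "Hello".
"Hello"
2> Val = "Hello Walter".
** exception error: no match of right hand side value "Hello Walter"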
Yes. By sequential message processing
This is exactly what the server process is doing: pulling one message at a time from its queue. When {call, "Walter"} was taken, {call, "Dave"} was waiting in the queue. The reason you see the unexpected greeting is that you changed the server state, i.e. the display/1 parameter for the next display/1 call, which processes {call, "Dave"}.
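If what you actually want is a separate "Hello Walter" and "Hello Dave", a minimal sketch (illustrative, not your original code) is to keep the base greeting unchanged in the loop state and only use each caller's name when printing:

display(Base) ->
    receive
        {call, Msg} ->
            %% Print the greeting, then recurse with the unchanged base,
            %% so later callers are not affected by earlier ones.
            io:format("~s ~s~n", [Base, Msg]),
            display(Base);
        stop ->
            true
    end.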
I have a Vim script which needs to switch to a particular buffer. That buffer will be specified by either a full path, a partial path, or just its name.
For example:
I am in the directory /home/user/code and I have 3 Vim buffers open: foo.py, src/foo.py, and src/bar.py.
If the script was told to switch to buffer /home/user/code/foo.py, it would switch to buffer foo.py.
If it were told to switch to user/code/src/foo.py, it would switch to buffer src/foo.py.
If it were told to switch to foo.py, it would switch to buffer foo.py.
If it were told to switch to bar.py, it would switch to buffer src/bar.py.
The simplest solution I can see is to somehow get a list of the buffers stored in a variable and use trial and error.
It would be nice if the solution was cross platform, but it needs to at least run on Linux.
The bufname() / bufnr() functions can look up loaded buffers by partial filename. You can anchor the match to the end by appending a $, like this:
echo bufnr('/src/foo.py$')
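Building on that, a small sketch that actually switches to the matched buffer (and does nothing when there is no match; bufnr() returns -1 in that case):

let n = bufnr('/src/foo.py$')
if n > 0
  execute 'buffer' n
endif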
I found a way to do this using Python in a Vim script. With Python I was able to get the names of all the buffers from vim.buffers[i].name and use os.path and os.sep to work out which buffer to switch to.
In the end, I decided that it would be more helpful for it to refuse to do anything if the buffer it was requested to switch to was ambiguous.
Here it is:
"Given a file, full path, or partial path, this will try to change to the
"buffer which may match that file. If no buffers match, it returns 1. If
"multiple buffers match, it returns 2. It returns 0 on success
function s:GotoBuffer(buf)
python << EOF
import vim, os
buf = vim.eval("a:buf")
#split the paths into lists of their components and reverse.
#e.g. foo/bar/baz.py becomes ['foo', 'bar', 'baz.py']
buf_path = os.path.normpath(buf).split(os.sep)[::-1]
buffers = [os.path.normpath(b.name).split(os.sep)[::-1] for b in vim.buffers]
possible_buffers = range(len(buffers))
#start eliminating incorrect buffers by their filenames and paths
for component in xrange(len(buf_path)):
for b in buffers:
if len(b)-1 >= component and b[component] != buf_path[component]:
#This buffer doesn't match. Eliminate it as a posibility.
i = buffers.index(b)
if i in possible_buffers: possible_buffers.remove(i)
if len(possible_buffers) > 1: vim.command("return 2")
#delete the next line to allow ambiguous switching
elif not possible_buffers: vim.command("return 1")
else:
vim.command("buffer " + str(possible_buffers[-1] + 1))
EOF
endfunction
EDIT: The above code seems to have some bugs. I am not going to fix them because there is another answer which is much better.
As the library docs say, a CString created with newCString must be freed with the free function. I was expecting that when the CString is created it would take some memory, and when it is released with free, memory usage would go down, but it didn't! Here is example code:
module Main where
import Foreign
import Foreign.C.String
import System.IO
wait = do
  putStr "Press enter" >> hFlush stdout
  _ <- getLine
  return ()

main = do
  let s = concat $ replicate 1000000 ['0'..'9']
  cs <- newCString s
  cs `seq` wait -- (1)
  free cs
  wait -- (2)
When the program stopped at (1), htop showed that memory usage was somewhere around 410M; this is OK. I press enter and the program stops at line (2), but memory usage is still 410M even though cs has been freed!
How is this possible? A similar program written in C behaves as it should. What am I missing here?
The issue is that free just indicates to the garbage collector that it can now collect the string. That doesn't actually force the garbage collector to run though -- it just indicates that the CString is now garbage. It is still up to the GC to decide when to run, based on heap pressure heuristics.
You can force a major collection by calling performGC straight after the call to free, which immediately reduces the memory to 5M or so.
E.g. this program:
import Foreign
import Foreign.C.String
import System.IO
import System.Mem
wait = do
  putStr "Press enter" >> hFlush stdout
  _ <- getLine
  return ()

main = do
  let s = concat $ replicate 1000000 ['0'..'9']
  cs <- newCString s
  cs `seq` wait -- (1)
  free cs
  performGC
  wait -- (2)
This behaves as expected, with the following memory profile: the first red dot is the call to performGC, immediately deallocating the string. The program then hovers around 5M until terminated.