vimscript: switch to buffer by filename or path

I have a vimscript which needs to switch to a particular buffer. That buffer will be specified by either full path, partial path, or just its name.
For example:
I am in the directory /home/user/code and I have three vim buffers open: foo.py, src/foo.py, and src/bar.py.
If the script was told to switch to buffer /home/user/code/foo.py, it would switch to buffer foo.py.
If it were told to switch to user/code/src/foo.py, it would switch to buffer src/foo.py.
If it were told to switch to foo.py, it would switch to buffer foo.py.
If it were told to switch to bar.py, it would switch to buffer src/bar.py.
The simplest solution I can see is to somehow get a list of the buffers stored in a variable and use trial and error.
It would be nice if the solution was cross platform, but it needs to at least run on Linux.

The bufname() / bufnr() functions can look up loaded buffers by partial filename. You can anchor the match to the end by appending a $, like this:
echo bufnr('/src/foo.py$')
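To actually switch once you have a match, you can hand the result to :buffer, guarding against a failed lookup. A minimal sketch, reusing the /src/foo.py$ pattern from above (bufnr() returns -1 when nothing matches):
let s:n = bufnr('/src/foo.py$')
if s:n > 0
    execute 'buffer' s:n
endif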

I found a way to do this using Python in a vimscript. With Python I was able to get the names of all the buffers from vim.buffers[i].name, and I used os.path and os.sep to work out which buffer to switch to.
In the end, I decided that it would be more helpful for it to refuse to do anything if the buffer it was requested to switch to was ambiguous.
Here it is:
"Given a file, full path, or partial path, this will try to change to the
"buffer which may match that file. If no buffers match, it returns 1. If
"multiple buffers match, it returns 2. It returns 0 on success
function s:GotoBuffer(buf)
python << EOF
import vim, os

buf = vim.eval("a:buf")
# Split the paths into lists of their components and reverse them,
# e.g. foo/bar/baz.py becomes ['baz.py', 'bar', 'foo'].
buf_path = os.path.normpath(buf).split(os.sep)[::-1]
buffers = [os.path.normpath(b.name).split(os.sep)[::-1] for b in vim.buffers]
possible_buffers = range(len(buffers))

# Start eliminating incorrect buffers by their filenames and paths.
for component in xrange(len(buf_path)):
    for b in buffers:
        if len(b) - 1 >= component and b[component] != buf_path[component]:
            # This buffer doesn't match. Eliminate it as a possibility.
            i = buffers.index(b)
            if i in possible_buffers:
                possible_buffers.remove(i)

if len(possible_buffers) > 1:
    vim.command("return 2")  # delete this branch to allow ambiguous switching
elif not possible_buffers:
    vim.command("return 1")
else:
    vim.command("buffer " + str(possible_buffers[-1] + 1))
EOF
endfunction
EDIT: The above code seems to have some bugs. I am not going to fix them because there is another answer which is much better.

Related

F# seq behavior

I'm a little baffled by the inner workings of sequence expressions in F#.
Normally, if we make a sequential file reader with seq, with no intentional caching of data:
seq {
    let mutable current = file.Read()
    while current <> -1 do
        yield current
}
We will end up with some weird behavior if we try to re-iterate or backtrack. My idea was that, since Read() is a function reading some mutable state, we can't expect the output to be correct if we re-iterate. But then this behaves nicely even on boundary reads:
let Read path =
    seq {
        use fp = System.IO.File.OpenRead path
        let buf = [| for _ in 0 .. 1024 -> 0uy |]
        let mutable pos = 1
        let mutable current = 0
        while pos <> 0 do
            if current = 0 then
                pos <- fp.Read(buf, 0, 1024)
            if pos > 0 && current < pos then
                yield buf.[current]
            current <- (current + 1) % 1024
    }
let content = Read "some path"
We clearly reuse the same buffer to enhance performance, but suppose we read the 1025th byte: that triggers an update of the buffer. Yet if we then read any byte at a position < 1025 afterwards, we still get the correct output. How can that be, and what is the difference?
Your question is a bit unclear, so I'll try to guess.
When you create a seq { }, you're essentially creating a state machine which will run only as far as it needs to. When you request the very first element from it, it'll start at the top and run until your first yield instruction. Then, when you request another value, it'll run from that point until the next yield, and so on.
Keep in mind that a seq { } produces an IEnumerable<'T>, which is like a "plan of execution". Each time you start to iterate the sequence (for example by calling Seq.head), a call to GetEnumerator is made behind the scenes, which causes a new IEnumerator<'T> to be created. It is the IEnumerator which does the actual providing of values. You can think of it in more classical terms as having an array over which you can iterate (an iterable or enumerable) and many pointers over that array, each of which are at different points in the array (many iterators or enumerators).
In your first code, file is most likely external to the seq block. This means that the file you are reading from is baked into the plan of execution; no matter how many times you start to iterate the sequence, you'll always be reading from the same file. This is obviously going to cause unpredictable behaviour.
However, in your second code, the file is opened as part of the seq block's definition. This means that you'll get a new file handle each time you iterate the sequence or, essentially, a new file handle per enumerator. The reason this code works is that you can't reverse an enumerator or iterate over it multiple times, not with a single thread at least.
(Now, if you were to manually get an enumerator and advance it over multiple threads, you'd probably run into problems very quickly. But that is a different topic.)
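One way to make the difference visible is a tiny example where the handle creation clearly sits inside the seq (the file name below is hypothetical): every fresh iteration calls GetEnumerator, gets its own reader, and starts again from the beginning of the file.
let lines (path: string) =
    seq {
        // 'use' is inside the seq, so each enumerator opens its own reader
        use reader = new System.IO.StreamReader(path)
        while not reader.EndOfStream do
            yield reader.ReadLine()
    }

// Two separate iterations, two separate file handles, both starting at byte 0:
// lines "data.txt" |> Seq.truncate 3 |> Seq.iter (printfn "%s")
// lines "data.txt" |> Seq.length |> printfn "%d"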

SuperCollider Error: Buffer UGen: no buffer data

Working through how to read sound files into a Buffer and then looping it. When I run the script to create a Buffer and read a sound file into it, it succeeds, but when I create a SynthDef using that buffer (the second line of code here), it gives me the error Buffer UGen: no buffer data. It's drawing on the same bufnum, so I'm not sure what's going on.
b = Buffer.read(s, Platform.resourceDir +/+ "sounds/testing.wav");
c= SynthDef(\loopbuffer, {arg start=0, end=10000; Out.ar(0,Pan2.ar(BufRd.ar(1, 0, Phasor.ar(0, BufRateScale.kr(b.bufnum), start, end),0.0)))}).play(s);
Platform.resourceDir ++ "/sounds/testing.wav"
The ++ here means no space is inserted when concatenating.
BufRd.ar(b.numChannels, b.bufnum)
The missing b.bufnum is causing your error. Channels 0 through 3 are reserved for hardware ins/outs.
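Putting the two corrections together, a sketch of what the SynthDef might look like (same \loopbuffer, start, and end as above, and assuming b is the Buffer read in the first line):
c = SynthDef(\loopbuffer, { arg start = 0, end = 10000;
    Out.ar(0,
        Pan2.ar(
            BufRd.ar(b.numChannels, b.bufnum,
                Phasor.ar(0, BufRateScale.kr(b.bufnum), start, end)),
            0.0))
}).play(s);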

DTrace built-in variable stackdepth always returns 0

I am recently using DTrace to analyze my iOS app.
Everything goes well except when I try to use the built-in variable stackdepth.
I read the documentation here, which introduces the built-in variable stackdepth.
So I write some D code
pid$target:::entry
{
    self->entry_times[probefunc] = timestamp;
}

pid$target:::return
{
    printf("-----------------------------------\n");
    this->delta_time = timestamp - self->entry_times[probefunc];
    printf("%s\n", probefunc);
    printf("stackDepth %d\n", stackdepth);
    printf("%d---%d\n", this->delta_time, epid);
    ustack();
    printf("-----------------------------------\n");
}
And run it with sudo dtrace -s temp.d -c ./simple.out. The ustack() function works very well, but stackdepth always appears to be 0.
I tried it both on my iOS app and on a simple C program.
So anybody knows what's going on?
And how to get stack depth when the probe fires?
You want to use ustackdepth -- the user-land stack depth.
The stackdepth variable refers to the kernel thread stack depth; the ustackdepth variable refers to the user-land thread stack depth. When the traced program is executing in user-land, stackdepth will (should!) always be 0.
ustackdepth is calculated using the same logic as is used to walk the user-land stack as with ustack() (just as stackdepth and stack() use similar logic for the kernel stack).
This seems like a bug in the Mac / iOS implementation of DTrace to me.
However, since you're already probing every function entry and return, you could just keep a new variable self->depth and do ++ in the :::entry probe and -- in the :::return probe. This doesn't work quite right if you run it against optimized code, because any tail-call-optimized functions may look like they enter but never return. To solve that, you can turn off optimizations.
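A minimal sketch of that manual-counting approach, using the same pid$target probes as your script:
pid$target:::entry
{
    self->depth++;
}

pid$target:::return
{
    printf("%s returning at depth %d\n", probefunc, self->depth);
    self->depth--;
}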
Also, because what you're doing looks a lot like this, I thought maybe you would be interested in the -F option:
Coalesce trace output by identifying function entry and return.
Function entry probe reports are indented and their output is prefixed
with ->. Function return probe reports are unindented and their output
is prefixed with <-.
The normal script to use with -F is something like:
pid$target::some_function:entry { self->trace = 1 }
pid$target:::entry /self->trace/ {}
pid$target:::return /self->trace/ {}
pid$target::some_function:return { self->trace = 0 }
Where some_function is the function whose execution you want to be printed. The output shows a textual call graph for that execution:
-> some_function
  -> another_function
    -> malloc
    <- malloc
  <- another_function
  -> yet_another_function
    -> strcmp
    <- strcmp
    -> malloc
    <- malloc
  <- yet_another_function
<- some_function

Save vector to file during debug session (Xcode)

My application has crashed in an assert, and the debugger is attached. To be able to reproduce the crash I want to save a C++ vector with 397 struct{uint64_t, uint64_t} elements to file.
My first approach was to try to print the vector. I can print the vector to the console, but it seems like only the first 256 values are written. Is it possible to remove the 256 element restriction?
I've also searched for a way to save the vector to file from within the debugger, but I've not found any way. I've not even found a way to save a memory region, but I guess that must be possible...
Since you mentioned that you're stopped in the debugger in Xcode, I'll assume you're debugging with lldb. You can use the expression command to execute essentially arbitrary code when you're stopped in the debugger, for example:
expression for(int j = 0; j < 10; j++) { (void)NSLog(@"%d", j); }
This will execute a for loop and print the numbers 0 through 9. You should be able to use a similar technique to iterate over your vector and write it to a file. You can combine multiple expressions using a semicolon, just as if you were writing normal code (well, except for newlines). For example, this will write "Hello, world" to a temporary file at /tmp/vector.dat, not exactly what you want, but I think you'll get the idea:
expression FILE *fp = (FILE*)fopen("/tmp/vector.dat", "w"); (void)fprintf(fp, "Hello, world!\n"); (void)fclose(fp);
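For the vector in question, something along these lines should work; myVec, a, and b are hypothetical names for your vector and its two uint64_t fields, so substitute whatever your code actually uses:
expression FILE *fp = (FILE*)fopen("/tmp/vector.dat", "w"); for (unsigned long i = 0; i < myVec.size(); ++i) { (void)fprintf(fp, "%llu %llu\n", myVec[i].a, myVec[i].b); } (void)fclose(fp);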

Can I get a list of all currently-registered atoms?

My project has blown through the max 1M atoms; we've cranked up the limit, but I need to apply some sanity to the code that people are submitting with regard to list_to_atom and its friends. I'd like to start by getting a list of all the registered atoms so I can see where the largest offenders are. Is there any way to do this? I'll have to be creative about how I do it so I don't end up trying to dump 1-2M lines in a live console.
You can get hold of all atoms by using an undocumented feature of the external term format.
TL;DR: Paste the following line into the Erlang shell of your running node. Read on for explanation and a non-terse version of the code.
(fun F(N)->try binary_to_term(<<131,75,N:24>>) of A->[A]++F(N+1) catch error:badarg->[]end end)(0).
Elixir version by Ivar Vong:
for i <- 0..:erlang.system_info(:atom_count)-1, do: :erlang.binary_to_term(<<131,75,i::24>>)
An Erlang term encoded in the external term format starts with the byte 131, then a byte identifying the type, and then the actual data. I found that EEP-43 mentions all the possible types, including ATOM_INTERNAL_REF3 with type byte 75, which isn't mentioned in the official documentation of the external term format.
For ATOM_INTERNAL_REF3, the data is an index into the atom table, encoded as a 24-bit integer. We can easily create such a binary: <<131,75,N:24>>
For example, in my Erlang VM, false seems to be the zeroth atom in the atom table:
> binary_to_term(<<131,75,0:24>>).
false
There's no simple way to find the number of atoms currently in the atom table*, but we can keep increasing the number until we get a badarg error.
So this little module gives you a list of all atoms:
-module(all_atoms).
-export([all_atoms/0]).

atom_by_number(N) ->
    binary_to_term(<<131,75,N:24>>).

all_atoms() ->
    atoms_starting_at(0).

atoms_starting_at(N) ->
    try atom_by_number(N) of
        Atom ->
            [Atom] ++ atoms_starting_at(N + 1)
    catch
        error:badarg ->
            []
    end.
The output looks like:
> all_atoms:all_atoms().
[false,true,'_',nonode@nohost,'$end_of_table','','fun',
infinity,timeout,normal,call,return,throw,error,exit,
undefined,nocatch,undefined_function,undefined_lambda,
'DOWN','UP','EXIT',aborted,abs_path,absoluteURI,ac,accessor,
active,all|...]
> length(v(-1)).
9821
* In Erlang/OTP 20.0, you can call erlang:system_info(atom_count):
> length(all_atoms:all_atoms()) == erlang:system_info(atom_count).
true
I'm not sure if there's a way to do it on a live system, but if you can run it in a test environment you should be able to get a list via crash dump. The atom table is near the end of the crash dump format. You can create a crash dump via erlang:halt/1, but that will bring down the whole runtime system.
I dare say that if you use more than 1M atoms, then you are doing something wrong. Atoms are intended to be static once the application is running, or at least bounded by some small number, 3000 or so for a medium-sized application.
Be very careful when an adversary can generate atoms in your VM; calls like list_to_atom/1 in particular are somewhat dangerous.
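If the worry is untrusted input creating atoms, a common mitigation (a sketch, not tied to your code; safe_to_atom is a hypothetical wrapper name) is list_to_existing_atom/1, which only resolves atoms already in the table and raises badarg otherwise:
safe_to_atom(String) ->
    try list_to_existing_atom(String) of
        Atom -> {ok, Atom}
    catch
        error:badarg -> {error, unknown_atom}
    end.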
EDITED (wrong answer..)
You can adjust the maximum number of atoms with the +t emulator flag:
http://www.erlang.org/doc/efficiency_guide/advanced.html
...but I know of very few use cases where it is necessary.
You can track atom stats with erlang:memory().
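For reference, a quick sketch of both knobs (2000000 is just an example value for the limit):
%% start the VM with a larger atom table:
%% $ erl +t 2000000

%% then, from the shell, watch atom memory usage:
erlang:memory(atom).       %% bytes allocated for the atom table
erlang:memory(atom_used).  %% bytes actually used by atoms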
