I'm having trouble figuring out how to edit a function in GNU APL.
I have tried ∇func (DEFN ERROR), )edit (BAD COMMAND), and )editor (BAD COMMAND),
and all of them give me errors.
Any suggestion on how to edit a simple function would be appreciated.
GNU APL 1.8 on Arch Linux
Build Date: 2020-07-07 19:33:16 UTC
You should be able to use the line editor (a.k.a. the ∇ editor).
Dyalog's documentation should apply to GNU APL too.
An example session:
⍝ define Foo:
∇r←Foo y
r←2×y
∇
⍝ apply Foo (result to stdout):
Foo 10
⍝ change Foo's line 1:
∇Foo[1]
r←3×y
∇
⍝ apply new Foo (result to stdout):
Foo 10
⍝ display Foo's definition (to stderr)
∇Foo[⎕]∇
This should print the following to stdout:
20
30
And report Foo's new definition to stderr:
[0] r←Foo y
[1] r←3×y
I am trying to improve my loop computation speed by using foreach, but there is a simple Rcpp function I defined inside this loop. I saved the Rcpp function as mproduct.cpp, and I call the function simply using
sourceCpp("mproduct.cpp")
and the Rcpp function is a simple one, which is to perform matrix product in C++:
// [[Rcpp::depends(RcppArmadillo, RcppEigen)]]
#include <RcppArmadillo.h>
#include <RcppEigen.h>

// [[Rcpp::export]]
SEXP MP(const Eigen::Map<Eigen::MatrixXd> A, Eigen::Map<Eigen::MatrixXd> B){
    Eigen::MatrixXd C = A * B;
    return Rcpp::wrap(C);
}
So, the function in the Rcpp file is MP, referring to matrix product. I need to perform the following foreach loop (I have simplified the code for illustration):
foreach(j=1:n, .package='Rcpp', .noexport=c("mproduct.cpp"), .combine=rbind) %dopar% {
    n = 1000000
    A <- matrix(rnorm(n, 1000, 1000))
    B <- matrix(rnorm(n, 1000, 1000))
    S <- MP(A, B)
    return(S)
}
Since matrices A and B are large, I want to use foreach to alleviate the computational cost.
However, the above code does not work; it gives me the error message:
task 1 failed - "NULL value passed as symbol address"
The reason I added .noexport = c("mproduct.cpp") was to follow suggestions from people who solved similar issues (Can't run Rcpp function in foreach - "NULL value passed as symbol address"). But somehow this does not solve my issue.
So I tried to install my Rcpp function as a library. I used the following code:
Rcpp.package.skeleton('mp',cpp_files = "<my working directory>")
but it returns a warning message:
The following packages are referenced using Rcpp::depends attributes however are not listed in the Depends, Imports or LinkingTo fields of the package DESCRIPTION file: RcppArmadillo, RcppEigen
so when I tried to install my package using
install.packages("<my working directory>",repos = NULL,type='source')
I got the warning message:
Error in untar2(tarfile, files, list, exdir, restore_times) :
incomplete block on file
In R CMD INSTALL
Warning in install.packages :
installation of package ‘C:/Users/Lenovo/Documents/mproduct.cpp’ had non-zero exit status
So can someone help me work out how to either 1) use foreach with the Rcpp function MP, or 2) install the Rcpp file as a package?
Thank you all very much.
The first step would be making sure that you are optimizing the right thing. For me, this would not be the case as this simple benchmark shows:
set.seed(42)
n <- 1000
A <- matrix(rnorm(n * n), n, n)
B <- matrix(rnorm(n * n), n, n)
MP <- Rcpp::cppFunction("SEXP MP(const Eigen::Map<Eigen::MatrixXd> A, Eigen::Map<Eigen::MatrixXd> B){
    Eigen::MatrixXd C = A * B;
    return Rcpp::wrap(C);
}", depends = "RcppEigen")
bench::mark(MP(A, B), A %*% B)[1:5]
#> # A tibble: 2 x 5
#> expression min median `itr/sec` mem_alloc
#> <bch:expr> <bch:tm> <bch:tm> <dbl> <bch:byt>
#> 1 MP(A, B) 277.8ms 278ms 3.60 7.63MB
#> 2 A %*% B 37.4ms 39ms 22.8 7.63MB
So for me the matrix product via %*% is several times faster than the one via RcppEigen. However, I am using Linux with OpenBLAS for matrix operations, while you are on Windows, which often means the reference BLAS. It might be that RcppEigen is faster on your system. I am not sure how difficult it is for Windows users to get a faster BLAS implementation (https://csgillespie.github.io/efficientR/set-up.html#blas-and-alternative-r-interpreters might contain some pointers), but I would suggest spending some time investigating this.
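If you want to check which BLAS your own R installation is linked against, recent R versions report it in sessionInfo(); the lines below are only illustrative, and the exact output and paths depend on the platform:
sessionInfo()
#> ...
#> BLAS:   /usr/lib/libopenblas.so     (example path only)
#> LAPACK: /usr/lib/liblapack.so.3
#> ...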
Now if you come to the conclusion that you do need RcppEigen or RcppArmadillo in your code and want to put that code into a package, you can do the following. Instead of Rcpp::Rcpp.package.skeleton() use RcppEigen::RcppEigen.package.skeleton() or RcppArmadillo::RcppArmadillo.package.skeleton() to create a starting point for a package based on RcppEigen or RcppArmadillo, respectively.
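A minimal sketch of that route, assuming the source file is mproduct.cpp in the current working directory (untested; since MP() only uses Eigen, the RcppArmadillo dependency can be dropped):
# create a skeleton package that already links against RcppEigen
RcppEigen::RcppEigen.package.skeleton("mp", example_code = FALSE)
# copy the C++ source into the package and regenerate the Rcpp glue code
file.copy("mproduct.cpp", "mp/src/mproduct.cpp")
Rcpp::compileAttributes("mp")
# install from the package directory, not from the .cpp file
install.packages("mp", repos = NULL, type = "source")  # or run R CMD INSTALL mp from a shell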
I have recently been using DTrace to analyze my iOS app.
Everything goes well except when I try to use the built-in variable stackdepth.
I read the documentation here, which introduces the built-in variable stackdepth.
So I wrote some D code:
pid$target:::entry
{
    self->entry_times[probefunc] = timestamp;
}

pid$target:::return
{
    printf("-----------------------------------\n");
    this->delta_time = timestamp - self->entry_times[probefunc];
    printf("%s\n", probefunc);
    printf("stackdepth %d\n", stackdepth);
    printf("%d---%d\n", this->delta_time, epid);
    ustack();
    printf("-----------------------------------\n");
}
And I run it with sudo dtrace -s temp.d -c ./simple.out. The ustack() output looks fine, but stackdepth always comes out as 0.
I tried this both on my iOS app and on a simple C program.
So does anybody know what's going on?
And how to get stack depth when the probe fires?
You want to use ustackdepth -- the user-land stack depth.
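Applied to the script from the question, only the printf needs to change; a sketch of the return probe:
pid$target:::return
{
    printf("%s at user stack depth %d\n", probefunc, ustackdepth);
}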
The stackdepth variable refers to the kernel thread stack depth; the ustackdepth variable refers to the user-land thread stack depth. When the traced program is executing in user-land, stackdepth will (should!) always be 0.
ustackdepth is calculated using the same logic as is used to walk the user-land stack as with ustack() (just as stackdepth and stack() use similar logic for the kernel stack).
This seems like a bug in the Mac / iOS implementation of DTrace to me.
However, since you're already probing every function entry and return, you could just keep a new variable self->depth and do ++ in the :::entry probe and -- in the :::return probe. This doesn't work quite right if you run it against optimized code, because any tail-call-optimized functions may look like they enter but never return. To solve that, you can turn off optimizations.
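A sketch of that manual counter (self->depth is just an arbitrary thread-local name; unassigned DTrace variables start at 0):
pid$target:::entry
{
    self->depth++;
}

pid$target:::return
{
    printf("%s returning at depth %d\n", probefunc, self->depth);
    self->depth--;
}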
Also, because what you're doing looks a lot like this, I thought maybe you would be interested in the -F option:
Coalesce trace output by identifying function entry and return.
Function entry probe reports are indented and their output is prefixed
with ->. Function return probe reports are unindented and their output
is prefixed with <-.
The normal script to use with -F is something like:
pid$target::some_function:entry { self->trace = 1 }
pid$target:::entry /self->trace/ {}
pid$target:::return /self->trace/ {}
pid$target::some_function:return { self->trace = 0 }
Where some_function is the function whose execution you want to be printed. The output shows a textual call graph for that execution:
-> some_function
-> another_function
-> malloc
<- malloc
<- another_function
-> yet_another_function
-> strcmp
<- strcmp
-> malloc
<- malloc
<- yet_another_function
<- some_function
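To get output like the above, the invocation is the same as in your question with -F added (the script name here is just a placeholder):
sudo dtrace -F -s trace.d -c ./simple.out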
Chapter 3 of Starting FORTH says,
Now that you've made a block "current", you can list it by simply typing the word L. Unlike LIST, L does not want to be preceded by a block number; instead it lists the current block.
When I run 180 LIST, I get
Screen 180 not modified
0
...
15
ok
But when I run L, I get an error
:30: Undefined word
>>>L<<<
Backtrace:
$7F0876E99A68 throw
$7F0876EAFDE0 no.extensions
$7F0876E99D28 interpreter-notfound1
What am I doing wrong?
Yes, gForth supports an internal (BLOCK) editor. Start gforth and
type: use blocked.fb (a demo page)
type: 1 load
type: editor
Typing words will now show the editor words:
s b n bx nx qx dl il f y r d i t 'par 'line 'rest c a m ok
Type 0 l to list screen 0, which describes the editor:
Screen 0 not modified
0 \ some comments on this simple editor 29aug95py
1 m marks current position a goes to marked position
2 c moves cursor by n chars t goes to line n and inserts
3 i inserts d deletes marked area
4 r replaces marked area f search and mark
5 il insert a line dl delete a line
6 qx gives a quick index nx gives next index
7 bx gives previous index
8 n goes to next screen b goes to previous screen
9 l goes to screen n v goes to current screen
10 s searches until screen n y yank deleted string
11
12 Syntax and implementation style a la PolyFORTH
13 If you don't like it, write a block editor mode for Emacs!
14
15
ok
Creating your own block file
To create your own new block file myblocks.fb,
type: use blocked.fb
type: 1 load
type: editor
Then:
type: use myblocks.fb
1 load will show BLOCK #1 (lines 0 to 15; 16 lines of 64 characters each)
1 t will highlight line 1
Type i this is text to [i]nsert into line 1
After the current BLOCK is edited, type flush in order to write BLOCK #1 to the file myblocks.fb.
For more information see gForth Blocks.
It turns out these are "Editor Commands". The book says,
For Those Whose EDITOR Doesn't Follow These Rules
The FORTH-79 Standard does not specify editor commands. Your system may use a different editor; if so, check your system's documentation.
I don't believe gforth supports an internal editor at all, so L, T, I, P, F, E, D, and R are all presumably unsupported.
gforth is well integrated with Emacs. In my XEmacs here, by default any file called *.fs is considered Forth source. "C-h m", as usual, gives the available commands.
No, GNU Forth doesn't have an internal editor; I use Vim :)
I'm using luacheck (within the Atom editor), but open to other static analysis tools.
Is there a way to check that I'm using an uninitialized table field? I read the docs (http://luacheck.readthedocs.io/en/stable/index.html) but maybe I missed how to do this?
In all three cases in the code below I'm trying to detect that I'm (erroneously) using the field 'y1', and none of them is flagged. (At run time the error is detected, but I'm trying to catch it before run time.)
local a = {}
a.x = 10
a.y = 20
print(a.x + a.y1) -- no warning about uninitialized field y1 !?
-- luacheck: globals b
b = {}
b.x = 10
b.y = 20
print(b.x + b.y1) -- no warning about uninitialized field y1 !?
-- No inline option for luacheck re: 'c', so plenty of complaints
-- about "non-standard global variable 'c'."
c = {} -- warning about setting
c.x = 10 -- warning about mutating
c.y = 20 -- " " "
print(c.x + c.y1) -- more warnings (but NOT about field y1)
The point is this: as projects grow (files grow, and the number & size of modules grow), it would be nice to prevent simple errors like this from creeping in.
Thanks.
lua-inspect should be able to detect and report these instances. I have it integrated into the ZeroBrane Studio IDE, and when running with deep analysis it reports the following on this fragment:
unknown-field.lua:4: first use of unknown field 'y1' in 'a'
unknown-field.lua:7: first assignment to global variable 'b'
unknown-field.lua:10: first use of unknown field 'y1' in 'b'
unknown-field.lua:14: first assignment to global variable 'c'
unknown-field.lua:17: first use of unknown field 'y1' in 'c'
(Note that the integration code only reports the first instance of each of these errors to minimize the number reported; I also fixed an issue where only the first unknown field was reported, so you may want to use the latest code from the repository.)
People who look into questions related to "Lua static analysis" may also be interested in the various dialects of typed Lua, for example:
Typed Lua
Titan
Pallene
Ravi
But you may not have heard of "Teal" (early in its life it was called "tl").
I'm taking the liberty of answering my original question using Teal, since I find it intriguing.
-- 'record' (like a 'struct')
local Point = record
    x : number
    y : number
end

local a : Point = {}
a.x = 10
a.y = 20
print(a.x + a.y1) -- will trigger an error
                  -- (in VS Code using the Teal extension & at the command line)
From the command line:
> tl check myfile.tl
========================================
1 error:
myfile.tl:44:13: invalid key 'y1' in record 'a'
By the way...
> tl gen myfile.tl
creates a pure Lua file, myfile.lua, that has no type information in it. Note that running this Lua file will still trigger the nil error: lua: myfile.lua:42: attempt to index a nil value (local 'a').
So Teal gives you a chance to catch 'type' errors, but it doesn't require you to fix them before generating Lua files.
I only knew list_to_binary in Erlang/OTP, but today I saw iolist_to_binary.
I tried it in the Erlang shell, but I can't find the difference.
(ppb1_bs6#esekilvxen263)59> list_to_binary([<<1>>, [1]]).
<<1,1>>
(ppb1_bs6#esekilvxen263)60> iolist_to_binary([<<1>>, [1]]).
<<1,1>>
(ppb1_bs6#esekilvxen263)61> iolist_to_binary([<<1>>, [1], 1999]).
** exception error: bad argument
in function iolist_to_binary/1
called as iolist_to_binary([<<1>>,[1],1999])
(ppb1_bs6#esekilvxen263)62> list_to_binary([<<1>>, [1], 1999]).
** exception error: bad argument
in function list_to_binary/1
called as list_to_binary([<<1>>,[1],1999])
(ppb1_bs6#esekilvxen263)63> list_to_binary([<<1>>, [1], <<1999>>]).
<<1,1,207>>
(ppb1_bs6#esekilvxen263)64> ioslist_to_binary([<<1>>, [1], <<1999>>]).
** exception error: undefined shell command ioslist_to_binary/1
(ppb1_bs6#esekilvxen263)65> iolist_to_binary([<<1>>, [1], <<1999>>]).
<<1,1,207>>
From the test, I think iolist_to_binary may be the same as list_to_binary.
In current versions of Erlang/OTP both functions will automatically flatten the list or iolist() you provide, leaving functionally only one difference, which is that list_to_binary/1 will not accept a binary parameter, but iolist_to_binary/1 will:
1> erlang:iolist_to_binary(<<1,2,3>>).
<<1,2,3>>
2> erlang:list_to_binary(<<1,2,3>>).
** exception error: bad argument
in function list_to_binary/1
called as list_to_binary(<<1,2,3>>)
This is clear from the type specification of iolist():
maybe_improper_list(byte() | binary() | iolist(), binary() | [])
As you can see, iolist() can be a binary, but clearly a list can never be a binary.
If you want to follow through the convolution of the improper list type you can find its definition, along with iolist() at (Erlang) Types and Function Specifications.
In terms of practical use, an iolist() is a messy, unflattened list. Think of it as a kind of lazy output, deliberately not flattened into a proper list by the producer as an optimisation, because not every consumer will need it flattened. You can see what it looks like by checking the output of io_lib:format/2:
1> io_lib:format("Hello ~s ~b!~n", ["world", 834]).
[72,101,108,108,111,32,"world",32,"834",33,"\n"]
Or:
2> io:format("~w~n", [ io_lib:format("Hello ~s ~b!~n", ["world", 834]) ]).
[72,101,108,108,111,32,[119,111,114,108,100],32,[56,51,52],33,[10]]
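When you finally need a flat binary, iolist_to_binary/1 is the usual way to collapse such an iolist in one step (output written from memory rather than copied from a session):
3> erlang:iolist_to_binary(io_lib:format("Hello ~s ~b!~n", ["world", 834])).
<<"Hello world 834!\n">>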
I notice that many of your examples include the number 1999. Attempts to convert a list or iolist() containing any number greater than 255 to a binary will always fail, by the way:
8> erlang:list_to_binary([1,2,3,255]).
<<1,2,3,255>>
9> erlang:list_to_binary([1,2,3,256]).
** exception error: bad argument
in function list_to_binary/1
called as list_to_binary([1,2,3,256])
This is because it wants to create a list of bytes from your list, and a number above 255 is ambiguous in terms of its possible byte representation; it can't be known if you mean little endian or big endian, etc (see (Wikipedia) Endianness).
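If you do want a number larger than 255 inside a binary you have to give it an explicit size with the bit syntax, or use binary:encode_unsigned/1. This also explains why <<1999>> in your transcript came out as 207: the default field size is 8 bits, and 1999 rem 256 is 207. For example (shell numbering continued only for illustration):
10> <<1999:16>>.
<<7,207>>
11> binary:encode_unsigned(1999).
<<7,207>>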
The differences are not large; some extra type checking is done, but in the end both are handled by one and the same function, if I have found the right place in the source.
Source list_to_binary (https://github.com/erlang/otp/blob/maint/erts/emulator/beam/binary.c#L896):
BIF_RETTYPE list_to_binary_1(BIF_ALIST_1)
{
    return erts_list_to_binary_bif(BIF_P, BIF_ARG_1, bif_export[BIF_list_to_binary_1]);
}
Source iolist_to_binary (https://github.com/erlang/otp/blob/maint/erts/emulator/beam/binary.c#L903):
BIF_RETTYPE iolist_to_binary_1(BIF_ALIST_1)
{
    if (is_binary(BIF_ARG_1)) {
        BIF_RET(BIF_ARG_1);
    }
    return erts_list_to_binary_bif(BIF_P, BIF_ARG_1, bif_export[BIF_iolist_to_binary_1]);
}
iolist_to_binary converts an iolist to a binary.
For your reference:
http://www.erlangpatterns.org/iolist.html
http://www.erlang.org/doc/man/erlang.html