Compiling LFE files with make - erlang

Is there a standard way of compiling .lfe source files from a make rule in an OTP project?
According to the docs, I'm supposed to use lfe_comp:file/1, which doesn't help much if I want to compile multiple such files in an OTP application (where I'm supposed to keep the source files in src, but the binaries in ebin).
Ideally, I'd be able to do something like
erlc -Wf -o ebin src/*lfe
But there doesn't seem to be lfe support in erlc. The best solution I can think of off the top of my head is
find src/*lfe -exec erl -s lfe_comp file {} -s init stop \;
mv src/*beam ebin/
but that seems inelegant. Any better ideas?

On a suggestion from rvirding, here's a first stab at lfec that does what I want above (and pretty much nothing else). I'd invoke it from a Makefile with ./lfec -o ebin src/*lfe.
#!/usr/bin/env escript
%% -*- erlang -*-
%%! -smp enable -sname lfec -mnesia debug verbose
main(Arguments) ->
    try
        {Opts, Args} = parse_opts(Arguments),
        case get_opt("-o", Opts) of
            false ->
                lists:map(fun lfe_comp:file/1, Args);
            Path ->
                lists:map(fun (Arg) -> lfe_comp:file(Arg, [{outdir, Path}]) end,
                          Args)
        end
    catch
        _:_ -> usage()
    end;
main(_) -> usage().

get_opt(Target, Opts) ->
    case lists:keyfind(Target, 1, Opts) of
        false -> false;
        {_} -> true;
        {_, Setting} -> Setting
    end.

parse_opts(Args) -> parse_opts(Args, []).

parse_opts(["-o", TargetDir | Rest], Opts) ->
    parse_opts(Rest, [{"-o", TargetDir} | Opts]);
parse_opts(Args, Opts) -> {Opts, Args}.

usage() ->
    io:format("usage:\n"),
    io:format("-o [TargetDir] -- output files to specified directory\n"),
    halt(1).
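A minimal Makefile rule along these lines might look like the sketch below (it assumes the lfec script above sits in the project root; note that make recipe lines must start with a tab):
EBIN    = ebin
LFE_SRC = $(wildcard src/*.lfe)

compile: $(EBIN)
	./lfec -o $(EBIN) $(LFE_SRC)

$(EBIN):
	mkdir -p $(EBIN)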

Not really. LFE is not supported by OTP, so erlc does not know about .lfe files. And as far as I know there is no way to "open up" erlc and dynamically add information about how to process files. An alternative would be to write an lfec script for this. I will think about it.
Just as a matter of interest, what are you using LFE for?

Related

How can I pass command-line arguments to an Erlang program?

I'm working on an Erlang program. How can I pass command-line parameters to it?
Program file:
-module(program).
-export([main/0]).
main() ->
    io:fwrite("Hello, world!\n").
Compilation command:
erlc program.erl
Execution command:
erl -noshell -s program main -s init stop
I need to pass arguments through the execution command and access them inside the program's main.
$ cat program.erl
-module(program).
-export([main/1]).
main(Args) ->
    io:format("Args: ~p\n", [Args]).
$ erlc program.erl
$ erl -noshell -s program main foo bar -s init stop
Args: [foo,bar]
$ erl -noshell -run program main foo bar -s init stop
Args: ["foo","bar"]
It is documented in the erl man page. Note that -s passes the arguments as a list of atoms, while -run passes them as a list of strings.
I would recommend using escript for this purpose because it has a simpler invocation.
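For example, a minimal escript (call it args.escript; the name is arbitrary) receives the command-line arguments directly in main/1 as a list of strings:
#!/usr/bin/env escript
%% Print whatever was passed on the command line.
main(Args) ->
    io:format("Args: ~p~n", [Args]).
Running escript args.escript foo bar then prints Args: ["foo","bar"].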
These are not really command-line parameters, but if you want to use environment variables, the os module might help. os:getenv() gives you a list of all environment variables. os:getenv(Var) gives you the value of the variable as a string, or returns false if Var is not set.
These environment variables should be set before you start the application.
I always use an idiom like this to start (on a bash-shell):
export PORT=8080 && erl -noshell -s program main
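Inside the program you would then read the variable with os:getenv/1. A small sketch (the PORT name and the 8080 default are just the values from the example above):
main() ->
    Port = case os:getenv("PORT") of
               false -> 8080;                   % default if PORT is not set
               Value -> list_to_integer(Value)  % environment values are strings
           end,
    io:format("Using port ~p~n", [Port]).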
If you want "named" argument, with possible default values, you can use this command line (from a toy appli I made):
erl -pa "./ebin" -s lavie -noshell -detach -width 100 -height 80 -zoom 6
lavie:start does nothing more than start an Erlang application:
-module (lavie).
-export ([start/0]).
start() -> application:start(lavie).
which in turn starts the application, where I defined default values for the parameters. Here is the app.src (rebar build):
{application, lavie,
 [
  {description, "Le jeu de la vie selon Conway"},
  {vsn, "1.3.0"},
  {registered, [lavie_sup,lavie_wx,lavie_fsm,lavie_server,rule_wx]},
  {applications, [
                  kernel,
                  stdlib
                 ]},
  {mod, { lavie_app, [200,50,2]}}, %% with default parameters
  {env, []}
 ]}.
Then, in the application code, you can use init:get_argument/1 to get the value associated with each option, if it was defined on the command line.
-module(lavie_app).
-behaviour(application).

%% Application callbacks
-export([start/2, stop/1]).

%% ===================================================================
%% Application callbacks
%% ===================================================================

start(_StartType, [W1,H1,Z1]) ->
    W = get(width,W1),
    H = get(height,H1),
    Z = get(zoom,Z1),
    lavie_sup:start_link([W,H,Z]).

stop(_State) ->
    % init:stop().
    ok.

get(Name,Def) ->
    case init:get_argument(Name) of
        {ok,[[L]]} -> list_to_integer(L);
        _ -> Def
    end.
Definitely more complex than @Hynek's proposal, but it gives you more flexibility, and I find the command line less opaque.
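To see what init:get_argument/1 actually returns, you can check it in a shell started with the same kind of flags (values arrive as lists of strings, which is why the code above calls list_to_integer/1):
$ erl -width 100
1> init:get_argument(width).
{ok,[["100"]]}
2> init:get_argument(zoom).
error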

Unset all paths before node boot

TLDR;
When starting an Erlang node (using just the erl command, for instance), how can I force it not to use the local OTP libraries, so that code:get_path() returns an empty list?
Rationale.
I want to play with erl_boot_server. I'm not doing anything specific, just experimenting. I have built a sample release and want to load it over the network. Here it is:
[vkovalev@t30nix foobar]$ tree -L 2
.
|-- bin
| |-- foobar
| |-- foobar-0.0.0+build.1.ref307ae38
| |-- install_upgrade.escript
| |-- nodetool
| `-- start_clean.boot
|-- erts-6.1
| |-- bin
| |-- doc
| |-- include
| |-- lib
| |-- man
| `-- src
|-- lib
| |-- foobar-0.1.0
| |-- kernel-3.0.1
| |-- sasl-2.4
| `-- stdlib-2.1
`-- releases
|-- 0.0.0+build.1.ref307ae38
|-- RELEASES
`-- start_erl.data
First I start the boot node.
[vkovalev@t30nix foobar]$ erl -sname boot -pa lib/*/ebin -pa releases/0.0.0+build.1.ref307ae38/ -s erl_boot_server start localhost
(boot@t30nix)1> {ok, _, _} = erl_prim_loader:get_file("foobar.boot").
(boot@t30nix)2> {ok, _, _} = erl_prim_loader:get_file("foobar_app.beam").
As you can see, everything works here. Then I start the slave node:
[vkovalev@t30nix ~]$ erl -sname slave -loader inet -hosts 127.0.0.1 -boot foobar
{"init terminating in do_boot",{'cannot get bootfile','foobar.boot'}}
Crash dump was written to: erl_crash.dump
init terminating in do_boot ()
I dug into erl_prim_loader and found the following. One clause applies when Paths is empty (it just forwards the requested filename to the boot server as is); another applies when Paths is non-empty. In the latter case (I wonder why) the prim loader prefixes the requested file name with its own (client-side) paths and then asks the SERVER to serve that path. To my mind this is quite strange, but okay. Then I checked code:get_path() on the slave node, and yes, it contains paths to the local OTP installation.
So, returning to the subject: how can I force the slave node not to use any local OTP installation (if one is already present)?
UPD: Added more investigation results.
First thing:
https://github.com/erlang/otp/blob/maint/erts/preloaded/src/erl_prim_loader.erl#L669.
erl_prim_loader (in inet mode), for reasons that are unclear to me, tries to prefix any requested module with local (client-side) paths.
It seems there is no way to force the loader on the slave node to keep its paths empty: https://github.com/erlang/otp/blob/maint/erts/preloaded/src/init.erl#L697
The paths in my boot script look like {path,["$ROOT/lib/kernel-4.0/ebin","$ROOT/lib/stdlib-2.5/ebin"]}, so it seems that even if I get the boot script loaded, I still won't be able to boot the system with it.
What's going on? Is the Erlang network boot feature broken, or is it just me? How can I get a node successfully network-booted?
Do you think a node of the slave type is just a node named "slave"?
The following code is from the tsung project, in the file "ts_os_mon_erlang.erl":
start_beam(Host) ->
    Args = ts_utils:erl_system_args(),
    ?LOGF("Starting os_mon beam on host ~p ~n", [Host], ?NOTICE),
    ?LOGF("~p Args: ~p~n", [Host, Args], ?DEB),
    slave:start(list_to_atom(Host), ?NODE, Args).
In addition, the slave module documentation states the following restrictions:
Slave nodes on other hosts than the current one are started with the program rsh. The user must be allowed to rsh to the remote hosts without being prompted for a password. This can be arranged in a number of ways (refer to the rsh documentation for details). A slave node started on the same host as the master inherits certain environment values from the master, such as the current directory and the environment variables. For what can be assumed about the environment when a slave is started on another host, read the documentation for the rsh program.
An alternative to the rsh program can be specified on the command line to erl as follows: -rsh Program.
The slave node should use the same file system at the master. At least, Erlang/OTP should be installed in the same place on both computers and the same version of Erlang should be used.
If you want to start a node with a different path, I think you could do it with a script that sets different environment variables, for the master node, not the slave node.
I think the rebar project can help for a similar purpose. It shows how to manipulate the path.
From the rebar_core.erl file:
process_dir1(Dir, Command, DirSet, Config, CurrentCodePath,
             {DirModules, ModuleSetFile}) ->
    Config0 = rebar_config:set(Config, current_command, Command),
    %% Get the list of modules for "any dir". This is a catch-all list
    %% of modules that are processed in addition to modules associated
    %% with this directory type. These any_dir modules are processed
    %% FIRST.
    {ok, AnyDirModules} = application:get_env(rebar, any_dir_modules),
    Modules = AnyDirModules ++ DirModules,
    %% Invoke 'preprocess' on the modules -- this yields a list of other
    %% directories that should be processed _before_ the current one.
    {Config1, Predirs} = acc_modules(Modules, preprocess, Config0,
                                     ModuleSetFile),
    %% Remember associated pre-dirs (used for plugin lookup)
    PredirsAssoc = remember_cwd_predirs(Dir, Predirs),
    %% Get the list of plug-in modules from rebar.config. These
    %% modules may participate in preprocess and postprocess.
    {ok, PluginModules} = plugin_modules(Config1, PredirsAssoc),
    {Config2, PluginPredirs} = acc_modules(PluginModules, preprocess,
                                           Config1, ModuleSetFile),
    AllPredirs = Predirs ++ PluginPredirs,
    ?DEBUG("Predirs: ~p\n", [AllPredirs]),
    {Config3, DirSet2} = process_each(AllPredirs, Command, Config2,
                                      ModuleSetFile, DirSet),
    %% Make sure the CWD is reset properly; processing the dirs may have
    %% caused it to change
    ok = file:set_cwd(Dir),
    %% Check that this directory is not on the skip list
    Config7 = case rebar_config:is_skip_dir(Config3, Dir) of
                  true ->
                      %% Do not execute the command on the directory, as some
                      %% module has requested a skip on it.
                      ?INFO("Skipping ~s in ~s\n", [Command, Dir]),
                      Config3;
                  false ->
                      %% Check for and get command specific environments
                      {Config4, Env} = setup_envs(Config3, Modules),
                      %% Execute any before_command plugins on this directory
                      Config5 = execute_pre(Command, PluginModules,
                                            Config4, ModuleSetFile, Env),
                      %% Execute the current command on this directory
                      Config6 = execute(Command, Modules ++ PluginModules,
                                        Config5, ModuleSetFile, Env),
                      %% Execute any after_command plugins on this directory
                      execute_post(Command, PluginModules,
                                   Config6, ModuleSetFile, Env)
              end,
    %% Mark the current directory as processed
    DirSet3 = sets:add_element(Dir, DirSet2),
    %% Invoke 'postprocess' on the modules. This yields a list of other
    %% directories that should be processed _after_ the current one.
    {Config8, Postdirs} = acc_modules(Modules ++ PluginModules, postprocess,
                                      Config7, ModuleSetFile),
    ?DEBUG("Postdirs: ~p\n", [Postdirs]),
    Res = process_each(Postdirs, Command, Config8,
                       ModuleSetFile, DirSet3),
    %% Make sure the CWD is reset properly; processing the dirs may have
    %% caused it to change
    ok = file:set_cwd(Dir),
    %% Once we're all done processing, reset the code path to whatever
    %% the parent initialized it to
    restore_code_path(CurrentCodePath),
    %% Return the updated {config, dirset} as result
    Res.

restore_code_path(no_change) ->
    ok;
restore_code_path({added, Paths}) ->
    %% Verify that all of the paths still exist -- some dynamically
    %% added paths can get blown away during clean.
    [code:del_path(F) || F <- Paths, erl_prim_loader_is_file(F)],
    ok.

erl_prim_loader_is_file(File) ->
    erl_prim_loader:read_file_info(File) =/= error.
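If the goal is only to end up with an empty code:get_path() at runtime (after boot, so this is an approximation of what the question asks for, not a real pre-boot solution), the same primitives can be used directly. A rough sketch, with clear_code_path/0 being a made-up helper name:
%% Remove every directory currently on the code path.
%% Already-loaded modules stay loaded; this only affects future lookups.
clear_code_path() ->
    [code:del_path(Dir) || Dir <- code:get_path()],
    code:get_path().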
Make sure you use the -setcookie option. From the erl man page (erl -man erl):
-loader Loader:
Specifies the method used by erl_prim_loader to
load Erlang modules in to the system. See
erl_prim_loader(3). Two Loader methods are supported,
efile and inet. efile means use the local file
system, this is the default. inet means use a boot
server on another machine, and the -id, -hosts and
-setcookie flags must be specified as well. If Loader
is something else, the user supplied Loader port
program is started.
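Putting those options together, a network-booted node would be started with something along these lines (the cookie is a placeholder, and the boot node has to be running erl_boot_server as shown in the question):
erl -sname slave -loader inet -id slave -hosts 127.0.0.1 -setcookie secret -boot foobar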

Can't send anything to spawned Erlang process

I have the following Erlang code:
#!/usr/bin/env escript
%%! -pz ../deps/amqp_client ../deps/rabbit_common ../deps/amqp_client/ebin ../deps/rabbit_common/ebin
% RMQ module
-module(rmq).
-export([main/1, send/1, validate/0, test/0]).
-include_lib("../deps/amqp_client/include/amqp_client.hrl").
main(_) ->
    %send(<<"test_esio">>),
    %validate(),
    Pid = spawn(rmq, test, []),
    % Pid = spawn(fun() -> test() end), <= I've tried this way too
    Pid ! s.

test() ->
    receive
        s ->
            io:format("BAR ~n"),
            send(<<"esio">>),
            test();
        get ->
            validate(),
            test();
        _ ->
            io:format("FOO"),
            test()
    end.
I run this with:
escript rmq.erl
This code doesn't work; it looks like spawn doesn't work.
The rest of my code works: the send and validate functions work correctly if I run them from main (I've commented them out above). What am I doing wrong?
Sorry, maybe it's a dumb question, but I'm a beginner with Erlang. I've tried to find an answer on the internet and in books and failed...
The problem is not actually in spawn, but in module/escript confusion.
In a few words, escript files are not really modules, at least not from the point of view of the Erlang VM, even if you use the -module() directive. They are interpreted, not compiled, and they definitely cannot be called by module name like rmq:test(), or, in your case, through the dynamic module call made by spawn.
The easiest solution is to separate the script from the actual modules. In your rmq.es you would just start a proper module:
#!/usr/bin/env escript
%%! -pz ../deps/amqp_client ../deps/rabbit_common ../deps/amqp_client/ebin ../deps/rabbit_common/ebin
main(_) ->
    rmq:start().
And then in the module rmq.erl:
-module(rmq).
-export([start/0, send/1, validate/0, test/0]).
-include_lib("../deps/amqp_client/include/amqp_client.hrl").
start() ->
    Pid = spawn(rmq, test, []),
    %% Pid = spawn(?MODULE, test, []), %% works with the macro too
    %% Pid = spawn(fun() -> test() end), <= I've tried this way too
    Pid ! s.

test() ->
    receive
        s ->
            io:format("BAR ~n"),
            send(<<"esio">>),
            test();
        get ->
            validate(),
            test();
        _ ->
            io:format("FOO"),
            test()
    end.
Or you could just start this module without escript, using the -run flag, like this:
erl -pz deps/*/ebin -run rmq start
EDIT regarding compilation problems
You compile your modules with the erlc command. To just compile, use erlc rmq.erl, which will produce an rmq.beam file in the current directory. But the convention is to keep all your source files in the src directory, all compiled files in the ebin directory, and things like run scripts in the top directory. Something like this:
project
|-- ebin
| |-- rmq.beam
|
|-- src
| |-- rmq.erl
|
|-- rmq.es
Assuming that you run all your shell commands from the project directory, to compile all the files from src and place the .beam binaries in ebin, use erlc src/rmq.erl -o ebin, or erlc src/* -o ebin. In the documentation you can find the explanation of the -o flag:
-o directory
The directory where the compiler should place the output files. If not specified, output files will be placed in the current working directory.
Then, after compilation, you can run your code either with the erl Erlang VM or using escript (which in effect uses erl underneath).
erl runs code from compiled modules, and to do that it needs to be able to locate those compiled *.beam binaries. For this it uses the code path, which is the list of directories in which it will search for those files. This list automatically contains the standard library directories, and of course you can add directories with your own code to it using the -pa flag.
-pa Dir1 Dir2 ...
Adds the specified directories to the beginning of the code path, similar to code:add_pathsa/1. See code(3). As an alternative to -pa, if several directories are to be prepended to the code and the directories have a common parent directory, that parent directory could be specified in the ERL_LIBS environment variable. See code(3).
In your case it would be erl -pa ebin, or, to include the binaries of all deps, erl -pa ebin -pa deps/*/ebin.
Exactly the same options are used in the second line of your escript, with the exception of the * character, which will not be expanded there as it would be in the shell; in an escript you have to provide the path to each dependency separately. But the idea of including -pa ebin stays exactly the same.
To automate and standardize this process, tools like rebar and erlang.mk were created (I would recommend the latter). Using those should help you with your workflow.

mnesia save out info

How do I save the output of mnesia:info()?
I use a remote shell inside a Unix screen session and can't scroll the window.
Here's a function that you can put in the user_default.erl module on the remote node:
out(Fun, File) ->
    G = erlang:group_leader(),
    {ok, FD} = file:open(File, [write]),
    erlang:group_leader(FD, self()),
    Fun(),
    erlang:group_leader(G, self()),
    file:close(FD).
Then, you can do the following (after recompiling and loading user_default):
1> out(fun () -> mnesia:info() end, "mnesia_info.txt").
Or just cut and paste the following into the shell:
F = fun (Fun, File) ->
        G = erlang:group_leader(),
        {ok, FD} = file:open(File, [write]),
        erlang:group_leader(FD, self()),
        Fun(),
        erlang:group_leader(G, self()),
        file:close(FD)
    end,
F(fun () -> mnesia:info() end, "mnesia_info.txt").
In cases where you are at a terminal without scrolling (if you are on an xterm and see no scrollbar, simply switch it on), a very useful tool is screen: it provides virtual vt100 terminals, you can switch between terminals, and you can even detach from a session and come back later (nice for long-running programs on remote servers that need occasional interaction).
And you can log transcripts to a file and scroll in the output of the virtual terminal.
If you are on a Unix-like system you will probably be able to just install a pre-built package; if all else fails you can always pick up the source and build it yourself.
Also look at this article for other solutions.
If you are not able to install screen on the system, a simple but not very comfortable hack that only uses Unix built-ins is:
Start the Erlang shell with tee(1) to redirect the output:
$ erl | tee output.log
Eshell V5.7.5 (abort with ^G)
1> mnesia:info().
===> System info in version {mnesia_not_loaded,nonode@nohost,
{1301,742014,571300}}, debug level = none <===
opt_disc. Directory "/usr/home/peer/Mnesia.nonode@nohost" is NOT used.
use fallback at restart = false
running db nodes = []
stopped db nodes = [nonode@nohost]
ok
2>
It's a bit hard to get out of the shell (you probably have to type ^D to end the input), but then you have the tty output in the file:
$ cat output.log
Eshell V5.7.5 (abort with ^G)
1> ===> System info in version {mnesia_not_loaded,nonode@nohost,
{1301,742335,572797}}, debug level = none <===
...
I believe you can't capture the output of mnesia:info() directly; it prints to the group leader and just returns ok. See mnesia:system_info(all) instead, which returns the information as a term.
Convert it to a string:
S = io_lib:format("~p~n", [mnesia:system_info(all)]).
Then write it to disk.
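Putting that together, a minimal sketch (the file name is arbitrary):
S = io_lib:format("~p~n", [mnesia:system_info(all)]),
ok = file:write_file("mnesia_info.txt", S).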

Erlang emakefile explain

I have an Emakefile that looks like:
%% --
%%
%% --
{'/Users/user/projects/custom_test/trunk/*',
 [debug_info,
  {outdir, "/Users/user/projects/custom_test/trunk/ebin"},
  {i, "/Users/user/projects/custom_test/trunk/include/."}
 ]
}.
What is an explanation in layman's terms for what each item does in the list?
How do I run the emakefile so that I am able to compile it?
After compilation, how do I run that generated BEAM file?
1/ {"source files globbed", Options}
Here the options are:
debug_info adds debug info for the debugger
{outdir, "/Users/user/projects/custom_test/trunk/ebin"} says where the output (the .beam files) should be written
{i, "/Users/user/projects/custom_test/trunk/include/."} says where to find the .hrl header files.
2/ erl -make
3/ erl -pa /Users/user/projects/custom_test/trunk/ebin starts a shell.
Find the module serving as the entry point in your application and call its functions:
module:start().
You can also run the code non interactively :
erl -noinput -noshell -pa /Users/user/projects/custom_test/trunk/ebin -s module start
For the Emakefile syntax, visit the man page.
In the directory where the Emakefile is, run erl -make to compile using the Emakefile.
The simplest way to run the result is to start an Erlang shell in the same directory as the beam files with the command erl, then run the code with module_name:function_name(). (including the dot).
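For example, a short session (module_name and function_name are placeholders for whatever your project actually defines):
$ cd /Users/user/projects/custom_test/trunk
$ erl -make
$ erl -pa ebin
1> module_name:function_name().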
