This should be simple, although I could not find a way or example yet...
The Mnesia documentation shows how to initialize/create a Mnesia database from the Erlang shell, which requires starting the erl shell with the -mnesia parameter:
erl -mnesia dir '"/tmp/funky"'
Once in the shell you can create the schema/etc...
1> mnesia:create_schema([node()]).
ok
2> mnesia:start().
ok
Well, that's simple enough. What if I want to create the schema, etc. from another Erlang module, and I did not start the process with the -mnesia parameter/flag? I think that basically means: how do I set this dynamically, from pure Erlang code, without running a script? For instance, I'd like to do something like this:
-module(something).
-export([test/0]).

test() ->
    erlang:setParameter("mnesia", "/tmp/funcky"),
    mnesia:create_schema([node()]),
    ...
Well, I think I found the solution. set_env is what I needed:
application:set_env(mnesia, dir, "/tmp/funcky"),
mnesia:create_schema([node()]),
etc...
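Putting it together, a minimal sketch of the module from the question might look like this (the funky record and table are hypothetical, only there to show a complete flow; note that set_env has to run before Mnesia is started):
-module(something).
-export([test/0]).

%% Hypothetical record, only here to illustrate creating a table.
-record(funky, {id, value}).

test() ->
    %% Point Mnesia at its directory before it is started.
    application:set_env(mnesia, dir, "/tmp/funcky"),
    mnesia:create_schema([node()]),
    mnesia:start(),
    mnesia:create_table(funky,
                        [{attributes, record_info(fields, funky)},
                         {disc_copies, [node()]}]).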
I have been trying to export all the functions in an Erlang module for use in a common test SUITE (not an EUnit module). So far it has not worked for me. I am using rebar to run the SUITE, and I came across this question (http://lists.basho.com/pipermail/rebar_lists.basho.com/2011-October/001141.html), which is essentially exactly what I want to do, but the method does not work for me.
I have also added {plugins, [rebar_ct]}. to rebar.config, but it has made no difference. I should point out that all my tests pass when I export the functions normally, but I want to avoid doing that.
Any help would be great thanks.
The compiler will export all functions in a module if you add this to it:
-compile(export_all).
Or you could make it conditional on a macro definition, like:
-ifdef(EXPORTALL).
-compile(export_all).
-endif.
That will only export everything if you have {d, 'EXPORTALL', true} in your rebar config erl_opts setting, e.g. something like:
{erl_opts, [
    {d, 'EXPORTALL', true}
]}.
If that doesn't work, make sure you don't have erl_opts twice in your rebar config.
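If in doubt about which functions actually ended up exported, you can check the compiled module from the Erlang shell (my_SUITE is just a placeholder module name here):
1> my_SUITE:module_info(exports).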
With rebar3 you can define extra compile options for common test in the config file:
{ct_compile_opts, []}.
Here you can add export_all, which will then apply to the common test build only. I am not sure whether this option exists for rebar (2).
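For instance, a minimal sketch (ct_compile_opts takes ordinary compiler options, so export_all can go straight in):
{ct_compile_opts, [export_all]}.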
I am following the REST API with Yaws tutorial from the 'Building Web Applications with Erlang' book.
I get the following error when running $ yaws:
file:path_eval([".","/Users/<uername>"],".erlang"): error on line 3: 3: evaluation failed with reason error:{undefined_record,airport} and stacktrace [{erl_eval,exprs,2,[]}]
.erlang file:
application:start(mnesia).
mnesia:create_table(airport,[{attributes, record_info(fields, airport)}, {index, [country]}]).
rest.erl file can be found here.
How can I define a record? I tried adding rd(airport, {code, city, country, name}). without success.
The record 'airport' is defined in the module rest, and all functions in 'rest' know about it. But when you start your application, Erlang evaluates the .erlang file, which has nothing to do with the module rest. So Erlang simply has no idea what the record airport is or where to find it.
The easiest workaround, I believe, is to define a function (for instance 'init') in the module rest containing everything you now have in the .erlang file, export it, and in the .erlang file just invoke rest:init().
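A minimal sketch of that workaround (the record fields are taken from the rd/2 attempt in the question, and init/0 is just an illustrative name):
%% Added to rest.erl (which already declares -module(rest)):
-export([init/0]).
-record(airport, {code, city, country, name}).

init() ->
    application:start(mnesia),
    mnesia:create_table(airport,
                        [{attributes, record_info(fields, airport)},
                         {index, [country]}]).
The .erlang file is then reduced to a single line:
rest:init().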
The main thing I'm confused about here (I think) is what the arguments to the qfun are supposed to be and what the return value should be. The README basically doesn't say anything about this and the example it gives throws away the second and third args.
Right now I'm only trying to understand the arguments and not using Riak for anything practical. Eventually I'll be trying to rebuild our (slow, MySQL-based) financial reporting system with it. So ignoring the pointlessness of my goal here, why does the following give me a badfun exception?
The data is just tuples (pairs) of Names and Ages, with the keys being the name. I'm not doing any conversion to JSON or such before inserting the data from the Erlang console.
Now with some {Name, Age} pairs stored in <<"people">> I want to use MapReduce (for no other reason than to understand "how") to get the values back out, unchanged in this first use.
riakc_pb_socket:mapred(
    Pid, <<"people">>,
    [{map, {qfun, fun(Obj, _, _) -> [Obj] end}, none, true}]).
This just gives me a badfun, however:
{error,<<"{\"phase\":0,\"error\":\"{badfun,#Fun<erl_eval.18.17052888>}\",\"input\":\"{ok,{r_object,<<\\\"people\\\">>,<<\\\"elaine\\\">"...>>}
How do I just pass the data through my map function unchanged? Is there any better documentation of the Erlang client than what is in the README? That README seems to assume you already know what the inputs are.
There are 2 Riak Erlang clients that serve different purposes.
The first one is the internal Riak client that is included in the riak_kv module (riak_client.erl and riak_object.erl). This can be used if you are attached to the Riak console or if you are writing a MapReduce function or a commit hook. As it is run from within a Riak node it works quite well with qfuns.
The other client is the official Riak client for Erlang that is used by external applications and connects to Riak through the protocol buffers interface. This is what you are using in your example above. As this connects through protocol buffers, it is usually recommended that MapReduce functions in Erlang are compiled and deployed on the nodes of the cluster as named functions. This will also make them accessible from other client libraries.
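As a sketch of that named-function approach (the module and function names below are hypothetical, and the module has to be compiled and loaded on the Riak nodes), the identity map phase could look like:
-module(people_mapreduce).
-export([map_identity/3]).

%% Runs inside the Riak nodes; returns the stored value unchanged.
map_identity(RiakObject, _KeyData, _Arg) ->
    [riak_object:get_value(RiakObject)].
It would then be invoked from the protocol buffers client with a {modfun, Module, Function} phase instead of a qfun:
riakc_pb_socket:mapred(
    Pid, <<"people">>,
    [{map, {modfun, people_mapreduce, map_identity}, none, true}]).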
I think my code is actually correct and my problem lies in the fact I'm trying to use the shell to execute the code. I need to actually compile the code before it can be run in Riak. This is a limitation of the Erlang shell and the way it compiles funs.
After a few days of playing around, here's a neat trick that makes development easier: exploit Erlang's RPC support and its runtime code loading to distribute your code across all the Riak nodes:
%% Call this somewhere during your app's initialization routine.
%% Assumes you have a list of available Riak nodes in your app's env.
load_mapreduce_in_riak() ->
    load_mapreduce_in_riak(application:get_env(app_name, riak_nodes, [])).

load_mapreduce_in_riak([]) ->
    ok;
load_mapreduce_in_riak([{Node, Cookie}|Tail]) ->
    erlang:set_cookie(Node, Cookie),
    case net_adm:ping(Node) of
        pong ->
            {Mod, Bin, Path} = code:get_object_code(app_name_mapreduce),
            rpc:call(Node, code, load_binary, [Mod, Path, Bin]);
        pang ->
            io:format("Riak node ~p down! (ping <-> pang)~n", [Node])
    end,
    load_mapreduce_in_riak(Tail).
Now you can refer to any of the functions in the module app_name_mapreduce and they'll be visible to the Riak cluster. The code can be removed again with code:delete/1, if needed.
In the Erlang shell I can re-use my variables very well, like this:
1> R = "muzaaya".
"muzaaya"
2> f(R).
ok
3> R = "muzaaya2".
"muzaaya2"
So, I cannot call f(Variable) in my source code because I do not know which module this function belongs to. I have tried modules like erlang, shell, c, etc. Has anyone tried re-using variables in Erlang source code, other than just in the shell? How did you do it? Thanks
No, you can't do this inside a module.
The REPL shell is interpreted, the code file is compiled.
The shell comes in handy to test things, but you would not write your web server in a shell. ;-)
It would be possible and not even difficult for the Erlang hackers to implement an f(V) language construct, but it would not fit the Erlang design model.
Mind you, no function could accomplish the forgetting of a variable, so it would have to be done as a new native language construct.
When compiled, the virtual machine does not know the variables anymore, as Erlang is run by a rather ordinary stack machine, not much different from the JVM.
It just would not be functional programming if one could rebind a variable V.
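To make the practical consequence concrete: in a module you simply bind a new variable name instead of "forgetting" the old one. A minimal sketch:
-module(rebind_demo).
-export([run/0]).

run() ->
    R1 = "muzaaya",
    R2 = "muzaaya2",   %% a fresh name; R1 = "muzaaya2" would fail with badmatch
    {R1, R2}.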
The functions which are listed when you type help(). in the shell are shell-only functions and cannot be used when programming Erlang. f() is one of these functions.
As others have already pointed out, f() is a shell command that only exists in the shell. The reason f(), and all other shell commands, look like normal function calls is that the only way to do something in Erlang is to call a function, and the shell does not introduce any new syntax. All shell commands behave like normal functions in that they always return a value.
It was not deemed necessary to be able to use f() in normal functions, although there are many who disagree and find the once-only binding of variables unnecessarily restrictive.
This started off as the question:
Almost every time when I use the Erlang shell, I'd like to run some command on shell startup, e.g. something like
rr("*.hrl").
Or similar. Currently I have to type it every time I start an Erlang shell, and I'm getting tired of it and keep forgetting it.
But this was actually the wrong question! What I actually wanted was to read my record definition headers in every shell job, not to run other shell built-in commands on startup. So I changed the question title to reflect what I should have asked.
While trying the solution with .erlang I stumbled upon a solution for the specific rr/1 usage:
From the man-page of shell:
There is some support for reading and printing records in the shell. During compilation record expressions are translated to tuple expressions. In runtime it is not known whether a tuple actually represents a record. Nor are the record definitions used by the compiler available at runtime. So in order to read the record syntax and print tuples as records when possible, record definitions have to be maintained by the shell itself. The shell commands for reading, defining, forgetting, listing, and printing records are described below. Note that each job has its own set of record definitions. To facilitate matters, record definitions in the modules shell_default and user_default (if loaded) are read each time a new job is started. For instance, adding the line

-include_lib("kernel/include/file.hrl").

to user_default makes the definition of file_info readily available in the shell.
For clarification I add some example:
File foo.hrl:
-record(foo, {bar, baz=5}).
File: user_default.erl:
-module(user_default).
-compile(export_all).
-include("foo.hrl"). % include all relevant record definition headers here
%% more stuff probably ...
Let's try it out in the shell:
$ erl
Erlang R13B04 (erts-5.7.5) [source] [smp:2:2] [rq:2] [async-threads:0] [hipe] [kernel-poll:false]
Eshell V5.7.5 (abort with ^G)
1> #foo{}.
#foo{bar = undefined,baz = 5}
→ the shell knows about the record from foo.hrl
The file .erlang is evaluated when the shell is started, but it is NOT evaluated in the context of the shell. This means that it can only contain general expressions which are evaluated, not shell commands. Unfortunately rr() is a shell command (it initialises local shell data to recognise records), so it cannot be used in the .erlang file.
While the user-defined module user_default, which must be preloaded, can include files containing record definitions using -include or -include_lib, these record definitions will then only be available to functions defined within user_default. user_default is a normal compiled module, and its exported functions are called like any other functions, so the record definitions will not be visible within the shell. user_default allows the user to define more complex functions which can be called from within the shell as shell commands.
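To illustrate the difference, an .erlang file can only hold ordinary expressions, so something like the following works there (the path is just a placeholder), whereas a shell command such as rr("*.hrl"). would fail:
%% ~/.erlang
code:add_patha("/path/to/my/ebin").
io:format("Loaded .erlang on node ~p~n", [node()]).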
EDIT:
I was partially wrong here. While I was correct about how .erlang is evaluated and how the functions in user_default are called, I missed that user_default.erl is scanned at shell startup for record definitions, which are then available within the shell. Thanks @Peer Stritzinger for pointing this out.
Place it in a file called .erlang in your home directory (see paragraph 1.7.1 in http://www.erlang.org/documentation/doc-5.2/doc/getting_started/getting_started.html).