Using Erlang Application Module for Globals

Are there any downsides to using the application:get_env / application:set_env to keep global state dynamically?
Example:
I have a module that handles http requests.
I have a collection of http request handlers.
I want to register/get http request handlers via the application env.
start() ->
    %% Init default handlers (?APP is the name of the containing application)
    application:set_env(?APP, ?REQUEST_HANDLER_MODULES, [foo_handler, bar_handler]).

add_http_request_handler(Module) ->
    %% Register a handler at runtime
    {ok, Modules} = application:get_env(?APP, ?REQUEST_HANDLER_MODULES),
    application:set_env(?APP, ?REQUEST_HANDLER_MODULES, [Module|Modules]).

handle_request(Req) ->
    {ok, Modules} = application:get_env(?APP, ?REQUEST_HANDLER_MODULES),
    handle_request(Req, Modules, {}).

handle_request(_Req, [], Agg) ->
    Agg;
handle_request(Req, [M|L], Agg) ->
    handle_request(Req, L, M:handle(Req, Agg)).
What are the downsides to this?
Does this require a round trip to an application process each time application:get_env is called?
Is this a functional anti-pattern?

If you need persistent storage, i.e. to retain the data across Erlang VM restarts, I think you should go with a Mnesia disc_copies table. In my experience it handles load and concurrency very well.
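For illustration, a minimal sketch of that approach, assuming a single-node setup (the global_kv record and function names are made up for the example):

-record(global_kv, {key, value}).

init_store() ->
    %% create_schema/1 must run before mnesia is started; on subsequent runs
    %% it returns an error, which this sketch simply ignores
    mnesia:create_schema([node()]),
    ok = mnesia:start(),
    mnesia:create_table(global_kv,
                        [{attributes, record_info(fields, global_kv)},
                         {disc_copies, [node()]}]).

put_value(Key, Value) ->
    mnesia:activity(transaction,
                    fun() -> mnesia:write(#global_kv{key = Key, value = Value}) end).

get_value(Key) ->
    mnesia:activity(transaction,
                    fun() -> mnesia:read(global_kv, Key) end).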

Related

Erlang: Functions work in shell but not in YAWS

My sole method of debugging (io:format/2) is not working in YAWS. I'm at a loss. My supervisor starts three processes: ETS Manager, YAWS Init, and Ratelimiter. This is successful. I can play around with the rate limiter in the shell, calling the same functions YAWS should be calling. The difference is that the shell behaves as I would expect, while I have no idea what is happening in YAWS.
I do know that if I spam ratelimiter:limit(IP) in the shell it will eventually return true. I can also execute the following and they will return true: ratelimiter:lockout(IP), ratelimiter:blacklist(IP). The limiter is a gen_server.
The functions do the following:
limit/1: Check ETS table if counter > threshold; update counter. If counter > blacklist threshold make entry in mnesia table
blacklist/1: Check mnesia table if entry exists; Yes: reset timer
lockout/1: Immediately enters ID into mnesia table
In my arg_rewrite_mod module I'm doing some checks to ensure I'm getting the HTTP requests I expect, namely GET, POST, and HEAD. I thought this would be a nice place to also do the rate limiting. Do it as soon as possible in the web server's chain of events.
All the changes I've made to the arg_rewrite module seem to work, except the "printf"s and the limiter. I'm new to the language, so I'm not sure whether my mistake is obvious or not.
Skeleton of my arg_rewrite_mod:
-module(arg_preproc).
-export([arg_rewrite/1]).
-include("limiter_def.hrl").
-include_lib("/usr/lib/yaws/include/yaws_api.hrl").
is_blacklisted(ID) ->
    case ratelimiter:blacklist(ID) of
        false -> continue;
        true -> throw(blacklist)
    end.

is_limited(ID) ->
    case ratelimiter:limit(ID) of
        false -> continue;
        true -> throw(limit)
    end.

arg_rewrite(A) ->
    Allow = ['GET', 'POST', 'HEAD'],
    try
        {IP, _} = A#arg.client_ip_port,
        ID = IP,
        is_blacklisted(ID),
        io:format("~p ~p ~n", [ID, is_blacklisted(ID)]),
        %% === Allow expected HTTP requests
        HttpReq = (A#arg.req)#http_request.method,
        case lists:member(HttpReq, Allow) of
            true ->
                {_, ReqTgt} = (A#arg.req)#http_request.path,
                PassThru = [".css", ".jpg", ".jpeg", ".png", ".js"],
                %% ... much more ...
            false ->
                is_limited(ID),
                throw(http_method_denied)
        end
    catch
        throw:blacklist -> %% Send back a 429;
        throw:limit -> %% Same but no Retry-After;
        throw:http_method_denied ->
            %% Only throw experienced
            AllowedReq = string:join([atom_to_list(M) || M <- Allow], ","),
            A#arg{state = #rewrite_response{status = 405,
                headers = [{header, {"Allow", AllowedReq}},
                           {header, {connection, "close"}}]}};
        Type:Reason -> {error, {unhandled, {Type, Reason}}}
    end.
I can spam curl -I -X HEAD <<any page>> as fast as I can in a bash shell and all I get is HTTP 200. The ETS table has zero entries as well. Using PUT I get an HTTP 405 as intended. I can call ratelimiter:lockout({MY_IP}) and still get the web page to load in my browser, and an HTTP 200 with curl.
I'm confused. Is it the way I started YAWS?
start() ->
    os:putenv("YAWSHOME", ?HOMEPATH_YAWS),
    code:add_patha(?MODPATH_YAWS),
    ok = case (R = application:start(yaws)) of
             {error, {already_started, _}} -> ok;
             _ -> R
         end,
    {ok, self()}. %% Tell supervisor everything okay in a manner it expects.
I did this because I thought it would be "easier."
When starting Yaws as part of another application, it's important to use its embedding support. One important thing the Yaws embedding startup code does is set the application environment variable embedded to true:
application:set_env(yaws, embedded, true),
Yaws checks this variable in several of its code paths, especially during initialization, in order to avoid assuming that it's running as a stand-alone daemon process.
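A rough sketch of that embedded startup, following the pattern described in the Yaws embedding documentation (the id, docroot, port, and server name below are placeholders):

start_yaws(Sup) ->
    Id = "my_embedded_yaws",
    Docroot = "/tmp/www",
    GconfList = [{id, Id}],
    SconfList = [{port, 8080},
                 {servername, "localhost"},
                 {listen, {0, 0, 0, 0}},
                 {docroot, Docroot}],
    %% the embedding startup code also sets the 'embedded' env variable mentioned above
    {ok, SCList, GC, ChildSpecs} =
        yaws_api:embedded_start_conf(Docroot, SconfList, GconfList, Id),
    %% start Yaws' processes under our own supervisor ...
    [supervisor:start_child(Sup, CS) || CS <- ChildSpecs],
    %% ... and only then push the configuration
    yaws_api:setconf(GC, SCList).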
Regarding rate limiting, rather than using an arg rewriter, you might consider using a shaper. The yaws_shaper module provides a behavior that expects its callback module to implement two functions:
check/1: yaws_shaper calls this to allow the callback module to decide whether to allow the request from the client. It passes client host information as the callback argument. Your shaper callback module returns either the atom allow to allow the request to proceed, or the tuple {deny, Status, Message} where Status is an HTTP status code to return to the client, such as 429 to indicate the client is making too many requests, and Message is any extra HTML to be returned to the client. (It might be nice if Message could include a reply header such as Retry-After as well; this is something I'll consider adding to Yaws.)
update/3: yaws_shaper calls this when the response for a client is ready to be returned. The first argument is the client host information, the second argument is the number of "hits" (the value 1 for each request), and the third argument is the number of bytes being delivered in response to the client's request. Your shaper callback module can return ok from update/3 (Yaws does not use the return value).
A shaper can use this framework to track how many requests each client is making and how much data Yaws is delivering to each client, and use that information to limit or deny particular clients.
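As a hedged sketch of what such a callback module might look like (the module name, threshold, and counter bookkeeping are invented for the example; the shaper is attached to a server in the Yaws configuration):

-module(my_shaper).
-behaviour(yaws_shaper).
-export([check/1, update/3]).

check(Host) ->
    case hits_in_last_minute(Host) of
        N when N > 100 ->
            %% too many requests from this client
            {deny, 429, "<p>Too many requests, slow down.</p>"};
        _ ->
            allow
    end.

update(Host, Hits, Bytes) ->
    %% record the per-client hit and byte counters (e.g. in ETS)
    remember(Host, Hits, Bytes),
    ok.

%% The two helpers below stand in for whatever ETS/Mnesia bookkeeping you use.
hits_in_last_minute(_Host) -> 0.
remember(_Host, _Hits, _Bytes) -> ok.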
And finally, while "printf debugging" works, it's less than ideal especially in Erlang, which has built-in tracing. You should consider learning the dbg module so you can trace any function you want, see who called it, see what arguments are being passed to it, see what it returns, etc.
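For example, something along these lines (run from the node's shell) would trace calls to the rate limiter; the module and function names are the ones from the question:

dbg:tracer().                    %% start the default trace message handler
dbg:p(all, c).                   %% enable call tracing in all processes
dbg:tpl(ratelimiter, limit, x).  %% trace ratelimiter:limit calls, incl. return values
dbg:tpl(ratelimiter, blacklist, x).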

Multiple gen_esme sessions

Trust you all are doing well.
I am trying to make multiple sessions to an SMSC using the OSERL application.
Since making an SMPP client requires implementing the gen_esme behaviour,
I was wondering whether it is possible to make multiple connections towards the SMSC without writing multiple gen_esme modules.
There are two main strategies for starting multiple processes using the same gen_esme based module:
gen_esme:start_link/4 - named or reference based server
gen_esme:start_link/3 - pid based server
I'm going to be referencing the sample_esme file found under the examples for oserl.
Named Server
Most of the examples from oserl show usage of gen_esme:start_link/4 which in turn is calling gen_server:start_link/4. The ServerName variable for gen_server:start_link/4 has a typespec of {local, Name::atom()} | {global, GlobalName::term()} | {via, Module::atom(), ViaName::term()}.
That means if we change the sample_esme:start_link/0,1,2 functions to look like this:
start_link() ->
    start_link(?MODULE).
start_link(SrvName) ->
    start_link(SrvName, true).
start_link(SrvName, Silent) ->
    Opts = [{rps, 1}, {queue_file, "./sample_esme.dqueue"}],
    gen_esme:start_link({local, SrvName}, ?MODULE, [Silent], Opts).
We can start multiple servers using:
sample_esme:start_link(). %% SrvName = 'sample_esme'
sample_esme:start_link(my_client1). %% SrvName = 'my_client1'
sample_esme:start_link(my_client2). %% SrvName = 'my_client2'
To make our sample_esme module work properly with this named server strategy, most of its calling functions will need to be modified. Let's use sample_esme:rps/0,1 as an example:
rps() ->
    rps(?MODULE).
rps(SrvName) ->
    gen_esme:rps(SrvName).
Now we can call the gen_esme:rps/1 function on any of our running servers:
sample_esme:rps(). %% calls 'sample_esme'
sample_esme:rps(my_client1). %% 'my_client1'
sample_esme:rps(my_client2). %% 'my_client2'
This is similar to how projects like pooler reference members of the pools they create.
pid Server
This is essentially the same as the Named Server strategy, but we're just going to pass the pid of the server around instead of a registered atom.
That means if we change the sample_esme:start_link/0,1 functions to look like this:
start_link() ->
    start_link(true).
start_link(Silent) ->
    Opts = [{rps, 1}, {queue_file, "./sample_esme.dqueue"}],
    gen_esme:start_link(?MODULE, [Silent], Opts).
Notice that all we did was drop the {local, SrvName} argument so it won't register the SrvName atom with the server's pid.
That means we need to capture the pid of each created server:
{ok, Pid0} = sample_esme:start_link().
{ok, Pid1} = sample_esme:start_link().
{ok, Pid2} = sample_esme:start_link().
Using the same sample_esme:rps/0,1 example from Named Server, we will need to remove sample_esme:rps/0 and add a sample_esme:rps/1 function which takes a pid:
rps(SrvPid) ->
    gen_esme:rps(SrvPid).
Now we can call the gen_esme:rps/1 function on any of our running servers:
sample_esme:rps(Pid0).
sample_esme:rps(Pid1).
sample_esme:rps(Pid2).
This is similar to how projects like poolboy reference members of the pools they create.
Recommendations
If you are simply trying to pool connections, I would recommend using a library like pooler or poolboy.
If you have a finite number of specifically named connections that you want to reference by name, I would recommend just having a supervisor with a child spec like the following for each connection:
{my_client1,
 {sample_esme, start_link, [my_client1]},
 permanent, 5000, worker, [sample_esme]}
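As a hedged sketch (the supervisor module name esme_sup is invented here), such a supervisor could generate one child spec per named connection:

-module(esme_sup).
-behaviour(supervisor).
-export([start_link/0, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) ->
    %% one named sample_esme connection per client name
    Children = [{Name,
                 {sample_esme, start_link, [Name]},
                 permanent, 5000, worker, [sample_esme]}
                || Name <- [my_client1, my_client2]],
    {ok, {{one_for_one, 5, 10}, Children}}.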

Pass some arguments to supervisor init function when app starts

I want to pass some arguments to the supervisor's init/1 function, and it would be desirable for the application's interface to be:
redis_pool:start() % start all instances
redis_pool:start(Names) % start only given instances
Here is the application:
-module(redis_pool).
-behaviour(application).
...

start() -> % start without params
    application:ensure_started(?APP_NAME, transient).

start(Names) -> % start with some params
    % I want to pass Names to supervisor init function
    % in order to do that I have to bypass application:ensure_started
    % which is not GOOD :(
    application:load(?APP_NAME),
    case start(normal, [Names]) of
        {ok, _Pid} -> ok;
        {error, {already_started, _Pid}} -> ok
    end.

start(_StartType, StartArgs) ->
    redis_pool_sup:start_link(StartArgs).
Here is the supervisor:
init([]) ->
    {ok, Config} = get_config(),
    Names = proplists:get_keys(Config),
    init([Names]);
init([Names]) ->
    {ok, Config} = get_config(),
    PoolSpecs = lists:map(fun(Name) ->
        PoolName = pool_utils:name_for(Name),
        {[Host, Port, Db], PoolSize} = proplists:get_value(Name, Config),
        PoolArgs = [{name, {local, PoolName}},
                    {worker_module, eredis},
                    {size, PoolSize},
                    {max_overflow, 0}],
        poolboy:child_spec(PoolName, PoolArgs, [Host, Port, Db])
    end, Names),
    {ok, {{one_for_one, 10000, 1}, PoolSpecs}}.
As you can see, the current implementation is ugly and may be buggy. The question is: how can I pass some arguments and start the application and the supervisor (with the params that were given to start/1)?
One option is to start the application and run the redis pools in two separate phases.
redis_pool:start(),
redis_pool:run([] | Names).
But what if I want to run supervisor children (redis pool) when my app starts?
Thank you.
The application callback Module:start/2 is not an API to call in order to start the application. It is called when the application is started by application:start/1,2. This means that overloading it to provide differing parameters is probably the wrong thing to do.
In particular, application:start will be called directly if someone adds your application as a dependency of theirs (in the foo.app file). At this point, they have no control over the parameters, since they come from your .app file, in the {mod, {Mod, Args}} term.
Some possible solutions:
Application Configuration File
Require that the parameters be in the application configuration file; you can retrieve them with application:get_env/2,3.
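For instance (the pools key, pool names, and config layout below are made up for the example), the parameters could live in sys.config and the supervisor could read them itself:

%% sys.config:
%% [{redis_pool, [{pools, [pool_a, pool_b]}]}].

%% redis_pool_sup then reads the key itself, so nothing needs to be passed
%% through the application start arguments; the init([Names]) clause from the
%% question stays as-is.
init([]) ->
    Names = application:get_env(redis_pool, pools, []),
    init([Names]).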
Don't start a supervisor
This means one of two things: becoming a library application (removing the {mod, Mod} term from your .app file) -- you don't need an application behaviour; or starting a dummy supervisor that does nothing.
Then, when someone wants to use your library, they can call an API to create the pool supervisor, and graft it into their supervision tree. This is what poolboy does with poolboy:child_spec.
Or, your application-level supervisor can be a normal supervisor, with no children by default, and you can provide an API to start children of that, via supervisor:start_child. This is (more or less) what cowboy does.
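A rough sketch of that second approach, reusing get_config/0 and pool_utils:name_for/1 from the question (the exported start_pool/1 function is invented for the example):

-module(redis_pool_sup).
-behaviour(supervisor).
-export([start_link/0, start_pool/1, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

%% No children at application start; pools are added on demand.
init([]) ->
    {ok, {{one_for_one, 10, 10}, []}}.

%% Called by the user of the application once it is running.
start_pool(Name) ->
    {ok, Config} = get_config(),   %% same helper as in the question
    {[Host, Port, Db], PoolSize} = proplists:get_value(Name, Config),
    PoolName = pool_utils:name_for(Name),
    PoolArgs = [{name, {local, PoolName}},
                {worker_module, eredis},
                {size, PoolSize},
                {max_overflow, 0}],
    Spec = poolboy:child_spec(PoolName, PoolArgs, [Host, Port, Db]),
    supervisor:start_child(?MODULE, Spec).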
You can pass arguments in the AppDescr argument to application:load/1 (though it's a mighty big tuple already...) as {mod, {Module, StartArgs}} according to the docs ("according to the docs" as in, I don't recall doing it this way myself, ever: http://www.erlang.org/doc/apps/kernel/application.html#load-1).
application:load({application, some_app, [{mod, {Module, [Stuff]}}]})
Without knowing anything about the internals of the application you're starting, it's hard to say which way is best, but a common way to do this is to start up the application and then send it a message containing the data you want it to know.
You could make receipt of the message form tell the application to go through a configuration assertion procedure, so that the same message you send on startup is also the same sort of thing you would send it to reconfigure it on the fly. I find this more useful than one-shotting arguments on startup.
In any case, it is usually better to think in terms of starting something and then asking it to do something for you, than to try telling it everything in init parameters. This can be as simple as having it start up and wait for some message that tells the listener to spin up the supervisor the way you're trying to do here -- isolated one step from the application inclusion issues RL mentioned in his answer.

right way to register an internal ejabberd module

Lately I have been working with ejabberd and internal module development.
I would like to have an internal module developed using gen_mod + gen_server behaviours. My module has an ejabberd hook which is based on this one: http://metajack.im/2008/08/28/writing-ejabberd-modules-presence-storms
My start_link function is like:
start_link(Host, Opts) ->
    Proc = gen_mod:get_module_proc(Host, ?PROCNAME),
    gen_server:start_link({local, Proc}, ?MODULE, [Host, Opts], []).
Where ?PROCNAME is:
-define(PROCNAME, ejabberd_mod_mine).
So in my localhost it is registered as ejabberd_mod_mine_localhost
As you see in the tutorial I linked, they use a hook in order to parse the presence stanza directly, but what if I want to compare the From value with a value I saved in the gen_server state? I thought of using a gen_server cast, passing the packet to it, but the problem is that the hook function runs in a different process, and therefore I cannot use:
gen_server:cast(self(), {filter, Packet})
and I can just use:
gen_server:cast(ejabberd_mod_mine_localhost, {filter, Packet})
But should I hardcode the name of the process? What if the host name is different? Should I register my gen_server using just its module name?
A common pattern is to use the domain of either the sender or the receiving user (depending on what you are trying to do). For example mod_offline (which stores packets in the DB when the destination user is offline) uses the destination JID to discover which domain it has to run on, something like:
gen_mod:get_module_proc(To#jid.lserver, ?PROCNAME)
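A hedged sketch for this case: the hook callback derives the per-vhost process name the same way start_link/2 does, then casts to it (the callback name on_presence and the message shape are illustrative; the exact arguments and return value depend on which hook you attach to):

on_presence(From, To, Packet) ->
    Proc = gen_mod:get_module_proc(To#jid.lserver, ?PROCNAME),
    gen_server:cast(Proc, {filter, From, Packet}).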

How to maintain a stateful service in yaws

I have some (spawned) process with state.
How do I maintain a simple stateful service in yaws?
How do I implement communication to the process in an "appmods" erl source file?
update:
let's say we have a simple process
start() -> loop(0).

loop(C) ->
    receive
        {inc} -> loop(C + 1);
        {get, FromPid} -> FromPid ! C, loop(C)
    end.
What is the simplest (trivial: without gen_server, yapp) way to access the process from the web?
Maybe I need a minimal example with gen_server+yapp+yaws / appmods+yaws.
The #arg record is a very important data structure for the yaws programmer.
The Arg passed to Yaws' out/1 contains a field that can hold user state:
state, %% State for use by users of the out/1 callback
See the #arg record definition in yaws_api.hrl for details.
There are only two ways to access a process in Erlang: either you know its Pid (and the node where you expect the process to be), or you know its registered name (and the Erlang node it is expected to be on).
Lets say you have your appmod:
-module(myappmod).
-export([out/1]).
-include("PATH/TO/YAWS_SERVER/include/yaws_api.hrl").
-include("PATH/TO/YAWS_SERVER/include/yaws.hrl").
out(Arg) ->
case check_initial_state(Arg) of
unknown -> create_initial_state();
{ok,Value}->
UserPid = list_to_pid(Value),
UserPid ! extract_request(Arg),
receive
Response -> {html,format_response(Response)}
after ?TIMEOUT -> {html,"request_timedout"}
end
end.
check_initial_state(A)->
CookieObject = (A#arg.headers)#headers.cookie,
case yaws_api:find_cookie_val("InitialState", CookieObject) of
[] -> unkown;
Cookie -> {ok,Cookie}
end.
extract_request(Arg)-> %% get request from POST Data or Get Data
Post__data_proplist = yaws_api:parse_post(Arg),
Get_data_proplist = yaws_api:parse_query(Arg),
%% do many other things....
Request = remove_request(Post__data_proplist,Get_data_proplist),
Request.
That simple setup shows how you would use processes to keep per-user state. However, using processes for this is not a good idea. Processes do fail, so you need a way of recovering the data they were holding.
A better approach is to have data storage for your users and one gen_server to do the lookups. You could use Mnesia. I do not advise you to use processes on the web to keep user state, no matter what kind of app you are building, even if it's a messaging app. Mnesia or ETS tables can keep state, and all you need to do is look it up.
Use a storage mechanism other than processes to keep state. Processes are a point of failure. Others use cookies (and/or session cookies), whose value is used in some way to look up something from a database. However, if you insist that you need processes, then have a way of remembering their Pids or registered names. You could store a user's Pid in their session cookie, etc.
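As a small sketch of the lookup approach (the table name and key scheme are invented for the example): have one long-lived owner process create a public named table, and let the appmod only read and update it.

%% Run once, from a process that stays alive (e.g. a gen_server in your app):
init_state_table() ->
    ets:new(session_state, [named_table, public, set]).

%% In the appmod: look the counter up by whatever you keyed the session on.
get_counter(SessionId) ->
    case ets:lookup(session_state, SessionId) of
        [{_, Count}] -> Count;
        [] -> 0
    end.

bump_counter(SessionId) ->
    ets:update_counter(session_state, SessionId, 1, {SessionId, 0}).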
