Trust you all are doing well.
I am trying to make multiple sessions to an SMSC using the OSERL application.
Since making an SMPP client requires implementing the gen_esme behaviour,
I was wondering whether it is possible to make multiple connections towards the SMSC without writing multiple gen_esme modules?
There are two main strategies for starting multiple processes using the same gen_esme based module:
gen_esme:start_link/4 - named or reference based server
gen_esme:start_link/3 - pid based server
I'm going to be referencing the sample_esme file found under the examples for oserl.
Named Server
Most of the examples from oserl show usage of gen_esme:start_link/4 which in turn is calling gen_server:start_link/4. The ServerName variable for gen_server:start_link/4 has a typespec of {local, Name::atom()} | {global, GlobalName::term()} | {via, Module::atom(), ViaName::term()}.
That means if we change the sample_esme:start_link/0,1,2 functions to look like this:
start_link() ->
    start_link(?MODULE).

start_link(SrvName) ->
    start_link(SrvName, true).

start_link(SrvName, Silent) ->
    Opts = [{rps, 1}, {queue_file, "./sample_esme.dqueue"}],
    gen_esme:start_link({local, SrvName}, ?MODULE, [Silent], Opts).
We can start multiple servers using:
sample_esme:start_link(). %% SrvName = 'sample_esme'
sample_esme:start_link(my_client1). %% SrvName = 'my_client1'
sample_esme:start_link(my_client2). %% SrvName = 'my_client2'
To make our sample_esme module work properly with this named server strategy, most of its calling functions will need to be modified. Let's use sample_esme:rps/0,1 as an example:
rps() ->
    rps(?MODULE).

rps(SrvName) ->
    gen_esme:rps(SrvName).
Now we can call the gen_esme:rps/1 function on any of our running servers:
sample_esme:rps(). %% calls 'sample_esme'
sample_esme:rps(my_client1). %% 'my_client1'
sample_esme:rps(my_client2). %% 'my_client2'
This is similar to how projects like pooler reference members of pools it creates.
pid Server
This is essentially the same as the Named Server strategy, but we're just going to pass the pid of the server around instead of a registered atom.
That means if we change the sample_esme:start_link/0,1 functions to look like this:
start_link() ->
    start_link(true).

start_link(Silent) ->
    Opts = [{rps, 1}, {queue_file, "./sample_esme.dqueue"}],
    gen_esme:start_link(?MODULE, [Silent], Opts).
Notice that all we did was drop the {local, SrvName} argument so it won't register the SrvName atom with the server's pid.
That means we need to capture the pid of each created server:
{ok, Pid0} = sample_esme:start_link().
{ok, Pid1} = sample_esme:start_link().
{ok, Pid2} = sample_esme:start_link().
Using the same sample_esme:rps/0,1 example from Named Server, we will need to remove sample_esme:rps/0 and add a sample_esme:rps/1 function which takes a pid:
rps(SrvPid) ->
    gen_esme:rps(SrvPid).
Now we can call the gen_esme:rps/1 function on any of our running servers:
sample_esme:rps(Pid0).
sample_esme:rps(Pid1).
sample_esme:rps(Pid2).
This is similar to how projects like poolboy reference members of pools it creates.
Recommendations
If you are simply trying to pool connections, I would recommend using a library like pooler or poolboy.
If you have a finite number of specifically named connections that you want to reference by name, I would recommend just having a supervisor with a child spec like the following for each connection:
{my_client1,
{sample_esme, start_link, [my_client1]},
permanent, 5000, worker, [sample_esme]}
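For illustration, here is a minimal sketch of such a supervisor, assuming the named-server variant of sample_esme:start_link/1 above (the module name sample_esme_sup and the client names are placeholders):

-module(sample_esme_sup).
-behaviour(supervisor).

-export([start_link/0, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) ->
    %% One permanent gen_esme worker per named SMSC connection.
    Children = [{Name,
                 {sample_esme, start_link, [Name]},
                 permanent, 5000, worker, [sample_esme]}
                || Name <- [my_client1, my_client2]],
    {ok, {{one_for_one, 5, 10}, Children}}.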
My sole method of debugging (io:format/2) is not working in YAWS. I'm at a loss. My supervisor starts three processes: an ETS manager, YAWS init, and a rate limiter. This is successful. I can play around with the rate limiter in the shell, calling the same functions YAWS should be calling. The difference is that the shell behaves as I would expect, and I have no idea what is happening in YAWS.
I do know that if I spam ratelimiter:limit(IP) in the shell, it will eventually return true. I can execute the following and they will also return true: ratelimiter:lockout(IP), ratelimiter:blacklist(IP). The limiter is a gen_server.
The functions do the following:
limit/1: checks the ETS table whether counter > threshold and updates the counter. If counter > blacklist threshold, makes an entry in the mnesia table
blacklist/1: checks the mnesia table whether an entry exists; if yes, resets the timer
lockout/1: immediately enters the ID into the mnesia table
In my arg_rewrite_mod module I'm doing some checks to ensure I'm getting the HTTP requests I expect, namely GET, POST, and HEAD. I thought this would also be a nice place to do the rate limiting, as early as possible in the web server's chain of events.
All the changes I've made to the arg_rewrite module seem to work except the "printf"s and the limiter. I'm new to the language, so I'm not sure whether my mistake is obvious or not.
Skeleton of my arg_rewrite_mod:
-module(arg_preproc).
-export([arg_rewrite/1]).

-include("limiter_def.hrl").
-include_lib("/usr/lib/yaws/include/yaws_api.hrl").

is_blacklisted(ID) ->
    case ratelimiter:blacklist(ID) of
        false -> continue;
        true  -> throw(blacklist)
    end.

is_limited(ID) ->
    case ratelimiter:limit(ID) of
        false -> continue;
        true  -> throw(limit)
    end.

arg_rewrite(A) ->
    Allow = ['GET', 'POST', 'HEAD'],
    try
        {IP, _} = A#arg.client_ip_port,
        ID = IP,
        is_blacklisted(ID),
        io:format("~p ~p ~n", [ID, is_blacklisted(ID)]),
        %% === Allow expected HTTP requests
        HttpReq = (A#arg.req)#http_request.method,
        case lists:member(HttpReq, Allow) of
            true ->
                {_, ReqTgt} = (A#arg.req)#http_request.path,
                PassThru = [".css", ".jpg", ".jpeg", ".png", ".js"],
                %% ... much more ...
            false ->
                is_limited(ID),
                throw(http_method_denied)
        end
    catch
        throw:blacklist -> %% Send back a 429;
        throw:limit -> %% Same but no Retry-After;
        throw:http_method_denied ->
            %% Only throw experienced
            AllowedReq = string:join([atom_to_list(M) || M <- Allow], ","),
            A#arg{state = #rewrite_response{status = 405,
                headers = [{header, {"Allow", AllowedReq}},
                           {header, {connection, "close"}}]}};
        Type:Reason -> {error, {unhandled, {Type, Reason}}}
    end.
I can spam curl -I -X HEAD <<any page>> as fast as I can in a bash shell and all I get is HTTP 200. The ETS table has zero entries as well. With PUT I get an HTTP 405 as intended. I can run ratelimiter:lockout({MY_IP}) and still get the web page to load in my browser, and an HTTP 200 with curl.
I'm confused. Is it the way I started YAWS?
start() ->
    os:putenv("YAWSHOME", ?HOMEPATH_YAWS),
    code:add_patha(?MODPATH_YAWS),
    ok = case (R = application:start(yaws)) of
             {error, {already_started, _}} -> ok;
             _ -> R
         end,
    {ok, self()}. %% Tell supervisor everything okay in a manner it expects.
I did this because I thought it would be "easier."
When starting Yaws as part of another application, it's important to use its embedding support. One important thing the Yaws embedding startup code does is set the application environment variable embedded to true:
application:set_env(yaws, embedded, true),
Yaws checks this variable in several of its code paths, especially during initialization, in order to avoid assuming that it's running as a stand-alone daemon process.
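For reference, here is a rough sketch of the documented embedding flow, assuming your own top-level supervisor Sup and placeholder docroot/port values:

start_embedded_yaws(Sup) ->
    Docroot   = "/var/www",                         %% placeholder
    SconfList = [{port, 8080}, {listen, {0, 0, 0, 0}}],
    GconfList = [{id, "embedded"}],
    %% The embedding startup code sets the 'embedded' env variable (see above).
    {ok, SCList, GC, ChildSpecs} =
        yaws_api:embedded_start_conf(Docroot, SconfList, GconfList, "embedded"),
    %% Graft the Yaws processes into your own supervision tree ...
    [supervisor:start_child(Sup, CS) || CS <- ChildSpecs],
    %% ... then hand Yaws its configuration.
    yaws_api:setconf(GC, SCList).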
Regarding rate limiting, rather than using an arg rewriter, you might consider using a shaper. The yaws_shaper module provides a behavior that expects its callback module to implement two functions:
check/1: yaws_shaper calls this to allow the callback module to decide whether to allow the request from the client. It passes client host information as the callback argument. Your shaper callback module returns either the atom allow to allow the request to proceed, or the tuple {deny, Status, Message} where Status is an HTTP status code to return to the client, such as 429 to indicate the client is making too many requests, and Message is any extra HTML to be returned to the client. (It might be nice if Message could include a reply header such as Retry-After as well; this is something I'll consider adding to Yaws.)
update/3: yaws_shaper calls this when the response for a client is ready to be returned. The first argument is the client host information, the second argument is the number of "hits" (the value 1 for each request), and the third argument is the number of bytes being delivered in response to the client's request. Your shaper callback module can return ok from update/3 (Yaws does not use the return value).
A shaper can use this framework to track how many requests each client is making and how much data Yaws is delivering to each client, and use that information to limit or deny particular clients.
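As a concrete illustration, a minimal callback module following that contract might look like the sketch below; the module name, ETS table, and threshold are invented, and only the check/1 and update/3 shapes come from the description above.

-module(my_shaper).
-behaviour(yaws_shaper).

-export([check/1, update/3]).

-define(MAX_HITS, 100).

%% Assumes a named, public ETS table 'shaper_hits' created at startup,
%% e.g. ets:new(shaper_hits, [named_table, public, set]).

%% Called before the request is handled.
check(Client) ->
    case hits_for(Client) of
        N when N > ?MAX_HITS ->
            {deny, 429, "<p>Too many requests, slow down.</p>"};
        _ ->
            allow
    end.

%% Called when the response is ready: Hits is 1 per request,
%% Bytes is the amount of data delivered to the client.
update(Client, Hits, _Bytes) ->
    ets:update_counter(shaper_hits, Client, Hits, {Client, 0}),
    ok.

hits_for(Client) ->
    case ets:lookup(shaper_hits, Client) of
        [{_, N}] -> N;
        []       -> 0
    end.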
And finally, while "printf debugging" works, it's less than ideal especially in Erlang, which has built-in tracing. You should consider learning the dbg module so you can trace any function you want, see who called it, see what arguments are being passed to it, see what it returns, etc.
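For example, a quick shell session tracing the rate limiter might look like this (the module name is taken from the question; the rest is standard dbg usage):

dbg:tracer(),                    %% start the default trace message handler
dbg:p(all, call),                %% trace function calls in all processes
dbg:tpl(ratelimiter, limit, x),  %% show calls to ratelimiter:limit with return values/exceptions
%% ... exercise the code (curl, browser, etc.) and watch the trace output ...
dbg:stop_clear().                %% stop tracing and remove all trace patterns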
Currently my Erlang application is started within an escript (TCP server), and all works fine since it uses the default port I provided. Now I want to pass the port via the escript to the application, but I have no idea how. (The app runs a supervisor.)
script.escript
#!/usr/bin/env escript
%% -*- erlang -*-
-export([main/1]).
main([UDPort, TCPort]) ->
    U = list_to_integer(UDPort),
    T = list_to_integer(TCPort),
    app:start(), %% Want to pass T into the startup.
    receive
        _ -> ok
    end;
...
app.erl
-module(app).
-behaviour(application).
-export([start/0, start/2, stop/0, stop/1]).
-define(PORT, 4300).
start() -> application:start(?MODULE). %% This is called by the escript.
stop()  -> application:stop(?MODULE).

start(_StartType, _StartArgs) -> supervisor:start(?PORT).
stop(_State) -> ok.
I'm honestly not sure if this is possible with using application but I thought it best to just ask.
The common way is to start things from whatever shell by just calling
erl -run foo
But you can also do
erl -appname key value
to set an environment value and then
application:get_env(appname, key)
to get the value you are looking for.
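For instance, assuming an application named myapp and an env key tcp_port (both placeholder names):

%% Started with:
%%   erl -myapp tcp_port 4300

start(_StartType, _StartArgs) ->
    {ok, Port} = application:get_env(myapp, tcp_port),
    my_sup:start_link(Port).   %% my_sup is a placeholder supervisor module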
That said...
I like to have service applications be things that don't have to shut down to be (re)configured. I usually include some message protocol like {config, Aspect, Setting} or similar that can alter the basic state of a service on the fly. Because I often do this I usually just wind up having whatever script starts up the application also send a configuration message to it.
So with this in mind, consider this rough conceptual example:
#!/usr/bin/env escript
%% -*- erlang -*-
-export([main/1]).
main([UDPort, TCPort]) ->
    U = list_to_integer(UDPort),
    T = list_to_integer(TCPort),
    ok = case whereis(app) of
             undefined -> app:start();
             _Pid      -> ok
         end,
    ok = set_ports(U, T).

%% Just an illustration.
%% Making this a synchronous gen_server/gen_fsm call is way better.
set_ports(U, T) ->
    app ! {config, listen, {tcp, T}},
    app ! {config, listen, {udp, U}},
    ok.
Now not only is the startup script a startup script, it is also a config script. The point isn't to have a startup script, it is to have a service running on the ports you designated. This isn't a conceptual fit for all tools, of course, but it should give you some ideas. There is also the practice of putting a config file somewhere the application knows to look and just reading terms from it, among other techniques (like including ports in the application specification, etc.).
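The config-file technique mentioned above can be as simple as file:consult/1 over a file of Erlang terms (the file name and keys below are made up):

%% ports.config contains, for example:
%%   {tcp_port, 4300}.
%%   {udp_port, 4301}.

read_ports(Path) ->
    {ok, Terms} = file:consult(Path),
    {proplists:get_value(tcp_port, Terms),
     proplists:get_value(udp_port, Terms)}.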
Edit
I just realized you are doing this in an escript, which will spawn a new node every time you call it. To make the technique above work properly, you would need to have the escript name a node for the service to run on, and locate that node if it already exists.
One of my websites is using Nitrogen with a Cowboy server.
I would like to log every access to web pages, just like Apache does with access.log.
What would be the best way to do that?
You can use Cowboy middlewares: https://ninenines.eu/docs/en/cowboy/1.0/guide/middlewares/
Just create a simple log module:
-module(app_web_log).
-behaviour(cowboy_middleware).
-export([execute/2]).
execute(Req, Env) ->
    {{Peer, _}, Req2} = cowboy_req:peer(Req),
    {Method, Req3} = cowboy_req:method(Req2),
    {Path, Req4} = cowboy_req:path(Req3),
    error_logger:info_msg("~p: [~p]: ~p ~p", [calendar:universal_time(), Peer, Method, Path]),
    {ok, Req4, Env}.
and add it to the list of middlewares:
{ok, _} = cowboy:start_http(http, 100, [{port, 8080}], [
{env, [{dispatch, Dispatch}]},
{middlewares, [cowboy_router, app_web_log, cowboy_handler]}]).
Try using Nitrogen on top of the Yaws web server instead, since it performs access logging by default.
Each underlying webserver does it differently (or not at all) - this is something simple_bridge does not yet have abstracted.
So in the case of cowboy, you'll likely have to rig it up yourself.
If you're using a newer build of Nitrogen (if you have the file site/src/nitrogen_main_handler.erl), then you can edit that file to do the logging yourself. For example, using Erlang's error_logger, you could add something simple like:
log_request() ->
    error_logger:info_msg("~p: [~p]: ~p", [{date(), time()}, wf:peer_ip(), wf:url()]).

run() ->
    handlers(),
    log_request(), %% <--- insert before wf_core:run()
    wf_core:run().
Then whatever happens with the log can be handled by configuring error_logger to write to disk (http://erldocs.com/17.0/kernel/error_logger.html?i=13&search=error_logger#logfile/1)
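For example, a minimal way to do that from your startup code (the file name is a placeholder):

%% Duplicate error_logger output (including the log_request/0 lines above)
%% into a file on disk.
ok = error_logger:logfile({open, "log/access.log"}).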
If you use an older Nitrogen (which would have site/src/nitrogen_cowboy.erl), then you would similarly edit that file, once again before the wf_core:run() call.
Alternatively, your hooks option with cowboy could work as well. I've not worked with them, so you're on your own there :)
I want to pass some arguments to the supervisor's init/1 function, and ideally the application's interface would look like this:
redis_pool:start() % start all instances
redis_pool:start(Names) % start only given instances
Here is the application:
-module(redis_pool).
-behaviour(application).
...
start() -> % start without params
    application:ensure_started(?APP_NAME, transient).

start(Names) -> % start with some params
    % I want to pass Names to supervisor init function
    % in order to do that I have to bypass application:ensure_started
    % which is not GOOD :(
    application:load(?APP_NAME),
    case start(normal, [Names]) of
        {ok, _Pid} -> ok;
        {error, {already_started, _Pid}} -> ok
    end.

start(_StartType, StartArgs) ->
    redis_pool_sup:start_link(StartArgs).
Here is the supervisor:
init([]) ->
    {ok, Config} = get_config(),
    Names = proplists:get_keys(Config),
    init([Names]);
init([Names]) ->
    {ok, Config} = get_config(),
    PoolSpecs = lists:map(fun(Name) ->
        PoolName = pool_utils:name_for(Name),
        {[Host, Port, Db], PoolSize} = proplists:get_value(Name, Config),
        PoolArgs = [{name, {local, PoolName}},
                    {worker_module, eredis},
                    {size, PoolSize},
                    {max_overflow, 0}],
        poolboy:child_spec(PoolName, PoolArgs, [Host, Port, Db])
    end, Names),
    {ok, {{one_for_one, 10000, 1}, PoolSpecs}}.
As you can see, the current implementation is ugly and may be buggy. The question is: how can I pass some arguments and start the application and supervisor (with the parameters that were given to start/1)?
One option is to start the application and run the redis pools in two separate phases:
redis_pool:start(),
redis_pool:run([] | Names).
But what if I want to run the supervisor's children (the redis pools) when my app starts?
Thank you.
The application callback Module:start/2 is not an API to call in order to start the application. It is called when the application is started by application:start/1,2. This means that overloading it to provide differing parameters is probably the wrong thing to do.
In particular, application:start will be called directly if someone adds your application as a dependency of theirs (in the foo.app file). At this point, they have no control over the parameters, since they come from your .app file, in the {mod, {Mod, Args}} term.
Some possible solutions:
Application Configuration File
Require that the parameters be in the application configuration file; you can retrieve them with application:get_env/2,3.
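A minimal sketch of that approach for the redis_pool case, assuming an env key named pools (the key name and the pool_spec/1 helper are hypothetical):

%% In sys.config (or the env section of redis_pool.app):
%%   [{redis_pool, [{pools, [pool_a, pool_b]}]}].

%% In the supervisor:
init([]) ->
    %% get_env/3 falls back to the default when the key is unset.
    Names = application:get_env(redis_pool, pools, []),
    PoolSpecs = [pool_spec(Name) || Name <- Names],
    {ok, {{one_for_one, 10, 1}, PoolSpecs}}.

%% pool_spec/1 would build the poolboy child spec exactly as in the
%% question's init/1.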
Don't start a supervisor
This means one of two things: becoming a library application (removing the {mod, Mod} term from your .app file) -- you don't need an application behaviour; or starting a dummy supervisor that does nothing.
Then, when someone wants to use your library, they can call an API to create the pool supervisor, and graft it into their supervision tree. This is what poolboy does with poolboy:child_spec.
Or, your application-level supervisor can be a normal supervisor, with no children by default, and you can provide an API to start children of that, via supervisor:start_child. This is (more or less) what cowboy does.
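A rough sketch of that last pattern, with an empty top-level supervisor plus an add_pool/2 API; all names and sizes are illustrative, and the worker arguments are whatever eredis expects in your setup:

-module(redis_pool_sup).
-behaviour(supervisor).

-export([start_link/0, add_pool/2, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

%% Public API: attach a pool to the already-running supervisor.
add_pool(Name, WorkerArgs) ->
    PoolArgs = [{name, {local, Name}},
                {worker_module, eredis},
                {size, 5},
                {max_overflow, 0}],
    supervisor:start_child(?MODULE, poolboy:child_spec(Name, PoolArgs, WorkerArgs)).

init([]) ->
    %% No static children; pools are added later via add_pool/2.
    {ok, {{one_for_one, 10, 1}, []}}.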
You can pass arguments in the AppDescr argument to application:load/1 (though it's a mighty big tuple already...) as {mod, {Module, StartArgs}} according to the docs ("according to the docs" as in, I don't recall doing it this way myself, ever: http://www.erlang.org/doc/apps/kernel/application.html#load-1).
application:load({application, some_app, [{mod, {Module, [Stuff]}}]})
Without knowing anything about the internals of the application you're starting, its hard to say which way is best, but a common way to do this is to start up the application and then send it a message containing the data you want it to know.
You could make receipt of that form of message tell the application to go through a configuration assertion procedure, so that the same message you send on startup is also the same sort of thing you would send to reconfigure it on the fly. I find this more useful than one-shotting arguments on startup.
In any case, it is usually better to think in terms of starting something, then asking it to do something for you, than to try telling it everything in init parameters. This can be as simple as having it start up and wait for some message that will tell the listener to then spin up the supervisor the way you're trying to here -- isolated one step from the application inclusion issues RL mentioned in his answer.
Lately I have been working with ejabberd and internal module development.
I would like to have an internal module developed using the gen_mod + gen_server behaviours. My module has an ejabberd hook based on this one: http://metajack.im/2008/08/28/writing-ejabberd-modules-presence-storms
My start_link function is like:
start_link(Host, Opts) ->
    Proc = gen_mod:get_module_proc(Host, ?PROCNAME),
    gen_server:start_link({local, Proc}, ?MODULE, [Host, Opts], []).
Where ?PROCNAME is:
-define(PROCNAME, ejabberd_mod_mine).
So on my localhost it is registered as ejabberd_mod_mine_localhost.
As you can see in the tutorial I linked, they use a hook in order to parse the presence stanza directly, but what if I want to compare the From value with a value I saved in the gen_server state? I thought of using a gen_server cast, passing the packet to it, but the problem is that the hook function runs in a different process, and therefore I cannot use:
gen_server:cast(self(), {filter, Packet})
and I can just use:
gen_server:cast(ejabberd_mod_mine_localhost, {filter, Packet})
But should I hardcode the name of the process? What if the host name is different? Should I register my gen_server using just its module name?
A common pattern is to use the domain of either the sender or the receiving user (depending on what you are trying to do). For example, mod_offline (which stores packets in the database when the destination user is offline) uses the destination JID to discover which domain it has to run on, with something like:
gen_mod:get_module_proc(To#jid.lserver, ?PROCNAME)
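Applied to the question, the hook can derive the registered name from the host instead of hardcoding it. A sketch, with the hook name and arguments only illustrative:

on_presence(From, _To, Packet) ->
    %% Same naming scheme as start_link/2, so on the "localhost" host this
    %% resolves to ejabberd_mod_mine_localhost. Use From or To depending on
    %% whether you key off the sender or the recipient.
    Proc = gen_mod:get_module_proc(From#jid.lserver, ?PROCNAME),
    gen_server:cast(Proc, {filter, Packet}),
    ok.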