I'm currently trying to write an ejabberd module but am falling at the first hurdle so assume I am missing something simple.
The aim for the finished module is to inspect a message body and look for a certain term. If the term is present then the module will drop the message and send a reply to the originator informing them that their message has been dropped due to the violation. I do not want to just obfuscate the offending term.
I have built a skeleton test module using the 'hello world' example from the main ejabberd webpages and it works so I seem to be putting things in the right place and compiling them properly.
Taking inspiration from sources such as mod_pottymouth, as well as related Stack Overflow questions such as "Filtering message packet's body in ejabberd" and "How to filter messages in Ejabberd" (not to mention various blogs and other articles), code along the lines of the following should work as far as I can tell:
-module(mod_helloworld).

-behaviour(gen_mod).

-include("logger.hrl").
-include("ejabberd.hrl").

-export([start/2, stop/1, on_filter_packet/1]).

start(_Host, _Opts) ->
    ?INFO_MSG("Hello there, ejabberd world!", []),
    ejabberd_hooks:add(filter_packet, global, ?MODULE, on_filter_packet, 0),
    ok.

stop(_Host) ->
    ?INFO_MSG("Bye bye, ejabberd world!", []),
    ejabberd_hooks:delete(filter_packet, global, ?MODULE, on_filter_packet, 0),
    ok.

on_filter_packet(drop) ->
    ?INFO_MSG("world - Message was dropped", []),
    drop;
on_filter_packet({_From, _To, Xml} = Packet) ->
    %% this clause never fires
    ?INFO_MSG("world - message hit clause ~p", [Xml]),
    Packet;
on_filter_packet(Msg) ->
    %% catch-all clause for non-message types
    ?INFO_MSG("world - message hit default", []),
    Msg.
When the module runs, every packet triggers the catch-all clause, so the hooks seem to be doing their job, and if I check the main ejabberd logs, messages are going through just fine. If I remove the catch-all clause, the filter simply crashes.
I've tried a number of variations of the filter clause taken from various examples (plus supporting code), e.g.:
{_From, _To, #xmlel{name = StanzaType}} = Input
or
{_From, _To, {xmlel, <<"message">>, _Attrs, Els} = _Packet} = _Msg
but none of them seem to be working.
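For debugging, a catch-all clause that simply logs the whole term the hook delivers (a minimal sketch along the lines of the module above) makes it easy to compare the packet's real shape with the patterns being tried:

on_filter_packet(Packet) ->
    %% Log the complete term so its actual shape can be compared against
    %% the patterns in the other clauses.
    ?INFO_MSG("filter_packet received: ~p", [Packet]),
    Packet.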
I'm using two different CentOS 7 setups (one with ejabberd installed as a package running on a physical box, the other using Vagrant with ejabberd installed from source) and a number of different XMPP clients, but the result is the same. The ejabberd version is 17.07.31.
What obvious thing am I missing?
My sole method of debugging (io:format/2) is not working in YAWS. I'm at a loss. My supervisor starts three processes: the ETS manager, the YAWS init process, and the rate limiter. This is successful. I can play around with the rate limiter in the shell, calling the same functions YAWS should be calling. The difference is that the shell behaves as I would expect, and I have no idea what is happening in YAWS.
I do know that if I spam the command ratelimiter:limit(IP) in the shell, it will eventually return true. I can also execute the following and they will return true: ratelimiter:lockout(IP), ratelimiter:blacklist(IP). The limiter is a gen_server.
The functions do the following:
limit/1: Check ETS table if counter > threshold; update counter. If counter > blacklist threshold make entry in mnesia table
blacklist/1: Check mnesia table if entry exists; Yes: reset timer
lockout/1: Immediately enters ID into mnesia table
In my arg_rewrite_mod module I'm doing some checks to ensure I'm getting the HTTP requests I expect, namely GET, POST, and HEAD. I thought this would also be a nice place to do the rate limiting, as early as possible in the web server's chain of events.
All the changes I've made to the arg_rewrite module seem to work except the "printf"s and the limiter. I'm new to the language, so I'm not sure whether my mistake is obvious or not.
Skeleton of my arg_rewrite_mod:
-module(arg_preproc).
-export([arg_rewrite/1]).

-include("limiter_def.hrl").
-include_lib("/usr/lib/yaws/include/yaws_api.hrl").

is_blacklisted(ID) ->
    case ratelimiter:blacklist(ID) of
        false -> continue;
        true  -> throw(blacklist)
    end.

is_limited(ID) ->
    case ratelimiter:limit(ID) of
        false -> continue;
        true  -> throw(limit)
    end.

arg_rewrite(A) ->
    Allow = ['GET', 'POST', 'HEAD'],
    try
        {IP, _} = A#arg.client_ip_port,
        ID = IP,
        is_blacklisted(ID),
        io:format("~p ~p ~n", [ID, is_blacklisted(ID)]),
        %% === Allow expected HTTP requests
        HttpReq = (A#arg.req)#http_request.method,
        case lists:member(HttpReq, Allow) of
            true ->
                {_, ReqTgt} = (A#arg.req)#http_request.path,
                PassThru = [".css", ".jpg", ".jpeg", ".png", ".js"],
                %% ... much more ...
            false ->
                is_limited(ID),
                throw(http_method_denied)
        end
    catch
        throw:blacklist -> %% Send back a 429;
        throw:limit -> %% Same but no Retry-After;
        throw:http_method_denied ->
            %% Only thrown experienced
            AllowedReq = string:join([atom_to_list(M) || M <- Allow], ","),
            A#arg{state = #rewrite_response{status = 405,
                headers = [{header, {"Allow", AllowedReq}},
                           {header, {connection, "close"}}]
            }};
        Type:Reason -> {error, {unhandled, {Type, Reason}}}
    end.
I can spam curl -I -X HEAD <<any page>> as fast as I can in a bash shell and all I get is HTTP 200. The ETS table has zero entries as well. Using PUT I get an HTTP 405 as intended. I can run ratelimiter:lockout({MY_IP}) and still get the web page to load in my browser and an HTTP 200 with curl.
I'm confused. Is it the way I started YAWS?
start() ->
    os:putenv("YAWSHOME", ?HOMEPATH_YAWS),
    code:add_patha(?MODPATH_YAWS),
    ok = case (R = application:start(yaws)) of
             {error, {already_started, _}} -> ok;
             _ -> R
         end,
    {ok, self()}. %% Tell supervisor everything okay in a manner it expects.
I did this because I thought it would be "easier."
When starting Yaws as part of another application, it's important to use its embedding support. One important thing the Yaws embedding startup code does is set the application environment variable embedded to true:
application:set_env(yaws, embedded, true),
Yaws checks this variable in several of its code paths, especially during initialization, in order to avoid assuming that it's running as a stand-alone daemon process.
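A rough sketch of an embedded start, built on that embedding support, looks something like the following; the Id, docroot, and server configuration are placeholders, and the exact arities and return shape of yaws_api:embedded_start_conf should be checked against the Yaws version in use:

start_embedded_yaws(SupPid) ->
    Id        = "embedded_yaws",                 %% placeholder id
    Docroot   = "/tmp/docroot",                  %% placeholder docroot
    GconfList = [{id, Id}],
    SconfList = [{port, 8080},
                 {servername, "ratelimited"},
                 {listen, {0, 0, 0, 0}},
                 {docroot, Docroot}],
    %% embedded_start_conf prepares Yaws for embedded mode and returns the
    %% child specs for the Yaws processes.
    {ok, SCList, GC, ChildSpecs} =
        yaws_api:embedded_start_conf(Docroot, SconfList, GconfList, Id),
    %% Graft the Yaws processes into your own supervision tree...
    [supervisor:start_child(SupPid, Spec) || Spec <- ChildSpecs],
    %% ...then hand the configuration to Yaws once they are running.
    yaws_api:setconf(GC, SCList),
    ok.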
Regarding rate limiting, rather than using an arg rewriter, you might consider using a shaper. The yaws_shaper module provides a behavior that expects its callback module to implement two functions:
check/1: yaws_shaper calls this to allow the callback module to decide whether to allow the request from the client. It passes client host information as the callback argument. Your shaper callback module returns either the atom allow to allow the request to proceed, or the tuple {deny, Status, Message} where Status is an HTTP status code to return to the client, such as 429 to indicate the client is making too many requests, and Message is any extra HTML to be returned to the client. (It might be nice if Message could include a reply header such as Retry-After as well; this is something I'll consider adding to Yaws.)
update/3: yaws_shaper calls this when the response for a client is ready to be returned. The first argument is the client host information, the second argument is the number of "hits" (the value 1 for each request), and the third argument is the number of bytes being delivered in response to the client's request. Your shaper callback module can return ok from update/3 (Yaws does not use the return value).
A shaper can use this framework to track how many requests each client is making and how much data Yaws is delivering to each client, and use that information to limit or deny particular clients.
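A minimal shaper callback along those lines might look like this (the ETS table name and threshold are made up for illustration; the table would need to be created, and periodically cleared, elsewhere in the application):

-module(my_shaper).
-export([check/1, update/3]).

-define(MAX_HITS, 100).

%% Allow or deny a request based on a simple per-host hit counter.
check(Host) ->
    Hits = ets:update_counter(shaper_hits, Host, 1, {Host, 0}),
    case Hits > ?MAX_HITS of
        true  -> {deny, 429, "<p>Too many requests</p>"};
        false -> allow
    end.

%% Called when the response is delivered; nothing extra to track here.
update(_Host, _Hits, _Bytes) ->
    ok.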
And finally, while "printf debugging" works, it's less than ideal, especially in Erlang, which has built-in tracing. You should consider learning the dbg module so you can trace any function you want, see who called it, see what arguments are passed to it, see what it returns, and so on.
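For example, to see whether YAWS ever calls the limiter at all, something like this in the node's shell prints every call to ratelimiter:limit/1 together with its arguments and return value:

dbg:tracer(),                     %% start the default trace message handler
dbg:p(all, c),                    %% trace function calls in all processes
dbg:tpl(ratelimiter, limit, x).   %% 'x' shows arguments plus return/exception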
Here is a simple UDP server:
-module(kvstore_udpserver).
-author("mylesmcdonnell").

%% API
-export([start/0]).

start() ->
    spawn(fun() -> server(2346) end).

server(Port) ->
    {ok, Socket} = gen_udp:open(Port, [binary]),
    loop(Socket).

loop(Socket) ->
    receive
        {udp, Socket, Host, Port, Bin} ->
            case binary_to_term(Bin) of
                {store, Value} ->
                    io:format("kvstore_udpserver:{store, Value}~n"),
                    gen_udp:send(Socket, Host, Port, term_to_binary(kvstore:store(Value)));
                {retrieve, Key} ->
                    io:format("kvstore_udpserver:{retrieve, Key}~n"),
                    gen_udp:send(Socket, Host, Port, term_to_binary(kvstore:retrieve(Key)))
            end,
            loop(Socket)
    end.
How can I restructure this so that:
a) it, or at least the relevant part of it, is a gen_server, so that I can add it to the supervision tree, and
b) concurrency is increased by handling each message in a separate process?
I have reimplemented the sockserv example from Learn You Some Erlang for my TCP server but I'm struggling to determine a similar model for UDP.
For a):
You need to declare the gen_server behaviour and implement all of the callback functions (this is obvious, but it's worth calling out explicitly). If you have rebar installed, you can use the command rebar create template=simplesrv srvid=your_server_name to add the boilerplate functions.
You'd probably want to move the server-starting logic (the gen_udp:open/2 call) into your server's init/1 function. (init/1 is required by the gen_server behaviour.) You can also start your loop/1 function there.
You'd probably want to make sure the UDP socket is closed by the module's terminate/2 function.
Move the business logic for handling requests (the message parsing currently done in your loop/1 function) into handle_call/3 or handle_cast/2 in your module (see below).
For b):
You have a few options, but basically, whenever you receive a message, you can use gen_server:cast/2 (if you don't care about the response) or gen_server:call/2,3 if you do. The casts or calls will be handled by the handle_cast/2 or handle_call/3 functions in your module.
Casts are inherently non-blocking and the answers to this question have a good design pattern for handling call operations asynchronously in gen_servers. You can crib from that.
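Putting a) and b) together, a minimal sketch might look like the following (not a drop-in replacement for the code above; kvstore:store/1 and kvstore:retrieve/1 are taken from the question, and error handling is omitted):

-module(kvstore_udpserver).
-behaviour(gen_server).

-export([start_link/0]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2,
         terminate/2, code_change/3]).

start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

init([]) ->
    %% The socket is active, so incoming datagrams arrive as messages
    %% and are handled in handle_info/2.
    {ok, Socket} = gen_udp:open(2346, [binary, {active, true}]),
    {ok, Socket}.

handle_info({udp, Socket, Host, Port, Bin}, Socket) ->
    %% One process per datagram, so a slow kvstore call does not block
    %% the server.
    spawn(fun() -> handle_request(Socket, Host, Port, Bin) end),
    {noreply, Socket};
handle_info(_Other, State) ->
    {noreply, State}.

handle_call(_Request, _From, State) -> {reply, ok, State}.
handle_cast(_Msg, State) -> {noreply, State}.

terminate(_Reason, Socket) ->
    gen_udp:close(Socket).

code_change(_OldVsn, State, _Extra) ->
    {ok, State}.

handle_request(Socket, Host, Port, Bin) ->
    Reply = case binary_to_term(Bin) of
                {store, Value}  -> kvstore:store(Value);
                {retrieve, Key} -> kvstore:retrieve(Key)
            end,
    gen_udp:send(Socket, Host, Port, term_to_binary(Reply)).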
I want to pass some arguments to the supervisor's init/1 function, and ideally the application's interface would look like this:
redis_pool:start() % start all instances
redis_pool:start(Names) % start only given instances
Here is the application:
-module(redis_pool).
-behaviour(application).

...

start() -> % start without params
    application:ensure_started(?APP_NAME, transient).

start(Names) -> % start with some params
    % I want to pass Names to the supervisor init function.
    % In order to do that I have to bypass application:ensure_started,
    % which is not GOOD :(
    application:load(?APP_NAME),
    case start(normal, [Names]) of
        {ok, _Pid} -> ok;
        {error, {already_started, _Pid}} -> ok
    end.

start(_StartType, StartArgs) ->
    redis_pool_sup:start_link(StartArgs).
Here is the supervisor:
init([]) ->
    {ok, Config} = get_config(),
    Names = proplists:get_keys(Config),
    init([Names]);
init([Names]) ->
    {ok, Config} = get_config(),
    PoolSpecs = lists:map(fun(Name) ->
        PoolName = pool_utils:name_for(Name),
        {[Host, Port, Db], PoolSize} = proplists:get_value(Name, Config),
        PoolArgs = [{name, {local, PoolName}},
                    {worker_module, eredis},
                    {size, PoolSize},
                    {max_overflow, 0}],
        poolboy:child_spec(PoolName, PoolArgs, [Host, Port, Db])
    end, Names),
    {ok, {{one_for_one, 10000, 1}, PoolSpecs}}.
As you can see, the current implementation is ugly and may be buggy. The question is: how can I pass some arguments and start the application and supervisor (with the params that were given to start/1)?
One option is to start the application and run the redis pools in two separate phases:
redis_pool:start(),
redis_pool:run([] | Names).
But what if I want to run supervisor children (redis pool) when my app starts?
Thank you.
The application callback Module:start/2 is not an API to call in order to start the application. It is called when the application is started by application:start/1,2. This means that overloading it to provide differing parameters is probably the wrong thing to do.
In particular, application:start will be called directly if someone adds your application as a dependency of theirs (in the foo.app file). At this point, they have no control over the parameters, since they come from your .app file, in the {mod, {Mod, Args}} term.
Some possible solutions:
Application Configuration File
Require that the parameters be in the application configuration file; you can retrieve them with application:get_env/2,3.
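For example, assuming an invented pools key in the app environment (set in sys.config or the .app file), the supervisor can read the names itself and the special-casing in start/1 goes away; pool_specs/1 here stands for the poolboy child-spec code already shown in the question:

%% sys.config:
%%   [{redis_pool, [{pools, [sessions, cache]}]}].

init([]) ->
    Names = application:get_env(redis_pool, pools, []),
    PoolSpecs = pool_specs(Names),   %% same poolboy child specs as in the question
    {ok, {{one_for_one, 10, 1}, PoolSpecs}}.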
Don't start a supervisor
This means one of two things: becoming a library application (removing the {mod, Mod} term from your .app file) -- you don't need an application behaviour; or starting a dummy supervisor that does nothing.
Then, when someone wants to use your library, they can call an API to create the pool supervisor, and graft it into their supervision tree. This is what poolboy does with poolboy:child_spec.
Or, your application-level supervisor can be a normal supervisor, with no children by default, and you can provide an API to start children of that, via supervisor:start_child. This is (more or less) what cowboy does.
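A sketch of that second approach, reusing get_config/0, pool_utils:name_for/1, and the poolboy child specs from the question (start_pool/1 is an invented API name):

%% The application-level supervisor starts with no children...
init([]) ->
    {ok, {{one_for_one, 10, 1}, []}}.

%% ...and pools are added on demand through an explicit API.
start_pool(Name) ->
    {ok, Config} = get_config(),
    {[Host, Port, Db], PoolSize} = proplists:get_value(Name, Config),
    PoolName = pool_utils:name_for(Name),
    PoolArgs = [{name, {local, PoolName}},
                {worker_module, eredis},
                {size, PoolSize},
                {max_overflow, 0}],
    Spec = poolboy:child_spec(PoolName, PoolArgs, [Host, Port, Db]),
    supervisor:start_child(?MODULE, Spec).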
You can pass arguments in the AppDescr argument to application:load/1 (though it's a mighty big tuple already...) as {mod, {Module, StartArgs}} according to the docs ("according to the docs" as in, I don't recall ever doing it this way myself: http://www.erlang.org/doc/apps/kernel/application.html#load-1).
application:load({application, some_app, [{mod, {Module, [Stuff]}}]})
Without knowing anything about the internals of the application you're starting, it's hard to say which way is best, but a common way to do this is to start the application and then send it a message containing the data you want it to know.
You could make receipt of that message tell the application to go through a configuration assertion procedure, so that the same message you send on startup is also the sort of thing you would send to reconfigure it on the fly. I find this more useful than one-shotting arguments on startup.
In any case, it is usually better to think in terms of starting something, then asking it to do something for you, than to try telling it everything in init parameters. This can be as simple as having it start up and wait for some message that will tell the listener to then spin up the supervisor the way you're trying to here -- isolated one step from the application inclusion issues RL mentioned in his answer.
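As a sketch of that idea, the process could accept a single {configure, Names} message both right after startup and later at run time; ensure_pool/1 is an invented helper that starts a pool only if it is not already running:

handle_call({configure, Names}, _From, _State) ->
    lists:foreach(fun ensure_pool/1, Names),   %% ensure_pool/1: hypothetical helper
    {reply, ok, Names}.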
I'm currently writing a piece of software in Erlang, which is now based on the gen_server behaviour. This gen_server should export a function (let's call it update/1) which connects over SSL to another service online and sends it the value passed as an argument to the function.
Currently update/1 is like this:
update(Value) ->
    gen_server:call(?SERVER, {update, Value}).
So once it is called, there is a call to ?SERVER which is handled as:
handle_call({update, Value}, _From, State) ->
    {ok, Socket} = ssl:connect("remoteserver.com", 5555, [], 3000),
    Reply = ssl:send(Socket, Value),
    {reply, Reply, State}.
Once the packet is sent to the remote server, the peer should sever the connection.
Now, this works fine in my tests in the shell, but what happens if we have to call mymod:update(Value) 1000 times and ssl:connect/4 is not working well (i.e. it is hitting its timeout)?
At that point, my gen_server will have queued up a very large number of values, and they can be processed only one at a time, meaning the 1000th update would complete only 1000*3000 milliseconds after its value was submitted with update/1.
Using a cast instead of a call would lead to the same problem. How can I solve this problem? Should I use a normal function and not a gen_server call?
From personal experience I can say that 1000 messages per gen_server process won't be a problem unless you are queuing big messages.
If your testing shows that your gen_server is not able to handle this much load, then you should create multiple instances of your gen_server, preferably under a supervisor process, at boot time (or at run time) of your application.
Besides that, I really don't understand the requirement of making a new connection for each update. You should consider an optimization such as cached or pre-established connections to the server, no?
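A rough sketch of the cached-connection idea (host, port, and timeout taken from the question; reconnection and error handling omitted):

init([]) ->
    %% The ssl application must already be running (e.g. via ssl:start()).
    {ok, Socket} = ssl:connect("remoteserver.com", 5555, [], 3000),
    {ok, Socket}.

handle_call({update, Value}, _From, Socket) ->
    %% Reuse the connection opened in init/1 instead of reconnecting on
    %% every update.
    Reply = ssl:send(Socket, Value),
    {reply, Reply, Socket}.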
I have been trying to learn Erlang and came across some code written by Joe Armstrong:
start() ->
    F = fun interact/2,
    spawn(fun() -> start(F, 0) end).

interact(Browser, State) ->
    receive
        {browser, Browser, Str} ->
            Str1 = lists:reverse(Str),
            Browser ! {send, "out ! " ++ Str1},
            interact(Browser, State)
    after 100 ->
            Browser ! {send, "clock ! tick " ++ integer_to_list(State)},
            interact(Browser, State+1)
    end.
It is from a blog post about using websockets with Erlang: http://armstrongonsoftware.blogspot.com/2009/12/comet-is-dead-long-live-websockets.html
Could someone please explain why, in the start function, he spawns an anonymous function that calls start(F, 0), when start is a function that takes zero arguments? I am confused about what he is trying to do here.
Further down in this blog post (Listings) you can see that there is another function (start/2) that takes two arguments:
start(F, State0) ->
    {ok, Listen} = gen_tcp:listen(1234, [{packet, 0},
                                         {reuseaddr, true},
                                         {active, true}]),
    par_connect(Listen, F, State0).
The code sample you quoted was only an excerpt where this function was omitted for simplicity.
The reason for spawning a fun in this way is to avoid having to export a function which is only intended for internal use. One problem with exporting is that all exported functions are available to all callers, even if they are only meant for internal use. One example of this is a callback module for gen_server, which typically contains both the exported API for clients and the callback functions for the gen_server behaviour. The callback functions are only intended to be called by the gen_server behaviour and not by others, but they are visible in the export list and not blocked in any way.
Spawning a fun decreases the number of exported internal functions.
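As an illustration of that point (not taken from the blog post): spawning by module, function, and arguments requires the function to be exported, whereas spawning a fun does not:

%% spawn/3 takes Module, Function, Args, so start/2 would have to be added
%% to the export list just for this:
spawn(?MODULE, start, [F, 0]),

%% spawn/1 takes a fun, so start/2 can stay unexported:
spawn(fun() -> start(F, 0) end).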
In Erlang, functions are identified by their name and their arity (the number of parameters they take). You can have more than one function with the same name, as long as they all have different numbers of parameters. The two functions you've posted above are start/0 and interact/2. start/0 doesn't call itself; instead it calls start/2, and if you take a look further down the page you linked to, you'll find the definition of start/2.
The point of using spawn in this way is to start a server process in the background and return control to the caller. To play with this code, I guess that you'd start up the Erlang interpreter, load the script and then call the start/0 function. This method would then start a process in the background and return so that you could continue to type into the Erlang interpreter.