ejabberd mod_pubsub offline message - ios

I want to intercept the offline messages of mod_pubsub. If I send a normal message in ejabberd to an offline user, I see that message in the offline ODBC table, and when the user reconnects the message arrives.
If I publish to a node while some subscribers are offline, I see nothing in the offline message table, but when a user reconnects the node item is delivered correctly, so the message must be saved somewhere.
Can I route the offline item to the offline message ODBC table? Or can I intercept the offline items of mod_pubsub the way I do for messages? For messages, from a plugin, I can do this:
start(_Host, _Opt) ->
    inets:start(),
    ejabberd_hooks:add(offline_message_hook, _Host, ?MODULE, create_message, 50).

stop(_Host) ->
    ejabberd_hooks:delete(offline_message_hook, _Host, ?MODULE, create_message, 50).
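For reference, a minimal sketch of what that callback might look like (the From/To/Packet arity matches the offline_message_hook of older ejabberd versions, and ?INFO_MSG assumes the usual ejabberd.hrl include; the body is purely illustrative):

create_message(From, To, Packet) ->
    %% Illustrative only: extract the body text and log it; a real handler
    %% would forward the stanza to a push service or an external store.
    Body = xml:get_path_s(Packet, [{elem, "body"}, cdata]),
    ?INFO_MSG("offline message from ~p to ~p: ~s", [From, To, Body]),
    ok.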
This is my ejabberd.yml config for mod_pubsub:
mod_pubsub:
  access_createnode: pubsub_createnode
  ## reduces resource consumption, but is XEP-incompliant
  ignore_pep_from_offline: true
  ## XEP-compliant, but increases resource consumption
  ## ignore_pep_from_offline: false
  last_item_cache: false
  db_type: odbc
  plugins:
    - "flat"
    - "hometree"
    - "pep" # pep requires mod_caps

By default, pubsub messages are of type headline. Per the XMPP specification, headline messages are not stored in the offline message store.
However, there is a mod_pubsub option to change the default notification type. You can, for example, set the notification_type option to normal; normal messages are stored offline.
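For example, a hedged sketch of where that could go in the config above (whether notification_type is accepted at this level depends on your ejabberd version; it can also be set per node via node configuration, so check your version's documentation):

mod_pubsub:
  ...
  default_node_config:
    notification_type: normal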

Related

Raid doesn't receive C_ChatInfo.SendAddonMessage

I'm making an addon that has to send my interrupt cooldown to the raid.
The problem is that whenever I send a message to the raid, I am the only one who receives it.
This is the code that sends the message:
C_ChatInfo.SendAddonMessage("KickRotation", string.format("%0.2f", remainingCd), "RAID")
This is the event handler:
frame:RegisterEvent("PLAYER_ENTERING_WORLD")
frame:RegisterEvent("CHAT_MSG_ADDON")
frame:SetScript("OnEvent", function(self, event, ...)
    local prefix, msg, msgType, sender = ...
    if event == "CHAT_MSG_ADDON" then
        if prefix == "KickRotation" then
            print("[KickRotation] " .. tostring(sender) .. " can interrupt in: " .. msg)
        end
    end
    if event == "PLAYER_ENTERING_WORLD" then
        print("[KickRotation] v0.1 by Galfrad")
    end
end)
Basically, when the message is sent, it is printed only to me.
Network messages are relayed to the recipient channel (in this case, the raid group) by the server. The reason you see the message locally while other people do not is that the message is echoed on the sender's local system to reduce repeated data transmission.
The server, however, only accepts and relays messages whose prefixes are registered with it.
Therefore, you must first register your add-on message prefix with the server so that the other players in the requested channel are able to receive it.
First, register the prefix with the name you have already been using (but be sure to call the registration method only once per client):
local success = C_ChatInfo.RegisterAddonMessagePrefix("KickRotation") -- Addon name.
Next, check whether your prefix was accepted and registered with the server. If success is false, you may want to show a proper warning or notification to the user: failure means that either the server has disabled add-on messages or you have reached the limit of add-on message registrations.
Finally, send your message again, checking whether it failed:
if not C_ChatInfo.SendAddonMessage("KickRotation", string.format("%0.2f", remainingCd), "RAID") then
    print("[KickRotation] Failed to send add-on message, message rejected by the server.")
end

Does Firebase always guarantee added events in order?

I am developing a messenger iOS app based on the Firebase Realtime Database.
I want all messages to be ordered by timestamp.
Consider the scenario below.
There are 3 clients: A, B, and C.
1)
All clients register the figure-1 listener to receive messages from others.
<figure-1>
ref.queryOrdered(byChild: "timestamp").queryStarting(atValue: startTime)
    .observe(.childAdded, with: { snapshot in
        // ...
        // do work for the messages: print, save to storage, etc.
        // ...
        // save startTime to storage for the next open.
        startTime = max(timeOfSnapshot, startTime)
        saveToStorage(startTime)
    })
2)
Client A writes message 1 to the server with ServerValue.timestamp().
Client B writes message 2 to the server with ServerValue.timestamp().
Client C writes message 3 to the server with ServerValue.timestamp().
They sent the messages at almost exactly the same moment.
All clients have a good wifi connection.
So, finally, the server data is saved as in figure-2:
<figure-2>
text : "Message 1", timestamp : 100000001
text : "Message 2", timestamp : 100000002
text : "Message 3", timestamp : 100000003
As my listener code shows, I keep messages in storage, together with the next listening timestamp, to avoid downloading duplicate messages.
In this case, does Firebase always guarantee to trigger the callback in the order below?
Message 1
Message 2
Message 3
If this is not guaranteed, my strategy is absolutely wrong.
For example, suppose some client received messages like this:
Message 3 // the highest timestamp.
// app crash or out of storage
Message 1
Message 2
The client does not have a chance to get messages 1 and 2 anymore.
I think that if some nodes already exist, Firebase will trigger the events in order for those, because that is the role of the queryOrdered functionality.
But what will happen when there are no nodes before the listener is registered, and new nodes are added afterwards?
I suppose Firebase might send 3 packets to the clients. (No matter how quickly a message arrives, Firebase has to send it out as soon as it arrives.)
Packet1 for message1
Packet2 for message2
Packet3 for message3
Client A fails to receive packets 1 and 2.
Client A successfully receives packet 3.
Firebase re-sends packets 1 and 2.
Client A successfully receives packets 1 and 2.
Eventually, all the data is consistent, but the ordering is corrupted.
Does Firebase guarantee that the events occur in order?
I have searched Stack Overflow and Google and read the official documents many times, but I could not find a clear answer.
I have spent almost a week on this. Please give me a piece of advice.
The order in which the data for a query is returned is consistent, and determined by the server. So all clients are guaranteed to get the results in the same order.
For new data that is sent to the database after the listeners are attached, all remote clients will receive it in the same order. The local client will see events for its own write operations right away, though, before the data even reaches the database server.
For figure 2 it is actually quite simple: each node has a unique timestamp, so they will be returned in the order of that timestamp. But even if they had the same timestamp, they would be returned in the same order (timestamp first, then key) for each client.

Simple chat system over websockets with reconnection feature

I have seen many examples of chat room systems over websockets implemented with Erlang and Cowboy.
Most of the examples I have seen use gproc. In practice, each websocket handler registers itself with gproc and then broadcasts/receives messages through it.
Since a user could close the webpage by accident, I am thinking about attaching to the websocket handler a gen_fsm which actually broadcasts/receives all the messages through gproc. This way the gen_fsm could switch from a "connected" state to a "disconnected" state whenever the user exits, and still buffer all the messages. After a while, if the user is not back online, the gen_fsm will terminate.
Is this a good solution? How can I make the new websocket handler recover the gen_fsm process? Should I register the gen_fsm using the user name, or is there a better solution?
What I do is the following:
When a user connects to the site, I spawn a gen_server representing the user. The gen_server then registers itself in gproc as {n, l, {user, UserName}}. (It can also register properties like {p, l, {chat, ChannelID}} to listen to chat channels; see gproc pub/sub.)
Now, the user's websocket connection starts the cowboy handler (I use Bullet). The handler asks gproc for the pid() of the user's gen_server and registers itself as a receiver of messages. From then on, when the user gen_server receives messages, it forwards them to the websocket handler.
When the websocket connection ends, the handler unregisters from the user gen_server, so the user gen_server will keep messages until the next connection, or until the next timeout. At the timeout, you can simply terminate the server (messages will be lost, but that is OK).
See the following (not tested):
-module(user_chat).

-behaviour(gen_server).

-record(state, {mailbox, receiver=undefined}).

%% API
-export([start_link/1, set_receiver/1, unset_receiver/1]).
%% gen_server callbacks
-export([init/1, handle_call/3, handle_cast/2, handle_info/2]).

%% API
start_link(UserID) ->
    gen_server:start_link(?MODULE, [UserID], []).

set_receiver(UserID) ->
    set_receiver(UserID, self()).

unset_receiver(UserID) ->
    %% Just set the receiver to undefined
    set_receiver(UserID, undefined).

set_receiver(UserID, ReceiverPid) ->
    %% Look the user process up under the same key it registered with
    UserPid = gproc:where({n, l, {user, UserID}}),
    gen_server:call(UserPid, {set_receiver, ReceiverPid}).

%% Gen server internals
init([UserID]) ->
    gproc:reg({n, l, {user, UserID}}),
    {ok, #state{mailbox=[]}}.

handle_call({set_receiver, ReceiverPid}, _From, #state{mailbox=MB}=State) ->
    %% Flush against the *new* receiver, not the old one
    NewState = State#state{receiver=ReceiverPid},
    NewMB = check_send(MB, NewState),
    {reply, ok, NewState#state{mailbox=NewMB}}.

handle_cast(_Msg, State) ->
    {noreply, State}.

handle_info({chat_msg, Message}, #state{mailbox=MB}=State) ->
    NewMB = check_send([Message|MB], State),
    {noreply, State#state{mailbox=NewMB}}.

%% Mailbox empty
check_send([], _) -> [];
%% Receiver undefined, keep messages
check_send(Mailbox, #state{receiver=undefined}) -> Mailbox;
%% Receiver is a pid
check_send(Mailbox, #state{receiver=Receiver}) when is_pid(Receiver) ->
    %% Send all messages, oldest first (the mailbox is kept newest-first)
    Receiver ! {chat_messages, lists:reverse(Mailbox)},
    %% Then return an empty mailbox
    [].
With the solution you propose, you may end up with many pending processes, and you will have to write a "process cleaner" for all the users that never come back. It also will not survive a shutdown of the chat server VM: all messages stored in living FSMs will vanish if the node goes down.
I think a better way would be to store all messages in a database like mnesia, with sender, receiver, expiration date, and so on, check for stored messages at connection time, and have a message-cleaner process destroy expired messages from time to time.
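A minimal sketch of that mnesia-backed approach (the table, record, and function names here are illustrative, not from any existing module):

-record(chat_msg, {key, to, from, body, expires_at}).

init_store() ->
    mnesia:create_table(chat_msg,
        [{disc_copies, [node()]},   %% survives a VM restart
         {attributes, record_info(fields, chat_msg)},
         {index, [#chat_msg.to]}]).

store(To, From, Body, TTLSeconds) ->
    Now = erlang:system_time(second),
    mnesia:dirty_write(#chat_msg{key = {Now, make_ref()},
                                 to = To, from = From, body = Body,
                                 expires_at = Now + TTLSeconds}).

%% On (re)connection: fetch everything still addressed to the user.
fetch(To) ->
    mnesia:dirty_index_read(chat_msg, To, #chat_msg.to).

%% Periodic cleaner: drop expired messages.
clean() ->
    Now = erlang:system_time(second),
    [mnesia:dirty_delete_object(M)
     || M <- mnesia:dirty_match_object(#chat_msg{_ = '_'}),
        M#chat_msg.expires_at =< Now].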

ejabberd online status when user loses connection

I have ejabberd set up as the XMPP server between mobile apps, i.e. custom iPhone and Android apps.
But I've seemingly run into a limitation of the way ejabberd handles online statuses.
Scenario:
User A is messaging User B via their mobiles.
User B loses all connectivity, so the client can't disconnect from the server.
ejabberd still lists User B as online.
Since ejabberd assumes User B is still online, any message from User A gets passed on to the dead connection.
So User B won't get the message, nor does it get saved as an offline message, since ejabberd assumes the user is online.
Message lost.
Until ejabberd realises that the connection is stale, it treats the user as online.
Throw in data connection changes (wifi to 3G to 4G to ...) and you'll find this happening quite a lot.
mod_ping:
I tried to implement mod_ping on a 10 second interval.
https://www.process-one.net/docs/ejabberd/guide_en.html#modping
But as the documentation states, the ping will wait 32 seconds for a response before disconnecting the user.
This means there is a 42 second window in which the user can lose messages.
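For reference, a hedged sketch of the relevant mod_ping options (option names as in more recent ejabberd releases; in older versions the 32 second ack timeout is not configurable, so treat ping_ack_timeout as version-dependent):

mod_ping:
  send_pings: true
  ping_interval: 10     ## seconds between server pings
  ping_ack_timeout: 10  ## how long to wait for the pong; newer versions only
  timeout_action: kill  ## drop the stale connection on timeout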
Ideal Solution:
Even if the ping wait time could be reduced, it still wouldn't be a perfect solution.
Is there a way that ejabberd can wait for an acknowledgement from the client before discarding the message, and save it offline if no response arrives?
Is it possible to write a hook to solve this problem?
Or is there a simple setting I've missed somewhere?
FYI: I am not using BOSH.
Here is the mod I wrote that fixes my problem.
To make it work you'll need receipts to be activated client side, and the client should be able to handle duplicate messages.
Firstly, I created a table called confirm_delivery. I save every 'chat' message to that table and set a 10 second timer; if I receive a confirmation back, I delete the table entry.
If I don't get a confirmation back, I save the message manually to the offline_msg table and try to resend it (this might be over the top, but that's for you to decide), and then I delete it from the confirm_delivery table.
I've chopped out all the code I perceive as unnecessary, so I hope this will still compile.
Hope this is of help to other ejabberd devs out there!
https://github.com/johanvorster/ejabberd_confirm_delivery.git
%% name of module must match file name
-module(mod_confirm_delivery).
-author("Johan Vorster").

%% Every ejabberd module implements the gen_mod behaviour.
%% The gen_mod behaviour requires two functions: start/2 and stop/1.
-behaviour(gen_mod).

%% public methods for this module
-export([start/2, stop/1, send_packet/3, receive_packet/4,
         get_session/5, set_offline_message/5]).

%% included for writing to the ejabberd log file
-include("ejabberd.hrl").

-record(session, {sid, usr, us, priority, info}).
-record(offline_msg, {us, timestamp, expire, from, to, packet}).
-record(confirm_delivery, {messageid, timerref}).

start(_Host, _Opt) ->
    ?INFO_MSG("mod_confirm_delivery loading", []),
    mnesia:create_table(confirm_delivery,
        [{attributes, record_info(fields, confirm_delivery)}]),
    mnesia:clear_table(confirm_delivery),
    ?INFO_MSG("created timer ref table", []),
    ?INFO_MSG("start user_send_packet hook", []),
    ejabberd_hooks:add(user_send_packet, _Host, ?MODULE, send_packet, 50),
    ?INFO_MSG("start user_receive_packet hook", []),
    ejabberd_hooks:add(user_receive_packet, _Host, ?MODULE, receive_packet, 50).

stop(_Host) ->
    ?INFO_MSG("stopping mod_confirm_delivery", []),
    ejabberd_hooks:delete(user_send_packet, _Host, ?MODULE, send_packet, 50),
    ejabberd_hooks:delete(user_receive_packet, _Host, ?MODULE, receive_packet, 50).

send_packet(From, To, Packet) ->
    ?INFO_MSG("send_packet FromJID ~p ToJID ~p Packet ~p~n", [From, To, Packet]),
    Type = xml:get_tag_attr_s("type", Packet),
    ?INFO_MSG("Message Type ~p~n", [Type]),
    Body = xml:get_path_s(Packet, [{elem, "body"}, cdata]),
    ?INFO_MSG("Message Body ~p~n", [Body]),
    MessageId = xml:get_tag_attr_s("id", Packet),
    ?INFO_MSG("send_packet MessageId ~p~n", [MessageId]),
    LUser = element(2, To),
    ?INFO_MSG("send_packet LUser ~p~n", [LUser]),
    LServer = element(3, To),
    ?INFO_MSG("send_packet LServer ~p~n", [LServer]),
    Sessions = mnesia:dirty_index_read(session, {LUser, LServer}, #session.us),
    ?INFO_MSG("Session: ~p~n", [Sessions]),
    case Type =:= "chat" andalso Body =/= [] andalso Sessions =/= [] of
        true ->
            {ok, Ref} = timer:apply_after(10000, mod_confirm_delivery, get_session,
                                          [LUser, LServer, From, To, Packet]),
            ?INFO_MSG("Saving To ~p Ref ~p~n", [MessageId, Ref]),
            F = fun() ->
                mnesia:write(#confirm_delivery{messageid=MessageId, timerref=Ref})
            end,
            mnesia:transaction(F);
        _ ->
            ok
    end.

receive_packet(_JID, From, To, Packet) ->
    ?INFO_MSG("receive_packet JID: ~p From: ~p To: ~p Packet: ~p~n", [_JID, From, To, Packet]),
    Received = xml:get_subtag(Packet, "received"),
    ?INFO_MSG("receive_packet Received Tag ~p~n", [Received]),
    if Received =/= false andalso Received =/= [] ->
            MessageId = xml:get_tag_attr_s("id", Received),
            ?INFO_MSG("receive_packet MessageId ~p~n", [MessageId]);
       true ->
            MessageId = []
    end,
    if MessageId =/= [] ->
            Record = mnesia:dirty_read(confirm_delivery, MessageId),
            ?INFO_MSG("receive_packet Record: ~p~n", [Record]);
       true ->
            Record = []
    end,
    if Record =/= [] ->
            [R] = Record,
            ?INFO_MSG("receive_packet Record Elements ~p~n", [R]),
            Ref = element(3, R),
            ?INFO_MSG("receive_packet Cancel Timer ~p~n", [Ref]),
            timer:cancel(Ref),
            mnesia:dirty_delete(confirm_delivery, MessageId),
            ?INFO_MSG("confirm_delivery clean up", []);
       true ->
            ok
    end.

get_session(User, Server, From, To, Packet) ->
    ?INFO_MSG("get_session User: ~p Server: ~p From: ~p To ~p Packet ~p~n", [User, Server, From, To, Packet]),
    ejabberd_router:route(From, To, Packet),
    ?INFO_MSG("Resend message", []),
    set_offline_message(User, Server, From, To, Packet),
    ?INFO_MSG("Set offline message", []),
    MessageId = xml:get_tag_attr_s("id", Packet),
    ?INFO_MSG("get_session MessageId ~p~n", [MessageId]),
    case MessageId =/= [] of
        true ->
            mnesia:dirty_delete(confirm_delivery, MessageId),
            ?INFO_MSG("confirm_delivery clean up", []);
        _ ->
            ok
    end.

set_offline_message(User, Server, From, To, Packet) ->
    ?INFO_MSG("set_offline_message User: ~p Server: ~p From: ~p To ~p Packet ~p~n", [User, Server, From, To, Packet]),
    F = fun() ->
        mnesia:write(#offline_msg{us = {User, Server}, timestamp = now(), expire = "never",
                                  from = From, to = To, packet = Packet})
    end,
    mnesia:transaction(F).
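Once compiled and on ejabberd's code path, the module would be enabled like any other (a hedged sketch for YAML-style configs; the repository above has the actual install steps):

modules:
  mod_confirm_delivery: {}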
This is a well-known limitation of TCP connections. You need to introduce some acknowledgement functionality.
One option is XEP-0184: a message may carry a receipt request, and when the message is delivered, a receipt goes back to the sender.
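For illustration, the XEP-0184 exchange looks roughly like this (stanza shapes follow the XEP; the JIDs are made up):

<!-- sender asks for a delivery receipt -->
<message from='usera@example.com/phone' to='userb@example.com' id='msg-1'>
  <body>Hello</body>
  <request xmlns='urn:xmpp:receipts'/>
</message>

<!-- recipient acknowledges delivery -->
<message from='userb@example.com/phone' to='usera@example.com'>
  <received xmlns='urn:xmpp:receipts' id='msg-1'/>
</message>

The <received/> element is also the subtag that the mod_confirm_delivery module above matches on.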
Another option is XEP-0198. This is stream management, which acknowledges stanzas.
You can also implement acknowledgements entirely in the application layer and send the confirmations as messages from recipient to sender.
Act accordingly when an acknowledgement is not delivered.
Mind that the Sender -> Server connection may also be severed in that way.
I am not aware of implementations of those XEPs in ejabberd; I have implemented them on my own, depending on project requirements.
ejabberd supports stream management by default in its latest versions. It is implemented in most mobile libraries, such as Smack for Android and XMPPFramework for iOS.
This is the state of the art in the XMPP specification at the moment.
Implementing XEP-0198 on ejabberd yourself is quite involved.
Erlang Solutions (I work for them) has an XEP-0184 module for ejabberd, with enhanced functionality, that solves this problem. It does the buffering and validation on the server side: as long as the client sends messages carrying a receipt request, a receipt goes back to the sender when each message is delivered.
The module validates receipts to see whether a message has been received. If it hasn't been within a timeout, it gets saved as an offline message.
I think the better way is this: if a message has not been received, mark the user offline, store the message in the offline message table, and use a push service configured for offline messages.
A push will then be sent, and if there are more messages they will be stored as offline messages too. To detect on the server that a message has not been received, you can use this: https://github.com/Mingism/ejabberd-stanza-ack.
I think Facebook works the same way: when a message isn't delivered, it marks the user offline until he becomes online again.
ejabberd supports stream management by default in its latest version.
After setting the stream management config in ejabberd_c2s, you should set some config in your client as well.
Please see this post for the client-side config:
https://community.igniterealtime.org/thread/55715
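On the server side, a hedged sketch of the listener options involved (stream_management and resend_on_timeout as they appeared in ejabberd listener configs around versions 14-16; check your version's guide before relying on them):

listen:
  -
    port: 5222
    module: ejabberd_c2s
    stream_management: true
    resend_on_timeout: true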

How to keep track of a process per browser window and access it at each event in Nitrogen?

In Nitrogen, the Erlang web framework, I have the following problem. I have a process that takes care of sending and receiving messages to/from another process that acts as a hub. This process acts as the comet process that receives the messages and updates the page.
The problem is that when the user presses a button, I get a call to event/1. How do I get hold of that Pid inside an event?
The code that initiates the communication and sets up the receiving part looks like this. First I have an event which starts the client process by calling wf:comet:
event(start_chat) ->
    Client = wf:comet(fun() -> chat_client() end);
The code for the client process is the following: it gets and joins a room at the beginning, and then goes into a loop sending and receiving messages to/from the room:
chat_client() ->
    Room = room_provider:get_room(),
    room:join(Room),
    chat_client(Room).

chat_client(Room) ->
    receive
        {send_message, Message} ->
            room:send_message(Room, Message);
        {message, From, Message} ->
            wf:insert_bottom(messages, [#p{}, #span{ text=Message }]),
            wf:comet_flush()
    end,
    chat_client(Room).
Now, here's the problem. I have another event, send_message:
event(send_message) ->
    Message = wf:q(message),
    ClientPid ! {send_message, Message}.
except that ClientPid is not defined there, and I can't see how to get hold of it. Any ideas?
The related thread on the Nitrogen mailing list: http://groups.google.com/group/nitrogenweb/browse_thread/thread/c6d9927467e2a51a
Nitrogen provides a key-value storage per page instance called state. From the documentation:
Retrieve a page state value stored under the specified key. Page State is different from Session State in that Page State is scoped to a series of requests by one user to one Nitrogen Page:
wf:state(Key) -> Value
Store a page state variable for the current user. Page State is different from Session State in that Page State is scoped to a series of requests by one user to one Nitrogen Page:
wf:state(Key, Value) -> ok
Clear a user's page state:
wf:clear_state() -> ok
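Applied to this problem, a minimal sketch (assuming, as the question's code suggests, that wf:comet/1 returns the comet process's pid):

event(start_chat) ->
    Client = wf:comet(fun() -> chat_client() end),
    wf:state(chat_client_pid, Client);   %% remember the comet pid in page state

event(send_message) ->
    Message = wf:q(message),
    case wf:state(chat_client_pid) of
        undefined ->
            ok;                          %% chat not started yet
        ClientPid ->
            ClientPid ! {send_message, Message}
    end.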
Have an ets table which maps session ids to client Pids. Or, if Nitrogen provides any sort of session management, store the Pid as session data.
Everything that needs to be remembered needs a process. It looks like your room provider isn't one.
room:join(Room) needs to be room:join(Room, self()): the room needs to know what your comet process's pid is.
To send a message to a client, you first send the message to the room, and the room then sends a message to all clients in the room. But for that to work, every client joining the room needs to submit its comet pid, and the room needs to keep a list of all the pids in the room, as in the sketch below.
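A minimal sketch of such a room process (the function names follow the question's room module, but this is illustrative, not the actual implementation):

-module(room).
-export([start/0, join/2, send_message/2]).

start() ->
    spawn(fun() -> loop([]) end).

join(Room, Pid) ->
    Room ! {join, Pid}.

send_message(Room, Message) ->
    Room ! {message, self(), Message}.

loop(Members) ->
    receive
        {join, Pid} ->
            loop([Pid | Members]);
        {message, From, Message} ->
            %% broadcast to every comet process in the room
            [M ! {message, From, Message} || M <- Members],
            loop(Members)
    end.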
