How does ejabberd handle message ordering and delivery?

Per RFC 6120, Section 10.1 "In-Order Processing" (https://datatracker.ietf.org/doc/rfc6120/):
How is ordered message delivery ensured across all items in the roster?
Is it done on the server side or the client side? On either side, are newer messages held back behind older ones, with a timeout?
Does it use an incremental sequence number for ordering guarantees?
On client reconnect, how does the client know what to pull from the server? Does the client send the last message IDs of all items in its roster, or does the server keep the QoS data and client state for each device?

First of all, since XMPP uses TCP as its transport protocol, the server is guaranteed to receive the data in the same order the client sends it.
As the TCP documentation puts it:
TCP guarantees delivery of data and also guarantees that packets will be delivered in the same order in which they were sent
ejabberd is an XMPP server: the raw data received over TCP must be compliant with the XMPP protocol, and that is what the server verifies.
In the XMPP protocol, a client can send messages only after it has completed session initiation, resource binding, authentication, and so on.
These messages are processed in the order the client sends them and routed to their recipients. If a recipient is offline, the messages are pushed to, and later popped from, the database in that same order for later delivery.
So the ordering guarantee is mostly provided by the TCP network stack.
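To make the offline part concrete, here is a toy Python sketch of a first-in, first-out offline spool. It only illustrates the ordering principle and is not ejabberd's actual implementation (ejabberd's mod_offline keeps offline messages in Mnesia or SQL):

    from collections import defaultdict, deque

    # One FIFO queue per offline recipient (a toy stand-in for the database).
    offline_spool = defaultdict(deque)

    def route(recipient, stanza, is_online):
        if is_online:
            deliver(recipient, stanza)               # deliver in arrival order
        else:
            offline_spool[recipient].append(stanza)  # push in arrival order

    def flush_offline(recipient):
        # On reconnect, pop in the same order the messages were stored.
        queue = offline_spool[recipient]
        while queue:
            deliver(recipient, queue.popleft())

    def deliver(recipient, stanza):
        print(f"-> {recipient}: {stanza}")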

Related

What is the best way to implement application-level acknowledgement to the broker in MQTT?

I am using the ESP32 mqtt_client library. Whenever a valid MQTT message is received with QoS 1, the library sends a PUBACK automatically, without waiting for the application layer to validate the payload. There are two ways to overcome this:
Holding the PUBACK until the application layer has processed the message
Opening a new topic between client and broker to implement an application-level acknowledgement mechanism
Could you please suggest the best approach?
This is the right behavior, because the ack is at the protocol level. Holding the PUBACK (if the MQTT library you are using even allows that) is not the right thing to do, IMHO: you would break the contract at the protocol level, and if the application spends too much time processing the message without sending the PUBACK, the broker will not receive the PUBACK in time and will resend the message.
IMHO the best solution is to rely on a separate topic on which the client publishes an "ack at application level".
The PUBACK packet is sent without validation from the application level because that is how it is supposed to work. Quality of Service (QoS) in MQTT does not mean that the ack packet is sent once the data content of the packet has been validated; it is meant to acknowledge reception. That's all.
For what you are trying to do, you need to implement an application-level protocol that serves or discards packets after validating the data (and then notifies the sender with another packet).
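A minimal sketch of that approach with the Python paho-mqtt client (the topic names, broker address, and validation function are all made up; an ESP32 client would follow the same pattern):

    import json
    import paho.mqtt.client as mqtt

    DATA_TOPIC = "devices/sensor1/data"  # hypothetical data topic
    ACK_TOPIC = "devices/sensor1/ack"    # hypothetical application-level ack topic

    def payload_is_valid(payload):
        # Hypothetical application-level validation.
        try:
            json.loads(payload)
            return True
        except ValueError:
            return False

    def on_connect(client, userdata, flags, rc):
        client.subscribe(DATA_TOPIC, qos=1)

    def on_message(client, userdata, msg):
        # The protocol-level PUBACK has already been sent by the library here.
        status = "ok" if payload_is_valid(msg.payload) else "invalid"
        client.publish(ACK_TOPIC, status, qos=1)  # the application-level ack

    client = mqtt.Client()
    client.on_connect = on_connect
    client.on_message = on_message
    client.connect("broker.example.com", 1883)    # placeholder broker
    client.loop_forever()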

MQTT broker communication to MQTT client

I already have a Cumulocity client that communicates with the Cumulocity broker over MQTT.
What should I do in order to send data back from the MQTT broker in Cumulocity to the MQTT client? (Say the client sends some data and I want confirmation that the data was sent successfully.)
For some reason I couldn't find any info on this in the Cumulocity docs; they only cover the client side.
If you want confirmation from the server that it got your data, you should use the normal MQTT QoS: http://cumulocity.com/guides/mqtt/implementation/
If you want to send data from the platform to your device client in general, operations are what you are looking for. This is currently the only data you can subscribe to on Cumulocity MQTT:
http://cumulocity.com/guides/concepts/domain-model/#operations
You can check the Python example; it contains the subscription part:
http://cumulocity.com/guides/mqtt/hello-mqtt-python/
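As a rough sketch of that subscription part with paho-mqtt (the s/ds static-template topic comes from Cumulocity's SmartREST documentation; host, credentials, and device ID are placeholders):

    import paho.mqtt.client as mqtt

    def on_message(client, userdata, msg):
        # Operations arrive as SmartREST lines, e.g. "510,<serial>" for a restart.
        print(msg.topic, msg.payload.decode())

    client = mqtt.Client(client_id="my-device")            # placeholder device ID
    client.username_pw_set("tenant/username", "password")  # placeholder credentials
    client.on_message = on_message
    client.connect("mqtt.cumulocity.com", 1883)            # placeholder host
    client.subscribe("s/ds")  # operations pushed from the platform to the device
    client.loop_forever()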
You should connect to the broker with QoS 1 or above. This guarantees that the data reaches the broker at least once. The client will receive a PUBACK message once this happens. If connectivity is lost, the client is supposed to re-send the PUBLISH message with the duplicate flag set, and it should stop publishing once a PUBACK is received.
For more information about QoS, refer to the HiveMQ "MQTT Essentials" blog post on Quality of Service.
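To illustrate that handshake from the client side, a small paho-mqtt sketch (broker address and topic are placeholders); wait_for_publish() returns once the library has received the PUBACK:

    import paho.mqtt.client as mqtt

    client = mqtt.Client()
    client.connect("mqtt.example.com", 1883)  # placeholder broker
    client.loop_start()

    # QoS 1: the library re-sends with the DUP flag set until a PUBACK arrives.
    info = client.publish("measurements/temp", "23.5", qos=1)
    info.wait_for_publish()                   # blocks until the PUBACK is received
    print("PUBACK received, message id:", info.mid)

    client.loop_stop()
    client.disconnect()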

MQTT: Message queuing on the server side

I am using MQTT to implement a kind of email notification system. I am also planning to use it to trigger notifications inside the webapp. I am confused about whether MQTT stores data on the server side itself when we publish to the MQTT URL with a publisher ID in JSON format. The reason I am asking is that in my case MQTT stores only the last published message; if I send another one, the previous one disappears. I want to know: is queuing built into MQTT itself (as I assumed, since MQ stands for Message Queuing), or does it have to be implemented on the server/consumer side?
There is a common error on the Internet: MQTT stands for MQ Telemetry Transport, not Message Queueing Telemetry Transport. It was created by IBM (with Eurotech) and was part of the MQ product family at IBM.
MQTT has no queue. The broker receives a message on a topic and forwards it to all subscribers on that topic.
There are two main variations on this behaviour:
If the publisher sends a message with the "retain" flag set, the broker stores that message (and only that one). When a client subscribes to the topic, the broker immediately sends it this last stored message, the so-called "last known message".
If the subscriber connects to the broker with "clean session" set to false, the broker saves all its subscriptions, and saves messages for it only while the client is offline. It is like a queue, but not a true queue. With "clean session" false, if the client goes offline and publishers send messages to topics it is subscribed to, the broker stores those messages; when the client comes back online, it receives all the messages it missed.
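A short paho-mqtt sketch of both variations (broker address and topics are placeholders):

    import paho.mqtt.client as mqtt

    # Variation 1: a retained message, stored by the broker for future subscribers.
    pub = mqtt.Client()
    pub.connect("mqtt.example.com", 1883)  # placeholder broker
    pub.publish("status/device1", "online", qos=1, retain=True)
    pub.disconnect()

    # Variation 2: a persistent session. With clean_session=False and QoS >= 1,
    # the broker stores messages published while this client is offline and
    # delivers them when it reconnects.
    sub = mqtt.Client(client_id="notifier-1", clean_session=False)
    sub.on_message = lambda c, u, msg: print(msg.topic, msg.payload.decode())
    sub.connect("mqtt.example.com", 1883)
    sub.subscribe("status/#", qos=1)
    sub.loop_forever()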

Losing messages over a lost connection in XMPP

I went through this question, Lost messages over XMPP on device disconnected, but there is no answer.
When a connection is lost due to some network issue, the server is not able to recognize it and keeps sending messages to the disconnected receiver; those messages are permanently lost.
I have a workaround in which I ping the client from the server; when the client gets disconnected, the server recognizes it after 10 seconds and saves further messages in a queue, preventing them from being lost.
My question is: can 100% fail-safe message delivery be achieved some other way? I know Psi and many other XMPP clients are doing it.
On the iOS side I am using XMPPFramework.
One way is to employ Advanced Message Processing (AMP) on your server; another is to employ Message Delivery Receipts in your clients.
The former requires an AMP-enabled server implementation, and the initiating client has to be able to tell the server what kind of delivery status reports it wants (here: that an error be returned if delivery is not possible). Note that this is not bullet-proof anyway, as there is a window between the moment the target client loses its connectivity to the server and the moment the TCP stack on the server's machine detects this and tells the server about it. During this window, everything sent to the client is considered by the server to be sent OK, because there is no concept of message boundaries in the TCP layer: once the server process has managed to stuff a message stanza's XML into the system buffers of its TCP connection, it considers that stanza sent, and there is no way for it to know which bits of its stream did not reach the receiver once the TCP stack reports the connection lost.
The latter is bullet-proof, as the clients rely on explicit notifications of message reception. This does increase chattiness, though. In return, no server support for this feature is required; it is implemented solely in the clients.
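For reference, Message Delivery Receipts are specified in XEP-0184, and the exchange looks roughly like this on the wire (the JIDs and IDs here are made up):

    <!-- The sender asks for a receipt -->
    <message from='alice@example.com/phone' to='bob@example.com' id='msg-1'>
      <body>Did you get this?</body>
      <request xmlns='urn:xmpp:receipts'/>
    </message>

    <!-- The receiver confirms reception, echoing the original message id -->
    <message from='bob@example.com/laptop' to='alice@example.com/phone' id='ack-1'>
      <received xmlns='urn:xmpp:receipts' id='msg-1'/>
    </message>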
Go with XEP-0198 (Stream Management) and enjoy...
http://xmpp.org/extensions/xep-0198.html
For an XMPP client I'm working on, the following mechanism is used:
Add Reachability to the project, to detect quickly when the phone is having connectivity problems.
Use a modified version of XEP-0198, adding a confirmation sent by the server. The client sends a message, and the server confirms it with a receipt; later on, the receiving user also confirms it with a receipt. So for each message you send, you get two confirmations, one from the server and one from the client. This requires modifications on the server, of course.
When the app is not connected to the XMPP server, messages are queued.
When the app is logged in to the XMPP server again, it takes all messages that were not confirmed by the server and sends them again.
For this to work, you have to store the messages locally in the app with three possible states: "Not sent", "Confirmed by server", "Confirmed by user" (see the sketch below).
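A minimal, hypothetical Python sketch of that local store and the resend-on-reconnect step (the three state names come from the answer above; everything else is made up):

    from enum import Enum

    class State(Enum):
        NOT_SENT = "Not sent"
        CONFIRMED_BY_SERVER = "Confirmed by server"
        CONFIRMED_BY_USER = "Confirmed by user"

    # Local message store: message id -> (payload, state).
    store = {}

    def queue_message(msg_id, payload):
        store[msg_id] = (payload, State.NOT_SENT)

    def on_server_receipt(msg_id):
        payload, _ = store[msg_id]
        store[msg_id] = (payload, State.CONFIRMED_BY_SERVER)

    def on_user_receipt(msg_id):
        payload, _ = store[msg_id]
        store[msg_id] = (payload, State.CONFIRMED_BY_USER)

    def resend_unconfirmed(send):
        # On reconnect: resend everything the server never confirmed.
        for msg_id, (payload, state) in store.items():
            if state is State.NOT_SENT:
                send(msg_id, payload)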

MQTT: How to know which message a PUBACK is for?

I am trying to set up an MQTT server that persists the messages sent by clients into a local DB. Each message has a "successfully received" flag that I want to flip when the receiving client returns a PUBACK for that message (QoS = 1).
The question is:
When I publish a message, the server receives the PUBACK back from the receiving client correctly. However, the message ID is not the same as the one in the publishing client's packet. I know this is intended. But then I will not be able to find the right message in the DB to flip the flag. What if client A sends 2 messages with QoS = 1 to client B back to back? How does the server distinguish between the 2 PUBACKs coming back?
Maybe the MQTT client is doing something magical to map the message IDs that I am missing?
I am using MQTT.js and Paho MQTTv3, by the way.
MQTT PUBLISH messages with QoS 1 or 2 require a message ID as part of the packet. The message ID is used to identify which message a PUBACK (or PUBREC/PUBREL/PUBCOMP for QoS 2) refers to. This is an important feature, because you may have multiple messages "in flight" at once.
An important point that you may be missing is that clients are completely separate from one another. This means that message IDs are unique to a client (and to the direction of the message flow, broker to client or client to broker). The broker generates message IDs for messages originating from the broker, and the client generates message IDs for messages originating from the client; the IDs for each direction are independent, so there is no need for the broker and the client to keep track of what the other is doing.
If you want to keep track of which incoming messages have been sent to all subscribing clients, you will need to record which outgoing messages relate to each incoming message and only trigger your DB update once PUBACKs have been received for all of those outgoing messages. That will tell you which messages have successfully been sent to every client that was subscribed at the time the message was received.
If you just want a log of all messages that have been sent to the broker and can assume that the sending works OK, life is a lot easier: simply create a client on the broker host that subscribes to the "#" topic (or whatever you are interested in), then use the client's on_message() callback (or however your library manages it) to process each message and store it in the DB.
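On the client side, the Paho Python client exposes this mapping directly: publish() returns the message ID the library assigned, and on_publish() fires with the same ID once the matching PUBACK arrives. A small sketch (broker and topic are placeholders):

    import time
    import paho.mqtt.client as mqtt

    pending = {}  # message id -> payload, until the PUBACK comes back

    def on_publish(client, userdata, mid):
        # Called once the PUBACK for this message id has been received.
        payload = pending.pop(mid, None)
        print(f"PUBACK for mid={mid}, payload={payload!r}")  # flip the DB flag here

    client = mqtt.Client()
    client.on_publish = on_publish
    client.connect("mqtt.example.com", 1883)  # placeholder broker
    client.loop_start()

    for payload in ("first", "second"):  # two back-to-back QoS 1 messages
        info = client.publish("events/demo", payload, qos=1)
        pending[info.mid] = payload      # remember which mid maps to which payload

    time.sleep(2)                        # crude wait for both PUBACKs
    client.loop_stop()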
