Can a SCTP entity still send data once it has received a SHUTDOWN chunk? - sctp

I'm learning the SCTP protocol and I can't figure out this one thing.
Once the server has received a SHUTDOWN message from the client, is it allowed to send data back?
And what about ACK messages: can they still be sent/received, or must the server immediately respond with a SHUTDOWN ACK?
Thanks for your help!

The SHUTDOWN chunk is just an indication that the remote side would like to close the association. The local side can still transmit data it has previously received from its upper layer. In fact, the local side has to deliver everything it has received from the upper layer that has not yet been delivered to the remote end.
According to RFC 4960, Section 9.2:
Upon reception of the SHUTDOWN, the peer endpoint shall enter the SHUTDOWN-RECEIVED state, stop accepting new data from its SCTP user, and verify, by checking the Cumulative TSN Ack field of the chunk, that all its outstanding DATA chunks have been received by the SHUTDOWN sender.
...
If there are still outstanding DATA chunks left, the SHUTDOWN receiver MUST continue to follow normal data transmission procedures defined in Section 6, until all outstanding DATA chunks are acknowledged; however, the SHUTDOWN receiver MUST NOT accept new data from its SCTP user.
Once all outgoing data has been successfully delivered to the remote end, the endpoint is allowed to send a SHUTDOWN ACK:
If the receiver of the SHUTDOWN has no more outstanding DATA chunks, the SHUTDOWN receiver MUST send a SHUTDOWN ACK and start a T2-shutdown timer of its own, entering the SHUTDOWN-ACK-SENT state. If the timer expires, the endpoint must resend the SHUTDOWN ACK.
The same applies to SACK chunks. The local side can still receive SACKs confirming its outstanding data. The local side, however, should not need to send any new SACK chunks to the remote end, because the SHUTDOWN chunk is sent only after the remote end has successfully delivered all of its outgoing data.
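To make the sequencing concrete, here is a toy TypeScript sketch of the SHUTDOWN-RECEIVED handling described above. It is a simplification, not a real SCTP stack: TSN wraparound, retransmission, and the T2-shutdown timer are all omitted.

class Association {
  // States relevant to the shutdown sequence (simplified)
  state: "ESTABLISHED" | "SHUTDOWN_RECEIVED" | "SHUTDOWN_ACK_SENT" = "ESTABLISHED";
  outstanding = new Set<number>(); // TSNs of DATA chunks sent but not yet acked

  // Upper layer hands us data to send
  sendFromUser(tsn: number): void {
    if (this.state !== "ESTABLISHED") {
      throw new Error("MUST NOT accept new data from the SCTP user after SHUTDOWN");
    }
    this.outstanding.add(tsn);
  }

  // Peer sent SHUTDOWN (it carries a Cumulative TSN Ack)
  onShutdown(cumulativeTsnAck: number): void {
    this.state = "SHUTDOWN_RECEIVED";
    this.ackUpTo(cumulativeTsnAck);
    this.maybeSendShutdownAck();
  }

  // SACKs for our outstanding DATA are still processed in SHUTDOWN-RECEIVED
  onSack(cumulativeTsnAck: number): void {
    this.ackUpTo(cumulativeTsnAck);
    this.maybeSendShutdownAck();
  }

  private ackUpTo(tsn: number): void {
    for (const t of this.outstanding) if (t <= tsn) this.outstanding.delete(t);
  }

  private maybeSendShutdownAck(): void {
    // Only once every outstanding DATA chunk is acknowledged may we send
    // SHUTDOWN ACK and start the T2-shutdown timer (timer omitted here).
    if (this.state === "SHUTDOWN_RECEIVED" && this.outstanding.size === 0) {
      this.state = "SHUTDOWN_ACK_SENT";
    }
  }
}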
Hopefully that helps.

Related

What happens when a message is published with QoS=1?

I would like to have a better understanding of the behavior of this library.
Specifically: let's say I have an open connection (over WSS, if this changes anything) with an MQTT server.
I publish a message with QoS=1.
My understanding is that mqtt.js waits for a PUBACK message. After the ack has been received, the done callback is called and the flow is ended.
What is not clear to me is the low-level stuff: how much "time" does the library wait for the ack? What happens if the ack doesn't come? Is the message resent? Is the connection closed/reopened? Something else?
Is this behavior tunable?
Prior to answering your specific question I feel it's worth outlining what the protocol requires (I'll highlight the key term). The MQTT 3.1.1 spec says:
When a Client reconnects with CleanSession set to 0, both the Client and Server MUST re-send any unacknowledged PUBLISH Packets (where QoS > 0) and PUBREL Packets using their original Packet Identifiers [MQTT-4.4.0-1]. This is the only circumstance where a Client or Server is REQUIRED to redeliver messages.
The v5 spec tightens this:
When a Client reconnects with Clean Start set to 0 and a session is present, both the Client and Server MUST resend any unacknowledged PUBLISH packets (where QoS > 0) and PUBREL packets using their original Packet Identifiers. This is the only circumstance where a Client or Server is REQUIRED to resend messages. Clients and Servers MUST NOT resend messages at any other time [MQTT-4.4.0-1].
So the only time the spec requires that the publish be resent is when the client reconnects. v3.1.1 does not prohibit resending at other times but I would not recommend doing this (see this answer for more info).
Looking specifically at mqtt.js I have scanned through the code and the only resend mechanism I can see is when the connection is established (backed up by this issue). So to answer your specific questions:
how much "time" does the library wait for the ack?
There is no limit; the callback is stored and called when the flow completes (for example).
What happens if the ack doesn't come? Is the message resent? Is the connection closed/reopened? Something else?
Nothing. However, in reality, the use of TCP/IP means that if a message is not delivered, the connection should drop (and if the broker receives the message but is unable to process it, then it should really drop the connection).
Is this behavior tunable?
I guess you could implement a timed resend, but this is unlikely to be a good idea (and doing so would breach the v5 spec). A better approach might be to drop the connection if a message is not acknowledged within a set time frame; however, there really should be no need to do this.
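For illustration, here is a minimal sketch of that "drop the connection on timeout" approach with mqtt.js. The broker URL, client id, topic, and timeout are placeholders, and the exact reconnect behaviour may vary between mqtt.js versions, so treat this as a sketch rather than a recipe.

import mqtt from "mqtt";

// Placeholders: broker URL and clientId are invented for this sketch.
const client = mqtt.connect("wss://broker.example.com:8884/mqtt", {
  clean: false,          // keep the session so unacked publishes are resent on reconnect
  clientId: "sensor-42",
});

function publishWithWatchdog(topic: string, payload: string, timeoutMs = 30_000) {
  // If no PUBACK arrives within timeoutMs, force-close the connection and
  // reconnect; on reconnect mqtt.js resends the unacknowledged PUBLISH
  // (the resend-on-connect mechanism discussed above).
  const watchdog = setTimeout(() => client.end(true, () => client.reconnect()), timeoutMs);
  client.publish(topic, payload, { qos: 1 }, (err) => {
    clearTimeout(watchdog); // PUBACK arrived (or the publish errored out)
    if (err) console.error("publish failed:", err);
  });
}

publishWithWatchdog("telemetry/temp", "21.5");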

How does ejabberd handle message ordering and delivery?

As per RFC 6120 (https://datatracker.ietf.org/doc/rfc6120/?include_text=1), Section 10.1, In-Order Processing:
How is Ordered Message Delivery ensured across all items in the roster?
Is it done on the server side or the client side? Whichever side it is, do newer messages wait on older messages with a timeout?
Does it use an incremental sequence number for ordering guarantees?
On client re-connect, how does the client know what to pull from the server? Does the client send the last msgIds of all items in the roster, or does the server keep the QoS data and client state for each device?
First of all, since XMPP uses TCP as its transport protocol, the server receives data in the same order the client sends it.
As per TCP docs:
TCP guarantees delivery of data and also guarantees that packets will be delivered in the same order in which they were sent
ejabberd is an XMPP server; the raw data received over TCP must be compliant with the XMPP protocol, and the server verifies that it is.
In the XMPP protocol, a client can send messages only after it has completed session initiation, resource binding, authentication, etc.
These messages are processed in the order the client sends them and routed to their recipients. If a recipient is offline, the server pushes them to the database and pops them in the same order for later delivery.
So the ordering guarantees are mostly ensured by the TCP network stack.
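As a rough illustration of that push/pop ordering, here is a toy in-memory sketch; ejabberd's actual implementation (Erlang, with Mnesia or SQL storage) is different, this only shows the FIFO property.

class OfflineStore {
  private queues = new Map<string, string[]>();

  // Stanzas arrive in TCP order and are appended in that same order
  push(recipient: string, stanza: string): void {
    const q = this.queues.get(recipient) ?? [];
    q.push(stanza);
    this.queues.set(recipient, q);
  }

  // On reconnect, stanzas are delivered in the order they were stored
  drain(recipient: string): string[] {
    const q = this.queues.get(recipient) ?? [];
    this.queues.delete(recipient);
    return q;
  }
}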

Leaving Photon room immediately after RPC

How does Photon handle a player leaving a room immediately after issuing an RPC? Does the RPC reach the targeted players?
RPCs are sent reliably, independent of the transport protocol used.
RPCs are RaiseEvent operation calls under the hood.
The client sends a RaiseEvent operation request to the relay server (Game Server), and the relay server then sends a custom event to the target active actors, if any.
Since this operation request is sent reliably, the client can retry sending it if no ack is received from the server after some time. However, if the client leaves the room, it will switch servers (disconnecting from the Game Server and connecting to the Master Server), so the retry attempts may be skipped in this case.
If the RaiseEvent operation request successfully reaches the server, then delivering the RPC to the targets is the responsibility of the server alone.
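To illustrate the retry-until-ack idea in the abstract, here is a generic sketch. This is NOT the Photon API; sendOnce and its semantics are invented for the sketch.

// Generic "retry until acked" loop, as described above.
async function sendReliable(
  sendOnce: (payload: string) => Promise<boolean>, // resolves true when the server acks
  payload: string,
  maxRetries = 5,
  retryDelayMs = 200,
): Promise<boolean> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    if (await sendOnce(payload)) return true; // ack received, done
    await new Promise((resolve) => setTimeout(resolve, retryDelayMs));
  }
  // If the client leaves the room, the connection to the Game Server is torn
  // down and the remaining retries are skipped, so the event may be lost.
  return false;
}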

Rebooting server with MQTT service

Imagine an MQTT broker with remote clients connected, which continuously send QoS 2 data - the standard situation. Clients are configured with "cleansession false" - they have a queue to send messages in case of a connection failure.
On the server, local clients subscribe to topics to receive messages.
Server startup sequence:
1. Launch the MQTT broker
2. Start the local clients
3. Remote clients connect and deliver the data from their queues
What if step 3 occurs before step 2? Are there standard solutions? How do we avoid losing the first messages?
Assuming you are talking about later reboots of the broker, not the very first time the system is started up: the broker should have stored the clients' persistent subscription state to disk before it was shut down and restored it when it restarted. This means that it should queue messages for the local clients.
Also, you can always use a firewall to stop the remote clients from connecting until all the local clients have started; this would solve the very first startup issue as well.
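For example, a local client could keep a persistent session so the broker queues QoS 2 messages published before the client is (re)started. A minimal sketch, assuming mqtt.js for the local clients; the URL, client id, and topic are placeholders:

import mqtt from "mqtt";

const client = mqtt.connect("mqtt://localhost:1883", {
  clean: false,                 // persistent session: the broker keeps the subscription
  clientId: "local-consumer-1", // must be stable across restarts
});

client.on("connect", () => {
  // With clean=false the broker remembers this subscription and queues
  // matching QoS 2 messages while the client is offline.
  client.subscribe("sensors/#", { qos: 2 });
});

client.on("message", (topic, payload) => {
  console.log(topic, payload.toString());
});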

MQTT QoS2 why use 4 packets?

I think we could just use PUBLISH and PUBREC to meet QoS 2.
ClientA -> Server: PUBLISH packet
Server -> ClientA: PUBREC packet
When the server receives the PUBLISH packet, it saves the message to the DB and then publishes it to other clients, e.g. ClientB.
Even if the server receives the same PUBLISH packet from ClientA twice, it checks the DB, knows this is a repeated message, and does not publish it to ClientB again.
So I don't think we need 4 packets.
Is my logic correct?
The protocol uses two exchanges of packets in order to provide the exactly-once semantics of QoS 2 messaging.
C --- PUBLISH --> S
   *1
C <-- PUBREC  --- S
   *2
C --- PUBREL  --> S
   *3
C <-- PUBCOMP --- S
   *4
When the server receives the PUBLISH it stores the ID and forwards the message on. When the server receives the PUBREL it can then delete the ID.
If the connection breaks at *1, the client does not know if the server received the message or not. It resends the PUBLISH (containing the full message payload). If the server had already received the message it just needs to respond with the PUBREC.
If the connection breaks at *2, the client may or may not have received the PUBREC. If it didn't, it will resend the PUBLISH. Otherwise it will send the PUBREL.
If the connection breaks at *3, the client does not know if the server received the message or not. It resends the PUBREL - which does not contain full message payload.
If the connection breaks at *4 and the client hasn't received the PUBCOMP it can resend the PUBREL.
There are two observations for why the two exchanges are needed:
the server is not required to remember every message it has ever seen: there is a well-defined period during which it must store the message ID. The two exchanges allow both sides to be certain that the message has been delivered exactly once.
the client does not need to resend the PUBLISH multiple times (unless the connection is interrupted at *1). Given that the protocol is intended to minimise network traffic, this is an important feature.
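A toy sketch of the broker-side bookkeeping this implies (not any real broker's code): the key point is that a packet ID only has to be remembered between PUBLISH and PUBREL, not forever.

const inFlight = new Set<number>(); // packet IDs received but not yet released

function onPublish(packetId: number, payload: string): "PUBREC" {
  if (!inFlight.has(packetId)) {
    inFlight.add(packetId);        // first time we see this ID...
    forwardToSubscribers(payload); // ...deliver exactly once
  }
  // A duplicate PUBLISH (client retry after a broken connection) finds the
  // ID already stored and is NOT delivered again.
  return "PUBREC";
}

function onPubrel(packetId: number): "PUBCOMP" {
  // PUBREL means the client will never retry this PUBLISH, so the ID can
  // safely be forgotten.
  inFlight.delete(packetId);
  return "PUBCOMP";
}

function forwardToSubscribers(payload: string): void {
  // stand-in for delivery to ClientB etc.
  console.log("delivering:", payload);
}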
