I am building Comet chat with Erlang, using a single long-polling connection for message transport. But, as you know, a long-polling connection cannot stay open all the time: every time a new message arrives or the timeout is reached, the connection breaks and the client connects to the server again. If a message is sent before the connection has been re-established, keeping the integrity of the chat becomes a problem.
Also, if a user opens more than one Comet-chat window, the messages in all of them have to stay in sync, which means a single user can hold many long-polling connections. So it is hard to guarantee every message is delivered on time.
Should I build a message queue for every connection, or is there a better way to solve this?
The simplest way, it seems to me, is to have one process/message queue per user connected to the chat (even if that user has more than one chat window open). Then keep track of the timestamp of the last message in the chat window application, and on reconnect ask for all messages after that timestamp. The message queue process should keep messages only for a reasonable time span. In this scenario, reconnecting is entirely up to the client. In another scenario you could send some sort of heartbeat from the server, but that seems less reliable to me, and it does not solve disconnections that happen for reasons other than a timeout. There are many variants of server-side queuing: one queue per client, per user, per chat room, per ... A sketch of the per-user variant is below.
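A language-agnostic sketch of that per-user queue, written in Python for illustration (the retention window and message shape are my assumptions; in Erlang this would naturally be one process per user):

import time

RETENTION_SECONDS = 300  # assumed "reasonable time span" to keep history

class UserQueue:
    # One of these per connected user, shared by all of that user's windows.
    def __init__(self):
        self.messages = []  # list of (timestamp, message) pairs

    def push(self, message):
        now = time.time()
        self.messages.append((now, message))
        cutoff = now - RETENTION_SECONDS
        # drop anything older than the retention window
        self.messages = [(t, m) for t, m in self.messages if t >= cutoff]

    def since(self, last_seen):
        # each chat window calls this on reconnect with its last-seen timestamp
        return [(t, m) for t, m in self.messages if t > last_seen]

Each long-poll request carries the timestamp of the last message that window rendered; since() returns whatever arrived while the window was reconnecting.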
Is there any way for an MQTT client to know when its queue has been processed and it is "up to date" again?
I want to prevent editing of certain elements in the frontend until I am sure that I have received all queued changes after a reconnect.
Is that possible?
No, queued messages are not flagged in any way, but they will all be delivered as soon as the client connects.
You could set a flag when connecting that stops all UI updates for a period of time, giving the queued messages a chance to arrive, and then update the UI with the latest data.
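A minimal sketch of that idea in Python with paho-mqtt (1.x-style callbacks; the two-second settle window, the topic filter, and the apply_update hook are assumptions):

import threading
import paho.mqtt.client as mqtt

SETTLE_SECONDS = 2.0  # assumed grace period for queued messages to drain
ui_frozen = False
latest = {}           # topic -> last payload seen

def apply_update(topic, payload):
    print(topic, payload)  # stand-in for the real UI update

def unfreeze():
    global ui_frozen
    ui_frozen = False
    for topic, payload in latest.items():  # render only the last data per topic
        apply_update(topic, payload)

def on_connect(client, userdata, flags, rc):
    global ui_frozen
    ui_frozen = True   # block UI edits/updates until the backlog settles
    client.subscribe("app/updates/#", qos=1)
    threading.Timer(SETTLE_SECONDS, unfreeze).start()

def on_message(client, userdata, msg):
    latest[msg.topic] = msg.payload
    if not ui_frozen:
        apply_update(msg.topic, msg.payload)

client = mqtt.Client(client_id="ui-client", clean_session=False)
client.on_connect = on_connect
client.on_message = on_message
client.connect("broker.example.com", 1883, keepalive=60)
client.loop_forever()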
Take an iOS app like Instagram. Instagram is fundamentally a real-time application that updates its UI whenever another user interacts with you. For example, if someone likes your post while you are using the app, the UI updates to trigger a dopamine release and inform you that something has happened to one of your posts. Similarly, when someone sends you a direct message on Instagram while you are using the app, you see the message spring down from the top, all in real time.
When it comes to implementing such real-time features, a naive HTTPS polling approach is obviously far too inefficient. That leaves two strategies:
1.) APNS Push Notifications:
When a user likes a post, sends a direct message, comments, etc., send an HTTP POST to a backend server, which updates the database and sends a silent Apple push notification to the recipient's device. The recipient, who is using the app, receives the pushed payload and sends an HTTP GET to the backend server to fetch the needed data (i.e. the contents of the direct message). The UI is updated in quasi real time.
2.) Websockets:
Whenever any user opens the iOS app, connect them to the server via a websocket, so that all users currently using the app are connected through their own websocket. When a user likes a post, sends a direct message, comments, etc., the app sends a message through the socket indicating the action. The server, before updating the database, finds the socket associated with the recipient and forwards the message through it. Upon receiving the message, the recipient's UI is updated in real time. (A rough server-side sketch of this strategy follows.)
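For concreteness, a minimal sketch of strategy 2's server side in Python with the websockets library (>= 11; the first-frame user-ID handshake and the message shape are assumptions — real code would authenticate properly):

import asyncio
import json
import websockets

connections = {}  # user_id -> websocket, one entry per connected app instance

async def handler(ws):
    user_id = await ws.recv()  # assumed handshake: first frame is the user id
    connections[user_id] = ws
    try:
        async for raw in ws:
            action = json.loads(raw)  # e.g. {"type": "like", "to": "bob", "post": 42}
            # ...update the database here, then forward if the recipient is online...
            peer = connections.get(action["to"])
            if peer is not None:
                await peer.send(raw)
    finally:
        connections.pop(user_id, None)

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await asyncio.Future()  # run forever

asyncio.run(main())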
Which of these approaches is scalable and better suited for a production environment?
TL;DR: You'll love websockets plus the Combine framework: event-driven, updating your UI smoothly. APNS can be somewhat unreliable in terms of deliverability, carries resource/database costs, and to me is more of an information center.
Reasons to like websockets:
Decreases DB costs (e.g. finding device to send to)
You know when the user is not using the app, decreasing outbound data costs
Lower latency, and practically speaking faster (more depth in the websockets paragraph)
To answer the question: websockets are scalable, based on your user count obviously. APNS is not server-side testable in CI, not cross-platform, and not exactly feasible in terms of resource consumption.
I have a bias toward websockets. But to show why I like them more, think about the efficiency implied by your Instagram example.
To me, websockets == event-driven; APNS is a simple information center.
APNS:
You sign up with APNS and save the device in your backend. Great: every time an action is performed, like someone liking your post, you query your backend, find the device linked to the OP, then send the notification outbound (which costs money if you think about cloud computing costs). And you have to do that every single time someone likes your post (obviously you can aggregate, but why bother when you have websockets?). So database cost is one thing to think about. Additionally, a user could have muted their notifications. I don't have Instagram, but one thing they could have done for efficiency (if not via sockets) is updating the UI based on incoming notifications for likes/hearts rather than a websocket connection. That is slow in terms of latency, costly, and unreliable if you get marked as spam. However, APNS has the benefit of not needing authorization, unlike websockets, but...
Websockets:
With websockets, you only authorize once (at least for mobile applications). You remove outbound data costs (in terms of money and latency) by dropping overhead like headers. You want to send small chunks of data, and sockets just carry text/binary (I like to build commands out of them using JSON; a notable example is GitHub). When your user is done with the app, you close the connection and no longer need to send data via APNS. Your server itself can feed a single websocket via something like Redis Pub/Sub (sketched below) to update your UI when someone uses a POST request to heart a post, without sending any push notification. That's a win for the liker (who doesn't have to wait for that push notification to be sent), or, if you offload it to a background task, for the OP. Websockets == event-driven:
Small chunks of data delivered constantly (as in, a lot of them) beat APNS, which can be slow to actually deliver, especially if you're flooding Apple/your user with notifications and get marked as spam. Disclaimer: that's just my personal belief and speculation, and it depends on your use case.
You're removing those unnecessary database costs.
A downside would be keeping the connection alive.
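Here is a rough sketch of that Redis Pub/Sub fan-out in Python (channel name, event shape, and the single-gateway setup are all my assumptions; the connections map is the same idea as in the question's strategy 2):

import asyncio
import json
import redis.asyncio as redis
import websockets

connections = {}  # user_id -> websocket

async def handler(ws):
    user_id = await ws.recv()  # assumed handshake: first frame is the user id
    connections[user_id] = ws
    try:
        await ws.wait_closed()
    finally:
        connections.pop(user_id, None)

async def relay():
    # the HTTP POST handler does r.publish("events", json.dumps(event))
    # after updating the database; this task pushes it out the open socket
    pubsub = redis.Redis().pubsub()
    await pubsub.subscribe("events")
    async for msg in pubsub.listen():
        if msg["type"] != "message":
            continue
        event = json.loads(msg["data"])
        ws = connections.get(event["to"])  # deliver only if the recipient is online
        if ws is not None:
            await ws.send(json.dumps(event))

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await relay()

asyncio.run(main())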
Personally, I've used websockets with the Combine framework recently for a chat application, but they can be used in many different circumstances: updating a Facebook-style feed/comment section, live notifications via the websocket instead of APNS, even posting content.
APNS delivery is not guaranteed, especially if you send a lot of notifications - I don't remember where I read that, but after some threshold per minute it stops working. Also, you have to send notifications to all your users, not just the online ones. It's possible to send silent notifications, but it's still not optimal.
That's why websockets are the preferred way. You can also use something like https://github.com/centrifugal/centrifugo, a helper for your server that holds all the connections and is very stable.
It is not a matter of picking one of the two; even the websocket isn't failsafe, and all of the above should be considered for effective communication. When the app is in the foreground, use a websocket to listen for updates from the server, and expect a confirmation from the client when something is delivered over the socket. If the websocket connection is not active, or there is no confirmation response, deliver the update through APNS/FCM. And since APNS/FCM delivery is not guaranteed either, deliver any pending updates when the next socket connection succeeds. A sketch of this fallback flow follows.
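A rough sketch of that flow in Python (the five-second ack window is an assumption, and send_push plus the offline queue are placeholder stand-ins for a real APNS/FCM call and store):

import asyncio
import json

pending = {}        # update_id -> asyncio.Event, set when the client acks
offline_queue = {}  # user_id -> updates to replay on the next socket connect

def send_push(user_id, update):
    print("push to", user_id, update)  # stand-in for an APNS/FCM call

async def deliver(ws, user_id, update):
    ack = asyncio.Event()
    pending[update["id"]] = ack
    try:
        if ws is not None:
            await ws.send(json.dumps(update))
            try:
                await asyncio.wait_for(ack.wait(), timeout=5.0)  # assumed ack window
                return  # confirmed over the socket, nothing more to do
            except asyncio.TimeoutError:
                pass  # socket delivery unconfirmed, fall back to push
        send_push(user_id, update)
        offline_queue.setdefault(user_id, []).append(update)  # replay on reconnect
    finally:
        pending.pop(update["id"], None)

def on_client_ack(update_id):
    # called by the websocket receive loop when the client confirms delivery
    ev = pending.get(update_id)
    if ev is not None:
        ev.set()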
I have an application that uses MQTT for communication between modules and with a mobile terminal.
In some situations, when messages do not arrive, the node performs an MQTT self-test (sending a message to itself), and when the self-test fails, it tries to reconnect to the broker (the MQTT offline event does not always arrive). Then two problems may arise:
If I perform an mqtt.client:close() to make sure the client is closed (to avoid the second problem) and the client is already closed, the node resets.
If I perform an mqtt.client:connect() and the client is still connected, an exception occurs and the node resets.
Is there a way to know whether the MQTT client is connected or not?
Thanks for your comment. I am going to describe what I am doing, to see if you can help me:
I have two independent systems, a master and a slave. The master publishes a test message every 10 minutes. If there is no answer from the slave, it publishes a test message to itself. If this self-test message does not arrive, a disconnection from the broker is assumed, and a reconnection is initiated.
And here is where the problem comes in: sometimes the client is disconnected and everything goes well, but sometimes it is still connected yet unresponsive, and the node resets with an "already connected" exception.
Performing an mqtt:close() before the reconnection should be safe, but if I call it while the client is truly disconnected, the node resets for no reason known to me.
All of this happens without ever receiving an offline event.
Instead of waiting for messages manually sent by the master client (which could fail to send for various reasons, leading a listening device to the wrong conclusion about the state of its connection to the broker), I recommend using MQTT's built-in connection management.
First, you can make sure that each client's initial connection has succeeded by including an error handler in :connect(). If the client really opens, nothing in the NodeMCU documentation indicates that it will close itself; it may go offline.
Once connected, the client only knows that something is wrong when it sends a message and does not receive a response. It sounds like you are not calling :publish() often (which would otherwise let you know by returning false), so pinging may be best: if you expect to receive a message from the broker every n seconds, set a keepalive time on the client slightly higher than that.
Then, failure to get a response to those pings should trigger an offline event that you can respond to. That might look like the following (not tested; it may work better called outside the callback):
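-- close on the "offline" event so a later :connect() starts from a clean state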
m:on("offline", function(client) m:close() end)
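For comparison, outside NodeMCU the same state tracking is built into paho-mqtt for Python; a minimal sketch of the equivalent pattern (broker host and keepalive value are assumptions):

import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    print("connected, rc =", rc)

def on_disconnect(client, userdata, rc):
    print("lost connection, rc =", rc)  # also fires on keepalive failure

client = mqtt.Client(client_id="node-1")
client.on_connect = on_connect
client.on_disconnect = on_disconnect
# with keepalive=15 a dead connection is detected within roughly 1.5x that interval
client.connect("broker.example.com", 1883, keepalive=15)
client.loop_start()

# later, before deciding whether to reconnect:
if not client.is_connected():  # avoids the "already connected" error
    client.reconnect()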
In the iOS Developers Guide for Parse, it states "By default, all connections have a timeout of 10 seconds." I am looking to change this for all requests made from the app, but am not finding any information on how to do so.
The reason we'd like to modify this is that it takes a long time for requests to fail when the user has neither Wi-Fi nor cellular enabled. We want to reduce, a little, the time it takes to receive that error message. We don't want to implement our own reachability tests, as that would result in duplicate popup error messages, and we have many requests in various view controllers throughout the app.
Can the timeout be modified, or is there some other way to obtain a better user experience than waiting 10 seconds for an error message?
There is no information on this, but the request timeout limits are certainly set by Parse, and a developer will not be able to change them. I think they kept the timeouts long to avoid rejecting a user's request if their connection suddenly becomes intermittent, they go into a tunnel, etc.
You can try to wrap Parse queries in a timer that uses, say, a 5-second timeout; if the response does not arrive in that time, cancel your query using PFQuery's cancel function and show the user a message.
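The same race-the-timer pattern, sketched in Python purely for illustration (in an iOS app, cancelling the PFQuery would take the place of future.cancel() here):

import concurrent.futures

def fetch_with_timeout(run_query, timeout=5.0):
    # run_query: a callable that performs the slow network request
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(run_query)
        try:
            return future.result(timeout=timeout)
        except concurrent.futures.TimeoutError:
            future.cancel()  # analogous to cancelling the PFQuery
            raise TimeoutError("request took longer than %.1f s" % timeout)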
If you want to avoid timing out, consider checking Reachability before making the call. You may want to show the user an alert if they're not connected and you need to do something.
A lot of people say you should just assume a connection and make the attempt without checking reachability: basically, let the connection fail and handle the error that way. I think that as long as the failure isn't invisible to the user, so that they blame their network rather than your app, you're good.
The company I am working for has evaluated MQTT and decided to use it as the core messaging platform for a large-scale system. The main reason is how compact the protocol is and how easily it can be implemented. I have a single issue with MQTT, though, and I'm seeking an answer to the following question:
QoS 1 and QoS 2 messages require confirmation from the client. The only things I know about a message (identifying it) when receiving PUBACK, PUBREC, PUBREL, and PUBCOMP are the message ID and the client ID. The message ID is an unsigned 16-bit integer, so the maximum value is 65535. That doesn't seem large enough for long-running clients: a client sending 15 QoS 2 messages an hour for a year sends about 131,000 messages, twice the ID space.
I am not quite sure whether there is any other way to identify a message. I would like to stay as compliant with the standard as possible.
Probably the first point to make clear is that message IDs are handled on a per-client, per-direction basis. That is to say, the broker creates a message ID for each outgoing message with QoS > 0 for each connected client, and these message IDs are completely independent of the message IDs used for the same message published to other clients. Likewise, each client generates its own message IDs for the messages it sends.
The message ID doesn't have to be unique forever, so your client sending 15 QoS 2 messages per hour would simply wrap around at some point. The real limitation is that there can be at most 65535 messages per direction "in flight" at once (i.e. partway through the message handshake). Once a message with a given ID has been fully processed, that message ID can be reused.
Another way of looking at it is to consider how it would work if your client only ever had one message in flight at a time, whether because of the rate at which messages are transmitted or by design in how you handle them. In that case, you could keep the message ID set to 1 for every single message, because there is never a chance of a duplicate.
If you wish to support having multiple messages in flight at once, it is relatively straightforward to check that a message ID is not already in use before you assign it (see the sketch below).
Because the message ID is per client, if you send a single message to more than 65535 clients there is no chance of message ID collisions. Only if you have more than 65535 incomplete message flows to a single client at once will there be problems.
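A tiny in-flight ID allocator along those lines, in Python (my own sketch, not taken from any MQTT library):

class MessageIdAllocator:
    MAX_ID = 65535  # message IDs are unsigned 16-bit; 0 is not allowed

    def __init__(self):
        self.in_flight = set()
        self.next_id = 1

    def acquire(self):
        # hand out a free ID, wrapping around past 65535 back to 1
        if len(self.in_flight) >= self.MAX_ID:
            raise RuntimeError("65535 messages already in flight")
        while True:
            candidate = self.next_id
            self.next_id = candidate % self.MAX_ID + 1
            if candidate not in self.in_flight:
                self.in_flight.add(candidate)
                return candidate

    def release(self, msg_id):
        # call this on PUBACK (QoS 1) or PUBCOMP (QoS 2) for the ID
        self.in_flight.discard(msg_id)

Call acquire() when publishing and release() when the handshake completes; with only one message in flight at a time, this degenerates to handing out the same ID over and over, matching the single-message case above.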
Answering the comment "I have noticed that every MQTT broker tends to deliver only the last QoS1/2 message":
The broker will only send messages to clients it knows about. If you connect for the first time, there is no way to get messages from the past, with one exception: retained messages. If a message is published as retained, it acts as a "last known good" value; when a new client subscribes, the retained message is sent to it immediately, which makes it useful for things that are updated infrequently. I suspect this is what you are referring to.
If you want a client to have messages queued while it is not connected, you must connect with the "clean session" option disabled to make the client persistent, and you must use QoS > 0 for both the subscription and the publications. When your client reconnects (with clean session still disabled), the queued messages will be delivered. You can normally configure in the broker how many messages to queue this way; any further messages are discarded. An important point is that queueing messages for a client that has never connected before is unsupported by design. A concrete client-side example follows.
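To make the persistent-session recipe concrete, a minimal paho-mqtt client in Python (broker host and topic are placeholders): the essentials are a stable client ID, clean_session=False, and QoS > 0 on both the subscription and the matching publications.

import paho.mqtt.client as mqtt

# a stable client_id plus clean_session=False makes the session persistent:
# the broker queues QoS>0 messages for us while we are offline
client = mqtt.Client(client_id="display-1", clean_session=False)

def on_connect(client, userdata, flags, rc):
    # flags["session present"] == 1 means the broker kept our session
    client.subscribe("plant/temperature", qos=1)  # QoS > 0 subscription required

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload)  # queued messages arrive here right after reconnect

client.on_connect = on_connect
client.on_message = on_message
client.connect("broker.example.com", 1883, keepalive=60)
client.loop_forever()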
To deliver more messages at QoS 1 or QoS 2, you should use persistence: whenever a subscriber is not available, the message is stored in persistent memory and delivered once the subscriber reconnects. You can enable this for QoS 0 as well by configuring mosquitto.conf.