So we have a messenger (I know we should just switch to XMPP, but there's no time right now). The problem: if a user sends, say, 20 messages rapid-fire, there is a good chance of them all being stored out of order. Is there any way to ensure the requests are sent in order without blocking the user from sending at the pace they want?
You can create a queue of NSURLRequests. When the user fires a message, it is added to the queue and the oldest message in the queue is sent. When the completion block (success or failure) is called, send the next oldest message, and so on...
Just be careful to have only one process that sends messages.
You can also take a look at NSOperation and NSOperationQueue.
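A minimal sketch of that idea in Swift (all names here are illustrative; in a real app the injected `transport` closure would start an NSURLSession data task and call the completion from the task's completion handler):

```swift
import Foundation

// Serial send queue sketch: messages go out one at a time, so the
// server sees them in the order the user sent them.
final class MessageQueue {
    private var pending: [String] = []
    private var isSending = false
    // Injected so the sketch stays testable; in the real app this
    // would wrap an NSURLSession/URLSession data task.
    private let transport: (String, () -> Void) -> Void

    init(transport: @escaping (String, () -> Void) -> Void) {
        self.transport = transport
    }

    func send(_ message: String) {
        pending.append(message)
        sendNextIfIdle()
    }

    private func sendNextIfIdle() {
        guard !isSending, !pending.isEmpty else { return }
        isSending = true
        let next = pending.removeFirst()
        transport(next) { [weak self] in
            // Completion (success or failure): move on to the next message.
            guard let self = self else { return }
            self.isSending = false
            self.sendNextIfIdle()
        }
    }
}
```

Because the next send only starts from the previous one's completion, the user can keep typing at full speed while the network side stays strictly ordered.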
Is there any way for an MQTT client to know when its queue has been processed and it is "up-to-date" again?
I want to prevent editing of certain elements in the frontend until I am sure that I received all queued changes after a reconnect.
Is that possible?
No, queued messages are not flagged in any way, but they will all be delivered as soon as the client connects.
You could set a flag on connect that stops all UI updates for a period of time, allowing queued messages to arrive, and then update with the latest data.
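That flag-and-wait idea could be sketched like this (illustrative Swift, not part of any MQTT client API; in a real app `beginSettling` would be called from the connect callback and `finishSettling` driven by a timer):

```swift
import Foundation

// While "settling" after a reconnect, incoming messages are held back
// and only the newest payload is remembered; when the settling window
// ends, the UI jumps straight to that final value.
final class ReconnectSettler {
    private(set) var isSettling = false
    private var latest: String?

    // Call when the client reconnects.
    func beginSettling() {
        isSettling = true
        latest = nil
    }

    // Call for every incoming message. Returns the payload to render
    // now, or nil if it should be held back until settling ends.
    func handle(_ payload: String) -> String? {
        if isSettling {
            latest = payload   // keep only the newest value
            return nil
        }
        return payload
    }

    // Call when the settling window elapses; returns the last queued
    // payload (if any) to apply to the UI.
    func finishSettling() -> String? {
        isSettling = false
        defer { latest = nil }
        return latest
    }
}
```

This doesn't tell you that the broker's queue is truly drained (nothing does), but a short window after connect is usually enough for queued messages to arrive.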
I have an app that fires a lot of initial Alamofire GET requests to an API to collect data. There are also buttons on the app screen that fire POST requests to save, etc. When I tap those buttons, the POST requests take a long time to start because all the other GET requests are still running.
Is it possible to make it so I can push the POST request ahead of the queue?
There is no way to do this using Alamofire or the underlying URLSession APIs. What you'd want to do is build your own request queue that lets you keep perhaps half a dozen requests in flight at any time. You would enqueue all of your GETs and then push your POSTs to the front of the line when needed.
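Such a queue might look like this (a hedged Swift sketch with illustrative names; each task would wrap an Alamofire request and call the completion from its response handler):

```swift
import Foundation

// Priority-aware request queue sketch: GETs enqueue at the back,
// urgent POSTs jump the line, and at most `maxInFlight` requests
// run at any one time.
final class RequestQueue {
    typealias Task = (() -> Void) -> Void   // task receives a completion
    private var waiting: [Task] = []
    private var inFlight = 0
    private let maxInFlight: Int

    init(maxInFlight: Int = 6) { self.maxInFlight = maxInFlight }

    func enqueue(_ task: @escaping Task) {       // e.g. background GETs
        waiting.append(task)
        pump()
    }

    func enqueueFront(_ task: @escaping Task) {  // e.g. user-triggered POSTs
        waiting.insert(task, at: 0)
        pump()
    }

    private func pump() {
        // Start waiting tasks until the in-flight limit is reached.
        while inFlight < maxInFlight, !waiting.isEmpty {
            let task = waiting.removeFirst()
            inFlight += 1
            task { [weak self] in
                guard let self = self else { return }
                self.inFlight -= 1
                self.pump()
            }
        }
    }
}
```

Keeping only a handful of requests in flight means a front-inserted POST waits behind at most `maxInFlight` slow GETs instead of the whole backlog.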
Here is our scenario:
We are publishing messages to a topic, and have a queue subscribed to that topic. We have several node.js consumers connected to that queue to process incoming messages, via the solclient package.
What we want to do is process the incoming messages and, after successful processing, acknowledge each message so that it is removed from the queue. The challenge is how to deal with error conditions: how do we signal to the broker that a message failed to be processed? The expectation is that the broker would then attempt to deliver the message to a consumer again, and once max redeliveries is hit, move it to the DMQ.
I don't believe there's a way in Solace for a consumer to "NACK" a received message to signal an error in processing. I believe your option would be to unbind from the queue (i.e. disconnect() the Flow object, or MessageConsumer in the Node.js API), which will allow any unacknowledged messages to be placed back on the queue and made available for redelivery, and then rebind to the queue.
Alternatively, if you do your processing in the message received callback method, you could throw an exception in there, and that should (?) accomplish the same thing. But probably a lot less graceful.
https://docs.solace.com/Solace-PubSub-Messaging-APIs/Developer-Guide/Creating-Flows.htm
https://docs.solace.com/API-Developer-Online-Ref-Documentation/js/solace.MessageConsumer.html#disconnect
I’m working on a simple wrapper around CoreBluetooth to send any data to any device.
During development I encountered a lot of bugs in the framework; they were very annoying, and to keep my wrapper stable I had to cut some functionality for reliability.
For now I'm working on sending data from the peripheral.
OK, so I have the following case:
The client asks for the value of a dynamic characteristic.
I get a callback on the server side: peripheral:didReceiveReadRequest:.
Note: I need to respond to this CBATTRequest inside this method; I can't store it elsewhere and respond to it asynchronously. (I'm just responding with a placeholder chunk, "PrepareToReceiveValue", that is ignored on the central side. All sending is done through the queue.)
To provide data to various devices I built a queue of BTMessage objects. (So for a read request I create a message and add it to the sending queue. If sending a chunk fails, I get the readyToUpdateSubscribers callback from the peripheral manager and ask the queue to resend the failed chunk.)
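The queue's resend logic looks roughly like this (a simplified sketch with illustrative names; `transmit` stands in for CBPeripheralManager's updateValue:forCharacteristic:onSubscribedCentrals:, which returns NO when the transmit queue is full):

```swift
import Foundation

// Chunked-send sketch: chunks are drained until the transmit queue
// fills up; the failed chunk stays at the head and is retried when
// the "ready to update subscribers" callback fires.
final class ChunkSender {
    private var chunks: [Data] = []
    // Stand-in for the peripheral manager's updateValue call.
    private let transmit: (Data) -> Bool

    init(transmit: @escaping (Data) -> Bool) { self.transmit = transmit }

    func enqueue(_ data: Data) {
        chunks.append(data)
        drain()
    }

    // Also called from peripheralManagerIsReady(toUpdateSubscribers:).
    func drain() {
        while let chunk = chunks.first {
            guard transmit(chunk) else { return }  // queue full: retry later
            chunks.removeFirst()
        }
    }
}
```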
So when I request a lot of dynamic characteristic values at once while concurrently sending data from the peripheral to the central, sending sometimes just freezes, which leads to disconnection.
After some testing I found out that it is all about the transmit queue:
If the transmit queue is full and you receive a read request, responding to it simply has no effect.
So I have a potentially unstable system state:
The peripheral is sending data to some central.
In my sending method, updateValue:forCharac… returns NO because the transmit queue is full.
At this moment the central requests a dynamic characteristic value, and the peripheral:didReceiveReadRequest: invocation is added to the current run loop.
After my sending method returns, peripheral:didReceiveReadRequest: is dequeued, and responding to the request has no effect (the transmit queue is full).
So in this case respondToRequest: is ignored as if I hadn't invoked it at all.
CoreBluetooth will not be able to send or receive any data until I respond to the request. That was the reason for the freeze of all sending/receiving progress and the accompanying disconnection.
As I mentioned before, I must respond to the request inside the appropriate method; otherwise it also has no effect. (I say this because I've tried putting those requests in an array when the queue is full and responding to them once it has free space, but with no luck.)
I'm waiting for your proposals/suggestions on how to resolve this problem; any help would be appreciated.
I am building Comet chat with Erlang. I use only one long-polling connection for message transport. But, as you know, a long-polling connection cannot stay connected all the time: every time a new message arrives or the timeout is reached, it breaks and then reconnects to the server. If a message is sent before the connection has reconnected, keeping the chat's integrity is a problem.
Also, if a user opens more than one Comet-chat window, all the chat messages have to stay in sync, which means one user can have many long-polling connections. So it is hard to keep every message delivered on time.
Should I build a message queue for every connection? Or is there a better way to solve this?
The simplest way, it seems to me, is to have one process/message queue per user connected to the chat (even if they have more than one chat window open). Then keep track of the timestamp of the last message in the chat window application, and on reconnect ask for all messages after that timestamp. The message queue process should keep messages only for a reasonable time span. In this scenario, reconnecting is entirely up to the client. Alternatively, you could send some sort of heartbeat from the server, but that seems less reliable to me, and it does not handle disconnections caused by anything other than a timeout. There are many variants of server-side queuing: one queue per client, per user, per chat room, per ...
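The timestamp catch-up logic is language-agnostic; here is a sketch of it (in Swift rather than Erlang, purely to show the idea; every name is made up — in Erlang this would be one gen_server-style process per user):

```swift
import Foundation

struct StoredMessage {
    let timestamp: TimeInterval
    let body: String
}

// Per-user message buffer sketch: keeps recent messages for a limited
// retention window; a reconnecting window asks for everything after
// the timestamp of the last message it saw.
final class UserMessageQueue {
    private var messages: [StoredMessage] = []
    private let retention: TimeInterval

    init(retention: TimeInterval = 300) { self.retention = retention }

    func publish(_ body: String, at timestamp: TimeInterval) {
        messages.append(StoredMessage(timestamp: timestamp, body: body))
        // Drop messages older than the retention window.
        messages.removeAll { timestamp - $0.timestamp > retention }
    }

    // Called when a window reconnects.
    func messagesAfter(_ lastSeen: TimeInterval) -> [String] {
        messages.filter { $0.timestamp > lastSeen }.map { $0.body }
    }
}
```

Because every window of the same user replays from its own last-seen timestamp, multiple long-polling connections stay in sync without the server tracking each one.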