How is synchronization handled in an IoT application? - mqtt

I am doing an IoT project. A typical scenario is:
I need to control the device to move to a coordinate (x, y)
Then I need to get the current coordinate to decide what to do next.
I use MQTT to communicate with devices. So in my code, I express the operations like this in Blockly:
// javascript
robot.move(x, y);
if (robot.x > 100) {
  // do something...
}
Obviously, the move(..) method is asynchronous: it just publishes the command and does not wait for completion.
Due to the messaging pattern, even if I make move(..) an async function and await it, I still don't think it works, because the callback only tells me that the message has been delivered to the robot by the message broker, not that the robot has actually moved to the specified location.
So how should I handle this kind of scenario?

Did you try using callbacks and promises? Usually an asynchronous function either accepts a callback:
robot.move(x, y, function(err, res) { /* do something */ })
or returns a promise:
robot.move(x, y).then(function(res) { /* do something */ }).catch(function(err) { /* handle error */ })

To be clear, there is no end-to-end delivery notification in MQTT; the callback only tells you the message has been delivered to the broker, not onward to the robot. The only way to know the command has been acted on by the robot is to have it publish a separate message confirming that it has completed the action.
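For example, with MQTT.js the command and the robot's confirmation can be wrapped into a single promise. This is only a sketch: the topic names (robot/move, robot/position), the JSON payload shape, and the assumption that the robot firmware publishes its position once the move finishes are not part of MQTT itself.
const mqtt = require('mqtt');
const client = mqtt.connect('mqtt://broker.example.com');

// Publish the move command, then resolve only when the robot itself reports
// its new position on a status topic.
function move(x, y) {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => reject(new Error('move timed out')), 30000);

    const onMessage = (topic, payload) => {
      if (topic !== 'robot/position') return;
      clearTimeout(timer);
      client.removeListener('message', onMessage);
      resolve(JSON.parse(payload.toString())); // e.g. { x: 102, y: 50 }
    };

    client.on('message', onMessage);
    client.subscribe('robot/position', (err) => {
      if (err) { clearTimeout(timer); return reject(err); }
      client.publish('robot/move', JSON.stringify({ x: x, y: y }));
    });
  });
}

// The Blockly snippet from the question then becomes:
// const pos = await move(x, y);
// if (pos.x > 100) { /* do something... */ }
The important point is that completion is signalled by the robot itself, not by the broker's delivery acknowledgement.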

Related

Distinct Stream in Dart

I'm writing a Flutter app which sends commands via Bluetooth (FlutterBlue) to a device. The device controls some LEDs.
The communication is working in general quite well but:
On the UI I have a slider controlling the light intensity. When I drag the slider, more values are generated than the Bluetooth backend can handle.
In my first implementation I was sending the data directly to the Bluetooth characteristic, which resulted in exceptions from the Bluetooth backend and some values getting lost. That makes it hard to fade the light down to zero.
In my second approach I'm using a stream and an await for loop to send the data. Now all values are sent without any exceptions, but it takes several seconds after releasing the slider until all values are sent. Since I want direct visual feedback on the LEDs, this is not an option.
Since there are multiple commands of the same type to be sent, I can skip all commands of the same type which were added while the Bluetooth send routine was processing a write event.
I saw that there is a Stream.distinct method, but it returns a new stream, so I would have to exit my await for loop and handle the new stream.
Is there a way of removing undesired events from an existing stream without creating a new stream that I then have to listen to?
Here is what I'm doing:
class MyBlueToothDevice {
  BluetoothDevice _device;
  List<BluetoothCharacteristic> _characteristics = List<BluetoothCharacteristic>();
  final _sendStream = StreamController<Tuple2<SendCommands, List<int>>>();

  MyBlueToothDevice(this._device) {
    _writeNext();
  }

  Future<void> write(SendCommands command, List<int> value) async {
    if (isConnected) {
      _sendStream.add(Tuple2<SendCommands, List<int>>(command, value));
      // await _characteristics[command.index].write(value).catchError((value) {
      //   print("Characteristics.write error: $value");
      // });
    }
  }

  Future<void> _writeNext() async {
    await for (var tuple in _sendStream.stream) {
      await _characteristics[tuple.item1.index]
          .write(tuple.item2)
          .catchError((value) {
        print("Characteristics.write error: $value");
      });
    }
  }
}
The best solution is to use application state management to receive all the events from your slider. The state manager can then rate-limit the messages to the device to something it can handle, and also ensure that the most recent value is not lost.
A very basic solution would receive the slider value and update it in the state manager. A periodic timer with a suitable rate could then send that value to the device, possibly only when the value has actually changed since it was last sent.
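A minimal sketch of that timer-based approach, assuming a sendToDevice callback that performs the actual characteristic write (the class and member names below are illustrative, not part of FlutterBlue):
import 'dart:async';

class LedIntensityController {
  // Callback that performs the actual Bluetooth write (an assumption here).
  final Future<void> Function(int value) sendToDevice;

  int _latest = 0;
  int _lastSent = -1;
  bool _sending = false;
  Timer _timer;

  LedIntensityController(this.sendToDevice) {
    // Push at most ~20 writes per second, only when the value changed and no
    // write is still in flight, so the backend is never flooded.
    _timer = Timer.periodic(const Duration(milliseconds: 50), (_) async {
      if (_sending || _latest == _lastSent) return;
      _sending = true;
      _lastSent = _latest;
      try {
        await sendToDevice(_latest);
      } finally {
        _sending = false;
      }
    });
  }

  // Called from the slider's onChanged handler; cheap and never blocks the UI.
  void onSliderChanged(double value) {
    _latest = (value * 255).round();
  }

  void dispose() {
    _timer?.cancel();
  }
}
The slider handler only updates _latest, so values can never pile up, and the most recent value is always the one that reaches the device.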

Bidirectional gRPC stream sometimes stops processing responses after stopping and starting

In short
We have a mobile app that streams fairly high volumes of data to and from a server through various bidirectional streams. The streams need to be closed on occasion (for example when the app is backgrounded). They are then reopened as needed. Sometimes when this happens, something goes wrong:
From what I can tell, the stream is up and running on the device's side (the status of both the GRPCProtoCall and the GRXWriter involved is either started or paused)
The device sends data on the stream fine (the server receives the data)
The server seems to send data back to the device fine (the server's Stream.Send calls return as successful)
On the device, the result handler for data received on the stream is never called
More detail
Our code is heavily simplified below, but this should hopefully provide enough detail to indicate what we're doing. A bidirectional stream is managed by a Switch class:
class Switch {
    /** The protocall over which we send and receive data */
    var protocall: GRPCProtoCall?

    /** The writer object that writes data to the protocall. */
    var writer: GRXBufferedPipe?

    /** A static GRPCProtoService as per the .proto */
    static let service = APPDataService(host: Settings.grpcHost)

    /** A response handler. APPData is the datatype defined by the .proto. */
    func rpcResponse(done: Bool, response: APPData?, error: Error?) {
        NSLog("Response received")
        // Handle response...
    }

    func start() {
        // Create a (new) instance of the writer
        // (A writer cannot be used on multiple protocalls)
        self.writer = GRXBufferedPipe()
        // Setup the protocall
        self.protocall = Switch.service.rpcToStream(withRequestWriter: self.writer!,
                                                    eventHandler: self.rpcResponse(done:response:error:))
        // Start the stream
        self.protocall?.start()
    }

    func stop() {
        // Stop the writer if it is started.
        if self.writer?.state == .started || self.writer?.state == .paused {
            self.writer?.finishWithError(nil)
        }
        // Stop the proto call if it is started
        if self.protocall?.state == .started || self.protocall?.state == .paused {
            protocall?.cancel()
        }
        self.protocall = nil
    }

    private var needsRestart: Bool {
        if let protocall = self.protocall {
            if protocall.state == .notStarted || protocall.state == .finished {
                // protocall exists, but isn't running.
                return true
            } else if writer?.state == .notStarted || writer?.state == .finished {
                // writer isn't running
                return true
            } else {
                // protocall and writer are running
                return false
            }
        } else {
            // protocall doesn't exist.
            return true
        }
    }

    func restartIfNeeded() {
        guard self.needsRestart else { return }
        self.stop()
        self.start()
    }

    func write(data: APPData) {
        self.writer?.writeValue(data)
    }
}
Like I said, heavily simplified, but it shows how we start, stop, and restart streams, and how we check whether a stream is healthy.
When the app is backgrounded, we call stop(). When it is foregrounded and we need the stream again, we call start(). And we periodically call restartIfNeeded(), eg. when screens that use the stream come into view.
As I mentioned above, what happens occasionally is that our response handler (rpcResponse) stops getting called when the server writes data to the stream. The stream appears to be healthy (the server receives the data we write to it, and protocall.state is neither .notStarted nor .finished). But not even the log on the first line of the response handler is executed.
First question: Are we managing the streams correctly, or is our way of stopping and restarting streams prone to errors? If so, what is the correct way of doing something like this?
Second question: How do we debug this? Everything we could think of that we can query for a status tells us that the stream is up and running, but it feels like the objc gRPC library keeps a lot of its mechanics hidden from us. Is there a way to see whether responses from the server do reach us but fail to trigger our response handler?
Third question: As per the code above, we use the GRXBufferedPipe provided by the library. Its documentation advises against using it in production because it doesn't have a push-back mechanism. To our understanding, the writer is only used to feed data to the gRPC core in a synchronised, one-at-a-time fashion, and since server receives data from us fine, we don't think this is an issue. Are we wrong though? Is the writer also involved in feeding data received from server to our response handler? I.e. if the writer broke due to overload, could that manifest as a problem reading data from the stream, rather than writing to it?
UPDATE: Over a year after asking this, we have finally found a deadlock bug in our server-side code that was causing this behaviour on client-side. The streams appeared to hang because no communication sent by the client was handled by server, and vice-versa, but the streams were actually alive and well. The accepted answer provides good advice for how to manage these bi-directional streams, which I believe is still valuable (it helped us a lot!). But the issue was actually due to a programming error.
Also, for anyone running into this type of issue, it might be worth investigating whether you're experiencing this known issue where a channel gets silently dropped when iOS changes its network. This readme provides instructions for using Apple's CFStream API rather than TCP sockets as a possible fix for that issue.
First question: Are we managing the streams correctly, or is our way of stopping and restarting streams prone to errors? If so, what is the correct way of doing something like this?
From what I can tell by looking at your code, the start() function seems to be right. In the stop() function, you do not need to call cancel() on self.protocall; the call will already be finished by the preceding self.writer.finishWithError(nil).
needsRestart is where it gets a bit messy. First, you are not supposed to poll or set the state of protocall yourself; that state is managed by the library itself. Second, setting those states does not close your stream; it only pauses a writer, and if the app is in the background, pausing a writer is essentially a no-op. If you want to close a stream, you should use finishWithError to terminate the call, and maybe start a new call later when needed.
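In code, the suggested stop() boils down to something like this (only a sketch, reusing the properties of the Switch class above):
func stop() {
    // Finishing the request writer ends the call cleanly; no explicit cancel() needed.
    if self.writer?.state == .started || self.writer?.state == .paused {
        self.writer?.finishWithError(nil)
    }
    self.writer = nil
    self.protocall = nil
}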
Second question: How do we debug this?
One way is to turn on gRPC logging (GRPC_TRACE and GRPC_VERBOSITY). Another way is to set a breakpoint at the point where the gRPC Objective-C library receives a gRPC message from the server.
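As a concrete example of the first option, GRPC_TRACE and GRPC_VERBOSITY are standard gRPC core environment variables and can be set from the app itself before any call is created (setting them in the Xcode scheme's environment works just as well; the tracer list shown is only an example):
// Call this early, e.g. at the top of application(_:didFinishLaunchingWithOptions:),
// before the first gRPC call is created.
setenv("GRPC_TRACE", "api,call_error,connectivity_state", 1)  // or "all" for everything
setenv("GRPC_VERBOSITY", "DEBUG", 1)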
Third question: Is the writer also involved in feeding data received from the server to our response handler?
No. If you create a buffered pipe and pass it as the request writer of your call, it only feeds data to be sent to the server. The receiving path is handled by another writer (which is in fact your protocall object).
I don't see where the usage of GRXBufferedPipe in production is discouraged. The known drawback of this utility is that if you pause the writer but keep writing data to it with writeValue, you end up buffering a lot of data without being able to flush it, which may cause memory issues.

QNX MsgReceive Pulse

I have a problem because I don't know how _pulse receiving works. If I have my data struct
typedef struct _my_data {
    msg_header_t hdr;
    int data;
} my_data_t;
and I am receiving only my msg, I can't tell whether it is a pulse:
my_data_t msg;
...
rcvid = MsgReceive(g_Attach->chid, &msg, sizeof(msg), NULL);
when rcvid == 0. BUT how does a program know that it needs to send a _pulse in the form of my msg (the struct that I defined), or how does this work otherwise? In addition, is _IO_CONNECT a pulse? If yes, why doesn't it have rcvid == 0? (According to http://www.qnx.com/developers/docs/6.3.2/neutrino/lib_ref/n/name_attach.html.)
1 - _IO_CONNECT is not a pulse. It is used for connect system calls to resource managers; example system calls are open(), close(), etc.
2 - You need to know whether the server or client is waiting on a pulse message or not. For pulse messages, the blocking function in the resource manager will be MsgReceivePulse() and the client will use MsgSendPulse().
MsgSend() is used for normal messages and MsgSendPulse() is for sending pulse messages.
Similarly, MsgReceive() is used for receiving normal messages and MsgReceivePulse() is used for receiving pulse messages. Please refer to the QNX documentation for a more detailed description.
The two variants have different parameters: the functions for pulse messages have no parameters for reply data, because pulses are small non-blocking messages that do not block for a reply, whereas the functions for normal messages do have parameters for the reply data.
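If you keep a single MsgReceive() loop as in the question, the usual pattern is to check rcvid: 0 means a pulse, a positive value means a normal message that must be replied to. A rough sketch, assuming msg_header_t is struct _pulse as in the name_attach() example the question links to, with error handling omitted:
#include <errno.h>
#include <sys/iomsg.h>
#include <sys/neutrino.h>

void receive_loop(void) {
    my_data_t msg;
    int rcvid;

    for (;;) {
        rcvid = MsgReceive(g_Attach->chid, &msg, sizeof(msg), NULL);

        if (rcvid == 0) {
            /* A pulse, sent by the kernel or by a client via MsgSendPulse().
             * Only hdr.code / hdr.value are meaningful; there is no one to reply to. */
            switch (msg.hdr.code) {
            case _PULSE_CODE_DISCONNECT:
                /* a client has gone away */
                break;
            default:
                /* application-defined pulse codes */
                break;
            }
            continue;
        }

        /* rcvid > 0: a normal message sent with MsgSend(). */
        if (msg.hdr.type == _IO_CONNECT) {
            /* generated by a client's name_open(); just acknowledge it */
            MsgReply(rcvid, EOK, NULL, 0);
            continue;
        }

        /* your own message: use msg.data, then unblock the sender */
        MsgReply(rcvid, EOK, NULL, 0);
    }
}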
You need to create a channel and a connection, for example:
chid = ChannelCreate(0);
int pid = getpid();
coid = ConnectAttach(0, pid, chid, 0, 0);
which attaches the connection to the channel. Then, if you have two threads, from one thread you can call the MsgSend function, for example
MsgSend(coid, &(message), sizeof(message), &rmsg, sizeof(rmsg));
and in the other thread
rcvid = MsgReceive(chid, (void*)&message, sizeof(message), NULL);

Are these two Observable Operations Equivalent?

I'm not sure why, but for some reason, when using the observable created via concat I always get all the values that are pushed from my list (it works as intended), whereas with the normal subscribe it seems that some values never make it to the subscribers of the observable (only in certain conditions).
These are the two cases that I am using. Could anyone attempt to explain why, in certain cases, not all values are received when subscribing to the second version? Are they not equivalent? The intent here is to rewind the stream. What are some reasons that could explain why Case 2 fails while Case 1 does not?
Replay here is just a list of the ongoing stream.
Case 1.
let observable =
    Observable.Create(fun (o: IObserver<'a>) ->
        let next b =
            for v in replay do
                o.OnNext(v.Head)
            o.OnNext(b)
            o.OnCompleted()
        someOtherObs.Subscribe(next, o.OnError, o.OnCompleted))
let toReturn = observable.Concat(someOtherObs).Publish().RefCount()
Case 2.
let toReturn =
    Observable.Create(fun (o: IObserver<'a>) ->
        for v in replay do
            o.OnNext(v.Head)
        someOtherObs.Subscribe(o)
    ).Publish().RefCount()
Caveat! I don't use F# regularly enough to be 100% comfortable with the syntax, but I think I see what's going on.
That said, both of these cases look odd to me and it greatly depends on how someOtherObs is implemented, and where (in terms of threads) things are running.
Case 1 Analysis
You apply concat to a source stream which appears to work like this:
It subscribes to someOtherObs, and in response to the first event (a) it pushes the elements of replay to the observer.
Then it sends event (a) to the observer.
Then it completes. At this point the stream is finished and no further events are sent.
In the event that someOtherObs is empty or just has a single error, this will be propagated to the observer instead.
Now, when this stream completes, someOtherObs is concatenated onto it. What happens next is a little unpredictable: if someOtherObs is cold, then the first event is sent a second time; if someOtherObs is hot, then the first event is not resent, but there's a potential race condition around which of the remaining events goes next, which depends on how someOtherObs is implemented. You could easily miss events if it's hot.
Case 2 Analysis
You replay all the replay events, and then send all the events of someOtherObs - but again there's a race condition if someOtherObs is hot, because you only subscribe after pushing replay and so might miss some events.
Comments
In either case, it seems messy to me.
This looks like an attempt to do a merge of a state of the world (sotw) and a live stream. In this case, you need to subscribe to the live stream first, and cache any events while you then acquire and push the sotw events. Once the sotw is pushed, you push the cached events - being careful to de-dupe events that may have been read in the sotw - until you are caught up with live, at which point you can just pass live events through.
You can often get away with naive implementations that flush the live cache in an OnNext handler of the live stream subscription, effectively blocking the source while you flush - but you run the risk of applying too much back pressure to the live source if you have a large history and/or a fast moving live stream.
Some considerations for you to think on that will hopefully set you on the right path.
For reference, here is an extremely naïve and simplistic C# implementation I knocked up that compiles in LINQPad with rx-main nuget package. Production ready implementations I have done in the past can get quite complex:
void Main()
{
    // asynchronously produce a list from 1 to 10
    Func<Task<List<int>>> sotw =
        () => Task<List<int>>.Run(() => Enumerable.Range(1, 10).ToList());

    // a stream of 5 to 15
    var live = Observable.Range(5, 10);

    // outputs 1 to 15
    live.MergeSotwWithLive(sotw).Subscribe(Console.WriteLine);
}

// Define other methods and classes here
public static class ObservableExtensions
{
    public static IObservable<TSource> MergeSotwWithLive<TSource>(
        this IObservable<TSource> live,
        Func<Task<List<TSource>>> sotwFactory)
    {
        return Observable.Create<TSource>(async o =>
        {
            // Naïve indefinite caching, no error checking anywhere
            var liveReplay = new ReplaySubject<TSource>();
            live.Subscribe(liveReplay);

            // No error checking, no timeout, no cancellation support
            var sotw = await sotwFactory();
            foreach (var evt in sotw)
            {
                o.OnNext(evt);
            }

            // note naive disposal
            // and extremely naive de-duping (it really needs to compare
            // on some unique id)
            // we are only supporting disposal once the sotw is sent
            return liveReplay.Where(evt => !sotw.Any(s => s.Equals(evt)))
                             .Subscribe(o);
        });
    }
}

Block incoming call in BlackBerry

I am developing an app which blocks incoming calls. Currently, when an incoming call arrives on the device, it is blocked. But after returning from the blocked call, the screen switches to the dialer screen and shows a dialog alerting that there is a missed call.
I want to block the incoming call and, once it has been hung up, end up back on the home screen. How do I make this happen?
My second question: which permission is required for blocking incoming calls, and how do I add it to my app? I added ApplicationPermissions.PERMISSION_IDLE_TIMER but it did not help.
Edit1:
This is the code in my application:
private void blockincomingcall() {
    int master_volume = net.rim.device.api.system.Alert.getVolume();
    // net.rim.device.api.notification.NotificationsManager.getMasterNotificationVolume();
    System.out.println("Master Volume " + master_volume);
    net.rim.device.api.system.Alert.setVolume(0);
    int alert_volume = Alert.getVolume();
    Main.log("Master Volume after setting " + alert_volume);
    int notifi_volume = NotificationsManager.getMasterNotificationVolume();
    Main.log("Master Volume 1 after setting " + notifi_volume);
    EventInjector.KeyCodeEvent ev1 = new EventInjector.KeyCodeEvent(
            EventInjector.KeyCodeEvent.KEY_DOWN, ((char) Keypad.KEY_END),
            KeypadListener.STATUS_ALT, 100);
    try {
        Thread.sleep(1000);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    EventInjector.invokeEvent(ev1);
    EventInjector.invokeEvent(ev1);
    net.rim.device.api.system.Alert.setVolume(master_volume);
    // System.out.println("Master volume 2 " + master_volume);
    requestBackground();
}
When it runs on OS 5.0, it can block calls, but the screen switches to the dial screen and shows a notification dialog about a new incoming call, and setting the volume has no effect. It runs OK on OS 7.0 and 6.0, but the volume setting has no effect there either. What should I do? Thank you.
That's a good piece of malware, but anyway:
1. Detect the incoming call.
2. Terminate it.
3. Put your app in the foreground again.
For #1 you need to detect active calls (use the PhoneListener class). #2 is the most difficult step, and you are going to need key injection to accomplish it. It is a bit hackish:
EventInjector.KeyCodeEvent ev = new EventInjector.KeyCodeEvent(EventInjector.KeyCodeEvent.KEY_DOWN, ((char)Keypad.KEY_END), KeypadListener.STATUS_ALT);
EventInjector.invokeEvent(ev);
The #3 point can be done in two different ways:
3.1: Pass a reference to your app to the PhoneListener implementation and then call <YourUiApplication>.requestForeground()
3.2: Given that the code in a PhoneListener runs inside the phone app (this should answer your second question), call:
UiApplication.getUiApplication().requestBackground();
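A rough sketch of how these pieces could fit together. The listener and registration call follow the BlackBerry phone API, but the exact wiring (and the package paths of the imports) should be verified against your SDK; this is untested illustration, not a drop-in solution.
import net.rim.blackberry.api.phone.AbstractPhoneListener;
import net.rim.blackberry.api.phone.Phone;
import net.rim.device.api.system.EventInjector;
import net.rim.device.api.system.KeypadListener;
import net.rim.device.api.ui.Keypad;
import net.rim.device.api.ui.UiApplication;

public class CallBlocker extends AbstractPhoneListener {

    // Register once at app startup. Note: injecting key events requires the
    // event-injection permission to be granted to the application.
    public static void register() {
        Phone.addPhoneListener(new CallBlocker());
    }

    public void callIncoming(int callId) {
        // #2: terminate the call by injecting the END key.
        EventInjector.KeyCodeEvent end = new EventInjector.KeyCodeEvent(
                EventInjector.KeyCodeEvent.KEY_DOWN,
                (char) Keypad.KEY_END,
                KeypadListener.STATUS_ALT);
        EventInjector.invokeEvent(end);
        // #3 (option 3.2): this listener runs inside the phone application,
        // so pushing it to the background hides the dialer screen again.
        UiApplication.getUiApplication().requestBackground();
    }
}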
