What is the best/recommended way to start and stop a FlowReceiver in a J2EE/JEE application? - solace

What is the best place to start/stop a Solace listener which subscribes to a topic? Is it OK to create/initialize the connection in a @PostConstruct method and close it in a @PreDestroy method?
In all the examples I see, a CountDownLatch is used to read just 'n' messages, whereas I need to listen for messages continuously.
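For reference, a minimal sketch of what that lifecycle could look like with the JCSMP Java API: the session and flow are created in @PostConstruct and released in @PreDestroy, and the listener callback handles every message, so no CountDownLatch is involved. The host, VPN, credentials and queue name below are placeholders, and the exact createFlow overload should be checked against your JCSMP version.

    import javax.annotation.PostConstruct;
    import javax.annotation.PreDestroy;
    import javax.ejb.Singleton;
    import javax.ejb.Startup;

    import com.solacesystems.jcsmp.BytesXMLMessage;
    import com.solacesystems.jcsmp.ConsumerFlowProperties;
    import com.solacesystems.jcsmp.EndpointProperties;
    import com.solacesystems.jcsmp.FlowReceiver;
    import com.solacesystems.jcsmp.JCSMPException;
    import com.solacesystems.jcsmp.JCSMPFactory;
    import com.solacesystems.jcsmp.JCSMPProperties;
    import com.solacesystems.jcsmp.JCSMPSession;
    import com.solacesystems.jcsmp.XMLMessageListener;

    @Singleton
    @Startup
    public class SolaceListenerBean {

        private JCSMPSession session;
        private FlowReceiver flow;

        @PostConstruct
        public void start() {
            try {
                JCSMPProperties props = new JCSMPProperties();
                props.setProperty(JCSMPProperties.HOST, "tcp://solace-host:55555"); // placeholder
                props.setProperty(JCSMPProperties.VPN_NAME, "default");             // placeholder
                props.setProperty(JCSMPProperties.USERNAME, "user");                // placeholder
                session = JCSMPFactory.onlyInstance().createSession(props);
                session.connect();

                // Bind to the queue (or durable topic endpoint) the topic is mapped to.
                ConsumerFlowProperties flowProps = new ConsumerFlowProperties();
                flowProps.setEndpoint(JCSMPFactory.onlyInstance().createQueue("my.queue")); // placeholder

                flow = session.createFlow(new XMLMessageListener() {
                    @Override
                    public void onReceive(BytesXMLMessage msg) {
                        // called for every message, for as long as the flow is started
                    }

                    @Override
                    public void onException(JCSMPException e) {
                        // log and decide whether to rebuild the flow
                    }
                }, flowProps, new EndpointProperties());

                flow.start();
            } catch (Exception e) {
                throw new IllegalStateException("Could not start Solace flow", e);
            }
        }

        @PreDestroy
        public void stop() {
            try {
                if (flow != null) {
                    flow.stop();   // stop delivery
                    flow.close();  // release the flow
                }
                if (session != null) {
                    session.closeSession();
                }
            } catch (Exception e) {
                // log; nothing more to do during shutdown
            }
        }
    }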

Related

Synchronous MQTT communication using Paho client

I have a scenario where a mobile app calls a REST API hosted by my application. Within this process, I need to send a message to a downstream system over MQTT, wait until I get the response for that message, and then reply back to the mobile app.
The challenge here is that messaging over MQTT is asynchronous, so the message I receive back arrives on a different thread (some listener class, listening on messageArrived()). How do I get back to the calling HTTP thread?
Does the Paho library support synchronous communication? Something like: I send a message, subscribe to a topic, and wait on it until a message is received or a timeout expires?
MQTT by its very nature is asynchronous, as are all pub/sub implementations. There is no concept of a reply to a message at the protocol level; you have no way of knowing if you will EVER get a response (or you may get many) to a published message, as you can't know if there is even a subscriber to the topic you publish on.
It is possible to build a system that will work this way, but you need to maintain a state machine of all in-flight requests, implement a sensible timeout policy, and work out what to do if you get more than one response.
You have not mentioned which of the different Paho libraries you are using, though I'm guessing Java from the method names. Without knowing what HTTP framework you are using, and a host of other factors, I'm not going to suggest a full solution, especially as it will involve a lot of polling and synchronisation.
Is there any reason why the mobile application can't publish and subscribe to MQTT topics directly? That would remove the need for this bridging.
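One common way to bridge the callback back to the HTTP thread is to park each request on a future keyed by a correlation id and complete it from the subscription callback. A rough sketch with the Eclipse Paho Java client; the reply/<id> topic convention and the assumption that the downstream system echoes the id back are mine, not anything Paho prescribes:

    import java.util.Map;
    import java.util.UUID;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.TimeUnit;

    import org.eclipse.paho.client.mqttv3.MqttClient;
    import org.eclipse.paho.client.mqttv3.MqttException;
    import org.eclipse.paho.client.mqttv3.MqttMessage;

    public class MqttRequestReplyBridge {

        private final MqttClient client;
        private final Map<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

        public MqttRequestReplyBridge(String brokerUrl) throws MqttException {
            client = new MqttClient(brokerUrl, MqttClient.generateClientId());
            client.connect();
            // Assumed convention: replies arrive on reply/<correlationId>.
            client.subscribe("reply/#", (topic, msg) -> {
                String correlationId = topic.substring(topic.lastIndexOf('/') + 1);
                CompletableFuture<String> f = pending.remove(correlationId);
                if (f != null) {
                    f.complete(new String(msg.getPayload()));
                }
            });
        }

        /** Called from the HTTP request thread; blocks until the reply or a timeout. */
        public String requestReply(String requestTopic, String payload, long timeoutSeconds)
                throws Exception {
            String correlationId = UUID.randomUUID().toString();
            CompletableFuture<String> future = new CompletableFuture<>();
            pending.put(correlationId, future);
            try {
                // Assumed convention: the downstream system reads the id from the topic
                // and uses it when publishing its reply.
                client.publish(requestTopic + "/" + correlationId,
                        new MqttMessage(payload.getBytes()));
                return future.get(timeoutSeconds, TimeUnit.SECONDS);
            } finally {
                pending.remove(correlationId); // clean up on timeout as well
            }
        }
    }

Whether the downstream system can actually honour such a convention is, as the answer says, the real question; the sketch only shows the state-keeping and timeout side.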

How to identify the start and end of a stream through wowza?

I'm new to Wowza. I need to know whether there is any mechanism to identify the start and end of a stream through Wowza. This service will monitor each of the inbound ports that are currently bound to the Wowza engine and then try to create a notification on two events:
1. Start of UDP packets into the port; the event should trigger every time a new port receives packets.
2. End of UDP packets into the port; if no packets hit the given port for a certain period of time, this event will be triggered.
Wowza has modules, and there is one that does something similar: https://www.wowza.com/forums/content.php?171-How-to-use-IMediaStreamActionNotify2-to-monitor-live-streams-%28ModuleStreamWatchDog%29
The source is public, so you can modify it to match your needs. Java skills required.
The other way is to call the REST API periodically and check for new streams, as sketched below.
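If you go the REST route, the idea is simply to poll the incoming-streams listing and diff it against the previous poll. A rough Java sketch; the URL below is a placeholder in the shape the Wowza Streaming Engine REST API usually uses, so verify the path, port, application name and authentication (typically digest auth, omitted here) against your installation:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.HashSet;
    import java.util.Set;
    import java.util.concurrent.TimeUnit;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class StreamPoller {

        // Placeholder URL: adjust host, port, application and instance names for your setup.
        private static final String STREAMS_URL =
                "http://localhost:8087/v2/servers/_defaultServer_/vhosts/_defaultVHost_"
                + "/applications/live/instances/_definst_/incomingstreams";

        public static void main(String[] args) throws Exception {
            HttpClient http = HttpClient.newHttpClient();
            Set<String> known = new HashSet<>();

            while (true) {
                HttpRequest request = HttpRequest.newBuilder(URI.create(STREAMS_URL))
                        .header("Accept", "application/json")
                        .build();
                String body = http.send(request, HttpResponse.BodyHandlers.ofString()).body();

                // Crude extraction of stream names; use a JSON library in real code.
                Set<String> current = new HashSet<>();
                Matcher m = Pattern.compile("\"name\"\\s*:\\s*\"([^\"]+)\"").matcher(body);
                while (m.find()) {
                    current.add(m.group(1));
                }

                for (String s : current) {
                    if (!known.contains(s)) {
                        System.out.println("Stream started: " + s);
                    }
                }
                for (String s : known) {
                    if (!current.contains(s)) {
                        System.out.println("Stream ended: " + s);
                    }
                }
                known = current;

                TimeUnit.SECONDS.sleep(10); // poll interval, i.e. your "certain period of time"
            }
        }
    }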

Sending Commands with MQTT - is there a pattern?

I'm new to MQTT, but I have got some basic Python programs working where sensor readings can be published to a particular topic, and other clients can then subscribe to get the temperature on an event-driven basis.
But when it comes to sending commands, I'm a little stuck on the best way to do this.
So, for example, take a 'countdown timer' connected to MQTT.
This timer has two states: 'stopped' and 'started'.
It will initialize itself into the 'stopped' state and wait for a 'start' command; it will then count down, publishing the current countdown to a topic.
When the countdown reaches zero, it will switch its state to 'stopped' again and wait for another 'start' command.
If it receives a 'stop' command (over MQTT), it should also go into the 'stopped' state.
So perhaps I could create topics something like:
countdown_timer/command
countdown_timer/state
countdown_timer/value
And the countdown device could subscribe to 'command' and react by publishing to 'state'. ('stopped' or 'started'?)
But should the client somehow 'consume' the 'command' topic value once it has processed it?
Or would it be better to have something like:
countdown_timer/send_command
countdown_timer/command_result
Where the controller would send a command, and the subscribed device would carry out the command and put 'ok' or 'error' on the 'command_result' topic?
In general, both approaches that you describe are valid MQTT patterns. You choose what is most appropriate for your application. Here are some comments:
For your countdown timer, I would go with your first suggestion. But for other applications, other approaches may make more sense.
If you write to countdown_timer/state and countdown_timer/value, you may want to publish these messages as retained. This will ensure that newly subscribed clients immediately receive the latest value.
If your countdown timer process is always running, then you don't need the retained flag for countdown_timer/command, but it sometimes makes sense, when a server process can fail, restart and reconnect, to just continue with the last command.
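To make the first suggestion concrete, here is a rough sketch of the device side, written with the Eclipse Paho Java client (the question uses Python, but the structure is the same): subscribe to countdown_timer/command, publish state and value as retained messages, QoS and error handling omitted.

    import org.eclipse.paho.client.mqttv3.MqttClient;
    import org.eclipse.paho.client.mqttv3.MqttException;
    import org.eclipse.paho.client.mqttv3.MqttMessage;

    public class CountdownTimer {

        private final MqttClient client;
        private volatile boolean running = false;

        public CountdownTimer(String brokerUrl) throws MqttException {
            client = new MqttClient(brokerUrl, "countdown-timer");
            client.connect();
            publishRetained("countdown_timer/state", "stopped");

            client.subscribe("countdown_timer/command", (topic, msg) -> {
                String command = new String(msg.getPayload());
                if ("start".equals(command) && !running) {
                    new Thread(this::countDown).start();
                } else if ("stop".equals(command)) {
                    running = false;   // the countdown loop notices this and stops
                }
            });
        }

        private void countDown() {
            running = true;
            publishRetained("countdown_timer/state", "started");
            try {
                for (int i = 60; i >= 0 && running; i--) {
                    publishRetained("countdown_timer/value", Integer.toString(i));
                    Thread.sleep(1000);
                }
            } catch (InterruptedException e) {
                // ignore for the sketch
            }
            running = false;
            publishRetained("countdown_timer/state", "stopped");
        }

        private void publishRetained(String topic, String payload) {
            try {
                MqttMessage m = new MqttMessage(payload.getBytes());
                m.setRetained(true);   // new subscribers immediately get the latest value
                client.publish(topic, m);
            } catch (MqttException e) {
                // ignore for the sketch
            }
        }
    }

A last-will message that sets countdown_timer/state to something like 'offline' would also fit here, so subscribers notice if the device disappears without publishing a final state.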
The send_command and command_result pattern is common for MQTT when one client speaks to one server and receives one answer for each question. That doesn't seem to fit the current example well: you don't have one specific answer to send for each command.
Here is another pattern for client-server applications: The server subscribes to one channel server/command and each client subscribes to a separate channel: client/1, client/2, client/3 etc. When a client sends a command to the server, it includes its client id --- and the server responds on the corresponding channel.
A modification of this pattern is to use independent channels for command queries: service/1, service/2, etc. The first client publishes to service/1 and subscribes to client/1. The second client publishes to service/2 and subscribes to client/2. The server subscribes to service/#, extracts the client id from the topic name of the received message, and responds on the corresponding client channel.
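A rough sketch of the server side of that last pattern, again with the Eclipse Paho Java client and the topic names described above (error handling and QoS omitted):

    import org.eclipse.paho.client.mqttv3.MqttClient;
    import org.eclipse.paho.client.mqttv3.MqttException;
    import org.eclipse.paho.client.mqttv3.MqttMessage;

    public class CommandServer {

        public static void main(String[] args) throws MqttException {
            MqttClient server = new MqttClient("tcp://localhost:1883", "command-server");
            server.connect();

            // One wildcard subscription covers service/1, service/2, ...
            server.subscribe("service/#", (topic, msg) -> {
                // The client id is the last segment of the topic name.
                String clientId = topic.substring(topic.lastIndexOf('/') + 1);
                String command = new String(msg.getPayload());

                String result = handle(command);   // application-specific work

                // Respond on the per-client channel that client is subscribed to.
                server.publish("client/" + clientId, new MqttMessage(result.getBytes()));
            });
        }

        private static String handle(String command) {
            return "ok: " + command;
        }
    }

If the per-command work is slow, hand it off to a thread pool from inside the callback so the receive path stays responsive.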
You see: there are many valid patterns for MQTT. On the one hand, this flexibility is an advantage; on the other hand, it puts the responsibility on you to choose wisely.

Two-way TCP communication in Indy 10?

I am using TIdCmdTCPClient and TIdCmdTCPServer. Suddenly I find that I might like to have bi-directional communication.
What would be best? Should I possibly use some other components? If so, which? Or should I kludge it and have the 'client' poll the 'server' to ask if it wishes to communicate anything?
This is a very small system. Two clients and ten servers, with a burst of one transaction every 30 to 60 seconds for a few minutes once a day, so the overhead of polling is inconsequential.
I'm just wondering if there is a 'correct' way.
Update: this really is an incredibly simple system. Very little traffic and all of it simple. All transmissions are an indication of an event type and an optional single parameter.
<event type> [ <parameter>] e.g. "HERE_IS_SOME_DATA 42"
This can be sent in both directions; however, there is no "reply" as such. Just fire off a message (and hope that it got there)? Receive an ack with no data? Does the absence of an exception indicate that the message was successfully sent?
Would it be possible (would it be overkill) to use two TIdCmdTCPServer instances?
Both TIdCmdTCPClient and TIdCmdTCPServer continuously poll their socket endpoints for inbound data during the lifetime of the connection. You do not have to do anything extra for that. So, as soon as a TIdCmdTCPClient connects to the TIdCmdTCPServer, both components will initially be in a reading state until one of them sends a command to the other.
Now, there is a problem with doing that - as soon as either component sends that first command, the receiving component will interpret it as a command and send back a reply, which the other component will interpret as a command and send back a reply, which will be interpreted as a command and send back a reply, and so on, causing an endless cycle of replies back and forth. For that reason, it is not wise to use TIdCmdTCPClient and TIdCmdTCPServer together. You should either use TIdTCPClient with TIdCmdTCPServer, or use TIdCmdTCPClient with TIdTCPServer. Depending on what exactly your protocol looks like, you may have to forgo using TIdCmdTCPClient and TIdCmdTCPServer altogether and just use TIdTCPClient with TIdTCPServer so you have more control over reading and writing on both ends. It is hard to answer with actual code without first knowing what the communication protocol should look like.
A single TCP socket connection can be used in two directions. The server can send data asynchronously to the client at any time. It is up to the client, however, to read the socket; for asynchronous processing this is done in a listener thread which reads from the socket and synchronizes incoming data operations with the main worker thread.
An example use case in the Indy components is the Telnet client component (TIdTelnet) which has a receive thread listening for server messages.
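To illustrate the structure in plain Java sockets rather than Indy (the shape is the same; host, port and message format are placeholders matching the "<event type> [<parameter>]" example): the client opens one connection, a listener thread reads lines the server pushes at any time, and the main thread is free to write whenever it likes.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;

    public class TwoWayClient {

        public static void main(String[] args) throws Exception {
            try (Socket socket = new Socket("localhost", 6000);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(socket.getInputStream()));
                 PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {

                // Listener thread: receives lines the server sends at any time.
                Thread reader = new Thread(() -> {
                    try {
                        String line;
                        while ((line = in.readLine()) != null) {
                            // e.g. "HERE_IS_SOME_DATA 42" -> event type plus optional parameter
                            System.out.println("Server said: " + line);
                        }
                    } catch (Exception e) {
                        // connection closed
                    }
                });
                reader.setDaemon(true);
                reader.start();

                // Meanwhile the main thread can send its own messages at any point.
                out.println("HERE_IS_SOME_DATA 42");

                reader.join();   // keep the process alive until the server closes the connection
            }
        }
    }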
But you also asked about the 'correct' way - and then the answer depends on other factors such as network stability, guaranteed delivery and how to handle temporary server outages. In enterprise environments, one central messaging hub is preferred in many use cases, so that all parties connect only to this central server which is only responsible for reliable message delivery, and keeps messages until the recipient is available.
You can download the INDY 10 TCP server demo sample code here.

Erlang: Why don't I see error_logger:info_msg output when connected by remsh?

I connect to a running node with the -remsh flag, and I run my usual Common Test sanity tests, but none of the error_logger:info_msg messages appear in the shell. Why?
The SASL default event handler will only write events to the console/tty of the local node. When connecting via "-remsh", you're starting a second node and communicating via message passing to the first. The output from the "nodes()" BIF can confirm this.
Calls to the error_logger functions will send events to the local 'error_logger' registered process, which is a gen_event server. You can manipulate it using error_logger:tty/1 and error_logger:logfile/1; see the reference docs in the "Basic" Application Group, then the "kernel" application, then the "error_logger" module.
You can also add your own event handler to the 'error_logger' server, which can then do anything you want with the event. I'd guess that error_logger:logfile/1 might be sufficient for your purposes, though.
-Scott
