I am using WSO2 APIM (2.5.0) and the Analytics snapshot.
I am trying to set up a receiver, stream and publisher in APIM. For that, I am using the MQTT event input adapter.
mqtt-client-0.4.0.jar is deployed in all the required places on the APIM server.
Furthermore, I have created the receiver, attached it to the stream, and attached the stream to the publisher.
The configuration looks fine: the receiver, stream and publisher are added successfully, and no errors or exceptions appear in carbon.log.
Next, I subscribe to one of the topics, providing all the relevant information in the receiver for that MQTT topic, to receive data that is posted every second by the source dispatcher.
I then get the exception below:
[2019-04-16 10:48:44,937] INFO {org.wso2.carbon.event.input.adapter.mqtt.internal.util.MQTTAdapterListener} - MQTT Connection successful
[2019-04-16 10:48:45,813] WARN {org.wso2.carbon.event.input.adapter.mqtt.internal.util.MQTTAdapterListener} - MQTT connection not reachable MqttException (0) - com.jayway.jsonpath.PathNotFoundException: Path 'event' not found in the current context:
{"satellites":1,"valid":true,"name":section,"bearing":1.1000,"speed":0,"df":99071508810}
[2019-04-16 10:50:05,905] INFO {org.wso2.carbon.event.input.adapter.mqtt.internal.util.MQTTAdapterListener} - MQTT Connection successful
[2019-04-16 10:50:06,921] WARN {org.wso2.carbon.event.input.adapter.mqtt.internal.util.MQTTAdapterListener} - MQTT connection not reachable MqttException (0) - com.jayway.jsonpath.PathNotFoundException: Path 'event' not found in the current context:
{"satellites":1,"valid":true,"name":section12,"bearing":1.1789550,"speed":0.9898,"df":71508810}
Also, the stream runs, but it does not trigger the publisher to post these events, either through MQTT or using the HTTP adapter.
I am confused as to why this exception occurs. Is something not configured properly in the receiver, stream or publisher?
Any thoughts in this direction would be really helpful.
Thanks
I have tried the same setup on the Analytics server as well, since APIM was throwing this error, but the behaviour was the same on the Analytics server.
I have two machines and am testing MQTT bridge connections with Sparkplug B payloads.
I have the bridges working as expected, but when I simulate some failure points as annotated in the image below, things are not working as expected. My expectation is that an NDEATH will be visible on the broker on Machine B when any of the three points in the image disconnect.
When I kill the publisher or the local MQTT Broker on Machine A, I do indeed see the NDEATH as expected when subscribed to the Machine B MQTT Broker, but when I pull the plug between Machine A & B as noted by #3 in the image, I do not see a NDEATH! I have waited for a long period to make sure the 60 second keep alive has had plenty of time to react which I understand to be 1.5x the keep alive typically.
The publisher and Broker on Machine A continue to operate and when the connection at point #3 is brought back online, all messages are delivered, but I was expecting with the bridge connection down, any nodes that had published a last will & testament (LWT) would see an NDEATH due to the connection loss at any point.
I have tested with mosquitto, vernemq and a little with hive-ce; however, hive-ce is severely limited in functionality. Am I missing something in my understanding of MQTT bridging? Shouldn't an NDEATH be sent in all three scenarios?
From the sparkplug spec:
A critical aspect for MQTT in a real-time SCADA/IIoT application is making sure that the primary MQTT SCADA/IIoT Host Node can know the "STATE" of any EoN node in the infrastructure within the MQTT Keep Alive period (refer to section 3.1.2.10 in the MQTT Specification). To implement the state a known Will Topic and Will Message is defined and specified. The Will Topic and Will Message registered in the MQTT CONNECT session establishment, collectively make up what we are calling the Death Certificate. Note that the delivery of the Death Certificate upon any MQTT client going offline unexpectedly is part of the MQTT protocol specification, not part of this Sparkplug™ specification (refer to section 3.1 CONNECT in the MQTT Specification for further details on how an MQTT Session is established and maintained).
So, in MQTT terms, an NDEATH is a 'Will', which, as mentioned above, is defined in section 3.1 of the MQTT spec:
If the Will Flag is set to 1 this indicates that, if the Connect request is accepted, a Will Message MUST be stored on the Server and associated with the Network Connection. The Will Message MUST be published when the Network Connection is subsequently closed unless the Will Message has been deleted by the Server on receipt of a DISCONNECT Packet
In summary, NDEATH registers a 'Will' which the MQTT broker publishes if it loses the connection with the publisher (unless a DISCONNECT is received first).
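The broker-side behaviour just described can be sketched as a toy in-memory model (purely illustrative, not a real broker; the type and method names are made up for this sketch):

```go
package main

import "fmt"

// WillBroker is a toy model of the Will mechanism described above.
type WillBroker struct {
	wills     map[string]string // clientID -> Will message from CONNECT
	published []string          // messages the broker has published
}

func NewWillBroker() *WillBroker {
	return &WillBroker{wills: make(map[string]string)}
}

// Connect stores the Will registered in the CONNECT packet.
func (b *WillBroker) Connect(clientID, will string) { b.wills[clientID] = will }

// Disconnect models a clean MQTT DISCONNECT: the Will is deleted, not sent.
func (b *WillBroker) Disconnect(clientID string) { delete(b.wills, clientID) }

// ConnectionLost models an unexpected drop (e.g. keep-alive expiry):
// the stored Will (the Sparkplug NDEATH) is published.
func (b *WillBroker) ConnectionLost(clientID string) {
	if w, ok := b.wills[clientID]; ok {
		b.published = append(b.published, w)
		delete(b.wills, clientID)
	}
}

func main() {
	b := NewWillBroker()
	b.Connect("eon-node-1", "NDEATH")
	b.ConnectionLost("eon-node-1") // unexpected drop -> Will is published
	fmt.Println(b.published)       // [NDEATH]

	b.Connect("eon-node-2", "NDEATH")
	b.Disconnect("eon-node-2") // clean DISCONNECT -> Will deleted, never sent
	b.ConnectionLost("eon-node-2")
	fmt.Println(len(b.published)) // still 1
}
```

The key point for the bridging question below: this state lives only on the broker the client connected to, which is why a remote broker on the far side of a bridge never sees it.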
When you establish a bridge, it relays messages published on whatever topic(s) the bridge is configured to relay. The bridge only communicates published messages, not information about which clients are connected (or any 'Will' they may have set), so when the bridged connection goes down, subscribers will not receive the NDEATH.
Monitoring the connection status of bridges is not covered by the spec, so options vary from broker to broker. For example, Mosquitto can (using a 'Will' on the bridge connection) provide a notification when the connection goes down (see notifications in mosquitto.conf). This may give you some options for getting the information you need.
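As a concrete illustration, the bridge on Machine A could be configured along these lines (the connection name, remote address and topic pattern are placeholders; see the bridge section of the mosquitto.conf man page for the full option list):

```
# mosquitto.conf on Machine A: bridge to Machine B
connection bridge-to-b
address machine-b.example.com:1883
topic spBv1.0/# out 1

# Publish the bridge's connection state (1 = connected, 0 = down)
# as a retained message, so subscribers on either side can watch it.
notifications true
notification_topic $SYS/broker/connection/bridge-to-b/state
```

A SCADA host on Machine B can then subscribe to the notification topic and treat a `0` there as "everything behind the bridge is of unknown state", which is the closest you can get to a bridge-level death certificate.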
Recently, one of our MQTT clients has been disconnected by Solace quite often on our Development Solace appliance, but there is no issue for the same client on the Test Solace appliance. We have no clue why this happens.
Upon checking Solace event log, I noticed there are quite a number of records in the event log for CLIENT_CLIENT_DISCONNECT_MQTT event. There are different reasons given for the event. The unique reasons I filtered out from the event log are listed below. May I know what could be the causes of these reasons?
Following are the reasons for CLIENT_CLIENT_DISCONNECT_MQTT event I filtered from the event log:
1. Client Disconnect Received
2. Forced Logout
3. Peer TCP Closed
4. Peer TCP Reset
I tried to think of the possible causes. For (1), does that mean the client performed a normal MQTT disconnect call? For (2), could it be triggered by our backend application, which issues a SEMP command to disconnect the client (we do have such a function in the backend application)? As for (3) and (4), I am not sure under what circumstances they happen, as our MQTT client does not do anything specific that could cause a disconnection.
Is there any documentation of these reasons and an explanation of their causes?
I found the answer in Solace syslog documentation, https://docs.solace.com/System-and-Software-Maintenance/Monitoring-Events-Using-Syslog.htm
In addition, I did a simple experiment and found the following:
Client Disconnect Received: when the client makes a normal MQTT disconnect call
Forced Logout: (a) when Solace disconnects a client because a duplicate client ID is used; (b) when a SEMP command is used to disconnect the client
Peer TCP Closed: when the client's TCP connection is closed without an MQTT disconnect call
Peer TCP Reset: when the client's connection is interrupted (e.g. the client program is killed by pressing Ctrl+C)
We have an IoT device which is configured to communicate with our Dashboard via an MQTT bridge from various service providers, such as Google, AWS and Azure.
So the flow is:
1. The device starts a TLS session with the service provider.
2. The device subscribes to a particular topic and waits for messages from the service provider, with a 5-second timeout.
3. The Dashboard publishes messages to the same topic periodically.
4. The IoT service provider broadcasts them to all subscribed devices.
5. Publish and subscribe messages both use MQTT QoS 1.
Observation:
AWS and Azure work fine with the above flow, but the device stops receiving messages from the Google MQTT bridge after 3-5 successful iterations, even though our dashboard is still publishing messages to the Google IoT MQTT bridge.
For Google, we have identified that the control flow is different compared with Azure and AWS.
For Google, we need to subscribe and unsubscribe to a given topic every time before waiting to receive a message, while for AWS and Azure we only need to subscribe once when opening the MQTT connection.
Issue:
Sometimes the 5-second device timeout occurs because the device does not receive messages for the subscribed topic from the Google MQTT bridge. Adding multiple retries to work around the timeout was unsuccessful; the issue persists, and the device stops receiving messages from the Google MQTT bridge 45-60 seconds after powering on.
Is there a constraint with the Google MQTT bridge that prevents receiving messages periodically without re-subscribing every time?
How can the device receive messages from the Google MQTT bridge without timing out (5 s)?
Is there any workaround to recover a device once it has timed out, by re-establishing the MQTT connection?
I am using Google IoT Core as well. My device-side code for the MQTT client is Go, using the Paho MQTT package. This client package supports an OnConnect handler; using it, I achieved the recovery which I think you are looking for.
Via this handler I re-subscribe to the "config" topic.
I think that Google does not save the subscriptions which the clients are subscribed to, and therefore the client needs to re-subscribe upon each successful connection.
Here's the Go code I've used (inspired by gingi007's answer, thank you!):

import (
    "fmt"

    MQTT "github.com/eclipse/paho.mqtt.golang"
)

// Subscribe inside the handler, so the subscription is renewed on
// every (re)connect rather than being made only once.
var onConn MQTT.OnConnectHandler = func(client MQTT.Client) {
    fmt.Println("connected")
    client.Subscribe(topic.Config, 1, handlerFunc)
}

mqttOpts.SetOnConnectHandler(onConn)
client := MQTT.NewClient(mqttOpts)
This way, config updates keep flowing to my device; if you subscribe outside of the OnConnect handler, you'll receive just one config update, when you first connect.
I have an existing application which was running on Solace jar v7.1.2 in pub/sub mode. We have now upgraded to v10.1.1 and, as part of implementing a DR (Disaster Recovery) setup, I have added one more host to the configuration, comma-separated.
The application connects to the primary host successfully, but during switch-over (i.e. from primary to DR) it fails to connect, and I receive the error below. It does connect to the DR host if I restart my application.
com.solacesystems.jcsmp.JCSMPErrorResponseException: 400: Unknown Flow Name [Subcode:55]
at com.solacesystems.jcsmp.impl.flow.PubFlowManager.doPubAssuredCtrl(PubFlowManager.java:266)
at com.solacesystems.jcsmp.impl.flow.PubFlowManager.notifyReconnected(PubFlowManager.java:452)
at com.solacesystems.jcsmp.protocol.impl.TcpClientChannel$ClientChannelReconnect.call(TcpClientChannel.java:2097)
... 5 more
|EAI-000376|||ERROR| |EAI-000376 JMS Exception occurred, Description: `Error sending message - unknown flow name ((JCSMPTransportException)
I need help understanding whether some configuration is needed for reconnecting to the DR host, for a smooth switch-over.
In Solace JMS API versions earlier than 7.1.2.226, any sessions on which the clients have published Guaranteed messages will be destroyed after a DR switch-over. To indicate the disconnect and loss of publisher flow, the JMS API generates this exception. Upon receiving it, the client application should create a new session. After a new session is established, the client application can republish any Guaranteed messages that had been sent but not acknowledged on the previous session, as these messages might not have been persisted and replicated.
However, this behavior was improved in version 7.1.2.226 and later, so that the API handles this transparently. It is no longer necessary to implement code to catch this exception. Can you please verify that the application is not using an API version earlier than 7.1.2.226? This can be done by enabling debug-level logs.
As Alexandra pointed out, when using guaranteed messaging, as of version 7.1.2 the Solace JMS API guarantees delivery even in the case of failover. It is normal to receive INFO-level log messages that say "Error Response (400) - Unknown Flow Name"; this does not indicate a problem. Exceptions (with stack traces), however, are a problem and indicate that delivery is not guaranteed.
Background: if the connection between the client and the broker (on the Solace server) is terminated unexpectedly, the broker maintains the flow state — but only for three minutes. The state is also copied to the HA mate broker to support failover (but not to the replication mate). If the client reconnects within three minutes, it can resume where it left off. If it reconnects after three minutes, the server will respond with the following (which will be echoed to the logs):
2019-01-04 10:00:59,999 INFO [com.solacesystems.jcsmp.impl.flow.PubFlowManager] (Context_2_Thread_reconnect_service) Error Response (400) - Unknown Flow Name
2019-01-04 10:00:59,999 INFO [com.solacesystems.jcsmp.impl.PubADManager] (Context_2_Thread_reconnect_service) Unknown Publisher Flow (flowId=36) recovered: 1 messages renumbered and resent (lastMessageIdSent =0)
That's okay: the client JMS library will automatically resend whatever messages are necessary, so guaranteed messaging is still guaranteed.
Also, just to confirm, the jar name indicates the version, so sol-jms-10.1.1.jar uses version 10.1.1.
I am using MQTT to implement a kind of email notification system. I am also planning to use it to trigger notifications inside the webapp. I am confused about whether MQTT stores data on the server side when we publish to the MQTT URL with a publisher id in JSON format. The reason I am asking is that, in my case, MQTT stores only the last published message; if I send another one, the previous one disappears. I want to know whether queuing is present on the MQTT side by design (as I assumed MQ stands for Message Queuing), or whether it has to be implemented on the server/consumer side.
There is a common error on the Internet: MQTT stands for MQ Telemetry Transport, not Message Queueing Telemetry Transport. It was created by IBM (with Eurotech) and was part of the MQ product family at IBM.
MQTT doesn't have queues. The broker receives a message on a topic and forwards it to all subscribers of that topic.
There are two main variations on this behaviour:
If the publisher sends a message with the "retain" flag set, the broker stores this message (and only this one, per topic). If a client subscribes to that topic, the broker immediately sends this last stored message; it's the so-called "last known" message.
If the subscriber connects to the broker with "clean session" set to false, the broker saves all its subscriptions and, while the client is offline, stores the messages published to the topics it is subscribed to. It's like a queue, but not a true queue. With "clean session" false, if the client goes offline and publishers keep sending messages to topics it is subscribed to, the broker stores these messages; when the client comes back online, it receives all the messages it missed.
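These two behaviours can be summarised with a toy in-memory model (purely illustrative, not a real broker; all names here are invented for the sketch):

```go
package main

import "fmt"

// Broker is a toy model: one retained message per topic, plus a
// per-client store of messages published while the client is offline.
type Broker struct {
	retained map[string]string   // topic -> last retained message only
	queues   map[string][]string // clientID -> messages stored while offline
}

func NewBroker() *Broker {
	return &Broker{retained: make(map[string]string), queues: make(map[string][]string)}
}

// GoOffline registers a clean-session=false client whose messages
// should be stored while it is away.
func (b *Broker) GoOffline(clientID string) { b.queues[clientID] = nil }

// Publish: a retained message replaces the previous one for its topic;
// offline clean-session=false clients get the message stored.
func (b *Broker) Publish(topic, msg string, retain bool) {
	if retain {
		b.retained[topic] = msg
	}
	for id := range b.queues {
		b.queues[id] = append(b.queues[id], msg)
	}
}

// Subscribe returns the retained "last known" message, if any.
func (b *Broker) Subscribe(topic string) (string, bool) {
	m, ok := b.retained[topic]
	return m, ok
}

// Reconnect drains the messages stored while the client was offline.
func (b *Broker) Reconnect(clientID string) []string {
	msgs := b.queues[clientID]
	delete(b.queues, clientID)
	return msgs
}

func main() {
	b := NewBroker()
	b.Publish("status", "v1", true)
	b.Publish("status", "v2", true) // replaces "v1": only the last is kept
	if m, ok := b.Subscribe("status"); ok {
		fmt.Println("retained:", m) // retained: v2
	}

	b.GoOffline("sensor-1")
	b.Publish("status", "v3", false)
	fmt.Println("stored:", b.Reconnect("sensor-1")) // stored: [v3]
}
```

The first half shows why the questioner sees only the last message: retain keeps exactly one message per topic. The second half shows the clean-session=false behaviour, which is the closest MQTT itself gets to a queue.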
Paolo.