awsiot mqtt reinstantiating subscriptions

I have a use case where the IoT-enabled device will be in patchy areas of connectivity (LTE/Mobile).
I am using Greengrass. While core GG services will re-establish connectivity and subscriptions, I have a component in which I subscribe to a topic:
import os

import awsiot.greengrasscoreipc
import awsiot.greengrasscoreipc.client as client
from awsiot.greengrasscoreipc.model import (
    QOS,
    IoTCoreMessage,
    SubscribeToIoTCoreRequest,
)
...
...
handler = StreamHandler()
ipc_client = awsiot.greengrasscoreipc.connect()
operation = ipc_client.new_subscribe_to_iot_core(handler)
topic_name = f"$aws/things/{os.environ['AWS_IOT_THING_NAME']}/tunnels/notify"
request = SubscribeToIoTCoreRequest(topic_name=topic_name, qos=QOS.AT_LEAST_ONCE)
future = operation.activate(request)
future.result(15)
My question is: does the awsiot SDK re-establish broken connections and reinstate previous subscriptions out of the box, or do I have to implement this mechanism myself?
Does QOS.AT_LEAST_ONCE give me this behaviour, or are there additional configuration parameters I need to supply at the awsiot.greengrasscoreipc.connect() level?

Related

How do I use Event Store DB client without continued memory usage growth?

I am using the Event Store client for .NET and I am struggling to find the correct way to use it. When I register the client as a singleton in the .NET dependency injection container and run my application over an extended period of time, memory usage grows continuously with each subscription.
I create and register the client in the following way. A full minimal application that experiences the problem can be found here.
var esdbConnectionString = configuration.GetValue<string>("ESDB_CONNECTION_STRING", "esdb://admin:changeit@localhost:2113?tls=false");
var eventStoreClientSettings = EventStoreClientSettings.Create(esdbConnectionString);
var eventStoreClient = new EventStoreClient(eventStoreClientSettings);
services.AddSingleton(eventStoreClient);
My application has a high number of short-lived streams over an extended period of time.
To Reproduce
Steps to reproduce the behavior:
1. Register EventStoreClient as a singleton, as recommended in the documentation.
2. Subscribe to a very high number of streams over an extended time.
3. Cancel the CancellationToken passed into the stream subscription and let it be garbage collected.
4. Watch the memory usage of the service grow.
How I am creating and subscribing to streams:
var streamName = CreateStreamName();
var payload = new PingEvent { StreamNr = _currentStreamNumber };
var eventData = new EventData(Uuid.NewUuid(), typeof(PingEvent).Name, EventSerialization.SerializeEventData(payload));
await _client.AppendToStreamAsync(streamName, StreamState.Any, new[] { eventData });

var streamCancellationTokenSource = new CancellationTokenSource(TimeSpan.FromMinutes(30));
await _client.SubscribeToStreamAsync(streamName, FromStream.Start, async (sub, evnt, token) =>
    {
        if (evnt.Event.EventType == "PongEvent")
        {
            _previousStreamIsDone = true;
            streamCancellationTokenSource.Cancel();
        }
    },
    cancellationToken: streamCancellationTokenSource.Token);
Approaches attempted
Registering as Transient or Scoped
If I register the client as Transient or Scoped in the .NET DI container, it throws thousands of exceptions internally and causes multiple problems.
Manually handling lifetime of client
Using a singleton service that manages the lifetime of the client, I have attempted to dispose of the client every once in a while and create a new one, ensuring that only one instance of the client exists at a time. This results in the same problem as registering the service as Transient or Scoped.
I am using version 22.0.0 of the Event Store client in .NET 6 against Event Store Database 21.10.0. The problem happens both when running on Windows and on the standard aspnet:6.0 Linux Docker container.
By inspecting the results of these dotnet-dumps, the memory growth seems to be happening inside this HashSet of ActiveCalls in the gRPC client.
I am hoping to find a way of using the client that does not lead to memory growth.
In your reproduction the leaked calls are coming from the extra read that you are issuing while processing an event received on the subscription.
There is an open issue (https://github.com/EventStore/EventStore-Client-Dotnet/issues/219) at the moment to deal with this better, but currently, if you issue a read but don't consume all the events and don't cancel the read, the call remains open. In your case this happens if the slave has managed to reply Pong before the master has issued the read that results from receiving its own Ping in the subscription. That read will then contain both the Ping and the Pong; only the Ping is read, and the call remains open.
For now, if you cancel those reads by passing the cancellation token that you are cancelling into the ReadStreamAsync call in ReadFromStartOfStreamToEnd, it should resolve your problem.
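A minimal sketch of that change, assuming ReadFromStartOfStreamToEnd resembles the helper in the linked reproduction (_client and Handle are placeholder names, not the exact ones from your code):
private async Task ReadFromStartOfStreamToEnd(string streamName, CancellationToken token)
{
    // Forward the token that gets cancelled with the subscription, so the
    // underlying gRPC call is torn down instead of lingering in ActiveCalls.
    var events = _client.ReadStreamAsync(
        Direction.Forwards,
        streamName,
        StreamPosition.Start,
        cancellationToken: token);

    await foreach (var resolved in events.WithCancellation(token))
    {
        // Consume every event; an unconsumed, uncancelled read keeps the call open.
        Handle(resolved);
    }
}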
In case it's helpful for you, you can see the number of Current Calls live rather than waiting a long time to see the effect on memory:
dotnet-counters monitor --counters "Grpc.Net.Client" -p <processid>

JMRI keeps repeating MQTT messages to my Pico W via HiveMQ. Is there a way to stop this or reduce the repetition?

I borrowed code from Tom's Hardware on how to use MQTT and subscribe. JMRI is the publisher of the messages and it keeps repeating them over and over again. Is there any way to have each message sent only once? I don't have this problem when I subscribe to MQTT via http://www.hivemq.com/demos/websocket-client/. The MQTT service I'm using is broker.hivemq.com.
For those not familiar with JMRI, it is the Java program that model railroaders use to control track, lighting, DCC, etc. Ref: https://www.jmri.org/
The link to Tom's guide is https://www.tomshardware.com/how-to/send-and-receive-data-raspberry-pi-pico-w-mqtt
The code adapted from Tom's is:
import network
import time
import machine  # needed for machine.reset() in reconnect()
from machine import Pin
from umqtt.simple import MQTTClient

wlan = network.WLAN(network.STA_IF)
wlan.active(True)
wlan.connect("whatever", "pwd")
time.sleep(5)
print(wlan.isconnected())

mqtt_server = 'broker.hivemq.com'
client_id = 'bigles'
topic_sub = b'/trains/track/turnout/#'

def sub_cb(topic, msg):
    print("New message on topic {}".format(topic.decode('utf-8')))
    msg = msg.decode('utf-8')
    print(msg)

def mqtt_connect():
    client = MQTTClient(client_id, mqtt_server, keepalive=60)
    client.set_callback(sub_cb)
    client.connect()
    print('Connected to %s MQTT Broker' % (mqtt_server))
    return client

def reconnect():
    print('Failed to connect to MQTT Broker. Reconnecting...')
    time.sleep(5)
    machine.reset()

try:
    client = mqtt_connect()
except OSError as e:
    reconnect()

while True:
    client.subscribe(topic_sub)
    time.sleep(1)
The setup inside JMRI for MQTT (Edit -> Preferences) is as follows: [screenshot of the JMRI MQTT connection preferences]
JMRI, by default, publishes with "the retain option on". When you subscribe to a topic the broker will send you the most recent (if any) retained message. This occurs even if you already had an identical subscription as per the MQTT Spec:
If a Server receives a SUBSCRIBE Packet containing a Topic Filter that is identical to an existing Subscription’s Topic Filter then it MUST completely replace that existing Subscription with a new Subscription. The Topic Filter in the new Subscription will be identical to that in the previous Subscription, although its maximum QoS value could be different. Any existing retained messages matching the Topic Filter MUST be re-sent, but the flow of publications MUST NOT be interrupted [MQTT-3.8.4-3].
In your code you are calling Subscribe in a loop:
while True:
    client.subscribe(topic_sub)
    time.sleep(1)
To avoid the repeated messages, move the subscribe out of the loop (you only need to subscribe once!). Something like the following (simplified!) code:
client = mqtt_connect()
client.subscribe(topic_sub)
while True:
    client.wait_msg()  # use client.check_msg() if you have other stuff to do

Graph Lifecycle Notifications Not registering correct endpoint

I am trying to use the Lifecycle events within the Graph Beta API, using code like this:
var subscription = new Subscription
{
    Resource = $"users/{userObjectId}/mailFolders('{resource}')/messages",
    ChangeType = "created,updated",
    NotificationUrl = notificationWebHookUrl,
    LifecycleNotificationUrl = lifecycleNotificationWebHookUrl,
    ClientState = clientState,
    ExpirationDateTime = DateTime.UtcNow + new TimeSpan(0, 0, 4200, 0),
};
However, even though I have supplied a LifecycleNotificationUrl that is different from the NotificationUrl, the initial validation requests only go to the NotificationUrl endpoint, not the LifecycleNotificationUrl endpoint. I have checked, and I am definitely supplying different endpoint URLs.
I am using 2 separate Azure Functions with HTTP triggers as the endpoints.
Note also that I am using ngrok to expose my localhost Azure Functions.
I understand that this is the behaviour you should expect if you do not supply a LifecycleNotificationUrl, but I am supplying one.
We currently have an open issue where the validation code sends two validation requests to the notificationUrl and none to the lifecycleNotificationUrl. This is something we're trying to address, hopefully shortly. I suggest you follow this issue to get notified of any updates on the matter.
Besides that, once validation has passed, lifecycle notifications will be delivered to your lifecycleNotificationUrl and not your notificationUrl.
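For reference, both endpoints must answer Graph's validation handshake by echoing the validationToken query parameter back as text/plain with a 200. A minimal Azure Function sketch (the function name and layout are illustrative, not taken from the question):
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class LifecycleEndpoint
{
    [FunctionName("LifecycleEndpoint")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequest req)
    {
        // Validation handshake: echo the token back as plain text.
        string token = req.Query["validationToken"];
        if (!string.IsNullOrEmpty(token))
            return new ContentResult { Content = token, ContentType = "text/plain", StatusCode = 200 };

        // Otherwise this is a lifecycle notification (e.g. reauthorizationRequired).
        string body = await new StreamReader(req.Body).ReadToEndAsync();
        // ... inspect the payload and renew/reauthorize the subscription as needed ...
        return new OkResult();
    }
}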

MQTT Client Publisher Acknowledgement M2MQTT Library Code

I am using the C# M2MQTT client code to publish and subscribe to data. I have set QoS level 1 or 2, but I don't know how the publisher gets notified when delivery completes. I have searched a lot on the internet but found no code available. Please let me know if anyone knows how to handle the acknowledgement at the publisher end in C#.
MqttClient client = new MqttClient(IPAddress.Parse(mqttserverurl));
clientId = Guid.NewGuid().ToString();
client.Connect(clientId, uname, pwd);
client.Publish("testtopic", Encoding.UTF8.GetBytes("Hi"), MqttMsgBase.QOS_LEVEL_EXACTLY_ONCE, false);
You don't have to acknowledge anything yourself: the QoS 1/2 handshake is handled internally by the MQTT client library. M2MQTT has no on_publish-style callback on the Publish() call itself, but it does raise a MqttMsgPublished event when the handshake for a message completes.
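A minimal sketch of hooking that event, reusing the client from the question:
// Publish() returns the message id; MqttMsgPublished fires once the QoS 1/2
// handshake for that id has completed.
client.MqttMsgPublished += (sender, e) =>
{
    // e.IsPublished is true when the broker acknowledged the message.
    Console.WriteLine("Message " + e.MessageId + " published: " + e.IsPublished);
};

ushort msgId = client.Publish("testtopic",
    Encoding.UTF8.GetBytes("Hi"),
    MqttMsgBase.QOS_LEVEL_EXACTLY_ONCE,
    false);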

How to get Solace queue statistics from the Solclient API? (C#)

I am looking to retrieve some Solace queue stats, e.g. the current spooled message count against the maximum limit, so we can set a threshold to stop publishing more messages to the queue.
I would also like to subscribe to VPN events to track message discard rates.
By the time we receive errors such as MaxMsgUsageExceeded/SpoolOverQuota, it will be too late.
I can't seem to find any of these in the SolaceSystems.Solclient.Messaging API:
https://docs.solace.com/API-Developer-Online-Ref-Documentation/net/html/7f10bcf6-19f4-beff-0768-ced843e35168.htm
It would be great if someone could help (I am using C# for this).
To poll for Solace queue stats from your C# application, you could use legacy SEMP over the message bus to make a SEMP request for the details you want. SEMP (Solace Element Management Protocol) is a request/reply protocol that uses an XML schema to identify all managed objects available in a message broker. Applications can use SEMP to manage and monitor a message broker.
To allow for legacy SEMP to be used over the message bus, as opposed to the management interface, it first needs to be enabled on the Solace PubSub+ message broker at the VPN level.
To publish a SEMP request with the Solace .NET Messaging API, perform the following steps:
1. Create a Session.
2. Create the message topic "#SEMP/<router name>/SHOW":
ITopic topic = ContextFactory.Instance.CreateTopic("#SEMP/<router name>/SHOW");
3. Create a request message and set its Destination to the topic in step 2:
IMessage requestMsg = ContextFactory.Instance.CreateMessage();
requestMsg.Destination = topic;
4. Set the SEMP request string as the binary attachment:
string SOLTR_VERSION = "8_4_0"; // change to the message broker's version
string SEMP_SHOW_QUEUE = "<rpc semp-version=\"soltr/" + SOLTR_VERSION + "\">" +
    "<show><queue><name>queueName</name><detail></detail></queue></show></rpc>";
requestMsg.BinaryAttachment = Encoding.UTF8.GetBytes(SEMP_SHOW_QUEUE);
5. Call the SendRequest(...) method on the Session:
IMessage replyMsg;
ReturnCode rc = session.SendRequest(requestMsg, out replyMsg, timeout);
The SEMP response is returned in replyMsg.
6. Obtain the binary attachment data from the reply message:
replyMsg.BinaryAttachment
The binary attachment contains the SEMP reply for the command topic in the publish request.
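Putting those steps together, a rough end-to-end sketch (assuming an already-connected session; "router-name" and "myQueue" are placeholders):
ITopic topic = ContextFactory.Instance.CreateTopic("#SEMP/router-name/SHOW");

IMessage requestMsg = ContextFactory.Instance.CreateMessage();
requestMsg.Destination = topic;

string soltrVersion = "8_4_0"; // match the broker's SEMP version
requestMsg.BinaryAttachment = Encoding.UTF8.GetBytes(
    "<rpc semp-version=\"soltr/" + soltrVersion + "\">" +
    "<show><queue><name>myQueue</name><detail></detail></queue></show></rpc>");

// Send the request and read the XML reply from the binary attachment.
IMessage replyMsg;
ReturnCode rc = session.SendRequest(requestMsg, out replyMsg, 5000);
if (rc == ReturnCode.SOLCLIENT_OK)
{
    // Parse the reply for the spooled message count, quota, etc.
    string sempReply = Encoding.UTF8.GetString(replyMsg.BinaryAttachment);
}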
The Solace PubSub+ message broker does raise an event when an egress message is discarded. However, it is only sent out approximately once every 60 seconds for the specified client, so it is not possible to get exact rates.
It is possible for your .NET application to subscribe to VPN-level events over the message bus. To do this, you must first enable the Solace PubSub+ message broker to publish the events. You can then subscribe to the special topic and receive the events as messages.
The topic to subscribe to is:
#LOG/<level>/VPN/<routerName>/<eventName>/<vpnName>
The different levels can use the * wildcard. For example, if you wish to subscribe to all VPN events of all levels for the VPN apple on router QA-NY1, the topic string would be:
#LOG/*/VPN/QA-NY1/*/apple
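A short sketch of that subscription with the .NET API, assuming an already-connected session whose message handler was supplied when the session was created:
// Subscribe to all VPN-level events for VPN "apple" on router "QA-NY1";
// the events then arrive as messages in the session's message handler.
ITopic eventTopic = ContextFactory.Instance.CreateTopic("#LOG/*/VPN/QA-NY1/*/apple");
session.Subscribe(eventTopic, true); // true = wait for subscription confirmation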
SEMP (starting in v2) is a RESTful API for configuring, monitoring, and administering a Solace PubSub+ broker.
1. The Swagger page link is SEMP V2 API.
2. The Swagger metadata definitions URL is located at http://{solace-server-url}/SEMP/v2/config/spec
3. From Visual Studio, add a REST API Client.
4. In the configuration dialog, pass the Swagger metadata URL (defined in step 2); for code purposes I chose SolaceSemp as the input value for the client namespace.
5. Once you click OK, VS will create the client along with the models under the SolaceSemp namespace.
6. Start using the client as follows:
using SolaceSemp;
using Microsoft.Rest;

var credentials = new BasicAuthenticationCredentials();
credentials.UserName = "place user name";
credentials.Password = "place password";

using (var client = new SolaceSempClient(credentials))
{
    var model = client.GetAboutApi();
}
