How to retrieve messages from Alpakka Mqtt Streaming client?

I was following the documentation for writing an MQTT client subscriber using Alpakka.
https://doc.akka.io/docs/alpakka/3.0.4/mqtt-streaming.html?_ga=2.247958340.274298740.1642514263-524322027.1627936487
After the code below, I'm not sure how I could retrieve or interact with the subscribed messages. Any lead?
Pair<SourceQueueWithComplete<Command>, CompletionStage<Publish>> run =
    Source.<Command>queue(3, OverflowStrategy.fail())
        .via(mqttFlow)
        .collect(
            new JavaPartialFunction<DecodeErrorOrEvent, Publish>() {
              @Override
              public Publish apply(DecodeErrorOrEvent x, boolean isCheck) {
                if (x.getEvent().isPresent() && x.getEvent().get().event() instanceof Publish)
                  return (Publish) x.getEvent().get().event();
                else throw noMatch();
              }
            })
        .toMat(Sink.head(), Keep.both())
        .run(system);

SourceQueueWithComplete<Command> commands = run.first();
commands.offer(new Command<>(new Connect(clientId, ConnectFlags.CleanSession())));
commands.offer(new Command<>(new Subscribe(topic)));

session.tell(
    new Command<>(
        new Publish(
            ControlPacketFlags.RETAIN() | ControlPacketFlags.QoSAtLeastOnceDelivery(),
            topic,
            ByteString.fromString("ohi"))));

// for shutting down properly
commands.complete();
commands.watchCompletion().thenAccept(done -> session.shutdown());
Also, the following example shows how to subscribe the client, but nothing about how to get messages after the subscription.
https://github.com/pbernet/akka_streams_tutorial/blob/master/src/main/scala/alpakka/mqtt/MqttEcho.scala
I will be grateful if anyone knows the solution or can point to any resource that uses the same connector as an MQTT client and retrieves messages.

The code to retrieve messages for the subscriber is hidden in the client method which is used for both publisher and subscriber:
...
//Only the Publish events are interesting for the subscriber
.collect { case Right(Event(p: Publish, _)) => p }
.wireTap(event => logger.info(s"Client: $connectionId received: ${event.payload.utf8String}"))
.toMat(Sink.ignore)(Keep.both)
.run()
https://github.com/pbernet/akka_streams_tutorial/blob/3e4484c5356e55522366e65e42e1741c18830a18/src/main/scala/alpakka/mqtt/MqttEcho.scala#L136
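Translating that back to the Java snippet in the question: the stream is already emitting the received Publish messages, but Sink.head() only keeps the first one. A rough sketch (not verbatim from the docs; it reuses mqttFlow, system and the JavaPartialFunction from the question, referred to here by the hypothetical name collectPublish) would swap in Sink.foreach so every message gets handled:
Pair<SourceQueueWithComplete<Command>, CompletionStage<Done>> run =
    Source.<Command>queue(3, OverflowStrategy.fail())
        .via(mqttFlow)
        // collectPublish stands for the same JavaPartialFunction<DecodeErrorOrEvent, Publish>
        // shown in the question (hypothetical variable name)
        .collect(collectPublish)
        .toMat(
            // handle every received Publish instead of only the first one
            Sink.foreach(publish ->
                System.out.println("Received: " + publish.payload().utf8String())),
            Keep.both())
        .run(system);
The materialized CompletionStage<Done> (akka.Done) then completes when the stream terminates, instead of carrying a single Publish as with Sink.head().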
I was struggling with this connector and then tried an example with the connector based on Eclipse Paho, which in the end looks better:
https://github.com/pbernet/akka_streams_tutorial/blob/3e4484c5356e55522366e65e42e1741c18830a18/src/main/scala/alpakka/mqtt/MqttPahoEcho.scala#L41
Paul

Related

How to properly configure SQS without using SNS topics in MassTransit?

I'm having some issues configuring MassTransit with SQS. My goal is to have N consumers which create N queues and each of them accept a different message type. Since I always have a 1 to 1 consumer to message mapping, I'm not interested in having any sort of fan-out behaviour. So publishing a message of type T should publish it directly to that queue. How exactly would I configure that? This is what I have so far:
services.AddMassTransit(x =>
{
    x.AddConsumers(Assembly.GetEntryAssembly());

    x.UsingAmazonSqs((context, cfg) =>
    {
        cfg.Host("aws", h =>
        {
            h.AccessKey(mtSettings.AccessKey);
            h.SecretKey(mtSettings.SecretKey);
            h.Scope($"{mtSettings.Environment}", true);

            var sqsConfig = new AmazonSQSConfig() { RegionEndpoint = RegionEndpoint.GetBySystemName(mtSettings.Region) };
            h.Config(sqsConfig);

            var snsConfig = new AmazonSimpleNotificationServiceConfig()
                { RegionEndpoint = RegionEndpoint.GetBySystemName(mtSettings.Region) };
            h.Config(snsConfig);
        });

        cfg.ConfigureEndpoints(context, new BusEnvironmentNameFormatter(mtSettings.Environment));
    });
});
The BusEnvironmentNameFormatter class overrides KebabCaseEndpointNameFormatter and adds the environment as a prefix, and the effect is that all the queues start with 'dev', while the h.Scope($"{mtSettings.Environment}", true) line does the same for topics.
I've tried to get this working without configuring topics at all, but I couldn't get it working without any errors. What am I missing?
The SQS docs are a bit thin, but is it actually possible to do a bus.Publish() without using SNS topics, or are they necessary? If it's not possible, how would I use bus.Send() without hardcoding queue names in the call?
Cheers!
Publish requires the use of topics, which in the case of SQS uses SNS.
If you want to configure the endpoints yourself, and prevent the use of topics, you'd need to:
Set ConfigureConsumeTopology = false – this prevents topics from being created and connected to the receive endpoint queue.
Set PublishFaults = false – this prevents fault topics from being created when a consumer throws an exception.
Don't call Publish, because that will obviously create a topic.
If you want to establish a convention for your receive endpoint names that aligns with your ability to send messages, you could create your own endpoint name formatter that works from the message types, and then use those same names with GetSendEndpoint (using the queue:name short-name syntax) to Send messages directly to those queues.

Twilio "Unable to create Room" error comes up in JS API

For some reason I'm unable to create a room directly out of JS API like this:
TwillioVideo.connect(twillioToken, {name: 'my-name'})
  .then(room => {
    ....
  }, error => {
    console.error('Unable to connect to Room: ' + error.message);
  })
.connect method only works for me if the room was previously created, for instance:
I create the room first with C# like this:
public string CreateRoom(string roomName) {
    TwilioClient.Init(_twilioSettings.AccountSid, _twilioSettings.AuthToken);
    RoomResource room = RoomResource.Create(uniqueName: roomName);
    return room.Sid;
}
Then, after it is created, I can connect to it with no problem.
So I'm forced to create the room via the C# API and then use it in the JS API, but I would rather avoid this step.
Also, I did not find a way to determine whether a room with the unique name already exists prior to calling RoomResource.Create(uniqueName: roomName), because if it does exist this method throws an exception. I would rather get back the SID of the existing room in that case.
Please advise
Twilio developer evangelist here.
In order to create rooms via the JS SDK, you need to have Client-side room creation enabled in your video settings in the Twilio console.

How to use bluetooth devices and FIWARE IoT Agent

I would like to use my bluetooth device (for example, I'm going to create an app to be installed on a tablet) to send data (a set of attributes) to the Orion Context Broker via an IoT Agent.
I've been looking at the FIWARE IoT Agents and I probably have to use the IoT Agent for LWM2M. Is that correct?
Thanks in advance and regards.
Pasquale
Assuming you have freedom of choice, you probably don't need an IoT Agent for that, you just need a service acting as a bluetooth receiver which can receive your message and pass it on using a recognisable transport.
For example, you can receive data using the following Stack Overflow answer
You can then extract the necessary information to identify the device and the context to be updated.
You can programmatically send NGSI requests in any language capable of HTTP - just generate a library using the NGSI Swagger file - an example is shown in the tutorials
// Initialization - first require the NGSI v2 npm library and set
// the client instance
const NgsiV2 = require('ngsi_v2');
const defaultClient = NgsiV2.ApiClient.instance;
defaultClient.basePath = 'http://localhost:1026/v2';

// This is a promise to make an HTTP PATCH request to the /v2/entities/<entity-id>/attrs endpoint
function updateExistingEntityAttributes(entityId, body, opts, headers = {}) {
  return new Promise((resolve, reject) => {
    defaultClient.defaultHeaders = headers;
    const apiInstance = new NgsiV2.EntitiesApi();
    apiInstance.updateExistingEntityAttributes(
      entityId,
      body,
      opts,
      (error, data, response) => {
        return error ? reject(error) : resolve(data);
      }
    );
  });
}
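Since NGSI v2 is plain HTTP, the same update can be sent from any language without the generated library. As a rough illustration (not from the original answer), here is a minimal Java sketch using java.net.http, assuming Orion is listening on localhost:1026 and using a made-up entity id and attribute payload:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class NgsiV2Patch {
    public static void main(String[] args) throws Exception {
        // Hypothetical entity id and attribute payload - adjust to your own data model
        String entityId = "urn:ngsi-ld:Device:001";
        String body = "{\"temperature\": {\"value\": 21.5, \"type\": \"Number\"}}";

        HttpRequest request = HttpRequest.newBuilder()
                // PATCH /v2/entities/<entity-id>/attrs updates existing attributes
                .uri(URI.create("http://localhost:1026/v2/entities/" + entityId + "/attrs"))
                .header("Content-Type", "application/json")
                .method("PATCH", HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response =
                HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
        // Orion replies with 204 No Content when the update succeeds
        System.out.println("Status: " + response.statusCode());
    }
}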
If you really want to do this with an IoT Agent, you can use the IoT Agent Node lib and create your own IoT Agent.

rabbitMQ AlreadyClosedException when basicAck(knowledging)

My Groovy code uses the RabbitMQ Native Plugin for Grails:
def handleMessage(def body, MessageContext context) {
    // With noAck=false, messages must be acknowledged manually with basic.ack.
    boolean noAck = false
    // send Ack on true and Nack on false
    if (processMessage(new SensorEvent(body))) {
        context.channel.basicAck(context.getEnvelope().getDeliveryTag(), noAck)
    } else {
        context.channel.basicNack(context.getEnvelope().getDeliveryTag(), false, false);
    }
    return ''
}
If I comment out the two lines of code that do the Ack and Nack everything works fine. If I uncomment the basicAck I get the following exception
com.rabbitmq.client.AlreadyClosedException: channel is already closed due to channel error; protocol method: #method<channel.close>(reply-code=406, reply-text=PRECONDITION_FAILED - unknown delivery tag 1, class-id=60, method-id=80)
at com.rabbitmq.client.impl.AMQChannel.ensureIsOpen(AMQChannel.java:195)
at com.rabbitmq.client.impl.AMQChannel.transmit(AMQChannel.java:309)
at com.rabbitmq.client.impl.AMQChannel.transmit(AMQChannel.java:303)
at com.rabbitmq.client.impl.ChannelN.basicReject(ChannelN.java:1045)
at com.rabbitmq.client.impl.recovery.RecoveryAwareChannelN.basicReject(RecoveryAwareChannelN.java:72)
at com.rabbitmq.client.impl.recovery.AutorecoveringChannel.basicReject(AutorecoveringChannel.java:354)
I've seen advice saying to use Subscription.Ack(), but there is no Subscription class in the Java/Groovy RabbitMQ client.
Any idea why I'm getting the exception?
Edit: since I'm using the Native Plugin I needed to create a consumer that implemented the interface
def handleMessage(def body, MessageContext context)
The subscribing is handled with:
/**
* Consumer configuration.
*/
static rabbitConfig = [
"queue": "my.queueName"
]
This error:
PRECONDITION_FAILED - unknown delivery tag 1,
means that you are trying to ack a message on a different channel than the one that received it. Delivery tags are scoped per channel.
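For comparison (this uses the plain RabbitMQ Java client directly rather than the Grails plugin), a minimal sketch that acks each delivery on the same channel it arrived on, assuming a queue named "my.queueName" already exists:
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DefaultConsumer;
import com.rabbitmq.client.Envelope;

public class AckOnSameChannel {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        final Channel channel = connection.createChannel();

        boolean autoAck = false; // manual acknowledgements
        channel.basicConsume("my.queueName", autoAck, new DefaultConsumer(channel) {
            @Override
            public void handleDelivery(String consumerTag, Envelope envelope,
                                       AMQP.BasicProperties properties, byte[] body) throws java.io.IOException {
                try {
                    // process the message here ...
                    // ack with the delivery tag on the SAME channel that delivered the message
                    channel.basicAck(envelope.getDeliveryTag(), false);
                } catch (Exception e) {
                    // reject without requeueing
                    channel.basicNack(envelope.getDeliveryTag(), false, false);
                }
            }
        });
    }
}
With the Grails plugin the channel is managed for you inside the MessageContext, so the same rule applies: only ack with a delivery tag that came from that context's own channel.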

Netty client acting as a service

I am currently working on a client-server application using Netty. Some of the clients are not going to do anything until they receive a message. I have read the API and can't find a way to do this. I could put "in.readLine()" in the main method so it won't end, but that doesn't feel right. I could also use an endless loop, but I don't think that's the right way either.
The question here is: is there a way to keep the socket bound and listening for incoming messages, just like on the server side, without artificially keeping the main method from ending?
public void run() {
    EventLoopGroup group = new NioEventLoopGroup();
    try {
        Bootstrap bootstrap = new Bootstrap()
                .group(group)
                .channel(NioSocketChannel.class)
                .handler(new ChatClientInitializer());

        Channel channel = bootstrap.connect(host, port).sync().channel();

        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        System.out.println("Inserte su nombre");
        String nombre = in.readLine();

        MyClientChannel canal = new MyClientChannel(channel, nombre);
        canal.write("SM", nombre);

        in.readLine();
Note that at the end I had to add "in.readLine()" so the program wouldn't end and the handler would still be up for incoming messages.
The easiest thing to do would be to replace:
in.readLine();
With:
channel.closeFuture().await();
When the connection to the server is disconnected, the client will terminate.
You will also want to spend some time defining your client's life-cycle, so that the channel's state doesn't affect when your application is running and when it's not.
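Applied to the run() method from the question, a rough sketch (assuming the same imports, fields, and ChatClientInitializer as the snippet above) could look like this:
public void run() throws Exception {
    EventLoopGroup group = new NioEventLoopGroup();
    try {
        Bootstrap bootstrap = new Bootstrap()
                .group(group)
                .channel(NioSocketChannel.class)
                .handler(new ChatClientInitializer());

        Channel channel = bootstrap.connect(host, port).sync().channel();

        // ... send the initial messages here ...

        // Block until the channel is closed (e.g. the server disconnects).
        // Incoming messages keep being handled by the pipeline on the
        // event loop threads while this thread waits.
        channel.closeFuture().await();
    } finally {
        // Release the threads and resources held by the event loop group
        group.shutdownGracefully();
    }
}
This way the client stays up for incoming messages without blocking on System.in, and the event loop is shut down cleanly once the connection is gone.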
