How to design a connector in Go - connection

I am building a simple connector component in Go, with these responsibilities:
Open, keep, and manage a connection to an external service (i.e. run in the background).
Parse incoming data into logical messages and pass these messages to the business logic component.
Send logical messages from the business logic to the external service.
I am undecided on how to design the interface of the connector in Go.
Variant A) Channel for inbound, function call for outbound messages
// Listen for inbound messages.
// Inbound messages are delivered to the provided channel.
func Listen(msg chan *Message) {...}
// Deliver msg to service
func Send(msg *Message) {...}
Variant B) Channel for inbound and outbound messages
// Listen for inbound messages + send outbound messages.
// Inbound messages are delivered to the provided msgIn channel.
// To send a message, put a message into the msgOut channel.
func ListenAndSend(msgIn chan *Message, msgOut chan *Message) {...}
Variant B seems cleaner and more "Go-like" to me, but I am looking for answers to:
Is there an "idiomatic" way to do this in Go?
Alternatively, in which cases should variant A or B be preferred?
Are there any other notable variants for this kind of problem?

Both approaches allow for only one listener (unless you keep track of the number of listeners, which is a somewhat fragile approach), which is a limitation. It all depends on your programmatic preferences, but I'd probably go with callbacks for incoming messages and a send method:
func OnReceive(func(*Message) bool) // If callback returns false, unregister it.
func Send(*Message)
Other than that, both of your proposed models are completely valid. The second seems more "orthogonal". An advantage of using a send method is that you can make sure it never blocks, as opposed to a "bare" channel.
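A minimal sketch of that callback-plus-send surface in Go (all names here, including Connector and Message, are illustrative, not from any particular library); Send uses a buffered channel with select/default so it reports back-pressure instead of blocking:
package connector

import "sync"

type Message struct {
    Payload []byte
}

type Connector struct {
    mu        sync.Mutex
    callbacks []func(*Message) bool
    out       chan *Message
}

func New() *Connector {
    return &Connector{out: make(chan *Message, 64)}
}

// OnReceive registers a callback; if the callback returns false it is unregistered.
func (c *Connector) OnReceive(cb func(*Message) bool) {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.callbacks = append(c.callbacks, cb)
}

// Send never blocks the caller: if the outbound buffer is full it
// reports failure instead of stalling the business logic.
func (c *Connector) Send(msg *Message) bool {
    select {
    case c.out <- msg:
        return true
    default:
        return false
    }
}

// dispatch is called by the background read loop for every inbound
// message; callbacks that return false are dropped.
func (c *Connector) dispatch(msg *Message) {
    c.mu.Lock()
    defer c.mu.Unlock()
    kept := c.callbacks[:0]
    for _, cb := range c.callbacks {
        if cb(msg) {
            kept = append(kept, cb)
        }
    }
    c.callbacks = kept
}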

Related

MQTT shared subscription

With an MQTT shared subscription, a message on the subscribed topic would only be sent to one of the subscribing clients. Then how do the other clients in the group receive the message, as they also subscribe to the same topic?
With an MQTT shared subscription, a message on the subscribed topic would only be sent to one of the subscribing clients.
Correct. With a normal (non‑shared) subscription the messages are sent to ALL subscribers; with shared subscriptions they are sent to ONE subscriber.
Prior to the introduction of shared subscriptions it was difficult to handle situations where you want multiple servers (for fault tolerance, load balancing etc) but only want to process each message once. Shared subscriptions provide a simple way of accomplishing this.
Then how do the other clients in the group receive the message, as they also subscribe to the same topic?
If they are subscribing to the same shared subscription (with the same ShareName) then only one will receive the message; this is by design. If you want them all to receive the message then don't use a shared subscription. Alternatively, you can establish multiple subscriptions (so all subscribers receive the message but only one processes it; note the "could" in the spec wording below):
If a Client has a Shared Subscription and a Non‑shared Subscription and a message matches both of them, the Client will receive a copy of the message by virtue of it having the Non‑shared Subscription. A second copy of the message will be delivered to one of the subscribers to the Shared Subscription, and this could result in a second copy being sent to this Client.
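To make the shared vs. non-shared distinction concrete, here is a sketch using the Go Paho client (github.com/eclipse/paho.mqtt.golang); the broker address and the group name "workers" are placeholders, and the broker must actually support shared subscriptions (they are part of MQTT 5, and several brokers also honor the $share prefix for MQTT 3.1.1 clients like this one):
package main

import (
    "fmt"

    mqtt "github.com/eclipse/paho.mqtt.golang"
)

func main() {
    opts := mqtt.NewClientOptions().AddBroker("tcp://broker.example.com:1883")
    client := mqtt.NewClient(opts)
    if token := client.Connect(); token.Wait() && token.Error() != nil {
        panic(token.Error())
    }

    handler := func(_ mqtt.Client, msg mqtt.Message) {
        fmt.Printf("got %s on %s\n", msg.Payload(), msg.Topic())
    }

    // Shared subscription: the broker delivers each matching message
    // to only ONE member of the "workers" group.
    client.Subscribe("$share/workers/some_topic/#", 1, handler)

    // Non-shared subscription: every client subscribed like this
    // receives its own copy of every matching message.
    client.Subscribe("some_topic/#", 1, handler)

    select {} // block forever; this is only a sketch
}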
There is an interesting bug in the Java Paho (1.2.5) client that prevents working with shared subscriptions whose topic filters contain wildcards (#, +): https://github.com/eclipse/paho.mqtt.java/issues/827
Long story short, this will not work:
mqttClient.subscribe("$share/group/some_topic/#", 1, (topic, message) -> System.out.println(topic));
Instead, it's required to use the client callback:
mqttClient.subscribe("$share/group/some_topic/#", 1);
mqttClient.setCallback(new MqttCallback() {
    @Override
    public void connectionLost(final Throwable cause) {
    }

    @Override
    public void messageArrived(final String topic, final MqttMessage message) throws Exception {
        System.out.println(topic);
    }

    @Override
    public void deliveryComplete(final IMqttDeliveryToken token) {
    }
});

How to explicitly acknowledge/fail Amazon SQS FIFO queue from the listener without throwing an exception?

My application only listens to a certain queue; the producer is a 3rd-party application. I receive the messages, but sometimes, based on some logic, I need to send a fail message to the producer so that the message is resent to my listener until I decide to consume and acknowledge it. My current implementation of this process is just throwing a custom exception, but this is not a clean solution. Can anyone help me send a FAIL to the producer without throwing an exception?
My JMS Listener Factory settings:
@Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactoryForQexpress(SQSErrorHandler errorHandler) {
    SQSConnectionFactory connectionFactory = SQSConnectionFactory.builder()
            .withRegion(RegionUtils.getRegion(StaticSystemConstants.getQexpressSqsRegion()))
            .withAWSCredentialsProvider(new ClasspathPropertiesFileCredentialsProvider(StaticSystemConstants.getQexpressSqsCredentials()))
            .build();
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setDestinationResolver(new DynamicDestinationResolver());
    factory.setConcurrency("3-10");
    factory.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
    factory.setErrorHandler(errorHandler);
    return factory;
}
My Listener Settings:
@JmsListener(destination = StaticSystemConstants.QUEXPRESS_ORDER_STATUS_QUEUE, containerFactory = "jmsListenerContainerFactoryForQexpress")
public void receiveQExpressOrderStatusQueue(String text) throws JSONException {
    LOG.debug("Consumed QExpress status {}", text);
    // here I need to decide whether to acknowledge or fail
    ...
    if (success) {
        updateStatus();
    } else {
        // TODO: I need to replace this with an explicit FAIL message
        throw new CustomException("Not right time to update status");
    }
}
Please, share your experience on this. Thank you!
SQS -- internally speaking -- is fully asynchronous and completely decouples the producer from the consumer.
Once the producer successfully hands off a message to SQS and receives the message-id in response, the producer only knows that SQS has received and committed the message to its internal storage and that the message will be delivered to a consumer at least once.¹ There is no further feedback to the producer.
A consumer can "snooze" a message for later retry by simply not deleting it (see the setSessionAcknowledgeMode docs) or by actively resetting the visibility timeout on the message instead of deleting it, which tells SQS to leave the message in the in-flight state until the timer expires, at which point it will again deliver the message for the consumer to retry.
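As a minimal sketch of the visibility-timeout approach (shown here with the AWS SDK for Go v1 rather than JMS; the queue URL and receipt handle are placeholders):
package main

import (
    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/sqs"
)

// snooze pushes a received message back for a later retry by resetting
// its visibility timeout instead of deleting it; SQS will redeliver the
// message once delaySeconds have elapsed.
func snooze(svc *sqs.SQS, queueURL, receiptHandle string, delaySeconds int64) error {
    _, err := svc.ChangeMessageVisibility(&sqs.ChangeMessageVisibilityInput{
        QueueUrl:          aws.String(queueURL),
        ReceiptHandle:     aws.String(receiptHandle),
        VisibilityTimeout: aws.Int64(delaySeconds),
    })
    return err
}

func main() {
    svc := sqs.New(session.Must(session.NewSession()))
    // placeholder identifiers for illustration only
    _ = snooze(svc, "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue",
        "AQEB...receipt-handle...", 60)
}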
Note, too, that a single SQS queue can have multiple producers and/or multiple consumers, as long as all the producers ask for and consumers provide identical services, but there is no intrinsic concept of which consumer or which producer. There is no consumer-to-producer backwards communication channel, and no mechanism for a producer to inquire about the status of an earlier message -- the design assumption is that once SQS has received a message, it will be delivered,² so no such mechanism should be needed.
¹at least once. Unless the queue is a FIFO queue, SQS will typically deliver the message exactly once, but there is no absolute guarantee that the message will not be delivered more than once. Because SQS is a massive, distributed system that stores redundant copies of messages, it is possible in some edge-case conditions for messages to be delivered more than once. FIFO queues avoid this possibility by leveraging stronger internal consistency guarantees, at the cost of reduced throughput (300 TPS).
²it will be delivered assuming of course that you actually have a consumer running. SQS does not block the producer, and will allow you to enqueue an unbounded number of messages waiting for a consumer to arrive. It accepts messages from producers regardless of whether there are currently any consumers listening. The messages are held until consumed or until the MessageRetentionPeriod (default 4 days, max 14 days) timer expires for each message, whichever comes first.

Get related subscribed channel(s) in didReceiveStatus in PubNub for iOS objective C

When didReceiveStatus is called after subscribing to a channel, we are not able to retrieve the channel(s) that were just subscribed.
PNSubscribeStatus.data.subscribedChannel and PNSubscribeStatus.data.actualChannel are always null, and PNSubscribeStatus.subscribedChannels gives all currently subscribed channels, not just the ones that triggered the didReceiveStatus callback.
What are we doing wrong here?
In SDK 4.0, didReceiveStatus returns a PNStatus, which according to the class documentation doesn't contain that extra information unless there's an error condition. For our application, we use that handler to monitor connection status to the PubNub server.
PubNub Message Received Channel Name in iOS
You should be able to get the channel that you received the message on but getting it depends on whether you are subscribed to the channel or to a channel group that contains the channel. This is sample code from the PubNub Objective-C for iOS SDK subscribe API Reference:
- (void)client:(PubNub *)client didReceiveMessage:(PNMessageResult *)message {
    // Handle new message stored in message.data.message
    if (message.data.actualChannel) {
        // Message has been received on channel group stored in
        // message.data.subscribedChannel
    }
    else {
        // Message has been received on channel stored in
        // message.data.subscribedChannel
    }
    NSLog(@"Received message: %@ on channel %@ at %@", message.data.message,
          message.data.subscribedChannel, message.data.timetoken);
}
If you need other channels that the client is subscribed to, you can call the where-now API.
If you need to be more dynamic about what the reply-to channel should be, then just include that channel name in the message when it is published, assuming the publisher has prior knowledge of which channel this is. Or you can do a just-in-time lookup on your server as to which channel to reply to.
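A tiny sketch of that reply-to pattern (the field names here are made up for illustration): the publisher embeds the channel it wants answers on inside the payload itself, and the subscriber publishes its response to that channel.
package main

import (
    "encoding/json"
    "fmt"
)

// Payload carries the reply-to channel alongside the actual message so
// the receiving client knows where to publish its response.
type Payload struct {
    ReplyTo string `json:"replyTo"`
    Text    string `json:"text"`
}

func main() {
    body, err := json.Marshal(Payload{ReplyTo: "orders-replies", Text: "order received"})
    if err != nil {
        panic(err)
    }
    fmt.Println(string(body))
    // {"replyTo":"orders-replies","text":"order received"}
    // The subscriber unmarshals this and publishes its reply to ReplyTo.
}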
Here is the PubNub support answer on this:
Actually, status.data.subscribedChannel and status.data.actualChannel
are dedicated to presence events and message-receiving callbacks, where
information about the source of the event is important.
In -client:didReceiveStatus: the client doesn't give information about
the particular channels on which it has been subscribed. If the client
started tracking this information, there would be no guarantee that it
returns the expected value (as the developer expects certain channels
to be there).
In the previous version (3.x) all this information was tracked, but
because it can be modified at any moment, the result was sometimes
unpredictable.
Subscribes can be done as a sequence of method calls (one after
another), like: subscribe A1, subscribe C1, subscribe B1 and B2,
unsubscribe C1 and B1; this will end up as a single call to
-client:didReceiveStatus: with the resulting set of channels.
It is always best practice just to check whether your channels are in
status.subscribedChannels.
My comments:
The whole point of having an asynchronous process is precisely not to think of this as a sequence of methods... We cannot guarantee that subscriptions complete in the exact same order as the subscription requests unless we block each subscription request until the previous one is done.

How can I apply timeout function to LMAX Disruptor Queue?

To developers/users of LMAX Disruptor http://code.google.com/p/disruptor/ :
My question:
Can anyone suggest an approach for applying a timeout function to the Disruptor, e.g. using an EventHandler?
Here is one scenario that came up in my line of work:
Outbox - messages sent to the Server over a network
Inbox - ACK messages received from the Server
ACK Handler - marks outbox messages as ACKed
Timeout Handler - marks outbox messages as NACKed (much needed, but where can it fit into the Disruptor design?)
Is there anyone who shares the same opinion?
Or can anyone point out why it is unnecessary?
I hope the ensuing debate will be brief.
Thank you.
To clarify: the timeout handler would "fire" after a certain period of time when a message could not be delivered?
The way it works with the Disruptor is that you have a ring buffer for inbound and a ring buffer for outbound messages. An email comes in and you place it into the inbound ring buffer using an appropriate event. Then you process the message (i.e. decode, analyze, log, store) and send it along to another system by placing it into the outbound ring buffer, where another handler takes the message and stores it in a database or sends it to another server using SMTP. If an error / timeout etc. occurs, you create an event in the inbound ring buffer signaling the error (NACK) and process this message. Does that make sense?
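A language-agnostic sketch of that timeout-as-NACK idea (written in Go, with channels and timers standing in for the ring buffers; none of this is the Disruptor API): each outbound message arms a timer, an ACK cancels it, and an expired timer publishes a NACK event back into the inbound stream.
package main

import (
    "fmt"
    "sync"
    "time"
)

type event struct {
    id  int
    ack bool // true = ACK received, false = timed out (NACK)
}

type outbox struct {
    mu      sync.Mutex
    pending map[int]*time.Timer
    inbound chan event // ACK/NACK results flow back through here
}

// send records the message as pending and arms a timer that publishes a
// NACK event into the inbound stream if no ACK arrives in time.
func (o *outbox) send(id int, timeout time.Duration) {
    o.mu.Lock()
    defer o.mu.Unlock()
    o.pending[id] = time.AfterFunc(timeout, func() {
        o.inbound <- event{id: id, ack: false}
    })
}

// ack cancels the pending timer and publishes an ACK event.
func (o *outbox) ack(id int) {
    o.mu.Lock()
    if t, ok := o.pending[id]; ok {
        t.Stop()
        delete(o.pending, id)
    }
    o.mu.Unlock()
    o.inbound <- event{id: id, ack: true}
}

func main() {
    o := &outbox{pending: map[int]*time.Timer{}, inbound: make(chan event, 16)}
    o.send(1, 50*time.Millisecond) // never ACKed: the timer fires a NACK
    o.send(2, time.Second)
    o.ack(2)
    for i := 0; i < 2; i++ {
        e := <-o.inbound
        fmt.Printf("message %d acked=%v\n", e.id, e.ack)
    }
}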

sync call from process with many incoming msgs

I need to implement a synchronous call from a process which receives many incoming messages from other processes. The problem is distinguishing when the reply to the call has arrived. Do I need to spawn an additional process that pulls messages from the queue into a buffer until the reply message is encountered, then sends it to the main process, and only after that accepts everything else?
The trick is to use a reference as a token for replication:
replicate() ->
    {ok, Token} = db:ask_replicate(...),
    receive
        {replication_completed, Token} ->
            ok
    end
where Token is created with a call to make_ref(). Since no other message will match Token, you are safe. Other messages will be placed in the mailbox for later scrutiny.
However, the above solution does not take process crashes into account. You need a monitor on the DB server as well. The simplest way to get the pattern right is to let the mediator be a gen_server. Alternatively, you can read the chapter in Learn You Some Erlang (http://learnyousomeerlang.com/what-is-otp#the-basic-server) and look at the synchronous call in the kitty_server.
