Publishing a large text using Solace .NET publisher to a queue - solace

I am trying to publish a large text message to a Solace queue using the Solace .NET API, and I am subscribed to that queue from a separate Java application. It works fine when the message is small, but when the message is large the subscriber cannot read it.
messageToPublish = readFile();
IMessage message = ContextFactory.Instance.CreateMessage();
message.Destination = queue;
message.DeliveryMode = MessageDeliveryMode.Direct;
//message.BinaryAttachment = Encoding.ASCII.GetBytes(messageToPublish);
SDTUtils.SetText(message, messageToPublish);
session.Send(message);
Is there a way to run session.Send(message) synchronously?
Thanks.

It is possible that the Solace Appliance/Virtual Message Router (VMR) has discarded the message.
On the Appliance/VMR, you can look at the queue statistics to determine what happened to the message. Double-click the queue's name in SolAdmin to display the following window.
In this screenshot, my message was discarded because the spool quota was exceeded. (Note that I had configured an extremely tiny quota for a quick reproduction.)
Do note that you've elected to use MessageDeliveryMode.Direct, which means that the message is delivered over a reliable, but not guaranteed, channel.
There are no negative acknowledgements if a direct message cannot be delivered.
If the message must be guaranteed, MessageDeliveryMode.Persistent should be used.
In the event that a message cannot be delivered, a RejectedMessageError session event will be triggered to indicate that a problem has occurred.
You might want to refer to the AdPubAck.cs sample in the Solace .NET API for details.
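The shape of that pattern (publish, then learn the message's fate through an asynchronous session event) can be illustrated outside the Solace API. The sketch below is a toy Python model, not Solace code: `FakeSession`, `on_event`, and the event names are all hypothetical stand-ins for the real session-event callback described above.

```python
# Toy model of event-driven publish acknowledgements.
# All names here are hypothetical; the real Solace .NET API delivers
# outcomes via session event callbacks (see the AdPubAck.cs sample).

class FakeSession:
    """Stand-in for a messaging session with a limited spool quota."""

    def __init__(self, spool_quota_bytes, on_event):
        self.spool_quota_bytes = spool_quota_bytes
        self.spooled = 0
        self.on_event = on_event  # called with (event_name, message)

    def send(self, message: bytes):
        # send() returns immediately; the delivery outcome is reported
        # asynchronously through the registered event callback.
        if self.spooled + len(message) > self.spool_quota_bytes:
            self.on_event("REJECTED_MESSAGE_ERROR", message)
        else:
            self.spooled += len(message)
            self.on_event("ACKNOWLEDGEMENT", message)

events = []
session = FakeSession(spool_quota_bytes=10,
                      on_event=lambda ev, msg: events.append(ev))

session.send(b"small")                  # fits within the quota
session.send(b"a much larger message")  # exceeds the quota

print(events)  # ['ACKNOWLEDGEMENT', 'REJECTED_MESSAGE_ERROR']
```

The point of the sketch is only that the send call itself does not tell you whether delivery succeeded; you must register for and handle the rejection event.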

There is also a possible printing bug in the Eclipse console.
Are you able to verify whether the message was actually received by the Java application?
String myReceivedText = ((TextMessage) message).getText();
System.out.println("myReceivedText.length = " + myReceivedText.length());
If the message can be received with the correct length, then it is likely to be this Eclipse bug:
java System.out.println() strange behavior long string
Running the Java application via the command line will display the long string.

Related

How does error handling work in SCTP Sockets API Extensions?

I have been trying to implement a wrapper library for the Linux interface to SCTP sockets, and I am not sure how to integrate the asynchronous style of errors (where they are delivered via events). All example code I have seen, if it deals with the errors at all, simply prints out the information related to the error when it is received, but inserting error-handling code there seems like it would be ineffective, because by that point all of the context related to the original message which was sent has been lost and only a 32-bit integer sinfo_context remains. It also seems that there is no way to directly tell when a given message has been acknowledged successfully by the remote peer, which would make it impossible to implement an approach which listens for errors after sending a message, because the context information for successfully-delivered messages could never be freed.
Is there a way to handle the errors related to a given sending operation as part of the call to a send function, or is there a different way to approach error handling for SCTP which does not lose the context of the error?
One solution which I have considered is using the SCTP_SENDER_DRY notification to tell when packets have been sent, however this requires sending only one packet at a time. Another idea is to use the peer's receiver window size together with the sinfo_cumtsn field of sctp_sndrcvinfo to calculate how much data has been acknowledged as fully received using the cumulative TSN, however there are a couple of disadvantages to this: first, it requires bookkeeping overhead to calculate a number of bytes received by the peer based on the cumulative TSN (especially if the peer's window size may change); second, it requires waiting until all earlier packets were received before reporting success, which seems to defeat the purpose of SCTP's multistreaming; and third, it seems like it would not work for unordered packets.

Jobs pushing to queue, but not processing

I am using AWS SQS and am running into two issues.
Sometimes messages are present in the queue but I am not able to read them: when I fetch, I get back an empty array, as if there were no messages in the queue at all.
When I delete a message from the queue, the call appears to succeed:
sqs.delete_message({queue_url: queue_url, receipt_handle: receipt_handle})
=> Aws::EmptyStructure
But when I check the queue in the AWS console, the message is still present, even after refreshing the page more than 10 times.
Can you help me understand why this happens?
1. You may need to implement long polling.
SQS is a distributed system. By default, a ReceiveMessage call queries only a small subset of its servers, which is why you sometimes get back an empty array even though messages exist. This is known as short polling.
With long polling, SQS queries all of its servers and waits, up to the time you specify, for a message to become available.
To enable it, set the WaitTimeSeconds parameter to a value greater than 0 when calling the ReceiveMessage API.
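The short-versus-long-polling difference can be mimicked with a local queue: a non-blocking read returns empty-handed if the item has not arrived yet, while a read with a timeout waits for it. This is only an analogy for the SQS behavior, sketched in Python with the standard library:

```python
import queue
import threading
import time

q = queue.Queue()

# A producer that delivers a message after a short delay, like a
# message that has not yet reached the subset of servers you polled.
threading.Thread(target=lambda: (time.sleep(0.2), q.put("hello")),
                 daemon=True).start()

# "Short polling": return immediately, possibly with nothing.
try:
    first_attempt = q.get_nowait()
except queue.Empty:
    first_attempt = None

# "Long polling": wait up to a timeout for a message to arrive
# (ReceiveMessage with WaitTimeSeconds > 0 behaves analogously).
second_attempt = q.get(timeout=5)

print(first_attempt, second_attempt)  # None hello
```

The immediate read comes back empty even though a message is on its way; the waiting read picks it up.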
2. Visibility Timeout may be too short.
The Visibility Timeout controls how long a message currently being read by one poller is invisible to other pollers. If the visibility timeout is too short, then other pollers may start reading the message before your first poller has processed and deleted it.
SQS allows the same message to be received by multiple pollers. From the docs:
The ReceiptHandle is associated with a specific instance of receiving a message. If you receive a message more than once, the ReceiptHandle is different each time you receive a message. When you use the DeleteMessage action, you must provide the most recently received ReceiptHandle for the message (otherwise, the request succeeds, but the message might not be deleted).
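The visibility-timeout bookkeeping described above can be sketched in a few lines. This is not the SQS API, just an in-memory Python illustration of the mechanic: a received message is hidden from other pollers for the timeout window, and reappears if it was never deleted.

```python
import time

class TinyQueue:
    """In-memory illustration of SQS-style visibility timeouts."""

    def __init__(self, visibility_timeout):
        self.visibility_timeout = visibility_timeout
        self.messages = {}  # body -> timestamp it becomes visible again

    def send(self, body):
        self.messages[body] = 0.0  # visible immediately

    def receive(self):
        now = time.monotonic()
        for body, invisible_until in self.messages.items():
            if now >= invisible_until:
                # Hide it from other pollers for the timeout window.
                self.messages[body] = now + self.visibility_timeout
                return body
        return None  # nothing currently visible

    def delete(self, body):
        self.messages.pop(body, None)

q = TinyQueue(visibility_timeout=0.2)
q.send("job-1")

first = q.receive()    # first poller gets the message
second = q.receive()   # second poller: message is invisible
time.sleep(0.4)        # timeout elapses without a delete...
third = q.receive()    # ...so the message reappears
q.delete("job-1")
fourth = q.receive()   # deleted for good

print(first, second, third, fourth)  # job-1 None job-1 None
```

This is why a too-short visibility timeout makes a message "come back": if processing plus deletion takes longer than the timeout, another poller sees the message again, and an old receipt handle may then silently fail to delete it.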

Message sending in Erlang under the hood

Message sending in Erlang is asynchronous, meaning that a send expression such as PidB ! msg evaluated by a process PidA immediately yields the result msg without blocking the latter. Naturally, its side effect is that of sending msg to PidB.
Since this mode of message passing provides no delivery guarantees, a sender that needs to know whether a message actually arrived must ask the recipient to confirm receipt; after all, such confirmation is not always required.
This holds true in both the local and distributed cases: in the latter scenario, the sender cannot simply assume that the remote node is always available; in the local scenario, where processes live on the same Erlang node, a process may send a message to a non-existent process.
I am curious as to how the side effect portion of !, i.e., message sending, works at the VM level when the sender and recipient processes live on the same node. In particular, I would like to know whether the sending operation completes before returning. By completes, I mean to say that for the specific case of local processes, the sender: (i) acquires a lock on the message queue of the recipient, (ii) writes the message directly into its queue, (iii) releases the lock and, (iv) finally returns.
I came across this post which I did not fully understand, although it seems to indicate that this could be the case.
Erik Stenman's The Beam Book, which explains many implementation details of the Erlang VM, answers your question in great detail in its "Lock Free Message Passing" section. The full answer is too long to copy here, but the short answer to your question is that yes, the sending process completely copies its message to a memory area accessible to the receiver. If you consult the book you'll find that it's more complicated than steps i-iv you describe in your question due to issues such as different send flags, whether locks are already taken by other processes, multiple memory areas, and the state of the receiving process.
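Steps (i)-(iv) from the question can be written down as plain code. The following is a deliberately simplified Python model, not BEAM internals (the book describes extra machinery: send flags, secondary message areas, lock-free paths), but it shows the basic shape: the sender copies the message, appends the copy to the receiver's mailbox under a lock, and then the send expression yields the message itself.

```python
import copy
import threading
from collections import deque

class Process:
    """Toy model of an Erlang process: a mailbox plus its lock."""

    def __init__(self):
        self.mailbox = deque()
        self.mailbox_lock = threading.Lock()

def send(recipient, msg):
    """Model of `PidB ! msg`: copy, enqueue under lock, return msg."""
    private_copy = copy.deepcopy(msg)  # heaps are per-process, so the
                                       # message is fully copied
    with recipient.mailbox_lock:       # (i)   acquire the queue lock
        recipient.mailbox.append(private_copy)  # (ii) write the message
    # (iii) the lock is released on leaving the with-block
    return msg                         # (iv)  the send expression
                                       #       yields msg itself

pid_b = Process()
result = send(pid_b, {"hello": "world"})

print(result, len(pid_b.mailbox))  # {'hello': 'world'} 1
```

Note that the copy in the mailbox is equal to, but distinct from, the sender's original, mirroring the per-process-heap copying the book describes.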

msmq\storage keeps filling up with multicast queue

I want to create a simple publish-subscribe setup where my publisher keeps broadcasting messages whether there are 0, 1, or more subscribers, and subscribers come and go as they need and read the latest messages. I don't want older messages to be read by the subscribers. For example, if the publisher comes online and publishes 100 messages while there are currently no subscribers, I want those messages to disappear; if subscriber 1 then comes online and the 101st message is published, that should be the first message subscriber 1 sees. This appears to be how multicast MSMQ works, but the problem I am running into is that while my publisher is running, \System32\msmq\storage rapidly fills up with 4 MB files with auto-incremented names, in my case usually r000001a.mq, r000001b.mq, or similar.
I don't know how to manage how these files are created, there are no messages in my outgoing multicast queue, and these files show up whether or not I have any subscribers listening.
The only way I can clear these files is by restarting the message queuing service.
The code I'm using to publish these messages is:
using (var queue = new msmq.MessageQueue("FormatName:MULTICAST=234.1.1.2:8001"))
{
    var message = new msmq.Message();
    message.BodyStream = snsData.ToJsonStream();
    message.Label = snsData.GetMessageType();
    queue.Send(message);
}
Is there any way I can programmatically control how these .mq files get created? They will rapidly use up the allowable queue storage.
Thank you,
R*.MQ files are used to store express messages. They exist for efficiency, not recovery, which is why they are purged on a service restart, as you are finding out. I would use Performance Monitor to find out which queue the messages are in; they have to be in a queue somewhere. Once you know the queue, you can work backwards: if it's a custom queue, check your code; if it's a system queue, then that would be interesting.

Erlang dead letter queue

Let's say my Erlang application receives an important message from the outside (through an exposed API endpoint, for example). Due to a bug in the application or an incorrectly formatted message the process handling the message crashes.
What happens to the message? How can I influence what happens to the message? And what happens to the other messages waiting in the process mailbox? Do I have to introduce a hierarchy of processes just to make sure that no messages are lost?
Is there something like Akka's dead letter queue in Erlang? Let's say I want to handle the message later - either by fixing the message or fixing the bug in the application itself, and then rerunning the message processing.
I am surprised how little information about this topic is available.
There is no information because there is no dead letter queue. If your application crashed while processing the message, the message had already been received, so why would it go to a dead letter queue (if one existed)?
Such a queue would be a major scalability issue with little benefit: you would collect arbitrary messages that could not be delivered, with all of their context lost.
If you need to make sure a message is processed, you usually use a mechanism that sends a reply back once the message has been processed, like a gen_server call.
And if your messages are so important that losing one would be a catastrophe, you should probably persist them in an external DB, because otherwise, if your computer crashes, what happens to all the messages in transit?
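The "get a reply back" pattern mentioned above (what gen_server:call does in Erlang) can be sketched outside Erlang as well: the caller sends its request together with a private reply channel and then blocks until the reply arrives or a timeout fires. A minimal Python analogue using queues and a thread:

```python
import queue
import threading

def server(inbox):
    """Echo-style server: handles each request and replies on the
    channel supplied with it (like a gen_server handling a call)."""
    while True:
        request, reply_to = inbox.get()
        if request is None:  # shutdown sentinel
            break
        reply_to.put(("ok", request.upper()))

def call(inbox, request, timeout=5):
    """Synchronous call built on asynchronous sends: raises
    queue.Empty on timeout, much as gen_server:call exits with a
    timeout if no reply arrives."""
    reply_to = queue.Queue(maxsize=1)
    inbox.put((request, reply_to))
    return reply_to.get(timeout=timeout)

inbox = queue.Queue()
threading.Thread(target=server, args=(inbox,), daemon=True).start()

status, payload = call(inbox, "hello")
inbox.put((None, None))  # stop the server

print(status, payload)  # ok HELLO
```

The timeout is what turns fire-and-forget sends into something the caller can reason about: either a reply comes back, or the caller finds out that it did not.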
