I am trying to send a large number of transactional messages to MSMQ. We had a storage issue, so we bumped up the MSMQ storage on the receiving machine, leaving storage as is (1 GB default) on the sending machine. We are still seeing the issue, and I wanted to confirm whether the storage limits should match on the sending and receiving machines.
Please let me know if you have come across a similar issue, and what the ideal solution would be.
With transactional messaging, a message moves through the following states:
Message visible in outgoing queue on sender
Message visible in destination queue on receiver; message invisible in outgoing queue on sender, pending ACK message
Message visible in destination queue on receiver
So there will be a period when a message takes up space on both machines. That shouldn't last long, though, so you don't need to match storage sizes on the sender and receiver.
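For reference, here is a minimal sketch of the kind of send being discussed, using the native MSMQ C API. The format name is a placeholder for your real destination, and MQ_SINGLE_MESSAGE makes the single send its own internal transaction, so this is a sketch rather than a drop-in implementation:

    #include <windows.h>
    #include <mq.h>   /* link against mqrt.lib */

    /* Send one message as its own internal transaction. */
    int send_transactional(const WCHAR *formatName, UCHAR *body, ULONG len)
    {
        QUEUEHANDLE hQueue;
        if (FAILED(MQOpenQueue(formatName, MQ_SEND_ACCESS, MQ_DENY_NONE, &hQueue)))
            return -1;

        MSGPROPID     propIds[1];
        MQPROPVARIANT propVars[1];
        propIds[0]              = PROPID_M_BODY;
        propVars[0].vt          = VT_VECTOR | VT_UI1;
        propVars[0].caub.pElems = body;
        propVars[0].caub.cElems = len;

        MQMSGPROPS msgProps = { 1, propIds, propVars, NULL };

        /* MQ_SINGLE_MESSAGE: transactional, but self-contained */
        HRESULT hr = MQSendMessage(hQueue, &msgProps, MQ_SINGLE_MESSAGE);
        MQCloseQueue(hQueue);
        return SUCCEEDED(hr) ? 0 : -1;
    }

A send like this lands in the outgoing queue on the sender first, which is exactly why the sender needs its own storage headroom.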
Storage capacity should instead be set for a 'worst case' scenario. Assume a network outage: what is the maximum volume of unsent plus sent-but-unacknowledged messages you want sitting in the outgoing queue on the sender before you declare an emergency?
Similarly, assuming the app on the receiver has crashed, what volume of unprocessed messages in the destination queue constitutes an emergency?
Storage limits are there to protect your server.
They prevent the hard drive from filling up (important)
They prevent kernel memory exhaustion (VERY important - see #4 https://blogs.msdn.microsoft.com/johnbreakwell/2006/09/18/insufficient-resources-run-away-run-away/)
So the storage limit should be high enough to accommodate worst-case scenarios but low enough to avoid exhausting kernel memory.
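As a sketch of where that limit lives: the machine-wide quota is normally adjusted through the MSMQ admin UI, but (assuming the documented MachineQuota registry value, stored in kilobytes) it can also be read programmatically:

    #include <windows.h>
    #include <stdio.h>

    /* Read the machine-wide MSMQ quota (KB). Prefer the admin
       UI or admin API for making changes. */
    int main(void)
    {
        HKEY  key;
        DWORD quotaKb, size = sizeof(quotaKb);

        if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                          "SOFTWARE\\Microsoft\\MSMQ\\Parameters",
                          0, KEY_READ, &key) != ERROR_SUCCESS)
            return 1;

        if (RegQueryValueExA(key, "MachineQuota", NULL, NULL,
                             (LPBYTE)&quotaKb, &size) == ERROR_SUCCESS)
            printf("MSMQ machine quota: %lu KB\n", quotaKb);

        RegCloseKey(key);
        return 0;
    }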
I have a virtual socketCAN network. How do I block a particular ID from being sent on the network?
If a node is connected to a CAN bus, it cannot be externally prevented from sending any message at the lowest level.
However, there are three things that can be done:
Add a gateway: a device that splits the bus into multiple smaller buses and passes messages between them. It does not prevent any node from sending a message, but it can decline to forward it to the others. This solution has a few clear drawbacks: it requires a separate device with multiple CAN interfaces (up to the number of nodes on the bus), it adds a delay to every message, and it renders the ACK bit unusable.
Apply filters for received messages in each node. Again, this will not prevent the message from being sent, but it reduces the load on the nodes. Most CAN controllers have hardware support for filtering by ID or an ID bit mask (see the sketch after this list).
There are some CAN controllers that can block messages from being sent; again, this requires adding such a controller and setting it up for each node on the CAN bus.
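For option 2 on Linux/socketCAN, a minimal sketch (the blocked ID 0x123 and the interface name vcan0 are just examples): CAN_INV_FILTER inverts a filter entry, so the socket receives everything except the given ID.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/ioctl.h>
    #include <net/if.h>
    #include <linux/can.h>
    #include <linux/can/raw.h>

    int main(void)
    {
        int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);

        /* Drop frames with ID 0x123; receive everything else. */
        struct can_filter filter = {
            .can_id   = 0x123 | CAN_INV_FILTER,
            .can_mask = CAN_SFF_MASK,
        };
        setsockopt(s, SOL_CAN_RAW, CAN_RAW_FILTER, &filter, sizeof(filter));

        struct ifreq ifr;
        strcpy(ifr.ifr_name, "vcan0");   /* the virtual CAN interface */
        ioctl(s, SIOCGIFINDEX, &ifr);

        struct sockaddr_can addr = {
            .can_family  = AF_CAN,
            .can_ifindex = ifr.ifr_ifindex,
        };
        bind(s, (struct sockaddr *)&addr, sizeof(addr));

        struct can_frame frame;
        while (read(s, &frame, sizeof(frame)) > 0)
            printf("ID 0x%X, %d bytes\n", frame.can_id, frame.can_dlc);
        return 0;
    }

Note this is receive-side filtering only: the frame still appears on the (virtual) bus; it is simply never delivered to this socket.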
http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-M2/scala/stream-integrations.html says:
"ActorPublisher and ActorSubscriber cannot be used with remote actors, because if signals of the Reactive Streams protocol (e.g. request) are lost the the stream may deadlock."
Does this mean akka stream is not location transparent? How do I use akka stream to design a backpressure-aware client-server system where client and server are on different machines?
I must have misunderstood something. Thanks for any clarification.
They are strictly a local facility at this time.
You can connect it to a TCP sink/source, and it will apply back-pressure over TCP as well (that's what Akka Http does).
How do I use akka stream to design a backpressure-aware client-server system where client and server are on different machines?
Check out streams in Artery (Dec. 2016, so 18 months later):
The new remoting implementation for actor messages was released in Akka 2.4.11 two months ago.
Artery is the code name for it. It’s a drop-in replacement to the old remoting in many cases, but the implementation is completely new and it comes with many important improvements.
(Remoting enables Actor systems on different hosts or JVMs to communicate with each other)
Regarding back-pressure, this is not a complete solution, but it can help:
What about back-pressure? Akka Streams is all about back-pressure but actor messaging is fire-and-forget without any back-pressure. How is that handled in this design?
We can’t magically add back-pressure to actor messaging. That must still be handled on the application level using techniques for message flow control, such as acknowledgments, work-pulling, throttling.
When a message is sent to a remote destination it's added to a queue that the first stage, called SendQueue, is processing. This queue is bounded, and if it overflows the messages will be dropped, which is in line with the at-most-once delivery nature of actor messaging. A large number of messages should not be sent without application-level flow control. For example, if serialization of messages is slow and can't keep up with the send rate, this queue will overflow.
Aeron will propagate back-pressure from the receiving node to the sending node, i.e. the AeronSink in the outbound stream will not progress if the AeronSource at the other end is slower and the buffers have been filled up.
If messages are sent at a higher rate than what can be consumed by the receiving node the SendQueue will overflow and messages will be dropped. Aeron itself has large buffers to be able to handle bursts of messages.
The same thing will happen in the case of a network partition. When the Aeron buffers are full messages will be dropped by the SendQueue.
In the inbound stream the messages are in the end dispatched to the recipient actor. That is an ordinary actor tell that will enqueue the message in the actor’s mailbox. That is where the back-pressure ends on the receiving side. If the actor is slower than the incoming message rate the mailbox will fill up as usual.
Bottom line, flow control for actor messages must be implemented at the application level. Artery does not change that fact.
How much memory is used when we use # (wildcard) to subscribe to many topics? For example, if we have over 10M topics, is it possible to use # to subscribe to all of them, or does it cause memory leaks?
This problem is strictly related to the MQTT broker and client implementation.
Of course, the MQTT specification doesn't provide any information about such implementation details.
Paolo.
Extending ppatierno's answer:
For most well-designed brokers, the number or scope (for wildcards) of subscriptions shouldn't really change the amount of memory used under normal circumstances. At most, the storage should equate to the topic string that the client subscribes to; this is matched against each incoming message to see if the message should be delivered.
Where this may not hold true is with persistent subscriptions (where the clean-session flag is not set to true). In this case, if a client disconnects, messages may be queued until it reconnects. The amount of memory consumed here is a function of the number of messages and their size (plus whatever discard policy the broker may have), not directly a function of the number of subscribed topics.
To answer the second part of your question: subscribing to 10,000,000 topics via the wildcard is not likely to cause a memory leak, but it may very well flood the client, depending on how often messages are published on those topics.
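To make that concrete, here is a minimal sketch using the Eclipse Paho C client (broker address and client ID are placeholders). A single "#" subscription costs the broker one topic-filter entry regardless of how many topics exist; the real risk is the rate at which on_message is called:

    #include <stdio.h>
    #include "MQTTClient.h"

    #define ADDRESS  "tcp://localhost:1883"   /* placeholder broker */
    #define CLIENTID "wildcard-demo"          /* placeholder client ID */

    /* Called for every message matching the subscription; with "#"
       this may run at a very high rate. */
    int on_message(void *context, char *topicName, int topicLen,
                   MQTTClient_message *message)
    {
        printf("%s: %.*s\n", topicName, message->payloadlen,
               (char *)message->payload);
        MQTTClient_freeMessage(&message);
        MQTTClient_free(topicName);
        return 1;  /* message handled */
    }

    int main(void)
    {
        MQTTClient client;
        MQTTClient_create(&client, ADDRESS, CLIENTID,
                          MQTTCLIENT_PERSISTENCE_NONE, NULL);
        MQTTClient_setCallbacks(client, NULL, NULL, on_message, NULL);

        MQTTClient_connectOptions opts = MQTTClient_connectOptions_initializer;
        opts.cleansession = 1;  /* avoid broker-side queuing while offline */
        if (MQTTClient_connect(client, &opts) != MQTTCLIENT_SUCCESS)
            return 1;

        /* One filter entry on the broker, matching every topic. */
        MQTTClient_subscribe(client, "#", 0);

        getchar();  /* wait; messages arrive on the callback thread */
        MQTTClient_disconnect(client, 1000);
        return 0;
    }

Note cleansession = 1 here: with a persistent session the broker would queue matching messages while this client is offline, which is the memory-growth case described above.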
We are implementing (or rather, reimplementing) a distributed software system. We have different processes (possibly running on different computers) that should communicate with each other (let's call them clients). We don't want them to communicate with each other directly, but instead to use some kind of message broker.
Since we would like to avoid implementing the message broker ourselves, we are looking for an existing implementation. But we can't find a protocol or system that fully fulfils our requirements.
MQTT with its publish-subscribe-mechanism seems nice and could even be used for point-to-point communication (where some specific topics are only subscribed by certain clients).
But it is (like JMS, STOMP, etc.) asynchronous. The sender sends a message to the broker and doesn't know whether it is ever delivered to its recipient. We want the sender to be informed about a successful delivery or an elapsed timeout (when no one is receiving the message).
Is there some protocol/implementation available that provides such synchronous messaging functionality?
(It would be nice, however, if asynchronous delivery were possible too.)
Messaging is (usually) asynchronous by default.
You can consider RabbitMQ; it includes the following features:
Publisher confirms (asynchronous):
http://www.rabbitmq.com/blog/2011/02/10/introducing-publisher-confirms/
Transaction Commit:
https://www.rabbitmq.com/semantics.html
Message TTL (to handle timeouts):
https://www.rabbitmq.com/ttl.html
With these features you can handle timeout situations and confirm successful delivery.
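As a hedged sketch of combining two of these features with the rabbitmq-c client (broker details and the queue name "task-queue" are placeholders): put the channel in confirm mode, publish with a per-message TTL, then wait for the broker's basic.ack:

    #include <amqp.h>
    #include <amqp_tcp_socket.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* Connect and open a channel (error handling trimmed) */
        amqp_connection_state_t conn = amqp_new_connection();
        amqp_socket_t *sock = amqp_tcp_socket_new(conn);
        amqp_socket_open(sock, "localhost", 5672);
        amqp_login(conn, "/", 0, 131072, 0, AMQP_SASL_METHOD_PLAIN,
                   "guest", "guest");
        amqp_channel_open(conn, 1);

        /* Put the channel in confirm mode (publisher confirms) */
        amqp_confirm_select(conn, 1);

        /* Per-message TTL: broker discards it if unconsumed in 30 s */
        amqp_basic_properties_t props;
        memset(&props, 0, sizeof(props));
        props._flags     = AMQP_BASIC_EXPIRATION_FLAG;
        props.expiration = amqp_cstring_bytes("30000");

        amqp_basic_publish(conn, 1,
                           amqp_cstring_bytes(""),           /* default exchange */
                           amqp_cstring_bytes("task-queue"), /* placeholder */
                           0, 0, &props,
                           amqp_cstring_bytes("hello"));

        /* Block until the broker confirms (basic.ack) the publish */
        amqp_frame_t frame;
        if (amqp_simple_wait_frame(conn, &frame) == AMQP_STATUS_OK &&
            frame.frame_type == AMQP_FRAME_METHOD &&
            frame.payload.method.id == AMQP_BASIC_ACK_METHOD)
            printf("delivery confirmed by broker\n");

        amqp_connection_close(conn, AMQP_REPLY_SUCCESS);
        amqp_destroy_connection(conn);
        return 0;
    }

Note the confirm tells you the broker has taken responsibility for the message, not that a consumer has processed it; for end-to-end acknowledgment you would still use the RPC pattern below.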
If this is not enough, you can use RPC:
https://www.rabbitmq.com/tutorials/tutorial-six-java.html
Let me know if you need more information.
I'm working on an IOCP server on Windows, and I have to send a buffer to all connected sockets.
The buffer size is small - up to 10 bytes. When I get the notification for each WSASend in GetQueuedCompletionStatus, is there a guarantee that the buffer was sent in one piece by a single WSASend? Or should I add code that checks whether all 10 bytes were sent, and posts another WSASend if necessary?
There is no guarantee, but it's highly unlikely that a send of less than a single operating-system page size would partially fail.
Failures are more likely if you're sending a buffer longer than a single operating-system page and if you're not actively managing how many outstanding operations you have and how many your system can support before running out of non-paged pool or hitting the I/O page-lock limit.
It's only possible to recover from a partial failure if you never have any other sends pending on that connection.
I tend to check in the completion handler that the value is as expected and abort the connection with an RST if it's not. I've never had this code execute in production, and I've been building many different kinds of IOCP-based client and server systems for well over 10 years now.
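A minimal sketch of that completion-handler check (the per-operation context type is an assumption; yours will differ): compare the completed byte count with what WSASend was asked to send, and abort with an RST on a mismatch:

    #include <winsock2.h>
    #include <windows.h>

    /* Hypothetical per-operation context: OVERLAPPED is the first
       member so the completion pointer can be cast back to it. */
    typedef struct {
        OVERLAPPED overlapped;
        SOCKET     socket;
        DWORD      bytesRequested;   /* what we asked WSASend to send */
    } IO_CONTEXT;

    void completion_loop(HANDLE iocp)
    {
        DWORD      bytesTransferred;
        ULONG_PTR  key;
        OVERLAPPED *pOv;

        while (GetQueuedCompletionStatus(iocp, &bytesTransferred,
                                         &key, &pOv, INFINITE)) {
            IO_CONTEXT *ctx = (IO_CONTEXT *)pOv;

            if (bytesTransferred != ctx->bytesRequested) {
                /* Partial send: recovery is only safe if no other
                   sends are pending on this connection, so abort
                   with an RST (hard close via SO_LINGER 0). */
                struct linger l = { 1, 0 };
                setsockopt(ctx->socket, SOL_SOCKET, SO_LINGER,
                           (const char *)&l, sizeof(l));
                closesocket(ctx->socket);
                continue;
            }
            /* ... normal completion handling ... */
        }
    }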