We're using Spring AMQP in the style of Spring Remoting with AMQP. I'm setting x-message-ttl on every message so that it expires immediately if it cannot be delivered immediately to a consumer. This works great; however, it leaves the producer waiting for the specified replyTimeout before failing with RemoteProxyFailureException (if I recall correctly). Is there any way I can make the producer fail immediately if the message cannot be delivered (only waiting for the timeout if the message is actually received)?
The loose coupling of the architecture means there's no indication to the producer of the expiry.
There used to be an immediate flag, but it was removed in RabbitMQ 3.0.
One possible solution would be to configure a DLX/DLQ so the expired message can be consumed by another consumer, which can return an exception to the client.
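A minimal sketch of that wiring with Spring AMQP Java config (the exchange and queue names below are placeholders, not anything from your setup):

import java.util.HashMap;
import java.util.Map;

import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.DirectExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DeadLetterConfig {

    // Requests that expire (per-message TTL) are re-routed to the DLX instead of vanishing.
    @Bean
    public Queue requestQueue() {
        Map<String, Object> args = new HashMap<>();
        args.put("x-dead-letter-exchange", "remoting.dlx");
        args.put("x-dead-letter-routing-key", "remoting.expired");
        return new Queue("remoting.requests", true, false, false, args);
    }

    @Bean
    public DirectExchange deadLetterExchange() {
        return new DirectExchange("remoting.dlx");
    }

    // The fallback consumer listens on this queue and can report the failure back to the caller.
    @Bean
    public Queue expiredRequestQueue() {
        return new Queue("remoting.expired");
    }

    @Bean
    public Binding expiredBinding() {
        return BindingBuilder.bind(expiredRequestQueue()).to(deadLetterExchange()).with("remoting.expired");
    }
}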
EDIT:
Simply have the fallback consumer implement the same interface and have it throw an exception.
See this updated test case.
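Roughly, the fallback consumer is just another implementation of the remoting interface, exported on the dead-letter queue, that always throws (the interface and method below are made-up placeholders; the real wiring is in the linked test case):

// Hypothetical fallback bound to the dead-letter queue via a second AmqpInvokerServiceExporter.
// Because it implements the same interface as the real service, the exception it throws is
// sent back in the remote invocation result and rethrown by the client-side proxy, so the
// caller fails as soon as the expired message is dead-lettered instead of waiting out the
// full replyTimeout.
public class ExpiredRequestFallback implements CalculationService {   // CalculationService is a placeholder

    @Override
    public int multiply(int a, int b) {
        throw new IllegalStateException("Request expired before any consumer could handle it");
    }
}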
MQTT newbie here
Developing on .NET with the MqttNet library against an EMQX broker:
I am using the MQTTv5 'user properties' feature to add a timestamp to my messages when they are published. That is working flawlessly.
However, I need to stamp the LWT messages too.
In my connect method, I can supply an LWT including the timestamp user property.
Now, when I subscribe to my LWT topic using the MQTTX desktop client, I get those messages and LWTs; so far so good.
But when I terminate my program's process (and thereby disconnect ungracefully), I immediately get an LWT message. The problem is that my 'timestamp' user property has the stamp from when the connection was established (and the LWT was first set).
I could leave the value empty in my connect method, so that an empty value means an ungraceful disconnect, but that's not very elegant.
Is there a possibility to intercept LWT messages sent from the broker, and set the timestamp?
EDIT:
I found the rules engine, which lets me use a broker timestamp. But so far I could only add it to the payload (ideally it would be a user property).
I don't think so; it would be up to the broker to set the timestamp, as it is what actually publishes the LWT message when it notices the client has gone.
I don't believe there is anything at the MQTT spec level (I really need to re-read the v5 message properties stuff) to do that, but it might be something that could be done with an appropriate plugin in the broker if it supports such things.
We have a single-page web application. One of the functions of the application is to supervise the connection path from the client back to the server. This is implemented with a periodic AJAX HTTP request in JavaScript to the server every 60 seconds. This request acts as a heartbeat.
After a session is started, the server looks for that heartbeat. If it fails to receive a heartbeat request after a reasonable amount of time, it takes specific action.
The client also looks for a response to that heartbeat request. If it fails to receive a response after a reasonable amount of time, it displays a message on the screen via javascript.
We are getting reports from the field where the Chromium-based version of Edge is failing. Communication between the client and server is apparently failing. The server is seeing those heartbeat requests cease – and taking that specific action. However, the client is not taking the expected action on its side. It's not displaying the message indicating a failed heartbeat request. It almost appears as though the JavaScript stopped running altogether.
The thing is, though… The customer has reported that if they disable automatic updates to Microsoft Edge, the application runs fine. If the checking for updates is allowed to occur, the application eventually fails as described above. Note that this apparently happens when Edge is just checking for updates - it's already up to date.
Updates are being turned off using several GUID-named registry keys under [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\EdgeUpdate].
Any thoughts?
I've got a Rebus server process running on one machine and an MVC website on another. I configure the message routing in the MVC website with a local input and error queue, and with the destination pointing to the queue myinputqueue#BusServer. When it runs, it sends the subscription message without error, but nothing seems to appear on the destination queue. The receiving bus never acknowledges it or creates a subscription entry.
It's using MSMQ as the transport, and all queues have full permissions for the Everyone group.
I'm assuming I've not configured something correctly so I hope this is the right forum to raise the question.
Appreciate any help.
Even though Everyone has access to the queues, which sounds like nothing should be in the way of communicating freely, I'm not sure it works that way.
In any case, I suggest you make sure your IIS app pool is running with a dedicated service user identity, which you can also use as the account under which your server process runs.
With MSMQ, messages are never lost - so in your case, the subscription message is most likely sitting in an outgoing queue on the web server machine, or it might have been moved to the "Transactional dead-letter queue" which is where MSMQ ends up moving stuff that it cannot find a place for.
Could you try and take a look at the outgoing MSMQ queues and/or the transactional dead-letter queue to see if it's a user rights problem that is haunting you?
I'm building an API using Rails where requests come in and need to be executed by a cluster of workers running on a different server (these workers call remote APIs, parse the data, etc.). I'm going to be using Sidekiq or Resque to handle the queueing/processing of that.
My issue is the client needs to wait while this is happening and the controller needs to return the response to the client once it's complete. How would I handle this in the controller? We're using a redis backend, so I was thinking something along the lines of subscribing to a pub/sub channel and waiting for the worker to publish a status message. The controller would wait for a set time period and then return a 'check back later' response to the client if it doesn't receive a message in time. What would be the best way to implement that, or is there a better solution?
Do not make your clients wait! There are a lot of issues if you make the controller block for a long running job:
Other programs may assume the request timed out (proxies, browsers, scripts, etc.)
It makes your API endpoints a vector for denial of service
It requires you to put more engineering work into web servers (since a Rails process can't handle another web request while it's handling the blocking call)
Part of the reason for using Sidekiq or Resque is to avoid controllers that do heavy lifting during the HTTP request.
Instead, background jobs should report their status to the database, and the web server should query the database and return the latest status to the client.
If clients need more immediate feedback, you can:
make clients poll periodically
post a request back to the client (if the API consumer is another web server)
use another protocol mechanism (e.g., WebSockets).
We're using CometD 2 to achieve the connection between a central data provider and several backends consuming the data. Up to now, when one of the backends fails briefly, all messages posted in the meantime are lost. Now we heard about the "Acknowledge Extension" for CometD. It is supposed to keep a server-side list of messages and deliver them when one of the clients reports to be back online. Here are some questions:
1) Does this also work with several clients?
2) The documentation (http://cometd.org/documentation/2.x/cometd-ext/ack) says: "Note that if the disconnected browser is disconnected for in excess of maxInterval (default 10s), then the client will be timed out and the unacknowledged queue discarded." Does this mean that if my client doesn't recover within the maxInterval, the messages are lost anyway?
Hence,
2.1) What's the maximal maxInterval? What are the consequences of setting it to a high value?
2.2) We'd need a reliable mechanism for outages of at least a few minutes. Is this possible? Are there any alternatives?
3) Is it really only necessary to add the two extensions in both the client and the CometD server? We're using Jetty for the server and .NET Oyatel for the client. Does anyone have experience with this?
I'm sorry for this bunch of questions, but unfortunately, the CometD project isn't really well documented. I really appreciate any answers.
Cheers,
Chris
1) Does this also work with several clients?
Yes, it does. There is one message queue allocated for each client (see AcknowledgedMessagesClientExtension).
2) Does this mean that if my client doesn't recover within the maxInterval, the messages are lost anyway?
Yes, it does. When the client can't reach the server for maxInterval milliseconds, the server will throw away all state associated with that client.
2.1) What's the maximal maxInterval? What are the consequences of setting it to a high value?
maxInterval is a servlet parameter of the CometD servlet. It is internally treated as a long value, so the maximal value for it is Long.MAX_VALUE.
Example configuration:
<init-param>
  <!-- The max period of time, in milliseconds, that the server will wait for
       a new long poll from a client before that client is considered invalid
       and is removed -->
  <param-name>maxInterval</param-name>
  <param-value>10000</param-value>
</init-param>
Setting it to a high value means that the server will wait longer before throwing away the state associated with a client (from the time the client stops contacting the server).
I see two problems with this. First, the memory requirements of the server will potentially be higher (which may also make denial of service easier). Second, the RemoveListener isn't called on the Server before the maxInterval expires, which may require you to implement additional logic that differentiates between "momentarily unreachable" and "disconnected".
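To handle that second point, one option is a session listener (a sketch against the CometD 2.x listener API; the signatures changed in later versions). Its removal callback tells you whether a session timed out or disconnected deliberately:

import org.cometd.bayeux.server.BayeuxServer;
import org.cometd.bayeux.server.ServerSession;

public class SessionLifecycleListener implements BayeuxServer.SessionListener {

    @Override
    public void sessionAdded(ServerSession session) {
        // the client completed a handshake
    }

    @Override
    public void sessionRemoved(ServerSession session, boolean timedout) {
        if (timedout) {
            // unreachable for longer than maxInterval: its unacknowledged queue is gone
        } else {
            // a clean, deliberate disconnect
        }
    }
}

Register it once at startup with bayeux.addListener(new SessionLifecycleListener()).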
2.2) We'd need a reliable mechanism for outages of at least a few minutes. Is this possible? Are there any alternatives?
Yes, it is possible to configure the maxInterval to last for a few minutes.
An alternative would be to restore any server side state on every handshake. This can be achieved by adding a listener to "/meta/handshake" and publishing a message to a "/service/" channel (to make sure only the server receives the message), or by adding an additional property to the "ext" property of the handshake message. Be careful to let the client restore only valid state (sign it on the server if you must).
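A server-side sketch of the "/service/" variant (CometD 2.x AbstractService; the channel name, the snapshot lookup, and the client-side trigger are assumptions, not part of the ack extension): the client adds a "/meta/handshake" listener and, after a successful re-handshake, publishes to "/service/restore"; the service below replies only to that client.

import java.util.Map;

import org.cometd.bayeux.server.BayeuxServer;
import org.cometd.bayeux.server.ServerSession;
import org.cometd.server.AbstractService;

public class StateRestoreService extends AbstractService {

    public StateRestoreService(BayeuxServer bayeux) {
        super(bayeux, "stateRestore");
        // messages on /service/* channels are never broadcast, so only the server sees them
        addService("/service/restore", "restore");
    }

    public void restore(ServerSession remote, Map<String, Object> data) {
        Object snapshot = loadSnapshotFor(remote.getId());   // placeholder for your own persistence
        // deliver() sends to the requesting session only, bypassing pub/sub
        remote.deliver(getServerSession(), "/app/state", snapshot, null);
    }

    private Object loadSnapshotFor(String clientId) {
        return null;   // placeholder
    }
}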
3) Is it really only necessary to add the two extensions in both the client and the CometD server?
On the server it is sufficient to do something like:
bayeux.addExtension(new AcknowledgedMessagesExtension());
I don't know how you'd do it on Oyatel. In JavaScript it suffices to simply include the extension (dojo.require, or a script include for jQuery).
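For what it's worth, with the CometD Java client the client side is also essentially a one-liner (a sketch; the URL is a placeholder):

import org.cometd.client.BayeuxClient;
import org.cometd.client.ext.AckExtension;
import org.cometd.client.transport.LongPollingTransport;

public class AckClientExample {
    public static void main(String[] args) {
        BayeuxClient client = new BayeuxClient("http://localhost:8080/cometd",
                LongPollingTransport.create(null));
        client.addExtension(new AckExtension());   // pairs with AcknowledgedMessagesExtension on the server
        client.handshake();
    }
}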
When a client with the AckExtension connects to the server, a message similar to the following will be logged (from my Jetty console log):
[qtp959713667-32] INFO org.cometd.server.ext.AcknowledgedMessagesExtension - Enabled message acknowledgement for client 51vkuhps5qgsuaxhehzfg6yw92
Another note, because it may not be obvious: the ack extension only provides a server-to-client delivery guarantee, not client-to-server. That is, when you publish a message from the client to the server, it may not reach the server and will be lost.
Once the message has made it to the server, the ack extension will ensure that all recipients connected at that time will receive the message (as long as they aren't unreachable for maxInterval milliseconds).
It is relatively straightforward to implement client-side retrying if you listen to notifications on "/meta/unsuccessful" and resend the message (the original message that failed is passed as message.request to the handler).