Implementing Acknowledge-Extension for CometD in Jetty/ASP.NET

We're using CometD 2 to connect a central data provider to several backends that consume its data. Up to now, when one of the backends fails briefly, all messages posted in the meantime are lost. We've now heard about the "Acknowledge Extension" for CometD, which is supposed to keep a server-side list of messages and deliver them once a client reports back online. Here are some questions:
1) Does this also work with several clients?
2) The documentation (http://cometd.org/documentation/2.x/cometd-ext/ack) says: "Note that if the disconnected browser is disconnected for in excess of maxInterval (default 10s), then the client will be timed out and the unacknowledged queue discarded." Does this mean that if my client doesn't recover within maxInterval, the messages are lost anyway?
Hence,
2.1) What's the maximal maxInterval? Which consequences does it have to set it to a high value?
2.2) We'd need a secure mechanism for fail outs of at least a few minutes. Is this possible? Are there any alternatives?
3) Is it really only necessary to add the two extensions on both the client and the CometD server? We're using Jetty for the server and .NET Oyatel for the client. Does anyone have any experience with this?
I'm sorry for this bunch of questions, but unfortunately the CometD project isn't really well documented. I really appreciate any answers.
Cheers,
Chris

1) Does this also work with several clients?
Yes, it does. There is one message queue allocated for each client (see AcknowledgedMessagesClientExtension).
2) Does this mean that if my client doesn't recover within maxInterval, the messages are lost anyway?
Yes, it does. When the client can't reach the server for maxInterval milliseconds, the server will throw away all state associated with that client.
2.1) What's the maximal maxInterval? Which consequences does it have to set it to a high value?
maxInterval is a servlet parameter of the Cometd servlet. It is internally treated as a long value, so the maximal value for it is Long.MAX_VALUE.
Example configuration:
<init-param>
    <!-- The max period of time, in milliseconds, that the server will wait for
         a new long poll from a client before that client is considered invalid
         and is removed -->
    <param-name>maxInterval</param-name>
    <param-value>10000</param-value>
</init-param>
Setting it to a high value means that the server will wait longer before throwing away the state associated with a client (from the time the client stops contacting the server).
I see two problems with this. First, the memory requirements of the server will potentially be higher (which may also make denial-of-service attacks easier). Second, the RemoveListener isn't called on the server until maxInterval expires, which may force you to implement additional logic that differentiates between "momentarily unreachable" and "disconnected".
2.2) We'd need a secure mechanism for fail outs of at least a few minutes. Is this possible? Are there any alternatives?
Yes, it is possible to configure the maxInterval to last for a few minutes.
An alternative would be to restore any server side state on every handshake. This can be achieved by adding a listener to "/meta/handshake" and publishing a message to a "/service/" channel (to make sure only the server receives the message), or by adding an additional property to the "ext" property of the handshake message. Be careful to let the client restore only valid state (sign it on the server if you must).
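A minimal sketch of the handshake approach with the CometD 2.x Java client (the /service/restoreState channel name and the lastSeenId state are illustrative assumptions, not part of CometD; the Oyatel .NET client would need the equivalent):

import java.util.HashMap;
import java.util.Map;
import org.cometd.bayeux.Channel;
import org.cometd.bayeux.Message;
import org.cometd.bayeux.client.ClientSessionChannel;
import org.cometd.client.BayeuxClient;

public class StateRestorer {
    private volatile long lastSeenId; // hypothetical client-side state to restore

    public void install(final BayeuxClient client) {
        // After every successful handshake (initial or re-handshake), re-publish
        // the state the server should restore. Using a /service/ channel means
        // the message is not broadcast, so only the server receives it.
        client.getChannel(Channel.META_HANDSHAKE).addListener(
            new ClientSessionChannel.MessageListener() {
                public void onMessage(ClientSessionChannel channel, Message message) {
                    if (message.isSuccessful()) {
                        Map<String, Object> state = new HashMap<String, Object>();
                        state.put("lastSeenId", lastSeenId);
                        client.getChannel("/service/restoreState").publish(state);
                    }
                }
            });
    }
}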
3) Is it really only necessary to add the two extensions on both the client and the CometD server?
On the server it is sufficient to do something like:
bayeux.addExtension(new AcknowledgedMessagesExtension());
I don't know how you'd do it with Oyatel. In JavaScript it suffices to simply include the extension (dojo.require it, or add a script include for jQuery).
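For comparison, enabling it with the CometD 2.x Java client is also essentially a one-liner; a minimal sketch (the URL is illustrative):

import org.cometd.client.BayeuxClient;
import org.cometd.client.ext.AckExtension;
import org.cometd.client.transport.LongPollingTransport;

public class AckClientDemo {
    public static void main(String[] args) {
        // The server side must have AcknowledgedMessagesExtension registered,
        // as shown above.
        BayeuxClient client = new BayeuxClient("http://localhost:8080/cometd",
                LongPollingTransport.create(null));
        client.addExtension(new AckExtension());
        client.handshake();
    }
}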
When a client with the AckExtension connects to the server, a message similar to the following will be logged (from my Jetty console log):
[qtp959713667-32] INFO org.cometd.server.ext.AcknowledgedMessagesExtension - Enabled message acknowledgement for client 51vkuhps5qgsuaxhehzfg6yw92
Another note, because it may not be obvious: the ack extension only provides a server-to-client delivery guarantee, not client-to-server. That is, when you publish a message from the client to the server, it may not reach the server and will be lost.
Once the message has made it to the server, the ack extension will ensure that all recipients connected at that time will receive the message (as long as they aren't unreachable for maxInterval milliseconds).
It is relatively straightforward to implement client-side retrying if you listen to notifications on "/meta/unsuccessful" and resend the message (the original message that failed is passed as message.request to the handler).

Related

iOS - mobile application is sending the same request twice, milliseconds apart

In our application we observe two identical requests sent from the mobile application to the server, milliseconds apart.
When we discussed the problem with the dev team, they said they don't send two requests from the application's perspective, but on the server side we see exactly the same request twice.
Does anybody know whether iOS has any built-in behavior that resends the same request, for example after a lost connection? (We're talking milliseconds here, before the server has even responded.)
The application should send only one request, wait for a success/failure response, and then resend as needed. As far as we know, there is no logic in the application itself that would trigger sending two requests to the server milliseconds apart.
Thank you for any suggestions.
It's hard to tell without looking at the code or knowing your network infrastructure.
What I'd suggest doing first is to run the app through a debugging proxy server like Charles, Proxyman or mitmproxy. If it shows multiple requests, most likely the app is to blame; I'd bet on a concurrency bug.
If the debugging proxy shows just one request but your server observes two, you'll have to check your network infrastructure, it might be that some load balancer or reverse proxy is configured incorrectly.

How to sanitize MQTT message payloads on the server side?

I made an instant messaging app using the MQTT protocol.
I want to add some extra data to the message payload, such as the sent time (server time, not client time), and also provide some kind of server-side payload sanitizing.
Is it a good idea to add a third-party client with superuser privileges between the message sender and receiver on the broker's local machine to do this job?
Or is there a better approach?
By the way, I'm using EMQTT as the message broker.
From a pure security view, direct peer-to-peer traffic (without filtering and sanitising) sounds like a dangerous idea. (At least in the Internet-of-Things domain, I would clearly object to it.)
Why? Because the clients are outside of your control (i.e. a hacker can reverse-engineer them) and can inject arbitrary traffic to exploit security holes on the receiving side of other clients.
So sanitising on the server side sounds like a very good idea.
I would suggest two topics: an inbound topic that clients use to publish messages, and an outbound topic that clients subscribe to. A server-side component then reads the messages from the inbound topic, sanitizes them, and publishes them to the outbound topic.
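For illustration, a minimal sketch of such a relay in Java with the Eclipse Paho client (EMQTT speaks standard MQTT; the topic names, the timestamp format and the sanitize step are my assumptions, and the ACL setup that keeps ordinary clients off the inbound topic is omitted):

import java.nio.charset.StandardCharsets;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;

public class SanitizingRelay {
    public static void main(String[] args) throws MqttException {
        MqttClient client = new MqttClient("tcp://localhost:1883", "sanitizer");
        client.connect();
        // Read every message from the inbound topic tree, clean the payload,
        // stamp the server time, and republish on the matching outbound topic.
        client.subscribe("chat/inbound/#", (topic, msg) -> {
            String clean = sanitize(new String(msg.getPayload(), StandardCharsets.UTF_8));
            String stamped = clean + "|serverTime=" + System.currentTimeMillis();
            String outTopic = topic.replace("/inbound/", "/outbound/");
            client.publish(outTopic, stamped.getBytes(StandardCharsets.UTF_8),
                    msg.getQos(), false);
        });
    }

    private static String sanitize(String payload) {
        // Example only: strip control characters; real code would also
        // validate the message structure.
        return payload.replaceAll("\\p{Cntrl}", "");
    }
}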
This decoupling also makes it easier to introduce MQTT payload changes: if you update the payload in an incompatible way, introduce a new inbound topic and keep the old inbound topic too. This allows you to support old and new clients during the transition phase.

Confirming Deliveries with Spring AMQP

We're using Spring AMQP in the style of Spring Remoting with AMQP. I'm setting x-message-ttl on every message so that it expires immediately if it cannot be delivered right away to a consumer. This works great; however, it leaves the producer waiting for the full replyTimeout before failing with RemoteProxyFailureException (if I recall correctly). Is there any way I can make the producer fail immediately if the message cannot be delivered (only waiting for the timeout if the message is actually received)?
The loose coupling of the architecture means there's no indication to the producer of the expiry.
There used to be an immediate flag but it was removed in rabbitmq 3.0.
One possible solution would be to configure a DLX/DLQ so the expired message can be consumed by another consumer, which can return an exception to the client.
EDIT:
Simply have the fallback consumer implement the same interface and have it throw an exception.
See this updated test case.
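A sketch of that idea (the service interface and exception are hypothetical; it assumes the fallback consumer is exposed on the DLQ through the same Spring Remoting machinery, e.g. an AmqpInvokerServiceExporter, as the real service):

// Hypothetical service interface shared by the producer and both consumers.
interface MyService {
    String process(String input);
}

// Fallback consumer wired to the DLQ: any request that expired before a real
// consumer picked it up lands here, and the exception is propagated back to
// the waiting producer instead of letting it sit out the full replyTimeout.
class ExpiredRequestFallback implements MyService {
    @Override
    public String process(String input) {
        throw new IllegalStateException(
                "Request expired before a consumer could handle it");
    }
}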

How do I ignore timeouts in Erlang?

I have a server with a number of clients, and each client is able to ask the server for information about the other clients. If they do so, the server has to get the information from each client and then return it to the asking client.
If two clients make this request at the same time, a deadlock might appear. The thing is, this request is made so often that the client doesn't need to care if it sometimes fails. How do I just ignore the timeout message that terminates everything when this problem appears?
Strict answer to your question
If you're using gen_server, then call/3 allows you to specify a timeout (and call/2 defaults to 5 seconds).
This code will either give you the gen_server's reply or the atom timeout if it failed.
Result = try gen_server:call(Target, Message, Timeout) of
             Reply ->
                 Reply
         catch
             exit:{timeout, _} ->
                 timeout
         end.
Better answer
evnu and rvirding recommended using asynchronous calls, which is a superior technique. Here are three possible ways to do this:
1. Server stores the data
Have clients periodically gen_server:cast/2 to the server to tell it their information. The server stores the latest information about each client. When a client wants to learn about its siblings, it calls gen_server:call/2 to the server.
The server call is synchronous because it doesn't need to contact any client; it's just returning the cached values.
2. Async return
The clients call gen_server:cast/2 to request data from the server. The server calls gen_server:call/2 to fetch data from each client on demand. Once the server has collected all data, it calls gen_server:cast/2 to pass the collected data back to the client that requested it.
Here, the clients are always waiting to handle requests from the server. The server calls the client synchronously, but can't deadlock because there is only one server.
3. More gen_servers
This one's hard to describe without knowing more about your code, but you could break the clients into more pieces. One piece to handle data requests and another piece to generate the requests.
Based on your description that the clients make this data request "so often", I think you should try the first method. If your clients are requesting data frequently enough, having the server collect and cache the client information will actually result in fresher data for the clients.

How can I update a DataSnap server while clients are still connected?

We use stateful DataSnap servers for some business logic tasks and also to provide clientdataset data.
If we have to update the server to modify a business rule, we copy the new version into a new empty folder and register it (depending on the Delphi version, just by launching or by running the TRegSvr utility).
We can do this even while the old server instance is running. However, after registering the new version, all new client connections will still use the currently running (old) server instance. All clients have to disconnect first, then the new server will be used for the next clients.
Is there a way to direct all new client connections to the new server, immediately after registering?
(I know that new or changed method signatures will also require a change and restart of the clients but this question is about internal modifications which do not affect the interface)
We are using Socket connections, and all clients share the same server application (only one application window is open). In the early days we used a different configuration of the remote datamodule, which resulted in one app window per client. Maybe this could be a solution? (Because every new client will launch the currently registered executable.)
Update: does Delphi XE offer some support for 'hot deployment' (of updated servers)? We use Delphi 2009 at the moment but would upgrade to XE if it offers easier implementation of 'hot deployment'.
You could separate your app server into two new servers: one a simple proxy object redirecting all methods (and optionally containing state info, if any) to the second, which actually implements your business logic. You would also need to implement a "silent reconnect" feature within the proxy server in order not to disturb connected clients whenever you decide to replace the business app server. I never did such a design myself, but I hope the idea is clear.
Have you tried renaming the current server and placing the new one in the same location with the correct name (versus changing the registry location)? I have done this for COM libraries before with success. I am not sure if it would apply to remote launch rules though, as it may look for an existing instance to attach to instead of launching a completely fresh server.
It may be a bit hackish, but you could have the client call a method on the server indicating that a newer version is available. This would allow it to perform any necessary cleanup so it doesn't end up talking to both the existing server instance and the new server instance at the same time.
There is probably not a simple answer to this question, and I suspect that you will have to modify the client. The simplest solution I can think of is to have a flag on the server (a property, or an out parameter on some commonly called method, named something like ImBeingRetired) that the client checks periodically and that tells it to disconnect and reconnect.
It's also possible to write callbacks under certain circumstances for datasnap (although I've never done this). This would allow the server to inform the client that it should restart or reconnect.
The last option I can think of (that hasn't already been mentioned) would be to make the client/server stateless, so that every time the client wants something it connects, gets what it wants then disconnects.
Unfortunately none of these options are the answer you want to your question, but might give you some ideas.
1. (Optional) Set up VMware vSphere or ESX, or find a hosting service that already has one.
2. Store the session variables in a DB.
3. Prepare two web boxes with two distinct IP addresses and deploy your stuff.
4. Set up DNS, a firewall, a load balancer, or a BSD VM so the name "example.com" resolves to web box 1.
5. Deploy the new version to web box 2.
6. Switch over to web box 2 using whatever routing method you chose.
7. Deploy the new version to web box 1 if things look OK.
Using DNS is probably easiest, but it takes time for the mapping to propagate to the client (if the client is outside your LAN), and two clients may temporarily see different results. Some firewalls have an IP address mapping feature that lets you map a public IP address to an internal one. The ideal way is to use a load balancer, configure it 50:50, and change it to 100:0 when you want to upgrade, but that costs money. A cheaper alternative is to run a software load balancer on a BSD VM, but that probably requires some work.
Edit: What I meant to say is session variables, not the session itself. You said the server is stateful. If it contains business logic that uses session variables, they need to be stored externally to be preserved across the reconnection during switchover. The actual DataSnap session will be lost, so when you shut down web box 1 during the upgrade, the client will get a "Session {some-uuid} is not found" error from web box 1 and will reconnect to web box 2.
You could also use 3 IP addresses (1 public and 2 private) so the client always sees one address, which is a better method.
I have done something similar by having a specific table which held my "data version". Each time I updated the server or changed a system-wide global setting, I would increment this field. When a client started, it would always check this value, and would check again before any transactions/queries. If the value was ever different from when the client first started, it needed to go through my re-initialization logic, which could easily include a re-login to an updated server.
I was using IIS to publish my app servers, so the data that would change would be the path to the app server. I kept the old ones available, to respond to any existing transactions that were in play. Eventually these would be removed once I knew there were no more client connections to that version.
You could easily handle knowing which versions to keep around if you log which server the client last connected to (and therefore would know about).
For newer versions (Delphi 2010 and up), there is an interesting solution
for systems using the HTTP transport:
Implementing Failover and Load Balancing in DataSnap 2010 by Andreano Lanusse
and a related question for the TCP/IP transport:
How to direct DataSnap client connections to various DS Servers?
