Mule Message Redelivery

Does anyone know the difference between the maxRedelivery attribute on a connector and defining an idempotent-redelivery-policy on an endpoint? Are they the same?
One difference I know of is that with idempotent-redelivery-policy you can define a dead-letter-queue to direct failed messages to an error queue. What happens when you define maxRedelivery on a connector? After the maximum number of attempts, is the message discarded?
Please help

The maxRedelivery attribute applies only to the JMS transport and is maintained for backwards compatibility. If the maxRedelivery threshold is exceeded, a MessageRedeliveredException is thrown and it's up to your configuration to handle it.
If you need a more configurable pattern, I'd recommend using idempotent-redelivery-policy, since it can be configured with many transports and offers you more options.
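As an illustration, a Mule 3-style configuration along these lines routes exhausted messages to a dead-letter queue (this is a sketch based on the Mule 3 schema; the queue names and the redelivery count are placeholders, not taken from the question):

```xml
<jms:inbound-endpoint queue="orders">
    <!-- after 5 failed deliveries, send the message to the DLQ endpoint -->
    <idempotent-redelivery-policy maxRedeliveryCount="5">
        <dead-letter-queue>
            <jms:endpoint queue="orders.dlq"/>
        </dead-letter-queue>
    </idempotent-redelivery-policy>
</jms:inbound-endpoint>
```

Check the element and attribute names against the schema for your exact Mule version before relying on this.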

Related

Logging in VerneMQ plugin

I am trying to implement logging of connected users in my VerneMQ client using Erlang. From the documentation, I found that this could be bad due to scalability concerns, under the assumption that a lot of clients might be connecting and disconnecting. That is not my case: I will have just a bunch of clients, but a lot of messages. Anyway, to my question: is it possible to change the log file when using error_logger? Or should I use a different module for logging? The log file can be in any location if it has to, but I need it separated from VerneMQ's console.log. A follow-up question: can I somehow get a rolling window on the logs? I don't need to keep logs from the previous year, and I don't want to clean them up manually every day or week.
Thanks for any responses
From OTP 21 on, you should use logger instead of error_logger, although the error_logger API is kept for compatibility (it just uses logger under the hood).
With logger, which you can configure via the system configuration, you can use file handlers such as logger_std_h (check the example configurations).
In logger_std_h you can set up file rotation.
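For example, a sys.config along these lines adds a logger_std_h handler that writes to its own file with size-based rotation (the handler id, file path, and rotation limits are placeholders; the rotation options are available from OTP 21.3 on):

```erlang
[{kernel,
  [{logger,
    [{handler, client_log, logger_std_h,
      #{config => #{file => "/var/log/vernemq/clients.log",
                    max_no_bytes => 10485760,  %% rotate at ~10 MB
                    max_no_files => 5}}}       %% keep 5 rotated files
    ]}]}].
```

This keeps your plugin's log separate from console.log and gives you the "rolling window" behaviour, since old rotated files are deleted automatically.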

How to check or log messages if we use Autosar PDUR

We use AUTOSAR to implement an automotive gateway, and the PduR module can be configured to route messages from one interface to other protocol interfaces.
My question is: if we want to do message checking, analysis or logging, how can we know which messages are routed by PduR? Should I configure all message transfers to go up to the SW-C application layer for analysis, or is there some other method to achieve this kind of deep message inspection?
Thanks
Jack
When I hear analysis and logging, I already get a bad headache, because these are features that get put into ECUs but should rather be verified with proper stress tests from outside, e.g. a PDU on network A is seen on network B after so many milliseconds. For such logging, you usually need an EEPROM or flash with a certain number of P/E cycles, which just adds to the price of the ECU without much benefit. It also impacts your ECU's performance.
Regarding PduR message-based routing, you should be very careful, because:
- Depending on CanRxProcessing, PduR routing may be handled at interrupt level, so your "deep message checks" increase your ISR runtime/lock time!
- Certain features in Can and CanIf (and other bus-specific network components) might already discard received messages, so PduR might not even be informed about them (e.g. static DLC check, or a message in a BasicCAN HRH blocked by software filtering).
- Some messages might not be routed by PduR directly: signal-based routing, for example, is handled in Com, not in PduR, and some protocols are routed by protocol-specific modules rather than by PduR.
- CanTp can have multiple addressing formats, where the N_TA is in the first data byte. Here it is tricky to handle multiple connections, if you think about certain N_TAs not being routed.
- Not sure about SecOC: does the gateway only route authenticated messages?
- Some routed messages can be disabled/enabled on the fly (routing path groups).
- Messages routed from network to network usually have so-called routing relationships (routing paths). In the end, there should be a table somewhere, but this also depends on your implementation, e.g. Vector, ETAS, Elektrobit, ...
So, my opinion is that I don't see what you would gain from deep message checking inside the ECU. I would rather go for a proper stress test with appropriate tooling from the outside.

Measuring total number of active (HTTP) connections with Metrics in Dropwizard application

I have a Dropwizard application I am working with where I need to be able to monitor active HTTP connections. I know Metrics provides classes for instrumenting Jetty, and of interest to me is measuring the total number of active connections. However, the javadoc doesn't help me much, and I can't find any examples of how this specific functionality is implemented. Does anyone have any examples they can share?
I'm not sure what your exact use case is, but if you just need to be able to see active connections, I think the simplest solutions are monitoring tools like JMX, Datadog, New Relic, or AppDynamics. If you need it in the code, I think you'd need to manually implement something; I'd recommend StatsD or Redis if that's the path you go down.
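For the in-code route, the underlying idea is just a shared counter that is incremented when a connection opens and decremented when it closes (Metrics' Jetty module applies this idea by wrapping Jetty's ConnectionFactory). A minimal self-contained sketch of that idea; the class and method names here are mine, not from the Metrics API:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: a gauge-style counter of active connections.
// Call connectionOpened()/connectionClosed() from your connection
// lifecycle hooks; activeConnections() is what you expose as a metric.
class ActiveConnectionGauge {
    private final AtomicLong active = new AtomicLong();

    void connectionOpened() { active.incrementAndGet(); }
    void connectionClosed() { active.decrementAndGet(); }
    long activeConnections() { return active.get(); }
}
```

You could register `activeConnections()` as a Gauge in a MetricRegistry, or push it to StatsD on a timer, depending on which monitoring path you pick.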

Handle SOAP calls with ESB/MessageBroker or Grails?

We are currently trying to determine an application architecture for an application that will need to accept a number of SOAP calls and also make SOAP calls. Simplicity and robustness are design goals we need to take into account.
In the Grails space we could tie this all into one big Grails application, but this causes headaches on the robustness front, as an update of the Grails application will disable all incoming SOAP requests.
I was wondering if splitting up the Grails app and combining it with something like ActiveMQ/ServiceMix/Mule etc. is recommended? Any advice or comments are appreciated! What kind of solution would be a good candidate?
You can achieve some robustness with your monolithic Grails app by running it behind a network load balancer. This would allow you to perform no-downtime rolling upgrades.
Now this doesn't address other concerns like the need to deal with possibly unreachable remote SOAP services, etc... This is when a tool/framework, like Mule, can become helpful as it will provide you exception handling, retries and whatnot.
This depends on the intended behavior of your SOAP bridge: is it asynchronous (i.e. fire and forget: send the message to the bridge, get an immediate ACK, and let the bridge do the remote dispatch whenever possible) or synchronous (i.e. the caller of the bridge is held until a remote response is received and forwarded back to it)?
If your bridge is fundamentally synchronous, I'd say you can stick with your single Grails app and use a load balancer. It will be up to the caller to deal with retries.
Otherwise, if it's async, consider a messaging middleware to help with the temporary message persistence and redelivery in case of failure.

Are there some general Network programming best practices?

I am implementing some networking functionality in our project. It has been decided that the communication is very important and we want to do it synchronously: the client sends something and the server acknowledges it.
Are there some general best practices for the interaction between client and server? For instance, if there isn't an answer from the server, should the client automatically retry? Should there be a timeout period before it retries? What happens if the acknowledgement fails? At what point do we break the connection and reconnect? Is there some material on this? I have done searches, but nothing is really coming up.
I am looking for best practices in general. I am implementing this in C# (probably with sockets), so if there is anything .NET-specific, please let me know too.
First rule of networking - you are sending messages, you are not calling functions.
If you approach networking that way, and don't pretend that you can call functions remotely or have "remote objects", you'll be fine. You never have an actual "thing" on the other side of the network connection - what you have is basically a picture of that thing.
Everything you get from the network is old data. You are never up to date. Because of this, you need to make sure that your messages carry the correct semantics: for instance, send a message that increments or decrements something by a value; do not set it to the current value plus or minus another, as the current value may have changed by the time your message gets there.
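A tiny illustration of that point (the class and values here are made up): the message carries the delta, and the receiver applies it to its own authoritative state, rather than the sender computing a new absolute value from a possibly stale view:

```java
// Hypothetical sketch: the client sends "adjust by +5", not "set to 105",
// because its view of the current value may already be out of date.
class Inventory {
    private int quantity;

    Inventory(int initial) { quantity = initial; }

    // The receiver applies the delta to its own, authoritative state.
    void apply(int delta) { quantity += delta; }

    int quantity() { return quantity; }
}
```

With delta messages, two clients adjusting the same value concurrently both have their intent honored; with "set to X" messages, one of them silently overwrites the other.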
If both the client and the server are written in .NET/C#, I would recommend WCF instead of raw sockets, as it saves you from a lot of plumbing code for serialization and deserialization, synchronization of messages, etc.
That maybe doesn't really answer your question about best practices, though ;-)
The first thing to do is to characterize your specific network in terms of speed, probability of lost messages, nominal and peak traffic, bottlenecks, client and server MTBF, ...
Then, and only then, decide what you need from your protocol. In many cases you don't need sophisticated error-handling mechanisms and can reliably implement a service with plain UDP.
In a few cases, you will need to build something much more robust in order to maintain a consistent global state among several machines connected through a network that you cannot trust.
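On the retry question from the original post, one common shape is a bounded retry loop: attempt the send, wait up to a timeout for the acknowledgement, and give up after a fixed number of tries so the caller can decide what to do next. A minimal sketch (the names are mine, and the retry count is an arbitrary choice, not a recommendation):

```java
import java.util.function.Supplier;

// Hypothetical sketch of bounded retries. attempt.get() stands in for one
// send-and-wait-for-ack round trip, returning true if the ack arrived
// before the timeout.
class RetrySender {
    static boolean sendWithRetries(Supplier<Boolean> attempt, int maxAttempts) {
        for (int i = 0; i < maxAttempts; i++) {
            if (attempt.get()) {
                return true;  // acknowledged
            }
            // In real code you would sleep / back off here before retrying.
        }
        return false;         // give up; the caller decides what happens next
    }
}
```

Remember that retried messages may be duplicated on the wire (the ack, not the message, may have been lost), so the receiver should treat them idempotently.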
The most important thing I have found is that messages should always be stateless (read up on REST if this means nothing to you).
For example, if your application monitors the number of shipments over a network, do not send incremental updates (+x) but always the new total.
For network programming in general, I think you should learn about:
1. Sockets (of course).
2. Forking and threading.
3. Locking between processes (using a mutex, semaphore, or similar).
Hope this helps.
