Move only specific exception to DLQ in Spring Cloud Stream - amazon-sqs

I am using Spring Cloud Stream with AWS SQS. I have a use case where every exception except CustomException should move to the DLQ. I am unable to find any reference for this. Can someone help here?

Use a ListenerContainerCustomizer to configure a DefaultErrorHandler with a custom recoverer - one that delegates to a DeadLetterPublishingRecoverer for every exception except your CustomException.
See Error handling in Spring Cloud Stream Kafka in Batch mode for more information.
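
A minimal sketch of that arrangement, using the Kafka-flavoured classes the answer names (for another binder you would need its analogous hooks; CustomException stands in for your exception):

```java
import org.springframework.cloud.stream.config.ListenerContainerCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.listener.AbstractMessageListenerContainer;
import org.springframework.kafka.listener.ConsumerRecordRecoverer;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.kafka.listener.DefaultErrorHandler;

@Bean
ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> dlqCustomizer(
        KafkaTemplate<Object, Object> template) {

    DeadLetterPublishingRecoverer dlq = new DeadLetterPublishingRecoverer(template);

    // Dead-letter every failure except CustomException.
    ConsumerRecordRecoverer recoverer = (record, exception) -> {
        // the listener exception is usually a wrapper, so inspect the cause
        Throwable cause = exception.getCause() != null ? exception.getCause() : exception;
        if (!(cause instanceof CustomException)) {
            dlq.accept(record, exception);
        }
        // else: the record is treated as recovered and simply skipped
    };

    return (container, destination, group) ->
            container.setCommonErrorHandler(new DefaultErrorHandler(recoverer));
}
```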

Related

How to configure Serilog for WCF

I am upgrading our old application to Serilog. One piece of the existing functionality is: when the log level is ERROR, it logs to a local file and also sends a WCF request to a remote server, which updates a database.
Basically, it logs to multiple destinations (local file, remote database via a WCF request) if the level is ERROR.
I understand using the rolling-file sink to log to the local file.
However, I do not know how to make the WCF call from Serilog. Is there a 'WCF sink' that can help me achieve this?
As of this writing there is no generic sink that makes WCF calls; you would have to build your own sink, implementing the calls you need.
You can see a list of documented sinks on the "Provided Sinks" page of Serilog's wiki, and you can also find available sinks on NuGet.org.

Spring confluent schema deserialize example

Has anyone used spring-kafka 2.0.0.RELEASE and created a consumer that uses the Confluent Schema Registry for deserializing messages? If so, can you point me to an example?
The problem I'm trying to solve: I have a Debezium CDC connector on my Kafka Connect platform that streams events from MongoDB as they happen. I have to intercept those events, transform them, and re-stream them. To understand an event I have to deserialize the payload, and I'm currently stuck at this step.
Sastry
This was addressed a while back, so I would like to point you to the following test code and the specific part of it where you can configure the Kafka schema registry.
Please take a look and see whether it is clear or you need more help.
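
For reference, a minimal sketch of such a consumer configuration (bootstrap servers, group id, and registry URL are placeholders): a spring-kafka consumer factory whose value deserializer is Confluent's KafkaAvroDeserializer, pointed at the schema registry:

```java
import java.util.HashMap;
import java.util.Map;

import io.confluent.kafka.serializers.KafkaAvroDeserializer;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

@Bean
public DefaultKafkaConsumerFactory<String, GenericRecord> consumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "cdc-processor");           // placeholder
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    // Confluent's deserializer looks up the writer's schema in the registry
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class);
    props.put("schema.registry.url", "http://localhost:8081");           // placeholder
    return new DefaultKafkaConsumerFactory<>(props);
}
```

Records then arrive as Avro GenericRecord instances that you can transform and re-publish.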

spring rabbitMQ blocking handler

I am facing problems when resource limits are reached with RabbitMQ. I saw the post
Spring AMQP: Register BlockedListener to Connection
There was a suggestion about a JIRA issue; has there been any improvement in this direction?
In particular, it would be nice if I could also configure a blocking handler on the XML side.
Is there any way to check the channel status (blocked) before a send? I get into an infinitely blocked state if I send on a blocked channel, since no timeout is available.
Your question isn't clear. There is no JIRA issue, so there is no out-of-the-box support for that feature. All you need is the workaround provided by Gary.
It is indeed not possible to configure a BlockedListener via XML configuration, but it isn't too hard to enhance the connectionFactory, after it is injected into one of your beans, via the provided hook.
We would appreciate it if you raised a JIRA and provided feedback on how that should work from the Framework's perspective.
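
A minimal sketch of that hook (it assumes a Spring AMQP version where the Connection abstraction exposes addBlockedListener; on older versions you have to reach the underlying com.rabbitmq.client.Connection instead). The flag it maintains is one way to answer the "check before send" part of the question:

```java
import java.util.concurrent.atomic.AtomicBoolean;

import com.rabbitmq.client.BlockedListener;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.connection.Connection;
import org.springframework.amqp.rabbit.connection.ConnectionListener;

public class BlockedState {

    private final AtomicBoolean blocked = new AtomicBoolean();

    public BlockedState(CachingConnectionFactory connectionFactory) {
        connectionFactory.addConnectionListener(new ConnectionListener() {

            @Override
            public void onCreate(Connection connection) {
                // register a broker blocked/unblocked callback on each new connection
                connection.addBlockedListener(new BlockedListener() {

                    @Override
                    public void handleBlocked(String reason) {
                        blocked.set(true);
                    }

                    @Override
                    public void handleUnblocked() {
                        blocked.set(false);
                    }
                });
            }

            @Override
            public void onClose(Connection connection) {
                blocked.set(false);
            }
        });
    }

    // senders can consult this before publishing
    public boolean isBlocked() {
        return this.blocked.get();
    }
}
```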

Spring Cloud DataFlow Rabbit Source: how to intercept and enrich messages in a Source

I have been successfully evaluating Spring Cloud DataFlow with a typically simple flow: source | processor | sink.
For deployment there will be multiple sources feeding into this pipeline which I can do using data flow labels. All well and good.
Each source is a different rabbitmq instance, and because the processor needs to know where a message came from (it has to call back to the source system to get further information), the strategy I thought of was to enrich each message with headers identifying the source system, which are then transparently passed along to the processor.
Now, I'm well-versed in Spring, Spring Boot and Spring Integration but I cannot find out how to enrich each message in a dataflow source component.
The source component is bound to an org.springframework.cloud.stream.app.rabbit.source.RabbitSourceConfiguration. The source uses the default Source.OUTPUT channel. How do I get hold of each message in the source to enrich it?
My processor component uses some Spring Integration DSL to do some of what it needs to do but then this processor component has both an INPUT and OUTPUT channel by definition. Not so with the RabbitSourceConfiguration source.
So, can this be done?
I think you need a custom MessageListener on the MessageListenerContainer in RabbitSourceConfiguration.
In the RabbitSourceConfiguration you can set a custom ChannelAwareMessageListener (you could possibly extend MessagingMessageListenerAdapter as well) on the MessageListenerContainer that does what you intend to do.
In the end what worked was subclassing org.springframework.cloud.stream.app.rabbit.source.RabbitSourceConfiguration to:
- override public SimpleMessageListenerContainer container() so that I could insert a custom health check before calling super.container(). My business logic enriches each message (see the next bullet) with details of where the message came from (note: this is the publisher of the messages, not the rabbit queue). The health check is needed to validate the additional enriching information (which is provided via configuration), ensuring that messages aren't consumed from the queue and enriched with the wrong information. If the validation fails, the source component fails to start and hence no messages are consumed.
- override the creation of the AmqpInboundChannelAdapter bean so that a custom subclass of DefaultAmqpHeaderMapper can be set on the adapter. This custom mapper adds the enriched headers in public Map<String, Object> toHeadersFromRequest(final MessageProperties source); a sketch follows this list.
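
A sketch of that mapper (the header name and constructor are illustrative; it assumes a Spring Integration 4.x-era API where DefaultAmqpHeaderMapper has an accessible default constructor, whereas on 5.x you would start from DefaultAmqpHeaderMapper.inboundMapper() instead):

```java
import java.util.Map;

import org.springframework.amqp.core.MessageProperties;
import org.springframework.integration.amqp.support.DefaultAmqpHeaderMapper;

public class SourceSystemHeaderMapper extends DefaultAmqpHeaderMapper {

    private final String sourceSystem;

    public SourceSystemHeaderMapper(String sourceSystem) {
        this.sourceSystem = sourceSystem; // supplied via configuration
    }

    @Override
    public Map<String, Object> toHeadersFromRequest(MessageProperties source) {
        // keep the standard AMQP header mapping, then add the enrichment
        Map<String, Object> headers = super.toHeadersFromRequest(source);
        headers.put("sourceSystem", this.sourceSystem); // illustrative header name
        return headers;
    }
}
```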
For me, the inability of stream/dataflow to intercept and modify messages in Source components is problematic. I really shouldn't have to fiddle with the underlying message broker API in the way I did; I should be able to do it with, e.g., Spring Integration. Indeed, I can register a global message interceptor, but I cannot change the headers of the message there.
This ability would go on my WIBNI (wouldn't it be nice if) list. Perhaps I'll raise a request for this.

Log failover with Serilog

Is it possible, using Serilog, to log to a web service of mine and, if that throws an error (no internet, for instance), to log to a RollingFile instead?
It should only log to the RollingFile if the web service fails.
You can implement this yourself by creating a custom sink that wraps a new RollingFileSink(...) and only forwards events to it when the web service call fails.
To do this you'd implement ILogEventSink or, if the web service accepts batches, create a subclass of PeriodicBatchingSink.
