I'm using spring-cloud-aws to send a message to an SQS FIFO queue.
It's failing with
The request must contain the parameter MessageGroupId
There doesn't seem to be anywhere on the QueueMessagingTemplate in spring-cloud-aws-messaging that allows me to set this mandatory MessageGroupId.
Is there currently a way of writing to an SQS FIFO queue in this manner, or would I have to revert to using Amazon's API directly?
Spring Cloud AWS has supported FIFO queues since 2017, in accordance with: Add Support for FIFO SQS Queues #252
You just need to add the two required parameters (messageGroupId and messageDeduplicationId) as headers, as in the example below:
public void send(String topicName, Object message, String messageGroupId, String messageDeduplicationId) throws MessagingException {
    Map<String, Object> headers = new HashMap<>();
    headers.put("message-group-id", messageGroupId);
    headers.put("message-deduplication-id", messageDeduplicationId);
    messagingTemplate.convertAndSend(topicName, message, headers);
}
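For example (hypothetical queue name and payload; the random UUID for deduplication assumes content-based deduplication is not enabled on the queue):
// "orders.fifo" and order are illustrative; FIFO queue names must end in ".fifo"
send("orders.fifo", order, "order-group-1", UUID.randomUUID().toString());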
I don't believe that FIFO support is possible with versions 1.1.x of spring-cloud-aws, due to how the QueueMessagingTemplate uses a QueueMessageChannel that does not support configuring the SendMessageRequest in this way.
Examine https://github.com/spring-cloud/spring-cloud-aws/blob/master/spring-cloud-aws-messaging/src/main/java/org/springframework/cloud/aws/messaging/core/QueueMessageChannel.java#L78 for details.
I have opened https://github.com/spring-cloud/spring-cloud-aws/issues/246 for this reason, though I have no idea if support will be added.
It also does not appear that I can use a custom QueueMessageTemplate; this would be a reasonable workaround if I could.
I am using the Event Store client for .NET and I am struggling to find the correct way to use the client. When I register the client as a singleton in the .NET dependency injection and run my application over an extended period of time, memory usage grows continuously with each subscription.
I create and register the client in the following way. A full minimal application that experiences the problem can be found here.
var esdbConnectionString = configuration.GetValue("ESDB_CONNECTION_STRING", "esdb://admin:changeit@localhost:2113?tls=false");
var eventStoreClientSettings = EventStoreClientSettings.Create(esdbConnectionString);
var eventStoreClient = new EventStoreClient(eventStoreClientSettings);
services.AddSingleton(eventStoreClient);
My application has a high number of short streams over an extended period of time.
To Reproduce
Steps to reproduce the behavior:
Register EventStoreClient as a singleton, as recommended in the documentation.
Subscribe to a very high number of streams over an extended time.
Cancel the CancellationToken sent into the stream subscription and let it be garbage collected.
Watch the memory usage of the service grow.
How I am creating and subscribing to streams:
var streamName = CreateStreamName();
var payload = new PingEvent { StreamNr = _currentStreamNumber };
var eventData = new EventData(Uuid.NewUuid(), typeof(PingEvent).Name, EventSerialization.SerializeEventData(payload));
await _client.AppendToStreamAsync(streamName, StreamState.Any, new[] { eventData });

var streamCancellationTokenSource = new CancellationTokenSource(TimeSpan.FromMinutes(30));
await _client.SubscribeToStreamAsync(streamName, FromStream.Start, async (sub, evnt, token) =>
    {
        if (evnt.Event.EventType == "PongEvent")
        {
            _previousStreamIsDone = true;
            streamCancellationTokenSource.Cancel();
        }
    },
    cancellationToken: streamCancellationTokenSource.Token);
Approaches attempted
Registering as Transient or Scoped
If I register the client as Transient or Scoped in .NET DI, it throws thousands of exceptions internally and causes multiple problems.
Manually handling lifetime of client
Using a singleton service that handles the lifetime of the client, I have attempted to dispose of the client every once in a while and create a new one, ensuring that only one instance of the client exists at a time. This results in the same problem as registering the service as Transient or Scoped.
I am using version 22.0.0 of the Event Store client in .NET 6 against Event Store Database 21.10.0. The problem happens both when running on Windows and on the standard aspnet:6.0 Linux Docker container.
By inspecting the results of these dotnet-dumps, the memory growth seems to be happening inside this HashSet of ActiveCalls in the gRPC client.
I am hoping to find a way of using the client that does not lead to memory growth.
In your reproduction the leaked calls are coming from the extra read that you are issuing while processing an event received on the subscription.
There is an open issue (https://github.com/EventStore/EventStore-Client-Dotnet/issues/219) at the moment to deal with this better, but currently, if you issue a read but don't consume all the events and don't cancel the read, the call remains open. In your case this happens if the slave has managed to reply with Pong before the master has issued the read that results from receiving its own Ping in the subscription. That read will then contain both the Ping and the Pong; only the Ping is read, and the call remains open.
For now, if you cancel those reads by passing the cancellation token that you are cancelling into the ReadStreamAsync call in ReadFromStartOfStreamToEnd, it should resolve your problem.
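For illustration, a minimal sketch of that change, assuming a ReadFromStartOfStreamToEnd helper shaped roughly like the one in the linked repro (the body here is reconstructed from context, not the repro's exact code):
using EventStore.Client;

private async Task<List<ResolvedEvent>> ReadFromStartOfStreamToEnd(string streamName, CancellationToken cancellationToken)
{
    // Passing the token through means cancelling it also closes the
    // underlying gRPC call instead of leaving it among the ActiveCalls.
    var result = _client.ReadStreamAsync(
        Direction.Forwards,
        streamName,
        StreamPosition.Start,
        cancellationToken: cancellationToken);

    var events = new List<ResolvedEvent>();
    await foreach (var resolvedEvent in result.WithCancellation(cancellationToken))
    {
        events.Add(resolvedEvent);
    }
    return events;
}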
In case it's helpful for you, you can see the number of Current Calls live rather than waiting a long time to see the effect on memory:
dotnet-counters monitor --counters "Grpc.Net.Client" -p <processid>
I am looking to retrieve some Solace queue stats, e.g. the current number of messages spooled against the maximum limit, so that we can set a threshold to stop publishing more messages to the queue.
Also, I want to subscribe to VPN events to track message discard rates.
By the time we receive errors such as MaxMsgUsageExceeded/SpoolOverQuota, it will be too late.
I can't seem to find any of these in the SolaceSystems.Solclient.Messaging API:
https://docs.solace.com/API-Developer-Online-Ref-Documentation/net/html/7f10bcf6-19f4-beff-0768-ced843e35168.htm
Would be great if someone could help (I'm using C# for this).
To poll for Solace queue stats from your C# application, you could use legacy SEMP over the message bus to make a SEMP request for the details you want. SEMP (Solace Element Management Protocol) is a request/reply protocol that uses an XML schema to identify all managed objects available in a message broker. Applications can use SEMP to manage and monitor a message broker.
To allow for legacy SEMP to be used over the message bus, as opposed to the management interface, it first needs to be enabled on the Solace PubSub+ message broker at the VPN level.
To publish a SEMP request with the Solace .Net Messaging API, perform the following steps:
Create a Session.
Create the message topic "#SEMP/<router name>/SHOW":
ITopic topic = ContextFactory.Instance.CreateTopic("#SEMP/<router name>/SHOW");
Create a request message and set its Destination to the topic in Step 2:
IMessage requestMsg = ContextFactory.Instance.CreateMessage();
requestMsg.Destination = topic;
Set the SEMP request string as the binary attachment.
string SOLTR_VERSION = "8_4_0"; // change to the message broker's version
string SEMP_SHOW_QUEUE = "<rpc semp-version=\"soltr/" + SOLTR_VERSION + "\">" +
    "<show><queue><name>queueName</name><detail></detail></queue></show></rpc>";
requestMsg.BinaryAttachment = Encoding.UTF8.GetBytes(SEMP_SHOW_QUEUE);
Call the SendRequest(…) method on Session.
IMessage replyMsg;
ReturnCode rc = session.SendRequest(requestMsg, out replyMsg, timeout);
The SEMP response is returned in replyMsg.
Obtain the binary attachment data from the reply message:
replyMsg.BinaryAttachment
The binary attachment contains the SEMP reply for the command topic in the publish request.
The Solace PubSub+ message broker does raise an event when an egress message is discarded. However, it is only sent out approximately once every 60 seconds for a given client, so it is not possible to get exact discard rates.
It is possible for your .NET application to subscribe to VPN-level events over the message-bus. To do this, you must first enable the Solace PubSub+ message broker to publish the events. You can then subscribe to the special topic and receive the events as messages.
The topic to subscribe to is:
#LOG/<level>/VPN/<routerName>/<eventName>/<vpnName>
The different levels can use the * wildcard. For example, if you wish to subscribe to all VPN events of all levels for the VPN apple on router QA-NY1, the topic string would be:
#LOG/*/VPN/QA-NY1/*/apple
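As a minimal sketch (assuming a connected ISession named session, with message handling set up when the session was created):
// Subscribe to all VPN events of all levels for VPN "apple" on router QA-NY1
ITopic vpnEvents = ContextFactory.Instance.CreateTopic("#LOG/*/VPN/QA-NY1/*/apple");
session.Subscribe(vpnEvents, true); // true = block until the broker confirms the subscription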
SEMP (starting in v2) is a RESTful API for configuring, monitoring, and administering a Solace PubSub+ broker.
1. The Swagger page link is SEMP V2 API.
2. The Swagger metadata definitions URL is located at http://{solace-server-url}/SEMP/v2/config/spec.
3. From Visual Studio, add a REST API Client.
4. In the configuration dialog, pass the Swagger metadata URL from step 2; I chose SolaceSemp as the value for the client namespace input.
5. Once you click OK, VS will create the client along with the models under the SolaceSemp namespace.
6. Start using the client as per the following:
using SolaceSemp;
using Microsoft.Rest;

var credentials = new BasicAuthenticationCredentials();
credentials.UserName = "place user name";
credentials.Password = "place password";

using (var client = new SolaceSempClient(credentials))
{
    var model = client.GetAboutApi();
}
Our application is using the org.springframework.cloud spring-cloud-starter-stream-rabbit framework, and we are trying to avoid sending specific messages to the DLQ and retrying them. This behaviour should somehow be dynamic because, for the default messages, retries and the DLQ should still work.
According to this documentation:
Putting it All Together
And this useful post:
DLX in rabbitmq and spring-rabbitmq - some considerations of rejecting messages
It seems that ImmediateAcknowledgeAmqpException could be used in Spring AMQP to mark a message as acknowledged and not process it further. However, when we use this code:
@StreamListener(LogSink.INPUT)
public void handle(Message<Map<String, Object>> message) {
    if (message.getPayload().get("condition1").equals("abort")) {
        throw new ImmediateAcknowledgeAmqpException("error, we don't want to send this message to DLQ");
    }
    ...
}
The message is always sent to the DLQ.
Our current configuration:
spring.cloud.stream:
  bindings:
    log:
      consumer.concurrency: 10
      destination: log
      group: myGroup
      content-type: application/json
  rabbit.bindings:
    log:
      consumer:
        autoBindDlq: true
        republishToDlq: true
        transacted: true
Are we missing something? Is there any other alternative to avoid publishing to the DLQ and requeuing?
republishToDlq does not look at that exception; it only applies when the exception is thrown to the container (see the method causeChainHasImmediateAcknowledgeAmqpException()).
Republishing subverts that logic, since no exception is thrown to the container.
Please open an issue against the Rabbit binder; republishToDlq should honor that exception and discard the failed message.
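In the meantime, one possible workaround (an assumption based on the container behaviour described above, not a confirmed fix) is to disable republishToDlq so that the exception does reach the container, which should then acknowledge the message instead of dead-lettering it:
spring.cloud.stream:
  rabbit.bindings:
    log:
      consumer:
        autoBindDlq: true
        # with republishing off, the exception reaches the container, which
        # honors ImmediateAcknowledgeAmqpException and discards the message
        republishToDlq: false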
My application only listens to a certain queue; the producer is a 3rd-party application. I receive the messages, but sometimes, based on some logic, I need to send a fail message to the producer so that the message is resent to my listener until I decide to consume it and acknowledge it. My current implementation of this process is just throwing a custom exception, but this is not a clean solution. Can anyone help me send a FAIL to the producer without throwing an exception?
My JMS Listener Factory settings:
@Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactoryForQexpress(SQSErrorHandler errorHandler) {
    SQSConnectionFactory connectionFactory = SQSConnectionFactory.builder()
            .withRegion(RegionUtils.getRegion(StaticSystemConstants.getQexpressSqsRegion()))
            .withAWSCredentialsProvider(new ClasspathPropertiesFileCredentialsProvider(StaticSystemConstants.getQexpressSqsCredentials()))
            .build();
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setDestinationResolver(new DynamicDestinationResolver());
    factory.setConcurrency("3-10");
    factory.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
    factory.setErrorHandler(errorHandler);
    return factory;
}
My Listener Settings:
@JmsListener(destination = StaticSystemConstants.QUEXPRESS_ORDER_STATUS_QUEUE, containerFactory = "jmsListenerContainerFactoryForQexpress")
public void receiveQExpressOrderStatusQueue(String text) throws JSONException {
    LOG.debug("Consumed QExpress status {}", text);
    // here I need to decide whether to acknowledge or fail
    ...
    if (success) {
        updateStatus();
    } else {
        // TODO: I need to replace this with an explicit FAIL message
        throw new CustomException("Not right time to update status");
    }
}
Please, share your experience on this. Thank you!
SQS -- internally speaking -- is fully asynchronous and completely decouples the producer from the consumer.
Once the producer successfully hands off a message to SQS and receives the message-id in response, the producer only knows that SQS has received and committed the message to its internal storage and that the message will be delivered to a consumer at least once.¹ There is no further feedback to the producer.
A consumer can "snooze" a message for later retry by simply not deleting it (see setSessionAcknowledgeMode docs) or by actively resetting the visibility timeout on the message instead of deleting it, which triggers SQS to leave the message in the in flight status until the timer expires, at which point it will again deliver the message for the consumer to retry.
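For example, here is a minimal sketch of the visibility-reset approach, assuming the plain AWS SDK for Java v1 rather than the JMS wrapper (the queue URL and receipt handle come from the received message):
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.model.ChangeMessageVisibilityRequest;

public class MessageSnoozer {
    // Make the message invisible for another 60 seconds instead of deleting it;
    // when the timer expires, SQS delivers it again for the consumer to retry.
    public static void snooze(AmazonSQS sqs, String queueUrl, String receiptHandle) {
        sqs.changeMessageVisibility(new ChangeMessageVisibilityRequest()
                .withQueueUrl(queueUrl)
                .withReceiptHandle(receiptHandle)
                .withVisibilityTimeout(60));
    }
}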
Note, too, that a single SQS queue can have multiple producers and/or multiple consumers, as long as all the producers ask for and consumers provide identical services, but there is no intrinsic concept of which consumer or which producer. There is no consumer-to-producer backwards communication channel, and no mechanism for a producer to inquire about the status of an earlier message -- the design assumption is that once SQS has received a message, it will be delivered,² so no such mechanism should be needed.
¹at least once. Unless the queue is a FIFO queue, SQS will typically deliver the message exactly once, but there is no absolute guarantee that the message will not be delivered more than once. Because SQS is a massive, distributed system that stores redundant copies of messages, it is possible in some edge cases for messages to be delivered more than once. FIFO queues avoid this possibility by leveraging stronger internal consistency guarantees, at the cost of reduced throughput (limited to 300 TPS).
²it will be delivered assuming of course that you actually have a consumer running. SQS does not block the producer, and will allow you to enqueue an unbounded number of messages waiting for a consumer to arrive. It accepts messages from producers regardless of whether there are currently any consumers listening. The messages are held until consumed or until the MessageRetentionPeriod (default 4 days, max 14 days) timer expires for each message, whichever comes first.
I have created a replyQ and bound it to a direct exchange.
I created the message, setting its replyTo property to "replyQ",
and sent the message over Rabbit to the other service.
The service at the other end gets the message and sends the reply to the given replyTo queue.
Now I am trying to read from the replyQ queue using
template.receiveAndConvert(replyQueue);
but I am getting a null response, and I can see the message in the replyQ.
That is, the service is able to send the reply, but I am not able to read it from the given queue.
Please help me figure out what is going wrong.
template.receiveAndConvert() is a synchronous, blocking, one-time call, where the default timeout is:
private static final long DEFAULT_REPLY_TIMEOUT = 5000;
Maybe that is your problem.
Consider switching to a ListenerContainer for continuous queue polling.
Another option is RabbitTemplate.sendAndReceive(), but with a fixed reply queue you still have to deal with a ListenerContainer. See the Spring AMQP Reference Manual for more info.
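As a minimal sketch of the ListenerContainer approach (connectionFactory and the queue name are assumptions based on the question):
import org.springframework.amqp.core.MessageListener;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;

// connectionFactory is assumed to be configured elsewhere
SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
container.setQueueNames("replyQ");
container.setMessageListener((MessageListener) message ->
        System.out.println("Reply received: " + new String(message.getBody())));
container.start();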
I don't know if this could help anyone, but I found out that declaring the expected object as a parameter of the listener method did the trick:
@RabbitListener(queues = QUEUE_PRODUCT_NEW)
public void onNewProductListener(ProductDTO productDTO) {
    // messagingTemplate.receiveAndConvert(QUEUE_PRODUCT_NEW) returns null here
    log.info("A new product was created {}", productDTO);
}