MassTransit SQS AutoDelete not working for bus control queues - amazon-sqs

Bus is configured like this:
services.AddMassTransit(x =>
{
    x.UsingAmazonSqs((context, cfg) =>
    {
        cfg.Durable = true;
        cfg.AutoDelete = true;
        cfg.Host("us-east-2", h =>
        {
        });
    });
});
I use request/response as well as send/receive. Each time my program is started, a new queue with "bus" in the name is created, and it stays there even after the program exits.
Is this by design?

Amazon SQS does not have the concept of a temporary queue. Therefore, in order to remove the bus queue, the bus must be stopped. Typically this is done by the MassTransit Hosted Service when the process exits (SIGTERM, whatever). If the bus is not stopped, however, the queue will remain and must be manually cleaned up.
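For that to happen, the bus has to be stopped cleanly on shutdown. A minimal sketch, assuming MassTransit v7 where the hosted service is registered explicitly (v8 registers it automatically inside AddMassTransit):
services.AddMassTransit(x =>
{
    x.UsingAmazonSqs((context, cfg) =>
    {
        cfg.Host("us-east-2", h => { });
    });
});
// v7: registers an IHostedService that starts the bus on startup and
// stops it on shutdown, which removes the temporary bus queue
services.AddMassTransitHostedService();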
The other option, assuming you have a single service instance (bus endpoint queues cannot be shared, or responses to requests may be picked up by the wrong instance), is to force the same queue name for the bus endpoint:
cfg.OverrideDefaultBusEndpointQueueName("some-name-here");
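For reference, that call goes on the bus factory configurator; a sketch reusing the configuration from the question:
x.UsingAmazonSqs((context, cfg) =>
{
    // force a stable queue name instead of a per-instance temporary one
    cfg.OverrideDefaultBusEndpointQueueName("some-name-here");
    cfg.Host("us-east-2", h => { });
});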

Related

How do I use Event Store DB client without continued memory usage growth?

I am using the Event Store client for .NET and I am struggling to find the correct way to use the client. When I register the client as a singleton in the .NET dependency injection and run my application over an extended period of time, memory usage grows continuously with each subscription.
I create and register the client in the following way. A full minimal application that experiences the problem can be found here.
var esdbConnectionString = configuration.GetValue("ESDB_CONNECTION_STRING", "esdb://admin:changeit@localhost:2113?tls=false");
var eventStoreClientSettings = EventStoreClientSettings.Create(esdbConnectionString);
var eventStoreClient = new EventStoreClient(eventStoreClientSettings);
services.AddSingleton(eventStoreClient);
My application creates a high number of short-lived streams over an extended period of time.
To Reproduce
Steps to reproduce the behavior:
Register EventStoreClient as a singleton, as recommended in the documentation.
Subscribe to a very high number of streams over an extended time.
Cancel the CancellationToken sent into the stream subscription and let it be garbage collected.
Watch memory usage of service grow.
How I am creating and subscribing to streams:
var streamName = CreateStreamName();
var payload = new PingEvent { StreamNr = _currentStreamNumber };
var eventData = new EventData(Uuid.NewUuid(), typeof(PingEvent).Name, EventSerialization.SerializeEventData(payload));
await _client.AppendToStreamAsync(streamName, StreamState.Any, new[] { eventData });

var streamCancellationTokenSource = new CancellationTokenSource(TimeSpan.FromMinutes(30));
await _client.SubscribeToStreamAsync(streamName, FromStream.Start, async (sub, evnt, token) =>
    {
        if (evnt.Event.EventType == "PongEvent")
        {
            _previousStreamIsDone = true;
            streamCancellationTokenSource.Cancel();
        }
    },
    cancellationToken: streamCancellationTokenSource.Token);
Approaches attempted
Registering as Transient or Scoped
If I register the client as Transient or Scoped in .NET DI, it throws thousands of exceptions internally and causes multiple problems.
Manually handling lifetime of client
By having a singleton service that manages the lifetime of the client, I have attempted to dispose of the client every once in a while and create a new one, ensuring that only one instance of the client exists at a time. This results in the same problem as registering the service as Transient or Scoped.
I am using version 22.0.0 of the Event Store client in .NET 6 against Event Store Database 21.10.0. The problem happens both when running on Windows and in the standard aspnet:6.0 Linux Docker container.
By inspecting the results of these dotnet-dumps, the memory growth seems to be happening inside this HashSet of ActiveCalls in the gRPC client.
I am hoping to find a way of using the client that does not lead to memory growth.
In your reproduction the leaked calls are coming from the extra read that you are issuing while processing an event received on the subscription.
There is an open issue (https://github.com/EventStore/EventStore-Client-Dotnet/issues/219) at the moment to deal with this better, but currently, if you issue a read but don't consume all the events and don't cancel the read, the call remains open. In your case this happens if the slave has managed to reply Pong before the master has issued the read that results from receiving its own Ping in the subscription. That read will then contain both the Ping and the Pong, but only the Ping is read, so the call remains open.
For now, if you cancel those reads by passing the cancellation token that you are cancelling into the ReadStreamAsync call in ReadFromStartOfStreamToEnd, it should resolve your problem.
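For illustration, a sketch of that idea (ReadFromStartOfStreamToEnd is the helper from the linked reproduction; the token is the same one the subscription cancels):
// Pass the token you are cancelling into the read, so cancelling the
// subscription also cancels any in-flight read and releases the
// underlying gRPC call.
var result = _client.ReadStreamAsync(
    Direction.Forwards,
    streamName,
    StreamPosition.Start,
    cancellationToken: streamCancellationTokenSource.Token);

await foreach (var resolvedEvent in result)
{
    // consume every event (or let cancellation end the read)
}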
In case it's helpful for you, you can see the number of Current Calls live rather than waiting a long time to see the effect on memory:
dotnet-counters monitor --counters "Grpc.Net.Client" -p <processid>

Managing Server-Sent Events with a Service Worker

I am building a web app to display on my iPad to control my Raspberry Pi, which acts as an audio recorder. Part of the need is to keep an EventSource open so that the server can send Server-Sent Events. A specific instance of the app can grab control of the recording process, but will lose control if the server sees the SSE link close. This is just protection against a client disappearing and leaving control held (control of the process does need to be renewed at least every 5 minutes, but I don't really want to wait that long in the normal case of someone just closing the browser tab).
Part of my need is to push the browser to the background so I can then open up the camera and record a video.
I built this app and had it almost working see https://github.com/akc42/pi_record.git (master branch).
Until I pushed the browser to the background and found iOS shut down the page and broke the SSE link.
I tried restructuring to use a private web worker to manage the SSE link, passing messages between the web worker and the main JavaScript thread - again almost working (see the workers branch of the above repository). But that got shut down too!
My last thought is to use a service worker, but how to structure the app?
Clearly the service worker must act as a client to the server for the Server-Sent Events. It must keep the connection open, but it also needs to keep track of multiple tabs in the browser which may or may not try to grab control of the interface, and only allow one tab to do so.
I can think of three approaches - but it's difficult to see which is better. I have never seen any mention of approaches 2 and 3 below, but it seems to me that one of these two might actually be the simplest.
Approach 1
Move the code I have now for separate web workers into the service worker. However, we will need to add some form of ID to the message passing between window and service worker, so I can record which tab actually grabbed control of the interface and therefore exclude other tabs from doing so (i.e. simulate a failed attempt to take control).
As far as I can work out, MessageEvent.ports[0] could be a unique object which I could store in a Map somewhere, but I am not entirely convinced that the MessageChannel wouldn't close if the browser moved to the background.
Approach 2
Have a set of phantom URLs in the service worker that simulate all the different message types (and parameters) that were previously sent by the tab to its private web worker.
The fetch event provides a clientId (which I can use to distinguish who actually grabbed control), which I can then use to do Clients.get(clientid).postMessage() (or Clients.matchAll when a broadcast response is needed).
Code would be something like
self.addEventListener('fetch', (event) => {
  const requestURL = new URL(event.request.url);
  if (/^\/api\//.test(requestURL.pathname)) {
    event.respondWith(fetch(event.request)); // all api requests are a direct pass-through
  } else if (/^\/service\//.test(requestURL.pathname)) {
    /*
      process these like message passing, with one extra to say the client is going away.
    */
    if (urlRecognised) {
      event.respondWith(new Response('OK', {status: 200}));
    } else {
      event.respondWith(new Response(`Unknown request ${requestURL.pathname}`, {status: 404}));
    }
  } else {
    // respondWith needs a Response or a Promise of one, so the async
    // function must be invoked immediately rather than passed as-is
    event.respondWith((async () => {
      const cache = await caches.open('recorder');
      const cachedResponse = await cache.match(event.request);
      const networkResponsePromise = fetch(event.request);
      event.waitUntil((async () => {
        const networkResponse = await networkResponsePromise;
        await cache.put(event.request, networkResponse.clone());
      })());
      // Return the cached response if we have one, otherwise the network response.
      return cachedResponse || networkResponsePromise;
    })());
  }
});
The top of the fetch event just passes the standard API requests made by the client straight through. I can't cache these (although I could be more sophisticated and perhaps pre-emptively reject those not supported).
The second section matches the phantom URLs /service/something.
The last section is taken from Jake Archibald's offline cookbook and tries to use the cache, but updates the cache in the background if any of the static files have changed.
Approach 3
Similar to the approach above, in that we would have phantom URLs and use the clientId as a unique marker, but actually try to simulate a Server-Sent Event stream with one URL.
I'm thinking the code would be more like
...
} else if (/^\/service\//.test(requestURL.pathname)) {
  const stream = new TransformStream();
  const writer = stream.writable.getWriter();
  // pump server-sent messages into the stream until it fails or closes
  const streamFinishedPromise = (async () => {
    try {
      while (true) writer.write(await nextMessageFromServerSideEventStream());
    } catch (e) {
      writer.close();
    }
  })();
  // keep the service worker alive until the stream has finished
  event.waitUntil(streamFinishedPromise);
  event.respondWith(new Response(stream.readable, {status: 200})); // probably need event-stream headers too
}
I am thinking that approach 2 could be the simplest given where I am now, but I am concerned that searching for how to use service workers turns up nothing that discusses this phantom URL approach.
Can anyone comment on any of these approaches and provide guidance on how best to program the tricky bits? For instance, does the Approach 1 message channel close when the browser is moved to the background on an iPad? And how do you really keep a response channel open, and does it get closed when the browser moves to the background in Approach 3?
The simple truth is that none of these approaches will work. What I didn't realise when I asked the question is that a service worker is re-run by the browser whenever there is something to do, and that run only lasts for the length of time it takes to process an event. Although event.waitUntil can prolong that, the only reference to how long I can find says the browser is still at liberty to cancel it if it appears it might never settle. I can't imagine that over a period of several hours it won't get cancelled. So an EventSource will close, effectively terminating its link to the server.
So my only option to achieve what I want is to have the server carry on when the EventSource closes and find some other mechanism to release resources held on behalf of the client.

How to let Lettuce notify application when connection is down?

We are using Lettuce in our project. We have a requirement to monitor the status of connection.
I know Lettuce can re-connect Redis when the connection is down. But is there some way to notify application that the connection is down/up?
Thanks,
Steven
Lettuce provides an event model for connection events. You can subscribe to the EventBus and react to events published on the bus. There are multiple events, but for your case you'd want to listen to the connected and disconnected events:
ConnectionActivatedEvent: The logical connection is activated and can be used to dispatch Redis commands (SSL handshake complete, PING before activating response received)
ConnectionDeactivatedEvent: The logical connection is deactivated. The internal processing state is reset and the isOpen() flag is set to false.
Both events are fired after receiving the transport-level events ConnectedEvent and DisconnectedEvent, respectively.
The following example illustrates how to consume these events:
RedisClient client = RedisClient.create();
EventBus eventBus = client.getResources().eventBus();
Disposable subscription = eventBus.get().subscribe(e -> {
    if (e instanceof ConnectionActivatedEvent) {
        // …
    }
});

// …

subscription.dispose();
client.shutdown();
Please note that events are dispatched asynchronously. Anything that happens in the event listener should be non-blocking (i.e. if you need to call blocking code such as further Redis interaction, please offload this task to a dedicated thread).
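For example, a minimal sketch of such offloading (notifyConnectionDown() is a hypothetical application callback, not part of Lettuce):
ExecutorService executor = Executors.newSingleThreadExecutor();
eventBus.get().subscribe(e -> {
    if (e instanceof ConnectionDeactivatedEvent) {
        // hand potentially blocking work off the event loop thread
        executor.submit(() -> notifyConnectionDown()); // hypothetical callback
    }
});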
Read more
Lettuce Reference Documentation: Events

Make periodic HTTP requests with service worker

Is it possible to make HTTP requests in the background with a service worker, when users are not visiting my webpage? I want to make periodic requests to my webpage (e.g. every 3 seconds).
There is a feature called periodicSync, but I didn't understand how to use it.
I've not tried implementing this, but for me the clearest overview has been this explanation.
Making periodic requests involves first handling the service worker ready event, then invoking the periodicSync.register() function with config options. The register() function returns a Promise that allows you to deal with success or rejection of the periodic sync registration.
registration.periodicSync.register()
Pass a 'config' object parameter with the following properties (a registration sketch follows the list):
tag
minPeriod
powerState
networkState
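A sketch of the registration, based on the early draft API described in that explanation (the property values here are illustrative, and this draft API has since been replaced in current browsers):
navigator.serviceWorker.ready.then(function(registration) {
  registration.periodicSync.register({
    tag: 'my-tag',                  // matched in the periodicsync event below
    minPeriod: 12 * 60 * 60 * 1000, // minimum interval in ms; the browser decides the actual cadence
    powerState: 'auto',
    networkState: 'online'
  }).then(function() {
    // registration succeeded
  }, function(err) {
    // registration rejected
  });
});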
You may then register listeners against the periodicsync event, e.g. (a slightly simplified example based on the explanation):
self.addEventListener('periodicsync', function(event) {
  if (event.registration.tag == 'my-tag') {
    event.waitUntil(doTheWork()); // "do the work" asynchronously via a Promise.
  } else {
    // unknown sync, may be old, best to unregister
    event.registration.unregister();
  }
});

How to explicitly acknowledge/fail Amazon SQS FIFO queue from the listener without throwing an exception?

My application only listens to a certain queue; the producer is a 3rd-party application. I receive the messages, but sometimes, based on some logic, I need to send a fail message to the producer so that the message is redelivered to my listener until I decide to consume and acknowledge it. My current implementation of this process is just throwing a custom exception, but this is not a clean solution. Can anyone help me send a FAIL to the producer without throwing an exception?
My JMS Listener Factory settings:
@Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactoryForQexpress(SQSErrorHandler errorHandler) {
    SQSConnectionFactory connectionFactory = SQSConnectionFactory.builder()
            .withRegion(RegionUtils.getRegion(StaticSystemConstants.getQexpressSqsRegion()))
            .withAWSCredentialsProvider(new ClasspathPropertiesFileCredentialsProvider(StaticSystemConstants.getQexpressSqsCredentials()))
            .build();
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setDestinationResolver(new DynamicDestinationResolver());
    factory.setConcurrency("3-10");
    factory.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
    factory.setErrorHandler(errorHandler);
    return factory;
}
My Listener Settings:
@JmsListener(destination = StaticSystemConstants.QUEXPRESS_ORDER_STATUS_QUEUE, containerFactory = "jmsListenerContainerFactoryForQexpress")
public void receiveQExpressOrderStatusQueue(String text) throws JSONException {
    LOG.debug("Consumed QExpress status {}", text);
    // here I need to decide whether to acknowledge or fail
    ...
    if (success) {
        updateStatus();
    } else {
        // todo I need to replace this with explicit FAIL message
        throw new CustomException("Not right time to update status");
    }
}
Please, share your experience on this. Thank you!
SQS -- internally speaking -- is fully asynchronous and completely decouples the producer from the consumer.
Once the producer successfully hands off a message to SQS and receives the message-id in response, the producer only knows that SQS has received and committed the message to its internal storage and that the message will be delivered to a consumer at least once.¹ There is no further feedback to the producer.
A consumer can "snooze" a message for later retry by simply not deleting it (see the setSessionAcknowledgeMode docs) or by actively resetting the visibility timeout on the message instead of deleting it, which triggers SQS to leave the message in the in-flight status until the timer expires, at which point it will again deliver the message for the consumer to retry.
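For illustration, a sketch of the visibility-timeout option using the plain AWS SDK for Java (queueUrl and receiptHandle are assumed to be available from the received message):
AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
// make the message immediately visible again instead of deleting it,
// so SQS redelivers it to a consumer for another attempt
sqs.changeMessageVisibility(new ChangeMessageVisibilityRequest()
        .withQueueUrl(queueUrl)
        .withReceiptHandle(receiptHandle)
        .withVisibilityTimeout(0));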
Note, too, that a single SQS queue can have multiple producers and/or multiple consumers, as long as all the producers ask for and consumers provide identical services, but there is no intrinsic concept of which consumer or which producer. There is no consumer-to-producer backwards communication channel, and no mechanism for a producer to inquire about the status of an earlier message -- the design assumption is that once SQS has received a message, it will be delivered,² so no such mechanism should be needed.
¹at least once. Unless the queue is a FIFO queue, SQS will typically deliver the message exactly once, but there is not an absolute guarantee that the message will not be delivered more than once. Because SQS is a massive, distributed system that stores redundant copies of messages, it is possible in some edge case conditions for messages to be delivered more than once. FIFO queues avoid this possibility by leveraging stronger internal consistency guarantees, at a cost of reduced throughput of 300 TPS.
²it will be delivered assuming of course that you actually have a consumer running. SQS does not block the producer, and will allow you to enqueue an unbounded number of messages waiting for a consumer to arrive. It accepts messages from producers regardless of whether there are currently any consumers listening. The messages are held until consumed or until the MessageRetentionPeriod (default 4 days, max 14 days) timer expires for each message, whichever comes first.
