We're using Jersey (version 2.22.2) to execute REST requests, and ApacheConnectorProvider together with PoolingHttpClientConnectionManager to manage our connection pool.
Is there a way to manually release connections from the leased connections list?
PoolingHttpClientConnectionManager provides methods to close expired and idle connections, but this will close and remove connections from the available connections list, which is not what I'm looking for.
The reason I want to do this is to avoid connection leaks.
The developer using the above service should always release the connection by calling response.readEntity() or response.close(), and if they forget to do so, I don't consider manually closing the connections a good substitute.
But if a connection wasn't closed because of some unexpected issue and remains in the leased list, I want to be able to close it myself.
Just as Apache advises writing a daemon thread to clear expired connections (the "connection eviction policy"), I want to be able to clear connections from the leased list as well.
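For reference, the eviction daemon Apache describes looks roughly like the sketch below (my paraphrase, assuming HttpClient 4.x and that connManager is the shared PoolingHttpClientConnectionManager); it only evicts expired/idle connections from the available list, which is exactly why it doesn't cover the leased case:
// Requires org.apache.http.impl.conn.PoolingHttpClientConnectionManager and java.util.concurrent.TimeUnit.
Thread connectionEvictor = new Thread(() -> {
    try {
        while (!Thread.currentThread().isInterrupted()) {
            Thread.sleep(5000);
            // Close connections whose keep-alive has expired.
            connManager.closeExpiredConnections();
            // Close connections that have been idle for longer than 30 seconds.
            connManager.closeIdleConnections(30, TimeUnit.SECONDS);
        }
    } catch (InterruptedException ex) {
        Thread.currentThread().interrupt();
    }
});
connectionEvictor.setDaemon(true);
connectionEvictor.start();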
Call response.close() inside a finally block, or use Java try-with-resources (Java 7+).
CloseableHttpResponse response = null;
try {
    response = .....;
} catch (Exception e) {
    // .....
} finally {
    if (response != null) {
        response.close(); // declare or handle the possible IOException
    }
}
or
try (CloseableHttpResponse response = .....) {
    // .....
} catch (Exception e) {
    // .....
}
I am using the Event Store client for .NET and I am struggling to find the correct way to use it. When I register the client as a singleton in the .NET dependency injection container and run my application over an extended period of time, memory usage grows continuously with each subscription.
I create and register the client in the following way. A full minimal application that experiences the problem can be found here.
var esdbConnectionString = configuration.GetValue("ESDB_CONNECTION_STRING", "esdb://admin:changeit@localhost:2113?tls=false");
var eventStoreClientSettings = EventStoreClientSettings.Create(esdbConnectionString);
var eventStoreClient = new EventStoreClient(eventStoreClientSettings);
services.AddSingleton(eventStoreClient);
My application has a high number of short streams over an extended period of time
To Reproduce
Steps to reproduce the behavior:
Register EventStoreClient as a singleton, as recommended in the documentation.
Subscribe to a very high number of streams over an extended time.
Cancel the CancellationToken sent into the stream subscription and let it be garbage collected.
Watch memory usage of service grow.
How I am creating and subscribing to streams:
var streamName = CreateStreamName();
var payload = new PingEvent { StreamNr = _currentStreamNumber };
var eventData = new EventData(Uuid.NewUuid(), typeof(PingEvent).Name, EventSerialization.SerializeEventData(payload));
await _client.AppendToStreamAsync(streamName, StreamState.Any, new[] { eventData });
var streamCancellationTokenSource = new CancellationTokenSource(TimeSpan.FromMinutes(30));
await _client.SubscribeToStreamAsync(streamName, FromStream.Start, async (sub, evnt, token) =>
{
if (evnt.Event.EventType == "PongEvent")
{
_previousStreamIsDone = true;
streamCancellationTokenSource.Cancel();
}
},
cancellationToken: streamCancellationTokenSource.Token);
Approaches attempted
Registering as Transient or Scoped
If I register the client as Transient or Scoped in the .NET DI container, it throws thousands of exceptions internally and causes multiple problems.
Manually handling lifetime of client
By having a singleton service handle the lifetime of the client, I have tried periodically disposing of the client and creating a new one, ensuring that only one instance of the client exists at a time. This results in the same problem as registering the service as Transient or Scoped.
I am using version 22.0.0 of the Event Store client in .NET 6 against Event Store Database 21.10.0. The problem happens both when running on Windows and on the standard aspnet:6.0 Linux Docker container.
By inspecting the results of these dotnet-dumps, the memory growth seems to be happening inside this HashSet of ActiveCalls in the gRPC client.
I am hoping to find a way of using the client that does not lead to memory growth.
In your reproduction the leaked calls are coming from the extra read that you are issuing while processing an event received on the subscription.
There is an open issue (https://github.com/EventStore/EventStore-Client-Dotnet/issues/219) at the moment to deal with this better, but currently, if you issue a read but don't consume all the events and don't cancel the read, the call remains open. In your case this happens if the slave has managed to reply with Pong before the master has issued the read that results from receiving its own Ping on the subscription. That read will then contain both the Ping and the Pong, but only the Ping is read, so the call remains open.
For now, if you cancel those reads by passing the cancellation token that you are cancelling into the ReadStreamAsync call in ReadFromStartOfStreamToEnd, it should resolve your problem.
In case it's helpful for you, you can see the number of Current Calls live rather than waiting a long time to see the effect on memory:
dotnet-counters monitor --counters "Grpc.Net.Client" -p <processid>
I am building a web app to display on my iPad to control my Raspberry Pi, which acts as an audio recorder. Part of the need is to keep an EventSource open so that the server can send Server-Sent Events. A specific instance of the app can grab control of the recording process, but will lose control if the server sees the SSE link close. This is just protection against a client disappearing and leaving the control held (control of the process does need to be renewed at least every 5 minutes, but I don't really want to wait that long in the normal case of someone just closing the browser tab).
Part of my need is to push the browser to the background so I can then open up the camera and record a video.
I built this app and had it almost working see https://github.com/akc42/pi_record.git (master branch).
Until I pushed the browser to the background and found iOS shut down the page and broke the SSE link.
I tried restructuring to use a private web worker to manage the SSE link, passing messages between the web worker and the main JavaScript thread, and again got it almost working (see the workers branch of the above repository). But that got shut down too!
My last thought is to use a service worker, but how to structure the app?
Clearly the service worker must act as a client to the server for the server side events. It must keep the connection open, but it also needs to keep track of multiple tabs in the browser which may or may not try and grab control of the interface, and only allow one tab to do so.
I can think of three approaches, but it's difficult to see which is better. I have never seen any mention of approaches 2 and 3 below, but it seems to me that one of those two might actually be the simplest.
Approach 1
Move the code I have now for separate web workers into the service worker. However, we will need to add some form of ID to the message passing between window and service worker, so I can record which tab actually grabbed control of the interface and exclude other tabs from doing so (i.e. simulate a failed attempt to take control).
As far as I can work out MessageEvent.ports[0] could be a unique object which I could store in a Map somewhere, but I am not entirely convinced that the MessageChannel wouldn't close if the browser moved to the background.
Approach 2
Have a set of phantom URLs in the service worker that simulate all the different message types (and parameters) that were previously sent by the tab to its private web worker.
The fetch event provides a clientId (which I can use to distinguish who actually grabbed control) and which I can then use to do Clients.get(clientId).postMessage() (or Clients.matchAll() when a broadcast response is needed).
Code would be something like
self.addEventListener('fetch', (event) => {
  const requestURL = new URL(event.request.url);
  if (/^\/api\//.test(requestURL.pathname)) {
    event.respondWith(fetch(event.request)); // all api requests are a direct pass-through
  } else if (/^\/service\//.test(requestURL.pathname)) {
    /*
      process these like message passing, with one extra url to say the client is going away.
    */
    if (urlRecognised) {
      event.respondWith(new Response('OK', {status: 200}));
    } else {
      event.respondWith(new Response(`Unknown request ${requestURL.pathname}`, {status: 404}));
    }
  } else {
    event.respondWith((async () => {
      const cache = await caches.open('recorder');
      const cachedResponse = await cache.match(event.request);
      const networkResponsePromise = fetch(event.request);
      event.waitUntil((async () => {
        const networkResponse = await networkResponsePromise;
        await cache.put(event.request, networkResponse.clone());
      })());
      // Return the cached response if we have one, otherwise return the network response.
      return cachedResponse || networkResponsePromise;
    })());
  }
});
The top of the fetch event just passes the standard API requests made by the client straight through. I can't cache these (although I could be more sophisticated and perhaps pre-reject those not supported).
The second section matches phantom URLs of the form /service/something.
The last section is taken from Jake Archibald's offline cookbook and tries to use the cache, but updates the cache in the background if any of the static files have changed.
Approach 3
Similar to the approach above, in that we would have phantom URLs and use the clientId as a unique marker, but actually try to simulate a server-sent event stream with a single URL.
I'm thinking the code would be more like:
...
} else if (/^\/service\//.test(requestURL.pathname)) {
  const stream = new TransformStream();
  const writer = stream.writable.getWriter();
  event.respondWith((async () => {
    // Resolves once the server-side event source stops delivering messages.
    const streamFinishedPromise = new Promise(async (resolve, reject) => {
      try {
        // nextMessageFromServerSideEventStream() is a placeholder for however
        // the service worker receives the next message from the server.
        while (true) writer.write(await nextMessageFromServerSideEventStream());
      } catch (e) {
        writer.close();
        resolve();
      }
    });
    // Keep the service worker alive until the stream finishes, then the link can close.
    event.waitUntil(streamFinishedPromise);
    return new Response(stream.readable, {status: 200}); // probably need event-stream headers too
  })());
}
I am thinking that Approach 2 could be the simplest given where I am now, but I am concerned that, when searching for how to use service workers, I can find nothing that discusses this phantom URL approach.
Can anyone comment on any of these approaches and provide guidance on how best to program the tricky bits? For instance, does the Approach 1 message channel close when the browser is moved to the background on an iPad? And in Approach 3, how do you really keep a response channel open, and does it get closed when the browser moves to the background?
The simple truth is that none of these approaches will work. What I didn't realise when I asked the question is that a service worker is re-run by the browser whenever there is something to do, and that run only lasts for the length of time it takes to process an event. Although event.waitUntil can prolong that, the only reference to how long I can find says the browser is still at liberty to cancel it if it appears it might never complete. I can't imagine that over a period of several hours it won't get cancelled, so an EventSource will be closed, effectively terminating its link to the server.
So my only option to achieve what I want is to have the server carry on when the EventSource closes and find some other mechanism to release the resources held on behalf of the client.
We are using Lettuce in our project. We have a requirement to monitor the status of the connection.
I know Lettuce can reconnect to Redis when the connection is down, but is there some way to notify the application that the connection is down/up?
Thanks,
Steven
Lettuce provides an event-model for connection events. You can subscribe to the EventBus and react to events published on the bus. There are multiple events, but for your case, you'd want to listen to connected and disconnected events:
ConnectionActivatedEvent: The logical connection is activated and can be used to dispatch Redis commands (SSL handshake complete, PING before activating response received)
ConnectionDeactivatedEvent: The logical connection is deactivated. The internal processing state is reset and the isOpen() flag is set to false.
Both events are fired after receiving the transport-related events ConnectedEvent and DisconnectedEvent, respectively.
The following example illustrates how to consume these events:
RedisClient client = RedisClient.create();
EventBus eventBus = client.getResources().eventBus();
Disposable subscription = eventBus.get().subscribe(e -> {
    if (e instanceof ConnectionActivatedEvent) {
        // …
    }
});
…
subscription.dispose();
client.shutdown();
Please note that events are dispatched asynchronously. Anything that happens in the event listener should be non-blocking (i.e. if you need to call blocking code such as further Redis interaction, please offload this task to a dedicated Thread).
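For example, here is a minimal sketch of such offloading (assuming the Reactor-based event bus of Lettuce 5+ and a Reactor version that provides Schedulers.boundedElastic()):
// Shift event handling onto a worker scheduler so the I/O event loop is never blocked.
Disposable subscription = client.getResources().eventBus().get()
        .filter(e -> e instanceof ConnectionActivatedEvent || e instanceof ConnectionDeactivatedEvent)
        .publishOn(Schedulers.boundedElastic()) // handlers may now block (logging, alerting, further Redis calls)
        .subscribe(e -> {
            // react to the connection going up or down here
        });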
Read more
Lettuce Reference Documentation: Events
We have a connection to a Postgres database that is configured with the Tomcat connection pool. The problem is that when a connection becomes active, it never goes back to idle.
When I start my microservice it has 0 active connections and 10 idle ones. After one hour of work there are 7 active and 3 idle. After a weekend there were 100 active; it reached the limit and the service went down.
Is there any way to configure the Tomcat connection pool to check the state of active connections and close them if they are stuck?
Looks like your application is leaking connections. Out of the box, Hibernate's c3p0 pool provides facilities for detecting leaks; there are two parameters to configure:
<property name="hibernate.c3p0.unreturnedConnectionTimeout">5</property>
<property name="hibernate.c3p0.debugUnreturnedConnectionStackTraces">true</property>
After this it will print a stack trace for connections that stay active too long and close them.
This is not recommended under high load. If you are using another pool, look for a similar feature.
As we have HTTP timeouts inside our cluster, it seems that this is causing a connection leak. I investigated, and the connections always remain active.
The solution for me was to enable abandoned connections verification.
private DataSource configureDataSource(String url, String user, String password, String driverClassName) {
    DataSource ds = DataSourceBuilder.create()
            .url(url)
            .username(user)
            .password(password)
            .driverClassName(driverClassName)
            .build();
    org.apache.tomcat.jdbc.pool.DataSource configuredDataSource = (org.apache.tomcat.jdbc.pool.DataSource) ds;
    // some other configurations here
    // ...
    configuredDataSource.getPoolProperties().setRemoveAbandonedTimeout(300);
    configuredDataSource.getPoolProperties().setRemoveAbandoned(true);
    return configuredDataSource;
}
@Bean(name = "qaDataSource")
public JdbcTemplate getQaJdbcTemplate() {
DataSource ds = configureDataSource(qaURL, qaUsername, qaPassword ,qaDriverClassName);
return new JdbcTemplate(ds);
}
The removeAbandoned and removeAbandonedTimeout flags mean that if a connection stays in the active state for longer than the timeout value, it will be closed. If you add this to your code, make sure the timeout is greater than the maximum query execution time of your service.
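If you also want to see where a leaked connection was borrowed, the Tomcat pool can log the offending stack trace. A small sketch extending the configureDataSource method above (logAbandoned and abandonWhenPercentageFull are standard Tomcat JDBC pool properties; the threshold value is only an example):
// Log the stack trace of the code that borrowed an abandoned connection,
// so the leak can be traced back to its source.
configuredDataSource.getPoolProperties().setLogAbandoned(true);
// Only start abandoning connections once the pool is 50% full (example value).
configuredDataSource.getPoolProperties().setAbandonWhenPercentageFull(50);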
I wrote two small programs which tried to acquire the same Remote Mutex named "The Token":
ACE_Remote_Mutex token("The Token", 1, 1);
token.acquire();
ACE_OS::sleep(5);
token.release();
return 0;
Both of them got the following debug output:
(3078597488) acquired The Token
(4243|3078597488) BIG PROBLEMS with get_connection: Connection refused
error on remote acquire, releasing shadow mutex.
(3078597488) released The Token, owner is no owner
(4243|3078597488) BIG PROBLEMS with get_connection: Connection refused
(3078597488) release failed: Permission denied.
(3078597488) shadow: release failed
Does ACE_Remote_Mutex work only with some sort of "agent", like a CORBA broker? Or can I modify my code to make this work?
ACE_Remote_Mutex uses the Token Service to acquire the lock. The Token Service is not a CORBA service, but it plays a similar role. Here is an example of an svc.conf entry that starts the Token Service dynamically:
dynamic Token_Service Service_Object *
../lib/netsvcs:_make_ACE_Token_Acceptor()
"-p 10202"