Blazor-server scoped services, closed connections, garbage collection - dependency-injection

If I have a scoped service:
services.AddScoped<MyScopedService>();
And in that service, an HTTP request is made:
HttpClient client = _clientFactory.CreateClient();
StringContent formData = ...;
HttpResponseMessage response = await client.PostAsync(uri, formData);
string data = await response.Content.ReadAsStringAsync();
I read here that for an AddScoped service, the service scope is the SignalR connection.
If the user closes the browser tab before the response is returned, the MyScopedService code still completes.
Could someone explain what happens to that MyScopedService instance? When is it considered out of scope? After the code completes? Is the time until it's garbage collected predictable?
I have a Blazor Server project using scoped dependency injection (Fluxor, and a CircuitHandler), and I'm noticing that the total app memory increases with each new connection (as expected), but it takes a while (minutes) for the memory to come down after the browser tabs are closed.
Just wondering if this is expected, or if I could be doing something to let the memory usage recover more quickly. Or maybe I'm doing something wrong with my scoped services.

Implement IAsyncDisposable on your service, then in your service:
public async ValueTask DisposeAsync() => await hubConnection.DisposeAsync();
This was copied from one of my own libraries where I was facing the same issue. The GC will not free the memory while references to other objects are still being held...
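For example, a minimal sketch (assuming the service holds a HubConnection field named hubConnection, as in the snippet above; the constructor injection is illustrative):
using Microsoft.AspNetCore.SignalR.Client;

public class MyScopedService : IAsyncDisposable
{
    private readonly HubConnection hubConnection;

    public MyScopedService(HubConnection hubConnection)
        => this.hubConnection = hubConnection;

    public async ValueTask DisposeAsync()
    {
        // Runs when the circuit's DI scope is torn down; releasing the
        // connection here drops the reference that would otherwise keep
        // this instance (and whatever it holds) reachable.
        await hubConnection.DisposeAsync();
    }
}

// Registered as scoped, so DisposeAsync is invoked when the circuit's scope ends.
services.AddScoped<MyScopedService>();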

Related

dart compute Illegal argument in isolate message

I am using compute to do some work while keeping the UI running. The compute call was working until I added another HTTP call before it.
The working code is as follows:
final ListRequest request =
ListRequest(baseUrl: env['SERVER_URL']!, path: '/Items');
_mainController.updateListItems(
await compute(_service.getItems, request));
I read some articles saying the function passed to compute should be a top-level function or a static function. However, getItems is an instance method and there was no exception.
Recently I added a few lines and the code became
final Filter? filter = await _service.getFilter();
final ListRequest request =
ListRequest(baseUrl: env['SERVER_URL']!, path: '/Items');
request.filter = filter;
_mainController.updateListItems(
await compute(_service.getItems, request));
getFilter makes an HTTP call to retrieve some filter parameters from the backend.
Then I got the following error
Invalid argument(s): Illegal argument in isolate message: (object extends NativeWrapper - Library:'dart:io' Class: _SecureFilterImpl#13069316)
My dart and flutter versions are
Dart SDK version: 2.15.1 (stable)
Flutter 2.8.1
Thank you
=========================================================
Update
The Filter class is:
class Filter {
String? itemLocationSuburb;
String? itemLocationPostcode;
}
Your _service presumably contains an HttpClient. When you make a request through this client, it opens a connection to the HTTP server, and it may keep the connection alive after the request completes.
An HttpClient cannot be sent through a SendPort while it has open connections, but it is captured in the scope of the getItems instance method.
To work around this issue, you can do one of the following:
Disable persistent connections with the HttpClientRequest.persistentConnection property
Make a new HttpClient to send through the compute function every time
Implement a long-lived background isolate to maintain its own HttpClient
Use the HttpClient in the main isolate, and only perform other work like parsing with compute (there's no significant benefit to using an isolate to make HTTP requests anyway); a sketch of this option follows below
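A minimal sketch of that last option, assuming the service can expose the raw response body; fetchItemsBody, parseItems and Item are hypothetical names used only for illustration:
import 'dart:convert';
import 'package:flutter/foundation.dart' show compute;

// Top-level function, so compute can send it to the background isolate.
List<Item> parseItems(String body) =>
    (jsonDecode(body) as List).map((e) => Item.fromJson(e)).toList();

Future<void> loadItems() async {
  final Filter? filter = await _service.getFilter();
  final ListRequest request =
      ListRequest(baseUrl: env['SERVER_URL']!, path: '/Items')
        ..filter = filter;

  // The HTTP call stays on the main isolate, so the HttpClient is never
  // sent through a SendPort. fetchItemsBody is a hypothetical helper that
  // returns the raw response body as a String.
  final String body = await _service.fetchItemsBody(request);

  // Only the CPU-bound parsing moves to the background isolate;
  // a String is freely sendable between isolates.
  _mainController.updateListItems(await compute(parseItems, body));
}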

efficient async function that needs result from another async function in dart (http client)

From Dart - Request GET with cookie we have this example of doing a GET request with Dart's built-in HTTP library:
exampleCall() {
HttpClient client = new HttpClient();
HttpClientRequest clientRequest =
await client.getUrl(Uri.parse("http://www.example.com/"));
clientRequest.cookies.add(Cookie("sessionid", "asdasdasqqwd"));
HttpClientResponse clientResponse = await clientRequest.close();
}
As you can see, multiple awaits are needed. Which means that if I try to do multiple concurrent exampleCall calls, they won't happen at the same time.
I cannot return a future because I must wait the client.getUrl() in order to do the clientResponse.
I also couldn't find a good alternative to use cookies on http calls. Dio seems to only support storing cookies from the server. Anyways, I'd like to know how to do in this way, but if there's a better way I'd like to know.
As you can see, multiple awaits are needed. Which means that if I try to do multiple concurrent exampleCall calls, they won't happen at the same time.
Not really sure what you mean here. Dart is single threaded, so the concept of things happening "at the same time" is a little vague. But if you follow the example later, you should be able to call exampleCall() multiple times without them needing to wait on each other (for instance, by awaiting them all together with Future.wait).
I cannot return a future because I must wait the client.getUrl() in order to do the clientResponse.
Yes you can if you mark the method as async:
import 'dart:convert';
import 'dart:io';
Future<List<String>> exampleCall() async {
final client = HttpClient();
final clientRequest =
await client.getUrl(Uri.parse("http://www.example.com/"));
clientRequest.cookies.add(Cookie("sessionid", "asdasdasqqwd"));
final clientResponse = await clientRequest.close();
return clientResponse
.transform(utf8.decoder)
.transform(const LineSplitter())
.toList();
}
The whole point of async methods is the ability to easily bundle multiple asynchronous calls into a single Future. Notice that async methods must always return a Future, but your return statement does not necessarily have to return a Future object (if you return a normal object, it is automatically wrapped in a Future).
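For example, to start several calls at once and wait for all of them to finish, a small usage sketch built on the exampleCall above:
Future<void> fetchAll() async {
  // Start all requests immediately; none of them waits for the others.
  final futures = [exampleCall(), exampleCall(), exampleCall()];
  // Await them together; this completes when all responses have arrived.
  final List<List<String>> results = await Future.wait(futures);
  for (final lines in results) {
    print('Got ${lines.length} lines');
  }
}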
I also couldn't find a good alternative to use cookies on http calls. Dio seems to only support storing cookies from the server. Anyways, I'd like to know how to do in this way, but if there's a better way I'd like to know.
Not really sure about the whole cookie situation. :)

Prevent keeping unused DB connection

Problem description:
Let's have a service method which is called from a controller:
class PaymentService {
static transactional = false
public void pay(long id) {
Member member = Member.get(id)
//long running task executing HTTP request
requestPayment(member)
}
}
The problem is that if 8 users hit the same service at the same time and the time to execute the requestPayment(member) method is 30 seconds, the whole application gets stuck for 30 seconds.
The problem is even bigger than it seems, because if the HTTP request is performing well, nobody notices any trouble. The serious problem is that the availability of our web service depends on the availability of our external partner/component (in our use case, a payment gateway). So when your partner starts to have performance issues, you will have them as well, and even worse, they will affect all parts of your app.
Evaluation:
The cause of the problem is that Member.get(id) reserves a DB connection from the pool and keeps it for further use, even though the requestPayment(member) method never needs to access the DB. When the next (9th) request hits any other part of the application which requires a DB connection (transactional service, DB select, ...), it keeps waiting (or times out if maxWait is set to a lower duration) until the pool has an available connection, which can take as long as 30 seconds in our use case.
The stacktrace for the waiting thread is:
at java.lang.Object.wait(Object.java:-1)
at java.lang.Object.wait(Object.java:485)
at org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1115)
at org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044)
at org.springframework.jdbc.datasource.DataSourceUtils.doGetConnection(DataSourceUtils.java:111)
Or for timeout:
JDBC begin failed
org.apache.commons.dbcp.SQLNestedException: Cannot get a connection, pool error Timeout waiting for idle object
at org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:114)
at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044)
Caused by: java.util.NoSuchElementException: Timeout waiting for idle object
at org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1167)
at org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
... 7 more
Obviously the same issue happens with a transactional service, however there it makes much more sense, since the connection is reserved for the transaction.
As a temporary solution it's possible to increase the pool size with the maxActive property on the datasource, however that doesn't solve the real problem of holding an unused connection.
As a permanent solution it's possible to enclose all DB operations in transactional behavior (withTransaction{..}, @Transactional), which returns the connection to the pool after commit (or, to my surprise, withNewSession{..} also works). But we need to be sure that the whole call chain from the controller up to the requestPayment(member) method doesn't leak the connection.
I'd like to be able to throw an exception in the requestPayment(member) method if the connection is "leaked" (similar to the Propagation.NEVER transactional behavior), so I can reveal the issue early during the test phase.
After digging in the source code I've found the solution:
class PaymentService {
static transactional = false
def sessionFactory
public void pay(long id) {
Member member = Member.get(id)
sessionFactory.currentSession.disconnect()
//long running task executing HTTP request
requestPayment(member)
}
}
The above statement releases the connection back to the pool.
If executed from a transactional context, an exception is thrown (org.hibernate.HibernateException: connnection proxy not usable after transaction completion), since we can't release such a connection (which is exactly what I needed).
Javadoc:
Disconnect the Session from the current JDBC connection. If the
connection was obtained by Hibernate close it and return it to the
connection pool; otherwise, return it to the application.
This is used by applications which supply JDBC connections to
Hibernate and which require long-sessions (or long-conversations)
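For comparison, a sketch of the transactional-wrapping alternative mentioned in the question, where the DB read is scoped to a transaction so its connection goes back to the pool before the long-running call:
class PaymentService {
    static transactional = false

    public void pay(long id) {
        Member member = null
        // The connection is borrowed only for this block and is returned
        // to the pool on commit.
        Member.withTransaction {
            member = Member.get(id)
        }
        // The long-running HTTP request no longer holds a pooled connection.
        requestPayment(member)
    }
}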

webapi odata update savechanges issue - Unable to connect to remote server

In my MVC web application, I am using Web API to connect to my database through OData.
Both the MVC WebApp and the OData WebApi are on different ports of the Azure cloud service web role endpoints:
MVC WebApp - 80
Odata WebApi - 23900
When I do an odataProxy UpdateObject and call SaveChanges like
odataProxy.UpdateObject(xxx);
odataProxy.SaveChanges(System.Data.Services.Client.SaveChangesOptions.PatchOnUpdate);
I am getting a weird exception on the SaveChanges method call - unable to connect to remote server.
When I looked into the inner exceptions, it says - No connection could be made as the target machine actively refused it 127.0.0.1:23901
If you observe the port number in the exception, it shows 23901, while the request is supposed to hit 23900, which is obviously why this error occurs.
I am facing this exception only when running on the Azure cloud solution. Whenever I do an update request, it fails by hitting the wrong port (incremented by 1).
Another thing: apart from this UpdateObject -> SaveChanges, everything else works, like fetching data and adding data.
FWIW, I've just run across this same thing. Darn near annoying and I really hope it doesn't happen in production. I'm surprised no other people have come across this though.
The idea of creating a new context, attaching the object(s) and calling SaveChanges really repulsed me because not only does it practically break all forms of testing, it causes debug code and production code to be fundamentally different.
I was however able to work around this problem in another way, by intercepting the request just before it goes out and using reflection to poke at some private fields in memory to "fix" the port number.
UPDATE: It's actually easier than this. We can intercept the request generation process with the BuildingRequest event. It goes something like this:
var context = new Context(baseUri);
context.BuildingRequest += (o, e) =>
{
FixPort(e);
};
Then the FixPort method just needs to test the port number and build a new Uri, attaching it back to the event args.
[Conditional("DEBUG")]
private static void FixPort(BuildingRequestEventArgs eventArgs)
{
int localPort = int.Parse(LOCAL_PORT);
if (eventArgs.RequestUri.Port != localPort)
{
var builder = new UriBuilder(eventArgs.RequestUri);
builder.Port = localPort;
eventArgs.RequestUri = builder.Uri;
}
}
Here's the original method using reflection and SendingRequest2, in case anyone is still interested.
First we create a context and attach a handler to the SendingRequest2 event:
var context = new Context(baseUri);
context.SendingRequest2 += (o, e) =>
{
FixPort(e.RequestMessage);
};
The FixPort method then handles rewriting the URL of the internal request, where LOCAL_PORT is the port you expect, in your case 23900:
[Conditional("DEBUG")]
private static void FixPort(IODataRequestMessage requestMessage)
{
var httpWebRequestMessage = requestMessage as HttpWebRequestMessage;
if (httpWebRequestMessage == null) return;
int localPort = int.Parse(LOCAL_PORT);
if (httpWebRequestMessage.HttpWebRequest.RequestUri.Port != localPort)
{
var builder = new UriBuilder(requestMessage.Url);
builder.Port = localPort;
var uriField = typeof (HttpWebRequest).GetField("_Uri",
BindingFlags.Instance | BindingFlags.NonPublic);
uriField.SetValue(httpWebRequestMessage.HttpWebRequest, builder.Uri);
}
}
I have found the root cause and a temporary workaround.
Cause:
When you hit the WebApi through some port (:23900) in the Azure compute emulator and do an update or delete operation, the last request somehow blocks the port, and because of the port-walking feature of the Azure emulator, the next call jumps to the next port, where there is no service available, which causes the issue.
This issue is found only in the development emulator.
Temp Workaround:
Use a different proxy to attach the updated object to, and then save from that other proxy object.
var odataProxy1 = xxx;
var obj = odataProxy1.xyz.FirstOrDefault();
obj.property1="abcd";
...//Other update assignments
var odataProxy2 = xxx;
odataProxy2.AttachTo("objEntitySet",obj);
odataProxy2.UpdateObject(obj);
odataProxy2.SaveChanges(SaveChangesOptions.ReplaceOnUpdate);

Is an MVC Async controller that calls WebResponse still async?

We have a large library that makes a lot of HTTP calls using HttpWebRequest to get data. Rewriting this library to make use of async calls with HttpClient would be a bear. So, I was wondering if I could create async controllers that use a TaskFactory to call into our library, and whether the calls that are ultimately made via the WebClient would be async or would still be synchronous. Are there any problems/side-effects I might cause by trying to mix async with the old HttpWebRequest?
If I'm understanding what you're proposing, the answer is: no, making the services the client talks to async would not help. The client would still block a CPU thread while the I/O is outstanding with the server, whether the server is async or not.
There's no reason to switch away from HttpWebRequest. You can use TaskFactory::FromAsync in .NET 4.0 to call HttpWebRequest::BeginGetResponse. That looks something like this:
WebRequest myWebRequest = WebRequest.Create("http://www.stackoverflow.com");
Task<WebResponse> getResponseTask = Task<WebResponse>.Factory.FromAsync(
myWebRequest.BeginGetResponse,
myWebRequest.EndGetResponse,
null);
getResponseTask.ContinueWith(getResponseAntecedent =>
{
WebResponse webResponse = getResponseAntecedent.Result;
Stream webResponseStream = webResponse.GetResponseStream();
// read from stream async too... eventually dispose of it
});
In .NET 4.5 you can still continue to use HttpWebRequest and use the new GetResponseAsync method with the new await features in C# to make life a heck of a lot easier:
WebRequest myWebRequest = WebRequest.Create("http://www.stackoverflow.com");
using(WebResponse webResponse = await myWebRequest.GetResponseAsync())
using(Stream webResponseStream = webResponse.GetResponseStream())
{
// read from stream async, etc.
}
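Tying this back to the original question, a minimal sketch of what an async MVC controller action using that pattern might look like (the controller name, action, and URL are illustrative):
using System.IO;
using System.Net;
using System.Threading.Tasks;
using System.Web.Mvc;

public class DataController : Controller
{
    // The request thread is returned to the pool while the HTTP I/O is
    // outstanding; nothing blocks waiting for the response.
    public async Task<ActionResult> Index()
    {
        WebRequest myWebRequest = WebRequest.Create("http://www.stackoverflow.com");
        using (WebResponse webResponse = await myWebRequest.GetResponseAsync())
        using (var reader = new StreamReader(webResponse.GetResponseStream()))
        {
            string body = await reader.ReadToEndAsync();
            return Content(body);
        }
    }
}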