We are using OpenRasta for a service to serve up binary resources (images and PDFs mostly). Some of these are relatively large (5-75MB). We were encountering issues with performance, and, using the ANTS memory profiler, determined that there appeared to be memory leaks.
We have been using StructureMap for DI, and found the following modification in a blog post:
public void HandleIncomingRequestProcessed()
{
    HttpContextLifecycle.DisposeAndClearAll();
}
The leaks went away when we made the further modification:
public void HandleIncomingRequestProcessed()
{
    HttpContextLifecycle.DisposeAndClearAll();
    ObjectFactory.EjectAllInstancesOf<IRequest>();
    ObjectFactory.EjectAllInstancesOf<IResponse>();
    ObjectFactory.EjectAllInstancesOf<ICommunicationContext>();
}
EDIT: This is not a good idea, as it will mess up concurrent requests. See the comments to the answer.
Essentially, I want to know: will this screw anything up? Or is it worth submitting as a pull request?
Here are the before/after pics from the profiler:
Thanks-
Do a pull request. The SM support is poor and the context store is not in use, hence the problems you're seeing (amongst other problems that is).
That said, wouldn't evicting all instances of IRequest remove all the ones in the http context SM uses? You may want to check the documentation there.
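To make the distinction concrete, here is a minimal StructureMap sketch assuming the standard HttpContextScoped() lifecycle is available. This is not how OpenRasta's StructureMap adapter is actually wired; it only illustrates per-request scoping, where HttpContextLifecycle.DisposeAndClearAll() releases the current request's instances and nothing is ejected container-wide:

// Hypothetical registration sketch (not OpenRasta's real adapter code);
// requires the StructureMap and OpenRasta.Web namespaces.
// Scoping the per-request abstractions to the HTTP context means
// HttpContextLifecycle.DisposeAndClearAll() cleans them up at the end of
// each request, instead of EjectAllInstancesOf<T>() throwing away instances
// that concurrent requests may still be using.
ObjectFactory.Configure(x =>
{
    x.For<ICommunicationContext>().HttpContextScoped();
    x.For<IRequest>().HttpContextScoped();
    x.For<IResponse>().HttpContextScoped();
});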
Would it be resource intensive to create a new URLSession for every single web request?
Some background:
I'm working on a library for making web requests. I'm trying to add a feature that allows downloading the result to a file that would also report its progress. For that, I'm going to have to become the session's delegate.
This wouldn't be a big deal except the public interface allows customizing the URLSession used for the requests. I don't want to override any customization the developer has done with their own delegate.
Right now, I'm thinking that the way to do this would be to secretly make a copy of the session they think is being used (yes I'm going to do more than copy the object itself) and then my internal delegate would call out to the original public session's methods. There could still be confusion/problems if they try to manipulate the session during the request, but that seems like a much smaller edge case.
My only concern right now is this might be very resource intensive if many requests are being made. Does anyone have a sense for that?
Yes, they are intensive. Here is a quote from Apple Staff on the developer forums.
This is a common anti-pattern, one that we specifically warned against at WWDC this year. Creating a session per request is inefficient both on the CPU and, more importantly, on the network. Specifically, it prevents connection reuse, which can radically slow down back-to-back requests. This is especially bad for HTTP/2. We encourage folks to group all similar tasks in a single session, using multiple sessions only if you have different sets of tasks with different requirements (like interactive tasks versus background download tasks). That means that many simple apps can get away with using a single statically-allocated session.
I am using gocraft/web in a project and am trying to debug some high memory usage. gocraft/web uses reflection to call handlers. I've set up the net/http/pprof profiler, which works very well, but the largest block of memory, and the one that I am interested in, only shows reflect.Value.call as the function. That's not very helpful.
How can I get around the fact that gocraft/web is using reflection and dig deeper into the memory profile?
Here's an example of the profile output I am seeing:
Thanks to #thwd for filing http://golang.org/issue/11786 about this. This is a display issue in pprof. All the data is there, just being hidden. You can get the data you need by invoking pprof with the -runtime flag. It will also show data you don't need, but it should serve as a decent workaround until Go 1.6 is out.
The short answer is that you can't directly. reflect.Value.call calls reflect.call, which forwards to runtime.reflectcall, an assembly routine implemented in the runtime (for amd64, for example, here). This circumvents what the profiler can see.
Your best bet is to invoke your handlers without reflection and test them individually that way.
Also, enabling the profiler to follow reflective calls would arguably be an acceptable change to propose for the next Go iteration. You should follow the change proposal process for this.
Edit: issue created.
We're writing an application which uses our http api, and occasionally it will encounter errors, failed connections, timeouts, etc.
We'd like to, at least in beta, be able to capture these incidents, and forward them to ourselves somehow. Since obviously many of these issues could be due to the actual connection being down, this would need to queue these incidents and send them when a connection is available.
I tried googling for an answer to this, but to no avail; I came across a bunch of solutions which catch exceptions, but not arbitrary "incidents" (which could really just be a string we log somewhere; we'd include all the details in it).
Short of writing my own Core Data (or something) backed queue, I'm at a loss as to what a solution for this could be.
Does anyone know of any libs/services which could help with this?
You might want to look into TestFlight or, less general purpose, Parse. Not quite sure, but maybe HockeyKit offers a solution for this, too.
You can take a look at Bugfender; it's a product we have built to solve this problem. We have found that while developing an app there are a lot of issues that are not crashes, so we decided to make our own product to help us with this.
It's easy to integrate, and you can get the device's logs. Our service works offline and online, and we have spent a lot of time making it reliable and easy to use.
Compared to other products, you don't need a crash to get the logs; you choose when you want them.
I have created a WCF application which runs as a Windows Service. It was installed using a Windows Installer. I followed the procedure described in the following article:
http://msdn.microsoft.com/en-us/library/bb332338.aspx#msdnwcfhc_topic4
Most WCF properties are kept at their defaults: the net.tcp protocol, per-call instancing, and so on.
The service's memory consumption keeps increasing after every call and never decreases. Eventually it throws an OutOfMemoryException.
The application returns very large string-based data. With a memory profiler I found that memory is still allocated to string objects and increases with each call.
As I understand it, strings are managed objects and should release their memory once they go out of scope.
Let me know if any other configuration/coding information is needed specifically.
There must be something keeping references to those strings in the code. Can you use your profiler to trace the references that are keeping the string objects alive?
After many unsuccessful attempts to deal with the LOH (Large Object Heap, http://msdn.microsoft.com/en-us/magazine/cc534993.aspx), which in my context was a very large string, I created a custom class to handle it.
Instead of storing a large string in a single object, I store it as a collection of small strings inside a custom object. .NET then disposes of these properly without causing the problem mentioned above.
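A minimal sketch of that chunking idea (the poster's actual class isn't shown, so ChunkedText here is hypothetical): allocations of roughly 85,000 bytes or more, which for a string is around 42,500 characters, land on the Large Object Heap, so keeping each piece well below that threshold keeps everything on the normal, compactable heap.

using System;
using System.Collections.Generic;
using System.IO;

// Hypothetical helper illustrating the approach: split a large payload into
// small chunks so no single string allocation reaches the LOH threshold.
public class ChunkedText
{
    private const int ChunkSize = 8192; // characters; well below the LOH limit
    private readonly List<string> chunks = new List<string>();

    public void Append(string text)
    {
        for (int i = 0; i < text.Length; i += ChunkSize)
        {
            chunks.Add(text.Substring(i, Math.Min(ChunkSize, text.Length - i)));
        }
    }

    // Stream the pieces out without ever materialising one huge string.
    public void WriteTo(TextWriter writer)
    {
        foreach (string chunk in chunks)
        {
            writer.Write(chunk);
        }
    }
}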
Another possible solution is to use a file to store the large data and access it with a text reader. This works well and keeps the application's footprint small. Unfortunately, it did not work for me, as the application was not allowed to access the file location.
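And a rough sketch of that file-backed alternative, assuming the process is allowed to read from the file location (which, as noted, was not the case here); the file is consumed in small chunks through a TextReader rather than loaded as one huge string:

using System.IO;

static class PayloadReader
{
    // Hypothetical illustration: read a previously written payload back in
    // small chunks instead of holding one huge string in memory.
    public static void Process(string path)
    {
        using (TextReader reader = File.OpenText(path))
        {
            char[] buffer = new char[8192];
            int read;
            while ((read = reader.Read(buffer, 0, buffer.Length)) > 0)
            {
                // work with buffer[0..read) here, chunk by chunk
            }
        }
    }
}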
It would be very difficult to answer this question without some code to look at. You can always call GC.Collect(GC.MaxGeneration) to force garbage collection and see whether it reduces your memory consumption. Ideally this would only be temporary code to track down what is going on in the application. If forcing garbage collection does not reduce memory consumption, then references to the strings must be being retained, via static member variables or something similar; having no idea what the code looks like, any theory would be a shot in the dark.
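A throwaway diagnostic along those lines, assuming you can temporarily add code to the service (the class and method names are made up for illustration):

using System;

static class LeakDiagnostics
{
    // Temporary diagnostic only: force a full collection and log how much
    // managed memory survives. If this number stays high after a service call
    // completes, something is still holding references to the large strings.
    public static void LogSurvivingMemory(string label)
    {
        GC.Collect(GC.MaxGeneration);
        GC.WaitForPendingFinalizers();
        GC.Collect(GC.MaxGeneration);

        long survivingBytes = GC.GetTotalMemory(false);
        Console.WriteLine("{0}: {1:N0} bytes still reachable after full GC", label, survivingBytes);
    }
}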
I have a web app running ASP.NET MVC 1.0 with LINQ to SQL.
I'm noticing a very strange problem with LINQ to SQL throwing exceptions (mainly "Specified cast is not valid" or "Sequence contains more than one element") when under a certain amount of load.
The bigger problem is, I'm not talking about real heavy/professional stress testing... Basically what I'm doing is I open Firefox and Chrome and hold down F5 for ten seconds in each (I call this poor man's stress testing) - lo and behold, the web app throws these exceptions randomly for the next two to five minutes. If I restart the app from IIS7 (or restart WebDev if under Visual Studio), everything is immediately back to normal. Like nothing happened.
At first I suspected the way I handle the DataContext; maybe I'm supposed to dispose of it at every Application_End in Global.asax, but that didn't change anything.
Right now I have a single public static DataContext object used by all requests. I'm not disposing it or re-creating it. Is that the right way to do it? Am I supposed to dispose it? When exactly should I dispose it?
There are several things that happen on every request - for example, on every page, the User object (for the current user) is loaded from the database and its "LastSeen" attribute is updated to DateTime.Now. Other things (like the Tag Cloud, for example) are cached.
Any ideas why this is happening?
The DataContext class is not thread-safe - you need to create a new one for each operation. See this article by Rick Strahl (Linq to SQL DataContext Lifetime Management).
You should dispose of the DataContext after every batch of queries, and use it like this:
using (MyDataContext dc = new MyDataContext())
{
    var x = dc.Table.Single(a => a.Id == 3);
    // do some more related stuff, but make sure your connection won't be open too long
}
Don't have one static DataContext be used by every request. Would you use the same Connection object in classic ADO.NET for every request?!
See also http://blog.codeville.net/2007/11/29/linq-to-sql-the-multi-tier-story/
- DataContext isn't thread-safe, as far as I know
- You lose isolation and cannot control when SubmitChanges() is called - concurrent requests will interfere with one another
- Memory leaks are pretty likely
Create a DC for every operation like Rob suggests, OR use an IoC container to have a DC shared per request.
DO NOT DISPOSE of DCs - they are designed to be lightweight. Disposing is not only unnecessary, it can encourage bad practices and perhaps other threading issues down the line that are even harder to track down.
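For illustration, a minimal sketch of the "one DC shared per request" option without a full IoC container, reusing the hypothetical MyDataContext type from the earlier example; each request gets its own instance via HttpContext.Items, so concurrent requests never share a context:

using System.Web;

// Hypothetical per-request holder: the first access in a request creates the
// DataContext, later accesses in the same request reuse it, and a different
// request (different HttpContext) gets a separate instance.
public static class RequestDataContext
{
    private const string Key = "__requestDataContext";

    public static MyDataContext Current
    {
        get
        {
            var dc = HttpContext.Current.Items[Key] as MyDataContext;
            if (dc == null)
            {
                dc = new MyDataContext();
                HttpContext.Current.Items[Key] = dc;
            }
            return dc;
        }
    }
}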