JxBrowser memory footprint: handling DOM leaks

We are using Vaadin as our back-end technology.
Vaadin clients consume browser memory due to DOM leaks.
The memory consumed by the browser eventually leads to performance issues.
Questions:
Is there a way to monitor the Browser instance for memory consumption or performance deterioration?
Is there a way to dynamically restart the Browser instance without impacting the client, or limit the impact on the user experience?
Similar to:
JxBrowser takes huge RAM

At the moment, JxBrowser doesn't provide such functionality.
Well, if we are talking about "restarting" the Browser instance, I can suggest that you simply dispose of it and create a new Browser instance. It's like closing and opening a new tab in Google Chrome. As far as I know, Chromium doesn't currently provide functionality that allows "restarting" the Browser instance without reloading the already loaded web page and destroying session cookies.
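A minimal sketch of that dispose-and-recreate approach. The method names (loadURL, getURL, dispose) follow the JxBrowser 6 API, but the Browser class below is a stand-in stub so the sketch compiles on its own; in a real application you would use com.teamdev.jxbrowser.chromium.Browser instead.

```java
// Sketch of "restarting" a Browser by disposing it and creating a fresh one.
// NOTE: this Browser class is a minimal stand-in for JxBrowser's
// com.teamdev.jxbrowser.chromium.Browser, so this file is self-contained.
class Browser {
    private String url = "about:blank";
    private boolean disposed;

    void loadURL(String url) { this.url = url; }
    String getURL() { return url; }
    void dispose() { disposed = true; }
    boolean isDisposed() { return disposed; }
}

public class BrowserRestarter {
    /**
     * Disposes the old Browser and returns a new one pointed at the same URL.
     * This is effectively "close the tab and open a new one": the page is
     * reloaded from scratch, which also throws away any leaked DOM memory.
     */
    public static Browser restart(Browser old) {
        String lastUrl = old.getURL();   // remember where the user was
        old.dispose();                   // release the old instance's resources
        Browser fresh = new Browser();   // spawn a replacement instance
        fresh.loadURL(lastUrl);          // bring the user back to the same page
        return fresh;
    }
}
```

Note that, as the answer says, this reloads the page; any in-page state the user had is lost, so it is best done at a navigation boundary.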

Related

What can cause a security context token handle leak?

I have a native Win32 application that, during load testing as an HTTP server, causes the Working Set to increase over time. There are no memory leaks (confirmed by tracking Private Bytes in PerfMon and using FastMM to monitor memory usage during runtime). Note: the load is constant, with about 50 concurrent connections, so there is no significant variation.
Using Process Explorer, I've narrowed the problem down to Token handle leaks. I've also used madKernel to report on handle usage counts, which also confirms that token handles keep increasing.
To be precise, all the token handles shown in Process Explorer have the same name: 'Doug-M46\Doug:ff739'.
There are no security-related API calls (or other calls that require security credentials) that I can see in the code, but something must be getting called that causes this problem; I just don't know what else to look for.
I've used AQTime to try to track down the source of the leak, but have not had any luck. At this point I'm considering hooking all the possible API calls that could cause this leak so I can track it down, but I'd prefer to avoid such an extreme measure.
My application uses the ICS HTTP Server component in a separate thread to handle HTTP requests for my application (32-bit application, Delphi XE-2, ICS V8 Gold, Windows 7 Professional Build 7601: SP1).
Any insight into the cause of these handle leaks would be very much appreciated, as I've been trying to hunt them down for quite a while now.
References:
What can cause section handle leaks?

High CPU load after Razor page render completed

I get weird behavior from Razor: after rendering a web page of approximately 300 DIVs, each with some user info, rendered in a loop, the CPU continues to run at 100% single-core load for about 30 seconds. No I/O operations, no change in memory utilization, just burning CPU cycles.
The page renders data from the database, 300 records. It's not the database's fault: I checked by disabling DB access and replacing the records with dummy data, and observed the same behavior. The page is rendered and displayed in the browser, and no other requests are active, so the server-side code (at least my code) is idle.
UPD: The problem ONLY appears when the site is launched from within Visual Studio, regardless of whether it is hosted in IIS Express or IIS. Both run .NET 4.5.1, MVC 5.1.2. Opening the same site when devenv is not running makes the issue disappear.
Could anyone advise whether you have experienced a similar issue and how you coped with it, and how I could identify the piece of code that's causing the problem?
SOLVED! It's the Browser Link!
http://blogs.msdn.com/b/webdev/archive/2013/06/28/browser-link-feature-in-visual-studio-preview-2013.aspx
Disabling it solves the issue.
Eventually it all came down to VS Browser Link.
It turns out that smaller web pages work just fine, but larger pages cause a disproportionately higher load on the web-server process, making part of the server do something after the page is sent to the browser.
Disabling Browser Link solves the problem.
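If you want to disable Browser Link per project rather than through the Visual Studio toolbar toggle, one documented option is a web.config app setting. A sketch of that fragment:

```xml
<!-- web.config: disable Visual Studio Browser Link for this site -->
<configuration>
  <appSettings>
    <!-- Prevents VS from injecting the Browser Link scripts and requests -->
    <add key="vs:EnableBrowserLink" value="false" />
  </appSettings>
</configuration>
```

This only affects development-time behavior; Browser Link is never active outside Visual Studio, which is consistent with the issue disappearing when devenv is not running.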

Memory leak issues after page refresh in Safari on iPad

This question is more of a sanity check; I cannot logically explain how this can be happening.
I have an application that uses a combination of WebSQL, knockout.js and jQuery Mobile UI. It's memory-heavy and does leak memory (knockout.js and jQuery Mobile Ajax navigation don't go together well), to the point where, after 30 minutes of usage in Safari on an iPad, the application crashes. There is not enough time to fix it all properly before the deadline (don't judge me), so we figured we would skip Ajax navigation by setting $.mobile.ajaxEnabled = false; every n navigation steps (~5-10); that should tear down all dominated objects and free up the memory, solving the leak. What I am now observing is that even after refreshing the application, memory is (presumably) still leaking, as the application still crashes after prolonged use.
How can this be? How can memory still be retained after page refresh? How could I force memory to be released?

TWebBrowser massive memory leaks: no solution so far

I have an application that uses TWebBrowser to periodically navigate to a specific URL and extract some data. The app keeps running 24x7 and does a lot of navigation between pages.
The problem is that TWebBrowser has a well-known memory leak problem: every time you navigate to a new page, the memory used by the application increases. My app can easily use more than 2GB of RAM after some time, and after navigating hundreds of times an 'Out of memory' or 'Out of system resources' exception is thrown; the only way to work around it is to restart the application.
The strange thing is that FastMM never reports these leaks. When I use my app for a few minutes and close it, nothing is reported.
I've been searching for a solution for this problem for years (in fact since 2007 when I wrote the first version of my application). There are some workarounds but in fact, none of them solves the problem. For me the only workaround is really to close and open the app periodically.
I already tested the SetProcessWorkingSetSize approach, but it only shrinks the memory used by the app temporarily. After some seconds, the app uses a huge amount of memory again.
I also tried EmbeddedWB, but as it descends from TWebbrowser, it's plagued by the same issue.
By the way, I can't use a simple component like IdHTTP, because I need to do some JavaScript manipulation in the website visited.
Does anyone know if there REALLY is a solution for this problem?
QC#106829 describes one possible cause of memory leaks with TWebBrowser. Accessing Document (and any other properties that are implemented via TOleControl.GetIDispatchProp or TOleControl.GetIUnknownProp) causes leaks because it calls AddRef without ever calling Release. As a workaround, you can manually call Release, or you can patch the VCL (see here), or you can avoid the problematic properties (for example, by using browser.DefaultInterface.Document instead of browser.Document).

Which requests cause a w3wp process to grow considerably?

On a production environment, how can one discover which Asp.Net http requests, whether aspx or asmx or custom, are causing the most memory pressure within a w3wp.exe process?
I don't mean memory leaks here. It's a good, healthy application that disposes all its objects nicely. Microsoft's generational GC does its work fine.
Some requests however, cause the w3wp process to grow its memory footprint considerably, but only for the duration of the request.
It is simply a question of the cost-efficiency and scalability of a production environment for a SaaS app: I want to regularly report back to the development department on their most memory-hogging "pages", to return that (memory) pressure to where it belongs, so to speak.
There doesn't seem to be anything like:
HttpContext.Request.PeakPrivateBytes or .CurrentPrivateBytes
or
Session.PeakPrivateBytes
You might want to use a tool like Performance Monitor to monitor the "Process\Working Set" counter for the w3wp.exe process and record it to a database. You could then correlate it with the HTTP logs for the IIS server.
It helps to have both the Perfmon data and the HTTP logs written to a SQL database. Then you can use T-SQL to bring up requested pages by date/time around the time of the observed memory pressure. Use the DatePart function to build a date/time value rounded to the desired accuracy of second or minute as needed.
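The rounding step described above can be sketched in code. This hypothetical helper (the class and method names are illustrative, not from the original answer) truncates a timestamp to the minute, producing a join key that a Perfmon sample and an IIS log entry can share, the same idea as the DatePart-based rounding in SQL:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.time.temporal.ChronoUnit;

public class TimeBucket {
    // Formatter producing a "yyyy-MM-dd HH:mm" key, one bucket per minute.
    private static final DateTimeFormatter KEY =
            DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm");

    /**
     * Rounds a timestamp down to the minute so that a Perfmon sample and
     * an IIS log line taken in the same minute map to the same key and
     * can be joined on it.
     */
    public static String minuteKey(LocalDateTime ts) {
        return ts.truncatedTo(ChronoUnit.MINUTES).format(KEY);
    }
}
```

With both data sets keyed this way, a plain equi-join surfaces which requests were in flight when the working set spiked.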
Hope this helps.
Thanks,
-Glenn
If you are using InProc session state, all your session data is stored in w3wp's memory, and may be the cause of it growing.
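If InProc session state does turn out to be the cause, session data can be moved out of the worker process. A sketch of the web.config change (the stateConnectionString shown is the default for the ASP.NET State Service; adjust for your setup):

```xml
<!-- web.config: move session data out of w3wp's memory -->
<configuration>
  <system.web>
    <!-- StateServer keeps session data in the ASP.NET State Service process
         instead of inside w3wp; SQLServer mode is another out-of-process option. -->
    <sessionState mode="StateServer"
                  stateConnectionString="tcpip=localhost:42424" />
  </system.web>
</configuration>
```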
I wouldn't worry about it.
It could be that the GC is happening during the request, and the CLR is allocating memory to move things around. Or it could be some other periodic servicing task that comes along with ASP.NET.
Unless you are prepared to go spelunking with perf-counter analysis of generation 0, 1, and 2 GC events and the like, I wouldn't worry about solving this "problem".
And it doesn't sound like it's a problem anyway - just a curiosity thing.
