Any good profiler with CPU details for an ASP.NET MVC 3 web application? - asp.net-mvc

What are some good profilers for an ASP.NET MVC 3 application that provide CPU usage details?
I have tried New Relic, SlimTune and MiniProfiler. All of them show which process or request takes longest, but none of them give any insight into what is actually causing the high CPU usage.
All of the above profilers did a good job, and we improved the slow code paths; the application no longer takes a long time to respond and loads pages in 300-500 ms. But now we're more concerned about CPU usage, because the application occasionally spikes to very high CPU at random, and we are trying to find what causes this behaviour.

High CPU spikes can happen for a number of reasons.
The first thing I would do is determine whether the 'random' high CPU is due to Generation 2 garbage collections.
ASP.NET Case Study: High CPU in GC - Large objects and high allocation rates
Investigating Memory Issues
Memory Performance Counters
Garbage Collection and Performance
There are quite a few built-in performance counters that might be helpful:
ASP.NET Performance Monitoring, and When to Alert Administrators
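If you just want to confirm or rule out the GC suspicion before reaching for a full profiler, you can read the relevant counter from code. Here is a minimal sketch using the standard ".NET CLR Memory" category; the "w3wp" instance name is an assumption and must be adjusted to your worker process:

    using System;
    using System.Diagnostics;
    using System.Threading;

    class GcTimeCheck
    {
        static void Main()
        {
            // ".NET CLR Memory" / "% Time in GC" are the standard CLR
            // counters; "w3wp" is a placeholder for your process instance.
            using (var gcTime = new PerformanceCounter(
                ".NET CLR Memory", "% Time in GC", "w3wp", readOnly: true))
            {
                for (int i = 0; i < 10; i++)
                {
                    Thread.Sleep(1000);
                    // Sustained values well above ~10% suggest allocation
                    // pressure that is worth investigating.
                    Console.WriteLine("% Time in GC: {0:F1}", gcTime.NextValue());
                }
            }
        }
    }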

Sam Saffron (one of the Stack Overflow developers, by the way) created a great command-line tool a while ago, but has unfortunately abandoned it.
A friend of mine forked the code to make it work in 2015:
https://github.com/jitbit/cpu-analyzer
(the page has a link to Sam's post explaining how to use it)
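The core idea behind the tool, finding which threads accumulate CPU time, can also be approximated by hand with System.Diagnostics. A rough sketch (the "w3wp" process name is a placeholder, and unlike Sam's tool this only shows OS thread IDs, not managed stack traces):

    using System;
    using System.Diagnostics;
    using System.Linq;
    using System.Threading;

    class HotThreadFinder
    {
        static void Main()
        {
            // "w3wp" is a placeholder; use your application's process name.
            var process = Process.GetProcessesByName("w3wp").First();

            // Sample per-thread CPU time, wait, then sample again.
            var before = process.Threads.Cast<ProcessThread>()
                .ToDictionary(t => t.Id, t => t.TotalProcessorTime);

            Thread.Sleep(5000);
            process.Refresh();

            foreach (ProcessThread t in process.Threads)
            {
                TimeSpan start;
                if (before.TryGetValue(t.Id, out start))
                {
                    var used = t.TotalProcessorTime - start;
                    if (used.TotalMilliseconds > 500)
                        Console.WriteLine("Thread {0}: {1:F0} ms of CPU in 5 s",
                            t.Id, used.TotalMilliseconds);
                }
            }
        }
    }

The real tool goes further and captures the managed stacks of the hot threads, which is what makes it genuinely useful for answering "what is burning the CPU".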

Related

EF6 first query memory usage

I'm using EF 6.1.3 with a fairly large database structure (177 tables, 1673 columns). When I run my first query, memory profiling shows EF allocating 225 MB. This seems a pretty heavy memory load.
I've been getting out-of-memory exceptions, and whilst this isn't likely to be the main culprit, I am conscious that it is probably contributing.
Does this sound like typical memory usage? Is there any way of reducing it, short of reducing the complexity of the structure?
Update: it looks like Visual Studio's memory profiler was misreporting; monitoring via Task Manager shows the whole app using < 50 MB to get to the same point.
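For anyone wanting to cross-check a profiler's numbers, a rough before/after measurement of the managed heap around the first query can be done with GC.GetTotalMemory. A sketch; MyContext and Customers are placeholders for your own EF6 model:

    using System;
    using System.Linq;

    class FirstQueryMemoryCheck
    {
        static void Run()
        {
            // Force a full collection so the baseline is as clean as possible.
            long before = GC.GetTotalMemory(forceFullCollection: true);

            // MyContext and Customers are placeholders for your EF6 model.
            // The first query pays EF's one-time cost of building the
            // model and mapping views.
            using (var db = new MyContext())
            {
                var first = db.Customers.FirstOrDefault();
            }

            long after = GC.GetTotalMemory(forceFullCollection: true);
            Console.WriteLine("Managed heap grew by {0:N0} bytes", after - before);
        }
    }

Note this measures only the managed heap, which is one reason it can disagree with both Task Manager's working set and a profiler's allocation totals. If the one-time first-query cost itself is the problem, pre-generated mapping views are the usual EF6 mitigation.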

How to know if I have to do memory profiling too?

I currently do CPU sampling of an ASP.NET Core application while sending a huge number of requests (> 500K) to it. I see that the peak working set of the application is around ~300 MB, which in my opinion is not huge considering the number of requests being made. But what I have been observing is a huge drop in requests per second when I enable certain pieces of functionality in my application.
Question:
Should I do memory profiling too? I ask because even though the peak working set is ~300 MB, there could be a large number of short-lived objects being created and collected by the GC, and since GC work also counts as CPU time, should I do memory profiling to see whether I allocate too much?
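One cheap check before committing to full memory profiling is to compare GC collection counts around the same workload with the feature enabled and disabled. A minimal sketch; the workload delegate is a placeholder for however you drive the requests:

    using System;

    class GcActivityCheck
    {
        // Runs the given workload and reports how many collections
        // of each generation happened while it ran.
        // Usage (SendRequests is a placeholder):
        //   Measure("feature on", () => SendRequests());
        static void Measure(string label, Action workload)
        {
            int gen0 = GC.CollectionCount(0);
            int gen1 = GC.CollectionCount(1);
            int gen2 = GC.CollectionCount(2);

            workload();

            Console.WriteLine("{0}: gen0 +{1}, gen1 +{2}, gen2 +{3}",
                label,
                GC.CollectionCount(0) - gen0,
                GC.CollectionCount(1) - gen1,
                GC.CollectionCount(2) - gen2);
        }
    }

A large jump in gen 0 collections in the enabled case points at exactly the kind of short-lived allocations you describe; on .NET Core 2.0+ you can also call GC.GetAllocatedBytesForCurrentThread() before and after the workload for a byte-level figure.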
I will answer this question myself based on new information I found.
It is based on the tool PerfView, which provides information about GC behaviour and allocations.
When you open the GCStats view and navigate to the process you care about, you will see a table of GC statistics for that process.
That view includes the % CPU time spent in garbage collection. If this is above 5%, it should be a cause for concern and you should start memory profiling.
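For reference, a lightweight GC-only trace can be collected from the command line; to the best of my knowledge the invocation is roughly this (check PerfView's built-in help for the exact flags):

    PerfView.exe /GCCollectOnly /AcceptEULA /nogui collect

Then open the resulting .etl file and look for the GCStats view under the Memory group.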

Azure Website Memory Usage - what's OK

I have been trying to determine the best instance/size combination for 21 sites on Azure Websites, due to what I think is memory pressure. CPU is not an issue (< 3%).
I have tried 1 medium and 1 or 2 smalls. The medium improved overall performance by about 15 ms response time on the busiest site (per New Relic), probably because it doubles the cores (and memory).
Using the new preview portal's memory quota module:
1 or 2 smalls runs about 80%-90% average memory
1 medium runs about 70%
That makes no sense considering the medium has double the memory. Is the larger amount of available memory simply not forcing the GC to run as often on the medium instance?
At what memory percentage can an instance run without impacting performance?
Is 80-90% OK on the small instance?
Will adding small instances help a memory problem, given that it basically just replicates the same setup across all the instances, which will eventually use up the same amount?
I have not been able to isolate any performance issues on any of the 21 sites, but I don't want a surprise if I am running too close to the limit.
Run the sites on a smaller instance and set auto-scale to ramp up when CPU gets high. Monitor how often it needs to scale, and if it scales often, just switch to a larger instance size permanently.
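If you would rather script that rule than click it together in the portal, a sketch with the modern Azure CLI might look like the following; the resource names are placeholders, and the exact metric name and condition syntax should be verified against the current docs:

    az monitor autoscale create --resource-group myRG --resource myPlan --resource-type Microsoft.Web/serverfarms --name scale-on-cpu --min-count 1 --max-count 3 --count 1

    az monitor autoscale rule create --resource-group myRG --autoscale-name scale-on-cpu --condition "CpuPercentage > 70 avg 10m" --scale out 1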
Coming back to this for some user edification. It seems that the ~80% range is normal and the sites are just using "what's available." No perf issues or failures ever occurred.
So if you come across this and are worried about the high memory usage, you'll need to keep an eye on it to determine whether it's just normal or whether you have a memory leak.
I know, crappy answer :)

iOS Testing with Instruments - Best Practices

Since I'm developing iOS-based applications, I'm also spending a lot of time getting started with testing applications and debugging performance issues.
After reading the Instruments user guide, I'm now able to detect memory leaks, record the current memory size and CPU usage, and much more, which helps quite a lot!
Now, in order to improve my testing strategy, I'm looking for benchmark or reference values for the different instruments (such as CPU usage, energy consumption, ...). You know what I mean? For example: I have a CPU usage of 80% over a period of 10 seconds. Is that okay, or should I think about performance optimization? Sure, in the case of CPU usage it depends on what the app is doing over that period of time (e.g. reloading some data), but is there any rule of thumb or best practice for that?
I already did some research on the internet and only found a video of the iOS Tech Talk in London by Michael Jurewitz. From that talk, I picked out the following statements that are probably useful for me:
Activity Monitor: can only be used to compare your app's resource usage to other apps.
Allocations: a constantly growing allocations chart is an obvious warning sign for memory leaks. Allocations does not show the real memory size the app uses.
VM Tracker: shows overall memory size; rule of thumb: more than 100 MB dirty size for an app is far too much.
...
Now I need some "rules of thumb", especially for the CPU Monitor (where is the boundary between a good case and a bad case?) and energy consumption (what level is acceptable?).
Do you have any tips for me or do you know some articles I can read through?
Thanks a lot!
Philipp

How to use AQTime's memory allocation profiler in a program that uses a large amount of memory?

I'm finding AQTime hard to use because it interferes with the original program too much. If I have a program that uses, for example, 300MB of ram I can use AQTime's allocation profiler without a problem, and find out where most of the memory is being used. However I notice that running under AQTime, the original program uses more like 1GB while it's being profiled.
Right now I'm trying to reduce memory usage in a program which is using 1.4GB of memory. If I run it under AQTime, then the original program uses all of the 2GB address space and crashes. I can of course invent a smaller set of test data and estimate how the memory usage will scale with the full data set - but the reason I'm using a profiler in the first place is to try to avoid this sort of guesswork.
I already have AQTime set to 'Collect stack information - None' and all the check boxes to do with checking memory integrity are switched off, and I've tried restricting the area being profiled to just a few classes but this doesn't seem to improve anything. Is there a way to use AQTime that produces a smaller overhead? Or failing that, what other approaches are there to get a good idea of the memory being used?
The app is written in Delphi 2010 and I'm using AQTime 6.
NB: On top of the increased memory usage, running under AQTime slows the app down an awful lot, making the whole exercise not just impossible but impractical too :-P
AFAIK the allocation profiler will track memory block allocations regardless of profiling areas; profiling areas are only used to track class instantiation. Of course, memory-profiling an application that allocates a large amount of memory is an issue. You may try the LARGE_ADDRESS_AWARE flag (in Delphi, e.g. via the {$SETPEFLAGS IMAGE_FILE_LARGE_ADDRESS_AWARE} directive) together with the /3GB boot switch, or use a 64-bit system (as long as you have at least 4 GB of memory or more). You can also take a snapshot of the application state before it crashes, to see where the memory is allocated. Profiling takes time anyway; you may have to let it run for a while.
