I have a Windows Server 2016 machine (128GB RAM) running SQL Server 2016, and I'm seeing some unusual figures in its memory usage reporting.
I'm used to seeing SQL Server use a lot of memory (we have this instance limited to 96GB), but it is being reported strangely in Task Manager: overall memory usage is at 75%, yet only 755.5MB is shown against SQL Server. While there are other services running in the background, their total does not come anywhere near 75% of 128GB.
The 75% is being reflected in Performance Monitor as I'd expect, with 96% committed to SQL.
Within SQL Server itself, I am getting similar figures from its internal reporting:
SELECT (physical_memory_in_use_kb / 1024) Phy_Memory_usedby_Sqlserver_MB
, (locked_page_allocations_kb / 1024) Locked_pages_used_Sqlserver_MB
, (virtual_address_space_committed_kb / 1024) Total_Memory_UsedBySQLServer_MB
, process_physical_memory_low
, process_virtual_memory_low
FROM sys.dm_os_process_memory;
I've been asked to investigate why Task Manager is showing such low usage when we would expect it to be much higher. If there is a general misunderstanding on my part here, then please let me know. If there are any further tests I can perform to help track this down, I'm happy to do so.
Many thanks in advance.
Task Manager only shows memory that a process allocates via VirtualAlloc. When Lock Pages in Memory is enabled for SQL Server, the buffer pool is allocated with AllocateUserPhysicalPages instead, and that memory does not show up in Task Manager's per-process figures. So you cannot rely on Task Manager for SQL Server memory usage.
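You can confirm this from inside SQL Server using the same DMV as the query in the question. A sketch (note that xp_readerrorlog is undocumented, and the search string assumes the standard "Using locked pages in the memory manager" startup message):
-- If locked_page_allocations_kb is non-zero, the buffer pool is using locked pages,
-- which Task Manager's per-process figures will not include.
SELECT locked_page_allocations_kb / 1024 AS Locked_pages_used_Sqlserver_MB
FROM sys.dm_os_process_memory;
-- The startup message in the current error log also confirms the memory model in use.
EXEC sys.xp_readerrorlog 0, 1, N'locked pages';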
We're planning to evaluate and eventually potentially purchase perfino. I went through the docs quickly and cannot find the system requirements for the installation. I also cannot find its compatibility with JBoss 7.1. Can you provide details please?
There are no hard system requirements for disk space, it depends on the amount of business transactions that you're recording. All data will be consolidated, so the database reaches a maximum size after a while, but it's not possible to say what that size will be. Consolidation times can be configured in the general settings.
There are also no hard system requirements for CPU and physical memory. A low-end machine will have no problems monitoring 100 JVMs, but the exact details again depend on the amount of monitored business transactions.
JBoss 7.1 is supported. "Supported" means that web service and EJB calls can be tracked between JVMs, otherwise all application servers work with perfino.
I haven't found any official system requirements, but this is what we figured out experimentally.
We collect about 10,000 transactions a minute from 8 JVMs. We have a lot of distinct and long SQL queries. We use AWS machine with 2 VCPUs and 8GB RAM.
When the Perfino GUI is not being used, the CPU load is low. However, for the GUI to work properly, we had to add -Xmx6000m to perfino_service.vmoptions. Before that we had experienced multiple OutOfMemoryErrors in Perfino when filtering in the transactions view. After changing the memory setting, the GUI runs fine.
This means that you need a machine with about 8GB RAM. I guess this depends on the number of distinct transactions you collect. Our limit is high, at 30,000.
After 6 weeks of usage, there's 7GB of files in the perfino directory. Perfino can clear old recordings after a configurable time.
I am trying to find a definitive answer as to what the best setting is for the JVM Heap Size in Domino 8.5.3 FP4 - 64 Bit for Windows.
I know that by default it's set to 1024M. Some web sites have suggested that the recommended value is 1G / 1024M - but that's the default setting, so is that as good as it gets?
Other sites have said 25% of available RAM.
My Domino server has 12GB of RAM available. It currently has HTTPJVMMaxHeapSize = 1024M, and Task Manager tells me that around 7GB of RAM is in use and nhttp.exe is using around 1.1GB. I want to increase this heap to 2GB or 3GB if possible - will there be any issues in doing so?
I'm running Windows Server 2008 R2 Standard Edition.
Speaking from an XPages perspective:
1 - Understand the dynamics of the working set and hardware
That is, understand what the application code, server runtime, and hardware profile are doing when processing a given working set (i.e. the XPages application(s) within the server). Is the application coded in a non-optimal manner in terms of lifecycle execution and memory usage? Is the application making use of memory or disk persistence for component tree serialization? Is the server assigned an adequate amount of JVM memory? Is the hardware providing enough CPU and memory?
2 - Profile and monitor the working set with upper limit loads
To fully answer some of the questions in #1, detailed performance and scalability profiling must be carried out using tools like the XPages Toolbox and Eclipse Memory Analyzer. Furthermore, test the working set using Rational Performance Tester (or some other performance testing tool) to mimic real-life concurrent workloads in a test environment. This allows you to set up a test environment where you can hit your application with (n) concurrent users using automation and collect all that invaluable data on health etc.
3 - Analyse the profile information to identify bottlenecks within the working set
Remember your working set can be one or more applications. Each doing something different, and having different load requirements. Be specific about the task at hand - do you want to tune the system more generally for all applications (for an average scale) or do you want the server to be fully optimized for a specific application (for a targeted scale)?
4 - Optimize the working set where applicable
Get in and make changes to the XSP / Java / server-side JavaScript code where applicable - use your knowledge of the XPages Request Processing Lifecycle and also look out for those hungry JVM memory consumers under high-end load scenarios. Always favor disk persistence (disk storage is cheaper than RAM!) and code your custom Java objects and managed beans accordingly to cope with serialization and restoration. And make the trade-offs between function and speed in those scenarios where high scalability ends up burning CPU - sometimes a smarter UX with targeted functions / actions is the better answer.
5 - Scale the hardware where applicable
Be prepared to increase cores, clock speeds, RAM, disk storage based on the needs of the working set - a cyclic approach to profiling, monitoring, and optimizing the working set will shed more and more light on the suitability of the hardware as this process evolves.
6 - Repeat from #2 until the working set and hardware perform and scale to the expected requirements / load expectancy of the system
I would recommend setting the server's notes.ini parameter JavaVerboseGC=1, which writes GC output to console.log in the IBM_TECHNICAL_SUPPORT directory. After some time, take that log file and load it into IBM Support Assistant with the IBM Monitoring and Diagnostic Tools for Java - Garbage Collection and Memory Visualizer (GCMV) tool.
This could help with interpreting the collected data: http://www-01.ibm.com/support/docview.wss?uid=swg27013824&aid=1.
You definitely should do that if you get an OutOfMemoryException.
Setting the heap size too high can lead to fewer GC runs that each take longer - the server will periodically "hang" for a very short time. So setting it too high is not recommended.
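Putting that together, a minimal notes.ini sketch (the 2048M value is just an example for a 12GB server; HTTPJVMMaxHeapSizeSet=1 is commonly suggested so the server doesn't reset the value, but verify it applies to your Domino version):
JavaVerboseGC=1
HTTPJVMMaxHeapSize=2048M
HTTPJVMMaxHeapSizeSet=1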
We have a server with 144GB of memory, running SQL Server and Analysis Services.
In the Analysis Services server properties, 'TotalMemoryLimit' is set to 70 and 'LowMemoryLimit' is set to 55.
But SSAS memory usage is always around 18GB and it cannot allocate more, even when there is plenty of memory available.
Processing jobs cannot complete when there is not enough memory for SSAS.
It's a production environment, so I can't restart the server easily.
By the way, we dynamically adjust TotalMemoryLimit and LowMemoryLimit to make Analysis Services release memory.
That went well for several months, but the problem happened this morning.
Now, no matter how I set TotalMemoryLimit and LowMemoryLimit, SSAS memory usage stays around 18GB.
It could be SQL server grabbing and reserving a working set that's too big.
In SQL Server, go to Server Properties / Memory, check the dynamic memory configuration, and set the maximum server memory to something sensible so that it shares nicely with SSAS.
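If you'd rather do it in T-SQL than through the GUI, a minimal sketch (the 96GB / 98304MB cap is only an example; pick a value that leaves enough headroom for SSAS and the OS):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Cap the database engine so SSAS and the OS keep the rest of the 144GB
EXEC sp_configure 'max server memory (MB)', 98304;
RECONFIGURE;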
Better yet, buy a separate server instance for SSAS.
During a stress/load test of a ASP.NET app hosted in IIS, what should I be monitoring on the app server?
For example, the built-in Performance Monitor utility in Windows has a huge list of counters that I can monitor, but I don't even know what half of these counters actually mean. I know I want to look at things like memory, processor, and network, but that is pretty general.
How can I successfully find a problem area?
What counters have some of you used in the past?
These are the metrics we watch to determine whether requests are being serviced promptly and whether the volume is scaling linearly with the applied load:
Queued Requests
Current Requests
Requests Executing
Requests Succeeded
Requests/sec
We also watch these to look for application problems:
Errors/sec
Unhandled Execution Errors/sec
To monitor the VM memory, we look at:
CLR Heap Size
CLR Generation 0, 1 & 2 Garbage collections
CLR Percent Time in GC
For locking conditions, we watch:
CLR Lock Contentions
CLR Lock Contention/sec
CLR Lock Contention Queue Length
Depending on the application we might look at others, like thread counts, but the above are the ones we look at most frequently.
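Most of these can be collected from the command line as well as in Performance Monitor. As a rough sketch with typeperf (exact counter names vary a little between .NET/IIS versions, so confirm them in perfmon first):
typeperf "\ASP.NET\Requests Queued" "\ASP.NET Applications(__Total__)\Requests/Sec" "\.NET CLR Memory(_Global_)\% Time in GC" "\.NET CLR LocksAndThreads(_Global_)\Contention Rate / sec" -si 5 -o aspnet_counters.csv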
I have multi-core Windows 2003 terminal servers. I'm looking for a way to monitor individual CPU core usage on these servers. It is possible for an end user to have a runaway process (e.g. Internet Explorer or Outlook). The core for that process may spike to near 100% while leaving the other cores 'normal', so the overall CPU usage on the server is just the average across all the cores: if 7 of the cores on an 8-core server are idle and the 8th is running at 100%, the overall figure is 1/8 = 12.5% usage.
What utility can I use to monitor multiple servers? If the CPU usage for a core is "high", what would I use to determine the offending process, and then how could I automatically kill that process if it was on the 'approved kill process' list?
A product from http://www.packettrap.com/ called PT360 would be perfect, except they use SNMP to get data and SNMP appears to only give total CPU usage; it's not broken out by individual core. Take a look at their Dashboard option with the CPU gauge 'gadget'. That's exactly what I need, if only it worked at the core level.
Any ideas?
Individual CPU usage is available through the standard Windows performance counters. You can monitor this in perfmon.
However, it won't give you the result you are looking for. Unless a thread/process has been explicitly bound to a single CPU, a runaway process will not spike one core to 100% while all the others idle. The runaway process will bounce around between all the processors. I don't know why Windows schedules threads this way; presumably there is no gain from forcing affinity and some loss due to having to handle interrupts on particular cores.
You can see this easily enough just in task manager. Watch the individual CPU graphs when you have a single compute bound process running.
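For completeness, the per-core counters can also be logged from the command line; a sketch (the (*) wildcard expands to one instance per core plus _Total):
typeperf "\Processor(*)\% Processor Time" -si 5 -o cpu_per_core.csv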
You can give Spotlight on Windows a try. You can graphically drill into all sorts of performance and load indicators. It's freeware.
perfmon from Microsoft can monitor each individual CPU. perfmon also works remotely, and you can monitor various aspects of Windows.
I'm not sure it helps find runaway processes, though, because the Windows scheduler does not always execute a process on the same CPU -> on your 8-CPU machine you will see 12.5% usage on all CPUs if one process runs away.