How can I see the individual processes using memory in cPanel?

I'm using GoDaddy hosting for my site, and noticed that physical memory usage is very high. Is there a way to see the individual processes using memory from within cPanel?
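cPanel on shared GoDaddy plans typically only shows aggregate usage, but if your plan includes SSH access, a short script can list the top memory consumers. A minimal sketch in Python, assuming the third-party psutil package is available (it is not preinstalled):

import psutil

# Collect (resident memory, pid, name) for every process we can read.
procs = []
for p in psutil.process_iter(["pid", "name", "memory_info"]):
    mem = p.info["memory_info"]
    if mem is not None:  # None when access to the process is denied
        procs.append((mem.rss, p.info["pid"], p.info["name"]))

# Print the ten largest consumers, biggest first.
for rss, pid, name in sorted(procs, reverse=True)[:10]:
    print(f"{rss / 1024 / 1024:8.1f} MB  pid={pid}  {name}")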

Related

AWS server became slow after traffic increase

I have a single-page Angular app that makes requests to a Rails API service. Both are running on a t2.2xlarge Ubuntu instance. I am using a Postgres database.
We had an increase in traffic, and my Rails API became slow. Sometimes I get an error saying the Passenger queue is full for the Rails application.
Auto scaling on the server is working; three more instances are created. But I cannot trace this issue, and I need root access to upgrade, which I do not have. Please help me with this.
You mentioned that you are using the t2.2xlarge instance type. First, I want to tell you that you should not use the T2 instance family in a production environment, because T2 instances run on CPU credits. Let's take a look at what that means:
What happens if I use all of my credits?
If your instance uses all of its CPU credit balance, performance remains at the baseline performance level. If your instance is running low on credits, your instance's CPU credit consumption (and therefore CPU performance) is gradually lowered to the base performance level over a 15-minute interval, so you will not experience a sharp performance drop-off when your CPU credits are depleted. If your instance consistently uses all of its CPU credit balance, we recommend a larger T2 size or a fixed performance instance type such as M3 or C3.
I'm not sure you will actually run out of CPU credits, since you are using a 2xlarge instance, but I think you should use one of the fixed performance instance types anyway. So the instance's performance may be one part of your problem. You should use CloudWatch to monitor two metrics, CPUCreditUsage and CPUCreditBalance, to confirm whether this is the problem.
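As a quick way to pull those two metrics, here is a minimal boto3 sketch (the region and instance ID are placeholders, not values from the question):

import datetime
import boto3

# Fetch the last three hours of CPU credit metrics for one instance.
# Region and instance ID below are placeholders.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
now = datetime.datetime.utcnow()

for metric in ("CPUCreditUsage", "CPUCreditBalance"):
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName=metric,
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        StartTime=now - datetime.timedelta(hours=3),
        EndTime=now,
        Period=300,  # five-minute buckets
        Statistics=["Average"],
    )
    for point in sorted(stats["Datapoints"], key=lambda d: d["Timestamp"]):
        print(metric, point["Timestamp"], round(point["Average"], 2))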
Secondly, how about your ASG? After the scale-out, did your service become stable? If so, I think you do not need to worry about this problem any more, because the ASG did its job.
Please check the following:
If you are opening a connection to the database, make sure you close it (a sketch of this pattern follows after this list).
If you are using jQuery, Bootstrap, DataTables, or other CSS/JS libraries, use the CDN links, like
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/bootstrap-select/1.12.4/css/bootstrap-select.min.css">
This takes a great amount of load off your server; do not host jQuery or other external libraries on your own server when you can fetch them directly from a CDN.
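On the first point, the question is about a Rails app, but the pattern is language-neutral; here is a hedged Python/psycopg2 sketch (connection parameters and the query are placeholders) showing how to guarantee the connection is closed:

from contextlib import closing

import psycopg2

# closing() guarantees conn.close() runs even if a query raises.
# psycopg2's own connection context manager only commits or rolls back
# a transaction; it does not close the connection.
with closing(psycopg2.connect(host="localhost", dbname="app", user="app")) as conn:
    with conn.cursor() as cur:
        cur.execute("SELECT count(*) FROM users")  # placeholder query
        print(cur.fetchone()[0])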
There are a number of factors that can cause an EC2 instance (or any system) to appear to run slowly.
CPU usage. The higher the CPU usage, the longer it takes to process new threads and processes.
Free memory. Your system needs free memory to process threads, create new processes, etc. How much free memory do you have? (A quick check of this and the disk item is sketched after this list.)
Free disk space. Operating systems tend to thrash when the file systems on system drives run low on free disk space. How much free disk space do you have?
Network bandwidth. What is the average bytes in/out for your instance?
Database. Monitor connections, free memory, disk bandwidth, etc.
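For the memory and disk items above, a quick one-shot check from inside the instance, again assuming the third-party psutil package:

import psutil

mem = psutil.virtual_memory()
disk = psutil.disk_usage("/")
cpu = psutil.cpu_percent(interval=1)  # sample CPU over one second

print(f"CPU usage:   {cpu:.0f}%")
print(f"Free memory: {mem.available / 1024**2:.0f} MB of {mem.total / 1024**2:.0f} MB")
print(f"Free disk:   {disk.free / 1024**3:.1f} GB ({100 - disk.percent:.0f}% free)")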
Amazon has CloudWatch, which can provide you with monitoring for everything except free disk space (you can add an agent to your instance for this metric). This will also help you quickly see what is happening with your instances.
Monitor your EC2 instances and your database.
You mention T2 instances. These are burstable CPUs, which means that if you have consistently high CPU usage, you will want to switch to fixed performance EC2 instances. CloudWatch should help you figure out what you need (CPU, memory, disk, or network performance).
This is not specific to the AWS server. It looks like your software needs more resources (RAM, storage I/O, network) than one machine can supply. You need to evaluate the metrics using CloudWatch and adjust the provisioning to what the software requires.
Memory leaks or processing leaks could lead to this as well. You may need to create a cluster or server farm to handle the load.
Hope it helps.

Azure App Service availability loss: the memory counter Page Reads/sec was at a dangerous level

Environment:
ASP.NET MVC app (.NET Framework 4.5.1) hosted on Azure App Service with two instances.
The app uses an Azure SQL Server database.
The app also uses MemoryCache (System.Runtime.Caching) for caching purposes.
Recently, I noticed availability loss in the app. It happens almost every day.
Observations:
The memory counter Page Reads/sec was at a dangerous level (242) on instance RD0003FF1F6B1B. Any value over 200 can cause delays or failures for any app on that instance.
What does the memory counter 'Page Reads/sec' mean?
How can I fix this issue?
What does the memory counter 'Page Reads/sec' mean?
We can get the answer from this blog. The recommended Page reads/sec value is under 90; higher values indicate insufficient memory and indexing issues.
“Page reads/sec indicates the number of physical database page reads that are issued per second. This statistic displays the total number of physical page reads across all databases. Because physical I/O is expensive, you may be able to minimize the cost, either by using a larger data cache, intelligent indexes, and more efficient queries, or by changing the database design.”
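If you want to watch this counter from the database side yourself, SQL Server exposes it through the sys.dm_os_performance_counters DMV. A hedged pyodbc sketch (server, database, and credentials are placeholders):

import pyodbc

# Read the current 'Page reads/sec' counter from SQL Server's DMV.
# Connection string values below are placeholders.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=mydb;UID=me;PWD=secret"
)
row = conn.cursor().execute(
    """
    SELECT cntr_value
    FROM sys.dm_os_performance_counters
    WHERE RTRIM(counter_name) = 'Page reads/sec'
      AND object_name LIKE '%Buffer Manager%'
    """
).fetchone()
# Per-second counters in this DMV are cumulative since server start;
# sample twice and divide the delta by elapsed seconds for a true rate.
print("Page reads (cumulative):", row[0])
conn.close()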
How can I fix this issue?
Based on my experience, you could try enabling Local Cache in App Service.
You enable Local Cache on a per-web-app basis by using this app setting: WEBSITE_LOCAL_CACHE_OPTION = Always
By default, the local cache size is 300 MB. This includes the /site and /siteextensions folders that are copied from the content store, as well as any locally created logs and data folders. To increase this limit, use the app setting WEBSITE_LOCAL_CACHE_SIZEINMB. You can increase the size up to 2 GB (2000 MB) per web app.
Some memory performance problems that can be listed:
excessive paging,
memory shortages,
memory leaks.
Memory counter values can be used to detect the presence of these problems. Tracking counter values both system-wide and per-process helps you pinpoint the cause in Azure, just as on other systems; a simple trend-sampling sketch follows below. Even if there is no change in the process itself, a change in the system as a whole can cause memory problems.
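As an illustration of per-process tracking, a hedged sketch that samples one process's memory over time to spot a leak-like upward trend (psutil assumed; the PID is a placeholder):

import time

import psutil

# Steady RSS growth with no plateau across samples hints at a leak.
# PID 1234 below is a placeholder.
proc = psutil.Process(1234)
samples = []
for _ in range(10):
    samples.append(proc.memory_info().rss)
    time.sleep(30)  # one sample every 30 seconds

growth = samples[-1] - samples[0]
print(f"RSS change over {(len(samples) - 1) * 30}s: {growth / 1024**2:+.1f} MB")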
Researching this in Azure:
Shared resource plans (Free and Basic) have memory limits, as seen here: https://learn.microsoft.com/en-us/azure/azure-subscription-service-limits#app-service-limits.
Quotas: https://learn.microsoft.com/en-us/azure/app-service-web/web-sites-monitor
Also, in the portal under your web app's settings, search for "quotas", and check out "Diagnose and solve problems" and hit "metrics per instance (app service plan)", which will show you the memory used by the plan.
A MemoryCache bug in .NET 4 can also cause this type of behavior:
https://stackoverflow.com/a/15715990/914284

Why is my web application's memory usage so high?

I have a C# MVC app that also uses EF.
It's working well, but on my local dev machine IIS Express uses on the order of 100 MB of memory, whereas in the production environment it uses 600 MB and seems to be challenging the specs of our VPS.
The 600 MB is taken from PerfMon's private bytes counter on the app pool process. RedGate's performance monitor, however, seems to say the private bytes are more in the order of 150 MB; I'm not sure what the difference between the two measures is.
What is a reasonable guide to the private bytes usage I should expect PerfMon to report for a production site?
I read somewhere that private bytes may report memory that is available to the application, not necessarily memory currently allocated by the application. I still find it alarming that it has reached 500-600 MB; presumably the OS must think the application's memory demand may peak there?
Should I be alarmed, and is there any advice on how to figure out what is going on?
UPDATE
If I run it on Windows 7 with IIS, it only consumes around 100 MB, similar to the result from IIS Express. So does this mean it is something to do with the IIS configuration on my production machine?
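No answer is recorded here, but one way to see why two tools can report such different numbers is to compare the distinct memory measures for the same process. A hedged Python/psutil sketch (the process name is a placeholder for the IIS worker):

import psutil

# On Windows, psutil's vms is the commit charge (PerfMon's "Private
# Bytes") while rss is the working set, which is one reason different
# tools report very different numbers for the same process.
# psutil assumed; "w3wp.exe" is the usual IIS app pool worker name.
for p in psutil.process_iter(["name", "memory_info"]):
    if p.info["name"] == "w3wp.exe" and p.info["memory_info"] is not None:
        mem = p.info["memory_info"]
        print(f"rss (working set):   {mem.rss / 1024**2:.1f} MB")
        print(f"vms (private bytes): {mem.vms / 1024**2:.1f} MB")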

Tomcat Minimum Memory :: Virtual Hosts vs. Multiple Instances

I am trying to determine memory usage for a vanilla web app run through Tomcat.
I assume that a virtual hosts setup will use significantly less memory than host-per-instance. What is the minimum memory footprint of a single-host Tomcat 7 instance? For each instance added, does the memory footprint grow linearly, or can common resources be shared among instances?
I would prefer a multi-instance setup, so as to isolate client sites (i.e., not affect other sites on redeploy or restart), but memory usage is the key concern. If each instance requires 512 MB of RAM (like Grails, for example), then I may have to take the virtual host route, since at 512 MB apiece the 16 GB of RAM available caps out at around 32 instances, and I was not intending to spend it all on Tomcat!
Suggestions appreciated. BTW, only a handful of sites will incur significant load; the majority are small and draw on a client CMS (perhaps I can use virtual hosts for those sites and host-per-instance for the "important" client sites).

Using the Web Console, Is it possible to Change the Membase Memory Quota?

Let's say I have a Membase server with a memory quota of 1GB. Is it possible to change it and make it larger, say 8GB, assuming the server gets a hardware upgrade?
Currently, I have the impression that Membase memory quota is unchangeable once set, which is very frustrating.
For 1.6.5 at least, you can change it from the Web console. From Manage | Data Buckets, you can reset your RAM quota.
But for a cluster it won't be that easy, because all cluster nodes are required to have the same memory quota.
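If the console route is awkward, Membase (and its successor Couchbase) also exposes the cluster memory quota over the REST API on port 8091. A hedged sketch with Python's requests library; the host and credentials are placeholders, and the endpoint is taken from Couchbase's documented REST API, so verify it against your Membase version before relying on it:

import requests

# Raise the cluster-wide memory quota to 8 GB via the REST API.
# Host and credentials are placeholders; confirm the /pools/default
# endpoint exists in your Membase/Couchbase version.
resp = requests.post(
    "http://membase-host:8091/pools/default",
    data={"memoryQuota": 8192},  # value is in megabytes
    auth=("Administrator", "password"),
)
resp.raise_for_status()
print("Quota updated")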
