I have been having trouble with a page crashing on my website and I have worked out that it is because the memory limit is too low. I have read an article (http://codingcyber.com/how-to-increase-php-memory-limit-on-godaddy-hosting-882/#) and decided to buy more RAM and I am about to increase the memory limit.
Before I do, I just want to know what will happen if multiple users are using the site all at once. I guess a clearer way to explain my question is with an analogy.
If I have 2048 MB of RAM and memory_limit = 256 MB, what happens if 20 users all log in at once and each use 50 MB of RAM? I imagine that since no one has exceeded the 256 MB per-request limit and the total RAM used (50 MB * 20 users = 1000 MB) is well under the 2048 MB available, the site should be OK, but I just want to confirm that this is correct (I've never done anything like this before).
Thanks for confirming or correcting.
Just spoke with GoDaddy and they explained that it's not a problem to have extra users as long as the TOTAL RAM used stays below the site's limit. If one person goes over the per-request limit it will only impact them. It's only if usage as a whole exceeds the total (unlikely in my case) that we will have problems...
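For reference, the per-request cap itself is the memory_limit directive. A minimal sketch, assuming your host reads a php.ini; on shared GoDaddy-style hosting you may instead need a .user.ini in the site root or a php_value line in .htaccess, and 256M is only an example value:

; php.ini (or .user.ini on CGI/FastCGI hosts)
; raises the limit for each PHP request; total concurrent usage is still bounded by the plan's RAM
memory_limit = 256M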
I am currently doing CPU sampling of an ASP.NET Core application to which I send a huge number of requests (> 500K). I see that the peak working set of the application is around ~300 MB, which in my opinion is not huge considering the number of requests being made. But what I have been observing is a huge drop in requests per second when I enable certain pieces of functionality in my application.
Question:
Should I do memory profiling too? I ask because even though the peak working set is ~300 MB, there could be a large number of short-lived objects being created and collected by the GC, and since GC work also counts as CPU time, should I profile memory as well to see whether I am allocating too much?
I will answer this question myself based on new information that I found out.
This is based on the tool PerfView, which provides information about GC and allocations.
When you open the GCStats view, navigate the links to the process you care about, and you should see information like the following:
Notice that the view includes the % of CPU time spent in garbage collection. If this is > 5%, it should be a cause for concern and you should start memory profiling.
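For completeness, one lightweight way to capture just the GC events that feed the GCStats view is PerfView's GC-only collection mode. A minimal sketch from an elevated command prompt (flag spellings are from the PerfView builds I have used, so treat them as an assumption):

:: collect a GC-only trace, then open the resulting .etl.zip and navigate to GCStats
PerfView.exe collect /GCCollectOnly /NoGui /AcceptEULA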
This is already the second time I have noticed one of my Go programs taking a lot of memory (much more than I would expect) without understanding why, so here I am.
I decided to profile the memory with pprof, and the result of the top5 memory profile is as follows:
1140.28MB of 1169.97MB total (97.46%)
Dropped 61 nodes (cum <= 5.85MB)
Showing top 5 nodes out of 15 (cum >= 33.89MB)
My problem is the following. The profile shows that the program consumed roughly 1.2 GB of memory, which is acceptable for what I am doing (parsing and indexing logs). However, when I run top and look at the resident memory used by my program, it is around 10 GB to 11 GB, which is a huge difference from what the memory profile reports.
So where are those gigabytes of memory that I don't see in the profile?
And why?
How can I troubleshoot this?
Thanks in advance,
It's likely that the extra memory usage is from the file system cache, especially since you are probably scanning a lot of data on disk.
See: http://www.linuxatemyram.com/
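If you also want to rule out memory that the Go runtime itself is holding on to (as opposed to the file system cache), a minimal sketch using runtime.ReadMemStats lets you compare what the heap profile accounts for against what the process has actually requested from the OS:

package main

import (
	"fmt"
	"runtime"
)

func main() {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	// HeapAlloc is roughly what the heap profile accounts for.
	// Sys is the total memory obtained from the OS by the runtime; it can be
	// much larger because freed spans are not returned to the OS immediately.
	fmt.Printf("HeapAlloc = %d MiB\n", m.HeapAlloc/1024/1024)
	fmt.Printf("HeapSys   = %d MiB\n", m.HeapSys/1024/1024)
	fmt.Printf("Sys       = %d MiB\n", m.Sys/1024/1024)
}

If Sys stays close to what pprof reports while top still shows ~10 GB resident, the difference is more likely outside the Go heap (memory-mapped files, cgo allocations, or the cache effect described above).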
I have been trying to determine the best instance/size combination for 21 sites on Azure Websites, due to what I think is memory pressure. CPU is not an issue (< 3%).
I have tried 1 medium and 1-2 smalls. The medium improved overall performance by about 15 ms response time on the busiest site (per New Relic), probably because it doubles the cores (and memory).
Using the new preview portal's memory quota module:
1 or 2 smalls run at about 80%-90% average memory usage
1 medium runs at about 70%
That makes no sense to me, considering the medium has double the memory. Is it that the larger amount of available memory simply doesn't force the GC to run as often on the medium instance?
What % of memory can an instance run at without impacting performance?
Is 80-90% OK on the small instance?
Will adding more small instances help a memory problem, since it basically just replicates the same setup across all the instances, each of which will eventually use up the same amount?
I have not been able to isolate any performance issues on any of the 21 sites, but I don't want a surprise if I am running too close to the limit.
Run the sites on a smaller instance and set auto-scale to ramp up when CPU gets high. Monitor how often it needs to scale, and if it scales often, just switch to a larger instance size permanently.
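For illustration only, on current Azure a CPU-based scale-out rule for an App Service plan can be sketched with the Azure CLI roughly like this (resource names are placeholders and the exact metric/flag names may differ by CLI version):

# create an autoscale setting on the App Service plan (placeholder names)
az monitor autoscale create --resource-group myRG --resource myPlan \
  --resource-type Microsoft.Web/serverfarms --name myAutoscale \
  --min-count 1 --max-count 3 --count 1

# add one instance whenever average CPU stays above 70% for 5 minutes
az monitor autoscale rule create --resource-group myRG --autoscale-name myAutoscale \
  --condition "CpuPercentage > 70 avg 5m" --scale out 1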
Coming back to this for some user edification. It seems that the ~80% range is normal and is just using "what's available." No perf issues or failures ever.
So if you come across this and are worried about the high memory usage, you'll need to keep an eye on it to determine whether it's just normal or you have a memory leak.
I know, crappy answer :)
I'm planning an application which will involve loading many pictures at one time and thus requires a large chunk of memory. For example, I might have 50 image objects created at once, taking a total of 1GB of RAM. But when the user goes to load 20 more pictures, I'd like to make sure that amount of memory is already reserved and ready.
Now this part might seem a little backwards from normal. Rather than specifying how much memory my application shall reserve, I instead need to specify how much memory to leave free for other applications, and adjust my application's memory periodically according to that setting. I must say I've never worked with reserving memory at all, and I especially don't know how to keep that remaining amount of memory available.
So for example, if the computer has 2048 MB of RAM, and the option is set to leave 50 MB free for other applications, and there is already 10MB of RAM being used by other apps, then it should reserve 2048-50-10 = 1988 MB for my app.
The trouble I foresee is this: suppose the user opens another application which requires 1 GB. My app has to catch this and shrink itself.
Does this even sound like a feasible approach? Basically, I need to make sure there is as much memory reserved as possible at any given time, while leaving a decent amount available for other apps. Would it make a significant impact on performance if I do this, or not much at all? I might be loading and unloading images at rapid paces, and I don't want it to reserve/free this memory on demand, I want it to stay reserved.
+1 for Sertac's mention of how SQL Server rides the line, allocating the memory it needs but releasing memory when Windows complains.
Applications can receive these complaints from Windows by using CreateMemoryResourceNotification:
hLowMemory := CreateMemoryResourceNotification(LowMemoryResourceNotification);
Applications can use memory resource notification events to scale the memory usage as appropriate. If available memory is low, the application can reduce its working set. If available memory is high, the application can allocate more memory.
Any thread of the calling process can specify the memory resource notification handle in a call to the QueryMemoryResourceNotification function or one of the wait functions. The state of the object is signaled when the specified memory condition exists. This is a system-wide event, so all applications receive notification when the object is signaled. Note that there is a range of memory availability where neither the LowMemoryResourceNotification nor HighMemoryResourceNotification object is signaled. In this case, applications should attempt to keep the memory use constant.
But it's also worth mentioning that you might as well just allocate the memory you need. Your operating system has a very sophisticated set of algorithms to swap out the least used memory when memory pressure is high, and you can take advantage of this by simply allocating all the memory that you need. When Windows starts to run low, it will find the pages of memory that you are using the least and swap them out to disk. (This is how a well-known reverse proxy works.)
The only thing left is to decide if you want to free some images when Windows says it's running low on RAM. But if you're not using the memory, it is going to be swapped out to disk for you.
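A minimal Delphi-style sketch of acting on that notification, assuming CreateMemoryResourceNotification and QueryMemoryResourceNotification are declared in your Winapi.Windows unit (older RTLs may need manual declarations); the actual cache-trimming logic is up to your application:

procedure CheckMemoryPressure;
var
  hLowMemory: THandle;
  MemoryIsLow: BOOL;
begin
  // In a real application, create the handle once and poll it periodically
  // (or pass it to a wait function) rather than recreating it every time.
  hLowMemory := CreateMemoryResourceNotification(LowMemoryResourceNotification);
  if hLowMemory <> 0 then
  try
    if QueryMemoryResourceNotification(hLowMemory, MemoryIsLow) then
      if MemoryIsLow then
        OutputDebugString('Low memory: release least recently used images here');
  finally
    CloseHandle(hLowMemory);
  end;
end;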
It's not realistic to account for other apps. Just ignore them. The system will page things in and out as needed. If you really wanted to do this you'd have to dynamically adapt to other processes as they start and finish. That's really not realistic. What's more it's not practical to inquire of other processes how much memory they need. Leave it all to the system.
Set a budget for your app and make sure you don't exceed it. Keep the most recently used images in memory and when you approach your memory budget throw away the least recently used images to make space.
If you are stressing the available resources then make sure you use FastMM and enable LARGE_ADDRESS_AWARE for your app so that you get a 4 GB address space when running on a 64-bit OS.
There is a chance that a heavyweight application needs to be launched on a low-configuration system (especially one with very little memory).
Also, when we have already opened a lot of applications on the system and keep trying to open new ones, what would happen?
I have only seen applications taking a long time to respond, or hanging for a while, when I try to operate them on a low-configuration system with little memory and an old processor.
How is the system able to accommodate many applications when memory is low (like 128 MB or less)?
Does it involve paging or something else?
Can someone please explain the theory behind this?
"Heavyweight" is a very vague term. When the OS loads your program, the EXE is mapped in your address space, but only the code pages that run (or data pages that are referenced) are paged in as necessary.
You will likely get horrible performance if pages need to constantly be swapped as the program runs (aka many hard page faults), but it should work.
Since your commit charge is near the commit limit, and the commit limit will likely have no room to grow, you will also likely receive many malloc()/VirtualAlloc(..., MEM_COMMIT)/HeapAlloc()/{Local|Global}Alloc() failures, so you need to watch the return codes in your program.
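A minimal sketch of the kind of defensive check meant here, in C; near the commit limit even modest allocations can fail, so the code handles NULL instead of assuming success:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    size_t size = (size_t)64 * 1024 * 1024;   /* 64 MiB */
    void *buf = malloc(size);
    if (buf == NULL) {
        /* degrade gracefully: free caches, retry with a smaller size, or report the error */
        fprintf(stderr, "allocation of %zu bytes failed\n", size);
        return 1;
    }
    memset(buf, 0, size);   /* touching the pages forces them to be backed by RAM or the pagefile */
    free(buf);
    return 0;
}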
Some keywords for search engines are: paging, swapping, virtual memory.
Wikipedia has an article on Paging (the article for Swap space redirects to it).
Operating systems generally use virtual memory. Virtual memory pages are mapped to physical memory when they are used. If a physical page is needed and none is available, another page is written out to disk. This is called swapping, and it explains why overloaded systems get slow and why memory upgrades have a positive effect on performance.