I have a Delphi (hence 32-bit) CGI app running under IIS7 on a 64-bit Windows Server 2008 machine with 24 GB of RAM. The web service runs fine for a few days at a time (sometimes a few weeks) and then suddenly starts reporting "Not enough storage available to process this command."
Now I've seen this before in regular Windows apps, and it normally means that the machine ran out of memory. In this instance, though, the server shows that only 10% of physical RAM is in use. On top of that, Task Manager shows only one instance of the CGI executable, with 14 MB allocated. And once the error starts, it keeps occurring regardless of actual server load. No way is this thing really running out of memory.
So I figured there is probably some maximum memory setting in IIS7 somewhere, but I couldn't find anything of the sort. Restarting the web server makes the problem go away until next time, but is probably not the best strategy.
Any ideas?
It might be an IRPStackSize issue, as discussed here. Apparently the particular cause mentioned in that article is not the only one.
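For what it's worth, the IRPStackSize value normally lives under the LanmanServer parameters key in the registry. Here is a minimal Python sketch to check it, assuming that standard location (if the value is absent, Windows falls back to its built-in default, commonly 15):

import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_READ) as key:
    try:
        value, _ = winreg.QueryValueEx(key, "IRPStackSize")
        print("IRPStackSize =", value)
    except FileNotFoundError:
        print("IRPStackSize not set; Windows is using its built-in default")

# To raise it (the KB articles typically suggest stepping it up, e.g. to 18),
# reopen the key with winreg.KEY_SET_VALUE from an elevated (administrator)
# Python, write a REG_DWORD, then restart the Server service or reboot:
# winreg.SetValueEx(key, "IRPStackSize", 0, winreg.REG_DWORD, 18)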
The CGI executable never seems to unload under IIS7, even though unloading works under IIS6. This appears to be a problem with the CGI support in IIS7.
I am deploying a Flask-based website on a DigitalOcean server. The deployed website consists mainly of static pages, config files, and JSON files.
This morning I found the memory usage has exceeded 51%. Here is the snapshot.
My Droplet has 512 MB of memory. Would someone please instruct me how to lower the memory usage? Thanks so much!
Update: I've used the top command in the shell as suggested. Here is the snapshot. Does it mean that the server itself is eating up that memory?
The memory issue is not related to my application.
I just received the answer from Digital Ocean. Here it is:
Hi there!
Thank you for contacting us! We can help with any memory issues you're having!
Since the Droplet is set up with only 512MB of RAM, once the system and any installed services start, it doesn't take much to push it past 50%. As a result, I don't think what you're seeing is necessarily abnormal under the circumstances. This leaves a few options:
- the Droplet can be resized and made larger to provide more memory (see https://www.digitalocean.com/community/tutorials/how-to-resize-your-droplets-on-digitalocean)
- you can add swap space to use part of the Droplet's file system as RAM (see https://www.digitalocean.com/community/tutorials/how-to-add-swap-on-ubuntu-14-04)
- you can review the applications and services running on the Droplet and attempt to optimize them to reduce memory use
We hope this is helpful! Please let us know if there is anything else we can do!
Regards,
I am assuming you are running a Linux server. If so, you can use the top command. It shows you all of the running processes and the system resources they are using. You would then be able to optimize from there.
I found out the cause! Linux borrows unused memory for disk caching. This makes it look like you are low on memory, but you are not! Everything is fine! If your application, or any other process needs more memory, Linux will automatically clear the cache and give memory for your application. Linux does this to speed up the system for you.
If, however, you find yourself needing to clear some RAM quickly to work around another issue, like a VM misbehaving, you can force Linux to nondestructively drop caches using:
echo 3 | sudo tee /proc/sys/vm/drop_caches
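If you want to verify this yourself, /proc/meminfo tells the story; on kernels 3.14 and later the MemAvailable field estimates how much memory is genuinely free for new workloads once reclaimable cache is counted. A minimal sketch:

# Show why "free" memory looks low on Linux: much of it is reclaimable cache.
# /proc/meminfo values are reported in kB.
def meminfo():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":")
            info[key] = int(value.split()[0])  # strip the "kB" suffix
    return info

m = meminfo()
print("MemTotal:     %8d MB" % (m["MemTotal"] // 1024))
print("MemFree:      %8d MB" % (m["MemFree"] // 1024))
print("Cached:       %8d MB" % (m["Cached"] // 1024))
print("MemAvailable: %8d MB" % (m.get("MemAvailable", 0) // 1024))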
I'm doing rails development locally (i.e. WEBrick 1.3.1). To test mobile, I put my iPhone 4s (iOS 8.1) on the same local network and load from the appropriate IP address.
It's really slow.
I watch the console and I can't figure out what the bottleneck is. I don't get the same behavior when I'm running the desktop browser locally. Of course, there's supposed to be some latency since the packets have to go over the wire, but it's unbelievably slow: page loads take more than 7 seconds, and sometimes not all the resources are loaded.
How can I improve load times for iOS/iPhone/mobile? Has anyone else run into this? For example, I thought perhaps it might be because we're loading fonts. Do local fonts (i.e. fonts that are on the system already) get optimized for rendering? This would explain some of the slow-down since we send a custom font.
If you have launched it online, make sure the performance issue is not coming from your hosting provider. If you are on a shared server, it may be heavily overloaded with content from other users. When I first launched my application built with PhoneGap, I experienced the same server sluggishness. I had to move to another hosting server, and my app became much faster.
I have a C# MVC App that also uses EF.
It's working well, but on my local dev machine IIS Express uses on the order of 100 MB of memory, while in the production environment it uses 600 MB and seems to be straining the specs of our VPS.
The 600 MB figure is taken from PerfMon's Private Bytes counter on the app pool process. Red Gate's performance monitor, however, seems to say private bytes are more on the order of 150 MB; I'm not sure what the difference between the two measures is.
What is a reasonable Private Bytes figure to expect PerfMon to report for a production site?
I read somewhere that private bytes may be reporting memory that is available to the application, not necessarily memory that is currently allocated by the application. I still find it alarming that it has reached 500-600 MB; presumably the OS must think the application's memory demand may peak there?
Should I be alarmed? Any advice on how to figure out what is going on?
UPDATE
If I run it on Win7 with IIS, it only consumes around 100 MB, similar to the result from IIS Express. So does this mean it's something to do with the IIS configuration on my production machine?
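One way to cross-check PerfMon is to read the counters straight off the worker process. Below is a hedged sketch using the third-party psutil package (pip install psutil), run elevated so it can open the IIS worker processes; on Windows, psutil's memory_info() exposes a private field that should correspond to the Private Bytes counter:

import psutil

# List every IIS worker process (w3wp.exe) with its Private Bytes and
# Working Set. Run from an elevated prompt, or access will be denied.
for proc in psutil.process_iter(["name"]):
    if (proc.info["name"] or "").lower() != "w3wp.exe":
        continue
    try:
        mem = proc.memory_info()
    except psutil.Error:  # process exited, or prompt not elevated
        continue
    private = getattr(mem, "private", mem.vms)  # "private" is Windows-only
    print("pid %d  private: %.0f MB  working set: %.0f MB"
          % (proc.pid, private / 2**20, mem.rss / 2**20))

If Private Bytes sits far above the Working Set, the process has committed memory it isn't actively touching, which is one way two tools can report very different numbers.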
The logs don't show anything different, and the computer is four times faster than the last one. Anyone know any common reasons why making a request to localhost would take a very long time?
I am using Mongrel.
Hard to give a solution based on the little information you give, so try to narrow it down. I would say these three causes seem the most likely (a quick timing check is sketched after the list):
- The database is slow. You can check this if your queries take a long time (check the logs). Perhaps you are using a slow connector (e.g. the default Ruby MySQL library), or your indexes haven't made it to your new machine.
- Mongrel is slow. Check by starting the app under WEBrick instead and see if that's any better.
- Your computer is slow. Perhaps it's running something else that's taking up CPU or memory. Check your performance monitor (the application to use for this differs per OS).
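To separate time spent in the app from anything network- or name-resolution-related, time a bare request from the machine itself. A minimal Python sketch, assuming the server listens on the default Mongrel port 3000:

import time
import urllib.request

URL = "http://127.0.0.1:3000/"  # explicit IPv4 avoids localhost lookup quirks

# Request the page repeatedly and print the elapsed time for each round trip.
for i in range(5):
    start = time.perf_counter()
    with urllib.request.urlopen(URL) as resp:
        resp.read()
    print("request %d: %.3f s" % (i + 1, time.perf_counter() - start))

If these numbers are fast but browser requests to localhost are slow, the problem lies in name resolution or the client rather than the app.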
Could be a conflict between IPv4 and IPv6. If you're running Apache you have to take special steps to make it work nicely with IPv6 (my information here might be out of date.) I've found that an IPv6-enabled client would try to talk IPv6 to the server, and Apache would not receive the request. After it timed out the client would retry on IPv4.
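You can check what order localhost resolves in with a few lines of Python:

import socket

# If ::1 (IPv6) is returned before 127.0.0.1 and the server only listens on
# IPv4, clients may wait out an IPv6 timeout first, which looks exactly like
# a slow server.
for family, _, _, _, sockaddr in socket.getaddrinfo("localhost", 3000):
    label = "IPv6" if family == socket.AF_INET6 else "IPv4"
    print(label, sockaddr[0])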
I have written a program in Delphi 7 (it includes a ModBus component that uses Indy). On my machine it uses Indy 9 and works fine. It communicates well with other machines via the ModBus protocol. However, when the program is run on a different machine, I get 90-100% CPU load. Unfortunately this machine is not in my office but "on the other side of the world". How can I find out whether that machine is using Indy 9 or Indy 10? And further, if it is running Indy 10, could that be the problem, or is this very unlikely?
The definitive answer is no.
If you compile your program with Indy 9, even if using packages, it will use Indy 9 at runtime. AFAIK, there's no way to compile the executable with Indy 9 and use Indy 10 at runtime, even if you wanted to, and no way it can happen by accident.
To find out what's causing the high CPU load, you might try a profiler like AQTime or SamplingProfiler.
That will get you the method(s) that are running most of the time. Then you will be able to find out what's causing the problem.
Alternatively, you could add some logging to your application.
To find the root cause you could prepare a test application that steps through a sequence of actions like opening and closing connections. If it asks the user for confirmation ("Continue? y/n") before proceeding, the user can check the CPU load at every step and so detect the critical operation.
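The shape of that test app, sketched in Python for brevity (the step names and no-op actions are placeholders; the real version, in Delphi here, would run the actual connect/poll/disconnect operations):

# Run one action at a time, then pause so the user can note the CPU load
# before continuing. Whichever step makes the load jump is the culprit.
steps = [
    ("open connection", lambda: None),   # placeholder actions
    ("poll once", lambda: None),
    ("close connection", lambda: None),
]

for name, action in steps:
    action()
    answer = input("Step '%s' done. Check CPU load. Continue? (y/n) " % name)
    if answer.strip().lower() != "y":
        break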
Thanks for the answers. I do not think this is an Indy issue, though. On my quad-CPU PC the CPU load also goes up, from 1-2% to approx. 25%. This happens if I keep the line open (connected). If, however, I disconnect the ModBus server after every poll from the ModBus client side and let that PC reconnect, the CPU load stays low. What is normal: keeping the line open all the time, or connecting and disconnecting for every poll? The polling frequency is 2000 ms in idle mode and 500 ms in active mode.
You need to add logs to make sure you know what's going on.
Is it the connection itself that is causing the issue, or the work performed while connected?
Logs will help you narrow this down, and you may be able to alter your code to be less processor-hungry.
Using AQTime or SamplingProfiler, as suggested earlier, will also help.
Personally, I always add logging to every application by default. A lot of it requires turning on, but it's there. Once the software is on site you never know what may change, and simply turning the logs on can save you a lot of time.
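That pattern, logging that is always built in but switched off by default, looks roughly like this (sketched in Python rather than Delphi, purely as an illustration; APP_DEBUG_LOG is a made-up switch name):

import logging
import os

# Debug logging is compiled in but off by default; setting the APP_DEBUG_LOG
# environment variable turns it on at a customer site without a rebuild.
level = logging.DEBUG if os.environ.get("APP_DEBUG_LOG") else logging.WARNING
logging.basicConfig(
    filename="app.log",
    level=level,
    format="%(asctime)s %(levelname)s %(message)s",
)

logging.debug("connection opened")      # written only when the switch is on
logging.warning("poll took too long")   # always written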