Appfog instances vs memory

I'm developing an API on Appfog and want to know what to focus on: more memory with one instance, or more instances with lower memory.
Appfog gives you 2 GB of RAM for free, and up to 16 instances if each instance gets 128 MB of RAM.
My application uses PHP, MySQL and MemCachier.
I want to launch it soon and want to know which configuration is best for my server.
What is the benefit of more RAM versus more instances?
Thanks for helping :)
Best Regards,
Johnny

You want as many instances as your app can run without running out of memory :). More instances mean better performance and uptime. However, if an instance runs out of memory it will be shut down, leaving your app running with fewer instances until they all collapse. You can diagnose this problem with the af apps and af logs <appname> --all commands. If af apps regularly shows the app running at less than 100% health, the per-instance memory budget may be too low. When there are down instances, the logs command may reveal "memory limit reached" errors.
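For example, a quick health check from the Appfog CLI might look like this (myapp is a placeholder for your own app name):
af apps                  # shows how many of each app's instances are actually up
af logs myapp --all      # pulls logs from every instance; look for "memory limit reached" errors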
Memory Recommendations
Here are some memory recommendations to start out with: WordPress with several installed plugins will need more than 512 MB to be stable. For lean custom PHP apps, 128 MB is usually sufficient but should be watched. If an app uses a framework, try 256 MB. These memory limits may seem high, but they reflect peak memory usage, not average usage.
Load Test
Load testing with Siege can help find a memory/instance balance by revealing whether your app peaks above its memory limit. Scale the app down to 1 instance and run Siege with 5, 10, and 15 concurrent connections, increasing by 5 until the app falls over. When the app does fall over, bump the memory up and try again.
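As a rough illustration, a Siege run against a single instance could look like this (the URL is a placeholder for your app's address, and flags may vary slightly by Siege version):
siege -c 5 -t 1M http://myapp.example.com/     # 5 concurrent users for one minute
siege -c 10 -t 1M http://myapp.example.com/    # then 10 ...
siege -c 15 -t 1M http://myapp.example.com/    # ... then 15; when the app falls over, raise the memory and retry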

Related

Multiple workers on machine - Memory management (Resque - Rails)

We've migrated our Resque background workers from a ton of individual instances on Heroku to between four and ten m4.2xlarge (32 GB memory) instances on EC2, which is much cheaper and faster.
A few things I've noticed: the Heroku instances we were using had 1 GB of RAM and rarely ran out of memory. I am currently allocating 24 workers to one machine, so about 1.3 GB of memory per worker. However, because the processing power on these machines is so much greater, I think the OS has trouble reclaiming the memory fast enough, and each worker ends up consuming more on average. Most of the day the system has 17-20 GB of memory free, but when the memory-intensive jobs run, all 24 workers grab a job at almost the same time and then start growing. They get through a few jobs, but then the system hasn't had time to reclaim memory and it crashes without intervention.
I've written a daemon to pause the workers before a crash and wait for the OS to free memory (a rough sketch of this idea appears after the system specs below). I could reduce the number of workers per machine overall, or have half of them unsubscribe from the problematic queues, but I feel there must be a better way to manage this. I would prefer to be using more than 20% of the memory for 99% of the day.
The workers are set up to fork a process when they pick up a job from the queue. The master worker processes are run as services managed with Upstart. I'm aware there are a number of managers, such as God and Monit, that simply restart a process when it consumes a certain amount of memory, but that seems like a heavy-handed solution that will end with too many jobs killed under normal circumstances.
Is there a better strategy I can use to get higher utilization with a lowered risk of running into Errno::ENOMEM?
System specs:
OS : Ubuntu 12.04
Instance : m4.2xlarge
Memory : 32 GB
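For what it's worth, the pause/resume daemon described above could be sketched in shell roughly like this; it is only an illustration, and the 4 GB free-memory threshold, the 10-second interval, and the 'resque' process-name pattern are all assumptions you would need to adapt:
while true; do
  free_kb=$(awk '/^MemFree:/ {print $2}' /proc/meminfo)   # crude: ignores reclaimable page cache
  if [ "$free_kb" -lt 4194304 ]; then                     # below ~4 GB free: pause the workers
    pkill -USR2 -f resque                                 # USR2 asks Resque workers to stop taking new jobs
  else
    pkill -CONT -f resque                                 # CONT lets paused workers resume
  fi
  sleep 10
done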

Rails application servers

I've been reading for a while about how different Rails application servers work, and some things have me confused, probably because of my lack of knowledge in this field. Here is what confuses me:
Puma's README has the following line about the number of workers in clustered mode:
On a ruby implementation that offers native threads, you should tune this number to match the number of cores available
So if I have, let's say, 2 cores and use Rubinius as the Ruby implementation, should I still use more than 1 process, considering that Rubinius uses native threads and doesn't have a global lock, and thus uses all the CPU cores anyway, even with 1 process?
My understanding is that I'd only need to increase the thread pool of that single process if I upgraded to a machine with more cores and memory; if that's not correct, please explain it to me.
I've read some articles on using Server-Sent Events with Puma which, as far as I understand, block a Puma thread since the browser keeps the connection open. So if I have 16 threads and 16 people are using my site, the 17th would have to wait until one of those 16 leaves before it could connect? That's not very efficient, is it? Or what am I missing?
If I have a 1-core machine with 3 GB of RAM, just for the sake of the question, use Unicorn as my application server, and 1 worker takes 300 MB of memory with insignificant CPU usage, how many workers should I have? Some say the number of workers should equal the number of cores, but if I set the number of workers to, let's say, 7 (since I have enough RAM for it), it will be able to handle 7 concurrent requests, won't it? So is it just a question of memory and CPU usage and the amount of RAM? Or what am I missing?
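To make that arithmetic concrete, you can check the real per-worker memory and total RAM from the shell; the 'unicorn worker' process name is the usual one but may differ in your setup:
ps -eo rss,args | grep '[u]nicorn worker'   # RSS of each worker, in KB
free -m                                     # total and free memory, in MB
# e.g. 7 workers x ~300 MB is about 2.1 GB, leaving headroom in 3 GB for the OS and everything else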

issues with CPU utilization

I am running an Ubuntu EC2 small instance with Ruby on Rails and MySQL on the instance; my Rails app uses delayed jobs.
I have observed that my instance's CPU utilization suddenly spikes to more than 70% and persists there for more than a minute.
The Rails app is not CPU-intensive, but I am not able to figure out why and where exactly this is happening. I have activated CloudWatch monitoring and also created an alarm.
Kindly suggest some tools to figure out which process is CPU-intensive.
How do I handle it?
Run top and see what's at the top of the list when the CPU usage is high.
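For example (output formats differ slightly between versions, so treat these as a sketch):
top -b -n 1 | head -n 20                      # one batch snapshot; processes are sorted by CPU by default
ps -eo pid,pcpu,args --sort=-pcpu | head      # the most CPU-hungry processes right now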

Reducing Redmine's memory usage - Low Hanging Fruit

I am running a Redmine instance with Passenger and Nginx. With only a handful of issues in the database, Redmine consumes over 80 MB of RAM.
Can anyone share tips for reducing Redmine's memory usage? The Redmine instance is used by 3 people and I am willing to sacrifice speed.
There isn't really any low-hanging fruit. And if there were, we would have already included and activated it by default.
80 MB RSS (as opposed to virtual size, which can be much more) is actually pretty good. In normal operation, it will use between 70 and 120 MB RSS per process (depending on the deployment model; rather less on Passenger).
As andrea suggested, you can reduce your overall memory footprint by about one third if you use REE (Ruby Enterprise Edition, which is also free). But this saving can only be achieved when you run more than one process (each requiring the above amount of memory). REE achieves this by optimizing Ruby for copy-on-write, so that additional application processes take less memory.
So I'm sorry, your (hypothetical) 128 MB vServer will probably not suffice. You might be able to squeeze a minimal installation into 256 MB, but it only starts to be anything other than a complete pain in the ass at 512 MB (including the database).
That's because of how Rails applications work, in contrast to things like PHP. They require a running application server instance. That instance is typically able to answer one request at a time, using about the same amount of memory all the time. So your memory consumption is roughly proportional to the number of application processes you run, independent of actual load. But if you tune your system properly, you can get quite a number of reqs/s out of one process.
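If you want to see what your Redmine processes actually use, something like this works (the grep pattern is only an example; adjust it to your process names):
ps -eo pid,rss,args | grep -i '[r]edmine'     # RSS is in KB; each application process appears once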
Maybe I am replying very late, but I got stuck on the same issue and found a link on optimizing Ruby/Rails memory usage, which works for me:
http://community.webfaction.com/questions/2476/how-can-i-reduce-my-rubyrails-memory-usage-when-running-redmine
It may be helpful for someone else.

Can I limit Apache+Passenger memory usage on a server without swap space?

I'm running a Rails application with Apache+Passenger on virtual servers that do not have any swap space configured.
The site gets a decent amount of traffic, 200K+ requests daily, and sometimes the whole system runs out of memory, causing odd behaviour across the system.
The question is: is there any way to configure Apache or Passenger not to run out of memory (e.g. gracefully restarting Passenger instances when they start using, say, more than 300 MB of memory)?
The servers have 4 GB of memory, and currently I'm using Passenger's PassengerMaxRequests option, but it does not seem to be the most solid solution here.
At the moment I also cannot switch to Nginx, so that is not an option for freeing up some room.
Any clever ideas I'm probably missing are welcome.
Edit: My temporary solution
I did not go with restarting Rails instances when they exceed a certain amount of memory. Engine Yard wrote a great blog post on the ActiveRecord memory bloat issue, which is our main suspect. As I did not have much time to optimize the application, I set PassengerMaxRequests to 300 and added an extra 2 GB of memory to the server. Things have been good since then. At first I was worried that restarting Rails instances continuously would make things slow, but it does not seem to have an impact I should worry about.
If you mean "limiting" as killing those processes and if this is the only application on the server and it is a Linux, then you have two choices:
Set maximum amount of memory one process can have:
# ulimit -m
unlimited
Or use cgroups for similar behavior:
http://en.wikipedia.org/wiki/Cgroups
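A minimal cgroups (v1) sketch, assuming the libcgroup/cgroup-tools utilities are installed, the memory controller is mounted at /sys/fs/cgroup/memory, and "passenger" is just a name chosen for the group:
sudo cgcreate -g memory:passenger                                              # create a memory cgroup
echo 300M | sudo tee /sys/fs/cgroup/memory/passenger/memory.limit_in_bytes     # cap the group at 300 MB
sudo cgclassify -g memory:passenger 12345                                      # move PID 12345 (an example) into the group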
I would advise against restarting instances (if that is possible) that go over the "memory limit", because that may put your system into an endless loop where a process repeatedly reaches the limit and restarts.
Maybe you could write a simple daemon that constantly watches the processes and kills any that go over a certain amount of memory. Be sure to log information about the offending process so you can fix the issue whenever it comes up.
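A very rough sketch of such a watchdog in shell; the 300 MB limit and the "Rails:" process-title pattern are assumptions to adapt, and logger just writes a record to syslog:
while true; do
  ps -eo pid=,rss=,args= | awk '$2 > 300*1024 && /Rails:/' | while read pid rss args; do
    logger "memory watchdog: killing PID $pid (RSS ${rss} KB): $args"   # keep a record of what was killed and why
    kill -TERM "$pid"                                                   # Passenger spawns a replacement as needed
  done
  sleep 30
done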
I'd also look into getting some real swap space on there... This seems like a bad hack.
I have a problem where Passenger processes end up going out of control and consuming too much memory. I wrote the following script, which has been helping to keep things under control until I find the real solution: http://www.codeotaku.com/blog/2012-06/passenger-memory-check/index. It might be helpful.
Passenger web instances don't contain important state (generally speaking), so killing them isn't normally a problem, and Passenger will restart them as and when required.
I don't have a canned solution, but you might want to use two commands that ship with Passenger to keep track of memory usage and the number of processes: passenger-status and sudo passenger-memory-stats; see the
Passenger users guide for Nginx or
Passenger users guide for Apache.
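For example (whether sudo is needed depends on your setup):
passenger-status               # running application processes and request queue information
sudo passenger-memory-stats    # RSS and private memory for Apache and Passenger processes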
