I have a small Rails 3 app on a small Linode 512 MB, and I monitor my Rails app with Monit.
In Monit I can see the message "Resource limit matched". You can see it in the next image:
http://overpic.net/viewer.php?file=x5d6ybfukwgz6a5845bu8.jpg
I'm a newbie with Monit.
Does this mean that I should increase my RAM or my CPU, or upgrade to a Linode 1GB?
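For reference, which rule fired and what Monit currently measures can be checked with something like the following sketch; the monitrc path is the Debian default, and monit status assumes Monit's built-in HTTP interface is enabled:
# current CPU/memory readings for every monitored check
monit status
# the resource rules that can produce "Resource limit matched"
grep -n -i -E 'cpu|memory|loadavg' /etc/monit/monitrc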
Where can I find a good Monit ebook?
Thank you!
The Setup:
* Ubuntu 18.04 LTS
* Apache 2.4.29
* Passenger 6.0.16
* Ruby 2.3.8
* Rails 4.2.x
I have both staging and prod servers with the same setup on AWS EC2; they are both running the same kernel/build. I upgraded the Ruby/Rails version of my app from Ruby 2.1.x -> 2.3.8, and Rails 4.0 -> 4.2, first on staging then on production.
On staging, everything was working fine; pages were loading quickly and without issue. On prod, pages would start by loading quickly but would soon degrade. The user CPU would max out at 99%+, eventually causing the app to go down and become unresponsive. The only solution was to restart Apache, roughly every 30 minutes.
After a LOT of digging and testing, top -c showed that Passenger RubyApp would hit 100% CPU and soon after would stay "locked" at max CPU for each process, even if no one was using the site. I've been trying to change different settings in both Apache and Passenger, but nothing seems to work. Effectively, as soon as a few people hit the site in a particular way, ANY of the spawned Passenger processes that reach 100% stay fairly high and either don't wind down or don't exit, burning CPU as if there were some IO issue.
Right now Passenger and Apache configs are exactly the same on staging/prod and are the defaults.
Screenshots show an example of top in prod with a few users on the site, and roughly the same number of people using staging.
Staging looks far more like what I'd expect from a Rails app: higher memory use than CPU. AWS Support was also baffled, as prod is on an XL and staging is on a Micro instance, and the AWS kernel versions were the same. Here's the AWS monitoring around CPU usage; prod was updated on the 20th, but not many people used it over the weekend, and it really became a problem on Monday during working hours.
Any ideas why this is happening on one server but not the other? No particular request causes it; literally any request (or 2-3 requests coming in tandem) will cause the CPU to spike to 100% and get stuck.
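A rough way to check whether a pegged process is spinning inside Ruby or blocked on IO, assuming passenger-status and strace are available (12345 stands for a PID taken from top):
# which application processes Passenger considers busy, and the requests they hold
sudo passenger-status
sudo passenger-status --show=requests
# attach to one pegged process: a stream of syscalls points to IO, silence points to pure CPU spinning
sudo strace -tt -T -p 12345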
TIA.
We have several Rails apps using Passenger and Apache on some Ubuntu servers that occasionally get heavy load. We get Datadog alerts that memory usage is high, get on the server, and run top to see that Passenger and Ruby are using lots of memory. How should I go about figuring out which one of the Passenger/Rails apps is the culprit, or at least getting a list of apps using more than a given threshold of memory?
I have only one RoR app running on my server (under nginx), and I think you're looking for
ps auxf
it shows me this for my one passenger instance:
nginx 28279 0.0 10.2 452128 107264 ? Sl Apr03 0:01 Passenger RackApp: /srv/http/redmine
The fourth column (10.2) is the memory usage in %, and the last column shows the directory of the application. More about the output here.
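To narrow it down to just the Passenger application processes sorted by memory, something like this should work; the process-name pattern is an assumption and may need adjusting for your Passenger version (column 4 is %MEM in standard ps aux output):
ps aux | grep -E 'Passenger (Rack|Ruby)App' | grep -v grep | sort -rnk4 | head
Passenger also ships passenger-memory-stats, which prints memory usage grouped per process.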
Currently I have the simplest VPS: 1 core, 256 MB of RAM, Ubuntu 12.04 LTS. My application seems to run well enough (I'm using Unicorn and Nginx), but when I run my rake jobs:work command for delayed_job, the Unicorn process gets killed.
I was wondering if this is related to the amount of RAM?
When the Unicorn process is up and running, the free -m command shows that around 230 MB of RAM are occupied.
I was wondering how much RAM I would need overall. 512 MB? 1024 MB?
Which one should I go with?
I would be very glad to receive any answers!
Thank you
Your DJ worker would run another instance of your Rails application, so you need to make sure that you have at least enough RAM for that other instance plus allowance for other processes you are running.
Check ps aux for the memory usage of your Rails app.
Run top and see how much physical memory is free (while your Rails app is running).
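A rough sketch of both checks, plus a look at whether the kernel OOM killer is what's killing Unicorn (the process-name patterns are assumptions):
# per-process memory for the app server and the DJ worker (%MEM is column 4, RSS in KB is column 6)
ps aux | grep -E 'unicorn|delayed_job|jobs:work' | grep -v grep
# overall free memory while everything is running
free -m
# on a 256 MB box, the OOM killer is a likely suspect when a process "gets killed"
dmesg | grep -i -E 'out of memory|killed process'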
My guess is you'll have to bump up your RAM to 512 MB. You of course don't want your memory use to spill over to swap.
Of course, besides that, you also need to make sure that your application and database are optimized enough that there are no incredible spikes in memory usage.
You can start with
ulimit -S -a
to find out the limits of your environment.
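The limits that apply to an already-running process (for example a Unicorn worker) can also be read straight from /proc; the PID below is just a placeholder:
cat /proc/12345/limits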
Currently I am using Heroku and have never deployed to a VPS, which, judging by VPS prices in Europe, should be a lot cheaper.
On Heroku my application requires 9 dynos and 2 workers. I am interested in how many server resources I would need to host my Ruby on Rails application on a VPS with the following server configuration:
ubuntu
nginx
unicorn
postgresql
redis
memcached
Also, can I put the latter three on the same VPS instance or is it a better practice to host databases and memcached separately?
Is there any way I could calculate server requirements myself?
For example, how many dynos/workers would a VPS with 7.2 GHz, 3 GB RAM and 50 GB storage compare to? Would it be enough for my application?
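One rough way to estimate it yourself is to measure the resident memory of a single app process and multiply by the number of processes (Unicorn workers plus background workers) you plan to run; the process name below is an assumption:
# number of unicorn processes, average RSS per process and total RSS, in MB
ps aux | grep '[u]nicorn' | awk '{sum+=$6; n++} END {if (n) printf "%d processes, ~%.0f MB each, %.0f MB total\n", n, sum/n/1024, sum/1024}'
Add to that total what PostgreSQL, Redis and Memcached need plus headroom for the OS, and compare the result against the 3 GB of the VPS.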
As a general rule of thumb, once you step above a few dynos and background processes, the cost of a VPS vs the cost of Heroku will weigh in the VPS's favor.
However, the cost of the actual hosting is not your only cost. For instance, a VPS will require some admin work from yourself, be it initial setup, software installation and configuration, or keeping things up to date and running smoothly. Note this doesn't include learning how to do all of this stuff.
Once you factor these costs in (assuming you're working for paying clients and not doing this for fun), the answer rests firmly with Heroku - there is no other platform of the same maturity that lets you just fire and forget a deploy - the time savings alone are worth it.
http://neilmiddleton.com/why-heroku-is-a-game-changer/
I have a Debian Linux VPS server for my production website (512MB).
I'm using Phusion Passenger with Apache to serve my Rails 2.3.4 application on Ruby 1.9. I'm limiting the pool of Passenger instances to 3.
Although the traffic is relatively low, the server crashes at times, and I notice (when using the top command) that there are many instances of Apache (/usr/sbin/apache2 -k start), maybe 20 of them, taking up all the memory I have, and the website becomes unresponsive.
I'm not sure what to do about this, where to start digging for potential issues, or how to spot or limit the number of Apache instances.
Thanks,
Tam
If you want to limit the number of Apache processes, look at the documentation for the Multi-Processing Modules. But if you don't have that much traffic (it depends on what you call "relatively low"), it should work out of the box.
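To see which MPM is in use and how many Apache processes are currently alive (apache2ctl is the Debian name of the control script):
apache2ctl -V | grep -i mpm
ps -C apache2 --no-headers | wc -l
With the prefork MPM, the MaxClients directive (MaxRequestWorkers in Apache 2.4) is what caps the number of these processes.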
Have you tried asking your question over at Server Fault? You might get better answers there.