As described here, Heroku logs metrics for your dynos every ~20 seconds.
source=web.1 dyno=heroku.2808254.d97d0ea7-cf3d-411b-b453-d2943a50b456 sample#load_avg_1m=2.46 sample#load_avg_5m=1.06 sample#load_avg_15m=0.99
source=web.1 dyno=heroku.2808254.d97d0ea7-cf3d-411b-b453-d2943a50b456 sample#memory_total=21.00MB sample#memory_rss=21.22MB sample#memory_cache=0.00MB sample#memory_swap=0.00MB sample#memory_pgpgin=348836pages sample#memory_pgpgout=343403pages
These pile up quickly and clutter the log view, especially when they arrive every 20 seconds and there are 10+ dynos for a small web app.
Is it possible to change the frequency at which these are logged? Something like once per minute would cut the quantity down by a factor of 3 and would still be sufficient for my app's needs.
Thanks!
Edit: Heroku does provide a way of filtering out certain types of logs while tailing, so in theory I could get rid of them entirely by excluding the system logs. But if there are other valuable things in the system logs, the question of how to specifically reduce (not filter out) these metrics still stands.
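If I'm reading the CLI docs right, these metrics arrive as system logs, so something like the following should tail only the app's own output and hide them entirely (flag usage assumed from the current CLI):
heroku logs --tail --source app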
We had a similar problem and decided to just turn off the runtime-metrics logging altogether.
You can check to see if it's enabled with:
$ heroku labs -a your-app-name
=== App Features (your-app-name)
[+] log-runtime-metrics Emit dyno resource usage information into app logs
and disable it with:
heroku labs:disable log-runtime-metrics -a your-app-name
heroku restart -a your-app-name
Related
I am expecting a large increase in traffic tomorrow (press, etc.) to my website. I want to be able to manage the traffic without the site going down. I know that I have to add dynos to keep the request queue down, and the cost is not too prohibitive because it is only temporary.
I am using the New Relic add-on for monitoring.
This post is really helpful: https://serverfault.com/questions/394602/how-to-prepare-for-a-huge-spike-in-traffic-for-launching-a-website-hosted-on-her
However, after reading that post and poring over the Heroku docs, I have not been able to figure out how to actually do this...
What command can I run, or what interface can I use, to spin up a new dyno to handle the added web traffic? And then what command or interface do I use to remove that extra dyno once traffic has returned to normal levels?
Heroku talks about a bunch of different types of dynos, but I have no idea which one I am supposed to use or how to actually spin up a new one.
Heroku has a platform-api gem that will be useful in this case. First add the gem to your Gemfile and install it:
gem 'platform-api'
Once installed, set up the Heroku client, preferably in an initializer:
$heroku = PlatformAPI.connect(ENV['HEROKU_API_KEY'], default_headers: {'Accept' => 'application/vnd.heroku+json; version=3'})
You can find out how to get a Heroku API key here.
Now, to the scaling part. I'm guessing you have some internal metric to decide when to scale (process count, request count, etc.). The command below will scale to the number of dynos you require:
$heroku.formation.update(APP_NAME, PROCESS_NAME, {quantity: DYNO_COUNT})
This will scale the app's specified process to the given number of dynos whenever required.
You can use the same command to scale down the number of dynos. The command below returns how many dynos are currently provisioned for a particular process:
info = $heroku.formation.info(APP_NAME, PROCESS_NAME)
info['quantity']
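For illustration, here is a minimal sketch that ties these calls together for a temporary spike (APP_NAME, PROCESS_NAME, and the dyno count of 4 are placeholders, not values from your setup):
# Hypothetical spike helper: remember the current count, scale up for the
# spike, then restore the original count afterwards.
normal_count = $heroku.formation.info(APP_NAME, PROCESS_NAME)['quantity']
# Scale up to 4 dynos ahead of the expected traffic spike
$heroku.formation.update(APP_NAME, PROCESS_NAME, {quantity: 4})
# ...once traffic is back to normal, scale back down to the original count
$heroku.formation.update(APP_NAME, PROCESS_NAME, {quantity: normal_count})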
To get the number of ALL dynos provisioned to your app, use
$heroku.app.info(APP_NAME)
EDIT:
In case you prefer to do it manually, you can do this from the Heroku dashboard itself. Or, if you prefer the command line, install the Heroku Toolbelt; the command for scaling the app is below.
heroku ps:scale process_name=dyno_count -a app_name
To get the list of dynos provisioned
heroku ps -a app_name
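For example, to go from 1 to 4 web dynos before the spike and back down afterwards (the process name "web", the counts, and app_name are placeholders):
heroku ps:scale web=4 -a app_name
# ...after traffic returns to normal
heroku ps:scale web=1 -a app_name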
I have a RoR/Heroku app. Right now, my deploy process consists of committing to git and then running "git push heroku master". The problem is, this introduces a lag of about 10 seconds during which my live site goes down before coming back up. This causes existing visitors to get frustrated and leave, if they happen to notice it.
So what is the best practice way to avoid that?
I read about setting up a "staging" environment, but would that help avoid this? I'd still have to run a git push heroku master, wouldn't I?
Heroku has a Labs feature that will pre-boot new dynos before shifting load from old dynos to new dynos. This way, new dynos will be fired up and ready when they start receiving requests and your users will see no delay when you update your app. Here's how to enable pre-boot:
heroku labs:enable -a myapp preboot
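If you want to double-check that it took effect, listing the app's labs features should now show preboot as enabled:
heroku labs -a myapp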
So right now I have an implementation of delayed_job that works perfectly in my local development environment. To start the worker on my machine, I just run rake jobs:work and it works perfectly.
To get delayed_job to work on Heroku, I've been using pretty much the same command: heroku run rake jobs:work. This works without me having to pay Heroku for a worker, but I have to keep my command prompt window open or the delayed_job worker stops when I close it. Is there a way to keep this delayed_job worker running permanently even after I close the command window? Or is there a better way to go about this?
I recommend the workless gem to run delayed jobs on Heroku. I use this now; it works perfectly for me, with zero hassle and at zero cost.
I have also used hirefireapp, which gives a much finer degree of control over scaling workers. It costs money, but less than a single Heroku worker over a month. I don't use it now, but I have, and it worked very well.
Add
worker: rake jobs:work
to your Procfile.
EDIT:
Even if you run it from your console you still 'buy' a worker dyno, but Heroku bills per second. So you don't pay anything, because you have 750 free dyno hours and a month has at most 744 hours, which leaves 6 free hours for your extra dynos, scheduler tasks, and so on.
I haven't tried it personally yet, but you might find nohup useful. It allows your process to keep running even after you have closed your terminal window. Link: http://linux.101hacks.com/unix/nohup-command/
Using the Heroku console to put workers onto the jobs will only create a temporary dyno for the job. To keep the jobs running without the CLI, you need to put the command into the Procfile as #Lucaksz suggested.
After deployment, you also need to scale the dyno formation, since Heroku needs to know how many dynos should be assigned to each process type, like this:
heroku ps:scale worker=1
More details can be read here https://devcenter.heroku.com/articles/scaling
I'm using heroku logs --tail, which works great for a few minutes. Then it stops displaying the logs. It seems that the SSH connection is timing out and dying. There is no error or message. I'm working on Ubuntu 11.04 over a wired connection.
I added the following to ~/.ssh/config:
ServerAliveInterval 5
But it didn't work. Do I need anything else in the config file? How do I know if it is doing anything? How can I monitor the traffic and see the keepalive requests? I'm looking at System Monitor but don't see anything every 5 seconds.
Thanks.
Have you done all of this:
$ heroku config:add LOG_LEVEL=DEBUG
$ heroku addons:upgrade logging:expanded
$ heroku logs --tail
It turns out that I was looking for an answer to the wrong question. Why use the tail to save logs? It is problematic, labor intensive, and error prone.
The solution I found was Papertrail. Small sites are free. papertrailapp.com.
Here's the complete story from my blog: http://www.onlineinvestingai.com/blog/2011/08/07/better-logging-with-papertrail-on-heroku/
I have seen the same issue. I'm not certain that it is ssh that times out, but something does. For now, we've placed our monitor in a loop so it resumes automatically in case of a time-out.
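A minimal sketch of that kind of loop (the one-second pause is arbitrary):
while true; do heroku logs --tail; sleep 1; done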
We also use PaperTrail; however, it has limits on the amount you can transfer. We use PaperTrail for general-purpose logging, and tail the logs for the excruciating-detail logs that would quickly use up all available PaperTrail capacity.
We're running 3 Apache Passenger servers sharing the same file system, each running 11 Rails applications.
We've set
PassengerPoolIdleTime = 0 to ensure none of the applications ever die out completely, and
PassengerMaxPoolSize = 20 to ensure we have enough processes to run all the applications.
The problem is that when I run passenger-memory-stats on one of the servers I see 210 VMs!
And when I run passenger-status I see 20 application instances (as expected)!
Anyone know what's going on? How do I figure out which of those 210 instances are still being used, and how do I kill those on a regular basis? Would PassengerMaxInstancesPerApp do anything to reduce those seemingly orphaned instances?
Turns out we actually have that many Apache worker processes running, and only 24 of them are Passenger processes (I asked someone a little more experienced than myself). We are actually hosting many more websites and shared hosting accounts than I thought.
Thanks for all the replies though!
You can get a definitive answer as to how many Rails processes you have by running this command:
ps auxw | grep Rails | wc -l
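One caveat: grep usually matches its own process and inflates the count by one; a character class avoids that:
ps auxw | grep '[R]ails' | wc -l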
I doubt you really do have over 100 Rails processes running, though, as at about 50 MB each they'd collectively be consuming over 5 GB of RAM and/or swap, and your system would likely have slowed to a crawl.
Not so much an answer, but some useful diagnostic advice.
Try adding the Rack::ProcTitle middleware to your stack. We have been using it in production for years. Running ps aux should then show what each worker is up to (idle, handling a specific request, etc.).
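Wiring it up should be the usual Rack middleware pattern; a rough sketch for config.ru (the require path and the app constant are assumptions from memory, so check the gem's README for your version):
# config.ru -- require path and app constant are placeholders
require 'rack/proctitle'
use Rack::ProcTitle
run MyApp::Application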
I would not assume that these processes are being spawned directly by passenger. It could be something forking deeper into your app.
Maybe the processes are stuck during shutdown. Try obtaining their backtraces to see what they're doing.
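If they are Passenger-managed Ruby processes, one option (assuming your Passenger version installs its SIGQUIT backtrace handler; 12345 is a stand-in for a PID taken from passenger-memory-stats) is:
kill -QUIT 12345
Passenger should then print that process's Ruby backtrace to the web server's error log, which tells you where it is stuck.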