How many Rails apps on 1 Heroku dyno? - ruby-on-rails

I just can't find out how many apps you can host on Heroku with one dyno.
I plan to host a lot of small apps with little traffic.
Thanks for your answers.

Dynos are calculated on a per-application basis.
However, this doesn't mean you need to buy 3 dynos to run 3 apps.
You can create 3 applications, each with 1 dyno.

One App per Dyno / subdomain.heroku.com.

Some explanation here: http://docs.heroku.com/performance#backlog-too-deep

I believe you can spin up another web process inside a web dyno. I've done it with workers: one worker dyno had 3 sub-processes, each a copy of the Rails app, and each running independently against the database.
How you'd manage to spin up the correct application, I'm not sure... and you'd need a controller application.
I don't want to say it's not possible, because I don't believe that statement is at all constructive. I will say that spawning a new application, with the extra $34-a-month dyno fee, would be a better use of your time/money.
An additional concern: each web dyno allows only a limited amount of memory, and Rails isn't exactly known for being light on memory. When I spawned sub-workers I ran into heaps of memory issues, so many that I eventually rolled the feature back. If I spend an afternoon trying to tweak around the constraints, I've spent more of my boss's money than 4 months of extra dynos, so I have to weigh it up.
Anyway... Here's how I forked workers
require 'heroku-api'
require 'rush' # provides Rush::Box, used below
...
def self.fork_workers(iDesired = 5, iQueue = nil)
  cmd = "rake jobs:work WORKER=MY_SERF"
  cmd += " QUEUES=#{iQueue}" if iQueue
  p cmd
  if RUBY_PLATFORM["mingw32"].nil? # doesn't work on Windows
    # count the workers already running this exact command
    currentCount = Rush::Box.new.processes.filter(:cmdline => /#{cmd}/).size
    iDesired -= currentCount
    if iDesired > 0
      # spawn however many are still missing, in the background
      iDesired.times { Rush::Box.new[Rails.root].bash(cmd, :background => true) }
    end
    # (if iDesired < 0 there are too many workers; I never bothered culling them)
  end
end
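As a side note, the heroku-api gem required above isn't actually used in that snippet. If you go the "separate app / extra dyno" route instead, the scaling can also be driven from code. This is only a rough sketch from memory of that (now legacy) gem's interface, so treat the method name as an assumption; the app name and API key are placeholders:
require 'heroku-api'

heroku = Heroku::API.new(:api_key => ENV['HEROKU_API_KEY'])
# scale the 'worker' process type of the named app to 2 dynos
heroku.post_ps_scale('my-app-name', 'worker', 2)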
Last note:
Apps running on a single dyno will go to sleep if left alone for an hour... your users will feel the delay while the app wakes back up.
https://devcenter.heroku.com/articles/dynos#dyno-idling

Related

Rails - How do I prevent Sidekiq from slowing down the server?

I have a newsletter that I send out to my customers (~10k emails) every morning, and sometimes this Sidekiq job takes up so much CPU/memory that the website (a Rails app) stops responding and faces blackouts.
When I look at the Sidekiq dashboard, I see there is some problem with the newsletter (probably an invalid email address and Sidekiq repeatedly trying to send to it again?) and it's stuck.
How do I prevent this behavior and stop the Sidekiq task from being retried (which I believe is the cause of the outage)?
Here's my code:
rake task:
namespace :mailer do
  desc "Carrier blast - morning"
  task :newsletter_morning => [:environment] do
    NewslettertJob.perform_later
  end
end
job definition:
class NewslettertJob < ApplicationJob
  def perform
    ...
    NewsletterMailer.morning_blast(data).deliver_now
  end
end
and NewsletterMailer:
class NewsletterMailer < ApplicationMailer
  def morning_blast(data)
    ...
    customers.each do |customer|
      yield customer, nil; next if customer.email.blank?
      begin
        Retryable.retryable(tries: 1, sleep: 30, on: [Net::OpenTimeout, Net::SMTPAuthenticationError, Net::SMTPServerBusy]) do
          send_email(customer.email).deliver
        end
        send_email(customer.email).deliver
      rescue Net::SMTPSyntaxError => e
        error_msg = "Newsletter sending failed on #{Time.now} with: #{e.message}. e.inspect: #{e.inspect}"
        logger.warn error_msg
        yield customer, nil
        next
      end
    end
  end
end
What I want to achieve is that the newsletter is sent out every morning, and if Rails/Sidekiq runs into a problem it simply shuts itself down, so the newsletter does not affect the "life" of the main website (its server).
Thank you in advance for any advice. I have been stuck on this issue for a while now.
If your machine only has one core, Sidekiq and puma will fight for CPU. Lower Sidekiq's concurrency so it uses less CPU, or get a machine with multiple cores, or move Sidekiq to a different machine.
If a Sidekiq process is using 100% of a core, lower the concurrency setting. The default in Sidekiq 6.0 is 10, which is a good default but if you are just delivering emails you could probably bump that to 20. You can run multiple Sidekiq processes if you wish to utilize multiple cores to process jobs faster.
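For reference, concurrency is usually set in config/sidekiq.yml (or with the -c flag when starting Sidekiq). A minimal sketch, with the queue names assumed:
# config/sidekiq.yml
:concurrency: 5
:queues:
  - default
  - mailers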
I think that ideally you should separate your background task servers from your web servers; that way background processing won't impact the performance of the web server. I work for a very high-traffic / high-load company, and we use an architecture of that sort here.
There are explanations on how to stop retries in this answer: Disable automatic retry with ActiveJob, used with Sidekiq
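In short, since the job is an ActiveJob class, you can declare the retry/discard behaviour on the job itself (available since Rails 5.1); the exception classes below are the ones your mailer already handles, the rest is a sketch:
class NewslettertJob < ApplicationJob
  # retry a transient SMTP problem once after 30 seconds, then give up
  retry_on Net::SMTPServerBusy, wait: 30.seconds, attempts: 2

  # a bad address will never succeed, so don't keep retrying it
  discard_on Net::SMTPSyntaxError

  def perform
    ...
  end
end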
Another thing: your e-mail sending is done synchronously (.deliver). That makes your task one huge monolithic process covering many customers, with a big impact on memory. Instead, you could use deliver_later, so each customer gets its own little job. This will also help alleviate CPU and memory usage. You could even create a worker that sends the e-mail for a single customer, and use your monolithic job merely to dispatch those.
class NewslettertJob < ApplicationJob
  def perform
    ...
    customers.each do |customer|
      NewsletterMailer.morning_blast(customer, data).deliver_later if customer.email.present?
    end
  end
end
However, I think the silver bullet is separating your Sidekiq server from your web server: have one server dedicated to background tasks, and on your web server don't even start the Sidekiq instances.
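With a Procfile-based deployment (Heroku, or a foreman-style setup) that split might look roughly like this; the puma and sidekiq config paths are assumptions:
# Procfile
web: bundle exec puma -C config/puma.rb
worker: bundle exec sidekiq -C config/sidekiq.yml
You then run and scale the web and worker process types on separate machines or dynos, so a heavy mail run can max out the worker box without touching web response times.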

Rails Unicorn - Delay between starting request and reaching controller

I am using Unicorn as the app server for my Rails app, and am trying to figure out why there is sometimes a non-trivial (> 5 seconds) delay between the start of a request and when it reaches my controller.
This is what my production.log prints out:
Started GET "/search/articles.json?q=mashable.com" for 138.7.7.33 at 2015-07-23 14:59:19 -0400
Parameters: {"q"=>"mashable.com"}
Searching articles for keyword: mashable.com, format: json, Time: 2015-07-23 14:59:26 -0400
Notice how there is a 7-second delay between Started GET and "Searching articles for keyword", which is the first thing the controller method does.
articles.json is routed to my controller method "articles" which simply does this for now:
def articles
  format = params[:format]
  keyword = params["q"]
  Rails.logger.info "Searching articles for keyword: #{keyword}, format: #{format}, Time: #{Time.now.to_s}"
end
This is my routes.rb
MyApp::Application.routes.draw do
  match '/search/articles' => 'search#articles'
  # more routes here, but articles is the first route
end
What could possibly cause this delay? Is it because a Unicorn worker is busy? Is it because a Unicorn worker is taking up too much memory, which makes the system slow?
Note: I don't believe the delay is in making any database connections but I could be wrong. The code doesn't need to make a database call, and the max connections for my database is 1000, and there are usually at most 1-2 connections.
Three thoughts:
You'll probably be better served using Puma instead of Unicorn
It could be that your system is running out of memory, or it could have plenty of memory available: install New Relic to troubleshoot where the bottleneck is
It could also be that you have more Unicorn instances than the number of connections your DB allows, in which case the instance is having to wait for others to disconnect before it can connect. This would likely manifest itself with irregular 5-second delays rather than happening every time.
Actually, it might be caused by a before_filter callback; you should check that.
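A quick, throwaway way to check that (sketch only; the controller name is inferred from the search#articles route, and the filter exists purely for logging):
class SearchController < ApplicationController
  # log as early as possible: the gap between "Started GET" and this line tells you
  # whether the time is lost before the filter chain or inside it
  prepend_before_filter { Rails.logger.info "reached filter chain at #{Time.now}" }
end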
I think it could be caused by a lack of memory and thus frequent garbage collection, which can freeze the whole system.
If it's a production problem it could be caused by slow clients sending requests. New Relic and Monit are good options. You could consider sending signals to Unicorn workers to restart them to better understand the problem.
You could also try adding preload_app true in your Unicorn config to speed up the startup time of worker processes.
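A minimal config/unicorn.rb along those lines (the worker count and timeout are illustrative; keep worker_processes within your database's connection limit, as noted above):
# config/unicorn.rb
worker_processes 3   # stay comfortably below the DB connection limit
preload_app true     # load the app once in the master, fork workers from it
timeout 30           # replace workers stuck longer than 30 seconds

before_fork do |server, worker|
  # the master's DB connection must not be shared with forked workers
  ActiveRecord::Base.connection.disconnect! if defined?(ActiveRecord::Base)
end

after_fork do |server, worker|
  # each worker opens its own DB connection
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
end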

Perform recurrent tasks in Rails without loading the full environment

In my application, I need to execute a task every 5 minutes. The task itself is quite lean, so that's not the problem: the point is that loading the Rails environment uses almost all the CPU of the server. Right now, I'm using whenever with a code like this in my schedule.rb:
every 5.minutes do
  runner "Task.perform"
end
But the cost is too big. Is there any way to execute this kind of task in a preloaded environment? Or any other better solution?
Consider using a background process (there are many gems for this, notably delayed_job) and implementing a simple "cron" in your own code with a polling loop, something like:
def perform
  next_run = Time.now
  loop do
    if Time.now >= next_run
      do_task
      next_run += 5 * 60   # schedule the next run five minutes out
    end
    sleep 1                # don't spin the CPU while waiting
  end
end
Warning: this example has more holes than a slice of swiss cheese. Write something better :-)
You could also use a cron job, or set it up to run from a bash script. These have the advantage of not being tied to your application.
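For example, a crontab entry along these lines (the paths, task name, and log file are placeholders) runs every five minutes without keeping anything resident, although invoking rake still boots the Rails environment on each run:
# crontab -e
*/5 * * * * cd /var/www/myapp && RAILS_ENV=production bundle exec rake myapp:task >> log/cron.log 2>&1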

max-requests-per-worker in unicorn

I sought, but did not find, a max-requests-per-worker option in unicorn similar to gunicorn's max_requests or apache's MaxRequestsPerChild.
Does it exist?
If not, has anyone implemented it?
I'm thinking of putting it in the file where I have oobgc, since that gets control after every request anyway. Does that sound about right?
The problem is that my unicorn workers are getting big and fat, and garbage collection is taking more and more of my CPU.
I've just released the 'unicorn-worker-killer' gem. It enables you to kill a Unicorn worker based on 1) a max number of requests and 2) the process memory size (RSS), without affecting any in-flight request. It's really easy to use. First, please add this line to your Gemfile.
gem 'unicorn-worker-killer'
Then, please add the following lines to your config.ru.
# Unicorn self-process killer
require 'unicorn/worker_killer'
# Max requests per worker
use Unicorn::WorkerKiller::MaxRequests, 3072, 4096
# Max memory size (RSS) per worker
use Unicorn::WorkerKiller::Oom, (256*(1024**2)), (384*(1024**2))
It's highly recommended to randomize the threshold to avoid killing all workers at once; that is what the paired numbers above are for (the gem picks a random threshold between the two for each worker).
Unicorn itself doesn't offer a max-requests option.
The unicorn master will re-spawn any worker which exits and a worker will gracefully exit at the end of the current request when it receives a QUIT signal, so you could easily roll your own max request logic into your worker request life-cycle.
With Rails, something like the following in your application controller (alternatively, similar logic in a Rack middleware; see the sketch after the snippet below):
after_filter do
  @@request_count ||= 0
  Process.kill('QUIT', $$) if (@@request_count += 1) > MAX_REQUESTS
end
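The Rack middleware variant mentioned above could look roughly like this (the class name and threshold are illustrative):
class MaxRequests
  def initialize(app, max_requests = 1000)
    @app = app
    @max_requests = max_requests
    @count = 0
  end

  def call(env)
    response = @app.call(env)
    @count += 1
    # ask the unicorn master for a graceful replacement once the limit is hit
    Process.kill('QUIT', Process.pid) if @count >= @max_requests
    response
  end
end
# in config.ru: use MaxRequests, 1000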

Ruby on Rails: How to run things in the background?

When a new resource is created and it needs to do some lengthy processing before the resource is ready, how do I send that processing away into the background where it won't hold up the current request or other traffic to my web-app?
in my model:
class User < ActiveRecord::Base
  after_save :background_check

  protected

  def background_check
    # check through a list of 10000000000001 mil different
    # databases that takes approx one hour :)
    if check_for_record_in_www(self.username)
      # code that is run after the 1 hour process is finished.
      update_attribute(:has_record, true)
    end
  end
end
You should definitely check out the following Railscasts:
http://railscasts.com/episodes/127-rake-in-background
http://railscasts.com/episodes/128-starling-and-workling
http://railscasts.com/episodes/129-custom-daemon
http://railscasts.com/episodes/366-sidekiq
They explain how to run background processes in Rails in every possible way (with or without a queue ...)
I've just been experimenting with the 'delayed_job' gem because it works with the Heroku hosting platform and it was ridiculously easy to set up!
Add the gem to your Gemfile, then run bundle install, rails g delayed_job, and rake db:migrate.
Then start a queue handler with;
RAILS_ENV=production script/delayed_job start
Where you have a method call which is your lengthy process, i.e.
company.send_mail_to_all_users
you change it to;
company.delay.send_mail_to_all_users
Check the full docs on github: https://github.com/collectiveidea/delayed_job
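Applied to the model from the question, that would look roughly like this (a sketch: delayed_job adds .delay to objects, and the has_record value is assumed to be a boolean):
class User < ActiveRecord::Base
  after_save :queue_background_check

  def queue_background_check
    # enqueue the slow check instead of running it inside the request
    delay.background_check
  end

  def background_check
    update_attribute(:has_record, true) if check_for_record_in_www(username)
  end
end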
Start a separate process, which is probably most easily done with system, prepending a nohup and appending an & to the end of the command you pass it. (Make sure the command is just one string argument, not a list of arguments.) There's a small sketch of this after the list below.
There are several reasons you want to do it this way, rather than, say, trying to use threads:
Ruby's threads can be a bit tricky when it comes to doing I/O; you have to take care that some things you do don't cause the entire process to block.
If you run a program with a different name, it's easily identifiable in ps, so you don't accidentally think it's a FastCGI back-end gone wild or something, and kill it.
Really, the process you start should be "daemonized"; see the Daemonize class for help.
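A minimal sketch of that first step (the log path is a placeholder, and the daemonizing and ps-friendly naming described above are left out):
def kick_off_background_check
  # one string argument, so the shell handles the nohup/& for us;
  # output goes to a log file instead of tying up the request
  system("nohup script/runner check_for_record_in_www.rb #{username} >> log/background_check.log 2>&1 &")
end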
You ideally want to use an existing background job server rather than writing your own. These will typically let you submit a job and give it a unique key; you can then use the key to periodically query the job server for the status of your job without blocking your web app. Here is a nice roundup of the various options out there.
I like to use BackgrounDRb; it's nice because it allows you to communicate with it during long processes, so you can have status updates in your Rails app.
I think spawn is a great way to fork your process, do some processing in the background, and show the user just some confirmation that this processing was started.
What about:
def background_check
  # fork returns nil in the child; exec replaces the child with the runner script
  exec("script/runner check_for_record_in_www.rb #{self.username}") if fork.nil?
end
The program "check_for_record_in_www.rb" will then run in another process and will have access to ActiveRecord, being able to access the database.
