Redis connection error on Heroku - ruby-on-rails

I have a Rails app which uses Resque for background jobs. This works fine locally, but after deploying to Heroku I get a connection error:
Redis::CannotConnectError (Error connecting to Redis on localhost:6379 (Errno::ECONNREFUSED)):
I can see it is trying to connect to localhost, which is not correct. I'm using the Heroku Redis add-on and I have added the redis gem. This is what config/initializers/redis.rb looks like:
$redis = Redis.new(url: ENV["REDIS_URL"])
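Worth noting: when REDIS_URL is nil in the process environment, Redis.new falls back to its default of localhost:6379, which matches the error above. A fail-fast variant of the initializer (a sketch; the redis_url! helper is my own naming, not from the post):

```ruby
# Sketch: raise early when REDIS_URL is missing instead of letting the
# redis gem silently fall back to redis://127.0.0.1:6379.
def redis_url!(env)
  env["REDIS_URL"] or raise "REDIS_URL is not set; Redis would default to localhost:6379"
end

# In config/initializers/redis.rb this would become:
#   $redis = Redis.new(url: redis_url!(ENV))
```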
And this is my Procfile:
web: bundle exec puma -C config/puma.rb
resque: env TERM_CHILD=1 bundle exec rake resque:work QUEUE=* COUNT=1
REDIS_URL is set in the config vars. What is going wrong here?

It seems that when using Puma or Unicorn on Heroku, you also need to configure Resque's Redis connection in the server's boot process. I'm using Puma, so I added this to config/puma.rb:
on_worker_boot do
  # ...
  if defined?(Resque)
    Resque.redis = ENV["REDIS_URL"] || "redis://127.0.0.1:6379"
  end
end
Here are more detailed explanations for both Puma and Unicorn:
https://devcenter.heroku.com/articles/deploying-rails-applications-with-the-puma-web-server
https://devcenter.heroku.com/articles/rails-unicorn#caveats
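Putting the pieces together, a config/puma.rb along the lines of the Heroku article above might look like this (a sketch with illustrative values; the ActiveRecord reconnect is the usual recommendation when preload_app! is used):

```ruby
# config/puma.rb -- illustrative sketch
workers Integer(ENV['WEB_CONCURRENCY'] || 2)
threads_count = Integer(ENV['MAX_THREADS'] || 5)
threads threads_count, threads_count

preload_app!
port ENV['PORT'] || 3000
environment ENV['RACK_ENV'] || 'development'

on_worker_boot do
  # Connections opened in the master before fork are not valid in the
  # forked child, so re-establish them here.
  Resque.redis = ENV['REDIS_URL'] || 'redis://127.0.0.1:6379' if defined?(Resque)
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord)
end
```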

Related

Sidekiq worker doesn't start processing on Heroku Container

I'm attempting to use Sidekiq on Heroku with containers, but with no luck. I've spent a week trying to fix this. Any help?
Here is what I have done so far.
Set up Sidekiq:
Sidekiq.configure_server do |config|
  config.redis = { url: ENV.fetch('REDIS_URL', 'redis://localhost:6379/1') }
end

Sidekiq.configure_client do |config|
  config.redis = { url: ENV.fetch('REDIS_URL', 'redis://localhost:6379/1') }
end
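One thing to watch with this config: ENV.fetch with a default never raises, so an unset REDIS_URL silently points Sidekiq at localhost, the same failure mode as the Resque question above. A minimal sketch of that fetch behavior (using a plain Hash, which has the same fetch semantics as ENV):

```ruby
# Hash#fetch returns the default only when the key is absent.
env_with_url    = { 'REDIS_URL' => 'redis://some-host:6379' }
env_without_url = {}

url_set     = env_with_url.fetch('REDIS_URL', 'redis://localhost:6379/1')
url_missing = env_without_url.fetch('REDIS_URL', 'redis://localhost:6379/1')

url_set      # => "redis://some-host:6379"
url_missing  # => "redis://localhost:6379/1"
```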
Set environment variables on the Heroku app
Enable the Redis add-on
Set up 2 dynos, one for the worker and one for web
Procfile
web: bundle exec puma -C config/puma.rb
worker: bundle exec sidekiq -c 3 -q default
commands
heroku container:push web
heroku container:release web
heroku container:push worker
heroku container:release worker
heroku ps:scale web=1 worker=1
Even when I trigger a worker job, I don't see any logs about the queue or the worker.
The web dyno itself works fine.
What seems weird to me is what the Heroku dashboard shows for the two dynos:
web rails server -b 0.0.0.0
worker rails server -b 0.0.0.0
Is this correct?

Heroku not starting workers

My Heroku app is not starting any workers. I scale the worker first:
heroku ps:scale resque=1 -a test-eagle
Scaling dynos... done, now running resque at 1:Free
Then when I check the workers, I see:
heroku ps:workers -a test-eagle
<app> is running 0 workers
What could be wrong here? This is what my Procfile looks like:
web: bundle exec puma -C config/puma.rb
resque: env TERM_CHILD=1 bundle exec rake resque:work QUEUE=* COUNT=1
Or is it because this is a free app, which can only run 1 web dyno and no other dynos?
Edit:
When I check with heroku ps -a <appname>, I see that the worker crashes just after starting: worker.1: crashed. This happens without doing anything in the application itself.
UPDATE: Well, I have a "free" app running that happens to use Puma too, so I updated its Procfile as follows:
web: bundle exec puma -C config/puma.rb
resque: env TERM_CHILD=1 bundle exec rake resque:work QUEUE=* COUNT=1
After that, I pushed the app to Heroku and ran heroku ps:scale as you specified. It behaved as follows:
D:\Bitnami\rubystack-2.2.5-3\projects\kctapp>heroku ps -a kctapp
=== web (Free): bundle exec puma -C config/puma.rb (1)
web.1: up 2016/06/06 19:38:24 -0400 (~ 1s ago)
D:\Bitnami\rubystack-2.2.5-3\projects\kctapp>heroku ps:scale resque=1 -a kctapp
Scaling dynos... done, now running resque at 1:Free
D:\Bitnami\rubystack-2.2.5-3\projects\kctapp>heroku ps -a kctapp
=== web (Free): bundle exec puma -C config/puma.rb (1)
web.1: up 2016/06/06 19:38:24 -0400 (~ 51s ago)
=== resque (Free): env TERM_CHILD=1 bundle exec rake resque:work QUEUE=* COUNT=1 (1)
resque.1: crashed 2016/06/06 19:39:18 -0400 (~ -3s ago)
Note that it did crash, but I don't have any Resque code in that app, so that could be why. Also note that I use heroku ps rather than heroku ps:workers, since the latter throws an error for me saying it is deprecated.
This is my config/puma.rb, if that helps:
workers Integer(ENV['WEB_CONCURRENCY'] || 4)
threads_count = Integer(ENV['MAX_THREADS'] || 8)
threads threads_count, threads_count
preload_app!
rackup DefaultRackup
port ENV['PORT'] || 5000
environment ENV['RACK_ENV'] || 'development'
After your edit: I had missed the scale command...
See scaling in Heroku here. The dyno options that I see are web, worker, rake, or console, not resque. I tried your command and it didn't recognize that formation, which makes me curious.
Checking a free app, it does not give you the option to add a worker dyno. With a hobby app you can add workers, and with professional apps you can mix and match dyno types between web and worker using 1X, 2X, and Performance dynos.

foreman fails to load environment variables

I'm using Rails 4.0.0 and Ruby 2.0.0.
When I start the server with foreman, some of the environment variables fail to load. What really bugs me is that some of the variables do load while others don't.
foreman start -e development.env
Procfile
web: bundle exec passenger start -p $PORT -e $RAILS_ENV
worker: bundle exec rake jobs:work RAILS_ENV=$RAILS_ENV
development.env file
S3_BUCKET=bucketname
AWS_ACCESS_KEY_ID=accesskey
AWS_SECRET_ACCESS_KEY=secretaccesskey
RAILS_ENV=development
PORT=3000
In my application.rb file I've added some logging to help debug this problem:
puts "PORT is #{ENV["PORT"].inspect}"
puts "RAILS_ENV is #{ENV["RAILS_ENV"].inspect}"
puts "S3_BUCKET is #{ENV["S3_BUCKET"].inspect}"
puts "AWS_ACCESS_KEY_ID is #{ENV["AWS_ACCESS_KEY_ID"].inspect}"
puts "AWS_SECRET_ACCESS_KEY is #{ENV["AWS_SECRET_ACCESS_KEY"].inspect}"
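As an aside, the repeated puts lines can be collapsed into a loop over the variable names; a small sketch:

```ruby
# Build one line per variable; nil values reveal which ones failed to load.
keys = %w[PORT RAILS_ENV S3_BUCKET AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY]
lines = keys.map { |key| "#{key} is #{ENV[key].inspect}" }
puts lines
```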
Once I start the server, this is the output of the logging code:
23:34:52 worker.1 | PORT is nil
23:34:52 worker.1 | RAILS_ENV is "development"
23:34:52 worker.1 | S3_BUCKET is nil
23:34:52 worker.1 | AWS_ACCESS_KEY_ID is nil
23:34:52 worker.1 | AWS_SECRET_ACCESS_KEY is nil
Why oh why? :-(
When I load the Rails console with foreman, it successfully loads the variables:
foreman run -e development.env rails c
Try modifying your development.env like this:
export S3_BUCKET=bucketname
export AWS_ACCESS_KEY_ID=accesskey
export AWS_SECRET_ACCESS_KEY=secretaccesskey
export RAILS_ENV=development
export PORT=3000
Then in the terminal
$ source /path/to/development.env
$ foreman start
Advanced
You can use dotenv to manage some of your environment variables without polluting your system environment, though it can't manage variables required for server boot, such as PORT.
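For illustration, dotenv-style loading boils down to parsing KEY=VALUE lines into the environment; a toy sketch (not the dotenv gem itself, which also handles quoting and interpolation):

```ruby
# Toy dotenv-style parser: skips blanks and comments, splits on the
# first '=' only, and merges the result into the given hash.
def load_env_lines(lines, env = {})
  lines.each do |line|
    stripped = line.strip
    next if stripped.empty? || stripped.start_with?('#')
    key, value = stripped.split('=', 2)
    env[key] = value
  end
  env
end

env = load_env_lines(['S3_BUCKET=bucketname', 'PORT=3000', '# a comment', ''])
env['S3_BUCKET']  # => "bucketname"
env['PORT']       # => "3000"
```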

Force Port in Foreman changes from 3000 to 3100

I have a .foreman file with the following line:
port: 3000
Then in my Procfile.dev I have the following:
web: bundle exec rails server -p $PORT
However, when running the server with:
foreman start -f Procfile.dev
I get the following:
Rails 4.0.0 application starting in development on http://0.0.0.0:3100
Why is that happening? Why is it not starting at 3000 but at 3100?
After digging a little further, I realized that in my Procfile the first line was:
redis: redis-server
When I moved that to the second line and made:
web: bundle exec rails server -p $PORT
the first line, it worked fine. Why would that order matter at all?
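The order matters because Foreman assigns each Procfile entry a block of 100 ports, in file order, starting from the base port (concurrent instances of one entry get consecutive ports within the block). A sketch of that assignment:

```ruby
# Foreman-style port assignment: base + 100 per Procfile entry,
# +1 per concurrent instance of that entry.
def foreman_port(base, entry_index, instance_index = 0)
  base + entry_index * 100 + instance_index
end

entries = %w[redis web]                    # order as in the Procfile above
foreman_port(3000, entries.index('web'))   # => 3100
foreman_port(3000, entries.index('redis')) # => 3000
```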

Unicorn & Heroku - is a different configuration for each dyno possible?

I'm currently running my app on 2 Heroku dynos. From what I've read so far, I need to add something similar to this to config/unicorn.rb:
worker_processes 3
timeout 30

@resque_pid = nil

before_fork do |server, worker|
  @resque_pid ||= spawn("bundle exec rake " \
    "resque:work QUEUES=scrape,geocode,distance,mailer")
end
I have a few different background jobs to process; some need to run single-threaded and some concurrently. The problem with this configuration is that both Unicorn instances will spawn exactly the same Resque worker (same queues, etc.).
It would greatly simplify everything if I could change the type of queues each worker processes - or even have one instance running a resque worker and the other a sidekiq worker.
Is this possible?
Perhaps you are confusing Unicorn's worker_processes with Heroku workers?
You can use your Procfile to start Heroku workers for each queue and a Unicorn process for handling web requests.
Try this setup:
/config/unicorn.rb
worker_processes 4 # amount of unicorn workers to spin up
timeout 30 # restarts workers that hang for 30 seconds
/Procfile
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
sidekiq_worker: bundle exec sidekiq
resque_worker: bundle exec rake resque:work QUEUES=scrape,geocode,distance,mailer
(Process names in a Procfile must be unique, so the two worker entries need distinct names.)
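Each worker entry can also get its own queue list via the QUEUES variable; Resque's rake task splits it on commas, roughly like this sketch (the real task also supports a singular QUEUE variable and has no built-in 'default' fallback):

```ruby
# Roughly how a QUEUES env var becomes a queue list (illustrative sketch).
def queues_from(env)
  env.fetch('QUEUES', 'default').split(',')
end

queues_from('QUEUES' => 'scrape,geocode,distance,mailer')
# => ["scrape", "geocode", "distance", "mailer"]
queues_from({})  # => ["default"]
```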
