I'm trying to move my Redis server off to an external box, and I have followed the Resque README on GitHub.
In development mode, it loads the config just fine and connects to localhost on 6379:
resque.rb initialiser
rails_root = ENV['RAILS_ROOT'] || File.dirname(__FILE__) + '/../..'
rails_env = ENV['RAILS_ENV'] || 'development'
resque_config = YAML.load_file(rails_root + '/config/resque.yml')
Resque.redis = resque_config[rails_env]
resque.yml
development: localhost:6379
playground: redis1.play.xxx.com:6379
production: redis1.pro.xxx.com:6379
However, in playground / production modes, it falls back to the development server and doesn't connect. I'm assuming this is because Unicorn isn't declaring the environment correctly?
If I replace 'development' with 'playground' in the initialiser, it works.
I'm starting unicorn with:
unicorn -c config/unicorn.rb -E playground -l 8000 -D
How can I get it to pick up the correct config?
Finally sorted, although I don't really understand why... I won't accept my own answer for a couple of days in case someone wants to interject.
Once I got God to manage the service instead of starting / stopping it manually, it picked up the correct environment.
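(My best guess after the fact: Unicorn's -E flag sets RACK_ENV rather than RAILS_ENV, so the ENV['RAILS_ENV'] lookup in the initialiser stayed unset and fell back to 'development', while the God config exports RAILS_ENV explicitly.)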
Now I'm connecting to a remote redis service with no issues.
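For reference, a minimal sketch of the sort of God watch that would explain this behaviour; every name and path here is an assumption, not taken from my actual setup:
# god config (hypothetical)
God.watch do |w|
  w.name     = 'unicorn'
  w.dir      = '/var/www/app/current'
  # Exporting RAILS_ENV explicitly is what lets the resque initialiser
  # pick the right entry out of resque.yml.
  w.env      = { 'RAILS_ENV' => 'playground' }
  w.start    = 'bundle exec unicorn -c config/unicorn.rb -E playground -l 8000 -D'
  w.stop     = 'kill -QUIT `cat tmp/pids/unicorn.pid`'
  w.pid_file = '/var/www/app/current/tmp/pids/unicorn.pid'
  w.keepalive
end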
Related
I have a Rails app where my different services live in different engines. I want to use a different Resque (with a different Redis and different workers) for each engine. How can I do this in Rails?
Since you have not shared any code, I am not sure how you have structured your app or how you are using Redis and workers, so I will have to assume many things while answering this question.
Let's say your structure is like this:
root
  engines
    engine1
      app
      config
      ...
    engine2
      app
      config
      ...
You can keep each engine's Resque config like this:
resque1 config, with Redis running on localhost on port 6379
#root/engines/engine1/config/resque.yml
development: localhost:6379 #redis1
test: localhost:6379
...
resque2 config, with Redis running on localhost on port 6380
#root/engines/engine2/config/resque.yml
development: localhost:6380
test: localhost:6380
...
resque1 initializer
#root/engines/engine1/config/initializers/resque.rb
# __FILE__ lives in engines/engine1/config/initializers/, so two
# directories up is the engine root, not the Rails root.
engine_root = File.expand_path('../..', File.dirname(__FILE__))
rails_env = ENV['RAILS_ENV'] || 'development'
config_file = engine_root + '/config/resque.yml'
resque_config = YAML.load(ERB.new(IO.read(config_file)).result)
Resque.redis = resque_config[rails_env]
resque2 initializer
#root/engines/engine2/config/initializers/resque.rb
# Same pattern for engine2: resolve the engine root, not the Rails root.
engine_root = File.expand_path('../..', File.dirname(__FILE__))
rails_env = ENV['RAILS_ENV'] || 'development'
config_file = engine_root + '/config/resque.yml'
resque_config = YAML.load(ERB.new(IO.read(config_file)).result)
Resque.redis = resque_config[rails_env]
You can point a resque-web instance at each engine's config like this
RAILS_ENV=production resque-web rails_root/engines/engine1/config/initializers/resque.rb
RAILS_ENV=production resque-web rails_root/engines/engine2/config/initializers/resque.rb
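If you want the actual background workers for each engine rather than the dashboard, the standard rake task can be pointed at each engine's queues, e.g. RAILS_ENV=production QUEUE=engine1_queue bundle exec rake resque:work (the queue name is an assumption; use whichever queues the engine's jobs declare).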
Also, if you want to use a single Redis and Resque instance, then for Redis you can namespace it with https://github.com/resque/redis-namespace
and for Resque you can define a different set of queues for each engine.
You can use the same Redis with a different namespace for each engine.
Or, if Redis memory is your concern, you can try using a Redis connection pool gem.
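A minimal sketch of that namespacing approach, assuming both engines share one Redis on the default port (the namespace names are made up):
# hypothetical shared-Redis setup using the redis-namespace gem
require 'redis'
require 'redis-namespace'

redis = Redis.new(host: 'localhost', port: 6379)
# in engine1's initializer:
Resque.redis = Redis::Namespace.new(:engine1, redis: redis)
# engine2's initializer would do the same with :engine2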
So far I had a simple application that only required the classic rails server to boot.
I have recently added the react_on_rails gem, and it requires booting a Node.js server to handle webpack and the JavaScript side of things.
So I understand I need the foreman gem, which is capable of managing several processes. So far so good, but I'm still having a few problems understanding and deploying this enhanced app to my production environment (Phusion Passenger on Apache/Nginx).
So, several questions:
Does Passenger handle the transition from rails s to foreman start -f Procfile.dev automatically?
If not, where do I set things up so Passenger works?
Side question: almost all Google results refer to Puppet when searching for Foreman on Passenger. Could anyone explain in one line what Puppet does, and whether I really need it in production? So far everything runs smoothly on localhost with the foreman start -f Procfile.dev command, so I don't know where this is coming from...
I am deploying my application to the Amazon cloud using Capistrano, and I was expecting to have the Rails + Node.js setup on every autoscaled instance (and Passenger would graciously handle all that). Am I thinking wrong?
In our production environment we use eye to manage the other processes related to the Rails app. (Passenger runs from mod_passenger, while the workers are controlled by eye.)
And here is an example of how to start 4 concurrent queue_classic workers:
APP_ROOT = File.expand_path(File.dirname(__FILE__))
APP_NAME = File.basename(APP_ROOT)
Eye.config do
logger File.join(APP_ROOT, "log/eye.log")
end
Eye.application APP_NAME do
working_dir File.expand_path(File.dirname(__FILE__))
stdall 'log/trash.log' # stdout,err logs for processes by default
  env 'RAILS_ENV' => 'production' # global env for each process
trigger :flapping, times: 10, within: 1.minute, retry_in: 10.minutes
group 'qc' do
1.upto(4) do |i|
process "worker-#{i}" do
stdall "log/worker-#{i}.log"
pid_file "tmp/pids/worker-#{i}.pid"
start_command 'bin/rake qc:work'
daemonize true
stop_on_delete true
end
end
end
end
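To pick this up (assuming the file above is saved as app.eye in the project root), load it into the eye daemon and start the application with eye load app.eye followed by eye start <app_name>; eye info then shows the current state of each worker.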
In my Ruby on Rails dev environment, I am starting Rails and Unicorn via Foreman in the typical way:
(Procfile:)
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
I am also running Pow, but not as a webserver; I'm just using it to direct HTTP requests from mydomain.dev (port 80) to the port Unicorn is listening on.
You can do this by creating a pow file (mydomain.dev) containing the port number Unicorn is running on.
Given this setup, is it possible in my Rails code to know what port I started Unicorn on?
I'm only wanting to know this in my dev environment, it's not a production issue.
In my Rails code, I've tried a few different things, but none of them work:
Unicorn::Configurator::RACKUP[:port] - returned nothing
Rails::Server.new.options[:Port] - doesn't exist in Rails 4
Rack::Server.new.options[:Port] - returns the default Rack port (9292), not the one configured for this Rack instance.
Is there a way to get the current rack instance from rails?
request.port - returns 80, which is the port that Pow is listening on. Pow is routing http traffic to Unicorn, which is on a different port.
None of these give me the port that Unicorn is running on.
Any ideas?
EDIT: If you're wondering why I want to know this, it's because in my dev environment I'm trying to dynamically create configuration files for Pow, so I can route HTTP requests to Unicorn on the correct port.
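For what it's worth, here's a sketch of that idea; the domain, env variable, and fallback port are all assumptions. Pow's convention is that a plain file under ~/.pow whose contents are a port number proxies that domain to the port:
# hypothetical snippet, e.g. run from an initialiser or rake task
pow_file = File.expand_path('~/.pow/mydomain')
File.write(pow_file, ENV.fetch('UNICORN_PORT', '5000'))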
If you're responding to a request in a controller or view, use the request object - this should work, although you say it does not:
request.port
If you're in an initialiser:
Rails::Server.new.options[:Port]
How to find the local port a rails instance is running on?
You should just be able to access it via ENV['PORT'], given that its value has been set from the $PORT environment variable.
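For example, a dev-only sketch (assuming Foreman exported $PORT into the environment before Unicorn booted; the constant name is made up):
# config/initializers/unicorn_port.rb (hypothetical)
# Foreman substitutes $PORT in the Procfile and also exports it to the
# child process, so it should be visible to Rails here.
UNICORN_DEV_PORT = ENV['PORT'].to_i if Rails.env.development? && ENV['PORT']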
I've sort of found a way to do this.
Create separate config files for Unicorn: unicorn.development.rb and unicorn.testing.rb
Install the dotenv-rails gem
Inside my unicorn config files, do something like this:
# unicorn.development.rb:
require "dotenv"
Dotenv.load(
"./.env.local",
"./.env.development",
"./.env"
)
if ENV['UNICORN_PORT'].nil?
  # `throw` is for catch/throw control flow; `raise` is what aborts boot here
  raise 'UNICORN_PORT not set in environment!'
end
worker_processes 3
timeout 30
preload_app true
listen ENV['UNICORN_PORT'], backlog: 64
... rest of unicorn config...
# unicorn.testing.rb:
require "dotenv"
Dotenv.load(
"./.env.local",
"./.env.testing",
"./.env"
)
if ENV['UNICORN_PORT'].nil?
  raise 'UNICORN_PORT not set in environment!'
end
worker_processes 3
timeout 30
preload_app true
listen ENV['UNICORN_PORT'], backlog: 64
... rest of unicorn config...
In my .env.development and .env.testing environment files, I set the UNICORN_PORT environment variable.
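For example (the port values are placeholders):
# .env.development
UNICORN_PORT=3000
# .env.testing
UNICORN_PORT=3001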
Make sure you use the correct Unicorn config file to start the app. This can be done by using separate Procfiles for dev and testing.
# Procfile.dev
web: bundle exec unicorn -c ./config/unicorn.development.rb
# Procfile.testing
web: bundle exec unicorn -c ./config/unicorn.testing.rb
This appears to mostly work, but is not without its issues...
Probably a bad idea, but whatever:
# backslashes must be doubled inside Ruby backticks so sed sees \( and \1
uport = `netstat -n --listening --tcp -p | grep unicorn | sed 's/.*:\\([0-9]*\\) .*/\\1/'`.strip
My puma config:
path = Dir.pwd + "/tmp/puma/"
threads 0,20
environment "production"
daemonize true
drain_on_shutdown true
bind "unix://" + path + "socket/puma.sock"
pidfile path + "pid/puma.pid"
state_path path + "pid/puma.state"
My environments/production.rb
MyApp::Application.configure do
config.log_level = :debug
end
I start my server:
starkers#ubuntu:~/Desktop/myspp$ pumactl -F config/puma.rb start
=> Booting Puma
=> Rails 4.0.2 application starting in production on http://0.0.0.0:3000
=> Run `rails server -h` for more startup options
=> Ctrl-C to shutdown server
config.eager_load is set to nil. Please update your config/environments/*.rb files accordingly:
* development - set it to false
* test - set it to false (unless you use a tool that preloads your test environment)
* production - set it to true
Puma 2.8.2 starting...
* Min threads: 0, max threads: 16
* Environment: production
* Listening on tcp://0.0.0.0:3000
I browse around my app, yet my log/production.log is blank, and I'm not sure why.
Directory access is 0777 throughout my app.
No idea what is causing this. I really need logs (obviously). It's happening both locally and remotely, so it's something to do with my configuration, though I'm not sure which part. Is there anything in Puma/Ubuntu/Rails that could be causing this?
development.log works perfectly. I've copy-pasted my development.rb into my production.rb file, so they are literally identical: identical development.rb and production.rb. And yet:
RAILS_ENV=development rails s
populates development.log
and
RAILS_ENV=production rails s
leaves production.log as empty as Kim Kardashian's head.
Set bind at the end of the config file:
path = Dir.pwd + "/tmp/puma/"
threads 0,20
environment "production"
daemonize true
drain_on_shutdown true
pidfile path + "pid/puma.pid"
state_path path + "pid/puma.state"
bind "unix://" + path + "socket/puma.sock"
I used the command pumactl -F config/puma.rb start to start the server (I guess there is no difference, but anyway).
And I would recommend using #{} interpolation for the paths:
pidfile "#{path}pid/puma.pid"
state_path "#{path}pid/puma.state"
bind "unix://#{path}socket/puma.sock"
but it's your choice.
Hope it helps (your config didn't work for me either).
You can also add Puma logs:
stdout_redirect "#{Dir.pwd}/log/puma.stdout.log", "#{Dir.pwd}/log/puma.stderr.log"
Add this line before bind.
If you want to add the output of the server to a log, the easiest way to do this is by telling your system to do exactly that. Running your server start command like:
pumactl -F config/puma.rb start >> log/development.log
will append each line of output from your server to the development log. To make things easier to debug, you may want to give each server its own log, such as log/puma.log. If you do, you may wish to rewrite the file from scratch every time you start the server instead of keeping a cumulative log; in that case, just turn the >> into a >, such as:
pumactl -F config/puma.rb start > log/puma.log
However, if you have your system set up to automatically restart the server when it fails, using > will overwrite the log, and with it whatever evidence of the crash, when the server restarts.
Similarly, you can get your production.log working by starting your rails server like:
RAILS_ENV=production rails s >> log/production.log
If you want to run your server in the background, as you might in your production environment, you can add an & to the end, like:
pumactl -F config/puma.rb start > log/puma.log &
If you do this, you'll probably want to store the process identifier so you can kill the server later, as ^C doesn't work for background processes. To store the process id, pick a file somewhere like lib/pids/puma.pid and write the Puma server's process id to it like:
pumactl -F config/puma.rb start > log/puma.log &
echo $! > lib/pids/puma.pid
You would then be able to kill the server with:
kill `cat lib/pids/puma.pid`
It is important to remember that even if you append the server's output to your development.log file, it will not show up in the output of your development rails server. If you want a live view of your log for debugging, you can use the tailf command, such as:
tailf log/puma.log
For more information on the command line interface, the Command Line Crash Course is a good resource.
We're currently in transition between a Ruby/RoR 1.8/3.2 app and a rewritten 2.0/4.0 app on the same server. Since I don't want to mess with the current Resque configuration and risk updating any gems that might break production, I elected to set up a separate Redis server on a new port and use the newest version of Resque with it. Resque appears to be running fine; if I manually launch a worker with rake resque:work QUEUE='*' and watch the process, it runs queued jobs from resque-scheduler. However, none of my workers, launched manually or via script, show up in the running resque-web instance. The stats page of resque-web shows it's looking at the second Redis instance. Does anyone have experience with this or any ideas?
/config/initializers/resque.rb:
require 'resque_scheduler'
rails_root = ENV["RAILS_ROOT"] || File.dirname(__FILE__) + "/../.."
rails_env = ENV["RAILS_ENV"] || "demo"
resque_config = YAML.load_file(rails_root + "/config/resque.yml")
Resque.redis = resque_config[rails_env]
Resque.redis.namespace = "resque-reos:#{rails_env}"
/config/resque.yml
demo: localhost:6380
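One thing worth ruling out (an assumption, since the question doesn't show how resque-web is launched): workers only appear if resque-web matches both the Redis address and the namespace set in the initializer above. The Sinatra-based resque-web accepts a config file argument, so launching it against the same initializer rules that mismatch out:
resque-web /path/to/config/initializers/resque.rb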