Rails Rake and MySQL SSH port forwarding - ruby-on-rails

I need to create a rake task that performs some ActiveRecord operations through an SSH tunnel.
The task runs on a remote Windows machine, so I would like to keep everything in Ruby. This is my latest attempt:
desc "Syncronizes the tablets DB with the Server"
task(:sync => :environment) do
require 'rubygems'
require 'net/ssh'
begin
Thread.abort_on_exception = true
tunnel_thread = Thread.new do
Thread.current[:ready] = false
hostname = 'host'
username = 'tunneluser'
Net::SSH.start(hostname, username) do|ssh|
ssh.forward.local(3333, "mysqlhost.com", 3306)
Thread.current[:ready] = true
puts "ready thread"
ssh.loop(0) { true }
end
end
until tunnel_thread[:ready] == true do
end
puts "tunnel ready"
Importer.sync
rescue StandardError => e
puts "The Database Sync Failed."
end
end
The task hangs after printing "tunnel ready" and never attempts the sync.
I have had success running one rake task to create the tunnel and then running the rake sync task in a different terminal. However, I want to combine them, so that if there is an error with the tunnel the sync is not attempted.
This is my first time using Ruby threads and Net::SSH forwarding, so I am not sure what the issue is here.
Any ideas?
Thanks

The issue is very likely the same as here:
Cannot connect to remote db using ssh tunnel and activerecord
Don't use threads: you need to fork the importer off in another process for this to work, otherwise you will lock up with the SSH event loop.
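A minimal sketch of that approach, reusing the host names and Importer class from the question (and assuming a Unix-like host, since fork is unavailable on Windows):

require 'net/ssh'

Net::SSH.start('host', 'tunneluser') do |ssh|
  ssh.forward.local(3333, "mysqlhost.com", 3306)

  pid = fork do
    # The child connects to 127.0.0.1:3333; the parent's event loop
    # below services the forwarded connection for it.
    Importer.sync
  end

  # Pump the SSH event loop until the child process finishes.
  ssh.loop { Process.waitpid(pid, Process::WNOHANG).nil? }
end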

Just running the code itself as a Ruby script (with Importer.sync disabled) seems to work without any errors. This suggests that the issue is with Importer.sync. Would it be possible for you to paste the Importer.sync code?

Just a guess, but could the issue here be that your :sync rake task has the Rails environment as a prerequisite? Is there anything happening in your Importer class initialization that relies on this SSH connection being available at load time in order to work correctly?
I wonder what would happen if, instead of having environment be a prerequisite for this task, you tried...
...
Rake::Task["environment"].execute
Importer.sync
...

Related

How do I see delayed_job jobs in production?

I have a server where I deploy using Capistrano, and I use delayed_job to do some mailing, but on my server, for some reason, the jobs do not execute. The delayed_job process is running (running bin/delayed_job status responds correctly, saying there's a process with some pid), but I don't know whether the process just isn't executing my jobs or whether my jobs aren't being enqueued at all. Locally it all works fine, but in production on the server it does not.
I'd like to know if there's a way I can at least check what jobs are there, since I can't do it by accessing the console.
Another gem that works with delayed_job is delayed-web, which you can find here: https://github.com/tatey/delayed-web
Add it to your Gemfile:
gem 'delayed-web'
then run
rails generate delayed:web:install
this will generate an initializer file delayed_web.rb under config/initializers with the following:
Rails.application.config.to_prepare do
  Delayed::Web::Job.backend = 'active_record'
end
and in config/application.rb this will be added for you as well by the generator
# config/application.rb
config.assets.enabled = true
config.assets.precompile << 'delayed/web/application.css'
In routes.rb it may add a route as well, but if you are using Devise you may want to restrict access to admin users only, as follows:
authenticated :user, -> user { user.admin? } do
  mount Delayed::Web::Engine, at: '/jobs'
end
OK, so I checked my jobs through the database itself: I entered psql as the postgres user and ran some queries against the delayed_jobs table. You can also try RAILS_ENV=production bin/delayed_job run (for Rails 4; for Rails 3 use "script/" instead of "bin/"), which will show you what the workers are doing while they execute jobs.
You can also, as Swards commented above, use a gem to get a web interface for delayed_job: https://github.com/ejschmitt/delayed_job_web
If you still want to see what my problem with the email sending was, I've opened another question, because it got too far away from what this one was about: What port to use sending email with SMTP (mailgun) in rails app on production server (DigitalOcean)?
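For anyone who can get a rails console (or a rails runner one-liner) on the server, the same checks can be expressed through the standard ActiveRecord backend of delayed_job; a minimal sketch, assuming the default delayed_jobs table:

# inside rails console (or rails runner) on the server
Delayed::Job.count                                  # anything enqueued at all?
Delayed::Job.where('locked_at IS NOT NULL').count   # jobs a worker has claimed
Delayed::Job.where('last_error IS NOT NULL')        # recent failures
            .limit(5).map { |j| [j.id, j.attempts, j.last_error] }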

Rake task failing to load :environment properly

I'm running a custom rake task...
namespace :import do
  desc "Import terms of service as HTML from stdin"
  task :terms => :environment do
    html = STDIN.read
    settings = ApplicationWideSetting.first
    settings.terms_and_conditions = html
    if settings.save
      puts "Updated terms of service"
    else
      puts "There was an error updating terms of service"
    end
  end
end
The model ApplicationWideSetting is reported as undefined when running the task in the production environment. However, the task runs fine in other environments (i.e. development, staging, test).
Running the same operations in rails console completes without error in all environments.
Does anyone know what's going on, or things I could check?
Note: I ran the task with puts Rails.env to check that the shell environment variable RAILS_ENV was getting set and read correctly. I've also tried with and without square brackets around the :environment dependency declaration.
Additional info: Rails v3.2.14.
Further info: I've set up a completely fresh Rails app, and the script works fine in any environment. Since the install in question is a real production environment, I'll have to set up another deploy and check it thoroughly. More info as I find it.
In a nutshell, Rails doesn't eager load models (or anything else) when running rake tasks in production.
The simplest fix is to require the model at the start of the rake task, after which it should work as expected; in this case:
# explicitly require model
require 'application_wide_setting'
It's possible to eager load the entire Rails app with:
Rails.application.eager_load!
However, you may have issues with some initializers (e.g. Devise).
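Putting the two together, a hedged sketch of the task from the question with the explicit load added (either the require line or the eager_load! alternative, placed before the model is first referenced):

namespace :import do
  desc "Import terms of service as HTML from stdin"
  task :terms => :environment do
    require 'application_wide_setting'   # or: Rails.application.eager_load!
    html = STDIN.read
    settings = ApplicationWideSetting.first
    settings.terms_and_conditions = html
    puts settings.save ? "Updated terms of service" : "There was an error updating terms of service"
  end
end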

Using Postgresql with Amazon Opsworks - Getting IP address in database.yml

I'm trying to get a basic Rails app working with Postgres on Amazon OpsWorks. OpsWorks lacks built-in support for Postgres at the moment, but I'm using some cookbooks I've found that seem to be well written. I've forked them all into my custom cookbooks at: https://github.com/tibbon/custom-opsworks-cookbooks
Anyway, where I'm stuck at the moment is getting the IP address of the master Postgres database into the database.yml file. It seems that there should be multiple back ends specified, somewhat like how my haproxy server sees all the Rails servers as 'backends'.
Has anyone gotten this working?
I had to add some custom JSON to my Rails layer.
It looked like this:
{
  "deploy": {
    "my-app-name": {
      "database": {
        "adapter": "mysql2",
        "host": "xxx.xx.xxx.xx"
      }
    }
  }
}
I believe you have to define a custom recipe that updates database.yml and restarts the app server.
In this guide the same thing is done using a redis server as an example:
node[:deploy].each do |application, deploy|
  if deploy[:application_type] != 'rails'
    Chef::Log.debug("Skipping redis::configure for application #{application} as it is not a Rails app")
    next
  end

  execute "restart Rails app #{application}" do
    cwd deploy[:current_path]
    command "touch tmp/restart.txt"
    action :nothing
    only_if do
      File.exists?(deploy[:current_path])
    end
  end

  redis_server = node[:opsworks][:layers][:redis][:instances].keys.first rescue nil

  template "#{deploy[:deploy_to]}/current/config/redis.yml" do
    source "redis.yml.erb"
    mode "0660"
    group deploy[:group]
    owner deploy[:user]
    variables(:host => (node[:opsworks][:layers][:redis][:instances][redis_server][:private_dns_name] rescue nil))
    notifies :run, resources(:execute => "restart Rails app #{application}")
    only_if do
      File.directory?("#{deploy[:deploy_to]}/current")
    end
  end
end
I haven't tested this myself yet, but I believe I will soon; I'll try to update this answer when I do.
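Adapting the Redis example to the database case might look like the sketch below. The :postgresql layer name and the database.yml.erb template are assumptions that depend on the custom cookbooks in use, and the restart resource is the same execute block shown above:

node[:deploy].each do |application, deploy|
  next if deploy[:application_type] != 'rails'

  # treat the first instance in the (assumed) postgresql layer as the master
  pg_master = node[:opsworks][:layers][:postgresql][:instances].keys.first rescue nil

  template "#{deploy[:deploy_to]}/current/config/database.yml" do
    source "database.yml.erb"   # hypothetical template shipped with the cookbook
    mode "0660"
    group deploy[:group]
    owner deploy[:user]
    variables(:host => (node[:opsworks][:layers][:postgresql][:instances][pg_master][:private_dns_name] rescue nil))
    notifies :run, resources(:execute => "restart Rails app #{application}")
    only_if { File.directory?("#{deploy[:deploy_to]}/current") }
  end
end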

How do I create a daemon to run an SMTP server within a Rails stack?

I'm running a Rails app which, amongst other things, needs to roll its own SMTP server. Mini-SMTP-Server looks very good, but I don't know how to get it to run as a daemon. I'd like to be able to act on incoming messages, and I need to have the full Rails stack available for other tasks.
I've looked at the daemons gem and it seems suitable, but I don't know how to wire it up to start listening for SMTP messages in a sensible fashion.
Create an smtp_server rake task, make sure it depends on :environment, and then write your SMTP server code in that task. Look at this thread for setting up a rake task as a daemon: Daemonising a rake task
desc 'smtp_server'
task :smtp_server => :environment do
  # Create a new server instance listening at 127.0.0.1:2525
  # and accepting a maximum of 4 simultaneous connections
  server = MiniSmtpServer.new(2525, "127.0.0.1", 4)

  # Start the server
  server.start

  # Join the thread to main pool
  server.join
end
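If you'd rather not keep a terminal attached to the rake task, the daemons gem the question mentions can wrap the same code; a minimal sketch, assuming it lives in something like script/smtp_daemon.rb:

require 'daemons'

# `ruby script/smtp_daemon.rb start|stop|status` manages the process
Daemons.run_proc('smtp_server') do
  # load the full Rails stack so handlers can use app code
  require File.expand_path('../../config/environment', __FILE__)

  server = MiniSmtpServer.new(2525, "127.0.0.1", 4)
  server.start
  server.join
end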

Start or ensure that Delayed Job runs when an application/server restarts

We have to use delayed_job (or some other background job processor) to run jobs in the background, but we're not allowed to change the boot scripts/boot levels on the server. This means the daemon is not guaranteed to remain available if the provider restarts the server (since the daemon would have been started by a Capistrano recipe that is only run once per deployment).
Currently, the best way I can think of to ensure the delayed_job daemon is always running is to add an initializer to our Rails application that checks whether the daemon is running. If it's not running, the initializer starts the daemon; otherwise, it just leaves it be.
The question, therefore, is how do we detect from inside a script that the delayed_job daemon is running? (We should be able to start a daemon fairly easily, but I don't know how to detect whether one is already active.)
Anyone have any ideas?
Regards,
Bernie
Based on the answer below, this is what I came up with. Just put it in config/initializers and you're all set:
# config/initializers/delayed_job.rb
DELAYED_JOB_PID_PATH = "#{Rails.root}/tmp/pids/delayed_job.pid"

def start_delayed_job
  Thread.new do
    `ruby script/delayed_job start`
  end
end

def process_is_dead?
  begin
    pid = File.read(DELAYED_JOB_PID_PATH).strip
    Process.kill(0, pid.to_i)
    false
  rescue
    true
  end
end

if !File.exist?(DELAYED_JOB_PID_PATH) || process_is_dead?
  start_delayed_job
end
Some more cleanup ideas: the begin is not needed. You should rescue only "no such process", in order not to fire new processes when something else goes wrong. Rescue "no such file or directory" as well, to simplify the condition:
DELAYED_JOB_PID_PATH = "#{Rails.root}/tmp/pids/delayed_job.pid"

def start_delayed_job
  Thread.new do
    `ruby script/delayed_job start`
  end
end

def daemon_is_running?
  pid = File.read(DELAYED_JOB_PID_PATH).strip
  Process.kill(0, pid.to_i)
  true
rescue Errno::ENOENT, Errno::ESRCH # file or process not found
  false
end

start_delayed_job unless daemon_is_running?
Keep in mind that this code won't work if you start more than one worker. Also check out the -m argument of script/delayed_job, which spawns a monitor process along with the daemon(s).
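For example, to start a worker together with a monitor process that restarts it if it dies:

script/delayed_job -m start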
Check for the existence of the daemon's PID file (File.exist? ...). If it's there, assume it's running; otherwise start it up.
Thank you for the solution provided in the question (and the answer that inspired it :-) ); it works for me, even with multiple workers (Rails 3.2.9, Ruby 1.9.3p327).
It worries me that I might forget to restart delayed_job after making some changes to lib, for example, causing me to debug for hours before realizing that.
I added the following to my script/rails file so that the code provided in the question executes every time we start Rails, but not every time a worker starts:
puts "cleaning up delayed job pid..."
dj_pid_path = File.expand_path('../../tmp/pids/delayed_job.pid', __FILE__)
begin
File.delete(dj_pid_path)
rescue Errno::ENOENT # file does not exist
end
puts "delayed_job ready."
A little drawback that I'm facing with this, though, is that it also gets called with rails generate, for example. I did not spend much time looking for a solution to that, but suggestions are welcome :-)
Note that if you're using Unicorn, you might want to add the same code to config/unicorn.rb before the before_fork call.
-- EDITED:
After playing around a little more with the solutions above, I ended up doing the following:
I created a file script/start_delayed_job.rb with the content:
puts "cleaning up delayed job pid..."
dj_pid_path = File.expand_path('../../tmp/pids/delayed_job.pid', __FILE__)
def kill_delayed(path)
begin
pid = File.read(path).strip
Process.kill(0, pid.to_i)
false
rescue
true
end
end
kill_delayed(dj_pid_path)
begin
File.delete(dj_pid_path)
rescue Errno::ENOENT # file does not exist
end
# spawn delayed
env = ARGV[1]
puts "spawing delayed job in the same env: #{env}"
# edited, next line has been replaced with the following on in order to ensure delayed job is running in the same environment as the one that spawned it
#Process.spawn("ruby script/delayed_job start")
system({ "RAILS_ENV" => env}, "ruby script/delayed_job start")
puts "delayed_job ready."
Now I can require this file anywhere I want, including script/rails and config/unicorn.rb, by doing:

# at the top of script/rails
START_DELAYED_PATH = File.expand_path('../start_delayed_job', __FILE__)
require "#{START_DELAYED_PATH}"

# in config/unicorn.rb, before the before_fork call, with a different expand_path
START_DELAYED_PATH = File.expand_path('../../script/start_delayed_job', __FILE__)
require "#{START_DELAYED_PATH}"
Not great, but it works.
Disclaimer: I say not great because this causes a periodic restart, which for many will not be desirable, and simply trying to start can cause problems, because the implementation of DJ can lock up the queue if duplicate instances are created.
You could schedule cron tasks that run periodically to start the job(s) in question. Since DJ treats start commands as no-ops when the job is already running, it just works. This approach also takes care of the case where DJ dies for some reason other than a host restart.
# crontab example
0 * * * * /bin/bash -l -c 'cd /var/your-app/releases/20151207224034 && RAILS_ENV=production bundle exec script/delayed_job --queue=default -i=1 restart'
If you are using a gem like whenever, this is pretty straightforward:
every 1.hour do
  script "delayed_job --queue=default -i=1 restart"
  script "delayed_job --queue=lowpri -i=2 restart"
end
