Using PostgreSQL with Amazon OpsWorks - Getting the IP address into database.yml

I'm trying to get a basic Rails app working with Postgres on Amazon OpsWorks. OpsWorks lacks built-in support for Postgres at the moment, but I'm using some cookbooks I've found which seem to be well written. I've forked them all into my custom cookbooks at: https://github.com/tibbon/custom-opsworks-cookbooks
Anyway, where I'm stuck at the moment is getting the IP address of the master Postgres database into the database.yml file. It seems that there should be multiple back ends specified, kind of like how my HAProxy server sees all the Rails servers as 'backends'.
Has anyone gotten this working?

I had to add some custom JSON to my Rails layer. It looked like this (with the adapter set for Postgres, per the question):
{
  "deploy": {
    "my-app-name": {
      "database": {
        "adapter": "postgresql",
        "host": "xxx.xx.xxx.xx"
      }
    }
  }
}
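Those attributes end up as ordinary Chef node attributes, so a custom recipe can read them back out. A minimal sketch (the log line is purely illustrative):
node[:deploy].each do |application, deploy|
  db = deploy[:database] || {}
  Chef::Log.info("#{application} will use adapter #{db[:adapter]} against host #{db[:host]}")
end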

I believe you have to define a custom recipe that updates database.yml and restarts the app server.
In this guide the same thing is done using a Redis server as an example:
node[:deploy].each do |application, deploy|
  if deploy[:application_type] != 'rails'
    Chef::Log.debug("Skipping redis::configure for application #{application} as it is not a Rails app")
    next
  end

  execute "restart Rails app #{application}" do
    cwd deploy[:current_path]
    command "touch tmp/restart.txt"
    action :nothing
    only_if do
      File.exists?(deploy[:current_path])
    end
  end

  redis_server = node[:opsworks][:layers][:redis][:instances].keys.first rescue nil

  template "#{deploy[:deploy_to]}/current/config/redis.yml" do
    source "redis.yml.erb"
    mode "0660"
    group deploy[:group]
    owner deploy[:user]
    variables(:host => (node[:opsworks][:layers][:redis][:instances][redis_server][:private_dns_name] rescue nil))
    notifies :run, resources(:execute => "restart Rails app #{application}")
    only_if do
      File.directory?("#{deploy[:deploy_to]}/current")
    end
  end
end
I haven't tested this myself yet, but I expect to soon; I'll try to update this answer when I do.
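For the original PostgreSQL question, the same pattern could be adapted to render database.yml instead of redis.yml. A hedged sketch, assuming your PostgreSQL layer's shortname is postgres and your cookbook ships a database.yml.erb template (neither is confirmed by the guide above):
node[:deploy].each do |application, deploy|
  # find the first instance in the (assumed) "postgres" layer
  pg_master = node[:opsworks][:layers][:postgres][:instances].keys.first rescue nil

  template "#{deploy[:deploy_to]}/current/config/database.yml" do
    source "database.yml.erb"
    mode "0660"
    group deploy[:group]
    owner deploy[:user]
    # :private_ip is another per-instance attribute OpsWorks exposes;
    # the redis example above uses :private_dns_name instead
    variables(:host => (node[:opsworks][:layers][:postgres][:instances][pg_master][:private_ip] rescue nil))
    # assumes the "restart Rails app" execute resource from the redis example
    # is defined in the same recipe
    notifies :run, resources(:execute => "restart Rails app #{application}")
    only_if { File.directory?("#{deploy[:deploy_to]}/current") }
  end
end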

Related

How do I see delayed_job jobs in production?

I have a server where I deploy using Capistrano, and I use delayed_job to do some mailing, but on the server for some reason the jobs do not execute. The delayed_job process is running (running bin/delayed_job status answers correctly, saying there's a process with some pid), but I don't know whether the process just isn't executing my jobs or whether my jobs aren't even being enqueued. Locally it all works fine, but in production on the server it does not.
I'd like to know if there's a way I can at least check what jobs are there, since I can't do it by accessing the console.
Another gem that works with delayed_job is delayed-web, which you can find here: https://github.com/tatey/delayed-web
You add it to your Gemfile:
gem 'delayed-web'
Then run:
rails generate delayed:web:install
This will generate an initializer file delayed_web.rb under config/initializers with the following:
Rails.application.config.to_prepare do
  Delayed::Web::Job.backend = 'active_record'
end
and in config/application.rb this will be added for you as well by the generator:
# config/application.rb
config.assets.enabled = true
config.assets.precompile << 'delayed/web/application.css'
In routes.rb it may add a route as well, but if you are using Devise then you may want to restrict access to admin user(s) only, as follows:
authenticated :user, -> user { user.admin? } do
  mount Delayed::Web::Engine, at: '/jobs'
end
OK, so I checked my jobs through the database itself: I entered psql as the postgres user and ran some queries against the delayed_jobs table. You can also try running RAILS_ENV=production bin/delayed_job run (for Rails 4; on Rails 3 use script/ instead of bin/), which will show you what the workers are doing while they execute jobs.
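If you can get a Rails console or runner on the server, the ActiveRecord backend also makes the queue easy to inspect without raw SQL. A small sketch using delayed_job's standard columns:
# e.g. RAILS_ENV=production rails runner 'puts Delayed::Job.count'
Delayed::Job.count                                  # total jobs currently in the queue
Delayed::Job.where('locked_by IS NOT NULL').count   # jobs a worker has claimed
failed = Delayed::Job.where('last_error IS NOT NULL').first
puts failed.last_error if failed                    # stack trace of the first failing job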
You can also, as Swards commented above, use a gem to get a web interface for delayed_job: https://github.com/ejschmitt/delayed_job_web
If you still want to see what my problem with the email sending was, I've opened another question because it got too far away from what this one was about: What port to use sending email with SMTP (mailgun) in rails app on production server (DigitalOcean)?

Restarting Rails app after deployment using mina

I have successfully deployed my app using mina, but after I deploy the changes I made, the old version is still shown. How do I restart the Rails app using mina?
You might need to restart Passenger. For this, run the following task:
'passenger:restart'
You can write it in your deployment script like below:
task deploy: :environment do
  deploy do
    # Put things that will set up an empty directory into a fully set-up
    # instance of your project.
    to :launch do
      invoke :'passenger:restart'
    end
  end
end
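If the passenger:restart task isn't available in your mina setup, a hedged fallback is to touch Passenger's restart file yourself in the launch block (queue is the older mina DSL; newer versions use command, and the current/tmp path assumes mina's default layout):
to :launch do
  # Passenger watches tmp/restart.txt and restarts the app on the next request
  queue "mkdir -p #{deploy_to}/current/tmp"
  queue "touch #{deploy_to}/current/tmp/restart.txt"
end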

Start Thinking Sphinx on Rails server startup

I have nginx + Passenger in front of my Rails app.
Now after each server restart I need to run this in a terminal in the project folder:
rake ts:start
But how can I automate this, so that after each server restart Thinking Sphinx starts without my typing the command in a terminal?
I use Rails 3.2.8 and Ubuntu 12.04.
I can't figure out what to try; any advice would be appreciated.
What I did to solve the same problem:
In config/application.rb, add:
module Rails
  def self.rake?
    !!@rake
  end

  def self.rake=(value)
    @rake = !!value
  end
end
In Rakefile, add this line:
Rails.rake = true
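For context, in a stock Rails 3.2 Rakefile that line would sit between the application require and load_tasks, roughly like this (MyApp is a placeholder for your application's module):
# Rakefile
require File.expand_path('../config/application', __FILE__)

Rails.rake = true   # set before the app boots, so initializers can check Rails.rake?

MyApp::Application.load_tasks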
Finally, in config/initializers/start_thinking_sphinx.rb put:
unless Rails.rake?
  begin
    # Probe the Thinking Sphinx connection
    ThinkingSphinx.search "test", :populate => true
  rescue Mysql2::Error => err
    puts ">>> ThinkingSphinx is unavailable. Trying to start .."
    MyApp::Application.load_tasks
    Rake::Task['ts:start'].invoke
  end
end
(Replace MyApp above with your app's name)
Seems to work so far, but if I encounter any issues I'll post back here.
Obviously, the above doesn't take care of monitoring that the server stays up. You might want to do that separately. Or an alternative could be to manage the service with Upstart.
If you are using the excellent whenever gem to manage your crontab, you can just put
every :reboot do
  rake "ts:start"
end
in your schedule.rb and it seems to work great. I just tested on an EC2 instance running Ubuntu 14.04.
There are two options I can think of.
You could look at how Ubuntu manages start-up scripts and add one for this (perhaps in /etc/init?).
You could set up monit or another monitoring tool and have it keep Sphinx running. Monit should boot automatically when your server restarts, and so it should ensure Sphinx (and anything else it's tracking) is running.
The catch with Monit and other such tools is that when you deliberately stop Sphinx (say, to update configuration structure and corresponding index changes), it might start it up again before it's appropriate. So I think you should start with the first of these two options - I just don't know a great deal about the finer points of that approach.
I followed @pat's suggestion and wrote a script to start Thinking Sphinx whenever the server boots up. You can see it as a gist:
https://gist.github.com/declan/4b7cc4fb4926df16f54c
We're using Capistrano for deployment to Ubuntu 14.04, and you may need to modify the path and user name to match your server setup. Otherwise, all you need to do is:
1. Put this script into /etc/init.d/thinking_sphinx
2. Confirm that the script works: calling /etc/init.d/thinking_sphinx start on the command line should start Thinking Sphinx for your app, and /etc/init.d/thinking_sphinx stop should stop it
3. Tell Ubuntu to run this script automatically on startup: update-rc.d thinking_sphinx defaults
There's a good post on debian-administration.org called making scripts run at boot time that has more details.

Rails application to interface with local machine running Ubuntu

What I'm trying to do:
service-hosted Rails app (Heroku or something)
user logs into the application and wants to "DO THINGS"
"DO THINGS" entails running commands on the local machine I have here in my apartment
I've SSHed into a server before... but I think this would be best set up if the server initiates the connection.
I'm fairly sure keeping a permanent SSH connection open isn't the best idea.
I'm not 100% sure on the process... I just need information transfer between my hosted application and my local machine.
Is there a set of Ruby socket commands that could possibly work?
Any particular gem that would handle this?
Thanks ahead of time!
So far it looks like Net::SSH is the answer I'm looking for.
At the command prompt:
$ gem install net-ssh
Next we create a new controller file:
app/controllers/ssh_connections_controller.rb
and inside the ssh_connections_controller.rb file place:
require 'net/ssh'

class SshConnectionsController < ApplicationController
  def conn
    Net::SSH.start('127.0.0.1', 'wonton') do |session|
      session.open_channel do |channel|
        channel.on_close do |ch|
          puts "channel closed successfully."
          render :text => 'hits'
        end
        puts "closing channel..."
        channel.close
      end
      session.loop
    end
  end
end
... and substitute your local settings...
'wonton' would be the name of whatever user you want to SSH in as
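For actually running a command on the machine and capturing its output, Net::SSH's exec! is simpler than driving channels by hand. A sketch using the same placeholder host and user (uptime is just an example command):
require 'net/ssh'

Net::SSH.start('127.0.0.1', 'wonton') do |session|
  # runs the command, waits for it to finish, and returns its output
  output = session.exec!('uptime')
  puts output
end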
more to be updated!

Rails rake task and MySQL SSH port forwarding

I need to create a rake task to do some ActiveRecord operations via an SSH tunnel.
The rake task is run on a remote Windows machine, so I would like to keep things in Ruby. This is my latest attempt:
desc "Syncronizes the tablets DB with the Server"
task(:sync => :environment) do
require 'rubygems'
require 'net/ssh'
begin
Thread.abort_on_exception = true
tunnel_thread = Thread.new do
Thread.current[:ready] = false
hostname = 'host'
username = 'tunneluser'
Net::SSH.start(hostname, username) do|ssh|
ssh.forward.local(3333, "mysqlhost.com", 3306)
Thread.current[:ready] = true
puts "ready thread"
ssh.loop(0) { true }
end
end
until tunnel_thread[:ready] == true do
end
puts "tunnel ready"
Importer.sync
rescue StandardError => e
puts "The Database Sync Failed."
end
end
The task seems to hang at "tunnel ready" and never attempts the sync.
I have had success running one rake task to create the tunnel and then running the rake sync task in a different terminal. I want to combine these, however, so that if there is an error with the tunnel it will not attempt the sync.
This is my first time using Ruby threads and Net::SSH forwarding, so I am not sure what the issue is here.
Any ideas?
Thanks
The issue is very likely the same as here:
Cannot connect to remote db using ssh tunnel and activerecord
Don't use threads; you need to fork the importer off into another process for it to work, otherwise you will lock up in the SSH event loop.
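A sketch of what that could look like, assuming Importer.sync connects to 127.0.0.1:3333 (the forwarded port). Note that fork is not available on Windows MRI, which may matter for the asker's setup; Process.spawn would be the rough equivalent there:
Net::SSH.start(hostname, username) do |ssh|
  ssh.forward.local(3333, 'mysqlhost.com', 3306)

  pid = fork do
    Importer.sync   # the child process talks to 127.0.0.1:3333 over the parent's tunnel
  end

  # keep the SSH event loop (and with it the tunnel) alive until the child exits
  ssh.loop { Process.waitpid(pid, Process::WNOHANG).nil? }
end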
Just running the code itself as a Ruby script (with Importer.sync disabled) seems to work without any errors. This suggests to me that the issue is with Importer.sync. Would it be possible for you to paste the Importer.sync code?
Just a guess, but could the issue here be that your :sync rake task has the Rails environment as a prerequisite? Is there anything happening in your Importer class's initialization that relies on this SSH connection being available at load time in order for it to work correctly?
I wonder what would happen if, instead of having environment be a prerequisite for this task, you tried...
...
Rake::Task["environment"].execute
Importer.sync
...
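Spelled out as a full task, that suggestion looks roughly like this; with_tunnel is a hypothetical helper standing in for whatever sets up the SSH tunnel and yields once the forwarded port is listening:
desc "Synchronizes the tablets DB with the server"
task :sync do                             # note: no :environment prerequisite
  with_tunnel do                          # hypothetical tunnel-setup helper
    Rake::Task['environment'].execute     # boot Rails only once the tunnel is up
    Importer.sync
  end
end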
