Rails application to interface with local machine running Ubuntu - ruby-on-rails

What I'm trying to do:
service-hosted Rails app (Heroku or something similar)
user logs into the application and wants to "DO THINGS"
"DO THINGS" entails running commands on the local machine I have here in my apartment
I've SSHed into a server before, but I think this would be best set up if the server initiates the connection
I'm fairly sure running a permanent SSH connection isn't the best idea
I'm not 100% sure on the process; I just need information transfer between my hosted application and my local machine.
Is there a set of Ruby socket commands that could possibly work?
Is there any particular gem that would handle this?
Thanks ahead of time!

So far it looks like Net::SSH is the answer I'm looking for.
At the command prompt:
$ gem install net-ssh
Next we create a new controller file:
app/controllers/ssh_connections_controller.rb
and inside the ssh_connections_controller.rb file place:
require 'net/ssh'

class SshConnectionsController < ApplicationController
  def conn
    Net::SSH.start( '127.0.0.1', 'wonton' ) do |session|
      session.open_channel do |channel|
        channel.on_close do |ch|
          puts "channel closed successfully."
          render :text => 'hits'
        end
        puts "closing channel..."
        channel.close
      end
      session.loop
    end
  end
end
...and substitute your local settings: 'wonton' would be the name of whatever user you want to SSH in as.
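Once the connection works, a session can also run commands, which is the "DO THINGS" part. A minimal sketch (the host, user, and command below are hypothetical placeholders, and key-based authentication is assumed):

```ruby
# Hypothetical helper: run one command on the remote machine and return its
# output. Host, user, and command are placeholders, not values from above.
def run_remote(host, user, command)
  require 'net/ssh' # the net-ssh gem installed above with `gem install net-ssh`
  Net::SSH.start(host, user) do |session|
    # exec! blocks until the remote command finishes, then returns its output.
    session.exec!(command)
  end
end

# Usage (assumes SSH keys are already set up):
# puts run_remote('my-home-box.example.com', 'wonton', 'uptime')
```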
more to be updated!

Related

Errno::ECONNRESET: Connection reset by peer in Rails using rest-client

We have a Ruby on Rails application with a "search" feature (search for a company). From the browser, the user keys in a name and hits search; the search makes a REST API call to an outside system and returns some results.
We are using the "rest-client" gem.
This seems to work for a few hours, and then the search suddenly breaks; in my log I see:
Errno::ECONNRESET: Connection reset by peer
We tried to investigate the issue by looking into the logs, but we don't see anything useful.
To make the search work again we have to restart Passenger, and then it works immediately. This happens only in the production environment; in staging it seems to work fine.
Questions:
What could be causing this reset issue?
Why does it start working again after a Passenger restart in production?
Since we use rest-client, should we write code to manually close the connection when this exception happens?
Could a firewall issue be causing this?
Is there any code I can place in the exception handler to reopen the connection so the next call succeeds?
Code:
def call
  resp_data = RestClient.get(@request_url, @header)
rescue => error
  puts "Exception: #{error.message}"
end
Try the following (note the use of execute; RestClient::Request.new by itself only builds the request):
resp_data = RestClient::Request.execute(
  method: :get,
  url: @request_url,       # e.g. https://api.example.com/auth2/endpoint
  headers: {
    Authorization: @header # e.g. "Bearer access_token"
  }
)
rescue => error
  puts "Exception: #{error.message}"
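Regarding the last question above: one option is to wrap the call in retry logic so a transient reset is retried before failing. This is only a sketch; `with_retries` is a hypothetical helper name, not part of rest-client:

```ruby
# Retry the given block up to `attempts` times when the peer resets the
# connection, re-raising the error once the attempts are exhausted.
def with_retries(attempts: 3)
  tries = 0
  begin
    yield
  rescue Errno::ECONNRESET
    tries += 1
    retry if tries < attempts
    raise
  end
end

# Hypothetical usage with the code from the question:
# resp_data = with_retries { RestClient.get(@request_url, @header) }
```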
It's very weird; I ran into the same problem.
My script is shown below. It worked great on my local machine and on the remote server, until one day the disk filled up and the script died with: Errno::ECONNRESET: Connection reset by peer
ENV['RAILS_ENV'] = ARGV.first || ENV['RAILS_ENV'] || 'production'
require File.expand_path(File.dirname(__FILE__) + "/../config/environment")
require 'rails'
require 'rubygems'
require 'rest-client'

url = 'https://api-cn.faceplusplus.com/cardpp/v1/ocridcard'
id_card = File.expand_path(File.dirname(__FILE__) + "/id_card.jpg")
puts "== id_card: #{id_card}"

response = RestClient.post url, {
  api_key: 'Td_XijYBZUMp',
  api_secret: 'ihehEuWYwOWM',
  image_file: File.open(id_card, 'rb')
}
puts "== response: "
puts response.inspect
My environment: Ruby 2.5.0, Ubuntu Server 16, 8-core CPU, 50 GB RAM.
The problem first appeared when my hard disk hit 100% usage, with no space left.
However, even after I freed enough disk space, the problem persisted.
After I restarted the server, the problem persisted.
When I run the script under Rails, the problem persists.
However, when I run the script standalone, it works fine.
So, finally, I turned to the curl command, which works great!
The working curl command looks like:
$ curl -X POST "https://api-cn.faceplusplus.com/cardpp/v1/ocridcard" \
  -F "api_key=Td_XijYBCOYh-Rf_kCMj" \
  -F "api_secret=iheWYoQcbCPM9n2VS" \
  -F "image_file=@scripts/id_card.jpg"
I'm late on this, but in my case the problem was that I was using AWS ElastiCache for Redis, and there I had a cluster with a primary endpoint and a read-only endpoint.
This error message can show up if the application is having problems connecting to Redis!
I was using the read-only endpoint instead of the primary one, and the primary is the one used to write data to Redis.
Taking a close look at my endpoint, it was something like application-name.qwerty-ro.cache.amazonaws.com:6379, and I changed it to application-name.qwerty.cache.amazonaws.com:6379, without the -ro part, which is what made it read-only.
I lost about 6 hours trying to figure it out, so I hope this helps someone else!

How to update code at runtime in rails?

I have a simple scenario (Rails 4 using Passenger):
1) One development machine.
2) Multiple customers of the system developed on machine 1. The system runs at each customer's facility in a virtual machine identical to the development machine.
In this system, we are trying to build a feature that shows (only to the administrator) a page with a button (update code); clicking it makes the system:
Connect to git server.
Run git pull.
touch tmp/restart.txt.
We set up all certificates so that no password is requested, configured Passenger/Apache to run as the same user that owns the Rails app, and, in the console, it works using this code:
....
item = "git pull"
@result = %x[ #{item} ]
....
But when I run this inside my app, it doesn't do anything and doesn't output anything either.
One strange clue: when I change the command to one that doesn't have to access the git server (for instance, git status), it works flawlessly (remember that, in the console on the same virtual machine, the code works).
If anyone could help...
I don't want to assume too much, but it sounds like you need to implement a Continuous Integration (CI) strategy. I assume that the admin is going to push this "button" when they are informed that there is new code, correct?
Have you guys attempted to use something like Capistrano to push updates to the customer system?
EDIT:
Suggest using IO.popen:
# Get new code
IO.popen "cd #{Rails.root} && git pull" do |io|
  io.each { |line| Rails.logger.info line }
end

# Bundle if necessary
IO.popen "cd #{Rails.root} && bundle" do |io|
  io.each { |line| Rails.logger.info line }
end

# Migrate if necessary
IO.popen "cd #{Rails.root} && rake db:migrate" do |io|
  io.each { |line| Rails.logger.info line }
end

# Restart Passenger
IO.popen "cd #{Rails.root} && touch tmp/restart.txt" do |io|
  io.each { |line| Rails.logger.info line }
end
You might also want to shove this in a shell script and call that.
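If you stay in Ruby, Open3 lets you check each step's exit status so a failed git pull doesn't fall through to the later steps. A sketch (`run_step` is a hypothetical helper, and output goes to stdout here rather than Rails.logger):

```ruby
require 'open3'

# Run one shell command in the given directory, echo its output, and report
# whether it exited successfully.
def run_step(cmd, dir)
  out, status = Open3.capture2e(cmd, chdir: dir)
  out.each_line { |line| puts line }
  status.success?
end

# Hypothetical usage: stop at the first failing step.
# ['git pull', 'bundle', 'rake db:migrate', 'touch tmp/restart.txt'].each do |cmd|
#   break unless run_step(cmd, Rails.root.to_s)
# end
```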

start thinking sphinx on rails server startup

I have a chain of nginx + passenger for my rails app.
Now, after each server restart, I need to run this in a terminal in the project folder:
rake ts:start
But how can I automate it, so that after each server restart Thinking Sphinx is started without my typing anything in the terminal?
I use Rails 3.2.8 and Ubuntu 12.04.
I can't think of anything else to try; any advice would be appreciated.
What I did to solve the same problem:
In config/application.rb, add:
module Rails
  def self.rake?
    !!@rake
  end

  def self.rake=(value)
    @rake = !!value
  end
end
In Rakefile, add this line:
Rails.rake = true
Finally, in config/initializers/start_thinking_sphinx.rb put:
unless Rails.rake?
begin
# Prope ts connection
ThinkingSphinx.search "test", :populate => true
rescue Mysql2::Error => err
puts ">>> ThinkingSphinx is unavailable. Trying to start .."
MyApp::Application.load_tasks
Rake::Task['ts:start'].invoke
end
end
(Replace MyApp above with your app's name)
Seems to work so far, but if I encounter any issues I'll post back here.
Obviously, the above doesn't take care of monitoring that the server stays up. You might want to do that separately. Or an alternative could be to manage the service with Upstart.
If you are using the excellent whenever gem to manage your crontab, you can just put
every :reboot do
  rake "ts:start"
end
in your schedule.rb and it seems to work great. I just tested on an EC2 instance running Ubuntu 14.04.
There are two options I can think of.
You could look at how Ubuntu manages start-up scripts and add one for this (perhaps in /etc/init?).
You could set up monit or another monitoring tool and have it keep Sphinx running. Monit should boot automatically when your server restarts, and so it should ensure Sphinx (and anything else it's tracking) is running.
The catch with Monit and other such tools is that when you deliberately stop Sphinx (say, to update configuration structure and corresponding index changes), it might start it up again before it's appropriate. So I think you should start with the first of these two options - I just don't know a great deal about the finer points of that approach.
I followed @pat's suggestion and wrote a script to start ThinkingSphinx whenever the server boots up. You can see it as a gist:
https://gist.github.com/declan/4b7cc4fb4926df16f54c
We're using Capistrano for deployment to Ubuntu 14.04, and you may need to modify the path and user name to match your server setup. Otherwise, all you need to do is
Put this script into /etc/init.d/thinking_sphinx
Confirm that the script works: calling /etc/init.d/thinking_sphinx start on the command line should start ThinkingSphinx for your app, and /etc/init.d/thinking_sphinx stop should stop it
Tell Ubuntu to run this script automatically on startup: update-rc.d thinking_sphinx defaults
There's a good post on debian-administration.org called making scripts run at boot time that has more details.

Using Postgresql with Amazon Opsworks - Getting IP address in database.yml

I'm trying to get a basic rails app working with Postgres using Amazon Opsworks. Opsworks lacks built-in support for Postgres at the moment, but I'm using some cookbooks that I've found which seem to be well written. I've forked them all to my custom cookbooks at: https://github.com/tibbon/custom-opsworks-cookbooks
Anyway, where I'm stuck at the moment is getting the ip address of the master postgres database into the database.yml file. It seems that there should be multiple back-ends specified, kinda like how my haproxy server sees all the rails servers as 'backends'.
Has anyone gotten this working?
I had to add some custom JSON to my Rails layer.
Looked like this:
{
  "deploy": {
    "my-app-name": {
      "database": {
        "adapter": "mysql2",
        "host": "xxx.xx.xxx.xx"
      }
    }
  }
}
I believe you have to define a custom recipe that updates the database.yml and restarts the app server.
In this guide the same thing is done using a redis server as an example:
node[:deploy].each do |application, deploy|
  if deploy[:application_type] != 'rails'
    Chef::Log.debug("Skipping redis::configure for application #{application} as it is not a Rails app")
    next
  end

  execute "restart Rails app #{application}" do
    cwd deploy[:current_path]
    command "touch tmp/restart.txt"
    action :nothing
    only_if do
      File.exists?(deploy[:current_path])
    end
  end

  redis_server = node[:opsworks][:layers][:redis][:instances].keys.first rescue nil

  template "#{deploy[:deploy_to]}/current/config/redis.yml" do
    source "redis.yml.erb"
    mode "0660"
    group deploy[:group]
    owner deploy[:user]
    variables(:host => (node[:opsworks][:layers][:redis][:instances][redis_server][:private_dns_name] rescue nil))
    notifies :run, resources(:execute => "restart Rails app #{application}")
    only_if do
      File.directory?("#{deploy[:deploy_to]}/current")
    end
  end
end
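For the Postgres case in the question, the analogous template might look like this. This is only a sketch: the `@host` variable mirrors the `:host` passed via `variables(...)` in the recipe above, and the file name, database name, and username are assumptions:

```erb
# database.yml.erb (hypothetical template name), rendered to config/database.yml
production:
  adapter: postgresql
  host: <%= @host %>
  database: my_app_production
  username: deploy
  pool: 5
```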
I haven't tested this for myself yet but I believe I will soon, I'll try to update this answer as soon as I do.

rails Rake and mysql ssh port forwarding

I need to create a rake task that performs some ActiveRecord operations through an SSH tunnel.
The rake task runs on a remote Windows machine, so I would like to keep everything in Ruby. This is my latest attempt:
desc "Synchronizes the tablets DB with the server"
task(:sync => :environment) do
  require 'rubygems'
  require 'net/ssh'

  begin
    Thread.abort_on_exception = true
    tunnel_thread = Thread.new do
      Thread.current[:ready] = false
      hostname = 'host'
      username = 'tunneluser'
      Net::SSH.start(hostname, username) do |ssh|
        ssh.forward.local(3333, "mysqlhost.com", 3306)
        Thread.current[:ready] = true
        puts "ready thread"
        ssh.loop(0) { true }
      end
    end

    until tunnel_thread[:ready] == true do
    end

    puts "tunnel ready"
    Importer.sync
  rescue StandardError => e
    puts "The Database Sync Failed."
  end
end
The task seems to hang at "tunnel ready" and never attempts the sync.
I have had success running a rake task to create the tunnel first and then running the rake sync task in a different terminal. I want to combine these, however, so that if there is an error with the tunnel it will not attempt the sync.
This is my first time using ruby Threads and Net::SSH forwarding so I am not sure what is the issue here.
Any Ideas!?
Thanks
The issue is very likely the same as here:
Cannot connect to remote db using ssh tunnel and activerecord
Don't use threads; you need to fork the importer off in another process for it to work, otherwise you will lock up in the SSH event loop.
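A sketch of the fork-based pattern in isolation (no SSH needed to see the mechanics): the parent keeps pumping an event loop while the child does the blocking work. `run_with_event_loop` is a hypothetical name:

```ruby
# Fork the given block into a child process and poll for its exit without
# blocking, the way ssh.loop would be kept alive in the real task.
def run_with_event_loop(&work)
  pid = fork(&work)
  loops = 0
  # Stand-in for the ssh.loop condition: loop while the child is still running.
  while Process.waitpid(pid, Process::WNOHANG).nil?
    loops += 1
    sleep 0.05
  end
  loops
end

# In the rake task the same shape would sit inside Net::SSH.start's block:
#   pid = fork { Importer.sync }  # child connects through 127.0.0.1:3333
#   ssh.loop(0.1) { Process.waitpid(pid, Process::WNOHANG).nil? }
```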
Just running the code itself as a Ruby script (with Importer.sync disabled) seems to work without any errors. This suggests the issue is with Importer.sync. Would it be possible for you to paste the Importer.sync code?
Just a guess, but could the issue here be that your :sync rake task has the rails environment as a prerequisite? Is there anything happening in your Importer class initialization that would rely on this SSH connection being available at load time in order for it to work correctly?
I wonder what would happen if instead of having environment be a prereq for this task, you tried...
...
Rake::Task["environment"].execute
Importer.sync
...
