Launch a script on a separate server from a Rails app - ruby-on-rails

In my Rails app, when a user clicks a button, it currently launches an in-house script in the background. For simplicity, let's just call it myScript. So in my Rails app, I basically have:
def run!
  `myScript with some arguments`
end
Now this script will run as a process on the same machine that the Rails application is running on.
We want to host all of our Ruby/Rails apps on one server, and utilize a separate server for running the scripts. Is it possible to launch that script, but on a different machine? Let me know if you need additional information.

I use ssh for these types of things.
require 'net/ssh'

Net::SSH.start('server.com', 'username', password: "asdasd") do |ssh|
  $stdout.print ssh.exec!("cdc && curl https://gist.github.com/mhenrixon/asdasd123123/raw/123123asdasd/update.rb | rails c production")
end
That's the easiest way of doing it, I think, but the Sinatra/Rails listener isn't a bad idea either.
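Applied to the original question, the same pattern could run myScript on the dedicated script server; a minimal sketch, with a placeholder hostname and credentials, assuming the script is on the remote PATH:

require 'net/ssh'

Net::SSH.start('scripts.example.com', 'deploy', password: ENV['SCRIPT_SSH_PASSWORD']) do |ssh|
  # Run the in-house script on the remote machine and capture its output.
  output = ssh.exec!('myScript with some arguments')
  Rails.logger.info(output)
end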

To flat out steal Dogbert's answer: I'd go with an HTTP solution. Create a background job (Sidekiq, Queue Classic) and have a simple job that does a GET or POST to that second server.
The HTTP solution will involve a bit of setup cost (time and learning, probably), but in the end it will be a bit more robust than the SSH solution, as you won't have to worry about IPs, users, etc., just a straight-up URL. Plus, if you are doing things with Capistrano, etc., your deployments will be super easy.
Is there a reason why these jobs couldn't be run on your web server, but with a background process?
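A rough sketch of that background-job route, assuming Sidekiq and a hypothetical HTTP endpoint on the script server that wraps myScript:

require 'sidekiq'
require 'net/http'

class RunScriptJob
  include Sidekiq::Worker

  # Ask the script server to run myScript; the URL and parameter names are placeholders.
  def perform(arguments)
    uri = URI('http://scripts.example.com/run_my_script')
    Net::HTTP.post_form(uri, 'arguments' => arguments)
  end
end

# Enqueued from the controller instead of shelling out locally:
# RunScriptJob.perform_async('with some arguments')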

Related

Run a shell script on one rails application from another rails application

I need to run a shell script (e.g. df) on a server, say Client. The call to this script should be made from another independent Rails application, say Monitor, via a REST API, with the output returned in the response to the Monitor application.
This shell command should run on all application server instances of Client. I'm researching this, but it would be helpful if anyone has done it before.
I need to get the following information from the Client servers into the Monitor application:
Disk space left on each Client server instance,
Processes running on each Client server instance,
The ability to terminate a non-responsive Client instance.
Thanks
A simple command can be executed via:
result = `df -h /`
But this does not fulfill the requirement to run on all application server instances of Client. For that, you need to call every instance independently.
Another way would be to run your checks from a cron job and have each Client call Monitor. If cron is not suitable, you can create an ActiveJob on every Client, collect the data, and call Monitor.
You should also look at Ruby libraries that provide the data you need.
For instance, sys/filesystem can provide data about your disk stats.
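For the disk-space part, here is a small sketch of what each Client instance could run from a cron job or ActiveJob, assuming the sys-filesystem gem and a made-up reporting endpoint on Monitor:

require 'sys/filesystem'
require 'net/http'
require 'socket'

# Collect free space for the root filesystem on this Client instance.
stat = Sys::Filesystem.stat('/')
free_bytes = stat.blocks_available * stat.block_size

# Push the result to the Monitor app; the URL below is a placeholder.
uri = URI('https://monitor.example.com/api/v1/disk_reports')
Net::HTTP.post_form(uri, 'host' => Socket.gethostname,
                         'free_bytes' => free_bytes.to_s)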

Reading pending work from PhantomJS

I am building a pool of PhantomJS instances, and I am trying to make it so that each instance is autonomous (it fetches the next job to be done).
My concern is choosing between these two options:
1) Right now I have a Rails app that can tell PhantomJS which URL needs to be parsed next. So, I could do an HTTP GET from PhantomJS to my Rails app, and Rails would respond with a URL that is pending to be processed (most likely Rails would get that from a queue).
2) I am thinking of building a standalone Redis server that PhantomJS would access via Webdis, so Rails would push the jobs there and the PhantomJS instances would fetch them directly.
I am trying to work out which would be the better decision in terms of performance: PhantomJS hitting the Rails server (so Rails needs to get the job from the queue and send it to PhantomJS), or just having PhantomJS access the Redis server directly.
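A minimal sketch of the Rails side of the first option, assuming the pending URLs sit in a Redis list; the controller, action, and key names are made up for illustration:

require 'redis'

class JobsController < ApplicationController
  # PhantomJS polls this endpoint; the action pops the next pending URL off a Redis list.
  def next_job
    url = redis.lpop('phantomjs:pending_urls')
    if url
      render json: { url: url }
    else
      head :no_content
    end
  end

  private

  def redis
    # A simple process-local connection; tune this for your setup.
    @redis ||= Redis.new
  end
end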
Maybe I need more info, but why isn't the performance answer obvious? PhantomJS hitting the Redis server directly means less stuff to go through.
I'd consider developing whatever is easier to maintain. What's the ballpark req/minute? What sort of company (how funded / resource-strapped are you)?
There are also more out-of-the-box solutions like IronMQ that may ease the pain.

Callback (or equivalent) when container w/ Rails app finishes booting?

I have a Docker container containing a Rails app. Running the container starts a script similar to this: https://github.com/defunkt/unicorn/blob/master/examples/init.sh, which does some busy work and then reaches out to a unicorn.rb script similar to this: https://github.com/defunkt/unicorn/blob/master/examples/unicorn.conf.rb.
I have a Clojure web app that can tell this container to run. The request to do this is non-blocking, and the user of the site will somehow be notified when the Rails app is ready to receive requests.
I can think of various hacky ways to do this, but is there an idiomatic way to have the container let me know when the Unicorn Rails app is ready to receive web requests?
I'd like to have it hit some callback url in my app but I'm open to other options. Thanks!
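One possible approach, sketched as an addition to a unicorn.conf.rb like the one linked above; it assumes preload_app is enabled (so the app is loaded before workers fork), and the callback URL is a placeholder for the Clojure app's endpoint:

require 'net/http'
require 'socket'

# after_fork runs in each worker process right after it forks from the master.
after_fork do |server, worker|
  next unless worker.nr == 0  # let only the first worker report readiness

  # Notify the external app that this container can now serve requests.
  callback = URI('http://clojure-app.example.com/rails_ready')
  Net::HTTP.post_form(callback, 'host' => Socket.gethostname)
end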
I don't get why you need to do it that way. Couldn't you just perform HTTP requests to the Rails part (e.g. http://my_page.com/status) and handle the response accordingly?

Using Pylons as a Web Backend

I am using Pylons for two things:
1) Serving API requests (returning JSON describing my SQLAlchemy models)
2) Running a script 24/7 that fetches flight information from the internet (using HTTP) and pushes it into my DB (again using my models).
I am NOT using Pylons as a front end, but as a back end.
What would be the best way for my script to make HTTP requests? Is urllib/urllib2 my best option here?
How would I run my script constantly, rather than on a request-serving basis? Is Celery or a cron job what I'm looking for here?
Thanks!
Regarding your first question: yes, urllib/urllib2 is probably the best bet. It has very solid functionality for making HTTP requests to other servers.
Regarding your second question: Use your database. It's not super-scalable, but it's easy to implement a system where you have a flag in the database that is, essentially, an on-off switch for the application. Once that exists, make a page (with whatever security precautions you think prudent) that sets the flag and starts the application on a loop that continues indefinitely as long as the flag is set. A second page clears the flag, should you need to stop the HTTP requests without killing the entire server process. Instead of 'pages,' they could also be shell scripts or short standalone scripts. The important part is that you can implement this without requiring Celery or cron (although if you're already familiar with either, have at).

Store ssh connections in rails

I have a Rails app that needs to communicate with a couple of servers through SSH. I'm using the Net::SSH library and it works great. I would, however, like to be able to cache/store the SSH connections somehow between requests (something like OpenSSH multiplexing).
I can't store them in a key-value store like Memcached or Redis, because SSH connections are not serializable.
I don't want to store them in the session because they are meant to be used by all users (and besides, I think they would need to be serializable too).
I managed to get this working with class variables and initializer constants. I know that class variables aren't shared between server processes (in production), and I'm pretty certain initializer constants aren't either. Something like:
initializer:
SSH = {}
model:
class Server
  def connection
    require 'net/ssh'
    SSH[name] ||= Net::SSH.start(ip, "root", :password => password)
  end
end
OpenSSH multiplexing would be great, but I'm not sure I could do that through the Net::SSH Ruby library (and I'd be back to storing the master connection somewhere).
Are there any other solutions? Or if not, which one is the least evil of them all?
Perhaps, rather than trying to share sockets across requests (which is bound to end up causing pain and suffering), you could delegate to a background processor of some kind? You could set up an SSH tunnel and use DRb to talk across it as if it were just a local network daemon, or use any of the large number of networked asynchronous job-handling daemons.
http://ruby-toolbox.com/categories/queueing.html
To keep the SSH connection up between requests, you'll need to spawn off a background process. The background process can open up a pipe or some other sort of interprocess communication method, the handle to which you can store in a serializable way.
Note that this is a non-trivial exercise, which is why I've only described it at high-level detail.
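A minimal sketch of that background-process idea, using DRb from the standard library; the port, broker class, and the lack of reconnection or thread-safety handling are all simplifications:

# ssh_broker.rb -- a standalone daemon that owns the SSH connections
require 'drb/drb'
require 'net/ssh'

class SSHBroker
  def initialize
    @connections = {}
  end

  # Run a command on the named host, reusing a cached connection per host.
  def exec(host, user, password, command)
    ssh = (@connections[host] ||= Net::SSH.start(host, user, password: password))
    ssh.exec!(command)
  end
end

DRb.start_service('druby://localhost:8787', SSHBroker.new)
DRb.thread.join

# In the Rails app, each request talks to the broker instead of opening SSH itself:
#   require 'drb/drb'
#   broker = DRbObject.new_with_uri('druby://localhost:8787')
#   broker.exec(server.ip, 'root', server.password, 'uptime')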
