I need to run a shell command (e.g. df) on a server, say Client. The call should be made from another, independent Rails application, say Monitor, via a REST API, and the output should be returned in the response to the Monitor application.
This command should run on all application server instances of Client. I'm still researching this, so it would be very helpful if anyone has done it before.
I need to get the following information from the Client servers into the Monitor application:
Disk space left on each Client server instance,
Processes running on each Client server instance,
The ability to terminate a non-responsive Client instance.
Thanks
A simple command can be executed via:
result = `df -h /`
But that does not fulfill the requirement to run on all application server instances of Client; for that you need to call every instance independently.
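For the pull model, here is a rough sketch of what a small endpoint on each Client instance might look like (the controller, actions and routes are my own assumptions, not something from the question):

class SystemStatusController < ApplicationController
  # GET /system_status/disk -- returns the raw `df -h` output as plain text
  def disk
    render plain: `df -h`
  end

  # GET /system_status/processes -- returns the current process list
  def processes
    render plain: `ps aux`
  end
end

Monitor would then call these URLs on every Client instance (by host name or IP) and aggregate the results.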
Another option is to run your checks from a cron job and have each Client call Monitor. If cron is not suitable, you can create an ActiveJob on every Client, collect the data there, and send it to Monitor.
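A rough sketch of that push variant (the job name and the Monitor URL are assumptions on my part):

require 'net/http'
require 'json'

class ReportStatusJob < ApplicationJob
  queue_as :default

  def perform
    # Collect the data on this Client instance
    payload = { disk: `df -h`, processes: `ps aux` }
    # POST it to the Monitor application
    Net::HTTP.post(URI('http://monitor.example.com/reports'),
                   payload.to_json,
                   'Content-Type' => 'application/json')
  end
end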
You should also look at Ruby libraries that provide the data you need.
For instance, sys/filesystem can give you statistics about your disks.
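For example, a minimal sketch using the sys-filesystem gem (the mount point is just an example):

require 'sys/filesystem'  # gem 'sys-filesystem'

stat = Sys::Filesystem.stat('/')
# Megabytes still available on the root mount point
mb_available = stat.block_size * stat.blocks_available / 1024 / 1024
puts "#{mb_available} MB available on /"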
Related
My goal is to send an email out every 5 min from my application even if there isn't a browser open to the application.
I'm using FluentScheduler to manage the tasks, which works up until the server decides to kill the application due to inactivity.
My big constraints are:
I can't touch the server. It is how it is and I have to work around it.
I can't rely on a client refreshing a browser or anything else along the lines of using client side scripts.
I can't use any scheduler that uses a database.
What I have been focusing on is trying to create an artificial postback.
Note: the server is load balanced, so a solution could take advantage of that.
Is there any way that I can keep my application from getting killed by the server?
You could use a monitoring service like https://www.pingdom.com/ to ping the server at regular intervals. Just make sure it hits an endpoint that invokes .NET code and not a static resource.
I am building a pool of PhantomJS instances, and I am trying to make each instance autonomous (it fetches the next job to be done).
My concern is to choose between these two:
Right now I have a Rails app that can tell PhantomJS which URL needs to be parsed next. So I could do an HTTP GET call from PhantomJS to my Rails app, and Rails would respond with a URL that is pending (most likely Rails would take it from a queue).
I am thinking of building a standalone Redis server that PhantomJS would access via Webdis, so Rails would push the jobs there and PhantomJS instances would fetch them from it directly.
I am trying to work out what the correct decision is in terms of performance: PhantomJS hitting the Rails server (so Rails needs to get the job from the queue and send it to PhantomJS), or having PhantomJS access a Redis server directly.
Maybe I need more info, but why isn't the performance answer obvious? PhantomJS hitting the Redis server directly means fewer things to go through.
I'd consider developing whatever is easier to maintain. What's the ballpark requests per minute? What sort of company is it (how well funded / resource-strapped are you)?
There are also more out-of-the-box solutions like IronMQ that may ease the pain.
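For what it's worth, if you go the Redis route the queue itself can stay very simple. A minimal sketch using the redis gem (the key name and URL are made up; via Webdis the pop would be an HTTP call, it's shown in Ruby here just for illustration):

require 'redis'  # gem 'redis'

redis = Redis.new

# Rails side: push pending URLs onto a list acting as the job queue
redis.lpush('phantomjs:jobs', 'http://example.com/page-to-parse')

# Worker side: pop the next job, blocking until one is available
_queue, url = redis.brpop('phantomjs:jobs')
puts "Next URL to parse: #{url}"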
I have a docker container containing a rails app. Running the container starts a script similar to this: https://github.com/defunkt/unicorn/blob/master/examples/init.sh, which does some busy work and then reaches out to a unicorn.rb script similar to this: https://github.com/defunkt/unicorn/blob/master/examples/unicorn.conf.rb.
I have a Clojure web app that can tell this container to run. The request to do this is non-blocking, and the user of the site will somehow be notified when the Rails app is ready to receive requests.
I can think of various hacky ways to do this, but is there an idiomatic way to have the container let me know when the unicorn Rails app is ready to receive web requests?
I'd like to have it hit some callback url in my app but I'm open to other options. Thanks!
I don't get why you need to do it that way. Couldn't you just perform HTTP requests to the Rails part (e.g. http://my_page.com/status) and handle the response accordingly?
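If you go that way, the check itself is trivial. A sketch of a readiness poll (the /status route is an assumption; it's shown in Ruby for illustration, the Clojure side would do the equivalent):

require 'net/http'

uri = URI('http://my_page.com/status')
loop do
  begin
    # Stop polling once the Rails app answers with a 2xx response
    break if Net::HTTP.get_response(uri).is_a?(Net::HTTPSuccess)
  rescue Errno::ECONNREFUSED, SocketError
    # Container is up but unicorn isn't accepting connections yet
  end
  sleep 1
end
puts 'Rails app is ready to receive requests'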
In my Rails app, when a user clicks a button it will currently launch an in-house created script in the background. For simplicity, let's just call it myScript. So in my Rails app, I basically have:
def run!
  `myScript with some arguments`
end
Now this script will run as a process on the same machine that the Rails application is running on.
We want to host all of our Ruby/Rails apps on one server, and utilize a separate server for running the scripts. Is it possible to launch that script, but on a different machine? Let me know if you need additional information.
I use ssh for these types of things.
require 'net/ssh'
Net::SSH.start('server.com', 'username', password: "asdasd") do |ssh|
  $stdout.print ssh.exec!("cdc && curl https://gist.github.com/mhenrixon/asdasd123123/raw/123123asdasd/update.rb | rails c production")
end
That's the easiest way of doing it, I think, but the Sinatra/Rails listener isn't a bad idea either.
To flat out steal Dogbert's answer: I'd go with an HTTP solution. Create a background job (Sidekiq, Queue Classic) and have a simple job that does a GET or POST on that second server.
The HTTP solution will involve a bit of setup cost (time and learning, probably), but in the end it will be a bit more robust than the SSH solution, as you won't have to worry about IPs or users, etc., just a straight-up URL. Plus, if you are doing things with Capistrano, your deployments will be super easy.
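A rough sketch of what such a job could look like with Sidekiq (the worker name and the URL on the script server are assumptions):

require 'sidekiq'
require 'net/http'

class RunRemoteScriptWorker
  include Sidekiq::Worker

  def perform(args)
    # Hit an endpoint on the script server that kicks off myScript there
    uri = URI('http://scripts.example.internal/run_my_script')
    Net::HTTP.post_form(uri, 'args' => args)
  end
end

# Enqueued from the Rails action:
# RunRemoteScriptWorker.perform_async('with some arguments')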
Is there a reason why these jobs couldn't be run on your web server, but with a background process?
I am writing a rails app, and I would like to use node.js and socket.io to integrate a chat feature into my app. I plan on having my rails app deployed on one server, and my chat deployed on a much smaller server (to save money). My reasoning for this is, it is OK if a chat message takes 30s to send, but it is not OK for a page to take 30s to load.
Anyway, in order for this to work, I need Rails to serve the socket.io client files. If my small Node server serves the client files, then the small server will bottleneck the larger one. I have a basic chat prototype up and running, but it only works with Node serving the client files. What do I have to do in order to have Rails serve the client files?
Thanks in advance.
So here is the solution I decided on. Instead of figuring out which client files I need to serve, I decided to let the Node server handle the client JavaScript. To ensure that the Node server does not bottleneck the Rails server, I lazy-load the socket.io-client file. The relevant CoffeeScript is:
$ ->
  $.getScript('http://localhost:8080/socket.io/socket.io.js')
    .done (script, textStatus) ->
      socket = io.connect('http://localhost:8080')
      setupSocket(socket)
Where http://localhost:8080 is your Node host/port. setupSocket is a function I wrote that handles setting all the event handlers.
Most probably you are running into the "Same Origin Policy" restriction (check your console log). Your main page is downloaded from the RoR host, so your scripts can only initiate a connection to that host.
In other words, this may not be possible.