Callback (or equivalent) when container w/ Rails app finishes booting? - ruby-on-rails

I have a docker container containing a rails app. Running the container starts a script similar to this: https://github.com/defunkt/unicorn/blob/master/examples/init.sh, which does some busy work and then reaches out to a unicorn.rb script similar to this: https://github.com/defunkt/unicorn/blob/master/examples/unicorn.conf.rb.
I have a Clojure web app that can tell this container to run. The request to do this is non-blocking, and the user of the site will somehow be notified when the Rails app is ready to receive requests.
I can think of various hacky ways to do this, but is there an idiomatic way to have the container let me know when the Unicorn Rails app is ready to receive web requests?
I'd like to have it hit some callback URL in my app, but I'm open to other options. Thanks!
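To make that concrete, something along these lines in unicorn.conf.rb is roughly what I have in mind; the callback URL and the worker.nr guard are just a sketch on my part, not something I have working:

# unicorn.conf.rb (sketch only -- the callback URL is made up)
require 'net/http'
require 'socket'

after_fork do |server, worker|
  # after_fork fires once per worker, so only ping the callback from the first worker
  if worker.nr == 0
    Net::HTTP.post_form(URI('http://my-clojure-app.example.com/rails_ready'),
                        'host' => Socket.gethostname)
  end
end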

I don't get why you need to do it that way. Couldn't you just perform HTTP requests to the Rails part (e.g. http://my_page.com/status) and handle the response accordingly?
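If you went that route, the Rails side would only need a trivial endpoint. A minimal sketch (the route and controller names are just illustrative):

# config/routes.rb
get '/status', to: 'status#show'

# app/controllers/status_controller.rb
class StatusController < ApplicationController
  def show
    head :ok  # returns 200 once Unicorn is up and serving requests
  end
end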

Related

Can a server request be routed to a daemon in Rails 5?

Yesterday I was reading about daemons and considering them for use with a Rails app. I want a Ruby server (the daemon) to handle a specific request when it receives one, so it continuously waits for requests in the background (I am not sure whether this is a proper use case for a daemon, so correct me if I am wrong).
Is there a way to use routes.rb in Rails 5 to route a request to a daemon?
P.S. Please don't suggest that I should use a standard controller action to handle requests, because there is a requirement I need to fulfil that prevents me from using one. Bottom line, I just want that specific request to be handled by a daemon instead of by the main Rails app.

Run a shell script on one rails application from another rails application

I need to run a shell script (e.g. df) on a server, say Client. The call to this script should be made from another, independent Rails application, say Monitor, via a REST API, and the output should be returned in the response to the Monitor application.
This shell command should run on all application server instances of Client. Though I'm researching it myself, it would be quite helpful if anyone has done this before.
I need to get the following information from the Client servers to the Monitor application:
Disk space left on each Client server instance,
Processes running on each Client server instance,
The ability to terminate a non-responsive Client instance.
Thanks
A simple command can be executed via:
result = `df -h /`
But it does not fulfill the requirement to run on all application server instances of Client. For this, you need to call every instance independently.
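Purely as an illustration (the controller name, route, and commands here are made up for the example), each Client instance could expose a small endpoint that Monitor polls:

# config/routes.rb: get '/system_status', to: 'system_status#show'

# app/controllers/system_status_controller.rb
require 'socket'

class SystemStatusController < ApplicationController
  def show
    render json: {
      host:      Socket.gethostname,
      disk:      `df -h /`,                    # raw df output for this instance
      processes: `ps aux | wc -l`.strip.to_i   # rough process count (includes the header line)
    }
  end
end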
Another way would be to run your checks from a cron job and let the Client call Monitor. If cron is not suited, you can create an ActiveJob on every Client, collect the data, and call Monitor.
You should also look at Ruby libraries that provide the data you need.
For instance, sys/filesystem can provide data about your disk stats.
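A rough sketch of what that could look like with the sys-filesystem gem (this assumes its stat interface; double-check the exact method names against the gem's docs):

require 'sys/filesystem'

stat = Sys::Filesystem.stat('/')
free_mb = stat.block_size * stat.blocks_available / 1024 / 1024
puts "#{free_mb} MB available on /"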

How to prevent Rails controller from hanging when making a web service call

I have a Rails controller that I'm calling when a user loads a specific page. The controller makes a call to a 3rd party web service. However, when the web service is down, my Rails controller just hangs. I'm not able to navigate to another page, log out, or refresh the page...all of these tasks wait for the web service call to complete before being executed. In the event that the web service call never completes, I have to restart my Rails app in order for it to be functional again.
Is there a standard way of preventing this from happening? I am using the Faraday gem to make web service calls. I suppose I could set a timeout value when making my web service call. However, ideally I would like a user action such as navigating to another page to halt this web service call immediately. Is this possible?
I believe this is happening because you are probably using a Rack web server that can only handle one request at a time per worker. Unicorn works that way: each worker processes a single request to completion, so a slow outbound call ties up the worker and everything else waits behind it. You should think about fixing this first with a timeout. So if you are using Faraday, you can do something like req.options.timeout = 5 to set one.
Then I recommend using Puma. If that's not an option, you should adjust your server settings to allow more than one request to be handled at a time. For Unicorn, that setting is worker_processes.
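A minimal sketch of the timeout idea (the URL is a placeholder, and the exact error class names depend on your Faraday version):

require 'faraday'

# Cap how long we wait for the third-party service
conn = Faraday.new(url: 'https://thirdparty.example.com',
                   request: { open_timeout: 2, timeout: 5 })

begin
  response = conn.get('/some/endpoint')
rescue Faraday::TimeoutError, Faraday::ConnectionFailed
  response = nil  # fall back gracefully instead of hanging the whole Rails request
end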

How to serve socket.io client files in rails

I am writing a Rails app, and I would like to use Node.js and socket.io to integrate a chat feature into my app. I plan on having my Rails app deployed on one server, and my chat deployed on a much smaller server (to save money). My reasoning for this is that it is OK if a chat message takes 30s to send, but it is not OK for a page to take 30s to load.
Anyway, in order for this to work, I need Rails to serve the socket.io client files. If my small Node server serves the client files, then the small server will bottleneck the larger one. I have a basic chat prototype up and running, but it only works with Node serving the client files. What do I have to do to have Rails serve the client files?
Thanks in advance.
So here is the solution I decided upon. Instead of figuring out which client files I need to serve, I decided to let the Node server handle the client JavaScript. In order to ensure that the Node server does not bottleneck the Rails server, I lazy-load the socket.io-client file. The relevant CoffeeScript is:
$ ->
  $.getScript('http://localhost:8080/socket.io/socket.io.js')
    .done (script, textStatus) ->
      socket = io.connect('http://localhost:8080')
      setupSocket(socket)
Where http://localhost:8080 is your Node host/port. setupSocket is a function I wrote that handles setting all the event handlers.
Most probably you are running into the "Same Origin Policy" restriction. (Check your console log) Your main page is downloaded from the RoR host, so your scripts can only initiate a connection to that host.
In other words, this may not be possible.

Using Pylons as a Web Backend

I am using Pylons for two things:
1) Serving API requests (returning JSON describing my SQLAlchemy models)
2) Running a script 24/7 that fetches flight information from the internet (over HTTP) and pushes it into my DB (again using my models).
I am NOT using Pylons as a front end, but as a back end.
What would be the best way for my script to make HTTP requests? Is urllib / urllib2 my best option here?
How would I run my script constantly, rather than on a request-serving basis? Are Celery / cron jobs what I am looking for here?
Thanks!
Regarding your first question: yes, urllib/urllib2 is probably the best bet. It has very solid functionality for making HTTP requests to other servers.
Regarding your second question: Use your database. It's not super-scalable, but it's easy to implement a system where you have a flag in the database that is, essentially, an on-off switch for the application. Once that exists, make a page (with whatever security precautions you think prudent) that sets the flag and starts the application on a loop that continues indefinitely as long as the flag is set. A second page clears the flag, should you need to stop the HTTP requests without killing the entire server process. Instead of 'pages,' they could also be shell scripts or short standalone scripts. The important part is that you can implement this without requiring Celery or cron (although if you're already familiar with either, have at).
