Is there a way to disconnect console messaging from Hyperstack message queue? - ruby-on-rails

In Hyperstack every state change enters a message queue over a websocket mechanism to inform every application client of model/app state changes. So if you update a model from your browser session, everyone else connected at the time sees it in their session (given the necessary permissions).
This even works from console sessions: change a model in the Rails console and the change automatically propagates to all connected web clients.
For this to work, the web application part has to be operational (i.e. the rails server must be up and running).
The problem is that there are two situations where you might not want console updates to propagate to the client:
When the rails server is not operational for any reason and you want to interact with the application through its console (until the rails server is up again).
When you want to perform batch updates through the console or rake tasks and you don't want the overhead of keeping clients informed.
Is there a way to quickly turn off messaging from the console, or some kind of toggle method for that purpose?

If the rails server is not up, it will not try to send messages (however, see the note at the end).
But the case of a rake task that you want to run while the server IS up is interesting. I don't think there is any published way to turn off the "remote process -> server" push, but this patch will accomplish the same:
module Hyperstack
  def self.send_to_server(*args)
    # drop the message on the floor
  end
end
Just stick that in the rake task.
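For example, a batch-update task might apply the patch inline before doing its work. A minimal sketch, with hypothetical task and model names:

# lib/tasks/batch_update.rake  (task and model names are hypothetical)
namespace :batch do
  desc "Bulk-update records without pushing changes to connected clients"
  task update: :environment do
    # The patch from above: silently drop outgoing client notifications
    module Hyperstack
      def self.send_to_server(*args)
        # drop the message on the floor
      end
    end

    # ...your actual batch work goes here, e.g.:
    Widget.find_each { |w| w.update(recalculated: true) }
  end
end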
Regarding the server "not being up" the one case that does not work is if the server is in fact "up" but simply never responds. See https://github.com/hyperstack-org/hyperstack/issues/144 for details. If you are trying to debug a server problem then the same patch above will help until that issue is fixed.

Related

Running code repeatedly in Ruby on Rails with Heroku for an indefinite period

I am attempting to build a web application with Ruby on Rails that users sign up for to get an email alert when a certain event happens.
As such, I need to be able to make an API call and then, based on the JSON response, send the alert, but I need a way to have this API call happen repeatedly and automatically for an indefinite amount of time. I am also using Heroku at this time, if that needs to be taken into account.
Thanks for your help.
This sounds like a cron job in plain old Linux. Heroku calls this addon Scheduler. You have to define the task within lib/tasks/scheduler.rake.
For further information, read the Heroku docs for Scheduler.
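A minimal sketch of such a task; the API URL, model scope, and mailer are assumptions for illustration, not from the question:

# lib/tasks/scheduler.rake
require "net/http"
require "json"

desc "Poll the external API and email subscribed users when the event occurs"
task :check_event => :environment do
  # Hypothetical endpoint; replace with the real API call
  data = JSON.parse(Net::HTTP.get(URI("https://api.example.com/status")))

  if data["event_happened"]
    User.where(:subscribed => true).find_each do |user|
      AlertMailer.event_alert(user).deliver   # hypothetical mailer
    end
  end
end

Heroku Scheduler is then configured to run rake check_event at the chosen interval.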

How to process a request with Rails while not locking browser

I've built a CRUD app that allows clients to scrape links. When the client clicks a button, rails goes to the controller and runs the script (I can see all the activity in the terminal), but there is no feedback on the frontend. Also, the user can't visit any other pages on the website while the script is running.
The script can take a long time, so I want the client to be able to click a button, be redirected to another page, and have the process start. The user can leave the page if he wants.
I would also like some sort of way to send an email to the user after it's completed.
My backend would be able to run many tasks at once, right?
You need a background worker. The idea is to initiate the work when the user clicks a button, then let a background worker perform the hard work while the user continues browsing.
At the end the user is notified (email or other means) and the job result is accessible.
Of course, several workers can work at the same time.
Have a look at sidekiq, resque or delayed_job.
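A rough sketch of the sidekiq variant; the worker, scraper, and mailer names are made up for illustration:

# app/workers/scrape_worker.rb
class ScrapeWorker
  include Sidekiq::Worker

  def perform(user_id, url)
    user  = User.find(user_id)
    links = LinkScraper.capture(url)            # your existing scraping code
    user.scrape_results.create!(links: links)   # persist for later viewing
    ScrapeMailer.completed(user).deliver        # email the user when done
  end
end

The controller action then just enqueues the job and redirects immediately: ScrapeWorker.perform_async(current_user.id, params[:url]).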
You can try EventMachine, if you are using a supporting server (Thin, for example):
EM.next_tick { p "hello, it's next tick!" }
will be printed asynchronously.

Sending data from an analytics engine to a Rails server

I have an analytics engine which periodically packages a bunch of stats in JSON format. I want to send these packages to a Rails server. Upon a package arriving, the Rails server should examine it, generate a model instance out of it (for historical purposes), and then display the contents to the user. I've thought of two approaches.
1) Have a little app residing on the same host as the Rails server listening for these packages (using ZeroMQ). Upon receiving a package, the app would invoke a Rails action through curl, passing on the package as a parameter. My concern with this approach is that my Rails server checks that only signed-in users can access actions which affect models. By creating an action accessible to this listening app (and therefore other entities), am I exposing myself to a major security flaw?
2) The second approach is to simply have the listening app dump the package into a special database table. The Rails server will then periodically check this table for new packages. Upon detecting one or more, it will process them and remove them from the table.
This is the first time I'm doing something like this, so if you have techniques or experiences you can share for better solutions, I'd love to learn.
Thank you.
You can restrict access to a certain call by constraining the IP address allowed for the request in routes.rb:
post "/analytics" => "analytics#create", :constraints => { :ip => /127\.0\.0\.1/ }
If you want the users to see updates, you can use polling to refresh the page every minute or so.
1) Yes, you are exposing yourself to a major security flaw unless:
Your ZeroMQ app provides the data needed to do authentication and authorization on the rails side
Your rails app is configured to listen only on the 127.0.0.1 interface and is thus not accessible from the outside
Like Benjamin suggests, you restrict specific routes to certain IPs
2) This approach looks a lot like what delayed_job does. You might want to take a look there: https://github.com/collectiveidea/delayed_job and use a rake task to add a new job.
In short, your listening app will call a rake task that adds a custom delayed_job when receiving a packet. Then let delayed_job handle the load. You benefit from delayed_job's goodness (different queues, scaling, ...). The hard part is getting the result.
One idea would be to associate a unique ID with each job, and have the delayed_job task write the result to a data store which associates the job ID with the result. This data store can be a simple relational table
+----+--------+
| ID | Result |
+----+--------+
or a memcached/redis/whatever instance. You just need to poll that data store looking for the result associated with the job ID, and delete everything when you are done displaying it to the user.
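A sketch of that idea using delayed_job's custom-job style; the job, processor, and table names are hypothetical:

# A custom delayed_job: process the package, then store the result by job ID
class ProcessPackage < Struct.new(:job_id, :payload)
  def perform
    result = AnalyticsProcessor.run(payload)   # hypothetical processing step
    JobResult.create!(:job_id => job_id, :result => result.to_json)
  end
end

# Enqueued from the rake task the listening app invokes:
#   Delayed::Job.enqueue(ProcessPackage.new(SecureRandom.uuid, package_json))

The web side then polls JobResult for the row matching the job ID, displays it, and deletes it.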
3) Why don't you directly POST the data to the rails server ?
Following Benjamin's lead, I implemented a filter for this particular action.
def verify_ip
  @ips = ['127.0.0.1']
  redirect_to root_url unless @ips.include?(request.remote_ip)
end
The listening app on the localhost now invokes the action, passing the JSON package received from the analytics engine as a param. Thank you.
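For completeness, the filter has to be registered in the controller. In Rails of that era it would look something like this (the controller name is assumed from the route above):

class AnalyticsController < ApplicationController
  before_filter :verify_ip, :only => [:create]

  def create
    # build the model instance from the posted JSON package and respond
  end
end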

Suggestions for how to write a service in Rails 3

I am building an application which will send status requests to users (via email & sms) on a regular basis. I want to execute the service each hour which will:
Query the database for all requests that need to be sent (based on some logic)
Send the requests through Amazon's Simple Email Service (this is already working)
Write a record of the status request notification back to the data store
I am considering wrapping up this series of operations into a single controller with an endpoint that can be called remotely to kick off the process within the rails app.
Longer term, I will break this process out into an app that can be run independently of my rails app, but for now I'm just trying to keep it simple.
My first inclination is to build the following:
Controller with the following elements:
A method which will orchestrate the steps outlined above (and can be called externally)
A call to the status_request model which will bring back a collection of requests needing to be sent
A loop to iterate through the pending requests, which will:
Make a call to my AWS Simple Email Service module to actually send the email, and
Make a call to the status_request model to log the request back to the database
Model:
A method on my status_request model which will bring back a collection of requests that need to be sent
A method in my status_request model which will log that a notification was sent
Since this will behave as a service that gets called periodically from an outside scheduler I don't think I'll need a view for this operation. (Will, of course, need views to show users and admins what requests have been sent, but that's later...).
As someone new to Rails, I'm asking for review of this approach and any suggestions you may have.
Thanks!
Instead of a controller, which as Jeff pointed out exposes a security risk, you may just want to expose a rake task and use cron to invoke it on an hourly basis.
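As a sketch, assuming the model methods described in the question exist (the scope, mailer, and logging names are illustrative):

# lib/tasks/status_requests.rake
namespace :status_requests do
  desc "Send pending status requests and log each notification"
  task :send_pending => :environment do
    StatusRequest.needing_delivery.each do |request|  # hypothetical scope
      SesMailer.status_request(request).deliver       # hypothetical SES-backed mailer
      request.log_notification                        # hypothetical logging method
    end
  end
end

# crontab entry, hourly:
# 0 * * * * cd /path/to/app && bundle exec rake status_requests:send_pending RAILS_ENV=production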
If you are still interested in building a controller, look at devise gem and its single access token, token_authenticatable, for securing the methods you are exposing.
You may also want to look at delayed_job or resque to offload the call to status_request and the loop through Amazon's Simple Email Service to a background worker process.
You may want a separate controller and view for the log file so you can review progress on demand.
And if you want to get real fancy use Amazon SNS to send you alerts when the service reaches some unacceptable level of failures, backlog, etc.
Since you are trying to invoke this from an outside process, your approach should work. You could also have a worker process that processes tasks as they arrive.
You will need routes to expose your service, and you will also want to make security decisions: how will the service that invokes your application authenticate itself so all others can't hit the endpoint at will?
Another consideration should be how many emails you are sending. If there are enough, keep in mind that this sort of loop is going to be extremely top-heavy and may affect users on the current system if it's a web application.
In the end, there are many ways to do this. I would focus on the performance/usage you expect as well as security. There's never one perfect way to solve a problem like this, and your way should just be aware of the variables it will need to be operating within.
Resque and Redis might be helpful to you in scheduling and performing operations. They are simple and super fast; here is a simple tutorial on the same: http://railscasts.com/episodes/271-resque

WebSockets server that will complete the job after the connection is made - Ruby, Rails

I want to use something like EventMachine websockets to push status updates to the client as they happen.
My application crawls around a section of a website, screen scraping relevant details of a user's search. I want to push any screen-scraping captures to the client as they happen. I also want to persist these changes to the database. I also want the job to complete even if the user closes down the browser.
At the moment, the job is initiated from the client (browser) and the job is placed on a resque queue that completes the job. The client polls the database and displays the results.
I want to have a play around with websockets but I don't think I can get the same behaviour. It is more important that the results are persisted and the job completes than the real time pushes.
Am I wrong in the assumption that this cannot be done?
Have you looked at Faye? See Messaging with Faye (RailsCasts). You can keep on using the Resque queue to get the job completed and push the message to the subscriber (your web client) as and when you find the results.
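Following the RailsCast's pattern, the Resque job can publish each capture to Faye over plain HTTP as it runs; the endpoint and channel name here are assumptions:

require "net/http"
require "json"

# Called from inside the Resque job after each capture is persisted
def broadcast(channel, data)
  message = { :channel => channel, :data => data }
  uri = URI.parse("http://localhost:9292/faye")   # assumes Faye is mounted here
  Net::HTTP.post_form(uri, :message => message.to_json)
end

# e.g. broadcast("/searches/#{search_id}", capture.attributes)

Because the job persists everything and runs to completion regardless of whether any browser is still subscribed, the websocket push is purely additive.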
