Is it possible to use 'push' services at the back-end? - ruby-on-rails

I'm using the pusher gem to manipulate my front-end from an external API. It works fine, no problem with that.
But the thing I wonder is whether there is a possibility to use push notifications at the back-end of my application. I spent a serious amount of time investigating this but couldn't find anything useful.
Let me summarize:
I have an application and another API application which interacts tightly with it. Sometimes I want to use my API to send notifications to my main application, and I want to be able to manipulate data at the back-end of my main application based on the data received from the API side. These are things like 'an action was completed/started/succeeded' etc.
I understand that 'pusher' receives push notifications via JavaScript at the front-end. But I believe there must be a way to use those notifications at the back-end as well.
If there is another way (maybe Faye or WebSockets?) to do that, I'd love to learn what it is. Any clue would be appreciated.
Is it something doable?
Thank you

Pusher is a backend system too (to "push" updates to channels)
Endpoints
I think you may be interested in endpoints
From what I can gather, it seems you're looking to trigger the transfer of data to an endpoint once an action occurs in your API? For example:
User signs up on "API" app
API app sends "notification" to main app
Main app increases user count by 1
The way I can see this working is by either using Ajax or sending a cURL request to your main app's endpoint (set in routes), triggering the action:
    # main_app/config/routes.rb
    post "endpoint", to: "application#endpoint"

    # main_app/app/controllers/application_controller.rb
    def endpoint
      # increment the stored user count (assumes an Option record already exists)
      @count = Option.first.increment!(:user_count)
      head :ok
    end
This will allow you to manipulate your data in the backend of your "main" app
API
The tricky, non-conventional part comes when you want to send the data from your API app to your Main app (this is where you got the "pusher" idea from)
I would personally look at sending a standard HTTP request to the Main app endpoint, probably with Curl (if from the backend):
Curl on Ruby on Rails
Rails curl syntax
You may want to install curb (CUrl RuBy) here: https://github.com/taf2/curb
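For illustration, here is a rough sketch of what that back-end request could look like using Ruby's standard Net::HTTP instead (the main app URL and the payload fields are assumptions):

    require "net/http"
    require "uri"
    require "json"

    # hypothetical notification sent from the API app to the main app's endpoint
    uri = URI("https://main-app.example.com/endpoint")
    response = Net::HTTP.post(
      uri,
      { event: "user_signed_up", user_id: 42 }.to_json,
      "Content-Type" => "application/json"
    )
    puts response.code  # expect 200 if the main app handled the notification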
I could flesh this out further if you want.

I asked the same question of Pusher's support team and got exactly the answer I was looking for.
You can install a client library on your server (http://pusher.com/docs/client_libraries) if there is one for your server. You can then subscribe to a client channel this way.
In my case I use the Ruby gem, which is available at https://github.com/pusher/pusher-ruby-client.
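For example, the server-side subscription might look roughly like this (a sketch based on the pusher-client gem's README; the application key, channel, and event names are placeholders):

    require 'pusher-client'

    # connect to Pusher as a client from a back-end Ruby process
    socket = PusherClient::Socket.new('YOUR_APPLICATION_KEY')
    socket.subscribe('my_channel')

    # react to events pushed to the channel, e.g. 'action_completed'
    socket['my_channel'].bind('action_completed') do |data|
      puts "Received: #{data}"
    end

    socket.connect  # blocks; the gem also supports connecting asynchronously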

Related

.NET MVC Custom Mail Server

I'm in the process of planning the development of a mail server to handle the sending of email across our multiple websites. Below is a description of what we are planning to implement, and I'd like your opinion/suggestions.
We use ASP.NET MVC and have many websites hosted on Azure. We currently send mail internally within each of the web applications using SMTPServer.Send(). Obviously this is not the ideal way to send emails when you have a decently busy set of websites, because the send-mail call is blocking and cannot guarantee mails are sent. With this in mind, I'm worried about getting an influx of mail requests when we launch our next website (we think it'll get a decent amount of traffic and lots of emails will be sent).
My plan of action: build a centralised mail server that runs in the background (we use Azure and this will simply be another web application). When one of our web applications wants to send mail, instead of doing this internally, it'll call a web method on the mail server called sendMail(); this function will accept certain parameters and insert the mail parameters, content, etc. into a database. The mail server will then poll the database at fixed intervals, select a set of unsent emails, and attempt to send them using the same SMTPServer.Send() function. If an email fails for some reason, we won't flag it as sent, and in the next poll interval the email will be selected again and another send attempt will be made. (We will cap the number of send attempts at, say, 20.)
This will allow each of the websites to run smoothly without having loads of blocking send mail calls internally and the mail server will handle all the sending sequentially and in a controlled environment as a separate standalone web-application.
Thanks in advance!
Looks like a good design. I don't know the entire scenario that led to you building something like an email server; this problem has already been solved well by existing services like Office 365.
That said, your design is good. My suggestions would be the following:
You can use Azure WebJobs to build the polling agent. You can run it as a scheduled WebJob that does the polling and sends the mail, and it can be written very cleanly as a simple console app.
You can use an Azure API App to expose the SendMail() call, and you can use Azure AD Auth on the API to authenticate the caller via the Authentication and Authorization feature, to easily secure your email server. You can also enable CORS to make sure you receive requests from your other websites and process them.
Some issues I foresee for you:
Volume and scaling: you can only process so much email in each polling interval. If that isn't enough, you will need to create another polling agent, which makes things complicated because the agents need to know they are picking different sets of emails to send. If your volume is going to be low, you should be fine.
Challenge: why can't the websites send the mail themselves, and then record it in the database for tracking? All you would have to do is build a module or component that they use on their web pages to create and send the mail. Polymer 1.0 works well for this scenario.
Hope this helps to get you started.

How to dynamically and efficiently pull information from database (notifications) in Rails

I am working in a Rails application and below is the scenario requiring a solution.
I'm running some time-consuming processes in the background using Sidekiq and saving the related information in the database. When each process completes, we would like to show a notification in a separate area saying that the process has finished.
So the notifications area really needs to pull things from the back-end (this notification area will be available on every page) and show them dynamically. I thought Ajax must be an option, but I don't know how to trigger it for a particular area only. Or is there another option by which the client can fetch dynamic content from the server efficiently without creating much traffic?
I know it would be a broad topic to say about. But any relevant info would be greatly appreciated. Thanks :)
You're looking at a perpetual connection (using either SSEs or WebSockets), something Rails has started to address with ActionController::Live.
Live
You're looking for "live" connectivity:
"Live" functionality works by keeping a connection open between your app and the server. Rails is an HTTP request-based framework, meaning it only sends responses to requests. The way to send live data is to keep the response open (using a perpetual connection), which allows you to send updated data to your page on its own timescale.
The way to do this is to use a front-end method to keep the connection "live", and a back-end stack to serve the updates. The front-end will need either SSEs or a WebSocket, which you'll connect to with JS.
SSEs and WebSockets basically give you access to the server outside the scope of "normal" requests (SSEs, for example, use the text/event-stream MIME type).
Recommendation
We use a service called pusher
This basically creates a third-party websocket service, to which you can push updates. Once the service receives the updates, it will send it to any channels which are connected to it. You can split the channels it broadcasts to using the pub/sub pattern
I'd recommend using this service directly (they have a Rails gem and a super simple API; I'm not affiliated with them).
Other than that, you should look at the ActionController::Live functionality of Rails
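If you go the ActionController::Live route, a minimal server-sent-events sketch could look like this (the controller, stream contents, and event name are illustrative, not from the question):

    # app/controllers/live_notifications_controller.rb
    class LiveNotificationsController < ApplicationController
      include ActionController::Live

      def index
        response.headers["Content-Type"] = "text/event-stream"
        sse = SSE.new(response.stream, event: "notification")
        # in a real app you'd block on a pub/sub source (e.g. Redis) instead of this loop
        3.times do
          sse.write({ message: "background job finished" })
          sleep 1
        end
      ensure
        sse.close
      end
    end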
The answer suggested in the comment by @h0lyalg0rithm is one option to go with.
However, more primitive options are:
Use setInterval in JavaScript to perform a task every x seconds, i.e. polling.
Use jQuery or native Ajax to poll a controller action via a route and have the controller return the data as JSON (a sketch of such an action follows below).
Use document.getElementById or jQuery to update data on the page.
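For the polling option, the server side could be as small as the following sketch (route, model, and attribute names are assumptions):

    # config/routes.rb
    get "notifications/poll", to: "notifications#poll"

    # app/controllers/notifications_controller.rb
    class NotificationsController < ApplicationController
      # returns unread notifications as JSON for the polling JavaScript
      def poll
        notifications = current_user.notifications.where(read: false).order(created_at: :desc)
        render json: notifications.as_json(only: [:id, :message, :created_at])
      end
    end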

Can I use github-services hook to post my feeds to other services?

GitHub has developed the github-services hook to push commits to other services like Bugzilla, Campfire, Basecamp, etc.
Can one use the same github-services hook to push my application data to other services? If yes, how can I integrate github-services into my Rails application?
Any help? Any suggestions?
Update: Can I integrate the github-services hook source code as a Sinatra application inside my Rails application? How can I call the other services' (Bugzilla, Campfire, Basecamp, Twitter) hooks from my application's triggers?
For example, when one user posts something on another user's wall, the message should be sent to the other services like Bugzilla, Campfire, Basecamp, Twitter...
The Post-Receive URL is the simplest hook for performing such a notification. It triggers a POST to a pre-configured URL whenever a push is performed on the repository.
You could start with this GitHub help page on testing webhooks to understand the format of what is being POSTed and how the service reacts. This is done thanks to a very useful service: PostBin.
This help page gives a simple example of what one would have to implement on a Sinatra server to parse the POSTed JSON:
    require 'sinatra'
    require 'json'

    # parse the JSON payload that GitHub POSTs to this endpoint
    post '/' do
      push = JSON.parse(params[:payload])
      "I got some JSON: #{push.inspect}"
    end
This gist goes a little further and shows some really basic JSON data extraction.
If you want to go further, you can configure, through the GitHub API, some additional hooks to listen to more events (new issue, new fork, download, ...).
I think you are looking for an easy way to post your app's data to many other web services.
github-services is designed to take git commit information and push it to other services that accept that commit information... so if your app's data looks enough like github's payload, then those other services that work with github-services will work with your app.
But I suspect your app is not like github and your data is different than a git commit. In that case, you could make use of the code in 'services/' as examples of how to implement event handlers in your app. This one for Campfire uses the Tinder gem, for example: https://github.com/github/github-services/blob/master/services/campfire.rb
Then your WallPostsController#create could call a method that posts data in the format you choose to the various services. If you're going to post to many services, you may want to do it in an asynchronous job (DelayedJob, resque, etc.) because calls to many external services will take quite a while.
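As a loose sketch of that idea (the model, room, and credential names are placeholders; the Campfire call simply mirrors the Tinder usage in github-services' campfire.rb, and DelayedJob's delay keeps the request non-blocking):

    # app/controllers/wall_posts_controller.rb
    class WallPostsController < ApplicationController
      def create
        @wall_post = WallPost.create!(params.require(:wall_post).permit(:body))
        # push the external notifications to a background worker
        @wall_post.delay.notify_external_services
        redirect_to @wall_post
      end
    end

    # app/models/wall_post.rb
    class WallPost < ActiveRecord::Base
      # post the wall message to Campfire via the Tinder gem
      def notify_external_services
        campfire = Tinder::Campfire.new("your-subdomain", token: ENV["CAMPFIRE_TOKEN"])
        room = campfire.find_room_by_name("Notifications")
        room.speak(body) if room
      end
    end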

Posting multipart form data with Ruby

I'm building a rails app that interacts with a 3rd party API
When a user uploads a file to rails, it should be forwarded on to the 3rd party site via an HTTP POST.
In some cases, the upload can be several hundred MBs.
At the moment, I've just been re-posting to the API using Net::HTTP and accessing the multipart form object like so:

    @tempfile = params[:video][:file_upload].tempfile
This is hella slow though and feels kinda dirty.
Is there a better way to do this?
Is it possible to have the user post directly to the 3rd party service or do you have to handle the API through your Rails stack? Ideally you would be able to do this and would not have to load the file into your stack and then re-post it to the API. If you can't post directly, I would recommend seeing if the API has a streaming service so that you can send parts of the file instead of the entire thing at once. Either way I think you'll start running into Timeout errors on your side and on the API side with large files, so you'll have to increase your own timeouts or create a different type of streaming file uploader.
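If you do end up relaying the file through your Rails stack, one possible approach is the multipart-post gem, which streams the request body from the tempfile instead of loading the whole file into memory (the URL, field name, and content type below are assumptions):

    require "net/http/post/multipart"  # from the multipart-post gem

    tempfile = params[:video][:file_upload].tempfile
    uri = URI("https://api.example.com/videos")

    request = Net::HTTP::Post::Multipart.new(
      uri.path,
      "file" => UploadIO.new(File.open(tempfile.path), "video/mp4", "upload.mp4")
    )

    Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
      http.read_timeout = 600   # large uploads can exceed the default timeout
      http.request(request)     # the body is streamed, not read into memory at once
    end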
Spin up a background job using DelayedJob. In the delayed job, you could try Rails' redirect_to.
https://github.com/tobi/delayed_job
http://apidock.com/rails/ActionController/Base/redirect_to

Ideas for web application with external input and realtime notification

I am going to build a web application which will accept different events from external sources and present them quickly to the user for further action. I want to use Ruby on Rails for the web application. This is an internal development project. I would prefer simple and easy-to-use solutions for rapid development over highly reliable and complex systems.
What it should do
The user has the web application open in his browser. Now a phone call comes in. The call is registered by a PBX monitoring daemon, in this case via the Asterisk Manager Interface. The daemon somehow sends the available information (remote extension, local extension, call direction, channel status, start time, end time) to the web application. The user then receives a notification about the phone call event and can work with it, for example by entering a summary or by matching the call to a customer profile.
The duration from the first event on the PBX (e.g. the creation of a new channel) to the popup notification in the browser should be short. Given a fast network, I would like it to be within two seconds. The individual pieces of information about an event are created asynchronously; the local extension may be supplied separately from the remote extension. The user can enter a summary before the call has ended. The end time, new status, etc. will show up on the interface as soon as one party has hung up.
The PBX monitor is just one data source. There will be more monitors, such as email or a request via a web form. The monitoring daemons will not necessarily run on the same host as the database or web server. I do not imagine the application will serve thousands of logged-in users or concurrent requests soon, but by design, 200 users with about the same number of events per minute should not be a scalability issue.
How should I do it?
I am interested to know how you would design such an application. What technologies would you suggest? How do the daemons communicate their information? When and by whom is the data about an event stored into the main database? How does the user get notified? Should the browser receive a complete dataset on behalf of a daemon or just a short note that new data is available? Which JS library to use and how to create the necessary code on the server side?
On my research I came across a lot of possibilities: Message brokers, queue services, some rails background task solutions, HTTP Push services, XMPP and so on. Some products I am going to look into: ActiveMQ, Starling and Workling, Juggernaut and Bosh.
Maybe I am aiming too high? If there is a simpler or easier way, like just using the XML or JSON interface of Rails, I would like to hear about it even more.
I hope the text is not too long :)
Thanks.
If you want to skip Java and Flash, perhaps it makes sense to use a technology in the Comet family to do the push from the server to the browser?
http://en.wikipedia.org/wiki/Comet_%28programming%29
For the sake of simplicity, for notifications from daemons to the Web browser, I'd leave Rails in the middle, create a RESTful interface to that Rails application, and have all of the daemons report to it. Then in your daemons you can do something as simple as use curl or libcurl to post the notifications. The Rails app would then be responsible for collecting the incoming notifications from the various sources and reporting them to the browser, either via JavaScript using a Comet solution or via some kind of fatter client implemented using Flash or Java.
You could approach this a number of ways, but my only comment would be: push, don't pull. For low latency it's not only quicker, it's more efficient, as your server no longer has to handle n clients polling the db/queue once a second. ActiveMQ is OK, but Starling will probably serve you better if you're not looking for insane levels of persistence.
You'll almost certainly end up using Flash on the client side (Juggernaut uses it last time I checked) or Java. This may be an issue for your clients (if they don't have Flash/Java installed) but for most people it's not an issue; still, a fallback mechanism onto a pull notification system might be prudent to implement.
Perhaps http://goldfishserver.com might be of some use to you. It provides a simple API to allow push notifications to your web pages. In short, when your data updates, send it (some payload data) to the Goldfish servers and your client browsers will be notified, with the same data.
Disclaimer: I am a developer working on goldfish.
The problem
There is an event, either external or perhaps internal within your app.
Users should be notified.
One solution
I am myself facing this problem. I haven't solved it yet, but this is how I intend to do it. It may help you too:
(A) The app must learn about the event (via an exposed end point)
Expose an end point by which your app can be notified about external events.
When the end point is hit (and after authentication), users need to be notified.
(B) Notification
You can notify the user directly by changing the DOM on the current web page they are on.
You can notify users by using the Push API (but you need to make sure your target browsers support it).
All of these notification features can be handled via Action Cable: (i) by updating the DOM to notify you when a phone call comes in, or (ii) via a push notification that pops up in your browser.
Summary: use Action Cable.
(Also: why use an external service like Pusher when you have Action Cable at your disposal? Some people say scalability and infrastructure management, but I do not know enough to comment on these issues.)
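A minimal Action Cable sketch for the phone-call case might look like this (the channel name, stream key, and payload fields are illustrative):

    # app/channels/notifications_channel.rb
    class NotificationsChannel < ApplicationCable::Channel
      def subscribed
        # one stream per user so events only reach the user they concern
        stream_from "notifications_#{current_user.id}"
      end
    end

    # wherever the external event arrives, e.g. the controller behind the exposed end point
    ActionCable.server.broadcast(
      "notifications_#{user.id}",
      kind: "phone_call",
      remote_extension: params[:remote_extension],
      started_at: Time.current
    )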
