Realtime rails application [closed] - ruby-on-rails

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 7 years ago.
I have a simple and straightforward Rails app, and it requires a page reload every time I want to send an update or check whether new info is available on the server.
I want to do two things:
Send data to my rails app without reloading a page
Show updates to the data which are done by other users (also without reloading a page)
Any recommendations on gems, framework, technologies to accomplish these two tasks?

For the first point, a classic Rails Ajax call (with the data-remote attribute) should do the job.
For the second, you should consider using sockets, with services like Pusher or Faye. Take a look at this gem, which lets you sync partials: https://github.com/chrismccord/sync
If you can't use sockets, the classic fallback is a periodic Ajax call to your backend.

I wonder whether any gem wraps all these details
Pusher
Asynchronicity
To receive updates without reloading, you'll have to open an asynchronous connection to your server, which will then allow you to receive as much data as you want.
Asynchronous connections are best understood within the scope of HTTP. HTTP is stateless, meaning it treats every request as unique (it persists neither data nor the connection). This means you can generally only send single requests, each of which yields a single response - nothing more.
Asynchronous requests occur in parallel to these "stateless" requests, and essentially allow you to receive updated responses from the server through means other than the standard request/response cycle, typically driven by JavaScript.
--
There are two main ways to receive "asynchronous" updates:
SSEs (Server-Sent Events) - a one-way stream of updates from the server
WebSockets (a perpetual two-way connection)
--
SSE's
Server-sent events are an HTML5 technology which basically lets you listen to a server endpoint via JavaScript and handle any updates that come through:
A server-sent event is when a web page automatically gets updates from
a server.
This was also possible before, but the web page would have to ask if
any updates were available. With server-sent events, the updates come
automatically.
Setting up SSE's is simple:
#app/assets/javascripts/application.js
// Open a stream to the server and handle each message as it arrives
var source = new EventSource("/your/endpoint");
source.onmessage = function(event) {
    alert(event.data);
};
Although supported natively in every major browser except IE, SSEs have a major drawback: each client keeps a dedicated connection open and the stream is one-way only (server to client), so at scale they can become expensive to serve.
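On the server side, an SSE stream is just plain text written to a long-lived response. A minimal sketch, in Ruby, of formatting one event frame (the part a Rails `ActionController::Live` action would write to `response.stream`; the `format_sse` helper is a made-up name, not part of Rails):

```ruby
require "json"

# Format one server-sent event frame. Each frame is a "data: <payload>" line
# terminated by a blank line; an optional "event:" field names the event type.
# (Hypothetical helper -- not a Rails API.)
def format_sse(data, event: nil)
  frame = +""
  frame << "event: #{event}\n" if event
  frame << "data: #{JSON.generate(data)}\n\n"
  frame
end

format_sse({ status: "done" }, event: "job")
# => "event: job\ndata: {\"status\":\"done\"}\n\n"
```

In a real `ActionController::Live` action you would write these frames to `response.stream` with the `Content-Type` set to `text/event-stream`.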
--
Websockets
The second thing to consider is WebSockets. They come highly recommended, but since I haven't set them up myself yet, I don't have much specific information on how to use them.
I have used Pusher before, though, which essentially hosts a third-party WebSocket for you to connect to. A WebSocket connects once and stays open, which makes it far more efficient than SSEs.
I would recommend at least looking at Pusher - it sounds like exactly what you need.

Related

How to write minimal Slack script without a server? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Want to improve this question? Update the question so it focuses on one problem only by editing this post.
Closed 2 years ago.
I joined a Slack team and now I want to play with the bots there. But there seem to be lots of different ways, and they all involve some server with an API.
Isn't there an easy way for end users to write a script (is that a bot)? I write a file, load it into the Slack app, and it works?
My first idea (just to try it out) was to respond to certain keywords automatically from my own account.
There are four types of custom Slack integrations:
Incoming webhooks: your code sends an HTTP POST to Slack to post a message
Custom slash commands: Slack sends your code an HTTP POST when someone says /<whatever>
Outgoing webhooks: roughly the same as slash commands, but they can respond to any word at the beginning of a message
Bot users: your code connects to Slack via a WebSocket and sends and receives events
In all of these cases, you need code running somewhere to actually do the work. (In the case of the bot, that code can run anywhere with network connectivity. In the other cases, you'll need a server that's listening on the internet for incoming HTTP/HTTPS requests.)
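For the slash-command and outgoing-webhook cases, that code's first job is to decode the `application/x-www-form-urlencoded` POST body Slack delivers. A sketch in plain Ruby (the field names `command` and `text` are from Slack's payload; the rest of the request handling is omitted):

```ruby
require "uri"

# Decode the form-encoded body a Slack slash command sends into a plain Hash,
# e.g. "command" => "/weather", "text" => "tokyo".
def parse_slack_payload(body)
  URI.decode_www_form(body).to_h
end

payload = parse_slack_payload("token=abc123&command=%2Fweather&text=tokyo")
payload["command"]  # => "/weather"
payload["text"]     # => "tokyo"
```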
Slack itself never hosts/runs custom code. I'd say https://beepboophq.com/ is the closest thing to what you're looking for, since they provide hosting specifically for Slack bots.
Another option for things like slash commands is https://www.webscript.io (which I own). E.g., here's the entirety of a slash command running on Webscript that flips a coin:
return {
  response_type = 'in_channel',
  text = (math.random(2) == 1 and 'Heads!' or 'Tails!')
}
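The same coin flip, sketched in Ruby as it might look inside any endpoint answering a slash command (the JSON shape matches Slack's `in_channel` response; the surrounding HTTP handling is assumed):

```ruby
require "json"

# Build the JSON body for a Slack slash-command response: a 50/50 coin flip.
def flip_coin_response
  {
    response_type: "in_channel",
    text: rand(2).zero? ? "Heads!" : "Tails!"
  }.to_json
end

flip_coin_response  # e.g. {"response_type":"in_channel","text":"Heads!"}
```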
If you want to do something really basic, you might consider this service:
https://hook.io/
You can set up a webhook there using the provided URL plus your token (which you can pass as an environment variable) and code some simple logic.
I hope this helps.
There are plenty of solutions for that.
You can use premade solutions like:
https://hook.io
https://www.zapier.com
https://www.skriptex.io (disclaimer: that's my app)
Or you can set up a Hubot instance and host it yourself.
Slack's API is also good: you can create a Slack app, bind it to some commands, and have it interact with one of your servers.

How does Rails handle multiple incoming requests? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Want to improve this question? Update the question so it focuses on one problem only by editing this post.
Closed 7 years ago.
How does Rails handle multiple requests from different users without them colliding? What is the logic?
E.g., user1 logs in and browses the site. At the same time user2, user3, ... log in and browse. How does Rails manage this without any data conflicts between users?
One thing to bear in mind here is that even though users are using the site simultaneously in their browsers, the server may still only be handling a single request at a time. Request processing may take less than a second, so requests can be queued up and processed without causing significant delays for the users.
Each response starts from a blank slate, taking only information from the request and using it to look up data from the database. Nothing is carried over from one request to the next. This is called the "stateless" paradigm.
If the load increases, more rails servers can be added. Because each response starts from scratch anyway, adding more servers doesn't create any problems to do with "sharing of information", since all information is either sent in the request or loaded from the database. It just means that more requests can be handled per second.
Where there is a feeling of "continuity" for the user - for example, staying logged into a website - this is achieved via cookies, which are stored on their machine and sent along with every request. The server reads this cookie information from the request and can, for example, NOT redirect someone to the login page, because the cookie says they are already logged in as user 123 or whatever.
In case your question is about how Rails tells users apart: the answer is that it uses cookies to store the session. You can read more about it here.
Data also doesn't conflict, since you get a fresh instance of the controller for each request.
RailsGuides:
When your application receives a request, the routing will determine
which controller and action to run, then Rails creates an instance of
that controller and runs the method with the same name as the action.
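The quote above can be boiled down to a few lines: a brand-new controller instance is created per request, so instance variables never leak between users (`PostsController` and `dispatch` here are illustrative stand-ins, not the actual Rails internals):

```ruby
# A fresh controller object per request means per-request state
# (instance variables) is thrown away once the response is rendered.
class PostsController
  def index
    @loaded = true # per-instance, per-request state
    "rendering index"
  end
end

# Simplified stand-in for the routing/dispatch step Rails performs.
def dispatch(controller_class, action)
  controller_class.new.public_send(action) # fresh instance every time
end

dispatch(PostsController, :index)  # => "rendering index"
```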
That is guaranteed not by Rails but by the database the web service uses. The property you mentioned is called isolation. It is one of several properties a practical database has to satisfy, known together as ACID.
This is achieved using a "session": a bunch of data specific to the given client, available server-side.
There are plenty of ways for a server to store a session. Typically Rails uses a cookie: a small (typically around 4 kB) dataset stored in the user's browser and sent with every request. For that reason you don't want to store too much in there; however, you usually don't need much - just enough to identify the user while still making it hard to impersonate them.
Because of that, Rails stores the session itself in the cookie (as this guide says). It's simple and requires no setup. Some think that cookie store is unreliable and use persistence mechanisms instead: databases, key-value stores and the like.
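The integrity half of a cookie store can be illustrated in a few lines: the server signs the serialized session with a secret, and rejects any cookie whose signature doesn't verify. This is a simplified sketch of the idea only - Rails' real implementation (`ActiveSupport::MessageVerifier` and friends) also handles encoding, key rotation, and encryption:

```ruby
require "openssl"

SECRET = "s3cr3t-change-me" # stand-in for Rails' secret_key_base

# Sign the session payload so tampering is detectable.
def sign_session(data, secret = SECRET)
  digest = OpenSSL::HMAC.hexdigest("SHA256", secret, data)
  "#{data}--#{digest}"
end

# Return the payload if the signature checks out, nil otherwise.
# (Real implementations use a constant-time comparison here.)
def verify_session(cookie, secret = SECRET)
  data, digest = cookie.split("--", 2)
  return nil unless digest
  expected = OpenSSL::HMAC.hexdigest("SHA256", secret, data)
  expected == digest ? data : nil
end

cookie = sign_session("user_id=123")
verify_session(cookie)                # => "user_id=123"
verify_session("user_id=999--bogus")  # => nil
```

Because the secret never leaves the server, a user can read their cookie but cannot forge a session claiming to be someone else.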
Typically the workflow is as follows:
A session id is stored in a cookie when the server decides to initialize a session
A server receives a request from the user, fetches session by its id
If the session says that it represents user X, Rails acts as if it's actually him
Since different users send different session ids, Rails treats them as distinct users and returns the data relevant to the identified one, on a per-request basis.
Before you ask: yes, it is possible to steal the other person's session id and act in that person's name. It's called session hijacking and it's only one of all the possible security issues you might run into unless you're careful. That same page offers some more insight on how to prevent your users from suffering.
Additionally, you could use a multithreaded server such as Puma...

Using Gmail for Rails App advisable? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Want to improve this question? Update the question so it can be answered with facts and citations by editing this post.
Closed 8 years ago.
I am currently building a crowdfunding web application with Rails and in order to send registration confirmations, password resets or just newsletters I need a mail service.
Currently I am using a regular Gmail account, is doing so advisable? And to which service should I switch once business gets going?
It's fine as long as you don't have too many emails to send. Gmail has limits on the amount of mail you can send and receive.
You can find it here: https://support.google.com/a/answer/166852?hl=en
Other than the limits, there is not much of a problem using Gmail. I can't answer the next part of your question unfortunately as I don't have much of an experience in that area.
I've found using GMail to be fairly reliable, but you do need to be aware of throttling. This probably won't be a problem for your registration confirmations or password resets... but it may be someday for your newsletters. I forget the specifics, but if you send out more than about one thousand emails per hour (see the link in @Vinay's answer for specifics) they start to get throttled - which lasts for a period of time, during which any emails you send simply don't get delivered.
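Quotas like this make a simple send-rate guard worth having: track recent send timestamps and refuse (or defer) once you approach the cap. A sketch of the idea in plain Ruby - the numbers are illustrative, so check Google's published limits rather than trusting these defaults:

```ruby
# Rough send-rate guard: allow at most `limit` sends per sliding window.
# (Illustrative limit -- check Google's current published quotas.)
class SendRateGuard
  def initialize(limit: 1000, window: 3600)
    @limit, @window, @sent_at = limit, window, []
  end

  # Record and allow a send, or return false if the window is full.
  def allow?(now = Time.now)
    @sent_at.reject! { |t| now - t > @window } # drop timestamps outside window
    return false if @sent_at.size >= @limit
    @sent_at << now
    true
  end
end

guard = SendRateGuard.new(limit: 2)
guard.allow?  # => true
guard.allow?  # => true
guard.allow?  # => false (over the limit; defer the email instead)
```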
Despite GMail's decent reliability, you should consider using Resque, Sidekiq, or Delayed::Job for the actual sending of the emails. This is just good policy for all external services, and GMail is no different in the end. Using a background job for your mail sending allows you to retry an email until it goes through, which helps when the Gmail SMTP service goes down or when you have a bug in your email-sending code.
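The retry behavior those job libraries give you amounts to a loop: attempt the delivery, and on failure wait and try again with backoff. A minimal sketch (the block stands in for your actual mailer call; Sidekiq and friends add persistence and scheduling on top of this):

```ruby
# Retry a block up to `attempts` times with exponential backoff.
# Sidekiq/Resque/Delayed::Job provide this (plus job persistence) for you.
def with_retries(attempts: 3, base_delay: 1)
  tries = 0
  begin
    yield
  rescue StandardError
    tries += 1
    raise if tries >= attempts
    sleep(base_delay * 2**(tries - 1)) # wait 1s, 2s, 4s, ...
    retry
  end
end

calls = 0
with_retries(base_delay: 0) do
  calls += 1
  raise "SMTP down" if calls < 3 # fails twice, then succeeds
end
calls  # => 3
```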
The question about what service to switch to when you outgrow GMail is very much a matter of opinion. Which is the type of question we try to avoid on Stack Overflow (and the reason why your question has a close vote on it already).
I am a big fan of SendGrid, especially if you are running on Heroku. It is simple to add to a Rails app on Heroku: https://addons.heroku.com/sendgrid
It will most likely be free when you launch; the free version supports up to 200 emails/day (if you get beyond that, you are doing well and can afford to pay for it). It also has some nice tools that help you identify which emails aren't being delivered, and why.

How to dynamically and efficiently pull information from database (notifications) in Rails

I am working on a Rails application, and below is the scenario requiring a solution.
I'm doing some time-consuming processes in the background using Sidekiq and saving the related information in the database. Now, when each process completes, we would like to show a notification in a separate area saying that the process has finished.
So, the notifications area really needs to pull things from the back end (this notification area will be available on every page) and show them dynamically. I thought Ajax must be an option, but I don't know how to trigger it for a particular area only. Or is there any other option by which the client can fetch dynamic content from the server efficiently, without creating much traffic?
I know this is a broad topic, but any relevant info would be greatly appreciated. Thanks :)
You're looking at a perpetual connection (either using SSE's or Websockets), something Rails has started to look at with ActionController::Live
Live
You're looking for "live" connectivity:
"Live" functionality works by keeping a connection open
between your app and the server. Rails is an HTTP request-based
framework, meaning it only sends responses to requests. The way to
send live data is to keep the response open (using a perpetual connection), which allows you to send updated data to your page on its
own timescale
The way to do this is to use a front-end mechanism to keep the connection "live", and a back-end stack to serve the updates. The front end needs either SSEs or a WebSocket, which you'll connect to with JavaScript.
SSEs and WebSockets basically give you access to the server outside the scope of "normal" requests (SSEs, for example, use the text/event-stream MIME type).
Recommendation
We use a service called Pusher
This basically creates a third-party websocket service, to which you can push updates. Once the service receives the updates, it will send it to any channels which are connected to it. You can split the channels it broadcasts to using the pub/sub pattern
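The pub/sub pattern behind those channels is small enough to sketch in memory: subscribers register callbacks under a channel name, and a publish fans the payload out to each of them. This illustrates the pattern only, not Pusher's actual API:

```ruby
# Tiny in-memory pub/sub: the pattern behind Pusher-style channels.
class PubSub
  def initialize
    @channels = Hash.new { |h, k| h[k] = [] }
  end

  # Register a callback for a channel.
  def subscribe(channel, &callback)
    @channels[channel] << callback
  end

  # Fan a payload out to every subscriber of the channel.
  def publish(channel, payload)
    @channels[channel].each { |cb| cb.call(payload) }
  end
end

bus = PubSub.new
received = []
bus.subscribe("notifications") { |msg| received << msg }
bus.publish("notifications", "job 42 finished")
received  # => ["job 42 finished"]
```

A hosted service does the same fan-out, except the subscribers are browsers connected over WebSockets rather than in-process callbacks.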
I'd recommend using this service directly (they have a Rails gem; I'm not affiliated with them), and it provides a super simple API.
Other than that, you should look at the ActionController::Live functionality of Rails
The answer suggested in the comment by @h0lyalg0rithm is one option to go with.
However, the primitive options are:
Use setInterval in JavaScript to perform a task every x seconds - i.e., polling.
Use jQuery or native Ajax to poll a controller/action via a route, and have the controller return the data as JSON.
Use document.getElementById or jQuery to update the data on the page.
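The controller side of that polling loop boils down to "return everything newer than the timestamp the client last saw". The filtering can be sketched in plain Ruby - in a real app this would be an ActiveRecord `where` on `created_at`, and the hash shape here is invented for illustration:

```ruby
require "json"
require "time"

# Return notifications created after `since`, as the polled controller
# action would render them to JSON.
def notifications_since(notifications, since)
  fresh = notifications.select { |n| n[:created_at] > since }
  JSON.generate(fresh.map { |n| { id: n[:id], message: n[:message] } })
end

all = [
  { id: 1, message: "import finished", created_at: Time.parse("2015-01-01 10:00") },
  { id: 2, message: "report ready",    created_at: Time.parse("2015-01-01 10:05") }
]
notifications_since(all, Time.parse("2015-01-01 10:03"))
# => only the "report ready" notification
```

The client then remembers the newest timestamp it has seen and sends it with the next poll, so each response carries only the delta.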

Ideas for web application with external input and realtime notification

I am going to build a web application which will accept different events from external sources and present them quickly to the user for further action. I want to use Ruby on Rails for the web application. This is an internal development project, and I would prefer simple, easy-to-use solutions for rapid development over highly reliable but complex systems.
What it should do
The user has the web application open in his browser. Now a phone call comes in. The phone call is registered by a PBX monitoring daemon - in this case via the Asterisk Manager Interface. The daemon sends the available information (remote extension, local extension, call direction, channel status, start time, end time) somehow to the web application. Next, the user is notified about the phone call event. The user can now work with this, for example by entering a summary or by matching the call to a customer profile.
The duration from the first event on the PBX (e.g. the creation of a new channel) to the popup notification in the browser should be short - given a fast network, I would like it to be within two seconds. The individual pieces of information about an event arrive asynchronously: the local extension may be supplied separately from the remote extension. The user can enter a summary before the call has ended; the end time, new status etc. will show up in the interface as soon as one party has hung up.
The PBX monitor is just one data source. There will be more monitors, like email or requests via a web form. The monitoring daemons will not necessarily run on the same host as the database or web server. I do not imagine the application will serve thousands of logged-in users or concurrent requests soon, but by design, 200 users with maybe about the same number of events per minute should not be a scalability issue.
How should I do it?
I am interested to know how you would design such an application. What technologies would you suggest? How do the daemons communicate their information? When and by whom is the data about an event stored into the main database? How does the user get notified? Should the browser receive a complete dataset on behalf of a daemon or just a short note that new data is available? Which JS library to use and how to create the necessary code on the server side?
On my research I came across a lot of possibilities: Message brokers, queue services, some rails background task solutions, HTTP Push services, XMPP and so on. Some products I am going to look into: ActiveMQ, Starling and Workling, Juggernaut and Bosh.
Maybe I am aiming too high? If there is a simpler or easier way, like just using the XML or JSON interface of Rails, I would like to read about that even more.
I hope the text is not too long :)
Thanks.
If you want to skip Java and Flash, perhaps it makes sense to use a technology in the Comet family to do the push from the server to the browser?
http://en.wikipedia.org/wiki/Comet_%28programming%29
For the sake of simplicity, for notifications from daemons to the Web browser, I'd leave Rails in the middle, create a RESTful interface to that Rails application, and have all of the daemons report to it. Then in your daemons you can do something as simple as use curl or libcurl to post the notifications. The Rails app would then be responsible for collecting the incoming notifications from the various sources and reporting them to the browser, either via JavaScript using a Comet solution or via some kind of fatter client implemented using Flash or Java.
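The daemon side of that design is just "build a JSON event and POST it". A Ruby sketch of assembling the payload the PBX monitor might send - the field names are invented to match the question's example, and the actual `Net::HTTP` POST is commented out since it needs a live endpoint:

```ruby
require "json"
require "time"
require "net/http"
require "uri"

# Build the JSON body a monitoring daemon would POST to the Rails app.
# (Field names are illustrative, matching the PBX example in the question.)
def build_call_event(remote_ext:, local_ext:, direction:, status:)
  {
    source: "asterisk",
    remote_extension: remote_ext,
    local_extension: local_ext,
    direction: direction,
    channel_status: status,
    reported_at: Time.now.utc.iso8601
  }.to_json
end

payload = build_call_event(remote_ext: "5551234", local_ext: "101",
                           direction: "inbound", status: "ringing")
# uri = URI("https://example.test/events") # hypothetical Rails endpoint
# Net::HTTP.post(uri, payload, "Content-Type" => "application/json")
```

Because each daemon only speaks HTTP, monitors written in any language (or a plain curl call) can report into the same endpoint.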
You could approach this a number of ways but my only comment would be: Push, don't pull. For low latency it's not only quicker it's more efficient, as your server now doesn't have to handle n*clients once a second polling the db/queue. ActiveMQ is OK, but Starling will probably serve you better if you're not looking for insane levels of persistence.
You'll almost certainly end up using Flash on the client side (Juggernaut uses it last time I checked) or Java. This may be an issue for your clients (if they don't have Flash/Java installed) but for most people it's not an issue; still, a fallback mechanism onto a pull notification system might be prudent to implement.
Perhaps http://goldfishserver.com might be of some use to you. It provides a simple API to allow push notifications to your web pages. In short, when your data updates, send it (some payload data) to the Goldfish servers and your client browsers will be notified, with the same data.
Disclaimer: I am a developer working on goldfish.
The problem
There is an event - either external (or perhaps internally within your app).
Users should be notified.
One solution
I am myself facing this problem. I haven't solved it yet, but this is how I intend to do it. It may help you too:
(A) The app must learn about the event (via an exposed endpoint)
Expose an endpoint by which your app can be notified about external events.
When the endpoint is hit (and the request is authenticated), users need to be notified.
(B) Notification
You can notify the user directly by changing the DOM on the current web page they are on.
You can notify users by using the Push API (but you need to make sure your target browsers support it).
All of these notification features should be able to be handled via Action Cable: (i) either by updating the DOM to notify you when a phone call comes in, or (ii) via a push notification that pops up in your browser.
Summary: use Action Cable.
(Also: why use an external service like Pusher when you have Action Cable at your disposal? Some people say scalability and infrastructure management, but I don't know enough to comment on those issues.)
