I am currently planning a complex application using Ruby on Rails and Ember.js. What I have seen about ember-data so far is that it caches records automatically; post.comments will first result in an Ajax call to fetch all comments for the given post, but if the user visits the same route again, it will just fetch the records from the store cache.
The problem is: what if another user added a comment to this post? How do I tell Ember it has to reload its cache because something changed?
I already thought about a solution using websockets to tell clients what to reload, but I don't think this is best practice. Besides, I can't imagine this isn't a common problem, so I am wondering what other developers do to solve this issue.
I tried to implement model updating in an (experimental) chat application. I used SSE: ActionController::Live on the server side (Ruby on Rails) and EventSource on the client side.
Simplified code:
App.MessagesRoute = Ember.Route.extend({
  activate: function() {
    if (!this.eventSource) {
      this.eventSource = new EventSource('/messages/events');
      var self = this;
      this.eventSource.addEventListener('message', function(e) {
        var data = $.parseJSON(e.data);
        // Ignore the record we just saved ourselves; add everyone else's
        if (data.id != self.controllerFor('messages').get('savedId')) {
          self.store.createRecord('message', data);
        }
      });
    }
  }
});
App.MessagesController = Ember.ArrayController.extend({
  actions: {
    create: function() {
      var data = this.getProperties('body');
      var message = this.store.createRecord('message', data);
      var self = this;
      message.save().then(function(response) {
        // Remember our own record's id so the SSE listener can skip it
        self.set('savedId', response.id);
      });
    }
  }
});
The logic is simple: I get each new record from the EventSource. Then, if the record was created by another client, the application detects it and adds the new record to the store using ember-data's createRecord. This logic may have some caveats, but at least it serves well as a proof of concept. The chat is working.
Full sources available here: https://github.com/denispeplin/ember-chat/
I have something to say about reloading: you probably don't want to perform a full reload, as it's a resource-consuming operation. Still, your client side needs some way to learn about new records, so getting new records one by one via SSE is probably the best option.
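On the Rails side, the /messages/events endpoint that the route above subscribes to might look roughly like this. This is a minimal sketch, assuming a Message model and a simple polling loop; the one-second interval and the query are my assumptions, not the original implementation:

class MessagesController < ApplicationController
  include ActionController::Live

  def events
    response.headers['Content-Type'] = 'text/event-stream'
    last_id = Message.maximum(:id) || 0
    loop do
      # Stream only records created since the last pass, one SSE event each
      Message.where('id > ?', last_id).order(:id).each do |message|
        response.stream.write("data: #{message.to_json}\n\n")
        last_id = message.id
      end
      sleep 1
    end
  ensure
    response.stream.close
  end
end

Each data: line arrives on the client as a 'message' event, which is what the EventSource listener above handles.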
If you just want to get rid of caching, you can force a reload every time the user navigates to the comments route. But this largely depends on what you are trying to achieve; I hope comments is just an example.
If you want your UI to get updated automagically with changes on the server, you need some communication with the server: a push mechanism like websockets, or polling from a web worker. Then you can reload the list of changed records sent from the server. You are probably on the right track with this.
You can also take a look at Orbit.js, a standalone library that integrates well with Ember. It is more useful if you require local storage as well and have to manage multiple data sources.
This is really a common problem with any web application, no matter what framework you are using. From my point of view, there are two main options. One: you have a service that polls the server to check whether there are any changes that would require you to reload some of your models, and have that service return those model IDs so you can refresh them. The other option is, as you suggested, using a websocket and pushing notifications of model changes, or the new models themselves.
I would opt to just send the comment model itself and push it into the Ember store and the associated post object. This reduces the need to hit the server with a hard refresh of your model. I am currently using this method in my Ember app, where there is an object that contains overview data based on all the models in my app, and when a change is made in the backend, my websocket server pushes the new overview data to my application.
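On the Rails side, that push could be as simple as broadcasting the serialized comment whenever one is saved. A sketch using Action Cable, which is my choice for illustration (the original setup used a separate websocket server; the channel name and callback are assumptions):

class Comment < ApplicationRecord
  belongs_to :post

  # Hypothetical hook: push each new comment to subscribers of its post
  after_create_commit do
    ActionCable.server.broadcast("post_#{post_id}_comments", as_json)
  end
end

The client-side subscription would then push the received payload into the Ember store rather than refetching the whole comment list.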
UPDATE: I meant for this to be a comment, not an answer. Oh well.
I've had this same issue with mobile app development. While websockets seemed like the first answer, I was worried about scalability issues with limited server resources. I decided to stick with an Ajax call to fetch newly modified records; this way, server resources are only used if the user is active. However, as others pointed out, returning all comments every single time you need data makes your caching useless and is slow. I suggest updating your Rails server to accept an optional timestamp. If no timestamp is supplied, every comment is retrieved. If a timestamp is supplied, only return comments whose updated_at column is >= the supplied timestamp. This way, if no comments were added or modified since your last call, you quickly get back an empty list and can move on. If results are returned, you can merge them with your existing list and show the updated comments.
An example of fetching newly created or modified comments:
if params.has_key?(:updated_since)
  # Only comments created or modified since the client's last fetch
  comments = Post.find(params[:id]).comments.where("updated_at >= ?", params[:updated_since])
else
  comments = Post.find(params[:id]).comments
end
I ran into a weird problem and am looking for the best solution I can get here. I'm developing a Rails application which displays data from a common database that is also used by another application (Node.js). All CRUD operations happen on the other platform; in the Rails app, we just query and display the data.
In the Rails app, I need to auto-update views without refreshing. For example:
def index
  @states = State.page(params[:state_page])
  @level_one_companies = Company.includes(:state)
                                .where(level: 1)
                                .order('created_at DESC').limit(20)
  @level_two_companies = Company.includes(:state)
                                .where(level: 2)
                                .order('created_at DESC').limit(20)
end
On the index page I will have tables for each of these, and I need to refresh the tables when new data is added to States, Level 1, or Level 2 companies.
I know I can go two ways to auto-update views:
1. Action Cable.
2. Polling the data at a time interval using jQuery.
Usually with Action Cable we broadcast data from the server after a record is created in the DB (after .save in the create action, or in an after_save callback in the model). However, I'm not creating any records through the Rails app.
My first question is: is there any way to use Action Cable in this case?
So I went with the second option, and it's working fine. But it makes too many DB calls every X seconds.
Is there any way to reduce the queries needed to update the views?
What is the best way to go here? Any help is highly appreciated. Thanks.
If your tags are set correctly, you are using Postgres as your database.
Postgres offers a publish-subscribe mechanism that you can use in combination with Action Cable to listen to changes in your database.
In this gist you can find an example of Postgres pub-sub with server-sent events. It should be simple to translate it into Action Cable compatible code.
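A rough sketch of that translation, run as a long-lived process alongside the app; the channel name, stream name, and rake task are all hypothetical:

# lib/tasks/pg_listener.rake (hypothetical): listens on a Postgres NOTIFY
# channel and rebroadcasts each payload through Action Cable, so no
# Rails-side create hook is needed.
task pg_listen: :environment do
  ActiveRecord::Base.connection_pool.with_connection do |conn|
    pg = conn.raw_connection # the underlying PG::Connection
    pg.exec("LISTEN companies_channel")
    loop do
      pg.wait_for_notify do |_channel, _pid, payload|
        ActionCable.server.broadcast("companies", JSON.parse(payload))
      end
    end
  end
end

This assumes the trigger sends a JSON payload; subscribers of the "companies" stream then update their tables from the broadcast instead of re-querying.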
You can create a trigger on a table (for creates/updates/deletes) that fires a notification on a "channel", and you can listen to that channel for events. I use SocketCluster, listen from workers, and broadcast to consumers (browsers and mobile apps).
First you create the trigger:
CREATE FUNCTION deletes_notify_trigger() RETURNS trigger
    LANGUAGE plpgsql
    AS $$
DECLARE
BEGIN
    PERFORM pg_notify('deletes_channel', ('DELETED' || ';;' || OLD.id)::text);
    RETURN OLD;  -- return value is ignored for AFTER triggers; OLD is conventional for DELETE
END;
$$;
and
CREATE TRIGGER deletes_trigger AFTER DELETE ON events FOR EACH ROW EXECUTE PROCEDURE deletes_notify_trigger();
You can put anything you want in the data packets being broadcast; in my case I just need the ID of the record. For creates and updates you could send the full row, or only some selected columns. You can also send it as JSON in PG 9.2 (I think) and higher. I use 9.1, so I concatenate with ;; separators.
Make sure your code takes at most 10% of the time between your queries; otherwise, if you do complex joins, updates, or other operations, you will notice significant drops in performance. You want this to be as simple and fast as possible: keep it down to basic operations and do any heavy lifting in the application layer.
Then comes the watcher, which broadcasts to consumers (in my case Node, SocketCluster, and the pg module; you can use JS, Python, Ruby, whatever you like):
var global_deletes = socket.subscribe('deletes_channel');

pg.connect(connectionString, function(err, client) {
  client.on('notification', function(dbmsg) {
    console.log(dbmsg.payload);
    var payload = dbmsg.payload.split(";;"); // you can use JSON instead
    if (payload[0] == "DELETED") { // when a DELETE is received...
      global_deletes.publish(dbmsg.payload);
      var tchannel = socket.subscribe('events-' + payload[1]); // join the channel we want to broadcast on
      setTimeout( () => tchannel.publish(dbmsg.payload), 50); // send the update to all consumers
      setTimeout( () => tchannel.unsubscribe(), 100);
    }
  });

  client.query("LISTEN deletes_channel"); // turn on notifications from the server
});
I have websocket price data streaming into my Rails API app, and I want to keep it updated so any API requests get an up-to-date response. It would be too expensive to save each update to the database. How can I do this? In Ember I can modify the model and it persists; that doesn't seem to happen in Rails.
Channel controller:
def receive(message)
  # ActionCable.server.broadcast('channel', message)
  platform = Platform.find(params[:id])
  market = platform.markets.find_by(market_name: message["market_name"])
  market.assign_attributes(price: message["price"])
  # etc......
  # market.save # this is too expensive every time
end
Am I going about this the right way? It also seems inefficient to call find every time I want to update, which could be multiple times per second. In Ember I created a record ID lookup array so I could quickly match the market_name; I don't see how to do this in Rails.
Persisting to some store is the only way to have other threads respond with the latest value.
Instead of 3 queries (2 selects and 1 update), you can do it with just 1 update:
Market.where(platform_id: params[:id], market_name: message["market_name"])
      .update_all(price: message["price"])
With a proper index, you might get sub-millisecond performance for each update.
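For reference, a composite index covering that WHERE clause might be added like this (an illustrative migration; adjust the names and version tag to your schema):

class AddLookupIndexToMarkets < ActiveRecord::Migration[5.2]
  def change
    # Lets the platform_id + market_name lookup resolve without a table scan
    add_index :markets, [:platform_id, :market_name]
  end
end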
Depending on your business needs: if you are getting tons of updates for a market every second (making all prior ones stale and useless), you can choose to ignore a few and not fire an update at all.
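For example, a simple per-market throttle could drop writes that arrive too close together. This is a hypothetical sketch; the 250 ms window and the in-memory state are assumptions:

# Hypothetical helper inside the channel: skip updates arriving within
# 250 ms of the last write for the same market.
def throttled_update(message)
  key = message["market_name"]
  @last_write ||= {}
  now = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  return if @last_write[key] && now - @last_write[key] < 0.25

  Market.where(platform_id: params[:id], market_name: key)
        .update_all(price: message["price"])
  @last_write[key] = now
end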
I'm just trying to use ReactRB with reactive-record.
I think the problem is in the render part. When I set param :user, type: User in my React component class, I can't see any data in my table. Of course, the User model is in the public folder, as ReactRB requires.
Well, in the console I see that the server is fetching nothing, but the right data is returned.
What am I missing? Thanks for the help!
The key to the answer is in this screenshot.
The details: the data comes back from the server as a JSON blob. reactive-record decodes it, but counts on the fact that trying to JSON-parse a plain string raises an error. Opal 0.10 no longer raises a standard error there, so the whole thing just hangs.
Just thinking about this... there is a known problem in Opal (https://github.com/opal/opal/issues/1545) that causes a problem in reactive-record. Please make sure that you are not using Opal 0.10.
One thing to keep in mind is that reactive-record lazy-loads records and attributes. So unless you access a particular record/attribute somewhere in your render, that attribute will not show up on the client.
It's hard to tell more without a bit more of your code posted, but here is some help.
Let's say your component looks like this:
class Foo < React::Component::Base
  param :user, type: User

  def render
    "user.name = #{user.name}"
  end
end
and someplace, either in a controller or in a layout, you do this:
render_component '::Foo', {user: User.first}
You might try something very simple like this, just to get familiar with how things work.
What happens should be this: you render your view, and a placeholder for the first User is sent to the component. During rendering, the component looks for that user's name attribute, which it does not have, so a fetch from the server is queued up. Rendering completes, and eventually the data comes down from the server; the local model's data is updated, and components displaying that data are re-rendered.
During prerendering, all of the above happens internally on the server, and when the component has been rendered, the final HTML is delivered along with all the model data that was used in rendering it. So on first load, if all is working, you will not see any fetches from the server.
So if you try out the small example above and then go into your JavaScript console, you can say things like this:
Opal.User.$first()
and you will see the underlying model data structure returned (I am translating from JS into Ruby above; Ruby methods all start with $).
You can then do this:
Opal.User.$first().$name()
And you can even do this (assuming there are at least two User records):
Opal.User.$find(2).$name()
You should get something like "DummyValue" back, then see a server fetch cycle in the console; if you then repeat the above command, you will get back the actual value!
This may not be the best forum for more details; if you need them, drop by https://gitter.im/reactrb/chat for more help.
I have a class method (placed in /app/lib/) which performs some heavy calculations and sub-HTTP requests until a result is received.
The result isn't too dynamic, and it is requested by multiple users accessing a specific view in the app.
So I want to schedule a periodic run of the method (using cron and the Whenever gem), store the results somewhere on the server in JSON format and, on demand, serve only the stored results to the view.
How can this be achieved? What would be the correct way of doing it?
What I currently have:
def heavy_method
  response = {}
  # some calculations, eventually building the response
  File.open(File.expand_path('../../../tmp/cache/tests_queue.json', __FILE__), "w") do |f|
    f.write(response.to_json)
  end
end
and also a corresponding method to read this file.
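For illustration, such a reader might look like the following hypothetical sketch, mirroring the path used by the writer above:

def read_tests_queue
  path = File.expand_path('../../../tmp/cache/tests_queue.json', __FILE__)
  return {} unless File.exist?(path) # hypothetical fallback when the cron job hasn't run yet
  JSON.parse(File.read(path))
end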
I searched but couldn't find an example of achieving this with the Rails cache conventions (rather than the private code I wrote), for data that isn't related to ActiveRecord.
Thanks!
Your solution should work fine, but using Rails.cache should be cleaner and a bit faster. The Rails guides provide enough information about Rails.cache and how to get it to work with memcached; let me summarize how I would use it in your case.
Heavy method
def heavy_method
  response = {}
  # some calculations, eventually building the response
  Rails.cache.write("heavy_method_response", response)
end
Request
response = Rails.cache.fetch("heavy_method_response")
The only problem here is that when your server starts for the first time, the cache will be empty; the same happens if/when memcached restarts.
One advantage is that somewhere in the flow, the data you pass in is marshalled into storage and then unmarshalled on the way out, meaning you can pass in complex data structures and don't need to serialize to JSON manually.
Edit: memcached will clear your item if it runs out of memory. This will be very rare, since it uses an LRU (I think) algorithm to expire things, and I presume you will read this key often.
To prevent this:
1. set expires_in larger than your cron period,
2. change your fetch code to call heavy_method if the fetch fails, like Rails.cache.fetch("heavy_method_response") { heavy_method }, and change heavy_method to just return the object (see the sketch below), or
3. use something like Redis, which will not delete items.
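Putting options 1 and 2 together, the whole flow might look like this (a sketch; the one-hour expiry is an arbitrary value, just pick something longer than your cron period):

def heavy_method
  response = {}
  # some calculations, eventually building the response
  response
end

# Read from the cache; recompute and store only if the entry was evicted
response = Rails.cache.fetch("heavy_method_response", expires_in: 1.hour) do
  heavy_method
end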
I am looking at using Ember.js for a new Rails-backed app (using Active Model Serializers). I am struggling to get my head around the framework, so maybe this is a bit of a newbie question.
My data structure is like this (simplified):
Event Days
* Events
  * Participants
  * Location
Inside an 'Event Day' there can be thousands of events (and inside an event, dozens of participants and all of their data).
It seems 'wrong' that when I want a listing of event days, I load JSON that has not only the EventDays but also all the Events (and then all the data from everything inside them)... basically, it loads the whole tree!
I thought I could solve this problem by using custom serializers for the actions but, at some point, I need to get the data, and Ember seems to never call the server again.
So if I load EventDays with simply no event data inside, Ember never calls the server to update the EventDay object when I click through to a 'show' method.
I don't know if I am being clear here; I am hoping someone who is a little ahead of me in this can understand what I am driving at!
Really I think it boils down to 2 questions:
1) How to properly filter the information in responses so that only the local objects are filled in (i.e. a call to an index method returns a list of event days without children, but a call to a show method returns a single event day filled with the next level down)?
2) How to get Ember to 'reload' an object at the appropriate time to fill in the appropriate content?
Maybe I am looking at this wrong - missing the point of something like Ember - and if so I welcome pointers to appropriate tutorials, but I can't find anything (even on the Ember site) that explains how to do anything other than load the whole tree at once. With gigs of data, this seems slow, a definite browser-killer, and just plain wrong.
I appreciate my StackOverflow brethren helping me learn!
Edit
As I was immediately downvoted for some reason, I will add code:
Client Side:
App.EventDay = DS.Model.extend({
  day: DS.attr('date'),
  events: DS.hasMany("Event", {async: true})
});
Server Side:
class EventDaySerializer < ActiveModel::Serializer
  attributes :id, :day
  has_many :events, embed: :ids, key: :events
end
Edit 2
After kertap's suggestion, I added the async attribute and updated the serializer code above.
The JSON is here:
{"event_day":
{"id":2,
"day":"2013-12-05",
"events":[1,2,3,4,5,6,7,8]
}
}
It is worth noting that if I do not use the key: :events parameter in the serializer, things come back as "event_ids": [1,2,3,4], which you would think is right, but it causes Ember to not see the events.
Also worth noting is that if I do this:
HorseFeeder.ApplicationSerializer = DS.ActiveModelSerializer.extend({});
Then nothing works at all! I get: Error while loading route: Error: Assertion Failed: The response from a findAll must be an Array, not undefined.
I really don't think it should be this difficult to get the basic wiring of Ember and Rails to work...
You can tell Ember Data that a relationship is asynchronous:
App.EventDay = DS.Model.extend({
  day: DS.attr('date'),
  events: DS.hasMany("Event", {async: true})
});
If you do this and get all EventDays, you will get the list of EventDays from the server side. The server side should respond with the ids of the events contained in an event day, or it can provide a URL linking to all the events. Ember won't load the events until you need them.
Then, when you call the show method for an event day and your template accesses all the events for that day, Ember Data will go off and fetch the event data.
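On the Rails side, the events endpoint then has to serve those deferred records when Ember Data asks for them. Depending on your adapter, it will either fetch each event individually (GET /events/1) or coalesce them into one request (GET /events?ids[]=1&ids[]=2). A sketch of an index action supporting the coalesced form; the controller shape is an assumption:

class EventsController < ApplicationController
  def index
    # Serve either the coalesced ids[] form or a plain listing
    events = params[:ids] ? Event.where(id: params[:ids]) : Event.all
    render json: events
  end
end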