I need to monitor user changes at the ActiveRecord level, including associations. These changes should be displayed on a stream-like page. Since every user has different permissions in different areas of my application, the audit stream also has to be limited by those permissions.
I know about PaperTrail (does not work with all types of relations :( ), vestal_versions, acts_as_audited, and so on, but I do not know the best practice for using these gems in my case.
What is the best practice for auditing AR changes, including associations, and displaying them in an activity stream with different user rights?
A simple way to audit AR activity is provided by Rails with Observers.
Observers let you add callbacks to your models. You can then update an Activity model that records what has been changed. Note that observers are independent of controllers: whatever or whoever changes your model will pass through your observer. The drawback is that this can slow down your application.
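For instance, a minimal sketch of such an observer (the Activity model and its columns are hypothetical, and it assumes serialize :audited_changes on Activity):

    # app/models/audit_observer.rb
    class AuditObserver < ActiveRecord::Observer
      observe :order, :product                # the models you want to audit

      def after_update(record)
        Activity.create!(                     # hypothetical Activity model
          subject_type:    record.class.name,
          subject_id:      record.id,
          audited_changes: record.changes     # old/new values per changed attribute
        )
      end
    end

    # Registered in config/application.rb:
    #   config.active_record.observers = :audit_observer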
A more efficient but also more complicated way would be to use EventMachine. See this article for an introduction to it.
A middle ground between EventMachine and observers could be to use Delayed Job from within your observers. If you have to compute something when auditing, it lets you do so without your application slowing down like a sleeping turtle. There are some good tips here about Delayed Job.
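For example (again just a sketch; AuditLogger is a hypothetical class, and delayed_job's #delay proxy does the enqueueing):

    class AuditObserver < ActiveRecord::Observer
      observe :order

      def after_update(record)
        # .delay turns this call into a background job, so the request
        # is not blocked by the audit bookkeeping
        AuditLogger.delay.record_change(record.class.name, record.id, record.changes)
      end
    end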
Related
Are there any benefits to using the wisper gem over rails-observers?
They both look quite similar at first sight, but wisper seems to have more community support (based on GitHub stars, commits, and releases). Are there any significant differences between them?
Rails observers suffer the same problems as ActiveRecord callbacks, primarily that they can't be turned off. With ActiveRecord callbacks you forever couple the model to whatever is referenced in the callback and whatever side effect occurs in the callback always happens in all circumstances. Using observers only really moves the problem.
What if we want to use our model in a context we don't foresee today, in which we don't want the observer callback to occur?
If you search StackOverflow for "Rails callbacks", a large number of the results pertain to seeking means to avoid issuing the callback in certain contexts. It almost seems as though Rails developers discover a need to avoid callbacks as soon as they discover their existence.
ref: http://samuelmullen.com/2013/05/the-problem-with-rails-callbacks/
Wisper (disclaimer, author here) allows publishers (e.g. a model) to broadcast events when something significant happens. Listeners are subscribed to the publishers, at runtime. The publishers and subscribers know nothing of each other. Instead of having a hard dependency on each other, they only depend on the event.
Our systems, by necessity, have dependencies between objects; they need to communicate. However, we want these dependencies to be as light as possible so that an object can be used, in the future, in contexts we might not be able to foresee today. Also see Connascence for examples of light versus hard dependencies.
Using Wisper, in different contexts you can choose to subscribe a listener or not.
For example, in a controller I might subscribe some listeners, but in my unit tests I might want to test the model in isolation, without those side effects occurring. Another example might be a rake task where I want to save a model but do not want certain side effects to occur. Or an admin controller where I do or don't want something to happen that should or shouldn't happen in other contexts.
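A minimal sketch of how that looks with Wisper (the listener and event names are just examples):

    class Order < ActiveRecord::Base
      include Wisper::Publisher

      def place
        # ... business logic ...
        broadcast(:order_placed, id)     # fire-and-forget domain event
      end
    end

    # In a controller: opt in to whichever side effects this context needs
    order = Order.new
    order.subscribe(OrderMailerListener.new)   # hypothetical listener
    order.place

    # In a unit test: just call order.place -- no listeners, no side effects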
Finally, Wisper has built-in support for asynchronous broadcasting of events, which observers do not; they are always synchronous.
I am trying to build offline synchronization into my iOS app and would like some feedback/advice from the community on the strategy and best practices to follow. The app details are as follows:
The app shows a digital catalog to users and allows them to perform actions like creating and placing orders, among others.
Currently the app only works when online, and we have APIs for all actions like viewing the catalog, creating/placing orders which return JSON data.
We would like to provide offline/synchronization capability to users, through which users can view the catalog and create/place orders while offline, and when they come online the order details will be synchronized and updated to our server.
We would also like to pull the latest data from the server, and have the app keep itself up to date in case of catalog changes or order changes that happened at the Server while the app was offline.
Can you guys help me come up with the best design and approach for handling this kind of functionality?
I did something similar at the beginning of this year. After reading about NSOperationQueue and NSOperation, I took a straightforward approach:
Whenever an object is changed/added/... in my local database, I add a new "sync" operation to the queue without caring whether the app is online or offline (a reachability observer either suspends the queue or puts it back to work; of course, I re-queue an operation if an error occurs, e.g. the network is lost during a sync). The operation itself reads/writes the database and does the networking. My view controllers use an NSFetchedResultsController (with delegate = self) to get callbacks on changes. In some cases I needed some extra local data (it was about counting objects), for which I used NSManagedObjectContextObjectsDidChangeNotification.
Furthermore, I used multi-context Core Data, which seemed quite reasonable (I have only two contexts).
To get notified about changes from your server, I believe that iOS 7 has something new for you.
On the server side, you should read up on the approach you actually want to take, e.g. Data Synchronization by Dan Grover or Developing Android REST Client Applications (of course there are many more good articles out there).
Caution: you might be disappointed if you expect an easy solution. Your requirement is not unusual, but the solution may become more complex than you expect, depending on the "business rules" and other reasonable requirements. If you restrict your requirements intelligently, you may find a solution you can implement yourself; otherwise, you may also consider using a commercial product.
I could imagine that if you design the business logic so that it takes the offline state into account and exposes it explicitly, you may find a solution you can implement yourself with moderate effort. What I mean by this, for example: when a user creates an order, it is initially in a "not committed" state. The order is only committed when there is access to the server and the server gives the "OK" that this order can actually be placed by this user. The server may also deny the order, sending a corresponding message to the user.
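A minimal server-side sketch of that lifecycle (in Ruby only to match the rest of this page; the model, states, and permission check are all hypothetical):

    class Order < ActiveRecord::Base
      STATES = %w[not_committed committed denied]

      # Orders created offline are synced later in the "not_committed" state;
      # the server then decides whether they may actually be placed.
      def commit!(user)
        if user.can_place?(self)                 # hypothetical permission check
          update_attributes!(state: "committed")
        else
          update_attributes!(state: "denied")    # the client surfaces this to the user
        end
      end
    end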
There are probably quite a few subtle issues that may arise due to the requirement of eventual consistency.
See also this question, which contains pointers to commercial products; their web sites also give valuable information about the complexity of the problem and how it can be solved.
I have an old SL4/ria app, which I am looking to replace with breeze. I have a question about memory use and caching. My app loads lists of Jobs (a typical user would have access to about 1,000 of these jobs). Additionally, there are quite a few lookup entity types. I want to make sure these are cached well client-side, but updated per session. When a user opens a job, it loads many more related entities (anywhere from 200 - 800 additional entities) which compose multiple matrix-style views for the jobs. A user can view the list of jobs, or navigate to view 1 job at a time.
I feel that I should be concerned with memory management, especially since I don't know how browsers will deal with this. Originally I felt this should all be one EntityManager and I would detach entities when the user navigates away from a job, but I'm thinking this might benefit from multiple managers, separated by intended lifetime. Or perhaps I should create a new dataservice & EntityManager each time the user navigates to a new hash '/#/' area, since comments on clear() seem to indicate that this would be faster? If I did this, I suppose I would be using pub/sub to notify other viewmodels of changes to entities? This seems complex and defeats some of the benefits of Breeze as the shared context.
Any tips or thoughts about this would be greatly appreciated.
I think I understand the question. I think I would use a multi-manager approach:
Lookups Manager - holds once-per-session reference (lookup) entities
JobsView Manager - "readonly" list of Jobs in support of the JobsView
JobEditor Manager - One per edit session.
The Lookups Manager maintains the canonical copy of reference entities. You can fill it once with a single call to the server (see the docs for how). This Lookups Manager Breeze-exports these reference entities to the other managers, which Breeze-import them as they are created. I am assuming that, while the reference entities are numerous and diverse, their total memory footprint is pretty low ... low enough that you can afford to have more than one copy in multiple managers. There are more complicated solutions if that is NOT so. But let that be for now.
The JobsView Manager has the necessary reference entities for its display. If you only displayed a projection of the Jobs, it would not have Jobs in cache. You might have an array and key map instead. Let's keep it simple and assume that it has all the Jobs but not their related entities.
You never save changes with this manager! When editing or creating a Job, your app always fires up a "Job Editor" view with its own VM and JobEditor Manager. Again, you import the reference entities you need and, when editing an existing Job, you import the Job too.
I would take this approach anyway ... not just because of memory concerns. I like isolating my edit sessions into sandboxes. Eases cancellation. Gives me a clean way to store pending changes in browser storage so that the user won't lose his/her work if the app/browser goes down. Opens the door to editing several Jobs at the same time ... without worrying about mutually dependent entities-with-changes. It's a proven pattern that we've used forever in SL apps and should apply as well in JS apps.
When a Job edit succeeds, you have to tell the local client world about it. There are lots of ways to do that. If the ONLY place that needs to know is the JobsView, you can hardcode a backchannel into the app. If you want to be more clever, you can have a central singleton service that raises events specifically about Job saving. The JobsView and each new JobEditor communicate with this service. And if you want to be hip, you use an in-process "Event Aggregator" (your pub/sub) for this purpose. I'd probably be using Durandal for this app anyway, and it has an event aggregator in the box.
Honestly, it's not that complicated to use and importing/exporting entities among managers is a ... ahem ... breeze. Well worth it compared to refreshing the Jobs List every time you return to it (although you'll want a "refresh button" too because OTHER users could be adding/changing those Jobs). You retain plenty of Breeze benefits: querying, validation, change-tracking, batch saves, entity navigation (those reference lists work "for free" in Breeze).
As a refinement, I don't know that I would automatically destroy the JobEditor view/viewmodel/manager when I returned to the JobsView. In my experience, people often return to the same Job that they just left. I might hold on to a view so you could go back and forth quickly. But now I'm getting tricky.
I am using Rails 3.1.0 with Devise 2.1.0. I would like to limit the number of times a user can perform an action in a day. The main purpose of this limitation is to prevent spam.
I see many questions similar to this one but was wondering if there is a specific way to accomplish what I am trying to do through Devise.
For the actions that create model instances, the number of times an action has been performed in a day is easy to keep track of. However, at least one action that I would like to restrict does not create a model instance, so I'm not sure what to do about it.
I was also wondering if this is a legitimate/effective way of preventing spam (in addition to requiring users to register and sign in to perform the actions).
Personally, I find these sorts of systems to be over-complications. Unless spam is an existing, provable problem I'm not sure adding in a system that's likely to be rather extensive is a good use of time and energy.
Alternatives to this would be requiring registration through a third-party service (say Facebook) and using either captchas or exciting and new negative captchas.
That said, if you want to do this, I think the best place to keep track of it would be in an ephemeral data store. Redis would be really good for this since you can use queues. In the actions that you want to restrict, add a timestamp to the queue, and before you allow the user to perform said action, check the number of elements in the queue, purging ones that are too old while you do so.
That's sort of pseudo-codey, but should at least help you get started.
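To make that concrete, here is a minimal sketch with the redis gem. A sorted set scored by timestamp stands in for the plain queue, since it makes purging old entries a one-liner (key names and limits are illustrative):

    require "redis"

    REDIS  = Redis.new
    LIMIT  = 10              # actions allowed per window
    WINDOW = 24 * 60 * 60    # one day, in seconds

    def allow_action?(user_id, action)
      key = "ratelimit:#{user_id}:#{action}"
      now = Time.now.to_f

      REDIS.zremrangebyscore(key, 0, now - WINDOW)  # purge entries that are too old
      return false if REDIS.zcard(key) >= LIMIT     # over the limit: deny the action

      REDIS.zadd(key, now, "#{now}:#{rand}")        # record this action (unique member)
      REDIS.expire(key, WINDOW)                     # idle keys clean themselves up
      true
    end

In a controller you would call something like allow_action?(current_user.id, :create_post) in a before_filter and render an error when it returns false.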
I'm working on a Ruby on Rails site.
In order to improve performance, I'd like to build up caches of various stats, so that when displaying them in the future I only have to read the cache instead of pulling all the database records to calculate them.
Example:
A User model has_many Comments. I'd like to store in a user cache model how many comments each user has. That way, when I need to display the number of comments a user has made, it's only a simple query against the stats model. Every time a new comment is created or destroyed, it would simply increment or decrement the counter.
How can I build these stats while the site is live? What I'm concerned about is that after I ask the database to count the number of Comments a User has, but before I can save that count into the stats table, the user might sneak in and add another comment somewhere. That comment would increment the counter, which would then be immediately overwritten by the other thread, resulting in incorrect stats being saved.
I'm familiar with ActiveRecord transaction blocks, but as I understand it, those guarantee that all of the statements succeed or none do; they don't act as mutex protection for data in the database.
Is it basically necessary to take down the site for changes like these?
Your use case is already handled by Rails. It's called a counter cache. There is a RailsCast on it here: http://railscasts.com/episodes/23-counter-cache-column
Since it is so old, it might be out of date. The general idea is there though.
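For reference, the built-in version looks something like this. Rails maintains the column with atomic SQL increments, which also sidesteps the race described in the question (column and model names follow the example above):

    class Comment < ActiveRecord::Base
      belongs_to :user, :counter_cache => true   # maintains users.comments_count
    end

    class User < ActiveRecord::Base
      has_many :comments
    end

    # In a migration: add the column, then backfill existing rows.
    # reset_counters recounts with a single UPDATE, safe on a live site.
    def up
      add_column :users, :comments_count, :integer, :default => 0, :null => false
      User.reset_column_information
      User.find_each { |u| User.reset_counters(u.id, :comments) }
    end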
It's generally not a best practice to co-mingle application and reporting logic. Send your reporting data outside the application: either to another database, to log files that are read by daemons, or to some other API that handles the storage particulars.
If all that sounds like too much work, then you don't really want real-time reporting. Assuming you have a backup of some sort (hot or cold), run the aggregations and generate the reports against the backup. That way it doesn't affect the running application, and your data shouldn't be more than 24 hours stale.
FYI, I think I found the solution here:
http://guides.ruby.tw/rails3/active_record_querying.html#5
What I'm looking for is called pessimistic locking, and is addressed in 2.10.2.
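For anyone landing here later, a minimal sketch of pessimistic locking in ActiveRecord (the UserStat model and stat_id are hypothetical):

    UserStat.transaction do
      # SELECT ... FOR UPDATE: the row stays locked until the transaction
      # commits, so no concurrent thread can read-modify-write in between.
      stat = UserStat.lock(true).find(stat_id)
      stat.comments_count = Comment.where(:user_id => stat.user_id).count
      stat.save!
    end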