Logging data changes for synchronization - ruby-on-rails

I am looking for a solution for logging data changes for a public API.
The API needs to tell a client app which tables in the db have changed and need to be synchronised since the app last synchronised, and the answer also needs to be scoped to a specific brand and country.
Current Solution:
A Version table holding the class_names of models, touched from every model on create, delete, touch and save.
When we touch the Version for a specific model we also look at its reflected associations and touch those too.
The Version model is scoped to brand and country.
The REST API responds to a request that includes last_sync_at (a timestamp), brand and country.
Rails looks up Version records with the given attributes and returns the class_names of models that have changed since the last_sync_at timestamp.
This solution works, but the problem is performance, and it is also hard to maintain.
UPDATE 1:
Maybe the simpler question is:
What is the best practice for finding out, and telling frontend apps, when and what needs to be synchronised, in terms of the whole concept?
Conditions:
Frontend apps need to download only their own content changes, not the whole dataset.
An app should not be told to synchronise when only data for a different country or brand has changed.
Thank you.

I think that the best solution would be to use Redis (or some other key-value store) and save your information there. Writing to Redis is much faster than writing to any SQL db. You could write a service class that saves the data like:
RegisterTableUpdate.set(table_name, country_id, brand_id, timestamp)
Such a call would save the given timestamp under a key that could look like e.g. table-update-1-1-users, where the first number is the country id and the second is the brand id, followed by the table name (or you could use country and brand names if needed). To find out which tables have changed, you would just need to find the Redis keys matching "table-update-1-1-*", iterate through them and check which are newer than the timestamp sent through the API.
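A minimal sketch of such a service class, assuming the redis gem and the key layout described above (the changed_since helper is hypothetical, added to illustrate the read side):

    require "redis"

    class RegisterTableUpdate
      REDIS = Redis.new

      # Record that a table changed for a given country/brand at a given time.
      def self.set(table_name, country_id, brand_id, timestamp = Time.now.to_i)
        REDIS.set("table-update-#{country_id}-#{brand_id}-#{table_name}", timestamp)
      end

      # Hypothetical read side: names of tables changed since last_sync_at.
      def self.changed_since(country_id, brand_id, last_sync_at)
        REDIS.keys("table-update-#{country_id}-#{brand_id}-*")
             .select { |key| REDIS.get(key).to_i > last_sync_at.to_i }
             .map    { |key| key.split("-").last }
      end
    end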
It is worth remembering that Redis is not as reliable as SQL databases. Its reliability depends on configuration, so you might want to read the Redis guidelines and decide whether you would like to go for it.

You can take advantage of the fact that ActiveRecord automatically records the time of every table row update (the updated_at column).
When checking what needs to be updated, select the objects you are interested in and compare their updated_at with the timestamp from the client app.
The advantage of this approach is that you don't need to keep an additional table listing all the updates on models, which should speed things up for API users and be easier to maintain.
The disadvantage is that you cannot see the changes in data over time; you only know that a change occurred and you can access the latest version. If you need to track changes in data over time efficiently, then I'm afraid you'll have to rework things from the top.
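For example, a rough sketch of that check, assuming the synchronised models carry brand_id/country_id columns and the API receives last_sync_at, brand_id and country_id parameters (the model list and names are assumptions):

    # Hypothetical controller action built on the standard updated_at column.
    SYNCED_MODELS = [Product, Price, Store]

    def changed_models
      last_sync_at = Time.zone.parse(params[:last_sync_at])

      changed = SYNCED_MODELS.select do |model|
        model.where(brand_id: params[:brand_id], country_id: params[:country_id])
             .where("updated_at > ?", last_sync_at)
             .exists?
      end

      render json: { changed: changed.map(&:name) }
    end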

(read last part - this is what you are interested in)
I would recommend using the decorator design pattern for changing the client queries. The client sends a query for what it wants, and the server decides what to give it based on the client's last update.
So:
the client sends a query that includes the time it last synced
the server looks at the query and takes into account the client's nature (device, country)
the server decorates (changes accordingly) the query so that only the relevant data is requested from the DB, and if that is not possible:
after the data is returned from the database manager it is trimmed down to what is relevant to where it is going
the server returns to the client all the new stuff the client cares about.
I assume that you have a time-entered field on your DB entries.
In that case the "decoration" of the query would, abstractly, just be adding something like a WHERE clause stating that you want data entered after the last update.
Finally, if you want this done for many devices/locales/whatever, implement a decorator for the query and one for the result of the query, and serve them to your clients as they should be served. (Keep in mind that, in contrast with a subclassing approach, you only have to implement one decorator per device/locale/whatever - not one for every combination!)
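A minimal sketch of such a query decorator in Rails terms (class, column and parameter names are assumptions for illustration):

    # Wraps a base relation and narrows it to what a given client should see.
    class ClientScopeDecorator
      def initialize(relation, country:, brand:, last_sync_at:)
        @relation     = relation
        @country      = country
        @brand        = brand
        @last_sync_at = last_sync_at
      end

      # Same interface as the underlying query, narrower result.
      def call
        @relation
          .where(country: @country, brand: @brand)
          .where("updated_at > ?", @last_sync_at)
      end
    end

    # Usage:
    # ClientScopeDecorator.new(Product.all, country: "DE", brand: "acme",
    #                          last_sync_at: params[:last_sync_at]).call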
Hope this helped!

Related

Rails: working on temporary instance between requests and then commit changes to database

I have already read Rails - How do I temporarily store a rails model instance? and similar questions but I cannot find a successful answer.
Imagine I have the model Customer, which may contain a huge amount of information attached (simple attributes, data in other tables through has_many relations, etc.). I want the application's user to access all the data in a single page with a single Save button on it. As the user makes changes to the data (i.e. changes simple attributes, adds or deletes has_many items, ...) I want the application to update the model, but without committing the changes to the database. Only when the user clicks Save should the model be committed.
For achieving this I need the model to be kept by Rails between HTTP requests. Furthermore, two different users may be changing the model's data at the same time, so these temporary instances should be bound to the Rails session.
Is there any way to achieve this? Is it actually a good idea? And, if not, how can one design a web application in which changes in a model cannot be retained in the browser but in the server until the user wants to commit them?
EDIT
Based on user smallbutton.com's proposal, I wonder if serializing the model instance to a temporary file (whose path would be stored in the session hash), and then reloading it each time a new request arrives, would do the trick. Would it work in all cases? Is there any piece of information that would be lost during serialization/deserialization?
As HTTP requests are stateless you need some kind of storage between requests. The session is the easiest way to store data between requests, but for you the session will not be enough because the data needs to be accessed by multiple users.
I see two ways to achieve your goal:
1) Get some fast external data storage like a key-value server (Redis, or anything you prefer: http://nosql-database.org/) where you put your objects via serializing/deserializing (e.g. JSON).
This may be fast depending on your design choices and data model, but it is the harder approach.
2) Just store your objects in the DB as you would regularly do and have them versioned (https://github.com/airblade/paper_trail). Then you can store a timestamp when people hit the save button and you can always go back to that state. This would be the easier approach, I guess, but may be a bit slower depending on the size of your data model changes (though I think it'll do).
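A minimal sketch of the paper_trail variant (model and attribute names are assumptions; this only shows the rollback mechanism, not the full draft workflow):

    # Gemfile: gem "paper_trail", plus its install migration
    class Customer < ApplicationRecord
      has_paper_trail
    end

    # Every save is versioned, so intermediate edits can go straight to the DB...
    customer.update(name: "Draft name")

    # ...and be rolled back if the user abandons the draft instead of saving.
    previous = customer.versions.last.reify
    previous.save! if previous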
EDIT: If you need real-time collaboration between users you should probably have a look at something like Firebase.
EDIT 2: Answer to your second question, whether you can put the data into a file:
Sure, you can do that. But you would need some kind of locking to prevent data loss if more than one person is editing. You will need that as well if you go for 1), but tools like Redis already include locks that achieve this (e.g. redis-semaphore). Depending on your data you may need to build some logic for merging different changes from different users.
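A rough sketch of that file-based variant (paths and naming are assumptions; Marshal is used here purely for illustration and has the serialization caveats the question mentions):

    # Serialize the draft object, keep only the path in the session,
    # and lock the file while writing so concurrent requests don't clobber it.
    path = Rails.root.join("tmp", "drafts", "customer-#{session.id}.dump")
    FileUtils.mkdir_p(path.dirname)

    File.open(path, "wb") do |f|
      f.flock(File::LOCK_EX)
      f.write(Marshal.dump(customer))
    end
    session[:draft_path] = path.to_s

    # On a later request, restore the draft:
    customer = Marshal.load(File.binread(session[:draft_path]))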
3) Another approach that came to my mind would be doing all editing with JavaScript and saving it in one db transaction. This would go well with synchronization tools like Firebase (or your own synchronization via the Rails streaming API).

What are CourseCompletions and when are they created?

I see the entries in the API documentation for getting "CourseCompletion" objects, but I do not see how these are entered in the Learning Environment. Can you explain what these objects are?
CourseCompletion records are essentially meta-data notes that you can attach to a user/course-offering combination to record that a user "completed" a course on such-and-such a date. The course completion record can also carry an expiry date for when the "completion" becomes out of date or no longer relevant. These features are not heavily used by D2L customers and are not exposed through the Web UI.
I don't believe there is any automation within the back-end service around the creation or modification of these records (for example, there isn't an event in the system when a course completion record would get created: a client would need to manually create such a record when it wants one to exist).

How to design a good QuickBooks integration solution

I have now played with the QBO and QBD APIs and feel I have a fair understanding of how it thinks and how to interact with it. So now it is time to design the actual integration solution.
Inside my application you can create new customers, quote services, perform services and, soon, pass invoices to QuickBooks. Sounds easy.
But what if the customer is not in QB yet? No problem - for each invoice I will look up the customer (need the id anyway) and if it doesn’t exist, add it. But if I have to look up the customer for each invoice it seems like it might be slow. I will likely have 30,000 customers and have 500-3000 invoices per day.
So my question is this; what are others doing?
a) Are you storing the QB id for each customer in your data?
b) How do you detect address changes (changed in your app and changed in QB)?
c) Is the batch submission interface so much faster I should use that?
Thanks for your help!
We oftentimes do store the QB id in our database. If we post an invoice into QB, we then store the QB id for future use in case we need to modify it.
As far as detecting changes on the customer record and other info goes, there are a couple of ways to handle the conflict resolution. One is to keep a timestamp on your side recording when changes are made. You can then compare this with the timestamp of the last change on the QB record and decide which one gets updated.
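As a rough sketch of that timestamp comparison (qb_id and updated_at are assumed local columns; fetch_qb_customer, push_to_quickbooks and update_from_quickbooks are hypothetical wrappers around whichever QuickBooks SDK you use):

    def sync_customer(local_customer)
      qb_customer = fetch_qb_customer(local_customer.qb_id)

      if local_customer.updated_at > qb_customer.last_updated_time
        push_to_quickbooks(local_customer)                  # our copy is newer
      else
        update_from_quickbooks(local_customer, qb_customer) # QB copy is newer
      end
    end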
FreddyMac,
To detect changes on the Intuit side you can construct a query with a CDC "as of" filter (ChangeDataCapture as of), which will return only the data that has changed since a date you provide.
https://ipp.developer.intuit.com/0010_Intuit_Partner_Platform/0050_Data_Services/0500_QuickBooks_Windows/0100_Calling_Data_Services/0015_Retrieving_Objects
You need to keep track of data changes on your side.
The batch submission is not faster; it's just easier for you to write the code.
The IPP SDK can queue the API calls for you and aggregate the responses.
regards,
Jarred

How do I update only the properties of my models that have changed in MVC?

I'm developing a webapp that allows the editing of records. There is a possibility that two users could be working on the same screen at a time, and I want to minimise the damage done if they both click Save.
Say User1 requests the page and then makes changes to the Address, Telephone and Contact Details, but before he clicks Save, User2 requests the same page.
User1 then clicks Save and the whole model is updated using TryUpdateModel(). If User2 then simply appends some detail to the Notes field and saves, the TryUpdateModel() call will overwrite the new details User1 saved with the old ones.
I've considered storing the original values for all the model's properties in a hidden form field, and then writing a custom TryUpdateModel to only update the properties that have changed, but this feels a little too like the Viewstate we've all been more than happy to leave behind by moving to MVC.
Is there a pattern for dealing with this problem that I'm not aware of?
How would you handle it?
Update: In answer to the comments below, I'm using Entity Framework.
Anthony
Unless you have any particular requirements for what happens in this case (e.g. lock the record, which of course requires some functionality to undo the lock in the event that the user decides not to make a change) I'd suggest the normal approach is an optimistic lock:
Each update you perform should check that the record hasn't changed in the meantime.
So:
Put an integer "version" property or a guid / rowversion on the record.
Ensure this is contained in a hidden field in the HTML and is therefore returned with any submit.
When you perform the update, ensure that the (database) record's version/guid/rowversion still matches the value that was in the hidden field (and, if you've gone with the manual integer approach, add 1 to the "version" when you do the update).
A similar approach is obviously to use a date/time stamp on the record, but don't do that because, to within the accuracy of your system clock, it's flawed.
[I suggest you'll find fuller explanations of the whole approach elsewhere; certainly if you google for information on NHibernate's Version functionality you'll find more detail.]
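The idea is language-agnostic; a rough sketch in Rails terms (model and column names are assumptions for illustration):

    # The update only succeeds when the row still carries the version number that
    # was rendered into the hidden field; otherwise someone else saved first.
    updated = Customer
      .where(id: params[:id], lock_version: params[:lock_version])
      .update_all(notes: params[:notes], lock_version: params[:lock_version].to_i + 1)

    if updated.zero?
      flash.now[:alert] = "This record was changed by someone else. Please review and retry."
      render :edit
    end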
Locking modification of a page while one user is working on it is an option. This is done in some wiki software like DokuWiki. In that case some JavaScript usually frees the lock after 5-10 minutes of inactivity so others can update the page.
Another option might be storing all revisions in a database so when two users submit, both copies are saved and still exist. From there on, all you'd need to do is merge the two.
You usually don't handle this. If two users happen to edit a document at the same time and commit their updates, one of them wins and the other loses.
Resource lockout can be done with stateful desktop applications, but with web applications any lockout scheme you try to implement may only minimize the damage, not prevent it.
Don't try to write an absolutely perfect and secure application. It's already good as it is. Just use it; the situation probably won't come up at all.
If you use LINQ to SQL as your ORM it can handle the issues around changed values using the conflicts collection. However, essentially I'd agree with Mastermind's comment.

Allow users to remove their account

I am developing a gallery which allows users to post photos, comments, vote and do many other tasks.
Now I think it is correct to allow users to unsubscribe and remove all their data if they want to. However, it is difficult to allow such a thing because you run the risk of breaking your application (e.g. what should I do when a comment has many replies? what should I do with pages that have many revisions by different users?).
Photos can easily be removed, but for other data (i.e. comments, revisions...) I thought there are three possibilities:
assign it to the admin
assign it to a user called "removed-user"
maintain the current associations (i.e. the user ID) and only rename the user's data (e.g. assign a new username such as "removed-user-24" and a non-existent e-mail such as "noreply-removed-user-24@mysite.com")
What are the best practices to follow when we allow users to remove their accounts? How do you implement them (particularly in Rails)?
I've typically solved this type of problem by having an active flag on the user, and simply setting active to false when the user is deleted. That way I maintain referential integrity throughout the system even if a user is "deleted". In the business layer I always validate that a user is active before allowing them to perform operations. I also filter out inactive users when retrieving data.
The usual thing to do is, instead of deleting them from the database, add a boolean flag field that is true for valid users and false for invalid ones. You will have to add code to filter on the flag. You should also remove all the user data you reasonably can. The primary purpose of this flag is to keep the links intact. It is a variant of renaming the user's data, but the flag is easier to check.
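A minimal Rails sketch of that flag approach (column and method names are assumptions):

    # Migration: add_column :users, :active, :boolean, default: true, null: false
    class User < ApplicationRecord
      scope :active, -> { where(active: true) }

      # "Delete" the account without breaking comments, votes, etc. that reference it.
      def deactivate!
        update!(active: false)
      end
    end

    # Wherever data is shown, filter on the flag:
    User.active.find_by(id: params[:id])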
Ideally, in a system you would not want to "hard delete" data. The best way I know of, and that we have implemented in the past, is a "soft delete". Maintain a status column in all your data tables indicating whether the row is active or not. Any row is "active" by default when created; as entries are deleted, they are made inactive.
All select queries which display data on screen filter results to only "active" records. This way you get the following advantages:
1. Data recovery is possible.
2. You can have a scheduled task at the database level (like a SQL procedure or something) which takes care of hard deletes once in a while, if really needed.
3. You can have an admin screen to decide which accounts, entries etc. you'd really want to mark for deletion.
4. Temporary disabling of an account can also be implemented with the same solution.
In the production environments I have worked on, a hard delete is a strict no-no. In fact, audits are maintained even for deletes. But if the application is really small, it'd be up to you.
I would still suggest a "virtual delete" or "soft delete" with periodic cleanup at the db level, which will be a faster, more efficient and optimized way of cleaning up.
I generally don't like to delete anything and instead opt to mark records as deleted/unpublished using states (with AASM, i.e. acts as state machine).
I prefer states and events to just using flags, as you can use events to update attributes, send emails etc. in one fell swoop, and then check states to decide what to do later on.
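A minimal sketch of that AASM approach (state, event and attribute names are assumptions):

    class User < ApplicationRecord
      include AASM

      aasm column: :state do
        state :active, initial: true
        state :removed

        # The event can update attributes and trigger mailers in one go.
        event :remove do
          transitions from: :active, to: :removed
          after do
            update(email: "removed-user-#{id}@example.invalid")
            # UserMailer.goodbye(self).deliver_later  # hypothetical mailer
          end
        end
      end
    end

    # user.remove! persists the state change and runs the callbacks.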
HTH.
I would recommend adding a delete-date field that contains the date/time the user unsubscribed - not only to the user record, but to all information related to that user. The app should check the field prior to displaying anything. You can then run a hard delete for all records 30 days (your choice of time) after the delete date. This keeps the information from being shown (you will probably need to update the app in a few places), gives the user time to re-subscribe (accidental unsubscription or rethinking), and lets a scheduled process delete old data. I would remove ALL information about the member and any related comments about the member or their previously published data (photos, etc.).
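A small sketch of that delete-date approach (column names and the cleanup job are assumptions):

    # Migration: add_column :users, :deleted_at, :datetime
    class User < ApplicationRecord
      scope :visible, -> { where(deleted_at: nil) }
    end

    # Unsubscribing just stamps the date; the record stays around for now.
    user.update!(deleted_at: Time.current)

    # Scheduled job (e.g. a daily rake task): hard-delete anything older than 30 days.
    User.where("deleted_at < ?", 30.days.ago).find_each(&:destroy)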
I am sure a lot has changed since, with data protection, GDPR, etc.
The reason I found this page is that I was looking for advice because of the new Apple policy extending account deletion requirements: https://developer.apple.com/news/?id=i71db0mv
We are using Ruby on Rails right now. Are your answers a little outdated, or are they still useful right now?
I was thinking of something like this:
create a new table "old_user_table" with the old user_id, first name, second name, email, and booking slug.
This would allow keeping all users who made previous bookings while deleting their user IDs in the app. We need to keep all booking records in the app for audit purposes for the last 5 years.
If a user signed up with the app but never made a booking, the user will not be transferred to "old_user_table", because they never booked.
Does it make sense? Something like that?
