How do I track or get notified whenever a record is inserted or updated in a DB? I would like to notify an external application of the changes in near real time whenever such changes occur in the DB. Are there DBMS-independent and programming-language-independent ways of doing this? If not, is it possible with MS Access and MS SQL Server in particular? I'm looking to avoid continuous polling of the DB, of course.
With SQL Server it is possible to load a DLL within SQL Server itself and call its methods from extended stored procedures. The DLL could then notify other applications, usually via a TCP socket.
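A minimal sketch of what the receiving application could look like, in Python; the port and the one-line-per-change message format are assumptions for illustration, not part of the answer:

import socketserver

class ChangeHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Each line written by the notifying DLL is one change notification
        for line in self.rfile:
            print("change notification:", line.decode().strip())

# Listen on the port the extended stored procedure's DLL is configured to hit
with socketserver.TCPServer(("0.0.0.0", 9000), ChangeHandler) as server:
    server.serve_forever()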
I think the latest version of Microsoft SQL Server allows you to raise events in your .NET code based on server conditions and events. I haven't tried it, and I haven't heard of any DBMS-independent way of doing this (without polling the DB every X milliseconds).
With MS Access, I keep track of record changes and record additions with fields in the main table that store the user name and the date when the record is created or updated.
You need to use a Windows API call to record the user name, usually run when the switchboard form is opened.
I am digging to find a way to track specific changes. My database is used for project management. I would like to keep track of what specifically was changed, not just the who and when that I have now.
I think this meets the requirements of the original question. I can later add the Windows API call that reads the name of the user.
Private Sub Form_BeforeInsert(Cancel As Integer)
    ' Stamp new records with who created them and when
    Me!UserCreated = UCase(CurrentUser())
    Me!DateCreated = Now()
End Sub

Private Sub Form_BeforeUpdate(Cancel As Integer)
    ' Stamp edited records with who changed them and when
    Me!DateModified = Now()
    Me!UserModified = UCase(CurrentUser())
End Sub
-- Mike
To do this with SQL Server, you use SQL Server Notification Services: write a DLL that subscribes to notifications from the DB for data updates, which you can then process in some way.
However, MS has said that they are removing this from SQL Server 2008.
Oracle has something similar (though they tend to leave their technology in place), but I've not seen anything that is database-neutral.
I'm working on an Azure database, just adding a couple of stored procedures, and making sure the program I'm building with it in .NET is all aligned properly.
I'm not going to go into the stored procedure itself or the program I'm developing, because I don't believe the problem is there: I have a development program and database using the exact same code, and they work fine. I'm using Microsoft's SQL Server Management Studio to handle everything on the server side.
The only difference in the current setup is that I scripted a bunch of the stored procedures myself, plus a single view of a table that I did not create (I did not create the table, but I made a view of it in a slightly different format).
The person who created most of these databases and tables is, I guess, one of the database administrators (not Microsoft, but an employee of the company using their services). I, on the other hand, am a freelance programmer, and I'm guessing I have somewhat limited access to the server (limited credentials), although it's letting me do more or less anything I need, like creating SPs.
My current (and only) problem is a single stored procedure that runs through without an error but does not update the table (the table I did not create). The stored procedure just inserts a couple of records and then deletes a record from the same table.
It deletes the record just fine, but for some reason the INSERT doesn't insert anything.
Again, this works fine on another development database, and the programs are sending the exact same strings, but this new database just doesn't want to play along.
Could this be a permissions problem between my stored procedure and the table I did not create?
I would love to dump this onto the admin guy (and already did, but he dumped it back on me, haha), so I just want to be sure I'm not wasting his time, and give him something solid to go on.
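One concrete thing I could hand him: check the effective permissions my login has on that table with SQL Server's built-in HAS_PERMS_BY_NAME function. A minimal sketch in Python with pyodbc; the table name and connection string are placeholders:

import pyodbc

conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                      "SERVER=myserver.database.windows.net;DATABASE=mydb;"
                      "UID=me;PWD=secret")
row = conn.cursor().execute(
    "SELECT HAS_PERMS_BY_NAME('dbo.TheTable', 'OBJECT', 'INSERT'),"
    "       HAS_PERMS_BY_NAME('dbo.TheTable', 'OBJECT', 'DELETE')").fetchone()
# 1 = allowed, 0 = denied; INSERT at 0 while DELETE is 1 would point to permissions
print("INSERT allowed:", row[0], "| DELETE allowed:", row[1])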
Thanks for your help, Paul S.
I am looking for a solution for logging data changes for a public API.
The client app needs to be told which tables in the DB have changed and need to be synchronised since the app last synchronised; this also needs to be scoped to a specific brand and country.
Current Solution:
A Version table with the class_names of models, touched from every model on create, delete, touch and save actions.
When we touch the version for a specific model, we also look at the reflected associations and touch them too.
The Version model is scoped to brand and country.
The REST API responds to a request that includes last_sync_at (a timestamp), brand and country.
Rails looks at the Version records with the given attributes and returns the class_names of the models that changed since the last_sync_at timestamp.
This solution works, but the problem is performance, and it is also hard to maintain.
UPDATE 1:
Maybe the simpler question is this:
What is the best practice for finding out, and telling frontend apps, when and what needs to be synchronised - in terms of the whole concept?
Conditions:
Frontend apps need to download only their own content changes, not the whole dataset.
Synchronisation is not invoked when an application from a different country or brand needs to be synchronised.
Thank you.
I think the best solution would be to use Redis (or some other key-value store) and save your information there. Writing to Redis is much faster than to any SQL DB. You could write a service class that saves the data like:
RegisterTableUpdate.set(table_name, country_id, brand_id, timestamp)
Such a call would save the given timestamp under a key that could look like, e.g., table-update-1-1-users, where the first number is the country id, the second number is the brand id, and they are followed by the table name (or you could use country and brand names if needed). If you want to find out which tables have changed, you just need to find the Redis keys matching the pattern "table-update-1-1-*", iterate through them, and check which are newer than the timestamp sent through the API.
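A minimal sketch of such a service class in Python with the redis-py package (key format as in the example above):

import time
import redis

r = redis.Redis()

def register_table_update(table_name, country_id, brand_id):
    # e.g. key "table-update-1-1-users" holds the time of the last change
    r.set(f"table-update-{country_id}-{brand_id}-{table_name}", time.time())

def changed_tables(country_id, brand_id, last_sync_at):
    # Scan the keys for this country/brand, keep tables newer than last_sync_at
    changed = []
    for key in r.scan_iter(f"table-update-{country_id}-{brand_id}-*"):
        if float(r.get(key)) > last_sync_at:
            changed.append(key.decode().rsplit("-", 1)[-1])
    return changed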
It is worth remembering that Redis is not as reliable as SQL databases. Its reliability depends on configuration, so you might want to read the Redis guidelines and decide whether you would like to go for it.
You can take advantage of the fact that ActiveRecord automatically records the time of every update to a table row (the updated_at column).
When checking what needs to be updated, select the objects you are interested in and compare their updated_at with the timestamp from the client app.
The advantage of this approach is that you don't need to keep an additional table listing all the updates to models, which should speed things up for the API users and be easier to maintain.
The disadvantage is that you cannot see the changes in the data over time; you only know that a change occurred and can access the latest version. If you need to track changes in the data over time efficiently, then I'm afraid you'll have to rework things from the top.
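A minimal sketch of the comparison, using Python and sqlite3 to keep it self-contained; in Rails itself this would be a scope like Model.where("updated_at > ?", last_sync_at):

import sqlite3

def changed_since(conn, table, last_sync_at):
    # Rows the client still needs; relies on the updated_at column
    # that ActiveRecord maintains automatically
    return conn.execute(
        f"SELECT * FROM {table} WHERE updated_at > ?", (last_sync_at,)
    ).fetchall()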
(read the last part - that is what you are interested in)
I would recommend using the decorator design pattern for changing the client queries. The client sends a query for what it wants, and the server decides what to give it based on the client's last update.
so:
the client sends a query that includes the time it last synced
the server sees the query and takes the client's nature (device, country) into account
the server decorates (changes accordingly) the query so that it requests only the relevant data from the DB, and if that is not possible:
after the data are returned from the database manager, they are trimmed to be relevant to where they are going
the server returns to the client all the new stuff that the client cares about
I assume that you have a time-entered field on your DB entries.
In that case the "decoration" of the query (abstractly) would just be to add something like a WHERE clause to your query stating that you want data entered after the last update.
Finally, if you want that to be done for many devices/locales/whatever, implement a decorator for the query and for the result of the query, and serve them to your clients as they should be served. (Keep in mind that, in contrast with a subclassing approach, you will only have to implement one decorator for each device/locale/whatever - not one for every combination!)
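A minimal sketch of the idea in Python, using a function decorator to stand in for the pattern, and assuming an updated_at-style column and a base query already scoped to the client's country and brand:

def with_last_sync(query_fn):
    # Decorates a base query so it requests only rows changed since the last sync
    def wrapper(client_ctx):
        sql, params = query_fn(client_ctx)
        sql += " AND updated_at > ?"
        params = params + (client_ctx["last_sync_at"],)
        return sql, params
    return wrapper

@with_last_sync
def products_for(client_ctx):
    # Base query, scoped to the client's nature (country/brand)
    return ("SELECT * FROM products WHERE country_id = ? AND brand_id = ?",
            (client_ctx["country_id"], client_ctx["brand_id"]))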
Hope this helped!
I have now played with the QBO and QBD APIs and feel I have a fair understanding of how they think and how to interact with them. So now it is time to design the actual integration solution.
Inside my application you can create new customers, quote services, perform services and, soon, pass invoices to QuickBooks - sounds easy.
But what if the customer is not in QB yet? No problem: for each invoice I will look up the customer (I need the id anyway) and, if it doesn't exist, add it. But if I have to look up the customer for each invoice, it seems like it might be slow. I will likely have 30,000 customers and 500-3,000 invoices per day.
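I'm imagining a lookup-or-create flow with a local id cache, something like this sketch (qb and its find_customer/add_customer methods are hypothetical stand-ins, not real SDK calls):

_qb_id_cache = {}  # local customer key -> stored QB id

def qb_customer_id(qb, customer_name):
    # Only the first invoice for a given customer costs a round trip to QB
    if customer_name not in _qb_id_cache:
        found = qb.find_customer(customer_name)       # hypothetical lookup call
        if found is None:
            found = qb.add_customer(customer_name)    # hypothetical create call
        _qb_id_cache[customer_name] = found.id
    return _qb_id_cache[customer_name]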
So my question is this: what are others doing?
a) Are you storing the QB id for each customer in your data?
b) How do you detect address changes (changed in your app and changed in QB)?
c) Is the batch submission interface so much faster that I should use it?
Thanks for your help!
We oftentimes do store the QB id in our database for use. If we post an invoice into QB, we'll then store the QB id for future use in case we need to modify it.
As far as detecting changes to the customer record and other info, there are a couple of ways to handle the conflict resolution. One is to keep a timestamp on your side recording when changes are made. You can then compare this with the timestamp of the last change on the QB record and make your decision as to which one gets updated.
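A minimal sketch of that last-writer-wins comparison (the field names and the two update directions are assumptions):

def resolve_conflict(our_record, qb_record):
    # Whichever side changed most recently wins and overwrites the other
    if our_record["updated_at"] >= qb_record["updated_at"]:
        return "push_to_qb", our_record
    return "update_local", qb_record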
FreddyMac,
To detect changes on the Intuit side, you can construct a query with a CDCasOf filter (Change Data Capture "as of"), which will return only the data that has changed since a date you provide.
https://ipp.developer.intuit.com/0010_Intuit_Partner_Platform/0050_Data_Services/0500_QuickBooks_Windows/0100_Calling_Data_Services/0015_Retrieving_Objects
You need to keep track of data changes on your side.
The batch submission is not faster; it's just easier for you to write the code.
The IPP SDK can queue the API calls for you and aggregate the responses.
regards,
Jarred
I know that there are a few questions like this, but this one is about a specific situation.
I'm developing a platform for taking tests online. A test is a set of images and accompanying questions. It's hosted on Azure and uses MVC 4.
What I would love is this: if the user has taken half the test and the browser crashes, or something else makes them leave the test, then when they come back they get the option to resume.
I have one idea myself, but I would like to know if there are other options. I was considering using localStorage: when a user starts a test, the information for the test is saved in localStorage, and every time they move on to a new image, the local state is updated. Then, when the test player is loaded, it checks whether any ongoing tests are available.
How could I do it? Has anyone had a similar problem/solution?
Local Storage is not a good choice, because it is specific to each instance. That means if you have two instances of a Web Role (the recommended minimum), then each instance would have its own local storage. They are not shared, and there is no way to access the local storage of a specific machine.
You really have two options: you could use a database like SQL Azure, or use Azure Caching. Azure Caching is probably easier, since it's super easy to serialize/deserialize complex objects, but the downside is that the cache is only valid for 72 hours: if a cached object isn't accessed or updated within 72 hours, it gets purged.
I would not recommend storing this information in the client browser. The user has access to local storage, cookies, etc., and could modify them. You could store the test start time in your database on the server. Then, every time the user sends a request to answer a question, you would verify whether the test is still active or the maximum allowed time has elapsed.
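A minimal sketch of that server-side check in Python (the time limit and field are assumptions):

from datetime import datetime, timedelta

MAX_TEST_TIME = timedelta(minutes=60)  # assumed maximum allowed time

def test_still_active(started_at):
    # started_at is loaded from the server-side database, never from the client
    return datetime.utcnow() - started_at < MAX_TEST_TIME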
I have a website which uses a MySQL database for its whole operation. But for a new requirement I need to query an external Oracle database (used by another component), compile a list of items, and display it on a page of the website. How is it possible to connect to an external database just for rendering a single page?
And is it possible to cache the queried result for, say, one month before invalidating the cache and fetching the updated list of items? I don't want to query the external Oracle DB on each request.
Why not a monthly job that just copies the data from the Oracle database into the MySQL database?
As stated by Myers, a simple solution is to accept a data feed. For example, a cron job could pull data from the Oracle database at defined intervals, say daily or weekly, and then insert the data into your web application's local MySQL database. The whole process could be essentially transparent to your web application. The caching interval, or how long you go between feeds, would be up to you.
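For example, a minimal sketch of such a feed job in Python, assuming the cx_Oracle and pymysql packages and illustrative table and column names; cron would run it at whatever interval you choose:

import cx_Oracle
import pymysql

ora = cx_Oracle.connect("user/password@oracle-host/SERVICE")
my = pymysql.connect(host="localhost", user="web",
                     password="secret", database="site")

with ora.cursor() as src, my.cursor() as dst:
    src.execute("SELECT id, name, price FROM items")
    dst.execute("TRUNCATE TABLE cached_items")  # refresh the local copy
    dst.executemany(
        "INSERT INTO cached_items (id, name, price) VALUES (%s, %s, %s)",
        src.fetchall())
my.commit()
ora.close()
my.close()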
I'll also point out that this could be an opportunity for an API that would more readily support sharing data between applications. This would, of course, be more work than a simple data feed, but it has the potential to be more useful to more people.