Check for database records every few seconds - Delphi

I have Delphi 10.1 and a Firebird 3 database, and I need to check for new records every 2-3 seconds and display them on the main screen.
Currently I am using a timer which checks the database for new records every 3 seconds and displays them in the form if new records are available in the table.
But this is not the right way to perform the task. What is the best practice for continuously checking for new database records?
Thanks

You can write an insertion trigger and post a database event from there. For the client-side implementation with FireDAC, see the Database Alerts topic.
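A minimal sketch of that approach, assuming a table named MY_TABLE, an event named new_record, and a TFDEventAlerter already connected to the database (all names here are illustrative):

// Server side (run once against the database): a trigger that posts an
// event for every insert.
//   CREATE TRIGGER TR_MY_TABLE_AI FOR MY_TABLE
//   ACTIVE AFTER INSERT POSITION 0
//   AS
//   BEGIN
//     POST_EVENT 'new_record';
//   END
procedure TMainForm.FormCreate(Sender: TObject);
begin
  // Subscribe to the server event instead of polling with a timer
  FDEventAlerter1.Names.Text := 'new_record';
  FDEventAlerter1.OnAlert := EventAlerterAlert;
  FDEventAlerter1.Register;
end;

procedure TMainForm.EventAlerterAlert(ASender: TFDCustomEventAlerter;
  const AEventName: string; const AArgument: Variant);
begin
  // Re-run the query only when the server signals that a row was inserted
  qryRecords.Refresh;
end;

With this in place the form updates as soon as a row is inserted, and the 3-second timer can be removed.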

Related

Core Data transactions from the public database are coming in very slowly

I'm developing an app that uses CloudKit as its main database. I have a relatively small database (around 200 entries, each with 2-3 relationships) that I offer pre-populated as the public database (new in iOS 14).
I noticed that the CK mirroring is very, very slow. I get the first 5-6 transactions in a matter of seconds, and then I have to wait around 2 minutes for all the data to populate.
As I can't show partial results (I can't let the user see the main entity if its relationships are not fetched yet), this is a big problem for me.
Is there a way to speed up the CK mirroring process (make it more efficient)?
How can I diagnose what is taking so long? Apple recommended at the last WWDC using this public database as an initial set of data, but people will get frustrated if the initial app load takes 2 minutes :o
This is kinda expected behaviour. Let's say, for example, you have 500 entries without any relationships and you want to fetch them with CKQueryOperation. When the operation is added to the public container, it will not return all 500 entries at once; it will return a first batch (typically 100), then query for the next batch using the cursor, and so on.
Edit
Is there a way to speed up the CK mirroring process (make it more efficient)?
No
These operations may also be slow because of network issues; you should take that into consideration.

Rails ActiveRecord object persistence for a specified time

In a Rails 3 app, I need to create objects that are persistent for a specified time. For example, say I create an ActiveRecord object (of a model) but need it to be around for only 12 hours. The ActiveRecord object needs to be automatically destroyed from the database after those 12 hours. There is no need for the deletion to happen through a controller; it could happen directly on the database. What is the best way to achieve this? There might be thousands of such objects getting created and destroyed at any time, and each needs to be destroyed at its specified time.
I looked around and found some gems - whenever & delayed_job. whenever is more of a wrapper around cron and hence seems unsuitable considering the number of ActiveRecord objects to be managed. I have yet to investigate delayed_job, which uses a background process, but wanted to get some feedback on the available options.
Thanks in advance.
Why don't you run a cron/whenever job every few minutes that queries the database for objects where created_at < 12.hours.ago? That way you delete all expired objects with a single query instead of going one by one.
# config/schedule.rb (whenever gem): run the cleanup every 5 minutes
every 5.minutes do
  runner "Model.where('created_at < ?', 12.hours.ago).delete_all"
end
If you need to be more precise, I can imagine using Redis keys for your objects and setting the expiration on the keys to 12 hours, though you would need to figure out how to connect your models with Redis.
In any case, something needs to be observing the database for this to happen, unless you use expiring keys.

Interbase transaction monitoring

I have a very strange problem with transactions in Interbase 7.5 which seem to be stuck.
I can track the problem with IBConsole -> right click DB -> Performance Monitor -> Transactions
Usually this list should show only a few active transactions, but I get several hundred active transactions when I start my application (a web module for an Apache web server using Delphi 7 Interbase components, e.g. IBQuery, IBTransaction, ...).
Transaction type is always listed as snapshot, if this is of relevance.
I have already triple-checked all SQL statements and cannot find anything that should produce such problems...
Is there any way to get the SQL statements of a specific transaction?
Any other suggestions on how to find such a problem would be very welcome.
Is there any way to get the SQL statements of a specific transaction?
Yes, you can SELECT from TMP$STATEMENTS WHERE TRANSACTION_ID = .... That's from memory, but it should get you started.
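A sketch of that lookup from Delphi, reusing the thread's IBX components (the column name is copied from the answer above, which is from memory, so verify it against the InterBase 7.5 documentation):

// Hypothetical transaction ID taken from the Performance Monitor list
IBQueryMon.SQL.Text :=
  'SELECT * FROM TMP$STATEMENTS WHERE TRANSACTION_ID = :tid';
IBQueryMon.ParamByName('tid').AsInteger := 42;
IBQueryMon.Open;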
In IB Performance Monitor, you can locate the transaction from the statements tab, using the button on the toolbar. Can't remember if you can go the other way in that app. It's been a long time since I wrote it!
Active IBX datasets require an active transaction at all times. If you don't have active datasets, just don't forget to commit all the active transactions.
If you have active datasets, you can configure all your components to use the same TIBTransaction object, and you can also configure that single TIBTransaction to commit or roll back after an idle timeout via the IdleTimer and DefaultAction properties, as sketched below.
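A sketch of that setup, with assumed component names:

// One shared transaction for every dataset; after 30 idle seconds the
// DefaultAction fires and commits, closing the linked datasets.
IBTransaction1.DefaultDatabase := IBDatabase1;
IBTransaction1.IdleTimer := 30;            // seconds of inactivity
IBTransaction1.DefaultAction := TACommit;  // action taken when the timer fires
IBQuery1.Transaction := IBTransaction1;
IBQuery2.Transaction := IBTransaction1;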
Terminating the transaction (by committing or rolling back, manually or automatically) will close all the linked datasets (TIBQuery, TIBTable and the like).
You may be tempted to use the CommitRetaining or RollbackRetaining methods to terminate the transaction without closing the related datasets, but this may affect the performance of the server, and my advice is to always avoid them.
If you want to improve your application, you should consider changing your database connection layer or introducing an in-memory-capable dataset on top of IBX, for example Delphi's TClientDataSet. It lets you retrieve data and retain it in memory while closing all the underlying datasets (and transactions), and still use the traditional Insert/Append/Edit/Delete methods to modify the data and then apply those changes to the database in a new, short-lived transaction.

When to perform SQL INSERT?

I am using a VCL TCP server component which fires an event every time data is received on a TCP port. Within the event, the data is available in the text parameter of the procedure. Because I want to save this data into a MySQL database, I am wondering which is the best approach to this problem: directly use an INSERT SQL command within the event handler for every piece of data received, or store the data in memory (i.e. a TStrings) and then call a function every X minutes (using a timer) to execute the INSERT command?
Thanks
It's not really an answer to your question, but do consider the risk of your application failing (for any reason) between receiving the data and executing the INSERT.
If you use a local store of some kind as an intermediate to mitigate this risk, consider the risk of a crash while that store is being updated.
RDBMS vendors go to great lengths to ensure that data that has been accepted by successful completion of an INSERT, UPDATE or similar command will not be lost or corrupted. Can you put in similar effort?
Generally speaking, if you are accepting data piecemeal rather than in bulk, I would probably keep an open connection to the database and insert data as you receive it, and only send an acknowledgement back to the consumer of your application once it has been pushed to the database.
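A sketch of that insert-as-you-receive approach; the event signature here is invented from the question's description (a text parameter in the receive event), and the query component and table names are assumptions:

// FDQueryInsert.SQL is assumed to be:
//   INSERT INTO received_data (payload) VALUES (:payload)
procedure TForm1.TCPServerReceive(Sender: TObject; const Text: string);
begin
  FDQueryInsert.ParamByName('payload').AsString := Text;
  FDQueryInsert.ExecSQL;  // the row is in MySQL before we acknowledge
  // ... send the acknowledgement back to the sender only after this point ...
end;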

Insert 100k rows in database from website

I have a website where the user can upload an Excel spreadsheet to load data into a table. There can be a few hundred thousand rows in the spreadsheet. When the user uploads the file, the website needs to insert an equal number of rows into a database table.
What strategy should I take to do this? I was thinking of displaying a "Please wait" page until the operation is completed, but I want the user to be able to continue browsing the website. Also, since the database will be kind of busy at that time, wouldn't that stop people from working on the website?
My data access layer is in NHibernate.
Thanks,
Y
Displaying a please-wait page would be pretty unfriendly, as your user could be waiting quite a while, and it would block threads on your web server.
I would upload the file, store it, and create an entry in a queue (you'll need another table for this) to indicate that there is a file waiting to be processed. You can then have another process (which could even run on its own server) which picks up tasks from this queue table and processes the XLS file in its own time.
I would create an upload queue to submit this request to. Then the user could just check in on the queue every once in a while. You could store the progress of the batch operation in the queue as the rows are processed.
Also, database servers are robust, powerful, multitasking systems. Unless you have observed a problem with the website while the inserts are happening, don't assume it will stop people from working on the website.
However, as far as insert or concurrent read/write performance goes, there are mechanisms to deal with this. You could use the INSERT LOW_PRIORITY syntax in MySQL or have your application throttle the inserts by sleeping a millisecond between each one. Also, how you craft your insert statements, whether you use bound parameters, and whether you use multi-valued inserts can affect the insert performance, and its impact on other clients, to a large degree.
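To illustrate the batching point in the thread's main language (Delphi with FireDAC, not the asker's NHibernate layer): Array DML binds many rows to one parameterized INSERT and sends them in a single round trip, with an effect similar to a multi-valued insert. Table and column names below are illustrative:

// Sketch: batched, bound-parameter inserts via FireDAC Array DML
procedure InsertBatch(const Names: array of string; const Amounts: array of Double);
var
  i: Integer;
begin
  FDQuery1.SQL.Text :=
    'INSERT INTO import_rows (name, amount) VALUES (:name, :amount)';
  FDQuery1.Params.ArraySize := Length(Names);   // rows per round trip
  for i := 0 to High(Names) do
  begin
    FDQuery1.Params[0].AsStrings[i] := Names[i];
    FDQuery1.Params[1].AsFloats[i] := Amounts[i];
  end;
  FDQuery1.Execute(FDQuery1.Params.ArraySize);  // one batched round trip
end;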
On submit you could pass the DB operation to an asynchronous RequestHandler and set a session value when it's done.
While the async process is in progress, you can check the session value on each request and, if it is set (operation completed), display a message, e.g. in a modal or whatever message mechanism you have.
