I'm developing an application that displays information in a DBGrid via a TSimpleDataSet (dbExpress components).
The software in question is used on 2 different computers by 2 different people.
They both view and edit the same information at different times.
I'm trying to figure out a way to automatically update the DBGrid (or rather, the DataSet, right?) on Computer B once Computer A makes a change to a row (edits something/whatever) and vice-versa.
Currently I've set up a TButton named Refresh that once clicked executes the following code:
procedure TForm2.actRefreshDataExecute(Sender: TObject);
begin
  // For each dataset: merge the local change log, post pending changes
  // to the server, then re-read the data.
  dbmodule.somenameDataSet.MergeChangeLog;
  dbmodule.somenameDataSet.ApplyUpdates(-1);
  dbmodule.somenameDataSet.Refresh;

  dbmodule.somename1DataSet.MergeChangeLog;
  dbmodule.somename1DataSet.ApplyUpdates(-1);
  dbmodule.somename1DataSet.Refresh;

  dbmodule.somename2DataSet.MergeChangeLog;
  dbmodule.somename2DataSet.ApplyUpdates(-1);
  dbmodule.somename2DataSet.Refresh;

  dbmodule.somename3DataSet.MergeChangeLog;
  dbmodule.somename3DataSet.ApplyUpdates(-1);
  dbmodule.somename3DataSet.Refresh;
end;
This is fine and works as intended, once clicked.
I'd like an auto-update feature for this: for example, when Computer A edits information in a row, Computer B's DBGrid should update its display accordingly, without the need to click the refresh button.
I figured I would use a TTimer set to a specific interval, in the software on both PCs.
My actual question is:
Is there a better way than a TTimer for this? If so, please elaborate.
Also, if the TTimer route is the way to go, any further info you might find useful to state would be appreciated (pros and cons and so on).
I'm using RAD Studio 10 Seattle and dbExpress components; the datasets connect to a MySQL database on the hosting account where my website is.
Thanks!
Well, Ken White and Sertac Akyuz are certainly correct that using a server-originated notification to determine when to refresh your local dataset is preferable to continually re-reading all the data you are using from the server.
The problem AFAIK is that there is no Emba-supplied notification system which works with MySql. See this list of databases supported by FireDAC's Database Alerts:
http://docwiki.embarcadero.com/RADStudio/XE8/en/Database_Alerts_(FireDAC)
and note that it does not list MySql.
Luckily, I think there is a work-around which should be viable for a v. small system like yours currently is. As I understand it, you and your colleague's PCs are on a LAN and the MySql Server is outside your LAN and on the internet. In that situation, it doesn't need a round trip to the server for one of you to get a notification that the other has changed something in the database. Using an analogy akin to Ken's, you can, as it were, lean over the desk and say to your colleague "Hey, I've changed something, so you need to refresh your data."
A very low-tech way of implementing that would be to have, somewhere on your LAN, a resource that both of you can easily get at, and which you update whenever you make a change to the DB that means the other of you should refresh their data from the server. One way to do that is to have a small, shared data file with a number of records in it, one per server db table, each holding some sort of timestamp or version-ID number which gets updated when you update the corresponding server table. You can then periodically check (poll) this data file to see whether a given table has changed since you last checked; if it has, you re-read the data you want from the server and update your local record of the version info you read from the shared file.
You can update the shared file using handlers for the events of your Delphi client-side datasets.
There are a number of variations on this theme that I'm sure will be apparent to you; the implementational details really don't matter.
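A minimal sketch of the polling side, assuming a hypothetical shared INI file (\\server\share\dbversions.ini) holding one version number per table, a TTimer called PollTimer and a form field FLastSeenVersion that remembers the last value seen; all of those names are placeholders:

uses
  System.SysUtils, System.IniFiles;

procedure TForm2.PollTimerTimer(Sender: TObject);
var
  Ini: TIniFile;
  ServerVersion: Integer;
begin
  // Read the version number the other client last wrote for this table.
  Ini := TIniFile.Create('\\server\share\dbversions.ini');
  try
    ServerVersion := Ini.ReadInteger('Tables', 'somename', 0);
  finally
    Ini.Free;
  end;

  // If it has moved on since we last looked, re-read the data from the server.
  if ServerVersion > FLastSeenVersion then
  begin
    FLastSeenVersion := ServerVersion;
    actRefreshDataExecute(nil);  // the existing refresh code from the question
  end;
end;

The writing side (updating the version numbers from your dataset event handlers) is where the locking discussed below comes in.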
To update the shared file I'm talking about, you will need to lock it while writing to it. This answer:
How do I get the handle for locking a file in Delphi?
will show you how to do that.
Of course, the shared local resource doesn't have to be a data file. One alternative would be to use a Microsoft Message Queue service, which is sometimes used for this kind of thing, but has a steeper learning curve than a shared data file.
By the way, this kind of thing is far easier to do (at least on a small scale like you have) if you use 3-tier database access (e.g. using datasnap).
In a three tier system, only the middle tier (a Delphi datasnap server which you write, but it's not that hard) talks to the server, and the clients only talk to the middle tier. This makes it easy for the middle tier server to notify the other client(s) when one of them changes the db data.
The three-tier arrangement also helps minimise the security problems with accessing a database server via the internet, because you only need one secure connection to the server, not one per client. But that's straying a bit far from your immediate problem.
I hope all this is clear, if not, ask.
Just use a timer and make it refresh the dataset every 5 min. No big deal.
If the usage is not frequent then you can set it to fire every 10 or 15 min.
There is nothing wrong with a timer if it is set to a longer interval.
Today's broadband connections can easily handle the traffic, and so can Access.
If the table is not huge, of course.
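A minimal sketch of that, assuming a TTimer called RefreshTimer and re-using the refresh action from the question:

procedure TForm2.FormCreate(Sender: TObject);
begin
  RefreshTimer.Interval := 5 * 60 * 1000;  // 5 minutes, in milliseconds
  RefreshTimer.Enabled := True;
end;

procedure TForm2.RefreshTimerTimer(Sender: TObject);
begin
  actRefreshDataExecute(nil);  // same code the manual Refresh button runs
end;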
I need to insert 800000 records into an MS Access table. I am using Delphi 2007 and the TAdoXxxx components. The table contains some integer fields, one float field and one text field with only one character. There is a primary key on one of the integer fields (which is not autoinc) and two indexes on another integer and the float field.
Inserting the data using AdoTable.AppendRecord(...) takes > 10 Minutes which is not acceptable since this is done every time the user starts using a new database with the program. I cannot prefill the table because the data comes from another database (which is not accessible through ADO).
I managed to get it down to around 1 minute by writing the records to a tab-separated text file and using a TADOCommand object to execute
insert into table (...) select * from [filename.txt] in "c:\somedir" "Text;HDR=Yes"
But I don't like the overhead of this.
There must be a better way, I think.
EDIT:
Some additional information:
MS Access was chosen because it does not need any additional installation on the target machine(s) and the whole database is contained in one file which can be easily copied.
This is a single user application.
The data will be inserted only once and will not change for the lifetime of the database. Though, the table contains one additional field that is used as a flag to indicate that the corresponding record in another database has been processed by the user.
One minute is acceptable (up to 3 minutes would also be) and my solution works, but it seems too complicated to me, so I thought there should be an easier way to do this.
Once the data has been inserted, the performance of the table is quite good.
When I started planning/implementing the feature of the program working with the Access database the table was not required. It only became necessary later on, when another feature was requested by the customer. (Isn't that always the case?)
EDIT:
From all the answers I got so far, it seems that I already got the fastest method for inserting that much data into an Access table. Thanks to everybody, I appreciate your help.
Since you've said that the 800K records data won't change for the life of the database, I'd suggest linking to the text file as a table, and skip the insert altogether.
If you insist on pulling it into the database, then 800,000 records in 1 minute is over 13,000 / second. I don't think you're gonna beat that in MS Access.
If you want it to be more responsive for the user, then you might want to consider loading some minimal set of data, and setting up a background thread to load the rest while they work.
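If you did want to try that, a rough sketch of a loader thread might look like this; TBulkLoadThread and DataModule1.ImportRemainingRecords are hypothetical names standing in for your existing bulk-insert code:

uses
  Classes, ActiveX;

type
  TBulkLoadThread = class(TThread)
  protected
    procedure Execute; override;
  end;

procedure TBulkLoadThread.Execute;
begin
  // ADO is COM-based, so the thread needs its own COM initialization and
  // its own ADO connection; don't reuse the main form's connection here.
  CoInitialize(nil);
  try
    DataModule1.ImportRemainingRecords;  // hypothetical: your existing insert code
  finally
    CoUninitialize;
  end;
end;

// started after the minimal data set has been loaded:
TBulkLoadThread.Create(False);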
It would be quicker without the indexes. Can you add them after the import?
There are a number of suggestions that may be of interest in this thread Slow MSAccess disk writing
What about skipping the text file and using ODBC or OLEDB to import directly from the source table? That would mean altering your FROM clause to use the source table name and an appropriate connect string as the IN '' part of the FROM clause.
EDIT:
Actually I see you say the original format is xBase, so it should be possible to use the xBase ISAM that is part of Jet instead of needing ODBC or OLEDB. That would look something like this:
INSERT INTO table (...)
SELECT *
FROM tablename IN 'c:\somedir\'[dBase 5.0;HDR=NO;IMEX=2;];
You might have to tweak that -- I just grabbed the connect string for a linked table pointing at a DBF file, so the parameters might be slightly different.
Your text-based solution seems the fastest, but you could make it even quicker if you could start from an MS Access file pre-allocated to roughly its final size. You can do that by filling a typical user database, closing the application (so the buffers are flushed) and manually deleting all the records of that big table - but not shrinking/compacting it.
Then use that file to start the real filling - Access will not need to request any (or much) additional disk space. I don't remember whether MS Access has a way to automate this, but it can help a lot...
How about an alternate arrangement...
Would it be an option to make a copy of an existing Access database file that has this table you need and then just delete all the other data in there besides this one large table (don't know if Access has an equivalent to something like "truncate table" in SQL server)?
I would replace MS Access with another database, and for your situation SQLite looks like the best choice: it doesn't require any installation on the client machine, it's a very fast database, and it's one of the best embedded database solutions.
You can use it in Delphi in two ways:
Download the database engine DLL from the SQLite website and use a free Delphi component to access it, such as Delphi SQLite components or SQLite4Delphi.
Use DISQLite3, which has the engine built in, so you don't have to distribute the DLL with your application; they have a free version ;-)
If you still need to use MS Access, try using TADOCommand with an SQL INSERT statement directly instead of TADOTable; that should be faster than TADOTable.Append.
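A rough sketch of that approach, with placeholder table and field names (prepare the command once, then loop over the source rows):

// ADODB unit; the command is prepared once and executed per record.
ADOCommand1.Connection := ADOConnection1;
ADOCommand1.CommandText :=
  'INSERT INTO MyTable (IntField, FloatField, CharField) VALUES (:i, :f, :c)';
ADOCommand1.Prepared := True;

// for each source record:
ADOCommand1.Parameters.ParamByName('i').Value := SomeInt;
ADOCommand1.Parameters.ParamByName('f').Value := SomeFloat;
ADOCommand1.Parameters.ParamByName('c').Value := SomeChar;
ADOCommand1.Execute;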
You won't be importing 800,000 records in less than a minute, as someone mentioned; that's really fast already.
You can skip the annoying translate-to-text-file step however if you use the right method (DAO recordsets) for doing the inserts. See a previous question I asked and had answered on StackOverflow: MS Access: Why is ADODB.Recordset.BatchUpdate so much slower than Application.ImportXML?
Don't use INSERT INTO even with DAO; it's slow. Don't use ADO either; it's slow. But DAO + Delphi + Recordsets + instantiating the DbEngine COM object directly (instead of via the Access.Application object) will give you lots of speed.
You're looking in the right direction in one way. Using a single statement to bulk insert will be faster than trying to iterate through the data and insert it row by row. Access, being a file-based database, will be exceedingly slow at iterative writes.
The problem is that Access is handling how it optimizes writes internally and there's not really any way to control it. You've probably reached the maximum efficiency of an INSERT statement. For additional speed, you should probably evaluate if there's any way around writing 800,000 records to the database every time you start the application.
Get SQL Server Express (free) and connect to it from Access as an external table. SQL Server Express is much faster than MS Access.
I would prefill the database, and hand them the file itself, rather than filling an existing (but empty) database.
If the data you have to fill changes, then keep an ODBC Access database (MDB file) synchronized on the server, using a bit of code to see changes in the main database and copy them to the Access database.
When the user requests a new database zip up the MDB, transfer it to them, and open it.
Alternately, you may be able to find code that opens and inserts data into databases directly.
Alternately, alternately, you may be able to find another format (other than CSV) that Access can import faster.
-Adam
Also check to see how long it takes to copy the file. That will be the lower bound on how fast you can write data. In databases like SQL Server, it usually takes a bulk-load utility to get close to that speed. As far as I know, MS never created a tool to write directly to MS Access tables the way bcp does. Specialized ETL tools will also optimize some of the steps surrounding the insert, such as the way SSIS does transformations in memory; DTS likewise has some optimizations.
Perhaps you could open an ADO Recordset on the table with lock mode adLockBatchOptimistic and CursorLocation adUseClient, write all the data to the recordset, then do a batch update (rs.UpdateBatch).
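In Delphi/ADO terms that might look roughly like this; the dataset, table, field names and the SourceRows array are all placeholders:

// ADODB unit: client-side cursor + batch-optimistic locking, then one UpdateBatch.
ADODataSet1.Connection := ADOConnection1;
ADODataSet1.CursorLocation := clUseClient;
ADODataSet1.LockType := ltBatchOptimistic;
ADODataSet1.CommandText := 'SELECT * FROM MyTable WHERE 1 = 0';  // structure only
ADODataSet1.Open;

// append everything locally first...
for I := 0 to High(SourceRows) do
begin
  ADODataSet1.Append;
  ADODataSet1.FieldByName('IntField').Value := SourceRows[I].IntValue;
  // ... other fields ...
  ADODataSet1.Post;
end;

// ...then send it all to Access in one batch.
ADODataSet1.UpdateBatch(arAll);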
If it's coming from dbase, can you just copy the data and index files and attach directly without loading? Should be pretty efficient (from the people who bring you FoxPro.) I imagine it would use the existing indexes too.
At the least, it should be a pretty efficient single-command Import.
How much do the 800,000 records change from one creation to the next? Would it be possible to pre-populate the records and then just update the ones that have changed in the external database when creating the new database?
This may allow you to create the new database file quicker.
How fast is your disk turning? If it's 7200RPM, then 800,000 rows in 3 minutes is still 37 rows per disk revolution. I don't think you're going to do much better than that.
Meanwhile, if the goal is to streamline the process, how about a table link?
You say you can't access the source database via ADO. Can you set up a table link in MS Access to a table or view in the source database? Then a simple append query from the table link would copy the data over from the source database to the target database for you. I'm not sure, but I think this would be pretty fast.
If you can't set up a table link until runtime, maybe you could build the table link programmatically via ADO, then build the append query programmatically, then invoke the append query.
Hi,
The best way is a bulk insert from a text file, as others have said:
write your records to a text file, then bulk insert the text file into the table.
That should take less than 3 seconds.
I have a very strange problem with transactions in Interbase 7.5 which seem to be stuck.
I can track the problem with IBConsole -> right click DB -> Performance Monitor -> Transactions
Usually this list should show only a few active transactions. But I get several hundred active transactions when I start my application (a web module for an Apache webserver using Delphi 7 Interbase components, e.g. IBQuery, IBTransaction, ...)
Transaction type is always listed as snapshot, if this is of relevance.
I have already triple checked all sql statements and cannot find anything that should produce such problems...
Is there any way to get the SQL statements of a specific transaction?
Any other suggestion how to find such a problem would be very welcome.
Is there any way to get the SQL statements of a specific transaction?
Yes, you can SELECT from TMP$STATEMENTS WHERE TRANSACTION_ID = .... That's from memory, but should get you started.
In IB Performance Monitor, you can locate the transaction from the statements tab, using the button on the toolbar. Can't remember if you can go the other way in that app. It's been a long time since I wrote it!
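A minimal sketch of running that query from Delphi with the IBX components; the transaction ID is a placeholder, and the column name is taken from the answer above ("from memory"), so check it against the TMP$ table definitions:

IBQuery1.Database := IBDatabase1;
IBQuery1.Transaction := IBTransaction1;
IBQuery1.SQL.Text :=
  'SELECT * FROM TMP$STATEMENTS WHERE TRANSACTION_ID = :txn_id';
IBQuery1.ParamByName('txn_id').AsInteger := 12345;  // the stuck transaction's ID
IBQuery1.Open;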
Active IBX data-sets require an active transaction all the time. If you don't have active data-sets just don't forget to commit all the active transactions.
If you do have active data-sets, you can configure all your components to use the same TIBTransaction object, and you can also configure that single TIBTransaction to commit or roll back after an idle time-out period via the IdleTimer and DefaultAction properties.
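For example, something along these lines (the 60-second timeout is only an illustration):

// One shared transaction that commits itself after a period of inactivity.
IBTransaction1.DefaultDatabase := IBDatabase1;
IBTransaction1.DefaultAction := TACommit;  // action taken when the idle timer fires
IBTransaction1.IdleTimer := 60;            // seconds of inactivity

IBQuery1.Transaction := IBTransaction1;
IBQuery2.Transaction := IBTransaction1;    // every dataset shares the same transaction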
Terminating the transaction (by manually or automatically committing or rolling back) will close all the linked datasets (TIBQuery, TIBTable and the like).
You may be tempted to use the CommitRetaining or RollbackRetaining methods to terminate the transaction without closing the related data-sets, but this may affect the performance of the server, and my advice is to always avoid them.
If you want to improve your application, you should consider changing your database connection layer or introducing an in-memory-capable dataset on top of IBX, for example Delphi's TClientDataSet, which lets you retrieve data and keep it in memory while closing all the underlying datasets (and transactions), and still use the traditional Insert/Append/Edit/Delete methods to modify the data and then apply those changes to the database in a new short-lived transaction.
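A minimal sketch of that wiring (component names are just the IDE defaults):

// TIBQuery -> TDataSetProvider -> TClientDataSet
DataSetProvider1.DataSet := IBQuery1;
ClientDataSet1.ProviderName := 'DataSetProvider1';

ClientDataSet1.Open;             // pulls the data into memory
IBTransaction1.Commit;           // the underlying transaction can now be ended

// ... the user edits ClientDataSet1 with Insert/Edit/Delete as usual ...

ClientDataSet1.ApplyUpdates(0);  // posts the changes in a new, short transaction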
Is there an option in DynamoDB to store an auto-incremented ID as the primary key in tables? I also need to store the server time in tables as "created at" fields (e.g., user created at). But I can't find any way to get the server time from DynamoDB or any other AWS service.
Can you guys help me with,
Working with auto-incremented IDs in DynamoDB tables
Storing server time in tables for "created at"-like fields.
Thanks.
Actually, there are very few features in DynamoDB, and this is precisely its main strength: simplicity.
There is no way to automatically generate IDs or UUIDs.
There is no way to auto-generate a date.
For the "date" problem, it should be easy to generate it on the client side. May I suggest using the ISO 8601 date format? It's both programmer- and computer-friendly.
Most of the time, there is a better way than using automatic IDs for Items. This is often a bad habit taken from the SQL or MongoDB world. For instance, an e-mail or a login will make a perfect ID for a user. But I know there are specific cases where IDs might be useful.
In these cases, you need to build your own system. In this SO answer and this article from DynamoDB-Mapper documentation, I explain how to do it. I hope it helps
Rather than working with auto-incremented IDs, consider working with GUIDs. You get higher theoretical throughput and better failure handling, and the only thing you lose is the natural time-order, which is better handled by dates.
Higher throughput because you don't need to ask Dynamo to generate the next available IDs (which would require some resource somewhere obtaining a lock, getting some numbers, and making sure nothing else gets those numbers). Better failure handling comes when you lose your connection to Dynamo (Dynamo goes down, or you are bursty and your application is doing more work than currently provisioned throughput). A write-only application can continue "working" and generating data complete with IDs, queueing it up to be written to dynamo, and never worry about ID collisions.
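Generating such a GUID on the client is a one-liner in most languages; in Delphi, for instance:

uses
  SysUtils;

var
  Id: TGUID;
  Key: string;
begin
  CreateGUID(Id);
  Key := GUIDToString(Id);  // a '{...}' string; strip the braces if you prefer a bare value
end;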
I've created a small web service just for this purpose. See this blog post, that explains how I'm using stateful.co with DynamoDB in order to simulate auto-increment functionality: http://www.yegor256.com/2014/05/18/cloud-autoincrement-counters.html
Basically, you register an atomic counter at stateful.co and increment it every time you need a new value, through RESTful API.