(Edit 2: Information has come in, thankfully. The database is only accessed via ODBC, where I can set the connection charset to match whatever I set for the MySQL database. So this whole question reduces to: if I set the connection charset of both ODBC and the database to UTF-8, am I done, and will it not corrupt the data? (The server/database charset is latin1.) I will happily accept a simple yes or no with a short explanation of why.)
--------------- the rest is original background information, not really relevant anymore.--------------
(Edit, to clarify: the data is already imported correctly; this is ONLY a question about the connection charset.)
I'm trying to decide whether to change the connection character set to match the server and collation character sets of my newly created MySQL database.
The server and collation charsets for it are set to match those of the old server running MySQL 4.1: latin1. Some data has already been imported with mysql.exe and is verified as correct.
The connection character set was left at the default, UTF-8, when the server admin installed MySQL 5.5 and created the new database.
The old server reported no variables set for collation or connection character set, so my questions are (I've assumed the current setting, UTF-8, should be the most correct route):
where to look for what to set it to (hints from experience)
if I choose to set it to latin1, whether scripts writing to the database could corrupt the data again
or whether it can be switched back and forth at will without ever corrupting the data, i.e. whether I can postpone the decision.
The database server admin doesn't have the information to give me an answer, and with the information missing from the old database, I would like to make sure but can't.
It's a 1GB .sql file which has required gradual editing in TextPad to import, so it would save time to know if this is a critical setting.
My best plan at the moment is to not change the setting, import the data, and backup - and if a script corrupts data due to incorrectly set connection charset, the backup is restored. Do you see problems with this plan?
Oh my, I really should take more time reading your question. Yes, the whole point of having a database is for data to persist; if you keep restoring the backup due to incorrect charset issues, the data is not persistent. A better plan would be to update your scripts so that all of them use the same charset, and to convert any fields in the database that are not in that charset to that charset.
When migrating data,
Set the destination data charset+collation to the same as for the ancient MySQL database version. You may have to change Type= to Engine=, timestamp(xx) to timestamp, and some float(xx,xx) to double in the huge sqldump with e.g. TextEdit before importing it with mysql.exe.
Find out what connections, if any, will write to the tables, and whether their source charset differs from that of the database. If so, modify the data source's connection properties so that the connection converts the source data to the database charset.
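For reference, here's a minimal sketch (standard MySQL statements, nothing specific to this setup) of how to inspect and set the connection charset for a session:

-- Inspect the server's character set and collation variables
SHOW VARIABLES LIKE 'character_set%';
SHOW VARIABLES LIKE 'collation%';
-- Make the connection charset match the database charset for this session;
-- SET NAMES sets character_set_client, _connection and _results in one go
SET NAMES 'latin1';

As long as the declared connection charset matches the bytes the client actually sends, MySQL converts between the connection and column charsets on the fly; corruption typically happens when the declared connection charset and the client's real encoding disagree.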
I need to insert 800000 records into an MS Access table. I am using Delphi 2007 and the TAdoXxxx components. The table contains some integer fields, one float field and one text field with only one character. There is a primary key on one of the integer fields (which is not autoinc) and two indexes on another integer and the float field.
Inserting the data using AdoTable.AppendRecord(...) takes > 10 minutes, which is not acceptable since this is done every time the user starts using a new database with the program. I cannot prefill the table because the data comes from another database (which is not accessible through ADO).
I managed to get down to around 1 minute by writing the records to a tab separated text file and using a tAdoCommand object to execute
insert into table (...) select * from [filename.txt] in "c:\somedir" "Text;HDR=Yes"
But I don't like the overhead of this.
There must be a better way, I think.
EDIT:
Some additional information:
MS Access was chosen because it does not need any additional installation on the target machine(s) and the whole database is contained in one file which can be easily copied.
This is a single user application.
The data will be inserted only once and will not change for the lifetime of the database. Though, the table contains one additional field that is used as a flag to indicate that the corresponding record in another database has been processed by the user.
One minute is acceptable (up to 3 minutes would be too) and my solution works, but it seems too complicated to me, so I thought there should be an easier way to do this.
Once the data has been inserted, the performance of the table is quite good.
When I started planning/implementing the feature of the program working with the Access database the table was not required. It only became necessary later on, when another feature was requested by the customer. (Isn't that always the case?)
EDIT:
From all the answers I got so far, it seems that I already got the fastest method for inserting that much data into an Access table. Thanks to everybody, I appreciate your help.
Since you've said that the 800K records data won't change for the life of the database, I'd suggest linking to the text file as a table, and skip the insert altogether.
If you insist on pulling it into the database, then 800,000 records in 1 minute is over 13,000 / second. I don't think you're gonna beat that in MS Access.
If you want it to be more responsive for the user, then you might want to consider loading some minimal set of data, and setting up a background thread to load the rest while they work.
It would be quicker without the indexes. Can you add them after the import?
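A rough Jet SQL sketch of that idea (BigTable and the index/column names are hypothetical, not from the question):

-- Drop the secondary indexes before the bulk insert
DROP INDEX idxIntField ON BigTable;
DROP INDEX idxFloatField ON BigTable;
-- Run the bulk insert from the text file as before
INSERT INTO BigTable (ID, IntField, FloatField, CharField)
SELECT * FROM [filename.txt] IN "c:\somedir" "Text;HDR=Yes";
-- Recreate the indexes once the data is in
CREATE INDEX idxIntField ON BigTable (IntField);
CREATE INDEX idxFloatField ON BigTable (FloatField);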
There are a number of suggestions that may be of interest in this thread Slow MSAccess disk writing
What about skipping the text file and using ODBC or OLEDB to import directly from the source table? That would mean altering your FROM clause to use the source table name and an appropriate connect string as the IN '' part of the FROM clause.
EDIT:
Actually I see you say the original format is xBase, so it should be possible to use the xBase ISAM that is part of Jet instead of needing ODBC or OLEDB. That would look something like this:
INSERT INTO table (...)
SELECT *
FROM tablename IN 'c:\somedir\'[dBase 5.0;HDR=NO;IMEX=2;];
You might have to tweak that -- I just grabbed the connect string for a linked table pointing at a DBF file, so the parameters might be slightly different.
Your text-based solution seems the fastest, but you can make it quicker if you can start from a preallocated MS Access file whose size is close to the final one. You can create one by filling a typical user database, closing the application (so the buffers are flushed), and manually deleting all records of that big table - but without shrinking/compacting the file.
Then use that file to start the real filling - Access will not request any (or very little) additional disk space. I don't remember whether MS Access has a way to automate this, but it can help a lot...
How about an alternate arrangement...
Would it be an option to make a copy of an existing Access database file that has the table you need, and then just delete all the other data in there besides this one large table? (I don't know if Access has an equivalent to something like "truncate table" in SQL Server.)
I would replace MS Access with another database, and for your situation I see SQLite as the best choice: it doesn't require any installation on the client machine, it's a very fast database, and it's one of the best embedded database solutions.
You can use it in Delphi in two ways:
You can download the database engine DLL from the SQLite website and use a free Delphi component to access it, like the Delphi SQLite components or SQLite4Delphi.
Use DISQLite3, which has the engine built in, so you don't have to distribute the DLL with your application; they have a free version ;-)
If you still need to use MS Access, try using TAdoCommand with a SQL INSERT statement directly instead of TADOTable; that should be faster than TADOTable.Append.
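As a sketch, the command text would be a plain parameterized INSERT (table and column names here are made up), prepared once on the TAdoCommand and executed per record with fresh parameter values:

-- Hypothetical table/columns; the ? placeholders map to TAdoCommand parameters
INSERT INTO BigTable (ID, IntField, FloatField, CharField)
VALUES (?, ?, ?, ?);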
You won't be importing 800,000 records in less than a minute, as someone mentioned; that's really fast already.
You can skip the annoying translate-to-text-file step however if you use the right method (DAO recordsets) for doing the inserts. See a previous question I asked and had answered on StackOverflow: MS Access: Why is ADODB.Recordset.BatchUpdate so much slower than Application.ImportXML?
Don't use INSERT INTO even with DAO; it's slow. Don't use ADO either; it's slow. But DAO + Delphi + Recordsets + instantiating the DbEngine COM object directly (instead of via the Access.Application object) will give you lots of speed.
You're looking in the right direction in one way. Using a single statement to bulk insert will be faster than trying to iterate through the data and insert it row by row. Access, being a file-based database, will be exceedingly slow at iterative writes.
The problem is that Access is handling how it optimizes writes internally and there's not really any way to control it. You've probably reached the maximum efficiency of an INSERT statement. For additional speed, you should probably evaluate if there's any way around writing 800,000 records to the database every time you start the application.
Get SQL Server Express (free) and connect to it from Access as an external table. SQL Server Express is much faster than MS Access.
I would prefill the database, and hand them the file itself, rather than filling an existing (but empty) database.
If the data you have to fill changes, then keep an Access database (MDB file) synchronized on the server, using a bit of code that watches for changes in the main database and copies them to the Access database.
When the user requests a new database zip up the MDB, transfer it to them, and open it.
Alternately, you may be able to find code that opens and inserts data into databases directly.
Alternately, alternately, you may be able to find another format (other than CSV) that Access can import faster.
-Adam
Also check to see how long it takes to copy the file. That will be the lower bound of how fast you can write data. In db's like SQL, it usually takes a bulk load utility to get close to that speed. As far as I know, MS never created a tool to write directly to MS Access tables the way bcp does. Specialized ETL tools will also optimize some of the steps surrounding the insert, such as the way SSIS does transformations in memory, DTS likewise has some optimizations.
Perhaps you could open an ADO Recordset on the table with lock mode adLockBatchOptimistic and CursorLocation adUseClient, write all the data to the recordset, then do a batch update (rs.UpdateBatch).
If it's coming from dBase, can you just copy the data and index files and attach them directly without loading? That should be pretty efficient (from the people who bring you FoxPro). I imagine it would use the existing indexes too.
At the least, it should be a pretty efficient single-command Import.
How much do the 800,000 records change from one creation to the next? Would it be possible to pre-populate the records and then just update the ones that have changed in the external database when creating the new database?
This may allow you to create the new database file more quickly.
How fast is your disk turning? If it's 7200RPM, then 800,000 rows in 3 minutes is still 37 rows per disk revolution. I don't think you're going to do much better than that.
Meanwhile, if the goal is to streamline the process, how about a table link?
You say you can't access the source database via ADO. Can you set up a table link in MS Access to a table or view in the source database? Then a simple append query from the table link would copy the data over from the source database to the target database for you. I'm not sure, but I think this would be pretty fast.
If you can't set up a table link until runtime, maybe you could build the table link programmatically via ADO, then build the append query programmatically, then invoke the append query.
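The append query itself would then be a one-liner (LinkedSource is a hypothetical name for the table link):

-- Copies everything from the linked source table into the local table
INSERT INTO BigTable
SELECT * FROM LinkedSource;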
Hi,
The best way is a bulk insert from a text file, as others have said: write your records to a text file, then bulk insert the text file into the table.
That should take less than 3 seconds.
I recently modified my iOS app to enable serialized mode for both a database encrypted using SQLCipher and a non-encrypted database (also SQLite). I also maintain a static sqlite3 connection for each database, and each is only opened once (by simply checking for null values) and shared throughout the lifetime of the app.
The app is required to have a sync-like behavior which will download a ton of records from a remote database at regular intervals using a soap request and update the contents of the local encrypted database. Of course, the person using the app may or may not be updating or reading from the database, depending on what they're doing, so I made the changes mentioned in the above paragraph.
When doing short-term testing, there doesn't appear to be any issue with how things work, and I've yet to experience any problem.
However, some users are reporting that they've lost access to the encrypted database, and I'm trying to figure out why.
My thoughts are as follows: methods written by another developer declared all sqlite3_stmt's as static (I believe this code was in the problematic release). In the past I've noticed crashes when two threads using a particular method run simultaneously: one thread finalizes, modifies, or replaces a sqlite3_stmt while another thread is using it. A crash doesn't always occur, because he has wrapped most of his SQLite code in try/catch blocks. If it's true that SQLite uses prepare and finalize to implement locking, could the orphaning of sqlite3_stmt's, which occurs due to their static nature in this context, be putting the database into an inoperable state? For example, when a statement that has acquired an exclusive lock after being stepped is replaced by an assignment in the same method running in another thread?
I realize that this doesn't necessarily mean that the database will become permanently unusable, but, consider this scenario:
At some point during the app's lifetime it will re-key the encrypted database and that key is stored in another database. Suppose that it successfully re-keys the encrypted database, but then the new key is not stored in the other database because of what I mentioned above.
Provided that the database hasn't become corrupted at some point (I'm not really counting on this being the case), this is the only explanation I can come up with for why the user may not be able to use the encrypted database after restarting the iOS app, seeing as the app would be the only one to access the database file.
Being that I can't recreate this issue, I can only speculate about what the reasoning might be. What thoughts do you have? Does this seem like a plausible scenario for something that happens rarely? Do you have another idea of something to look into?
If the database is rekeyed, and the key for the database is not successfully stored in the other database, then it could certainly cause the problem.
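As a sketch of one ordering that narrows that window (PRAGMA key/rekey are SQLCipher's actual pragmas; the ordering itself is a suggested approach, not from the question):

-- Open the encrypted database with the current key
PRAGMA key = 'old-passphrase';
-- 1. Durably store the NEW key in the other database first, while
--    KEEPING the old key, so a crash between steps leaves at least
--    one stored key that still opens this database.
-- 2. Only then rekey:
PRAGMA rekey = 'new-passphrase';
-- 3. After the rekey succeeds, delete the old key from the key store.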
We are having a strange situation while trying to dbexport/dbimport an Informix database.
while importing the DB we got the error:
1213 - Character to numeric conversion error
I checked exactly where the import stops.
I inspected the corresponding lines of the file (sed -n '1745813,1745815p' table.unl) and saw data that looks to be corrupt.
3.0]26.0]018102]0.0]20111001.0]0.0]77.38]20111012.0]978]04]0.0072]6.59]6.59]29.93]29.93]77.38]
3.0]26.0]018102]0.0]20111001.0]0.0]143.69]20111012.0]978]04]0.0144]6.59]6.59]48.79]48.79]143.69]
]0.000/]]-0.000000000000000000000000000000000000000000000000000044]8\00\00\07Ú\00\00Õ²\00\00\07P27\00\00\07Ú\00\00i]-0.00000000000000000000000000000000000000000000000000000000000000000000000000000000000000999995+']-49999992%(000000000000000000.0]-989074999997704800000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000.0]-999992%(0000000000000000000000.0]]]Ú\00\00]*00000015056480000000000000000000000000000000000000000000000000000000000.0]-92%'9999)).'000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000.0]-;24944999992%(000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000.0]-81%-999994;2475200000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000.0]]-97704751999992%(00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000.0]
The first two lines are OK. The rest seems to be corrupt data.
I do not know how this data got here, since it does not show up in a SELECT statement.
I exported only the affected table and found that the same data is there.
I then looked for a filter that matches all the rows and used it in another export. This time the corrupt data is not there.
Any idea about what might be the reason behind this?
Best Regards
Arthur
Arthur,
To answer the question of why the database is generating corrupted data: you will need to investigate.
The common causes are:
A crash of your OS/hardware
A power failure
A crash of your database, or its process being killed by some admin
After any of the problems above, your file system can become corrupt, and the recovery (fsck) has probably messed up the database data.
You are probably working with a journaling FS, which ext3, ext4, and NTFS all are...
If you don't know of any events like those described above, you need to investigate the online.log of your Informix database, looking for any start of the engine without a regular shutdown before it. Looking at your OS logs will help too, to find any involuntary restart of the OS (power failure or crash).
Now about the solutions.
Recover a backup.
Then you can export just the corrupted table and replace it in your dbexport.
You can do this with archecker (requires an Informix version greater than 10.FC4).
This article may help if you need it: Table Level Restore - Pretty Useful Stuff
Export your table just as you describe in the comments.
But this will not recover the corrupt data; it will just "save" the "good" data and discard the "bad" data.
Created a new table as a copy of the first one.
Inserted into table2: select * from table1 where (my filter which matched all rows)
Recreated the table indexes.
Renamed the tables. (A rough SQL sketch of these steps follows.)
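In rough SQL (table names and the filter are placeholders for the steps above; exact syntax may vary by Informix version):

-- table2 is a pre-created empty copy of table1's schema
INSERT INTO table2
SELECT * FROM table1
WHERE amount >= 0;   -- hypothetical filter that matched all the intact rows
CREATE INDEX ix_table2_key ON table2 (key_col);
-- Swap the tables once the good rows are copied
RENAME TABLE table1 TO table1_corrupt;
RENAME TABLE table2 TO table1;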
Depending on how bad the corrupted data is, sometimes you will not be able to export all the "good" data with just one select; you will need to work around the "bad" data. Check this IBM article:
Unloading around table corruption
Ways to prevent this kind of problem, or to make any recovery easier
First, of course, there is no way to prevent every crash...
What you can do is try to minimize the damage after a crash.
Do not use journaling file systems!
(on Linux, use the ext2 FS or RAW devices)
Enable KAIO (for RAW) or DIRECT_IO (for any FS) in the Informix configuration.
This will prevent the database from using the OS cache, making the process of writing data to your disk more reliable. In some situations this can slow down or speed up your database; it depends a lot on your hardware/storage.
Configure your backup so that it works, and test/check it with some frequency.
I recommend configuring a full database backup plus logical log backups.
Depending on the version of Informix and which license you have, you may be entitled to configure a cold RSS server (a "cluster" secondary node), which works in active-passive mode on a different server and will dramatically reduce your chances of losing any data after a crash on the main server.
After any crash, run oncheck to detect whether any corruption occurred:
How to use oncheck to detect corruption
I am working on a content management application in which the data being stored on the database is extremely generic. In this particular instance a container has many resources and those resources map to some kind of digital asset, whether that be a picture, a movie, an uploaded file or even plain text.
I have been arguing with a colleague for a week now because, in addition to storing the pictures etc., they would like to store the text assets on the file system and have the application look up the file location (from the database) and read in the text file (from the file system) before serving it to the client application.
Common sense seemed to scream at me that this was ridiculous: if we are bothering to look something up in the database, we might as well store the text in a database column and have it served up along with the row lookup. Database lookup + file I/O seemed like it should be considerably slower than just a database lookup. After going back and forth for some time, I decided to run some benchmarks and found the results a little surprising. There seems to be very little consistency in the benchmark times. The only clear winner in the benchmarks was pulling a large dataset from the database and iterating over the results to display the text assets; however, pulling objects one at a time from the database and displaying their text content seems to be neck and neck with the file-system approach.
Now I know the limitations of running benchmarks, and I am not sure I am even running the right kind of "tests" (for example, file-system writes are ridiculously faster than database writes; didn't know that!). I guess my question is for confirmation: is file I/O comparable to database text storage/lookup? Am I missing a part of the argument here? Thanks ahead of time for your opinions/advice!
A quick word about what I am using: this is a Ruby on Rails application, using Ruby 1.8.6 and SQLite3. I plan on moving the same codebase to MySQL tomorrow to see if the benchmarks are the same.
The major advantage you'll get from not using the filesystem is that the database will manage concurrent access properly.
Let's say two processes need to modify the same text at the same time: synchronization on the filesystem may lead to race conditions, whereas you will have no problem at all with everything in the database.
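For example (a sketch with a hypothetical assets table), two processes can run the same update concurrently and the database will serialize them; with two processes writing the same file there is no such guarantee:

-- Both processes can execute this at the same time; the database's
-- locking/transactions serialize the writes instead of interleaving them
BEGIN;
UPDATE assets SET body = 'edited text' WHERE id = 42;
COMMIT;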
I think your benchmark results will depend on how you store the text data in your database.
If you store it as a LOB, then behind the scenes it is stored in an ordinary file.
With any kind of LOB you pay the database lookup + file I/O cost anyway.
VARCHAR is stored in the tablespace
Ordinary text data types (VARCHAR et al.) are very limited in size in typical relational database systems: something like 2000 or 4000 characters (Oracle), sometimes 8000 or even 65536 characters. Some databases support long text types, but these have serious drawbacks and are not recommended.
LOBs are references to file system objects
If your text is larger you have to use a LOB data type (e.g. CLOB in Oracle).
LOBs usually work like this:
The database stores only a reference to a file system object.
The file system object contains the data (e.g. the text data).
This is very similar to what your colleague proposes, except that the DBMS takes over the heavy work of managing the references and files.
The bottom line is:
If you can store your text in a VARCHAR, then go for it.
If you can't, you have two options: use a LOB, or store the data in a file referenced from the database. Both are technically similar and slower than using VARCHAR.
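As a sketch of the two options in Oracle terms (table and column names are invented for illustration):

CREATE TABLE assets (
  id         NUMBER PRIMARY KEY,
  short_text VARCHAR2(4000),  -- stored inline in the tablespace; read along with the row
  long_text  CLOB             -- a LOB: in effect a reference to separately stored data
);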
I did this before. It's a mess: you need to keep the filesystem and the database synchronized all the time, so it makes the programming more complicated, as you would guess.
My advice is to go either for an all-filesystem solution or an all-database solution, depending on the data. Notably, if you require lots of searches or conditional data retrieval, then go for the database; otherwise, the FS.
Note that the database may not be optimized for storing large binary files. Still, remember: if you use both, you're going to have to keep them synchronized, and it doesn't make for an elegant or enjoyable (to program) solution.
Good luck!
At least if your problems come from the "performance side", you could use a "NoSQL" storage solution like Redis (via Ohm, for example) or CouchDB...