I have two stored procedures, the first one (SP1) creates a table, and the second one (SP2) uses that table.
I cannot run both SPs sequentially in one statement:
CALL SP1();
CALL SP2();
I get an error saying "Could not find table/view...".
When SP2 was created, the table existed because I had been testing SP1; now that I have removed all the tables that are created temporarily at runtime, SP2 doesn't run.
I realize I can just run SP1 and then SP2 and that will work, but there are 39 SPs and they take about 2 hours when run one after another, so waiting to start each one manually would make the whole process take even longer.
The only solution I can think of is to leave the tables that are needed and truncate them rather than drop them. If nothing else can be done I will have to do that, but I would rather drop them. The person who created these SPs was evil enough to create over 500 of these tables, so I don't want them permanently in my schema.
As a side note, these tables are mainly used for debugging, so they can be dropped after a few days of execution, once the processes are validated.
I wonder if there is any way to ignore the error, or any other workaround.
Thanks in advance for your time and expertise.
Regards,
Leonardo
I found a way to run it regardless of the missing tables.
I just had to put each SP call in its own DO BEGIN ... END; block. HANA seems to validate only one block at a time, and since the previous block creates the table that is going to be used in block number 2, everything seems to work fine.
DO
BEGIN
CALL SP1();
END;
DO
BEGIN
CALL SP2();
END;
and so on.
The real blocks have a bit more in them; I only show the SP call to keep the example simple.
I am just leaving this here in case someone else has the same problem as I do.
I would still appreciate it if someone has a better solution.
This sounds a lot like those tables are in fact temporary tables used to "transfer" data between the different stored procedures.
With HANA it is neither necessary nor recommended to dynamically create tables for that purpose.
Instead, use table variables and "hand over" the data via table typed parameters.
That way, no tables have to be created/dropped at runtime at all.
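A minimal SQLScript sketch of that pattern (the type, table, and column names here are made up for illustration, and the exact syntax may vary a bit between HANA revisions):

CREATE TYPE interim_t AS TABLE (id INTEGER, val NVARCHAR(100));

CREATE PROCEDURE SP1 (OUT result interim_t)
LANGUAGE SQLSCRIPT AS
BEGIN
    -- build the intermediate result as a table variable instead of a physical table
    result = SELECT id, val FROM source_table;
END;

CREATE PROCEDURE SP2 (IN input interim_t)
LANGUAGE SQLSCRIPT AS
BEGIN
    -- consume the handed-over data through the table-typed parameter
    INSERT INTO target_table (id, val)
    SELECT id, val FROM :input;
END;

DO
BEGIN
    -- the table variable "interim" carries the data from SP1 to SP2
    CALL SP1(interim);
    CALL SP2(:interim);
END;

The calls still run in one script, but nothing is created or dropped in the catalog at runtime.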
Also, since table variables are effectively handled as internal, session local temporary tables, a lot of overhead is removed (i.e. those tables don't need to take care of logging or concurrent access) which makes them a lot more lightweight and better performing than dynamically created tables.
Another negative aspect of dynamically creating objects is that this always comes with locking of the database catalog, checking for and invalidating potentially dependent objects, and invalidating numerous internal caches.
If possible, one really should avoid this if performance is important.
I need to insert 800000 records into an MS Access table. I am using Delphi 2007 and the TAdoXxxx components. The table contains some integer fields, one float field and one text field with only one character. There is a primary key on one of the integer fields (which is not autoinc) and two indexes on another integer and the float field.
Inserting the data using AdoTable.AppendRecord(...) takes more than 10 minutes, which is not acceptable since this is done every time the user starts using a new database with the program. I cannot prefill the table because the data comes from another database (which is not accessible through ADO).
I managed to get it down to around 1 minute by writing the records to a tab-separated text file and using a TAdoCommand object to execute
insert into table (...) select * from [filename.txt] in "c:\somedir" "Text;HDR=Yes"
But I don't like the overhead of this.
There must be a better way, I think.
EDIT:
Some additional information:
MS Access was chosen because it does not need any additional installation on the target machine(s) and the whole database is contained in one file which can be easily copied.
This is a single user application.
The data will be inserted only once and will not change for the lifetime of the database. Though, the table contains one additional field that is used as a flag to indicate that the corresponding record in another database has been processed by the user.
One minute is acceptable (up to 3 minutes would be, too) and my solution works, but it seems too complicated to me, so I thought there should be an easier way to do this.
Once the data has been inserted, the performance of the table is quite good.
When I started planning/implementing the feature of the program working with the Access database the table was not required. It only became necessary later on, when another feature was requested by the customer. (Isn't that always the case?)
EDIT:
From all the answers I got so far, it seems that I already got the fastest method for inserting that much data into an Access table. Thanks to everybody, I appreciate your help.
Since you've said that the 800K records data won't change for the life of the database, I'd suggest linking to the text file as a table, and skip the insert altogether.
If you insist on pulling it into the database, then 800,000 records in 1 minute is over 13,000 / second. I don't think you're gonna beat that in MS Access.
If you want it to be more responsive for the user, then you might want to consider loading some minimal set of data, and setting up a background thread to load the rest while they work.
It would be quicker without the indexes. Can you add them after the import?
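For example (index and column names here are just placeholders), the two secondary indexes could be dropped and recreated with plain Jet DDL, one statement per TAdoCommand call:

DROP INDEX idxIntField ON BigTable;
DROP INDEX idxFloatField ON BigTable;
(run the bulk insert here)
CREATE INDEX idxIntField ON BigTable (IntField);
CREATE INDEX idxFloatField ON BigTable (FloatField);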
There are a number of suggestions that may be of interest in this thread Slow MSAccess disk writing
What about skipping the text file and using ODBC or OLEDB to import directly from the source table? That would mean altering your FROM clause to use the source table name and an appropriate connect string as the IN '' part of the FROM clause.
EDIT:
Actually I see you say the original format is xBase, so it should be possible to use the xBase ISAM that is part of Jet instead of needing ODBC or OLEDB. That would look something like this:
INSERT INTO table (...)
SELECT *
FROM tablename IN 'c:\somedir\'[dBase 5.0;HDR=NO;IMEX=2;];
You might have to tweak that -- I just grabbed the connect string for a linked table pointing at a DBF file, so the parameters might be slightly different.
Your text-based solution seems the fastest, but you can get it quicker if you can start from a preallocated MS Access file close to its final size. You can create one by filling a typical user database, closing the application (so the buffers are flushed), and manually deleting all records of that big table - but without shrinking/compacting the file.
So, use that file to start the real filling - Access will not request any (or very little) additional disk space. I don't remember whether MS Access has a way to automate this, but it can help a lot...
How about an alternate arrangement...
Would it be an option to make a copy of an existing Access database file that already contains this table, and then just delete all the other data in there besides this one large table? (I don't know if Access has an equivalent to something like "truncate table" in SQL Server.)
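(For what it's worth, Jet has no TRUNCATE TABLE; the closest equivalent is a plain delete query, shown here against a hypothetical table name:

DELETE * FROM BigTable;

The file only shrinks if you compact it afterwards, so after a delete alone the allocated space stays in place.)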
I would replace MS Access with another database; for your situation SQLite looks like the best choice. It doesn't require any installation on the client machine, it's a very fast database, and it's one of the best embedded database solutions.
You can use it in Delphi in two ways:
You can download the database engine DLL from the SQLite website and use a free Delphi component to access it, such as the Delphi SQLite components or SQLite4Delphi.
Use DISQLite3, which has the engine built in, so you don't have to distribute the DLL with your application; they have a free version ;-)
If you still need to use MS Access, try using TAdoCommand with an SQL INSERT statement directly instead of TADOTable; that should be faster than TADOTable.Append.
You won't be importing 800,000 records in less than a minute, as someone mentioned; that's really fast already.
You can skip the annoying translate-to-text-file step however if you use the right method (DAO recordsets) for doing the inserts. See a previous question I asked and had answered on StackOverflow: MS Access: Why is ADODB.Recordset.BatchUpdate so much slower than Application.ImportXML?
Don't use INSERT INTO even with DAO; it's slow. Don't use ADO either; it's slow. But DAO + Delphi + Recordsets + instantiating the DbEngine COM object directly (instead of via the Access.Application object) will give you lots of speed.
You're looking in the right direction in one way. Using a single statement to bulk insert will be faster than trying to iterate through the data and insert it row by row. Access, being a file-based database, will be exceedingly slow at iterative writes.
The problem is that Access is handling how it optimizes writes internally and there's not really any way to control it. You've probably reached the maximum efficiency of an INSERT statement. For additional speed, you should probably evaluate if there's any way around writing 800,000 records to the database every time you start the application.
Get SQL Server Express (free) and connect to it from Access as an external table. SQL Server Express is much faster than MS Access.
I would prefill the database, and hand them the file itself, rather than filling an existing (but empty) database.
If the data you have to fill changes, then keep an Access database (MDB file) synchronized on the server via ODBC, using a bit of code to watch for changes in the main database and copy them to the Access database.
When the user requests a new database zip up the MDB, transfer it to them, and open it.
Alternately, you may be able to find code that opens and inserts data into databases directly.
Alternately, alternately, you may be able to find another format (other than csv) which access can import that is faster.
-Adam
Also check to see how long it takes to copy the file. That will be the lower bound of how fast you can write data. In databases like SQL Server, it usually takes a bulk load utility to get close to that speed. As far as I know, MS never created a tool to write directly to MS Access tables the way bcp does. Specialized ETL tools will also optimize some of the steps surrounding the insert; SSIS, for example, does transformations in memory, and DTS likewise has some optimizations.
Perhaps you could open an ADO Recordset on the table with lock mode adLockBatchOptimistic and CursorLocation adUseClient, write all the data to the recordset, then do a batch update (rs.UpdateBatch).
If it's coming from dbase, can you just copy the data and index files and attach directly without loading? Should be pretty efficient (from the people who bring you FoxPro.) I imagine it would use the existing indexes too.
At the least, it should be a pretty efficient single-command Import.
How much do the 800,000 records change from one creation to the next? Would it be possible to pre-populate the records and then just update the ones that have changed in the external database when creating the new database?
This may allow you to create the new database file quicker.
How fast is your disk turning? If it's 7200RPM, then 800,000 rows in 3 minutes is still 37 rows per disk revolution. I don't think you're going to do much better than that.
Meanwhile, if the goal is to streamline the process, how about a table link?
You say you can't access the source database via ADO. Can you set up a table link in MS Access to a table or view in the source database? Then a simple append query from the table link would copy the data over from the source database to the target database for you. I'm not sure, but I think this would be pretty fast.
If you can't set up a table link until runtime, maybe you could build the table link programmatically via ADO, then build the append query programmatically, then invoke the append query.
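For illustration, once the link exists the append query itself is just plain Jet SQL (table names here are placeholders, and the column layouts are assumed to match):

INSERT INTO LocalBigTable
SELECT *
FROM LinkedSourceTable;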
Hi,
The best way is a bulk insert from a text file, as others have said:
write your records to a text file, then bulk insert the text file into the table.
That should take less than 3 seconds.
I am using Entity Framework 4.1 within an application. One of the requirements is to execute some stored procedures on the database, some of which take quite some time. Further, those stored procedures do not return any results, so I only need to start them and forget about them.
Naturally, .NET will wait for these operations to complete, so after some time it throws an exception saying that the "Timeout period has expired".
I know that I could fix that by setting the CommandTimeout property to a higher value, however I am looking for an alternative solution (If such even exists).
Is it possible to execute stored procedures using the Entity Framework as Fire-and-Forget?
Any help will be appreciated.
Regards
Stored procedures don't support fire-and-forget execution. You can either use plain ADO.NET and execute the query asynchronously on a separate connection (with BeginExecuteNonQuery) - EF itself doesn't support asynchronous execution - or use a more complex approach that behaves like fire-and-forget: create a SQL Server Agent job with a single step that executes your stored procedure. Instead of calling your stored procedure directly, you call sp_start_job, which returns immediately after starting the job; the job then executes asynchronously without returning any result to your application.
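A rough T-SQL sketch of the job approach, assuming SQL Server with SQL Server Agent running; the job, database, and procedure names are placeholders:

USE msdb;
EXEC dbo.sp_add_job       @job_name = N'RunLongProcedure';
EXEC dbo.sp_add_jobstep   @job_name = N'RunLongProcedure',
                          @step_name = N'Run the proc',
                          @subsystem = N'TSQL',
                          @command = N'EXEC MyDatabase.dbo.MyLongRunningProc;';
EXEC dbo.sp_add_jobserver @job_name = N'RunLongProcedure', @server_name = N'(local)';
-- the application then issues this call; it returns as soon as the job is started
EXEC msdb.dbo.sp_start_job @job_name = N'RunLongProcedure';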
Although both Jet/OLE DB parameters are relatively well documented, I fail to understand the difference between these two connection parameters:
The first one:
Jet OLEDB:Transaction Commit Mode (DBPROP_JETOLEDB_TXNCOMMITMODE)
Indicates whether Jet writes data to disk synchronously or asynchronously when a transaction is committed.
The second one:
Jet OLEDB:User Commit Sync (DBPROP_JETOLEDB_USERCOMMITSYNC)
Indicates whether changes that were made in transactions are written in synchronous or asynchronous mode.
What's the difference? When to use which?
This is very long, so here's the short answer:
Don't set either of these. The default settings for these two options are likely to be correct. The first, Transaction Commit Mode, controls Jet's implicit transactions, and applies outside of explicit transactions, and is set to YES (asynchronous). The second controls how Jet interacts with its temporary database during an explicit transaction and is set to NO (synchronous). I can't think of a situation where you'd want to override the defaults here. However, you might want to set them explicitly just in case you're running in an environment where the Jet database engine settings have been altered from their defaults.
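For reference, this is roughly how the two properties look when set explicitly in an OLE DB connection string (the path and values here are purely illustrative, not a recommendation):

Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\data\mydb.mdb;Jet OLEDB:Transaction Commit Mode=1;Jet OLEDB:User Commit Sync=Yes;

Transaction Commit Mode is given as a number and User Commit Sync as Yes/No; which value gives you synchronous behavior is exactly the confusion discussed below, so verify against your Jet version before depending on either.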
Now, the long explanation:
I have waded through a lot of Jet-related resources to see if I can find out what the situation here is. The two OLEDB constants seem to map onto these two members of the SetOptionEnum of the top-level DAO DBEngine object (details here for those who don't have the Access help file available):
dbImplicitCommitSync
dbUserCommitSync
These options are there for overriding the default registry settings for the Jet database engine at runtime for any particular connection, or for permanently altering the stored settings for it in the registry. If you look in the registry under HKLM\Software\Microsoft\Jet\X.X\ you'll find that the key for the Jet version you're using contains several values, two of which are these:
ImplicitCommitSync
UserCommitSync
The Jet 3.5 Database Engine Programmer's Guide defines these:
ImplicitCommitSync: A value of Yes indicates that Microsoft Jet will wait for commits to finish. A value other than Yes means that Microsoft Jet will perform commits asynchronously.
UserCommitSync: When the setting has a value of Yes, Microsoft Jet will wait for commits to finish. Any other value means that Microsoft Jet will perform commits asynchronously.
Now, this is just a restatement of what you'd already said. The frustrating thing is that the first has a default value of NO while the second defaults to YES. If they really were controlling the same thing, you'd expect them to have the same value, or that conflicting values would be a problem.
But the key actually turns out to be in the name, and it reflects the history of Jet in regard to how data writes are committed within and outside of transactions. Before Jet 3.0, Jet defaulted to synchronous updates outside of explicit transactions, but starting with Jet 3.0, IMPLICIT transactions were introduced, and were used by default (with caveats in Jet 3.5 -- see below). So, one of these two options applies to commits OUTSIDE of transactions (dbImplicitCommitSync) and the other for commits INSIDE of transactions (dbUserCommitSync). I finally located a verbose explanation of these in the Jet Database Engine Programmer's Guide (p. 607-8):
UserCommitSynch
The UserCommitSynch setting determines whether changes made as part of an explicit transaction...are written to the database in synchronous mode or asynchronous mode. The default value...is Yes, which specifies asynchronous mode. It is not recommended that you change this value because in synchronous mode, there is no guarantee that information has been written to disk before your code proceeds to the next command.
ImplicitCommitSync
By default, when performing operations that add, delete, or update records outside of explicit transactions, Microsoft Jet automatically performs internal transactions called implicit transactions that temporarily save data in its memory cache, and then later write the data as a chunk to the disk. The ImplicitCommitSync setting determines whether changes made by using implicit transactions are written to the database in synchronous mode or asynchronous mode. The default value...is No, which specifies that these changes are written to the database in asynchronous mode; this provides the best performance. If you want implicit transactions to be written to the database in synchronous mode, change the value...to Yes. If you change the value...you get behavior similar to Microsoft Jet versions 2.x and earlier when you weren't using explicit transactions. However, doing so can also impair performance considerably, so it is not recommended that you change the value of this setting.
Note: There is no longer a need to use explicit transactions to improve the performance of Microsoft Jet. A database application using Microsoft Jet 3.5 should use explicit transactions only in situations where there may be a need to roll back changes. Microsoft Jet can now automatically perform implicit transactions to improve performance whenever it adds, deletes or changes records. However, implicit transactions for SQL DML statements were removed in Microsoft Jet 3.5...see "Removal of Implicit Transactions for SQL DML Statements" later in this chapter.
That section:
Removal of Implicit Transactions for SQL DML Statements
Even with all the work in Microsoft Jet 3.0 to eliminate transactions in order to obtain better performance, SQL DML statements were still placed in an implicit transaction. In Microsoft Jet 3.5, SQL DML statements are not placed in an implicit transaction. This substantially improves performance when running SQL DML statements that affect many records of data.

Although this change provides a substantial performance improvement, it also introduces a change to the behavior of SQL DML statements. When using Microsoft Jet 3.0 and previous versions that use implicit transactions for SQL DML statements, an SQL DML statement rolls back if any part of the statement is not completed. When using Microsoft Jet 3.5, it is possible to have some of the records committed by the SQL DML statement while others are not. An example of this would be when the Microsoft Jet cache is exceeded. The data in the cache is written to disk and the next set of records is modified and placed in the cache. Therefore, if the connection is terminated, it is possible that some of the records were saved to disk, but others were not. This is the same behavior as using DAO looping routines to update data without an explicit transaction in Microsoft Jet 3.0. If you want to avoid this behavior, you need to add explicit transactions around the SQL DML statement to define a set of work and you must sacrifice the performance gains.
Confused yet? I certainly am.
The key point, it seems to me, is that dbUserCommitSync controls the way Jet writes to the TEMPORARY database it uses for staging EXPLICIT transactions, while dbImplicitCommitSync relates to how Jet handles its implicit transactions OUTSIDE of an explicit transaction. In other words, dbUserCommitSync controls the behavior of the engine while inside a BeginTrans/CommitTrans loop, while dbImplicitCommitSync controls how Jet behaves in regard to async/sync outside of explicit transactions.
Now, as to the "Removal of Implicit Transactions" section: my reading is that implicit transactions apply to updates when you're looping through a recordset outside of a transaction, but no longer apply to a SQL UPDATE statement outside a transaction. It stands to reason that an optimization that improves the performance of row-by-row updates would be good and wouldn't actually help so much with a SQL batch update, which is already going to be pretty darned fast (relatively speaking).
Also note that the fact that it is possible to do it both ways is what enables DoCmd.RunSQL to make incomplete updates. That is, a SQL command that would fail with CurrentDB.Execute strSQL, dbFailOnError can run to completion if executed with DoCmd.RunSQL. If you turn off the warnings with DoCmd.SetWarnings, you don't get a report of the error and you don't get the chance to roll back to the initial state (with warnings on, you are informed of the errors and can decide whether to commit anyway).
So, what I think is going on is that SQL executed through the Access UI is wrapped in a transaction by default (that's how you get a confirmation prompt), but if you turn off the prompts and there's an error, you get the incomplete updates applied. This has nothing to do with the DBEngine settings -- it's a matter of the way the Access UI executes SQL (and there's an option to turn it off/on).
This contrasts to updates in DAO, which were all wrapped in the implicit transactions starting with Jet 3.0, but starting with Jet 3.5, only sequential updates were wrapped in the implicit transactions -- batch SQL commands (INSERT/UPDATE/DELETE) are not.
At least, that's my reading.
So, in regard to the issue in your actual question, in setting up your OLEDB connection, you'd set the options for the Jet DBEngine for that connection according to what you were doing. It seems to me that the default Jet DBEngine settings are correct and shouldn't be altered -- you want to use implicit transactions for edits where you're walking through a recordset and updating one row at a time (outside of an explicit transaction). On the other hand, you can wrap the whole thing in a transaction and get the same result, so really, this only applies to cases where you're walking a recordset and updating and have not used an explicit transaction, and the default setting seems quite correct to me.
The other setting, UserCommitSync, seems to me to be something you'd definitely want to leave alone as well, as it seems to me to apply to the way Jet interacts with its temp database during an explicit transaction. Setting it to asynchronous would seem to me to be quite dangerous as you'd basically not know the state of the operation at the point that you committed the data.
You'd think that USERCOMMITSYNC=YES would be the option to commit synchronously. And that is the cause of the confusion.
I spent ages googling on this topic because I found that the behavior I was getting with old VB6 applications was not the same as I get in .NET with OLEDB/Jet 4.
Now I really should back up what I'm going to say with a link to the actual page(s) I read but I can't find those pages now.
Anyway, I was browsing the MSDN website and found a page that described a 'by design' error in Jet 3 which transposed the functionality of USERCOMMITSYNC, meaning a value of NO gets you a synchronous commit.
Therefore MS set the default to NO and we get synchronous commit by default. Exactly as described above by David Fenton. A behavior we've all come to accept.
But, the document then went on to explain that the behavior in oledb/Jet4 has been changed. Basically MS fixed their bug and now a setting of USERCOMMITSYNC=YES does what it says.
But did they change the default? I think not because now my explicit transactions are NOT committing synchronously in .Net applications using oledb/jet4.