Recently, a program that creates an Access db (a requirement of our downstream partner), adds a table with all memo columns, and then inserts a bunch of records stopped working. Oddly, I can't see any changes in the environment, and nothing in any diffs could have affected it. Furthermore, this repros on any machine I've tried, whether or not it has Office, and if it does have Office, whether it's 32- or 64-bit.
The problem is that when you open the db after the program runs, the destination table is empty and instead there's an MSysCompactError table with a bunch of rows.
Here's the distilled code:
var connectionString = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=corrupt.mdb;Jet OLEDB:Engine Type=5";
// create the db and make a table
var cat = new ADOX.Catalog();
try
{
    cat.Create(connectionString);
    var tbl = new ADOX.Table();
    try
    {
        tbl.Name = "tbl";
        tbl.Columns.Append("a", ADOX.DataTypeEnum.adLongVarWChar);
        cat.Tables.Append(tbl);
    }
    finally
    {
        Marshal.ReleaseComObject(tbl);
    }
}
finally
{
    cat.ActiveConnection.Close();
    Marshal.ReleaseComObject(cat);
}

using (var connection = new OleDbConnection(connectionString))
{
    connection.Open();
    // insert a value
    using (var cmd = new OleDbCommand("INSERT INTO [tbl] VALUES ( 'x' )", connection))
        cmd.ExecuteNonQuery();
}
Here are a few workarounds I've stumbled into:
If you set a breakpoint between creating the table and inserting the value (just before the OleDbConnection block above), open the mdb with Access, and close it again, then when the app continues it does not corrupt the database.
Changing the engine type from 5 to 4 in the connection string creates an uncorrupted mdb. You end up with an obsolete mdb version, but the table has values and there's no MSysCompactError. Note that I've tried creating a database this way and then upgrading it to engine type 5 programmatically at the end, with no luck; I end up with a corrupt db in the newest version.
If you change from memo to text fields by changing adLongVarWChar to adVarWChar in the Columns.Append call, the database isn't corrupt. You end up with text fields in the db instead of memo fields, though. (Both this change and the engine-type change are sketched below.)
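For reference, here's roughly what the last two workarounds look like against the distilled code above; the only differences are the engine type in the connection string and the column type (a sketch, not a tested fix):

// Workaround: ask Jet for the older engine type (4 instead of 5)
var connectionString = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=corrupt.mdb;Jet OLEDB:Engine Type=4";

// Workaround: use a text column instead of a memo column
tbl.Columns.Append("a", ADOX.DataTypeEnum.adVarWChar);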
A final note: in my travels, I've seen that MSysCompactError is related to compacting the database, but I'm not doing anything explicit to make the db compact.
Any ideas?
As I replied to HasUp:
According to MS support, programmatic creation of Jet databases is deprecated. I ended up checking in an empty model database and then copying it whenever I needed a new one. See http://support.microsoft.com/kb/318559 for more info.
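For what it's worth, the model-database approach boils down to something like this (a sketch; the file names are hypothetical):

// Hypothetical sketch: model.mdb is an empty database created once with Access
// and checked in; each new database is just a file copy of it (requires System.IO).
const string templatePath = @"model.mdb";
const string newDbPath = @"output.mdb";
File.Copy(templatePath, newDbPath);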
The ASP.NET_SessionState table grows all the time and is already at 18 GB, with no sign that expired sessions are ever deleted.
We have tried executing DynamoDBSessionStateStore.DeleteExpiredSessions, but it seems to have no effect.
Our system is running fine: sessions are created and end users are not aware of the issue. However, it doesn't make sense that the table keeps growing all the time.
We have triple-checked permissions/security and everything seems to be in order. We use SDK version 3.1.0. What else remains to be checked?
With your table at over 18 GB, which is quite large in this context, it does not surprise me that this isn't working; a look at the code for the DeleteExpiredSessions method on GitHub shows why.
Here is the code:
public static void DeleteExpiredSessions(IAmazonDynamoDB dbClient, string tableName)
{
    LogInfo("DeleteExpiredSessions");
    Table table = Table.LoadTable(dbClient, tableName, DynamoDBEntryConversion.V1);
    ScanFilter filter = new ScanFilter();
    filter.AddCondition(ATTRIBUTE_EXPIRES, ScanOperator.LessThan, DateTime.Now);

    ScanOperationConfig config = new ScanOperationConfig();
    config.AttributesToGet = new List<string> { ATTRIBUTE_SESSION_ID };
    config.Select = SelectValues.SpecificAttributes;
    config.Filter = filter;

    DocumentBatchWrite batchWrite = table.CreateBatchWrite();
    Search search = table.Scan(config);

    do
    {
        List<Document> page = search.GetNextSet();
        foreach (var document in page)
        {
            batchWrite.AddItemToDelete(document);
        }
    } while (!search.IsDone);

    batchWrite.Execute();
}
The above algorithm executes in two parts. First it performs a Search (a table scan) with a filter to identify all expired records. These are then added to a DocumentBatchWrite request, which is executed as the second step.
Since your table is so large, the table scan step will take a very, very long time to complete before a single record is deleted. Basically, the above algorithm is useful for lazy garbage collection on small tables, but it does not scale well for large tables.
As best I can tell, execution never actually gets past the table scan, and you may be consuming all of your table's read throughput.
A possible solution would be to run a slightly modified version of the above method yourself. You would want to call the DocumentBatchWrite inside the do-while loop so that records start to be deleted before the table scan is finished.
That would look like:
public static void DeleteExpiredSessions(IAmazonDynamoDB dbClient, string tableName)
{
    LogInfo("DeleteExpiredSessions");
    Table table = Table.LoadTable(dbClient, tableName, DynamoDBEntryConversion.V1);
    ScanFilter filter = new ScanFilter();
    filter.AddCondition(ATTRIBUTE_EXPIRES, ScanOperator.LessThan, DateTime.Now);

    ScanOperationConfig config = new ScanOperationConfig();
    config.AttributesToGet = new List<string> { ATTRIBUTE_SESSION_ID };
    config.Select = SelectValues.SpecificAttributes;
    config.Filter = filter;

    Search search = table.Scan(config);

    do
    {
        // Perform a batch delete for each page returned
        DocumentBatchWrite batchWrite = table.CreateBatchWrite();
        List<Document> page = search.GetNextSet();
        foreach (var document in page)
        {
            batchWrite.AddItemToDelete(document);
        }
        batchWrite.Execute();
    } while (!search.IsDone);
}
Note: I have not tested the above code. It is a simple modification of the open-source version, so it should work, but it would need to be tested to ensure the pagination behaves correctly on a table whose records are being deleted while it is being scanned.
So, I Googled the following, but cannot seem to find an answer.
I've got a few Access tables that I am trying to update from Delphi.
One table updates perfectly, so I know my database connection is active (the connection is set when the main form loads).
The problem is the second table. I keep on getting the error: "Dataset not in edit or insert mode".
In the formshow procedure I've got the following code:
//Connect to the individuals details table
dmoDonations.tblInd.Connection:= dmoDonations.adoDonations;
dmoDonations.tblInd.ReadOnly:= False; //Enable read / write
dmoDonations.tblInd.Active:= False;
dmoDonations.tblInd.TableName:='Individuals';
dmoDonations.tblInd.Active:= True;
dmoDonations.tblInd.Append; //Insert new record
dmoDonations.cdsInd.Open;
dmoDonations.cdsInd.Active:= True;
dmoDonations.cdsInd.ReadOnly:= False;
dmoDonations.cdsInd.Refresh;
And the rest of the code to update the table is under the updateButtonClick procedure:
dmoDonations.tblInd.Insert; //Insert new record
dmoDonations.tblInd['Name']:= edtName.Text;
dmoDonations.tblInd['Address1']:= edtAddress1.Text;
dmoDonations.tblInd['Address2']:= edtAddress2.Text;
dmoDonations.tblInd['Address3']:= edtAddress3.Text;
dmoDonations.tblInd['Phone']:= edtPhone.Text;
dmoDonations.tblInd['Email']:= edtEmail.Text;
dmoDonations.tblInd.Post; //Saves new records
It's at the last line where the error pops up when I step through the code. Why would this error occur even though the table is open and active and I've inserted / appended the table (have tried both options in both sections of code)?
I want to use the following method to flag people in the Person table so that they can be processed. These people must be flagged as "In Process" so that other threads do not operate on the same rows.
In SQL Management Studio the query works as expected. When I call the method in my application I receive the row for the person but with the old status.
Status is one of many navigation properties on Person, and when this query returns it is the only property that comes back as a proxy object.
// This is how I'm calling it (obvious, I know)
var result = PersonLogic.GetPeopleWaitingInLine(100);

// And here is my method.
public IList<Person> GetPeopleWaitingInLine(int count)
{
    const string query =
        @"UPDATE TOP (@count) PERSON
          SET PERSON_STATUS_ID = @inProcessStatusId
          OUTPUT INSERTED.PERSON_ID,
                 INSERTED.PERSON_STATUS_ID
          FROM PERSON
          WHERE PERSON_STATUS_ID = @queuedStatusId";

    var queuedStatusId = StatusLogic.GetStatus("Queued").Id;
    var inProcessStatusId = StatusLogic.GetStatus("In Process").Id;

    return Context.People.SqlQuery(query,
        new SqlParameter("count", count),
        new SqlParameter("queuedStatusId", queuedStatusId),
        new SqlParameter("inProcessStatusId", inProcessStatusId))
        .ToList();
}
// Update: if I refresh the result set then I get the correct results,
// but I'm not sure about this solution since it requires 2 DB calls.
Context.ObjectContext().Refresh(RefreshMode.StoreWins, result);
I know it is an old question, but this could help somebody.
It seems you are using a global Context for your query. EF is designed to retain cached information, so if you always need fresh data you must use a fresh context to retrieve it, like this:
using (var tmpContext = new Context())
{
    // your query here
}
This creates the context and disposes of it when done. No cache is carried over, so the next call gets fresh data from the database rather than from the cache.
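Applied to the method above, that would look something like the following sketch (untested; it assumes the context class is named Context and can simply be instantiated locally):

public IList<Person> GetPeopleWaitingInLine(int count)
{
    const string query =
        @"UPDATE TOP (@count) PERSON
          SET PERSON_STATUS_ID = @inProcessStatusId
          OUTPUT INSERTED.PERSON_ID,
                 INSERTED.PERSON_STATUS_ID
          FROM PERSON
          WHERE PERSON_STATUS_ID = @queuedStatusId";

    var queuedStatusId = StatusLogic.GetStatus("Queued").Id;
    var inProcessStatusId = StatusLogic.GetStatus("In Process").Id;

    // A brand-new context has an empty cache, so the entities it materializes
    // reflect the values returned by the OUTPUT clause rather than previously
    // tracked (stale) copies.
    using (var tmpContext = new Context())
    {
        return tmpContext.People.SqlQuery(query,
            new SqlParameter("count", count),
            new SqlParameter("queuedStatusId", queuedStatusId),
            new SqlParameter("inProcessStatusId", inProcessStatusId))
            .ToList();
    }
}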
I'm writing an application that pulls changesets from TFS and exports a csv file that describes the latest changes, for use in a script that pushes those changes into ClearCase. The "latest" doesn't necessarily mean the latest, however. If a file was added and then edited, I only need to know that the file was added, and to get the latest version so that my script knows how to handle it properly. Most of this is fairly straightforward. I'm getting hung up on files that have been renamed or moved, as I do not want to show such an item as deleted and another item added. To uphold the integrity of ClearCase, I need the CSV file to say that the item was moved or renamed, along with the old location and the new location.
So, the issue I'm having is tracing a renamed (or moved) file back to the previous name or location so that I can correlate it to the new location/name. Where in the API can I get this information?
Here is your answer:
http://social.msdn.microsoft.com/Forums/en/tfsgeneral/thread/f9c7e7b4-b05f-4d3e-b8ea-cfbd316ef737
Using QueryHistory you can find out that an item was renamed, then using its previous changeset (previous to the one that says it was renamed) you can find its previous name.
You will need to use VersionControlServer.QueryHistory in a manner similar to the following method. Pay particular attention to SlotMode which must be false in order for renames to be followed.
private static void PrintNames(VersionControlServer vcs, Change change)
{
    // The key here is SlotMode, which must be false so that renames are followed.
    IEnumerable<Changeset> queryHistory =
        vcs.QueryHistory(
            new QueryHistoryParameters(change.Item.ServerItem, RecursionType.None)
            {
                IncludeChanges = true,
                SlotMode = false,
                VersionEnd = new ChangesetVersionSpec(change.Item.ChangesetId)
            });

    string name = string.Empty;
    var changes = queryHistory.SelectMany(changeset => changeset.Changes);
    foreach (var chng in changes)
    {
        if (name != chng.Item.ServerItem)
        {
            name = chng.Item.ServerItem;
            Console.WriteLine(name);
        }
    }
}
EDIT: Moved the other solution up. What follows worked when I was testing against a pure Rename change but broke when I tried it against a Rename-and-Edit change.
This is probably the most efficient way to get the previous name. While it works (TFS 2013 API against a TFS 2012 install), it looks like a bug to me.
private static string GetPreviousServerItem(VersionControlServer vcs, Item item)
{
    Change[] changes = vcs.GetChangesForChangeset(
        item.ChangesetId,
        includeDownloadInfo: false,
        pageSize: int.MaxValue,
        lastItem: new ItemSpec(item.ServerItem, RecursionType.None));

    string previousServerItem = changes.Single().Item.ServerItem;

    // Yep, this passes
    Trace.Assert(item.ServerItem != previousServerItem);

    return previousServerItem;
}
It would be used like:
if (change.ChangeType.HasFlag(ChangeType.Rename))
{
    string oldServerPath = GetPreviousServerItem(vcs, change.Item);
    // ...
}
I have a database file that I believe was created with Clipper, but I can't say for sure (I have .ntx files for the indexes, which I understand is what Clipper uses). I am trying to create a C# application that will read this database using the System.Data.OleDb namespace.
For the most part I can successfully read the contents of the tables; there is one field, however, that I cannot. This field, called CTRLNUMS, is defined as CHAR(750). I have read various articles found through Google searches suggesting that fields larger than 255 chars have to be read through a different process than the normal assignment to a string variable, but so far none of the approaches I've found has worked.
The following is a sample code snippet I am using to read the table, including the two options I used to read the CTRLNUMS field. Both options returned only 238 characters even though there are 750 characters stored in the field.
Here is my connection string:
Provider=Microsoft.Jet.OLEDB.4.0;Data Source=c:\datadir;Extended Properties=DBASE IV;
Can anyone tell me the secret to reading larger fields from a DBF file?
using (OleDbConnection conn = new OleDbConnection(connectionString))
{
    conn.Open();
    using (OleDbCommand cmd = new OleDbCommand())
    {
        cmd.Connection = conn;
        cmd.CommandType = CommandType.Text;
        cmd.CommandText = string.Format("SELECT ITEM,CTRLNUMS FROM STUFF WHERE ITEM = '{0}'", stuffId);

        using (OleDbDataReader dr = cmd.ExecuteReader())
        {
            if (dr.Read())
            {
                stuff.StuffId = dr["ITEM"].ToString();

                // OPTION 1
                string ctrlNums = dr["CTRLNUMS"].ToString();

                // OPTION 2
                char[] buffer = new char[750];
                int index = 0;
                int readSize = 5;
                while (index < 750)
                {
                    long charsRead = dr.GetChars(dr.GetOrdinal("CTRLNUMS"), index, buffer, index, readSize);
                    index += (int)charsRead;
                    if (charsRead < readSize)
                    {
                        break;
                    }
                }
            }
        }
    }
}
You can find a description of the DBF structure here: http://www.dbf2002.com/dbf-file-format.html
What I think Clipper used to do was modify the Field structure so that, in Character fields, the Decimal Places held the high-order byte of the size, so Character field sizes were really 256*Decimals+Size.
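To make that concrete, here's a rough, untested sketch of reading the 32-byte field descriptors from a DBF header and applying that interpretation (offsets follow the layout described at the link above):

using System;
using System.IO;
using System.Text;

static void DumpFieldLengths(string dbfPath)
{
    using (var fs = File.OpenRead(dbfPath))
    using (var reader = new BinaryReader(fs))
    {
        byte[] header = reader.ReadBytes(32);
        short headerSize = BitConverter.ToInt16(header, 8); // bytes 8-9: total header length

        // Field descriptors start at offset 32, one 32-byte record per field,
        // terminated by a 0x0D byte.
        for (int offset = 32; offset < headerSize - 1; offset += 32)
        {
            fs.Seek(offset, SeekOrigin.Begin);
            byte[] descriptor = reader.ReadBytes(32);
            if (descriptor.Length == 0 || descriptor[0] == 0x0D)
                break;

            string name = Encoding.ASCII.GetString(descriptor, 0, 11).TrimEnd('\0');
            char type = (char)descriptor[11];
            byte size = descriptor[16];     // byte 16: field length
            byte decimals = descriptor[17]; // byte 17: decimal count

            // The Clipper trick: for Character fields the decimal count holds the
            // high-order byte of the length, so the real length is 256*decimals + size.
            int realLength = type == 'C' ? 256 * decimals + size : size;
            Console.WriteLine($"{name} ({type}): {realLength}");
        }
    }
}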
I may have a C# class that reads DBFs natively (not via ADO/DAO); it could be modified to handle this case. Let me know if you're interested.
Are you still looking for an answer? Is this a one-off job or something that needs doing regularly?
I have a Python module that is primarily intended to extract data from all kinds of DBF files ... it doesn't yet handle the length_high_byte = decimal_places hack, but it's a trivial change. I'd be quite happy to (a) share this with you and/or (b) get a copy of such a DBF file for testing.
Added later: Extended-length feature added, and tested against files I've created myself. Offer to share code with anyone who would like to test it still stands. Still interested in getting some "real" files myself for testing.
3 suggestions that might be worth a shot...
1 - use Access to create a linked table to the DBF file, then use .NET to hit the table in the Access database instead of going directly to the DBF.
2 - try the FoxPro OLEDB provider (a sample connection string is sketched below).
3 - parse the DBF file by hand. Example is here.
My guess is that #1 should work the easiest, and #3 will give you the opportunity to fine tune your cussing skills. :)
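If you try option 2, the connection string would look something like this (assuming the Visual FoxPro OLE DB provider is installed; it reads free DBF tables from a folder, much like the Jet dBASE driver does):

// Hypothetical connection string for the VFP OLE DB provider (option 2)
string vfpConnectionString = @"Provider=VFPOLEDB.1;Data Source=c:\datadir;";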