I have a method to retrieve configuration details from a table MyConfiguration. The code currently being used is:
Query query;
QueryRun queryRun;
QueryBuildDataSource qbds;
MyConfiguration config;
int rowCount;
query = new Query();
qbds = query.addDataSource(tableNum(MyConfiguration));
queryRun = new QueryRun(query);
rowCount = SysQuery::countTotal(queryRun);
The table has 0 or 1 rows; an if statement then decides whether to use the configuration settings (if present) or fall back to the defaults.
Issue
Although there is a row in the table, the query intermittently returns 0 rows.
Update
Thanks to David's input, I have simplified the code:
MyConfiguration config;
select firstOnly useSettings, firstField, secondField from config;
// This wasn't included in the original example, but demonstrates how it's used.
if (config)
{
// These variables are defined in classDeclaration
useCustom = config.useSettings;
first = config.firstField;
second = config.secondField;
}
else
{
// No custom configuration, use defaults.
useCustom = 0;
}
This code is in a method that the primary method calls to find the configuration to use.
When I run my test methods in the development environment, all the tests pass (the configuration is read for each test). However, when the primary method is called from a button's click event, the select doesn't return anything (I've checked this in the debugger). This causes the application to run with the defaults instead of the configured values. If I manually move execution past the if in the debugger, the second select also doesn't return any values.
Both the test and the form execute the method in the same way, but they get different results from the select statement.
Your code looks right.
However, the following may be easier to work with and debug:
MyConfiguration config;
int rowCount;
;
select firstonly config;
if(config)
{
//Record exists
}
else
{
//Record does not exist
}
I use NReco.Data in my ASP.NET Core application to make DB calls, because I don't want to use EF and DataTable isn't supported yet.
Now I need to call a stored procedure and get multiple record sets (or lists of dictionaries).
At the moment I call this:
dbAdapter.Select($"STOREDNAME @{nameof(SQLPARAMETER)}", SQLPARAMETER).ToRecordSet()
But the stored procedure returns more than one record set; can anyone help me get the others?
Currently NReco.Data.DbDataAdapter has no API for processing multiple result sets returned by a single IDbCommand.
You can compose the IDbCommand yourself, execute a data reader, and read the multiple result sets in the following way:
IDbCommand spCmd; // let's assume this is the DB command for 'STOREDNAME'
RecordSet rs1 = null;
RecordSet rs2 = null;
spCmd.Connection.Open();
try {
using (var rdr = spCmd.ExecuteReader()) {
rs1 = RecordSet.FromReader(rdr);
if (rdr.NextResult())
rs2 = RecordSet.FromReader(rdr);
}
} finally {
spCmd.Connection.Close();
}
As the NReco.Data author, I think support for multiple result sets could easily be added to the DbDataAdapter API (I've just created an issue for that on GitHub).
-- UPDATE --
Starting from NReco.Data v1.0.2, it is possible to handle multiple result sets in the following way:
(var companies, var contacts) = DbAdapter.Select("exec STOREDNAME").ExecuteReader(
(rdr) => {
var companiesRes = new DataReaderResult(rdr).ToList<CompanyModel>();
rdr.NextResult();
var contactsRes = new DataReaderResult(rdr).ToList<ContactModel>();
return (companiesRes, contactsRes);
});
In the same manner, DataReaderResult can map results to dictionaries or a RecordSet if needed.
1. Do I need to pass a class object to the model method and process each item one at a time?
E.g.:
public async Task<int> SaveCollectionValues(Foo foo)
{
....
//Parameters
MySqlParameter prmID = new MySqlParameter("pID", MySqlDbType.Int32);
prmID.Value = foo.ID;
sqlCommand.Parameters.Add(prmID);
....
}
(OR)
2. Or shall I pass the collection to the model method and use a foreach to iterate through it?
public async Task<int> SaveCollectionValues(FooCollection foo)
{
....
//Parameters
foreach(Foo obj in foo)
{
MySqlParameter prmID = new MySqlParameter("pID", MySqlDbType.Int32);
prmID.Value = obj.ID;
sqlCommand.Parameters.Add(prmID);
....
}
....
}
I just need to know which of the above methods would be more efficient to use.
"Efficient" is a bit relative here, since you didn't specify which database. Bulk insert differs from one DB to another: SQL Server, for instance, uses BCP, while MySQL has ways to disable some internals while sending many insert/update commands.
Apart from that, if you're submitting a single collection at once and it should be handled as a single transaction, then the best option, for both code organization and SQL optimization, is to use both connection sharing and a single transaction object, as follows:
public void DoSomething(FooCollection collection)
{
    using (var db = GetMyDatabase())
    {
        db.Open();
        using (var transaction = db.BeginTransaction())
        {
            var allSaved = true;
            foreach (var foo in collection)
            {
                if (!DoSomething(foo, db, transaction))
                { transaction.Rollback(); allSaved = false; break; }
            }
            // Commit only when every item was saved successfully.
            if (allSaved)
                transaction.Commit();
        }
    }
}
public bool DoSomething(Foo foo, IDbConnection db, IDbTransaction transaction)
{
try
{
// create your command (use a helper?)
// set your command connection to db
// execute your command (don't forget to pass the transaction object)
// return true if it's ok (eg: ExecuteNonQuery > 0)
// return false if it's not ok
}
catch
{
return false;
// this might not work 100% fine for you.
// I'm not logging nor re-throwing the exception, I'm just getting rid of it.
// The idea is to return false because it was not ok.
// You can also return the exception through "out" parameters.
}
}
This way you have clean code: one method that handles the entire collection and one that handles each value.
Also, although you're submitting each value separately, you're using a single transaction. Besides giving you a single commit (better performance), if one item fails, the entire collection fails, leaving no garbage behind.
If you don't really need all that transaction stuff, just don't create the transaction and remove it from the second method. Keep a single connection, though, since that avoids resource overuse and connection overhead.
Also, as a general rule, I like to say: "Never open too many connections at once, especially when you can open a single one. Never forget to close and dispose of a connection unless you're using connection pooling and know exactly how that works."
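For illustration only, the per-item method could be filled in along these lines for MySQL; the table and column names in the SQL are made up, and only the MySqlParameter usage comes from the question:
using System.Data;
using MySql.Data.MySqlClient;

public bool DoSomething(Foo foo, IDbConnection db, IDbTransaction transaction)
{
    try
    {
        // Reuse the shared connection and transaction instead of opening new ones per item.
        using (var cmd = new MySqlCommand("INSERT INTO foo (id) VALUES (@pID)",
            (MySqlConnection)db, (MySqlTransaction)transaction))
        {
            cmd.Parameters.Add(new MySqlParameter("@pID", MySqlDbType.Int32) { Value = foo.ID });
            return cmd.ExecuteNonQuery() > 0; // true when the row was written
        }
    }
    catch
    {
        return false; // swallow the exception here; the caller rolls back the transaction
    }
}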
I am getting a single entity by using the fetchEntityByKey method, and after that I am loading a navigation property for the entity via entityAspect.loadNavigationProperty. But loadNavigationProperty always makes a call to the server. What I am wondering is whether I can check the cache first and, if the data exists there, get it from the cache, otherwise go to the server. How is that possible? Here is my current code:
return datacontext.getProjectById(projectId)
.then(function (data) {
vm.project = data;
vm.project.entityAspect.loadNavigationProperty('messages');
});
Here is the function that I encapsulated inside the datacontext service:
function getProjectById(projectId) {
return manager.fetchEntityByKey('Project', projectId)
.then(querySucceeded, _queryFailed);
function querySucceeded(data) {
return data.entity;
}
}
Also, how is it possible to load a navigation property with a limit? I don't want to load all the records for the navigation property at once, for performance reasons.
You can use the EntityQuery.fromEntityNavigation method to construct a query based on an entity and a navigationProperty. From there you can execute the resulting query locally, via the EntityManager.executeQueryLocally method. So in your example, once you have a 'project' entity, you can do the following:
var messagesNavProp = project.entityType.getProperty("messages");
var query = EntityQuery.fromEntityNavigation(project, messagesNavProp);
var messages = myEntityManager.executeQueryLocally(query);
You can also make use of the EntityQuery.using method to toggle a query between remote and local execution, like this:
query = query.using(FetchStrategy.FromLocalCache);
vs
query = query.using(FetchStrategy.FromServer);
Please take a look here: http://www.breezejs.com/sites/all/apidocs/classes/EntityManager.html
As you can see, fetchEntityByKey(typeName, keyValues, checkLocalCacheFirst) also has an optional third parameter that you can use to tell Breeze to first check the manager's cache for that entity.
Hope this helps.
I'm writing a command line application that checks the SPFieldCollection returned by the SPWeb.Fields property, but it's not behaving as I'd like. I have hundreds of SPWebs and it's definitely touching them all, but for all but the initial SPWeb, it's returning an empty Fields property. What am I doing wrong?
string siteUrl = "http://webroot/sitecoll";
using (SPSite siteCol = new SPSite(siteUrl))
{
using(SPWeb outerWeb = siteCol.OpenWeb())
{
foreach (SPWeb innerWeb in siteCol.AllWebs)
{
LogMessageToFile(String.Format("Checking {0}", innerWeb.Url)); //executed for each of the hundreds of innerWebs
if (innerWeb.Fields.ContainsField("Year"))
{
// Never accessed after the first time through because innerWeb.Fields is empty
}
}
}
}
SPWeb.Fields live at the site collection level: the site columns are stored on the root web.
Unless you specifically create fields at the subsite level, you will get 0 fields returned for the subwebs.
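If the goal is to find webs where the "Year" column is available, including site columns inherited from the parent, a sketch like the following may help (it checks AvailableFields instead of Fields; note also that each SPWeb returned by AllWebs should be disposed):
foreach (SPWeb innerWeb in siteCol.AllWebs)
{
    try
    {
        // AvailableFields includes fields defined on this web and on its parent webs,
        // so inherited site columns show up even when innerWeb.Fields is empty.
        if (innerWeb.AvailableFields.ContainsField("Year"))
        {
            LogMessageToFile(String.Format("Year field available on {0}", innerWeb.Url));
        }
    }
    finally
    {
        innerWeb.Dispose(); // AllWebs creates SPWeb objects the caller must dispose
    }
}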
I am using ASP.NET MVC for an application. I've taken some guidance from Rob Conery's MVC Storefront series, and I am using a data access pattern very similar to the one he used in the storefront.
However, I have added a small difference to the pattern. Each class I have created in my model has a property called IsNew. The intention is to allow me to specify whether I should be inserting or updating in the database.
Here's some code:
In my controller:
OrderService orderService = new OrderService();
Order dbOrder = orderService.GetOrder(ID);
if (ModelState.IsValid)
{
dbOrder.SomeField1 = "Whatever1";
dbOrder.SomeField2 = "Whatever2";
dbOrder.DateModified = DateTime.Now;
dbOrder.IsNew = false;
orderService.SaveOrder(dbOrder);
}
And then in the SQLOrderRepository:
public void SaveOrder(Order order)
{
ORDER dbOrder = new ORDER();
dbOrder.O_ID = order.ID;
dbOrder.O_SomeField1 = order.SomeField1;
dbOrder.O_SomeField2 = order.SomeField2;
dbOrder.O_DateCreated = order.DateCreated;
dbOrder.O_DateModified = order.DateModified;
if (order.IsNew)
db.ORDERs.InsertOnSubmit(dbOrder);
db.SubmitChanges();
}
If I change the controller code so that the dbOrder.IsNew = true; then the code works, and the values are inserted correctly.
However, if I set the dbOrder.IsNew = false; then nothing happens...there are no errors - it just doesn't update the order.
I am using DebuggerWriter here: http://www.u2u.info/Blogs/Kris/Lists/Posts/Post.aspx?ID=11 to trace the SQL that is being generated, and as expected, when the IsNew value is true, the Insert SQL is generated and executed properly. However, when IsNew is set to false, there appears to be no SQL generated, so nothing is executed.
I've verified that the issue here (LINQ not updating on .SubmitChanges()) is not the problem.
Any help is appreciated.
In your SaveOrder method you are always creating a new ORDER object. You need to change this so that if order.IsNew is false, it retrieves the existing one from the DB and updates it instead.
public void SaveOrder(Order order)
{
ORDER dbOrder;
if (order.IsNew)
{
dbOrder = new ORDER();
dbOrder.O_ID = order.ID;
}
else
{
dbOrder = (from o in db.ORDERS where o.O_ID == order.ID select o).Single();
}
dbOrder.O_SomeField1 = order.SomeField1;
dbOrder.O_SomeField2 = order.SomeField2;
dbOrder.O_DateCreated = order.DateCreated;
dbOrder.O_DateModified = order.DateModified;
if (order.IsNew)
db.ORDERs.InsertOnSubmit(dbOrder);
db.SubmitChanges();
}
I think the problem is that your entity is detached from your context.
You should try to attach your entity back to your context if you want to update. The downside of LINQ to SQL is that, for the re-attachment, you'll need the original state of the object as it was when it was detached...
Another solution is to re-get your entity from the context and copy all the data from the entity passed in as the parameter. This will do until you have more complex entities.
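As a rough sketch of the attach approach (assuming the ORDER table has a version/timestamp column, or that its columns are set to UpdateCheck.Never, so LINQ to SQL can attach the object as modified; MyDataContext is a placeholder name):
public void UpdateDetachedOrder(ORDER detachedOrder)
{
    using (var db = new MyDataContext())
    {
        // Attach the detached entity and mark it as modified.
        // Without a version column you would need the original copy instead:
        // db.ORDERs.Attach(detachedOrder, originalOrder);
        db.ORDERs.Attach(detachedOrder, true);
        db.SubmitChanges();
    }
}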
What tvanfosson said.
I would just like to add that I use logic where, if the ID equals the default value (0, or Guid.Empty if using GUIDs), then I assume it is new. Otherwise, if an ID was passed in, I go get the existing object and update it.
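A minimal sketch of that convention, using the Order class from the question (the assumption here is an integer identity key where 0 means "not saved yet"):
// Derive "new vs existing" from the key instead of tracking an IsNew flag.
private static bool IsNew(Order order)
{
    return order.ID == 0; // for GUID keys: order.ID == Guid.Empty
}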