How to use a custom mapper with MapStruct with nested values and conditional values - mapping

I am trying to map one object to another using MapStruct and am currently facing some challenges with how to use it in certain cases.
public class TargetOrderDto {
String id;
String preferedItem;
List<Item> items;
String status;
Address address;
}
public class Item {
String id;
String name;
}
public abstract class TargetOrderMapper {
@Autowired
private StatusRepository statusRepository;
@Mappings({
@Mapping(target = "id", source = "reference"),
@Mapping(target = "preferedItem", source = ""), // Here I need to loop through these values checking for a single value with a specific tag
@Mapping(target = "items", source = "items"), // List of objects to another list of different data types.
@Mapping(target = "status", source = "remoteStatus") // may need to extract a value from a repository
})
abstract OrderDto toTargetOrderDto(RemoteOrder remoteOrder);
}
// Remote Data
public class RemoteOrder {
String reference;
List<Item> items;
String remoteStatus;
}
public class RemoteItem {
String id;
String flag;
String description;
}
These are the current scenarios that I have failed to get my head around (maybe I am mapping a complex object).
preferedItem :
for this, I need to loop through the items in the order and identify the item with a specific flag (if one matches then I take that value, else I use null).
items :
I need to convert this into a list of a different item type (a List<Item> from a List<RemoteItem>); each has different mapping rules of its own.
remoteStatus :
This one is a bit more tricky: I need to extract the status from the remoteOrder and then look it up in the db using the statusRepository to get an alternate mapped value from the db.
any help is highly appreciated.

You can't do business logic with MapStruct, so keep the mappings simple and define your own methods where it comes to conditional mappings in lists. Note: you can write your own method and MapStruct will select it. Also, from that implementation you can refer to MapStruct methods again.
public abstract class TargetOrderMapper {
@Autowired
private StatusRepository statusRepository;
@Mappings({
@Mapping(target = "id", source = "reference"),
@Mapping(target = "preferedItem", source = ""), // Here I need to loop through these values checking for a single value with a specific tag
@Mapping(target = "items", source = "items"), // List of objects to another list of different data types.
@Mapping(target = "status", source = "remoteStatus") // may need to extract a value from a repository
})
abstract OrderDto toTargetOrderDto(RemoteOrder remoteOrder);
protected List<Item> toItemList(List<Item> items) {
// do whatever you want here, and call toItem while iterating, e.g.:
List<Item> result = new ArrayList<>();
for (Item item : items) {
result.add(toItem(item));
}
return result;
}
protected abstract Item toItem(Item item);
}
The same goes for status. I added a FAQ entry some time ago about lists (mainly about updating, but I guess the same applies here).
About lookups, you can use @MappingContext to pass down a context that contains the logic to access a DB. See here.
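To make that concrete, here is a rough sketch of how such handwritten methods could look. Treat it as an illustration only: the @Mapper/@Named/qualifiedByName wiring is standard MapStruct, but the "PREFERRED" flag value, the getters on RemoteItem, the statusRepository.findTargetStatus(...) method, and the assumption that RemoteOrder actually holds RemoteItem instances (the question declares List<Item>, which looks like a typo) are all guesses rather than anything from the original post.

    import java.util.List;
    import org.mapstruct.Mapper;
    import org.mapstruct.Mapping;
    import org.mapstruct.Mappings;
    import org.mapstruct.Named;
    import org.springframework.beans.factory.annotation.Autowired;

    @Mapper(componentModel = "spring")
    public abstract class TargetOrderMapper {

        @Autowired
        protected StatusRepository statusRepository;

        @Mappings({
            @Mapping(target = "id", source = "reference"),
            @Mapping(target = "preferedItem", source = "items", qualifiedByName = "preferedItem"),
            @Mapping(target = "status", source = "remoteStatus", qualifiedByName = "statusLookup")
        })
        public abstract TargetOrderDto toTargetOrderDto(RemoteOrder remoteOrder);

        @Named("preferedItem")
        protected String toPreferedItem(List<RemoteItem> items) {
            // loop through the items and take the one carrying the "preferred" flag, otherwise null
            if (items == null) {
                return null;
            }
            return items.stream()
                    .filter(item -> "PREFERRED".equals(item.getFlag()))   // assumed flag value
                    .map(RemoteItem::getDescription)                      // assumed value to expose
                    .findFirst()
                    .orElse(null);
        }

        @Named("statusLookup")
        protected String toStatus(String remoteStatus) {
            // hypothetical repository method returning the mapped status from the db
            return statusRepository.findTargetStatus(remoteStatus);
        }
    }

The items property is deliberately not listed in @Mappings here; MapStruct maps same-named properties automatically and will pick up handwritten methods such as toItemList/toItem from the answer above.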

Related

Neo4j - Custom converter for field of type List

I am trying to write a custom converter for a nested object so that this object gets saved as string in Neo4j database.
I am using the @Convert annotation on my field and passing ImageConverter.class, which is my AttributeConverter class.
Everything works fine as expected and I am able to save the string representation of the Image class in the Neo4j db.
However, now instead of a single image I want to have List<Image> as my nested field. In this case, putting @Convert(ImageConverter.class) doesn't work.
I see that there is a class called ConverterBasedCollectionConverter which gets used when I have a field of type List<LocalDateTime>.
However, I couldn't find any examples on how to use this class in the case of custom converters.
Please, can anyone help me with this, or suggest any other approach to using a custom converter on a field of type List?
I am using Neo4j (version 3.4.1) and Spring-data-neo4j (5.0.10.RELEASE) in my application. I am also using OGM.
PS: I am aware that it is advised to store nested objects as separate node establishing a relationship with parent object. However, my use case demands that the object be stored as string property and not as separate node.
Regards,
V
It is not as difficult as I assumed it would be.
Given a class (snippet)
@NodeEntity
public class Actor {
@Id @GeneratedValue
private Long id;
@Convert(MyImageListConverter.class)
public List<MyImage> images = new ArrayList<>();
// ....
}
with MyImage as simple as can be
public class MyImage {
public String blob;
public MyImage(String blob) {
this.blob = blob;
}
public static MyImage of(String value) {
return new MyImage(value);
}
}
and a converter
public class MyImageListConverter implements AttributeConverter<List<MyImage>, String[]> {
@Override
public String[] toGraphProperty(List<MyImage> value) {
if (value == null) {
return null;
}
String[] values = new String[(value.size())];
int i = 0;
for (MyImage image : value) {
values[i++] = image.blob;
}
return values;
}
@Override
public List<MyImage> toEntityAttribute(String[] values) {
List<MyImage> images = new ArrayList<>(values.length);
for (String value : values) {
images.add(MyImage.of(value));
}
return images;
}
}
will print the following debug output on save, which I think is what you want:
UNWIND {rows} as row CREATE (n:Actor) SET n=row.props RETURN row.nodeRef as ref, ID(n) as id, {type} as type with params {type=node, rows=[{nodeRef=-1, props={images=[blobb], name=Jeff}}]}
especially the images part.
Test method for this looks like
@Test
public void test() {
Actor jeff = new Actor("Jeff");
String blobValue = "blobb";
jeff.images.add(new MyImage(blobValue));
session.save(jeff);
session.clear();
Actor loadedActor = session.load(Actor.class, jeff.getId());
assertThat(loadedActor.images.get(0).blob).isEqualTo(blobValue);
}
I came up with a solution to my problem. So, in case you want another solution along with the one provided by @meistermeier, you can use the code below.
public class ListImageConverter extends ConverterBasedCollectionConverter<Image, String>{
public ListImageConverter() {
super(List.class, new ImageConverter());
}
@Override
public String[] toGraphProperty(Collection<Image> values) {
Object[] graphProperties = super.toGraphProperty(values);
String[] stringArray = Arrays.stream(graphProperties).toArray(String[]::new);
return stringArray;
}
@Override
public Collection<Image> toEntityAttribute(String[] values) {
return super.toEntityAttribute(values);
}
}
The ImageConverter class just implements AttributeConverter<Image, String>, where I serialize and deserialize my Image object to/from JSON.
I chose to go with this approach because I had an Image field in one object and a List<Image> in another object. So just by changing @Convert(ListImageConverter.class) to @Convert(ImageConverter.class) I was able to save a list as well as a single object in the Neo4j database.
Note: You can skip overriding the toEntityAttribute method if you want; it doesn't add much value.
However, you have to override toGraphProperty, as the Neo4j code checks for the presence of a declared method with the name toGraphProperty.
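For completeness, here is a minimal sketch of what the ImageConverter mentioned above might look like, assuming Jackson's ObjectMapper is used for the JSON round-trip (the original post does not show how the serialization is actually done):

    import com.fasterxml.jackson.databind.ObjectMapper;
    import org.neo4j.ogm.typeconversion.AttributeConverter;

    public class ImageConverter implements AttributeConverter<Image, String> {

        private final ObjectMapper mapper = new ObjectMapper();

        @Override
        public String toGraphProperty(Image value) {
            try {
                // store the whole Image as a single JSON string property
                return value == null ? null : mapper.writeValueAsString(value);
            } catch (Exception e) {
                throw new IllegalStateException("Could not serialize Image", e);
            }
        }

        @Override
        public Image toEntityAttribute(String value) {
            try {
                // rebuild the Image from the JSON string stored in the graph
                return value == null ? null : mapper.readValue(value, Image.class);
            } catch (Exception e) {
                throw new IllegalStateException("Could not deserialize Image", e);
            }
        }
    }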
Hope this helps someone!
Regards,
V

How to do group by key on custom logic in cloud data flow

I am trying to achieve a GroupByKey based on a custom object in a Cloud Dataflow pipeline.
public static void main(String[] args) {
Pipeline pipeline = Pipeline.create(PipelineOptionsFactory.create());
List<KV<Student,StudentValues>> studentList = new ArrayList<>();
studentList.add(KV.of(new Student("pawan", 10,"govt"),
new StudentValues("V1", 123,"govt")));
studentList.add(KV.of(new Student("pawan", 13223,"word"),
new StudentValues("V2", 456,"govt")));
PCollection<KV<Student,StudentValues>> pc =
pipeline.apply(Create.of(studentList));
PCollection<KV<Student, Iterable<StudentValues>>> groupedWords =
pc.apply(GroupByKey.<Student,StudentValues>create());
}
I just want to group both PCollection records based on the Student key.
@DefaultCoder(AvroCoder.class)
static class Student /*implements Serializable*/{
public Student(){}
public Student(String n, Integer i, String sc){
name = n;
id = i;
school = sc;
}
public String name;
public Integer id;
public String school;
@Override
public boolean equals(Object obj) {
System.out.println("obj = "+obj);
System.out.println("this = "+this);
Student stObj = (Student) obj;
if (stObj.name.equals(this.name)) {
return true;
} else{
return false;
}
}
}
I have overridden the equals method of my custom class, but each time I am getting the same instance of the Student object to compare against inside the equals method.
Ideally it should compare the first student key with the second one.
What am I doing wrong here?
Why do you think you are doing anything wrong? The keys of each element are serialized (using the AvroCoder you specified) and the GroupByKey can group all of the elements with the same serialized representation together. After that it doesn't need to compare the students to make sure that the values with the same key have been grouped together.
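As a side note (not from the original answer): GroupByKey groups by the key's encoded bytes, so equals/hashCode are not what Dataflow uses for grouping. If you still want Student to behave as a proper value type in ordinary Java code and tests, a conventional sketch would look like this:

    import java.util.Objects;

    @DefaultCoder(AvroCoder.class)   // same annotation as in the question; import it from your Dataflow/Beam SDK
    static class Student {
        public String name;
        public Integer id;
        public String school;

        public Student() {}

        public Student(String n, Integer i, String sc) {
            name = n;
            id = i;
            school = sc;
        }

        @Override
        public boolean equals(Object obj) {
            if (this == obj) return true;
            if (!(obj instanceof Student)) return false;
            Student other = (Student) obj;
            return Objects.equals(name, other.name)
                    && Objects.equals(id, other.id)
                    && Objects.equals(school, other.school);
        }

        @Override
        public int hashCode() {
            return Objects.hash(name, id, school);
        }
    }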

NewtonSoft json Contract Resolver with MVC 4.0 Web Api not producing the output as expected

I am trying to create a conditional ContractResolver so that I can control the serialization differently depending on the web request/controller action.
For example, in my User controller I want to serialize all properties of my User, but for some of the related objects I might only serialize the primitive types. And if I went to my Company controller I would want to serialize all the properties of the Company but maybe only the primitive ones of the User (because of this I don't want to use data annotations or ShouldSerialize functions).
So, looking at the custom ContractResolver page, I created my own.
http://james.newtonking.com/projects/json/help/index.html?topic=html/ContractResolver.htm
It looks like this
public class IgnoreListContractResolver : DefaultContractResolver
{
private readonly Dictionary<string, List<string>> IgnoreList;
public IgnoreListContractResolver(Dictionary<string, List<string>> i)
{
IgnoreList = i;
}
protected override IList<JsonProperty> CreateProperties(Type type, MemberSerialization memberSerialization)
{
List<JsonProperty> properties = base.CreateProperties(type, memberSerialization).ToList();
if(IgnoreList.ContainsKey(type.Name))
{
properties.RemoveAll(x => IgnoreList[type.Name].Contains(x.PropertyName));
}
return properties;
}
}
And then in my Web API controller action for GetUsers I do this
public dynamic GetUsers()
{
List<User> Users = db.Users.ToList();
List<string> RoleList = new List<string>();
RoleList.Add("UsersInRole");
List<string> CompanyList = new List<string>();
CompanyList.Add("CompanyAccesses");
CompanyList.Add("ArchivedMemberships");
CompanyList.Add("AddCodes");
Dictionary<string, List<string>> IgnoreList = new Dictionary<string, List<string>>();
IgnoreList.Add("Role", RoleList);
IgnoreList.Add("Company", CompanyList);
GlobalConfiguration
.Configuration
.Formatters.JsonFormatter
.SerializerSettings
.ContractResolver = new IgnoreListContractResolver(IgnoreList);
return new { List = Users, Status = "Success" };
}
So when debugging this I see my contract resolver run and it returns the correct properties, but the JSON returned to the browser still contains entries for the properties I removed from the list.
Any ideas what I am missing, or how I can step into the JSON serialization step in Web API controllers?
UPDATE:
I should add that this is in an MVC4 project that has both MVC controllers and webapi controllers. The User, Company, and Role objects are objects (created by code first) that get loaded from EF5. The controller in question is a web api controller. Not sure why this matters but I tried this in a clean WebApi project (and without EF5) instead of an MVC project and it worked as expected. Does that help identify where the problem might be?
Thanks
UPDATE 2:
In the same MVC4 project I created an extension method for the Object class, called ToJson. It uses Newtonsoft.Json.JsonSerializer to serialize my entities. It's this simple.
public static string ToJson(this object o, Dictionary<string, List<string>> IgnoreList)
{
JsonSerializer js = JsonSerializer.Create(new Newtonsoft.Json.JsonSerializerSettings()
{
Formatting = Formatting.Indented,
DateTimeZoneHandling = DateTimeZoneHandling.Utc,
ContractResolver = new IgnoreListContractResolver(IgnoreList),
ReferenceLoopHandling = ReferenceLoopHandling.Ignore
});
js.Converters.Add(new Newtonsoft.Json.Converters.StringEnumConverter());
var jw = new StringWriter();
js.Serialize(jw, o);
return jw.ToString();
}
And then in an MVC action I create a JSON string like this.
model.jsonUserList = db.Users.ToList().ToJson(IgnoreList);
Where the ignore list is created exactly as in my previous post. Again I see the contract resolver run and correctly limit the properties list, but the output JSON string still contains everything (including the properties I removed from the list). Does this help? I must be doing something wrong, and now it seems like it isn't the MVC or Web API framework. Could this have anything to do with EF interactions/proxies/etc.? Any ideas would be much appreciated.
Thanks
UPDATE 3:
Process of elimination and a little more thorough debugging made me realize that EF 5 dynamic proxies were messing up my serialization and the ContractResolver's check for a type-name match. So here is my updated IgnoreListContractResolver. At this point I am just looking for opinions on better ways, or on whether I am doing something terrible. I know this is jumping through a lot of hoops just to use my EF objects directly instead of DTOs, but in the end I am finding this solution really flexible.
public class IgnoreListContractResolver : CamelCasePropertyNamesContractResolver
{
private readonly Dictionary<string, List<string>> IgnoreList;
public IgnoreListContractResolver(Dictionary<string, List<string>> i)
{
IgnoreList = i;
}
protected override IList<JsonProperty> CreateProperties(Type type, MemberSerialization memberSerialization)
{
List<JsonProperty> properties = base.CreateProperties(type, memberSerialization).ToList();
string typename = type.Name;
if(type.FullName.Contains("System.Data.Entity.DynamicProxies.")) {
typename = type.FullName.Replace("System.Data.Entity.DynamicProxies.", "");
typename = typename.Remove(typename.IndexOf('_'));
}
if (IgnoreList.ContainsKey(typename))
{
//remove anything in the ignore list and ignore case because we are using camel case for json
properties.RemoveAll(x => IgnoreList[typename].Contains(x.PropertyName, StringComparer.CurrentCultureIgnoreCase));
}
return properties;
}
}
I think it might help if you used Type instead of string for the ignore list's key type. So you can avoid naming issues (multiple types with the same name in different namespaces) and you can make use of inheritance. I'm not familiar with EF5 and the proxies, but I guess that the proxy classes derive from your entity classes. So you can check Type.IsAssignableFrom() instead of just checking whether typename is a key in the ignore list.
private readonly Dictionary<Type, List<string>> IgnoreList;
protected override IList<JsonProperty> CreateProperties(Type type, MemberSerialization memberSerialization)
{
List<JsonProperty> properties = base.CreateProperties(type, memberSerialization).ToList();
// look for the first dictionary entry whose key is a superclass of "type"
Type key = IgnoreList.Keys.FirstOrDefault(k => k.IsAssignableFrom(type));
if (key != null)
{
//remove anything in the ignore list and ignore case because we are using camel case for json
properties.RemoveAll(x => IgnoreList[key].Contains(x.PropertyName, StringComparer.CurrentCultureIgnoreCase));
}
return properties;
}
Then the ignore list must be created like this (I also used the short syntax for creating the list and dictionary):
var CompanyList = new List<string> {
"CompanyAccesses",
"ArchivedMemberships",
"AddCodes"
};
var IgnoreList = new Dictionary<Type, List<string>> {
// I just replaced "Company" with typeof(Company) here:
{ typeof(Company), CompanyList }
};
Be aware that, if you use my code above, adding typeof(object) as the first key to the ignore list will cause this entry to be matched every time, and none of your other entries will ever be used! This happens because a variable of type object is assignable from every other type.

Json and Circular Reference Exception

I have an object which has a circular reference to another object. Given the relationship between these objects this is the right design.
To Illustrate
Machine => Customer => Machine
As is expected I run into an issue when I try to use Json to serialize a machine or customer object. What I am unsure of is how to resolve this issue as I don't want to break the relationship between the Machine and Customer objects. What are the options for resolving this issue?
Edit
Presently I am using Json method provided by the Controller base class. So the serialization I am doing is as basic as:
Json(machineForm);
Update:
Do not try to use NonSerializedAttribute, as the JavaScriptSerializer apparently ignores it.
Instead, use the ScriptIgnoreAttribute in System.Web.Script.Serialization.
public class Machine
{
public Customer Customer { get; set; }
// Other members
// ...
}
public class Customer
{
[ScriptIgnore]
public Machine Machine { get; set; } // Parent reference?
// Other members
// ...
}
This way, when you toss a Machine into the Json method, it will traverse the relationship from Machine to Customer but will not try to go back from Customer to Machine.
The relationship is still there for your code to do as it pleases with, but the JavaScriptSerializer (used by the Json method) will ignore it.
I'm answering this despite its age because it is currently the 3rd result from Google for "json.encode circular reference", and I don't completely agree with the answers above: using the ScriptIgnoreAttribute assumes that you won't anywhere in your code want to traverse the relationship in the other direction for some JSON. I don't believe in locking down your model because of one use case.
It did inspire me to use this simple solution.
Since you're working in a View in MVC, you have the Model, and you want to simply assign the Model to ViewData.Model within your controller. Go ahead and use a LINQ query within your View to flatten the data, nicely removing the offending circular reference for the particular JSON you want, like this:
var jsonMachines = from m in machineForm
select new { m.X, m.Y, // other Machine properties you desire
Customer = new { m.Customer.Id, m.Customer.Name, // other Customer properties you desire
}};
return Json(jsonMachines);
Or if the Machine -> Customer relationship is 1..* -> * then try:
var jsonMachines = from m in machineForm
select new { m.X, m.Y, // other machine properties you desire
Customers = new List<Customer>(
(from c in m.Customers
select new Customer()
{
Id = c.Id,
Name = c.Name,
// Other Customer properties you desire
}).Cast<Customer>())
};
return Json(jsonMachines);
Based on txl's answer, you have to disable lazy loading and proxy creation; then you can use the normal methods to get your data.
Example:
//Retrieve Items with Json:
public JsonResult Search(string id = "")
{
db.Configuration.LazyLoadingEnabled = false;
db.Configuration.ProxyCreationEnabled = false;
var res = db.Table.Where(a => a.Name.Contains(id)).Take(8);
return Json(res, JsonRequestBehavior.AllowGet);
}
I used to have the same problem. I have created a simple extension method that "flattens" L2E objects into an IDictionary. An IDictionary is serialized correctly by the JavaScriptSerializer. The resulting JSON is the same as directly serializing the object.
Since I limit the level of serialization, circular references are avoided. It also will not include 1->n linked tables (EntitySets).
private static IDictionary<string, object> JsonFlatten(object data, int maxLevel, int currLevel) {
var result = new Dictionary<string, object>();
var myType = data.GetType();
var myAssembly = myType.Assembly;
var props = myType.GetProperties();
foreach (var prop in props) {
// Remove EntityKey etc.
if (prop.Name.StartsWith("Entity")) {
continue;
}
if (prop.Name.EndsWith("Reference")) {
continue;
}
// Do not include lookups to linked tables
Type typeOfProp = prop.PropertyType;
if (typeOfProp.Name.StartsWith("EntityCollection")) {
continue;
}
// If the type is from my assembly == custom type
// include it, but flattened
if (typeOfProp.Assembly == myAssembly) {
if (currLevel < maxLevel) {
result.Add(prop.Name, JsonFlatten(prop.GetValue(data, null), maxLevel, currLevel + 1));
}
} else {
result.Add(prop.Name, prop.GetValue(data, null));
}
}
return result;
}
public static IDictionary<string, object> JsonFlatten(this Controller controller, object data, int maxLevel = 2) {
return JsonFlatten(data, maxLevel, 1);
}
My Action method looks like this:
public JsonResult AsJson(int id) {
var data = Find(id);
var result = this.JsonFlatten(data);
return Json(result, JsonRequestBehavior.AllowGet);
}
In the Entity Framework version 4, there is an option available: ObjectContextOptions.LazyLoadingEnabled
Setting it to false should avoid the 'circular reference' issue. However, you will have to explicitly load the navigation properties that you want to include.
see: http://msdn.microsoft.com/en-us/library/bb896272.aspx
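A rough sketch of that idea using the Machine/Customer example from this question (the MyEntities context, the Machines set, and the Id property are illustrative names, not from the original posts):

    using System.Linq;
    using System.Web.Mvc;

    public class MachineController : Controller
    {
        public JsonResult GetMachine(int id)
        {
            using (var context = new MyEntities())   // hypothetical ObjectContext
            {
                // Turn off lazy loading so serialization doesn't walk the
                // Machine <-> Customer relationship back and forth.
                context.ContextOptions.LazyLoadingEnabled = false;

                // Explicitly include only the navigation properties you want serialized.
                var machine = context.Machines
                                     .Include("Customer")
                                     .FirstOrDefault(m => m.Id == id);

                return Json(machine, JsonRequestBehavior.AllowGet);
            }
        }
    }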
Since, to my knowledge, you cannot serialize object references, but only copies, you could try employing a bit of a dirty hack that goes something like this:
Customer should serialize its Machine reference as the machine's id.
When you deserialize the JSON you can then run a simple function over it that transforms those ids into proper references.
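A sketch of that hack, reusing the Machine/Customer classes from the accepted answer. It assumes Machine has an Id property and Customer exposes a MachineId alongside its JSON-ignored Machine reference; neither property appears in the original post:

    using System.Collections.Generic;
    using System.Linq;
    using System.Web.Script.Serialization;

    public class Customer
    {
        [ScriptIgnore]                       // don't serialize the object reference...
        public Machine Machine { get; set; }

        public int MachineId { get; set; }   // ...serialize the machine's id instead

        // Other members
        // ...
    }

    public static class ReferenceFixup
    {
        // After deserializing, turn the machine ids back into real object references.
        public static void RestoreReferences(IEnumerable<Machine> machines, IEnumerable<Customer> customers)
        {
            var machinesById = machines.ToDictionary(m => m.Id);
            foreach (var customer in customers)
            {
                customer.Machine = machinesById[customer.MachineId];
            }
        }
    }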
You need to decide which is the "root" object. Say the machine is the root; then the customer is a sub-object of the machine. When you serialise the machine, it will serialise the customer as a sub-object in the JSON, and when the customer is serialised, it will NOT serialise its back-reference to the machine. When your code deserialises the machine, it will deserialise the machine's customer sub-object and reinstate the back-reference from the customer to the machine.
Most serialisation libraries provide some kind of hook to modify how deserialisation is performed for each class. You'd need to use that hook to modify deserialisation for the machine class to reinstate the backreference in the machine's customer. Exactly what that hook is depends on the JSON library you are using.
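With Json.NET, for instance, the standard serialization callbacks can do that fix-up; whether this applies depends on the serializer you actually use. A sketch, again based on the Machine/Customer classes from above:

    using System.Runtime.Serialization;

    public class Machine
    {
        public Customer Customer { get; set; }

        // Other members
        // ...

        // Json.NET honours [OnDeserialized]; once the machine and its customer
        // sub-object have been deserialized, reinstate the back-reference that
        // was deliberately left out of the JSON.
        [OnDeserialized]
        internal void OnDeserializedMethod(StreamingContext context)
        {
            if (Customer != null)
            {
                Customer.Machine = this;
            }
        }
    }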
I've had the same problem this week as well, and could not use anonymous types because I needed to implement an interface asking for a List<MyType>. After making a diagram showing all relationships with navigability, I found out that MyType had a bidirectional relationship with MyObject which caused this circular reference, since they both held a reference to each other.
After deciding that MyObject did not really need to know about MyType, and thereby making it a unidirectional relationship, this problem was solved.
What I have done is a bit radical, but I don't need the property that causes the nasty circular-reference error, so I set it to null before serializing.
SessionTickets result = GetTicketsSession();
foreach(var r in result.Tickets)
{
r.TicketTypes = null; //those two were creating the problem
r.SelectedTicketType = null;
}
return Json(result);
If you really need your properties, you can create a viewmodel which does not hold circular references but keeps some Id of the important element, which you can use later to restore the original value.

Using Stored Procedures with Linq To Sql which have Additional Parameters

I have a very big problem and can't seem to find anybody else on the internet that has my problem. I sure hope StackOverflow can help me...
I am writing an ASP.NET MVC application and I'm using the Repository concept with Linq To Sql as my data store. Everything is working great in regards to selecting rows from views and trapping very basic business rule constraints. However, I'm faced with a problem in my stored procedure mappings for deletes, inserts, and updates. Let me explain:
Our DBA has put a lot of work into putting the business logic into all of our stored procedures so that I don't have to worry about it on my end. Sure, I do basic validation, but he manages data integrity and conflicting date constraints, etc... The problem that I'm faced with is that all of the stored procedures (and I mean all) have 5 additional parameters (6 for inserts) that provide information back to me. The idea is that when something breaks, I can prompt the user with the appropriate information from our database.
For example:
sp_AddCategory(
@userID INT,
@categoryName NVARCHAR(100),
@isActive BIT,
@errNumber INT OUTPUT,
@errMessage NVARCHAR(1000) OUTPUT,
@errDetailLogID INT OUTPUT,
@sqlErrNumber INT OUTPUT,
@sqlErrMessage NVARCHAR(1000) OUTPUT,
@newRowID INT OUTPUT)
From the above stored procedure, the first 3 parameters are the only parameters that are used to "Create" the Category record. The remaining parameters are simply used to tell me what happened inside the method. If a business rule is broken inside the stored procedure, he does NOT use the SQL 'RAISERROR' keyword. Instead, he provides information about the error back to me using the OUTPUT parameters. He does this for every single stored procedure in our database, even the Updates and Deletes. All of the 'Get' calls are done using custom views. They have all been tested, and the idea was to make my job easier since I don't have to add the business logic to trap all of the various scenarios to ensure data quality.
As I said, I'm using Linq To Sql, and I'm now faced with a problem. The problem is that my "Category" model object simply has 4 properties on it: CategoryID, CategoryName, UserId, and IsActive. When I opened up the designer to start mapping my properties for the insert, I realized that there is really no (easy) way for me to account for the additional parameters unless I add them to my Model object.
Theoretically what I would LIKE to do is this:
// note: Repository Methods
public void AddCategory(Category category)
{
_dbContext.Categories.InsertOnSubmit(category);
}
public void Save()
{
_dbContext.SubmitChanges();
}
And then from my CategoryController class I would simply do the following:
[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Create(FormCollection collection)
{
var category = new Category();
try
{
UpdateModel(category); // simple validation here...
_repository.AddCategory(category);
_repository.Save(); // should get error here!!
return RedirectToAction("Index");
}
catch
{
// manage friendly messages here somehow... (??)
// ...
return View(category);
}
}
What is the best way to manage this using Linq to Sql? I (personally) don't feel that it makes sense to have all of these additional properties added to each model object... For example, the 'Get' should NEVER have errors and I don't want my repository methods to return one type of object for Get calls, but accept another type of object for CUD calls.
Update: My Solution! (Dec. 1, 2009)
Here is what I did to fix my problem. I got rid of my 'Save()' method on all of my repositories. Instead, I added an 'Update()' method to each repository and actually commit the data to the database on each CUD (ie. Create / Update / Delete) call.
I knew that each stored procedure had the same parameters, so I created a class to hold them:
public class MySprocArgs
{
private readonly string _methodName;
public int? Number;
public string Message;
public int? ErrorLogId;
public int? SqlErrorNumber;
public string SqlErrorMessage;
public int? NewRowId;
public MySprocArgs(string methodName)
{
if (string.IsNullOrEmpty(methodName))
throw new ArgumentNullException("methodName");
_methodName = methodName;
}
public string MethodName
{
get { return _methodName; }
}
}
I also created a MySprocException that accepts the MySprocArgs in its constructor:
public class MySprocException : ApplicationException
{
private readonly MySprocArgs _args;
public MySprocException(MySprocArgs args) : base(args.Message)
{
_args = args;
}
public int? ErrorNumber
{
get { return _args.Number; }
}
public string ErrorMessage
{
get { return _args.Message; }
}
public int? ErrorLogId
{
get { return _args.ErrorLogId; }
}
public int? SqlErrorNumber
{
get { return _args.SqlErrorNumber; }
}
public string SqlErrorMessage
{
get { return _args.SqlErrorMessage; }
}
}
Now here is where it all comes together... Using the example that I started with in my initial inquiry, here is what the 'AddCategory()' method might look like:
public void AddCategory(Category category)
{
var args = new MySprocArgs("AddCategory");
var result = _dbContext.AddWidgetSproc(
category.CreatedByUserId,
category.Name,
category.IsActive,
ref args.Number, // <-- Notice use of 'args'
ref args.Message,
ref args.ErrorLogId,
ref args.SqlErrorNumber,
ref args.SqlErrorMessage,
ref args.NewRowId);
if (result == -1)
throw new MySprocException(args);
}
Now from my controller, I simply do the following:
[HandleError(ExceptionType = typeof(MySprocException), View = "SprocError")]
public class MyController : Controller
{
[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Create(Category category)
{
if (!ModelState.IsValid)
{
// manage friendly messages
return View(category);
}
_repository.AddCategory(category);
return RedirectToAction("Index");
}
}
The trick to managing the new MySprocException is to simply trap it using the HandleError attribute and redirect the user to a page that understands the MySprocException.
I hope this helps somebody. :)
I don't believe you can add the output parameters to any of your LINQ classes because the parameters do not persist in any table in your database.
But you can handle output parameters in LINQ in the following way.
Add the stored procedure(s) you wish to call to your .dbml using the designer.
Call your stored procedure in your code
using (YourDataContext context = new YourDataContext())
{
Nullable<int> errNumber = null;
String errMessage = null;
Nullable<int> errDetailLogID = null;
Nullable<int> sqlErrNumber = null;
String sqlErrMessage = null;
Nullable<int> newRowID = null;
Nullable<int> userID = 23;
Nullable<bool> isActive=true;
context.YourAddStoredProcedure(userID, "New Category", isActive, ref errNumber, ref errMessage, ref errDetailLogID, ref sqlErrNumber, ref sqlErrMessage, ref newRowID);
}
I haven't tried it yet, but you can look at this article, where he talks about stored procedures that return output parameters.
http://weblogs.asp.net/scottgu/archive/2007/08/16/linq-to-sql-part-6-retrieving-data-using-stored-procedures.aspx
Basically, drag the stored procedure into your LINQ to SQL designer, and then it should do the work for you.
Note that dbContext.SubmitChanges() is LINQ to SQL's commit call (Entity Framework uses SaveChanges() instead). I suggest handling Save, Update, and Delete using either a single stored procedure or three different procedures.
