I'm looking for a dynamic way to implement search in my MVC 1.0 application.
Let's say that I have a user control containing a textbox, a dropdown and a button. The user will put a query in the textbox, select the column from which to search in the dropdown and then press the search button.
When the user does the above, I want to do this in the model:
context.MyViewOrTableName.Where(p => (p.ColumnNameFromTheDropdown.Contains(DataFromTheTextbox)));
Is the above scenario possible in MVC 1.0, and if so, how?
Any help would be appreciated.
Solution:
context.MyViewOrTableName.Where(ColumnNameFromTheDropdown + ".Contains(@0)", DataFromTheTextbox);
This worked only after including the System.Linq.Dynamic namespace created by Scott Guthrie and referenced by Omar in the answer below.
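For context, a minimal sketch of how this might be wired up in an MVC 1.0 controller (SearchController, MyEntities and the action parameters are assumptions; MyViewOrTableName comes from the question above):
using System.Linq;
using System.Linq.Dynamic; // Scott Guthrie's Dynamic LINQ library
using System.Web.Mvc;

public class SearchController : Controller
{
    [AcceptVerbs(HttpVerbs.Post)]
    public ActionResult Search(string columnName, string searchText)
    {
        using (var context = new MyEntities()) // assumed data/object context
        {
            // Builds "<ColumnName>.Contains(@0)" at runtime; @0 is bound to the textbox value
            var results = context.MyViewOrTableName
                                 .Where(columnName + ".Contains(@0)", searchText)
                                 .ToList();

            return View(results);
        }
    }
}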
I'm currently doing a similar thing.
That is, I have an MVC View which contains various search options (checkbox, dropdown, textbox), and I wanted an elegant way to return "search results".
So I created a simple class - e.g "ProductSearchCriteria".
This class contains nothing but getters/setters for the different search options (which I populate when the form is submitted via model binding).
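A rough sketch of what such a criteria class might look like (the individual properties here are illustrative assumptions, not from the original post):
public class ProductSearchCriteria
{
    public string Keyword { get; set; }     // textbox
    public int? CategoryId { get; set; }    // dropdown
    public bool InStockOnly { get; set; }   // checkbox
    public decimal? MaxPrice { get; set; }
}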
I then accept this type as a parameter on my BLL method:
public ICollection<Product> FindProductsForCriteria(ProductSearchCriteria criteria)
{
    return _repository                // GenericRepository<Product>
        .Find()                       // IQueryable<Product>
        .WithSearchCriteria(criteria) // IQueryable<Product>
        .ToList();                    // List<Product>
}
As to how to apply the filters, well, that depends on a few things. Firstly, I don't know whether you're using LINQ to SQL, NHibernate, Entity Framework, etc. It also depends on your architecture (repository).
You're not going to be able to "dynamically" apply filters via lambda expressions (not easily, anyway).
What I did was create an extension method to gracefully apply the filters:
public static IQueryable<Product> WithSearchCriteria(this IQueryable<Product> source, ProductSearchCriteria criteria)
{
    var query = source;

    if (criteria.SearchFilterOne != null)
        query = query.Where(x => x.FieldInModel == criteria.SearchFilterOne);

    // inspect other criteria here

    return query;
}
As I said, it depends on your architecture and ORM. I use Entity Framework 4.0, which supports deferred execution, meaning I can build up queries on my objects (IQueryable), and apply filters before executing the query.
What you're looking for is a way to build dynamic LINQ queries. You can search for details about the various options out there; however, I believe the Dynamic LINQ library that Scott Guthrie wrote is exactly what you're looking for.
It lets you build queries from strings:
var query =
    db.Customers.
    Where("City == @0 and Orders.Count >= @1", "London", 10).
    OrderBy("CompanyName").
    Select("new(CompanyName as Name, Phone)");
I am learning MVC4 in Visual Studio and I have many questions about it. My first observation is that MVC's Model doesn't do what I expected: I expect the Model to select and return only the data rows that are needed.
But I have read many tutorials, and they suggest letting the Model return ALL the data from the table, eliminating the rows I don't need in the controller, and then sending the result to the View.
Here is the code from the tutorials:
MODEL
public class ApartmentContext : DbContext
{
    public ApartmentContext() : base("name=ApartmentContext") { }

    public DbSet<Apartment> Apartments { get; set; }
}
CONTROLLER
public ActionResult Index()
{
    ApartmentContext db = new ApartmentContext();
    var apartments = db.Apartments.Where(a => a.no_of_rooms == 5);
    return View(apartments);
}
Is this the correct way to apply a "where clause" to a select statement? I don't want to select all the data and then eliminate the unwanted rows. This seems weird to me, but everybody suggests it, at least in the tutorials I have read.
Well, whichever tutorial you read that from is wrong (in my opinion). You shouldn't be returning actual entities to your view; you should be returning view models. Here's how I would rewrite your example:
public class ApartmentViewModel
{
    public int RoomCount { get; set; }
    ...
}

public ActionResult Index()
{
    using (var db = new ApartmentContext())
    {
        var apartments = from a in db.Apartments
                         where a.no_of_rooms == 5
                         select new ApartmentViewModel()
                         {
                             RoomCount = a.no_of_rooms
                             ...
                         };

        return View(apartments.ToList());
    }
}
Is this the correct way to apply "where clause" to a select statement?
Yes, this way is fine. However, you need to understand what's actually happening when you call Where (and various other LINQ methods) on IQueryable<T>. I assume you are using EF, and as such the Where query is not executed immediately (EF uses deferred execution). So you are passing your view a query which has yet to be run, and only at the point where the view attempts to render the data will the query actually execute - by which time your ApartmentContext will have been disposed, and it will therefore throw an exception.
db.Apartments.Where(...).ToList();
This causes the query to execute immediately and means your query no longer relies on the context. However, it's still not the right thing to do in MVC; the example I have provided is the recommended approach.
In our project we add a Data Access Layer instead of accessing the domain directly in the controller, and we return a view model instead of a domain entity.
But in your code you only select the data you need, not all the data.
If you open SQL Profiler you'll see it is a select statement with a where condition.
So if it's not a big project I think it's OK.
I can't see these tutorials, but are you sure it's loading all the data? It looks like you're using Entity Framework, and Entity Framework uses lazy loading. The documentation states:
With lazy loading enabled, related objects are loaded when they are accessed through a navigation property.
So while it might appear that you're loading all the data, the data itself is only retrieved from SQL when you access the objects.
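To make the timing concrete, here is a hypothetical illustration (the Building navigation property is an assumption):
var apartments = db.Apartments.Where(a => a.no_of_rooms == 5); // no SQL has run yet

foreach (var apartment in apartments)   // the SELECT ... WHERE no_of_rooms = 5 runs here
{
    // A related object is only fetched when its navigation property is touched,
    // each access issuing its own query under lazy loading
    var building = apartment.Building;
}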
I have an ASP.NET MVC website. In my backend I have a table called People with the following columns:
ID
Name
Age
Location
... (a number of other cols)
I have a generic web page that uses model binding to query this data. Here is my controller action:
public ActionResult GetData(FilterParams filterParams)
{
    return View(_dataAccess.Retrieve(filterParams.Name, filterParams.Age, filterParams.location, ...));
}
which maps onto something like this:
http://www.mysite.com/MyController/GetData?Name=Bill .. .
The data access layer simply checks each parameter to see whether it is populated and, if so, adds it to the database WHERE clause. This works great.
I now want to be able to store a user's filtered queries, and I am trying to figure out the best way to store a specific filter. Some filters have only one parameter in the query string, while others have 10+ fields, so I can't figure out the most elegant way to store this query "filter info" in my database.
Options I can think of are:
Have a complete replica of the table (with some extra columns), call it PeopleFilterQueries, and populate each record with a FilterName and the value of the filter in each field (Name, etc.)
Store a table with just a FilterName and a string holding the actual query string, e.g. Name=Bill&Location=NewYork. This way I won't have to keep adding new columns if the filters change or grow.
What is the best practice for this situation?
If the purpose is to save a list of recently used filters, I would serialise the complete FilterParams object into an XML field/column after the model binding has occurred. By saving it into an XML field you also give yourself the flexibility to use XQuery and XML DML should the need arise at a later date for more performance-focused querying of the information.
public ActionResult GetData(FilterParams filterParams)
{
    // Perform the action to get the information from your data access layer here
    var someData = _dataAccess.Retrieve(filterParams.Name, filterParams.Age, filterParams.location, ...);

    // Save the filter that was used so it can be retrieved later
    _dataAccess.SaveFilter(filterParams);

    return View(someData);
}
Then, in your data access class, you'll want two methods: one for saving and one for retrieving the filters:
public void SaveFilter(FilterParams filterParams)
{
    var ser = new System.Xml.Serialization.XmlSerializer(typeof(FilterParams));

    using (var stream = new StringWriter())
    {
        // Serialise to the stream
        ser.Serialize(stream, filterParams);

        // Add a new database entry, with the serialised string created from the FilterParams object
        someDBClass.SaveFilterToDB(stream.ToString());
    }
}
Then when you want to retrieve a saved filter, perhaps by Id:
public FilterParams GetFilter(int filterId)
{
    // Get the XML blob from your database as a string
    string filter = someDBClass.GetFilterAsString(filterId);

    var ser = new System.Xml.Serialization.XmlSerializer(typeof(FilterParams));

    using (var sr = new StringReader(filter))
    {
        return (FilterParams)ser.Deserialize(sr);
    }
}
Remember that your FilterParams class must have a default (i.e. parameterless) constructor, and you can use the [XmlIgnore] attribute to prevent properties from being serialised into the database should you wish.
public class FilterParams
{
    public string Name { get; set; }
    public string Age { get; set; }

    [XmlIgnore]
    public string PropertyYouDontWantToSerialise { get; set; }
}
Note: SaveFilter returns void, and error handling is omitted for brevity.
Rather than storing the querystring, I would serialize the FilterParams object as JSON/XML and store the result in your database.
Here's a JSON Serializer I regularly use:
using System.IO;
using System.Runtime.Serialization.Json;
using System.Text;

namespace Fabrik.Abstractions.Serialization
{
    public class JsonSerializer : ISerializer<string>
    {
        public string Serialize<TObject>(TObject @object)
        {
            var dc = new DataContractJsonSerializer(typeof(TObject));
            using (var ms = new MemoryStream())
            {
                dc.WriteObject(ms, @object);
                return Encoding.UTF8.GetString(ms.ToArray());
            }
        }

        public TObject Deserialize<TObject>(string serialized)
        {
            var dc = new DataContractJsonSerializer(typeof(TObject));
            using (var ms = new MemoryStream(Encoding.UTF8.GetBytes(serialized)))
            {
                return (TObject)dc.ReadObject(ms);
            }
        }
    }
}
You can then deserialize the object and pass it to your data access code as per your example above.
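For instance, tying it back to the FilterParams example above (a sketch; JsonSerializer is the class shown, the rest mirrors the question's code):
var serializer = new JsonSerializer();

// On save: turn the bound FilterParams into a string and store it in a text column
string json = serializer.Serialize(filterParams);

// On load: rebuild the object and feed it straight back into the data access call
FilterParams restored = serializer.Deserialize<FilterParams>(json);
var results = _dataAccess.Retrieve(restored.Name, restored.Age, restored.location, ...);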
You didn't mention the exact purpose of storing the filter.
If you insist on saving the filter into a database table, I would use the following structure:
FilterId
Field
FieldValue
An example table might be:
FilterId  Field     FieldValue
1         Name      Tom
1         Age       24
1         Location  IL
3         Name      Mike
...
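Reading such a filter back out could then be a simple dictionary projection (a sketch; the FilterEntries set and property names are assumptions):
public Dictionary<string, string> GetFilterValues(int filterId)
{
    // One row per criterion: turn the Field/FieldValue pairs back into name/value form
    return db.FilterEntries
             .Where(f => f.FilterId == filterId)
             .ToDictionary(f => f.Field, f => f.FieldValue);
}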
The answer is much simpler than you are making it:
Essentially you should store the raw query in its own table and relate it to your People table. Don't bother storing individual filter options.
Decide on a value to store (2 options)
Store the URL Query String
This would be beneficial if you like open API-style apps and want something you can pass nicely back and forth between the client and the server and re-use without transformation.
Serialize the Filter object as a string
This is a really nice approach if your purpose for storing these filters remains entirely server side, and you would like to keep the data closer to a class object.
Relate your People table to your Query Filters Table:
The best strategy here depends on what your intention and performance needs are. Some suggestions below:
Simple filtering (ex. 2-3 filters, 3-4 options each)
Use Many-To-Many because the number of combinations suggests that the same filter combos will be used lots of times by lots of people.
Complex filtering
Use One-To-Many, as there are so many possible individual queries that it is less likely they will be reused often enough to make the extra normalization and the performance hit worth your while.
There are certainly other options, but they would depend on more detailed nuances of your application. The suggestions above would work nicely if you are, say, trying to keep track of "recent queries" for a user, or "user favorite" filtering options...
Personal opinion
Without knowing much more about your app, I would say (1) store the query string, and (2) use one-to-many related tables... if and when your app shows a need for further performance profiling, or you hit issues refactoring filter params, then come back... but chances are, it won't.
GL.
In my opinion the best way to save the "filter" is as a JSON text string containing each of the column names.
So you would have something in the DB like:
Table Filters
FilterId = 5 ; FilterParams = {'age' : '>18' , ...
JSON gives you a lot of capabilities, like treating age as an array so that more than one filter can apply to the same "column", etc.
JSON is also something of a standard, so you could use these filters with another database some day, or just display the filter or edit it in a web form. If you save the raw query string you will be tied to it.
Well, hope it helps!
Assuming that a NoSQL/object database such as Berkeley DB is out of the question, I would definitely go with option 1. Sooner or later you'll find the following requirements, or others like them, coming up:
Allow people to save their filters, label, tag, search and share them via bookmarks, tweets or whatever.
Change what a parameter means or what it does, which will require you to version your filters for backward compatibility.
Provide auto-complete functions over filters, possibly using a user's filter history to inform the auto-complete.
The above will be somewhat harder to satisfy if you do any kind of binary/string serialization, where you'll need to parse the result and then process it.
If you can use a NoSQL DB, you'll get all the benefits of a SQL store plus the ability to model the 'arbitrary number of key/value pairs' requirement very well.
Have you thought about using Profiles? This is a built-in mechanism for storing user-specific info. From your description of the problem it seems a fit.
Profiles In ASP.NET 2.0
I have to admit that Microsoft's implementation is a bit dated, but there is essentially nothing wrong with the approach. If you wanted to roll your own, there's quite a bit of good thinking in their API.
I have a question about how to implement multiple search criteria with the repository pattern in ASP.NET MVC. Imagine a POCO class in EF4:
public class People
{
    public string Name { get; set; }
    public float Height { get; set; }
    public float Weight { get; set; }
    public int Age { get; set; }
    ....
}
If I build a repository as IPeopleRepository, what kind of methods should I implement for a multiple-criteria search (e.g. Age > 30, Height > 80)? The criteria relate to the properties of the class, and some of the inputs could be null. Of course I can write a method like
People SearchPeople (int age, float height.....)
but then I have to check whether each parameter is null and append to the search query accordingly.
So do you have any good ideas on how to implement this in EF?
It sounds like you're looking for something like the Specification pattern.
There is a great article involving EF4 / POCO / Repository / Specification pattern here.
Although I like the pattern, I find it a bit overkill in simple scenarios.
I ended up using the "pipes and filters" technique - basically IQueryable<T> extension methods on your objects that make your repository code fluent.
For search criteria, however, I would be tempted to let the consuming code supply the predicate; then you don't have to worry about the parameters.
So the definition would be like this:
public People SearchPeople(Expression<Func<People, bool>> predicate)
{
    return _context.People.SingleOrDefault(predicate);
}
Then the code simply supplies the predicate.
var person = _repository.SearchPeople(p => p.Age > 30 && p.Height > 80);
Some people don't like this technique, as it gives too much "power" to the consumer, who might supply a predicate like p.Id > 0 and return all the rows in the database.
To counteract that, provide an optional maxRows parameter. If it's not supplied, default to 100 rows.
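A sketch of that variation, returning a capped list instead of a single match (the 100-row default and the method shape are assumptions):
public IList<People> SearchPeople(Expression<Func<People, bool>> predicate, int maxRows = 100)
{
    // Cap the result set so a predicate like p.Id > 0 can't drag back the whole table
    return _context.People
                   .Where(predicate)
                   .Take(maxRows)
                   .ToList();
}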
First, you need to think about whether you really need a repository search method.
You might want to run direct queries instead of wrapping them in the repository.
However, if you decide you do need the search method, then you will likely use something like this:
private List<People> SearchPeople(int? age, float? height)
{
    IQueryable<People> baseQuery = db.People;

    if (age != null)
        baseQuery = baseQuery.Where(arg => arg.Age > age);

    if (height != null)
        baseQuery = baseQuery.Where(arg => arg.Height > height);

    return baseQuery.ToList();
}
Although you didn't want to do this, I can't think of a better solution.
Basically I think there are three options:
Use the Specification pattern and create as many single specifications as you need; you can then build more complex specifications by combining them via And/Or/Not operators. See http://code.google.com/p/linq-specifications/ for an example.
Create a search method that accepts a predicate as input; it's the simplest option, since it leaves all the criteria filtering to the consumers.
Create a search method with different criteria and build a dynamic LINQ expression. There is a PredicateBuilder in the LinqKit project at http://www.linqpad.net; a sketch of this approach follows below.
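For example, option 3 with LinqKit's PredicateBuilder might look roughly like this (a sketch; db and the property names are assumptions):
using LinqKit; // from the LinqKit project

public List<People> SearchPeople(int? age, float? height)
{
    var predicate = PredicateBuilder.True<People>();

    if (age.HasValue)
        predicate = predicate.And(p => p.Age > age.Value);

    if (height.HasValue)
        predicate = predicate.And(p => p.Height > height.Value);

    // AsExpandable lets Entity Framework translate the combined expression
    return db.People.AsExpandable().Where(predicate).ToList();
}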
I am refactoring an MVC project to make it testable. Currently the controller uses the Entity Framework context objects directly to ask for the required data. I started to abstract this, and it just doesn't work. Eventually I have an IService and an IRepository abstraction, but to describe the problem let's just look at the IRepository. Many people advise an interface with functions that return some of these: IQueryable<...>, IEnumerable<...>, IList<...>, SomeEntityObject, SomeDTO. Then, when someone wants to test the service layer, they can implement the interface with a class which doesn't go to the database to return these.
Problem: using LINQ to Entities I have lazy (deferred) loading in my toolset. This is actually very useful, because my controller action methods know which data they need for the view, and I don't ask for more than required. However, LINQ to anything else doesn't have lazy loading, so when my IRepository functions return any of the above-mentioned things I lose lazy loading. I extended my interface with functions like "GetAnything" and "GetAnythingDeep", but it's not enough: it has to be much more fine-grained, which would result in 5-6 functions for the same type of object, depending on the properties I want in the result. Maybe it could be a general function with some "include properties" parameter, but I don't like that either.
At the moment I think that if I want to make it testable, the result will be either much less efficient or much more complicated code. That doesn't sound right.
By the way, I was thinking about changing the data source behind the entity model to either XML or some object data source, so I could keep LINQ to Entities. I found that this isn't supported out of the box - which is also sad: it means Entity Framework implies a database source, which is not a really useful abstraction.
Specific example:
Entity objects:
Article, Language, Person. Relations: an Article can have 1-N Languages and one Person (publisher).
ViewModel object:
ArticleDeepViewModel: contains all the properties of the Article, including the Languages and the Name of the Person (it's for viewing the article, so there is no need for the other properties of the Person).
The controller action that returns this view should get the data from somewhere.
Code before modifications:
using (var context = new Entities.Articles())
{
    var article = (from a in context.Articles.Include("Languages")
                   where a.ID == ID
                   select new ViewArticleViewModel()
                   {
                       ID = a.ID,
                       Headline = a.Headline,
                       Summary = a.Summary,
                       Body = a.Body,
                       CreatedBy = a.CreatedByEntity.Name,
                       CreatedDate = a.CreatedDate,
                       Languages = (from l in context.Languages
                                    select new ViewLanguagesViewModel()
                                    {
                                        ID = l.ID,
                                        Name = l.Name,
                                        Selected = a.Languages.Contains(l)
                                    })
                   }).Single();

    this.ViewData.Model = article;
}

return View();
Code after modifications could be something like:
var article = ArticleService.GetArticleDeep(ID);
var viewModel = /* mapping */
this.ViewData.Model = viewModel;
return View();
The problem is that GetArticleDeep should return an Article object with the Languages included and the entire Person object included (it shouldn't know that the view model needs just the Name of the Person). I also have 3 different view models for an article so far. For example, if someone wants to see the list of articles, then it's unnecessary to get the languages, the body and some other properties, although it might be useful to get the Name of the publisher (which is deeper in the graph). Before the "testable" code, the controller actions could just contain the LINQ to Entities query and get whichever data they needed using lazy loading, the Include function, subqueries, or references to foreign properties (Publisher.Name)... so there was no unnecessary query to the database and no unnecessary data transferred from the database.
What should the IService or IRepository interface provide to get the 3-4 different levels of Article objects, or sometimes lists of them?
Not sure if you are planning to stick with lazy loading, but if you want a flexible way to integrate eager loading into your repository and service layers first check out this article:
http://blogs.msdn.com/b/alexj/archive/2009/07/25/tip-28-how-to-implement-include-strategies.aspx
He basically gives you a way to build a strongly-typed include strategy like this:
var strategy = new IncludeStrategy<Article>();
strategy.Include(a => a.Author);
Which can then be passed into a general method on your repository or service layers. This way you don't have to have a separate method for each circumstance (i.e. your GetArticleDeep method).
Here is an example repository method using the above include strategy:
public IQueryable<Article> Find(Expression<Func<Article, bool>> criteria, IncludeStrategy<Article> includes)
{
    var query = includes.ApplyTo(context.Articles).Where(criteria);
    return query;
}
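Usage from the service layer might then look something like this (a sketch only; articleRepository is an assumption, and the Languages/CreatedByEntity properties are taken from the question above):
var strategy = new IncludeStrategy<Article>();
strategy.Include(a => a.Languages);
strategy.Include(a => a.CreatedByEntity);

// Eagerly loads the languages and the publisher in a single query
var article = articleRepository.Find(a => a.ID == id, strategy).Single();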
I was reading Steven Sanderson's book Pro ASP.NET MVC Framework and he suggests using a repository pattern:
public interface IProductsRepository
{
IQueryable<Product> Products { get; }
void SaveProduct(Product product);
}
He accesses the products repository directly from his controllers, but since I will have both a web page and a web service, I wanted to add a "service layer" that would be called by the controllers and the web services:
public class ProductService
{
    private IProductsRepository productsRepository;

    public ProductService(IProductsRepository productsRepository)
    {
        this.productsRepository = productsRepository;
    }

    public Product GetProductById(int id)
    {
        return (from p in productsRepository.Products
                where p.ProductID == id
                select p).First();
    }

    // more methods
}
This all seems fine, but my problem is that I can't use his SaveProduct(Product product) because:
1) I want to allow only certain fields of the Product table to be changed.
2) I want to keep an audit log of each change made to each field of the Product table, so I would have to have methods for each field that I allow to be updated.
My initial plan was to have a method in ProductService like this:
public void ChangeProductName(Product product, string newProductName);
Which then calls IProductsRepository.SaveProduct(Product)
But there are a few problems I see with this:
1) Isn't it rather un-"OO" to pass in the Product object like this? However, I can't see how this code could go in the Product class, since that should just be a dumb data object. I could see adding validation to a partial class, but not this.
2) How do I ensure that no fields other than the product name were changed before I persist the change?
I'm basically torn, because I can't put the auditing/update code in Product, and the ProductService class's update methods just seem unnatural (however, GetProductById seems perfectly natural to me).
I think I'd still have these problems even if I didn't have the auditing requirement. Either way, I want to limit which fields can be changed in one class rather than duplicating the logic in both the web site and the web services.
Is my design pattern just bad in the first place or can I somehow make this work in a clean way?
Any insight would be greatly appreciated.
I split the repository into two interfaces, one for reading and one for writing.
The reading one implements IDisposable and reuses the same data context for its lifetime. It returns the entity objects produced by LINQ to SQL. For example, it might look like:
interface Reader : IDisposable
{
    IQueryable<Product> Products { get; }
    IQueryable<Order> Orders { get; }
    IQueryable<Customer> Customers { get; }
}
The IQueryable is important so that I get the delayed-evaluation goodness of LINQ to SQL. This is easy to implement with a DataContext, and easy enough to fake. Note that when I use this interface I never use the auto-generated properties for related rows (i.e. no fair using order.Products directly; calls must join on the appropriate ID columns). This is a limitation I don't mind living with, considering how much easier it makes faking the read repository for unit tests.
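For example, rather than reaching through a navigation property, consuming code joins explicitly through the reader (a sketch; the CustomerID column and variable names are assumptions):
// Fetch a customer's orders by joining on the ID column instead of using customer.Orders
var ordersForCustomer =
    from c in reader.Customers
    join o in reader.Orders on c.CustomerID equals o.CustomerID
    where c.CustomerID == customerId
    select o;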
The writing one uses a separate DataContext per write operation, so it does not implement IDisposable. It does NOT take entity objects as input or output - it takes the specific fields needed for each write operation.
When I write test code, I can substitute the readable interface with a fake implementation that uses a bunch of List<>s which I populate manually. I use mocks for the write interface. This has worked like a charm so far.
Don't get in the habit of passing the entity objects around; they're bound to the DataContext's lifetime, and it leads to unfortunate coupling between your repository and its clients.
To address your need for auditing/logging of changes: just today I put the finishing touches on a system I'll suggest for your consideration. The idea is to serialize the "before" and "after" state of your object (easily done if you are using LINQ to SQL entity objects, through the magic of the DataContractSerializer), then save these to a logging table.
My logging table has columns for the date, username, a foreign key to the affected entity, and a title/quick summary of the action, such as "Product was updated". There is also a single column for storing the change itself, a general-purpose field holding a mini XML representation of the "before and after" state. For example, here's what I'm logging:
<ProductUpdated>
<Deleted><Product ... /></Deleted>
<Inserted><Product ... /></Inserted>
</ProductUpdated>
Here is the general purpose "serializer" I used:
public string SerializeObject(object obj)
{
    // See http://msdn.microsoft.com/en-us/library/bb546184.aspx :
    Type t = obj.GetType();
    DataContractSerializer dcs = new DataContractSerializer(t);

    StringBuilder sb = new StringBuilder();
    XmlWriterSettings settings = new XmlWriterSettings();
    settings.OmitXmlDeclaration = true;

    XmlWriter writer = XmlWriter.Create(sb, settings);
    dcs.WriteObject(writer, obj);
    writer.Close();

    string xml = sb.ToString();
    return xml;
}
Then, when updating (this can also be used for logging inserts/deletes), grab the state before you do your model binding, then again afterwards, shove the two into an XML wrapper and log it! (Or I suppose you could use two columns in your logging table for these, although my XML approach lets me attach any other information that might be helpful.)
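A rough sketch of how an update action might capture the before/after state (repository, auditLog and the log column values are assumptions, not part of the system described above; SerializeObject is the helper shown earlier):
[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Edit(int id, FormCollection form)
{
    Product product = repository.GetProductById(id);

    string before = SerializeObject(product);   // state prior to model binding
    UpdateModel(product);                       // bind the posted values onto the entity
    string after = SerializeObject(product);    // state after model binding

    string changeXml = "<ProductUpdated><Deleted>" + before + "</Deleted>"
                     + "<Inserted>" + after + "</Inserted></ProductUpdated>";
    auditLog.Save(DateTime.Now, User.Identity.Name, product.ProductID, "Product was updated", changeXml);

    repository.SaveProduct(product);
    return RedirectToAction("Details", new { id });
}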
Furthermore, if you want to allow only certain fields to be updated, you can do this with either a "whitelist/blacklist" in your controller's action method, or by creating a "ViewModel" to hand to your controller, with the restrictions you desire placed upon it. You could also look into the many partial methods and hooks that your LINQ to SQL entity classes should have, which would allow you to detect changes to fields that you don't want modified.
Good luck! -Mike
Update:
For kicks, here is how I deserialize an entity (as I mentioned in my comment) for viewing its state at some later point in history, after I've extracted it from the log entry's wrapper:
public Product DeserializeProduct(string xmlString)
{
    MemoryStream s = new MemoryStream(Encoding.Unicode.GetBytes(xmlString));
    DataContractSerializer dcs = new DataContractSerializer(typeof(Product));
    Product product = (Product)dcs.ReadObject(s);
    return product;
}
I would also recommend reading Chapter 13, "LINQ in every layer" in the book "LINQ in Action". It pretty much addresses exactly what I've been struggling with -- how to work LINQ into a 3-tier design. I'm leaning towards not using LINQ at all now after reading that chapter.