Is using MVC WebGrid Bad Practice - asp.net-mvc

A junior developer on my team just created a screen using an MVC Helper called WebGrid.
My first reaction was a classic NO! to anything that sounded like Grid / bound tech.
As he demo'd the functionality, it wasn't bad, though it did use
?sort=Title
type of syntax, which bothers me generally. He also showed me several forum posts (including on Stack Overflow) discussing its use. One warned against it, but was categorical and offered no explanation. I would like to know if users have had any positive or negative experiences with the WebGrid control.
Would you recommend for or against as "best practice"?

I don't think you're going to get a definitive answer to this question, but I think it's a good question and deserves an answer (others may vote to close it, just as a heads-up). But this is my personal take, FWIW.
One of the benefits, to me, of MVC (vs. Web Forms) is that it provides almost full control over the rendered HTML. With Web Forms, .NET did its own crazy thing to get the front end bound to the code-behind, and this was magnified when using web controls.
Generally speaking, the same could be said about using controls like this in MVC -- they often limit the control one has over the rendered HTML.
Another argument is that they typically limit what you can do via client-side code (or how easily you can do it). This may be a side effect of not having full control over the HTML.
I would take a look at the HTML it produces -- you may quickly see enough reasons there not to use it (or, to use it).
Of course, the other argument is that third-party controls often try to do too much, or do things differently than you'd like (like passing the sort through a query string, as you mentioned).
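For reference, a minimal WebGrid in a Razor view looks roughly like this (the model and column names here are made up for illustration):

    @model IEnumerable<BookViewModel>
    @{
        // System.Web.Helpers.WebGrid renders a plain <table> whose
        // header links drive sorting via the query string.
        var grid = new WebGrid(source: Model, canSort: true, rowsPerPage: 20);
    }
    @grid.GetHtml(
        tableStyle: "grid",
        columns: grid.Columns(
            grid.Column("Title"),   // the sort link becomes ?sort=Title&sortdir=ASC
            grid.Column("Author")
        )
    )

Rendering that and inspecting the output is the quickest way to judge whether the markup (and the query-string sorting) is acceptable for your case.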

Related

Composite C1 - Has anyone built an app on top of it?

I have an existing application which is quite large, uses a SQL Server database and LINQ to SQL, and is built in MVC. It does what it needs to do very well, but the CMS is sadly lacking (it's complicated, difficult to use, and prone to errors).
I like the look of Composite C1 to migrate this application to so that my users can get a good CMS experience.
I don't really want to center my development around C1, so I've been looking at creating an MVC application:
http://docs.composite.net/Functions/MVC
I've created a sample controller and view, returned some static data to the view, and finally posted some data to the controller. Everything works as a "normal" MVC application would.
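For context, that sample amounts to nothing more than a plain MVC controller, roughly like this (names and data are illustrative):

    using System.Web.Mvc;

    public class NewsController : Controller
    {
        // GET: returns static demo data to the view.
        [HttpGet]
        public ActionResult Index()
        {
            return View(new[] { "First item", "Second item" });
        }

        // POST: data posted from the view arrives as usual;
        // Composite C1 hosts and renders the whole thing.
        [HttpPost]
        public ActionResult Index(string comment)
        {
            return RedirectToAction("Index");
        }
    }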
Has anyone used this concept for a real-world application? The idea is that if a user wants to display one of my controls on a page, they just add the control via the Composite editor. I'll also add basic pages on installation.
It's a bit of a vague question, but I'm really looking for feedback on the following:
1) How "involved" do you need to be with Composite C1 stuff? I want to just create my controllers and other classes to do my work
2) How is the performance with this approach?
3) Are there many gotchas that you've experienced?
I have built a large application within/on top of a Composite C1 environment, so I can say it's definitely possible, and the compatibility with .NET application development is actually one of the main reasons why we chose Composite in the first place.
1) How "involved" do you need to be with Composite C1 stuff? I want to just create my controllers and other classes to do my work
You won't be able to completely ignore everything Composite-related when developing a full-fledged application, however, simply because your controls/views/controllers will run on, and be rendered by, Composite C1. So some of the work is necessarily done at least in part by the C1 foundation you build on, e.g. routing, exception handling, or rendering.
However, you can usually work with or around those features without too much trouble. It may take some understanding of how Composite works, though.
2) How is the performance with this approach?
So far I cannot say that Composite slows down the application in any significant way. It may in fact help you with tasks like output caching.
3) Are there many gotchas that you've experienced?
This is a very broad question, but generally you will always have to make sure you know whether something belongs in one of your controls or would fit better into a Composite component (a page, a reusable HTML block). If you put things in the wrong place, the easiest things become complicated (like creating a page link) because the necessary information is not present in the current context. But as I said, you can solve this through clever design.
Another thing to look out for is that proper source versioning is a bit harder to set up initially with a Composite application, because you have to figure out what is content and what is application.
So far I have had good experiences with C1 and will be using it in the future. It may take a little more time to get into it initially compared to a vanilla ASP.NET application, but the work that is done for you on the CMS side is well worth it.

RenderAction vs RenderPartial performance

According to Brad Wilson, RenderAction is slower than RenderPartial.
However, has anyone got any statistics that show the difference in performance?
I'm in the process of developing an application where pages are composed of "Widgets".
I have two choices:
Composition at the View Level
Call RenderAction for each widget. This is by far the easiest approach but does mean that we're performing a full MVC cycle for each widget.
Composition at the Controller Level
Compose one ViewModel for the page that contains the data we need for each widget. Call RenderPartial for each widget. This is much more complicated to implement but does mean we'll make only one MVC cycle.
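As a sketch, the two compositions look roughly like this in the page view (widget and action names are hypothetical):

    @* Composition at the view level: one child MVC cycle per widget *@
    @{ Html.RenderAction("News", "Widgets"); }
    @{ Html.RenderAction("Weather", "Widgets"); }

    @* Composition at the controller level: the page action builds one
       ViewModel with News and Weather properties; the partials just render it *@
    @{ Html.RenderPartial("_NewsWidget", Model.News); }
    @{ Html.RenderPartial("_WeatherWidget", Model.Weather); }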
I tested the above approaches with 3 different widgets on a page, and the difference in render time was tenths of a second (hardly worth worrying about).
However, has anyone got any test results more concrete than this, or perhaps experience trying both approaches?
I've recently worked on an application that was experiencing performance issues, and found a view that was making four calls to RenderAction, plus another one in the layout. I found that each call to RenderAction - even when I added a dummy action that returned an empty view - took around 200-300ms (on my local machine). Multiply that by the number of calls and you have a huge performance hit on the page. In my case, the four calls were causing about a second of unnecessary server-side overhead. By comparison, calls to RenderPartial were in the range of 0-10ms.
I would avoid using RenderAction wherever possible in favor of RenderPartial. The controller should be responsible for returning all necessary information. In the case of widgets, if you need multiple actions for several widgets, I would try composing them into one action so the RenderAction overhead only occurs once, though if your site performs adequately I'd keep them separate for a cleaner design.
Edit: I gathered this information using MiniProfiler and hitting the site. It isn't super accurate but it does clearly show the differences.
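For anyone who wants to repeat the comparison, a sketch of how such timings can be captured with MiniProfiler (view and action names are hypothetical):

    @using StackExchange.Profiling
    @using (MiniProfiler.Current.Step("RenderAction: news widget"))
    {
        Html.RenderAction("News", "Widgets");
    }
    @using (MiniProfiler.Current.Step("RenderPartial: news widget"))
    {
        Html.RenderPartial("_NewsWidget", Model.News);
    }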
Edit: As Oskar pointed out below, the application in question likely had some intensive code that runs for each request in global.asax. The magnitude of this hit will depend on the application code, but RenderPartial will avoid executing another MVC cycle altogether.
I'd suggest two more options. Both require composing the view model at the controller level, and both can work together (depending on the data):
1) Html.DisplayFor() - display templates
2) Helpers via extension methods
Option 2 works very well if you want to keep those widgets in different assemblies; after all, they're just functions returning a string. I think it also has the best performance, but of course you lose the 'designer-friendly' templates. I think it's important to consider the maintainability aspect, not only raw performance (until you really need it, and even then, caching is more helpful).
For small stuff (date or name formatting, etc.) I'd use helpers, since the HTML is usually a span with a class; for more complex stuff I'd use the display templates.
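As a sketch of option 2, a widget is just an extension method on HtmlHelper returning markup (names are hypothetical):

    using System.Web;
    using System.Web.Mvc;

    public static class WidgetHelpers
    {
        // No view engine and no extra MVC cycle involved:
        // the "widget" is plain string building.
        public static MvcHtmlString UserBadge(this HtmlHelper html, string userName)
        {
            var encoded = HttpUtility.HtmlEncode(userName);
            return new MvcHtmlString("<span class=\"user-badge\">" + encoded + "</span>");
        }
    }

Used as @Html.UserBadge(Model.UserName) in any view; option 1 is the same idea but with a designer-friendly template invoked via @Html.DisplayFor(m => m.User).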

Partial controls in Asp.net mvc page

At my new job, I was given some MVC work. There is only one controller with nine action methods (six are for AJAX rendering). The page was a bit large, so I divided it into small user controls and used RenderPartial to render them. Some user controls were rendered through AJAX as well. Most of the controls are little more than foreach loops rendering some data from tables, no more than 10-15 lines. The main index page passes the model to all the controls. My main page looked very clean and easy to maintain.
But my team members are saying I should put everything in the main page rather than building small controls. Their point is that the number of files will grow large if we keep creating controls like this. They also say that if we are not reusing these controls somewhere else, there is no point in creating them separately.
I would like to know what the better approach is for this kind of scenario. Any good links that can help us understand things better, or any books we can read to clarify our questions?
Help will be appreciated.
Regards
Parminder
As a preface to my answer, let me mention the important value of maintainability. Software evolves over time... and must change to fit the needs of the application.
Maintainability in code does not magically appear... Sacrifices (with a touch of paranoia sometimes) must be made in your coding style now, to have the flexibility you'd like in the future.
There may be a large page in your project. Some may say that if it works, there's no need to fix it. But that's looking at it from a short-term perspective. You may need some of those UI pieces in other places in the future. What some developers may do (rather than make partials) is copy that code into the places where they need it - thus causing the same bloat over time that they were trying to avoid.
If you're on the project for the long haul, you'll more fully appreciate the need for flexibility over time. You'll see that there are patterns you'll want to reuse.
My suggestion then: Partials and controls are good things... they are good investments for your ease in the future. If you forecast reusability, that's a good sign for using them.
But use them sparingly. Don't micromanage everything on a page. Some things may be itching to be 'component-ized' but sometimes it's best to SSFL (Save some for later). Like everything in life, balance is important.
Having clean, concise code is the way to go. Your code will be a lot more readable if you utilize:
sections
templates
partial views
Just remember it's always easier to navigate a folder structure than to read hundreds or thousands of lines of code.
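A minimal sketch of how those three fit together (file and model names are illustrative):

    @* _Layout.cshtml declares an optional section *@
    @RenderBody()
    @RenderSection("scripts", required: false)

    @* Index.cshtml fills the section and delegates rows to a partial *@
    @section scripts {
        <script src="@Url.Content("~/Scripts/orders.js")"></script>
    }
    @Html.Partial("_OrderRows", Model.Orders)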
I recommend watching "Putting your controllers on a diet" by Jimmy Bogard.
Also read "Fat Controllers" by Ian Cooper.
These two links will give you a good idea of how to structure your MVC apps.

Intelligently extracting tags from blogs and other web pages

I'm not talking about HTML tags, but tags used to describe blog posts, YouTube videos, or questions on this site.
If I were crawling just a single website, I'd just use an XPath expression to extract the tags, or even a regex if the markup is simple. But I'd like to be able to throw any web page at my extract_tags() function and get the tags listed.
I can imagine using some simple heuristics, like finding all HTML elements with id or class of 'tag', etc. However, this is pretty brittle and will probably fail for a huge number of web pages. What approach do you guys recommend for this problem?
Also, I'm aware of Zemanta and Open Calais, which both have ways to guess the tags for a piece of text, but that's not really the same as extracting tags real humans have already chosen. But I would still love to hear about any other services/APIs to guess the tags in a document.
EDIT: Just to be clear, a solution that already works for this would be great. But I'm guessing there's no open-source software that already does this, so I really just want to hear from people about possible approaches that could work for most cases. It need not be perfect.
EDIT2: For people suggesting that a general solution which usually works is impossible, and that I must write custom scrapers for each website/engine: consider the arc90 Readability tool. This tool is able to extract the article text for any given article on the web with surprising accuracy, using some sort of heuristic algorithm, I believe. I have yet to dig into their approach, but it fits into a bookmarklet and does not seem too involved. I understand that extracting an article is probably simpler than extracting tags, but it should serve as an example of what's possible.
Systems like the arc90 example you give work by looking at things like tag/text ratios and other heuristics; there is sufficient difference between the text content of a page and the surrounding ads/menus etc. Other examples include tools that scrape emails or addresses: in those cases there are patterns that can be detected and locations that can be recognized. In the case of tags, though, you don't have much to help you uniquely distinguish a tag from normal text; it's just a word or phrase like any other piece of text. A list of tags in a sidebar is very hard to distinguish from a navigation menu.
Some blogs, like Tumblr, do have tags whose URLs contain the word "tagged", which you could use. WordPress similarly has ".../tag/..."-style URLs for tags. Solutions like this would work for a large number of blogs independent of their individual page layout, but they won't work everywhere.
If the sources expose their data as a feed (RSS/Atom) then you may be able to get the tags (or labels/categories/topics etc.) from this structured data.
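For example, with .NET's built-in syndication support, each feed item's categories often carry exactly the tags the author chose (the feed URL is hypothetical):

    using System.Collections.Generic;
    using System.Linq;
    using System.ServiceModel.Syndication;
    using System.Xml;

    static IEnumerable<string> GetFeedTags(string feedUrl)
    {
        using (var reader = XmlReader.Create(feedUrl))
        {
            // Many blog engines expose post tags as <category> elements.
            var feed = SyndicationFeed.Load(reader);
            return feed.Items
                       .SelectMany(item => item.Categories)
                       .Select(category => category.Name)
                       .Distinct()
                       .ToList();
        }
    }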
Another option is to parse each web page and look for tags formatted according to the rel=tag microformat.
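A minimal sketch of that approach, here using HtmlAgilityPack (assumed available) and adding the weaker class-contains-"tag" heuristic mentioned earlier as a fallback:

    using System.Collections.Generic;
    using System.Linq;
    using HtmlAgilityPack;

    static IEnumerable<string> ExtractTags(string html)
    {
        var doc = new HtmlDocument();
        doc.LoadHtml(html);

        // rel=tag microformat links, plus anchors under elements
        // whose class attribute mentions "tag".
        var links = doc.DocumentNode.SelectNodes(
            "//a[@rel='tag'] | //*[contains(@class,'tag')]//a");

        return links == null
            ? Enumerable.Empty<string>()
            : links.Select(a => a.InnerText.Trim())
                   .Where(t => t.Length > 0)
                   .Distinct();
    }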
Damn, I was just going to suggest Open Calais. There's going to be no "great" way to do this. If you have some target platforms in mind, you could sniff for WordPress and use its link structure, and do the same for Flickr...
I think your only option is to write custom scripts for each site. To make things easier, though, you could look at AlchemyAPI. They have similar entity-extraction capabilities to OpenCalais, but they also have a "Structured Content Scraping" product which makes it a lot easier than writing XPaths, by using simple visual constraints to identify pieces of a web page.
This is impossible because there isn't a well-known, widely followed specification. Even different versions of the same engine can produce different output - and with WordPress, a user can create his own markup.
If you're really interested in doing something like this, you should know it's going to be a really time-consuming and ongoing project: you're going to create a library that detects which "engine" is being used on a page and parses it accordingly. If you can't detect a page for some reason, you create new rules to parse it and move on.
I know this isn't the answer you're looking for, but I really can't see another option. I'm into Python, so I would use Scrapy for this, since it's a complete framework for scraping: well documented and really extensible.
Try making a Yahoo Pipe and running the source pages through the Term Extractor module. It may or may not give great results, but it's worth a try. Note - enable the V2 engine.
Looking at arc90, it seems they also ask publishers to use semantically meaningful markup [see https://www.readability.com/publishers/guidelines/#view-exampleGuidelines] so they can parse it rather easily. But presumably they must either have developed generic rules, such as the tag/text ratios #dunelmtech suggested, which can work for article detection, or they might be using a combination of text-segmentation algorithms (from the Natural Language Processing field) such as TextTiling and C99, which could be quite useful for article detection - see http://morphadorner.northwestern.edu/morphadorner/textsegmenter/ and Google for more info on both [published in the academic literature - see Google Scholar].
However, it seems that detecting "tags" as you require is a difficult problem (for the reasons already mentioned above). One approach I would try would be to use one of the text-segmentation algorithms (C99 or TextTiling) to detect the article start/end, and then look for DIVs / SPANs / ULs with CLASS and ID attributes containing "tag" in them. Since, in terms of page layout, tags tend to sit just underneath the article and just above the comment feed, this might work surprisingly well.
Anyway, it would be interesting to see whether you get anywhere with the tag detection.
Martin
EDIT: I just found something that might really be helpful. The algorithm is called VIPS [see: http://www.zjucadcg.cn/dengcai/VIPS/VIPS.html] and stands for Vision-based Page Segmentation. It is based on the idea that page content can be visually split into sections. Compared with DOM-based methods, the segments obtained by VIPS are much more semantically aggregated. Noisy information, such as navigation, advertisements, and decoration, can be easily removed because it is often placed in certain positions on a page. This could help you detect the tag block quite accurately!
There is a term extractor module for Drupal (http://drupal.org/project/extractor), but it's only for Drupal 6.

Spreadsheet like input facility for ASP MVC

I'm looking for recommendations for a spreadsheet-like input facility to sit in an ASP MVC environment.
The client currently has a large number of very complex, interlinked, shared spreadsheets (which they are effectively running 90% of their core business from) for collecting and processing information. They wish to move this to a web application and require ASP MVC. They realise that they will not be able to display as much information on screen as they currently do with their spreadsheets, so a 40 x 60 grid should suffice in most cases. Of these cells, a limited number will be for data entry and will immediately update other cells in the grid using various spreadsheet-like formulas. The grid must be AJAX-enabled.
The quality of the user interface is of primary concern here. As there will inevitably be a certain amount of resistance to moving from spreadsheets to a database/web solution (and this project is a pilot anyway), the system must be as slick as possible. Almost as important is ease of implementation - the final system will be quite large, so the quicker it is possible to configure the grid, the better.
Either Open Source or commercial would be fine. HTML/Javascript, Silverlight and Flex implementations can all be considered.
I initially asked a similar question a year ago (it's taken that long for the client to agree the project) but I'm sure options have changed since then and our environment is now better defined.
I think GrapeCity Spread will fit the bill. It can easily be used with the MVC pattern, and it now also supports the Razor view engine. There is already a blog post which details how to use GrapeCity Spread with MVC; you can go through it here:
http://www.gcpowertools.info/2011/12/using-grapecity-spread-for-net-with-mvc.html
Microsoft Silverlight. It is almost certainly your best bet for a rich line-of-business application with web deployment. It will allow you to utilise a consistent code base across your back-end and front-end components.
Whilst a number of commercial datagrid packages exist (Telerik, etc.), I'd suggest using the default DataGrid component that is available, and fully understanding the data-binding and templating options it offers.
Check out YUI's DataTable; it may be what you need:
http://developer.yahoo.com/yui/examples/datatable/dt_cellediting.html
I have used it and it's great: very developer-friendly, and it supports pagination.
When asked for a multi-row editable grid, I've done it two ways:
1) Dropped a Silverlight control onto the page. This was incredibly easy.
2) A lot of JavaScript: double-clicking a row made it editable, with several textboxes to span the gap. I don't think that's what you're looking for, though.
For something quick and easy, have a look at the jqGrid demos to see if it can do what you want:
http://www.trirand.com/blog/jqgrid/jqgrid.html
jQuery already ships with MVC, and being JavaScript it will work without browser plugins. However, it may not be powerful enough for what you want, in which case you're going to need to look at Silverlight etc. Could the project not be approached in a more web-friendly manner?
