I have started developing an ASP.NET MVC website that connects to a Dynamics 365 implementation.
The way I connect to Dynamics 365 is through a separate data access layer in my MVC website that makes a call to Dynamics 365 every time I want to retrieve, create, or update an entity.
So every time I want to retrieve, for example, a list of contacts from Dynamics, I create an instance of OrganizationService using the CrmServiceClient class from the SDK and use it to query CRM.
If at another time I need to update an entity, I again create an instance of OrganizationService and use it to update the data in CRM.
Basically, for every operation I create a new instance of OrganizationService and query CRM.
Is this the right way to do it? Are there any other approaches that I can take that can have better performance?
You are probably better off creating the OrganizationService once, then storing it in the application state. I don't have any empirical evidence for this, but I believe creating the service object can take a while.
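For illustration, a rough sketch of what that could look like, assuming the version of CrmServiceClient you use implements IOrganizationService (otherwise hand out its OrganizationServiceProxy instead); the "CrmConnection" connection string name is a placeholder:

    // Sketch: create the CRM connection once and share it, instead of newing it up per operation.
    using System;
    using System.Configuration;
    using Microsoft.Xrm.Sdk;
    using Microsoft.Xrm.Tooling.Connector;

    public static class CrmClientFactory
    {
        private static readonly Lazy<CrmServiceClient> _client = new Lazy<CrmServiceClient>(() =>
            new CrmServiceClient(ConfigurationManager.ConnectionStrings["CrmConnection"].ConnectionString));

        // The shared instance your data access layer uses for every retrieve/create/update call.
        public static IOrganizationService Service
        {
            get { return _client.Value; }
        }
    }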
Adxstudio (who created Microsoft CRM portals before they were acquired by Microsoft) also used a cache layer for data retrieved from CRM, to reduce the number of queries sent to CRM and improve overall performance.
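A very rough sketch of that kind of cache layer, using System.Runtime.Caching; the cache key, the five-minute lifetime, and the contact query are illustrative choices, not anything from the question:

    // Cache the result of a CRM query so repeated reads don't hit Dynamics every time.
    using System;
    using System.Runtime.Caching;
    using Microsoft.Xrm.Sdk;
    using Microsoft.Xrm.Sdk.Query;

    public static class ContactCache
    {
        private static readonly MemoryCache _cache = MemoryCache.Default;

        public static EntityCollection GetContacts(IOrganizationService service)
        {
            var cached = _cache.Get("contacts") as EntityCollection;
            if (cached != null)
                return cached; // served from memory, no round trip to CRM

            var query = new QueryExpression("contact")
            {
                ColumnSet = new ColumnSet("fullname", "emailaddress1")
            };
            var contacts = service.RetrieveMultiple(query);

            _cache.Add("contacts", contacts, DateTimeOffset.UtcNow.AddMinutes(5));
            return contacts;
        }
    }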
It's probably worth profiling both of these approaches to see whether the additional effort of keeping the objects in memory is worth it in your scenario.
I have developed some MVC applications using the Entity Framework code-first approach, and now I am developing a new application that will also provide web services for the mobile applications we are going to create. I am having trouble with the issues below. Could you please clarify them for me one by one?
Which web service technology should I use, e.g. Web API, WCF, etc.? (I am using MVC 5 and EF 6 in my project.)
Can I use the same CRUD methods for my web application and for the web services? If so, what modifications should be made to the methods and to the other pieces, e.g. the models?
For an existing MVC application built with the EF code-first approach, is it better to create new methods for the web services, or should the current methods be updated so that they also support web services?
Thanks in advance...
I highly recommend using Commands and Queries. The approach is covered in this article and this one.
A command is a simple DTO object, and it can easily be sent over the network. This way you have control over which fields and behaviour you want to make public.
Because commands are simple data containers without behavior, it is very easy to serialize them (using the XmlSerializer for instance) or send them over the wire (using WCF for instance), which makes it not only easy to queue them for later processing, but it also makes it very easy to log them in an audit trail, yet another reason to separate data and behavior. All these features can be added without changing a single line of code in the application (except perhaps a line at the start-up of the application).
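To make that concrete, here is a bare-bones sketch of the pattern those articles describe; the names (MoveCustomerCommand, ICommandHandler) follow the articles' convention and are only illustrative:

    // The command is a behavior-free DTO; the behavior lives in a separate handler.
    public class MoveCustomerCommand
    {
        public int CustomerId { get; set; }
        public string NewAddress { get; set; }
    }

    public interface ICommandHandler<TCommand>
    {
        void Handle(TCommand command);
    }

    public class MoveCustomerCommandHandler : ICommandHandler<MoveCustomerCommand>
    {
        public void Handle(MoveCustomerCommand command)
        {
            // Load the customer, change its address, persist.
            // Because the command itself is just data, it can be serialized, queued,
            // logged, or sent over WCF/Web API without touching this class.
        }
    }

Your MVC site and the mobile-facing web service can then both accept the same commands, which also addresses your question about reusing the CRUD logic.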
I have an EF model with about 200 tables, 75 of which I'd like to expose via REST in an MVC app. I started by adding a WCF Data Service (WCF-DS), pointed it at the EF context, and bam, I had the entire database mapped to REST URIs with full OData syntax support in about 2 minutes.
Next I tried to create the same REST URI space with Web API. When I tried to add a Web API OData controller, the first thing it asked for was a model class, and when I was done creating the controller (and copying all the required ODataConventionModelBuilder code into WebApiConfig) I only had one REST endpoint! My impression now is that Web API is not well suited to exposing entire EF models without a lot of brute force.
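For reference, this is roughly the per-entity wiring Web API OData expects; the Customer/Order classes are placeholders, and the MapODataServiceRoute call is from the later System.Web.OData packages (earlier versions used MapODataRoute instead):

    // Each exposed table needs its own entity-set registration here plus its own ODataController,
    // which is why exposing 75 tables feels like brute force compared to WCF-DS.
    using System.Web.Http;
    using System.Web.OData.Builder;
    using System.Web.OData.Extensions;

    public class Customer { public int Id { get; set; } public string Name { get; set; } }
    public class Order { public int Id { get; set; } public decimal Total { get; set; } }

    public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            var builder = new ODataConventionModelBuilder();

            builder.EntitySet<Customer>("Customers");
            builder.EntitySet<Order>("Orders");
            // ...one line per entity set, 73 more to go...

            config.MapODataServiceRoute("odata", "odata", builder.GetEdmModel());
        }
    }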
So my questions:
Am I missing a way to map a bunch of Web API endpoints to an EF model in one fell swoop?
(Maybe T4 templates that build all the WebAPI code when I generate my EF model??)
Are there any compelling reasons to consider Web API vs WCF-DS for exposing large URI domains?
(Some say that the benefit of Web API is fine-grained control over each and every MVC/HTTP request, but that seems counter-productive if the goal is to conform to the OData spec. I'm not sure I want 75 controllers and thousands of lines of code that would tempt my dev colleagues to change one entity's behavior so that it ends up behaving differently from the other entities.)
(For cross-cutting concerns such as security, caching, or performance throttling, WCF-DS seems to have sufficient configurability with interceptors and its DataServiceConfiguration class. Are there any features of Web API that would do better here?)
Thanks.
Update: I found this article by Julie Lerman that helps a bit: http://msdn.microsoft.com/en-us/magazine/dn201742.aspx
Since I have only exposed an EF model using WCF DS, I can't comment on the Web API questions. But we never really had a reason to replace WCF DS with Web API for our model because, as you also noticed, EF and WCF DS play so nicely together that you basically get an OData feed for free. On the client side the situation is different: we started with the WCF Data Services client, which tries to mimic LINQ to Entity Framework, but it has so many limitations that I ended up writing my own OData client (you can read here about what made us unhappy with the WCF DS client).
Coming back to the server side: our domain was large; we had 80+ tables with almost 1,000 columns. We even supported all CRUD operations using batch updates (the OData analog of transactions). While I would recommend thinking twice before exposing database record update operations over the OData protocol for design reasons, we didn't run into any technical issues with that approach.
It is my opinion that Web API + OData extensions is highly overrated for a large majority of use-cases, and my argument is that OData is fundamentally data-oriented, while Web API has become a great fit for general-purpose APIs, which include service-oriented APIs.
Your use-case is, I believe, a prime example of a very data-oriented layer since you don't seem to want to add much domain logic on this tier (server-side of this HTTP API). And WCF-DS works great for that, especially if you're merely wrapping an EF model which does 90% of the work for you (as you've already observed).
Of course it would be a different story if you were modeling more intricate processes at that layer (in that tier), so exploring both options like you did is always a very good idea. Normally the obvious choice should come naturally, either you'll be writing a lot of redundant code with Web API (go with WCF-DS) or you'll be fighting with WCF-DS's very rigid framework by playing with odd entities and not-very-RESTful OData actions (go with Web API alone).
Web API with its OData extensions stands somewhere in the middle, although it's not always clear what advantages it provides over custom WCF-DS providers. I guess it's nice for people who already know Web API or ASP.NET MVC, and it may be a requirement if you want open source. I personally wouldn't argue this technology choice on technical grounds, except for the few gotchas one should know about (which have nothing to do with its design). I've ranted about all of this a while ago, should you want more, but I stress again that there are no hard truths in any of this; discussing architecture and design is, in non-trivial proportions, a matter of opinions.
Update: WCF-DS was killed.
We're building an application that takes feeds and turns the items in those feeds into entities on our own site. This is supposed to be an open site that anybody can sign up to. All of this is done with MVC + Knockout.js. We are thinking about managing the entities from that site with SharePoint 2013, essentially replacing our SQL tables of those entities and putting them into lists in our SharePoint instance. We're also thinking of doing the same thing for users, passwords, and so on; then we might have a setup for single sign-on for any other sites my company makes. Currently we're using NHibernate and SQL Server.
Are we naive to think that SharePoint could handle that amount of traffic, with outside users on our site along with customers? I've heard that it's doable; I'm just wondering whether it's a smart thing to do, and whether there are hiccups or limitations any of you have run into trying this.
We went pretty far down the SharePoint-as-a-development-platform path and ultimately ended up scrapping what we had done and rewriting it with other technologies. That doesn't necessarily mean it is not the right choice for your situation, but there are a few things to consider:
"Why do it this way?" What are the benefits of adding the SharePoint layer to your technology stack? If not SharePoint, what do the alternatives look like?
Do you already have a solid SharePoint admin team in place? SharePoint definitely requires a dedicated admin or team of admins who really understand the product to keep it performant and to help you troubleshoot when things are not working correctly.
Do you have SharePoint development talent already in house? Good SharePoint developers are harder to find and are typically more expensive than your regular .NET developers. Also, some existing .NET developers may not be interested in learning SharePoint.
What is your expected traffic, and can SharePoint handle it out of the box? At least in previous versions of SharePoint there were internal limits on the amount of data that could be stored in each list. On top of that, there were practical limits beyond which the performance of the app became totally unacceptable. Understanding what those limits are should be part of your initial due diligence so you can plan for those eventualities.
Will you be extracting operational data for external reporting or warehousing purposes? Is your data team already familiar with getting data out of SharePoint?
Ultimately, the reason we failed was that we ran down the path chasing the promise of "easy" development without really committing to the product. When we started running into problems, we struggled with basics like troubleshooting because we had lost a couple of key people, and our regular devs and admins struggled to figure out what was going on. If we had had the right people in place, our experience might have been different. We didn't, however, and we eventually chose to move away from SharePoint and rebuild on our standard MVC/SQL platform.
SharePoint has come a long way in a short time to allow external applications to interact with it in the way you describe. I wouldn't try it with anything but SharePoint 2013 mostly because the licensing allows for this without additional cost per user and partly because what I mention next isn't available in 2010 or earlier.
You can use an MVC/Knockout front end, but the MVC app for SharePoint template isn't exactly what you want unless you will provision every user as a SharePoint user account. That template is still an SP app, which means it runs as an SP user. I would look at SharePoint as just an OData service that your app writes back to. You can either use the client-side object model (CSOM) and write back directly (each user exists as a SharePoint user), or you can proxy the data access through your MVC controller and use a 'service' SP user to connect to SharePoint via CSOM. SharePoint also exposes REST/OData endpoints, so you can use any web-capable language you choose. I know there are examples for C# and JavaScript (Node.js). There may be others.
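As a rough sketch of the proxy approach (the site URL, list name, and service-account credentials below are placeholders; a real app would use whatever authentication your farm or tenant requires):

    // Read items from a SharePoint 2013 list over its REST/OData endpoint from the MVC tier.
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;

    public class SharePointProxy
    {
        public async Task<string> GetFeedItemsJsonAsync()
        {
            var handler = new HttpClientHandler
            {
                // Placeholder 'service' account; swap in the auth your environment uses.
                Credentials = new NetworkCredential("serviceUser", "password", "DOMAIN")
            };

            using (var client = new HttpClient(handler))
            {
                client.DefaultRequestHeaders.TryAddWithoutValidation(
                    "Accept", "application/json;odata=verbose");

                // _api/web/lists is the standard SharePoint 2013 REST surface.
                return await client.GetStringAsync(
                    "https://sharepoint.example.com/_api/web/lists/getbytitle('FeedItems')/items");
            }
        }
    }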
If you are expecting a lot of volume, I would suggest you host this application on SharePoint Online (Office 365), if possible, and configure it to federate with the rest of your environment. That way you only need to add more space as your data grows (rather than more and more servers as load grows).
Here's a nice overview of the APIs available to you:
http://msdn.microsoft.com/en-us/library/office/jj164060.aspx
SharePoint 2013 Developer Center:
http://msdn.microsoft.com/en-us/library/office/jj162979.aspx
5-minute video on SharePoint 2013 CSOM:
http://www.microsoft.com/office/preview?videoid=1e859ac8-58ca-46d0-a8e0-00f4189761a8&from=sharepermalink-link
Timely blog post on anonymous access to CSOM:
http://blogs.msdn.com/b/kaevans/archive/2013/10/24/what-every-developer-needs-to-know-about-sharepoint-apps-csom-and-anonymous-publishing-sites.aspx
There is no point in redesigning your application only to replace your database tables with SharePoint lists. Performance is one issue; SharePoint list limitations are another. You will lose the flexibility of a relational database design and hand your data over to a black-box design called a SharePoint list.
I'm a newbie to ASP.NET MVC. I've been learning MVC 3 for the past couple months and at my job I have to design a CRM system using MVC 3.
In all MVC 3 tutorials, they use SQL Server Compact Edition (SQL CE).
In the CRM project, I have to import the Products table from the QuickBooks (QB) database and populate the CRM with it.
Keeping in mind that the CRM has to use the QB database and import the Products table: should I save the CRM data in SQL CE, or should I use SQL Server to store all the CRM data as well as the QB data?
MVC 3 is entirely decoupled from the data layer. The reason you'll have seen most tutorials pairing it with SQL Compact is that most web applications tend to need a database of some sort to be functional, and SQL Compact is one of the simplest options when the focus should really be on MVC itself.
As far as MVC is concerned, you just need some way of making data available to the controller and ultimately the view. You don't even have to use Entity Framework (which I guess most examples use for simplicity); however, if you do want to use Entity Framework, it looks like you can query QuickBooks directly by using this.
As I understand it, you should implement the following logic:
Retrieve the needed data from QuickBooks. Here you can use paid tools like the RssBus QuickBooks Data Provider mentioned above, but you are also free to do it with the QuickBooks SDK directly (QBXML or QBFC); it is not that hard.
Convert the received data into a format that fits your Products table structure.
Perform the data export into your database using LINQ to SQL or whatever you want (see the sketch below). Which edition of MS SQL Server to use is entirely up to you and depends only on the complexity of functionality you need from it.
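A minimal sketch of that last step, assuming the QuickBooks data has already been retrieved and converted; the Product mapping and the "CrmDb" connection string name are placeholders:

    // Push already-converted product records into SQL Server with LINQ to SQL.
    using System.Collections.Generic;
    using System.Configuration;
    using System.Data.Linq;
    using System.Data.Linq.Mapping;

    [Table(Name = "Products")]
    public class Product
    {
        [Column(IsPrimaryKey = true, IsDbGenerated = true)]
        public int Id { get; set; }

        [Column]
        public string Name { get; set; }

        [Column]
        public decimal Price { get; set; }
    }

    public static class ProductImporter
    {
        public static void Import(IEnumerable<Product> convertedProducts)
        {
            var cs = ConfigurationManager.ConnectionStrings["CrmDb"].ConnectionString;
            using (var db = new DataContext(cs))
            {
                db.GetTable<Product>().InsertAllOnSubmit(convertedProducts);
                db.SubmitChanges(); // one round trip to persist the imported rows
            }
        }
    }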
[I've never used WCF before. I've been googling for a couple days and haven't found any info that makes my decision of whether or not to use it obvious to me.]
I've been developing a website using ASP.NET MVC, LINQ to SQL, and SQL Server.
Now I want to develop some mobile apps which will be fed data from the site's DB.
It has been suggested to me that I use WCF for this.
I know that if I have data facing the public internet, it can be scraped if someone really wants it, but I'd like to minimize the "scrapablility" of my data.
Since my mobile apps will probably just be sending/receiving data in JSON format, what benefits would I get from using WCF instead of just RESTful JSON-returning URIs in MVC?
And if I do implement WCF, should my MVC site be hitting those services for data also instead of using LINQ in my controllers?
I've got an ASP.NET MVC application hitting WCF. I originally developed it without WCF by having the controllers interact with a service layer that hits my repositories. A requirement came up during development that required me to physically separate the UI from the service layer.
Adding WCF was a pain in the rear. Things that worked without WCF no longer worked afterwards. For example, the state of my entities was lost upon transmission to/from the service layer making it very difficult to utilize certain features of my ORM (NHibernate). I could no longer retrieve an entity, map a viewmodel to the entity in my controller, and allow NHibernate to determine whether or not an update was necessary.
That said, the challenges associated with WCF were mostly incurred at the beginning. I don't need to revisit the configuration very often and I've gotten used to working with detached entities. I also have the benefit of physical separation and WCF is extremely flexible.
Would I use WCF if I needed web services but not the separation? I really don't know. I would probably try to make JSON action methods work because those are much easier (not to mention more fun). Keeping it simple is still a wonderful principle.
As for your MVC site hitting services? I think it's safe to say that your action methods should be very thin and there should be very little business logic or persistence concerns within your MVC project. Separation of Concerns makes it much easier to adapt and change your application.
I don't see any need for WCF. I'd consider an API area, or a single controller if the API is small, and deliver the data as JSON from a controller action. I'd refactor the app so that the API and your regular controllers use the same repositories. If you need to retrieve data via AJAX from your views, you can use the API, but I don't see any point in your controllers using it when they can take advantage of the repositories directly.
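Something along these lines, as a sketch only; IProductRepository and the action name are illustrative, and constructor injection assumes whatever DI container you already use:

    // A plain MVC controller action that returns JSON for the mobile apps, backed by the same
    // repository the rest of the site uses.
    using System.Collections.Generic;
    using System.Web.Mvc;

    public interface IProductRepository
    {
        IEnumerable<object> GetAll();
    }

    public class ProductsApiController : Controller
    {
        private readonly IProductRepository _repository;

        public ProductsApiController(IProductRepository repository)
        {
            _repository = repository;
        }

        // GET /ProductsApi -> JSON for the mobile apps; HTML controllers reuse _repository directly.
        public ActionResult Index()
        {
            return Json(_repository.GetAll(), JsonRequestBehavior.AllowGet);
        }
    }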