Handling WebDAV requests on MVC action - asp.net-mvc

I have an existing MVC3 application which allows users to upload files and share them with others. The current model is that if a user wants to change a file, they have to delete the one there and re-upload the new version. To improve this, we are looking into integrating WebDAV to allow the online editing of things like Word documents.
So far, I have been using the .Net server and client libraries from http://www.webdavsystem.com/ to set the website up as a WebDAV server and to talk with it.
However, we don't want users to interact with the WebDAV server directly (we have some complicated rules on which users can do what in certain situations based on domain logic), but instead go through the existing controller actions we had for accessing files.
So far it is working up to the point where we can return the file and it gives the WebDAV-y type prompt for opening the file.
The problem is that it is always stuck in read-only mode. I have confirmed that it works and is editable if I use the direct WebDAV URL but not through my controller action.
Using Fiddler, I think I have found the problem: Word is trying to negotiate locking with the server at a location that isn't returning the right details. The controller action for downloading the file is "/Files/Download?filePath=bla", and so Word is trying to talk to "/Files" when it sends the OPTIONS request.
Do I simply need to have an action at that location that would know how to respond to the OPTIONS request and if so, how would I do that response? Alternatively, is there another way to do it, perhaps by adding some property to the response that could inform Word where it should be looking instead?
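For illustration, an MVC action that answers an OPTIONS request could look roughly like the sketch below. The header values are the usual generic WebDAV capability advertisements, not something taken from this application, and may not be enough on their own for Office:

// Minimal sketch only: answering the OPTIONS probe that Word sends to /Files.
// Assumes the default route maps /Files to this controller's Index action.
[AcceptVerbs("OPTIONS")]
public virtual ActionResult Index()
{
    Response.AppendHeader("DAV", "1,2");
    Response.AppendHeader("MS-Author-Via", "DAV");
    Response.AppendHeader("Allow", "OPTIONS, GET, HEAD, PUT, PROPFIND, LOCK, UNLOCK");
    return new EmptyResult();
}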
Here is my controller action:
public virtual FileResult Download(string filePath)
{
    FileDetails file = _fileService.GetFile(filePath);
    return File(file.Stream, file.ContentType);
}
And here is the file service method:
public FileDetails GetFile(string location)
{
    var fileName = Path.GetFileName(location);
    var contentType = ContentType.Get(Path.GetExtension(location));
    string license = "license";

    var session = new WebDavSession(license) { Credentials = CredentialCache.DefaultCredentials };
    IResource resource = session.OpenResource(string.Format("{0}{1}", ConfigurationManager.AppSettings["WebDAVRoot"], location));
    resource.TimeOut = 600000;
    var input = resource.GetReadStream();

    return new FileDetails { Filename = fileName, ContentType = contentType, Stream = input };
}
It is still very early days on this so I appreciate I could be doing this in entirely the wrong way and so any form of help is welcome.

In the end, it seems the better option was to allow users to talk to the WebDAV server directly and to implement the authentication logic there to control access.
The IT Hit server has extensions that allow you to authenticate against the forms authentication for the rest of the site using basic or digest authentication from Office. Using that along with some other customisations to the item request logic gave us what we needed.

This is exactly what I did for an MVC 4 project.
https://mvc4webdav.codeplex.com/

Related

Dynamic redirect url from google console - oAuth

I have created an MVC/API project to enable external authentication, and it worked fine for my localhost URL. However, I need to achieve the following.
I am supporting multi-tenancy (same app service, different DBs), so each tenant has to connect to a different DB based on a custom parameter in the MVC URL.
Ex: https://localhost/tenant1, .../tenant2, .../tenant3 etc. (not going with separate subdomains at this point)
I am not sure whether the Google Console supports a wildcard URL as a return URL, and I am not sure how to achieve that in the MVC code (e.g. http://localhost/* or {0}, something like that, so the dynamic input parameter will be returned back from Google).
I am reading and attempting some solutions and will update the answer here once I get the complete solution. In the meantime, if anyone has any suggestions, please help me.
UPDATE 1:
I have updated my source code as follows:
Store the tenant in the session before redirecting to the external login:
System.Web.HttpContext.Current.Session["Tenant"] = "tenantname";
After the callback, read the tenant details saved in the session and use them for subsequent DB calls based on the tenant name:
public async Task<ActionResult> ExternalLoginCallback(string returnUrl)
{
    var loginInfo = await AuthenticationManager.GetExternalLoginInfoAsync();
    if (loginInfo == null)
    {
        return RedirectToAction("Login");
    }

    if (System.Web.HttpContext.Current.Session["Tenant"] != null)
    {
        // read the tenant back out of the session for subsequent DB calls
        string sessionObj = System.Web.HttpContext.Current.Session["Tenant"] as String;
        // ...
    }
    // ...
}
This is a common requirement, and is easily solved. There are two components.
Firstly, regardless of which of your many URLs your application lives at (myapp.com/tenant1, /tenant2, etc) you have a single redirect URL (eg myapp.com/oauthredirect).
Secondly, when starting the OAuth dance (https://developers.google.com/identity/protocols/OAuth2WebServer#redirecting), you can specify a state parameter which will be passed into your oauthredirect routine (eg. as state=tenant1). You can then use this to create a redirect back to the appropriate site URL once you have finished your user registration tasks.
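For the ASP.NET Identity/OWIN setup shown in the question, the usual way to get this round-trip is through AuthenticationProperties, which OWIN encodes into the OAuth state parameter for you. A hedged sketch (the action names and the "Tenant" key are invented for the example):

// Sketch: carry the tenant in AuthenticationProperties instead of Session; OWIN
// round-trips it via the OAuth "state" parameter and the external sign-in cookie.
public ActionResult ExternalLogin(string provider, string tenant)
{
    var properties = new AuthenticationProperties
    {
        RedirectUri = Url.Action("ExternalLoginCallback", "Account")
    };
    properties.Dictionary["Tenant"] = tenant; // e.g. "tenant1"

    HttpContext.GetOwinContext().Authentication.Challenge(properties, provider);
    return new HttpUnauthorizedResult();
}

// In ExternalLoginCallback, the value comes back with the external cookie:
// var result = await AuthenticationManager.AuthenticateAsync(DefaultAuthenticationTypes.ExternalCookie);
// string tenant = result.Properties.Dictionary["Tenant"];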
Be careful when specifying your redirect URLs in the developer console. They must be a character-by-character match with the actual URL. So, for example, you will need to specify both http://myapp.com/oauthredirect and https://myapp.com/oauthredirect. I've always found it quite useful to create a local entry in /etc/hosts (or the Windows equivalent) so your localhost is also resolved by e.g. http://test.myapp.com
Authorized redirect URIs: For use with requests from a web server. This is the path in your application that users are redirected to after they have authenticated with Google. The path will be appended with the authorization code for access. Must have a protocol. Cannot contain URL fragments or relative paths. Cannot be a public IP address.
http://localhost/google-api-php-client-samples/Analytics/Oauth2.php
http://localhost/authorize/
You can have as many of them as you want but the wild card is not going to work.

Log Requests from some Link

How can I log requests that are going to some link?
I need to store the request headers, verb (GET, POST, etc.), request data and request body.
It must be a separate application, like Fiddler.
DESC: I have a web application that performs a search. I want to log the data of the search request using another application that can log any requests for a given site (in my case, my web app). How can I do that? I have researched solutions, but I keep finding examples where you create a Module or Filter that must be included in the web application; that is not an option in my case.
If you have control of both sides, you can basically do whatever you want.
Maybe link to an action first that acts as a tracker:
public ActionResult Track()
{
    // get whatever data you want here
    // Request.Headers, Request.RequestType etc.
    // track the data in a database or whatever
    SaveSomeData();

    // get the original url from a post variable or querystring, wherever you put it
    // (the example link below passes it as ?url=...)
    var redirectUrl = Request["url"];

    return Redirect(redirectUrl);
}
Then you would change your links; for example, a link to http://google.com would change to
http://mywebsite.com/mycontroller/track?url=http://google.com
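In a Razor view the tracking link could be generated rather than hard-coded; a small sketch, assuming the Track action above lives on MyController:

@* Sketch: build the tracking link with Url.Action so routing stays in one place. *@
<a href="@Url.Action("Track", "My", new { url = "http://google.com" })">Google</a>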
Another possible way would be to create a proxy, and monitor the data that goes through it.
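If modifying the application really is off the table, the proxy idea could be a small standalone program that sits in front of the site, logs each request, and forwards it on. A rough sketch only (host name and port are placeholders; no HTTPS, header forwarding or error handling):

// Rough standalone sketch: listen locally, log verb/headers/body, forward to the real site.
using System;
using System.IO;
using System.Net;
using System.Net.Http;

class LoggingProxy
{
    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8888/");   // point the browser/client here
        listener.Start();
        var client = new HttpClient();

        while (true)
        {
            var ctx = listener.GetContext();
            string body = new StreamReader(ctx.Request.InputStream).ReadToEnd();

            // Log whatever is needed (swap Console for a file or database)
            Console.WriteLine("{0} {1}", ctx.Request.HttpMethod, ctx.Request.RawUrl);
            foreach (string key in ctx.Request.Headers)
                Console.WriteLine("{0}: {1}", key, ctx.Request.Headers[key]);
            Console.WriteLine(body);

            // Forward to the real application (placeholder host) and relay the response;
            // the original content type is not copied in this sketch
            var forwarded = new HttpRequestMessage(
                new HttpMethod(ctx.Request.HttpMethod),
                "http://mywebsite.example" + ctx.Request.RawUrl);
            if (body.Length > 0)
                forwarded.Content = new StringContent(body);

            var response = client.SendAsync(forwarded).Result;
            var bytes = response.Content.ReadAsByteArrayAsync().Result;
            ctx.Response.StatusCode = (int)response.StatusCode;
            ctx.Response.OutputStream.Write(bytes, 0, bytes.Length);
            ctx.Response.Close();
        }
    }
}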
Need a better idea of what you need though to help out more.

umbraco one site different brands

I'm very new to Umbraco and have a requirement to set up a site where different customers will access the same site, but see it with their own brand. It must be the same site in IIS, re-using the same Razor views and related code, but our business teams need to be able to set up a new customer for the same site, with their own values for the configurable content data, via Umbraco and without relying on support or developer involvement.
eg. Site URL is www.mysite.com
Customer from ClientA visits (maybe via URL www.mysite.com/ClientA or perhaps www.mysite.com?brand=ClientA) and sees the version branded for them.
A customer from ClientB should be able to visit the same site but passing in their brand code instead and see their customized version.
My first question is: Is this achievable? If so, what is the correct way to do it?
I want to maximise code re-use.
Any help or pointers would be hugely appreciated.
Thanks in advance.
You can do this in standard ASP.NET: add a code-behind for the standard default.aspx page that Umbraco uses to drive everything, and then in the OnPreInit event switch either the master page or the theme to the correct branding; something like:
protected override void OnPreInit(EventArgs e)
{
    base.OnPreInit(e);

    int templateId = umbraco.NodeFactory.Node.GetCurrent().template;
    umbraco.template template = new umbraco.template(templateId);
    string templateName = template.TemplateAlias;

    if (Request.QueryString["brand"] == "ClientA")
    {
        Page.MasterPageFile = string.Format("~/MasterPages/clienta/{0}.master", templateName);
    }
}
So all content is tagged to the standard set of masterpages in the masterpages folder; but if a "?brand=ClientA" url is requested it automatically changes the masterpage to the clienta folder - allowing you to brand the page based on the querystring.
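Since the answer mentions switching either the master page or the theme, the theme variant would look roughly like the sketch below (assuming an App_Themes folder exists for each brand; the querystring name matches the example above):

// Sketch: switch the ASP.NET theme instead of the master page.
// Assumes a theme folder per brand, e.g. App_Themes/ClientA.
protected override void OnPreInit(EventArgs e)
{
    base.OnPreInit(e);

    string brand = Request.QueryString["brand"];
    if (!string.IsNullOrEmpty(brand))
    {
        Page.Theme = brand; // "ClientA" -> App_Themes/ClientA
    }
}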

How to get the raw binary content in Grails controller

After many days of searching and many unsuccessful tries, I hope the community knows a way to achieve my task:
I want to use Grails as a kind of proxy to my Solr backend. By this, I want to ensure that only authorized requests are handled by Solr. Grails checks the provided collection and the requested action and validates the request against predefined user-based rules. Therefore, I extended my Grails URL mapping to:
"/documents/$collection/$query" {
controller = "documents"
action = action = [GET: "proxy_get", POST: "proxy_post"]
}
The proxy_get method works fine, even when the client is using SolrJ. All I have to do is forward the URL request to Solr and reply with the Solr response.
However, in the proxy_post method I need to get the raw body data of the request to forward it to Solr. SolrJ uses javabin for that, and so far I have not been able to get the raw binary request. The most promising approach was this:
DefaultHttpClient httpClient = new DefaultHttpClient();
HttpPost httpPost = new HttpPost(solrUrl);
InputStream requestStream = request.getInputStream();
ContentType contentType = ContentType.create(request.getContentType());
httpPost.setEntity(new ByteArrayEntity(IOUtils.toByteArray(requestStream), contentType));
httpPost.setHeader("Content-Type", request.getContentType())
HttpResponse solrResponse = httpClient.execute(httpPost);
However, the transferred content is empty in the case of javabin (e.g. when I add a document using SolrJ).
So my question is whether there is any way to get at the raw binary POST body so that I can forward the request to Solr.
Mathias
Try using Groovy HttpBuilder. It has a powerful low-level API while still providing Groovy-ness.

How can I write an MVC3/4 application that can both function as a web API and a UI onto that API?

My title sums this up pretty well. My first thought is to provide a few data formats, one being HTML, which I can provide and consume using the Razor view engine and MVC3 controller actions respectively. Then, maybe provide other data formats through custom view engines. I have never really worked in this area before except for very basic web services, very long ago. What are my options here? What is this Web API I see linked to MVC4?
NOTE: My main HTML app need not operate directly off the API. I would like to write the API first, driven by the requirements of a skeleton HTML client, with a very rudimentary UI, and once the API is bedded down, then write a fully featured UI client using the same services as the API but bypassing the actual data parsing and presentation API components.
I had this very same thought as soon as the first talk of the Web API was around. In short, the Web API is a new product from the MS .NET Web Stack that builds on top of WCF, OData and MVC to provide a uniform means of creating a RESTful Web API. Plenty of resources on that, so go have a Google.
Now onto the question..
The problem is that you can of course make the Web API return HTML, JSON, XML, etc - but the missing piece here is the Views/templating provided by the Razor/ASPX/insertviewenginehere. That's not really the job of an "API".
You could of course write client-side code to call into your Web API and perform the templating/UI client-side with the mass amount of plugins available.
I'm pretty sure the Web API isn't capable of returning templated HTML in the same way an ASP.NET MVC web application can.
So if you want to "re-use" certain portions of your application (repository, domain, etc), it would probably be best to wrap the calls in a facade/service layer of sorts and make both your Web API and separate ASP.NET MVC web application call into that to reduce code.
All you should end up with is an ASP.NET MVC web application which calls into your domain and builds templated HTML, and an ASP.NET Web API application which calls into your domain and returns various resources (JSON, XML, etc).
If you have a well structured application then this form of abstraction shouldn't be a problem.
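As a rough illustration of that split (the IProductService and Product names are invented for the example), both front ends depend on the same service and differ only in how they present the result:

// Shared facade/service layer (hypothetical names)
public interface IProductService
{
    IEnumerable<Product> GetAll();
}

// ASP.NET MVC: calls the domain and builds templated HTML via Razor
public class ProductsController : Controller
{
    private readonly IProductService _products;
    public ProductsController(IProductService products) { _products = products; }

    public ActionResult Index()
    {
        return View(_products.GetAll());
    }
}

// ASP.NET Web API: calls the same domain and lets content negotiation return JSON/XML
public class ProductsApiController : ApiController
{
    private readonly IProductService _products;
    public ProductsApiController(IProductService products) { _products = products; }

    public IEnumerable<Product> Get()
    {
        return _products.GetAll();
    }
}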
I'd suggest developing your application in such a way that you use a single controller to return the initial application assets (html, javascript, etc) to the browser. Create your API / logic in WebAPI endpoint services and access those services via JavaScript. Essentially creating a single page application. Using MVC 4 our controller can return different Views depending on the device (phone, desktop, tablet), but using the same JavaScript all of your clients will be able to access the service.
Good libraries to look into include KnockoutJS, SammyJS, or BackboneJS.
If you do have a requirement to return HTML using the Web API, e.g. to allow users to click around and explore your API using the same URL, then you can use routing and an HTML message handler.
public class HtmlMessageHandler : DelegatingHandler
{
    private List<string> contentTypes = new List<string> { "text/html", "application/html", "application/xhtml+xml" };

    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        if (request.Method == HttpMethod.Get && request.Headers.Accept.Any(h => contentTypes.Contains(h.ToString())))
        {
            var response = new HttpResponseMessage(HttpStatusCode.Redirect);
            var htmlUri = new Uri(String.Format("{0}/html", request.RequestUri.AbsoluteUri));
            response.Headers.Location = htmlUri;
            return Task.Factory.StartNew<HttpResponseMessage>(() => response);
        }
        else
        {
            return base.SendAsync(request, cancellationToken);
        }
    }
}
For a full example, check out:
https://github.com/arble/WebApiContrib.MessageHandlers.Html
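For context, a handler like this would normally be registered once at application start; a brief sketch, assuming the standard WebApiConfig class from the project template:

// Sketch: plugging the HtmlMessageHandler into the Web API pipeline.
public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        config.MessageHandlers.Add(new HtmlMessageHandler());

        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional });
    }
}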
I've played with this idea before. I exposed an API through MVC3 as JSONResult methods on different controllers. I implemented custom security for the API using controller action filters. Then built a very AJAX heavy HTML front-end which consumed the JSON services. It worked quite well and had great performance, as all data transferred for the web app was through AJAX.
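That pattern, roughly (the filter and service names here are hypothetical stand-ins for the custom security and data access mentioned above):

// Sketch: an MVC3 action exposing JSON for an AJAX-heavy front end.
public class OrdersController : Controller
{
    private readonly IOrderService _orderService; // hypothetical service

    public OrdersController(IOrderService orderService) { _orderService = orderService; }

    [CustomApiAuthorize] // stand-in for the custom security action filter
    public JsonResult Recent()
    {
        return Json(_orderService.GetRecent(), JsonRequestBehavior.AllowGet);
    }
}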
Frederik Normen has a good post on Using Razor together with ASP.NET Web API:
http://weblogs.asp.net/fredriknormen/archive/2012/06/28/using-razor-together-with-asp-net-web-api.aspx
One important constraint of a well designed REST service is utilizing "hypermedia as the engine of application state" (HATEOAS - http://en.wikipedia.org/wiki/HATEOAS).
It seems to me that HTML is an excellent choice to support as one of the media formats. This would allow developers and other users to browse and interact with your service without a specially built client, which in turn would probably result in faster development of a client for your service. (When it comes to developing actual HTML clients, it would make more sense to use JSON or XML.) It would also push a development team towards a better-designed REST service, as you will be forced to structure your representations in a way that facilitates an end user's navigation using a browser.
I think it would be smart for any development team to consider taking a similar approach to Frederik's example and create a media type formatter that generates an HTML UI for a REST service by reflecting on the return type and using conventions (or something similar - given the reflection involved, I would make sure the HTML media format was only used for exploration by developers, perhaps by only making it accessible in certain environments).
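A very rough sketch of what such a formatter could look like (the class name and the reflection-based rendering are invented for illustration; it would be registered with config.Formatters.Add and relies on System.Net.Http.Formatting, System.Net.Http.Headers, System.IO and System.Text):

// Sketch: render any returned object as a bare-bones HTML list via reflection.
public class SimpleHtmlFormatter : BufferedMediaTypeFormatter
{
    public SimpleHtmlFormatter()
    {
        SupportedMediaTypes.Add(new MediaTypeHeaderValue("text/html"));
    }

    public override bool CanReadType(Type type) { return false; }
    public override bool CanWriteType(Type type) { return true; }

    public override void WriteToStream(Type type, object value, Stream writeStream, HttpContent content)
    {
        var sb = new StringBuilder("<html><body><ul>");
        foreach (var prop in type.GetProperties())
        {
            sb.AppendFormat("<li>{0}: {1}</li>", prop.Name, prop.GetValue(value, null));
        }
        sb.Append("</ul></body></html>");

        var bytes = Encoding.UTF8.GetBytes(sb.ToString());
        writeStream.Write(bytes, 0, bytes.Length);
    }
}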
I'm pretty sure I'll end up doing something like this (if someone hasn't already or if there is not some other feature in the web api that does this. I'm a little new to Web API). Maybe it'll be my first NuGet package. :) If so I'll post back here when it's done.
Creating HTML is a job for an MVC controller, not for Web API, so if you need something that can return both JSON and HTML generated with some view engine, the best option is a standard MVC controller action method. Content negotiation, that is, choosing the format to return, can be achieved with an action filter. I have an action filter that enables the controller to receive "hints" from the client about the format to return. The client can ask for a view with a specific name, or for JSON. The hint is sent either in the query string or in a hidden field (in case the request comes from a form submit). The code is below:
public class AcceptViewHintAttribute : ActionFilterAttribute
{
    private JsonRequestBehavior jsBehavior;

    public AcceptViewHintAttribute(JsonRequestBehavior jsBehavior = JsonRequestBehavior.DenyGet)
    {
        this.jsBehavior = jsBehavior;
    }

    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        string hint = filterContext.RequestContext.HttpContext.Request.Params["ViewHint"];
        if (hint == null) hint = filterContext.RequestContext.RouteData.Values["ViewHint"] as string;

        if (!string.IsNullOrWhiteSpace(hint) && hint.Length <= 100 && new Regex(@"^\w+$").IsMatch(hint))
        {
            ViewResultBase res = filterContext.Result as ViewResultBase;
            if (res != null)
            {
                if (hint == "json")
                {
                    // replace the view result with a JSON result built from the same model
                    JsonResult jr = new JsonResult();
                    jr.Data = res.ViewData.Model;
                    jr.JsonRequestBehavior = jsBehavior;
                    filterContext.Result = jr;
                }
                else
                {
                    // otherwise treat the hint as the name of the view to render
                    res.ViewName = hint;
                }
            }
        }
        base.OnActionExecuted(filterContext);
    }
}
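Usage would then look something like the following (controller and model names are made up; the hint can come from ?ViewHint=json or a hidden form field as described above):

// Hypothetical usage of the filter: one action, view or JSON depending on the hint.
public class CatalogController : Controller
{
    [AcceptViewHint(JsonRequestBehavior.AllowGet)]
    public ActionResult Index()
    {
        var model = LoadProducts();   // placeholder for real data access
        return View(model);           // /Catalog?ViewHint=json returns the model as JSON
    }
}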
Now that it's been a little while through the Beta, MS just released the Release Candidate version of MVC4/VS2012/etc. Speaking to the navigation/help pages (mentioned by some other posters), they've added a new IApiExplorer class. I was able to put together a self-documenting help page that picks up all of my ApiControllers automatically and uses the comments I've already put inline to document them.
My recommendation, architecture-wise, as others have said as well, would be to abstract your application into something like "MVCS" (Model, View, Controller, Services), which you may know as something else. What I did was separate my models into a separate class library, then separated my services into another library. From there, I use dependency injection with Ninject/Ninject MVC3 to hook my implementations up as needed, and simply use the interfaces to grab the data I need. Once I have my data (which is of course represented by my models), I do whatever is needed to adjust it for presentation, and send it back to the client.
Coming from MVC3, I have one project that I ported to MVC4, which uses the "traditional" Razor markup and such, and a new project that will be a single page AJAX application using Backbone + Marionette and some other things sprinkled in. So far, the experience has been really great, it's super easy to use. I found some good tutorials on Backbone + Marionette here, although they can be a bit convoluted, and require a bit of digging through documentation to put it all together, it's easy once you get the hang of it:
Basic intro to Backbone.js: http://arturadib.com/hello-backbonejs/docs/1.html
Use cases for Marionette views (I found this useful when deciding how to create views for my complex models): https://github.com/derickbailey/backbone.marionette/wiki/Use-cases-for-the-different-views
