HMAC with ASP.NET WebAPI using Cuong's Solution - asp.net-mvc

I spent some time today looking through various HMAC implementations in C# for an upcoming WebAPI project. I wanted to start out with some existing code just to see it all work and understand it better before I either wrote it from scratch or modified it for my needs.
There are a bunch of great articles and posts both here and on the web. However, I have gotten to the point that I need some pointers and would greatly appreciate some insight.
I started with Cuong's post here: How to secure an ASP.NET Web API.
I knew I would have to expand upon it since I wanted to support both JSON and form-encoded data. My test client is also written in C# using HttpClient, and I spun up an empty WebAPI project and am using the ValuesController.
Below are my observations and questions:
POSTing: In order to get Cuong's code to validate successfully, my POST needs to include the parameters in the URL; however, to get the values into my controller, I need to include them in the body as well. Is this normal for this type of authentication? In this particular instance, the message I am hashing is http://:10300/api/values?param1=value1&param2=value2. I can parse the query string manually to get the values, but in order to get them into my controller through model binding, I must also do the following:
var dict = new Dictionary<string, string>
{
    {"param1", "value1"},
    {"param2", "value2"}
};
var content = new FormUrlEncodedContent(dict);
var response = await httpClient.PostAsync(httpClient.BaseAddress, content);
Otherwise my parameter is always null in the post action of the ValuesController.
I am planning on expanding the code to include a nonce. Is the combination of a nonce, a timestamp and the verb enough for a secure signature, or is there really a need to also hash the message body?
I tried (very unsuccessfully) to extend the code to support JSON as well as form-encoded data, and I must be missing something obvious.
Cuong is using the Authentication and Timestamp headers instead of putting the signature and timestamp in the query string. Is there a benefit to one method over the other? The majority of articles I have read have them in the query string itself.
The code looks great and I am a little out of my element here. I might be safer (saner?) just writing it from scratch to appreciate the nuances of it. That said, if anyone can lend some insight into what I am seeing that would be great.
At the end of the day, I want to be able to use the built-in authorization mechanism of the WebAPI framework to simply attribute the methods/controllers, accept form-encoded and JSON data, and reasonably model bind complex types.
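For concreteness, here is a rough sketch (my own, not Cuong's code) of the kind of signature composition I have in mind: verb, path, timestamp, nonce and a hash of the raw request body. Hashing the raw body rather than individual parameters is what would let the same scheme cover both form-encoded and JSON payloads. The field order and separators below are just assumptions that client and server would have to agree on.
using System;
using System.Security.Cryptography;
using System.Text;

public static class HmacSigner
{
    public static string Sign(string secret, string verb, string requestUri,
                              string timestamp, string nonce, byte[] body)
    {
        // Hash the raw body so the signature covers the payload regardless of content type.
        string bodyHash;
        using (var sha256 = SHA256.Create())
        {
            bodyHash = Convert.ToBase64String(sha256.ComputeHash(body ?? new byte[0]));
        }

        // Compose the string to sign; the order and separator are just a convention.
        var message = string.Join("\n", verb.ToUpperInvariant(), requestUri.ToLowerInvariant(),
                                  timestamp, nonce, bodyHash);

        using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(secret)))
        {
            return Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(message)));
        }
    }
}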
* Update *
I have done some more work today; below is the code from my NUnit PostTest. I figured out how to get the values through without including them in both the body and the query string (code below).
[Test]
public async Task PostTest()
{
    using (var httpClient = new HttpClient())
    {
        var payload = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            {"key1", "value1"},
            {"key2", "value2"}
        });
        var now = DateTime.UtcNow.ToString("U");
        httpClient.BaseAddress = new Uri("http://ipv4.fiddler:10300/api/values");
        httpClient.DefaultRequestHeaders.Add("Timestamp", now);
        httpClient.DefaultRequestHeaders.Add("Authentication",
            string.Format("test:{0}", BuildPostMessage(now, httpClient.BaseAddress, await payload.ReadAsStringAsync())));
        var response = await httpClient.PostAsync(httpClient.BaseAddress, payload);
        await response.Content.ReadAsStringAsync();
        Assert.AreEqual(true, response.IsSuccessStatusCode);
    }
}
I also figured out the model binding portion of it. There is a great article here: http://www.west-wind.com/weblog/posts/2012/Mar/21/ASPNET-Web-API-and-Simple-Value-Parameters-from-POSTed-data that explains how POST works and I was able to get it to work with both a model of my own design as well as with the FormDataCollection object.
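For anyone following along, this is roughly the shape of action that kind of binding gives you (the model name here is made up, not from the article): Web API binds a single complex-type parameter from the request body, whether it arrives form-encoded or as JSON.
using System.Net;
using System.Net.Http;
using System.Web.Http;

public class ValuePair
{
    public string Key1 { get; set; }
    public string Key2 { get; set; }
}

public class ValuesController : ApiController
{
    // The media-type formatters bind the body (form-urlencoded or JSON) to the complex type.
    public HttpResponseMessage Post(ValuePair model)
    {
        return Request.CreateResponse(HttpStatusCode.OK, model);
    }
}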
Now I am left wondering whether it is worth adding JSON-encoded messages or if standardizing on FormUrlEncodedContent is the way to go. Also, are client nonces enough or should I implement a server-side nonce? Does a server-side nonce double all of the calls to the service (the first one throws a 401 and the second one includes the payload with the nonce)?
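One assumption I am working from (not from Cuong's code): if the client generates the nonce, the server does not need a 401/retry handshake at all; that handshake is really only needed when the server issues the nonce, as HTTP Digest does. The server just has to remember the nonces it has already accepted within the allowed timestamp window and reject repeats. A rough sketch using MemoryCache:
using System;
using System.Runtime.Caching;

public static class NonceStore
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    // Returns false if the nonce was already seen inside the validity window (a replay).
    public static bool TryRegister(string nonce, TimeSpan window)
    {
        // AddOrGetExisting returns null when the key was not already present, i.e. the nonce is new.
        return Cache.AddOrGetExisting(nonce, DateTime.UtcNow, DateTimeOffset.UtcNow.Add(window)) == null;
    }
}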

Related

Single Access Token from several endpoints?

I have a fundamental lack of understanding of implementing security with OAuth. All of the examples I see use the same pattern, so I'm pretty confident my premise is wrong, and the examples are right!
Generally I see it added along these lines.
.AddOAuth("auth", options =>
{
options.ClientId = "id";
options.ClientSecret = "secret";
options.CallbackPath = new PathString("/signin");
options.Scope.Add("scope");
options.AuthorizationEndpoint = "https://.../authorize";
options.TokenEndpoint = "https://.../token";
options.UserInformationEndpoint = "https://.../userinfo";
options.ClaimActions.MapJsonKey(ClaimTypes.NameIdentifier, "id");
options.ClaimActions.MapJsonKey(ClaimTypes.Name, "name");
options.Events = new OAuthEvents
{
OnCreatingTicket = async context =>
{
var request = new HttpRequestMessage(HttpMethod.Get, context.Options.UserInformationEndpoint);
request.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", context.AccessToken);
var response = await context.Backchannel.SendAsync(request, HttpCompletionOption.ResponseHeadersRead, context.HttpContext.RequestAborted);
response.EnsureSuccessStatusCode();
var user = JObject.Parse(await response.Content.ReadAsStringAsync());
context.RunClaimActions(user);
}
};
Now to my fundamental misunderstanding. I want to make a demo application that, for example, has a couple of pages that show the results of some REST API calls. These calls all require an access token. Once the code reaches OnCreatingTicket, the access token has been acquired, and it is used to get data from the UserInformationEndpoint. However, I have several different endpoints that I want the user to select from (and there may not even be a "UserInformationEndpoint" for the API I'm trying to call).
So, I think I have a bogus idea about how I'm supposed to do this. Could someone spin me around and point me in the right direction?
Edit because it's too long for a comment:
My requirement is just to write a demo app that demonstrates making calls to an API directory.
For the sake of the demo, I just imagine a page with a series of links/buttons that take you to a page showing the data returned from different API calls.
In the example above (and all the examples I've seen so far) the authorization is set up at the server level and used to retrieve user information. For my demo, getting user information back and displaying it might be nice, but it's not the end goal. The fact that all of the examples center on using the token a single time to get user info is what confuses me. I'm assuming (perhaps incorrectly) that I should get the token once and use it repeatedly from different points in the application.
I'm having trouble telling whether I am fundamentally misunderstanding how to approach this, since I tend to assume that if all the examples I find are doing the same thing, they are doing it correctly.
It depends on the endpoint, but the way I usually get the user's profile information is by adding scopes. For example, if your endpoint allows access to user profiles, request that scope inside your OAuth options when you authenticate the user:
options.Scope.Add("profile");
This should get it done with less traffic back and forth to the endpoint.
For example, I recently had the same issue with the Google API. I don't remember the exact name of the scope, but I requested the "profile" scope or something like that and it sent me the email, first name, and last name. When I didn't request this scope it only sent me the user ID.
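If the larger goal is to reuse the token for several of your own API calls (not just a user-info endpoint), one option in ASP.NET Core is to have the handler persist the tokens and read them back wherever you make the call. A rough sketch, assuming the usual cookie + OAuth setup; the controller name and endpoint URL are placeholders. In the OAuth options, alongside ClientId and the endpoints, set options.SaveTokens = true, then:
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authentication;
using Microsoft.AspNetCore.Mvc;

public class DemoController : Controller
{
    private static readonly HttpClient Client = new HttpClient();

    public async Task<IActionResult> DataX()
    {
        // The access token persisted by SaveTokens during the OAuth sign-in.
        var accessToken = await HttpContext.GetTokenAsync("access_token");

        var request = new HttpRequestMessage(HttpMethod.Get, "https://api.example.com/dataX");
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);

        var response = await Client.SendAsync(request);
        return Content(await response.Content.ReadAsStringAsync(), "application/json");
    }
}
Each page or action that calls a different API can read back the same saved token, which is the "get it once, use it repeatedly" pattern described in the question.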

how to verify referrer inside a MVC or Web Api ajax call

My MVC app has common AJAX methods (in Web API and a regular controller). I'd like to authorize these calls based on which area (view) of my app the call is coming from. The problem I am facing is how to verify the origin of the AJAX call.
I realize that this is not easily possible, since AJAX calls are easy to spoof, but since I have full control of how the view gets rendered (the full page source), perhaps there is a way to embed anti-forgery type tokens that could later be verified against the URL referrer.
Authentication is already handled and I can safely verify the identity of the caller; the only problem is verifying which URL (MVC route) the call came from. More specifically, preventing the user from being able to spoof the origin of the AJAX call.
I tried creating a custom authorization header and passing it between the view render and the AJAX calls, and that works, but it is still easy to spoof (since a user could sniff the headers from another part of the site and re-use them). In the end I am not sure how to safely verify that the header has not been spoofed. The only thing that comes to mind is encoding some info about the original context inside the token, and validating it somehow against the incoming call context (the one that's passing the token in the AJAX call).
I see that MVC has anti-forgery token capabilities, but I am not sure if that can solve my problem. If so, I'd like to know how it could be used to verify that /api/common/update was called from /home/index vs. /user/setup (both of these calls are valid).
Again, I'd like a way to verify which page an AJAX call is coming from; user identity is not the issue.
update
As per #Sarathy's recommendation, I tried implementing the anti-forgery token. As far as I can tell this works by adding a hidden field with the token on each page and comparing it to a token set in a cookie. Here is my implementation of a custom action filter attribute that does the token validation:
public override void OnActionExecuting(ActionExecutingContext filterContext)
{
    var req = filterContext.RequestContext.HttpContext.Request;
    var fToken = req.Headers["X-Request-Verification-Token"];
    var cookie = req.Cookies[AntiForgeryConfig.CookieName];
    var cToken = cookie != null
        ? cookie.Value
        : "null";
    log.Info("filter \ntoken:{0} \ncookie:{1}", fToken, cToken);
    AntiForgery.Validate(cToken, fToken);
    base.OnActionExecuting(filterContext);
}
Then my anti-forgery additional data provider looks like this:
public class MyAntiForgeryProvider : IAntiForgeryAdditionalDataProvider
{
    public string GetAdditionalData(System.Web.HttpContextBase context)
    {
        var ad = string.Format("{0}-{1}", context.Request.Url, new Random().Next(9999));
        log.Info("antiforgery AntiForgeryProvider.GetAdditionalData Request.AdditionalData: {0}", ad);
        log.Info("antiforgery AntiForgeryProvider.GetAdditionalData Request.UrlReferrer: {0}", context.Request.UrlReferrer);
        return ad;
    }

    public bool ValidateAdditionalData(System.Web.HttpContextBase context, string additionalData)
    {
        log.Info("antiforgery AntiForgeryProvider.ValidateAdditionalData Request.Url: {0}", context.Request.Url);
        log.Info("antiforgery AntiForgeryProvider.ValidateAdditionalData additionalData: {0}", additionalData);
        return true;
    }
}
This works, in that I can see the correct pages logged in the provider, and the anti-forgery validation breaks without the tokens.
However, unless I did something wrong, this seems trivial to spoof. For example, if I go to pageA and copy the token from pageB (just the form token, not even the cookie token), the validation still succeeds, and in my logs I see pageB while executing the AJAX method from pageA.
Confirmed that this is pretty easy to spoof.
I am using the CSRF (AntiForgery) helpers to generate AJAX tokens like this:
public static string MyForgeryToken(this HtmlHelper htmlHelper)
{
    var c = htmlHelper.ViewContext.RequestContext.HttpContext.Request.Cookies[AntiForgeryConfig.CookieName];
    string cookieToken, formToken;
    AntiForgery.GetTokens(c != null ? c.Value : null, out cookieToken, out formToken);
    return formToken;
}
I then pass the form token back with each AJAX call and have a custom ActionFilterAttribute where I read and validate it along with the cookie token:
public override void OnActionExecuting(ActionExecutingContext filterContext)
{
    var req = filterContext.RequestContext.HttpContext.Request;
    var fToken = req.Headers[GlobalConstants.AntiForgeKey];
    var cookie = req.Cookies[AntiForgeryConfig.CookieName];
    var cToken = cookie != null
        ? cookie.Value
        : "null";
    log.Info("MyAntiForgeryAttribute.OnActionExecuting. \ntoken:{0} \ncookie:{1}", fToken, cToken);
    AntiForgery.Validate(cToken, fToken);
}
This all works (changing anything about the token throws the correct exception), and in my IAntiForgeryAdditionalDataProvider I can see what it thinks it's processing.
As soon as I override the CSRF token with one from another view, it thinks it's that view. I don't even have to tamper with the UrlReferrer to break this :/
One way this could work is if I could force the cookie to be different on every page load.
I am assuming you can use IAntiForgeryAdditionalDataProvider for this.
public class CustomDataProvider : IAntiForgeryAdditionalDataProvider
{
    public string GetAdditionalData(HttpContextBase context)
    {
        // Return the current request url or build a route or create a hash from a set of items from the current context.
        return context.Request.Url.ToString();
    }

    public bool ValidateAdditionalData(HttpContextBase context, string additionalData)
    {
        // Check whether the allowed list contains additional data or delegate the validation to a separate component.
        return false;
    }
}
Register the provider in App_Start like below.
AntiForgeryConfig.AdditionalDataProvider = new CustomDataProvider();
https://msdn.microsoft.com/en-us/library/system.web.helpers.iantiforgeryadditionaldataprovider(v=vs.111).aspx
Hope this helps in your scenario.
You mentioned in your question that you're looking for Anti-forgery token capabilities.
Hence, I think what you're asking about is an anti-CSRF solution (CSRF = cross-site request forgery).
One way to do this is to render a true random number (a one-time token) into your page and then pass it on each request, which can be done by adding a key/value pair to the request header that is then checked at the backend (i.e. inside your controller). This is a challenge-response approach.
As you mentioned, in the server-side code you can use
var fToken = req.Headers["X-Request-Verification-Token"];
to get it from the requesting page.
To pass it along with each AJAX request from the page, you can use
var tokenValue = '6427083747'; // replace this by rendered random token
$(document).ajaxSend(function (event, jqxhr, settings) {
    jqxhr.setRequestHeader('X-Request-Verification-Token', tokenValue);
});
or you can set it for each request by using
var tokenValue = '2347893735'; // replace this by rendered random token
$.ajax({
    url: 'foo/bar',
    headers: { 'X-Request-Verification-Token': tokenValue }
});
Note that tokenValue needs to contain the random number which was rendered by the web server when the web page was sent to the client.
I would not use cookies for this, because cookies don't protect you against CSRF - you need to ensure that the page making the request is the same page that was rendered (and hence created by the web server). A page in a different tab of the same browser window could use the cookie as well.
Details can be found on the OWASP project page, in the OWASP CSRF prevention cheat sheet.
My quick interim solution was to use custom tokens created on each page load (a GUID which I keep track of in my token cache), which are passed as headers in all AJAX calls. Additionally, I create a hash of the original URL and combine it into the custom auth token.
In my AJAX methods I then extract the hash and compare it with the UrlReferrer hash to ensure that it hasn't been tampered with.
Since the custom token is always different, it's less obvious what's going on, as the token appears to be different on every page load. However, this is not secure, because with enough effort the URL hash can be uncovered. The exposure is somewhat limited because user identity is not the problem, so the worst case is that a given user would gain write access to another section of the site, but only as himself. My site is internal and I am auditing every move, so any tamper attempts would be caught quickly.
I am using both jQuery and Angular, so I append the token to all requests like this:
var __key = '@Html.GetHeaderKey()'; // helper method to get key from http header

// jQuery
$.ajaxSetup({
    beforeSend: function (xhr, settings) {
        xhr.setRequestHeader('X-Nothing-To-See-Here', __key); // totally inconspicuous
    }
});

// angular
app.config(['$httpProvider', function ($httpProvider) {
    $httpProvider.defaults.headers.common['X-Nothing-To-See-Here'] = __key;
}]);
update
The downside of this approach is that the custom tokens need to be persisted across a web farm or app restarts. Based on #Sarathy's idea, I am trying to sidestep this by leveraging the MVC anti-forgery framework: basically add/remove my "salt" and let the framework manage the actual token validation. That way it's a bit less for me to manage. I will post more details once I verify that this is working.
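For reference, a sketch of what that provider might end up looking like (untested, and the Referer header is still supplied by the client, so this raises the bar rather than giving a hard guarantee): embed a hash of the rendering page in the token's additional data and compare it against the referrer when the AJAX call arrives.
using System;
using System.Security.Cryptography;
using System.Text;
using System.Web;
using System.Web.Helpers;

public class UrlBoundAntiForgeryProvider : IAntiForgeryAdditionalDataProvider
{
    public string GetAdditionalData(HttpContextBase context)
    {
        // The token is generated while the page renders, so Request.Url is the page itself.
        return Hash(context.Request.Url.AbsolutePath);
    }

    public bool ValidateAdditionalData(HttpContextBase context, string additionalData)
    {
        // The AJAX call identifies its page via UrlReferrer; compare it to what was embedded.
        var referrer = context.Request.UrlReferrer;
        return referrer != null && Hash(referrer.AbsolutePath) == additionalData;
    }

    private static string Hash(string value)
    {
        using (var sha256 = SHA256.Create())
        {
            return Convert.ToBase64String(sha256.ComputeHash(Encoding.UTF8.GetBytes(value)));
        }
    }
}
With this, the earlier pageA/pageB test should fail validation, because the additional data baked into pageB's token no longer matches pageA's referrer.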
So this is going to be one of those "you're doing it wrong" answers that I don't like, and so I apologize up front. In any case, from the question and comments, I'm going to propose you approach the problem differently. Instead of thinking about where the request came from, think about what the request is trying to do. You need to determine whether the user can do that.
My guess as to why this is hard in your case is that I think you have made your API interface too generic. From your example API "api/common/update" I'm guessing you have a generic update API that can update anything, and you want to protect updating data X from a page that is only supposed to access data Y. If I'm off base there, then ignore me. :)
So my answer would be: don't do that. Change your API around so it starts with the data you want to work with: api/dataX, api/dataY. Then use user roles to protect those API methods appropriately. Behind the scenes you can still have a common update routine if you like that and it works for you, but keep the API interface more concrete.
If you really don't want to have an API for each table, and if it's appropriate for your situation, perhaps you can at least have one API for protected/admin tables and a separate API for the standard tables. A lot of "if"s, but maybe this would work for your situation.
In addition, if your user can update some dataX but not other dataX, then you will have to do some sort of checking against your data, ideally against some root object and whether your user is authorized to see/use that root object.
So to summarize, avoid an overly generic api interface. By being more concrete you can use the existing security tools to help you.
And good luck!
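To make that concrete, a sketch of the kind of shape it could take (controller, model and role names are all made up): one controller per resource, each protected with the roles allowed to touch it, with any shared update logic kept behind the API surface.
using System.Web.Http;

public class DataXModel { public string Name { get; set; } }
public class DataYModel { public string Name { get; set; } }

[Authorize(Roles = "DataXEditor")]
public class DataXController : ApiController
{
    public IHttpActionResult Put(int id, DataXModel model)
    {
        // A shared update routine can still live behind this; only the API surface stays concrete.
        return Ok();
    }
}

[Authorize(Roles = "DataYEditor")]
public class DataYController : ApiController
{
    public IHttpActionResult Put(int id, DataYModel model)
    {
        return Ok();
    }
}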

How to get the raw binary content in Grails controller

After many days of searching and many unsuccessful tries, I hope that the community knows a way to achieve my task:
I want to use Grails as a kind of proxy to my Solr backend. By this, I want to ensure that only authorized requests are handled by Solr. Grails checks the provided collection and the requested action and validates the request with predefined user-based rules. Therefore, I extended my Grails URL mapping to:
"/documents/$collection/$query" {
controller = "documents"
action = action = [GET: "proxy_get", POST: "proxy_post"]
}
The proxy_get method works fine even when the client is using SolrJ. All I have to do is forward the URL request to Solr and reply with the Solr response.
However, in the proxy_post method, I need to get the raw body data of the request in order to forward it to Solr. SolrJ uses javabin for that, and so far I have not been able to get at the raw binary request. The most promising approach was this:
DefaultHttpClient httpClient = new DefaultHttpClient();
HttpPost httpPost = new HttpPost(solrUrl);
InputStream requestStream = request.getInputStream();
ContentType contentType = ContentType.create(request.getContentType());
httpPost.setEntity(new ByteArrayEntity(IOUtils.toByteArray(requestStream), contentType));
httpPost.setHeader("Content-Type", request.getContentType())
HttpResponse solrResponse = httpClient.execute(httpPost);
However, the transferred content is empty in the case of javabin (e.g. when I add a document using SolrJ).
So my question is whether there is any way to get at the raw binary POST content so that I can forward the request to Solr.
Mathias
Try using Groovy's HTTPBuilder. It has a powerful low-level API while providing Groovyness.

How can I accept JSON requests from a Kendo UI data source in my MVC4 application?

I am using the Kendo UI grid in my MVC3 application and am quite pleased with it. I am using a Telerik-provided example, excerpted below, to format the data posted by the grid's DataSource, and all is good. However, I don't want to have to rely on code like this; I would like to get Kendo and MVC talking without the 'translator', i.e. this code:
parameterMap: function(data, operation) {
    var result = {};
    for (var i = 0; i < data.models.length; i++) {
        var model = data.models[i];
        for (var member in model) {
            result["models[" + i + "]." + member] = model[member];
        }
    }
    return result;
}
This function is a 'hook' that allows me to manipulate the data before Kendo sends it out via AJAX. By default, the Kendo DataSource sends form-encoded content, but not in quite the right shape for the MVC model binder. Without this, I can still use a FormCollection and do my own binding, but that is not on.
When I configure the DataSource to send JSON, and change my mapping function to look like this
parameterMap: function(data, operation) {
    return JSON.stringify(data);
}
I get the JSON being sent in the request, but now I have no idea how to get MVC to bind to it. Right now my only hope is to grab Request.Params[0] in the action method and deserialize the JSON myself.
I don't think I should have to write any code to get two HTTP endpoints to communicate properly using JSON in this day and age. What am I doing wrong, or what should I be looking at on my side, i.e. the receiver of the requests? I would really prefer to minimize my intervention on the client side to maybe just the stringify call.
No idea if this is still a problem or not since this is a rather old question, but I had a scenario where I was shipping JSON data up to my controller and I had to give it a "hint" about the name so that model binding would work correctly:
public JsonResult GetDatByIds([Bind(Prefix="idList[]")]List<Guid> idList)
In my scenario, kendo was serializing my data and giving it a name of idList[] in the form post rather than just idList. Once I gave it the model binding hint, it worked like a charm. This might be the same for your scenario.
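Along the same lines, a sketch of what the JSON route can look like (the model and controller names are made up): if the DataSource transport is configured with contentType: "application/json" and parameterMap returns JSON.stringify(data), MVC 3's built-in JsonValueProviderFactory feeds the JSON body to the model binder, so a parameter named after the top-level "models" property binds without any manual deserialization.
using System.Collections.Generic;
using System.Web.Mvc;

public class ProductViewModel
{
    public int ProductID { get; set; }
    public string ProductName { get; set; }
}

public class ProductsController : Controller
{
    [HttpPost]
    public ActionResult Update(List<ProductViewModel> models)
    {
        // Populated from the {"models":[...]} body that the grid's parameterMap sends.
        return Json(models);
    }
}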

Returning XML or JSON depending on the HTTP request

I am trying to develop a RESTful Web service as an ASP.NET MVC 3 Web Application.
(I know, I should use the right tool for the job, which in this case means I should use WCF. But WCF has too many abstraction layers and is thus too big to fit inside my head. It would be cool for a research project, but I am trying to do my job. Besides I previously tried it, and now I am of the opinion that, despite its big promises, WCF sucks big time.)
Anyway, what I want to do is simple: I want my Web service to return its results as either XML or JSON, depending on the type specified in the HTTP request (by default, JSON). How do I do that?
A JSON action result (JsonResult) already exists. MvcContrib has an XML action result you can return, or you could just use Content(xmlContent, "text/xml") as your action result.
You can query the Accept header to determine which action result you would like to return. As long as your action method's declared return type is ActionResult, it doesn't matter which concrete result you actually return.
That said, once you prove the overall concept, there are better ways to structure what you're trying to do.
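As an illustration of the Accept-header approach (my own sketch; since MVC has no built-in XML result, this one just serializes with XmlSerializer and returns it via Content):
using System.IO;
using System.Linq;
using System.Web.Mvc;
using System.Xml.Serialization;

public class ReportsController : Controller
{
    public ActionResult Results()
    {
        var data = GetResults(); // placeholder for your data access

        var acceptsXml = Request.AcceptTypes != null &&
                         Request.AcceptTypes.Contains("application/xml");
        if (acceptsXml)
        {
            var serializer = new XmlSerializer(data.GetType());
            using (var writer = new StringWriter())
            {
                serializer.Serialize(writer, data);
                return Content(writer.ToString(), "application/xml");
            }
        }

        // Default to JSON, per the question.
        return Json(data, JsonRequestBehavior.AllowGet);
    }

    private object GetResults()
    {
        return new[] { "value1", "value2" };
    }
}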
A quick solution is to create an optional parameter on your controller method and return the result in the appropriate format.
public ActionResult GetFormattedResults(string format)
{
    var data = GetResults();
    ActionResult result = new JsonResult { Data = data, JsonRequestBehavior = JsonRequestBehavior.AllowGet };
    switch ((format ?? "").ToLower())
    {
        case "xml":
            result = new XmlResult(data); // this class doesn't exist in MVC3, you will need to roll your own
            break;
        case "html":
            result = View(data);
            break;
    }
    return result;
}
You could also wrap the formatting functionality into an ActionFilter so you can reuse the functionality across controller methods.
