I'm building a Grails app, and the default URL mapping provided by Grails is /$controller/$action?/$id
I'm concerned about the security implications of this catch-all mapping. On one hand, it's a pain to explicitly list out all of the mappings, but on the other hand it seems like there could be potential security issues, such as forgetting to secure certain mappings.
By explicitly specifying the mappings, we have much tighter control over the URLs. It also lets us make more user-friendly URLs (e.g. maybe we could pluralize things, like having a /people/john URL instead of /person/john).
Are there other concerns with leaving the default mappings? Is there a possibility that we could unintentionally expose the fact that a certain admin page is valid? (I'll have to look more into Spring Security to see how to redirect to a 404 when a non-admin user tries to access admin pages.)
I think you answered your own question. The default URL mapping "/$controller/$action?/$id" is easy to use, but it can become a security hole if controller security is implemented poorly.
Probably the best solution is to place security checks at the domain level as well, so that even if a user reaches an unauthorized controller by mistake, an exception will prevent them from doing anything with the domain objects.
My MVC site uses the AntiForgeryToken code, which works well in Chrome and Firefox. However, in IE10 I have noticed that it gives me the error:
required anti-forgery cookie "__RequestVerificationToken" is not present
It's definitely a cookie-related issue, as it works fine when I allow all cookies (i.e. the lowest privacy settings).
However, I have also noticed that when I go to GoDaddy and take off the domain forwarding masking (but leave the domain forwarding in), it also works fine.
Is there a way to get this working with the masking? (Masking is an option that forwards a domain while hiding the underlying host name. I am doing this because I am using Azure websites and would rather have my users see my actual domain name, not xxx.azurewebsites.net.)
Thanks for any help here!
Domain forwarding masking works by hosting your real URL inside a frame. In that scenario, your real website content is coming from a different domain than the main page's domain. As such, any cookies your site tries to set will be interpreted as 'third party cookies' and are going to be blocked by any browser set to block those kinds of cookies (including, apparently, IE10 with its default settings).
Frankly, I think you are fighting a losing battle here. These kinds of cookies are benign in your use case, but they look exactly like the kinds of cookies advertisers are using to track people across websites, and so I would expect browsers to be even more hostile to them as time goes by.
I think your options in this case are to not need cookies (e.g. don't use the anti-CSRF features provided by ASP.NET MVC), or to move your website to a host that allows you to serve your content directly at the real URL (so that you don't have to use the GoDaddy masking technique). The latter is probably the best long-term solution.
I am attempting to construct a web app in which the back end is a complete RESTful web service. I.e. the models (business logic) would be completely accessible via HTTP. For example:
GET /api/users/
GET /api/users/1
POST /api/users
PUT /api/users/1
DELETE /api/users/1
What's the proper way to provide more methods that aren't CRUD (verbs/actions)? Is this considered more the domain of an RPC-style API? How would one properly design the RPC API to run on top of the RESTful API?
For example, how would I elegantly implement a forgot-password method for a user?
POST (?) /api/users/1/forgot
The application (Controllers/Views) would then use HTTPS requests (HMVC-like) to access the models and methods. What would be best for authentication? OAuth, or Basic Auth over HTTPS?
Although this is "best practice" for scalability later on, am I over-engineering this task? Is it best to just follow the typical MVC model and provide a very basic API?
This question has been mostly inspired by ASP.NET's MVC 4 (WebAPI) and a NodeJS module https://github.com/marak/webservice.js
Thanks in advance
I recently started learning REST myself, and I think you're doing the right thing by considering it when developing a new web service.
You are correct in your assumptions about the custom verbs. REST acknowledges that some actions need to be handled in a different way, and custom verbs don't violate the requirements. You should use POST when communicating with the server, but the verbs are normally written in the imperative. Instead of forgot, I'd probably use remind or something similar. I.e., you should give instructions on what to do, rather than describe what happened without clearly indicating what you expect as a result.
Furthermore, the preferred way to construct the service is to include api into the domain name, and drop it from the path. I'd write your particular example like this:
POST /users/1/remind HTTP/1.1
Host: api.myservice.example.com
Session handling in REST is a bit tricky. The cleanest way of doing it would probably be to authenticate with username and password on every single request, using Basic access authentication. However, I believe that it's rarely done like that. You should read this question (and its accepted answer): OAuth's tokens and sessions in REST
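For illustration, with Basic access authentication every request would simply carry an Authorization header (the credentials below are the classic "Aladdin:open sesame" example from the HTTP spec, not real ones):
GET /users/1 HTTP/1.1
Host: api.myservice.example.com
Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==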
EDIT: I'd also drop the trailing forward slash in the GET request in your example. If the service is truly RESTful, then the resource is not supposed to be accessible from both /users/ and /users. A particular resource should have one and only one URL pointing to it. A URL with a trailing slash is actually distinct from one without. REST promotes dropping it, and a RESTful web service should not accept both (which in the case of GET means responding with 200 OK), although it may redirect from one to the other. Otherwise, it might lead to confusion about the proper URL, duplicate caching, weeping and gnashing of teeth. :)
EDIT 2: In RESTful Web Services by Richardson & Ruby you're discouraged from putting the new verb in the path. Instead, you could append something like ?_method=remind. It's up to you which one you choose, but please remember that you're not supposed to handle these requests with GET, regardless of what you choose. A GET must not change the resource, and should not cause side effects if the user browses back and forth in the history. Otherwise, you might end up resending the password several times. Use POST instead.
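If you do build this on ASP.NET Web API (MVC 4), which the question mentions, a minimal sketch of the remind action could look like the following. The controller name, the route registration and the 202 response are my assumptions for illustration, not anything the framework prescribes:

using System.Net;
using System.Net.Http;
using System.Web.Http;

public class UsersController : ApiController
{
    // Handles POST /users/1/remind via the route registered below.
    [HttpPost]
    public HttpResponseMessage Remind(int id)
    {
        // Look the user up and queue the reminder e-mail here (omitted).
        // 202 Accepted acknowledges the instruction without revealing
        // whether the account actually exists.
        return Request.CreateResponse(HttpStatusCode.Accepted);
    }
}

// Route registration, e.g. in WebApiConfig.Register(HttpConfiguration config):
// config.Routes.MapHttpRoute(
//     name: "UserRemind",
//     routeTemplate: "users/{id}/remind",
//     defaults: new { controller = "users", action = "remind" });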
We are designing an application that will use Rails and WordPress and have them interact with each other. We would like to have a universal logout where you could log out from either application and it would delete the cookies from the other app. They will share the same host and top-level domain. Is there a way to do this?
Access to a cookie depends on the domain of the request -- and potentially the domain specified in the cookie itself. So assuming the domains match (e.g. both the blog and the Rails app are served from www.example.com), either should have access to a cookie set by the other.
If this is not the case (e.g. blog.example.com and www.example.com), you'll need to make sure that when the cookie is set in either place, it's set for the entire domain (e.g. .example.com). But this alone doesn't help: while Rails can delete WP's cookie, and vice versa, the method for creating (and using) them needs to be mutually understood.
There's a twist here, since this is a session cookie: the cookie (which either app should have access to) sets a value that is used and interpreted on the server side, where sessions are managed. WordPress and Rails both use different methods and look for different cookies.
A solution (idea) would be to have one or the other subsystem catch incoming requests (most likely WP, probably through some .htaccess RewriteRule, assuming you're using Apache) and create an intermediate cookie that the other app could check, providing sufficient proof that the user has logged in correctly. WP's PHP for this is pretty good and easily extended -- you just need to create some token that's a shared secret between the two apps (one of the values in wp-config.php, such as LOGGED_IN_KEY, might be a good option).
Maybe a solution would be to take the publicly available value from the WP cookie for username, and append the shared secret value and (in both systems) create an MD5 hash to store in a cookie. In this case, Rails' authentication would subordinate to WP's, so you would need to make sure Rails knew to delegate things like forgotten password, changed password, etc, to WP's mechanisms.
Obviously I am thinking aloud, but maybe this is a path to consider.
In any case, this is preferable to having both systems know how to trust the other's authentication.
Fiddling with cookie deletion appears to be dirty and error prone.
You might rather want to have a look at auth providers and the corresponding plugins, such as:
OAuth (WP - Rails; maybe make either side an OAuth provider)
CAS (WP - Rails)
LDAP (WP - Rails)
...
Maybe it's an option to switch from WP to one of Rails' CMSes, like:
Refinery CMS
Typo
...
I'm trying to implement a small ASP.NET MVC site which interacts with another site. In short, sessions are managed between the main site and satellite sites through tokens in the URL. I can specify the url format but I can't remove the requirement that a session token is submitted as part of the URL.
I'm trying to work out how to set up the routing and am in two minds here. I can't decide which would be best, or if there is perhaps a better way to do it. The main options I'm considering:
routes.MapRoute("Main", "{controller}/{action}/{id}/{token}");
Gives URLs like http://mysite.com/Products/Detail/5/5f1c8bbf-d4f3-41f5-ac5f-48f5644a6d0f
Pro: mostly keeps with the existing MVC convention for site navigation
Con: Adds complication to routing when supporting defaults for ID and Action.
routes.MapRoute("Main", "{token}/{controller}/{action}/{id}/");
Gives URLs like http://mysite.com/5f1c8bbf-d4f3-41f5-ac5f-48f5644a6d0f/Products/Detail/5
Pro: simplifies routing - can still apply action/id defaults as per standard MVC convention
Con: very "un-web-like" URLs. Requires regex to validate that the first variable is a valid GUID / token before moving on to next route in the table.
The other possibility that comes to mind is passing the session like:
http://mysite.com/Home/Index?session=5f1c8bbf-d4f3-41f5-ac5f-48f5644a6d0f
The related problem with that is that I have a base class derived from Controller which all other secure pages go through. The SecureController class overrides Execute() and checks the validity of the token taken from the URL. With both approaches (GET and routing) it seems easy enough to get the token within the controller's Execute() method, but the GET approach feels kind of tacky, whereas the routing approach feels like it's, for lack of a better explanation, breaking the elegance of the MVC routing design.
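To make the base-class check concrete, here is roughly what I mean (simplified; the real class overrides Execute(), but I've sketched it with OnActionExecuting to keep it short, and IsValidToken is a placeholder):

using System.Web.Mvc;

public abstract class SecureController : Controller
{
    protected override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // Try the route-based token first, then the query-string variant.
        var token = filterContext.RouteData.Values["token"] as string
                    ?? filterContext.HttpContext.Request.QueryString["session"];

        if (string.IsNullOrEmpty(token) || !IsValidToken(token))
        {
            filterContext.Result = new HttpUnauthorizedResult();
            return;
        }

        base.OnActionExecuting(filterContext);
    }

    protected abstract bool IsValidToken(string token);
}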
Has anyone else out there taken on a similar problem and had any particular successes or difficulties to share?
It seems that no matter what you do, your URLs will be pretty messy with that token.
I have had to handle this kind of single sign-on functionality in an ASP.NET MVC app as well, but I went for a slightly different and much simpler approach: I created a GatewayController with a SignOn action that took a session token and a URL as parameters.
Then this SignOn action would just check the validity of the session token and then sign the user on to my site, redirecting to the supplied URL. From then on, the session token is not needed anymore, as authentication from then on would be cookie-based.
It might not be entirely applicable in your case, depending on your requirements. If you are required to continuously check the validity of the session token somewhere, you could however just do the same thing as I did and then store the session token in the user's session data, allowing you to check the token in each request.
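For concreteness, a stripped-down version of that gateway might look like the code below. ValidateToken and the Forms Authentication sign-in are placeholders for whatever your main site actually requires:

using System;
using System.Web.Mvc;
using System.Web.Security;

public class GatewayController : Controller
{
    // e.g. /Gateway/SignOn?token=<session token>&returnUrl=/Products/Detail/5
    public ActionResult SignOn(string token, string returnUrl)
    {
        var username = ValidateToken(token);   // null if the token is invalid
        if (username == null)
            return new HttpUnauthorizedResult();

        // From here on, authentication is cookie-based.
        FormsAuthentication.SetAuthCookie(username, false);

        return Redirect(Url.IsLocalUrl(returnUrl) ? returnUrl : "~/");
    }

    private string ValidateToken(string token)
    {
        // Verify the token against the main site or shared session store.
        // Placeholder only; the real check depends on your setup.
        throw new NotImplementedException();
    }
}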
We need to default the URL to a single canonical name: if the user arrives with www, redirect to the no-prefix version, or vice versa. So the decision to be made is whether to stick with www or with no prefix.
With no prefix, the cookie is set for all subdomains. What are the other downsides? Or benefits?
Basically we need this for OpenID, as OpenID will make users look different depending on whether they came from www or from the no-prefix URL.
As our site is new, we can go with either one. Also, how the domain name looks is not much of a concern.
You probably want to redirect (with an HTTP 301 - Permanent Redirect) one to the other anyway, since maintaining consistent URLs is much easier that way. So whichever you decide, just make sure the actual authentication is done after the redirect, and users looking different won't be an issue.
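The question doesn't say what stack the site runs on, but as an illustration, in an ASP.NET app the permanent redirect could be done in Global.asax along these lines (this redirects www to the bare domain; flip the check if you prefer www):

using System;
using System.Web;

public class MvcApplication : HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        var url = Request.Url;
        if (url.Host.StartsWith("www.", StringComparison.OrdinalIgnoreCase))
        {
            // Strip the www prefix and issue a 301 so clients and search
            // engines learn the canonical host.
            var canonical = new UriBuilder(url) { Host = url.Host.Substring(4) };
            Response.RedirectPermanent(canonical.Uri.ToString());
        }
    }
}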
That said, whether you want www or not depends entirely on how other things in your application work. You mention that cookies for domain.com will be saved for all subdomains - is this something you want? Are you ever going to need to differentiate (for example, by allowing users to set up their own authentication systems for subdomains, as a shared hosting service might do)?
If none of the differences you find between including and excluding www matter to your application, I'd go for not using www. The main reason for this is my picture of current trends on the internet - more and more applications (SO is an example of this) tend to leave the www out, both when linking to their own sites, and in marketing of different kinds.
However, the main point is to make both work. You don't want your site to break because the user did(n't) type www at the beginning of the URL.
By not using the www subdomain, you can suffer a performance hit when delivering static content, as noted here: http://developer.yahoo.com/performance/rules.html#cookie_free. As I understand it, if you use http://example.com/ and http://static.example.com for static content, any cookies you set on the main domain will be passed with requests to your static subdomain.
This can be avoided quite easily, by buying a distinct domain for static content. However, this can certainly be dealt with by using a www subdomain.
Then again, this is a very minor con, and really only comes into play when you're dealing with a high-demand site. (For example, Digg uses http://digg.com and http://*.diggstatic.com).
Ultimately, I would say that this is such a minor problem that it can probably be dealt with if performance starts to suffer. Don't optimize prematurely, and all that...
And, as #Tomas Lycken points out, make sure you account for www even if you don't use the subdomain.