I wrote a Cypress plugin that helps simulate Auth0 for testing, without having to go through the whole redirect to and from the login pages.
It uses silent authentication (getTokenSilently) to avoid the redirects:
return new Cypress.Promise((resolve, reject) => {
  let person = atom.slice(Cypress.spec.name, 'person').get();
  assert(!!person && typeof person.email !== 'undefined', `no scenario in login`);

  auth0Client.getTokenSilently({ ignoreCache: true, currentUser: person.email, test: Cypress.currentTest.title })
    .then((token) => {
      log(`successfully logged in with token ${JSON.stringify(token)}`);
      resolve(token);
    })
    .catch((e) => {
      console.error(e);
      reject(e);
    });
});
The Auth0 docs appear to say that silent authentication is not the preferred approach for single-page applications.
This post, Refresh token rotation, does not suggest an alternative to getTokenSilently that avoids third-party cookies.
If I look at the source code of @auth0/auth0-spa-js, getTokenSilently still appears to use cookies, even if you have
scope: "offline_access",
useRefreshTokens: true
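For context, the client configuration looks roughly like this (the domain and client_id values are placeholders):
// Sketch of the client configuration; domain and client_id are placeholders.
import createAuth0Client from '@auth0/auth0-spa-js';

async function initAuth0() {
  return createAuth0Client({
    domain: 'my-tenant.auth0.com',
    client_id: 'MY_CLIENT_ID',
    useRefreshTokens: true,
    scope: 'openid profile email offline_access'
  });
}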
I am confused about what the alternative to getTokenSilently is that does not use third-party cookies.
getTokenSilently also does not currently work in Safari for this reason.
SECURED SPA TESTS
The simplest all-round option might be to do a full redirect at the start of your test suite, as in this Curity code example, which uses Cypress tests. This has the advantage that the SPA can call real APIs afterwards, and it will work in all browsers, including Safari. It is probably the option I would use.
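As a rough sketch (not the Curity example itself), a full-redirect login at the start of a suite can look something like this, assuming a plain username/password form on the Authorization Server and a Cypress version with cy.origin; all selectors, URLs and credentials are placeholders:
// Rough sketch only: selectors, URLs and credentials are placeholders.
describe('secured SPA tests', () => {
  before(() => {
    cy.visit('/');
    cy.get('#login-button').click(); // triggers the redirect to the Authorization Server

    cy.origin('https://idsvr.example.com', () => {
      cy.get('input[name="username"]').type('test-user');
      cy.get('input[name="password"]').type('test-password');
      cy.get('form').submit(); // redirects back to the SPA with a real session
    });

    cy.url().should('include', '/'); // back on the SPA, now authenticated
  });

  it('can call real APIs after the redirect', () => {
    cy.get('#get-data-button').click();
    cy.contains('expected data');
  });
});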
GETTING MOCK TOKENS IN CLIENTS
I like your mock plugin idea, since at times it is useful to be able to bypass some of the Authorization Server behaviour if it gets in the way of testing.
It feels like infrastructure that you cannot control is getting in your way, though. Perhaps your plugin should not need to depend on client-side SDKs. Maybe you could just have a mock getTokenSilently that issues its own tokens using a JWT library?
Token issuing code
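A minimal sketch of that idea, assuming the jsonwebtoken npm package; the signing key, claims and URLs below are placeholders:
// Test-only sketch: issue our own tokens instead of calling Auth0.
// Assumes the 'jsonwebtoken' package; key, claims and URLs are placeholders.
const jwt = require('jsonwebtoken');

const TEST_SIGNING_KEY = 'test-only-secret-never-use-in-production';

function mockGetTokenSilently({ currentUser }) {
  const accessToken = jwt.sign(
    { sub: currentUser, scope: 'openid profile email' },
    TEST_SIGNING_KEY,
    {
      algorithm: 'HS256',
      expiresIn: '5m',
      audience: 'https://api.example.com',
      issuer: 'https://test-issuer.example.com'
    }
  );
  return Promise.resolve(accessToken);
}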
It depends though on what you want to do next, eg do you want to call real APIs? If so then the Secured SPA Tests option might be simplest.
USING MOCK TOKENS IN APIs
One useful technique to be aware of is the Wiremock tool, which can register canned responses based on tokens you've issued in your own test code. Here are some API tests of mine that use this approach.
In my case the API requires user-level tokens and a code flow by default. These tests enable a more productive API development setup, where the API code validates mock tokens in the standard way. It may not be such a good fit for SPA tests though.
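As an illustration (not the exact setup from those tests), a stub can be registered against a running Wiremock instance via its admin API, so that the API under test downloads a test JWKS and validates the mock tokens in the standard way; the URLs and key material below are placeholders:
// Illustration only: register a canned JWKS response with a running Wiremock
// instance via its admin API, so the API under test can validate mock tokens
// signed with our own test key pair. URLs and key material are placeholders.
async function registerJwksStub() {
  const testJwks = { keys: [ /* public JWK for the test signing key goes here */ ] };

  await fetch('http://localhost:8080/__admin/mappings', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      request: { method: 'GET', url: '/.well-known/jwks.json' },
      response: {
        status: 200,
        headers: { 'Content-Type': 'application/json' },
        jsonBody: testJwks
      }
    })
  });
}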
SILENT REDIRECTS
In a real SPA, getTokenSilently relies on the Authorization Server's (third-party) SSO cookie. A code flow is run on a hidden iframe to get new tokens, using OpenID Connect's prompt=none request parameter. This does not work in the Safari browser with default settings, since Safari drops the SSO cookie as part of an initiative to prevent third parties from tracking users.
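For illustration, the hidden-iframe request is just a normal authorization redirect with prompt=none added (all values below are placeholders; the SDK builds this URL for you):
// Illustration only: the hidden iframe issues a standard authorization request
// with prompt=none, relying on the Authorization Server's SSO cookie.
const authorizeUrl =
  'https://my-tenant.auth0.com/authorize' +
  '?client_id=MY_CLIENT_ID' +
  '&redirect_uri=' + encodeURIComponent('https://example.com/callback') +
  '&response_type=code' +
  '&scope=' + encodeURIComponent('openid profile email') +
  '&code_challenge=PKCE_CHALLENGE_VALUE' +
  '&code_challenge_method=S256' +
  '&prompt=none'; // fails with error=login_required when the SSO cookie is missing or dropped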
BACKEND FOR FRONTEND
To solve the above problem in a real SPA, the current best practice is to use first-party cookies instead of relying on the SSO session cookie. An SPA running at https://example.com can use an API at https://api.example.com to enable this. More on this theme in the Token Handler Pattern.
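From the SPA's point of view this ends up looking something like the sketch below (the API URL is a placeholder):
// Sketch only: with a backend for frontend at api.example.com, the SPA sends
// requests with first-party cookies instead of attaching tokens itself.
async function getOrders() {
  const response = await fetch('https://api.example.com/orders', {
    credentials: 'include', // sends the first-party, HTTP-only auth cookie
    headers: { Accept: 'application/json' }
  });
  return response.json();
}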
Related
From my understanding, the advantage that Authorization Code Flow has over Implicit Flow is that with ACF, the access token gets sent to a server side app rather than to a browser app. This makes the access token much harder to steal, because the access token never reaches the browser (and is thus not susceptible to a Cross Site Scripting attack).
I would have thought that PKCE would try to solve this issue. But it does not. The access token is still sent to the browser. Hence it can still be stolen.
Is there something I am missing here?
Many thanks.
Authorization Code Flow (PKCE) is considered to offer better security than the previous solution of Implicit Flow:
- With the Implicit Flow, the access token was returned directly in a browser URL, where it could potentially be viewed in logs or the browser history
- With the Authorization Code Flow this is handled better, with reduced scope for exploits:
  - Phase 1: a browser redirect that returns a one-time-use 'authorization code'
  - Phase 2: the code is then swapped for tokens via a direct Ajax request
PKCE also provides protection against a malicious party intercepting the authorization code from the browser response and being able to swap it for tokens.
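A rough sketch of the two phases for a public client, assuming a generic OpenID Connect provider; all URLs, codes and identifiers below are placeholders:
// Rough sketch of the two phases; all URLs, codes and identifiers are placeholders.

// Phase 1: the browser is redirected to the Authorization Server, which eventually
// returns a one-time-use authorization code to the redirect_uri, e.g.
//   https://example.com/callback?code=SplxlOBeZQQYbYS6WxSbIA&state=...

// Phase 2: the code is swapped for tokens via a direct Ajax request, sending the
// PKCE code_verifier that matches the code_challenge from phase 1.
async function redeemAuthorizationCode(code, codeVerifier) {
  const response = await fetch('https://idsvr.example.com/oauth/token', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      grant_type: 'authorization_code',
      client_id: 'MY_CLIENT_ID',
      redirect_uri: 'https://example.com/callback',
      code: code,
      code_verifier: codeVerifier
    })
  });
  return response.json(); // contains access_token, id_token, etc.
}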
Both are client-side flows, and they exist so that public clients can use access tokens. Authorization Code Flow (PKCE) is the standard flow for all of these:
- Single Page Apps
- Mobile Apps
- Desktop Apps
In the SPA case the token should not be easily stealable, especially if stored only in memory as recommended. However, there are more concerns when using tokens in a browser, since it is a dangerous place, and you need to follow SPA Best Practices.
In the browser case there are other options of course, such as routing requests via a web back end or reverse proxy in order to keep tokens out of the browser, and dealing with auth cookies in addition to using tokens.
I think you are right. The tokens are not in an HTTP-only cookie, and are therefore accessible to a malicious script (injected via an XSS attack). The attacking script can read all tokens (after a successful, normal auth flow) from local storage (or wherever they were put) and use them.
I think CORS protections should prevent the malicious script from sending the tokens out to an attacker directly, which would be a devastating failure, as this potentially includes a long-lived refresh token. Therefore, I suspect configuring CORS correctly is super critical when using these local-client based flows (by local client I mean a browser, mobile app, or native PC app).
In short, these local-client flows can be made secure, but if there is an XSS attack, or badly configured CORS, then those attacks can become extremely dangerous - because the refresh token could potentially be sent to the attacker for them to use at will in their own good time, which is about as bad as an attack can get.
I have a client-side app that uses Auth0 for accessing the different APIs on the server. But now I want to add another app, a single-page app (I'm going to use VueJS), and ideally this app would open without a user having to sign in. It's like a demo with reduced functionality; I basically just want to check that the user is not a robot, so I don't expose my API in those cases.
My ideas so far:
- Somehow use reCAPTCHA and Auth0 together.
- Then have a new server that validates that calls are made only to allowed endpoints (this is not the focus of the question), so that even if the auth is somehow compromised, it doesn't leave the real server open to all types of calls.
- Pass the call to the server along with the bearer token, just as I do with my other, older client app.
Is this viable? Right now I'm forcing the user to validate; this is more a UX (user experience) concern, but I'd like a way to avoid that. I'm aware that with Auth0 alone I can't do this (see this post from Auth0), so I was expecting a mix of what I mentioned.
EDIT:
I'm sticking to validating in both cases, but I'm still interested in opinions on this for future reference.
In the end, given the very concept of how Auth0 works, that idea is not possible, so my approach was the following:
Give a temporarily authenticated (Auth0) visitor a token with a restricted access level, then pass the request to a new middle server. The idea is to encrypt the real IDs, so the frontend thinks it's requesting project A123456etc when it will actually be decrypted in the middle server to project 456y-etc. Given a whitelist, the middle server decides whether to pass the request, along with the token, to the final server; the final server has measures to reduce XSS and DDoS threats.
Anyway, if there's a better solution I will change the accepted answer.
You could do a mix: use reCAPTCHA for the open public, then on the server side analyse the incoming user request (you can already try to derive a human-made digital fingerprint, just to differentiate it from a robot-generated one), and have the server (more of a middle server) make the call to your API (and this server has a limited attack surface).
What we normally do in these situations (if I understood your issue correctly) is to create two different endpoints: one working with the token, and another receiving the reCAPTCHA token and validating it with Google's servers.
Both endpoints end up calling the same code, but this way you can add extra functionality in a layer on the 'public' endpoint to ensure that you are asking only for public features (if that cannot be guaranteed just by modifying the interface).
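A sketch of the 'public' endpoint, assuming an Express-style handler and Node 18+ for the built-in fetch; RECAPTCHA_SECRET and getPublicProjects are placeholder names for your own secret and shared code:
// Sketch only: the 'public' endpoint validates the reCAPTCHA token with Google,
// then falls through to the same code the authenticated endpoint uses.
const express = require('express');
const app = express();
app.use(express.json());

// Placeholder for the shared implementation both endpoints call.
async function getPublicProjects() { return []; }

app.post('/api/public/projects', async (req, res) => {
  const verification = await fetch('https://www.google.com/recaptcha/api/siteverify', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      secret: process.env.RECAPTCHA_SECRET, // placeholder for your reCAPTCHA secret
      response: req.body.recaptchaToken     // token sent by the SPA
    })
  });
  const { success } = await verification.json();
  if (!success) {
    return res.status(403).json({ error: 'captcha validation failed' });
  }
  // Same underlying code as the authenticated endpoint, restricted to public features.
  res.json(await getPublicProjects());
});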
I am building an API using the Scala version of the play framework. Some of the endpoints will contain confidential data but I am not sure how exactly to secure this.
SecureSocial (http://securesocial.ws/guide/configuration.html) is a library that I've been looking at, but it seems oriented around websites and logging in with OAuth providers.
In this case it seems like I need to be an OAuth provider. Or is it possible that I can allow users to log in with a provider, say Twitter? But then how would that work? The documentation around OAuth seems to be incredibly awful.
There is no built-in way to manage tokens. I would recommend building a token system that distributes and manages access tokens. You can have one per user, or use a different scheme.
Then for each endpoint, you would have a wrapped action to secure the API.
import play.api.mvc._

case class SecuredAPIRequest(request: Request[AnyContent]) extends WrappedRequest(request)

trait SecuredController {
  import play.api.mvc.Results._

  // NOTE: validKey would be a function that checks the key against our DB, ensuring that it is valid.
  def validKey(key: String): Boolean

  // This takes an action for a request, and only runs it if the request body contains a valid apiKey.
  def SecuredAPIAction(f: SecuredAPIRequest => Result) = Action { request =>
    request.body.asJson.map { jsValue =>
      (jsValue \ "apiKey").asOpt[String] match {
        case Some(key) if validKey(key) => f(SecuredAPIRequest(request)) // We are clear to go, execute our function.
        case _                          => Forbidden                     // Missing or invalid key
      }
    }.getOrElse(Forbidden)
  }
}
What SecureSocial does is route security requests to the authentication service (Twitter, Facebook, etc.) and use their token for security. This would not work for an API, because it would be impossible to redirect users for auth.
I have to implement a web site (MVC4/Single Page Application + Knockout + Web.API) and I've been reading tons of articles and forums, but I still can't figure out some points about security/authentication and the way forward when securing the login page and the Web.API.
The site will run entirely under SSL. Once the user logs on the first time, he/she will get an email with a link to confirm the registration process. The password and a “salt” value will be stored encrypted in the database, with no possibility of getting the password decrypted back. The API will be used just for this application.
I have some questions that I need to answer before going any further:
Which method will be best for my application in terms of security: Basic/SimpleMembership? Any other possibilities?
Is the Principal/IPrincipal object to be used just with Basic Authentication?
As far as I know, if I use SimpleMembership, because of the use of cookies, is this not breaking the RESTful paradigm? So if I build a REST Web.API, shouldn't I avoid using SimpleMembership?
I was checking ThinkTecture.IdentityModel, with tokens. Is this a type of authentication like Basic, or Forms, or OAuth, or is it something that can be added to the other authentication types?
Thank you.
Most likely this question will be closed as too localized. Even then, I will put in a few pointers. This is not an answer, but the comments section would be too small for this.
What method you use and how you authenticate is totally up to your subsystem. There is no one way that will work best for everyone. An SPA is no different than any other application: you will still be giving access to certain resources based on authentication. That could be APIs with a custom Authorization attribute, it could be a header value, token based, who knows! Whatever you think is best.
I suggest you read more on this to understand how this works.
Use of cookies in no way breaks REST. You will find a ton of articles on this specific item. Cookies are passed with your request, just the way you pass any specific information that the server needs in order to give you data. If sending cookies broke REST, then sending parameters to your API would break REST too!
Now, a very common approach (and by no means the ONE AND ALL approach) is the use of a token-based system for an SPA. The reasons are many, but the easiest to explain would be that your services (Web API or whatever) could be hosted separately, with your client working as a CORS client. In that case, you authenticate in whatever form you choose, create a secure token and send it back to the client, and every resource that needs an authenticated user is checked against the token. The token is sent as part of your header with every request. No token results in a simple 401 (Unauthorized), and an invalid token could result in a 403 (Forbidden).
No one says an SPA needs to be all static HTML with data binding; it could just as well be your MVC site loading partials (something I have done in the past). As far as working with just HTML and JS (Durandal specifically), there are ways to secure even the client app. Ultimately, lock down the data from the server and route the client to the login screen the moment you receive a 401/403.
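Purely as a client-side sketch of that approach (the storage key, URLs and routing below are placeholders):
// Sketch only: attach the token to every request and route to the login screen
// when the server answers 401/403. Storage key and URLs are placeholders.
async function callApi(url) {
  const token = sessionStorage.getItem('authToken');
  const response = await fetch(url, {
    headers: token ? { Authorization: 'Bearer ' + token } : {}
  });
  if (response.status === 401 || response.status === 403) {
    window.location.assign('/login'); // lock down the data and send the user to log in
    return null;
  }
  return response.json();
}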
If your concern is more in terms of XSS or request forgery, there are ways to prevent that even with just HTML and JS (though not as easy as dropping an anti-forgery token with MVC).
My two cents.
If you do "direct" authentication - meaning you can validate the passwords directly - you can use Basic Authentication.
I wrote about it here:
http://leastprivilege.com/2013/04/22/web-api-security-basic-authentication-with-thinktecture-identitymodel-authenticationhandler/
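For illustration, a Basic Authentication request from a JavaScript client looks like this (the credentials are placeholders):
// Illustration only: Basic Authentication sends base64-encoded credentials
// in the Authorization header on every request. Credentials are placeholders.
async function callApiWithBasicAuth() {
  const credentials = btoa('alice:correct-horse-battery-staple');
  const response = await fetch('/api/orders', {
    headers: { Authorization: 'Basic ' + credentials }
  });
  return response.json();
}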
In addition you can consider using session tokens to get rid of the password on the client:
http://leastprivilege.com/2012/06/19/session-token-support-for-asp-net-web-api/
Why almost all websites out there are using cookies instead of basic auth?
It can't be only that the user/pass window is ugly, and neither of them is more secure. They are both insecure (without HTTPS).
To log out of a basic auth login, the browser often needs to be quit entirely. This means there is no way for the server to log out the user.
I believe basic auth also has more overhead (assuming your cookie size isn't massive), but I might be wrong about that.
HTTP basic auth also sends the username and password with every request, making it potentially less secure because there is more opportunity for interception.
You have more control over cookies. You can encrypt them so that they are secure even without HTTPS. Basic auth is always insecure over HTTP. Also, cookies don't contain the password on each request. And, yes, what can I say, users like AJAX login forms and nice animated effects when logging in, which unfortunately cannot be achieved with basic auth.
With cookies you have complete control over when to authenticate the user; it's not as soon as there's a request.
Plus you don't have to authenticate for pictures as well.
Another thing is that you don't have to rely on a sysadmin for auth.
You also have a choice regarding the user repository when using sessions.
There are other things. As you said, both are more or less equally secure, so why not opt for flexibility? To showcase sites to clients we often use server auth as it is easy and a global solution; for forms within apps, we use cookies.