There is something I would like to get clarified about the following. The "OAuth 2.0 for Native Apps" spec says,
However, as the implicit flow cannot be protected by PKCE [RFC7636]
(which is required in Section 8.1), the use of the Implicit Flow with
native apps is NOT RECOMMENDED.
This reasoning for why we shouldn't use the implicit grant type has me confused.
As I understand it, PKCE is required for the authorization code grant because it takes two separate calls to get an access token, and we need to make sure both of these requests are made by the same app.
Please correct me if I'm wrong.
And since the implicit grant type doesn't need two such calls to get a token, I don't think we really need PKCE there. Again, please correct me if I'm wrong.
That would mean "the implicit flow does not need to be protected by PKCE". So why has "the implicit flow cannot be protected by PKCE" become a reason above to avoid using it for native apps?
As I understand it, PKCE is required for the authorization code grant because it takes two separate calls to get an access token, and we need to make sure both of these requests are made by the same app.
The first part of the sentence is not correct; the second ("we need to make sure...") is. PKCE is not required because of the two requests - the two requests are what make PKCE possible to implement. The real problem is who can steal the code or token before it reaches the application that requested it. The implicit flow has the same security problems as the auth code flow, described in section 8.1 of the RFC. Without PKCE, if an attacker gets a code or an access token, they can use the token right away or exchange the code for tokens first. With PKCE, the code is useless without knowing the code_verifier.
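To make that concrete, here is a minimal sketch (in Python, purely for illustration) of the values a client creates for PKCE. The names code_verifier, code_challenge and code_challenge_method come from RFC 7636; everything else here is an assumption:

```python
import base64
import hashlib
import secrets

# The client creates a high-entropy secret (the code_verifier) and keeps it to itself.
code_verifier = secrets.token_urlsafe(64)

# Only the S256 hash of it (the code_challenge) goes into the authorization request.
digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# The later token request carries the original code_verifier, so an attacker who
# only intercepted the authorization code cannot redeem it: they cannot produce
# a verifier that hashes to the stored challenge.
```

The implicit flow has no second request in which such a proof could be presented, which is why PKCE has no place to hook into it.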
Since the implicit flow didn't get any security extension that would solve its security problems, it cannot be recommended.
And depending on which redirect URI option you choose, there may be a problem with delivering the fragment part of the redirect URL (used by the implicit flow to transfer tokens) to the application. But I'm not sure about this part.
Below I have tried to justify my question: "Does Proof Key for Code Exchange (PKCE) work without TLS?" I believe I am misunderstanding part of the spec and was hoping for some direction. I also have a second question: does PKCE only relate to using cookies to store the auth token (not mentioned in the spec)? Please help me identify the misinformation or lack of information in my comments below.
RFC 7636's introduction section describes an authorization code interception attack on public clients. It states,
the attacker intercepts the authorization code returned from the
authorization endpoint within a communication path not protected by
Transport Layer Security (TLS) ...
The Introduction's precondition list, item 4, indicates that TLS is not protecting the response.
The Introductions "To mitigate this attack" paragraph the states
This works as the mitigation since the attacker would not know this
one-time key, since it is sent over TLS and cannot be intercepted.
and
this extension utilizes a dynamically created cryptographically random
key called "code verifier".
RFC 6749 mentions attacks around the use of cookies; however, RFC 7636 does not specify that it pertains only to cookies or local storage of the auth token. Therefore it seems it would resolve an attack on the request if the auth token were also stored dynamically. Is this the case?
PKCE is used as a proof that the party which initiated authentication (via a browser redirect) is the same party completing it (via an HTTP POST).
It works without TLS, as in steps 4 and 8 of my blog post, where the code verifier in step 8 must match the code challenge in step 4.
Cookies are not directly related to PKCE. However, in a browser based app without PKCE or a client secret, malicious browser code only needs to send the authorization code to get tokens. If tokens are stored in cookies then the malicious code can only perform session hijacking.
PKCE was originally introduced for public clients that cannot use a client secret. These days, it is recommended for all clients that use the code flow, including those with a client secret. And of course TLS should also be used.
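For illustration, the check on the authorization server side is conceptually just recomputing the hash. A minimal sketch, assuming the S256 method was used (pkce_verifier_matches is a made-up helper name):

```python
import base64
import hashlib
import secrets

def pkce_verifier_matches(code_verifier: str, stored_code_challenge: str) -> bool:
    """Recompute the challenge from the presented verifier and compare it with the
    challenge that was stored when the authorization code was issued."""
    digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
    recomputed = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return secrets.compare_digest(recomputed, stored_code_challenge)
```

Note that this comparison does not depend on TLS at all, which is the sense in which PKCE "works without TLS"; TLS is still needed to keep the codes and tokens confidential in transit.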
From my understanding, the advantage that Authorization Code Flow has over Implicit Flow is that with ACF, the access token gets sent to a server side app rather than to a browser app. This makes the access token much harder to steal, because the access token never reaches the browser (and is thus not susceptible to a Cross Site Scripting attack).
I would have thought that PKCE would try to solve this issue. But it does not. The access token is still sent to the browser. Hence it can still be stolen.
Is there something I am missing here?
Many thanks.
Authorization Code Flow (PKCE) is considered to offer superior security to the previous solution, the Implicit Flow:
With implicit flow the access token was returned directly in a browser URL and could perhaps be viewed in logs or the browser history
With Authorization Code Flow this is handled better, with reduced scope for exploits:
Phase 1: A browser redirect that returns a one time use 'authorization code'
Phase 2: Swapping the code for tokens is then done via a direct Ajax request
PKCE also provides protection against a malicious party intercepting the authorization code from the browser response and being able to swap it for tokens.
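To illustrate the difference between the two phases and the implicit flow, here is a rough sketch of the two authorization requests (Python, with made-up endpoint, client_id and redirect_uri values, and a placeholder code_challenge):

```python
from urllib.parse import urlencode

authorize_endpoint = "https://login.example.com/oauth2/authorize"   # hypothetical endpoint
code_challenge = "E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM"      # placeholder S256 value

# Implicit flow: the access token itself comes back in the redirect URL fragment.
implicit_request = authorize_endpoint + "?" + urlencode({
    "response_type": "token",
    "client_id": "my-spa",
    "redirect_uri": "https://app.example.com/callback",
    "scope": "read",
})

# Code flow with PKCE: only a one-time authorization code comes back in the URL;
# tokens are then fetched with a direct POST that must include the code_verifier.
code_flow_request = authorize_endpoint + "?" + urlencode({
    "response_type": "code",
    "client_id": "my-spa",
    "redirect_uri": "https://app.example.com/callback",
    "scope": "read",
    "code_challenge": code_challenge,
    "code_challenge_method": "S256",
})
```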
Both are client side flows and their reason for existing is to use access tokens in public clients. Authorization Code Flow (PKCE) is the standard flow for all of these:
Single Page Apps
Mobile Apps
Desktop Apps
In the SPA case the token should not be easily stealable, especially if stored only in memory as recommended. However, there are more concerns when using tokens in a browser, since it is a dangerous place, and you need to follow SPA Best Practices.
In the browser case there are other options of course, such as routing requests via a web back end or reverse proxy in order to keep tokens out of the browser, and dealing with auth cookies in addition to using tokens.
I think you are right. The tokens are not in an HttpOnly cookie, and are therefore accessible to a malicious script (injected via an XSS attack). The attacking script can read all the tokens (after a successful, normal auth flow) from local storage (or wherever they were put) and use them.
I had hoped that CORS protections would prevent the malicious script from sending the tokens out to an attacker directly, but CORS only controls which origins may call (and read responses from) your own APIs; it does not stop injected code from posting data to an attacker's server. Exfiltration would be a devastating failure, as this potentially includes a long-lived refresh token. So configuring CORS correctly still matters with these local-client based flows (by local-client I mean a browser, mobile app, or native PC app), but it is not sufficient on its own.
In short, these local-client flows can be made secure, but if there is an XSS attack, or badly configured CORS, then those attacks can become extremely dangerous - because the refresh token could potentially be sent to the attacker for them to use at will in their own good time, which is about as bad as an attack can get.
The big security issue always brought up when using OAuth2 seems to be centered around it not technically being authentication. However, if you are the owner of both the identity provider and the resources being accessed, doesn't this accomplish authentication, and thus make it unnecessary to implement a solution like OpenID Connect on top?
In OAuth there's only one token that is being passed around (an access token) which is opaque to the Client/RP. The access token represents a capability that is given by an end user but it doesn't say anything about that user: not who the user is nor whether and how the user authenticated (because as said the token contains no information whatsoever by spec).
Anything that you can come up with would lead to an extension of OAuth 2.0, either by adding the information described above to the (or a different) token - thus making it no longer opaque to the RP - and/or by defining an endpoint where that information about the user can be obtained. But then this extension would be exactly what OpenID Connect has standardized, so it would not make much sense to deviate from that.
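As a small illustration of the difference: an OIDC ID token is a JWT whose payload the client is meant to read, whereas a plain OAuth 2.0 access token promises the client nothing readable at all. A sketch of peeking at an ID token payload (decode_jwt_payload is just an illustrative helper; a real client must validate the signature before trusting any claim):

```python
import base64
import json

def decode_jwt_payload(jwt: str) -> dict:
    """Decode the middle (payload) segment of a JWT. Illustration only:
    signature validation is deliberately omitted here."""
    payload_b64 = jwt.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)   # restore base64url padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# An ID token carries claims about the user and the authentication event
# ("iss", "sub", "aud", "auth_time", "nonce", ...). That is exactly the
# information an opaque access token does not give the Client/RP.
```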
I'm trying to add OTP/2FA support into OAuth2, but after much reading through the RFC6749, it's still not clear how OTP/2FA could be cleanly added without violating the specification.
Although OTP/2FA entry can be added into the authorize dialog flow, there is no provision for adding it to the token request. For example, public client apps using the Resource Owner Password Credentials grant may want to provide the OTP directly when requesting a new access_token, rather than having to embed an HTML dialog box.
Therefore my questions are;
Does the RFC allow for a custom grant_type? Should this be used to provide 2FA/OTP functionality?
Does the RFC allow for additional attributes on an existing grant_type? For example, grant_type=token&otp_code=1234 (the RFC does not make it clear whether additional attributes are allowed on grant types within the specification)
Should OTP functionality be placed into headers? This is the approach that Github used, but it feels really ugly/hacky.
Are there any other approaches that I have overlooked?
Thank you in advance
The RFC allows for an extension (custom) grant, see https://www.rfc-editor.org/rfc/rfc6749#section-8.3. That grant could define additional attributes.
OAuth 2.0 does not define how the Resource Owner authenticates to the Authorization Server, with the exception of the Resource Owner Password Credentials grant. Your proposal could be designed as an extended variant of that grant.
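As a sketch of what such an extension grant could look like on the wire (the grant_type URN, the otp parameter and the endpoint are all made up for illustration; none of them are defined by RFC 6749):

```python
import urllib.parse
import urllib.request

# Hypothetical extension grant: an OTP-augmented variant of the password grant.
form = urllib.parse.urlencode({
    "grant_type": "urn:example:params:oauth:grant-type:password-otp",
    "username": "alice",
    "password": "correct horse battery staple",
    "otp": "123456",
    "client_id": "my-public-client",
}).encode("ascii")

request = urllib.request.Request(
    "https://auth.example.com/oauth2/token",          # hypothetical token endpoint
    data=form,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
# urllib.request.urlopen(request) would return the usual JSON token response
# (access_token, token_type, expires_in, ...) if the OTP checks out.
```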
I'm working on something similar. I've got an endpoint at which you can request certain metadata for a user such as whether or not 2fa / mfa / otp is turned on and the salt (and iterations and algorithm) for Proof of Secret / Secure Remote Password.
If you go that route you can simply extend the grant types with an mfa or totp field (that's what I'm doing). You could also create a custom grant type as mentioned above (perhaps more proper).
Another solution is to check for MFA / 2FA / OTP during the authorization dialog step.
If I were done with my OAuth2 implementation I'd link you to that code too, but here are some libraries for an Authenticator:
Browser Authenticator has the components you need to generate and verify a key and token in the browser: https://git.coolaj86.com/coolaj86/browser-authenticator.js
Node Authenticator has the complementary server-side code: https://git.coolaj86.com/coolaj86/node-authenticator.js
You would still need to provide your own database storage mechanism and link it in to your OAuth implementation or you could make a very simple one and run it as a microservice.
I agree with Hanz Z.: you can design your own grant type to consume OTP.
But OTP can also be used for client authentication (confidential clients only, as you have to store credentials).
For example, we could imagine a header in the token request (X-OAuth2-OTP: 01234567). Client authentication would fail if the client has activated OTP but no header is set.
Based on a specific scope (e.g. OTP) you can mark the bearer (access_token) as "verified_otp = false" on the backend; then, after the authorization server has received a valid OTP for that session, mark your bearer (access_token) as "verified_otp = true". On your resource server, check the verified_otp field before authorizing calls that depend on this verification.
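A minimal sketch of that resource-server check, assuming the token's status can be looked up (for example via introspection) and that verified_otp is the field stored alongside it; the helper name and field names are only illustrative:

```python
def authorize_call(token_info: dict, otp_scope: str = "otp") -> bool:
    """Allow the call only if the OTP-protected scope is present and the token
    has already been marked verified_otp = true by the authorization server."""
    scopes = token_info.get("scope", "").split()
    if otp_scope not in scopes:
        return False          # this token does not carry the OTP-protected scope
    return token_info.get("verified_otp", False) is True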
I feel silly even asking this question, but am at the limits of my understanding, and am hoping someone can provide some context.
I'm looking at the following (https://stormpath.com/blog/token-auth-for-java/) which states:
The access_token is what will be used by the browser in subsequent requests... The Authorization header is a standard header. No custom headers are required to use OAuth2. Rather than the type being Basic, in this case the type is Bearer. The access token is included directly after the Bearer keyword.
I'm in the process of building a website, for which I'll be coding both the back-end REST service, as well as the front-end browser client. Given this context, why do I need to follow any of the guidelines given above? Instead of using the access_token, Authorization and Bearer keywords, what's stopping me from using any keywords I like, or skipping the Bearer keyword entirely in the header? After all, as long as the front-end and back-end services both read/write the data in a consistent manner, shouldn't everything work fine?
Are the keywords and guidelines given above merely best-practice suggestions, to help others better understand your code/service? Are they analogous to coding-styles? Or is there any functional impact in not following the above guidelines?
Given this context, why do I need to follow any of the guidelines given above?
Because they are standardized specifications that everyone is meant to conform to if they want to interact with each other.
Instead of using the access_token, Authorization and Bearer keywords, what's stopping me from using any keywords I like, or skipping the Bearer keyword entirely in the header?
Nothing, except that it won't be OAuth anymore. It will be something custom that you created for yourself that no one else will understand how to use, unless you publish your own spec for it.
After all, as long as the front-end and back-end services both read/write the data in a consistent manner, shouldn't everything work fine?
Who is to say that you alone will ever write the only front-end? Or that the back-end will never move to another platform? Don't limit yourself to making something custom when there are open standards for this kind of stuff.
Are the keywords and guidelines given above merely best-practice suggestions, to help others better understand your code/service?
No. They are required protocol elements that help the client and server talk to each other in a standardized manner.
Authorization is a standard HTTP header used for authentication. It has a type so the client can specify what kind of authentication scheme it is using (Basic vs NTLM vs Bearer, etc). It is important for the client to specify the correct scheme being used, and for the server to handle only the schemes it recognizes.
Bearer is the type of authentication that OAuth uses in the Authorization header. access_token is a parameter of OAuth's Bearer authentication.
If you use the Authorization header (which you should), you must specify a type, as required by RFCs 2616 and 2617:
Authorization = "Authorization" ":" credentials
credentials = auth-scheme #auth-param
auth-scheme = token
auth-param = token "=" ( token | quoted-string )
So, in this case, Bearer is the auth-scheme and access_token is an auth-param.
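A minimal sketch of both sides (Python; the API URL and token value are placeholders):

```python
import urllib.request

access_token = "mF_9.B5f-4.1JqM"   # placeholder token value

# Client side: present the token with the standard scheme and header.
request = urllib.request.Request(
    "https://api.example.com/resource",                      # hypothetical API
    headers={"Authorization": f"Bearer {access_token}"},
)

# Server side: split the scheme from the credentials and accept only schemes you support.
def parse_bearer_credentials(header_value: str):
    scheme, _, credentials = header_value.partition(" ")
    if scheme.lower() != "bearer" or not credentials:
        return None    # respond 401 with a WWW-Authenticate: Bearer challenge
    return credentials
```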
Are they analogous to coding-styles?
No.
Or is there any functional impact in not following the above guidelines?
Yes. A client using your custom authentication system will not be able to authenticate on any server that follows the established specifications. Your server will not be able to authenticate any client that does not use your custom authentication system.