OAuth and sticky sessions

I use OAuth 2 for authorization and need to implement it in a load balanced cluster. I've considered a few approaches, but it seems there is no way around a centralized approach. Here are my thoughts:
1. Balancing using source IP
Caching the tokens on one server and balancing by source IP would be ideal; however, the IP cannot be assumed to be static. When the same user presents a valid token from a different IP, the request will fail because the token is not cached on the machine it reaches. Other devices logged in as the same user will also not reach the same machine.
2. Balancing using a load balancing cookie
Also not really an option, since it cannot be assumed that every client implements cookie storage.
3. Balancing using the Authorization header
Balancing by hashing the Authorization: Bearer header is problematic because the first request (the one that obtains the authorization token) carries no Authorization header, so the following requests might not hit the same instance.
My current approach is to use a central Redis instance for authorization token storage.
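For illustration, a minimal sketch of what I mean by the centralized approach, using redis-py; the key names, host and TTL here are placeholders rather than my actual setup:

```python
# Sketch of the centralized approach: every instance checks bearer tokens
# against one shared Redis, so no sticky routing is required.
import json
import redis

r = redis.Redis(host="redis.internal", port=6379, db=0)  # placeholder host

def cache_token(token: str, user_id: str, ttl_seconds: int = 3600) -> None:
    # Called by whichever instance issued the token.
    r.setex(f"oauth:token:{token}", ttl_seconds, json.dumps({"user_id": user_id}))

def validate_token(token: str):
    # Called by whichever instance receives the request.
    raw = r.get(f"oauth:token:{token}")
    return json.loads(raw) if raw else None
```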
Is there an option left, where a centralized approach can be avoided?

I think you still have two options to consider.
One is to balance by session ID. Application servers can usually be configured to track sessions either by cookie or by a GET parameter appended to every link, so session handling does not strictly require cookie storage. Additionally, very few HTTP clients are left that do not implement cookie storage, so you may want to reconsider item 2 of your list.
The other is to use self-contained tokens, e.g. JSON Web Tokens (JWT) with signatures (JWS). Validating self-contained tokens does not require a central database: each server instance can check the token signature on its own and extract the authorization details from the token itself. However, if you need support for revoking tokens, you may still need a central store for at least a blacklist of revoked tokens.
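To make the second option concrete, here is a minimal sketch using the PyJWT library; the key file, audience value and revocation set are assumptions for illustration only:

```python
# Sketch: each instance validates a JWS-signed JWT locally, so no shared token
# store is needed; only revocation (if required) touches a central blacklist.
import jwt  # PyJWT

PUBLIC_KEY = open("issuer_public_key.pem").read()  # distributed to every instance
REVOKED_JTIS = set()  # stand-in for an optional central blacklist (e.g. a Redis set)

def authorize(bearer_token: str) -> dict:
    claims = jwt.decode(
        bearer_token,
        PUBLIC_KEY,
        algorithms=["RS256"],   # pin the algorithm, never trust the token header
        audience="my-api",      # assumed audience value
    )
    if claims.get("jti") in REVOKED_JTIS:
        raise PermissionError("token revoked")
    return claims               # scopes/roles can be read straight from the claims
```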
Though I cannot provide a full-fledged solution, I hope this gives you some ideas.

Related

Can the OIDC "jwks_uri" URI defined in the "/.well-known/openid-configuration" be cached?

I'm writing a service which requires OIDC. I've seen a number of client libraries (the programming language and library here is irrelevant) which allow auto-refreshing/re-fetching the content at the "jwks_uri" endpoint which is defined in the discovery document at the IdP's "/.well-known/openid-configuration" endpoint.
I know the content defined at the actual "jwks_uri" endpoint can change. But can the value of this "jwks_uri" (the URI itself) change in the discovery document? I can't seem to find any answer in the specs.
Yes, it can be cached. Some clients (the .NET ones, for example) cache this information (including the signing keys) and refresh it every 24 hours.
If you validate your tokens remotely (on the IdP side), you can cache the content of the .well-known document more or less forever; in most cases the endpoints of your IdP will never change. But if you validate tokens locally, you need the latest signing keys from the IdP, so you need to know how frequently your IdP rotates its keys and base your caching interval on that.
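To illustrate the local-validation case, a rough sketch of caching the discovery document and refreshing the signing keys on a fixed interval; the URLs and the 24-hour figure are placeholders you would align with your IdP's rotation policy:

```python
# Sketch: cache the discovery document once and re-fetch the JWKS periodically
# so local token validation keeps up with key rotation.
import time
import requests

ISSUER = "https://idp.example.com"   # placeholder IdP
REFRESH_SECONDS = 24 * 60 * 60       # e.g. 24 hours, match your IdP's key rotation

discovery = requests.get(f"{ISSUER}/.well-known/openid-configuration").json()
_jwks_cache = {"keys": None, "fetched_at": 0.0}

def get_signing_keys() -> dict:
    if _jwks_cache["keys"] is None or time.time() - _jwks_cache["fetched_at"] > REFRESH_SECONDS:
        _jwks_cache["keys"] = requests.get(discovery["jwks_uri"]).json()
        _jwks_cache["fetched_at"] = time.time()
    return _jwks_cache["keys"]
```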

OAuth2 authorization code flow: where do I store the authorization code?

Good day! I am trying to implement my own authorization server following the OAuth2 standards. From the specification of the authorization code flow, a third-party application requesting API access needs an authorization code from the authorization server, which it then exchanges for an access token. My question is: once I generate an authorization code on my authorization server, where, conceptually, do I store it, so that when a client app requests to exchange it for an access token I can verify that the authorization code is valid?
You can store the code anywhere you want - in your server memory (as an object in a map), in a database or in any other safe storage. If your server is a single application (a single process with its own memory), you can store the codes in memory if you don't mind losing them during application restarts. But if you want to run multiple instances of your application (e.g. in Kubernetes), or the server is composed of multiple applications, you will need to use some external storage (a database, Hazelcast, Redis).
With the code, you will need to keep metadata such as client_id, validity, PKCE attributes (code_challenge_method, code_challenge) and such. When you receive a request to your token endpoint wanting to exchange the code for tokens, you need to find the code in your storage, compare the relevant metadata (client_id, PKCE code_verifier, client_secret) and issue tokens.
You should also keep the code with a timestamp saying when the tokens were issued, and you should be able to find which tokens were issued from that code, because if you receive another /token exchange request with the same code, you should invalidate all the tokens issued from it - the code was probably stolen.
It's good to read the OAuth2 Security RFC for all the considerations.
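As a rough in-memory sketch of the above (the names, one-minute lifetime and PKCE check are illustrative; swap the dict for Redis or Hazelcast when running multiple instances):

```python
# Sketch: store the authorization code with its metadata, then verify and
# consume it exactly once at the token endpoint.
import base64
import hashlib
import secrets
import time

auth_codes: dict[str, dict] = {}  # code -> metadata; replace with external storage if clustered

def issue_code(client_id: str, redirect_uri: str, code_challenge: str) -> str:
    code = secrets.token_urlsafe(32)
    auth_codes[code] = {
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "code_challenge": code_challenge,  # S256 challenge assumed
        "expires_at": time.time() + 60,    # codes should be short-lived
    }
    return code

def exchange_code(code: str, client_id: str, redirect_uri: str, code_verifier: str) -> bool:
    record = auth_codes.pop(code, None)    # single use: consume it immediately
    if record is None or time.time() > record["expires_at"]:
        return False
    challenge = base64.urlsafe_b64encode(
        hashlib.sha256(code_verifier.encode()).digest()
    ).rstrip(b"=").decode()
    return (record["client_id"] == client_id
            and record["redirect_uri"] == redirect_uri
            and record["code_challenge"] == challenge)
```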
You can create a global map from client_id to auth codes and delete the entries after the access token is exchanged. This is a simple and valid solution as long as it is properly implemented and the auth codes are deleted correctly.
Since the exchange happens immediately, you don't need to worry about the heap filling up: each auth code is created and deleted within a very short period of time, freeing the space again. Say 1,000 users log in every minute; a data structure of 1,000 elements is acceptable in most cases, assuming the exchange times out after one minute (which should be the case).

Do OAuth clients need to validate tokens as part of authorization code grant?

Does an app using authorization code (with or without PKCE) to obtain access + id tokens on behalf of a user need to also validate those tokens (signature, not expired, audience, etc.)?
If so, what for? Since the client is using TLS and pointing to the provider it's been configured with, what attacks/threats does that client also validating the token mitigate?
At first glance you are right: the spec basically allows not checking tokens returned directly from the token endpoint over TLS, as you suggest. But:
Firstly, one may argue that if a signature is present, it is there for a reason and it should be validated since the Provider is also free to return tokens without a signature (alg="none") if it did not want/need the Client to validate.
Secondly, there are known attacks ("IDP mix-up") that trick the Client into talking to the wrong token endpoint as a way of stealing the Authorization Code: verifying the signature of the returned ID token would at least stop the Client from processing an ID token produced by an attacker.
Thirdly, it would be good for the Client to protect itself against broken or compromised IDPs in general, avoiding replay attacks or similar.
I guess when you're doing all of it in a single domain, i.e. the Client, AS and RS are all under the control of the same organisation and the relationships between them are fixed and strictly one-to-one, verification would technically be overkill based on current knowledge and known attacks.
But if your use case spans multiple security domains, it is in general better to verify than to assume.
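For completeness, validating the returned ID token is fairly cheap. A minimal sketch using the PyJWT library; the JWKS URL, audience, issuer and nonce handling are assumptions for illustration:

```python
# Sketch: validate the ID token returned from the token endpoint, which is what
# stops the Client from processing a token produced by a mixed-up or malicious IdP.
import jwt
from jwt import PyJWKClient

jwks_client = PyJWKClient("https://idp.example.com/jwks")  # placeholder JWKS endpoint

def validate_id_token(id_token: str, expected_nonce: str) -> dict:
    signing_key = jwks_client.get_signing_key_from_jwt(id_token)
    claims = jwt.decode(
        id_token,
        signing_key.key,
        algorithms=["RS256"],
        audience="my-client-id",           # must be this Client's ID
        issuer="https://idp.example.com",  # must be the expected Provider
    )
    if claims.get("nonce") != expected_nonce:  # ties the token to this login attempt
        raise ValueError("nonce mismatch")
    return claims
```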

OpenID Connect, oAuth2 - Where to start?

I am not sure which approach I should be taking in our implementation and need some guidance.
I have a REST API (api.mysite.com) built in the Yii2 Framework (PHP) that accesses data from mysite.com (database). On mysite.com our users will be able to create Connected Apps that will provision a client id + secret - granting access to their account (full scope?).
Based on my research, the next step seems to be setting up something to actually provide the bearer tokens to be passed to the api - I have been leaning towards oAuth2, but then I read that oAuth2 does not provide authentication. Based on this I think I need OpenID Connect in order to also provide user tokens because my API needs to restrict data based on the user context.
In this approach, it is my understanding that I need to have an Authentication Server - so a few questions:
Is there software I can install to act as an OpenID Connect/oAuth2 authentication server?
Are there specific Amazon Web Services that will act as an OpenID Connect/oAuth2 Authentication Server?
I am assuming the flow will be: App makes a request to the auth server with client id + secret and receives an access token. Access token can be used to make API calls. Where are these tokens stored (I am assuming a database specific to the service/software I am using?)
When making API calls would I pass a bearer token AND a user token?
Any insight is greatly appreciated.
Your understanding is not very far from reality.
Imagine you have two servers. One is for Authentication: it is responsible for generating the tokens based on an Authorization: Basic header with a base64-encoded ClientID/ClientSecret combo. This is basically application authentication. If you want to add user data as well, simply pass the username/password in the POST body, authenticate on the server side, and then add some more data to the tokens, such as the usernames, claims, roles, etc.
You can control what you put in these tokens; if you use something like JWT (JSON Web Tokens), they are simply JSON bits of data.
Then you have a Resource server: you hit it with an Authorization: Bearer header carrying the token you obtained from the Authorization server.
Initially the tokens are not stored anywhere; they are issued for a period of time you control. You can, however, store them in a database if you really want to. Relying on expiration is safer, though: even if someone gets their hands on them, they won't be usable for long. In my case I used 30 minutes for token validity.
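To make that concrete, here is a rough sketch of what the Authorization Server could issue; the claim names, key file and 30-minute lifetime are illustrative, not prescriptive:

```python
# Sketch: the Authorization Server signs a short-lived JWT after checking the
# client credentials (and optionally the user), so the Resource Server can
# validate it without a token database.
import time
from typing import Optional

import jwt  # PyJWT

SIGNING_KEY = open("as_private_key.pem").read()  # kept only on the Authorization Server

def issue_token(client_id: str, username: Optional[str] = None) -> str:
    now = int(time.time())
    claims = {
        "iss": "https://auth.mysite.com",  # placeholder issuer
        "aud": "https://api.mysite.com",   # placeholder audience
        "client_id": client_id,
        "iat": now,
        "exp": now + 30 * 60,              # e.g. 30 minutes, as mentioned above
    }
    if username is not None:
        claims["sub"] = username           # user context for the API
    return jwt.encode(claims, SIGNING_KEY, algorithm="RS256")
```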
Now, you haven't specified which languages/frameworks you are looking into. If you use something like .NET, then look into IdentityServer: version 4 is for .NET Core, version 3 for anything below.
I also have a pretty long article on this subject if you are interested:
https://eidand.com/2015/03/28/authorization-system-with-owin-web-api-json-web-tokens/
Hopefully all this clarifies some of the questions you have.
-- Added to answer a question in comments.
The tokens contain all the information the resource server needs to authenticate them correctly; you don't need to store them in a database for that. As I already said, you can store them, but in my mind this makes them less secure. Don't forget you control what goes into a token, so you can add usernames if that's what you need.
Imagine this scenario: you want to authenticate the application and the user in the same call to the Authorization Server. Do the OAuth2 part the standard way, which means authenticating the application first based on the client id / client secret. If that passes, then do the user authentication next. Add the username or user id to the token you generate, plus any other bits of information you need. What this means is that the resource server can safely assume that the username passed to it in the token has already been validated by the authentication server, otherwise no token would have been generated in the first place.
I prefer to keep these two separate myself, meaning let the AS (Authorization Server) deal with the application-level security. Then on the RS (Resource Server) side you have an endpoint like ValidateUser, for example, which takes care of the user validation, after which you can do whatever you need. Pick whichever feels more appropriate for your project, I'd say.
One final point: ALWAYS make sure all your API calls (both the AS and the RS are just APIs, really) are made over HTTPS, and never transmit any important information via the query string of a GET call, since URLs can be logged or leaked. Both headers and the POST body are encrypted and secure over HTTPS.
This should address both your questions, I believe.

Should an oAuth server give the same accessToken to the same client request?

I am developing an oAuth2 server and I've stumbled upon this question.
Let's suppose a scenario where my tokens are set to expire within one hour. Within this timeframe, some client goes through the implicit flow fifty times using the same client_id and the same redirect_uri. Basically the same everything.
Should I return the same accessToken generated on the first request for the subsequent ones until it expires, or should I issue a new accessToken on every request?
The benefit of returning the same token is that I won't leave stale, unused tokens for a client on the server, minimizing the window for an attacker trying to guess a valid token.
I know that I should rate-limit things and I am doing it, but in the case of a large botnet attack from thousands of different machines, some limits won't take effect immediately.
However, I am not sure about the downsides of this solution and that's why I came here. Is it a valid solution?
I would rather say - no.
Reasons:
You should NEVER store access tokens in plain text on the Authorization Server side. Access tokens are credentials and should be stored hashed. Salting might not be necessary since they are generated strings anyway. See OAuth RFC point 10.3.
Depending on how you handle subsequent requests, an attacker who knows that a certain resource owner is using your service could simply repeat requests for that client id. That way the attacker would be able to impersonate the resource owner. If you really return the same token, then at least ensure that you authenticate the resource owner every time.
What about the "state" parameter? Will you consider requests to be the "same" if the state parameter is different? If not, then a botnet attack will simply use a different state every time and force you to issue new tokens.
As an addition: defending against a botnet attack via application logic is generally very hard. The infrastructure exposing your AS to the internet should take care of that. At the application layer you should make sure the server does not go down from small-bandwidth attacks.
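For the first point, a minimal sketch of what storing hashed tokens can look like; the in-memory store stands in for a real database:

```python
# Sketch: persist only a hash of the access token, so a database leak does not
# expose usable credentials; look tokens up by re-hashing what the client sends.
import hashlib
import secrets

token_store: dict[str, dict] = {}  # sha256(token) -> metadata; stand-in for a database

def issue_access_token(client_id: str, user_id: str) -> str:
    token = secrets.token_urlsafe(32)                    # random, unguessable
    digest = hashlib.sha256(token.encode()).hexdigest()
    token_store[digest] = {"client_id": client_id, "user_id": user_id}
    return token                                         # the plain token goes only to the client

def lookup_access_token(token: str):
    return token_store.get(hashlib.sha256(token.encode()).hexdigest())
```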
You can return the same access_token if it is still valid; there's no issue with that. The only downside may be that you are using the Implicit flow and thus repeatedly send the same, valid access token in a URL fragment, which is considered less secure than using e.g. the Authorization Code flow.
As a rule of thumb, never reuse keys; this brings additional security to the system in case a key is captured.
You can send a different access token each time one is requested after proper authentication, and also send a refresh token along with the access token.
Once the access token expires, you should inform the user, and the user should request a new access token by providing the one-time-use refresh token previously issued to them, skipping the need for re-authentication; you should then provide a new access token and a new refresh token.
To resist attacks with fake refresh tokens, you should blacklist them, along with their originating IPs, after a few warnings.
PS: Never use predictable tokens. At least make brute-force attacks extremely difficult by using totally random, long alphanumeric strings. I would suggest bin2hex(openssl_random_pseudo_bytes(512)) if you are using PHP.
