Clarification required on CIBA spec - OAuth 2.0

I am going through the CIBA specification (https://openid.net/specs/openid-client-initiated-backchannel-authentication-core-1_0.html#OpenID.Core) and could not understand what user_code is and how it needs to be deployed for CIBA.
"User code is a mechanism to prevent unsolicited authentication requests from appearing on a user's authentication device. "
That is how the specification introduces it. See section 7.1.2 for more.
It will be helpful if someone could explain this functionality and how it should be supported from an identity server's point of view.

This may be a bit late, but I hope it can help future readers. The user_code mechanism is there to prevent unwanted CIBA requests from appearing on an end user's Authentication Device (AD). There may be instances where a Relying Party (RP) or client application that uses the CIBA Authorization Server (OP) knows the end user's identifier, resulting in requests the end user does not want at that moment. So, as a preventive measure, a user_code must be supplied before a CIBA request can start, do its thing on the AD, and so on. This makes sure that a CIBA request that pops up on an end user's AD is really "wanted", or in other words happens with the conscious knowledge of the end user.
The way a user_code is structured is similar to a password: it's a secret that only the end user knows, but it should be something other than their password.
So for the implementation aspect, a rough approach could be like the list below (a sketch of the initial request follows it):
Register the client app with support for user_code
Make sure the corresponding user accounts have a user_code set
Before a CIBA request, the client application needs to ask the end user to input their user_code
The supplied user_code is passed along with the authentication request, that is, the initial CIBA request
The Authorization Server (OP) validates the request parameters and makes sure the user_code is valid for the identified user
Continue the rest of the flow conforming to the spec...
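A minimal sketch of that initial backchannel request, assuming a hypothetical OP endpoint and client credentials (none of these values come from the spec or a real deployment):

    import requests

    # Hypothetical endpoint and credentials, for illustration only.
    BC_AUTHN_ENDPOINT = "https://op.example.com/bc-authorize"
    CLIENT_ID = "my-client"
    CLIENT_SECRET = "my-secret"

    def start_ciba_request(login_hint, user_code):
        """Send the initial CIBA backchannel authentication request.

        The user_code collected from the end user is passed along with the
        login hint; the OP rejects the request if the code does not match
        the one registered for that user.
        """
        resp = requests.post(
            BC_AUTHN_ENDPOINT,
            data={
                "scope": "openid",
                "login_hint": login_hint,
                "user_code": user_code,
            },
            auth=(CLIENT_ID, CLIENT_SECRET),  # client_secret_basic
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()  # contains auth_req_id, expires_in, interval

    # Example: start_ciba_request("user@example.com", "1234")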

Forging a Cross Site Request Forgery (CSRF) token

I had a look at Rails' ActionController::RequestForgeryProtection module and couldn't find anything related to using secrets. Basically, it uses a secure PRNG as a one-time pad, XORs, computes Base64 and embeds the result into HTML (form, tags). I agree that it is impossible for an attacker to guess what a PRNG generates, but nevertheless I can generate (or forge, if you like) a similar token, embed it into my "evil" form and submit. As far as I understand, Rails compares (verifies) it on the backend. But I can't fully understand why this is secure. After all, I can generate my own token exactly like Rails does. Could someone clarify how the security is achieved?
You might misunderstand what this protects against, so let's first clarify what CSRF is and what it is not. Sorry if this is not the point of confusion; it might still be helpful for others, and we will get to the point afterwards.
Let's say you have an application that allows you to, say, transfer money with a POST request (do something that "changes state"), and uses cookie-based sessions. (Note that this is not the only case where CSRF might be possible, but it is by far the most common.) This application receives the request and performs the action. As an attacker, I can set up another application on a different domain and get a user to visit my rogue application. It does not even have to look similar to the real one; it can be completely different, just having a user visit my rogue domain is enough. As the attacker I can then send a POST to the victim application's domain, to the exact URL with all the necessary parameters, so that money gets transferred (the action will be performed). The victim user need not even notice if this happens via XHR from JavaScript, or I can just properly post a form, the user gets redirected, but the harm is done.
This is affected by a few things, but the point is that cross-origin requests are not prevented by the same-origin policy; only the response will not be available to the other domain. But in this case, when server state changes in the victim application (like money getting transferred), the attacker might not care much about the response itself. All this needs is a victim user that visits the attacker's page while still being logged in to the victim application. Cookies will be sent with the request regardless of the page the request is sent from; the only thing that counts is the destination domain (well, unless SameSite is set for the cookie, but that's a different story).
Ok, so how does Rails (and similar synchronizer token solutions) prevent this? If you look at lines 318 and 322 in the source, the token received from the user is compared to the one already stored in the session. When a user logs in, a random token is generated and stored for that particular user, for that particular session. Subsequent requests that change state (everything apart from GET) check whether the token received from the client is the same as the one stored in the session. If you (or an attacker) generate and send a new one, that will be different and the request will fail validation. An attacker on their own website cannot guess the correct token to send, so the attack above becomes impossible. An attacker on a different origin also cannot read the token of a user, because that is prevented by the same-origin policy (the attacker can send a GET request to the victim app, but cannot read the response).
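As an illustration of the synchronizer token pattern in general (a minimal sketch, not Rails' actual code):

    import secrets
    import hmac

    def issue_csrf_token(session):
        """Generate a random token at login and remember it in the server-side session."""
        token = secrets.token_urlsafe(32)
        session["csrf_token"] = token
        return token  # embedded in the rendered form as a hidden field

    def verify_csrf_token(session, submitted):
        """Reject state-changing requests whose token does not match the session's copy."""
        expected = session.get("csrf_token", "")
        return hmac.compare_digest(expected, submitted)

    session = {}
    token = issue_csrf_token(session)
    assert verify_csrf_token(session, token)          # legitimate form post
    assert not verify_csrf_token(session, "forged")   # attacker-generated token fails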
So to clarify, CSRF protection is not a protection against parameter tampering, which might have caused your confusion. In your own requests, if you know the token, you can change the request in any way and send any parameter; the CSRF token does not protect against this. It protects against the attack outlined above.
Note that the description above is only scratching the surface, there is a lot of depth to CSRF protection, and Rails too does a little more, with some other frameworks doing a lot more to protect against less likely attacks.

How would you build a production-ready authentication system for a GraphQL API built with Rails and React?

I am trying to integrate an authentication system with GraphQL and Rails that communicates with a React front end, and I would like to know the best way to do it for a production environment.
I know that this might involve using JWT, but I would like to know how you would do it.
When the user signs in/up from the React front end, it sends the request to the Rails GraphQL API, which authenticates the user. Then, when the authenticated user makes a request/query, the backend first makes sure that the user has access to the resources they are requesting and then sends those resources as JSON to the React front end.
This is a bit of an open-ended question. It's probably not really possible to write a specific answer to your question, but here goes nothing.
There are multiple ways to set up authentication with GraphQL. First of all, it's important to understand whether your user is allowed to make any GraphQL queries at all without being authenticated.
You're saying you're authenticating the user with your Rails GraphQL API. Are you doing this with a mutation or with a REST call? If it's just REST and the user isn't allowed to use the GraphQL API without authenticating, then you may simply be able to block unauthenticated users from interacting with the GraphQL API at all.
Otherwise it's common to check whether the user is authenticated and, if so, keep the user data in your GraphQL query context. Then you'll know, per query, whether the user is authenticated.
When the user is attempting to access any resource that they may not be able to see or are attempting to send a mutation without being authenticated, then you can just end the entire query/request with a GraphQL error.
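As a rough, library-agnostic sketch of that idea (the context and field names here are placeholders, not any particular GraphQL library's API):

    class GraphQLError(Exception):
        """Ends the query; the message lands in the response's errors array."""

    def resolve_account(parent, context):
        # `context` is populated per request after checking authentication.
        user = context.get("current_user")
        if user is None:
            raise GraphQLError("You must be signed in to query this field.")
        if not user.get("can_view_account"):
            raise GraphQLError("You are not allowed to see this resource.")
        return {"id": user["id"], "email": user["email"]}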
Since GraphQL errors are still considered part of a successful HTTP request you can handle them as usual in your front end as part of the UI. They'll be listed in the usual errors array of the response, as specified in the GraphQL spec.
Regarding JWT, you can of course use JWT to authenticate the user, which requires you to either store a token in a cookie or somewhere else in the user's browser. Typically you'd just send the token in the Authorization header with every GraphQL request.
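For instance, a minimal client-side sketch along those lines (the endpoint and token values are made up) could be:

    import requests

    GRAPHQL_ENDPOINT = "https://api.example.com/graphql"  # hypothetical
    JWT_TOKEN = "eyJ..."  # obtained from the sign-in mutation or REST call

    def run_query(query, variables=None):
        """POST a GraphQL query with the JWT in the Authorization header."""
        resp = requests.post(
            GRAPHQL_ENDPOINT,
            json={"query": query, "variables": variables or {}},
            headers={"Authorization": f"Bearer {JWT_TOKEN}"},
            timeout=10,
        )
        resp.raise_for_status()
        payload = resp.json()
        # GraphQL errors still arrive with HTTP 200; check the errors array.
        if "errors" in payload:
            raise RuntimeError(payload["errors"])
        return payload["data"]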

Is it possible to use recaptcha with auth0 in some way to avoid having a user sign in but still have a token?

I have an app, client side, that uses Auth0 for accessing the different APIs on the server. But now I want to add another app, a single-page app (I'm going to use Vue.js), and this app would ideally be open without a user having to sign in; it's like a demo with reduced functionality. I just want to check that the user is not a robot, basically, so I don't expose my API in those cases.
My ideas so far:
- Somehow use recaptcha and auth0 altogether.
- Then have a new server that would validate that the calls are made only to allowed endpoints (this is not the focus of the question), so that even if the auth is somehow compromised, it doesn't leave the real server open to all types of calls.
- Pass the call to the server along with the bearer token, just as if I was doing it with my other old client app.
Is this viable? Right now I'm forcing the user to validate; this is more a UX (user experience) concern, but I'd like a way to avoid that. I'm aware that with Auth0 alone I can't do this (see this post from Auth0), so I was expecting a mix between what I mentioned.
EDIT:
I'm sticking to validating in both cases, but I'm still interested to get opinions over this as future references.
In the end, given the very concept of how Auth0 works, that idea is not possible, so my approach was the following:
Give a temporarily authenticated (Auth0) visitor a token with a restricted access level, then pass the request to a new middle server. The idea is to encrypt the real IDs, so the front end thinks it's requesting project A123456etc when it will actually be decrypted in the middle server to project 456y-etc; given a whitelist, the middle server decides whether to pass the request, along with the token, to the final server. The final server has measures to reduce XSS and DDoS threats.
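A minimal sketch of that ID-encryption idea, using Python's cryptography package purely as an example (the key handling and project IDs are made up):

    from cryptography.fernet import Fernet

    # Illustrative only: in practice the key would live in the middle server's config.
    SECRET_KEY = Fernet.generate_key()
    fernet = Fernet(SECRET_KEY)

    def to_public_id(real_id):
        """Encrypt the real project id before exposing it to the front end."""
        return fernet.encrypt(real_id.encode()).decode()

    def to_real_id(public_id):
        """Decrypt the id received from the front end inside the middle server."""
        return fernet.decrypt(public_id.encode()).decode()

    public_id = to_public_id("project-456y")
    assert to_real_id(public_id) == "project-456y"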
Anyway, if there's a better solution to it I will change the accepted answer.
You could do a mix: use recaptcha for the open public, then on the server side analyse the incoming user request (you can already try to get a human-made digital fingerprint just to differentiate it from a robot-generated one), and have the server (more of a middle server) make the call to your API (and this server has a limited attack surface).
What we normally do in these situations (if I got your issue correctly) is to create two different endpoints: one working with the token and another one receiving the reCAPTCHA token and validating it with Google's servers.
Both endpoints end up calling the same code, but this way you can add extra functionality in a layer on the 'public' endpoint to ensure that you are asking only for public features (if that cannot be guaranteed just by modifying the interface).
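For reference, the server-side validation of a reCAPTCHA token is a single call to Google's siteverify endpoint; a minimal sketch (the secret is a placeholder):

    import requests

    RECAPTCHA_SECRET = "your-recaptcha-secret"  # placeholder

    def verify_recaptcha(client_token, remote_ip=None):
        """Ask Google's siteverify endpoint whether the token came from a human."""
        resp = requests.post(
            "https://www.google.com/recaptcha/api/siteverify",
            data={
                "secret": RECAPTCHA_SECRET,
                "response": client_token,
                "remoteip": remote_ip,
            },
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json().get("success", False)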

OpenID Connect, OAuth2 - Where to start?

I am not sure which approach I should be taking in our implementation and need some guidance.
I have a REST API (api.mysite.com) built in the Yii2 Framework (PHP) that accesses data from mysite.com (database). On mysite.com our users will be able to create Connected Apps that will provision a client id + secret - granting access to their account (full scope?).
Based on my research, the next step seems to be setting up something to actually provide the bearer tokens to be passed to the API. I have been leaning towards OAuth2, but then I read that OAuth2 does not provide authentication. Based on this, I think I need OpenID Connect in order to also provide user tokens, because my API needs to restrict data based on the user context.
In this approach, it is my understanding that I need to have an Authentication Server - so a few questions:
Is there software I can install to act as an OpenID Connect/OAuth2 authentication server?
Are there specific Amazon Web Services that will act as an OpenID Connect/OAuth2 authentication server?
I am assuming the flow will be: App makes a request to the auth server with client id + secret and receives an access token. Access token can be used to make API calls. Where are these tokens stored (I am assuming a database specific to the service/software I am using?)
When making API calls would I pass a bearer token AND a user token?
Any insight is greatly appreciated.
Your understanding is not very far from reality.
Imagine you have two servers, one for authentication; this one is responsible for generating the tokens based on an Authorization: Basic header with a base64-encoded ClientID/ClientSecret combo. This is basically application authentication. If you want to add user data as well, simply pass username/password in the POST body, authenticate on the server side and then add some more data to the tokens, like the username, claims, roles, etc.
You can control what you put in these tokens; if you use something like JWT (JSON Web Tokens), then they are simply JSON bits of data.
Then you have a resource server; you hit it with an Authorization: Bearer header and the token you obtained from the authorization server.
Initially the tokens are not stored anywhere; they are issued for a period of time you control. You can, however, store them in a db if you really want to. The expiration is much safer though: even if someone gets their hands on them, they won't be valid for long! In my case I used 30 minutes for token validity.
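A rough sketch of that flow (all endpoints and credentials below are made up for illustration; the password grant is just one way to pass the user's credentials):

    import requests

    # Hypothetical endpoints and credentials.
    TOKEN_ENDPOINT = "https://auth.example.com/token"
    RESOURCE_ENDPOINT = "https://api.example.com/v1/reports"
    CLIENT_ID = "my-app"
    CLIENT_SECRET = "my-secret"

    def get_token(username, password):
        """Authenticate the application (Basic auth) and the user (credentials in the POST body)."""
        resp = requests.post(
            TOKEN_ENDPOINT,
            auth=(CLIENT_ID, CLIENT_SECRET),  # Authorization: Basic base64(id:secret)
            data={
                "grant_type": "password",
                "username": username,
                "password": password,
            },
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()["access_token"]

    def call_api(access_token):
        """Hit the resource server with the bearer token."""
        resp = requests.get(
            RESOURCE_ENDPOINT,
            headers={"Authorization": f"Bearer {access_token}"},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()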
Now, you haven't specified what languages/frameworks you are looking into. If you use something like .NET, then look into IdentityServer; version 4 is for .NET Core, 3 for anything below.
I also have a pretty long article on this subject if you are interested:
https://eidand.com/2015/03/28/authorization-system-with-owin-web-api-json-web-tokens/
Hopefully all this clarifies some of the questions you have.
-- Added to answer a question in comments.
The tokens contain all the information they need to be authenticated by the resource server correctly, you don't need to store them in a database for that. As I already said, you can store them but in my mind this makes them less secure. Don't forget you control what goes into a token so you can add usernames if that's what you need.
Imagine this scenario: you want to authenticate the application and the user in the same call to the Authorization Server. Do OAuth2 in the standard way, which means authenticating the application first based on the client id/client secret. If that passes, then do the user authentication next. Add the username or user id to the token you generate, and any other bits of information you need. This means that the resource server can safely assume that the username passed to it in the token has already been validated by the authentication server; otherwise no token would have been generated in the first place.
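For example, minting such a token could look like the sketch below, here using PyJWT with an illustrative secret and claim set:

    import time
    import jwt  # PyJWT

    SIGNING_SECRET = "change-me"  # illustrative; use a proper key in practice

    def issue_token(client_id, username):
        """Embed the already-validated user in the token so the resource server can trust it."""
        now = int(time.time())
        claims = {
            "iss": "https://auth.example.com",
            "sub": username,
            "client_id": client_id,
            "roles": ["user"],
            "iat": now,
            "exp": now + 30 * 60,  # 30 minutes, matching the validity mentioned above
        }
        return jwt.encode(claims, SIGNING_SECRET, algorithm="HS256")

    def verify_token(token):
        """Resource server side: verify the signature and expiry, then trust the claims."""
        return jwt.decode(token, SIGNING_SECRET, algorithms=["HS256"])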
I prefer to keep these two separate myself, meaning let the AS (Authorization Server) deal with the application-level security. Then on the RS (Resource Server) side you have an endpoint like ValidateUser, for example, which takes care of the user validation, after which you can do whatever you need. Pick whichever feels more appropriate for your project, I'd say.
One final point: ALWAYS make sure all your API calls (both the AS and the RS are just APIs, really) are made over HTTPS, and never, ever transmit any important information via a GET call, which would put it in the URL where it can be intercepted. Both headers and the POST body are encrypted and secure over HTTPS.
This should address both your questions, I believe.

Should an OAuth server give the same accessToken to the same client request?

I am developing an OAuth2 server and I've stumbled upon this question.
Let's suppose a scenario where my tokens are set to expire within one hour. In this timeframe, some client goes through the implicit flow fifty times using the same client_id and same redirect_uri. Basically the same everything.
Should I give it the same accessToken generated on the first request for the subsequent ones until it expires, or should I issue a new accessToken on every request?
The benefit of sending the same token is that I won't leave stale and unused tokens of a client on the server, minimizing the window for an attacker trying to guess a valid token.
I know that I should rate-limit things and I am doing it, but in the case of a large botnet attack from thousands of different machines, some limits won't take effect immediately.
However, I am not sure about the downsides of this solution and that's why I came here. Is it a valid solution?
I would rather say - no.
Reasons:
You should NEVER store access tokens in plain text on the Authorization Server side. Access tokens are credentials and should be stored hashed (see the sketch after this answer). Salting might not be necessary since they are randomly generated strings anyway. See the OAuth RFC, point 10.3.
Depending on how you handle subsequent requests: an attacker who knows that a certain resource owner is using your service can repeat requests for the used client id. That way the attacker may be able to impersonate the resource owner. If you really return the same token, then at least ensure that you authenticate the resource owner every time.
What about the "state" parameter? Will you consider requests to be the "same" if the state parameter is different? If not, then a botnet attack will simply use a different state every time and force you to issue new tokens.
As an addition: generally, defending against a botnet attack via application logic is very hard. The server exposing your AS to the internet should take care of that. At the application layer you should make sure it does not go down from small-bandwidth attacks.
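A minimal sketch of the hashed-storage point from the first reason above (the store is just a dict for illustration):

    import hashlib
    import secrets

    def issue_and_store(token_store, client_id):
        """Generate a random token, persist only its SHA-256 digest, return the raw value once."""
        raw_token = secrets.token_urlsafe(48)
        digest = hashlib.sha256(raw_token.encode()).hexdigest()
        token_store[digest] = {"client_id": client_id}
        return raw_token

    def lookup(token_store, presented_token):
        """Hash the presented token and look it up; the plain text is never stored."""
        digest = hashlib.sha256(presented_token.encode()).hexdigest()
        return token_store.get(digest)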
You can return the same access_token if it is still valid, there's no issue with that. The only downside may be in the fact that you use the Implicit flow and thus repeatedly send the - same, valid - access token in a URL fragment which is considered less secure than using e.g. the Authorization Code flow.
As a rule of thumb, never reuse keys; this brings additional security to the designed system in case a key is captured.
You can send a different access token when requested after proper authentication, and also send a refresh token along with your access token.
Once the access token expires, you should inform the user, and the user should re-request a new access token by providing the one-time-use refresh token previously given to them, skipping the need for re-authentication; you should then provide a new access token and refresh token.
To resist attacks with fake refresh tokens, you should blacklist them along with their originating IP after a few warnings.
PS: Never use predictable tokens. At least make brute-force attacks extremely difficult by using totally random, long alphanumeric strings. I would suggest bin2hex(openssl_random_pseudo_bytes(512)) if you are using PHP.
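For comparison, the same idea with Python's standard library (the lengths are arbitrary):

    import secrets

    access_token = secrets.token_urlsafe(64)   # 64 random bytes, base64url-encoded
    refresh_token = secrets.token_hex(64)      # 64 random bytes as hex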
