Reset Client Secret OAuth2 - Do clients need to re-grant access? - oauth-2.0

As a security best practice, when an engineer/intern leaves the team, I want to reset the client secret of my Google API console project.
The project has OAuth2 access granted by a bunch of people, and I need to ensure that those grants (as well as the associated refresh tokens) will not stop working. Unfortunately, I've not been able to find documentation that explicitly addresses this.

Yes. Resetting the client secret will immediately (in Google OAuth 2.0, possibly after a delay of a few minutes) invalidate any authorization code or refresh token issued to the client.
Resetting the client secret is a countermeasure against abuse of a leaked secret for confidential clients, so it makes sense that a re-grant is required once the secret has been reset.
I did not find any Google documentation that states this explicitly either, but in my experience a reset does impact existing users; you can also test it yourself.
In our own work, we developers never touch the production secret; we use test clients, and only a very few product-ops people can access it. So I think you should try your best to narrow the visibility of the secret within your team. Resetting it is not a good routine practice.
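A quick way to confirm the behaviour is to replay a stored refresh token against Google's token endpoint after the reset. A minimal sketch in Python using requests; the client ID, new secret, and refresh token below are placeholders:

```python
import requests

# Placeholder values; substitute your own project's credentials.
CLIENT_ID = "1234567890-example.apps.googleusercontent.com"
NEW_CLIENT_SECRET = "secret-value-after-the-reset"
OLD_REFRESH_TOKEN = "refresh-token-issued-before-the-reset"

# Standard OAuth 2.0 refresh_token grant against Google's token endpoint.
resp = requests.post(
    "https://oauth2.googleapis.com/token",
    data={
        "grant_type": "refresh_token",
        "refresh_token": OLD_REFRESH_TOKEN,
        "client_id": CLIENT_ID,
        "client_secret": NEW_CLIENT_SECRET,
    },
)

# After the reset, expect an error response instead of a new access token,
# which means the affected users will have to grant access again.
print(resp.status_code, resp.json())
```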

Related

Which OAuth 2.0 flow to use?

I have read RFC 6749 and https://auth0.com/docs/authorization/which-oauth-2-0-flow-should-i-use, but I couldn't seem to find the flow that exactly fits my use case.
There will be a native app (essentially a GUI) that spins up a daemon on the end user's device. The daemon will call internal APIs hosted by the backend. The internal APIs don't need to verify the user's identity, but it's preferred that the device identity can be verified to some extent. There will be an OAuth authorization server in the backend to handle the logic, but I couldn't identify the correct flow to use for this case.
Originally I thought this was a good fit for the client credentials grant type, but then I realized that this is probably a public client, while client credentials is supposed to be used for confidential clients only.
I then came across the authorization code flow with PKCE. This seems to be the recommended flow for native apps, but it doesn't make much sense to me here: it involves redirects and user interaction, while the APIs being called are internal and the user shouldn't know about this back-channel traffic at all. Also, the resource owner in this case should be the same as the client, i.e. the machine, not the user.
So which flow should I use?
Thanks a lot for the help!
Client Credentials feels like the standard option but there are a few variations:
SINGLE CLIENT SECRET
This is not a good option since anyone who captures a message in transit can access data for any user.
CLIENT SECRET PER USER
Using Dynamic Client Registration might be an option, where each instance of the app gets its own client ID and secret, linked to a user.
The daemon then uses these per-instance credentials, and if the secret is somehow captured in transit it only impacts one user.
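Below is a rough sketch of how per-instance registration could look with Dynamic Client Registration (RFC 7591); the registration endpoint URL and the initial access token are assumptions about your authorization server's setup:

```python
import requests

# Hypothetical registration endpoint and initial access token; both depend on
# how your authorization server exposes Dynamic Client Registration (RFC 7591).
REGISTRATION_ENDPOINT = "https://auth.example.com/oauth2/register"
INITIAL_ACCESS_TOKEN = "token-proving-this-installation-may-register"

# Each app instance registers itself and receives its own client_id and secret.
resp = requests.post(
    REGISTRATION_ENDPOINT,
    headers={"Authorization": f"Bearer {INITIAL_ACCESS_TOKEN}"},
    json={
        "client_name": "daemon-on-user-device",
        "grant_types": ["client_credentials"],
        "token_endpoint_auth_method": "client_secret_basic",
    },
)
registration = resp.json()

# Store the per-instance credentials securely (e.g. in the OS keychain).
client_id = registration["client_id"]
client_secret = registration["client_secret"]
```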
STRONGER CLIENT SECRETS
The client credentials grant can also be used with stronger secrets such as a Client Assertion, which can be useful if you want to avoid sending the actual secret.
This type of solution would involve generating a key pair per user when they authenticate, then storing the private key on the device, e.g. in the keychain.
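A minimal sketch of a client credentials request authenticated with a signed client assertion (RFC 7523), assuming Python with PyJWT; the token endpoint URL, client ID, and key file are placeholders:

```python
import time
import uuid

import jwt  # PyJWT
import requests

TOKEN_ENDPOINT = "https://auth.example.com/oauth2/token"  # assumed endpoint
CLIENT_ID = "daemon-client-id"                            # assumed client ID

# Private key generated for this user/device and kept in secure storage.
with open("device_private_key.pem", "rb") as f:
    private_key = f.read()

now = int(time.time())
assertion = jwt.encode(
    {
        "iss": CLIENT_ID,          # issuer and subject are the client itself
        "sub": CLIENT_ID,
        "aud": TOKEN_ENDPOINT,     # audience is the token endpoint
        "iat": now,
        "exp": now + 300,          # short-lived assertion
        "jti": str(uuid.uuid4()),  # unique ID so the assertion cannot be replayed
    },
    private_key,
    algorithm="RS256",
)

# Client credentials grant that proves possession of the key instead of
# sending a shared secret over the wire.
resp = requests.post(
    TOKEN_ENDPOINT,
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_assertion_type": "urn:ietf:params:oauth:client-assertion-type:jwt-bearer",
        "client_assertion": assertion,
    },
)
print(resp.json())
```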

Is there anything in the standard for Open ID Connect to terminate a session?

Is there a standard mechanism in Open ID Connect to kill an active session?
Say a client has an access token set to expire in 2 minutes, and someone from a central location logs the user out. The idea is to prevent that access token from being usable on the very next request, as opposed to waiting until the token expires.
This would require Web APIs to contact the authorization server on every single request, which would cause performance problems.
It is standard to use short-lived access tokens as the best middle ground, most commonly around 30 or 60 minutes by default.
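For reference, "contacting the authorization server on every request" would typically mean token introspection (RFC 7662). A minimal sketch, with an assumed introspection endpoint and resource-server credentials:

```python
import requests

# Assumed introspection endpoint and resource-server credentials.
INTROSPECTION_ENDPOINT = "https://auth.example.com/oauth2/introspect"
RS_CLIENT_ID = "my-api"
RS_CLIENT_SECRET = "my-api-secret"

def token_is_active(access_token: str) -> bool:
    """Ask the authorization server whether the token is still valid (RFC 7662)."""
    resp = requests.post(
        INTROSPECTION_ENDPOINT,
        auth=(RS_CLIENT_ID, RS_CLIENT_SECRET),
        data={"token": access_token},
    )
    # A revoked or expired token comes back as {"active": false}.
    return resp.json().get("active", False)
```

Doing this on every API request is exactly the per-request round trip that causes the performance concern mentioned above.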
When reviewing OAuth behaviour in areas like this it can be worth comparing to older systems:
It was never possible to revoke cookies in the manner you describe, so security is not made worse by using OAuth 2.0 based solutions.
Typically it is possible to centrally revoke refresh tokens though, so that the next token refresh requires a new login.
There are a couple of drafts that are helpful depending on your specific implementation:
https://openid.net/specs/openid-connect-session-1_0.html
https://openid.net/specs/openid-connect-backchannel-1_0.html
https://openid.net/specs/openid-connect-frontchannel-1_0.html
Several OIDC products are using these methods currently:
https://backstage.forgerock.com/docs/am/6/oidc1-guide/#openam-openid-session-management
https://www.ibm.com/support/knowledgecenter/SSD28V_liberty/com.ibm.websphere.wlp.core.doc/ae/twlp_oidc_session_mgmt_endpoint.html
Several others do as well.

What the attacker could do if he obtains application's client_secret?

I've searched a lot online but with no luck. I want to know what an attacker could do if they obtain the client_id and client_secret of a Google OAuth2 app. What information would they be able to see? Can they edit the app configuration? Can they see other people's information?
I haven't worked with OAuth 2.0 before, so please keep your answer simple.
Thanks!
I want to know what an attacker could do if they obtain the client_id and client_secret of a Google OAuth2 app.
The OAuth 2 Client Secret must be protected. However, if it is leaked, the attacker needs one more item: a valid redirect_uri. If the attacker has both, along with the (public) Client ID, they might be able to generate OAuth tokens for your account.
The redirect_uri is often still valid for http://localhost because developers forget to remove this URI after development completes. This means that someone could run a local server and generate OAuth tokens, which is a big security hole.
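To make the moving parts concrete, this is roughly what the token exchange looks like; every value an attacker would need (authorization code, client_id, client_secret, registered redirect_uri) appears in this single request. All values below are placeholders:

```python
import requests

# Placeholder values. In the attack scenario, the authorization code would be
# captured at a redirect_uri the attacker controls, such as a forgotten
# http://localhost registration.
resp = requests.post(
    "https://oauth2.googleapis.com/token",
    data={
        "grant_type": "authorization_code",
        "code": "authorization-code-captured-at-the-redirect-uri",
        "client_id": "1234567890-example.apps.googleusercontent.com",
        "client_secret": "leaked-client-secret",
        "redirect_uri": "http://localhost",  # must match a registered URI
    },
)

# On success the response contains an access_token (and possibly a refresh_token)
# scoped to whatever the victim consented to.
print(resp.json())
```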
What can they do with the OAuth tokens? It depends...
What information would they be able to see? Can they edit the app configuration? Can they see other people's information?
You did not specify whose OAuth system, what it is authorizing, etc. Therefore the answer is "it depends".
For Google Cloud, the hacker will need the credentials for someone authorized in Google Cloud. Some systems have very poor security, so as they say, anything can happen and often does with poorly designed security.
In a properly designed system, there are several layers that the hacker needs to get through. Having the client secret helps a lot, but it is not a total security failure: the hacker can only authenticate with the system. The next layer, authorization, still needs to be breached. In a correctly designed system, the hacker would need to authenticate as a user with authorized permissions. If they have that, then you are in big trouble; they might have the keys to do anything they want. Again, it depends.

What's so secret about the OAuth secret?

I'm writing a web service which uses OAuth2 for authorization. I'm using C# and WCF, though this isn't really pertinent to my question. Having never used OAuth before, I've been doing my research. I'm on the "verify they're actually authorized to use this service" end of things. I think I have a pretty good idea of how OAuth2 works now, but one aspect of it still confounds me.
OAuth2 is token-based. The token is just text and contains some information, including a "secret" that only your application (the web service, in my case) and the Authorization Server know. The secret can be just a text phrase or a huge string of semi-random characters (sort of like a GUID). This "proves" the user contacted the Authorization Server and got the secret from it. What confuses me is that this only seems to prove the user contacted the Authorization Server sometime in the past. In fact, it doesn't even prove that; it just proves that the user knows the secret. The rest of the token (such as role, duration, other stuff) could all be faked. Once the user gets one token for the service they want access to, they could whip up new tokens with falsified information as much as they like. In fact, there could be numerous servers set up with thousands of "secrets" for nefarious individuals to use at will. Of course, this wouldn't happen often, but it seems very possible.
Am I correct, or is there something I'm missing? Is this a known weakness of OAuth2?
Whenever the Resource Server (or "service") receives a token, it needs to validate it. Depending on the token type, it can check the token's signature, which was created with the private key that the Authorization Server holds, or it can call the Authorization Server to validate the token. Either way a user cannot forge a token: it is not feasible to forge the signature, and the Authorization Server will not verify a token that it did not issue.
FWIW: you seem to be conflating the "token" and "client secret" and perhaps even the private key of the Authorization Server; they're all different concepts.
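As an illustration of the first option (local signature validation), here is a minimal sketch using Python and PyJWT; the issuer, audience, and public-key file are assumptions for the example:

```python
import jwt  # PyJWT
from jwt import InvalidTokenError

# Public key published by the Authorization Server; the matching private key
# never leaves that server, which is what makes forged tokens detectable.
with open("authorization_server_public_key.pem", "rb") as f:
    public_key = f.read()

def validate_access_token(token: str) -> dict:
    """Verify the token's signature and standard claims before trusting its contents."""
    try:
        return jwt.decode(
            token,
            public_key,
            algorithms=["RS256"],
            audience="my-web-service",          # assumed audience value
            issuer="https://auth.example.com",  # assumed issuer value
        )
    except InvalidTokenError as exc:
        # A tampered or home-made token fails here because its signature
        # was not produced by the Authorization Server's private key.
        raise PermissionError(f"Rejected token: {exc}")
```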

Server side removal of Oauth token

If a user wants to remove themselves from our service, we delete all of their data from our database, including OAuth tokens. The OAuth tokens we have are secure and persistent. As a best practice we would like to totally invalidate the tokens, as if the user went to their Google Accounts page and removed the grant there. Reading the OAuth documentation, it was not clear to me whether this is possible, because all of the examples pertained to single-session or non-secure cases (and excuse my lack of "What did you try?"-ism, but I'm trying to put a quick plan together on how to do this).
So:
1) Is this possible, preferably with OAuth 1.0?
2) How do I do it?
Yes, you can revoke tokens programmatically as if the user revoked access in their accounts settings page.
For AuthSub and OAuth 1.0, use the AuthSubRevokeToken endpoint by making an OAuth-signed request to:
https://www.google.com/accounts/AuthSubRevokeToken
For OAuth 2.0, use the revocation endpoint like:
https://accounts.google.com/o/oauth2/revoke?token={refresh_token}
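As a minimal sketch of the OAuth 2.0 case in Python (the stored refresh token below is a placeholder):

```python
import requests

# Placeholder; use the refresh token you have stored for the departing user.
refresh_token = "stored-refresh-token-for-the-user"

# Revoking the token invalidates the grant, just as if the user had removed
# access from their Google Accounts page.
resp = requests.post(
    "https://accounts.google.com/o/oauth2/revoke",
    params={"token": refresh_token},
)

# Google returns HTTP 200 on success; anything else usually means the token
# was already invalid or the request was malformed.
print(resp.status_code)
```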
