I'm just curious - do I need to keep the client_secret from Google/Facebook/other OAuth 2.0 providers in a 'secret' place? As far as I can see, there's very little that can be done with the client_secret parameter, as long as I specify very restrictive callback URLs.
So is it safe, for instance, to commit 'secret' keys to GitHub/Bitbucket/etc. in a public repository for a live web project?
As far as I know, the client_secret has nothing in common with the developer account on Google/Facebook, so it's not possible to use it for hijacking or spoofing.
Am I missing something? Thanks!
Try to keep it as secret as possible.
For a web app it is crucial to keep it secret; the security of the whole flow relies on that.
For native apps, do your best. It can always be reverse engineered from your binary, but in some cases that may not be trivial. If possible, don't commit it in plaintext to GitHub or similar; you can inject it as part of the build process.
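For context, here is roughly where the secret comes into play - a minimal sketch of the authorization-code token exchange (Python purely for illustration; every endpoint and credential below is a placeholder):

```python
import requests

# The client_secret authenticates the *client* (not the user) when the
# authorization code is redeemed. Whoever holds it can impersonate the app.
# All values below are placeholders.
resp = requests.post(
    "https://oauth.example.com/token",
    data={
        "grant_type": "authorization_code",
        "code": "<code-from-redirect>",
        "redirect_uri": "https://myapp.example.com/callback",
        "client_id": "my-client-id",
        "client_secret": "my-client-secret",  # the value that must stay secret
    },
    timeout=10,
)
tokens = resp.json()  # access_token, refresh_token, ...
```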
I am very aware that you can't store your secret in a front-end app. However, is there any way to work around this and still keep the benefits while using the JavaScript adapter?
I'm guessing the JWT token option leads to the same issue.
I've read about using two different clients: one as a confidential admin client and the other as a public client. Although I don't see how that is any better, as the secret will still be held publicly, just in a different location.
Should I look deeper into this? Are there any other workarounds?
Thanks
Solution: This won't turn your client into a confidential one, but you can add a layer of security by using PKCE.
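For illustration, generating the verifier/challenge pair looks roughly like this (a minimal sketch following RFC 7636; the client_id is made up):

```python
import base64
import hashlib
import secrets

# High-entropy code_verifier (43-128 chars, per RFC 7636).
code_verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()

# Derive the code_challenge with the S256 method.
digest = hashlib.sha256(code_verifier.encode()).digest()
code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

# The authorization request carries the challenge; the later token request
# carries the verifier, proving both came from the same party.
auth_params = {
    "response_type": "code",
    "client_id": "my-public-client",  # hypothetical public client_id
    "code_challenge": code_challenge,
    "code_challenge_method": "S256",
}
```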
With the OAuth 2.0 PKCE flow for installed apps (e.g. a desktop app/CLI/client library), it seems that nothing prevents an attacker from:
obtaining the client_id by using the original app (the client_id is public and can easily be copied from the browser bar/source code)
making a fake app that mimics the original app
using the fake app to lure the user into granting access, and thus obtaining a refresh token, which essentially means full access within the requested scopes
Without PKCE, it's hard to fake an app and obtain a refresh token, because that would require the attacker to obtain the client_secret. It seems to me that, although PKCE offers security improvements over the implicit flow, it makes it much easier to masquerade as authentic apps that use OAuth 2.0?
I'm using the Google Cloud SDK (gcloud), and it seems that it has a client_id (and even many client_id/client_secret pairs) hard-coded into the source code, which is distributed to the client. I doubt there's anything to stop attackers from faking gcloud and thus gaining access to a user's GCP environment (for proof, run gcloud auth login and it will show you in the console the URL an attacker needs). Could anyone clarify/help me understand what's going on?
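To make the concern concrete, here is a sketch of what such an authorization URL contains - every value is public or attacker-choosable (the endpoint and client_id are placeholders, not Google's):

```python
from urllib.parse import urlencode

# Nothing secret is needed to construct an installed-app authorization URL:
# the client_id is copied from the real app, and the PKCE challenge is the
# attacker's own.
params = {
    "response_type": "code",
    "client_id": "1234-copied-from-real-app",  # public, taken from the binary
    "redirect_uri": "http://127.0.0.1:8085/",  # loopback, as in RFC 8252
    "scope": "email profile",
    "code_challenge": "<attacker's own PKCE challenge>",
    "code_challenge_method": "S256",
}
print("https://auth.example.com/authorize?" + urlencode(params))
```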
Private URI schemes are probably the best you can do on desktop, but they are not perfect, as you say. That is what I use for my Desktop Code Sample, but ideally I'd also like to resolve the above concern.
For mobile you can use Claimed HTTPS Schemes to solve the problem - see the answer I added to the post sllopis sent.
I would be aware of Updated OAuth 2.1 Guidance for Native Apps - see section 10 - but I don't think you can fully solve this problem.
It is expected that end users are careful about desktop apps they install, to reduce risks for this scenario. Hopefully operating system support will enable better cryptographic options in future.
Just wanted to follow up on this because I had the same question myself, but also answered it myself and I wanted to add something that wasn't said here:
When you set up the application on the OAuth2 server, you have to register a set of redirect_uris - the allowed places to return to after authorization is complete. This means that someone mounting a phishing attack like the one you described cannot return to their own app after login, and will never receive the code.
There is a separate attack where you try to return to a legitimate app from an illegitimate app; however, this is solved by the inclusion of the state variable.
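To make the state check concrete, a minimal sketch (the session store here is a stand-in; a real app would use its framework's per-user session):

```python
import secrets

session = {}  # stand-in for a real per-user session store

# Before redirecting to the authorization endpoint, bind a random state
# value to the user's session and include it in the authorization URL.
state = secrets.token_urlsafe(32)
session["oauth_state"] = state

# On the callback, reject any response whose state doesn't match: a code
# delivered via an illegitimate flow won't carry the state we issued.
def handle_callback(params: dict) -> str:
    if params.get("state") != session.pop("oauth_state", None):
        raise ValueError("state mismatch - possible CSRF or injected code")
    return params["code"]
```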
I've read a lot about the different flows (authorization code, implicit, hybrid and some extensions such as PKCE). Now I'm on the authorization code flow with PKCE.
PKCE ensures that the party who initiates the flow is the same party who exchanges the authorization code for an access token. That is nice and OK.
When using this flow without a client_secret (which is recommended for SPA/JavaScript applications), there is no guarantee that the client is the known/original client. So the 'consent' the user gave is of no value. Huh?
I am working on a native client (a publicly downloadable binary). A secret cannot be considered confidential when baked into the binary; it can be decompiled, for example.
Now I'm in doubt. Which is better: bake the secret into the binary so that there is some extra layer of assurance that the client is the known client, or stop asking for 'consent' and give the same client_id to the whole world, relying only on the user credentials?
Or is there something wrong with my story?
Very good question, and it made me realise a gap in my understanding. It is the role of the redirect URI to deal with this risk. In the web/HTTPS case, the only hack that could work would be to edit the user's hosts file. In the native case it is less perfect, and your question is covered below. Generally our best bet is to follow recommendations/standards - but they have plenty of problems! https://web-in-security.blogspot.com/2017/01/pkce-what-cannot-be-protected.html?m=1
To others reading this case: I've since read a lot more.
Client impersonation is not easily fixable.
RFC 8252 seems to be the most applicable article, with recommendations for native apps - https://www.rfc-editor.org/rfc/rfc8252
"Claimed ‘https’ scheme" is mentioned as the best solution (iOS, Android, and maybe UWP apps).
Since I'm working on a native Windows, non-UWP application, I can't use this. As far as I can see, the "Web Authentication Broker based on the app SID" is possible for my situation.
The other method is to accept that the client is not known/identified and ask for 'consent' every time the client would access personal data.
I'm looking into Slack's integrations and well, I'll paste an edited version of mine here:
API Token: ecbr-33598907266-3sArMzpiKksmA73mRKGja1GB
Webhook URL: https://hooks.slack.com/services/X0F5H7V8S/P15GYA26D/gcHAYaY0kZFCirN1aywJTF0Q
I can see both of these being a case of security through obscurity, but can't they still be guessed? I know an attacker would have to run through a huge number of combinations, so it's not trivially guessable. I can see a countermeasure being rate limiting, i.e. stopping a server from requesting all the possibilities, thereby making it harder to guess. Probably a bigger vulnerability is leaking the token somehow... But I'm curious to know how safe OAuth tokens and GUID URLs are in general.
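For a rough back-of-the-envelope estimate (assuming the random part of the token is ~24 characters drawn from a 62-symbol alphabet, which is a guess based on the sample above):

```python
import math

# Search space for a 24-character base62 token like the sample above.
alphabet = 62
length = 24
print(f"search space: {alphabet ** length:.3e}")               # ~1.0e43 guesses
print(f"bits of entropy: {length * math.log2(alphabet):.1f}")  # ~142.9 bits
```

At that size, brute-force guessing is infeasible even without rate limiting; leakage is the realistic threat.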
I have an app that uses a single-user OAuth token. I can store the four values (consumer key/secret, token/secret) directly inside the app, but that's not recommended and I don't want the secrets to be checked into source code. The app doesn't use a database. I know that however I store them, someone with access to the server could figure them out, but I'd like to at least get them out of the source code. I've thought of passing them in as Passenger environment variables or storing them in a separate file on the server, but are there better ways? Is there any point in encrypting them, since anyone who could see them would also have the access needed to decrypt them?
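Something like the following is what I had in mind for the environment-variable option (a sketch; the variable names are made up, and they would be set via Passenger's SetEnv or a file excluded from version control):

```python
import os

# Read the four OAuth values from the environment instead of source code.
CONSUMER_KEY = os.environ["OAUTH_CONSUMER_KEY"]
CONSUMER_SECRET = os.environ["OAUTH_CONSUMER_SECRET"]
ACCESS_TOKEN = os.environ["OAUTH_ACCESS_TOKEN"]
ACCESS_TOKEN_SECRET = os.environ["OAUTH_ACCESS_TOKEN_SECRET"]
```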
Not having the keys stored in the source code is actually a bad practice according to the most agile setup (continuous deployment).
But, by what you say, you want to have two groups: those who write the code, and those who deploy it. Those who deploy it have access to the keys and, in the most secure setting, must NOT touch the code of the application. You can still make OAuth work by having the code authenticate to a system that proxies all the authorization and authenticates the application. Such keys (app -> auth middleman) can live in the repository, as they are internal.
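A minimal sketch of such a middleman (Flask and requests here purely for illustration; the endpoint and environment-variable names are made up):

```python
import os

import requests
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

INTERNAL_KEY = os.environ["INTERNAL_APP_KEY"]      # shared with the app, may live in the repo
CLIENT_ID = os.environ["OAUTH_CLIENT_ID"]          # known only to the middleman
CLIENT_SECRET = os.environ["OAUTH_CLIENT_SECRET"]  # known only to the middleman

@app.route("/token", methods=["POST"])
def exchange_code():
    # The application authenticates with the internal key; only this
    # service ever sees the real OAuth credentials.
    if request.headers.get("X-Internal-Key") != INTERNAL_KEY:
        abort(403)
    resp = requests.post(
        "https://oauth.example.com/token",  # placeholder provider endpoint
        data={
            "grant_type": "authorization_code",
            "code": request.form["code"],
            "redirect_uri": request.form["redirect_uri"],
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
        },
        timeout=10,
    )
    return jsonify(resp.json())
```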
Any other setup - an authentication library created by those who deploy, encrypted keys, anything else - can be broken by those who write the code. If you don't trust them enough to have access to the keys, you probably don't trust them enough not to try to jailbreak the keys.
The resulting deployment scheme is much more complicated and therefore much more prone to errors. But it is otherwise more secure. You still have to trust someone: those who install the operating system, the proxy's system middleware, those who maintain the proxy's machine(s), those who can log on to it, and so on. If the group of people with access to the keys is small enough, and you trust them, then you've gained security. Otherwise, you've lost security and the ability to respond to change, and wasted a lot of people's time.
Unfortunately, all authorization schemes require you to trust someone. There's no way around it. This is valid for any application/framework/authorization scheme, not only Sinatra, Rails, OAuth, Java, RSA signatures, elliptic curves, and so on.