I am well aware that you can't store your client secret in a front-end app; however, is there any way to work around this and still keep the benefits of using the JavaScript adapter?
I'm guessing the JWT token option leads to the same issue.
I've read about using two different clients, one as a confidential admin client and the other as a public client. Although I don't see how that is any better, as the secret would still be held publicly, just in a different location.
Should I look deeper into this, or are there any other workarounds?
Thanks
Solution: This won't turn your client into a confidential one, but you can add a layer of security by using PKCE.
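To make the mechanism concrete, here is a minimal sketch of how the PKCE verifier and S256 challenge are derived (Ruby is used only for illustration, the variable names are my own, and the same logic applies in whatever language your client uses):

require "securerandom"
require "digest"
require "base64"

# High-entropy random string that never leaves the client.
code_verifier = SecureRandom.urlsafe_base64(64)

# S256 challenge: BASE64URL(SHA-256(verifier)) without padding.
code_challenge = Base64.urlsafe_encode64(
  Digest::SHA256.digest(code_verifier),
  padding: false
)

# The authorization request carries code_challenge and
# code_challenge_method=S256; the later token request carries
# code_verifier, so the server can confirm both requests came
# from the same client instance.

Because the verifier is generated fresh for each authorization request, an attacker who intercepts the authorization code cannot redeem it without the matching verifier.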
Related
With OAuth 2.0 PKCE Flow for Installed App (e.g. a desktop app/cli/client library), it seems that nothing is preventing an attacker to:
obtain client_id by using the original app (client_id is public and can be easily copied from browser bar/source code)
make a fake app to mimic original app
use the fake app to lure the user into granting access and thus obtain a refresh token, which essentially means full access within the requested scopes
Without PKCE, it's hard to fake an app and obtain a refresh token, because that would require the attacker to obtain the client_secret. It seems to me that, although PKCE offers security improvements over the implicit flow, it makes it much easier to masquerade as authentic apps that use OAuth 2.0?
I'm using googlecloudsdk (gcloud), and it seems that it has a client_id (and even many client_id/client_secret pairs) hard-coded into the source code, which is distributed to the client. I doubt there's anything to stop attackers from faking gcloud and thus gaining access to a user's GCP environment (for proof, run gcloud auth login and it will show you the url in the console that an attacker needs). Could anyone clarify/help me understand what's going on?
Private URI schemes are probably the best you can do on desktop, but, as you say, they are not perfect. They are what I use for my Desktop Code Sample, but ideally I'd also like to resolve the above concern.
For mobile you can use Claimed HTTPS Schemes to solve the problem - see the answer I added to the post sllopis sent.
I would be aware of Updated OAuth 2.1 Guidance for Native Apps - see section 10 - but I don't think you can fully solve this problem.
It is expected that end users are careful about the desktop apps they install, to reduce the risks of this scenario. Hopefully operating system support will enable better cryptographic options in future.
Just wanted to follow up on this because I had the same question myself, but also answered it myself and I wanted to add something that wasn't said here:
When you set up the application on the oauth2 server, you have to set up a number of redirect_uris, allowed places to return to after authorization is complete. This means that someone who creates a phishing attack like the one you described cannot return to their own app after login, and will never receive the code.
There is a separate attack where you try to return to a legitimate app from an illegitimate app; however, this is solved by the inclusion of the state variable, as sketched below.
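As a rough, self-contained illustration of how state ties the callback to the session that started the flow (the endpoint, client_id and redirect_uri below are placeholders I made up):

require "securerandom"
require "erb"

# Placeholder values standing in for your real client registration.
authorization_endpoint = "https://auth.example.com/authorize"
client_id    = "my-public-client"
redirect_uri = "https://app.example.com/callback"

# 1. Before redirecting, generate a random state and remember it
#    (in a real app this would live in the user's session).
stored_state = SecureRandom.hex(16)

authorize_url =
  "#{authorization_endpoint}?response_type=code" \
  "&client_id=#{ERB::Util.url_encode(client_id)}" \
  "&redirect_uri=#{ERB::Util.url_encode(redirect_uri)}" \
  "&state=#{stored_state}"

# 2. When the callback arrives, only accept the authorization code if the
#    returned state matches what was stored; a response that did not
#    originate from this session fails the check.
returned_state = stored_state # in a real callback this is read from the query string
code_accepted  = (returned_state == stored_state)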
I've read a lot about the different flows (authorization code, implicit, hybrid and some extensions such as PKCE). Now I'm on the authorization code flow with PKCE.
PKCE ensures that the party that initiated the flow is the same party that exchanges the authorization code for an access token. That is nice and OK.
When using this flow without a client_secret (which is recommended for SPA/JavaScript applications) there is no guarantee that the client is the known/original client. So the 'consent' the user gave is of no value. Uhh?
I am working on a native client (a publicly downloadable binary). A secret cannot be considered confidential when it is baked into the binary; it can be extracted by decompiling, for example.
Now I'm torn. Which is better: bake the secret into the binary so that there is some extra layer of assurance that the client is the known client, or stop asking for 'consent' and give the same client_id to the whole world, relying only on the user's credentials?
Or is there something wrong with my story?
Very good question, and it made me realise a gap in my understanding. It is the role of the redirect URI to deal with this risk. In the web/HTTPS case the only hack that could work would be to edit the user's hosts file. In the native case it is less perfect, and your question is covered below. Generally our best bet is to follow recommendations/standards - but they have plenty of problems! https://web-in-security.blogspot.com/2017/01/pkce-what-cannot-be-protected.html?m=1
For others reading this: I've since read a lot more.
Client impersonation is not easily fixable.
RFC8252 seems to be the most applicable article with recommendations for native apps - https://www.rfc-editor.org/rfc/rfc8252
"Claimed ‘https’ scheme" is mentioned as the best solution (IOS, Android and maybe UWP apps).
Since I'm working on a native Windows, non-UWP application I can't use this. As far as I can see the "Web Authentication Broker based on the app SID" is possible for my situation.
The other option is to accept that the client is not known/identified and ask for 'consent' every time the client accesses personal data.
I'm trying to implement an OAuth2 provider for my web service.
It seems easier to implement the authorization server together with the resource server. The specification doesn't say anything about the communication between them.
Does anybody see a reason not to do this?
I had a post yesterday regarding this issue; I hope we can answer each other's questions. To answer yours directly, I think it depends very much on the load your app has to handle. If you have to scale out to many resource servers, keeping a separate auth server is best, because you can centrally manage user credentials and access tokens in one place.
Here is my question. I believe if you have tried something similar to mine, you can give me some suggestions.
OAuth - Separating Auth Server and Resource server returns invalid token when accessing protected resource
I'm starting to build a new system using .NET MVC - a relatively large-scale business management platform. There's some indication that we'll open the platform to the public once it is released and passes the market test.
We will be using ExtJS for the front-end, which leads us to return most data in JSON format - this makes me wonder whether I should learn OAuth right now and try to embed the OAuth concept right from the beginning.
Basically, the platform we want to create will initially be fully implemented internally, with a widget system; our boss wants to learn from Twitter and build just a core database, spreading the different features into other modules that can be integrated into the platform. To secure it in the beginning, I proposed an intranet implementation, which is safer and doesn't require much authentication; however, they think it would be a once-and-for-all effort if we can get a good implementation like OAuth into the platform from the start. (We are a team of 6 and none of us knows much about OAuth, in fact!)
I don't know much about OAuth, so if it's worth implementing at the beginning of our system, I'll have to take a look and cast my vote for OAuth in our meeting next week. This may affect how we implement the whole web service, so may I ask anyone who has built large-scale web services/applications before to share some thoughts and advice?
Thanks.
OAuth 1 is nice if you have to support plain HTTP connections. If you can simply enforce HTTPS connections for all users, you might want to use OAuth 2, which is hardly more than a shared token between the client and server that's sent with every request, plus a pre-defined way to get permission from the user via a web interface.
If you have to accept plain HTTP as well, OAuth 1 is really nice. It protects against replay attacks, packet injection or modification, uses a shared secret instead of a shared token, etc. It is, however, a bit harder to implement than OAuth 2.
OAuth 2 is mostly about how to exchange a username/password combination for an access token, while OAuth 1 is mostly about how to make semi-secure requests to a server over an unencrypted connection. If you don't need any of that, don't use OAuth. In many cases, Basic HTTP Authentication over HTTPS will do just fine.
OAuth is a standard for authentication and authorization. You can read about it in many places and learn; generally, the standard lets a client register with the authorization server, and then, whenever this client attempts to access a protected resource, it is directed to the auth server to get a token (first it gets a code, then it exchanges it for a token). But this is only the general picture; there are tons of details and options here...
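As a rough sketch of the code-for-token exchange mentioned above (the endpoint, code and client credentials below are placeholders, not taken from any particular provider):

require "net/http"
require "uri"
require "json"

token_endpoint = URI("https://auth.example.com/oauth/token")

# The authorization code arrives on the redirect back to your app.
# Confidential clients also send their client_secret; public clients
# would rely on PKCE instead.
params = {
  "grant_type"    => "authorization_code",
  "code"          => "CODE_FROM_REDIRECT",
  "redirect_uri"  => "https://app.example.com/callback",
  "client_id"     => "my-client-id",
  "client_secret" => "my-client-secret"
}

response     = Net::HTTP.post_form(token_endpoint, params)
access_token = JSON.parse(response.body)["access_token"]
# access_token is then sent on requests to the resource server.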
Basically, one needs a good reason to use OAuth. If a simpler authentication mechanism works for you - go for it.
I'm getting started on building a REST API for a project I'm working on, and it led me to do a little research into the best way to build an API with RoR. I found out pretty quickly that by default, models are open to the world and can be called via URL simply by putting ".xml" at the end of the URL and passing the appropriate parameters.
So then the next question came: how do I secure my app to prevent unauthorized changes? In doing some research I found a couple of articles about attr_accessible and attr_protected and how they can be used. The particular URL I found discussing these was posted back in May of '07 (here).
As with all things Ruby, I'm sure that things have evolved since then. So my question is: is this still the best way to secure a REST API within RoR?
If not, what do you suggest in either a "new project" or an "existing project" scenario?
There are several schemes for authenticating API requests, and they're different than normal authentication provided by plugins like restful_authentication or acts_as_authenticated. Most importantly, clients will not be maintaining sessions, so there's no concept of a login.
HTTP Authentication
You can use basic HTTP authentication. For this, API clients will use a regular username and password and just put it in the URL like so:
http://myusername:mypass@www.someapp.com/
I believe that restful_authentication supports this out of the box, so you can ignore whether or not someone is using your app via the API or via a browser.
One downside here is that you're asking users to put their username and password in the clear in every request. By doing it over SSL, you can make this safe.
I don't think I've ever actually seen an API that uses this, though. It seems like a decently good idea to me, especially since it's supported out of the box by the current authentication schemes, so I don't know what the problem is.
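If you go this route in Rails, a minimal controller-side sketch could look like the following (it assumes a restful_authentication-style User.authenticate class method; the names are illustrative):

class SecureApiController < ApplicationController
  before_filter :authenticate_api_user

  private

  # Prompts for (or reads) HTTP Basic credentials and resolves them to a user.
  def authenticate_api_user
    authenticate_or_request_with_http_basic do |login, password|
      @current_user = User.authenticate(login, password)
    end
  end
end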
API Key
Another easy way to enable API authentication is to use API keys. It's essentially a username for a remote service. When someone signs up to use your API, you give them an API key. This needs to be passed with each request.
One downside here is that if anyone gets someone else's API key, they can make requests as that user. I think that by making all your API requests use HTTPS (SSL), you can offset this risk somewhat.
Another downside is that users use the same authentication credentials (the API key) everywhere they go. If they want to revoke access to an API client their only option is to change their API key, which will disable all other clients as well. This can be mitigated by allowing users to generate multiple API keys.
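A bare-bones version of this check in Rails might look like the following (ApiKey is a hypothetical model with a token column and a user association):

class KeyProtectedApiController < ApplicationController
  before_filter :authenticate_with_api_key

  private

  # Looks up the key sent with the request and rejects unknown callers.
  def authenticate_with_api_key
    key = ApiKey.find_by_token(params[:api_key])
    if key
      @current_user = key.user
    else
      head :unauthorized
    end
  end
end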
API Key + Secret Key signing
Deprecated (sort of) - see OAuth below
Significantly more complex is signing the request with a secret key. This is what Amazon Web Services (S3, EC2, and such) do. Essentially, you give the user two keys: their API key (i.e. username) and their secret key (i.e. password). The API key is transmitted with each request, but the secret key is not. Instead, it is used to sign each request, usually by adding another parameter.
IIRC, Amazon accomplishes this by taking all the parameters to the request, and ordering them by parameter name. Then, this string is hashed, using the user's secret key as the hash key. This new value is appended as a new parameter to the request prior to being sent. On Amazon's side, they do the same thing. They take all parameters (except the signature), order them, and hash using the secret key. If this matches the signature, they know the request is legitimate.
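As a rough illustration of that signing scheme (the parameter names and the use of HMAC-SHA256 are my own choices, not Amazon's exact format):

require "openssl"
require "cgi"

api_key    = "users-api-key"
secret_key = "users-secret-key"

params = {
  "action"    => "list_widgets",
  "api_key"   => api_key,
  "timestamp" => Time.now.to_i.to_s
}

# Order parameters by name and build one canonical string.
canonical = params.sort.map { |k, v| "#{k}=#{v}" }.join("&")

# Sign with the secret key; only the signature travels with the request.
digest    = OpenSSL::Digest.new("sha256")
signature = OpenSSL::HMAC.hexdigest(digest, secret_key, canonical)

signed_query = canonical + "&signature=" + CGI.escape(signature)
# The server recomputes the signature from the received parameters and the
# stored secret, and rejects the request if the two values do not match.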
The downside here is complexity. Getting this scheme to work correctly is a pain, both for the API developer and the clients. Expect lots of support calls and angry emails from client developers who can't get things to work.
OAuth
To combat some of the complexity issues with key + secret signing, a standard has emerged called OAuth. At the core OAuth is a flavor of key + secret signing, but much of it is standardized and has been included into libraries for many languages.
In general, it's much easier on both the API producer and consumer to use OAuth rather than creating your own key/signature system.
OAuth also inherently segments access, providing different access credentials for each API consumer. This allows users to selectively revoke access without affecting their other consuming applications.
Specifically for Ruby, there is an OAuth gem that provides support out of the box for both producers and consumers of OAuth. I have used this gem to build an API and also to consume OAuth APIs and was very impressed. If you think your application needs OAuth (as opposed to the simpler API key scheme), then I can easily recommend using the OAuth gem.
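For what it's worth, consuming an OAuth 1.0a API with that gem looks roughly like this (the site, paths, keys and verifier are placeholders):

require "oauth"

consumer = OAuth::Consumer.new("CONSUMER_KEY", "CONSUMER_SECRET",
                               :site => "https://api.example.com")

# Step 1: get a request token and send the user off to approve access.
request_token = consumer.get_request_token(
  :oauth_callback => "https://myapp.example.com/callback"
)
# redirect the user to request_token.authorize_url

# Step 2: after the callback, exchange the request token for an access token.
access_token = request_token.get_access_token(
  :oauth_verifier => "VERIFIER_FROM_CALLBACK"
)

# Step 3: make signed requests to protected resources.
response = access_token.get("/protected/resource")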
How do I secure my app to prevent unauthorized changes?
attr_accessible and attr_protected are both useful for controlling the ability to perform mass-assignments on an ActiveRecord model. You definitely want to use attr_protected to prevent form injection attacks; see Use attr_protected or we will hack you.
Also, in order to prevent anyone from being able to access the controllers in your Rails app, you're almost certainly going to need some kind of user authentication system and put a before_filter in your controllers to ensure that you have an authorized user making the request before you allow the requested controller action to execute.
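A skeletal version of that filter (current_user and login_url are assumed to come from whatever authentication plugin and routes you use):

class WidgetsController < ApplicationController
  before_filter :require_user

  def update
    # only reached when require_user found a logged-in user
  end

  private

  def require_user
    redirect_to login_url unless current_user
  end
end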
See the Ruby on Rails Security Guide (part of the Rails Documentation Project) for tons more helpful info.
I'm facing similar questions to yours at the moment, because I'm also building out a REST API for a Rails application.
I suggest making sure that only attributes that can be user-edited are marked with attr_accessible. This sets up a whitelist of attributes that can be assigned using update_attributes.
What I do is something like this:
class Model < ActiveRecord::Base
  attr_accessible nil
end
All my models inherit from that, so that they are forced to define attr_accessible for any fields they want to make mass assignable. Personally, I wish there was a way to enable this behaviour by default (there might be, and I don't know about it).
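A subclass then opts specific fields back in, for example (the attribute names here are just made up):

class Article < Model
  # Only these attributes may be set via mass assignment
  # (e.g. update_attributes from API or form params).
  attr_accessible :title, :body
end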
Just so you know, someone can mass-assign a property not only through the REST API but also through a regular form post.
Another approach that saves you from building a lot of this yourself is to use something like http://www.3scale.net/, which handles keys, tokens, quotas, etc. for individual developers. It also does analytics and creates a developer portal.
There's a Ruby/Rails plugin that will apply policies to traffic as it arrives - you can use it in conjunction with the OAuth gem. You can also use it by dropping Varnish in front of the app and using the Varnish lib mod: Varnish API Module.