We recently started using IdentityServer3. Because of network limitations, there is a requirement that calls go through a messaging system (a proxy), which is responsible for routing them to the resource server (RS).
OAuth requires TLS or an end-to-end trusted channel.
The message brokers are part of a trusted network, but is it a good idea for the bearer tokens to be in clear text at any point along the way? Is this something we should be concerned about, or should we trust the messaging system? Is there another recommended approach?
Thanks!
I have read RFC 6749 and https://auth0.com/docs/authorization/which-oauth-2-0-flow-should-i-use, but I couldn't find the flow that exactly fits my use case.
There will be a native app (essentially a GUI) that spins up a daemon on the end user's device. The daemon will call internal APIs hosted by the backend. The internal APIs don't need to verify the user's identity, but it's preferred that the device's identity can be verified to some extent. There will be an OAuth authorization server in the backend to handle the logic, but I couldn't identify the correct flow to use for this case.
Originally I thought this was a good fit for the client credentials grant type, but then I realized that this might be a public client, and client credentials is supposed to be used with confidential clients only.
I then found out about the authorization code with PKCE flow. This seems to be the recommended flow for native apps, but it doesn't make much sense to me: there will be redirects and the user needs to interact, yet the APIs being called are supposed to be internal and the user shouldn't know about this back-channel machinery at all. Also, the resource owner should be the same as the client in this case, i.e. the machine, not the user.
So which flow should I use?
Thanks a lot for the help!
Client Credentials feels like the standard option but there are a few variations:
SINGLE CLIENT SECRET
This is not a good option since anyone who captures a message in transit can access data for any user.
CLIENT SECRET PER USER
Using Dynamic Client Registration might be an option, where each instance of the app gets its own client ID and secret, linked to a user.
The daemon then uses these credentials, and if the secret is somehow captured in transit it only impacts one user.
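As a rough sketch of how that per-user registration could work (assuming the authorization server exposes an RFC 7591 dynamic registration endpoint; the URL, initial access token and field values below are illustrative, not anything from your setup):

import requests

# Illustrative values: your authorization server's registration endpoint and an
# initial access token obtained after the user authenticates.
REGISTRATION_ENDPOINT = "https://auth.example.com/connect/register"
INITIAL_ACCESS_TOKEN = "initial-access-token-for-this-user"

def register_client_for_user(user_id: str) -> dict:
    """Register a per-user client (RFC 7591) and return its client_id/client_secret."""
    response = requests.post(
        REGISTRATION_ENDPOINT,
        json={
            "client_name": f"daemon-{user_id}",
            "grant_types": ["client_credentials"],
            "token_endpoint_auth_method": "client_secret_basic",
        },
        headers={"Authorization": f"Bearer {INITIAL_ACCESS_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()  # contains client_id and client_secret for the device to store

The daemon stores the returned client_id and client_secret and uses them for the client credentials grant from then on.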
STRONGER CLIENT SECRETS
The client credentials grant can also be used with stronger secrets such as a Client Assertion, which can be useful if you want to avoid sending the actual secret.
This type of solution would involve generating a key per user when they authenticate, then storing the private key on the device, e.g. in the keychain.
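As a rough sketch (using PyJWT; the token endpoint URL is a placeholder), the client assertion is just a short-lived JWT signed with the per-user private key, so the secret itself is never transmitted:

import time
import uuid
import jwt  # PyJWT

TOKEN_ENDPOINT = "https://auth.example.com/connect/token"  # placeholder

def build_client_assertion(client_id: str, private_key_pem: str) -> str:
    """Build a private_key_jwt client assertion (RFC 7523) for the token endpoint."""
    now = int(time.time())
    claims = {
        "iss": client_id,          # the client asserts its own identity
        "sub": client_id,
        "aud": TOKEN_ENDPOINT,     # audience is the token endpoint
        "iat": now,
        "exp": now + 60,           # keep the assertion short-lived
        "jti": str(uuid.uuid4()),  # unique ID so the assertion cannot be replayed
    }
    return jwt.encode(claims, private_key_pem, algorithm="RS256")

The daemon then posts this assertion to the token endpoint with grant_type=client_credentials and client_assertion_type=urn:ietf:params:oauth:client-assertion-type:jwt-bearer, so the private key never leaves the device.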
My mobile app is using an HTTP-based API with endpoints that aren't hard to figure out, such as https://<domain>/api/config or https://<domain>/api/login.
So someone could create an account in the app, then use the credentials in some request-making desktop app ("rogue client") to send requests to /api/login and then, after "logging in" with my bearer authentication scheme, go on to other endpoints to see what data is being sent from there.
Such attempts could potentially let people peep into some sensitive data about other users that should only be accessible internally by the app alone.
What would be an established approach to improve my app's security in guaranteeing that any data sent from my backend API is accessible by the app only?
Specifically for iOS apps, are there any frameworks to achieve this?
My backend is Nginx & Django.
MAPPING AN API
My mobile app is using an HTTP-based API with endpoints that aren't hard to figure out, such as https://<domain>/api/config or https://<domain>/api/login.
All that's needed to map the API endpoints used by your mobile app is for someone to install it on a device they control and route the requests through an intercepting proxy, like mitmproxy:
An interactive TLS-capable intercepting HTTP proxy for penetration testers and software developers.
BEARER AUTHORIZATION TOKEN EXTRACTION
So someone could create an account in the app, then use the credentials in some request-making desktop app ("rogue client") to send requests to /api/login and then, after "logging in" with my bearer authentication scheme, go on to other endpoints to see what data is being sent from there.
Yes, you can create the account in the app and extract the bearer authentication token, and for this you can use the same proxy approach I mentioned for mapping the API endpoints. You can read this article to see how I use mitmproxy to extract an API key; the same technique applies to your bearer token scenario.
mitmproxy allows us to intercept, manipulate and replay requests on the fly or at any later point in time, which makes it an excellent tool to poke around your API and extract data while you use the mobile app as a normal user.
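For illustration, a small mitmproxy addon script (loaded with mitmdump -s dump_tokens.py, a file name I just made up) is enough to log every Authorization header the app sends:

from mitmproxy import http

def request(flow: http.HTTPFlow) -> None:
    # Log the bearer token of every intercepted request; this is all an
    # attacker needs in order to replay calls against your API.
    token = flow.request.headers.get("Authorization")
    if token:
        print(f"{flow.request.pretty_url} -> {token}")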
SENSITIVE DATA ACCESS
Such attempts could potentially let people peep into some sensitive data about other users that should only be accessible internally by the app alone.
Well, here it seems more like a design problem of your mobile app and backend, because a logged-in user should never be able to access API endpoints as another user.
Also you need to ensure that each API endpoint strictly returns only the data the mobile app absolutely needs to do its job. Unfortunately, more often than not, developers have fat API endpoints that give away a lot of info, and then it's up to the consumer to filter the data it needs. Don't do this; instead, use roles to authorize how much data each logged-in user has access to in each API endpoint, allowing more or less data to be sent back in the response according to their role.
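In Django REST Framework terms (your stack is Django, per the question), that mostly means always scoping queries to the authenticated user and keeping serializer fields to the minimum; a rough sketch, with the model and field names invented for the example:

from rest_framework import serializers, viewsets

from .models import Order  # hypothetical model

class OrderSerializer(serializers.ModelSerializer):
    class Meta:
        model = Order
        fields = ["id", "status", "total"]  # only what the mobile app really needs

class OrderViewSet(viewsets.ReadOnlyModelViewSet):
    serializer_class = OrderSerializer

    def get_queryset(self):
        # Never return other users' rows, no matter what the client asks for.
        return Order.objects.filter(owner=self.request.user)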
Another thing to keep in mind is that developers tend to do too much business logic on the client side, and this approach also leads to fat APIs and to leaking data that could be kept server side if the API were the only one responsible for performing that business logic. Try to design your mobile apps to be as dumb as possible, and make them delegate all the hard work to the backend. This approach also has the advantage of making it easy to fix bugs without needing to release a new mobile app.
IMPROVE API SECURITY
What would be an established approach to improve my app's security in guaranteeing that any data sent from my backend API is accessible by the app only?
Well, you have bought yourself a very hard challenge to overcome, but while hard, it is not impossible to achieve a solution that allows your API server to have a very high degree of confidence that the requests it is receiving are indeed from a genuine instance of your mobile app.
So it seems that you want to lock your API server to only accept requests from your mobile app, and if that is the case then please read this reply I gave to the question How to secure an API REST for mobile app? for the sections on Securing the API Server and A Possible Better Solution.
Specifically for iOS apps, are there any frameworks to achieve this?
If you have read the reply I linked above, then you know by now that you should employ security in depth, using as many layers as you can afford, with the Mobile App Attestation concept being the most effective of them all.
Bear in mind that as you add more security layers, the more time consuming it becomes for an attacker to overcome all of them. This also raises the bar for the skill set an attacker needs in order to bypass them all, thus keeping script kiddies and casual attackers at bay.
By the way don't forget to always apply strong code obfuscation techniques to your code base.
DO YOU WANT TO GO THE EXTRA MILE?
In any response to a security question I always like to reference the excellent work from the OWASP foundation.
For Mobile Apps
OWASP Mobile Security Project - Top 10 risks
The OWASP Mobile Security Project is a centralized resource intended to give developers and security teams the resources they need to build and maintain secure mobile applications. Through the project, our goal is to classify mobile security risks and provide developmental controls to reduce their impact or likelihood of exploitation.
OWASP - Mobile Security Testing Guide:
The Mobile Security Testing Guide (MSTG) is a comprehensive manual for mobile app security development, testing and reverse engineering.
For APIs
OWASP API Security Top 10
The OWASP API Security Project seeks to provide value to software developers and security assessors by underscoring the potential risks in insecure APIs, and illustrating how these risks may be mitigated. In order to facilitate this goal, the OWASP API Security Project will create and maintain a Top 10 API Security Risks document, as well as a documentation portal for best practices when creating or assessing APIs.
The correct way is to use App Attestation
App Attest Flow
It is a cryptographically secure way to make sure your app is the thing which is accessing your API and not something else.
Apple has an App Attest service. Some docs are located here:
https://developer.apple.com/documentation/devicecheck/establishing_your_app_s_integrity
https://developer.apple.com/documentation/devicecheck/validating_apps_that_connect_to_your_server
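On the server side, the flow starts with a one-time challenge that the app folds into the attestation it sends back; a rough sketch of that piece in Python, where verify_attestation is a hypothetical stand-in for the actual validation described in Apple's docs linked above:

import secrets

# One-time challenges keyed by device/session; use a real store in production.
pending_challenges: dict[str, bytes] = {}

def issue_challenge(session_id: str) -> bytes:
    challenge = secrets.token_bytes(32)
    pending_challenges[session_id] = challenge
    return challenge

def verify_attestation(attestation: bytes, key_id: str, challenge: bytes) -> bool:
    # Hypothetical placeholder: the real check must decode the attestation object,
    # validate its certificate chain against Apple's App Attest root CA, and confirm
    # the embedded nonce was derived from this challenge, per Apple's documentation.
    raise NotImplementedError

def handle_attestation(session_id: str, key_id: str, attestation: bytes) -> bool:
    challenge = pending_challenges.pop(session_id, None)  # each challenge is single-use
    if challenge is None:
        return False
    return verify_attestation(attestation, key_id, challenge)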
Here is the best practice of using JWTs.
On the client side, you create the token (there are many libraries for this), signing it with the client's secret.
You pass it as part of the API request, and the server will know it's that specific client because the request is signed with the client's unique identifier.
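A minimal sketch of that idea with PyJWT, assuming each installed client was issued its own identifier and signing secret when it registered (names and values here are illustrative):

import time
import jwt  # PyJWT

CLIENT_ID = "device-1234"            # assumed per-client identifier
CLIENT_SECRET = "issued-at-signup"   # assumed per-client signing secret

def make_request_token() -> str:
    """Client side: sign a short-lived token proving which client sent the request."""
    now = int(time.time())
    return jwt.encode(
        {"sub": CLIENT_ID, "iat": now, "exp": now + 300},
        CLIENT_SECRET,
        algorithm="HS256",
    )

def verify_request_token(token: str, secret_for_client) -> dict:
    """Server side: look up that client's secret and reject anything not signed with it."""
    unverified = jwt.decode(token, options={"verify_signature": False})
    return jwt.decode(token, secret_for_client(unverified["sub"]), algorithms=["HS256"])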
Edit: Based on the comments and discussion, full security is not achievable, but you can make attacks harder by following the practice from the blog post above.
I'm just starting to learn about Identity Server, OAuth 2, and OpenID Connect. While doing so I've spent some time looking over the different OAuth flows and their applications. I understand the risks of using the Resource Owner Password Credentials flow when the client is third party or not trusted. However, I haven't really been able to find much on its use when the client (mobile app) and API are trusted first party. What are the potential risks of using this flow in that scenario? If you could point to specific security vulnerabilities, that would be very helpful.
Thanks!
If you are talking about exactly the following...
Your own Mobile App (using trusted libraries)
Collects User Credentials (as if they were logging on your website, assuming you have one)
Sends them over TLS to your Auth server
Returns the normal token response if correct
Then I would argue there is no security penalty, at least, it is no worse than using username/password auth in the first place.
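For concreteness, that flow boils down to a single password-grant request against the token endpoint (the URL and client values below are placeholders):

import requests

response = requests.post(
    "https://auth.example.com/connect/token",  # placeholder token endpoint
    data={
        "grant_type": "password",
        "username": "alice",
        "password": "collected-in-your-own-app-ui",
        "client_id": "mobile-app",
        "scope": "api",
    },
    timeout=10,
)
response.raise_for_status()
tokens = response.json()  # access_token, refresh_token, expires_in, ...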
However, there is a wider problem with mobile authentication of this nature.
There is no way to tell that it's your app sending the requests; this applies to all OAuth 2 flows (even if you use a more secure flow, the user can simply pull apart the mobile app and extract the credentials).
There are some features from both Google and Apple that attempt to fix this problem, but I'm not sure how mature or secure they are at the moment; it might be worth looking into.
So you are relying on the user not to get tricked into installing a fake app; however, this falls under social engineering, and it applies to all OAuth 2 flows.
I'm trying to implement authentication/authorization in my solution. I have a bunch of backend services (including an identity service) behind an API Gateway, a "backend for frontend" service, and a SPA (React + Redux). I have read about OAuth 2.0 / OpenID Connect, and I can't understand why I shouldn't use the resource owner password flow.
The client (my backend-for-frontend server) is absolutely trusted, so I can simply send the user's login/password to it; it then forwards them to the identity server, receives the access token and refresh token, stores the refresh token in memory (session, Redis, etc.), and sends the access token to the SPA, which stores it in local storage. If the SPA sends a request with an expired access token, the server requests a new one using the refresh token and forwards the request to the API Gateway with the new access token.
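Roughly, the refresh step I have in mind would be something like this on the backend for frontend (the endpoint and client values are placeholders):

import requests

TOKEN_ENDPOINT = "https://identity.example.com/connect/token"  # placeholder

def refresh_access_token(refresh_token: str) -> dict:
    """Exchange the stored refresh token for a new access token (refresh_token grant)."""
    response = requests.post(
        TOKEN_ENDPOINT,
        data={
            "grant_type": "refresh_token",
            "refresh_token": refresh_token,
            "client_id": "bff-server",            # placeholder confidential client
            "client_secret": "kept-server-side",  # never leaves the backend
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()  # new access_token (and usually a rotated refresh_token)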
I think that in my case flows with redirects would give a worse user experience and are too complicated.
What have I misunderstood? What potholes will I hit if I implement authentication/authorization as described above?
The OAuth 2.0 specification's introduction section gives one key piece of information about the problem it tries to solve. I have highlighted the relevant passage below:
In the traditional client-server authentication model, the client
requests an access-restricted resource (protected resource) on the
server by authenticating with the server using the resource owner's
credentials. In order to provide third-party applications access to
restricted resources, the resource owner shares its credentials with
the third party
In summary, what OAuth wants to provide is an authorization layer which removes the need to expose end-user credentials to a third party. To achieve this, it presents several flows (e.g. the authorization code flow, the implicit flow, etc.) to obtain tokens which are good enough to access protected resources.
But not all clients may be able to adopt those flows, and this is the reason the OAuth spec introduces the resource owner password credentials flow (ROPF). This is highlighted in the following extract:
The resource owner password credentials grant type is suitable in
cases where the resource owner has a trust relationship with the
client, such as the device operating system or a highly privileged
application. The authorization server should take special care when
enabling this grant type and only allow it when other flows are not
viable.
According to your explanation, you have a trust relationship with the client, and your flow seems like it would work fine. But from my end I see the following issues.
Trust
The trust is between the end user and the client application. When you release and use this as a product, will your end users trust your client and share their credentials? For example, if your identity server is Azure AD, will end users share their Azure credentials with your client?
Trust may not be an issue if you are using a single identity server and it will be the only one you ever use. Which brings us to the next problem:
Support for multiple identity servers
One advantage you get with OAuth 2 and OpenID Connect is the ability to use multiple identity servers. For example, you may move between Azure AD, IdentityServer or other identity servers of your customer's choice (e.g. one they already use internally and want your app to use). Now if your application wants to consume such identity servers, end users will have to share their credentials with your client. Sometimes these identity servers may not even support the ROPF flow. And yet again, trust becomes an issue!
A solution?
Well, I see one good flow you can use. You have a front-end server and a back-end server, and I believe your client is the combination of both. If that's the case, you could try to adopt the authorization code flow. It's true your front end is a SPA, but you have a back end you can utilise to obtain tokens. The only challenge is to connect the front-end SPA with the back end for the token response (pass the access token to the SPA and store the other tokens in the back end). With that approach, you avoid the issues mentioned above.
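A rough sketch of the back-channel part of that approach: once the user is redirected back with an authorization code, the back end (acting as a confidential client) exchanges it and keeps the refresh token to itself. The endpoint and client values below are placeholders for your own setup:

import requests

TOKEN_ENDPOINT = "https://identity.example.com/connect/token"  # placeholder

def exchange_code(code: str) -> str:
    """Authorization code flow, back-channel step: trade the code for tokens."""
    response = requests.post(
        TOKEN_ENDPOINT,
        data={
            "grant_type": "authorization_code",
            "code": code,
            "redirect_uri": "https://app.example.com/callback",  # must match the auth request
            "client_id": "bff-server",
            "client_secret": "kept-server-side",
        },
        timeout=10,
    )
    response.raise_for_status()
    tokens = response.json()
    # Store tokens["refresh_token"] server-side (session/Redis, not shown here);
    # only the access token is handed to the SPA.
    return tokens["access_token"]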
I am trying to implement an OAuth 2 provider for a web service and then build a native application on top of it. I also want to give third-party developers access to the API.
I have already read the OAuth 2 specification and can't choose the right flow. I want to authenticate both CLI and GUI apps.
First of all, we have two client types: public and confidential. Of course both GUI and CLI apps will be public. But what is the difference between these two types? In that case, why do I need a client_secret at all, if I can get an access token without it just by changing the client type?
I tried to look at some API implementations of popular services like GitHub. But they use HTTP Basic Auth. Not sure it is a good idea.
Is there any particular difference? Does one improve security over the other?
As to the difference between public and confidential clients, see http://tutorials.jenkov.com/oauth2/client-types.html which says:
A confidential client is an application that is capable of keeping a
client password confidential to the world. This client password is
assigned to the client app by the authorization server. This password
is used to identify the client to the authorization server, to avoid
fraud. An example of a confidential client could be a web app, where
no one but the administrator can get access to the server, and see the
client password.
A public client is an application that is not capable of keeping a
client password confidential. For instance, a mobile phone application
or a desktop application that has the client password embedded inside
it. Such an application could get cracked, and this could reveal the
password. The same is true for a JavaScript application running in the
users browser. The user could use a JavaScript debugger to look into
the application, and see the client password.
Confidential clients are more secure than public clients, but you may not always be able to use confidential clients because of constraints on the environment that they run in (c.q. native apps, in-browser clients).
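The practical difference shows up at the token endpoint: a confidential client authenticates itself (here with HTTP Basic, i.e. client_secret_basic), while a public client can only identify itself with its client_id. A sketch with placeholder values:

import requests

TOKEN_ENDPOINT = "https://auth.example.com/token"  # placeholder

# Confidential client: proves its identity with a secret it can actually keep.
confidential = requests.post(
    TOKEN_ENDPOINT,
    auth=("my-backend-client", "server-side-secret"),  # HTTP Basic client authentication
    data={"grant_type": "client_credentials", "scope": "api"},
    timeout=10,
)

# Public client (CLI or GUI app): no secret worth embedding, only an identifier,
# so it relies on the authorization code it obtained (plus PKCE) instead.
public = requests.post(
    TOKEN_ENDPOINT,
    data={
        "grant_type": "authorization_code",
        "code": "code-from-the-redirect",
        "redirect_uri": "http://127.0.0.1:8912/callback",
        "client_id": "my-cli-app",
        "code_verifier": "pkce-verifier-from-the-auth-request",
    },
    timeout=10,
)

Either way the token response has the same shape; the difference is only in how strongly the client itself can be authenticated.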
HansZ's answer is a good starting point in that it clarifies the difference between a public and private client application: the ability to keep the client secret a secret.
But it doesn't answer the question: what OAuth2 profile should I use for which use cases? To answer this critical question, we need to dig a bit deeper into the issue.
For confidential applications, the client secret is supplied out of band (OOB), typically by configuration (e.g. in a properties file). For browser based and mobile applications, there really isn't any opportunity to perform any configuration and, thus, these are considered public applications.
So far, so good. But I disagree that this makes such apps unable to accept or store refresh tokens. In fact, the redirect URI used by SPAs and mobile apps is typically localhost and, thus, 100% equivalent to receiving the tokens directly from the token server in response to a Resource Owner Password Credentials Grant (ROPC).
Many writers point out, sometimes correctly, that OAuth2 doesn't actually do Authentication. In fact, as stated by the OAuth2 RFC 6749, both the ROPC and Client Credentials (CC) grants are required to perform authentication. See Section 4.3 and Section 4.4.
However, the statement is true for Authorization Code and Implicit grants. But how does authentication actually work for these domains?
Typically, the user enters her username and password into a browser form, which is posted to the authentication server, which sets a cookie for its domain. Sorry, but even in 2019, cookies are the state of the authentication art. Why? Because cookies are how browser applications maintain state. There's nothing wrong with them and browser cookie storage is reasonably secure (domain protected, JS apps can't get at "http only" cookies, secure requires TLS/SSL). Cookies allow login forms to be presented only on the 1st authorization request. After that, the current identity is re-used (until the session has expired).
Ok, then what is different between the above and ROPC? Not much. The difference is where the login form comes from. In an SPA, the app is known to be from the TLS/SSL authenticated server. So this is all-but identical to having the form rendered directly by the server. Either way, you trust the site via TLS/SSL. For a mobile app, the form is known to be from the app developer via the app signature (apps from Google Play, Apple Store, etc. are signed). So, again, there is a trust mechanism similar to TLS/SSL (no better, no worse, depends on the store, CA, trusted root distributions, etc.).
In both scenarios, a token is returned to prevent the application from having to resend the password with every request (which is why HTTP Basic authentication is bad).
In both scenarios, the authentication server MUST be hardened to the onslaught of attacks that any Internet facing login server is subjected. Authorization servers don't have this problem as much, because they delegate authentication. However, OAuth2 password and client_credentials profiles both serve as de facto authentication servers and, thus, really need to be tough.
Why would you prefer ROPC over an HTML form? Non-interactive cases, such as a CLI, are a common use case. Most CLIs can be considered confidential and, thus, should have both a client_id and client_secret. Note, if running on a shared OS instance, you should write your CLI to pull the client secret and password from a file or, at least, the standard input to avoid secrets and passwords from showing up in process listings!
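For example, something along these lines keeps the secret and the password out of argv (and therefore out of process listings); the environment variable name is just an example:

import os
import getpass

# Read the client secret from the environment (or a protected config file) and the
# user's password from the terminal, never from command-line arguments.
client_secret = os.environ.get("MY_CLI_CLIENT_SECRET") or getpass.getpass("Client secret: ")
password = getpass.getpass("Password: ")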
Native apps and SPAs are another good use, imo, because these apps require tokens to pass to REST services. However, if these apps require cookies for authentication as well, then you probably want to use the Authorization Code or Implicit flows and delegate authentication to a regular web login server.
Likewise, if users are not authenticated in the same domain as the resource server, you really need to use Authorization Code or Implicit grant types. It is up to the authorization server how the user must authenticate.
If 2-factor authentication is in use, things get tricky. I haven't crossed this particular bridge yet myself. But I have seen cases, like Atlassian, that can use an API key to allow access to accounts that normally require a second factor beyond the password.
Note, even when you host an HTML login page on the server, you need to take care that it is not wrapped either by an IFRAME in the browser or some Webview component in a native application (which may be able to set hooks to see the username and password you type in, which is how password managers work, btw). But that is another topic falling under "login server hardening", but the answers all involve clients respecting web security conventions and, thus, a certain level of trust in applications.
A couple final thoughts:
If a refresh token is securely delivered to the application, via any flow type, it can be safely stored in the browser/native local storage. Browsers and mobile devices protect this storage reasonably well. It is, of course, less secure than storing refresh tokens only in memory. So maybe not for banking applications ... But a great many apps have very long lived sessions (weeks) and this is how it's done.
Do not use client secrets for public apps. It will only give you a false sense of security. Client secrets are appropriate only when a secure OOB mechanism exists to deliver the secret and it is stored securely (e.g. locked down OS permissions).