iOS API Key: Is there an actual safe way to secure your API key when making HTTP requests?

Currently, I am getting an API key from the server after logging in and using it to make HTTP requests. I store the API key in the iPhone app's database. However, a colleague told me I should store it in the Keychain instead. So I searched Stack Overflow and found questions regarding this, and it seems this isn't really a secure way of storing API keys at all.
Secure keys in iOS App scenario, is it safe?
In iOS, how can I store a secret "key" that will allow me to communicate with my server?
I don't know of a way to stop hackers from reverse engineering the iOS app to get the API key. A user on Stack Overflow basically said it will only overcomplicate things for little to no benefit.
I still need to find the post, but someone recommended just making sure you make API requests over a secure connection (SSL certificate) and have a way to revoke the API key if someone is hacked.

As already pointed out by #jake, you should use a token tied to a single user instead of an API key shared by all users, but other enhancements can be made to further protect your app when making HTTP requests.
The user token can be a signed JWT, and you can then enhance the security of the communication between your server and the app with Certificate Pinning, in order to protect against Man in the Middle attacks.
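A minimal sketch of certificate pinning with URLSession, assuming you pin the Base64 SHA-256 of your server's DER certificate (the hash below is a placeholder, and SecTrustCopyCertificateChain needs iOS 15+):

```swift
import Foundation
import Security
import CryptoKit

// Pin the SHA-256 hash of the server certificate. The hash below is a
// placeholder; compute the real one from your server's DER certificate.
final class PinnedSessionDelegate: NSObject, URLSessionDelegate {
    private let pinnedCertHash = "REPLACE_WITH_BASE64_SHA256_OF_SERVER_CERT"

    func urlSession(_ session: URLSession,
                    didReceive challenge: URLAuthenticationChallenge,
                    completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {
        guard let trust = challenge.protectionSpace.serverTrust,
              SecTrustEvaluateWithError(trust, nil),            // normal TLS validation first
              let chain = SecTrustCopyCertificateChain(trust) as? [SecCertificate],
              let leaf = chain.first else {
            completionHandler(.cancelAuthenticationChallenge, nil)
            return
        }
        let der = SecCertificateCopyData(leaf) as Data
        let hash = Data(SHA256.hash(data: der)).base64EncodedString()
        if hash == pinnedCertHash {
            completionHandler(.useCredential, URLCredential(trust: trust))
        } else {
            completionHandler(.cancelAuthenticationChallenge, nil)
        }
    }
}

// All requests made through this session are now pinned.
let pinnedSession = URLSession(configuration: .default,
                               delegate: PinnedSessionDelegate(),
                               delegateQueue: nil)
```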
Other techniques, like the use of OAUTH2 and hiding secrets, can be used to enhance the security of your app, and you can read more about them here.
Keep in mind that Certificate Pinning can be bypassed by hooking frameworks such as Xposed, which contain modules built specifically to bypass pinning, but it is still another layer of security that you should not discard, since it increases the effort necessary to hack your app on the device and protects your app against Man in the Middle attacks.
For the strongest security between your app and the back-end you should use an app integrity attestation service, which will guarantee at run-time that your app has not been tampered with and is not running on a rooted device, by using an SDK integrated into your app and a service running in the cloud.
On successful attestation of the app's integrity, a JWT token is issued and signed with a secret that only the back-end of your app and the attestation service in the cloud are aware of. On failure, the JWT is signed with a fake secret that the app back-end does not know, allowing the app back-end to serve only requests where it can verify the signature in the JWT token and to refuse them when the verification fails.
Since the secret used by the cloud attestation service is not known to the app, it is not possible to reverse engineer it at run-time, even when the app has been tampered with, is running on a rooted device, or is communicating over a connection that is the target of a Man in the Middle attack.
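A minimal sketch of the back-end side of that check, assuming the attestation JWT is signed with HS256 (the function name is illustrative):

```swift
import Foundation
import CryptoKit

// Back-end check: recompute the HS256 signature over "header.payload"
// with the shared attestation secret and compare it to the token's
// signature. Tokens signed with the fake secret will fail this check.
func isValidAttestationToken(_ jwt: String, secret: SymmetricKey) -> Bool {
    let parts = jwt.split(separator: ".")
    guard parts.count == 3 else { return false }

    let signingInput = Data("\(parts[0]).\(parts[1])".utf8)

    // Decode the base64url-encoded signature.
    var b64 = parts[2].replacingOccurrences(of: "-", with: "+")
                      .replacingOccurrences(of: "_", with: "/")
    while b64.count % 4 != 0 { b64 += "=" }
    guard let signature = Data(base64Encoded: b64) else { return false }

    // Constant-time comparison provided by CryptoKit.
    return HMAC<SHA256>.isValidAuthenticationCode(signature,
                                                  authenticating: signingInput,
                                                  using: secret)
}
```

A real back-end would also validate the exp claim and any custom claims before trusting the request.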
You can find such a service in Approov, which has SDKs for several platforms, including iOS. The integration also needs a small check in the app back-end code to verify the JWT token, so that the back-end can protect itself against fraudulent use.
JWT Token
Token Based Authentication
JSON Web Tokens are an open, industry standard RFC 7519 method for representing claims securely between two parties.
Certificate Pinning
Pinning is the process of associating a host with their expected X509 certificate or public key. Once a certificate or public key is known or seen for a host, the certificate or public key is associated or 'pinned' to the host. If more than one certificate or public key is acceptable, then the program holds a pinset (taken from the Jon Larimer and Kenny Root Google I/O talk). In this case, the advertised identity must match one of the elements in the pinset.
OAUTH2
The OAuth 2.0 authorization framework enables a third-party application to obtain limited access to an HTTP service, either on behalf of a resource owner by orchestrating an approval interaction between the resource owner and the HTTP service, or by allowing the third-party application to obtain access on its own behalf. This specification replaces and obsoletes the OAuth 1.0 protocol described in RFC 5849.
Disclaimer: I work at Approov.

A more secure mechanism would be to return an authentication token on login. This authentication token should be unique to the user. If you have proper authorization and security mechanisms on the backend (to mitigate DDoS attacks, injection attacks, users accessing other users' data, etc.) then who cares if they get their authorization token from the keychain or wherever it is stored? Since the authentication token is tied to their account, you can just invalidate the token so it stops working if the user is malicious. And you can even disable their account altogether if you have the right mechanisms in place on the backend.
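If you do keep the per-user token on the device, the Keychain is still the right place for it. A minimal sketch in Swift (the account label is illustrative):

```swift
import Foundation
import Security

// Store the per-user auth token in the Keychain rather than the app's
// database. The account name "authToken" is just an illustrative label.
func saveToken(_ token: String, account: String = "authToken") -> Bool {
    let base: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrAccount as String: account
    ]
    SecItemDelete(base as CFDictionary)   // replace any existing item

    var attributes = base
    attributes[kSecValueData as String] = Data(token.utf8)
    // Readable only while the device is unlocked, never synced off-device.
    attributes[kSecAttrAccessible as String] = kSecAttrAccessibleWhenUnlockedThisDeviceOnly
    return SecItemAdd(attributes as CFDictionary, nil) == errSecSuccess
}

func loadToken(account: String = "authToken") -> String? {
    let query: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrAccount as String: account,
        kSecReturnData as String: true,
        kSecMatchLimit as String: kSecMatchLimitOne
    ]
    var item: CFTypeRef?
    guard SecItemCopyMatching(query as CFDictionary, &item) == errSecSuccess,
          let data = item as? Data else { return nil }
    return String(data: data, encoding: .utf8)
}
```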
Many of the security mechanisms can be automated on the backend. Platforms like AWS can easily be configured to automatically disable accounts that make certain malicious calls to your backend.

Related

Oauth pkce flow impersonating someone else's client

For confidential clients, there are scopes assigned to clients and the logged-in user has to consent to them. Since a client secret is involved in the exchange of the auth code for an access token, no one can impersonate them and take advantage of their scopes.
But when it comes to the PKCE flow on a native app, if I had someone else's clientId (clientIds are not considered private information) which has a lot of scopes, I could just start the flow with their clientId. What is stopping a hacker from using some reputed clientId in the PKCE flow and gaining access to all of its scopes?
NATIVE CALLBACK METHODS
If you look at RFC8252, certain types of callback URL can be registered by more than one app, meaning only a client ID needs to be stolen in order to impersonate a real app, as you say.
This still requires a malicious app to trick the user into signing in before tokens can be retrieved. And of course each app should use only the scopes it needs, and prefer read-only ones. After that it depends on the type of native app.
MOBILE
A mobile app can use Claimed HTTPS Schemes via an https callback URL to overcome this. It is backed by App Links on Android or Universal Links on iOS. Even if a malicious app uses the client ID, it cannot receive the login response with the authorization code, because the response is received on a URL like the one below, and the mobile OS will only pass it to the app that has proved ownership of the domain via the deep linking registration process:
https://mobile.mycompany.com/callback?code=xxx
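A minimal sketch of the iOS side, assuming a Universal Link for the mobile.mycompany.com callback above, handled in the app's scene delegate:

```swift
import UIKit

class SceneDelegate: UIResponder, UIWindowSceneDelegate {
    var window: UIWindow?

    // iOS hands this Universal Link only to the app that proved ownership
    // of mobile.mycompany.com via its apple-app-site-association file.
    func scene(_ scene: UIScene, continue userActivity: NSUserActivity) {
        guard userActivity.activityType == NSUserActivityTypeBrowsingWeb,
              let url = userActivity.webpageURL,
              url.path == "/callback",
              let code = URLComponents(url: url, resolvingAgainstBaseURL: false)?
                  .queryItems?.first(where: { $0.name == "code" })?.value
        else { return }
        // Exchange `code` (together with the PKCE code_verifier) for tokens.
        print("authorization code received: \(code)")
    }
}
```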
DESKTOP
For desktop apps there are gaps, since only Loopback and Private URI Scheme callback URLs can be used. It relies on users avoiding malicious apps, e.g. only installing apps from stores that require code signing, which also inform the user who the publisher is. If users install malicious apps then perhaps they have deeper problems.
ATTESTATION
A newer technique is to use a form of client authentication before authentication begins. For confidential clients, Pushed Authorization Requests are used, which rely on the app's client credential, so this cannot be used by native clients by default.
Mobile apps could potentially provide proof of ownership of their Google / Apple signing keys during authentication, and there is a proposed standard around that.

How do you enforce secure storage of Access Token by your Partner?

In an Open API world, Tokens are the door key (issued to anyone with a valid Client Id and Secret). Tokens allow anybody who has them to access a resource. As such, they are as critical as passwords.
Example: A 3rd party native app wants to access your APIs. The app uses the 'Client Id and Secret' to request an 'Access Token'. This access token is then used for subsequent API calls.
Concern: Usually 'Access Tokens' have a longer TTL. When they are stored on the mobile app/client's mobile device and someone gains access to one, they will be able to replay API calls from a different source using this access token and the API URI.
How do you prevent such replay attacks (when the access token is compromised from the 3rd party app) for API calls?
What secure practice do you follow to allow your consumers/clients to securely store the 'Access Tokens'?
In an Open API world, Tokens are the door key (issued to anyone with a valid Client Id and Secret).
I like this analogy :)
Tokens Importance
Tokens allow anybody who has them to access a resource. As such, they are as critical as passwords.
Well, even more critical, because they control access to the API; thus an automated attack with a stolen token can exfiltrate all the data behind that API in a matter of seconds, minutes or hours, depending on its size and on whether rate limiting or other types of defenses are in place. This has a name, a Data Breach, and fines under GDPR can be huge and may even put the company out of business or in serious difficulties.
Access Tokens with Refresh Tokens
Example: A 3rd party native app wants to access your APIs. The app uses the 'Client Id and Secret' to request an 'Access Token'. This access token is then used for subsequent API calls.
Concern: Usually 'Access Tokens' have a longer TTL. When they are stored on the mobile app/client's mobile device and someone gains access to one, they will be able to replay API calls from a different source using this access token and the API URI.
The access token should be at least short-lived, and the client, mobile app or web app, should never be the one requesting it to be issued; instead their respective back-ends should request it from the OAUTH provider and use the refresh token mechanism to keep issuing short-lived tokens, which should live in the range of a few minutes. This approach limits the time a compromised token can be reused by malicious requests.
So I recommend you switch to using refresh tokens, which keeps the access tokens short-lived, while refresh tokens can be long-lived, but in the hours range, not days, weeks or years.
An example of the refresh tokens flow can be found in the graphic in Mobile API Security Techniques - part 2.
NOTE: While that graphic belongs to a series of articles written in the context of mobile APIs, they have a lot of information that is also valid for APIs serving web apps and third-party clients.
By using the refresh tokens approach, when a client request fails due to an expired short-lived access token, the client needs to request a new short-lived access token.
The important bit here is that the refresh token should not be sent to the browser or mobile app; only the short-lived access token can be sent. Therefore your third-party clients must keep refresh tokens private, aka in their back-ends, and MUST NOT send refresh tokens from JavaScript or from their mobile app; instead, any renewal of the short-lived access tokens MUST be delegated to their back-ends.
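A minimal sketch of the client side of this flow in Swift, assuming the app's own back-end exposes a hypothetical /token/refresh endpoint that holds the refresh token and returns a new short-lived access token (async URLSession needs iOS 15+):

```swift
import Foundation

// On a 401, ask our own back-end (which holds the long-lived refresh
// token) for a fresh short-lived access token and retry once.
// The /token/refresh endpoint name and host are hypothetical.
func fetchWithRefresh(_ request: URLRequest,
                      accessToken: String) async throws -> (Data, String) {
    var req = request
    req.setValue("Bearer \(accessToken)", forHTTPHeaderField: "Authorization")
    let (data, response) = try await URLSession.shared.data(for: req)

    guard (response as? HTTPURLResponse)?.statusCode == 401 else {
        return (data, accessToken)   // token still valid
    }

    // Expired: the app back-end uses its refresh token to mint a new one.
    let refreshURL = URL(string: "https://app-backend.example.com/token/refresh")!
    let (tokenData, _) = try await URLSession.shared.data(from: refreshURL)
    let newToken = String(decoding: tokenData, as: UTF8.self)

    req.setValue("Bearer \(newToken)", forHTTPHeaderField: "Authorization")
    let (retryData, _) = try await URLSession.shared.data(for: req)
    return (retryData, newToken)
}
```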
This is a good improvement, but it only identifies who is making the request, not what is making it; therefore your APIs still cannot fully trust that the requests they receive come from trusted sources.
Oh, wait a bit... who and what is getting me confused. Well, you aren't the only one; this is indeed a common misconception among developers.
The difference between who and what is accessing the API server.
This is discussed in more detail in this article I wrote, where we can read:
The what is the thing making the request to the API server. Is it really a genuine instance of your mobile app, or is it a bot, an automated script or an attacker manually poking around your API server with a tool like Postman?
The who is the user of the mobile app that we can authenticate, authorize and identify in several ways, like using OpenID Connect or OAUTH2 flows.
If the quoted text is not enough for you to understand the differences, then please go ahead and read the entire section of the article, because without this being well understood you are prone to applying less effective security measures in your API server and clients.
API Replay Attacks
How do you prevent such replay attacks (when the access token is compromised from the 3rd party app) for API calls?
Well, this is a very tough problem to solve, because your API needs to know what is making the request, and it seems that right now it is only able to know that the request was made on behalf of some who.
For a more in-depth approach to mitigating API replay attacks, go and read the section Mitigate Replay Attacks from this answer I gave to another question:
So you could have the client creating an HMAC token with the request URL, user authentication token, your temporary token, and the timestamp, which should also be present in a request header. The server would then grab the same data from the request and perform its own calculation of the HMAC token, and only proceed with the request if its own result matches the one in the HMAC token header of the request.
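A minimal sketch of that scheme in Swift with CryptoKit; the newline-separated message layout and key handling are illustrative, and both sides just have to agree on them:

```swift
import Foundation
import CryptoKit

// Client side: derive a tag from the request URL, the user's auth token
// and a timestamp; send it in a header alongside the timestamp.
func requestSignature(url: String, userToken: String,
                      timestamp: String, key: SymmetricKey) -> String {
    let message = Data("\(url)\n\(userToken)\n\(timestamp)".utf8)
    let tag = HMAC<SHA256>.authenticationCode(for: message, using: key)
    return Data(tag).base64EncodedString()
}

// Server side: recompute from the request and compare in constant time.
func isValidSignature(_ signature: String, url: String, userToken: String,
                      timestamp: String, key: SymmetricKey) -> Bool {
    guard let tag = Data(base64Encoded: signature) else { return false }
    let message = Data("\(url)\n\(userToken)\n\(timestamp)".utf8)
    return HMAC<SHA256>.isValidAuthenticationCode(tag,
                                                  authenticating: message,
                                                  using: key)
}
```

The server should also reject timestamps outside a small window, otherwise the signed request itself can be replayed.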
For a practical example of this in action you can read part 1 and part 2 of this blog series about API protection techniques in the context of a mobile app, which also features a web app impersonating the mobile app.
It has some code examples for the HMAC implementation, so I really recommend you take a look, but bear in mind that HMAC only makes it a little more difficult to crack, not impossible.
So this is a very hard problem to solve when the access token belongs to a web app, but it's doable in the case of a mobile app that uses a Mobile App Attestation solution, as described in this section of another article I wrote, from which I quote the following text:
In order to know what is sending the requests to the API server, a Mobile App Attestation service, at run-time, will identify with high confidence that your mobile app is present, has not been tampered/repackaged, is not running in a rooted device, has not been hooked into by an instrumentation framework(Frida, xPosed, Cydia, etc.), and is not the object of a Man in the Middle Attack (MitM). This is achieved by running an SDK in the background that will communicate with a service running in the cloud to attest the integrity of the mobile app and device it is running on.
For defending your API when used by mobile apps I recommend you read the sections Securing the API Server and A Possible Better Solution from this answer I gave to the question How to secure an API REST for mobile app? (if sniffing requests gives you the “key”). For web apps I recommend you go and read the section Web Application from my answer to the question How do I secure a REST-API?.
Storing Access Tokens
What secure practice do you follow to allow your consumers/clients to securely store the 'Access Tokens' ?
Well, in web apps all it takes to extract any token is to hit F12 and look for it, but for mobile apps you need to reverse engineer them.
For a mobile app you can start by looking into the repository android-hide-secrets to understand the several ways of hiding them, where the most effective is to use native C code, by leveraging the JNI/NDK interfaces provided by Android. You can see more realistic demo apps using this approach in the repo currency-converter-demo and the app for the shipfast-api-protection-demo. For example, see here how configuration is loaded from the C code by using the JNI/NDK approach.
But bear in mind that while this makes it very hard to statically reverse engineer the mobile app binary to extract secrets, it doesn't provide any security against an attacker using an instrumentation framework to hook into the code that returns the secret from the C code, or performing a MitM attack to intercept the requests between the mobile app and the back-end, thus getting hold of the secret you protected so well.
Mobile apps can solve this by using a Mobile App Attestation solution so that they don't have any secrets at all in their code. If you have read the link I provided previously for defending your API when used by a mobile app, you will already be more familiar with how a Mobile App Attestation works, and you will better understand why you may want to have your clients use your API like this:
So your API would sit alongside the ones already in the APIs section, and the secret to access it would no longer be in the public domain; instead it is safely stored in the Reverse Proxy vault. Note that the green key is provided at run time by the Mobile App Attestation service in the cloud, and the mobile app doesn't know whether it is a valid or an invalid JWT token. If you haven't done so yet, please go and read the link I already provided to the other answer, so that you can understand in more detail how the attestation works.
In conclusion, this approach doesn't benefit only your API; it also improves the overall security of your clients' mobile apps.
GOING THE EXTRA MILE
As usual, I am not able to finish a security-related answer without recommending the excellent work of the OWASP foundation:
The Web Security Testing Guide:
The OWASP Web Security Testing Guide includes a "best practice" penetration testing framework which users can implement in their own organizations and a "low level" penetration testing guide that describes techniques for testing most common web application and web service security issues.
The Mobile Security Testing Guide:
The Mobile Security Testing Guide (MSTG) is a comprehensive manual for mobile app security development, testing and reverse engineering.

Best practice for OAuth Tokens for a combination of native apps and backend processes

I have a scenario in which a mobile (native) app requires data from a long-running task. The task runs continuously on the user's behalf, and we have decided to move it to a backend service. To execute the task, the backend service must regularly fetch data from an API that is OAuth2 protected.
Our dilemma is that we are not sure how to provide the backend service with a set of access tokens to access the data API. Our mobile app uses the OAuth2 Authorization Code flow with PKCE to get its own access token, refresh token and id-token (we use OpenID Connect). But how do we provide our backend service with a set of tokens? As the backend runs continuously (also in the absence of the mobile native app) we would like to provide it with its own set of access/refresh tokens.
There seem to be several solutions:
Proxy all mobile app communication to external APIs via the backend, make the client a private client, use the authorization code flow, and set up custom login sessions between the native app and our backend. To me this means more running backend infrastructure, and implementing session management including session refresh, etc., which is like re-implementing the OpenID flows on my own servers...
Using the OpenID hybrid flow, providing both an access token as well as an authorization code to be shared with the backend. However, this seems to be directed at (in-browser) web apps, not so much native mobile apps, as it is based on the implicit flow and as such is less secure.
Doing the authorization code flow twice on the mobile app (optionally the second time with prompt=none to suppress user interaction), keeping one code for the app and forwarding the second to the backend. Then both backend and app can exchange their own codes for access tokens. This feels a bit like a hack, as the backend should be a private client, not a public client. It is the approach that Google seems to advertise, though, in CrossClientAuth.
Performing impersonation by sending the id-token to the backend, which then exchanges that id-token for its own set of access/refresh tokens. Microsoft does this through the jwt-bearer grant in its "on-behalf-of" scenarios. Moreover, the token-exchange RFC seems to cover the same use case. However, in this scenario the id-token is indirectly used as an access credential, not just as a bag of claims about the user identity, which is odd in my view, as access should be controlled through access tokens, should it not? Moreover, not all services should be allowed to impersonate, so I suppose this comes with additional configuration complexity.
To me it feels like I'm overlooking something; this must be a problem others have too, and it must have been solved before... What would be best practice in my situation?

OAuth2 app with Touch ID

Is there any way that a third-party app can logically use Touch ID to authenticate to a web service that uses OAuth2?
Say I own a web service that requires authentication using OAuth2. It supports both the implicit and authorization-code grants (although I could add support for other grants if necessary).
A third party has a mobile app that uses this web service. It opens a native web view to authenticate, and loads my auth URL in it. The user enters their username/password on my domain, and I return an OAuth token back to the app.
If this app wants to implement Touch ID to speed up authentication, is there a way to do it that makes sense with OAuth2?
My understanding is that the purpose of the OAuth2 implicit and auth-code grants is to prevent the parent app from having access to the user's credentials. It only gets access to the resulting OAuth token, and that's only valid for a limited time.
With Touch ID, you would typically store the password using Keychain Services. So this obviously requires you to have access to the password.
I suppose they could store the OAuth token in the keychain instead of the password, but then this would only be valid for a short time.
The only answer I've come up with so far is what you allude to at the end: store the OAuth tokens -- but also a long-lived refresh token. How long that refresh token can live is definitely dependent on your specific security needs.
I don't know of any standard flow yet, but here are some common considerations. Simply storing long-term credentials (passwords or refresh tokens, even encrypted at rest) would mix up security contexts in a way that is hard to audit. When using any local authentication (an app-specific unlock PIN, any biometrics, or simply system unlock) it's important to do it in a way that can be verified by the server. So the first step would be device authentication: every instance of your app should use a unique client id/client credentials pair (I suggest implementing the Dynamic Client Registration Protocol to help with that, but there could be other options). Then it's a good idea to generate some piece of verifiable key information directly on the device, put it into secure storage (protected by whatever local unlocking mechanism, and invalidated whenever biometrics change) and use it to generate a MAC of some kind, for example a JWT as part of the jwt-bearer flow (or some new extension to the OAuth assertion framework). JWT tokens can include additional metadata (claims) that provides more context to the server, so it can make informed decisions, such as forcing re-authentication in some cases.
To restate:
The device is authorized and issued a unique client credentials pair.
A locally-generated key is saved to encrypted storage and protected by some local unlock mechanism (system lockscreen, PIN, biometrics, etc.).
The key gets registered with the server and tied to the device.
On unlocking, the key is used to generate a JWT that is used as an assertion for authenticating with the server (see the sketch below).
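A minimal sketch of steps 2 and 4 in Swift, assuming the key lives in the Secure Enclave and the server verifies the ES256 signature with the public key registered in step 3 (claim names and lifetime are illustrative):

```swift
import Foundation
import CryptoKit

// Steps 2 and 4: a key generated in the Secure Enclave signs a short
// ES256 assertion; the server verifies it with the public key registered
// at enrollment.
func makeAssertion(clientId: String,
                   key: SecureEnclave.P256.Signing.PrivateKey) throws -> String {
    func b64url(_ data: Data) -> String {
        data.base64EncodedString()
            .replacingOccurrences(of: "+", with: "-")
            .replacingOccurrences(of: "/", with: "_")
            .replacingOccurrences(of: "=", with: "")
    }
    let now = Int(Date().timeIntervalSince1970)
    let header = b64url(Data(#"{"alg":"ES256","typ":"JWT"}"#.utf8))
    let claims = b64url(Data(#"{"iss":"\#(clientId)","iat":\#(now),"exp":\#(now + 300)}"#.utf8))
    let signingInput = Data("\(header).\(claims)".utf8)
    let signature = try key.signature(for: signingInput)   // ECDSA P-256 over SHA-256
    return "\(header).\(claims).\(b64url(signature.rawRepresentation))"
}

// Step 2: generate once; the private key never leaves the Secure Enclave.
// let key = try SecureEnclave.P256.Signing.PrivateKey()
// Step 3: register key.publicKey.rawRepresentation with the server.
```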
Seems pretty standard to me; maybe someone should write up a BCP for this after thinking through all the implementation details, current practice, and security considerations.

OAuth 2 for native applications - what is the difference between public and confidential client types?

I am trying to implement an OAuth 2 provider for a web service and then build a native application on top of it. I also want to give third-party developers access to the API.
I have already read the OAuth 2 specification but can't choose the right flow. I want to authenticate both CLI and GUI apps.
First of all, we have two client types - public and confidential. Of course both the GUI and CLI apps will be public. But what is the difference between these two types? In this case, why do I need a client_secret if I can get an access token without it just by changing the client type?
I tried to look at some API implementations of popular services like GitHub. But they use HTTP Basic Auth. I'm not sure that is a good idea.
Is there any particular difference? Does one improve security over the other?
As to the difference between public and confidential clients, see http://tutorials.jenkov.com/oauth2/client-types.html which says:
A confidential client is an application that is capable of keeping a client password confidential to the world. This client password is assigned to the client app by the authorization server. This password is used to identify the client to the authorization server, to avoid fraud. An example of a confidential client could be a web app, where no one but the administrator can get access to the server, and see the client password.

A public client is an application that is not capable of keeping a client password confidential. For instance, a mobile phone application or a desktop application that has the client password embedded inside it. Such an application could get cracked, and this could reveal the password. The same is true for a JavaScript application running in the user's browser. The user could use a JavaScript debugger to look into the application, and see the client password.
Confidential clients are more secure than public clients, but you may not always be able to use confidential clients because of constraints on the environment they run in (e.g. native apps, in-browser clients).
#HansZ's answer is a good starting point in that it clarifies the difference between a public and a private client application: the ability to keep the client secret a secret.
But it doesn't answer the question: what OAuth2 profile should I use for which use cases? To answer this critical question, we need to dig a bit deeper into the issue.
For confidential applications, the client secret is supplied out of band (OOB), typically by configuration (e.g. in a properties file). For browser based and mobile applications, there really isn't any opportunity to perform any configuration and, thus, these are considered public applications.
So far, so good. But I disagree that this makes such apps unable to accept or store refresh tokens. In fact, the redirect URI used by SPAs and mobile apps is typically localhost and, thus, 100% equivalent to receiving the tokens directly from the token server in response to a Resource Owner Password Credentials Grant (ROPC).
Many writers point out, sometimes correctly, that OAuth2 doesn't actually do authentication. In fact, as stated in the OAuth2 RFC 6749, both the ROPC and Client Credentials (CC) grants are required to perform authentication. See Section 4.3 and Section 4.4.
However, the statement is true for the Authorization Code and Implicit grants. But how does authentication actually work for these grants?
Typically, the user enters her username and password into a browser form, which is posted to the authentication server, which sets a cookie for its domain. Sorry, but even in 2019, cookies are the state of the authentication art. Why? Because cookies are how browser applications maintain state. There's nothing wrong with them and browser cookie storage is reasonably secure (domain protected, JS apps can't get at "http only" cookies, secure requires TLS/SSL). Cookies allow login forms to be presented only on the 1st authorization request. After that, the current identity is re-used (until the session has expired).
Ok, then what is different between the above and ROPC? Not much. The difference is where the login form comes from. In an SPA, the app is known to be from the TLS/SSL authenticated server. So this is all but identical to having the form rendered directly by the server. Either way, you trust the site via TLS/SSL. For a mobile app, the form is known to be from the app developer via the app signature (apps from Google Play, the Apple Store, etc. are signed). So, again, there is a trust mechanism similar to TLS/SSL (no better, no worse; it depends on the store, CA, trusted root distributions, etc.).
In both scenarios, a token is returned to prevent the application from having to resend the password with every request (which is why HTTP Basic authentication is bad).
In both scenarios, the authentication server MUST be hardened against the onslaught of attacks that any Internet-facing login server is subjected to. Authorization servers don't have this problem as much, because they delegate authentication. However, the OAuth2 password and client_credentials profiles both serve as de facto authentication servers and, thus, really need to be tough.
Why would you prefer ROPC over an HTML form? Non-interactive cases, such as a CLI, are a common use case. Most CLIs can be considered confidential and, thus, should have both a client_id and client_secret. Note: if running on a shared OS instance, you should write your CLI to pull the client secret and password from a file or, at least, from standard input, to keep secrets and passwords from showing up in process listings!
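A minimal sketch of a CLI doing the ROPC grant in Swift; the form fields follow RFC 6749 Section 4.3, while the token URL is illustrative:

```swift
import Foundation

// A CLI performing the ROPC grant (RFC 6749, Section 4.3). Read the
// secret and password from a file or stdin so they never appear in
// process listings.
func requestToken(clientId: String, clientSecret: String,
                  username: String, password: String) async throws -> Data {
    var req = URLRequest(url: URL(string: "https://auth.example.com/oauth/token")!)
    req.httpMethod = "POST"
    req.setValue("application/x-www-form-urlencoded", forHTTPHeaderField: "Content-Type")

    var form = URLComponents()
    form.queryItems = [
        URLQueryItem(name: "grant_type", value: "password"),
        URLQueryItem(name: "client_id", value: clientId),
        URLQueryItem(name: "client_secret", value: clientSecret),
        URLQueryItem(name: "username", value: username),
        URLQueryItem(name: "password", value: password),
    ]
    req.httpBody = form.percentEncodedQuery.map { Data($0.utf8) }

    let (data, _) = try await URLSession.shared.data(for: req)
    return data   // JSON with access_token, refresh_token, expires_in, ...
}
```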
Native apps and SPAs are another good use, imo, because these apps require tokens to pass to REST services. However, if these apps also require cookies for authentication, then you probably want to use the Authorization Code or Implicit flows and delegate authentication to a regular web login server.
Likewise, if users are not authenticated in the same domain as the resource server, you really need to use Authorization Code or Implicit grant types. It is up to the authorization server how the user must authenticate.
If 2-factor authentication is in use, things get tricky. I haven't crossed this particular bridge myself yet. But I have seen cases, like Atlassian, that can use an API key to allow access to accounts that normally require a 2nd factor beyond the password.
Note: even when you host an HTML login page on the server, you need to take care that it is not wrapped either by an IFRAME in the browser or by some WebView component in a native application (which may be able to set hooks to see the username and password you type in; that is how password managers work, btw). But that is another topic, falling under "login server hardening"; the answers all involve clients respecting web security conventions and, thus, a certain level of trust in applications.
A couple final thoughts:
If a refresh token is securely delivered to the application, via any flow type, it can be safely stored in browser/native local storage. Browsers and mobile devices protect this storage reasonably well. It is, of course, less secure than storing refresh tokens only in memory, so maybe not for banking applications... But a great many apps have very long-lived sessions (weeks) and this is how it's done.
Do not use client secrets for public apps. It will only give you a false sense of security. Client secrets are appropriate only when a secure OOB mechanism exists to deliver the secret and it is stored securely (e.g. locked-down OS permissions).
