We have an MDM solution of our own through which we want to manage iOS devices. We can already enrol and manage iOS devices successfully with it, but I am now trying to find a way to secure, with OAuth, all web service calls made from the native app running on the iOS device to the Enrolment and other APIs deployed as part of the MDM solution. We have very limited control over the native app: its logic cannot be modified, so we cannot embed OAuth access tokens as HTTP headers or send them to the MDM APIs by any other means. Is there any configuration in the Enterprise App that runs on the iOS devices to enable OAuth (or any other form of authentication), or some other mechanism I can use to meet this requirement?
The iOS enrollment flow includes a challenge token in the SCEP payload (the Challenge key). Once you have authenticated the user on the MDM server side, generate a unique token based on that user's identity and embed it in the SCEP payload. The token is passed on subsequent enrollment calls, and once enrollment succeeds it can be fetched and used to validate the user. Essentially this is just a way to link the device to a specific user; it could be a temporary token generated on your MDM server that maps to a user identity, or something similar. Following that approach, you could apply the OAuth password grant type, obtain a token once authentication succeeds, and then set that OAuth token as the challenge token for future use. Unlike other OAuth communications, however, iOS will not send this token as a bearer token in an HTTP header; it is embedded in the XML payload, with proper encryption and signing in place.
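To make the idea concrete, here is a rough Swift sketch of building that SCEP payload with the OAuth access token (obtained via the password grant) placed in the Challenge key. The host name and payload identifier are placeholders, and a real enrollment profile carries many more keys:

import Foundation

// Rough sketch: reuse the OAuth access token (obtained via the password grant)
// as the SCEP "Challenge" value in the enrollment profile payload.
// The URL and identifiers below are placeholders for your MDM deployment.
func buildSCEPPayload(accessToken: String) throws -> Data {
    let scepContent: [String: Any] = [
        "URL": "https://mdm.example.com/scep",   // assumed SCEP endpoint
        "Challenge": accessToken                 // OAuth token used as the challenge
    ]
    let payload: [String: Any] = [
        "PayloadType": "com.apple.security.scep",
        "PayloadVersion": 1,
        "PayloadIdentifier": "com.example.mdm.scep",  // hypothetical identifier
        "PayloadUUID": UUID().uuidString,
        "PayloadContent": scepContent
    ]
    // Serialize as the XML plist that goes inside the configuration profile.
    return try PropertyListSerialization.data(fromPropertyList: payload,
                                              format: .xml,
                                              options: 0)
}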
In addition, iOS supports a protocol extension for validating users against an Open Directory service using an auth token. By default this gives you the ability to communicate back and forth via the check-in endpoint.
For confidential clients, scopes are assigned to the client and the logged-in user has to consent to them. Since a client secret is involved in exchanging the authorization code for an access token, no one can impersonate such a client and take advantage of its scopes.
But when it comes to the PKCE flow in a native app, if I had someone else's client ID (client IDs are not considered private information) that has a lot of scopes, I could just start the flow with their client ID. What is stopping a hacker from using some reputable client ID in the PKCE flow and gaining access to all of its scopes?
NATIVE CALLBACK METHODS
If you look at RFC8252, certain types of callback URL can be registered by more than one app, meaning only a client ID needs to be stolen in order to impersonate a real app, as you say.
This still requires a malicious app to trick the user to sign in before tokens can be retrieved. And of course each app should use only the scopes it needs, and prefer readonly ones. After that it depends on the type of native app.
MOBILE
A mobile app can use Claimed HTTPS Schemes via an https callback URL to overcome this. These are backed by App Links on Android and Universal Links on iOS. Even if a malicious app uses the client ID, it cannot receive the login response containing the authorization code, because that response is delivered on a URL like the one below, and the mobile OS will only pass it to the app that has proved ownership of the domain via the deep-linking registration process:
https://mobile.mycompany.com/callback?code=xxx
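As a rough illustration in Swift, the app that has claimed mobile.mycompany.com receives that callback in its scene delegate and extracts the code roughly like this (the domain and path simply follow the example URL above):

import UIKit

// Rough sketch: only the app that has proved ownership of mobile.mycompany.com
// (via its apple-app-site-association file) receives this Universal Link callback.
class SceneDelegate: UIResponder, UIWindowSceneDelegate {
    var window: UIWindow?

    func scene(_ scene: UIScene, continue userActivity: NSUserActivity) {
        guard userActivity.activityType == NSUserActivityTypeBrowsingWeb,
              let url = userActivity.webpageURL,
              url.host == "mobile.mycompany.com",
              url.path == "/callback",
              let components = URLComponents(url: url, resolvingAgainstBaseURL: false),
              let code = components.queryItems?.first(where: { $0.name == "code" })?.value
        else { return }

        // Exchange `code` (together with the PKCE code_verifier) at the token endpoint.
        print("Received authorization code: \(code)")
    }
}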
DESKTOP
For desktop apps there are gaps, since only Loopback and Private URI Scheme callback URLs can be used. This relies on users avoiding malicious apps, e.g. only installing apps from stores that require code signing and that tell the user who the publisher is. If users install malicious apps, then perhaps they have deeper problems.
ATTESTATION
A newer technique is to use a form of client authentication before authentication begins. For confidential clients this is done with Pushed Authorization Requests, which use the app's client credential, so they cannot be used by native clients by default.
Mobile apps could potentially provide proof of ownership of their Google / Apple signing keys during authentication, and there is a proposed standard around that.
I have a native iOS app. When using Sign in with Apple in the app, after the user is successfully authenticated, the Apple server returns an identity token, an authorization code, and a user identifier to the app. The app then sends a request to my server with the identity token and the authorization code.
What I want to ask is: can I verify the identity token directly, or do I need to use the authorization code to send a request to the Apple API, to which Apple will respond with the same identity token for my server?
(https://developer.apple.com/documentation/sign_in_with_apple/generate_and_validate_tokens)
Is it necessary to re-obtain the identity token using the authorization code, or is it fine to just verify the identity token received from the app client?
If you want better security, I would recommend sending the authorization code to Apple's server for validation, in case the client side has been tampered with (e.g. the user jailbreaks their phone and modifies the identityToken parsing function, etc.).
If you are not that worried about the client side (internal apps or casual apps), I think it's fine to just verify the identity token sent from the app client.
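For the server-side route, the check is a form POST of the authorization code to Apple's token endpoint (the one documented at the link above). A rough Swift sketch follows; the client_secret must be the JWT you sign with your Sign in with Apple key and is just a placeholder parameter here:

import Foundation

// Rough sketch (server side): exchange the authorization code forwarded by the app.
// `clientSecret` must be a JWT signed with your Sign in with Apple private key;
// it is only a placeholder parameter here.
func exchangeAppleAuthorizationCode(code: String, clientID: String, clientSecret: String) {
    var request = URLRequest(url: URL(string: "https://appleid.apple.com/auth/token")!)
    request.httpMethod = "POST"
    request.setValue("application/x-www-form-urlencoded", forHTTPHeaderField: "Content-Type")

    let form = [
        "client_id": clientID,
        "client_secret": clientSecret,
        "code": code,
        "grant_type": "authorization_code"
    ]
    request.httpBody = form
        .map { "\($0.key)=\($0.value.addingPercentEncoding(withAllowedCharacters: .urlQueryAllowed) ?? "")" }
        .joined(separator: "&")
        .data(using: .utf8)

    URLSession.shared.dataTask(with: request) { data, _, _ in
        // Apple responds with id_token, access_token and refresh_token;
        // still verify the id_token's signature and claims before trusting it.
        if let data = data, let body = String(data: data, encoding: .utf8) {
            print(body)
        }
    }.resume()
}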
Currently, I am getting an API key from the server after logging in and using it to make HTTP requests, and I store the API key in the iPhone app's database. However, I've heard from a colleague that I should store it in the Keychain. So I searched on Stack Overflow and saw questions regarding this, and it seems this isn't really a secure way of storing API keys at all.
Secure keys in iOS App scenario, is it safe?
In iOS, how can I store a secret "key" that will allow me to communicate with my server?
I don't know of a way to stop hackers from reverse engineering the iOS app to get the API key. A user on Stack Overflow basically said it would only overcomplicate things for little to no benefit.
I still need to find the post, but someone recommended just making sure you make API requests over a secure connection (SSL certificate) and have a way to remove the API key if someone is hacked.
As already pointed out by @jake, you should use a token tied to the user instead of an API key shared by all users, but other enhancements can be made to further protect your app when making HTTP requests.
The user token can be a signed JWT, and you can then enhance the security of the communication between your server and the app with certificate pinning, in order to protect against man-in-the-middle attacks.
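A minimal pinning sketch with a URLSession delegate, assuming the leaf certificate's DER data is compared against a copy bundled in the app (pinned_cert.der is an assumed resource name; pinning a public-key hash works the same way):

import Foundation
import Security

// Rough sketch: reject any connection whose leaf certificate does not match
// the copy bundled with the app ("pinned_cert.der" is an assumed resource name).
final class PinnedSessionDelegate: NSObject, URLSessionDelegate {
    func urlSession(_ session: URLSession,
                    didReceive challenge: URLAuthenticationChallenge,
                    completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {
        guard let trust = challenge.protectionSpace.serverTrust,
              let serverCert = SecTrustGetCertificateAtIndex(trust, 0),
              let pinnedURL = Bundle.main.url(forResource: "pinned_cert", withExtension: "der"),
              let pinnedData = try? Data(contentsOf: pinnedURL)
        else {
            completionHandler(.cancelAuthenticationChallenge, nil)
            return
        }

        let serverData = SecCertificateCopyData(serverCert) as Data
        if serverData == pinnedData {
            completionHandler(.useCredential, URLCredential(trust: trust))
        } else {
            completionHandler(.cancelAuthenticationChallenge, nil)
        }
    }
}

// Usage: URLSession(configuration: .default, delegate: PinnedSessionDelegate(), delegateQueue: nil)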
Other techniques, like the use of OAUTH2 and hiding secrets, can be used to enhance the security of your app, and you can read more about them here.
Keep in mind that certificate pinning can be bypassed by hooking frameworks such as Xposed, which contain modules specifically designed to bypass pinning. Still, it is another layer of security that you should not discard, since it increases the effort needed to hack your app on the device and protects your app against man-in-the-middle attacks.
For the strongest security between your app and the back end, you should use an app integrity attestation service, which guarantees at run time that your app has not been tampered with and is not running on a rooted device, by using an SDK integrated into your app and a service running in the cloud.
On successful attestation of the app's integrity, a JWT is issued and signed with a secret that only your app's back end and the cloud attestation service know; on failure, the JWT is signed with a fake secret that the app back end does not know. This allows the app back end to serve only requests whose JWT signature it can verify, and to refuse those that fail verification.
Since the secret used by the cloud attestation service is not known to the app, it cannot be reverse engineered at run time, even when the app has been tampered with, is running on a rooted device, or is communicating over a connection that is the target of a man-in-the-middle attack.
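On the back-end side the verification is just an HMAC check over the token's first two segments. A minimal Swift sketch with CryptoKit, assuming an HS256-signed token and the shared secret; claim validation (expiry, audience, etc.) is omitted:

import Foundation
import CryptoKit

// Rough sketch: verify an HS256-signed JWT with the secret shared between the
// attestation service and the app back-end. Claim checks (exp, aud, ...) omitted.
func isValidAttestationToken(_ jwt: String, secret: Data) -> Bool {
    let parts = jwt.split(separator: ".")
    guard parts.count == 3 else { return false }

    // base64url -> base64 for the signature segment
    var sigB64 = String(parts[2])
        .replacingOccurrences(of: "-", with: "+")
        .replacingOccurrences(of: "_", with: "/")
    while sigB64.count % 4 != 0 { sigB64 += "=" }
    guard let signature = Data(base64Encoded: sigB64) else { return false }

    // The signature covers "<header>.<payload>" exactly as transmitted.
    let signingInput = Data("\(parts[0]).\(parts[1])".utf8)
    let key = SymmetricKey(data: secret)
    return HMAC<SHA256>.isValidAuthenticationCode(signature,
                                                  authenticating: signingInput,
                                                  using: key)
}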
You can find such a service in Approov, which has SDKs for several platforms, including iOS. The integration also requires a small check in the app's back-end code to verify the JWT, so that the back end can protect itself against fraudulent use.
JWT Token
Token Based Authentication
JSON Web Tokens are an open, industry standard RFC 7519 method for representing claims securely between two parties.
Certificate Pinning
Pinning is the process of associating a host with its expected X.509 certificate or public key. Once a certificate or public key is known or seen for a host, the certificate or public key is associated or 'pinned' to the host. If more than one certificate or public key is acceptable, then the program holds a pinset (taken from Jon Larimer and Kenny Root's Google I/O talk). In this case, the advertised identity must match one of the elements in the pinset.
OAUTH2
The OAuth 2.0 authorization framework enables a third-party application to obtain limited access to an HTTP service, either on behalf of a resource owner by orchestrating an approval interaction between the resource owner and the HTTP service, or by allowing the third-party application to obtain access on its own behalf. This specification replaces and obsoletes the OAuth 1.0 protocol described in RFC 5849.
Disclaimer: I work at Approov.
A more secure mechanism would be to return an authentication token on login. This authentication token should be unique to the user. If you have proper authorization and security mechanisms on the back end (to mitigate DDoS attacks, injection attacks, users accessing other users' data, etc.), then who cares whether they get their authentication token from the Keychain or wherever else it is stored? Since the token is tied to their account, you can simply invalidate it so it stops working if the user is malicious, and you can even disable their account altogether if you have the right mechanisms in place on the back end.
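If you do keep that token in the Keychain, the write is a single SecItemAdd of a generic-password item. A rough Swift sketch; the service and account names are placeholders:

import Foundation
import Security

// Rough sketch: store the per-user authentication token as a generic-password
// Keychain item. "com.example.app" and "authToken" are placeholder identifiers.
func storeAuthToken(_ token: String) -> Bool {
    let base: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: "com.example.app",
        kSecAttrAccount as String: "authToken"
    ]
    SecItemDelete(base as CFDictionary)           // replace any existing item

    var attributes = base
    attributes[kSecValueData as String] = Data(token.utf8)
    attributes[kSecAttrAccessible as String] = kSecAttrAccessibleAfterFirstUnlock
    return SecItemAdd(attributes as CFDictionary, nil) == errSecSuccess
}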
Many of these security mechanisms can be automated on the back end. Platforms like AWS can easily be configured to automatically disable accounts that are making certain malicious calls to your back end.
There appear to be four distinct flows in OAuth2, i.e. (link):
1. Authorization Code Flow - used with server-side applications.
2. Implicit - used with mobile apps or web applications (applications that run on the user's device).
3. Resource Owner Password Credentials - used with trusted applications, such as those owned by the service itself.
4. Client Credentials - used for application API access.
If I'm developing a mobile application that will consume resources from its own API, i.e., the mobile app is developed by the same team developing the API, which of the four OAuth flows should I use and how?
Given my scenario, it sounds to me like option 3 is the way to go. If this is the case, would you adopt the following process:
1. Release your mobile app with the ClientId and ClientSecret stored on it (deemed okay as the application is trusted).
2. Ask the user to log into their account using cookie-based authentication (immediately deleting their username and password).
3. Cache the hash of their username and password returned in the response of the cookie-based authentication.
4. Use the cached username and password, along with the ClientId and ClientSecret, to request access and refresh tokens from the token endpoint of the OAuth server.
Does this seem sensible? It would be good to know if I'm on the right track with the above thought process, or if I'm doing something incredibly silly and ought to be doing this some other way.
Resource Owner Password Credentials flow would be okay for your case.
BTW, it is difficult for a mobile application to keep its client secret confidential (RFC 6749, 2.1. Client Types, RFC 6749, 9. Native Applications). Therefore, in normal cases, a client secret should not be embedded in a mobile application. In other words, embedding a client secret is almost meaningless in terms of security.
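So in practice the app would send the password grant with only its client_id. A rough Swift sketch against an assumed token endpoint:

import Foundation

// Rough sketch: Resource Owner Password Credentials request from a public client.
// No client_secret is sent; the token endpoint URL and client_id are placeholders.
func requestTokens(username: String, password: String) {
    var request = URLRequest(url: URL(string: "https://api.example.com/oauth/token")!)
    request.httpMethod = "POST"
    request.setValue("application/x-www-form-urlencoded", forHTTPHeaderField: "Content-Type")

    let form = [
        "grant_type": "password",
        "client_id": "my-mobile-app",
        "username": username,
        "password": password
    ]
    request.httpBody = form
        .map { "\($0.key)=\($0.value.addingPercentEncoding(withAllowedCharacters: .urlQueryAllowed) ?? "")" }
        .joined(separator: "&")
        .data(using: .utf8)

    URLSession.shared.dataTask(with: request) { data, _, _ in
        // Expect access_token (and usually refresh_token) in the JSON response.
        if let data = data, let body = String(data: data, encoding: .utf8) {
            print(body)
        }
    }.resume()
}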
2. Implicit - used with mobile apps or web applications (applications that run on the user's device).
If your application runs entirely on a mobile device then you are encouraged to use this flow as your mobile app can't be trusted to keep its client credentials secret.
Does Google currently support cross client/platform auth for iOS? We need both our server and iOS app to be authorized to hit Google endpoints.
Instructions described in https://developers.google.com/accounts/docs/CrossClientAuth don't really work for iOS.
The only workaround I can think of is to have the iOS app do the initial user auth and pass the code + refresh token to the server; from then on, the server shares the access token with the app whenever the access token expires.
Need more info about your use case to make a recommendation, but I have found it's easier to drive the initial auth/access-code/refresh-token stuff from the server, and then let the client app request access tokens as required.
To answer your specific question, cross client auth is supported for iOS, but iOS lacks some of the convenience mechanisms which apply to the initial auth process.
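As a sketch of that division of labour, the app can simply ask your own backend for a fresh Google access token whenever the current one expires, while the backend keeps the refresh token; the endpoint below is hypothetical:

import Foundation

// Rough sketch: the server keeps the Google refresh token; the app asks the
// server for a fresh access token when the current one expires.
// "/google/access-token" on api.example.com is a hypothetical endpoint.
func fetchGoogleAccessToken(sessionToken: String, completion: @escaping (String?) -> Void) {
    var request = URLRequest(url: URL(string: "https://api.example.com/google/access-token")!)
    request.setValue("Bearer \(sessionToken)", forHTTPHeaderField: "Authorization")

    URLSession.shared.dataTask(with: request) { data, _, _ in
        guard let data = data,
              let json = try? JSONSerialization.jsonObject(with: data) as? [String: Any],
              let token = json["access_token"] as? String
        else { completion(nil); return }
        completion(token)
    }.resume()
}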