Embedding client Id in chrome extension - oauth

I am building a Chrome extension that will interact with the Salesforce Chatter API. For a user authenticating with OAuth (User-Agent flow), I need to embed my client key in my extension.
Will this cause any security problem? Or is there a way to use OAuth without embedding the client ID in my extension?

The client ID has to be included in the request so the provider knows that the request came from you, as @Matt Lacey already pointed out. Normally, the provider also issues a confidential client secret that is additionally included in the access token request, so the provider can verify that your app is allowed to use that client ID.
Chrome extensions run on an open platform, and the platform itself provides no way either to authenticate the extension against a server (which Salesforce would then also have to support) or to store properties securely (hard, if not impossible, on an open platform), so keeping a client secret confidential is unfortunately not possible.
As this is a common problem, it is already addressed in the OAuth specification (see sections 10.1 Client Authentication and 10.2 Client Impersonation). The provider is therefore required to do additional checks, but on the client side there is nothing you can do to meaningfully improve security.
If you want some more insight into how this will be handled on Android devices in the future, check out my answer here.
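For reference, here is a minimal sketch of the User-Agent (implicit) flow from an extension (TypeScript, using chrome.identity.launchWebAuthFlow), where only the public client ID is embedded. The Salesforce authorize endpoint is the standard one and the consumer key is a placeholder:

    // Minimal sketch: OAuth User-Agent (implicit) flow from a Chrome extension.
    // Only the public client ID (consumer key) is embedded; no client secret is used.
    // Assumes the extension's redirect URL (https://<extension-id>.chromiumapp.org/)
    // is registered as the Callback URL of the Salesforce Connected App.
    const CLIENT_ID = 'YOUR_CONSUMER_KEY'; // public, shipped with the extension
    const AUTHORIZE_URL = 'https://login.salesforce.com/services/oauth2/authorize';

    function signIn(): void {
      const redirectUri = chrome.identity.getRedirectURL();
      const authUrl =
        `${AUTHORIZE_URL}?response_type=token` +
        `&client_id=${encodeURIComponent(CLIENT_ID)}` +
        `&redirect_uri=${encodeURIComponent(redirectUri)}`;

      chrome.identity.launchWebAuthFlow({ url: authUrl, interactive: true }, (responseUrl) => {
        if (chrome.runtime.lastError || !responseUrl) {
          console.error('Auth failed', chrome.runtime.lastError);
          return;
        }
        // In the user-agent flow the access token comes back in the URL fragment.
        const fragment = new URLSearchParams(new URL(responseUrl).hash.slice(1));
        const accessToken = fragment.get('access_token');
        // Use accessToken for Chatter API calls; avoid persisting it longer than necessary.
      });
    }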

You have to embed the client ID in the extension to let Salesforce know which app is trying to authenticate. These client IDs are intended to be stored by the client and passed to the server, so as long as you store it sensibly there shouldn't be a problem.

As Matt explained, if you are creating a packaged app you will be forced to include the client ID. Another solution is to write the app as a hosted app:
What is the difference between packaged apps and hosted apps?
The drawback of this is the added complexity of managing a web server, but it allows for greater security.

Related

Oauth pkce flow impersonating someone else's client

For confidential clients, scopes are assigned to clients and the logged-in user has to consent to them. Since a client secret is involved in the exchange of the auth code for an access token, no one can impersonate those clients and take advantage of their scopes.
But when it comes to the PKCE flow on a native app, if I had someone else's clientId (clientIds are not considered private information) which has a lot of scopes, I could just start the flow with their clientId. What is stopping a hacker from using some reputable clientId in the PKCE flow and getting access to all of its scopes?
NATIVE CALLBACK METHODS
If you look at RFC 8252, certain types of callback URL can be registered by more than one app, meaning only a client ID needs to be stolen in order to impersonate a real app, as you say.
This still requires the malicious app to trick the user into signing in before tokens can be retrieved. And of course each app should request only the scopes it needs, preferring read-only ones. Beyond that, it depends on the type of native app.
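For context, here is a minimal sketch of the PKCE exchange itself (TypeScript, Node 18+; the endpoint URLs and client ID are placeholders). Note that the code_verifier only proves that the token request comes from whoever started the flow; it does not prove which app that is, which is why the callback protections described below matter:

    import { randomBytes, createHash } from 'crypto';

    // 1. The app creates a one-time secret (code_verifier) and sends only its hash (code_challenge).
    const codeVerifier = randomBytes(32).toString('base64url');
    const codeChallenge = createHash('sha256').update(codeVerifier).digest('base64url');

    // 2. The authorization request contains only public values.
    const authorizeUrl =
      'https://idp.example.com/authorize' +                 // placeholder endpoint
      '?response_type=code' +
      '&client_id=PUBLIC_CLIENT_ID' +                       // not a secret
      '&redirect_uri=' + encodeURIComponent('https://mobile.mycompany.com/callback') +
      '&code_challenge=' + codeChallenge +
      '&code_challenge_method=S256';

    // 3. Only the holder of code_verifier can redeem the returned authorization code.
    async function redeemCode(code: string) {
      const res = await fetch('https://idp.example.com/token', {   // placeholder endpoint
        method: 'POST',
        headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
        body: new URLSearchParams({
          grant_type: 'authorization_code',
          code,
          client_id: 'PUBLIC_CLIENT_ID',
          redirect_uri: 'https://mobile.mycompany.com/callback',
          code_verifier: codeVerifier,                       // binds the exchange to this app instance
        }),
      });
      return res.json();
    }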
MOBILE
A mobile app can overcome this by using a Claimed HTTPS Scheme, i.e. an https callback URL backed by App Links on Android or Universal Links on iOS. Even if a malicious app uses the client ID, it cannot receive the login response with the authorization code, because the response is delivered to a URL like the one below, and the mobile OS will only pass it to the app that has proved ownership of the domain via the deep-linking registration process:
https://mobile.mycompany.com/callback?code=xxx
DESKTOP
For desktop apps there are gaps, since only Loopback and Private URI Scheme callback URLs can be used. This relies on users avoiding malicious apps, e.g. by only installing apps from stores that require code signing and that tell the user who the publisher is. If users install malicious apps, then perhaps they have deeper problems.
ATTESTATION
A newer technique is to use a form of client authentication before authentication begins. For confidential clients this is done with Pushed Authorization Requests, which use the app's client credential, so it cannot be used by native clients by default.
Mobile apps could potentially provide proof of ownership of their Google / Apple signing keys during authentication, and there is a proposed standard around that.

How can I ensure the integrity of my iOS app?

I have the requirement that the signature of my Swift iOS app has to be checked.
I think it is only relevant for jailbroken devices as iOS checks the integrity by itself.
I couldn't find much on the web - most libraries/snippets have not been updated for 5 years.
My current approach would be to calculate the app signature (in C code) and compare it with an array of signatures loaded from the server (an array so that multiple versions of the app can be supported).
Any ideas or comments on this approach? Maybe it is not relevant anymore for Swift?
Here are some resources that would inspire my solution:
https://github.com/olxios/SmartSec_iOS_Security
https://github.com/project-imas/encrypted_code_modules/blob/master/ecm_app_integrity_check/ecm_app_integrity_check/app_integrity_check.m
Your Current Approach
My current approach would be to calculate the app signature (in C code) and compare it with an array of signatures loaded from the server.
I need to alert you to the fact that on a jailbroken phone an attacker will be able to intercept your code at run time and modify its behavior, which means your logic to check that the signature is OK can be made to always return true. They do this with an introspection framework like Frida:
Inject your own scripts into black box processes. Hook any function, spy on crypto APIs or trace private application code, no source code needed. Edit, hit save, and instantly see the results. All without compilation steps or program restarts.
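To make that concrete, this is roughly what such a run-time hook looks like (a Frida script in TypeScript/JavaScript; verify_app_signature is a hypothetical exported symbol standing in for your C check):

    // Frida sketch: force a (hypothetical) native integrity check to always succeed.
    // 'verify_app_signature' is an invented symbol name, used here only for illustration.
    const addr = Module.findExportByName(null, 'verify_app_signature');
    if (addr !== null) {
      Interceptor.attach(addr, {
        onLeave(retval) {
          retval.replace(ptr('0x1'));   // whatever the check found, report "signature is ok"
        },
      });
    }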
Your Requirement
I have the requirement that the signature of my Swift iOS app has to be checked. I think it is only relevant for jailbroken devices as iOS checks the integrity by itself.
Well, if the intention of this requirement is only to protect your app from running on a jailbroken device, then checking the signature of the app is not enough: once the device is jailbroken, any app on it can be easily manipulated, as I already mentioned. Protecting an app from attacks on the device itself is a daunting task; it is a cat-and-mouse game with the attackers, where you try to stay ahead. You will need in-app protections to detect whether the app is running on a jailbroken device, has an introspection framework attached, is running in an emulator, has a debugger attached, and so on. This is a never-ending game, and the attackers have a huge advantage: they can dedicate all their resources and time to breaking your app if it is worth it to them, while you may not have the same people, time and money to invest. No matter how you decide to play the game, you can always resort to the Apple DeviceCheck API to mark in your API server that a specific device is not trustworthy.
In case the requirement to check the iOS app signature is more about protecting the API server from receiving requests from an iOS app that is not the genuine one you uploaded to the App Store, then a better solution may be possible, and it is known by the name of Mobile App Attestation. So if this is also in the scope of your requirement you should keep reading, otherwise just skip it.
Before I dive into the Mobile App Attestation concept, I would like to first clear up a misconception about WHO and WHAT is accessing an API server.
The Difference Between WHO and WHAT is Accessing the API Server
To better understand the difference between WHO and WHAT is accessing an API server, let's use this picture:
So replace the mobile app with the web app, and keep following my analogy around this picture.
The Intended Communication Channel represents the web app being used as you expect: by a legitimate user without any malicious intentions, communicating with the API server from the browser, not using Postman or any other tool to perform a man-in-the-middle (MitM) attack.
The actual channel may represent several different scenarios: a legitimate user with malicious intentions using cURL or a tool like Postman to perform the requests, or a hacker using a MitM attack tool such as mitmproxy to understand how the communication between the web app and the API server works, in order to replay requests or even automate attacks against the API server. Many other scenarios are possible, but we will not enumerate them all here.
I hope that by now you may already have a clue why the WHO and the WHAT are not the same, but if not it will become clear in a moment.
The WHO is the user of the web app, whom we can authenticate, authorize and identify in several ways, for example with OpenID Connect or OAuth2 flows.
OAUTH
Generally, OAuth provides to clients a "secure delegated access" to server resources on behalf of a resource owner. It specifies a process for resource owners to authorize third-party access to their server resources without sharing their credentials. Designed specifically to work with Hypertext Transfer Protocol (HTTP), OAuth essentially allows access tokens to be issued to third-party clients by an authorization server, with the approval of the resource owner. The third party then uses the access token to access the protected resources hosted by the resource server.
OpenID Connect
OpenID Connect 1.0 is a simple identity layer on top of the OAuth 2.0 protocol. It allows Clients to verify the identity of the End-User based on the authentication performed by an Authorization Server, as well as to obtain basic profile information about the End-User in an interoperable and REST-like manner.
While user authentication may let the API server know WHO is using the API, it cannot guarantee that the requests originated from WHAT you expect: the browser where your web app should be running, with a real user.
Now we need a way to identify WHAT is calling the API server, and here things become trickier than most developers may think. The WHAT is the thing making the request to the API server. Is it really a genuine instance of the web app, or is it a bot, an automated script, or an attacker manually poking around the API server with a tool like Postman?
To your surprise, you may end up discovering that it is one of your legitimate users manually manipulating the requests, or an automated script trying to game and take advantage of the service provided by the web app.
Well, to identify the WHAT, developers tend to resort to an API key that is usually sent in the headers by the web app. Some developers go the extra mile and compute the key at run time in the web app, inside obfuscated JavaScript, so that it becomes a runtime secret, but it can still be reverse engineered with deobfuscation tools and by inspecting the traffic between the web app and the API server with the browser's F12 developer tools or a MitM proxy.
The above write-up was extracted from an article I wrote entitled WHY DOES YOUR MOBILE APP NEED AN API KEY?. While it was written in the context of a mobile app, the overall idea is still valid for a web app. If you wish, you can read the article in full here; it is the first in a series of articles about API keys.
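A tiny sketch of what that usually amounts to (TypeScript; the endpoint and header name are placeholders), and why it offers little protection: the key has to appear in every request, so anyone with the F12 Network tab or a MitM proxy can read it:

    // The "secret" API key travels with every request the browser sends,
    // so anyone pressing F12 (Network tab) or proxying the traffic can read and reuse it.
    const API_KEY = 'not-really-a-secret';   // placeholder; even if computed at run time,
                                             // it must exist in clear text before the request goes out
    async function getOrders() {
      const res = await fetch('https://api.example.com/orders', {   // placeholder endpoint
        headers: { 'x-api-key': API_KEY },
      });
      return res.json();
    }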
Mobile App Attestation
The use of a Mobile App Attestation solution enables the API server to know WHAT is sending the requests, allowing it to respond only to requests from a genuine mobile app while rejecting all other requests from unsafe sources.
The role of a Mobile App Attestation solution is to guarantee at run time that your mobile app has not been tampered with, is not running on a rooted device, is not being instrumented by a framework like Frida and is not being MitM attacked. This is achieved by running an SDK in the background; a service running in the cloud challenges the app and, based on the responses, attests the integrity of the mobile app and the device it is running on, so the SDK itself is never responsible for any decisions.
MiTM Proxy
An interactive TLS-capable intercepting HTTP proxy for penetration testers and software developers.
On successful attestation of the mobile app's integrity, a short-lived JWT is issued, signed with a secret that only the API server and the Mobile App Attestation service in the cloud know. If the attestation fails, the JWT is signed with a secret that the API server does not know.
The app must then send the JWT in the headers of every API call. This allows the API server to serve requests only when it can verify the signature and expiration time of the JWT, and to refuse them when verification fails.
Since the secret used by the Mobile App Attestation service is not known by the mobile app, it is not possible to reverse engineer it at run time, even when the app has been tampered with, is running on a rooted device, or is communicating over a connection that is the target of a man-in-the-middle attack.
The Mobile App Attestation service already exists as a SaaS solution at Approov (I work here), which provides SDKs for several platforms, including iOS, Android, React Native and others. The integration also needs a small check in the API server code to verify the JWT issued by the cloud service. This check is necessary for the API server to be able to decide which requests to serve and which to deny.
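A minimal sketch of what that server-side check can look like (TypeScript with Express and the jsonwebtoken package; the header name and environment variable are assumptions, not the exact Approov integration):

    import jwt from 'jsonwebtoken';
    import type { Request, Response, NextFunction } from 'express';

    // Shared only between the attestation service and the API server; never shipped in the app.
    const ATTESTATION_SECRET = process.env.ATTESTATION_JWT_SECRET as string;

    export function requireValidAppToken(req: Request, res: Response, next: NextFunction) {
      const token = req.header('App-Token');        // assumed header name
      if (!token) return res.status(401).end();
      try {
        jwt.verify(token, ATTESTATION_SECRET);      // checks both the signature and the exp claim
        return next();
      } catch {
        return res.status(401).end();               // bad signature or expired token
      }
    }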
Summary
In the end, the solution to use in order to protect your API server must be chosen in accordance with the value of what you are trying to protect and the legal requirements for that type of data, like the GDPR regulations in Europe.
Going the Extra Mile
OWASP Mobile Security Project - Top 10 risks
The OWASP Mobile Security Project is a centralized resource intended to give developers and security teams the resources they need to build and maintain secure mobile applications. Through the project, our goal is to classify mobile security risks and provide developmental controls to reduce their impact or likelihood of exploitation.

How to give access to my api for a mobile app?

I have to develop the backend for a mobile app (iOS, Swift). I started to create the API with Laravel.
But I'm concerned about access to my API: how should I give access to my API? I've heard some things about OAuth keys and Passport.
For my app I want:
-users to be able to create an account (I guess that's with JWT)
-users to be able to navigate my app and start using it after they create their account.
I want to know the basic process of creating an API for private use (only my app will use it), what security measures I should implement, and how account creation for my app will work. Thanks :)
PRIVATE APIs
I want to know the basic process of creating an API for private use (only my app will use it)
Let me tell you here a cruel truth...
No matter whether an API has publicly accessible documentation or is protected by any kind of secret or authentication mechanism: once it is accessible from the internet, it is not private any more.
So you can make it hard to find and access, but you are going to have a hard time truly locking it down to just your mobile app.
WHO AND WHAT IS ACCESSING THE API SERVER
The WHO is the user of the mobile app, whom you can authenticate, authorize and identify in several ways, for example with OpenID Connect or OAuth2 flows.
Now you need a way to identify WHAT is calling your API server, and here things become trickier than most developers may think. The WHAT is the thing making the request to the API server: is it really your genuine mobile app, or is it a bot, an automated script, or an attacker manually poking around your API server with a tool like Postman?
Well, to identify the WHAT, developers tend to resort to an API key that they usually hard-code into their mobile app. Some go the extra mile and compute it at run time in the mobile app, so it becomes a dynamic secret, as opposed to the former approach of a static secret embedded in the code.
REVERSE ENGINEERING A MOBILE APP BINARY IS EASY
The truth is that anything running on the client side can be reverse engineered easily by an attacker on a device he controls. He will use introspection frameworks like Frida or Xposed to intercept the running code of the mobile app at run time, or a proxy tool like mitmproxy to watch the communication between the mobile app and the API server. Normally the first step in reverse engineering a mobile app is to run the Mobile Security Framework against the binary of your mobile app to extract all static secrets and to identify attack vectors.
Mobile Security Framework
Mobile Security Framework is an automated, all-in-one mobile application (Android/iOS/Windows) pen-testing framework capable of performing static analysis, dynamic analysis, malware analysis and web API testing.
Frida
Inject your own scripts into black box processes. Hook any function, spy on crypto APIs or trace private application code, no source code needed. Edit, hit save, and instantly see the results. All without compilation steps or program restarts.
xPosed
Xposed is a framework for modules that can change the behavior of the system and apps without touching any APKs. That's great because it means that modules can work for different versions and even ROMs without any changes (as long as the original code was not changed too much). It's also easy to undo.
MiTM Proxy
An interactive TLS-capable intercepting HTTP proxy for penetration testers and software developers.
So now what? Am I doomed to the point where I cannot protect my API server from being abused? Not quite, hope still exists!
A POSSIBLE SOLUTION
So anything that runs on the client side and needs some secret to access an API can be abused in different ways, and you can learn more in this series of articles about Mobile API Security Techniques. These articles will teach you how API keys, user access tokens, HMAC and TLS pinning can be used to protect the API, and how they can be bypassed.
But I'm concerned about access to my API: how should I give access to my API? I've heard some things about OAuth keys and Passport.
For my app I want:
-users to be able to create an account (I guess that's with JWT)
-users to be able to navigate my app and start using it after they create their account.
...and how account creation for my app will work.
Laravel Passport is an OAuth2 server, and thus a good solution for user creation and identification, i.e. for solving the problem of WHO is using your mobile app and API server.
what security measures should I implement
To address the problem of WHAT is accessing your mobile app, you need to use one or all of the solutions mentioned in the series of articles about Mobile API Security Techniques referenced above, and accept that they can only make unauthorized access to your API server harder to bypass, not impossible.
A better solution can be employed by using a Mobile App Attestation solution, which enables the API server to know that it is receiving requests only from a genuine mobile app.
Mobile App Attestation
Use a Mobile App Attestation solution to enable the API server to know WHAT is sending the requests, so that it only responds to requests from a genuine mobile app.
The role of a Mobile App Attestation service is to guarantee at run time that your mobile app has not been tampered with and is not running on a rooted device, by running an SDK in the background that communicates with a service running in the cloud to attest the integrity of the mobile app and the device it is running on.
On successful attestation of the mobile app's integrity, a short-lived JWT is issued, signed with a secret that only the API server and the Mobile App Attestation service in the cloud know. If the attestation fails, the JWT is signed with a secret that the API server does not know.
The app must then send the JWT in the headers of every API call. This allows the API server to serve requests only when it can verify the signature and expiration time of the JWT, and to refuse them when verification fails.
Since the secret used by the Mobile App Attestation service is not known by the mobile app, it is not possible to reverse engineer it at run time, even when the app has been tampered with, is running on a rooted device, or is communicating over a connection that is the target of a man-in-the-middle attack.
The Mobile App Attestation service already exists as a SaaS solution at Approov (I work here), which provides SDKs for several platforms, including iOS, Android, React Native and others. The integration also needs a small check in the API server code to verify the JWT issued by the cloud service. This check is necessary for the API server to be able to decide which requests to serve and which to deny.

oauth2 openid connect javascript (electron) desktop application

What is the correct OAuth2 flow for a desktop application? Besides the desktop application I have an SPA web GUI which uses the implicit flow. There it does not matter if the client redirects to the IdP after 3600 s to get a new access token issued.
But the desktop application needs to run 24/7, or at least could be running 24/7, so it needs to refresh the access token automatically via a refresh_token. Since the implicit flow does not provide refresh tokens, it is probably the wrong flow for a desktop app, isn't it?
I guess I need the auth code flow, which does provide a refresh_token. But authentication requests need a redirect_uri. Let's say I want to use Google as my OpenID provider. With Google it looks like I can't register client credentials with a custom URI scheme (https://developers.google.com/identity/protocols/OpenIDConnect). What does work is to register, for example, http://localhost:9300, which could theoretically be handled by the app.
A
What's the correct OAuth2 flow for a desktop app to receive a refresh_token?
B
Can I catch the redirect_uri via a custom URI scheme without using the implicit flow (Google IdP)? It is way easier to listen for a custom URI scheme than to listen on a local TCP port.
C
This is more of a general question. Usually desktop apps are public apps, so I should not include a client_secret, right? That would leave only the implicit flow, but then how can I renew access tokens according to the specs without bothering the desktop user every 3600 s?
In my case I could publish the app locally, so it is not public, but how is it for a public app?
A - Authorization Code Grant
B - Not sure here; you can register a Custom URI Scheme.
C - Not enough information provided.
Are you using the AppAuth libraries? If so, you SHOULD use PKCE, and then additional security measures for the refresh token should not be necessary, on the assumption that the client never shares the refresh token with anyone other than the IdP, and only over a secure connection.
Does this help?
A: Yes, use the code grant.
B: Yes, use a custom scheme. In your case you should use the reverse of your client ID, e.g. com.googleusercontent.apps.123 is the reverse DNS notation of the client ID. Register your client as "Other" in the Google developer console.
C: Yes, it should not include the client secret. That is why you don't need to send the secret for native clients ("Other") when exchanging the code for a refresh token. Just leave that field blank and it will work (see the sketch after this answer).
As suggested by jwilleke, please use an AppAuth library if one is available for your use case, as it also handles some of the security concerns (PKCE).
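A minimal sketch of the loopback variant mentioned in the question (TypeScript, Node 18+ in an Electron main process; the port, Google token endpoint and client ID are assumptions), catching the authorization code on http://localhost:9300 and exchanging it without a client secret:

    import http from 'http';
    import { URL, URLSearchParams } from 'url';

    const CLIENT_ID = 'YOUR_CLIENT_ID';                       // public, no secret needed
    const REDIRECT_URI = 'http://localhost:9300/callback';    // loopback port from the question
    const TOKEN_URL = 'https://oauth2.googleapis.com/token';  // assumed Google token endpoint

    // Wait for the browser to be redirected back with ?code=...
    // (PKCE parameters are omitted for brevity; an AppAuth library adds them for you.)
    function waitForCode(): Promise<string> {
      return new Promise((resolve, reject) => {
        const server = http.createServer((req, res) => {
          const code = new URL(req.url ?? '/', REDIRECT_URI).searchParams.get('code');
          res.end('You can close this window now.');
          server.close();
          if (code) resolve(code); else reject(new Error('No authorization code in redirect'));
        });
        server.listen(9300);
      });
    }

    // Exchange the code for tokens; note there is no client_secret in the request body.
    async function exchangeCode(code: string) {
      const res = await fetch(TOKEN_URL, {
        method: 'POST',
        headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
        body: new URLSearchParams({
          grant_type: 'authorization_code',
          code,
          client_id: CLIENT_ID,
          redirect_uri: REDIRECT_URI,
        }),
      });
      return res.json();   // contains access_token and, if offline access was requested, refresh_token
    }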
For native apps (desktop), you can follow OAuth 2.0 for Native Apps. This is still under review, and you can refer to the latest draft at the provided link.
With this flow you can use the authorization code flow to obtain both an access token and a refresh token. Refresh tokens should solve the UX issue when it comes to extended app usage (24/7 and beyond).
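A minimal sketch of that refresh (TypeScript; the token endpoint and client ID are placeholders), which the app can run in the background shortly before each access token's 3600 s lifetime ends:

    // Public-client refresh: grant_type=refresh_token, no client_secret in the request.
    async function refreshAccessToken(refreshToken: string): Promise<string> {
      const res = await fetch('https://idp.example.com/token', {   // placeholder endpoint
        method: 'POST',
        headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
        body: new URLSearchParams({
          grant_type: 'refresh_token',
          refresh_token: refreshToken,
          client_id: 'YOUR_CLIENT_ID',                             // public identifier only
        }),
      });
      const { access_token, expires_in } = await res.json();
      // Schedule the next refresh a little before expiry (e.g. 60 seconds early).
      setTimeout(() => refreshAccessToken(refreshToken), (expires_in - 60) * 1000);
      return access_token;
    }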
According to this working document, there are strict guidelines on client authentication; Section 8.5 discusses them. As it says, client credentials are not recommended:
For this reason, and those stated in Section 5.3.1 of [RFC6819], it is NOT RECOMMENDED for authorization servers to require client authentication of public native apps clients using a shared secret.
Also, as nvnagr has mentioned in his answer, PKCE [RFC 7636] is a must-have for native public clients.

Why does Google provide a client secret for a Native application?

I'm writing a native application that works against a Google API. Upon registering my application, and despite its explicit designation as Native, the Google Developers Console provides me with a client secret.
As far as I understand the OAuth 2.0 protocol, native apps should never have a client secret, since they cannot guarantee its secrecy. Is Google mistaken in its implementation of OAuth 2.0? How should I proceed?
You are correct: the client secret isn't terribly useful in a native application from the perspective of being kept secret. I suspect it's there mainly for consistency with the web application flow.
It does, however, have at least one useful feature: the original developer can reset it at any time, effectively revoking all refresh tokens bound to that client ID.
