When an application uses the Drive Realtime API with a user's file stored in Drive, it gets access to the realtime collaborative model associated with that file. Both the official reference material [1] and a previous answer here on Stack Overflow [2] document that when two different applications [3] use the Realtime API with the same Drive file, they will be accessing different collaborative models.
Let's assume that I have a server-side application with a client_secret and the user's OAuth refresh_token stored on a server that only I can access, and that the user's access_token only leaves that server when making direct calls (over HTTPS) to various Google APIs. Consider the case where my application has used the realtime.get and realtime.update methods of the Drive API to keep some sensitive data in the collaborative model of the user's Drive file, such as an encryption key or a long-lived OAuth refresh_token for a third-party service.
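For concreteness, the calls my server makes look roughly like the sketch below (assuming the v2 REST endpoints for realtime.get and realtime.update and a runtime with fetch; the file ID and token handling are simplified):

    // Rough sketch: reading and updating my app's collaborative model through
    // the Drive v2 REST endpoints with a server-side access token.
    const realtimeUrl = (fileId: string) =>
      `https://www.googleapis.com/drive/v2/files/${fileId}/realtime`;

    // realtime.get: returns a JSON export of the collaborative model.
    async function readModel(fileId: string, accessToken: string): Promise<string> {
      const res = await fetch(realtimeUrl(fileId), {
        headers: { Authorization: `Bearer ${accessToken}` },
      });
      if (!res.ok) throw new Error(`realtime.get failed: ${res.status}`);
      return res.text();
    }

    // realtime.update: overwrites the model with an uploaded JSON export.
    async function writeModel(fileId: string, accessToken: string, model: string): Promise<void> {
      const res = await fetch(
        `https://www.googleapis.com/upload/drive/v2/files/${fileId}/realtime`,
        {
          method: 'PUT',
          headers: {
            Authorization: `Bearer ${accessToken}`,
            'Content-Type': 'application/json',
          },
          body: model,
        },
      );
      if (!res.ok) throw new Error(`realtime.update failed: ${res.status}`);
    }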
Is this sensitive data safe from disclosure to another application, even when that application also uses the Realtime API on the same file?
I don't think any other application could impersonate my application, since it wouldn't have access to my client_secret and wouldn't have a chance to intercept either the refresh_token or the user's access_token associated with my app.
Bonus question: Can the user bypass my application and gain access to this sensitive data?
I can't see a way for the user to impersonate my application. The user can use my application's public client_id and grant himself permission through the normal OAuth flow, but would have no way to exchange the resultant code for a valid access_token without knowing the client_secret.
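For reference, the code-for-token exchange is the step that requires the client_secret; a rough sketch (the endpoint and parameters follow Google's OAuth 2.0 docs, the placeholder values are mine):

    // Sketch: exchanging an authorization code for tokens at Google's token
    // endpoint. Without the correct client_secret this request is rejected,
    // which is why knowing only the public client_id is not enough.
    async function exchangeCode(code: string): Promise<unknown> {
      const res = await fetch('https://oauth2.googleapis.com/token', {
        method: 'POST',
        headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
        body: new URLSearchParams({
          code,
          client_id: 'MY_CLIENT_ID.apps.googleusercontent.com', // public
          client_secret: process.env.CLIENT_SECRET ?? '',       // never leaves my server
          redirect_uri: 'https://example.com/oauth2callback',   // placeholder
          grant_type: 'authorization_code',
        }),
      });
      if (!res.ok) throw new Error(`token exchange failed: ${res.status}`);
      return res.json(); // { access_token, refresh_token?, expires_in, ... }
    }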
"Models are isolated by application. If a user opens the same file with two different collaborative apps, separate documents are created."
Using Collaborative Models with Existing File Types
"When you create a doc in the realtime playground, it is owned by the realtime playground app. When you try to then get the response in the try-it feature, it uses an app specific to try-it which can't see the realtime model you created." Official answer to question "How to work with Realtime get and update api requests?"
That is, when the applications use different client_id values to obtain OAuth credentials for the user.
The realtime models for different applications are isolated as you describe, but you should assume that anything in the model is theoretically readable by any user on the ACL.
If the user has authorized your application, they can theoretically grab the OAuth token used to make requests, since it has to be sent from their computer along with those requests.
Additionally, if you ever load the document in the browser, it is available there in its entirety, regardless of which parts of it you display.
Please forgive my ignorance on this topic. I've been a developer for a long time, but there's a huge gap in my knowledge and experience when it comes to authentication & authorization protocols and proper handling of tokens.
We've got a whole homegrown suite that consists of:
4 web apps (2 in Ruby/Rails, 1 in Elixir/Phoenix, 1 single-page React)
1 image server (serverless app written as an AWS Lambda / API Gateway)
1 custom data API (also serverless Lambda / API Gateway)
We also have an Amazon Cognito User Pool connected to our backend identity provider to authenticate users and generate tokens.
All but one of these allow some form of anonymous access; the other is only available to logged-in users. If a user is logged in, they all need access to the user's profile info from the ID token, preferably without initiating another auth flow. Our backend apps may also need to make use of the access token, but obviously we wouldn't be handing that out to the SPA or public API consumers.
My first thought is to store the tokens in a key/value store on the backend, and set a short-lived, encrypted JWT containing a unique session ID on the shared domain that all of the backend apps have access to, with the key stored in a config secret. By decoding the JWT and looking up the session ID, the apps can get what they need from the data store. The backend would also refresh tokens when necessary.
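Something like this sketch is what I have in mind (the Redis store and the jsonwebtoken library are just illustrative choices, and a real version would encrypt the JWT rather than only sign it):

    import { randomUUID } from 'crypto';
    import jwt from 'jsonwebtoken';          // signs the session JWT (stand-in for a JWE setup)
    import { createClient } from 'redis';    // illustrative key/value store

    const redis = createClient();
    await redis.connect();                   // top-level await (ESM)
    const SESSION_SECRET = process.env.SESSION_SECRET!; // the key kept in a config secret

    // After Cognito returns tokens, stash them server-side and give the browser
    // only a short-lived JWT containing an opaque session ID, on the shared domain.
    async function createSession(idToken: string, accessToken: string, refreshToken: string) {
      const sessionId = randomUUID();
      await redis.hSet(`session:${sessionId}`, { idToken, accessToken, refreshToken });
      await redis.expire(`session:${sessionId}`, 60 * 60); // one hour, refreshed as needed
      return jwt.sign({ sid: sessionId }, SESSION_SECRET, { expiresIn: '15m' });
    }

    // Any of the backend apps can resolve the session JWT back to the stored tokens.
    async function resolveSession(sessionJwt: string) {
      const payload = jwt.verify(sessionJwt, SESSION_SECRET);
      const sid = typeof payload === 'string' ? payload : (payload.sid as string);
      return redis.hGetAll(`session:${sid}`);
    }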
I also know that API Gateway can use a Cognito user pool as an authorizer, but I'm unclear how I would make that work while integrating it with the rest of our apps and requirements above. Sometimes requests to the API are made from the browser (in the React app, for example), and sometimes they come from the backend of one of the web apps.
The image server and API are used by our apps, but are also documented and accessible for other people to build their own applications on. But they would have to register their apps as OIDC clients to receive any profile info from logged in users.
I'd love some advice on how to make all of this work, or at least pointers toward resources that might help make it less dizzying.
I have an old Windows application written in VB.NET with a SQL Server backend. Currently, new user additions, deletions, adding entitlements, etc. are managed by an old approval workflow system. After approvals are obtained, the user details and entitlements are inserted into the SQL Server database table manually.
I am trying to integrate this application with SailPoint's identity and access management, so new user additions, deletions, updates, adding entitlements, etc. will be done through SailPoint. For this, I would need to create a web API that SailPoint can call, exposing those functions (add user / delete user / add entitlements). The only consumer of this API is SailPoint.
I am new to OAuth, and below are the grant types I came across, but I'm not sure which one I should use in this particular scenario.
1. Implicit Grant
2. Resource Owner Password Credentials Grant
3. Client Credentials Grant
4. Authorization Code Grant
I have researched the different authentication methods we could use to secure the web API, but I'm still confused about which one to apply in this scenario, since the new web API is going to be exposed on the internet.
I already tried developing a POC using OAuth 2.0 with the password grant type, following this article. But from articles I've read on the internet, the password grant type is not considered secure and is deprecated.
Could you please advise which grant type (client credentials / authorization code / implicit) to use in this scenario? I believe authorization code is used when a user is directly trying to access the API. Here, SailPoint will call the API programmatically from its backend when a new user is created in its UI.
I think it's a good approach to use client credentials in this case, because the communication between IIQ and your web API can be considered API-to-API communication; that is, IIQ is acting on its own behalf in this exchange. (There is a small token-request sketch after the excerpt below.)
See this article for more details: https://dzone.com/articles/four-most-used-rest-api-authentication-methods (emphasis mine).
OAuth 2.0 provides several popular flows suitable for different types of API clients:

Authorization code — The most common flow, it is mostly used for server-side and mobile web applications. This flow is similar to how users sign up into a web application using their Facebook or Google account.

Implicit — This flow requires the client to retrieve an access token directly. It is useful in cases when the user’s credentials cannot be stored in the client code because they can be easily accessed by the third party. It is suitable for web, desktop, and mobile applications that do not include any server component.

Resource owner password — Requires logging in with a username and password. In that case, the credentials will be a part of the request. This flow is suitable only for trusted clients (for example, official applications released by the API provider).

Client Credentials — Intended for the server-to-server authentication, this flow describes an approach when the client application acts on its own behalf rather than on behalf of any individual user. In most scenarios, this flow provides the means to allow users to specify their credentials in the client application, so it can access the resources under the client’s control.
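To make that concrete, a client credentials exchange is a single back-channel request from IIQ to your token endpoint; a rough sketch (the endpoint URL, client ID, and scopes below are placeholders, not anything SailPoint-specific):

    // Sketch of the client credentials grant: the client authenticates as itself
    // and receives an access token to call the protected web API with.
    async function getClientCredentialsToken(): Promise<string> {
      const res = await fetch('https://auth.example.com/oauth2/token', { // placeholder token endpoint
        method: 'POST',
        headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
        body: new URLSearchParams({
          grant_type: 'client_credentials',
          client_id: 'sailpoint-iiq',                         // placeholder client registration
          client_secret: process.env.IIQ_CLIENT_SECRET ?? '',
          scope: 'users:write entitlements:write',            // placeholder scopes
        }),
      });
      if (!res.ok) throw new Error(`token request failed: ${res.status}`);
      const { access_token } = (await res.json()) as { access_token: string };
      return access_token; // sent as "Authorization: Bearer <token>" on each API call
    }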
So, I've found myself asking this question several times while building iOS applications. Essentially, most of the applications I have been developing use Firebase for data storage and maintain a shared instance to store the user object locally once the authentication process completes.
The main concern with this approach is that at authentication time the user object contains only basic user information such as email, full name, etc. But throughout other app features, the app may need to update the user object once in a while. With this approach, I always end up maintaining the user object in both places, remote and local.
Is there a proper method/practice for handling this problem?
You should check out Auth0; I reckon that is the "proper method" you're searching for.
Auth0 is a cloud-based platform that provides authentication and authorization as a service. As an authentication provider, Auth0 enables developers to easily implement and customize login and authorization security. It stores user information for your tenant in a hosted cloud database, or you can choose to store user data in your own custom external database.
Why use Firebase and Auth0 Together?
One thing to notice is that Firebase does provide authentication features out of the box.
I quote:
You should consider Auth0 with a custom Firebase token if you:
Already have Auth0 implemented and want to add realtime capabilities to your app
Need to easily use issued tokens to secure a back end that is not provided by Firebase
Need to integrate social identity providers beyond just Google, Facebook, Twitter, and GitHub
Need to integrate enterprise identity providers, such as Active Directory, LDAP, ADFS, SAMLP, etc.
Need a customized authentication flow
Need robust user management with APIs and an admin-friendly dashboard
Want to be able to dynamically enrich user profiles
Want features like customizable passwordless login, multifactor authentication, breached password security, anomaly detection, etc.
Must adhere to compliance regulations such as HIPAA, GDPR, SOC2, etc.
Essentially, Firebase's basic authentication providers should suffice if you have a very simple app with bare-bones authentication needs and are only using Firebase databases.
Let me know if you need any further help. Now go and have an awesome day!
My goal is to offer a service for users of my website to store their private notes.
I want users to be able to trust the service, so the data should not be accessible to my company.
Can I achieve this with Google Cloud Storage and OAuth 2.0 authentication? I would use the Google Cloud Storage JSON API to send the notes directly from the browser into the cloud.
What would be the basic steps to implement this?
There are a couple of ways to handle this, depending on how you want to handle authentication. If you want to make sure that your application cannot access the objects and only the users can, you'll need the users to have Google accounts and authenticate your app to act as their agent using OAuth 2.
Your app could involve a piece of JavaScript that would prompt the user to authenticate with Google and grant it access to Google Cloud Storage under their name. It would then receive a token that it could use to act as them. From there, it would upload the note using that token with an ACL granting permissions only to the uploader.
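Roughly, the upload step might look like this (a sketch against the JSON API's media upload endpoint; the bucket name, object name, and ACL choice are illustrative, and the token is whatever the Google JavaScript client returns after the user consents):

    // Sketch: upload a note into your bucket as the end user, using the OAuth
    // token obtained in the browser, with a predefined ACL so that only the
    // uploader ends up with owner access on the object.
    async function uploadNote(bucket: string, name: string, note: string, userToken: string) {
      const url =
        `https://storage.googleapis.com/upload/storage/v1/b/${bucket}/o` +
        `?uploadType=media&name=${encodeURIComponent(name)}&predefinedAcl=private`;
      const res = await fetch(url, {
        method: 'POST',
        headers: {
          Authorization: `Bearer ${userToken}`, // the user's token, not your app's
          'Content-Type': 'text/plain',
        },
        body: note,
      });
      if (!res.ok) throw new Error(`upload failed: ${res.status}`);
      return res.json(); // object metadata; the owner will be the end user
    }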
The uploaded object would go into your bucket, but it would be owned by the end user. You'd have the ability to delete it, but not to read it, and your bucket would be billed for storage and access.
The downside here is that all of your users would need to have Google accounts that they could entrust to your application for short periods of time.
Here are some details on the OAuth 2 exchange: https://developers.google.com/accounts/docs/OAuth2UserAgent
Here's the JavaScript client that does a lot of the authorization heavy lifting for you:
https://code.google.com/p/google-api-javascript-client/
And an example of using that library for authorization:
https://developers.google.com/api-client-library/javascript/samples/samples#AuthorizingandMakingAuthorizedRequests
Another alternative would be for the user to upload directly to the cloud using YOUR credentials via signed URLs, but if you went down this road, you would be able to read the notes after they were uploaded.
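For completeness, the signed-URL variant might look roughly like this on your server with the @google-cloud/storage client (bucket and object names are placeholders); the browser then PUTs the note to the returned URL without any Google login, but your service account, and therefore you, could read the result:

    import { Storage } from '@google-cloud/storage';

    // Sketch: generate a short-lived signed PUT URL under YOUR service account
    // credentials. The user needs no Google account, but the uploaded objects
    // remain readable with your credentials.
    async function getUploadUrl(bucketName: string, objectName: string): Promise<string> {
      const storage = new Storage(); // authenticates as your service account
      const [url] = await storage
        .bucket(bucketName)
        .file(objectName)
        .getSignedUrl({
          version: 'v4',
          action: 'write',
          expires: Date.now() + 10 * 60 * 1000, // valid for 10 minutes
          contentType: 'text/plain',
        });
      return url; // hand this to the browser; it can PUT the note to this URL
    }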
I am using OAuth to access different services provided by Google. I am able to generate a token per service, but I want to generate a single token to use multiple Google services.
Can anyone tell me the solution for this?
https://developers.google.com/accounts/docs/OAuth2
As per the Google OAuth2 docs, it is possible to do this by setting multiple scopes, but be warned, it isn't a happy experience.
When making your request, set the scope parameter to multiple scopes, each separated by a single space.
Example: "https://www.googleapis.com/auth/userinfo.profile https://www.googleapis.com/auth/userinfo.email https://www.google.com/m8/feeds"
You can currently find a list of scopes here: https://developers.google.com/gdata/faq
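For example, a consent URL requesting all three scopes at once might be built like this (the client_id and redirect URI are placeholders):

    // Sketch: request one authorization covering several Google APIs by joining
    // the scopes with single spaces in one authorization request.
    const scopes = [
      'https://www.googleapis.com/auth/userinfo.profile',
      'https://www.googleapis.com/auth/userinfo.email',
      'https://www.google.com/m8/feeds',
    ].join(' ');

    const authUrl =
      'https://accounts.google.com/o/oauth2/auth?' +
      new URLSearchParams({
        response_type: 'code',
        client_id: 'MY_CLIENT_ID.apps.googleusercontent.com', // placeholder
        redirect_uri: 'https://example.com/oauth2callback',   // placeholder
        scope: scopes, // space-separated; the resulting token covers all three APIs
      }).toString();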
Unfortunately, API access is not additive, meaning that if you ask for an access token for the Google Contacts API, and then later on, as the same application, ask for an access token for the Google Profile API, you will end up with two access tokens, and neither can be used to access the other API. Facebook at least has the decency to give you back a single access token that grants access to all the permissions granted so far.
Because of this, you are left having to keep track of multiple access tokens (a horrible nightmare, given they expire very quickly), or ask for all of your permissions up-front, which is a user experience disaster.
Fragmented and disparate, the Google APIs are currently set up to fail if you want to do tight, multi-faceted integration.