Since Dropbox switched from long-lived to short-lived access tokens, duplicity isn't working properly anymore, as the token expires. Is there a fix or workaround for this?
Dropbox is in the process of switching to only issuing short-lived access tokens (and optional refresh tokens) instead of long-lived access tokens. You can find more information on this migration here.
Apps can still get long-term access by requesting "offline" access though, in which case the app receives a "refresh token" that can be used to retrieve new short-lived access tokens as needed, without further manual user intervention. You can find more information in the OAuth Guide and authorization documentation.
Note that this is something that would need to be implemented by the programmer of the app though, so if you are not the programmer responsible for this integration, you may need to get an update from them to support this.
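As a sketch of what such an update involves: requesting offline access essentially means adding token_access_type=offline to the authorization URL, after which the token endpoint returns a refresh token alongside the short-lived access token. The app key below is a placeholder.

```python
from urllib.parse import urlencode

def dropbox_authorize_url(app_key):
    """Build the Dropbox authorization URL requesting offline access.

    token_access_type=offline asks Dropbox to issue a refresh token in
    addition to the short-lived access token, so the app can keep
    refreshing without further user interaction.
    """
    params = {
        "client_id": app_key,           # the app key from the Dropbox App Console
        "response_type": "code",
        "token_access_type": "offline", # request a refresh token too
    }
    return "https://www.dropbox.com/oauth2/authorize?" + urlencode(params)
```

See the Dropbox OAuth Guide for the full code-for-token exchange that follows this step.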
Please report this to duplicity issues.
The Duplicity Dropbox backend still doesn't seem to support short-lived tokens, but you could use the Duplicity rclone backend instead, which is part of the Duplicity codebase since version 0.8.09. It allows Duplicity to back up to any target that is supported by rclone – and rclone supports Dropbox short-lived tokens out of the box since version 1.54. Here's how to set this up:
Set up the rclone dropbox remote using rclone config. The default choices worked for me. See the rclone dropbox documentation for more details.
Test that the remote works, for example by listing the remote files using rclone ls [your-remote-name]:
Run duplicity with a target in the form rclone://[your-remote-name]:
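The steps above can be sketched as follows; the remote name "mydropbox" and the paths are placeholders, and the rclone://remote:/dir target form follows the Duplicity rclone backend's URL scheme:

```python
import subprocess

def duplicity_backup_command(source_dir, remote_name, remote_dir):
    """Build the duplicity command line for an rclone-backed target.

    remote_name is whatever you named the remote in `rclone config`.
    """
    target = "rclone://{}:/{}".format(remote_name, remote_dir)
    return ["duplicity", source_dir, target]

# Usage (runs the actual backup):
# subprocess.run(duplicity_backup_command("/home/me/docs", "mydropbox", "backups"),
#                check=True)
```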
I'm working on a project using the YouTube Data API. The Python script I'm running uses a client secrets JSON file, which I presume is for verifying the account owner. If I am having issues with it and need assistance, are there any security concerns with sharing this publicly? Is it even alright if it's held privately in a private github repository?
If you check the Google Developer TOS, which you agreed to when you created your account on the Google developer console, you will see that it is against the TOS for you to share this file with anyone. It is secret and intended only for the developer or team of developers who created it. This applies to the entire client secret JSON file you download from the Google developer console or Google cloud console.
Again: do NOT share your Google client secret file. It does not matter what the accepted answer says about how problematic it may or may not be, nor does it matter what type of client it is. Sharing the client secret file would be violating the TOS you agreed to.
My corrections to another answer on this thread follow. The answer in question makes some statements that I strongly disagree with and that I feel may confuse developers. Let me start by saying I am not an employee of Google; my comments are my own, based on eight-plus years of working with Google's OAuth / identity server and contact with the Google identity team. Rather than simply saying don't share the file, the other answer tries, incorrectly in my opinion, to explain why sharing it wouldn't be so bad. I will explain why you should never share it, beyond the fact that it's against Google's TOS.
The security implications depend on the type of client secret. You can tell the difference by whether the key in the JSON file is installed or web.
The type of client has no effect on how great the security risk would be. If we set aside the precise definition of a security risk and simply say that any chance of someone getting access to a user's account, or authenticating a user on behalf of the project, constitutes too big a risk, then there is no difference between the two.
Using the following URL, I could authenticate myself; all I need is the client ID from your project's credentials file:
https://accounts.google.com/o/oauth2/auth?client_id={clientid}.apps.googleusercontent.com&redirect_uri=urn:ietf:wg:oauth:2.0:oob&scope=https://www.googleapis.com/auth/analytics.readonly&response_type=code
This would work 100% of the time for an installed application. Why is this bad if I am just authenticating my own user? I could then use my evil powers to send so many requests against the API that your Google developer project would be locked down by Google for spamming.
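As an illustration, that consent URL can be assembled from nothing but the client ID inside the leaked credentials file. The top-level key layout ("installed" or "web") matches Google's client secret JSON format; the scope mirrors the example above.

```python
from urllib.parse import urlencode

def consent_url_from_secret(data):
    """Build a working consent URL from a parsed client secret JSON dict.

    Nothing beyond the client_id is needed for an installed client, which
    is why leaking the file lets anyone authenticate against your project.
    """
    client = data.get("installed") or data["web"]
    params = {
        "client_id": client["client_id"],
        "redirect_uri": "urn:ietf:wg:oauth:2.0:oob",
        "scope": "https://www.googleapis.com/auth/analytics.readonly",
        "response_type": "code",
    }
    return "https://accounts.google.com/o/oauth2/auth?" + urlencode(params)

# Usage with the downloaded file:
# import json
# with open("client_secret.json") as f:
#     print(consent_url_from_secret(json.load(f)))
```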
If I have stolen another user's login and password, I can log them in through your Google developer project, gain access to their data, and Google thinks it's you hacking them.
This is a little harder with a web application because of the redirect URI. However, a lot of developers add localhost as a redirect URI during development and forget to take it out (please never leave localhost as a redirect URI in production). So if you have left localhost as a valid redirect URI in a web client, I can do the exact same thing.
Remember, I am now able to authenticate users through your project to access mostly my own data. However, if you have also set up access to client data, for example via Google Drive, I may be able to access that as well. (Note: I'm not sure about this one; I haven't actually tried it.)
If I have managed, through a man-in-the-middle attack or some other means, to get a user's refresh token, and I have the client secret file, I can now access that user's data indefinitely, because I can create new access tokens from the refresh token for as long as I want. This is probably a bit harder to achieve.
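A sketch of that token refresh call, to show why refresh token plus client secret is such a dangerous combination. The endpoint is Google's OAuth 2.0 token endpoint; all credential values are placeholders.

```python
from urllib.parse import urlencode
from urllib.request import Request

def refresh_access_token_request(client_id, client_secret, refresh_token):
    """Build the POST that mints a new access token from a refresh token.

    Holding the client secret plus a captured refresh token is enough to
    repeat this indefinitely, until the grant is revoked or the secret
    is reset.
    """
    body = urlencode({
        "grant_type": "refresh_token",
        "client_id": client_id,
        "client_secret": client_secret,
        "refresh_token": refresh_token,
    }).encode()
    return Request("https://oauth2.googleapis.com/token", data=body, method="POST")

# Usage: urllib.request.urlopen(refresh_access_token_request(...)) returns
# JSON containing a fresh access_token.
```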
Web application secrets
If the client secret is of the web type, then yes: you should absolutely not post it, and invalidate it if it gets exposed. This would allow a malicious entity to impersonate your backend and perform actions on your users' accounts on your behalf.
As stated above, this will only be the case if the developer has left localhost open as a redirect URI, or if the person who has your client secret file also has access to your web server. One very important fact: if you have left localhost open, I can put up my own website using your credentials and make it look exactly like yours. Users then think they are logging into Your Super Awesome App when in fact they are logging into Hacker Super Awesome App, granting it access to their data. Again, Google thinks it's you hacking them.
Installed application secrets
If the client secret is an installed-type secret, then it's less problematic to share privately, as it doesn't grant the sorts of abilities a web application secret does, such as the ability to authenticate as users who grant your application permission to access their data. As the documentation notes, "in this context, the client secret is obviously not treated as a secret."
This is completely false. Installed applications grant the exact same permissions as web applications; there is no difference with regard to OAuth2 access. An access token is an access token, no matter whether it was created for an installed application or a web application.
As for the security risk of giving out access to your installed application: this is actually worse, because there are no redirect URIs for installed applications. Anyone who has your client secret file can authenticate users who assume they are dealing with you, because they are shown your consent screen. Not only is your Google developer project being hijacked, but so is your reputation with your users, who think they are authenticating to Super Awesome App when in fact they are granting the person who stole your credentials access to their data.
I would like to add one last thing. If you give another person your project credentials (the client secret JSON file), you are giving them the ability to make calls on your behalf. If you have billing set up, say against the Google Maps API, you will be charged for the calls they make.
I hope this helps clear up some of the confusion related to the accepted answer.
Yes, this is a problem. It's called a "client secret" for a reason. If it does become exposed, you should take steps to invalidate it and get a new one so that someone doesn't try to impersonate you.
Short answer: the security implications depend on the type of secret, but you should not share it publicly for other reasons, including the Terms of Service, which state that:
You will keep your credentials confidential and make reasonable efforts to prevent and discourage other API Clients from using your credentials. Developer credentials may not be embedded in open source projects.
The security implications depend on the type of client secret. You can tell the difference by whether the key in the JSON file is installed or web.
Web application secrets
If the client secret is of the web type, then yes: you should absolutely not post it, and invalidate it if it gets exposed. This would allow a malicious entity to impersonate your backend and perform actions on your users' accounts on your behalf.
Installed application secrets
If the client secret is an installed-type secret, then it's less problematic to share privately, as it doesn't grant the sorts of abilities a web application secret does, such as the ability to authenticate as users who grant your application permission to access their data. As the documentation notes, "in this context, the client secret is obviously not treated as a secret."
You still should not post it publicly on GitHub, a Stack Overflow question, or other public places, as posting it publicly increases the probability of someone copying your code in its entirety or otherwise using your client secret in their own project, which might cause problems and likely would run afoul of the Terms of Service. People trying to reproduce your issue could pretty easily generate credentials to drop into your code—credentials are a reasonable thing to leave out of a question.
I plan to develop a command line client for a web API protected by OAuth 2 (JWT). The access and refresh tokens live for five and thirty minutes, respectively. Since the user will use the command line client for longer than five minutes in a sitting (a testing or debugging session, for example), the refresh token needs to be stored on the local computer, so that only one authentication is needed at the beginning of the session, and not later on.
I wonder where I can store the refresh token securely. A text file in the user's home directory might not sound too bad, because that's also the place the user stores all his or her private documents. However, any other application run by the same user could read that token and misuse it.
What are common solutions to such a problem?
Here you can use the Resource Owner Password Credentials grant, in which the command line client prompts for the end user's credentials and exchanges them for tokens. With that, you can encrypt the received tokens using a key derived from those same credentials. Whenever the client needs the refresh token, it can prompt the user for their password again; that way you never store the password, and the tokens are encrypted at rest so no other process can steal them as-is.
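A minimal sketch of both halves of that idea: the request body follows the ROPC grant from RFC 6749 section 4.3 (parameter names are from the spec; the client ID is a placeholder), and the key derivation is illustrative only. In practice you would feed the derived key to an authenticated cipher such as AES-GCM rather than storing tokens with it directly.

```python
import hashlib
from urllib.parse import urlencode

def ropc_token_request_body(client_id, username, password):
    """Build the ROPC grant body: exchange user credentials for tokens.

    Only appropriate for first-party clients like this CLI, where the
    user already trusts the app with their password.
    """
    return urlencode({
        "grant_type": "password",
        "client_id": client_id,
        "username": username,
        "password": password,
    })

def token_encryption_key(password, salt):
    """Derive an encryption key from the user's password.

    Prompting for the password again later re-derives the same key,
    so the password itself is never stored; use the key with an AEAD
    cipher to encrypt tokens on disk.
    """
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
```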
Use ROPG as Kavindu says - if you have an implementation that supports refresh tokens
After that you can store tokens in memory or via OS secure storage. The choice is a little subjective and depends on sensitivity of data being accessed
If using OS secure storage then store tokens per app + user, to ensure isolation
An example cross-platform component that does this is keytar, which I've used in the past for desktop apps. It is nodejs based - not sure if this works for you:
https://github.com/atom/node-keytar/blob/master/README.md
Sample code that uses it
See the OS Secure Storage section of this write up to understand how the entry can be viewed + managed via built in OS tools:
https://authguidance.com/2018/01/26/final-desktop-sample-overview/
We've got a team Slack app and some slash commands configured with it. The slash commands send requests to an Express REST endpoint which uses passport-slack for authentication.
I want the requests generated by the slash commands to include the access token for the user, since they're already logged in to Slack, and not just the verification token.
Any idea on how to achieve this?
I would not recommend this, since it would breach Slack's security architecture and should also be unnecessary if your app is designed according to Slack's standards.
Your app only needs to retrieve and store the user token once during the installation process and can then use it for all future API calls indefinitely, since it does not have an expiry date. They also do not need to be refreshed.
Usually you would install a Slack app only once per Slack team and use that token for all future API calls. This is done using the OAuth 2.0 protocol, which ensures the token is generated in a secure way.
But if you really want a user token from every user, you can ask every user to install your Slack app. It's called "configurations", and that way you get all user tokens. But again, you only need to do that once per user, and you should do it via the OAuth process.
I want to make an alert system on Apigee that will automatically send alerts to Slack, without the need for human interference.
However, the only OAuth flow for Slack I found on their api site seems to require a user to manually input their credentials: https://api.slack.com/docs/oauth
How can I automate getting an access token from Slack, so without having to manually input credentials?
I think you may have misunderstood the concept of OAuth. The way it is supposed to work is that you run the process only once per Slack team (usually while installing the Slack app to your Slack team) and then store the access token you received for future reference (e.g. in a database). Whenever your Slack app needs the access token after installation, it can reuse the one it received during installation.
If you don't require any scripts to run for installation and you only need the access token you can also install your Slack app directly from Slack (under "Your Apps") and then copy and paste the resulting access token to your app configuration. Check this documentation for further information.
If you are generating webhooks on the fly, that requires OAuth 2.0 each time. However, if you use the Web API chat.postMessage method, it only requires the token (found under the OAuth & Permissions section) to make an HTTP POST request that sends a notification.
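A minimal sketch of such a chat.postMessage call; the endpoint and field names are from Slack's Web API, while the token and channel values are placeholders. Since the stored token does not expire, an automated alert system can reuse it on every call with no interactive login.

```python
import json
from urllib.request import Request

def slack_notification(token, channel, text):
    """Build a chat.postMessage request using a stored Slack token.

    The token comes from the app's "OAuth & Permissions" page; no
    per-message OAuth flow or human interaction is needed.
    """
    payload = {"channel": channel, "text": text}
    return Request(
        "https://slack.com/api/chat.postMessage",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": "Bearer " + token,
            "Content-Type": "application/json; charset=utf-8",
        },
    )

# Usage: urllib.request.urlopen(slack_notification(token, "#alerts", "API down!"))
```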
As a security best practice, when an engineer/intern leaves the team, I want to reset the client secret of my Google API console project.
The project has OAuth2 access granted by a bunch of people, and I need to ensure that those (grants as well as refresh tokens) will not stop working. Unfortunately, I've not been able to find documentation that explicitly states this.
Yes, they will stop working: a client secret reset will immediately (in Google OAuth 2.0 there may be a few minutes' delay) invalidate any authorization code or refresh token issued to the client.
Client secret reset is a countermeasure against abuse of revealed client secrets for private clients. So it makes sense to require re-grant once the secret is reset.
I did not find any Google document that states this explicitly either, but in my experience a reset does impact users; you can run a test to confirm it.
In our work, we programmers never touch the production secret; we keep test clients for development, and only a very few product ops people can access the real one. So I think you should try your best to narrow the secret's visibility within your team; resetting it is not a good routine.