How to get a full list of repositories that a user is allowed to access? - bitbucket

I have found a Bitbucket API endpoint like:
https://bitbucket.org/api/2.0/repositories/{teamname}
But this link returns a 301 status (moved permanently to !api/2.0/repositories/{teamname}).
Ok, but that one returns status 200 with zero repositories.
I provide my username and password as parameters, but nothing seems to change.
So, can anybody tell me how to get the full list of private repositories that a specific user is allowed to access?

Atlassian Documentation - Repositories Endpoint provides detailed documentation on how to access the repositories.
The URL mentioned in the Bitbucket docs to GET a list of repositories for an account is:
GET https://api.bitbucket.org/2.0/repositories/{owner}
If you use the above URL, it always retrieves the repositories you own. In order to retrieve the full list of repositories that the user is a member of, you should call:
GET https://api.bitbucket.org/2.0/repositories?role=member
You can apply the following set of filters for role based on your needs.
To limit the set of returned repositories, apply the role=[owner|admin|contributor|member] parameter, where the roles are:
owner: returns all repositories owned by the current user.
admin: returns repositories to which the user has explicit administrator access.
contributor: returns repositories to which the user has explicit write access.
member: returns repositories to which the user has explicit read access.
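For example, here is a minimal Python sketch of that member call, following the paginated next links (the credentials are placeholders, and using an app password for Basic Auth is an assumption based on a later answer):
import requests

AUTH = ("your_username", "your_app_password")  # placeholders: app password, not account password

url = "https://api.bitbucket.org/2.0/repositories?role=member"
repos = []
while url:
    page = requests.get(url, auth=AUTH)
    page.raise_for_status()
    data = page.json()
    repos.extend(data.get("values", []))
    url = data.get("next")  # absent on the last page

print(f"{len(repos)} repositories visible to this user")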
Edit-1:
You can make use of the Bitbucket REST browser for testing the request/response (now discontinued).

You should not use the API from the https://bitbucket.org/api domain.
Instead, you should always use https://api.bitbucket.org.
Now, one reason you might be getting an empty result after following the redirect is that some HTTP clients will only send Basic Auth credentials if the server explicitly asks for them by returning a 401 response with the WWW-Authenticate response header.
The repositories endpoint does not require authentication. It simply returns the repos that are visible to anonymous users (which might well be an empty set in your case), so clients that insist on a WWW-Authenticate challenge (there are many, including Microsoft PowerShell) will not work as expected. (Note that curl always sends Basic Auth credentials eagerly, which makes it a good tool for testing.)
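If your client holds credentials back like that, one workaround is to set the Authorization header yourself. A sketch in Python (the requests library also sends Basic credentials eagerly when given auth=, but the explicit header makes the intent obvious; credentials are placeholders):
import base64
import requests

creds = base64.b64encode(b"your_username:your_app_password").decode()
resp = requests.get(
    "https://api.bitbucket.org/2.0/repositories?role=member",
    headers={"Authorization": f"Basic {creds}"},
)
print(resp.status_code, len(resp.json().get("values", [])))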

Unfortunately, from what I see in the documentation, there is no way to list all private repositories which the user has access to.
GET https://api.bitbucket.org/2.0/repositories
"Returns a paginated list of all public repositories." according to the doco.
GET https://api.bitbucket.org/2.0/repositories/{owner}
"Returns a paginated list of all repositories owned by the specified account or UUID." according to the doco.
So, getting all private repositories not necessarily owned by the user is either not possible, or I haven't found the right endpoint, or the documentation is inaccurate.

None of the answers above worked for me, so this is what I did. We'll use the Bitbucket REST API.
Authentication
You can't use your normal credentials. I created an app password instead. I'm not sure how to navigate to this page via your browser, but you can go here directly: https://bitbucket.org/account/settings/app-passwords/
Create an App Password, then cut and save the password that Atlassian generates for you.
Curl
curl --user your_username:your_app_password https://api.bitbucket.org/2.0/repositories/your_workspace?pagelen=100
I piped that to jq and saved it to a file.
You get your_workspace from looking at the URL of any of your repositories.
Paging
The maximum pagelen appears to be 100. If you have more than 100 repos, then you might have to page through the results. Quote the URL so the shell doesn't treat the & as a background operator:
curl --user your_username:your_app_password "https://api.bitbucket.org/2.0/repositories/your_workspace?pagelen=100&page=2"
The JSON
The JSON isn't too bad. You want the "values" array. From there, look at links.clone, which might have two entries like this:
"clone": [
{
"href": "https://user#bitbucket.org/WORKSPACE/REPO.git",
"name": "https"
},
{
"href": "git#bitbucket.org:WORKSPACE/REPO.git",
"name": "ssh"
}
],
That's a cut & paste from my results with personal info changed. Also useful are two other fields:
"full_name": "WORKSPACE/repo",
"name": "Repo",

Expanding on blizzard's answer, here's a little node.js script I just wrote:
import axios from 'axios';
import fs from 'fs';

async function main() {
  const bitbucket = axios.create({
    baseURL: 'https://api.bitbucket.org/2.0',
    auth: {
      username: process.env.BITBUCKET_USERNAME!,
      password: process.env.BITBUCKET_PASSWORD!,
    }
  });
  const repos = [];
  let next = 'repositories?role=member';
  for (;;) {
    console.log(`Fetching ${next}`);
    const res = await bitbucket.get(next);
    if (res.status < 200 || res.status >= 300) {
      console.error(res);
      return 1;
    }
    repos.push(...res.data.values);
    if (!res.data.next) break;
    next = res.data.next;
  }
  console.log(`Done; writing file`);
  await fs.promises.writeFile(`${__dirname}/../data/repos.json`, JSON.stringify(repos, null, 2), { encoding: 'utf8' });
}

main().catch(err => {
  console.error(err);
});

Related

Slack Conversations API conversations.kick returning "channel_not_found" for a public channel

I am writing a Slack integration that can boot certain users out of public channels when certain conditions are met. I have added several OAuth scopes to the bot token, including the following:
channels:history
channels:manage
channels:read
chat:write
chat:write.public
groups:write
im:write
mpim:write
users:read
I am writing my bot in Python using the slack-bolt library and asyncio. However, when I try to invoke this code:
await app.client.conversations_kick(channel=channel_id, user=user_id)
I get the following error:
slack_sdk.errors.SlackApiError: The request to the Slack API failed. (url: https://www.slack.com/api/conversations.kick)
The server responded with: {'ok': False, 'error': 'channel_not_found'}
I know for a fact that both the channel_id and user_id arguments I'm passing in are valid. The channel ID I'm using is the string C01PAE3DB0A. I know it is valid because I can use the very same value for channel_id in the following API call:
response = await app.client.conversations_info(channel=channel_id)
And when I call conversations_info like that, I get all of the information about my channel. (The same is true for calling users_info with the user_id; it returns successfully.) So why is it that when I pass my valid channel_id parameter to conversations_kick, I consistently receive this channel_not_found error? What am I missing?
So I got in touch directly with Slack support about this and they confirmed that there is a bug on their end. Specifically, the bug is that I should have received a restricted_action error response instead of a channel_not_found response. Apparently this is a known issue that is on their backlog.
The reason the API call would (try to) return this restricted_action error is simply because there is a workspace setting that, by default, prevents non-admins from kicking people out of public channels. Furthermore, this setting can only be changed by the workspace owner - one tier above admins.
But assuming you are the owner of the Slack workspace, you simply have to log into the Settings & Permissions page.
And then you have to change the setting labeled "People who can remove members from public channels" from "Workspace admins and owners only (default)" to "Everyone, except guests."
Once I made that change, my API calls started succeeding.
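Until the bug is fixed on Slack's side, you may want to treat both error codes as the same condition. A small sketch against the question's slack-bolt setup (the kick_user helper is hypothetical):
from slack_sdk.errors import SlackApiError

async def kick_user(app, channel_id: str, user_id: str) -> bool:
    """Returns True if the user was removed, False on a permissions problem."""
    try:
        await app.client.conversations_kick(channel=channel_id, user=user_id)
        return True
    except SlackApiError as e:
        # Because of the bug described above, a permissions problem may
        # surface as 'channel_not_found' instead of 'restricted_action'.
        if e.response["error"] in ("channel_not_found", "restricted_action"):
            return False
        raise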

temporary link / url for the attachments of an issue in jira cloud instance

JIRA provides a way to access the attachments of an issue using basic auth and JWT auth mechanisms, which we can use to download those files. We're able to download the files using both authentication mechanisms.
sample jwt auth:
curl -X GET --url https://{site-name}.atlassian.net/secure/attachment/1001/example.txt -H 'Authorization: jwt '
Issue / Our requirement:
But is there a way to generate a temporarily accessible URL for a JIRA issue's attachments, with a token embedded in the URI itself? I've added an example of that below.
example url:
https://{site-name}.atlassian.net/attachment/1001/example.txt?token={temp_access_token}
When accessing / clicking the above-mentioned URL, the download should start automatically even if the user isn't logged into their account.
Reason for our requirement:
We're creating a Jira Cloud based service/app, and one of its features is providing access to the user's attachments through our application. Our limitation (cloud service cost) is that we can't download all the huge attachments and store and manage them ourselves. So we're looking for a solution that lets users download directly from JIRA's servers.
In your JWT generation steps, you can define how long the JWT should be valid. And you can attach a JWT to the URL like this: <Jira Base Url>/rest/api/3/...?jwt=.... This way, you could generate a JWT on demand and it'll only be valid for the given time that you define.
In the Java Example on the page Understanding JWT for Connect apps you can see how they are setting the expirationTime. Just do the same, on demand. Here is the important part of the code snippet:
public class JWTSample {
    public String createUriWithJwt()
            throws UnsupportedEncodingException, NoSuchAlgorithmException {
        long issuedAt = System.currentTimeMillis() / 1000L;
        long expiresAt = issuedAt + 180L;
        /* ... */
        JwtJsonBuilder jwtBuilder = new JsonSmartJwtJsonBuilder()
                .issuedAt(issuedAt)
                .expirationTime(expiresAt)
                .issuer(key);
        /* ... */
        String jwtToken = /* ... */;
        String apiUrl = baseUrl + apiPath + "?jwt=" + jwtToken;
        return apiUrl;
    }
}
Security concern: I'm explicitly mentioning that you should generate these links on demand because you should not set the expiration date to more than 5-10 minutes (which is already quite high). Otherwise an attacker just needs to retrieve your generated link (URLs are often logged somewhere) and will be able to retrieve the attachment as well.
Alternative Approach
Since you mentioned you'll build a service/app, why not chain the attachment download through your service? This way you wouldn't have to expose a JWT, which is a potential security threat. For example: you offer a download button in your UI, which sends an HTTP request to your service; your service downloads the attachment and then forwards it to your user. However, this would not comply with your requirement to give access to unauthenticated users - if that's really what you want to do.
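A minimal sketch of that chained download (a hypothetical Flask service; the route, stored credentials, and attachment URL pattern are assumptions based on the question's example):
import requests
from flask import Flask, Response, abort

app = Flask(__name__)
JIRA_BASE = "https://your-site.atlassian.net"        # assumption: your site
JIRA_AUTH = ("api-user@example.com", "api-token")    # assumption: stored creds

@app.route("/attachments/<int:attachment_id>/<path:filename>")
def proxy_attachment(attachment_id, filename):
    # NOTE: authorize *your* user here before streaming anything.
    upstream = requests.get(
        f"{JIRA_BASE}/secure/attachment/{attachment_id}/{filename}",
        auth=JIRA_AUTH, stream=True)
    if upstream.status_code != 200:
        abort(upstream.status_code)
    # Stream the attachment through without buffering it all in memory.
    return Response(
        upstream.iter_content(chunk_size=65536),
        content_type=upstream.headers.get("Content-Type",
                                          "application/octet-stream"))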

Error 400: invalid_scope "https://www.googleapis.com/auth/chat.bot"

The documentation for the new google hangouts chat says that you need to authorize the scope https://www.googleapis.com/auth/chat.bot to do pretty much anything.
Here's the error: while generating an authentication URL using their OAuth2 client, I get the message that the scope is invalid. I don't have that problem if I use https://www.googleapis.com/auth/chat or some other scope, like the one for Google Plus.
When I try things in the API Explorer, no combination of the URL or parts of the URL works either.
Here is my code to fetch the URL, seems to work just fine for everything else:
var {google} = require('googleapis');
var OAuth2 = google.auth.OAuth2;

var oauth2Client = new OAuth2(
  "clientid-idididid.apps.googleusercontent.com",
  "_secretsuff",
  "http://localhost:3000/auth/google/callback"
);

var scopes = [
  "https://www.googleapis.com/auth/chat",    // Works
  "https://www.googleapis.com/auth/chat.bot" // Does not work
];

var url = oauth2Client.generateAuthUrl({
  access_type: 'offline',
  scope: scopes,
});

console.log(url);
In case others run across this problem, I think I've figured it out. Google doesn't seem to need this auth scope enabled by a domain user because it's already authorised on the domain when you're testing your bot. The "authorisation" of these scopes is dictated by users in a domain adding/removing bots from spaces.
I'll go into a bit of detail if you're confused.
When you create a bot in the console for an organisation (https://console.cloud.google.com/apis/api/chat.googleapis.com/), your bot is added to the domain and can be added to spaces by users. If you then go over to the credentials and create a service account, you can use that JSON credentials file to access the API as your bot. The code below gets a list of the people in a space.
var { google } = require('googleapis');
var chat = google.chat("v1");
var key = require('./google_service-account-credentials.json');

var jwtClient = new google.auth.JWT(
  key.client_email,
  null,
  key.private_key,
  ['https://www.googleapis.com/auth/chat.bot'], // an array of auth scopes
  null
);

jwtClient.authorize(function (err, tokens) {
  chat.spaces.members.list({
    auth: jwtClient,
    parent: "spaces/AAAAD4xtKcE"
  }, function (err, resp) {
    console.log(resp.data);
  });
});
If you try to get a list of members on other spaces (and other domains) the bot will fail with the exact same error message:
"Bot is not a member of the space."
I assume that if you list your bot on the marketplace and it gets added to different domains and spaces, Google's API makes sure that your bot can do what it's trying to do on a space-by-space basis. It would be annoying to have to set up some authentication flow after a bot has already been added for it to do its job. This is also probably why the current REST API doesn't let you list spaces under domains; it's not the paradigm this API works under.
It may have to do with one of the following:
The scope is created for service accounts. Make sure you are accessing the REST API with a service account.
Make sure that the bot is added to the room or space and has access to do what you want it to do.
Make sure the Service account is part of the bot project that you are using for the bot.
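If you're checking those points from Python rather than Node, here is a rough equivalent of the snippet above (a sketch; it assumes google-auth and google-api-python-client are installed and reuses the example's credentials file and space ID):
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/chat.bot"]

# Same service-account JSON file as in the Node example above.
creds = service_account.Credentials.from_service_account_file(
    "google_service-account-credentials.json", scopes=SCOPES)
chat = build("chat", "v1", credentials=creds)

# List the members of a space the bot has been added to.
resp = chat.spaces().members().list(parent="spaces/AAAAD4xtKcE").execute()
print(resp)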

Gmail API returns 403 error code and "Delegation denied for <user email>"

Gmail API fails for one domain when retrieving messages with this error:
com.google.api.client.googleapis.json.GoogleJsonResponseException: 403 OK
{
  "code": 403,
  "errors": [
    {
      "domain": "global",
      "message": "Delegation denied for <user email>",
      "reason": "forbidden"
    }
  ],
  "message": "Delegation denied for <user email>"
}
I am using OAuth 2.0 and Google Apps Domain-Wide delegation of authority to access the user data. The domain has granted data access rights to the application.
Seems like the best thing to do is to always use userId="me" in your requests. That tells the API to use the authenticated user's mailbox; no need to rely on email addresses.
I had the same issue before; the solution is a bit tricky. You need to impersonate the person whose Gmail content you want to access first, then use userId='me' to run the query. It works for me.
here is some sample code:
from google.oauth2 import service_account
from googleapiclient.discovery import build

users = ...  # coming from directory service
for user in users:
    credentials = service_account.Credentials.from_service_account_file(
        SERVICE_ACCOUNT_FILE, scopes=SCOPES)
    # IMPORTANT: impersonate the user whose mailbox you want to query
    credentials_delegated = credentials.with_subject(user['primaryEmail'])
    gmail_service = build('gmail', 'v1', credentials=credentials_delegated)
    results = gmail_service.users().labels().list(userId='me').execute()
    labels = results.get('labels', [])
    for label in labels:
        print(label['name'])
Our users had migrated into a domain and their accounts had aliases attached. We needed to default the SendAs address to one of the imported aliases and wanted a way to automate it. The Gmail API looked like the solution, but our privileged user with roles to make changes to the accounts was not working - we kept seeing the "Delegation denied for " 403 error.
Here is a PHP example of how we were able to list their SendAs settings.
<?php
//
// Description:
//   List the user's SendAs addresses.
//
// Documentation:
//   https://developers.google.com/gmail/api/v1/reference/users/settings/sendAs
//   https://developers.google.com/gmail/api/v1/reference/users/settings/sendAs/list
//
// Local Path:
//   /path/to/api/vendor/google/apiclient-services/src/Google/Service/Gmail.php
//   /path/to/api/vendor/google/apiclient-services/src/Google/Service/Gmail/Resource/UsersSettingsSendAs.php
//
// Version:
//   Google_Client::LIBVER == 2.1.1
//
require_once $API_PATH . '/path/to/google-api-php-client/vendor/autoload.php';

date_default_timezone_set('America/Los_Angeles');

// This is the service account json file used to make api calls within our domain.
$serviceAccount = '/path/to/service-account-with-domain-wide-delegation.json';
putenv('GOOGLE_APPLICATION_CREDENTIALS=' . $serviceAccount);

$userKey = 'someuser@my.domain';

// In the Admin Directory API, we may do things like create accounts with
// an account having roles to make changes. With the Gmail API, we cannot
// use those accounts to make changes. Instead, we impersonate
// the user to manage their account.
$impersonateUser = $userKey;

// These are the scope(s) used.
define('SCOPES', implode(' ', array(Google_Service_Gmail::GMAIL_SETTINGS_BASIC)));

$client = new Google_Client();
$client->useApplicationDefaultCredentials(); // loads what's in that json service account file
$client->setScopes(SCOPES);                  // adds the scopes
$client->setSubject($impersonateUser);       // account authorized to perform operation

$gmailObj = new Google_Service_Gmail($client);
$res = $gmailObj->users_settings_sendAs->listUsersSettingsSendAs($userKey);
print_r($res);
?>
I wanted to access the emails of a fresh email ID/account, but the recently created '.credentials' folder containing a JSON file was associated with the previous email ID/account I had tried earlier. The access token and other parameters in that JSON are not associated with the new email ID/account. So, in order to make it run, you just have to delete the '.credentials' folder and run the program again. Now the program opens the browser and asks you to grant permissions.
To delete the folder containing the files in Python:
import shutil
shutil.rmtree("path of the folder to be deleted")
You may add this at the end of the program.
Recently I started exploring the Gmail API and I am following the same approach as Guo mentioned. However, it is going to take a lot of time and too many calls when the number of users grows. After domain-wide delegation my expectation was that the admin ID would be able to access the delegated inboxes, but it seems we need to create a service instance for each user.

Best way to upload files to Box.com programmatically

I've read the whole Box.com developers API guide and spent hours on the web researching this particular question, but I can't seem to find a definitive answer and I don't want to start creating a solution if I'm going down the wrong path. We have a production environment where, once we are finished working with files, our production software system zips them up and saves them into a local server directory for archival purposes. This local path cannot be changed. My question is: how can I programmatically upload these files to our Box.com account so we can archive them in the cloud? Everything I've read regarding this involves using OAuth2 to gain access to our account, which I understand, but it also requires the user to log in. Since this is an internal process that is NOT exposed to outside users, I want to be able to automate this; otherwise it would not be feasible for us. I have no issue creating the programs that trigger every time a new file gets saved; all I need is to streamline the Box.com access.
I just went through the exact same set of questions and found out that currently you CANNOT bypass the OAuth process. However, their refresh token is now valid for 60 days, which should make any custom setup a bit more sturdy. I still think, though, that having to use OAuth for an Enterprise setup is a very brittle implementation, for the exact reason you stated: it's not feasible for a middleware application to have to rely on an interactive OAuth authentication process.
My Solution:
Here's what I came up with. The following are the same steps as outlined in various Box API docs and videos:
1. Use this URL: https://www.box.com/api/oauth2/authorize?response_type=code&client_id=[YOUR_CLIENT_ID]&state=[box-generated_state_security_token] (go to https://developers.box.com/oauth/ to find the original one).
2. Paste that URL into the browser and go.
3. Authenticate and grant access.
4. Grab the resulting URL, http://0.0.0.0/?state=[box-generated_state_security_token]&code=[SOME_CODE], and note the "code=" value.
5. Open POSTMAN or Fiddler (or some other HTTP sniffer) and enter the following:
URL: https://www.box.com/api/oauth2/token
Create the URL-encoded post data:
grant_type=authorization_code
client_id=[YOUR CLIENT ID]
client_secret=[YOUR CLIENT SECRET]
code=< enter the code from step 4 >
6. Send the request and retrieve the resulting JSON data:
{
  "access_token": "[YOUR SHINY NEW ACCESS TOKEN]",
  "expires_in": 4255,
  "restricted_to": [],
  "refresh_token": "[YOUR HELPFUL REFRESH TOKEN]",
  "token_type": "bearer"
}
In my application I save both the auth token and refresh token in a format where I can easily go and replace them if something goes awry down the road. Then, I check my authentication each time I call into the API. If I get an authorization exception back, I refresh my token programmatically, which you can do! Using the BoxApi.V2 .NET SDK this happens like so:
var authenticator = new TokenProvider(_clientId, _clientSecret);
// calling the 'RefreshAccessToken' method in the SDK
var newAuthToken = authenticator.RefreshAccessToken([YOUR EXISTING REFRESH TOKEN]);
// write the new token back to my data store.
Save(newAuthToken);
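If you're not on .NET, the same refresh is a plain HTTP POST. A sketch in Python against the token endpoint from step 5 above (the field names follow Box's OAuth2 docs):
import requests

def refresh_box_token(client_id, client_secret, refresh_token):
    # Exchange the current refresh token for a new access/refresh pair
    # (same token endpoint as in the steps above).
    resp = requests.post("https://www.box.com/api/oauth2/token", data={
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "client_id": client_id,
        "client_secret": client_secret,
    })
    resp.raise_for_status()
    tokens = resp.json()
    # Persist BOTH tokens: the old refresh token is invalidated after use.
    return tokens["access_token"], tokens["refresh_token"]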
Hope this helped!
If I understand correctly, you want the entire process to be automated so it would not require a user login (i.e. run a script and the file is uploaded).
Well, it is possible. I am a rookie developer, so excuse me if I'm not using the correct terms.
Anyway, this can be accomplished by using cURL.
First you need to define some variables: your user credentials (username and password), your client ID and client secret given by Box (found in your app), and your redirect URI and state (used for extra safety, if I understand correctly).
OAuth 2.0 is a 4-step authentication process, and you're going to need to go through each step individually.
The first step would be setting up a cURL instance:
$curl = curl_init();
curl_setopt_array($curl, array(
  CURLOPT_URL => "https://app.box.com/api/oauth2/authorize",
  CURLOPT_RETURNTRANSFER => true,
  CURLOPT_ENCODING => "content-type: application/x-www-form-urlencoded",
  CURLOPT_MAXREDIRS => 10,
  CURLOPT_TIMEOUT => 30,
  CURLOPT_HTTP_VERSION => CURL_HTTP_VERSION_1_1,
  CURLOPT_CUSTOMREQUEST => "POST",
  CURLOPT_POSTFIELDS => "response_type=code&client_id=".$CLIENT_ID."&state=".$STATE,
));
This will return HTML text containing a request token. You will need it for the next step, so save the entire output to a variable and grep the tag with the request token (the tag has "name" = "request_token" and a "value", which is the actual token).
Next, you will need to send another curl request to the same URL. This time the post fields should include the request token, username, and password, as follows:
CURLOPT_POSTFIELDS => "response_type=code&client_id=".$CLIENT_ID."&state=".$STATE."&request_token=".$REQ_TOKEN."&login=".$USER_LOGIN."&password=".$PASSWORD
At this point you should also set a cookie file:
CURLOPT_COOKIEFILE => $COOKIE, (where $COOKIE is the path to the cookie file)
This will return another html text output, use the same method to grep the token which has the name "ic".
For the next step you're going to need to send a post request to the same url. It should include the postfields:
response_type=code&client_id=".$CLIENT_ID."&state=".$STATE."&redirect_uri=".$REDIRECT_URI."&doconsent=doconsent&scope=root_readwrite&ic=".$IC
Be sure to set the curl request to use the cookie file you set earlier like this:
CURLOPT_COOKIEFILE => $COOKIE,
and include the header in the request:
CURLOPT_HEADER => true,
At this step (if done in a browser) you will be redirected to a URL which looks as described above:
http://0.0.0.0(*redirect uri*)/?state=[box-generated_state_security_token]&code=[SOME_CODE]
Grab the value of "code".
Final step!
Send a new curl request to https://app.box.com/api/oauth2/token
This should include fields:
CURLOPT_POSTFIELDS => "grant_type=authorization_code&code=".$CODE."&client_id=".$CLIENT_ID."&client_secret=".$CLIENT_SECRET,
This will return a string containing "access token", "Expiration" and "Refresh token".
These are the tokens needed for the upload.
read about the use of them here:
https://box-content.readme.io/reference#upload-a-file
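To sketch that upload in Python (the endpoint and multipart format are from that reference; the access token is assumed to come from the flow above, and folder "0" is the root folder):
import json
import requests

def upload_to_box(access_token, local_path, filename, folder_id="0"):
    # Box's upload endpoint takes multipart/form-data: a JSON "attributes"
    # part describing the file, plus the file contents themselves.
    attributes = json.dumps({"name": filename, "parent": {"id": folder_id}})
    with open(local_path, "rb") as fh:
        resp = requests.post(
            "https://upload.box.com/api/2.0/files/content",
            headers={"Authorization": f"Bearer {access_token}"},
            data={"attributes": attributes},
            files={"file": (filename, fh)},
        )
    resp.raise_for_status()
    return resp.json()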
Hope this is somewhat helpful.
P.S. This is for PHP cURL; it is also possible to do the same using Bash cURL.
For anyone looking into this recently, the best way to do this is to create a Limited Access App in Box.
This will let you create an access token which you can use for server-to-server communication. It's then simple to upload a file (example in NodeJS):
import box from "box-node-sdk";
import fs from "fs";

(async function () {
  const client = box.getBasicClient(YOUR_ACCESS_TOKEN);
  await client.files.uploadFile(BOX_FOLDER_ID, FILE_NAME, fs.createReadStream(LOCAL_FILE_PATH));
})();
Have you thought about creating a Box 'integration' user for this particular purpose? It seems like uploads have to be made with a Box account, and it sounds like you are trying to do an anonymous upload. I think Box, like most services (including Stack Overflow), doesn't want anonymous uploads.
You could create a system user, go do the OAuth2 dance, and store just the refresh token somewhere safe. Then, as the first step when your script wakes up, use the refresh token and store the new refresh token. Then upload all your files.
