Is there a way in Gerrit to either:
Limit the access of a Project Owner to read-only
Make certain API calls while not being a Project Owner? To be more specific, I'm interested in two calls:
GET projects/<project_name>/branches/<branch>/reflog/
and
GET projects/<project_name>/commits/<commit>
We have a few developer teams in the company that could really benefit from the Gerrit API, but we are trying to limit their access for obvious reasons. So far we've created a group, "Devs A", added all the developers to it, and then added that group to Project Owner. We then blocked the refs/meta/config privilege for "Devs A", but the group members are still able to edit and delete repositories and branches. Any idea what else we should block?
Thanks
Why did you grant the "Project Owner" permission to the "Devs A" group? If you want to let the group call the GET branches and commits endpoints, you only need to grant the "Read" permission on "refs/*".
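For reference, a minimal sketch of what that grant looks like in the project.config file on the refs/meta/config branch (assuming "Devs A" is already listed in the project's groups file; granting it through the web UI produces the same result):
[access "refs/*"]
  read = group Devs A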
I am trying to collect all active TIs via the Beta Graph API by following this. But it doesn't return anything. Here is what I use in Postman:
https://graph.microsoft.com/beta/security/tiIndicators
Response (200):
{
"#odata.context": "https://graph.microsoft.com/beta/$metadata#security/tiIndicators",
"value": []
}
A bit of context for the environment I work in.
The tenant has multiple Sentinel workspaces & resource groups.
The application I use has the correct permissions:
ThreatIndicators.Read.All
ThreatIndicators.ReadWrite.OwnedBy
ThreatSubmission.Read.All
ThreatSubmission.ReadWrite.All
It is my current belief that this might be due to limitations of the Beta API. My reasoning is that, according to this documentation, you need the ThreatIndicators.ReadWrite.OwnedBy permission to access the API. This would suggest that currently you can only view TIs that the calling app itself created.
If more info is needed, just ask.
According to the documentation, the ThreatIndicators.ReadWrite.OwnedBy permission allows you to manage threat indicators that your app creates or owns.
If you want to read all the threat indicators for your organization, your app needs the ThreatIndicators.Read.All permission.
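As a rough sketch, a client-credentials call with that application permission granted (and admin-consented) would look something like this; the tenant ID, client ID, and secret are placeholders, and the msal and requests libraries are assumed:
import msal
import requests

# Placeholders - fill in your own tenant and app registration details.
app = msal.ConfidentialClientApplication(
    client_id="<client-id>",
    client_credential="<client-secret>",
    authority="https://login.microsoftonline.com/<tenant-id>",
)
# Application permissions are requested via the .default scope.
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

resp = requests.get(
    "https://graph.microsoft.com/beta/security/tiIndicators",
    headers={"Authorization": "Bearer " + token["access_token"]},
)
print(resp.json())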
Although this is not a solution to the question, it is a workaround: by using the Log Analytics API you can get the TIs via a KQL query.
ThreatIntelligenceIndicator
| where ExpirationDateTime > now() and
NetworkIP matches regex @"^(?:(?:25[0-5]|(?:2[0-4]|1\d|[1-9]|)\d)\.?\b){4}$" and
ConfidenceScore > 25
| summarize by NetworkIP
This is probably better, as you can also use a watchlist to exclude specific IP addresses with one request.
One thing I struggled with here was authorization. You must give your application permission to use the api.loganalytics.io API, and the application needs the Log Analytics Reader role in the Log Analytics workspace you want to use.
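A minimal sketch of the call itself (the workspace ID and token are placeholders, the requests library is assumed, and the token must be issued for the api.loganalytics.io resource; the KQL is simplified from the query above):
import requests

workspace_id = "<workspace-id>"
token = "<token-for-api.loganalytics.io>"

kql = """
ThreatIntelligenceIndicator
| where ExpirationDateTime > now() and ConfidenceScore > 25
| summarize by NetworkIP
"""

resp = requests.post(
    "https://api.loganalytics.io/v1/workspaces/%s/query" % workspace_id,
    headers={"Authorization": "Bearer " + token},
    json={"query": kql},
)
resp.raise_for_status()

# Results come back as tables with columns and rows.
for table in resp.json()["tables"]:
    for row in table["rows"]:
        print(row[0])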
I am in the process of switching the LDAP backend that we use to authenticate access to Gerrit.
When a user logs in via LDAP, a local account is created within Gerrit. We are running version 2.15 of Gerrit, and therefore our local user accounts have been migrated from the SQL DB into NoteDb.
The changes in our infrastructure mean that once the LDAP backend has been switched, user logins will appear to Gerrit as new users, and therefore new local accounts will be generated. As a result we will need to perform a number of administrative tasks on the existing local accounts before and after migration.
The REST API exposes some of the functionality that we need; however, two key elements appear to be missing:
There appears to be no way to retrieve a list of all local accounts through the API (such that I could then iterate through them to perform the administrative tasks I need to complete). The /accounts/ endpoint insists on a query filter being specified, and there does not appear to be a way to simply specify 'all' or '*'. Instead I am having to try to think of a search filter that will reliably return all accounts - I haven't succeeded yet.
There appears to be no way to delete an account. Once the migration is complete, I need to remove the old accounts, but no API endpoint or other method for removing old accounts is documented.
Has anybody found a solution to either of these tasks that they could share?
I came to the conclusion that the answers to my questions were:
('/a/' in the examples below accesses the administrative endpoint, so basic auth is required and the user needs the appropriate permissions)
Retrieving all accounts
There is no way to do this in a single query, however combining the results of:
GET /a/accounts?q=is:active&n=<number larger than the number of users>
GET /a/accounts?q=is:inactive&n=<number larger than the number of users>
will give effectively the same thing.
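A minimal sketch of combining the two queries (the host, credentials, and limit are placeholders; the requests library and a Gerrit HTTP password are assumed):
import json
import requests

GERRIT = "https://gerrit.example.com"
AUTH = ("admin", "<http-password>")

def get_accounts(query, limit=10000):
    r = requests.get(
        GERRIT + "/a/accounts/",
        params={"q": query, "n": limit},
        auth=AUTH,
    )
    r.raise_for_status()
    # Gerrit prefixes JSON responses with )]}' to defeat XSSI,
    # so strip the first line before parsing.
    return json.loads(r.text.split("\n", 1)[1])

accounts = get_accounts("is:active") + get_accounts("is:inactive")
print(len(accounts))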
Deleting an account
It seems that this simply is not supported. The only option appears to be to set an account inactive:
DELETE /a/accounts/<account_id>/active
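Continuing the hypothetical sketch above, deactivating the leftover accounts after migration would look something like this (note that Gerrit returns 409 if an account is already inactive):
# Deactivate every account that is still active, reusing the
# get_accounts() helper from the sketch above.
for account in get_accounts("is:active"):
    requests.delete(
        GERRIT + "/a/accounts/%d/active" % account["_account_id"],
        auth=AUTH,
    )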
TFS 2018u1. I have a custom PowerShell task that calls TFS services via the VssConnection object:
$VSS = Get-VssConnection -TaskContext $distributedTaskContext
$Client = $VSS.GetClient(...)
Question: what kind of security context does the task get? It's definitely not the agent account. To make sure, I've set up a temporary agent instance that runs as me, the TFS admin, and the custom task running on that agent doesn't have full admin rights.
The underlying problem: I'm trying to get the current agent record from a task, and the task only sees one pool, even though we have several. See this answer.
I'm trying to get the current agent record from a task, and the task only sees one pool, even though we have several.
There is an agent pool security roles concept. For example, with the Administrator role you can:
Register or unregister agents from the pool and manage membership for all pools, as well as view and create pools. They can also use the agent pool when creating an agent queue in a team project. The system automatically adds the user that created the pool to the Administrator role for that pool.
The default rights of a build agent running a release task should be the same as those of the build service account. Please add your build service account to the Administrator role under the agent pool security roles, from the collection-level admin context on the Agent Pools page. Then try it again.
Another possibility is that you are lacking the vso.agentpools scope in your custom release task.
Grants the ability to view tasks, pools, queues, agents, and
currently running or recently completed jobs for agents
For more details, please take a look at Supported scopes.
First off, the distributedTaskContext doesn't connect to TFS with NTLM, as Patrick Lu's answer suggests. It connects with an Authorization: Bearer header and a token. I've used the same token to invoke the /_api/_common/GetUserProfile endpoint, which returns the current user, and got back the following identity record:
{
"IdentityType": "user",
"FriendlyDisplayName": "Project Collection Build Service (TEAM FOUNDATION)",
"DisplayName": "Project Collection Build Service (TEAM FOUNDATION)",
"SubHeader": "Build\\233e4ccc-d129-4ba4-9c5b-ea82c7ae1d15",
"TeamFoundationId": "7a3195ee-870e-4151-ba58-1e522732086c",
"EntityId": "vss.ds.v1.ims.user.7a3195ee870e4151ba581e522732086c",
"Errors": [],
"Warnings": [],
"Domain": "Build",
"AccountName": "233e4ccc-d129-4ba4-9c5b-ea82c7ae1d15",
"IsWindowsUser": false,
"MailAddress": ""
}
It looks like some kind of artificial identity that TFS creates just for this purpose. Looking at the tbl_Identity table in the TFS database, there are numerous user records with names like that - one per collection, it seems, and also some that are project-specific.
This user belongs to a server-level group called "Security Service Group" (and also to a collection level group with the same name). Those groups belong, respectively, to Team Foundation Valid Users and Project Collection Valid Users and nothing else.
At least on the collection level, the "Security Service Group" is visible and contains a lot of accounts.
All those "Build Service" users belong to the domain called "Build". A domain is not a security principal though, you can't grant rights to a domain.
Speaking of OAuth scopes. I've used the same token to invoke the homegrown "what are this token's scopes" page, and it turns out the distributedTaskContext token has exactly one - app_token. It's a valid scope that opens up all endpoints and all methods (see the dynamic scope list). The scopes parameter in the extension manifest has no bearing on that; it only affects the client-side contributions.
When it comes to pool visibility, though, the story is tricky. Seems like all the "Project Collection Build Service" accounts belong to Valid Users, but granting the Reader role on all pools to Valid Users doesn't open them up to the REST API in tasks. Granting Reader explicitly to "Project Collection Build Service" does. However, there are numerous accounts like this (one per collection, it seems) - and granting Reader only opens the pools up to release definitions in the collection where it resides. In order to let tasks in releases in all collections read the pools, you need to go through all collections and grant Reader to the "Project Collection Build Service" from each.
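To check what a task actually sees, a probe along these lines can be run from a pipeline step (a hypothetical sketch: the requests library is assumed, SYSTEM_TEAMFOUNDATIONCOLLECTIONURI and SYSTEM_ACCESSTOKEN are the standard agent variables, and the latter requires "Allow scripts to access OAuth token" to be enabled):
import os
import requests

collection_url = os.environ["SYSTEM_TEAMFOUNDATIONCOLLECTIONURI"]
token = os.environ["SYSTEM_ACCESSTOKEN"]

# List the agent pools visible to the task's own bearer token.
r = requests.get(
    collection_url + "_apis/distributedtask/pools",
    params={"api-version": "4.0"},
    headers={"Authorization": "Bearer " + token},
)
r.raise_for_status()
for pool in r.json()["value"]:
    print(pool["id"], pool["name"])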
I'm trying to set up a release definition on TFS, but I'm running into an access denied message:
I thought I should have this permission, since I am part of the "Agent Pool Administrator" group:
I noticed, however, that my queue has no roles, and that I can't add one for some reason, which I suspect to be related to the problem:
My question is: how do I correctly configure the permissions? I've already googled a bunch, but I still couldn't pinpoint which exact permission I'm missing.
[Update]
This is TFS 2015 update 3
Apparently I am myself already a project collection administrator, but I still don't have queue permissions and don't know, or can't see, where to add myself as a queue admin.
The queue in question was created by me, but indirectly: I created the agent pool with auto-provision queues checked, and that created the queue. However, if I try to create a queue directly, I run into another "Access Denied" error.
[Update]
Trying to run tfssecurity /collection:http://wada-pc:8080/tfs/DefaultCollection /g+ "[Agent Queues]\Agent Queue Administrators" "domain\account"
leads to Error: Access Denied: Eduardo Wada needs the following permission(s) to perform this action: Edit collection-level information
However, I should have that permission:
Yes, your issue is related to the agent queue roles. An agent queue provides access to a pool of agents. Usually there are two groups under Roles:
Agent Queue Administrators: People in this group can register new agents in that pool, add users to the Agent Pool Service Accounts and add other administrators to the pool.
Agent Queue Users: For Team Foundation Server, the service account you specify for the agent (commonly Network Service) is automatically added when you register the agent.
Try using the account that created this agent queue to check whether it can see the roles, and add your account into the two groups.
Or, try to create a new agent queue to see whether you can see the roles, and deploy a new agent.
My situation is as follows:
Google Account A has some data in BigQuery.
Google Account B manages Account A's BigQuery data, and has also been given editor privileges for Account A's Cloud Platform project.
Account B has a sheet in Google Drive that has some cool reference data in it. Account B logs into the BQ web console and creates a table in Account A's BQ project that is backed by this sheet.
All is well. Account B can query and join to this table successfully within Account A's BQ data from the web UI.
Problem:
Google Account A also has a service account that is an editor for Google Account A's Cloud Platform project. This service account manages and queries the data in BQ using the Python google-cloud API. When this service account attempts to query the reference table that is backed by Account B's GDrive sheet, the job fails with this error:
Encountered an error while globbing file pattern. JobID: "testing_gdrivesheet_query_job1"
As near as I can tell, this is actually an authentication issue. How can I give Account A's service account appropriate access to Account B's GDrive so that it can access that reference table?
Bonus Points:
Is there any performance difference between a table backed by a GDrive sheet and a native BQ table?
While Orbit's answer helped me find a solution to the issue, there are a few more things you need to consider, so I would like to add my detailed solution to the problem. This solution is required if Orbit's basic solution does not work; in particular, if you use G Suite and your policies do not allow sharing sheets/docs with accounts outside of your domain, you cannot directly share a doc/sheet with the service account.
Before you start:
Create or select a service account in your project
Enable Domain-wide Delegation (DwD) in the account settings. If not already present, this generates an OAuth client ID for the service account.
Make sure the delegated user@company.com has access to the sheet.
Add the required scopes to your service account's OAuth client (you may need to ask a G Suite admin to do this for you):
https://www.googleapis.com/auth/bigquery
https://www.googleapis.com/auth/drive
If the delegated user can access your drive-based table in the BigQuery UI, your service account should now also be able to access it on behalf of the delegated user.
Here is a full code snippet that worked for me:
#!/usr/bin/env python
import httplib2
from google.cloud import bigquery
from oauth2client.service_account import ServiceAccountCredentials

# Both scopes are needed: BigQuery for the query itself and Drive for the
# sheet-backed table.
scopes = [
    "https://www.googleapis.com/auth/drive",
    "https://www.googleapis.com/auth/bigquery",
]
delegated_user = "user@example.com"
project = 'project-name'
table = 'dataset-name.table-name'
query = 'SELECT count(*) FROM [%s:%s]' % (project, table)

# Load the service account key, then impersonate the delegated user.
creds = ServiceAccountCredentials.from_json_keyfile_name('secret.json', scopes=scopes)
creds = creds.create_delegated(delegated_user)
http = creds.authorize(httplib2.Http())

# Hand the authorized HTTP client to the BigQuery client.
client = bigquery.Client(http=http)
bq = client.run_sync_query(query)
bq.run()
print(bq.fetch_data())
Note that I was not able to set up the delegation directly and needed to create an HTTP client using creds = creds.create_delegated(delegated_user) and http = creds.authorize(httplib2.Http()). The authorized HTTP client can then be used as the HTTP client for the BigQuery client: client = bigquery.Client(http=http).
Also note that the service account does not need to have any predefined roles assigned in the project settings; that is, you do not have to make it a BigQuery user or even a project owner. I suppose it acquires access primarily via the delegation.
You should be able to get this working with the following steps:
First, share the sheet with the email/"service account ID" associated with the service account.
Then you'll be able to access your sheet-backed table if you create a Client with the bigquery and drive scopes. (You might need to have domain-wide delegation enabled on the service account.)
from google.cloud import bigquery
from oauth2client.service_account import ServiceAccountCredentials

scopes = ['https://www.googleapis.com/auth/bigquery', 'https://www.googleapis.com/auth/drive']
credentials = ServiceAccountCredentials.from_json_keyfile_name(
    '<path_to_json>', scopes=scopes)

# Instantiate a client with both scopes; PROJECT is your project ID
# and q is your query string.
client = bigquery.Client(project=PROJECT, credentials=credentials)
bqQuery = client.run_sync_query(q)
bqQuery.run()
bqQuery.fetch_data()
For those of you trying to do this via Airflow or Google Cloud Composer, there are two main steps you'll need to take to accomplish this.
Grant view access on the spreadsheet to project_name@developer.gserviceaccount.com. This should be the same service account you're using to access Google BigQuery. This can be done in the Sheets GUI or programmatically.
Add the following scope to your Google Cloud connection in Airflow: https://www.googleapis.com/auth/drive
You will then be able to query external tables that reference Google Sheets.
Just one step to add to Evan Kaeding's answer: you can find the Airflow connection in the Airflow UI under "Admin" -> "Connections" -> choose your connection. In my case I also needed to add the keyfile path or keyfile JSON of the service account to the Airflow connection,
based on this reference: https://cloud.google.com/composer/docs/how-to/managing/connections#creating_a_connection_to_another_project
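For illustration, the connection's Extra field would then contain something like the following (these are the field names used by the google_cloud_platform connection type in Airflow 1.x; the project and key path are placeholders):
{
  "extra__google_cloud_platform__project": "your-project",
  "extra__google_cloud_platform__key_path": "/path/to/keyfile.json",
  "extra__google_cloud_platform__scope": "https://www.googleapis.com/auth/bigquery,https://www.googleapis.com/auth/drive"
}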