Can't use a Log Analytics workspace in a different subscription? Terraform azurerm policy assignment (terraform-provider-azure)

I'm using Terraform to write Azure Policy as code, and I've found two problems:
1. I can't seem to use a Log Analytics workspace that is in a different subscription; within the same subscription, it's fine.
2. For policies that need a managed identity, I can't seem to assign the correct rights to it.
resource "azurerm_policy_assignment" "Enable_Azure_Monitor_for_VMs" {
name = "Enable Azure Monitor for VMs"
scope = data.azurerm_subscription.current.id
policy_definition_id = "/providers/Microsoft.Authorization/policySetDefinitions/55f3eceb-5573-4f18-9695-226972c6d74a"
description = "Enable Azure Monitor for the virtual machines (VMs) in the specified scope (management group, subscription or resource group). Takes Log Analytics workspace as parameter."
display_name = "Enable Azure Monitor for VMs"
location = var.location
metadata = jsonencode(
{
"category" : "General"
})
parameters = jsonencode({
"logAnalytics_1" : {
"value" : var.log_analytics_workspace_ID
}
})
identity {
type = "SystemAssigned"
}
}
resource "azurerm_role_assignment" "vm_policy_msi_assignment" {
scope = azurerm_policy_assignment.Enable_Azure_Monitor_for_VMs.scope
role_definition_name = "Contributor"
principal_id = azurerm_policy_assignment.Enable_Azure_Monitor_for_VMs.identity[0].principal_id
}
For var.log_analytics_workspace_ID, if I use a workspace ID that is in the same subscription as the policy, it works fine. But if I use a workspace ID from a different subscription, the workspace field is blank after deployment.
Also, for resource "azurerm_role_assignment" "vm_policy_msi_assignment", I have already given myself the User Access Administrator role, but after deployment, "This identity currently has the following permissions:" is still blank?

I got an answer to my own question :)
1. This is not something designed well in Azure, I reckon.
Microsoft states: "a Managed Identity (MSI) is created for each policy assignment that contains DeployIfNotExists effects in the definitions. The required permission for the target assignment scope is managed automatically. However, if the remediation tasks need to interact with resources outside of the assignment scope, you will need to manually configure the required permissions."
This means the system-generated managed identity, which needs access to a Log Analytics workspace in another subscription, must be manually granted Log Analytics Contributor rights on that workspace.
Also, since you can't use a user-assigned managed identity here, you can't pre-populate this.
So if you want to achieve this in Terraform, it seems you have to run the policy assignment twice: the first run just to get the identity's principal ID, then manually (or via script) assign the permission, then run the policy assignment again to point to the resource (see the sketch below).
2. The identity was actually given Contributor rights; you just have to go into the subscription's RBAC (IAM) blade to see it.
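A minimal Terraform sketch of the middle step, if you choose to manage the cross-subscription grant from code in a second pass. It assumes the deploying principal has rights to assign roles in the other subscription; the variable name remote_log_analytics_workspace_id is illustrative:
# Grant the policy assignment's system-assigned identity access to the
# Log Analytics workspace that lives in the other subscription.
resource "azurerm_role_assignment" "cross_sub_workspace_access" {
  scope                = var.remote_log_analytics_workspace_id # full resource ID of the remote workspace
  role_definition_name = "Log Analytics Contributor"
  principal_id         = azurerm_policy_assignment.Enable_Azure_Monitor_for_VMs.identity[0].principal_id
}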

Related

Why is the Network Watcher on Azure not destroyed by Terraform?

I have a simple Terraform configuration to create an Azure virtual network. When I run plan and then apply, a virtual network is created inside a resource group as expected. But in addition to this resource group, one more is created, named NetworkWatcherRG, and inside of it I see a Network Watcher.
Now when I run the terraform destroy command, I expect that everything is cleaned up and all the resource groups are destroyed. Instead, everything except the NetworkWatcherRG and the Network Watcher inside it is destroyed.
It looks like the Network Watcher, along with its resource group, is NOT managed by Terraform. What am I missing?
The Network Watcher is not immediately obvious; it's not revealed right away. To see it, you need to go to the simplified view of the resource groups and click the Refresh button at least 5 times (each with a 2-second gap), or wait a long time and then click Refresh.
So what is this Network Watcher? Is Azure creating it by itself, outside of Terraform's management?
My Terraform configuration file is as follows.
# Terraform settings Block
terraform {
  required_version = ">= 1.0.0"
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 2.0"
    }
  }
}

# Provider Block
provider "azurerm" {
  features {}
}

# Create virtual network
resource "azurerm_virtual_network" "myvnet" {
  name          = "vivek-1-vnet"
  address_space = ["10.0.0.0/16"] # This is a list (it has []); if it had { }, it would be a map.

  location            = azurerm_resource_group.myrg.location
  resource_group_name = azurerm_resource_group.myrg.name

  tags = { # This is a map.
    "name" = "vivek-1-vnet"
  }
}

# Resource-1: Azure Resource Group
resource "azurerm_resource_group" "myrg" {
  name     = "vivek-vnet-rg"
  location = var.resource_group_location
}

variable "resource_group_location" {
  default     = "centralindia"
  description = "Location of the resource group."
}
And finally the commands I use are as follows.
terraform fmt
terraform init
terraform validate
terraform plan -out main.tfplan
terraform apply main.tfplan
terraform plan -destroy -out main.destroy.tfplan
terraform apply main.destroy.tfplan
I read the response from @RahulKumarShaw-MT. I believe the answer, and it makes complete sense that Terraform won't destroy resources it didn't create (unless someone can demonstrate otherwise). That said, I was able to delete the NetworkWatcherRG group using Terraform! To achieve this, I made sure to add a Network Watcher as one of my declared resources using azurerm_network_watcher (see https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/network_watcher) in the same Terraform script where I requested a virtual machine resource in another, separate resource group. I think you created a vnet; my script creates a vnet too, which is perhaps why Azure concludes that a Network Watcher is needed. I named the resource group that contains my Network Watcher whatever I wanted; it doesn't have to be 'NetworkWatcherRG'. I watched the resource group be created and destroyed successfully with Terraform (using terraform apply and terraform destroy, respectively), along with my VM and vnet resources. At the end, I refreshed the Azure Portal web page and saw no resource groups or resources left in my test subscription. I'm not an Azure expert, but I suspect that if Azure already sees a Network Watcher present, it won't create an additional one when Terraform creates my resources (in my case, a VM and a vnet), as a watcher will already be there, provided Terraform creates that resource before Azure gets the chance to. A sketch of this approach follows.
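A minimal sketch of declaring the watcher yourself (the names here are illustrative; Azure allows one Network Watcher per region per subscription):
# Resource group that will hold the explicitly managed Network Watcher.
resource "azurerm_resource_group" "watcher_rg" {
  name     = "my-network-watcher-rg"
  location = "centralindia"
}

# Because the watcher is declared here, it lands in Terraform state,
# so terraform destroy removes it along with everything else.
resource "azurerm_network_watcher" "watcher" {
  name                = "my-network-watcher"
  location            = azurerm_resource_group.watcher_rg.location
  resource_group_name = azurerm_resource_group.watcher_rg.name
}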
Before applying the Terraform code, I checked my resource groups and found the Network Watcher resource group already there; by default, this resource group is created on the Azure side.
As Mike-Ubezzi wrote on Microsoft forums:
Network Watcher resources are located in the hidden NetworkWatcherRG resource group which is created automatically. For example, the NSG Flow Logs resource is a child resource of Network Watcher and is enabled in the NetworkWatcherRG.
The Network Watcher resource represents the backend service for Network Watcher and is fully managed by Azure. Customers do not need to manage it. Operations like move are not supported on the resource. However, the resource can be deleted.
So terraform destroy will only delete the resources created by you (i.e., those recorded in the .tfstate file). This is the reason you won't be able to delete the NetworkWatcherRG resource group.
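If you do want the auto-created group gone via Terraform, one option (an assumption on my part, not something from the answers above) is to import it into state so a later destroy can remove it. Declare a matching resource block, then import it; the subscription ID is a placeholder:
resource "azurerm_resource_group" "netwatcher" {
  name     = "NetworkWatcherRG"
  location = "centralindia" # must match the group's actual region
}

terraform import azurerm_resource_group.netwatcher /subscriptions/<subscription-id>/resourceGroups/NetworkWatcherRG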

User-managed Service Account to deploy a Cloud Run instance

I need your help, please; I am not able to find out what I am missing. I created a user-managed SA and granted it these roles:
roles/run.admin
roles/iam.serviceAccountUser
but somehow I am not able to see it when creating the service.
I also added impersonation on the default compute SA.
I am pushing the changes via Terraform:
resource "google_service_account" "sa-deployer" {
project = local.project_id
account_id = "${local.env}-sa-deployer-tf"
display_name = "Service Account to deploy CloudRun instance"
}
resource "google_service_account_iam_member" "gce-default-account-iam" {
service_account_id = data.google_compute_default_service_account.default.name
role = "roles/iam.serviceAccountUser"
member = "serviceAccount:${google_service_account.sa-deployer.email}"
depends_on = [
google_service_account.sa-deployer
]
}
resource "google_project_iam_binding" "sa-deployer-run-admin" {
project = local.project_id
role = "roles/run.admin"
members = [
"serviceAccount:${google_service_account.sa-deployer.email}",
]
depends_on = [
google_service_account.sa-deployer
]
}
resource "google_project_iam_binding" "sa-deployer-build-admin" {
project = local.project_id
role = "roles/cloudbuild.builds.builder"
members = [
"serviceAccount:${google_service_account.sa-deployer.email}",
]
depends_on = [
google_service_account.sa-deployer
]
}
The current user must have the serviceAccountUser role to be able to list the service accounts in the project.
To allow a user to manage service accounts, grant one of the following roles:
Service Account User (roles/iam.serviceAccountUser): includes permissions to list service accounts, get details about a service account, and impersonate a service account.
Service Account Admin (roles/iam.serviceAccountAdmin): includes permissions to list service accounts and get details about a service account. Also includes permissions to create, update, and delete service accounts, and to view or change the IAM policy on a service account.
To learn more about these roles, see Service Accounts roles.
The IAM basic roles (roles/viewer, roles/editor) also contain permissions to manage service accounts. You should not grant basic roles in a production environment, but you can grant them in a development or test environment.
For more information, refer to the following documentation: Permissions to manage service accounts, and Listing service accounts. A sketch of granting the missing role in Terraform follows.
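A minimal Terraform sketch of that grant, assuming the console user's email is known (user:you@example.com is a placeholder):
# Give the human user permission to list and impersonate service
# accounts in the project, so the SA appears when creating the service.
resource "google_project_iam_member" "console_user_sa_user" {
  project = local.project_id
  role    = "roles/iam.serviceAccountUser"
  member  = "user:you@example.com" # placeholder for the console user
}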

boto3 list all accounts in an organization

I have a requirement to list all the accounts and then write all the credentials to my ~/.aws/credentials file. For this, I am using boto3 in the following way:
import boto3

client = boto3.client('organizations')
response = client.list_accounts(
    NextToken='string',
    MaxResults=123
)
print(response)
This fails with the following error:
botocore.exceptions.ClientError: An error occurred (ExpiredTokenException) when calling the ListAccounts operation: The security token included in the request is expired
The question is, which token is it looking at? And if I want information about all accounts, what credentials should I be using in the credentials file or the config file?
You can use boto3 paginators and pages.
Get an Organizations client by using an AWS configuration profile for the master account:
session = boto3.session.Session(profile_name=master_acct)
client = session.client('sts')
org = session.client('organizations')
Then use the org object to get a paginator.
paginator = org.get_paginator('list_accounts')
page_iterator = paginator.paginate()
Then iterate through every page of accounts.
for page in page_iterator:
    for acct in page['Accounts']:
        print(acct)  # print the account
I'm not sure what you mean about "getting credentials". You can't get someone else's credentials. What you can do is list users, and if you want, then list their access keys. That would require you to assume a role in each of the member accounts.
From within the above section, you are already inside a for-loop over each member account. You could do something like this:
id = acct['Id']
role_info = {
    'RoleArn': f'arn:aws:iam::{id}:role/OrganizationAccountAccessRole',
    'RoleSessionName': id
}
credentials = client.assume_role(**role_info)
member_session = boto3.session.Session(
    aws_access_key_id=credentials['Credentials']['AccessKeyId'],
    aws_secret_access_key=credentials['Credentials']['SecretAccessKey'],
    aws_session_token=credentials['Credentials']['SessionToken'],
    region_name='us-east-1'
)
However, please note that the role specified, OrganizationAccountAccessRole, needs to actually be present in every account, and your user in the master account needs the privileges to assume this role.
Once your prerequisites are set up, you will iterate through every account, and in each one use member_session to access boto3 resources in that account.

KeyVault -> Databricks automatic integration

I have followed Create an Azure Key Vault-backed secret scope to integrate Databricks with Key Vault, and it all works OK. Unfortunately, this requires manual intervention, which breaks our 'fully automated infrastructure' approach. Is there any way to automate this step?
UPDATE: You can create a Databricks-backed secret scope using the Databricks CLI (version 0.7.1 and above). Alternatively, you can use the Secrets API.
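For example, a minimal invocation with the (legacy) Databricks CLI, where the scope name my-scope is arbitrary:
databricks secrets create-scope --scope my-scope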
It does not appear that Azure Key Vault backed secret scope creation has a publicly available API call, unlike the Databricks backed secret scope creation. This is backed by the 'Note' on the secret scopes doc page:
Creating an Azure Key Vault-backed secret scope is supported only in the Azure Databricks UI. You cannot create a scope using the Secrets CLI or API.
A request for the feature you are asking for was made last year, but no ETA was given.
I took a look at the request made by the UI page. While the form data is simple enough, the headers and security measures make programmatic access impractical. If you are dead-set on automating this part, you could use one of those tools which automates the cursor around the screen and clicks things for you.
Now it is possible, but you can't use a service principal token; it must be a user token, which hinders automation.
Refer to the Microsoft docs:
https://learn.microsoft.com/en-us/azure/databricks/security/secrets/secret-scopes#create-an-azure-key-vault-backed-secret-scope-using-the-databricks-cli
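A sketch of the documented CLI call; the resource ID and DNS name are placeholders for your Key Vault:
databricks secrets create-scope --scope my-kv-scope \
  --scope-backend-type AZURE_KEYVAULT \
  --resource-id /subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.KeyVault/vaults/<vault-name> \
  --dns-name https://<vault-name>.vault.azure.net/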
You can use the Databricks Terraform provider to create a secret scope backed by Azure Key Vault. But because of Azure limitations, it must be done using a user's AAD token (usually obtained via the Azure CLI). Here is a working snippet for creating a secret scope from an existing Key Vault:
terraform {
  required_providers {
    databricks = {
      source  = "databrickslabs/databricks"
      version = "0.2.9"
    }
  }
}

provider "azurerm" {
  version  = "2.33.0"
  features {}
}

data "azurerm_databricks_workspace" "example" {
  name                = var.workspace_name
  resource_group_name = var.resource_group
}

provider "databricks" {
  azure_workspace_resource_id = data.azurerm_databricks_workspace.example.id
}

data "azurerm_key_vault" "example" {
  name                = var.keyvault_name
  resource_group_name = var.resource_group
}

resource "databricks_secret_scope" "example" {
  name = data.azurerm_key_vault.example.name
  keyvault_metadata {
    resource_id = data.azurerm_key_vault.example.id
    dns_name    = data.azurerm_key_vault.example.vault_uri
  }
}

variable "resource_group" {
  type        = string
  description = "Resource group to deploy"
}

variable "workspace_name" {
  type        = string
  description = "The name of the Databricks workspace"
}

variable "keyvault_name" {
  type        = string
  description = "The name of the Key Vault"
}
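A typical run, assuming the user (not a service principal) is signed in with the Azure CLI so the provider can obtain the user's AAD token:
az login
terraform init
terraform apply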

Google Admin Directory API, bad request 400 invalid_grant (using a service account)

So before showing my code, let me explain what steps I took to 'properly' set up the service account environment.
In the Google developer console, I created a service account. (This produced a client ID (a long number), a service account (xxxxx@xxxx.iam.gserviceaccount.com), and a private key, which I downloaded as P12.)
In the Admin console, I added the client ID with the appropriate scopes. In my case, the scopes are https://www.googleapis.com/auth/admin.directory.group.readonly and https://www.googleapis.com/auth/admin.directory.group.member.readonly.
In my code, I correctly set up the private key path and other environment details.
def getDirectoryService: Directory = {
  val httpTransport: HttpTransport = new NetHttpTransport()
  val jsonFactory: JacksonFactory = new JacksonFactory()
  val credential: GoogleCredential = new GoogleCredential.Builder()
    .setTransport(httpTransport)
    .setJsonFactory(jsonFactory)
    .setServiceAccountId("xxxxx@xxxx.iam.gserviceaccount.com")
    .setServiceAccountScopes(util.Arrays.asList(DirectoryScopes.ADMIN_DIRECTORY_GROUP_READONLY, DirectoryScopes.ADMIN_DIRECTORY_GROUP_MEMBER_READONLY))
    .setServiceAccountUser("admin@domain.com")
    .setServiceAccountPrivateKeyFromP12File(new java.io.File("/pathToKey/privatekey.p12"))
    .build()
  val service: Directory = new Directory.Builder(httpTransport, jsonFactory, null)
    .setHttpRequestInitializer(credential)
    .build()
  service
}
And then I attempt to execute something like this:
service.groups().list().execute()
or
service.groups().list("domain.com").execute()
This code results in:
com.google.api.client.auth.oauth2.TokenResponseException: 400 Bad Request
{
"error" : "invalid_grant"
}
at com.google.api.client.auth.oauth2.TokenResponseException.from(TokenResponseException.java:105)
at com.google.api.client.auth.oauth2.TokenRequest.executeUnparsed(TokenRequest.java:287)
at com.google.api.client.auth.oauth2.TokenRequest.execute(TokenRequest.java:307)
at com.google.api.client.googleapis.auth.oauth2.GoogleCredential.executeRefreshToken(GoogleCredential.java:384)
at com.google.api.client.auth.oauth2.Credential.refreshToken(Credential.java:489)
at com.google.api.client.auth.oauth2.Credential.intercept(Credential.java:217)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:868)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:419)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:352)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:469)
at com.company.project.GoogleServiceProvider.getGroups(GoogleServiceProvider.scala:81)
at com.company.project.ProjectHandler.handle(ProjectHandler.scala:110)
at com.company.common.web.DispatcherServlet.service(DispatcherServlet.scala:40)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:845)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:583)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:224)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1174)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1106)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:524)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:319)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:253)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
at java.lang.Thread.run(Thread.java:745)
What could I have done wrong? I have been searching for a solution for the past two days and have tried many things. One solution I'm still not sure about is NTP syncing (as in, how exactly to sync the server time via NTP).
Any advice would be very helpful, thank you!
UPDATE: I also made sure to activate the Admin Directory SDK and enabled Domain-Wide Delegation in the developer console.
UPDATE #2: I forgot to mention that the admin account is not the owner of the project itself. So basically, I am a member of a domain, and I created the project, so I am the only owner of the project and the service account (I am not the admin). But should an admin be the owner of the project and create the service account in order for this to work properly?
OK, my problem was that in setServiceAccountUser I put the admin group email address, not an actual user account. Apparently, it doesn't allow putting a group (alias) email address into setServiceAccountUser.
So after putting in an actual user account with admin privileges, it seems to be working.
I still wonder what the best practice would be, though. Should I create a separate user account with admin privileges just for the project? I definitely don't want to just put an admin account's email address in my code.
