Terraform GCP Secret Manager: for_each for multiple secrets error

I'm trying to store some values in Secret Manager: subnet names, CIDR ranges, the VPC name, etc. The following is the code I am using with for_each. It gives me an error and I'm not sure what I'm doing incorrectly.
resource "google_secret_manager_secret" "Network" {
provider = google-beta
for_each = local.exports
secret_id = each.key
replication {
automatic = false
}
}
resource "google_secret_manager_secret_version" "Network-Secrets" {
provider = google-beta
for_each = local.exports
secret = each.value.name
secret_data = each.value.object
}
locals {
exports = {
"rd-vpc-vpc-id" = { object = module.holly.network_id, name = "rd-vpc-vpc-id" }
"rd-subnets-subnet1-id" = { object = module.kryten_subnet.subnet_id, name = "rd-subnets-subnet1-id" }
"rd-subnets-subnet1-cidr" = { object = module.kryten_subnet.subnet_cidr, name = "rd-subnets-subnet1-cidr" }
}
}
When I do a terraform plan, I get no errors, but after entering yes on the Apply, the following error is presented.
Error: Error creating SecretVersion: googleapi: got HTTP response code 404 with body:

404. That’s an error.
The requested URL /v1beta1/rd-subnets-subnet1-cidr:addVersion?alt=json was not found on this server. That’s all we know.

  on ..\..\..\..\red-dwarf-terraform-modules\network\exports.tf line 12, in resource "google_secret_manager_secret_version" "Network-Secrets":
  12: resource "google_secret_manager_secret_version" "Network-Secrets" {
Any help much appreciated.
Edit: If I use
secret = google_secret_manager_secret.Network.id
I get the error below:
Error: Missing resource instance key

  on ..\..\..\..\red-dwarf-terraform-modules\network\exports.tf line 16, in resource "google_secret_manager_secret_version" "Network-Secrets":
  16: secret = google_secret_manager_secret.Network.id

Because google_secret_manager_secret.Network has "for_each" set, its
attributes must be accessed on specific instances.

For example, to correlate with indices of a referring resource, use:
  google_secret_manager_secret.Network[each.key]
[terragrunt] 2020/07/16 22:46:43 Hit multiple errors:
exit status 1

The secret parameter needs to be the id of the parent secret (its full resource URL), not just the secret's name; and since the secret resource uses for_each, you have to reference the specific instance: google_secret_manager_secret.Network[each.key].id.

The correct usage would be:
The JSON conf file:
{
  ...
  "secret" : {
    "secret_A" : "secret_A_value",
    "secret_B" : "secret_B_value"
  }
}
resource "google_secret_manager_secret" "secret-basic" {
project = local.conf.project_id
for_each = local.conf.secret
secret_id = each.key
labels = {
label = "SOME_LABEL"
}
replication {
user_managed {
replicas {
location = "SOME_LOCATION"
}
}
}
}
resource "google_secret_manager_secret_version" "secret-version-basic" {
for_each = local.conf.secret
secret = google_secret_manager_secret.secret-basic[each.key].id
secret_data = each.value
}
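For completeness, here is a minimal sketch of that fix applied to the original configuration (keeping the same local.exports map and google-beta provider; I've assumed automatic = true replication, since automatic = false without a user_managed block isn't a meaningful replication policy):
resource "google_secret_manager_secret" "Network" {
  provider  = google-beta
  for_each  = local.exports
  secret_id = each.key

  # Assumption: automatic replication; swap in a user_managed block
  # if you need specific locations.
  replication {
    automatic = true
  }
}

resource "google_secret_manager_secret_version" "Network-Secrets" {
  provider = google-beta
  for_each = local.exports

  # Reference the specific for_each instance's full resource id
  # (projects/<project>/secrets/<secret_id>) instead of the bare name.
  secret      = google_secret_manager_secret.Network[each.key].id
  secret_data = each.value.object
}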

Related

Azure event hub using terraform

I have a question related to Terraform code for Azure Event Hub.
What are the security principles and policies that we need to take care of while deploying Azure Event Hub securely through Terraform? If possible, please share the Terraform code as well.
Thanks.
I have checked a few docs but was unable to understand them.
I tried to reproduce the same in my environment to create an Azure event hub using Terraform:
Terraform Code:
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "venkyrg" {
name = "venkyrg1"
location = "West Europe"
}
resource "azurerm_eventhub_namespace" "example" {
name = "venkatnamespace"
location = azurerm_resource_group.venkyrg.location
resource_group_name = azurerm_resource_group.venkyrg.name
sku = "Standard"
capacity = 1
tags = {
environment = "Production"
}
}
resource "azurerm_eventhub" "example" {
name = "venkateventhub"
namespace_name = azurerm_eventhub_namespace.example.name
resource_group_name = azurerm_resource_group.venkyrg.name
partition_count = 2
message_retention = 1
}
#Event hub Policy creation
resource "azurerm_eventhub_authorization_rule" "example" {
name = "navi"
namespace_name = azurerm_eventhub_namespace.example.name
eventhub_name = azurerm_eventhub.example.name
resource_group_name = azurerm_resource_group.venkyrg.name
listen = true
send = false
manage = false
}
# Service Prinicipal Assignment
resource "azurerm_role_assignment" "pod-identity-assignment" {
scope = azurerm_resource_group.resourceGroup.id
role_definition_name = "Azure Event Hubs Data Owner"
principal_id = "74cca40a-1d7e-4352-a66c-217eab00cf33"
}
Terraform apply:
Once the code has run, the resources are created in Azure successfully, along with the Event Hub policies.
For data-plane access control, see the Azure built-in roles for Azure Event Hubs.
Reference: Azurerm-eventhub with Terraform
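On the security side, a common pattern (a sketch, not part of the original answer; the rule name is an assumption) is to create least-privilege authorization rules per client, e.g. a send-only rule for producers, and to treat connection strings as sensitive outputs:
# Send-only rule, scoped to the single event hub, for producer applications
resource "azurerm_eventhub_authorization_rule" "producer" {
  name                = "producer-send"
  namespace_name      = azurerm_eventhub_namespace.example.name
  eventhub_name       = azurerm_eventhub.example.name
  resource_group_name = azurerm_resource_group.venkyrg.name
  listen              = false
  send                = true
  manage              = false
}

# Mark the connection string sensitive so Terraform does not print it in plan/apply output
output "producer_connection_string" {
  value     = azurerm_eventhub_authorization_rule.producer.primary_connection_string
  sensitive = true
}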

How to create multiple alerts for single resource in azure using terraform

How do I create multiple alerts for a single resource in Azure using Terraform (i.e. CPU, memory & disk I/O alerts of a VM)?
Please check the code below to create an alert for a single resource (a for_each sketch for multiple alerts follows the reference):
provider "azurerm"
features{}
}
resource "azurerm_resource_group" "rgv" {
name = "<resource group name>"
location = "west us"
}
resource "azurerm_monitor_action_group" "agv" {
name = "myactiongroup"
resource_group_name = azurerm_resource_group.rgv.name
short_name = "exampleact"
}
resource "azurerm_monitor_metric_alert" "alert" {
name = "example-metricalert"
resource_group_name = azurerm_resource_group.rgv.name
scopes = ["/subscriptions/1234XXXXXX/resourceGroups/<rg name>/providers/Microsoft.Compute/virtualMachines/<virtualmachine name>"]
description = "description"
target_resource_type = "Microsoft.Compute/virtualMachines"
criteria {
metric_namespace = "Microsoft.Compute/virtualMachines"
metric_name = "Percentage CPU"
aggregation = "Total"
operator = "GreaterThan"
threshold = 80
}
action {
action_group_id = azurerm_monitor_action_group.agv.id
}
}
Reference: hashicorp azurerm_monitor_metric_alert
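To actually get several alerts (CPU, memory, disk I/O) on the same VM without duplicating the resource block, a for_each over a map of criteria works. A minimal sketch; the metric names, aggregations, and thresholds are assumptions to adapt:
# Sketch: one metric alert per map entry (metrics/thresholds are assumptions)
locals {
  vm_alerts = {
    cpu    = { metric = "Percentage CPU", aggregation = "Average", operator = "GreaterThan", threshold = 80 }
    memory = { metric = "Available Memory Bytes", aggregation = "Average", operator = "LessThan", threshold = 1073741824 } # ~1 GiB
    disk   = { metric = "Disk Read Operations/Sec", aggregation = "Average", operator = "GreaterThan", threshold = 1000 }
  }
}

resource "azurerm_monitor_metric_alert" "vm" {
  for_each             = local.vm_alerts
  name                 = "vm-${each.key}-alert"
  resource_group_name  = azurerm_resource_group.rgv.name
  scopes               = ["/subscriptions/1234XXXXXX/resourceGroups/<rg name>/providers/Microsoft.Compute/virtualMachines/<virtualmachine name>"]
  target_resource_type = "Microsoft.Compute/virtualMachines"

  criteria {
    metric_namespace = "Microsoft.Compute/virtualMachines"
    metric_name      = each.value.metric
    aggregation      = each.value.aggregation
    operator         = each.value.operator
    threshold        = each.value.threshold
  }

  action {
    action_group_id = azurerm_monitor_action_group.agv.id
  }
}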

Cert-Manager Challenge pending, no error in MIC (Azure DNS)

I can't get TLS to work. The CertificateRequest gets created, as are the Order and the Challenge. However, the Challenge is stuck in pending.
Name: test-tls-secret-8qshd-3608253913-1269058669
Namespace: test
Labels: <none>
Annotations: <none>
API Version: acme.cert-manager.io/v1
Kind: Challenge
Metadata:
Creation Timestamp: 2022-07-19T08:17:04Z
Finalizers:
finalizer.acme.cert-manager.io
Generation: 1
Managed Fields:
API Version: acme.cert-manager.io/v1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:finalizers:
.:
v:"finalizer.acme.cert-manager.io":
Manager: cert-manager-challenges
Operation: Update
Time: 2022-07-19T08:17:04Z
API Version: acme.cert-manager.io/v1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:ownerReferences:
.:
k:{"uid":"06029d3f-d1ce-45db-a267-796ff9b82a67"}:
f:spec:
.:
f:authorizationURL:
f:dnsName:
f:issuerRef:
.:
f:group:
f:kind:
f:name:
f:key:
f:solver:
.:
f:dns01:
.:
f:azureDNS:
.:
f:environment:
f:hostedZoneName:
f:resourceGroupName:
f:subscriptionID:
f:token:
f:type:
f:url:
f:wildcard:
Manager: cert-manager-orders
Operation: Update
Time: 2022-07-19T08:17:04Z
API Version: acme.cert-manager.io/v1
Fields Type: FieldsV1
fieldsV1:
f:status:
.:
f:presented:
f:processing:
f:reason:
f:state:
Manager: cert-manager-challenges
Operation: Update
Subresource: status
Time: 2022-07-19T08:25:38Z
Owner References:
API Version: acme.cert-manager.io/v1
Block Owner Deletion: true
Controller: true
Kind: Order
Name: test-tls-secret-8qshd-3608253913
UID: 06029d3f-d1ce-45db-a267-796ff9b82a67
Resource Version: 4528159
UID: 9594ed48-72c6-4403-8356-4991950fe9bb
Spec:
Authorization URL: https://acme-v02.api.letsencrypt.org/acme/authz-v3/131873811576
Dns Name: test.internal.<company_id>.com
Issuer Ref:
Group: cert-manager.io
Kind: ClusterIssuer
Name: letsencrypt
Key: xrnhZETWbkGTE7CA0A3CQd6a48d4JG4HKDiCXPpxTWM
Solver:
dns01:
Azure DNS:
Environment: AzurePublicCloud
Hosted Zone Name: internal.<company_id>.com
Resource Group Name: tool-cluster-rg
Subscription ID: <subscription_id>
Token: jXCR2UorNanlHqZd8T7Ifjbx6PuGfLBwnzWzBnDvCyc
Type: DNS-01
URL: https://acme-v02.api.letsencrypt.org/acme/chall-v3/131873811576/vCGdog
Wildcard: false
Status:
Presented: false
Processing: true
Reason: azure.BearerAuthorizer#WithAuthorization: Failed to refresh the Token for request to https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/tool-cluster-rg/providers/Microsoft.Network/dnsZones/internal.<company_id>.com/TXT/_acme-challenge.test?api-version=2017-10-01: StatusCode=404 -- Original Error: adal: Refresh request failed. Status Code = '404'. Response body: getting assigned identities for pod cert-manager/cert-manager-5bb7949947-qlg5j in CREATED state failed after 16 attempts, retry duration [5]s, error: <nil>. Check MIC pod logs for identity assignment errors
Endpoint http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fmanagement.core.windows.net%2F
State: pending
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Started 59m cert-manager-challenges Challenge scheduled for processing
Warning PresentError 11s (x7 over 51m) cert-manager-challenges Error presenting challenge: azure.BearerAuthorizer#WithAuthorization: Failed to refresh the Token for request to https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/tool-cluster-rg/providers/Microsoft.Network/dnsZones/internal.<company_id>.com/TXT/_acme-challenge.test?api-version=2017-10-01: StatusCode=404 -- Original Error: adal: Refresh request failed. Status Code = '404'. Response body: getting assigned identities for pod cert-manager/cert-manager-5bb7949947-qlg5j in CREATED state failed after 16 attempts, retry duration [5]s, error: <nil>. Check MIC pod logs for identity assignment errors
Endpoint http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fmanagement.core.windows.net%2F
It says to check the MIC pod logs; however, there are no errors logged:
I0719 08:16:52.271516 1 mic.go:587] pod test/test-deployment-b5dcc75f4-5gdtj has no assigned node yet. it will be ignored
I0719 08:16:52.284362 1 mic.go:608] No AzureIdentityBinding found for pod test/test-deployment-b5dcc75f4-5gdtj that matches selector: certman-label. it will be ignored
I0719 08:16:53.735678 1 mic.go:648] certman-identity identity not found when using test/certman-id-binding binding
I0719 08:16:53.737027 1 mic.go:1040] processing node aks-default-10282586-vmss, add [1], del [0], update [0]
I0719 08:16:53.737061 1 crd.go:514] creating assigned id test/test-deployment-b5dcc75f4-5gdtj-test-certman-identity
I0719 08:16:53.844892 1 cloudprovider.go:210] updating user-assigned identities on aks-default-10282586-vmss, assign [1], unassign [0]
I0719 08:17:04.545556 1 crd.go:777] updating AzureAssignedIdentity test/test-deployment-b5dcc75f4-5gdtj-test-certman-identity status to Assigned
I0719 08:17:04.564464 1 mic.go:525] work done: true. Found 1 pods, 1 ids, 1 bindings
I0719 08:17:04.564477 1 mic.go:526] total work cycles: 392, out of which work was done in: 320
I0719 08:17:04.564492 1 stats.go:183] ** stats collected **
I0719 08:17:04.564497 1 stats.go:162] Pod listing: 20.95µs
I0719 08:17:04.564504 1 stats.go:162] AzureIdentity listing: 2.357µs
I0719 08:17:04.564508 1 stats.go:162] AzureIdentityBinding listing: 3.211µs
I0719 08:17:04.564512 1 stats.go:162] AzureAssignedIdentity listing: 431ns
I0719 08:17:04.564516 1 stats.go:162] System: 71.101µs
I0719 08:17:04.564520 1 stats.go:162] CacheSync: 4.482µs
I0719 08:17:04.564523 1 stats.go:162] Cloud provider GET: 83.123547ms
I0719 08:17:04.564527 1 stats.go:162] Cloud provider PATCH: 10.700611864s
I0719 08:17:04.564531 1 stats.go:162] AzureAssignedIdentity creation: 24.654916ms
I0719 08:17:04.564535 1 stats.go:162] AzureAssignedIdentity update: 0s
I0719 08:17:04.564538 1 stats.go:162] AzureAssignedIdentity deletion: 0s
I0719 08:17:04.564542 1 stats.go:170] Number of cloud provider PATCH: 1
I0719 08:17:04.564546 1 stats.go:170] Number of cloud provider GET: 1
I0719 08:17:04.564549 1 stats.go:170] Number of AzureAssignedIdentities created in this sync cycle: 1
I0719 08:17:04.564554 1 stats.go:170] Number of AzureAssignedIdentities updated in this sync cycle: 0
I0719 08:17:04.564557 1 stats.go:170] Number of AzureAssignedIdentities deleted in this sync cycle: 0
I0719 08:17:04.564561 1 stats.go:162] Find AzureAssignedIdentities to create: 0s
I0719 08:17:04.564564 1 stats.go:162] Find AzureAssignedIdentities to delete: 0s
I0719 08:17:04.564568 1 stats.go:162] Total time to assign or update AzureAssignedIdentities: 10.827425179s
I0719 08:17:04.564573 1 stats.go:162] Total: 10.82763016s
I0719 08:17:04.564577 1 stats.go:212] *********************
I0719 08:19:34.077484 1 mic.go:1466] reconciling identity assignment for [/subscriptions/<subscription_id>/resourceGroups/tool-cluster-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/cert-manager-dns01] on node aks-default-10282586-vmss
I0719 08:22:34.161195 1 mic.go:1466] reconciling identity assignment for [/subscriptions/<subscription_id>/resourceGroups/tool-cluster-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/cert-manager-dns01] on node aks-default-10282586-vmss
The "reconciling identity" output gets repeated afterwards. Up to this point, I was able to handle my way through error messages, but now I have no idea how to proceed. Anyone got any lead what I'm missing?
Below is my Terraform code for the infrastructure.
terraform {
  cloud {
    organization = "<company_id>"
    workspaces {
      name = "tool-cluster"
    }
  }

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 3.6.0, < 4.0.0"
    }
  }
}

provider "azurerm" {
  features {}
}

data "azurerm_client_config" "default" {}

variable "id" {
  type        = string
  description = "Company wide unique terraform identifier"
  default     = "tool-cluster"
}

resource "azurerm_resource_group" "default" {
  name     = "${var.id}-rg"
  location = "westeurope"
}

resource "azurerm_kubernetes_cluster" "default" {
  name                = "${var.id}-aks"
  location            = azurerm_resource_group.default.location
  resource_group_name = azurerm_resource_group.default.name
  dns_prefix          = var.id

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_D4_v5"
  }

  identity {
    type = "SystemAssigned"
  }

  role_based_access_control_enabled = true
  http_application_routing_enabled  = true
}

resource "azurerm_dns_zone" "internal" {
  name                = "internal.<company_id>.com"
  resource_group_name = azurerm_resource_group.default.name
}

resource "azurerm_user_assigned_identity" "dns_identity" {
  name                = "cert-manager-dns01"
  resource_group_name = azurerm_resource_group.default.name
  location            = azurerm_resource_group.default.location
}

resource "azurerm_role_assignment" "dns_contributor" {
  scope                = azurerm_dns_zone.internal.id
  role_definition_name = "DNS Zone Contributor"
  principal_id         = azurerm_user_assigned_identity.dns_identity.principal_id
}
I've granted the kubelet identity the roles "Managed Identity Operator" and "Virtual Machine Contributor" on the generated resource group of the cluster (MC_tool-cluster-rg_tool-cluster-aks_westeurope), and "Managed Identity Operator" on the resource group of the cluster itself (tool-cluster-rg). A sketch of those assignments in Terraform follows (the data source and resource names are my assumptions).
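# Sketch of the role assignments described above; aad-pod-identity's MIC needs
# these so the kubelet identity can attach the user-assigned identity to nodes.
data "azurerm_resource_group" "node_rg" {
  # the MC_... resource group that AKS generates
  name = azurerm_kubernetes_cluster.default.node_resource_group
}

resource "azurerm_role_assignment" "kubelet_vm_contributor" {
  scope                = data.azurerm_resource_group.node_rg.id
  role_definition_name = "Virtual Machine Contributor"
  principal_id         = azurerm_kubernetes_cluster.default.kubelet_identity[0].object_id
}

resource "azurerm_role_assignment" "kubelet_mi_operator_node_rg" {
  scope                = data.azurerm_resource_group.node_rg.id
  role_definition_name = "Managed Identity Operator"
  principal_id         = azurerm_kubernetes_cluster.default.kubelet_identity[0].object_id
}

resource "azurerm_role_assignment" "kubelet_mi_operator_cluster_rg" {
  scope                = azurerm_resource_group.default.id
  role_definition_name = "Managed Identity Operator"
  principal_id         = azurerm_kubernetes_cluster.default.kubelet_identity[0].object_id
}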
Code for the cert-manager:
terraform {
  cloud {
    organization = "<company_id>"
    workspaces {
      name = "cert-manager"
    }
  }

  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.12.0, < 3.0.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.6.0, < 3.0.0"
    }
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 3.6.0, < 4.0.0"
    }
  }
}

data "terraform_remote_state" "tool-cluster" {
  backend = "remote"
  config = {
    organization = "<company_id>"
    workspaces = {
      name = "tool-cluster"
    }
  }
}

provider "azurerm" {
  features {}
}

provider "kubernetes" {
  host                   = data.terraform_remote_state.tool-cluster.outputs.host
  client_certificate     = base64decode(data.terraform_remote_state.tool-cluster.outputs.client_certificate)
  client_key             = base64decode(data.terraform_remote_state.tool-cluster.outputs.client_key)
  cluster_ca_certificate = base64decode(data.terraform_remote_state.tool-cluster.outputs.cluster_ca_certificate)
}

provider "helm" {
  kubernetes {
    host                   = data.terraform_remote_state.tool-cluster.outputs.host
    client_certificate     = base64decode(data.terraform_remote_state.tool-cluster.outputs.client_certificate)
    client_key             = base64decode(data.terraform_remote_state.tool-cluster.outputs.client_key)
    cluster_ca_certificate = base64decode(data.terraform_remote_state.tool-cluster.outputs.cluster_ca_certificate)
  }
}

locals {
  app-name = "cert-manager"
}

resource "kubernetes_namespace" "cert_manager" {
  metadata {
    name = local.app-name
  }
}

resource "helm_release" "cert_manager" {
  name       = local.app-name
  repository = "https://charts.jetstack.io"
  chart      = "cert-manager"
  version    = "v1.8.2"
  namespace  = kubernetes_namespace.cert_manager.metadata.0.name

  set {
    name  = "installCRDs"
    value = "true"
  }
}

resource "helm_release" "aad_pod_identity" {
  name       = "aad-pod-identity"
  repository = "https://raw.githubusercontent.com/Azure/aad-pod-identity/master/charts"
  chart      = "aad-pod-identity"
  version    = "v4.1.10"
  namespace  = kubernetes_namespace.cert_manager.metadata.0.name
}

resource "azurerm_user_assigned_identity" "default" {
  name                = local.app-name
  resource_group_name = data.terraform_remote_state.tool-cluster.outputs.resource_name
  location            = data.terraform_remote_state.tool-cluster.outputs.resource_location
}

resource "azurerm_role_assignment" "default" {
  scope                = data.terraform_remote_state.tool-cluster.outputs.dns_zone_id
  role_definition_name = "DNS Zone Contributor"
  principal_id         = azurerm_user_assigned_identity.default.principal_id
}

output "namespace" {
  value     = kubernetes_namespace.cert_manager.metadata.0.name
  sensitive = false
}
and the code for my issuer:
terraform {
  cloud {
    organization = "<company_id>"
    workspaces {
      name = "cert-issuer"
    }
  }

  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.12.0, < 3.0.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.6.0, < 3.0.0"
    }
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 3.6.0, < 4.0.0"
    }
  }
}

data "terraform_remote_state" "tool-cluster" {
  backend = "remote"
  config = {
    organization = "<company_id>"
    workspaces = {
      name = "tool-cluster"
    }
  }
}

data "terraform_remote_state" "cert-manager" {
  backend = "remote"
  config = {
    organization = "<company_id>"
    workspaces = {
      name = "cert-manager"
    }
  }
}

provider "azurerm" {
  features {}
}

provider "kubernetes" {
  host                   = data.terraform_remote_state.tool-cluster.outputs.host
  client_certificate     = base64decode(data.terraform_remote_state.tool-cluster.outputs.client_certificate)
  client_key             = base64decode(data.terraform_remote_state.tool-cluster.outputs.client_key)
  cluster_ca_certificate = base64decode(data.terraform_remote_state.tool-cluster.outputs.cluster_ca_certificate)
}

provider "helm" {
  kubernetes {
    host                   = data.terraform_remote_state.tool-cluster.outputs.host
    client_certificate     = base64decode(data.terraform_remote_state.tool-cluster.outputs.client_certificate)
    client_key             = base64decode(data.terraform_remote_state.tool-cluster.outputs.client_key)
    cluster_ca_certificate = base64decode(data.terraform_remote_state.tool-cluster.outputs.cluster_ca_certificate)
  }
}

locals {
  app-name = "cert-manager"
}

data "azurerm_subscription" "current" {}

resource "kubernetes_manifest" "cluster_issuer" {
  manifest = yamldecode(templatefile(
    "${path.module}/cluster-issuer.tpl.yaml",
    {
      "name"                = "letsencrypt"
      "subscription_id"     = data.azurerm_subscription.current.subscription_id
      "resource_group_name" = data.terraform_remote_state.tool-cluster.outputs.resource_name
      "dns_zone_name"       = data.terraform_remote_state.tool-cluster.outputs.dns_zone_name
    }
  ))
}
Also, the YAML:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: ${name}
spec:
  acme:
    email: support@<company_id>.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: ${name}
    solvers:
      - dns01:
          azureDNS:
            resourceGroupName: ${resource_group_name}
            subscriptionID: ${subscription_id}
            hostedZoneName: ${dns_zone_name}
            environment: AzurePublicCloud
Finally, my sample app:
terraform {
  cloud {
    organization = "<company_id>"
    workspaces {
      name = "test-web-app"
    }
  }

  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.12.0, < 3.0.0"
    }
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 3.6.0, < 4.0.0"
    }
    azuread = {
      source  = "hashicorp/azuread"
      version = ">= 2.26.0, < 3.0.0"
    }
  }
}

data "terraform_remote_state" "tool-cluster" {
  backend = "remote"
  config = {
    organization = "<company_id>"
    workspaces = {
      name = "tool-cluster"
    }
  }
}

provider "azuread" {}

provider "azurerm" {
  features {}
}

provider "kubernetes" {
  host                   = data.terraform_remote_state.tool-cluster.outputs.host
  client_certificate     = base64decode(data.terraform_remote_state.tool-cluster.outputs.client_certificate)
  client_key             = base64decode(data.terraform_remote_state.tool-cluster.outputs.client_key)
  cluster_ca_certificate = base64decode(data.terraform_remote_state.tool-cluster.outputs.cluster_ca_certificate)
}

locals {
  app-name = "test"
  host     = "test.${data.terraform_remote_state.tool-cluster.outputs.cluster_domain_name}"
}

resource "azurerm_dns_cname_record" "default" {
  name                = local.app-name
  zone_name           = data.terraform_remote_state.tool-cluster.outputs.dns_zone_name
  resource_group_name = data.terraform_remote_state.tool-cluster.outputs.resource_name
  ttl                 = 300
  record              = local.host
}

resource "azuread_application" "default" {
  display_name = local.app-name
}

resource "kubernetes_namespace" "default" {
  metadata {
    name = local.app-name
  }
}

resource "kubernetes_secret" "auth" {
  metadata {
    name      = "basic-auth"
    namespace = kubernetes_namespace.default.metadata.0.name
  }
  data = {
    "auth" = file("./auth")
  }
}

resource "kubernetes_deployment" "default" {
  metadata {
    name      = "${local.app-name}-deployment"
    namespace = kubernetes_namespace.default.metadata.0.name
    labels = {
      app = local.app-name
    }
  }
  spec {
    replicas = 1
    selector {
      match_labels = {
        app = local.app-name
      }
    }
    template {
      metadata {
        labels = {
          app             = local.app-name
          aadpodidbinding = "certman-label"
        }
      }
      spec {
        container {
          image = "crccheck/hello-world:latest"
          name  = local.app-name
          port {
            container_port = 8000
            host_port      = 8000
          }
        }
      }
    }
  }
}

resource "kubernetes_service" "default" {
  metadata {
    name      = "${local.app-name}-svc"
    namespace = kubernetes_namespace.default.metadata.0.name
  }
  spec {
    selector = {
      app = kubernetes_deployment.default.metadata.0.labels.app
    }
    port {
      port        = 8000
      target_port = 8000
    }
  }
}

resource "kubernetes_ingress_v1" "default" {
  metadata {
    name      = "${local.app-name}-ing"
    namespace = kubernetes_namespace.default.metadata.0.name
    annotations = {
      "kubernetes.io/ingress.class"    = "addon-http-application-routing"
      "cert-manager.io/cluster-issuer" = "letsencrypt"
      # basic-auth
      "nginx.ingress.kubernetes.io/auth-type"   = "basic"
      "nginx.ingress.kubernetes.io/auth-secret" = "basic-auth"
      "nginx.ingress.kubernetes.io/auth-realm"  = "Authentication Required - foo"
    }
  }
  spec {
    rule {
      host = local.host
      http {
        path {
          path = "/"
          backend {
            service {
              name = kubernetes_service.default.metadata.0.name
              port {
                number = 8000
              }
            }
          }
        }
      }
    }
    rule {
      host = trimsuffix(azurerm_dns_cname_record.default.fqdn, ".")
      http {
        path {
          path = "/"
          backend {
            service {
              name = kubernetes_service.default.metadata.0.name
              port {
                number = 8000
              }
            }
          }
        }
      }
    }
    tls {
      hosts       = [trimsuffix(azurerm_dns_cname_record.default.fqdn, ".")]
      secret_name = "${local.app-name}-tls-secret"
    }
  }
}

resource "kubernetes_manifest" "azure_identity" {
  manifest = yamldecode(templatefile(
    "${path.module}/azure_identity.tpl.yaml",
    {
      "namespace"   = kubernetes_namespace.default.metadata.0.name
      "resource_id" = data.terraform_remote_state.tool-cluster.outputs.identity_resource_id
      "client_id"   = data.terraform_remote_state.tool-cluster.outputs.identity_client_id
    }
  ))
}

resource "kubernetes_manifest" "azure_identity_binding" {
  manifest = yamldecode(templatefile(
    "${path.module}/azure_identity_binding.tpl.yaml",
    {
      "namespace"   = kubernetes_namespace.default.metadata.0.name
      "resource_id" = data.terraform_remote_state.tool-cluster.outputs.identity_resource_id
      "client_id"   = data.terraform_remote_state.tool-cluster.outputs.identity_client_id
    }
  ))
}
The two identity YAMLs:
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentity
metadata:
  annotations:
    # recommended to use namespaced identities https://azure.github.io/aad-pod-identity/docs/configure/match_pods_in_namespace/
    aadpodidentity.k8s.io/Behavior: namespaced
  name: certman-identity
  namespace: ${namespace} # change to your preferred namespace
spec:
  type: 0 # MSI
  resourceID: ${resource_id} # resource id from the previous step
  clientID: ${client_id} # client id from the previous step
and
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentityBinding
metadata:
  name: certman-id-binding
  namespace: ${namespace} # change to your preferred namespace
spec:
  azureIdentity: certman-identity
  selector: certman-label # this is the label that needs to be set on cert-manager pods
I was not able to solve it with HTTP application routing, so I installed my own ingress, and instead of aad-pod-identity I installed ExternalDNS with a service principal. The Terraform code for that:
locals {
  app-name = "external-dns"
}

resource "azuread_application" "dns" {
  display_name = "dns-service_principal"
}

resource "azuread_application_password" "dns" {
  application_object_id = azuread_application.dns.object_id
}

resource "azuread_service_principal" "dns" {
  application_id = azuread_application.dns.application_id
  description    = "Service Principal to write DNS changes for ${data.terraform_remote_state.tool-cluster.outputs.dns_zone_name}"
}

resource "azurerm_role_assignment" "dns_zone_contributor" {
  scope                = data.terraform_remote_state.tool-cluster.outputs.dns_zone_id
  role_definition_name = "DNS Zone Contributor"
  principal_id         = azuread_service_principal.dns.id
}

resource "azurerm_role_assignment" "rg_reader" {
  scope                = data.terraform_remote_state.tool-cluster.outputs.dns_zone_id
  role_definition_name = "Reader"
  principal_id         = azuread_service_principal.dns.id
}

resource "kubernetes_secret" "external_dns_secret" {
  metadata {
    name = "azure-config-file"
  }
  data = {
    "azure.json" = jsonencode({
      tenantId        = data.azurerm_subscription.default.tenant_id
      subscriptionId  = data.azurerm_subscription.default.subscription_id
      resourceGroup   = data.terraform_remote_state.tool-cluster.outputs.resource_name
      aadClientId     = azuread_application.dns.application_id
      aadClientSecret = azuread_application_password.dns.value
    })
  }
}

resource "kubernetes_service_account" "dns" {
  metadata {
    name = local.app-name
  }
}

resource "kubernetes_cluster_role" "dns" {
  metadata {
    name = local.app-name
  }
  rule {
    api_groups = [""]
    resources  = ["services", "endpoints", "pods", "nodes"]
    verbs      = ["get", "watch", "list"]
  }
  rule {
    api_groups = ["extensions", "networking.k8s.io"]
    resources  = ["ingresses"]
    verbs      = ["get", "watch", "list"]
  }
}

resource "kubernetes_cluster_role_binding" "dns" {
  metadata {
    name = "${local.app-name}-viewer"
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = kubernetes_cluster_role.dns.metadata.0.name
  }
  subject {
    kind = "ServiceAccount"
    name = kubernetes_service_account.dns.metadata.0.name
  }
}

resource "kubernetes_deployment" "dns" {
  metadata {
    name = local.app-name
  }
  spec {
    strategy {
      type = "Recreate"
    }
    selector {
      match_labels = {
        "app" = local.app-name
      }
    }
    template {
      metadata {
        labels = {
          "app" = local.app-name
        }
      }
      spec {
        service_account_name = kubernetes_service_account.dns.metadata.0.name
        container {
          name  = local.app-name
          image = "bitnami/external-dns:0.12.1"
          args  = ["--source=service", "--source=ingress", "--provider=azure", "--txt-prefix=externaldns-"]
          volume_mount {
            name       = kubernetes_secret.external_dns_secret.metadata.0.name
            mount_path = "/etc/kubernetes"
            read_only  = true
          }
        }
        volume {
          name = kubernetes_secret.external_dns_secret.metadata.0.name
          secret {
            secret_name = kubernetes_secret.external_dns_secret.metadata.0.name
          }
        }
      }
    }
  }
}
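Note that the snippet references data.azurerm_subscription.default and the tool-cluster remote state, which are not shown above; a minimal sketch of the assumed declarations:
# Assumed declarations referenced by the snippet above (not shown in the original post)
data "azurerm_subscription" "default" {}

data "terraform_remote_state" "tool-cluster" {
  backend = "remote"
  config = {
    organization = "<company_id>"
    workspaces = {
      name = "tool-cluster"
    }
  }
}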

What is the simplest way to get GCR image digest through terraform?

The Terraform Google provider has a data source called google_container_registry_image, which has a digest attribute, but it will be null, since the documentation states that the data source works completely offline.
data "google_container_registry_image" "image" {
project = "foo"
name = "bar"
tag = "baz"
}
output "digest" {
value = data.google_container_registry_image.image.digest // this is null
}
The current workaround I use relies on the Docker provider and looks like this:
terraform {
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "2.11.0"
    }
  }
}

provider "docker" {
  registry_auth {
    address     = "gcr.io"
    config_file = pathexpand("~/.docker/config.json")
  }
}

data "google_container_registry_image" "image" {
  project = "foo"
  name    = "bar"
  tag     = "baz"
}

data "docker_registry_image" "image" {
  name = data.google_container_registry_image.image.image_url
}

output "digest" {
  value = data.docker_registry_image.image.sha256_digest
}
Using two providers and additional Docker credentials seems pretty complicated for such a simple use case; is there an easier way to do it?
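One small simplification of the same workaround, assuming you can hard-code the image URL that google_container_registry_image would otherwise compose, is to drop the intermediate data source and query the registry directly:
# Same workaround without the google data source; the gcr.io URL is an assumption to adapt
data "docker_registry_image" "image" {
  name = "gcr.io/foo/bar:baz"
}

output "digest" {
  value = data.docker_registry_image.image.sha256_digest
}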

how do I add virtual network to api management with terraform?

How do I add a virtual network to API Management?
https://www.terraform.io/docs/providers/azurerm/r/api_management.html#virtual_network_configuration
A virtual_network_configuration block supports the following:
subnet_id - (Required) The id of the subnet that will be used for the API Management.
Just add the subnet ID as shown in the Terraform docs. Here is an example:
provider "azurerm" {
features {}
}
data "azurerm_subnet" "example" {
name = "default"
virtual_network_name = "vnet-name"
resource_group_name = "group-name"
}
resource "azurerm_api_management" "example" {
name = "example-apim"
location = "East US"
resource_group_name = "group-name"
publisher_name = "My Company"
publisher_email = "company#terraform.io"
sku_name = "Developer_1"
virtual_network_type = "Internal"
virtual_network_configuration {
subnet_id = data.azurerm_subnet.example.id
}
policy {
xml_content = <<XML
<policies>
<inbound />
<backend />
<outbound />
<on-error />
</policies>
XML
}
}
You can change the virtual network type and the other properties as needed. This example uses an existing VNet; you can also create a new one, as sketched below.
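If you would rather create the network in the same configuration, a minimal sketch (names and address spaces are assumptions):
# Assumed VNet/subnet created alongside the API Management instance
resource "azurerm_virtual_network" "example" {
  name                = "apim-vnet"
  location            = "East US"
  resource_group_name = "group-name"
  address_space       = ["10.0.0.0/16"]
}

resource "azurerm_subnet" "apim" {
  name                 = "apim-subnet"
  resource_group_name  = "group-name"
  virtual_network_name = azurerm_virtual_network.example.name
  address_prefixes     = ["10.0.1.0/24"]
}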
