Azure Terraform Web App private Endpoint virtual network - terraform-provider-azure

I am trying to automate the deployment of an Azure virtual network and an Azure web app.
The deployment of those resources went just fine with no errors, so I wanted to enable the private endpoint on the web app. This is my Terraform configuration:
resource "azurerm_virtual_network" "demo-vnet" {
name = "virtual-network-test"
address_space = ["10.100.0.0/16"]
location = var.location
resource_group_name = azurerm_resource_group.rg-testing-env.name
}
resource "azurerm_subnet" "front_end" {
name = "Front_End-Subnet"
address_prefixes = ["10.100.5.0/28"]
virtual_network_name = azurerm_virtual_network.demo-vnet.name
resource_group_name = azurerm_resource_group.rg-testing-env.name
delegation {
name = "testing-frontend"
service_delegation {
name = "Microsoft.Web/serverFarms"
actions = ["Microsoft.Network/virtualNetworks/subnets/action"]
}
}
}
And on the web app itself, I set this configuration:
resource "azurerm_app_service_virtual_network_swift_connection" "web-app-vnet" {
app_service_id = azurerm_app_service.app-test.example.id
subnet_id = azurerm_subnet.front_end.id
}
NOTE: On my first deployment, the swift connection failed because I did not have a delegation on the subnet, so I had to add the delegation to be able to run Terraform.
After putting all the configuration in place, I ran Terraform and everything went smoothly with no errors.
After completion, I checked my web app's private endpoint and it was just off.
Can anyone please explain what I am doing wrong here? I thought the swift connection was the block of code that activates the private endpoint, but apparently I am missing something else.
Just to confirm my logic and workflow, I tried to do the manual steps in the portal, but surprisingly I was not able to because of the delegation on the subnet.
Thank you so much for any help and/or explanation you can offer to solve this issue.

I have used the below code to test the creation of a VNet and a web app with a private endpoint.
provider "azurerm" {
features{}
}
data "azurerm_resource_group" "rg" {
name = "ansumantest"
}
# Virtual Network
resource "azurerm_virtual_network" "vnet" {
name = "ansumanapp-vnet"
location = data.azurerm_resource_group.rg.location
resource_group_name = data.azurerm_resource_group.rg.name
address_space = ["10.4.0.0/16"]
}
# Subnets for App Service instances
resource "azurerm_subnet" "appserv" {
name = "frontend-app"
resource_group_name = data.azurerm_resource_group.rg.name
virtual_network_name = azurerm_virtual_network.vnet.name
address_prefixes = ["10.4.1.0/24"]
enforce_private_link_endpoint_network_policies = true
}
# App Service Plan
resource "azurerm_app_service_plan" "frontend" {
name = "ansuman-frontend-asp"
location = data.azurerm_resource_group.rg.location
resource_group_name = data.azurerm_resource_group.rg.name
kind = "Linux"
reserved = true
sku {
tier = "Premium"
size = "P1V2"
}
}
# App Service
resource "azurerm_app_service" "frontend" {
name = "ansuman-frontend-app"
location = data.azurerm_resource_group.rg.location
resource_group_name = data.azurerm_resource_group.rg.name
app_service_plan_id = azurerm_app_service_plan.frontend.id
}
#private endpoint
resource "azurerm_private_endpoint" "example" {
name = "${azurerm_app_service.frontend.name}-endpoint"
location = data.azurerm_resource_group.rg.location
resource_group_name = data.azurerm_resource_group.rg.name
subnet_id = azurerm_subnet.appserv.id
private_service_connection {
name = "${azurerm_app_service.frontend.name}-privateconnection"
private_connection_resource_id = azurerm_app_service.frontend.id
subresource_names = ["sites"]
is_manual_connection = false
}
}
# private DNS
resource "azurerm_private_dns_zone" "example" {
name = "privatelink.azurewebsites.net"
resource_group_name = data.azurerm_resource_group.rg.name
}
#private DNS Link
resource "azurerm_private_dns_zone_virtual_network_link" "example" {
name = "${azurerm_app_service.frontend.name}-dnslink"
resource_group_name = data.azurerm_resource_group.rg.name
private_dns_zone_name = azurerm_private_dns_zone.example.name
virtual_network_id = azurerm_virtual_network.vnet.id
registration_enabled = false
}
Requirements:
- As you can see from the above code, the azurerm_private_endpoint, azurerm_private_dns_zone, and azurerm_private_dns_zone_virtual_network_link blocks are required to create the private endpoint and enable it for the App Service.
- The App Service Plan needs to be a Premium plan to support a private endpoint.
- The subnet used by the private endpoint should have enforce_private_link_endpoint_network_policies = true set; otherwise the apply fails with an error saying the subnet has private endpoint network policies enabled and they must be disabled before a private endpoint can use it.
- The DNS zone name should be privatelink.azurewebsites.net, since you are creating a private endpoint for a web app (see the sketch below for one way to attach this zone to the endpoint).
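As an alternative to managing the DNS association separately, the azurerm provider also lets you attach the DNS zone directly to the endpoint with a private_dns_zone_group block. This is a minimal sketch of that variant, reusing the resource names from the code above; it replaces the azurerm_private_endpoint block shown earlier rather than adding a second one, and you should verify the block is supported by your provider version:

resource "azurerm_private_endpoint" "example" {
  name                = "${azurerm_app_service.frontend.name}-endpoint"
  location            = data.azurerm_resource_group.rg.location
  resource_group_name = data.azurerm_resource_group.rg.name
  subnet_id           = azurerm_subnet.appserv.id

  private_service_connection {
    name                           = "${azurerm_app_service.frontend.name}-privateconnection"
    private_connection_resource_id = azurerm_app_service.frontend.id
    subresource_names              = ["sites"]
    is_manual_connection           = false
  }

  # Registers the endpoint's private IP in the privatelink.azurewebsites.net zone
  # so that name resolution from inside the VNet returns the private address.
  private_dns_zone_group {
    name                 = "default"
    private_dns_zone_ids = [azurerm_private_dns_zone.example.id]
  }
}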

Related

Azure event hub using terraform

I have a question related to Terraform code for Azure Event Hub.
What are the security principles and policies that we need to take care of while deploying an Azure Event Hub securely through Terraform? If possible, please share the Terraform code as well.
Thanks.
I have checked a few docs but was unable to understand them.
I tried to reproduce the same in my environment to create an Azure Event Hub using Terraform.
Terraform code:
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "venkyrg" {
name = "venkyrg1"
location = "West Europe"
}
resource "azurerm_eventhub_namespace" "example" {
name = "venkatnamespace"
location = azurerm_resource_group.venkyrg.location
resource_group_name = azurerm_resource_group.venkyrg.name
sku = "Standard"
capacity = 1
tags = {
environment = "Production"
}
}
resource "azurerm_eventhub" "example" {
name = "venkateventhub"
namespace_name = azurerm_eventhub_namespace.example.name
resource_group_name = azurerm_resource_group.venkyrg.name
partition_count = 2
message_retention = 1
}
#Event hub Policy creation
resource "azurerm_eventhub_authorization_rule" "example" {
name = "navi"
namespace_name = azurerm_eventhub_namespace.example.name
eventhub_name = azurerm_eventhub.example.name
resource_group_name = azurerm_resource_group.venkyrg.name
listen = true
send = false
manage = false
}
# Service Prinicipal Assignment
resource "azurerm_role_assignment" "pod-identity-assignment" {
scope = azurerm_resource_group.resourceGroup.id
role_definition_name = "Azure Event Hubs Data Owner"
principal_id = "74cca40a-1d7e-4352-a66c-217eab00cf33"
}
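If consumers should connect with the least-privilege, listen-only policy above, you can expose its connection string as a sensitive output. This is a small sketch based on the primary_connection_string attribute of azurerm_eventhub_authorization_rule; adapt the output name to your conventions:

# Connection string scoped to the listen-only "navi" rule; marked sensitive so
# Terraform does not print it in plan/apply output.
output "eventhub_listen_connection_string" {
  value     = azurerm_eventhub_authorization_rule.example.primary_connection_string
  sensitive = true
}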
Terraform apply: once the code ran, the resources and the event hub policies were created successfully in Azure.
For data-plane access control, see the Azure built-in roles for Azure Event Hubs.
Reference: Azurerm-eventhub with Terraform

How to create multiple alerts for single resource in azure using terraform

How do you create multiple alerts for a single resource in Azure using Terraform (i.e. CPU, memory & disk I/O alerts for a VM)?
Please check the below code to create a metric alert for a single resource; a for_each sketch for covering multiple metrics follows the reference link.
provider "azurerm"
features{}
}
resource "azurerm_resource_group" "rgv" {
name = "<resource group name>"
location = "west us"
}
resource "azurerm_monitor_action_group" "agv" {
name = "myactiongroup"
resource_group_name = azurerm_resource_group.rgv.name
short_name = "exampleact"
}
resource "azurerm_monitor_metric_alert" "alert" {
name = "example-metricalert"
resource_group_name = azurerm_resource_group.rgv.name
scopes = ["/subscriptions/1234XXXXXX/resourceGroups/<rg name>/providers/Microsoft.Compute/virtualMachines/<virtualmachine name>"]
description = "description"
target_resource_type = "Microsoft.Compute/virtualMachines"
criteria {
metric_namespace = "Microsoft.Compute/virtualMachines"
metric_name = "Percentage CPU"
aggregation = "Total"
operator = "GreaterThan"
threshold = 80
}
action {
action_group_id = azurerm_monitor_action_group.agv.id
}
}
Reference: hashicorp azurerm_monitor_metric_alert
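To cover the CPU, memory, and disk I/O metrics the question asks about, one option is to drive the same resource with for_each over a map of criteria. This is a sketch under the assumption that the metric names ("Percentage CPU", "Available Memory Bytes", "Disk Read Operations/Sec") and the thresholds below suit your VM; adjust them to the metrics you actually want to alert on:

locals {
  # One entry per alert; keys become part of the alert names.
  vm_alerts = {
    cpu = {
      metric_name = "Percentage CPU"
      aggregation = "Average"
      operator    = "GreaterThan"
      threshold   = 80
    }
    memory = {
      metric_name = "Available Memory Bytes"
      aggregation = "Average"
      operator    = "LessThan"
      threshold   = 1073741824 # alert when less than 1 GiB is available
    }
    disk_read_ops = {
      metric_name = "Disk Read Operations/Sec"
      aggregation = "Average"
      operator    = "GreaterThan"
      threshold   = 500
    }
  }
}

resource "azurerm_monitor_metric_alert" "vm" {
  for_each = local.vm_alerts

  name                 = "example-metricalert-${each.key}"
  resource_group_name  = azurerm_resource_group.rgv.name
  scopes               = ["/subscriptions/1234XXXXXX/resourceGroups/<rg name>/providers/Microsoft.Compute/virtualMachines/<virtualmachine name>"]
  description          = "Alert for ${each.value.metric_name}"
  target_resource_type = "Microsoft.Compute/virtualMachines"

  criteria {
    metric_namespace = "Microsoft.Compute/virtualMachines"
    metric_name      = each.value.metric_name
    aggregation      = each.value.aggregation
    operator         = each.value.operator
    threshold        = each.value.threshold
  }

  action {
    action_group_id = azurerm_monitor_action_group.agv.id
  }
}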

Terraform docker cannot authenticate with container registry for remote host

I am on a Windows machine using Terraform 0.13.4 and trying to spin up some containers on a remote host using Terraform and the Docker provider:
provider "docker" {
host = "tcp://myvm:2376/"
registry_auth {
address = "myregistry:443"
username = "myusername"
password = "mypassword"
}
ca_material = file(pathexpand(".docker/ca.pem"))
cert_material = file(pathexpand(".docker/cert.pem"))
key_material = file(pathexpand(".docker/key.pem"))
}
data "docker_registry_image" "mycontainer" {
name = "myregistry:443/lvl1/lvl2/myimage:latest"
}
I am having a hard time with this as it cannot authenticate with my private registry. Always getting 401 Unauthorized.
If I don't do this to grab the sha256_digest and just use the docker_container resource, everything works but it forces replacements of the running containers.
Hello Angelos, if you don't want to force-replace the running container, you should try this:
provider "docker" {
host = "tcp://myvm:2376/"
registry_auth {
address = "myregistry:443"
username = "myusername"
password = "mypassword"
}
ca_material = file(pathexpand(".docker/ca.pem"))
cert_material = file(pathexpand(".docker/cert.pem"))
key_material = file(pathexpand(".docker/key.pem"))
}
data "docker_registry_image" "mycontainer" {
name = "myregistry:443/lvl1/lvl2/myimage:latest"
}
resource "docker_image" "example" {
name = data.docker_registry_image.mycontainer.name
pull_triggers = [data.docker_registry_image.mycontainer.sha256_digest]
keep_locally = true
}
Then in the container, use:
resource "docker_container" "example" {
image = docker_image.example.latest
name = "container_name"
}
You should use docker_image.example.latest here.
If you reference the docker_image resource itself and the image already exists, Terraform won't pull the image and won't restart the container; but if you pass the image name as a plain string, it will replace the container every time.
https://www.terraform.io/docs/providers/docker/r/container.html
Turns out that the code is correct and that the container service I am using (an older version of ProGet) is not replying correctly to the auth calls. I tested the code against another registry and it all works as expected.

how do I add virtual network to api management with terraform?

How do I add a virtual network to API Management?
https://www.terraform.io/docs/providers/azurerm/r/api_management.html#virtual_network_configuration
A virtual_network_configuration block supports the following:
subnet_id - (Required) The id of the subnet that will be used for the API Management.
Just add the subnet ID as shown in the Terraform docs. Here is example code:
provider "azurerm" {
features {}
}
data "azurerm_subnet" "example" {
name = "default"
virtual_network_name = "vnet-name"
resource_group_name = "group-name"
}
resource "azurerm_api_management" "example" {
name = "example-apim"
location = "East US"
resource_group_name = "group-name"
publisher_name = "My Company"
publisher_email = "company#terraform.io"
sku_name = "Developer_1"
virtual_network_type = "Internal"
virtual_network_configuration {
subnet_id = data.azurerm_subnet.example.id
}
policy {
xml_content = <<XML
<policies>
<inbound />
<backend />
<outbound />
<on-error />
</policies>
XML
}
}
You can change the virtual network type and the other properties as you need. I used an existing VNet here; you can create a new one or reuse an existing one, whichever fits your setup.
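If you would rather have Terraform create the network instead of reading an existing one, a minimal sketch looks like the following; the names, location, and address ranges are placeholders, not values from the original question:

resource "azurerm_virtual_network" "apim" {
  name                = "apim-vnet"
  location            = "East US"
  resource_group_name = "group-name"
  address_space       = ["10.0.0.0/16"]
}

resource "azurerm_subnet" "apim" {
  name                 = "apim-subnet"
  resource_group_name  = "group-name"
  virtual_network_name = azurerm_virtual_network.apim.name
  address_prefixes     = ["10.0.1.0/24"]
}

# Then point the API Management resource at the new subnet instead of the data source:
#   virtual_network_configuration {
#     subnet_id = azurerm_subnet.apim.id
#   }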

Icinga2 client Host culster-zone check command not going down (RED) when lost connection

I have set up a single master with 2 client endpoints in my Icinga2 monitoring system using Director in top-down mode.
I have also set up the 2 client nodes with both accept config and accept commands enabled
(hopefully this means I'm running top-down command endpoint mode).
The service checks (disk/mem/load) for the 3 hosts are returning correct results. But my problem is:
According to the Top Down Command Endpoint example, host icinga2-client1 uses "hostalive" as the host check_command, e.g.:
object Host "icinga2-client1.localdomain" {
check_command = "hostalive" //check is executed on the master
address = "192.168.56.111"
vars.client_endpoint = name //follows the convention that host name == endpoint name
}
But one issue I have is that if the client1 Icinga process is not running, the host status stays GREEN and all of the service statuses (disk/mem/load) stay GREEN as well, because the master is not getting any service check updates and the hostalive check command can still ping the node.
Under the Best Practice - Health Check section, it is mentioned to use the "cluster-zone" check command.
I was expecting that with "cluster-zone" the host status would turn RED when the client node's Icinga process is stopped, but somehow this is not happening.
Does anyone have any idea?
My zone/host/endpoint configurations are as follows:
object Zone "icinga-master" {
endpoints = [ "icinga-master" ]
}
object Host "icinga-master" {
import "Master-Template"
display_name = "icinga-master [192.168.100.71]"
address = "192.168.100.71"
groups = [ "Servers" ]
}
object Endpoint "icinga-master" {
host = "192.168.100.71"
port = "5665"
}
object Zone "rick-tftp" {
parent = "icinga-master"
endpoints = [ "rick-tftp" ]
}
object Endpoint "rick-tftp" {
host = "172.16.181.216"
}
object Host "rick-tftp" {
import "Host-Template"
display_name = "rick-tftp [172.16.181.216]"
address = "172.16.181.216"
groups = [ "Servers" ]
vars.cluster_zone = "icinga-master"
}
object Zone "tftp-server" {
parent = "icinga-master"
endpoints = [ "tftp-server" ]
}
object Endpoint "tftp-server" {
host = "192.168.100.221"
}
object Host "tftp-server" {
import "Host-Template"
display_name = "tftp-server [192.168.100.221]"
address = "192.168.100.221"
groups = [ "Servers" ]
vars.cluster_zone = "icinga-master"
}
template Host "Host-Template" {
import "pnp4nagios-host"
check_command = "cluster-zone"
max_check_attempts = "5"
check_interval = 1m
retry_interval = 30s
enable_notifications = true
enable_active_checks = true
enable_passive_checks = true
enable_event_handler = true
enable_perfdata = true
}
Thanks,
Rick
