When I upgrade my terraform azurerm provider from version 2.71.0 to version 2.88.0, I get extra plan output that I didn't have before. Namely this (x's are mine to obscure private information):
# module.xxxx.data.azurerm_app_service_environment.xxxx will be read during apply
# (config refers to values not yet known)
 <= data "azurerm_app_service_environment" "xxxx" {
      ~ cluster_setting        = [] -> (known after apply)
      ~ front_end_scale_factor = 5 -> (known after apply)
      ~ id                     = "xxxxx" -> (known after apply)
      ~ internal_ip_address    = "xxxxx" -> (known after apply)
      ~ location               = "xxxx" -> (known after apply)
        name                   = "xxxxxx"
      ~ outbound_ip_addresses  = [
          - "xxxxx",
        ] -> (known after apply)
      ~ pricing_tier           = "I2" -> (known after apply)
      ~ service_ip_address     = "xxxxx" -> (known after apply)
      ~ tags                   = {
            xxxxxx
        } -> (known after apply)
        # (1 unchanged attribute hidden)

      + timeouts {
          + read = (known after apply)
        }
Why does a provider version update cause a change in plan output like this? I don't see any enhancements to azurerm_app_service_environment in the azurerm changelogs.
What is the potential impact of this new output when I deploy?
Another new piece of output that I get from the same upgrade is this:
  ~ resource "azurerm_private_dns_a_record" "xxxxxx" {
        id      = "/subscriptions/xxxxxx"
        name    = "xxxxx"
      ~ records = [
          - "xxxx",
        ] -> (known after apply)
        tags    = {}
        # (4 unchanged attributes hidden)
    }
The records entry is populated by data.azurerm_app_service_environment.xxx.internal_ip_address, so it is related to that first bit of new output. That internal_ip_address value has not changed, and none of this output appears if I run under azurerm version 2.71.0.
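This doesn't explain the diff itself, but while investigating a behaviour change like this it helps to pin the provider version so that plans stay comparable between runs. A minimal sketch using the standard required_providers syntax (the constraint value here is simply the pre-upgrade version from the question):

```hcl
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "= 2.71.0" # pin while comparing plan output across versions
    }
  }
}
```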
I have incoming BLE beacon data from a gateway in the following format:
{"msg":"advData","gmac":"94A408B02508","obj":
[
{"type":32,"dmac":"AC233FE0784F","data1":"0201060303F1FF1716E2C56DB5DFFB48D2B060D0F5A71096E000000000C564","rssi":-45,"time":"2022-10-13 02:46:24"},
{"type":32,"dmac":"AC233FE078A1","data1":"0201060303F1FF1716E2C56DB5DFFB48D2B060D0F5A71096E000000000C564","rssi":-42,"time":"2022-10-13 02:46:26"}
]
}
and I want to extract the attributes gmac, dmac, and rssi, process the data1 attribute, and ingest these into InfluxDB via a Telegraf config file.
I can successfully ingest gmac, dmac, and rssi using the Telegraf config below:
## Data format to consume.
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "json_v2"
tagexclude = ["topic"]
[[inputs.mqtt_consumer.json_v2]]
  measurement_name = "a"
  timestamp_path = "obj.#.time"
  timestamp_format = "unix"

  [[inputs.mqtt_consumer.json_v2.tag]]
    path = "gmac"
    rename = "g"

  [[inputs.mqtt_consumer.json_v2.tag]]
    path = "obj.#.dmac"
    rename = "d"

  [[inputs.mqtt_consumer.json_v2.field]]
    path = "obj.#.rssi"
    type = "int"
    rename = "r"
However, I'm not sure how to process the data1 attribute, where I need to (1) extract characters 15 and 16 and convert them from a hexadecimal value to an integer, and (2) extract characters 13 and 14, convert each hexadecimal value to an integer, and combine them into a float (character 13 is the whole-number component, character 14 is the decimal component).
Can anybody provide some guidance here?
Many thanks!
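For what it's worth, the byte-pair extraction described above can be sketched in plain Python. Both the helper name decode_temperature and the slice offsets are assumptions for illustration (each "character pair" in the hex payload is one byte; adjust the offsets to the positions your beacon model actually uses):

```python
# Hedged sketch: decode two hex-character pairs from an advertising payload.
# The default offsets are illustrative assumptions, not the definitive
# positions for any particular beacon.
def decode_temperature(data1, whole_at=26, decimal_at=28):
    whole = int(data1[whole_at:whole_at + 2], 16)        # one byte -> 0..255
    decimal = int(data1[decimal_at:decimal_at + 2], 16)  # second byte
    return whole + decimal / 100.0                       # 0x19, 0x4B -> 25.75

sample = "0201060303F1FF1716E2C56DB5DFFB48D2B060D0F5A71096E000000000C564"
print(decode_temperature(sample))
```

The same pattern works for a single hex byte pair converted to an integer: int(data1[a:a + 2], 16).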
Got it working, thanks to some help over at the Influx Community. I've pasted the relevant section of the Telegraf config file here in case it helps anyone else:
data_format = "json_v2"
tagexclude = ["topic"]
[[inputs.mqtt_consumer.json_v2]]
  measurement_name = "a"
  timestamp_path = "obj.#.time"
  timestamp_format = "unix"

  [[inputs.mqtt_consumer.json_v2.tag]]
    path = "gmac"
    # g is gateway MAC address
    rename = "g"

  [[inputs.mqtt_consumer.json_v2.tag]]
    path = "obj.#.dmac"
    # d is beacon MAC address
    rename = "d"

  [[inputs.mqtt_consumer.json_v2.field]]
    path = "obj.#.rssi"
    type = "int"
    # r is RSSI of beacon
    rename = "r"

  [[inputs.mqtt_consumer.json_v2.field]]
    path = "obj.#.data1"
    data_type = "string"

[[processors.starlark]]
  namepass = ["a"]
  source = '''
def apply(metric):
    data1 = metric.fields.pop("data1")
    tempWhole = int("0x" + data1[26:28], 0)
    tempDecimal = int("0x" + data1[28:30], 0)
    tempDecimal = tempDecimal / 100
    # t is temperature of chip to two decimal points precision
    metric.fields["t"] = tempWhole + tempDecimal
    # b is battery level in mV
    metric.fields["b"] = int("0x" + data1[30:34], 0)
    return metric
'''
I'm trying to create a virtual network in Azure with a NAT gateway, using Terraform.
I'm getting the following warning when I run terraform plan:
Warning: Argument is deprecated
│
│ with azurerm_public_ip.example2,
│ on PublicIPandNAT.tf line 16, in resource "azurerm_public_ip" "example2":
│ 16: zones = ["1"]
│
│ This property has been deprecated in favour of `availability_zone` due to a breaking behavioural change in Azure:
│ https://azure.microsoft.com/en-us/updates/zone-behavior-change/
│
│ (and one more similar warning elsewhere)
╵
Do you want to perform these actions?
But in the azurerm provider documentation on registry.terraform.io, there is no reference to an availability_zone argument in the azurerm_public_ip resource.
Is the Terraform documentation out of date? What is the syntax of the availability_zone argument? And what is the risk of using the zones argument?
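For reference, in the azurerm 2.x releases that emit this warning the replacement is a string-valued availability_zone argument on azurerm_public_ip (the provider later reverted to a zones list in 3.0, which is likely why current registry docs don't mention it). A minimal sketch, assuming the same Standard-SKU public IP as in the warning; the resource and name are illustrative:

```hcl
resource "azurerm_public_ip" "example2" {
  name                = "example-publicIP" # illustrative name
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  allocation_method   = "Static"
  sku                 = "Standard"
  availability_zone   = "1" # was: zones = ["1"]
}
```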
Trying to create a Virtual network in Azure with a NAT gateway, using terraform.
To create a virtual network with a NAT gateway using Terraform, we tried the following code at our end, and it works without any errors.
You can use the code below, substituting your required names:
main.tf:
provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "example" {
  name     = "nat-gateway-example-rg"
  location = "West Europe"
}

resource "azurerm_public_ip" "example" {
  name                = "nat-gateway-publicIP"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  allocation_method   = "Static"
  sku                 = "Standard"
  zones               = ["1"]
}

resource "azurerm_virtual_network" "example" {
  name                = "example-network"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  address_space       = ["10.0.0.0/16"]
  dns_servers         = ["10.0.0.4", "10.0.0.5"]
}

resource "azurerm_public_ip_prefix" "example" {
  name                = "nat-gateway-publicIPPrefix"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  prefix_length       = 30
  zones               = ["1"]
}

resource "azurerm_nat_gateway" "example" {
  name                    = "nat-Gateway"
  location                = azurerm_resource_group.example.location
  resource_group_name     = azurerm_resource_group.example.name
  sku_name                = "Standard"
  idle_timeout_in_minutes = 10
  zones                   = ["1"]
}
what is the syntax of the availability_zone argument? and what is the
risk of using the zones argument?
AFAIK, there is no risk in using availability zones, and you can find the reference in the aforementioned code.
For more information, please refer to the HashiCorp documentation for azurerm_nat_gateway.
We have some Linux clusters (two machines each) running on Azure, and we would like each node of a cluster to be created in a different zone while also using an availability set.
We are trying to create the VM on Azure using Terraform:
resource "azurerm_linux_virtual_machine" "move-az-test" {
  count                           = "1"
  name                            = "move-az-test01"
  location                        = var.azure_location_short
  resource_group_name             = azurerm_resource_group.rg.name
  size                            = "Standard_B1S"
  zone                            = 1
  computer_name                   = "move-az01"
  disable_password_authentication = true
  admin_username                  = var.os_user
  admin_password                  = var.os_password
  availability_set_id             = azurerm_availability_set.avset.id
  network_interface_ids           = [azurerm_network_interface.move-az-nic.id]

  source_image_reference {
    publisher = "OpenLogic"
    offer     = "CentOS"
    sku       = "7.6"
    version   = "latest"
  }

  os_disk {
    name                 = "move-az-test0_OsDisk"
    caching              = "ReadWrite"
    disk_size_gb         = "128"
    storage_account_type = "Standard_LRS"
  }
}
But we get this error message: Error: "zone": conflicts with availability_set_id
The short answer is that an availability set and an availability zone can't be used at the same time. It's worth reading more deeply into both: the former is a logical grouping of VMs, while the latter improves availability by spreading VMs across physically separate locations within a region.
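In azurerm_linux_virtual_machine terms, that means choosing one placement mechanism or the other, never both. A minimal sketch (resource names are illustrative, and all unrelated attributes are elided):

```hcl
# Option 1: zonal placement -- omit availability_set_id
resource "azurerm_linux_virtual_machine" "zonal" {
  # ... name, size, image, NIC, etc. ...
  zone = "1"
}

# Option 2: availability set -- omit zone
resource "azurerm_linux_virtual_machine" "in_avset" {
  # ... name, size, image, NIC, etc. ...
  availability_set_id = azurerm_availability_set.avset.id
}
```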
Hi, I'm running Terraform:
Terraform v0.13.4
provider registry.terraform.io/hashicorp/azurerm v2.41.0
I'm trying to set up Azure metric monitoring for a VM:
resource "azurerm_scheduled_query_rules_log" "scheduled_rules" {
  for_each            = local.alert_rules
  name                = "${var.client_initial}-${each.key}"
  location            = var.resource_group_name.location
  resource_group_name = var.resource_group_name

  criteria {
    metric_name = each.value.metric_name

    dimension {
      name     = "Computer"
      operator = "Include"
      values   = var.virtual_machines
    }
  }

  data_source_id = var.log_analytics_workspace_ID
  description    = each.value.description
  enabled        = true
}
However, when I run plan, it tells me:
53: resource "azurerm_scheduled_query_rules_log" "scheduled_rules" {
The provider provider.azurerm does not support resource type
"azurerm_scheduled_query_rules_log".
I see this new resource was introduced in azurerm 2.1, so I'm not sure why it's not available in 2.41.0.
I also faced the same error. It should be the resource azurerm_monitor_scheduled_query_rules_log instead of azurerm_scheduled_query_rules_log. There may be a mistake or an outdated snippet in the Terraform documentation's Example Usage.
Here is a working example with Terraform v0.14.3 + azurerm v2.41.0
# Example: LogToMetric Action for the named Computer
resource "azurerm_monitor_scheduled_query_rules_log" "example" {
  name                = format("%s-queryrule", "some")
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name

  criteria {
    metric_name = "Average_% Idle Time"

    dimension {
      name     = "Computer"
      operator = "Include"
      values   = ["targetVM"]
    }
  }

  data_source_id = azurerm_log_analytics_workspace.example.id
  description    = "Scheduled query rule LogToMetric example"
  enabled        = true
}
When updating Neo4j and py2neo to the latest versions (2.2.3 and 2.0.7 respectively), I'm running into problems with some import scripts.
For instance, here is a bit of the code:
graph = py2neo.Graph()
graph.bind("http://localhost:7474/db/data/")
batch = py2neo.batch.PushBatch(graph)
pp.pprint(batch)
relationshipmap={}
def create_go_term(line):
    if line[6] == '1':
        relationshipmap[line[0]] = line[1]
    goid = line[0]
    goacc = line[3]
    gotype = line[2]
    goname = line[1]
    term = py2neo.Node.cast({
        "id": goid, "acc": goacc, "term_type": gotype, "name": goname
    })
    term.labels.add("GO_TERM")
    pp.pprint(term)
    term.push()
    #batch.append(term)
    return True
logging.info('creating terms')
reader = csv.reader(open(opts.termfile), delimiter="\t")
iter = 0
for row in reader:
    create_go_term(row)
    iter = iter + 1
    if iter > 5000:
        # batch.push()
        iter = 0
# batch.push()
Whether I use the batch or simply push without it, I get this error:
py2neo.error.BindError: Local entity is not bound to a remote entity
What am I doing wrong?
Thanks!
I think you first have to create the node before you can add the label and use push:
term = py2neo.Node.cast({
    "id": goid, "acc": goacc, "term_type": gotype, "name": goname
})
graph.create(term)  # now the node should be bound to a remote entity
term.labels.add("GO_TERM")
term.push()
Alternatively, you can create the node with a label:
term = Node("GO_TERM", id=goid, acc=goacc, ...)
graph.create(term)
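As a side note on the 5000-row batching in the original import loop: a small helper, independent of py2neo, makes the create-in-chunks pattern explicit (chunked is a name introduced here, not part of any library):

```python
def chunked(rows, size):
    """Yield successive lists of at most `size` items."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == size:
            yield batch      # full chunk
            batch = []
    if batch:
        yield batch          # final partial chunk
```

Each chunk could then be created in one call, e.g. graph.create(*nodes_for(batch)), which binds every local node to a remote entity before any label or push operation.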