azure terraform: What is the difference between subnet_id = var.XXXXXXX versus subnet_id = "${var.XXXXXXX}"? - terraform-provider-azure

Unclear on the difference between parameter forms (i.e. for subnet_id) such as the following: subnet_id = var.XXXXXXXXX, resource_group_name = "${resource_created.XXXXXXXX.name}", subnet_id = "${var.XXXXXXXXX}", subnet_id = "${azurerm_subnet.XXXXXXXXXXXXX.id}". When is ".id" required at the end of a resource output? For example, the resource group reference has ".name" while the IP configuration's subnet_id has ".id" at the end: ${azurerm_subnet.XXXXXXXXXXXXX.id}.
Is there a single link for all the possible formats such as .name, .location, .id, ...?
I have browsed everywhere. Sorry for the basic question, but I am very new to Azure Terraform.

You can look up all the resource references and outputs using the API reference: https://www.terraform.io/docs/providers/azurerm/index.html

This is a pretty dated question, but since there is no satisfactory answer to a commonly asked question, this might be helpful for someone stumbling across it.
The difference is in both usage and style.
The interpolation syntax with the $ is the old style, and the usage with the var keyword is the new style. Using the old style may throw a warning; rather than changing the syntax manually, terraform fmt can also resolve the style differences.
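A minimal illustration of the two styles (the resource and attribute names below are placeholders; which attribute you reference, such as .id or .name, depends on what the argument expects, and each resource's exported attributes are listed on its provider documentation page):

# Old style (Terraform 0.11 and earlier): every reference wrapped in "${...}"
subnet_id           = "${azurerm_subnet.example.id}"
resource_group_name = "${azurerm_resource_group.example.name}"

# New style (Terraform 0.12+): bare expressions, no interpolation wrapper
subnet_id           = azurerm_subnet.example.id
resource_group_name = azurerm_resource_group.example.name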

Related

Problem finding metrics in the new Google My Business APIs

I am having some issues migrating my Google My Business API Python code.
My original code looks like:
Get my accounts:
service, flags = sample_tools.init(argv, "mybusiness", "v4", __doc__, __file__,
scope="https://www.googleapis.com/auth/business.manage",
discovery_filename=cfg.discovery_doc_old)
output = service.accounts().list().execute()
accounts = output["accounts"]
Get my location names per account with the following call:
self.service.accounts().locations().list(parent=account['name']).execute()
For each location I get my insights report with the following call:
service.accounts().locations().reportInsights(name=self.account, body=body).execute()
Now, since these calls are going to be deprecated, I need to update this code to the new Business APIs. So far I have managed to reproduce steps 1 & 2 of my old code:
Get my accounts (using the My Business Account Management API):
service, flags = sample_tools.init(argv, "mybusinessaccountmanagement", "v1", __doc__, __file__,
scope="https://www.googleapis.com/auth/business.manage",
discovery_filename=cfg.discovery_doc_new)
output = self.service.accounts().list().execute()
accounts = output["accounts"]
Get my locations (using the My Business Business Information API):
service, flags = sample_tools.init(argv, "mybusinessbusinessinformation", "v1", __doc__, __file__,
scope="https://www.googleapis.com/auth/business.manage",
discovery_filename=cfg.discovery_doc_gmb_info)
output = service.accounts().locations().list(parent=self.accounts[0]['name'],
readMask='name',
).execute()
locations = output['locations']
Now I am missing the equivalent to the old
reportInsights(name=self.account, body=body).execute()
I haven't found anything similar anywhere. I thought maybe I needed to add it as the readMask, but I also couldn't find any documentation. I basically want to get the values of these metrics for each location using one of the new APIs:
https://developers.google.com/my-business/reference/rest/v4/Metric
I already went through this tutorial, even though I prefer using the client libraries:
https://developers.google.com/my-business/content/basic-setup
but it doesn't tell me where to find these metrics.
I have also tried the same structure as in the old API but I get the error message:
AttributeError: 'Resource' object has no attribute 'reportInsights'
Can someone help me with this? I am quite new to the Google APIs and maybe there is something obvious I am missing :/
Thanks a lot,
Rafael
As @devinthemaking has stated correctly, the reportInsights method is not deprecated and does not have a successor method yet.
The list of deprecated methods can be found here: Deprecation schedule

How to query sibling rules from a Bazel rule

I would like to be able to do the following in a Bazel BUILD file:
alpha(
    name = "hello world",
    color = "blue",
)
beta(
    name = "hello again",
)
Where alpha and beta are custom rules. I want beta to be able to access the color attribute of the alpha rule, without adding a label attribute. In Bazel query, I can do something like this:
bazel query 'kind(beta, siblings(kind(alpha, //...)))'
which gives me the beta that sits side by side with the alpha. Can I achieve the same somehow from within the implementation function of the beta rule?
def _beta_rule_impl(ctx):
    # This does not exist, I wish it did: ctx.siblings(kind = 'alpha')
I've seen this done with a label, like this:
beta(
    name = "hello again",
    alpha_link = ":hello world",  # explicitly linking
)
but I find this a bit verbose, especially since there is sibling query support.
The way the question is formulated, the answer is no; it is not possible.
Bazel's design philosophy is to be explicit about target dependencies. The providers mechanism is meant to give access to dependency-graph information during the analysis phase.
It is difficult to tell what the actual use case is. Using aspects might be the answer.
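As a sketch of the providers approach (the rule and attribute names here are illustrative, not from the question; note the dependency is still declared explicitly via a label):

# defs.bzl
ColorInfo = provider(fields = ["color"])

def _alpha_impl(ctx):
    # Expose alpha's color attribute to rules that depend on it.
    return [ColorInfo(color = ctx.attr.color)]

alpha = rule(
    implementation = _alpha_impl,
    attrs = {"color": attr.string()},
)

def _beta_impl(ctx):
    # Read the provider from the explicitly declared dependency.
    color = ctx.attr.alpha_dep[ColorInfo].color
    print("alpha's color is", color)
    return []

beta = rule(
    implementation = _beta_impl,
    attrs = {"alpha_dep": attr.label(providers = [ColorInfo])},
)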
In my scenario, I'm trying to get a genrule to call a test rule before proceeding:
genrule(
    name = "generate_buf_image",
    srcs = [":protos", "cookie"],
    outs = ["buf-image.json"],
    cmd = "$(location //third_party/buf:cas_buf_image) //example-grpc/proto/v1:proto_backwards_compatibility_check $(SRCS) >$(OUTS)",
    tools = [
        "//third_party/buf:cas_buf_image",
        "@buf",
    ],
)
If cas_buf_image.sh has ls -l "example-grpc/proto/v1" >&2, it shows:
… cookie -> …/example-grpc/proto/v1/cookie
… example.proto -> …/example-grpc/proto/v1/example.proto
IOW, examining what example-grpc/proto/v1/cookie is linked to, cding to its directory, and then performing the git commands should work.

terraform plan returns the Error: Unsupported argument

I have the following three files as below:
main.tf, variables.tf and dev.auto.tfvars
Snippet from main.tf
module "sql_vms" {
source = "git::git#github.com:xxxxxxxxxxxx/terraform-modules//azure/"
rg_name = var.resource_group_name
location = module.resource_group.external_rg_location
vnet_name = var.virtual_network_name
subnet_name = var.sql_subnet_name
app_nsg = var.application_nsg
vm_count = var.count_vm
base_hostname = var.sql_host_basename
sto_acc_suffix = var.storage_account_suffix
vm_size = var.virtual_machine_size
vm_publisher = var.virtual_machine_image_publisher
vm_offer = var.virtual_machine_image_offer
vm_sku = var.virtual_machine_image_sku
vm_img_version = var.virtual_machine_image_version
username = var.username
password = var.password
}
Snippet from variables.tf
variable "app_subnet_name" {
type = string
}
variable "sql_subnet_name" {
type = string
}
Snippet from dev.auto.tfvars
app_subnet_name = "subnet_1"
sql_subnet_name = "subnet_2"
application_nsg = "test_nsg"
However, I'm getting the errors below:
Error: Unsupported argument
on main.tf line 7, in module "sql_vms":
7: subnet_name = var.sql_subnet_name
An argument named "subnet_name" is not expected here.
Error: Unsupported argument
on main.tf line 8, in module "sql_vms":
8: app_nsg = var.application_nsg
An argument named "app_nsg" is not expected here.
My modules directory structure looks like below
$ ls -R terraform-modules/
terraform-modules/:
aws azure gcp
terraform-modules/aws:
alb ec2-instance-rhel
terraform-modules/aws/alb:
terraform-modules/aws/ec2-instance-rhel:
main.tf
terraform-modules/azure:
compute resourcegroup sqlserver
terraform-modules/azure/compute:
main.tf README.md variable.tf
terraform-modules/azure/resourcegroup:
data.tf outputs.tf variables.tf
terraform-modules/azure/sqlserver:
main.tf README.md variables.tf
terraform-modules/gcp:
compute
terraform-modules/gcp/compute:
main.tf
Any idea what is going wrong here?
If you are starting out with Terraform, you will get that error message ("An argument named "example" is not expected here") if your module arguments refer to the resource properties rather than to the variable names; see below for an example.
Example of a Terraform module "example_mod.tf" you want to call from your module:
variable "sg_name" { } # Usually in a separate file
variable "sg_desc" { } # called variables.tf
resource "example_resource" "example_name" {
name = var.sg_name
description = var.sg_desc
...
}
CORRECT WAY:
module "my_module" {
source = "./modules/example_mod.tf"
sg_name = "whatever" # NOTE the left hand side "sg_name" is the variable name
sg_desc = "whatever"
...
}
INCORRECT WAY (gives the error "An argument named "name" is not expected here"):
module "my_module" {
source = "./modules/example_mod.tf"
name = "whatever" # WRONG because the left hand side "name" is a resource property
description = "whatever" # WRONG for the same reason
...
}
I think the issue is that you do not refer to the exact module in the source. I see you have three modules under the source:
source = "git::git@github.com:xxxxxxxxxxxx/terraform-modules//azure/"
They are compute, resourcegroup and sqlserver, but you are trying to load them as one module, so Terraform cannot find the related variables for the module. I also don't think loading all the modules like that is the right way. I would recommend you load the modules one by one, like below:
module "compute" {
source = "git::git#github.com:xxxxxxxxxxxx/terraform-modules//azure/compute"
...
}
module "resourcegroup" {
source = "git::git#github.com:xxxxxxxxxxxx/terraform-modules//azure/resourcegroup"
...
}
module "sqlserver" {
source = "git::git#github.com:xxxxxxxxxxxx/terraform-modules//azure/sqlserver"
...
}
Without knowing the details of the module, it is usually hard to say what the reason for an error is, but in this particular case it seems that the module you're importing has no requirement for those two arguments (subnet_name and app_nsg), or rather that you are using a version of the module that doesn't require them to be present. What helps with that type of error is to check whether there is a version of the module that does have such a requirement. The syntax for using a particular module version from GitHub is explained in the Terraform Module Sources documentation, Selecting a Revision section:
module "vpc" {
source = "git::https://example.com/vpc.git?ref=v1.2.0"
}
You are probably using SSH to fetch the module, so the recommended way to do that is:
When using Git over SSH, we recommend using the ssh://-prefixed URL form for consistency with all of the other URL-like git address forms.
In your example, this translates to:
module "sql_vms" {
source = "git::ssh://git#github.com/org/terraform-modules-repo.git//azure/module-name?ref=v1.2.0"
where org is your organisation's (or your private) Github account, terraform-modules-repo is the repo where modules reside, module-name is the module you are using and ref=v1.2.0 represents the module revision number.
The error An argument named "example" is not expected here. means that the module doesn't expect to see an input argument with that name. Think about Terraform modules as functions in a programming language: in order to have a function produce a result, you pass it a set of required arguments. If you provide more (or fewer) input arguments than the function requires, you will get an error. (There are special cases, but they are out of the scope of this question.)
Another similarity between modules and functions is that Terraform modules can also provide output values, besides creating the resources that are specified. That can be handy in cases where an output can be used as input in other modules or resources. The expression module.resource_group.external_rg_location is doing exactly that: getting the output value from another module and using it to assign a value to the location argument.
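For illustration, a minimal sketch of that output wiring (the resource name inside the module is assumed, not taken from the question):

# Inside the resourcegroup module, e.g. in outputs.tf:
output "external_rg_location" {
  value = azurerm_resource_group.rg.location  # "rg" is a hypothetical resource name
}

# In the root module, the output feeds another module's input:
module "sql_vms" {
  source   = "git::git@github.com:xxxxxxxxxxxx/terraform-modules//azure/compute"
  location = module.resource_group.external_rg_location
}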
I had a similar issue when working with AWS EventBridge and Terraform.
When I run terraform plan I get the error below:
Error: Unsupported argument
│
│ on ../../modules/aws/eventbridge/main.tf line 37, in resource "aws_cloudwatch_event_target" "ecs_cloudwatch_event_target":
│ 37: maximum_age_in_seconds = var.maximum_age_in_seconds
│
│ An argument named "maximum_age_in_seconds" is not expected here.
Here's how I solved it:
The issue was that I was not using the correct attribute for the AWS EventBridge resource block.
The attribute should have been maximum_event_age_in_seconds and not maximum_age_in_seconds.
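For reference, a minimal sketch of the corrected target (the rule and target names here are placeholders; in the AWS provider this attribute lives in the target's retry_policy block):

resource "aws_cloudwatch_event_target" "ecs_cloudwatch_event_target" {
  rule = aws_cloudwatch_event_rule.example.name  # hypothetical rule
  arn  = aws_ecs_cluster.example.arn             # hypothetical target

  retry_policy {
    maximum_event_age_in_seconds = var.maximum_event_age_in_seconds
    maximum_retry_attempts       = 2
  }
}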
Another issue that could cause this is not defining a variable in your Terraform script that is already defined in a module.
That's all
It could be happening due to plenty of reasons.
I'd suggest some verification:
Check if you are using the correct source URL, path or revision branch/tag.
I'm not sure about your implementation, but you probably want to double-check that the revision you are referencing contains these variable declarations.
GitHub module addressing allows a ref argument.
Refer to the GitHub Module Addressing for Terraform documentation for how to specify a revision.
Check if all necessary variables are declared in every module, including the root module.
Did you declare those variables both in a variables.tf file in your root directory and in the module context/path?
I know that's exhausting and repetitive, but every module should be designed as an "independent project". Each module MUST have its own declared variables.tf, which works as the inputs for that module, and it is also desirable that it has its own mapped outputs.tf, provider.tf, backend.tf, etc., though these last ones are not required.
FYI: Doing so, you guarantee scalability and reusability, as well as the reliability to work with different tfstate files and even different repositories for each module, in order to guarantee atomicity and minimum permissions, hence preventing your infrastructure from being destroyed by undesired code changes.
I highly recommend this read to understand the importance of independent modularization design.
Furthermore, tools like Terragrunt and Terratest can make this job less painful by keeping your code DRY (Don't Repeat Yourself).
Check if the type constraints of the related variables match.
If that's not your case, check whether the type constraints match between all declarations of the variables used both as arguments (in your root variables.tf) and as inputs (in your module-level variables.tf); see the sketch after this list.
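To make the variable-declaration and type-constraint checks concrete, a minimal sketch (the paths and names here are assumed, not from the question):

# modules/sql_vms/variables.tf -- the module's own input
variable "subnet_name" {
  type = string
}

# variables.tf in the root directory -- same type constraint
variable "sql_subnet_name" {
  type = string
}

# main.tf in the root directory -- wiring the root variable into the module
module "sql_vms" {
  source      = "./modules/sql_vms"
  subnet_name = var.sql_subnet_name
}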
I'll share my pain as well.
I was writing a block configuration like this:
vpc_config = {
  subnet_ids = [aws_subnet.example1.id, aws_subnet.example2.id]
}
instead of this (notice the missing = sign):
vpc_config {
  subnet_ids = [aws_subnet.example1.id, aws_subnet.example2.id]
}
That gives the error An argument named "vpc_config" is not expected here and will waste you a few good hours.

GoogleCloudOptions doesn't have all options that <pipeline>.options has

So my Beam job today ended up with this warning:
/usr/local/lib/python2.7/dist-packages/apache_beam/runners/dataflow/dataflow_runner.py:800: BeamDeprecationWarning: options is deprecated since First stable release. References to .options will not be supported
So, as I understand it, instead of doing this:
self.options = {
    'project': self.project_name,
    'job_name': self.job_name,
}
I will have to move to this:
self.options = PipelineOptions()
google_cloud_options = self.options.view_as(GoogleCloudOptions)
google_cloud_options.project = self.project_name
google_cloud_options.job_name = self.job_name
But there is a problem: a lot of options are not available any more, e.g. the maximum number of workers, the setup file location, ...
I went through its documentation again but couldn't find the replacements for those missing fields.
If I just add one of the previously registered labels to the new GoogleCloudOptions, it complains:
AttributeError: 'GoogleCloudOptions' object has no attribute 'max_num_workers'
So does anyone know what the replacements for those fields are?
Thank you.
It seems that some of the options have been moved to WorkerOptions in the same module of the Apache Beam SDK library.
Comment in the WorkerOptions class:
Command line options controlling the worker pool configuration.
It includes num_workers, max_num_workers, worker_machine_type, and a few more that I believe were in GoogleCloudOptions before.
See this link for the module's source as of v2.12: https://beam.apache.org/releases/pydoc/2.12.0/_modules/apache_beam/options/pipeline_options.html#WorkerOptions
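For example, a minimal sketch of setting those moved options (the values are placeholders; note the setup file location lives in SetupOptions rather than WorkerOptions):

from apache_beam.options.pipeline_options import (
    GoogleCloudOptions,
    PipelineOptions,
    SetupOptions,
    WorkerOptions,
)

options = PipelineOptions()
options.view_as(GoogleCloudOptions).project = 'my-project'
options.view_as(GoogleCloudOptions).job_name = 'my-job'

# Worker pool configuration moved to WorkerOptions
worker_options = options.view_as(WorkerOptions)
worker_options.num_workers = 2
worker_options.max_num_workers = 10
worker_options.machine_type = 'n1-standard-4'

# The setup file location is on SetupOptions
options.view_as(SetupOptions).setup_file = './setup.py'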

Listing images other than Windows from Amazon using fog

I have been using fog for one of my projects. I have used describe_images with filter parameters, but now I am getting only the Windows images. Is there any way to get the other AMIs by changing the platform parameter? For example, 'platform' => 'linux' or something like that.
spec_images = @conn.describe_images('Owner' => 'amazon', 'platform' => 'windows')
my_images = spec_images.body["imagesSet"]
# List image ID, architecture and location
for key in 0...my_images.length
  print my_images[key]["imageId"], "\t", my_images[key]["architecture"], "\t\t",
        my_images[key]["imageLocation"], "\n"
end
According to the API documentation for the DescribeInstances call...
Use windows if you have Windows based instances; otherwise, leave blank.
So "windows" is the only valid value for that filter, presently, and according to the AWS developer forums, there isn't currently a way to filter for non-Windows instances:
It appears there is no way currently to filter for linux instances using ec2-describe-instances. This is expected behavior and no easy workaround at this time. We will be updating our documentation to reflect this. I apologize for the inconvenience.
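As a possible client-side workaround (an untested sketch; it relies on the imagesSet entries only carrying a platform field for Windows AMIs):

# Fetch all Amazon-owned images and drop the Windows ones locally
spec_images = @conn.describe_images('Owner' => 'amazon')
non_windows_images = spec_images.body["imagesSet"].reject do |image|
  image["platform"] == "windows"
end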
