terraform plan returns the Error: Unsupported argument - terraform-provider-azure

I have the following three files: main.tf, variables.tf and dev.auto.tfvars.
Snippet from main.tf
module "sql_vms" {
source = "git::git#github.com:xxxxxxxxxxxx/terraform-modules//azure/"
rg_name = var.resource_group_name
location = module.resource_group.external_rg_location
vnet_name = var.virtual_network_name
subnet_name = var.sql_subnet_name
app_nsg = var.application_nsg
vm_count = var.count_vm
base_hostname = var.sql_host_basename
sto_acc_suffix = var.storage_account_suffix
vm_size = var.virtual_machine_size
vm_publisher = var.virtual_machine_image_publisher
vm_offer = var.virtual_machine_image_offer
vm_sku = var.virtual_machine_image_sku
vm_img_version = var.virtual_machine_image_version
username = var.username
password = var.password
}
Snippet from variables.tf
variable "app_subnet_name" {
type = string
}
variable "sql_subnet_name" {
type = string
}
Snippet from dev.auto.tfvars
app_subnet_name = "subnet_1"
sql_subnet_name = "subnet_2"
application_nsg = "test_nsg"
However, I'm getting errors like the ones below:
Error: Unsupported argument
on main.tf line 7, in module "sql_vms":
7: subnet_name = var.sql_subnet_name
An argument named "subnet_name" is not expected here.
Error: Unsupported argument
on main.tf line 8, in module "sql_vms":
8: app_nsg = var.application_nsg
An argument named "app_nsg" is not expected here.
My modules directory structure looks like below
$ ls -R terraform-modules/
terraform-modules/:
aws azure gcp
terraform-modules/aws:
alb ec2-instance-rhel
terraform-modules/aws/alb:
terraform-modules/aws/ec2-instance-rhel:
main.tf
terraform-modules/azure:
compute resourcegroup sqlserver
terraform-modules/azure/compute:
main.tf README.md variable.tf
terraform-modules/azure/resourcegroup:
data.tf outputs.tf variables.tf
terraform-modules/azure/sqlserver:
main.tf README.md variables.tf
terraform-modules/gcp:
compute
terraform-modules/gcp/compute:
main.tf
Any idea what is going wrong here?

If you are starting out with Terraform, you will get that error message ("An argument named "example" is not expected here") if the argument names in your module block refer to resource properties rather than to the module's variable names; see below for an example:
Example of a Terraform module (a directory "./modules/example_mod" containing, e.g., "example_mod.tf") that you want to call from your root module:
variable "sg_name" { } # Usually in a separate file
variable "sg_desc" { } # called variables.tf
resource "example_resource" "example_name" {
name = var.sg_name
description = var.sg_desc
...
}
CORRECT WAY:
module "my_module" {
source = "./modules/example_mod.tf"
sg_name = "whatever" # NOTE the left hand side "sg_name" is the variable name
sg_desc = "whatever"
...
}
INCORRECT WAY: (Gives the error "An argument named "name" is not expected here" )
module "my_module" {
source = "./modules/example_mod.tf"
name = "whatever" # WRONG because the left hand side "name" is a resource property
description = "whatever" # WRONG for the same reason
...
}

I think the issue is that your source does not point to an exact module. I see three modules under the source you use:
source = "git::git@github.com:xxxxxxxxxxxx/terraform-modules//azure/"
They are compute, resourcegroup and sqlserver, but you are trying to load them all through a single module block, so Terraform cannot find the variables that belong to the module. Loading all the modules like that isn't the right approach anyway; I would recommend loading the modules one by one, like below:
module "compute" {
source = "git::git#github.com:xxxxxxxxxxxx/terraform-modules//azure/compute"
...
}
module "resourcegroup" {
source = "git::git#github.com:xxxxxxxxxxxx/terraform-modules//azure/resourcegroup"
...
}
module "sqlserver" {
source = "git::git#github.com:xxxxxxxxxxxx/terraform-modules//azure/sqlserver"
...
}

Without knowing the details of the module it is usually hard to say what causes an error, but in this particular case it seems the module you're importing doesn't declare those two input variables (subnet_name and app_nsg), or rather that you are using a version of the module that doesn't declare them. What helps with that type of error is to check whether there is a version of the module that does declare them. The syntax for using a particular module version from GitHub is explained in the Terraform Module Sources documentation, Selecting a Revision section:
module "vpc" {
source = "git::https://example.com/vpc.git?ref=v1.2.0"
}
You are probably using SSH to fetch the module, so the recommended way to do that is:
When using Git over SSH, we recommend using the ssh://-prefixed URL form for consistency with all of the other URL-like git address forms.
In your example, this translates to:
module "sql_vms" {
source = "git::ssh://git#github.com/org/terraform-modules-repo.git//azure/module-name?ref=v1.2.0"
where org is your organisation's (or your private) Github account, terraform-modules-repo is the repo where modules reside, module-name is the module you are using and ref=v1.2.0 represents the module revision number.
The error An argument named "example" is not expected here. means that the module doesn't expect to see an input argument with that name. Think of Terraform modules as functions in a programming language: in order to have a function provide a result, you pass it a set of required arguments. If you provide more (or fewer) input arguments than the function requires, you will get an error. (There are special cases, but those are out of the scope of this question.)
Another similarity between modules and functions is that Terraform modules can also provide output values, besides creating the resources that are specified. That can be handy when an output can be used as input in other modules or resources. The line module.resource_group.external_rg_location is doing exactly that: getting the output value from another module and using it to assign a value to the location argument.
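For illustration, here is a minimal hedged sketch of that pattern; the local module paths and the azurerm data source name are assumptions, not taken from the question:
# modules/resourcegroup/outputs.tf (hypothetical path)
output "external_rg_location" {
  # expose the location of an existing resource group looked up via a data source
  value = data.azurerm_resource_group.external.location
}

# root main.tf
module "resource_group" {
  source = "./modules/resourcegroup"
}

module "sql_vms" {
  source   = "./modules/sqlserver"
  # the output of one module becomes the input of another
  location = module.resource_group.external_rg_location
}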

I had a similar issue when working with AWS EventBridge and Terraform.
When I run terraform plan I get the error below:
Error: Unsupported argument
│
│ on ../../modules/aws/eventbridge/main.tf line 37, in resource "aws_cloudwatch_event_target" "ecs_cloudwatch_event_target":
│ 37: maximum_age_in_seconds = var.maximum_age_in_seconds
│
│ An argument named "maximum_age_in_seconds" is not expected here.
Here's how I solved it:
The issue was that I was not using the correct attribute for the AWS EventBridge resource block.
The attribute should have been maximum_event_age_in_seconds and not maximum_age_in_seconds.
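If it helps, here is a hedged sketch with the corrected attribute name; per the AWS provider documentation it lives in the retry_policy block of aws_cloudwatch_event_target, and the rule/arn references below are placeholders, not from the original code:
resource "aws_cloudwatch_event_target" "ecs_cloudwatch_event_target" {
  rule = aws_cloudwatch_event_rule.example.name  # placeholder reference
  arn  = aws_ecs_cluster.example.arn             # placeholder reference

  retry_policy {
    maximum_event_age_in_seconds = var.maximum_event_age_in_seconds  # not maximum_age_in_seconds
    maximum_retry_attempts       = 2
  }
}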
Another issue that could cause this is not defining a variable in your Terraform script that is already defined in a module.
That's all

It could be happening for plenty of reasons.
I'd suggest a few checks:
Check if you are using the correct source URL, path or revision branch/tag.
I'm not sure about your implementation, but you probably want to double-check that the revision you are referencing contains these variable declarations.
GitHub module addressing allows a ref argument.
Refer to the GitHub Module Addressing for Terraform documentation for how to specify a revision.
Check if all necessary variables are declared in every module, including the root module.
Did you declare those variables both in a variables.tf file in your root directory and in the module's context/path?
I know that's exhausting and repetitive, but every module should be designed as an "independent project". Each module MUST have its own declared variables.tf, which works as the set of inputs for that module, and it is also desirable that it has its own outputs.tf, provider.tf, backend.tf, etc., though these last ones are not required.
FYI: Doing so guarantees scalability and reusability, and lets you work reliably with different tfstate files and even different repositories for each module, in order to guarantee atomicity and minimum permissions, hence preventing your infrastructure from being destroyed by undesired code changes.
I highly recommend this read to understand the importance of independent modularization design.
Furthermore, tools like Terragrunt and Terratest can make this job less painful by keeping your code DRY (Don't Repeat Yourself).
Check if the type constraints of the related variables match.
If that's not your case, check whether the type constraints match across all declarations of the variables used both as arguments (in your root variables.tf) and as inputs (in the module-level variables.tf), as in the sketch below.
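A minimal hedged sketch of the two-level declaration, using names from the question; the ssh URL, org and ref value are assumptions:
# root variables.tf
variable "sql_subnet_name" {
  type = string
}

# terraform-modules/azure/sqlserver/variables.tf (module-level input)
variable "subnet_name" {
  type = string
}

# root main.tf
module "sql_vms" {
  source      = "git::ssh://git@github.com/org/terraform-modules.git//azure/sqlserver?ref=v1.2.0"
  subnet_name = var.sql_subnet_name  # left-hand side matches the module's variable name
}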

I'll share my pain as well.
Writing a block configuration like this:
vpc_config = {
  subnet_ids = [aws_subnet.example1.id, aws_subnet.example2.id]
}
instead of like this (notice there is no equals sign, because vpc_config is a block, not an argument):
vpc_config {
  subnet_ids = [aws_subnet.example1.id, aws_subnet.example2.id]
}
will give you the error An argument named "vpc_config" is not expected here and can waste a few good hours.
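For context, here is a hedged sketch of where this typically bites, assuming an aws_lambda_function resource (all names here are placeholders, not from the answer above):
resource "aws_lambda_function" "example" {
  function_name = "example"
  role          = aws_iam_role.example.arn
  ...

  # vpc_config is a nested block, so no equals sign
  vpc_config {
    subnet_ids         = [aws_subnet.example1.id, aws_subnet.example2.id]
    security_group_ids = [aws_security_group.example.id]
  }
}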

Related

Terraform for_each issue with data type

I have the following code to attach a snapshot policy to the existing disks of a particular instance:
data "alicloud_ecs_disks" "db_disks" {
instance_id = alicloud_instance.db.id
}
resource "alicloud_ecs_auto_snapshot_policy" "common" {
...
}
resource "alicloud_ecs_auto_snapshot_policy_attachment" "db" {
for_each = { for disk in data.alicloud_ecs_disks.db_disks.disks : disk.disk_id => disk }
auto_snapshot_policy_id = alicloud_ecs_auto_snapshot_policy.common.id
disk_id = each.key
}
When I run plan it works well, but after applying, the next plan fails with this error:
data.alicloud_ecs_disks.db_disks.disks is a list of object, known only after apply
│
│ The "for_each" value depends on resource attributes that cannot be
│ determined until apply, so Terraform cannot predict how many instances will
│ be created. To work around this, use the -target argument to first apply
│ only the resources that the for_each depends on.
What is the best option to work around this? Plan works on some machines and sometimes not. Thanks
The reason for your error is that data.alicloud_ecs_disks.db_disks references alicloud_instance.db, which is presumably created in the same configuration. As the error message says, you can't use data that is only known after apply in for_each.
The solution is to use the -target option to deploy your alicloud_instance.db first, and once it has been deployed you can proceed with the rest of your infrastructure.
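For example, a hedged sketch of that two-step workflow, using the resource address from the question:
# step 1: create only the instance (and its dependencies) so the disk data source becomes known
terraform apply -target=alicloud_instance.db
# step 2: apply the rest; the for_each keys can now be computed at plan time
terraform apply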
If you don't want to split your deployment, then you have to re-architect your Terraform configuration so that the for_each does not depend on any dynamic content.
The problem is that during the first apply the snapshot policy was attached to the system disk. That forces re-creation of the whole instance on the next plan, and instance.id is not known until apply.

Why is this route test failing?

I've been following along with the testdriven.io tutorial for setting up a FastAPI with Docker. The first test I've written using PyTest errored out with the following message:
TypeError: Settings(environment='dev', testing=True, database_url=AnyUrl('postgres://postgres:postgres@web-db:5432/web_test', scheme='postgres', user='*****', password='*****', host='web-db', host_type='int_domain', port='5432', path='/web_test')) is not a callable object.
Looking at the error message, you'll notice that the Settings object has a strange form; in particular, its database_url parameter seems to be wrapping a bunch of other parameters like password, port, and path. However, as shown below, my Settings class takes a different form.
From config.py:
# ...imports
class Settings(BaseSettings):
    environment: str = os.getenv("ENVIRONMENT", "dev")
    testing: bool = os.getenv("TESTING", 0)
    database_url: AnyUrl = os.environ.get("DATABASE_URL")


@lru_cache()
def get_settings() -> BaseSettings:
    log.info("Loading config settings from the environment...")
    return Settings()
Then, in the conftest.py module, I've overridden the settings above with the following:
import os
import pytest
from fastapi.testclient import TestClient
from app.main import create_application
from app.config import get_settings, Settings


def get_settings_override():
    return Settings(testing=1, database_url=os.environ.get("DATABASE_TEST_URL"))


@pytest.fixture(scope="module")
def test_app():
    app = create_application()
    app.dependency_overrides[get_settings] = get_settings_override()
    with TestClient(app) as test_client:
        yield test_client
As for the offending test itself, that looks like the following:
def test_ping(test_app):
    response = test_app.get("/ping")
    assert response.status_code == 200
    assert response.json() == {"environment": "dev", "ping": "pong", "testing": True}
The container is successfully running on my localhost without issue; this leads me to believe that the issue is wholly related to how I've set up the test and its associated config. However, the structure of the error and how database_url is wrapping up all these key-value pairs from docker-compose.yml gives me the sense that my syntax error could be elsewhere.
At this juncture, I'm not sure if the issue has something to do with how I set up test_ping.py, my construction of the settings_override, with the format of my docker-compose.yml file, or something else altogether.
So far, I've tried to fix this issue by reading up on the use of dependency overrides in FastApi, noodling with my indentation in the docker-compose, changing the TestClient from one provided by starlette to that provided by FastAPI, and manually entering testing mode.
Something I noticed when attempting to manually go into testing mode was that the container doesn't want to follow suit. I've tried setting testing to 1 in docker-compose.yml, and testing: bool = True in config.Settings.
I'm new to all of the relevant tech here and bamboozled. What is causing this discrepancy with my test? Any and all insight would be greatly appreciated. If you need to see any other files, or are interested in the package structure, just let me know. Many thanks.
Any dependency override through app.dependency_overrides should provide the function being overridden as the key and the function that should be used instead as the value. In your case you're using the correct key, but you're assigning the result of the override function as the value, not the function itself:
app.dependency_overrides[get_settings] = get_settings_override()
.. this should be:
app.dependency_overrides[get_settings] = get_settings_override
The error message shows that FastAPI tried to call the Settings object you stored as the override as if it were a function, which hints that it expects a function there instead.

Bazel fetch remote file not as a WORKSPACE rule?

In Bazel, how do I fetch a remote file with a build rule, not a WORKSPACE rule?
I want to use a build rule because WORKSPACE rules are not loaded transitively.
e.g. this fails
load("#bazel_tools//tools/build_defs/repo:http.bzl", "http_file")
http_file(
name = "foo",
urls = [ "https://example.com" ],
sha256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
executable = True,
)
Error in repository_rule: 'repository rule http_file' can only be called during workspace loading
If you really want to do that, you have to implement your own rule. A naïve, trivial example relying on curl to fetch could be:
def _impl(ctx):
    args = ctx.actions.args()
    args.add("-o", ctx.outputs.out)
    args.add(ctx.attr.url)
    ctx.actions.run(
        outputs = [ctx.outputs.out],
        executable = "curl",
        arguments = [args],
    )

get_stuff = rule(
    _impl,
    attrs = {
        "url": attr.string(
            mandatory = True,
        ),
    },
    outputs = {"out": "%{name}.out"},
)
But, especially for such a trivial rule, it comes with problems. Apart from anything else: do you want to step out of the sandbox during the build? And do you want to talk to something across the network during the build (outside the sandbox), bypassing the repository_cache and possibly getting the remote_cache involved (networked caching of networked fetching)? Specifically in this example, if the content of the file pointed to by url changes, the build has no idea and only fetches it again when it either hasn't done so yet or the url itself has changed. I.e. the implementation would need to be more robust (mimic that of http_file, for instance).
But it actually sounds like you're trying to address a different problem (transitive external dependencies), for which there could be another solution. One trick used for that is to define a macro in your first-level dependency that declares the next hop; after declaring that first-level dependency as an external dependency in your parent project, you load that macro and call it from the parent project's WORKSPACE. This too has a price though, namely that the first-level dependency always has to be present (fetched or already cached), even if the build target asked for does not actually need it (as that load and macro call will always pull it in).

How to find the file paths of all namespaces loaded in an application using Tcl/Tk under Unix?

In the existing flow, a whole bunch of namespaces get loaded when running some script job.
However, if I want to check and trace the usage of some command in some namespace, I need to find the script path of that namespace.
Is there some way to get that? Particularly, I'm talking about PrimeTime scripts.
Technically, namespaces don't have script paths. But we can do something close enough:
proc report_current_file {call code result op} {
    if {$code == 0} {
        # If the [proc] call was successful...
        set cmd [lindex $call 1]
        set qualified_cmd [uplevel 1 [list namespace which $cmd]]
        set file [file normalize [info script]]
        puts "Defined $qualified_cmd in $file"
    }
}
trace add execution proc leave report_current_file
It's not perfect if you've got procedures creating procedures dynamically — the current file might be wrong — but that's fortunately not what most code does.
Another option that might work for you is to use tcl::unsupported::getbytecode, which produces a lot of information in machine-readable format (a dictionary). One of the pieces of information is the sourcefile key. Here's an example running interactively on my machine:
% parray tcl_platform
tcl_platform(byteOrder) = littleEndian
tcl_platform(engine) = Tcl
tcl_platform(machine) = x86_64
tcl_platform(os) = Darwin
tcl_platform(osVersion) = 20.2.0
tcl_platform(pathSeparator) = :
tcl_platform(platform) = unix
tcl_platform(pointerSize) = 8
tcl_platform(threaded) = 1
tcl_platform(user) = dkf
tcl_platform(wordSize) = 8
% dict get [tcl::unsupported::getbytecode proc parray] sourcefile
/opt/local/lib/tcl8.6/parray.tcl
Note that the procedure has to be already defined for this to work. And if Tcl's become confused about what file the code was in (because of dynamic programming trickery) then that key is absent.

Make shell aliases declaratively depend on packages

In NixOS, shell aliases can be defined inside the configuration.nix file like so:
environment.shellAliases = {
  "my_some_cmd" = "some_cmd -flag 123";
};
This gets assigned even when the referenced command (here: some_cmd) is not available on the system. Say this command is included in a package; it would then be desirable to declare that the alias should only be assigned if the package is installed.
How could that be done? Would I have to just work with a wrapping if-statement, or are there other ways to achieve this?
If the if-statement is the way to go, how could that be implemented?
You can bypass the need to install the package by using the full path to the command, for example:
environment.shellAliases = {
  "colored-tree" = "${pkgs.tree}/bin/tree -C";
};
