Terraform: "Error: Reference to undeclared resource when calling modules from terragrunt" in Azure - terraform-provider-azure

I'm trying to use terragrunt for the first time. I have followed the directory structure described at https://terratest.gruntwork.io/docs/getting-started/quick-start/. I wanted to get rid of the duplicate main.tf, outputs.tf, and vars.tf files that I have been using inside my environment folders. Below are the versions and the error that I'm facing. Any help would be greatly appreciated. Thanks in advance.
Terragrunt version
terragrunt version v0.23.10
Terraform version
Terraform v0.12.24
Directory Structure
terraform-live/
├── prod
│   └── resource_group
│       ├── main.tf
│       └── terragrunt.hcl
└── terragrunt.hcl
contents of terraform-live/terragrunt.hcl
backend = "azurerm"
config = {
key = "${path_relative_to_include()}/terraform.tfstate"
resource_group_name = "common-rg"
storage_account_name = "testsa01"
container_name = "tfstate"
}
}
contents of terraform-live/prod/resource_group/main.tf
backend "azurerm" {}
}
contents of terraform-live/prod/resource_group/terragrunt.hcl
terraform {
  source = "git::git@github.com:adi4dpeople/terraform_modules.git//resource_group?ref=v0.0.1"
}

# Include all settings from the root terragrunt.hcl file
include {
  path = find_in_parent_folders()
}

# These are the variables we have to pass in to use the module specified in the terragrunt configuration above
inputs = {
  location = "westus"
  rg_name  = "testrg01"
}
When I run terragrunt plan, I get the following error:
[terragrunt] 2020/04/24 22:24:39 Reading Terragrunt config file at /home/aditya/terraform-live/prod/resource_group/terragrunt.hcl
[terragrunt] [/home/aditya/terraform-live/prod/resource_group] 2020/04/24 22:24:39 Running command: terraform --version
[terragrunt] 2020/04/24 22:24:44 Terraform files in /home/aditya/terraform-live/prod/resource_group/.terragrunt-cache/Hovi5Z9TKrGgHU_Lf1P2xFmhkm0/4M87gZKvnrwMknqj9CwuSBSfiHk/resource_group are up to date. Will not download again.
[terragrunt] 2020/04/24 22:24:44 Copying files from /home/aditya/terraform-live/prod/resource_group into /home/aditya/terraform-live/prod/resource_group/.terragrunt-cache/Hovi5Z9TKrGgHU_Lf1P2xFmhkm0/4M87gZKvnrwMknqj9CwuSBSfiHk/resource_group
[terragrunt] 2020/04/24 22:24:44 Setting working directory to /home/aditya/terraform-live/prod/resource_group/.terragrunt-cache/Hovi5Z9TKrGgHU_Lf1P2xFmhkm0/4M87gZKvnrwMknqj9CwuSBSfiHk/resource_group
[terragrunt] [/home/aditya/terraform-live/prod/resource_group] 2020/04/24 22:24:44 Backend azurerm has not changed.
[terragrunt] [/home/aditya/terraform-live/prod/resource_group] 2020/04/24 22:24:44 Running command: terraform init -backend-config=access_key=xxxxxxxxxxxx -backend-config=container_name=tfstate -backend-config=key=prod/resource_group/terraform.tfstate -backend-config=resource_group_name=testrg01 -backend-config=storage_account_name=testsa01
Initializing the backend...
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
[terragrunt] 2020/04/24 22:24:52 Running command: terraform plan
Acquiring state lock. This may take a few moments...
Error: Reference to undeclared resource
on outputs.tf line 2, in output "id":
2: value = azurerm_resource_group.rg.id
A managed resource "azurerm_resource_group" "rg" has not been declared in the
root module.
Error: Reference to undeclared resource
on outputs.tf line 6, in output "name":
6: value = azurerm_resource_group.rg.name
A managed resource "azurerm_resource_group" "rg" has not been declared in the
root module.
Releasing state lock. This may take a few moments...
[terragrunt] 2020/04/24 22:25:01 Hit multiple errors:
exit status 1
aditya@LAPTOP-6C2MPJDV:~/terraform-live/prod/resource_group$

I solved my problem with the help of this GitHub issue on terragrunt:
https://github.com/gruntwork-io/terragrunt/issues/1151
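For context, the error means that in the code Terraform actually runs from .terragrunt-cache, outputs.tf references azurerm_resource_group.rg, but no resource with that address is declared alongside it. A minimal sketch of what the resource_group module is expected to contain is below; the variable names match the inputs shown above, but everything else is an assumption about the module, not its actual contents.

# Hypothetical contents of the resource_group module (for illustration only,
# not the actual code in the referenced repository).

# variables.tf
variable "location" {}
variable "rg_name" {}

# main.tf
resource "azurerm_resource_group" "rg" {
  name     = var.rg_name
  location = var.location
}

# outputs.tf
output "id" {
  value = azurerm_resource_group.rg.id
}

output "name" {
  value = azurerm_resource_group.rg.name
}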

I faced a similar issue when trying to assign a pull role to an Azure Kubernetes cluster so it could pull images from an Azure Container Registry using a managed system identity.
Azure role assignment (outputs.tf file)
output "acr_id" {
value = azure_container_registry.acr.id
}
This was put in a module directory called azure-role-assignment
However, when I reference the module output in my Test environment (main.tf file):
# Create azure container registry
module "azure_container_registry" {
  source                  = "../modules/azure-container-registry"
  container_registry_name = var.container_registry_name
  resource_group_name     = var.resource_group_name
  location                = var.location
  sku                     = var.sku
  admin_enabled           = var.admin_enabled
}

# Create azure role assignment
module "azure_role_assignment" {
  source               = "../modules/azure-role-assignment"
  scope                = module.azure_container_registry.acr_id
  role_definition_name = var.role_definition_name
  principal_id         = module.azure_kubernetes_cluster.principal_id
}
However, when I run terraform apply, I get the error:
Error: Reference to undeclared resource
on ../modules/azure-container-registry/outputs.tf line 2, in output "acr_id":
2: value = azure_container_registry.acr.id
A managed resource "azure_container_registry" "acr" has not been declared in
module.azure_container_registry.
Here's how I solved it:
The issue was how I had defined the value of acr_id in the outputs.tf file. Instead of this:
Azure role assignment (outputs.tf file)
output "acr_id" {
value = azure_container_registry.acr.id
}
It should be this:
Azure role assignment (`outputs.tf` file)
output "acr_id" {
value = azurerm_container_registry.acr.id
}
That is azurerm_container_registry.acr.id and not azure_container_registry.acr.id
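For completeness, the output above only resolves because the module itself declares the registry with the azurerm resource type. A minimal sketch of the azure-container-registry module is shown below; the variable names are taken from the module call above, the rest is an assumption about the module's contents, not its actual code.

# modules/azure-container-registry/main.tf (sketch; names assumed from the module call)
resource "azurerm_container_registry" "acr" {
  name                = var.container_registry_name
  resource_group_name = var.resource_group_name
  location            = var.location
  sku                 = var.sku
  admin_enabled       = var.admin_enabled
}

# modules/azure-container-registry/outputs.tf
output "acr_id" {
  value = azurerm_container_registry.acr.id
}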
That's all.
I hope this helps

Related

unable to define ssh key when using terraform to create linux vm

I'm trying to use Terraform to create a Linux VM. What I see online is pretty straightforward:
resource "tls_private_key" "this" {
for_each = local.worker_env_map
algorithm = "RSA"
rsa_bits = 4096
}
resource "azurerm_linux_virtual_machine" "example" {
name = "worker-machine"
resource_group_name = "rogertest"
location = "australiaeast"
size = "Standard_D2_v4"
admin_username = data.azurerm_key_vault_secret.kafkausername.value
network_interface_ids = [
azurerm_network_interface.example.id,
]
admin_ssh_key {
username = "adminuser"
public_key = tls_private_key.this["env1"].public_key_openssh
}
os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}
source_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "18_04-lts-gen2"
version = "latest"
}
}
but I keep getting this error:
Code="InvalidParameter" Message="Destination path for SSH public keys is currently limited to its default value /home/kafkaadmin/.ssh/authorized_keys due to a known issue in Linux provisioning agent."
Target="linuxConfiguration.ssh.publicKeys.path"
but I'm following exactly what is outlined on this page:
https://learn.microsoft.com/en-us/azure/virtual-machines/linux/quick-create-terraform
I tried to reproduce the same issue in my environment and got the results below.
The error says that the destination path for SSH public keys is currently limited to its default value; that is the path on the VM where the keys are written, and if the file already exists the specified keys are appended to it.
If we need a non-default location for the public keys then, at the moment, the only way is to build our own custom solution.
I have used the below command to set my own path for the keys:
az vm create --resource-group rg_name --name myVM --image UbuntuLTS --admin-username user_name --generate-ssh-keys --ssh-dest-key-path './'
I have created the Linux VM Terraform code using this document.
I have followed the steps below to apply the configuration:
terraform init
This initializes the working directory.
terraform plan
This creates an execution plan and previews the changes that Terraform will make to the infrastructure.
terraform apply
This creates or updates the infrastructure according to the configuration.
I am able to see the created Linux virtual machine.
NOTE: For creating a Linux VM, we can also use this Terraform document for reference.
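As a reference point, a minimal sketch of the VM block from a quickstart-style configuration is shown below. This is not the questioner's actual code: the resource names and the referenced resource group, NIC and key resources are assumptions, and the image values are copied from the question. Note that admin_username and the admin_ssh_key username are the same value here, which keeps the key at its default /home/<admin_username>/.ssh/authorized_keys destination.

resource "azurerm_linux_virtual_machine" "example" {
  # Sketch only: azurerm_resource_group.rg, azurerm_network_interface.example
  # and tls_private_key.example are assumed to be declared elsewhere.
  name                  = "example-vm"
  resource_group_name   = azurerm_resource_group.rg.name
  location              = azurerm_resource_group.rg.location
  size                  = "Standard_D2_v4"
  admin_username        = "adminuser"
  network_interface_ids = [azurerm_network_interface.example.id]

  # The username here matches admin_username above, so the public key is
  # written to the default path the provisioning agent expects.
  admin_ssh_key {
    username   = "adminuser"
    public_key = tls_private_key.example.public_key_openssh
  }

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "18_04-lts-gen2"
    version   = "latest"
  }
}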

[Solved]: Error resolving image name 'debian-cloud/debian-9': Could not find image or family debian-cloud/debian-9

Google Cloud Skills Boost
(Quest) Secure Workloads in Google Kubernetes Engine
(Lab) Securing Applications on Kubernetes Engine - Three Examples
In the section "Provisioning the Kubernetes Engine cluster", run the command
make create
I ran into an error:
│ Error: Error resolving image name 'debian-cloud/debian-9': Could not find image or family debian-cloud/debian-9
│
│ with module.bastion.google_compute_instance.instance,
│ on modules/instance/main.tf line 54, in resource "google_compute_instance" "instance":
│ 54: resource "google_compute_instance" "instance" {
Step 1: Run the command to see the available images
gcloud compute images list | grep debian
Step 2: At the project root, run the below command
vi ./terraform/modules/instance/main.tf
Step 3: In Vim normal mode, search for debian-9 by typing /debian-9 and hitting Enter
// Specify the Operating System Family and version.
boot_disk {
  initialize_params {
    image = "debian-cloud/debian-9"
  }
}
Step 4: In my case, debian-10 is available, so I changed the above block to:
// Specify the Operating System Family and version.
boot_disk {
  initialize_params {
    image = "debian-cloud/debian-10"
  }
}
Step 5: Change directory to the project root, then re-run the command make create.
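Beyond the lab instructions, one way to avoid hard-coding an image that may be deprecated again is to resolve the image from a family with a data source. This is only a sketch under my own assumptions (the family name and the placeholder instance arguments are not part of the lab's module):

// Sketch only: resolve the newest image of a family instead of pinning
// "debian-cloud/debian-9". Pick a family shown by `gcloud compute images list`.
data "google_compute_image" "boot" {
  family  = "debian-11"
  project = "debian-cloud"
}

resource "google_compute_instance" "instance" {
  // Placeholder arguments; the lab's module sets its own name, type and network.
  name         = "example-instance"
  machine_type = "e2-small"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = data.google_compute_image.boot.self_link
    }
  }

  network_interface {
    network = "default"
  }
}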

failed to load the 'resty.core' when loading custom APISIX plugin

I am trying to create a custom Lua plugin for the APISIX Docker version 2.15.0. I am using a slightly modified version of the apisix example plugin and I am loading it using the instructions in the Developer Guide. However, when I reload APISIX, I get the following error and the plugin does not load:
2022/10/05 14:05:40 [alert] 1#1: failed to load the 'resty.core' module (https://github.com/openresty/lua-resty-core); ensure you are using an OpenResty release from https://openresty.org/en/download.html (reason: /usr/local/apisix/apisix/plugins/3rd-party.lua:18: loop or previous error loading module 'apisix.core') in /usr/local/apisix/conf/nginx.conf:404
To reproduce:
Clone the APISIX docker repo with the docker compose stack
Create the folder <repo>/example/plugins
Create a file named 3rd-party.lua and put the code below
Edit the <repo>/apisix_conf/config.yaml and add the line extra_lua_path: "/usr/local/apisix/apisix/plugins/3rd-party.lua" under apisix
Bind the Lua script into the container by adding the line - ./plugins/3rd-party.lua:/usr/local/apisix/apisix/plugins/3rd-party.lua under the apisix service's volumes section
Run the docker stack with cd ./example && docker-compose up -d
See if the plugin is loaded.
The lua plugin code:
local require = require
local core = require("apisix.core")

local plugin_name = "3rd-party"

local schema = {
    type = "object",
    properties = {
        body = {
            description = "body to replace response.",
            type = "string"
        },
    },
    required = {"body"},
}

local _M = {
    version = 0.1,
    priority = 12,
    name = plugin_name,
    schema = schema,
}

function _M.check_schema(conf)
    return core.schema.check(schema, conf)
end

function _M.access(conf, ctx)
    return 200, conf.body
end

return _M
After taking a lot of advice from the APISIX Slack, the correct steps to create a plugin for the Docker version of APISIX are:
Create a lua plugin script
Bind the Lua script into the apisix service container by adding the line - /path/to/plugin/script/<plugin-name>.lua:/usr/local/apisix/apisix/plugins/<plugin-name>.lua under the apisix service's volumes section, like:
apisix:
  ...
  volumes:
    ...
    - ./plugins/3rd-party.lua:/usr/local/apisix/apisix/plugins/3rd-party.lua
Get a copy of the available plugins that reside in the apisix docker container in /usr/local/apisix/conf/config-default.yaml. These are under the plugins section of the file
Add a new section named plugins in the apisix config file with the available plugins taken from the previous step like:
plugins: # plugin list (sorted by priority)
- real-ip # priority: 23000
- client-control # priority: 22000
...
Add the new plugin in the list of plugins of apisix config file like:
plugins: # plugin list (sorted by priority)
- real-ip # priority: 23000
- client-control # priority: 22000
...
- 3rd-party
The steps above will load the new plugin in APISIX and it can be validated with a call curl http://<domain>/apisix/admin/plugins/list -H 'X-API-KEY: <key>'. The new plugin should appear in the response.
The steps above will not load the plugin into the APISIX Dashboard, because the Dashboard caches the list of plugins. To reload the cache, follow these instructions.
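For completeness, the validation call mentioned above, plus an optional route that exercises the new plugin, can look roughly like this (domain and admin key are placeholders for your own deployment, and the route definition is an assumption on my part, not part of the original steps):

# List the plugins APISIX has loaded; "3rd-party" should appear in the response.
curl http://<domain>/apisix/admin/plugins/list -H 'X-API-KEY: <key>'

# Optionally attach the plugin to a test route; calling the route should then
# return the configured body from the plugin's access phase.
curl http://<domain>/apisix/admin/routes/1 -X PUT -H 'X-API-KEY: <key>' -d '
{
  "uri": "/3rd-party-test",
  "plugins": {
    "3rd-party": { "body": "hello from 3rd-party" }
  },
  "upstream": {
    "type": "roundrobin",
    "nodes": { "httpbin.org:80": 1 }
  }
}'

curl http://<domain>/3rd-party-test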

pyhdfs.HdfsIOException: Failed to find datanode, suggest to check cluster health. excludeDatanodes=null

I am trying to run Hadoop using the Docker setup provided here:
https://github.com/big-data-europe/docker-hadoop
I use the following command:
docker-compose up -d
to bring up the services, and I am able to access it and browse the file system at localhost:9870. The problem arises whenever I try to use pyhdfs to put a file on HDFS. Here is my sample code:
hdfs_client = HdfsClient(hosts = 'localhost:9870')

# Determine the output_hdfs_path
output_hdfs_path = 'path/to/test/dir'

# Does the output path exist? If not then create it
if not hdfs_client.exists(output_hdfs_path):
    hdfs_client.mkdirs(output_hdfs_path)

hdfs_client.create(output_hdfs_path + 'data.json', data = 'This is test.', overwrite = True)
If the test directory does not exist on HDFS, the code successfully creates it, but when it gets to the .create part it throws the following exception:
pyhdfs.HdfsIOException: Failed to find datanode, suggest to check cluster health. excludeDatanodes=null
What surprises me is that my code is able to create the empty directory but fails to put the file on HDFS. My docker-compose.yml file is exactly the same as the one provided in the GitHub repo. The only change I've made is in the hadoop.env file, where I changed:
CORE_CONF_fs_defaultFS=hdfs://namenode:9000
to
CORE_CONF_fs_defaultFS=hdfs://localhost:9000
I have seen this other post on Stack Overflow and tried the following command:
hdfs dfs -mkdir hdfs:///demofolder
which works fine in my case. Any help is much appreciated.
I would keep the default CORE_CONF_fs_defaultFS=hdfs://namenode:9000 setting.
It works fine for me after adding a forward slash to the paths:
import pyhdfs

fs = pyhdfs.HdfsClient(hosts="namenode")
output_hdfs_path = '/path/to/test/dir'

if not fs.exists(output_hdfs_path):
    fs.mkdirs(output_hdfs_path)

fs.create(output_hdfs_path + '/data.json', data = 'This is test.')

# check that it's present
list(fs.walk(output_hdfs_path))
# [('/path/to/test/dir', [], ['data.json'])]

docker containers - missing attribute

I needed a custom centos image with docker installed. So I built it using centos image and tagged it custom (shown below).
$ docker image ls
REPOSITORY        TAG       IMAGE ID        CREATED        SIZE
centos            custom    84766562f881    4 hours ago    664MB
centos/systemd    latest    05d3c1e2d0c1    7 weeks ago    202MB
I am trying to deploy a couple of containers using Terraform on my local machine, each with a unique name that comes from another file. The Docker images are on the local machine. Here's the TF code.
$ cat main.tf
provider "docker" {
}
resource "docker_image" "centos" {
name = "centos:custom"
}
resource "docker_container" "app_swarm" {
image = "${docker_image.centos.custom}"
count = "${length(var.docker_cont)}"
name = "${element(var.docker_cont, count.index)}"
}
When I run terraform apply, I get this error which I am not sure how to fix. Can someone point me in the right direction please?
Error: Error running plan: 1 error(s) occurred:
* docker_container.app_swarm: 3 error(s) occurred:
* docker_container.app_swarm[0]: Resource 'docker_image.centos' does not have attribute 'custom' for variable 'docker_image.centos.custom'
* docker_container.app_swarm[1]: Resource 'docker_image.centos' does not have attribute 'custom' for variable 'docker_image.centos.custom'
* docker_container.app_swarm[2]: Resource 'docker_image.centos' does not have attribute 'custom' for variable 'docker_image.centos.custom'
Yes, the other file with the names exists; it's a simple list.
EDIT:
Thanks David, I tried your suggestion and amended the code to look like:
provider "docker" {
}
resource "docker_image" "centos" {
name = "centos:custom"
}
resource "docker_container" "app_swarm" {
image = "${docker_image.centos.latest}"
count = "${length(var.docker_cont)}"
name = "${element(var.docker_cont, count.index)}"
}
But now I get this error.
Error: Error applying plan:
1 error(s) occurred:
* docker_image.centos: 1 error(s) occurred:
* docker_image.centos: Unable to read Docker image into resource: Unable to pull image centos:custom: error pulling image centos:custom: Error response from daemon: manifest for centos:custom not found
I guess I will have to set up a local Docker repository to get this working, but I am not sure?
You can only use the specific fields listed in the docker_image resource documentation in the ${docker_image.centos...} interpolation. In particular, even though you don't use the tag :latest, you need a .latest property reference:
image = "${docker_image.centos.latest}"
(If the image actually is one you've built locally, you may also want to specify the keep_locally option on your docker_image resource so that terraform destroy won't delete it.)
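Putting that together, a sketch of the corrected resources (same 0.11-style interpolation as in the question; keep_locally is the optional flag mentioned above):

resource "docker_image" "centos" {
  name = "centos:custom"

  # Optional: keep the locally built image when running `terraform destroy`.
  keep_locally = true
}

resource "docker_container" "app_swarm" {
  # The docker_image resource exposes the image ID through its `latest`
  # attribute; there is no attribute named after the tag (`custom`).
  image = "${docker_image.centos.latest}"
  count = "${length(var.docker_cont)}"
  name  = "${element(var.docker_cont, count.index)}"
}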
