Terraform warning: Warning: "use_microsoft_graph": [DEPRECATED] This field now defaults to `true` and will be removed in v1.3 of Terraform Core - terraform-provider-azure

I created a Terraform configuration that creates a resource group. It uses a backend configuration, so the tfstate file is stored in a shared location rather than locally.
When I run terraform plan, I get the following warning.
Warning: "use_microsoft_graph": [DEPRECATED] This field now defaults to true and will be removed in v1.3 of Terraform Core due to the deprecation of ADAL by Microsoft.
The config files are as follows.
# Terraform Block
terraform {
  required_version = ">= 1.0.0"
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 3.0"
    }
  }
  # Terraform State Storage to Azure Storage Container
  backend "azurerm" {
    resource_group_name  = "storage-rg"
    storage_account_name = "tfstatetrial"
    container_name       = "tfstatefiles"
    key                  = "terraform.tfstate"
  }
}
# Provider Block
provider "azurerm" {
  features {}
}
# Resource-1: Azure Resource Group
resource "azurerm_resource_group" "myrg" {
  name     = "simple-rg"    # local.rg_name
  location = "centralindia" # var.resoure_group_location
}
I looked into this GitHub issue but could not find an answer.
The warning is caused by the backend "azurerm" block; if I remove it (i.e. no remote state), the warning no longer appears.
Any ideas what should be done?

This is a known "issue" and has been discussed at hashicorp/terraform#31118. tl;dr (as far as I understood):
The reasoning for this warning is [...] a temporary addition in 1.1, which has been flipped on in 1.2, and will be removed in 1.3
See also here:
use_microsoft_graph - (Optional) Should MSAL be used for authentication instead of ADAL, and should Microsoft Graph be used instead of Azure Active Directory Graph? Defaults to true.
Note: In Terraform 1.2 the Azure Backend uses MSAL (and Microsoft Graph) rather than ADAL (and Azure Active Directory Graph) for authentication by default - you can disable this by setting use_microsoft_graph to false. This setting will be removed in Terraform 1.3, due to Microsoft's deprecation of ADAL.
I know this is not technically a solution to your question or issue, but I think you'll be better off subscribing and contributing to the GitHub issue instead of waiting for an answer here :)
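Not a fix, but for context, this is roughly where the documented use_microsoft_graph argument would sit if you set it explicitly (a sketch reusing the backend values from the question; the warning is informational, and both it and the argument go away in Terraform 1.3):
terraform {
  backend "azurerm" {
    resource_group_name  = "storage-rg"
    storage_account_name = "tfstatetrial"
    container_name       = "tfstatefiles"
    key                  = "terraform.tfstate"
    # Explicit opt-in to MSAL / Microsoft Graph instead of ADAL.
    # true is already the default in Terraform 1.2, and the argument is
    # removed in 1.3, so it can simply be dropped when you upgrade.
    use_microsoft_graph  = true
  }
}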

Related

How to declare conditional resource in serverless framework

I use the TypeScript definition of the serverless configuration (serverless.ts). How do I conditionally add resources using CloudFormation templates?
For example, AWS Cognito user pools: if I want to exclude these in offline mode, how can I specify in the serverless config file that they should not be included?
I solved it in two ways. serverless-plugin-ifelse allowed me to exclude resources based on certain parameters.
Later I realized this might be problematic in the long run, so I created a separate serverless config file for my offline use case that includes all the necessary resources. Prod/staging environments use the default serverless.ts file, while offline uses an offline-serverless.ts file. Although there is some repetition of the resource config, this option ensures the prod/staging config is not polluted with offline content. I can start offline mode using sls offline start --stage local --reloadHandler --config offline-serverless.ts
The offline config reuses some code from the main config. Sample offline-serverless.ts content is below:
import type { AWS } from "@serverless/typescript"; // config type (this import is assumed; not shown in the original snippet)
import offlineplugins from "offlineplugins";
import pluginsconfig from "offlinepluginsconfig";
import plugins from "plugins";

const serverlessConfiguration: AWS = {
  service: "my-offline-apis",
  frameworkVersion: "3",
  ...pluginsconfig,
  plugins: [...plugins, ...offlineplugins],
  provider: {
    name: "aws",
    ....
    ....

Default credentials can not be used to assume new style deployment roles

Following the pipelines README to set up a deployment pipeline, I ran
$ env CDK_NEW_BOOTSTRAP=1 npx cdk bootstrap \
--cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess \
aws://[ACCOUNT_ID]/us-west-2
to create the necessary roles. I would assume the roles would automatically include sts:AssumeRole permissions for my account principal. However, when I run cdk deploy I get the following warning
current credentials could not be used to assume
'arn:aws:iam::[ACCOUNT_ID]:role/cdk-hnb659fds-file-publishing-role-[ACCOUNT_ID]-us-west-2',
but are for the right account. Proceeding anyway.
I have root credentials in ~/.aws/credentials.
Looking at the deploy role policy, I don't see any sts permissions. What am I missing?
You will need to add permission to assume the role to the credentials with which you are trying to execute cdk deploy:
{
  "Sid": "assumerole",
  "Effect": "Allow",
  "Action": [
    "sts:AssumeRole",
    "iam:PassRole"
  ],
  "Resource": [
    "arn:aws-cn:iam::*:role/cdk-readOnlyRole",
    "arn:aws-cn:iam::*:role/cdk-hnb659fds-deploy-role-*",
    "arn:aws-cn:iam::*:role/cdk-hnb659fds-file-publishing-*"
  ]
}
1. Enable verbose mode to see what is actually happening:
cdk deploy --verbose
If you see a message similar to the one below, continue with step 2. Otherwise, you need to address the problem by understanding the error message.
Could not assume role in target account using current credentials User: arn:aws:iam::XXX068599XXX:user/cdk-access is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::XXX068599XXX:role/cdk-hnb659fds-deploy-role-XXX068599XXX-us-east-2. Please make sure that this role exists in the account. If it doesn't exist, (re)-bootstrap the environment with the right '--trust', using the latest version of the CDK CLI.
2. Check the S3 buckets related to CDK and the CloudFormation stacks in the AWS Console, and delete them manually.
3. Enable new-style bootstrapping by one of the methods mentioned here.
4. Bootstrap the stack using the command below. It should then create all required roles automatically.
cdk bootstrap --trust=ACCOUNT_ID --cloudformation-execution-policies=arn:aws:iam::aws:policy/AdministratorAccess --verbose
NOTE: If you are working with Docker image assets, make sure you have set up your repository before you deploy. New-style bootstrapping does not create the repos automatically for you, as mentioned in this comment.
This may be of use to somebody... The issue could be a mismatch of regions. I spotted it in verbose mode: the roles had been created for us-east-1, but I had specified eu-west-2 in the bootstrap, which for some reason had not taken effect. The solution was to set the region by adding AWS_REGION=eu-west-2 before the cdk deploy command.
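In other words, something along these lines:
AWS_REGION=eu-west-2 cdk deploy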
I ran into a similar error. The critical part of my error was
failed: Error: SSM parameter /cdk-bootstrap/<>/version not found.
I had to re-run using the new bootstrap method that creates the SSM parameter. To run the new bootstrap method, first set CDK_NEW_BOOTSTRAP via export CDK_NEW_BOOTSTRAP=1.
Don't forget to run cdk bootstrap with those credentials against your account [ACCOUNT_ID].
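For example, mirroring the bootstrap command from the question (the account ID and region are the placeholders from that example):
export CDK_NEW_BOOTSTRAP=1
npx cdk bootstrap \
  --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess \
  aws://[ACCOUNT_ID]/us-west-2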
For me, the problem was expired credentials: I was trying to use temporary credentials from AWS SSO, which had expired. The error message is misleading, though: it says
current credentials could not be used to assume 'arn:aws:iam::123456789012:role/cdk-xxx999xxx-deploy-role-123456789012-us-east-1', but are for the right account. Proceeding anyway.
(To get rid of this warning, please upgrade to bootstrap version >= 8)
However, applying the --verbose flag as suggested above showed the real problem:
Assuming role 'arn:aws:iam::123456789012:role/cdk-xxx999xxx-deploy-role-123456789012-us-east-1'.
Assuming role failed: The security token included in the request is expired
Could not assume role in target account using current credentials The security token included in the request is expired . Please make sure that this role exists in the account. If it doesn't exist, (re)-bootstrap the environment with the right '--trust', using the latest version of the CDK CLI.
Getting the latest SSO credentials fixed the problem.
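In practice that just meant refreshing the SSO session before deploying, e.g. (the profile name here is a placeholder):
aws sso login --profile my-sso-profile
cdk deploy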
After deploying with --verbose I could see it was a clock issue in my case:
Assuming role failed: Signature expired: 20220428T191847Z is now earlier than 20220428T192528Z (20220428T194028Z - 15 min.)
I resolved the clock issue on Ubuntu using:
sudo ntpdate ntp.ubuntu.com
which then resolved the CDK issue.

How to select correct docker provider in terraform 0.14

To integrate with Docker, I've set up my Terraform as follows:
The required provider:
docker = {
  source  = "kreuzwerker/docker"
  version = "2.11.0"
}
the instantiation of that provider:
provider "docker" {
}
And finally I use it as follows in a resource:
data "docker_registry_image" "myapp" {
name = some_image_url
}
When I run terraform init, it seems it is still referring to the "old" terraform provider by HashiCorp:
Initializing modules...
Initializing the backend...
Initializing provider plugins...
- Finding hashicorp/random versions matching "3.0.1"...
- Finding hashicorp/null versions matching "~> 3.0.0"...
- Finding hashicorp/external versions matching "~> 2.0.0"...
- Finding kreuzwerker/docker versions matching "2.11.0"...
- Finding latest version of hashicorp/docker...
- Finding hashicorp/google versions matching "~> 3.56.0"...
- Finding hashicorp/azurerm versions matching "~> 2.46.1"...
- Installing hashicorp/null v3.0.0...
- Installed hashicorp/null v3.0.0 (signed by HashiCorp)
- Installing hashicorp/external v2.0.0...
- Installed hashicorp/external v2.0.0 (signed by HashiCorp)
- Installing kreuzwerker/docker v2.11.0...
- Installed kreuzwerker/docker v2.11.0 (self-signed, key ID 24E54F214569A8A5)
- Installing hashicorp/google v3.56.0...
- Installed hashicorp/google v3.56.0 (signed by HashiCorp)
- Installing hashicorp/azurerm v2.46.1...
- Installed hashicorp/azurerm v2.46.1 (signed by HashiCorp)
- Installing hashicorp/random v3.0.1...
- Installed hashicorp/random v3.0.1 (signed by HashiCorp)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
Error: Failed to query available provider packages
Could not retrieve the list of available versions for provider
hashicorp/docker: provider registry registry.terraform.io does not have a
provider named registry.terraform.io/hashicorp/docker
If you have just upgraded directly from Terraform v0.12 to Terraform v0.14
then please upgrade to Terraform v0.13 first and follow the upgrade guide for
that release, which might help you address this problem.
Did you intend to use kreuzwerker/docker? If so, you must specify that source
address in each module which requires that provider. To see which modules are
currently depending on hashicorp/docker, run the following command:
terraform providers
When I run terraform providers I indeed see the reference, caused by docker_registry_image:
...
├── provider[registry.terraform.io/hashicorp/docker]
...
Notes:
All other providers are on their latest version.
I'm using terraform 0.14.6.
The resource given above is the only docker resource I'm using.
I've already tried using an alias on the provider and the resource, but it does not work.
How can I solve this? Thanks!
It seems like we didn't migrate correctly.
I solved it by setting my Terraform version back to 0.13 and running terraform 0.13upgrade. After the command executed, I upgraded to 0.14.6 again and everything worked.
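Roughly, that sequence was as follows (a sketch; switch Terraform binaries however you normally do):
# with a Terraform 0.13 binary
terraform 0.13upgrade   # run in the root module and in each local module directory that needs it
# then switch back to Terraform 0.14.6 and re-initialize
terraform init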
source: https://www.terraform.io/docs/cli/commands/0.13upgrade.html
What did the command do?
This created a file in my module folder (where I use a docker resource) called versions.tf with the following contents:
terraform {
  required_providers {
    docker = {
      source = "kreuzwerker/docker"
    }
    google = {
      source = "hashicorp/google"
    }
    random = {
      source = "hashicorp/random"
    }
  }
  required_version = ">= 0.13"
}
Note that what is created here will depend on your specific situation.
It also created a file in my working directory that contained:
terraform {
  required_version = ">= 0.13"
}
(The providers were in a different file and already had the correct docker source, hence only the required version addition was added to the new file.)

Terraform: How to install multiple versions of provider plugins? [duplicate]

This question already has answers here:
Multiple provider versions with Terraform
(2 answers)
Closed 2 years ago.
I am trying to deploy Azure resources through Terraform 0.12 with the azurerm provider.
I have an AKS module which works fine with azurerm version 2.5.0 but breaks with 2.9.0.
On the other hand, the Postgresql module works with version 2.9.0 but breaks with 2.5.0.
I want to deploy both resources through a single terraform apply.
I tried the configuration below, but it fails at the initialization phase.
provider "azurerm" {
version = "=2.9.0"
}
provider "azurerm" {
alias = "latest"
version = "=2.5.0"
}
$ terraform.exe init
Initializing the backend...
Initializing provider plugins...
- Checking for available provider plugins...
No provider "azurerm" plugins meet the constraint "=2.5.0,=2.9.0".
The version constraint is derived from the "version" argument within the
provider "azurerm" block in configuration. Child modules may also apply
provider version constraints. To view the provider versions requested by each
module in the current configuration, run "terraform providers".
To proceed, the version constraints for this provider must be relaxed by
either adjusting or removing the "version" argument in the provider blocks
throughout the configuration.
Error: no suitable version is available
How do I install both provider versions, point the AKS module to v2.5.0, and point the Postgres module to v2.9.0?
Break the code into modules, add a provider section in each module, and call the modules from your main.tf file.
Example
modules/AKS
provider "azurerm" {
  version = "=2.5.0"
}
modules/DB
provider "azurerm" {
  version = "=2.9.0"
}
Now call your modules differently
main.tf
module "AKS" {
source = "../modules/AKS"
}
module "DB" {
source = "../modules/DB"
}

Accessing Docker Vault secrets using Spring Cloud Starter Vault Config Could Not Resolve

I am running a Docker Vault container in dev mode, and I can't read a secret located at /secret/mobsters/ called password.
Running vault kv get secret/mobsters returns the password key/value pair. I can also access the Vault server locally.
Here is how I am referencing the secret:
#Value("${password}")
String password;
#PostConstruct
private void postConstruct() {
System.out.println("My password is: " + password);
}
The Spring Cloud Vault configuration is setup using a bootstrap.yml file:
spring.application.name: mobsters
spring.cloud.vault:
  host: localhost
  port: 8200
  scheme: http
  authentication: TOKEN
  token: ...
I am getting an exception with the message (full exception here):
Caused by: java.lang.IllegalArgumentException: Could not resolve placeholder 'password' in value "${password}"
Using Spring Vault/Spring Cloud Vault with HashiCorp Vault 0.10.0 does not work as the key/value backend is mounted with versioning enabled by default. This has some significance as the versioned API has changed entirely and breaks existing client implementations. Context paths and response structure are different.
You have two options:
Use an older Vault version (such as 0.9.5)
Try to cope with API changes until Spring Cloud Vault finds an approach to use the new API. You need to:
Set spring.cloud.vault.generic.backend=secret/data in your bootstrap configuration.
Prefix property names with data., so @Value("${hello.world}") becomes @Value("${data.hello.world}").
It looks like there is a way to fix this.
In your bootstrap.yml, make sure that generic.enabled is false and kv.enabled is true.
spring:
  ...
  cloud.vault:
    ...
    kv.enabled: true
    generic.enabled: false
According to this answer on GitHub:
The main difference between those two is that kv injects the data segment in the context path and unwraps nested data responses.
If you're running a Spring Boot version before 2.0, then you need to implement an org.springframework.cloud.vault.config.VaultConfigurer bean that is exposed to the bootstrap context. SecretBackendConfigurer accepts a path and a PropertyTransformer that transforms properties before exposing them as a PropertySource.
