AWS CDK - inline IAM Policies with conflicting names are generated for different stacks using a shared role

I'm using the CDK to deploy several stacks, and one of the roles used is shared across multiple stacks. The constructs (e.g. CodeBuildAction) which use the role frequently attach the permissions they need as an inline policy. However, even though the role is "imported" into each stack, the generated inline policy name is not unique across stacks, so both CloudFormation stacks contain a Policy resource with the same name and fight over its contents. (Neither stack contains the Role resource itself.)
import * as cdk from "@aws-cdk/core";
import * as iam from "@aws-cdk/aws-iam";

const sharedRoleArn = "arn:aws:iam::111111111111:role/MyLambdaRole";

const app = new cdk.App();

const stackOne = new cdk.Stack(app, "StackOne");
const roleRefOne = iam.Role.fromRoleArn(stackOne, "SharedRole", sharedRoleArn);

// Under normal circumstances, this is called inside constructs defined by AWS
// (like a CodeBuildAction that grants permission to access Artifact S3 buckets, etc)
roleRefOne.addToPrincipalPolicy(new iam.PolicyStatement({
  actions: ["s3:ListBucket"],
  resources: ["*"],
  effect: iam.Effect.ALLOW,
}));

const stackTwo = new cdk.Stack(app, "StackTwo");
const roleRefTwo = iam.Role.fromRoleArn(stackTwo, "SharedRole", sharedRoleArn);

roleRefTwo.addToPrincipalPolicy(new iam.PolicyStatement({
  actions: ["dynamodb:List*"],
  resources: ["*"],
  effect: iam.Effect.ALLOW,
}));
The following are fragments of the cloud assembly generated for the two stacks:
SharedRolePolicyA1DDBB1E:
  Type: AWS::IAM::Policy
  Properties:
    PolicyDocument:
      Statement:
        - Action: s3:ListBucket
          Effect: Allow
          Resource: "*"
      Version: "2012-10-17"
    PolicyName: SharedRolePolicyA1DDBB1E
    Roles:
      - MyLambdaRole
  Metadata:
    aws:cdk:path: StackOne/SharedRole/Policy/Resource

SharedRolePolicyA1DDBB1E:
  Type: AWS::IAM::Policy
  Properties:
    PolicyDocument:
      Statement:
        - Action: dynamodb:List*
          Effect: Allow
          Resource: "*"
      Version: "2012-10-17"
    PolicyName: SharedRolePolicyA1DDBB1E
    Roles:
      - MyLambdaRole
  Metadata:
    aws:cdk:path: StackTwo/SharedRole/Policy/Resource
You can see above that the aws:cdk:path values for the two policies are different, but they end up with the same name (SharedRolePolicyA1DDBB1E), which is used as the physical name of the inline policy attached to the MyLambdaRole role. (The same behavior occurs for stacks in separate "Apps" as well.)
There's no affordance for setting the PolicyName of the "default policy" generated for a role (or for choosing which policies a construct attaches permissions to). I could also make the shared role immutable (using { mutable: false } on fromRoleArn), but then I would need to reconstruct the potentially complicated policies a set of constructs would have given the role, and attach them myself.
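For reference, a minimal sketch of what that immutable-import alternative might look like, reusing sharedRoleArn and stackOne from the snippet above (the single ManagedPolicy statement stands in for whatever permissions the constructs would actually have added):

const roleRef = iam.Role.fromRoleArn(stackOne, "SharedRoleImmutable", sharedRoleArn, {
  mutable: false,
});

// With mutable: false, addToPrincipalPolicy() becomes a no-op, so the
// permissions must be reconstructed and attached from the policy side:
new iam.ManagedPolicy(stackOne, "SharedRolePermissions", {
  statements: [
    new iam.PolicyStatement({
      actions: ["s3:ListBucket"],
      resources: ["*"],
      effect: iam.Effect.ALLOW,
    }),
  ],
  roles: [roleRef],
});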
I was able to work around the issue by templating the stack name into the imported role's "id", as in:
const stack = cdk.Stack.of(scope);
const role = iam.Role.fromRoleArn(scope, `${stack.stackName}SharedRole`, sharedRoleArn);
where I construct my role.
Is this expected behavior? Am I misunderstanding something about imported resources in CDK? Is there a better alternative? (My understanding of construct ids is that they only need to be unique within a given scope.)

Related

How to add PTR record for EC2 Instance's Private IP in CDK?

I've created two private hosted zones for populating A records and PTR records corresponding to my EC2 instance's private IP. Yes, it's the private IP that I need. This subnet is routed to our corporate data center, so we need non-cryptic hostnames and consistent reverse lookups on them within the account.
I've got the forward lookup working well, but I'm confused about how exactly it should work for the reverse lookup on the IP. Assume my CIDR is 192.168.10.0/24, where the EC2 instances will get created.
const fwdZone = new aws_route53.PrivateHostedZone(this, "myFwdZone", {
  zoneName: "example.com",
  vpc: myVpc,
});
const revZone = new aws_route53.PrivateHostedZone(this, "myRevZone", {
  zoneName: "10.168.192.in-addr.arpa",
  vpc: myVpc,
});
I'm later creating the A record by referencing the EC2 instance's instancePrivateIp property. This worked well.
const myEc2 = new aws_ec2.Instance(this, 'myEC2', {...});

new aws_route53.RecordSet(this, "fwdRecord", {
  zone: fwdZone,
  recordName: "myec2.example.com",
  recordType: aws_route53.RecordType.A,
  target: aws_route53.RecordTarget.fromIpAddresses(myEc2.instancePrivateIp),
});
However, when I try to create the PTR record for the same instance, I run into trouble. I needed to extract the fourth octet and specify it as the recordName:
new aws_route53.RecordSet(this, "revRecord", {
zone: revZone,
recordName: myEc2.instancePrivateIp.split('.')[3],
recordType: aws_route53.RecordType.PTR,
target: aws_route53.RecordTarget.fromValues("myec2.example.com"),
});
The CDK-synthesized CloudFormation template looks odd as well, especially the token syntax.
revRecordDEADBEEF:
  Type: AWS::Route53::RecordSet
  Properties:
    Name: ${Token[TOKEN.10.168.192.in-addr.arpa.
    Type: PTR
    HostedZoneId: A12345678B00CDEFGHIJ3
    ResourceRecords:
      - myec2.example.com
    TTL: "1800"
Is this the right way to achieve this? If I specify the recordName as just the privateIp, the synthesized template ends up doing something else, which I can see is incorrect too.
revRecordDEADBEEF:
  Type: AWS::Route53::RecordSet
  Properties:
    Name:
      Fn::Join:
        - ""
        - - Fn::GetAtt:
              - myEC2123A01BC
              - PrivateIp
          - .10.168.192.in-addr.arpa.
    Type: PTR
    HostedZoneId: A12345678B00CDEFGHIJ3
    ResourceRecords:
      - myec2.example.com
    TTL: "1800"
Answering the CDK part of your question: the original error was because you were performing string manipulation on an unresolved token. Your CDK code runs before any resources are provisioned. This has to be the case, since it generates the CloudFormation template that will be submitted to CloudFormation to provision the resources. So when the code runs, the instance does not exist, and its IP address is not knowable.
CDK still allows you to access unresolved properties, returning a Token instead. You can pass this token around and it will be resolved to the actual value during deployment.
To perform string manipulation on a token, you can use CloudFormation's built-in functions, since they run during deployment, after the token has been resolved.
Here's what it would look like:
recordName: Fn.select(0, Fn.split('.', myEc2.instancePrivateIp))
As you found out yourself, you were also selecting the wrong octet of the IP address, so the actual solution would replace 0 with 3 in the call.
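Putting it together, the corrected revRecord might look roughly like this (a sketch reusing revZone and myEc2 from the question; Fn comes from aws-cdk-lib):

import { Fn } from "aws-cdk-lib";

new aws_route53.RecordSet(this, "revRecord", {
  zone: revZone,
  // Select the fourth octet (index 3); resolved at deploy time, after the token
  recordName: Fn.select(3, Fn.split(".", myEc2.instancePrivateIp)),
  recordType: aws_route53.RecordType.PTR,
  target: aws_route53.RecordTarget.fromValues("myec2.example.com"),
});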
References:
https://docs.aws.amazon.com/cdk/v2/guide/tokens.html
https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib-readme.html#intrinsic-functions-and-condition-expressions

AWS CDK: lambda permissions and EFS mount

I am trying to declare in my stack a lambda function with an EFS mount.
The lambda has a custom execution role with arn
arn:aws:iam::ACCOUNTID:role/service-role/ROLENAME
i.e. it was created in the stack using
lambda_role = aws_iam.Role(..., path="/service-role/", ...)
The code snippet declaring the lambda is
_lambda.Function(
    self,
    id="myId",
    runtime=_lambda.Runtime.PYTHON_3_8,
    code=_lambda.Code.asset('lambda'),
    handler='my_module.lambda_handler',
    role=lambda_role,
    function_name="function-name",
    timeout=core.Duration.seconds(30),
    vpc=vpc,
    filesystem=_lambda.FileSystem.from_efs_access_point(access_point, '/efs'),
)
The deployment fails with this error:
API: iam:PutRolePolicy User: USERNAME is not authorized to perform: iam:PutRolePolicy on resource: role service-role with an explicit deny
That "role service-role" in the error message seemed weird, so I inspected the synthesized CF template, and noticed this section:
lambdarolePolicy2FC0B982:
  Type: AWS::IAM::Policy
  Properties:
    PolicyDocument:
      # policy giving elasticfilesystem:ClientWrite and elasticfilesystem:ClientMount permissions
    PolicyName: lambdarolePolicy2FC0B982
    Roles:
      - Fn::Select:
          - 1
          - Fn::Split:
              - /
              - Fn::Select:
                  - 5
                  - Fn::Split:
                      - ":"
                      - Fn::ImportValue: sgmkr-iam-arn-mr
That ImportValue maps to the ARN of the lambda execution role. The problem I see is with the string manipulation, which does not take into account the fact that the role has a path. The result of that chain of Select/Split is indeed "service-role", not the proper role name.
I have two questions:
1. Why is CDK trying to add extra permissions to a role I defined? I already added the needed permissions to the relevant role, and I really don't want anything added to it. Moreover, in my setup, iam:PutRolePolicy calls are strictly regulated, so this would almost certainly fail. That's the very reason I pass my own role. How can I switch off this automatic policy generation?
2. Why is CDK ignoring the role path? Is this intended?
Thanks for your help,
Andrea.
The easiest workaround I could think of would be to use the role name instead of the role ARN.
Instead of:
role_arn = iam.Role.from_role_arn(self, "Role", role_arn=<role_arn>)
Use:
role_name = iam.Role.from_role_name(self, "Role", role_name=<role_name>)

can't use log analytics workspace in a different subscription? terraform azurerm policy assignment

I'm using Terraform to write Azure policy as code, and I've found two problems:
1. I can't seem to use a Log Analytics workspace that is in a different subscription; within the same subscription, it's fine.
2. For policies that need a managed identity, I can't seem to assign the correct rights to it.
resource "azurerm_policy_assignment" "Enable_Azure_Monitor_for_VMs" {
name = "Enable Azure Monitor for VMs"
scope = data.azurerm_subscription.current.id
policy_definition_id = "/providers/Microsoft.Authorization/policySetDefinitions/55f3eceb-5573-4f18-9695-226972c6d74a"
description = "Enable Azure Monitor for the virtual machines (VMs) in the specified scope (management group, subscription or resource group). Takes Log Analytics workspace as parameter."
display_name = "Enable Azure Monitor for VMs"
location = var.location
metadata = jsonencode(
{
"category" : "General"
})
parameters = jsonencode({
"logAnalytics_1" : {
"value" : var.log_analytics_workspace_ID
}
})
identity {
type = "SystemAssigned"
}
}
resource "azurerm_role_assignment" "vm_policy_msi_assignment" {
scope = azurerm_policy_assignment.Enable_Azure_Monitor_for_VMs.scope
role_definition_name = "Contributor"
principal_id = azurerm_policy_assignment.Enable_Azure_Monitor_for_VMs.identity[0].principal_id
}
For var.log_analytics_workspace_ID, if I use the workspace ID that is in the same subscription as the policy, it works fine, but if I use a workspace ID from a different subscription, the workspace field is blank after deployment.
Also, for the azurerm_role_assignment "vm_policy_msi_assignment", I have already given myself the User Access Administrator role, but after deployment "This identity currently has the following permissions:" is still blank?
I got an answer to my own question :)
1. This is not something designed well in Azure, I reckon. MS states: "a Managed Identity (MSI) is created for each policy assignment that contains DeployIfNotExists effects in the definitions. The required permission for the target assignment scope is managed automatically. However, if the remediation tasks need to interact with resources outside of the assignment scope, you will need to manually configure the required permissions." This means the system-generated managed identity that needs access to a Log Analytics workspace in another subscription must be granted Log Analytics Contributor rights on that workspace manually. Also, since you can't use a user-assigned managed identity here, you can't pre-populate this. So if you want to achieve this in Terraform, it seems you have to run the policy assignment twice: the first run just to get the identity's ID, then a manual (or scripted) step to assign the permission, then the policy assignment again to point at the resource.
2. The identity was actually given the Contributor rights; you just have to go into the subscription's RBAC view to see it.

aws-cdk: I cannot add access permission to existing SQS

I have code where I need to grant send-message permissions on an existing SQS queue.
I have this code in the aws-cdk, but it is not working: no access permissions get added.
const sqsQ = sqs.Queue.fromQueueArn(this, "some-id", "arn:aws:sqs:us-east-2:SOME-ACCOUNT:QUEUE-NAME");
sqsQ.grantSendMessages(new iam.ServicePrincipal("events.amazonaws.com"));
I don't think it's possible to grant permissions to an existing resource in CDK. Any time you import a resource into your stack using something like fromQueueArn, you can think of it as a read-only reference to the resource.
In other words, you can only update resources which are managed by your CDK code.
You have basically 2 options here:
Move the original SQS queue into your CDK-managed stack. You can do this using the CloudFormation resource import feature (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/resource-import.html)
Modify the SQS permissions outside CDK, in the place where the queue was originally defined.
Try something like this instead
sqsQ.addToResourcePolicy(
  new iam.PolicyStatement({
    effect: iam.Effect.ALLOW,
    principals: [new iam.ServicePrincipal("events.amazonaws.com")],
    actions: ["sqs:SendMessage"],
    resources: [sqsQ.queueArn],
    conditions: {
      ArnEquals: {
        "aws:SourceArn": <ruleArn or whatever needs permissions here>,
      },
    },
  })
);

Create CfnDBCluster in non-default VPC?

I'm trying to create a serverless Aurora database with the AWS CDK (1.19.0). However, it is always created in the default VPC of the region. If I specify a vpc_security_group_id, CloudFormation fails because the provided security group is in the VPC created in the same stack as the Aurora DB:
"The DB instance and EC2 security group are in different VPCs."
Here is my code sample:
from aws_cdk import (
    core,
    aws_rds as rds,
    aws_ec2 as ec2,
)


class CdkAuroraStack(core.Stack):
    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        # The code that defines your stack goes here
        vpc = ec2.Vpc(self, "VPC")
        sg = ec2.SecurityGroup(self, "SecurityGroup",
            vpc=vpc,
            allow_all_outbound=True,
        )
        cluster = rds.CfnDBCluster(self, "AuroraDB",
            engine="aurora",
            engine_mode="serverless",
            master_username="admin",
            master_user_password="password",
            database_name="databasename",
            vpc_security_group_ids=[
                sg.security_group_id,
            ],
        )
Am I missing something, and is it possible to create the CfnDBCluster in a specific VPC, or is this just not possible at the moment?
Thanks for any help and advice. Have a nice day!
You should create a DB subnet group and include only the subnets you want Amazon RDS to launch instances into. Amazon RDS creates a DB subnet group in the default VPC if none is specified.
You can use the db_subnet_group_name property to specify your subnets, but it is better to use high-level constructs. In this case, there is one called DatabaseCluster.
cluster = rds.DatabaseCluster(
    scope=self,
    id="AuroraDB",
    engine=rds.DatabaseClusterEngine.AURORA,
    master_user=rds.Login(
        username="admin",
        password="Do not put passwords in your CDK code directly",
    ),
    instance_props={
        "instance_type": ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.SMALL),
        "vpc_subnets": {
            "subnet_type": ec2.SubnetType.PRIVATE,
        },
        "vpc": vpc,
        "security_group": sg,
    },
)
Do not specify the password attribute for your database; by default, CDK assigns a password generated by Secrets Manager.
Note that this construct is still experimental, which means there might be breaking changes in the future.
