aws-cdk: I cannot add access permission to existing SQS - amazon-sqs

I have code where I need to grant send-message permissions on an existing SQS queue. I have this code in the AWS CDK, but it is not working: no access permission gets added.
const sqsQ = sqs.Queue.fromQueueArn(this, "some-id", "arn:aws:sqs:us-east-2:SOME-ACCOUNT:QUEUE-NAME");
sqsQ.grantSendMessages(new iam.ServicePrincipal("events.amazonaws.com"));

I don't think it's possible to grant permissions to an existing resource in CDK. Any time you import a resource into your stack using something like fromQueueArn, you can think of it as a read-only reference to the resource.
In other words, you can only update resources which are managed by your CDK code.
You have basically 2 options here:
Move the original SQS queue into your CDK-managed stack. You can do this using the CloudFormation resource import feature (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/resource-import.html)
Modify SQS permissions outside CDK in the place where it was originally defined.
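One way to confirm this from code is to check what CDK reports when you try to modify the imported queue's policy: for a queue imported with fromQueueArn, addToResourcePolicy typically reports that no statement was added. A minimal sketch, assuming a reasonably recent @aws-cdk API (v1-style module paths shown here):
import * as cdk from "@aws-cdk/core";
import * as iam from "@aws-cdk/aws-iam";
import * as sqs from "@aws-cdk/aws-sqs";

const stack = new cdk.Stack(new cdk.App(), "ImportedQueueCheck");

// Imported (referenced) queue: CDK does not manage its resource policy.
const sqsQ = sqs.Queue.fromQueueArn(stack, "some-id", "arn:aws:sqs:us-east-2:SOME-ACCOUNT:QUEUE-NAME");

const result = sqsQ.addToResourcePolicy(new iam.PolicyStatement({
  effect: iam.Effect.ALLOW,
  principals: [new iam.ServicePrincipal("events.amazonaws.com")],
  actions: ["sqs:SendMessage"],
  resources: [sqsQ.queueArn],
}));

// Expected to print false for an imported queue: nothing was added to the template.
console.log("statement added?", result.statementAdded);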

Try something like this instead:
sqsQ.addToResourcePolicy(
  new PolicyStatement({
    effect: Effect.ALLOW,
    principals: [new ServicePrincipal("events.amazonaws.com")],
    actions: ["sqs:SendMessage"],
    resources: [sqsQ.queueArn],
    conditions: {
      ArnEquals: {
        "aws:SourceArn": <ruleArn or whatever needs permissions here>,
      },
    },
  })
);

Related

Why is the Network Watcher on Azure not destroyed by Terraform?

I have a simple Terraform configuration to create an Azure virtual network. When I run plan and then apply, a virtual network is created inside a resource group as expected. But in addition to this resource group, one more is created with the name NetworkWatcherRG, and inside it I see a Network Watcher.
Now when I run terraform destroy, I expect that everything is cleaned up and all the resource groups are destroyed. Instead, everything except the NetworkWatcherRG and the Network Watcher inside it is destroyed.
It looks like the Network Watcher, along with its resource group, is NOT managed by Terraform. What am I missing?
The Network Watcher is not immediately obvious; it isn't revealed right away. To see it, you need to go to the simplified view of the resource groups and click the Refresh button at least 5 times (with a 2-second gap each time), or wait a long time and then click Refresh.
So what is this Network Watcher? Is Azure creating it by itself, outside of Terraform's management?
My Terraform configuration file is as follows.
# Terraform settings block
terraform {
  required_version = ">= 1.0.0"
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 2.0"
    }
  }
}
# Provider block
provider "azurerm" {
  features {}
}
# Create virtual network
resource "azurerm_virtual_network" "myvnet" {
  name                = "vivek-1-vnet"
  address_space       = ["10.0.0.0/16"] # This is a list (square brackets []); curly braces { } would make it a map.
  location            = azurerm_resource_group.myrg.location
  resource_group_name = azurerm_resource_group.myrg.name
  tags = { # This is a map, hence the { }.
    "name" = "vivek-1-vnet"
  }
}
# Resource-1: Azure Resource Group
resource "azurerm_resource_group" "myrg" {
  name     = "vivek-vnet-rg"
  location = var.resource_group_location
}
variable "resource_group_location" {
  default     = "centralindia"
  description = "Location of the resource group."
}
And finally the commands I use are as follows.
terraform fmt
terraform init
terraform validate
terraform plan -out main.tfplan
terraform apply main.tfplan
terraform plan -destroy -out main.destroy.tfplan
terraform apply main.destroy.tfplan
I read the response from @RahulKumarShaw-MT. I believe the answer, and it makes complete sense that Terraform won't destroy resources it didn't create (unless someone can demonstrate otherwise). That said, I was able to delete the NetworkWatcherRG group using Terraform.
What I did was declare a network watcher as one of my resources using azurerm_network_watcher (see https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/network_watcher) in the same Terraform script where I requested a virtual machine in another, separate resource group. I think you created a vnet; my script creates a vnet too, which is presumably why Azure concludes that a Network Watcher is needed. I named the resource group that contains my network watcher whatever I wanted; it doesn't have to be 'NetworkWatcherRG'. I watched the resource group be created and destroyed successfully with Terraform (using terraform apply and terraform destroy, respectively) along with my VM and vnet resources. At the end, I refreshed the Azure Portal page and saw no resource groups or resources left in my test subscription.
I'm not an Azure expert, but I suspect that if Azure already sees a Network Watcher present, it won't create an additional one when Terraform creates resources that need it (in my case a VM and a vnet), as long as Terraform creates that watcher before Azure gets the chance to.
Before applying the Terraform code, I checked my resource groups and the Network Watcher resource group was already there; this resource group is created by default on the Azure side.
As Mike-Ubezzi wrote on Microsoft forums:
Network Watcher resources are located in the hidden NetworkWatcherRG resource group, which is created automatically. For example, the NSG Flow Logs resource is a child resource of Network Watcher and is enabled in the NetworkWatcherRG.
The Network Watcher resource represents the backend service for Network Watcher and is fully managed by Azure. Customers do not need to manage it. Operations like move are not supported on the resource. However, the resource can be deleted.
So terraform destroy will only delete the resources created by you (those recorded in the .tfstate file). This is the reason you won't be able to delete the NetworkWatcherRG resource group.

AWS CDK - inline IAM Policies with conflicting names are generated for different stacks using a shared role

I'm using the CDK to deploy several stacks, and one of the roles used is shared across multiple stacks. The constructs (e.g. CodeBuildAction) which use the role frequently attach the necessary permissions as an inline policy. However, even though CDK knows it is an "imported" role, the generated inline policy name is not unique across stacks, so both CloudFormation stacks contain the same Policy resource and fight over its contents. (Neither stack contains the Role resource.)
import * as cdk from "@aws-cdk/core";
import * as iam from "@aws-cdk/aws-iam";

const sharedRoleArn = "arn:aws:iam::1111111111:role/MyLambdaRole";
const app = new cdk.App();

const stackOne = new cdk.Stack(app, "StackOne");
const roleRefOne = iam.Role.fromRoleArn(stackOne, "SharedRole", sharedRoleArn);
// Under normal circumstances, this is called inside constructs defined by AWS
// (like a CodeBuildAction that grants permission to access Artifact S3 buckets, etc)
roleRefOne.addToPrincipalPolicy(new iam.PolicyStatement({
  actions: ["s3:ListBucket"],
  resources: ["*"],
  effect: iam.Effect.ALLOW,
}));

const stackTwo = new cdk.Stack(app, "StackTwo");
const roleRefTwo = iam.Role.fromRoleArn(stackTwo, "SharedRole", sharedRoleArn);
roleRefTwo.addToPrincipalPolicy(new iam.PolicyStatement({
  actions: ["dynamodb:List*"],
  resources: ["*"],
  effect: iam.Effect.ALLOW,
}));
The following are fragments of the cloud assembly generated for the two stacks:
SharedRolePolicyA1DDBB1E:
  Type: AWS::IAM::Policy
  Properties:
    PolicyDocument:
      Statement:
        - Action: s3:ListBucket
          Effect: Allow
          Resource: "*"
      Version: "2012-10-17"
    PolicyName: SharedRolePolicyA1DDBB1E
    Roles:
      - MyLambdaRole
  Metadata:
    aws:cdk:path: StackOne/SharedRole/Policy/Resource

SharedRolePolicyA1DDBB1E:
  Type: AWS::IAM::Policy
  Properties:
    PolicyDocument:
      Statement:
        - Action: dynamodb:List*
          Effect: Allow
          Resource: "*"
      Version: "2012-10-17"
    PolicyName: SharedRolePolicyA1DDBB1E
    Roles:
      - MyLambdaRole
  Metadata:
    aws:cdk:path: StackTwo/SharedRole/Policy/Resource
You can see above that the aws:cdk:path values for the two policies are different, but they end up with the same name (SharedRolePolicyA1DDBB1E). That is used as the physical name of the inline policy attached to the MyLambdaRole role. (The same behavior occurs for stacks in separate "Apps" as well.)
There's no affordance for setting the PolicyName of the "default policy" generated for a role (or for choosing which policies a construct attaches permissions to). I could also make the shared role immutable (using { mutable: false } on fromRoleArn), but then I would need to reconstruct the potentially complicated policies a set of constructs would have given the role and attach them myself.
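For reference, the immutable import mentioned above looks roughly like this (a sketch, reusing stackOne and sharedRoleArn from the snippet above; with mutable: false, grants against the role become no-ops instead of generating an inline policy):
const immutableRole = iam.Role.fromRoleArn(stackOne, "SharedRoleImmutable", sharedRoleArn, {
  mutable: false, // CDK will not attach any policies to this role
});

// Constructs that try to grant permissions to this role will skip it
// (addToPrincipalPolicy reports that no statement was added), so the role
// must already carry every permission the constructs would have granted.
const outcome = immutableRole.addToPrincipalPolicy(new iam.PolicyStatement({
  actions: ["s3:ListBucket"],
  resources: ["*"],
}));
console.log(outcome.statementAdded); // false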
I was able to work around the issue by templating the stack name into the imported role's "id", as in:
const stack = cdk.Stack.of(scope)
const role = iam.Role.fromRoleArn(scope, `${stack.stackName}SharedRole`, sharedRoleArn);
where I construct my role.
Is this expected behavior? Do I misunderstand something about imported resources with CDK? Is there a better alternative? (My understanding with the construct ids is that they are only intended to need to be unique within a given scope.)

can't use log analytics workspace in a different subscription? terraform azurerm policy assignment

I'm using Terraform to write Azure policy as code.
I found two problems:
1. I can't seem to use a Log Analytics workspace that is in a different subscription; within the same subscription it's fine.
2. For policies that need a managed identity, I can't seem to assign the correct rights to it.
resource "azurerm_policy_assignment" "Enable_Azure_Monitor_for_VMs" {
name = "Enable Azure Monitor for VMs"
scope = data.azurerm_subscription.current.id
policy_definition_id = "/providers/Microsoft.Authorization/policySetDefinitions/55f3eceb-5573-4f18-9695-226972c6d74a"
description = "Enable Azure Monitor for the virtual machines (VMs) in the specified scope (management group, subscription or resource group). Takes Log Analytics workspace as parameter."
display_name = "Enable Azure Monitor for VMs"
location = var.location
metadata = jsonencode(
{
"category" : "General"
})
parameters = jsonencode({
"logAnalytics_1" : {
"value" : var.log_analytics_workspace_ID
}
})
identity {
type = "SystemAssigned"
}
}
resource "azurerm_role_assignment" "vm_policy_msi_assignment" {
scope = azurerm_policy_assignment.Enable_Azure_Monitor_for_VMs.scope
role_definition_name = "Contributor"
principal_id = azurerm_policy_assignment.Enable_Azure_Monitor_for_VMs.identity[0].principal_id
}
For var.log_analytics_workspace_ID, if I use a workspace ID that is in the same subscription as the policy, it works fine, but if I use a workspace ID from a different subscription, the workspace field is blank after deployment.
Also, for the resource "azurerm_role_assignment" "vm_policy_msi_assignment", I have already given myself the user access management role, but after deployment, "This identity currently has the following permissions:" is still blank. Why?
I got an answer to my own question :)
1. This is not something designed well in Azure, I reckon.
MS states: "a Managed Identity (MSI) is created for each policy assignment that contains DeployIfNotExists effects in the definitions. The required permission for the target assignment scope is managed automatically. However, if the remediation tasks need to interact with resources outside of the assignment scope, you will need to manually configure the required permissions."
This means the system-generated managed identity, which needs access to the Log Analytics workspace in the other subscription, has to be granted Log Analytics Contributor rights on that workspace manually.
Also, since you can't use a user-assigned managed identity here, you can't pre-populate this.
So if you want to achieve this in Terraform, it seems you have to run the policy assignment twice: the first run is just to get the identity's ID, then you assign the permission manually (or via a script), then you run the policy assignment again to point it at the resource.
2. The identity was actually given the Contributor rights; you just have to go into the subscription's RBAC view to see it.

How to fetch SSM Parameters from two different accounts using AWS CDK

I have a scenario where I'm using CodePipeline to deploy my cdk project from a tools account to several environment accounts.
The way my pipeline is deploying is by running cdk deploy from within a CodeBuild job.
My team has decided to use SSM Parameter Store for configuration, and we ended up with some parameters living in the environment account, for example the VPC_ID (resources/vpc/id), which I can read at deployment time via ssm.StringParameter.valueForStringParameter.
However, other parameters live in the tools account, such as the account IDs of my environment accounts (environment/nonprod/account/id) and other global config. I'm having trouble fetching those values.
At the moment, the only way I could think of was to read all those values in a previous step and load them into the context values.
Is there a more elegant approach for this problem? I was hoping I could specify in which account to get the SSM values from. Any ideas?
Thank you.
As you already stated, there is no native support for that. I am also using CodePipeline for cross-account deployments, so all the automation and product-specific parameters are stored in a secured account, and CodePipeline deploys the resources using CloudFormation as an action provider.
Cross-account resolution of SSM parameters isn't supported, so in the end I added an extra step (stage) to my CodePipeline: nothing more than a CodeBuild project that runs a script in a containerized environment, and the script then "syncs" the parameters from the automation account to the destination account.
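A rough sketch of what such a sync script can look like, assuming the AWS SDK for JavaScript v2 and an assumable role in the destination account (the parameter and role names here are illustrative, not from the question):
import * as AWS from "aws-sdk";

// Runs in the automation (tools) account; copies one parameter into a destination account.
async function syncParameter(name: string, destinationRoleArn: string): Promise<void> {
  // Read the parameter in the account this script runs in (the tools account).
  const sourceSsm = new AWS.SSM();
  const source = await sourceSsm.getParameter({ Name: name, WithDecryption: true }).promise();

  // Assume a role in the destination account to write the copy there.
  const sts = new AWS.STS();
  const assumed = await sts.assumeRole({
    RoleArn: destinationRoleArn, // hypothetical cross-account role
    RoleSessionName: "ssm-param-sync",
  }).promise();

  const destinationSsm = new AWS.SSM({
    credentials: new AWS.Credentials({
      accessKeyId: assumed.Credentials!.AccessKeyId,
      secretAccessKey: assumed.Credentials!.SecretAccessKey,
      sessionToken: assumed.Credentials!.SessionToken,
    }),
  });

  await destinationSsm.putParameter({
    Name: name,
    Value: source.Parameter!.Value!,
    Type: "String",
    Overwrite: true,
  }).promise();
}

// Example: sync the global account-id parameter mentioned in the question.
syncParameter("/environment/nonprod/account/id", "arn:aws:iam::DEST_ACCOUNT:role/ParamSyncRole")
  .catch(err => { console.error(err); process.exit(1); });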
As part of your pipeline, I would add a preliminary step to execute a Lambda. That Lambda can then execute whatever queries you wish to obtain whatever metadata/config that is required. The output from that Lambda can then be passed in to the CodeBuild step.
e.g. within the Lambda:
import * as AWS from 'aws-sdk';
import { CodePipelineEvent, Context } from 'aws-lambda';

export class ConfigFetcher {
  codepipeline = new AWS.CodePipeline();

  async fetchConfig(event: CodePipelineEvent, context: Context): Promise<void> {
    // Retrieve the Job ID from the Lambda action
    const jobId = event['CodePipeline.job'].id;
    // now get your config by executing whatever queries you need, even cross-account, via the SDK
    // we assume that the answer is in the variable someValue
    const params = {
      jobId: jobId,
      outputVariables: {
        MY_CONFIG: someValue,
      },
    };
    // now tell CodePipeline you're done
    await this.codepipeline.putJobSuccessResult(params).promise().catch(err => {
      console.error('Error reporting build success to CodePipeline: ' + err);
      throw err;
    });
    // make sure you have some sort of catch wrapping the above to post a failure to CodePipeline
    // ...
  }
}

const configFetcher = new ConfigFetcher();

exports.handler = async function fetchConfigMetadata(event: CodePipelineEvent, context: Context): Promise<void> {
  return configFetcher.fetchConfig(event, context);
};
Assuming that you create your pipeline using CDK, then your Lambda step will be created using something like this:
const fetcherAction = new LambdaInvokeAction({
  actionName: 'FetchConfigMetadata',
  lambda: configFetcher,
  variablesNamespace: 'ConfigMetadata',
});
Note the use of variablesNamespace: we need to refer to this later in order to retrieve the values from the Lambda's output and insert them as env variables into the CodeBuild environment.
Now our CodeBuild definition, again assuming we create using CDK:
new CodeBuildAction({
  // ...
  environmentVariables: {
    MY_CONFIG: {
      type: BuildEnvironmentVariableType.PLAINTEXT,
      value: '#{ConfigMetadata.MY_CONFIG}',
    },
  },
});
We can call the variable whatever we want within CodeBuild, but note that ConfigMetadata.MY_CONFIG needs to match the namespace and output value of the Lambda.
You can have your lambda do anything you want to retrieve whatever data it needs - it's just going to need to be given appropriate permissions to reach across into other AWS accounts if required, which you can do using role assumption. Using a Lambda as a pipeline step will be a LOT faster than using a CodeBuild step in the pipeline, plus it's easier to change: if you write your Lambda code in Typescript/JS or Python, you can even use the AWS console to do in-place edits whilst you test that it executes correctly.
AFAIK there is no native way to achieve what you described. If there is a way, I'd like to know too. I believe you can use a CloudFormation custom resource backed by Lambda for this purpose.
You can pass parameters to the lambda request and get information back from the lambda response.
See https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources-lambda.html, https://www.2ndwatch.com/blog/a-step-by-step-guide-on-using-aws-lambda-backed-custom-resources-with-amazon-cfts/ and https://docs.aws.amazon.com/cdk/api/latest/docs/custom-resources-readme.html for more information.
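In CDK specifically, the same idea can be sketched with the custom-resources module's AwsCustomResource, which can assume a role in the tools account for the SDK call (a sketch only: the role name and its trust setup are hypothetical, and the assumedRoleArn option assumes a reasonably recent @aws-cdk/custom-resources version):
import * as cdk from '@aws-cdk/core';
import * as cr from '@aws-cdk/custom-resources';

export class ConfigStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string) {
    super(scope, id);

    // Reads an SSM parameter that lives in the tools account by assuming a role there.
    const toolsAccountParam = new cr.AwsCustomResource(this, 'ToolsAccountParam', {
      onUpdate: {
        service: 'SSM',
        action: 'getParameter',
        parameters: { Name: '/environment/nonprod/account/id' },
        // Hypothetical role in the tools account that trusts this account and allows ssm:GetParameter.
        assumedRoleArn: 'arn:aws:iam::TOOLS_ACCOUNT_ID:role/CrossAccountSsmRead',
        physicalResourceId: cr.PhysicalResourceId.of('tools-account-param'),
      },
      policy: cr.AwsCustomResourcePolicy.fromSdkCalls({
        resources: cr.AwsCustomResourcePolicy.ANY_RESOURCE,
      }),
    });

    // Token that resolves to the parameter's value at deploy time.
    const nonprodAccountId = toolsAccountParam.getResponseField('Parameter.Value');
  }
}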
This question is a year old, but a simpler method I found for retrieving parameters from your tools/deployment account is to specify them as env variables in your buildspec file. CodeBuild will always pull these from whatever account your job is running in (which in this question's scenario would be the tools account).
To pull parameters from your target environment accounts, it's best to use the CDK SSM approach suggested by the question author.
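If the pipeline itself is defined in CDK, the buildspec env mapping can equivalently be expressed on the CodeBuild action, using an environment variable of type PARAMETER_STORE that CodeBuild resolves from the account the build runs in (the tools account here); the variable and parameter names are illustrative:
import { BuildEnvironmentVariableType } from '@aws-cdk/aws-codebuild';
import { CodeBuildAction } from '@aws-cdk/aws-codepipeline-actions';

new CodeBuildAction({
  // ... actionName, project, input, etc. as in your existing pipeline
  environmentVariables: {
    NONPROD_ACCOUNT_ID: {
      // CodeBuild resolves this at build time from Parameter Store in the account the job runs in.
      type: BuildEnvironmentVariableType.PARAMETER_STORE,
      value: '/environment/nonprod/account/id',
    },
  },
});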

How do I use ServiceWorker without a separate JS file?

We create service workers by
navigator.serviceWorker.register('sw.js', { scope: '/' });
We can create new Workers without an external file like this,
var worker = function() { console.log('worker called'); };
var blob = new Blob(['(', worker.toString(), ')()'], {
  type: 'application/javascript'
});
var bloburl = URL.createObjectURL(blob);
var w = new Worker(bloburl);
With the approach of using blob to create ServiceWorkers, we will get a Security Error as the bloburl would be blob:chrome-extension..., and the origin won't be supported by Service Workers.
Is it possible to create a service worker without external file and use the scope as / ?
I would strongly recommend not trying to find a way around the requirement that the service worker implementation code live in a standalone file. There's a very important part of the service worker lifecycle, updates, that relies on your browser being able to fetch your registered service worker JavaScript resource periodically and do a byte-for-byte comparison to see if anything has changed.
If something has changed in your service worker code, then the new code will be considered the installing service worker, and the old service worker code will eventually be considered the redundant service worker as soon as all pages that have the old code registered are unloaded/closed.
While a bit difficult to wrap your head around at first, understanding and making use of the different service worker lifecycle states/events is important if you're concerned about cache management. If it weren't for this update logic, then once you registered a service worker for a given scope, it would never give up control, and you'd be stuck if you had a bug in your code or needed to add new functionality.
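To make the lifecycle concrete, here is a small page-side sketch (file name and scope are illustrative) that registers a service worker from a real file and logs an update moving through the installing/waiting states:
// In the page (not inside the worker): register and observe updates.
navigator.serviceWorker.register('/sw.js', { scope: '/' }).then(registration => {
  // Fires when a byte-different /sw.js has been fetched and starts installing.
  registration.addEventListener('updatefound', () => {
    const newWorker = registration.installing;
    if (!newWorker) return;
    newWorker.addEventListener('statechange', () => {
      // Typical progression: installing -> installed (waiting) -> activating -> activated.
      console.log('new service worker state:', newWorker.state);
    });
  });
});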
One hacky way is to have the same JavaScript file detect the context it is running in and act both as the service worker and as the script that registers it.
HTML
<script src="main.js"></script>
main.js
if (!this.document) {
  self.addEventListener('install', function(e) {
    console.log('service worker installation');
  });
} else {
  navigator.serviceWorker.register('main.js');
}
To avoid maintaining all of this in one big main.js, we could use:
if (!this.document) {
  // service worker code
  importScripts('sw.js');
} else {
  // load document.js by injecting a script tag
}
But at that point, falling back to a separate sw.js file for the service worker is probably the better solution. This approach is mainly helpful if you want a single entry point for your scripts.
