There is plenty of example code, but the rapidly changing cdk package isn't helping me find working examples of some (I thought) simple things. E.g., even an import I found in an example fails:
import { VpcNetworkRef } from '@aws-cdk/aws-ec2';
error TS2724: Module '"../node_modules/@aws-cdk/aws-ec2/lib"' has no exported member 'VpcNetworkRef'. Did you mean 'IVpcNetwork'?
Why does the example EC2 code not show creation of raw EC2 instances?
What would help is example CDK code that uses a hardcoded VpcId and SecurityGroupId (I'll pass these in as context values) to create a pair of new subnets (i.e., one for each availability zone) into which we place a pair of EC2 instances.
Again, the target VPC and SecurityGroup for the instances already exist. We just (today) create new subnets as we add new sets of EC2 instances.
We have lots of distinct environments (sets of AWS infrastructure) that currently share a single account, VPC, and security group. This will change, but my current goal is to see if we can use the Cloud Development Kit to create new distinct environments in this existing model. We have a CloudFormation template today.
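For what it's worth, the per-AZ CIDR bookkeeping behind "a pair of new subnets" is plain integer arithmetic. Here is a self-contained sketch; the helper name and the example ranges are mine, not from any CDK API:

```typescript
// Sketch: carve `count` child CIDR blocks of size /childPrefix out of a
// parent block, one per availability zone. Pure arithmetic, no AWS calls.
function childCidrs(parent: string, childPrefix: number, count: number): string[] {
  const [base, prefixStr] = parent.split('/');
  const parentPrefix = parseInt(prefixStr, 10);
  const octets = base.split('.').map(Number);
  // Pack the dotted quad into an unsigned 32-bit integer.
  const baseInt = ((octets[0] << 24) >>> 0) + (octets[1] << 16) + (octets[2] << 8) + octets[3];
  const step = 2 ** (32 - childPrefix);
  if (count * step > 2 ** (32 - parentPrefix)) {
    throw new Error('parent block cannot hold that many child subnets');
  }
  const cidrs: string[] = [];
  for (let i = 0; i < count; i++) {
    const n = baseInt + i * step;
    cidrs.push(`${(n >>> 24) & 255}.${(n >>> 16) & 255}.${(n >>> 8) & 255}.${n & 255}/${childPrefix}`);
  }
  return cidrs;
}
```

So splitting a hypothetical `10.0.0.0/16` into two /24s yields `10.0.0.0/24` and `10.0.1.0/24`, which would feed the `cidrBlock` of each new subnet.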
I can't tell where to start. The examples for referencing existing VPCs don't compile:
import { VpcNetworkRef } from '@aws-cdk/aws-ec2';
const vpc = VpcNetworkRef.import(this, 'unused', {vpcId, availabilityZones: ['unused']});
EDIT:
Discussions on Gitter helped me answer this, including how to add a bare instance:
const vpc = ec2.VpcNetwork.import(this, 'YOUR-VPC-NAME', {
vpcId: 'your-vpc-id',
availabilityZones: ['list', 'some', 'zones'],
publicSubnetIds: ['list', 'some', 'subnets'],
privateSubnetIds: ['list', 'some', 'more'],
});
const sg = ec2.SecurityGroup.import(this, 'YOUR-SG-NAME', {
securityGroupId: 'your-sg-id'
});
// can add subnets to existing..
const newSubnet = new ec2.VpcSubnet(this, "a name", {
availabilityZone: "us-west-2b",
cidrBlock: "a.b.c.d/e",
vpcId: vpc.vpcId
});
// add bare instance
new ec2.CfnInstance(this, "instance name", {
imageId: "an ami",
securityGroupIds: [sg.securityGroupId],
subnetId: newSubnet.subnetId,
instanceType: "an instance type",
tags: [{ key: "key", value: "value"}]
});
No further answers needed... for me.
import ec2 = require('@aws-cdk/aws-ec2');
// looking up a VPC by its name
const vpc = ec2.Vpc.fromLookup(this, 'VPC', {
vpcName: 'VPC-Name'
});
// looking up an SG by its ID
const sg = ec2.SecurityGroup.fromSecurityGroupId(this, 'SG', 'SG-ID');
// creating the EC2 instance
const instance = new ec2.Instance(this, 'Instance', {
vpc: vpc,
securityGroup: sg,
instanceType: new ec2.InstanceType('m4.large'),
machineImage: new ec2.GenericLinuxImage({
'us-east-1': 'ami-abcdef' // <- add your ami-region mapping here
}),
});
I was running into the issue of importing an existing VPC/subnet/security group as well. I believe the API has changed a bit since the original post. Here is how to do it as of v1.18.0:
import { Construct, Stack, StackProps } from '@aws-cdk/core';
import { SecurityGroup, Subnet, SubnetType, Vpc } from "@aws-cdk/aws-ec2";
const stackProps: StackProps = {
env: {
region: 'your region',
account: 'your account'
},
};
export class MyStack extends Stack {
constructor(scope: Construct, id: string) {
super(scope, id, stackProps);
const vpc = Vpc.fromVpcAttributes(this, 'vpc', {
vpcId: 'your vpc id',
availabilityZones: ['your zone'],
privateSubnetIds: ['your subnet id']
});
//Get subnets that already exists off your current vpc.
const subnets = vpc.selectSubnets({subnetType: SubnetType.PRIVATE});
//Create a subnet in the existing vpc
const newSubnet = new Subnet(this, 'subnet', {
availabilityZone: 'your zone',
cidrBlock: 'a.b.c.d/e',
vpcId: vpc.vpcId
});
//Get an existing security group.
const securityGroup = SecurityGroup.fromSecurityGroupId(this, 'securitygroup', 'your security group id');
}
}
I'm using the AWS CDK to deploy code and infrastructure from a monorepo that includes both my front and backend logic (along with the actual CDK constructs). I'm using the CDK Pipelines library to kick off a build on every commit to my main git branch. The pipeline should:
deploy all the infrastructure. Which at the moment is just an API gateway with an endpoint powered by a Lambda function, and a S3 bucket that will hold the built frontend.
configure and build the frontend by providing the API URL that was just created.
move the built frontend files to the S3 bucket.
My Pipeline is in a different account than the actual deployed infrastructure. I've bootstrapped the environments and set up the correct trust policies. I've succeeded in the first two points by creating the constructs and saving the API URL as a CfnOutput. Here's a simplified version of the Stack:
class MyStack extends Stack {
constructor(scope, id, props) {
super(scope, id, props);
const api = new aws_apigateway.LambdaRestApi(this, id, {
handler: lambda,
});
this.apiURL = new CfnOutput(this, 'api_url', { value: api.url });
const bucket = new aws_s3.Bucket(this, 'FrontendBucket', {
bucketName: 'frontend-bucket',
...
});
this.bucketName = new CfnOutput(this, 'bucket_name', {
exportName: 'frontend-bucket-name',
value: bucket.bucketName
})
}
}
Here's my pipeline stage:
export class MyStage extends Stage {
public readonly apiURL: CfnOutput;
public readonly bucketName: CfnOutput;
constructor(scope, id, props) {
super(scope, id, props);
const newStack = new MyStack(this, 'demo-stack', props);
this.apiURL = newStack.apiURL;
this.bucketName = newStack.bucketName;
}
}
And finally here's my pipeline:
export class MyPipelineStack extends Stack {
constructor(scope, id, props) {
super(scope, id, props);
const pipeline = new CodePipeline(this, 'pipeline', { ... });
const infrastructure = new MyStage(...);
// I can use my output to configure my frontend build with the right URL to the API.
// This seems to be working, or at least I don't receive an error
const frontend = new ShellStep('FrontendBuild', {
input: source,
commands: [
'cd frontend',
'npm ci',
'VITE_API_BASE_URL="$AWS_API_BASE_URL" npm run build'
],
primaryOutputDirectory: 'frontend/dist',
envFromCfnOutputs: {
AWS_API_BASE_URL: infrastructure.apiURL
}
})
// Now I need to move the built files to the S3 bucket
// I cannot get the name of the bucket however, it errors with the message:
// No export named frontend-bucket-name found. Rollback requested by user.
const bucket = aws_s3.Bucket.fromBucketAttributes(this, 'frontend-bucket', {
bucketName: infrastructure.bucketName.importValue,
account: 'account-the-bucket-is-in'
});
const s3Deploy = new customPipelineActionIMade(frontend.primaryOutput, bucket)
const postSteps = pipelines.Step.sequence([frontend, s3Deploy]);
pipeline.addStage(infrastructure, {
post: postSteps
});
}
}
I've tried everything I can think of to allow my pipeline to access that bucket name, but I always get the same error: "No export named frontend-bucket-name found. Rollback requested by user." The value doesn't seem to get exported from my stack, even though I'm doing something very similar for the API URL in the frontend build step.
If I take away the 'exportName' of the bucket and try to access the CfnOutput value directly, I get a "dependency cannot cross stage boundaries" error.
This seems like a pretty common use case - deploy infrastructure, then configure and deploy a frontend using those constructs, but I haven't been able to find anything that outlines this process. Any help is appreciated.
I'm trying to set a pair of Elastic IPs as the public facing addresses for a NetworkLoadBalancer object and running into issues. The console.log("CFN NLB"); line in the code below never executes because the load balancer definition throws the following error:
There are no 'Public' subnet groups in this VPC. Available types:
Subprocess exited with error 1
I'm doing it this way because there's no high-level way to assign existing Elastic IPs to a load balancer without using the Cfn escape hatch as discussed here.
If I enable the commented code in the NetworkLoadBalancer definition, the stack synths successfully but then I get the following when deploying:
You can specify either subnets or subnet mappings, not both (Service: AmazonElasticLoadBalancing; Status Code: 400; Error Code: ValidationError; Request ID: e4b90830-xxxx-4f13-8777-bcf56946781a; Proxy: null)
Code:
const pubSubnet1ID = 'subnet-xxxxxfa6d669cd496';
const pubSubnet2ID = 'subnet-xxxxxbaf8d2d77afb';
const pubSubnet1 = Subnet.fromSubnetId(this, 'pubSubnet1', pubSubnet1ID);
const pubSubnet2 = Subnet.fromSubnetId(this, 'pubSubnet2', pubSubnet2ID);
console.log("Tagging.");
Tags.of(pubSubnet1).add('aws-cdk:subnet-type', 'Public');
Tags.of(pubSubnet2).add('aws-cdk:subnet-type', 'Public');
console.log("Load Balancer...");
this.loadBalancer = new NetworkLoadBalancer(this, 'dnsLB', {
vpc: assets.vpc,
internetFacing: true,
crossZoneEnabled: true,
// vpcSubnets: {
// subnets: [pubSubnet1, pubSubnet2],
// },
});
console.log("CFN NLB");
this.cfnNLB = this.loadBalancer.node.defaultChild as CfnLoadBalancer;
console.log("Mappings");
const subnetMapping1: CfnLoadBalancer.SubnetMappingProperty = {
subnetId: pubSubnet1ID,
allocationId: assets.elasticIp1.attrAllocationId,
}
const subnetMapping2: CfnLoadBalancer.SubnetMappingProperty = {
subnetId: pubSubnet2ID,
allocationId: assets.elasticIp2.attrAllocationId,
}
console.log("Mapping assignment");
this.cfnNLB.subnetMappings = [subnetMapping1, subnetMapping2];
I've found references to CDK wanting a tag of aws-cdk:subnet-type with a value of Public and added that tag to our public subnets (both manually and programmatically), but the error remains unchanged.
I found the solution. Uncommenting the vpcSubnets: part of the loadBalancer definition allowed me to get past the first error message. To get around the "You can specify either subnets or subnet mappings, not both" message, I added
this.cfnNLB.addDeletionOverride('Properties.Subnets');
before setting the subnetMappings attribute.
I want to save an initial admin user to my DynamoDB table when initializing a CDK stack through a custom resource, and am unsure of the best way to securely pass through values for that user. My code uses dotenv and passes the values as environment variables right now:
import * as cdk from "@aws-cdk/core";
import * as lambda from "@aws-cdk/aws-lambda";
import * as dynamodb from "@aws-cdk/aws-dynamodb";
import * as customResource from "@aws-cdk/custom-resources";
require("dotenv").config();
export class CDKBackend extends cdk.Construct {
public readonly handler: lambda.Function;
constructor(scope: cdk.Construct, id: string) {
super(scope, id);
const tableName = "CDKBackendTable";
// not shown here but also:
// creates a dynamodb table for tableName and a seedData lambda with access to it
// also some lambdas for CRUD operations and an apiGateway.RestApi for them
const seedDataProvider = new customResource.Provider(this, "seedDataProvider", {
onEventHandler: seedDataLambda
});
new cdk.CustomResource(this, "SeedDataResource", {
serviceToken: seedDataProvider.serviceToken,
properties: {
tableName,
user: process.env.ADMIN,
password: process.env.ADMINPASSWORD,
salt: process.env.SALT
}
});
}
}
This code works, but is it safe to pass through ADMIN, ADMINPASSWORD, and SALT in this way? What are the security differences between this approach and accessing those values from AWS Secrets Manager? I also plan on using that SALT value when generating passwordDigest values for all new users, not just this admin user.
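For context, the digest scheme I have in mind for that SALT looks roughly like the following sketch (scrypt and hex encoding are just what I'm experimenting with; the function names are mine):

```typescript
import { scryptSync, timingSafeEqual } from 'crypto';

// Derive a fixed-length digest from a password and the shared salt.
function passwordDigest(password: string, salt: string): string {
  return scryptSync(password, salt, 32).toString('hex');
}

// Constant-time comparison against a stored digest.
function verifyPassword(password: string, salt: string, storedDigest: string): boolean {
  const candidate = Buffer.from(passwordDigest(password, salt), 'hex');
  const stored = Buffer.from(storedDigest, 'hex');
  return candidate.length === stored.length && timingSafeEqual(candidate, stored);
}
```

(I realize a per-user random salt is generally preferable to one shared SALT, which is part of why I'm asking about handling these values.)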
The properties values are evaluated at deployment time and become part of the CloudFormation template, which can be viewed in the AWS Web Console. Passing secrets around this way is therefore questionable from a security standpoint.
One way to overcome this is to store the secrets using AWS Secrets Manager; aws-cdk has good integration with Secrets Manager. Once you create a secret, you can import it via:
const mySecretFromName = secretsmanager.Secret.fromSecretNameV2(stack, 'SecretFromName', 'MySecret');
Unfortunately, there's no support for resolving CloudFormation dynamic references in AWS custom resources. You can resolve the secret yourself inside your lambda (seedDataLambda), though. The SqlRun repository provides an example.
Please remember to grant the custom resource lambda (seedDataLambda) access to the secret, e.g.
secret.grantRead(seedDataLambda)
I am currently trying to create an aggregator for all of the Config rules I created, so that a client has a centralized place to view Config metrics across all regions.
Here is my code to create the configAggregator:
//adding role for configAggregator
const configAggregatorRole = new iam.Role(this, 'configAggregatorRole' ,{
assumedBy: new iam.ServicePrincipal('config.amazonaws.com')
});
configAggregatorRole.addManagedPolicy(iam.ManagedPolicy.fromAwsManagedPolicyName('service-role/AWSConfigRoleforOrganizations'));
configAggregatorRole.addManagedPolicy(iam.ManagedPolicy.fromAwsManagedPolicyName('ReadOnlyAccess'));
//adding a content aggregator for managed config rules below
const globalConfigAggregator = new config.CfnConfigurationAggregator(this, 'globalConfigAggregator',{
configurationAggregatorName: 'globalConfigAggregator',
accountAggregationSources: [{
accountIds: [this.account]
}]
});
}
}
I am trying to figure out what I should pass to specify that this account and a given region should hold the aggregated view of all the Config rules in every region of the account. I am not sure how to do this. Thank you!
I want to deploy a Lex bot to my AWS account using CDK.
Looking at the API reference documentation I can't find a construct for Lex. Also, I found this issue on the CDK GitHub repository which confirms there is no CDK construct for Lex.
Is there any workaround to deploy the Lex bot, or another tool for doing this?
Edit: CloudFormation support for AWS Lex is now available, see Wesley Cheek's answer. Below is my original answer which solved the lack of CloudFormation support using custom resources.
There is! While perhaps a bit cumbersome, it's totally possible using custom resources.
Custom resources work by defining a lambda that handles creation and deletion events for the custom resource. Since it's possible to create and delete AWS Lex bots using the AWS API, we can make the lambda do this when the resource gets created or destroyed.
Here's a quick example I wrote in TS/JS:
CDK Code (TypeScript):
import * as path from 'path';
import * as cdk from '@aws-cdk/core';
import * as iam from '@aws-cdk/aws-iam';
import * as logs from '@aws-cdk/aws-logs';
import * as lambda from '@aws-cdk/aws-lambda';
import * as cr from '@aws-cdk/custom-resources';
export class CustomResourceExample extends cdk.Stack {
constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
super(scope, id, props);
// Lambda that will handle the different cloudformation resource events
const lexBotResourceHandler = new lambda.Function(this, 'LexBotResourceHandler', {
code: lambda.Code.fromAsset(path.join(__dirname, 'lambdas')),
handler: 'lexBotResourceHandler.handler',
runtime: lambda.Runtime.NODEJS_14_X,
});
lexBotResourceHandler.addToRolePolicy(new iam.PolicyStatement({
resources: ['*'],
actions: ['lex:PutBot', 'lex:DeleteBot']
}))
// Custom resource provider, specifies how the custom resources should be created
const lexBotResourceProvider = new cr.Provider(this, 'LexBotResourceProvider', {
onEventHandler: lexBotResourceHandler,
logRetention: logs.RetentionDays.ONE_DAY // Default is to keep forever
});
// The custom resource, creating one of these will invoke the handler and create the bot
new cdk.CustomResource(this, 'ExampleLexBot', {
serviceToken: lexBotResourceProvider.serviceToken,
// These options will be passed down to the lambda
properties: {
locale: 'en-US',
childDirected: false
}
})
}
}
Lambda Code (JavaScript):
const AWS = require('aws-sdk');
const Lex = new AWS.LexModelBuildingService();
const onCreate = async (event) => {
await Lex.putBot({
name: event.LogicalResourceId,
locale: event.ResourceProperties.locale,
childDirected: event.ResourceProperties.childDirected === 'true' // resource properties arrive as strings
}).promise();
};
const onUpdate = async (event) => {
// TODO: Not implemented
};
const onDelete = async (event) => {
await Lex.deleteBot({
name: event.LogicalResourceId
}).promise();
};
exports.handler = async (event) => {
switch (event.RequestType) {
case 'Create':
await onCreate(event);
break;
case 'Update':
await onUpdate(event);
break;
case 'Delete':
await onDelete(event);
break;
}
};
I admit it's a very bare-bones example but hopefully it's enough to get you or anyone reading started and see how it could be built upon by adding more options and more custom resources (for example for intentions).
Deploying Lex using CloudFormation is now possible.
CDK support has also been added but it's only available as an L1 construct, meaning the CDK code is basically going to look like CloudFormation.
Also, since this feature just came out, some features may be missing or buggy. I have been unable to find a way to do channel integrations, and have had some problems with using image response cards, but otherwise have successfully deployed a bot and connected it with Lambda/S3 using CDK.