How to get the long region name with AWS CDK - aws-cdk

How can one get the region name in AWS CDK?
Some SDKs provide Region objects (see also this question)
Inside a CDK construct, self.region is available, and the region_info module has a RegionInfo class, but it doesn't provide the long name, e.g. ap-southeast-1 -> Singapore.

AWS exposes region long names (like Asia Pacific (Singapore) for ap-southeast-1) as publicly available Parameter Store parameters. Look this value up at synth-time with ssm.StringParameter.valueFromLookup*. You need the region id for the parameter path, which a stack (or construct) can introspect.
// assuming CDK v2 (aws-cdk-lib); adjust the imports for v1 accordingly
import * as cdk from 'aws-cdk-lib';
import * as ssm from 'aws-cdk-lib/aws-ssm';
import { Construct } from 'constructs';

export class MyStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props: cdk.StackProps) {
    super(scope, id, props);

    const regionLongName: string = ssm.StringParameter.valueFromLookup(
      this,
      `/aws/service/global-infrastructure/regions/${this.region}/longName`
    );

    console.dir({ region: this.region, regionLongName });
  }
}
For Python:
aws_cdk.aws_ssm.StringParameter.value_from_lookup(
    scope=self,
    parameter_name=f'/aws/service/global-infrastructure/regions/{self.region}/longName')
cdk synth output:
{ region: 'us-east-1', regionLongName: 'US East (N. Virginia)' }
* The CDK's valueFromLookup context method will cache the looked-up parameter store value in cdk.context.json at synth-time. You may initially get a dummy value. Just synth again.
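For reference, after a successful lookup the cached value lands in cdk.context.json as an entry roughly like the following (the exact key format can differ between CDK versions, and the account id shown here is made up):
{
  "ssm:account=123456789012:parameterName=/aws/service/global-infrastructure/regions/us-east-1/longName:region=us-east-1": "US East (N. Virginia)"
}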

Related

Using the AWS CDK to deploy bucket, build frontend, then move files to bucket in a Code Pipeline

I'm using the AWS CDK to deploy code and infrastructure from a monorepo that includes both my frontend and backend logic (along with the actual CDK constructs). I'm using the CDK Pipelines library to kick off a build on every commit to my main git branch. The pipeline should:
1. Deploy all the infrastructure, which at the moment is just an API Gateway with an endpoint powered by a Lambda function, and an S3 bucket that will hold the built frontend.
2. Configure and build the frontend by providing the API URL that was just created.
3. Move the built frontend files to the S3 bucket.
My Pipeline is in a different account than the actual deployed infrastructure. I've bootstrapped the environments and set up the correct trust policies. I've succeeded in the first two points by creating the constructs and saving the API URL as a CfnOutput. Here's a simplified version of the Stack:
class MyStack extends Stack {
  constructor(scope, id, props) {
    super(scope, id, props);

    const api = new aws_apigateway.LambdaRestApi(this, id, {
      handler: lambda,
    });
    this.apiURL = new CfnOutput(this, 'api_url', { value: api.url });

    const bucket = new aws_s3.Bucket(this, 'frontend-bucket', {
      bucketName: 'frontend-bucket',
      ...
    });
    this.bucketName = new CfnOutput(this, 'bucket_name', {
      exportName: 'frontend-bucket-name',
      value: bucket.bucketName
    });
  }
}
Here's my pipeline stage:
export class MyStage extends Stage {
  public readonly apiURL: CfnOutput;
  public readonly bucketName: CfnOutput;

  constructor(scope, id, props) {
    super(scope, id, props);
    const backendStack = new MyStack(this, 'demo-stack', props);
    this.apiURL = backendStack.apiURL;
    this.bucketName = backendStack.bucketName;
  }
}
And finally here's my pipeline:
export class MyPipelineStack extends Stack {
  constructor(scope, id, props) {
    super(scope, id, props);

    const pipeline = new CodePipeline(this, 'pipeline', { ... });
    const infrastructure = new MyStage(...);

    // I can use my output to configure my frontend build with the right URL to the API.
    // This seems to be working, or at least I don't receive an error
    const frontend = new ShellStep('FrontendBuild', {
      input: source,
      commands: [
        'cd frontend',
        'npm ci',
        'VITE_API_BASE_URL="$AWS_API_BASE_URL" npm run build'
      ],
      primaryOutputDirectory: 'frontend/dist',
      envFromCfnOutputs: {
        AWS_API_BASE_URL: infrastructure.apiURL
      }
    })

    // Now I need to move the built files to the S3 bucket
    // I cannot get the name of the bucket however, it errors with the message:
    // No export named frontend-bucket-name found. Rollback requested by user.
    const bucket = aws_s3.Bucket.fromBucketAttributes(this, 'frontend-bucket', {
      bucketName: infrastructure.bucketName.importValue,
      account: 'account-the-bucket-is-in'
    });

    const s3Deploy = new customPipelineActionIMade(frontend.primaryOutput, bucket)
    const postSteps = pipelines.Step.sequence([frontend, s3Deploy]);

    pipeline.addStage(infrastructure, {
      post: postSteps
    });
  }
}
I've tried everything I can think of to allow my pipeline to access that bucket name, but I always get the same thing: No export named frontend-bucket-name found. Rollback requested by user. The value doesn't seem to get exported from my stack, even though I'm doing something very similar for the API URL in the frontend build step.
If I take away the exportName of the bucket and try to access the CfnOutput value directly, I get a "dependency cannot cross stage boundaries" error.
This seems like a pretty common use case - deploy infrastructure, then configure and deploy a frontend using those constructs, but I haven't been able to find anything that outlines this process. Any help is appreciated.
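For comparison, the envFromCfnOutputs mechanism that already works for the API URL can, in principle, carry the bucket name into a post-deployment shell step as well, avoiding the CloudFormation export entirely. A rough sketch only (the step and variable names are made up, and the pipeline's role still needs permission to write to the cross-account bucket):
// Sketch: read the bucket name from the stack's CfnOutput (no exportName needed)
// and sync the built frontend to it in a post-deployment shell step.
const deployFrontend = new ShellStep('DeployFrontend', {
  input: frontend.primaryOutput, // the built files from the FrontendBuild step
  commands: ['aws s3 sync . "s3://$FRONTEND_BUCKET_NAME" --delete'],
  envFromCfnOutputs: {
    FRONTEND_BUCKET_NAME: infrastructure.bucketName
  }
});

pipeline.addStage(infrastructure, {
  post: pipelines.Step.sequence([frontend, deployFrontend])
});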

Is it safe to pass sensitive values as environment variables into an aws-cdk custom resource

I want to save an initial admin user to my dynamodb table when initializing a cdk stack through a custom resource and am unsure of the best way to securely pass through values for that user. My code uses dotEnv and passes the values as environment variables right now:
import * as cdk from "@aws-cdk/core";
import * as lambda from "@aws-cdk/aws-lambda";
import * as dynamodb from "@aws-cdk/aws-dynamodb";
import * as customResource from "@aws-cdk/custom-resources";
require("dotenv").config();
export class CDKBackend extends cdk.Construct {
  public readonly handler: lambda.Function;

  constructor(scope: cdk.Construct, id: string) {
    super(scope, id);

    const tableName = "CDKBackendTable";

    // not shown here but also:
    // creates a dynamodb table for tableName and a seedData lambda with access to it
    // also some lambdas for CRUD operations and an apiGateway.RestApi for them

    const seedDataProvider = new customResource.Provider(this, "seedDataProvider", {
      onEventHandler: seedDataLambda
    });

    new cdk.CustomResource(this, "SeedDataResource", {
      serviceToken: seedDataProvider.serviceToken,
      properties: {
        tableName,
        user: process.env.ADMIN,
        password: process.env.ADMINPASSWORD,
        salt: process.env.SALT
      }
    });
  }
}
This code works, but is it safe to pass through ADMIN, ADMINPASSWORD and SALT in this way? What are the security differences between this approach and accessing those values from AWS secrets manager? I also plan on using that SALT value when generating passwordDigest values for all new users, not just this admin user.
The properties values are evaluated at deployment time, so they become part of the CloudFormation template, which can be viewed in the AWS web console. Passing secrets around this way is therefore questionable from a security standpoint.
One way to overcome this is to store the secrets in AWS Secrets Manager. The CDK has good integration with Secrets Manager. Once you create a secret, you can import it via:
const mySecretFromName = secretsmanager.Secret.fromSecretNameV2(stack, 'SecretFromName', 'MySecret')
Unfortunately there's no support for resolving CloudFormation dynamic references in AWS custom resources, but you can resolve the secret yourself inside your lambda (seedDataLambda). The SqlRun repository provides an example.
Please remember to grant the custom resource lambda (seedDataLambda) access to the secret, e.g.
secret.grantRead(seedDataProvider.executionRole)
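A rough sketch of that pattern, assuming the CDK side passes the secret's name (rather than the plaintext values) as a custom resource property and seedDataLambda resolves it with the bundled AWS SDK; the secret's JSON layout here is made up:
const AWS = require('aws-sdk');
const secretsManager = new AWS.SecretsManager();

exports.handler = async (event) => {
  if (event.RequestType === 'Create') {
    // The stack passes { tableName, secretName } as properties instead of the actual values
    const { tableName, secretName } = event.ResourceProperties;
    const secret = await secretsManager.getSecretValue({ SecretId: secretName }).promise();
    const { admin, adminPassword, salt } = JSON.parse(secret.SecretString);
    // ...seed the DynamoDB table (tableName) with the admin user here...
  }
};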

Is there a way to deploy a Lex bot using CDK?

I want to deploy a Lex bot to my AWS account using CDK.
Looking at the API reference documentation I can't find a construct for Lex. Also, I found this issue on the CDK GitHub repository which confirms there is no CDK construct for Lex.
Is there any workaround to deploy the Lex bot, or another tool for doing this?
Edit: CloudFormation support for AWS Lex is now available, see Wesley Cheek's answer. Below is my original answer which solved the lack of CloudFormation support using custom resources.
There is! While perhaps a bit cumbersome, it's totally possible using custom resources.
Custom resources work by defining a lambda that handles creation and deletion events for the custom resource. Since it's possible to create and delete AWS Lex bots using the AWS API, we can make the lambda do this when the resource gets created or destroyed.
Here's a quick example I wrote in TS/JS:
CDK Code (TypeScript):
import * as path from 'path';
import * as cdk from '@aws-cdk/core';
import * as iam from '@aws-cdk/aws-iam';
import * as logs from '@aws-cdk/aws-logs';
import * as lambda from '@aws-cdk/aws-lambda';
import * as cr from '@aws-cdk/custom-resources';

export class CustomResourceExample extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Lambda that will handle the different cloudformation resource events
    const lexBotResourceHandler = new lambda.Function(this, 'LexBotResourceHandler', {
      code: lambda.Code.fromAsset(path.join(__dirname, 'lambdas')),
      handler: 'lexBotResourceHandler.handler',
      runtime: lambda.Runtime.NODEJS_14_X,
    });
    lexBotResourceHandler.addToRolePolicy(new iam.PolicyStatement({
      resources: ['*'],
      actions: ['lex:PutBot', 'lex:DeleteBot']
    }))

    // Custom resource provider, specifies how the custom resources should be created
    const lexBotResourceProvider = new cr.Provider(this, 'LexBotResourceProvider', {
      onEventHandler: lexBotResourceHandler,
      logRetention: logs.RetentionDays.ONE_DAY // Default is to keep forever
    });

    // The custom resource, creating one of these will invoke the handler and create the bot
    new cdk.CustomResource(this, 'ExampleLexBot', {
      serviceToken: lexBotResourceProvider.serviceToken,
      // These options will be passed down to the lambda
      properties: {
        locale: 'en-US',
        childDirected: false
      }
    })
  }
}
Lambda Code (JavaScript):
const AWS = require('aws-sdk');
const Lex = new AWS.LexModelBuildingService();

const onCreate = async (event) => {
  await Lex.putBot({
    name: event.LogicalResourceId,
    locale: event.ResourceProperties.locale,
    childDirected: Boolean(event.ResourceProperties.childDirected)
  }).promise();
};

const onUpdate = async (event) => {
  // TODO: Not implemented
};

const onDelete = async (event) => {
  await Lex.deleteBot({
    name: event.LogicalResourceId
  }).promise();
};

exports.handler = async (event) => {
  switch (event.RequestType) {
    case 'Create':
      await onCreate(event);
      break;
    case 'Update':
      await onUpdate(event);
      break;
    case 'Delete':
      await onDelete(event);
      break;
  }
};
I admit it's a very bare-bones example, but hopefully it's enough to get you or anyone reading started and to show how it could be built upon by adding more options and more custom resources (for example for intents).
Deploying Lex using CloudFormation is now possible.
CDK support has also been added but it's only available as an L1 construct, meaning the CDK code is basically going to look like CloudFormation.
Also, since this feature just came out, some features may be missing or buggy. I have been unable to find a way to do channel integrations, and have had some problems with using image response cards, but otherwise have successfully deployed a bot and connected it with Lambda/S3 using CDK.
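For completeness, a minimal sketch of the L1 route (assuming the @aws-cdk/aws-lex module and an existing IAM role the bot can assume; the property shapes mirror the raw AWS::Lex::Bot CloudFormation schema, so double-check them against the current docs):
import * as lex from '@aws-cdk/aws-lex';

// Inside a stack; botRole is assumed to be an iam.Role trusted by the Lex service.
new lex.CfnBot(this, 'ExampleLexBot', {
  name: 'ExampleLexBot',
  roleArn: botRole.roleArn,
  dataPrivacy: { ChildDirected: false }, // raw CloudFormation JSON, hence the PascalCase key
  idleSessionTtlInSeconds: 300,
  botLocales: [{
    localeId: 'en_US',
    nluConfidenceThreshold: 0.4
  }]
});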

use existing vpc and security group when adding an ec2 instance

There is lots of example code, but the rapidly improving cdk package isn't helping me find working examples of some (I thought) simple things, e.g. even an import I found in an example fails:
import { VpcNetworkRef } from '@aws-cdk/aws-ec2';
error TS2724: Module '"../node_modules/@aws-cdk/aws-ec2/lib"' has no exported member 'VpcNetworkRef'. Did you mean 'IVpcNetwork'?
Why does the example ec2 code not show creation of raw ec2 instances?
What would help is example cdk code that uses a hardcoded VpcId and SecurityGroupId (I'll pass these in as context values) to create a pair of new subnets (i.e. one for each availability zone) into which we place a pair of EC2 instances.
Again, the target VPC and SecurityGroup for the instances already exist. We just (today) create new subnets as we add new sets of EC2 instances.
We have lots of distinct environments (sets of aws infrastructure) that currently share a single account, VPC, and security group. This will change, but my current goal is to see if we can use the cloud dev kit to create new distinct environments in this existing model. We have a CF template today.
I can't tell where to start. The examples for referencing existing VPCs aren't compiling.
import { VpcNetworkRef } from '@aws-cdk/aws-ec2';
const vpc = VpcNetworkRef.import(this, 'unused', {vpcId, availabilityZones: ['unused']});
Edit: Discussions on gitter helped me answer this, and how to add a bare Instance:
const vpc = ec2.VpcNetwork.import(this, 'YOUR-VPC-NAME', {
  vpcId: 'your-vpc-id',
  availabilityZones: ['list', 'some', 'zones'],
  publicSubnetIds: ['list', 'some', 'subnets'],
  privateSubnetIds: ['list', 'some', 'more'],
});

const sg = ec2.SecurityGroup.import(this, 'YOUR-SG-NAME', {
  securityGroupId: 'your-sg-id'
});

// can add subnets to existing..
const newSubnet = new ec2.VpcSubnet(this, "a name", {
  availabilityZone: "us-west-2b",
  cidrBlock: "a.b.c.d/e",
  vpcId: vpc.vpcId
});

// add bare instance
new ec2.CfnInstance(this, "instance name", {
  imageId: "an ami",
  securityGroupIds: [sg.securityGroupId],
  subnetId: newSubnet.subnetId,
  instanceType: "an instance type",
  tags: [{ key: "key", value: "value" }]
});
No further answers needed... for me.
import ec2 = require('@aws-cdk/aws-ec2');

// looking up a VPC by its name
const vpc = ec2.Vpc.fromLookup(this, 'VPC', {
  vpcName: 'VPC-Name'
});

// looking up an SG by its ID
const sg = ec2.SecurityGroup.fromSecurityGroupId(this, 'SG', 'SG-ID')

// creating the EC2 instance
const instance = new ec2.Instance(this, 'Instance', {
  vpc: vpc,
  securityGroup: sg,
  instanceType: new ec2.InstanceType('m4.large'),
  machineImage: new ec2.GenericLinuxImage({
    'us-east-1': 'ami-abcdef' // <- add your ami-region mapping here
  }),
});
I was running into the issue of importing an existing vpc / subnet / security group as well. I believe it's changed a bit since the original post. Here is how to do it as of v1.18.0:
import { Construct, Stack, Subnet, StackProps } from '@aws-cdk/core';
import { SecurityGroup, SubnetType, Vpc } from '@aws-cdk/aws-ec2';

const stackProps: StackProps = {
  env: {
    region: 'your region',
    account: 'your account'
  },
};

export class MyStack extends Stack {
  constructor(scope: Construct, id: string) {
    super(scope, id, stackProps);

    const vpc = Vpc.fromVpcAttributes(this, 'vpc', {
      vpcId: 'your vpc id',
      availabilityZones: ['your region'],
      privateSubnetIds: ['your subnet id']
    });

    // Get subnets that already exist on your current vpc.
    const subnets = vpc.selectSubnets({ subnetType: SubnetType.PRIVATE });

    // Create a subnet in the existing vpc
    const newSubnet = new Subnet(this, 'subnet', {
      availabilityZone: 'your zone',
      cidrBlock: 'a.b.c.d/e',
      vpcId: vpc.vpcId
    });

    // Get an existing security group.
    const securityGroup = SecurityGroup.fromSecurityGroupId(this, 'securitygroup', 'your security group id');
  }
}

how to get Neo4j cluster status using java api

I am trying to find the Neo4j cluster health using the Java API. I see the Cypher procedure CALL dbms.cluster.overview(); is there an equivalent Java API for this?
1. Variant "Spring Boot"
If Spring Boot with Spring Data Neo4J is an option for you, you could define a DAO which executes your cypher statement and receives the result in an own QueryResult class.
1.1 GeneralQueriesDAO
@Repository
public interface GeneralQueriesDAO extends Neo4jRepository<String, Long> {

    @Query("CALL dbms.cluster.overview() YIELD id, addresses, role, groups, database;")
    ClusterOverviewResult[] getClusterOverview();
}
1.2 ClusterOverviewResult
@QueryResult
public class ClusterOverviewResult {

    private String id; // This is the id of the instance.
    private List<String> addresses; // This is a list of all the addresses for the instance.
    private String role; // This is the role of the instance, which can be LEADER, FOLLOWER, or READ_REPLICA.
    private List<String> groups; // This is a list of all the server groups which an instance is part of.
    private String database; // This is the name of the database which the instance is hosting.

    // omitted the default constructor as well as getters and setters for clarity
}
1.3 Program flow
@Autowired
private GeneralQueriesDAO generalQueriesDAO;

[...]

ClusterOverviewResult[] clusterOverviewResult = generalQueriesDAO.getClusterOverview();
2. Variant "Without Spring"
Without Spring Boot, the rough procedure could be:
Driver driver = GraphDatabase.driver("bolt://your-host:7687", AuthTokens.basic("user", "password"));
Session session = driver.session();
StatementResult result = session.run("CALL dbms.cluster.overview()");
3. Variant "HTTP endpoints"
Another option could be to use the HTTP endpoints for monitoring the health of a Neo4j Causal Cluster.
