Using the AWS CDK to deploy a bucket, build the frontend, then move files to the bucket in a CodePipeline - aws-cdk

I'm using the AWS CDK to deploy code and infrastructure from a monorepo that includes both my front and backend logic (along with the actual CDK constructs). I'm using the CDK Pipelines library to kick off a build on every commit to my main git branch. The pipeline should:
1. Deploy all the infrastructure, which at the moment is just an API Gateway with an endpoint powered by a Lambda function, and an S3 bucket that will hold the built frontend.
2. Configure and build the frontend by providing the API URL that was just created.
3. Move the built frontend files to the S3 bucket.
My pipeline is in a different account from the deployed infrastructure. I've bootstrapped the environments and set up the correct trust policies. I've got the first two points working by creating the constructs and saving the API URL as a CfnOutput. Here's a simplified version of the stack:
class MyStack extends Stack {
  constructor(scope, id, props) {
    super(scope, id, props);

    const api = new aws_apigateway.LambdaRestApi(this, id, {
      handler: lambda,
    });
    this.apiURL = new CfnOutput(this, 'api_url', { value: api.url });

    const bucket = new aws_s3.Bucket(this, name, {
      bucketName: 'frontend-bucket',
      ...
    });
    this.bucketName = new CfnOutput(this, 'bucket_name', {
      exportName: 'frontend-bucket-name',
      value: bucket.bucketName
    });
  }
}
Here's my pipeline stage:
export class MyStage extends Stage {
  public readonly apiURL: CfnOutput;
  public readonly bucketName: CfnOutput;

  constructor(scope, id, props) {
    super(scope, id, props);

    const backendStack = new MyStack(this, 'demo-stack', props);
    this.apiURL = backendStack.apiURL;
    this.bucketName = backendStack.bucketName;
  }
}
And finally here's my pipeline:
export class MyPipelineStack extends Stack {
  constructor(scope, id, props) {
    super(scope, id, props);

    const pipeline = new CodePipeline(this, 'pipeline', { ... });
    const infrastructure = new MyStage(...);

    // I can use my output to configure my frontend build with the right URL to the API.
    // This seems to be working, or at least I don't receive an error
    const frontend = new ShellStep('FrontendBuild', {
      input: source,
      commands: [
        'cd frontend',
        'npm ci',
        'VITE_API_BASE_URL="$AWS_API_BASE_URL" npm run build'
      ],
      primaryOutputDirectory: 'frontend/dist',
      envFromCfnOutputs: {
        AWS_API_BASE_URL: infrastructure.apiURL
      }
    });

    // Now I need to move the built files to the S3 bucket
    // I cannot get the name of the bucket however, it errors with the message:
    // No export named frontend-bucket-name found. Rollback requested by user.
    const bucket = aws_s3.Bucket.fromBucketAttributes(this, 'frontend-bucket', {
      bucketName: infrastructure.bucketName.importValue,
      account: 'account-the-bucket-is-in'
    });

    const s3Deploy = new customPipelineActionIMade(frontend.primaryOutput, bucket);
    const postSteps = pipelines.Step.sequence([frontend, s3Deploy]);

    pipeline.addStage(infrastructure, {
      post: postSteps
    });
  }
}
I've tried everything I can think of to allow my pipeline to access that bucket name, but I always get the same thing: "No export named frontend-bucket-name found. Rollback requested by user." The value doesn't seem to get exported from my stack, even though I'm doing something very similar for the API URL in the frontend build step.
If I take away the exportName of the bucket and try to access the CfnOutput value directly, I get a "dependency cannot cross stage boundaries" error.
This seems like a pretty common use case (deploy infrastructure, then configure and deploy a frontend using those constructs), but I haven't been able to find anything that outlines this process. Any help is appreciated.
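One approach that follows the same pattern already used for the API URL is to let CDK Pipelines inject the bucket name into the deployment step via envFromCfnOutputs, instead of relying on a named export and Fn.importValue. A minimal sketch, assuming the deploy step is a plain ShellStep that syncs the built files with the AWS CLI (the step id, environment variable name, and sync command are placeholders, and the pipeline's CodeBuild role would still need cross-account permission to write to the bucket):
// Hypothetical deploy step: CDK Pipelines resolves the CfnOutput and exposes it
// as an environment variable, so no exportName is required on the stack output.
const s3Deploy = new ShellStep('DeployFrontend', {
  input: frontend.primaryOutput, // the built frontend files from the build step
  commands: [
    'aws s3 sync . "s3://$FRONTEND_BUCKET_NAME" --delete'
  ],
  envFromCfnOutputs: {
    FRONTEND_BUCKET_NAME: infrastructure.bucketName
  }
});

pipeline.addStage(infrastructure, {
  post: pipelines.Step.sequence([frontend, s3Deploy])
});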

Related

How to inject a repository with typedi and typeorm

I'm using typeorm, typedi and typegraphql (not nest.js) and am trying to inject my typeorm repository into the service, but it's not working:
Container.set("UserRepository", dataSource.getRepository(UserEntity));
@Service()
export class UserService {
  constructor(private userRepository: Repository<UserEntity>) {}

  async createUser({
    name,
    email,
    password,
  }: Input) {...}
}
The error I'm getting is:
Service with "MaybeConstructable<Repository>" identifier was not found in the container. Register it before usage via explicitly calling the "Container.set" function or using the "@Service()" decorator.
even though I can print out the repository with Container.get(UserRepository)
Does anyone know what I'm doing wrong?
Try adding this annotation to your injected repo:
import { InjectRepository } from 'typeorm-typedi-extensions';

constructor(@InjectRepository() private userRepository: Repository<UserEntity>) {}
You may need to install the typeorm-typedi-extensions package, and make sure you have useContainer(Container); in your bootstrapping process to register the typeorm container, which should be the container from the above package.
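For reference, the bootstrapping wiring described above could look roughly like this (a sketch only, assuming typeorm 0.2.x where typeorm-typedi-extensions is supported; the file name is hypothetical):
// bootstrap.ts (hypothetical entry point)
import 'reflect-metadata';
import { useContainer } from 'typeorm';
import { Container } from 'typeorm-typedi-extensions';

// Register the typedi-extensions container with TypeORM *before*
// creating the connection, so @InjectRepository can be resolved.
useContainer(Container);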
This was the solution:
Add the container to the buildSchema function that apollo gives us:
await dataSource.initialize();
const schema = await buildSchema({
  resolvers,
  emitSchemaFile: true,
  container: Container,
});
Set the repositories on bootstrapping the app:
export const UserRepository = dataSource.getRepository(UserEntity).extend({});
Container.set("UserRepository", UserRepository);
Use it in your service:
export class UserService {
  constructor(
    @Inject("UserRepository") private userRepository: Repository<UserEntity>
  ) {}
}

Is it safe to pass sensitive values as environment variables into an aws-cdk custom resource

I want to save an initial admin user to my dynamodb table when initializing a cdk stack through a custom resource and am unsure of the best way to securely pass through values for that user. My code uses dotEnv and passes the values as environment variables right now:
import * as cdk from "#aws-cdk/core";
import * as lambda from "#aws-cdk/aws-lambda";
import * as dynamodb from "#aws-cdk/aws-dynamodb";
import * as customResource from "#aws-cdk/custom-resources";
require("dotenv").config();
export class CDKBackend extends cdk.Construct {
public readonly handler: lambda.Function;
constructor(scope: cdk.Construct, id: string) {
super(scope, id);
const tableName = "CDKBackendTable";
// not shown here but also:
// creates a dynamodb table for tableName and a seedData lambda with access to it
// also some lambdas for CRUD operations and an apiGateway.RestApi for them
const seedDataProvider = new customResource.Provider(this, "seedDataProvider", {
onEventHandler: seedDataLambda
});
new cdk.CustomResource(this, "SeedDataResource", {
serviceToken: seedDataProvider.serviceToken,
properties: {
tableName,
user: process.env.ADMIN,
password: process.env.ADMINPASSWORD,
salt: process.env.SALT
}
});
}
}
This code works, but is it safe to pass through ADMIN, ADMINPASSWORD and SALT in this way? What are the security differences between this approach and accessing those values from AWS secrets manager? I also plan on using that SALT value when generating passwordDigest values for all new users, not just this admin user.
The property values are evaluated at deployment time, so they become part of the CloudFormation template, which can be viewed in the AWS web console. Passing secrets around this way is therefore questionable from a security standpoint.
One way to overcome this is to store the secrets in AWS Secrets Manager; aws-cdk has good integration with it. Once you create a secret you can import it via:
const mySecretFromName = secretsmanager.Secret.fromSecretNameV2(stack, 'SecretFromName', 'MySecret')
Unfortunately there's no support for resolving CloudFormation dynamic references in AWS Custom Resources. You can resolve the secret yourself though inside your lambda (seedDataLambda). The SqlRun repository provides an example.
Please remember to grant the custom resource lambda (seedDataLambda) access to the secret, e.g.
secret.grantRead(seedDataProvider.executionRole)
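To illustrate the "resolve it yourself inside your lambda" idea, a rough sketch (the adminSecretArn property name and the JSON secret layout are assumptions, using the AWS SDK v2 Secrets Manager client):
// seedDataLambda handler (sketch): receive the secret's ARN as a custom resource
// property and fetch the actual value at runtime instead of a plaintext password.
import { SecretsManager } from 'aws-sdk';

const secretsManager = new SecretsManager();

export const handler = async (event: any) => {
  if (event.RequestType === 'Create') {
    const { SecretString } = await secretsManager
      .getSecretValue({ SecretId: event.ResourceProperties.adminSecretArn })
      .promise();
    const admin = JSON.parse(SecretString ?? '{}');
    // ...seed the DynamoDB table with admin.user / admin.password here...
  }
  return {};
};
On the CDK side, the custom resource would then pass something like adminSecretArn: mySecretFromName.secretArn instead of the plaintext values.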

Is there a way to deploy a Lex bot using CDK?

I want to deploy a Lex bot to my AWS account using CDK.
Looking at the API reference documentation I can't find a construct for Lex. Also, I found this issue on the CDK GitHub repository which confirms there is no CDK construct for Lex.
Is there any workaround to deploy the Lex bot, or another tool for doing this?
Edit: CloudFormation support for AWS Lex is now available, see Wesley Cheek's answer. Below is my original answer which solved the lack of CloudFormation support using custom resources.
There is! While perhaps a bit cumbersome, it's totally possible using custom resources.
Custom resources work by defining a lambda that handles creation and deletion events for the custom resource. Since it's possible to create and delete AWS Lex bots using the AWS API, we can make the lambda do this when the resource gets created or destroyed.
Here's a quick example I wrote in TS/JS:
CDK Code (TypeScript):
import * as path from 'path';
import * as cdk from '@aws-cdk/core';
import * as iam from '@aws-cdk/aws-iam';
import * as logs from '@aws-cdk/aws-logs';
import * as lambda from '@aws-cdk/aws-lambda';
import * as cr from '@aws-cdk/custom-resources';

export class CustomResourceExample extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Lambda that will handle the different cloudformation resource events
    const lexBotResourceHandler = new lambda.Function(this, 'LexBotResourceHandler', {
      code: lambda.Code.fromAsset(path.join(__dirname, 'lambdas')),
      handler: 'lexBotResourceHandler.handler',
      runtime: lambda.Runtime.NODEJS_14_X,
    });
    lexBotResourceHandler.addToRolePolicy(new iam.PolicyStatement({
      resources: ['*'],
      actions: ['lex:PutBot', 'lex:DeleteBot']
    }));

    // Custom resource provider, specifies how the custom resources should be created
    const lexBotResourceProvider = new cr.Provider(this, 'LexBotResourceProvider', {
      onEventHandler: lexBotResourceHandler,
      logRetention: logs.RetentionDays.ONE_DAY // Default is to keep forever
    });

    // The custom resource, creating one of these will invoke the handler and create the bot
    new cdk.CustomResource(this, 'ExampleLexBot', {
      serviceToken: lexBotResourceProvider.serviceToken,
      // These options will be passed down to the lambda
      properties: {
        locale: 'en-US',
        childDirected: false
      }
    });
  }
}
Lambda Code (JavaScript):
const AWS = require('aws-sdk');
const Lex = new AWS.LexModelBuildingService();

const onCreate = async (event) => {
  await Lex.putBot({
    name: event.LogicalResourceId,
    locale: event.ResourceProperties.locale,
    childDirected: Boolean(event.ResourceProperties.childDirected)
  }).promise();
};

const onUpdate = async (event) => {
  // TODO: Not implemented
};

const onDelete = async (event) => {
  await Lex.deleteBot({
    name: event.LogicalResourceId
  }).promise();
};

exports.handler = async (event) => {
  switch (event.RequestType) {
    case 'Create':
      await onCreate(event);
      break;
    case 'Update':
      await onUpdate(event);
      break;
    case 'Delete':
      await onDelete(event);
      break;
  }
};
I admit it's a very bare-bones example, but hopefully it's enough to get you or anyone reading started and to show how it could be built upon by adding more options and more custom resources (for example, for intents).
Deploying Lex using CloudFormation is now possible.
CDK support has also been added but it's only available as an L1 construct, meaning the CDK code is basically going to look like CloudFormation.
Also, since this feature just came out, some features may be missing or buggy. I have been unable to find a way to do channel integrations, and have had some problems with using image response cards, but otherwise have successfully deployed a bot and connected it with Lambda/S3 using CDK.
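For anyone going the L1 route, a minimal sketch of what that might look like (the @aws-cdk/aws-lex module path, the lexv2.amazonaws.com principal, and the exact property casing are assumptions to double-check against the AWS::Lex::Bot documentation):
import * as cdk from '@aws-cdk/core';
import * as iam from '@aws-cdk/aws-iam';
import * as lex from '@aws-cdk/aws-lex';

export class LexL1Example extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Runtime role the bot assumes (required by AWS::Lex::Bot)
    const botRole = new iam.Role(this, 'BotRole', {
      assumedBy: new iam.ServicePrincipal('lexv2.amazonaws.com'),
    });

    // L1 construct: properties mirror the raw CloudFormation schema
    new lex.CfnBot(this, 'ExampleLexBot', {
      name: 'ExampleLexBot',
      roleArn: botRole.roleArn,
      dataPrivacy: { ChildDirected: false }, // raw JSON, as in the CloudFormation schema
      idleSessionTtlInSeconds: 300,
      // botLocales, intents, slots, etc. go here as CloudFormation-style JSON
    });
  }
}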

use existing vpc and security group when adding an ec2 instance

There is lots of example code, but the rapidly improving cdk package isn't helping me find working examples of some (I thought) simple things. E.g., even an import I found in an example fails:
import { VpcNetworkRef } from '@aws-cdk/aws-ec2';
error TS2724: Module '"../node_modules/@aws-cdk/aws-ec2/lib"' has no exported member 'VpcNetworkRef'. Did you mean 'IVpcNetwork'?
Why does the example ec2 code not show creation of raw ec2 instances?
WHAT would help is example cdk code that uses hardcoded VpcId and SecurityGroupId (I'll pass these in as context values) to create a pair of new subnets (ie., 1 for each availability zone) into which we place a pair of EC2 instances.
Again, the target VPC and SecurityGroup for the instances already exist. We just (today) create new subnets as we add new sets of EC2 instances.
We have lots of distinct environments (sets of aws infrastructure) that currently share a single account, VPC, and security group. This will change, but my current goal is to see if we can use the cloud dev kit to create new distinct environments in this existing model. We have a CF template today.
I can't tell where to start. The examples for referencing existing VPCs aren't compiling.
import { VpcNetworkRef } from '@aws-cdk/aws-ec2';
const vpc = VpcNetworkRef.import(this, 'unused', {vpcId, availabilityZones: ['unused']});
----- edit -----
Discussions on Gitter helped me answer this, including how to add a bare instance:
const vpc = ec2.VpcNetwork.import(this, 'YOUR-VPC-NAME', {
  vpcId: 'your-vpc-id',
  availabilityZones: ['list', 'some', 'zones'],
  publicSubnetIds: ['list', 'some', 'subnets'],
  privateSubnetIds: ['list', 'some', 'more'],
});

const sg = ec2.SecurityGroup.import(this, 'YOUR-SG-NAME', {
  securityGroupId: 'your-sg-id'
});

// can add subnets to existing..
const newSubnet = new ec2.VpcSubnet(this, "a name", {
  availabilityZone: "us-west-2b",
  cidrBlock: "a.b.c.d/e",
  vpcId: vpc.vpcId
});

// add bare instance
new ec2.CfnInstance(this, "instance name", {
  imageId: "an ami",
  securityGroupIds: [sg.securityGroupId],
  subnetId: newSubnet.subnetId,
  instanceType: "an instance type",
  tags: [{ key: "key", value: "value" }]
});
No further answers needed... for me.
import ec2 = require('@aws-cdk/aws-ec2');

// looking up a VPC by its name
const vpc = ec2.Vpc.fromLookup(this, 'VPC', {
  vpcName: 'VPC-Name'
});

// looking up an SG by its ID
const sg = ec2.SecurityGroup.fromSecurityGroupId(this, 'SG', 'SG-ID');

// creating the EC2 instance
const instance = new ec2.Instance(this, 'Instance', {
  vpc: vpc,
  securityGroup: sg,
  instanceType: new ec2.InstanceType('m4.large'),
  machineImage: new ec2.GenericLinuxImage({
    'us-east-1': 'ami-abcdef' // <- add your ami-region mapping here
  }),
});
I was running into the issue of importing an existing vpc / subnet / security group as well. I believe it's changed a bit since the original post. Here is how to do it as of v1.18.0:
import { Construct, Stack, Subnet, StackProps } from '@aws-cdk/core';
import { SecurityGroup, SubnetType, Vpc } from "@aws-cdk/aws-ec2";

const stackProps: StackProps = {
  env: {
    region: 'your region',
    account: 'your account'
  },
};

export class MyStack extends Stack {
  constructor(scope: Construct, id: string) {
    super(scope, id, stackProps);

    const vpc = Vpc.fromVpcAttributes(this, 'vpc', {
      vpcId: 'your vpc id',
      availabilityZones: ['your region'],
      privateSubnetIds: ['your subnet id']
    });

    // Get subnets that already exist in your current vpc.
    const subnets = vpc.selectSubnets({ subnetType: SubnetType.PRIVATE });

    // Create a subnet in the existing vpc
    const newSubnet = new Subnet(this, 'subnet', {
      availabilityZone: 'your zone',
      cidrBlock: 'a.b.c.d/e',
      vpcId: vpc.vpcId
    });

    // Get an existing security group.
    const securityGroup = SecurityGroup.fromSecurityGroupId(this, 'securitygroup', 'your security group id');
  }
}

How to create GoogleCredential object referencing the service account json file in Dataflow?

I have written a pipeline to extract G Suite activity logs by referring to the G Suite Java quickstart, where the code reads the client_secret.json file as below:
InputStream in = new FileInputStream("D://mypath/client_secret.json");
GoogleClientSecrets clientSecrets = GoogleClientSecrets.load(JSON_FACTORY, new InputStreamReader(in));
The pipeline runs as expected locally (runner=DirectRunner), but the same code fails with a java.io.FileNotFoundException when executed on the cloud (runner=DataflowRunner).
I understand the local path is invalid when executed on the cloud. Any suggestions?
Update:
I have modified the code as below and am able to read the client_secret.json file:
InputStream in =
Activities.class.getResourceAsStream("client_secret.json");
The actual problem is in creating the credential object:
private static java.io.File DATA_STORE_DIR = new java.io.File(System.getProperty("user.home"),
    ".credentials/admin-reports_v1-java-quickstart");
private static final List<String> SCOPES = Arrays.asList(ReportsScopes.ADMIN_REPORTS_AUDIT_READONLY);

static {
    try {
        HTTP_TRANSPORT = GoogleNetHttpTransport.newTrustedTransport();
        DATA_STORE_FACTORY = new FileDataStoreFactory(DATA_STORE_DIR);
    } catch (Throwable t) {
        t.printStackTrace();
        System.exit(1);
    }
}

public static Credential authorize() throws IOException {
    // Load client secrets.
    InputStream in = Activities.class.getResourceAsStream("client_secret.json");
    GoogleClientSecrets clientSecrets = GoogleClientSecrets.load(JSON_FACTORY, new InputStreamReader(in));

    // Build flow and trigger user authorization request.
    GoogleAuthorizationCodeFlow flow = new GoogleAuthorizationCodeFlow.Builder(HTTP_TRANSPORT, JSON_FACTORY,
            clientSecrets, SCOPES).setDataStoreFactory(DATA_STORE_FACTORY).setAccessType("offline").build();
    Credential credential = new AuthorizationCodeInstalledApp(flow, new LocalServerReceiver()).authorize("user");
    System.out.println("Credentials saved to " + DATA_STORE_DIR.getAbsolutePath());
    return credential;
}
Observations:
Local execution:
On the initial execution, the program attempts to open a browser to authorize the request and stores the authenticated object in a file, "StoredCredential".
On further executions, the stored file is used to make API calls.
Running on the cloud (DataflowRunner):
When I check the logs, Dataflow tries to open a browser to authenticate the request and stops there.
What I need:
How do I modify GoogleAuthorizationCodeFlow.Builder so that the credential object can be created while running as a Dataflow pipeline?
I have found a solution: create the GoogleCredential object using a service account. Below is the code for it.
public static Credential authorize() throws IOException, GeneralSecurityException {
    String emailAddress = "service_account.iam.gserviceaccount.com";
    GoogleCredential credential = new GoogleCredential.Builder()
        .setTransport(HTTP_TRANSPORT)
        .setJsonFactory(JSON_FACTORY)
        .setServiceAccountId(emailAddress)
        .setServiceAccountPrivateKeyFromP12File(Activities.class.getResourceAsStream("MYFILE.p12"))
        .setServiceAccountScopes(Collections.singleton(ReportsScopes.ADMIN_REPORTS_AUDIT_READONLY))
        .setServiceAccountUser("USER_NAME")
        .build();
    return credential;
}
Can you try running the program multiple times locally? What I am wondering is: if the "StoredCredential" file is available, will it just work, or will it try to load up the browser again?
If so, can you determine the proper place to store that file and download a copy of it from GCS onto the Dataflow worker? There should be APIs to download GCS files bundled with the Dataflow SDK jar, so you should be able to use those to download the credential file.
