I want to deploy a Lex bot to my AWS account using CDK.
Looking at the API reference documentation, I can't find a construct for Lex. I also found an issue on the CDK GitHub repository confirming there is no CDK construct for Lex.
Is there any workaround to deploy the Lex bot, or another tool for doing this?
Edit: CloudFormation support for AWS Lex is now available, see Wesley Cheek's answer. Below is my original answer which solved the lack of CloudFormation support using custom resources.
There is! While perhaps a bit cumbersome, it's totally possible using custom resources.
Custom resources work by defining a lambda that handles creation and deletion events for the custom resource. Since it's possible to create and delete AWS Lex bots using the AWS API, we can make the lambda do this when the resource gets created or destroyed.
Here's a quick example I wrote in TS/JS:
CDK Code (TypeScript):
import * as path from 'path';
import * as cdk from '@aws-cdk/core';
import * as iam from '@aws-cdk/aws-iam';
import * as logs from '@aws-cdk/aws-logs';
import * as lambda from '@aws-cdk/aws-lambda';
import * as cr from '@aws-cdk/custom-resources';
export class CustomResourceExample extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Lambda that will handle the different CloudFormation resource events
    const lexBotResourceHandler = new lambda.Function(this, 'LexBotResourceHandler', {
      code: lambda.Code.fromAsset(path.join(__dirname, 'lambdas')),
      handler: 'lexBotResourceHandler.handler',
      runtime: lambda.Runtime.NODEJS_14_X,
    });
    lexBotResourceHandler.addToRolePolicy(new iam.PolicyStatement({
      resources: ['*'],
      actions: ['lex:PutBot', 'lex:DeleteBot']
    }));

    // Custom resource provider, specifies how the custom resources should be created
    const lexBotResourceProvider = new cr.Provider(this, 'LexBotResourceProvider', {
      onEventHandler: lexBotResourceHandler,
      logRetention: logs.RetentionDays.ONE_DAY // Default is to keep forever
    });

    // The custom resource; creating one of these will invoke the handler and create the bot
    new cdk.CustomResource(this, 'ExampleLexBot', {
      serviceToken: lexBotResourceProvider.serviceToken,
      // These options will be passed down to the lambda
      properties: {
        locale: 'en-US',
        childDirected: false
      }
    });
  }
}
Lambda Code (JavaScript):
const AWS = require('aws-sdk');
const Lex = new AWS.LexModelBuildingService();

const onCreate = async (event) => {
  await Lex.putBot({
    name: event.LogicalResourceId,
    locale: event.ResourceProperties.locale,
    childDirected: Boolean(event.ResourceProperties.childDirected)
  }).promise();
};

const onUpdate = async (event) => {
  // TODO: Not implemented
};

const onDelete = async (event) => {
  await Lex.deleteBot({
    name: event.LogicalResourceId
  }).promise();
};

exports.handler = async (event) => {
  switch (event.RequestType) {
    case 'Create':
      await onCreate(event);
      break;
    case 'Update':
      await onUpdate(event);
      break;
    case 'Delete':
      await onDelete(event);
      break;
  }
};
I admit it's a very bare-bones example, but hopefully it's enough to get you or anyone reading started, and to show how it could be built upon by adding more options and more custom resources, for example for intents (see the sketch below).
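For illustration, here is how the same handler might be extended to create an intent and attach it to the bot. This is a minimal sketch: the intent name, utterances, and ReturnIntent fulfillment are made-up values, and the handler's role would also need lex:PutIntent and lex:DeleteIntent permissions.

const onCreate = async (event) => {
  // Create (or update) an intent first so the bot can reference it
  await Lex.putIntent({
    name: 'ExampleIntent', // hypothetical name
    sampleUtterances: ['hello', 'hi there'],
    fulfillmentActivity: { type: 'ReturnIntent' }
  }).promise();

  await Lex.putBot({
    name: event.LogicalResourceId,
    locale: event.ResourceProperties.locale,
    childDirected: Boolean(event.ResourceProperties.childDirected),
    // '$LATEST' points at the most recently saved version of the intent
    intents: [{ intentName: 'ExampleIntent', intentVersion: '$LATEST' }]
  }).promise();
};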
Deploying Lex using CloudFormation is now possible.
CDK support has also been added but it's only available as an L1 construct, meaning the CDK code is basically going to look like CloudFormation.
Also, since this support just came out, some features may be missing or buggy. I have been unable to find a way to do channel integrations, and I have had some problems with using image response cards, but otherwise I have successfully deployed a bot and connected it with Lambda/S3 using CDK. A rough sketch of the L1 usage follows.
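For anyone curious what "looks like CloudFormation" means in practice, here is a minimal sketch of the L1 construct. The module path, the lexv2.amazonaws.com service principal, and the property values are my assumptions; check the CfnBot API reference for the exact shapes.

import * as cdk from '@aws-cdk/core';
import * as iam from '@aws-cdk/aws-iam';
import * as lex from '@aws-cdk/aws-lex';

export class LexBotStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Runtime role assumed by the Lex service on behalf of the bot
    const botRole = new iam.Role(this, 'BotRole', {
      assumedBy: new iam.ServicePrincipal('lexv2.amazonaws.com'),
    });

    // L1 construct: properties mirror the AWS::Lex::Bot CloudFormation resource
    new lex.CfnBot(this, 'ExampleBot', {
      name: 'ExampleBot',
      roleArn: botRole.roleArn,
      dataPrivacy: { ChildDirected: false }, // raw JSON, capitalized per CloudFormation
      idleSessionTtlInSeconds: 300,
    });
  }
}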
Related
I'm using the AWS CDK to deploy code and infrastructure from a monorepo that includes both my front and backend logic (along with the actual CDK constructs). I'm using the CDK Pipelines library to kick off a build on every commit to my main git branch. The pipeline should:
deploy all the infrastructure, which at the moment is just an API Gateway with an endpoint powered by a Lambda function, and an S3 bucket that will hold the built frontend.
configure and build the frontend by providing the API URL that was just created.
move the built frontend files to the S3 bucket.
My Pipeline is in a different account than the actual deployed infrastructure. I've bootstrapped the environments and set up the correct trust policies. I've succeeded in the first two points by creating the constructs and saving the API URL as a CfnOutput. Here's a simplified version of the Stack:
class MyStack extends Stack {
  constructor(scope, id, props) {
    super(scope, id, props);

    const api = new aws_apigateway.LambdaRestApi(this, id, {
      handler: lambda,
    });
    this.apiURL = new CfnOutput(this, 'api_url', { value: api.url });

    const bucket = new aws_s3.Bucket(this, name, {
      bucketName: 'frontend-bucket',
      ...
    });
    this.bucketName = new CfnOutput(this, 'bucket_name', {
      exportName: 'frontend-bucket-name',
      value: bucket.bucketName
    });
  }
}
Here's my pipeline stage:
export class MyStage extends Stage {
  public readonly apiURL: CfnOutput;
  public readonly bucketName: CfnOutput;

  constructor(scope, id, props) {
    super(scope, id, props);

    const newStack = new MyStack(this, 'demo-stack', props);
    this.apiURL = newStack.apiURL;
    this.bucketName = newStack.bucketName;
  }
}
And finally here's my pipeline:
export class MyPipelineStack extends Stack {
  constructor(scope, id, props) {
    super(scope, id, props);

    const pipeline = new CodePipeline(this, 'pipeline', { ... });
    const infrastructure = new MyStage(...);

    // I can use my output to configure my frontend build with the right URL to the API.
    // This seems to be working, or at least I don't receive an error
    const frontend = new ShellStep('FrontendBuild', {
      input: source,
      commands: [
        'cd frontend',
        'npm ci',
        'VITE_API_BASE_URL="$AWS_API_BASE_URL" npm run build'
      ],
      primaryOutputDirectory: 'frontend/dist',
      envFromCfnOutputs: {
        AWS_API_BASE_URL: infrastructure.apiURL
      }
    });

    // Now I need to move the built files to the S3 bucket.
    // I cannot get the name of the bucket however; it errors with the message:
    // "No export named frontend-bucket-name found. Rollback requested by user."
    const bucket = aws_s3.Bucket.fromBucketAttributes(this, 'frontend-bucket', {
      bucketName: infrastructure.bucketName.importValue,
      account: 'account-the-bucket-is-in'
    });
    const s3Deploy = new customPipelineActionIMade(frontend.primaryOutput, bucket);
    const postSteps = pipelines.Step.sequence([frontend, s3Deploy]);

    pipeline.addStage(infrastructure, {
      post: postSteps
    });
  }
}
I've tried everything I can think of to allow my pipeline to access that bucket name, but I always get the same thing: "No export named frontend-bucket-name found. Rollback requested by user." The value doesn't seem to get exported from my stack, even though I'm doing something very similar for the API URL in the frontend build step.
If I take away the exportName of the bucket and try to access the CfnOutput value directly, I get a "dependency cannot cross stage boundaries" error.
This seems like a pretty common use case - deploy infrastructure, then configure and deploy a frontend using those constructs - but I haven't been able to find anything that outlines this process. Any help is appreciated.
I have built a project using NestJS along with @nestjs/swagger and swagger-ui-express for API documentation.
My docs are currently accessible at the path /api/docs, but they are completely public: once I deploy to the cloud, anyone will be able to access them, which I don't want. Most of the APIs require a Bearer token, but unfortunately some of them will remain publicly exposed.
Is there any way I can have a login screen for authenticating users before they access my Swagger docs?
Here is my code for setting up docs:
import { INestApplication } from '@nestjs/common';
import { SwaggerModule, DocumentBuilder } from '@nestjs/swagger';
import {
  SWAGGER_API_ROOT,
  SWAGGER_API_NAME,
  SWAGGER_API_DESCRIPTION,
  SWAGGER_API_CURRENT_VERSION,
} from './constants';
export const setupSwagger = (app: INestApplication) => {
  const options = new DocumentBuilder()
    .setTitle(SWAGGER_API_NAME)
    .setDescription(SWAGGER_API_DESCRIPTION)
    .setVersion(SWAGGER_API_CURRENT_VERSION)
    .addBearerAuth()
    .build();
  const document = SwaggerModule.createDocument(app, options);
  SwaggerModule.setup(SWAGGER_API_ROOT, app, document);
};
What if you only set up the docs in dev mode?
You can create an environment variable such as DEV=true. After deploying, set it to false and check it before registering Swagger. Note that environment variables are strings, so compare against the string 'true' rather than relying on truthiness, and an export statement cannot live inside an if block, so the check has to go inside the function:

export const setupSwagger = (app: INestApplication) => {
  // Only expose Swagger docs when running in dev mode
  if (process.env.DEV !== 'true') {
    return;
  }
  const options = new DocumentBuilder()
    .setTitle(SWAGGER_API_NAME)
    .setDescription(SWAGGER_API_DESCRIPTION)
    .setVersion(SWAGGER_API_CURRENT_VERSION)
    .addBearerAuth()
    .build();
  const document = SwaggerModule.createDocument(app, options);
  SwaggerModule.setup(SWAGGER_API_ROOT, app, document);
};

Just an idea.
I want to save an initial admin user to my DynamoDB table when initializing a CDK stack through a custom resource, and I am unsure of the best way to securely pass through values for that user. My code uses dotenv and passes the values as environment variables right now:
import * as cdk from "@aws-cdk/core";
import * as lambda from "@aws-cdk/aws-lambda";
import * as dynamodb from "@aws-cdk/aws-dynamodb";
import * as customResource from "@aws-cdk/custom-resources";
require("dotenv").config();
export class CDKBackend extends cdk.Construct {
  public readonly handler: lambda.Function;

  constructor(scope: cdk.Construct, id: string) {
    super(scope, id);

    const tableName = "CDKBackendTable";

    // not shown here but also:
    // creates a dynamodb table for tableName and a seedData lambda with access to it
    // also some lambdas for CRUD operations and an apiGateway.RestApi for them

    const seedDataProvider = new customResource.Provider(this, "seedDataProvider", {
      onEventHandler: seedDataLambda
    });

    new cdk.CustomResource(this, "SeedDataResource", {
      serviceToken: seedDataProvider.serviceToken,
      properties: {
        tableName,
        user: process.env.ADMIN,
        password: process.env.ADMINPASSWORD,
        salt: process.env.SALT
      }
    });
  }
}
This code works, but is it safe to pass through ADMIN, ADMINPASSWORD and SALT in this way? What are the security differences between this approach and accessing those values from AWS Secrets Manager? I also plan on using that SALT value when generating passwordDigest values for all new users, not just this admin user.
The properties values are evaluated at deployment time, so they become part of the CloudFormation template, which can be viewed in the AWS web console. Passing secrets around this way is therefore questionable from a security standpoint.
One way to overcome this is to store the secrets in AWS Secrets Manager. aws-cdk has good integration with Secrets Manager. Once you create a secret you can import it via:
const mySecretFromName = secretsmanager.Secret.fromSecretNameV2(stack, 'SecretFromName', 'MySecret')
Unfortunately there's no support for resolving CloudFormation dynamic references in AWS custom resources, so you can't hand the secret value itself to the resource that way. You can, however, resolve the secret yourself inside your lambda (seedDataLambda). The SqlRun repository provides an example, and a sketch follows below.
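For illustration, a minimal sketch of that approach: pass only the secret's name through the custom resource properties, then resolve it at runtime inside the handler. The property name secretName and the JSON field names are assumptions for this example.

// seedDataLambda handler (sketch): the CloudFormation template only ever
// contains the secret's name, never its value
const AWS = require('aws-sdk');
const secretsManager = new AWS.SecretsManager();

exports.handler = async (event) => {
  if (event.RequestType === 'Create') {
    // Resolve the secret at runtime instead of baking it into the template
    const result = await secretsManager.getSecretValue({
      SecretId: event.ResourceProperties.secretName
    }).promise();
    const { admin, adminPassword, salt } = JSON.parse(result.SecretString);
    // ...seed the admin user into the DynamoDB table here...
  }
};

On the CDK side, the properties block would then carry secretName: mySecretFromName.secretName instead of the plaintext values.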
Please remember to grant the custom resource lambda (seedDataLambda) read access to the secret, e.g.
secret.grantRead(seedDataLambda)
Having moved my mobile app development to Flutter I am now in the process of experimenting with using Dart as my main server side language. The productivity benefits of using a single language for both the app and the server are considerable. To that end I have set up a server with an Nginx front end which proxies all dynamic web requests to an Angel/Dart server.
Angel is a remarkably well written package and I had a working server written up in no time at all. However, in order to have a fully functional backend I need to be able to use both Redis and PostgreSQL from within my server side Dart code. I am using the resp_client package to access Redis. The issue I have run into is that RespCommand.get is asynchronous. With my newbie knowledge of both Dart and Angel, I am unable to find a way to acquire a Redis key value via RespCommand.get in an Angel route handler and then use that value in the response it returns.
My entire Dart backend server code is shown below
import 'package:angel_framework/angel_framework.dart';
import 'package:angel_framework/http.dart';
import 'package:postgres/postgres.dart';
import 'package:resp_client/resp_client.dart';
import 'package:resp_client/resp_commands.dart';

class DartWeb
{
  static Angel angel;
  static AngelHttp http;
  static RespCommands redis;
  static PostgreSQLConnection db;

  static init() async
  {
    angel = Angel();
    http = AngelHttp(angel);
    angel.get('/',rootRoute);
    await prepareRedis();
    await http.startServer('localhost',3000);
  }

  static prepareRedis() async
  {
    RespServerConnection rsc = await connectSocket('localhost');
    RespClient client = RespClient(rsc);
    redis = RespCommands(client);
  }

  static preparePostgres() async
  {
    db = new PostgreSQLConnection('serverurl',portNo,'database',username:'user',password:'password');
    await db.open();
  }

  static void rootRoute(RequestContext req,ResponseContext res)
  {
    try
    {
      await redis.set('test','foobar',expire:Duration(seconds:10));
      String testVal = await redis.get('test');
      res.write('Done $testVal');
    } catch(e) {res.write('++ $e ++');}
  }
}

main() async {await DartWeb.init();}
If I start up this server and then access it through my web browser, I end up with a 502 Bad Gateway message. Not surprising: dart2native main.dart -o mainCompiled returns the "await can only be used in async..." error.
So I tried instead
try
{
  res.write('Before');
  redis.set('test','foobar',expire:Duration(seconds:10)).then((bool done)
  {
    res.write('DONE $done');
  });
  res.write('After');
} catch(e) {res.write('++ $e ++');}
which simply printed out BeforeAfter in my browser, with the DONE bit never showing up, although a quick test via redis-cli shows that the key test had in fact been created.
My knowledge of both Dart and Angel is still in its infancy, so I guess I am doing something incorrectly here. Shorn of all the detail, my questions are essentially these:
how do I call and get the result from async methods in an Angel route dispatcher?
given that I am editing my Dart code in VSCode on my local Windows machine, which accesses the relevant Dart files on my Ubuntu server, I lose the benefits of error reporting provided by the VSCode Dart plugin. dart2native, as I have used here, helps out, but it would be nicer if I could somehow get a running error report within VSCode, as I do when building Flutter apps locally. How can I accomplish this - if at all possible?
It turns out that Dart/Angel does not impose excessively strict constraints on the signature of a route handler, so you can quite safely declare a route handler like this one:
static Future<void> rootRoute(RequestContext req,ResponseContext res) async
{
  try
  {
    res.write('!! Before ');
    await redis.set('test','foobar',expire:Duration(seconds:10));
    String test = await redis.get('test');
    res.write('After $test !!');
  } catch(e) {res.write('++ $e ++');}
}
With the route handler simply returning a Future, we can now safely do anything we like there, including calling other asynchronous methods: in this instance, fetching a Redis key value.
I have created an offline documentation site with MkDocs and Workbox.
I execute workbox generateSW on the files generated by MkDocs, which generates a Service Worker whose precache is set up with the precacheAndRoute function.
This works fine, but when I update the documentation and generate new HTML files and a new Service Worker, the browser does not serve the new content until I completely close it. Refreshing or just closing the tab is not enough.
The worker is updating the content in the Cache Storage correctly, which I can see in the Chrome devtools (Application -> Cache Storage -> workbox-precache*), but no matter how many times I hit refresh, the browser won't display the new content.
I use this function to register the Service Worker
async function register() {
  const registration = await navigator.serviceWorker.register(SW_URL);
  registration.onupdatefound = () => {
    const installingWorker = registration.installing;
    installingWorker.onstatechange = () => {
      if (installingWorker.state === "installed") {
        if (navigator.serviceWorker.controller) {
          console.log("New content is available; please refresh.");
        } else {
          console.log("Content is cached for offline use.");
        }
      }
    };
  };
}
I wonder if I have to do something extra to make the content refresh properly?
My workbox-config.js is
module.exports = {
  globDirectory: ".doc_build",
  globPatterns: ["**/*"],
  swDest: ".doc_build/sw.js"
};
This happens on both Firefox and Chrome.
Thanks to Robert Rowntree's link in the question comments I figured this out.
In my case the content does get refreshed in the cache, but the old version of the precache service worker keeps running, and it holds a list of entries like this:
{
  "url": "index.html",
  "revision": "e4919b0cd0e772b3beb2d1f3d09af437"
}
As you can see, it has the checksum of the old version in it, and the old worker will keep serving that version until it gets deactivated and the new one activated.
You can observe this by checking registration.waiting while the old service worker is waiting to be deactivated and the new one to be installed. The browser seems to do this "at some point"; in practice it appears to happen if I just keep the tabs closed long enough.
The solution for my question is to force the service worker to skip the waiting period. It is possible to do that by sending a message to the service worker from the update event:
async function register() {
  const registration = await navigator.serviceWorker.register(SW_URL);
  registration.onupdatefound = () => {
    const installingWorker = registration.installing;
    installingWorker.onstatechange = async () => {
      if (installingWorker.state === "installed") {
        if (navigator.serviceWorker.controller) {
          console.log("New content is available; please refresh.");
          // Send message to the service worker telling
          // it should stop waiting for browser to deactivate it
          registration.waiting.postMessage("skipWaiting");
        } else {
          console.log("Content is cached for offline use.");
        }
      }
    };
  };
}
Then in the Service Worker code I had to handle that message and call skipWaiting()
self.addEventListener("message", messageEvent => {
  if (messageEvent.data === "skipWaiting") {
    return skipWaiting();
  }
});
To do this I had to move from workbox generateSW to workbox injectManifest so that I could add the skipping code myself; roughly, the change looks like the sketch below.
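This is a sketch under my setup's assumptions; the sw-src.js file name and the Workbox CDN version are illustrative, so check the injectManifest docs for the exact options.

workbox-config.js now points at a handwritten worker source:

module.exports = {
  globDirectory: ".doc_build",
  globPatterns: ["**/*"],
  // Source worker containing the custom skipWaiting handling
  swSrc: "sw-src.js",
  swDest: ".doc_build/sw.js"
};

And sw-src.js sets up precaching and handles the message; workbox injectManifest replaces self.__WB_MANIFEST with the generated precache manifest:

importScripts("https://storage.googleapis.com/workbox-cdn/releases/6.1.5/workbox-sw.js");

// Filled in by `workbox injectManifest`
workbox.precaching.precacheAndRoute(self.__WB_MANIFEST);

self.addEventListener("message", messageEvent => {
  if (messageEvent.data === "skipWaiting") {
    return skipWaiting();
  }
});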
But there are caveats to this solution. Read on in Robert's link, starting from:
"The simplest and most dangerous approach is to just skip waiting during installation."
https://redfin.engineering/how-to-fix-the-refresh-button-when-using-service-workers-a8e27af6df68
Fortunately this is good enough for my case.