Pipeline stack which uses cross-environment actions must have an explicitly set region - aws-cdk

I am trying to create an AWS CodePipeline that creates a DynamoDB table in one of its stages. I was able to successfully deploy the same code with CDK v1; now, trying to replicate it on CDK v2, I am getting the error: Pipeline stack which uses cross-environment actions must have an explicitly set region.
Here's the complete code:
import { Stack, StackProps, Stage, StageProps } from "aws-cdk-lib";
import { AttributeType, Table } from "aws-cdk-lib/aws-dynamodb";
import {
  CodePipeline,
  CodePipelineSource,
  ShellStep,
} from "aws-cdk-lib/pipelines";
import { Construct } from "constructs";

export class DdbStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);
    new Table(this, "TestTable", {
      partitionKey: { name: "id", type: AttributeType.STRING },
    });
  }
}

class MyApplication extends Stage {
  constructor(scope: Construct, id: string, props?: StageProps) {
    super(scope, id, props);
    new DdbStack(this, `${id}-ddb`, {});
  }
}

export class PipelineStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);
    const pipeline = new CodePipeline(this, `${id}-PipelineStack-`, {
      crossAccountKeys: true,
      selfMutation: false,
      pipelineName: "MangokulfiCDK",
      synth: new ShellStep("Synth", {
        input: CodePipelineSource.connection("gowtham91m/mango-cdk", "main", {
          connectionArn:
            "arn:aws:codestar-connections:us-west-2:147866640792:connection/4b18bea2-9eb6-47b1-bbdc-adb3bf6fd2a9",
        }),
        commands: ["npm ci", "npm run build", "npx cdk synth"],
      }),
    });
    pipeline.addStage(
      new MyApplication(this, `Staging`, {
        env: {
          account: "123456789123",
          region: "us-west-2",
        },
      })
    );
    pipeline.buildPipeline();
  }
}
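The error text points at the pipeline stack itself: once crossAccountKeys is enabled and a stage is deployed to an explicit env, the PipelineStack can no longer be environment-agnostic and needs a concrete account and region of its own. A minimal sketch of the app entrypoint, assuming the same placeholder account/region as the stage (the import path is illustrative):

import { App } from "aws-cdk-lib";
import { PipelineStack } from "../lib/pipeline-stack"; // assumption: wherever PipelineStack is exported

const app = new App();
new PipelineStack(app, "PipelineStack", {
  // Cross-environment actions require the pipeline stack to have an
  // explicitly set account and region, not ones inferred at deploy time.
  env: {
    account: "123456789123", // illustrative, matching the stage above
    region: "us-west-2",
  },
});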

Related

CDK GraphqlApi with Function Using Typescript Causes Undefined or Not Exported

I have an aws_appsync.GraphqlApi with a lambda resolver:
import * as cdk from 'aws-cdk-lib';
import {aws_appsync} from 'aws-cdk-lib';
import {Construct} from 'constructs';
import {AttributeType, BillingMode, Table} from "aws-cdk-lib/aws-dynamodb";
import * as path from "path";
import {FieldLogLevel} from "aws-cdk-lib/aws-appsync";
import {RetentionDays} from "aws-cdk-lib/aws-logs";
import {Code, Function, Runtime} from "aws-cdk-lib/aws-lambda";

export class RelmStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);
    const relmTable = new Table(this, 'relm', {
      billingMode: BillingMode.PAY_PER_REQUEST,
      tableName: 'relm',
      partitionKey: {
        name: 'pk',
        type: AttributeType.STRING
      },
      sortKey: {
        name: 'sk',
        type: AttributeType.STRING
      }
    })
    const api = new aws_appsync.GraphqlApi(this, 'relm-api', {
      name: 'relm-api',
      logConfig: {
        fieldLogLevel: FieldLogLevel.ALL,
        excludeVerboseContent: true,
        retention: RetentionDays.TWO_WEEKS
      },
      schema: aws_appsync.SchemaFile.fromAsset(path.join(__dirname, 'schema.graphql')),
      authorizationConfig: {
        defaultAuthorization: {
          authorizationType: aws_appsync.AuthorizationType.API_KEY,
          apiKeyConfig: {
            name: 'relm-api-key'
          }
        }
      }
    })
    const createLambda = new Function(this, 'dialog-create', {
      functionName: 'dialog-create',
      runtime: Runtime.NODEJS_14_X,
      handler: 'index.handler',
      code: Code.fromAsset('src/lambda'),
      memorySize: 3008,
    })
    const createDataSource = api.addLambdaDataSource('create-data-source', createLambda)
    createDataSource.createResolver('create-resolver', {
      typeName: 'Mutation',
      fieldName: 'dialogCreate'
    });
    relmTable.grantWriteData(createLambda);
  }
}
The source lives under src/lambda/index.ts and the code is as follows:
exports.handler = async (event: any) => {
  console.log('event: ', event)
};
Super simple. When the file extension is index.js everything works. When I change it to index.ts I get an error:
"index.handler is undefined or not exported"
I've looked at many examples on how to do this and all of them seem to be using the ts extension with no problems. What am I doing wrong here?
You should use the NodejsFunction construct, which handles transpiling TypeScript to JavaScript for you:
https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_lambda_nodejs-readme.html
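A minimal sketch of that approach, adapted to the question's layout (the entry path is an assumption about where index.ts lives relative to the stack file):

import * as path from "path";
import { Runtime } from "aws-cdk-lib/aws-lambda";
import { NodejsFunction } from "aws-cdk-lib/aws-lambda-nodejs";

// NodejsFunction bundles the TypeScript entry with esbuild at synth time
// (falling back to Docker if esbuild isn't installed locally), so no
// separate build step is needed.
const createLambda = new NodejsFunction(this, "dialog-create", {
  entry: path.join(__dirname, "../src/lambda/index.ts"), // assumption: handler location
  handler: "handler", // the exported name inside index.ts
  runtime: Runtime.NODEJS_14_X,
  memorySize: 3008,
});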
You can write your handler in TypeScript, but you'll have to invoke a build script to transpile it into JavaScript to serve as your lambda handler.
This project uses cdk and tsc:
https://github.com/mavi888/cdk-typescript-lambda/blob/main/package.json
This page discusses using esbuild to transpile:
https://docs.aws.amazon.com/lambda/latest/dg/typescript-package.html

cdk watch command forces full deploy with unrelated error on file change

I'm developing a little CDKv2 script to instantiate a few AWS services.
I have some lambda code deployed in the lambda/ folder and the frontend stored in a bucket populated using the frontend/ folder in the source.
I've noticed that whenever I make a change to any of the files inside these two folders, cdk watch returns the following error and falls back to performing a full redeploy (which is significantly slower).
Could not perform a hotswap deployment, because the CloudFormation template could not be resolved: Parameter or resource 'DomainPrefix' could not be found for evaluation
Falling back to doing a full deployment
Is there any way to make changes in these folders only trigger updating the related bucket content or the related lambda?
Here's the stack.ts for quick reference; just in case, you can also take a look at the repo.
export class CdkAuthWebappStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);
    const domainPrefixParam = new CfnParameter(this, 'DomainPrefix', {
      type: 'String',
      description: 'You have to set it in google cloud as well', //(TODO: add link to explain properly)
      default: process.env.DOMAIN_NAME || ''
    })
    const googleClientIdParam = new CfnParameter(this, 'GoogleClientId', {
      type: 'String',
      description: 'From google project',
      noEcho: true,
      default: process.env.GOOGLE_CLIENT_ID || ''
    })
    const googleClientSecretParam = new CfnParameter(this, 'GoogleClientSecret', {
      type: 'String',
      description: 'From google project',
      noEcho: true,
      default: process.env.GOOGLE_CLIENT_SECRET || ''
    })
    if(!domainPrefixParam.value || !googleClientIdParam.value || !googleClientSecretParam.value){
      throw new Error('Make sure you initialized DomainPrefix, GoogleClientId and GoogleClientSecret in the stack parameters')
    }
    const s3frontend = new s3.Bucket(this, 'Bucket', {
      bucketName: domainPrefixParam.valueAsString+'-frontend-bucket',
      blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
      encryption: s3.BucketEncryption.S3_MANAGED,
      enforceSSL: true,
      versioned: false,
      removalPolicy: cdk.RemovalPolicy.DESTROY,
      autoDeleteObjects: true,
      websiteIndexDocument: "index.html",
    });
    //TODO: make sure this origin access identity isn't legacy when I deploy
    const cfdistributionoriginaccessidentity = new cloudfront.OriginAccessIdentity(this, 'CFOriginAccessIdentity', {
      comment: "Used to give bucket read to cloudfront"
    })
    const cfdistribution = new cloudfront.CloudFrontWebDistribution(this, 'CFDistributionFrontend', {
      originConfigs: [
        {
          s3OriginSource: {
            s3BucketSource: s3frontend,
            originAccessIdentity: cfdistributionoriginaccessidentity
          },
          behaviors: [{
            isDefaultBehavior: true,
            allowedMethods: cloudfront.CloudFrontAllowedMethods.GET_HEAD_OPTIONS,
            forwardedValues: {
              queryString: true,
              cookies: { forward: 'all' }
            },
            minTtl: cdk.Duration.seconds(0),
            defaultTtl: cdk.Duration.seconds(3600),
            maxTtl: cdk.Duration.seconds(86400)
          }]
        }
      ]
    })
    s3frontend.grantRead(cfdistributionoriginaccessidentity)
    const cfdistributionpolicy = new iam.PolicyStatement({
      effect: iam.Effect.ALLOW,
      actions: ['cloudfront:CreateInvalidation'],
      resources: [`arn:aws:cloudfront::${this.account}:distribution/${cfdistribution.distributionId}`]
    });
    const userpool = new cognito.UserPool(this, 'WebAppUserPool', {
      userPoolName: 'web-app-user-pool',
      selfSignUpEnabled: false
    })
    const userpoolidentityprovidergoogle = new cognito.UserPoolIdentityProviderGoogle(this, 'WebAppUserPoolIdentityGoogle', {
      clientId: googleClientIdParam.valueAsString,
      clientSecret: googleClientSecretParam.valueAsString,
      userPool: userpool,
      attributeMapping: {
        email: cognito.ProviderAttribute.GOOGLE_EMAIL
      },
      scopes: [ 'email' ]
    })
    // this is used to make the hostedui reachable
    userpool.addDomain('Domain', {
      cognitoDomain: {
        domainPrefix: domainPrefixParam.valueAsString
      }
    })
    const CLOUDFRONT_PUBLIC_URL = `https://${cfdistribution.distributionDomainName}/`
    const client = userpool.addClient('Client', {
      oAuth: {
        flows: {
          authorizationCodeGrant: true
        },
        callbackUrls: [
          CLOUDFRONT_PUBLIC_URL
        ],
        logoutUrls: [
          CLOUDFRONT_PUBLIC_URL
        ],
        scopes: [
          cognito.OAuthScope.EMAIL,
          cognito.OAuthScope.OPENID,
          cognito.OAuthScope.PHONE
        ]
      },
      supportedIdentityProviders: [
        cognito.UserPoolClientIdentityProvider.GOOGLE
      ]
    })
    client.node.addDependency(userpoolidentityprovidergoogle)
    // defines an AWS Lambda resource
    const securedlambda = new lambda.Function(this, 'AuhtorizedRequestsHandler', {
      runtime: lambda.Runtime.NODEJS_14_X,
      code: lambda.Code.fromAsset('lambda'),
      handler: 'secured.handler'
    });
    const lambdaapiintegration = new apigw.LambdaIntegration(securedlambda)
    const backendapigw = new apigw.RestApi(this, 'AuthorizedRequestAPI', {
      restApiName: domainPrefixParam.valueAsString,
      defaultCorsPreflightOptions: {
        "allowOrigins": apigw.Cors.ALL_ORIGINS,
        "allowMethods": apigw.Cors.ALL_METHODS,
      }
    })
    const backendapiauthorizer = new apigw.CognitoUserPoolsAuthorizer(this, 'BackendAPIAuthorizer', {
      cognitoUserPools: [userpool]
    })
    const authorizedresource = backendapigw.root.addMethod('GET', lambdaapiintegration, {
      authorizer: backendapiauthorizer,
      authorizationType: apigw.AuthorizationType.COGNITO
    })
    const s3deploymentfrontend = new s3deployment.BucketDeployment(this, 'DeployFrontEnd', {
      sources: [
        s3deployment.Source.asset('./frontend'),
        s3deployment.Source.data('constants.js', `const constants = {domainPrefix:'${domainPrefixParam.valueAsString}', region:'${this.region}', cognito_client_id:'${client.userPoolClientId}', apigw_id:'${backendapigw.restApiId}'}`)
      ],
      destinationBucket: s3frontend,
      distribution: cfdistribution
    })
    new cdk.CfnOutput(this, 'YourPublicCloudFrontURL', {
      value: CLOUDFRONT_PUBLIC_URL,
      description: 'Navigate to the URL to access your deployed application'
    })
  }
}
Recording the solution from the comments:
Cause:
cdk watch apparently does not work with template parameters. I guess this is because the default --hotswap option bypasses CloudFormation and deploys instead via SDK commands.
Solution:
Remove the CfnParameters from the template. CDK recommends not using parameters in any case.
Perhaps cdk watch --no-hotswap would also work?
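A minimal sketch of the parameter-free version, resolving the values at synth time from the same environment variables the parameter defaults already used (only the relevant fragment of the stack shown):

// Plain synth-time values instead of CfnParameter, so --hotswap has no
// template parameters to evaluate.
const domainPrefix = process.env.DOMAIN_NAME ?? '';
const googleClientId = process.env.GOOGLE_CLIENT_ID ?? '';
const googleClientSecret = process.env.GOOGLE_CLIENT_SECRET ?? '';

if (!domainPrefix || !googleClientId || !googleClientSecret) {
  throw new Error('Set DOMAIN_NAME, GOOGLE_CLIENT_ID and GOOGLE_CLIENT_SECRET before synthesizing');
}

const s3frontend = new s3.Bucket(this, 'Bucket', {
  bucketName: `${domainPrefix}-frontend-bucket`,
  // ...rest of the bucket props unchanged
});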

CDK and batch build

I am trying to have Cypress tests run in parallel in CodePipeline/CodeBuild as documented here: https://docs.cypress.io/guides/continuous-integration/aws-codebuild#Parallelizing-the-build
I am reading the AWS docs here:
https://docs.aws.amazon.com/codebuild/latest/userguide/batch-build-buildspec.html#build-spec.batch.build-list
This is what I have got so far:
import * as cdk from '@aws-cdk/core';
import * as codebuild from '@aws-cdk/aws-codebuild';
import * as iam from '@aws-cdk/aws-iam';
import { defaultEnvironment, NODE_JS_VERSION } from './environments/base-environment';
import { projectEnvironmentVars } from './environments/e2e-tests-project-environment';
// import { createReportGroupJsonObject } from '../../utils/utils';
// import { Duration } from '@aws-cdk/core'

interface RunTestsProjectProps {
  testsBucketName: string,
  testsBucketArn: string,
  targetEnv: string,
  repoType: string,
  role: iam.Role,
  codeCovTokenArn: string,
}

export class RunTestsProject extends codebuild.PipelineProject {
  constructor(scope: cdk.Construct, id: string, props: RunTestsProjectProps) {
    const { testsBucketName, testsBucketArn, targetEnv, repoType, codeCovTokenArn } = props
    super(scope, id, {
      projectName: id,
      role: props.role,
      environment: defaultEnvironment,
      environmentVariables: projectEnvironmentVars({ testsBucketName, testsBucketArn, targetEnv, repoType, codeCovTokenArn }),
      timeout: cdk.Duration.hours(3),
      buildSpec: codebuild.BuildSpec.fromObject({
        version: '0.2',
        phases: {
          install: {
            'runtime-versions': {
              nodejs: NODE_JS_VERSION
            }
          },
          build: {
            commands: [
              'if [ ! -f "${CODEBUILD_SRC_DIR}/scripts/assume-cross-account-role.env" ]; then echo "assume-cross-account-this.role.env not found in repo" && aws s3 cp s3://${ARTIFACTS_BUCKET_NAME}/admin/cross-account/assume-cross-account-role.env ${CODEBUILD_SRC_DIR}/scripts/; else echo "Overriding assume-cross-account-role.env from repo"; fi',
              '. ${CODEBUILD_SRC_DIR}/scripts/assume-cross-account-role.env',
              'bash ${CODEBUILD_SRC_DIR}/scripts/final-tests.sh'
            ],
            batch: {
              'fail-fast': false,
              'build-list': []
            }
          },
        },
        artifacts: {
          files: '**/*'
        },
      })
    });
  }
}
What should I add in the build-list part to have multiple builds running?
I tried
'build-list': { identifier: 'build1', identifier: 'build2' }
but this looks like incorrect syntax.
The number of builds should ideally be based on the Cypress grouping. Can it be dynamic, or does it have to be defined statically?
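For reference, in the AWS batch buildspec documentation batch is a top-level section (a sibling of phases, not nested inside the build phase), and build-list is an array of entries with unique identifiers; the list is defined statically in the buildspec. A sketch of what it could look like here, with illustrative identifiers and a CY_GROUP variable following the Cypress parallelization guide's pattern:

buildSpec: codebuild.BuildSpec.fromObject({
  version: '0.2',
  batch: {
    'fail-fast': false,
    'build-list': [
      // One entry per parallel build; identifiers must be unique and the
      // list cannot grow dynamically at runtime.
      { identifier: 'e2e_group_1', env: { variables: { CY_GROUP: '1' } } },
      { identifier: 'e2e_group_2', env: { variables: { CY_GROUP: '2' } } },
    ],
  },
  phases: {
    // ...install/build phases as above, using $CY_GROUP to select which
    // Cypress group to run
  },
}),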

Adding a stage to CodePipeline throws errors

I am creating a code pipeline as follows -
import * as cdk from "aws-cdk-lib";
import { CodeBuildStep, CodePipeline, CodePipelineSource } from "aws-cdk-lib/pipelines";
...
export class Pipeline extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);
    ...
    const pipeline = new CodePipeline(this, "Pipeline", {
      pipelineName: "pipeline",
      synth: new CodeBuildStep("SynthStep", {
        input: CodePipelineSource.codeCommit(repo, "mainline"),
        buildEnvironment: {
          computeType: CodeBuild.ComputeType.MEDIUM,
          buildImage: CodeBuild.LinuxBuildImage.STANDARD_5_0,
        },
        partialBuildSpec: buildSpec,
        commands: [],
        role: codeBuildSynthRole,
        primaryOutputDirectory: "cdk/cdk.out",
      }),
      crossAccountKeys: true,
      selfMutation: true,
      dockerEnabledForSelfMutation: true,
    });
    const review = new ReviewPipelineStage(this, "Review", {
      sourceFileSet: pipeline.cloudAssemblyFileSet,
    });
    const reviewStage = pipeline.addStage(review);
    const gitleaksReviewAction = new GitleaksReviewAction(
      this,
      "GitleaksReviewAction",
      {
        sourceFileSet: pipeline.cloudAssemblyFileSet,
      }
    );
    reviewStage.addPost(gitleaksReviewAction.action);
    gitleaksReviewAction.gitleaksImage.repository.grantPull(
      gitleaksReviewAction.action.project
    );
  }
}
I am trying to add a stage for Gitleaks and the GitleaksReviewAction construct is as follows -
export interface GitleaksReviewActionProps {
  sourceFileSet: FileSet;
}

export class GitleaksReviewAction extends Construct {
  public readonly action: CodeBuildStep;
  public readonly gitleaksImage: DockerImageAsset;

  constructor(scope: Construct, id: string, props: GitleaksReviewActionProps) {
    super(scope, id);
    this.gitleaksImage = new DockerImageAsset(this, "gitleaksDockerAsset", {
      directory: path.join(__dirname, "../assets/gitleaks"),
    });
    this.action = new CodeBuildStep("Gitleaks", {
      input: props.sourceFileSet,
      commands: [
        "find . -type d -exec chmod 777 {} \\;",
        "find . -type f -exec chmod 666 {} \\;",
        `aws ecr get-login-password --region $AWS_REGION | docker login -u AWS --password-stdin ${this.gitleaksImage.imageUri}`,
        `docker run -v $(pwd):/repo ${this.gitleaksImage.imageUri} --path=/repo --repo-config-path=config/gitleaks/gitleaks.toml --verbose`,
      ],
      buildEnvironment: {
        buildImage: codebuild.LinuxBuildImage.STANDARD_5_0,
        privileged: true,
      },
    });
  }
}
The ReviewPipelineStage is as follows -
export interface ReviewPipelineStageProps extends StageProps {
  sourceFileSet: FileSet;
}

export class ReviewPipelineStage extends Stage {
  constructor(scope: Construct, id: string, props: ReviewPipelineStageProps) {
    super(scope, id, props);
    new GitleaksReviewAction(this, "GitleaksReviewAction", {
      sourceFileSet: props.sourceFileSet,
    });
  }
}
When I do a cdk synth, I get an error -
throw new Error(`${construct.constructor?.name ?? 'Construct'} at '${Node.of(construct).path}' should be created in the scope of a Stack, but no Stack found`);
I am not sure if I should be using the other construct, aws_codepipeline, to define the stages, or if this is the right way to create a stage. Any ideas?
The issue is that you're creating a Construct in the scope of a Stage. You can't do that: you can only create Stacks in the scope of a Stage, and Constructs have to be created in the scope of a Stack.
The issue is here:
export class ReviewPipelineStage extends Stage {
  constructor(scope: Construct, id: string, props: ReviewPipelineStageProps) {
    super(scope, id, props);
    new GitleaksReviewAction(this, "GitleaksReviewAction", {
      sourceFileSet: props.sourceFileSet,
    });
  }
}
this is a Stage, not a Stack.
To fix this, you have to create a Stack in your Stage and create your Constructs in there.
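A minimal sketch of that restructuring, passing the same props through (the ReviewStack name is illustrative):

// The Stage now contains a Stack, and the Construct lives inside that Stack.
class ReviewStack extends Stack {
  constructor(scope: Construct, id: string, props: ReviewPipelineStageProps & StackProps) {
    super(scope, id, props);
    new GitleaksReviewAction(this, "GitleaksReviewAction", {
      sourceFileSet: props.sourceFileSet,
    });
  }
}

export class ReviewPipelineStage extends Stage {
  constructor(scope: Construct, id: string, props: ReviewPipelineStageProps) {
    super(scope, id, props);
    new ReviewStack(this, "ReviewStack", props);
  }
}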

"Client network socket disconnected before secure TLS connection was established" - Neo4j/GraphQL

Starting up NestJS & GraphQL using yarn start:dev, with await app.listen(3200);. When trying to connect to my Neo4J Desktop, I get this error when running my queries at localhost:3200/graphQL:
{
  "errors": [
    {
      "message": "Client network socket disconnected before secure TLS connection was established",
      "locations": [
        {
          "line": 2,
          "column": 3
        }
      ],
      "path": [
        "getMovies"
      ],
      "extensions": {
        "code": "INTERNAL_SERVER_ERROR",
        "exception": {
          "code": "ServiceUnavailable",
          "name": "Neo4jError"
        }
      }
    }
  ],
  "data": null
}
So I figured my local Neo4J Desktop graph is not running correctly, but I can't seem to find any answer on how to solve it. Currently I have a config.ts file which has:
export const HOSTNAME = 'localhost';
export const NEO4J_USER = 'neo4j';
export const NEO4J_PASSWORD = '123';
and a file neogql.resolver.ts:
import {
  Resolver,
  Query,
  Args,
  ResolveProperty,
  Parent,
} from '@nestjs/graphql';
import { HOSTNAME, NEO4J_USER, NEO4J_PASSWORD } from '../config';
import { Movie } from '../graphql';
import { Connection, relation, node } from 'cypher-query-builder';
import { NotFoundException } from '@nestjs/common';

const db = new Connection(`bolt://${HOSTNAME}`, {
  username: NEO4J_USER,
  password: NEO4J_PASSWORD,
});

@Resolver('Movie')
export class NeogqlResolver {
  @Query()
  async getMovies(): Promise<Movie> {
    const movies = (await db
      .matchNode('movies', 'Movie')
      .return([
        {
          movies: [{ id: 'id', title: 'title', year: 'year' }],
        },
      ])
      .run()) as any;
    return movies;
  }

  @Query('movie')
  async getMovieById(
    @Args('id')
    id: string,
  ): Promise<any> {
    const movie = (await db
      .matchNode('movie', 'Movie')
      .where({ 'movie.id': id })
      .return([
        {
          movie: [{ id: 'id', title: 'title', year: 'year' }],
        },
      ])
      .run<any>()) as any;
    if (movie.length === 0) {
      throw new NotFoundException(
        `Movie id '${id}' does not exist in database `,
      );
    }
    return movie[0];
  }

  @ResolveProperty()
  async actors(@Parent() movie: any) {
    const { id } = movie;
    return (await db
      .match([node('actors', 'Actor'), relation('in'), node('movie', 'Movie')])
      .where({ 'movie.id': id })
      .return([
        {
          actors: [
            {
              id: 'id',
              name: 'name',
              born: 'born',
            },
          ],
        },
      ])
      .run()) as any;
  }
}
Be sure to pass the Config object like this:
var hostname = this.configService.get<string>('NEO4J_URL');
var username = this.configService.get<string>('NEO4J_USERNAME');
var password = this.configService.get<string>('NEO4J_PASSWORD');

db = new Connection(`${hostname}`, {
  username: username,
  password: password,
}, {
  driverConfig: { encrypted: "ENCRYPTION_OFF" }
});
I had the same problem with GRANDstack when running against a Neo4j version 4 server. According to Will Lyon, this is due to mismatched encryption defaults between driver and database: https://community.neo4j.com/t/migrating-an-old-grandstack-project-to-neo4j-4/16911/2
So passing a config object with
{ encrypted: "ENCRYPTION_OFF"}
to the Connection constructor should do the trick.
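Applied to the question's config.ts constants, that would look something like this (a sketch; it only adds the third options argument to the existing Connection call in neogql.resolver.ts):

import { Connection } from 'cypher-query-builder';
import { HOSTNAME, NEO4J_USER, NEO4J_PASSWORD } from '../config';

// Same connection as before, but with encryption explicitly disabled to
// match a local Neo4j 4.x server's defaults.
const db = new Connection(`bolt://${HOSTNAME}`, {
  username: NEO4J_USER,
  password: NEO4J_PASSWORD,
}, {
  driverConfig: { encrypted: 'ENCRYPTION_OFF' },
});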
