"Require SSL" on databases created via DatabaseClusterEngine - aws-cdk

Is it possible to "require SSL" on an Aurora database created using the AWS CDK? We've enabled encryption, but that is only "at rest", and we're also required to encrypt "in transit" and are being flagged by a security monitor because the database does not "require SSL".
Here is the code we use to set up the database:
const cluster = new rds.DatabaseCluster(scope, 'TheDB', {
  defaultDatabaseName: dbName,
  engine: rds.DatabaseClusterEngine.auroraPostgres({ version: rds.AuroraPostgresEngineVersion.VER_13_7 }),
  credentials: {
    username: dbUser,
    password: pgPasswordSecret.secretValue,
  },
  instanceProps: {
    securityGroups: [securityGroup],
    instanceType: primaryPostgresInstanceType(),
    vpcSubnets: {
      subnetType: ec2.SubnetType.PRIVATE_WITH_NAT,
    },
    vpc,
  },
  storageEncrypted: true,
  backup: {
    retention: Duration.days(15),
  },
})

The solution (from https://gitter.im/awslabs/aws-cdk?at=5e2ab552f196225bd64b7581) is to pass a parameterGroup when creating the database cluster, setting rds.force_ssl to '1':
const postgresVersion = rds.AuroraPostgresEngineVersion.VER_13_7
const parameterGroup = new rds.ParameterGroup(scope, 'ClusterParameterGroup', {
  engine: rds.DatabaseClusterEngine.auroraPostgres({ version: postgresVersion }),
  parameters: {
    'rds.force_ssl': '1',
  },
})
const cluster = new rds.DatabaseCluster(scope, 'TheDB', {
  ...
  parameterGroup,
  ...
})

Related

cdk watch command forces full deploy with unrelated error on file change

I'm developing a little CDKv2 script to instantiate a few AWS services.
I have some lambda code deployed in the lambda/ folder and the frontend stored in a bucket populated using the frontend/ folder in the source.
I've noticed that whenever I make a change to any of the files inside these two folders, cdk watch returns the following error and falls back to performing a full redeploy (which is significantly slower).
Could not perform a hotswap deployment, because the CloudFormation template could not be resolved: Parameter or resource 'DomainPrefix' could not be found for evaluation
Falling back to doing a full deployment
Is there any way to make changes in these folders only trigger updating the related bucket content or the related lambda?
Here is the stack.ts for quick reference; you can also take a look at the repo.
export class CdkAuthWebappStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const domainPrefixParam = new CfnParameter(this, 'DomainPrefix', {
      type: 'String',
      description: 'You have to set it in google cloud as well', //(TODO: add link to explain properly)
      default: process.env.DOMAIN_NAME || ''
    })
    const googleClientIdParam = new CfnParameter(this, 'GoogleClientId', {
      type: 'String',
      description: 'From google project',
      noEcho: true,
      default: process.env.GOOGLE_CLIENT_ID || ''
    })
    const googleClientSecretParam = new CfnParameter(this, 'GoogleClientSecret', {
      type: 'String',
      description: 'From google project',
      noEcho: true,
      default: process.env.GOOGLE_CLIENT_SECRET || ''
    })

    if (!domainPrefixParam.value || !googleClientIdParam.value || !googleClientSecretParam.value) {
      throw new Error('Make sure you initialized DomainPrefix, GoogleClientId and GoogleClientSecret in the stack parameters')
    }

    const s3frontend = new s3.Bucket(this, 'Bucket', {
      bucketName: domainPrefixParam.valueAsString + '-frontend-bucket',
      blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
      encryption: s3.BucketEncryption.S3_MANAGED,
      enforceSSL: true,
      versioned: false,
      removalPolicy: cdk.RemovalPolicy.DESTROY,
      autoDeleteObjects: true,
      websiteIndexDocument: "index.html",
    });

    //TODO: make sure this origin access identity is not legacy when I deploy
    const cfdistributionoriginaccessidentity = new cloudfront.OriginAccessIdentity(this, 'CFOriginAccessIdentity', {
      comment: "Used to give bucket read to cloudfront"
    })

    const cfdistribution = new cloudfront.CloudFrontWebDistribution(this, 'CFDistributionFrontend', {
      originConfigs: [
        {
          s3OriginSource: {
            s3BucketSource: s3frontend,
            originAccessIdentity: cfdistributionoriginaccessidentity
          },
          behaviors: [{
            isDefaultBehavior: true,
            allowedMethods: cloudfront.CloudFrontAllowedMethods.GET_HEAD_OPTIONS,
            forwardedValues: {
              queryString: true,
              cookies: { forward: 'all' }
            },
            minTtl: cdk.Duration.seconds(0),
            defaultTtl: cdk.Duration.seconds(3600),
            maxTtl: cdk.Duration.seconds(86400)
          }]
        }
      ]
    })

    s3frontend.grantRead(cfdistributionoriginaccessidentity)

    const cfdistributionpolicy = new iam.PolicyStatement({
      effect: iam.Effect.ALLOW,
      actions: ['cloudfront:CreateInvalidation'],
      resources: [`arn:aws:cloudfront::${this.account}:distribution/${cfdistribution.distributionId}`]
    });

    const userpool = new cognito.UserPool(this, 'WebAppUserPool', {
      userPoolName: 'web-app-user-pool',
      selfSignUpEnabled: false
    })

    const userpoolidentityprovidergoogle = new cognito.UserPoolIdentityProviderGoogle(this, 'WebAppUserPoolIdentityGoogle', {
      clientId: googleClientIdParam.valueAsString,
      clientSecret: googleClientSecretParam.valueAsString,
      userPool: userpool,
      attributeMapping: {
        email: cognito.ProviderAttribute.GOOGLE_EMAIL
      },
      scopes: ['email']
    })

    // this is used to make the hosted UI reachable
    userpool.addDomain('Domain', {
      cognitoDomain: {
        domainPrefix: domainPrefixParam.valueAsString
      }
    })

    const CLOUDFRONT_PUBLIC_URL = `https://${cfdistribution.distributionDomainName}/`

    const client = userpool.addClient('Client', {
      oAuth: {
        flows: {
          authorizationCodeGrant: true
        },
        callbackUrls: [
          CLOUDFRONT_PUBLIC_URL
        ],
        logoutUrls: [
          CLOUDFRONT_PUBLIC_URL
        ],
        scopes: [
          cognito.OAuthScope.EMAIL,
          cognito.OAuthScope.OPENID,
          cognito.OAuthScope.PHONE
        ]
      },
      supportedIdentityProviders: [
        cognito.UserPoolClientIdentityProvider.GOOGLE
      ]
    })

    client.node.addDependency(userpoolidentityprovidergoogle)

    // defines an AWS Lambda resource
    const securedlambda = new lambda.Function(this, 'AuthorizedRequestsHandler', {
      runtime: lambda.Runtime.NODEJS_14_X,
      code: lambda.Code.fromAsset('lambda'),
      handler: 'secured.handler'
    });

    const lambdaapiintegration = new apigw.LambdaIntegration(securedlambda)

    const backendapigw = new apigw.RestApi(this, 'AuthorizedRequestAPI', {
      restApiName: domainPrefixParam.valueAsString,
      defaultCorsPreflightOptions: {
        allowOrigins: apigw.Cors.ALL_ORIGINS,
        allowMethods: apigw.Cors.ALL_METHODS,
      }
    })

    const backendapiauthorizer = new apigw.CognitoUserPoolsAuthorizer(this, 'BackendAPIAuthorizer', {
      cognitoUserPools: [userpool]
    })

    const authorizedresource = backendapigw.root.addMethod('GET', lambdaapiintegration, {
      authorizer: backendapiauthorizer,
      authorizationType: apigw.AuthorizationType.COGNITO
    })

    const s3deploymentfrontend = new s3deployment.BucketDeployment(this, 'DeployFrontEnd', {
      sources: [
        s3deployment.Source.asset('./frontend'),
        s3deployment.Source.data('constants.js', `const constants = {domainPrefix:'${domainPrefixParam.valueAsString}', region:'${this.region}', cognito_client_id:'${client.userPoolClientId}', apigw_id:'${backendapigw.restApiId}'}`)
      ],
      destinationBucket: s3frontend,
      distribution: cfdistribution
    })

    new cdk.CfnOutput(this, 'YourPublicCloudFrontURL', {
      value: CLOUDFRONT_PUBLIC_URL,
      description: 'Navigate to the URL to access your deployed application'
    })
  }
}
Recording the solution from the comments:
Cause:
cdk watch apparently does not work with template parameters. I guess this is because the default --hotswap option bypasses CloudFormation and deploys instead via SDK commands.
Solution:
Remove the CfnParameters from the template. CDK recommends not using parameters in any case.
Perhaps cdk watch --no-hotswap would also work?
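If the parameters are only there to feed in values at synth time (as the `process.env` defaults above suggest), they can be replaced by reading the environment directly when the app is synthesized. A minimal sketch of such a helper, assuming the variable names from the stack above; the function name is my own:

```typescript
// Resolve required configuration at synth time instead of via CfnParameter.
// Fails fast with a clear message when a variable is missing.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (value === undefined || value === '') {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage, replacing the CfnParameters in the stack above:
// const domainPrefix = requireEnv('DOMAIN_NAME');
// const googleClientId = requireEnv('GOOGLE_CLIENT_ID');
// const googleClientSecret = requireEnv('GOOGLE_CLIENT_SECRET');
```

Because the values become plain strings at synth time, the template contains no parameters and hotswap deployments have nothing left to trip over.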

AWS CDK: Aurora + Fargate service

I am trying to create a Fargate service that connects to an Aurora Postgres DB through the CDK, but I am unable to; I get a connection error. This should be pretty straightforward, though. Does anybody have any resources?
export class myStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const DATABASE_NAME = "myDatabase";

    const vpc = new ec2.Vpc(this, "myVpc");
    const cluster = new ecs.Cluster(this, "myCluster", {
      vpc: vpc
    });

    const databaseAdminSecret = new Secret(this, 'myCredentialsSecret', {
      secretName: 'database-secret',
      description: 'Database auto-generated user password',
      generateSecretString: {
        secretStringTemplate: JSON.stringify({ username: 'boss' }),
        generateStringKey: 'password',
        passwordLength: 30,
        excludeCharacters: "\"#/\\",
        excludePunctuation: true,
      }
    });

    const database = new rds.DatabaseCluster(this, 'myDatabase', {
      engine: rds.DatabaseClusterEngine.auroraPostgres({
        version: rds.AuroraPostgresEngineVersion.VER_14_5
      }),
      credentials: rds.Credentials.fromSecret(databaseAdminSecret),
      instanceProps: {
        vpc,
      },
      defaultDatabaseName: DATABASE_NAME,
      port: 5432,
    });

    // Create a load-balanced Fargate service and make it public
    const service = new ecs_patterns.ApplicationLoadBalancedFargateService(this, "myService", {
      cluster: cluster, // Required
      taskImageOptions: {
        image: ecs.ContainerImage.fromRegistry('my-custom-image'),
        environment: {
          DB_URL: `jdbc:postgresql://${database.clusterEndpoint.socketAddress}/${DATABASE_NAME}`,
          DB_USERNAME: databaseAdminSecret.secretValueFromJson('username').unsafeUnwrap().toString(),
          DB_PASSWORD: databaseAdminSecret.secretValueFromJson('password').unsafeUnwrap().toString(),
        },
      },
    });

    // Allow the service to connect to the database
    database.connections.allowDefaultPortFrom(service.service);
  }
}
When I spin up this stack with cdk deploy my Fargate service ends up dying because "Connection was refused".
What am I doing wrong?
Thanks,

"Client network socket disconnected before secure TLS connection was established" - Neo4j/GraphQL

I'm starting up NestJS & GraphQL with yarn start:dev, listening via await app.listen(3200);. When trying to connect to my Neo4j Desktop instance, I get this error when running my queries at localhost:3200/graphql:
{
  "errors": [
    {
      "message": "Client network socket disconnected before secure TLS connection was established",
      "locations": [
        {
          "line": 2,
          "column": 3
        }
      ],
      "path": [
        "getMovies"
      ],
      "extensions": {
        "code": "INTERNAL_SERVER_ERROR",
        "exception": {
          "code": "ServiceUnavailable",
          "name": "Neo4jError"
        }
      }
    }
  ],
  "data": null
}
So I figured my local Neo4j Desktop graph is not running correctly, but I can't seem to find any answer on how to solve it. Currently I have a config.ts file which has:
export const HOSTNAME = 'localhost';
export const NEO4J_USER = 'neo4j';
export const NEO4J_PASSWORD = '123';
and a file neogql.resolver.ts:
import {
  Resolver,
  Query,
  Args,
  ResolveProperty,
  Parent,
} from '@nestjs/graphql';
import { HOSTNAME, NEO4J_USER, NEO4J_PASSWORD } from '../config';
import { Movie } from '../graphql';
import { Connection, relation, node } from 'cypher-query-builder';
import { NotFoundException } from '@nestjs/common';

const db = new Connection(`bolt://${HOSTNAME}`, {
  username: NEO4J_USER,
  password: NEO4J_PASSWORD,
});

@Resolver('Movie')
export class NeogqlResolver {
  @Query()
  async getMovies(): Promise<Movie> {
    const movies = (await db
      .matchNode('movies', 'Movie')
      .return([
        {
          movies: [{ id: 'id', title: 'title', year: 'year' }],
        },
      ])
      .run()) as any;
    return movies;
  }

  @Query('movie')
  async getMovieById(
    @Args('id')
    id: string,
  ): Promise<any> {
    const movie = (await db
      .matchNode('movie', 'Movie')
      .where({ 'movie.id': id })
      .return([
        {
          movie: [{ id: 'id', title: 'title', year: 'year' }],
        },
      ])
      .run<any>()) as any;
    if (movie.length === 0) {
      throw new NotFoundException(
        `Movie id '${id}' does not exist in database `,
      );
    }
    return movie[0];
  }

  @ResolveProperty()
  async actors(@Parent() movie: any) {
    const { id } = movie;
    return (await db
      .match([node('actors', 'Actor'), relation('in'), node('movie', 'Movie')])
      .where({ 'movie.id': id })
      .return([
        {
          actors: [
            {
              id: 'id',
              name: 'name',
              born: 'born',
            },
          ],
        },
      ])
      .run()) as any;
  }
}
Be sure to pass the Config object like this:
var hostname = this.configService.get<string>('NEO4J_URL');
var username = this.configService.get<string>('NEO4J_USERNAME');
var password = this.configService.get<string>('NEO4J_PASSWORD');
db = new Connection(`${hostname}`, {
  username: username,
  password: password,
}, {
  driverConfig: { encrypted: "ENCRYPTION_OFF" }
});
I had the same problem with GRANDstack when running against a Neo4j version 4 server. According to Will Lyon, this is due to mismatched encryption defaults between the driver and the database: https://community.neo4j.com/t/migrating-an-old-grandstack-project-to-neo4j-4/16911/2
So passing a config object with
{ encrypted: "ENCRYPTION_OFF"}
to the Connection constructor should do the trick.
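For what it's worth, with Neo4j 4.x drivers the same choice can also be expressed in the connection URI scheme itself: `bolt://` (or `neo4j://`) is unencrypted, while `bolt+s://` (or `neo4j+s://`) enables TLS. A tiny illustrative helper for picking the scheme; the function name is my own:

```typescript
// Map the desired encryption mode onto a Neo4j 4.x connection URI.
// bolt://   -> unencrypted (equivalent to ENCRYPTION_OFF)
// bolt+s:// -> TLS with full certificate checks
function boltUri(host: string, encrypted: boolean): string {
  return `${encrypted ? 'bolt+s' : 'bolt'}://${host}`;
}
```

Note that when the scheme carries the encryption setting, the driver rejects a separate `encrypted` config option, so pick one mechanism or the other.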

Get a handle for the Application Load Balancer for an Elastic Beanstalk environment instantiated via CDK

I created an Elastic Beanstalk Environment like so:
const certificate = new certificatemanager.Certificate(this, 'Certificate', {
  domainName: props.domainName,
  subjectAlternativeNames: [],
  validationMethod: certificatemanager.ValidationMethod.EMAIL,
});

const optionSettings = {
  'aws:autoscaling:asg': {
    MinSize: '2',
    MaxSize: '2',
  },
  'aws:ec2:vpc': {
    VPCId: vpc.vpcId,
    Subnets: vpc.privateSubnets.map((subnet) => subnet.subnetId).join(','),
    ElbSubnets: vpc.publicSubnets.map((subnet) => subnet.subnetId).join(','),
  },
  'aws:elasticbeanstalk:environment': {
    EnvironmentType: 'LoadBalanced',
    LoadBalancerType: 'application',
  },
  'aws:elbv2:listener:443': {
    ListenerEnabled: 'true',
    Protocol: 'HTTPS',
    SSLCertificateArns: certificate.certificateArn,
  },
  'aws:autoscaling:launchconfiguration': {
    IamInstanceProfile: 'aws-elasticbeanstalk-ec2-role',
    InstanceType: 't3.medium',
  },
  'aws:elasticbeanstalk:application:environment': {
    CORS_ORIGIN_ALLOW_ALL: 'False',
  },
};

const environment = new elasticbeanstalk.CfnEnvironment(this, 'Environment', {
  environmentName: `env`,
  description: 'My Environment Description',
  applicationName: application.applicationName || 'Error',
  versionLabel: applicationVersion.ref,
  solutionStackName: '64bit Amazon Linux 2018.03 v2.9.5 running Python 3.6',
  optionSettings: OptionSettingsUtil.flatten(optionSettings),
});
Where OptionSettingsUtil.flatten is a custom function I wrote to flatten configuration options.
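(The author's `OptionSettingsUtil.flatten` isn't shown; for readers following along, here is a minimal sketch of what such a helper might look like, assuming it maps the nested object above onto the `{ namespace, optionName, value }` list that `CfnEnvironment` expects. The names are mine.)

```typescript
interface OptionSetting {
  namespace: string;
  optionName: string;
  value: string;
}

// Flatten { namespace: { OptionName: value } } into the list shape
// expected by CfnEnvironment's optionSettings property.
function flattenOptionSettings(
  settings: Record<string, Record<string, string>>,
): OptionSetting[] {
  return Object.entries(settings).flatMap(([namespace, options]) =>
    Object.entries(options).map(([optionName, value]) => ({
      namespace,
      optionName,
      value,
    })),
  );
}
```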
How can I get the handle for the Application Load Balancer resource that will be generated by this Elastic Beanstalk environment? I need it to associate a WAF ACL with it.
You cannot until it is actually created; afterwards, you can look it up as follows:
const loadBalancer = elbv2.ApplicationLoadBalancer.fromLookup(this, 'ALB', {
  loadBalancerTags: {
    'elasticbeanstalk:environment-name': environmentName
  },
})
Docs: https://docs.aws.amazon.com/cdk/api/latest/docs/aws-elasticloadbalancingv2-readme.html#looking-up-load-balancers-and-listeners
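With the load balancer in hand, the WAF association the question is after can then be made with `wafv2.CfnWebACLAssociation`, which takes the ALB ARN and the web ACL ARN. Below is a sketch of just the props construction as a plain object, so it can be sanity-checked without deploying; the helper name is mine and the web ACL is assumed to exist elsewhere in the stack:

```typescript
// Props shape for wafv2.CfnWebACLAssociation: associate a regional
// WAF web ACL with a resource (here, the looked-up ALB).
function webAclAssociationProps(loadBalancerArn: string, webAclArn: string) {
  return {
    resourceArn: loadBalancerArn, // e.g. loadBalancer.loadBalancerArn
    webAclArn,                    // e.g. webAcl.attrArn
  };
}

// In the stack, something like:
// new wafv2.CfnWebACLAssociation(this, 'WafAssociation',
//   webAclAssociationProps(loadBalancer.loadBalancerArn, webAcl.attrArn));
```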

TypeORM connection provider as Connection class

Is it possible to use the Connection class as a provider token, like here?
import { Connection, createConnection } from 'typeorm';

export const databaseProviders = [{
  provide: Connection,
  useFactory: async () => await createConnection({
    type: 'postgres',
    host: 'localhost',
    port: 5432,
    username: 'postgres',
    password: 'postgres',
    database: 'testo',
    entities: [
      __dirname + '/../**/*.entity{.ts,.js}',
    ],
    logging: true,
    synchronize: true,
  }),
}];
So that injection works like this:
constructor(
  private connection: Connection,
) {
  this.repository = connection.getRepository(Project);
}
In that case NestJS can't find the dependency. I guess the problem is in TypeORM, since it is compiled to plain ES5 functions. But maybe there is a solution for this?
repository to reproduce
UPDATE:
I found an acceptable solution, the NestJS TypeORM module, but I don't understand why my Connection class did not work when it works well with strings. I hope @kamil-myƛliwiec will help me understand.
modules: [
  TypeOrmModule.forRoot(
    [
      Build,
      Project,
    ],
    {
      type: 'postgres',
      host: 'localhost',
      port: 5432,
      username: 'postgres',
      password: 'postgres',
      database: 'testo',
      entities: [
        Build,
      ],
      logging: true,
      synchronize: true,
    }),
],
// And then inject like this, by entity name:
@InjectRepository(Build) private repository: Repository<Build>,
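One plausible explanation for why the class token misbehaves while strings work (my reading, not confirmed in the thread): Nest resolves providers by token identity, and classes compare by reference. If typeorm ends up duplicated in node_modules (or mixed ts/js builds are loaded), the `Connection` used at registration and the one used at injection are two different objects, so the lookup misses; string tokens compare by value and cannot drift apart like that. A toy sketch of the identity mismatch (not Nest's actual container):

```typescript
// A toy DI container keyed by token, to show why token identity matters.
class ConnectionA {} // "Connection" as seen by the provider file
class ConnectionB {} // "Connection" from a duplicated copy of the package

const container = new Map<unknown, unknown>();
container.set(ConnectionA, new ConnectionA()); // registered under one class identity
container.set('Connection', new ConnectionA()); // registered under a string token

// Looking up with the *other* class identity misses; the string still works:
const byWrongClass = container.get(ConnectionB); // undefined
const byString = container.get('Connection');    // found
```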
