I am trying to create a Fargate service that connects to an Aurora Postgres DB through the CDK, but I can't get it to work. I get a connection error. This should be pretty straightforward though. Does anybody have any resources?
export class myStack extends cdk.Stack {
constructor(scope: Construct, id: string, props?: cdk.StackProps) {
super(scope, id, props);
const DATABASE_NAME = "myDatabase";
const vpc = new ec2.Vpc(this, "myVpc");
const cluster = new ecs.Cluster(this, "myCluster", {
vpc: vpc
});
const databaseAdminSecret = new Secret(this, 'myCredentialsSecret', {
secretName: 'database-secret',
description: 'Database auto-generated user password',
generateSecretString: {
secretStringTemplate: JSON.stringify({username: 'boss'}),
generateStringKey: 'password',
passwordLength: 30,
excludeCharacters: "\"#/\\",
excludePunctuation: true,
}
});
const database = new rds.DatabaseCluster(this, 'myDatabase', {
engine: rds.DatabaseClusterEngine.auroraPostgres({
version: rds.AuroraPostgresEngineVersion.VER_14_5
}),
credentials: rds.Credentials.fromSecret(databaseAdminSecret),
instanceProps: {
vpc,
},
defaultDatabaseName: DATABASE_NAME,
port: 5432,
});
// Create a load-balanced Fargate service and make it public
const service = new ecs_patterns.ApplicationLoadBalancedFargateService(this, "myService", {
cluster: cluster, // Required
taskImageOptions: {
image: ecs.ContainerImage.fromRegistry('my-custom-image'),
environment: {
DB_URL: `jdbc:postgresql://${database.clusterEndpoint.socketAddress}/${DATABASE_NAME}`,
DB_USERNAME: databaseAdminSecret.secretValueFromJson('username').unsafeUnwrap().toString(),
DB_PASSWORD: databaseAdminSecret.secretValueFromJson('password').unsafeUnwrap().toString(),
},
},
});
// Allow the service to connect to the database
database.connections.allowDefaultPortFrom(service.service);
}
}
When I spin up this stack with cdk deploy, my Fargate service ends up dying with "Connection was refused".
What am I doing wrong?
Thanks,
I'm developing a little CDKv2 script to instantiate a few AWS services.
I have some Lambda code deployed from the lambda/ folder and the frontend stored in a bucket populated from the frontend/ folder in the source.
I've noticed that whenever I make a change to any of the files inside these two folders, cdk watch returns the following error and falls back to performing a full redeploy (which is significantly slower).
Could not perform a hotswap deployment, because the CloudFormation template could not be resolved: Parameter or resource 'DomainPrefix' could not be found for evaluation
Falling back to doing a full deployment
Is there any way to make changes in these folders trigger only an update of the related bucket content or the related Lambda?
Here is the stack.ts for quick reference; just in case, you can also take a look at the repo.
export class CdkAuthWebappStack extends Stack {
constructor(scope: Construct, id: string, props?: StackProps) {
super(scope, id, props);
const domainPrefixParam = new CfnParameter(this, 'DomainPrefix', {
type: 'String',
description: 'You have to set it in google cloud as well', //(TODO: add link to explain properly)
default: process.env.DOMAIN_NAME || ''
})
const googleClientIdParam = new CfnParameter(this, 'GoogleClientId', {
type: 'String',
description: 'From google project',
noEcho: true,
default: process.env.GOOGLE_CLIENT_ID || ''
})
const googleClientSecretParam = new CfnParameter(this, 'GoogleClientSecret', {
type: 'String',
description: 'From google project',
noEcho: true,
default: process.env.GOOGLE_CLIENT_SECRET || ''
})
if(!domainPrefixParam.value || !googleClientIdParam.value || !googleClientSecretParam.value){
throw new Error('Make sure you initialized DomainPrefix, GoogleClientId and GoogleClientSecret in the stack parameters')
}
const s3frontend = new s3.Bucket(this, 'Bucket', {
bucketName: domainPrefixParam.valueAsString+'-frontend-bucket',
blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
encryption: s3.BucketEncryption.S3_MANAGED,
enforceSSL: true,
versioned: false,
removalPolicy: cdk.RemovalPolicy.DESTROY,
autoDeleteObjects: true,
websiteIndexDocument: "index.html",
});
//TODO: make sure this origin access identity is not legacy when I deploy
const cfdistributionoriginaccessidentity = new cloudfront.OriginAccessIdentity(this, 'CFOriginAccessIdentity', {
comment: "Used to give bucket read to cloudfront"
})
const cfdistribution = new cloudfront.CloudFrontWebDistribution(this, 'CFDistributionFrontend', {
originConfigs: [
{
s3OriginSource: {
s3BucketSource: s3frontend,
originAccessIdentity: cfdistributionoriginaccessidentity
},
behaviors: [{
isDefaultBehavior: true,
allowedMethods: cloudfront.CloudFrontAllowedMethods.GET_HEAD_OPTIONS,
forwardedValues: {
queryString: true,
cookies: { forward: 'all' }
},
minTtl: cdk.Duration.seconds(0),
defaultTtl: cdk.Duration.seconds(3600),
maxTtl: cdk.Duration.seconds(86400)
}]
}
]
})
s3frontend.grantRead(cfdistributionoriginaccessidentity)
const cfdistributionpolicy = new iam.PolicyStatement({
effect: iam.Effect.ALLOW,
actions: ['cloudfront:CreateInvalidation'],
resources: [`arn:aws:cloudfront::${this.account}:distribution/${cfdistribution.distributionId}`]
});
const userpool = new cognito.UserPool(this, 'WebAppUserPool', {
userPoolName: 'web-app-user-pool',
selfSignUpEnabled: false
})
const userpoolidentityprovidergoogle = new cognito.UserPoolIdentityProviderGoogle(this, 'WebAppUserPoolIdentityGoogle', {
clientId: googleClientIdParam.valueAsString,
clientSecret: googleClientSecretParam.valueAsString,
userPool: userpool,
attributeMapping: {
email: cognito.ProviderAttribute.GOOGLE_EMAIL
},
scopes: [ 'email' ]
})
// this is used to make the hostedui reachable
userpool.addDomain('Domain', {
cognitoDomain: {
domainPrefix: domainPrefixParam.valueAsString
}
})
const CLOUDFRONT_PUBLIC_URL = `https://${cfdistribution.distributionDomainName}/`
const client = userpool.addClient('Client', {
oAuth: {
flows: {
authorizationCodeGrant: true
},
callbackUrls: [
CLOUDFRONT_PUBLIC_URL
],
logoutUrls: [
CLOUDFRONT_PUBLIC_URL
],
scopes: [
cognito.OAuthScope.EMAIL,
cognito.OAuthScope.OPENID,
cognito.OAuthScope.PHONE
]
},
supportedIdentityProviders: [
cognito.UserPoolClientIdentityProvider.GOOGLE
]
})
client.node.addDependency(userpoolidentityprovidergoogle)
// defines an AWS Lambda resource
const securedlambda = new lambda.Function(this, 'AuhtorizedRequestsHandler', {
runtime: lambda.Runtime.NODEJS_14_X,
code: lambda.Code.fromAsset('lambda'),
handler: 'secured.handler'
});
const lambdaapiintegration = new apigw.LambdaIntegration(securedlambda)
const backendapigw = new apigw.RestApi(this, 'AuthorizedRequestAPI', {
restApiName: domainPrefixParam.valueAsString,
defaultCorsPreflightOptions: {
"allowOrigins": apigw.Cors.ALL_ORIGINS,
"allowMethods": apigw.Cors.ALL_METHODS,
}
})
const backendapiauthorizer = new apigw.CognitoUserPoolsAuthorizer(this, 'BackendAPIAuthorizer', {
cognitoUserPools: [userpool]
})
const authorizedresource = backendapigw.root.addMethod('GET', lambdaapiintegration, {
authorizer: backendapiauthorizer,
authorizationType: apigw.AuthorizationType.COGNITO
})
const s3deploymentfrontend = new s3deployment.BucketDeployment(this, 'DeployFrontEnd', {
sources: [
s3deployment.Source.asset('./frontend'),
s3deployment.Source.data('constants.js', `const constants = {domainPrefix:'${domainPrefixParam.valueAsString}', region:'${this.region}', cognito_client_id:'${client.userPoolClientId}', apigw_id:'${backendapigw.restApiId}'}`)
],
destinationBucket: s3frontend,
distribution: cfdistribution
})
new cdk.CfnOutput(this, 'YourPublicCloudFrontURL', {
value: CLOUDFRONT_PUBLIC_URL,
description: 'Navigate to the URL to access your deployed application'
})
}
}
Recording the solution from the comments:
Cause:
cdk watch apparently does not work with template parameters. I guess this is because the default --hotswap option bypasses CloudFormation and deploys instead via SDK commands.
Solution:
Remove the CfnParameters from the template. CDK recommends not using parameters in any case.
Perhaps cdk watch --no-hotswap would also work?
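For illustration, a minimal sketch of what that change could look like in the stack above, assuming the prefix is read from CDK context or an environment variable at synth time (the context key domainPrefix is an assumption, not something from the original repo):
const domainPrefix = this.node.tryGetContext('domainPrefix') ?? process.env.DOMAIN_NAME;
if (!domainPrefix) {
  throw new Error('Set the domainPrefix context value or the DOMAIN_NAME environment variable');
}
// The value is now a plain string at synth time, so cdk watch --hotswap
// never has to evaluate a CloudFormation parameter.
const s3frontend = new s3.Bucket(this, 'Bucket', {
  bucketName: `${domainPrefix}-frontend-bucket`,
  // ...remaining bucket props unchanged
});
The same substitution would apply to the GoogleClientId and GoogleClientSecret parameters.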
Is it possible to "require SSL" on an Aurora database created using the AWS CDK? We've enabled encryption, but that is only "at rest", and we're also required to encrypt "in transit" and are being flagged by a security monitor because the database does not "require SSL".
Here is the code we use to set up the database:
const cluster = new rds.DatabaseCluster(scope, 'TheDB', {
defaultDatabaseName: dbName,
engine: rds.DatabaseClusterEngine.auroraPostgres({ version: rds.AuroraPostgresEngineVersion.VER_13_7 }),
credentials: {
username: dbUser,
password: pgPasswordSecret.secretValue,
},
instanceProps: {
securityGroups: [securityGroup],
instanceType: primaryPostgresInstanceType(),
vpcSubnets: {
subnetType: ec2.SubnetType.PRIVATE_WITH_NAT,
},
vpc,
},
storageEncrypted: true,
backup: {
retention: Duration.days(15),
},
})
The solution (from https://gitter.im/awslabs/aws-cdk?at=5e2ab552f196225bd64b7581) is to pass a parameterGroup when creating the database cluster, setting rds.force_ssl to '1':
const postgresVersion = rds.AuroraPostgresEngineVersion.VER_13_7
const parameterGroup = new rds.ParameterGroup(scope, 'ClusterParameterGroup', {
engine: rds.DatabaseClusterEngine.auroraPostgres({ version: postgresVersion}),
parameters: {
'rds.force_ssl': '1',
},
})
const cluster = new rds.DatabaseCluster(scope, 'TheDB', {
...
parameterGroup,
...
})
I'm creating an Application Load Balancer using the AWS CDK v2.
This is my code:
const lb = new elb.ApplicationLoadBalancer(this, 'LB', {
vpc: ec2.Vpc.fromLookup(this, 'vpc-lookup', {
isDefault: true
}),
internetFacing: true
});
const listener = lb.addListener('Listener', {
port: 80,
});
My question is: how do I get the URL (DNS name) of the load balancer? I need it later in the CDK to update something else.
TL;DR The name's actual value is resolved at deploy-time. At synth-time, you can pass loadBalancerDnsName to other constructs and CDK will create the necessary references.
Resource identifiers like DNS addresses are generally known only at deploy-time. The CDK uses Tokens to "represent values that can only be resolved at a later time in the lifecycle of an app". ApplicationLoadBalancer's loadBalancerDnsName: string property is one of those properties whose value resolves to a string Token placeholder at synth-time and an actual value at deploy-time.
Here's an example of passing the loadBalancerDnsName between constructs:
export class AlbStack extends cdk.Stack {
constructor(scope: Construct, id: string, props: cdk.StackProps) {
super(scope, id, props);
const alb = new elb.ApplicationLoadBalancer(this, 'MyALB', {
vpc: ec2.Vpc.fromLookup(this, 'DefaultVpc', { isDefault: true }),
});
// WON'T WORK: at synth-time, the name attribute returns a Token, not the expected DNS name
console.log(alb.loadBalancerDnsName); // ${Token[TOKEN.220]}
// WILL WORK: CDK will wire up the token in CloudFormation as a deploy-time reference
new ssm.StringParameter(this, 'MyAlbDns', {
stringValue: alb.loadBalancerDnsName,
});
}
}
The CDK's CloudFormation template output has an Fn::GetAtt placeholder for the DNS name that resolves at deploy-time:
// CDK CloudFormation stack template
// Resources section
"MyAlbDnsFD44EB27": {
"Type": "AWS::SSM::Parameter",
"Properties": {
"Type": "String",
"Value": { "Fn::GetAtt": [ "MyALB911A8556", "DNSName" ] } // this will resolve to the string at deploy
},
"Metadata": {
"aws:cdk:path": "TsCdkPlaygroundAlbStack/MyAlbDns/Resource"
}
},
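If you also want to see the resolved value after deployment, the same token can be surfaced as a stack output (a small sketch; the output id is arbitrary):
new cdk.CfnOutput(this, 'AlbDnsName', {
  value: alb.loadBalancerDnsName, // resolves to the real DNS name at deploy time
});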
I created an Elastic Beanstalk Environment like so:
const certificate = new certificatemanager.Certificate(this, 'Certificate', {
domainName: props.domainName,
subjectAlternativeNames: [],
validationMethod: certificatemanager.ValidationMethod.EMAIL,
});
const optionSettings = {
'aws:autoscaling:asg': {
MinSize: '2',
MaxSize: '2',
},
'aws:ec2:vpc': {
VPCId: vpc.vpcId,
Subnets: vpc.privateSubnets.map((subnet) => subnet.subnetId).join(','),
ElbSubnets: vpc.publicSubnets.map((subnet) => subnet.subnetId).join(','),
},
'aws:elasticbeanstalk:environment': {
EnvironmentType: 'LoadBalanced',
LoadBalancerType: 'application',
},
'aws:elbv2:listener:443': {
ListenerEnabled: 'true',
Protocol: 'HTTPS',
SSLCertificateArns: certificate.certificateArn,
},
'aws:autoscaling:launchconfiguration': {
IamInstanceProfile: 'aws-elasticbeanstalk-ec2-role',
InstanceType: 't3.medium',
},
'aws:elasticbeanstalk:application:environment': {
CORS_ORIGIN_ALLOW_ALL: 'False',
},
};
const environment = new elasticbeanstalk.CfnEnvironment(this, 'Environment', {
environmentName: `env`,
description: 'My Environment Description',
applicationName: application.applicationName || 'Error',
versionLabel: applicationVersion.ref,
solutionStackName: '64bit Amazon Linux 2018.03 v2.9.5 running Python 3.6',
optionSettings: OptionSettingsUtil.flatten(optionSettings),
});
Where OptionSettingsUtil.flatten is a custom function I wrote to flatten configuration options.
How can I get a handle on the Application Load Balancer resource that will be generated by this Elastic Beanstalk environment? I need it to associate a WAF ACL with it.
You cannot until it is actually created; then you can look it up as follows:
const loadBalancer = elbv2.ApplicationLoadBalancer.fromLookup(this, 'ALB', {
loadBalancerTags: {
'elasticbeanstalk:environment-name': environmentName
},
})
Docs: https://docs.aws.amazon.com/cdk/api/latest/docs/aws-elasticloadbalancingv2-readme.html#looking-up-load-balancers-and-listeners
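With the load balancer looked up, the WAF association from the question could then look roughly like this (a sketch, assuming a wafv2.CfnWebACL with REGIONAL scope named webAcl is already defined in the stack):
// Associate the existing web ACL with the looked-up load balancer
new wafv2.CfnWebACLAssociation(this, 'AlbWafAssociation', {
  resourceArn: loadBalancer.loadBalancerArn,
  webAclArn: webAcl.attrArn,
});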
// fargate
const ecsService = new patterns.ApplicationLoadBalancedFargateService(this, 'Service', {
cluster: cluster, // Required
publicLoadBalancer: true,
taskImageOptions: {
image: ecs.ContainerImage.fromRegistry('nginx')
}
});
// codepipeline artifact
const sourceOutput = new codepipeline.Artifact();
// pipeline
const pipeline = new codepipeline.Pipeline(this, 'Pipeline');
// pipeline stage: Source
pipeline.addStage({
stageName: 'Source',
actions: [
new codepipeline_actions.EcrSourceAction({
actionName: 'ecr_push',
repository: repository,
output: sourceOutput
})
]
});
// pipeline stage: Deploy
pipeline.addStage({
stageName: 'Deploy',
actions: [
new codepipeline_actions.EcsDeployAction({
actionName: 'Deploy',
input: sourceOutput,
service: ecsService
})
]
});
I'm using the ApplicationLoadBalancedFargateService pattern to create the Fargate service.
But the codepipeline_actions EcsDeployAction service prop requires the type ecs.BaseService.
How do I resolve this problem? Do I have to go back to building the Fargate service from scratch?
Any suggestion will be appreciated!
The ApplicationLoadBalancedFargateService higher-level pattern has a service property that is exposed on the instance. The type of ecsService.service is FargateService, which implements the IBaseService interface. Your code should work if you change it to:
pipeline.addStage({
stageName: 'Deploy',
actions: [
new codepipeline_actions.EcsDeployAction({
actionName: 'Deploy',
input: sourceOutput,
service: ecsService.service, // <-
})
]
});
https://docs.aws.amazon.com/cdk/api/latest/docs/#aws-cdk_aws-ecs-patterns.ApplicationLoadBalancedFargateService.html#service