I'm trying to create a serverless Aurora database with the AWS CDK (1.19.0). However, it is always created in the default VPC of the region. If I specify a vpc_security_group_id, CloudFormation fails because the provided security group is in the VPC created in the same stack as the Aurora DB:
"The DB instance and EC2 security group are in different VPCs."
Here is my code sample:
from aws_cdk import (
    core,
    aws_rds as rds,
    aws_ec2 as ec2
)


class CdkAuroraStack(core.Stack):

    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        # The code that defines your stack goes here
        vpc = ec2.Vpc(self, "VPC")

        sg = ec2.SecurityGroup(self, "SecurityGroup",
            vpc=vpc,
            allow_all_outbound=True
        )

        cluster = rds.CfnDBCluster(self, "AuroraDB",
            engine="aurora",
            engine_mode="serverless",
            master_username="admin",
            master_user_password="password",
            database_name="databasename",
            vpc_security_group_ids=[
                sg.security_group_id
            ]
        )
Am I missing something? Is it possible to create the CfnDBCluster in a specific VPC, or is this just not possible at the moment?
Thanks for any help and advice. Have a nice day!
You should create a DB subnet group and include only the subnets you want Amazon RDS to launch instances into. Amazon RDS creates a DB subnet group in the default VPC if none is specified.
You can use the db_subnet_group_name property to point the cluster at your own subnets.
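For example, a minimal sketch of that low-level approach (the construct ID and description are placeholders; the remaining cluster properties stay as in your question):
subnet_group = rds.CfnDBSubnetGroup(self, "AuroraSubnetGroup",
    db_subnet_group_description="Subnets for the serverless Aurora cluster",
    subnet_ids=vpc.select_subnets(subnet_type=ec2.SubnetType.PRIVATE).subnet_ids
)

cluster = rds.CfnDBCluster(self, "AuroraDB",
    engine="aurora",
    engine_mode="serverless",
    db_subnet_group_name=subnet_group.ref,  # Ref of a CfnDBSubnetGroup resolves to its name
    vpc_security_group_ids=[sg.security_group_id],
    # ... master_username, master_user_password, database_name as in your snippet
)
However, it is better to use high-level constructs; in this case, there is one called DatabaseCluster: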
cluster = DatabaseCluster(
    scope=self,
    id="AuroraDB",
    engine=DatabaseClusterEngine.AURORA,
    master_user=rds.Login(
        username="admin",
        password="Do not put passwords in your CDK code directly"
    ),
    instance_props={
        "instance_type": ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.SMALL),
        "vpc_subnets": {
            "subnet_type": ec2.SubnetType.PRIVATE
        },
        "vpc": vpc,
        "security_group": sg
    }
)
Do not specify the password attribute for your database; the CDK assigns a Secrets Manager generated password by default.
Note that this construct is still experimental, which means there might be breaking changes in the future.
I have two private hosted zones created for populating A records and PTR records corresponding to my EC2 instance's private IP. Yes, it's the private IP that I need: this subnet is routed to our corporate data center, so we need non-cryptic hostnames and consistent reverse lookup on them within the account.
I've got the forward lookup working well; however, I'm confused about how exactly it should work for the reverse lookup on the IP. Assume my CIDR is 192.168.10.0/24, where the EC2 instances will be created.
const fwdZone = new aws_route53.PrivateHostedZone(
  this, "myFwdZone", {
    zoneName: "example.com",
    vpc: myVpc,
  });

const revZone = new aws_route53.PrivateHostedZone(
  this, "myRevZone", {
    zoneName: "10.168.192.in-addr.arpa",
    vpc: myVpc,
  }
);
I'm later creating the A record by referencing the EC2 instance's instancePrivateIp property. This worked well.
const myEc2 = new aws_ec2.Instance(this, 'myEC2', {...})

new aws_route53.RecordSet(this, "fwdRecord", {
  zone: fwdZone,
  recordName: "myec2.example.com",
  recordType: aws_route53.RecordType.A,
  target: aws_route53.RecordTarget.fromIpAddresses(
    myEc2.instancePrivateIp
  ),
});
However, when I try to create the PTR record for the same instance, I run into some trouble. I needed to extract the fourth octet and specify it as the recordName:
new aws_route53.RecordSet(this, "revRecord", {
  zone: revZone,
  recordName: myEc2.instancePrivateIp.split('.')[3],
  recordType: aws_route53.RecordType.PTR,
  target: aws_route53.RecordTarget.fromValues("myec2.example.com"),
});
The CDK synthesized CloudFormation template looks odd as well, especially the token syntax.
revRecordDEADBEEF:
  Type: AWS::Route53::RecordSet
  Properties:
    Name: ${Token[TOKEN.10.168.192.in-addr.arpa.
    Type: PTR
    HostedZoneId: A12345678B00CDEFGHIJ3
    ResourceRecords:
      - myec2.example.com
    TTL: "1800"
Is this the right way to achieve this? If I specify the recordName as just the private IP, the synthesized template ends up doing something else, which I can see is incorrect too.
revRecordDEADBEEF:
  Type: AWS::Route53::RecordSet
  Properties:
    Name:
      Fn::Join:
        - ""
        - - Fn::GetAtt:
              - myEC2123A01BC
              - PrivateIp
          - .10.168.192.in-addr.arpa.
    Type: PTR
    HostedZoneId: A12345678B00CDEFGHIJ3
    ResourceRecords:
      - myec2.example.com
    TTL: "1800"
Answering the CDK part of your question: the original error was because you were performing string manipulation on an unresolved token. Your CDK code runs before any resources are provisioned. This has to be the case, since it generates the CloudFormation template that will be submitted to CloudFormation to provision the resources. So when the code runs, the instance does not exist, and its IP address is not knowable.
CDK still allows you to access unresolved properties, returning a Token instead. You can pass this token around and it will be resolved to the actual value during deployment.
To perform string manipulation on a token, you can use CloudFormation's built-in functions, since they run during deployment, after the token has been resolved.
Here's what it would look like:
recordName: Fn.select(0, Fn.split('.', myEc2.instancePrivateIp))
As you had already worked out yourself, the record name needs the fourth octet of the address, so the actual solution replaces the 0 with a 3 in the call above.
References:
https://docs.aws.amazon.com/cdk/v2/guide/tokens.html
https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib-readme.html#intrinsic-functions-and-condition-expressions
I have a running Cloud Run service, user-service. For test purposes I passed client secrets via environment variables as plain text. Now that everything is working fine, I'd like to use a secret instead.
In the "Variables" tab of the "Edit Revision" option I can declare environment variables, but I have no idea how to pass in a secret. Do I just need to put the secret name, like ${my-secret-id}, in the value field of the variable? There is no documentation on how to use secrets in this tab, only a hint at the top:
Store and consume secrets using Secret Manager
Which is not very helpful in this case.
You can now read secrets from Secret Manager as environment variables in Cloud Run. This means you can audit your secrets, set permissions per secret, version secrets, etc., and your code doesn't have to change.
You can point to the secrets through the Cloud Console GUI (console.cloud.google.com) or configure them when you deploy your Cloud Run service from the command line:
gcloud beta run deploy SERVICE --image IMAGE_URL --update-secrets=ENV_VAR_NAME=SECRET_NAME:VERSION
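For example, for the user-service from the question (the env var name CLIENT_SECRET, the image path, and the secret name my-client-secret are placeholders for your own values):
gcloud beta run deploy user-service --image gcr.io/PROJECT_ID/user-service --update-secrets=CLIENT_SECRET=my-client-secret:latest
Inside the container, CLIENT_SECRET then behaves like any other environment variable.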
Six-minute video overview: https://youtu.be/JIE89dneaGo
Detailed docs: https://cloud.google.com/run/docs/configuring/secrets
UPDATE 2021: There is now a Cloud Run preview for loading secrets to an environment variable or a volume. https://cloud.google.com/run/docs/configuring/secrets
The question is now answered; however, I have been experiencing a similar problem using Cloud Run with Java & Quarkus and a native image created using GraalVM.
While Cloud Run is a really interesting technology, at the time of writing it lacks the ability to load secrets through the Cloud Run configuration. This has certainly added complexity to my app when doing local development.
Additionally, Google's documentation is really quite poor. The quick-start lacks a clear Java example for getting a secret[1] without it being set in the same method - I'd expect this to be the most common use case!
The Javadoc itself seems to be largely autogenerated, with protobuf language everywhere. There are various similarly named methods like getSecret, getSecretVersion and accessSecretVersion.
I'd really like to see some improvement from Google around this. I don't think it is asking too much for dedicated teams to make libraries for common languages with proper documentation.
Here is a snippet that I'm using to load this information. It requires the GCP Secret Manager library and also the GCP Cloud Core library for loading the project ID.
public String getSecret(final String secretName) {
    LOGGER.info("Going to load secret {}", secretName);
    // SecretManagerServiceClient should be closed after request
    try (SecretManagerServiceClient client = buildClient()) {
        // Latest is an alias to the latest version of a secret
        final SecretVersionName name = SecretVersionName.of(getProjectId(), secretName, "latest");
        return client.accessSecretVersion(name).getPayload().getData().toStringUtf8();
    }
}

private String getProjectId() {
    if (projectId == null) {
        projectId = ServiceOptions.getDefaultProjectId();
    }
    return projectId;
}

private SecretManagerServiceClient buildClient() {
    try {
        return SecretManagerServiceClient.create();
    } catch (final IOException e) {
        throw new RuntimeException(e);
    }
}
[1] - https://cloud.google.com/secret-manager/docs/reference/libraries
Google has documentation for the Secret Manager client libraries that you can use in your API.
This should help you do what you want:
https://cloud.google.com/secret-manager/docs/reference/libraries
Since you haven't specified a language, here is a Node.js example of how to access the latest version of your secret using your project ID and secret name. I'm adding this because the documentation is not clear about the string you need to provide as the name:
const [version] = await this.secretClient.accessSecretVersion({
  name: `projects/${process.env.project_id}/secrets/${secretName}/versions/latest`,
});

return version.payload.data.toString();
Be sure to allow Secret Manager access in your IAM settings for the service account that your API uses within GCP.
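If you prefer to grant that from the command line, something like this should work (the secret name and the service account e-mail are placeholders for your own values):
gcloud secrets add-iam-policy-binding my-secret \
  --member="serviceAccount:my-service@my-project.iam.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"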
I kinda found a way to use secrets as environment variables.
The following doc (https://cloud.google.com/sdk/gcloud/reference/run/deploy) states:
Specify secrets to mount or provide as environment variables. Keys
starting with a forward slash '/' are mount paths. All other keys
correspond to environment variables. The values associated with each
of these should be in the form SECRET_NAME:KEY_IN_SECRET; you may omit
the key within the secret to specify a mount of all keys within the
secret. For example:
'--update-secrets=/my/path=mysecret,ENV=othersecret:key.json' will
create a volume with secret 'mysecret' and mount that volume at
'/my/path'. Because no secret key was specified, all keys in
'mysecret' will be included. An environment variable named ENV will
also be created whose value is the value of 'key.json' in
'othersecret'. At most one of these may be specified
Here is a snippet of Java code to get all secrets of your Cloud Run project. It requires the com.google.cloud/google-cloud-secretmanager artifact.
Map<String, String> secrets = new HashMap<>();

String projectId;
String url = "http://metadata.google.internal/computeMetadata/v1/project/project-id";
HttpURLConnection conn = (HttpURLConnection) (new URL(url).openConnection());
conn.setRequestProperty("Metadata-Flavor", "Google");
try {
    InputStream in = conn.getInputStream();
    projectId = new String(in.readAllBytes(), StandardCharsets.UTF_8);
} finally {
    conn.disconnect();
}

Set<String> names = new HashSet<>();
try (SecretManagerServiceClient client = SecretManagerServiceClient.create()) {
    ProjectName projectName = ProjectName.of(projectId);
    ListSecretsPagedResponse pagedResponse = client.listSecrets(projectName);
    pagedResponse
        .iterateAll()
        .forEach(secret -> { names.add(secret.getName()); });
    for (String secretName : names) {
        String name = secretName.substring(secretName.lastIndexOf("/") + 1);
        SecretVersionName nameParam = SecretVersionName.of(projectId, name, "latest");
        String secretValue = client.accessSecretVersion(nameParam).getPayload().getData().toStringUtf8();
        secrets.put(secretName, secretValue);
    }
}
Cloud Run support for referencing Secret Manager Secrets is now at general availability (GA).
https://cloud.google.com/run/docs/release-notes#November_09_2021
I have gone through https://docs.aws.amazon.com/cdk/api/latest/python/aws_cdk.aws_elasticache.html.
How do I create an ElastiCache Redis cluster using the AWS CDK? It would be helpful if you could share sample code.
Sorry for the late response, but this may be useful for others.
The CDK doesn't have a high-level construct to create a Redis cluster, but you can create one using the low-level (Cfn) constructs.
For the different Redis cluster types, you can take a look at this: https://aws.amazon.com/it/blogs/database/work-with-cluster-mode-on-amazon-elasticache-for-redis/
I've created a single Redis (no replication) cluster using TypeScript like this:
const subnetGroup = new CfnSubnetGroup(
  this,
  "RedisClusterPrivateSubnetGroup",
  {
    cacheSubnetGroupName: "privata",
    subnetIds: privateSubnets.subnetIds,
    description: "private development subnet"
  }
);

const redis = new CfnCacheCluster(this, `RedisCluster`, {
  engine: "redis",
  cacheNodeType: "cache.t2.small",
  numCacheNodes: 1,
  clusterName: "redis-sviluppo",
  vpcSecurityGroupIds: [vpc.defaultSecurityGroup.securityGroupId],
  cacheSubnetGroupName: subnetGroup.cacheSubnetGroupName
});

redis.addDependsOn(subnetGroup);
If you need a Redis cluster with cluster mode enabled, you can use a replication group:
const redisSubnetGroup = new CfnSubnetGroup(
  this,
  "RedisClusterPrivateSubnetGroup",
  {
    cacheSubnetGroupName: "privata",
    subnetIds: privateSubnets.subnetIds,
    description: "private production subnet"
  }
);

const redisReplication = new CfnReplicationGroup(
  this,
  `RedisReplicaGroup`,
  {
    engine: "redis",
    cacheNodeType: "cache.m5.xlarge",
    replicasPerNodeGroup: 1,
    numNodeGroups: 3,
    automaticFailoverEnabled: true,
    autoMinorVersionUpgrade: true,
    replicationGroupDescription: "production redis cluster",
    cacheSubnetGroupName: redisSubnetGroup.cacheSubnetGroupName
  }
);

redisReplication.addDependsOn(redisSubnetGroup);
Hope this helps.
I struggled for hours creating a Redis cluster with cluster mode enabled, with just one shard but two nodes. If you create a CfnReplicationGroup with num_cache_clusters=2, it will create a primary and a replica node.
The trick is to create the CfnReplicationGroup with num_cache_clusters=2 and set cache_parameter_group_name="default.redis6.x.cluster.on".
It will then create a Redis cache with cluster mode enabled, one shard, and two nodes.
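A minimal sketch in Python (assuming from aws_cdk import aws_elasticache as elasticache; the construct ID, node type and subnet group name are placeholders, and you may also need to pin engine_version to a 6.x release that matches the parameter group):
redis = elasticache.CfnReplicationGroup(self, "RedisOneShardTwoNodes",
    replication_group_description="cluster mode enabled, one shard, two nodes",
    engine="redis",
    cache_node_type="cache.t3.small",                          # placeholder node type
    num_cache_clusters=2,                                      # primary + one replica
    automatic_failover_enabled=True,                           # needed for cluster mode
    cache_parameter_group_name="default.redis6.x.cluster.on",  # turns cluster mode on
    cache_subnet_group_name="my-subnet-group"                  # placeholder subnet group
)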
I need to create a VPC endpoint and an ALB that targets the VPC endpoint in CDK.
I found that InterfaceVpcEndpoint can return the vpcEndpointNetworkInterfaceIds attribute, so it seems the missing part is how to get the private IP addresses from these ENI IDs in a CDK way.
I found that the CDK has a custom-resources package; its example shows I can use AwsCustomResource to call an AWS API (EC2 DescribeNetworkInterfaces) to get the IP address.
I tried writing a custom resource like the one below:
eni = AwsCustomResource(
    self, 'DescribeNetworkInterfaces',
    on_create=custom_resources.AwsSdkCall(
        service='ec2',
        action='describeNetworkInterfaces',
        parameters={
            'NetworkInterfaceId.N': [eni_id]
        },
        physical_resource_id=str(time.time())
    )
)
ip = eni.get_data('NetworkInterfaces.0.PrivateIpAddress')
and pass ip into elbv2.IpTarget.
But it seems I missed something, because it complains that it needs a scalar, not a reference:
(.env) ➜ base-stack (master) ✔ cdk synth base --no-staging > template.yaml
jsii.errors.JavaScriptError:
Error: Expected Scalar, got {"$jsii.byref":"#aws-cdk/core.Reference#10015"}
at Object.deserialize (/Volumes/DATA/ci/aws/base-stack/.env/lib/python3.7/site-packages/jsii/_embedded/jsii/jsii-runtime.js:12047:23)
at Kernel._toSandbox (/Volumes/DATA/ci/aws/base-stack/.env/lib/python3.7/site-packages/jsii/_embedded/jsii/jsii-runtime.js:7031:61)
at /Volumes/DATA/ci/aws/base-stack/.env/lib/python3.7/site-packages/jsii/_embedded/jsii/jsii-runtime.js:7084:33
at Array.map (<anonymous>)
at Kernel._boxUnboxParameters (/Volumes/DATA/ci/aws/base-stack/.env/lib/python3.7/site-packages/jsii/_embedded/jsii/jsii-runtime.js:7084:19)
at Kernel
....
Thanks!
The AwsCustomResource.get_data method returns a Reference object, which is what causes the issue. To get the CloudFormation token (!GetAtt "DescribeNetworkInterfaces"."NetworkInterfaces.0.PrivateIpAddress"), the Reference.to_string method must be used explicitly.
This:
ip = eni.get_data('NetworkInterfaces.0.PrivateIpAddress')
Becomes:
ip = eni.get_data('NetworkInterfaces.0.PrivateIpAddress').to_string()
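The resulting ip is a plain string token that resolves during deployment, so you can hand it to the target class you mentioned, for example (a hypothetical usage sketch; target_group is assumed to exist already, and you should double-check the exact IpTarget class and module name for your CDK version):
target_group.add_target(elbv2.IpTarget(ip))  # IpTarget as referenced in the question; resolves to the ENI's private IP at deploy time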
I've started using the AWS CDK to stand up a new VPC, but I am struggling when trying to query other existing VPCs and their CIDR ranges; this is to ensure that my new VPC does not overlap with existing CIDR ranges. The return string is not something I can understand. Could you provide an example of how to query for a list of CIDR ranges in subnets?
Thanks.
If you are trying to reference an existing VPC in your CDK stack, you should use the VpcNetwork.import static method, which doesn't require you to specify the CIDR blocks of the VPC.
You will need the other information specified in VpcNetworkRefProps, which shouldn't be too hard to obtain from the AWS Console or the AWS CLI.
Something like:
const externalVpc = VpcNetwork.import(this, 'ExternalVpc', {
  vpcId: 'vpc-bd5656d4',
  availabilityZones: [ 'us-east-1a', 'us-east-1b' ],
  publicSubnetIds: [ 'subnet-1111aaaa', 'subnet-2222bbbb' ],
  privateSubnetIds: [ 'subnet-8368fbce', 'subnet-8368abcc' ],
});
We are looking at making this easier (see #506)