What are the allowed values for RDS ClusterParameterGroup family? - aws-cdk

I am trying to configure an Aurora PostgreSQL 2.3 cluster (compatible with PostgreSQL 10.7) but don't know what the rds.ClusterParameterGroup.family should be set to or if that even matters here.
The example I found used "aurora5.6" as shown below but I don't know how that corresponds to the PostgreSQL version.
const dbparams = new rds.ClusterParameterGroup(this, 'DbClusterParams', {
  family: 'aurora5.6',
  description: 'my parameter group',
  parameters: {
    character_set_database: 'utf8mb4'
  }
});
// create Aurora cluster
const dbcluster = new rds.DatabaseCluster(this, 'DbCluster', {
  defaultDatabaseName: 'MySampleDb',
  engine: rds.DatabaseClusterEngine.AURORA_POSTGRESQL,
  masterUser: {
    username: 'myadmin',
  },
  instanceProps: {
    instanceType: ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.SMALL),
    vpc
  },
  parameterGroup: dbparams,
  kmsKey,
});
The API documentation doesn't provide any details. https://docs.aws.amazon.com/cdk/api/latest/docs/#aws-cdk_aws-rds.ClusterParameterGroup.html

You can find out more about the ClusterParameterGroup family by taking a look at the CloudFormation documentation: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-rds-dbclusterparametergroup.html
The approach described there:
aws rds describe-db-engine-versions --query "DBEngineVersions[].DBParameterGroupFamily"
returns a complete list of possible values (with duplicates):
...
"aurora-mysql5.7",
"aurora-mysql5.7",
"aurora-mysql5.7",
"aurora-mysql5.7",
"docdb3.6",
"neptune1",
"neptune1",
"neptune1",
"aurora-postgresql9.6",
"aurora-postgresql9.6",
"aurora-postgresql9.6",
"aurora-postgresql9.6",
...
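Since the CLI output contains one entry per engine version, deduplicating it gives the distinct family names. A minimal Python sketch over a sample of the values shown above:

```python
# A sample of the (duplicated) values returned by
# `aws rds describe-db-engine-versions`; deduplicating yields
# the distinct parameter group families.
sample = [
    "aurora-mysql5.7", "aurora-mysql5.7",
    "docdb3.6",
    "neptune1", "neptune1",
    "aurora-postgresql9.6", "aurora-postgresql9.6",
]
families = sorted(set(sample))
# → ['aurora-mysql5.7', 'aurora-postgresql9.6', 'docdb3.6', 'neptune1']
```

Going by this naming pattern, an Aurora PostgreSQL 10.7 cluster would use the family aurora-postgresql10 (an assumption worth confirming against the CLI output for your region).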

Related

How to add PTR record for EC2 Instance's Private IP in CDK?

I've two private hosted zones created for populating A records and PTR records corresponding to my EC2 instance's private IP. Yes, it's the private IP that I need. This subnet is routed to our corporate data center, so we need non-cryptic hostnames and consistent reverse lookups on them within the account.
I've got the forward lookup working well; however, I'm confused about how exactly it should work for the reverse lookup on the IP. Assume my CIDR is 192.168.10.0/24, where the EC2 instances will get created.
const fwdZone = new aws_route53.PrivateHostedZone(
  this, "myFwdZone", {
    zoneName: "example.com",
    vpc: myVpc,
  });
const revZone = new aws_route53.PrivateHostedZone(
  this, "myRevZone", {
    zoneName: "10.168.192.in-addr.arpa",
    vpc: myVpc,
  }
);
I'm later creating the A record by referencing the EC2 instance's privateIp property. This worked well.
const myEc2 = new aws_ec2.Instance(this, 'myEC2', {...})
new aws_route53.RecordSet(this, "fwdRecord", {
  zone: fwdZone,
  recordName: "myec2.example.com",
  recordType: aws_route53.RecordType.A,
  target: aws_route53.RecordTarget.fromIpAddresses(
    myEc2.instancePrivateIp
  ),
});
However, when I try to create the PTR record for the same, I've got some trouble. I needed to extract the fourth octet and specify it as the recordName:
new aws_route53.RecordSet(this, "revRecord", {
  zone: revZone,
  recordName: myEc2.instancePrivateIp.split('.')[3],
  recordType: aws_route53.RecordType.PTR,
  target: aws_route53.RecordTarget.fromValues("myec2.example.com"),
});
The CDK synthesized CloudFormation template looks odd as well, especially the token syntax.
revRecordDEADBEEF:
  Type: AWS::Route53::RecordSet
  Properties:
    Name: ${Token[TOKEN.10.168.192.in-addr.arpa.
    Type: PTR
    HostedZoneId: A12345678B00CDEFGHIJ3
    ResourceRecords:
      - myec2.example.com
    TTL: "1800"
Is this the right way to achieve this? If I specify the recordName as just the privateIp, then the synthesized template ends up doing something else, which I see is incorrect too.
revRecordDEADBEEF:
  Type: AWS::Route53::RecordSet
  Properties:
    Name:
      Fn::Join:
        - ""
        - - Fn::GetAtt:
              - myEC2123A01BC
              - PrivateIp
          - .10.168.192.in-addr.arpa.
    Type: PTR
    HostedZoneId: A12345678B00CDEFGHIJ3
    ResourceRecords:
      - myec2.example.com
    TTL: "1800"
Answering the CDK part of your question: the original error was because you were performing string manipulation on an unresolved token. Your CDK code runs before any resources are provisioned. This has to be the case, since it generates the CloudFormation template that will be submitted to CloudFormation to provision the resources. So when the code runs, the instance does not exist, and its IP address is not knowable.
CDK still allows you to access unresolved properties, returning a Token instead. You can pass this token around and it will be resolved to the actual value during deployment.
To perform string manipulation on a token, you can use CloudFormation's built-in functions, since they run during deployment, after the token has been resolved.
Here's what it would look like:
recordName: Fn.select(0, Fn.split('.', myEc2.instancePrivateIp))
As you found out yourself, you were also selecting the wrong octet of the IP address, so the actual solution would include replacing 0 with 3 in the call.
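To make the deploy-time behaviour concrete, here is a plain Python sketch of the split/select logic that CloudFormation performs on the resolved instancePrivateIp (the IP address below is a made-up example):

```python
def ptr_parts(ip: str):
    """Return (recordName, reverse zone) for a /24 reverse zone."""
    o1, o2, o3, o4 = ip.split(".")
    # Fn.select(3, Fn.split('.', ip)) picks the last octet as the record name;
    # the reverse zone name reverses the first three octets.
    return o4, f"{o3}.{o2}.{o1}.in-addr.arpa"

record_name, zone = ptr_parts("192.168.10.45")
# record_name == "45", zone == "10.168.192.in-addr.arpa"
```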
References:
https://docs.aws.amazon.com/cdk/v2/guide/tokens.html
https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib-readme.html#intrinsic-functions-and-condition-expressions

connect from web to iot core using a custom authorizer

I'm trying to use a custom authorizer to authenticate a web client.
I have successfully created a dedicated lambda and a custom authorizer. If I launch aws iot describe-authorizer --authorizer-name <authorizer-name> I can see:
{
  "authorizerDescription": {
    "authorizerName": "<authorizer-name>",
    "authorizerArn": "...",
    "authorizerFunctionArn": "...",
    "tokenKeyName": "<token-key-name>",
    "tokenSigningPublicKeys": {
      "<public-key-name>": "-----BEGIN PUBLIC KEY-----\n<public-key-content>\n-----END PUBLIC KEY-----"
    },
    "status": "ACTIVE",
    "creationDate": "...",
    "lastModifiedDate": "...",
    "signingDisabled": false,
    "enableCachingForHttp": false
  }
}
Moreover, I can test it successfully:
$ aws iot test-invoke-authorizer --authorizer-name '<authorizer-name>' --token '<public-key-name>' --token-signature '<private-key-content>'
{
  "isAuthenticated": true,
  "principalId": "...",
  "policyDocuments": [ "..." ],
  "refreshAfterInSeconds": 600,
  "disconnectAfterInSeconds": 3600
}
But I cannot connect using the browser.
I'm using aws-iot-device-sdk, and according to the SDK documentation I should set customAuthHeaders and/or customAuthQueryString (my understanding is that the latter should be used in a web environment due to a limitation of browsers) with the headers/query params X-Amz-CustomAuthorizer-Name, X-Amz-CustomAuthorizer-Signature and TestAuthorizerToken. But no matter what combination of these values I set, the IoT endpoint always closes the connection (I see a 1000 / 1005 code for the closed connection).
What I've written so far is
const CUSTOM_AUTHORIZER_NAME = '<authorizer-name>';
const CUSTOM_AUTHORIZER_SIGNATURE = '<private-key-content>';
const TOKEN_KEY_NAME = 'TestAuthorizerToken';
const TEST_AUTHORIZER_TOKEN = '<public-key-name>';

function f(k: string, v?: string, p: string = '&'): string {
  if (!v)
    return '';
  return `${p}${encodeURIComponent(k)}=${encodeURIComponent(v)}`;
}

const client = new device({
  region: '...',
  clientId: '...',
  protocol: 'wss-custom-auth' as any,
  host: '...',
  debug: true,
  // customAuthHeaders: {
  //   'X-Amz-CustomAuthorizer-Name': CUSTOM_AUTHORIZER_NAME,
  //   'X-Amz-CustomAuthorizer-Signature': CUSTOM_AUTHORIZER_SIGNATURE,
  //   [TOKEN_KEY_NAME]: TEST_AUTHORIZER_TOKEN
  // },
  customAuthQueryString: `${f('X-Amz-CustomAuthorizer-Name', CUSTOM_AUTHORIZER_NAME, '?')}${f('X-Amz-CustomAuthorizer-Signature', CUSTOM_AUTHORIZER_SIGNATURE)}${f(TOKEN_KEY_NAME, TEST_AUTHORIZER_TOKEN)}`,
} as any);
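For reference, a Python sketch of the same query-string helper, showing the shape of the URL parameters the code above builds (all values are placeholders; note that per the AWS IoT custom-authentication docs the signature should be the Base64-encoded signature of the token, not the private key itself):

```python
from typing import Optional
from urllib.parse import quote

def f(k: str, v: Optional[str], p: str = "&") -> str:
    # Mirrors the TypeScript helper: skip empty values, URL-encode key and value.
    if not v:
        return ""
    return f"{p}{quote(k, safe='')}={quote(v, safe='')}"

qs = (
    f("X-Amz-CustomAuthorizer-Name", "my-authorizer", "?")
    + f("X-Amz-CustomAuthorizer-Signature", "c2lnbmF0dXJl")
    + f("TestAuthorizerToken", "my-token")
)
# "?X-Amz-CustomAuthorizer-Name=my-authorizer&X-Amz-CustomAuthorizer-Signature=c2lnbmF0dXJl&TestAuthorizerToken=my-token"
```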
As you can see, I also started having doubts about the header names!
After running my code I see that the client does a GET to the host with the query string that I wrote.
I also see that IoT Core responds with a 101 Switching Protocols, then my client sends the CONNECT command to IoT via WebSocket, followed by another packet from my browser to the backend system.
Then the connection is closed by IoT.
Looking at CloudWatch I cannot see any interaction with the lambda; it's like the request is blocked.
My doubts are:
First of all, is it possible to connect via mqtt+wss using only a custom authorizer, without Cognito/certificates? Keep in mind that I am able to use a Cognito identity pool without errors, but I need to remove it.
Is it correct that I just need to set up the customAuthQueryString parameter? My understanding is that this is what should be used on the web.
What are the values I should set for the various headers/query params? X-Amz-CustomAuthorizer-Name is self-explanatory, but I'm not sure about X-Amz-CustomAuthorizer-Signature (is it correct to fill it with the content of my private key?). Moreover, I'm not sure about TestAuthorizerToken. Is it the correct key to set?
I've also tried to run the custom_authorizer_connect sample of SDK v2, but it's still not working, and I've run out of ideas.
Turns out the problem was in the permissions set on the backend systems.

How to use a Google Secret in a deployed Cloud Run Service (managed)?

I have a running cloud run service user-service. For test purposes I passed client secrets via environment variables as plain text. Now since everything is working fine I'd like to use a secret instead.
In the "Variables" tab of the "Edit Revision" option I can declare environment variables, but I have no idea how to pass in a secret. Do I just need to pass the secret name, like ${my-secret-id}, in the value field of the variable? There is no documentation on how to use secrets in this tab, only a hint at the top:
Store and consume secrets using Secret Manager
Which is not very helpful in this case.
You can now read secrets from Secret Manager as environment variables in Cloud Run. This means you can audit your secrets, set permissions per secret, version secrets, etc, and your code doesn't have to change.
You can point to the secrets through the Cloud Console GUI (console.cloud.google.com) or make the configuration when you deploy your Cloud Run service from the command-line:
gcloud beta run deploy SERVICE --image IMAGE_URL --update-secrets=ENV_VAR_NAME=SECRET_NAME:VERSION
Six-minute video overview: https://youtu.be/JIE89dneaGo
Detailed docs: https://cloud.google.com/run/docs/configuring/secrets
UPDATE 2021: There is now a Cloud Run preview for loading secrets to an environment variable or a volume. https://cloud.google.com/run/docs/configuring/secrets
The question has now been answered; however, I have been experiencing a similar problem using Cloud Run with Java & Quarkus and a native image created using GraalVM.
While Cloud Run is a really interesting technology, at the time of writing it lacks the ability to load secrets through the Cloud Run configuration. This has certainly added complexity to my app when doing local development.
Additionally, Google's documentation is really quite poor. The quick-start lacks a clear Java example for getting a secret [1] without it being set in the same method - I'd expect this to have been the most common use case!
The Javadoc itself seems to be largely autogenerated, with protobuf language everywhere. There are various similarly named methods like getSecret, getSecretVersion and accessSecretVersion.
I'd really like to see some improvement from Google around this. I don't think it is asking too much for dedicated teams to make libraries for common languages with proper documentation.
Here is a snippet that I'm using to load this information. It requires the GCP Secret library and also the GCP Cloud Core library for loading the project ID.
public String getSecret(final String secretName) {
    LOGGER.info("Going to load secret {}", secretName);
    // SecretManagerServiceClient should be closed after the request
    try (SecretManagerServiceClient client = buildClient()) {
        // "latest" is an alias to the latest version of a secret
        final SecretVersionName name = SecretVersionName.of(getProjectId(), secretName, "latest");
        return client.accessSecretVersion(name).getPayload().getData().toStringUtf8();
    }
}

private String getProjectId() {
    if (projectId == null) {
        projectId = ServiceOptions.getDefaultProjectId();
    }
    return projectId;
}

private SecretManagerServiceClient buildClient() {
    try {
        return SecretManagerServiceClient.create();
    } catch (final IOException e) {
        throw new RuntimeException(e);
    }
}
[1] - https://cloud.google.com/secret-manager/docs/reference/libraries
Google has documentation for the Secret Manager client libraries that you can use in your API.
This should help you do what you want:
https://cloud.google.com/secret-manager/docs/reference/libraries
Since you haven't specified a language, here is a Node.js example of how to access the latest version of your secret using your project ID and secret name. The reason I add this is that the documentation is not clear on the string you need to provide as the name.
const [version] = await this.secretClient.accessSecretVersion({
  name: `projects/${process.env.project_id}/secrets/${secretName}/versions/latest`,
});
return version.payload.data.toString();
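The key detail in the snippet above is the fully qualified resource name; a Python sketch of the same string (project ID and secret name are placeholders):

```python
def secret_version_name(project_id: str, secret_name: str, version: str = "latest") -> str:
    # Fully qualified resource name expected by accessSecretVersion.
    return f"projects/{project_id}/secrets/{secret_name}/versions/{version}"

name = secret_version_name("my-project", "db-password")
# "projects/my-project/secrets/db-password/versions/latest"
```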
Be sure to allow Secret Manager access in your IAM settings for the service account that your API uses within GCP.
I kinda found a way to use secrets as environment variables.
The following doc (https://cloud.google.com/sdk/gcloud/reference/run/deploy) states:
Specify secrets to mount or provide as environment variables. Keys starting with a forward slash '/' are mount paths. All other keys correspond to environment variables. The values associated with each of these should be in the form SECRET_NAME:KEY_IN_SECRET; you may omit the key within the secret to specify a mount of all keys within the secret. For example: '--update-secrets=/my/path=mysecret,ENV=othersecret:key.json' will create a volume with secret 'mysecret' and mount that volume at '/my/path'. Because no secret key was specified, all keys in 'mysecret' will be included. An environment variable named ENV will also be created whose value is the value of 'key.json' in 'othersecret'. At most one of these may be specified.
Here is a snippet of Java code to get all secrets of your Cloud Run project. It requires the com.google.cloud/google-cloud-secretmanager artifact.
Map<String, String> secrets = new HashMap<>();
String projectId;
String url = "http://metadata.google.internal/computeMetadata/v1/project/project-id";
HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
conn.setRequestProperty("Metadata-Flavor", "Google");
try {
    InputStream in = conn.getInputStream();
    projectId = new String(in.readAllBytes(), StandardCharsets.UTF_8);
} finally {
    conn.disconnect();
}
Set<String> names = new HashSet<>();
try (SecretManagerServiceClient client = SecretManagerServiceClient.create()) {
    ProjectName projectName = ProjectName.of(projectId);
    ListSecretsPagedResponse pagedResponse = client.listSecrets(projectName);
    pagedResponse
        .iterateAll()
        .forEach(secret -> names.add(secret.getName()));
    for (String secretName : names) {
        String name = secretName.substring(secretName.lastIndexOf("/") + 1);
        SecretVersionName nameParam = SecretVersionName.of(projectId, name, "latest");
        String secretValue = client.accessSecretVersion(nameParam).getPayload().getData().toStringUtf8();
        secrets.put(secretName, secretValue);
    }
}
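The substring call above strips the projects/<project>/secrets/ prefix from the full resource name returned by listSecrets; the same logic as a Python sketch (names are placeholders):

```python
def short_name(full_name: str) -> str:
    # listSecrets returns names like "projects/<project>/secrets/<name>";
    # keep only the segment after the last slash.
    return full_name.rsplit("/", 1)[-1]

short_name("projects/my-project/secrets/db-password")
# "db-password"
```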
Cloud Run support for referencing Secret Manager Secrets is now at general availability (GA).
https://cloud.google.com/run/docs/release-notes#November_09_2021

Create CfnDBCluster in non-default VPC?

I'm trying to create a serverless Aurora database with the AWS CDK (1.19.0). However, it will always be created in the default VPC of the region. If I specify a vpc_security_group_id, CloudFormation fails because the provided security group is in the VPC created in the same stack, while the cluster is not:
"The DB instance and EC2 security group are in different VPCs."
Here is my code sample:
from aws_cdk import (
    core,
    aws_rds as rds,
    aws_ec2 as ec2
)

class CdkAuroraStack(core.Stack):
    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        # The code that defines your stack goes here
        vpc = ec2.Vpc(self, "VPC")
        sg = ec2.SecurityGroup(self, "SecurityGroup",
            vpc=vpc,
            allow_all_outbound=True
        )
        cluster = rds.CfnDBCluster(self, "AuroraDB",
            engine="aurora",
            engine_mode="serverless",
            master_username="admin",
            master_user_password="password",
            database_name="databasename",
            vpc_security_group_ids=[
                sg.security_group_id
            ]
        )
Am I missing something, or is it just not possible at the moment to create the CfnDBCluster in a specific VPC?
Thanks for any help and advice. Have a nice day!
You should create a DB subnet group and include only the subnets you want Amazon RDS to launch instances into. Amazon RDS creates a DB subnet group in the default VPC if none is specified.
You can use the db_subnet_group_name property to specify your subnets; however, it is better to use high-level constructs. In this case, there is one called DatabaseCluster.
cluster = rds.DatabaseCluster(
    scope=self,
    id="AuroraDB",
    engine=rds.DatabaseClusterEngine.AURORA,
    master_user=rds.Login(
        username="admin",
        password="Do not put passwords in your CDK code directly"
    ),
    instance_props={
        "instance_type": ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.SMALL),
        "vpc_subnets": {
            "subnet_type": ec2.SubnetType.PRIVATE
        },
        "vpc": vpc,
        "security_group": sg
    }
)
Do not specify the password attribute for your database; CDK assigns a Secrets Manager-generated password by default.
Just to note that this construct is still experimental, which means there might be breaking changes in the future.

aws cdk for Elasticache Redis Cluster

I have gone through https://docs.aws.amazon.com/cdk/api/latest/python/aws_cdk.aws_elasticache.html.
How do I create an ElastiCache Redis cluster using the AWS CDK? It would be helpful if you could share sample code.
Sorry for the late response, but this can be useful for others.
CDK doesn't have a high-level construct to create a Redis cluster, but you can create one using the low-level construct API.
For Redis cluster types you can take a look at this: https://aws.amazon.com/it/blogs/database/work-with-cluster-mode-on-amazon-elasticache-for-redis/
I've created a single Redis (no replication) cluster using TypeScript like this:
const subnetGroup = new CfnSubnetGroup(
  this,
  "RedisClusterPrivateSubnetGroup",
  {
    cacheSubnetGroupName: "privata",
    subnetIds: privateSubnets.subnetIds,
    description: "private development subnet"
  }
);

const redis = new CfnCacheCluster(this, `RedisCluster`, {
  engine: "redis",
  cacheNodeType: "cache.t2.small",
  numCacheNodes: 1,
  clusterName: "redis-sviluppo",
  vpcSecurityGroupIds: [vpc.defaultSecurityGroup.securityGroupId],
  cacheSubnetGroupName: subnetGroup.cacheSubnetGroupName
});
redis.addDependsOn(subnetGroup);
If you need a Redis (cluster mode enabled) cluster, you can use a replication group:
const redisSubnetGroup = new CfnSubnetGroup(
  this,
  "RedisClusterPrivateSubnetGroup",
  {
    cacheSubnetGroupName: "privata",
    subnetIds: privateSubnets.subnetIds,
    description: "private production subnet"
  }
);

const redisReplication = new CfnReplicationGroup(
  this,
  `RedisReplicaGroup`,
  {
    engine: "redis",
    cacheNodeType: "cache.m5.xlarge",
    replicasPerNodeGroup: 1,
    numNodeGroups: 3,
    automaticFailoverEnabled: true,
    autoMinorVersionUpgrade: true,
    replicationGroupDescription: "production redis cluster",
    cacheSubnetGroupName: redisSubnetGroup.cacheSubnetGroupName
  }
);
redisReplication.addDependsOn(redisSubnetGroup);
Hope this helps.
I just struggled for hours creating a cluster-mode-enabled Redis cluster with just one shard but two nodes. If you create a CfnReplicationGroup with num_cache_clusters=2, it will create a primary and a replica node.
The trick is to create the CfnReplicationGroup with num_cache_clusters=2 and set cache_parameter_group_name="default.redis6.x.cluster.on".
Then it will create a Redis cache with cluster mode enabled, one shard and two nodes.
