I am trying to create a Secrets Manager secret so that a Lambda function can connect to Redshift Serverless.
The Redshift cluster I created is not visible on the credential type selection screen, so the secret cannot be created.
Any solution?
The Secrets Manager "new secret" wizard does not show any Redshift Serverless workgroups.
You can, however, choose the "other database" secret type, enter the hostname and the other connection details, and edit the secret later to change the database type stored in it.
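If you prefer to script this instead of fighting the wizard, a rough sketch with the AWS SDK for JavaScript v3 could look like the following; the secret name, endpoint, and credential values are placeholders, and the key/value layout roughly mirrors what the console stores for database secrets.

import { SecretsManagerClient, CreateSecretCommand } from "@aws-sdk/client-secrets-manager";

const client = new SecretsManagerClient({});

async function createRedshiftServerlessSecret(): Promise<void> {
  // Store the connection details as a plain key/value secret so a Lambda
  // can read it like any other database credential.
  await client.send(new CreateSecretCommand({
    Name: "redshift-serverless/my-workgroup", // placeholder name
    SecretString: JSON.stringify({
      engine: "redshift",
      host: "my-workgroup.123456789012.eu-west-1.redshift-serverless.amazonaws.com", // placeholder endpoint
      port: 5439,
      dbname: "dev",
      username: "admin",
      password: "REPLACE_ME",
    }),
  }));
}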
I have a multi-tenant application that stores the data of each client in a distinct database. To access the data, I retrieve the credentials from AWS Secrets Manager, where each secret is stored with the tenant_id as its name. In code, I can then just retrieve it by passing the tenant ID.
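For context, the lookup itself is just a GetSecretValue call keyed on the tenant ID, roughly like this (AWS SDK for JavaScript v3; the fields in the secret are whatever I stored):

import { SecretsManagerClient, GetSecretValueCommand } from "@aws-sdk/client-secrets-manager";

const client = new SecretsManagerClient({});

// The secret is stored under the Azure tenant ID, so the lookup key is the ID itself.
async function getTenantDbCredentials(tenantId: string) {
  const result = await client.send(new GetSecretValueCommand({ SecretId: tenantId }));
  return JSON.parse(result.SecretString ?? "{}"); // host, username, password, ...
}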
Now I'm looking for a clean way to implement multiple environments, but I'm not able to find one that suits my use case. The restrictions I have are:
The tenant_id is actually the Azure Tenant ID of the client and is also used to connect to the GraphQL API. As such, just using an ID like "Test-Tenant" would not be possible, as I would not be able to run all the code.
Just relying on the same database in staging and testing (which is probably a bad idea anyway) would also not be possible, as the staging database is a DocumentDB and connecting to it from the local machine is not possible (unless via SSH tunneling, but then my endpoint URLs would not match).
What would be a clean way to implement multiple environments in this multi-tenancy setup?
I have an AWS CDK project that creates stacks based on an existing KMS key and an existing subnet.
I need the subnet_id and the kms key arn for an IAM policy.
Since I did not find a way to look up the KMS key ARN and subnet ID at runtime, I started by hardcoding them in the code.
But now I want to deploy the stack into different accounts (for which the KMS key ARN and subnet ID of course differ).
When I was using plain CloudFormation, without the AWS CDK, I would use the Mappings section of the CloudFormation template to map account IDs to the needed information:
Mappings:
  KMSKeyArn:
    <account-id-1>:
      ARN: <kms-key1-arn>
    <account-id-2>:
      ARN: <kms-key2-arn>
What is a good way to do this with AWS CDK?
Should I use CfnMapping?
Or can I somehow know the Account ID at CDK Execution time?
Is there a better way I am missing here?
Similar to the AWS CLI, the CDK detects its account and region through the environment. See https://yshen4.github.io/infrastructure/AWS/CDK_context.html
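As a rough sketch in TypeScript (the account IDs and ARNs below are placeholders), you can either keep a CfnMapping and resolve it at deploy time with the AccountId pseudo parameter, or read the account from the stack's environment at synth time:

import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';

export class MyStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Option 1: a CfnMapping resolved at deploy time via the AccountId pseudo parameter.
    const kmsKeyArns = new cdk.CfnMapping(this, 'KMSKeyArn', {
      mapping: {
        '111111111111': { ARN: 'arn:aws:kms:eu-west-1:111111111111:key/placeholder-1' },
        '222222222222': { ARN: 'arn:aws:kms:eu-west-1:222222222222:key/placeholder-2' },
      },
    });
    const kmsKeyArn = kmsKeyArns.findInMap(cdk.Aws.ACCOUNT_ID, 'ARN');

    // Option 2: if the stack env is set (e.g. from CDK_DEFAULT_ACCOUNT), the account
    // is already known at synth time, so a plain lookup table in code works too.
    const accountId = cdk.Stack.of(this).account;

    // ...use kmsKeyArn / accountId when building the IAM policy...
  }
}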
Looking for guidance on what items to store in a simple configuration file versus an Azure Key Vault?
For example, an SQL Database name should be stored in a configuration file while its password should be stored in a KeyVault. Is this correct?
Should there be a key vault for each environment (I think not) or simply one for production and one for non-production?
Yes, you can store the password of the SQL database in Azure Key Vault and the database name in a configuration file, or you can store the whole connection string of the database in Key Vault.
For your second question, about whether there should be a key vault for each environment: I don't think it's necessary to create a separate key vault per environment; you can just separate the secrets with different names in one key vault.
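As a minimal sketch of that single-vault, name-per-environment approach (TypeScript with @azure/keyvault-secrets; the vault URL and secret names are placeholders):

import { DefaultAzureCredential } from "@azure/identity";
import { SecretClient } from "@azure/keyvault-secrets";

const client = new SecretClient("https://my-vault.vault.azure.net", new DefaultAzureCredential());

// One vault, with the environment encoded in the secret name.
async function getSqlConnectionString(environment: "dev" | "staging" | "prod"): Promise<string> {
  const secret = await client.getSecret(`sql-connection-string-${environment}`);
  return secret.value ?? "";
}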
Anything that should be protected (passwords, certs, API keys, etc..) should be in a Key Vault and have strict access policies assigned to it.
Agree with Hury on the first half; however, disagree on the one key vault for all environments. Different access policies will be applied for different environments.
Your developers may want access to the Key Vault for the dev credentials. As such they'd have an access policy to the dev Key Vault. I would not want to grant them access to a production Key Vault, which would give them access to all the keys in it.
Key Vault is a globally available resource, so having multiple instances in different regions connect to it is fine; you wouldn't need a separate Key Vault in another region from a disaster recovery and availability standpoint.
Here is a similar question, along with a link to the Microsoft best practices that support this.
A few days ago I was able to connect one of my apps to one of my database instances from the Google Cloud Run service configuration form. However, lately I have noticed two things:
I'm no longer able to select the database instance my service is/will be connected to.
On a service that is connected using this method, I no longer see the database connection name at the bottom of the details panel.
Is this a sign that the database connections feature will disappear from the Google Cloud Run settings?
This seems like a good case for using the Cloud SDK to confirm that your Cloud Run service can communicate with Cloud SQL. That will help you determine whether you have a UI problem or something deeper, which is especially important given that the documentation states the Console instructions are not available yet.
Cloud Run supports Cloud SQL via gcloud, using a special flag (--add-cloudsql-instances) to associate a Cloud SQL instance with an individual service.
Once this is done, the Cloud SQL instance will be available to the Cloud Run service until it is explicitly removed.
You can verify this connection is in place by looking at the service description:
gcloud beta run services describe [SERVICE-NAME]
In the response, you should see the property run.googleapis.com/cloudsql-instances inside spec.runLatest.configuration.revisionTemplate.metadata.annotations.
As long as that annotation is present and contains your Cloud SQL instance connection name, your service should be able to connect to the SQL instance as documented (assuming your service has authorization to connect to the Cloud SQL instance).
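Once the association is in place, the documented pattern for Node.js services is to connect through the Unix socket Cloud Run mounts under /cloudsql. A rough sketch, assuming a PostgreSQL instance and the node-postgres (pg) client (the instance connection name and credentials are placeholders):

import { Pool } from "pg";

// Cloud Run exposes the associated Cloud SQL instance as a Unix socket
// under /cloudsql/<instance-connection-name>.
const pool = new Pool({
  host: "/cloudsql/my-project:us-central1:my-instance", // placeholder connection name
  user: "app_user",
  password: process.env.DB_PASSWORD,
  database: "app_db",
});

export async function ping(): Promise<void> {
  await pool.query("SELECT 1"); // fails fast if the association or credentials are wrong
}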
I am considering using AWS Amplify to create a backend for my app(s). I was hoping to use OrientDB which I have set up on an EC2, but all the examples and tutorials for Amplify only mention DynamoDB. Before I spend a lot of time learning how to use Amplify, is it possible to connect to any type of DB that can be installed on an EC2, or is DynamoDB all that is available?
Yes, you can.
After amplify init and amplify add hosting
Run amplify add api
Choose REST
Choose Create a new Lambda function
Don't choose CRUD function for Amazon DynamoDB table
Choose Serverless express function (Integration with Amazon API Gateway)
In your project under ./amplify/backend/function, you'll see your Lambda Express function, and from there you can connect to any database you want.
You just need to add the DB connection code, for example along the lines of the sketch below.
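For example, inside the generated Express function you could add a route that talks to the database on your EC2 instance. A rough sketch following the structure of the app.js Amplify generates; connectToOrientDb() is a hypothetical stand-in for whatever Node driver you install (e.g. orientjs for OrientDB), and the host and route names are placeholders:

const express = require("express");
const awsServerlessExpressMiddleware = require("aws-serverless-express/middleware");

const app = express();
app.use(express.json());
app.use(awsServerlessExpressMiddleware.eventContext());

// Hypothetical helper: replace with real driver setup for your database on EC2.
async function connectToOrientDb(opts) {
  throw new Error("wire this up to your database driver");
}

app.get("/items", async (req, res) => {
  const db = await connectToOrientDb({ host: "ec2-xx-xx-xx-xx.compute.amazonaws.com", port: 2424 });
  res.json(await db.query("SELECT FROM Item")); // OrientDB-style query, adjust for your DB
});

module.exports = app;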
Amplify is at the moment tied to DynamoDB in a very strong way, but you can use GraphQL queries sent to AppSync (the backend layer of Amplify) to trigger Lambda functions. From there you can target any type of database you want.
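A rough sketch of the Lambda side of that pattern (the event shape roughly matches what Amplify's @function directive passes to the function, and queryMyDatabase() is a hypothetical stand-in for your own database client):

// Invoked by AppSync through a Lambda resolver; the GraphQL type/field name
// and the query arguments arrive in the event.
exports.handler = async (event: { typeName: string; fieldName: string; arguments: Record<string, unknown> }) => {
  if (event.typeName === "Query" && event.fieldName === "getItem") {
    return queryMyDatabase("SELECT FROM Item WHERE id = :id", event.arguments);
  }
  throw new Error(`Unhandled field: ${event.typeName}.${event.fieldName}`);
};

// Hypothetical stub; replace with your real database driver call.
async function queryMyDatabase(query: string, params: Record<string, unknown>) {
  return [] as unknown[];
}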