I'm new to aws-cdk, but I was curious if it is possible to rename a bucket once it's created so it doesn't have the default naming format of <stack-name><construct-id><alpha-numeric-number>?
I've tried this logic
const core = require('@aws-cdk/core');
const s3 = require('@aws-cdk/aws-s3');

class s3Build extends core.Construct {
  constructor(scope, id) {
    super(scope, id);
    const bucketName = process.env.s3Bucket + '-' + process.env.environment;
    console.log(bucketName);
    const bucket = new s3.Bucket(this, bucketName);
    const bucketNameOutput = new s3.CfnBucket(this, 'NewName', {});
    bucketNameOutput.overrideLogicalId('myBucketsNewName');
  }
}

module.exports = { s3Build };
When I run the above logic it does rename the bucket in a sense, but the name still follows the original format. Below is the new output:
<stack-name>myBucketsNewName<alpha-numeric-number>
The only thing that changed was the middle portion of the bucket name; it still shows the stack name and the alphanumeric suffix.
Looking at the documentation, it seems I have the right method, overrideLogicalId, but I'm not getting the output I desire. What I want is for the bucket name to be just myBucketsNewName, without the stack name and the alphanumeric suffix.
Am I missing something?
The 'id' - that is, the string right after the this/self variable in a CDK construct function - refers to the logical ID of the resource. Logical IDs are what CloudFormation uses to identify a given subsection of a template as a particular resource, and to link that subsection to that resource for future updates to your stack.
CDK does NOT recommend you set your own names, but as pointed out, the bucketName attribute is available. The reason this is not recommended is because it takes away some of the power of CDK - the ability to redeploy your entire stack in multiple accounts or environments, or even just again in the same account to try a few different things. Since resource names have to be unique per service in an account (and S3 bucket names are GLOBALLY unique!), by setting a name you cannot recreate this stack somewhere else without first deleting the old bucket or changing the name.
If you are going to set your bucket's name, I highly recommend setting a removalPolicy attribute as well, so the CDK stack can delete the bucket if it's removed from your stack.
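For example, a minimal sketch (the construct ID and name here are illustrative; note that real S3 bucket names must be lowercase):
const named = new s3.Bucket(this, 'NamedBucket', {
  bucketName: 'my-buckets-new-name',
  // lets CloudFormation delete the bucket when it is removed from the stack
  removalPolicy: core.RemovalPolicy.DESTROY,
});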
I found a way to resolve the roadblock by using the below prop values.
const bucket = new s3.Bucket(this, bucketName, {
  bucketName: 'myBucketsNewName'
});
When I added that bucketName property value and deployed via the CDK, the bucket was renamed to what I put in the bucketName field.
My objective is to call a first-level (L1) CDK construct method starting from an interface: IBucket.
I can get the bucket reference starting from this:
const sourceBucket = props.glueTable.bucket;
Afterwards, I need to call:
cfnBucket.replicationConfiguration = {
The procedure is exactly as in the script below:
https://github.com/rogerchi/cdk-s3-bucketreplication/blob/main/src/index.ts
But, as you can see, this script requires:
readonly sourceBucket: s3.Bucket;
Since it is needed to call:
const sourceAccount = cdk.Stack.of(props.sourceBucket).account;
Finally, is there really no other way to call a CloudFormation level 1 method starting from a reference?
It seems odd.
Thank you in advance
Marco
There is an example of exactly this in the AWS docs:
If a Construct is missing a feature or you are trying to work around an issue, you can modify the CFN Resource that is encapsulated by the Construct.
All Constructs contain within them the corresponding CFN Resource. For example, the high-level Bucket construct wraps the low-level CfnBucket construct. Because the CfnBucket corresponds directly to the AWS CloudFormation resource, it exposes all features that are available through AWS CloudFormation.
The basic approach to get access to the CFN Resource class is to use construct.node.defaultChild (Python: default_child), cast it to the right type (if necessary), and modify its properties. Again, let's take the example of a Bucket.
// Get the CloudFormation resource
const cfnBucket = bucket.node.defaultChild as s3.CfnBucket;

// Change its properties
cfnBucket.analyticsConfigurations = [
  {
    id: 'Config',
    // ...
  }
];
From https://docs.aws.amazon.com/cdk/latest/guide/cfn_layer.html
For you, it wouldn't be analyticsConfigurations but replicationConfiguration, of course.
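Applied to your case, a hedged sketch (this assumes the IBucket comes from a bucket defined in the same CDK app, and that replicationRole and destinationBucketArn are defined elsewhere):
const sourceBucket = props.glueTable.bucket;
const cfnBucket = sourceBucket.node.defaultChild as s3.CfnBucket;
cfnBucket.replicationConfiguration = {
  role: replicationRole.roleArn, // IAM role S3 assumes to replicate objects
  rules: [{
    destination: { bucket: destinationBucketArn },
    status: 'Enabled',
  }],
};
Note that node.defaultChild is only populated for buckets defined in your own app; a bucket imported with Bucket.fromBucketArn has no underlying CfnBucket to modify.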
I'm trying to reference an existing AutoScalingGroup from my CodeDeploy setup with the AutoScalingGroup.from_auto_scaling_group_name static method, in order to integrate with CodePipeline for automating EC2/on-premises deployment. I have the following code snippet for your reference.
# Refer to the existing AutoScaling group
asg_1 = autoscaling.AutoScalingGroup.from_auto_scaling_group_name(
    self, "AutoScaleGroup", "WSAutoscaleStack-webServerAsgIdASG12345-XXXXXX"
)

# EC2 deployment group
deployment_group = codedeploy.ServerDeploymentGroup(
    self,
    "CodeDeployDeploymentGroup",
    deployment_group_name="MyDeploymentGroup",
    install_agent=True,
    auto_scaling_groups=[asg_1],
)
After validating the stack with 'cdk ls', I got an error which says:
jsii.errors.JSIIError: Cannot get policy fragment of AMIPipelineStack/AutoScaleGroup, resource imported without a role
As far as I understand, the referenced resource should be imported as an object so that I can use all its dependents, including the iam.Role, from the resource. Any ideas?
Looks like the fromAutoScalingGroupName method doesn't "Import" the role (see here)
One option you have is to implement that import by yourself. The above linked Import class would then look like (in Typescript):
public static fromAutoScalingGroupNameWithRole(scope: Construct, id: string, autoScalingGroupName: string, roleArn: string): IAutoScalingGroup {
  class ImportWithRole extends AutoScalingGroupBase {
    public autoScalingGroupName = autoScalingGroupName;
    public autoScalingGroupArn = Stack.of(this).formatArn({
      service: 'autoscaling',
      resource: 'autoScalingGroup:*:autoScalingGroupName',
      resourceName: this.autoScalingGroupName,
    });
    public readonly osType = ec2.OperatingSystemType.UNKNOWN;
    public readonly grantPrincipal = iam.Role.fromRoleArn(this, `${id}-role`, roleArn);
  }
  return new ImportWithRole(scope, id);
}
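Hypothetical usage, assuming you put the method on a helper class of your own (the role ARN below is a placeholder for the role actually attached to your auto-scaling group):
const asg1 = MyAsgHelper.fromAutoScalingGroupNameWithRole(
  this,
  'AutoScaleGroup',
  'WSAutoscaleStack-webServerAsgIdASG12345-XXXXXX',
  'arn:aws:iam::123456789012:role/my-asg-instance-role' // placeholder ARN
);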
Another maneuver you could do (if applicable to your use case) is to really import the auto-scaling group and its role into the CloudFormation stack. The resources will then be managed by the CDK/CloudFormation stack, and you could use the standard AutoScalingGroup constructor and provide your role. The downside here is that it's currently quite a painful process (see link).
I bumped into a problem where we want to make sure some conventions are being followed in the naming of CloudFormation resources. The idea is that we use CDK Aspects to process resources. A simple example:
export class BucketConvention implements cdk.IAspect {
  private readonly ctx: Context;

  constructor(ctx: Context) {
    this.ctx = ctx;
  }

  public visit(node: cdk.IConstruct): void {
    if (cdk.CfnResource.isCfnResource(node) && node.cfnResourceType === 'AWS::S3::Bucket') {
      const resource = node as s3.CfnBucket;
      const resourceId = resource.bucketName ? resource.bucketName : cdk.Stack.of(node).getLogicalId(node);
      resource.addPropertyOverride('BucketName', `${this.ctx.project}-${this.ctx.environment}-${resourceId}`);
    }
  }
}
The Context interface simply holds some variables used to create names. The problem with this snippet is that we are trying to interpolate the bucket name if it has been set, and fall back to the logical ID if not. The method to obtain the logical ID works; however, resource.bucketName returns a token whose resolved value could be undefined (i.e. the user didn't pass a bucket name when constructing the bucket, which happens a lot with high-level constructs). So the logical ID fallback will never trigger, since a token is always defined. If you log the interpolation output, you can get something like
myproject-myenvironment-${Token[TOKEN.104]}
My question, how can we make this work such that the interpolation happens with the bucket name if it has been supplied and if not use the logical ID? Is there a way to peek whether the token will give an undefined value during synthesis time?
And I found the answer to my problem... similar to the logical ID, you can use
cdk.Stack.of(resource).resolve(resource.bucketName)
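Putting it together, a sketch of the corrected visit() (assuming resolve() returns undefined when no bucket name was supplied, as described above):
public visit(node: cdk.IConstruct): void {
  if (cdk.CfnResource.isCfnResource(node) && node.cfnResourceType === 'AWS::S3::Bucket') {
    const resource = node as s3.CfnBucket;
    // resolve the token first; fall back to the logical ID when undefined
    const resolved = cdk.Stack.of(node).resolve(resource.bucketName);
    const resourceId = resolved ?? cdk.Stack.of(node).getLogicalId(node);
    resource.addPropertyOverride('BucketName', `${this.ctx.project}-${this.ctx.environment}-${resourceId}`);
  }
}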
I tried MongoDB replica sets for the first time.
I am using Ubuntu on EC2 and I booted up three instances.
I used the private IP address of each of the instances. I picked one as the primary, and below is the code.
mongo --host <private-ip-address>
rs.initiate()
rs.add("<private-ip-address>")
rs.addArb("<private-ip-address>")
All at this point is fine. When I go to the http://ec2-xxx-xxx-xxx-xxx.compute-1.amazonaws.com:28017/_replSet site I see that I have a primary, secondary, and arbiter.
Ok, now for a test.
On the primary I create a database; this is the code:
use tt
db.tt.save( { a : 123 } )
On the secondary, I then do this and get the below error:
db.tt.find()
error: { "$err" : "not master and slaveOk=false", "code" : 13435 }
I am very new to MongoDB and replica sets, but I thought that if I do something on one, it goes to the other. So, if I add a record on one, what do I have to do to replicate it across machines?
You have to set "secondary okay" mode to let the mongo shell know that you're allowing reads from a secondary. This is to protect you and your applications from performing eventually consistent reads by accident. You can do this in the shell with:
rs.secondaryOk()
After that you can query normally from secondaries.
A note about "eventual consistency": under normal circumstances, replica set secondaries have all the same data as primaries within a second or less. Under very high load, data that you've written to the primary may take a while to replicate to the secondaries. This is known as "replica lag", and reading from a lagging secondary is known as an "eventually consistent" read, because, while the newly written data will show up at some point (barring network failures, etc), it may not be immediately available.
Edit: You only need to set secondaryOk when querying from secondaries, and only once per session.
To avoid typing rs.slaveOk() every time, do this:
Create a file named replStart.js, containing one line: rs.slaveOk()
Then include --shell replStart.js when you launch the Mongo shell. Of course, if you're connecting locally to a single instance, this doesn't save any typing.
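For example (the host is a placeholder; adjust to your setup):
mongo <private-ip-address>:27017 --shell replStart.js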
In MongoDB 2.0, you should type
rs.slaveOk()
on the secondary mongod node.
THIS IS JUST A NOTE FOR ANYONE DEALING WITH THIS PROBLEM USING THE RUBY DRIVER
I had this same problem when using the Ruby Gem.
To set slaveOk in Ruby, you just pass it as an argument when you create the client like this:
mongo_client = MongoClient.new("localhost", 27017, { slave_ok: true })
https://github.com/mongodb/mongo-ruby-driver/wiki/Tutorial#making-a-connection
mongo_client = MongoClient.new # (optional host/port args)
Notice that the options hash is the third, optional argument.
WARNING: slaveOk() is deprecated and may be removed in the next major release. Please use secondaryOk() instead:
rs.secondaryOk()
I got here searching for the same error, but from Node.js native driver. The answer for me was combination of answers by campeterson and Prabhat.
The issue is that the readPreference setting defaults to primary, which then somehow leads to the confusing slaveOk error. My problem was that I just wanted to read from my replica set from any node. I didn't even connect to it as a replica set; I just connected to any node to read from it.
Setting readPreference to primaryPreferred (or, better, the ReadPreference.PRIMARY_PREFERRED constant) solved it for me. Just pass it as an option to MongoClient.connect(), to client.db(), or to any find(), aggregate() or other function.
https://docs.mongodb.com/v3.0/reference/read-preference/#primaryPreferred
http://mongodb.github.io/node-mongodb-native/3.6/api/Collection.html (search readPreference)
const { MongoClient, ReadPreference } = require('mongodb');
const client = await MongoClient.connect(MONGODB_CONNECTIONSTRING, { readPreference: ReadPreference.PRIMARY_PREFERRED });
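The same option can also be passed per query; a small sketch, where collection is an assumed handle to one of your collections:
const docs = await collection
  .find({}, { readPreference: ReadPreference.PRIMARY_PREFERRED })
  .toArray();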
slaveOk does not work anymore. One needs to use readPreference https://docs.mongodb.com/v3.0/reference/read-preference/#primaryPreferred
e.g.
const client = new MongoClient(mongoURL + "?readPreference=primaryPreferred", { useUnifiedTopology: true, useNewUrlParser: true });
I am just adding this answer for an awkward situation with a DB provider.
What happened in our case is that the primary and secondary databases swapped roles (primary to secondary and vice versa), and we were getting the same error.
So please check the database status in the configuration settings; that may help you.
Adding readPreference as PRIMARY_PREFERRED:
const { MongoClient, ReadPreference } = require('mongodb');
const client = new MongoClient(url, { readPreference: ReadPreference.PRIMARY_PREFERRED});
client.connect();
I have defined two classes (Environment and ConfigurationReader). Both are registered as shared dependencies.
The Environment class tries to get the current environment but, to do this, needs to read a configuration file via the ConfigurationReader.
The classes are:
class Environment
{
    ...

    public function resolve()
    {
        $config = DI::getDefault()->getCfg();
        $config->getValue('pepe', 'db_name');
    }

    ...
}

class ConfigurationReader
{
    ...

    public function getValue($aConfig, $aKey)
    {
        $path = $this->getFile($aConfig);
    }

    protected function getFile($aConfig)
    {
        $env = DI::getDefault()->getEnv();
        $path = 'config/' . $env->getShortName() . '/' . $aConfig . '.yml';
        return $path;
    }

    ...
}
And are registered and created in the index.php:
...
$di = new FactoryDefault();
$di->setShared('env', function() use ($di) {
$env = new Services\Environment($di);
$env->resolve();
return $env;
});
$di->setShared('cfg', function() use ($di) {
return new Services\ConfigurationReader($di);
});
$di->getShared('cfg');
$di->getShared('env');
...
So, PHP crashes at $config = DI::getDefault()->getCfg(); and says:
PHP Fatal error: Maximum recursion depth exceeded
Any ideas?
A couple of remarks:
You're passing the di to the constructor, but end up getting it statically (DI::getDefault()).
Regarding the infinite loop, it's because cfg needs env, which needs cfg, which needs env, and so on.
To have the framework automatically inject the DI into your service, you should either implement InjectionAwareInterface (https://docs.phalconphp.com/en/latest/reference/di.html#automatic-injecting-of-the-di-itself) or
extend the Component class (if you need event management too, use Plugin instead of Component). Have a look at this discussion: https://forum.phalconphp.com/discussion/383/plugin-vs-component-what-s-the-difference-
Regarding your use case, you don't give enough context for an exhaustive answer, but I think you could simplify it as:
ConfigService: Unless you use configs from different env namespaces, you should pass the value of $env->getShortName() to the service constructor (without getting it from the env service). In our apps, the env is determined by nginx based on the domain name or other parameters and passed as an environment variable to PHP. Also, if you don't have hundreds of config files and your app relies heavily on them, you should read and parse them once on instantiation and store the configs in the service (as an associative array, config objects, or whatever you prefer). Add some cache layer to avoid wasting resources parsing all your files on each request. Phalcon provides the Config component to do so. It comes with file adapters (only ini and associative-array formats, but you could easily implement your own yml adapter). If most of your app config relies on configurable values, that will probably be the first component you want to instantiate (or at least declare in the di). It shouldn't have dependencies on other services. (See the sketch after these remarks.)
EnvService: You can access your config values by calling the config service (if you have it extend Component, you can do something like $this->cfg->getValue($key)).
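A minimal sketch of that registration, assuming ConfigurationReader is changed to take the environment short name in its constructor and that the web server exposes it as an APP_ENV environment variable (both names are assumptions):
$di = new FactoryDefault();

// cfg no longer depends on env: the short name is passed in directly
$di->setShared('cfg', function () {
    $envName = getenv('APP_ENV') ?: 'dev'; // assumed variable, set by nginx
    return new Services\ConfigurationReader($envName);
});

// env can now safely pull cfg from the DI without recursion
$di->setShared('env', function () use ($di) {
    $env = new Services\Environment($di);
    $env->resolve();
    return $env;
});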