My objective is to set a property on a level 1 (CFN) CDK construct, starting from an interface: IBucket.
I can get the bucket reference starting from this:
const sourceBucket = props.glueTable.bucket;
Afterwards, I need to call:
cfnBucket.replicationConfiguration = {
The procedure is exactly the same as in the script below:
https://github.com/rogerchi/cdk-s3-bucketreplication/blob/main/src/index.ts
But, as you can see, this script requires:
readonly sourceBucket: s3.Bucket;
This is because it needs to call:
const sourceAccount = cdk.Stack.of(props.sourceBucket).account;
Finally, are there really no other ways to reach a CloudFormation level 1 construct starting from a reference (an interface)?
It looks odd.
Thank you in advance
Marco
There is an example of exactly this in the AWS docs:
If a Construct is missing a feature or you are trying to work around an issue, you can modify the CFN Resource that is encapsulated by the Construct.
All Constructs contain within them the corresponding CFN Resource. For example, the high-level Bucket construct wraps the low-level CfnBucket construct. Because the CfnBucket corresponds directly to the AWS CloudFormation resource, it exposes all features that are available through AWS CloudFormation.
The basic approach to get access to the CFN Resource class is to use construct.node.defaultChild (Python: default_child), cast it to the right type (if necessary), and modify its properties. Again, let's take the example of a Bucket.
// Get the CloudFormation resource
const cfnBucket = bucket.node.defaultChild as s3.CfnBucket;

// Change its properties
cfnBucket.analyticsConfigurations = [
  {
    id: 'Config',
    // ...
  }
];
From https://docs.aws.amazon.com/cdk/latest/guide/cfn_layer.html
For you, it wouldn't be analyticsConfigurations but replicationConfiguration, of course.
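To make this concrete for the replication case, here is a minimal sketch (aws-cdk-lib v2-style imports; the replication role and destination bucket are hypothetical stand-ins for whatever you actually use). Two things worth noting: cdk.Stack.of() accepts any construct, so it also works with an IBucket, and the defaultChild escape hatch only works when the IBucket refers to a bucket defined in the same CDK app, because an imported bucket has no underlying CfnBucket.

import * as cdk from 'aws-cdk-lib';
import { aws_iam as iam, aws_s3 as s3 } from 'aws-cdk-lib';
import { Construct } from 'constructs';

export class ReplicationSketchStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Stand-in for props.glueTable.bucket: deliberately typed as IBucket
    const sourceBucket: s3.IBucket = new s3.Bucket(this, 'Source', { versioned: true });
    const destinationBucket = new s3.Bucket(this, 'Destination', { versioned: true });
    const replicationRole = new iam.Role(this, 'ReplicationRole', {
      assumedBy: new iam.ServicePrincipal('s3.amazonaws.com'),
    });

    // Mirrors cdk.Stack.of(props.sourceBucket).account from the linked script;
    // Stack.of() takes any IConstruct, so an IBucket is fine here
    const sourceAccount = cdk.Stack.of(sourceBucket).account;

    // Escape hatch down to the L1 (CFN) resource
    const cfnBucket = sourceBucket.node.defaultChild as s3.CfnBucket;
    cfnBucket.replicationConfiguration = {
      role: replicationRole.roleArn,
      rules: [
        {
          destination: { bucket: destinationBucket.bucketArn },
          status: 'Enabled',
        },
      ],
    };
  }
}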
Related
I am seeing AWS example CDK code from https://docs.aws.amazon.com/cdk/v2/guide/hello_world.html that looks like this
import * as cdk from 'aws-cdk-lib';
import { aws_s3 as s3 } from 'aws-cdk-lib';

export class HelloCdkStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    new s3.Bucket(this, 'MyFirstBucket', {
      versioned: true
    });
  }
}
And the CDK synthesis tool somehow knows that this code creates an S3 bucket in the HelloCdk stack. Coming from Java, I have not seen this pattern of "collect all instances created with a class's constructor and do something with them", especially because this code

new s3.Bucket(this, 'MyFirstBucket', {
  versioned: true
});

reads to me as a class instance with no reference to it, and thus something that would get garbage collected.
How is AWS CDK using this information? Is this a pattern specific to TypeScript?
Anytime you create a construct in the CDK, you have to provide it a scope as the first parameter. In your example, you have provided 'this' as the scope of the bucket. 'this', the stack, becomes the parent of the bucket in a tree that the constructs maintain. This is the only internal pointer you have, and need, to any constructs that get created. Because there is this one reference, there is no garbage to clean up.
Without knowing too much about the details: there is a pointer in the arguments. The first argument attaches the construct to the CDK construct tree, which is internally written out to cdk.out/tree.json.
Perhaps a bit like recursion. You can do recursion in Java.
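To illustrate the point, a small sketch (assuming aws-cdk-lib v2): the bucket never needs to be stored in a variable, because passing the stack as scope registers it in the construct tree, and that tree is what cdk synth walks.

import * as cdk from 'aws-cdk-lib';
import { aws_s3 as s3 } from 'aws-cdk-lib';

const app = new cdk.App();
const stack = new cdk.Stack(app, 'HelloCdkStack');

// No variable kept: the Bucket registers itself with its scope (the stack)
new s3.Bucket(stack, 'MyFirstBucket', { versioned: true });

// The stack still reaches the bucket through the construct tree
for (const child of stack.node.children) {
  console.log(child.node.id); // prints "MyFirstBucket"
}

app.synth(); // synthesis walks the same tree to produce the template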
I'm new to aws-cdk, but I was curious whether it is possible to rename a bucket once it's created so it doesn't have the default naming format of <stack-name><logical-id><alpha-numeric-number>.
I've tried this logic
const core = require(`@aws-cdk/core`);
const s3 = require(`@aws-cdk/aws-s3`);

class s3Build extends core.Construct {
  constructor(scope, id) {
    super(scope, id);

    const bucketName = process.env.s3Bucket + `-` + process.env.environment;
    console.log(bucketName);

    const bucket = new s3.Bucket(this, bucketName);

    const bucketNameOutput = new s3.CfnBucket(this, `NewName`, {});
    bucketNameOutput.overrideLogicalId(`myBucketsNewName`);
  }
}

module.exports = { s3Build };
When I run the above logic it does rename the bucket in a sense, but it still retains the original format. Below is the new output:
<stack-name>myBucketsNewName<alpha-numeric-number>
The only thing that changed was the middle part of the bucket name; it still includes the stack name and the alphanumeric number.
Looking at the documentation, it seems I have the right method, overrideLogicalId, but I'm not getting the output I'm after. What I want is for the bucket name to be just myBucketsNewName, without the stack name and the alphanumeric number.
Am I missing something?
The 'id', that is the string right after the this/self variable in a CDK construct function, refers to the LogicalID of the resource. The LogicalIDs are what CloudFormation uses to identify a given subsection of a template as a particular resource, and to link that subsection to that resource for future updates to your stack.
CDK does NOT recommend you set your own names, but as pointed out, the bucketName property is available. The reason this is not recommended is that it takes away some of the power of CDK, which is the ability to redeploy your entire stack in multiple accounts, environments, or even just again in the same account to try a few different things. Since resource names have to be unique per service in an account (and S3 names are GLOBALLY unique!), setting a name means you cannot recreate this stack somewhere else without first deleting the old bucket or changing the name.
If you are going to set your bucket's name, I highly recommend setting a removalPolicy attribute as well, so the CDK stack can delete the bucket if it's removed from your stack.
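A minimal sketch of that combination (v2-style imports; the physical name is just an example and must be globally unique and lowercase):

import * as cdk from 'aws-cdk-lib';
import { aws_s3 as s3 } from 'aws-cdk-lib';
import { Construct } from 'constructs';

export class S3BuildStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    new s3.Bucket(this, 'MyBucket', {
      bucketName: 'mybucketsnewname',            // explicit physical name, globally unique
      removalPolicy: cdk.RemovalPolicy.DESTROY,  // allow the stack to delete the bucket
      autoDeleteObjects: true,                   // optional: empty it first so deletion succeeds
    });
  }
}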
I found a way to resolve the roadblock by using the prop values below.
const bucket = new s3.Bucket(this, bucketName, {
  bucketName: `myBucketsNewName`
});
When I added that bucketName property value and deployed via the cdk the bucket was renamed to what I put in the bucketName field.
I bumped into a problem where we want to make sure some conventions are being followed in the naming of CloudFormation resources. The idea is that we use CDK Aspects to process resources. A simple example:
export class BucketConvention implements cdk.IAspect {
  private readonly ctx: Context;

  constructor(ctx: Context) {
    this.ctx = ctx;
  }

  public visit(node: cdk.IConstruct): void {
    if (cdk.CfnResource.isCfnResource(node) && node.cfnResourceType === 'AWS::S3::Bucket') {
      const resource = node as s3.CfnBucket;
      const resourceId = resource.bucketName ? resource.bucketName : cdk.Stack.of(node).getLogicalId(node);
      resource.addPropertyOverride('BucketName', `${this.ctx.project}-${this.ctx.environment}-${resourceId}`);
    }
  }
}
The Context interface simply holds some variables used to create names. The problem with this snippet is that we are trying to interpolate the bucket name if it has been set, and otherwise use the logical ID. The method to obtain the logical ID works; however, resource.bucketName returns a token whose resolved value could be undefined (i.e. the user didn't pass a bucket name when constructing the bucket, which happens a lot with high-level constructs). So the logical-ID branch will actually never trigger, since a token is always defined. If you log the interpolation output you get something like
myproject-myenvironment-${Token[TOKEN.104]}
My question: how can we make this work so that the interpolation uses the bucket name if it has been supplied, and otherwise falls back to the logical ID? Is there a way to peek at whether the token will resolve to undefined at synthesis time?
And I found the answer to my problem... similar to the logical ID, you can use
cdk.Stack.of(resource).resolve(resource.bucketName)
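Putting that together with the aspect above, the revised aspect could look roughly like this (v2-style imports; Context is the same hypothetical interface as before):

import * as cdk from 'aws-cdk-lib';
import { aws_s3 as s3 } from 'aws-cdk-lib';
import { IConstruct } from 'constructs';

interface Context { project: string; environment: string; }

export class BucketConvention implements cdk.IAspect {
  constructor(private readonly ctx: Context) {}

  public visit(node: IConstruct): void {
    if (cdk.CfnResource.isCfnResource(node) && node.cfnResourceType === 'AWS::S3::Bucket') {
      const resource = node as s3.CfnBucket;
      const stack = cdk.Stack.of(node);

      // Resolving the token yields the concrete name, or undefined if none was supplied
      const resolvedName = stack.resolve(resource.bucketName);
      const resourceId = resolvedName ?? stack.getLogicalId(resource);

      resource.addPropertyOverride(
        'BucketName',
        `${this.ctx.project}-${this.ctx.environment}-${resourceId}`
      );
    }
  }
}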
I have trouble understanding Nix overlays and the override pattern. What I want to do is add something to the "patches" of gdb without copy/pasting the whole derivation.
From Nix Pills I kind of see that override just mimics OOP; in reality it is just another attribute of the set. But how does it work then? Is override a function from the original attribute set to a transformed one that again has a predefined override function?
And as Nix is a functional language, you don't have variables, only bindings, which you can shadow in a different scope. But that still doesn't explain how overlays achieve their "magic".
Through ~/.config/nixpkgs I have configured a test overlay approximately like this:
self: super:
{
  test1 = super.gdb // { name = "test1"; buildInputs = [ super.curl ]; };
  test2 = super.gdb // { name = "test2"; buildInputs = [ super.coreutils ]; };
  test3 = super.gdb.override { pythonSupport = false; };
};
And I get:
nix-repl> "${test1}"
"/nix/store/ib55xzrp60fmbf5dcswxy6v8hjjl0s34-gdb-8.3"
nix-repl> "${test2}"
"/nix/store/ib55xzrp60fmbf5dcswxy6v8hjjl0s34-gdb-8.3"
nix-repl> "${test3}"
"/nix/store/vqlrphs3a2jfw69v8kwk60vhdsadv3k5-gdb-8.3"
But then
$ nix-env -iA nixpkgs.test1
replacing old 'test1'
installing 'test1'
Can you explain those results to me, please? Am I correct that override can only alter the "defined interface", that is, the parameters of the function, and that since "patches" isn't a parameter of gdb, I won't be able to change it this way? What is the best alternative then?
I will write an answer in case anyone else stumbles on this.
Edit 21.8.2019:
What I actually wanted is described in https://nixos.org/nixpkgs/manual/#sec-overrides
overrideDerivation and overrideAttrs
overrideDerivation is basically "derivation (drv.drvAttrs // (f drv))" and overrideAttrs is defined as part of mkDerivation in https://github.com/NixOS/nixpkgs/blob/master/pkgs/stdenv/generic/make-derivation.nix
And my code then looks like:
gdb = super.gdb.overrideAttrs (oldAttrs: rec {
  patches = oldAttrs.patches ++ [
    (super.fetchpatch {
      name = "...";
      url = "...";
      sha256 = "...";
    })
  ];
});
The question title is misleading and comes from my fundamental misunderstanding of derivations. Overlays work exactly as advertised, and they are probably also not that magic: just some recursion where the end result is the result of the previous step // the output of the last overlay function.
What is the purpose of nix-instantiate? What is a store-derivation?
Correct me please wherever I am wrong.
But basically, when you evaluate Nix code, the "derivation function" turns a descriptive attribute set (name, system, builder) into an "actual derivation". That "actual derivation" is again an attribute set, but the trick is that it is backed by a .drv file in the store. So in some sense derivation has side effects. The .drv encodes how the build is supposed to take place and what dependencies are required. The hash of this file also determines the directory name for the artefacts (even though nothing has been built yet). So implicitly the name in the Nix store also depends on all build inputs.
When I was creating a new, Frankenstein-like derivation by tying together existing derivations, all I did was create multiple references to the same .drv file, as if I was copying a pointer, with the result of getting two pointers to the same value on the heap. I was able to change some metadata, but in the end the build procedure was still the same. In fact, as Nix is pure, I bet there is no way to even write to the filesystem (to change the .drv file), except again with something that wraps the derivation function.
override, on the other hand, allows you to create a "new instance". Due to the "inputs pattern", every package in Nix is a function from a dependencies attribute set to the actual code that in the end invokes the "derivation function". With override you are able to call that function again, which makes the "derivation function" receive different parameters.
I have defined two classes (Environment and ConfigurationReader). Both are registered as shared dependencies.
The Environment class tries to get the current environment, but for this it needs to read a configuration file via the ConfigurationReader.
The classes are:
class Environment
{
    ...

    public function resolve()
    {
        $config = DI::getDefault()->getCfg();
        $config->getValue('pepe', 'db_name');
    }

    ...
}

class ConfigurationReader
{
    ...

    public function getValue($aConfig, $aKey)
    {
        $path = $this->getFile($aConfig);
    }

    protected function getFile($aConfig)
    {
        $env = DI::getDefault()->getEnv();
        $path = 'config/' . $env->getShortName() . '/' . $aConfig . '.yml';
        return $path;
    }

    ...
}
They are registered and created in index.php:
...
$di = new FactoryDefault();

$di->setShared('env', function() use ($di) {
    $env = new Services\Environment($di);
    $env->resolve();
    return $env;
});

$di->setShared('cfg', function() use ($di) {
    return new Services\ConfigurationReader($di);
});

$di->getShared('cfg');
$di->getShared('env');
...
So, PHP crashes at $config = DI::getDefault()->getCfg(); and says:
PHP Fatal error: Maximum recursion depth exceeded
Any ideas ?
A couple of remarks:
You're passing the DI to the constructor, but end up getting it statically (DI::getDefault()).
Regarding the infinite loop: it's because cfg needs env, which needs cfg, which needs env, and so on.
To have the framework automatically inject the DI into your service, you should either implement InjectionAwareInterface (https://docs.phalconphp.com/en/latest/reference/di.html#automatic-injecting-of-the-di-itself) or extend the Component class (if you need event management too, use Plugin instead of Component). Have a look at this discussion: https://forum.phalconphp.com/discussion/383/plugin-vs-component-what-s-the-difference-
Regarding your use case, you don't give enough context for an exhaustive answer, but I think you could simplify it as follows:
ConfigService: Unless you use configs from different env namespaces, you should pass the value of $env->getShortName() to the service constructor (instead of getting it from the env service). In our apps, the env is determined by nginx based on the domain name or other parameters and passed as an environment variable to PHP. Also, if you don't have hundreds of config files and your app relies heavily on them, you should read and parse them once at instantiation and store the configs in the service (as an associative array, config objects, or whatever you prefer). Add a cache layer to avoid wasting resources parsing all your files on each request. Phalcon provides the Config component for this. It comes with file adapters (only ini and associative-array formats, but you could easily implement your own yml adapter). If most of your app config relies on configurable values, this will probably be the first component you want to instantiate (or at least declare in the di). It shouldn't have dependencies on other services.
EnvService: You can access your config values by calling the config service (if you make it extend Component, you can do something like $this->cfg->getValue($key)).