Hi, I was wondering if there is a recommended pattern for asserting that certain resources do not have certain properties in the CDK. For example, if you're defining IAM policies and you'd like a test using the CDK's /assertions package to enforce that no wildcards are defined, what would the "proper" way to do this be? Write your own matcher based on Matcher.objectLike that does the inverse?
Sample IAM definition
// this would be fine
const secretsManagerReadAccess = new iam.PolicyStatement({
  actions: ['secretsmanager:GetSecretValue'],
  resources: ['arn:aws:secretsmanager:us-east-1:ACCOUNTID:secret:SECRET_NAME'],
});

// this should blow up in a test
const secretsManagerWildcardAccess = new iam.PolicyStatement({
  actions: ['secretsmanager:*'],
  resources: ['arn:aws:secretsmanager:us-east-1:ACCOUNTID:secret:*'],
});

// the worst possible, probably not written correctly but you get the idea
const everything = new iam.PolicyStatement({
  actions: ['*'],
  resources: ['*'],
});
Edit: Maybe a better way to phrase this is: how would you blacklist certain patterns within your CDK definitions?
You can chain Matchers, and you can use Captures to construct pattern filters.
const actionCapture = new Capture();
template.hasResourceProperties(
  "AWS::IAM::Role",
  Match.not(
    Match.objectLike({
      PolicyDocument: {
        Statement: [
          {
            Action: actionCapture,
          },
        ],
      },
    })
  )
);

expect(actionCapture.asString()).toEqual(expect.not.stringContaining("*"));
For more examples, consult the Developer Guide.
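Alternatively, independent of the assertions matchers, you can dump the synthesized template to plain JSON (e.g. via Template.fromStack(stack).toJSON()) and scan every policy statement yourself. A sketch of such a blanket check; findWildcardActions is a hypothetical helper, not part of the CDK, and it only assumes the standard CloudFormation template layout:

```typescript
// Hypothetical helper: walk a synthesized CloudFormation template and
// collect every IAM action that contains a wildcard.
type TemplateJson = { Resources?: Record<string, any> };

function findWildcardActions(template: TemplateJson): string[] {
  const offenders: string[] = [];
  for (const [logicalId, resource] of Object.entries<any>(template.Resources ?? {})) {
    // AWS::IAM::Policy keeps the document at Properties.PolicyDocument;
    // AWS::IAM::Role inlines documents under Properties.Policies[].PolicyDocument.
    const docs: any[] = [
      resource?.Properties?.PolicyDocument,
      ...((resource?.Properties?.Policies ?? []).map((p: any) => p?.PolicyDocument)),
    ].filter(Boolean);
    for (const doc of docs) {
      for (const statement of ([] as any[]).concat(doc.Statement ?? [])) {
        // Action may be a single string or an array of strings.
        for (const action of ([] as string[]).concat(statement.Action ?? [])) {
          if (action.includes('*')) {
            offenders.push(`${logicalId}: ${action}`);
          }
        }
      }
    }
  }
  return offenders;
}
```

In a Jest test this could then be a single assertion, e.g. expect(findWildcardActions(Template.fromStack(stack).toJSON())).toEqual([]).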
Related
My objective is to call a level-1 (CFN) CDK construct's API starting from an interface: IBucket.
I can get the bucket reference starting from this:
const sourceBucket = props.glueTable.bucket;
Afterwards, I need to call:
cfnBucket.replicationConfiguration = {
The procedure is exactly like in the script below:
https://github.com/rogerchi/cdk-s3-bucketreplication/blob/main/src/index.ts
But, as you can see, this script requires:
readonly sourceBucket: s3.Bucket;
since it needs to call:
const sourceAccount = cdk.Stack.of(props.sourceBucket).account;
Finally, is there really no other way to reach a CloudFormation level-1 construct starting from a reference?
It looks odd.
Thank you in advance
Marco
There is an example of exactly this in the AWS docs:
If a Construct is missing a feature or you are trying to work around an issue, you can modify the CFN Resource that is encapsulated by the Construct.
All Constructs contain within them the corresponding CFN Resource. For example, the high-level Bucket construct wraps the low-level CfnBucket construct. Because the CfnBucket corresponds directly to the AWS CloudFormation resource, it exposes all features that are available through AWS CloudFormation.
The basic approach to get access to the CFN Resource class is to use construct.node.defaultChild (Python: default_child), cast it to the right type (if necessary), and modify its properties. Again, let's take the example of a Bucket.
// Get the CloudFormation resource
const cfnBucket = bucket.node.defaultChild as s3.CfnBucket;

// Change its properties
cfnBucket.analyticsConfigurations = [
  {
    id: 'Config'
    // ...
  }
];
From https://docs.aws.amazon.com/cdk/latest/guide/cfn_layer.html
For you, it wouldn't be analyticsConfigurations but replicationConfiguration, of course.
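Putting that together for the replication case, here is a minimal sketch. The construct tree is stubbed with plain interfaces here (CfnBucketLike, BucketLike, and the rule shape are illustrative, not the exact aws-cdk-lib typings). One important caveat: node.defaultChild is only set for buckets defined in the same CDK app; an imported IBucket (e.g. from Bucket.fromBucketArn) has no underlying CfnBucket, which is why the referenced script requires s3.Bucket rather than IBucket.

```typescript
// Hypothetical shapes standing in for the CDK types, so the escape-hatch
// pattern is visible without the aws-cdk-lib dependency.
interface CfnBucketLike {
  replicationConfiguration?: unknown;
}
interface BucketLike {
  node: { defaultChild?: unknown };
}

function setReplication(
  bucket: BucketLike,
  destinationBucketArn: string,
  roleArn: string,
): void {
  // The escape hatch: reach the underlying CFN resource and cast it.
  const cfnBucket = bucket.node.defaultChild as CfnBucketLike | undefined;
  if (!cfnBucket) {
    // An imported IBucket has no underlying CfnBucket to modify.
    throw new Error('Bucket is not defined in this app; cannot use the escape hatch.');
  }
  cfnBucket.replicationConfiguration = {
    role: roleArn,
    rules: [{ destination: { bucket: destinationBucketArn }, status: 'Enabled' }],
  };
}
```

If props.glueTable.bucket is defined elsewhere in the same app, the cast works; if it is a reference imported from outside the app, there is no L1 resource to modify and you'd need the actual s3.Bucket.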
I am using the AWS CDK with TypeScript to generate my Lambda function:
import { Code, Function as lambdaFunction, Runtime } from '@aws-cdk/aws-lambda'
import { RetentionDays } from '@aws-cdk/aws-logs'

const my_lambda = new lambdaFunction(this, 'my Lambda', {
  code: Code.fromBucket(
    lambdaBucket,
    'python36/helloworld/hello-world-python3.zip'
  ),
  runtime: Runtime.PYTHON_3_8,
  handler: 'lambda_function.lambda_handler',
  functionName: 'MyLambda',
  logRetention: RetentionDays.ONE_MONTH,
})
This works fine, but behind the scenes the CDK creates another Lambda function (a custom-resource handler) that is responsible for setting the log retention.
It is easy to add tags to the Lambda function I created using Tags.of(my_lambda).add('Name', 'tag name').
I would like to add tags to the underlying lambda function. Does anyone know a way of tagging the underlying function?
Move the tagging code to the stack level; this will apply the tags to all resources created within the stack, including the log-retention helper function:
import aws_cdk.core as _core

tags = _core.Tags.of(<<Your Stack Name>>)
tags.add("environment", env_name)
I'm relatively new to Dart/Flutter and struggling to understand some code/syntax; I wondered if someone could help explain.
I'm looking at the example of setting up multiple providers and I can't get my head around the code for setting up the update:
providers: [
  // In this sample app, CatalogModel never changes, so a simple Provider
  // is sufficient.
  Provider(create: (context) => CatalogModel()),

  // CartModel is implemented as a ChangeNotifier, which calls for the use
  // of ChangeNotifierProvider. Moreover, CartModel depends
  // on CatalogModel, so a ProxyProvider is needed.
  ChangeNotifierProxyProvider<CatalogModel, CartModel>(
    create: (context) => CartModel(),
    update: (context, catalog, cart) {
      cart.catalog = catalog;
      return cart;
    },
  ),
],
Specifically...
update: (context, catalog, cart) {
  cart.catalog = catalog;
  return cart;
}
I thought it was a function that takes in three parameters: context, catalog, cart.
But I don't see anywhere where they are first instantiated.
Can anyone explain what is going on here?
Thanks
update: denotes a named parameter of the ChangeNotifierProxyProvider<CatalogModel, CartModel> constructor, and what is being passed to it is an anonymous function that takes three parameters. You never call this function or instantiate its arguments yourself: the provider invokes it whenever the CatalogModel it depends on changes, supplying the current BuildContext as context, the latest CatalogModel as catalog, and the CartModel previously obtained from create: as cart.
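The same pattern, stripped of Flutter specifics, can be sketched in TypeScript. ProxyProviderLike and onDependencyChanged are hypothetical names for illustration; the point is that the constructor merely stores the callback, and the framework later calls it with arguments it already holds:

```typescript
// Minimal analogue of a proxy provider: the constructor stores the callbacks,
// and the framework (represented by onDependencyChanged) invokes `update`,
// supplying both arguments itself.
class ProxyProviderLike<Dep, Value> {
  private value: Value;

  constructor(
    create: () => Value,
    private update: (dep: Dep, previous: Value) => Value,
  ) {
    this.value = create(); // like create: (context) => CartModel()
  }

  // Called by the framework whenever the dependency changes; the caller
  // supplies `dep`, and `previous` is the value created/updated earlier.
  onDependencyChanged(dep: Dep): Value {
    this.value = this.update(dep, this.value);
    return this.value;
  }
}
```

So catalog and cart are never instantiated at the call site; the provider passes in the current CatalogModel and the CartModel it created earlier each time it invokes the function.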
I'm currently rewriting our plain node API-server in NestJS and I've encountered the following issue: I have a CacheService which acts as a wrapper around redis and which is injected in various other services.
Now, if the client-request contains a custom-header (key: x-mock-redis, value: someRedisMockKey) and if the server runs in debug mode, instead of calling redis, a mocked json-value should be returned (the value is read from a file with the name someRedisMockKey).
I could set the scope of my CacheService to "Request" and inject the client-request, allowing me to check if the mocking-header exists and return the mocked value there if running in debug-mode.
But I find this counterintuitive, as I'd have logic that violates the single-responsibility principle and that should not run in production mode. Also, I'd prefer my CacheService to have the default scope instead of "Request".
Any recommendations how to do this more elegantly?
In advance, sorry if I've misunderstood the question or constraints; I'll try to paraphrase them and point out how it could look.
production always uses Redis
you can set up the app instance on a different port so that it is fully separated from the 'staging' (or other) app instance
If you can fulfill the second condition, you can make use of custom modules and apply a different client wrapper (strategy) for your service:
Custom provider for Cache module
import * as redis from 'redis'

import { INTERNAL_CACHE_CLIENT, INTERNAL_CACHE_MODULE } from './cache.constants'
import { CacheModuleAsyncOptions, InternalCacheOptions } from './cache.module'
import CacheClientRedis from './client/cache-client-redis'
// ...

export const createAsyncClientOptions = (options: CacheModuleAsyncOptions) => ({
  provide: INTERNAL_CACHE_MODULE,
  useFactory: options.useFactory,
  inject: options.inject,
})

export const createClient = () => ({
  provide: INTERNAL_CACHE_CLIENT,
  useFactory: (options: InternalCacheOptions) => {
    const { production, debug, noCache, ...redisConfig } = options

    // pardon for the ifs ; )
    if (noCache) {
      return new CacheClientInMemory()
    }
    if (production) {
      return new CacheClientRedis(redis.createClient(redisConfig))
    }
    if (debug) {
      return new MockedCache()
    }
    return new CacheClientInMemory()
  },
  inject: [INTERNAL_CACHE_MODULE],
})
As noted, you can have any wrapper around CacheClient which, in your case, would serve data from a file. For simplicity, an interface implemented by every cache client could be:
export interface CacheClient {
  set: (key: string, payload: string) => Promise<boolean>
  get: (key: string) => Promise<string | null>
  del: (key: string) => Promise<boolean>
}
Now, as we have let the module decide which strategy should be used, the service just needs:
constructor(
  @Inject(INTERNAL_CACHE_CLIENT) private readonly cacheClient: CacheClient,
) {}
Feel free to point out if it still breaks principles or you really need to decide it during runtime.
Cheers!
I have a problem with cross-referencing terminals that are only locally unique (in their block/scope), but not globally. I found tutorials that describe how to use fully qualified names or package declarations, but my case is syntactically a little different from the examples, and I cannot change the DSL to support something like explicit fully qualified names or package declarations.
In my DSL I have two types of structured JSON resources:
The instance that contains my data.
A meta model, containing type information etc. for my data.
I can easily parse those two, and get an EMF model with the following Java snippet:
new MyDSLStandaloneSetup().createInjectorAndDoEMFRegistration();
ResourceSet rs = new ResourceSetImpl();
rs.getResource(URI.createPlatformResourceURI("/Foo/meta.json", true), true);
Resource instanceResource = rs.getResource(URI.createPlatformResourceURI("/Bar/instance.json", true), true);
EObject eobject = instanceResource.getContents().get(0);
Simplified example:
meta.json
{
  "toplevel_1": {
    "sublevels": {
      "sublevel_1": {
        "type": "int"
      },
      "sublevel_2": {
        "type": "long"
      }
    }
  },
  "toplevel_2": {
    "sublevels": {
      "sublevel_1": {
        "type": "float"
      },
      "sublevel_2": {
        "type": "double"
      }
    }
  }
}
instance.json
{
  "toplevel_1": {
    "sublevel_1": "1",
    "sublevel_2": "2"
  },
  "toplevel_2": {
    "sublevel_1": "3",
    "sublevel_2": "4"
  }
}
From this I want to infer that:
toplevel_1:sublevel_1 has type int and value 1
toplevel_1:sublevel_2 has type long and value 2
toplevel_2:sublevel_1 has type float and value 3
toplevel_2:sublevel_2 has type double and value 4
I was able to cross-reference the unique toplevel elements and iterate over all sublevels until I found the ones I was looking for, but for my use case that is quite inefficient and complicated. Also, I can't get the generated editor to link between the sublevels this way.
I played around with linking and scoping, but I'm unsure as to what I really need, and if I have to extend the providers-classes AbstractDeclarativeScopeProvider and/or DefaultDeclarativeQualifiedNameProvider.
What's the best way to go?
See also:
Xtext cross reference using custom terminal rule
http://www.eclipse.org/Xtext/documentation.html#scoping
http://www.eclipse.org/Xtext/documentation.html#linking
After some trial and error I solved my problem with a ScopeProvider.
The main issue was that I didn't really understand what a scope is in Xtext terms, and for which elements I have to provide one.
Looking at the signature from the documentation:
IScope scope_<RefDeclaringEClass>_<Reference>(<ContextType> ctx, EReference ref)
In my example language:
RefDeclaringEClass would refer to the Sublevel from instance.json,
Reference to the cross-reference to the Sublevel from meta.json, and
ContextType would match the RefDeclaringEClass.
Using the eContainer of ctx I can get the Toplevel from instance.json.
This Toplevel already has a cross-reference to the matching Toplevel from meta.json, which I can use to get the Sublevels from meta.json. This collection of Sublevels is basically the scope within which the current Sublevel should be unique.
To get the IScope I used Scopes#scopeFor(Iterable).
I didn't post any code here because the actual grammar is bigger/different, and therefore doesn't really help the explanation.