How to specify parameter definition in CDK stack? - aws-cdk

The "Use an AWS CloudFormation parameter" section of the AWS CDK documentation mentions how to customize your AWS CloudFormation templates, and it refers to the CloudFormation template itself. I would like to add parameters to my CDK stack and get a Parameters section in the synthesized CloudFormation template.
Do I understand correctly that the documentation suggests adding a Parameters section to the synthesized template? If so, it will be overwritten with every run of cdk synth.
Is there any other way to define the Parameters section?

Edit: Here is a typescript example that reads a bucket name from the context: https://github.com/cloudshiftstrategies/aws-cdk-examples/tree/master/context-example-typescript-app
You can add parameters to your CDK stack using the CfnParameter construct like so:
new cdk.CfnParameter(this, 'MyParameter', {
  type: 'String',
  default: 'Blah',
  noEcho: true,
});
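If you do define one, the parameter's value is only available as a deploy-time token; a minimal sketch building on the example above (the CfnOutput is just to illustrate referencing the value):
const myParam = new cdk.CfnParameter(this, 'MyParameter', {
  type: 'String',
  noEcho: true,
});

// valueAsString resolves to a CloudFormation Ref at deploy time, not a concrete
// string at synth time, so it cannot be inspected with if/else in CDK code.
new cdk.CfnOutput(this, 'MyParameterEcho', { value: myParam.valueAsString });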
But that is generally discouraged by the CDK. The design is to have fully deployable stacks and to use code/config to decide what should be created for a given account. This is from their documentation:
When you run the cdk synth command for an app with multiple stacks, the cloud assembly includes a separate template for each stack instance. Even if the two stacks are instances of the same class, the AWS CDK emits them as two individual templates.
You can synthesize each template by specifying the stack name in the cdk synth command. The following example synthesizes the template for stack1.
This approach is conceptually different from how AWS CloudFormation templates are normally used, where a template can be deployed multiple times and parameterized through AWS CloudFormation parameters. Although AWS CloudFormation parameters can be defined in the AWS CDK, they are generally discouraged because AWS CloudFormation parameters are resolved only during deployment. This means that you cannot determine their value in your code. For example, to conditionally include a resource in your app based on the value of a parameter, you must set up an AWS CloudFormation condition and tag the resource with this condition. Because the AWS CDK takes an approach where concrete templates are resolved at synthesis time, you can use an if statement to check the value to determine whether a resource should be defined or some behavior should be applied.
Parameterization in the CDK is instead done through context values, either from the command line or via cdk.json, as documented here: https://docs.aws.amazon.com/cdk/latest/guide/get_context_var.html
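As a rough sketch of the context approach (the context key bucketName is illustrative, and the usual aws-cdk-lib and aws-s3 imports are assumed):
// pass the value at synth time:  cdk synth -c bucketName=my-bucket
// or add it to cdk.json:         "context": { "bucketName": "my-bucket" }
const bucketName = this.node.tryGetContext('bucketName');

// The value is known at synth time, so plain TypeScript logic works:
if (bucketName !== undefined) {
  new s3.Bucket(this, 'ConfiguredBucket', { bucketName });
}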

This is the recommended way.
cdk deploy MyStack --parameters uploadBucketName=UpBucket --parameters downloadBucketName=DownBucket
I know it's not exactly your question, but I'm posting it here for reference.
Source: https://docs.aws.amazon.com/cdk/v2/guide/cli.html#cli-deploy
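For those flags to have any effect, the stack still needs matching CfnParameter definitions whose logical IDs match the names on the command line; a minimal sketch (the bucket resources are illustrative, and an aws-s3 import is assumed):
// inside the MyStack constructor
const uploadBucketName = new cdk.CfnParameter(this, 'uploadBucketName', { type: 'String' });
const downloadBucketName = new cdk.CfnParameter(this, 'downloadBucketName', { type: 'String' });

// The values arrive only at deploy time, as CloudFormation Refs.
new s3.Bucket(this, 'UploadBucket', { bucketName: uploadBucketName.valueAsString });
new s3.Bucket(this, 'DownloadBucket', { bucketName: downloadBucketName.valueAsString });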

Related

Can a CDK application discover which command is being invoked?

In my CDK application, I would like to use different logic for validating some context parameters during CDK destroy. Is there a way for the CDK application to determine which command is being invoked?
Unfortunately, there doesn't seem to be a good way to achieve that at the moment.
At least in the case of a TypeScript CDK application, the CLI spawns a child process that renders the CDK object graph. However, that child process doesn't receive the original arguments you passed to the CDK.
There's a way to get around that by accessing process.ppid, which gives you the parent process PID. Then, on Linux-based systems, you can do readFileSync(`/proc/${process.ppid}/cmdline`) to read the parent process's command-line arguments.
However, that approach is very brittle.
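For completeness, a rough sketch of that brittle approach (Linux only, and it assumes the CDK CLI is the direct parent process):
import { readFileSync } from 'fs';

// /proc/<pid>/cmdline holds the arguments separated by NUL bytes.
const parentArgs = readFileSync(`/proc/${process.ppid}/cmdline`, 'utf8')
  .split('\0')
  .filter((arg) => arg.length > 0);

const isDestroy = parentArgs.includes('destroy');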
If you truly need to vary your code based on the command being executed, I'd recommend setting an environment variable, e.g. in your package.json scripts section:
"cdk:synth": "CDK_COMMAND=synth cdk synth"

Create an AWS s3 object with information gathered from AWS CDK

Imagine creating a website with static javascript assets hosted on S3.
Assume further that the JS needs access to values produced within the CDK stack, such as ARNs of other resources dynamically created by CDK.
What would be the best way to resolve that information, and perhaps package it in a settings file deployed to some S3 path that the website can load?
You can define a fixed export name for the value, in your case the ARN of the resource, with CfnOutput in your CDK stack.
You can then retrieve this value with the ListExports call of the CloudFormation SDK.
If you want, you can deploy your resources through CDK, create the settings file from the CDK outputs (using Lambda or CodeBuild to write the file to S3, for example), and deploy the static website (also with CDK) in one pipeline. I would suggest looking into CDK Pipelines.
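A minimal sketch of the export side (the export name my-app:bucket-arn and the bucket are illustrative):
// in the CDK stack that creates the resource
new cdk.CfnOutput(this, 'BucketArnOutput', {
  value: bucket.bucketArn,          // any value produced by the stack
  exportName: 'my-app:bucket-arn',  // fixed name that ListExports can find later
});

// later, e.g. from a Lambda or CodeBuild step, read it back with the AWS SDK:
//   const { Exports } = await new CloudFormationClient({}).send(new ListExportsCommand({}));
//   const arn = Exports?.find((e) => e.Name === 'my-app:bucket-arn')?.Value;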

Jenkins: Pass environment variable to the job parameter

Since I have the same static, rarely changed parameters used by several jobs, I decided to put them in one place in my Jenkins and use them across jobs.
The first thought that came to my mind was to move my 'static data' into environment variables and read it using the Active Choices Reactive Parameter plugin, which allows running simple Groovy scripts on the job parameters page.
Please note that I know how to get environment variables in the pipeline, but I really need this data on the 'Build with Parameters' screen, i.e. once I click 'Build with Parameters', the Groovy code inside the Active Choices Reactive Parameter must be able to read this environment variable and display it as a parameter to the user.
A simple example of this need:
The environment variable contains the list of servers, and the job deploys the application to the selected server. In this case, I want to be able to write something like this in the Groovy script section of the Active Choices Reactive Parameter:
return[${env.SERVERS_LIST}]
Unfortunately the example above doesn't work. I wasn't able to find any working solution for this yet.
Well, after a few more tries I finally found a solution.
Instead of trying to read the environment variable in the pipeline manner, the simple
return [SERVERS_LIST]
works perfectly.

Is it ok to directly overwrite Dataflow template parameters that are set at build-time?

We would like to prevent certain parameters (namely filesToStage) of our Dataflow template from being populated on the Dataflow Job page. Is there a recommended way to achieve this? We've found that simply specifying "filesToStage=" when launching the template via gcloud suffices, but we're not sure if this is robust/stable behavior.
For context, we are hosting this Dataflow template for customer usage and would like to hide as much of the implementation as possible (including classpaths).
Specifically, filesToStage can be sent as blank, and the files will be inferred based on the Java classpath:
If filesToStage is blank, Dataflow will infer the files to stage based on the Java classpath.
More information on the considerations for this and other fields can be found here.
For other parameters, the recommendation is to use Cloud KMS to keep the parameters hidden.

How do I integrate a swagger file with aws_apigateway

I want to use aws_apigateway and use a Swagger file to define the API; how do I code this using the AWS CDK, in either Python or TypeScript?
There is a workaround here, but as of now (22/6/2022) this is still on the CDK roadmap (issue ref).
The workaround involves some manual steps and an initial Swagger extraction from the CDK; then it can be fed back in somehow.
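One possible approach today (not necessarily the workaround referred to above) is the SpecRestApi construct, which takes an OpenAPI/Swagger definition directly; a minimal TypeScript sketch, where swagger.yaml is an illustrative file name:
import * as cdk from 'aws-cdk-lib';
import * as apigateway from 'aws-cdk-lib/aws-apigateway';
import { Construct } from 'constructs';

export class SwaggerApiStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // The whole API is defined by the OpenAPI/Swagger file on disk.
    new apigateway.SpecRestApi(this, 'SwaggerApi', {
      apiDefinition: apigateway.ApiDefinition.fromAsset('swagger.yaml'),
    });
  }
}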
