I'm using the CDK Python API to define a Glue crawler, but the CDK-generated template contains an empty 'Targets' block in the Crawler resource.
I've not been able to find an example to emulate. I've tried varying the definition of the targets object, but that definition seems to be ignored by CDK.
from aws_cdk import cdk

BUCKET = 'poc-1-bucket43879c71-5uabw2rni0cp'

class PocStack(cdk.Stack):
    def __init__(self, app: cdk.App, id: str, **kwargs) -> None:
        super().__init__(app, id)

        from aws_cdk import (
            aws_iam as iam,
            aws_glue as glue,
            cdk
        )

        glue_role = iam.Role(
            self, 'glue_role',
            assumed_by=iam.ServicePrincipal('glue.amazonaws.com'),
            managed_policy_arns=['arn:aws:iam::aws:policy/service-role/AWSGlueServiceRole']
        )

        glue_crawler = glue.CfnCrawler(
            self, 'glue_crawler',
            database_name='db',
            role=glue_role.role_arn,
            targets={"S3Targets": [{"Path": f'{BUCKET}/path/'}]},
        )
I expect the generated template to contain a valid 'targets' block with a single S3Target. However, cdk synth outputs a template with empty Targets in the AWS::Glue::Crawler resource:
gluecrawler:
  Type: AWS::Glue::Crawler
  Properties:
    DatabaseName: db
    Role:
      Fn::GetAtt:
        - glueroleFCCAEB57
        - Arn
    Targets: {}
Resolved, thanks to a clever colleague!
Changing "S3Targets" to "s3Targets", and "Path" to "path" resolved the issue. See below.
Hi Bob,
When I use TypeScript, the following works for me:
new glue.CfnCrawler(this, 'glue_crawler', {
    databaseName: 'db',
    role: glue_role.roleArn,
    targets: {
        s3Targets: [{ path: "path" }]
    }
});
When I use Python, the following appears to work too:
glue_crawler = glue.CfnCrawler(
    self, 'glue_crawler',
    database_name='db',
    role=glue_role.role_arn,
    targets={
        "s3Targets": [{"path": f'{BUCKET}/path/'}]
    },
)
In TypeScript, TargetsProperty is an interface with s3Targets as a property, and each S3 target has a path property. During the JSII transformation, the untyped targets dict apparently has to use those same camelCase names in Python rather than the original CloudFormation resource names.
A more general way to approach this problem is to dig inside the CDK library, in two steps:
1. Find where the generated module lives:
from aws_cdk import aws_glue
print(aws_glue.__file__)
(.env/lib/python3.8/site-packages/aws_cdk/aws_glue/__init__.py)
2. Go to that file and see how the mappings/types are defined. As of 16 Aug 2020, you find:
@jsii.data_type(
    jsii_type="@aws-cdk/aws-glue.CfnCrawler.TargetsProperty",
    jsii_struct_bases=[],
    name_mapping={
        "catalog_targets": "catalogTargets",
        "dynamo_db_targets": "dynamoDbTargets",
        "jdbc_targets": "jdbcTargets",
        "s3_targets": "s3Targets",
    },
)
I found that the lowerCamelCase names always work in these plain dicts, while the Pythonic snake_case names do not.
I have the following code in a Pipelineconstant.groovy file:
public static final list ACTION_CHOICES = [
    N_A,
    FULL_BLUE_GREEN,
    STAGE,
    FLIP,
    CLEANUP
]
and this parameters block in my Jenkins multi-wrapper file:
parameters {
    string (name: 'ChangeTicket', defaultValue: '000000', description: 'Prod change ticket otherwise 000000')
    choice (name: 'AssetAreaName', choices: ['fpukviewwholeof', 'fpukdocrhs', 'fpuklegstatus', 'fpukbooksandjournals', 'fpukleglinks', 'fpukcasesoverview'], description: 'Select the AssetAreaName.')
    /* groovylint-disable-next-line DuplicateStringLiteral */
    choice (name: 'AssetGroup', choices: ['pdc1c', 'pdc2c'])
}
I would like to reference ACTION_CHOICES in the parameters block like this:
choice (name: 'Action', choices: constants.ACTION_CHOICES, description: 'Multi Version deployment actions')
but it doesn't work for me.
You're almost there! Jenkinsfiles can be extended with variables/constants defined either directly in your file or (better, I'd say) in a Jenkins shared library, which is your scenario.
The parameter syntax within your pipeline was fine, as was the idea of a list of constants; what was missing was a proper link between those parts: the library import. See the example below (the names in the example are not carved in stone and can of course be changed, but watch out: Jenkins is quite sensitive about filenames and paths, especially in shared libraries):
Pipelineconstant.groovy should be placed in src/org/pipelines of your Jenkins shared library.
Pipelineconstant.groovy
package org.pipelines

class Pipelineconstant {
    public static final List<String> ACTION_CHOICES = ["N_A", "FULL_BLUE_GREEN", "STAGE", "FLIP", "CLEANUP"]
}
and then you can reference this list of constants within your Jenkinsfile pipeline.
Jenkinsfile
@Library('jsl-constants') _
import org.pipelines.Pipelineconstant

pipeline {
    agent any
    parameters {
        choice (name: 'Action', choices: Pipelineconstant.ACTION_CHOICES, description: 'Multi Version deployment actions')
    }
    // rest of your pipeline code
}
The first two lines of the pipeline are important: the first loads the shared library itself, and only then can the import on the second line work (otherwise Jenkins would not know where to find that Pipelineconstant.groovy file).
Without a Jenkins shared library (files in one repo):
I've found this topic discussed and solved for scripted pipelines here: Load jenkins parameters from external groovy file
I'm building a state machine with CDK and running into an issue checking the CodeBuild project status inside the state machine.
Q. Could you let me know the correct format of the batchGetBuilds parameters in CallAwsService?
import { CallAwsService } from "aws-cdk-lib/aws-stepfunctions-tasks"
import { JsonPath } from "aws-cdk-lib/aws-stepfunctions"

new CallAwsService(scope, "Check 1-1: Codebuild Status", {
  service: "codebuild",
  action: "batchGetBuilds",
  parameters: {
    Ids: [JsonPath.stringAt("$.results.codebuild.id")],
  },
  iamResources: ["*"],
  inputPath: "$",
  resultSelector: { "status.$": "$.builds[0].buildStatus" },
  resultPath: "$.results.bulidAmi",
})
I tried 2 ways.
JsonPath.stringAt("$.results.codebuild.id")
Then it returns the error below and the execution fails:
"An error occurred while executing the state 'Check 1-1: Codebuild Status' (entered at the event id #9).
The Parameters '{\"Ids\":\"******-generate-new-ami-project:05763ec2-89a6-4b56-8b44-************\"}' could not be used to start the Task:
[Cannot deserialize instance of `java.util.ArrayList<java.lang.Object>` out of VALUE_STRING token]"
[JsonPath.stringAt("$.results.codebuild.id")]
If I use an array, it fails at the build stage (I'm using CDK Pipelines to deploy this). The error message is below:
Cannot use JsonPath fields in an array, they must be used in objects
+ Extra Question
I found this during my search:
https://stackoverflow.com/questions/70978385/aws-step-functions-wait-for-codebuild-to-finish
Can I use this `sync` on `CallAwsService`? (The Main 1... state is using `CallAwsService` as well.)
If yes, how can I use it?
Or do I need to change the `CallAwsService` to `CodeBuildStartBuild`?
Could you let me know the correct format of batchGetBuilds parameters in CallAwsService?
Use the States.Array intrinsic function. These CDK syntaxes are equivalent:
parameters: {
  // any one of these is sufficient; the three forms produce the same state definition
  'Ids.$': 'States.Array($.results.codebuild.id)',
  Ids: JsonPath.stringAt('States.Array($.results.codebuild.id)'),
  Ids: JsonPath.array(JsonPath.stringAt('$.results.codebuild.id'))
}
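For completeness, here is a sketch of the asker's task with the corrected parameter. It reuses the state name, service call, and result paths from the question (with the resultPath typo fixed); the scope variable is assumed to exist in your stack:
import { Construct } from "constructs";
import { JsonPath } from "aws-cdk-lib/aws-stepfunctions";
import { CallAwsService } from "aws-cdk-lib/aws-stepfunctions-tasks";

declare const scope: Construct; // assumed: the construct scope of your state machine

new CallAwsService(scope, "Check 1-1: Codebuild Status", {
  service: "codebuild",
  action: "batchGetBuilds",
  parameters: {
    // States.Array(...) wraps the single build id string in the list that BatchGetBuilds expects
    Ids: JsonPath.array(JsonPath.stringAt("$.results.codebuild.id")),
  },
  iamResources: ["*"],
  resultSelector: { "status.$": "$.builds[0].buildStatus" },
  resultPath: "$.results.buildAmi",
});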
Can I use this sync on the CallAwsService?
No. The CallAwsService task implements the AWS SDK service integrations, which do not support .sync for CodeBuild actions. As of v2.15, CDK should throw an error if you pass the RUN_JOB (= .sync) pattern to CallAwsService. See this github issue for context.
Or do I need to change the CallAwsService to CodeBuildStartBuild?
Yes. CodeBuildStartBuild works as expected with the RUN_JOB integration pattern.
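A minimal sketch of that switch, assuming the build project is available as a codebuild.Project (the scope and amiProject variables and the state name are assumptions, not from the question):
import { Construct } from "constructs";
import * as sfn from "aws-cdk-lib/aws-stepfunctions";
import * as tasks from "aws-cdk-lib/aws-stepfunctions-tasks";
import * as codebuild from "aws-cdk-lib/aws-codebuild";

declare const scope: Construct;              // assumed: the construct scope of your state machine
declare const amiProject: codebuild.Project; // assumed: the project that builds the AMI

// RUN_JOB (.sync) makes Step Functions wait until the build finishes,
// so a separate "check status" state is no longer needed.
new tasks.CodeBuildStartBuild(scope, "Build AMI", {
  project: amiProject,
  integrationPattern: sfn.IntegrationPattern.RUN_JOB,
  // The sync integration returns the Build object, so the final status can be selected directly.
  resultSelector: { "status.$": "$.Build.BuildStatus" },
  resultPath: "$.results.buildAmi",
});
With this pattern the whole "start build, poll batchGetBuilds, loop" section of the state machine collapses into a single task state.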
Running cdk deploy after updating my Stack:
export function createTaskXXXX (stackScope: Construct, workflowContext: WorkflowContext) {
  const lambdaXXXX = new lambda.Function(stackScope, 'XXXXFunction', {
    runtime: Globals.LAMBDA_RUNTIME,
    memorySize: Globals.LAMBDA_MEMORY_MAX,
    code: lambda.Code.fromAsset(CDK_MODULE_ASSETS_PATH),
    handler: 'xxxx-handler.handler',
    timeout: Duration.minutes(Globals.LAMBDA_DURATION_2MIN),
    environment: {
      YYYY_ENV: (workflowContext.production) ? 'prod' : 'test',
      YYYY_A_LOCATION: `s3://${workflowContext.S3ImportDataBucket}/adata-workflow/split-input/`,
      YYYY_B_LOCATION: `s3://${workflowContext.S3ImportDataBucket}/bdata-workflow/split-input/` // <--- added
    }
  })

  lambdaXXXX.addToRolePolicy(new iam.PolicyStatement({
    effect: Effect.ALLOW,
    actions: ['s3:PutObject'],
    resources: [
      `arn:aws:s3:::${workflowContext.S3ImportDataBucket}/adata-workflow/split-input/*`,
      `arn:aws:s3:::${workflowContext.S3ImportDataBucket}/bdata-workflow/split-input/*` // <---- added
    ]
  }))
}
I realize that those changes are not reflected in stack.template.json:
...
"Runtime": "nodejs12.x",
"Environment": {
  "Variables": {
    "YYYY_ENV": "test",
    "YYYY_A_LOCATION": "s3://.../adata-workflow/split-input/"
  }
},
"MemorySize": 3008,
"Timeout": 120
}
...
I have cleaned cdk.out and tried deploy --force, but I never see any updates.
Is deleting the stack and redeploying really the only alternative, or am I missing something? I'd expect cdk synth, at least, to generate different output.
(I also changed to CDK 1.65.0 on my local system to match the package.json.)
Thanks.
EDITED: I cloned the project again, ran npm install and cdk synth, and finally saw the changes. I would rather not do this every time; any idea what could be blocking the correct synth output?
EDITED 2: After diffing the bad old project against the fresh clone where synth worked, I realized that some of my .ts project files (for example cdk.ts, my app definition) also had compiled replicas with .js and .d.ts extensions, such as cdk.js and cdk.d.ts. Could I have run some command by mistake that compiled the TypeScript? I will continue to investigate; thanks to all answers.
Because CDK uses CloudFormation, it computes a change set to decide what to update. That is to say, if it doesn't think anything has changed, it won't touch that resource.
This can, of course, be very annoying, as sometimes it decides nothing has changed and skips the update when there actually is a change. I find this most often with layers, when some form of makefile generates the zips for them: even though a 'new' zip is produced, whatever CDK uses to determine that the zip has changed (compression, hashing, etc.) still treats it as the same.
You can get around this by putting a datetime in the description. It is assigned at synth time (which is part of cdk deploy), so using the current timestamp guarantees a change on every deployment, as in the sketch below.
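A minimal sketch of that trick, written against the aws-cdk-lib (v2) import style; the function name, handler, and asset path are illustrative, not taken from the question:
import { Duration, Stack, StackProps } from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { Construct } from 'constructs';

export class ExampleStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    new lambda.Function(this, 'XXXXFunction', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'xxxx-handler.handler',
      code: lambda.Code.fromAsset('lambda'), // illustrative asset path
      timeout: Duration.minutes(2),
      // The timestamp changes on every synth, so CloudFormation always sees a diff
      // and updates the function even when it judges the asset to be unchanged.
      description: `Deployed at ${new Date().toISOString()}`,
    });
  }
}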
You can also use cdk diff to see what it thinks the changes are.
And finally... always remember to save your file before deployments as, depending on your IDE, it may not be available to the command line ;)
Looking at the code, I would expect it to update, and I don't know why it doesn't.
One workaround is to comment out the Lambda part once and deploy, then uncomment it and deploy again, which recreates the Lambda.
This is how I do it. Works nicely so far. Basically you can do the following:
Push your lambda code as a zip file to an S3 bucket. The bucket must have versioning enabled.
The CDK code below will do the following:
Create a custom resource. It basically calls s3.listObjectVersions for my lambda zip file in S3. I grab the first returned value, which seems to be the most recent object version all the time (I cannot confirm this with the documentation though). I also create a role for the custom resource.
Create the lambda and specify the code as the zip file in s3 AND THE OBJECT VERSION RETURNED BY THE CUSTOM RESOURCE! That is the most important part.
Create a new lambda version.
Then the lambda's code updates when you deploy the CDK stack!
const versionIdKey = 'Versions.0.VersionId';
const isLatestKey = 'Versions.0.IsLatest';
const now = new Date().toISOString();

const role = new Role(this, 'custom-resource-role', {
  assumedBy: new ServicePrincipal('lambda.amazonaws.com'),
});
role.addManagedPolicy(ManagedPolicy.fromAwsManagedPolicyName('AdministratorAccess')); // you can make this more specific

// I'm not 100% sure this gives you the most recent first, but it seems to be doing that every time for me.
// I can't find anything in the docs about it...
const awsSdkCall: AwsSdkCall = {
  action: "listObjectVersions",
  parameters: {
    Bucket: buildOutputBucket.bucketName, // s3 bucket with zip file containing lambda code
    MaxKeys: 1,
    Prefix: LAMBDA_S3_KEY, // S3 key of zip file containing lambda code
  },
  physicalResourceId: PhysicalResourceId.of(buildOutputBucket.bucketName),
  region: 'us-east-1', // or whatever region
  service: "S3",
  outputPaths: [versionIdKey, isLatestKey]
};

const customResourceName = 'get-object-version';
const customResourceId = `${customResourceName}-${now}`; // not sure if `now` is necessary...

const response = new AwsCustomResource(this, customResourceId, {
  functionName: customResourceName,
  installLatestAwsSdk: true,
  onCreate: awsSdkCall,
  onUpdate: awsSdkCall,
  policy: AwsCustomResourcePolicy.fromSdkCalls({resources: AwsCustomResourcePolicy.ANY_RESOURCE}), // you can make this more specific
  resourceType: "Custom::ListObjectVersions",
  role: role
});

const fn = new Function(this, 'my-lambda', {
  functionName: 'my-lambda',
  description: `${response.getResponseField(versionIdKey)}-${now}`,
  runtime: Runtime.NODEJS_14_X,
  memorySize: 1024,
  timeout: Duration.seconds(5),
  handler: 'index.handler',
  // This is where the magic happens. You tell CDK to use a specific S3 object version when updating the lambda.
  code: Code.fromBucket(buildOutputBucket, LAMBDA_S3_KEY, response.getResponseField(versionIdKey)),
  currentVersionOptions: {
    removalPolicy: RemovalPolicy.DESTROY,
  },
});

new Version(this, `version-${now}`, { // not sure if `now` is necessary...
  lambda: fn,
  removalPolicy: RemovalPolicy.DESTROY
});
Do note:
For this to work, you have to upload your lambda zip code to S3 before each cdk deploy. This can be the same code as before, but the s3 bucket versioning will create a new version. I use code pipeline to do this as part of additional automation.
I am trying to use a global class that I've defined in a shared library to help organise job parameters. It's not working, and I'm not even sure if it is possible.
My job looks something like this:
pipelineJob('My-Job') {
    definition {
        // Job definition goes here
    }
    parameters {
        choiceParam('awsAccount', awsAccount.ALL)
    }
}
In a file in /vars/awsAccount.groovy I have the following code:
class awsAccount implements Serializable {
    final String SANDPIT = "sandpit",
    final String DEV = "dev",
    final String PROD = "prod"

    static String[] ALL = [SANDPIT, DEV, PROD]
}
Global pipeline libraries are configured to load implicitly from my repository's master branch.
When attempting to update the DSL scripts I receive the error:
ERROR: (myJob.groovy, line 67) No such property: awsAccount for class: javaposse.jobdsl.dsl.helpers.BuildParametersContext
Why does it not find the class, and is it even possible to use shared library classes like this in a pipeline job?
Disclaimer: I know this works using a Jenkinsfile. Unfortunately, I have not tested it with declarative pipelines, but since there are no answers yet it may be worth a try.
Regarding your first question: there are several reasons why a class from your shared library might not be found, starting with the library import, the library syntax, etc., but shared libraries definitely do work with the DSL. To be more precise about your case, additional information would be needed, but be sure that:
You have your Groovy class definition using exactly the directory structure described in the documentation (https://www.jenkins.io/doc/book/pipeline/shared-libraries/)
You give the shared library a name in Jenkins as you configure it, and that it is exactly the name you use in the import
You use the import as described in the documentation (under Using Libraries)
Regarding your second question (the one in the title of this SO question): yes, you can populate job parameters from information in your shared library, at least when using Jenkinsfiles. You can even define properties to be included in the pipeline. I got it working with a slightly tricky syntax due to various problems.
Again, I am using Jenkinsfile and this is what worked for me:
In my shared-lib class, I added a static function that introduces the build parameters. Notice the input parameters that function needs and its usage:
class awsAccount implements Serializable {
    //
    static giveMeParameters (script) {
        return [
            // Some parms
            script.string(defaultValue: '', description: 'A default parameter', name: 'textParm'),
            script.booleanParam(defaultValue: false, description: 'If set to True, do whatever you need - otherwise, do not do it', name: 'boolOption'),
        ]
    }
}
To introduce those parameters in the pipeline, you need to place the returned value of the function into the parameters array:
properties (
    parameters (
        awsAccount.giveMeParameters (this)
    )
)
Again, notice the syntax when calling the function. Similarly, you can also define functions in the shared library that return properties and use them in multiple jobs (disableConcurrentBuilds, buildDiscarder, etc.).
After I create a CloudWatch Events rule, I try to add a target to it, but I am unable to add an input transformation. Previously addTarget had props that allowed an input transformation, but it does not anymore.
codeBuildRule.addTarget(new SnsTopic(props.topic));
The AWS CDK documentation provides this solution, but I don't exactly understand what it says:
You can add additional targets, with optional input transformer using eventRule.addTarget(target[, input]). For example, we can add a SNS topic target which formats a human-readable message for the commit.
You should specify the message prop and use RuleTargetInput static methods. Some of these methods can use strings returned by EventField.fromPath():
// From a path
codeBuildRule.addTarget(new SnsTopic(props.topic, {
  message: events.RuleTargetInput.fromEventPath('$.detail')
}));

// Custom object
codeBuildRule.addTarget(new SnsTopic(props.topic, {
  message: RuleTargetInput.fromObject({
    foo: EventField.fromPath('$.detail.bar')
  })
}));
I had the same question trying to implement this tutorial in CDK: Tutorial: Set up a CloudWatch Events rule to receive email notifications for pipeline state changes
I found this helpful as well: Detect and react to changes in pipeline state with Amazon CloudWatch Events
NOTE: I could not get it to work using the Pipeline's class method onStateChange().
I ended up writing a Rule:
const topic = new Topic(this, 'topic', {
  topicName: 'codepipeline-notes-failure',
});

const description = `Generated by the CDK for stack: ${this.stackName}`;

new Rule(this, 'failed', {
  description: description,
  eventPattern: {
    detail: {state: ['FAILED'], pipeline: ['notes']},
    detailType: ['CodePipeline Pipeline Execution State Change'],
    source: ['aws.codepipeline'],
  },
  targets: [
    new SnsTopic(topic, {
      message: RuleTargetInput.fromText(
        `The Pipeline '${EventField.fromPath('$.detail.pipeline')}' has ${EventField.fromPath('$.detail.state')}`,
      ),
    }),
  ],
});
After implementing this, if you navigate to Amazon EventBridge -> Rules, select the rule, select the Target(s), and click View Details, you will see the Target Details with the Input transformer & InputTemplate.
Input transformer:
{"InputPathsMap":{"detail-pipeline":"$.detail.pipeline","detail-state":"$.detail.state"},"InputTemplate":"\"The
Pipeline '<detail-pipeline>' has <detail-state>\""}
This works with CDK Python, for CodeBuild-to-SNS notifications.
sns_topic = sns.Topic(...)
codebuild_project = codebuild.Project(...)

sns_topic.grant_publish(codebuild_project)

codebuild_project.on_build_failed(
    f'rule-on-failed',
    target=events_targets.SnsTopic(
        sns_topic,
        message=events.RuleTargetInput.from_multiline_text(
            f"""
            Name: {events.EventField.from_path('$.detail.project-name')}
            State: {events.EventField.from_path('$.detail.build-status')}
            Build: {events.EventField.from_path('$.detail.build-id')}
            Account: {events.EventField.from_path('$.account')}
            """
        )
    )
)
Credits to @pruthvi-raj's comment on an answer above.