Access context object when defining a Step Function workflow in the CDK?

I would like to pass in the Step Function Execution ID of the current workflow to my Lambda function when it is executed.
I see in the documentation that to "access the context object, first specify the parameter name by appending .$ to the end"; however, I cannot do this as I am defining my workflow using the CDK, which does not have access to Parameters. I have to use InputPath.
Despite not being able to follow the instructions, I tried a few things anyway. I first tried:
new tasks.LambdaInvoke(this, "InvokeLambdaTask", {
  lambdaFunction: myLambda,
  inputPath: "$$.Execution.id"
})
and got the following error: Invalid path '$.Execution.id' : No results for path: $['Execution']['id']
I also tried a single dollar sign:
new tasks.LambdaInvoke(this, "InvokeLambdaTask", {
  lambdaFunction: myLambda,
  inputPath: "$.Execution.id"
})
and got Invalid path '$.Execution.id' : Missing property in path $['Execution']
Is there any way to achieve this? I've seen a few other questions asking more or less the same thing; however, I cannot really make use of those answers with the CDK.

I would like to pass in the Step Function Execution ID of the current workflow to my Lambda function
The LambdaInvoke task's payload is supplied to the Lambda function as input. Note the two equivalent ways of referencing the JSONPath.
new tasks.LambdaInvoke(this, "InvokeLambdaTask", {
  lambdaFunction: myLambda,
  payload: sfn.TaskInput.fromObject({
    executionId: sfn.JsonPath.stringAt("$$.Execution.Id"),
    "alsoExecutionId.$": "$$.Execution.Id",
  }),
});
The Lambda receives the Execution Id in the event payload:
{
  executionId: "arn:aws:states:us-east-1:123456789012:execution:StateMachine2E01...",
  alsoExecutionId: "arn:aws:states:us-east-1:123456789012:execution:StateMachine2E01..."
}
The CDK ... does not have access to parameters.
It does. The CDK renders the payload arg into the State Machine definition's Parameters:
"Type": "Task",
"Resource": "arn:aws:lambda:us-east-1:123456789012:function:Lambda...",
"Parameters": {
"executionId.$": "$$.Execution.Id",
"alsoExecutionId.$": "$$.Execution.Id"
}

You have to use stepfunctions.JsonPath.stringAt("$$.Execution.Id"). The CDK requires the JsonPath.xAt() helpers for any JSONPath references, since that is how it tells a path apart from a literal string and knows to render the .$ suffix on the corresponding key.
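For illustration, a minimal sketch of the difference, reusing the myLambda task from the answer above:

// A plain string is rendered as a literal value, so the Lambda would receive
// the text "$$.Execution.Id":
//   "Parameters": { "executionId": "$$.Execution.Id" }
// JsonPath.stringAt marks it as a path, so the CDK renders a ".$" key and
// Step Functions resolves it at runtime:
//   "Parameters": { "executionId.$": "$$.Execution.Id" }
new tasks.LambdaInvoke(this, "InvokeLambdaTask", {
  lambdaFunction: myLambda,
  payload: sfn.TaskInput.fromObject({
    executionId: sfn.JsonPath.stringAt("$$.Execution.Id"),
  }),
});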

Related

Serverless - Change the content before deploy

I'm using Serverless to work with our AWS Lambda / AppSync setup.
For error handling, we keep error codes with messages in a JSON file. The codes will be unique. Something like this:
// error-code.json
{
  "1": { "code": 1, "message": "Invalid User Input" },
  "2": { "code": 2, "message": "Invalid Input" }
  // ... and so on
}
This will be deployed as a layer, and all the Lambdas will use it. The issue is that we cannot use it in the resolver templates. Some of the resolvers are template-only (VTL) files; these template files cannot access the JSON file, nor can they access the layer. How can I use error-code.json here?
Solution 1:
Manually write the error code in the templates and make sure the codes are always unique. Something like this:
#set($errorInfo = {
  "errorCode": "1",
  "errorMessage": "Invalid Input"
})
$util.error("Invalid Input", "errorType", $ctx.arguments, $errorInfo)
Rejected: because we would have to manually check the uniqueness of the error codes every time. With a lot of template files, we cannot rely on that.
Solution 2:
Create a table with error codes (unique) and error messages, and use this table to send errors from the templates.
Rejected: because we use multiple AppSync instances and they all connect to different databases. We would have to create this table in every database, so uniqueness across the AppSync instances would not be maintained.
Solution 3:
Write a placeholder in the VTL where we want to send the error. Before deploying, replace the placeholder with the actual code using a pre-hook script, not in the actual VTL file but in the generated package that Serverless deploys. Does Serverless even support such a thing?
If your errors are all static, there is one more option for consideration.
You create one more file that holds all the errors, defined in Velocity:
$util.qr( $ctx.stash.put("errors", {}) )
$util.qr( $ctx.stash.errors.put("ONE", { "code": 1, "message": "Invalid User Input" }) )
## ... and so on
$util.qr( $ctx.stash.errors.put("TWENTY", { "code": 20, "message": "20th error description" }) )
For every Velocity resolver that throws errors, you inject the pre-defined errors at the beginning of its request mapping template. Whenever you want to throw an error, you do so by retrieving a pre-defined error from $ctx.stash:
$util.error($ctx.stash.errors.ONE.message, $ctx.stash.errors.ONE.code)
The error file is generated from error-code.json, or typed manually again for simplicity. $ctx.stash is used because the stash is accessible from everywhere in a resolver, including pipeline resolvers.
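If you'd rather not retype them, that prelude can be generated from error-code.json by a small build script. A rough sketch in TypeScript (the file name errors.vtl and the overall layout are assumptions, not something Serverless does for you):

// generate-errors.ts (hypothetical build step): render error-code.json into a VTL prelude
import * as fs from "fs";

const errors: Record<string, { code: number; message: string }> = JSON.parse(
  fs.readFileSync("error-code.json", "utf8")
);

const lines = ['$util.qr( $ctx.stash.put("errors", {}) )'];
for (const [key, { code, message }] of Object.entries(errors)) {
  lines.push(
    `$util.qr( $ctx.stash.errors.put("${key}", { "code": ${code}, "message": "${message}" }) )`
  );
}

// Prepend the contents of errors.vtl to each request mapping template before packaging
fs.writeFileSync("errors.vtl", lines.join("\n"));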

How to set a work item state while creating or updating work item in Azure Devops Rest API?

I have been working on an API which programmatically creates/updates work items in Azure DevOps. I have been able to create a work item and populate almost all fields, but I have a problem with setting the state.
When I make a POST request to the Azure DevOps REST API with any state name, like "Active", "Closed", or "Rejected", it throws a 400 Bad Request error.
I don't know if I am missing anything or if there is something wrong with the way I am trying to set the value.
{
  "op" : "add",
  "path": "/fields/System.State",
  "value"="Active",
}
I have found the solution to this problem and hence I am answering it here.
I was getting a 400 Bad Request error whenever I tried creating an item and setting the state in the same call. I debugged the code, caught the exception, and found out that there are validation rules for some of the fields. State is one of them.
The rule for the System.State field is that whenever a work item is created, it takes its configured default value (in my case it was "Proposed"; in your case it could be "New"). If you try altering the value at the time of work item creation, it throws a 400 Bad Request.
What should I do if I have to Create a Work Item with a specific State?
As of now, the solution I have found is to make two calls: one for work item creation and another for changing the state of the work item to the desired state.
async Task<IActionResult> CreateWorkItem()
{
    var result = await _client.Post(url, jsonData);
    var result2 = await _client.Put(result.id, jsonData); // or maybe just the state
    return Ok(result2);
}
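For reference, a minimal sketch of that two-call flow against the REST API (TypeScript on Node 18+, whose global fetch is assumed; the organization, project, work item type, and PAT env var are placeholders). Note that the second call is a PATCH with a JSON Patch body:

// Hypothetical two-step flow: create the work item, then move it to the desired state.
// Runs inside an async function; AZDO_PAT is a personal access token.
const base = "https://dev.azure.com/{organization}/{project}/_apis/wit/workitems";
const headers = {
  "Content-Type": "application/json-patch+json",
  Authorization: "Basic " + Buffer.from(":" + process.env.AZDO_PAT).toString("base64"),
};

// 1. Create the work item; it starts in its configured default state (e.g. "Proposed")
const created = await fetch(`${base}/$Bug?api-version=7.1`, {
  method: "POST",
  headers,
  body: JSON.stringify([{ op: "add", path: "/fields/System.Title", value: "My bug" }]),
});
const { id } = await created.json();

// 2. Change the state in a second call
await fetch(`${base}/${id}?api-version=7.1`, {
  method: "PATCH",
  headers,
  body: JSON.stringify([{ op: "add", path: "/fields/System.State", value: "Active" }]),
});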
Check the example here: Update a field
You have to use "value":"Active" in the request body.
[
  {
    "op": "add",
    "path": "/fields/System.State",
    "value": "Active"
  }
]

How to fetch SSM Parameters from two different accounts using AWS CDK

I have a scenario where I'm using CodePipeline to deploy my cdk project from a tools account to several environment accounts.
The way my pipeline is deploying is by running cdk deploy from within a CodeBuild job.
My team has decided to use SSM Parameter Store to store configuration, and we ended up with some parameters living in the environment accounts, for example the VPC ID (resources/vpc/id), which I can read at deployment time via ssm.StringParameter.valueForStringParameter.
However, other parameters live in the tools account, such as the account IDs of my environment accounts (environment/nonprod/account/id) and other global config. I'm having trouble fetching those values.
At the moment, the only approach I could think of was to read all those values in a previous step and load them into the context values.
Is there a more elegant approach for this problem? I was hoping I could specify in which account to get the SSM values from. Any ideas?
Thank you.
As you already stated, there is no native support for that. I am also using CodePipeline in cross-account deployments, so all the automation parameters and product-specific parameters are stored in a secured account, and CodePipeline deploys the resources using CloudFormation as an action provider.
Cross-account resolution of SSM parameters isn't supported, so in the end I added an extra stage to my CodePipeline, which is nothing but a CodeBuild project that runs a script in a containerized environment; the script then "syncs" the parameters from the automation account to the destination account.
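The sync script itself can stay small. A rough sketch of the idea in TypeScript with the AWS SDK v2 (the role ARN and parameter name are placeholders): read the parameter in the automation account, assume a role in the destination account, and write it there.

import * as AWS from "aws-sdk";

// Hypothetical sync of one parameter from the automation account to a destination account
async function syncParameter(name: string, destRoleArn: string): Promise<void> {
  // Read from the automation account (the account this CodeBuild job runs in)
  const sourceSsm = new AWS.SSM();
  const { Parameter } = await sourceSsm
    .getParameter({ Name: name, WithDecryption: true })
    .promise();

  // Assume a role in the destination account
  const sts = new AWS.STS();
  const { Credentials } = await sts
    .assumeRole({ RoleArn: destRoleArn, RoleSessionName: "param-sync" })
    .promise();

  // Write the parameter into the destination account
  const destSsm = new AWS.SSM({
    accessKeyId: Credentials!.AccessKeyId,
    secretAccessKey: Credentials!.SecretAccessKey,
    sessionToken: Credentials!.SessionToken,
  });
  await destSsm
    .putParameter({ Name: name, Value: Parameter!.Value!, Type: "String", Overwrite: true })
    .promise();
}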
As part of your pipeline, I would add a preliminary step to execute a Lambda. That Lambda can then execute whatever queries you wish to obtain whatever metadata/config that is required. The output from that Lambda can then be passed in to the CodeBuild step.
e.g. within the Lambda:
import * as AWS from 'aws-sdk';
import { CodePipelineEvent, Context } from 'aws-lambda';

export class ConfigFetcher {
  codepipeline = new AWS.CodePipeline();

  async fetchConfig(event: CodePipelineEvent, context: Context): Promise<void> {
    // Retrieve the Job ID from the Lambda action
    const jobId = event['CodePipeline.job'].id;
    // now get your config by executing whatever queries you need, even cross-account, via the SDK
    // we assume that the answer ends up in the variable someValue
    const someValue = 'your-config-value'; // placeholder
    const params = {
      jobId: jobId,
      outputVariables: {
        MY_CONFIG: someValue,
      },
    };
    // now tell CodePipeline you're done
    await this.codepipeline.putJobSuccessResult(params).promise().catch(err => {
      console.error('Error reporting build success to CodePipeline: ' + err);
      throw err;
    });
    // make sure you have some sort of catch wrapping the above to post a failure to CodePipeline
    // ...
  }
}

const configFetcher = new ConfigFetcher();

exports.handler = async function fetchConfigMetadata(event: CodePipelineEvent, context: Context): Promise<void> {
  return configFetcher.fetchConfig(event, context);
};
Assuming that you create your pipeline using CDK, then your Lambda step will be created using something like this:
const fetcherAction = new LambdaInvokeAction({
  actionName: 'FetchConfigMetadata',
  lambda: configFetcher,
  variablesNamespace: 'ConfigMetadata',
});
Note the use of variablesNamespace: we need to refer to this later in order to retrieve the values from the Lambda's output and insert them as env variables into the CodeBuild environment.
Now our CodeBuild definition, again assuming we create using CDK:
new CodeBuildAction({
  // ...
  environmentVariables: {
    MY_CONFIG: {
      type: BuildEnvironmentVariableType.PLAINTEXT,
      value: '#{ConfigMetadata.MY_CONFIG}',
    },
  },
});
We can call the variable whatever we want within CodeBuild, but note that ConfigMetadata.MY_CONFIG needs to match the namespace and output value of the Lambda.
You can have your lambda do anything you want to retrieve whatever data it needs - it's just going to need to be given appropriate permissions to reach across into other AWS accounts if required, which you can do using role assumption. Using a Lambda as a pipeline step will be a LOT faster than using a CodeBuild step in the pipeline, plus it's easier to change: if you write your Lambda code in Typescript/JS or Python, you can even use the AWS console to do in-place edits whilst you test that it executes correctly.
AFAIK there is no native way to achieve what you described. If there is one, I'd like to know too. I believe you can use a CloudFormation custom resource backed by Lambda for this purpose.
You can pass parameters to the lambda request and get information back from the lambda response.
See https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources-lambda.html, https://www.2ndwatch.com/blog/a-step-by-step-guide-on-using-aws-lambda-backed-custom-resources-with-amazon-cfts/ and https://docs.aws.amazon.com/cdk/api/latest/docs/custom-resources-readme.html for more information.
This question is a year old, but a simpler method I found for retrieving parameters from your tools/deployment account is to specify them as env variables in your buildspec file. CodeBuild will always pull these from whatever account your job is running in (which in this question's scenario would be the tools account).
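For example, if the pipeline is defined with the CDK as in the earlier answer, the CodeBuild action can pull a Parameter Store value directly (a sketch; the parameter name is a placeholder):

new CodeBuildAction({
  // ...
  environmentVariables: {
    MY_CONFIG: {
      // Resolved by CodeBuild from the account the build runs in (the tools account here)
      type: BuildEnvironmentVariableType.PARAMETER_STORE,
      value: '/path/to/parameter',
    },
  },
});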
To pull parameters from your target environment accounts, it's best to use the CDK SSM approach suggested by the question author.

Get SQS URL from within Serverless function?

I'm building a Serverless app that defines an SQS queue in the resources as follows:
resources:
  Resources:
    TheQueue:
      Type: "AWS::SQS::Queue"
      Properties:
        QueueName: "TheQueue"
I want to send messages to this queue from within one of the functions. How can I access the URL from within the function? I want to place it here:
const params = {
  MessageBody: 'message body here',
  QueueUrl: 'WHATS_THE_URL_HERE',
  DelaySeconds: 5
};
This is a great question!
I like to set the queue URL as an ENV var for my app!
So you've named the queue TheQueue.
Simply add this snippet to your serverless.yml file:
provider:
  name: aws
  runtime: <YOUR RUNTIME>
  environment:
    THE_QUEUE_URL: { Ref: TheQueue }
Serverless will automatically grab the queue URL from your CloudFormation and inject it into your ENV.
Then you can access the param as:
const params = {
  MessageBody: 'message body here',
  QueueUrl: process.env.THE_QUEUE_URL,
  DelaySeconds: 5
};
You can use the Get Queue URL API, though I tend to also pass it in to my function. The QueueUrl is the Ref value for an SQS queue in CloudFormation, so you can pretty easily get to it in your CloudFormation. This handy cheat sheet is really helpful for working with CloudFormation attributes and refs.
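If you go the lookup route, it's a single SDK call (a sketch with the AWS SDK for JavaScript v2, run inside an async handler; the queue name matches the resource above):

const AWS = require('aws-sdk');
const sqs = new AWS.SQS();

// Resolve the queue URL from the queue name at runtime
const { QueueUrl } = await sqs.getQueueUrl({ QueueName: 'TheQueue' }).promise();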
I go a bit of a different route. I, personally, don't like storing information in environment variables when using Lambda, though I really like Aaron Stuyvenberg's solution. Therefore, I store information like this in AWS SSM Parameter Store.
Then in my code I just call for it when needed. Forgive my JS, it has been a while since I used it; I mostly do Python:
const AWS = require('aws-sdk');
const ssm = new AWS.SSM();

const myHandler = async (event, context) => {
  // getParameter returns { Parameter: { Value, ... } }
  const { Parameter: { Value } } = await ssm
    .getParameter({ Name: 'some.name.of.parameter' })
    .promise();
  const params = {
    MessageBody: 'message body here',
    QueueUrl: Value,
    DelaySeconds: 5
  };
};
The destructuring above unwraps the { Parameter: { Value } } shape that getParameter returns; this is roughly what I do. In Python I wrote a library that does all of this in one line.

Mutation not requesting for actively fetched container data in fatQuery on RANGE_ADD

I am trying to do a RANGE_ADD mutation using what is now known as Relay Classic (this is probably resolved in Relay Modern). I get the error:
Warning: writeRelayUpdatePayload(): Expected response payload to include the newly created edge `newThingEdge` and its `node` field. Did you forget to update the `RANGE_ADD` mutation config?
So, yes, the payload is not sending anything more than the clientMutationId in the expected response shape, because the request mutation is not asking for it.
According to @Joe Savona here, https://github.com/facebook/relay/issues/521, this might happen if there is no intersecting container requesting this data. But that's not entirely true for me. My Route is requesting:
things: (Component) => Relay.QL`
  query {
    allThings(variable: $variable) {
      ${Component.getFragment('things')},
    }
  }
`,
while my fatQuery is requesting for:
fragment on AddMockThing {
  allThings(variable: "${variable}", first: 100) {
    edges {
      node {
        id,
      },
    },
  },
  newThingEdge
}
Now you may say these aren't the same queries because of the extra first: 100 in the getFatQuery version, but if I don't use that, I get the error:
Error: Error: You supplied the 'edges' field on a connection named 'allThings', but you did not supply an argument necessary to do so. Use either the 'find', 'first', or 'last' argument.
On the other hand, if I add first: 100 to the Route query, I get the error: Error: Invalid root field 'allThings'; Relay only supports root fields with zero or one argument.
Stuck between a fatQuery and a hard place. Would appreciate the help!
You're getting a validation error because the Relay compiler is looking for a connection argument (first: X). You can disable this particular validation by adding the @relay(pattern: true) directive. This marks the fat query as a ‘pattern’ to match against, rather than as something concrete.
fragment on AddMockThing @relay(pattern: true) {
  allThings(variable: "${variable}") {
    edges {
      node {
        id,
      },
    },
  },
  newThingEdge
}
More info here: https://stackoverflow.com/a/34112045/802047
