Running cdk deploy after updating my Stack:
export function createTaskXXXX(stackScope: Construct, workflowContext: WorkflowContext) {
  const lambdaXXXX = new lambda.Function(stackScope, 'XXXXFunction', {
    runtime: Globals.LAMBDA_RUNTIME,
    memorySize: Globals.LAMBDA_MEMORY_MAX,
    code: lambda.Code.fromAsset(CDK_MODULE_ASSETS_PATH),
    handler: 'xxxx-handler.handler',
    timeout: Duration.minutes(Globals.LAMBDA_DURATION_2MIN),
    environment: {
      YYYY_ENV: (workflowContext.production) ? 'prod' : 'test',
      YYYY_A_LOCATION: `s3://${workflowContext.S3ImportDataBucket}/adata-workflow/split-input/`,
      YYYY_B_LOCATION: `s3://${workflowContext.S3ImportDataBucket}/bdata-workflow/split-input/` // <--- added
    }
  })
  lambdaXXXX.addToRolePolicy(new iam.PolicyStatement({
    effect: Effect.ALLOW,
    actions: ['s3:PutObject'],
    resources: [
      `arn:aws:s3:::${workflowContext.S3ImportDataBucket}/adata-workflow/split-input/*`,
      `arn:aws:s3:::${workflowContext.S3ImportDataBucket}/bdata-workflow/split-input/*` // <---- added
    ]
  }))
}
I realized that those changes are not reflected in stack.template.json:
...
"Runtime": "nodejs12.x",
"Environment": {
  "Variables": {
    "YYYY_ENV": "test",
    "YYYY_A_LOCATION": "s3://.../adata-workflow/split-input/"
  }
},
"MemorySize": 3008,
"Timeout": 120
}
...
I have cleaned cdk.out and tried deploy --force, but I never see any updates.
Is deleting the stack and redeploying the only remaining alternative, or am I missing something? I would expect cdk synth, at least, to generate different results.
(I also changed to CDK 1.65.0 on my local system to match package.json.)
Thanks.
EDIT: I cloned the project with git, ran npm install and cdk synth again, and finally saw the changes. I would rather not do this every time; any light on what could be blocking correct synth generation?
EDIT 2: After a diff between the bad old project and the fresh clone where synth worked, I realized that some of my .ts project files (for example cdk.ts, my App definition) also had replicas with .js and .d.ts extensions, such as cdk.js and cdk.d.ts. Could I have run some command by mistake that compiled the TypeScript? I will continue to investigate; thanks for all the answers.
Because CDK uses CloudFormation, it computes a ChangeSet to decide what to update. That is to say, if it doesn't think anything has changed, it won't change that resource.
This can, of course, be very annoying, as sometimes it thinks nothing has changed and doesn't update when there actually is a change. I find this most often with Layers and using some form of makefile to generate the zips for the layers: even though it makes a 'new' zip, whatever compression/hash is used to decide whether the zip has changed reports it as the same.
You can get around this by embedding a datetime in the description. The description is assigned at synth time (which is part of cdk deploy), so using the current now() of datetime forces CloudFormation to see a change on every deployment.
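A minimal sketch of that trick, reusing the question's function props (the description line is the only addition; the timestamp format is an arbitrary choice):

const lambdaXXXX = new lambda.Function(stackScope, 'XXXXFunction', {
  runtime: Globals.LAMBDA_RUNTIME,
  handler: 'xxxx-handler.handler',
  code: lambda.Code.fromAsset(CDK_MODULE_ASSETS_PATH),
  // A fresh timestamp on every synth makes the template differ,
  // so CloudFormation always registers a change for this function.
  description: `Deployed at ${new Date().toISOString()}`,
});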
You can also use cdk diff to see what it thinks the changes are.
And finally... always remember to save your file before deploying, as depending on your IDE the changes may not have been written to disk for the command line to see ;)
Looking at the code, I would expect it to update; I don't know why it doesn't.
As a workaround, you can comment out the Lambda part once and deploy, then uncomment it and deploy again, which recreates the Lambda.
This is how I do it. Works nicely so far. Basically you can do the following:
Push your lambda code as a zip file to an S3 bucket. The bucket must have versioning enabled.
The CDK code below will do the following:
Create a custom resource. It basically calls s3.listObjectVersions for my lambda zip file in S3. I grab the first returned value, which seems to be the most recent object version all the time (I cannot confirm this with the documentation though). I also create a role for the custom resource.
Create the lambda and specify the code as the zip file in s3 AND THE OBJECT VERSION RETURNED BY THE CUSTOM RESOURCE! That is the most important part.
Create a new lambda version.
Then the lambda's code updates when you deploy the CDK stack!
// Imports assumed (CDK v2 paths shown; adjust for CDK v1):
import { Duration, RemovalPolicy } from 'aws-cdk-lib';
import { Role, ServicePrincipal, ManagedPolicy } from 'aws-cdk-lib/aws-iam';
import { Function, Runtime, Code, Version } from 'aws-cdk-lib/aws-lambda';
import {
  AwsCustomResource, AwsCustomResourcePolicy, AwsSdkCall, PhysicalResourceId,
} from 'aws-cdk-lib/custom-resources';

const versionIdKey = 'Versions.0.VersionId';
const isLatestKey = 'Versions.0.IsLatest';
const now = new Date().toISOString();

const role = new Role(this, 'custom-resource-role', {
  assumedBy: new ServicePrincipal('lambda.amazonaws.com'),
});
role.addManagedPolicy(ManagedPolicy.fromAwsManagedPolicyName('AdministratorAccess')); // you can make this more specific

// I'm not 100% sure this gives you the most recent first, but it seems to be doing that every time for me. I can't find anything in the docs about it...
const awsSdkCall: AwsSdkCall = {
  action: 'listObjectVersions',
  parameters: {
    Bucket: buildOutputBucket.bucketName, // S3 bucket with the zip file containing the lambda code
    MaxKeys: 1,
    Prefix: LAMBDA_S3_KEY, // S3 key of the zip file containing the lambda code
  },
  physicalResourceId: PhysicalResourceId.of(buildOutputBucket.bucketName),
  region: 'us-east-1', // or whatever region
  service: 'S3',
  outputPaths: [versionIdKey, isLatestKey],
};

const customResourceName = 'get-object-version';
const customResourceId = `${customResourceName}-${now}`; // not sure if `now` is necessary...
const response = new AwsCustomResource(this, customResourceId, {
  functionName: customResourceName,
  installLatestAwsSdk: true,
  onCreate: awsSdkCall,
  onUpdate: awsSdkCall,
  policy: AwsCustomResourcePolicy.fromSdkCalls({ resources: AwsCustomResourcePolicy.ANY_RESOURCE }), // you can make this more specific
  resourceType: 'Custom::ListObjectVersions',
  role: role,
});

const fn = new Function(this, 'my-lambda', {
  functionName: 'my-lambda',
  description: `${response.getResponseField(versionIdKey)}-${now}`,
  runtime: Runtime.NODEJS_14_X,
  memorySize: 1024,
  timeout: Duration.seconds(5),
  handler: 'index.handler',
  code: Code.fromBucket(buildOutputBucket, LAMBDA_S3_KEY, response.getResponseField(versionIdKey)), // This is where the magic happens. You tell CDK to use a specific S3 object version when updating the lambda.
  currentVersionOptions: {
    removalPolicy: RemovalPolicy.DESTROY,
  },
});

new Version(this, `version-${now}`, { // not sure if `now` is necessary...
  lambda: fn,
  removalPolicy: RemovalPolicy.DESTROY,
});
Do note:
For this to work, you have to upload your lambda zip code to S3 before each cdk deploy. This can be the same code as before, but the S3 bucket versioning will create a new object version. I use CodePipeline to do this as part of additional automation.
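As an illustration, the pre-deploy upload could be a step like the following sketch, using the AWS SDK for JavaScript v3 (the bucket name, key, and artifact path are hypothetical stand-ins for buildOutputBucket, LAMBDA_S3_KEY, and your build output):

import { readFileSync } from 'fs';
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

// Re-uploading the same key to a versioned bucket creates a new object
// version, which the custom resource above picks up on the next deploy.
async function uploadLambdaZip(): Promise<void> {
  const s3 = new S3Client({ region: 'us-east-1' }); // or whatever region
  await s3.send(new PutObjectCommand({
    Bucket: 'my-build-output-bucket',    // hypothetical versioned bucket
    Key: 'lambda/code.zip',              // hypothetical LAMBDA_S3_KEY
    Body: readFileSync('dist/code.zip'), // hypothetical build artifact
  }));
}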
Related
I'm trying to create an infrastructure with AWS CDK. When creating a lambda, it forces me to specify the code that's going in it.
However, that'll be the responsibility of the release pipeline.
Is there a way to create a lambda without specifying the code?
No. code is a required prop in the CDK Lambda Function construct*. Use the InlineCode class as a minimal placeholder:
new lambda.Function(this, "Lambda", {
  code: new lambda.InlineCode(
    "exports.handler = async (event) => console.log(event)"
  ),
  runtime: lambda.Runtime.NODEJS_18_X,
  handler: "index.handler",
});
* It's also required for the CDK L1 CfnFunction. For what it's worth, Code is also a required input in the CreateFunction API and SDK commands.
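To illustrate the split of responsibilities, the release pipeline might later swap in the real code outside of CDK. A sketch using the AWS SDK for JavaScript v3 (the function name and artifact path are assumptions):

import { readFileSync } from 'fs';
import { LambdaClient, UpdateFunctionCodeCommand } from '@aws-sdk/client-lambda';

// Hypothetical pipeline step: replace the inline placeholder with the real
// build artifact after the stack has been deployed.
async function updateLambdaCode(): Promise<void> {
  const client = new LambdaClient({});
  await client.send(new UpdateFunctionCodeCommand({
    FunctionName: 'my-function',              // assumption: the deployed function's name
    ZipFile: readFileSync('dist/lambda.zip'), // assumption: the pipeline's artifact
  }));
}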
AWS and other sources consider explicitly specifying the AWS account and region for each stack a best practice. I'm trying to write a CI pipeline that will bootstrap my environments. However, I'm not seeing any straightforward way to retrieve a stack's explicit env values from here:
regions.forEach((region) =>
  new DbUpdateStack(app, `${stackBaseName}-prd-${region}`, {
    env: {
      account: prdAccount,
      region: region
    },
    environment_instance: 'prd',
    vpc_id: undefined,
  })
);
E.g., base-name-prd-us-east-1 knows the region and account as defined in the code, but how do I access this from the command line without doing something hacky?
I need to run cdk bootstrap with those values and I don't want to duplicate them.
The Cloud Assembly module can introspect an App's stack environments. Synth the app, then instantiate a CloudAssembly class by pointing at the cdk output directory:
import * as cx_api from '@aws-cdk/cx-api';

(() => {
  const cloudAssembly = new cx_api.CloudAssembly('cdk.out');
  const appEnvironments = cloudAssembly.stacks.map(stack => stack.environment);
  console.log(appEnvironments);
})();
Result:
[
  {
    account: '123456789012',
    region: 'us-east-1',
    name: 'aws://123456789012/us-east-1',
  },
];
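From there, a small script can feed each environment into cdk bootstrap so the values are never duplicated. A sketch, assuming cdk synth has already populated cdk.out:

import { execSync } from 'child_process';
import * as cx_api from '@aws-cdk/cx-api';

// Bootstrap every environment the app's stacks target.
const cloudAssembly = new cx_api.CloudAssembly('cdk.out');
for (const { account, region } of cloudAssembly.stacks.map((s) => s.environment)) {
  execSync(`npx cdk bootstrap aws://${account}/${region}`, { stdio: 'inherit' });
}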
After I create a CloudWatch event rule, I am trying to add a target to it, but I am unable to add an input transformation. Previously addTarget had props that allowed an input transformation, but it does not anymore.
codeBuildRule.addTarget(new SnsTopic(props.topic));
The AWS CDK page provides this solution, but I don't exactly understand what it says:
You can add additional targets, with optional input transformer using eventRule.addTarget(target[, input]). For example, we can add a SNS topic target which formats a human-readable message for the commit.
You should specify the message prop and use RuleTargetInput static methods. Some of these methods can use strings returned by EventField.fromPath():
// From a path
codeBuildRule.addTarget(new SnsTopic(props.topic, {
  message: events.RuleTargetInput.fromEventPath('$.detail')
}));

// Custom object
codeBuildRule.addTarget(new SnsTopic(props.topic, {
  message: RuleTargetInput.fromObject({
    foo: EventField.fromPath('$.detail.bar')
  })
}));
I had the same question trying to implement this tutorial in CDK: Tutorial: Set up a CloudWatch Events rule to receive email notifications for pipeline state changes
I found this helpful as well: Detect and react to changes in pipeline state with Amazon CloudWatch Events
NOTE: I could not get it to work using the Pipeline's class method onStateChange().
I ended up writing a Rule:
const topic = new Topic(this, 'topic', {
  topicName: 'codepipeline-notes-failure',
});

const description = `Generated by the CDK for stack: ${this.stackName}`;

new Rule(this, 'failed', {
  description: description,
  eventPattern: {
    detail: { state: ['FAILED'], pipeline: ['notes'] },
    detailType: ['CodePipeline Pipeline Execution State Change'],
    source: ['aws.codepipeline'],
  },
  targets: [
    new SnsTopic(topic, {
      message: RuleTargetInput.fromText(
        `The Pipeline '${EventField.fromPath('$.detail.pipeline')}' has ${EventField.fromPath(
          '$.detail.state',
        )}`,
      ),
    }),
  ],
});
After implementing, if you navigate to Amazon EventBridge -> Rules, select the rule, select the Target(s), and then click View Details, you will see the Target Details with the Input transformer & InputTemplate.
Input transformer:
{"InputPathsMap":{"detail-pipeline":"$.detail.pipeline","detail-state":"$.detail.state"},"InputTemplate":"\"The
Pipeline '<detail-pipeline>' has <detail-state>\""}
This works for CDK in Python: CodeBuild to SNS notifications.
sns_topic = sns.Topic(...)
codebuild_project = codebuild.Project(...)

sns_topic.grant_publish(codebuild_project)

codebuild_project.on_build_failed(
    'rule-on-failed',
    target=events_targets.SnsTopic(
        sns_topic,
        message=events.RuleTargetInput.from_multiline_text(
            f"""
            Name: {events.EventField.from_path('$.detail.project-name')}
            State: {events.EventField.from_path('$.detail.build-status')}
            Build: {events.EventField.from_path('$.detail.build-id')}
            Account: {events.EventField.from_path('$.account')}
            """
        )
    )
)
Credits to @pruthvi-raj's comment on an answer above.
I am using sw-precache and I understand that in order to edit the service-worker.js file I need to do this (as detailed in the service-worker.js file)
// This file should be overwritten as part of your build process.
// If you need to extend the behavior of the generated service worker, the best approach is to write
// additional code and include it using the importScripts option:
// https://github.com/GoogleChrome/sw-precache#importscripts-arraystring
but I do not know where to add the importScripts() code. Does it go in the service-worker.js file? Surely that gets overwritten on each project build.
Just include it like so:
importScripts: ['custom-offline-import.js']
Here is an example of a config including the importScripts option at the end:
var packageJson = require('../package.json');
var swPrecache = require('../lib/sw-precache.js');
var path = require('path');

function writeServiceWorkerFile(rootDir, handleFetch, callback) {
  var config = {
    cacheId: packageJson.name,
    runtimeCaching: [{
      // See https://github.com/GoogleChrome/sw-toolbox#methods
      urlPattern: /runtime-caching/,
      handler: 'cacheFirst'
    }],
    staticFileGlobs: [rootDir + '/**/*.{js,html,css,png,jpg,gif}'],
    stripPrefix: rootDir,
    importScripts: ['custom-offline-import.js']
  };

  swPrecache.write(path.join(rootDir, 'service-worker.js'), config, callback);
}
Hope I could help you!
Add it to the sw-precache-config.js file.
Good example here:
Rewrite URL offline when using a service worker
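For reference, a minimal sw-precache-config.js sketch (the glob patterns and file names are assumptions; importScripts is the relevant part):

module.exports = {
  staticFileGlobs: ['app/**/*.{js,html,css,png,jpg,gif}'], // assumption: your asset layout
  stripPrefix: 'app/',
  // Pulls your custom service worker code into the generated service-worker.js
  // on every build, so your additions survive regeneration.
  importScripts: ['custom-offline-import.js']
};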
I have a running Electron app and it is working great so far. For context, I need to run/open an external file, which is a Go binary that will do some background tasks.
Basically it will act as a backend, exposing an API that the Electron app will consume.
So far this is what I have gotten into:
I tried to open the file the "Node way" using child_process, but I failed to open even a sample txt file, probably due to path issues.
The Electron API exposes an open-file event, but it lacks documentation/examples and I don't know if it could be useful.
That's it.
How do I open an external file in Electron?
There are a couple of APIs you may want to study up on to see which helps you.
fs
The fs module allows you to open files for reading and writing directly.
var fs = require('fs');

fs.readFile(p, 'utf8', function (err, data) {
  if (err) return console.log(err);
  // data is the contents of the text file we just read
});
path
The path module allows you to build and parse paths in a platform agnostic way.
var path = require('path');
var p = path.join(__dirname, '..', 'game.config');
shell
The shell API is an Electron-only API that you can use to shell-execute a file at a given path, which will use the OS default application to open the file.
const {shell} = require('electron');
// Open a local file in the default app
shell.openItem('c:\\example.txt');
// Open a URL in the default way
shell.openExternal('https://github.com');
child_process
Assuming that your Go binary is an executable, you would use child_process.spawn to call it and communicate with it. This is a Node API.
var path = require('path');
var spawn = require('child_process').spawn;
var child = spawn(path.join(__dirname, '..', 'mygoap.exe'), ['game.config', '--debug']);
// attach events, etc.
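As a sketch of the "// attach events, etc." part, using standard Node child_process events (the log messages are illustrative):

// Listen for output from the Go process and react when it exits.
child.stdout.on('data', function (data) {
  console.log('go binary says: ' + data);
});
child.on('close', function (code) {
  console.log('go binary exited with code ' + code);
});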
addon
If your Go binary isn't an executable, you will need to make a native addon wrapper.
Maybe you are looking for this?
dialog.showOpenDialog — refer to: https://www.electronjs.org/docs/api/dialog
If using electron 13.1.0, you can do it like this:

const { dialog } = require('electron')

// In recent Electron versions showOpenDialog returns a Promise rather than taking a callback.
dialog.showOpenDialog({ properties: ['openFile', 'multiSelections'] })
  .then(function (result) {
    console.info(result.filePaths) // => the absolute paths of the selected files
  })
When the above code is triggered, you will see an "open file" dialog (with a different native style on Windows/macOS/Linux).
Electron allows the use of Node.js packages.
In other words, import Node packages as if you were in Node, e.g.:
var fs = require('fs');
To run the Go binary, you can make use of the child_process module. The documentation is thorough.
Edit: You have to solve the path differences. The open-file event is a client-side event, triggered by the window; it is not what you want here.
I was also totally struggling with this issue, and almost seven years later the documentation is still not clear about how this works on Linux.
On Linux it falls under the Windows treatment in this regard, which means you have to look into the process.argv global in the main process. The first value in the array is the path that fired the app; the second value, if one exists, holds the path that requested the app to be opened. For example, here is the output for my test case:
Array(2)
0: "/opt/Blueprint/b-test"
1: "/home/husayngonzalez/2022-01-20.md"
length: 2
So, when you're creating a new window, check the length of process.argv; if it is more than 1 (i.e. equal to 2), you have a path that was requested to be opened with your app.
This assumes you have your application packaged with the ability to process those files, and that you have also set the operating system to ask your application to open them.
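A minimal sketch of that check in the main process (hypothetical; assumes a packaged app where argv[1] carries the requested file, as in the output above):

const { app, BrowserWindow } = require('electron');

app.whenReady().then(function () {
  const win = new BrowserWindow({ width: 800, height: 600 });
  // In a packaged app on Linux/Windows, argv[0] is the app binary and
  // argv[1] (if present) is the file the OS asked us to open.
  if (process.argv.length > 1) {
    console.log('Asked to open: ' + process.argv[1]);
  }
  win.loadFile('index.html');
});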
I know this doesn't exactly meet your specification, but it does cleanly separate your golang binary and Electron application.
The way I have done it is to expose the Go binary as a web service, like this:
package main

import (
	"fmt"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	// TODO: put your call here instead of the Fprintf
	fmt.Fprintf(w, "HI there from Go Web Svc. %s", r.URL.Path[1:])
}

func main() {
	http.HandleFunc("/api/someMethod", handler)
	http.ListenAndServe(":8080", nil)
}
Then from Electron just make AJAX calls to the web service with a JavaScript function, like this (you could use jQuery, but I find this pure JS version works fine):
function get(url, responseType) {
  return new Promise(function (resolve, reject) {
    var request = new XMLHttpRequest();
    request.open('GET', url);
    request.responseType = responseType;
    request.onload = function () {
      if (request.status == 200) {
        resolve(request.response);
      } else {
        reject(Error(request.statusText));
      }
    };
    request.onerror = function () {
      reject(Error("Network Error"));
    };
    request.send();
  });
}
With that method you could do something like:

get('http://localhost:8080/api/someMethod', 'text')
  .then(function (x) {
    console.log(x);
  });