Running cdk deploy after updating my Stack:
export function createTaskXXXX (stackScope: Construct, workflowContext: WorkflowContext) {
    const lambdaXXXX = new lambda.Function(stackScope, 'XXXXFunction', {
        runtime: Globals.LAMBDA_RUNTIME,
        memorySize: Globals.LAMBDA_MEMORY_MAX,
        code: lambda.Code.fromAsset(CDK_MODULE_ASSETS_PATH),
        handler: 'xxxx-handler.handler',
        timeout: Duration.minutes(Globals.LAMBDA_DURATION_2MIN),
        environment: {
            YYYY_ENV: (workflowContext.production) ? 'prod' : 'test',
            YYYY_A_LOCATION: `s3://${workflowContext.S3ImportDataBucket}/adata-workflow/split-input/`,
            YYYY_B_LOCATION: `s3://${workflowContext.S3ImportDataBucket}/bdata-workflow/split-input/` // <--- added
        }
    })

    lambdaXXXX.addToRolePolicy(new iam.PolicyStatement({
        effect: Effect.ALLOW,
        actions: ['s3:PutObject'],
        resources: [
            `arn:aws:s3:::${workflowContext.S3ImportDataBucket}/adata-workflow/split-input/*`,
            `arn:aws:s3:::${workflowContext.S3ImportDataBucket}/bdata-workflow/split-input/*` // <---- added
        ]
    }))
I realized that those changes are not reflected in stack.template.json:
...
"Runtime": "nodejs12.x",
"Environment": {
"Variables": {
"YYYY_ENV": "test",
"YYYY_A_LOCATION": "s3://.../adata-workflow/split-input/"
}
},
"MemorySize": 3008,
"Timeout": 120
}
...
I have cleaned cdk.out and tried deploy --force, but I never see any updates.
Is deleting the stack and redeploying the only remaining alternative, or am I missing something? I would think that at least cdk synth should generate different results.
(I also changed to CDK 1.65.0 on my local system to match the package.json.)
Thanks.
EDITED: I did a fresh git clone of the project, ran npm install and cdk synth again, and finally saw the changes. I would rather not do this every time; any light on what could be blocking the correct synth output?
EDITED 2: After diffing the bad old project against the fresh clone where synth worked, I realized that some of my .ts project files (for example cdk.ts, my App definition) also had compiled replicas with .js and .d.ts extensions, such as cdk.js and cdk.d.ts. Could I have run some command by mistake that compiled the TypeScript? I will continue to investigate; thanks to all answers.
Because CDK uses CloudFormation, it computes a ChangeSet to decide what to update. That is to say, if it doesn't think anything has changed, it won't touch that resource.
This can, of course, be very annoying, as sometimes it decides nothing has changed and doesn't update when there actually is a change. I find this most often with Layers, when using some form of makefile to generate the zips for the layers: even though it makes a 'new' zip, whatever CDK uses to decide whether the zip has changed still reports it as the same, because of whatever compression/hash/etc. is involved.
You can get around this by putting a datetime in the resource's description. The description is assigned at synth (which is part of cdk deploy), so if you insert the current datetime there, the resource is considered changed on every deployment.
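For example, a minimal sketch of that trick (not part of the original answer; the layer name and asset path are illustrative):

const forceUpdateStamp = new Date().toISOString();

const myLayer = new lambda.LayerVersion(this, 'MyLayer', {
    code: lambda.Code.fromAsset('layers/my-layer.zip'),               // illustrative path
    description: `my layer, last synthesized at ${forceUpdateStamp}`, // changes on every synth, so the resource is always updated
});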
You can also use cdk diff to see what it thinks the changes are.
And finally... always remember to save your file before deployments as, depending on your IDE, it may not be available to the command line ;)
Looking at the code, I would expect it to update, but I can't tell why it doesn't.
One workaround is to comment out the Lambda part once and deploy, then uncomment it and deploy again, so that the Lambda is recreated.
This is how I do it. Works nicely so far. Basically you can do the following:
Push your lambda code as a zip file to an S3 bucket. The bucket must have versioning enabled.
The CDK code below will do the following:
Create a custom resource. It basically calls s3.listObjectVersions for my lambda zip file in S3. I grab the first returned value, which seems to be the most recent object version all the time (I cannot confirm this with the documentation though). I also create a role for the custom resource.
Create the lambda and specify the code as the zip file in s3 AND THE OBJECT VERSION RETURNED BY THE CUSTOM RESOURCE! That is the most important part.
Create a new lambda version.
Then the lambda's code updates when you deploy the CDK stack!
// Imports (assuming the aws-cdk-lib v2 module layout):
import { Duration, RemovalPolicy } from 'aws-cdk-lib';
import { Role, ServicePrincipal, ManagedPolicy } from 'aws-cdk-lib/aws-iam';
import { Code, Function, Runtime, Version } from 'aws-cdk-lib/aws-lambda';
import { AwsCustomResource, AwsCustomResourcePolicy, AwsSdkCall, PhysicalResourceId } from 'aws-cdk-lib/custom-resources';

const versionIdKey = 'Versions.0.VersionId';
const isLatestKey = 'Versions.0.IsLatest'
const now = new Date().toISOString();

const role = new Role(this, 'custom-resource-role', {
    assumedBy: new ServicePrincipal('lambda.amazonaws.com'),
});
role.addManagedPolicy(ManagedPolicy.fromAwsManagedPolicyName('AdministratorAccess')); // you can make this more specific

// I'm not 100% sure this gives you the most recent first, but it seems to be doing that every time for me. I can't find anything in the docs about it...
const awsSdkCall: AwsSdkCall = {
    action: "listObjectVersions",
    parameters: {
        Bucket: buildOutputBucket.bucketName, // s3 bucket with zip file containing lambda code.
        MaxKeys: 1,
        Prefix: LAMBDA_S3_KEY, // S3 key of zip file containing lambda code
    },
    physicalResourceId: PhysicalResourceId.of(buildOutputBucket.bucketName),
    region: 'us-east-1', // or whatever region
    service: "S3",
    outputPaths: [versionIdKey, isLatestKey]
};

const customResourceName = 'get-object-version'
const customResourceId = `${customResourceName}-${now}` // not sure if `now` is necessary...
const response = new AwsCustomResource(this, customResourceId, {
    functionName: customResourceName,
    installLatestAwsSdk: true,
    onCreate: awsSdkCall,
    onUpdate: awsSdkCall,
    policy: AwsCustomResourcePolicy.fromSdkCalls({resources: AwsCustomResourcePolicy.ANY_RESOURCE}), // you can make this more specific
    resourceType: "Custom::ListObjectVersions",
    role: role
})

const fn = new Function(this, 'my-lambda', {
    functionName: 'my-lambda',
    description: `${response.getResponseField(versionIdKey)}-${now}`,
    runtime: Runtime.NODEJS_14_X,
    memorySize: 1024,
    timeout: Duration.seconds(5),
    handler: 'index.handler',
    code: Code.fromBucket(buildOutputBucket, LAMBDA_S3_KEY, response.getResponseField(versionIdKey)), // This is where the magic happens. You tell CDK to use a specific S3 object version when updating the lambda.
    currentVersionOptions: {
        removalPolicy: RemovalPolicy.DESTROY,
    },
});

new Version(this, `version-${now}`, { // not sure if `now` is necessary...
    lambda: fn,
    removalPolicy: RemovalPolicy.DESTROY
})
Do note:
For this to work, you have to upload your lambda zip code to S3 before each cdk deploy. This can be the same code as before, but the S3 bucket versioning will create a new version. I use CodePipeline to do this as part of additional automation.
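If you are not running a pipeline, a small upload script is one way to do that step; here is a minimal sketch using the AWS SDK for JavaScript v3 (bucket name, key, and file path below are illustrative, not from the original answer):

import { readFileSync } from "fs";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

async function uploadLambdaZip() {
    const s3 = new S3Client({ region: "us-east-1" }); // or whatever region
    await s3.send(new PutObjectCommand({
        Bucket: "my-build-output-bucket",    // the versioned bucket referenced by the stack (illustrative name)
        Key: "lambda/code.zip",              // should match LAMBDA_S3_KEY (illustrative value)
        Body: readFileSync("dist/code.zip"), // the freshly built zip (illustrative path)
    }));
}

uploadLambdaZip().catch(err => { console.error(err); process.exit(1); });

Each run creates a new object version in the versioned bucket, which is what the custom resource above picks up.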
I have been able to successfully create an Electron app that links to the web app (using window.loadUrl).
I have been able to pass some command line params to the web app using window.webContents.send .... On the web app, the javascript receives the parameter and updates the screen.
I am using (2) by opening a file (right-click on it from the directory) through file association using process.argv[1]. This too works.
What I would like is that if I right-click on a second file, it should be passed to the same Electron instance. I am having some issues with this.
I have used the recommended approach for preventing multiple instances as below:
...
let myWindow = null

const gotTheLock = app.requestSingleInstanceLock()

if (!gotTheLock) {
    // I do not want to quit
    app.quit()
} else {
    app.on('second-instance', (event, commandLine, workingDirectory) => {
        ...
    })
}
In the above logic, when the program is unable to get the lock, the boilerplate code quits. That works fine, in the sense that a second window does not open. But in my case, I would like to take the process.argv[1] of the second invocation and pass it to the web program running in the existing instance.
I have not been successful in getting a handle to the browserWindow of the other instance. I would not want to work on multiple windows where each window would call another load of the web app. The current webapp has the ability to update multiple tabs in the same window based on different parameters. Basically, that is handled in the web app itself.
What could be the solution? Thanks
I got it working. It was staring me in the face and I did not see it. Adding a few logs helped. Something like this would be a solution.
...
let myWindow = null;
...

function createWindow() {
    ....
    return win;
}

function processParams(...) {
    ...
}

const gotTheLock = app.requestSingleInstanceLock()

if (!gotTheLock) {
    app.quit()
} else {
    app.on('second-instance', (event, commandLine, workingDirectory) => {
        //.. this is called the second time
        // process any commandLine params
        processParams(...)
        ...
    });

    app.whenReady()
        .then(_ => {
            myWindow = createWindow();
            // this is called the first time
            // process any argv params
            processParams(...);
        });
}
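For completeness, here is a minimal sketch of what the 'second-instance' handler could do with the path from the second invocation (the channel name and argv index are assumptions, not from the original answer): forward it to the web app in the existing window over IPC.

app.on('second-instance', (event, commandLine, workingDirectory) => {
    // commandLine is the full argv of the second invocation; index 1 mirrors process.argv[1] here (assumption)
    const filePath = commandLine[1];
    if (myWindow) {
        if (myWindow.isMinimized()) myWindow.restore();
        myWindow.focus();
        // 'open-file-param' is a hypothetical channel; the web app listens for it via ipcRenderer / contextBridge
        myWindow.webContents.send('open-file-param', filePath);
    }
});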
I could not find docs on Electron's contextBridge and what it does to the API arguments, but obviously something is done.
This is the gist of it:
// preload.js
contextBridge.exposeInMainWorld('fileCache', {
    put (file) {
        console.log(file) // ==> {}
    }
})
// web app
window.fileCache.put(new File([], 'foo.txt'))
How should I pass a File or any Blob or Buffer argument? (Making it a string is not an option for performance reasons: 20+ MB files...)
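For what it's worth (this is an addition, not part of the original post): contextBridge copies arguments across worlds with a structured-clone-style serialization, and a DOM File object does not survive that copy, which is why it arrives as {}. Plain binary payloads such as an ArrayBuffer generally do survive it, so one possible sketch is to hand over the file's bytes instead (names are illustrative):

// preload.js (sketch)
contextBridge.exposeInMainWorld('fileCache', {
    put (name, buffer) {
        // buffer arrives as an ArrayBuffer copy instead of an empty object
        console.log(name, buffer.byteLength)
    }
})

// web app (sketch)
const file = new File(['hello'], 'foo.txt')
file.arrayBuffer().then(buf => window.fileCache.put(file.name, buf))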
How would I create a folder outside the Electron exe?
I'm planning to build the app as a portable windows exe so I'm not sure how to get the path of the exe.
EDIT #1:
I have tried to use app.getPath("exe"); in the main process, but I'm getting a reference error whenever I run the app: ReferenceError: exe is not defined
It was indeed app.getPath("exe"), but it has to be implemented using the Electron event emitter pattern.
To get access to the value, I fetch the path in the main process.
ipcMain.on("CALL_PRINT_EXE_FILE_PATH", (event) => {
console.log("printing the file path of the exe");
const exePath = app.getPath("exe");
console.log(`exePath: ${exePath}`);
mainWindow.send("PRINT_EXE_FILE_PATH", exePath);
});
Then inside the renderer (I use React), I emit the event and also register an event listener.
const { ipcRenderer } = window.require("electron");
...
componentDidMount() {
    ipcRenderer.send("CALL_PRINT_EXE_FILE_PATH");
}

componentWillMount() {
    ipcRenderer.on("PRINT_EXE_FILE_PATH", this.handlePrintExePath);
}

componentWillUnmount() {
    ipcRenderer.removeListener("PRINT_EXE_FILE_PATH", this.handlePrintExePath);
}
...
handlePrintExePath(event, exePath) {
    console.log("printing the app exe in the render");
    console.log(`exeFilePath: ${exePath}`);
}
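To actually create the folder next to the exe, here is a minimal main-process sketch building on the path above (the folder name is illustrative, and the portable-exe note assumes you package with electron-builder):

const fs = require("fs");
const path = require("path");

// directory containing the exe; note that a portable electron-builder exe runs from a temp
// location, in which case electron-builder exposes process.env.PORTABLE_EXECUTABLE_DIR instead
const exeDir = process.env.PORTABLE_EXECUTABLE_DIR || path.dirname(app.getPath("exe"));

const targetDir = path.join(exeDir, "my-data"); // "my-data" is an illustrative folder name
fs.mkdirSync(targetDir, { recursive: true });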
I’d like to resurrect an abandoned add-on that was created for Firefox 11. This add-on controlled a device via a native dll. With the Firefox 32 addon-api and ctx, I don’t see how to:
1) insert lengthy custom init code into bootstrap.js or harness-options.json.
2) include additional binaries into the xpi archive
3) discover or determine the executable path for use of external code within my add-on
I have a copy of the original old xpi. I can see how they put the required dll in “.\plugins\5.9.6.0000\foobar.dll”. I can see they used the “install” function in .\bootstrap.js. I’ve included some of the original code from bootstrap.js here.
function registerPlugin()
{
    var profPath = Components.classes["@mozilla.org/file/directory_service;1"]
        .getService(Components.interfaces.nsIProperties)
        .get("ProfD", Components.interfaces.nsIFile).path;

    var wrk = Components.classes["@mozilla.org/windows-registry-key;1"]
        .createInstance(Components.interfaces.nsIWindowsRegKey);
    wrk.open(wrk.ROOT_KEY_CURRENT_USER, "SOFTWARE", wrk.ACCESS_ALL);

    if (!wrk.hasChild("MozillaPlugins"))
        wrk = wrk.createChild("MozillaPlugins", wrk.ACCESS_ALL);
    else
        wrk = wrk.openChild("MozillaPlugins", wrk.ACCESS_ALL);

    var t1 = wrk.createChild("blueglow@hardcorps.com", wrk.ACCESS_ALL);
    t1.writeStringValue("Description", "CanCan extension for BagMan");
    t1.writeStringValue("ProductName", "CanCan extension for BagMan");
    t1.writeStringValue("Vendor", "Hardcorps Inc.");
    t1.writeStringValue("Version", "5.9.6.0000");
    t1.writeStringValue("Path", profPath + "\\extensions\\blueglow@hardcorps.com\\plugins\\5.9.6.0000\\foobar.dll");

    var t2 = t1.createChild("MimeTypes", wrk.ACCESS_ALL);
    t2.createChild("application/blueglow-ff-plugin", wrk.ACCESS_ALL);

    t2.close();
    t1.close();
    wrk.close();

    Components.classes['@mozilla.org/appshell/window-mediator;1']
        .getService(Ci.nsIWindowMediator)
        .getMostRecentWindow('navigator:browser')
        .QueryInterface(Components.interfaces.nsIInterfaceRequestor)
        .getInterface(Components.interfaces.nsIWebNavigation)
        .QueryInterface(Components.interfaces.nsIDocShellTreeItem)
        .rootTreeItem
        .QueryInterface(Components.interfaces.nsIInterfaceRequestor)
        .getInterface(Components.interfaces.nsIDOMWindow)
        .navigator.plugins.refresh(false);
}

function install(data, reason)
{
    registerPlugin();
}
Drop the dll into your addon and then use it via js-ctypes.
Here's an example on Mac, they use .dylib instead of .dll:
source: https://github.com/vasi/firefox-dock-progress/tree/master
compiled source: https://addons.mozilla.org/en-US/firefox/files/browse/185970/file/chrome/content/DockProgress.jsm
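A minimal js-ctypes sketch of that idea (the dll path, exported symbol name, and signature are hypothetical, not taken from the linked example):

// bootstrap.js (sketch)
Components.utils.import("resource://gre/modules/ctypes.jsm");

var lib = ctypes.open("C:\\path\\to\\unpacked\\foobar.dll"); // hypothetical absolute path to the bundled dll

// declare a function exported by the dll; name and signature are illustrative only
var doSomething = lib.declare("DoSomething",      // exported symbol name (hypothetical)
                              ctypes.winapi_abi,  // stdcall calling convention on 32-bit Windows
                              ctypes.int32_t,     // return type
                              ctypes.int32_t);    // single int argument

var result = doSomething(42);
lib.close();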